• Deceptichum@sh.itjust.works · 1 month ago

    What an oddly written article.

    “Additional evidence from the laptop indicates that he used extremely specific and explicit prompts to create these images. He likewise used specific ‘negative’ prompts—that is, prompts that direct the GenAI model on what not to include in generated content—to avoid creating images that depict adults.”

    They make it sound like the prompts are as important as, or even more important than, the 13,000 images…

    • ricecake@sh.itjust.works · 1 month ago

      In many ways they are. The image generated from a prompt isn’t unique and is actually semi-random; it’s not entirely in the user’s control. The person could argue, “I described what I like, but I wasn’t asking it for children, and I didn’t think they were fake images of children,” and based purely on the image it could be difficult to argue that the image is not only “child-like” but actually depicts a child.

      The prompt, however, shows in unambiguous terms exactly what the user was asking for, and the negative prompt rules out any claim that they thought they were getting depictions of adults.
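
      For anyone who hasn’t used these tools: the negative prompt is literally a second text field passed to the model alongside the main prompt, so both are things the user typed deliberately. A minimal sketch with Hugging Face diffusers (benign subject; the model ID and parameters here are illustrative, not anything from the case):

      ```python
      # Sketch only: shows how a prompt and a negative prompt are passed to a
      # typical text-to-image pipeline. The subject is deliberately benign and
      # the checkpoint is just a commonly used public model.
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
      ).to("cuda")

      image = pipe(
          prompt="a watercolor painting of a mountain lake at sunrise",
          # The negative prompt tells the model what NOT to include.
          negative_prompt="buildings, people, text, watermark",
          num_inference_steps=30,
          guidance_scale=7.5,
      ).images[0]

      image.save("lake.png")
      ```

      Most front ends also save both strings alongside the output (in image metadata or history files), which is the kind of artifact the article describes being recovered from the laptop.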

      • PirateJesus@lemmy.today · 1 month ago

        And also it’s an AI.

        13k images before AI involved a human with Photoshop or a child doing fucked up shit.

        13k images after AI is just forgetting to turn off the CSAM auto-generate button.

    • Lowlee Kun@feddit.de · 1 month ago

      Having an AI generate 13,000 images does not even take 24 hours (depending on hardware and settings ofc).
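
      Rough back-of-envelope, assuming a hypothetical ~6 seconds per image on a single consumer GPU (that figure is an assumption, not something from the article or this thread):

      ```python
      # Back-of-envelope throughput estimate. The per-image time is a
      # hypothetical single-GPU figure, not taken from the article.
      seconds_per_image = 6
      total_images = 13_000

      hours = total_images * seconds_per_image / 3600
      print(f"~{hours:.1f} hours")  # ~21.7 hours, i.e. under a day
      ```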