• Nurse_Robot@lemmy.world · 1 month ago · +4 / -31

    I always try to replicate these results, because the majority of them are fake. For this one in particular I don’t get any AI results, which is interesting but inconclusive.

    • andyburke@fedia.io · 1 month ago · +28 / -3

      How would you expect to recreate them when the models are given random perturbations such that the results usually vary?
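
      To illustrate (a toy Python sketch, not Google’s actual pipeline): generated answers are typically sampled from a probability distribution at a nonzero temperature, so the same prompt can produce different output on different runs. The vocabulary and scores below are made up for the example.

      import math
      import random

      def sample_next_token(scores: dict, temperature: float = 0.8) -> str:
          """Softmax over scores/temperature, then draw one token at random."""
          scaled = {tok: s / temperature for tok, s in scores.items()}
          top = max(scaled.values())
          weights = [math.exp(v - top) for v in scaled.values()]
          return random.choices(list(scaled), weights=weights, k=1)[0]

      # Hypothetical next-token scores for a single fixed prompt.
      scores = {"glue": 2.0, "cheese": 1.8, "tomato": 1.5}

      for run in range(3):
          # Different runs can print different tokens for the same input.
          print(run, sample_next_token(scores))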

      • Nurse_Robot@lemmy.world · 1 month ago · +3 / -19

        The point here is that this is likely another fake image, meant to get the attention of people who quickly engage with everything anti-AI. Google does not generate an AI response to this query, which I only know because I attempted to recreate it. Instead of blindly taking everything you agree with at face value, it can behoove you to question it and test it out yourself.

        • andyburke@fedia.io · 1 month ago · +19 / -3

          Google is well known to do A/B testing, meaning you might not get a particular response (or you might get an entirely different set of results, generated by an algorithm they are testing) even if your neighbor searches for the same thing.
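
          As a rough sketch (toy Python, assuming a generic bucketing scheme rather than anything Google has documented): A/B testing typically assigns each user a stable experiment arm, so two people issuing the identical query can be served by different pipelines.

          import hashlib

          def experiment_arm(user_id: str, experiment: str, arms: int = 2) -> int:
              """Hash (experiment, user) to a stable bucket in [0, arms)."""
              digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
              return int(digest, 16) % arms

          # Hypothetical experiment name and users, for illustration only.
          for user in ("you", "your_neighbor"):
              print(user, "-> arm", experiment_arm(user, "ai_overview_rollout"))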

          So again I ask: how does your anecdotal evidence invalidate other anecdotal evidence? If your evidence isn’t anecdotal, I am very interested in your results.

          Otherwise, what you’re saying has the same value as the example, or less.