• pop@lemmy.ml
    link
    fedilink
    English
    arrow-up
    0
    ·
    3 months ago

    Because these posts are nothing but the model making up something believable to the user. This “prompt engineering” is like asking a parrot who’s learned quite a lot of words (but not their meaning), and then the self-proclaimed “pet whisperer” asks some random questions and the parrot, by coincidence makes up something cohesive. And he’s like “I made the parrot spill the beans.”

    • sc_griffith@awful.systems
      link
      fedilink
      English
      arrow-up
      1
      ·
      3 months ago

      if it produces the same text as its response in multiple instances I think we can safely say it’s the actual prompt