• Downcount@lemmy.world
    7 months ago

    The funny thing is, if you point out its mistakes, it often does better on subsequent attempts.

    Or it gets stuck in an endless loop, alternating between two different but wrong solutions.

    Me: This is my system, version x. I want to achieve this.

    ChatGPT: Here’s the solution.

    Me: But this only works with version y of the given system, not x.

    ChatGPT: <Apology> Try this.

    Me: This is using a method that never existed in the framework.

    ChatGPT: <Apology> <Gives first solution again>

    • mozz@mbin.grits.dev
      7 months ago
      1. “Oh, I see the problem. In order to correct (what went wrong with the last implementation), we can (complete code re-implementation which also doesn’t work)”
      2. Goto 1
    • UberMentch@lemmy.world
      7 months ago

      I used to have this issue more often as well. I’ve had good results recently by **not** pointing out mistakes in replies, but by going back to the message before GPT’s response and saying “do not include y.”
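
      In API terms, the difference between the two approaches can be sketched like this (a minimal sketch using the common OpenAI-style chat message format; the helper names are hypothetical, not part of any library):

      ```python
      # Two ways to correct a model's bad answer, using the common
      # OpenAI-style chat message format. Helper names are hypothetical.

      def correct_by_reply(messages, correction):
          """Append a correction after the model's flawed answer.
          The bad attempt stays in context and can anchor later replies."""
          return messages + [{"role": "user", "content": correction}]

      def correct_by_editing(messages, revised_prompt):
          """Drop the model's flawed answer and rewrite the prompt that
          produced it, so the bad attempt never enters the context."""
          trimmed = list(messages)
          if trimmed and trimmed[-1]["role"] == "assistant":
              trimmed = trimmed[:-1]
          trimmed[-1] = {"role": "user", "content": revised_prompt}
          return trimmed
      ```

      The edited history is then resent in full, so the next completion is generated as if the flawed attempt had never happened.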

      • brbposting@sh.itjust.works
        7 months ago

        Agreed. I send my first prompt, review the output, smack my head (“obviously it couldn’t read my mind on that missing requirement”), and go back and edit the first prompt as if I really had been a competent and clear communicator all along.

        It’s actually not a bad strategy, because the model can make some adept assumptions about requirements that seemed too obvious to spell out. Instead of typing out every requirement you can think of, you speech-to-text* a half-assed prompt and then know exactly what to fix a few seconds later.

        *[ad] free Ecco Dictate on iOS, TypingMind’s built-in dictation… anything using OpenAI Whisper, godly accuracy. btw TypingMind is great - stick in GPT-4o & Claude 3 Opus API keys and boom

      • FaceDeer@fedia.io
        7 months ago

        But only sometimes. Not often enough that I don’t still find it more useful than not.

    • BrianTheeBiscuiteer@lemmy.world
      7 months ago

      While it was explaining BTRFS, I’ve seen ChatGPT contradict itself in the middle of a paragraph. When I called it out, it apologized and then contradicted itself again with slightly different verbiage.