Computer pioneer Alan Turing’s remarks in 1950 on the question “Can machines think?” were misquoted, misinterpreted and morphed into the so-called “Turing Test”. The modern version says that if you can’t tell the difference between communicating with a machine and communicating with a human, the machine is intelligent. What Turing actually said was that by the year 2000 people would be using words like “thinking” and “intelligent” to describe computers, because interacting with them would be so similar to interacting with people. Computer scientists do not sit down and say, “Alrighty, let’s put this new software to the Turing Test. By Grabthar’s Hammer, it passed! We’ve achieved Artificial Intelligence!”

  • DragonTypeWyvern@midwest.social · 4 days ago
    No, it doesn’t render the Turing Test invalid, because the premise of the test is not to prove that machines are intelligent but to point out that if you can’t tell the difference, you must either assume they are or risk becoming a monster.

    • General_Effort@lemmy.world · 4 days ago

      > or risk becoming a monster.

      Remind me. What became of Turing, a man who saved untold British lives during WW2?

    • CheeseNoodle@lemmy.world · 3 days ago

      Okay, but in casual conversation I probably couldn’t spot a really good LLM in a thread like this. On the back end, though, that LLM is completely incapable of learning or changing in any meaningful way. It’s not quite a Chinese room, as previously mentioned, but it’s still a fixed model that can’t learn or understand context; even with infinite context memory it could still only interact with that data within the confines of the original model.

      For example, I can train the model to understand a spoon and a fork, but it will never come up with the idea of a spork unless I retrain it to include the concept of sporks or tell it directly. Even after I tell it what a spork is, it can’t infer the properties of a spork from those of a fork or a spoon without additional leading prompts from me.
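      To make that concrete, here’s a toy sketch of the “fixed model” point; every feature name and number in it is made up for illustration. A frozen classifier trained only on spoons and forks can only force a new utensil into one of those two classes:

      ```python
      # Hypothetical features: (bowl_depth, tine_count), invented for this sketch.
      TRAINED_CLASSES = {
          "spoon": (1.0, 0.0),  # deep bowl, no tines
          "fork":  (0.0, 4.0),  # no bowl, four tines
      }

      def classify(features):
          """Nearest-centroid lookup over the frozen training classes.
          Inference never adds a class; it only reuses what training baked in."""
          return min(
              TRAINED_CLASSES,
              key=lambda name: sum(
                  (a - b) ** 2 for a, b in zip(features, TRAINED_CLASSES[name])
              ),
          )

      # A spork (shallow bowl AND tines) gets forced into an existing class;
      # the model cannot invent "spork" without retraining.
      print(classify((0.5, 3.0)))  # -> "fork", never "spork"
      ```

      Retraining with a third class is the only way the model acquires the concept.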

      • Blue_Morpho@lemmy.world · 3 days ago
        > even with infinite context memory

        Interestingly, infinite context memory is functionally identical to learning.

        It seems wildly different, but it’s the same as if you had already learned absolutely everything there is to know. There is nothing you could do or ask that the infinite context memory doesn’t already have a stored response for, ready to go.
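        A minimal sketch of that equivalence (the table entries are hypothetical stand-ins): pure retrieval over an input-to-reply table, which with infinite entries would be externally indistinguishable from having learned everything.

        ```python
        # Pure retrieval: no reasoning happens, yet with infinite entries
        # every query would hit a stored response ready to go.
        # These entries are hypothetical stand-ins.
        MEMORY = {
            "what is 2+2?": "4",
            "design a spork": "a shallow bowl with short tines at the tip",
            # ...imagine every possible prompt enumerated here...
        }

        def respond(prompt):
            return MEMORY.get(prompt.lower(), "<miss: not in memory>")

        print(respond("What is 2+2?"))    # stored answer, looks like understanding
        print(respond("Invent a knork"))  # a finite table misses; an infinite one never would
        ```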

        • CheeseNoodle@lemmy.world · 2 days ago

          > Interestingly, infinite context memory is functionally identical to learning.

          Except it’s still incapable of responding to anything not within that context memory. Today’s models have zero problem-solving skills; or, to put it another way, they’re incapable of producing novel solutions to new problems.

            • CheeseNoodle@lemmy.world · 2 days ago

              Hence the reason it’s not a real intelligence (yet). Even a goldfish can do problem solving without first having to be equipped with god-like levels of prior knowledge about the entire universe.

              • Blue_Morpho@lemmy.world · 2 days ago

                Current LLMs aren’t that stupid. They do have limited learning: give one a question, tell it where it’s wrong, and it will remember and change all its future replies based on the new information. You certainly can’t ask a goldfish to write a C program that blinks an LED on a microcontroller. I have used an LLM to get working programs for questions that were nowhere on the internet, so it didn’t just copy and paste something it found.
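                Mechanically, that limited learning is just context accumulation, sketched below; `generate` is a hypothetical stand-in for whatever LLM call you use, and the pin names are invented. The weights never change, but the correction rides along in every later request.

                ```python
                # Sketch: "learning" via context, not weights. `generate` is a
                # hypothetical stand-in, not any particular vendor's API.
                def generate(history):
                    """A real call would send the whole history to a model and
                    return its completion; here we just echo what it conditions on."""
                    return f"<completion conditioned on: {history[-1]['content']!r}>"

                history = [
                    {"role": "user", "content": "Write a C program that blinks an LED."},
                    {"role": "assistant", "content": "<first attempt, wrong pin>"},
                    # The correction lives only in the context window:
                    {"role": "user", "content": "Wrong pin: the LED is on PB5, not PB0."},
                ]

                print(generate(history))  # all later replies now "remember" PB5
                ```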

    • deranger@sh.itjust.works · 4 days ago

      The premise of the test is to determine if machines can think. The opening line of Turing’s paper is:

      > I propose to consider the question, ‘Can machines think?’

      I believe the Chinese room argument demonstrates that the Turing test is not valid for determining whether a machine has intelligence. The human in the Chinese room experiment is not thinking to generate their replies; they’re just following instructions, exactly like the computer. There is no comprehension of what’s being said.