I’m usually the one saying “AI is already as good as it’s gonna get, for a long while.”

This article, in contrast, quotes folks building the next generation of AI who are saying the same thing.

  • webghost0101@sopuli.xyz
    1 month ago

    Also an autistic person here.

    How are people supposed to tell this is an opinion?

    And please don't say “by reading the article.” Maybe some (like me) do, but it's well known that most people stop at the title.

    Grammatically speaking, it remains a direct statement: “they admit” == “appear to hint” == pure opinion (title: “AI can't be scaled further”).

    While I am not disagreeing with the premise per se, I have to perceive this as anti-AI propaganda at best, an attempt at misinformation at worst.

    On a different note, do you believe things can only be an issue if neurotypical people struggle with them? There is no good argument against communicating more clearly when sharing opinions with the world.

    • Voroxpete@sh.itjust.works
      1 month ago

      David and Amy are, openly, skeptics in the subject matters they write about. But it's important to understand that being a skeptic is not inherently the same thing as being unfairly biased against something.

      They cite their sources. They back up what they have to say. But they refuse to be charitable about how they approach their subjects, because it is their position that those subjects have not acted in a way that is deserving of charity.

      This is a problem with a lot of mainstream journalism. A grocery store CEO will say “It's not our fault, we have to raise prices,” and mainstream news outlets will repeat this statement uncritically, with no interrogation, because they are so desperate to avoid any appearance of bias. Donald Trump will say “Immigrants are eating dogs,” and news outlets will simply repeat this claim as something he said, without adding “This claim is obviously insane and only an idiot would have made it.” Sometimes being overly fair to your subject is being unfair to objective truth.

      Of course OpenAI et al. are never going to openly admit that they can't substantially improve their models any further. They are professional bullshitters; they didn't suddenly come down with a case of honesty now. But their recent statements, when read with both a critical eye and an understanding of the limitations of the technology, amount to a tacit admission that all the significant gains have already been made with this particular approach. That's the claim being made in this headline.