I’m usually the one saying “AI is already as good as it’s gonna get, for a long while.”

This article, in contrast, quotes folks building the next generation of AI who are saying the same thing.

  • Lvxferre@mander.xyz · 1 month ago

    I believe that the current LLM paradigm is a technological dead end. We might see a few additional applications pop up in the near future, but they’ll be only a tiny fraction of what was promised.

    My bet is that they’ll get superseded by models with hard-coded logic. Just enough to be able to correctly output “if X and Y are true/false, then Z is false”, without fine-tuning or other band-aid solutions.

      • Lvxferre@mander.xyz · 1 month ago

        If you’re referring to symbolic AI, I don’t think that the AI scene will turn 180° and ditch NN-based approaches. Instead, what I predict is that we’ll see hybrids - where a symbolic model works as the “core” of the AI, handling the logic, while a neural network handles the input/output.
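        A toy sketch of that division of labour might look like the following. To be clear, this is a made-up illustration of the idea, not any real system: the "neural" front end is stubbed out with a keyword lookup, and the hard-coded rule is the "if X and Y are true, then Z is false" example from above.

```python
# Toy sketch of a neuro-symbolic hybrid: a hard-coded symbolic core
# handles the logic, while a stand-in "neural" front end maps raw
# input to facts. All names here are hypothetical.

def extract_facts(text):
    """Stand-in for the neural front end: maps input text to truth values.
    A real hybrid would use an NN here; this is just a keyword lookup."""
    facts = {}
    for token in text.lower().split():
        if token in ("x", "y", "z"):
            facts[token] = True
        elif token.startswith("not-"):
            facts[token[4:]] = False
    return facts

def symbolic_core(facts):
    """Hard-coded rule: if X and Y are both true, then Z is false.
    The inference is guaranteed by construction - no fine-tuning."""
    derived = dict(facts)
    if derived.get("x") and derived.get("y"):
        derived["z"] = False
    return derived

result = symbolic_core(extract_facts("x y"))
print(result)  # {'x': True, 'y': True, 'z': False}
```

        The point of the split is that the logical step is exact and auditable, while the fuzzy mapping from messy input to symbols stays with the network.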

      • MajorHavoc@programming.dev (OP) · 1 month ago (edited)

        Unlikely, but there’s some precedent.

        We’ve seen this pattern play out in video games a bunch of times.

        Revolutionary new way to do things. It’s cool, but not… you know… fun.

        So we give up on it as a dead end and go back to the old ways for a while.

        Then somebody figures out how to put bumpers (usually hard-coded) on the revolutionary new way, such that it stays fun.

        Now the revolutionary new way is the new gold standard and default approach.

        For other industries, replace “fun” above with the correct goal for that industry. “Profitable” is one that the AI hucksters are being careful not to say… but “honest”, “correct”, and “safe” also come to mind.

        We are right before the bit where we all decide it was a bad idea.

        Which comes before the bit where we figure out that hard-coding the bumpers can get us where we wanted to go, after a lot of work by really smart, well-paid humans.

        I’ve seen industries skip the “all decide it was a bad idea” phase and go straight to the “hard work by humans to make this fulfill the available promise” phase, but we don’t actually look on track to do that today.

        Many current investors are convinced that their clever talking puppet is going to do the hard work of engineering the next generation of talking puppet.

        I have some faith that we can reach that milestone. I’m familiar enough with the current generation of talking puppet to confidently declare that this won’t be the time it happens.

        My incentive in sharing all this is that I like over half of you reading this, so I figure I can give some of you a shot at not falling for this particular “investment phase”, which is, in practical terms, essentially a con.