As someone whose employer is strongly pushing me to use AI assistants in coding: no. At best, it’s like being tied to a shitty intern who copies code off Stack Overflow and then blows me up on Slack when it magically doesn’t work. I still don’t understand why everyone is so excited about them. The only tasks they can handle competently are tasks I can easily do on my own (and with a lot less re-typing).
Sure, they’ll improve over the years, but Altman et al. are already complaining that they’re running out of training data. And even with an unlimited body of training data for future models, we’ll still end up with something about as intelligent as a kid who’s been locked in a windowless room with books their whole life and can either parrot opinions they’ve read or make shit up and hope you believe it. I think we’ll get a series of incompetent products with an increasing ability to make wrong shit up on the fly until the C-suite moves on to the next shiny bullshit.
That’s not to say we’re not capable of creating a generally-intelligent system on par with or exceeding human intelligence, but I really don’t think LLMs will allow for that.
tl;dr: there’s a lot of woo in the tech community that the Linux community isn’t as on board with.
There was a senior dev at my first job that we called Lord Voldemort, and he was the king of ungreppable variable names. Short, full of common characters, and none of them actually describing what they were doing. I swear he only used character sequences that appeared in C++ keywords, so searching for
fo
would invariably match every for statement in the file.

He also had hooks set up to notify him when anyone was in his area of the code, and you’d always get a two-hour phone call where he’d slowly wear you down and browbeat you into backing out your changes. Every time I pulled a ticket in his codebase I’d internally shudder. He was friends with and/or had dirt on the CTO, so he just remained in that role and made everyone’s life hell.
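For anyone lucky enough never to have seen this in the wild, here’s a made-up sketch of the style (my invention, not his actual code):

// every name here is a substring of a C++ keyword
int fo(int re) {          // "fo" hits every for; "re" hits return, register...
    int co = 0;           // "co" hits const, continue, const_cast...
    for (int in = 0; in < re; ++in)   // "in" hits int, inline, using...
        co += in;
    return co;
}

Every possible search term matches a keyword somewhere, so grep is useless and you end up reading the whole file by hand.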