I’m so glad I’m not a tiny bug.
The biotech industry is already extremely nervous: https://www.axios.com/2024/11/15/rfk-jr-uncertainty-biotech-startups
They don’t like this at all; the hope is that RFK just focuses on other stuff.
Or… it could make investor money fly away and collapse the US biotech industry. Great.
What on Earth is the NIH thinking right now?
I mean, what if a moon landing skeptic took over NASA? It’s like that. The NIH literally produces this mountain of evidence and organizes this stuff, and… yeah.
Don’t jinx it.
Especially not if they somehow coincidentally get some government funding.
On the other hand, the track record of old social networks is not great.
And it’s reasonable to posit that Twitter is deep into the enshittification cycle.
The Facebook/Mastodon format is much better for individuals, no? And Reddit/Lemmy for niches, as long as they’re supplemented by a wiki or something.
And Tumblr. The way content gets spread organically, rather than with an algorithm, is actually super nice.
IMO Twitter’s original premise, of letting novel, original, but very short thoughts fly into the ether, has been so thoroughly corrupted that it can’t really come back. It’s entertaining and engaging, but an awful format for actually exchanging important information, like Discord.
This is called prompt engineering, and it’s been studied objectively and extensively. There are papers where many different personas are benchmarked, or even dynamically created like a genetic algorithm.
You’re still limited by the underlying LLM though, especially something as dry and hyper-sanitized as OpenAI’s API models.
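To make that concrete, here’s roughly what the persona-benchmarking loop in those papers boils down to. A minimal sketch against an OpenAI-style API; the personas, model name, and scoring function are all made-up placeholders:

```python
# Minimal persona-benchmarking sketch. Personas, model name, and the
# scoring function are placeholders, not from any specific paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONAS = [
    "You are a terse senior engineer. Answer in one paragraph.",
    "You are a patient teacher. Explain step by step.",
    "You are a skeptical reviewer. Point out flaws first.",
]

def ask(persona: str, question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

def score(answer: str) -> float:
    # Stand-in metric; the papers use real benchmarks or an LLM judge.
    return float(len(answer.split()))

question = "Why does naive quicksort degrade to O(n^2) on sorted input?"
best = max(PERSONAS, key=lambda p: score(ask(p, question)))
print("Best persona:", best)
```

The genetic-algorithm variants basically mutate and recombine the top-scoring personas each round instead of using a fixed list.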
…No. No one is saying that.
Her alignment depends on her depiction though. In the Harley Quinn series, for instance, she’s obviously not the bad guy, because she’s a main character and depicted as a sane, regular person. In classic BTAS she’s very mixed, but leans bad.
To add to this:
All LLMs absolutely have a sycophancy bias. It’s what the model is built to do. Even wildly unhinged local ones tend to ‘agree’ or hedge, generally speaking, if they have any instruction tuning.
Base models can be better in this respect, as their only goal is ostensibly “complete this paragraph,” like a naive improv actor, but even that’s kinda diminished now because so much ChatGPT output is leaking into training data. And users aren’t exposed to base models unless they’re local LLM nerds.
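To illustrate the difference, a rough sketch with transformers; both model names are placeholders, and this assumes you have the weights locally:

```python
# Base vs. instruct prompting. Both model names are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

# A base model only ever sees a raw prefix and continues it, improv-style.
tok = AutoTokenizer.from_pretrained("some-org/base-model")
model = AutoModelForCausalLM.from_pretrained("some-org/base-model")
ids = tok("My plan to quit my job and day-trade is", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=40)[0]))

# An instruct model gets the same text wrapped in a chat template, i.e.
# an assistant-pleases-user frame, which is where the agreeable tone
# comes from.
tok = AutoTokenizer.from_pretrained("some-org/instruct-model")
model = AutoModelForCausalLM.from_pretrained("some-org/instruct-model")
msgs = [{"role": "user", "content": "I plan to quit my job and day-trade. Good idea?"}]
ids = tok.apply_chat_template(msgs, add_generation_prompt=True, return_tensors="pt")
print(tok.decode(model.generate(ids, max_new_tokens=40)[0]))
```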
Just imagine if the UN had teeth for enforcement, at least for overwhelming votes like this. I feel like it’s one of the biggest oversights of the post-WWII order they tried to make.
Big countries, of course, would never allow that, but still.
As a fervent AI enthusiast, I disagree.
…I’d say it’s 97% hype and marketing.
It’s crazy how much FUD is flying around, and it legitimately buries good open research. It’s also crazy what these giant corporations are explicitly saying they’re going to do, and that anyone buys it. TSMC allegedly calling Sam Altman a ‘podcast bro’ is spot on, and I’d add “manipulative vampire” to that.
Talk to any long-time resident of LocalLLaMA and similar “local” AI communities who actually digs into this stuff, and you’ll find immense skepticism, unlike the crypto-style AI bros you find on LinkedIn, Twitter and such, who blot everything else out.
Almost all of Qwen 2.5 is Apache 2.0, SOTA for the size, and frankly obsoletes many bigger API models.
These days, there are amazing “middle-sized” models like Qwen 14B, InternLM 20B and Mistral/Codestral 22B that are such a massive step over the 7B-9B ones you can kinda run on CPU. And there are even 7Bs that support a really long context now.
IMO it’s worth reaching for >6GB of VRAM if LLM running is a consideration at all.
I am not a fan of CPU offloading because I like long context, 32K+. And that absolutely chugs if you even offload a layer or two.
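For the curious, this is the knob in question: a minimal llama-cpp-python sketch, where the GGUF path is a placeholder:

```python
# Minimal llama-cpp-python sketch; the GGUF path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./some-14b-instruct-q4_k_m.gguf",  # placeholder path
    n_gpu_layers=-1,  # -1 = every layer on the GPU; push even a couple
                      # to CPU and long-context generation starts to chug
    n_ctx=32768,      # the 32K context in question
)
out = llm("Explain why the KV cache grows with context length:", max_tokens=128)
print(out["choices"][0]["text"])
```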
For local LLM hosting, basically you want exllama, llama.cpp (and derivatives), and vllm, and ROCm support for all of them is just fine. It’s absolutely worth having a 24GB AMD card over a 16GB Nvidia one, if that’s the choice.
The big sticking point I’m not sure about is flash attention for exllama/vllm, but I believe the Triton branch of flash attention works fine with AMD GPUs now.
Basically the only thing that matters for LLM hosting is VRAM capacity. Hence AMD GPUs can be OK for LLM running, especially if a used 3090/P40 isn’t an option for you. It works fine, and the 7900/6700 are like the only sanely priced 24GB/16GB cards out there.
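A back-of-envelope sketch of why VRAM capacity dominates: quantized weights plus the KV cache have to fit. All the numbers below are hypothetical, and real frameworks add overhead on top:

```python
# Rough VRAM estimate: quantized weights + KV cache. Ballpark only;
# ignores activations, framework overhead, and quant-format details.
def vram_estimate_gb(params_b: float, weight_bits: float, n_layers: int,
                     kv_heads: int, head_dim: int, ctx: int,
                     kv_bits: int = 16) -> float:
    weights = params_b * 1e9 * weight_bits / 8                    # bytes
    kv = 2 * n_layers * kv_heads * head_dim * ctx * kv_bits / 8   # K + V
    return (weights + kv) / 1e9

# Hypothetical 22B model at ~4.5 bits/weight with a 32K context:
print(round(vram_estimate_gb(22, 4.5, 56, 8, 128, 32768), 1), "GB")  # ~19.9
```

That ~20GB is exactly the kind of number that fits on a 24GB card and not a 16GB one.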
I have a 3090, and it’s still a giant pain with Wayland, so much so that I use my AMD iGPU for display output, and Nvidia still somehow breaks things. Hence I just do all my gaming in Windows TBH.
CPU doesn’t matter for LLM running; cheap out with a 12600K, 5600, 5700X3D or whatever. And the single-CCD X3D chips are still king for gaming AFAIK.
I hate turn-based combat too, but it was super enjoyable in co-op. And it’s quite good for being turn-based.
It’s also real-time outside of combat, FYI.
For solo, I’d probably get the mod that automates your companions, and reduce the difficulty to your taste to compensate.
Still an understatement, it deserves it and more.
I don’t even like turn-based games. I don’t like most high fantasy. But holy moly, what a ride BG3 is.
I’m just gonna be pissed if their mixed support of modding (due to WotC) kills the modding community. If Skyrim and Rimworld can have a whole universe of fan content, BG3 should too.
Rimworld was another shining example. Its actual early access was a forum release; the Steam EA was polishing.
That being said I have a dead EA or two in my library. Starforge comes to mind…