When you input something into an LLM and regenerate the response a few times, it can come up with outputs of completely opposite (and equally incorrect) meaning.
Can you paste an example of this error?
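Not a real transcript, but here is a minimal sketch of the mechanism behind the claim: sampling at temperature > 0 draws a different token on each regeneration, so a near-tie between answers of opposite meaning can flip from run to run (toy logits, not a real model):

```python
import numpy as np

# Toy next-token distribution: the model is nearly split between "yes" and
# "no", so regenerating the same prompt can flip the answer's meaning.
tokens = ["yes", "no", "maybe"]
logits = np.array([2.0, 1.9, 0.5])

def sample(logits, temperature, rng):
    # Softmax with temperature, then one random draw from the distribution.
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

rng = np.random.default_rng(0)
for run in range(5):
    print(f"regeneration {run}: {tokens[sample(logits, 1.0, rng)]}")
# Typical output mixes "yes" and "no" across runs; as temperature -> 0 the
# draw becomes greedy and always returns the argmax.
```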
As you’re trying to make a link between [using neural nets to research plasma control for fusion] and [Biden is a Maoist], I have no reason to take you seriously.
Right. Like if I were talking to someone in total delirium and their responses were random and not a good fit for the question.
LLMs are not like that.
That’s the least plausible slippery-slope argument I have heard this month.
Like if I go to the Journal of Fusion Energy – https://link.springer.com/journal/10894 – the latest article is titled ‘Artificial Neural Network-Based Tomography Reconstruction of Plasma Radiation Distribution at GOLEM Tokamak’ and the 4th-latest is ‘Deep Learning Based Surrogate Model for Fast Soft X-ray (SXR) Tomography on HL-2A Tokamak’. I am sorry if that upsets you, but that’s the way the field is.
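For intuition about what that kind of reconstruction involves, here is a deliberately tiny sketch: detector chords measure line integrals of an unknown emissivity profile, and a surrogate fitted on synthetic data inverts them. Everything here (the geometry, the profiles, and the use of a linear map instead of a deep network) is my toy assumption, not the papers’ method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "tomography": n_chords detectors each record a line integral
# (here: a weighted sum) of an unknown emissivity profile on n_pix cells.
n_pix, n_chords = 64, 16
x = np.linspace(0.0, 1.0, n_pix)
centers = np.linspace(0.05, 0.95, n_chords)
G = np.exp(-0.5 * ((x[None, :] - centers[:, None]) / 0.08) ** 2)  # hypothetical chord geometry

def random_profile():
    # Synthetic emissivity: one random Gaussian blob, a stand-in for plasma radiation.
    c, w, a = rng.uniform(0.2, 0.8), rng.uniform(0.05, 0.2), rng.uniform(0.5, 2.0)
    return a * np.exp(-0.5 * ((x - c) / w) ** 2)

# Training set: profiles paired with their (noisy) chord measurements.
P = np.stack([random_profile() for _ in range(2000)])          # (N, n_pix)
M = P @ G.T + 0.01 * rng.standard_normal((len(P), n_chords))   # (N, n_chords)

# Surrogate = one ridge-regularised linear map, measurements -> profile.
lam = 1e-3
W = np.linalg.solve(M.T @ M + lam * np.eye(n_chords), M.T @ P)  # (n_chords, n_pix)

# Reconstruct an unseen profile from its measurements alone.
true = random_profile()
recon = (true @ G.T) @ W
print("relative error:", np.linalg.norm(recon - true) / np.linalg.norm(true))
```

The appeal of a learned surrogate is speed: once fitted, reconstruction is a single matrix multiply rather than an iterative inversion, which is what makes it usable as a fast diagnostic.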
That’s just cosmetic stuff. Why care about what words people use?
I’m mostly answering the question I was asked: what are some examples of technical research in the field?
How can we solve plasma control without AI? And why exclude that tool?
They can functionally understand a good portion of it.
e.g. I can input a meme plus the words “explain this meme” and it can output an explanation.
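A minimal sketch of that round trip, assuming the OpenAI Python client and a vision-capable model (the model name and image URL are placeholders; any multimodal LLM API works similarly):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "explain this meme"},
            {"type": "image_url", "image_url": {"url": "https://example.com/meme.png"}},
        ],
    }],
)
print(response.choices[0].message.content)  # the model's explanation of the meme
```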
This thread is funny. A few users are like “😡😡😡I hate everything about AI😡😡😡” and also “😲😲😲AI is used for technical research??? 😲😲😲 This is news to me! 😲😲😲”
Talk about no-investigation-no-right-to-speak. How can you have an opinion on a field without even knowing roughly what the field is?
https://www.frontiersin.org/research-topics/65016/deep-learning-for-industrial-applications
etc.: https://www.frontiersin.org/journals/artificial-intelligence/research-topics
https://www.nature.com/articles/s42256-024-00883-x
https://www.nature.com/articles/s42256-024-00882-y
https://engineering.princeton.edu/news/2024/02/21/engineers-use-ai-wrangle-fusion-power-grid
But it’s inherently impossible to “show” anything except inputs & outputs (including for a biological system).
What are you using the word “real” to mean, and is it aloof from the measurable behaviour of the system?
You seem to be using a mental model that there’s
A: the measurable inputs & outputs of the system
B: the “real understanding”, which is separate
How can you prove B exists if it’s not measurable? You say there is an “onus” to do so. I don’t agree that such an onus exists.
This is exactly the Chinese Room argument. ‘Understand’ is usually construed in a functionalist way.
Here is the latest edition of Nature Machine Intelligence, to give you a basic idea of the sort of research that constitutes the AI field: https://www.nature.com/natmachintell/current-issue
Topics in Frontiers In Artificial Intelligence: https://www.frontiersin.org/journals/artificial-intelligence/research-topics
Foundations and Trends in Machine Learning: https://www.nowpublishers.com/MAL
Yes. I mean, this is absolute basics.
Nuclear fusion
The fusion ones for example
No. You want to suddenly change the subject to language models?
You don’t?
Fusion’s close to a core need of humanity.
In the mid-20th century, people reliably got more petty bourgeois as they got older.
https://www.sciencedirect.com/science/article/pii/S0261379422000452