The exam is tomorrow (today is another)
Ouch. Been there. Good luck on your exams!
Anarchist, autistic, engineer, and Certified Professional Life-Regretter. I mostly comment bricks of text with footnotes, so don’t be alarmed if you get one.
You posted something really worrying, are you okay?
No, but I’m not at risk of self-harm. I’m just waiting on the good times now.
Alt account of PM_ME_VINTAGE_30S@lemmy.sdf.org. Also if you’re reading this, it means that you can totally get around the limitations for display names and bio length by editing the JSON of your exported profile directly. Lol.
Sounds like fun! I’m going to bed soonish but I’m willing to answer questions about multivariable calculus probably when I wake up.
When I took multivariable calculus, the two books that really helped me “get the picture” were Multivariable Calculus with Linear Algebra and Series by Trench and Kolman, and Calculus of Vector Functions by Williamson, Crowell, and Trotter. Both are on LibGen and both are cheap because they’re old books. But their real strength lies in the fact that both books start with basic matrix algebra, and the interplay between calculus and linear algebra is stressed throughout, unlike a lot of the books I looked at (and frankly the class I took) which tried to hide the underlying linear algebra.
I’m autistic too and I had to relearn math as an adult. Now I know calculus and advanced mathematics.
I can go find some book recommendations, but when I was first learning I really got a lot out of watching The Organic Chemistry Tutor.
Linear algebra (ex: multiply the matrices A and B), multivariable calculus (example: find ∇F with F=[xy,yz,xz]^T ), or actual “multidimensional analysis” (example: define the norm of [1m,1m/s,1m/s^2 ] in a way that makes sense)? I can help with all three.
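For the multivariable-calculus example, one way to sanity-check an answer by hand is to compare it against a numerical derivative. Here’s a minimal sketch in Python (assuming NumPy is available, and reading ∇F as the Jacobian of the vector field F = [xy, yz, xz]^T — the function names are just for illustration):

```python
import numpy as np

def F(v):
    x, y, z = v
    return np.array([x * y, y * z, x * z])

def numerical_jacobian(f, v, h=1e-6):
    # central differences: one column of the Jacobian per input variable
    v = np.asarray(v, dtype=float)
    n = len(v)
    J = np.zeros((len(f(v)), n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(v + e) - f(v - e)) / (2 * h)
    return J

v = np.array([1.0, 2.0, 3.0])  # arbitrary test point
# analytic Jacobian of [xy, yz, xz]: each row is the gradient of one component,
# i.e. [[y, x, 0], [0, z, y], [z, 0, x]] evaluated at (1, 2, 3)
J_exact = np.array([[2.0, 1.0, 0.0],
                    [0.0, 3.0, 2.0],
                    [3.0, 0.0, 1.0]])
J_num = numerical_jacobian(F, v)
```

Each row of the Jacobian is the gradient of one component of F, so the analytic answer should match the finite-difference estimate at any test point.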
It can use ChatGPT I believe, or you could use a local GPT or several other LLM architectures.
GPTs are trained by “trying to fill in the next word”, or more simply could be described as a “spicy autocomplete”, whereas BERTs try to “fill in the blanks”. So it might be worth looking into other LLM architectures if you’re not in the market for an autocomplete.
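To make the “next word” vs “fill in the blanks” distinction concrete, here’s a toy sketch using plain word counts instead of neural networks (the corpus and function names are made up for illustration; real GPTs and BERTs learn these distributions with transformers, not counting):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# GPT-style objective: predict the NEXT word from what came before
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

def autocomplete(word):
    # most common continuation = "spicy autocomplete"
    return nxt[word].most_common(1)[0][0]

# BERT-style objective: fill in a BLANK using context on BOTH sides
pair = defaultdict(Counter)
for left, mid, right in zip(corpus, corpus[1:], corpus[2:]):
    pair[(left, right)][mid] += 1

def fill_blank(left, right):
    return pair[(left, right)].most_common(1)[0][0]
```

The two-sided conditioning in `fill_blank` is the key difference: it makes BERT-style models awkward as chatty autocompletes, but well suited to in-place tasks like classification and tagging.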
Personally, I’m going to look into this. Also it would furnish a good excuse to learn about Docker and how SearXNG works.
LLMs are not necessarily evil. This project seems to be free and open source, and it allows you to run everything locally. Obviously this doesn’t solve everything (e.g., the environmental impact of training, systemic bias learned from datasets, usually the weights themselves are derived from questionably collected datasets), but it seems like it’s worth keeping an eye on.
Google using AI, everyone hates it
Because Google has a long history of immediately doing the worst shit imaginable with technology. Google (and other corporations) must be viewed with extra suspicion compared to any other group or individual, because they are the people most likely to abuse technology.
If Google does literally anything, it sucks by default, and it’s going to take a lot more proof to convince me otherwise for any given Google product. Same goes for Meta, Apple, and any other corporation.
If your signal looks like f(t) = K•u(t)e^at with u(t) = {1 if t≥0, 0 else}:
So e pops up all the time in stable systems and bounded signals because the function e^at solves the common differential equation dx/dt = ax(t) with x(0) = 1 regardless of the value of a, and in particular regardless of whether the real part of a makes the solution decay or blow up.
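A quick numerical check of that claim (a hedged sketch in Python with NumPy; the specific values of a are arbitrary): the central-difference derivative of x(t) = e^(at) should match a·x(t) whether a is stable (negative) or not.

```python
import numpy as np

a_values = (-2.0, 0.5, 3.0)  # decaying, slowly growing, and blowing up
t = 1.0
h = 1e-6

def x(t, a):
    # candidate solution x(t) = e^(a t), which satisfies x(0) = 1
    return np.exp(a * t)

# |central-difference dx/dt  -  a*x(t)| for each a; should be ~0 in every case
residuals = [abs((x(t + h, a) - x(t - h, a)) / (2 * h) - a * x(t, a))
             for a in a_values]
```

The residuals stay tiny for every a, which is the point: e^(at) satisfies dx/dt = ax(t) whether the solution is bounded or not.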
If Justin/anyone at Cockos is reading this: please open-source REAPER. You really would be doing the audio community a huge service.
Well, I just tried #define int void in C and C++ before a “hello world” program. C++ catches it because main() must return int, but C doesn’t care. I think that’s because older C treats main() as returning int by default (“implicit int”); older books on C don’t even include the “int” in “int main()” because it wasn’t strictly necessary.
#define int void replaces all ints with type void, which is typically used to write functions with no return value.
Running #define ; anything yields “error: macro names must be identifiers” in both C and C++ in an online compiler, so I don’t think the compiler will let you redefine the semicolon. Which makes sense: macro names have to be identifiers, and ; is a punctuator, not an identifier.
Reddit --> Lemmy
Facebook --> fucking nothing lmao
YouTube --> FreeTube + Invidious [1]
Windows --> Debian [2] with KDE Plasma
Word --> LyX
Microsoft Office --> LibreOffice
Built-in phone music player --> Odyssey [3]
Firefox --> LibreWolf [4]
Adobe Reader --> Okular + Librera on Android
Default phone launcher --> KISS Launcher
[1] I prefer FreeTube on computers where I have it installed, but one of my family’s jank 10-year-old work PCs can’t handle it, so I’ll typically watch videos in Invidious in LibreWolf on that computer.
[2] I can’t recommend Debian for absolutely everyone since it prioritizes stability and predictability over new features and ease of use, but it’s great for most of my use cases. I typically recommend Linux Mint for complete beginners.
[3] It handles extremely large music libraries (>100 GB of .mp3 files) without constantly taking forever to reload when I add a single new album.
[4] Firefox is pretty good and FOSS, but LibreWolf comes with better defaults and I’m a lazy fucker.
Mandroid Echostar - Catchy prog metal with clean vocals
Anaal Nathrakh - Grind/black metal with industrial influences
The Arcane Order - Long-form melodic death metal
Arcania - Kinda like a thrashier Gojira
Rivers of Nihil - Proggy death metal
Unfathomable Ruination - Brutal tech death
Thantifaxath - Dissonant black metal
We Lost the Sea - Post-metal. I need to give some context for this one:
Departure Songs is inspired by failed, yet epic and honourable journeys or events throughout history where people have done extraordinary things for the greater good of those around them, and the progress of the human race itself. This is a celebration and a tribute. Each song has its own story and is a soundtrack to that story.
This is our 3rd album and our first instrumental album. We’re exploring new ground and exploring ourselves in the past 2 and a bit years since Chris went on his own journey. It’s slightly bleak with shimmers of hope and layers of emotion. It’s a tribute and a catharsis of emotion and honesty.
I.e., their vocalist died and this was their way of grieving. Incredibly deep cut IMO.
Alpinist - “heavylowfastslowdark hardcore”
Theoretically this exists: https://github.com/Nikilites/nuzu
Also, I managed to install the Flatpak of Yuzu off FlatHub about a half hour ago.
Yeah, it would be nice if they were accepting donations.
Oh whoops I didn’t know. Yeah then just pirate that shit.
If you want to pay for it and can afford to do so, buy it and pay for it. Otherwise, pirate it and pay for it when you can afford to do so. Easy peasy. Publisher gets paid in either case.
What are you doing in assembly?
Textbooks compile information about a subject into one cohesive whole for study. They’re super useful, even though they’re typically too expensive. Library Genesis is great for obtaining textbooks you can’t afford to purchase.
I’m not OP, and frankly I don’t really disagree with the characterization of ChatGPT as “fancy autocomplete”. But…
I’m still in the process of reading it cover-to-cover, but Section 12.2 of Deep Learning: Foundations and Concepts by Bishop and Bishop explains how natural language transformers work, and then has a short section about LLMs. All of this is in the context of a detailed explanation of the fundamentals of deep learning. The book cites the original papers from which it is derived, most of which are on arXiv. There’s a nice copy on Library Genesis. It requires some multivariable probability and statistics, and an assload of linear algebra, reviews of which are included.
So obviously when the CEO explains their product they’re going to say anything to make the public accept it. Therefore, their word should not be trusted. However, I think that when AI researchers talk simply about their work, they’re trying to shield people from the mathematical details. Fact of the matter is that behind even a basic AI is a shitload of complicated math.
At least from personal experience, people tend to get really aggressive when I try to explain math concepts to them. So they’re probably assuming based on their experience that you would be better served by some clumsy heuristic explanation.
IMO it is super important for tech-inclined people interested in making the world a better place to learn the fundamentals and limitations of machine learning (what we typically call “AI”) and bring their benefits to the common people. Clearly, these technologies are a boon for the wealthy and powerful, and like always, have been used to fuck over everyone else.
IMO, as it stands, AI as a technology has inherent patterns that induce centralization of power: it requires massive datasets (especially for LLMs), and it requires mathematical fundamentals that only those wealthy enough to stay in school long enough can afford to learn. However, I still think we can leverage AI technologies for the common good, particularly by developing open-source alternatives, encouraging the use of open and ethically sourced datasets, and distributing the computing load so that people who can’t afford a fancy TPU can still use AI somehow.
I wrote all this because I think people dismiss AI as “needlessly” complex and therefore bullshit. In my view, it is necessarily complex because of the transformative potential it has. If (and only if) you can spare the time, I encourage you to learn about machine learning, particularly deep learning and LLMs.
We definitely don’t need AGI for AI technologies to be useful. AI, particularly reinforcement learning, is great for teaching robots to do complex tasks, for example. LLMs have a shocking ability, relative to other approaches (if limited compared to humans), to generalize to “nearby but different enough” tasks. And once they’re trained (and possibly quantized), LLMs and reinforcement learning policies don’t require that much more power to run than traditional algorithms. So IMO the question should be “is it worthwhile to spend the energy to train X thing?” Unfortunately, the capitalists have been the ones answering that question, because they can do so at our expense.
For a person without access to big computing resources (me lol), there’s also the fact that transfer learning is possible for both LLMs and reinforcement learning. The easiest way to explain transfer learning is this: imagine that I want to learn Engineering, Physics, Chemistry, and Computer Science. What should I learn first so that each subject is easy for me to pick up? My answer would be Math. So in AI speak, if we spend a ton of energy to train an AI to do math and then fine-tune agents to do Physics, Engineering, etc., we can avoid training all the agents from scratch. Fine-tuning can typically be done on “normal” computers with FOSS tools.
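Here’s a toy numerical sketch of that idea (all names and numbers are made up, and real transfer learning fine-tunes neural networks, not linear regressions): pretrain weights on task A, then use them as the starting point for a few gradient steps on a related task B, and compare against starting from scratch.

```python
import numpy as np

rng = np.random.default_rng(0)

# "pretraining": task A is a linear regression with known weights
X = rng.normal(size=(200, 5))
w_true_A = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y_A = X @ w_true_A
w_base, *_ = np.linalg.lstsq(X, y_A, rcond=None)  # pretrained weights

# task B is related to task A: similar but rescaled/shifted weights
w_true_B = 1.5 * w_true_A + np.array([0.1, 0.0, 0.0, 0.2, 0.0])
y_B = X @ w_true_B

def finetune(w0, steps=20, lr=0.01):
    # a few gradient steps on task B, starting from w0
    w = w0.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y_B) / len(X)
        w -= lr * grad
    return w

def loss(w):
    return float(np.mean((X @ w - y_B) ** 2))

w_from_scratch = finetune(np.zeros(5))  # same step budget, zero init
w_transferred = finetune(w_base)        # same step budget, pretrained init
```

With the same small number of fine-tuning steps, starting from the pretrained weights lands much closer to task B’s solution than starting from zeros; that’s the whole point, since the expensive part of the training gets amortized across tasks.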
IMO that can be an incredibly useful approach for solving problems whose dynamics are too complex to reasonably model, with the understanding that the obtained solution is a crude approximation to the underlying dynamics.
IMO I’m waiting for the bubble to burst so that AI can be just another tool in my engineering toolkit instead of the capitalists’ newest plaything.
Sorry about the essay, but I really think AI tools have a huge potential to make life better for us all, and obviously a much greater potential for capitalists to destroy us all if we don’t understand these tools and turn them against the powerful.