PM_ME_VINTAGE_30S [he/him]

Anarchist, autistic, engineer, and Certified Professional Life-Regretter. I mostly comment in bricks of text with footnotes, so don’t be alarmed if you get one.

You posted something really worrying, are you okay?

No, but I’m not at risk of self-harm. I’m just waiting on the good times now.

Alt account of PM_ME_VINTAGE_30S@lemmy.sdf.org. Also if you’re reading this, it means that you can totally get around the limitations for display names and bio length by editing the JSON of your exported profile directly. Lol.

  • 1 Post
  • 23 Comments
Joined 1 year ago
Cake day: July 9th, 2023

  • Disagree. The technology will never yield AGI as all it does is remix a huge field of data without even knowing what that data functionally says.

    We definitely don’t need AGI for AI technologies to be useful. AI, particularly reinforcement learning, is great for teaching robots to do complex tasks, for example. LLMs have a shocking ability, relative to other approaches (if limited compared to humans), to generalize to “nearby but different enough” tasks. And once they’re trained (and possibly quantized), LLMs and reinforcement learning policies don’t require that much more power to run than traditional algorithms. So IMO, the question should be “is it worthwhile to spend the energy to train X thing?” Unfortunately, the capitalists have been the ones answering that question, because they can do so at our expense.

    For a person without access to big computing resources (me lol), there’s also the fact that transfer learning is possible for both LLMs and reinforcement learning. The easiest way to explain transfer learning is this: imagine that I want to learn Engineering, Physics, Chemistry, and Computer Science. What should I learn first so that each subject is easy for me to pick up? My answer would be Math. So in AI speak, if we spend a ton of energy to train an AI to do math, we can then fine-tune agents to do Physics, Engineering, etc. instead of training them all from scratch. Fine-tuning can typically be done on “normal” computers with FOSS tools; there’s a sketch of the idea at the end of this comment.

    all it does is remix a huge field of data without even knowing what that data functionally says.

    IMO that can be an incredibly useful approach for solving problems whose dynamics are too complex to reasonably model, with the understanding that the obtained solution is a crude approximation to the underlying dynamics.

    Personally, I’m waiting for the bubble to burst so that AI can be just another tool in my engineering toolkit instead of the capitalists’ newest plaything.

    Sorry about the essay, but I really think that AI tools have a huge potential to make life better for us all, and an even greater potential for capitalists to destroy us all unless we understand these tools and use them against the powerful.
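
    As promised, here’s a minimal transfer-learning sketch in PyTorch. The “pretrained backbone” and the task sizes are made-up placeholders, not any particular published model; the point is just that freezing the expensive pretrained part and training a small new head is cheap.

    ```python
    # Hypothetical transfer-learning sketch: the shapes and the "math-pretrained"
    # backbone are illustrative assumptions, not a real released model.
    import torch
    import torch.nn as nn

    # Pretend this backbone was expensively pretrained (e.g., on "math")
    backbone = nn.Sequential(
        nn.Linear(128, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
    )

    # Freeze the pretrained weights so fine-tuning never touches them
    for param in backbone.parameters():
        param.requires_grad = False

    # Small task-specific head, e.g. a made-up "physics" task with 10 classes
    head = nn.Linear(256, 10)
    model = nn.Sequential(backbone, head)

    # Only the head gets gradient updates -- cheap enough for a normal computer
    optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))  # fake batch
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    ```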



  • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org to Memes@lemmy.ml · Math

    Sounds like fun! I’m going to bed soonish but I’m willing to answer questions about multivariable calculus probably when I wake up.

    When I took multivariable calculus, the two books that really helped me “get the picture” were Multivariable Calculus with Linear Algebra and Series by Trench and Kolman, and Calculus of Vector Functions by Williamson, Crowell, and Trotter. Both are on LibGen, and both are cheap because they’re old books. Their real strength is that both start with basic matrix algebra and stress the interplay between calculus and linear algebra throughout, unlike a lot of the books I looked at (and, frankly, the class I took), which tried to hide the underlying linear algebra.




  • It can use ChatGPT, I believe, or you could use a local GPT or one of several other LLM architectures.

    GPTs are trained to “fill in the next word” (more simply, they’re a “spicy autocomplete”), whereas BERTs are trained to “fill in the blanks”. So it might be worth looking into other LLM architectures if you’re not in the market for an autocomplete; there’s a quick demo of the difference at the end of this comment.

    Personally, I’m going to look into this. Also it would furnish a good excuse to learn about Docker and how SearXNG works.
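
    And here’s that quick demo of next-word prediction vs. fill-in-the-blank, using the Hugging Face transformers library. The model names are just the small defaults people commonly use for demos (an assumption, swap in whatever you like), and it assumes `pip install transformers torch`.

    ```python
    # "Spicy autocomplete" vs. "fill in the blanks", side by side.
    from transformers import pipeline

    # GPT-style (causal LM): predicts the next words from a prefix
    generator = pipeline("text-generation", model="gpt2")
    out = generator("A search engine is a tool that", max_new_tokens=10)
    print(out[0]["generated_text"])

    # BERT-style (masked LM): fills in a blank anywhere in the sentence
    filler = pipeline("fill-mask", model="bert-base-uncased")
    for guess in filler("A search engine is a [MASK] for finding websites."):
        print(f'{guess["token_str"]!r} (score {guess["score"]:.3f})')
    ```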


  • LLMs are not necessarily evil. This project seems to be free and open source, and it allows you to run everything locally. Obviously this doesn’t solve everything (e.g., the environmental impact of training, systemic bias learned from datasets, usually the weights themselves are derived from questionably collected datasets), but it seems like it’s worth keeping an eye on.

    Google using ai, everyone hates it

    Because Google has a long history of immediately doing the worst shit imaginable with technology. Google (and other corporations) must be viewed with extra suspicion compared to any other group or individual, because they have shown themselves to be the people most likely to abuse technology.

    If Google does literally anything, I assume it sucks by default, and it’s going to take a lot of proof to convince me otherwise for a given Google product. Same goes for Meta, Apple, and any other corporation.


  • If your signal looks like f(t) = K•u(t)e^at with u(t) = {1 if t≥0, 0 else}:

    • If Real(a) > 0, then your signal will eventually blow up.
    • If Real(a) < 0, then your signal will not blow up. In fact, your signal will have a maximum absolute value of |K|, and it will approach zero as time goes on.
    • If Real(a) = 0, it is either a complex sinusoid or a constant. In either case, it is bounded with maximum absolute value of |K|. It very much does not blow up.

    So e pops up all the time in stable systems and bounded signals because the function e^at solves the common differential equation dx/dt = a·x(t) with x(0) = 1 regardless of the value of a, and in particular regardless of whether the real part of a makes the solution blow up.
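
    If you want to see the three cases without doing any calculus, here’s a quick numerical check (NumPy assumed; the particular values of K and a are arbitrary):

    ```python
    # Numerically confirm: K*u(t)*e^(a t) blows up iff Real(a) > 0.
    import numpy as np

    t = np.linspace(0, 10, 1000)  # u(t) = 1 over this whole range
    K = 2.0
    for a in (0.5, -0.5, 3.0j):   # Real(a) > 0, < 0, and = 0
        x = K * np.exp(a * t)
        print(f"a = {a}: max |x(t)| = {np.abs(x).max():.3f}")
    # Expect: a huge number for a = 0.5, and exactly |K| = 2 for the other two
    ```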



    Well, I just tried #define int void in C and C++ before a “hello world” program. C++ catches it because main() has to return an int, but C doesn’t care. I think that’s because C historically treated main() as returning int by default; older books on C don’t even include the “int” in “int main()” because, pre-C99, it wasn’t strictly necessary (functions defaulted to returning int).

    #define int void replaces every occurrence of the int keyword with void, the type typically used for functions with no return value.



  • Reddit --> Lemmy

    Facebook --> fucking nothing lmao

    YouTube --> FreeTube + Invidious [1]

    Windows --> Debian [2] with KDE Plasma

    Word --> LyX

    Microsoft Office --> LibreOffice

    Built-in phone music player --> Odyssey [3]

    Firefox --> LibreWolf [4]

    Adobe Reader --> Okular + Librera on Android

    Default phone launcher --> KISS Launcher

    [1] I prefer FreeTube on computers where I have it installed, but one of my family’s jank 10-year-old work PCs can’t handle it, so I’ll typically watch videos in Invidious in LibreWolf on that computer.

    [2] I can’t recommend Debian for absolutely everyone since it prioritizes stability and predictability over new features and ease of use, but it’s great for most of my use cases. I typically recommend Linux Mint for complete beginners.

    [3] It handles extremely large music libraries (>100 GB of .mp3 files) without taking forever to reload every time I add a single new album.

    [4] Firefox is pretty good and FOSS, but LibreWolf comes with better defaults and I’m a lazy fucker.


  • Mandroid Echostar - Catchy prog metal with clean vocals

    Anaal Nathrakh - Grind/black metal with industrial influences

    The Arcane Order - Long-form melodic death metal

    Arcania - Kinda like a thrashier Gojira

    Rivers of Nihil - Proggy death metal

    Unfathomable Ruination - Brutal tech death

    Thantifaxath - Dissonant black metal

    We Lost the Sea - Post-metal. I need to give some context for this one:

    Departure Songs is inspired by failed, yet epic and honourable journeys or events throughout history where people have done extraordinary things for the greater good of those around them, and the progress of the human race itself. This is a celebration and a tribute. Each song has its own story and is a soundtrack to that story.

    This is our 3rd album and our first instrumental album. We’re exploring new ground and exploring ourselves in the past 2 and a bit years since Chris went on his own journey. It’s slightly bleak with shimmers of hope and layers of emotion. It’s a tribute and a catharsis of emotion and honesty.

    I.e., their vocalist died and this was their way of grieving. Incredibly deep cut IMO.

    Alpinist - “heavylowfastslowdark hardcore”



  • I’m not OP, and frankly I don’t really disagree with the characterization of ChatGPT as “fancy autocomplete”. But…

    I’m still in the process of reading this cover to cover, but Section 12.2 of Deep Learning: Foundations and Concepts by Bishop and Bishop explains how natural-language transformers work, and then has a short section about LLMs. All of this is in the context of a detailed explanation of the fundamentals of deep learning. The book cites the original papers it draws from, most of which are on ArXiv, and there’s a nice copy on Library Genesis. It requires some multivariable probability and statistics and an assload of linear algebra, reviews of which are included. (For a tiny taste of the core math, see the sketch at the end of this comment.)

    So obviously, when a CEO explains their product, they’re going to say anything to make the public accept it; their word should not be trusted. However, I think that when AI researchers talk simply about their work, they’re trying to shield people from the mathematical details. The fact of the matter is that behind even a basic AI is a shitload of complicated math.

    At least in my personal experience, people tend to get really aggressive when I try to explain math concepts to them. So researchers are probably assuming, based on their own experience, that you would be better served by some clumsy heuristic explanation.

    IMO it is super important for tech-inclined people interested in making the world a better place to learn the fundamentals and limitations of machine learning (what we typically call “AI”) and bring its benefits to the common people. Clearly, these technologies are a boon for the wealthy and powerful and, like always, have been used to fuck over everyone else.

    IMO, AI as it stands has inherent patterns that induce centralization of power: it requires massive datasets (especially for LLMs), and it requires mathematical fundamentals that only the wealthy can afford to stay in school long enough to learn. However, I still think we can leverage AI technologies for the common good, particularly by developing open-source alternatives, encouraging the use of open and ethically sourced datasets, and distributing the computing load so that people who can’t afford a fancy TPU can still use AI somehow.

    I wrote all this because I think people dismiss AI as “needlessly” complex and therefore bullshit. In my view, it is necessarily complex because of the transformative potential it has. If and only if you can spare the time, then I encourage you to learn about machine learning, particularly deep learning and LLMs.
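
    And as promised, a tiny taste of that math: the core operation inside the transformers that Bishop & Bishop describe is scaled dot-product attention. Here’s a minimal NumPy sketch of one attention head; the sequence length and embedding size are arbitrary illustrative choices.

    ```python
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V for one head.
    import numpy as np

    def attention(Q, K, V):
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)  # how much each query "matches" each key
        scores -= scores.max(axis=-1, keepdims=True)  # for numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        return weights @ V  # each output is a weighted mix of the values

    rng = np.random.default_rng(0)
    seq_len, d = 5, 8  # 5 tokens, 8-dimensional embeddings (arbitrary)
    Q, K, V = (rng.standard_normal((seq_len, d)) for _ in range(3))
    print(attention(Q, K, V).shape)  # (5, 8): one new vector per token
    ```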