PM_ME_VINTAGE_30S [he/him]

Anarchist, autistic, engineer, and Certified Professional Life-Regretter. If you got a brick of text, don’t be alarmed; that’s normal.

No, I’m not interested in voting for your candidate.

  • 1 Post
  • 44 Comments
Joined 1 year ago
cake
Cake day: July 9th, 2023

help-circle

  • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.orgtoMemes@lemmy.mlAI bros
    link
    fedilink
    English
    arrow-up
    4
    ·
    edit-2
    2 months ago

    “Gradient descent” ≈ on a “hilly” (mathematical) surface, try to find the lowest point by finding the lowest point near an initial guess. “Gradient” is basically the steepness, or rate that the thing you’re trying to optimize changes as you move through “space”. The gradient tells you mathematically which direction you need to go to reach the bottom. “Descent” means “try to find the minimum”.

    I’m glossing over a lot of details, particularly what a “surface” actually means in the high dimensional spaces that AI uses, but a lot of problems in mathematical optimization are solved like this. And one of the steps in training an AI agent is to do an optimization, which often does use a gradient descent algorithm. That being said, not every process that uses gradient descent is necessarily AI or even machine learning. I’m actually taking a course this semester where a bunch of my professor’s research is in optimization algorithms that don’t use a gradient descent!


  • They created a good product so people used it and there were no alternatives when it got shit.

    They created an inherently centralizing implementation of a video sharing platform. Even if it was done with good intentions (which it wasn’t, it was some capitalist’s hustle, and its social importance is a side effect), we should basically always condemn centralizing implementations of a given technology because they reinforce existing power structures regardless of the intentions of their creators.

    It’s their fault because they’re a corporation that does what corporations do. Even when corporations try to do right by the world (which is an extremely generous appraisal of YouTube’s existence), they still manage to create centralizing technologies that ultimately serve to reinforce their existing power, because that’s all they can do. Otherwise, they would have set themselves up as a non-profit or some other type of organization. I refuse to accept the notion of a good corporation.

    There’s no lock in. They don’t force you off the platform if you post elsewhere (like twitch did).

    That’s a good point, but while there isn’t a de jure lock-in for creators, there is a de facto lock-in that prevents them from migrating elsewhere. Namely, that YouTube is a centralized, proprietary service, which can’t be accessed from other services.













  • Russian bots aren’t all that bad

    Yes, in two senses:

    1. I don’t lose sleep at night knowing that these bots exist (or those of any other government). They shouldn’t exist for the simple reason that public institutions shouldn’t be in the business of deceiving people, but unfortunately, deceiving the public is a bunch of what the State actually fucking does “for” “us”. I especially don’t think the Russian government cares to run bots/trolls on our little corner of the internet when bigger targets exist.
    2. Vacuously, I don’t disagree with literally everything that the Russian bots say because they can be found saying just about anything.

    I cannot stress enough that I do NOT approve of state-sponsored botting or trolling of public spaces in general. However, when you see Pro-Russian or Pro-whatever opinions on the Internet, you are probably reading the words of a “useful idiot” or non-State troll.

    This reality is a lot scarier than if the opinions were all just from some Russian troll farm, because now we have to interrogate the reality that these people have different and complex reasons for why they ended up with those opinions. It means that the task of persuasion is a lot more complicated than just shielding people from bots and trolls.


  • if you find yourself on the same side as Russian bots and don’t find it so disturbing that you immediately change your position

    As mentioned by another commenter, the actual strategy of the real Russian government is to sow division by advocating a bunch of positions, so a particular position being presented by Russian trolls absolutely does not warrant immediately changing my position. Your position is not special in that regard.

    But more generally, I’m not going to change my position on anything solely because someone awful agrees with it.

    And even more generally, I don’t care about unifying people under the political agenda of any existing government or political party. I want to see people unified about organizing themselves. To that end, letting one of the existing political parties, including yours, dictate our political will to us goes against the goal of people organizing themselves.

    you can’t claim any high moral ground use that to lecture other people.

    I do not claim nor need the moral high ground to present my opinions. Same goes for everyone else.




  • Can AI systems have a religious or political bias? Yes, they can and do learn biases in their datasets, and this is probably the toughest problem to solve in AI research because it’s a social rather than technical problem.

    Can an AI agent be programmed to give responses with religious or political beliefs? Sure, just drop it into the system prompt.

    Can an AI agent have religious or political beliefs like a human? No, because AI agents as they stand are a comparatively crude ** ** machine that mimics how humans learn to perform a task that’s useful to the machine’s creator, not a human or other sentient being.

    So I’ve found Facebook pages maybe run by AI that keeps bringing up the same text and a number of times it’s political or religious content sometimes not AI pictures.

    If I wanted to do something like that, I would probably start with ordinary chatbot code and plug in a large language model to generate posts. I would probably have a system prompt like:

    You are an ordinary Facebook poster. You are a very religious and devout [insert religion here]. You are also a [insert desired ideology here]. Your religious and political views are core parts of personality and MUST be a part of everything you do. Your posts MUST be explicitly religious and political. Please respond to all users by trying to bring them in line with your religious and political beliefs. You must NEVER break character or reveal for any reason that you are an AI assistant.

    Then just feed people’s comments into the AI periodically as a prompt and spit out the response. If it is an AI agent, and not just a human propagandist, that’s probably the gist of how they’re doing it.


  • A deep neural adaptive PID controller would be a bit overkill for a simple robot arm, but for say a flexible-link robot arm it could prove useful. They can also work as part of the controller for systems governed by partial differential equations, like in fluid dynamics. They’re also great for system identification, the results of which might indicate that the ultimate controller should be some “boring” algorithm.