• 2 Posts
  • 93 Comments
Joined 3 years ago
Cake day: January 21st, 2021



  • kevincox@lemmy.ml to Open Source@lemmy.ml · ELI5: What is RISC-V?

    For software to run on a computer it needs to speak the computer’s “language”. This is typically called “machine language” and differs across different hardware. For example, most modern Intel and AMD processors speak x86_64. This language has ways to express different operations such as “add these two numbers” or “put this CPU core into a low power mode”. This is the fundamental way that software works: by running instructions in this machine language.

    There are languages that are completely different, such as ARM, which is very common on mobile devices and is the language used by Apple’s new M chips. These have basically nothing in common with x86_64.

    These languages also evolve over time. For example, x86_64 is a significant extension of the older x86 language. For the most part this is fine: it is like the CPU now knows more words. If you use those new words, a new CPU will understand them, but older CPUs won’t.

    RISC-V is a new machine language. What makes it interesting is that it is a free and open specification. This means that anyone can create a new RISC-V CPU, unlike x86_64, where you need to buy a license from Intel, or ARM, where you need to buy a license from the ARM corporation. Most people think that this openness has major benefits: now anyone can create a new processor which may be better, rather than having innovation stifled by licensing costs (if you can even get a license) or having to create their own machine language and spend huge amounts of effort migrating software to it.

    Note: It is important not to confuse “machine language” with “programming language”. When people write software they very rarely write code in machine language directly. Usually they use a programming language, which is then converted into the machine language of the CPU it will run on.
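
    As a rough illustration, machine language is literally just numbers with a documented bit layout. Here is a minimal Python sketch that hand-encodes one RISC-V instruction following the published RV32I I-type format (the function name is made up for this example):

        # Hand-encoding one RV32I instruction (ADDI) to show that machine
        # language is just numbers with a documented bit layout.
        def encode_addi(rd, rs1, imm):
            """Encode `addi rd, rs1, imm` as a 32-bit RV32I instruction word."""
            opcode = 0b0010011          # OP-IMM major opcode
            funct3 = 0b000              # ADDI
            imm12 = imm & 0xFFF         # 12-bit two's complement immediate
            return (imm12 << 20) | (rs1 << 15) | (funct3 << 12) | (rd << 7) | opcode

        # "addi a0, a0, 1" -> add 1 to register a0 (x10)
        print(hex(encode_addi(rd=10, rs1=10, imm=1)))  # 0x150513

    Because the spec is open, anyone can build hardware (or an emulator) that understands these exact bit patterns without paying for a license.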




  • The problem with YubiKey is that it doesn’t have a good enough management story for broad use. I do use one for a few core sites (like GitHub), but if I lose a key I need to get a replacement and register that replacement with every site where I have set up U2F 2FA. This is OK with a few core accounts but doesn’t scale to the hundreds of sites that I have an account with. I am sure to miss a few, and then I either can’t log in with the new key or get completely locked out when I lose that key and get a second replacement.



    1. Salt doesn’t matter if your password is unique.
    2. If they can download data via SQL injection, being able to log in probably doesn’t matter that much.
    3. If they can dump your password/hash, they can likely also dump the TOTP secret (see the sketch at the end of this comment).
    4. A lot of website security experts’ attention is focused on raising the minimum security level. If you are using randomly generated passwords + auto-fill, you are likely above their main target audience.

    So yes, it is slightly better, but in practice that difference probably doesn’t matter. If you use U2F then you may get a meaningful security increase, but IMHO U2F is not practical to use on every site because managing the credentials is basically impossible.

    So yes, it is better. But for me, using random passwords and a password manager, it isn’t worth the bother.
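
    To illustrate point 3: TOTP is a symmetric scheme, so the server stores the same secret your authenticator app does, and anyone who dumps that row can generate valid codes. A minimal sketch using the standard RFC 6238 defaults (the secret value here is made up):

        # Minimal TOTP sketch (RFC 6238 defaults: HMAC-SHA1, 30 s steps, 6 digits).
        # Anyone holding `secret` can generate the same codes, which is why a
        # dumped database row containing the TOTP secret is as good as the app.
        import hmac, hashlib, struct, time

        def totp(secret, t=None, step=30, digits=6):
            counter = int((time.time() if t is None else t) // step)
            mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
            offset = mac[-1] & 0x0F                                  # dynamic truncation (RFC 4226)
            code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(code % 10 ** digits).zfill(digits)

        secret = b"made-up shared secret"  # normally the base32 value from the QR code
        print(totp(secret))                # server and attacker compute the same thing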





  • How exactly does Samsung police this? Surely the repair shop could just… not tattle?

    Well, there is a contract in place, and there would be consequences for not upholding the agreement. Sure, they could probably get away with it for quite a while, but it likely isn’t worth the risk; they would rather just out Samsung as being a piece of shit and go on their merry way.

    It would be pretty easy to catch this as well: Samsung can just occasionally submit a phone with a known third-party part for repair and see if the expected report comes in.





  • Yeah, just jump in.

    To get started it is best to keep Windows around; then if you need to get something done urgently you can go back to what you know and figure out how to do it in Linux later. Dual-booting is probably the best option if you are gaming, as it can be difficult to get great performance with GPU passthrough. That is the approach I took a long time ago, and at some point I realized that I hadn’t booted into Windows for months and just deleted the partition.


  • I’m sure some people will demand it. But for 99.9% of the population you don’t need 1000Hz content. The main benefit is that whatever framerate your content runs at, it will not have a notable delay relative to the display refresh rate.

    For example, if you are watching 60Hz video on a 100Hz monitor you will get bad frame pacing. But on a 1000Hz monitor, even though it isn’t perfectly divisible, the ~1/3ms delay isn’t perceptible.
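
    A quick back-of-the-envelope sketch of that, assuming the display simply shows each video frame on the next refresh boundary (no VRR, instant compositing):

        # Back-of-the-envelope frame pacing: how late is each frame of fixed-rate
        # content when it can only appear on the next refresh of a fixed-rate display?
        import math

        def pacing_errors_ms(content_hz, display_hz, frames=6):
            refresh_ms = 1000.0 / display_hz
            errors = []
            for i in range(frames):
                ideal = i * 1000.0 / content_hz                     # when the frame "should" appear
                shown = math.ceil(ideal / refresh_ms) * refresh_ms  # next refresh at or after that
                errors.append(shown - ideal)
            return errors

        print([round(e, 2) for e in pacing_errors_ms(60, 100)])   # up to ~6.7 ms of extra delay
        print([round(e, 2) for e in pacing_errors_ms(60, 1000)])  # never more than ~0.7 ms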

    VRR can help a lot here, but it can fall apart if you have different content at different frame rates. For example, a notification pops up and a frame is rendered, but then your game finishes its frame and needs to wait until the next refresh cycle. Ideally the compositor would have waited for the game frame before flushing the notification, but it doesn’t really know how long the game will take to render the next frame.

    So really you just need your GPU to be able to composite at 1000Hz; you probably don’t need your game to render at 1000Hz. It isn’t really going to make much difference.

    Basically, at this point faster refresh rates just improve frame pacing when multiple things are on screen, much like VRR does for single sources.