Also, yesterday was Bandcamp Friday (they forgo their cut and everything goes to the artist). The next two are Oct 4th and Dec 6th.
I think this was Steve Jobs’ primary skill. He could see a clear vision of the product people didn’t know they wanted. From bottom to top, from the hardware it ran on to the typefaces its apps used, he knew that the best user experiences happen when every level of the stack harmonizes into one finely tuned whole.
Unfortunately, the people who are that good usually don’t work for free. We’re very fortunate that Valve is choosing to open-source their work and keep their Steam Deck platform an open one.
Debian is the only one there I haven’t actually tried myself as a daily driver, so idk if using the terminal is necessary. I’ve just heard it’s solid, and I assumed all normal user operations can be done via GUI in GNOME or KDE, like they can on Fedora.
It’s better to ask which distro is dummy-proof. Some are made for noobs and Windows users, others are not, and they’re all based on “Linux”.
Mint, Debian, and Fedora are all good starter options, and all are made to get stuff done without having to use the command line.
I agree. Specifying the same param twice like this feels like it should be idempotent. Sometimes a final cmdline string is built by multiple tools concatenating their outputs together; if each one adds --force without any way to know whether it’s already been added elsewhere, this could lead to undesirable behavior. Even --force --force would be better.
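FWIW, this is easy to get right at the parser level. A minimal Python sketch of the idempotent behavior (using argparse’s store_true; the flag name is just the one from this thread):

```python
import argparse

# With action="store_true", repeating the flag is harmless:
# each occurrence just sets the same value again, so
# --force and --force --force parse identically.
parser = argparse.ArgumentParser()
parser.add_argument("--force", action="store_true")

print(parser.parse_args(["--force"]).force)             # True
print(parser.parse_args(["--force", "--force"]).force)  # still True, no error
```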
I remember when the mapping of virtual memory segments clicked for me. I think I said out loud, “that’s so clever!”. Now it just seems fundamental to managing memory for user-space applications, but I hadn’t thought about how it was done before.
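For anyone who wants to poke at this themselves: on Linux, the kernel exposes each process’s segment mappings under /proc, so you can dump your own in a couple of lines (Linux-only sketch):

```python
# Linux-only: print this process's virtual memory segments.
# Each line shows an address range, permissions (e.g. rwxp), and
# the file (or [heap]/[stack]/[vdso]) backing that mapping.
with open("/proc/self/maps") as maps:
    print(maps.read())
```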
I’m actually not sure exactly what the TPM can guard against, but I think you’re right: if a malicious OS tampered with the bootloader, the TPM would catch it and complain before you decrypt the other OS.
Yeah, physical access usually means all bets are off, but you still lock your doors even though a hammer through a window easily circumvents them, because you don’t know what the attacker is willing to do or capable of. If you only ever check for physical devices, you’ll miss the attack in software; similarly, if you only rely on Secure Boot, you’ll miss any hardware-based attacks. It’s there as a tool to plug one attack vector.
Also, my guess is the most common thing this protects against is stupid employees plugging a USB drive they found in the parking lot into their PC. If they do it while the OS is running, IT can have a policy that blocks it from taking action. But if they leave it plugged in during a reboot, IT is otherwise helpless.
No point in putting locks on your house, because an attacker can just drive their car through your front door.
The attacks you mention have their own ways of being detected: usually eyeballs. But eyeballs can’t help you against something hiding in your bootloader. So Secure Boot was made.
And I don’t really follow your dual boot claim. If you don’t trust one of the OSes, and you boot it up on your hw, you’re already hosed. At that point it can backdoor your bootloader and compromise your other OS. Secure Boot prevents malicious OSes from being booted, it can’t help you if you willingly boot a malicious OS.
Cool, that’s a good source to peruse, thanks.
Yeah, afaik Tegra was only used for embedded, closed-source devices though, no? Did they submit any non-proprietary Tegra support upstream?
And afaik CUDA has also always been proprietary bins. Maybe you mean they had to submit upstream fixes here and there to get their closed-source stuff working properly?
I think you and I are using two different definitions of the word “powerful”, or are at least applying them to subtly different aspects of the discussion.
I don’t know if you are familiar with basic finite automata theory, but a Finite State Machine is provably less “powerful” than a Turing Machine. This is the definition of “power” that I’m using: “power” as in “expressiveness”. I.e., the fact that you can literally create a terminal as a sub-element within a GUI if you wanted means that a GUI is provably more “powerful” (or more expressive) than a TUI. And thus the best GUI for a tool will always be better than the best TUI for the same tool. (Comparing the worst GUI vs the best TUI is a waste of time.)
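To make that gap concrete for anyone who hasn’t seen the automata-theory argument: the textbook language aⁿbⁿ requires unbounded counting, which no Finite State Machine can do (that’s the pumping lemma), but which is trivial given unbounded memory. A quick Python sketch:

```python
# Recognize { a^n b^n : n >= 0 }, the classic non-regular language.
# No DFA accepts exactly this language: with finitely many states it
# must eventually confuse a^i and a^j for some i != j (pumping lemma).
# With unbounded memory, it's a two-liner:
def is_anbn(s: str) -> bool:
    n = len(s) // 2
    return len(s) % 2 == 0 and s == "a" * n + "b" * n

assert is_anbn("") and is_anbn("aabb") and not is_anbn("abab")
```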
But you’re using the definition of “powerful” as in a “powerful programming language”. This is a common use of the term, but is much more fuzzy and harder to quantify. It’s no longer synonymous with “expressiveness”. Generally a language is “powerful” if you can get “a lot done” with relatively few characters or operations. Ex. Python is often considered more “powerful” than C because you can do in a single line what would take dozens or hundreds of lines in C. Similarly, you’re saying that a developer can make a comprehensive TUI using less time and effort than it would take for them to make a GUI that’s at least as good (including integration with other tools afforded by pipes and redirects).
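A toy illustration of that second sense of “powerful” (the file name here is just a placeholder): a word-frequency tally is one line of Python, while the equivalent C needs manual file I/O, tokenizing, and a hand-rolled hash table:

```python
from collections import Counter

# One line that would take a page of C ("input.txt" is a stand-in):
print(Counter(open("input.txt").read().split()).most_common(5))
```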
And I agree with you. But hopefully you also agree with me that a GUI is objectively more “expressive” than a TUI, and in that sense has a higher ceiling for how useful it can be to a user.
What’s an example? I would have thought, back then especially, their driver (and maybe nvapi) was most of the software they shipped.
That sounds fine, but isn’t this also what LXC is for?
What does it mean for something to be an “artistic success”?
I’m talking about a properly made GUI, you’re talking about most GUIs. I believe I covered this in my original comment: poorly made GUIs are worse than a terminal interface.
But don’t act like a linear string of characters, typed in one-by-one is the optimal way to interface with a computer. Obviously, a non-invasive neuralink implant that is able to interpret your intentions with 100% accuracy without uploading any of your data to Elon Musk is the ideal Human Interface Device, but we’re not quite there yet.
In the meantime, I assume you run a window manager of some kind. Why? Do you regularly browse the internet from the terminal? Unlikely. Why not? Have you ever tried non-linear video editing, image manipulation, or 3D modeling in a terminal? How about debugging multi-threaded code, or visualizing allocation patterns? Pored over profiling metrics to root-cause a performance issue? And if VR/AR is part of your workflow, trying to use a terminal in concert with it feels sillier than the hacking montage from Hackers.
Terminals are objectively more limited than a GUI, because that’s literally the definition of a terminal: a very limited graphical user interface. The advantage of a terminal is that it’s easy (especially for programmers who don’t have an artistic/UX bone in their body, and are thinking in terms of functions and operands) to make a primitive interface that adheres to a set of expectations. But no one commits every parameter of every command-line tool to memory, and even if they did, people don’t want to type out a novel when moving a cursor to a specific region of the screen feels more natural and takes a fraction of the time. (Not that it always feels more natural in every circumstance, but when it does, that’s what every sane person should prefer.)
So just like I told OP, the goal shouldn’t be to use a terminal; you should instead focus on solving a problem. The terminal is just often the least bad tool that currently exists to solve a lot of problems.
As others have said, have a goal. A computer is a tool; use it to accomplish something, and try to get something working for yourself that currently doesn’t. If your PC already does everything you need it to, great, you’re ahead of everyone else 😅.
Don’t think of the command line as a good option; it’s archaic, and its capabilities are objectively rudimentary. It’s just often the least bad option, because no one has made a convenient GUI for what you’re trying to do (or if they have, they did it poorly, and somehow the command line is still less bad). So you will inevitably have to interact with it.
That’s already been happening for the last 15+ years, but Linux growth is primarily in the last 3. People are definitely moving to mobile, but the ones on desktop seem to be preferring Linux more than they did even 5-10 years ago (Note that laptops are included in “desktop” here).
You should definitely throw that whole line into a script though, no reason to type it out every time. Then, if it’s possible to have a hook that runs it after a kernel update, that would be ideal. Not sure if there’s a standard way to do that; it might be a bit distro-dependent.
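For example, on Arch-based distros a pacman hook can do exactly this. A rough, untested sketch (the script path is a stand-in, and Target has to match your kernel package, e.g. linux-lts instead of linux):

```ini
# /etc/pacman.d/hooks/post-kernel-update.hook
[Trigger]
Operation = Upgrade
Type = Package
Target = linux

[Action]
Description = Re-running setup script after a kernel update
When = PostTransaction
Exec = /usr/local/bin/post-kernel-update.sh
```

On Debian-based systems, dropping an executable script into /etc/kernel/postinst.d/ should accomplish the same thing.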
I would still say dual booting is the superior option, but that might be complicated for some people, so this is probably a good recommendation.
Here’s hoping they build something useful that can be forked to work without the garbage.