That’s mostly down to Teams though (being the bloated web app that it is), and not the underlying operating system.
When talking about the kernel, Windows actually skipped three major versions, if I remember correctly: Windows 8 was Windows (NT) 6.2, and Windows 10 jumped straight to version 10.
Why, when a simple alias will do?
I also experienced fewer “hiccups” since switching to Linux with KDE, but I’d like to know on what combination of hardware and Windows you experienced anywhere close to an average of 1s response time to “any input”.
I’m no expert here, but I’m pretty sure branch prediction logic is not part of the instruction set, so I don’t see how RISC alone would “fix” these types of issues.
I think you have to go back 20-30 years to get CPUs without branch prediction logic. And VSCodium is quite the resource hog (as is the modern web), so good luck with that.
Sorry, I don’t know of a guide for other distributions.
Nvidia might be selling the shovels to the customer during this gold rush, but TSMC is making them.
It’s kind of in the word distribution, no? Distros package and … distribute software.
Larger distros usually do quite a bit of kernel work as well, and they often include bugfixes or other changes in their kernel that aren’t in mainline or stable. Enterprise-grade distributions often backport hardware support from newer kernels into their older kernels. But even distros with close-to-latest kernels like Tumbleweed or Fedora do this to a certain extent. This isn’t limited to the kernel and often extends to many other packages.
They also do a lot of (automated) testing, just look at openQA for example. That’s a big part of the reason why Tumbleweed (relatively) rarely breaks. If all they did was collect an up-to-date version of every package they want to ship, it’d probably be permanently broken.
Also, saying they “just” update the desktop environment doesn’t do it justice. DEs like KDE and GNOME are a lot more than just something that draws application windows on your screen. They come with userspace applications and frameworks. They introduce features like vastly improved HDR support (KDE 6.2, usually along with updates to Wayland etc.).
Some of the rolling (Tumbleweed) or more regular (Fedora) releases also push for more technical changes. Fedora dropped X11 by default on their KDE spin with v40, and will likely drop X11 with their default GNOME distro as well, now that GNOME no longer requires it even when running Wayland. Tumbleweed is actively pushing for great systemd-boot support, and while it’s still experimental it’s already in a decent state (not ready for prime time yet though).
Then, distros also integrate packages to work together. A good example of this is the built-in, enabled-by-default snapshot system of Tumbleweed (you might’ve figured out that I’m a Tumbleweed user by now): it uses snapper to create btrfs snapshots on every zypper (package manager) system update, and not only can you roll back a running system, you can also boot older snapshots directly from the grub2 or systemd-boot bootloader. You can replicate this on pretty much any distro (btrfs support is in the kernel, snapper is made by an openSUSE member but available for other distros etc.), but here it’s all integrated and ready to go out of the box. You don’t have to configure your package manager to automatically create snapshots with snapper, the btrfs subvolume layout is already set up for you in a way that makes sense, you don’t have to think about how you want to add these snapshots to your bootloader, etc.
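In practice it looks roughly like this (a sketch from memory, the snapshot numbers are just placeholders):

```sh
# List existing snapshots; zypper creates "pre"/"post" pairs around every transaction
sudo snapper list

# See what changed between two snapshots (numbers are placeholders)
sudo snapper status 42..43

# Roll back to a specific snapshot; the rollback takes effect after a reboot
sudo snapper rollback 42

# Or boot the "Start bootloader from a read-only snapshot" entry in GRUB,
# check that the old state works, and run "sudo snapper rollback" from there
```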
So distros or their authors do a lot and their releases can be exciting in a way, but maybe not all of that excitement is directly user-facing.
What do you mean by “option for remote”?
Passed openQA in Tumbleweed, so should be available with 20241007.
-0.05% might be insignificant, but do you think it has to do with more games requiring kernel-level anti-cheat?
They can, and they are making their own chip designs to do the job.
The cloud part of Apple Intelligence runs on their own designed hardware.
I just use whatever is included with the desktop environment. On KDE and GNOME launching an application involves pressing the Super (“Windows”) key, typing the first couple of letters of the application I want to launch and pressing the return key.
I might be missing something here but I don’t know how other launchers could possibly make this a simpler process.
Show ’em, that’ll teach these nasty fanboys! Reads like writing that gave you a big dopamine rush.
I agree, commenting “Use Firefox!!!1!11” on every post remotely related to (other) browsers doesn’t help anybody, just like commenting “Use Linux!!!1!11” on every post about a vulnerability in Windows doesn’t contribute anything meaningful at all.
Look, I also disagree with what Mozilla is doing here and yes, they 100% deserve the flak they are getting for it. But - like most things in life - it’s not black and white. Firefox could still be less intrusive to your privacy than Chrome (I’m not saying it necessarily is, but it could be that way). A different example: your mail provider could track every time you log in to your account, or it could analyze and track the content of every email you receive. One is clearly worse than the other, right?
Which browser(s) do you recommend/use?
Let’s see if this really affects all Linux systems or if the stars need to align for this to actually be exploitable.
Yeah, duplicate flags should just be ignored.
To be fair, a big portion of the work that goes into Linux (at least the kernel) is done by paid developers working for big corporations.
Considering Intel is behind TSMC as well, China might be quite close to Intel then.
More than enough for Apple to bend to pretty much everything the Chinese government is asking for.
I think I have a simple function in my `.zshrc` file that updates flatpaks and runs `dnf` or `zypper` depending on what the system uses. This file is synced between machines as part of my dotfiles sync, so I don’t have to install anything separate. The interface of most package managers is stable, so I didn’t have to touch the function. This way I don’t have to deal with a package that’s on a different version in different software repositories (depending on distribution) or manually install and update it.
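For illustration, a minimal sketch of what such a function could look like (the name `up` and the exact flags are placeholders, not necessarily what I actually use):

```sh
# in ~/.zshrc: distro-agnostic "update everything" helper
up() {
    # Update Flatpaks first, if flatpak is installed
    command -v flatpak >/dev/null && flatpak update -y

    # Then run whichever system package manager is available
    if command -v zypper >/dev/null; then
        sudo zypper dup               # Tumbleweed: distribution upgrade
    elif command -v dnf >/dev/null; then
        sudo dnf upgrade --refresh    # Fedora
    else
        echo "no supported package manager found" >&2
        return 1
    fi
}
```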
But that’s just me, I tend to keep it as simple as possible for maximum portability. I also avoid having too many abstraction layers.