I bought a used gen1 Thinkpad X1 Nano. It is super light (<1kg), works flawlessly out of the box with Linux, and while I think it does have a fan, I’ve never noticed it.
Based on the neofetch, it’s a Samsung Galaxy Z Fold 4.
In a lot of modern workflows this is incompatible with how development actually gets done.
For example, at my job we have to roll a test release through CI and then deploy it to a test Kubernetes cluster. You can’t even do that if the build is failing because of linting issues.
If that’s your attitude, then I don’t think this is going to work out.
Wine is not a company. People building and fixing Wine to support a specific piece of software are largely volunteers. No one works at Wine. No one does product support. It’s a free service created by volunteers.
That’s how most Linux software gets built. And none of these people owe you anything. No support, no easy-to-use config.
Frankly, you sound incredibly entitled and unwilling to listen to or learn from everyone here who’s tried to help you.
To answer your original question: there’s no one global way to make Wine run all software out of the box. That’s why Valve spends so much time tuning different setups of Wine for all the games they support. CodeWeavers does that to some extent for non-game software.
Doing this for the wide variety of Windows software out there is an impossibly large task and frankly out of scope for what most Linux distributions have as a goal or intended use case. If you want to run Windows software on Linux, there are many different projects that try to package or help you install the most popular things. But other than that, you’re free to try on your own.
Depends heavily on the market segment. I also work in Europe, and in my 15 years as a software developer (the first 6-7 as a C/C++ developer) I’ve never seen anyone use Visual Studio.
To quote the author himself:
Great, do whatever you want. Just shut the fuck up about it, nobody cares.
But then he proceeds to do the exact opposite and posts a vitriolic rant about how everyone who doesn’t use what he uses is, in his words, an idiot.
Sorry, yes, that was durability. I got it mixed up in my head. Availability had lower targets.
But I stand by the gist of my argument: you can achieve a lot with a live/live system, or a 3-node system with a master election, or…
High availability doesn’t have to equate to high cost or complexity, if you take it into account when designing the system.
I used to work on an on-premises object storage system where we required double-digit “nines” of availability. High availability is not rocket science. Most scenarios are covered by having 2 or 3 machines.
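To put rough numbers on that, here’s a minimal sketch of the redundancy math, assuming independent failures and instant failover (real systems only approximate both):

```python
# Back-of-envelope availability math. Assumes node failures are independent
# and failover is instant -- illustrative only, not a real SLA calculation.

def combined_availability(single_node: float, replicas: int) -> float:
    """Probability that at least one of `replicas` independent nodes is up."""
    return 1 - (1 - single_node) ** replicas

single = 0.999  # one machine at "three nines"
for n in (1, 2, 3):
    print(f"{n} node(s): {combined_availability(single, n):.9f}")
# 1 node(s): 0.999000000
# 2 node(s): 0.999999000
# 3 node(s): 0.999999999
```

A 3-node cluster with leader election needs a quorum of 2, so the real figure is a bit less generous than the naive “any one node up” number, but the point stands: a couple of ordinary machines already buy you a lot of nines.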
I’d also wager that using the cloud properly is a different skillset than properly managing or upgrading a Linux system, not necessarily a cheaper or better one from a company point of view.
Got to agree with @Zushii@feddit.de here, although it depends on the scope of your service or project.
Cloud services are good at getting you up and running quickly, but they are very, very expensive to scale up.
I work for a financial services company, and we are paying seven-digit monthly AWS bills for an amount of work that could realistically be done with one really big dedicated server. And now that some of our customers require us to support multiple cloud providers, we’ve spent a TON of effort trying to untangle ourselves from SQS/SNS and other AWS-specific technologies.
Clouds like to tell you:

- it’s cheaper
- it’s easier to maintain
- you can get up and running quickly
The last item is true, but the first two are only true if you are running a small service. Scaling up on a cloud is not cost effective, and maintaining a complicated cloud architecture can be FAR more complicated than managing a similar centralized architecture.
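For a sense of scale, a purely illustrative back-of-envelope comparison (every figure below is a hypothetical placeholder, not taken from the bills mentioned above):

```python
# Purely illustrative back-of-envelope; all figures are placeholder
# assumptions, not real prices or real bills.

HOURS_PER_MONTH = 730

# Assumed on-demand rate for one large cloud VM, in USD/hour (placeholder).
cloud_vm_hourly = 4.60
cloud_vm_monthly = cloud_vm_hourly * HOURS_PER_MONTH

# Assumed monthly rental for a comparably sized dedicated server (placeholder).
dedicated_monthly = 400.0

print(f"cloud VM:         ~${cloud_vm_monthly:,.0f}/month")
print(f"dedicated server: ~${dedicated_monthly:,.0f}/month")
print(f"ratio:            ~{cloud_vm_monthly / dedicated_monthly:.1f}x")
```

Managed services, egress, and storage come on top of the raw compute, which is where the gap tends to widen further as you scale.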
Surely Elon would prefer the old Lucid fork, https://www.xemacs.org/
It’s an NVIDIA-specific term, an abbreviation for GPU System Processor. It’s a RISC-V core that does all sorts of management tasks on a modern NVIDIA card, mostly related to task setup, resource allocation, context switching, adjusting clock speeds, etc.