• 9 Posts
  • 46 Comments
Joined 1 year ago
Cake day: June 9th, 2023


  • j4k3@lemmy.world to Linux@lemmy.ml · 5 points · 4 days ago

    Slowly, over time, you learn what you need when you need it. There is no hand-holding. Under the surface, this thing is very complex. Every aspect of Linux is public. You do not need to understand most of it, but this is the realm of many brilliant developers and most computer science students, especially those studying operating systems. Everyone is welcome here, but be aware that all levels are present.

    The vast majority of Linux is not related to desktop users. Linux is more common on servers and embedded devices like routers, cars, and industrial/enterprise equipment. People are happy to help you learn when you hit a wall, but no one wants to be your tech support.

    Distros are not brands or marketing. Each exists for a specific reason and has its own specialties. Learning what those specialties are and how to leverage them, for things like finding documentation for a specific task, can make a big difference in your overall experience.

    It is quite common for people to call it Linux, but you are unlikely to interact with kernel space very much. Your actual experience is mostly limited to the desktop environment and applications.

    Since you are on a Debian > Ubuntu derivative, you are on a distro that may have outdated dependencies in some cases, especially with outlier software. Terms like outdated and stable/unstable do not mean what intuition first suggests. Windows is a stable OS, which really means it has outdated dependencies in most cases too. Distros like Fedora or Arch are kept up to date with the latest kernel and dependencies. If the software you want to run is actively developed and kept up to date, these are the best distros to run it on. If your software is static, these distros may break it and create headaches.

    By contrast, if your software is kept up to date but you are on a stable distro, either the distro packager keeps the needed libraries current, or you need to go to the extra effort of updating things yourself, such as by adding a PPA to your APT sources list (a rough sketch follows below). This matters because, if you are following internet documentation for some package, that documentation may be for a much newer version than what is available natively in the distro. This mostly applies to cutting-edge software when you’re doing something specific that is not super common.

    The practical way to think about this is that Debian stable is primarily created for developers building some device that will be used online for a specific task and relies on many high-level software packages. Once the thing is working, the developer knows that the packages they used are not going to get updated arbitrarily and break what they created, while the device will still receive all the needed security updates to remain safely online for as long as the kernel is supported by the Debian team. This is beneficial for small one-off devices and subcontracted development without a full-time dev. Understanding this paradigm will massively improve your overall experience. I had a lot of frustration before I understood that much of what I was using was outdated, and why, when I first started using Ubuntu over 10 years ago.
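
    As a rough sketch of that PPA route, assuming an Ubuntu-based distro (the PPA and package names here are placeholders, not real ones):

        # Add a hypothetical PPA to the APT sources, then pull the newer package.
        sudo add-apt-repository ppa:example/newer-stuff   # placeholder PPA
        sudo apt update
        apt policy example-package        # compare the PPA version to the distro's
        sudo apt install example-package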


  • They told us they were going to invest in EV R&D back in 2014. You know, back before we had that orange anal experience of a Russian puppet wannabe pornstar felon president. We put $6B into GM to compete; they pumped their stocks with it. Such is 3rd world America. Lay off the McCarthy bullshit whining about investing in R&D to mask corruption and ineptitude. This was no fucking surprise. Spinning this bullshit is just trying to justify screwing over average Americans with overpriced, undeveloped, bloated, unaffordable garbage made to pad our useless incompetent oligarchy’s pockets.


  • Slowly trying to learn sh while using mostly bash. Convenience is nice and all, but when I encounter something like OpenWRT or Android, I don’t like the feeling of speaking a foreign language. Maybe if I can get super familiar with sh, then I might explore prettier or more convenient options, but I really want to know how to deal with the most universal shell.
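
    To make that concrete, here are a couple of bashisms next to the POSIX sh forms that work in BusyBox ash (OpenWRT), mksh (Android), and dash; a small sketch, nothing authoritative:

        # bash-only constructs that plain sh rejects:
        #   [[ $name == foo* ]]     # double brackets with pattern matching
        #   arr=(one two three)     # arrays
        # POSIX sh equivalents:
        case $name in
            foo*) echo "starts with foo" ;;
        esac
        set -- one two three        # positional parameters instead of an array
        echo "$1 $2 $3"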



  • I found a Python project that does enough for my needs. jq looks super powerful, though. Thanks. I managed to get yq working for PNGs, but I had trouble with both jq and yq on safetensors files. I couldn’t figure out how to parse a JSON string embedded at the start of a binary with an inconsistent length, especially with massive files. I could get in and grab the first line with head. I tried some stuff with expansions, but that didn’t work and sent me looking for others who have solved the issue better than I have.
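
    For what it’s worth, the safetensors container is friendlier than it looks: the first 8 bytes are a little-endian unsigned integer giving the length of the JSON header that immediately follows, so the header can be sliced out without reading the whole file. A sketch, assuming GNU coreutils and a placeholder filename:

        f=model.safetensors                               # placeholder
        # first 8 bytes = little-endian u64 length of the JSON header
        len=$(od -An -tu8 -N8 --endian=little "$f" | tr -d ' ')
        # skip the 8-byte prefix, take exactly $len bytes, hand them to jq
        tail -c +9 "$f" | head -c "$len" | jq .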







  • The best deal is probably going to be looking for a used machine with a 3080 Ti. There were several of these made with Intel 12th-gen CPUs. That is probably the cheapest way to get a 16 GB GPU; they can be found for considerably less than $2k. Anything with a “3080 Ti”, where the “Ti” part is super important, has a 16 GB GPU (the plain “3080” is 8 GB). That was the only 16 GB laptop GPU until the newer Nvidia 40-series stuff.

    That can play any game and can run some large models for AI stuff if you become interested. On the AI front, you want maximum system memory too, if possible. My machine can only address 64 GB of sysmem. Some go up to 96 GB. I wish I could get like 256 GB.
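
    If you want to sanity-check a candidate machine, these standard commands report the numbers that matter here (assuming only that the Nvidia driver is installed):

        nvidia-smi --query-gpu=name,memory.total --format=csv   # a 3080 Ti should report 16384 MiB
        sudo dmidecode -t memory | grep -i 'maximum capacity'   # max RAM the board can address
        free -h                                                 # RAM currently installed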

    Just because a machine comes with Linux does not mean the problems are solved. Many times, people buy machines with peripherals whose kernel modules are orphaned and never made it into the mainline kernel. An orphaned kernel is not real Linux; it is like what phones ship with. Indeed, this is the exact mechanism used to steal your phone from you and prevent you from using it for its true hardware lifetime.

    The real solution is https://linux-hardware.org/. Use that to see what works where. You also need to understand modern Secure Boot, the TPM chip, and package signing keys. These exist outside of the Linux kernel. If delving into this system is too much for you to deal with, or of no interest, just stick to either Ubuntu or Fedora. Both have a special system outside of Linux that handles the keys for you. Presently, these are the only two distro choices that do this; not derivatives either, it must be vanilla Ubuntu or Fedora. You won’t be able to change anything in kernel space when going this route, but if the keys issue is unimportant to you, that probably won’t matter.
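
    The probe itself is one command (the hw-probe tool is what feeds linux-hardware.org), and mokutil will show you where a machine stands on Secure Boot:

        sudo -E hw-probe -all -upload    # uploads an anonymized hardware probe
        mokutil --sb-state               # is Secure Boot enabled?
        mokutil --list-enrolled          # which signing keys are enrolled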


  • For me, it is not about “lost history.” It is about contextual history and knowing whether some tool I built in a distrobox uses only dandified (dnf), pacman, aptitude, or portage; whether it also uses venv or conda; or whether there was some install script.

    It would be nice if I were on a stable kernel to avoid such a dependency salad, but that is not within the scope of playing with the latest AI toys, where new tools and new spaces to explore constantly create opportunities.

    It would be nice if I were some genius full-stack dev who could easily normalize all the tools under a single dependency-containerization scheme, but that is not within my mental scope or interests at present. For most AI tools, I follow the example given and only add a distrobox container as an extra layer of dependency buffering from the host. The ability to lazily see the terminal history for each of those containers is a handy way to see exactly what I did months ago.
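
    A sketch of that workflow; note that distrobox shares the host home by default, so giving each box its own home (one way to do it, not the only way) is what keeps the histories separate. Names and paths are placeholders:

        distrobox create --name ai-tools --image fedora:40 \
            --home ~/boxes/ai-tools       # separate home = separate history
        distrobox enter ai-tools
        # inside the box, installs land in this box's own ~/.bash_history:
        history | tail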


  • Distrobox supports waydroid to run Android apps on Wayland. There are many small purpose-built apps for Android that can be useful on the desktop.

    No one seems to be mentioning apps in this specific kind of context, and I don’t consider a locked-down and stripped orphan kernel to be “Linux,” but a lot of this stuff is FOSS and can now run on both.
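
    Roughly, assuming waydroid has already been initialized and you are in a Wayland session (the apk path is a placeholder):

        waydroid session start &
        waydroid app install ./SomeApp.apk   # placeholder apk
        waydroid show-full-ui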





  • If it died as a result of spilling something on it, you most likely damaged something hardware-wise. First, remove the battery ASAP. Then just take off the bottom cover, pat anything wet dry, and let it air out.

    The real concern is the chips that do not have any pins sticking out of them. Those are ball grid arrays (a whole bunch of connections are made under the black epoxy packaging). Those can hold moisture underneath for longer. Your best bet is to let it dry in a warm place for a few hours.

    Getting wet is not the problem. The problem is a powered connection with a conductive fluid bridging two or more connections that cannot tolerate the current the fluid creates.

    When the actual circuit board is made, it goes through ovens and is submerged in liquids. Some even pass across molten pools of tin as part of component assembly. The board itself (not all the other plastics and stuff for the case, screen, etc.) is very resilient.

    In many industrial settings where the environment is very dirty, it is common to take a desktop PC apart and hose it off with water. The only issue is shorting connections under powered conditions.

    So yes, technically, any form of drying can help “recover” the device.



  • I back up and then upgrade through the mechanism provided. Why? Lazy. I should take the time to set up a NAS and run most of /home from that, but I have never been motivated enough to try it.

    I usually let myself lag behind on Fedora to wait until the kinks have been worked out. I just jumped from 38 to 40 in one upgrade and totally regret it. Python is screwed up in distrobox and causing problems, but I can roll back too.
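
    For reference, the provided mechanism on Fedora boils down to the system-upgrade plugin (releasever 40 here matches the jump described; adjust as needed):

        sudo dnf upgrade --refresh
        sudo dnf install dnf-plugin-system-upgrade
        sudo dnf system-upgrade download --releasever=40
        sudo dnf system-upgrade reboot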



  • I’m presently having issues with 40 and old Stable Diffusion/ComfyUI related to torch, and I am stuck in a dependency loop. Almost definitely unrelated.
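
    One escape hatch for that kind of loop might be an isolated venv with explicit pins; the path and versions below are placeholders, not a tested combination:

        python3 -m venv ~/venvs/comfyui      # placeholder path
        . ~/venvs/comfyui/bin/activate
        pip install "torch==2.2.2" "torchvision==0.17.2"   # placeholder pins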

    When I was looking into AMD a year or so ago, the 7k-series thing came up in a conference talk somewhere on YT. It had to do with some kind of conflict in how the 7k series versus the older stuff was designed and how CUDA is set up. I really don’t recall the details well. I was about to pull the trigger on a 6k-series setup, and after seeing that info I went the other direction.

    I was researching the CPU scheduler at the time, and I may be blurring that and the GPU stuff together when I say: I think it was the open source team talking about this at a Linux Plumbers Conference; it might have been about the enterprise GPU stuff and about HIP or something like that. Sorry, I’m fuzzy on it.

    Edit: I was always only looking at the AI side, so the back end/kernel/API was all I cared about.