• 0 Posts
  • 348 Comments
Joined 2 years ago
Cake day: August 11th, 2023

  • An OS or a hypervisor can run on bare metal. If I have Windows running in KVM, KVM is running on bare metal but Windows isn’t. Ditto with ESXi or Hyper-V. In the case of your setup, Linux and KVM are both bare metal, but Windows isn’t. KVM, ESXi, and Xen always run at a privilege level above their guests. Does this make sense?

    The difference between KVM and the more conventional Type 1 hypervisors is that a conventional Type 1 can’t run alongside a normal kernel. So with Linux and KVM, both Linux and KVM are bare metal. With Linux and Xen, only Xen is bare metal, and Linux is a guest. Likewise, if you have something like Hyper-V or WSL2 on Windows, then Windows is actually running as a guest OS, as is Linux or any other guests you have; only Hyper-V is running natively. Some people still consider KVM a Type 1, since it is running on bare metal itself, but you can see how it differs from the model other Type 1 hypervisors use. It’s a naming issue in that regard.

    It might help to read up more on virtualization technology. I am sure someone can explain this stuff better than me.
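    You can actually see this on your own machine: when the kvm kernel module is loaded, Linux exposes a device node that marks the kernel itself as the hypervisor. A quick sketch (standard Linux paths, nothing distro-specific):

    ```shell
    # /dev/kvm exists when the kvm kernel module is loaded, i.e. when the
    # Linux kernel itself can act as the bare-metal hypervisor.
    if [ -e /dev/kvm ]; then
        kvm_status="available"
    else
        kvm_status="not available"
    fi
    echo "KVM is $kvm_status on this machine"
    ```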


  • Yes, I know GPU passthrough is possible. Almost no one does it, as consumer GPUs don’t normally support the virtualization technologies that allow multiple OSes to share one GPU; that’s mostly an enterprise feature. There are projects like VirGL that work with KVM and QEMU, but they didn’t support Windows last I checked, and they’re imperfect even with Linux guests. I think only Apple Silicon and Intel integrated graphics support the right technologies you would need. Buying a second GPU is a good option, although that has its own complexities and is obviously more expensive. Most modern consumer platforms don’t have enough PCIe lanes to give two GPUs full x16 bandwidth. There is a technology in Windows called GPU paravirtualization that can make this happen with Hyper-V, but you have to be using a Hyper-V host, not a Linux-based one, and it’s quite finicky to get working.

    Out of interest what games are you running that don’t need GPU performance? Basically any modern 3D game needs a GPU to run well. Obviously 2D games might not, though even that varies.

    All of the above is far more complex than setting up a dual boot. A dual boot can be as simple as having two different drives and picking which one to boot from in the UEFI or BIOS firmware. I don’t understand why you’d expect a high-tech solution like virtualization to be less complicated than that.

    There are basically three types of virtualization in classical thinking: Type 1, Type 2, and Type 3. KVM is none of these.

    With Type 1, no operating system runs on bare metal; only the hypervisor itself does. Everything else, including the management tools for the hypervisor, runs in guest OSes. Hyper-V, ESXi, and anything using Xen are great examples. Type 2 is where you have virtualization software running inside a normal OS.

    KVM is special because it’s a hypervisor running in the same CPU ring and privilege level as the full Linux kernel. It’s as if a Type 1 hypervisor ran at the same time as a normal OS in the same space. This means it behaves somewhat like a Type 1 and somewhat like a Type 2: it’s bare metal just like a Type 1 would be, but it has to share resources with Linux processes and other parts of the Linux kernel. You could kind of say it’s a Type 1.5.

    It’s not the only hypervisor these days to use that approach, and the Type 1/2/3 terminology kind of breaks down in modern usage anyway. Modern virtualization has gotten a bit too complex for simplifications like that to always apply. Type 3 had to be added to account for containers, for example. This gets weird when a modern Linux system can be a Type 1.5 hypervisor while also being a Type 3 host at the same time.
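    As an aside on the GPU passthrough route discussed above: the first prerequisite to check is whether your platform exposes IOMMU groups, since VFIO passthrough hands whole groups to a guest. A rough sketch (standard sysfs paths; the output depends entirely on your hardware and whether the IOMMU is enabled in firmware):

    ```shell
    # List devices by IOMMU group; an empty result usually means the IOMMU
    # is disabled or unsupported, which rules out VFIO passthrough.
    found=0
    for dev in /sys/kernel/iommu_groups/*/devices/*; do
        [ -e "$dev" ] || continue
        found=1
        group=$(basename "$(dirname "$(dirname "$dev")")")
        echo "IOMMU group $group: $(basename "$dev")"
    done
    [ "$found" -eq 1 ] || echo "No IOMMU groups visible"
    ```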


  • That’s not how that works. I think you’re confusing bare metal with a bare-metal hypervisor. The latter means a Type 1 hypervisor, which KVM isn’t anyway, but that’s another story.

    Without GPU passthrough you aren’t going to get anywhere near the graphics performance needed for something like gaming. I’ve also had issues with KVM and libvirt breaking during sleep. It’s a lot more janky than you make out.





  • There is a manual pre-installed on your machine for most available commands. You just type `man` and the name of the thing you want the manual for. Many commands also have a `--help` option that will print a list of basic options.

    I should point out this isn’t Linux-specific either. Many of these commands come from Unix or from other systems entirely; macOS actually has a similar command-line system. It’s more that Linux users tend to use and recommend the command line more, usually because it’s the way of doing things that works across the largest number of distributions and setups, but also because lots of technical users prefer the command line anyway. That’s why people complain about Windows command lines being annoying. I say command lines, plural, because Windows actually has two of them (Command Prompt and PowerShell) for some odd reason. Anyway, I hope this helped explain why things are the way they are.
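    For example, looking up documentation for `ls` (assuming a typical Linux setup with man pages installed):

    ```shell
    man ls | head -n 5   # first few lines of the full manual page
    ls --help            # quick summary of the command's options
    man -k directory     # search manual page descriptions for a keyword
    ```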









  • I’ve tried making this argument before and people never seem to agree. I think Google claims their Kubernetes setup is actually more secure than traditional VMs, but how true that really is I have no idea. Unfortunately, though, there are already things we depend upon for security that are probably less secure than most container platforms, like ordinary Unix permissions or technologies like AppArmor and SELinux.
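    For what it’s worth, the traditional Unix permission model I mentioned is simple enough to demonstrate in a couple of lines (the filename is just an example):

    ```shell
    # Restrict a file so only its owner can read or write it.
    touch secret.txt
    chmod 600 secret.txt   # rw for owner, no access for group/others
    ls -l secret.txt       # first column shows -rw-------
    ```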





  • That’s not true though. The models themselves are hella intensive to train. We already have open-source programs to run LLMs at home, but they are limited to smaller open-weights models. Having a full ChatGPT-class model that could be run by any service provider or home-server enthusiast would be a boon. It would certainly make my research more effective.


  • > There is a lot that can be discussed in a philosophical debate. However, any 8-year-old would be able to count how many letters are in a word. LLMs can’t reliably do that, by virtue of how they work. This suggests to me that it’s not just a model/training difference. Also, evolution over millions of years improved the “hardware” and the genetic material. Neither of these compares to the computing power or the amount of data used to train LLMs.

    Actually, humans have more computing power than is required to run an LLM. You have this backwards. LLMs are comparatively a lot more efficient given how little computing power they need to run. Human brains as a piece of hardware are insanely high-performance and energy-efficient. I mean, they come with their own internal combustion engine plus a maintenance and security crew, for fuck’s sake. Give me a human-built computer that has that.

    > Anyway, time will tell. Personally I think it’s possible to reach a general AI eventually, I simply don’t think the LLM approach is the one leading there.

    I agree here, though I do think LLMs are closer than you think. They do in fact have both attention and working memory, which is a large step forward. The fact that they can only process one medium (text) is a serious limitation, though. Presumably a general-purpose AI would ideally have the ability to process visual input, auditory input, text, and some other things like various sensor types. There are other model types, though, some of which take in multi-modal input to make decisions, like a self-driving car.

    I think a lot of people romanticize what humans are capable of while dismissing what machines can do, especially given the processing power and efficiency limitations that come with the simple silicon-based processors that current machines are made from.