It isn’t misusing metric; it simply isn’t metric at all.
Parses HTML with regex
Oh no… What have you done?!
No, it is the customer’s, since there will only be one customer left at that point.
There are pros and cons. On one hand, packing your dependencies into your executable means you never have to worry about broken dependencies, but it leads to other problems. What happens when a dependency has a security update? Now you need an updated executable for every executable that bundles that dependency. What if the developer has stopped maintaining it and the code is closed source? Then you’re out of luck: you either live with the vulnerability or stop using the program. Bundling dependencies can also drastically increase executable size. This is partly why C programs are so small: they can rely on glibc, whereas not every language has such a ubiquitous core library.
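As a rough sketch of the same trade-off in another ecosystem (PyInstaller and the file name `bundle_demo.py` are my own choices here, not something from the thread), bundling a Python program into one self-contained executable looks like this:

```python
# bundle_demo.py - a minimal sketch of the "bundled dependency" approach.
#
# Build a single self-contained executable (assuming PyInstaller is installed):
#   pyinstaller --onefile bundle_demo.py   # result appears under dist/
#
# The output runs without a system Python, but it is much larger than the
# script, and a security fix in any bundled library means rebuilding and
# redistributing the whole executable.
import sys

def main() -> None:
    # PyInstaller sets sys.frozen on the bundled executable, so the same
    # script can report whether its dependencies are bundled or external.
    if getattr(sys, "frozen", False):
        print("Running as a bundled, self-contained executable.")
    else:
        print("Running from source against the system Python.")

if __name__ == "__main__":
    main()
```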
As an aside, if you do prefer the bundled-dependency approach, it is available on Linux too. For example, you can use AppImages, which are very similar to a portable .exe on Windows. Of course, you may run afoul of the issues mentioned above, but it can be an option depending on what was released.
If I’m being honest, it is fairly slow. It takes a good few seconds to respond on a 6800 XT using the medium VRAM option, but that is the price you pay for running AI locally. Of course, a cluster should drastically improve the model’s speed.
You can run LLMs such as OpenLLaMA and GPT-2 on text-generation-webui. It is very similar to the Stable Diffusion web UI.
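A minimal sketch of doing the same thing directly with the Hugging Face transformers library (my example, not text-generation-webui’s own API), using GPT-2 since it is small enough to try on almost any machine:

```python
# A minimal local text-generation sketch using Hugging Face transformers.
# "gpt2" downloads once and then runs entirely on the local machine;
# the prompt string below is just a placeholder.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Running language models locally is", max_new_tokens=30)
print(result[0]["generated_text"])
```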
Same, I thought it was used commonly too.