

I have read somewhere that some text can be compressed incredibly efficiently by some AI models. The catch is that the compressed data is worthless without the model, and the power to run it, to recover the original.
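To make that concrete, here’s a toy sketch of the idea (my own illustration, not any specific model or product): compression is prediction plus an entropy coder, which can store text in roughly -log2 p(character | context) bits per character, so a stronger predictor, like an LLM, means fewer bits. The sketch below stands in a character bigram model for the AI and just computes the ideal code length rather than running a real arithmetic coder.

```python
import math
from collections import Counter, defaultdict

text = "the quick brown fox jumps over the lazy dog. " * 50

# "Train" a character bigram model on the text itself
# (a tiny stand-in for an AI model).
counts = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    counts[a][b] += 1

def prob(ctx, ch):
    c = counts[ctx]
    # Laplace smoothing so unseen characters still get nonzero probability.
    return (c[ch] + 1) / (sum(c.values()) + 256)

# Ideal code length under the model: what an arithmetic coder would approach.
bits = -sum(math.log2(prob(a, b)) for a, b in zip(text, text[1:]))
print(f"raw: {len(text) * 8} bits, model-coded: about {bits:.0f} bits")
# The catch from above: decoding needs the exact same model (and the compute
# to run it), otherwise those bits are meaningless.
```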
I had a poodle mix, and the nice thing is they don’t shed their fur on their own, which also helps with allergies.
That’s true, but sadly that won’t help against a state forcing a company to put these things into the silicon. Not saying they do right now, but it’s a real possibility.
I mean, can’t they just submit a version for audit that doesn’t have a backdoor or snooping? Verifying the audited design against the actual silicon is probably very hard.
How do you want to verify that a RISC core isn’t doing something funny?
I see the appeal of the package manager for a lot of things, but storage got so incredibly cheap and fast that duplication is much less of a problem than the effort of making stuff work the traditional way. But I’m not a real Linux user; I don’t like tinkering, I want to download something and have it work. And the amazing thing is we can have both: if people like spending time packaging something, be my guest.
The funniest interaction I had recently: I downloaded a program that isn’t in my package manager and doesn’t have any sort of Flatpak/AppImage, so I grabbed it as a .deb, and it didn’t run because of some dependency. I could have cloned the git repo and built it from source, which might have worked, but I was too lazy. So I just downloaded the Windows .exe and ran it through Wine, which worked flawlessly.
Still probably piled there to stop some kind of degradation.
But I like my applications years out of date, and I think it’s good that every distro has to spend man-hours on packaging it individually.
Also, it’s 40 per hour per user.
I had a problem with an Intel HD 4000 on Arch.
When I look at these patents, all of them seem to be patenting other people’s inventions from years ago, so I hope prior art wins.
How do you want to federate petabytes or even exabytes of content? And your second sentence leads straight to a monolithic instance.
I want to see how you can serve thousands or millions of people from a Chromebook in your closet. And if you say P2P: that doesn’t deal with spikes in demand, a lot of old content would vanish even more easily than on YouTube, and it would rely on people being willing to seed.
This is probably common. The people who work on UI often aren’t the people who do pull requests. But I think if you want to contribute, it’s best to get in touch with a maintainer in the project’s chat; projects often have a Matrix/IRC/Discord linked on the git page.
In FOSS most people can program, but only a handful of people can design a decent UI.
You can’t really blame that on Rust.
The thing is, data poisoning is an arms race that the AI side will win with ease. You can solve it either with preprocessing or with filtering, and all the poison does is make the images look worse. I can’t think of a poisoning scheme where removing the poison takes more effort than applying it did.
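To show what I mean by preprocessing, here’s a minimal sketch (my own example, not any specific tool, and the file names are hypothetical): perturbation-based image poisoning tends to be fragile, so a cheap resize plus JPEG re-encode pass wipes out most of it before training.

```python
# Assumes Pillow is installed (pip install Pillow).
from io import BytesIO
from PIL import Image

def launder(path, out_path, scale=0.5, quality=85):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    # Downscale then upscale: acts as a low-pass filter that removes
    # the high-frequency adversarial perturbations.
    img = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
    img = img.resize((w, h), Image.LANCZOS)
    # JPEG re-encoding quantizes away much of whatever survives the resize.
    buf = BytesIO()
    img.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    Image.open(buf).save(out_path)

launder("poisoned.png", "cleaned.jpg")  # hypothetical file names
```

The point being: this runs in milliseconds per image, while crafting the poison took an optimization loop per image, which is the asymmetry I mean.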
Do you have a source for that claim?