

Which is a container, not an encoding.


I mean, fundamentally, yeah.
But we live in a corporate-controlled, corrupt world, and none of these larger companies can be trusted with this process.
Some smaller communities and platforms DO get this right sometimes, building in-house processes that respect privacy. But governments worldwide are making this impossible through increasingly strict compliance requirements that actually increase data-privacy risks and funnel these needs to third-party services that just lie about what they do with the data.
===========
I’m not kidding when I say this is a REAL BIG PROBLEM.
Bot-based traffic and astroturfing will supplement and replace human communication on platforms like Lemmy, driving the narrative and how we engage according to the whims of a few rich people. Bots are relatively cheap and easy to deploy at scale across many platforms.
There will be no open corner of the internet safe from manipulation and forced division. More people will be forced into walled gardens run by corporations that implement human verification, as they are the only ones with the resources to do something about it (while also being the source of the problem; see how that works?).
How do you carve out spaces that are protected from that? Well, you need to determine who’s a bot and who’s an actual person.
But we can’t do that, so the alternative is we get run over by bots and astroturfing until we’re at each other’s throats like good culture-war puppets.
The future is bleak…


It’s largely considered ineffective these days. Detecting elements that don’t affect layout is trivial, as is detecting elements that are occluded, transparent, etc.
CAPTCHAs are one of the better options, but even then, LLM users bypass those relatively easily, and LLM users are one of the biggest risk areas for astroturfing.
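To make the honeypot point concrete, here’s a minimal, purely illustrative sketch (assumed example code, not any real bot) of why hidden form fields are trivial to detect: a scraper just skips inputs hidden with inline styles. Real bots go further and evaluate stylesheets, rendered layout, and occlusion via a headless browser.

```python
# Illustrative sketch: a bot-side form scanner that skips honeypot
# fields hidden with inline styles. Field names and the sample form
# are made up for the example.
from html.parser import HTMLParser

HIDDEN_MARKERS = ("display:none", "visibility:hidden", "opacity:0")

class FormScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.visible_fields = []

    def handle_starttag(self, tag, attrs):
        if tag != "input":
            return
        a = dict(attrs)
        style = (a.get("style") or "").replace(" ", "").lower()
        # Skip fields that don't affect layout -- the honeypot.
        if any(m in style for m in HIDDEN_MARKERS):
            return
        if a.get("type") == "hidden":
            return
        self.visible_fields.append(a.get("name"))

form = """
<form>
  <input name="username" type="text">
  <input name="email_confirm" type="text" style="display: none">
  <input name="comment" type="text">
</form>
"""

scanner = FormScanner()
scanner.feed(form)
print(scanner.visible_fields)  # only the fields a human would see
```

A few lines of parsing defeat the trap, which is why honeypots mostly catch the dumbest spam scripts and nothing targeted.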


Training is constant. None of these models by any of these providers are static; you’ll notice they release new models and new model versions regularly.
That means training never stops. There’s always new shit being trained.


You’re talking to non-tech nerds about something that usually only tech nerds are familiar with.
Just like on Reddit you’re going to get downvoted because people don’t understand. Lemmy is effectively the same in that regard.
Just having data is cheap. Actually serving that data up in a meaningful way is expensive as fuck.


Yeah, if that’s the case they really kind of fucked themselves by not being strategic in how they announced this.
Announcing it first, so you get all the heat before the date is even released, is a great way to sabotage your mission.


Biggest problem for Lemmy and similar applications is scalability and controls and detection for bots.
The compute costs to operate instances are astronomical compared to the actual user load they receive.
That’s a bit of a side problem compared to bots though. Bots are a real problem that services like this are not equipped to handle.


They are already arriving to some degree.
The difference is that Lemmy and other similar services have zero controls or ability to handle bot traffic, even if those bots were bots from 2014.
Not bots from today.
It’s a bit of a problem, and honestly, with bot traffic increasing across the internet and the fediverse being extremely vulnerable to it, it’s absolutely batshit insane, but I don’t see any option other than some form of human verification.
It’s a problem


This is exactly the kind of thing that government and government regulation should be protecting against and providing remedies for.
After all, the entire point of a government is organized citizenry aligning on how they want society to operate.


Naw, they’ll get legislation passed that carves out extra protections for their robot guard dogs.
These companies own the legislative process, and the government formed and run by the people no longer serves the people; it’s instead a funnel for class traitors to enforce the will of the Epstein class on all of us.


It was a typo. Fixed.


“Child safety” has simply become the marketing department for a rent-seeking surveillance industry.
Damn ain’t that the truth.


They want to identify the bots from the humans. Partially for advertising.


It is, but WSL is also pretty much shit.
I’ve been maining Windows with WSL at work, and it works great, till it doesn’t. And then it just sucks, and sucks, and sucks.
It almost always has to do with processes in WSL not being killed by their connected Windows counterparts. And Docker Desktop, holy hell, Docker Desktop and WSL just love to turn WSL into sludge.
I’ve been fighting with it for years. WSL is an awesome idea, and it works great when it works. But as soon as you put real development loads onto it, it just folds.


I mean, these are effectively very capable text and conversation generators, so powering NPCs is most definitely a strong suit for them.
Especially if you self-host some smaller models, you can effectively just do this on your own hardware for pretty cheap.
Having customizable dialogue per player that shifts tone based on a player’s actions, level, gear, or interactions with that NPC or other NPCs that NPC is associated with is really cool.
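As a rough sketch of how that per-player tone shifting could work: you fold the player’s state into the prompt you send to a self-hosted model. Everything here is a made-up illustration, not any particular game’s or model’s API; the template, field names, and tone thresholds are all assumptions.

```python
# Hypothetical sketch: building a per-player NPC dialogue prompt
# for a locally hosted model. All names and thresholds are invented.

def npc_prompt(npc_name, player):
    # Shift tone based on the reputation the player has built with this NPC.
    if player["reputation"] >= 50:
        tone = "warm and familiar"
    elif player["reputation"] <= -50:
        tone = "hostile and curt"
    else:
        tone = "neutral and wary"

    recent = "; ".join(player["recent_actions"]) or "nothing notable"
    return (
        f"You are {npc_name}, an NPC in a fantasy town. "
        f"Speak in a {tone} tone. "
        f"The player is level {player['level']}, wearing {player['gear']}. "
        f"They recently did: {recent}. "
        f"Reply in character, two sentences max."
    )

player = {
    "reputation": 60,
    "level": 12,
    "gear": "battered iron armor",
    "recent_actions": ["returned the stolen ledger", "paid their tab"],
}
print(npc_prompt("Mara the innkeeper", player))
# The resulting string would then be sent to a locally hosted model
# (e.g. any OpenAI-compatible endpoint) to generate the actual line.
```

The cheap part is that the prompt assembly is just string building; the model only has to turn that state into one in-character line, which even small local models handle fine.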


If it sows division, doesn’t provide a path to discussion, is overtly toxic, is in bad faith, or is flame bait, it gets the downvote.


And instead install a Chromium-based browser, right?


That’ll eventually die the same way Firefox does, because forks only survive by subsidizing their capabilities off the work of the Firefox engineering team.
There is no winning here.


California, Colorado and New York now.
Honestly, it’s getting insane.
Given how many states are pushing legislation like this and how quickly they’re doing it, there’s effectively no way to push back against it…
I do hope they stop this bullshit though.


Or because you’re not using a Chromium-based browser.
Just some classic anti-competitive practices.