I much prefer transparent censorship over subliminal propaganda. The LLM should tell me when the subject is something it’s not wired to talk about instead of spouting propaganda pretending it’s not biased.
And anyway, it’s FOSS. Mess around with it to fit your needs.
I think the baked-in political biases highlight how not “open source” the weights really are without knowledge of the training data. You can bias these systems however you want to and it’s nontrivial or impossible to remove it once it’s there.
LLMs are hallucinating all the time. The idea that people use them to inform their opinion of, e.g., Gaza is crazy to me.
They aren’t search engines.
And then people complain about propaganda as if the software wouldn't lie (provide literally random shit to you) as a fundamental part of the design.