

If “they have to use good data and actually fact-check what they say to people” kills “all machine learning models,” then it’s a death they deserve.
The fact is that you can do the above; it’s just much, much harder (you have to work with data from trusted sources), much slower (you have to actually validate that data), and way less profitable (your AI will be able to answer way fewer questions) than pretending to be the “answer to everything machine.”
That’s how they work now: trained on bad data and designed to always answer with some kind of positive-sounding response.
They absolutely can be trained on validated data, trained to give less confident answers when warranted, and have an error-checking process run on their output after they formulate an answer.
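That last point amounts to a simple pipeline: generate, score your own confidence, verify against trusted sources, and decline when either check fails. Here's a minimal Python sketch of that idea; every function name here (generate_answer, estimate_confidence, verify_against_sources) and the threshold value are hypothetical placeholders, not any real library's API.

```python
# Sketch of a "generate, then verify" loop. All functions below are
# hypothetical stubs standing in for the real (hard) work.

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff; tuning this is the hard part


def generate_answer(question: str) -> str:
    """Placeholder: the model's raw answer."""
    raise NotImplementedError


def estimate_confidence(question: str, answer: str) -> float:
    """Placeholder: a calibrated score in [0, 1]."""
    raise NotImplementedError


def verify_against_sources(answer: str) -> bool:
    """Placeholder: check the answer's claims against validated data."""
    raise NotImplementedError


def answer_or_decline(question: str) -> str:
    answer = generate_answer(question)
    # Refusing instead of guessing is exactly what makes this slower
    # and less profitable: the system answers fewer questions.
    if estimate_confidence(question, answer) < CONFIDENCE_THRESHOLD:
        return "I'm not confident enough to answer that."
    if not verify_against_sources(answer):
        return "I couldn't verify that answer against trusted sources."
    return answer
```

The design choice is the point: every path that doesn't pass both checks returns a refusal rather than a confident-sounding guess.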