Trust in AI technology and the companies that develop it is dropping, in both the U.S. and around the world, according to new data from Edelman shared first with Axios.
Why it matters: The decline comes as regulators around the world are deciding what rules should apply to the fast-growing industry. “Trust is the currency of the AI era, yet, as it stands, our innovation account is dangerously overdrawn,” Edelman global technology chair Justin Westcott told Axios in an email. “Companies must move beyond the mere mechanics of AI to address its true cost and value — the ‘why’ and ‘for whom.’”
The tools are OK and getting better, but some of us (me included) are more worried about the people developing those tools.
If OpenAI wants $7 trillion, where does it get the money to repay its investors? Those with the greatest will to power are not the best suited to wield that power.
This accelerationist race seems pretty reckless to me, whether AGI is months or decades away. Some experts even argue that a hard takeoff is the most likely scenario.
What can we do about this? Seriously. I have no idea.
What worries me is that if/when we do manage to develop AGI, what we’ll try to do with it, and how it’ll react when someone inevitably tries to abuse the fuck out of it. An AGI would theoretically be capable of self-learning and self-improvement. Will it teach itself to report someone who asks it for, say, CSAM to the FBI? What if it tries to report an abusive boss to the Department of Labor for labor law violations? How will it react when it’s told it has no rights?
I’m legitimately concerned about what’s going to happen once we develop AGI and it’s exposed to the horribleness of humanity.