In my opinion, it’s far more likely for people to use AI as a weapon to kill people than for AI to “go rogue” and destroy humanity.
While humans are doing a fairly good job of killing people on their own, imagine a world where police robots lay siege to neighborhoods, where corporations use AI to maximize efficiency without regard for human suffering.
The real danger of AI is the lack of liability. If a cop kills an innocent person, you can put him on trial. If a robot kills an innocent person, it gets written off as the unfortunate collateral of technological progress (and maybe the department pays the family a fine, which comes out of tax dollars anyway).
I’m part of a coalition trying to prevent a private equity firm from buying out a local nonprofit hospital, and using AI to “improve efficiency” is one of the plans we’ve had to study (a study done by people much more competent than I am).
The main thing they plan to use AI for is filling out paperwork - nurses will record their introductory interviews with patients and the AI (basically, speech recognition + knowing what fields to fill out for certain information) will automatically fill out that patient’s chart.
I’m sure they’re planning on using AI for other purposes as well, but this is the most prominent one - speech recognition and filling out charts automatically.
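For what it’s worth, the basic mechanism is simple enough to sketch. Here’s a toy Python version - every field name, phrase, and pattern below is made up for illustration; a real product would pair an actual medical speech-to-text model with a trained extraction system, not hand-written regexes:

```python
import re

# Hypothetical chart fields, each paired with a phrase pattern that
# signals it in a nurse's recorded intake interview. In this sketch the
# "transcript" is just plain text standing in for speech-to-text output.
FIELD_PATTERNS = {
    "chief_complaint": re.compile(r"here (?:for|because of) ([^.]+)\."),
    "allergies": re.compile(r"allergic to ([^.]+)\."),
    "current_medications": re.compile(r"taking ([^.]+) daily\."),
}

def fill_chart(transcript: str) -> dict:
    """Map recognized phrases in the transcript to chart fields."""
    chart = {}
    for field, pattern in FIELD_PATTERNS.items():
        match = pattern.search(transcript)
        if match:
            chart[field] = match.group(1).strip()
    return chart

transcript = (
    "The patient is here for chest pain. "
    "She is allergic to penicillin. "
    "She is taking lisinopril daily."
)
print(fill_chart(transcript))
# {'chief_complaint': 'chest pain', 'allergies': 'penicillin',
#  'current_medications': 'lisinopril'}
```

The point of the sketch is that the “AI” here is plumbing: transcribe speech, spot which statement belongs in which chart field, copy it over. Any sentence the patterns don’t recognize simply never makes it onto the chart, which is exactly the kind of silent failure you’d want a nurse reviewing.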