AI can be used in automated solutions, and it doesn’t intrinsically have to be supervised. Whether or not it is actually intelligent is irrelevant - it can still cause harm.
> [Artificial intelligence] can be used in automated solutions and it doesn’t intrinsically have to be supervised. It being or not being intelligent is irrelevant
It’s relevant because, when people talk about “AI” that isn’t actually intelligent (i.e., all AI), they’re being incoherent. What exactly are they talking about? Computers in general? It’s just noise, spam, etc.
If your objection is that AI “isn’t actually intelligent,” then you’re just being pedantic and your objection has no substance. Replace “AI” with “systems built on machine learning whose inner workings we don’t fully understand” if you need to.
Did you watch the video? Do you have any familiarity with how AI technologies are being used today? At least one of those answers must be a no for you to have thought that the video’s message was incoherent.
Let me give you an example. As part of the ongoing conflict in Gaza, Israel has been using AI systems nicknamed “the Gospel” and “Lavender” to identify Hamas militants, their associates, and the buildings they operate from. A human analyst then rubber-stamps this information, and unguided missiles are sent to the identified location, often destroying entire buildings (filled with other people, generally the target’s family) to kill the identified target.
There are countless incidents of AI being used without sufficient oversight, often resulting in harm to someone - the general public, minorities, or even the business that put the AI in place.
The paperclip video is a cautionary tale against giving an AI system too much power or not enough oversight. That warning is relevant today, regardless of the precise architecture of the underlying system.