These are the subtle kinds of errors that are much more likely to cause problems than when it tells someone to put glue on their pizza.
Obviously you need hot glue for pizza, not the regular stuff.
It do be keepin the cheese from slidin off onto yo lap tho
You’re giving humans too much credit in the intelligence department… there are people who literally drove into lakes because a GPS told them to…
Yes, it’s totally OK to reuse fish tank tubing for grammy’s oxygen mask
Wait… why can’t we put glue on pizza anymore?
because the damn liberals canceled glue on pizza!
How else are the toppings supposed to stay in place?
And this technology is what our executive overlords want to replace human workers with, just so they can raise their own compensation and pay the remaining workers even less
So much this. The whole point is to annihilate entire sectors of decent paying jobs. That’s why “AI” is garnering all this investment. Exactly like Theranos. Doesn’t matter if their product worked, or made any goddamned sense at all really. Just the very idea of nuking shitloads of salaries is enough to get the investor class to dump billions on the slightest chance of success.
Exactly like Theranos
Is it though? This one is an idea that could literally destroy the economic system. That seems like an important detail to ignore.
Current gen AI can’t come close to destroying the economy. It’s the most overhyped technology I’ve ever seen in my life.
You’re missing the point. They aim to replace most or all jobs. For that to be possible, it will need investment, and it will need to get a lot better. If that happens, we’ll see a worldwide inability to make a living. It will likely have a negative impact even on the rich bastards.
There’s an upper ceiling on capability though, and we’re pretty close to it with LLMs. True artificial intelligence would change the world drastically, but LLMs aren’t the path to it.
Yeah, I never said this is going to happen. All I was commenting on is how it’s ironic that the people investing in destroying jobs are too myopic to realize that would be bad for them too.
They always miss this part. It’s (part of) why Republicans wanting to be Russian-style oligarchs is so insane, along with their disregard for good-faith government and the rule of law.
Do they KNOW what happens to Russian oligarchs? Why do they think they’re immune to that part of it? Do they really want the cutthroat politics of places like Russia and Africa, where they constantly have to watch their backs?
These people already have money. Their aims, if achieved, will not make their lives better.
Many years ago the people who ruled this country figured out that the best thing for them was to spread power and have most civilians in good health. Government by committee and good faith government is less about ethical treatment of citizens (though I appreciate the side effect) and more about protecting the committee and/or the would be dictator.
Ah, I misunderstood then, sorry. But still, even with all the investment in the world, LLMs are a bubble waiting to burst. I have a hunch we will see truly world-altering technology in the next ~20 years (the kind that’d put huge swathes of people out of work, as you describe), but this ain’t it.
This is the kind of shit that makes Idiocracy the most weirdly prophetic movie I’ve ever seen.
Ignoring the blatant eugenics of the very first scene, I’d rather live in the Idiocracy world, because at least the president, for all his machismo and grandstanding, was still humble enough to put the smartest guy in the room in charge of actually getting plants to grow.
My takeaway from that was that the poorly educated had more kids.
Giving the benefit of the doubt, I can see that reading, but it definitely implied that stupidity is genetic because of how big the stupid family’s tree gets, and the sci-fi story it was based on was a looooot more explicit about the eugenics.
yeah, I honestly am expecting to die in a camp at this point.
Hope for the best but prepare for the worst!
This, combined with the meteoric rise of fascism, absolutely leaves me thinking that I’ll probably end up in a concentration camp.
I am starting to think Google put this up on purpose to destroy people’s opinion of AI. They are so far behind OpenAI that they would benefit from it.
I doubt there’s any sort of 4D chess going on; more likely the whole thing was brought about by short-sighted executives who feel like they have to do something to show that they’re still in the game, precisely because they’re so far behind "Open"AI.
It could happen without any 4D-chess thinking: they try, they realize they failed, but they also realize they win either way.
This shit is so bad that even a blind guy can see it.
You severely underestimate the shortsightedness of the executive class. They’re usually so convinced of their infallibility that they absolutely will make decisions that are obviously terrible to anyone looking in from the outside
Conspiratorial thinking at its finest.
Yes, because this whole thing is incredibly stupid; how could they not see it? The saying, of course, is "never attribute to malice what can be attributed to incompetence," but holy shit, how incompetent can a two-trillion-dollar company be?
It blows my mind that these companies think AI is good as an informative resource. The whole point of a generative text AI is to make things up based on its training data. It doesn’t learn, it generates. It’s all made up, yet they want to slap it on a search engine like it provides factual information.
Yeah, I use ChatGPT fairly regularly for work. For a reminder of the syntax of a method I used a while ago, and for things like converting JSON into a class (which is trivial to do, but using ChatGPT for it saves me a lot of typing; see the sketch below), it works pretty well.
But I’m not using it for precise and authoritative information, I’m going to a search engine to find a trustworthy site for that.
Putting the fuzzy, usually-close-enough (but sometimes not!) answers at the top when I’m looking for a site that’ll give me a concrete answer just mixes two different use cases for no good reason. If Google wants to get into the AI game, they should have a separate page from the search page for that.
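To make the JSON-to-class thing concrete, here’s a minimal sketch of the kind of boilerplate I mean (the payload shape and names are invented for the example, and you still eyeball the output before committing it):

```csharp
using System.Text.Json.Serialization;

// Given a hypothetical payload like:
//   {"id": 42, "name": "Widget", "price": 9.99}
// the model types out the mechanical mapping for you:
public class Product
{
    [JsonPropertyName("id")]
    public int Id { get; set; }

    [JsonPropertyName("name")]
    public string Name { get; set; } = "";

    [JsonPropertyName("price")]
    public decimal Price { get; set; }
}
```

Trivial, like I said, but it’s exactly the kind of typing I’m happy to offload.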
Yeah it’s damn good for translating between languages, or things that are simple in concept but drawn out in execution.
Used it the other day to translate a complex EF method-syntax statement into query syntax. It got it mostly right (it did need some tweaking), but it saved me about 10 minutes of humming and hawing to make sure I did it correctly (honestly, I don’t use query syntax often.)
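For anyone who doesn’t bounce between the two LINQ styles much, this is roughly the translation I mean, shrunk down to a toy example (the real statement was EF, but the two syntaxes line up the same way; all names here are made up):

```csharp
using System;
using System.Linq;

record Order(int Id, string CustomerName, decimal Total);

class LinqSyntaxDemo
{
    static void Main()
    {
        var orders = new[]
        {
            new Order(1, "Ada", 150m),
            new Order(2, "Bob", 60m),
        };

        // Method syntax: what I started with.
        var methodStyle = orders
            .Where(o => o.Total > 100)
            .OrderBy(o => o.CustomerName)
            .Select(o => new { o.Id, o.Total });

        // Query syntax: what ChatGPT translated it to.
        // The compiler rewrites this into the method calls above.
        var queryStyle =
            from o in orders
            where o.Total > 100
            orderby o.CustomerName
            select new { o.Id, o.Total };

        foreach (var row in queryStyle)
            Console.WriteLine(row);
    }
}
```

The tweaking was mostly around the parts of the real query that don’t map one-to-one, which is exactly where you have to actually read what it gave you.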
They give zero fucks about their customers, they just want to pump that stock price so their RSUs vest.
This stuff could give you incurable highly viral brain cancer that would eliminate the human race and they’d spend millions killing the evidence.
True, and it’s excellent at generating basic lists of things. But you need a human to actually direct it.
Having Google just generate whatever text is like mashing the keys on a typewriter: you have tons of perfectly formed letters that mean nothing. They make no sense because a human isn’t guiding them.
It’s like the difference between being given a grocery list from your mum and trying to remember what your mum usually sends you to the store for.
… Or calling your aunt and having her yell things at you that she thinks might be on your Mum’s shopping list.
That could at least be somewhat useful… It’s more like grabbing some random stranger and asking what their aunt thinks might be on your mum’s shopping list.
… but only one word at a time. So you end up with:
- Bread
- Cheese
- Cow eggs
- Chicken milk
I mean, it does learn, it just lacks reasoning, common sense or rationality.
What it learns is which words should come next, with a very complex and nuanced way of deciding that can very plausibly mimic the things it lacks, since the best sequence of next words is very often coincidentally reasoned, rational, or demonstrating common sense. Sometimes it’s just lies that fit the form of a good answer, though.

I’ve seen some people work on using it the right way, and it actually makes sense. It’s good at understanding what people are saying and what type of response would fit best. So you let it decide that, and give it the ability to direct people to the information they’re looking for, without actually trying to reason about anything. It doesn’t know what your monthly sales average is, but it does know that a chart of data from the sales system, filtered to your user, a specific product, and a time range, is a good response in this situation.
The only issue for Google insisting on jamming it into the search results is that their entire product was already just providing pointers to the “right” data.
What they should have done was leave the "information summary" stuff to their "quick fact" lookup role, let it look only at Wikipedia and curated lists of trusted sources (Mayo Clinic, CDC, National Park Service, etc.), and then give it the ability to ask clarifying questions about searches, like "are you looking for product recalls, or recall as a product feature?", which would then disambiguate the query.
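Here’s a toy sketch of that "let it route, don’t let it answer" pattern, since it’s easier to see in code. Everything here is invented (the tool names, the fake model call); the point is just that the model only ever picks a handler and its parameters, and every fact in the reply comes from a trusted system:

```csharp
using System;
using System.Collections.Generic;

// The model's only job: map a question to a tool + arguments.
record RouteDecision(string Tool, Dictionary<string, string> Args);

class LlmRouterSketch
{
    // Stand-in for a real LLM call that returns structured output,
    // e.g. {"tool":"sales_chart","args":{"product":"widgets","range":"30d"}}.
    static RouteDecision AskModel(string question) =>
        new("sales_chart",
            new Dictionary<string, string>
            {
                ["product"] = "widgets",
                ["range"] = "30d",
            });

    // The trusted side: a real query against the sales system,
    // not model-generated text.
    static string SalesChart(string product, string range) =>
        $"[chart: {product} sales, last {range}, filtered to current user]";

    static void Main()
    {
        RouteDecision decision = AskModel("What's my monthly widget average?");

        string reply = decision.Tool switch
        {
            "sales_chart" => SalesChart(decision.Args["product"],
                                        decision.Args["range"]),
            // Unknown intent: ask a clarifying question instead of
            // letting the model improvise an answer.
            _ => "Are you looking for sales data, or something else?",
        };

        Console.WriteLine(reply);
    }
}
```

The same shape covers the disambiguation idea: an ambiguous route just returns the clarifying question instead of a handler result.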
It really depends on the type of information you’re looking for. Anyone who understands how LLMs work will understand when they’ll get a good overview.
I usually see the results as quick summaries from an untrusted source. Even if they aren’t exact, they can help me get perspective. Then I know what information to verify if something relevant was pointed out in the summary.
Today I searched something like “Are owls endangered?”. I knew I was about to get a great overview because it’s a simple question. After getting the summary, I just went into some pages and confirmed what the summary said. The summary helped me know what to look for even if I didn’t trust it.
It has improved my search experience… but I do understand that people would prefer it be 100% accurate, because it is a search engine. If you refuse to tolerate inaccurate results, or you feel your search experience is worse, you can just disable it. Nobody is forcing you to keep it.
I think the issue is that most people aren’t that bright and will not verify information like you or me.
They already believe every facebook post or ragebait article. This will sadly only feed their ignorance and solidify their false knowledge of things.
The same people who didn’t understand that Google’s algorithm ranks sites based on SEO, not the accuracy of their content, so they would trust the first page.
If people don’t understand the tools they’re using and don’t double-check information from single sources, I think it’s kinda on them. I have a dietician friend, and I usually get back to him after doing my "Google research" for my diets… so much misinformation, even without an AI overview. Search engines are just best-effort sources of information. Anyone using Google for anything of actual importance is using the wrong tool; it isn’t a scholarly or research search engine.
you can just disable it
This is not actually true. Google re-enables it and does not have an account setting to disable AI results. There is a URL flag that can do this, but it’s not documented and requires a browser plugin to do it automatically.
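For anyone hunting for it: the flag people pass around (undocumented, so no guarantees it keeps working) is `udm=14`, which pins results to the plain "Web" tab with no AI overview:

```
https://www.google.com/search?q=example+query&udm=14
```

The browser-plugin trick is just rewriting every search URL to append that parameter automatically.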
Could this be grounds for CVS to sue Google? Seems like this could harm business if people think CVS products are less trustworthy. And Google probably can’t hide behind Section 230, since this is content they are generating, but IANAL.
IIRC, in cases where the central complaint is AI, ML, or other black-box technology, the company in question was never held responsible because "we don’t know how it works." The AI surge we’re seeing now is likely a consequence of those decisions and the crypto crash.
I’d love to see CVS try to push a lawsuit, though.
In Canada there was a company using an LLM chatbot that had to honour a claim the bot had made to one of their customers. So there’s precedent for forcing companies to take responsibility for what their LLMs say (at least if they’re presenting it as trustworthy and representative).
This was with regards to Air Canada and its LLM that hallucinated a refund policy, which the company argued they did not have to honour because it wasn’t their actual policy and the bot had invented it out of nothing.
An important side note is that one of the cited reasons the court ruled in favour of the customer is that the company did not disclose that the LLM wasn’t the final say on its policy, and that a customer should confirm with a representative before acting on the information. This means the legal argument wasn’t "the LLM is responsible" but rather "the customer should be informed that the information may not be accurate."
I point this out because I’m not so sure CVS would have a clear cut case based on the Air Canada ruling, because I’d be surprised if Google didn’t have some legalese somewhere stating that they aren’t liable for what the LLM says.
But those end up being the same in practice. If you have to put up a disclaimer that the info might be wrong, then who would use it? I can get a wrong answer or unverified hearsay anywhere. The whole point of contacting the company is to get the right answer, or at least one the company is forced to stick to.
This isn’t just minor AI growing pains, this is a fundamental problem with the technology that causes it to essentially be useless for the use case of “answering questions”.
They can slap as many disclaimers as they want on this shit, but if it just hallucinates policies and incorrect answers, it will end up being one more thing people hammer 0 to skip past, or scroll past to talk to a human or find the right answer.
But it has to be clearly presented. Consumer law and defamation law have different requirements for disclaimers.
Yeah, the legalese happens to be in the back pocket of Sundar Pichai. ???
“We don’t know how it works but released it anyway” is a perfectly good reason to be sued when you release a product that causes harm.
I would love it if lawsuits brought down the shit that is AI. It has a few uses, to be sure, but overall it’s crap for 90+% of what it’s used for.
IIRC Air Canada had to pay
That was their own AI. If CVS’ AI claimed a recall, it could be a problem.
So will the google AI be held responsible for defaming CVS?
Spoiler alert- they won’t.
The crypto crash? Idk if you’ve looked at Crypto recently lmao
Current froth doesn’t erase the previous crash. It’s clearly just a tulip bulb. Even tulip bulbs were able to be traded as currency for houses and large purchases during tulip mania. How much does a great tulip bulb cost now?
67k, only barely away from its ATH
deleted by creator
People been saying that for 10+ years lmao, how about we just see what happens.
deleted by creator
67k what? In USD right? Tell us when buttcoin has its own value.
Are AI products released by a company liable for slander? 🤷🏻
I predict we will find out in the next few years.
So, maybe?
I’ve seen some legal experts talk about how Google basically got away from misinformation lawsuits because they weren’t creating misinformation, they were giving you search results that contained misinformation, but that wasn’t their fault and they were making an effort to combat those kinds of search results. They were talking about how the outcome of those lawsuits might be different if Google’s AI is the one creating the misinformation, since that’s on them.
Yeah, the Air Canada case probably isn’t a big indicator of where the legal system will end up on this. The guy was entitled to some money if he submitted the request on time, and the reason he didn’t was that the chatbot gave him the wrong information. It’s the kind of case that shouldn’t have gotten to a courtroom, because come on, you’re supposed to give him the money anyway; it’s just a paperwork screwup caused by your chatbot that created this whole problem.
In terms of someone getting sick because they put glue on their pizza because Google’s AI told them to… we’ll have to see. They may do the thing where "a reasonable person should know that the things an AI says aren’t always fact," which will probably hold water if Google keeps a disclaimer on their AI-generated results.
They’re going to fight tooth and nail to do the usual: remove any responsibility for what their AI says and does, but do everything they can to keep any money an AI error generates.
Slander is spoken. In print, it’s libel.
- J. Jonah Jameson
That’s ok, ChatGPT can talk now.
At the least it should have a prominent "for entertainment purposes only" label, except it fails that purpose, too.
I think the image generators are good for generating shitposts quickly. Best use case I’ve found thus far. Not worth the environmental impact, though.
Tough question. I doubt it, though. I would guess they would have to prove malicious intent in some form. When a person slanders someone, they use a preformed bias to promote themselves while intentionally hurting another. You can argue the training data contained a bias, that the LLM promotes itself by being a constant source of information users draw from (and therefore makes money), and that in theory it is hurting the company. Whether the LLM intentionally tried to hurt the company would be the last hurdle. All of those arguments have holes. If I were the judge or jury and you handed me those points to decide, I would say it isn’t beyond a reasonable doubt.
If you’re a startup, I guarantee it is.
Big tech… I’ll put my chips on hell no.
Yet another nail in the coffin of rule of law.
🤑🤑🤑🤑
Slander/libel nothing. It’s going to end up killing someone.