Do you have any basis for this assumption, FaceDeer?
Based on your pro-AI-leaning comments in this thread, I don’t think people should accept defeatist rhetoric at face value.


Do you have any reason to believe the opposite?


Thank you for the write-up. I was wondering why these comments were removed too before looking at the account and realizing the whole thing was gone.
Is there any way to view comments removed by moderators without wading into the modlogs, or do servers simply respect those decisions?


Torvalds doesn’t want AI-generated submissions to the Linux kernel because
the AI slop people aren’t going to document their patches as such. That’s such an obvious truism that I don’t understand why anybody even brings up AI slop.
He’s right, and this should be obvious. I have seen many a conversation between somebody who filed an AI-generated bug report and a developer trying to diagnose it, where it’s clear the filer has no idea what they’re talking about.


More news sites need to follow up on AI companies failing to meet their own tepid promises to “add guardrails” (the most meaningless phrase in existence) while they continue to allow avoidable harm.


Nvidia has aggressively rebutted suggestions of any similarity [to failed telecom Lucent], saying in a leaked recent memo that it “does not rely on vendor financing arrangements to grow revenue”.
…Saying in a memo that was suspiciously, conveniently leaked and just so happens to claim everything is fine.


[citation needed]
This sounds like an opinion from the LinkedIn echo chamber.


Microsoft could even push its AI summaries as a RAM-friendly alternative to visiting web pages.


“It sounds like you want low-end devices to be turned into thin clients for cloud-based operating systems. Do I have that right?”


They could both be right… From a certain point of view.
Within FAIR, LeCun has instead focused on developing world models that can truly plan and reason. Over the past year, though, Meta’s AI research groups have seen growing tension and mass layoffs as Zuckerberg has shifted the company’s AI strategy away from long-term research and toward the rapid deployment of commercial products.
LeCun says current AI models are a dead end for progress. I think he’s correct.
Zuckerberg appears to believe long term development of alternative models will be a bigger money drain than pushing current ones. I think he’s correct too.
It looks like two guys arguing about which dead end to pursue.


The only reason I’m gonna be smart enough to bring water to concerts is because I read this thread.


It’s a reference to Arnold Palmer, whose estate tried (or threatened?) to sue them after they used the name “Armless Palmer” for a flavor.
Of course other billionaires would be thin-skinned enough to feel offended by that…


Where are all those Christians who believe digital ID is the mark of the Antichrist?


It’s always interesting seeing the line people will draw between what they see as art vs product. I would be disappointed by anyone who tricked me into listening to theft-generated music, whether people consider it legitimate art or not.


Alex Karp thinks people only care about one kind of surveillance. And he thinks he will alleviate our fears if he gives us a pinky promise not to surveil us in that one way.
That way is cheating.
He later brings this up again, saying that most surveillance technology isn’t determining, “Am I shagging too many people on the side and lying to my partner?” Your guess is as good as any as to what that’s all about.
Well, thanks for clearing that up, Alex. That was indeed my sole concern.
(The rest of the article is full of indecipherable quotes from Alex, which demonstrates you don’t need to be smart to be rich.)


It’s a win-win with staff layoffs. Businesses that want to lay people off have a convenient scapegoat and AI companies receive undeserved praise.
A win-win for everyone but the employees, of course.


I thought the government just banned any regulation against AI companies. The inconsistency doesn’t surprise me, but the brazenness sure does.


What’s the deal with the “HPE” in some Register articles? It’s apparently the Hewlett-Packard Enterprise logo, but articles about HPE don’t appear to have that logo.
Is The Register affiliated with HPE now?


AI companies are definitely aware of the real risks. It’s the imaginary ones (“what happens if AI becomes sentient and takes over the world?”) that I imagine they’ll put the money towards.
Meanwhile they (intentionally) fail to implement even a simple cutoff switch for a child that’s expressing suicidal ideation. Most people with any programming knowledge could build a decent interception tool. All this talk about guardrails seems almost as fanciful.
The air quality in Tennessee would disagree with you about datacenter waste products…