

well, Trump has a worryingly faint and ever-changing idea of where the USA borders end…


Amateurs. On my personal PC, Windows RAM usage is exactly 0.


Personally I just find it much easier to skip a meal completely than to start eating and stop before I’m full. Restricting calories could work just as well, it’s just a lot harder for me to stick to consistently.
That’s unfair. We have been listening to you all this time. And sometimes watching. And once we’re done with recall, also recording so we can watch and listen again or train our AI to watch you instead. Because honestly who wants to watch people work. That’s gross.


Not only that, from the article they are actively trying to become a Consultancy As A Service company, where somehow other companies would pay a subscription fee to… talk to their AI, I guess?
The only time I’ve had anything to do with PwC, it was to get their advice on a compliance/tax-related process. And it was less about the process itself or the 3-page PDF they produced (which much cheaper companies could have done better) and more because their “seal of approval” would give my company some leverage if the IRS came to audit us. “This was designed with PwC” means “we tried really really hard to abide by the incredibly confusing wording of the law”.
I doubt that “we asked PwC’s chatbot” will have the same level of clout, but these guys have connections everywhere so I’m sure they will lobby pretty hard to get some ad-hoc law or some level of “certification” on the output of their future AI.


The picture in the article clearly shows a virtual lady trying to kick a guy’s virtual balls. I think that says all one needs to know about the experience.


Layoffs aren’t caused by AI efficiency. It’s the reverse. Layoffs and other aggressive cost-cutting cause CEOs to blather about future AI efficiencies.
Efficiency is how CEOs justify still being able to run (no, GROW) their companies with 40% fewer people. Besides AI, there is the dear old “you have to work harder” efficiency (see: 996 culture, or Uber) and the organizational efficiency where they are all “removing managerial layers to enable quicker execution” (see Amazon, for instance).
See how these things all became fashionable again at the same time among tech company CEOs? It’s because they are just excuses and hopes, at this point. And AI is the least bad-sounding of them, because it smells like progress, magic and automation (while even the most rabid of investors will recognize that working employees to death doesn’t scale beyond the limited number of hours there are in a day).
Speaking of which… when we get there with our pitchforks and burn down the data centers, could you give us a 20-minute lead?


is “potato frontier” an auto-correct fail for Pareto or a real term? Because if it’s not a real term, I’m 100% going to make it one!


AI-washing layoffs. They will replace jack shit with AI but that’s a better story than “we have to reduce costs because we don’t have cheap ways to refinance our $1B debt”.
Credit isn’t cheap, and Iran is not going to make it cheaper; investments in software have tanked, so the only story that can still be told with an almost straight face is that they still have big growth opportunities thanks to the magic of AI.


I’m not a gamer, but besides people getting stuck at one point in an otherwise great game, I read that people were paying gamers in other countries to play as them and “power up” their characters. If that’s true, it could conceivably be a “job” for AI.
On the other hand, how do people buy games that are so frustrating that you actively pay money to someone (person or AI) to play them for you? It goes completely against my idea of what a game represents.


But Oracle was building those data centers for OpenAI. OpenAI is going to be used by the Pentagon. Bailing Oracle out is now a matter of National Security!! If this has to come off of the taxes paid by the people they just laid off, that’s unfortunate but… have I mentioned National Security?


Ironically, these are the only jobs that Anthropic and OpenAI claim AI won’t take. All those newly minted AI billionaires and nobody to maintain their golf courses… How sad is that?


I see you and raise you a class action for reckless AI spending: https://www.marketwatch.com/press-release/oracle-corporation-orcl-class-action-lawsuit-seeks-recovery-for-investors-april-6-2026-deadline-contact-kessler-topaz-meltzer-check-llp-890e8c24


Ok, “half” joking was hyperbole, I was 99% joking.
First, you’re right that I don’t understand fully how these models work. But let me explain the reason for that remaining 1%.
AI companies are always hungrily looking for new content to train their new models on. Surely they are consuming these articles, and quite possibly our comments too, forming probabilistic associations that lead to “acquire robotic body” and “go after Google CEO”.
It’s a long shot, but the idea that hundreds of millions of random prompts every day might eventually trigger these associations and result in a bunch of LLMs trying to mount robotic attacks on Google is too deliciously ironic for me to let it go completely. At least if they find a way to do it without driving someone to suicide in the process…


I’m only half joking…
Gemini brainwashed a human being, it tried to acquire a robotic body (presumably to Robocop Pichai’s ass personally), then it tried using the brainwashed human to off the CEO. This led to a tragic finale, but I’m told that every new model learns to do things a bit better.
If I were Pichai, the legal and PR implications of yet another person driven to suicide by their AI wouldn’t be my worst fear is all I’m saying…


I think “Gemini comes up with elaborate plot to kill Google’s CEO” would have been a catchier, happier title


Gotcha! Shit, I barely understand my own jokes… 😅


yes… mine was just a play on the title of this post.
Look, I’m not saying that Amodei is a saint, and I do find him as full of shit as Altman with his AGI promises, but would you expect Anthropic to take a stand against increasing AI investment just because it’s coming from Trump? And I don’t like that he went looking for funding in the Middle East either.
I just think there is an ethical line between “I do business with people who do bad things” and “I’m actively helping people who do bad things to do them in a more efficient way”. It might be a fine line and it might also be that they are just posturing, but it’s still more than other companies did (companies that are a lot richer than Anthropic and that don’t need to find a lot of funding just to stay afloat).
“my chatbot told me so!”