

Why did you read a sequel to a book you hated?


Wow, I enjoyed this one back in the day. It advertised itself as a self-contained fantasy novel, no prequels, sequels, or other junk, which was mostly what sold me on picking it up.


They don’t care about making a profit by selling a product or a service; it’s all a speculative bet. They think that if they “simply” make AGI they will win the entire economy forever, and that there is no second place.


Everyone can see it coming, but they believe the AI companies’ hype that the AGI breakthrough will be here “soon”, which, if actually true, might be worth the bet.
For my money, either they hit AGI and then we all die, or there is a crash before that. Yay.


It’s only pervasive because the AI companies are losing money on every generated token while burning investor money to keep the lights on. If people had to pay what it really costs, they’d be using it a lot less.


And even if they solve some problems with AI and make the models smarter, they still have to solve the “actually making a profit” problem to justify these share prices. LLMs already have some uses at their current level, but certainly not at the price they’d need to charge to break even, let alone turn a profit. If they double the smarts but also double the training and/or inference cost, they’ll end up in the same place.


As I already said, it’s impossible to time it, and you’d be an idiot to try. There could be three more years of bubble first, in which case shorting on margin would be ruinous. “Markets can remain irrational a lot longer than you and I can remain solvent”, yada yada.


I can say that Nvidia is way overvalued and that its share price is going to go down without saying that we won’t need powerful chips.


I have similar concerns; comparing GDP to valuation seems nonsensical. But at the same time, the valuation is still ludicrous. Nvidia designs chips, TSMC makes them, datacenters buy them, and datacenters sell the compute to AI vendors like OpenAI, who sell services to customers for a price that doesn’t cover even a fraction of their costs, let alone turn a profit.
In my book, one of two things will happen. Either the AI companies hit their stated goal of AGI before the money runs out, without having done any of the safety work, and then everybody dies. Or the money runs out first, and GFC 2.0 is the “good” ending. If I were even remotely confident in my ability to guess the timing of how it would all play out, I’d be shorting up to my eyeballs.


I downloaded the entirety of Wikipedia as of 2024 to use as a reference for “truth” in the post-slop world. Maybe I should grab the 2022 version as well, just in case…


The concept of a list comprehension is simple, but the syntax is awful, as if Yoda was writing a for loop. “x for x in y it is, hmm yes”.
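
To make the Yoda order concrete, here’s a minimal Python sketch (names are illustrative): in a comprehension the result expression comes first and the loop clause second, which is exactly backwards from how the equivalent for loop reads.

    ys = [1, 2, 3]

    # List comprehension: result expression first, loop clause second.
    # "x * x for x in ys it is, hmm yes"
    squares = [x * x for x in ys]

    # The equivalent for loop, written in the order you'd say it out loud.
    squares_loop = []
    for x in ys:
        squares_loop.append(x * x)

    assert squares == squares_loop == [1, 4, 9]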