  • But I don’t think even that is the case, as they can essentially just “swap out” the video they’re streaming

    You’re forgetting that the “targeted” component of their ads (while mostly bullshit) is an essential part of their business model. To do what you’re suggesting they’d have to create and store thousands of different copies of each video, to account for all the different possible combinations of ads they’d want to serve to different customers.
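
    To put rough numbers on that, here’s a minimal Python sketch, with made-up slot and ad counts, of how fast the “pre-render every combination” approach blows up:

    ```python
    # Rough sketch of why pre-baking targeted ads into video copies
    # doesn't scale. The counts below are made up for illustration.

    ad_slots_per_video = 3      # e.g. pre-roll, mid-roll, post-roll
    eligible_ads_per_slot = 50  # ads in rotation for a given video

    # Each slot can independently hold any eligible ad, so the number of
    # distinct pre-rendered copies grows exponentially with slot count:
    copies_needed = eligible_ads_per_slot ** ad_slots_per_video

    print(f"{copies_needed:,} copies of a single video")  # 125,000 copies
    ```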



  • Comparatively speaking, a lot less hype than their earlier models produced. Hardcore techies care about incremental improvements, but the average user does not. If you try to describe to the average user what is “new” about GPT-4, other than “It fucks up less”, you’ve basically got nothing.

    And it’s going to carry on like this. New models are going to get exponentially more expensive to train while producing less and less consumer interest each time, because “Holy crap, look at this brand new technology” will always be more exciting than “In our comparative testing, version 7 is 9.6% more accurate than version 6.”

    And for all the hype, the actual revenue just isn’t there. OpenAI are bleeding around $5-10bn (yes, with a b) per year. They’re currently trying to raise around $11bn in new funding just to keep the lights on. It costs far more to operate these models (even at the steeply discounted compute rates Microsoft gives them) than anyone is actually willing to pay to use them. Corporate clients don’t find them reliable or adaptable enough to actually replace human employees, and regular consumers think they’re cool, but in a “nice to have” kind of way. The product isn’t essential enough to command big money, but it can only be run profitably by charging big money.





  • It’s the second one. They are all in on this AI bullshit because they’ve got nothing else. There are no other exponential growth markets left. Capitalism has gotten so autocannibalistic that simply being a global monopoly in multiple different fields isn’t good enough. For investors it’s not about how big your company is, how reliable your yearly returns are, or how stable your customer base is; the only thing that matters is how fast your business is growing. But small businesses have no space to grow because of the monopolies filling every available space, and the monopolies are already monopolies. There are no worlds left to conquer. They’ve already turned every single piece of our digital lives into a subscription, blockchain was a total bust, the metaverse was a demented fever dream, and VR turned out to be a niche toy at best; unless someone comes up with some brand new thing that no one has ever heard of before, AI is the last big boondoggle they have left to hit the public with.



  • Personally I think it’d be interesting to see this per capita, so here’s my back-of-the-napkin math for data centers per 1 million population (c. 2022), with a quick sketch of the calculation after the list:

    • NL - 16.78
    • US - 16.15
    • AU - 11.72
    • CA - 8.63
    • GB - 7.68
    • DE - 6.22
    • FR - 4.63
    • JP - 1.75
    • RU - 1.74
    • CN - 0.32

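    For anyone who wants to reproduce or extend this, here’s a minimal Python sketch of the calculation. The populations are rough 2022 figures, and the raw data-center counts are placeholder values back-derived from the per-million rates above, not sourced counts:

    ```python
    # Back-of-the-napkin per-capita calculation.
    # NOTE: the data-center counts below are illustrative placeholders,
    # back-derived from the per-million figures in the list above using
    # rough 2022 population numbers; they are not sourced counts.

    populations_m = {  # population in millions, c. 2022 (approximate)
        "NL": 17.7, "US": 333.3, "AU": 26.0, "CA": 38.9, "GB": 67.0,
        "DE": 84.1, "FR": 67.8, "JP": 125.1, "RU": 144.2, "CN": 1412.0,
    }

    data_centers = {  # hypothetical raw counts, for illustration only
        "NL": 297, "US": 5383, "AU": 305, "CA": 336, "GB": 515,
        "DE": 523, "FR": 314, "JP": 219, "RU": 251, "CN": 452,
    }

    # Data centers per 1 million people, highest first
    per_million = {cc: data_centers[cc] / populations_m[cc] for cc in data_centers}
    for cc, rate in sorted(per_million.items(), key=lambda kv: -kv[1]):
        print(f"{cc}: {rate:.2f}")
    ```
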
    Worth noting of course that this only lists the quantity of discrete data centers and says nothing about the capacity of those data centers. I think it’d be really interesting to break down total compute power and total storage by country and by population.

    I’d also be interested to know what qualifies as a “data center”. For example, are ASIC-based crypto-mining operations counted, even though their machinery can’t be repurposed for any other function? That would certainly account for a chunk of the US total (almost all of it in Texas).






  • While truly defining pretty much any aspect of human intelligence is functionally impossible with our current understanding of the mind, we can create some very usable “good enough” working definitions for these purposes.

    At a basic level, “reasoning” would be the act of drawing logical conclusions from available data. And that’s not what these models do. They mimic reasoning by mimicking human communication. Humans communicate the process by which we reason (and have developed a lot of specialized language for doing so), and so LLMs can basically replicate the appearance of reasoning by replicating the language around it.

    The way you can tell that they’re not actually reasoning is simple: their conclusions often bear no actual connection to the facts. There’s an example I linked elsewhere where the new model is asked to list states with W in their name. It does a bunch of preamble where it spells out very clearly what the requirements and process are: assemble a list of all states, then check each name for the presence of the letter W.

    And then it includes North Dakota, South Dakota, North Carolina and South Carolina in the list.
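
    For contrast, the procedure the model spelled out is trivial to execute mechanically. Here’s a minimal Python sketch of it; the only inputs are the standard 50-state list and a substring check:

    ```python
    # The procedure as stated: list all 50 states, keep the ones whose
    # name contains the letter W (checked case-insensitively).

    STATES = [
        "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
        "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
        "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
        "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
        "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
        "New Hampshire", "New Jersey", "New Mexico", "New York",
        "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
        "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
        "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
        "West Virginia", "Wisconsin", "Wyoming",
    ]

    print([s for s in STATES if "w" in s.lower()])
    # ['Delaware', 'Hawaii', 'Iowa', 'New Hampshire', 'New Jersey',
    #  'New Mexico', 'New York', 'Washington', 'West Virginia',
    #  'Wisconsin', 'Wyoming']
    # "Dakota" and "Carolina" contain no W under any reading, so anything
    # actually executing its own stated procedure could never include them.
    ```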

    Any human being capable of reasoning would absolutely understand that that was wrong if they were taking the time to carefully and systematically work through the problem in that way. The AI does not, because all this apparent “thinking” is a smokescreen. They’re machines built to give the appearance of intelligence, nothing more.

    When real AGI, or even something approaching it, actually becomes a thing, I will be extremely excited. But this is just snake oil being sold as medicine. You’re not required to buy into their bullshit just to prove you’re not a technophobe.



  • Noted. I’ll have to play around with that sometime.

    Despite my obvious stance as an AI skeptic, I have no problem with putting it to use in places where it can be used effectively (and ethically). I’ve just found that in practice, those uses are vanishingly few. I’m not on some noble quest to rid the world of computers, I just don’t like being sold overhyped crap.

    I’m also hesitant to try to rebuild any part of my workflow around the current generation of these tools, when they obviously aren’t going to exist in a few years, or will exist but at an exorbitant price. The cost to run genAI is far, far higher than any entity (even Microsoft) has any willingness to sustain long term. We’re in the “give it away or make it super cheap to get everyone bought in” phase right now, but the enshittification will come hard and fast on this one, much sooner than anyone thinks. OpenAI are literally burning billions just on compute right now. It’s unsustainable. Short of some kind of magical innovation that brings those compute costs down a hundredfold or a thousandfold, this isn’t going to stick around.



  • More and more advanced tools for automation are an important part of creating a post-scarcity future. If we can combine that with tearing down our current economic system - which inherently requires and thus has to manufacture scarcity - we can uplift our species in ways we can currently only imagine.

    But this ain’t it bud. If I ask you for water and you hand me a glass of warm piss, I’m not “against drinking water” for refusing to gulp it down.

    This isn’t AI. It isn’t, in any meaningful or useful sense, any form of automation at all. A bunch of conmen slapped the letters “AI” on the side of their bottle of piss and you’re drinking it down like it’s grandma’s peach tea.

    The people calling out the fundamental flaws with these products aren’t doing so because we hate the entire concept of automation, any more than someone exposing a snake-oil salesman hates medicine. What we hate is being lied to. The current state of this technology is bullshit and hype. It is not fit for human consumption (other than recreationally), and the money being pumped into it could be put to far better uses. OpenAI may have lofty goals, but they have utterly failed at achieving them, and right now any true desire to create AGI has been totally subsumed by the need to keep pumping out slightly better-looking versions of the same polished turd in order to convince investors to keep covering their staggeringly high hosting costs.