There is no genuine intelligence, there is artificial stupidity.
There is no serenity, there is anxiety.
There is no peace, there is turmoil.
There is no structure, there is porridge.
There is no order, there is chaos.

  • 2 Posts
  • 335 Comments
Joined 1 year ago
Cake day: May 14th, 2024




  • Wow, those are some pretty big numbers! About 10x bigger than what I was thinking. I knew these things can get pretty weird, but this is just absolutely wild. When expectations fly that high, the crash can be all the more spectacular.

    When you notice that your free account can’t do much, that’s a sign that OpenAI is beginning to run out of money. When that happens, the competitors will be ready to welcome all the users who didn’t feel like paying OpenAI.


  • That’s a very good point. Actually, video hosting services also suffer from a similar problem, and that’s one of the main reasons why it’s so hard to compete with YouTube. Since there are so many LLM services out there at the moment, it makes me think that there must be a completely ridiculous amount of investor money floating around there. Doesn’t sound like a sustainable situation to me.

    Apparently, the companies are hoping that everyone gets so hooked on LLMs that they have no choice but to pay up when the inevitable tsunami of enshittification hits us.


  • As long as they can convince investors of potential future revenue, they will be just fine. In the growth stage, companies don’t have to be profitable because the investors will cover the expenses. Being profitable becomes a high priority only when the series F money runs out and no new investors are willing to put in another 700 million. It’s a combination of low interest rates and convincing arguments.

    BTW I don’t think this is a good way to run a company, but many founders and investors clearly disagree with me.


  • Probably not going to go belly-up any time soon, but the enshittification cycle still applies. At the moment, investors are pouring billions into the AI business, and as a result, companies can offer services for free while only gently nudging users towards the paid tiers.

    When interest rates rise during the next recession, investors won’t have access to cheap money any more. Then the previously constant stream of funding dries up, AI companies start cutting the free tier’s features, and people start complaining about enshittification. During that period, the paid tiers also get restructured to squeeze more money out of the paying customers. That hasn’t happened yet, but eventually it will. Just keep an eye on those interest rates.


  • Maybe in the future you could have an AI implant that handles all translation while you’re talking to people. This idea has been explored in scifi many times; I think the babel fish was the funniest way to implement it in a story.

    If that sort of translator becomes widespread, it would definitely change the status of learning languages. It would also mean you have to think about a potential man-in-the-middle attack. Can you trust the corporation that runs the AI? What if you want to have a discussion about a topic that isn’t approved by your local tyrannical dictatorship? A MITM attack can become a serious concern. Most people probably won’t care that much, so they won’t learn new languages, but some people really need to.



  • The Last Airbender.

    If you just forget about the Avatar series for a while and treat this as a bit of harmless fun, it’s not that bad. Well, it’s not good enough that I would watch it again, nor is it bad enough to warrant all the abysmal reviews. If you expect this movie to fit in with the series, though, all of the hate and anger is entirely justified.

    It all depends on how you watch this movie, and I would argue that there is a way to enjoy it. It’s not all bad.







  • That’s a problem when you want to automate the curation and annotation process. So far, you could just dump all of your data into the model, but that might not be an option in the future, as more and more of the available data is generated by other LLMs.

    When that approach stops working, AI companies need to figure out a way to get high quality data, and that’s when it becomes useful to have data that was verified to be written by actual people. This way, an AI doesn’t even need to be able to curate the data, as humans have done that to some extent. You could just prioritize the small amount of verified data while still using the vast amounts of unverified data for training.
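    To make the idea concrete, here is a minimal sketch (in Python, with made-up function and variable names) of what "prioritizing the small verified set while still using the unverified bulk" could look like as weighted sampling of a training mix:

```python
import random

def build_training_mix(verified, unverified, verified_weight=5, sample_size=8):
    """Hypothetical sketch: oversample human-verified examples relative to
    unverified scraped ones, so the small trusted set has outsized influence
    without discarding the large unverified pool."""
    # Each verified example counts `verified_weight` times in the draw;
    # each unverified example counts once.
    pool = list(verified) + list(unverified)
    weights = [verified_weight] * len(verified) + [1] * len(unverified)
    return random.choices(pool, weights=weights, k=sample_size)

verified = ["human text A", "human text B"]
unverified = ["scraped text C", "scraped text D", "scraped text E"]

batch = build_training_mix(verified, unverified)
print(len(batch))  # 8
```

    The exact weighting scheme is an assumption; real pipelines would likely combine this with quality filters, but the principle is the same: verified human data gets priority, unverified data fills in the volume.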


  • Math problems are a unique challenge for LLMs, often resulting in bizarre mistakes. While an LLM can look up formulas and constants, it usually struggles to apply them correctly. For example, when counting the hours in a week, it says it’s calculating 7*24, which looks right, but somehow the answer still comes out as 10 🤯. Like, WTF? How did that happen? In reality, that specific problem might not be that hard, but the same phenomenon shows up in more complicated problems too. I could give other examples, but this post is long enough as it is.

    For reliable results in math-related queries, I find it best to ask the LLM for formulas and values, then perform the calculations myself. The LLM can typically look up information reasonably accurately but will mess up the application. Just use the right tool for the right job, and you’ll be ok.
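    As a trivial illustration of that division of labor, here is the hours-in-a-week example done the way I suggest: let the LLM supply the formula and constants, then do the arithmetic yourself in ordinary code (the variable names here are my own, just to make the structure explicit):

```python
# Suppose the LLM answered: "hours in a week = days_per_week * hours_per_day,
# with days_per_week = 7 and hours_per_day = 24."
# We take the formula and constants from the model, but compute locally.
days_per_week = 7
hours_per_day = 24

hours_per_week = days_per_week * hours_per_day
print(hours_per_week)  # 168, not 10
```

    The point is that the lookup step is the part LLMs are reasonably good at, while the evaluation step is the part a deterministic calculator never gets wrong.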