• 0 Posts
  • 72 Comments
Joined 2 years ago
Cake day: August 3rd, 2023



  • 30% might be high. I’ve worked with two different agent creation platforms. Both require a huge amount of manual correction to work anywhere near accurately. I’m really not sure what the LLM actually provides other than some natural language processing.

    Before human correction, the agents I’ve tested were right 20% of the time, wrong 30% of the time, and failed entirely 50% of the time. To fix them, a human has to sit behind the curtain, manually review conversations, and program custom interactions for every failure.

    In theory, once it is fully set up and all the edge cases are fixed, it will provide 24/7 support in a convenient chat format. But that takes a lot more man-hours than the hype suggests…

    Weirdly, ChatGPT does a better job than a purpose-built, purchased agent.









  • Doing it well requires a different approach and skill set than in-person learning, which can be difficult to retrofit into an existing institution, especially when budgets are tight. Established institutions also tend to be a bit conservative about change. Even if the administration is on board, getting faculty to adjust their curricula and adopt the new technology can be near impossible.



  • Pure conjecture on my part but I think…

    When these first came out, Google approached them in full venture capital mode with the idea of building a market first, then monetizing it. So they threw money and people at it, and it worked fairly well.

    They tried making it part of a home automation platform, but after squandering the goodwill and market position of acquisitions like Nest and Dropcam, they failed to integrate these products into a coherent platform and needed another approach.

    So they turned to media and entertainment, only to lose the Sonos lawsuit.

    After that, the product appears to have moved to maintenance mode, where people and server resources are constantly cut, forcing the remaining team to downsize and simplify the tech.

    Now they are trying to plug it into their AI platform, but in an effort to compete with OpenAI and Microsoft, they are likely rushing that platform to market far before it is ready.



  • Scenario I’ve been playing with:

    Suppose you are kidnapped by two people. They tell you that one of them will shoot you and then let you go, but you get to decide who shoots. Person A says he will shoot you in the head. Person B says he will shoot you in the shoulder. Which do you choose?

    The more I think about this, the more I like it. Both people are clearly awful and contributed to the situation. Both could offer better choices but refuse. Both lead to rather similar outcomes. But one is clearly worse.

    Is it rational to choose to be shot at all? Is it rational to not choose the better of two alternatives?






  • esc27@lemmy.world to Technology@lemmy.world · *Permanently Deleted*
    10 months ago

    A third term implies the constitution is still in place, and I don’t see them passing an amendment without doing something ridiculous like creating a bunch of extra states.

    Far easier to just never end the second term. Claim a national emergency and suspend elections/the constitution.