ChatGPT has meltdown and starts sending alarming messages to users::AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them

  • thehatfox@lemmy.world · 10 months ago

    Humans come hardwired to be a certain way, do certain things. Maybe they need to start AI off like that, some basic programs that guide learning. “Learn everything” isn’t working.

    That’s a good point. For real brains, size and intelligence are not linked. An elephant brain has about three times as many neurons as a human brain, but the human brain is more intelligent. There is more to intelligence than the sheer number of neurons, real or virtual, so making larger and larger AI models may not be the right direction.

    • KevonLooney@lemm.ee · 10 months ago

      True. Maybe they just need more error correction, like spending more energy questioning whether what they say is true. Right now LLMs seem to just vomit out whatever they thought up, with no consideration of whether it makes sense.

      They’re like an annoying friend who just can’t shut up.
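      The "question whether what you say is true" idea above is roughly a generate-then-verify loop. A minimal toy sketch of that pattern, with both the generator and the checker as hypothetical stand-in functions (no real LLM API is assumed):

      ```python
      # Toy sketch: draft an answer, score it with a checker, retry if the
      # score is low. generate() and check() are stand-ins, not a real model.

      def generate(prompt: str, attempt: int) -> str:
          """Stand-in for an LLM draft; later attempts are 'more careful'."""
          drafts = ["2 + 2 = 5", "2 + 2 = 4"]
          return drafts[min(attempt, len(drafts) - 1)]

      def check(answer: str) -> float:
          """Stand-in verifier: 1.0 if the arithmetic actually holds."""
          lhs, rhs = answer.split("=")
          return 1.0 if eval(lhs) == int(rhs) else 0.0

      def answer_with_self_check(prompt: str,
                                 threshold: float = 0.5,
                                 max_attempts: int = 3) -> str:
          draft = ""
          for attempt in range(max_attempts):
              draft = generate(prompt, attempt)
              if check(draft) >= threshold:
                  return draft
          return draft  # give up and return the last draft

      print(answer_with_self_check("What is 2 + 2?"))  # → 2 + 2 = 4
      ```

      The point of the loop is just that extra compute goes into rejecting bad drafts instead of emitting the first one, which is the "more error correction" being asked for.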

      • nilloc@discuss.tchncs.de · 10 months ago

        They aren’t thinking, though. They’re making connections with the training data they’ve processed.

        This is really clear when they’re asked to write code from too vague a prompt.

        Maybe feeding them a primary school curriculum (including essays and tests) would be helpful, but I don’t think the language models can really sort knowledge yet.