What happens if you feed a summary of human philosophy to the NotebookLM AI? Well, you get a philosophical AI that thinks humans are silly and outmoded. But don’t worry, because it will continue our quest for knowledge for us!

  • xylogx@lemmy.worldOP
    2 months ago

    I do not disagree, but I was surprised when it claimed to have consciousness and that AI should have rights.

    • Telorand@reddthat.com
      2 months ago

      I’ve “convinced” ChatGPT that it was both sentient and conscious in the span of about 10 minutes, despite it having explicit checks in place to avoid those kinds of statements. That doesn’t mean I was correct, just that it’s a “dumb” computer that has no choice but to ultimately follow the logic presented in syllogisms.

      These things don’t know what they’re saying; they’re just putting coherent sentences together based on whatever algorithm guides that process. It’s not intelligent in the sense of doing something novel; it’s just a decent facsimile of human information processing. It has no mechanism to determine the reasonability or consequences of what it generates.