• Diplomjodler@lemmy.world
    8 months ago

    Damn thing doesn’t even know it’s running locally. Just ask it. And it can’t tell the time.

    • DarkThoughts@fedia.io
      8 months ago

      Another easy test is to ask a question, note the answer, then clear the chat and ask the same question again. Do this over and over and you'll see varying responses, because much of the answer is generated on the fly rather than pulled from some stored pool of information. A lot of those LLM models are really only good for roleplaying purposes. But even the large commercial models that actually were trained on a lot of potentially valuable information have this issue, which is why you should never blindly trust LLM answers.
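      The effect described above comes from sampling: the model produces a probability table over next tokens and then draws from it, so independent runs can diverge. A minimal sketch of that mechanism, using a made-up toy distribution (the candidate answers and weights are illustrative, not from any real model):

      ```python
      import random

      # Toy next-token distribution for a single prompt. Real LLMs
      # compute a probability table like this over their vocabulary,
      # then *sample* from it (at nonzero temperature).
      CANDIDATES = ["Paris", "Lyon", "Marseille"]
      WEIGHTS = [0.7, 0.2, 0.1]

      def sample_answer(seed: int) -> str:
          """One 'fresh chat': an independent draw from the distribution."""
          rng = random.Random(seed)
          return rng.choices(CANDIDATES, weights=WEIGHTS, k=1)[0]

      # "Clear the chat and ask again" = independent samples each time.
      answers = [sample_answer(seed) for seed in range(50)]
      print(sorted(set(answers)))  # usually more than one distinct answer
      ```

      Greedy decoding (temperature 0) would always pick the top-weighted candidate, which is why some APIs look deterministic while chat interfaces do not.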

    • db0@lemmy.dbzer0.com
      8 months ago

      Of course not. They don’t have any external info other than what you provide them. They don’t know the concept of “running locally” at all.