ChatGPT bombs test on diagnosing kids’ medical cases with 83% error rate | It was bad at recognizing relationships and needs selective training, researchers say.

  • Darorad@lemmy.world · 5 points · 1 year ago

    Exactly, Med-PaLM 2 was specifically trained to be a medical chatbot, not general-purpose like ChatGPT.

    • Hotzilla@sopuli.xyz · 1 point · 1 year ago

      Train on the Internet, get results like the Internet. Is the medical content on the Internet good? No, it is shit, so it will give shit results.

      These are great base models, and a larger understanding of context always helps an LLM, but specialization is needed for domains like this.
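
      The specialization point above can be illustrated with a toy sketch. This is not a real LLM and the corpora are made-up stand-ins; it is just a unigram-to-next-word counter that is "pretrained" on general text and then "fine-tuned" on a small medical corpus, shifting its completions toward the domain:

      ```python
      # Toy illustration of specialization: a bigram next-word predictor.
      # NOT a real LLM; corpora here are invented stand-ins for "web text"
      # and "medical text".
      from collections import Counter, defaultdict

      def train(bigram_counts, text):
          # Count how often each word is followed by each other word.
          words = text.lower().split()
          for a, b in zip(words, words[1:]):
              bigram_counts[a][b] += 1

      def predict(bigram_counts, word):
          # Return the most frequent continuation seen during training.
          nxt = bigram_counts[word.lower()]
          return nxt.most_common(1)[0][0] if nxt else None

      model = defaultdict(Counter)

      # "Pretraining" on general web-style text.
      train(model, "acute angle is less than ninety degrees . acute comedy show")
      print(predict(model, "acute"))  # general-usage completion: "angle"

      # "Fine-tuning": continue training on domain text, which shifts the counts.
      train(model, "acute otitis media . acute otitis media . acute otitis media")
      print(predict(model, "acute"))  # domain completion now wins: "otitis"
      ```

      The same mechanism, scaled up enormously, is why a base model echoes whatever dominates its training mix, and why continued training on curated medical data (as with Med-PaLM 2) changes what it produces.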