I’ve been playing around with ollama. Given that you download the model yourself, can you trust that it isn’t sending telemetry?

  • acockworkorange@mander.xyz · 24 days ago
    Is the overhead because of containers, or is it because you’re running something that is meant to run on Linux through a compatibility layer like MinGW?

    • stink@lemmygrad.ml · 24 days ago

      Windows > Windows Subsystem for Linux (WSL) Ubuntu > Docker container

      I think WSL 2 actually runs Linux in a lightweight virtual machine. I’ve tried getting my own LLM instance running on my Windows machine, but it’s been such a pain.
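
      The stack above (Windows > WSL 2 Ubuntu > Docker) can be sketched as a Compose file for the Docker daemon running inside WSL 2. This is a minimal sketch, assuming the official `ollama/ollama` image and Ollama’s default API port 11434; the `network_mode` comment is one way to address the original telemetry question.

      ```yaml
      # docker-compose.yml — minimal sketch for running Ollama in Docker under WSL 2.
      # Assumes the official ollama/ollama image; 11434 is Ollama's default API port.
      services:
        ollama:
          image: ollama/ollama
          ports:
            - "11434:11434"
          volumes:
            - ollama-models:/root/.ollama   # persist downloaded models across restarts
          # After pulling a model, you could switch to the line below (and remove the
          # "ports" mapping, which Docker disallows alongside it) to cut all network
          # access and confirm the model still answers locally — i.e. nothing can
          # phone home:
          # network_mode: "none"

      volumes:
        ollama-models:
      ```

      With the container up (`docker compose up -d`), you can talk to the model from inside it, e.g. `docker compose exec ollama ollama list`, which also works when the network is disabled.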