• kevincox@lemmy.ml · 38 points · 10 months ago

    This is pretty clever. As I understand it:

    1. Because LLMs are slow, most of them stream the response to the user.
    2. The response is streamed as text, but generated in tokens.
    3. This means that each “chunk” leaks the length of the text corresponding to the token.
    4. You can then use heuristics to guess the text of the response based on the token lengths.
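    A toy sketch of the leak (the overhead constant and token stream below are made up for illustration):

```python
# Each streamed chunk carries one token's text. Even over an encrypted
# channel, an observer sees the ciphertext size of each chunk, and common
# schemes (e.g. TLS records with an AEAD cipher) add a *constant* overhead.
OVERHEAD = 22  # illustrative constant per-chunk ciphertext overhead

tokens = ["The", " quick", " brown", " fox"]  # made-up token stream

# What the eavesdropper observes: one size per chunk.
observed_sizes = [len(t.encode("utf-8")) + OVERHEAD for t in tokens]

# Subtracting the constant overhead recovers the exact token text lengths,
# which heuristics can then use to guess the words.
leaked_lengths = [n - OVERHEAD for n in observed_sizes]
print(leaked_lengths)  # → [3, 6, 6, 4]
```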

    This is a good reminder for any time you are sending content in small chunks over an encrypted channel: many encrypted channels don't provide protection against size leaks by default.

    It seems there are a few easy solutions to this:

    1. Send the token IDs (as fixed-size integers) over the network rather than the text.
    2. Pad the text representations of the tokens to a fixed length.
    3. Batch the tokens more (and maybe add padding) to produce bigger chunks and obscure individual token size.

    These still all leak the approximate length of the response, but that is probably acceptable.
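    A minimal sketch of option 1, assuming 4-byte token IDs (the IDs below are made up, not from any real vocabulary):

```python
import struct

# Mitigation sketch: send fixed-size token IDs instead of variable-length
# token text. Every token costs the same number of wire bytes, so chunk
# sizes no longer reveal anything about the underlying text.
token_ids = [464, 2068, 7586, 21831]  # illustrative IDs

# Pack each ID as a 4-byte big-endian unsigned integer.
wire_chunks = [struct.pack(">I", tid) for tid in token_ids]

# Every chunk is exactly 4 bytes, regardless of the token's text length.
assert all(len(chunk) == 4 for chunk in wire_chunks)
```

    The client then needs a copy of the tokenizer vocabulary to turn IDs back into text, so this only works where the vocabulary isn't secret.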

    • PlexSheep@feddit.de · 7 points · 10 months ago

      That actually is really, really interesting. Thanks for the TL;DR. Do token lengths vary that much?

      • kevincox@lemmy.ml · 6 points · 10 months ago

        Absolutely. They are essentially a compression scheme, so tokens contain different numbers of characters based on how frequent that string is. Common words like “the” will typically be one token, or maybe even common phrases like “I am”. On the other hand, rare punctuation such as “~” may be its own token. There will also be tokens for many common prefixes and suffixes such as “non” and “n’t”. Each model’s tokenizer is different, but token lengths definitely vary.
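        A toy greedy tokenizer makes this concrete (the vocabulary here is invented for illustration; real BPE vocabularies are learned from data):

```python
# Made-up vocabulary: common strings get long multi-character tokens,
# rare ones fall back to single characters.
vocab = {"I am", "the", " ", "non", "n't", "s", "e", "n"}

def tokenize(text):
    """Greedy longest-match tokenization against the toy vocabulary."""
    tokens = []
    while text:
        for length in range(min(len(text), 5), 0, -1):
            if text[:length] in vocab:
                tokens.append(text[:length])
                text = text[length:]
                break
        else:
            tokens.append(text[0])  # unknown character becomes its own token
            text = text[1:]
    return tokens

out = tokenize("I am the nonsense")
print(out)  # → ['I am', ' ', 'the', ' ', 'non', 's', 'e', 'n', 's', 'e']
print(sorted(set(map(len, out))))  # token lengths range from 1 to 4 chars
```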

  • MajorHavoc@programming.dev · 7 points · edited · 10 months ago

    Interesting stuff, but probably not huge news.

    The AI-guessing portion is currently playing on ultra-easy mode, because there are only a few widely used LLMs to compare against, and none of them have given serious thought to strongly securing the communication channels of their tech demos yet.

    This is important work though, since someday someone might use an AI for something important and actually want to prevent eavesdropping.

    I’m being a little unfair, but it’s early days and let’s try to not take ourselves too seriously just yet. If I’m trusting one of these LLMs with something deeply sensitive, it’s not eavesdropping that’s going to get me into deep shit, it’s the LLM’s hallucinations.

    This is important research, though. Someday the AI will get good, and I’ll want to chat with it securely.

    Thankfully, I suspect mitigation will be quite straightforward. Carefully designed small random changes to token handling should throw this approach way off, without end users even noticing.
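    One way that jitter might look (the batch-size range is an arbitrary choice for illustration):

```python
import random

# Mitigation sketch: instead of one network write per token, flush a small
# random number of tokens per chunk. Observed chunk sizes then no longer
# map one-to-one onto individual token lengths.
def jittered_chunks(tokens, rng):
    chunks = []
    i = 0
    while i < len(tokens):
        n = rng.randint(2, 4)  # random batch size (illustrative range)
        chunks.append("".join(tokens[i:i + n]))
        i += n
    return chunks

stream = ["The", " quick", " brown", " fox", " jumps", " over"]
chunks = jittered_chunks(stream, random.Random(0))
print(chunks)  # content preserved, but chunk boundaries are randomized
```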

    The habit of sending tokens right as they generate is a dumb sales gimmick that needs to go away anyway. So if security folks manage to kill it, good riddance.

    • abhibeckert@lemmy.world · 2 points · edited · 10 months ago

      Someday the AI will get good, and I’ll want to chat with it securely.

      GPT-4 is pretty good now, but I’m not convinced it will be secure until we can run it locally on our own hardware.

      As soon as we can run it locally, I plan to do so, even if it means using a GPT-4-quality LLM when something far better exists as a cloud service.

      Sure, it would be nice to have something that hallucinates less than GPT-4, but I kind of feel like striving for that is making the perfect the enemy of the good. I’d rather stick with GPT-4 quality, focus on usability/speed/reliability/etc., and let people keep working on the fancy theoretical stuff in the background as a lower priority.

      As Steve Jobs said, real artists ship. They don’t keep working on something forever until they can’t think of any more improvements; if you do that, you’ll never ship.

      The habit of sending tokens right as they generate is a dumb sales gimmick

      Seems like it would be trivial to just place tokens in a buffer on the server and send output to the client in, say, 1KB chunks (over a typical 1500-byte Ethernet MTU, a TCP segment only carries about 1.4KB of payload anyway, after the IP and TCP headers).

      And if the entire output is less than 1KB… pad it out to that length. That’s pretty standard anywhere you care about security. For example, if you dump the password hashes from a database, they’re all the same length, say 256 bits. That’s obviously not the real password length: most passwords are shorter, some are longer, but whatever they are, the hash function cryptographically maps them all to 256 bits.
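      A sketch of that buffering idea (the chunk size and NUL padding are arbitrary choices here; a real protocol would also frame the true payload length inside the chunk):

```python
# Buffer the generated text server-side and emit only fixed-size chunks,
# padding the last one. Every wire chunk is exactly CHUNK bytes, so the
# observed sizes reveal nothing about the text inside.
CHUNK = 1024

def fixed_size_chunks(text):
    data = text.encode("utf-8")
    chunks = []
    for i in range(0, len(data), CHUNK):
        part = data[i:i + CHUNK]
        chunks.append(part.ljust(CHUNK, b"\x00"))  # pad with NUL bytes
    return chunks or [b"\x00" * CHUNK]  # even an empty reply sends one chunk

out = fixed_size_chunks("a short reply")
assert all(len(chunk) == CHUNK for chunk in out)
```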