• 0 Posts
  • 150 Comments
Joined 2 years ago
Cake day: June 20th, 2023

  • Dran@lemmy.world to Privacy@lemmy.ml · [Deleted]
    +7 · edited · 14 days ago

    An example of this:

    Bitcoin mining started on CPUs, then moved to GPUs, and now lives on dedicated ASICs.

    Between a $200 GPU and a $200 ASIC, the ASIC is going to be the faster SHA-256 calculator.

    Between a $2000 GPU and a $200 ASIC, the GPU is going to be the faster SHA-256 calculator.

    A $200 GPU from today vs. a $200 ASIC from 10 years ago vs. a $200 CPU from today? You get the idea.

    There’s no way to know which will be faster without specific details. You could be running software encryption on a five-year-old Raspberry Pi, or the drive could be running a ten-year-old encryption ASIC, and so on.
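    To put a number on “faster SHA-256 calculator,” here’s a minimal benchmark sketch. It uses plain Python and hashlib, so it only measures the CPU case; the 80-byte input matches the size of a Bitcoin block header, and everything else is illustrative:

```python
import hashlib
import time

# Measure single-threaded SHA-256 throughput on this CPU.
data = b"\x00" * 80  # a Bitcoin block header is 80 bytes
count = 1_000_000

start = time.perf_counter()
for _ in range(count):
    hashlib.sha256(data).digest()
elapsed = time.perf_counter() - start

print(f"{count / elapsed:,.0f} hashes/sec on this CPU")
```

    Run the same measurement on different machines and the point above falls out immediately: the answer depends entirely on which silicon you happen to have.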


  • Dran@lemmy.world to Privacy@lemmy.ml · [Deleted]
    +7 / −1 · 14 days ago

    The short answer: all other things being equal, it will always be faster and cheaper to do things in dedicated hardware. Comparing one specific implementation to another, however, is always going to be an “it depends.”




  • Dran@lemmy.world to linuxmemes@lemmy.world · The Return Home
    +2 · edited · 2 months ago

    Remote Assistance is not RDP; it’s Microsoft’s support hook over the Internet, which requires telemetry to function. It is distinctly separate from, and not a prerequisite for, RDP.

    The rest of that I’ll have to look into, but disabling Remote Assistance seems sane in that context.

    I wonder if other parts of the shutdown dialog or its hover context menu have phone-home functions that can only be disabled in roundabout ways; it wouldn’t be the first time. It would not surprise me to learn that the “which apps are preventing shutdown” dialog triggers a call that phones that data home.
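    Since disabling Remote Assistance is the takeaway here, a minimal sketch of doing it programmatically (this assumes the documented fAllowToGetHelp registry value, needs administrator rights, and should be verified against your Windows version):

```python
import winreg

# Sketch: turn off Windows Remote Assistance by setting
# fAllowToGetHelp = 0 (assumed documented value; run as administrator).
KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Remote Assistance"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "fAllowToGetHelp", 0, winreg.REG_DWORD, 0)
```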







  • Anecdotally, I use it a lot, and I feel like the responses are better when I’m polite. I have a couple of theories as to why.

    1. More tokens in the context window of your question, and a clear separator between ideas in a conversation, make it easier for the inference tokenizer to recognize disparate ideas.

    2. Higher-quality datasets contain American boomer/millennial notions of “politeness,” and when responses are structured in kind, they’re more likely to contain tokens from those higher-quality datasets.

    I haven’t mathematically proven any of this within the llama.cpp tokenizer, but I strongly suspect I could at least show a correlation between polite input tokens and output tokens drawn from those datasets. A rough starting point is sketched below.
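    For anyone who wants to poke at theory 1, a rough sketch using the llama-cpp-python bindings: tokenize a blunt and a polite version of the same request and compare counts. The model path is hypothetical, and vocab_only loads just the tokenizer rather than the full weights.

```python
from llama_cpp import Llama

# Load only the vocabulary/tokenizer, not the model weights.
llm = Llama(model_path="model.gguf", vocab_only=True)  # hypothetical path

blunt = b"Fix this code."
polite = b"Hi! Could you please help me fix this code? Thank you!"

for prompt in (blunt, polite):
    tokens = llm.tokenize(prompt)  # llama.cpp tokenization via the bindings
    print(f"{len(tokens):3d} tokens: {prompt.decode()}")
```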