Cryptography nerd

Fediverse accounts:
Natanael@slrpnk.net (main)
Natanael@infosec.pub
Natanael@lemmy.zip

@Natanael_L@mastodon.social

Bluesky: natanael.bsky.social

  • 0 Posts
  • 543 Comments
Joined 2 years ago
Cake day: August 16th, 2023






  • There’s also a big difference between published specifications and threat models for the encryption, which professionals can check against the code delivered to users, versus no published security information at all, where pure reverse engineering is the only option.

    Apple at least has public specifications. Experts can dig into the implementation and compare it against the specs, which is far easier than digging into that kind of code blindly. The spec describes what it does, when, and why, so you don’t have to figure that out through reverse engineering; instead you can focus on looking for discrepancies.

    Proper open source with deterministic builds would be even better, but we aren’t getting that out of Apple. Specs are the next best thing (a quick sketch of what deterministic builds would let anyone verify is below).

    BTW, plugging our cryptography community: !crypto@infosec.pub
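
    On the deterministic-builds point: the value is that anyone can rebuild the binary from the published source and compare it against what’s actually shipped. A minimal sketch of that comparison step (the file names are hypothetical, not anything Apple or a specific project publishes):

    ```python
    import hashlib

    def sha256_of(path: str) -> str:
        """Hash a file in chunks so large binaries don't need to fit in memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical file names: the artifact you rebuilt yourself vs. the one the vendor ships.
    local = sha256_of("my_rebuilt_app.bin")
    shipped = sha256_of("vendor_release.bin")
    print("match" if local == shipped else "mismatch: shipped binary differs from the audited source")
    ```

    If the hashes match, an audit of the source also covers the binary users actually run; without deterministic builds there’s no such anchor.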



  • Natanael@slrpnk.net to KDE@lemmy.kde.social · Scrambled graphics (+3, edited 3 months ago)

    Depends on where the exact cause is. Sometimes it’s fixable in another layer (like a compatibility patch in Wayland) if all the data is still there, but it really should be fixed in the driver.

    It’s usually a driver issue, as in limited support for your specific graphics card: some features are implemented differently from other models and aren’t fully covered by the open source drivers.


  • Natanael@slrpnk.net to KDE@lemmy.kde.social · Scrambled graphics (+4, edited 3 months ago)

    This is an issue with translating the graphics buffer to the screen, so it’s a driver issue. There are differences in the graphics APIs used by older and newer games, and not every API version is tested for a given driver / graphics card combination, so older OpenGL games might not behave the same as a newer one running on Vulkan (or one that Proton can translate to Vulkan).





  • Natanael@slrpnk.net to Programmer Humor@programming.dev · LDAC (+9/-1, edited 3 months ago)

    The Nyquist-Shannon sampling theorem isn’t subjective, it’s physics.

    Your example isn’t great because it’s about misconceptions about the eye, not about physical limits. The physical limits for transparency are real and absolute, not subjective. The eye can perceive quick flashes of objects that take less than a thousandth of a second. The reason we rarely go above 120 Hz for monitors (other than cost) is that differences in continuous movement can barely be perceived, so it’s rarely worth it.

    We know where the upper limits for perception are (the basic Nyquist arithmetic is sketched below). The difference typically lies in the encoder / decoder or the physical setup, not in the information a good codec is able to embed at that bitrate.
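
    To put numbers on the sampling-theorem point, here’s a quick sketch (the 20 kHz hearing limit is a commonly cited figure, my assumption rather than something stated in this thread):

    ```python
    # Nyquist-Shannon: to capture every frequency up to f_max, the sample rate must be at least 2 * f_max.
    HEARING_LIMIT_HZ = 20_000  # assumed upper bound of human hearing

    def min_sample_rate(f_max_hz: float) -> float:
        """Minimum sample rate that can represent all content up to f_max_hz."""
        return 2 * f_max_hz

    print(f"Need at least {min_sample_rate(HEARING_LIMIT_HZ):.0f} Hz to cover {HEARING_LIMIT_HZ} Hz")

    for rate in (44_100, 48_000, 96_000):
        nyquist = rate / 2
        verdict = "covers" if nyquist >= HEARING_LIMIT_HZ else "misses"
        print(f"{rate} Hz sampling -> Nyquist limit {nyquist:.0f} Hz ({verdict} the audible range)")
    ```

    Already at 44.1 kHz the Nyquist limit (22.05 kHz) sits above the audible range, which is why higher sample rates don’t buy audible fidelity for playback.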



  • Natanael@slrpnk.net to Programmer Humor@programming.dev · LDAC (+10/-1, edited 3 months ago)

    Why use lossless for that when transparent lossy compression already does that with so much less bandwidth?

    Opus is indistinguishable from lossless at 192 kbps, while lossless needs roughly 800 - 1400 kbps. That’s a saving of roughly 4x - 7x at the exact same quality (the ratio is worked out in the sketch below).

    Your wireless antenna often draws more energy in proportion to bandwidth use than the decoder chip does, so using high quality lossy even gives you better battery life, on top of also being more tolerant to radio noise (easier to add error correction) and having better latency (less time needed to send each audio packet). And you can even get better range with equivalent radio chips due to needing less bandwidth!

    You only need lossless for editing or as a source for transcoding; there’s no need for it when you’re just listening to media.
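
    The bandwidth ratio above is just arithmetic on the quoted bitrates; a tiny sketch for anyone who wants to check it:

    ```python
    # Back-of-the-envelope check of the bitrates quoted above
    OPUS_KBPS = 192                    # rate at which Opus is claimed transparent above
    LOSSLESS_RANGE_KBPS = (800, 1400)  # rough lossless audio bitrates quoted above

    for lossless in LOSSLESS_RANGE_KBPS:
        print(f"{lossless} kbps lossless / {OPUS_KBPS} kbps Opus ≈ {lossless / OPUS_KBPS:.1f}x")
    # prints roughly 4.2x and 7.3x, i.e. the ~4x - 7x savings range
    ```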