• 3 Posts
  • 380 Comments
Joined 2 years ago
Cake day: July 1st, 2023




  • Only if you don’t have the critical thinking to understand how information management is a significant problem and barrier to medical care.

    Researching and finding material relevant to a patient’s problem is an arduous task, and given their regular workloads, it’s often too high a barrier for doctors to invest in.

    Which leads to a reduction in effective care.

    Providing a more efficient and effective way to dig up that information saves a ton of time and improves care.

    It’s still up to the doctor to evaluate that information, but now they’re not slogging away trying to find it.


  • And it won’t scale at all!

    Congratulations, you made more AI slop, and the problem is still unsolved 🤣

    Current AI solves 0% of difficult programming problems. Zero. It’s good at producing the lowest common denominator, and protocols sit at the 99th percentile of difficulty here. You’re not going to develop anything remotely close to a new, scalable, secure, federated protocol with it.

    Never mind the interoperability, client libraries, etc., or the proofs and protocol documentation, which exist before the actual code.





  • You can’t really host your own AWS. You can self-host various amalgamations of services that imitate some of its features, but you can’t self-host your own AWS by any stretch of the imagination.

    And if you’re thinking of something like LocalStack, that’s not what it’s for, and it has huge gaps that make it unfit for live deployment (it is, after all, meant for test and local environments).
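    For illustration, a minimal sketch of the local-testing use case LocalStack is actually built for, assuming its default edge endpoint on localhost:4566 and dummy credentials (bucket and key names here are made up for the example):

    ```python
    import boto3

    # Point the AWS SDK at LocalStack's edge endpoint instead of real AWS.
    # LocalStack does not validate credentials, so dummies are fine.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:4566",  # LocalStack's default edge port
        region_name="us-east-1",
        aws_access_key_id="test",
        aws_secret_access_key="test",
    )

    s3.create_bucket(Bucket="local-test-bucket")
    s3.put_object(Bucket="local-test-bucket", Key="hello.txt", Body=b"hello from a test")
    print(s3.list_objects_v2(Bucket="local-test-bucket")["Contents"][0]["Key"])
    ```

    That works great for integration tests, but it’s exactly the kind of imitation-of-some-features setup described above, not something you’d serve production traffic from.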




  • I mean, it’s more complicated than that.

    Of course, data is persisted somewhere, in a transient fashion, for the purpose of computation. Especially when using event-based or asynchronous architectures.

    And then it’s promptly deleted or otherwise garbage collected in some manner (either actively or passively, usually passively). It could be in transitory memory, or it could be on high-speed SSDs during any number of steps.

    It’s also extremely common for data storage to happen at the caching layer without violating requirements that data not be retained, since those caches are transient. And that’s before mentioning the reduced-rate “bulk” asynchronous APIs, which use idle, cheap computational power to do work in a non-guaranteed amount of time and therefore require some level of storage until the data can be processed (a rough sketch of that kind of transient retention follows this comment).

    A court order forcing them to start storing this data is a problem. It doesn’t mean they already had it stored in an archival format somewhere; it means they now have to store it somewhere for long-term retention.
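    To make the transient-retention point concrete, here’s an illustrative sketch (not any particular vendor’s implementation) of a cache that holds data only long enough to serve computation and passively drops it afterwards:

    ```python
    import time

    class TransientCache:
        """Illustrative TTL cache: entries exist only to serve computation
        and are passively evicted once their time-to-live has elapsed."""

        def __init__(self, ttl_seconds: float = 60.0):
            self.ttl = ttl_seconds
            self._store: dict[str, tuple[float, object]] = {}

        def put(self, key: str, value: object) -> None:
            self._store[key] = (time.monotonic(), value)

        def get(self, key: str):
            entry = self._store.get(key)
            if entry is None:
                return None
            stored_at, value = entry
            if time.monotonic() - stored_at > self.ttl:
                # Passive garbage collection: the data is dropped the next time
                # anything touches it after expiry, not on a fixed schedule.
                del self._store[key]
                return None
            return value
    ```

    Data held like this is retained only incidentally and briefly; an order to preserve it long term changes the design rather than revealing an archive that was already there.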




  • The sad part is that you’re right.

    And the reason it’s sad is that most of the individual engineers on proprietary projects care deeply about the project itself and have the same goals as they do with open source software, which is just to make something useful and do cool shit.

    Yep, the business itself can force them not to take care of problems or force them to go in directions that are counter to their core motivations.



  • Did I say that it did?

    No?

    Then why the rhetorical question for something that I never stated?


    Now that we’re past that, I’m not sure if I think it’s okay, but I at least recognize that it’s normalized within society. And has been for like 70+ years now. The problem happens with how the data is used, and particularly abused.

    If you walk into my store, you expect that I am monitoring you. You expect that you are on camera and that your shopping patterns, like all foot traffic, are probably being analyzed and aggregated. What you buy is tracked, at least in aggregate, pretty much by default; that’s just volume tracking and prediction.

    Suffice to say that broad customer behavior analysis has been a thing for a couple generations now, at least.

    When you go to a website, why would you think that it is not keeping track of where you go and what you click on in the same manner?

    Now that I’ve said that, I do want to say that the real problems we experience come in when this data is misused beyond what its scope should be. We should have strong regulatory agencies enforcing compliance with how this data is used and protecting the right to privacy for people who want it removed.


  • Oh, you get the benefit of explicit scanning?

    We get the beauty of every file that’s modified being scanned before the write “completes”. It’s an absolute joy starting a build and watching ~80% of the available compute be consumed by antivirus software.

    Or, you know, normal filesystem caching as part of your tool’s workflow.

    Or dependency installing and unpacking…

    Or anything actually that touches a lot of files.



  • I build software and can confirm this.

    This is pretty run-of-the-mill analytics and user session recording. There’s nothing surprising here.

    Usually it’s not actual screen recording but rather user-action diff recording (which effectively acts like recording the application, except that it only records things that changed, so the recording is much cheaper to store).

    This is extremely effective for tracking down bugs, solving user support issues with software, or watching session recordings to figure out if users are using the software in unexpected ways.
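    For illustration only (tools in this space, like rrweb, record DOM mutations, but the principle is the same), here’s a rough sketch of recording state diffs instead of full snapshots; all names are made up for the example:

    ```python
    from typing import Any

    def diff_state(old: dict[str, Any], new: dict[str, Any]) -> dict[str, Any]:
        """Return only the keys whose values changed (added, removed, or updated)."""
        changed = {}
        for key in old.keys() | new.keys():
            if old.get(key) != new.get(key):
                changed[key] = new.get(key)  # None marks a removed key
        return changed

    # Record one full snapshot, then only the diffs after each user action.
    events = []
    snapshot = {"page": "/checkout", "cart_items": 2, "modal_open": False}
    events.append({"type": "snapshot", "state": dict(snapshot)})

    new_state = {"page": "/checkout", "cart_items": 3, "modal_open": True}
    events.append({"type": "diff", "changes": diff_state(snapshot, new_state)})
    snapshot = new_state

    print(events)
    ```

    A replayer reconstructs any point in the session by applying the diffs to the snapshot in order, which is why it effectively acts like recording the application while being far cheaper to store.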