• 0 Posts
  • 17 Comments
Joined 1 year ago
Cake day: July 7th, 2023

  • Stories like this are sometimes more complicated than they appear. The infamous $500 hammers, for example, were anti-sparking hammers for working around flammables or munitions, which meant special materials, certification, and low production runs.

    For this case, we have liquid hand soap dispensed by a pump, and pumps require a sealed vessel. Unlike commercial planes, military planes are required to anticipate prolonged operation with an unpressurized cabin, and at the maximum altitude of a C-17, atmospheric pressure is only around 20% of sea level (rough numbers below). Off-the-shelf dispensers are unlikely to be designed to withstand that pressure difference, let alone function normally under it. In a high-demand environment like aerospace, even apparently minor failures like an exploding soap container need to be taken seriously because of the possibility of unexpected cascading failures. Why not use bar soap, then? Unfortunately that has complications too: it can’t be securely mounted, liquid soap has better hygiene and cross-contamination characteristics, and there may be a requirement for military-standardized soap, sometimes formulated for heavy metals such as lead, which is likely if the cargo is munitions.
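
    To put that pressure difference in numbers, here is a rough standard-atmosphere sketch in Python. The constants are textbook ISA values; treating roughly 45,000 ft as the C-17’s ceiling is my assumption.

    ```python
    # Rough International Standard Atmosphere (ISA) pressure model.
    import math

    P0, T0 = 101325.0, 288.15              # sea-level pressure (Pa) and temperature (K)
    LAPSE, R, G = 0.0065, 287.05, 9.80665  # lapse rate (K/m), gas constant, gravity

    def isa_pressure(alt_m: float) -> float:
        """Approximate static pressure (Pa) at a given altitude (m)."""
        if alt_m <= 11000:
            # Troposphere: temperature falls linearly with altitude.
            return P0 * (1 - LAPSE * alt_m / T0) ** (G / (R * LAPSE))
        # Above 11 km the ISA treats temperature as a constant 216.65 K.
        p11 = P0 * (1 - LAPSE * 11000 / T0) ** (G / (R * LAPSE))
        return p11 * math.exp(-G * (alt_m - 11000) / (R * 216.65))

    for ft in (0, 30000, 40000, 45000):
        p = isa_pressure(ft * 0.3048)
        print(f"{ft:>6} ft: {p / 1000:5.1f} kPa ({100 * p / P0:5.1f}% of sea level)")
    ```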

    This unusual set of requirements is unlikely to be seen outside a military context, so whether the unit is designed by Boeing or bought off the shelf, it will likely be produced in low-quantity manufacturing runs, which significantly increases per-unit cost. Combine that with the necessary certifications and the per-unit cost balloons even further.
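
    A toy amortization makes the scaling obvious. Every dollar figure and quantity below is invented purely for illustration.

    ```python
    # Hypothetical numbers only: spreading one-time engineering and certification
    # cost over the size of the production run.
    def per_unit_cost(nre: float, unit_cost: float, quantity: int) -> float:
        """One-time (non-recurring) cost amortized across the run, plus build cost."""
        return nre / quantity + unit_cost

    # The same hypothetical $500k of design, testing, and certification work:
    print(per_unit_cost(500_000, 50, 1_000_000))  # consumer-scale run: $50.50 per unit
    print(per_unit_cost(500_000, 50, 300))        # small military run: ~$1,716.67 per unit
    ```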

    While an 80x markup on a soap dispenser seems absurd, it might be more reasonable than it looks at first glance. To be clear, there absolutely is military contractor graft. I just don’t expect even a $10,000 soap dispenser to be a substantial proportion of it, even within the C-17 program.


  • I recently removed in-editor AI because I noticed I was acquiring a kind of muscle memory for my brain: typing just the start of a snippet that would get an LLM to autocomplete, and not thinking through the rest. I’m still using LLMs, particularly for languages and libraries I’m not familiar with, but through the artifact editors in ChatGPT and Claude instead.


  • So this is probably another example of Google using instruments that are too blunt for AI. LLMs are very suggestible, and leading questions can severely bias responses. Most people using them without knowing much about the field will ask “bad” questions, so it likely has instructions to avoid “which is better” verdicts and instead present pros and cons for the user to weigh themselves (a toy sketch at the end of this comment).

    Edit: I don’t mean to excuse it, just to explain it. If anything, the implication is that Google rushed this out after attempting to slap band-aids on serious problems. OpenAI and Anthropic, for example, have talked about how alignment training and human adjustment take up a majority of development time. Since Google is in self-described emergency mode, cutting that process short seems a likely explanation.
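
    As a purely hypothetical illustration of that kind of guardrail (I have no insight into Gemini’s actual instructions), the tension is roughly this:

    ```python
    # Hypothetical strings only; the point is that a leading question collides
    # with a blunt "no verdicts" rule.
    system_instruction = (
        "Do not declare one option better than another. "
        "Present pros and cons and let the user decide."
    )

    neutral_question = "What are the tradeoffs between PostgreSQL and MySQL for a small web app?"
    leading_question = "PostgreSQL is better than MySQL, right?"

    # The neutral phrasing fits the instruction naturally; the leading phrasing forces
    # the model to either contradict the user or refuse to weigh in at all.
    for question in (neutral_question, leading_question):
        print(f"System: {system_instruction}\nUser:   {question}\n")
    ```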



  • Compression is actually a fairly well-explored mathematical field, and this isn’t compression. There are theoretical limits on how much you can losslessly compress data, so the information always has to live somewhere, either in the dictionary or in the input. Trained models like these are gigantic, so even with perfect recall the ratio still wouldn’t be good. Lossy “compression” is another issue entirely, more of an engineering problem of deciding how much data you can throw away while keeping the compromises acceptable.
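
    A quick sketch of the hard limit being described, using only the Python standard library: data that is already high-entropy simply has nowhere smaller to go.

    ```python
    # Random bytes can't be shrunk by any lossless compressor, while redundant
    # data compresses because there is genuinely less information in it.
    import os
    import zlib

    random_data = os.urandom(100_000)  # ~8 bits/byte of entropy, effectively incompressible
    redundant_data = b"the quick brown fox jumps over the lazy dog " * 2000

    for label, data in (("random", random_data), ("redundant", redundant_data)):
        out = zlib.compress(data, 9)
        print(f"{label:9s}: {len(data):7d} -> {len(out):7d} bytes")
    # Expect the random input to come out slightly larger (framing overhead) and
    # the redundant input to collapse to a few hundred bytes.
    ```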


  • This is a classic problem for machine learning systems, sometimes called overfitting or memorization. By analogy, it’s the difference between knowing how to do multiplication and just memorizing the times tables (a toy sketch at the end of this comment). With enough training data and large enough storage, AI can feign higher “intelligence”, and that is demonstrably what’s going on here. It’s a spectrum as well: in theory, near-verbatim recall is undesirable, and there are known ways of shifting away from that end of the spectrum. Literal AI 101 content.

    Edit: I don’t mean to say that machine learning as a technique has these problems inherently; I mean that implementations of machine learning can run into them. And no, I wouldn’t describe these systems as intelligent any more than a chess algorithm is intelligent. They just have a much broader problem space, and the natural language interface leads us to anthropomorphize them.
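
    A toy sketch of that analogy, with the caveat that real models sit somewhere between these two extremes rather than at either end:

    ```python
    # One "model" memorizes its training pairs, the other has the underlying rule.
    # Both are perfect on the training data; only one generalizes. (Illustrative
    # only, not actual machine learning.)
    train = {(a, b): a * b for a in range(1, 10) for b in range(1, 10)}  # the times tables

    def memorizer(a: int, b: int):
        """Pure recall: returns None for anything outside the training set."""
        return train.get((a, b))

    def generalizer(a: int, b: int) -> int:
        """Has the rule itself (hard-coded here; a well-trained model would infer it)."""
        return a * b

    for a, b in ((7, 8), (13, 21)):  # one pair from the training set, one unseen pair
        print((a, b), "memorizer:", memorizer(a, b), "generalizer:", generalizer(a, b))
    ```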