

i guess the point that’s being missed is that when i say “hard” i mean practically impossible


my point is that it’s hard to program someone’s subjective point of view, however it’s written down in legalese, into a detection system, especially when those same detection systems can be used to great effect to train models to bypass them. any such detection system would likely be an “AI” in the same sense as the ones they ban, and would be similarly prone to mistakes and to reflecting the values of the company (read: Jack Dorsey) rather than enforcing any objective ethical boundary.


but what are the criteria? just because you think you have a handle on it doesn’t mean everyone else does or even shares your conclusion. and there’s no metric here i can measure, to for example block it from my platform.


what about the neural networks that power the DSP modules in all modern cell phone cameras? does a neural network filter that generates a 3D mesh, or rather imposes a 3D projection, e.g. putting dog ears on yourself or Memojis, count? what if i record a real video and have Gemini/Veo/whatever edit the white balance? i don’t think it’s as cut and dried as most people think


it’s already the case that the distinction between what’s “AI” and what isn’t is a subjective, aesthetic difference and not a technical one


pretty sure it’s SteamOS, an Arch Linux derivative, on a fairly popular Snapdragon platform. probably not too difficult to hack on it.
there are some directories on my machines i consider ephemeral. Downloads and ~/temp should be deletable with no real consequences
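a minimal sketch of what i mean, assuming POSIX sh; the directory names are just examples, pass in whatever you treat as ephemeral:

```shell
#!/bin/sh
# sketch: wipe the contents of directories treated as ephemeral
# (paths are passed as arguments, nothing is hardcoded)
clean_ephemeral() {
  for d in "$@"; do
    if [ -d "$d" ]; then
      # -mindepth 1 keeps the directory itself, deletes everything inside
      find "$d" -mindepth 1 -delete
    fi
  done
}

# e.g. clean_ephemeral "$HOME/Downloads" "$HOME/temp"
```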


i’d say so. i was a professional Android dev for years, and security and privacy are definitely among the reasons i prefer iOS. i don’t have time to tinker with my personal device so much. Apple is the lesser of 2 evils since their business model doesn’t depend on this kind of tracking (even if they do it too, albeit to a lesser extent)


but LLMs do represent a significant technological leap forward. i also share the skepticism: we haven’t “cracked AGI” and a lot of these products are dumb. i think another comment made a better analogy to the dotcom bubble.
ETA: i’ve been working in ML engineering since 2019, so i can sometimes forget most people hadn’t even heard of this hype train until ChatGPT, but i assure you inference hardware and dumb products were picking up steam even then (Tesla FSD being a classic example).


i know it’s popular to be very dismissive, but a lot of “AI” has already been integrated into normal workflows. AI autocomplete in development text editors, software keyboards, and question-answering bots aren’t going away. speech-to-text, “smart eraser”, subject classification, signal-processing kernels like DLSS and frame generation, and so many more will be with us and improving for a long time. Transformers, ML-optimized chips, and other ML fields aren’t going anywhere either. the comparison to NFTs is either angst or misunderstanding.


i don’t know the full nature of the exploit, but zlib has an exploitable integer overflow via the MiniZip project, even though our images don’t use that project.


i know people usually are like, “oh cool new features”
but this has a security patch that will literally unblock my pipelines at work lol 🎉
i absolutely cannot take this rant about “absurd” conventions seriously with that fuckin thorn character lol


academic fraud has always existed
this is why i moved from Arch to NixOS. now i know what system packages are installed and can even leave comments in the config to remind myself what the heck cyme does, for instance
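a sketch of what that looks like in a NixOS configuration.nix; the package list here is just an example:

```nix
# fragment of /etc/nixos/configuration.nix — package choices are examples
{ pkgs, ... }: {
  environment.systemPackages = with pkgs; [
    cyme    # modern lsusb replacement — the kind of reminder comment i mean
    git
    ripgrep
  ];
}
```

since the whole system package list lives in one declarative file, `nixos-rebuild` applies exactly what’s written there and nothing else accumulates silently.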


without checking, Gates’ wealth is probably tied up in a lot of MS stock, and he could probably walk into the office and ask the intern to get him a coffee. but yeah i think mostly retired.
Linus is still active in maintaining the Linux kernel.
and yes, this is fluff, not some kind of summit
if you really want to stick it to Google you have to go for Firefox or something derived from it. Chromium gives Google a ton of leverage to push features to all of their downstreams. not sure what engine these are using, but i also prefer to use Firefox because it’s open source. if these were open source you could easily see which engine they’re using.


it’s so much worse than the normal search. i would search for “dog” or “pasta” or “house” and get a pretty good result, but this conversational shit is just plain worse. and the “conversational” aspect is useless
he’s been salty about this for years now and frustrated at companies throwing training and compute scaling at LLMs hoping for another emergent breakthrough like GPT-3. i believe he’s the one that really tried to push the Llama models toward multimodality