AI can’t be all that bad. The problem I keep seeing with AI is that it’s a double-edged sword. You have corporations shoving AI into just about everything and treating it like it’s a cure for cancer, which really rubs people the wrong way. Then, on a more societal level, you’ve got everything from people who make art with AI and still credit themselves as artists, to people who treat AI like a therapist when that is not advised.
However, I’ve found some benefits to AI. For example, I’m chatting with ChatGPT about credit cards, because getting one is something I’m leaning towards. It’s helping me understand them better than most people who have tried explaining them to me, simply because it gives me a more streamlined response instead of beating around the bush.
Most of my qualms with AI aren’t in the usage of AI, but in its creation (water usage, mass layoffs, etc.—you’ve heard it all before).
To me it’s like asking “What are some good uses for slaves?” (An extreme example to show the point, I’m not trying to say AI is the same as slavery).
Like yeah I could find good uses for it, but should it exist in the first place?
For every small benefit, there are disastrous mistakes. We shouldn’t discuss one without the other:
https://tech.co/news/list-ai-failures-mistakes-errors
March 2026
- Police used AI facial recognition to arrest a Tennessee woman for crimes committed in a state she says she’s never visited
February 2026
- Health advice given by AI chatbots is frequently wrong, says new study
January 2026
- Study reveals that fixing AI mistakes takes up to 40% of the time it saves
- An AI tool used by ICE to identify applicants with previous law enforcement experience falsely flagged applicants with no such experience, leading to the placement of unqualified recruits in field offices.
December 2025
- AI mistakes clarinet for gun at Florida school
November 2025
- Google Antigravity deletes entire contents of user’s computer drive
- Report finds AI hallucinations in 490 court filings from the past six months
October 2025
- Teenager handcuffed after AI mistakes Doritos packet for gun
- Lawyer submits AI-assisted court filing with fake citations
- Man follows ChatGPT advice to stop eating salt, develops rare condition. The man was hospitalized, sectioned, and eventually treated for psychosis. He tried to escape the hospital within 24 hours of being admitted.
- ChatGPT-5 jailbroken within 24 hours of release
July 2025
- AI coding app deletes entire company database
- McDonald’s AI chatbot error exposes data of 64 million job applicants
- AI program tasked with running a small shop goes insane, claims to be human
- Apple Intelligence falsely presents BBC headline
… and it just keeps going.
So don’t put AI in front of anything mission-critical, or use it without having a human review its output.
So LLMs in agentic mode are a disaster waiting to happen.
God yes.
I have a friend at work who does a lot of video. He films weddings, music videos, etc., and is making a pilot for Netflix. He uses AI to go through all his footage and tag it by content. E.g. if he needs a clip of birds, he can just search ‘birds’ and it will pull up all relevant footage. Incredibly useful.
This could come in pretty handy for me. What’s he edit on that does this?
It’s a custom app he made using AI vision and XML files, I guess. He uses DaVinci Resolve though.
Is the AI vision local?
Multimodal local models like qwen3.5 are pretty good
Basically you grab screenshots from the video at intervals and ask the model to tag / describe them.
I should probably learn how you link all this stuff up, probably not doing myself a lot of favours ignoring it.
A €20 Claude Code plan for a month will teach you all of it.
It’s pretty much just scripting ffmpeg and feeding the screenshots to a local model via an API (or use a CLIP model)
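A minimal sketch of the frame-grabbing half, assuming ffmpeg is on your PATH (the two-minute interval and the file names are arbitrary choices, and the tagging step with the vision model would come after):

```python
import subprocess
from pathlib import Path

def build_ffmpeg_cmd(video: str, pattern: str, every_seconds: int = 120) -> list[str]:
    """Build an ffmpeg command that saves one JPEG every `every_seconds` seconds."""
    return [
        "ffmpeg", "-i", video,
        "-vf", f"fps=1/{every_seconds}",  # one output frame per interval
        "-q:v", "2",                      # high JPEG quality
        pattern,
    ]

def extract_frames(video: str, out_dir: str, every_seconds: int = 120) -> list[str]:
    """Run ffmpeg and return the saved frame paths, sorted by timestamp order."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    pattern = str(Path(out_dir) / "frame_%05d.jpg")
    subprocess.run(build_ffmpeg_cmd(video, pattern, every_seconds), check=True)
    return sorted(str(p) for p in Path(out_dir).glob("frame_*.jpg"))
```

Each extracted frame then gets sent to the local vision model (or a CLIP model) for a tag or description, and storing those tags keyed by timestamp gives you the searchable index.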
Cheers
that seems like a great system, i wonder how it’s structured.
translation is pretty good.
they want to make AI NPCs in games, which could be awesome if we can ever reduce the system requirements for running them.
I tried out a game/demo thing that was a tester for AI NPC dialogue. I asked an NPC to tell me about himself and he replied that he could not connect to server lol
There’s that one silly vampire game which uses AI NPCs; it looks kind of fun from what I’ve seen of people playing it.
Converting PDFs into HTML or RTF/TXT docs without OCR typos. Until recently, it was almost impossible to turn a scanned book from PDF into DOC or TXT, because the output of copying and pasting or converting with PDF tools was illegible. AI can now do a “deep AI seek” (look it up) into the texts.
I am converting a textbook into an audiobook in HTML (paragraph highlighting with manual sync) with an integrated popup glossary for every word (with grammar and meaning) and dictionary lookup on click.
Besides that, as an appendix to each chapter, I add all the explanations from the book.
I took the ~4,500 words of the book and asked for a grammar analysis and meaning lookup to create a glossary. The AI joyfully skipped many terms, but that is something I will fix when each chapter is finished. Now I am being punished with waiting despite having paid $20.
LLMs tend to be a “jack of all trades, master of none”. You are likely to find them useful for helping you with something you are inexperienced at, but not at something you are an expert in. However, because they lie a lot, it’s best to double-check your information, but the LLM can still be helpful with the “you don’t know what you don’t know” issue.
Curating massive music libraries. I’ve been using a small embedding model to organise my music for DJing, and being able to generate a t-SNE plot clustered on perceptual similarity has been wonderfully useful.
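The projection step is a one-liner with scikit-learn. A hedged sketch using random stand-in vectors (real ones would come from an audio embedding model; the shapes and perplexity here are arbitrary):

```python
import numpy as np
from sklearn.manifold import TSNE

def project_2d(embeddings: np.ndarray, seed: int = 0) -> np.ndarray:
    """Project high-dimensional track embeddings down to 2-D for plotting."""
    tsne = TSNE(n_components=2, perplexity=5, random_state=seed, init="random")
    return tsne.fit_transform(embeddings)

# Stand-in for real audio embeddings: 20 "tracks", 64 dimensions each.
rng = np.random.default_rng(0)
tracks = rng.normal(size=(20, 64)).astype(np.float32)
coords = project_2d(tracks)  # shape (20, 2); nearby points = similar-sounding tracks
```

Scatter-plotting `coords` (one point per track) gives the clustered map; tracks that embed similarly land close together.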
I’ve also found CLIP models useful for searching videos, just embed a screenshot every couple of min of footage and query with a description of the scene.
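The query step boils down to cosine similarity between one text embedding and all the frame embeddings. A toy sketch in plain NumPy with made-up vectors (a real pipeline would get both from a CLIP model):

```python
import numpy as np

def rank_by_similarity(text_emb: np.ndarray, frame_embs: np.ndarray) -> np.ndarray:
    """Return frame indices sorted from most to least similar to the query."""
    t = text_emb / np.linalg.norm(text_emb)
    f = frame_embs / np.linalg.norm(frame_embs, axis=1, keepdims=True)
    sims = f @ t                   # cosine similarity per frame
    return np.argsort(sims)[::-1]  # best match first

# Toy vectors: frame 1 points the same way as the query, frame 0 is orthogonal.
query = np.array([1.0, 0.0])
frames = np.array([[0.0, 1.0], [2.0, 0.0], [1.0, 1.0]])
order = rank_by_similarity(query, frames)  # → [1, 2, 0]
```

Returning the top few indices and their timestamps is all the "search videos with a description" feature needs.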
And as bad as generated subtitles can be, when the only other option is nothing at all they are pretty nice to have.
Running automated hacking and blackmail campaigns against AI companies.
Anything that’s fuzzy and impossible to automate with traditional algorithms, but that also has a reasonably high tolerance for error. It just makes up stuff a good portion of the time, you see.
However, I’ve found some benefits with AI. For example, I’m chatting with ChatGPT on credit cards, because it is something I may lean towards getting into. It’s helping me better understand than most people have tried explaining to me. Simply because it is giving me a more stream-lined response than people just beating the bush.
Watch out, personal finance is not one of those things.
Learning, exploring concepts and ideas.
I actually find it pretty helpful for tech support stuff. It doesn’t always get it right, but it’s usually at least in the right general area and TBH it beats going through endless forums where the answer is buried among 8 pages of people bickering about nothing, or those ones where someone has your exact problem and then replies “nm I fixed it” and doesn’t say what they did.
Chatbots? Basically nothing. Any interaction I have with one leads to spending more time verifying its output, inevitably finding many mistakes, and eventually finding a primary source for what I’m actually looking for. The best actual impact it has is forcing me to narrow down my nebulous question into what I actually specifically want, but the bot itself is contributing very little to that.
Neural nets in general have limited real usefulness in analyzing large batches of data when other purpose-built analysis software doesn’t exist.
“AI” is a misnomer and there is absolutely zero evidence to suggest that we’re even on a path toward actual AI, sometimes called AGI, though they’re also changing that to just mean a profitable LLM which is fucking hilarious.
Any task you use a bot to do, you will become worse at that task. For mass data analysis, that’s fine, poring over reams of data is already a skill that other technology has largely obsoleted. But using it to do research, to read or write for you, or god forbid to make actual decisions and think for you, are very slippery slopes that are already causing a lot of the general public to seriously erode their basic mental capabilities.
- Searching a large dataset with vague search criteria.
- Real-time feedback when studying a foreign language (since accuracy is less important than quantity).
- Apparently in medicine they’re using generative AI for something meaningful, but I’m not entirely convinced it is actually generative AI and I’d need to do more research.
- Sometimes it can help in learning to program and in sanity-checking code security.
If you’re thinking of protein design it is, just with a sequence instead of natural language text. Although it’s not just a straight LLM, there’s some kind of physics awareness engineered in as well.
An amazing use for it in audio engineering is for feedback suppression. The old way to give yourself more headroom required you to sit there and turn up the gain until feedback happens and cut that frequency. Now you just turn on the feedback suppression and it does all that for you on the fly. It’s game changing for live sound, every major venue has it now.
Great for film sound too. You’re filming a rainy scene and the rain is way too loud? You used to have to get the actors into the studio for voiceover; now you can often just filter it out.
Oh yeah you’re right! It’s the same for all unwanted noise. Rustling, wind, buzz, ac noise. All of it can be filtered out now! You can even take away the reverb from an untreated room and add in your own reverb. Convolution reverb is amazing, you can actually capture the reverb of any space you want and add it into your recording in post. I honestly don’t know how much an expensive treated room matters over some investment in the plugins that let you do those things.
An example for movies: instead of trying to capture the actors talking inside their helmets for Interstellar, they actually made an IR inside of the helmet itself and added that to the overdubs!
The way you create an IR (impulse response) to capture the reverb of a space is you take a speaker and play a sine sweep (or a gunshot/balloon pop), then record it with a good mic. Then just take that WAV file and put it into a convolution reverb plugin. It sounds identical; the technology is amazing! You can use this to capture all kinds of analog circuitry like guitar amps too, that’s how they make those guitar amp plugins.
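Under the hood, convolution reverb is literally discrete convolution of the dry signal with the IR. A toy NumPy sketch (real plugins do this with FFTs for speed; the signals here are made up):

```python
import numpy as np

def convolution_reverb(dry: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Apply a room's impulse response to a dry signal by direct convolution."""
    wet = np.convolve(dry, ir)              # length = len(dry) + len(ir) - 1
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet  # normalise to avoid clipping

# Toy example: a two-sample "room" that adds a quieter echo one sample later.
dry = np.array([1.0, 0.0, 0.0, 0.0])
ir = np.array([1.0, 0.5])
wet = convolution_reverb(dry, ir)  # → [1.0, 0.5, 0.0, 0.0, 0.0]
```

Swap `ir` for a recorded WAV of a real space (or a guitar amp's response) and the same operation imprints that space's character onto any recording.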
feedback suppression has been a thing for ages
it does not require AI, all you need to do is identify the consistent tone and subtract it out
I know it doesn’t need AI for older versions of feedback suppression, but there are newer systems using it that are more effective at dynamically subtracting those frequencies
i have never seen a pro live sound engineer use AI to do this
and i dont think a neural net will have significant improvements over standard ways to do it like the neve 5054 at the same latency
Yeah you know what, I definitely am wrong about this, I totally thought they were using AI but that makes way more sense
The best way to learn is to say something confidently wrong on the Internet haha
I went to my local neighborhood association because I wanted to improve where I live. I was elected president of the association a couple months later, mostly because no one else wanted to do it. It’s a fairly poor part of a medium sized city in the U.S.
I’ve been using AI (running locally on a computer I built that isn’t connected to the internet, to reduce harm to the environment) to apply for grants, plan events and help me run the meetings.
It is actually perfect for the job, and I say that as someone who thinks AI is mostly hype and useless for the majority of its current common uses. I feed it the text from city grant applications or ask it to make a poster to increase attendance, and it’s saved me a lot of time. As someone diagnosed with ADHD, I would not have been able to do most of the stuff I have accomplished so far without it.