Yeah but the people who made it like that probably understand whether to trust it to write code or not. The AI Tony wrote, he knows what it does best and he trusts it to write his code. Just because it’s AI doesn’t mean it’s an LLM. Like I trust the errors compilers give me even if I didn’t write them, because they’re reliable. And I trust my scripts to do the things I wrote them for, specifically since I tested them. Same with an AI you yourself made: you’d have tested it, and you’d know the design principles.
An AI is not a script. You can know what a script does; neural networks don’t work that way. You train them and hope you picked the right dataset for them to learn what you want them to learn. You can’t really test them. You can know that they work sometimes, but you also know they’ll fail sometimes, and there’s jack shit you can do about it. A couple of gigabytes of floating point numbers is not decipherable to anyone.
The point is that no vibe coder could design an LLM without an LLM already existing. The math and tech behind machine learning is incredible, whatever you may think. Just because we can spin up new ones at will doesn’t mean anyone could have skipped ahead and built Jarvis in 2008, even if all of society had been trying to do so - and they were trying.
In the fictional universe where a human could singlehandedly invent one from scratch in 2008 with 3D image generation and voice functionality that still exceeds modern tech… yeah, that person and their fictional AI wouldn’t necessarily be producing slop.
I think you missed the part where “nothing like it even existed”
So? Someone invented current LLMs too; nothing like them existed before either. If they’d vibe coded with them, they’d still be producing slop.
Coding an LLM is very, very easy. What’s not easy is having all the data, hardware, and cash to train it.
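To make the point concrete, here’s a rough sketch of the core math of a transformer language model in plain numpy: one attention block and an output projection, with made-up tiny sizes and random untrained weights. The code fits on a screen; what it’s missing is exactly what the comment says is hard - the training data, the GPU clusters, and the budget.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d = 50, 16          # toy vocabulary size and embedding width (arbitrary)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Random (untrained) parameters -- training them well is where the cost lives.
E  = rng.normal(0, 0.02, (vocab, d))   # token embedding table
Wq = rng.normal(0, 0.02, (d, d))       # query projection
Wk = rng.normal(0, 0.02, (d, d))       # key projection
Wv = rng.normal(0, 0.02, (d, d))       # value projection
Wo = rng.normal(0, 0.02, (d, vocab))   # project back to vocabulary logits

def next_token_probs(tokens):
    """Single-head causal self-attention, then a probability over the next token."""
    x = E[tokens]                        # (T, d) embeddings for the context
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)        # (T, T) attention scores
    mask = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[mask] = -1e9                  # causal mask: no peeking at the future
    x = softmax(scores) @ v              # attention-weighted mix of values
    return softmax(x[-1] @ Wo)           # distribution over the next token

probs = next_token_probs(np.array([3, 14, 7]))
print(probs.shape, round(float(probs.sum()), 6))
```

Untrained, it emits near-uniform noise; a real LLM is this same structure scaled up and trained on vastly more data than any individual could afford to process.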