

If I understand correctly, this means mostly adapting the interface?


Sure, you’re right, I just worry (maybe needlessly) about people re-inventing the wheel because it’s “easier” than searching, without properly understanding the cost of the entire process.


FWIW that’s a good question, but IMHO the better question is:
What kind of small things have you vibed out that you needed, that didn’t actually exist, or at least that you couldn’t find after a 5 min search on open source forges like Codeberg, GitLab, GitHub, etc.?
Because making something quick that kind of works is nice… but why even do so in the first place if it’s already out there, maybe even maintained, but at least tested?
Went ice skating today, knee-high socks, Debian user. It’s actually factual.


I agree… but, beside the point: I have access to a dedicated workshop and a tool library https://www.tournevie.be/ which challenges this whole setup. It’s relatively unique though, unfortunately, so your example still stands, thanks for sharing.


Yep. That’s exactly why I tend to never discuss “AI” with people who don’t actually have a PhD in the domain, or at least a degree in CS. It’s nothing against them specifically, it’s only that they repeat what they heard during marketing presentations with no ability to criticize it, and in such cases it can be quite dangerous.
TL;DR: people who could benefit from it don’t need it; people who would use it shouldn’t.


Mostly because the model is incapable
There, fixed that for you.


That’s their question too: why the hell did Google make this the default, as opposed to limiting it to the project directory?


Because “agentic”. IMHO running commands is actually cool; doing it without a very limited scope though (as he did say in the video) is definitely idiotic.


Well… at least do that for Windows and macOS, not for Linux.


Because people who run this shit precisely don’t know what containers, scopes, permissions, etc. are. That’s exactly the audience.


The user can choose whether the AI can run commands on its own or ask first.
That implies the user understands every single command with every single parameter. That’s impossible even for experienced programmers; here is an example:
rm *filename
versus
rm * filename
where a single character makes the entire difference between deleting all files ending in filename, versus deleting all files in the current directory plus the file named filename.
Of course here you will spot it, because you’ve been primed for it. In a normal workflow, under pressure, it’s totally different.
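If you want to see it safely yourself, here is a throwaway demonstration (the directory and file names are made up; prefixing with echo prints what rm would actually receive instead of deleting anything):
mkdir /tmp/glob_demo && cd /tmp/glob_demo
touch a_filename b_filename filename other
echo rm *filename    # prints: rm a_filename b_filename filename
echo rm * filename   # prints: rm a_filename b_filename filename other filename
Same command, one extra space, completely different blast radius.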
Also, IMHO more importantly, if you watch the video (~7 min in) they clarified they expected the “agent” to stick to the project directory, not to be able to go “out” of it. They were obviously painfully wrong, but it would have been a reasonable assumption.


It should also be sandboxed with hard restrictions that it cannot bypass
duh… just run it in a container and that’s it. It won’t blue-pill its way out.
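For what it’s worth, a minimal sketch of what that looks like, assuming Docker is installed (agent-image is a hypothetical image shipping whatever agent CLI you use; the flags are the point):
# --rm: throw the container away afterwards
# --network none: no network access at all
# -v "$PWD":/work: the project directory is the only host path visible inside
docker run --rm -it --network none -v "$PWD":/work -w /work agent-image
Worst case it rm -rf’s /work, i.e. the project directory you explicitly handed it, and nothing else.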


I think that’s the point: the “agent” (whatever that means) is not running in a sandbox.
I imagine the user assumed permissions were narrow at first, e.g. the single directory of the project, but nothing outside of it. That would IMHO be a reasonable model.
They might be wrong about it, clearly, but it doesn’t mean they explicitly gave permission.
Edit: they say it in the video, ~7 min in; they expected deletion to be scoped within the project directory.


Wow… who would have guessed. /s
Sorry, but if in 2025 you believe claims from BigTech, you are a gullible moron. I genuinely do not wish data loss on anyone but come on, if you ask for it…


Me too, they just keep on investing in interop and I’m all for that.


Right, better safe than sorry. The important point though, IMHO, is that with Proton and now FEX they have shown that compatibility layers are not that costly or complex.
So… I don’t want to diminish how amazing that is, technically speaking, but we now all know it’s feasible. Initially it looked like supporting an entire OS architecture was ridiculous (and it was; emulation was only “good enough” for games that were some years old, running on a much more powerful machine) until somebody tried “just” swapping or fixing the right API (i.e. DirectX) and… that was actually OK.
Again, it’s a TON of work. A lot of it also comes from Wine. But… now we know why it works and how to do it. Even if Valve were to lock down SteamOS, that knowledge wouldn’t be lost on the broader community.
PS: they briefly mention this during the Tested video (sorry, YouTube only) on the new hardware.


FWIW I’ve been using SteamOS on the Steam Deck for ~3 years now and, from gaming to tinkering, no major problems. Never had to tinker hard or re-install. A couple of times it didn’t suspend properly or I had to hold the power button to force a shutdown, but that’s about it.
I doubt Valve would back off from the openness, because that’s their one single advantage.


True, but would one want to have a BigTech label on their Linux distribution? Wouldn’t that kind of miss the point and bring us back to e.g. Chromebooks?
Open an issue to explain why it’s not enough for you? If you can make a PR that actually implements the things you need, do it?
My point is not to say that everything is already out there and perfectly fits your need, only that a LOT is already out there. If we all re-invent the wheel in our own corners it’s basically impossible to learn from each other.