There’s a difference between OpenAI storing conversations and the LLM being able to search all your previous conversations in every clean session you start.
> Always has been. Nothing has changed.
The fact that OpenAI stores everything you type doesn’t mean ChatGPT will use prior conversations as context for a new prompt, unless you had the memory feature turned on (which also let you explicitly make it “forget” whatever you chose).
What OpenAI stores and what the LLM uses as input when you start a session are totally separate things. This update is about the LLM being able to search your prior conversations and reference them (using them as input, in practice), so saying “Nothing has changed” is false.
Maybe for training new models, which is a totally different thing. With this update, everything you type can be stored and used as context.
I already avoid sharing anything personal with these cloud-based LLMs, but it’s becoming more and more important to run a private, local LLM on your own computer.
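The distinction above can be made concrete with a toy sketch (hypothetical, not OpenAI’s actual implementation; the function and message names are made up): the model only “sees” whatever the client assembles into the prompt, so stored history changes nothing unless a memory/search feature explicitly retrieves it and injects it as context.

```python
def assemble_prompt(session_messages, retrieved_history=None):
    """Build the text actually sent to the model for one turn.

    Stored conversations are irrelevant to the model unless they are
    passed in via retrieved_history (i.e. a memory/search feature).
    """
    parts = []
    if retrieved_history:  # only happens with memory/search enabled
        parts.append("Relevant past conversations:\n" + "\n".join(retrieved_history))
    parts.append("Current session:\n" + "\n".join(session_messages))
    return "\n\n".join(parts)

# A clean session without retrieval contains only the new messages:
clean = assemble_prompt(["Hi, what's my dog's name?"])

# With conversation search, prior chats become model input:
with_memory = assemble_prompt(
    ["Hi, what's my dog's name?"],
    retrieved_history=["User said their dog is named Rex (earlier chat)."],
)
```

In the first call the model has no way to answer; in the second, the retrieved snippet is literally part of its input, which is the change the update makes.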
Balder@lemmy.world to Technology@lemmy.world • Zuckerberg Lobbies Trump to Avoid Meta Antitrust Trial (15 days ago)
Non-paywalled link: https://archive.is/1QR8H
Balder@lemmy.world to Technology@lemmy.world • China plans world’s first fusion-fission power plant (16 days ago)
It’s more accurate to say they might be, but not necessarily. China is very aware of the benefits of staying ahead technologically.
Balder@lemmy.world to Technology@lemmy.world • MIT introduced a smart assistant for LLM (16 days ago)
Well yeah, but the article is about a paper showing a strategy to improve planning capabilities compared to using LLMs as they currently are. It’s just research; they’re not saying to use it in production now, and I’d say that isn’t something the researchers are even worried about for this particular artifact.
Balder@lemmy.world to Technology@lemmy.world • Do you dislike your dependency on Android? To the rescue comes Mobile Linux "PostmarketOS" - Funded via Donations (link to 2025 Priorities -> Focus on Reliabilty, Audi, Camera, etc) (18 days ago)
I think the problem is that there’s just too much work that needs to go into these things, and people don’t really think about it. Android has, at this point, almost two decades of refining the experience for phones, so it’s a good starting point.
But the most important thing, I guess, is software. People often underestimate how much time and effort it takes to refine software to the point where it becomes polished and bug-free. Android has a mature stack for building apps that is very difficult to replicate.
But to be clear, I didn’t mean just getting a de-Googled Android and settling for it. Android could also evolve in ways that aren’t in Google’s interest, such as letting you have a sort of DeX that’s actually a Linux desktop environment.
Balder@lemmy.world to Technology@lemmy.world • Do you dislike your dependency on Android? To the rescue comes Mobile Linux "PostmarketOS" - Funded via Donations (link to 2025 Priorities -> Focus on Reliabilty, Audi, Camera, etc) (18 days ago)
It’s much less effort to build on the Android Open Source Project, though.
Balder@lemmy.world to Technology@lemmy.world • Privacy disaster as LGBTQ+ and BDSM dating apps leak private photos (19 days ago)
At this point, I think it’s necessary to have a sort of alternate identity online and keep anything private (photos of yourself and other personal information) strictly offline. Except for government stuff, which requires your real identity.
Balder@lemmy.world to Technology@lemmy.world • Privacy disaster as LGBTQ+ and BDSM dating apps leak private photos (19 days ago)
Brace yourselves, because this is only going to get worse with the current “vibe coding” trend.
Balder@lemmy.world to Technology@lemmy.world • Move fast, kill things: the tech startups trying to reinvent defence with Silicon Valley values (20 days ago)
I’m starting to think my parents, who are already reaching their 70s, are lucky people.
Balder@lemmy.world to Technology@lemmy.world • What could possibly go wrong? DOGE to rapidly rebuild Social Security codebase (20 days ago)
Only those that criticize the government, somehow. “Oops, because of some complicated algorithm, it only affected people who posted the word ‘orange’ on social media recently.”
Balder@lemmy.world to Technology@lemmy.world • OpenAI's move to allow generating "Ghibly stlye" images isn't just a cute PR stunt. It is an expression of dominance and the will to reject and refuse democratic values. It is a display of power (20 days ago)
Yeah, the text makes a lot of loose assumptions, although the overall sentiment is correct: these big companies, and especially egocentric billionaires, do things to provoke others simply as a display of power. I believe the linked text about it being a distraction for the new round of funding gives the real reason.
Balder@lemmy.world to Technology@lemmy.world • Grok Reveals Elon Musk Has ‘Tried Tweaking My Responses’ After AI Bot Repeatedly Labels Him a ‘Top Misinformation Spreader’ (21 days ago)
I mean, you could argue that if you ask the LLM something multiple times and it gives that answer the majority of the time, it has been trained to make that association.
But a lot of these “Wow! The AI wrote this” moments might just as well be something random that came out of it by chance.
Balder@lemmy.world to Technology@lemmy.world • Google will move Android AOSP development behind closed doors (21 days ago)
Not only that, the Android Police article mentions they had a lot of trouble merging the internal and public branches, so I’m guessing they’ve diverged more and more over time.
Balder@lemmy.world to Technology@lemmy.world • Musk 'Pressured' Reddit CEO to Silence DOGE Critics, Leaving Moderators Outraged: Report (21 days ago)
There are apps that display user karma, though.
Balder@lemmy.world to Technology@lemmy.world • Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End (29 days ago)
I remember listening to a podcast about scientific explanations. The host is very knowledgeable about the subject, does his research, and talks to experts when the topic involves something he isn’t an expert in himself.
There was an episode where he got into how technology only evolves with science (because you need to understand what you’re doing, and you need a theory of how it works, before you can make new assumptions and test them). He gave the example of the Apple Vision Pro: despite being a new machine (in its hardware capabilities, at least), the eye-tracking algorithm it uses was developed decades ago and was already well understood and proven by other applications.
So his point in the episode is that real innovation can’t be rushed by throwing money or more people at a problem, because real innovation takes real scientists having novel insights and running experiments that expand our knowledge. Sometimes those insights are completely random; often you need a whole career in the field; and sometimes it takes a new genius to revolutionize it (think Newton and Einstein).
Even the current wave of LLMs is simply a product of Google’s 2017 transformer paper showing that language models could be parallelized, which led to the creation of ever-larger language models. That was Google doing science. But you can’t control when the next breakthrough will be discovered, and LLMs are subject to the same constraint.
In fact, the only practice we know that actually accelerates science is collaboration among scientists around the world: publishing reproducible papers so that others can build on them and have insights you didn’t even think of, and so on.
Balder@lemmy.world to Technology@lemmy.world • US classifies South Korea as ‘sensitive country,’ limiting cooperation on advanced tech (1 month ago)
Seems like the US and China will be good allies, judging by how things are going.
Balder@lemmy.world to Technology@lemmy.world • AI coding assistant refuses to write code, tells user to learn programming instead (1 month ago)
This seems like just a semantic difference to me, though. People say the LLM is “making shit up” when it outputs something incorrect, and that happens (to my knowledge) usually because the information you’re asking about wasn’t represented well enough in the training data to reliably steer the answer toward it.
In any case, users expect LLMs to be deterministic when they’re not at all. They’re deep learning models so complicated that it’s impossible to predict what effect a small change in the input will have on the output. A model might give the expected answer to a certain question, yet give a very unexpected one just because you added or changed a word in the input, even one that appears irrelevant.
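A toy sketch of both effects (this is not a real LLM; the vocabulary, probabilities, and the trigger word are entirely made up for illustration): sampling makes repeated runs non-deterministic, and the distribution itself depends on the exact prompt text, so one “irrelevant” word can shift the likely answer.

```python
import random

def next_token(prompt, temperature=1.0, seed=None):
    """Sample one next token from a fabricated model distribution."""
    rng = random.Random(seed)
    # Hypothetical behavior: the word "please" shifts the whole
    # distribution, standing in for real models' input sensitivity.
    if "please" in prompt:
        dist = {"Sure": 0.7, "No": 0.2, "Maybe": 0.1}
    else:
        dist = {"No": 0.6, "Sure": 0.3, "Maybe": 0.1}
    tokens, probs = zip(*dist.items())
    # Temperature sharpens (<1) or flattens (>1) the distribution
    # before sampling; any temperature > 0 still leaves randomness.
    scaled = [p ** (1.0 / temperature) for p in probs]
    return rng.choices(tokens, weights=scaled, k=1)[0]

# Same prompt, different runs: the sampled answer can differ.
# Add one word, and the most likely answer itself changes.
for prompt in ("write my essay", "write my essay please"):
    print(prompt, "->", next_token(prompt))
```

Only pinning a seed (and temperature 0, in real APIs a setting like greedy decoding) makes the output repeatable, which is why “it answered correctly yesterday” guarantees nothing today.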
Yeah, adults should be able to tell the difference between someone disagreeing with them and someone being rude/trolling.
I don’t think I’ve ever needed to block anyone, but I’ve mostly stopped commenting these days, because I realized that people often just don’t understand something and say things out of ignorance plus pretentiousness, immediately attacking whoever corrects them. I don’t think there’s a way out of that in these kinds of open discussion threads, unfortunately, because it’s not exactly bad faith.