

“New York City police investigating lake after officers got wet after walking into it.”
I keep picking instances that don’t last. I’m formerly known as:
@EpeeGnome@lemm.ee
@EpeeGnome@lemmy.fmhy.net
@EpeeGnome@lemmy.antemeridiem.xyz
@EpeeGnome@lemmy.fmhy.ml


As much as I hate these assholes co-opting Tolkien’s works, it is a really apt name for this company. In the story, the palantíri were wondrous tools for communication until they were corrupted and taken over by literal evil, turned into tools of surveillance and control.
I dislike that the article refers to Thiel as a Libertarian without putting quote marks around it. While he is firmly opposed to governments regulating businesses, he makes and sells the tools that help governments exercise more authority over private citizens. I know the term has also been largely co-opted by fascists, but it still annoys me.


Some of the services supposedly built on AI have turned out to be exactly that. The AIs themselves aren’t though. They’re dumb in a way that is very distinct from the way we humans are dumb.


A TV doctor snake oil marketer turned MAGA official.


Not high school, but close. We hung out in the same group of friends freshman year at the local technical college. He was a very free-spirited guy, with all sorts of wild tattoos and piercings, like a few others in the group. He even got some sort of genital piercing that I declined to see when he was showing it off after he got it. He was also fairly anti-establishment, an atheist who I think leaned politically towards anarchism.
Unlike the rest of the group though, he was way into drugs. There were a few who dabbled in marijuana and probably one dedicated stoner, but nothing like this. This guy was snorting lines of cocaine off the bathroom sink between classes, and always finding new pills to try. Aside from that he was a very personable guy who had interesting perspectives to include in our conversations about anything and everything. Even when he wasn’t all there, at worst he was still decent company, so everyone just let it go. We’d all expressed our concerns at one point, and there wasn’t any point in continuing to bring it up. We were a very diverse group and most of us had some things we tolerated but didn’t agree with in each other.
For Christmas that year I bought a cheap little gift for each person in the group. Most were silly, but I got him a pill organizer. He excitedly began to brainstorm organizational ideas on how to use it, going on about uppers and downers and more terminology I can’t recall. I told him something along the lines of knowing he wasn’t going to stop experimenting, but I hoped it would help him stay safe. He hugged me and said it was one of the most thoughtful gifts he’d ever gotten.
At the end of the school year we largely all ended up going different ways and I lost track of him. Many years later, I heard from a friend I had kept in touch with that they had run into him. I’d feared he’d end up in jail or dead, but he was doing well, if in an unexpected way. He’d still kept the crazy piercings, but was otherwise a button-down, white-collar guy. He had a wife and kids, lived in a suburban home, and worked as a manager at some office business. He was even a deacon at his church. He was healthy, happy, and proud to be many years clean of drugs. I’m glad he kept enough of the rebel spirit to keep the piercings, and I’m more glad he was off the drugs.


I’ve admittedly got a lot of selection bias, since people don’t tend to bring me their computer when it’s working correctly. I’m sure it usually works fine. Still, the only times I’ve seen a multiboot system suddenly fail, it was Windows’s fault.


We do understand exactly how LLMs work though, and it in no way fits with any theories of consciousness. It’s just a word extruder with a really good pattern matcher.


I like the comparison, but LLMs can’t go insane since they’re just word-pattern engines. It’s why I refuse to go along with the AI industry’s insistence on calling it a “hallucination” when one spits out the wrong words. It literally cannot have a false perception of reality because it does not perceive anything in the first place.


This feels to me like a common folk saying from somewhere translated into English. It’s also a very apt and appropriately vulgar metaphor for the situation.


You can, but on the same drive they will share the same bootloader. While Windows can officially share its EFI partition, be aware that updates may rewrite it and might not bring your other install along. This is fixable if you have some expertise with Windows boot repair commands, but it’s still very annoying.
This is why I always prefer to set up multiboot systems on separate drives, each with its own EFI partition, and always use the non-M$ install for the boot menu.
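If an update does clobber the shared EFI partition, the repair usually looks something like this. This is a sketch, not a definitive recipe: the drive letters and paths are examples, and it assumes an elevated command prompt or the Windows recovery environment.

```shell
:: Mount the EFI system partition to a free drive letter (S: is an example)
mountvol S: /S

:: Rebuild the Windows boot files onto it (C:\Windows = the Windows install)
bcdboot C:\Windows /s S: /f UEFI

:: For the Linux side, boot a live USB and reinstall GRUB into the same ESP,
:: e.g.:  grub-install --efi-directory=/boot/efi --bootloader-id=GRUB
```

The annoying part is that none of this is discoverable from the boot failure itself; you have to already know these commands exist.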
In Win11, some of the Control Panel links now only open the equivalent page in the Settings app, despite the fact that the two don’t have feature parity yet. The workaround is to type the panel name into the file path bar manually if you want to adjust one of the missing settings. I mention this not because I think people here will want to know it, but because it gives y’all another reason to be glad to have moved off of Windows.
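For anyone stuck doing this: the classic applets can also be launched by their canonical names from the Run dialog, cmd, or the Explorer path bar. The two names below are real documented examples; your mileage may vary by Windows build.

```shell
:: Open classic Control Panel applets directly by canonical name
control.exe /name Microsoft.PowerOptions
control.exe /name Microsoft.NetworkAndSharingCenter

:: The old .cpl files still work too
control.exe powercfg.cpl
```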
If all 6 got the same answer multiple times, then that means your query very strongly correlated with that reply in the training data used by all of them. Does that mean it’s therefore correct? Well, no. It could mean there were a bunch of incorrect examples of your query that they all used to come up with that answer. It could mean that the examples they’re working from seem to follow a pattern your problem fits into, but the correct answer doesn’t actually fit that seemingly obvious pattern. And yes, there’s a decent chance it could actually be correct. The problem is that the only way to rule out those other, still quite likely possibilities is to actually work the problem yourself, at which point asking the LLMs accomplished nothing.
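To illustrate the first failure mode with a deliberately silly toy (everything here, the corpus, the numbers, the “models”, is made up): six “models” that each learn from samples of the same mostly-wrong data will confidently agree on the same wrong answer.

```python
from collections import Counter
import random

# A shared pool of (question, answer) examples; most repeat the same wrong answer.
shared_corpus = [("2+2", "5")] * 80 + [("2+2", "4")] * 20

def train_model(seed):
    """Each 'model' sees its own random sample of the shared corpus and
    just memorizes the most common answer for the question."""
    rng = random.Random(seed)
    sample = [rng.choice(shared_corpus) for _ in range(50)]
    counts = Counter(ans for q, ans in sample if q == "2+2")
    return counts.most_common(1)[0][0]

answers = [train_model(seed) for seed in range(6)]
print(answers)  # all six agree on "5" — unanimity, but not correctness
```

The agreement tells you about the training data they share, not about arithmetic.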


Yes, I just glossed over that detail by saying “similar to”, but that is a more accurate explanation.


Unfortunately the most probable response to a question is an authoritative answer, so that’s what usually comes out of them. They don’t actually know what they do or don’t know. If they happen to describe themselves accurately, it’s only because a similar description was in the training data, or they were specifically instructed to answer that way.


Yeah, this really depends on what you mean by winning here. Are we actually changing the other person’s mind, or do they admit we’re right but still believe otherwise? Are people who witness the argument included? Do people continue to agree indefinitely? Does it change reality to match?
Same username I use everywhere. I came up with it back around '07. For the first part, I fenced epee at the time. I never did earn my C rating in the sport, though I came within a literal inch of it multiple times. For the second part, we were playing a lot of DnD at the time, and I am very much like a DnD gnome.