Eskating cyclist, gamer and enjoyer of anime. Probably an artist. Also I code sometimes, pretty much just to mod titanfall 2 tho.

Introverted, yet I enjoy discussion to a fault.

  • 11 Posts
  • 529 Comments
Joined 3 years ago
Cake day: June 13th, 2023

  • Oh. We exist.

    You may need to be subbed to a relevant community (for me it’s anime girls) to repeatedly see a given user to the point that you start recognizing them. I certainly have users I recognize.

    You might know me for running the dailycomic bot that has continued the CnH posting after the original mod stopped doing it manually, or the !moomin@sopuli.xyz community.

    But I know for a fact that most people who’d recognize my handle consider me “the anime girl poster”. I use the same username in games, and have been asked “are you the MentalEdge from Lemmy?” once, which was new.

    From my perspective there are also people who comment often enough that I remember them.

    There’s the guy who posts a daily collection of screenshots from games he’s playing, along with commentary on how he likes them.

    There’s a couple big names over on lemmyshitpost.

    Same goes for tenforward.

    There’s Blaze, who I know as a general background engine of activity and advocate for good on the fediverse.

    There’s a bunch more, but I’m not gonna bombard them with mentions.


  • Like you said, it might be impossible to avoid ascribing things like intentionality to it

    That’s not what I meant. When you say “it makes stuff up” you are describing how the model statistically predicts the expected output.

    You know that. I know that.

    That’s the asterisk. The more in-depth explanation a lot of people won’t bother to learn. Someone who doesn’t read that far into it can read that same phrase and assume that we’re discussing what type of personality LLMs exhibit, that they are “liars”. But they’d be wrong. Neither of us is attributing intention to it or discussing what kind of “person” it is; in reality we’re referring to the fact that it’s “just” a really complex probability engine that can’t “know” anything.

    No matter what word we use, if it is pre-existing, it will come with pre-existing meanings that are kinda right, but also not quite, requiring that everyone involved in a discussion knows things that won’t be explained every time a term or phrase is used.

    The language isn’t “inaccurate” between you and me because you and I know the technical definition, and therefore what aspect of LLMs is being discussed.

    Terminology that is “accurate” without this context does not and cannot exist, short of coming up with completely new words.


  • Yes.

    Who are you trying to convince?

    What AI is doing is making things up.

    This language also credits LLMs with an implied ability to think they don’t have.

    My point is that we literally can’t describe their behaviour without using language that makes it seem like they do more than they do.

    So we’re just going to have to accept that discussing it will have to come with a bunch of asterisks a lot of people are going to ignore. And which many will actively try to hide in an effort to hype up the possibility that this tech is a stepping stone to AGI.