• nickwitha_k (he/him)@lemmy.sdf.org · 2 days ago

    why are you arguing that at me?

Rationally and in a vacuum, anthropomorphizing tools and animals is kinda silly and sometimes dangerous. But human brains don’t do so well at context separation and rationality. They are very noisy and prone to conceptual cross-talk.

    The reason this is important is that, as useless as LLMs are at nearly everything they are billed as, they are really good at fooling our brains into thinking that they possess consciousness (plenty of people, even on Lemmy, ascribe levels of intelligence to them that are impossible with the technology). Just as knowledge and awareness don’t grant immunity to propaganda, our unconscious processes will do their own thing. Humans are social animals and our brains are adapted to act as such, resulting in behaviors that run the gamut from wonderfully bizarre (keeping pets that don’t “work”) to dangerous (attempting to pet bears or keeping chimps as “family”).

    Things that our brains perceive, consciously or unconsciously, are stored with associations to other similar things. So the danger I was trying to highlight is that being abusive to a tool like an LLM, which can trick our brains into associating it with conscious beings, can indirectly reinforce the acceptability of abusive behavior toward other people.

    Basically, like I said before, one can unintentionally train oneself into practicing antisocial behaviors.

    You do have a good point, though, that people believing ChatGPT is a being they can confide in, etc., is itself very harmful and likely to lead to antisocial behaviors.

    that is fucking stupid behavior

    It is human behavior. Humans are irrational as fuck, even the most rational of us. It’s best to plan accordingly.