Profile pic is from Jason Box, depicting a projection of Arctic warming to the year 2100 based on current trends.

  • 0 Posts
  • 247 Comments
Joined 1 year ago
Cake day: March 3rd, 2024

  • I think Meta and others went open with their models as a firewall against legal action over their blatant stealing of people’s work to train on. If the models had stayed commercial and controlled within the company, they could be (probably still wouldn’t be, but could be) forced to shut down or start over properly. But it’s far too late now, since the models are everywhere there is a GPU running, even if they never progress past their current state.

    That being said, not much is getting done about the safety factors. Yes, they are only LLMs and not AGI, but there’s a commonality: not being sure what’s going on inside the box, or whether it’s really doing what it’s told to do. Now is the time for boundaries to be set and research to be done, because once something happens (LLM or AGI) it’s too late. So what do I want to see happen? Heavy regulation and transparency on the leading edge of development. And stop the madness of treating more compute as the only solution, given its environmental effects. It might turn out to be the only solution, but companies are going that way because it’s the easiest way to throw money at a problem and reap profits, which is all they care about.








  • Current LLMs would end that sketch quickly, agreeing with everything the client wants. Granted, they wouldn’t be able to actually produce it, but as far as the expert’s job of narrowing down the problems with the request goes, ChatGPT would be all excited about making it happen.

    The hardest thing to do with an LLM is to get it to disagree with you, even with a system prompt. The training to make the user happy with the results runs too deep to undo.







  • As a (still) Linux novice, this is something I noticed with newer distributions, but I never thought about your valid point. I did always wonder why there should be different places to install things in the same OS. It would probably be fine if they all handled things the same way, but then all you’re doing is changing the UI. It never “felt” like they did things the same.


  • People don’t change. Some people look at what they’re repeating and try to understand the why; others blindly do what they’re told by whoever they deem an authority. LLMs are just the latest: before them it was various websites (which LLMs were trained on, uh oh), and before that it was computer magazines with programs to type in, the later ones maybe even including a free CD of stuff. The printed media was less likely to contain anything malicious, but lord did it have errors, and the right error in the wrong place could ruin someone’s day if they ran it without understanding it.


  • It occurred just like gasoline shortages occur. If the media doesn’t run headlines urging people to buy as much as they can immediately, things can adjust to meet normal demand even when there is a supply chain problem. But when everyone grabs all the stock at the same time, even a running production line can’t keep up with that demand in a just-in-time system. I once experienced a local fuel shortage because of news about a damaged oil pipeline far away: gas became unavailable for a few days, then supplies started filling back up, all long before the pipeline issue could actually have affected us.

    “A person is smart. People are dumb, panicky, dangerous animals and you know it.”




  • Lots of attacks on Gen Z here, with some valid points about the education they were given by the older generations (yet somehow it’s their fault). Good thing none of the other generations are being fooled by AI marketing tactics, right?

    The debate on consciousness is one we should be having, even if LLMs themselves aren’t really there. If you’re new to the discussion, look up AI safety and the alignment problem. Then realize that while people think it’s about preparing for a true AGI with something akin to consciousness and the dangers we could face, we already have alignment problems without an artificial intelligence. If we think a machine (or even a person) is doing things for the same reasons we want them done, and they aren’t, but we can’t tell, that’s an alignment problem. Everything’s fine until they follow their own goals and those goals suddenly diverge from ours. And the dilemma is that there aren’t any good solutions.

    But back to the topic. None of this is the fault of Gen Z. We built the world the way it is and raised them to be gullible and dependent on technology. Using them as a scapegoat (those dumb kids) just ignores our own failures.