• 0 Posts
  • 60 Comments
Joined 1 year ago
Cake day: July 9th, 2023


  • Literally nothing you’ve said gives any indication that you actually know the current state of foundation model research. I won’t claim it’s my research specialty, but I work directly with people whose full-time job is research and tuning on foundation models, and everything I’m saying is being relayed from conversations I’ve had with them.

    “Cannot ever possibly be used like that”… Like what specifically? To drive a car? That’s being done. To give financial advice? That’s being done. To console people who are suicidal or at risk of harming themselves? That’s being done. To make kill / no kill decisions in an active warzone? It’s being considered (if not already being done in secret).

    This technology is being used in extremely consequential positions despite having very weak guarantees around safety. This should give any reasonable person pause. I’m not taking any firm stance on whether this specific regulation is the right approach, but if you think there should be no legal accountability for the outcomes of how this technology gets used then I guess you’re someone who thinks seatbelts should be optional in cars and it’s okay for airplanes to fall out of the sky due to neglect.



  • That’s not what algorithms researchers mean when we talk about “understanding”. Obviously we know the mechanism by which it operates; it’s not some unknown alien technology that dropped into our laps.

    Understanding an algorithm means being able to predict the characteristics of its outputs based on the characteristics of its inputs. E.g. will it give an optimal solution to a problem that we pose? Will its response satisfy certain constraints or fall within certain bounds?

    Figuring this stuff out for foundation models is an active area of research, and the absence of this predictability is an enormous safety concern for any use cases where the output can be consequential.
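
    To make that concrete, here’s a minimal Python sketch (my own toy illustration, not anything from the research itself): for a classical algorithm we can state properties of the output up front and check them for any input, whereas for a foundation model there’s no analogous guarantee, only empirical spot-checks.

    ```python
    # For a classical algorithm, we can predict and verify properties of the
    # output for *every* input before we ever run it.
    def sort_asc(xs):
        return sorted(xs)

    def satisfies_guarantees(inp, out):
        ordered = all(a <= b for a, b in zip(out, out[1:]))   # output is sorted
        permutation = sorted(inp) == sorted(out)              # nothing added or lost
        return ordered and permutation

    inp = [3, 1, 2]
    assert satisfies_guarantees(inp, sort_asc(inp))  # holds for any input list

    # For a foundation model there is no such proof: the best we can do is
    # sample a finite set of inputs and spot-check the outputs, which says
    # nothing about the inputs we didn’t test.
    ```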

    “It cannot possibly develop agency.”

    I don’t believe I’ve suggested anywhere that I think it will, but I’ll play around with this concern anyway… There’s a lot of discussion going on about having models feed back on themselves to learn from their own output. I don’t find it all that hard to imagine that something we could reasonably consider self-awareness could form in a very complex neural network that is able to consume and process its own outputs. And once self-awareness starts to form, it’s not that hard for me to imagine a sense of agency following. I have no idea what the model might use that agency for, but I don’t think it’s all that far-fetched to consider the possibility of it happening.


  • Sure, but this outcome is not at all surprising. There are plenty of smart AI people who have nuanced views of what kind of threat could be posed by recklessly unleashing tools that we don’t fully understand into the hands of people who are likely to do harmful things with them.

    It’s not surprising that those valid, nuanced concerns get translated into overly simplistic misrepresentations, entangled with pop sci-fi panic around rogue AI, as they move into public discourse.



  • AI person reporting in. Without saying whether or not I personally believe that the current tools will lead to the end of humanity, I’ll point out a few possibilities that I find concerning about what’s going on:

    • The hype around AI is being used to justify mass layoffs, where humans are being replaced by tools that do a questionable job and can’t really understand the things those humans could understand. Whether or not the AI can do as good of a job according to some statistical measurement is less relevant than the fact that a human is less likely to make an extremely grave mistake and more likely to recognize it when one happens. I’m concerned this will lead to cross-industry enshittification on an unprecedented scale.

    • The foundation models consume a huge amount of energy. The more impressive you want them to be, the more energy they need. As long as the data centers that run them depend on fossil fuels, they’ll be pumping a huge amount of carbon into the air just to replace jobs that didn’t need to be replaced.

    • As these tools are used more and more, they’re going to end up “learning” from content they created themselves instead of something closer to a ground truth. It’s hard to predict exactly what kind of degradation of service will come from this, but the more systems we build that rely on these tools, the more harm it will do to us (there’s a toy simulation of this feedback problem after this list).

    • Given the cost and nature of these tools, they’re likely to yield the most benefit to moneyed interests that want to automate the systems that maintain their power and wealth. E.g. generating large amounts of convincing disinformation to manipulate the public into supporting politicians or policies that benefit a small number of wealthy people in the short term while locking humanity into a path towards destruction.
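
    Since I mentioned a toy simulation above, here it is in Python. It’s a deliberately crude stand-in, nothing like how real foundation models are trained, but it shows the flavor of the feedback problem: a “model” that is just a Gaussian fit, repeatedly retrained on its own samples, drifts away from the original data because estimation errors compound and nothing anchors it back to ground truth. Researchers have started calling this kind of degradation “model collapse”.

    ```python
    import random
    import statistics

    random.seed(0)
    # Stand-in for human-generated "ground truth" data: samples from N(0, 1).
    data = [random.gauss(0.0, 1.0) for _ in range(200)]

    for generation in range(20):
        # "Train" the model: fit a Gaussian to whatever data we currently have.
        mu = statistics.fmean(data)
        sigma = statistics.stdev(data)
        print(f"gen {generation:2d}: mu={mu:+.3f} sigma={sigma:.3f}")
        # The next generation learns only from this generation's output,
        # so each fitting error is baked into all future generations.
        data = [random.gauss(mu, sigma) for _ in range(200)]
    ```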

    And none of this accounts for possible future iterations of AI tools that may be far more capable than what exists today. That future technology will most likely be controlled by powerful people who are primarily interested in using it to bolster the systems that keep them in power, to the detriment of humanity as a whole.

    Personally I’m far less concerned about a malicious AI intentionally doing harm to humanity than AI being used as a weapon by unscrupulous people.


  • I agree with you, but at the same time I can’t think of any other candidate who would both (1) have enough name recognition to motivate semi-apathetic Democrats to vote, and (2) not rile up the semi-apathetic bigots into counter-voting.

    According to the gossip, Biden has said that if he does choose to step down, he’ll promote Harris. I don’t think Democrats will have much of a problem supporting Harris, but I’m concerned that her being a woman of color will motivate a lot of bigots into a counter-vote when they might otherwise have stayed home. The next best pick might be Buttigieg, but then you get the homophobic bigots coming out for the counter-vote. Newsom might be the next best after that, but then you have the anti-California lunatics coming out to counter-vote him.

    As much as I don’t want to cater to bigots, I think the stakes are just too high here. If it were a campaign against Mitt Romney then I wouldn’t think twice about running any of these people, but when we’re on a razor’s edge against America spiraling into a fascist dictatorship, every risk needs to be accounted for.

    Obviously Biden should’ve declared back in August that he wouldn’t run for reelection so the Democrats could run a primary and build someone up with name recognition and a positive image, but now it’s too late for that 🤷‍♂️



  • Weird how this notion of “personal responsibility” applies to every person except the people who choose to intentionally misrepresent the product by branding it in misleading ways. The people running this company aren’t responsible for their role in misleading the public just because the fine print happens to indicate that the product isn’t actually what it’s marketed as?

    Now you’ll probably say something to the effect of “I never said that! You’re putting words in my mouth!” except what other motivation can you have to jump to the defense of the liar and blame people for being misled, except that you want to put all the responsibility on individuals for being misled and not on the company that is systematically and intentionally misleading them? Maybe you just manage to derive a smug sense of superiority thinking of yourself as someone who is invulnerable to this kind of tactic so blaming the victims lets you feel good about yourself.