• 0 Posts
  • 97 Comments
Joined 1 year ago
Cake day: July 22nd, 2023





  • What’s with the negativity from you and the other comments?

    I can tell you why Americans care: identity matters to people. The story of the melting pot is central to the American story as a nation of immigrants (even today) and central to individual identities. Thus there is a lot of interest in backgrounds and genealogy. If you ask the average American about their heritage, you’re likely to get a surprising answer - so people talk about it more.

    I get why it seems weird to many other cultures - if you ask the average French person (for example) their heritage they’ll say ‘French as far back as we can tell’.

    The French person celebrates their identity through the lens of the French story, and the American does too, it’s just that the American story is the immigrant story.

    I hope you do actually care. I hope in this era of rising nationalism and online hate enough of us value diversity of backgrounds and ancestries.




  • I’m not as optimistic about that as you are. The average person only knows what they’re told, and as long as the right controls the narrative in their homes, they’re going to think ‘liberals’ and ‘illegals’ and ‘trans’ are causing their pain, no matter how bad it gets.

    Maybe some pain is what they need to snap out of this, but they also need a trusted voice to tell them the truth about who is doing it to them. Right now that person doesn’t exist in a vast swath of American homes.





  • IMO it’s even worse than that, at least from what I gather from the AI/Singularity communities I follow. For them, AGI is the end goal - a creative, thinking AI capable of deduction far beyond humanity’s. The company that owns it would suddenly have the capability to solve all manner of problems that are slowing down technological advancement. Obviously that would be worth trillions.

    However, it’s really hard to see through the smoke that the Altmans etc. are putting up - how much of it is genuine prediction, and how much is fairy tales they’re telling to get more investment?

    And I’d have a hard time believing it isn’t mostly the latter, because while LLMs have made some pretty impressive advancements, they still can’t have specialized discussions about much of anything without hallucinating answers. I have a test I use for each new generation of LLMs: I interview them about a book I’m relatively familiar with, and even the newest ChatGPT model still makes up a ton of shit, often contradicting its own answers within the same thread, all while absolutely confident that it’s familiar with the source material.

    Honestly, I’ll believe they’re capable of advancing AI when we get an AI that can say ‘I actually am not sure about that, let me do a search…’ or something like that.
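    The interview test described above can be sketched roughly like this. Everything here is an assumption for illustration: `ask` is a hypothetical stand-in for whatever chat API you’re testing (any client, such as the OpenAI Python SDK, could be dropped in), and the exact-match comparison is a crude proxy for spotting self-contradiction - real contradictions usually need human judgment.

    ```python
    # Sketch of a self-consistency "book interview": ask each question,
    # then re-ask it later in the same thread and flag answers that changed.
    # `ask(history, question)` is a hypothetical callable wrapping a chat model.

    def interview(ask, questions):
        """Return a list of (question, first_answer, repeat_answer) that differ."""
        history = []   # running conversation thread, (role, text) pairs
        flagged = []
        for q in questions:
            first = ask(history, q)
            history += [("user", q), ("assistant", first)]
            # Re-ask the same question later in the thread.
            again = ask(history, q)
            history += [("user", q), ("assistant", again)]
            # Crude proxy: an exact mismatch suggests the model contradicted itself.
            if first.strip().lower() != again.strip().lower():
                flagged.append((q, first, again))
        return flagged
    ```

    A model that genuinely knows the source material should produce an empty `flagged` list; the models described above wouldn’t.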