Ⓐ☮☭

  • 1 Post
  • 443 Comments
Joined 2 years ago
Cake day: July 20th, 2023



  • For context:

    This bid is an attempt at a threat because it overvalues the perceived worth of the non-profit side of OpenAI.

    OpenAI has a known perceived total value (I don't know exactly how much). By refusing his bid, Musk wants OpenAI to implicitly concede that most of that value sits in the non-profit side, and thus less of it in the for-profit side.

    This is a threat because the for-profit side is where OpenAI's partners, like Microsoft, have their stake. So it lowers the perceived monopoly money of those shareholders.

    Of course, it can be ignored. OpenAI has stated they can refuse for many other reasons unrelated to the bidding price.

    Also, OpenAI is a fascist collaborator, just like Musk.

    Source: AI Explained YT channel



  • “When it detects a user may be under 18, Google will notify them that it has changed some of their settings and will offer information about how users can verify their age with a selfie, credit card, or government ID.”

    Selfies are useless as proof for young adults and people with young faces. No algorithm can tell someone who is 17 years and 11 months old from an 18-year-old, so it will flag every single one of them.

    This is a way to normalize handing over your details for the next generations. Nothing more.

    The good news is that once teens figure out that other browsers and search engines don’t stop them from finding porn, they might stick with them.





    The State of Israel committing genocide is horrible, but how is this relevant to Sutskever’s startup and AI?

    The only slivers of an Israel connection I could find are:

    • Sutskever was born in the Soviet Union, moved to Israel at the age of 5, and lived there until he was 16 before moving to Canada.

    • The picture used in the article was taken at a university in Israel two years ago, when he was still a part of OpenAI.

    • The startup does have offices in Palo Alto (US) and Tel Aviv (Israel), but that’s a Silicon Valley kind of place; literally all the major tech players have an office there. And I fully agree that is something we should be skeptical about, but it’s not something you can single one American company out on.



    There is no need to communicate like this. I had actually already learned about this app and checked it out yesterday, in a different community with far fewer people, and there was no article there either. I just saw this post as an opportunity to discuss it.

    I’d like to make a few things clear that seem to be causing confusion.

    1. I made my post in full understanding that it probably does not feature a “preference-based algorithm”; trying it out did not give me the impression it did. But I wanted to be more certain that nothing along those lines is included, and tried to start a discussion about the need for preferential settings. It’s somewhere between a genuine and a rhetorical question.

    2. Call me pedantic (though I prefer autistic), but “algorithm” has, as I explained, a specific meaning to me: a mathematical procedure for a specific purpose. The app in question is code, and code is math. Randomizing code (which is never truly random, btw) uses an algorithm. You cannot tell me the app uses NO algorithm; well, of course you can, but to my brain that does not compute. Any form of customizing the feed, even a hack, would be algorithmic in nature.
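    The point that randomized feeds are still algorithmic can be illustrated with a seeded pseudo-random shuffle. This is a hypothetical sketch, not the actual code of the app under discussion; `shuffled_feed` and the sample post names are made up for illustration:

    ```python
    import random

    posts = ["post_a", "post_b", "post_c", "post_d", "post_e"]

    def shuffled_feed(items, seed):
        # A "random" feed order comes from a deterministic algorithm:
        # a pseudo-random number generator seeded with a fixed value
        # produces the exact same shuffle every time it runs.
        rng = random.Random(seed)
        feed = list(items)
        rng.shuffle(feed)
        return feed

    # Two runs with the same seed are identical -- no true randomness here.
    assert shuffled_feed(posts, seed=42) == shuffled_feed(posts, seed=42)
    ```

    In other words, even a feed that looks unpredictable is the output of a well-defined procedure, which is the sense of “algorithm” being argued above.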

    3. I disagree that this (and many things) requires a full-fledged article; a clear few-line description, or even just including the quotes you put up here, would do for me. I know the internet likes to turn everything into “news”, but I don’t have to like it or partake in it. I prefer to spend my time engaging on Lemmy about the topic directly, because that (this diverted discussion included) helps it grow.









  • Unacceptable by literal definition.

    They did create a very reasonable list of what they deem unacceptable. At last, some good news.

    Some of the unacceptable activities include:

    • AI used for social scoring (e.g., building risk profiles based on a person’s behavior).
    • AI that manipulates a person’s decisions subliminally or deceptively.
    • AI that exploits vulnerabilities like age, disability, or socioeconomic status.
    • AI that attempts to predict people committing crimes based on their appearance.
    • AI that uses biometrics to infer a person’s characteristics, like their sexual orientation.
    • AI that collects “real time” biometric data in public places for the purposes of law enforcement.
    • AI that tries to infer people’s emotions at work or school.
    • AI that creates — or expands — facial recognition databases by scraping images online or from security cameras.