• 0 Posts
  • 90 Comments
Joined 1 year ago
Cake day: July 31st, 2023






  • This might be a wild take but people always make AI out to be way more primitive than it is.

    Yes, in its most basic form an LLM can be described as an auto-complete for conversations. But let’s be real: the amount of optimization and adjustment done before and after the fact is pretty complex, and the way the AI works is already pretty close to a brain. Hell, that’s where we started out: emulating a brain. And you can look into this; the base for AI is usually neural networks, which learn to give specific parts of an input a specific amount of weight when generating the output. And when the output is not what we want, the AI slowly adjusts those weights to get closer.

    Our brain works the same way in its most basic form. We use electric signals and we think in associative patterns. When an electric signal enters one node, that node is connected via stronger or weaker bridges to other nodes, forming our associations. Those bridges are exactly what we emulate when we use nodes with weighted connections in artificial neural networks (there’s a little toy sketch of this at the end of this comment).

    Quality-wise, AI output right now is pretty good, but integrity- and security-wise it’s pretty bad (hallucinations, not following prompts, etc.). Still, saying it performs at the level of a three-year-old is simultaneously underselling and overselling how AI performs. We should be aware that just because it’s AI doesn’t mean it’s good, but it also doesn’t mean it’s bad. It just means there’s a feature (which is hopefully optional), and then we can decide whether it’s helpful or not.

    I do music production and I need cover art. As a student, I can’t afford to commission good artwork every now and then, so AI is the way to go, and it’s been nailing it.

    As a software developer, I’ve come to appreciate that after about two years of bad code-completion AIs, there’s finally one that is a net positive for me.

    AI is just like anything else: it’s a tool that brings change. How that change manifests depends on us as a collective. Let’s punish bad or dangerous AI (Copilot, Tesla self-driving, etc.), let’s promote good AI (Gmail text completion, ChatGPT, code completion, image generators), and let’s also realize that the best things we can get out of AI won’t hit the ceiling of human products for a while. But if those cost too much, or you just need quick pointers, at least you know where to start.
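    For the curious, here’s the toy sketch mentioned above: a single artificial neuron, written in Python, that learns the logical AND function by nudging its connection weights whenever its output is wrong. Everything in it is made up for illustration; real networks stack many such nodes and train with gradients, but the “adjust the weights to get closer” loop is the same basic idea.

      # A single artificial "neuron": two inputs combined through weighted
      # connections. Toy example only; it learns the logical AND function.
      weights = [0.0, 0.0]
      bias = 0.0
      learning_rate = 0.1

      examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

      for _ in range(20):  # go over the examples a few times
          for inputs, target in examples:
              # Weighted sum of the inputs, thresholded to a 0/1 output.
              activation = sum(w * x for w, x in zip(weights, inputs)) + bias
              output = 1 if activation > 0 else 0

              # If the output misses the target, shift each weight a little
              # in the direction that reduces the error.
              error = target - output
              weights = [w + learning_rate * error * x
                         for w, x in zip(weights, inputs)]
              bias += learning_rate * error

      # The trained neuron now fires only when both inputs are 1.
      print(weights, bias)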






  • In every evaluation, Reuters was the only one rated as slightly left-leaning. AP is very dry and fact-based and therefore neutral; CNN is neutral to slightly right-leaning, iirc. Newsmax and OAN are both strongly right-leaning.

    Fox News is also still strongly right-leaning. Tell me one time Fox News pushed for equality, gun control, or universal basic income. Instead you’ll find pro-gun content, strong defenses of very conservative politicians who are regularly given a platform, and constant talk about how businesses should get more financial aid.

    My guy, we can disagree politically, but there is not a universe in which Fox News is left-leaning. Objectively not.


  • To me it’s just a passion some people have. I used to be more into it (though I never did it myself), but as with every passion, you can get blinded by the flashiness of the people who want to sell you a specific technique or way of doing things.

    I think you’re right: one can very easily get trapped in the pitfalls of chasing the superhuman, but the actual benefits are rooted in the basics.

    Other examples: there are a lot of benefits to general fitness exercise, but the fitness bros will sell you their one and only way to get extremely muscular (which most people don’t even set out striving for). Or games, where people push you to be the very best, although the actual benefit is in having fun and keeping your brain sharp. Or creative hobbies, where a lot of people strive for the spotlight, but the actual benefit comes when you do it for yourself and do what you want, regardless of what other people think.

    It’s a hard balance but I guess that’s just life.



  • I made a course for 11th and 12th graders together with a friend when I was at university. It was only one week long and the topic was neural networks (we hit the timing right; it was three years before the AI hype started).

    I did that experiment where you give the students five random places and the amount of time spent at each over a week. You tell them that this is the movement profile of a fictional person, and they have to work out why those places matter.

    It makes them learn the importance of information and how linking data points can be an insanely powerful tool for understanding someone. But it also teaches how easy it is to gather information from small data points and how self-aware you should be about your digital footprint. (A tiny made-up example of the idea is below.)
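    For anyone who wants a feel for how little data that takes, here’s a minimal sketch in Python; the places and hour counts are entirely invented for illustration:

      # Invented movement profile: five places and roughly how many hours
      # of one week the fictional person spent at each, night vs. weekday day.
      places = {
          #  place                     (night_hours, weekday_day_hours)
          "apartment block, Oak St":   (60, 10),
          "office park, 5th Ave":      (0, 40),
          "gym on Main St":            (0, 3),
          "elementary school":         (0, 2),
          "24h pharmacy":              (0, 1),
      }

      # Even these few data points link into a story.
      home = max(places, key=lambda p: places[p][0])  # most night hours
      work = max(places, key=lambda p: places[p][1])  # most weekday day hours
      print("probably lives at:", home)
      print("probably works at:", work)
      # The short, regular stops (school, pharmacy) hint at kids to drop off
      # or an ongoing prescription, which is exactly the point of the exercise.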