• 0 Posts
  • 21 Comments
Joined 1 year ago
Cake day: June 12th, 2023



  • If a car can receive OTA updates from the manufacturer, then it can receive harmful OTA updates from an attacker who has compromised the car’s update mechanism or the manufacturer.

    There’s potential for a very dystopian future where we see people assassinated, not via car bomb but via their cars being hacked to remove braking functionality (or something similar). And then a constant game of security whack-a-mole like we see with anti-virus software. And then some brilliant entrepreneur will start selling firewalls for cars. And then it’ll be passed into law that it’s illegal to use a vehicle that doesn’t have an active firewall/anti-virus subscription.

    It almost feels like the obvious path things will go down. Yay, capitalism…

    I’m not totally opposed to software being used in cars (as long as it’s tested and can be trusted to the degree mechanical components are), but yeah, OTA updates seem like a terrible idea just for a little convenience. I’d rather see updates delivered by plugging the car in (and not via the charging port - it would need to be a dedicated data transfer port for security reasons). Alert people when there’s an update, and even allow the car to “refuse to boot” if it detects it’s not on the latest version. But updates should absolutely be done manually and securely.
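    A rough sketch of what that “manual and secure” step could look like (purely illustrative, not how any manufacturer actually does it - it assumes Python, the third-party cryptography package, and made-up names): the car holds the manufacturer’s public key and refuses any update package whose detached signature doesn’t verify.

    ```python
    # Illustrative only: offline verification of a signed update package
    # before the car agrees to install it. Requires `pip install cryptography`.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )


    def verify_update(trusted_key: Ed25519PublicKey, package: bytes, signature: bytes) -> bool:
        """Accept the update only if the detached signature matches the package bytes."""
        try:
            trusted_key.verify(signature, package)
            return True
        except InvalidSignature:
            return False


    if __name__ == "__main__":
        # Stand-in for the manufacturer's signing step; in reality the private key
        # never leaves the manufacturer and the public key is baked into the car.
        signing_key = Ed25519PrivateKey.generate()
        vehicle_trusted_key = signing_key.public_key()

        package = b"firmware image delivered over a dedicated data port"
        signature = signing_key.sign(package)

        print(verify_update(vehicle_trusted_key, package, signature))                 # True
        print(verify_update(vehicle_trusted_key, package + b" tampered", signature))  # False
    ```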



  • The reason it’s overwhelmingly called “climate change” instead of global warming now is because of language change pushed by billionaire foundations.

    I do think “global warming” struggles to convince some simpler people anyway, unfortunately. While the average temperature of the globe is increasing and causing the changes in climate that we’re seeing, I’ve come across far too many comments from people saying things like “global warming must be a myth because it snows more than it used to”, thinking themselves smarter than all climate scientists combined for that observation.

    Of course, those same people probably think global warming is good because they like their summer holidays so perhaps their opinions shouldn’t matter much either way!








  • You’re right about osu! Although it’s probably one of the few competitive games where there’s no gameplay interaction between players - if another player is cheating, it hurts the overall competitiveness, of course, but it doesn’t directly affect your gameplay experience.

    It’s not like playing a shooter where someone has an aimbot and wallhacks, or a racing game where someone can ram you off the track without slowing themselves down - those things directly ruin your gameplay experience as well as obviously hurting the competitive integrity. I don’t think those kinds of games would work at all if they were open-source and without anti-cheat unless there was strict moderation and likely whitelisting in place for servers.




  • I agree completely. I think AI can be a valuable tool if you use it correctly, but it requires you to be able to prompt it properly, to use its output in the right way, and to know what it’s good at and what it’s not. Like you said, for things like brainstorming or looking for inspiration, it’s great. And while its artistic output is very derivative - both because it’s literally derived from all the art it’s been trained on and simply because there’s enough other AI art out there that it doesn’t really have a unique “voice” most of the time - you could easily use it as a foundation to create your own art.

    To expand on my asking it questions: the kind of questions I find it useful for are ones like “what are some reasons why people may do x?” or “what are some of the differences between y and z?”. Or an actual question I asked ChatGPT a couple of months ago based on a conversation I’d been having with a few people: “what is an example of a font I could use that looks somewhat professional but that would make readers feel slightly uncomfortable?” (After a little back and forth, it ended up suggesting a perfect font.)

    Basically, it’s good for divergent questions, evaluative questions, inferential questions, etc. - open-ended questions - where you can either use its response to simulate asking a variety of people (or to save yourself from looking through old AskReddit and Quora posts…) or just to give you different ideas to consider, and it’s good for suggestions. And then, of course, you decide which answers are useful/appropriate. I definitely wouldn’t take anything “factual” it says as correct, although it can be good for giving you additional things to look into.

    As for writing code: I’ve only used it for simple-ish scripts so far. I can’t write code, but I’m just about knowledgeable enough to read code to see what it’s doing, and I can make my own basic edits. I’m perfectly okay at following the logic of most code, it’s just that I don’t know the syntax. So I’m able to explain to ChatGPT exactly what I want my code to do, how it should work, etc., and it can write it for me. I’ve had some issues, but I’ve (so far) always been able to troubleshoot and eventually find a solution to them. I’m aware that if I want to do anything more complex then I’ll need to expand my coding knowledge, though! But so far, I’ve been able to use it to write scripts that are already beyond my own personal coding capabilities, which I think is impressive.

    I generally see LLMs as similar to predictive text or Google searches, in that they’re a tool where the user needs to:

    1. have an idea of the output they want
    2. know what to input in order to reach that output (or something close to that output)
    3. know how to use or adapt the LLM’s output

    And just like how people having access to predictive text or Google doesn’t make everyone’s spelling/grammar/punctuation/sentence structure perfect or make everyone really knowledgeable, AIs/LLMs aren’t going to magically make everyone good at everything either. But if people use them correctly, they can absolutely enhance that person’s own output (be it their productivity, their creativity, their presentation or something else).
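    As a concrete (entirely made-up) example of that three-step loop: say the output I want is a script that prefixes today’s date onto every .txt file in a folder. The prompt states exactly that, and what comes back might look like the sketch below - short enough that, even without knowing the syntax well, I can follow the logic line by line and adapt the bits I want to change.

    ```python
    # Hypothetical example of a "simple-ish script" an LLM might produce from a
    # plain-English prompt; the task and folder name are invented for illustration.
    from datetime import date
    from pathlib import Path

    folder = Path("notes")                   # step 2: the prompt named this folder
    prefix = date.today().isoformat() + "_"  # e.g. "2024-06-12_"

    for txt_file in folder.glob("*.txt"):
        # step 3: easy to read and adapt - e.g. swap "*.txt" for another pattern
        if not txt_file.name.startswith(prefix):
            txt_file.rename(txt_file.with_name(prefix + txt_file.name))
    ```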


  • I don’t think AI will be a fad in the same way blockchain/crypto-currency was. I certainly think there’s somewhat of a hype bubble surrounding AI, though - it’s the hot, new buzzword that a lot of companies are mentioning to bring investors on board. “We’re planning to use some kind of AI in some way in the future (but we don’t know how yet). Make cheques out to ________ please”

    I do think AI has actual, practical uses, though, unlike blockchain, which always came off as a “solution looking for a problem”. Like, I’m a fairly normal person and I’ve found good uses for AI already in asking it various questions where it gives better answers than search engines, in writing code for me (I can’t write code myself), etc. Whereas I’ve never touched anything to do with crypto.

    AI feels like a space that will continue to grow for years, and that will be implemented into more and more parts of society. The hype will die down somewhat, but I don’t see AI going away.


  • I don’t know if it’s perhaps a regional thing but, in the UK, “being patronising” is used pretty much exclusively in the pejorative sense, with a similar meaning to “condescending”. I don’t think I’ve ever heard (in actual conversation) “being patronising” used to mean someone is giving patronage, in fact - we would say someone is “giving patronage” or “is a patron” instead. We also pronounce “patronise” differently, for whatever reason: “patron” is “pay-trun”, “patronage” is “pay-trun-idge” but “patronise” is “pah-trun-ise”.

    It seems the pejorative use of the word dates back to at least 1755, too, so it’s not exactly a new development.