![](https://lemmy.world/pictrs/image/0af10ed3-a982-492e-8aec-a765d3e43eee.png)
![](https://fry.gs/pictrs/image/c6832070-8625-4688-b9e5-5d519541e092.png)
That GitHub “archive here” link leads to a page where it hasn’t been archived… (or was the archive removed??).
They expanded the initial recall. It affects models from 2017 to 2022. If you read the linked article I previously provided, then you missed the key point that vehicles were still bursting into flames even after the recall.
Expanded recall: https://gmauthority.com/blog/2021/09/gm-asking-chevy-bolt-ev-owners-to-park-50-feet-away-from-other-vehicles/
GM stopped replacing the batteries of the newer models and instead offered a software solution that would monitor the batteries for any issues and allow the vehicle to charge beyond the 80% limit that they had set because of these issues. https://electrek.co/2023/06/14/bolt-battery-recall-diagnostics/
But it’s worth noting that this software update has failed to prevent some fires, so the problem isn’t really “fixed” even with this: https://electrek.co/2021/07/08/chevy-bolt-ev-catches-on-fire-after-receiving-both-of-gm-software-fixes/
I would avoid used Bolts, especially because of all the issues those have had with going up in flames.
Hopefully they’ve fixed those issues in the newest models…
I think I was thrown off by the “trackpad” example that was given above. That would have been a bit more complex than just a simple button press (which is still doable in low level firmware) but I was curious how they would pull it off.
I looked up what “solid state buttons” are and it makes a lot more sense now. This isn’t like a trackpad you can swipe along the edge; they’re still buttons in separate locations, just not in the mechanical clicking sense that we’re used to.
You could also use something like GrayJay, I’ve been using it for a while now and haven’t had any issues with it.
> despite the fact that hosting images requires orders of magnitude less bandwidth and storage than videos.
In general, yes, when comparing images/video of the same resolution. But if I compare an 8k image to a low quality video with low FPS, I can easily get a few minutes worth of video compared to that one picture.
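To put rough numbers on that comparison (the file size and bitrate below are illustrative assumptions, not measurements):

```python
# Back-of-envelope: how many seconds of low-bitrate video fit in the
# size of one large still image? Both figures are assumptions.

IMAGE_BYTES = 20_000_000     # assume a ~20 MB 8K image
VIDEO_BITRATE_BPS = 500_000  # assume a 500 kbps low-quality stream

video_bytes_per_second = VIDEO_BITRATE_BPS / 8   # 62,500 bytes/s
seconds_of_video = IMAGE_BYTES / video_bytes_per_second

# One such image equals about 5 minutes of that video stream.
print(f"{seconds_of_video:.0f} s (~{seconds_of_video / 60:.1f} min) of video per image")
```

With those assumptions, a single 20 MB image buys you a bit over five minutes of 500 kbps video, which is the point: resolution and bitrate matter far more than “image vs video” in the abstract.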
As you said, it definitely costs money to keep these services running. What’s also important is how well they are able to compress the video/images into a smaller size without losing out on too much quality.
Additionally, with the way ML models have made their way into frame generation (such as DLSS) I wouldn’t be surprised if we start seeing a new compressed format that removes frames from a video (if they haven’t started doing it already).
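The idea of dropping frames and synthesising them back can be sketched in a few lines. This is a toy linear blend, not what DLSS-style frame generation actually does (those use trained models and motion vectors), but it shows the principle of storing fewer frames and reconstructing the rest:

```python
# Toy sketch: reconstruct a dropped frame as the per-pixel average of
# its two neighbours. Frames are flat lists of pixel intensities here.

def interpolate_frame(prev_frame, next_frame):
    """Linearly blend two frames, pixel by pixel."""
    return [(a + b) / 2 for a, b in zip(prev_frame, next_frame)]

frame_t0 = [0, 100, 200]
frame_t2 = [50, 100, 100]
frame_t1 = interpolate_frame(frame_t0, frame_t2)  # the "generated" frame
print(frame_t1)  # [25.0, 100.0, 150.0]
```

A codec built on this idea would only store `frame_t0` and `frame_t2` and regenerate `frame_t1` at playback time, trading compute for bandwidth.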
My one concern is, what do I do if the phone freezes up?
With physical buttons there is a hardware bypass so I can force the phone to reset.
With a “trackpad” I’m not as confident it will register those touches correctly when the OS has seized up.
I’m assuming they’ll have something figured out at the hardware level, but I’m curious what that will be.
I think when you say “Hates AI” you mean “Hates ChatGPT”
“AI” itself has a lot of awesome uses, ML models with DLSS, robots that can maneuver over different terrain, image generation, audio transcription, etc.
Even with LLMs, I’m fine with them as long as I’m the one who gets to pick and choose the model, as well as the software used to run it.
Well, now’s a great time to let them know about Pixelfed, although explosive growth like this will be a strain on any website.
You realize that copyright law still applies… whether you add some additional license to your software or not… right?
What does copyright law have to do with a ban on removing malicious code?
What do you mean by this?:
> Cara, bans us from removing malicious source code
Is there obviously malicious source code? Is there a policy that specifically says we can’t remove any source code? Is this even open source?
Getting away from Google Maps has been a tough one. There aren’t many options there, it’s either Google, Apple, Microsoft, or OpenStreetMap.
I’ve been contributing to OSM for my local area as much as possible to update businesses and their opening hours, website, etc., but it’s not a small task.
Once again, the Simpsons predicted the future: https://www.youtube.com/watch?v=WPc-VEqBPHI
It might be the lack of sleep, but what are you trying to say here?
Bitcoin difficulty chart - good point.
Effectiveness of AI powered search - Agreed, it is a very subjective topic. I don’t use LLMs for the majority of my searches (who needs hallucinated dates and times for the movies playing at a cinema near me?) and it sounds like Google is trying to use their LLM with every search now… In my opinion we should have a button to activate the LLM on a search rather than have it respond every time (but I don’t really use Google search anyway).
Translation/Transcription tech - It’s incredibly useful for anyone who’s deaf.
Your average person doesn’t need this, although I’m sure they benefit from the auto-generated subtitles if they’re trying to watch a video in a noisy environment (or with the volume off).
In my own personal use I’ve found it useful for cutting through the nonsense posted by both sides of either the Ukraine/Russia conflict or the Israel/Gaza conflict (in the case of misinformation targeting those who don’t speak the language).
Generative AI - Yeah, this will be interesting to see how it plays out in courts. I definitely see good points raised by both sides, although I’m personally leaning towards a ruling that would allow smaller startups/research groups to be able to compete with larger corporations (when they will be able to buy their way into training data). It’ll be interesting to see how these cases proceed on the text vs audio vs image/art fronts.
Wasteful AI - Agreed… too many companies are jumping in on the “AI” bandwagon without properly evaluating whether there’s a better way to do something.
Anyway, thanks for taking the time to read through everything.
OK… warning: wall of text incoming.
TL;DR: We end up comparing LLM executions with Google searches (a single ChatGPT prompt uses about 10x as much electricity as a single Google search). How many Google searches and link clicks do you need vs requesting the information from ChatGPT? I also touch on different use cases beyond just LLMs.
The true argument comes down to this: Is the increase in productivity worth the boost in electricity? Is there a better tool out there that makes more sense than using an AI Model?
For the first article:
The only somewhat useful number in here says that Microsoft’s emissions were 30% higher than its 2020 goals… that doesn’t break down how much more energy AI is using, despite how much the article wants to blame the training of AI models.
The second article was mostly worthless, again pointing at numbers for all data centres while conveniently putting 100% of the blame on AI throughout most of the article. But at the very end it finally included something a bit more specific, along with an actual source:
> AI could burn through 10 times as much electricity in 2026 as it did in 2023, according to the International Energy Agency.
Link to source: https://www.iea.org/reports/electricity-2024
A 170 page document by the International Energy Agency.
Much better.
Page 8:
> Electricity consumption from data centres, artificial intelligence (AI) and the cryptocurrency sector could double by 2026.
Not a very useful number, since it lumps cryptocurrency in with all data centres and “AI”.
> Moreover, we forecast that electricity consumption from data centres in the European Union in 2026 will be 30% higher than 2023 levels, as new data facilities are commissioned amid increased digitalisation and AI computations.
Again, mixing AI numbers with all datacenters.
Page 35:
> By 2026, the AI industry is expected to have grown exponentially to consume at least ten times its demand in 2023.
OK, I’m assuming this is where they got their 10x figure, but consuming ten times the industry’s 2023 demand does not necessarily mean the same thing as using 10x more electricity per task, especially if you’re trying to compare traditional energy use for specific tasks to the energy required to execute a trained AI model.
Page 34:
> When comparing the average electricity demand of a typical Google search (0.3 Wh of electricity) to OpenAI’s ChatGPT (2.9 Wh per request)
Link to source of that number: https://www.sciencedirect.com/science/article/abs/pii/S2542435123003653?dgcid=author
It’s behind a paywall, but if you’re on a college campus or at certain libraries you might be able to access it for free.
Finally we have some real numbers we can work with. Let’s break this down: a single Google search uses a little more than 1/10th of the electricity of a single ChatGPT request.
So here’s the thing: how many times do you have to execute a Google search to get the right answer? And how many links do you need to click on before you’re satisfied? It’s going to depend on what you’re looking for. For example, if I’m doing some research or solving a problem, I’ll probably end up with 10-20 browser tabs open by the time I have all the information I need. And don’t forget that I have to click on each website and load it up to get more info. However, when I’m finally done, I get the sweet satisfaction of closing all the tabs down.
Compare that to using an LLM: I get a direct answer to what I need, do a little double checking to verify that the answer is legitimate (maybe 1-2 Google-equivalent searches), and I’m good to go. Not only have I spent less time overall on the problem, but in some cases I might even have used less electricity after factoring everything in.
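The arithmetic behind that claim, using the two IEA-cited figures. The 15-search research session and the 2 verification searches are hypothetical workloads I’m assuming for illustration, not measurements:

```python
# Per-request energy figures cited by the IEA report above.
GOOGLE_WH = 0.3   # Wh per Google search
CHATGPT_WH = 2.9  # Wh per ChatGPT request

# Break-even point: how many searches equal one prompt?
ratio = CHATGPT_WH / GOOGLE_WH  # ~9.7 searches per prompt

# Hypothetical workloads (assumptions, not data):
session_wh = 15 * GOOGLE_WH                 # a 15-search research session
llm_route_wh = CHATGPT_WH + 2 * GOOGLE_WH   # 1 prompt + 2 checks

print(f"break-even: {ratio:.1f} searches")
print(f"search session: {session_wh:.1f} Wh vs LLM route: {llm_route_wh:.1f} Wh")
```

Under those assumptions the LLM route comes out ahead on raw search energy, and that’s before counting the page loads behind each search result. Shift the assumed number of searches below ~10 and the comparison flips, which is why “it depends on the task” is doing real work in this argument.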
Let’s try a different use case: Images. I could spend hours working in Photoshop to create some image that I can use as my Avatar on a website. Or I can take a few minutes generating a bunch of images through Stable Diffusion and then pick out one I like. Not only have I saved time in this task, but I have used less electricity.
In another example, I could spend time and electricity watching a video over and over trying to translate what someone said from one language to another, or I could use Whisper to translate and transcribe what was said in a matter of seconds.
On the other hand, there are absolutely use cases where using some ML model is incredibly wasteful. Take, for example, a rain sensor on your car. You could set up an AI model with a camera and computer vision to detect when to turn on your windshield wipers. But why do that when you could use a little sensor that shoots a low-power laser at the windshield and, when it detects a difference in the energy normally reflected back, activates the wipers? The dedicated sensor will use far less energy and be way more efficient for this use case.
Of course we still need to factor in the electricity required to train and later fine-tune a model. Small models only need a few seconds to minutes to train; other models may need a month or more. Once training is complete, no more electricity is required for that step, and the model can be packaged up and spread over the internet like any other file (of course electricity is used for that too, but then you might as well complain about people streaming 8K video to their homes for entertainment purposes).
So everything being said, it really comes down to this:
Does the increase in productivity warrant the bump in electricity usage?
Is there a better tool out there that makes more sense than using an AI Model?
That’s just a link to all datacenters and doesn’t break out how much energy is going to AI vs how much energy is being used to stream Netflix.
You might as well say we should shut down the internet because it uses too much electricity.
I thought for sure it was going to be this video: https://www.youtube.com/watch?v=_2VDLYWi5ck&t=50s