Per the software website (which the article links to), I don’t see any mention of generative AI. Their “ai image intelligence” only makes mention of tagging images for SEO. https://www.pixometry.com/en/publishing/ai-image-intelligence/
It would be great if the model could produce this beautifully disfigured stuff when the user asked it to. But if it can’t follow the user’s prompts reasonably, then it’s pretty useless as a tool.
So they used AI to determine this? So I’m sure the result must be totally accurate.
What else does the article say? Hmmm let’s see. “The researchers found that Sky was also reminiscent of other Hollywood stars, including Anne Hathaway and Keri Russell. The analysis of Sky often rated Hathaway and Russell as being even more similar to the AI than Johansson.” Alright that proves it! Clearly this voice was based on Scarlett Johansson!
In addition to the other comment, I’ll add that even if you train the AI on good and correct sources of information, it still doesn’t necessarily mean it will give you a correct answer every time. It’s more likely, but not guaranteed.
What amazes me is how many people care about this piece of crap in the first place. Like WHY is everyone talking about it? It could have been one of thousands of shitty tech products that I had never even heard of, but everyone wants to talk about it for some reason.
You don’t seem to understand. There is no database.
Never used this search engine before, but I just tried it out for several searches, and it seems to give me what I want. I’ll probably try it out for a while.
Everyone here seems to be missing the point and hasn’t read the article. Nvidia isn’t being targeted because they make the hardware that enables training AI. They are being sued because they trained an AI using the authors’ books.
Wtf, they just released Stable Cascade like a week ago. All they are doing is confusing the users and model trainers. From the blog post it’s not even clear why I would want to use this one over the one they just released.
“AMD decided this year to discontinue funding the effort and not release it as any software product”.
So AMD decided it wasn’t worthwhile, and the developer released it on his own. AMD’s decisions are just baffling. You still can’t even install PyTorch for ROCm on Windows.
I mean, they would have started appearing in there from the first moment that someone created one and hosted it somewhere, no? So it’s already been a thing for a couple years now, I believe.
Here is a simple video that breaks down how neurons work in machine learning. It can give you an idea about how this works and why it would be so difficult for a human to reverse engineer. https://youtu.be/aircAruvnKk?si=RpX2ZVYeW6HV7dHv
They provide a simple example with a few thousand neurons, and even then we can’t easily tell what the network is doing, because the neurons don’t produce any traditional computer code with logic that can be followed. They are just a collection of weights and biases (a bunch of numbers) that transform the input in some way the training process found to arrive at the solution. GPT-4, for comparison, is estimated to contain well over a trillion parameters.
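To make the “just weights and biases” point concrete, here’s a minimal sketch of a single artificial neuron (the numbers below are made up for illustration, not from any real trained model):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs plus a bias,
    squashed through a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid maps z into (0, 1)

# The weights and bias are just numbers produced by training --
# nothing here resembles readable program logic you could follow.
out = neuron([0.5, -1.2, 3.0], weights=[0.8, 0.1, -0.4], bias=0.2)
print(out)
```

Stack millions of these per layer, and layer upon layer, and you can see why nobody can read the resulting numbers back out as logic.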
It’s nearly limitless because they used nearly 200 lasers. If they built a new one with the full 200 lasers, who knows what could happen.
While I get what you are saying, it’s pretty clear that what he was saying was that if you actually populate the dataset by downloading the images contained in the links (which anyone who is actually using the dataset to train a model would need to do), then you have inadvertently downloaded illegal images.
It is mentioned repeatedly in the article that the dataset itself is simply a list of urls to the images.
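In other words, the dataset itself looks roughly like this (the field names here are assumptions for illustration, not the dataset’s real schema):

```python
# Illustrative sketch: an image-text dataset of this kind is just rows
# of URL + caption; the images themselves live elsewhere on the web.
dataset = [
    {"url": "https://example.com/cat.jpg", "caption": "a cat on a sofa"},
    {"url": "https://example.com/dog.jpg", "caption": "a dog in a park"},
]

# Anyone training a model has to fetch every URL first, which is how
# the actual image files end up on the trainer's machine.
urls_to_fetch = [row["url"] for row in dataset]
print(len(urls_to_fetch))
```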
No one is moving to Firefox, because most people don’t care. Just like people stay on Reddit or X, they are going to stay on Chrome. Google will feed them shit and they’ll ask for more.
All we can do is worry about ourselves and keep trying to make alternatives viable.
No shit, using your PC for any purpose will consume electricity. A modern GPU can generate an image in a couple of seconds. Or I could just play a video game for an hour and consume a few thousand times more energy.
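A quick back-of-envelope version of that comparison (the wattage and timings are illustrative assumptions, not measurements):

```python
# Assume a GPU drawing ~300 W at full load in both cases.
gpu_watts = 300
image_seconds = 3          # one generated image
gaming_seconds = 3600      # one hour of gaming

image_joules = gpu_watts * image_seconds      # 900 J
gaming_joules = gpu_watts * gaming_seconds    # 1,080,000 J
ratio = gaming_joules / image_joules
print(ratio)  # -> 1200.0
```

So an hour of gaming lands in the hundreds-to-thousands range of image generations, depending on the assumptions.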
At the moment, it doesn’t really seem to be that much better. Maybe the frames generated are a little more coherent, but you have less control over it. And from what I have seen so far, it just does very simple motions, like billowing smoke, a turning head, or clouds moving through the sky.
This bill seems somewhat misguided. How in the hell is something like a large language model going to cause a mass casualty incident? What I am more worried about is things that could more realistically pose a danger. What if robotic dogs patrolling the border have machine guns mounted on their backs, then a child does something unexpected and the robot wipes out an entire family? What if a self driving car suddenly takes off at full speed through a parade? They are trying to slot AI into everything now, and it will inevitably end up in some places that are going to cause loss of life. But chatbots? Give me a break.