

“Since the dataset isn’t 100% perfectly annotated for analysis, we should give up the whole project entirely.”
I see two new features that look fantastic, but the rest of the UI seems largely unchanged. I’ll definitely give it a shot though.
GIMP is unfortunately not a good competitor; the UX/UI is atrocious, and that’s after 25 years of using it now… I’ve switched to Krita for most things at this point. GIMP needs some sort of revamp.
He’s probably got dozens of examples that could be codified via the DSM-5 at this point.
“You can lead a horse to water…”
I’m not even sure I’ve seen him on the news this year. Gotta stay “relevant” somehow I guess?
Collection of personal data is arguably worth money to them though, for advertising and whatever else they’re doing.
A lot of their chips are fab’d in the US, Israel, Germany, and elsewhere though. It’s weird that nobody has mentioned all their US fabs. The new ones coming up in Ohio (construction is already underway) will be two next-gen fab plants.
Does Intel make its main CPUs in China for those high tariffs?
Looked it up and found this info at least:
Key US Locations:
Arizona (Fab 52 and 62), New Mexico (Fab 9 and 11x), and Oregon (Hillsboro) are major Intel manufacturing hubs in the US, with Fab 42 and 32 also part of a larger campus in Arizona. Ohio is also a major site, with construction well underway for two new leading-edge chip factories.
Global Footprint:
Intel also has manufacturing facilities in locations like Israel (Jerusalem, Kiryat Gat) and Ireland (Leixlip).
Expansion and Future:
Intel is actively expanding its global network with new fabs in Ohio, Germany, and other locations, according to the Intel Newsroom, with plans to make the German fab one of the most advanced in the world.
That’s wild, I would not be able to keep up with trends in the industry I work in as much if I isolated myself from everyone. But happy it works for you.
Ren from Ren and Stimpy?
This is one small example, but I get notifications on developer livestreams for new models and new API updates and feature releases. The OpenAI sub itself is not only too many hours late in publishing any of them, but it’s also only a fraction of the updates coming directly from the company itself. This extends to many other orgs and people I follow.
I’m a developer, so I like to have quick access to new info on many frameworks and languages (and on other lead devs who post updates).
I keep a Raspberry Pi dedicated just to NES/SNES/etc emulators via the “retropie” distro. I have thousands of ROMs that I can plug into any TV with HDMI, using SNES/NES USB controllers. $100 for a full raspi kit gets you full access to everything just by copying some files over to a microSD card. Can’t remember the controller cost, but that’s kind of a given requirement.
X is where people I want to follow post unfortunately. If they posted on mastodon, I would use that more. As it stands, a lot of people and creators I want to keep up with are only on a few select platforms at the moment. Maybe that’ll change in time but I doubt anytime soon. Same situation with YouTube, I’d like to stop using that too but it’s the only place to find certain things (small example: individual magicians who sometimes perform on Penn & Teller also post their own videos on YT only.)
https://ollama.ai/ — this is what I’ve been using for over a year now. New models come out regularly, and you just “ollama pull <model ID>” and then it’s available to run locally. Then you can use Docker to run https://www.openwebui.com/ locally, giving it a ChatGPT-style interface (but even better and more configurable, and you can run prompts against any number of models you select at once).
All free and available to everyone.
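For anyone who wants to try it, the basic flow looks roughly like this (the model name and port mapping are just examples, and flags may change between releases, so check the Ollama and Open WebUI docs for the current commands):

```shell
# Pull a model from the Ollama library and run it locally
ollama pull mistral   # model ID is an example; pick any from the library
ollama run mistral    # interactive chat in the terminal

# Run Open WebUI in Docker for the ChatGPT-style browser interface,
# served on http://localhost:3000 and talking to the local Ollama instance
docker run -d -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

Once the container is up, the WebUI auto-detects any models you’ve pulled with Ollama.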
In my experience it depends on the math. Every model seems to have different strengths across a wide range of prompts and information.
+1 for Mistral, they were the first (or one of the first) Apache-licensed open-source models. I run Mistral-7B and fine-tuned variants locally, and they’ve always been really high quality overall. Mistral-Medium packs a punch (mid-size, obviously), but it at least competes with the big ones.
I put it on archive.md for anybody that couldn’t see past the paywall like myself. (Sorry for the hijack but I wanted to see it quickly.) I’m not sure how everyone else is managing to read it for free tho.
They’re fighting harder for non-citizens than citizens at this point it seems. Not entirely sure why.
86 billion neurons in the human brain isn’t that much compared to some of the larger neural networks with 1.7 trillion parameters, though (parameters are closer to synapses than neurons, but still).