The subtitle made me laugh so much. Thanks OP
Developer. Feminist. Ecologist. Used to be a protection Paladin.
Someone’s gotta hack the teleprompters and put a bunch of four-syllable words in his speech. I heard he struggles with those.
It rusts under the rain
Well, it’s definitely true that you’ll have a hard time getting true things out of garbage. But funnily enough, the model might hallucinate true things :)
You can train an LLM on the best possible dataset, without a single false statement, and it will still hallucinate. And there’s nothing to be done about that.
Without understanding the context, everything can be true or false.
“The acceleration due to gravity is equal to 9.81 m/s².” True or false?
An LLM basically works like this: given the previous words and their order, predict the most probable next word of the sentence.
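A toy sketch of that idea, using a hand-made lookup table (a real LLM learns probabilities over whole contexts with a neural network; this made-up bigram table is just to illustrate the "pick the most probable next word" step):

```python
# Made-up probabilities for "given this word, how likely is each next word".
# A real model conditions on the entire preceding text, not one word.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
}

def most_probable_next(word):
    """Greedy decoding: return the single most probable next word."""
    candidates = bigram_probs[word]
    return max(candidates, key=candidates.get)

print(most_probable_next("the"))  # -> cat
```

Real models usually sample from that distribution instead of always taking the top word, which is part of why the same prompt can give different answers.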
Well, before SpaceX I looked at the space exploration program as a science enthusiast. The missions were rare but important for science. Then this dude came out of nowhere, saying he was about to save the Earth with electric cars and build a station on Mars. And for a moment it really worked. I genuinely thought he was a good billionaire. Then he completely lost his mind, started talking and acting like the worst moron in the universe, and I started studying his statements without the shiny distorting layer. He’s so full of shit it makes me sick. Most of the things he says are nonsense.
So I can’t tell why my brain works that way, but it does. Today I’m more excited by new ways to produce renewable energy on Earth than I am about rockets. The joy I felt at any SpaceX news has slipped away.
My comment was just the realization of that. That was weird to be honest, but true.
A few years ago I would have been sad and shocked. Now I don’t give a shit about SpaceTwitter. That douchebag managed to kill all the interest I had in space exploration, a topic I was passionate about for most of my life. He really is that kind of killjoy.
Well, I’ve got good news and bad news.
The bad news is you won’t do shit with that my dear friend.
The good news is that you won’t need it because the duck is back.
Most 7B–8B models run just fine at 4-bit quantization and won’t use more than 4 or 5 GB of VRAM.
The only metric that really matters is the amount of VRAM, as the model must be loaded into VRAM for fast inference.
You could use the CPU and system RAM instead, but it’s painfully slow.
If you’ve got an Apple Silicon Mac it could be even simpler.
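The VRAM figure above is easy to sanity-check with back-of-the-envelope math: weights take roughly (parameter count × bits per weight / 8) bytes, plus some runtime overhead (KV cache, buffers). A rough sketch, with the overhead as a flat guess:

```python
def vram_estimate_gb(n_params_billion, bits_per_weight, overhead_gb=1.0):
    """Rough VRAM estimate in GB: model weights plus a flat overhead guess.

    Ignores KV cache growth with context length, so treat it as a floor.
    """
    weights_gb = n_params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return weights_gb + overhead_gb

# A 7B model at 4-bit quantization: 3.5 GB of weights + ~1 GB overhead.
print(round(vram_estimate_gb(7, 4), 1))  # -> 4.5
```

That lines up with the "4 or 5 GB" figure: an 8B model at 4 bits lands around 5 GB, and the same model unquantized at 16 bits would need ~15 GB.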
That’s a nice hobby
I’d suggest installing a local LLM (Mistral or Llama 3, for example) to widen your sources of information. And go straight to Wikipedia instead of “googling” things, if you don’t already.
Anyway, I didn’t know about Kagi, so I might take my own advice and give it a try.
Nice read. Thanks OP!
Cairo + global-menu and a little bit of taskbar rearrangement should do the trick
Linux users using GNOME Tweaks to make their PCs look exactly like macOS.
When I’m not working on my Mac I enjoy the sheer simplicity of Sway
PL setups are the best.
Yes
Oh thanks. That makes (more) sense now!
More like max-width: 8000px;
Good thing we’ve got flexbox, grid, and container queries now.
I have two 24" displays side by side. At some point I unified the desktops (Spaces, on macOS) so they’d act “as if” they were a single ultrawide monitor. It was absolutely awful to use, especially during Google Meet calls where I had to share my screen.
Besides, I like being able to rotate one of my screens 90°, because sometimes that’s just the best way to work.
This thing is stupid. Appealing maybe, but stupid.
Portugal is a very nice place to live indeed!