
  • I mean, that’s just how it has always worked; this isn’t actually special to AI.

    Tom Hanks does the voice for Woody in the Toy Story movies, but his brother Jim Hanks has a very similar voice, and since he isn’t Tom Hanks he commands a lower salary.

    So many video games and whatnot use Jim’s voice for Woody instead to save a bunch of money, and/or because Tom is typically busy filming movies.

    This isn’t an abnormal situation; voice actors constantly have “sound-alikes” who impersonate them and get paid precisely because they sound similar.

    OpenAI clearly did this.

    It’s hilarious because normally fans are foaming at the mouth if a studio hires a new actor who sounds even a little bit different from the prior actor, and no one bats an eye at studios’ efforts to try really hard to find a new actor that sounds as close as possible.

    Scarlett declined the offer and now she’s malding that OpenAI went and found some other woman who sounds similar.

    Them’s the breaks; that’s an incredibly common thing in voice acting across the board, in video games, TV shows, movies, you name it.

    OpenAI almost certainly would have won the court case if they had been able to produce the person they actually hired, and said person could demonstrate that her voice sounds the same as Gippity’s.

    If they did that, Scarlett wouldn’t have a leg to stand on in court; she can’t sue someone for having a similar voice to hers, lol.


  • There’s basically no reason to keep using Windows.

    Debian or Linux Mint are both easy to install and work out of the box; the only thing that might take a smidge of effort is the 3 commands you gotta run to install GPU drivers.

    Steam’s Proton works incredibly well. I ran my entire Steam library (most of which were “Windows only” games) and every single one worked with Proton as-is, without issues.

    I’ve been using Steam Link from my Debian box for months now and it’s smooth as butter.


  • Well, tbh, Quests don’t really bug you much about anything FB-related. After you set up the account, the only thing you deal with is that the initial menu starts opened to the app store, with suggestions based on what you already bought.

    But that initial menu also lets you set quick-access buttons for your favorite apps.

    So it’s usually only a single click to go from “put on headset” to “open thing I want”.

    It’s not any different from Steam starting you out in the store, tbh; I can accept that level of advertising since it’s pretty transparent, and half the time it has something of interest for me anyway.

    It’s about as big of a deal as a gift shop at a museum.


  • This seems like it has pretty powerful potential for space flight.

    Being able to aggressively min-max the packaging material needed to secure cargo could be critical for reducing payload sizes on shuttles, where every single gram counts.

    Each kg of packaging costs thousands of dollars to get into orbit, so that’s really appealing.

    I’d be curious to see if Amazon is also working on box-packing algorithms for fitting n parcels across x delivery trucks as efficiently as possible.

    I.e., if you have 10,000 boxes to move, what’s the fewest delivery trucks you can fit those boxes into, as fast as possible, which introduces multiple complex problems at once: packing to maximize space usage, and choosing the packing order to minimize armature travel time…
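    The truck side on its own is the classic bin packing problem. Here’s a rough first-fit-decreasing sketch in Python (the box volumes and truck capacity are made-up numbers, and the real version also has to handle 3D geometry, weight limits, and load order):

    ```python
    # Toy sketch of the truck-loading half: 1D bin packing with the
    # first-fit-decreasing heuristic. All numbers are made up.
    def pack_trucks(box_volumes, truck_capacity):
        free = []   # remaining free volume per truck
        loads = []  # box volumes placed in each truck
        for box in sorted(box_volumes, reverse=True):  # biggest boxes first
            for i, space in enumerate(free):
                if box <= space:
                    free[i] -= box
                    loads[i].append(box)
                    break
            else:  # no existing truck fits this box, so open a new one
                free.append(truck_capacity - box)
                loads.append([box])
        return loads

    print(pack_trucks([8, 2, 5, 4, 7, 1, 3, 6], truck_capacity=10))
    # -> [[8, 2], [7, 3], [6, 4], [5, 1]]
    ```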

    I’d put money down that Amazon is perfecting this algorithm right now, and has been for a while.





  • Yup, I usually have it set to the slowest setting when typing.

    I find I work much better and can think clearer while walking, as it keeps the blood flowing and makes me feel more awake and engaged.

    If I have a tough problem I’m trying to work through, I turn the speed up to a faster pace and sorta just work through it in my head while speed walking; often this helps a lot!

    During meetings when I’m bored I also turn the speed up a bit.

    I often get around 10k to 12k steps in a day now.

    Note I don’t stay on the treadmill all day long; I usually clock a good 4 hours on it, though.

    Then I take a break and chill on the couch with my work laptop; I usually save my more “chill” tasks, like writing my tests, for this part, and throw on some Netflix while I churn all my tests out.

    Highly recommend it; I’ve lost a good 15-ish lbs in the past year since I started doing it, and I just generally feel a lot better, less depressed, less anxious :)


  • I have heard of Jupyter but am not familiar with its nuances.

    But doing Python dev with Neovim is very doable; it uses the same LSP, I think.

    I personally have a dedicated dev machine running Debian that has everything on it, including nvim configured.

    I SSH into my dev box from other machines to do work; because Neovim is a TUI, it “just works” over SSH inside the terminal itself, which is what I like about it.

    It feels good to just

    1. SSH into my box
    2. tmuxinator my-project-name

    And boom, 4 tmux tabs pop open ready to go in the terminal:

    • nvim (pointing at the project dir)
    • lazygit already open
    • nvim (pointing at my secrets.json file elsewhere)
    • an extra general console window opened to project root

    And I can just deep dive into working ASAP in just those 2 steps; it feels very smooth.

    I can often even just do tmux a (short for attach) to straight re-open whatever session I last had open in tmux, instantly jumping right back to where I left off.


  • I’d try and start using it for basic tasks, like note-taking, to get used to its interface and basic commands like :w and :q, as well as switching between insert and command mode.

    Once you are familiar with switching between modes, copying, pasting, etc., then you probably will wanna start learning its Lua API and how to load in some QoL plugins. Basic stuff like treesitter, telescope, and nvim-tree are good places to start.

    Once you feel comfortable with swapping between files with telescope and configuring plugins, I’d deep dive into getting an LSP up and running for your language of choice so you can actually code.

    In the interim I’d recommend getting comfy with using tmux in your terminal; try to open new tmux tabs to do units of work instead of constantly cd-ing around.

    I like to keep 4 tmux tabs open for a project:

    • nvim
    • lazygit
    • secrets file open in nvim (usually my secrets file is in another dir so it doesn’t get checked into git)
    • a general terminal tab for running commands

  • From my experience, the only big changes I’d say I’ve made over time are:

    1. Font size bumped up

    2. Switched to Neovim from Visual Studio, which took like a year to relearn my entire workflow (100% worth it though)

    3. Switched from multiscreen setup to one single big screen (largely due to #2 above no longer needing a second screen, tmux+harpoon+telescope+fzf goes brrrr)

    4. Switched to a standing desk with a treadmill, because I became able to afford a larger living space where I can fit such a setup.

    If I were to do this meme though, it’d mostly be #1; there just came a day when I had to pop open my settings and ++ the font size a couple times, and that’s how I knew I was getting old.




  • Also, people are glossing over its potential to improve sex drive.

    The “my wife read a slightly spicy book today and now she wants to get it on” trope is well known on social media; AI’s ability to just generate whatever you want will likely boost that.

    However, at this time AI is unable to really handle pacing well.

    It’s pretty well known that most attempts with current uncensored LLMs tend to produce saucy encounters that are… poorly paced.

    Good spicy novels have a lot of build-up and a slow pace, which requires remembering facts from many chapters ago.

    Even the top end of massive LLMs lack the memory capacity to last more than a handful of pages before they completely lose the thread.

    But hopefully this gets remedied eventually.


  • I’ve been calling this for a while now.

    I’ve been calling it the Ouroboros effect.

    There are even bigger factors at play that the paper didn’t dig into, and that’s selection bias due to human intervention.

    See at first let’s say an AI has 100 unique outputs for a given prompt.

    However, humans will favor let’s say half of em. Humans will naturally regenerate a couple times and pick their preferred “cream of the crop” result.

    This will then ouroboros for an iteration.

    Now the next iteration only has say 50 unique responses, as half of them have been ouroboros’d away by humans picking the one they like more.

    Repeat, each time “half-lifing” the originality.
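    A toy simulation of that halving (the numbers are arbitrary; the point is how fast the pool of distinct outputs decays):

    ```python
    import random

    # Pretend the model starts with 100 distinct outputs for a prompt; each
    # generation, people keep roughly the half they like best and the model
    # retrains on those survivors.
    outputs = list(range(100))
    for generation in range(1, 6):
        outputs = random.sample(outputs, len(outputs) // 2)  # human-favored half
        print(f"generation {generation}: {len(outputs)} distinct outputs left")
    # generation 1: 50 ... generation 5: 3
    ```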

    Over time, everything will get more and more same-ish. Models will degrade in originality as everything muddles into corporate speak.

    You know how every corporate website uses the same useless “doesn’t mean anything” jargon string of words, to say a lot without actually saying anything?

    That’s the kind of local minimum AI is heading toward too, as it keeps getting selectively “bred” to speak in an appealing and nonspecific way for the majority of online content.


  • Not related to the article at all mate.

    This article is about how many plugins have been discovered to have implemented OAuth in a very insecure way, and simply using them can expose sensitive info you have linked to your ChatGPT account.

    I.e.:

    1. You connect your GitHub account to your ChatGPT account (so you can ask ChatGPT questions about your private codebase)

    2. You install and use one of the many other weakly implemented, compromisable plugins

    3. Attacker uses the weak plugin to compromise your whole account and can now access anything you attached to your account, i.e., they can now access the private git repos you hooked up in step 1…

    Most of the attack vectors involve a basic (hard to notice) phishing attack using weak OAuth URLs.

    The tricky part is that the URLs truly are, and look, legit. It isn’t a fake URL; it actually links to the legit page, but the attacker adds some query params (the part after the ? in the URL) that compromise the way it behaves.
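    To make that concrete, here’s a hypothetical pair of links of the kind described (the domains are made up): both genuinely point at the same authorization page, and only a query parameter differs.

    ```python
    from urllib.parse import parse_qs, urlparse

    # Hypothetical URLs for illustration only: identical, legit host and path,
    # but the attacker’s link swaps in their own redirect_uri.
    legit = ("https://auth.example-plugin.com/oauth/authorize"
             "?client_id=abc123&redirect_uri=https://app.example-plugin.com/callback")
    phished = ("https://auth.example-plugin.com/oauth/authorize"
               "?client_id=abc123&redirect_uri=https://evil.example.net/callback")

    for url in (legit, phished):
        parts = urlparse(url)
        print(parts.netloc + parts.path)                 # same for both links
        print(parse_qs(parts.query)["redirect_uri"][0])  # the only difference
    ```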


  • Note that ChatGPT indeed implemented a state parameter, but their state was not a random value, and therefore could be guessed by the attacker.

    Bruh wut, rookie mistake.

    State is supposed to be cryptographically random and should expire fairly quickly.

    I’ve always used a random GUID that expires after 10-15 minutes for state; if they try to complete the OAuth flow with an expired state value, I reject it and ask them to try again.

    Also, yeah, the redirect URI trick is common; that’s why OAuth APIs must always have “whitelist URLs” functionality. And not just the domain, the whole URL.

    That’s why when you make a Google API token you gotta specify explicitly which URLs it’s valid for. That way any other redirect URI gets rejected, preventing an injection attack where a third party provides its own redirect URI to a victim.
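    A minimal sketch of both safeguards (an unguessable, expiring state value plus an exact-match redirect URI whitelist; the URL and TTL here are just example values):

    ```python
    import secrets
    import time

    STATE_TTL_SECONDS = 600  # expire state after ~10 minutes
    ALLOWED_REDIRECT_URIS = {"https://app.example.com/oauth/callback"}  # whole URLs, not just domains

    _pending_states = {}  # state value -> timestamp it was issued

    def issue_state():
        """Generate an unguessable state value and remember when it was issued."""
        state = secrets.token_urlsafe(32)  # cryptographically random (a random GUID works too)
        _pending_states[state] = time.time()
        return state

    def validate_callback(state, redirect_uri):
        """Reject unknown/expired state and any redirect URI not explicitly whitelisted."""
        issued = _pending_states.pop(state, None)
        if issued is None or time.time() - issued > STATE_TTL_SECONDS:
            return False  # unknown, already-used, or expired state: make them start over
        return redirect_uri in ALLOWED_REDIRECT_URIS  # exact match on the whole URL
    ```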

    OAuth is pretty explicit about all these things in its spec. It really sucks that people treat them as optional, “not important” details.

    It’s important. Do it. Always.




  • Often these types of articles tend to be very “a lot written but nothing said”, but that is very much not the case for this article.

    I really enjoyed how in-depth it went, both on the history of how things used to be and on what the new advancements will provide.

    Outsourcing through the foundry program might lead to some pretty big revolutions in chip making. I wonder if we’ll start to see open-source chips show up as large companies open the floodgates and let individuals contribute improvements to dies…