As it turns out, developer Embark Studios is using AI for essentially all the voicework in The Finals.

  • R0cket_M00se@lemmy.world · 11 months ago

    Played it for like six hours last night and I didn't even realize the announcers were AI-generated. It can't be that bad if it's passable to those who aren't informed.

  • UpsKaputt@discuss.tchncs.de · 11 months ago

    Played a couple of matches already and I haven’t noticed. I don’t think it’s that big of a deal. The announcers need to relay information about events in the game, but you’re focused on other stuff happening and don’t really have time to think about their voices.

    • Siegfried@lemmy.world · 11 months ago

      And for defending the city and its people, I name you Thane XxXPussy_DestroyerXxX of Whiterun.

    • bigmclargehuge@lemmy.world · 11 months ago

      Not just saying your name: imagine having real conversations with characters, or better yet, having the characters come up with their own quest lines for you. Previously meaningless side NPCs could in theory end up as key characters in an evolving story. Obviously that's probably a few years off, but it'd be amazing for variety and replayability.

  • SkyeStarfall@lemmy.blahaj.zone · 11 months ago

    The biggest issue to me seems to be the model struggling to put emphasis in the right places, but if that’s something you can manually tune…

    Aside from that, it sounds fine. I would hate to be a VA right now. Maybe it won’t kill the field, but it will reduce a lot of job opportunities.

    • R0cket_M00se@lemmy.world · 11 months ago

      I think it'll change VA work so that actors' likenesses, or the specific voices they do, can be licensed out or earn royalties.

      Wanna make a machinima with the voice of a video game character? Sure, build it in ElevenLabs based on voice lines you got from the game, but the moment you make money on it a percentage goes to the actor, or maybe they'd just prefer a one-time fee per person to use the voice, idk.

  • thatsTheCatch@lemmy.nz · 11 months ago

    I opened the article planning to dislike it, and I do dislike the man's voice (the woman's voice sounds quite good, though), but then I thought about it. Sports announcers actually seem like a decent use for AI. I would imagine it would be incredibly difficult to build decent announcers from pre-recorded voice lines… It's heavily context-dependent, and this way it can even read out the team names, which I imagine are chosen by the teams. I think this could work well. I don't see how you'd create a system like this without AI.

    • ryven@lemmy.dbzer0.com · 11 months ago

      It's not a generative model running on your graphics card coming up with novel speech to match the game state; it would never be fast enough (and you're also using that card to, you know, run the graphics). They're using machine learning to generate speech for their pre-written lines so that they can avoid hiring voice actors. I guess they didn't want to try the old way of having whoever isn't too busy in your office record the lines.

      • subignition@kbin.social · 11 months ago

        Depending on how many variables/contexts there are in the lines, that could still be a combinatorial nightmare to record.
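That combinatorial blow-up is easy to quantify: the number of distinct recordings needed is the product of the option counts for every variable slotted into a line. A minimal sketch, using entirely hypothetical slot counts (none of these numbers come from the game):

```python
# Rough illustration of the combinatorics of recorded announcer lines.
# All slot counts below are made up for the example.
from math import prod

# Hypothetical contextual variables for a single announcer line template:
slots = {
    "team_name": 50,    # player-chosen team names to read out
    "event_type": 12,   # e.g. objective started, team eliminated
    "map": 5,
    "match_phase": 4,
}

total_takes = prod(slots.values())
print(total_takes)  # 50 * 12 * 5 * 4 = 12000 recordings for ONE template
```

With human voice actors, every one of those combinations is a studio take; with a TTS model, each is just another render of a pre-written string, which is presumably why the approach appealed to the studio.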