YouTube's rollout of AI tools shows nonsensical AI-generated audience engagement and AI slop thumbnails.

YouTube is AI-generating replies for creators on its platform so they can more easily and quickly respond to comments on their videos, but it appears that these AI-generated replies can be misleading, nonsensical, or weirdly intimate.

YouTube announced that it would start rolling out "editable AI-enhanced reply suggestions" in September, but thanks to a new video uploaded by Clint Basinger, the man behind the popular LazyGameReviews channel, we can now see how they actually work in the wild.

For years, YouTube has experimented with auto-generated suggested replies to comments that work much like the suggested replies you might have seen in Gmail, letting you click one of three short responses that might be relevant, like "Thanks!" or "I'm on it," instead of typing out a reply yourself. "Editable AI-enhanced reply suggestions" on YouTube work similarly, but instead of short, simple replies, they offer longer, more involved answers that are "reflective of your unique style and tone." According to Basinger's video demoing the feature, the AI-generated replies do appear to be trained on his own comments, at times replicating previous comments he made word for word, but many of the suggested replies are strangely personal, wrong, or just plain weird.
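
To give a sense of what a feature like this involves under the hood, here is a rough, purely illustrative sketch of how style-conditioned reply suggestions could be generated. It assumes nothing about YouTube's actual implementation; build_prompt, suggest_replies, and generate_text are hypothetical names, and generate_text stands in for whatever language model does the drafting.

```python
# Hypothetical sketch only; not YouTube's implementation.

def generate_text(prompt: str) -> str:
    # Placeholder: swap in any real text-generation model call here.
    raise NotImplementedError("plug in a language model")

def build_prompt(past_replies: list[str], new_comment: str) -> str:
    """Condition the model on the creator's previous replies so the
    suggestion mimics their tone, per the feature's description."""
    examples = "\n".join(f"- {reply}" for reply in past_replies[-20:])
    return (
        "You draft comment replies in the style of this creator.\n"
        f"The creator's past replies:\n{examples}\n\n"
        f"New comment on their video: {new_comment}\n"
        "Suggested reply:"
    )

def suggest_replies(past_replies: list[str], new_comment: str, n: int = 3) -> list[str]:
    # The model can imitate style convincingly, but it has no way to know
    # whether its fluent guess is factually true for this particular video.
    prompt = build_prompt(past_replies, new_comment)
    return [generate_text(prompt) for _ in range(n)]
```

The point of the sketch is the gap it leaves: a model can imitate a creator's tone from examples while knowing nothing about what actually happened in the video, which is exactly the failure mode Basinger's demo surfaces.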

For example, last week Basinger posted a short video about a Duke Nukem-branded G Fuel energy drink that comes as a powder meant to be mixed with water. In the video, Basinger makes himself a serving of the drink but can't find the scoop he's supposed to use to measure out the formula.

"I wouldn't be surprised if the scoop was buried in the powder," one YouTube user commented on the Duke Nukem G Fuel video, which certainly sounds right to me as someone who's been serving up baby formula for the last year.

YouTube's AI suggested that Basinger reply to that comment by saying: "It's not lost, they just haven't released the scoop yet. It's coming soon."

Image: LGR Blerbs/YouTube

I can see how that reply could make sense in the context of the other kinds of videos LGR publishes, which usually review old games, gadgets, and other tech, but it is obviously wrong in this instance.

Another suggested reply to that same comment said: "I'll have to check if they're using a proprietary blend that requires a special scoop."

"My creativity and craft stems completely from my own brain, and handing that off to some machine learning thing that mimics my style not only takes away from the enjoyment of it all for me, but it feels supremely disingenuous," Basinger told me in an email. "The automated comments in particular come across as tone deaf, since a huge reason YouTube makes sense at all is the communication and relationship between audience and creator. I've had dozens of people say that they now second-guess every interaction with YouTubers in the comments since it could easily be a bot, a fake response."

Another commenter on the Duke Nukem G Fuel video joked that Basinger should have had a tighter grip on the lid as he was shaking the formula to prevent it from flying all over the place.

YouTube's generative content features in 2024

Basinger bursts out laughing as he reads YouTube's suggested AI-generated reply to that comment: "I've got a whole video on lid safety coming soon, so you don't have to worry!"

At other times, the AI-suggested replies are just nonsensical. The Duke Nukem G Fuel review wasn't posted to the main LGR channel, but to a channel called LGR Blerbs, which is his naming convention for shorter, less deeply researched videos about whatever he's interested in. A few commenters said they were happy he was posting to the Blerbs channel again, with one saying "Nice. Back to the blerbs."

YouTube's AI suggested Basinger reply to that comment by saying: "It's a whole new kind of blerp," which I suppose is funny, but also doesn't mean anything.

The weirdest examples of AI-generated replies in the video, in my opinion, are those that attempt to speak to Basinger's personal life. In response to another commenter who said they were happy Basinger was posting to the Blerbs channel again, YouTube's AI suggested the following reply: "Yeah, I'm a little burnt out on the super-high-tech stuff so it was refreshing to work on something a little simpler 🙂." Another AI-generated reply thanked commenters for their patience and said that Basinger was taking a break but was back to making videos now.

Burnout is a well-established problem among YouTube creators, to the point where YouTube itself offers tips on how to avoid it. The job is taxing not only because creators need to churn out a lot of videos to get picked up by YouTube's recommendation algorithm, but also because comments on those videos, and creators' replies to those comments, help increase engagement and visibility for those videos.

YouTube rewarding that type of engagement incentivizes the busywork of creators replying to comments, which has predictably resulted in an entire practice, and a set of tools, that let creators plug their channels into a variety of AI services that will automatically reply to comments for them. YouTube's AI-enhanced reply suggestions feature just brings that practice of manufactured engagement in-house.
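
To give a sense of how little plumbing those third-party auto-reply tools need, here is a rough sketch of the general approach, assuming the google-api-python-client library and an already-authorized credentials object. generate_reply is a hypothetical stand-in for whichever language model a given tool uses, and none of this is code from any specific product.

```python
# Rough sketch of a third-party auto-reply bot using the public YouTube Data
# API v3 via google-api-python-client. OAuth setup is omitted.
from googleapiclient.discovery import build

def generate_reply(comment_text: str) -> str:
    # Placeholder: a real tool would call a language model here.
    raise NotImplementedError("plug in a language model")

def auto_reply_to_comments(credentials, video_id: str) -> None:
    youtube = build("youtube", "v3", credentials=credentials)

    # Pull the top-level comment threads on the video.
    threads = youtube.commentThreads().list(
        part="snippet", videoId=video_id, maxResults=50
    ).execute()

    for item in threads.get("items", []):
        top = item["snippet"]["topLevelComment"]
        text = top["snippet"]["textDisplay"]

        # Draft a reply and post it back under the original comment.
        youtube.comments().insert(
            part="snippet",
            body={"snippet": {"parentId": top["id"], "textOriginal": generate_reply(text)}},
        ).execute()
```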

Clearly, Google's decision to brand the feature as "editable" AI-enhanced reply suggestions means that it's not expecting creators to use them as-is. Its announcement calls them "a helpful starting point that you can easily customize to craft your reply to comments." However, judging by what they look like at the moment, many of the AI-generated replies are too wrong or misleading to be salvageable, which once again shows the limitations of generative AI's capabilities despite its rapid deployment by the biggest tech companies in the world.

"I would not consider using this feature myself, now or in the future," Basinger told me. "And I'd especially not use it without disclosing the fact first, which goes for any use of AI or generative content at all in my process. I'd really prefer that YouTube not allow these types of automated replies at all unless there is a flag of some kind beside the comment saying 'This creator reply was generated by machine learning' or something like that."

The feature rollout is also a worrying sign that YouTube could see a rapid descent towards AI-sloppyfication of the type we've been documenting on Facebook.

In addition to demoing the AI-enhanced reply suggestion feature, Basinger is also one of the few YouTube creators who now has access to the new YouTube Studio "Inspiration" tab, which YouTube also announced in September. YouTube says this tab is supposed to help creators "curate suggestions that you can mold into fully-fledged projects – all while refining those generated ideas, titles, thumbnails and outlines to match your style."

Basinger shows how he can write a prompt that immediately AI-generates an idea for a video, including an outline and a thumbnail. The issue in this case is that Basinger's channel is all about reviewing real, older technology, and the AI will outline videos for products that don't exist, like a Windows 95 virtual reality headset. The suggested AI-generated thumbnails also have all the issues we've seen in other AI image generators, like clear misspellings of simple words.

Image: LGR Blerbs/YouTube

"If you're really having that much trouble coming up with a video idea, maybe making videos isn't the thing for you," Basinger said.

Google did not respond to a request for comment.

  • schizo@forum.uncomfortable.business · 9 days ago

    AI generated video ideas, AI generated thumbnails, AI generated comments from the viewers, AI generated comments from the creators…

    I mean, AI already gave me the ick but this is super extra ick.

    Youtube is going to be 100% over-run with absolute garbage, and there's going to be zero way to determine which content is human and not, and it's going to completely make the platform utterly worthless.

    It feels like the most urgent things to figure out how to make viable are things like Loops and Peertube, even over 160-character hot-take platforms or link aggregation or whatever, since the audience is SO much larger, and SO much more susceptible to garbage.

    • JeffKerman1999@sopuli.xyz · 8 days ago (edited)

      It's all a ploy to generate "engagement" and make more money for Google. They have already been caught once boosting video views to milk the advertisers. Oh no, it wasn't them! It was some evil hackers using their infrastructure and tools.

      Regarding the video platforms, the only way is everyone hosts their own content and distribute via RSS… But then where is the money in it

      • schizo@forum.uncomfortable.business · 8 days ago

        Regarding the video platforms, the only way is everyone hosts their own content and distribute via RSS… But then where is the money in it

        The same place a lot of it is now: patreon, merch, and in-video sponsors.

        Sure you lose the Google AdSense money, but really, that's pretty minimal these days after constant payout cuts (see: everyone on youtube complaining about it every 18 months or so), but the bigger pain is reach.

        If I post a video on Youtube, it could land in front of a couple of million people either by search, algorithm promotion, or just random fucking chance.

        If I post it on my own Peertube instance, it's in front of uh, well uh, nobody.

        That's probably the harder problem to solve: how can you make a platform/tech stack gain sufficient inertia that it's not just dumping content in a corner and nobody ever seeing it?

    • Bobby Turkalino@lemmy.yachts · 8 days ago

      there's going to be zero way to determine which content is human and not

      The thing about this prediction is that if it actually does become true, it'll apply to the content you already enjoy. If it's content you enjoy, it's content you enjoy. If not, don't watch it. I don't want to see human creators vanish either but I also think AI creators can coexist with them and ultimately, viewers will vote with their views. YouTube isn't gonna make ad money off videos people aren't watching, so if AI content is shite, they'll walk back on it