• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: June 25th, 2023

  • “Metadata” is such a pretty word. How about “recipe” instead? It stores all the information necessary to reproduce a work verbatim, or to grab any aspect of it.

    The legal issue of copyright is a tricky one, especially in the US, where copyright is often weaponized by corporations. The gist of it is: the training dataset itself was an academic endeavor and therefore falls under fair use. Companies like StabilityAI or OpenAI then used these datasets and monetized products built on them, which, in my understanding, skirts the gray zone of legality.

    If these private for-profit companies simply took the same data and built their own, identical dataset, they would be liable to pay the authors for the use of their work in a commercial product. They get around this by using the existing dataset, originally created for research rather than commercial use.

    Lemmy is full of open source and FOSS enthusiasts, I’m sure someone can explain it better than I do.

    All in all, I don’t argue about the legality of AI, but as a professional creative I highlight the ethical (plagiarism) risks that are beginning to arise in the majority of models. We all know the Joker, Marvel superheroes, and popular Disney and WB cartoon characters - and we can spot when “our” generations cross the line into copying someone else’s work. But how many of us are familiar with Polish album cover art, Brazilian posters, Chinese film superheroes or Turkish logos? How sure can we be that the work “we” produced using AI is truly original and not a perfect copy of someone else’s work? Does our ignorance excuse this second-hand plagiarism? Or should the companies releasing AI models stop adding features and fix that broken foundation first?




  • I was on the same page as you for the longest time. I cringed at the whole “No AI” movement and artists’ protests. I used the very same argument: generations of artists honed their skills by observing the masters, copying their techniques, and only then developing their own unique style. Why should AI be any different? Surely AI will not just copy works wholesale, but will instead learn color, composition, texture and other aspects of various works to find its own identity.

    It was only when my own prompts started producing results I recognized as “homages” at best and “rip-offs” at worst that I was given pause.

    I suspect that earlier generations of text-to-image models had better moderation of training data. As the arms race heated up and the pace of development picked up, companies running these services started rapidly incorporating whatever training data they could get their hands on - ethics, copyright, and artists’ rights be damned.

    I remember when MidJourney introduced Niji (their anime model) and I could often identify the mangas and characters used to train it. The imagery Niji produced kept certain distinct and unique elements of character designs from that training data - as a result, a lot of characters exhibited the “Chainsaw Man” pointy teeth and protruding tongue - without so much as a mention of the source material or even its themes.



  • The problem here is that while the Joker is a pretty recognizable cultural icon, somebody using an AI may have a genuinely original idea for an image that just happens to have been independently developed by someone before. As a result, the AI can produce an image that’s a copy or close reproduction of an original artwork without disclosing its similarity to the source material. The new “author” will then unknowingly rip off the original.

    The prompts to reproduce the Joker and other superhero movie characters were quite specific, but asking for an “Animated Sponge” is pretty innocent. It is not unthinkable that someone may not be familiar with Mr. Squarepants and think they developed an original character using AI.


  • These models were trained on datasets that used the authors’ work as training material without compensating them. It’s not every picture on the net, but a lot of it comes from scraping websites, portfolios and social networks wholesale.

    A similar situation happens with large language models. Recently Meta admitted to using pirated books (the Books3 dataset, to be precise) to train their LLM, with no plans to compensate the authors - or even so much as pay for a single copy of each book used.