The Internet Watch Foundation has found a manual on the dark web encouraging criminals to use software tools that remove clothing from images. The manipulated image could then be used against the child to blackmail them into sending more graphic content, the IWF said.

        • Grimy@lemmy.world · 8 months ago

          Restricting AI will only kill the open-source scene and make all AI products subscription-based. Since we are moving quickly toward an AI-driven society, this would hand our whole economy to Google and Microsoft.

          Some of us understand what’s at stake.

          The individuals doing such things should absolutely be prosecuted; it needs to be illegal to make deepfakes of someone, triply so when they’re used to extort that person.

          But if you catch someone drunk driving, you prosecute him for drunk driving; you don’t ban cars.

          But obviously, if someone says “think of the children”, you should always mindlessly give up whatever freedoms they are asking you to.

            • stevedidwhat_infosec@infosec.pub · 8 months ago

              Except you’re not trying to ask for seatbelts. You’re arguing we get rid of the cars.

              AI is just the vessel; the problem is cyber extortion.

              You handle the extortion bit by making seatbelts. Not seatbelts that auto-buckle. Not cars that don’t start without one. But by providing the safeguards to the people, who can then make the decision to wear them, and by punishing those who put others at risk through their misuse.

              You don’t ban alcohol because of alcoholics. You punish those who refuse to use it safely and appropriately and, most of all, those who put others at risk.

              That’s freedom. That’s the American way. Not anything else.

                • stevedidwhat_infosec@infosec.pub · 8 months ago

                  No, I don’t. You want me to think that because it makes it easier to be aggressive towards me.

                  I’ve obviously misunderstood you, so I’m sorry about that. I should’ve led with questions instead of assumptions and that’s on me.

                  I think any mature adult who’s for AI knows that some safeguards and changes are necessary, just like they are for any new invention.

            • Grimy@lemmy.world · 8 months ago

              There are no seatbelts. It’s either cars or only public transport.

              Can you explain what’s wrong with what I said, instead of saying “you are one of those who are against restrictive regulations, therefore you are wrong”?

              We should be very vocal about it; OpenAI and their friends are. They have lobbyists in Washington trying to convince the government that AI is too dangerous for people to have free access to it. They are using the media to disseminate hate and trigger people’s emotional response.

                • rebelsimile@sh.itjust.works · 8 months ago

                  I think the question is: should we have designed the internet so that it was impossible to find bomb plans on it? And to be honest, I don’t think the internet would be what it is if it were possible to have that level of filtering and censorship. Child porn is reprehensible in any form. To me, it makes more sense to blame the moron with the hammer than to blame the hammer.

                • Grimy@lemmy.world · 8 months ago

                  What you are asking for is equivalent to stopping people from writing literotica about children using Word.

                  Nobody is advocating for child literotica or defending it, but most understand that it would take draconian measures to stop it. Word would have to be entirely online and everything written would have to pass through a filter to verify it isn’t something illegal.

                  By its very nature, it’s very difficult to remove such things from generative models, although there is one solution I can think of, which would be to take children out of the models completely.

                  The problem is that this isn’t the solution being proposed; sadly, all the legislation currently being floated is meant to do one thing, and that is to create and cement a monopoly around AI.

                  I’m ready to tackle all the issues involving AI, but the main current issue is a handful of companies trying to rip it out of our hands and playing on people’s emotions to do so. Once that’s dealt with, we can take care of the 0.01% of users who are generating CP.

    • stevedidwhat_infosec@infosec.pub · 8 months ago

      Pedophile apologists*

      Nobody interested in the development of AI would be interested in defending pedos, unless they’re pedos. That’s reality.

      Why lump the two groups together?

      In fact, AI is used by these orgs to spare workers from having to look at these images themselves; that exposure is partly why burnout among mods, admins, and content-moderation staff is so high.

      Every time some nasty shit (pedo shit, gore, etc.) is posted on Tumblr, Facebook, Instagram, etc., those reports go through real people (or did prior to these AI models). Now imagine smaller, up-and-coming websites like Lemmy instances that might not have the funds or don’t know about this kind of AI solution (there’s a rough sketch of the idea at the end of this comment).

      AI fixes problems too; the root of the problem is cyber extortion, whether that means the criminals are photoshopping or using AI. They’re targeting children, for Christ’s sake. Besides that being fucked up all by itself, it’s not hard to fool a child, AI or not. How criminals are contacting and blackmailing YOUR CHILDREN is the problem, imo.
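
      A minimal, hypothetical sketch of that triage idea (none of these names are a real moderation API, and the thresholds are made up): an automated classifier scores each reported image first, so only the uncertain middle band ever lands in front of a human moderator.

      ```python
      # Hypothetical human-in-the-loop triage: a classifier scores each report,
      # and only the uncertain middle band is routed to human moderators.
      from dataclasses import dataclass
      from typing import Iterable, List, Tuple

      @dataclass
      class Report:
          report_id: int
          image_path: str

      def classifier_score(image_path: str) -> float:
          """Stand-in for a real image classifier: 0.0 = clearly benign, 1.0 = clearly abusive."""
          return 0.0  # stub; a deployment would call an actual model or hash-matching service here

      def triage(reports: Iterable[Report],
                 remove_at: float = 0.95,
                 clear_at: float = 0.05) -> Tuple[List[Report], List[Report], List[Report]]:
          removed, cleared, human_queue = [], [], []
          for r in reports:
              score = classifier_score(r.image_path)
              if score >= remove_at:
                  removed.append(r)      # taken down and reported without a person ever viewing it
              elif score <= clear_at:
                  cleared.append(r)      # dismissed automatically
              else:
                  human_queue.append(r)  # only this slice reaches a moderator
          return removed, cleared, human_queue
      ```

      The point is the thresholds, not the model: the wider the two auto-handled bands, the fewer images a person ever has to look at.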

      • Llewellyn@lemm.ee · 8 months ago

        Not even if it’s fully AI generated.

        Who is a victim in that case?