• vrighter@discuss.tchncs.de · 10 hours ago

    So? Some of the people pushing out AI slop would be perfectly capable of writing their own LLM out of widely available free tools. Contrary to popular belief, LLMs are not complex pieces of software, just extremely data-hungry ones. That doesn’t mean they magically understand the code the LLM spits out.

    • Honytawk@feddit.nl · 10 hours ago

      Stark would have developed his own way of training his AI. It wouldn’t be an LLM in the first place.

      • vrighter@discuss.tchncs.de · 10 hours ago

        And he still wouldn’t understand its output. Because, as we clearly see, he doesn’t even try to look at it.

          • vrighter@discuss.tchncs.de · 9 hours ago

            Given that expert systems are pretty much just a big ball of if-then statements, he might be considered to have written the app himself. Just with way more extra steps.
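
            For illustration, here’s a minimal sketch of that idea: an expert system reduced to a rule table and a matching loop. The rules and facts are invented for the example:

            ```python
            # A toy expert system: the "knowledge" is just if-then rules.
            # Rules and facts below are hypothetical, for illustration only.
            rules = [
                ({"engine_cranks": False}, "check the battery"),
                ({"engine_cranks": True, "fuel_ok": False}, "refuel"),
                ({"engine_cranks": True, "fuel_ok": True}, "check the ignition"),
            ]

            def diagnose(facts):
                # Fire the first rule whose conditions all match the known facts.
                for conditions, action in rules:
                    if all(facts.get(key) == value for key, value in conditions.items()):
                        return action
                return "no rule matched"

            print(diagnose({"engine_cranks": True, "fuel_ok": False}))  # -> refuel
            ```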

      • vrighter@discuss.tchncs.de · 8 hours ago

        So? Someone invented current LLMs too; nothing like them existed before either. If they vibe coded with them, they’d still be producing slop.

        Coding an LLM is very, very easy. What’s not easy is having all the data, hardware, and cash to train it.
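
        To make that concrete, here’s a toy sketch of the core computation of an LLM: a single causal self-attention layer in NumPy. The random weights stand in for the expensively trained ones:

        ```python
        import numpy as np

        # One causal self-attention layer: the heart of an LLM, repeated
        # dozens of times. Weights are random here; in a real model they
        # come from training on terabytes of text, which is where the
        # data, hardware, and cash go.

        def softmax(x):
            e = np.exp(x - x.max(axis=-1, keepdims=True))
            return e / e.sum(axis=-1, keepdims=True)

        def causal_self_attention(tokens, w_q, w_k, w_v):
            q, k, v = tokens @ w_q, tokens @ w_k, tokens @ w_v
            scores = q @ k.T / np.sqrt(k.shape[-1])
            # Mask future positions: each token only attends to earlier ones.
            idx = np.arange(scores.shape[0])
            scores = np.where(idx[None, :] > idx[:, None], -np.inf, scores)
            return softmax(scores) @ v

        d = 16                                  # tiny embedding size for the demo
        rng = np.random.default_rng(0)
        tokens = rng.normal(size=(5, d))        # five token embeddings
        w_q, w_k, w_v = (rng.normal(size=(d, d)) for _ in range(3))
        print(causal_self_attention(tokens, w_q, w_k, w_v).shape)  # (5, 16)
        ```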

        • thevoidzero@lemmy.world · 7 hours ago

          Yeah, but the people who made it like that probably understand whether to trust it to write code or not. Tony wrote his AI, so he knows what it does best and trusts it to write his code. Just because it’s AI doesn’t mean it’s an LLM. Like I trust the errors compilers give me, even though I didn’t write the compiler, because it’s good. And I trust my scripts to do the things I wrote them for, specifically because I tested them. Same with an AI you made yourself: you’d have tested it, and you’d know the design principles.

          • vrighter@discuss.tchncs.de · 5 hours ago

            An AI is not a script. You can know what a script does; neural networks don’t work that way. You train them and hope you picked the right dataset for them to learn what you want them to learn. You can’t really test them: you can know they work sometimes, but you also know they will fail sometimes, and there’s jack shit you can do about it. A couple of gigabytes of floating-point numbers is not decipherable to anyone.
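
            For a sense of scale, here’s a back-of-the-envelope sketch of what a trained model actually is: just weight tensors. The layer shapes are hypothetical, roughly GPT-2-small scale:

            ```python
            import numpy as np

            # Hypothetical layer shapes, roughly GPT-2-small scale.
            layers = {
                "token_embeddings": (50257, 768),
                "attention_qkv": (12, 768, 3 * 768),
                "attention_out": (12, 768, 768),
                "mlp_in": (12, 768, 4 * 768),
                "mlp_out": (12, 4 * 768, 768),
            }

            total = sum(int(np.prod(shape)) for shape in layers.values())
            print(f"{total:,} parameters, ~{total * 4 / 1e9:.1f} GB as float32")
            # Larger models run to hundreds of gigabytes. Every one of those
            # numbers is just a float; none of them explains *why* the model
            # answers the way it does.
            ```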

        • Initiateofthevoid@lemmy.dbzer0.com · 7 hours ago

          The point is that no vibe coder could design an LLM without an LLM already existing. The math and tech behind machine learning is incredible, whatever you may think of the output. Just because we can spin up new ones at will doesn’t mean we could ever have skipped ahead and built Jarvis in 2008, even if all of society had been trying to do so. Because they were trying.

          In the fictional universe where a human could singlehandedly invent one from scratch in 2008, with 3D image generation and voice functionality that still exceeds modern tech… yeah, that person and their fictional AI wouldn’t necessarily be producing slop.