• uranibaba@lemmy.world · 18 points · 4 days ago

    I think LLMs just made it easier for people who want to know but not to learn. Piecing together posts from all over the internet required you to understand what you pasted together if you wanted it to work (not always, but the bar was higher). With ChatGPT, you can just throw errors at it until you have the code you want.

    While the requirements never changed, the tools sure did, and they made it a lot easier to not understand.

    • major_jellyfish@lemmy.ca · 2 points · 3 days ago

      Have you actually found that to be the case in anything complex though? I find it just forgets parts of whatever it's generating. Stuck in an infuriating loop of fucking up.

      It took us around two hours to run our coding questions through ChatGPT and see what it gives. It gave complete shit for most of them. We only had to replace one or two questions.

      If a company cannot invest even a day to go through their hiring process and AI-proof it, then they have a shitty hiring process. And with a shitty hiring process, you get shitty devs.

      And then you get people like OP, blaming the generation when, if anything, it's them and their company to blame… for falling behind. Got to keep up, folks. Our field moves fast.

      • xavier666@lemm.ee · 3 points · 3 days ago

        My rule of thumb: use ChatGPT for questions whose answer I already know.

        Otherwise it hallucinates and tries hard to convince me of a wrong answer.

      • uranibaba@lemmy.world · 2 points · 3 days ago

        I find ChatGPT is sometimes excellent at giving me a direction, if not outright solving the problem, when I paste errors I'm too lazy to search for myself. I say sometimes because other times it is just dead wrong.

        The code I ask ChatGPT to write is usually along the lines of "I have these values that I need to verify; write code that checks that nothing is empty and saves an error message for each value that is," and then I work with the code it gives me from there. I never take it at face value.
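
        To make that concrete, here is a minimal sketch of the kind of code I mean, in Python; the field names and the definition of "empty" are just placeholders picked for the example:

            # Values to verify; the fields here are made-up placeholders.
            values = {"name": "Alice", "email": "", "phone": None}

            errors = []
            for field, value in values.items():
                # Treat None and empty/whitespace-only strings as "empty".
                if value is None or (isinstance(value, str) and not value.strip()):
                    errors.append(f"{field} must not be empty")

            # Collected error messages, one per empty field.
            for message in errors:
                print(message)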

        Have you actually found that to be the case in anything complex though?

        I think that using LLMs to create complex code is the wrong use of the tool. They are better at providing structure to work from than at writing the code itself (unless it is something simple, as above), in my opinion.

        If a company cannot invest even a day to go through their hiring process and AI-proof it, then they have a shitty hiring process. And with a shitty hiring process, you get shitty devs.

        I agree with you on that.