When I used to do image and signal processing for embedded systems in C and C++, AI was useless. Now that I do backend web development in Python and Ruby, AI is better than me. It really depends on the problem area and how much sample code and how many answers are out there for it to steal from.
I do backend development in PHP and Ruby, and AI sometimes has a suggestion that helps me out but is often completely, utterly useless, especially at actually coding the thing from scratch.
Yeah lol, a lot of the time I say “I need to do X” and get back “use functionThatDoesX then”. Sometimes it’s a wonderful discovery; most often, though, the function just doesn’t exist lol
I asked GPT for code to aim a heliostat
It needed a module to get the Sun’s position; it used sun::alt::azimuth, which doesn’t exist, rather than Astro::Coord::ECI::Sun
It needed a module to calculate the mirror angle between the Sun’s altitude and azimuth and the target’s altitude and azimuth. It left that commented out, rather than just picking the altitude halfway between Sun and target and the azimuth halfway between Sun and target
It turns out there’s precious little on the internet on how to aim a mirror, partly because it’s not popular, partly because it’s dead simple
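Since it really is dead simple, here’s a minimal Python sketch of the math (the hallucinated module above was Perl, but it reads the same anywhere). Everything is illustrative: mirror_aim is a made-up name, and it assumes altitude/azimuth in degrees with azimuth measured clockwise from north. Doing the “halfway” rule with unit vectors instead of raw angles means the 0/360° azimuth wraparound doesn’t bite:

```python
import math

def mirror_aim(sun_alt, sun_az, tgt_alt, tgt_az):
    """Return the (altitude, azimuth) the mirror's normal should point at.

    A flat mirror reflects the Sun onto the target when its normal
    bisects the Sun direction and the target direction. Summing the two
    unit vectors and renormalising gives that bisector directly.
    """
    def unit(alt, az):
        a, z = math.radians(alt), math.radians(az)
        # East, north, up components of a unit pointing vector.
        return (math.cos(a) * math.sin(z),
                math.cos(a) * math.cos(z),
                math.sin(a))

    s, t = unit(sun_alt, sun_az), unit(tgt_alt, tgt_az)
    b = [si + ti for si, ti in zip(s, t)]       # bisector (unnormalised)
    norm = math.sqrt(sum(c * c for c in b))
    b = [c / norm for c in b]
    alt = math.degrees(math.asin(b[2]))
    az = math.degrees(math.atan2(b[0], b[1])) % 360
    return alt, az

# Sun 40° up due south (180°), target 10° up due east (90°):
print(mirror_aim(40, 180, 10, 90))  # ~ (33.2°, 127.9°)
```

Point the mirror’s normal at the returned alt/az and the reflection lands on the target; the vector form is exactly the “halfway” idea, just immune to angle-averaging weirdness.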
Like 95% of the people on here do the latter, though. (Way too often I get in arguments where it seems like people don’t realise there are other kinds of coding)
I do a ton of PowerShell scripting, and AI is either a half-competent programmer, or it’s like someone let grandpa respond with syntax from nineteen dickety two
Can confirm. AI is worse than useless for embedded systems.
I was working with Yocto on a very specialized Xilinx chip. I had ChatGPT make up a chapter that didn’t exist (in a manual that did), reference non-existent paragraphs from said chapter, and then argue with me quite confidently that the chapter was real and the information it was giving me was accurate.
ChatGPT can make ridiculous claims about what it can do. It sometimes even claims to have done real-world things when that’s laughably impossible.
And as soon as you enter corporate stuff, LLMs are useless again, because most things are integrated into existing ecosystems the LLMs have never seen, and/or the libraries are only used in closed-source code.
Really? I had an app that would auto-generate timesheets for work in Google Sheets. I decided to minimise API calls by doing a single call to Google Drive, then parsing the HTML and reuploading. Not a big Python project, but ChatGPT hit a wall pretty fast on that one. Though, tbf, the documentation was surprisingly opaque, so I suppose that goes back to your point.
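If anyone wants to try the same trick: Drive can export a whole spreadsheet as zipped HTML in one call. A minimal sketch with google-api-python-client; creds and SHEET_ID are placeholders you’d have to supply yourself:

```python
import io
import zipfile

from googleapiclient.discovery import build
from googleapiclient.http import MediaIoBaseDownload

# creds: any authorised Google credentials object (assumed already set up).
drive = build("drive", "v3", credentials=creds)

# One Drive export call instead of a Sheets API call per range:
# mimeType "application/zip" returns the spreadsheet as zipped HTML.
request = drive.files().export_media(fileId=SHEET_ID,
                                     mimeType="application/zip")
buf = io.BytesIO()
downloader = MediaIoBaseDownload(buf, request)
done = False
while not done:
    _, done = downloader.next_chunk()

with zipfile.ZipFile(buf) as zf:
    # The zip holds one .html file per sheet tab; grab the first here.
    html = zf.read(zf.namelist()[0]).decode("utf-8")
```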
That project also produced my finest pile of spaghetti code, since I had to account for stretched cells in the HTML parsing. I still have a piece of paper with my innumerate math scribbles. The paper makes sense to me. The code does not.
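For anyone who hits the same wall: the “stretched cells” are just rowspan/colspan, and you can flatten them into a plain grid with a little bookkeeping. A sketch assuming BeautifulSoup; table_to_grid is a made-up name:

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def table_to_grid(html):
    """Flatten an HTML <table> into a 2-D list, expanding rowspan and
    colspan ("stretched cells") so every grid position holds a value."""
    soup = BeautifulSoup(html, "html.parser")
    grid = []
    pending = {}  # (row, col) -> value carried down by an earlier rowspan
    for r, tr in enumerate(soup.find_all("tr")):
        row, c = [], 0
        cells = iter(tr.find_all(["td", "th"]))
        while True:
            # First use up any slots reserved by rowspans from rows above.
            while (r, c) in pending:
                row.append(pending.pop((r, c)))
                c += 1
            cell = next(cells, None)
            if cell is None:
                break
            text = cell.get_text(strip=True)
            cs = int(cell.get("colspan", 1))
            rs = int(cell.get("rowspan", 1))
            for dc in range(cs):
                row.append(text)           # repeat across the colspan
                for dr in range(1, rs):    # reserve slots in rows below
                    pending[(r + dr, c + dc)] = text
            c += cs
        grid.append(row)
    return grid
```

Expanding the spans up front means the rest of the code can index the grid like a normal spreadsheet, which is roughly what the paper scribbles were working out.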