• 1 Post
  • 21 Comments
Joined 6 months ago
Cake day: July 10th, 2025



  • That AI (as in “generative AI”) helps in learning if you give it the right prompt. There is evidence that when a user asks an AI to implement code, they (the user) won’t touch the result because they are unfamiliar with the code it generated. The AI effectively creates a psychological black box that no programmer wants to touch, even for a (relatively speaking) small snippet of a larger program, the same way they would avoid code written by another programmer.

    To further generalize, I fully believe AI doesn’t improve the learning process; it just makes material more accessible and easier to digest for people less literate in a field. I can explain Taylor expansions and power series simplistically to my brother, who is less literate and familiar with math, but I would be shocked if, after a brief general overview, he could now approximate any function or differential equation.

    The same applies to ChatGPT: you can ask it to explain Taylor and power series solutions simply, or better yet, to approximate a differential equation, but that doesn’t change the fact that you still can’t replicate it yourself. I know I’m describing an extreme case where the person trying to learn Taylor expansions has no prior experience with math, but it won’t really work for someone who does, either…

    I want to pose a simple thought experiment from my own experience using AI on, say, Taylor expansions. Assume I want to learn Taylor expansions, I’ve already done differential calculus (the main prerequisite), and I ask ChatGPT “how do Taylor expansions work”: what is the proof of the general series expansion, and can it show an example of applying one to a function. What happens when I then try a problem myself is that I hit a level of uncertainty about my ability to actually perform it, and that is when I go back and ask ChatGPT whether I did it correctly or not. You sort of see what I’m saying: it’s a downward spiral of losing your certainty, sanity, and time the longer you rely on it.
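
    For reference, here is the expansion in question with a worked example; this is the standard definition from any calculus text, nothing specific to my ChatGPT exchange:

    ```latex
    % Taylor expansion of f about x = a (standard definition)
    f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x - a)^n

    % Worked example: f(x) = e^x about a = 0, where f^{(n)}(0) = 1 for all n
    e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \cdots
    ```

    Being able to reproduce that second line unaided, for a function you haven’t seen before, is the actual bar; a good-sounding explanation from the model doesn’t get you there.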

    That is what the programmers are experiencing: it’s not that they don’t want to touch the code because they are unfamiliar with what the AI generated, it’s that they are uncertain of their own ability to fix an issue, since they may fuck it up even more. People are terrified of the concept of failure and of fucking shit up, and by using AI they “solve” that issue of theirs, even though the probability of it hallucinating is higher than if they had spent the time figuring out any conflicts themselves.


  • Privacy reasons. More specifically, I just don’t like using platforms when there are alternatives that don’t compromise my data. In the end, I don’t lose many features or communities going this route. That said, I do miss shitting on people who joined the “Christian v. atheist” Facebook groups; it’s one of my guilty pleasures. These people can’t have a logical debate, and oftentimes it’s completely unrelated to Christianity or atheism, so I end up just personally insulting them.




  • I want to believe you, but the people at my school are abusing it a lot, to the point where they just feed an entire assignment through ChatGPT and it gives them a solution.

    The only time I saw it not fully work was my skip list implementation. I asked an LLM to implement a skip list with insert, delete, and get functionality. What it gave me was an implementation that traversed the list as a standard linked list: it was unaware of the time complexity concept behind a skip list and implemented it as a plain O(n) linked list. It works, but it doesn’t incorporate the “skipping” of nodes, so search stays linear instead of expected O(log n); a sketch of the skipping it kept leaving out is below. I wonder how many students are shitting their pants when they realize the runtime isn’t any better than a standard linked list.
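
    In case it helps anyone checking their own version, here is a minimal sketch of that level-descent; the names and structure are mine (and delete is omitted for brevity), not the code I actually submitted:

    ```python
    import random

    class Node:
        def __init__(self, key, level):
            self.key = key
            # forward[i] is the next node at level i
            self.forward = [None] * (level + 1)

    class SkipList:
        MAX_LEVEL = 16

        def __init__(self):
            self.level = 0
            self.head = Node(None, self.MAX_LEVEL)

        def _random_level(self):
            # Coin flips decide how tall a new node's tower is (p = 0.5)
            lvl = 0
            while random.random() < 0.5 and lvl < self.MAX_LEVEL:
                lvl += 1
            return lvl

        def insert(self, key):
            update = [self.head] * (self.MAX_LEVEL + 1)
            node = self.head
            # Descend from the top level, skipping ahead wherever possible
            for i in range(self.level, -1, -1):
                while node.forward[i] and node.forward[i].key < key:
                    node = node.forward[i]
                update[i] = node
            lvl = self._random_level()
            self.level = max(self.level, lvl)
            new = Node(key, lvl)
            for i in range(lvl + 1):
                new.forward[i] = update[i].forward[i]
                update[i].forward[i] = new

        def get(self, key):
            node = self.head
            # This top-down walk is the "skipping" the generated code lacked;
            # without it, every search degenerates to an O(n) level-0 scan
            for i in range(self.level, -1, -1):
                while node.forward[i] and node.forward[i].key < key:
                    node = node.forward[i]
            node = node.forward[0]
            return node if node and node.key == key else None
    ```

    The whole point is that get touches roughly log n nodes in expectation instead of walking the entire level-0 chain.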


  • No, my intention wasn’t to undermine the value of a degree. I’m saying most people’s priority in getting a degree, more specifically an engineering degree, is just to have a paycheck. On a more related note, there are a lot of “engineering majors” at my uni who use artificial intelligence to code and don’t actually enjoy the process of learning.

    So yeah, at the rate generative AI is being adopted and used at my school, a pool boy could do what most of the sophomore engineers do.





  • I just explained to a friend of mine why I don’t use AI. My hatred towards AI stems from people making it seem sentient, these companies’ business models, and of course, privacy.

    First off, to clear up any misconception: AI is not a sentient being, it does not know how to think critically, and it’s incapable of creating thoughts beyond the data it’s trained on. Technically speaking, an LLM is a lossy compression model, which means it takes what is effectively petabytes of information and compresses it down to a mere 40 GB. When it gets “decompressed”, it doesn’t reproduce the entire petabytes of information; it reconstructs a response from what it was trained on.
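
    Taking those figures at face value (illustrative arithmetic only; the 40 GB and the petabyte are rough numbers, not measurements):

    ```latex
    % Implied compression ratio under the rough figures above
    \frac{1\,\text{PB}}{40\,\text{GB}} = \frac{10^{6}\,\text{GB}}{40\,\text{GB}} = 25{,}000 : 1
    ```

    No lossless scheme gets anywhere near that on general text, which is exactly why what comes back out is a reconstruction rather than a copy.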

    There are several issues I can think of that make an LLM do poorly at its job. Remember, LLMs are trained exclusively on the internet, and as large as the internet is, it doesn’t have everything; your codebase for a skip list implementation is probably not going to match anything it saw. Assuming you have a logic error in your skip list implementation and you ask ChatGPT “what’s the issue with my codebase”, it will notice the code you provided isn’t what it was trained on and will actively try to “fix” it, digging you into a deeper rabbit hole than when you began the implementation.

    On the other hand, if you ask ChatGPT to derive a truth table given a sum of minterms, it will never be correct unless the case is heavily documented (i.e., the truth table of an adder/subtractor). This is the simplest example I can give of how these LLMs cannot think critically, cannot recognize patterns, and only regurgitate the information they were trained on; the tiny script after this paragraph shows what the task actually involves. It will try to produce a solution, but it will always fail.
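
    For anyone who hasn’t taken digital logic: going from a sum of minterms to the table is purely mechanical, which is what makes the failure telling. A minimal sketch (the minterm set here is an arbitrary example of mine, not one from an actual assignment):

    ```python
    # Build a truth table from a sum of minterms.
    def truth_table(num_vars, minterms):
        rows = []
        for m in range(2 ** num_vars):
            # Bit i of the row index m is the value of variable i (MSB first)
            bits = [(m >> (num_vars - 1 - i)) & 1 for i in range(num_vars)]
            rows.append((bits, 1 if m in minterms else 0))
        return rows

    # f(A, B, C) = sum of minterms (1, 2, 4, 7)
    for bits, f in truth_table(3, {1, 2, 4, 7}):
        print(*bits, "|", f)
    ```

    f is 1 on exactly the listed rows and 0 everywhere else; there is nothing to “reason” about, yet the model still botches it.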

    This leads me to my first point for why I refuse to use LLMs: they unintentionally fabricate a lot of information and treat it as if it’s true. When I started using ChatGPT to fix my codebases or to do problems like this, it induced a lot of doubt in the knowledge and intelligence I’ve gathered these past years in college.

    The second reason I don’t like LLMs is the business models of these companies. To reiterate, these tech billionaires build a bubble of delusion and fearmongering to get their user base to stay. Headlines like “chatGPT-5 is terrifying” or “openAI has fired 70,000 employees over AI improvements” work because people see the title and reinvest more money into the company, and because employees have their heads so far up these tech giants’ asses that they will, of course, keep working with OpenAI. It is a fucking money-making loophole for these giants because of how many employees are far up their employers’ asses. If I ever get a job offer from OpenAI and accept it, I want my family to put me into a goddamn psych ward; that’s how much I frown on these unethical practices.

    I often joke about this with people who don’t believe it to be the case, but it’s becoming more and more a valid point about this fucked-up mess: if AI companies say they’ve fired X employees over “AI improvements”, why has this not been adopted by defense companies/contractors or other professions in industry? It’s a rhetorical question, but it leads them to a better conclusion than “X employees were fired because of AI improvements”.




  • I’ve been using Delta Chat for about a year now, and I will say I really do like it compared to Signal.

    For one thing, email encryption (yes, the fucking bedbug of the internet) is what’s being used here, it’s decentralized, and just recently (on Android) they’ve added phone-call functionality… fucking phone calls over encrypted email.

    I’ll say Signal isn’t any safer (in terms of privacy and security) than WhatsApp, and I had a revelation that all centralized messaging services aren’t any better than WhatsApp, even the proclaimed privacy-focused ones. I have two reasons for this: 1) they have the option to flick a switch and monetize their entire platform, which includes selling data to data brokers and other individuals; 2) because they are centralized, they are easier for hackers to breach and easier for governments to pull user data from.

    I’m not saying that Signal is monetizing its platform, but compared to its decentralized counterparts, it has the option to do so. With Delta Chat, they would have to build a new messaging service from the ground up if they wanted to monetize.

    My only complaint is that, since it rides on email encryption, I can’t receive SMS messages, so everyone would have to transition to Delta Chat (at least if you plan to use a chatmail server) to keep the same network as before. You can also create an account using your personal email and send messages via email.




  • I don’t fully know the entire context, but based on what’s given, it seems you enjoy math or are majoring in mathematics. I don’t think this is an overreaction on your end; that is asshole behavior.

    To clarify: you had the option to learn math, and you chose to learn math. I don’t think that’s greed or entitlement; you had the opportunity to learn math and you took it. Working comes into play when you are in dire need of money, and it seems you’re not in that situation right now.

    I once questioned my self-worth because I wasn’t working while at community college. Then I remembered: in 2023 I was doing arithmetic, and in 2025, exactly two years from when I started, I was doing multivariable calculus. Within two years I surpassed every low expectation set upon me: people thought I was going to do a trade, I graduated with an associate’s in mathematics, and I’m now doing a bachelor’s in electrical engineering because I fucking can.

    This random anon has no idea of your backstory; don’t let him get into your head and make you doubt your self-worth. If you plan on doing engineering or physics, the math you are learning right the fuck now will be applied. By the time you start working, there will be so much fucking money that you won’t care about having gone into debt. For fuck’s sake, I’m a sophomore; I could quit now and make $100,000 a year as an FPGA developer, technically speaking.




  • My god, yes. Just yesterday I stopped using DuckDuckGo, since even that has become increasingly infuriating with AI. I’m now using a search engine with no AI; it’s based on database queries, and to go to a specific website there’s a small tab you can use. I love it, because now I get to appreciate and use textbooks (whereas before I would have ChatGPT’d it) because of how limited the queries and the selection are. It’s not like Google, where it dumps the most relevant information at the top; you have to actually search for it. Anyway, if you were wondering, it’s called Marginalia Search.