


  • It’s even worse than that. It is completely unpredictable and just does what it wants. When I type “Vi”, the first choice is Visual Studio. It stays on Visual Studio until I have typed “Visual Studi”. But if I’m a fast typist and type the entirety of “Visual Studio”, it opens Visual Studio Code.

    So the fastest way to open up Code is to type “VSC”. This doesn’t work with “VS” for Visual Studio.

    I have to type “Spot” specifically to open Spotify. Typing out “Spotify” opens Edge.

    There are also files and programs it cannot find, despite their having been installed for years and my having MANUALLY added their paths to the searched directories.

    If any of you are on Windows for whatever reason and want your mind blown, try downloading a little program called Everything. It can literally find every single file on your computer as fast as you can type, and it matches exactly what you type, with support for wildcard characters and more. This is the kind of behavior I expect from my computer. Sure, make a shiny frontend for casual users who don’t need to see every single file on their system, but why do I have to go through third parties to get this experience on an OS my company paid for, when I get the same experience out of the box on any free Linux distro?
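
    For the curious, a few example Everything queries (syntax from memory, so double-check against its documentation):

    ```
    *.iso                 every .iso file on the machine
    report 2023           names containing both "report" and "2023"
    C:\Projects\ *.cpp    .cpp files anywhere under C:\Projects
    ```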



  • What you’re describing is an interface. An interface is a contract that ensures you can do something, but doesn’t care how.

    Abstract classes can have abstract functions. When you do this, you’re basically just creating a base class with an interface on top; you’re saying “all my children must implement this interface of mine” without having to actually make a separate interface.

    Abstract classes also offer additional functionality though, such as the ability to define properties, and default implementations of methods. You can even utilize the base class implementation of the method in your child class, in order to perform extra steps or format your input before you do whatever it is you were doing in the first place.

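    Here’s a minimal sketch of that base-class reuse in Java (hypothetical class names, just to illustrate the pattern):

    ```java
    // An abstract class can hold state (fields), which an interface
    // cannot, and provide a default implementation for its children.
    abstract class Logger {
        protected String prefix = "[app] ";

        // Default implementation every child inherits for free.
        public void log(String message) {
            System.out.println(prefix + message);
        }
    }

    class TimestampLogger extends Logger {
        @Override
        public void log(String message) {
            // Format the input, then reuse the base implementation:
            // the "extra steps" case described above.
            super.log(java.time.Instant.now() + " " + message);
        }
    }
    ```
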
    So, an interface is a contract that allows you to call a method, without having to know the specific class or implementation.

    Inheritance is more like “it does everything X does, but it also does Y and Z.” If you ever find yourself writing an abstract class with purely abstract methods, you probably want to write an interface instead. That way, you get all the same functionality, but it’s more loosely coupled.

    Especially when you think in “real” OOP terms:

    Abstract classes are “child is a parent”, e.g. “duck is a bird”. Bird describes all the traits that all birds have in common. But not all birds fly, so flight must come from an interface. That interface can be attached to any number of objects, and they’re not as tightly coupled, because unlike an abstract class, an interface doesn’t imply that “duck is a flight”. The interface is just something we know the duck can do.

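    A minimal sketch of that in Java (hypothetical names, purely to illustrate):

    ```java
    // "Duck is a bird" comes from inheritance; flight comes from an interface.
    interface CanFly {
        void fly();
    }

    abstract class Bird {
        // A trait all birds share.
        public abstract String call();
    }

    class Duck extends Bird implements CanFly {
        @Override
        public String call() { return "quack"; }

        @Override
        public void fly() { System.out.println("The duck flies off."); }
    }

    class Penguin extends Bird {
        // Still a bird; it simply never claims the CanFly contract.
        @Override
        public String call() { return "squawk"; }
    }

    class Airfield {
        // Loose coupling: accepts anything that can fly, bird or not,
        // without knowing the concrete class.
        static void launch(CanFly flyer) {
            flyer.fly();
        }
    }
    ```
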
    As you can probably tell, I work with OOP on a daily basis and have for years. There are a lot of valid criticisms of the OOP philosophy, and, for the record, I have heard a lot of good points. I am just laying out the OOP principles because you said you were interested, and to clear up any misconceptions.







  • You should note that this started as a Gmail feature that a bunch of email providers have since adopted, but you might want to check that your emails do indeed get delivered to plus addresses before you rush out to change your contact info everywhere. Some providers’ support is spotty, and some senders will refuse or fail to send to a plus address even when your side supports it. Using a catch-all will always work because, you know, that’s just how email works.
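
    For reference, plus addressing means anything between “+” and “@” is ignored for routing: mail to jane+shopping@example.com lands in jane@example.com’s inbox. A toy sketch of that normalization in Java (illustration only, not any provider’s actual implementation):

    ```java
    public class PlusAddress {
        // Strip the "+tag" from the local part, the way a supporting
        // provider routes plus-addressed mail. Real-world parsing of
        // email addresses is messier than this.
        public static String canonicalMailbox(String address) {
            int at = address.lastIndexOf('@');
            String local = address.substring(0, at);
            String domain = address.substring(at); // includes the '@'
            int plus = local.indexOf('+');
            if (plus >= 0) {
                local = local.substring(0, plus);
            }
            return local + domain;
        }
    }

    // canonicalMailbox("jane+shopping@example.com") -> "jane@example.com"
    ```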


  • It is definitely the exact opposite of this, even though I understand why you would think so.

    The thing with systems like these is they are mission critical, which is usually defined as failure = loss of life or significant monetary loss (like, tens of millions of dollars).

    Mission-critical software is not unit tested at all. It is proven. You take the code line by line, you prove what each line does and how it does it, and you document every possible outcome.

    Mission-critical software is ridiculously expensive to develop for this exact reason. And upgrading to deploy on different systems means you’ll be running things in a new environment, which introduces a ton of unknown factors. What happens, on a line-by-line basis, when you run this code on a faster processor? Does this chip process the commands in a slightly different order because it uses a slightly different algorithm? You don’t know until you take the new hardware, the new software, and the code, and go through the lengthy process of proving it all again, until you can document that none of it will result in any unusual train behavior.
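
    To make the contrast with unit testing concrete: a test checks a handful of sampled inputs, while a proof covers every possible input. A toy sketch in Lean 4 (nothing like real signalling code, purely to show the flavor of “proving what a line does”):

    ```lean
    -- A speed limiter. The theorem shows the output can never exceed
    -- the limit, for ALL inputs -- not just the cases a test samples.
    def clamp (limit v : Nat) : Nat :=
      if v ≤ limit then v else limit

    theorem clamp_le_limit (limit v : Nat) : clamp limit v ≤ limit := by
      unfold clamp
      split
      · assumption
      · exact Nat.le_refl limit
    ```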




  • Ctrl-X in Nano is arguably more nonsensical, considering that vi was made in an era long (decades) before many of the conventions we know today came about. They were figuring it out in real time, and the criterion was much simpler: every command must be available on all keyboards, so no fancy keys. That’s all.

    On the other hand, by the time nano settled on Ctrl+X for eXit, Apple’s Cut/Copy/Paste shortcuts (Cmd+X/C/V) had already been carried over to Windows as Ctrl+X/C/V and had become the de facto way for most Linux apps to handle these inputs, although I do think nano’s choice predates any “official” efforts to standardize these shortcuts in desktop environments.