• Tartas1995@discuss.tchncs.de · 10 months ago

    The argument is basically that it does too much: the Unix motto was essentially “do one thing and do it well”, and systemd goes against that idea.

    You might think that is silly: what is the issue with it doing many things? Arguably, it harms customization and adaptability, because you can’t run only two thirds of systemd and replace the remaining third with some highly specific optimisation for your particular use case. Additionally, again arguably, it is harder to secure because it has a larger attack surface.

    • fruitycoder@sh.itjust.works · 10 months ago

      Systemd is modular though; you don’t have to use every subsystem. The base init system and service manager is certainly very comprehensive.
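
      For instance, several of the optional pieces ship as separate units that can be switched off at runtime and replaced with alternatives. A rough sketch (unit names and defaults vary by distro, so treat this as illustrative):

      ```
      # Keep PID 1 and the service manager, but drop two optional subsystems
      systemctl disable --now systemd-resolved.service    # swap in e.g. unbound or dnsmasq
      systemctl disable --now systemd-timesyncd.service   # swap in e.g. chrony

      # See which systemd-* services are actually active on this machine
      systemctl list-units 'systemd-*' --type=service
      ```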

      • Tartas1995@discuss.tchncs.de · 10 months ago

        I tried to express my understanding of the arguments. I don’t know enough myself, and I couldn’t argue either case well enough for it to be worth adding to the conversation.

    • Kusimulkku@lemm.ee · 10 months ago

      Then again, having it do all those things can make the parts work together better, because it’s one project instead of a dozen different ones with every distro shipping a different mix.

      • Tartas1995@discuss.tchncs.de · 10 months ago

        I understand your point, and I want to make clear that my own opinion is neither for nor against systemd. I am very much neutral; I just expressed my understanding of the arguments. But I welcome the discussion.

    • BestBouclettes@jlai.lu · 10 months ago

      And funnily enough, the kernel doesn’t follow the Unix philosophy either, as far as I know.

            • Tartas1995@discuss.tchncs.de · 10 months ago

              It doesn’t seem to be much of a debate. “Microkernels are better.” “Yes, but I don’t have the time for that.” But thanks.

              • ozymandias117@lemmy.world · 10 months ago

                At a high level, microkernels push as much as possible into userspace, while monolithic kernels keep drivers in kernel space.

                There are arguments for each. For example, in a microkernel a buggy driver can’t write into another driver’s memory space as easily, but it runs at the same privilege level as userspace code. People will argue both sides of which is more secure.

                Monolithic kernels also tended to be more performant at the time, as you didn’t have to context switch between ring 0 and ring 1 on the CPU to perform driver calls, and drivers regularly share memory directly with each other.

                These days pretty much all kernels have moved to a hybrid design, as neither a truly monolithic kernel nor a true microkernel holds up well outside of theoretical debates.
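
                Linux itself is a decent illustration of that hybrid reality: on a typical system you can see both styles side by side (illustrative commands, output will obviously vary):

                ```
                lsmod | head                  # drivers loaded into kernel space (the monolithic side)
                findmnt -t fuse,fuse.sshfs    # FUSE mounts, whose driver logic runs in ordinary userspace processes
                ```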

    • whoelectroplateuntil@sh.itjust.works · 10 months ago

      Problem is, nobody’s alternative solves all of the problems people wanted their init system to solve. sysvinit didn’t handle booting or service supervision well, so it’s hard to call it a Unix-philosophy solution, and it wasn’t even part of the original Unix system; it came over a decade later, in 1981, with AT&T’s System III (later included in System V, hence the name sysvinit). There’s nothing sysvinit does well. The most popular services and distributions had simply thrown so many hours of effort at bashing their heads against sysvinit’s limitations that they had managed to make it work, but that’s different from the system overall working well.

      Anyway, people don’t like Poettering, but he made inroads with systemd in large part because he actively took notes on what people wanted and then delivered. He’s an unlikable prick, but he delivered a product that was hard for many projects to say no to. That’s why project after project adopted it: it solved problems that needed solving. That counts for more than adherence to an archaic design philosophy from the 70s that most people don’t follow anyway, and which the predecessor wasn’t even a good exemplar of.

    • Possibly linux@lemmy.zip · 10 months ago

      You can in fact run 2/3 of systemd, whatever that means. Systemd components are modular, so you can run just the base system by itself if you want to.

      Additionally, systemd just works. You really don’t need to care about the details: running something like a web server or other service is as simple as starting it, and dependencies are handled automatically.
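
      As a rough sketch of what that looks like (hypothetical unit name, binary and path, not from any particular distro):

      ```
      # /etc/systemd/system/mywebapp.service  (hypothetical example)
      [Unit]
      Description=Example web service
      After=network-online.target
      Wants=network-online.target

      [Service]
      ExecStart=/usr/local/bin/mywebapp --port 8080
      Restart=on-failure

      [Install]
      WantedBy=multi-user.target
      ```

      After a `systemctl daemon-reload`, `systemctl enable --now mywebapp.service` starts it immediately and at every boot, with the network dependency pulled in automatically.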

    • MonkderZweite@feddit.ch · 10 months ago

      More like it’s bad because of architectural decisions (an integrated init system; system state management in the same package as init and supervision) that create a lot of unneeded complexity, the number of CVEs, how the developers behave (or don’t), and the fact that you can’t have other init systems in the same repo without a fuckton of shims and wrappers.

      Sounds like valid concerns to me.

      • EyesInTheBoat@lemmy.world · 10 months ago

        That’s the problem with most things Lennart designs. They are typically 70-80 percent excellent ideas, brilliantly architected, 10-20 percent decisions we can agree to disagree on but that are still well designed, and ~10 percent horrifically bad ideas that he is unable to take criticism on, because of his standing, his terrible attitude, and the fact that ~90 percent of the ideas are good or at least acceptable.

        Another problem is that they all seem to be designed as the One True Way to do something, built to choke out any alternatives, because Lennart Knows Best.

        I’m still ambivalent about having this much extra logic and complexity attached to my init system, but that ship sailed long ago and I’m well into making lemonade at this point.

    • A_Random_Idiot@lemmy.world · 10 months ago

      Unix was also made in 1969. Computers are a tiny bit more complicated now and are expected to do slightly more than they did back then.