PM_ME_VINTAGE_30S [he/him]

Anarchist, autistic, engineer, and Certified Professional Life-Regretter. I mostly comment bricks of text with footnotes, so don’t be alarmed if you get one.

You posted something really worrying, are you okay?

No, but I’m not at risk of self-harm. I’m just waiting on the good times now.

Alt account of PM_ME_VINTAGE_30S@lemmy.sdf.org. Also if you’re reading this, it means that you can totally get around the limitations for display names and bio length by editing the JSON of your exported profile directly. Lol.

  • 0 Posts
  • 21 Comments
Joined 1 year ago
Cake day: July 9th, 2023








  • For my use cases (audio, programming, engineering school, watching crap on FreeTube), I value stability and predictability over security and shiny new stuff. In the rare cases that things break, they break in ways that are already well understood, so they usually have workarounds or solutions.

    In the few cases I do need something newer than the Debian repos provide, I just use Flatpaks or get an updated .deb from the devs of the particular software.

    So yeah, zero rush for Plasma 6 for me. It looks nice, but I’ll just be chilling on Plasma 5 until it comes out.





  • I still need that Windows partition for two reasons:

    (1) I need Windows because my audio interface uses a proprietary driver that’s only available on Windows; it simply does not perform as quickly on Linux. It’s for real-time audio recording and production, so I need absolutely every clock cycle I can spare. For that reason, a VM is out of the question for this particular application. On Linux with JACK, it uses JACK’s default USB audio driver, which is really good but not as fast as the custom driver, which ostensibly uses Focusrite’s hidden features. It’s not Linux’s fault; it’s Focusrite’s for not supporting Linux, and mine for “backing the wrong horse” about ten years ago when I bought it. Back then, Linux pro audio was simply nowhere near as developed as it is now. It is only this exact piece of hardware, which I currently cannot afford to replace, that requires me to keep any copies of Windows alive. Outside of situations like this where users are trapped, Windows sucks as an audio production operating system, whereas Linux with JACK is great.

    (2) I need the Windows partition as it is because there is some old but important work on it that I need to finish. I wasn’t very organized about where I saved my work, i.e. things are all over the place. Eventually, I’ll have to spend several hours moving the project files and effects off the drive. Since these projects were recorded on Windows, I will probably have to move all my Windows-exclusive effects to Linux. Yabridge actually does an excellent job of this, but it’s not painless.

    I’m currently in grad school for engineering, so I won’t have time to bring over my project files until at least the summer. But even then, all the compatibility layers are starting to add up on Linux. The projects I want to work on were nearly maxing out the CPU and RAM on Windows. Really, I need a hardware upgrade, but I can’t afford that for a long time.


  • I’m on Debian and that kind of stuff basically doesn’t happen. For the first couple weeks I broke stuff every once in a while because I didn’t know how Linux worked, but it’s basically been smooth sailing on all my computers for about six months.

    Contrast that with the Windows 10 install on the same laptop, which just the other day decided it doesn’t want to play anymore. I guess I ran an update the last time I touched it (like a month ago), and now it won’t boot. Debian boots perfectly. I can’t boot into Windows even in safe mode, and Automatic Startup Repair refuses to work even using both the recovery USB and the installation media. I’m probably going to have to reinstall Windows from scratch.






  • TL;DR: double-track the guitar.

    “I was also thinking that I could somehow split the guitar into two tracks by frequency and pan those differently.”

    You really don’t want to do that, because there’s nothing else to occupy the frequencies you cut on each side. Also, unless you use a Linkwitz-Riley (crossover) filter, you’re going to get some frequency-response distortion around the crossover frequency if a listener listens in mono.

    IMO your best bet, especially considering that you’re tracking distorted guitar playing power chords, is to double-track and pan hard left and right as is customary. IIRC stereo expanders work better on already stereo material, so it can’t hurt. IMO hard-panning should be enough, but if it isn’t, try dropping in a stereo expander.

    Now, I probably would double-track in this case, but in other situations it might not be necessary. I pretty much always double-track distorted guitar, even if it’s not heavy distortion. IMO the arrangement you’re describing is pretty sparse at this part of the track. If the whole song or album is wide and expansive, it might make for some good contrast to leave this part narrow.

    Check how the mix sounds in mono! This is good practice in general, but if you’re messing with the stereo width then you really need to make sure that there’s not too much phase cancellation when your track is played through mono speakers.

    I have seen short stereo delays used to approximate double-tracking (one side delayed by 15-30ms, other side dry). However, this isn’t very convincing IMO. Plus, the two sides are basically the same other than a time delay, so there’s some nasty phase cancellation when summed to mono.

    This next approach is a bit…experimental. I have gotten good results, but I don’t fully understand why it works. Proceed with caution.

    Lastly, I have experimentally gotten my best results with “live single-take double-tracking” for heavy guitars by taking a formant shift plugin and modulating the formant shift with (pseudo)random automation. All of this goes before the amp (plugin).

    My theory about why real double-tracking “works”, while simply doubling the signal or using “typical” modulation (chorus, phaser, flanger) or delay plugins doesn’t, is that the outputs of those transformations are too correlated with their inputs, so we just hear a “shittier” (transformed) version of a single signal instead of two (relatively) uncorrelated signals. If the notes played in both takes of a real double-tracked performance are musically identical and recorded using identical gear, but not mathematically identical, then the difference between the two signals is only in the transient responses, i.e. in the timbre. Practically, this reflects the fact that the takes were recorded separately, with different initial conditions and slightly different playing technique. Since timbre is determined by the formants, applying a small, randomized formant shift to one channel should make the two channels sound like at least somewhat different performances.

    I’m still perfecting sensible ranges, but so far I’ve had good results with a formant shift range around ±200 cents, switching as fast as the DAW is capable of. Because I use a formant shift and not a broadband pitch shift, the fundamental and a few higher harmonics should not be affected. Additionally, formant shift plugins are designed for exactly this purpose: shifting the timbre without shifting the fundamentals or the rest of the performance. Finally, this plays nicer in mono.

    For reasons I don’t fully understand, it works poorly for fake quad-tracking, like substantially worse than double-tracking.

    I’m looking to develop this into a (would be FOSS) plugin, but the math and coding behind pitch/formant shifting compared to other effects is like final-boss level hard, even using already-existing libraries, so it’s going to be a while before I can even begin to develop this.

    I based the above approach on how I understand Waves’ Vocal Doubler to work. I’m not 100% sure because it’s not open-source, but I think it uses a randomly modulated stereo delay.
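    To see concretely why the short-stereo-delay trick sums badly to mono (and why the mono check above matters), here’s a small numpy sketch. The numbers are hypothetical (48 kHz sample rate, 20 ms delay, white noise standing in for the guitar): summing a dry signal with a delayed copy is a comb filter, with nulls at odd multiples of 1/(2 × delay).

```python
import numpy as np

fs = 48_000                 # sample rate, Hz (assumed)
delay_s = 0.020             # one side delayed by 20 ms, other side dry
d = int(fs * delay_s)       # delay in samples

rng = np.random.default_rng(0)
x = rng.standard_normal(fs)         # 1 s of white noise stands in for the guitar

left = x
right = np.roll(x, d)               # circular delay, for a clean measurement
mono = 0.5 * (left + right)         # mono fold-down

# The fold-down is a comb filter: |H(f)| = |cos(pi * f * delay_s)|,
# with nulls at f = (2k + 1) / (2 * delay_s) -> 25 Hz, 75 Hz, 125 Hz, ...
H = np.abs(np.fft.rfft(mono) / np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)     # 1 Hz bin spacing here

null = np.argmin(np.abs(freqs - 25))        # first predicted null
peak = np.argmin(np.abs(freqs - 50))        # first predicted peak
print(f"mono gain at 25 Hz: {H[null]:.3f}")     # ~0.000: deep notch
print(f"mono gain at 50 Hz: {H[peak]:.3f}")     # ~1.000: untouched
```

    The notches march up the whole spectrum every 50 Hz, which is exactly the “nasty phase cancellation” you hear on a mono speaker.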


  • TL;DR: slap a bandpass filter on it, and consider some distortion before the filter. The footnotes are just technical details.

    Without going into too much technical detail, a bandpass filter only lets a narrow band of frequencies pass. Practically, a bandpass filter can be realized as a low pass “+” a high pass [1]. So just use your generic EQ plugin and set a high pass with a fairly high cutoff and a low pass with a fairly low cutoff [2,3]. If I had to guess, I’d set them for a passband (the range of frequencies that are allowed through) from 700 Hz to 6 kHz, possibly with a wide peak around 3 kHz, but set them to taste. This should get you like 90% of the way there.

    Based on my experience [5], chances are that in a real elevator, the speakers aren’t perfectly linear (they “suck”). Frankly, the whole speaker system is probably an afterthought, i.e. designed quick-and-dirty. What this means is that you probably want some light distortion from your favorite guitar distortion plugin, preferably something kinda “boring”, before the filter [4]. I would also investigate using a bit-reduction plugin (there are a couple free ones floating around; the average bitcrusher can usually also do bit reduction) and setting the bit depth to 8, or 4 if you really want to wreck it. For similar reasons, and because any data compression happens well before the audio is piped out to the speakers or even reaches the rest of the analog output stage, you want this before the filter too. This would simulate an elevator where the track is stored as a heavily compressed audio file. Lowering the sample-rate setting in your bitcrusher would simulate a shitty DAC (digital-to-analog converter) that doesn’t interpolate quickly enough, which I would also expect to find in an elevator circuit.

    If you have any questions about what I just wrote, or just any burning questions about audio or electrical engineering, please ask: this is what I live to do.

    [1] Mathematically, by “+” I really mean “in cascade”, i.e. one after the other (order doesn’t matter if you’re using a transparent EQ).

    [2] Your EQ plugin cascades all the individual equalizer bands under the hood, so you just need one multiband EQ.

    [3] In audio, we typically use the term “low cut” to refer to what electrical engineers call a “high-pass” filter, and similarly for high cuts. However, “low cut” really refers to any filter that dramatically cuts the low end. In an actual circuit, you would probably have a genuine high-pass, i.e. one whose frequency response (the graph you see in a parametric EQ) goes to zero (-infinity dB, the bottom of the graph). There’s also a “low shelf”, which cuts (or boosts) the low end but doesn’t drop to zero. Because the “elevator effect” isn’t intentional, and based on my electrical engineering experience and prior reading on loudspeaker design, I would not expect anyone to have intentionally or accidentally designed a low-shelf circuit, whereas it is pretty easy to accidentally get high- or low-pass responses while trying to satisfy other design criteria.

    [4] Here, the order wildly matters, because distortion is nonlinear. Having it the other way around would defeat some of the frequency attenuation.

    [5] Graduated with an associate’s degree in music technology, produced an assload of music for Redditors and a few other clients, then went back to school and graduated with a bachelor’s degree in electrical engineering. Starting my master’s next semester.
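    For concreteness, here’s a rough Python/scipy sketch of the chain described above. It’s a sketch under the guesses from the text, not gospel: the 700 Hz and 6 kHz cutoffs, the 8-bit reduction, and tanh as the “boring” distortion are all assumptions, so tweak to taste.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def elevatorize(x, fs=48_000, bits=8, drive=3.0):
    """Rough 'elevator speaker' chain: soft clip -> bit reduction -> bandpass.

    The nonlinear stages (distortion, bit reduction) go BEFORE the filter;
    filtering last cleans up the harmonics and noise they add.
    """
    # light "boring" distortion: tanh soft clipper, normalized to unity peak
    y = np.tanh(drive * x) / np.tanh(drive)

    # bit reduction: quantize to 2**bits levels (simulates crappy storage)
    levels = 2 ** (bits - 1)
    y = np.round(y * levels) / levels

    # bandpass realized as a high pass (700 Hz) cascaded with a low pass (6 kHz)
    hp = butter(2, 700, btype="highpass", fs=fs, output="sos")
    lp = butter(2, 6000, btype="lowpass", fs=fs, output="sos")
    return sosfilt(lp, sosfilt(hp, y))

# smoke test: a 100 Hz tone (well below the passband) should come out much quieter
fs = 48_000
t = np.arange(fs) / fs
low_tone = 0.5 * np.sin(2 * np.pi * 100 * t)
out = elevatorize(low_tone, fs)
print(f"RMS in: {np.sqrt(np.mean(low_tone**2)):.3f}, "
      f"RMS out: {np.sqrt(np.mean(out**2)):.3f}")
```

    In a DAW you’d do the same thing with a distortion plugin, a bitcrusher, and one multiband EQ, in that order.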


  • Actually, I tried out KDE Plasma on my grandmother’s budget laptop from about the same time. It was a little too slow with the default settings, but once I killed the animations (this can be done in the Settings app), it ran pretty well. It ran a whole hell of a lot better than the Windows it came with.

    I also tested KDE vs. XFCE on my old gaming computer, and I actually measured slightly less RAM usage in KDE than in XFCE, as long as no plugins were used.

    Both systems were tested with Debian 12. On the gaming PC, I actually used the XFCE ISO, so XFCE was installed first.

    So depending on how your distro ships the default KDE Plasma settings or how you set it up, it actually can be a lightweight option compared with XFCE.