

This guy is a gem. And this converter has been helpful for me getting out of the Kontakt ecosystem.
Love that. Old school DJ looping, but in the box. :)
In Reaper, you can tab to transient, which might be helpful for cutting things in rhythm, since the downbeat usually has a big transient.
Oh damn, this sounds rad.
Sounds especially guitar-like on those chromatic runs. :) Well done!
Same. Glad to learn these principles and apply them to digital audio nonetheless. :)
Nixon in China.
Jk lol.
Yes, Brandi has got a lovely vibrato!
Yeah, you're right about the sibilance. I believe it has to do with the large coil that the sound has to move when you're using a dynamic mic.
Yeah that would work well if I were precise enough to know when to automate the click in and out. If I’m playing free-tempo, I won’t really know when that click should begin.
I’ve never tried this! That sounds like pure vibes.
I’d imagine nostalgia is a big factor.
But otherwise, low-pass-filtered stuff generally sounds less exciting. You can read about pink noise and Brownian noise, for example, which more closely resemble natural phenomena like wind, or rain heard from inside a shelter. Pink noise is white noise that decreases 3 dB in volume each octave; Brownian noise, -6 dB/oct. It's as if you put such a low-pass filter on white noise. So music that shares a similar frequency profile is relaxing.
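Not from the comment above, just a toy numpy/scipy sketch to show those slopes: integrating white noise gives Brownian noise, and you can measure the resulting dB-per-octave rolloff from a Welch PSD estimate. Pink noise would sit between the two at roughly -3 dB/oct.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 44100
white = rng.standard_normal(2**18)
brown = np.cumsum(white)   # integrating white noise yields Brownian (red) noise
brown -= brown.mean()

def slope_db_per_octave(x):
    # Fit a line to the power spectrum (in dB) against log2(frequency),
    # away from DC and Nyquist where the estimate is unreliable
    f, psd = welch(x, fs=fs, nperseg=4096)
    band = (f > 100) & (f < 5000)
    slope, _ = np.polyfit(np.log2(f[band]), 10 * np.log10(psd[band]), 1)
    return slope

print(slope_db_per_octave(white))  # roughly 0 dB/oct (flat spectrum)
print(slope_db_per_octave(brown))  # roughly -6 dB/oct
```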
Reverb can be used for lots of purposes. As you say, it simulates the reverberations of a physical space.
Short, natural-sounding reverb can be used to blend tracks together. If two instruments need to sound like they were recorded in the same room, do it virtually.
As others have written, longer, natural reverbs can create a perception of size. You can make a vocalist sound like they sang in a concert hall or a church or a bar.
Sometimes reverb may be used to impart tone or evoke an era. A spring reverb has different cultural associations than a Lexicon. Some reverbs' modal resonances highlight certain frequencies.
Long reverbs can be used to increase the sustain of an instrument. Every ambient guitarist in the world is familiar with this.
What reverb means in a piece is for the artist and listener to interpret.
Are you talking about something like Soothe or SplitEQ? There are certain spectral effects that can remove the tonal characteristics from a sound, leaving only the nontonal aspects. E.g. on a piano, you’d only hear the unpitched, percussive hammer and key sounds.
Here’s a FOSS alternative for Soothe. https://bedroomproducersblog.com/2024/03/03/nih-plug-spectral-compressor/
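For anyone curious how that tonal/noise split can work under the hood, here's a rough sketch (my own toy version, not how Soothe or SplitEQ actually do it) using median filtering on a spectrogram: steady tones form horizontal ridges across time, percussive clicks form vertical ridges across frequency, so a soft mask can keep only the latter.

```python
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import median_filter

def percussive_only(x, fs, nperseg=1024, kernel=17):
    """Keep the noisy/percussive part of a signal, suppressing steady tones."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    mag = np.abs(Z)
    # Tonal energy is steady across time; percussive energy is broadband
    # across frequency. Median-filter along each axis to estimate both.
    harmonic = median_filter(mag, size=(1, kernel))    # smooth along time
    percussive = median_filter(mag, size=(kernel, 1))  # smooth along frequency
    # Soft mask: keep bins where percussive energy dominates
    mask = percussive**2 / (harmonic**2 + percussive**2 + 1e-12)
    _, y = istft(Z * mask, fs=fs, nperseg=nperseg)
    return y
```

Run it on a piano recording and you'd mostly hear hammers and key noise; a sustained sine gets heavily attenuated while clicks pass through.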
Suno CEO doesn’t even know what tuning an 808 means.
Based on my experience, there shouldn’t be any difference between distributing a long song vs. a short one. You may not have come across them in the wild because algorithms, like radio, tend to favor 3-4 min songs.
Pretty neat IMO (though I’m not a Bitwig user). They really seem to be upping the ante as an Ableton alternative.
Stuff like this makes me think this may one day become classic, historical gear.
Also, closer sounds tend to have more prominent transients, so you can use a transient shaper and automate the attack (if you’re working with something that’s not a sustained tone, that is).
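A bare-bones version of that transient-shaper idea (a toy sketch, not any particular plugin's algorithm; the times and boost amount are made-up defaults): run a fast and a slow envelope follower, and wherever the fast one exceeds the slow one you're on an attack, so apply gain there.

```python
import numpy as np
from scipy.signal import lfilter

def transient_shaper(x, fs, attack_db=6.0, fast_ms=1.0, slow_ms=50.0):
    """Boost attacks by up to attack_db; leave sustained portions alone."""
    def envelope(ms):
        # One-pole lowpass on the rectified signal: y[n] = (1-a)|x[n]| + a*y[n-1]
        a = np.exp(-1.0 / (fs * ms / 1000.0))
        return lfilter([1 - a], [1, -a], np.abs(x))
    fast, slow = envelope(fast_ms), envelope(slow_ms)
    # Relative amount by which the fast follower leads the slow one, clipped 0..1
    amount = np.clip((fast - slow) / (slow + 1e-9), 0.0, 1.0)
    gain = 1.0 + (10 ** (attack_db / 20.0) - 1.0) * amount
    return x * gain
```

Automating `attack_db` toward 0 as a source "moves away" would give the distance cue described above.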
Regarding your second paragraph: I think about that so much these days.
You didn’t miss once with this take.
This is how I understand what you’re saying: Over here we have the “musicians,” a title reserved for only those who play an instrument. On the other hand we have the “producers,” who make music on a computer, without an “instrument.”
But I say both make human music. And a computer can be an instrument. Drawing notes in MIDI is not much different than composing in Musescore. The producer is not unlike the classical composer, and I say both are musicians. And in a discussion on AI music vs. human music, why should we make a false dichotomy within human music anyway? “All models are wrong, but some are useful,” said George E. P. Box, statistician.
I think people have taken you as arguing in bad faith. I, and I assume many others, would agree with you that Suno AI is bad. I think AI is ethically uncouth. But your original comment seems to be making a false equivalency between AI music and sampling. I think I understand what you mean better, but I still disagree with your premise and think it’s a weak argument for fighting AI.
If we can resolve the ethical complications of AI, I agree, AI could be a net-positive, beneficial tool for learning and accessibility. Suno isn’t really that, though. It’s more like a vending machine.
Sounds to me kinda like a heavily distorted bitcrusher. That's what I'd do if I were trying to recreate it. It sounds like what I remember the stock Logic bitcrusher sounding like. But I wonder if it's not just a shitty mic overdriving a shitty preamp.
And in post, they’ve panned it moving from one side to another in the stereo field.
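Both effects are simple enough to sketch in a few lines of numpy (a toy version, not Logic's bitcrusher; the bit depth, decimation factor, and LFO rate are arbitrary): a bitcrusher is quantization plus sample-and-hold, and the side-to-side motion is an equal-power pan swept by an LFO.

```python
import numpy as np

def bitcrush(x, bits=6, downsample=8):
    # Quantize to 2**bits levels, then hold each value for `downsample` samples
    scale = 2 ** (bits - 1)
    q = np.round(x * scale) / scale
    return np.repeat(q[::downsample], downsample)[: len(x)]

def autopan(x, fs, rate_hz=0.25):
    # Equal-power pan swept left-right by a sine LFO; returns stereo (n, 2)
    lfo = np.sin(2 * np.pi * rate_hz * np.arange(len(x)) / fs)  # -1 (L) .. +1 (R)
    theta = (lfo + 1) * np.pi / 4                                # 0 .. pi/2
    return np.stack([x * np.cos(theta), x * np.sin(theta)], axis=1)

mono = np.sin(2 * np.pi * 220 * np.arange(44100) / 44100)
stereo = autopan(bitcrush(mono), 44100)
```

Driving the bitcrushed signal into a clipper afterwards would get you the "heavily distorted" part.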