Pianíssimo, piano, forte and Fortíssimo!

Big_B

Ritsu's Renegades
Defender of Defoko
Hello guys, sorry for not being online that much on the community. It's because I've been working really hard on a cover.

But I'm here today to ask something. This is a question that I asked in some Discord servers, and I'm curious about what you think. So let's go! ^^

Discord servers

Hello guys, I have a question for you (being honest, more than one... '3'). I'm curious about some new stuff I found while researching music. Have you heard about Pianíssimo, Piano, Forte and Fortíssimo?

I'm a huge fan of Mitchie M's work, so what I really want is to make more realistic covers.

UTAU will always sound robotic (yeah, I know that), but I really like to extract the full potential of something, and when I'm curious about it I'll search for anything that can help me get the result I want to achieve. For me, making an UTAU sound like Mitchie does is the best we can do with UTAU.

I was researching volume and intensity in music. I think they're always put aside in synth covers (just take any UST and you'll see that every note is set to 100%). With human singers the volume of every note changes constantly, so I went looking for everything I could find about vocal dynamics and found these concepts. They basically tell us how intense a note is: Pianíssimo - the softest note in a piece; Piano - also low intensity, but stronger than pianíssimo; Forte - an intense note; Fortíssimo - a really, really intense note.
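As a rough illustration of the idea, those markings could be mapped to UTAU note intensity values. This is just a sketch: the specific numbers below are my own arbitrary picks, not any standard (UTAU's intensity goes from 0 to 200, with 100 as the default).

```python
# Hypothetical mapping from dynamic markings to UTAU note intensity.
# UTAU intensity ranges 0-200 with 100 as the default; the values
# chosen here are illustrative, not a standard.
DYNAMICS = {
    "pp": 60,   # pianissimo - very soft
    "p": 80,    # piano - soft
    "f": 120,   # forte - loud
    "ff": 140,  # fortissimo - very loud
}

def intensity_for(marking: str) -> int:
    """Return an intensity value for a dynamic marking, defaulting to 100."""
    return DYNAMICS.get(marking, 100)

print(intensity_for("pp"))  # 60
print(intensity_for("mf"))  # 100 (unknown marking falls back to the default)
```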

So the question is: has anyone tried to mess with the notes' intensities? What results did you achieve? Do you think it can help users make more human-sounding covers?

And the last question: am I crazy for caring about this? '3'

(PS: I know that Mitchie gets his results through VocalShifter, so you don't need to mention it or Utalis. Thx for reading! ^^ <3)
 

Aeroza

Ruko's Ruffians
Defender of Defoko
I did try messing around with the notes' intensities myself, and to me it brought some good results. I guess you could say the voice had more emotion or something. However, I only messed with them when the song was slow and soft. I find it harder to mess with the notes' intensities when the song is more powerful, I guess? I don't know what I'm trying to say at this point.

I'm pretty sure messing around with the notes' intensities could bring some great results, if you know what you're doing. And no, I don't think you're crazy for caring about it. I'm always thinking about it too.
 

WinterdrivE

Ritsu's Renegades
Defender of Defoko
Dynamics in vocal synths are largely irrelevant.

The thing about vocals in popular music is that there's actually very little variation in terms of physical volume/amplitude. Vocals tend to be compressed to lessen the variation in amplitude, in fact. Because of this, what actually conveys dynamics is the tone of the vocals. Even if they're at the same physical amplitude, soft vocals don't sound the same as strong vocals because of the difference in tone.
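A toy sketch of what that compression does, assuming a very simplified peak compressor (the threshold and ratio values here are arbitrary): loud and quiet passages end up much closer in amplitude, which is exactly why amplitude alone stops carrying the dynamics.

```python
import numpy as np

def compress(signal, threshold=0.5, ratio=4.0):
    """Very simplified peak compressor: any sample above the threshold
    is scaled down by the ratio, shrinking the loud/soft amplitude gap."""
    out = signal.copy()
    over = np.abs(out) > threshold
    out[over] = np.sign(out[over]) * (threshold + (np.abs(out[over]) - threshold) / ratio)
    return out

quiet = np.full(100, 0.2)  # stand-in for a soft vocal passage
loud = np.full(100, 1.0)   # stand-in for a belted passage

print(loud.max() / quiet.max())                      # 5.0x gap before compression
print(compress(loud).max() / compress(quiet).max())  # much smaller gap after
```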

And this is why dynamics in vocal synths are irrelevant, because vocal synths, as they are currently, largely cannot imitate a full spectrum of vocal tones and dynamics. They're recorded a specific way and that's the only tone they're capable of. No amount of changing the amplitude or EQs can change that, and that's all that most vocal synths afford. (SynthV's tension parameter being an exception, but that still isn't a perfect substitute for what a human singer can achieve in terms of true dynamics)

Otherwise the next best thing is add-on and multiexpression VBs, which record multiple tones, but still can't recreate the full spectrum of tones. If human vocals are a full color wheel, multiexpression VBs are a fixed palette where you can only choose red, blue, green, or yellow and nothing in between.

All that said, will editing intensity/dynamics (volume, really) while using a vocal synth make a difference? Sure. It can help smooth things out and add a little bit of life to the vocals. Will it create human-like realism? Absolutely not. The first thing you should focus on if realism is your goal is pitch editing, because that's where the robotic-ness really comes from: even human vocals are processed to have little variation in volume, but they still sound human because they have imperfect, human pitch variation.

This applies especially to UTAU, since UTAU really has no good way to edit volume. You can change the intensity, but that only changes the whole note. You can use the envelopes, but those only give you, what, 6 points to edit? If you want intricate variations in volume, crescendos, decrescendos, etc., you're better off doing that with an automation track in your DAW. And even then, like I said, absolutely none of this will actually create a true sense of dynamics, since all you're changing is the amplitude and not the tone of the vocals.
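The DAW-automation approach above can be sketched in a few lines. This assumes the rendered vocal is just an array of samples; the gain values are arbitrary, and a real DAW lane would let you draw any curve, not just a linear ramp.

```python
import numpy as np

def apply_crescendo(samples, start_gain=0.3, end_gain=1.0):
    """Apply a linear gain ramp across the buffer, like a volume
    automation lane drawn from start_gain up to end_gain in a DAW."""
    ramp = np.linspace(start_gain, end_gain, len(samples))
    return samples * ramp

note = np.ones(1000)  # stand-in for a rendered vocal at constant amplitude
shaped = apply_crescendo(note)
print(round(shaped[0], 2), round(shaped[-1], 2))  # 0.3 1.0
```

Note that, per the point above, this only shapes amplitude: the tone of the samples is untouched, so it smooths things out without creating true dynamics.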
 

Sunny

Teto's Territory
WinterdrivE said: "Dynamics in vocal synths are largely irrelevant. [...]"

I mostly agree with what you said. But for me, dynamics are irrelevant for the original off-vocal, though not for acoustic versions, piano arrangements, etc.

I'm saying this because I'm music oriented, and I also have a bit of knowledge of piano.
 
