Hello!
So, some time ago I posted a thread saying that my UTAU, Yaemi Keisuke, didn't have emotion in her voice. Most people who answered told me it was because I spoke instead of sang in my samples.
After a bit of practice, I tried again and made a short comparison between the two. The speaking voice comes first, then the singing voice.
The old one has the following characteristics:
-Tetra-pitch (4 pitches, B3, D4, G4, C5)
-No end breaths
-Speaking voice, not voice acted
The new one will have the following characteristics (the sample is monopitch only):
-Tri-pitch powerscale (3 pitches, B3, D4, F4)
-End breaths, in and out
-Singing voice, slightly voice acted
I personally think the speaking one had more resampler-generated noise than the singing one. It also had an accent that I don't like very much.
What do you guys think? I'd love constructive criticism and opinions. I'm also open to new ideas!
Thank you for reading this, and have a good day!