Voice conversion (VC) transforms source speech into a target voice, preserving the linguistic content while replacing the source speaker's timbre with that of the target speaker. However, source-speaker timbre is inherently embedded in the content representations, causing significant timbre leakage and reducing similarity to the target speaker. To address this, we introduce a Universal Semantic Matching (USM) residual block into the content extractor. The block consists of two branches with tunable weights: a skip connection to the original content layer, and a Content Feature Re-expression (CFR) module built on a universal semantic dictionary. Each dictionary entry represents a phoneme class, computed statistically from the speech of multiple speakers, yielding a stable, speaker-independent semantic set. The CFR module produces timbre-free yet contextual content representations by expressing each content frame as a weighted linear combination of dictionary entries, with the corresponding phoneme posteriors as weights. Extensive experiments across various VC frameworks demonstrate that our approach effectively mitigates timbre leakage and significantly improves similarity to the target speaker.
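The mechanism above can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the paper's implementation: function names and the choice of per-phoneme means as dictionary statistics are hypothetical, and the residual mixing weight `alpha` stands in for the tunable branch weights.

```python
import numpy as np

# Assumed shapes: N pooled frames, T utterance frames,
# P phoneme classes, D content-feature dimensions.

def build_dictionary(features, phoneme_ids, num_phonemes):
    """Universal semantic dictionary: one speaker-independent entry per
    phoneme class, here taken as the mean content feature over frames
    pooled from multiple speakers (a simplifying assumption).
    features: (N, D); phoneme_ids: (N,) class index per frame."""
    dictionary = np.zeros((num_phonemes, features.shape[1]))
    for p in range(num_phonemes):
        mask = phoneme_ids == p
        if mask.any():
            dictionary[p] = features[mask].mean(axis=0)
    return dictionary

def cfr(posteriors, dictionary):
    """Content Feature Re-expression: each frame becomes a weighted
    linear combination of dictionary entries, weighted by its phoneme
    posteriors. posteriors: (T, P); dictionary: (P, D) -> (T, D)."""
    return posteriors @ dictionary

def usm_block(content, posteriors, dictionary, alpha=0.5):
    """USM residual block: mix the skip-connection branch (original
    content) with the CFR branch via a tunable weight."""
    return alpha * content + (1.0 - alpha) * cfr(posteriors, dictionary)
```

Note that with one-hot posteriors, `cfr` simply snaps each frame to its phoneme's dictionary entry; soft posteriors keep contextual information while discarding speaker-specific detail.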
*(Audio samples: three utterance pairs, each with Source, Target, and Conversion outputs from MLF, S-Unit, USM, and USM\*. Audio players not recoverable from the extraction.)*
*(Audio samples: three utterance pairs, each with Source, Target, and Conversion outputs from BNF, S-Unit, USM, and USM\*.)*
*(Audio samples: three utterance pairs, each with Source, Target, and Conversion outputs from BNF, S-Unit, and USM.)*
*(Audio samples: three additional utterance pairs, each with Source, Target, and Conversion outputs from BNF, S-Unit, and USM.)*