AUD-7: Audio and Speech Source Separation 3: Deep Learning
Session Type: Poster
Time: Wednesday, 9 June, 13:00 - 13:45
Location: Gather.Town
Session Chair: Minje Kim, Indiana University Bloomington |
AUD-7.1: LASAFT: LATENT SOURCE ATTENTIVE FREQUENCY TRANSFORMATION FOR CONDITIONED SOURCE SEPARATION |
Woosung Choi; Korea University |
Minseok Kim; Korea University |
Jaehwa Chung; Korea National Open University |
Soonyoung Jung; Korea University |
AUD-7.2: SURROGATE SOURCE MODEL LEARNING FOR DETERMINED SOURCE SEPARATION |
Robin Scheibler; LINE Corporation |
Masahito Togami; LINE Corporation |
AUD-7.3: AUDITORY FILTERBANKS BENEFIT UNIVERSAL SOUND SOURCE SEPARATION |
Han Li; Northwestern Polytechnical University, Technical University of Munich |
Kean Chen; Northwestern Polytechnical University |
Bernhard U. Seeber; Technical University of Munich |
AUD-7.4: WHAT'S ALL THE FUSS ABOUT FREE UNIVERSAL SOUND SEPARATION DATA? |
Scott Wisdom; Google |
Hakan Erdogan; Google |
Daniel P. W. Ellis; Google |
Romain Serizel; Université de Lorraine
Nicolas Turpault; Université de Lorraine
Eduardo Fonseca; Universitat Pompeu Fabra |
Justin Salamon; Adobe |
Prem Seetharaman; Descript |
John R. Hershey; Google |
AUD-7.5: SEPNET: A DEEP SEPARATION MATRIX PREDICTION NETWORK FOR MULTICHANNEL AUDIO SOURCE SEPARATION |
Shota Inoue; University of Tsukuba |
Hirokazu Kameoka; NTT Communication Science Laboratories |
Li Li; University of Tsukuba |
Shoji Makino; University of Tsukuba |
AUD-7.6: CDPAM: CONTRASTIVE LEARNING FOR PERCEPTUAL AUDIO SIMILARITY |
Pranay Manocha; Princeton University |
Zeyu Jin; Adobe Research |
Richard Zhang; Adobe Research |
Adam Finkelstein; Princeton University |