
Ye Zhu

Monge Tenure-Track Assistant Professor


Department of Computer Science
École Polytechnique, Institut Polytechnique de Paris (IPP)
Office 1073, Bâtiment Alan Turing
1 Rue Honoré d'Estienne d'Orves, 91120 Palaiseau, France
Email: ye[dot]zhu[at]polytechnique[dot]edu

I am a Monge tenure-track assistant professor in the Computer Science Department, and a Principal Investigator (PI) at the Computer Science Laboratory (Laboratoire d'Informatique, LIX) of École Polytechnique. My research lies in Machine Learning and Computer Vision, with a particular focus on deep generative models (e.g., diffusion models and GANs) and their applications in multimodal settings (e.g., vision, audio, and text) as well as in scientific domains (e.g., astrophysical inversions). I have been fortunate to receive several awards and recognitions from the community, including the MIT EECS Rising Stars Award in 2024.

Before joining l'X in 2025, I spent two years as a postdoctoral researcher at Princeton University, working with Prof. Olga Russakovsky. I earned my Ph.D. in Computer Science under the supervision of Prof. Yan Yan at Illinois Tech in Chicago. I also hold M.S. and B.S. degrees from Shanghai Jiao Tong University (SJTU), and received the French engineering diploma (diplôme d'ingénieur) from École Polytechnique through its dual-degree program with SJTU.

[Google Scholar]     [Twitter]     [GitHub]     [CV]

💡 We are hiring! My colleague Prof. Johannes Lutzeyer and I are co-recruiting a PhD student to start in Fall 2026 on the topic of Graph-Guided Multimodal Generation and Control. Please see PhD_Hiring_Opening for more details!

📬 For students and external collaborators with questions about my availability, please see the Contact section below.

News

09/2025: Our works Dynamic Diffusion Schrödinger Bridge for astrophysical inversions and BNMusic for acoustic noise masking via personalized music generation were accepted to NeurIPS 2025. I will be traveling to NeurIPS Mexico in December.

09/2025: I joined École Polytechnique as a Monge tenure-track assistant professor in Computer Science.

07/2025: Our work NoiseQuery for enhanced goal-driven image generation was accepted to ICCV 2025 as a Highlight paper.

02/2025: Our work D3 for scaling up deepfake detection was accepted to CVPR 2025.

01/2025: Our work on exploring magnetic fields in the interstellar medium via diffusion generative models was accepted to The Astrophysical Journal (ApJ).

Recent Publications

* denotes equal contribution. A complete list can be found on Google Scholar.

  • Dynamic Diffusion Schrödinger Bridge in Astrophysical Observational Inversions
    Ye Zhu, Duo Xu, Zhiwei Deng, Jonathan C. Tan, Olga Russakovsky.
    In Conference on Neural Information Processing Systems (NeurIPS), 2025.
    [Paper]   [Code]   [Bibtex]
  • BNMusic: Blending Environmental Noises into Personalized Music
    Chi Zuo, Martin B. Møller, Pablo Martínez-Nuevo, Huayang Huang, Yu Wu, Ye Zhu.
    In Conference on Neural Information Processing Systems (NeurIPS), 2025.
    [Paper]   [Code]   [Bibtex]
  • The Silent Assistant: NoiseQuery as Implicit Guidance for Goal-Driven Image Generation
    Ruoyu Wang, Huayang Huang, Ye Zhu, Olga Russakovsky, Yu Wu.
    In International Conference on Computer Vision (ICCV Highlight), 2025.
    [Paper]   [Code]   [Bibtex]
  • D3: Scaling Up Deepfake Detection by Learning from Discrepancy
    Yongqi Yang*, Zhihao Qian*, Ye Zhu, Olga Russakovsky, and Yu Wu.
    In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025.
    [Paper]   [Code]   [Bibtex]
  • Exploring Magnetic Fields in Molecular Clouds through Denoising Diffusion Probabilistic Models
    Duo Xu, Jenna Karcheski, Chi-Yan Law, Ye Zhu, Chia-Jung Hsu, and Jonathan Tan.
    In The Astrophysical Journal (ApJ), 2025.
    [Paper]   [Code]   [Bibtex]
  • Vision + X: A Survey on Multimodal Learning in the Light of Data
    Ye Zhu, Yu Wu, Nicu Sebe, and Yan Yan.
    In IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024.
    [Paper]   [Bibtex]
  • What is Dataset Distillation Learning?
    William Yang, Ye Zhu, Zhiwei Deng, Olga Russakovsky.
    In International Conference on Machine Learning (ICML), 2024.
    [Paper]   [Code]   [Bibtex]
  • Diffusion in Diffusion: Cyclic One-Way Diffusion for Text-Vision-Conditioned Generation
    Ruoyu Wang*, Yongqi Yang*, Zhihao Qian, Ye Zhu, and Yu Wu.
    In International Conference on Learning Representations (ICLR), 2024.
    [Paper]   [Code]   [Bibtex]
  • Surveying Image Segmentation Approaches in Astronomy
    Duo Xu, Ye Zhu.
    In Astronomy and Computing, 2024.
    [Paper]   [Bibtex]
  • Boundary Guided Learning-Free Semantic Control with Diffusion Models
    Ye Zhu, Yu Wu, Zhiwei Deng, Olga Russakovsky, and Yan Yan.
    In Conference on Neural Information Processing Systems (NeurIPS), 2023.
    [Paper]   [Code]   [Bibtex]
  • Denoising Diffusion Probabilistic Models to Predict the Density of Molecular Clouds
    Duo Xu, Jonathan Tan, Chia-Jung Hsu, and Ye Zhu.
    In The Astrophysical Journal (ApJ), 2023.
    [Paper]   [Bibtex]
  • Discrete Contrastive Diffusion for Cross-Modal Music and Image Generation
    Ye Zhu, Yu Wu, Kyle Olszewski, Jian Ren, Sergey Tulyakov, and Yan Yan.
    In International Conference on Learning Representations (ICLR), 2023.
    [Paper]   [Code]   [Bibtex]
  • Quantized GAN for Complex Music Generation from Dance Videos
    Ye Zhu, Kyle Olszewski, Yu Wu, Panos Achlioptas, Menglei Chai, Yan Yan, and Sergey Tulyakov.
    In European Conference on Computer Vision (ECCV), 2022.
    [Paper]   [Code]   [Bibtex]
  • Saying the Unseen: Video Descriptions via Dialog Agents
    Ye Zhu, Yu Wu, Yi Yang, and Yan Yan.
    In IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022.
    [Paper]   [Code]   [Bibtex]
  • Learning Audio-Visual Correlations From Variational Cross-Modal Generations
    Ye Zhu, Yu Wu, Hugo Latapie, Yi Yang, and Yan Yan.
    In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021.
    [Paper]   [Code]   [Bibtex]
  • Describing Unseen Videos via Multi-Modal Cooperative Dialog Agents
    Ye Zhu, Yu Wu, Yi Yang, and Yan Yan.
    In European Conference on Computer Vision (ECCV), 2020.
    [Paper]   [Code]   [Bibtex]

Teaching

Fall 2025
    CSC_51054_EP: Deep Learning, École Polytechnique, France

Say Hi!

Thanks for your interest in getting in touch! You can reach me at: ye[dot]zhu[at]polytechnique[dot]edu

Before reaching out, please take a moment to read the notes below to help keep our communication efficient. I receive a high volume of emails and messages, so replies may take a few days or sometimes longer, depending on my schedule. Thank you for your patience and understanding; I'll do my best to respond!

For prospective Ph.D. students:

Prof. Johannes Lutzeyer and I are co-recruiting a fully funded PhD student in the Department of Computer Science at École Polytechnique, to start in September 2026 (with the option to begin a research internship with us from Spring 2026). The application deadline is mid-January 2026; details can be found HERE!

For prospective M.S. students and research/visiting interns:

There is no need to contact me directly if you are applying to the M.S. programs at École Polytechnique (EP) or Institut Polytechnique de Paris (IPP). Please refer to the official program websites for detailed information on application procedures and deadlines. An example is the new LLGA MSC&T master's program, which I am co-directing this year.

At the moment, I have limited availability to supervise external visiting students or research interns. The best way to connect with me about research opportunities is usually through the classes I teach. Please refer to my Teaching section for more information.

For outreach and other service:

I occasionally give talks and regularly organize workshops at ML/CV venues such as NeurIPS and CVPR. Workshop talks are generally easier to accommodate if I plan to attend the conference in person. Other outreach activities depend on my availability during the semester and my teaching schedule. Please feel free to reach out with details and expectations.