Jiahan (Han) Zhang

🎓 Master's student in Computer Science
🏛️ Johns Hopkins University

About Me

I am a Master’s student in Computer Science at Johns Hopkins University. I have been fortunate to collaborate with Prof. Lei Feng at Southeast University, and with Prof. Alan Yuille and Ph.D. candidate Jieneng Chen at Johns Hopkins University.

At present, I focus on integrating richer real-world knowledge into generative models. Earlier, I worked on adversarial robustness and weakly supervised learning for large multimodal models, which laid the groundwork for my current research.

Research Interests

My research focuses on scalable world models and generative models for embodied agents, with particular interest in how such models can integrate real-world knowledge and support embodied decision-making.

If you share these interests, please feel free to contact me by email!

Selected Publications & Manuscripts

* denotes equal contribution · † denotes corresponding author

2025

World-in-World: World Models in a Closed-Loop World
Jiahan Zhang*, Muqing Jiang*, Nanru Dai, Taiming Lu, Arda Uzunoglu, Shunchi Zhang, Yana Wei, Jiahao Wang, Vishal M. Patel, Paul Pu Liang, Daniel Khashabi, Cheng Peng, Rama Chellappa, Tianmin Shu, Alan Yuille, Yilun Du, Jieneng Chen†
Under review, 2025
By grounding evaluation in embodied task success rather than visual metrics, World-in-World provides a principled yardstick and a comprehensive framework for assessing the real-world utility of generative world models in embodied settings.

Improving Generalizability and Undetectability for Targeted Adversarial Attacks on Multimodal Pre-trained Models
Zhifang Zhang*, Jiahan Zhang*, Shengjie Zhou, Qi Wei, Shuo He, Feng Liu, Lei Feng†
Under review, 2025
We propose Proxy Targeted Attack (PTA), enabling adversarial examples to generalize to semantically similar targets while remaining on-manifold to evade anomaly detection, revealing a new vulnerability in large multimodal models.

2024

Candidate Pseudolabel Learning: Enhancing Vision-Language Models by Prompt Tuning with Unlabeled Data
Jiahan Zhang*, Qi Wei*, Feng Liu, Lei Feng†
ICML 2024 · Oral (top 1.4%)
Candidate Pseudolabel Learning (CPL) fine-tunes VLMs with limited labeled data using candidate label sets and partial-label losses, achieving consistent gains over hard pseudolabeling across nine datasets and three learning paradigms.

All Publications & Manuscripts

  1. Candidate Pseudolabel Learning: Enhancing Vision-Language Models by Prompt Tuning with Unlabeled Data
    Jiahan Zhang*, Qi Wei*, Feng Liu, Lei Feng† (2024)
    ICML 2024 · Oral (top 1.4%)
    [Paper]
  2. Influence-Based Fair Selection for Sample-Discriminative Backdoor Attack
    Qi Wei, Shuo He, Jiahan Zhang, Lei Feng, Bo An† (2025)
    AAAI 2025 · Oral
    [Paper]
  3. World-in-World: World Models in a Closed-Loop World
    Jiahan Zhang*, Muqing Jiang*, Nanru Dai, Taiming Lu, Arda Uzunoglu, Shunchi Zhang, Yana Wei, Jiahao Wang, Vishal M. Patel, Paul Pu Liang, Daniel Khashabi, Cheng Peng, Rama Chellappa, Tianmin Shu, Alan Yuille, Yilun Du, Jieneng Chen† (2025)
    Under review
    [Paper]
  4. Improving Generalizability and Undetectability for Targeted Adversarial Attacks on Multimodal Pre-trained Models
    Zhifang Zhang*, Jiahan Zhang*, Shengjie Zhou, Qi Wei, Shuo He, Feng Liu, Lei Feng† (2025)
    Under review
    [Paper]
  5. EvoWorld: Evolving Panoramic World Generation with Explicit 3D Memory
    Jiahao Wang, Luoxin Ye, TaiMing Lu, Junfei Xiao, Jiahan Zhang, Yuxiang Guo, Xijun Liu, Rama Chellappa, Cheng Peng, Alan Yuille, Jieneng Chen† (2025)
    Under review
    [Paper]

Contact

jhanzhang01@gmail.com · GitHub · LinkedIn · Google Scholar

© Jiahan (Han) Zhang