Hi! I am a researcher working on visual prompt tuning for vision & language models and generative models. I earned my M.S. in Artificial Intelligence from Hanyang University, where I was fortunate to be supervised by Prof. Dong-Jin Kim.
My research interests span computer vision, vision & language, generative models, and machine learning. More specifically, my primary research focuses on structured visual prompts tailored to various domains, and further explores methods to bridge modality gaps using these prompts. My work aims to integrate fine-grained and structured representations to produce more effective, interpretable, and robust prompts that align seamlessly with multi-modal applications.
If you are interested in working with me, please feel free to drop me an email at taewhan7725@gmail.com.
Research News
[12/2024] ViPCap is accepted to AAAI 2025
[09/2024] IFCap is accepted to EMNLP 2024
[03/2022] "Is college students' trajectory associated with academic performance?" is accepted to Computers & Education
Selected Publications
ViPCap: Retrieval Text-Based Visual Prompts for Lightweight Image Captioning.
Taewhan Kim, Soeun Lee, Si-Woo Kim, Dong-Jin Kim.
AAAI, 2025

IFCap: Image-like Retrieval and Frequency-based Entity Filtering for Zero-shot Captioning.
Soeun Lee*, Si-Woo Kim*, Taewhan Kim, Dong-Jin Kim.
EMNLP, 2024

Is college students' trajectory associated with academic performance?
Hyoungjoon Lim, Soohyun Kim, Kyong-Mee Chung, Kangjae Lee, Taewhan Kim, Joon Heo.
Computers & Education, 2022