Dr. Yiling Lou (娄一翎)

Pre-Tenure Associate Professor
School of Computer Science
Fudan University
2005 Songhu Road
Shanghai, China

Dr. Yiling Lou is currently a Pre-Tenure Associate Professor in the School of Computer Science at Fudan University, China. Before joining Fudan University, she was a Postdoctoral Fellow at Purdue University, working with Prof. Lin Tan. She received her Ph.D. and B.S. degrees in Computer Science from Peking University, where she was very fortunate to work under the supervision of Prof. Lu Zhang and Prof. Dan Hao. Her research focuses on Software Engineering and its synergy with Artificial Intelligence and Programming Languages.


Our group is always recruiting undergraduate, master's, and Ph.D. students interested in SE and AI. If you are interested in working with us, please send me an email with your CV.

News

  • [Pinned] Check out our survey on LLM-based agents for software engineering! [Preprint] [GitHub]
  • [Pinned] We are organizing the second International Workshop on Large Language Models for Code (LLM4Code 2025), co-located with ICSE 2025. Looking forward to your high-quality submissions by Nov 18!
  • Our work on patch validation efficiency has been accepted by TOSEM.
  • ChatTester (LLM-based unit test generation) has been accepted to FSE 2024. Congratulations to Zhiqiang!
  • Check out the ClassEval Leaderboard for evaluating LLMs on class-level code generation.
  • ClassEval has been accepted to ICSE 2024. We are currently evaluating more recent code models on ClassEval. [Benchmark GitHub] [Hugging Face]
  • We are launching the first International Workshop on Large Language Models for Code (LLM4Code 2024) co-located with ICSE 2024. Looking forward to your submissions!
  • Our work won an ACM SIGSOFT Distinguished Paper Award at ESEC/FSE 2023.
  • Check out our manually crafted benchmark ClassEval for evaluating LLMs on class-level code generation. [Benchmark GitHub] [Hugging Face] [Preprint]
  • Check out our preprint for evaluating instruction-tuned LLMs on code comprehension tasks.
  • Two papers accepted to ESEC/FSE 2023 after major revision.
  • Three papers accepted to ASE 2023.
  • Check out our preprint on LLM-based unit test generation.
  • Two papers accepted to ESEC/FSE 2023.
  • Two papers accepted to ICSE 2023.