Dr. Yiling Lou (娄一翎)

                    Pre-tenure Associate Professor
                    School of Computer Science
                    Fudan University
                    2005 Songhu Road
                    Shanghai, China

Dr. Yiling Lou is currently a Pre-tenure Associate Professor in the School of Computer Science, Fudan University, China. Before joining Fudan University, she was a Postdoctoral Fellow at Purdue University, working with Prof. Lin Tan. She received her Ph.D. and B.S. degrees in Computer Science from Peking University, where she was fortunate to work under the supervision of Prof. Lu Zhang and Prof. Dan Hao. Her research interests mainly focus on Software Engineering and its synergy with Artificial Intelligence and Programming Languages.


Our group is always recruiting undergraduate, master's, and Ph.D. students interested in SE and AI. If you are interested in working with us, please send me an email with your CV.

News

  • [Pinned] I am co-chairing AIware 2025 (co-located with ASE 2025 in Seoul, South Korea). Looking forward to your submissions!
  • [Pinned] Check out our survey on LLM-based agents for software engineering! [Preprint] [Github]
  • Our work on human-in-the-loop patch correctness checking is accepted to OOPSLA 2025.
  • Our work on measuring fault diagnosis capabilities of tests is accepted to TOSEM.
  • Our work (INFERROI) on enhancing traditional static analysis with LLMs for resource leak detection is accepted to ICSE 2025. INFERROI extends the knowledge boundary of traditional static analysis with API specifications inferred by LLMs, and has detected previously unknown resource leaks in open-source projects. Check out our Preprint.
  • We are organizing the second International Workshop on Large Language Models for Code (LLM4Code 2025) co-located with ICSE 2025. Looking forward to your high-quality submissions by Nov 18!
  • I attended the Dagstuhl Seminar on Automated Programming and Program Repair and gave a talk on LLM-based agents for software engineering. [Slides]
  • Our work on patch validation efficiency is accepted to TOSEM.
  • ChatTester, our LLM-based unit test generation approach, is accepted to FSE 2024. Congratulations to Zhiqiang!
  • I attended Shonan Meeting No. 176 on "Foundation Models and Software Engineering: Challenges and Opportunities".
  • Check out the ClassEval Leaderboard for evaluating LLMs on class-level code generation.
  • ClassEval is accepted to ICSE 2024. We're currently working on evaluating more recent code models on ClassEval. [Benchmark Github] [Hugging Face]
  • We are launching the first International Workshop on Large Language Models for Code (LLM4Code 2024) co-located with ICSE 2024. Looking forward to your submissions!
  • Our work won ACM SIGSOFT Distinguished Paper Award at ESEC/FSE 2023.
  • Check out our manually-crafted benchmark ClassEval for evaluating LLMs on class-level code generation. [Benchmark Github] [Hugging Face] [Preprint]
  • Check out our preprint for evaluating instruction-tuned LLMs on code comprehension tasks.
  • Two papers accepted to ESEC/FSE 2023 after major revision.
  • Three papers accepted to ASE 2023.
  • Check out our preprint on LLM-based unit test generation.
  • Two papers accepted to ESEC/FSE 2023.
  • Two papers accepted to ICSE 2023.