Jung-Eun Kim

I am an assistant professor of Computer Science at North Carolina State University. I work on trustworthy, interpretable, and efficient AI/deep learning. My interests lie at the intersection of failure modes/vulnerabilities/biases and the efficiency of deep learning. I also have a background in safety-critical systems. Before joining NC State in 2022, I was an assistant professor in EECS at Syracuse University (2021–2022), and before that, an associate research scientist in Computer Science at Yale University. I received my PhD in Computer Science from the University of Illinois at Urbana-Champaign, and my BS and MS degrees in Computer Science and Engineering from Seoul National University, Seoul, Korea.

 

jung-eun.kim@ncsu.edu

Available Positions

I am looking for genuinely motivated PhD students. If you are interested in working with me in a PhD program, please contact me with your CV and describe how your research interests and experience relate to mine.

 

Research Focus of Our Group

Somebody said, “If you think you are not biased, you are dangerous.” That is true, and it is no different for learning models. We identify and understand how a neural network model can fail, be vulnerable, or be biased, so that both we and our models can be trustworthy and safe.

In our group, we care about trustworthy, interpretable, efficient, and sustainable AI/deep learning. We identify how models can go wrong and answer why. We fundamentally anatomize neural networks to understand and verify what is invariant in them and what causes failure modes, vulnerabilities, and biases. When these issues meet the efficiency considerations of deep learning, there are trade-offs: we may gain or lose something. So, like a heart surgeon, we open the heart of a neural network, look into it, interpret it, and cure it. Once we understand the anatomy of these architectures, we aim to make models sustainable and able to keep learning in the era of overwhelmingly large models. This is our mission, and we are passionate about it.

 

Media Coverage

 

Selected Honors and Awards

  • IBM Faculty Award, 2023
  • CRA Early & Mid Career Mentoring Workshop, 2023
  • Cloud GPU grant from Lambda, worth $17,280, for my course Resource-dependent Neural Networks, Spring 2023. Thank you, Lambda!
  • NeurIPS Spotlight and a nomination for Best Paper Award, 2022
  • CRA (Computing Research Association) Career Mentoring Workshop, 2022
  • NSF SaTC (Secure and Trustworthy Cyberspace): CORE: Small: Partition-Oblivious Real-Time Hierarchical Scheduling, Co-PI, National Science Foundation, 2020–2024
  • GPU Grant by NVIDIA Corporation, 2018
  • The MIT EECS Rising Stars, 2015
  • The Richard T. Cheng Endowed Fellowship, 2015 – 2016

Selected Program Committee/Panel Service

  • Publicity Chair of IJCAI 2024
  • Senior Program Committee of AAAI 2024, Safe, Robust and Responsible AI track
  • Program Committee/Reviewer of ICLR 2024-2025, NeurIPS 2023-2024, ICML 2024, AAAI 2023-2025, IJCAI 2023-2024
  • NSF review panels, 2023 (two different programs)

 

Students

I am fortunate to advise and work with brilliant students who have a vision for the future:

  • Xingli Fang
  • Varun Mulchandani
  • Vishwesh Sangarya
  • Jianwei Li
  • Rishi Singhal
  • Sai Kishore Honnavalli Ravi Shankar

Teaching

  • Trustworthy and efficient deep learning, CSC 495, Spring 2025
  • Deep learning beyond accuracy, CSC 591/791, ECE 591, Fall 2023, Fall 2024, Spring 2025
  • Resource-dependent neural networks, Spring 2023 (Cloud GPU provided by Lambda, worth $17,280)
  • Resource-/Time-dependent learning, Fall 2022

 

Publications

(* Students I advise or have advised are underlined.)

 

Patent

  • Chang-Gun Lee, Jung-Eun Kim, and Junghee Han. Sensor Deployment System for 3-Coverage. KR 10-1032998, filed Dec. 30, 2008, and issued Apr. 27, 2011.
