Jung-Eun Kim

I am an assistant professor in Computer Science at North Carolina State University. I work on trustworthy, interpretable, and efficient AI/deep learning. My interests lie at the intersection of failure modes/safety risks/vulnerabilities/biases and the efficiency of deep learning. I also have a background in safety-critical systems. Prior to joining NC State in 2022, I was an assistant professor in EECS at Syracuse University (2021–2022). Before that, I was an associate research scientist in Computer Science at Yale University. I received my PhD in Computer Science from the University of Illinois at Urbana-Champaign, and my BS and MS degrees in Computer Science and Engineering from Seoul National University, Seoul, Korea.

 

jung-eun.kim@ncsu.edu

Available Positions

I am looking for genuinely motivated PhD students. If you are interested in working with me in a PhD program, please contact me with your CV and describe how your research interests and experience relate to mine.

 

Research Focus of Our Group

In our group, we care about trustworthy, interpretable, and efficient AI/deep learning. We identify how models can go wrong and answer why. We fundamentally anatomize neural networks to understand and verify what is invariant in them and what causes failure modes/safety risks/vulnerabilities/biases. Deep learning is meant to generalize over high-dimensional problems; if you already know what you will encounter, you should program it directly instead. However, the issues above hurt the generalizability of AI/deep learning models and exacerbate their memorization behaviors. Moreover, when these issues meet efficiency considerations, there are trade-offs: we may gain or lose on either side. So, like a heart surgeon, we open the heart of a neural network architecture, look into it, interpret it, and cure it. This is our mission, and we pursue it with expertise.

 


Selected Honors and Awards

  • ICLR Spotlight, 2025
  • IBM Faculty Award, 2023
  • CRA Early & Mid Career Mentoring Workshop, 2023
  • Cloud GPU provided by Lambda, worth $17,280, for my course, Resource-dependent neural networks, Spring 2023. Thank you, Lambda!
  • NeurIPS Spotlight and a nomination for Best Paper Award, 2022
  • CRA (Computing Research Association) Career Mentoring Workshop, 2022
  • NSF SaTC (Secure and Trustworthy Cyberspace): CORE: Small: Partition-Oblivious Real-Time Hierarchical Scheduling, Co-PI, National Science Foundation, 2020–2024
  • GPU Grant by NVIDIA Corporation, 2018
  • The MIT EECS Rising Stars, 2015
  • The Richard T. Cheng Endowed Fellowship, 2015–2016

Selected Program Committee/Panel Service

  • Program Committee/Reviewer of ICLR 2024–2025, ICML 2024–2025, NeurIPS 2023–2025, AAAI 2023–2025, IJCAI 2023–2025
  • Publicity Chair of IJCAI 2024
  • Senior Program Committee of AAAI 2024, Safe, Robust and Responsible AI track
  • NSF review panelist, 2023 (two panels)

 

Students

I am fortunate to advise and work with brilliant students who have a vision for the future.

Teaching

  • Deep learning beyond accuracy, CSC 591 & 791, ECE 591, Fall 2023, Fall 2024, Spring 2025, Fall 2025
  • Trustworthy and efficient deep learning, CSC 495 & 591, Spring 2025
  • Resource-dependent neural networks, Spring 2023, Cloud GPU provided by Lambda, worth $17,280. Thank you, Lambda!
  • Resource-/Time-dependent learning, Fall 2022

 

Publications

(* Students whom I advised are underlined.)

 

Patents

  • Chang-Gun Lee, Jung-Eun Kim, and Junghee Han, “Sensor Deployment System for 3-Coverage,” KR Patent 10-1032998, filed Dec. 30, 2008, issued Apr. 27, 2011.
  • Divyang Doshi and Jung-Eun Kim, “ReffAKD: Resource-efficient Autoencoder-based Knowledge Distillation,” US patent pending.
