Hyungjun Kim received his PhD degree in 2021. He is currently a postdoctoral researcher in the Device and Integrated Circuit Engineering (DICE) Lab.
Hyungjun’s research interests center on energy-efficient deep learning accelerators. In particular, he is currently working on in-memory computing (also called compute-in-memory) schemes. He is also very interested in neural network compression techniques such as low-precision quantization.
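As a rough illustration of the low-precision quantization mentioned above, the sketch below shows two common schemes: uniform fixed-point quantization and 1-bit binarization. The function names and the per-tensor scaling choices are illustrative assumptions, not taken from Hyungjun's publications.

```python
import numpy as np

def quantize_uniform(w, bits=8):
    """Uniformly quantize weights to a signed integer grid of the given bit width."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit
    scale = np.abs(w).max() / qmax        # per-tensor scale factor (one common choice)
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale                      # dequantized approximation of w

def binarize(w):
    """1-bit quantization: keep only the sign, scaled by the mean magnitude."""
    return np.sign(w) * np.abs(w).mean()

w = np.array([-0.9, -0.2, 0.1, 0.7])
w8 = quantize_uniform(w, bits=8)          # close to w, within half a quantization step
w1 = binarize(w)                          # only two distinct values remain
```

Binary networks, as in the 8T2C SRAM accelerator work below, take the 1-bit case to its extreme, which is what makes simple in-memory charge-sharing arithmetic feasible.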
You may contact Hyungjun to discuss his research via hyungjun.kim /.at./ postech.ac.kr.
- Hyungjun’s paper entitled “Energy-efficient charge sharing-based 8T2C SRAM in-memory accelerator for binary neural networks in 28nm CMOS” has been accepted to ASSCC 2021. (To appear)
- Hyungjun gave an invited talk at the Kakao Brain Open Seminar (“Neural Network Quantization for On-device AI Applications: from 8-bit to 1-bit,” Jul. 2021).
- Hyungjun chaired a session (“Learning Models and Applications of Intelligent Systems”) at AICAS 2021.