Hyungjun Kim received his PhD degree in 2021. He is currently a postdoctoral researcher in the Device and Integrated Circuit Engineering (DICE) Lab.
Hyungjun's research focuses on energy-efficient deep learning accelerators. In particular, he is currently working on In-Memory Computing (also known as Compute-In-Memory) schemes. He is also very interested in neural network compression techniques such as low-precision quantization.
You may contact Hyungjun to discuss his research via hyungjun.kim /.at./ postech.ac.kr.
- Hyungjun chairs a session (Learning Models and Applications of Intelligent Systems) at AICAS 2021.
- Hyungjun's paper entitled "Single RRAM Cell-Based In-Memory Accelerator Architecture for Binary Neural Networks" has been accepted to AICAS 2021.
- Hyungjun's paper entitled "Improving Accuracy of Binary Neural Networks using Unbalanced Activation Distribution" has been accepted to CVPR 2021. The arXiv version can be found here.