Hyungjun Kim received his PhD degree in 2021. He is currently a postdoctoral researcher in the Device and Integrated Circuit Engineering (DICE) Lab.
Hyungjun’s research interests center on energy-efficient deep learning accelerators. In particular, he is currently working on In-Memory Computing (also known as Compute-In-Memory) schemes. He is also very interested in neural network compression techniques such as low-precision quantization.
You may contact Hyungjun to discuss his research further at hyungjun.kim /.at./ postech.ac.kr.
- Hyungjun’s paper entitled “Improving Accuracy of Binary Neural Networks using Unbalanced Activation Distribution” has been accepted to CVPR 2021. The arXiv version can be found here.
- Hyungjun’s paper entitled “Mapping Binary ResNets on Computing-In-Memory Hardware with Low-bit ADCs” has been accepted to DATE 2021.
- Hyungjun’s paper entitled “Energy-efficient XNOR-free In-Memory BNN Accelerator with Input Distribution Regularization” has been accepted to ICCAD 2020.