Joon Kim

I am a PhD student in the Machine Learning Department at Carnegie Mellon University, advised by Professor Ameet Talwalkar. I am also supported by a Graduate Research Fellowship from the Kwanjeong Educational Foundation, based in South Korea.

Prior to joining CMU, I received a Bachelor of Science degree in Computer Science from Caltech.

Email  /  CV  /  Google Scholar

Research

I am interested in methods that facilitate understanding of complex models, and in their implications for fairness and interpretability.

Publications

Efficient Topological Layer based on Persistent Landscape
Jisu Kim, Kwangho Kim, Manzil Zaheer, Joon Sik Kim, Frederic Chazal, Larry Wasserman
Pre-print, 2020.

We introduce a novel topological layer for deep neural networks based on persistent landscapes, which efficiently captures underlying topological features of the input data with stronger stability guarantees.

FACT: A Diagnostic for Group Fairness Trade-offs
Joon Sik Kim, Jiahao Chen, Ameet Talwalkar
International Conference on Machine Learning (ICML), 2020.
proceedings (tba) / blog post (tba) / slides / code

We propose a general framework for understanding and diagnosing different types of trade-offs in group fairness, deriving new incompatibility conditions and a post-processing method for fair classification.

Automated Dependence Plots
David Inouye, Leqi Liu, Joon Sik Kim, Bryon Aragam, Pradeep Ravikumar
Uncertainty in Artificial Intelligence (UAI), 2020.
(Previously on Safety and Robustness in Decision Making Workshop, NeurIPS 2019)
proceedings (tba) / code

We introduce a framework for automating the search for the partial dependence plot that best captures certain types of model behavior, encoded with customizable utility functions.

Representer Point Selection for Explaining Deep Neural Networks
Chih-kuan Yeh*, Joon Sik Kim*, Ian E.H. Yen, Pradeep Ravikumar
Neural Information Processing Systems (NeurIPS), 2018.
proceedings / blog post / poster / code

We propose a method that decomposes a deep neural network prediction into a linear combination of activation values of training points, in which the weights (called representer values) allow intuitive interpretation of the prediction.
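The flavor of such a decomposition can be illustrated with a toy linear case (this is a hypothetical sketch for intuition only, not the paper's method for deep networks): by the classical representer theorem, an L2-regularized linear model's weights lie in the span of the training points, so any test prediction splits exactly into per-training-point contributions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))   # toy training "activations" (20 points, 5 features)
y = rng.normal(size=20)        # toy regression targets
lam = 0.1                      # L2 regularization strength

# Primal ridge solution: w = (X^T X + lam I)^{-1} X^T y
w = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)

# Representer theorem: w = X^T alpha lies in the span of the training
# points, with dual coefficients alpha = (X X^T + lam I)^{-1} y
alpha = np.linalg.solve(X @ X.T + lam * np.eye(20), y)

x_test = rng.normal(size=5)
pred = w @ x_test

# Each training point i contributes alpha_i * <x_i, x_test> to the prediction
contributions = alpha * (X @ x_test)
assert np.isclose(pred, contributions.sum())
```

Sorting `contributions` by magnitude then surfaces the training points most responsible for a given prediction, which is the kind of interpretation the representer values enable.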

A Rotation Invariant Latent Factor Model for Moveme Discovery from Static Poses
Matteo Ruggero Ronchi, Joon Sik Kim, Yisong Yue
IEEE International Conference on Data Mining (ICDM), 2016.
proceedings / project page / slides / poster / code / dataset

We propose a method to discover a set of rotation-invariant 3-D basis poses that can characterize the manifold of primitive human motions, from a training set of 2-D projected poses obtained from still images taken at various camera angles.

Teaching

10737 Creative AI - Fall 2019


template taken from here