At the End of 2018

31 Dec 2018

On the last day of 2018, I'm sitting in my living room trying to finish up my first website. After many commits and much banging of my head on the table, I ended up with this beautiful Jekyll theme, based largely on the work of Vangelis Tzortzis (vangeltzo.com). Though it turned out to be slightly more challenging than expected, I'm pretty content with the result.

Given the time of year, a review of the main projects I worked on in 2018 feels like a fitting first post for the new website.

Projects

Denoising ECGs using accelerometer signals and Denoising Autoencoders

This was a year-long group project, part of my intercalated BSc degree in Biomedical Engineering. We had a fantastic group of people working together to build a prototype of an armband that could collect clean electrocardiogram (ECG) signals. The main motivation behind this work was that real-time continuous ECG monitoring is limited by the high level of electromyographic (EMG) noise arising from the underlying muscles, which can't be removed with traditional filtering.

Over the year we tested different materials and positionings for the electrodes, designed a PCB, and developed an algorithm for integrating the accelerometer and electrode signals to produce a clean ECG even during active movement. My main contribution was leading the modelling of the neural network that would act as a robust filter against EMG noise. To achieve this, I used the PyTorch framework to implement neural networks with LSTM and convolutional components. There was quite a learning curve, as my previous Python experience was minimal and I had to figure out how to train models on AWS and the university computing cluster. In the end, I created a convolutional denoising autoencoder that outperformed the state of the art in ECG denoising, as sketched below.
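To give a flavour of the approach, here is a minimal PyTorch sketch of a 1-D convolutional denoising autoencoder trained to map an EMG-corrupted ECG window onto its clean reference. The channel counts, kernel sizes, and the 512-sample window are illustrative assumptions, not the architecture used in the actual project.

```python
import torch
import torch.nn as nn

class ConvDenoisingAE(nn.Module):
    """Sketch of a 1-D convolutional denoising autoencoder.

    Input:  noisy ECG segment, shape (batch, 1, length)
    Output: denoised ECG segment of the same shape.
    Layer sizes are illustrative, not the project's actual architecture.
    """
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2, padding=4),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2, padding=4),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=9, stride=2,
                               padding=4, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=9, stride=2,
                               padding=4, output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One training step: the model sees the EMG-corrupted signal and is
# penalised against the clean reference ECG.
model = ConvDenoisingAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

noisy = torch.randn(8, 1, 512)  # stand-in for EMG-corrupted ECG windows
clean = torch.randn(8, 1, 512)  # stand-in for the clean reference ECG

optimizer.zero_grad()
loss = criterion(model(noisy), clean)
loss.backward()
optimizer.step()
```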

ReST: Decreasing the Labeling Burden by Integrating Reinforcement Learning to Self Training for Pulmonary Nodule Segmentation in Chest X-rays

The second half of the year was invested in finishing up this project, which came to life over a beer in Chicago while attending the annual meeting of RSNA (the Radiological Society of North America). At the 2017 RSNA, there was so much interest in AI that some people jokingly called the conference RSNAI. What I found frustrating was that every company showcasing at the conference seemed to be working on the exact same problem: finding nodules in lung CTs. Practically, this was understandable, as there was a well-established dataset and a potential market. However, the true potential of clinical decision support systems lies, in my opinion, in use cases that currently lack doctors. Long story short, I concluded that access to quality labels was one of the key bottlenecks to the widespread application of supervised learning models in medical imaging, and decided to work on semi-supervised learning methods.

In the ReST paper, which was presented at the ML4H workshop at NeurIPS, we explore self-training, a form of semi-supervised learning, to address the labeling burden. The main objective of the paper was to decrease the number of expert labels needed for pulmonary nodule segmentation in chest X-rays by integrating reinforcement learning into the self-training loop.
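For readers unfamiliar with self-training, the basic loop is: train on the small labeled set, pseudo-label the unlabeled pool with the model's confident predictions, and retrain on the union. Here is a schematic sketch under some assumptions: the `model` is taken to expose sklearn-style `fit`/`predict_proba` methods, pseudo-labels are picked by a fixed confidence threshold, and the reinforcement-learning component of ReST is omitted entirely.

```python
import numpy as np

def self_training(model, x_labeled, y_labeled, x_unlabeled,
                  rounds=3, confidence_threshold=0.95):
    """Schematic self-training (pseudo-labeling) loop.

    Assumes `model` exposes sklearn-style fit/predict_proba.
    The RL-based sample selection from the ReST paper is omitted;
    confident predictions are promoted by a fixed threshold instead.
    """
    x_train, y_train = x_labeled, y_labeled
    pool = x_unlabeled
    for _ in range(rounds):
        model.fit(x_train, y_train)            # 1. train on current labels
        if len(pool) == 0:
            break
        probs = model.predict_proba(pool)      # 2. score the unlabeled pool
        confidence = probs.max(axis=1)
        confident = confidence >= confidence_threshold
        pseudo_labels = probs.argmax(axis=1)
        # 3. promote confident predictions to pseudo-labels and retrain
        x_train = np.concatenate([x_train, pool[confident]])
        y_train = np.concatenate([y_train, pseudo_labels[confident]])
        pool = pool[~confident]
    return model
```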

This work was only possible because of the great support of my co-author Sejin Park and VUNO, who kindly sponsored my trip to NeurIPS. I am still very passionate about semi-supervised learning methods and will post a quick review of the current trends soon.

Travelling

I thought it would be nice to leave a record of the places I travelled in 2018, to make this feel more like a proper end-of-year review. I started the New Year in Paris with my family, then flew to Lisbon in February for a few days to enjoy the sun and meet a few friends. Over the summer holidays I went back to South Korea, and I ended the year by visiting Montreal for NeurIPS and then Copenhagen the following week with a group of friends.

Inspiration

World Models by David Ha and Jürgen Schmidhuber

I just want to end this post with a brief discussion of a paper I found quite inspirational. The World Models paper centres on the idea that we learn and make decisions based on an abstract representation of the world. This representation is an abstraction not just in the spatial sense but also in the temporal one, meaning it actively predicts future sensory input.

To handle the vast amount of information that flows through our daily lives, our brain learns an abstract representation of both spatial and temporal aspects of this information. We are able to observe a scene and remember an abstract description thereof. Evidence also suggests that what we perceive at any given moment is governed by our brain’s prediction of the future based on our internal model. [Ha, 2018]

This concept has a rather long history, especially in neurobiology, but has yet to be integrated into mainstream computational frameworks. I think the World Models approach and Numenta's HTM (Hierarchical Temporal Memory) approach are the closest to making this happen. Together with the trend of meta-learning / continual learning, which shares many overlapping ideas, I am very excited for all the new developments the new year will bring.
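To make the temporal-prediction idea concrete, here is a toy PyTorch sketch in the spirit of the "M" model from World Models: given a sequence of latent codes (produced by a separate encoder, such as a VAE) and actions, predict the latent at the next timestep. Note the actual paper uses an MDN-RNN that outputs a mixture density rather than a point estimate, and all dimensions here are made up for illustration.

```python
import torch
import torch.nn as nn

class LatentPredictor(nn.Module):
    """Toy stand-in for World Models' 'M' model: predicts the next
    latent code z_{t+1} from past latents z_t and actions a_t.
    The real paper uses an MDN-RNN (mixture density output), not
    the point-estimate regression used here.
    """
    def __init__(self, z_dim=32, action_dim=3, hidden_dim=256):
        super().__init__()
        self.rnn = nn.LSTM(z_dim + action_dim, hidden_dim, batch_first=True)
        self.to_next_z = nn.Linear(hidden_dim, z_dim)

    def forward(self, z_seq, a_seq):
        h, _ = self.rnn(torch.cat([z_seq, a_seq], dim=-1))
        return self.to_next_z(h)  # predicted latent at the next timestep

# Train by regressing each step's output onto the following latent code:
model = LatentPredictor()
z = torch.randn(4, 10, 32)  # stand-in latent sequences from an encoder
a = torch.randn(4, 10, 3)   # stand-in action sequences
pred = model(z[:, :-1], a[:, :-1])
loss = nn.functional.mse_loss(pred, z[:, 1:])
```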

