Finally not a student anymore! Now I can write “Ph.D. candidate” on my CV. That was the first thought that came to my mind after I passed the PQE. The second was that I needed to throw myself into bed for two days. Generally speaking, my PQE was far from a good one: the time management was a disaster. Luckily, the oral presentation went as expected.
This year, VIS was held in Phoenix, Arizona. As the name implies, Phoenix is a hot and dry city located in the Sonoran Desert.
This is my second year attending VIS. Unlike last year, I was able to present a conference paper at VAST this year, which was really a great experience for me. More importantly, it’s really nice to have the opportunity to learn what others are doing in this community.
This is a paper list that I summarized for my PQE.
Interpretable Machine Learning for Complex Systems - A Workshop at NIPS 2016
I have been reading papers and articles and searching for ideas for my Ph.D. Qualification Exam (PQE) for a few days. Since I am interested in the interdisciplinary field of Visualization and Machine Learning, the idea of “explainable AI” (XAI) seems promising to me. After discussing with my professor, I decided to fix the survey topic as “Visualization for Explainable Machine Learning”. This blog summarizes my understanding of the motivation, scope, and applications of XAI.
This paper was written by Amershi, a researcher at MSA who is something of a leading figure at the intersection of ML and HCI.
I am working closely on RNNs these days, trying to open the “black box” and see what an RNN learns through its hidden states and gates.
After running experiments intensively, I suddenly realized that analyzing them mathematically first might give some clues for better visualization.
There are already many good articles on the internet introducing RNNs and their variants (LSTMs, GRUs). So this is just a post for myself to summarize what I know about RNNs.
So first, what is a Recurrent Neural Network (RNN)?
In short, an RNN is a type of neural network that deals with sequence data. Classical neural networks, e.g., the Multi-layer Perceptron (MLP) or the Convolutional Neural Network (CNN), take a fixed-size input and produce a fixed-size output. Although for CNNs you can resize images of different sizes to a standard size so that the pipeline appears to handle variable-size input, the CNN itself still only accepts a fixed-size input.
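To make the contrast concrete, here is a minimal sketch (assuming PyTorch; the library choice is mine, not from the original post) showing that an MLP is tied to a fixed input size while an RNN can consume sequences of different lengths.

```python
import torch
import torch.nn as nn

# A plain MLP: the input dimension is fixed at construction time (here 10),
# so every sample must be a vector of exactly that size.
mlp = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
x_fixed = torch.randn(4, 10)          # batch of 4 vectors, each of size 10
print(mlp(x_fixed).shape)             # -> torch.Size([4, 1])

# An RNN: only the per-step feature size (here 10) is fixed; the sequence
# length can vary from call to call, because the same cell is applied at
# every time step and the hidden state carries information forward.
rnn = nn.RNN(input_size=10, hidden_size=16, batch_first=True)
for seq_len in (5, 12, 30):
    x_seq = torch.randn(4, seq_len, 10)   # 4 sequences of length seq_len
    outputs, h_n = rnn(x_seq)
    print(seq_len, outputs.shape, h_n.shape)
    # outputs: (4, seq_len, 16), one hidden vector per time step
    # h_n:     (1, 4, 16), the final hidden state of the single layer
```

The point of the sketch is simply that changing `seq_len` requires no change to the RNN itself, whereas feeding the MLP anything other than size-10 vectors would raise a shape error.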
I have been thinking of starting a blog for a long time. And this unable-to-finish-my-goals feeling has actually been making me sick.
So this post serves as a FLAG marking that I am going to start blogging “seriously”. Don’t mock my strange English; I am trying to take it seriously here!
The stimulus that actually got me started comes from Yuehan. Her suggestion of writing things down is working – it kind of saved my presentation today. Writing things down really helps me make sure I understand what I want to present, and it helps me clear my thoughts. People who write logically and clearly must have clear thoughts in mind, right?
I think I will keep this habit from now on, for clearing my thoughts, recording my ideas, and exercising my English (though I am not saying all posts will be in English :D).