
Yifan Liu

37 XueYuan Rd, Beihang University (BUAA), Beijing, P.R.China, 100191
irfan111@163.com   +86-188-1072-5200

 

EDUCATION

Beihang University (BUAA), Beijing, P.R.China — Sep. 2016 – Present

School of Automation Science and Electrical Engineering

Beihang University (BUAA), Beijing, P.R.China — Sep. 2012 – Jul. 2016

School of Automation Science and Electrical Engineering

 


HONORS AND AWARDS

  • 2016 Outstanding Graduate of Beijing
  • 2012–2014 "Zeng Xianzi" Motivational Scholarship
  • 2013–2014 Robot Competition Champion of Beihang University

PROJECTS

Emotion Classification with Data Augmentation Using Generative Adversarial Networks [pdf]
  2017.09-2018.02

  Xinyue Zhu, Yifan Liu, Zengchang Qin, and Jiahong Li

  PAKDD 2018 (oral presentation)

Classifying images with multiple class labels using only a small number of labeled examples is a difficult task, especially when the label (class) distribution is imbalanced. Emotion classification is one such example: some classes of emotions, like disgusted, are relatively rare compared to labels like happy or sad. In this paper, we propose a data augmentation method using generative adversarial networks (GAN). It can complement and complete the data manifold and find better margins between neighboring classes. Specifically, we design a framework that uses a CNN model as the classifier and a cycle-consistent adversarial network (CycleGAN) as the generator. To avoid the vanishing gradient problem, we employ the least-squares loss as the adversarial loss. We also propose several evaluation methods on three benchmark datasets to validate the GAN's performance. Empirical results show a 5%–10% increase in classification accuracy after employing the GAN-based data augmentation techniques.
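As a minimal sketch of the least-squares adversarial loss used above (the function names and the 0/1 target labels are illustrative assumptions, not taken from the paper):

```python
# Hedged sketch of the least-squares (LSGAN) adversarial loss, which
# replaces the usual cross-entropy GAN loss to mitigate vanishing
# gradients. Inputs are raw discriminator outputs for a batch.

def lsgan_d_loss(d_real, d_fake):
    """Discriminator loss: push outputs on real samples toward 1,
    outputs on generated samples toward 0."""
    real_term = sum((d - 1.0) ** 2 for d in d_real) / len(d_real)
    fake_term = sum(d ** 2 for d in d_fake) / len(d_fake)
    return 0.5 * (real_term + fake_term)

def lsgan_g_loss(d_fake):
    """Generator loss: push discriminator outputs on fakes toward 1."""
    return 0.5 * sum((d - 1.0) ** 2 for d in d_fake) / len(d_fake)
```

Unlike the log-loss, this objective penalizes confidently classified fakes quadratically, so gradients do not saturate when the discriminator is far ahead of the generator.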

Auto-painter: Cartoon Image Generation from Sketch by Using Conditional Generative Adversarial Networks [pdf] [demo] [have a try!]
  2017.04-2017.05

  Yifan Liu, Zengchang Qin, Zhenbo Luo, and Hua Wang

Recently, realistic image generation using deep neural networks has become a hot topic in machine learning and computer vision. Images can be generated at the pixel level by learning from a large collection of images. Learning to generate colorful cartoon images from black-and-white sketches is not only an interesting research problem, but also a potential application in digital entertainment. In this paper, we investigate the sketch-to-image synthesis problem using conditional generative adversarial networks (cGAN). We propose the auto-painter model, which automatically generates compatible colors for a sketch. The new model is not only capable of painting hand-drawn sketches with proper colors, but also allows users to indicate preferred colors. Experimental results on two sketch datasets show that the auto-painter performs better than existing image-to-image methods.
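Image-to-image cGAN generators of this kind are typically trained with an adversarial term plus a pixel-wise L1 term that keeps the output close to the ground-truth coloring; a minimal sketch under that assumption (the weight `lam` and function names are hypothetical, not from the paper):

```python
# Hedged sketch: generator objective of a pix2pix-style cGAN,
# combining an adversarial loss with a weighted L1 pixel loss.
# Images are represented as 2-D lists of scalar pixel values.

def l1_pixel_loss(generated, target):
    """Mean absolute pixel difference between two images."""
    flat_g = [p for row in generated for p in row]
    flat_t = [p for row in target for p in row]
    return sum(abs(g - t) for g, t in zip(flat_g, flat_t)) / len(flat_g)

def generator_loss(adv_loss, generated, target, lam=100.0):
    """Total generator objective: adversarial term + lam * L1 term."""
    return adv_loss + lam * l1_pixel_loss(generated, target)
```

The L1 term discourages blurry or off-color outputs while the adversarial term keeps the coloring locally realistic.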

Stock Volatility Prediction Using Recurrent Neural Networks with Sentiment Analysis [pdf] [code]
  2016.02-2016.10

  Yifan Liu, Zengchang Qin, Pengyu Li, and Tao Wan

  IEA/AIE 2017 (oral presentation)

In this paper, we propose a model that analyzes the sentiment of online stock forums and uses this information to predict stock volatility in the Chinese market. We have labeled the sentiment of the online financial posts and made the dataset publicly available for research. Based on a sentiment dictionary of financial terms, we develop a model to compute a sentiment score for each online post related to a particular stock. This sentiment information is represented by two sentiment indicators, which are fused with market data for stock volatility prediction using recurrent neural networks (RNNs). An empirical study shows that, compared to using an RNN alone, the model performs significantly better with the sentiment indicators.
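The dictionary-based scoring step can be sketched as a simple bag-of-words count; the word lists and the normalization here are illustrative assumptions, since the paper's actual financial dictionary and weighting are not reproduced here:

```python
# Hedged sketch: scoring one forum post against a sentiment
# dictionary of positive and negative financial terms.

POSITIVE = {"bullish", "rise", "gain"}   # hypothetical dictionary entries
NEGATIVE = {"bearish", "fall", "loss"}

def sentiment_score(post):
    """Return (#positive - #negative) / #tokens for a single post."""
    tokens = post.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / max(len(tokens), 1)
```

Per-post scores like this can then be aggregated per stock per day into the indicators that are fed into the RNN alongside market data.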

Logical Parsing from Natural Language Based on a Neural Translation Model [pdf]
  2017.04-2017.05

  Liang Li, Yifan Liu, Zengchang Qin, Pengyu Li, and Tao Wan

  PACLING 2017 (oral presentation)

Semantic parsing has emerged as a significant and powerful paradigm for natural language interfaces and question answering systems. Traditional methods of building a semantic parser rely on high-quality lexicons, hand-crafted grammars, and linguistic features, which are limited by the applied domain or representation. In this paper, we propose a general approach to learning from denotations based on a Seq2Seq model augmented with an attention mechanism. We encode the input sequence into vectors and use dynamic programming to infer candidate logical forms. We exploit the fact that similar utterances should have similar logical forms to help reduce the search space. Under our learning policy, the Seq2Seq model can learn the mappings gradually despite noise. Curriculum learning is adopted to make learning smoother. We test our method in the arithmetic domain, where our model successfully infers the correct logical forms and learns word meanings, compositionality, and operation order simultaneously.
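Learning from denotations in the arithmetic domain means searching for logical forms whose evaluation matches the observed answer; a minimal sketch of that candidate-generation step (the s-expression syntax and operator set are illustrative assumptions, and the real system searches over deeper compositions):

```python
# Hedged sketch: enumerate candidate arithmetic logical forms over a
# pair of numbers whose denotation equals the target answer.
from itertools import permutations

def candidate_forms(numbers, target):
    """Return s-expression forms (op a b) that evaluate to target."""
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    forms = []
    for a, b in permutations(numbers, 2):
        for name, fn in ops.items():
            if fn(a, b) == target:
                forms.append(f"({name} {a} {b})")
    return forms
```

Candidates consistent with the denotation then serve as (noisy) supervision targets for the Seq2Seq model, since the true logical form is never observed directly.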


SKILLS

  • Programming languages: C, Python
