The Third International Workshop on
Deep Learning for Pattern Recognition (DLPR20)
To be held in conjunction with the 25th International Conference on Pattern Recognition (ICPR 2020),
Milan, Italy
Invited Speakers
Professor Liang Zheng, Australian National University.
Topic: Do We Really Need Ground Truths to Evaluate a Model?
Abstract: Model performance evaluation is an important step in machine learning. Generally speaking,
we require the test set to include test samples and their labels, and compare the test labels with
the model's predictions. Although most datasets in academia meet this requirement,
in practical applications we can often obtain only the test data, not the labels. In this talk,
I will introduce an important and less discussed issue: auto-evaluation. Specifically, given
a labeled training set and a model, the goal is to estimate the performance of the model on
an unlabeled test set. I will present two methods proposed by my group. Both resort to
regression models designed at the dataset level, which achieve relatively good estimation accuracy.
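The abstract does not specify the regression models used, but the general idea of dataset-level auto-evaluation can be sketched as follows: fit a regression from a dataset-level statistic (here, mean classifier confidence, purely as an illustrative choice) to known accuracies over many labeled sample sets, then apply it to an unlabeled test set. All data below is simulated; this is a minimal sketch of the setup, not the speaker's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def dataset_feature(scores):
    # Illustrative dataset-level statistic: mean confidence over the whole set.
    return float(np.mean(scores))

# Simulate a collection of labeled "meta-training" sample sets in which
# the classifier's mean confidence correlates with its true accuracy.
features, accuracies = [], []
for _ in range(50):
    acc = rng.uniform(0.5, 0.95)            # true accuracy on this sample set
    scores = rng.normal(acc, 0.05, 200)     # per-sample confidences
    features.append(dataset_feature(scores))
    accuracies.append(acc)

# Regression at the dataset level: feature -> accuracy.
slope, intercept = np.polyfit(features, accuracies, 1)

# Estimate accuracy on an unlabeled test set from its feature alone.
test_scores = rng.normal(0.8, 0.05, 200)
estimated_acc = slope * dataset_feature(test_scores) + intercept
print(f"estimated accuracy: {estimated_acc:.2f}")
```

The key point the abstract makes is that the regression operates on whole datasets as training points, so no test labels are needed at evaluation time.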
Bio: Dr. Liang Zheng is a Senior Lecturer in the School of Computing,
Australian National University. He obtained both his B.S. degree (2010) and Ph.D. degree (2015) from
Tsinghua University, China. He is best known for his contributions to large-scale object
re-identification. Named one of the Top-40 Early Achievers by The Australian, Dr. Zheng received the
Outstanding PhD Thesis Award and the Wen-Tsun Wu Award from the Chinese Association of Artificial Intelligence,
and the DECRA award from the Australian Research Council. He serves as an Area Chair for CVPR 2021,
ECCV 2020, and ACM Multimedia 2020, and has organized tutorials and workshops at ECCV 2018, CVPR 2019,
and CVPR 2020. He is an Associate Editor of IEEE TCSVT.