Ting Han

Research scientist
Artificial Intelligence Research Center (AIST), Tokyo, Japan
Email: firstname.lastname@aist.go.jp

About Me

I’m interested in enabling AI systems to understand natural human communication. My PhD research focused on how natural language informs the interpretation of co-verbal gestures.

I received my PhD from Bielefeld University, advised by Prof. Dr. David Schlangen. I was also a member of the Cognitive Interaction Technology Center of Excellence (CITEC) graduate school. Previously, I earned an M.S. in computer vision and a B.S. in computer science, both from Lanzhou University in China.

Publications

  1. Enabling Robots to Draw and Tell: Towards Visually Grounded Multimodal Description Generation. Ting Han and Sina Zarrieß. The 2nd Workshop on NLG for HRI (NLG4HRI co-located with INLG2020), 15 December 2020. PDF

  2. Mandarinograd: A Chinese Collection of Winograd Schemas. Timothee Bernard and Ting Han. The 12th edition of the Language Resources and Evaluation Conference (LREC2020), 13-15 May 2020, Marseille, France. PDF
  3. Sketch Me if You Can: Towards Generating Detailed Descriptions of Object Shape by Grounding in Images and Drawings. Ting Han and Sina Zarrieß. Proceedings of the 12th International Conference on Natural Language Generation (INLG19), October 29-November 1, 2019, Tokyo, Japan. PDF
  4. Bridging the Gap between Robotic Applications and Computational Intelligence - An Overview on Domestic Robotics. Junpei Zhong, Ting Han, Ahmad Lotfi, Angelo Cangelosi, and Xiaofeng Liu. 2019 IEEE Symposium Series on Computational Intelligence (IEEE SSCI), December 6-9, 2019, Xiamen, China. PDF
  5. Learning to Describe Multimodally from Parallel Unimodal Data? A Pilot Study on Verbal and Sketched Object Descriptions. Ting Han, Sina Zarrieß, Kazunori Komatani, and David Schlangen. Proceedings of the 22nd Workshop on the Semantics and Pragmatics of Dialogue (AixDial), November 8-10, 2018, Aix-en-Provence, France. PDF
  6. Learning to Interpret and Apply Multimodal Descriptions. Ting Han. Bielefeld University, 2018.
  7. A Corpus of Natural Multimodal Spatial Scene Descriptions. Ting Han and David Schlangen. The 11th edition of the Language Resources and Evaluation Conference (LREC18), 7-12 May 2018, Miyazaki, Japan. PDF BIB
  8. Placing Objects in Gesture Space: Toward Real-Time Understanding of Spatial Descriptions. Ting Han, Casey Kennington, and David Schlangen. The thirty-second AAAI Conference on Artificial Intelligence (AAAI18), February 2-7, 2018, New Orleans, Louisiana, USA. PDF BIB
  9. Draw and Tell: Multimodal Descriptions Outperform Verbal- or Sketch-Only Descriptions in an Image Retrieving Task. Ting Han and David Schlangen. Proceedings of the 8th International Joint Conference on Natural Language Processing (IJCNLP17), Taipei, Taiwan, 2017. PDF PUB
  10. Natural Language Informs the Interpretation of Iconic Gestures: A Computational Approach. Ting Han, Julian Hough, and David Schlangen. Proceedings of the 8th International Joint Conference on Natural Language Processing (IJCNLP17), Taipei, Taiwan, 2017. PUB
  11. Temporal Alignment Using the Incremental Unit Framework. Casey Kennington, Ting Han, and David Schlangen. Proceedings of the 19th ACM International Conference on Multimodal Interaction (ICMI17), November 13-17, 2017, Glasgow, Scotland. PUB
  12. Grounding Language by Continuous Observation of Instruction Following. Ting Han and David Schlangen. Proceedings of the Annual Meeting of the European Chapter of the Association for Computational Linguistics (EACL), Valencia, 2017, pages 491-496. PUB
  13. Building and Applying Perceptually-Grounded Representations of Multimodal Scene Descriptions. Ting Han, Casey Kennington, and David Schlangen. Proceedings of the 19th SemDial Workshop on the Semantics and Pragmatics of Dialogue (goDIAL), 2015. PUB
  14. A Corpus of Virtual Pointing Gestures. Ting Han, Spyros Kousidis, and David Schlangen. Presented at the REFNET Workshop on Computational and Psychological Models of Reference Comprehension and Production, Edinburgh, 2014. [Poster] PUB
  15. Towards Automatic Understanding of ‘Virtual Pointing’ in Interaction. Ting Han, Spyros Kousidis, and David Schlangen. Proceedings of the 18th SemDial Workshop on the Semantics and Pragmatics of Dialogue (DialWatt), Posters, Heriot-Watt University, 2014, pages 188-190. [Poster] PUB
  16. A Fast Dark Channel Prior-Based Depth Map Approximation Method for Dehazing Single Images. Ting Han and Yi Wan. International Conference on Information Science and Technology (ICIST), 2013, pages 1355-1359. PUB