Publications

  1. Sketch Me if You Can: Towards Generating Detailed Descriptions of Object Shape by Grounding in Images and Drawings. Ting Han and Sina Zarrieß. Proceedings of the 12th International Conference on Natural Language Generation (INLG19), October 29 – November 1, 2019, Tokyo, Japan. PDF

  2. Bridging the Gap between Robotic Applications and Computational Intelligence - An Overview on Domestic Robotics. Junpei Zhong, Ting Han, Ahmad Lotfi, Angelo Cangelosi, and Xiaofeng Liu. 2019 IEEE Symposium Series on Computational Intelligence (IEEE SSCI), December 6–9, 2019, Xiamen, China. PDF

  3. Learning to Describe Multimodally from Parallel Unimodal Data? A Pilot Study on Verbal and Sketched Object Descriptions. Ting Han, Sina Zarrieß, Kazunori Komatani, and David Schlangen. Proceedings of the 22nd Workshop on the Semantics and Pragmatics of Dialogue (AixDial), November 8–10, 2018, Aix-en-Provence, France. PDF

  4. Learning to Interpret and Apply Multimodal Descriptions. Ting Han. PhD thesis, Bielefeld University, 2018.

  5. A Corpus of Natural Multimodal Spatial Scene Descriptions. Ting Han and David Schlangen. Proceedings of the 11th Language Resources and Evaluation Conference (LREC18), May 7–12, 2018, Miyazaki, Japan. PDF BIB

  6. Placing Objects in Gesture Space: Toward Real-Time Understanding of Spatial Descriptions. Ting Han, Casey Kennington, and David Schlangen. The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI18), February 2–7, 2018, New Orleans, Louisiana, USA. PDF BIB

  7. Draw and Tell: Multimodal Descriptions Outperform Verbal- or Sketch-Only Descriptions in an Image Retrieving Task. Ting Han and David Schlangen. Proceedings of the 8th International Joint Conference on Natural Language Processing (IJCNLP17), Taipei, Taiwan, 2017. PDF PUB

  8. Natural Language Informs the Interpretation of Iconic Gestures: A Computational Approach. Ting Han, Julian Hough, and David Schlangen. Proceedings of the 8th International Joint Conference on Natural Language Processing (IJCNLP17), Taipei, Taiwan, 2017. PUB

  9. Temporal Alignment Using the Incremental Unit Framework. Casey Kennington, Ting Han, and David Schlangen. Proceedings of the 19th ACM International Conference on Multimodal Interaction (ICMI17), November 13–17, 2017, Glasgow, Scotland. PUB

  10. Grounding Language by Continuous Observation of Instruction Following. Ting Han and David Schlangen. Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL), Valencia, Spain, pages 491–496, 2017. PUB

  11. Building and Applying Perceptually-Grounded Representations of Multimodal Scene Descriptions. Ting Han, Casey Kennington, and David Schlangen. Proceedings of the 19th SemDial Workshop on the Semantics and Pragmatics of Dialogue (goDIAL), 2015. PUB

  12. A Corpus of Virtual Pointing Gestures. Ting Han, Spyros Kousidis, and David Schlangen. Presented at the REFNET Workshop on Computational and Psychological Models of Reference Comprehension and Production, Edinburgh, 2014. [Poster] PUB

  13. Towards Automatic Understanding of ‘Virtual Pointing’ in Interaction. Ting Han, Spyros Kousidis, and David Schlangen. Proceedings of the 18th SemDial Workshop on the Semantics and Pragmatics of Dialogue (DialWatt), Posters, Heriot-Watt University, 2014, pages 188–190. [Poster] PUB

  14. A Fast Dark Channel Prior-Based Depth Map Approximation Method for Dehazing Single Images. Ting Han and Yi Wan. Proceedings of the International Conference on Information Science and Technology (ICIST), 2013, pages 1355–1359. PUB