Story Dialogue with Gestures (SDG) Corpus

If you use this data in your research, please cite: Zhichao Hu, Michelle Dick, Chung-Ning Chang, Michael Neff, Jean E. Fox Tree, and Marilyn Walker. "A Corpus of Gesture-Annotated Dialogues for Monologue-to-Dialogue Generation from Personal Narratives." In Proceedings of the Language Resources and Evaluation Conference (LREC), Portorož, Slovenia, 2016.

Overview and Data: The Story Dialogue with Gestures (SDG) Corpus contains:

  • 50 personal blog stories, with IDs from the Spinn3r ICWSM corpus
  • Human-generated dialogues for those stories
  • Human-annotated gestures for those dialogues (with videos of all the gesture forms)
  • Audio files generated with AT&T Text-to-Speech for each dialogue (voices Mike and Crystal)
  • Gesture annotations time-synched with the generated TTS audio (see the loading sketch below)
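
To make the pairing of these pieces concrete, here is a minimal Python sketch that loads one story's dialogue, gesture annotations, and TTS audio together. The directory layout and file names used here (SDG_corpus/, dialogue.txt, gestures.json, the .wav files) are hypothetical placeholders for illustration, not the actual distribution format; adjust them to match the release you download.

    import json
    from pathlib import Path

    # Hypothetical root directory; point this at the unpacked corpus.
    CORPUS_ROOT = Path("SDG_corpus")

    def load_story(story_dir):
        # Pair one story's dialogue text, gesture annotations, and TTS audio.
        # All file names below are hypothetical placeholders.
        dialogue = (story_dir / "dialogue.txt").read_text(encoding="utf-8")
        # Assuming the gesture annotations are a list of annotation records.
        gestures = json.loads((story_dir / "gestures.json").read_text(encoding="utf-8"))
        # One audio file per TTS voice (Mike and Crystal).
        audio_files = sorted(story_dir.glob("*.wav"))
        return {"id": story_dir.name, "dialogue": dialogue,
                "gestures": gestures, "audio": audio_files}

    for story_dir in sorted(CORPUS_ROOT.iterdir()):
        if story_dir.is_dir():
            story = load_story(story_dir)
            print(story["id"], "-", len(story["gestures"]), "gesture annotations")

Because the gesture annotations are time-synched to the TTS audio, each annotation record presumably carries timestamps that index into the corresponding .wav file; check the released annotation schema for the exact field names.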

You can download a sample here [click to download, 5MB].

Works that use this corpus:

  • The SDG Corpus also contains all experimental stimulus videos and Mechanical Turk results from Zhichao Hu, Marilyn A. Walker, Michael Neff, and Jean E. Fox Tree. "Storytelling Agents with Personality and Adaptivity." In Intelligent Virtual Agents: 15th International Conference (IVA), Delft, Netherlands, 2015.

We are still finalizing the corpus; please refer to this Google Spreadsheet [link] for the annotation status of all the stories.

Download: Fill out the following form to download the Story Dialogue with Gestures Corpus.

Contact: Please direct questions to Zhichao Hu: zhu [at] ucsc [dot] edu

 

User Information