There are many branches of NLP research that involve the generation of language (summarisation, MT, human-computer dialogue, application front-ends, data-to-text generation, document authoring, etc.). However, it is not always easy to identify common ground among the generation components of these application areas, which has sometimes made it difficult for generic research in 'Natural Language Generation' (NLG) to engage with them effectively. Increasingly common corpus-based approaches across these areas, and in particular in NLG itself, offer a new perspective on this situation and the opportunity to explore synergies and differences from the common grounding of corpus data.
This workshop is the fourth in an occasional series seeking to provide a forum for discussing NLG and its links with these closely related fields from a corpus-oriented perspective. The workshops have the following general aims:

- to provide a forum for reporting and discussing corpus-oriented methods for generating language;
- to foster cross-fertilisation between NLG and other fields where language is automatically generated; and
- to promote the sharing of data and methods for the purpose of system building and comparative evaluation in all language generation research.

Each of these workshops has a special theme: at the first workshop (at Corpus Linguistics 2005), the theme was the use of corpora in NLG; at the second (UCNLG+MT, at MT Summit XI in 2007), it was Language Generation and Machine Translation; at the third (UCNLG+Sum, at ACL-IJCNLP 2009), it was Language Generation and Summarisation. The special theme of the 2011 workshop is Language Generation and Evaluation: the event will showcase the latest developments in methods for evaluating computationally generated language across NLP, continue the discussion of future directions, and host an invited talk on shared-task evaluation campaigns.
Evaluation Special Theme
The past five years have seen significant changes in NLG evaluation. The field has moved from a situation in which there were no comparative evaluation results for independently developed alternative approaches to today's increasingly rich diversity of data sets, methods and results for comparative evaluation (intrinsic and extrinsic, human-assessed and automatically computed). A distinctive and critical feature of these developments has been the community-led approach to establishing tasks, datasets and evaluation methods. The aim of the special evaluation theme at UCNLG+Eval is to provide a forum for reporting cutting-edge research on evaluation, taking stock of recent developments, discussing and comparing alternative approaches to evaluation, and exploring possible directions for future development.
Details of the final workshop programme can be found on the Workshop Programme page.
Organisers

Anja Belz, University of Brighton, UK
Roger Evans, University of Brighton, UK
Albert Gatt, University of Malta, Malta
Kristina Striegnitz, Union College, USA
Programme Committee

Aoife Cahill, University of Stuttgart, Germany
Charlie Greenbacker, University of Delaware, USA
Emiel Krahmer, Tilburg University, NL
Mirella Lapata, University of Edinburgh, UK
Oliver Lemon, Heriot-Watt University, Edinburgh, UK
Daniel Marcu, ISI, University of Southern California, USA
Kathy McKeown, Columbia University, USA
Karolina Owczarzak, NIST, USA
Ehud Reiter, University of Aberdeen, UK
Important Dates

3 May 2011: Deadline for paper submissions
22 May 2011: Deadline for poster submissions
27 May 2011: Notification of acceptance (papers)
1 Jun 2011: Notification of acceptance (posters)
19 Jun 2011: Camera-ready copies due (papers and posters)
21 Jun 2011: EMNLP 2011 early registration deadline
31 Jul 2011: UCNLG+Eval workshop in Edinburgh