[Crm-sig] CFP: NeDiMAH workshop “Ontology based annotation” July 17th 2012 in connection with DH2012
Christian-Emil Ore
c.e.s.ore at iln.uio.no
Mon Apr 2 11:01:30 EEST 2012
CALL FOR PAPERS
Preconference workshop “Ontology based annotation” July 17th 2012 in
connection with DH2012 in Hamburg, Germany
The Network for Digital Methods in the Arts and Humanities (NeDiMAH),
www.nedimah.eu, is a research network running from 2011 to 2015, funded
by the European Science Foundation (ESF). The network will examine the
practice of, and evidence for, advanced ICT methods in the arts and
humanities across Europe, and disseminate its findings in a series of
outputs and publications.
The NeDiMAH WG3, Linked data and ontological methods, will organise a
half-day preconference workshop “Ontology based annotation” in
connection with the conference Digital Humanities 2012 in Hamburg.
Workshop format: short presentations of 15–20 minutes, including discussion.
Deadline for submission: April 30th. We will endeavour to decide on the
final workshop programme by May 15th.
Submission format: extended abstract, ca. 1000–1500 words
Contact address: c.e.s.ore at iln.uio.no
Presenters of accepted papers will have their workshop fees covered.
Successful contributors will also be considered for coverage of their
travel and accommodation expenses by NeDiMAH. The full papers should be
circulated before the workshop.
Motivation and background
The use of computers as tools in the study of textual material in the
humanities and cultural heritage goes back to the late 1940s, with links
back to similar methods used without computer assistance, such as word
counting in the late nineteenth century and concordances from the
fourteenth century onwards. In the sixty years of computer-assisted text
research, two traditions can be seen. One includes corpus linguistics
and the creation of digital scholarly editions, while the other is
related to museum and archival texts. In the former tradition, texts are
commonly seen as first-class objects of study, which can be examined by
the reader using aesthetic, linguistic or similar methods. In the latter
tradition, texts are seen mainly as a source of information; readings
concentrate on the content of the texts, not the form of their writing.
Typical examples are museum catalogues and historical source documents.
At the end of the 1980s the historian Manfred Thaller developed Kleio, a
simple ontological annotation system for historical texts. Later, in the
1990s, hypertext with inline links, not databases, became the tool of
choice for textual editions (Vanhoutte 2010). In the last decade the
stand-off database approach has been reintroduced, this time in the form
of ontologies (conceptual models), often expressed in the RDF formalism
to enable their use in the linked data world and the Semantic Web.
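As a concrete illustration of what such ontology-based, stand-off
annotation can look like, the following minimal sketch (in Python, using
the rdflib library; the namespaces, identifiers and text passage are
invented for the example) records an event mentioned in a source passage
as RDF triples with CIDOC CRM classes and properties:

    # Minimal sketch: a stand-off, ontology-based annotation of a text
    # passage expressed as RDF triples (identifiers are illustrative only).
    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import RDF, RDFS

    CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")  # CIDOC CRM
    EX = Namespace("http://example.org/annotation/")        # hypothetical project namespace

    g = Graph()
    g.bind("crm", CRM)
    g.bind("ex", EX)

    passage = EX["charter-42/lines-3-7"]   # stand-off pointer into the source text
    event = EX["event/coronation"]         # the event the passage is read as describing
    person = EX["person/harald"]           # a person mentioned in the passage

    g.add((passage, RDF.type, CRM["E31_Document"]))
    g.add((event, RDF.type, CRM["E5_Event"]))
    g.add((person, RDF.type, CRM["E21_Person"]))
    g.add((person, RDFS.label, Literal("Harald")))
    g.add((event, CRM["P11_had_participant"], person))
    g.add((event, CRM["P70i_is_documented_in"], passage))  # link the model back to the text

    print(g.serialize(format="turtle"))

Expressing the annotation as triples rather than inline markup keeps the
model separate from, but linked to, the source text.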
A basic assumption is that reading a text includes a process of creating
a model in the mind of the reader. Reading a novel and reading a
historical source document both result in models. These models will be
different, but both can be manifested as explicit ontologies expressed
in computer formats. The external model stored in the computer system
will differ from the one held in the mind, but it will still be a model
of the reading of the text. By manipulating the computer-based model,
new things can be learned about the text in question, or it can be
compared with other similarly treated texts.
An objective of the workshop is to shed light on the consequences of,
and experiences with, the renewed database approach in computer-assisted
textual work, based on the developments over the last decade in text
encoding as well as in ontological systems.
Short discussion papers are invited on any topic that looks at the
theory or practice of ontology-based annotation, including (but not
limited to):
• How do we create models, and what ontologies should we use?
• To what extent can new insight be gained by linking together the
models based on information from the texts?
• How do we relate models back to the source text?
• Can we manage an ontology-based annotation of a text in different
editions and translations?
• How do we model uncertainty in annotation, and multiple annotations?
• Can ontology-based annotation be combined with crowdsourcing, and does
this require special types of crowds?
Programme Committee
Øyvind Eide, King's College London, UK
Faith Lawrence, King's College London, UK
Sebastian Rahtz, University of Oxford, UK
Christian-Emil Ore, University of Oslo, Norway
Alois Pichler, University of Bergen, Norway