A DFG-funded research project led by Prof. Dr. …, launched in September, investigates the cuneiform tablets from Haft Tappeh in southwestern Iran.
This year, the AG CAA (working group on Computer Applications and Quantitative Methods in Archaeology) met on 23 and 24 September in Wilhelmshaven.
This poster was presented at the conference Der Fachaustausch Geoinformation (http://www.fachaustausch-geoinformation.de/), organized by GeoNet.MRN, to exhibit the semantic geographic information system developed in the context of the SemGIS project. The poster shows the approaches used to integrate heterogeneous data sets from different sources. These data sets can then be enriched with resources from the Semantic Web; an example of such enrichment is presented for an integrated XErleben data set. Finally, the poster illustrates the system's functionalities for querying and visualizing data, as well as the downlift of selected data into different standardized formats.
Disaster response still faces collaboration problems due to a lack of policies concerning information exchange during the response. Moreover, although plans are prepared to respond to a disaster, drills to apply them are limited and do not make it possible to determine their efficiency or their conflicts with the plans of other organizations. This paper presents a framework that allows the different organizations involved in disaster response to assess their collaboration by simulating it with an explicit representation of their knowledge. The framework is based on a multi-agent system composed of three generic agent models that represent the organizational structure of disaster response. Decision-making about response actions is performed through task decomposition and allocation, based on reasoning over ontologies, which provides an explicit trace of the design and execution of response plans. The framework aims at identifying cooperation problems and testing information-exchange strategies to support the preparation of disaster response.
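The task decomposition and allocation that the abstract describes can be sketched in miniature as follows. All agent names, capability sets, and the task structure are illustrative assumptions for this sketch, not the agent models or ontologies of the paper itself.

```python
# Minimal sketch: decompose a composite response task into subtasks and
# allocate each subtask to an agent with a matching capability.
# Agents, capabilities, and tasks are invented for illustration.

def decompose(task):
    """Split a composite task into its atomic subtasks (illustrative)."""
    return task.get("subtasks", [task])

def allocate(subtasks, agents):
    """Assign each subtask to the first agent whose capabilities cover it."""
    plan = {}
    for sub in subtasks:
        for agent, capabilities in agents.items():
            if sub["skill"] in capabilities:
                plan.setdefault(agent, []).append(sub["name"])
                break
    return plan

agents = {"fire_brigade": {"extinguish", "rescue"},
          "medical_unit": {"triage", "transport"}}
task = {"name": "building_fire",
        "subtasks": [{"name": "put_out_fire", "skill": "extinguish"},
                     {"name": "treat_victims", "skill": "triage"}]}

plan = allocate(decompose(task), agents)
print(plan)  # {'fire_brigade': ['put_out_fire'], 'medical_unit': ['treat_victims']}
```

In the paper's framework this matching is driven by ontology reasoning rather than hard-coded capability sets, which is what yields the explicit trace of plan design and execution.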
This contribution describes the a priori estimation of the accuracies achievable in bundle block adjustment, using the EMVA 1288 standard. First, numerical simulations link the centre uncertainty of the targets to the resulting object-space uncertainty. The next step links the EMVA 1288 parameters, and the grey-value uncertainty derived from them, to ellipse-detection algorithms. Finally, a stochastic model is proposed and examined on a real camera calibration.
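The chain from EMVA 1288 parameters to object-space uncertainty can be sketched as a simple variance propagation. The grey-value noise follows the EMVA 1288 linear camera model (quantization noise neglected); the centre-precision and object-space relations below are simplified illustrative assumptions, not the stochastic model of the contribution.

```python
# Hedged sketch of the propagation chain: EMVA 1288 parameters ->
# grey-value noise -> ellipse-centre uncertainty -> object space.
import math

def grey_value_sigma(gain, dark_noise_e, signal_e):
    """Grey-value noise per the EMVA 1288 linear model (quantization
    noise neglected): sigma_y = K * sqrt(sigma_d^2 + mu_e)."""
    return gain * math.sqrt(dark_noise_e**2 + signal_e)

def centre_sigma(sigma_grey, contrast, n_edge_pixels):
    """Illustrative assumption: centre precision of an ellipse fit scales
    with grey-value noise over contrast and sqrt of contributing pixels."""
    return sigma_grey / (contrast * math.sqrt(n_edge_pixels))

def object_space_sigma(sigma_img_px, pixel_pitch_mm, image_scale):
    """Scale image-space uncertainty into object space (simplified)."""
    return sigma_img_px * pixel_pitch_mm * image_scale

# Example numbers are invented for illustration only.
sigma_y = grey_value_sigma(gain=0.3, dark_noise_e=10.0, signal_e=5000.0)
sigma_c = centre_sigma(sigma_y, contrast=2000.0, n_edge_pixels=60)
sigma_obj = object_space_sigma(sigma_c, pixel_pitch_mm=0.0055, image_scale=50)
```

A full treatment would propagate these variances through the bundle adjustment's normal equations rather than through a single scale factor.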
The activities of the COSCH community, and the disciplines it represents, span the full diversity of research into cultural heritage. To achieve common goals, it was of utmost importance to reach a shared understanding of these diverse activities and disciplines. Work on the COSCH Knowledge Representation (COSCHKR) was therefore undertaken to develop a common semantic base representing the different disciplines and to facilitate communication within the Action. COSCHKR is an ontology-based inference model, guided by inference rules, that provides a semantic bridge between the interdisciplinary activities involved in non-invasive technical documentation of material cultural heritage. The model is intended to support humanities experts by recommending optimal spatial and spectral techniques. It may also be used by technology experts to compare their own solutions with those recommended by COSCHKR and to understand why they may differ.
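The rule-guided recommendation described above can be sketched as a tiny rule base mapping object properties and data needs to recording techniques. The rules, property names, and techniques here are invented for illustration; COSCHKR itself expresses such knowledge as ontology axioms and inference rules, not Python tuples.

```python
# Hedged sketch: rule-based technique recommendation in the spirit of
# COSCHKR's inference rules. All rules are illustrative assumptions.

RULES = [
    # (required object properties, data need) -> recommended technique
    ({"fine_surface_detail"}, "3D geometry", "structured-light scanning"),
    (set(), "3D geometry", "photogrammetry"),
    (set(), "spectral response", "multispectral imaging"),
]

def recommend(object_properties, data_need):
    """Return the first technique whose conditions are satisfied."""
    for required_props, need, technique in RULES:
        if need == data_need and required_props <= set(object_properties):
            return technique
    return None

print(recommend(["fine_surface_detail"], "3D geometry"))  # structured-light scanning
print(recommend([], "3D geometry"))                       # photogrammetry
```

An ontology-based model goes beyond this first-match lookup: it can explain a recommendation by exposing the axioms that fired, which is what lets technology experts see why their own solution differs.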
In this chapter we present the methods adopted for designing COSCHKR and the steps in the development of the inference model. We discuss the difficulty of maintaining a common level of understanding across the diverse disciplines during knowledge acquisition, and present the mechanisms and methods used to collect, structure, and align information in order to formulate the axioms and theorems of the model. The design and development of COSCHKR followed an iterative procedure in which the gathered knowledge was first verified with the group of experts before being processed. This verification mechanism was important for the reliability of the model, ensuring technical consistency. The chapter highlights the importance of these iterative mechanisms in validating the knowledge gathered and the information populated in the knowledge base.