At the 184th DVW seminar "Terrestrisches Laserscanning 2019 (TLS 2019)" in Fulda in early December, Denise Becker presented her master's thesis. Among the…
In mid-November, at LBS 2019, the 15th Conference on Location Based Services, in Vienna, Timo Homburg presented the results of a study on the topic of "…
Disaster management requires both individual and collaborative preparedness among the various stakeholders.
Collaborative exercises aim to train stakeholders in applying the prepared plans and to identify potential problems and areas for improvement. Because these exercises are costly, computer simulation is an attractive tool for evaluating preparedness across a wider variety of contexts.
However, research on simulation in disaster management typically focuses on a particular problem rather than on the overall assessment of the prepared plans. This limitation stems from the challenge of creating a simulation model that can represent, and adapt to, a wide variety of plans from various disciplines.
The work presented in this paper addresses this challenge by adapting the simulation model on the basis of disaster management information and plans integrated into a knowledge base. The resulting simulation model is then automatically configured to run simulation experiments aimed at improving the action plans.
The results of the experiments are analyzed in order to generate new knowledge and know-how to enrich disaster management plans in a virtuous cycle.
This paper presents a proof of concept based on the French national Novi plan, for which simulation experiments made it possible to assess the impact of the distribution of doctors on the application of the plan and to identify a suitable distribution.
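The abstract gives no model details, but the kind of experiment described can be illustrated with a deliberately simplified sketch: compare two distributions of doctors by the time until the slowest site has treated all of its victims. All site names, victim counts, and the treatment rate below are invented for illustration and are not taken from the paper.

```python
def simulate_response(doctors_per_site, victims_per_site, treat_rate=4):
    """Toy evaluation of a doctor distribution: minutes until the slowest
    site has treated all of its victims. `treat_rate` is an invented figure
    of victims treated per doctor and hour."""
    worst_hours = 0.0
    for site, victims in victims_per_site.items():
        doctors = doctors_per_site.get(site, 0)
        if doctors == 0:
            return float("inf")  # an uncovered site means the plan fails
        worst_hours = max(worst_hours, victims / (doctors * treat_rate))
    return worst_hours * 60

# Invented scenario: three sites, twelve doctors, two candidate distributions.
victims = {"north": 40, "south": 25, "station": 60}
even_split = {"north": 4, "south": 4, "station": 4}
weighted = {"north": 4, "south": 2, "station": 6}
print(simulate_response(even_split, victims))  # 225.0
print(simulate_response(weighted, victims))    # 187.5
```

Sweeping many such candidate distributions automatically is, in spirit, what the simulation experiments driven by the knowledge base can do at scale.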
The spatial distances between the centers of reflective spheres are determined with high precision using a theodolite-based industrial measuring system. The theodolite eyepieces are each replaced by an adapter optic including an industrial camera. For the first time, an automated workflow based exclusively on machine vision methods is used, in which the target point definition and the individual theodolite pointings are successfully decoupled from each other, similar to the automatic target recognition of geodetic tachymeters towards survey reflectors.
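The abstract does not spell out the underlying geometry, but the way spatial distances can follow from theodolite pointings may be illustrated with a minimal planar forward-intersection sketch; the station layout and the angle convention are assumptions made for this example, not the paper's actual setup.

```python
import math

def intersect(baseline, alpha, beta):
    """Planar forward intersection: theodolites at (0, 0) and (baseline, 0)
    observe the same target under horizontal angles alpha and beta, each
    measured from the baseline (radians). Returns the target point (x, y)."""
    gamma = math.pi - alpha - beta                        # angle at the target
    dist_a = baseline * math.sin(beta) / math.sin(gamma)  # sine rule
    return dist_a * math.cos(alpha), dist_a * math.sin(alpha)

def center_distance(baseline, obs1, obs2):
    """Distance between two intersected points, e.g. two sphere centers,
    each given as a pair of angles (alpha, beta)."""
    x1, y1 = intersect(baseline, *obs1)
    x2, y2 = intersect(baseline, *obs2)
    return math.hypot(x2 - x1, y2 - y1)
```

With a 4 m baseline, two targets observed at the angle pairs for the points (1, 1) and (3, 1) come out exactly 2 m apart, which is a convenient self-check for the sine-rule step.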
3D and spectral digital recording of cultural heritage monuments is a common activity in their documentation, preservation, conservation management, and reconstruction. Recent developments in 3D and spectral technologies provide enough flexibility to select one technology over another, depending on the content and quality demands of the data application. Each technology has its own pros and cons and is perfectly suited to some situations and not to others. These trade-offs are mostly unknown to humanities experts, who in addition often have only a limited understanding of the data requirements implied by the research question; such decisions are therefore often left to technical experts, who in turn have a limited understanding of cultural heritage requirements. A common point of view has to be reached through interdisciplinary discussion, and such agreements need to be documented for future reference and re-use. We present a method based on semantic concepts that not only documents the semantic essence of such discussions, but also uses it to infer a guidance mechanism that recommends technologies and technical processes for generating the required data based on individual needs. Experts' knowledge is represented explicitly through a knowledge representation that allows machines to manage it and to infer recommendations. First, descriptive semantics guide end users in selecting the optimal technology or technologies for recording the data. Second, structured knowledge controls the processing chain that extracts and classifies the objects contained in the acquired data. Circumstantial conditions during object recording and the behaviour of the technologies under those conditions are taken into account. We explain the approach as such and give results from tests on a cultural heritage object.
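As a rough illustration of such a guidance mechanism (not the paper's actual knowledge representation), a recommendation rule can be reduced to checking whether a technology's capabilities cover the properties an end user requires. All technology names and property labels below are invented for the sketch.

```python
# Hypothetical, much-simplified knowledge base: each recording technology
# with the data properties it can deliver.
TECHNOLOGIES = {
    "terrestrial laser scanning": {"geometry_3d", "large_objects", "outdoor"},
    "structured light scanning": {"geometry_3d", "high_resolution", "small_objects"},
    "multispectral imaging": {"spectral", "pigment_analysis"},
}

def recommend(required_properties):
    """Return all technologies whose capabilities cover every required property."""
    required = set(required_properties)
    return sorted(name for name, capabilities in TECHNOLOGIES.items()
                  if required <= capabilities)

print(recommend({"geometry_3d", "high_resolution"}))
# ['structured light scanning']
```

In the paper's approach, the same kind of inference is drawn from an explicit, machine-readable knowledge representation rather than a hard-coded table.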
In the domain of computer vision, object recognition aims at detecting and classifying objects in data sets. Model-driven approaches are typically constrained by their focus on a specific type of data, a context (indoor, outdoor), or a set of objects. Machine-learning approaches are more flexible but also constrained, as they need annotated data sets to train the learning process; this becomes a problem when such data is not available because of the specialized nature of the application field, as in archaeology, for example. In order to overcome such constraints, we present a fully semantics-guided approach. The role of semantics is to express all relevant knowledge about the representation of the objects inside the data sets and about the algorithms that address this representation. In addition, the approach contains a learning stage, since it adapts the processing to the diversity of the objects and the characteristics of the data. The semantics are expressed via an ontological model and use standard web technology such as SPARQL queries, providing great flexibility. The ontological model describes the objects, the data, and the algorithms; it allows algorithms adapted to the data and objects to be selected and executed dynamically. Similarly, processing results are classified dynamically and enrich the ontological model via SPARQL CONSTRUCT queries. The semantics formulated through SPARQL also act as a bridge between the knowledge contained in the ontological model and the processing branch that executes the algorithms. This makes it possible to adapt the sequence of algorithms to the individual state of the processing chain and makes the solution robust and flexible. A comparison of this approach with others on the same use case shows its efficiency and the improvement it brings.
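The dynamic selection of algorithms against an evolving processing state can be sketched as follows. This stands in for the ontological model and the SPARQL machinery with a plain rule table; all rule preconditions and algorithm names are hypothetical and chosen only to show the mechanism.

```python
# Illustrative stand-in for the ontological model: rules mapping observed
# data characteristics to the next processing algorithm.
RULES = [
    ({"point_cloud", "noisy"}, "statistical_outlier_removal"),
    ({"point_cloud", "clean"}, "region_growing_segmentation"),
    ({"segments"}, "shape_classification"),
]

def next_algorithm(state):
    """Select the first algorithm whose preconditions match the current state,
    mimicking how a SPARQL query would select against the ontology."""
    for preconditions, algorithm in RULES:
        if preconditions <= state:
            return algorithm
    return None  # no applicable algorithm: the chain terminates

# The processing chain adapts to the evolving state of the data:
print(next_algorithm({"point_cloud", "noisy"}))  # statistical_outlier_removal
print(next_algorithm({"point_cloud", "clean"}))  # region_growing_segmentation
```

In the actual approach, the results of each executed algorithm are classified and written back into the ontological model (via SPARQL CONSTRUCT), so the "state" queried at each step is itself part of the knowledge base.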
In this publication we introduce a linked-data-powered application that assists users in finding so-called Stolpersteine, stones commemorating victims of Nazi persecution. We show the feasibility of a dedicated location-based service implemented as an app using linked data resources, and we evaluate this approach against local data sources gathered by communities to find out whether the current linked data environment can equally and/or sufficiently support an application in this knowledge domain.
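A core ingredient of such a location-based service is ranking the stones by distance from the user's position. A minimal sketch of that step is shown below; the sample coordinates and labels are invented, and a real app would obtain them from a linked data endpoint rather than a hard-coded list.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 coordinates."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest(user_lat, user_lon, stones):
    """stones: list of (label, lat, lon) tuples, e.g. from a SPARQL result."""
    return min(stones, key=lambda s: haversine_m(user_lat, user_lon, s[1], s[2]))

# Invented sample data:
stones = [("Example stone 1", 50.0005, 8.2735), ("Example stone 2", 50.0102, 8.2601)]
print(nearest(50.0, 8.27, stones)[0])  # Example stone 1
```

Sorting by the same distance function instead of taking the minimum yields the ranked list a user would see in the app.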