Thomas Kaleske
In 2015, Kuehne + Nagel started evaluating Semantic Web technologies based on a concrete use case from contract logistics. The project results were presented at The Open Group London Event and Member Meeting, 25-28 April 2016.
Clemens Wass
The presentation will focus on the EU project openlaws.eu, co-funded by the European Commission (DG Justice). The aim of the project is to create a vision of what "Big Open Legal Data" can do in the future and to develop a prototype of a platform that links multiple legal open data sources.
Christian Dirschl
The talk is based on tasks carried out within the ALIGNED project. In this phase, we collected requirements based on real-world use cases, such as the need for better data curation capabilities. The talk will present the project scope and the objectives of Wolters Kluwer as one of the industry use case partners.
Andreas Blumauer
While the term 'semantic search' has become a buzzword on the search market, the concepts behind it remain unclear to most end users. We present a rating system that helps to better understand and classify the different forms of semantic search.
Raffaele Palmieri, Vincenzo Orabona
As is well known, Semantic Web technologies provide a set of facilities for enabling interoperability among software agents on the Web, offering a common framework that allows data to be shared and reused across applications. On the other hand, the related data formats (such as XML and RDF) constitute a suitable means to represent, in a machine-understandable way, the knowledge contained in the great amount of semi-structured or unstructured documents accessible on the Web itself. Following the Semantic Web vision, the latest generation of Content Management Systems (CMS) focuses on data (the information embedded in a document) rather than content (the document itself), thus shifting from a “content centric” approach to a “data centric” one. To this end, these systems incorporate semantic annotation modules in order to derive useful information from the managed content and deal with its semantics, leveraging the Linked Data paradigm to relate the extracted concepts to available external knowledge (often encoded as vocabularies, taxonomies or ontologies), depending on the considered application scenario. In this presentation we describe the design and development of a novel Semantic Content Management System, representing our solution to the content management processing problem. In particular, we provide a CMS combined with a fully featured semantic metadata repository with reasoning capabilities, based on reusing different Open Source solutions (Apache Stanbol, Apache Solr, OpenLink Virtuoso, etc.).
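To make the "data centric" annotation step more concrete, here is a minimal sketch of how content can be pushed to an Apache Stanbol enhancer endpoint and the returned RDF enhancements loaded into a graph for further linking. The endpoint URL and the serialization format are assumptions about a default Stanbol deployment, not details taken from the described system.

```python
# A minimal sketch, assuming a default Apache Stanbol deployment at localhost:8080.
# Plain text is sent to the enhancer; the RDF enhancements come back as Turtle and
# are parsed into an rdflib graph for downstream Linked Data processing.
import requests
from rdflib import Graph

STANBOL_ENHANCER = "http://localhost:8080/enhancer"  # assumed default Stanbol location


def annotate(text: str) -> Graph:
    """Send raw content to the Stanbol enhancer and return the enhancement RDF."""
    response = requests.post(
        STANBOL_ENHANCER,
        data=text.encode("utf-8"),
        headers={"Content-Type": "text/plain", "Accept": "text/turtle"},
        timeout=30,
    )
    response.raise_for_status()
    graph = Graph()
    graph.parse(data=response.text, format="turtle")
    return graph


if __name__ == "__main__":
    g = annotate("The new CMS shifts from a content centric to a data centric approach.")
    # Each returned annotation links an extracted concept to external Linked Data
    # sources, which the CMS can then store in its semantic metadata repository.
    print(g.serialize(format="turtle"))
```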
Vera Meister, Jonas Jetschni
The implementation of IT service catalogs at public organizations can be considered an effective first step towards IT service management. The latter is becoming increasingly unavoidable due to the growing financial, business and security threats faced by public organizations. Traditional IT service catalog implementations are mostly based on common Content Management Systems. A small number of public organizations use document-based catalogs or stick to Configuration Management Databases, which provide a rather technical type of service catalog. None of these implementation types meets all of the requirements for an IT service catalog. That is why the development and implementation of a semantic catalog was initiated. A vertical prototype has now been implemented and tested and will be presented at the conference.
Tudor B. Ionescu
In the mobility industry, collaborative processes are often described in natural language and stored in Word and PDF handbooks and logbooks. This unstructured information is complemented by emails and meeting minutes resulting from the communication between project stakeholders (customers, managers, engineers). Execution logs of past processes also contribute to this unstructured repository of process information. In the railway domain, non-functional requirements such as safety, reliability, certifiability, and standards compliance of both the systems and the business processes used to create them are key to the success of products and projects. As fulfilling these non-functional requirements is extremely costly and time-consuming, the automation and optimization of business processes for developing railway systems are constantly sought after in large business organizations.
To enable automation and optimization of a business process for configuring railway interlocking systems, a BPMN (Business Process Model and Notation) workflow was implemented using the Camunda Suite, which supports visual semantics and executable code generation from BPMN models. In the proposed solution, semantic technologies are used to infer semantic process models, which refine existing models at runtime. The proposed solution helps reduce the process execution time and costs through process automation and optimization. This is facilitated by semantic technologies and a strict separation of concerns using a 3-process approach: (1) a productive process monitored by (2) a mining process and dynamically refined by (3) an adaptation process.
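As an illustration of how such a productive process can be driven programmatically, the sketch below starts one instance of a deployed BPMN process through the Camunda REST API. The host, the process key and the variable name are hypothetical placeholders, not details of the interlocking workflow described in the talk.

```python
# A minimal sketch, assuming a Camunda engine with its REST API on the default port.
# It starts an instance of a deployed BPMN process and passes one process variable.
import requests

CAMUNDA_REST = "http://localhost:8080/engine-rest"   # assumed default Camunda REST endpoint
PROCESS_KEY = "interlocking-configuration"           # hypothetical BPMN process key


def start_configuration(station_id: str) -> str:
    """Start one process instance and return its id."""
    payload = {
        "variables": {
            "stationId": {"value": station_id, "type": "String"}  # hypothetical variable
        }
    }
    response = requests.post(
        f"{CAMUNDA_REST}/process-definition/key/{PROCESS_KEY}/start",
        json=payload,
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["id"]


if __name__ == "__main__":
    print("Started process instance", start_configuration("ST-0042"))
```

In the 3-process approach, instances started this way would be the productive process; the mining and adaptation processes observe and refine the underlying model rather than being started per customer order.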
Tomas Knap
In my talk, I would like to introduce two pilot projects we ran as part of the COMSODE EU FP7 project with the Slovak Environment Agency (SEA) and the Czech Trade Inspection Authority (CTIA). The goal of these pilot projects was to help these organisations transform and publish selected datasets as (linked) open data. I will also demonstrate UnifiedViews, an ETL tool for RDF data, and detail its role in Open Data Node, the publication platform developed in the COMSODE project.
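For readers unfamiliar with ETL for RDF, the following is an illustrative sketch of the kind of extract-transform step such a pipeline performs: tabular source data is converted to RDF so it can be published as linked open data. It is not the UnifiedViews DPU API (which is Java-based); the file name, namespace and column names are invented for the example.

```python
# An illustrative sketch only: convert a CSV of inspection records into RDF triples.
# Namespace, file name and columns are hypothetical.
import csv
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://data.example.org/inspection/")  # assumed base namespace


def csv_to_rdf(path: str) -> Graph:
    """Read a CSV file and emit one RDF resource per row."""
    graph = Graph()
    graph.bind("ex", EX)
    with open(path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            subject = EX[row["id"]]
            graph.add((subject, RDF.type, EX.Inspection))
            graph.add((subject, EX.inspectedEntity, Literal(row["entity"])))
            graph.add((subject, EX.result, Literal(row["result"])))
    return graph


if __name__ == "__main__":
    print(csv_to_rdf("inspections.csv").serialize(format="turtle"))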
Roland Fleischhacker, Dr. Sonja Kabicher-Fuchs
As one of the largest property management companies in Europe, Stadt Wien - Wiener Wohnen (WW) manages approximately 220,000 community-owned apartments, 47,000 parking spaces and 5,500 shops. More than half a million tenants, and thus about a quarter of Vienna's population, generate 1.5 million customer inquiries to the contact center per year. The reported customer issues are manifold, ranging from technical defects, suggestions, information requests and complaints to commercial issues about rent and operating costs. This large variety of topics, and the proper selection of the associated procedures for handling the concerns, remains a major challenge for the employees of the contact center, particularly since some of the business processes initiated by the call center incur very high costs.
To increase the quality and speed of concern identification, WW implemented the cognitive decision system DEEP.assist, which went live in June 2014. With DEEP.assist, the call center agent now only has to type in the caller's statements as normal German sentences. In doing so, the agent documents the business case while the system additionally analyses the meaning of the text in real time, so the agent receives solution proposals already while writing. A key challenge in problem solving was the fact that callers often do not describe the specific problem but instead articulate the symptoms of the concern. With the help of chains of associations, DEEP.assist is able to identify the concerns even from very unusual descriptions given by the caller.
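To give a rough intuition for the chain-of-associations idea, the toy sketch below traverses a small association graph from symptom terms found in the agent's free-text note until known concern categories are reached. This is not how DEEP.assist is implemented; all terms, links and thresholds are invented for illustration.

```python
# A toy illustration of chain-of-associations lookup, NOT the DEEP.assist engine.
# Symptom terms mentioned in the note are followed through an association graph
# (breadth-first) until concern labels are reached.
from collections import deque

ASSOCIATIONS = {  # hypothetical association graph: term -> related terms or concerns
    "water on the floor": ["leak"],
    "leak": ["pipe damage"],
    "pipe damage": ["CONCERN: plumbing repair"],
    "no heating": ["boiler failure"],
    "boiler failure": ["CONCERN: heating maintenance"],
}


def identify_concerns(note: str, max_depth: int = 4) -> set[str]:
    """Follow association chains from terms in the note to concern labels."""
    start_terms = [term for term in ASSOCIATIONS if term in note.lower()]
    concerns, seen = set(), set(start_terms)
    queue = deque((term, 0) for term in start_terms)
    while queue:
        term, depth = queue.popleft()
        if term.startswith("CONCERN:"):
            concerns.add(term.removeprefix("CONCERN: "))
            continue
        if depth < max_depth:
            for nxt in ASSOCIATIONS.get(term, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
    return concerns


if __name__ == "__main__":
    print(identify_concerns("Tenant reports water on the floor in the bathroom"))
```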
Miroslav Líška, Marek Šurek
At present it is very difficult to work with government data effectively. A lot of effort is spent just integrating various datasets. Data are often of low quality: they are inconsistent or incomplete and published in different formats, which limits their integration and use for various purposes. The linked data approach to government data integration currently seems to be the most promising in this field. Data are annotated with ontologies, so they can easily be linked semantically and processed with reasoners to infer additional content. When government data are both linked and open, great business value can be produced: on the one hand, the data integration process becomes more effective and precise, and on the other hand, any software project can benefit from including linked open data in its solutions.
This presentation provides information about the Semantic Web adoption process for Slovak government data. First, an initial formal proposal of semantic standards for Slovak government data [SK-SEM2013] is presented. Second, we show how the URI became the key element of the Slovak semantic standards. Third, a new approach to semantic standards is presented, covering their base properties, i.e. approved ontologies and a method for URI creation. Finally, a concrete example of government linked data is presented: Slovpedia, the Slovak linked open data database, and Pharmanet, a Slovpedia client that provides NLP-based extraction of drug interactions extended with inferencing.
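To make the two building blocks of such standards tangible (a deterministic URI pattern and ontology-based annotation that makes the data queryable), here is a minimal sketch. The URI template, ontology namespace and properties below are hypothetical placeholders for illustration, not the actual Slovak standard or the Slovpedia/Pharmanet vocabularies.

```python
# A minimal sketch with a hypothetical URI template and ontology namespace.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

BASE = "https://data.gov.sk/id/{dataset}/{local_id}"   # assumed URI template
GOV = Namespace("https://data.gov.sk/def/ontology/")   # hypothetical ontology namespace


def mint_uri(dataset: str, local_id: str) -> URIRef:
    """Create a stable, dereferenceable identifier for a government resource."""
    return URIRef(BASE.format(dataset=dataset, local_id=local_id))


graph = Graph()
drug = mint_uri("drug", "ATC-N02BE01")                 # e.g. paracetamol
graph.add((drug, RDF.type, GOV.Drug))
graph.add((drug, GOV.activeSubstance, Literal("paracetamol")))

# A Pharmanet-style question: which registered drugs contain a given substance?
results = graph.query("""
    PREFIX gov: <https://data.gov.sk/def/ontology/>
    SELECT ?drug WHERE { ?drug a gov:Drug ; gov:activeSubstance "paracetamol" . }
""")
for row in results:
    print(row.drug)
```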