What is OSLO?
OSLO stands for Open Standards for Linked Organisations. Many (public) organisations keep all kinds of data and exchange them with each other. OSLO was created to simplify, streamline and automate this process. The initiative aims to make the sharing of data and information run more smoothly by establishing the meaning of concepts, words and definitions (thus avoiding semantic discussions) and by prescribing how to structure them in one's own databases or software packages. In this way, high-quality, up-to-date data can be created and local shadow databases can be avoided.
The aim of OSLO is to ensure greater coherence and better comprehensibility and findability of information and services. To increase the exchangeability of information, the standards build on web standards and European standards.
OSLO Air & Water
Within the ODALA project, several OSLO core vocabularies and application profiles have been developed.
An application profile is a specification for data exchange between applications that fulfil a certain use case. In addition to shared semantics, it allows additional restrictions to be imposed, such as the definition of cardinalities or the use of certain code lists. An application profile can also serve as documentation for analysts and developers.
Vocabularies are the basis for open semantic information standards: they provide a shared conceptual framework for certain concepts, with a focus on data exchange, and have been developed in the working groups.
The information in the vocabularies and application profiles is organised according to the Resource Description Framework (RDF), which makes them suitable for use in Linked Data applications.
Each application profile has a JSON-LD context file and a SHACL template.
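To make the role of the JSON-LD context concrete, the sketch below builds a minimal, hypothetical context of the kind an application profile might ship, and resolves a short term name to its full IRI. The term names and the `example.org` namespace are invented for illustration and are not taken from an actual OSLO profile.

```python
# Hypothetical JSON-LD context, illustrating how an application profile
# maps short term names to full IRIs. Namespace and terms are invented.
context = {
    "@context": {
        "Sensor": "https://example.org/ns/airwater#Sensor",
        "observedAt": {
            "@id": "https://example.org/ns/airwater#observedAt",
            "@type": "http://www.w3.org/2001/XMLSchema#dateTime",
        },
    }
}

def expand_term(ctx: dict, term: str) -> str:
    """Resolve a short term to its full IRI via the context mapping."""
    entry = ctx["@context"][term]
    # A term definition is either a plain IRI string or an object with "@id".
    return entry if isinstance(entry, str) else entry["@id"]

print(expand_term(context, "observedAt"))
# -> https://example.org/ns/airwater#observedAt
```

A consumer that applies the context can thus exchange compact JSON documents while every key still carries unambiguous, shared semantics.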
More information about the OSLO semantic approach is available at https://purl.eu/
Merged Meeting Minutes of the OSLO Air & Water Workshops
Linked Data Event Streams (LDES) for open data publishing
Modern cities maintain numerous data resources, such as real-time air quality observations and detailed descriptions of the road network. However, these resources are primarily used by the cities themselves to perform their core duties, such as city planning. Making the data publicly available is often a nice-to-have, and this is reflected in which datasets are made public and how they are published. Concretely, open data publishing is often limited to providing an open data portal of static data dumps. Although this is certainly better than not offering any open data at all, this method is ill-suited to rapidly changing datasets such as air quality observations. Nevertheless, cities often express their desire to make such datasets publicly available, as long as it does not interfere with the internal systems they need to perform their duties. One common solution to this problem is to provide data access APIs, but with some form of authentication and rate limiting to avoid overloading the systems. Unfortunately, this makes the data harder to reuse, and smaller cities do not have the resources to build and maintain such APIs.
As we focus on Linked Data within Activity 5 of ODALA, other ways of publishing data are available. All ways of publishing Linked Data (data dumps, query endpoints, subject pages, …) expose fragments of the whole dataset. In the case of a data dump, there is simply one fragment containing all data. A SPARQL endpoint, on the other hand, exposes all fragments that match any graph pattern. This idea forms the conceptual framework of Linked Data Fragments (LDF): each publishing method exposes certain fragments of the data, and each method includes hypermedia controls to access the fragments. The core insight of this framework is that no single publishing method is great at everything, that there are alternatives to the conventional data dumps or query APIs, and that the strengths of these alternatives should be explored as well.
Figure 25: The two extremes of the Linked Data Fragments axis are static data dumps on one hand and query APIs on the other. Both approaches have their own characteristics, with no single method being the indisputable best for all use cases. For example, a data portal of static data dumps is easier to maintain than a feature-rich query API, but such an API can more easily return real-time data. There are many in-between approaches on this axis, notably Triple Pattern Fragments and Linked Data Event Streams.
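The hypermedia-controls idea behind LDF can be sketched as follows: a client starts at one fragment and follows the "next" links it finds until the collection is exhausted. The in-memory pages below stand in for HTTP responses, and the URLs and payloads are invented for the example.

```python
# Illustrative sketch of the Linked Data Fragments idea: every interface
# exposes fragments of the dataset plus hypermedia controls (here, a
# "next" link) leading to further fragments. PAGES simulates a server.
PAGES = {
    "/fragment?page=1": {"data": ["triple-1", "triple-2"], "next": "/fragment?page=2"},
    "/fragment?page=2": {"data": ["triple-3"], "next": None},
}

def fetch_all(start: str) -> list:
    """Follow the hypermedia 'next' controls until no fragments remain."""
    triples, url = [], start
    while url is not None:
        page = PAGES[url]
        triples.extend(page["data"])
        url = page["next"]
    return triples

print(fetch_all("/fragment?page=1"))
# -> ['triple-1', 'triple-2', 'triple-3']
```

The server side of such an interface only needs to serve (mostly static) fragments, which shifts query effort to the client and keeps publishing cheap.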
As part of this project, we refined the definitions and tooling of one such alternative LDF interface: the Linked Data Event Stream (LDES). Intuitively, an LDES is an append-only collection of immutable objects. Everything that has ever been added to an LDES collection remains part of the collection forever, and the individual data objects never change. A basic LDES is a special case where each data object contains a (creation) timestamp, and the collection is fragmented by grouping objects from the same time interval in the same fragment. This simple restriction means that at any given time, only one data fragment can still change; all the others already contain all the data they will ever contain and can be persisted to disk or to a cloud-based storage system, as if they were small static data dumps.
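The time-based fragmentation described above can be sketched in a few lines: immutable, timestamped members are grouped per interval (per day here), so only the fragment covering the current interval can still grow. The member structure and fragment keys are invented for illustration.

```python
from datetime import datetime, timezone

# Hypothetical LDES members: immutable objects, each with a creation timestamp.
members = [
    {"id": "obs-1", "timestamp": datetime(2021, 3, 1, 9, 0, tzinfo=timezone.utc)},
    {"id": "obs-2", "timestamp": datetime(2021, 3, 1, 17, 30, tzinfo=timezone.utc)},
    {"id": "obs-3", "timestamp": datetime(2021, 3, 2, 8, 15, tzinfo=timezone.utc)},
]

def fragment_by_day(stream):
    """Group LDES members into per-day fragments, keyed by ISO date.

    Because members are append-only and immutable, every fragment whose
    day lies in the past is final and can be cached or archived as-is.
    """
    fragments = {}
    for member in stream:
        key = member["timestamp"].date().isoformat()
        fragments.setdefault(key, []).append(member["id"])
    return fragments

print(fragment_by_day(members))
# -> {'2021-03-01': ['obs-1', 'obs-2'], '2021-03-02': ['obs-3']}
```

Any interval length works the same way; the essential property is that closed intervals never change, which is what makes the cheap, static-dump-like persistence possible.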