The Risk Function Ontology: The Risk Function Ontology (RFO) is a framework that aims to represent and categorize knowledge about risk management functions using semantic web technologies. The ontology codifies the relationships between the various components of a risk management organization. Individuals, teams, or even whole departments tasked with risk management exist in some shape or form in most organizations. The ontology allows the definition of risk management roles in more precise terms, which in turn can be used in a variety of contexts: better-structured job descriptions, more accurate descriptions of internal processes, and easier inspection of alignment and consistency with risk taxonomies. (See also the live version and the white paper OpenRiskWP04_061415.)
Federated Credit Systems, Part I: Unbundling the Credit Provision Business Model: As an architectural design and information technology approach, federation has received increased attention in domains such as the medical sector (under the name federated analysis), in official statistics (under the name trusted data), and in mass computing devices such as smartphones (under the name federated learning). In this white paper, the first of a series of three, we introduce and explore the concept of federated credit systems.
Semantic Web Technologies: The Risk Model Ontology is a framework that aims to represent and categorize knowledge about risk models using semantic web information technologies. In principle any semantic technology can be the starting point for a risk model ontology. The Open Risk Manual adopts the W3C’s Web Ontology Language (OWL). OWL is a Semantic Web language designed to represent rich and complex knowledge about things, groups of things, and relations between things.
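The knowledge representation that OWL builds on can be illustrated with subject-predicate-object triples. The following minimal sketch uses plain Python tuples rather than a real OWL toolkit, and the class and property names (`rmo:RiskModel`, `rmo:hasModelInput`, `ex:ModelA`) are hypothetical, not taken from the actual Risk Model Ontology:

```python
# A toy triple store: each statement is a (subject, predicate, object) tuple,
# mirroring how OWL/RDF describes things, groups of things, and relations.
triples = {
    ("rmo:RiskModel", "rdf:type", "owl:Class"),
    ("rmo:CreditScoringModel", "rdfs:subClassOf", "rmo:RiskModel"),
    ("rmo:hasModelInput", "rdf:type", "owl:ObjectProperty"),
    ("ex:ModelA", "rdf:type", "rmo:CreditScoringModel"),
}

def objects(subject, predicate):
    # Minimal pattern match: all objects linked to a subject via a predicate.
    return {o for s, p, o in triples if s == subject and p == predicate}

print(objects("ex:ModelA", "rdf:type"))  # {'rmo:CreditScoringModel'}
```

In a real deployment the same statements would be written in an OWL serialization such as Turtle and queried with SPARQL; the tuple form above only conveys the underlying graph structure.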
The motivation for federated credit risk models: Federated learning is a machine learning technique that is receiving increased attention in diverse data-driven application domains that have data privacy concerns. The essence of the concept is to train algorithms across decentralized servers, each holding its own local data samples, hence without the need to exchange potentially sensitive information. The construction of a common model is achieved through the exchange of derived data (gradients, parameters, weights, etc.).
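The exchange of derived data can be sketched with a minimal federated-averaging round for a one-parameter linear model. The client datasets, learning rate, and round count below are illustrative assumptions, not part of any production protocol:

```python
# Minimal sketch of federated gradient averaging for the model y = w * x.
# Each client computes a gradient on its own private data; only that
# derived number is sent to the server, never the raw samples.

def local_gradient(w, data):
    # Mean squared error gradient dL/dw over the client's local samples.
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

def federated_round(w, clients, lr=0.01):
    # The server averages the clients' gradients and applies one
    # shared update to the common model parameter.
    grads = [local_gradient(w, data) for data in clients]
    avg_grad = sum(grads) / len(grads)
    return w - lr * avg_grad

# Three clients, each holding private samples of the true relation y = 3x.
clients = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(3.0, 9.0)],
    [(4.0, 12.0), (5.0, 15.0)],
]

w = 0.0
for _ in range(200):
    w = federated_round(w, clients)
print(round(w, 2))  # converges toward 3.0
```

Real systems add secure aggregation, client sampling, and multiple local steps per round (as in FedAvg), but the privacy-preserving shape is the same: raw data stays put, derived updates travel.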
From Big Data, to Linked Data and Linked Models: The big data problem: As certainly as the sun will set today, the big data explosion will lead to a big clean-up mess. How do we know? We only have to study the still-smouldering last chapter of banking industry history. Banks are currently portrayed as something akin to the village idiot as far as technology adoption is concerned, and there is certainly a nugget of truth to this.