Open Risk White Paper 14: Integrated energy accounting using relational databases In this Open Risk White Paper we demonstrate a concrete implementation of an integrated energy accounting framework using relational database technologies. The framework enables accounting of non-financial disclosures (such as the physical and embodied energy footprints of economic transactions) while enforcing the familiar double-entry balance constraints used to produce conventional (monetary) accounts and financial statements. In addition, it allows the enforcement of constraints associated with the flows and transformations of energy that happen inside the organizational perimeter.
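To make the idea tangible, here is a minimal sketch (an illustrative toy schema using the Python standard library, not the white paper's actual design) of how a relational store can check both the monetary double-entry constraint and an energy conservation constraint for each journal entry:

```python
# A minimal sketch (not the white paper's schema) of enforcing double-entry
# balance and an energy conservation check in a relational database.
# Table and column names here are illustrative assumptions.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE journal_entry (
    id INTEGER PRIMARY KEY,
    description TEXT
);
CREATE TABLE posting (
    id INTEGER PRIMARY KEY,
    entry_id INTEGER NOT NULL REFERENCES journal_entry(id),
    account TEXT NOT NULL,
    amount_eur REAL NOT NULL,           -- monetary leg (signed)
    energy_mj REAL NOT NULL DEFAULT 0   -- attached energy flow (signed, megajoules)
);
""")

# Record a stylized transaction: purchase of fuel, paid from cash.
con.execute("INSERT INTO journal_entry (id, description) VALUES (1, 'Fuel purchase')")
con.executemany(
    "INSERT INTO posting (entry_id, account, amount_eur, energy_mj) VALUES (?, ?, ?, ?)",
    [
        (1, "Fuel inventory", 100.0, 4300.0),              # debit: fuel and its embodied energy enter
        (1, "Cash", -100.0, 0.0),                          # credit: cash leaves
        (1, "Energy input (environment)", 0.0, -4300.0),   # counter-posting for the energy inflow
    ],
)

# Double-entry constraint: monetary postings of each entry must sum to zero.
# Energy constraint: energy postings of each entry must also sum to zero.
for entry_id, eur, mj in con.execute(
    "SELECT entry_id, SUM(amount_eur), SUM(energy_mj) FROM posting GROUP BY entry_id"
):
    assert abs(eur) < 1e-9, f"entry {entry_id} violates double-entry balance"
    assert abs(mj) < 1e-9, f"entry {entry_id} violates energy conservation"

print("all entries balance in both monetary and energy terms")
```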
Open Risk White Paper 12: Deep-Linking Financial and Energy Accounting We develop a conceptual framework for integrated accounting that produces (where possible) non-financial disclosures subject to the same double-entry balance constraints as those used to produce conventional financial statements and automatically ensures any additional conservation laws are satisfied. We identify the key ingredients required for such a rigorous integrated accounting framework, in terms of concepts, postulates and design choices. Our focus and concrete use case is built around energy accounting, keeping track of an entity’s detailed energy footprint (primary inputs, transformations and waste generation) as an extension of its standard financial accounting and reporting.
Concentration, diversity, inequality and sparsity in the context of economic networks In this second Open Risk White Paper on Connecting the Dots we examine measures of concentration, diversity, inequality and sparsity in the context of economic systems represented as network (graph) structures. We adopt a stylized description of economies as property graphs and illustrate how relevant concepts can be represented in this language. We explore in some detail the data types representing economic network data and their statistical nature, which is critical for their use in concentration analysis.
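To make the link between network data and concentration measurement concrete, here is a minimal sketch (with made-up node names and weights) of computing a Herfindahl-Hirschman Index over the weighted outgoing edges of a single node in an exposure network:

```python
# A minimal sketch (illustrative data, not the white paper's notation): the HHI
# of a node's exposures in a weighted, directed economic network stored as an edge list.
edges = [
    # (source, target, weight), e.g. lender -> borrower exposure
    ("BankA", "Firm1", 50.0),
    ("BankA", "Firm2", 30.0),
    ("BankA", "Firm3", 20.0),
]

def hhi(weights):
    """HHI of a list of non-negative weights: the sum of squared shares."""
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum((w / total) ** 2 for w in weights)

bank_a_exposures = [w for source, _, w in edges if source == "BankA"]
print(f"HHI of BankA's exposures: {hhi(bank_a_exposures):.3f}")  # 0.380
```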
We explore a variety of distinct uses of graph structures in data science. We review various important graph types and sketch their linkages and relationships. The review provides an operational guide towards a better overall understanding of these powerful tools.
Graphs seem to be everywhere in modern data science Graphs (and the related concept of Networks) have emerged from a relatively niche role in mathematics and physics to become a ubiquitous model for describing and interpreting various phenomena. While the scholarly account of how this came about would probably need a dedicated book, there is no doubt that one of the key factors that increased the visibility of the graph concept is the near universal adoption of digital social networks.
Semantic Web Technologies integrate naturally with the worlds of open data science and open source machine learning, empowering better control and management of the risks and opportunities that come with increased digitization and model use.
The ongoing and accelerating digitisation of many aspects of social and economic life means the proliferation of data-driven, data-intermediated decisions and the reliance on quantitative models of various sorts (going under various hashtags such as machine learning, artificial intelligence, data science, etc.).
Course Content This CrashCourse is an introduction to semantic data using Python. It covers the following topics:
- We learn to work with RDF graphs using rdflib (see the sketch below)
- We explore the owlready package and OWL ontologies
- We look into JSON-LD serialization of RDF/OWL data
- We try data validation using pySHACL
- We use throughout a realistic data set based on the Credit Ratings Ontology

Who Is This Course For The course is useful to:
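As a taste of the rdflib part of the workflow listed above, a minimal, self-contained sketch (the namespace and terms are illustrative stand-ins, not the actual Credit Ratings Ontology vocabulary):

```python
# A minimal sketch of the rdflib workflow: build a graph, query it, serialize it.
# The namespace and terms below are invented for the example.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.com/ratings#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Describe a credit rating as RDF triples
g.add((EX.CreditRating, RDF.type, RDFS.Class))
g.add((EX.rating_001, RDF.type, EX.CreditRating))
g.add((EX.rating_001, RDFS.label, Literal("AA-")))

# Query the graph with SPARQL
query = "SELECT ?r ?label WHERE { ?r a ex:CreditRating ; rdfs:label ?label }"
for row in g.query(query, initNs={"ex": EX, "rdfs": RDFS}):
    print(row.r, row.label)

# Serialize; in rdflib 6+ "json-ld" is also an accepted format string
print(g.serialize(format="turtle"))
```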
The Non-Performing Loan Ontology The Non-Performing Loan Ontology is a framework that aims to represent and categorize knowledge about non-performing loans using semantic web information technologies. Codenamed NPLO, it codifies the relationship between the various components of a Non-Performing Loan portfolio dataset. (NB: Non-performing loans are bank loans that are 90 days or more past their repayment date or that are unlikely to be repaid, for example if the borrower is facing financial difficulties).
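For orientation, a hedged sketch of how one might inspect such an ontology with rdflib (the file path is a placeholder, and the actual NPLO class and property names depend on the published vocabulary):

```python
# A sketch of listing the classes declared in an ontology file with rdflib.
# "nplo.ttl" is a placeholder path for a local copy of the ontology.
from rdflib import Graph
from rdflib.namespace import OWL, RDF, RDFS

g = Graph()
g.parse("nplo.ttl", format="turtle")

# Print every declared OWL class together with its label, if any
for cls in g.subjects(RDF.type, OWL.Class):
    label = g.value(cls, RDFS.label)
    print(cls, "-", label)
```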
The Risk Function Ontology The Risk Function Ontology is a framework that aims to represent and categorize knowledge about risk management functions using semantic web information technologies. Codenamed RFO, it codifies the relationship between the various components of a risk management organization. Individuals, teams or even whole departments tasked with risk management exist in some shape or form in most organizations. The ontology allows the definition of risk management roles in more precise terms, which in turn can be used in a variety of contexts: towards better structured actual job descriptions, more accurate description of internal processes and easier inspection of alignment and consistency with risk taxonomies (see also the live version and the white paper OpenRiskWP04_061415).
Semantic Web Technologies The Risk Model Ontology is a framework that aims to represent and categorize knowledge about risk models using semantic web information technologies.
In principle any semantic technology can be the starting point for a risk model ontology. The Open Risk Manual adopts the W3C’s Web Ontology Language (OWL). OWL is a Semantic Web language designed to represent rich and complex knowledge about things, groups of things, and relations between things.
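As an illustration of what this looks like in practice, a minimal sketch using the owlready2 package (the IRI, class and property names are invented for the example, not taken from the actual Risk Model Ontology):

```python
# A minimal sketch of defining OWL classes and an object property with owlready2.
# The ontology IRI and all names below are illustrative assumptions.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.com/riskmodel.owl")  # hypothetical IRI

with onto:
    class RiskModel(Thing):
        pass

    class CreditRiskModel(RiskModel):  # subclass: credit risk models are risk models
        pass

    class has_model_owner(ObjectProperty):  # relation between a model and its owner
        domain = [RiskModel]
        range = [Thing]

# Create an individual and save the toy ontology to an OWL file
model = CreditRiskModel("pd_model_v1")
onto.save(file="riskmodel.owl")
print(list(onto.classes()))
```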
Extending the Open Risk API to include the EBA Portfolio Data Templates The Open Risk API provides a mechanism to integrate arbitrary collections of risk data and risk modelling resources in the context of assessing and managing financial risk. It is based on two key technologies of the modern Web, RESTful architectures and Semantic Data.
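Purely as an illustration of the RESTful side of such an integration, a sketch in Python (the base URL, endpoint path and field names below are hypothetical, not the actual Open Risk API specification):

```python
# A hypothetical sketch of consuming a RESTful risk-data resource; the JSON-LD
# "@context" field is where the semantic layer would attach meaning to the data.
import requests

BASE_URL = "https://api.example.com"  # placeholder, not a real endpoint

def fetch_portfolio(portfolio_id: str) -> dict:
    """Fetch a portfolio resource as JSON (JSON-LD in the semantic case)."""
    response = requests.get(f"{BASE_URL}/portfolios/{portfolio_id}", timeout=10)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    portfolio = fetch_portfolio("demo-portfolio")
    print(portfolio.get("@context"), portfolio.get("name"))
```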
OpenNPL, the credit portfolio management platform we launched recently, fully integrates the latest versions of the Open Risk API.
From Big Data, to Linked Data and Linked Models The big data problem:
As certainly as the sun will set today, the big data explosion will lead to a big clean-up mess How do we know? It is simply a case of history repeating. We only have to study the still smouldering last chapter of banking industry history. Currently banks are portrayed as something akin to the village idiot as far as technology adoption is concerned (and there is certainly a nugget of truth to this).