Risk Model

Connecting the Dots: Concentration, diversity, inequality and sparsity in economic networks

In this second Open Risk White Paper on "Connecting the Dots" we examine measures of concentration, diversity, inequality and sparsity in the context of economic systems represented as network (graph) structures.

Reading Time: 6 min.
Concentration, diversity, inequality and sparsity in the context of economic networks: We adopt a stylized description of economies as property graphs and illustrate how the relevant concepts can be represented in this language. We explore in some detail the data types representing economic network data and their statistical nature, which is critical for their use in concentration analysis.
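To give a flavour of the kind of measures involved, here is a minimal Python sketch (using networkx and toy exposure data, both chosen here for illustration) that computes a Herfindahl-Hirschman concentration index, a Gini inequality coefficient and the edge density of a small economic network. The specific indices and property names are standard textbook examples, not necessarily those analysed in the white paper.

```python
# Minimal sketch: concentration, inequality and sparsity measures on a toy
# economic network. Exposures are stored as edge weights; the indices below
# (HHI, Gini, edge density) are standard examples, not the white paper's exact set.
import networkx as nx

def herfindahl_index(weights):
    """Herfindahl-Hirschman concentration index of a set of exposures."""
    total = sum(weights)
    shares = [w / total for w in weights]
    return sum(s ** 2 for s in shares)

def gini_coefficient(weights):
    """Gini inequality coefficient (simple O(n^2) form, fine for illustration)."""
    n = len(weights)
    mean = sum(weights) / n
    abs_diffs = sum(abs(x - y) for x in weights for y in weights)
    return abs_diffs / (2 * n * n * mean)

# Toy network: nodes are economic agents, weighted edges are exposures
G = nx.DiGraph()
G.add_weighted_edges_from([("A", "B", 100), ("A", "C", 40), ("B", "C", 10)])

exposures = [d["weight"] for _, _, d in G.edges(data=True)]
print("HHI:", herfindahl_index(exposures))
print("Gini:", gini_coefficient(exposures))
print("Edge density (a simple sparsity measure):", nx.density(G))
```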
9 Ways Graphs Show Up in Data Science

We explore a variety of distinct uses of graph structures in data science. We review various important graph types and sketch their linkages and relationships. The review provides an operational guide towards a better overall understanding of these powerful tools.

Reading Time: 17 min.
Graphs seem to be everywhere in modern data science: Graphs (and the related concept of Networks) have emerged from a relative niche in mathematics and physics to become a ubiquitous model for describing and interpreting various phenomena. While a scholarly account of how this came about would probably need a dedicated book, there is no doubt that one of the key factors that increased the visibility of the graph concept is the near-universal adoption of digital social networks.
Towards the Semantic Description of Machine Learning Models

Reading Time: 7 min.
Semantic Web Technologies integrate naturally with the worlds of open data science and open source machine learning, empowering better control and management of the risks and opportunities that come with increased digitisation and model use. The ongoing and accelerating digitisation of many aspects of social and economic life means the proliferation of data-driven / data-intermediated decisions and the reliance on quantitative models of various sorts (going under various hashtags such as machine learning, artificial intelligence, data science, etc.).
Federated Credit Systems, Part One: Unbundling the Credit Provision Business Model

In this Open Risk White Paper, the first in a series of three, we introduce and explore the concept of federated credit systems as a potentially interesting domain for the application of federated analysis and federated learning.

Reading Time: 1 min.
Federated Credit Systems, Part I: Unbundling the Credit Provision Business Model: As an architectural design and information technology approach, federation has received increased attention in domains such as the medical sector (under the name federated analysis), official statistics (under the name trusted data) and mass computing devices such as smartphones (under the name federated learning). In this white paper, the first of a series of three, we introduce and explore the concept of federated credit systems.
Connecting the Dots: Economic Networks as Property Graphs

Reading Time: 0 min.
Connecting the Dots: Economic Networks as Property Graphs: We develop a quantitative framework that approaches economic networks from the point of view of contractual relationships between agents (and the interdependencies those generate). The representation of agent properties, transactions and contracts is done in the context of a property graph. A typical use case for the proposed framework is the study of credit networks. You can find the white paper here: (OpenRiskWP08_131219)
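For illustration only, a minimal sketch of the property graph idea in Python, with networkx standing in for a proper graph database: agents become nodes, contracts become edges, and both carry arbitrary key-value properties. The node and edge property names below are hypothetical and not taken from the white paper.

```python
# Minimal property graph sketch: agents as nodes, contracts as edges,
# both decorated with key-value properties. All names and values are illustrative.
import networkx as nx

G = nx.MultiDiGraph()  # allow multiple contracts between the same pair of agents

# Agents (nodes) with properties
G.add_node("bank_1", type="Bank", sector="Financial")
G.add_node("firm_A", type="Corporate", sector="Manufacturing")

# Contracts (edges) with properties
G.add_edge("bank_1", "firm_A", contract="loan", notional=1_000_000,
           maturity="2030-01-01", rate=0.045)
G.add_edge("bank_1", "firm_A", contract="credit_line", notional=250_000,
           drawn=50_000)

# Simple query: total notional extended by bank_1
total = sum(d["notional"] for _, _, d in G.out_edges("bank_1", data=True))
print("Total notional from bank_1:", total)
```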
Risk Model Ontology

Reading Time: 2 min.
Semantic Web Technologies: The Risk Model Ontology is a framework that aims to represent and categorize knowledge about risk models using semantic web information technologies. In principle any semantic technology can be the starting point for a risk model ontology. The Open Risk Manual adopts the W3C’s Web Ontology Language (OWL). OWL is a Semantic Web language designed to represent rich and complex knowledge about things, groups of things, and relations between things.
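As a small, purely illustrative example of what such a representation can look like, the sketch below uses the Python rdflib library to declare a tiny OWL class hierarchy and an object property for risk models. The class and property names are placeholders, not the actual terms of the Risk Model Ontology.

```python
# Minimal OWL sketch with rdflib. The namespace and all term names below are
# hypothetical placeholders, not the Open Risk Manual's Risk Model Ontology.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

RMO = Namespace("https://example.org/riskmodel#")

g = Graph()
g.bind("rmo", RMO)
g.bind("owl", OWL)

# An OWL class for risk models and a subclass for credit risk models
g.add((RMO.RiskModel, RDF.type, OWL.Class))
g.add((RMO.RiskModel, RDFS.label, Literal("Risk Model")))
g.add((RMO.CreditRiskModel, RDF.type, OWL.Class))
g.add((RMO.CreditRiskModel, RDFS.subClassOf, RMO.RiskModel))

# An object property linking a model to the data it consumes
g.add((RMO.usesDataset, RDF.type, OWL.ObjectProperty))
g.add((RMO.usesDataset, RDFS.domain, RMO.RiskModel))

print(g.serialize(format="turtle"))
```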
Federated Credit Risk Models

Reading Time: 4 min.
The motivation for federated credit risk models: Federated learning is a machine learning technique that is receiving increased attention in diverse data-driven application domains that have data privacy concerns. The essence of the concept is to train algorithms across decentralized servers, each holding its own local data samples, without the need to exchange potentially sensitive information. The construction of a common model is achieved through the exchange of derived data (gradients, parameters, weights, etc.).
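To make the mechanics concrete, below is a minimal, purely illustrative federated averaging sketch for a linear model: each client fits parameters on its own local data, and only the fitted parameter vectors (not the data) are sent to a coordinator, which combines them weighted by local sample counts. It is a toy example under simplifying assumptions, not the architecture of any particular federated credit risk system.

```python
# Toy federated averaging sketch: only model parameters leave the clients.
import numpy as np

def local_fit(X, y):
    """Each client fits an ordinary least-squares model on its local data."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def federated_average(local_params, sample_counts):
    """Coordinator averages client parameters, weighted by local sample counts."""
    weights = np.asarray(sample_counts, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, local_params))

rng = np.random.default_rng(0)
true_coef = np.array([0.5, -1.2])

# Two clients, each with private local samples that never leave the client
clients = []
for n in (200, 300):
    X = rng.normal(size=(n, 2))
    y = X @ true_coef + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

local_params = [local_fit(X, y) for X, y in clients]
global_coef = federated_average(local_params, [len(y) for _, y in clients])
print("Federated estimate of the coefficients:", global_coef)
```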
Overview of the Julia-Python-R Universe

We introduce a side-by-side review of the main open source ecosystems supporting the Data Science domain: Julia, Python, R, the trio sometimes abbreviated as Jupyter.

Reading Time: 3 min.
Overview of the Julia-Python-R Universe: A new Open Risk Manual entry offers a side-by-side review of the main open source ecosystems supporting the Data Science domain: Julia, Python, R, sometimes abbreviated as Jupyter. Motivation: A large component of Quantitative Risk Management relies on data processing and quantitative tools (aka Data Science). In recent years, open source software targeting Data Science has found increased adoption in diverse applications. The Overview of the Julia-Python-R Universe article is a side-by-side comparison of a wide range of aspects of the Python, Julia and R language ecosystems.
Machine learning approaches to synthetic credit data

Reading Time: 9 min.
The challenge with historical credit data: Historical credit data are vital for a host of credit portfolio management activities, from the assessment of the performance of different types of credits all the way to the construction of sophisticated credit risk models. Such is the importance of data inputs that, for risk models impacting significant decision making or external reporting, there are even prescribed minimum requirements for the type and quality of the necessary historical credit data.
OpenCPM NPL Database

Reading Time: 2 min.
Motivation for building an open source database based on the EBA's Standardized NPL Templates: In a recent insightful piece, "Overcoming non-performing loan market failures with transaction platforms", Fell et al. dug deeply into the market failures that help perpetuate the NPL problem. They highlight, in particular, information asymmetries and the attendant costs of valuing NPL portfolios as key obstacles. On the same wavelength, the European Banking Authority published standardized NPL data templates as a step towards reducing the obstacles that prevent the reduction of NPLs.
Data Scientists Have No Future

Reading Time: 1 min.
Data Scientists Have No Future: In the current overheated environment, the working definition of a Data Scientist seems to be: doing whatever it takes to get the job done in a digital #tech domain that we have long neglected but which is now coming back to haunt us! That is nice urgency while it lasts, but it is not a serious job description for the future. You will always find entrepreneurial institutions offering degrees and certifications on the latest trending hashtag.
Machine Learning Ballyhoo

Reading Time: 1 min.
Machine Learning Ballyhoo: Are you getting a bit tired with all the machine learning ballyhoo? You can blame it all on a German mathematician(*), Carl Friedrich Gauss, who started the futuristic mega-trend back in 1809: He showed us how to train a straight line to pass nicely through a cloud of unruly, scattered data points. To find, in effect, a path of least embarrassment. Two+ centuries later it is still a profitable enterprise to invent elaborate variations of that theme, now going under the more exalted name of supervised learning, which may or may not include deep learning.
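For the record, Gauss's mega-trend in a few lines of Python: fitting a straight line through a cloud of scattered points by ordinary least squares. The data are simulated here, purely for illustration.

```python
# Least squares, the original "supervised learning": fit a line to noisy points.
import numpy as np

rng = np.random.default_rng(1809)            # a nod to the year it all started
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=2.0, size=x.size)  # unruly, scattered points

slope, intercept = np.polyfit(x, y, deg=1)   # ordinary least squares fit
print(f"Fitted line: y = {slope:.2f} x + {intercept:.2f}")
```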
The Zen of IFRS 9 Modeling

Reading Time: 6 min.
The Zen of IFRS 9 Modeling: At Open Risk we are firm believers in balancing art and science when developing quantitative risk tools. The introduction of the IFRS 9 and CECL accounting frameworks for reporting credit-sensitive financial instruments is a massive new worldwide initiative that relies in no small part on quantitative models. The scope and depth of the program, in comparison with previous similar efforts (e.g. Basel II), suggest that much can go wrong and that it will take considerable time, iterations, communication and training to develop a mature toolkit that is fit-for-purpose.
Guiding principles for a viable open source operational risk model

Reading Time: 1 min.
Guiding principles for a viable open source operational risk model (OSORM): Such a framework must avoid formulaic inclusion of meaningless risk event types (e.g., legal risk created by the firm's own management decisions) or any risks where the nature and state of current knowledge does not support meaningful quantification; such potential risks would be managed outside the framework. It must also employ a bottom-up design that addresses the risk characteristics of simpler business units first and (if needed) creates a combined profile for a more complex business in a building-block fashion.
Reducing variation in credit risk-weighted assets – The benign and vicious cycles of internal risk models

Reading Time: 4 min.
Reducing variation in credit risk-weighted assets - The benign and vicious cycles of internal risk models: March 2016 wasn't a good month for so-called internal risk models, the quantitative tools constructed by banks for determining such vital numbers as how much buffer capital is needed to protect the savings of their clients. First came the Basel Committee's proposed revision to the operational risk capital framework applicable to banks; next came a similarly fundamental overhaul of what forms of risk quantification will be acceptable for calculating credit risk capital requirements.
Risk Capital for Non-Performing Loans

Reading Time: 2 min.
Risk Capital for Non-Performing Loans: Currently many countries are drowning in bad credits. This visualization from the World Bank shows the current distribution of non-performing loans (NPLs for short) around the world, as a fraction of total outstanding loans. Translated into absolute numbers (according to IMF data), the European NPL book alone stands at around 1 trillion EUR. As the adage goes: a trillion here, a trillion there, and pretty soon you are talking about serious money.
From Big Data, to Linked Data and Linked Models

Reading Time: 5 min.
From Big Data, to Linked Data and Linked Models: The big data problem: As certainly as the sun will set today, the big data explosion will lead to a big clean-up mess. How do we know? It is simply a case of history repeating. We only have to study the still-smouldering last chapter of banking industry history. Currently, banks are portrayed as something akin to the village idiot as far as technology adoption is concerned (and there is certainly a nugget of truth to this).
Seven Heavens of Finance and the Open Risk API

Reading Time: 8 min.
Seven Heavens of Finance and the Open Risk API: Back-to-basics is not salvation: It has become trendy since the financial crisis to wear an anti-complexity hat in matters concerning the shape of the financial system. This is an understandable reaction to the entangled constructions that had sprung to existence in the hyper-leveraged markets of the naughty noughties. Yet sifting through the ruminations and proclamations, one cannot help but get the impression that there is a sort of denial of the complexity that underlies the real economy.
Open Source Risk Data with MongoDB and Python

Reading Time: 3 min.
Open Source Risk Data with MongoDB and Python: Open source software is all the rage these days in IT, and the concept is making rapid inroads in all parts of the enterprise. An earlier comprehensive survey by Gartner, Inc. found that by 2011 more than half of the organizations surveyed had adopted open-source software (OSS) solutions as part of their IT strategy. According to open source advisory firms, this percentage may by now have exceeded the 75% mark.
Open Risk API

Reading Time: 3 min.
Open Risk API: If you work in financial risk management you will most likely recognize where the following sentence is coming from: "One of the most significant lessons learned from the global financial crisis that began in 2007 was that banks' information technology (IT) and data architectures were inadequate to support the broad management of financial risks. This had severe consequences to the banks themselves and to the stability of the financial system as a whole." For those lucky few risk managers not affected by inadequate IT systems, the excerpt is from the Basel Committee's Principles for effective risk data aggregation and risk reporting (2013).