Connecting the Dots: Economic Networks as Property Graphs We develop a quantitative framework that approaches economic networks from the point of view of contractual relationships between agents (and the interdependencies those generate). Agent properties, transactions and contracts are represented in the context of a property graph. A typical use case for the proposed framework is the study of credit networks. You can find the white paper here: (OpenRiskWP08_131219)
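Conceptually, a property graph attaches arbitrary key-value properties to both nodes (agents) and edges (contracts). A minimal Python sketch of the idea, with plain dicts standing in for a graph store (the agent names and fields below are illustrative only, not the schema proposed in the white paper):

```python
# Nodes: economic agents, each with a property map
agents = {
    "bank_a": {"type": "bank", "capital": 5_000_000},
    "firm_x": {"type": "corporate", "sector": "retail"},
}

# Edges: contractual relationships, also carrying properties
contracts = [
    {"from": "bank_a", "to": "firm_x",
     "type": "loan", "notional": 250_000, "rate": 0.04},
]

def total_exposure(node, contracts):
    """Total notional the given agent has lent out."""
    return sum(c["notional"] for c in contracts if c["from"] == node)

print(total_exposure("bank_a", contracts))  # 250000
```

In practice a graph database or a graph library would play the role of the dicts here; the point is only that both agents and contracts carry their own properties.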
The motivation for federated credit risk models Federated learning is a machine learning technique that is receiving increased attention in diverse data-driven application domains with data privacy concerns. The essence of the concept is to train algorithms across decentralized servers, each holding its own local data samples, without the need to exchange potentially sensitive information. The construction of a common model is achieved through the exchange of derived data (gradients, parameters, weights, etc.).
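As a toy illustration of the essence of the concept (not a production federated-learning protocol), each party computes a local estimate on its own data and only derived quantities, here a parameter estimate and a sample count, are shared with the aggregator; the raw samples never leave the local server:

```python
# Hypothetical local data held by two institutions (e.g. observed default rates)
local_data = {
    "bank_a": [0.02, 0.03, 0.05],
    "bank_b": [0.01, 0.04],
}

def local_update(samples):
    """Local 'training': return a derived estimate and the sample count."""
    return sum(samples) / len(samples), len(samples)

def federated_average(updates):
    """Server-side aggregation: sample-weighted mean of the local estimates."""
    total = sum(n for _, n in updates)
    return sum(est * n for est, n in updates) / total

updates = [local_update(samples) for samples in local_data.values()]
global_estimate = federated_average(updates)  # ~0.03 for the data above
```

Real federated learning iterates this exchange over many rounds with model gradients or weights, but the privacy logic is the same: only derived data crosses institutional boundaries.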
The challenge with historical credit data Historical credit data are vital for a host of credit portfolio management activities: from assessing the performance of different types of credits all the way to constructing sophisticated credit risk models. Such is the importance of data inputs that, for risk models impacting significant decision making or external reporting, there are even prescribed minimum requirements for the type and quality of the necessary historical credit data.
First Release: OpenCPM NPL Database Further building out the OpenCPM set of tools, we took the European Banking Authority’s recommended Non-Performing Loan templates and created an #opensource, production-grade database for capturing NPL portfolio data sets. Motivation for building an open source database based on the EBA’s Standardized NPL Templates: in a recent insightful piece, “Overcoming non-performing loan market failures with transaction platforms”, Fell et al. dug deeply into the market failures that perpetuate the NPL problem.
In the current overheated environment, the working definition of a Data Scientist seems to be: doing whatever it takes to get the job done in a digital #tech domain that we have long neglected but which is now coming back to haunt us! That is a nice sense of urgency while it lasts, but it is not a serious job description for the future. You will always find entrepreneurial institutions offering degrees and certifications on the latest trending hashtag.
Are you getting a bit tired with all the machine learning ballyhoo? You can blame it all on a German mathematician(*), Carl Friedrich Gauss, who started the futuristic “mega-trend” back in 1809: He showed us how to “train” a straight line to pass nicely through a cloud of unruly, scattered data points. To find, in effect, a path of least embarrassment. Two+ centuries later it is still a profitable enterprise to invent elaborate variations of that theme, now going under the more exalted name of “supervised learning”, which may or may not include “deep learning”.
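Gauss’s trick, ordinary least squares, still fits in a few lines: the slope and intercept that let a straight line pass through a cloud of points with the least total embarrassment (sum of squared residuals) have a closed form. The data values below are made up for illustration:

```python
xs = [0, 1, 2, 3, 4]
ys = [1.0, 2.9, 5.1, 7.0, 9.1]  # roughly y = 2x + 1, plus noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope that minimises the sum of squared residuals
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
```

Every “supervised learning” method, however deep, is an elaboration of this move: pick a family of functions, then tune its parameters to minimise some measure of embarrassment on the training data.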
Guiding principles for a viable open source operational risk model (OSORM) Such a framework:
- Must avoid formulaic inclusion of meaningless risk event types (e.g., legal risk created by the firm’s own management decisions) or any risks where the nature and state of current knowledge does not support meaningful quantification; such potential risks would be managed outside the framework.
- Must employ a bottom-up design that addresses the risk characteristics of simpler business units first and (if needed) creates a combined profile for a more complex business in a building-block fashion.
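The building-block principle can be sketched as follows. The scenario-by-scenario additive aggregation shown here is a simplifying assumption for illustration; a real OSORM framework would also have to model dependencies between business units:

```python
# Per-unit loss figures across a common set of scenarios (illustrative numbers)
unit_losses = {
    "unit_simple_1": [0.0, 10.0, 50.0],
    "unit_simple_2": [5.0, 0.0, 20.0],
}

# Combined profile for the more complex business:
# scenario-by-scenario sum across the simpler building blocks
combined = [sum(losses) for losses in zip(*unit_losses.values())]
print(combined)  # [5.0, 10.0, 70.0]
```

The appeal of the bottom-up design is precisely that each simple unit can be characterised and validated on its own before any combined profile is attempted.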
Reducing variation in credit risk-weighted assets – The benign and vicious cycles of internal risk models March 2016 wasn’t a good month for so-called “internal risk models”, the quantitative tools constructed by banks to determine such vital numbers as how much buffer capital is needed to protect the savings of their clients. First came the Basel Committee’s proposed revision to the operational risk capital framework applicable to banks; next came a similarly fundamental overhaul of what form of risk quantification will be acceptable for calculating credit risk capital requirements.
Risk Capital for Non-Performing Loans Currently many countries are drowning in bad credits. This visualization from the World Bank shows the current distribution of non-performing loans (NPLs for short) around the world, as a fraction of total outstanding loans. Translated into absolute numbers (according to IMF data), the European NPL book alone stands at around 1 trillion EUR. As the adage goes: a trillion here, a trillion there, and pretty soon you are talking about serious money…
From Big Data, to Linked Data and Linked Models The big data problem: “As certainly as the sun will set today, the big data explosion will lead to a big clean-up mess” How do we know? We only have to study the still smouldering last chapter of banking industry history. Currently banks are portrayed as something akin to the village idiot as far as technology adoption is concerned, and there is certainly a nugget of truth to this.
Seven Heavens of Finance and the Open Risk API Back-to-basics is not salvation It has become trendy since the financial crisis to be wearing an “anti-complexity” hat in matters concerning the shape of the financial system. This is an understandable reaction to the entangled constructions that had sprung into existence in the hyper-leveraged markets of the “naughty noughties”. Yet sifting through the ruminations and proclamations one cannot help but get the impression that there is a sort of denial of the complexity that underlies the real economy.
Open Source Risk Data with MongoDB and Python Open source software is all the rage these days in IT and the concept is making rapid inroads in all parts of the enterprise. An earlier comprehensive survey by Gartner, Inc. found that by 2011 more than half of the organizations surveyed had adopted open-source software (OSS) solutions as part of their IT strategy. This percentage may by now have exceeded the 75% mark, according to open source advisory firms.
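Part of why MongoDB pairs naturally with Python for risk data: a record is simply a JSON-like document, which maps directly onto a Python dict. A minimal sketch (the field names below are illustrative, not an actual Open Risk schema):

```python
import json

# A risk data record as a MongoDB-style document
loan_record = {
    "loan_id": "L-0001",
    "borrower": {"name": "Example Corp", "sector": "manufacturing"},
    "notional": 100_000,
    "currency": "EUR",
    "status": "performing",
}

# MongoDB stores BSON, which round-trips naturally through JSON
serialized = json.dumps(loan_record)

# With pymongo and a running server, persisting the record would look like:
#   from pymongo import MongoClient
#   db = MongoClient()["risk_data"]
#   db["loans"].insert_one(loan_record)
```

Nested sub-documents (like the borrower above) are what make the document model a comfortable fit for heterogeneous portfolio data, compared with a rigid relational schema.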
Open Risk API If you work in financial risk management you will most likely recognize where the following sentence is coming from: “One of the most significant lessons learned from the global financial crisis that began in 2007 was that banks’ information technology (IT) and data architectures were inadequate to support the broad management of financial risks […] This had severe consequences to the banks themselves and to the stability of the financial system as a whole” For those lucky few risk managers who were not affected by inadequate IT systems, the excerpt is from the Basel Committee’s “Principles for effective risk data aggregation and risk reporting” (2013).
Risk modeling is as much art as it is science. The Zen of Modeling aims to capture the struggle for risk modeling beauty:
- An undocumented risk model is only a computer program
- A risk model that cannot be programmed is only a concept
- A risk model only comes to life with empirical validation
- Correct implementation of an imperfect model is better than wrong implementation of a perfect model
- In complex systems there is always more than one path to a risk model
- There are no persistently true models but there are many persistently wrong models
- Correlation is imperfectly correlated with causation
- Nirvana is the simplest model that is fit for purpose
- Hierarchical systems lead to hierarchical models
A mini course on risk management, its perils and the silver lining When talking about risk management, it is not very clear what we are talking about in broad terms; it gets no clearer when we go into the details; and it is not even clear how best to use the (possibly flawed) insights we produce. Yet that is what we have at this stage, and when life gives us lemons we make lemonade.
Open Source Risk Modeling Manifesto This post is a summary of a presentation given at the 2014 Autumn TopQuants Meeting, also known as the “Open Source Risk Modeling Manifesto”. The dismal state of quantitative risk modeling The current framework of internal risk modeling at financial institutions has had a fatal triple stroke: we saw in quick sequence market risk, operational risk, and credit risk measurement failures, covering practically all business models.
Financial Risk Modelling has suffered enormous setbacks in recent years, with all major strands of modelling (market, credit, operational risk) shown to have debilitating limitations. It is impossible to imagine a modern financial system that does not make extensive use of risk quantification tools, yet rebuilding confidence that these tools are fit for purpose will require significant changes. These changes must improve governance, transparency and quality standards, and in some areas will even require the development of completely new strands of modelling.