Category Archives: Data Governance & Risk

Data Quality Expert Panel at DGIQ 2015 in San Diego

I really enjoyed the Data Governance and Information Quality conference that took place in mid-June in San Diego. There were many great talks; a highlight was Anthony Algmin, who talked about his first 100 days as the new Chief Data Officer at the Chicago Transit Authority. A great keynote was given by Scott Hallworth about the data quality journey at Capital One. Nancy Fessatidis of SAP gave a keynote on an emerging topic that gets a lot of attention these days: the ethics and morality of big data. The panel on controversial issues in data governance was a fitting ending to the conference.

During the conference, I gave a half-day tutorial on “setting up a data quality risk management program at your organization” to a very active and engaged audience. Attending many data conferences, I have observed a rising level of interest over the years in applying the risk paradigm to data quality, especially from regulated industries like banking and insurance.

I also participated in a very interesting panel discussion on data quality best practices with Michael Scofield, Peter Aiken, David Loshin and John Talburt, in which I highlighted the role of business-outcome-focused data quality metrics. You can watch the video of the panel discussion below.

Big Data – Big Risk: Why Companies Need Total Information Risk Management

In our book “Total Information Risk Management: Maximizing the Value of Data and Information Assets”, we argue that data has become the major source of risk in most industries. Never before in human history could data create so many opportunities, or do so much harm to an organization's success, as today. Data has quickly penetrated every corner of our society. We use data in higher volumes, of higher variety and velocity, and from many new sources like social media and embedded sensors in real time to drive a majority of our decisions. It has become the most important asset of the 21st century, sometimes referred to as the “new oil”. Yet the rising importance of data and information assets also makes them the major source of risk for most companies. Oil can catch fire. When companies finally start to understand the dangers sleeping in poor data and information assets, often accumulated over decades and combined with new data from a variety of untrustworthy sources, it is often already too late: massive misinvestments, huge regulatory fines and permanent brand damage are only some of the consequences that cannot easily be undone once they happen.

Total Information Risk Management is a step-by-step guide for managers to identify and quantify the business impact of poor data on business process performance and organizational success, and to mitigate such risks. Solid measurement and quantification of data and information risk enables companies to create real accountability and to treat data and information assets seriously and more responsibly. It also provides a strong basis for building a convincing business case for data quality improvement.

A very typical situation is, for example, a manager who asks: “How many new sellers do I need to hire to meet my targets?” The business analysts would come back after a while with the precise answer: “Our analysis reveals that 3520 new sellers are needed,” which would lead the decision makers to reply: “OK, this is interesting, well done; 3520 sounds very reasonable. But how reliable is the data?” The business analysts would assure them that the analysis was rigorously conducted using data from a system that is considered a trusted source by most of the departments. The leadership team, fully satisfied, would announce the new targets to the rest of the organization: “We need to recruit 3520 new sellers in the next quarter. This is grounded in a rigorous analysis by our business analysts!”

But what if these numbers are wrong? Who would have time to verify that the methodology and the data used to calculate the results are indeed trustworthy and of high quality? And who would dare to question such “hard” facts? If something does go wrong, management can always point to the business analysts. And the business analysts can easily blame the data behind the analysis, the general complexity of the problem, and other external factors that influence the outcome of the decision.

Millions of the most important decisions made by companies are executed exactly this way – every day. And an incredible number of these decisions are misled by poor data and sub-optimal analysis, leading to immense costs and risks for these organizations. There is a general lack of accountability; this is why huge risks are created in companies day by day – and why nobody addresses the true root causes of these problems. The formula is simple: bad data leads to bad analysis, which leads to bad decisions, which leads to risks in operations and strategy.

So how can risks from poor data be prevented? Companies can only protect themselves and make data and information reliable assets if they start measuring the risks created by not having the right data and information of sufficiently high quality. Assessing the risk caused by poor data and information assets makes potential data and information risks tangible and visible to everyone – impossible for the business side of the organization to ignore. Risk mitigation can then address the causes of the data and information risks with a targeted mix of technologies, transformation of the business environment and suitable information governance.
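To make this concrete, here is a minimal illustrative sketch of the kind of quantification the measurement step implies – not the book's actual methodology, and using entirely made-up numbers – estimating the expected annual loss from a single data quality issue as error rate × affected records × cost per error:

```python
def expected_annual_loss(error_rate: float,
                         records_per_year: int,
                         cost_per_error: float) -> float:
    """Expected yearly cost of one data quality issue:
    the share of records that are wrong, times how many records flow
    through the process per year, times the cost of each resulting error."""
    return error_rate * records_per_year * cost_per_error

# Hypothetical example: 2% of 500,000 customer address records are wrong,
# and each resulting misdelivery costs roughly $12 to correct.
loss = expected_annual_loss(0.02, 500_000, 12.0)
print(f"Expected annual loss: ${loss:,.0f}")  # Expected annual loss: $120,000
```

Even a rough estimate like this turns an invisible data issue into a dollar figure that can be compared against the cost of fixing it – which is exactly what makes the risk impossible for the business side to ignore.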

Leading companies are not the ones that simply use data to drive decision making, but those that ensure the risks hidden behind the data are clearly understood, measured and managed proactively.