Given the avalanche of information that has become available to businesses over the past several years, data-driven decision-making (DDDM), the practice of basing business decisions on data analysis rather than intuition, has become a critical tool to help organizations reduce risk, avoid costly mistakes and take advantage of opportunities. DDDM can now be used for a variety of purposes, from setting the price of a product to deciding whether a customer should be extended a line of credit to determining how often an industrial machine needs servicing to avoid unplanned downtime.
To illustrate the impact of data-driven decision-making, Douglas Mirsky, director of advisory services at the International Institute for Analytics (IIA), used the example of football helmets: For years, sports organizations went with their gut feeling and believed that the hard, crumple-free design of football helmets would keep players safe from serious head injury. More recent data-driven analysis, however, has shown that is not the case. In fact, the design may have contributed to the number of concussions by creating a false sense of safety, Mirsky said.
Indeed, in the aftermath of lawsuits by injured players, we now know that the perception of safety that comes with the use of hard, seemingly impenetrable helmets has encouraged athletes to play far more aggressively, increasing the impact forces on the field. New data-driven models are being deployed to help create helmets that specifically mitigate the risk of concussions. These have softer exteriors that absorb impact rather than transferring it to the brain.
But data-driven decision-making is not just about the avoidance of risk. If managed correctly, experts say, DDDM can also help businesses gain a competitive edge, better understand customer groups, identify new product categories, increase company revenue and improve efficiency.
Cutting-edge companies like Amazon, Google, Netflix and Uber have exploited data from the get-go, developing advanced models and algorithms to facilitate business decisions, and are leaders in the persistent use of data collection and analysis.
In the case of Amazon, the data it gathers about your buying preferences and the algorithms it has built on that data generate predictions about what books and products you may want to buy next. Google knows which ads might catch your eye and Netflix knows which films you are likely to want to watch, while Uber, thanks to its monitoring algorithms, is at the ready to pick you up wherever you might be.
Facilitated by lower processing and data storage costs coupled with advances in analytics, data-related technology is more accessible than ever before. Platforms such as Hadoop, an open-source framework for the distributed storage and processing of extremely large data sets, as well as technology and services from the likes of Amazon Web Services, Cloudera, Domo, Datameer, Google, IBM, Microsoft and SAS, have helped to advance the DDDM trend, as have data visualization tools such as Sisense, Tibco’s Spotfire, Tableau and VizaData.
“We see three classes of technology players in data-driven decision-making,” said Gary Angel, founder of Digital Mortar, a retail data analysis company. “There are firms that help you gather and manage all the data; firms that help you integrate it and store it in the cloud; and front-end technology that can help you to visualize all that data.”
Ultimately, the effort can also help to put a business on the path toward the wider use of predictive analytics and automation systems, including machine learning and artificial intelligence, which could lead to faster and more efficient decision-making.
But experts caution that we are still in the early stages of the development and use of these tools. If not developed with clear goals in mind, regularly monitored and properly updated, data-driven decision-making and its attendant models and algorithms can produce poor, unacceptable or even disastrous results.
Failures of Data
The 2008 financial crisis offers an example of the drawbacks to relying too heavily on data models. “There were companies engaged in mortgage lending who had hard-coded into their models continuous home price increases of 4% a year for as far as the eye could see,” said Kevin Buehler, co-founder of McKinsey’s global risk practice and leader of its risk advanced analytics group. “Clearly, those models did not perform well when home prices fell.”
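Buehler’s example is easy to render in miniature. The sketch below, a hypothetical illustration in Python with invented loan figures, hard-codes the same kind of perpetual-growth assumption and then shows what happens when it is stressed:

```python
# Hypothetical sketch of a hard-coded growth assumption baked into a
# collateral model. All figures are invented for illustration.
def projected_collateral_value(initial_value, years, annual_growth=0.04):
    """Project a home's value assuming a fixed annual appreciation rate."""
    return initial_value * (1 + annual_growth) ** years

home_value = 300_000  # appraised value backing a hypothetical loan

# Under the baked-in 4% assumption, collateral only ever grows:
print(projected_collateral_value(home_value, 5))          # ~364,996

# Stress the assumption with a 10% annual decline, as seen in parts
# of the market after 2008, and the picture reverses sharply:
print(projected_collateral_value(home_value, 5, -0.10))   # ~177,147
```

The point is not the arithmetic but the design flaw: the growth rate is an input that should be varied across scenarios, not a constant buried in the code.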
Speaking at a gathering of analytics experts at Columbia University in November 2016, Emanuel Derman, a professor of financial engineering and author of the book Models Behaving Badly: Why Confusing Illusion with Reality Can Lead to Disaster on Wall Street and in Life, also cautioned against over-reliance on models. “None of these models ever describe the world the way it really is,” Derman said. “You have to keep in mind, they don’t always work like iPhones. Equations and models are a kind of idolatry—it’s dangerous to put too much faith in them.”
Data scientist Cathy O’Neil, author of the book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, has reported on the ever-growing prevalence of what she considers to be toxic or poorly designed models and algorithms that lack transparency and “are often taken as truth rather than an indication.” These models, in effect, reduce the sum total of an individual’s worth to a score that can dictate access to schools, jobs and loans, with little or no recourse.
Embedded in these algorithms are a host of assumptions, some of which encode human bias, misunderstandings and prejudice into software systems, O’Neil said.
For example, in her book, she explains how a poorly designed algorithm might view a bad credit score as a proxy for one’s job qualifications. Under such a system, when you apply for a job, “there’s an excellent chance that a missed credit card payment or late fees on student loans could be working against you,” she said.
O’Neil also notes that judges who determine criminal sentencing have turned increasingly to predictive algorithms that indicate the odds of recidivism. Over time, however, such tools and the questionnaire data they employ have proven biased against minorities and the poor.
In addition, platforms like Facebook have incorporated toxic algorithms, resulting in the distribution of fake news to millions of users during the 2016 U.S. presidential election. Faced with consumer complaints, Facebook has since revised its news feed algorithm and says it now works with third-party fact-checkers to help improve the news distribution process.
But O’Neil remains troubled. In many instances, she believes, “the people who build these algorithms are very rarely worried about the side effects or unintended consequences or destructive feedback loops engendered by these algorithms,” as evidenced by their failure to regularly monitor, update and improve these models.
Inherent Risks
So how can businesses and organizations address the challenge of data-driven decision-making and realize the benefits, without entering into the realm of toxic models?
First and foremost, firms need to acknowledge that there are risks associated with the effort. According to Judah Phillips, founder of data analytics consulting firm Smart Current and author of the book Building a Digital Analytics Organization, many mistakes can occur when pursuing data-driven decision-making.
These include an inability to change the mindset of the business to value data over gut instinct and domain experts, a lack of clearly identifiable business goals and clear-cut questions to guide the data-driven effort, and an inability to tie data analysis output to actual business outcomes that can then be financially measured. “That is a huge place where a lot of companies stumble—they do not always link their data efforts back to actual business value,” Phillips said.
Additional risks include the failure to guard against privacy violations and data breaches; a lack of governance around the data a firm collects, uses and stores; the risk of buying the wrong technology or deploying it incorrectly or incompletely; and the risk of hiring the wrong people for data science and analysis.
A final risk, according to Phillips, is failure to effectively communicate and act on the findings unearthed by data-driven tools and decision-making capabilities, sometimes due to the inability to determine what modes of communication are best suited to a given CEO or management team. For example, does management prefer visualization, explanatory or story-telling tools? This must be considered by the data science team in order for any effort to be effective.
Buehler pointed out that all DDDM efforts are dependent on the quality and limitations of the data used. If, for example, you have credit histories that go back to 2010, as many online lenders do, but none that date before the 2008 financial crisis, these models will not perform well in times of severe stress.
DDDM models are also vulnerable to poor design, he said, such as overfitting, spurious correlations and the identification of inappropriate criteria. Overfitting, for example, occurs when there are too many independent variables attempting to explain too few data points, resulting in a model that performs poorly over time.
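To make overfitting concrete, here is a minimal sketch in Python using scikit-learn. The data is synthetic, and the degree-9 polynomial stands in for a model with too many variables for the ten available data points:

```python
# Minimal overfitting demonstration on synthetic data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Ten noisy observations of a simple linear trend.
x_train = np.linspace(0, 1, 10).reshape(-1, 1)
y_train = 2 * x_train.ravel() + rng.normal(0, 0.2, 10)

# Fresh data from the same process, used to test generalization.
x_test = np.linspace(0, 1, 50).reshape(-1, 1)
y_test = 2 * x_test.ravel() + rng.normal(0, 0.2, 50)

for degree in (1, 9):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(x_train))
    test_err = mean_squared_error(y_test, model.predict(x_test))
    # The degree-9 model fits the training points almost perfectly,
    # but typically does worse on new data: the signature of overfitting.
    print(f"degree={degree}: train MSE={train_err:.4f}, test MSE={test_err:.4f}")
```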
Spurious correlations can result when two variables in a model appear related, but there is no basis in fact for their relationship. For example, for some time, Super Bowl results appeared to predict stock market performance, although there is in fact no causal relationship between the two.
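The Super Bowl anecdote has a simple statistical analogue. In the sketch below, two random walks generated independently in Python will often show a sizable correlation even though, by construction, neither has anything to do with the other:

```python
# Two independent random walks with no causal link between them.
import numpy as np

rng = np.random.default_rng(42)
series_a = np.cumsum(rng.normal(size=500))
series_b = np.cumsum(rng.normal(size=500))

# Trending series frequently appear correlated purely by chance.
corr = np.corrcoef(series_a, series_b)[0, 1]
print(f"correlation of levels: {corr:.2f}")

# Correlating period-to-period changes instead strips out the shared
# trend illusion; the result tends toward zero.
corr_diff = np.corrcoef(np.diff(series_a), np.diff(series_b))[0, 1]
print(f"correlation of changes: {corr_diff:.2f}")
```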
The identification of inappropriate criteria happens when race, gender, religion or other discriminatory variables are treated as predictive in modeling efforts. Even when such variables are excluded outright, however, related variables, such as zip codes or certain product purchases, can serve as proxies for categories that are inappropriate to track and can be inadvertently used in models.
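One simple audit for this proxy problem is to check how well a seemingly neutral feature predicts membership in a protected group. The Python sketch below uses a fabricated population in which zip code and group membership are tightly linked, as residential segregation can produce in practice:

```python
# Hypothetical proxy-variable audit on fabricated data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 10_000

# Synthetic population: group membership determines which zip codes
# a person can appear in, making zip code a near-perfect proxy.
protected = rng.integers(0, 2, n)
zip_code = np.where(protected == 1,
                    rng.choice([10001, 10002], n),
                    rng.choice([10003, 10004], n))

df = pd.DataFrame({"protected": protected, "zip_code": zip_code})

# How well does zip code alone predict group membership?
rate_by_zip = df.groupby("zip_code")["protected"].mean()
print(rate_by_zip)
# If membership rates differ sharply across zip codes, a model that
# uses zip code is effectively using the protected attribute.
```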
Banks in particular, Buehler said, are very aware of these potential flaws and independently validate their models to assure that they are conceptually sound, statistically valid and do not facilitate redlining or other discriminatory practices.
Fixing the Problem
In order to limit the risks and challenges associated with DDDM efforts, it is necessary to have a clear understanding of what you are trying to accomplish. “The ability to be very targeted about your problem is important when pursuing data-driven decision-making, as is making sure you aim to solve a problem that matters to the business,” said Kimberly Nevala, director of the best practices team at SAS, a statistics and analytics software provider.
Companies should have processes in place to regularly update and validate the algorithms’ conclusions. Your algorithm may aim to arrive at insights, Nevala said, “but you have to validate it and ask, do its findings reasonably make sense, are its findings appropriate, or are there inherent biases in the data that you have to account for or allocate against?”
Buehler agreed. “You need to make sure that, when you develop a sophisticated model, you don’t embed in it the same biases and prejudices that exist in our heads, inside our own neural networks. Is it making recommendations and decisions on appropriate criteria and drawing appropriate conclusions?” If not, revision is required.
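One concrete form such validation can take is a disparate-impact check on a model’s decisions. The sketch below applies the common “four-fifths” rule of thumb to synthetic approval data; the threshold is a screening heuristic, not a legal determination:

```python
# Hypothetical disparate-impact screen on synthetic model decisions.
import pandas as pd

results = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "approved": [1] * 60 + [0] * 40 + [1] * 35 + [0] * 65,
})

# Approval rate per group, and the ratio of the lowest to the highest.
approval_rates = results.groupby("group")["approved"].mean()
ratio = approval_rates.min() / approval_rates.max()

print(approval_rates)
print(f"adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:
    # Below the four-fifths threshold: flag the model for review.
    print("Potential disparate impact: review model criteria.")
```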
Any successful DDDM effort involves a willingness to regularly test and experiment with the structure of models and their output. “You always have to be willing to understand the circumstances under which a model was built and have the imagination to ask if important things changed relative to the problem set and history,” Angel said. This helps prevent you from ending up with models that are off the mark or out of date, as we saw around the time of the 2008 financial crash.
For best results, some suggest that CEOs and management obtain some training in statistics and analytics to ensure good communication with data science teams. Others recommend the use of a “data science translator” to work alongside management to help them understand the language of data science.
Still others have opted for training the data scientists in the language of the business at hand. In many cases, however, “the data scientists have to be willing to be teachers for management, at some level,” said John Kamensky, senior fellow at the IBM Center for the Business of Government. “You have to be able to impart and teach, not merely show the data.”
Kamensky also believes that CEOs and management teams need to be curious, ask a lot of questions about an algorithm’s conclusions, and encourage an iterative approach so that DDDM efforts and algorithms are regularly monitored, updated and improved over time. “You always want to ask questions that bring greater understanding like, why are the numbers going in that direction and what do we need to do to change direction?” he said.
Given the growing prevalence and impact of algorithms in our everyday lives, some experts believe we need to develop guidelines that go beyond best business practices. For example, Derman has proposed a kind of Hippocratic Oath for data scientists that would require them to explain an algorithm’s potential for misuse and misrepresentation and discourage assumptions about its absolute accuracy.
In her book, O’Neil advocated for embedding morality into many of our algorithms and for regulations to protect individuals from toxic algorithm outcomes. “Right now we don’t have the laws that we need to handle the big data era,” she said. “And the regulators don’t have the technology chops to investigate these algorithms.”
Still others have called for independent algorithm auditors. Buehler said that, thanks to the mandate outlined in the Federal Reserve’s supervisory letter SR 11-7, “Guidance on Model Risk Management,” independent model validation has become a business standard in the banking industry. “It requires that banking models meet fairly strict criteria, that the statistical technology used is robust, and that the use of the model is appropriate,” he said, adding that it makes sense to apply the same standards to other industry sectors.
Buehler explained, “Firms outside of banking are starting to work with independent model auditors—people who can function as an independent check, separate from model developers,” who can play a skeptical role in assessing quality and making sure that models are “fit for purpose.” The impetus is the realization that a series of bad decisions made by these models can pose a serious risk to the business, especially when the models are automated and decisions occur rapidly at scale.
Finally, some experts highlight the importance of melding data-driven decision-making with the gut instinct and domain expertise that businesses have always relied on to ensure success.
Vishal Singh, who teaches and researches data-driven business strategies at New York University’s Stern School of Business, said that consumers of data-driven decision-making always have to keep in mind that findings “are not some objective truth,” but rather simply a tool to help them make better decisions.
“You should never throw away your gut instinct,” he said. “Keep in mind that, while the numbers and models are there to guide you, at the end of the day, you always need to use your own judgment.”