This post first appeared on Risk Management Monitor. Read the original article.
On the first day of the RIMS virtual event TechRisk/RiskTech, author and UCLA professor Dr. Ramesh Srinivasan gave a keynote titled “The Opportunities and Downside Risks of Using AI,” touching on the key flashpoints of current technological advancement and what they mean for risk management. He noted that far cheaper data storage and faster computation have allowed risk assessment technology to improve. But with these improvements come serious risks.
Srinivasan provided an overview of where artificial intelligence and machine learning stand, and how companies use these technologies. AI is “already here,” he said, and numerous companies are using the technology, including corporate giants Uber and Airbnb, whose business models depend on AI. He also stressed that AI is not the threat portrayed in movies, and that these portrayals have led to a kind of “generalized AI anxiety,” a fear of robotic takeover or the end of humanity—not a realistic scenario.
However, the algorithms that support these businesses and govern many users’ online activities collect so much personal information that they could end up functioning like the “pre-cogs” from Minority Report, predicting users’ future behavior. Companies are using these algorithms to make decisions about users, sometimes based on data sets that are skewed to reflect the biases of the people who collected that data in the first place.
Often, technology companies will sell products with little transparency into the algorithms and data sets those products are built around. To avoid AI and machine learning products built with implicit bias, Srinivasan suggested A/B testing new products: using them on a trial or short-term basis, or on a small subset of users or data, to see what effect they have.
Srinivasan recommended that when deciding which AI/machine learning technologies their companies should use, risk professionals map out what technology the company is using, weigh the benefits against the potential risks, and examine those risks thoroughly, including the short- and long-term threats they pose to the organization.
Specific risks of AI (as companies currently use it) that risk professionals should consider include:
- Economic risk in the form of the gig economy, which, while making business more efficient, also leaves workers with unsustainable income
- Increased automation in the form of the internet of things, driverless vehicles, wearable tech, and other ways of replacing workers with machines risks making labor obsolete.
- Users receive no benefit when people and companies use and profit from their data.
- New technologies also have immense environmental impact, including the amount of power that cryptocurrencies require and the health risks of electronic waste.
- Issues like cyberwarfare, intellectual property theft and disinformation are all exacerbated as these technologies advance.
- The bias inherent in AI/machine learning has real-world impacts. For example, court sentencing often relies on biased predictive algorithms, as do policing, health care facilities (AI giving cancer treatment recommendations, for example) and business functions like hiring.
Despite these potential pitfalls, Srinivasan was optimistic, noting that risk professionals “can guide this digital world as much as it guides you,” and that “AI can serve us all.”
RIMS TechRisk/RiskTech continues today, with Carey Anne Nadeau, Co-Founder and Co-CEO of Loop Insurance, and her keynote at 11:15 am EST, “Converting Customers into Risk Managers.”
Here’s what’s also coming up:
- Emerging Risk: AI Bias
- Connected & Protected
- Tips for Navigating the Cyber Market
- Taking on Rising Temps: Tools and Techniques to Manage Extreme Weather Risks for Workers
- Using Telematics to Give a Total Risk Picture
You can register and access the virtual event here.