
Implications of Financial Artificial Intelligence

Author(s)

Tom Lin
Professor of Law at Temple University, Beasley School of Law.


Artificial intelligence has had a profound impact on finance. In the span of a few decades, it has made finance faster, more accessible, more profitable, and in many ways more efficient. Despite these significant benefits, financial artificial intelligence also presents serious risks and implications for law, business, and society.

My recent article, ‘Artificial Intelligence, Finance, and the Law’, published in the Fordham Law Review, offers a broad examination of those inherent risks and larger implications. As detailed in the article, financial artificial intelligence poses four broad categories of risk relating to coding limitations, data bias, virtual threats, and systemic risks. In particular, financial artificial intelligence is limited in its ability to code an uncertain world, susceptible to discriminatory uses of data, vulnerable to cyberattacks, and capable of creating new forms of systemic risk. Beyond these inherent risks, financial artificial intelligence also has broader implications for law, business, and society in areas relating to financial cybersecurity, competition, and societal impact.

First, financial artificial intelligence will have significant implications for financial cybersecurity. Today, many of the more sophisticated attempts to manipulate and disrupt financial markets take place exclusively in cyberspace and are frequently aimed at artificial intelligence systems. Because financial artificial intelligence relies on interconnected, complex technological systems, safeguarding those disparate private systems from threats and attacks is critical to preserving the integrity of the financial system. High-speed, self-executing systems based on financial artificial intelligence are ripe targets for bad actors. As such, private and public institutions throughout the world must act with greater speed and coordination to protect against the looming threats of cyberattacks, manipulation, and other bad acts targeting systems based on financial artificial intelligence.

Second, the rise of financial artificial intelligence will have significant implications for competition within the financial industry and the greater economy. Because artificial intelligence is highly dependent on large data sets for its insights, firms with large, captive sets of data built into their structural platforms may gain a durable competitive advantage that skews the competitive landscape in finance. The ongoing debates and investigations concerning competition and antitrust among large technology companies like Google, Amazon, and Facebook may soon spill over into the financial industry, whose largest institutions are functionally large technology companies similarly powered by vast troves of data. Because the technology and data underlying much of financial artificial intelligence require significant investments and favor the data-rich, there is appropriate concern that early movers and better-resourced institutions would acquire durable competitive advantages that ultimately stifle innovation, eliminate meaningful competition, and harm consumer welfare.

Third, the rise of financial artificial intelligence will have profound implications for society, on both an individual and a collective basis. On an individual basis, the rise of artificial intelligence in finance raises important questions about the role of humans in finance. Artificial intelligence has gradually, then rapidly, displaced much human labor and effort in finance, and understandably so. Smart machines driven by artificial intelligence, with perfect memory and recall, can process large volumes of data faster, cheaper, and more accurately than humans in most circumstances, and they do not tire with more work or grow irrational with ‘animal spirits’ the way humans normally do. Individuals working in finance will need to evolve and adapt in this brave new financial world, and that transition may not be easy or successful for many of them.

On a collective basis, as finance continues to adopt new technologies like artificial intelligence, we can sometimes lose sight of the fact that finance at its core (behind and beyond all the high-tech gadgetry, complex code, and seas of data) is driven by real people and real social purposes. Finance is ultimately a tool of social utility and connection that would lose much of its meaning without the effects it has on people and society. In discussing matters of finance, scholars, executives, regulators, and policymakers frequently forget that people and communities are at the heart of finance and markets. As such, one of the critical responsibilities for executives, policymakers, and regulators in the years ahead is how best to update a twentieth-century financial system to account for twenty-first-century financial advances like artificial intelligence without losing focus on the human-oriented missions of finance and democratic values like equal access and transparency.

Ultimately, the rise and growth of artificial intelligence in finance will likely be one of the most significant developments for law, business, and society in the coming decades. While we should appreciate the potential and power of financial artificial intelligence, we should also be aware of its inherent risks and broader implications. As we try to build better and smarter financial artificial intelligence, we must grow more cognizant of the ways it can harm and hinder both individual and societal progress.

Tom C.W. Lin is Professor of Law at Temple University, Beasley School of Law.

 
