The AI Financial Crisis Theory Demystified: How To Create Resilient Global Ecosystems

Has sci-fi fantasy finally become reality?

Once upon a time, the fictional idea of robots and humanoids propelling global crises would have been a far-fetched fascination reserved for science fiction aficionados. Premises that were once pure figments of imagination and creative genius, popularized by blockbuster favorites such as The Matrix and I, Robot, now seem somewhat plausible.

With artificial intelligence taking the world by storm, humans are rapidly conforming to new technological norms shaped by logistic regression learning algorithms, deep neural network architectures, and natural language processing.

Since the introduction of models like ChatGPT for public use, generative AI has been all the rage due to its remarkable capabilities. With just a simple prompt, models can generate humanlike outputs within seconds in the form of text, music, videos, and more, greatly enhancing productivity and creativity among many users.

The economy at large is also set to reap massive benefits from generative AI. For example, the banking industry can expect a significant revenue impact, with $200 billion to $340 billion in added value if use cases across customer operations, marketing and sales, software engineering, and R&D were fully implemented, according to McKinsey & Company.

But just as we’ve seen in TV and film, with great innovation comes risk.

Gary Gensler To The New York Times: “A Financial Crash Is More Likely”

In a paper he co-authored, Gary Gensler, the U.S. Securities and Exchange Commission chairman, laid out his case that a financial crash may lie ahead, with deep learning, a subfield of AI, as the culprit.

“Mr. Gensler expects that the United States will most likely end up with two or three foundational A.I. models. This will deepen interconnections across the economic system, making a financial crash more likely because when one model or data set becomes central, it increases “herding” behavior, meaning that everyone will rely on the same information and respond similarly”, according to the New York Times.

Mr. Gensler further hypothesizes in his paper that since financial crises can germinate in a single sector, market, or region, a systemic risk that takes root in even one area can eventually cascade into fragility across global ecosystems.

What Are Foundation Models And Why Do They Matter?

At the root of a generative AI system is a foundation model.

A foundation model is a model trained on a curated dataset, typically through unsupervised learning, with data drawn from many sources such as social media footprints, spending patterns, and IoT sensors in mobile devices. Data can even be drawn from cameras, appliances, and other telematics. Large language models, for instance, are trained on datasets curated from books, with some models such as LLaMA trained on about 170,000 books.

A foundation model’s unique ability to take information learned from one task and apply it to a different task to create a new AI model is referred to as transfer learning. Once transfer learning has taken place, the resulting model can be scaled using GPUs, which handle many computations simultaneously; training an AI model can take hundreds of GPUs such as the Nvidia A100 chip.

However, there is such a thing as too much data, which is why fine-tuning task-specific models is a common approach to transfer learning. As data growth has become exponential over time, foundation models are further trained on target-specific data, and the model becomes “fine-tuned” to perform specific tasks. As a result, the new AI system may inherit problematic biases, because homogenization creates “single points of failure”, according to the Center for Research on Foundation Models and the Stanford Institute for Human-Centered Artificial Intelligence.
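To make the transfer learning and fine-tuning ideas concrete, here is a minimal, hypothetical sketch in PyTorch: a backbone pretrained on a broad dataset is frozen, and a small task-specific head is trained on target data. The model choice, the three-class task, and the hyperparameters are illustrative assumptions, not the recipe behind any particular foundation model.

```python
# Minimal transfer-learning sketch (illustrative only): reuse a model
# pretrained on a broad dataset, freeze its general-purpose layers, and
# fine-tune a small task-specific head on target data. The model, task,
# and hyperparameters are hypothetical placeholders.
import torch
import torch.nn as nn
from torchvision import models

# 1. Load a model pretrained on a large, general dataset (the "foundation").
backbone = models.resnet18(weights="IMAGENET1K_V1")

# 2. Freeze the pretrained weights so the general representations are reused.
for param in backbone.parameters():
    param.requires_grad = False

# 3. Replace the final layer with a new head for the downstream task,
#    e.g. a hypothetical 3-class classifier.
backbone.fc = nn.Linear(backbone.fc.in_features, 3)

# 4. Only the new head is trained on task-specific data ("fine-tuning").
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune_step(images, labels):
    """One gradient step on the task-specific head."""
    optimizer.zero_grad()
    loss = loss_fn(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because every downstream model built this way reuses the same frozen backbone, any flaw or bias learned during pretraining is inherited by all of them, which is precisely the “single point of failure” concern noted above.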

Likewise, machine learning and deep learning also give rise to homogenization, which occurs when a wide range of applications is powered by a single learning algorithm or when the same deep neural network architecture is used for many downstream applications, respectively.

Is An AI-Induced Financial Crisis Likely?

Contrary to Gary Gensler’s statement to the New York Times that “the United States will most likely end up with two or three foundational A.I. models”, the Stanford group benchmarked 30 foundation models, noting that the field is moving so fast that they did not review some of the newer, prominent ones, while Nvidia reports that hundreds of foundation models are now available.

Therefore, rather than focusing on whether the U.S. will most likely end up with only two or three foundation models or even hundreds, the emphasis should instead be placed on “de-risking” AI model deployments to create more resilient global ecosystems by:

  1. Curating diversified and less centralized data sources for foundation models to train on.
  2. Safeguarding models against human manipulation of training data to influence outcomes, determinations, and predictions.
  3. Addressing intrinsic biases and lack of explainability, among other AI ethical concerns and considerations.

Curating Diversified And Less Centralized Datasets

Foundation models are intrinsically characterized by unparalleled levels of homogenization, and nearly all of the latest AI systems are adapted from the same foundation models. On the upside, enhancements made to foundation models can easily scale across all natural language processing applications. Yet this in turn also allows harms, inherited biases, and flawed determinations to propagate across all of those models.

In Meta’s paper “LLaMA: Open and Efficient Foundation Language Models”, the social media giant acknowledges reusing common datasets such as CommonCrawl, C4, Github, Wikipedia, Books3, Gutenberg, ArXiv, and StackExchange that have been leveraged to train other large language models.

“Foundation models keep getting larger and more complex, so rather than building new models from scratch, many businesses are instead customizing pretrained foundation models to turbocharge their AI journeys”, according to Nvidia.

To Mr. Gensler’s point, as deep learning becomes more broadly adopted in financial and supply chain ecosystems, systemic risk can propagate along the data pathway. “Models built on the same datasets are likely to generate highly correlated predictions that proceed in lockstep, causing crowding and herding”. In his paper, Mr. Gensler points out that the tendency to rely on concentrated datasets and data aggregators increases exposure to risks that lead to financial instability, because this concentration of data adds to uniformity and monocultures.

For example, in Singapore’s Model AI Governance Framework, the country warns that increased overall market volatility can result from herding behavior when the widespread adoption of a stock recommendation algorithm nudges a sufficient number of individuals to make similar decisions at once.
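A toy simulation helps illustrate why that herding worry matters: when many agents trade on the same model signal, their orders line up and net order flow swings wildly, whereas diverse signals largely offset one another. The agent count, signal model, and volatility proxy below are hypothetical and meant for intuition only; this is not a market model.

```python
# Toy illustration of herding: agents trading on one shared signal produce
# highly correlated order flow; agents with diverse signals partly offset
# each other. All quantities here are hypothetical, for intuition only.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_days = 1000, 250

# Case 1: every agent reacts to the SAME model signal (plus small noise).
shared_signal = rng.normal(size=n_days)
herd_trades = np.sign(shared_signal + 0.1 * rng.normal(size=(n_agents, n_days)))

# Case 2: each agent reacts to its OWN independent signal.
diverse_trades = np.sign(rng.normal(size=(n_agents, n_days)))

# Net order flow per day; its dispersion is a crude proxy for price pressure.
herd_flow = herd_trades.sum(axis=0)
diverse_flow = diverse_trades.sum(axis=0)

print("std of net flow, shared signal  :", herd_flow.std())    # large swings
print("std of net flow, diverse signals:", diverse_flow.std())  # much smaller
```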

“AI may heighten financial fragility as it could promote herding with individual actors making similar decisions because they are getting the same signal from a base model or data aggregator. Thus, AI may play a central role in the after-action reports of a future financial crisis”, Mr. Gensler also said in his remarks before the National Press Club.

In this regard, curating diversified and less centralized data sources for foundation models to train on may help to reduce uniformity and monocultures within global financial, supply chain, and interrelated systems.
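One way to act on that first de-risking step is to make the source mix an explicit, auditable part of the training pipeline. The sketch below is purely illustrative: the corpus names echo those cited from the LLaMA paper, but the sampling proportions, the extra less-centralized source, and the concentration cap are hypothetical placeholders, not any real training recipe.

```python
# Illustrative sketch of weighting multiple pretraining corpora so that no
# single source dominates. Corpus names follow those cited above; the
# proportions and the concentration cap are hypothetical placeholders.
import random

corpus_weights = {
    "CommonCrawl": 0.30,
    "C4": 0.15,
    "GitHub": 0.10,
    "Wikipedia": 0.10,
    "Books": 0.10,
    "ArXiv": 0.10,
    "StackExchange": 0.05,
    "regional_or_proprietary_data": 0.10,  # hypothetical, less-centralized source
}

def sample_corpus():
    """Pick the next corpus to draw a training batch from, by weight."""
    names, weights = zip(*corpus_weights.items())
    return random.choices(names, weights=weights, k=1)[0]

def check_concentration(weights, cap=0.4):
    """Flag any source whose share exceeds a concentration cap."""
    return {name: w for name, w in weights.items() if w > cap}

print(sample_corpus())
print(check_concentration(corpus_weights) or "no single corpus exceeds the cap")
```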

Safeguarding Models Against Human Manipulation

However, curating diversified and less centralized datasets for models to train on alone is not enough to curtail an AI-induced global crisis.

Mr. Gensler believes that deep learning models present a challenge of limited robustness since their latent features are unobservable, and he thus emphasizes systemic risks stemming from adversarial attacks and cyberattacks. The smallest perturbations to a model’s unobservable latent features could result in flawed determinations, outcomes, or predictions that can easily be transferred between models.

However, IBM researcher Pin-Yu Chen offers a different view.

He suggests that developers already have many tools to proactively prepare AI deployments for the real world by detecting, and even predicting, both incidental and intentional adversity to AI models, as well as poisoning of their training data, early enough to ensure fairness, interpretability, and robustness.

“In the real world, AI models can encounter both incidental adversity, such as when data becomes corrupted, and intentional adversity, such as when hackers actively sabotage them. Both can mislead a model into delivering incorrect predictions or results”, according to Pin-Yu Chen. “Our recent work looks to improve the adversarial robustness of AI models, making them more impervious to irregularities and attacks. We’re focused on figuring out where AI is vulnerable, exposing new threats, and shoring up machine learning techniques to weather a crisis.”

Therefore, while deep learning has yet to fully penetrate financial and supply chain ecosystems, AI developers have an opportunity to proactively build robustness into a model to safeguard against human manipulation of the model and its training data. An AI model with remarkably high resistance to manipulation, perturbation, and attacks is said to exhibit adversarial robustness.
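As an illustration of how small a “perturbation” can be, the sketch below applies the fast gradient sign method (FGSM), a standard adversarial technique, to a toy classifier. The model, input, and epsilon are hypothetical placeholders; real robustness work of the kind Chen describes (adversarial training, detection, certified defenses) goes well beyond this.

```python
# Minimal FGSM sketch: a perturbation too small for a human to notice can
# flip a model's prediction. The toy model and input are placeholders;
# this is not a production attack or defense.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20, requires_grad=True)   # a single hypothetical input
y = torch.tensor([0])                        # its true label

# Compute the gradient of the loss with respect to the input itself.
loss = loss_fn(model(x), y)
loss.backward()

# FGSM: nudge the input a tiny amount in the direction that increases loss.
epsilon = 0.1
x_adv = x + epsilon * x.grad.sign()

print("clean prediction      :", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
# The adversarial prediction may differ even though x_adv barely changed;
# adversarially robust models are trained so that it does not.
```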

Addressing Intrinsic Biases and Limited Explainability

AI determinations, predictions, and outcomes are also often inexplicable because the underlying math is non-linear and hyperdimensional with extensive parameters, according to Mr. Gensler. He also points out that the outcomes of predictive algorithms may be based on data reflecting historical biases and may mask underlying systemic prejudices.

For example, the Guardian recently reported that biases uncovered in AI detector programs can discriminate against non-native English speakers, with the potential to flag their college and job applications as AI-generated and thereby marginalize them. In another example, automated employment decision tools in New York City must now undergo a comprehensive bias audit before being put into use to make employment determinations. Additionally, the prevalence of racial and age biases in healthcare algorithms has also been called out in various reports.

As deep learning becomes more broadly adopted in the financial and supply chain ecosystems, AI developers must be cognizant of representational and societal biases as well as performance disparities. By working to drive greater financial inclusion when deploying models, developers can help mitigate financial fragility and systemic risk that could in turn lead to an AI-induced financial crisis.

The National Institute of Standards and Technology (NIST) describes a trustworthy AI system as being safe, secure and resilient, explainable and interpretable, privacy enhanced, fair (with harmful bias managed), accountable and transparent, and valid and reliable. “Tradeoffs are usually involved, rarely do all characteristics apply in every setting, and some will be more or less important in any given situation”, according to the NIST.

In the case of Meta AI, bias, toxicity, and misinformation are assessed through four distinct benchmarks that gauge LLaMA’s propensity to generate toxic language, its expression of biases across seven protected categories plus physical appearance and socioeconomic status, and the truthfulness of the model.
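In the same spirit, and as a toy illustration only, a minimal group-disparity check can be scripted alongside such benchmarks. The groups, toxicity scores, and threshold below are hypothetical placeholders, not Meta’s benchmarks or a real audit.

```python
# Toy group-disparity check (illustrative only): compare how harmful a
# model's outputs score across groups. Groups, scores, and the disparity
# threshold are hypothetical placeholders, not a real audit.
from statistics import mean

# Hypothetical per-output toxicity scores, keyed by the group referenced
# in the prompt that produced each output.
scores_by_group = {
    "group_a": [0.02, 0.10, 0.05, 0.01],
    "group_b": [0.30, 0.22, 0.18, 0.25],
}

def disparity_report(scores, max_gap=0.1):
    """Flag pairs of groups whose mean scores differ by more than max_gap."""
    means = {g: mean(v) for g, v in scores.items()}
    gaps = {
        (a, b): abs(means[a] - means[b])
        for a in means for b in means if a < b
    }
    flagged = {pair: gap for pair, gap in gaps.items() if gap > max_gap}
    return means, flagged

means, flagged = disparity_report(scores_by_group)
print("mean toxicity by group:", means)
print("disparities above threshold:", flagged or "none")
```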

Building on this methodology, developers can begin to explore additional benchmarks to address biases and limited explainability and to scale detection methodologies toward building trustworthy AI systems.

Strong Policy Frameworks For AI Governance Are Needed To Reduce The Likelihood Of An AI-Induced Financial Crisis

Simply put, proactive measures must be taken to create, enforce, and reform policy frameworks for AI governance that de-risk AI model deployments and create more resilient global ecosystems. This will be crucial to mitigating the systemic risk to the broader global economy stemming from herding behavior, homogenization, perturbations, and biases that could lead to a financial crisis.

“We should not rely on post-hoc audits of ethical and social consequences, conducted only after the technical architecture and deployment decisions have been made. We instead need to infuse social considerations and ethical design deeply into the technological development of foundation models and their surrounding ecosystem from the start”, according to the Center for Research on Foundation Models and the Stanford Institute for Human-Centered Artificial Intelligence.

