Considering the Benefits and Risks


The Fourth New York Fed Conference on Fintech, held on September 29, convened thought leaders from the financial industry, academia, and the public sector to discuss financial technology developments and their impact, with a focus on artificial intelligence (AI) and digital assets. As I noted in my remarks at that event, I believe it is important to broaden our collective understanding of how financial technology, and especially generative AI, can be harnessed to deliver progress on our collective goals, while recognizing the need for guardrails and safety measures.

Viewed against the long history of finance, we stand today on the cusp of a profound transformation. At the heart of this shift lie the capabilities of generative AI tools like ChatGPT, which have garnered significant attention for both their versatile applications and their potential pitfalls, the latter of which I discuss below. During the conference, researchers explored how large language models, a type of natural language processing (NLP) model, could benefit economic research. Professor Anton Korinek highlighted the potential for large language models to serve as research assistants and tutors for tasks like ideation, writing, background research, data analysis, and coding. Professor Baozhong Yang discussed how he and his coauthors employed ChatGPT to extract managerial insights from corporate policy disclosures. And Anne Hansen and Sophia Kazinnik of the Richmond Fed presented on the efficacy of GPT models in analyzing “Fedspeak,” or Federal Reserve communications, to classify the overall stance of monetary policy.
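To make the Fedspeak application concrete, the sketch below shows one way a large language model could be prompted to label the stance of a single Fed statement. It is a minimal illustration under stated assumptions, not the researchers’ method: the OpenAI client, model name, and prompt wording are all assumptions.

```python
# Minimal sketch: classifying the stance of a Federal Reserve statement with
# an LLM. Assumes the OpenAI Python client; the model name and prompt are
# illustrative, not the approach used by the researchers cited above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_fedspeak(sentence: str) -> str:
    """Label a sentence of Fed communication as hawkish, dovish, or neutral."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You classify Federal Reserve statements. "
                        "Reply with exactly one word: hawkish, dovish, or neutral."},
            {"role": "user", "content": sentence},
        ],
        temperature=0,  # deterministic labels for reproducibility
    )
    return response.choices[0].message.content.strip().lower()

print(classify_fedspeak(
    "Further tightening may be warranted if inflation pressures persist."
))  # expected: hawkish
```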

Another domain where AI provides value to the financial industry is market analysis and risk assessment. AI excels at processing extensive datasets, detecting anomalies, making informed predictions, and projecting future trends. For instance, AI can harness vast datasets, including digital currency transaction records and social media activity, to provide insights into digital asset pricing and volatility, helping market participants make informed investment decisions. AI also offers dynamic solutions for pricing models that incorporate real-time information. The potential applications of AI in market analysis and risk assessment remain a promising area for future exploration.
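As a simple illustration of the anomaly detection mentioned above, the sketch below applies an isolation forest, a standard scikit-learn method, to synthetic return and volume data. The data and parameters are illustrative assumptions only.

```python
# Minimal sketch: flagging anomalous digital asset price/volume observations
# with an Isolation Forest. Data and parameters are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Synthetic daily observations: [return, volume]; a few injected outliers.
normal = rng.normal(loc=[0.0, 1.0], scale=[0.02, 0.1], size=(500, 2))
outliers = np.array([[0.25, 3.0], [-0.30, 2.5], [0.20, 0.1]])
X = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(X)  # -1 = anomaly, 1 = normal

print(f"Flagged {np.sum(labels == -1)} anomalous observations out of {len(X)}")
```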

AI, through the use of NLP, can also analyze textual data to perform sentiment analysis. The wide availability of textual data in the public domain provides a rich source for understanding public sentiment and peer influence. For example, AI can harness social media content and news sentiment to develop sentiment-based risk indicators for the digital asset sector, an area that merits further investigation.
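A minimal sketch of such a sentiment-based indicator appears below, using NLTK’s off-the-shelf VADER analyzer on a handful of hypothetical headlines. A real indicator would aggregate far larger and more varied sources; the headlines and the sign convention are assumptions for illustration.

```python
# Minimal sketch: a sentiment-based risk indicator for digital assets,
# built from headline sentiment with NLTK's VADER analyzer.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

headlines = [
    "Major exchange halts withdrawals amid liquidity concerns",
    "Regulators approve new spot bitcoin products",
    "Stablecoin briefly loses its peg before recovering",
]

# Compound score is in [-1, 1]; average and flip the sign so that more
# negative news sentiment maps to a higher risk reading.
scores = [analyzer.polarity_scores(h)["compound"] for h in headlines]
risk_indicator = -sum(scores) / len(scores)

print(f"Sentiment-based risk indicator: {risk_indicator:+.3f}")
```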

In the digital asset realm specifically, AI could be used to strengthen digital wallet security, enhance decentralized finance solutions, and address potential scalability issues. For example, AI can fortify digital wallet security through improved biometric authentication, behavior analysis, and fraud prevention measures.
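As a stylized example of the behavior analysis component, the sketch below flags wallet transactions that deviate sharply from a wallet’s historical amounts for step-up authentication. It uses a simple statistical rule as a stand-in for a learned model; the thresholds and data are illustrative assumptions.

```python
# Minimal sketch: a behavioral check for digital wallet security. A transaction
# is flagged for step-up authentication when its amount deviates sharply from
# the wallet's historical pattern. Thresholds are illustrative assumptions.
import statistics

def needs_step_up_auth(history: list[float], amount: float,
                       z_threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount is a large outlier vs. wallet history."""
    if len(history) < 10:
        return True  # too little history: default to extra verification
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > z_threshold

past = [25.0, 30.0, 22.5, 40.0, 35.0, 28.0, 31.0, 27.5, 33.0, 29.0]
print(needs_step_up_auth(past, 32.0))    # False: consistent with history
print(needs_step_up_auth(past, 5000.0))  # True: triggers extra verification
```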

Closer to my area of professional interest, AI can be deployed to identify risks in the financial industry. Recent research has shown AI’s effectiveness in identifying financial risk at globally significant financial institutions, thanks to its ability to capture tail behaviors beyond what earlier models could achieve. Looking ahead, an interesting avenue for exploration involves applying AI to assess additional risks in the digital asset space. As the digital asset market continues to expand and become more interconnected with traditional financial markets, it may present additional risks, and AI could play a vital role in proactively identifying and understanding them.
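To illustrate the tail behavior such models aim to capture, the sketch below computes a historical expected shortfall from simulated heavy-tailed returns. The research cited uses far richer AI models; this only shows the tail quantity itself, with simulated data and a confidence level chosen for illustration.

```python
# Minimal sketch: historical expected shortfall (CVaR), a simple measure of
# the tail behavior mentioned above. Data is simulated and heavy-tailed.
import numpy as np

rng = np.random.default_rng(seed=7)
returns = rng.standard_t(df=3, size=10_000) * 0.01  # heavy-tailed daily returns

alpha = 0.99  # 99% confidence level
var_threshold = np.quantile(returns, 1 - alpha)       # Value-at-Risk cutoff
expected_shortfall = returns[returns <= var_threshold].mean()

print(f"99% VaR: {var_threshold:.4f}")
print(f"99% ES:  {expected_shortfall:.4f}")  # average loss beyond the VaR
```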

Proceeding Cautiously

As we harness the incredible potential of artificial intelligence, we cannot ignore the profound ethical and technical challenges it presents. These challenges require the close collaboration of industry experts, researchers, and policymakers to establish regulations, standards, and guardrails for responsible AI development.

Ethical Concerns

It is widely recognized that AI systems can perpetuate biases present in their training data, leading to unjust and discriminatory outcomes, especially in areas like hiring, lending, and criminal justice. Additionally, there are well-documented concerns about intellectual property and privacy infringement, through surveillance systems and data mining among other means.

Official bodies and industry groups should establish clear guidelines for AI development and use, covering basic safety, fairness, and transparency requirements. Ethical frameworks need to be developed and implemented in AI systems so that decision-making prioritizes human well-being, safety, and benefit. Developers must quickly identify and mitigate biases in AI algorithms and data, and institutions and regulatory bodies need to conduct focused audits of AI models.
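As one concrete example of what a focused audit might check, the sketch below computes a demographic parity difference, comparing a model’s approval rates across two groups. The data is synthetic and the choice of metric is an illustrative assumption; real audits examine many metrics.

```python
# Minimal sketch of one bias audit check: the demographic parity difference,
# i.e., the gap in a model's approval rates between two groups.
import numpy as np

rng = np.random.default_rng(seed=1)

group = rng.integers(0, 2, size=1000)  # 0 or 1: group membership
approved = rng.random(1000) < np.where(group == 0, 0.60, 0.45)  # biased model

rate_g0 = approved[group == 0].mean()
rate_g1 = approved[group == 1].mean()
parity_gap = rate_g0 - rate_g1

print(f"approval rate, group 0: {rate_g0:.2%}")
print(f"approval rate, group 1: {rate_g1:.2%}")
print(f"demographic parity difference: {parity_gap:+.2%}")  # flag if large
```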

Safety and Security

AI can be exploited for malicious purposes—such as cyberattacks, misinformation campaigns, or the creation of convincing deepfakes—that can have far-reaching societal and financial consequences.

Security measures, such as data anonymization and access controls, need to be in place to protect individuals’ data used by AI systems and prevent unauthorized use. Rigorous testing, validation, continuous monitoring, and auditing processes are vital for ensuring the reliability and safety of AI systems. Maintaining human oversight, including “human-in-the-loop” decisions, is essential to ensure the responsible use of AI, especially in critical applications in finance.
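A minimal sketch of a human-in-the-loop gate follows: model decisions below a confidence threshold are escalated to a human reviewer rather than executed automatically. The threshold and decision types are assumptions for illustration.

```python
# Minimal sketch of a "human-in-the-loop" gate: model decisions below a
# confidence threshold are routed to a human reviewer instead of being
# auto-executed. The threshold and decision types are illustrative.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # e.g., "approve_loan", "block_transaction"
    confidence: float  # model's confidence in [0, 1]

def route(decision: Decision, threshold: float = 0.95) -> str:
    """Auto-execute only high-confidence decisions; escalate the rest."""
    if decision.confidence >= threshold:
        return f"auto-executed: {decision.action}"
    return f"escalated to human review: {decision.action}"

print(route(Decision("approve_loan", confidence=0.99)))       # auto-executed
print(route(Decision("block_transaction", confidence=0.62)))  # escalated
```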

Accountability and Transparency

The intricate nature of AI systems often makes it challenging to assign responsibility for their actions, especially when errors, accidents, or unintended consequences occur. Furthermore, the opacity of AI decision-making processes, often referred to as the “black box” phenomenon, hinders accountability and trust in AI systems.

Given the fast pace of innovation, developing effective safety features and regulatory frameworks for AI technologies is an ongoing challenge for policymakers, one that will require striking the right balance between allowing innovation to flourish and safeguarding against potential risks.

AI systems should be designed to provide clear explanations for their decisions, such that users can understand the rationale behind AI-generated outcomes. Assigning responsibility for AI decisions is also important to ensure that developers and organizations are held accountable for the actions of their AI systems.
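As a stylized example of such an explanation, the sketch below reports the per-feature contributions behind a linear credit model’s decision, so a user can see which inputs drove the outcome. The feature names and weights are hypothetical.

```python
# Minimal sketch: a per-decision explanation for a linear credit model, where
# each feature's contribution to the score is reported alongside the outcome.
# Feature names and weights are illustrative assumptions.
import numpy as np

features = ["income", "debt_ratio", "late_payments"]
weights = np.array([0.8, -1.5, -2.0])  # fitted coefficients (assumed)
bias = 0.5

applicant = np.array([1.2, 0.4, 1.0])  # standardized feature values

contributions = weights * applicant
score = bias + contributions.sum()
decision = "approve" if score > 0 else "decline"

print(f"decision: {decision} (score = {score:+.2f})")
for name, c in zip(features, contributions):
    print(f"  {name:>14s}: {c:+.2f}")  # rationale: per-feature contribution
```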

Conclusion

In this fast-changing landscape, ongoing collaboration between the public and private sectors will be necessary to shape a future where AI enhances, rather than diminishes, the human experience. Information sharing, international cooperation, the establishment of ethics review boards, and investment in long-term safety research are key components in addressing AI risks collectively and responsibly.

For a video replay of the conference, along with research papers and related resources, see The Fourth New York Fed Conference on Fintech: Artificial Intelligence and Digital Assets.

Li He, a financial risk specialist at the New York Fed, assisted in preparing this article.

Mihaela Nistor is chief risk officer and head of the Risk Group at the Federal Reserve Bank of New York.


The views expressed in this article are those of the contributing authors and do not necessarily reflect the position of the New York Fed or the Federal Reserve System.


