The increased sophistication of fintech poses many policy concerns, especially when harnessing A.I. in asset management. Currently, there is a lack of international regulatory standards for A.I. and machine learning in asset management. Since A.I. is already being used by investment managers to improve operational structure, investment strategy, and trading efficiency, the need to address this policy gap is urgent.
On March 25, the Institute of Electrical and Electronics Engineers (IEEE), the world’s largest professional organization devoted to engineering, is launching Ethically Aligned Design (EAD1e), a set of guidelines for the design and use of intelligent systems. The initiative is a step in the right direction because it begins to provide the ethical foundations for designing transparent and impartial systems—but more must be done.
Regulators have largely refrained from issuing new rules for the use of A.I. in asset management—a regulatory lag likely caused by the speed of A.I. adoption. However, a 2017 report from the Financial Stability Board (FSB), an organization that reviews the stability of the international financial system, identified areas of concern with the use of A.I. in financial services. These included macro-level risks due to the lack of auditability of A.I. and difficulty maintaining adherence to current protocols on data security, conduct, and cybersecurity during a period of rapid adoption of largely untested new A.I. technologies.
Rather than take a wait-and-see approach, the financial services industry must get ahead of policymakers by crafting sensible and fair industry-specific rules for the use of A.I. in asset management. A proactive, industry-led approach to A.I. governance and ethics for asset management should include the following elements:
First, standards should align with the IEEE's. Let's not reinvent the wheel: The IEEE has already done substantial recent work on setting ethical guidelines. And with the final version of EAD1e being released later this month, it would be advantageous for the asset management industry to link its standards and ethics guidelines to those of the IEEE.
Second, new standards should conform to the fundamental ethical and legal principles that govern how modern society functions. At a minimum, A.I. should not be used to circumvent any current rules that protect customers against unscrupulous industry players. Practices that exploit customers are wrong whether carried out by a human or by an intelligent machine. Existing guidelines should also be reviewed and, where needed, broadened and updated to cover all A.I. applications currently in use.
Third, A.I. measures need to be integrated with existing data policies. Since data privacy is already a fairly mature governance area, this relationship provides a helpful starting point and also suggests a potential organizational home for A.I.-related policies within investment firms.
Fourth, effective regulation will require industry collaboration. There is strength in numbers: Legislators and regulators will be more likely to recognize the efforts of a majority of the financial industry than of a single company. Asset management firms should collaborate on setting an industry code for A.I. regulation. This may be done either by creating a new consortium of firms eager to shape A.I. policy for the industry or by working through existing industry bodies such as the IEEE or the FSB.
With this technology developing constantly and being adopted rapidly throughout the industry, time is of the essence. Asset management already uses A.I., and it is important for clients and investors that industry players adhere to best practices sooner rather than later.
Joseph Byrum is chief data scientist of the Principal Financial Group.