Hong Kong's Policy Statement on Responsible Application of Artificial Intelligence in the Financial Market


On 28 October 2024, the Financial Services and the Treasury Bureau ("FSTB") of Hong Kong issued a policy statement on the responsible application of artificial intelligence ("AI") in the financial market (the "Policy Statement"). Under a "dual-track approach", the Policy Statement seeks to promote AI development while mitigating the associated risks to cybersecurity, data privacy and the protection of intellectual property rights.

AI in Hong Kong

The FSTB focuses its analysis on three key attributes of AI as applied in the financial services industry, namely that it is data-driven, double-edged and dynamic.

The FSTB recommends that the financial services sector in Hong Kong should adopt a dual-track approach when deploying or using AI to ensure sustainable and responsible use.

Dual-track approach - Capturing Opportunities

The Policy Statement details the various benefits that AI applications can bring to the financial services industry.

Dual-track approach - Preventing Risks

The FSTB emphasises in the Policy Statement that the responsible use of AI requires financial institutions to focus on the protection of data privacy and intellectual property rights, information security, accountability, operational resilience and job security. Financial institutions should develop an AI governance strategy that adopts a risk-based approach throughout the AI lifecycle, covering the procurement, use and management of AI, with human oversight in place to mitigate potential risks.

The Policy Statement outlines the key risks associated with the use of AI and sets out recommended mitigation measures.

For data privacy, cybersecurity and the protection of intellectual property rights, the Policy Statement calls on AI users to ensure robust cybersecurity safeguards are in place to protect AI models and any confidential or personal information used. As personal data and copyright material may be used as training data for AI models, AI users must ensure their practices comply with the relevant personal data privacy laws and respect intellectual property rights.

For other risks such as fraud, social engineering attacks and cybercrime, robust AI detection systems are needed to detect and thwart fraudulent activities. The FSTB also calls for industry cooperation in sharing best practices and formulating measures to prevent such risks.

Other categories of risk discussed relate to bias, hallucination, and data and model governance. The FSTB notes that AI users need to ensure the diversity, quality and representativeness of training data to minimise bias in AI-generated output. To address hallucination, human oversight is necessary so that inaccurate AI-generated output can be assessed and corrected.

Financial institutions using AI need to disclose and be transparent about such use in order to protect consumers and investors, particularly when AI is used to make business decisions. Transparency in the use of AI allows investors and customers to make informed decisions regarding the use of their personal information and other preferences.

Way Forward

The government aims to collaborate with financial regulators in developing a clear and comprehensive supervisory framework. Given the rapid development and evolution of AI, the government will continue to adapt its supervision to market developments and draw on international standards. Financial regulators will be responsible for monitoring the deployment of AI in the financial services industry, whilst ensuring the regulatory framework remains adequate in view of the latest developments in AI.

Recent initiatives include the Generative AI ("Gen AI") Sandbox launched by the Hong Kong Monetary Authority and Cyberport in August 2024, which encourages banks to pursue novel Gen AI use cases within a risk-managed framework accompanied by supervisory feedback and technical assistance. In November 2024, the Securities and Futures Commission ("SFC") also issued a circular on the use of Gen AI by licensed corporations. The circular echoes the risk-based approach of the Policy Statement and focuses on four core principles, namely AI model risk management, senior management oversight, cybersecurity and data risk management, and third-party provider risk management.

Conclusion

The Policy Statement makes it clear that through collaboration with financial regulators and industry players, the Hong Kong Government is seeking to foster a sustainable financial market environment which enables financial institutions to leverage AI effectively and responsibly. As more AI-related laws and regulations emerge, businesses are advised to stay informed of the latest legal and regulatory developments and start putting in place robust AI governance now.
