Workshop: Compliance risks behind the application of AI in Financial Services
Regulators, as well as major players in financial services, have identified the risks behind AI applications. Bank of America demands that any "AI system that makes a judgement about a customer needs to be able to explain itself". In 2018, the Monetary Authority of Singapore issued principles to promote Fairness, Ethics, Accountability and Transparency (FEAT) in the use of AI and data analytics in the financial sector. And FINMA noted as early as Circular 2013/8: "Supervised institutions must document the key features of their algorithmic trading strategies in a way that third parties can understand."
The GDPR follows up on these ideas: under Art. 22 GDPR, individuals have the right not to be subject to a decision based solely on automated processing of their data, including profiling; companies that employ AI for such automated decision-making may do so only in limited cases, for example with the individual's explicit consent.
Companies are faced with a variety of questions:
- What does the regulatory environment for AI projects look like? What are our duties as a corporation?
- What are the legal risks of applying AI? How can we minimize them?
- How can we make algorithms interpretable, both for ourselves and for our customers?
- What should be considered when buying AI software from third parties?
In a workshop, we offer answers to these and further questions on the compliance risks of applying AI in financial services.
Please contact us with your inquiry.