Eye on Emerging Markets
Q4 2023 | Edition Eight
Welcome.
The potential for AI to transform lives in emerging markets is huge. As Mahamudu Bawumia, the vice-president of Ghana and head of the government’s economic management team, said recently: “One thing is clear: Africans have a goldmine at our fingertips. A rapidly growing population of 1.4 billion people, 70% under the age of 30, combined with huge growth in AI investments, creates a potent recipe… We will not sit back and wait for the rest of the world to reap our rewards.” But many emerging markets face challenges that make it difficult to reap these rewards. Striking the right balance between regulating AI and encouraging investment and innovation is crucial – a topic we touch on in this edition of Eye on Emerging Markets.
Latest topics
Key Takeaways from the Global AI Safety Summit
An important first step towards building international cooperation
AI Safety Regulation: Lessons for Africa
Blog: Eye on ESG
Our Eye on ESG blog provides insights and analysis to help navigate the ESG landscape on a global scale. We cover a range of timely ESG updates and issues, including regulatory, policy, political and industry-related developments, as well as judicial developments and case law.
Additional Insights
Select one of our experienced lawyers for more information
Key Emerging Markets Team
Ash McDermott | Banking & Finance
Nanak Keswani | Derivatives
Barry Cosgrave | Restructuring
Ed Parker | Derivatives
Jim Patti | Banking & Finance
Douglas Doetsch | Banking & Finance
Gonçalo Falcão | Corporate
Ian Coles | Africa & Mining
Jason Hungerford | Regulatory
Kirsti Massie | Projects & Infrastructure
Kwadwo Sarkodie | Construction & Engineering
Luiz Aboim | Litigation
Musonda Kapotwe | Regulatory
Anne Wicks | Global Energy
Peter Pears | Capital Markets
Rachel Speight | Banking & Finance
Robert Hamill | Corporate
Sam Eastwood | Regulatory
Raid Abu-Manneh | International Arbitration
Get in touch
For more information on Emerging Markets, or any of the topics in this newsletter, please contact one of our key contacts above or visit our website.
Exploring Opportunities in Brazil in the Open Finance Landscape
Written by Ingrid Pistili and Priscilla Santos
In an increasingly digitized and interconnected world, changes in the financial sector are inevitable. Technological advancements are reshaping how individuals handle their finances, and Open Finance is emerging as the next logical step in the evolution of the financial industry, following the success of Open Banking.
Priscilla Santos Partner, São Paulo (T&C) E: ppsantos@mayerbrown.com T: +55 11 2504 4269
This new ecosystem offers an exceptional opportunity for emerging players, particularly fintechs – financial sector firms leveraging technology to forge innovative business models. It is paramount for these companies to invest in education and training to meet the technological, regulatory, and data security prerequisites set by the market.
Open Finance in Brazil represents a significant milestone in the National Financial System. This initiative allows for the standardized sharing of data, transactions, and services among financial institutions and related entities. While the name of the project changed in 2022 to emphasize its broader scope, the core principles remain the same, ensuring that consumers retain control over their data and decide when and how to share their information.
Open Finance is being implemented in several phases, jointly by the Central Bank of Brazil (“BCB”) and the National Monetary Council through Joint Resolution No. 01/2020, with each phase contributing to a more transparent and consumer-friendly financial system.
The first phase, known as Open Data, began on February 1, 2021, and included the sharing of basic information such as service channels and essential financial products and services like checking accounts and loans. This marked a significant step towards increased transparency.
The second phase, initiated in August 2021, expanded Open Finance further by allowing the sharing of customer data, including sensitive information like personal data, bank balances, and credit information. Importantly, data sharing only occurs with explicit customer consent. The number of active consents has grown substantially, demonstrating increasing customer acceptance: between March and December 2022, the BCB recorded more than 18.7 million active consents, a 95% increase compared to the first half of 2022.
On October 29, 2022, the third phase further expanded the scope of Open Finance to include the sharing of financial services, such as payments and the submission of credit proposals. This phase started with payment transaction initiation services via Pix, further enhancing convenience and accessibility.
The fourth and final phase, Open Investment, began on September 29, 2023. In this phase, customers of participating institutions can share data related to foreign exchange, insurance, pensions, and investment products and services with other institutions within this ecosystem. This initiative aims to increase transparency and interoperability in the financial sector, providing customers with more choices and control over the management of their resources and investments. Open Investment allows customers to share information about their investment products across various participating financial institutions, enabling personalized services and investment recommendations. This data sharing benefits customers by providing a comprehensive view of their investment portfolio, more accurate financial insights, and the convenience of managing all their investments in one place, regardless of where they are held.
Open Finance's impact extends to both traditional banks and fintechs, especially credit-focused fintechs. The increased data openness and interconnectivity create a fertile ground for innovation and collaboration. It fosters an inclusive ecosystem where both fintechs and traditional banks can excel, catering to diverse customer demands and preferences. This contributes to a more dynamic, competitive, and customer-centric financial market. As Open Finance advances, credit fintechs and other financial institutions face operational and legal challenges that necessitate specialized legal and financial assistance. These challenges also present opportunities for innovation and improvement in areas such as data protection, compliance with regulations, and consumer rights.
General Data Protection Law (LGPD): Compliance with LGPD guidelines is crucial for participating institutions, including fintechs, to ensure the privacy and security of customer information. Adhering to data protection practices not only avoids legal risks and fines but also builds customer trust.
Compliance with Regulatory Obligations: Strict adherence to BCB regulations is essential for ensuring the continuity of operations and gaining customer and investor trust.
Consumer Rights: With increased access to financial options, credit fintechs must elevate their customer service standards. Meeting contractual obligations and being transparent with terms and conditions are essential for preventing complaints and building a strong reputation.
The Open Finance ecosystem currently hosts over 800 participating institutions, representing more than 150 conglomerates. Credit-focused fintechs play a pivotal role in driving this transformation, with more than 40 million active consents for data sharing within the Open Finance framework. Our lawyers are well-prepared to provide strategic guidance to financial institutions and credit-focused fintechs operating within the Open Finance environment. With their deep expertise in financial and regulatory law, they understand the intricacies of the regulations and legal obligations involved, ensuring their clients navigate this new financial paradigm safely and successfully.
Ingrid Pistili Associate, São Paulo (T&C) E: ipistili@mayerbrown.com T: +55 11 2504 4264
Key Takeaways from the Global AI Safety Summit
Written by Oliver Yaros and Oliver Jones
The Global AI Safety Summit took place at Bletchley Park in the United Kingdom on 1 and 2 November 2023. Over one hundred delegates – international politicians, executives from the world’s most prominent AI companies, academics and civil society representatives – gathered to consider the risks of AI and how to mitigate them.
The list of confirmed attendees included government representatives of India, Indonesia, Kenya, Nigeria, the Republic of the Philippines and Rwanda, as well as the African Commission on Human and Peoples' Rights. However, no African universities or technology companies attended.
The outcomes of the Summit include a Declaration by leading AI nations to work together to identify, evaluate and regulate the risks of using AI, and the establishment of an AI Safety Institute in the UK to test new types of frontier AI. The UK and global partners also pledged £80 million to fund safe and responsible AI projects for development, beginning in Africa.
Accelerating development using AI
On the first day of the Summit, it was announced that the UK, Canada, the Bill and Melinda Gates Foundation, the USA and partners in Africa would fund an £80 million project to accelerate development in the world’s poorest countries using AI. The UK AI for Development Programme will contribute £38 million towards the project, which will commence in sub-Saharan Africa. The goals of the programme are to help local AI innovators boost growth and support Africa’s long-term development, and specifically include:
unlocking the benefits of AI to the 700 million people who speak 46 African languages;
making 5 or more African countries globally influential in the worldwide conversation on AI, including on the use of AI to help achieve the Sustainable Development Goals;
creating or scaling up at least 8 responsible AI research labs at African universities;
helping at least 10 countries create sound regulatory frameworks for responsible, equitable and safe AI; and
helping bring down the barriers to entry for African AI innovators within the private sector.
The UK and other global partners hope to achieve this by using the £80 million to:
fund post-graduate training and fellowships in AI in African universities;
invest in innovators building models with data that accurately represents the African continent, using home-grown skills and computing power;
foster responsible AI governance to help African countries mitigate the risks of AI and adapt their economies to technological change; and
help sub-Saharan African countries have a bigger voice in influencing how AI is used to further the UN’s Sustainable Development Goals.
Bletchley Declaration on AI Safety
28 countries, including Brazil, Saudi Arabia and the United Arab Emirates, reached a “world-first agreement” at the Summit on establishing a shared understanding of the opportunities and risks posed by “frontier AI”. Frontier AI refers to highly capable general-purpose AI models at the ‘frontier’ of AI development. These models can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced AI systems. The participants produced a Declaration which sets out the “potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models”, but notes that the risks are “best addressed through international cooperation”.
More work to be done on AI
Despite the progress made at the Summit, there has also been some criticism that there was too little focus on certain key issues, such as:
the energy-intensive nature of AI and the environmental impact;
algorithmic bias and misinformation resulting in discrimination;
safety issues for women and girls, such as deepfake technology; and
generative AI affecting upcoming elections through disinformation.
The International Context of the Summit
The event coincided with US President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The Order seeks to promote US leadership on AI and establishes rules governing the use of AI by government agencies, as well as imposing reporting requirements for foundation models. In the EU, legislators are close to agreeing the EU-wide AI Act, which seeks to introduce far-reaching regulation of most AI systems. In the UK, rather than introducing new legislation, the UK Government plans to empower existing regulators to regulate AI in their areas of competence.
Future of International Cooperation Following the AI Safety Summit
Together with the G7 Hiroshima AI Process, the AI Safety Summit is an important first step towards building international cooperation on the use of AI. The support for the Bletchley Declaration from the Global South and from China, which earlier this year proposed its own draft measures on regulating generative AI, paves the way for more international collaboration on AI regulation. To continue the momentum, the Republic of Korea has agreed to co-host a mini virtual summit on AI in the next 6 months. This will be followed by France hosting the next in-person AI safety summit in autumn 2024. The international nature of these discussions could lead to greater harmonisation of AI governance across national borders, which would reduce the cost of compliance for multi-national businesses.
The Establishment of an AI Safety Institute
Rishi Sunak, UK Prime Minister, announced that the UK Frontier AI Taskforce was evolving to become the new AI Safety Institute. The Institute is the first state-backed organisation focusing on advanced AI safety for the public interest. Its core functions will be to:
1. develop and conduct evaluations on advanced AI systems;
2. drive foundational AI safety research; and
3. facilitate information exchange.
Whilst the AI Safety Institute is not a regulator, its insights will inform UK and international policymaking, enabling the UK Government to take an evidence-based, proportional approach to regulating AI risks.
Oliver Yaros Partner, London E: oyaros@mayerbrown.com T: +44 20 3130 3698
Oliver Jones Trainee Solicitor, London E: ojones@mayerbrown.com T: +44 20 3130 3412
Ashley McDermott Partner, London E: amcdermott@mayerbrown.com T: +44 20 3130 3120
AI Safety Regulation: Lessons for Africa
Written by Noreen Kidunduhu, ILFA Secondee at Mayer Brown
Artificial Intelligence (AI) has the potential to alter various facets of daily life in Africa with applications spanning diverse areas such as employment, education, health and justice.
It holds immense potential for transformative impact in the areas of science, clean energy, biodiversity and climate action. While AI offers unparalleled opportunities, some argue that the potential risks require detailed regulation, including potentially on an Africa-wide basis. Other articles in this edition of Eye on Emerging Markets highlight the recent Global AI Safety Summit that took place at Bletchley Park in the United Kingdom, as well as President Biden's recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
More and more African countries are adopting new laws and regulations relating to AI, generally with the aim of fostering responsible development. For example:
Kenya, Rwanda and Nigeria endorsed the recent Bletchley Declaration;
Kenya has proposed the Kenya Robotics and Artificial Intelligence Society Bill, 2023, which would establish a "Kenya Robotics and Artificial Intelligence Society" to regulate, promote and facilitate the activities of robotics and AI practitioners within the country;
Rwanda's "National AI Policy" prioritises trustworthy AI adoption in the public sector and beneficial AI adoption in the private sector, and its Utilities Regulatory Authority is also reported to be in the final stages of developing practical, ethical guidelines for the use of AI;
Nigeria's National Information Technology Development Agency is progressing the country's "National Artificial Intelligence Policy", which will provide direction on its strategy to mitigate AI risk;
Egypt has a charter for responsible AI that sets out various guidelines on the ethical and responsible development, deployment, use and management of AI; and
Tunisia's National Agency for the Protection of Personal Data has issued recommendations on ethical AI development.
However, while African countries are striving towards enacting comprehensive national regulatory frameworks on AI safety, they may also seek cost efficiencies for both market participants and regulators by adopting uniform regulations across multiple countries. Whilst multilateral agreements are challenging to reach, examples such as the African Continental Free Trade Agreement and the OHADA legal harmonisation system, adopted many years ago by seventeen West and Central African countries to assist their economic and legal integration, show that it can be done.
In addition, as with other countries and regions around the world, regulators and legislators have to balance the desire for regulation with the dangers of over-regulating an innovative and fast-moving industry. As such, African jurisdictions have been paying close attention to the safety standards and principles being adopted by other countries and regions, in particular with respect to:
whether to categorise AI based on potential risks and to focus oversight on high-risk categories/uses;
the EU risk-based approach, where the obligations for an AI system are proportionate to the level of risk that it poses;
requiring developers to share critical information with the government and implementing rigorous red-team testing;
prioritising privacy-preserving techniques, as well as developing guidelines for evaluating their effectiveness in protecting citizens' privacy;
establishing or collaborating with an AI safety body/institute;
investing in AI safety training programmes; and
the need for regulation to be proportionate and not overly costly or burdensome for market participants.
African regulators have also acknowledged that regulations with respect to incident reporting for serious AI-related incidents will be key for ensuring continuous improvement. While considering the specifics of their context, African nations can adapt these elements to fortify their AI safety frameworks whilst still encouraging investment and innovation. Striking a balance between fostering innovation and safeguarding against potential risks is a critical challenge that African policymakers need to address.
As Africa navigates the complex landscape of AI safety regulation, international cooperation and the sharing of best practices become paramount. The region has the opportunity to shape ethical and responsible AI development throughout the continent that will drive innovation, whilst prioritising human rights, transparency and environmental sustainability. Collaborative efforts will not only safeguard against potential harms but also ensure that AI fulfils its transformative potential for the continent's future prosperity and well-being.
Noreen Kidunduhu ILFA Secondee at Mayer Brown