Published on May 22, 2024
Artificial intelligence (AI) technologies have transformed industries across the board, offering companies unprecedented opportunities to improve their efficiency and decision-making. However, as underscored by the ICGN, the development and use of these technologies carry ethical risks, including undesirable biases that can create and propagate discrimination, the generation of false information, data leaks and intellectual property infringements.
Around the world, various legislative and normative initiatives have been put in place to encourage responsible AI. Internationally, these include the OECD AI Principles as well as the ISO/IEC 42001 standard. In Europe, the European Union has reached agreement on its Artificial Intelligence Act. In the U.S., the Biden administration issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. In Canada, while the voluntary Code of Conduct on generative AI systems sets out guidelines that companies may adopt, the Canadian government has been working on legislation aimed at regulating AI.
Investors are also mobilizing. The Candriam-led collaborative initiative on facial recognition technology, in which we took part, was integrated into the Collective Impact Coalition for Ethical Artificial Intelligence. We are pleased to have joined this initiative, commonly referred to as the Ethical AI CIC, which draws on the findings of the World Benchmarking Alliance’s Digital Inclusion Benchmark. At the beginning of 2024, the group of investors issued a statement summarizing the context, the issues at stake and our expectations.

Few companies have developed ethical principles to guide the development and use of their technologies, and among those that have, only a handful disclose how these principles are implemented. We were therefore encouraged to see Verizon publish guiding principles for its approach in 2023. We have several main expectations of companies developing or using this type of technology:
- Implement robust governance, with a board of directors that has the mandate and competencies to oversee this issue.
- Adopt policies and principles on the ethical/responsible use of AI technologies.
- Put in place a due diligence process to identify, assess, prevent and mitigate the negative impacts of these technologies on human rights.
- Disclose information on each of these items and on how they are implemented.