Robust Human Rights Principles and a Strict Liability Regime for AI Providers: An Overview of Brazil’s Proposed AI Legislation
Brazil has proposed a new framework for regulating the ethical and responsible use of Artificial Intelligence (AI) systems. The bill is the result of a comprehensive effort to produce a single draft law replacing three bills that have been pending in Congress over the past four years (5.051/2019, 21/2020, and 872/2021). The creation of a commission in March 2022 marked the beginning of this effort, which spanned nearly 240 days of meetings, seminars, and public hearings. The result is a new text of just over 40 articles, accompanied by a report of more than 900 pages, which sets out principles, rules, and guidelines for regulating AI in the country.
The proposed legislation represents a strong commitment to protecting human rights. Its primary aim is to grant individuals significant rights and to place specific obligations on companies that develop or use AI technology (AI suppliers or operators). To achieve this, the bill provides for the creation of a new regulatory body to enforce the law and takes a risk-based approach, classifying AI systems into different risk categories. It also introduces a protective civil liability regime for suppliers and operators of AI systems, along with an obligation to report significant security incidents.
Establishing National Norms for Ethical and Responsible Use of Artificial Intelligence Systems
Articles 2 and 3 lay down the foundations and guiding principles for the development and use of AI, including respect for human rights, democratic values, equality, non-discrimination, plurality, and respect for labour rights. These principles include accountability (Article 3, IX), as well as the prevention, mitigation, and remediation of systemic risks arising from the intentional or unintentional use and effects of AI-based systems (Article 3, XI).
Protecting Individual Rights
Chapter II of the bill protects the rights of individuals affected by AI decision-making. It guarantees various rights, including the right to an explanation of decisions, the ability to contest them, and human participation in the decision-making process. The bill also highlights the right to non-discrimination and to the correction of identified biases. Individuals can enforce these rights before administrative bodies and courts, individually or collectively.
Section II emphasises transparency and the intelligibility of AI decisions. The bill grants individuals the right to request explanations and information about the criteria and procedures used by a system. It also includes measures to protect vulnerable groups, such as children, adolescents, the elderly, and people with disabilities.
Risk-Based Approach to AI Regulation
Chapter III introduces a risk-based regulatory model for AI systems. Article 13 requires suppliers to conduct a preliminary assessment classifying a system’s degree of risk as ‘excessive’ or ‘high’. Systems posing ‘excessive’ risk will not be permitted; these include systems that exploit the vulnerabilities of specific groups or use subliminal techniques. The bill also prohibits public bodies from using AI systems to evaluate, classify, or rank individuals on the basis of their social behaviour or personality attributes in order to determine access to goods, services, and public policies in an illegitimate or disproportionate manner.
Article 17 defines high-risk sectors and applications, which include AI systems used for critical infrastructure security, education, recruitment, HR management, and health.
Governance and Algorithmic Impact Assessments of AI Systems
Chapter IV establishes governance rules and processes for AI agents to ensure system security and protect individual rights. These measures apply throughout the life cycle of AI systems – particularly those that pose a high risk – and require documentation, testing, and bias prevention measures. AI agents must also ensure the explainability of AI results and provide relevant information for interpreting the system’s outcomes.
Algorithmic impact assessments must be performed by independent professionals with technical, scientific, and legal expertise, as mandated by Article 22. Article 24 provides that the impact assessment must take into account several factors pertaining to the AI system: foreseeable and known risks, associated benefits, the likelihood and gravity of negative outcomes, operational logic, tests and evaluations conducted, mitigation measures, training and awareness, and transparency measures for the public, among others. The assessment must also be accompanied by regular quality-control tests and a rationale for the system’s residual risk.
These assessments must be updated continuously throughout the system’s life cycle (Article 25). If unexpected risks to individuals’ rights emerge, AI agents must immediately notify the authorities and the affected individuals (Article 24).
Civil Liability for Damages Caused by AI Systems
Chapter V outlines the civil liability of suppliers and operators of AI systems for any damages they cause. Article 27 specifies that if a system is deemed high-risk, its supplier or operator will be subject to strict liability for any resulting damages. If the system is not classified as high-risk, fault on the part of the AI agent is presumed, with the burden of proof reversed in favour of the victim.
Regulations and Oversight of the Artificial Intelligence Law
Chapter VI allows AI agents to establish codes of best practice and governance, which serve as a reference for demonstrating good faith. The competent authority will take these codes into account when applying administrative sanctions.
Chapter VII requires AI agents to report serious security incidents to the competent authority. The authority will then determine if measures need to be taken to mitigate the effects. Article 31 outlines the types of incidents that must be reported.
Chapter VIII outlines the regulatory framework for the implementation and oversight of the law. The Executive Branch is responsible for designating a competent authority to oversee implementation, conduct studies, and promote best practices in the development and use of AI systems. The competent authority will issue regulations, monitor compliance, enforce sanctions for non-compliance, prepare annual reports, and carry out other tasks assigned under Article 32 and its associated paragraphs.
Next Steps and What Companies Need to Know
The bill, currently under consideration in Brazil’s Congress, seeks to address the potential risks and negative impacts of AI while promoting its benefits.
Companies that develop or use AI need to pay attention to the requirements outlined in the bill, including compliance with security measures, the creation of mechanisms for users to contest decisions made by AI-powered systems, and the role of human oversight in decision-making.
The final version of the bill has yet to be approved and may undergo further changes during the legislative process. It is therefore important for companies that develop or use AI to stay informed of its progress and to understand the potential implications for their business operations. As the proposed text moves through Congress, companies should engage with relevant stakeholders, including government representatives and civil society organisations, to provide input and feedback on the proposed legislation. They should also prepare for potential changes to their AI systems, implementing measures such as risk assessments, transparency, and accountability to ensure compliance with the bill’s requirements.
Access Partnership is monitoring the advancement of this bill and is available to provide additional information on how it may impact your business. If you require further assistance, please contact Paula Rabacov at [email protected].