[Updated: March 7, 2024]
For many of us, Artificial Intelligence (“AI”) represents innovation, opportunities, and potential value to society.
For data protection professionals, however, AI also represents a range of risks involved in the use of technologies that shift processing of personal data to complex computer systems with often opaque processes and algorithms.
Data protection and information security authorities as well as governmental agencies around the world have been issuing guidelines and practical frameworks to offer guidance in developing AI technologies that will meet the leading data protection standards.
Below, we have compiled a list* of official guidance recently published by authorities around the globe.
Canada:
- 1/17/2022 – Government of Ontario, “Beta principles for the ethical use of AI and data enhanced technologies in Ontario”
https://www.ontario.ca/page/beta-principles-ethical-use-ai-and-data-enhanced-technologies-ontario
The Government of Ontario released six beta principles for the ethical use of AI and data enhanced technologies in Ontario. In particular, the principles set out objectives for aligning the use of data enhanced technologies in government processes, programs, and services with ethical considerations.
China:
- 3/1/2024 – National Information Security Standardization Technical Committee, Technical Document on Basic Requirements for Security of Generative Artificial Intelligence
https://www.tc260.org.cn/upload/2024-03-01/1709282398070082466.pdf (in Chinese)
The Technical Document provides security requirements for the use of generative AI services. These requirements include conducting a security assessment before collecting data for a generative AI model, entering legally binding contracts with generative AI service providers, and acquiring consent for certain use cases of generative AI services.
- 12/12/2022 – Cyberspace Administration of China, Regulations on the Administration of Deep Synthesis of Internet Information Services
http://www.cac.gov.cn/2022-12/11/c_1672221949354811.htm (in Chinese) and
http://www.cac.gov.cn/2022-12/11/c_1672221949570926.htm (in Chinese)
The Regulations target deep synthesis technology, i.e., algorithms that synthesize text, audio, video, virtual scenes, and other network information. The accompanying Regulations FAQs state that providers of deep synthesis technology must provide safe and controllable safeguards and conform with data protection obligations.
- 9/26/2021 – Ministry of Science and Technology (“MOST”), New Generation of Artificial Intelligence Ethics Code
http://www.most.gov.cn/kjbgz/202109/t20210926_177063.html (in Chinese)
The Code aims to integrate ethics and morals into the full life cycle of AI systems, promote fairness, justice, harmony, and safety, and avoid problems such as prejudice, discrimination, privacy violations, and information leakage. The Code provides for specific ethical requirements in AI technology design and maintenance.
- 1/5/2021 – National Information Security Standardisation Technical Committee of China (“TC260”), Cybersecurity practice guide on AI ethical security risk prevention
https://www.tc260.org.cn/upload/2021-01-05/1609818449720076535.pdf (in Chinese)
The guide highlights ethical risks associated with AI, and provides basic requirements for AI ethical security risk prevention.
Denmark:
- 3/5/2024 – Datatilsynet, New regulatory sandbox for AI
https://www.datatilsynet.dk/presse-og-nyheder/nyhedsarkiv/2024/mar/ny-regulatorisk-sandkasse-for-ai
The Danish Data Protection Authority, in collaboration with the Danish Agency for Digitalisation, established a regulatory sandbox for AI, where companies and entities can access relevant expertise and GDPR guidance when they develop or use AI.
- 2/29/2024 – Danish Ministry of Business and Industry, Recommendations on tech development and use of artificial intelligence
https://em.dk/Media/638447961317309808/Tech-ekspertgruppens%20anbefalinger.pdf (in Danish)
The Danish Government’s expert group announced recommendations on tech giants’ development and use of AI, which aim to harness the potential of AI while mitigating its potential harmful effects. The recommendations focus on regulating unauthorized use of copyrighted material, imposing responsibility on tech giants for the credibility of information, and setting default standards for chatbots.
E.U.:
- 2/21/2024 – European Commission, Creation of AI Office
https://digital-strategy.ec.europa.eu/en/policies/ai-office
The European Commission announced the creation of the European AI Office, which is established within the Commission and will play a key role in implementing the EU’s AI Act. The AI Office will work with public and private entities to promote cooperation and adoption of the EU’s AI Act.
- 1/20/2022 – European Institute of Innovation & Technology (“EIT”), AI Maturity Tool
https://ai.eitcommunity.eu/ai-maturity-tool/
The EIT published a web-based AI maturity tool which allows businesses to assess how prepared they are for the use of AI, and which will allow businesses to compare their maturity level to that of other organizations in the future.
- European Telecommunications Standards Institute (“ETSI”) Industry Specification Group Securing Artificial Intelligence (“ISG SAI”)
https://www.etsi.org/committee/1640-sai
The ISG SAI has published standards to preserve and improve the security of AI. The works focus on using AI to enhance security, mitigating attacks that leverage AI, and securing AI itself from attack.
- 7/14/2021 – European Commission’s Joint Research Centre (“JRC”), Report
https://publications.jrc.ec.europa.eu/repository/handle/JRC125952
The JRC published this report on the AI standardization landscape. The report describes ongoing standardization efforts on AI and aims to contribute to the definition of a European standardization roadmap.
- 4/21/2021 – European Commission, “Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts”
https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=75788
The EU Commission proposed a new AI Regulation – a set of flexible and proportionate rules that will address the specific risks posed by AI systems, intending to set the highest global standard. As an EU regulation, the rules would apply directly across all European Member States. The regulation proposal follows a risk-based approach and calls for the creation of a European enforcement agency.
France:
- 4/5/2022 – French Data Protection Authority (“CNIL”), AI and GDPR Compliance Guide and Self-Assessment Tool
https://www.cnil.fr/fr/intelligence-artificielle/ia-comment-etre-en-conformite-avec-le-rgpd (in French)
https://www.cnil.fr/fr/intelligence-artificielle/guide (in French)
In its newest set of resources for AI compliance, CNIL offers a step-by-step guide to GDPR compliance when utilizing AI. Additionally, CNIL published a Self-Assessment Guide for AI Systems, which allows organizations to assess the maturity of their AI systems with regard to the GDPR, along with best practice guidance.
- 9/3/2020 – CNIL, Whitepaper and Guidance on Use of Voice Assistance
https://www.cnil.fr/sites/default/files/atoms/files/cnil_livre-blanc-assistants-vocaux.pdf (in French)
This whitepaper explores legal and technical considerations for developers and businesses which may utilize voice assistance technology in light of recent AI technology development. It further includes best practices and recommended approaches.
Germany:
- 5/24/2022 – Federal Office for Information Security, Towards Auditable AI Systems whitepaper
https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/KI/Towards_Auditable_AI_Systems_2022.pdf?__blob=publicationFile&v=4
This paper emphasizes the need for methods to audit AI technology, to guarantee trustworthiness and ensure integration of emerging AI standards. The paper seeks to advance the auditability of AI systems by proposing a newly developed certification system.
- 6/15/2021 – Federal Financial Supervisory Authority (“BaFin”), “Big Data and Artificial Intelligence”
https://www.bafin.de/SharedDocs/Downloads/EN/Aufsichtsrecht/dl_Prinzipienpapier_BDAI_en.pdf?__blob=publicationFile&v=2
This paper provides key principles and best practices for the use of algorithms and AI in decision making processes.
- 5/6/2021 – Federal Office for Information Security (“BSI”), “Towards Auditable AI Systems”
https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/KI/Towards_Auditable_AI_Systems.pdf?__blob=publicationFile&v=4
This whitepaper addresses current issues with and possible solutions for AI systems, with a focus on the goal of auditability and standardization of AI systems.
Hong Kong:
- 8/18/2021 – Office of the Privacy Commissioner for Personal Data (“PCPD”), “Guidance on the Ethical Development and Use of Artificial Intelligence”
https://www.pcpd.org.hk/english/resources_centre/publications/files/guidance_ethical_e.pdf
This guidance discusses ethical principles for AI development and management while also highlighting recent developments in AI governance around the globe. The guidance further includes a helpful self-assessment checklist in its appendix concerning businesses’ AI strategy and governance, risk assessment and human oversight, development and management of AI systems, as well as communication and engagement with stakeholders.
India:
- 9/28/2021 – INDIAai, “Mitigating Bias in AI – A Handbook For Startups”
https://indiaai.s3.ap-south-1.amazonaws.com/docs/AI+Handbook_27-09-2021.pdf
INDIAai, a government-based initiative, published this formalized framework for startups. The handbook identifies different risk factors that may lead to bias in AI.
- 7/15/2021 – Data Security Council of India (“DSCI”), “Handbook on Data Protection and Privacy for Developers of Artificial Intelligence in India”
https://www.dsci.in/sites/default/files/documents/resource_centre/AI%20Handbook.pdf
The handbook establishes guidelines for responsible and ethical AI development in line with the applicable legal data protection framework. While the handbook does not provide technical solutions, focusing instead on the ethical and legal objectives to pursue when designing AI systems, it does provide a checklist of questions and good practices that developers should keep in mind during the design process.
- 2/24/2021 – National Institution for Transforming India (“NITI Aayog”), “Responsible AI”
http://www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-22022021.pdf
In this paper, the Government think tank highlights the ethical and legal framework for AI technology management. The paper further includes a self-assessment guide for AI usage in its annex.
International:
- 3/5/2024 – Organisation for Economic Co-operation and Development (“OECD”), Explanatory Memorandum on the Updated OECD Definition of an AI System
https://www.oecd-ilibrary.org/docserver/623da898-en.pdf?expires=1709828184&id=id&accname=guest&checksum=B906C1E98329EC8C1E539374B37DF045
The OECD published a memorandum that revisits the definition of an artificial intelligence system contained in the 2019 OECD Recommendation on AI, redefining and expanding the term. The memorandum recognizes that the new definition, even though it is broader, may nonetheless require additional criteria to tailor it to a specific use case or context.
- 9/28/2023 – OECD, Catalogue of Tools & Metrics for Trustworthy AI
https://oecd.ai/en/
The OECD published a catalogue of tools and metrics for building and deploying trustworthy AI systems. This catalogue provides users with a one-stop-shop for tools that can mitigate bias, measure performance, audit systems, and create procedural processes to oversee the system.
- 8/1/2023 – Future of Privacy Forum (“FPF”), Generative AI for Organizational Use: Internal Policy Checklist
https://fpf.org/wp-content/uploads/2023/07/Generative-AI-Checklist.pdf
To help organizations begin regulating the use of generative AI, FPF released a checklist for revising policies and procedures governing generative AI. The checklist provides a non-exhaustive list of topics to consider when revising such policies and procedures.
- 5/31/2023 – EU-US Terminology and Taxonomy for Artificial Intelligence
https://digital-strategy.ec.europa.eu/en/library/eu-us-terminology-and-taxonomy-artificial-intelligence
To align EU and US risk-based approaches to regulating AI, a group of experts created this document to provide a unified approach to AI terminologies and taxonomies. A total of 65 terms were identified with reference to key documents from the EU and US.
- International Organization for Standardization (“ISO”) – ISO/IEC 23894:2023 Information technology — Artificial intelligence — Guidance on risk management
https://www.iso.org/standard/77304.html
This document provides guidance on how organizations that develop, produce, deploy, or use products, systems, and services that utilize AI can manage risk specifically related to AI. The guidance also aims to assist organizations in integrating risk management into their AI-related activities and functions, and describes processes for the effective implementation and integration of AI risk management.
- ISO – ISO/IEC 38507:2022
https://www.iso.org/standard/56641.html
Together with the International Electrotechnical Commission (“IEC”), ISO has published a number of AI standards in recent years. The newest standard, published in April 2022 and titled “Governance implications of the use of artificial intelligence by organizations”, provides guidance for the governing bodies of organizations regarding the use and implications of AI.
- ISO – ISO/IEC JTC 1/SC 42 Standards
https://www.iso.org/committee/6794475/x/catalogue/p/1/u/0/w/0/d/0
These standards, published in March 2021, provide background about existing methods to assess the robustness of neural networks. Additional AI standards are currently under development.
- 9/15/2022 – Information Technology Industry Council (“ITI”), Policy Principles for Enabling Transparency of AI Systems
https://www.itic.org/documents/artificial-intelligence/ITIsPolicyPrinciplesforEnablingTransparencyofAISystems2022.pdf
The ITI published guidance for policymakers, emphasizing the need for transparency as a critical part of developing accountable and trustworthy AI systems.
- 2/22/2022 – OECD, Framework for the Classification of AI Systems
https://www.oecd-ilibrary.org/science-and-technology/oecd-framework-for-the-classification-of-ai-systems_cb6d9eca-en;jsessionid=lWU_vM8LQfX-wAZgVIjj31FS.ip-10-240-5-181
In the Framework, the OECD has developed a tool to evaluate AI systems from a policy perspective, providing a baseline to characterize the application of an AI system deployed in specific contexts. The Framework contributes to the OECD’s “AI in Work, Innovation, Productivity, and Skills” (“AI-WIPS”) program.
- 1/26/2022 – Information Technology Industry Council (“ITI”), Recommendations on NIST AI Risk Management Framework
https://www.itic.org/documents/artificial-intelligence/ITICommentsonAIRMFConceptPaperFINAL.pdf
In response to the AI Risk Management Framework concept paper released by NIST, the ITI published a series of recommendations to improve the framework, encouraging NIST to align it with prior work as well as standards currently under development in international standards bodies.
- 1/18/2022 – ITI, Recommendations on AI-enabled Biometric Technologies
https://www.itic.org/documents/artificial-intelligence/ITICommentsBiometricTechRFIFINAL.pdf
ITI released a series of recommendations addressed to the U.S. Government regarding the use of AI and biometric technologies, elaborating on governance programs and practices that may be useful to consider in the context of biometric technologies, including with regard to performance auditing and post-deployment impact assessment.
Japan:
- 4/8/2022 – Ministry of Economy, Trade, and Industry (“METI”), Artificial Intelligence Introduction Guidebook for Small and Medium Sized Companies
https://www.meti.go.jp/policy/it_policy/jinzai/AIutilization.html (in Japanese)
The Guidebook provides SMEs with guidance on how to prepare for and begin utilization of AI in their enterprises, providing practical steps for decision-making.
- 2/15/2022 – Ministry of Internal Affairs and Communications (“MIC”), Guidebook on Cloud Services Using AI
https://www.soumu.go.jp/main_content/000792669.pdf (in Japanese)
The Guidebook summarizes the steps to keep in mind when developing and providing AI cloud services while gaining the trust of users and considering data collection requirements.
- 1/28/2022 – METI, Governance Guidelines for Implementation of AI Principles
https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20220128_2.pdf
The METI has released an updated version of its Guidelines for the Practice of Artificial Intelligence Principles, outlining AI governance rules which include risk analysis, systems design, implementation and evaluation, along with providing practical examples.
- 8/4/2021 – MIC, AI Network Society Promotion Council Report
https://www.soumu.go.jp/main_content/000761967.pdf (in Japanese)
The report highlights recent trends in AI utilization as well as efforts to promote secure and reliable social implementation of AI.
Jordan:
- 8/5/2022 – Ministry of Digital Economy and Entrepreneurship, National Charter of Ethics for Artificial Intelligence
https://tinyurl.com/w4e3acdy
The charter provides an ethical baseline to regulate the development of AI technologies. The charter includes a set of principles that include accountability, transparency, impartiality, respect for privacy, promotion of human values, and other such principles that promote democratic values, human rights, and diversity.
Mexico:
- 6/1/2022 – National Institute for Access to Information and Protection of Personal Data (“INAI”), Recommendations for the Processing of Personal Data derived from the Use of Artificial Intelligence
https://home.inai.org.mx/wp-content/documentos/DocumentosSectorPublico/RecomendacionesPDP-IA.pdf (in Spanish)
The INAI released its recommendations concerning regulation of personal data and AI technology. In particular, the recommendations focus on such topics as AI and its implication in public security, AI in the education sector, AI and privacy by design, AI and cloud computing, and more.
Saudi Arabia:
- 4/27/2022 – Saudi Food and Drug Authority (‘SFDA’), “Guidance on Review and Approval of AI and Big Data based Medical Devices”
https://beta.sfda.gov.sa/sites/default/files/2021-04/SFDAArtificial%20IntelligenceEn.pdf
The Guidance sets out the requirements for obtaining a Medical Devices Marketing Authorization for AI-based medical devices within the KSA. It applies to the standalone software type of medical devices, which diagnose, manage, or predict diseases by analyzing medical Big Data using AI, as well as to AI software that is configured with hardware.
Senegal:
- 4/13/2023 – Commission de Protection des Données Personnelles (“CDP”)
https://cdp.sn/content/deliberation-de-portee-generale-n%C2%B000645cdp-du-13-avril-2023-relative-l%E2%80%99utilisation-de (in French)
The Senegalese data protection authority, the CDP, published a deliberation on the use of biometric devices within the workplace, which may include the use of such information within AI technologies. The deliberation focuses on when and how such biometric processing can occur, how the information must be stored and secured, and the obligations that must be imposed on vendors.
Singapore:
- 3/1/2024 – Personal Data Protection Commission (“PDPC”), Advisory Guidelines on the use of Personal Data in AI Recommendation and Decision Systems
https://www.pdpc.gov.sg/guidelines-and-consultation/2024/02/advisory-guidelines-on-use-of-personal-data-in-ai-recommendation-and-decision-systems
The guidelines detail their applicability to the design and development of AI systems that use personal data governed by the Personal Data Protection Act (No. 26 of 2012). The guidelines provide best practices for organizations on how to be transparent about whether and how their AI systems utilize personal data.
- 5/24/2022 – Infocomm Media Development Authority (“IMDA”) and the PDPC, AI Verify governance testing framework and toolkit
https://file.go.gov.sg/aiverify-primer.pdf, along with https://file.go.gov.sg/aiverify.pdf
The AI Verify framework and toolkit seeks to help companies integrate responsible AI and transparency in their products and services, and facilitate interoperability between governance frameworks. The toolkit allows developers and owners to conduct self-testing of AI systems against eight principles, including fairness, transparency, accountability, and human oversight.
- 2/4/2022 – Monetary Authority of Singapore (“MAS”), Fairness, Ethics, Accountability, and Transparency (“FEAT”) assessment methodology whitepapers
https://www.mas.gov.sg/news/media-releases/2022/mas-led-industry-consortium-publishes-assessment-methodologies-for-responsible-use-of-ai-by-financial-institutions
The MAS released five whitepapers providing AI assessment guidelines for financial institutions, including a comprehensive checklist for responsible AI development and use.
- 10/20/2020 – PDPC, “Compendium of Use Cases”
https://file.go.gov.sg/ai-gov-use-cases-2.pdf
This practical illustration includes a number of use cases which outline how different organizations have effectively aligned their AI governance practices with the Model Framework.
- 1/21/2020 – PDPC, “Model AI Governance Framework” (Second Edition)
https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgmodelaigovframework2.pdf
along with “Implementation and Self-Assessment Guide for Organizations” https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGIsago.pdf
The Model Framework focuses on internal governance, decision making models, operations and customer relationship management. The Implementation Guide provides useful industry examples, best practices and specific practice guides for the employment of AI.
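Self-testing toolkits such as AI Verify (above) evaluate deployed models against principles like fairness by computing quantitative metrics over model outputs. As an illustration of the kind of check such toolkits automate, here is a minimal sketch of one standard group-fairness metric, demographic parity difference. The function name and example data are hypothetical and are not part of any toolkit's actual API.

```python
# Illustrative sketch of a group-fairness check of the kind governance
# toolkits automate. Names here are hypothetical, not a real toolkit API.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between two groups.

    predictions: 0/1 model outputs
    groups:      group label for each prediction (exactly two distinct values)
    """
    labels = sorted(set(groups))
    if len(labels) != 2:
        raise ValueError("expected exactly two groups")
    rates = []
    for label in labels:
        # Positive-prediction rate within this group
        outcomes = [p for p, g in zip(predictions, groups) if g == label]
        rates.append(sum(outcomes) / len(outcomes))
    return abs(rates[0] - rates[1])

# Hypothetical example: a model approves 3/4 of group "a" but 1/4 of group "b"
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value near zero suggests the model's positive outcomes are distributed similarly across groups; real frameworks pair such metrics with documentation and process checks rather than relying on any single number.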
South Korea:
- 8/3/2023 – Personal Information Protection Commission (“PIPC”), Guidance for Safe Use of Personal Information in the Age of AI
https://www.pipc.go.kr/np/cop/bbs/selectBoardArticle.do?bbsId=BS074&mCode=C020010000&nttId=9083 (in Korean)
The PIPC’s guidance presents rules on how to interpret and apply the Personal Information Protection Act (PIPA) in the AI environment. The guidance presents a blueprint for how government and the private sector can cooperate to regulate AI.
- 8/30/2022 – Ministry of Science and ICT, Expanded Virtual World Ethics Principles
https://www.msit.go.kr/bbs/view.do?sCode=user&mId=113&mPid=112&pageIndex=&bbsSeqNo=94&nttSeqNo=3182064 (in Korean)
The Ministry of Science and ICT outlined principles to address the growing development of artificial intelligence, among other technologies. The report details eight principles meant to serve as a code of ethics addressing what it calls the expanded virtual world.
- 8/11/2021 – PIPC, Summary Report on Data Protection Regulatory Sandbox
https://www.pipc.go.kr/np/cop/bbs/selectBoardArticle.do?bbsId=BS074&mCode=C020010000&nttId=7487#LINK (in Korean)
The report highlights data protection considerations arising out of 133 cases, specifically including the use of robotics and new technology, such as unmanned moving objects.
- 7/20/2021 – PIPC, “AI Personal Information Protection Self-checklist”
https://www.metaverselaw.com/wp-content/uploads/2021/09/Artificial-IntelligenceAI-Personal-Information-Protection-Self-Checklist2021.07.20.final★.pdf
The checklist provides guidelines for the protection of personal information gathered and used by artificial intelligence. The checklist includes requirements and guidelines for each stage of development of AI systems and is meant to account for the flexibility and continuous change of AI technology.
Spain:
- 1/12/2021 – Spanish Data Protection Authority (“AEPD”), “Audit Requirements for Personal Data Processing Activities Involving AI”
https://www.aepd.es/sites/default/files/2021-01/requisitos-auditorias-tratamientos-incluyan-ia-en.pdf
This paper aims to help evaluate regulatory compliance of AI systems by providing methodologies and control objectives to be included in data protection audits for processes that incorporate AI components or solutions.
Sweden:
- 2/28/2024 – Swedish Authority for Privacy Protection, Guidance on the GDPR and AI
https://www.imy.se/verksamhet/dataskydd/innovationsportalen/vagledning-om-gdpr-och-ai/ (in Swedish)
The guidance discusses artificial intelligence from two viewpoints: technical and legal. The technical portion includes explanations of AI, machine learning, and deep learning, along with professional insights into AI training models. The legal portion focuses on how to determine when the GDPR applies to the development and use of AI.
Turkey:
- 9/15/2021 – Turkish Personal Data Protection Authority (“KVKK”), Recommendations When Processing Personal Data Using AI
https://kvkk.gov.tr/SharedFolderServer/CMSFiles/25a1162f-0e61-4a43-98d0-3e7d057ac31a.pdf (in Turkish)
This guidance discusses fundamental principles for AI development and management, and advises developers, manufacturers, and service providers on privacy by design and data minimization approaches.
U.K.:
- 2/26/2024 – Information Commissioner’s Office (“ICO”), Generative AI second call for evidence: Purpose limitation in the generative AI lifecycle
https://ico.org.uk/about-the-ico/what-we-do/our-work-on-artificial-intelligence/generative-ai-second-call-for-evidence/
The ICO launched a consultation series on generative AI, which, in part, focuses on how the data protection principle of purpose limitation should be applied at different stages in the generative AI life cycle. The consultation highlights the importance for AI developers of setting out clear purposes for each stage of the AI life cycle and explaining what personal data is processed in each stage.
- 6/7/2023 – Department for Science, Innovation and Technology (“DSIT”), “Find out about artificial intelligence (AI) assurance techniques”
https://www.gov.uk/ai-assurance-techniques
Following up on the UK government’s AI Regulation White Paper (see next bullet), DSIT created a portfolio of use cases illustrating various AI assurance techniques being used in the real world to support the development of trustworthy AI. The portfolio includes case studies from multiple sectors and features a range of technical, procedural, and educational approaches to promote responsible AI.
- 3/29/2023 – DSIT, “A pro-innovation approach to AI regulation”
https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper
The DSIT published a white paper introducing an AI regulation framework underpinned by five principles: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. Rather than recommend specific AI legislation, the white paper recommends that existing regulators incorporate these principles into their enforcement efforts.
- 3/15/2023 – ICO, Guidance on AI and data protection
https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/guidance-on-ai-and-data-protection/
The ICO published guidance clarifying requirements for fairness in AI. The document includes guidance on solely automated decision-making and technical approaches to mitigating algorithmic bias.
- 5/4/2022 – ICO, AI and Data Protection Risk Toolkit
https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/guidance-on-ai-and-data-protection/ai-and-data-protection-risk-toolkit/
The ICO launched its updated AI and Data Protection Risk Toolkit, which contains risk statements to help organizations using AI correctly assess the risk of their processing practices. The toolkit provides suggestions and practical steps for technical and organizational measures used to mitigate risks and demonstrate compliance with applicable data protection laws. It further includes references to other core resources.
- 1/12/2022 – Department for Digital, Culture, Media & Sport (“DCMS”) and Office for Artificial Intelligence (“OAI”), AI Standards Hub Pilot
https://www.gov.uk/government/news/new-uk-initiative-to-shape-global-standards-for-artificial-intelligence
The DCMS and OAI announced the pilot of a new AI Standards Hub as part of the UK’s National AI Strategy. In its pilot phase, the Hub will focus on creating tools and guidance for education, training, and professional development to help businesses engage with creating AI technical standards, and on bringing the AI community together through workshops, events, and a new online platform to encourage more coordinated engagement in the development of standards around the world.
- 9/22/2021 – UK Secretary of State for Digital, Culture, Media & Sport (“DCMS”), “National AI Strategy”
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1020402/National_AI_Strategy_-_PDF_version.pdf
The UK Government announced its National AI Strategy, which aims to invest and plan for the long-term needs of the AI ecosystem, support the transition to an AI-enabled economy, and ensure the UK governs AI effectively.
- 5/5/2020 – ICO, “Explaining Decisions Made with AI”
https://ico.org.uk/for-organisations/guide-to-data-protection/key-data-protection-themes/explaining-decisions-made-with-ai/
This detailed guidance, released by the ICO in cooperation with the Alan Turing Institute, gives businesses practical advice on explaining AI decision-making processes and their effects to affected individuals, and on the considerations necessary for compliance with existing data protection laws.
U.S.:
- 10/31/2023 – National Institute of Standards and Technology (“NIST”), “Executive Order FAQs”
https://www.nist.gov/artificial-intelligence/executive-order-safe-secure-and-trustworthy-artificial-intelligence/executive-order-faqs
The Biden Administration’s EO on Safe, Secure, and Trustworthy Artificial Intelligence, issued on October 30, 2023, charges multiple agencies – including NIST – with producing guidelines and taking other actions to advance the safe, secure, and trustworthy development and use of artificial intelligence. In response, NIST released a short series of FAQs addressing the agency’s role in developing guidelines under the EO.
- 10/30/2023 – The White House, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”
https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
The Biden Administration issued an Executive Order that establishes new safety and security standards for the use of AI. This whole-of-government approach requires numerous agencies to develop standards for what constitutes “responsible” uses of artificial intelligence.
- 10/30/2023 – The White House, “FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence”
https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
This is the accompanying Fact Sheet for the Biden Administration’s Executive Order on the safe, secure, and trustworthy development and use of AI (immediately above).
- 3/9/2023 – U.S. Chamber of Commerce, “CTEC AI Commission 2023”
https://www.uschamber.com/assets/documents/CTEC_AICommission2023_Report_v5.pdf
The U.S. Chamber of Commerce published a report calling for the regulation of AI and outlining five key principles that stakeholders should consider when drafting a regulatory framework. In contrast to the White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights, the Chamber’s report seeks to regulate AI without hindering economic development.
- 1/26/2023 – NIST, “AI Risk Management Framework”
https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
NIST released the first version of the Artificial Intelligence Risk Management Framework (AI RMF). The AI RMF is a voluntary resource meant to help organizations manage the many risks of AI and promote trustworthy and responsible development and use of AI systems. As a flexible framework designed to adapt to a wide range of systems, products, and organizations, the AI RMF provides a list of characteristics that must be balanced based on the AI system’s context of use.
- 10/4/2022 – White House Office of Science and Technology Policy, “Blueprint for an AI Bill of Rights”
https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf
The White House Office of Science and Technology Policy published a non-binding white paper detailing a list of principles that, if incorporated into the development and use of AI technologies, should protect the American public during the age of artificial intelligence. The document calls upon policymakers to adopt these principles when considering how to regulate AI technologies.
- 5/13/2022 – Department of Justice Civil Rights Division, “Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring”
https://beta.ada.gov/resources/ai-guidance/
The guidance explains how the use of algorithms and AI in hiring can lead to disability discrimination and legal consequences, and details how employers can avoid such discrimination when using AI technology.
- 3/16/2022 – NIST, “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence”
https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf
In this Special Publication, NIST analyzes the challenges of AI bias and aims to provide detailed socio-technical guidance for identifying and managing it.
- 12/14/2021 – NIST, “AI Risk Management Framework Concept Paper”
https://www.nist.gov/system/files/documents/2021/12/14/AI%20RMF%20Concept%20Paper_13Dec2021_posted.pdf
NIST developed this concept paper, released for public review, for the Artificial Intelligence Risk Management Framework (“AI RMF”), which is intended for voluntary use and addresses risks in the design, development, use, and evaluation of AI products, services, and systems. NIST stated that it intends to release AI RMF 1.0 in early 2023.
- 7/30/2021 – Department of Homeland Security (“DHS”), “Artificial Intelligence and Machine Learning Strategic Plan”
https://www.dhs.gov/sites/default/files/publications/21_0730_st_ai_ml_strategic_plan_2021.pdf
The strategic plan of DHS’ Science and Technology Directorate (“S&T”) outlines goals for ensuring that AI/ML research, development, test, evaluation, and departmental applications comply with statutory and other legal requirements and sustain privacy protections, civil rights, and civil liberties. It further advises stakeholders on recent developments in AI/ML and the associated opportunities and risks.
- 5/5/2021 – Electronic Privacy Information Center (“EPIC”), “New National Artificial Intelligence Initiative Office Website”
https://www.ai.gov/
The White House launched its new website, AI.gov, featuring policy priorities, reports, and news regarding AI.
- 4/19/2021 – Federal Trade Commission (“FTC”), “Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI”
https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai
In this blog post, the FTC offers guidance for companies on their use of AI, specifically instructing them to show transparency and accountability when employing new algorithms.
- 4/8/2020 – FTC, “Using Artificial Intelligence and Algorithms”
https://www.ftc.gov/news-events/blogs/business-blog/2020/04/using-artificial-intelligence-algorithms
In this blog post, the FTC outlines best practices when relying on algorithms and highlights key principles such as transparency, fairness, accuracy, and accountability.
- 9/9/2019 – NIST, “U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools”
https://www.nist.gov/artificial-intelligence/ai-standards-federal-engagement
Following an executive order directing federal agencies to develop international standards that promote and protect innovation and public confidence in AI technologies, NIST published this plan, which provides guidance on priorities and appropriate levels of federal engagement in AI standards development.
*While extensive, this list is not meant to be exhaustive. We will do our best to update this list from time to time, and add new guidance as it becomes available.