Around the world, governments are building regulatory frameworks as foundations for the ethical and responsible use of AI. Many are at the proposal stage and open to feedback and public commentary (e.g., Singapore).
For companies with entities in multiple jurisdictions, internal and external Legal and Compliance teams will be responsible for ensuring compliance at the regional, national, and continental levels.
On a practical level, how can companies track and manage those potential compliance standards – already numbering in the thousands – as easily and accurately as possible?
It’s this question that’s driving development of our Compliance Standards Database, with respect to one piece of the larger puzzle: how Generative AI is used by an organization’s software developers (GenAI for code).
The goal of the Database is to enable organizations, particularly those with simultaneous nexus in many geographies, to stay ahead of the curve on compliance. Periodically, our team publishes situational assessments of hypothetical organizations in different regions.
Below, we share an assessment for a hypothetical company in Baden-Württemberg, Germany. Note: this isn’t legal advice. You’ll want to loop in your Legal team to assess your organization’s specific compliance risk posture.
No city-based standards for GenAI for code were identified.
Overview: The Discussion Paper covers data privacy concerns that are similar to, and overlap with, the GDPR, with a particular focus on employee information.
- Data collection about employees' GenAI use will need to meet GDPR requirements: "First, the legal basis of the GDPR for the processing of personal data will be presented, which is applicable to both public and non-public bodies."
- Using GenAI to evaluate employees' code will receive higher scrutiny: "In the context of employee data protection, it should also be noted that strict standards must be applied when checking consent due to a subordination and superiority relationship."
Name: German Copyright Law
Overview: The copyright law currently in force in Germany.
Status: Final and Implemented
- Purely AI-generated works will not get copyright protection: Works created solely by AI systems are not eligible for copyright protection. If there is sufficient human influence on the act of creation, only the natural persons behind the AI would be recognized as authors.
- Patentability: An AI cannot be an inventor, and a human being cannot be regarded as the legal successor of an AI as the actual inventor of a patent.
Name: EU AI Act
Overview: On December 8, 2023, negotiators from the European Parliament and the bloc’s 27 member countries reached an agreement, making Europe the first continent to set clear rules for the use of AI. The act must now be formally approved by the European Parliament and the Council, the representative body of the 27 member states, which is due to take place in April 2024, at the end of the parliament’s legislative period. Member states will then have two years to transpose the AI law into national law.
Status: Final but not Implemented
- High-Risk AI Systems will be regulated, not banned. High-Risk AI Systems are those that are used in critical infrastructure (e.g., power grids, hospitals, etc.), those that help make decisions regarding people’s lives (e.g., employment or credit rating), or those that have a significant impact on the environment.
- Foundation model providers must meet registration requirements, including:
  - Describing data sources
  - Describing capabilities and limitations
  - Articulating risks and mitigations
  - Describing evaluations against industry benchmarks
  - Preparing all necessary technical documentation for downstream providers
  - Providing transparency surrounding machine-generated content
  - Documenting training on copyrighted data
At present, there are no global regulatory or compliance requirements that organizations must conform to regarding the use of Generative AI in software development.
Name: International Code of Conduct for Organizations Developing Advanced AI Systems
Overview: The Code of Conduct aims to promote safe, secure, and trustworthy AI worldwide and provides voluntary guidance for organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems.
Status: Final and Implemented
- Take appropriate measures throughout the development of advanced AI systems, including prior to and throughout their deployment and placement on the market, to identify, evaluate, and mitigate risks across the AI lifecycle.
- Identify and mitigate vulnerabilities, and, where appropriate, incidents and patterns of misuse, after deployment including placement on the market.
- Publicly report advanced AI systems’ capabilities, limitations, and domains of appropriate and inappropriate use, to support ensuring sufficient transparency, thereby contributing to increase accountability.
- Work towards responsible information sharing and reporting of incidents among organizations developing advanced AI systems, including with industry, governments, civil society, and academia.
- Develop, implement, and disclose AI governance and risk management policies, grounded in a risk-based approach, including privacy policies and mitigation measures.
- Invest in and implement robust security controls, including physical security, cybersecurity, and insider threat safeguards across the AI lifecycle.
- Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content.
- Prioritize research to mitigate societal, safety, and security risks, and prioritize investment in effective mitigation measures.
- Advance the development of and, where appropriate, adoption of international technical standards.
- Implement appropriate data input measures and protections for personal data and intellectual property.
ISO/IEC FDIS 42001 - The International Organization for Standardization (ISO) is developing AI Management System standards. ISO/IEC FDIS 42001 is projected to be published in December 2023.
Partnership on AI - Partnership on AI (PAI) is a non-profit partnership of academic, civil society, industry, and media organizations creating solutions so that AI advances positive outcomes for people and society. By convening diverse, international stakeholders, PAI seeks to pool collective wisdom to make change.
No significant guidelines have surfaced from German trade associations that organizations within Baden-Württemberg must comply with.
Generative AI Bill of Materials - Several leading investors have indicated that GenAI Code Composition (what percentage of the code is AI-generated, how that code is managed, and which compliance standards apply) will be included as a due diligence topic beginning in Q1 2024.
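To make the "GenAI Code Composition" idea concrete, here is a minimal sketch of how an organization might compute the AI-generated share of a codebase for a Bill of Materials. The schema and field names are illustrative assumptions, not a standard format; real attribution of lines to GenAI tools is a separate, harder problem.

```python
from dataclasses import dataclass

@dataclass
class FileAttribution:
    """Per-file attribution record (hypothetical schema for illustration)."""
    path: str
    total_lines: int
    ai_generated_lines: int  # lines attributed to GenAI tooling

def genai_composition(files: list[FileAttribution]) -> float:
    """Percentage of total lines attributed to GenAI across the codebase."""
    total = sum(f.total_lines for f in files)
    ai = sum(f.ai_generated_lines for f in files)
    return round(100.0 * ai / total, 1) if total else 0.0

# Example: a two-file repository where 300 of 1,000 lines are AI-attributed.
repo = [
    FileAttribution("src/app.py", 800, 200),
    FileAttribution("src/utils.py", 200, 100),
]
print(genai_composition(repo))  # 30.0
```

A real Bill of Materials would also record which tools produced the code and under what license terms, since those details drive the applicable compliance standards.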
The need for governance and risk management of Generative AI in insurance - Insurers will need to adopt a governance model and risk management approach to address a unique and varied set of risks, including data security, privacy threats, and regulatory concerns about ethics and bias, among others.
Representative AI Regulation in Insurance - As the AI regulatory environment evolves and matures, sector-specific regulations will inevitably come to fruition.
Vendor/Purchaser of Technology Solution Contract Terms - Key considerations when supplying or purchasing a technology solution produced in whole or in part by Generative AI.
Prioritizing Ethics and Transparency in Procurement - Deloitte: How Generative AI will transform sourcing and procurement operations.
Keeping track of global GenAI compliance standards
Periodically, Sema publishes a no-cost newsletter covering new developments in GenAI code compliance. The newsletter shares snapshots and excerpts from Sema’s GenAI Code Compliance Database. Topics include recent regulations, lawsuits, stakeholder requirements, mandatory standards, and optional compliance standards. The scope is global.
About Sema Technologies, Inc.
Sema is the leader in comprehensive codebase scans, with over $1T of enterprise software organizations evaluated to inform our dataset. We are now accepting pre-orders for AI Code Monitor, which translates compliance standards into “traffic light” warnings for CTOs leading fast-paced, highly productive engineering teams. You can learn more about our solution by contacting us here.
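The "traffic light" idea can be sketched as a simple mapping from a standard's status to an urgency color. This is an illustrative assumption about how such a mapping might work, not Sema's actual product logic; the status strings mirror the "Status:" values used in the assessment above.

```python
def traffic_light(status: str, applies_to_org: bool) -> str:
    """Map a compliance standard's status to an illustrative warning color."""
    if not applies_to_org:
        return "green"   # out of scope for this organization
    if status == "Final and Implemented":
        return "red"     # binding now: review immediately
    if status == "Final but not Implemented":
        return "yellow"  # binding soon: plan ahead
    return "green"       # proposal stage: monitor for changes

# Example: the EU AI Act ("Final but not Implemented") for an in-scope company.
print(traffic_light("Final but not Implemented", True))  # yellow
```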
Sema publications should not be construed as legal advice on any specific facts or circumstances. The contents are intended for general information purposes only. To request reprint permission for any of our publications, please use our “Contact Us” form. The availability of this publication is not intended to create, and receipt of it does not constitute, an attorney-client relationship. The views set forth herein are the personal views of the authors and do not necessarily reflect those of the Firm.