by Dennis Crouch
The White House this week issued a new executive order addressing a variety of aspects of regulating artificial intelligence, some of which focus on IP issues. The executive order lays out eight guiding principles for managing risks while allowing growth and benefits:
- Ensuring AI is safe, secure and trustworthy, including through developing guidelines, standards and best practices, verifying reliability, and managing risks related to national security, critical infrastructure and cybersecurity.
- Promoting innovation and competition in AI, such as through public-private partnerships, addressing intellectual property issues in ways that “protect inventors and creators,” and ensuring market competition and opportunities for small businesses.
- Supporting workers affected by AI adoption, including through training, principles for workplace deployment, and analyzing labor market impacts.
- Advancing equity and civil rights when using AI in criminal justice, government benefits, hiring, and other areas.
- Protecting consumers, patients, passengers and students from risks of AI systems. “[C]onsumer protections are more important than ever in moments of technological change.”
- Safeguarding privacy including through evaluating commercial data use and advancing privacy-enhancing technologies. “[T]he Federal Government will ensure that the collection, use, and retention of data is lawful, is secure, and mitigates privacy and confidentiality risks.”
- Improving government use of AI.
- Strengthening American leadership abroad to advance international cooperation on AI.
Beyond the stated goals, the order has a number of requirements — most of them directed to the various Federal executive agencies. The tightest new regulations appear intended to focus on future AI models a few times larger than what OpenAI and others are currently deploying, as well as “dual-use” models that could have potential national security impact.
- It requires companies developing or intending to develop “potential dual-use foundation models” to provide information to the government about model training, ownership of model weights, results of AI “red team” testing, and measures taken to meet safety objectives.
- It authorizes the Secretary of Commerce to define the technical conditions that would trigger these reporting requirements. Until those conditions are defined, reporting is required for models trained using more than 10^26 computational operations, or more than 10^23 operations for models trained primarily on biological sequence data.
- NIST is charged with developing standards and tests to ensure that AI systems are safe, secure, and trustworthy, and the Department of Commerce is directed to develop guidance for content authentication and watermarking of AI-generated content.
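The interim compute thresholds above amount to a simple two-tier test. As a minimal sketch (the function and parameter names here are illustrative assumptions, not language from the order):

```python
# Interim reporting thresholds from the executive order:
# more than 1e26 total operations for general models, or more than
# 1e23 operations for models trained primarily on biological sequence data.
GENERAL_THRESHOLD = 1e26
BIO_SEQUENCE_THRESHOLD = 1e23

def reporting_required(training_operations: float,
                       primarily_biological: bool = False) -> bool:
    """Return True if a training run exceeds the interim compute
    threshold that triggers reporting to the government."""
    threshold = (BIO_SEQUENCE_THRESHOLD if primarily_biological
                 else GENERAL_THRESHOLD)
    return training_operations > threshold

# A 5e25-operation run falls below the general threshold, but the same
# amount of compute on biological sequence data would trigger reporting.
print(reporting_required(5e25))                              # False
print(reporting_required(5e25, primarily_biological=True))   # True
```

Note that the lower biological-sequence threshold means a model three orders of magnitude smaller than the general cutoff can still be covered.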
The intellectual property-related aspects of the executive order ask the appropriate agencies to work on the problem.
- It directs the USPTO Director to issue guidance to patent examiners and applicants on AI and inventorship, including issues related to using generative AI in the inventive process. It calls for the USPTO Director to issue updated guidance on patent eligibility for AI and emerging technologies.
- It instructs the USPTO Director to consult with the Copyright Office and make recommendations on potential executive actions related to copyright and AI, including the scope of protection for AI-generated works and the use of copyrighted works for AI training.
- It directs Homeland Security to develop a program to address AI-related intellectual property theft, including investigating incidents and pursuing enforcement actions. It also calls for updating the IP enforcement strategic plan to address AI.
- It encourages the FTC to use its authorities to promote competition in the AI marketplace and protect consumers and workers from related harms.
- It promotes public-private partnerships on advancing innovation, commercialization and risk-mitigation methods for AI. This includes addressing novel IP questions.
None of these requirements has immediate effect, but they indicate that there will be further action over the next few months.