Right before the holidays, the U.S. Federal Trade Commission (FTC) began its long-anticipated crackdown on deployers of artificial intelligence (AI) systems. While the FTC had previously taken action against alleged privacy-related violations in connection with AI systems, its complaint and proposed settlement regarding Rite Aid's use of AI-based facial-recognition surveillance technology was the agency's first foray into enforcing Section 5 of the FTC Act against non-privacy-related allegations arising out of AI deployments.

To learn more about the Rite Aid case and why we expect more such cases to follow, please read our recent Advisory and our recent Consumer Products blog post.

For ideas on putting in place a comprehensive, ongoing system for managing AI risks, see the U.S. National Institute of Standards and Technology's (NIST) Artificial Intelligence Risk Management Framework (Framework) and its accompanying Playbook, which explain how to govern, map, measure, and manage those risks. Please see an earlier Enforcement Edge post for more information about the Framework.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

Mr Alexis Sabet
Arnold & Porter
601 Mass. Ave., NW
Washington, DC 20001-3743
UNITED STATES
Tel: 202.942.5000
Fax: 202.942.5999
E-mail: anna.shelkin@arnoldporter.com
URL: www.arnoldporter.com

© Mondaq Ltd, 2024 - Tel. +44 (0)20 8544 8300 - http://www.mondaq.com, source Business Briefing