Overview

Although the European Parliament has approved the European Union (EU) Artificial Intelligence (AI) Act, most companies are not prepared to comply with its sweeping AI regulations.

We spoke with Nader Henein, VP Analyst at Gartner, to understand how well-prepared companies are, what they need to do to get ready in the near term, and how to get started.

Journalists who would like to speak with Nader regarding this topic can contact Laurence.Goasduff@Gartner.com. Members of the media can reference this material in articles with proper attribution to Gartner.

Q: How prepared are organizations to comply with the regulations outlined in the EU AI Act?

A: Many organizations think that because they are not building AI tools and services in-house, they are free and clear. What they don't realize is that almost every organization has exposure to the AI Act because they are not only responsible for the AI capabilities they build, but also those capabilities they already bought.

Gartner recommends organizations put in place an AI governance program to catalog and categorize AI use cases and address any banned instances as soon as possible.

The rules on prohibited AI systems take effect six months after the AI Act enters into force, and they carry the highest fine tier: €35 million or 7% of global turnover, whichever is greater. Eighteen months later (at the two-year mark), the majority of the rules associated with high-risk AI systems come into force. Those will apply to many enterprise use cases, requiring a fair bit of due diligence and even more of the documentation outlined in this and subsequent guides.
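To make the top fine tier concrete: the penalty is the greater of a flat €35 million or 7% of global annual turnover, so for large companies the percentage dominates. A minimal sketch (the function name is ours, not from the Act):

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Illustrative only: the AI Act's top fine tier is the greater of
    EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# For a company with EUR 1 billion turnover, 7% (EUR 70M) exceeds the flat EUR 35M.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```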

Q: What do organizations need to do to avoid hefty fines in the near term?

A: The first and most critical step is to discover and catalog AI-enabled capabilities with enough detail for the subsequent risk assessment.

Many organizations have hundreds of AI-driven capabilities deployed across the enterprise. Some are purpose-built, but the majority are invisible, embedded in many of the platforms used on a day-to-day basis.

Cataloging requires organizations, providers, and developers to undertake the discovery and listing of each AI-enabled system deployed across the enterprise. This will facilitate subsequent categorization into one of the four risk tiers outlined in the Act: low-risk AI systems, high-risk AI systems, prohibited AI systems, and general-purpose AI systems.
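The catalog-and-categorize step described above can be sketched as a simple data model. This is a hypothetical illustration, not a Gartner or EU-published tool; the record fields and names are our assumptions:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # The four tiers named in the Act, per the text above.
    LOW_RISK = "low-risk"
    HIGH_RISK = "high-risk"
    PROHIBITED = "prohibited"
    GENERAL_PURPOSE = "general-purpose"

@dataclass
class AISystemRecord:
    # Hypothetical catalog fields; adapt to your governance program.
    name: str
    vendor: str       # "internal" for in-house builds
    use_case: str
    tier: RiskTier

def prohibited_systems(catalog: list[AISystemRecord]) -> list[AISystemRecord]:
    """Surface banned instances first -- they carry the earliest
    compliance deadline and the highest fine tier."""
    return [r for r in catalog if r.tier is RiskTier.PROHIBITED]
```

For example, filtering a catalog of two records where one is tagged `PROHIBITED` returns just that record, giving the governance team a worklist ordered by urgency.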

"Security and risk management leaders must first discover and catalog AI-enabled capabilities with enough detail for the subsequent risk assessment."

Q: How should the discovery process be approached?

A: It is best to work systematically through each of the four AI deployment classes (see Figure 1). Even though the short-term target is risk identification, the ultimate goal is risk mitigation, and each of these classes has its own risk-mitigation requirements.

Figure 1: AI Adoption Classes

Source: Gartner (April 2024)

AI in the wild: AI tools, generative or otherwise, available in the public domain that employees use for work-related purposes, formally and informally, such as ChatGPT or Bing.

  • How to Catalog: This requires employee education and a series of surveys to quickly compile the list of systems in use. In parallel, the IT team may be able to identify additional AI tools by examining the organization's web traffic analytics.

Embedded AI: AI capabilities built into the standard solutions and SaaS offerings used within the enterprise. Service providers have been complementing their offerings with AI capabilities for the better part of the past decade, and many of these capabilities are completely invisible to the organization, such as the machine learning models powering spam or malware detection engines.

  • How to Catalog: Organizations will need to expand their third-party risk management program and request detailed information from their providers as embedded AI capabilities may not be obvious.

AI In-house: AI capabilities trained, tested, and developed internally where the organization has full visibility over the data, the technologies, and the subsequent tuning made to the models, as well as the purpose for which they are used.

  • How to Catalog: Organizations building and maintaining their own AI models typically have data scientists and a governance program to curate the in-scope data, making the discovery process seamless and the data needed for subsequent categorization readily available.

Hybrid AI: Enterprise AI capabilities built in-house on one or more off-the-shelf foundation models, generative or otherwise, complemented with enterprise data.

  • How to Catalog: Because hybrid AI combines external pre-trained models with internal data, cataloging combines the vendor management questions used for embedded AI with the internal metadata sourcing used for in-house AI, yielding the information needed to categorize each use case.
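The four adoption classes and the discovery method the text describes for each can be summarized in a small lookup table. The keys and descriptions below are illustrative paraphrases, not terms from the Act or from Gartner tooling:

```python
# Hypothetical mapping of the four adoption classes (Figure 1) to the
# discovery method described for each in the text above.
DISCOVERY_METHOD = {
    "ai_in_the_wild": "employee surveys + web traffic analytics",
    "embedded_ai": "third-party risk management questionnaires",
    "ai_in_house": "internal data science / governance metadata",
    "hybrid_ai": "vendor questionnaires + internal metadata",
}

def discovery_plan(adoption_class: str) -> str:
    """Return the discovery method for a given adoption class."""
    try:
        return DISCOVERY_METHOD[adoption_class]
    except KeyError:
        raise ValueError(f"unknown adoption class: {adoption_class}")
```

Note how the hybrid entry is literally the union of the embedded and in-house methods, mirroring the reasoning in the bullet above.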

Gartner clients can learn more in "Getting Ready for the EU AI Act, Phase 1: Discover & Catalog."

About Gartner Security & Risk Management Summit

Gartner analysts will present their latest expertise on the impacts of AI within the enterprise at the Gartner Security & Risk Management Summits, taking place June 3-5 in National Harbor, July 24-26 in Tokyo and September 23-25 in London. Follow news and updates from the conferences on X using #GartnerSEC.


To register for a complimentary press pass to the conference, please contact laurence.goasduff@gartner.com.


Disclaimer

Gartner Inc. published this content on 03 April 2024 and is solely responsible for the information contained therein. Distributed by Public, unedited and unaltered, on 03 April 2024 07:46:02 UTC.