
How the US plans to manage artificial intelligence


As the EU’s Artificial Intelligence (AI) Act fights its way through multiple rounds of revisions at the hands of MEPs, in the US a little-known organisation is quietly working up its own guidelines to help channel the development of such a promising and yet perilous technology.

In March, the Maryland-based National Institute of Standards and Technology (NIST) released a first draft of its AI Risk Management Framework, which sets out a very different vision from the EU.

The work is being led by Elham Tabassi, a computer vision researcher who joined the organisation just over 20 years ago. Then, “We built [AI] systems just because we could,” she said. “Now we ask ourselves: should we?”

While the EU’s AI Act is legislation, NIST’s framework will be entirely voluntary. NIST, as Tabassi repeatedly stresses, is not a regulator. Founded at the beginning of the 20th century, NIST instead creates standards and measurement systems for technologies ranging from atomic clocks to nanomaterials, and was asked by the US Congress to work up AI guidelines in 2020.

Unlike the EU’s AI Act, NIST’s framework does not single out any particular use of AI as off limits (the Act, by contrast, could ban facial recognition in public spaces by the authorities, albeit with exceptions for things like terrorism).

And as NIST’s guidelines dryly note, its framework “does not prescribe risk thresholds or [risk] values.” In other words, it is up to developers to weigh the risks and advantages of unleashing their AI systems on the world.

“At the end of the day, we truly believe that there isn't one size fits all,” said Tabassi. “It's up to the application owner, developer […] whoever is in charge, to do a cost benefit analysis and decide.” Facial recognition by police, say, is a much riskier prospect than using it to unlock a smartphone, she argues. Given this, prohibiting a particular use case makes no sense (though recent compromise texts on the EU AI Act suggest there may be exceptions for unlocking phones).

The EU AI Act repeatedly emphasises that there needs to be ultimate “human oversight” of AI. NIST’s guidelines don’t mention this, because whether or not it is needed all comes down to how AI is being used. “We truly believe that AI is all about context, and ‘AI without a human’ doesn't mean much,” said Tabassi. NIST is not trying to regulate to that level of detail, of when exactly a human should be in the loop, she stresses.

Cultural revolution

Instead of hard red legal lines in the sand, NIST hopes to induce a voluntary revolution in the culture of AI development. It wants AI creators to think about the perils and pitfalls of their intelligent systems before they are let loose on the public. “Risk management should not be an afterthought,” Tabassi said.

In practice, NIST’s guidelines could entail US tech companies submitting to quite a lot of outside oversight when they create their AI products. NIST recommends that an “independent third party” or “experts who did not serve as front-line developers” weigh up the pros and cons of an AI system, consulting “stakeholders” and “impacted communities.”

Similar ideas are already beginning to take off in the industry. The practice of “red teaming”, where a company opens up its system to a simulated attack to probe for vulnerabilities, has already been used by some major AI developers, said Sebastien Krier, an AI policy expert at Stanford University. “I wouldn’t say there’s a norm yet, but it’s increasingly used.”

NIST also wants AI developers to ensure they have “workforce diversity” to make sure AI works for everyone, not just a narrow subset of users.

“It shouldn't really be up to the people that are developing technology, or only to them, to think about the consequences and impact,” Tabassi said. “That's why you need a very diverse group of people.”

This doesn’t just mean demographically and ethnically diverse teams, she stressed. The people creating AI systems need to have disciplinary diversity too, including, say, sociologists, cognitive scientists and psychologists. AI can’t just be left to a room full of computer science graduates.

And if a developer decides an AI system has more benefits than risks, they need to document how they came to this decision, the NIST guidelines say. It is unclear as yet whether these documents will be made public.

Conflicts of interest


This points to one obvious conflict of interest with letting AI developers decide whether or not their systems are too risky. A new AI tool may allow a tech company to reap huge profits while causing untold damage in the real world, as social media platforms arguably do today. This misalignment of incentives is, for now, not directly addressed in the NIST framework, although outside experts could provide wider society a voice in decision making.

“We don't think that it's our job to say what's the acceptable level of risks, what's the acceptable level of the benefits,” said Tabassi. “Our job is to provide enough guidance that this decision could be done in an informed way.”

There are plenty of suggestions about how to defuse this conflict of interest. One is to demand that AI systems are “loyal”: that is, that they truly serve users, rather than the companies that build them, or some other outside interest.

“There's a lot of systems out there that are not transparent about whose incentives they're aligned with,” said Carlos Ignacio Gutierrez, an AI policy researcher at the US-based Future of Life Institute, which campaigns to de-risk emerging technologies.

For example, a navigation tool might have a deal with a fast food company, so that it diverts your route very slightly to steer you closer to a burger joint. More seriously, a medical diagnostics app could actually be programmed to save a medical insurer money, rather than do what is best for the patient.

“The whole idea behind loyalty is that there's transparency between where these incentives are aligned,” said Gutierrez. But currently the concept is not entrenched in either NIST’s guidelines or the EU’s AI Act, he notes.

Artificial general intelligence

The NIST guidelines also fail to directly address fears around the creation of so-called artificial general intelligence (AGI): an agent capable of anything a human can do, which, if not cleverly aligned to human goals, could spin out of our control and, in the view of a number of AI luminaries and other figures including Elon Musk, threaten the existence of mankind itself.

“Even if the likelihood of catastrophic events is low, their potential impact warrants significant attention,” warns the Oxford-based Centre for the Governance of AI in its recent submission to NIST about its guidelines. The UK, although taking a laissez faire attitude to AI in general, did acknowledge the “long term risk of non-aligned Artificial General Intelligence” in its AI strategy last year.

AGI is “not a problem for next five years, maybe a decade,” said Tabassi. “But is it time to plan for it and understand it now? For sure.” In the US, worries around AGI are being explored not by NIST but by the National Artificial Intelligence Advisory Committee, a presidential advisory body that last month appointed its first members.

Building trust

Tabassi admits that NIST doesn’t know which companies will use its framework, and to what extent. “No, we don't have any leverage on that,” she said.

But previous NIST frameworks on privacy and cyber security have been adopted by companies, despite also being voluntary. The cyber guidelines became codified within US federal agencies, and then spread to the private sector. “The really good scientists that we have here allow us to build trust with the industry,” Tabassi said.

Many of the US’s biggest tech firms, including Google, IBM, Adobe and Microsoft, have submitted recommendations or taken part in NIST workshops to help craft the guidelines.

Voluntary or ‘soft law’ frameworks like NIST’s tend to be adopted by industry when companies know that hard law backing them up is coming down the road, or will be if they don’t clean up their act, said Gutierrez.

But at the federal level in the US, there is no prospect of hard law in the near future, although some states like Texas and Illinois have created legislation controlling facial recognition.

“So NIST will have to hope that companies find it in their interest to adopt the framework,” Gutierrez said. “It's a signal to the market saying, hey, I'm trying to be responsible.”

What’s more, he noted, if a tech company uses the framework, this could limit damages awarded in court, if and when a company is sued for malfunctioning AI.

And the advantage of NIST’s soft law is its nimbleness. The first official framework is set to be released in January 2023, when the EU will likely still have years of wrangling over its own AI legislation ahead. “It's a first step to making this one way of managing risks not the law of land […] but the practice of the land,” said Gutierrez.