Managing ‘A.I.’ Risks

There has been a wave of artificial intelligence regulatory news this week, and I thought it would be useful to collect a few of those stories in a single post.

Earlier this week, U.S. president Joe Biden issued an executive order:

My Administration places the highest urgency on governing the development and use of AI safely and responsibly, and is therefore advancing a coordinated, Federal Government-wide approach to doing so. The rapid speed at which AI capabilities are advancing compels the United States to lead in this moment for the sake of our security, economy, and society.

Reporting by Josh Boak and Matt O’Brien of the Associated Press indicates this executive order was informed by several experts in the technology and human rights sectors. Unfortunately, it seems that something I interpreted as a tongue-in-cheek reference to the adversary of the latest “Mission: Impossible” movie is being taken seriously and out of context by some.

Steven Sinofsky — who, it should be noted, is a board partner at Andreessen Horowitz, which still has as its homepage that ridiculous libertarian manifesto, which is, you know, foreshadowing — is worried about that executive order:

I am by no means certain if AI is the next technology platform the likes of which will make the smartphone revolution that has literally benefitted every human on earth look small. I don’t know sitting here today if the AI products just in market less than a year are the next biggest thing ever. They may turn out to be a way stop on the trajectory of innovation. They may turn out to be ingredients that everyone incorporates into existing products. There are so many things that we do not yet know. What we do know is that we are at the very earliest stages. We simply have no in-market products, and that means no in-market problems, upon which to base such concerns of fear and need to “govern” regulation. Alarmists or “existentialists” say they have enough evidence. If that’s the case then so be it, but then the only way to truly make that case is to embark on the legislative process and use democracy to validate those concerns. I just know that we have plenty of past evidence that every technology has come with its alarmists and concerns and somehow optimism prevailed. Why should the pessimists prevail now?

This is a very long article with many arguments against the Biden order. It is worth reading in full; I have just pulled its conclusion as a summary. I think there is a lot to agree with, even if I disagree with its conclusion. The dispute is not between optimism and pessimism; it is between democratically regulating industry, and allowing industry to dictate the terms of whether and how it is regulated.

That there are “no in-market products […] upon which to base such concerns” is probably news to companies like Stability AI and OpenAI, which sell access to Eurocentric and sexually biased models. There are, as some will likely point out, laws in many countries against bias in medical care, hiring, policing, housing, and other significant areas set to be revolutionized by A.I. in the coming years. That does not preclude the need for regulations specifically about how A.I. may be used in those circumstances, though.

Ben Thompson:

The point is this: if you accept the premise that regulation locks in incumbents, then it sure is notable that the early AI winners seem the most invested in generating alarm in Washington, D.C. about AI. This despite the fact that their concern is apparently not sufficiently high to, you know, stop their work. No, they are the responsible ones, the ones who care enough to call for regulation; all the better if concerns about imagined harms kneecap inevitable competitors. […] In short, this Executive Order is a lot like Gates’ approach to mobile: rooted in the past, yet arrogant about an unknowable future; proscriptive instead of adaptive; and, worst of all, trivially influenced by motivated reasoning best understood as some of the most cynical attempts at regulatory capture the tech industry has ever seen.

There is a neat rhetorical trick in both Sinofsky’s and Thompson’s articles. It is too early to regulate, they argue, and doing so would only stifle the industry and prevent it from reaching its best potential and highest aspirations. Also, it is a little bit of a smokescreen to call it a nascent industry; even if the technology is new, many of the businesses working to make it a reality are some of the world’s most valuable. Alas, it becomes more difficult to create rules as industries grow and businesses become giants — look, for example, to Sinofsky’s appropriate criticism of the patchwork approach to proposed privacy laws in several U.S. states, or Thompson’s explanation of how complicated it is to regulate “entrenched” corporations like Facebook and Google on privacy grounds given their enormous lobbying might.

These are not contradictory arguments, to be clear; both writers are, in fact, raising a very good line of argument. Regulations enacted on a nascent industry will hamper its growth, while waiting too long will be good news for any company that can afford to write the laws. Between these, the latter is the worse option. Yes, the former approach means a new industry faces constraints on its growth, both in terms of speed and breadth. With a carefully crafted regulatory framework that leaves room for rapid adjustments, however, that can actually be a benefit. Instead of a well poisoned by years of risky industry experiments on the public, A.I. can be seen as safe and beneficial. Technologies made in countries with strict regulatory regimes may be seen as more dependable. There is the opportunity of a lifetime to avoid entrenching the same mistakes, biases, and problems we have been dealing with for generations.

Where I do agree with Sinofsky and Thompson is that such regulation should not be made by executive order. However, regardless of how troublesome I find the mechanism of this policy, and how messy much of the order’s text is, it is wrong to discard the very notion of A.I. regulation on this basis alone.

A group of academics published a joint paper concerning A.I. development, which I thought was less alarmist and more grounded than most of these efforts:

The rate of improvement is already staggering, and tech companies have the cash reserves needed to scale the latest training runs by multiples of 100 to 1000 soon. Combined with the ongoing growth and automation in AI R&D, we must take seriously the possibility that generalist AI systems will outperform human abilities across many critical domains within this decade or the next. What happens then? If managed carefully and distributed fairly, advanced AI systems could help humanity cure diseases, elevate living standards, and protect our ecosystems. The opportunities AI offers are immense. But alongside advanced AI capabilities come large-scale risks that we are not on track to handle well. Humanity is pouring vast resources into making AI systems more powerful, but far less into safety and mitigating harms. For AI to be a boon, we must reorient; pushing AI capabilities alone is not enough.

John Davidson, columnist at the Australian Financial Review, interviewed Andrew Ng, who co-founded Google Brain:

“There are definitely large tech companies that would rather not have to try to compete with open source [AI], so they’re creating fear of AI leading to human extinction.

“It’s been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community,” he said.

Ng is not an anti-regulation hardliner. He acknowledges the harms already caused by A.I. and supports oversight.

Dan Milmo and Kiran Stacey, of the Guardian, covered this week’s Bletchley Park A.I. safety summit:

The possibility that AI can wipe out humanity – a view held by less hyperbolic figures than Musk – remains a divisive one in the tech community. That difference of opinion was not healed by two days of debate in Buckinghamshire. But if there is a consensus on risk among politicians, executives and thinkers, then it focuses on the immediate fear of a disinformation glut. There are concerns that elections in the US, India and the UK next year could be affected by malicious use of generative AI.

I do not love the mainstreaming of the supposedly catastrophic risks of A.I. to civilization, because it can mean one of two things: either its proponents are wrong and are raising the alarm for cynical or attention-seeking purposes, or they are right. This used to be something regarded as ridiculous science fiction. That apparently serious and sober people now see it as plausible is discomforting.