The Korea Herald

[Frank Pasquale] Industrial policy for the real world

By Korea Herald

Published: Sept. 19, 2024 - 05:27

Google CEO Sundar Pichai has characterized AI as “the most profound technology humanity is working on. More profound than fire, electricity, or anything that we have done in the past.” The hype around “existential risk” in AI follows a similar narrative, analogizing it to Oppenheimer’s atomic bomb. Such grand pronouncements have stirred many a corporate board and government agency to develop AI deployment plans.

The problem, though, is that it’s not yet clear whether the juice is worth the squeeze. The head of global equity research at Goldman Sachs recently threw a wet blanket on Promethean AI ambitions. As a company report summarized his findings: “to earn an adequate return on costly AI technology, AI must solve very complex problems, which it currently isn’t capable of doing, and may never be.” According to an Upwork survey, “Almost 80 percent of workers who use generative AI in their jobs said it has added to their workload and is hampering their productivity.” The core problem is that LLMs are language models, not knowledge models. They can predict what the next word in a text (or the next pixel in an image) is likely to be, but they have done no reasoning to make that prediction. That lack of reasoning limits their applicability in many situations.
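
To make that point concrete, here is a toy next-word predictor in Python -- a minimal sketch, with word-pair counts invented purely for illustration. A program that only tracks which word tends to follow which can produce fluent-sounding text while understanding nothing about what it describes:

    import random

    # Toy "language model": for each word, invented counts of the words
    # observed to follow it. Real models learn billions of such
    # statistics, but none of them amount to a model of the world.
    FOLLOWERS = {
        "the": {"man": 3, "unicorn": 2},
        "man": {"hugged": 2, "smiled": 1},
        "unicorn": {"smiled": 2, "impaled": 1},
        "hugged": {"the": 3},
        "smiled": {"at": 2},
        "impaled": {"the": 2},
        "at": {"the": 3},
    }

    def next_word(word):
        # Sample the next word in proportion to how often it followed `word`.
        counts = FOLLOWERS.get(word, {"the": 1})
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    # Generate fluent-looking text with no reasoning behind it.
    word, text = "the", ["the"]
    for _ in range(7):
        word = next_word(word)
        text.append(word)
    print(" ".join(text))

Scale that lookup table up by many orders of magnitude and you have the gist of next-word prediction: whether the output is sensible or absurd is, to the program, just a matter of counts.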

Call it the “savage unicorn” problem, after a bizarre image brought to prominence by Gary Marcus. After being prompted to produce a picture of an old man hugging a unicorn in the style of Michelangelo, an AI image engine obliged -- but showed the man looking deeply contented while being gruesomely impaled on the unicorn’s horn. To be sure, such errors are rare. But in all too many contexts where serious money is involved, the kind of reliability assured by human reasoning is crucial. And AI is just not there yet. This limits generative AI’s potential to be a truly disruptive innovation in business.

Meanwhile, generative AI is disrupting public discourse, often in corrosive and misleading ways. Donald Trump has already had a field day with it during his 2024 campaign. He touted a fake Taylor Swift endorsement, even though Tennessee’s Ensuring Likeness Voice and Image Security Act of 2024 (ELVIS Act) gives Swift the right to sue him for it. Some Trump followers have created and shared fake images meant to boost his standing with minorities and women. A fabricated image of Trump with his arms around smiling Black voters was taken as real by at least one BBC interviewee. Trump fans have shared a manipulated video of Kamala Harris meant to discredit her views.

The much-touted reality-warping aspects of generative AI have created what legal scholars Danielle Citron and Bobby Chesney call a “liar’s dividend”: opportunistic chances to call into question the authenticity of any photo or video. After Vice President Harris drew large crowds, Trump ranted: “Has anyone noticed that Kamala CHEATED at the airport? There was nobody at the plane, and she ‘A.I.’d’ it, and showed a massive ‘crowd’ of so-called followers, BUT THEY DIDN’T EXIST!”

The photo was widely confirmed as real by multiple reliable sources, revealing one more Trump lie. But new technology is making this type of claim ever easier to make. The AI-enhanced photo-editing capacities of Google’s recently released Pixel 9 phone are a disinformation dream, giving anyone the capacity to alter photos easily and undetectably. As Sarah Jeong recently wrote, it’s possible that “the default assumption about a photo is about to become that it’s faked, because creating realistic and believable fake photos is now trivial to do. We are not prepared for what happens after.”

While generative AI’s failure to improve business and its success at disrupting politics may seem to be different issues, they are in fact related. A poorly regulated public sphere means that “anything goes” in many contexts. Meanwhile, spheres like finance and health care are rightly and tightly regulated. This asymmetry helps drive investment and excitement toward direct-to-consumer applications of image- and text-generating software that can be so easily misused in so many contexts.

So what is to be done? First, there need to be limits on the spread of reality-warping AI. We need new rules for the dissemination of generative AI content. When it is photorealistic, it should always be labeled as AI-generated, whether in a caption or (in the case of images and videos) with some small indicator in the corner (perhaps an icon of a robot). Korea provides one good precedent for a step in this direction: after two leading presidential candidates released regionally targeted videos featuring AI versions of themselves in 2021, South Korea’s National Election Commission required that such avatars disclose that they are not the actual candidates.

Platforms should also require “proof of personhood” to post materials, to prevent botnets from rapidly disseminating fake images. Researchers from Georgetown, OpenAI, and the Stanford Internet Observatory have proposed many other mitigations of AI propaganda in an insightful report.

But such regulation is only one half of the generative AI puzzle. Governments should also help ensure just rewards for reality-improving technology, to dissipate excess investment in hype-driven industries (ranging from crypto to the metaverse to the current direct-to-consumer GenAI boom). The US Inflation Reduction Act and CHIPS (Creating Helpful Incentives to Produce Semiconductors) Act are two good examples of such legislation: both have spurred higher investment in factories. China provides an industrial policy success story of longer vintage. As Angela Zhang’s recent book “High Wire: How China Regulates Big Tech and Governs Its Economy” demonstrates, its government has backed many forms of “hard tech,” the type of “new quality productive forces” behind better automobiles and robotics.

Generative AI can play important roles in such “hard tech” -- imagine voice- or text-commanded robots “learning” from videos of tasks to be done. But ordinary markets are often not patient enough to finance such advances adequately. Enlightened industrial policy needs to fill this vacuum, shifting investment from reality-warping to reality-improving generative AI.

Frank Pasquale

Frank Pasquale is a professor of law at Cornell Tech and Cornell Law School. The views expressed here are the writer’s own. -- Ed.