Sunday, 24 September 2023

Hitting the Books: Beware the Tech Bro who comes bearing gifts

American entrepreneurs have long fixated on extracting the maximum economic value out of, well, really any resource they can get their hands on — from Henry Ford's assembly line to Tony Hsieh's Zappos Happiness Experience Form. The same is true in the public sector, where some overambitious streamlining of Texas' power grid contributed to the state's massive 2021 winter power crisis, which killed more than 700 people. In her new book, the riveting Optimal Illusions: The False Promise of Optimization, UC Berkeley applied mathematician and author Coco Krumme explores our historical fascination with optimization and how that pursuit has often led to unexpected and unwanted consequences in the systems we're streamlining.

In the excerpt below, Krumme explores the recent resurgence of interest in Universal Basic (or Guaranteed) Income, contrasting how tech evangelists like Sam Altman and Andrew Yang and social workers like Aisha Nyandoro, founder of the Magnolia Mother’s Trust, approach the difficult questions of who should receive financial support, and how much.

Book cover: a stylized iceberg with white lettering on a blue background. (Riverhead Books)

Excerpted from Optimal Illusions: The False Promise of Optimization by Coco Krumme. Published by Riverhead Books. Copyright © 2023 by Coco Krumme. All rights reserved.


False Gods

California, they say, is where the highway ends and dreams come home to roost. When they say these things, their eyes ignite: startup riches, infinity pools, the Hollywood hills. The last thing on their minds, of course, is the town of Stockton.

Drive east from San Francisco and, if traffic cooperates, you’ll be there in an hour and a half or two, over the long span of slate‑colored bay, past the hulking loaders at Oakland’s port, skirting rich suburbs and sweltering orchards and the government labs in Livermore, the military depot in Tracy, all the way to where brackish bay waters meet the San Joaquin River, where the east‑west highways connect with Interstate 5, in a tangled web of introductions that ultimately pitches you either north toward Seattle or south to LA.

Or you might decide to stay in Stockton, spend the night. There’s a slew of motels along the interstate: La Quinta, Days Inn, Motel 6. Breakfast at Denny’s or IHOP. Stockton once had its place in the limelight as a booming gold‑rush supply point. In 2012, the city filed for bankruptcy, the largest US city until then to do so (Detroit soon bested it in 2013). First light reveals a town that’s neither particularly rich nor desperately poor, hitched taut between cosmopolitan San Francisco on one side and the agricultural central valley on the other, in the middle, indistinct, suburban, and a little sad.

This isn’t how the story was supposed to go. Optimization was supposed to be the recipe for a more perfect society. When John Stuart Mill aimed for the greater good, when Allen Gilmer struck out to map new pockets of oil, when Stan Ulam harnessed a supercomputer to tally possibilities: it was in service of doing more, and better, with less. Greater efficiency was meant to be an equilibrating force. We weren’t supposed to have big winners and even bigger losers. We weren’t supposed to have a whole sprawl of suburbs stuck in the declining middle.

We saw how overwrought optimizations can suddenly fail, and the breakdown of optimization as the default way of seeing the world can come about equally fast. What we face now is a disconnect between the continued promises of efficiency, the idea that we can optimize into perpetuity, and the reality all around: the imperfect world, the overbooked schedules, the delayed flights, the institutions in decline. And we confront the question: How can we square what optimization promised with what it’s delivered?

Sam Altman has the answer. In his mid-thirties, with the wiry, frenetic look of a college student, he’s a young man with many answers. Sam’s biography reads like a leaderboard of Silicon Valley tropes and accolades: an entrepreneur, upper‑middle‑class upbringing, prep school, Stanford Computer Science student, Stanford Computer Science dropout, where dropping out is one of the Valley’s top status symbols. In 2015, Sam was named a Forbes magazine top investor under age thirty. (That anyone bothers to make a list of investors in their teens and twenties says as much about Silicon Valley as about the nominees. Tech thrives on stories of overnight riches and the mythos of the boy genius.)

Sam is the CEO and cofounder, along with electric‑car‑and‑rocket‑ship‑magnate Elon Musk, of OpenAI, a company whose mission is “to ensure that artificial general intelligence benefits all of humanity.” He is the former president of the Valley’s top startup incubator, Y Combinator, was interim CEO of Reddit, and is currently chairman of the board of two nuclear‑energy companies, Helion and Oklo. His latest venture, Worldcoin, aims to scan people’s eyeballs in exchange for cryptocurrency. As of 2022, the company had raised $125 million of funding from Silicon Valley investors.

But Sam doesn’t rest on, or even mention, his laurels. In conversation, he is smart, curious, and kind, and you can easily tell, through his veneer of demure agreeableness, that he’s driven as hell. By way of introduction to what he’s passionate about, Sam describes how he used a spreadsheet to determine the seven or so domains in which he could make the greatest impact, based on weighing factors such as his own skills and resources against the world’s needs. Sam readily admits he can’t read emotions well, treats most conversations as logic puzzles, and not only wants to save the world but believes the world’s salvation is well within reach.

A 2016 profile in The New Yorker sums up Sam like this: “His great weakness is his utter lack of interest in ineffective people.”

Sam has, however, taken an interest in Stockton, California.

Stockton is the site of one of the most publicized experiments in Universal Basic Income (UBI), a policy proposal that grants recipients a fixed stipend, with no qualifications and no strings attached. The promise of UBI is to give cash to those who need it most and to minimize the red tape and special interests that can muck up more complex redistribution schemes. On Sam’s spreadsheet of areas where he’d have impact, UBI made the cut, and he dedicated funding for a group of analysts to study its effects in six cities around the country. While he’s not directly involved in Stockton, he’s watching closely. The Stockton Economic Empowerment Demonstration was initially championed by another tech wunderkind, Facebook cofounder Chris Hughes. The project gave 125 families $500 per month for twenty‑four months. A slew of metrics was collected in order to establish a causal relationship between the money and better outcomes.

UBI is nothing new. The concept of a guaranteed stipend has been suggested by leaders from Napoleon to Martin Luther King Jr. The contemporary American conception of UBI, however, has been around just a handful of years, marrying a utilitarian notion of societal perfectibility with a modern‑day faith in technology and experimental economics.

Indeed, economists were among the first to suggest the idea of a fixed stipend, first in the context of the developing world and now in America. Esther Duflo, a creative star in the field and Nobel Prize winner, is known for her experiments with microloans in poorer nations. She’s also unromantic about her discipline, embracing the concept of “economist as plumber.” Duflo argues that the purpose of economics is not grand theories so much as on‑the‑ground empiricism. Following her lead, the contemporary argument for UBI owes less to a framework of virtue and charity and much more to the cold language of an econ textbook. Its benefits are described in terms of optimizing resources, reducing inequality, and thereby maximizing societal payoff.

The UBI experiments under way in several cities, a handful of them funded by Sam’s organization, have data‑collection methods primed for a top‑tier academic publication. Like any good empiricist, Sam spells out his own research questions to me, and the data he’s collecting to test and analyze those hypotheses.

Several thousand miles from Sam’s Bay Area office, a different kind of program is in the works. When we speak by phone, Aisha Nyandoro bucks a little at my naive characterization of her work as UBI. “We don’t call it universal basic income,” she says. “We call it guaranteed income. It’s targeted. Invested intentionally in those discriminated against.” Aisha is the powerhouse founder of the Magnolia Mother’s Trust, a program that gives a monthly stipend to single Black mothers in Jackson, Mississippi. The project grew out of her seeing the welfare system fail miserably for the very people it purported to help. “The social safety net is designed to keep families from rising up. Keep them teetering on edge. It’s punitive paternalism. The ‘safety net’ that strangles.”

Bureaucracy is dehumanizing, Aisha says, because it asks a person to “prove you’re enough” to receive even the most basic of assistance. Magnolia Mother’s Trust is unique in that it is targeted at a specific population. Aisha reels off facts. The majority of low‑income women in Jackson are also mothers. In the state of Mississippi, one in four children lives in poverty, and women of color earn 61 percent of what white men make. Those inequalities affect the community as a whole. In 2021, the trust gave $1,000 per month to one hundred women. While she’s happy her program is gaining exposure as more people pay attention to UBI, Aisha doesn’t mince words. “I have to be very explicit in naming race as an issue,” she says.

Aisha’s goal is to grow the program and provide cash, without qualifications, to more mothers in Jackson. Magnolia Mother’s Trust was started around the same time as the Stockton project, and the nomenclature of guaranteed income has gained traction. One mother in the program writes in an article in Ms. magazine, “Now everyone is talking about guaranteed income, and it started here in Jackson.” Whether or not it all traces back to Jackson, whether the money is guaranteed and targeted or more broadly distributed, what’s undeniable is that everyone seems to be talking about UBI.

Influential figures, primarily in tech and politics, have piled on to the idea. Jack Dorsey, the billionaire founder of Twitter, with his droopy meditation eyes and guru beard, wants in. In 2020, he donated $15 million to experimental efforts in thirty US cities.

And perhaps the loudest bullhorn for the idea has been wielded by Andrew Yang, another product of Silicon Valley and a 2020 US presidential candidate. Yang is an earnest guy, unabashedly dorky. Numbers drive his straight‑talking policy. Blue baseball caps for his campaign are emblazoned with one short word: MATH.

UBI’s proponents see the potential to simplify the currently convoluted American welfare system, to equilibrate an uneven playing field. By decoupling basic income from employment, it could free some people up to pursue work that is meaningful.

And yet the concept, despite its many proponents, has managed to draw ire from both ends of the political spectrum. Critics on the right see UBI as an extension of the welfare state, as further interference into free markets. Left‑leaning critics bemoan its “inefficient” distribution of resources: Why should high earners get as much as those below the poverty line? Why should struggling individuals get only just enough to keep them, and the capitalist system, afloat?

Detractors on both left and right default to the same language in their critiques: that of efficiency and maximizing resources. Indeed, the language of UBI’s critics is all too similar to the language of its proponents, with its randomized control trials and its view of society as a closed economic system. In the face of a disconnect between what optimization promised and what it delivered, the proposed solution involves more optimizing.

Why is this? What if we were to evaluate something like UBI outside the language of efficiency? We might ask a few questions differently. What if we relaxed the suggestion that dollars can be transformed by some or another equation into individual or societal utility? What if we went further than that and relaxed the suggestion of measuring at all, as a means of determining the “best” policy? What if we put down our calculators for a moment and let go of the idea that politics is meant to engineer an optimal society in the first place? Would total anarchy ensue?

Such questions are difficult to ask because they don’t sound like they’re getting us anywhere. It’s much easier, and more common, to tackle the problem head‑on. Electric‑vehicle networks such as Tesla’s, billed as an alternative to the centralized oil economy, seek to optimize where charging stations are placed, how batteries are created, how software updates are sent out — and by extension, how environmental outcomes take shape. Vitamins fill the place of nutrients leached out of foods by agriculture’s maximization of yields; these vitamins promise to optimize health. Vertical urban farming also purports to solve the problems of industrial agriculture, by introducing new optimizations in how light and fertilizers are delivered to greenhouse plants, run on technology platforms developed by giants such as SAP. A breathless Forbes article explains that the result of hydroponics is that “more people can be fed, less precious natural resources are used, and the produce is healthier and more flavorful.” The article nods only briefly to downsides, such as high energy, labor, and transportation costs. It doesn’t mention that many grains don’t lend themselves easily to indoor farming, nor the limitations of synthetic fertilizers in place of natural regeneration of soil.

In working to counteract the shortcomings of optimization, have we only embedded ourselves deeper? For all the talk of decentralized digital currencies and local‑maker economies, are we in fact more connected and centralized than ever? And less free, insofar as we’re tied into platforms such as Amazon and Airbnb and Etsy? Does our lack of freedom run deeper still, by dint of the fact that fewer and fewer of us know exactly what the algorithms driving these technologies do, as more and more of us depend on them? Do these attempts to deoptimize in fact entrench the idea of optimization further?

A 1952 novel by Kurt Vonnegut highlights the temptation, and also the threat, of de-optimizing. Player Piano describes a mechanized society in which the need for human labor has mostly been eliminated. The remaining workers are those engineers and managers whose purpose is to keep the machines online. The core drama takes place at a factory hub called Ilium Works, where “Efficiency, Economy, and Quality” reign supreme. The book is prescient in anticipating some of our current angst — and powerlessness — about optimization’s reach.

Paul Proteus is the thirty‑five‑year‑old factory manager of the Ilium Works. His father served in the same capacity, and like him, Paul is one day expected to take over as leader of the National Manufacturing Council. Each role at Ilium is identified by a number, such as R‑127 or EC‑002. Paul’s job is to oversee the machines.

At the time of the book’s publication, Vonnegut was a young author disillusioned by his experiences in World War II and disheartened as an engineering manager at General Electric. Ilium Works is a not‑so‑thinly‑veiled version of GE. As the novel wears on, Paul tries to free himself, to protest that “the main business of humanity is to do a good job of being human beings . . . not to serve as appendages to machines, institutions, and systems.” He seeks out the elusive Ghost Shirt Society with its conspiracies to break automation, he attempts to restore an old homestead with his wife. He tries, in other words, to organize a way out of the mechanized world.

His attempts prove to be in vain. Paul fails and ends up mired in dissatisfaction. The machines take over, riots ensue, everything is destroyed. And yet, humans’ love of mechanization runs deep: once the machines are destroyed, the janitors and technicians — a class on the fringes of society — quickly scramble to build things up again. Player Piano depicts the outcome of optimization as societal collapse and the collapse of meaning, followed by the flimsy rebuilding of the automated world we know.

This article originally appeared on Engadget at https://ift.tt/tvYz8Vx

