At 1 p.m. on a Friday shortly before Christmas last year, Kent Walker, Google’s top lawyer, summoned four of his employees and ruined their weekend.

The group worked in SL1001, a bland building with a blue glass facade betraying no sign that dozens of lawyers inside were toiling to protect the interests of one of the world’s most influential companies. For weeks they had been prepping for a meeting of powerful executives to discuss the safety of Google’s products. The deck was done. But that afternoon Mr. Walker told his team the agenda had changed, and they would have to spend the next few days preparing new slides and graphs. In fact, the entire agenda of the company had changed — all in the course of nine days.

Sundar Pichai, Google’s chief executive, had decided to ready a slate of products based on artificial intelligence — immediately. He turned to Mr. Walker, the same lawyer he was trusting to defend the company in a profit-threatening antitrust case in Washington, D.C. Mr. Walker knew he would need to persuade the Advanced Technology Review Council, as Google called the group of executives, to throw off their customary caution and do as they were told.

It was an edict, and edicts didn’t happen very often at Google. But Google was staring at a real crisis. Its business model was potentially at risk.

What had set off Mr. Pichai and the rest of Silicon Valley was ChatGPT, the artificial intelligence program that had been released on Nov. 30, 2022, by an upstart called OpenAI. It had captured the imagination of millions of people who had thought A.I. was science fiction until they started playing with the thing. It was a sensation. It was also a problem.

At the Googleplex, famed for its free food, massages, fitness classes and laundry services, Mr. Pichai was also playing with ChatGPT. Its wonders did not wow him. Google had been developing its own A.I. technology that did many of the same things. Mr. Pichai was focused on ChatGPT’s flaws — that it got stuff wrong, that sometimes it turned into a biased pig. What amazed him was that OpenAI had gone ahead and released it anyway, and that consumers loved it. If OpenAI could do that, why couldn’t Google?

Why not plow ahead? That’s the question that loomed over A.I.’s adolescence — the year or so after the technology made the leap from lab to living room. There was hand-wringing over chatbots writing seductive phishing emails and spewing disinformation, or high schoolers using them to cheat their way to an A. Doomsayers insisted that unfettered A.I. could lead to the end of humankind.

For tech company bosses, the decision of when and how to turn A.I. into a (hopefully) profitable business was a simpler risk-reward calculus. But to win, you had to have a product.

By Monday morning, Dec. 12, the team at SL1001 had a new agenda with a deck labeled “Privileged and Confidential/Need to Know.” Most attendees tuned in over videoconference. Mr. Walker started the meeting by announcing that Google was moving ahead with a chatbot and A.I. capabilities that would be added to cloud, search and other products. What are your concerns? Let’s get in line, Mr. Walker said, according to Jen Gennai, the director of responsible innovation.

There would be guardrails, but approvals would be fast-tracked. Mr. Walker called it the “green lane” approach. It was all laid out in the deck. Opportunities for “Green Lane streamlining” were identified. Dangers were color-coded. Blue indicated risks where “mitigations” were “required.” Risks that were “controllable with minimum thresholds/mitigations” were rendered in orange.

In one chart, under “Hate & Toxicity,” the plan was to “curb stereotypes, toxicity and hate speech in outputs.” One topic was: “What are we missing in order to fast-track approvals?”

Not everyone was on board. “My standards are as high if not higher than they usually are, and we will be going through a review process with all of this,” Ms. Gennai remembered a cloud executive saying.

Eventually a compromise was reached. They would limit the rollout, Ms. Gennai said. And they would avoid calling anything a product. For Google, it would be an experiment. That way it didn’t have to be perfect. (A Google spokeswoman said the A.T.R.C. did not have the power to decide how the products would be released.)

What played out at Google was repeated at other tech giants after OpenAI released ChatGPT in late 2022. They all had technology in various stages of development that relied on neural networks — A.I. systems that recognized sounds, generated images and chatted like a human. That technology had been pioneered by Geoffrey Hinton, an academic who had worked briefly with Microsoft and was now at Google. But the tech companies had been slowed by fears of rogue chatbots, and economic and legal mayhem.

Once ChatGPT was unleashed, none of that mattered as much, according to interviews with more than 80 executives and researchers, as well as corporate documents and audio recordings. The instinct to be first or biggest or richest — or all three — took over. The leaders of Silicon Valley’s biggest companies set a new course and pulled their employees along with them.

Over 12 months, Silicon Valley was transformed. Turning artificial intelligence into actual products that individuals and companies could use became the priority. Worries about safety and whether machines would turn on their creators were not ignored, but they were shunted aside — at least for the moment.

At Meta, Mark Zuckerberg, who had once proclaimed the metaverse to be the future, reorganized parts of the company formerly known as Facebook around A.I.

Elon Musk, the billionaire who co-founded OpenAI but had left the lab in a huff, vowed to create his own A.I. company. He called it X.AI and added it to his already full plate.

Satya Nadella, Microsoft’s chief executive, had invested in OpenAI three years before and was letting the start-up’s cowboys tap into its computing power. He sped up his plans to incorporate A.I. into Microsoft’s products — and give Google a poke in its searching eye.

“Speed is even more important than ever,” Sam Schillace, a top executive, wrote Microsoft employees. It would be, he added, an “absolutely fatal error in this moment to worry about things that can be fixed later.”

The strange thing was that the leaders of OpenAI never thought ChatGPT would shake up Silicon Valley. In early November 2022, a few weeks before it was released to the world, it didn’t really exist as a product. Most of the 375 employees working in their new offices, a former mayonnaise factory, were focused on a more powerful version of technology, called GPT-4, that could answer almost any question using information gleaned from an enormous collection of data scraped from seemingly everywhere.

It was revolutionary, but there were problems. Sometimes the tech spewed hate speech and misinformation. The engineers at OpenAI kept postponing the launch and talking about what to do.

One option was to release an older, less powerful version of the technology — and just see what happened. The idea, according to four people familiar with OpenAI’s work, was to watch the public’s reaction and use it to work out the kinks.

And though some executives have downplayed it, they wanted to beat the competition. Lots of tech companies were working on their own A.I. chatbots. But the people to beat were at Anthropic, started the year before by researchers and engineers who left OpenAI because they thought that Sam Altman, its chief executive, had not made safety a priority as A.I. grew more powerful. The defectors had helped build the technology that OpenAI was so excited about before they trooped out the door.

In mid-November 2022, Mr. Altman; Greg Brockman, OpenAI’s president; and others met in a top-floor conference room to discuss the problems with their breakthrough tech yet again. Suddenly Mr. Altman made the decision — they would release the old, less-powerful technology.

The plan was to call it Chat with GPT 3.5 and put it out by the end of the month. They referred to it as a “low key research preview.” It didn’t feel like a big-deal decision to anyone in the room.

“We plan to frame it as a research release,” Mira Murati, OpenAI’s chief technology officer, told staff over Slack. “This reduces risk in all dimensions while allowing us to learn a lot,” she wrote. “We are….”
