Google’s big week was a flex for the power of big tech

Last week, this space was all about OpenAI’s 12 days of shipmas. This week, the spotlight is on Google, which has been speeding toward the holiday by shipping or announcing its own flurry of products and updates. Taken together, it’s pretty monumental, not just for a single company but for what it says about the power of the technology industry, even if it also stirs a personal wish that we could do more to harness that power and put it to more noble uses.

To start, last week Google introduced Veo, a new video generation model, and Imagen 3, a new version of its image generation model.

Then on Monday, Google announced a breakthrough in quantum computing with its Willow chip. The company claims the new machine is capable of a “standard benchmark computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion (that is, 10²⁵) years.” You may recall that MIT Technology Review covered some of the Willow work after researchers posted a paper preprint in August. But this week marked the big media splash. It was a stunning update that had Silicon Valley abuzz. (Seriously, I have never gotten so many quantum computing pitches as in the past few days.)

Google followed this on Wednesday with even more gifts: a Gemini 2 release, a Project Astra update, and news of two forthcoming agents, Mariner (which can browse the web) and Jules (a coding assistant).

First: Gemini 2. It’s impressive, with a lot of performance updates. But frankly, I have grown a little inured to language-model performance updates, to the point of apathy. Or at least near-apathy. I want to see them do something.

So for me, the cooler update was second on the list: Project Astra, which comes across like an AI from a futuristic movie. Google first showed a demo of Astra back in May at its developer conference, and it was the talk of the show. But since demos offer companies the chance to show off products at their most polished, it can be hard to tell what’s real and what’s just staged for the audience. Still, when my colleague Will Douglas Heaven recently got to try it out himself, live and unscripted, it largely lived up to the hype. Although he found it glitchy, he noted that those glitches can be easily corrected. He called the experience “stunning” and said it could be generative AI’s killer app.

On top of all this, Will notes that this week Demis Hassabis, CEO of Google DeepMind (the company’s AI division), was in Sweden to receive his Nobel Prize. And what did you do with your week?

Making all this even more impressive, the advances represented in Willow, Gemini, Astra, and Veo are ones that just a few years ago many, many people would have said were not possible—or at least not in this timeframe. 

A popular knock on the tech industry is that it has a tendency to over-promise and under-deliver. The phone in your pocket gives the lie to this. So too do the rides I took in Waymo’s self-driving cars this week. (Both of which arrived faster than Uber’s estimated wait time. And honestly, it’s not been that long since the mere ability to summon an Uber was cool!) And while quantum has a long way to go, the Willow announcement seems like an exceptional advance; if not a tipping point exactly, then at least a real waypoint on a long road. (For what it’s worth, I’m still not totally sold on chatbots. They do offer novel ways of interacting with computers, and they have revolutionized information retrieval. But whether they are beneficial for humanity, especially given their energy demands, the use of copyrighted material in their training data, and their perhaps insurmountable tendency to hallucinate, is debatable, and it is certainly being debated. But I’m pretty floored by this week’s announcements from Google, as well as OpenAI. Full stop.)

And for all the necessary and overdue talk about reining in the power of Big Tech, hitting significant new milestones on so many different fronts all at once is something that only a company with the resources of a Google (or an Apple or Microsoft or Amazon or Meta or Baidu or whichever other behemoth) can do.

All this said, I don’t want us to buy more gadgets or spend more time looking at our screens. I don’t want us to become more isolated physically, socializing with others only via our electronic devices. I don’t want us to fill the air with carbon or our soil with e-waste. I do not think these things should be the price we pay to drive progress forward. It’s indisputable that humanity would be better served if more of the tech industry were focused on ending poverty and hunger and disease and war.

Yet every once in a while, amid the ever-rising tide of hype and nonsense that pumps out of Silicon Valley (epitomized by the AI gold rush of the past couple of years), there are moments that make me sit back in awe and amazement at what people can achieve. In those moments I become hopeful about our ability to actually solve our larger problems, if only because we can solve so many other dumber, but incredibly complicated, ones. This week was one of those times for me.


Now read the rest of The Debrief

The News

• Robotaxi adoption is hitting a tipping point.

• But also, GM is shutting down its Cruise robotaxi division.

• Here’s how to use OpenAI’s new video generation tool, Sora.

• Bluesky has an impersonator problem.

• The AI hype machine is coming under government scrutiny.


The Chat

Every week, I talk to one of MIT Technology Review’s journalists to go behind the scenes of a story they are working on. This week, I hit up James O’Donnell, who covers AI and hardware, about his story on how the startup defense contractor Anduril is bringing AI to the battlefield.

Mat: James, you got a pretty up-close look at something most people probably haven’t even thought about yet, which is how the future of AI-assisted warfare might look. What did you learn on that trip that you think will surprise people?

James: Two things stand out. One, I think people would be surprised by the gulf between how technology has developed over the last 15 years for consumers versus the military. For consumers, we’ve gotten phones, computers, smart TVs, and other technologies that generally do a pretty good job of talking to each other and sharing our data, even though they’re made by dozens of different manufacturers. It’s called the “internet of things.” In the military, technology has developed in exactly the opposite way, and it’s putting them in a crisis. They have stealth aircraft all over the world, but communicating about a drone threat might be done with PowerPoints and a chat service reminiscent of AOL Instant Messenger.

The second is just how much the Pentagon is now looking to AI to change all of this. New initiatives have surged in the current AI boom, with spending on training new AI models to better detect threats, on autonomous fighter jets, and on intelligence platforms that use AI to find pertinent information. What I saw at Anduril’s test site in California is also a key piece of that: using AI to connect to and control lots of different pieces of hardware, like drones, cameras, and submarines, from a single platform. The amount being invested in AI is much smaller than for aircraft carriers and jets, but it’s growing.

Mat: I was talking with a different startup defense contractor recently, who told me about the difficulty of getting all these increasingly autonomous devices on the battlefield talking to each other in a coordinated way. Like Anduril, he was making the case that this has to be done at the edge, and that there is too much happening for humans to process. Do you think that’s true? Why is that?

James: So many in the defense space have pointed to the war in Ukraine as a sign that warfare is changing. Drones are cheaper and more capable than they ever were in the wars in the Middle East. It’s why the Pentagon is spending $1 billion on the Replicator initiative to field thousands of cheap drones by 2025. It’s also looking to field more underwater drones as it plans for scenarios in which China may invade Taiwan.

Once you get these systems, though, the problem is having all the devices communicate with one another securely. You need to play air traffic control at the same time that you’re pulling in satellite imagery and intelligence information, all in environments where communication links are vulnerable to attacks.

Mat: I guess I still have a mental image of a control room somewhere, like you might see in Dr. Strangelove or WarGames (or Star Wars, for that matter), with a handful of humans directing things. Are those days over?

James: I think a couple of things will change. One, a single person in that control room will be responsible for a lot more than they are now. Rather than running just one camera or drone system manually, they’ll command software that does it for them, for lots of different devices. The idea that the defense tech sector is pushing is to take people out of the mundane tasks, like rotating a camera around to look for threats, and instead put them in the driver’s seat for decisions that only humans, not machines, can make.

Mat: I know that critics of the industry push back on the idea of AI being empowered to make battlefield decisions, particularly when it comes to life and death, but it seems to me that we are increasingly creeping toward that, and that it’s perhaps inevitable. What’s your sense?

James: This is painting with broad strokes, but I think the debates about military AI fall along similar lines to what we see for autonomous vehicles. You have proponents saying that driving is not a thing humans are particularly good at, and when they make mistakes, it takes lives. Others might agree conceptually, but debate at what point it’s appropriate to fully adopt fallible self-driving technology in the real world. How much better does it have to be than humans?

In the military, the stakes are higher. There’s no question that AI is increasingly being used to sort through and surface information to decision-makers. It’s finding patterns in data, translating information, and identifying possible threats. Proponents are outspoken that this will make warfare more precise and reduce casualties. What critics are concerned about is how far across that decision-making pipeline AI is going, and how much human oversight there is.

I think where it leaves me is wanting transparency. When AI systems make mistakes, just like when human military commanders make mistakes, I think we deserve to know, and that transparency does not have to compromise national security. It took years for reporter Azmat Khan to piece together the mistakes made during drone strikes in the Middle East, because agencies were not forthcoming. That obfuscation absolutely cannot be the norm as we enter the age of military AI.

Mat: Finally, did you have a chance to hit an In-N-Out burger while you were in California?

James: Normally In-N-Out is a requisite stop for me in California, but ahead of my trip I heard lots of good things about the burgers at The Apple Pan in West LA, so I went there. To be honest, the fries were better, but for the burger I have to hand it to In-N-Out.


The Recommendation

A few weeks ago I suggested Ca7riel and Paco Amoroso’s appearance on NPR’s Tiny Desk. At the risk of this space becoming a Tiny Desk stan account, I’m back again with another. I was completely floored by Doechii’s Tiny Desk appearance last week. It’s so full of talent and joy and style and power. I came away completely inspired and have basically had her music on repeat on Spotify ever since. If you are already a fan of her recorded music, you will love her live. If she’s new to you, well, you’re welcome. Go check it out. Oh, and don’t worry: I’m not planning to recommend Billie Eilish’s new Tiny Desk concert in next week’s newsletter. Mostly because I’m doing so now.
