Preventing Climate Change: A Team Sport https://www.technologyreview.com/2024/10/09/1105349/preventing-climate-change-a-team-sport/ Wed, 09 Oct 2024 13:22:11 +0000

This sponsored session was presented by MEDC at MIT Technology Review’s 2024 EmTech MIT event.

Michigan is at the forefront of the clean energy transition, setting an example in mobility and automotive innovation. Other states and organizations can learn from Michigan’s approach to public-private partnerships, actionable climate plans, and business-government alignment. Progressive climate policies are not only crucial for sustainability but also for attracting talent in today’s competitive job market.

Read more from MIT Technology Review Insights & MEDC about addressing climate change impacts


About the speaker

Hilary Doe, Chief Growth & Marketing Officer, Michigan Economic Development Corporation

As Chief Growth & Marketing Officer, Hilary Doe leads the state’s efforts to grow Michigan’s population, economy, and reputation as the best place to live, work, raise a family, and start a business. She works alongside the Growing Michigan Together Council on a once-in-a-generation effort to boost economic growth and make Michigan the place everyone wants to call home.

Hilary is a dynamic leader in nonprofits, technology, strategy, and public policy. She served as the national director at the Roosevelt Network, where she built and led an organization that involved thousands of young people in civic engagement and social change programming at chapters nationwide, work that ultimately earned the organization the MacArthur Award for Creative and Effective Institutions. She also served as Vice President of the Roosevelt Institute, where she oversaw strategy and expanded the Institute’s Four Freedoms Center, with the goal of empowering communities and reducing inequality alongside the greatest economists of our generation. Most recently, she served as President and Chief Strategy Officer at NationBuilder, working to equip the world’s leaders with software to grow their movements, businesses, and organizations while spreading democracy.

Hilary is a graduate of the University of Michigan’s Honors College and Ford School of Public Policy, a Detroit resident, and proud Michigander.

Productivity Electrified: Tech That Is Supercharging Business https://www.technologyreview.com/2024/10/09/1105355/productivity-electrified-tech-that-is-supercharging-business/ Wed, 09 Oct 2024 13:21:21 +0000

This sponsored session was presented by Ford Pro at MIT Technology Review’s 2024 EmTech MIT event.

A decarbonized transportation system is a necessary prerequisite for a sustainable economy. In the transportation industry, the road to electrification and greater technology adoption can also boost business bottom lines and reduce downstream costs to taxpayers. Focusing on early adopters such as first responders, local municipalities, and small business owners, we’ll discuss common misconceptions, barriers to adoption, implementation strategies, and how these insights carry over into widespread adoption of emerging technology and electric vehicles.


About the speaker

Wanda Young, Global Chief Marketing & Experience Officer, Ford Pro

Wanda Young is a visionary brand marketer and digital transformation expert who thrives at the intersection of brand, digital, technology, and data, paired with a deep understanding of the consumer mindset. She gained her experience working for the largest brands in retail, sports and entertainment, consumer products, and electronics. She is a successful brand marketer and change agent whom organizations seek out to drive digital and data transformation – a Chief Experience Officer years before the title was invented. In her roles managing multiple notable brands, including Samsung, Disney, ESPN, Walmart, Alltel, and Acxiom, she developed an understanding of the interconnectedness of brand, digital, and data; of the importance of customer experience across all touchpoints; of the power of data and localization; and of the in-the-trenches accountability needed to drive outcomes. Now at Ford Pro, the commercial division of Ford Motor Company, she is focused on growing the newly launched division and brand, which offers commercial customers something only Ford can: an integrated lineup of vehicles and services designed to meet the needs of all businesses and keep their productivity on pace to drive growth.

Young has enjoyed a series of firsts in her career, including launching ESPN+, developing Walmart’s first social media presence and building 5,000 of its local Facebook pages (which are still live today and continue to scale), developing the first weather-triggered ad product with The Weather Company, designing an ad product with Google called Local Inventory Ads, being part of the team that took Alltel Wireless private (the company later sold to Verizon Wireless), and launching the Acxiom.com website on her first Mother’s Day with her daughter on her lap. She serves on the boards of, or is involved in, a number of industry organizations and has received many prestigious awards. Young received a Bachelor of Arts in English with a minor in Advertising from the University of Arkansas.

The Download: another Nobel Prize for AI, and Adobe’s anti-scraping tool https://www.technologyreview.com/2024/10/09/1105339/the-download-another-nobel-prize-for-ai-and-adobes-anti-scraping-tool/ Wed, 09 Oct 2024 12:10:00 +0000

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Google DeepMind wins joint Nobel Prize in Chemistry for protein prediction AI  

Google DeepMind founder Demis Hassabis has won a joint Nobel Prize in Chemistry for using artificial intelligence to predict the structures of proteins. Hassabis shares half the prize with John M. Jumper, a director at Google DeepMind, while the other half has been awarded to David Baker, a professor of biochemistry at the University of Washington, for his work on computational protein design.

The potential impact of this research is enormous. Proteins are fundamental to life, but understanding what they do involves figuring out their structure—a very hard puzzle that once took months or years to crack for each type of protein.

By cutting down the time it takes to predict a protein’s structure, computational tools such as those developed by this year’s award winners are helping scientists gain a greater understanding of how proteins work and opening up new avenues of research and drug development. The technology could unlock more efficient vaccines, speed up research on cures for cancer, or lead to completely new materials.

It also marks a second Nobel win for AI, after computer scientist Geoffrey Hinton was awarded the 2024 Nobel Prize in physics for his foundational contributions to deep learning. Read the full story.

—Melissa Heikkilä

David Baker spoke to MIT Technology Review in 2022 about his work. Check out what he had to say about the revolutionary technology.

Adobe wants to make it easier for artists to blacklist their work from AI scraping

The news: Adobe has announced a new tool to help creators watermark their artwork and opt out of having it used to train generative AI models.

How it works: The web app, called Adobe Content Authenticity, allows artists to signal that they do not consent for their work to be used by AI models, which are generally trained on vast databases of content scraped from the internet. It also gives creators the opportunity to add what Adobe is calling “content credentials,” including their verified identity, social media handles, or other online domains, to their work.

Why it matters: Adobe’s relationship with the artistic community is complicated. While it says that it doesn’t (and won’t) train its AI on user content, many artists have argued that the company doesn’t actually obtain consent or own the rights to individual contributors’ images. Read the full story.

—Rhiannon Williams 

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Florida residents are being warned to move their EVs
Hurricane Milton-induced floodwaters mean there’s a heightened risk of battery fires. (NYT $)
+ It’s likely to take years to fully recover from Hurricanes Helene and Milton. (Vox)
+ Climate change is making these extreme weather events more damaging. (Economist $)

2 Meta’s Oversight Board is opening a new appeals center
It’ll issue decisions on cases brought by Facebook, YouTube or TikTok users. (WP $)

3 The US government is working out how to break up Google
If it went ahead, it’d be the first major breakup since AT&T in 1984. (WSJ $)
+ The measures could prevent Google from using Chrome or Android to give it an edge. (FT $)

4 Baidu is considering rolling out robotaxis outside of China
Just as the US has proposed banning Chinese-made software in connected cars. (CNBC)
+ Tesla is poised to announce some robotaxi news tomorrow. (Bloomberg $)
+ The autonomous taxi market is locked in intense competition right now. (Insider $)
+ What’s next for robotaxis in 2024. (MIT Technology Review)

5 X is back in Brazil
The country has lifted its ban on the platform after it paid hefty fines. (BBC)
+ In theory, that should be the end of Elon Musk’s feud with the judge who blocked X. (Bloomberg $)
+ Meanwhile, Turkey has banned Discord after it refused to cooperate with authorities. (Reuters)

6 We’re living in the era of politically motivated AI slop
Political figures are openly sharing AI images without caring that they’re not real. (404 Media)
+ Thankfully, AI-generated content doesn’t seem to have swayed recent European elections. (MIT Technology Review)

7 This carbon sequestration startup is building a huge plant in Quebec
Buoyed by successful pilots in LA and Singapore, Equatic is on the up. (Hakai Magazine)
+ Meta’s former CTO has a new $50 million project: ocean-based carbon removal. (MIT Technology Review)

8 Is Satoshi Nakamoto really Peter Todd?
A new documentary claims that the mysterious bitcoin inventor is actually an early developer of the cryptocurrency. (CoinDesk)
+ Canadian Peter Todd has denied that he’s the crypto mastermind. (New Yorker $)
+ But isn’t that exactly what he would say? (Wired $)

9 Elon Musk’s Las Vegas tunnels are full of trespassers
The Boring Company is sick and tired of dealing with people breaking and entering its underground road network. (Fortune $)

10 What this French cave can tell us about our ancient ancestors
Artifacts are shedding light on how they lived—and died. (New Scientist $)

Quote of the day

“Cybertrucks present acute dangers and don’t meet European standards.”

—James Nix, vehicles policy manager at the nonprofit Transport & Environment, urges the European Commission and authorities in the Czech Republic to ban Tesla’s colossal vehicles from European roads, the Guardian reports.

The big story

The US wants to use facial recognition to identify migrant children as they age

August 2024

The US Department of Homeland Security (DHS) plans to collect and analyze photos of the faces of migrant children at the border in a bid to improve facial recognition technology, MIT Technology Review can reveal.

The technology has traditionally not been applied to children, largely because training data sets of real children’s faces are few and far between, and consist of either low-quality images drawn from the internet or small sample sizes with little diversity. Such limitations reflect the significant sensitivities regarding privacy and consent when it comes to minors. 

In practice, the new DHS plan could effectively solve that problem. But, beyond concerns about privacy, transparency, and accountability, some experts also worry about testing and developing new technologies using data from a population that has little recourse to provide—or withhold—consent. Read the full story.

—Eileen Guo

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Next time you see a panda, look closely—it might actually be a dog ($) 🐼
+ If you prefer your autumn films on the spooky, rather than scary side, I got you.
+ Feeling bored isn’t always a bad thing, sometimes it’s actually productive.
+ If you can’t visit these amazing museums in person, this handy app might just be the next best thing.

Google DeepMind leaders share Nobel Prize in chemistry for protein prediction AI https://www.technologyreview.com/2024/10/09/1105335/google-deepmind-wins-joint-nobel-prize-in-chemistry-for-protein-prediction-ai/ Wed, 09 Oct 2024 11:32:04 +0000

In a second Nobel win for AI, the Royal Swedish Academy of Sciences has awarded half the 2024 prize in chemistry to Demis Hassabis, the cofounder and CEO of Google DeepMind, and John M. Jumper, a director at the same company, for their work on using artificial intelligence to predict the structures of proteins. The other half goes to David Baker, a professor of biochemistry at the University of Washington, for his work on computational protein design. The winners will share a prize pot of 11 million Swedish kronor ($1 million).

The potential impact of this research is enormous. Proteins are fundamental to life, but understanding what they do involves figuring out their structure—a very hard puzzle that once took months or years to crack for each type of protein. By cutting down the time it takes to predict a protein’s structure, computational tools such as those developed by this year’s award winners are helping scientists gain a greater understanding of how proteins work and opening up new avenues of research and drug development. The technology could unlock more efficient vaccines, speed up research on cures for cancer, or lead to completely new materials.

Hassabis and Jumper created AlphaFold, which in 2020 solved a problem scientists had been wrestling with for decades: predicting the three-dimensional structure of a protein from a sequence of amino acids. The AI tool has since been used to predict the shapes of all proteins known to science.

Their latest model, AlphaFold 3, can predict the structures of DNA, RNA, and molecules like ligands, which are essential to drug discovery. DeepMind has also released the source code and database of its results to scientists for free. 

“I’ve dedicated my career to advancing AI because of its unparalleled potential to improve the lives of billions of people,” said Hassabis. “AlphaFold has already been used by more than two million researchers to advance critical work, from enzyme design to drug discovery. I hope we’ll look back on AlphaFold as the first proof point of AI’s incredible potential to accelerate scientific discovery,” he added.

Baker has created several AI tools for designing and predicting the structure of proteins, such as a family of programs called Rosetta. In 2022, his lab created an open-source AI tool called ProteinMPNN that could help researchers discover previously unknown proteins and design entirely new ones. It helps researchers who have an exact protein structure in mind find amino acid sequences that fold into that shape.

Most recently, in late September, Baker’s lab announced it had developed custom molecules that allow scientists to precisely target and eliminate proteins associated with diseases in living cells. 

“[Proteins] evolved over the course of evolution to solve the problems that organisms faced during evolution. But we face new problems today, like covid. If we could design proteins that were as good at solving new problems as the ones that evolved during evolution are at solving old problems, it would be really, really powerful,” Baker told MIT Technology Review in 2022.  

This article has been updated with a quote from Demis Hassabis.

Adobe wants to make it easier for artists to blacklist their work from AI scraping https://www.technologyreview.com/2024/10/08/1105234/adobe-wants-to-make-it-easier-for-artists-to-blacklist-their-work-from-ai-scraping/ Tue, 08 Oct 2024 13:00:00 +0000

Adobe has announced a new tool to help creators watermark their artwork and opt out of having it used to train generative AI models.

The web app, called Adobe Content Authenticity, allows artists to signal that they do not consent for their work to be used by AI models, which are generally trained on vast databases of content scraped from the internet. It also gives creators the opportunity to add what Adobe is calling “content credentials,” including their verified identity, social media handles, or other online domains, to their work.

Content credentials are based on C2PA, an internet protocol that uses cryptography to securely label images, video, and audio with information clarifying where they came from—the 21st-century equivalent of an artist’s signature. 
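The core pattern is easy to sketch. The toy Python below is not the C2PA format (real content credentials use certificate-based signatures embedded in the asset, per the C2PA specification); it only illustrates the underlying idea of binding a cryptographic hash of the file to a signed claim about its creator, with the field names, sample bytes, and shared HMAC key all being illustrative stand-ins:

```python
import hashlib
import hmac
import json

def make_manifest(asset_bytes, creator, key):
    """Toy provenance manifest: hash the asset, then sign the whole claim.
    Real C2PA signing uses per-creator certificates, not a shared HMAC key."""
    claim = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "creator": creator,
        "ai_training": "not_allowed",  # the opt-out signal, as a labeled field
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(key, payload, "sha256").hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(asset_bytes, manifest, key):
    """A manifest checks out only if the signature is valid AND the file
    still hashes to the value recorded in the claim."""
    claim = manifest["claim"]
    payload = json.dumps(claim, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        manifest["signature"], hmac.new(key, payload, "sha256").hexdigest()
    )
    ok_hash = claim["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()
    return ok_sig and ok_hash

key = b"demo-signing-key"
art = b"\x89PNG...imaginary image bytes"
manifest = make_manifest(art, "@some_artist", key)

assert verify_manifest(art, manifest, key)           # untouched file checks out
assert not verify_manifest(art + b"x", manifest, key)  # any edit breaks the binding
```

The second assertion is the point of the signature: tampering with either the file or the claim invalidates the manifest, which is why Adobe pairs this metadata with fingerprinting and watermarking to survive cases where the metadata itself gets stripped.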

Although Adobe had already integrated the credentials into several of its products, including Photoshop and its own generative AI model Firefly, Adobe Content Authenticity allows creators to apply them to content regardless of whether it was created using Adobe tools. The company is launching a public beta in early 2025.

The new app is a step in the right direction toward making C2PA more ubiquitous and could make it easier for creators to start adding content credentials to their work, says Claire Leibowicz, head of AI and media integrity at the nonprofit Partnership on AI.

“I think Adobe is at least chipping away at starting a cultural conversation, allowing creators to have some ability to communicate more and feel more empowered,” she says. “But whether or not people actually respond to the ‘Do not train’ warning is a different question.”

The app joins a burgeoning field of AI tools designed to help artists fight back against tech companies, making it harder for those companies to scrape their copyrighted work without consent or compensation. Last year, researchers from the University of Chicago released Nightshade and Glaze, two tools that let users add an invisible poison attack to their images. One causes AI models to break when the protected content is scraped, and the other conceals someone’s artistic style from AI models. Adobe has also created a Chrome browser extension that allows users to check website content for existing credentials.

Users of Adobe Content Authenticity will be able to attach as much or as little information as they like to the content they upload. Because it’s relatively easy to accidentally strip a piece of content of its unique metadata while preparing it for upload to a website, Adobe is using a combination of methods, including digital fingerprinting and invisible watermarking, as well as the cryptographic metadata.

This means the content credentials will follow the image, audio, or video file across the web, so the data won’t be lost if it’s uploaded on different platforms. Even if someone takes a screenshot of a piece of content, Adobe claims, credentials can still be recovered.

However, the company acknowledges that the tool is far from infallible. “Anybody who tells you that their watermark is 100% defensible is lying,” says Ely Greenfield, Adobe’s CTO of digital media. “This is defending against accidental or unintentional stripping, as opposed to some nefarious actor.”

The company’s relationship with the artistic community is complicated. In February, Adobe updated its terms of service to give it access to users’ content “through both automated and manual methods,” and to say it uses techniques such as machine learning in order to improve its vaguely worded “services and software.” The update was met with a major backlash from artists who took it to mean the company planned to use their work to train Firefly. Adobe later clarified that the language referred to features not based on generative AI, including a Photoshop tool that removes objects from images. 

While Adobe says that it doesn’t (and won’t) train its AI on user content, many artists have argued that the company doesn’t actually obtain consent or own the rights to individual contributors’ images, says Neil Turkewitz, an artists’ rights activist and former executive vice president of the Recording Industry Association of America.

“It wouldn’t take a huge shift for Adobe to actually become a truly ethical actor in this space and to demonstrate leadership,” he says. “But it’s great that companies are dealing with provenance and improving tools for metadata, which are all part of an ultimate solution for addressing these problems.”

The Download: Geoffrey Hinton’s Nobel Prize, and multimodal AI https://www.technologyreview.com/2024/10/08/1105220/the-download-geoffrey-hintons-nobel-prize-and-multimodal-ai/ Tue, 08 Oct 2024 12:25:00 +0000

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Geoffrey Hinton, AI pioneer and figurehead of doomerism, wins Nobel Prize

Geoffrey Hinton, a computer scientist whose pioneering work on deep learning in the 1980s and ’90s underpins all of the most powerful AI models in the world today, has been awarded the 2024 Nobel Prize in physics by the Royal Swedish Academy of Sciences.

Hinton shares the award with fellow computer scientist John Hopfield, who invented a type of pattern-matching neural network that could store and reconstruct data. Hinton built on this technology, known as a Hopfield network, to develop backpropagation, an algorithm that lets neural networks learn.

But since May 2023, when MIT Technology Review helped break the news that Hinton was now scared of the technology that he had helped bring about, the 76-year-old scientist has become much better known as a figurehead for doomerism—the mindset that there is a very real risk that near-future AI could produce catastrophic results, up to and including human extinction. Read the full story.

—Will Douglas Heaven

Forget chat. AI that can hear, see and click is already here

Chatting with an AI chatbot is so 2022. The latest hot AI toys take advantage of multimodal models, which can handle several kinds of input at the same time, such as images, audio, and text.

Multimodal generative content has also become markedly better in a very short time, and the way we interact with AI systems is also changing, becoming less reliant on text. What unites these features is a more interactive, customizable interface and the ability to apply AI tools to lots of different types of source material. But we’ve yet to see a killer app. Read the full story.

—Melissa Heikkilä

This story is from The Algorithm, our weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.

Why artificial intelligence and clean energy need each other

—Michael Kearney is a general partner at Engine Ventures, a firm that invests in startups commercializing breakthrough science and engineering. Lisa Hansmann is a principal at Engine Ventures and previously served as special assistant to the president in the Biden administration, working on economic policy and implementation.

We are in the early stages of a geopolitical competition for the future of artificial intelligence. The winners will dominate the global economy in the 21st century.

But what’s been too often left out of the conversation is that AI’s huge demand for concentrated and consistent amounts of power represents a chance to scale the next generation of clean energy technologies.

If we ignore this opportunity, the United States will find itself disadvantaged in the race for the future of both AI and energy production, ceding global economic leadership to China. Read the full story.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Florida is bracing itself for Hurricane Milton
Just days after Hurricane Helene devastated the state, residents have been ordered to evacuate. (The Guardian)
+ Experts are stunned at how quickly the storm intensified. (FT $)
+ It grew from a tropical storm to a Category 5 hurricane within just a day. (Vox)

2 Google has been ordered to open its Play Store to rivals
A judge has ruled that Google must allow developers to add their own app stores to its Android system for three years. (NYT $)
+ Google isn’t allowed to strike exclusive deals for its Play Store, either. (WSJ $)
+ It’s a major antitrust victory for Epic Games. (WP $)

3 FTX customers are going to get their money back
A US judge has greenlit a plan to repay them billions of dollars. (Wired $)
A US judge has greenlit a plan to repay them billions of dollars. (Wired $)

4 Greenland has changed dramatically in the past few decades
Its future depends on how we react to global warming. (New Yorker $)
+ Many dams across the world aren’t fit for purpose any more. (Undark Magazine)
+ Sorry, AI won’t “fix” climate change. (MIT Technology Review)

5 Work is drying up for freelance gig workers
Fewer people are hiring them for small tasks in the wake of covid. (FT $)

6 What it’s like to build a data center in Malaysia
The region is home to one of the world’s biggest AI construction projects. (WSJ $)
+ Meanwhile, Ireland is struggling to do the same. (FT $)

7 A European Space Agency probe is investigating an asteroid smash
It’s going to assess how a 2022 NASA mission affected it. (IEEE Spectrum)
+ Watch the moment NASA’s DART spacecraft crashed into an asteroid. (MIT Technology Review)

8 Inside the world’s first humanoid robot factory 🤖
Agility Robotics is building major production lines to assemble its Digit machines. (Bloomberg $)

9 AI-generated pro-North Korea propaganda is floating around TikTok
Bizarrely, the videos appear to be linked to ads for supplements. (404 Media)

10 What lies beneath the moon’s surface?
A soft, gooey layer, apparently. (Vice)
+ What’s next for the moon. (MIT Technology Review)

Quote of the day

“You’re going to end up paying something to make the world right after having been found to be a monopolist.”

—US District Judge James Donato warns Google’s lawyers of tough times ahead after he ordered the company to overhaul its mobile app business, Reuters reports.

The big story

Large language models can do jaw-dropping things. But nobody knows exactly why.

March 2024

Two years ago, Yuri Burda and Harri Edwards, researchers at OpenAI, were trying to find out what it would take to get a large language model to do basic arithmetic. At first, things didn’t go too well. The models memorized the sums they saw but failed to solve new ones. 

By accident, Burda and Edwards left some of their experiments running for days rather than hours. The models were shown the example sums over and over again, and eventually they learned to add two numbers—it had just taken a lot more time than anybody thought it should.

In certain cases, models could seemingly fail to learn a task and then all of a sudden just get it, as if a lightbulb had switched on, a behavior the researchers called grokking. Grokking is just one of several odd phenomena that have AI researchers scratching their heads. The largest models, and large language models in particular, seem to behave in ways textbook math says they shouldn’t.

This highlights a remarkable fact about deep learning, the fundamental technology behind today’s AI boom: for all its runaway success, nobody knows exactly how—or why—it works. Read the full story.

—Will Douglas Heaven

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ The sausage dogs are coming!
+ Ever wondered what the worst-rated film of all time is? Wonder no more.
+ How to make downsizing more rewarding, less harrowing.
+ These secluded hikes look fabulous—just don’t forget your map.

Geoffrey Hinton, AI pioneer and figurehead of doomerism, wins Nobel Prize https://www.technologyreview.com/2024/10/08/1105221/geoffrey-hinton-just-won-the-nobel-prize-in-physics-for-his-work-on-machine-learning/ Tue, 08 Oct 2024 12:01:13 +0000

Geoffrey Hinton, a computer scientist whose pioneering work on deep learning in the 1980s and ’90s underpins all of the most powerful AI models in the world today, has been awarded the 2024 Nobel Prize in physics by the Royal Swedish Academy of Sciences.

Speaking on the phone to the Academy minutes after the announcement, Hinton said he was flabbergasted: “I had no idea this would happen. I’m very surprised.”

Hinton shares the award with fellow computer scientist John Hopfield, who invented a type of pattern-matching neural network that could store and reconstruct data. Hinton built on this technology, known as a Hopfield network, to develop backpropagation, an algorithm that lets neural networks learn.

Hopfield and Hinton borrowed methods from physics, especially statistical techniques, to develop their approaches. In the words of the Nobel Prize committee, the pair are recognized “for foundational discoveries and inventions that enable machine learning with artificial neural networks.”
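The "store and reconstruct" behavior the committee cites is compact enough to sketch. The minimal NumPy toy below stores one binary pattern with the classic Hebbian outer-product rule and recovers it from a corrupted copy; the network size, random seed, and which units get flipped are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Store one 16-unit binary (+/-1) pattern with the Hebbian outer-product rule.
pattern = rng.choice([-1, 1], size=16)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)  # Hopfield networks have no self-connections

# Corrupt 3 of the 16 units, then let the network settle toward a stored state.
noisy = pattern.copy()
noisy[[2, 7, 11]] *= -1

state = np.sign(W @ noisy)  # one synchronous update of every unit

print(np.array_equal(state, pattern))  # → True: the stored pattern is recovered
```

With a single stored pattern and only a few flipped bits, one update step is enough: each unit's weighted input is dominated by the stored pattern, so the corrupted units snap back. That associative "energy landscape" framing, borrowed from statistical physics, is the connection the prize highlights.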

But since May 2023, when MIT Technology Review helped break the news that Hinton was now scared of the technology that he had helped bring about, the 76-year-old scientist has become much better known as a figurehead for doomerism—the idea that there’s a very real risk that near-future AI could precipitate catastrophic events, up to and including human extinction.  

Doomerism wasn’t new, but Hinton—who won the Turing Award, the top prize in computing science, in 2018—brought new credibility to a position that many of his peers once considered kooky.

What led Hinton to speak out? When I met with him in his London home last year, Hinton told me that he was awestruck by what new large language models could do. OpenAI’s latest flagship model, GPT-4, had been released a few weeks before. What Hinton saw convinced him that such technology—based on deep learning—would quickly become smarter than humans. And he was worried about what motivations it would have when it did.  

“I have suddenly switched my views on whether these things are going to be more intelligent than us,” he told me at the time. “I think they’re very close to it now and they will be much more intelligent than us in the future. How do we survive that?”

Hinton’s views set off a months-long media buzz and made the kind of existential risks that he and others were imagining (from economic collapse to genocidal robots) into mainstream concerns. Hundreds of top scientists and tech leaders signed open letters warning of the disastrous downsides of artificial intelligence. A moratorium on AI development was floated. Politicians assured voters they would do what they could to prevent the worst.

Despite the buzz, many consider Hinton’s views to be fantastical. Yann LeCun, chief AI scientist at Meta and Hinton’s fellow recipient of the 2018 Turing Award, has called doomerism “preposterously ridiculous.”

Today’s prize rewards foundational work in a technology that has become part of everyday life. It is also sure to shine an even brighter light on Hinton’s more scaremongering opinions.

Why artificial intelligence and clean energy need each other
https://www.technologyreview.com/2024/10/08/1105165/why-artificial-intelligence-and-clean-energy-need-each-other/
Tue, 08 Oct 2024 10:00:00 +0000

We are in the early stages of a geopolitical competition for the future of artificial intelligence. The winners will dominate the global economy in the 21st century.

But what’s been too often left out of the conversation is that AI’s huge demand for concentrated and consistent amounts of power represents a chance to scale the next generation of clean energy technologies. If we ignore this opportunity, the United States will find itself disadvantaged in the race for the future of both AI and energy production, ceding global economic leadership to China.

To win the race, the US is going to need access to a lot more electric power to serve data centers. AI data centers could add the equivalent of three New York Cities’ worth of load to the grid by 2026, and they could more than double their share of US electricity consumption—to 9%—by the end of the decade. Artificial intelligence will thus contribute to a spike in power demand that the US hasn’t seen in decades; according to one recent estimate, that demand—previously flat—is growing by around 2.5% per year, with data centers driving as much as 66% of the increase.

Energy-hungry advanced AI chips are behind this growth. Three watt-hours of electricity are required for a ChatGPT query, compared with just 0.3 watt-hours for a simple Google search. These computational requirements make AI data centers uniquely power dense, requiring more power per server rack and orders of magnitude more power per square foot than traditional facilities. Sam Altman, CEO of OpenAI, reportedly pitched the White House on the need for AI data centers requiring five gigawatts of capacity—enough to power over 3 million homes. And AI data centers require steady and reliable power 24 hours a day, seven days a week; they are up and running 99.999% of the time.
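The arithmetic behind these figures is easy to sanity-check. Here is a rough sketch in Python (the average-household figure of about 1.2 kW of continuous draw, roughly 10,500 kWh per year, is an assumption for illustration, not a number from this article):

```python
# Back-of-envelope check of the data-center energy figures above.

CHATGPT_WH = 3.0    # watt-hours per ChatGPT query (cited above)
SEARCH_WH = 0.3     # watt-hours per simple Google search (cited above)
ratio = CHATGPT_WH / SEARCH_WH
print(f"A ChatGPT query uses ~{ratio:.0f}x the energy of a search")

DATA_CENTER_W = 5e9      # five gigawatts of capacity
HOUSEHOLD_AVG_W = 1.2e3  # assumed average US household draw (~10,500 kWh/year)
homes_millions = DATA_CENTER_W / HOUSEHOLD_AVG_W / 1e6
print(f"5 GW is a continuous supply for ~{homes_millions:.1f} million homes")

UPTIME = 0.99999  # "five nines" availability
downtime_minutes = (1 - UPTIME) * 365 * 24 * 60
print(f"99.999% uptime leaves ~{downtime_minutes:.1f} minutes of downtime per year")
```

By this rough math, five gigawatts could indeed supply well over 3 million average homes, and five-nines availability leaves only about five minutes of slack per year.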

The demands that these gigawatt-scale users are placing on the electricity grid are already accelerating far faster than we can expand the physical and political structures that support the development of clean electricity. There are over 1,500 gigawatts of capacity waiting to connect to the grid, and the time to build transmission lines to move that power now stretches into a decade. One illustration of the challenges involved in integrating new power sources: The biggest factor delaying Constellation’s recently announced restart of the Three Mile Island nuclear plant isn’t the facility itself but the time required to connect it to the grid.

The reflexive response to the challenge of scaling clean-electricity supply has been to pose a false choice: cede the United States’ advantage in AI or cede our commitment to clean energy. This logic argues that the only way to meet the growing power demands of the computing economy will involve the expansion of legacy energy resources like natural gas and the preservation of coal-fired power plants.

The dire ecological implications of relying on more fossil fuels are clear. But the economic and security implications are just as serious. Further investments in fossil fuels threaten our national competitiveness as other countries leap ahead in the clean technologies that present the next generation of economic opportunity—markets measured in the trillions.

The reality is that the unprecedented scale and density of power needed for AI require a novel set of generation solutions, able to deliver reliable power 24-7 in ever-increasing amounts. While advocates for legacy fuels have historically pointed to the variability of renewables, power sources like natural gas, which depend on massive, distributed, and easily disrupted fuel supplies, are not the answer either. In Texas, natural-gas plants accounted for 70% of outages after a severe winter storm in late 2022. As climate change intensifies, weather-related disruptions are only likely to increase.

Rather than seeing a choice between AI competitiveness and climate, we see AI’s urgent demand for power density as an opportunity to kick-start a slew of new technologies, taking advantage of new buyers and new market structures—positioning the US to not only seize the AI future but create the markets for the energy-dense technologies that will be needed to power it.

Data centers’ incessant demand for computing power is best matched to a set of novel sources of clean, reliable power that are currently undergoing rapid innovation. Those include advanced nuclear fission that can be rapidly deployed at small scale and next-generation geothermal power that can be deployed anywhere, anytime. One day, the arsenal could include nuclear fusion as a source of nearly limitless clean energy. These technologies can produce large amounts of energy in relatively small footprints, matching AI’s demand for concentrated power. They have the potential to provide stable, reliable baseload power matched to AI data centers’ 24-7 operations. While some of these technologies (like fusion) remain in development, others (like advanced fission and geothermal energy) are ready to deploy today.

AI’s power density requirements similarly necessitate a new set of electricity infrastructure enhancements—like advanced conductors for transmission lines that can move up to 10 times as much power through much smaller areas, cooling infrastructure that can address the heat of vast quantities of energy-hungry chips humming alongside one another, and next-generation transformers that enable the efficient use of higher-voltage power. These technologies offer significant economic benefits to AI data centers in the form of increased access to power and reduced latency, and they will enable the rapid expansion of our 20th-century electricity grid to serve 21st-century needs. 

Moreover, the convergence of AI and energy technologies will allow for faster development and scaling of both sectors. Across the clean-energy sector, AI serves as a method of invention, accelerating the pace of research and development for next-generation materials design. It is also a tool for manufacturing, reducing capital intensity and increasing the pace of scaling. Already, AI is helping us overcome barriers in next-generation power technologies. For instance, Princeton researchers are using it to predict and avoid plasma instabilities that have long been obstacles to sustained fusion reactions. In the geothermal and mining context, AI is accelerating the pace and driving down the cost of commercial-grade resource discovery and development. Other firms use AI to predict and optimize performance of power plants in the field, greatly reducing the capital intensity of projects.

Historically, deployment of novel clean energy technologies has had to rely on utilities, which are notoriously slow to adopt innovations and invest in first-of-a-kind commercial projects. Now, however, AI has brought in a new source of capital for power-generation technologies: large tech companies that are willing to pay a premium for 24-7 clean power and are eager to move quickly.

These “new buyers” can build additional clean capacity in their own backyards. Or they can deploy innovative market structures to encourage utilities to work in new ways to scale novel technologies. Already, we are seeing examples, such as the agreement between Google, the geothermal developer Fervo, and the Nevada utility NV Energy to secure clean, reliable power at a premium for use by data centers. The emergence of these price-insensitive but time-sensitive buyers can accelerate the deployment of clean energy technologies.

The geopolitical implications of this nexus between AI and climate are clear: The socioeconomic fruits of innovation will flow to the countries that win both the AI and the climate race. 

The country that is able to scale up access to reliable baseload power will attract AI infrastructure in the long run—and will benefit from access to the markets that AI will generate. And the country that makes these investments first will be ahead, and that lead will compound over time as technical progress and economic productivity reinforce each other.

Today, the clean-energy scoreboard tilts toward China. The country has commissioned 37 nuclear power plants over the last decade, while the United States has added two. It is outspending the US two to one on nuclear fusion, with crews working essentially around the clock on commercializing the technology. Given that the competition for AI supremacy boils down to scaling power density, building a new fleet of natural-gas plants while our primary competitor builds an arsenal of the most power-dense energy resources available is like bringing a knife to a gunfight.

The United States and the US-based technology companies at the forefront of the AI economy have the responsibility and opportunity to change this by leveraging AI’s power demand to scale the next generation of clean energy technologies. The question is, will they?

Michael Kearney is a general partner at Engine Ventures, a firm that invests in startups commercializing breakthrough science and engineering. Lisa Hansmann is a principal at Engine Ventures and previously served as special assistant to the president in the Biden administration, working on economic policy and implementation.

Forget chat. AI that can hear, see, and click is already here.
https://www.technologyreview.com/2024/10/08/1105214/forget-chat-ai-that-can-hear-see-and-click-is-already-here/
Tue, 08 Oct 2024 09:04:52 +0000

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Chatting with an AI chatbot is so 2022. The latest hot AI toys take advantage of multimodal models, which can handle several kinds of input and output—such as images, audio, and text—at the same time.

Exhibit A: Google’s NotebookLM, a research tool the company launched with little fanfare a year ago. A few weeks ago, Google added an AI podcasting tool called Audio Overview to NotebookLM, which allows users to create podcasts about anything. Add a link to, for example, your LinkedIn profile, and the AI podcast hosts will boost your ego for nine minutes. The feature has become a surprise viral hit. I wrote about all the weird and amazing ways people are using it here.

To give you a taste, I created a podcast of our 125th-anniversary magazine issue. The AI does a great job of picking some highlights from the magazine and giving you the gist of what they are about. Have a listen below. 

Multimodal generative content has also become markedly better in a very short time. In September 2022, I covered Meta’s first text-to-video model, Make-A-Video. Next to today’s technology, those videos look clunky and silly. Meta just announced its competitor to OpenAI’s Sora, called Movie Gen. The tool allows users to use text prompts to create custom videos and sounds, edit existing videos, and make images into videos.

The way we interact with AI systems is also changing, becoming less reliant on text. OpenAI’s new Canvas interface allows users to collaborate on projects with ChatGPT. Instead of relying on a traditional chat window, which requires users to do several rounds of prompting and regenerating text to get the desired result, Canvas allows people to select bits of text or code to edit. 

Even search is getting a multimodal upgrade. In addition to inserting ads into AI overviews, Google has rolled out a new feature where users can upload a video and use their voice to search for things. In a demo at Google I/O, the company showed how you can open the Google Lens app, take a video of fish swimming in an aquarium, and ask a question about them. Google’s Gemini model will then search the web and offer you an answer in the form of Google’s AI summary. 

What unites these features is a more interactive, customizable interface and the ability to apply AI tools to lots of different types of source material. NotebookLM was the first AI product in a while that brought me wonder and delight, partly because of how different, realistic, and unexpected the AI voices were. But the fact that NotebookLM’s Audio Overviews became a hit despite being a side feature hidden inside a bigger product just goes to show that AI developers don’t really know what they are doing. Hard to believe now, but ChatGPT itself was an unexpected hit for OpenAI.

We are a couple of years into the multibillion-dollar generative AI boom. The huge investment in AI has contributed to rapid improvement in the quality of the resulting content. But we’ve yet to see a killer app, and these new multimodal applications are a result of the immense pressure AI companies are under to make money and deliver. Tech companies are throwing different AI tools at people and seeing what sticks. 


Now read the rest of The Algorithm

Deeper Learning

AI-generated images can teach robots how to act

Image-generating AI models have been used to create training data for robots. The new system, called Genima, fine-tunes the image-generating AI model Stable Diffusion to draw robots’ movements, helping guide them both in simulations and in the real world.

What’s the big deal: Genima could make it easier to train different types of robots to complete tasks—machines ranging from mechanical arms to humanoid robots and driverless cars. It could also help make AI web agents, a next generation of AI tools that can carry out complex tasks with little supervision, better at scrolling and clicking. Read more from Rhiannon Williams here.

Bits and Bytes

This startup uses AI to detect wildfires 
Our 2024 list of Climate Tech Companies to Watch is here! One company on the list is Pano AI, which uses computer vision and ultra-high-definition cameras to alert firefighters to new blazes. (MIT Technology Review)

How Sam Altman concentrated power in his own hands
And then there was one. With OpenAI now valued at $157 billion, Bloomberg details how the company lost most of its top executives and shifted to an Altman-led profit-making monster. (Bloomberg)

Eight scientists, a billion dollars, and the moonshot agency trying to make Britain great again
A nice profile of the UK’s new Advanced Research and Invention Agency, or ARIA. The agency is the UK’s answer to DARPA in the US. It is funding projects such as Turing Award winner Yoshua Bengio’s work to prevent AI catastrophes. (Wired)

Why women in tech are sounding an alarm
Tech’s AI mania is encouraging the field to backtrack on years of diversity and inclusion efforts, at the expense of women. (The Information)

The Download: how to find new music online, and climate-friendly food
https://www.technologyreview.com/2024/10/07/1105160/the-download-how-to-find-new-music-online-and-climate-friendly-food/
Mon, 07 Oct 2024 12:10:00 +0000

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How to break free of Spotify’s algorithm

Since the heyday of radio, records, cassette tapes, and MP3 players, the branding of sound has evolved from broad genres like rock and hip-hop to “paranormal dark cabaret afternoon” and “synth space,” and streaming has become the default. 

Meanwhile, the ritual of discovering something new is now neatly packaged in a 30-song playlist, refreshed weekly. The only rule in music streaming, as in any other industry these days, is personalization.

But what we’ve gained in convenience, we’ve lost in curiosity. Sure, our unlimited access lets us listen to Swedish tropical house or New Jersey hardcore, but this abundance of choice actually makes our listening experience less expansive or eclectic.

As we grow accustomed to the convenience of shuffling a generated playlist, we forget that discovering music is an active exercise. But it doesn’t have to be this way. Read the full story.

—Tiffany Ng

Tiffany’s piece is from the latest print issue of MIT Technology Review, which is celebrating 125 years of the magazine! If you don’t already, subscribe now to ensure you get hold of future copies once they land.

Roundtable: Producing climate-friendly food

Our food systems account for a major chunk of global greenhouse-gas emissions, but some businesses are attempting to develop solutions that could help address the climate impacts of agriculture. That includes two companies on the recently announced 2024 list of MIT Technology Review’s 15 Climate Tech Companies to Watch. Pivot Bio is inventing new fertilizers, and Rumin8 is working to tackle emissions from cattle.

Join MIT Technology Review senior editor James Temple and senior reporter Casey Crownhart at 12 p.m. ET this Thursday, October 10, for a subscriber-exclusive Roundtable diving into the future of food and the climate with special guests Karsten Temme, chief innovation officer and co-founder of Pivot Bio, and Matt Callahan, co-founder and counsel of Rumin8. Register here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 A deadly virus is spreading across Rwanda
Marburg, which is similar to Ebola, is likely to spread to neighboring countries. (Vox)
+ Rwanda has started vaccine trials to attempt to contain it. (BBC)
+ The risk of it spreading globally is relatively low, though. (NYT $)

2 Two American biologists have been awarded the Nobel Prize
Victor Ambros and Gary Ruvkun have been honored for their microRNA research. (CNN)

3 This powerful lobbying group is challenging US child safety bills
Experts are concerned it’s misusing the First Amendment to do so. (NYT $)
+ Silicon Valley’s lobbying power is on the ascent. (New Yorker $)
+ Child online safety laws will actually hurt kids, critics say. (MIT Technology Review)

4 Scammers in Southeast Asia stole up to $37 billion last year
Gen AI and deepfakes mean their schemes are more convincing than ever. (Bloomberg $)
+ Telegram is a hotbed of criminal activity and fraud networks. (Reuters)
+ Five ways criminals are using AI. (MIT Technology Review)

5 How rural communities are fighting back against data centers 
Grassroots movements are taking back the power—and winning. (WP $)
+ Energy-hungry data centers are quietly moving into cities. (MIT Technology Review)

6 Viable search alternatives to Google are finally emerging
After 15 years of dominance, advertisers are hungry for something different. (WSJ $)
+ It looks as though even more AI Google features are on their way. (Insider $)
+ Why Google’s AI Overviews gets things wrong. (MIT Technology Review)

7 Substack wants to expand beyond newsletters
How, exactly? By becoming a means of payment for creators. (Semafor)

8 The future of search and rescue
Drones can be much quicker and more thorough than human volunteers. (Wired $)
+ AI-directed drones could help find lost hikers faster. (MIT Technology Review)

9 Inside last year’s wild and wacky British inventions
From flatpack coffins to a downwards-facing computer monitor. (The Guardian)

10 Can robots suffer?
That’s the question artist Lawrence Lek is exploring in his latest AI film. (FT $)

Quote of the day

“You don’t need to press a button to open a window. You can just open the window.”

—Adam DeMartino, cofounder of sustainable food startup Smallhold, reflects to the Guardian on how technology can overcomplicate simple ideas.

The big story

AI was supposed to make police bodycams better. What happened?

April 2024

When police departments first started buying and deploying bodycams in the wake of the police killing of Michael Brown in Ferguson, Missouri, a decade ago, activists hoped it would bring about real change.

Years later, despite what’s become a multibillion-dollar market for these devices, the tech is far from a panacea. Most of the vast reams of footage they generate go unwatched. Officers often don’t use them properly. And if they do finally provide video to the public, it’s often selectively edited, lacking context and failing to tell the complete story.

A handful of AI startups see this problem as an opportunity to create what are essentially bodycam-to-text programs for different players in the legal system, mining this footage for misdeeds. But like the bodycams themselves, the technology still faces procedural, legal, and cultural barriers to success. Read the full story.

—Patrick Sisson

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ These chickens are well and truly getting into the Halloween spirit!
+ If you’re lucky enough to live anywhere near these national parks, I suggest you get yourselves down there immediately.
+ Don’t fight it—Mr Brightside is still a banger.
+ No more microtrends, I beg.
