Why Meta’s AI chief doesn’t fear the new tech

January 3, 2025

Yann LeCun joins Kara Swisher for a conversation about regulation, open-source tech, and more

Key Takeaways


Meta, the tech giant behind Facebook and Instagram, has quietly become an industry leader in artificial intelligence over the past year. Meta AI was on track to become the world’s most popular AI assistant before the end of 2024; the tool currently has 600 million monthly users. Meta’s open-source AI model, Llama, and its derivatives have been downloaded more than 650 million times.

“There’s a lot of innovations that we would not have had the idea of, or we didn’t have the bandwidth to do, that people have done because they have the Llama system in their hands,” Yann LeCun, Meta’s chief AI scientist, said. “They were able to experiment with it and come up with new ideas.”

LeCun’s comments came during a recent conversation with veteran tech journalist Kara Swisher for her podcast, held at the Johns Hopkins University Bloomberg Center as part of the Discovery Series.

Here are three things to know about how Meta and LeCun approach AI:

  1. Why Meta made Llama open source

Unlike Anthropic and OpenAI, Meta has publicly released the model weights behind Llama, although it has not disclosed the data set used to train the AI tool. Companies including Accenture, IBM, and Spotify are now using Llama to enhance their operations.

Meta did not release an open-source version of the first iteration of Llama, and LeCun said the company received several requests for one. Since then, it has released multiple open-source versions.

“It allows you to run the system and also fine-tune it however you want,” LeCun explained. He added that Meta thinks the platform will advance faster with more people using it.
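What “run the system” means in practice: the released weights can be downloaded and queried locally. Here is a minimal sketch, assuming the Hugging Face transformers library and access to a license-gated Llama checkpoint; the model ID, prompt, and generation settings are illustrative, not prescribed by Meta.

```python
# Minimal sketch: loading an open Llama checkpoint and generating text
# with Hugging Face transformers. The model ID and parameters are
# illustrative; downloading the weights requires accepting Meta's license.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # assumed example checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Encode a prompt, generate a continuation, and decode it back to text.
prompt = "Explain why open-source AI models spur innovation."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights themselves are on disk, the same checkpoint can also be fine-tuned on custom data with an ordinary training loop, which is the flexibility LeCun is pointing to.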

  2. LeCun thinks AI fears are overblown

LeCun has emerged as a leading critic of proposed AI regulations, arguing that limiting research and development will slow innovation.

“Regulating [R&D] is extremely counterproductive. It’s based on false ideas about the potential dangers of AI,” he said. “The proposals that have existed would have resulted in regulatory capture by a small number of companies.”

LeCun offered pointed criticism of legislation that would require government authorization for AI models based on the amount of computation used to train them.

“There are important questions about AI safety that need to be discussed,” he said, “but a limit of computation just makes no sense.”

  3. Robots “can’t understand the physical world”

LeCun said we’re still years away from AI taking on physical tasks, such as cleaning.

“We don’t have that,” he said. “And it’s not because we can’t build the robots. We just cannot make them smart enough.”

The physical world is much harder to decode than language, he explained.

“It’s counterintuitive for humans to think that … we think language is the pinnacle of intelligence,” he said. “It’s actually simple because it’s, you know, just a sequence of discrete symbols we can handle.”
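That “sequence of discrete symbols” is literal: a language model never sees words, only integer token IDs. A minimal sketch of the idea, using the openly available GPT-2 tokenizer for simplicity (Llama’s tokenizer works the same way):

```python
# Minimal sketch: text becomes the "sequence of discrete symbols"
# LeCun describes. A tokenizer maps a sentence to integer IDs and back.
from transformers import AutoTokenizer

# GPT-2's tokenizer is ungated, so this runs without a license click.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

ids = tokenizer.encode("Language is a sequence of discrete symbols.")
print(ids)                    # a short list of integers, one per token
print(tokenizer.decode(ids))  # round-trips back to the original text
```

The physical world offers no such tidy, finite vocabulary, which is LeCun’s point about why robotics remains the harder problem.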