20VC Yann LeCun on Why Artificial Intelligence Will Not Dominate Humanity, Why No Economists Believe All Jobs Will Be Replaced by AI, Why the Size of Models Matters Less and Less & Why Open Models Beat Closed Models

Abstract

In a conversation with Harry Stebbings on 20VC, AI luminary Yann LeCun, VP and Chief AI Scientist at Meta and Silver Professor at NYU, discusses the transformative potential of AI, envisioning a renaissance where human intelligence is amplified by AI, enabling creativity and productivity. LeCun, a Turing Award recipient, shares his journey into AI, influenced by a philosophical debate on language acquisition, leading to breakthroughs in neural networks and deep learning. Despite the hype cycles and periods of diminished interest in AI, LeCun, alongside colleagues like Yoshua Bengio and Geoff Hinton, persisted, ultimately contributing to the resurgence of neural nets. LeCun addresses misconceptions about AI's risks, emphasizing that intelligence doesn't inherently entail a desire to dominate, and that future AI systems will be designed with controllable objectives. He predicts a shift from autoregressive language models to more sophisticated, controllable systems. LeCun also critiques the notion that AI will lead to mass unemployment, arguing that technology creates as many jobs as it displaces. He advocates for open-source AI as a means to harness collective intelligence, contrasting it with proprietary models, and highlights the need for adaptive regulation in AI development.

Summary Notes

AI as a Catalyst for Human Renaissance

  • AI is projected to greatly amplify human intelligence.
  • Individuals will have access to a "staff" of AI assistants that are smarter and more knowledgeable.
  • This empowerment will affect everyone, enhancing capabilities and understanding.

"AI is going to bring a new renaissance for humanity, a new form of enlightenment, if you want, because AI is going to amplify everybody's intelligence. Every one of us will have a staff of people who are smarter than us and know most things about most things. So it's going to empower every one of us."

The quote highlights the anticipated transformative impact of AI on society, suggesting that it will lead to a new era of human enlightenment by augmenting our intellectual capacities.

Yann LeCun's Background and Contributions

  • Yann LeCun is a prominent figure in AI, holding positions at Meta and NYU.
  • He received the 2018 ACM Turing Award for his work on deep neural networks.
  • His work began with an interest sparked by a philosophical debate on language acquisition.

"Jan is VP and chief AI scientist at MET and silver professor at NYU. He was the founding director of fair and of the NYU center for Data Science. He's the recipient of the 2018 ACM Turing Award for Conceptual and Engineering breakthroughs that have made deep neural networks a critical component of computing."

This quote provides a brief overview of Yann LeCun's professional titles and achievements, emphasizing his significant contributions to the field of AI.

The Genesis of Yann LeCun's Interest in AI

  • LeCun's interest in AI began during his undergraduate studies in France.
  • He was influenced by a philosophical book discussing nature vs. nurture in language acquisition.
  • The debate introduced him to the concept of machine learning and neural networks.

"I was still an undergraduate engineering student in France, and I stumbled on a philosophy book, which was a debate between Jean Pierget and the cognitive psychologist and Noam Chomsky, the famous linguist."

The quote explains the initial encounter that sparked LeCun's interest in AI, which was the result of a philosophical debate on language development.

Breakthroughs in Neural Networks and Deep Learning

  • LeCun's first significant breakthrough came during his engineering studies.
  • He focused on overcoming the limitations of older systems by developing new learning algorithms.
  • Collaboration with Geoff Hinton led to further advancements and the creation of convolutional networks.

"I started getting interested in what was not yet called machine learning, but eventually became neural nets and now deep learning."

This quote describes LeCun's early engagement with the field that would become known as machine learning and his subsequent contributions to neural nets and deep learning.

Persistence Through AI's "Desert" Periods

  • Yann LeCun, along with colleagues, anticipated the resurgence of neural nets.
  • He experienced a period of disinterest in neural nets during the mid-90s to early 2000s.
  • LeCun switched focus to internet technologies before returning to deep learning.

"Yosha and I were actually working together at, at and T Bell Labs in the early ninety s. And then the interest of the community for those methods started waning around 1995 or so."

The quote reflects on a time when interest in neural networks waned, yet it also highlights the collaborative nature of LeCun's work with colleagues like Yoshua Bengio.

AI Development: Continuous Evolution vs. Inflection Points

  • The progress in AI can seem like both an extension of past work and an inflection point.
  • Recent advancements in self-supervised learning and transformer architectures were unexpected.
  • Public awareness of AI often peaks with high-profile demonstrations, while researchers see a continuous progression.

"It's a combination of the two. On the one hand, a lot of what we see today, when you are down in the trenches of research, looks a logical extension."

This quote provides insight into the perception of AI development from within the research community, contrasting it with public perception.

Surprising Advancements in AI

  • The capabilities of autoregressive large language models (LLMs) have been a recent surprise.
  • Training language models on extensive data has yielded unexpected abilities.
  • LeCun is already considering what the next stage of AI development will be.

"The fact that self supervised learning methods applied to transformer architectures work amazingly well, and they work way beyond what we could have expected."

This quote acknowledges the remarkable progress in self-supervised learning, which has exceeded expectations in the field.
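
The recipe behind that surprise is conceptually simple: self-supervised learning trains the model to predict withheld parts of its own input, so no human labels are needed. The sketch below is purely illustrative (the "model output" is random noise standing in for a real transformer, and all sizes are arbitrary); it shows the next-token-prediction objective that autoregressive LLMs are trained with.

```python
# Illustrative next-token (self-supervised) objective: the targets are just the
# input tokens shifted by one position, so no human annotation is required.
# The "logits" below are random noise standing in for a real transformer's output.

import torch
import torch.nn.functional as F

vocab_size = 1000
batch, seq_len = 4, 16

tokens = torch.randint(0, vocab_size, (batch, seq_len))  # a batch of token ids
logits = torch.randn(batch, seq_len, vocab_size)         # stand-in for model outputs

# Predict token t+1 from the tokens up to t: align each prediction with the next token.
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),  # predictions at positions 0..T-2
    tokens[:, 1:].reshape(-1),               # targets are the tokens at positions 1..T-1
)
print(f"self-supervised loss: {loss.item():.3f}")
```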

Limitations of Current AI Systems

  • Current AI systems lack human-level intelligence and understanding.
  • Their knowledge is superficial, limited to language without real-world experience.
  • AI researchers are exploring what is missing and how to advance beyond current capabilities.

"So those systems do not have anywhere close to human level intelligence. Okay. Despite what you might think, we are kind of fooled into thinking it because those systems are very fluent with language, but their ability to think, to understand how the world works, to plan, are very limited."

The quote emphasizes the current limitations of AI systems, particularly in terms of their depth of understanding and world knowledge.

The Future of AI Beyond Language Models

  • Future AI systems will need to understand the world in ways comparable to humans.
  • This understanding will require experiences or simulations of the real world.
  • The future AI will not be limited to language models but will encompass broader capabilities.

"There is no question that eventually AI systems will understand the world in similar ways that humans do, perhaps better ways, but there will not be autoregressive."

This quote suggests that future AI systems will evolve beyond current language models, acquiring a more comprehensive understanding of the world.

Autoregressive LLMs and Future AI Systems

  • Yann LeCun predicts a shift away from autoregressive large language models (LLMs) toward more sophisticated and controllable systems.
  • Future AI systems will be designed to plan their outputs to meet specific objectives.
  • The objectives set for AI systems will ensure safety and relevance of their actions.

"Okay? So my prediction is that within a few years, nobody in their right mind would use autoregressive mlms. They'll go away in favor of something more sophisticated and controllable that can plan its answer, as opposed to just produce one word after the other reactively."

This quote explains Yann LeCun's prediction that autoregressive LLMs will become obsolete as more advanced, controllable AI systems take their place. These systems will be able to plan their responses rather than simply reacting word by word.
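
To make "produce one word after the other reactively" concrete, here is a toy sketch of autoregressive generation. The vocabulary and probabilities are invented for illustration; a real LLM conditions on the full preceding context with a neural network, but the control flow is the same: each token is sampled given only what has already been emitted, with no global plan for the answer.

```python
# Toy autoregressive generation: each token is sampled from a distribution
# conditioned only on what has been produced so far (here, just the last token).
# The vocabulary and probabilities below are invented purely for illustration.

import random

NEXT_TOKEN_PROBS = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a":   {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"</s>": 1.0},
    "ran": {"</s>": 1.0},
}

def generate(max_len: int = 10) -> str:
    """Emit tokens one at a time, reactively, until the end-of-sequence token."""
    tokens = ["<s>"]
    for _ in range(max_len):
        probs = NEXT_TOKEN_PROBS[tokens[-1]]
        next_token = random.choices(list(probs), weights=list(probs.values()))[0]
        if next_token == "</s>":
            break
        tokens.append(next_token)
    return " ".join(tokens[1:])

print(generate())  # e.g. "the cat sat"
```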

Intelligence and the Desire to Dominate

  • Intelligence does not inherently include the desire or ability to dominate others.
  • The desire to influence is a trait evolved in social species for survival, not a byproduct of intelligence.
  • Intelligence and the will to dominate should be considered separate concepts.

"The second fallacy is that there is this idea somehow that the desire to and the ability to dominate is linked with intelligence."

Yann LeCun addresses the misconception that higher intelligence inevitably leads to a desire to dominate, emphasizing that this trait is not inherently linked to intelligence but rather to social evolutionary needs.

Instilling Values in AI

  • Future AI systems will be designed with objectives that align with safety and user needs.
  • The objectives will be hardwired to ensure safe interactions with humans.
  • The process of instilling values in AI is iterative, with ongoing adjustments based on real-world effects.

"So that's the way to build safe AI system. You make them produce answers that by construction have to satisfy objectives, and you design those objectives so that their actions are safe."

Yann LeCun describes how to create safe AI systems by designing them to fulfill objectives that inherently make their actions safe. This is an integral part of the AI system's architecture.
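
As a rough illustration of that idea (a minimal sketch, not LeCun's actual architecture or any shipped system), one can imagine scoring candidate answers against hard-wired objectives, implemented here as simple cost functions, and only returning an answer whose total cost is acceptable. The objectives, threshold, and strings below are all invented for the example.

```python
# Illustrative sketch of objective-driven answer selection (not a real product API):
# candidate answers are scored against hard-wired cost functions, and only an answer
# whose total cost is low enough is returned. All objectives, thresholds, and strings
# below are toy stand-ins invented for this example.

from typing import Callable, List

Objective = Callable[[str, str], float]  # (answer, question) -> cost

def safety_cost(answer: str, question: str) -> float:
    """Stand-in for a learned safety critic: penalize disallowed content."""
    return 10.0 if "disallowed" in answer.lower() else 0.0

def relevance_cost(answer: str, question: str) -> float:
    """Stand-in for a relevance objective: penalize answers sharing no words with the question."""
    overlap = set(answer.lower().split()) & set(question.lower().split())
    return 0.0 if overlap else 5.0

def choose_answer(question: str, candidates: List[str],
                  objectives: List[Objective], max_cost: float = 1.0) -> str:
    """Return the candidate with the lowest summed cost, or refuse if none is acceptable."""
    cost, best = min((sum(obj(c, question) for obj in objectives), c) for c in candidates)
    return best if cost <= max_cost else "No candidate satisfied the safety objectives."

candidates = [
    "The plumbing fix involves disallowed steps.",
    "You can fix the leak by tightening the valve.",
]
print(choose_answer("How do I fix my plumbing?", candidates, [safety_cost, relevance_cost]))
# -> You can fix the leak by tightening the valve.
```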

Setting Objectives for AI Systems

  • There will be a vetting process to determine who can set objectives for AI systems.
  • Regulating agencies may be involved in overseeing the objectives for AI systems, especially in sensitive areas like healthcare or transportation.
  • The process of setting objectives may be akin to the vetting process used in other professional fields.

"There's going to have to be a process by which we allow people to do this, some vetting process, the same way that there's a vetting process for people to take care of your health or cut your hair or fix your plumbing or your car."

Yann LeCun highlights the need for a structured process to determine who can set objectives for AI systems, drawing parallels to how professionals in various fields are vetted and regulated.

The Future of Intelligent Assistants

  • Intelligent assistants will become a primary interface for interacting with the digital world.
  • The infrastructure for these assistants will likely be open and subject to a vetting process similar to Wikipedia.
  • The open infrastructure will ensure that the knowledge base and functionality of AI assistants are accurate and trustworthy.

"You're not going to go to Google or Wikipedia, you're just going to talk to your assistant."

Yann LeCun envisions a future where intelligent assistants, with open infrastructure and a collaborative vetting process, will become the main way people access information and interact with the digital world.

Open vs. Closed AI Models

  • Open AI models allow the entire world to contribute ideas and improvements.
  • Open source projects, especially those that serve as basic infrastructure, tend to succeed due to wide-ranging contributions.
  • Historical examples show that open systems often outperform closed systems in the tech industry.

"It's very simple. It's because no outfit as powerful as they may be, has a monopoly on good ideas."

Yann LeCun argues that open AI models are superior because they benefit from the collective intelligence of contributors worldwide, as opposed to the limited scope of ideas within a single organization.

Meta's Approach to Open Source

  • Meta has historically open-sourced its basic infrastructure technologies.
  • Open sourcing does not prevent a company from utilizing its technology effectively.
  • Meta's open-source contributions include React, PyTorch, and hardware server designs.

"It is not because other people can use your technology that you can't exploit it to the same extent."

Yann LeCun explains that Meta's strategy of open-sourcing its technologies does not hinder its ability to leverage those technologies for its purposes.

Data Models and AI Efficiency

  • Large data models are not the only path to effective AI; smaller models can also be highly efficient.
  • AI systems that plan and satisfy objectives may not need to be as large as current models.
  • The efficiency of AI learning and planning is an area where human brains still outperform AI models.

"So I think it opened the minds of people to the fact that there is like, enormous opportunities that really weren't thought to be possible before."

Yann LeCun discusses how the success of smaller models like LLaMA has changed perceptions about the necessity of large models and opened up new possibilities for AI development.

Startups vs. Incumbents in AI

  • The future of AI may involve an open platform for base language models, leading to a diverse ecosystem.
  • An open platform would allow for contributions from many sources, fostering innovation and accuracy.
  • Proprietary approaches may fall behind due to the advantages of open collaboration.

"So the scenario I think will happen, and I'm certainly rooting for, is the scenario I described earlier, where you have some sort of open platform for base llms."

Yann LeCun advocates for an open platform scenario for AI development, which he believes will lead to a more innovative and collaborative environment, benefiting startups and incumbents alike.

Introduction of Galactica

  • Galactica was a large language model designed to assist scientists in writing papers.
  • The model could generate text, build tables with LaTeX commands, and translate chemical formulas into names.
  • Despite its potential utility, especially for non-native English speakers, Galactica faced severe backlash on social media.
  • Critics feared it would lead to an influx of nonsensical scientific papers.
  • Due to the intense negative reaction, the Meta team removed the Galactica demo.

"So Galactica was a large language model trained to trend on the entirety of the scientific literature." This quote explains the purpose and training of Galactica, emphasizing its focus on scientific literature.

"And it was basically designed to help scientists write papers." The quote highlights the primary function of Galactica, which was to assist in the scientific paper writing process.

"As soon as the demo was put out, it was murdered by the social network Twitter sphere." This quote describes the immediate and intense negative response from the Twitter community upon the release of Galactica's demo.

"The people at Meta who built it couldn't take it. They took down the demo because they said, we can't sleep at night." The developers at Meta were overwhelmed by the backlash, leading them to take down the Galactica demo.

AI Innovations and Business Models

  • Large companies with strong reputations are cautious about releasing new technologies due to legal and public image concerns.
  • Small companies may not face the same level of scrutiny when releasing similar technologies.
  • Google's minor factual error with Bard led to a significant stock price drop, illustrating the high stakes for reputable companies.
  • There is a paradox where companies with the best technology face difficulties releasing it due to these concerns.

"Which is why I think there's a bit of a paradox, which is that the companies that have the best technology basically can't have difficulties putting it out because of those legal issues and public image." This quote identifies the paradox where leading companies struggle to release new technologies due to potential legal and image repercussions.

"Google's stock went down by 8%." The quote exemplifies the tangible financial impact of a minor error in AI technology on a major company like Google.

Digital World Interaction and AI

  • AI assistants will become the primary interface for interacting with the digital world.
  • Companies must adapt quickly to these changes, even if it means cannibalizing existing products.
  • Meta has a history of making bold moves to embrace new technology trends.
  • The shift to AI is inevitable, and companies must build the technology as rapidly as possible.

"There's no question that people interact mostly with the digital world using AI assistant." This quote predicts the future dominance of AI assistants in digital interactions.

"You have to build it as quickly as you can." The urgency for companies to develop AI technology is emphasized, suggesting that delays could be detrimental to their success.

Job Creation Through AI

  • Historical shifts in labor, such as from agriculture to services, show that job markets adapt to technological changes.
  • New professions emerge with technological advancements, such as web designers and podcasters.
  • Economists generally do not believe that AI will lead to a permanent job shortage.
  • The challenge lies in creating a fair distribution of wealth generated by increased productivity due to AI.

"No economist believes we're going to run out of job because no economist believes we're going to run out of problems to solve." This quote relays the consensus among economists that job creation will continue as new problems arise needing human creativity and communication.

"Technology makes people more productive." The quote underscores the positive impact of technology on productivity, leading to the generation of more wealth for the same amount of work.

Future Jobs in an AI-Driven World

  • Creative and communication-oriented jobs have a bright future.
  • Personal services will continue to require human interaction.
  • The exact nature of future jobs is uncertain, but there will be opportunities for people to utilize their creativity and interpersonal skills.

"There are two types of jobs that have a bright future, creative jobs, whether they are scientific, technical, educational or artistic." The quote categorizes the types of jobs likely to thrive in the future due to their creative or communicative nature.

"I don't know. That's a good question. But it's not because I don't know that it won't happen." The speaker admits uncertainty about the specifics of future jobs but remains confident in the emergence of new opportunities.

The Speed of Technological Transition

  • There is concern that the rapid pace of AI development might lead to short-term high unemployment.
  • Historically, the adoption rate of new technologies is limited by the time it takes for people to learn to use them.
  • The transition to new forms of employment may take longer than anticipated due to the conservative nature of the business world.

"The speed at which a technology disseminate in the economy is actually limited by how fast people can learn to use it." This quote suggests that the adoption rate of new technologies, such as AI, is constrained by the learning curve of the workforce.

"It's going to take 1015 years or possibly more." The speaker estimates the time frame for significant AI-driven changes in the job market, indicating a gradual transition.

Attraction to Doomsday Scenarios

  • Humans are naturally drawn to potential dangers as a survival mechanism.
  • Surprising or dangerous events capture our attention because they challenge our understanding of the world.
  • This hardwiring explains the public's fascination with negative predictions about AI and unemployment.

"We're hardwired to pay attention to things that occur or may occur that could be dangerous to us." The innate human focus on potential threats is highlighted, explaining the common interest in doomsday scenarios.

"We naturally pay attention to stuff that is surprising or dangerous or both." The quote reinforces the idea that humans are predisposed to be captivated by alarming or unexpected events.

Personal Perspectives and Experiences

  • Yann LeCun shares his experience of having the freedom to speak his mind at Meta.
  • He contrasts this with Geoff Hinton's decision to leave Google for greater freedom of expression.
  • LeCun believes that AI is not the cause of social network issues but rather the solution.
  • He discusses the role of AI in the evolution of Meta's newsfeed and its impact on user engagement.

"I say whatever I want, okay. I'm not under the tight control of the communications department or anything." Jan Lacun discusses the autonomy he has at Meta to express his opinions publicly.

"AI is such a complicated, fast evolving issue that you need someone to be able to speak freely." The quote explains the necessity for open communication about AI, given its complex and rapidly changing nature.

"AI is the solution to those problems." Jan Lacun argues that AI is being used to address issues within social networks, countering the narrative that AI is the problem.## Political Discourse and Clickbait

  • Clickbait companies emerged to exploit the tendency of people to click on more sensational content.
  • These companies, often run by teenagers in places like Montenegro, created false news for ad revenue.
  • Facebook recognized the issue and had groups studying the effects of clickbait, leading to corrective measures.
  • After the 2016 U.S. presidential election, Facebook significantly altered its newsfeed algorithm to reduce clickbait and propaganda.
  • Efforts increased to remove false accounts and prevent attempts to corrupt the democratic system.
  • AI advancements enabled the reliable removal of hate speech in multiple languages, previously an impossible task.

"And the fact that what I was talking about earlier, that people tend to click on things that is more trenches, right? So it caused the appearance of clickbait companies that basically were just like farms of teenagers in Montenegro or someplace, making false news to get people to click on them and get money from the ads that they show them."

This quote explains the rise of clickbait companies and their business model, which capitalized on sensational content to generate ad revenue.

"That's what happened in 2017, after the presidential election, american presidential election in 2016, the main newsletter algorithm was completely changed so that there was no clickbaits anymore, there was no news outlets that could push their content."

This quote describes the significant changes made to Facebook's algorithm to combat the spread of clickbait and false news following the 2016 U.S. election.

"The progress of AI over the last few years basically allowed systems to be deployed to do things like take down hate speech relatively reliably in hundreds of different languages, which was basically impossible to."

This quote highlights the role of AI in addressing online hate speech by enabling systems to manage content across various languages effectively.

AI Regulation and the Myth of Hard Takeoff

  • Elon Musk's view that AI cannot be corrected after release is challenged as false.
  • The concept of a "hard takeoff," where a superintelligent AI system immediately escapes control, is deemed ridiculous.
  • Real-world processes are not exponential for long, and intelligent systems would not inherently seek to dominate.
  • Comparisons are made to historical resistance to the printing press, which led to significant societal advancements despite initial opposition.
  • The notion of preventing AI development or intense regulation is likened to obscurantism, which historically has hindered progress.

"That's not true. That's completely false. It makes an assumption which Elon and perhaps some other people may have become convinced of by reading Nick Boxstrom's book, Superintelligence or reading some of Elias or Yudkowski's writing."

Jan Lacun refutes the idea that AI cannot be amended after release, suggesting that Elon Musk's view is based on a false premise.

"Systems are not going to take over just because they are intelligent. Even within the human species it is not the most intelligent among us that want to dominate others."

This quote argues against the fear that intelligent systems will inherently seek to dominate, pointing out that intelligence does not equate to a desire for control.

"Basically it's obscurantism. It's like people who wanted to stop the printing press and the diffusion of printed books because if people could read the Bible for themselves they wouldn't have to talk to priests anymore."

Yann LeCun compares the resistance to AI development to historical opposition to the printing press, both of which are seen as attempts to hinder progress and maintain control.

Global Scientific Research and Incentive Mechanisms

  • China faces an epidemic of bad science due to flawed academic incentive mechanisms, resulting in frequent retractions of published work.
  • Europe offers excellent undergraduate education but lacks systems to motivate talented individuals to pursue science and research, leading many to North America.
  • Switzerland and Canada are highlighted for balancing good pay for academics with free education for students.
  • The U.S. invests significantly in fundamental research, contributing to its tech industry success, but the high cost of education is a downside.
  • A vibrant startup scene in the U.S. and increasingly in Europe is praised for taking risks on innovative ideas.
  • The difficulty of accessing investment funds in Europe compared to the U.S. is noted as a challenge for startups.

"So China has a bit of an epidemic of bad science. There are a lot of very smart people in China, a lot of very good researchers, a lot of very good work coming out of China, particularly in AI, particularly in computer vision, but a lot of absolutely terrible work that has to be retracted a few months later, it's been published."

Yann LeCun discusses the issue of poor-quality scientific research in China, attributing it to problematic incentive structures within the academic system.

"European engineers and scientists are great at top based in the world. But then what are the opportunities for people who want to go into science and research?"

This quote raises the concern about the lack of opportunities for scientists and researchers in Europe, despite the high quality of education and talent.

"Universities pays their faculty pretty well. Now, this comes with a downside. And the downside is that studying in the US is expensive and it's a trade off."

Yann LeCun acknowledges the strengths and weaknesses of the U.S. research and education system, including the trade-off between faculty pay and the high cost of education.

AI as a Catalyst for a New Renaissance

  • AI is expected to amplify human intelligence and creativity by assisting in various tasks, including art and music production.
  • The potential risks associated with AI are acknowledged, but they are not seen as inevitable or necessarily catastrophic.
  • Historical technological advancements, like aviation, faced skepticism, but ultimately transformed society positively.
  • Regulation of AI products is necessary, especially for critical decision-making, but hindering research is considered counterproductive.

"AI is going to bring a new renaissance for humanity, a new form of enlightenment, if you want, because AI is going to amplify everybody's intelligence."

Yann LeCun expresses optimism about the transformative potential of AI to enhance human capabilities and foster a new era of enlightenment.

"But regulating or slowing down research is complete nonsense. It's just obscurantism."

This quote emphasizes Yann LeCun's stance against excessive regulation that could slow down AI research, comparing it to historical resistance to progress.

Shifts in the AI Research Landscape

  • Many researchers and engineers are leaving large companies to start their own ventures, driven by the commercialization potential of AI.
  • The exodus from established labs to startups is happening across the tech industry, as seen with the creators of Google's BERT and Meta's LLaMA.
  • Despite these shifts, Yann LeCun believes that the focus should remain on advancing the science of AI, particularly in developing systems with common sense and human-level intelligence.
  • FAIR (Facebook AI Research) and the new DeepMind (combined with Google Brain) are considered the best positioned to lead this scientific progress.

"So you see a relatively large motion of applied research engineers, a few scientists, basically leaving those labs to do startups."

Yann LeCun observes a trend of AI professionals moving from large companies to startups to capitalize on the commercial opportunities of AI.

"What we need to do, people like me, who are really working on research, is coming up with new concepts that will allow us to get machines that basically have common sense, have in the experience of the real world, have basically human level intelligence."

This quote outlines the ongoing research priorities in AI, aimed at achieving systems with more advanced cognitive abilities akin to human intelligence.

Yann LeCun's Future in AI Research

  • Yann LeCun remains passionate about understanding and replicating human intelligence through AI, a goal he has pursued since the beginning of his career.
  • He values the complementary nature of working in both academia and industry and intends to continue contributing to AI as long as he is able.
  • LeCun's excitement for the future of AI research is undiminished, and he is optimistic about potential breakthroughs in the next decade.

"I'm excited like a teenager now because I see the opportunity of the next step in AI, and the opportunity perhaps, to get to the goal that I set myself so that I imagined for myself when I started working on AI many years ago."

Yann LeCun shares his enthusiasm for the progress and future prospects of AI research, reflecting on his long-term personal goals in the field.

"As long as my brain keeps working, that I think I can contribute and that I'm given the means to contribute, I keep working."

This quote conveys Yann LeCun's commitment to advancing AI research as long as he is capable and has the necessary resources to make meaningful contributions.
