Yann LeCun, one of the “godfathers of AI,” is fed up with the doomsday scenarios about the future of the technology.
For LeCun, there is another, very real threat on the horizon: a power-hungry few hoarding the riches of AI at the expense of everyone else.
Meta’s chief AI scientist last week accused some of the technology’s most prominent developers of “fear-mongering” and “massive corporate lobbying” to serve their own interests.
In a lengthy post on X (formerly Twitter) this weekend, he named OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis, and Anthropic’s Dario Amodei as the key figures.
“Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment,” LeCun wrote, referring to their role in shaping regulatory conversations about AI safety. “They are the ones who are attempting to perform a regulatory capture of the AI industry.”
If those efforts succeed, the scientist warned, the outcome would be a “catastrophe” because “a small number of companies will control AI.”
That is no small matter, because the people in question steer a technology whose impact could be as revolutionary as that of the microchip or the internet.
Altman, Hassabis, and Amodei did not respond to Business Insider’s requests for comment.
LeCun’s comments came in response to a post on X from physicist Max Tegmark, who had suggested that Meta’s AI chief wasn’t taking the doomsday arguments about AI seriously enough.
“Glad to see that @RishiSunak and @vonderleyen realize that the AI risk arguments of Turing, Hinton, Bengio, Russell, Altman, Hassabis, and Amodei can’t be refuted with snark and corporate lobbying alone,” Tegmark wrote, referring to the UK’s upcoming global AI Safety Summit.
LeCun says the tech leaders’ concern is simply a lobbying exercise
Since the launch of ChatGPT, AI power brokers have become important public figures.
But according to LeCun, founders like Altman and Hassabis have spent much of that time drumming up fear about the very technology they sell.
In March, more than 1,000 tech leaders, among them Elon Musk, Altman, Hassabis, and Amodei, signed a letter calling for a pause of at least six months in AI development, citing the potential consequences of uncontrolled progress.
The letter cited the “profound risks to society and humanity” posed by hypothetical AI systems. Tegmark, one of its signatories, has described the race to develop AI as “a suicide race.”
LeCun and others argue that these headline-grabbing warnings do little more than cement power in the hands of a few companies, while distracting from the real and imminent risks of AI.
According to the Distributed AI Research Institute (DAIR), those risks include labor exploitation and data theft that generate profits for “some companies.”
Focusing on imaginary risks, they argue, diverts attention from the unglamorous but important question of how AI is actually being built and used.
LeCun says people are “hyperventilating about the dangers of AI” because they have fallen for what he calls the myth of the “hard take-off”: the idea that “the minute a superintelligent system is switched on, humanity is doomed.”
But an instant apocalypse is implausible, he argues, because every new technology goes through an orderly development process before any wide rollout.
So the area to focus on, above all, is how AI is being developed today and what it is actually being used for right now.
For LeCun, the real danger is that the development of artificial intelligence becomes locked inside private, for-profit companies that never publish their findings, while the open-source AI community withers.
One of his main concerns is that regulators will let this happen because they are distracted by arguments about robots turning killer and destroying humanity.
Leaders like LeCun have vigorously championed open-source developers, because their work on tools that rival OpenAI’s ChatGPT brings a new level of transparency to AI development.
LeCun’s employer, Meta, has developed its own large language model to compete with OpenAI’s GPT models, called Llama 2, which it has released as (more or less) open source.
The idea is that the wider technical community can look under the hood of the model, analyze it, and test it. No other major tech company has made a comparable open-source release, though OpenAI is rumored to be considering one.
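In practice, that openness means anyone who accepts Meta’s license terms can download the weights and query the model directly, rather than going through a closed API. Here is a minimal sketch, assuming the gated `meta-llama/Llama-2-7b-hf` checkpoint on Hugging Face and the `transformers` library; the checkpoint name, prompt, and generation settings are illustrative assumptions, not details from LeCun’s post:

```python
# Minimal sketch: load the (license-gated) Llama 2 weights and probe the
# model locally -- the kind of direct inspection a closed API doesn't allow.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; requires accepting Meta's license

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Run an arbitrary prompt through the model and read back its continuation.
inputs = tokenizer("Open-source AI development means", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights themselves are in hand, researchers can go beyond generation and examine the model’s internals, something impossible with a fully closed release.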
For LeCun, it is AI development that remains an opaque process that is the real cause for alarm.
“The alternative, which will inevitably happen if open-source AI is regulated out of existence, is that a small number of companies from the West Coast of the US and China will control AI platforms and hence control people’s entire digital diet,” he wrote.
“What does this mean for democracy? And what does it mean for cultural diversity?”