AI is mastering language. Should we trust what it says?

But just as GPT-3’s fluency has dazzled many observers, the large-language-model approach has also drawn significant criticism in recent years. Some skeptics argue that the software is capable only of blind mimicry – that it imitates the syntactic patterns of human language but cannot generate its own ideas or make complex decisions, a fundamental limitation that will keep the LLM approach from ever maturing into anything resembling human intelligence. For these critics, GPT-3 is just the latest shiny object in a long history of AI hype, funneling research dollars and attention into what will ultimately prove a dead end and keeping other, more promising approaches from maturing. Other critics believe that software like GPT-3 will forever be compromised by the bias, propaganda, and misinformation in the data it was trained on, which means that using it for anything more than parlor-trick sleight of hand will always be irresponsible.

Wherever you land in this debate, the pace of recent improvement in large language models makes it hard to imagine that they won’t be deployed commercially in the years to come. And that raises the question of exactly how they – and, for that matter, other dizzying advances in AI – should be unleashed on the world. With the rise of Facebook and Google, we have seen how dominance in a new area of technology can quickly translate into astonishing power over society, and AI threatens to be even more transformative than social media in its ultimate effects. What is the right kind of organization to build and own something of such scale and ambition, with such promise and such potential for abuse?

Or should we build it at all?

The origins of OpenAI date back to July 2015, when a small group of tech luminaries gathered for a private dinner at the Rosewood Hotel on Sand Hill Road, the symbolic heart of Silicon Valley. The dinner took place against the backdrop of two recent developments in the technology world, one promising and the other troubling. On the one hand, radical advances in computing power – and new breakthroughs in neural network design – had created a palpable sense of excitement in the field of machine learning; it felt as if the long “AI winter,” the decades in which the field failed to live up to its early hype, was finally beginning to thaw. A group at the University of Toronto had trained a program called AlexNet to identify classes of objects in photographs (dogs, castles, tractors, tables) with a level of accuracy far beyond what any neural network had achieved before. Google moved quickly to hire AlexNet’s creators, while acquiring DeepMind and launching its own initiative, Google Brain. The widespread adoption of smart assistants like Siri and Alexa had demonstrated that even scripted agents could be breakout consumer successes.

But over the same period, a seismic shift in public attitudes toward Big Tech was underway, with once-popular companies like Google and Facebook coming under fire for their near-monopoly power, their amplification of conspiracy theories, and their inexorable siphoning of our attention toward algorithmic feeds. Long-term fears about the dangers of artificial intelligence were surfacing on op-ed pages and on the TED stage. Nick Bostrom of the University of Oxford had published his book “Superintelligence,” laying out a series of scenarios in which advanced AI could swerve from humanity’s interests with potentially disastrous consequences. At the end of 2014, Stephen Hawking told the BBC that “the development of full artificial intelligence could spell the end of the human race.” It seemed as though the cycle of business consolidation that characterized the social media age was already underway with AI, but this time around, the algorithms might not just sow polarization or sell our attention to the highest bidder – they might end up destroying humanity itself. And once again, all the evidence suggested that this power was going to be controlled by a few Silicon Valley megacorporations.

The agenda for the dinner on Sand Hill Road that July night was nothing if not ambitious: to figure out the best way to steer AI research toward the most positive outcome possible, avoiding both the short-term negative consequences that had plagued the Web 2.0 era and the long-term existential threats. From that dinner, a new idea began to take shape – one that would soon become a full-time obsession for Sam Altman of Y Combinator and Greg Brockman, who had recently left Stripe. Interestingly, the idea was not so much technological as organizational: if AI was to be unleashed on the world in a safe and beneficial way, it was going to require innovations in governance, incentives, and stakeholder engagement. The technical path to what the field calls artificial general intelligence, or AGI, was not yet clear to the group. But the unsettling predictions of Bostrom and Hawking convinced them that AI’s attainment of human-level intelligence would consolidate an astonishing amount of power, and moral burden, in whoever ultimately managed to invent and control it.

In December 2015, the group announced the creation of a new entity called OpenAI. Altman had signed on to be the organization’s chief executive, with Brockman overseeing technology; another dinner attendee, the AlexNet co-creator Ilya Sutskever, had been recruited from Google to lead research. (Elon Musk, who also attended the dinner, joined the board but left in 2018.) In a blog post, Brockman and Sutskever laid out the scope of their ambition: “OpenAI is a non-profit artificial intelligence research company,” they wrote. “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” They added: “We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.”

The founders of OpenAI would publish a public charter three years later, spelling out the core principles of the new organization. The document was easily read as a not-so-subtle dig at Google’s famous “Don’t be evil” slogan from its early days, an acknowledgment that maximizing the social benefits of new technology – and minimizing the harms – was not always that simple a calculation. Where Google and Facebook had achieved global dominance through closed-source algorithms and proprietary networks, the founders of OpenAI promised to go the other way, sharing new research and code freely with the world.
