Generally Intelligent secures funding from OpenAI veterans to build capable AI systems | TechCrunch

A new AI research company is launching out of stealth today with an ambitious goal: to research the fundamentals of human intelligence that machines currently lack. Called Generally Intelligent, it plans to do this by turning those fundamentals into an array of tasks to be solved, and by designing and testing the ability of different systems to learn to solve them in highly complex 3D worlds built by its team.

“We believe that generally intelligent computers will one day unleash extraordinary potential for human creativity and insight,” CEO Kanjun Qiu told TechCrunch in an email interview. “However, today’s AI models lack several key elements of human intelligence, which inhibits the development of general-purpose AI systems that can be safely deployed… Generally Intelligent’s work aims to understand the fundamentals of human intelligence in order to design safe AI systems that can learn and understand the way humans do.”

Qiu, Dropbox’s former chief of staff and the co-founder of Ember Hardware, which designed laser displays for VR headsets, co-founded Generally Intelligent in 2021 after shutting down her previous startup, Sourceress, a staffing firm that used AI to scour the web. (Qiu blamed the high-churn nature of the lead generation business.) Generally Intelligent’s second co-founder is Josh Albrecht, who co-launched a number of companies, including BitBlinder (a privacy-preserving torrenting tool) and CloudFab (a 3D-printing services company).

Although Generally Intelligent’s co-founders may not have backgrounds in traditional AI research (Qiu was an algorithmic trader for two years), they have managed to enlist the support of several luminaries in the field. Among those contributing to the company’s initial $20 million in funding (plus more than $100 million in options) are Tom Brown, former engineering lead for OpenAI’s GPT-3; Jonas Schneider, former OpenAI robotics lead; Dropbox co-founders Drew Houston and Arash Ferdowsi; and the Astera Institute.

Qiu said the unusual funding structure reflects the capital-intensive nature of the problems Generally Intelligent is trying to solve, starting with Avalon, the simulated research environment its agents train in.

“Avalon’s ambition of hundreds or thousands of tasks makes for an intensive process; it requires a lot of assessment and evaluation. Our funding was put in place to ensure that we make progress against the encyclopedia of problems we expect Avalon to become as we continue to develop it,” she said. “We have an agreement in place for $100 million; that money is secured through a drawdown setup that allows us to fund the business for the long term. We have established a framework that will trigger additional funding from that drawdown, but we’re not going to disclose the framework, as that amounts to disclosing our roadmap.”

Image Credits: Generally Intelligent

What convinced them? Qiu says it was Generally Intelligent’s approach to the problem of AI systems that struggle to learn from others, extrapolate safely, or learn continuously from small amounts of data. Generally Intelligent has built a simulated research environment in which AI agents (entities that act on the environment) train by completing increasingly difficult and complex tasks inspired by animal evolution and the cognitive milestones of infant development. The goal, Qiu says, is to train many different agents powered by different AI technologies under the hood, in order to understand what the various components of each are doing.

“We believe that such [agents] could empower humans across a wide range of fields, including scientific discovery, materials design, personal assistants and tutors, and many other applications that we cannot yet comprehend,” Qiu said. “Using complex, open-ended research environments to test agent performance on a large battery of intelligence tests is the approach most likely to help us identify and address the aspects of human intelligence that machines are missing. [A] structured battery of tests facilitates the development of a real understanding of how the [AI] works, which is essential for designing safe systems.”

Currently, Generally Intelligent is primarily focused on studying how agents deal with object occlusion (i.e., when an object is visually blocked by another object) and object persistence, and on how they understand what is actively happening in a scene. Among the more challenging areas the lab is investigating is whether agents can internalize the rules of physics, such as gravity.

Generally Intelligent’s work is reminiscent of earlier work from Alphabet’s DeepMind and OpenAI, which sought to study the interactions of AI agents in game-like 3D environments. For example, in 2019 OpenAI explored how hordes of AI-controlled agents let loose in a virtual environment could learn increasingly sophisticated ways to hide from and seek each other. DeepMind, meanwhile, last year trained agents able to solve problems and challenges, including hide-and-seek, capture the flag and finding objects, some of which they did not encounter during training.

Game-playing agents may not sound like a technical breakthrough, but experts at DeepMind, OpenAI and now Generally Intelligent assert that such agents are a step toward more general, adaptive AI capable of physically grounded behaviors that are relevant to humans, like an AI that can power a robotic picker or an automated package-sorting machine.

“In the same way that you cannot build safe bridges or design safe chemicals without understanding the theory and the components that make them up, it will be difficult to create safe and capable AI systems without a theoretical and practical understanding of how the components impact the system as a whole,” Qiu said. “Generally Intelligent’s goal is to develop general-purpose AI agents with human-like intelligence in order to solve problems in the real world.”

Image Credits: Generally Intelligent

Indeed, some researchers have questioned whether efforts to date toward “safe” AI systems have really been effective. For example, in 2019, OpenAI released Safety Gym, a suite of tools for developing AI models that respect certain “constraints.” But constraints as defined in Safety Gym would not prevent, say, a self-driving car programmed to avoid collisions from driving two centimeters from other cars at all times, or from doing any number of other dangerous things in order to optimize for the “avoid collisions” constraint.
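To make the concern concrete, here is a minimal, hypothetical Python sketch (not Safety Gym code, and not Generally Intelligent’s; the function names and numbers are illustrative) of how a narrowly specified “avoid collisions” constraint can be technically satisfied by behavior most people would still call unsafe:

```python
# Hypothetical illustration: a constraint that only penalizes actual contact
# is "satisfied" by a policy that tailgates at 2 cm, even though a
# margin-based constraint would flag the same behavior as unsafe.

def collision_cost(distance_to_nearest_car_m: float) -> float:
    """Naive constraint: cost is incurred only on actual contact."""
    return 1.0 if distance_to_nearest_car_m <= 0.0 else 0.0

def safe_distance_cost(distance_to_nearest_car_m: float, min_gap_m: float = 2.0) -> float:
    """Better-specified constraint: penalize getting closer than a safety margin."""
    return max(0.0, min_gap_m - distance_to_nearest_car_m) / min_gap_m

tailgating_gap = 0.02  # driving 2 centimeters from another car

print(collision_cost(tailgating_gap))      # 0.0  -> the naive constraint sees no problem
print(safe_distance_cost(tailgating_gap))  # 0.99 -> the margin-based constraint flags it
```

The point of the sketch is that “safety” depends entirely on how the constraint is written, which is the gap critics highlight.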

Safety-focused systems aside, a slew of startups are pursuing AI that can accomplish a wide range of diverse tasks. Adept is developing what it describes as “general intelligence that allows humans and computers to work together creatively to solve problems.” Elsewhere, legendary programmer John Carmack has raised $20 million for his latest venture, Keen Technologies, which aims to create AI systems that can, in theory, perform any task a human can.

Not all AI researchers agree that general-purpose AI is achievable. Even after the release of systems like DeepMind’s Gato, which can perform hundreds of tasks from playing games to controlling robots, luminaries such as Mila founder Yoshua Bengio and Facebook VP and chief AI scientist Yann LeCun have repeatedly argued that so-called artificial general intelligence isn’t technically feasible, at least not today.

Will Generally Intelligent prove the skeptics wrong? The jury’s still out. But with a team of around 12 people and a board that includes Neuralink founding team member Tim Hanson, Qiu thinks the company has a good shot.
