TipsMake

AI platforms can learn from experience just like humans.

Computer scientist Christopher Kanan gives an introduction to generative AI, large language models, and the responsible use of artificial intelligence.


It turns out that training artificial intelligence systems is not unlike raising a child. That's why some AI researchers have begun to emulate how children naturally acquire knowledge and learn about the world around them – through exploration, curiosity, gradual learning, and positive reinforcement.

"Many of the problems with current AI algorithms could be solved by drawing on ideas from neuroscience and child development," said Christopher Kanan, associate professor in the Department of Computer Science at the University of Rochester, and an expert in artificial intelligence, continuous learning, vision, and brain-inspired algorithms.

Of course, the ability to learn and reason like humans—even faster and potentially better—raises questions about the best way to protect humans from increasingly sophisticated AI systems. That's why Kanan says safeguards must be built into AI systems from the start rather than added at the very end of development: 'It shouldn't be the last step, otherwise we could unleash a monster.'

What is general artificial intelligence? How does it differ from other types of artificial intelligence?

General artificial intelligence (AGI) aims to build systems capable of understanding, reasoning, and learning like humans. AGI is more advanced than narrow artificial intelligence (ANI), which is designed for specific tasks, but has yet to be realized.


Artificial intelligence involves creating computer systems that can perform tasks typically requiring human intelligence, such as perception, reasoning, decision-making, and problem-solving. Traditionally, much of the research in artificial intelligence has focused on building systems designed for specific tasks—known as narrow artificial intelligence (ANI). Examples include image recognition systems, voice assistants, and programs that play strategy games, all of which can perform their tasks exceptionally well, often outperforming humans.

Next is general artificial intelligence (AGI), which aims to build systems capable of understanding, reasoning, and learning across a wide range of tasks, much like humans. Achieving AGI remains a major goal in artificial intelligence research but has yet to be accomplished. Beyond AGI is artificial superintelligence (ASI)—a form of AI far superior to human intelligence in almost every field, which remains theoretical and exists only in science fiction. In the lab, scientists are particularly interested in getting closer to AGI by drawing inspiration from neuroscience and child development, allowing AI systems to learn and adapt continuously, much like children.



What are some ways AI can 'learn'?

ANI's success stems from Deep Learning, which since around 2014 has been used to train these systems on large amounts of human-annotated data. Deep Learning involves training large artificial neural networks consisting of many interconnected layers. Today, Deep Learning is the foundation of most modern AI applications, from computer vision and natural language processing to robotics and biomedical research. These systems excel at tasks such as image recognition, language translation, playing complex games like Go and chess, and generating text, images, and even code.
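At its core, a deep network is just stacked layers of weighted sums passed through nonlinearities. A minimal sketch with NumPy illustrates the idea (the weights here are random and untrained, and all names are illustrative, not from any real system):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# A tiny two-layer network: 4 inputs -> 8 hidden units -> 3 outputs.
W1 = rng.normal(size=(4, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 3))
b2 = np.zeros(3)

def forward(x):
    h = relu(x @ W1 + b1)        # hidden layer: weighted sum + nonlinearity
    logits = h @ W2 + b2         # output layer: raw class scores
    # softmax turns raw scores into probabilities that sum to 1
    e = np.exp(logits - logits.max())
    return e / e.sum()

probs = forward(rng.normal(size=4))
```

Training consists of adjusting `W1`, `b1`, `W2`, `b2` by gradient descent so the output probabilities match the human-provided labels; stacking many more such layers is what makes the learning "deep".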

A large language model (LLM) like OpenAI's GPT-4 is trained on massive amounts of text using self-supervised learning. This means the model learns by predicting the next word or phrase from the existing text, without any explicit human guidance or labeling. These models are typically trained on trillions of words—essentially all human text available online, including books, articles, and websites. To put it into perspective, if a person tried to read all of this text, it would take tens of thousands of lifetimes.
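The key point of self-supervision is that the labels come for free from the text itself: every position in a document yields a (context, next-word) training pair with no human annotation. A toy sketch of that data preparation step:

```python
# Turn raw text into (context, next-token) training pairs -- the
# self-supervised objective needs no human labels at all.
text = "the cat sat on the mat"
tokens = text.split()

context_size = 3
pairs = [
    (tokens[i - context_size:i], tokens[i])
    for i in range(context_size, len(tokens))
]

for context, target in pairs:
    print(context, "->", target)
# ['the', 'cat', 'sat'] -> on
# ['cat', 'sat', 'on'] -> the
# ['sat', 'on', 'the'] -> mat
```

A real LLM does the same thing over trillions of tokens, with subword tokenization and much longer contexts, and is trained to assign high probability to each `target` given its `context`.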

Following this intensive initial training, the model undergoes supervised fine-tuning, in which humans provide examples of desired outputs, guiding the model to generate responses that closely match human preferences. Finally, techniques such as reinforcement learning from human feedback (RLHF) are applied to shape the model's behavior by defining acceptable limits for what it can or cannot produce.
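One ingredient of RLHF is a reward model trained on human preference comparisons; the policy is then pushed toward responses the reward model scores highly. A deliberately toy sketch of just the scoring-and-selection step (the `reward_model` heuristic here is invented for illustration; real reward models are themselves neural networks):

```python
# Hypothetical stand-in reward: prefer polite, concise answers.
def reward_model(response: str) -> float:
    score = 0.0
    score += 1.0 if "please" in response.lower() else 0.0
    score -= 0.1 * len(response.split())   # penalize rambling
    return score

candidates = [
    "Please restart the router and try again.",
    "I guess you could maybe possibly try restarting stuff or whatever.",
]

# RLHF training nudges the model toward outputs like `best`.
best = max(candidates, key=reward_model)
```

In the real procedure the reward signal is learned from many human judgments of "response A is better than response B", and a reinforcement learning algorithm (commonly PPO) updates the language model's weights rather than selecting among fixed candidates.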

What are some of the current limitations of Generative AI tools?

Current AI systems lack human-like self-awareness and reasoning abilities. Large language models (LLMs) can still suffer from 'hallucinations,' confidently producing information that sounds plausible but is inaccurate. Their reasoning and planning capabilities, while rapidly improving, remain limited compared to the flexibility and depth of human thought. And they don't continuously learn from experience; their knowledge is effectively frozen after training, so they lack awareness of recent developments or ongoing changes in the world.


Current generative AI systems also lack metacognition, meaning they often don't know what they don't know, and they rarely ask clarifying questions when faced with uncertainty or vague cues. This lack of self-awareness limits their effectiveness in real-world interactions.
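The missing behaviour can be made concrete: a system with even rudimentary metacognition would check its own confidence before answering and ask a clarifying question when confidence is low. A minimal sketch, with an assumed confidence threshold and a hypothetical `toy_predict` standing in for a real model:

```python
def answer_with_metacognition(question, predict):
    """`predict` returns (answer, confidence in [0, 1])."""
    answer, confidence = predict(question)
    if confidence < 0.6:                     # assumed threshold
        return "Could you clarify what you mean by: " + question
    return answer

# Toy predictor: confident only about one hard-coded fact.
def toy_predict(question):
    if "capital of France" in question:
        return "Paris", 0.95
    return "unknown", 0.2

print(answer_with_metacognition("What is the capital of France?", toy_predict))
print(answer_with_metacognition("What about the thing?", toy_predict))
```

The hard research problem, of course, is getting a trustworthy confidence estimate in the first place: today's LLMs often report high confidence precisely when they are hallucinating.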


Humans excel at continuous learning, where skills acquired early serve as a foundation for increasingly complex abilities. For example, infants must master basic motor skills before progressing to walking, running, or even gymnastics. Current AI training methods do not exhibit this kind of cumulative, transferable learning. Addressing this limitation is a key objective of future research.
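One common strategy in continual-learning research is experience replay: keep a small buffer of past examples and mix them into each new task's training batches, so that earlier skills are not simply overwritten by later ones. A minimal sketch of such a buffer (names and capacity are illustrative, not a specific published method):

```python
import random

class ReplayBuffer:
    def __init__(self, capacity=100):
        self.capacity = capacity
        self.examples = []

    def add(self, example):
        if len(self.examples) < self.capacity:
            self.examples.append(example)
        else:
            # Buffer full: overwrite a random old example so the
            # buffer keeps a mix of past and recent experience.
            idx = random.randrange(self.capacity)
            self.examples[idx] = example

    def sample(self, k):
        return random.sample(self.examples, min(k, len(self.examples)))

buffer = ReplayBuffer(capacity=4)
for task_example in ["A1", "A2", "B1", "B2", "B3"]:   # task A, then task B
    buffer.add(task_example)

mixed_batch = buffer.sample(2)   # old and new examples train together
```

Without such a mechanism, naively fine-tuning on task B tends to erase what was learned on task A, a failure mode known as catastrophic forgetting.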

What are the main challenges and risks posed by artificial intelligence (AI)?

AI is reshaping the workforce and the regulatory debate. Generative AI has been dramatically changing the workplace. It's particularly disruptive for office jobs – positions traditionally requiring specialized education or expertise – because AI assistants significantly boost the productivity of individual workers; they can transform entry-level professionals into near-experts.

This increased productivity means companies can operate efficiently with significantly fewer employees, increasing the potential for large-scale downsizing of office jobs across many industries. Conversely, jobs requiring dexterity, creativity, leadership, and face-to-face interaction, such as highly skilled trades, healthcare positions involving direct patient care, or crafts, are unlikely to be replaced by AI in the short term.

While scenarios like Nick Bostrom's famous 'Paperclip Maximizer,' in which an artificial general intelligence inadvertently destroys humanity, are often discussed, many believe the greater near-term risk is that humans might intentionally use advanced AI for catastrophic purposes. Efforts should focus on international collaboration, responsible development, and investment in safe AI research within academia.

To ensure AI is developed and used safely, we need regulations surrounding specific applications. Interestingly, those currently demanding government regulation are the very people running AI companies. However, there are also concerns that regulations could stifle open-source AI development efforts, hinder innovation, and concentrate the benefits of AI in the hands of a select few.

What are the chances of achieving general artificial intelligence (AGI)?

Many AI researchers agree that general artificial intelligence (AGI) is feasible, but that current large language models (LLMs) are too limited to get there.

The three 'godfathers' of modern AI and Turing Award winners—Yoshua Bengio, Geoffrey Hinton, and Yann LeCun—all agree that achieving AGI is feasible. Recently, Bengio and Hinton have expressed considerable concern, warning that AGI could pose an existential risk to humanity. However, none of them believe that current LLM architecture alone will suffice to achieve true AGI.

LLMs inherently reason in language, whereas for humans, language primarily serves as a means of communication rather than the primary medium of thought. This reliance on language limits LLMs' ability to engage in abstract reasoning or visualization, constraining their potential for broader, more human-like intelligence.

David Pac
Update 10 March 2026