What AI still can’t do #002

Toluwani Aremu
4 min read · Aug 14, 2022


Photo by Dominik Scythe on Unsplash

As seen in the first part of this series, “What AI still can’t do”, AI still has many limitations despite the benefits it brings when deployed in real-world settings. Beyond that, many current AI deployments have lacked ethical review, and that has eroded human trust in AI products. In the first article, I argued that for trust in AI to grow and shape AI development positively, enthusiasts and experts need to understand and communicate AI’s current limitations. In this article, I will discuss another such limitation, with a simple real-world example.

#002 — Continual Learning is still an AI concept, not yet a reality

Robot sitting on a pile of books. Image source: https://www.istockphoto.com/photo/robot-sitting-on-a-bunch-of-books-contains-clipping-path-gm496822526-78776805?phrase=robot%20reading

Continual learning, also called lifelong learning or incremental learning, is a fundamental concept in AI (ML/DL/RL) in which models continuously learn and evolve as they receive increasing amounts of data, familiar or novel, while retaining the knowledge they learnt previously.

Researchers discovered that fine-tuning a pretrained model on a new task (a model previously trained on a different task) makes it learn faster and perform better than training a new model from scratch. However, they quickly found that these models forgot most or all of what they had learnt previously while performing extremely well on the new data, a phenomenon known as catastrophic forgetting. This opened up a new research field in AI dedicated to methods that allow models to learn from new data while retaining existing knowledge.
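
To make this concrete, here is a minimal sketch of the effect, assuming PyTorch and two synthetic toy tasks over the same inputs (the tasks, network size and training settings are illustrative choices of mine, not taken from any particular paper). A small network is trained on Task A, then naively fine-tuned on Task B, and its Task A accuracy typically falls back to chance:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(axis):
    # Same input distribution for both tasks; the label is the sign of
    # one coordinate, so the two tasks demand different decision rules.
    x = torch.randn(400, 2)
    y = (x[:, axis] > 0).long()
    return x, y

def train(model, x, y, steps=300):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
xa, ya = make_task(axis=0)  # Task A: label by the first coordinate
xb, yb = make_task(axis=1)  # Task B: label by the second coordinate

train(model, xa, ya)
print("Task A accuracy after A:", accuracy(model, xa, ya))  # ~1.0

train(model, xb, yb)  # naive fine-tuning: no Task A data replayed
print("Task A accuracy after B:", accuracy(model, xa, ya))  # ~0.5 (chance)
print("Task B accuracy after B:", accuracy(model, xb, yb))  # ~1.0
```

Nothing in the second training phase tells the network to preserve Task A, so gradient descent freely overwrites the weights that encoded it.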

What does this mean? Let’s go back to ‘Imagineland’ where creativity exceeds otiosity!

Photo by Andy Kelly on Unsplash

Using the same characters from #001, imagine we have a toddler and a robot who have both been taught to identify cats and dogs, and both are doing pretty well at it. On seeing a horse, each will respond based on what it has learnt so far. The toddler, relying on natural common sense, would attach a qualifying adjective and call the horse a “big dog”, while the robot would simply classify it as a dog. If you teach the toddler that this new animal is a horse and train the robot to do the same, the next time you present a horse to them, both will identify it correctly.

However, if you now present a cat and a dog to the robot, it will classify them as horses too, because it has forgotten what it learnt earlier. The toddler, on the other hand, will still accurately identify a cat, a dog and a horse.
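
The robot’s failure is easy to reproduce in code. The sketch below is a toy stand-in, not a real image classifier: each “animal” is just a 2-D Gaussian blob, and all names and settings are illustrative. A three-class classifier first learns “cat” and “dog”, then is trained only on “horse” examples, after which it tends to call almost everything a horse:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "images": one 2-D Gaussian blob per class.
centers = {0: (-3.0, 0.0), 1: (3.0, 0.0), 2: (0.0, 3.0)}  # cat, dog, horse

def samples(label, n=200):
    cx, cy = centers[label]
    x = torch.randn(n, 2) * 0.5 + torch.tensor([cx, cy])
    return x, torch.full((n,), label)

def train(model, x, y, steps=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 3))

# Phase 1: learn cats and dogs.
x_cat, y_cat = samples(0)
x_dog, y_dog = samples(1)
train(model, torch.cat([x_cat, x_dog]), torch.cat([y_cat, y_dog]))

# Phase 2: naive training on horses only, no cat or dog data replayed.
train(model, *samples(2))

# With nothing anchoring the old classes, the horse logit tends to win
# everywhere, so even cats are now typically labelled "horse".
frac = (model(x_cat).argmax(1) == 2).float().mean().item()
print(f"cat images labelled horse: {frac:.2f}")
```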

From the toddler and robot example above, we can see that the toddler naturally practices lifelong/continual learning, while the concept remains a mystery to the robot. Continual learning is an active trend in AI research, with researchers looking into how to effectively tackle catastrophic forgetting in models. Recently, there has been a series of breakthroughs in the field from researchers using different methods for different AI tasks, such as Asymmetric Loss Approximation (link), Knowledge Transfer (link) and Adaptive Regularization (link).
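
As one example of this regularization family, here is a hedged sketch of the idea behind Elastic Weight Consolidation (EWC), the method of Kirkpatrick et al. cited in the references below: after Task A, estimate how important each parameter was (here via a diagonal empirical Fisher estimate), then penalize moving the important parameters while learning Task B. The toy tasks, network and penalty strength are illustrative choices of mine, not values from the paper:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(cx, flip):
    # Blobs centred at x = cx, labelled by the sign of the y-coordinate;
    # Task B's labels are flipped so the two tasks need different rules.
    x = torch.randn(400, 2) * 0.5 + torch.tensor([cx, 0.0])
    y = (x[:, 1] > 0).long()
    return x, (1 - y if flip else y)

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
xa, ya = make_task(-4.0, flip=False)  # Task A lives around x = -4
xb, yb = make_task(+4.0, flip=True)   # Task B lives around x = +4

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):  # Task A is trained as usual
    opt.zero_grad()
    nn.functional.cross_entropy(model(xa), ya).backward()
    opt.step()

# Anchor the Task A weights and estimate a diagonal Fisher importance
# for each parameter (mean per-sample squared gradient on Task A).
anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
for i in range(len(xa)):
    model.zero_grad()
    nn.functional.cross_entropy(model(xa[i:i + 1]), ya[i:i + 1]).backward()
    for n, p in model.named_parameters():
        fisher[n] += p.grad.detach() ** 2 / len(xa)

lam = 5000.0  # penalty strength: illustrative, tuned per problem in practice
for _ in range(300):  # Task B, with the EWC penalty added to the loss
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(xb), yb)
    for n, p in model.named_parameters():
        loss = loss + (lam / 2) * (fisher[n] * (p - anchor[n]) ** 2).sum()
    loss.backward()
    opt.step()

with torch.no_grad():
    for name, (x, y) in [("Task A", (xa, ya)), ("Task B", (xb, yb))]:
        acc = (model(x).argmax(1) == y).float().mean().item()
        print(f"{name} accuracy after sequential training: {acc:.2f}")
```

The penalty lets parameters that mattered little for Task A move freely while anchoring the important ones, which is why, with a suitable penalty strength, the network can often learn Task B without fully erasing Task A.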

Continual learning could, however, be a key step towards Artificial General Intelligence and fully autonomous AI, but only if the concept can be made to work reliably and feasibly in practice. Until then, according to experts in the field, continual learning remains a concept.

To access current research materials on continual learning, as well as code, click here.

Further reading & references

  1. Preetipadma, “Continual Learning: An Overview Into the Next Stage of AI,” Analytics Insight [Online]. Available: https://www.analyticsinsight.net/continual-learning-an-overview-into-the-next-stage-of-ai/. [Accessed: 14-Aug-2022].
  2. K. Weiss, T. M. Khoshgoftaar, and D. D. Wang, “A survey of transfer learning,” Journal of Big Data, vol. 3, no. 1, 2016.
  3. J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, D. Hassabis, C. Clopath, D. Kumaran, and R. Hadsell, “Overcoming catastrophic forgetting in neural networks,” Proceedings of the National Academy of Sciences, vol. 114, no. 13, pp. 3521–3526, 2017.
