What AI still can’t do #001
In recent years, we have witnessed huge advances and breakthroughs in technology, especially in the areas concerned with Artificial Intelligence. Interestingly, the number of people and organizations adopting AI in their everyday lives and businesses has continued to rise — and far from “marginally”. The ease of use and increased throughput seem to have convinced a large portion of the populace of the wonders and potential of deploying AI in “every” sector. We have also seen a noticeable increase in the number of organizations set up to evangelize AI to newbies and enthusiasts. Well, it is pro bono publico!
While these developments have been fast, fantastic, and welcome, it feels like the emphasis placed on the ethical aspects of AI deployment is rather too little. Newbies, enthusiasts, and professionals prefer to deploy AI in use cases simply because AI can be used, without thinking about the ethical implications. In fact, ethical reviews are often done long after the technology has been deployed, sometimes only after a group or entity has been negatively affected, and that is a pitfall.
A recent article posted by the Stanford Institute for Human-Centered AI urges researchers and experts to start their projects with social and ethical reviews. While it is a fact that AI can be used for a lot of things and AI can do so much, it is imperative, and advantageous, that these ethical reviews are done early on. It is also a fact that there is so much AI can’t do yet, and that is the motivation behind this article and the others that will follow (as a series, hence the #001).
#001 — Common Sense is mostly Uncommon for AI
With so many AI algorithms used for different AI tasks, the main difference between the algorithms applied to a given task is the accuracy and precision with which they carry it out. For example, VGG16 can recognize the 10 different classes of CIFAR-10 at a test accuracy of about 95%, while AlexNet would hover around 85%. The problem is that both algorithms will fail miserably at classifying “out-of-class” objects. Too technical? Let me use a failsafe option.
Imagine you have a four-year-old toddler and a robot. You train both of them to identify cats and dogs, and then you test their knowledge. They would correctly identify the cats and dogs. Now, present a horse to your toddler. The toddler immediately sees the difference and gets confused. More confident toddlers would answer “that is a very big dog”, reaching for “very big” as their best description of an object they have never seen or identified before. Now, present the horse to the robot, and you will get a very confident “dog” as its answer.
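The robot’s overconfidence can be sketched in a few lines of code. This is a toy illustration, not VGG16 or AlexNet: I stand in for “cats” and “dogs” with two synthetic 2-D clusters and train a minimal logistic-regression classifier on them. The point is structural: because the model can only ever output one of the classes it was trained on, a “horse” (a point far outside the training data) still receives a confident “dog” label, with no way to say “I don’t know”.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: two well-separated 2-D clusters standing in
# for "cat" images and "dog" images.
cats = rng.normal(loc=[0, 0], scale=0.5, size=(100, 2))
dogs = rng.normal(loc=[4, 4], scale=0.5, size=(100, 2))
X = np.vstack([cats, dogs])
y = np.array([0] * 100 + [1] * 100)  # 0 = cat, 1 = dog

# Minimal logistic regression trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def predict(point):
    """Return (label, confidence). Note: there is no 'unknown' option."""
    p_dog = 1.0 / (1.0 + np.exp(-(np.asarray(point, dtype=float) @ w + b)))
    return ("dog", p_dog) if p_dog >= 0.5 else ("cat", 1.0 - p_dog)

# An in-distribution "dog" is classified confidently, as expected...
print(predict([4, 4]))
# ...but a "horse" far outside anything seen in training gets the
# same confident "dog" verdict, because "dog" and "cat" are the
# only answers the model can give.
print(predict([40, 40]))
```

Real out-of-distribution detection is an active research area of its own; the sketch simply shows why a plain softmax/sigmoid classifier cannot express the toddler’s confusion.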
Let’s take this further. An example in a Forbes article showcased how difficult it is for a state-of-the-art NLP algorithm to answer a commonsensical question that a human would answer without thinking twice. As Oren Etzioni (Allen AI) said, “Common sense is the dark matter of Artificial Intelligence”. Researchers have tried to tackle this issue with knowledge-based approaches, continual learning approaches, and so on, but the problem persists: current solutions are either fixed to a single problem area or come at too great a cost in performance.
Maybe in the future, irgendwie, irgendwo, irgendwann (somehow, somewhere, sometime), a long-lasting solution to this problem will be found. Until then, newbies and enthusiasts need to be educated about this current limitation of AI, as well as the other limitations that could bring about ethical issues if AI is applied ignorantly. This way, a healthy culture of developing and deploying safe AI is maintained.
This is just the first of many in this series on “What AI still can’t do”. Do stay tuned as I ignite and explore this important aspect of AI Ethics.
Follow me on Medium, and engage with my posts.