Limitations ?
Before I get into the details of the limitations of AI/Machine Learning, there is one thing I want to say first. The purpose of this page is not to express my personal skepticism about this technology, nor to discourage readers from pursuing this area of technology. I personally am (and will remain) a strong follower of AI/Machine Learning for several
reasons.
What do they say about the Limitations of AI/Machine Learning ?
The following is a list of items that are often mentioned as limitations of current AI/Machine Learning technology (as of Jan 2020; later updates are marked with '==>'). I wouldn't say all of these items will be removed from the list, but at least some of them will no longer be considered limitations in the future, as we have seen throughout the history of science and engineering.
- Always hungry for Data : It is true that current AI/Machine Learning can learn by itself, but it requires a huge set of training data. In general, the human brain can learn things from a much smaller set of training data (or examples). There are many cases where it is very hard (or almost impossible) to get a large enough dataset to train a machine learning algorithm (see the sketch below).
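As a rough illustration of this data hunger, below is a minimal sketch (not a rigorous benchmark) using scikit-learn's small bundled digits dataset; the model and the sample sizes are arbitrary choices and the exact numbers will vary, but accuracy typically climbs as more training data becomes available:

```python
# Minimal sketch: how test accuracy typically grows with training set size.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

for n in (50, 100, 500, len(X_train)):
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[:n], y_train[:n])   # train on only the first n samples
    print(f"{n:4d} training samples -> test accuracy {model.score(X_test, y_test):.2f}")
```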
- Challenges of Generalization in AI/ML : An AI/Machine Learning algorithm trained with a specific set of training data does not always work on data that it never experienced during the training phase, even if the new data belongs to the same category by nature. There are cases where an image-based Machine Learning algorithm fails to identify images that are similar to the training images but come from a different camera (e.g., different brightness, different resolution, etc.). A small sketch follows below.
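A crude way to see this failure mode in code is to train on one distribution and test on a slightly shifted copy of the same data. The uniform 'brightness' shift below is only a stand-in for a different camera, and the exact accuracy drop will vary:

```python
# Minimal sketch: a model trained on one distribution degrading on shifted data.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)              # pixel values range 0..16
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("accuracy on original test data:", model.score(X_test, y_test))

X_bright = np.clip(X_test + 6.0, 0, 16)          # crude stand-in for a brighter camera
print("accuracy on 'brightened' test :", model.score(X_bright, y_test))
```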
- The Puzzle of AI Explainability : It knows it, but we don't know how it knows it ('it' in this context means AI/Machine Learning). This is often called 'explainability'. From various examples and real applications, we can see that some AI/Machine Learning can learn something and get to know something (for example, how to classify an image), but we don't know exactly how it learns. You may say, "We know how it learns. It learns by continuously updating its weight values through a mechanism called backpropagation" (see the sketch below), but when we say 'explainable' here, it means something more specific, like deterministic / explicit logic. Does this really matter ? There are many things in the human brain that we don't know exactly how they work. Why do we expect that kind of explainability from AI/Machine Learning ? I will leave the answer to this question up to you.
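To make the 'it learns by updating weights' point concrete, here is a minimal, self-contained sketch of gradient-descent learning for a single sigmoid neuron. The point is that everything the model 'knows' ends up as plain numbers in w and b, with no explicit, human-readable logic anywhere:

```python
# Minimal sketch of learning-as-weight-updates (backpropagation in its
# simplest form). The learned 'knowledge' is just the numbers in w and b.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # the hidden rule to be learned

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(200):
    p = 1 / (1 + np.exp(-(X @ w + b)))       # forward pass (sigmoid)
    grad_w = X.T @ (p - y) / len(y)          # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                         # 'learning' is just these updates
    b -= lr * grad_b

print("learned weights:", w, "bias:", b)     # numbers, not an explanation
```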
- Hard to Generalize the learning : In most cases (probably in every case) of AI/Machine Learning algorithms, they are specialized to perform only a specific function, and it is very hard to generalize / extend the learning to other areas. For example, even the best image classifier cannot do the most basic things involved in driving a car. An algorithm that has beaten all the best human players in a specific genre of game would do no better than a first-time player in another genre of game.
- A Systematic way to cheat the algorithm : This was originally noticed for algorithms used for image classification (i.e., CNN-based algorithms). It was found that a specially designed small perturbation to an image can lead the algorithm to come up with a completely wrong classification, whereas the same perturbation does not affect the human brain at all. You may have heard of a famous example where a photo of a temple with this type of perturbation led the image classification algorithm to classify it as an ostrich, and many more examples of this type have been reported since (a small sketch of the underlying idea follows below). Now it seems that this problem exists not only for image classification algorithms, but also for other applications like natural language processing.
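The best-known recipe for crafting such perturbations is the fast gradient sign method (FGSM) from Goodfellow et al. (2014). The sketch below applies the same idea to a plain linear classifier rather than a CNN, simply to keep it self-contained; the epsilon values are arbitrary, and the exact point where the prediction flips will vary:

```python
# Minimal FGSM-style sketch: nudge each pixel slightly in the direction that
# most increases the loss, and watch the prediction change while the image
# stays nearly unchanged to a human eye.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)              # pixel values range 0..16
model = LogisticRegression(max_iter=5000).fit(X, y)

x, label = X[0], y[0]
probs = model.predict_proba([x])[0]
onehot = np.eye(10)[label]
grad_x = model.coef_.T @ (probs - onehot)        # d(loss)/d(input) for this model

print("clean prediction:", model.predict([x])[0], "(true label:", label, ")")
for eps in (0.5, 1.0, 2.0, 4.0):                 # still small vs. the 0..16 range
    x_adv = np.clip(x + eps * np.sign(grad_x), 0, 16)
    print(f"eps={eps}: prediction {model.predict([x_adv])[0]}")
```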
- No (or very limited) understanding of Context : This applies mostly to Natural Language Processing related AI/Machine Learning. By its nature, most language (every natural language, as far as I personally know) has ambiguity in a lot of expressions. In most cases, we humans resolve (at least to some degree) this problem of ambiguity based on context, but AI/Machine Learning has no (or very limited) understanding of context. As a simple test, try translating 'Can I cut in ?' into another language with Google Translate. You would see that it should be translated differently depending on context... like whether you have a knife in one hand trying to cut something, or you are in front of a long line trying to get into the middle of the waiting line (a programmatic version of this test is sketched below). ==> Since the release of ChatGPT (Nov 2022), this limitation has been shown to be greatly reduced. I wouldn't say this limitation is removed completely, but I would say that the level of context understanding in AI/Machine Learning is now almost at the same level as that of ordinary people.
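If you want to run that 'Can I cut in ?' test programmatically rather than in the browser, a sketch along the following lines would do it. It assumes the Hugging Face transformers package and the public Helsinki-NLP/opus-mt-en-fr model (any English-to-something model would serve the same purpose); whether the model actually picks the intended sense is exactly what the test probes:

```python
# Minimal sketch: probing whether surrounding context changes a translation.
from transformers import pipeline

translator = pipeline("translation_en_to_fr",
                      model="Helsinki-NLP/opus-mt-en-fr")

ambiguous = "Can I cut in?"
with_context = ("There is a long waiting line in front of the ticket office, "
                "but I am in a hurry. Can I cut in?")

print(translator(ambiguous)[0]["translation_text"])
print(translator(with_context)[0]["translation_text"])
```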
- More abstract problems : Like machine bias, or decision making in critical situations such as those often mentioned for AI/Machine Learning applications in autonomous driving. I am personally not sure whether this is a limitation of AI/Machine Learning. It may be a limitation of the human brain and/or human society, and we don't know the exact solution in the human brain / human society... and we may not be able to find any clear solution there even in the future. Then why do we care ? I leave this up to you. I personally think it is worth thinking, rethinking, and trying to find a slightly better step, a slightly better adjustment, even if we will never be able to reach any single solution (or single AI/Machine Learning algorithm) that fixes these problems all at once.
- Scalability and Computational Cost : Many powerful AI models require significant computational resources, which can be costly and environmentally unsustainable. This limits the accessibility and scalability of AI solutions, especially for smaller organizations or in resource-constrained settings. ==> Around the end of 2023, the term 'On-Device AI', which usually means running AI/ML models on a small / low-powered device like a mobile phone, started becoming a hot topic. It would be interesting to see how well On-Device AI will perform compared to regular models running on large / power-hungry systems (one common technique is sketched below).
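One common On-Device technique is post-training quantization: storing weights as 8-bit integers instead of 32-bit floats, trading a little accuracy for a roughly 4x smaller and often faster model. Below is a minimal sketch using PyTorch's dynamic quantization on a toy model (the layer sizes are arbitrary):

```python
# Minimal sketch: dynamic quantization in PyTorch. Linear-layer weights are
# stored as int8 (roughly 4x smaller than float32) while the model keeps the
# same calling interface, usually at a small accuracy cost.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)

float_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
print("float32 parameter size:", float_bytes, "bytes")

x = torch.randn(1, 512)
print("quantized output shape:", quantized(x).shape)   # same interface as before
```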
- Dependence on Quality Data: The performance of AI models is heavily dependent on the quality of the training data. Poor quality, noisy, or unrepresentative data can lead to ineffective or incorrect outcomes. Ensuring data quality and representativeness is a significant challenge.
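A quick way to feel this dependence on data quality: corrupt a fraction of the training labels and watch the test accuracy fall. The sketch below is only illustrative (dataset, model, and noise levels are arbitrary choices), but the downward trend is the point:

```python
# Minimal sketch: label noise in the training data degrading test accuracy.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)

for noise in (0.0, 0.2, 0.4):
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise
    y_noisy[flip] = rng.integers(0, 10, flip.sum())   # randomize a fraction of labels
    model = LogisticRegression(max_iter=5000).fit(X_train, y_noisy)
    print(f"label noise {noise:.0%} -> test accuracy {model.score(X_test, y_test):.2f}")
```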
- Long-Term Learning and Adaptation: Most AI systems are not capable of long-term learning or adaptation. They are typically trained once and then deployed, without the ability to learn or adapt from new data or changing environments unless they are retrained or fine-tuned.
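A related, well-known symptom is 'catastrophic forgetting': if you do keep updating a deployed model on new data only, it tends to lose what it learned before. A minimal sketch (the task split and the number of updates are arbitrary):

```python
# Minimal sketch of catastrophic forgetting: updating only on a new task
# (digits 5-9) erodes performance on the old task (digits 0-4).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier

X, y = load_digits(return_X_y=True)
old, new = y < 5, y >= 5

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X[old], y[old], classes=np.unique(y))
print("old-task accuracy after learning it    :", model.score(X[old], y[old]))

for _ in range(20):                          # keep updating on the new task only
    model.partial_fit(X[new], y[new])
print("old-task accuracy after new-task drift :", model.score(X[old], y[old]))
```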
- Understanding Causal Relationships: AI, particularly in its current dominant form of statistical learning, is often criticized for its focus on correlation rather than causation. Understanding and modeling causal relationships is crucial for many decision-making processes but remains a challenging area.
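The classic illustration of this last point is a hidden common cause: two variables can be strongly correlated while neither causes the other, so a model that only learns correlations will mispredict what happens under an intervention. A minimal simulation:

```python
# Minimal sketch: correlation without causation via a hidden common cause Z.
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=100_000)                 # hidden common cause
x = z + 0.3 * rng.normal(size=z.size)        # X is driven by Z
y = z + 0.3 * rng.normal(size=z.size)        # Y is driven by Z too (not by X)

print("observed corr(X, Y):", round(np.corrcoef(x, y)[0, 1], 2))   # strong

x_do = rng.normal(size=z.size) + 5           # intervene: set X by hand
print("corr(X, Y) under intervention:", round(np.corrcoef(x_do, y)[0, 1], 2))  # ~0
```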
Why do we care about the Limitations ?
Why do we care about the limitations ? Different people would have different answers to this question. From reading many articles and watching lectures and workshops on YouTube, the following are some of the reasons as I understand them (this is just based on my interpretation of those documents and videos, and there is no guarantee that my interpretation is correct).
- Warning against Hype : Sometimes it is suggested that we should have a clear understanding of the limitations of current AI/Machine Learning in order not to fall into too much hype. As we have seen in the history of science and engineering, a certain degree of hype is necessary to motivate the stakeholders in an industry to realize a technology, but we have also seen many cases where too much hype led to too-high expectations, and too-high expectations often led to too-deep disappointment, causing a long, harsh winter.
- Pushing the boundary : I think this is the positive side of understanding the limitations. When a technology is in a hype cycle and too many people are working on it, people often get discouraged, saying "This is too competitive, and almost everything has already been discovered and implemented. There wouldn't be anything I can contribute to push the boundary." But if you have a clear understanding of the limitations of the technology, those are the exact points where you can start if you want to shine.
- User Trust and Adoption : For AI to be widely adopted, users must trust its reliability and effectiveness. Knowing the limitations allows developers to set realistic expectations and communicate them to users, building trust. It also helps in designing systems that are transparent about their capabilities and limitations.
- Resource Allocation : AI projects require significant investment in terms of data, computational resources, and human expertise. Knowing the limitations helps organizations and researchers allocate these resources more effectively, focusing on areas where AI can have the most impact or where innovation is needed to overcome current barriers.
- Innovation and Competition: Recognizing limitations can drive innovation as it challenges researchers and developers to find solutions. It also fosters healthy competition in the industry, as companies and institutions strive to overcome these limitations and offer better, more advanced solutions.
- Risk Management: Understanding what AI cannot do or where it might fail is vital for risk management. This includes foreseeing possible failures, planning for contingencies, and ensuring that there are safeguards against catastrophic failures, especially in critical applications like healthcare, transportation, or finance.
- Safety Check :-) : Is it safe (or good timing) to get into this industry for a business or for a job if there are clear limitations of the technology that do not seem likely to be solved in the near future ?