How AI Can Fail Us Due to Wrong Training

Machine learning and deep learning have become household terms in nearly every industry. They took science and business by storm, seemingly offering solutions to some of the most complicated problems that older algorithms and traditional computational methods could not handle. At its core, machine learning exploits data to give artificial intelligence (AI) the opportunity to learn directly from the examples it is given, without being explicitly programmed. Viewed this way, machine learning looks like a self-sustained form of pattern recognition: the system appears to understand the task better than its programmers, improving itself without flawed human interference. Yet the methods used to train AI are rarely discussed; we mostly see only the final results. There are, in fact, serious issues and flaws in how machine learning models are built, and they are well known in the community of developers and programmers. One of the most essential questions about even the best-known models is whether they are actually useful for real-life problems, because so far their near-perfect performance has largely been demonstrated only inside the lab.

Data shift is the term for the mismatch between the data an AI is trained and tested on and the data the same AI has to deal with in the real world. A good example is training an AI to recognize a pattern in high-quality images, while the everyday images it later receives rarely enjoy that quality. On reflection, this can be a huge issue: learning a pattern only from a 'proper data set with specific qualities' can introduce unwanted biases that lead to catastrophic decisions. A real-life example was an AI trained to select candidates for a job: it excluded women in the very first stage of the selection process, supposedly because of limitations it had inferred from its training data, even though women and other minorities had been explicitly encouraged to apply for the job.
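To make the idea concrete, here is a minimal sketch of data shift in code. It trains an ordinary classifier on clean images and then evaluates it on artificially degraded copies of the test set. The dataset, model, and noise level are illustrative choices of mine, not the setup of the hiring example above.

```python
# Minimal sketch of data shift: a model trained on clean images is evaluated
# on artificially degraded "real world" images. Dataset, model, and noise
# level are illustrative assumptions, not any specific study's setup.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)  # clean 8x8 digit images
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# In-lab evaluation: test data drawn from the same clean distribution.
clean_acc = accuracy_score(y_test, model.predict(X_test))

# "Deployment" evaluation: the same images corrupted with noise, mimicking
# lower-quality everyday inputs the model never saw during training.
rng = np.random.default_rng(0)
X_noisy = X_test + rng.normal(scale=4.0, size=X_test.shape)
noisy_acc = accuracy_score(y_test, model.predict(X_noisy))

print(f"clean test accuracy:   {clean_acc:.2f}")
print(f"shifted test accuracy: {noisy_acc:.2f}")  # typically much lower
```

The gap between the two printed numbers is the whole story: the model never stopped being "accurate" by its own test; the world simply stopped looking like the test.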

Another issue with AI training has recently been uncovered by a team of around 40 Google researchers working across different sectors of the company. The problem is called 'underspecification': the trained AI attributes a specific cause to an observation that could originally have many causes. The lead researcher of the study, D'Amour, wanted to understand why AI models fail in real-life settings. As explained earlier, training typically produces models that are judged by whether they recognize the same patterns in various situations, on examples never seen before; if a model recognizes the pattern on those held-out examples, the test counts as passed. It is easy to see why this is a very simplistic way of handling over-complicated problems. Imagine that your AI merely adjusts some internal variables in whatever way lets it pass the test, without addressing the real mechanism behind the data. In real-life cases, such a model can end up connecting an irrelevant cause to a solution, or pinning an observation on a single factor when it was actually produced by many factors the AI is simply not aware of!
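Underspecification can also be illustrated with a small sketch. Below, several models are built from the same data with the same pipeline, differing only in their random seed. They all pass the usual held-out test about equally well, yet they can behave quite differently once the inputs shift. The dataset, architecture, and noise here are illustrative assumptions of mine, not the Google team's actual experiments.

```python
# Minimal sketch of underspecification: identical pipelines that differ only
# in the random seed all "pass the test" equally well, yet diverge on shifted
# inputs. Dataset, architecture, and noise are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Shifted version of the test set, standing in for unseen real-world conditions.
rng = np.random.default_rng(0)
X_shifted = X_test + rng.normal(scale=4.0, size=X_test.shape)

for seed in range(5):
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                          random_state=seed).fit(X_train, y_train)
    test_acc = accuracy_score(y_test, model.predict(X_test))
    shift_acc = accuracy_score(y_test, model.predict(X_shifted))
    # Held-out accuracies come out nearly identical; shifted accuracies can
    # spread widely, showing the test alone did not pin down the model.
    print(f"seed {seed}: test={test_acc:.2f}  shifted={shift_acc:.2f}")
```

The point is not which seed is "best"; it is that the standard test cannot tell these models apart, even though they would behave very differently once deployed.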

