Neural Networks Learning to Assess Their Reliability

Neural networks are a set of algorithms developed to recognize patterns in huge data sets, loosely mimicking the way the human brain functions. One of the significant shortcomings of neural networks, and of AI in general, is their lack of self-assessment. Deep learning networks improve themselves by learning from examples and data, and they are designed to assist human beings in decision making by taking a huge parameter space into account. We should therefore know when these methods fail, or at least be able to evaluate their confidence level, especially for decisions that affect human lives. Safer outcomes are crucial in autonomous driving (if it ever becomes possible, given the moral and ethical considerations) or in medical diagnosis. But why don’t we have a confidence-level assessment for deep learning methods applied to huge databases? Simply because, with today’s computing power, it would be computationally expensive to evaluate the confidence level of every single analysis while taking numerous parameters into account.

A team of researchers at MIT and Harvard has now developed a method that enables neural networks to assess their own predictions based on the quality of the available data. This is of particular importance since, as we discussed in another article in the AI series, you can train an AI on high-quality data and then feed it low-quality data in real life. Assessing how well the two data sets match is of crucial significance. As an example, compare these two situations: A) “A certain vaccine should be injected because patient A has shown a set of diagnostics matching a certain disease,” and B) “A certain vaccine should be injected because patient A has probably shown a set of diagnostics matching a certain disease.” Here, the life of a human being is on the line. Depending on how fast the decision must be made, the neural network in question might be asked to assess its confidence level in a matter of seconds. So far, this has been impossible to carry out, and the example above shows that a high-performance system alone is not enough.

The new method, termed “Deep Evidential Regression,” accelerates the self-assessment of neural networks and lets us understand when these systems cannot be trusted. It also tells us what sort of information is missing from the original model. Previous approaches such as Bayesian deep learning have proved to be time-consuming, memory-hungry, and heavily reliant on repeatedly sampling the model to assess its confidence, resources and procedures that are rarely available to everyone using deep learning techniques. The nice thing about Deep Evidential Regression is that it only needs a single run of the neural network.
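To give a feel for what “a single run” means in practice, here is a minimal sketch (not the authors’ reference implementation) of an evidential regression head: the final layer of an ordinary network outputs the four parameters of a Normal-Inverse-Gamma distribution, so one forward pass yields both a prediction and the evidence needed to judge its confidence. The toy network, layer sizes, and variable names below are illustrative assumptions.

```python
# Minimal sketch of an evidential regression head: a single forward pass
# returns a prediction plus the parameters needed to judge its confidence.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialRegressionHead(nn.Module):
    def __init__(self, in_features: int):
        super().__init__()
        # Four outputs per target: gamma, nu, alpha, beta of a
        # Normal-Inverse-Gamma distribution over (mean, variance).
        self.linear = nn.Linear(in_features, 4)

    def forward(self, x):
        gamma, log_nu, log_alpha, log_beta = self.linear(x).chunk(4, dim=-1)
        nu = F.softplus(log_nu)             # nu > 0
        alpha = F.softplus(log_alpha) + 1.0 # alpha > 1 keeps the moments finite
        beta = F.softplus(log_beta)         # beta > 0
        return gamma, nu, alpha, beta

# One pass through a toy network: no sampling or ensembling required.
backbone = nn.Sequential(nn.Linear(8, 32), nn.ReLU())
head = EvidentialRegressionHead(32)
features = backbone(torch.randn(5, 8))   # batch of 5 hypothetical inputs
gamma, nu, alpha, beta = head(features)
print(gamma.shape)                        # one prediction per input
```

The key design point is that uncertainty comes for free with the prediction, instead of being estimated afterwards by rerunning or resampling the model as in Bayesian deep learning.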

The other problem with deep learning methods that we mentioned earlier is that the data a model is trained on might not fully correspond to what it encounters later, so a single point-estimate answer might not be a safe answer. Deep Evidential Regression gives us the power to generate a probability distribution over the possible answers, which naturally yields a confidence level for every possible answer.
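To make that concrete, the short sketch below shows how the four parameters produced above translate into a prediction and two kinds of uncertainty, using the standard moment formulas for the Normal-Inverse-Gamma distribution; the numerical values are made up purely for illustration.

```python
# Sketch: turning evidential outputs (gamma, nu, alpha, beta) into a
# prediction plus confidence measures. The values below are made up.
gamma, nu, alpha, beta = 2.0, 1.5, 3.0, 0.8

prediction = gamma                        # expected value of the target
aleatoric = beta / (alpha - 1)            # noise inherent in the data itself
epistemic = beta / (nu * (alpha - 1))     # uncertainty of the model's own knowledge

print(f"prediction={prediction:.2f}, "
      f"aleatoric={aleatoric:.2f}, epistemic={epistemic:.2f}")
# A large epistemic value flags inputs the model has little evidence for,
# i.e. answers that should not be trusted without a second look.
```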

