New theorems could help robots to correct errors on-the-fly and learn from each other

Posted by ap507 at Aug 21, 2017 09:19 AM
New stochastic separation theorems proved by Leicester mathematicians could enhance capabilities of artificial intelligence

Errors in Artificial Intelligence systems that would normally take a considerable amount of time to resolve could be corrected immediately, thanks to new research by Leicester mathematicians.

Researchers from our Department of Mathematics have published a paper in the journal Neural Networks outlining the mathematical foundations for new algorithms that could allow Artificial Intelligence systems to collect error reports and correct the errors immediately without affecting existing skills, while accumulating corrections for use in future versions or updates.

This could essentially provide robots with the ability to correct errors instantaneously, effectively ‘learn’ from their mistakes without damage to the knowledge already gained, and ultimately spread new knowledge amongst themselves.

Working with industrial partners at ARM, the researchers have combined the algorithms into a system, an AI corrector, capable of improving the performance of legacy AIs on-the-fly.

Professor Alexander Gorban said: “Multiple versions of Artificial Intelligence Big Data analytics systems have been deployed to date on millions of computers and gadgets across various platforms. They function in non-uniform networks and interact with one another.

“It seems to be very natural that humans can learn from their mistakes immediately and do not repeat them (at least, the best of us). It is a big problem how to equip Artificial Intelligence with this ability.

“We have recently found that a solution to this issue is possible. In this work, we demonstrate that in high dimensions, and even for exponentially large samples, linear classifiers in their classical Fisher's form are powerful enough to separate errors from correct responses with high probability and to provide an efficient solution to the non-destructive corrector problem.”
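The separation effect behind this claim can be illustrated numerically. The sketch below is not the authors' implementation; dimensions, sample sizes, and the simple mean-centred linear functional are illustrative assumptions. It shows that in a high-dimensional space, a single "error" point drawn at random can, with high probability, be cut off from thousands of "correct" points by one linear hyperplane, without touching the correct points:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 1000, 5000  # illustrative dimension and number of "correct" samples

# "Correct" responses and a single "error" point, all i.i.d. standard Gaussian.
correct = rng.standard_normal((n, d))
error = rng.standard_normal(d)

# A Fisher-style linear functional: direction from the data mean to the error,
# with the threshold at the midpoint between the mean and the error.
mean = correct.mean(axis=0)
w = error - mean
threshold = w @ (error + mean) / 2

# In high dimension the error's projection onto w is of order d, while each
# correct point's projection is only of order sqrt(d), so one hyperplane
# separates the error from the whole sample with high probability.
separated = bool((error @ w > threshold) and np.all(correct @ w < threshold))
print(separated)  # → True
```

Because the correction is a single added hyperplane, it can be applied (and reversed) without retraining the underlying system, which is the sense in which the corrector is non-destructive.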

Dr Ivan Tyukin added: “The development of sustainable large intelligent systems for mining of Big Data requires creation of technology and methods for fast non-destructive, non-iterative, and reversible corrections. No such technology existed until now.”