Validation, Explanation, and Correction of AI systems

Special Session at the IEEE World Congress on Computational Intelligence 2020, International Joint Conference on Neural Networks (IJCNN)


Ivan Tyukin (University of Leicester, UK) is Professor of Applied Mathematics at the School of Mathematics and Actuarial Science at the University of Leicester. His research interests include mathematical foundations of Artificial Intelligence and learning systems, mathematical modelling, adaptive systems, inverse problems with nonconvex and nonlinear parameterization, data analytics, and computer vision. He is an Associate Editor of Communications in Nonlinear Science and Numerical Simulations.

Alexander N. Gorban (University of Leicester, UK) is Professor of Applied Mathematics at the School of Mathematics and Actuarial Science at the University of Leicester. He is an expert in mathematical modelling, neural networks and learning systems, methods for dimensionality reduction, and mathematical foundations and algorithms for data mining and data analytics. He has been a visiting professor and research scholar at the Clay Mathematics Institute, IHES, the Courant Institute, and the Isaac Newton Institute. His active academic collaborations include IHES, ETH Zurich, Institut Curie, Saint Louis University, Eindhoven University of Technology, Toyota Technical Centre (Ann Arbor), and Lobachevsky University (Nizhny Novgorod).

Danil Prokhorov (Toyota Motor North America, USA) is Head of the Future Research Department at Toyota Motor North America R&D. He is an expert in machine learning research, focusing on neural networks with applications to system modelling, powertrain control, diagnostics, and optimization. He has served as a panel expert for NSF, DOE, and ARPA, and as a Senior and Associate Editor of several scientific journals for over 20 years. He has been involved with several professional societies, including the IEEE Intelligent Transportation Systems (ITS) and IEEE Computational Intelligence (CI) societies, as well as the International Neural Network Society (INNS), of which he is a former Board member, President, and recently elected Fellow.

Desmond Higham (University of Edinburgh, UK) is a numerical analyst and Professor of Numerical Analysis at the School of Mathematics at the University of Edinburgh. His main area of research is stochastic computation, with applications in computational biology; technological, sociological, and security networks; and mathematical finance. He is Editor-in-Chief of SIAM Review and a member of the editorial boards of several other journals. He held a Royal Society Wolfson Research Merit Award (2012–2017) and is a Fellow of the Society for Industrial and Applied Mathematics (SIAM) and a Fellow of the Royal Society of Edinburgh.


Aims and Scope:

Significant progress in Artificial Intelligence (AI) over recent years has brought great benefits to end-users in areas ranging from health, banking, and security to advanced manufacturing and space. This has given rise to ecosystems of modern AI penetrating our entire society. Modern AI systems are built using massive volumes of data, both curated and raw, with all the uncertainties inherent to these data. One of the major fundamental barriers limiting further advances and use of AI systems of this type is the problem of validation, explanation, and correction of AI's decision-making. This is particularly important for safety-critical and infrastructural applications, but it is also crucial in other use-cases, including financial, career, education, and health services.

This session focuses on the development of mathematical and algorithmic foundations underpinning the validation, explanation, and correction of AI systems, leading ultimately to addressing the problem of trust in AI. It aims to bring together experts across relevant academic disciplines and industry by establishing a forum for developing a roadmap towards trustworthy and safe AI, as well as for identifying technical obstacles and mathematical/computational frameworks for overcoming them. We envision submissions addressing problems of:

  • quantification of errors in AI systems, including in deep learning neural networks
  • guaranteed one-shot learning
  • analysis and quantification of ensemble learning
  • AI generalization bounds
  • computationally-constrained AIs
  • robustness, resilience, and reliability of AI systems, including to adversarial activities
  • decision-making in ecosystems of AIs.

Submissions illustrating and discussing the problems of validation, explanation, and correction of AI for health, security, autonomous vehicles, and safety-critical applications are particularly welcome. Special attention will be paid to the mathematical foundations of new technological approaches.

Please submit your contribution through the IEEE IJCNN submission site, selecting "Validation, Explanation, and Correction of AI systems" as the main research topic.

Contact details

Department of Mathematics
University of Leicester
University Road
Leicester LE1 7RH
United Kingdom

Tel.: +44 (0)116 229 7407
