Informatics Seminars 2020/21

2020/21 Semester Two

  • Seminars will be held online until further notice. Please contact Effie Law or Thomas Erlebach to be added to the mailing list for receiving seminar announcements with the meeting links.
  • To get the link to the Teams meeting for the next seminar, please click here (UoL account login required)

Fri Jan 22, 14:00 (Host: Effie Law)

Javier Bargas-Avila (Google)

Title: The Design of Everyday Things / UX @ Google - What makes people love or hate the products we create?

Abstract: This talk will show you why we so often fail when using everyday things. No matter how smart you are, no matter how much experience you have, you will often feel inept as you fail to figure out the simplest things, like whether you should push, pull, or slide a door. You will see why these mistakes are not “human error” but originate in flawed product design that ignores the basic ways humans think and behave. You will see examples from everyday objects that will change the way you look at products around you in the future. You will learn to value good, usable designs and detect flawed ones. And you will not blame yourself for making mistakes, but the designers who came up with complicated and unsuited solutions for simple problems.

Fri Jan 29, 14:00 (Host: Effie Law)

Per Ola Kristensson (University of Cambridge)

Title: Using Design Engineering Methods to Understand and Analyse Intelligent Interactive Systems

Abstract: Advances in machine learning, computer vision and statistical language processing have resulted in interactive systems capable of inferring users' intention with high accuracy. However, designing such intelligent interactive systems is difficult as these systems have high complexity and are often operated in uncontrollable environments by users with large variations in user behaviour. As a result, the designer is tasked with choosing an operating point in a multidimensional design space where many design dimensions are conflicting and poorly understood. In this talk I will explain how to model such systems at a purely functional level as function structure models, which can then be parameterized as a set of controllable and uncontrollable parameters. Controllable parameters are design parameters that can be tuned and optimized by the designer. Uncontrollable parameters are parameters that are not directly governable but nonetheless affect system performance. An understanding of uncontrollable parameters allows for sensitivity analysis of system performance. I will discuss the broader benefits of this design engineering approach for HCI system design and exemplify it with two applications: 1) assessing the potential performance of a context-aware sentence retrieval system for nonspeaking individuals with motor disabilities; and 2) an in-depth analysis of when and how word predictions are likely to actually benefit the user.
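The controllable/uncontrollable split described above can be illustrated with a toy sensitivity analysis. Everything here is hypothetical: the performance function, parameter names and ranges are invented for illustration, not taken from the talk.

```python
import itertools

def performance(threshold, beam_width, noise, user_speed):
    """Hypothetical performance model of an intelligent text-entry system.
    threshold and beam_width are controllable design parameters;
    noise and user_speed are uncontrollable environment/user parameters."""
    accuracy = max(0.0, 1.0 - noise * threshold)
    throughput = user_speed * min(beam_width, 8) / 8
    return accuracy * throughput

controllable = {"threshold": [0.1, 0.5, 0.9], "beam_width": [2, 4, 8]}
uncontrollable = {"noise": [0.0, 0.2, 0.4], "user_speed": [1.0, 2.0]}

best = None
for t, b in itertools.product(*controllable.values()):
    # Sensitivity analysis: score each design point by its worst case
    # over all settings of the uncontrollable parameters.
    worst = min(performance(t, b, n, s)
                for n, s in itertools.product(*uncontrollable.values()))
    if best is None or worst > best[0]:
        best = (worst, t, b)

print(best)  # the design point with the best worst-case performance
```

Sweeping the uncontrollable parameters while optimizing the controllable ones is one simple way to pick a robust operating point in the multidimensional design space the abstract mentions.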

Fri Feb 5, 14:00 (Host: Thomas Erlebach)

Rayna Dimitrova (CISPA Helmholtz Center for Information Security)

Title: Formal Methods for Autonomy in Partially Observable Environments

Abstract:

Complex cyber-physical systems, in which software interacts with physical components, play an important role in almost all parts of everyday life. Notable examples include autonomous robots and cars, plant controllers, and medical devices. While the correctness of such systems is vital to human safety and security, ensuring their reliability is hard and still remains a major challenge. The automatic synthesis of programs from high-level specifications is a highly attractive and increasingly viable alternative to manual system design, and holds the potential to provide such guarantees. A fundamental challenge to the application of synthesis techniques in domains like autonomous systems is that such systems operate in a complex and often largely unknown environment.

In this talk, I will focus on the application of reactive synthesis to the problem of automatically deriving strategies for autonomous mobile sensors conducting surveillance, that is, maintaining knowledge of the location of a moving, possibly adversarial target. I will discuss two key complications that synthesis methods face in this setting. First, naively keeping track of the knowledge of the surveillance agent does not scale. Second, while sensor networks with a large number of dynamic sensors can achieve better coverage, synthesizing coordinated surveillance strategies is challenging computationally. I will outline how abstraction, refinement, and compositional synthesis techniques can be used to address these challenges.

Fri Feb 12, 14:00 (Host: Ashiq Anjum)

Frank McQuade (Director of Capability, Bloc Digital)

Title: Digital Twins and their Industrial Applications

Abstract: Starting with the history of Digital Twins and industrial requirements, the presentation will discuss how Digital Twins are applied to real world industrial problems and the benefits that they bring. Different types of Digital Twins will be presented including augmented and virtual-reality based solutions, with the perspective of understanding what solutions the future may hold.

Speaker Bio: Dr Frank McQuade graduated from the University of Glasgow with a PhD in Autonomous Assembly of Spacecraft Structures. Following a successful career in the space industry, he joined Rolls-Royce and held a number of posts before becoming the Head of Engineering Strategy. In 2018 he joined Bloc Digital as the Director of Capability and is currently responsible for a number of industrial research programmes, KTP projects and academic relationships.

Fri Feb 19, 14:00

Fri Feb 26, 14:00 (Host: Effie Law)

José David Lopes (Heriot-Watt University, UK)

Title: Conversational AI Systems

Abstract: In this talk I will guide you through the work we have been doing at Heriot-Watt to build conversational systems for a number of different applications. First, I will focus on the data collection strategies and the impact they have on system performance. In parallel with the data collections and the research that we do to create robust systems, we have investigated the impact that interaction initiative, embodiment and transparency have on instilling trust through interaction. I will highlight the main results and conclusions we have found so far.

Bio: José is currently a Research Fellow at the Interaction Lab at Heriot-Watt University, Edinburgh, UK. His research focuses on multi-modal human-robot interaction, namely how to build interactive, trustworthy conversational agents. He was previously a Post-Doc and Researcher at the KTH Royal Institute of Technology in Stockholm, Sweden, where he worked on Robot-Aided Language Learning and Spoken Dialogue System Analytics. He holds a PhD from the University of Lisbon in collaboration with Carnegie Mellon University.

Fri March 5, 14:00 (Host: Mohammad Mousavi)

Maurice ter Beek (ISTI-CNR, Pisa, Italy)

Title: Efficient Static Analysis and Verification of Featured Transition Systems


Abstract: A Featured Transition System (FTS) models the behaviour of all products of a Software Product Line (SPL) in a single compact structure, by associating action-labelled transitions with features that condition their presence in product behaviour. It may however be the case that the resulting featured transitions of an FTS cannot be executed in any product (so-called dead transitions) or, on the contrary, can be executed in all products (so-called false optional transitions). Moreover, an FTS may contain states from which a transition can be executed only in some products (so-called hidden deadlock states). It is useful to detect such ambiguities and signal them to the modeller, because dead transitions indicate an anomaly in the FTS that must be corrected, false optional transitions indicate a redundancy that may be removed, and hidden deadlocks should be made explicit in the FTS to improve the understanding of the model and to enable efficient verification, if the deadlocks in the products should not be remedied in the first place. We provide an algorithm to analyse an FTS for ambiguities and a means to transform an ambiguous FTS into an unambiguous one. The scope is twofold: an ambiguous model is typically undesired as it gives an unclear idea of the SPL and, moreover, an unambiguous FTS can efficiently be model checked. We empirically show the suitability of the algorithm by applying it to a number of benchmark SPL examples from the literature, and we show how this facilitates a kind of family-based model checking of a wide range of properties on FTSs.
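The dead and false-optional checks described in the abstract can be sketched with a toy encoding: products are feature sets and each transition carries a guard over a product's features. This illustrates the definitions only; it is not the paper's algorithm, and the SPL below is invented.

```python
# Products of a toy SPL, each a set of selected features.
products = [{"base"}, {"base", "encrypt"}, {"base", "encrypt", "log"}]

# Featured transitions: (source, action, target, guard over a feature set).
transitions = [
    ("s0", "send", "s1", lambda f: "base" in f),           # enabled everywhere
    ("s1", "encode", "s2", lambda f: "encrypt" in f),      # enabled in some
    ("s1", "selfdestruct", "s2", lambda f: "erase" in f),  # enabled nowhere
]

def classify(transitions, products):
    """Label each featured transition as dead, false optional, or ok."""
    report = {}
    for src, act, dst, guard in transitions:
        enabled = [p for p in products if guard(p)]
        if not enabled:
            report[act] = "dead"            # executable in no product
        elif len(enabled) == len(products):
            report[act] = "false optional"  # executable in every product
        else:
            report[act] = "ok"
    return report

print(classify(transitions, products))
# {'send': 'false optional', 'encode': 'ok', 'selfdestruct': 'dead'}
```

A real FTS analysis works on feature expressions and product configurations symbolically rather than by enumeration, but the classification criteria are the same.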


Bio: Maurice ter Beek is a researcher at ISTI-CNR (Pisa, Italy) and head of the Formal Methods and Tools lab. He obtained a Ph.D. at Leiden University (The Netherlands). He has authored over 150 peer-reviewed papers, edited over 25 special issues of journals and proceedings, and serves on the editorial boards of the Journal of Logical and Algebraic Methods in Programming, Science of Computer Programming, PeerJ Computer Science and ERCIM News. He works on formal methods and model-checking tools for the specification and verification of safety-critical software systems and communication protocols, focussing in particular on applications in service-oriented computing, software product line engineering and railway systems. He is a member of the Steering Committees of the FMICS, SPLC and VaMoS conference series, and a regular PC member of the FM, FMICS, FormaliSE, SEFM, SPIN, SPLC and VaMoS conference series.

Fri March 12, 14:00

Fri March 19, 14:00 (Host: Huiyu Zhou)

Xianghua Xie (Swansea University, UK)

Title: Deep Learning for Healthcare

Abstract: In this talk, I would like to discuss some of our recent attempts at developing predictive models for analysing electronic health records and understanding anatomical structures from medical images. In the first part of the talk, I will present two studies of using electronic health records to predict dementia patient hospitalisation risks and the onset of sepsis in an ICU environment. Two different types of neural network ensemble are used, but both aim to provide some degree of interpretability. For example, in the dementia study, the GP records of each patient were selected from one year before diagnosis up to hospital admission. 52.5 million individual records of 59,298 patients were used; 30,178 patients were admitted to hospital and 29,120 remained in GP care. From the 54,649 initial event codes, the ten most important signals identified for admission were two diagnostic events (nightmares, essential hypertension), five medication events (betahistine dihydrochloride, ibuprofen gel, simvastatin, influenza vaccine, calcium carbonate and colecalciferol chewable tablets), and three procedural events (third party encounter, social group 3, blood glucose raised). The resulting models performed significantly better than conventional methods. In the second part of the talk, I would like to present our work on graph deep learning and how this can be used to perform segmentation on volumetric medical scans. I will present a graph-based convolutional neural network, which simultaneously learns spatially related local and global features on a graph representation from multi-resolution volumetric data. The Graph-CNN models are then used for the purpose of efficient marginal space learning. Unlike conventional convolutional neural network operators, the graph-based CNN operators allow spatially related features to be learned on the non-Cartesian domain of the multi-resolution space. Some challenges in graph deep learning will be briefly discussed as well.

Bio: Xianghua Xie is a Professor at the Department of Computer Science, Swansea University. His research covers various aspects of computer vision and pattern recognition. He was a recipient of an RCUK academic fellowship, and has been an investigator on several projects funded by EPSRC, Leverhulme, NISCHR, and WORD. He has been working in the areas of Pattern Recognition and Machine Intelligence and their applications to real world problems since his PhD work at Bristol University. His recent work includes detecting abnormal patterns in complex visual and medical data, assisted diagnosis using automated image analysis, fully automated volumetric image segmentation, registration, and motion analysis, machine understanding of human action, efficient deep learning, and deep learning on irregular domains. He has published over 160 research papers and (co-)edited several conference proceedings.

Fri March 26, 14:00 (Host: Effie Law)

Mohamed Khamis (University of Glasgow, UK)

Title: Security and Privacy in the age of Ubiquitous Computing

Abstract: Today, a thermal camera can be bought for under £150 and used to track the heat traces your fingers leave when entering a password on a keyboard. We recently found that thermal imaging can reveal 100% of PINs entered on smartphones up to 30 seconds after they have been entered. Other ubiquitous technologies are continuously becoming more powerful and affordable, and can now be maliciously exploited even by average, non-tech-savvy users. The ubiquity of smartphones can itself be a threat to privacy; with personal data being accessible essentially everywhere, sensitive information can easily become subject to prying eyes. There is a significant increase in the number of novel platforms in which users need to perform secure transactions (e.g., payments in VR stores), yet we still use technologies from the 1960s to secure access to them. Mohamed will talk about the implications of these developments and his work in this area, with a focus on the challenges, opportunities, and directions for future work.

Bio: Dr Mohamed Khamis is a lecturer at the University of Glasgow, where he leads research in Human-centered Security. His team focuses on a) understanding threats to privacy and security that are imposed or facilitated by ubiquitous technologies, and b) designing user-centered systems that address these threats. His team’s research has received funding from the Royal Society of Edinburgh, the EPSRC and the National Cyber Security Centre. He regularly publishes in CHI, TOCHI and other top HCI and human-centered security conferences and journals. He has been on the program committee of CHI since 2019 and is an editorial board member for IMWUT and the International Journal of Human-Computer Studies. Mohamed received his PhD from Ludwig Maximilian University of Munich (LMU).


Fri 7 May, 14:00 (Host: Mohammad Mousavi)

Marjan Sirjani (Malardalen University, Sweden)

Title: TBA

Abstract: TBA

Fri 14 May, 14:00 (Host: Huiyu Zhou)

Jungong Han (Aberystwyth University, UK)

Title: TBA

Abstract: TBA

Fri 21 May, 14:00 (Host: Effie Law)

Fridolin Wild (Open University, UK)

Title:  Holographic AI

Abstract: TBA

Fri 28 May, 14:00 (Host: Effie Law)

Jose C. Campos (University of Minho, Portugal)

Title: TBA

Abstract: TBA

Fri 4 June, 14:00 (Host: Mohammad Mousavi)

Mahsa Varshoaz (IT University of Copenhagen, Denmark)

Title: TBA

Abstract: TBA

Fri 11 June, 14:00

2020/21 Semester One

Fri Oct 9, 14:00 (Host: Effie Law)

Alan Blackwell (University of Cambridge)

Title: How to design a programming language

Abstract: Programming languages - how to tell a computer what to do - are the core technology of the digital revolution, just as the invention of the wheel was the core technology enabling the design of land transportation systems. Wheels have been necessary, but not sufficient, for the design of effective cars. Beyond basic optimisation, and occasional innovation, research into wheel technologies may be important, but provides little practical guidance for successful products. In the same way, compilers and type systems are necessary, but not sufficient, for the design of effective programming languages. This talk draws lessons from the design of cars to propose principles and processes for programming language design, as well as research agendas that will support those principles and processes.

Speaker Bio: Alan Blackwell is Professor of Interdisciplinary Design at the University of Cambridge Computer Laboratory. Originally trained as a control engineer, his early career in industrial automation soon led to an interest in programming as a technical user interface. He implemented his first visual programming language in 1983 (an antecedent of Harel’s StateCharts) for specifying control of a cement batching plant in his hometown Wellington. Subsequent projects included a real-time expert systems language used to implement emergency response systems that now run on the trains of London Underground’s Central and Jubilee lines. After delivering his first conference keynote on programming language design in 1995, he realised that he knew nothing about the scientific causes that made one programming language more usable than another, so left his role as design lead of novel end-user programming languages at Hitachi to study for a PhD with Thomas Green at the MRC Applied Psychology Unit in Cambridge. Since becoming an academic, he and his team have contributed to the design of programming languages, tools and techniques at companies around the world, including Microsoft, Google, Intel, Nokia, Sony, AutoDesk and many others.

Fri Oct 16, 14:00 (Host: Eugene Zhang)

Yuankai Huo (Vanderbilt University)

Title: Machine Learning for Medical Image Analysis using Big Data

Abstract: Rapid developments in data sharing and computational resources are reshaping the medical imaging research field from small-scale (e.g., a cohort of < 300 subjects) to large-scale (e.g., big data with thousands or more subjects). However, traditional medical image analysis techniques can be inadequate to overcome the new challenges in big data, including robustness of algorithms, inter-subject variability, computational resources, etc. In this talk, I will (1) present an end-to-end large-scale lifespan brain image analysis on more than 5000 patients, and (2) discuss the challenges and opportunities in machine learning for medical image analysis using big data.

Fri Oct 23, 14:00 (Host: Effie Law)

Jo Iacovides (University of York)

Title: Beyond play: exploring the complexity of player experience

Abstract: Gameplay frequently involves a combination of positive and negative emotions, where there is increasing interest in understanding more complex forms of player experience. In this talk I will present the findings of three different studies that consider overlooked aspects of gameplay. The first focuses on reflection as a core component of the player experience through exploring what sorts of reflection players engage in, when they do so and how they feel about reflection. The second study examines uncomfortable gameplay interactions across different commercial games to investigate how discomfort manifests and influences player engagement. Finally, the third study focuses on player motivation, through examining the role of games during difficult life experiences. Through exploring reflection, discomfort, and gaming as a form of coping, the talk will discuss how games can invoke powerful experiences that impact how we think and feel beyond the initial instance of play.

Speaker Bio: Dr Ioanna (Jo) Iacovides is a Lecturer in Computer Science at the University of York, UK. Her research interests lie in Human-Computer Interaction, with a particular focus on understanding the role of learning within the player experience and on investigating complex emotional experiences in the context of digital play. In addition, she is interested in exploring how games and playful technologies can be created for a range of persuasive purposes, such as education and behaviour change. She has received awards for work on examining reflection and gaming (best paper, CHI PLAY 2018), evaluating persuasive games (honourable mention, CHI 2015) and for the game Resilience Challenge, which encourages healthcare practitioners to consider how they adapt safely under pressure (first prize, 2017 Annual Resilience Healthcare Network symposium).

Fri Oct 30, 14:00 (Host: Effie Law)

Julien Maffre (Microsoft Research Cambridge)

Title: Overview of the Confidential Consortium Framework (CCF)

Abstract: In this talk, I present the Confidential Consortium Framework (CCF), Microsoft's open source framework (https://github.com/Microsoft/CCF) for building a new category of secure, highly available, and performant applications that focus on multi-party compute and data.

Fri Nov 6, 14:00 (Host: Eugene Zhang)

Shuai Liu (Hunan Normal University, China)

Title: Introduction of Advance in Single Target Tracking

Abstract: Computer vision is one of the main application scenarios of artificial intelligence, and visual tracking is a main research branch of computer vision. Building on traditional methods, our team combines traditional feature models with human visual perception capabilities, short/long-term memory functions and attention characteristics, and has proposed corresponding visual tracking methods and mechanisms in the directions of target template matching and update mechanisms. Experiments show that introducing human vision can improve tracking performance, and the proposed models also suggest targeted development directions for human vision research.

Speaker Bio: Prof. Shuai Liu received his Ph.D. degree from Jilin University in 2011. From 2011 to 2018 he was on the faculty at Inner Mongolia University, China, rising from lecturer to full Professor. He joined the Department of Artificial Intelligence at Hunan Normal University in 2019, where he is now director of the department. He has served in editorial roles for many respected journals. His research interests include computer graphics, image processing, and computer vision.

Fri Nov 13, 14:00 (Host: José Miguel Rojas)

Syed Waqee Wali & Jan Ringert (University of Leicester)

Title: Semantic Comparisons of Alloy Models

Abstract: Alloy is a textual modeling language for structures and behaviors of software designs. The Alloy Analyzer provides various analyses, making it a popular lightweight formal methods tool. While Alloy models can be analyzed, explored, and tested, there is little support for comparing different versions of Alloy models. We believe that these comparisons are crucial when trying to refactor, refine, or extend models. In this work we present an approach for the semantic comparison of Alloy models. Our pair-wise comparisons include semantic model differencing and the checking of refactoring, refinement, and extension. We enable semantic comparisons of Alloy models by a translation of two versions into a single model that is able to simulate instances of either one of the versions. Semantic differencing and instance computation require only a single analysis of the combined model in the Alloy Analyzer. We implemented our work and evaluated it using 654 Alloy models from different sources, including version histories. Our evaluation examines the cost of semantic comparisons using the Alloy Analyzer in terms of running times and variable numbers over individual models.
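The underlying idea of semantic comparison can be illustrated outside Alloy with a brute-force enumeration: enumerate what one version admits and the other rules out. The "models" below are invented predicates over tiny graphs, not the paper's encoding (which performs a single symbolic analysis rather than enumeration).

```python
from itertools import product

# Toy universe: directed graphs over three nodes, a stand-in for Alloy instances.
NODES = range(3)
EDGES = [(a, b) for a in NODES for b in NODES if a != b]

def v1(edges):
    """Version 1 of the 'model': the edge relation must be symmetric."""
    return all((b, a) in edges for (a, b) in edges)

def v2(edges):
    """Version 2: symmetric and at most two edges -- a candidate refinement."""
    return v1(edges) and len(edges) <= 2

def instances(pred):
    """Brute-force enumeration of all edge sets satisfying a predicate."""
    return {frozenset(e for e, bit in zip(EDGES, bits) if bit)
            for bits in product([0, 1], repeat=len(EDGES))
            if pred({e for e, bit in zip(EDGES, bits) if bit})}

i1, i2 = instances(v1), instances(v2)
refines = i2 <= i1         # every v2 instance is also a v1 instance
diff = i1 - i2             # witnesses: instances v1 admits but v2 rules out
print(refines, len(diff))  # True 4
```

Each member of `diff` is a concrete witness of the semantic difference, analogous to the difference instances the combined Alloy model produces.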

This work -- an extension of Syed’s MSc thesis, published at the 23rd IEEE/ACM MoDELS conference -- received the Best Foundation Paper Award and an ACM SIGSOFT Distinguished Paper Award.

Bio of Syed Waqee Wali: Syed is a Senior Software Engineer at CGI UK, focusing mostly on Web Engineering, IoT and Database Administration. He worked on the topic of this research paper as his final project to graduate with an MSc in Advanced Software Engineering with Industry. Syed won the School's prize for the best software development MSc project. Syed was also part of the School's Driverleics project, involved in research and development with autonomous vehicles.

Bio of Jan Oliver Ringert:  Jan is a lecturer in Model-Based Software Development at University of Leicester. His research interests are in using formal methods for model-based software engineering with applications to autonomous systems. His work has been published in top software engineering and modeling conferences and journals.

Fri Nov 20, 14:00 (Host: Effie Law)

Nicholas Cummins (King's College London)

Title: Speech analysis for mental health: opportunities and challenges

Abstract: The production of speech is remarkably complex, combining conscious and subconscious cognitive thought with the physical actions of the respiratory system. This complexity means that changes in our health state can affect speech, often at a subconscious level. Such changes can then alter the linguistic, acoustic and prosodic content in ways that are potentially measurable through intelligent signal processing techniques. The multifaceted nature of speech means it is uniquely placed as a signal of interest in remote monitoring (RMT) mobile health (mHealth) studies. No other RMT signal contains such a combination of cognitive, neuromuscular and physiological information. However, considerable research efforts are required to realise the potential of speech as a mHealth biomarker. This talk will consist of three parts: a general introduction to speech production, an overview of current work in this area, and an outlook on the challenges and opportunities associated with this fascinating and unique health signal.

Speaker Bio: Nicholas (Nick) Cummins is a lecturer in AI for speech analysis for health at the Department of Biostatistics and Health Informatics at King’s College London. Nick’s current research interests include speech processing, affective computing and multisensory signal analysis. He is fascinated by the application of machine learning techniques to improve our understanding of different health conditions, and mental health disorders in particular. Nick is actively involved in the RADAR-CNS project, in which he assists in the management of Work Package 8: Data Analysis & Biosignatures. Nick was awarded his PhD in electrical engineering from UNSW Australia in February 2016 for his thesis ‘Automatic assessment of depression from speech: paralinguistic analysis, modelling and machine learning’. After completing his PhD, he was a postdoctoral researcher at the Chair of Complex and Intelligent Systems at the University of Passau, Germany. Most recently, he was a habilitation candidate at the Chair of Embedded Intelligence for Health Care and Wellbeing at the University of Augsburg, also in Germany. During his time in Germany, he was involved in the DE-ENIGMA, RADAR-CNS, TAPAS and sustAGE Horizon 2020 projects. He also wrote and delivered courses in speech pathology, deep learning and intelligent signal analysis in medicine.

Fri Nov 27, 14:00 (Host: Huiyu Zhou)

Hui Wang (University of Ulster)

Title: Small Data Analytics

Abstract: Big data is a key characteristic of the modern times. Very large data sets arise in various domains at low cost – for example, data captured from sensors, which are widely available, or crowd-sourced from the public. However, large data sets with annotations (labelling, structuring, etc.) are very rare, as proper annotation must be done by experts, which is very expensive. This is a small data challenge in the big data age. Machine learning, especially deep learning, can effectively learn with large data sets. However, it cannot effectively learn with small data sets due to various issues, e.g. overfitting, noise, outliers and sampling bias, which can render the learned model less generalisable. In this talk, I will review various approaches to effective learning with small data sets, such as data augmentation, transfer learning, regularisation and visualisation, and knowledge-based learning. I will also present an overview of our approach, the lattice machine.
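Of the approaches listed, data augmentation is the simplest to sketch: expand a small labelled set with perturbed copies of each example. This is a generic illustration with invented data, not the lattice machine or any specific method from the talk.

```python
import random

random.seed(0)  # reproducible jitter

def augment(samples, n_copies=5, noise=0.05):
    """Expand a small labelled dataset by adding jittered copies of each
    (features, label) pair -- a generic augmentation sketch."""
    out = list(samples)
    for features, label in samples:
        for _ in range(n_copies):
            jittered = [x + random.gauss(0, noise) for x in features]
            out.append((jittered, label))
    return out

tiny = [([1.0, 2.0], "a"), ([3.0, 1.0], "b")]
bigger = augment(tiny)
print(len(bigger))  # 12: the 2 originals plus 2 * 5 jittered copies
```

The jitter scale matters: too little noise adds near-duplicates, too much moves examples across the true class boundary, which is one reason augmentation alone does not solve the small-data problem.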

Biography: Hui Wang is Professor of Computer Science at Ulster University. His research interests are machine learning, knowledge representation and reasoning, combinatorial data analytics, and their applications in image, video, spectra and text analyses. He has over 270 publications in these areas. He is principal investigator of a number of regional, national and international projects in the areas of image/video analytics (EPSRC funded MVSE 2021-2024, Horizon 2020 funded DESIREE and ASGARD, FP7 funded SAVASA, Royal Society funded VIAD), spectral data analytics (EPSRC funded VIPIRS 2020-2022), text analytics (INI funded DEEPFLOW, Royal Society funded BEACON), and intelligent content management (FP5 funded ICONS); and is co-investigator of several other EU funded projects.

He is Head of the AI Research Centre at Ulster University and was Director of Research (Computer Science and Informatics, 2018-2020) in the School of Computing, Ulster University. He is an associate editor of IEEE Transactions on Cybernetics and of The Computer Journal. He was Chair of the IEEE SMCS Northern Ireland Chapter (2009-2018) and a member of the IEEE SMCS Board of Governors (2010-2013).

Fri Dec 4, 14:00 (Host: Huiyu Zhou)

Andrew M. Wallace (Heriot-Watt University)

Title: PervAsive low-TeraHz and optical sensing for Car Autonomy and Driver assistance (PATHCAD)

Abstract:

What if it rains?

While media coverage of the autonomous car has shifted from naïve acceptance of a disruptive technology to the damning realisation that it may actually have the occasional accident due to adverse weather, adversarial attacks, and hardware/software malfunction, the vast majority of trials have taken place in good weather with considerable pre-mapping and detailed route planning.

The PATHCAD project explored alternatives to vehicle sensing for scene mapping and actor recognition in adverse conditions, based on video, LiDAR and radar technologies. I shall present work to develop a higher resolution radar system, to recognise and track actors and predict their behaviour in radar images, and discuss how active LiDAR imaging can penetrate bad weather by use of full waveform processing, potentially aided by concurrent radar sensing.

As time allows, I will present some additional material on how fast, eye-safe, full-wave automotive LiDAR systems can be built from improvements in solid state semiconductor arrays, allied to random sampling, compressed sensing and approximate computing. Much of the work was supported by Jaguar Land Rover and EPSRC (EP/N012402/1) as part of the TASCC programme, and was carried out by the Universities of Birmingham, Edinburgh and Heriot-Watt.

Fri Dec 11, 14:00 (Host: José Miguel Rojas / Mohammad Mousavi)

Michael Fisher (University of Manchester)

Title: Responsible Autonomous Systems?

Abstract: Autonomous systems can make decisions and take actions without direct human intervention. These systems are increasingly advocated for use across robotics, "driverless" cars, unmanned air systems, etc. Yet the title of this seminar is deliberately vague. Do we mean to ask whether autonomous systems are responsible for certain actions or decisions, or whether the autonomous systems are developed and used in a responsible way? We actually aim to address both: if we want truly autonomous systems then some actions or decisions must be made by the system; if we want to develop and deploy autonomous systems responsibly then we must surely know what they will do and why.

In this talk I will describe how the answer to both questions can be provided through a combination of software architectures and formal verification. Having an appropriate autonomous systems architecture allows us to identify nodes/components that make the key decisions. Formally verifying these nodes/components then allows us to examine, and provide guarantees for, their decision-making processes. Once we move to deploying autonomous systems in unknown environments then the decisions the systems might have to make often cannot be predicted beforehand. And here it becomes crucial to verify not what decisions the system will make but why it makes the decisions it does.

Biography: Michael Fisher is Professor of Computer Science at the University of Manchester, holding a Royal Academy of Engineering Chair in Emerging Technologies. His research centres on the safety, reliability, ethics, and verification of autonomous systems. He is involved in BSI and IEEE standards for autonomous systems and robotics, and is Co-Chair of the IEEE Technical Committee on the Verification of Autonomous Systems (at least until this year). He is involved in public engagement activities and has spoken on autonomous systems within key international organisations such as the International Committee of the Red Cross and the Global Forum on AI for Humanity.

Fri Dec 18, 14:00 (Host: Eugene Zhang)

Yuyin Zhou (Johns Hopkins University/Stanford University, US)

Title: Medical Machine Intelligence: Data-Efficiency and Knowledge-Awareness

Abstract: While deep learning has largely advanced the field of computer-aided diagnosis by offering an avenue to deliver automated medical image analysis, several challenges remain for medical machine intelligence, such as unsatisfactory performance on challenging small targets, insufficient training data, high annotation cost, and the lack of domain-specific knowledge. These challenges have limited the deployment of such models in safety-critical medical scenarios. In this talk, I will first briefly summarize the latest progress of deep learning in medical image analysis, with a focus on the aforementioned challenges. I will then present different data-efficient and knowledge-aware deep learning approaches that facilitate model generalization to different medical tasks without requiring intensive manual labeling, by incorporating domain-specific knowledge into the learning process. Finally, I will discuss how to make neural networks approach real clinical expertise in general and how this can benefit further diagnostics and procedures.

Biosketch: Yuyin Zhou is an incoming postdoctoral scholar at Stanford University. She has just completed her Ph.D. in Computer Science at Johns Hopkins University, under the supervision of Bloomberg Distinguished Professor Alan Yuille. Before that, Yuyin received her M.S. degree from the University of California, Los Angeles (UCLA) in 2016, and a B.S. degree from Huazhong University of Science and Technology in 2014. Yuyin's research interests span the fields of medical image computing, computer vision, and machine learning, especially their intersection. Her project with Johns Hopkins Medicine on organ segmentation has been featured on National Public Radio. Yuyin has published over 20 peer-reviewed papers in top-tier venues such as CVPR, ICCV, AAAI, MICCAI, TMI, and MedIA, and was the runner-up in the MICCAI 2018 Computational Precision Medicine: Pancreatic Cancer Survival Prediction Challenge. She has also worked at Google Cloud AI and Google Brain.

Previous Years' Seminars

Contact Us

Admissions Enquiries:
BSc: +44 (0) 116 252 5280
MSc: +44 (0) 116 252 2265
E (BSc): seadmissions@le.ac.uk
E (MSc): pgadmissions@le.ac.uk

Departmental Enquiries:
T: +44 (0) 116 252 2129/3887
F: +44 (0) 116 252 3604
E: csadmin@mcs.le.ac.uk

Dept of Informatics
University of Leicester
Leicester, LE1 7RH
United Kingdom

Accessibility

The University of Leicester is committed to equal access to its facilities. DisabledGo has a detailed accessibility guide for the Informatics Building.