Informatics Seminars 2018/19

2018/19 Semester Two

  • Please refer to the campus map for the exact location of lecture rooms.
  • Please note the change of venue: the first four seminars, in January (18th, 25th) and February (1st, 8th), will take place in BEN LT1 (Bennett Lecture Theatre 1); the remaining seminars will be in BENL LT (Bennett Link Lecture Theatre).

Friday June 21, 14:00 (Host: Mohammad Mousavi)

Augusto Sampaio (Federal University of Pernambuco, Brazil)

Friday May 31, 14:00 (Host: Jose Rojas Siles)

Antonio Filieri (Imperial College London)

Friday May 24, 14:00 (Host: Effie Law)

Max Wilson (University of Nottingham)

Friday May 17, 14:00 (Host: Thomas Erlebach)

Robert Mercas (Loughborough University)

Friday March 22, 14:00 (Host: Effie Law)

Neil Walkinshaw (University of Sheffield)

Title: Are 20% of Files Responsible for 80% of Defects?
Over the past two decades a mixture of anecdote from the software development industry and empirical studies from academia have suggested that the 80:20 rule (otherwise known as the Pareto Principle) applies to the relationship between source code files and the number of defects in the system: a small minority of files (roughly 20%) are responsible for a majority of defects (roughly 80%). This has significant implications when it comes to committing developer time and effort, especially for V&V activities such as testing and inspection.
One problem is that the foundations of this 80:20 claim are quite weak. Aside from the industrial anecdote, the underlying published studies from academia are only based upon a relatively small number of replicated studies of 2-3 systems, from specific domains (usually telecoms).
In this talk I will report on an ongoing collaboration with Leandro Minku that revolves around a larger automated analysis of 100 open-source GitHub repositories. As a result, we have found that the occurrence of the rule is actually quite brittle: it only applies if one adopts a highly simplistic definition of what is meant by "defect", and even if one accepts this definition there is a high degree of variance between projects.
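The rule itself is easy to operationalise: rank files by defect count and ask what share of all defects the top 20% of files carry. A minimal sketch (with made-up per-file defect counts, not data from the study):

```python
def pareto_share(defect_counts, file_fraction=0.2):
    """Fraction of all defects carried by the top `file_fraction` of files,
    ranked by defect count (a common operationalisation of the 80:20 rule)."""
    counts = sorted(defect_counts, reverse=True)
    top_n = max(1, round(len(counts) * file_fraction))
    total = sum(counts)
    return sum(counts[:top_n]) / total if total else 0.0

# Hypothetical per-file defect counts for two projects:
skewed = [40, 25, 15, 5, 3, 3, 2, 2, 1, 1, 1, 1, 1, 0, 0]  # defects concentrate
uniform = [3] * 15                                          # defects spread evenly

print(pareto_share(skewed))   # 0.8 -- the top 20% of files hold 80% of defects
print(pareto_share(uniform))  # 0.2 -- no Pareto effect at all
```

Whether a project looks more like `skewed` or `uniform` is exactly the kind of per-project variance the study reports.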

Friday March 15, 14:00 (Host: Mohammad Mousavi)

Robert Hierons (University of Sheffield)

Title: Conditional Entropy and Failed Error Propagation in Software Testing
In failed error propagation (FEP), an erroneous program state (that occurs in testing) does not lead to an observed failure. FEP is known to hamper software testing, yet it remains poorly understood. If we could, for example, assess where FEP is likely to occur, then we might be able to produce more effective notions of test coverage that would incorporate an element of semantics. This talk will describe an information-theoretic formulation of FEP that is based on conditional entropy. This formulation considers the situation in which we are interested in the potential for an incorrect program state at statement s to fail to propagate to incorrect output. The talk will explore the underlying idea, metrics that have been defined to try to estimate the probability of FEP, and experimental evaluation.
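As an illustration of the general idea (not the specific metrics of the talk), one can estimate from execution traces the conditional entropy H(state | output): when many distinct internal states collapse to the same output, this quantity is high and an erroneous state at a statement can easily be masked. A toy sketch with hypothetical traces:

```python
from collections import Counter
from math import log2

def conditional_entropy(pairs):
    """H(A|B) over observed (a, b) pairs: the remaining uncertainty about a
    (the internal state) once b (the program output) is known. A high value
    means information about the state is lost on the way to the output, so
    a corrupted state is prone to failed error propagation."""
    n = len(pairs)
    joint = Counter(pairs)
    marg_b = Counter(b for _, b in pairs)
    return sum((c / n) * log2(marg_b[b] / c) for (a, b), c in joint.items())

# Hypothetical traces: (state after statement s, final program output).
masking = [(0, 'ok'), (1, 'ok'), (2, 'ok'), (3, 'ok')]   # state is squashed
propagating = [(0, 'a'), (1, 'b'), (2, 'c'), (3, 'd')]   # state reaches output

print(conditional_entropy(masking))      # 2.0 bits: high FEP potential
print(conditional_entropy(propagating))  # 0.0 bits: errors would be visible
```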

Friday March 8, 14:00 (Host: Mohammad Mousavi)

Alexandra Silva (University College London)

Title: Scalable Verification of Probabilistic Networks


We will present the foundations of McNetKAT, a scalable tool for verifying probabilistic network programs. McNetKAT is based on a new semantics for the history-free fragment of Probabilistic NetKAT in terms of finite-state, absorbing Markov chains. This view allows the semantics of all programs to be computed exactly, enabling construction of an automatic verification tool. Domain-specific optimizations and a parallelizing backend enable McNetKAT to analyze networks with thousands of nodes, automatically reasoning about general properties such as probabilistic program equivalence and refinement, as well as networking properties such as resilience to failures. We evaluate McNetKAT's scalability using real-world topologies, compare its performance against state-of-the-art tools, and develop an extended case study on a recently proposed data center network design.
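To give a flavour of the underlying semantics (a generic textbook computation, not McNetKAT's actual pipeline): for a finite absorbing Markov chain with transient-to-transient matrix Q and transient-to-absorbing matrix R, the absorption probabilities are B = (I - Q)^{-1} R, and over the rationals they can be computed exactly. A toy two-switch "network" in stdlib Python:

```python
from fractions import Fraction as F

def mat_inv(m):
    """Invert a square matrix of Fractions by Gauss-Jordan elimination."""
    n = len(m)
    a = [row[:] + [F(int(i == j)) for j in range(n)] for i, row in enumerate(m)]
    for col in range(n):
        piv = next(r for r in range(col, n) if a[r][col] != 0)
        a[col], a[piv] = a[piv], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(n):
            if r != col and a[r][col] != 0:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[n:] for row in a]

def mat_mul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

# Transient states: s0, s1. Absorbing outcomes: delivered, dropped.
Q = [[F(0), F(1, 2)],     # from s0: forward the packet to s1 w.p. 1/2
     [F(0), F(0)]]        # from s1: never back to a transient state
R = [[F(0), F(1, 2)],     # from s0: dropped w.p. 1/2
     [F(3, 4), F(1, 4)]]  # from s1: delivered w.p. 3/4, dropped w.p. 1/4

I = [[F(int(i == j)) for j in range(2)] for i in range(2)]
N = mat_inv([[I[i][j] - Q[i][j] for j in range(2)] for i in range(2)])
B = mat_mul(N, R)  # B[i][k] = exact probability of absorption in outcome k from state i

print(B[0])  # [Fraction(3, 8), Fraction(5, 8)]: delivery probability from s0 is 3/8
```

Exact rational arithmetic is what makes "the semantics of all programs can be computed exactly" more than a figure of speech here.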
Short bio:
Alexandra Silva is a theoretical computer scientist whose main research focuses on semantics of programming languages and modular development of algorithms for computational models. Much of her work uses the unifying perspective offered by coalgebra, a mathematical framework established over the last few decades.
Alexandra is currently a Royal Society Wolfson fellow and Professor of Algebra, Semantics, and Computation at University College London. Previously, she was an assistant professor in Nijmegen and a post-doc at Cornell University, with Prof. Dexter Kozen, and a PhD student at the Dutch national research center for Mathematics and Computer Science (CWI), under the supervision of Prof. Jan Rutten and Dr. Marcello Bonsangue. 
She was the recipient of the Needham Award 2018, the Presburger Award 2017, the Leverhulme Prize 2016, and an ERC Starting Grant in 2015.


Friday February 22, 14:00 (Host: Jose Rojas Siles)

Nikos Aletras (University of Sheffield)

Title: Inferring social media user attributes using language and network information

Inferring attributes of social media users is an important problem in computational social science. Automated inference of user characteristics has applications in personalised recommender systems, targeted computational advertising and online political campaigning. In this talk, I’ll present how we can combine language and network information to predict users’ (1) socioeconomic characteristics (e.g., income and occupational class); and (2) voting behaviour.

Nikos Aletras is a Lecturer in Natural Language Processing in the Computer Science Department at the University of Sheffield, co-affiliated with the Machine Learning (ML) group. Previously, he was a research scientist at Amazon (Core ML and Alexa) and a research associate at UCL, Department of Computer Science, Media Futures Group. He completed a PhD in Natural Language Processing at the University of Sheffield. His research interests are in NLP, Machine Learning and Data Science. He develops text analysis methods to solve problems in other scientific areas such as (computational) social and legal science.

Friday February 15, 14:00 (Host: )


Friday February 8, 14:00 (Host: Effie Law)

Yvonne Rogers (University College London)

Title: Are Smart Interfaces Changing Who We Are?

Abstract: There has been a proliferation of technological developments in the last few years that are beginning to improve how we perceive, attend to, notice, analyse and remember events, people, data and other information. These include machine learning, computer vision, advanced user interfaces (e.g. augmented reality) and sensor technologies. A goal of being augmented with ever more computational capabilities is to enable us to see more and, in doing so, make more intelligent decisions. But to what extent are the new interfaces enabling us to become super-human? What is gained and lost through our reliance on ever pervasive computational technology? In my talk, I will cover the latest technological advances, such as conversational interfaces, robots, and augmented reality. I will then draw upon relevant recent findings in HCI that demonstrate how our human capabilities are being extended but also struggling to adapt to the new demands on our attention.

Bio: Yvonne Rogers is a Professor of Interaction Design, the director of UCLIC and a deputy head of the Computer Science department at UCL. Her research interests are in the areas of ubiquitous computing, interaction design and human-computer interaction. A central theme of her work is how to design interactive technologies that can enhance life by augmenting and extending everyday, learning and work activities. This involves informing, building and evaluating novel user experiences through designing, implementing and deploying a diversity of technologies. A current focus of her research is on human-centred data and people in the Internet of Things in urban settings. She is also interested in operationalising and investigating what human-centred AI means in practice. Central to her work is a critical stance towards how visions, theories and frameworks shape the fields of HCI, cognitive science and Ubicomp. She has been instrumental in promulgating new theories (e.g., external cognition), alternative methodologies (e.g., in the wild studies) and far-reaching research agendas (e.g., "Being Human: HCI in 2020"). She has also published two monographs "HCI Theory: Classical, Modern and Contemporary." and "Research in the Wild." with Paul Marshall. She is a fellow of the ACM, BCS and the ACM CHI Academy.

Friday February 1, 14:00 (Host: Roy Crole ) - CANCELLED

Samson Abramsky (Oxford University)

Title: Harnessing the atom: quantum advantage, contextuality and paradoxes

Abstract: Quantum physics is notorious for confounding our intuitions, and predicts phenomena such as "spooky action at a distance" which cannot be accounted for in terms of classical physics, and yet which have been verified experimentally.

We are now seeing the rise of quantum technologies, which aim to harness these phenomena to perform information processing tasks which would be impossible classically.

In this talk, we will explain some of these ideas without assuming any knowledge of quantum physics! We will see how the phenomenon of quantum contextuality leads us to the very borders of paradox, while enabling Alice and Bob to perform remarkable feats of cooperation which would not be possible without the use of quantum resources.

Friday January 25, 14:00 (Host: Effie Law)

Joshua Vande Hey (University of Leicester)

Title: Air pollution, human health, and data: how can informatics, data science and statistics help us answer questions about environmental impacts on health and healthcare?

Abstract: In this talk, Dr. Vande Hey will present a brief overview of how air pollution affects our health, and introduce active interdisciplinary research questions he is currently pursuing. Current and future challenges and opportunities related to informatics and data science will be emphasized, and the need for interdisciplinary and cross-sector partnerships to address these challenges will be discussed.

Friday January 18, 14:00 (Host: Thomas Erlebach)

Ligang He (University of Warwick)

Title: Developing Graph-based Methods for Optimizing Job Scheduling on Multicore Computers
Abstract: It is common nowadays that multiple cores reside on the same chip and share the on-chip cache. Resource sharing may cause performance degradation of the co-running jobs. Job co-scheduling is a technique that can effectively alleviate the contention. Many co-schedulers have been developed in the literature, but most of them do not aim to find the optimal co-scheduling solution. Being able to determine the optimal solution is critical for evaluating co-scheduling systems. Moreover, most co-schedulers only consider serial jobs. However, there often exist both parallel and serial jobs in many situations. This talk presents our work to tackle these issues. In this work, a graph-based method is developed to find the optimal co-scheduling solution for serial jobs, and then the method is extended to incorporate parallel jobs. A number of optimization measures are also developed to accelerate the solving process. Moreover, a flexible approximation technique is proposed to strike the balance between the solving speed and the solution quality.
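For flavour, the serial-job case on dual-core chips can be phrased as minimum-weight perfect matching on a graph whose edge weights are pairwise co-run degradations. A brute-force sketch (hypothetical degradation numbers, and exhaustive search rather than the optimised graph method of the talk):

```python
def pairings(jobs):
    """All ways to partition an even-sized list of jobs into unordered pairs."""
    if not jobs:
        yield []
        return
    first, rest = jobs[0], jobs[1:]
    for i, partner in enumerate(rest):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + tail

def optimal_coschedule(degradation):
    """Pick the pairing minimising total co-run degradation.
    degradation[(a, b)] = slowdown when jobs a and b share a chip's cache."""
    jobs = sorted({j for pair in degradation for j in pair})
    cost = lambda p: sum(degradation[tuple(sorted(pair))] for pair in p)
    return min(pairings(jobs), key=cost)

# Hypothetical pairwise degradations for four serial jobs on two dual-core chips:
d = {('a', 'b'): 0.9, ('a', 'c'): 0.2, ('a', 'd'): 0.5,
     ('b', 'c'): 0.6, ('b', 'd'): 0.1, ('c', 'd'): 0.7}
best = optimal_coschedule(d)
print(best)  # [('a', 'c'), ('b', 'd')] -- total degradation 0.3
```

The number of pairings grows as (n-1)!! — hence the talk's need for optimisation measures and approximation to trade solving speed against solution quality.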

2018/19 Semester One

  • Please note the change of venue: the first four seminars in October (5th, 12th, 19th and 26th) will take place in BEN LT2 (Bennett Lecture Theatre 2); the remaining seminars will be in BENL LT (Bennett Link Lecture Theatre).

Friday September 28, 14:00 (Host: Mohammad Mousavi)

Location: CW2 BCL (Charles Wilson Second Floor LT Belvoir City Lounge)

Alessandro Abate (Oxford University)


Title: Formal verification of complex systems: model-based and data-driven methods


Two known shortcomings of standard techniques in formal verification are the limited capability to provide system-level assertions, and the limited scalability to large-scale, complex models, such as those needed in Cyber-Physical Systems (CPS) applications. Using data, which nowadays is becoming ever more accessible, has the potential to mitigate such limitations. However, this notoriously leads to a lack of the formal proofs that are needed in safety-critical systems.

This talk covers research that addresses these shortcomings by bringing model-based and data-driven methods together, helping to push the envelope of existing algorithms and tools in formal verification.

In the first part of the talk, I will discuss a new, formal, measurement-driven and model-based automated technique, for the quantitative verification of systems with partly unknown dynamics. I will formulate this setup as a data-driven Bayesian inference problem, formally embedded within a quantitative, model-based verification procedure. I argue that the approach can be applied to complex physical systems (e.g., with spatially continuous variables), driven by external inputs and accessed under noisy measurements.

In the later part of the talk, I will concentrate on systems represented by models that are probabilistic with heterogeneous dynamics (continuous/discrete, i.e. hybrid, as well as nonlinear). Such stochastic hybrid systems (SHS) are a natural mathematical framework for CPS. With focus on model-based verification procedures, I will provide algorithms for quantitative model checking of temporal specifications on SHS with formal guarantees. This is attained via the development of formal abstraction techniques based on quantitative approximations.
Theory is complemented by algorithms, all packaged in a software tool that is available to users, and applied in the domain of Smart Energy.
Alessandro Abate is an Associate Professor in the Department of Computer Science at the University of Oxford. He received a Laurea degree in Electrical Engineering (summa cum laude) in 2002 from the University of Padova. As an undergraduate, he also studied at UC Berkeley and RWTH Aachen. Alessandro earned an MS in 2004 and a PhD in 2007, both in Electrical Engineering and Computer Sciences, at UC Berkeley, working on Systems and Control Theory with S. Sastry. Following his PhD, Alessandro was a PostDoctoral Researcher at Stanford University, in the Department of Aeronautics and Astronautics, working with C. Tomlin on Systems Biology in affiliation with the Stanford School of Medicine. From 2009 to 2013, he was an Assistant Professor at the Delft Center for Systems and Control, TU Delft - Delft University of Technology, working with his research group on Verification and Synthesis over complex systems and on Smart Energy applications.


Fri October 5, 14:00 in BEN LT2 (Host: Huiyu Zhou)

Jian Liu (Centre for Systems Neuroscience, University of Leicester)

Title: Neuroscience-inspired artificial intelligence: a case study of the retina

The retina is the first stage of visual processing in the brain. Due to the relatively simple structure of the retinal neuronal network, it is suitable for developing novel algorithms for both neuroscience and computer vision. Here, I will discuss some data-driven approaches recently developed in our group, such as non-negative matrix factorization and deep learning nets, for characterizing the underlying mechanisms of retinal visual processing. I will further demonstrate how these computational principles of neuroscience can be translated to neuromorphic chips for the next generation of artificial retina, to enhance the performance of neuroprostheses and neurorobotics.
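As a hint of what a data-driven decomposition looks like (an illustrative pure-Python toy, not the group's actual models or data): non-negative matrix factorization splits a non-negative response matrix into a small number of additive parts, here via Lee and Seung's multiplicative updates:

```python
import random

def nmf(V, k, iters=500, seed=0):
    """Factor a non-negative matrix V (list of lists) as W @ H with W, H >= 0,
    using Lee & Seung's multiplicative updates. Pure Python, for tiny examples."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]
    mul = lambda A, B: [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
                        for row in A]
    T = lambda A: [list(c) for c in zip(*A)]
    for _ in range(iters):
        WH = mul(W, H)
        num, den = mul(T(W), V), mul(T(W), WH)   # H <- H * (W^T V) / (W^T W H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + 1e-12) for j in range(m)]
             for i in range(k)]
        WH = mul(W, H)
        num, den = mul(V, T(H)), mul(WH, T(H))   # W <- W * (V H^T) / (W H H^T)
        W = [[W[i][j] * num[i][j] / (den[i][j] + 1e-12) for j in range(k)]
             for i in range(n)]
    return W, H

# Hypothetical "neural responses": each row is a cell, built from 2 additive parts,
# so V has an exact rank-2 non-negative factorisation and the error should be small.
V = [[2, 0, 1], [0, 3, 1], [2, 3, 2], [4, 0, 2]]
W, H = nmf(V, k=2)
WH = [[sum(a * b for a, b in zip(r, c)) for c in zip(*H)] for r in W]
err = sum((V[i][j] - WH[i][j]) ** 2 for i in range(4) for j in range(3))
```

The non-negativity constraint is what makes the recovered parts interpretable as additive response components rather than arbitrary signed mixtures.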

Bio: Dr. Jian Liu received his Ph.D. in mathematics from UCLA in 2009. He is currently a Lecturer in Computational Neuroscience at the Centre for Systems Neuroscience, University of Leicester. His area of research includes computational neuroscience and brain-inspired computation for artificial intelligence. His work has been published in Nature Comms., eLife, J. of Neurosci., PLOS Comput. Biol., Neural Comput., IEEE Trans. Cybernetics, NNSL, etc.

Fri Oct 12, 14:00 in BEN LT2 (Host: Effie Law/ William Darler)

Steve Benford (University of Nottingham)

Title: Failing with Style: Why and How we Should Encourage Humans to Fail with Highly Capable Systems

Abstract: Failure is a common artefact of challenging experiences, a fact of life for interactive systems but also a resource for aesthetic and improvisational performance. I will discuss an example of how three professional pianists performed an interactive piano composition that included playing hidden computational codes within the music so as to control their path through the piece and trigger system actions. I will reveal how their apparent failures to play the codes (at least when seen from the system’s point of view) occurred for diverse reasons including mistakes in their playing, limitations of the system, but also deliberate failures as a way of controlling the system, and how these failures provoked aesthetic and improvised responses from the performers. I will argue that creative interfaces should be designed to enable aesthetic failures and introduce a taxonomy that compares human approaches to failure with approaches to capable systems, revealing new creative design strategies of gaming, taming, riding and assimilating with the system. I’ll try and persuade you that these strategies might also be applied to less obviously creative interfaces, from conversational interfaces to autonomous vehicles.

Short Bio.: Steve is Professor of Collaborative Computing in the Mixed Reality Laboratory at the University of Nottingham where he directs the Horizon ‘My Life in Data’ Centre for Doctoral Training. He previously held an EPSRC Dream Fellowship, has been a Visiting Professor at the BBC and was elected to the CHI Academy in 2012.

Fri Oct 19, 14:00 in BEN LT2 (Host: Emilio Tuosto)

Hugo López (DCR Solutions / IT University of Copenhagen)

Title: Digital processes: correctness, compliance and adoption considerations

Process Digitalisation describes the transformation of inter-organizational processes from paper-based descriptions to digital-based ones, with the promise that, once in their digital form, they will be subject to analysis and automation. These processes involve knowledge workers, cybernetical components (e.g. microservices), and physical entities (sensors and actuators). Digital systems are much more than the sum of their parts, and coordinating the different entities involved poses interesting challenges on correctness and flexibility. With respect to correctness, the interactions of distributed components need to abide by explicit flows (protocols), as well as respecting general structural properties (e.g. deadlock-freedom) and compliance with existing regulations. Regarding flexibility, digital processes need to cater to the ways knowledge workers organise their current work practices, for instance by letting them adapt, at their discretion, the activities executed according to the information available.

In this talk, I will provide an overview of different programming-language techniques used in process digitalisation. First, I will cover approaches to guaranteeing structural properties in distributed systems with high levels of parallelism and under physical considerations, such as parallel programming and sensor networks. In the second part, I will concentrate on the adoption of formal modelling techniques by non-formal users, presenting a tool that facilitates the creation of process models from natural-language specifications, such as process descriptions and laws.

In the talk I will cover material from the following publications:

  • Hugo A. López, Søren Debois, Thomas T. Hildebrandt, Morten Marquard: The Process Highlighter: From Texts to Declarative Processes and Back. BPM (Dissertation/Demos/Industry) 2018: 66-70
  • Hugo A. López, Flemming Nielson, Hanne Riis Nielson: Enforcing Availability in Failure-Aware Communicating Systems. FORTE 2016: 195-211
  • Hugo A. López, Eduardo R. B. Marques, Francisco Martins, Nicholas Ng, César Santos, Vasco Thudichum Vasconcelos, Nobuko Yoshida: Protocol-based verification of message-passing parallel programs. OOPSLA 2015: 280-298

Fri Oct 26, 14:00 in BEN LT2 (Host: Tom Ridge)

Nada Amin (University of Cambridge)

Title: Generating C from Scala

Using the Lightweight Modular Staging (LMS) framework, we review a principled approach, driven by types, to generative programming: writing expressive high-level programs that generate fast low-level code at runtime. We show how to extend the approach to verification by generating annotations that enable the low-level code to be independently validated for safety or functional correctness. We illustrate the LMS motto "abstraction without regret" with a small and fast SQL query engine, and a high-level, fast and safe HTTP parser.
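LMS itself is a Scala framework, but the staging idea can be sketched in a few lines of Python (an analogy, not LMS's API): operations on a "staged" value build code instead of computing, so a generic high-level program specialises into straight-line low-level C at generation time:

```python
class Exp:
    """A tiny staged-expression IR: building an Exp performs no computation,
    it records C code to be compiled later (the essence of LMS-style staging)."""
    def __init__(self, code): self.code = code
    def __add__(self, other): return Exp(f"({self.code} + {other.code})")
    def __mul__(self, other): return Exp(f"({self.code} * {other.code})")

def lit(n):
    return Exp(str(n))

def gen_c(name, body_builder):
    """Run the high-level Python 'generator' once to produce low-level C."""
    body = body_builder(Exp("x"))
    return f"int {name}(int x) {{ return {body.code}; }}"

# A generic high-level program: Horner evaluation of 3x^2 + 2x + 1...
poly = lambda x: (lit(3) * x + lit(2)) * x + lit(1)
# ...staged away into straight-line C, with no loop or coefficient array left:
print(gen_c("poly", poly))
# int poly(int x) { return ((((3 * x) + 2) * x) + 1); }
```

In LMS the same effect is achieved in a typed way: the `Rep[T]` type distinguishes staged values from present-stage ones, which is what "driven by types" refers to in the abstract.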

Fri Nov 2, 14:00 in BENL LT (Host: Nervo Verdezoto)

Martin Porcheron (University of Nottingham)

Title: The 'Conversational Interface': Voice UIs in vivo

Abstract: The furore surrounding the 'conversational interface' (also known as voice interface, voice UI, conversational agent, virtual agent, and so on) has truly taken hold within HCI, driven in large part by the rapid availability of commercial devices that promise users realistic experiences where they can talk to the device 'like it was a friend'. My colleagues and I have turned to examining the embedded sociality with which these devices become intertwined, by recording families' use of the Echo over one-month periods and taking a conversation-analytic approach to reveal how their use unfolds in multi-party settings. Through our analysis, we unpack the constituent components of interaction with the Echo—the work of making an address to the device and dealing with a response—to raise critical questions for how we design interactive voice-based interfaces for users. In this talk, I'll present some of the background to our study, our findings, and also pose some questions for progressing design for voice-based interactions.

Short bio:
Martin Porcheron is a researcher in the Mixed Reality Lab at the University of Nottingham. His prior work encompassed how we use everyday technologies—from smartphones to voice-based assistants—while socialising with others in places from the pub to the home. He is currently part of the Autonomous Internet of Things (A-IoT) project, which examines how to design for a world where the embedded technologies around us co-operate as an interactive, proactive system to meet our everyday needs, ranging from smart thermostats to automated grocery shopping.

Fri Nov 16, 14:00 in BENL LT (Host: Mohammad Mousavi)

Ana Cavalcanti (University of York)

Title: Software Engineering for Robotics


There are many challenging aspects involved in ensuring the safety of mobile and autonomous robots. One aspect that has been neglected is software engineering. The state of practice uses costly iterations of trial and error, often with hardware and environment in the loop, after an ad hoc use of simulations. It is clear that the modern outlook of robotic applications is not matched by the outdated practices of design and verification when it comes to controller software.

We present two new domain-specific languages for robotics: RoboChart and RoboSim. They support a model-based approach for early formal validation and sound generation of simulations. In robotics, simulation is the favoured technique for analysis. In our work, simulation is complemented by analysis based on model checking, using CSP, tock-CSP, and the refinement model checker FDR, and theorem proving, using Isabelle/UTP. Our ultimate goal is to support reasoning and code generation using a variety of techniques, but relying on theorem proving to deal with data-rich and large systems, especially with models for swarm applications.
RoboChart is a UML-like notation that includes timed and probabilistic primitives for modelling controllers. We link it to RoboSim, another graphical notation defined as a gateway between RoboChart models and customised simulations developed for traditional robotics simulators. RoboChart enforces design patterns appropriate for robotic applications and now also includes notation to model collections of robots, as present in swarms.
Our approach is predicated on a few principles. First, we adopt notations akin to those in current use by practitioners, namely notations based on state machines, to ensure usability. Second, we enforce specific architectural patterns relevant for the area, and define precise languages that can be used for automatic generation of mathematical models. Third, our goal is to hide the formalisms from practitioners entirely: via specialisation and automation, we seek both scalability and usability. Finally, we investigate the integrated use of a variety of techniques to tackle the different needs of verification and development.

Fri Nov 23, 14:00 in BENL LT (Host: Emilio Tuosto, Paula Severi)

Ornela Dardha (University of Glasgow)

Title: A New Linear Logic for Deadlock-Free Session Typed Processes

Abstract: The π-calculus, viewed as a core concurrent programming language, has been used as the target of much research on type systems for concurrency. In this work we propose a new type system for deadlock-free session-typed π-calculus processes, by integrating two separate lines of work. The first is the propositions-as-types approach by Caires and Pfenning, which provides a linear logic foundation for session types and guarantees deadlock-freedom by forbidding cyclic process connections. The second is Kobayashi's approach, in which types are annotated with priorities so that the type system can check whether or not processes contain genuine cyclic dependencies between communication operations. We combine these two techniques for the first time, and define a new and more expressive variant of classical linear logic with a proof assignment that gives a session type system with Kobayashi-style priorities. This can be seen in three ways: (i) as a new linear logic in which cyclic structures can be derived and a Cycle-elimination theorem generalises Cut-elimination; (ii) as a logically-based session type system, which is more expressive than Caires and Pfenning's; (iii) as a logical foundation for Kobayashi's system, bringing it into the sphere of the propositions-as-types paradigm.

Fri Dec 7, 14:00 (Host: Mohammad Mousavi)

David Clark (University College London)

Title: OASIs — a tool for improving assertions used as oracles in software testing

Abstract: Assertions acting as oracles in code may be subject to false negatives and false positives (failures of their soundness and completeness with respect to ground truth). OASIs is an extension of EvoSuite (a tool for automated Java unit testing) and uses a combination of program transformation, mutation testing and a human in the loop to support oracle improvement. Initial oracles to improve can be gleaned from many sources, for example the implicit oracle and assertions generated by Daikon. This talk explains how OASIs works, sketches the underlying theory of oracle improvement, and reports on some results from a human study in oracle assessment and improvement. This is joint work with Gunel Jahangirova and Paolo Tonella.
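The core loop of oracle assessment can be illustrated with a toy example (illustrative only, not OASIs itself or its Java/EvoSuite machinery): an incomplete assertion lets a seeded mutant survive, exposing the oracle's gap, and strengthening the assertion kills the mutant:

```python
def max_of(xs):
    """Reference implementation under test."""
    m = xs[0]
    for x in xs[1:]:
        if x > m:
            m = x
    return m

def max_of_mutant(xs):
    """A seeded mutant: the comparison is flipped, so it computes the minimum."""
    m = xs[0]
    for x in xs[1:]:
        if x < m:
            m = x
    return m

# Two candidate assertion oracles over (input, result):
weak_oracle = lambda xs, r: r in xs                                # incomplete
strong_oracle = lambda xs, r: r in xs and all(r >= x for x in xs)  # improved

def killed(oracle, program, tests):
    """A mutant is 'killed' if the oracle rejects its result on some test."""
    return any(not oracle(t, program(t)) for t in tests)

tests = [[3, 1, 2], [7], [-1, -4, 0]]
print(killed(weak_oracle, max_of_mutant, tests))    # False: mutant survives,
                                                    # a false negative of the oracle
print(killed(strong_oracle, max_of_mutant, tests))  # True: improved oracle kills it
```

Surviving mutants are exactly the evidence OASIs shows to the human in the loop to drive the improvement step.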

Short bio: David Clark is a Reader in Program Analysis in the Department of Computer Science at UCL. He did a mid life conversion from campaign politics to computer science. Then followed a PhD at Imperial College and academic posts at Imperial, KCL and UCL. He has applied program analysis to functional languages, formal specification languages, algebraic process languages, object-oriented languages and simple imperative languages. Applications of interest in his research have included functional strictness, safety critical properties, slicing, secure information flow and software testing.



Previous Years' Seminars


Contact Us

Admissions Enquiries:
BSc: +44 (0) 116 252 5280
MSc: +44 (0) 116 252 2265

Departmental Enquiries:
T: +44 (0) 116 252 2129/3887
F: +44 (0) 116 252 3604

Dept of Informatics
University of Leicester
Leicester, LE1 7RH
United Kingdom

