18th Tuesday: Dr Ben Horsburgh
Principal ML Engineer
QuantumBlack
Ben is a Principal ML Engineer at QuantumBlack. He is an experienced Machine Learning Engineer and Data Scientist with a demonstrated history of solving problems across the retail, automotive, travel, publishing, fashion, oil & gas, tourism, and multimedia industries. He obtained his Ph.D. from The Robert Gordon University in 2013 for his thesis, 'Integrating Content and Semantic Representations for Music Recommendation'. His work has been published in top-tier conferences and journals, including IJCAI, AAAI, and ICCBR.
Title: Scaling AI from the lab into industry
Abstract: AI research has the potential to drive fundamental changes and lasting impact across every type of industry. As areas like CBR continue to solve novel problems in innovative ways, there is a need to take these solutions out of the lab and into the hands of those who can use them. From new approaches to data and algorithms through to complete systems, solving the problem is only a small part of the journey each innovation must make. In this talk we will discuss an approach to solving novel problems so that our solutions have the best chance of success. We will discuss ways of thinking, working, and designing solutions, and how to match those to needs across industry. Finally, we will discuss how industry has changed in recent years, along with emerging trends and future directions.
19th Wednesday: Professor Ruth Byrne
Professor of Cognitive Science
School of Psychology and Institute of Neuroscience, Trinity College Dublin
Ruth Byrne is the Professor of Cognitive Science at Trinity College Dublin, University of Dublin, in the School of Psychology and the Institute of Neuroscience, a chair created for her by the university in 2005. Her research expertise is in the cognitive science of human thinking, including experimental and computational investigations of reasoning and imaginative thought. She has published over 100 journal articles, and her books include 'The rational imagination: how people create alternatives to reality' (2005, MIT Press), 'Deduction', co-authored with Phil Johnson-Laird (1991, Erlbaum Associates), and most recently, 'Thinking, reasoning, and decision-making in autism', co-edited with Kinga Morsanyi (2019, Routledge). She is a senior editor of Cognitive Science, the journal of the US Cognitive Science Society, and former chair of the European Research Council's advanced grants panel on the human mind. She is a member of the Royal Irish Academy, a Senior Fellow of Trinity College Dublin, and a Fellow of the US Association for Psychological Science. She was awarded the 2021 Gold Medal for Social Sciences by the Royal Irish Academy.
Title: How people reason with counterfactual explanations for AI decisions
Abstract: Cognitive science research on human thinking indicates that people often create counterfactual explanations for past decisions, in which they consider how an outcome would have been different if some preceding events had been different. Counterfactual explanations are closely related to causal ones, but experimental evidence indicates that people tend to reason differently about them. To illustrate some of their differences, I discuss the sorts of mental models people construct to simulate counterfactual and causal assertions, and describe findings from eye-tracking studies that shed light on the source of these differences. I suggest that such insights from cognitive science about how people understand explanations can be instructive for the use of explanations in eXplainable Artificial Intelligence (XAI). Given the considerable recent interest in counterfactual explanations in XAI, I consider the use of counterfactual and causal explanations to explain decisions made by AI decision support systems. I describe current empirical findings which show that although people tend subjectively to prefer counterfactual to causal explanations for an AI system's decision, there are few objective differences in people's accuracy in predicting an AI system's decisions, whether they have been given counterfactual or causal explanations. Finally, I address the question of whether an AI system's recommendations can persuade people to make choices they would not otherwise make, in particular to make risky decisions. Overall, I conclude that central to the XAI endeavour is the requirement that automated explanations provided by an AI system must make sense to human users and be interpreted by them in the way intended.
20th Thursday: Dr Derek Bridge
Senior Lecturer
School of Computer Science & Information Technology, University College Cork
Derek Bridge is a senior lecturer and a principal investigator at the Insight SFI Research Centre for Data Analytics. He has over 30 years of experience in AI research and education. He has made sustained contributions in the areas of recommender systems and case-based reasoning. In particular, he has a history of work in explanations; in designing recommender systems that achieve goals other than recommendation relevance (including diversity and serendipity); and in interactive recommender systems. In both case-based reasoning and recommender systems, he has won a number of best paper prizes. He is a founding member of the ACM Conference on Recommender Systems. He combines research with research-led teaching, twice recently winning teaching excellence awards.
Title: Knowledge Graphs in Recommending and CBR
Abstract: Knowledge Graphs represent relationships between entities and concepts. They have been popularized by Google in evolving its search engine from document retrieval to question-answering, but they have a wide range of applications. In this talk, Derek presents four applications in recommendation and classification, drawn mostly (but not exclusively) from his team's work over the past few years. The applications that he chooses differ in the ways they process the knowledge graphs: one focuses on paths in the graphs; another considers collective classification within neighbourhoods; the third represents the graphs in a vector-space model; and the last uses graph convolutions. The talk also highlights different similarity measures - five in total - that follow from these different ways of processing the graphs. The talk offers an opportunity to reflect on how knowledge graphs might contribute to Case-Based Reasoning.