We are pleased to announce the following invited talks:
-
Saturday, November 19th, 3:45 PM EST:
Mei Si – Associate Professor, Cognitive Science Department, Rensselaer Polytechnic Institute
Title: Virtual Humans and Interactive Storytelling
Abstract: Narrative is an important aspect of the human experience. With the rapid advancement of computer technologies in recent years, virtual environments are becoming increasingly capable of providing a vivid, fictional world in which users can immerse themselves and interact with virtual humans controlled by other users or by an AI system. Game designers have been looking into ways to use rich characters and narratives to engage the player. Virtual humans and social robots have also been widely used for training and pedagogical purposes, ranging from math and physics tutoring to language and social skills training. My research seeks to develop automated procedures for driving conversational agents’ verbal and nonverbal behaviors. In this talk, I will discuss AI techniques for creating virtual humans and interactive stories. The talk will focus on how different technologies can be applied to model virtual humans with personalities, emotions, and a theory of mind. I will also discuss the challenges and different approaches to creating interactive stories.
-
Sunday, November 20th, 11:10 AM EST:
Herbert A. Simon Prize Talk:
Anthony Cohn – Professor of Automated Reasoning, University of Leeds
Title: Cognitive Systems with Spatial Intelligence
Abstract: Being able to represent and reason about space is crucial for an embodied cognitive agent (and indeed for many virtual agents too). In this talk I will discuss the kinds of spatial representation and reasoning that a cognitive agent might need, focussing on the kinds of qualitative spatial representations which my research group has been investigating, and indeed also using, over the years, in particular for learning activity models from video data. I will also talk about how a cognitive agent might ground language to spatio-temporal perceptions. During the talk I will also outline various as-yet-unsolved challenges.
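To give a concrete sense of the flavor of qualitative spatial reasoning (this is an editorial illustration, not material from the talk), the sketch below performs composition-based inference in the style of the Region Connection Calculus (RCC-8), one of the qualitative formalisms associated with Cohn's group. The three-entry composition table and the example regions are invented for this illustration; a real table covers all 64 relation pairs.

```python
# Toy fragment of RCC-8-style composition-based inference (illustrative only).
# Relation names: DC = disconnected, EC = externally connected, PO = partial
# overlap, EQ = equal, TPP/NTPP = (non-)tangential proper part, and
# TPPi/NTPPi their inverses.
#
# Composition answers: "if r1(a, b) and r2(b, c) hold, which relations
# between a and c are possible?" Only three pairs are tabulated here.
COMPOSITION = {
    ("TPP", "TPP"): {"TPP", "NTPP"},  # a part of a part is a proper part
    ("TPP", "DC"): {"DC"},            # a lies inside b, and b is clear of c
    # DC composed with DC gives no information: all eight relations possible.
    ("DC", "DC"): {"DC", "EC", "PO", "EQ", "TPP", "NTPP", "TPPi", "NTPPi"},
}

def compose(r1: str, r2: str) -> set[str]:
    """Possible relations between a and c, given r1(a, b) and r2(b, c)."""
    return COMPOSITION.get((r1, r2), set())

# Example: if the cup is a tangential proper part of the room (TPP), and the
# room is disconnected from the garden (DC), the cup must be DC from the garden.
print(compose("TPP", "DC"))  # {'DC'}
```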
-
Monday, November 21st, 11:15 AM EST:
Ruth Byrne – Professor of Cognitive Science, School of Psychology and Trinity College Institute of Neuroscience (TCIN), Trinity College Dublin
Title: How people reason about counterfactual explanations
Abstract: People often create explanations about how an outcome could have turned out differently, if some preceding events had been different. In this talk I focus on the cognitive processes that underlie the construction and comprehension of counterfactual explanations and causal explanations. I discuss recent experimental discoveries of differences in how people reason about counterfactual and causal assertions, including evidence from eye-tracking experiments of differences in the tendency to consider multiple possibilities, and evidence from sentence-stem completion experiments of differences in the content of such explanations. I consider some of the implications of these discoveries for the use of automated counterfactual explanations for decisions by Artificial Intelligence (AI) decision support systems in eXplainable AI (XAI). I describe empirical findings of differences in people’s subjective preferences for counterfactual and causal explanations for decisions by AI systems in familiar and unfamiliar domains, and people’s objective accuracy in predicting such decisions or making their own decisions. I argue that counterfactual explanations confer several cognitive benefits despite their potential cognitive costs.
-
Monday, November 21st, 2:30 PM EST:
Doug Lenat – CEO, Cycorp
Title: Computers versus Common Sense
Abstract: (See also the Cyc website)
Almost everyone who talks about Artificial Intelligence nowadays means training deep neural nets on big data. Developing and using those patterns is a lot like the cognition historically attributed to our right brain hemisphere; it enables AIs to react quickly and – very often – adequately. But we human beings also make good use of the sort of cognition historically attributed to our left brain hemisphere: reasoning slowly, logically, and causally. I will discuss this “other type of AI” – i.e., symbolic AI, which comprises a formal representation language, a “seed” knowledge base with hand-engineered default rules of common sense and domain knowledge written in that language, and an inference engine capable of producing hundreds-deep chains of deduction, induction, and abduction on that large knowledge base. I will describe the largest such platform, Cyc, and a few commercial applications that were produced just by educating it as one might teach a new human employee, and I will give a short demo. We’ve made a lot of mistakes, and learned a lot of lessons, over the last four decades of trying to get such an AI to operate without having to compromise on speed or on the expressiveness of its representation. But it is important to remember that human beings’ “super-power” is our ability to harness both types of reasoning, and I believe that the most powerful AI solutions in the coming decade will likewise be hybrids of the two. That is the only path I see by which we will overcome the current dangerous inability of deep-learning AIs to understand and explain their decisions, and by which we will make AIs far more trusted and – more importantly – far more trustworthy.
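To give a rough sense of the "knowledge base plus inference engine" architecture the abstract describes, here is a minimal forward-chaining sketch over a toy rule base. This is an editorial illustration only: it is not Cyc's actual engine or representation language (CycL), and all facts, rules, and predicate names below are invented.

```python
# Toy forward-chaining inference over a tiny hand-written rule base
# (illustrative only; real systems support far richer representations
# and much deeper chains of deduction, induction, and abduction).

facts = {"bird(tweety)"}

# Each rule: (set of premises, conclusion). Names are made up for this sketch.
rules = [
    ({"bird(tweety)"}, "has_wings(tweety)"),
    ({"has_wings(tweety)"}, "can_fly(tweety)"),
]

def forward_chain(facts: set[str], rules) -> set[str]:
    """Repeatedly fire rules whose premises all hold until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# {'bird(tweety)', 'has_wings(tweety)', 'can_fly(tweety)'}  (order may vary)
```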
We are also pleased to announce the following panel and community discussion:
-
Sunday, November 20th, 4:30 PM EST:
Panel and Community Discussion
Abstract: Recently, a growing number of prominent voices have challenged the view that large language models and deep learning, in their current conceptions, will ever achieve human-level intelligence, reasoning, and understanding on their own. What is our contribution to these debates? This discussion will focus on the future objectives of the Advances in Cognitive Systems community from scientific, methodological, and philosophical perspectives and what role we might play within the broader research landscape.
Panelists: