
Keynote Speakers

26/08/2014


Prof. Elisabeth André 
  • University of Augsburg, Germany

Elisabeth André is a Full Professor of Computer Science at Augsburg University and Chair of the Research Unit Human-Centered Multimedia. She has a long track record in multimodal interfaces, embodied conversational agents and social signal processing, and has participated in a variety of international projects in these areas, such as CEEDS (The Collective Experience of Empathic Data Systems), TARDIS (Training young Adult’s Regulation of emotions and Development of social Interaction Skills), ILHAIRE (Incorporating Laughter into Human-Avatar Interactions: Research and Evaluation) and eCute (education in Cultural understanding technology enhanced). In 2010, Elisabeth André was elected a member of the prestigious German Academy of Sciences Leopoldina, the Academy of Europe and AcademiaNet. She is also an ECCAI Fellow (European Coordinating Committee for Artificial Intelligence).

Exploring the Effect of Unconscious Cues in Human-Robot Interaction

Societal challenges, such as assisted living for elderly people, create a high demand for technologies that enable smooth interactions between humans and machines. Currently, most human-robot interfaces focus on input explicitly issued by the human users. In human-human communication, however, it is often the myriad of unconsciously conveyed signals that determines whether an encounter between people is successful or not. Recently, techniques for analyzing human behavioral signals in real time have become more robust. In addition, the robots’ expressivity has been increased, for example by equipping them with synthetic skin that can be manipulated to convey a variety of emotional states. The time has therefore come to explore the power and potential of subtle social signals in human-robot interaction. Usually, social signals are not deliberately communicated by users. Nevertheless, they may reveal information that can be exploited to adjust a robot’s behavior without making users aware that they are implicitly controlling it. In my talk, I will demonstrate how progress made in the area of social signal processing can contribute to a deeper symbiosis in human-robot interaction. This includes the collection of human behavioral cues under naturalistic conditions and their linkage to higher-level intentional states.


Mr Jan Komarek
  • Invited Speaker from the European Commission

Jan Komarek works as a project officer in the Digital Social Platforms unit of the European Commission. He follows several projects supporting the research and deployment of ICT solutions in the fields of service robotics and the integration of long-term and health care services. He is also responsible for the Ambient Assisted Living Joint Programme (AAL JP) and for impact assessments. Before joining his current team, he worked at the Commission in the field of macroeconomic analysis and statistics, and before joining the EC in 2005 he spent four years in the telecommunications field in both the private and public sectors. Jan holds degrees in economics and international relations.

Support of the European Commission to service robotics and related research

The European Commission has been financing research and innovation in the field of ageing well through the 7th Framework Programme, the Ambient Assisted Living (AAL) Joint Programme and the Competitiveness and Innovation Programme (CIP). Topics such as service robotics and smart environments in particular have attracted a strong response from industry, academia and the research community due to their high relevance to the ageing challenge. The projects are expected to result in benefits for the elderly, the care system and the economy.


27/08/2014


Prof. David Lane
  • Ocean Systems Laboratory, Heriot-Watt University, UK

David Lane is Professor of Autonomous Systems Engineering at Heriot-Watt University, Edinburgh, and Director of the EPSRC Centre for Doctoral Training in Robotics and Autonomous Systems, the ROBOTARIUM National Facility for Research into Robot Interaction (edu-ras.org) and the Ocean Systems Laboratory (www.oceansystemslab-heriotwatt.com). He co-ordinates EU FP7 PANDORA (persistentautonomy.com), also participating in FP7 ARROWS, ROBOCADEMY and EPSRC AIS. In public leadership roles, he chairs the BIS/TSB Robotics and Autonomous Systems Special Interest Group (RAS-SIG), which is formulating a national cross-sector RAS innovation strategy, and is a Director of the euRobotics aisbl not-for-profit that shapes the EU Horizon 2020 Robotics PPP. In 2001 he founded SeeByte Ltd and Inc, and as CEO until 2010 he led the company’s organic evolution from startup to a multi-million dollar organization located in Edinburgh and San Diego, commercialising a 20-year portfolio of research in autonomous subsea robotics into international offshore energy and defence markets. SeeByte won the 2010 Praxis Unico Business Impact Achieved Award and the 2013 Scottish Digital Technology Award for International Growth. He has been elected to Fellowships of the Royal Academy of Engineering and the Royal Society of Edinburgh.


Autonomy Is All About The Operator

Autonomous underwater robots have moved from laboratory prototypes to commercial systems for inspection and survey tasks in offshore oil and gas, environmental monitoring, search and rescue, and security applications. Recent examples in the news include the Deepwater Horizon oil spill in the Gulf of Mexico, the Fukushima nuclear reactor incident and the search for the Malaysia Airlines MH370 wreckage and black box. These autonomous robots operate in isolation or collaboratively as part of a group. Soon, laboratory prototypes that can autonomously manipulate and interact will make their way into commercial systems for inspection, repair and maintenance tasks. Dedicated ocean observatories are now being designed and deployed involving autonomous robots, gliders and fixed instrumentation on the seabed. The first research projects are underway studying the interaction of these robots with divers, so that they operate in synergy and extend the divers’ capabilities.

The lack of available communication bandwidth through the ocean means these autonomous, un-cabled robots must have high degrees of autonomy, with only minimal operator supervision during mission execution. From the human-robot interaction perspective, there are therefore difficult challenges in keeping human operators aware of events as missions evolve, in providing effective mission debriefing and data visualisation on return, and in programming robust behaviours into the robot that can deal with imprecision in navigation and positioning.

The talk will look at some typical operational scenarios in these application sectors and the various ways autonomous operations critically depend on human interaction to be effective. Recent research outputs and some prototype systems will be presented to illustrate both pragmatic and esoteric approaches to forming human-robot interaction interfaces. These include the use of narrative for debriefing, skill learning from operators to perform new tasks, and predictive multi-vehicle interfaces.



28/08/2014


Prof. William Harwin
  • University of Reading, UK

William Harwin is Professor of Human and Interactive Systems at the University of Reading. His research interests are in human-machine-world interactions at all levels, with particular interests in assistive robotics, haptic interface design, haptic rendering, robots in rehabilitation, and embodied systems, both machine and biological. Research projects include Gentle (robots for neurorehabilitation), Haptel (a haptic teaching aid for dental students) and Sphere (ubiquitous on-body sensing for health care).


Biological intelligence, machine ignorance and rehabilitation robotics

Robotics research has become overly preoccupied with the mechanics of machines. Although this is a necessary step in enabling useful robots, the high-level control of the machines is largely left as the responsibility of humans. This may be an acceptable solution for applications such as robot surgery, but there is much to be gained if robots can be programmed with a concept of the task they are required to do. This is particularly important when the robot is required to make physical contact with a person or with an unstructured and complex world. More importantly, such abilities would enable humans to predict and shape the actions of the robot. To do this will need a) greater sensing ability, in particular of contacts, b) better ability to manipulate forces, c) more flexible models of the world that adapt in real time to sensory data, d) more scope for path planning and practice in virtual space, and e) more reliable communication with humans that discloses the machine’s intent and purpose. All of these are attributes of biological systems, and we should look to biomimetics and systems neuroscience to advance the field of robotics. Solutions to these problems would enable robots to bring more value to society, in particular in areas such as health care and skills training.



29/08/2014

Prof. Raja Chatila
  • Directeur de Recherche (Research Director), CNRS, France

Raja Chatila is a senior scientist at the French CNRS and director of the Institute of Intelligent Systems and Robotics (ISIR) at University Pierre and Marie Curie in Paris. He has led or contributed to several projects on autonomous and cognitive robotics, and coordinated the European FP6 FET Integrated Project “The Cognitive Robot Companion” (COGNIRON) in 2004-2008. His current research is focused on robot learning and human-robot interaction. He is an IEEE Fellow and President of the IEEE Robotics and Automation Society for the 2014-2015 term.


Can a Robot Be a Companion? (On Some Open Problems in Human-Robot Interaction)

Robotics research has produced in recent years a wealth of new results which have increased robot abilities in perception, locomotion, control, navigation, action planning, and manipulation. Robots have thus reached a reasonable level of autonomous behaviour, even if this is still constrained by the complexity of the environment in which they operate. However, when we consider the capacities necessary for a robot companion to reach the level of cognition expected for interacting and cooperating naturally with people, it appears that all these results have yet to be translated into this very specific context in order to be operational. We shall overview a few of the cognitive capacities involved in HRI, such as perspective taking, space sharing and spatial reasoning, and cooperative action planning and execution, and discuss a global framework to integrate them.