Thursday, September 6, 2012
9:00-11:00 | Contributed Talks: Perception | H 1058
David González-Aguirre, Karlsruhe Institute of Technology (KIT): Model-Based Environmental Object Recognition for Humanoid Robots
Jörg Stückler, Uni Bonn: Dense Modelling and Tracking of Scenes and Objects Using Multi-Resolution Surfel Maps
Joshua Hampp, IPA Stuttgart: Evaluation of Registration Methods on Mobile Robots
Hanchen Xiong, Uni Innsbruck: Pose Estimation of 3D Point Clouds via Feature Map
Damien Teney, Uni Innsbruck: Exemplar-Based Full Pose Estimation from 2D Images without Correspondences
11:00-11:30 | Coffee Break | Lichthof
11:30-13:00 | Contributed Talks: Representations and Learning | H 1058
Felix Reinhart, Uni Bielefeld: Representation and Generalization of Bi-manual Skills from Kinesthetic Teaching
Leon Bodenhagen, University of Southern Denmark: Probabilistic Manipulation Functions: Statistical Representation of Actions and their Outcomes
Luca Lonini, Uni Frankfurt/Main: Learning efficient representations of stereo-disparity perception and vergence movements through sparse coding and reinforcement learning using the iCub robot head
Jost Tobias Springenberg, Uni Freiburg: Learning Sensory Representations for Reinforcement Learning
Arne Nordmann, Uni Bielefeld: Teaching Nullspace Constraints in Physical Human-Robot Interaction using Reservoir Computing
Jan Hartmann, Uni Lübeck: Self-Adaptation for mobile robot algorithms using organic computing principles
13:00-14:00 | Light Lunch | Lichthof
14:00-15:30 | Contributed Talks: Manipulation and Motion Planning | H 1058
Sebastian Höfer, Roberto Martin, Clemens Eppner, TU Berlin: Learning to Manipulate Articulated Objects – an Integrated Experiment
Raphael Deimel, TU Berlin: How to grasp with Soft Hands
Henning Koch, Uni Heidelberg: Optimization-based walking generation for the humanoid robot HRP-2
Arne Sieverling, TU Berlin: Robot motion under uncertainty: Integrating planning, sensing, and control
15:30-16:00 | Coffee Break | Lichthof
16:00-17:30 | Contributed Talks: Robot Programming / Flying, Swimming, Rolling | H 1058
Robot Programming:
Christopher Armbrust, Uni Kaiserslautern: Verification of Behaviour Networks Using Finite-State Automata
Kevin Eilers, IRT Dortmund: Simulating Robots by means of generated Petri Nets
Flying, Swimming, Rolling:
Christian Blum, HU Berlin: An autonomous flying Robot for Network Robotics
Markus Ryll, MPI Tübingen: Overactuation in UAVs – Modeling and Control of a Quadrotor with Tilting Propellers
Benjamin Meyer, Uni Lübeck: SMART-E – An Autonomous Omnidirectional Underwater Vehicle
18:00-19:00 | Lab Tour with Demo Discussions | EN 268
20:00- | Dinner at Zwölf Apostel Mitte, Georgenstraße 2, S-Bahn Friedrichstraße |
Friday, September 7, 2012
(abstracts are listed below)
9:00-10:30 | Presentations from related disciplines | H 1058
John-Dylan Haynes, Neuroscience, Charité Berlin: "Brain reading" with MRI: Decoding mental states from human brain activity
Marianne Maertens, Modelling of Cognitive Processes, TU Berlin: Appearance Matters
10:30-11:00 | Coffee Break | Lichthof
11:00-12:30 | Presentations from related disciplines | H 1058
Robert Gaschler, Psychology, HU Berlin: Task representations – how humans configure their mind for a task and how they change configuration with practice
Nina Gaißert, Festo: Generating new impulses for technology and industrial applications
12:30-13:30 | Light Lunch | Lichthof
13:30-15:30 | Robotic presentations | H 1058
Wolfram Burgard, Uni Freiburg: Techniques for Object Recognition from Range Data
Michael Beetz, Uni Bremen: Robotic Agents Performing Human-Scale Everyday Manipulation Tasks – In the Knowledge Lies the Power
15:30-16:00 | Coffee Break | Lichthof
16:00-17:30 | Robotic presentations | H 1058
Helge Ritter, Uni Bielefeld: Robotics as a Science of Integration
Ludovic Righetti, MPI Intelligent Systems: Movement generation and control for agile robots: applications to manipulation and locomotion
Task representations – how humans configure their mind for a task and how they change configuration with practice
Psychology
Humboldt-Universität zu Berlin
The idea that humans literally program their mind to perform a task has been the target of experimental research in psychology for over a century. While we observe every day that humans can use instructions to prepare actions, surprisingly little is known about how verbal instructions are translated into actions. The ability to quickly establish task representations allows humans to master arbitrary tasks much more quickly than other primates. However, to a surprising extent, task preparation can shield us from noticing shortcut options for performing a task more efficiently. For instance, in implicit learning experiments, many participants fail to notice that they are constantly repeating the same short sequence of responses over more than a hundred repetitions. Under some conditions, however, automatic learning processes can help to replace an instruction-based task representation with a more efficient one.
Appearance Matters
Modeling of Cognitive Processes
School of Electrical Engineering and Computer Science
Technische Universität Berlin
One of the challenges for the visual system is to recover object properties, such as surface lightness, from the retinal image, in which information about a surface's reflectance, the local illumination, and potentially intervening media is intertwined in a single local luminance measure. A black paper in bright light and a white paper in shadow might reflect the same amount of light to the eye, and yet the black paper appears black and the white paper white. It has turned out to be extremely difficult to get a grasp on how the human perceptual system solves this so-called inverse-optics problem. It is undisputed that the visual system must rely on contextual information to derive the lightness of an image region. What is disputed, though, is whether the computation of lightness involves a representation of a scene's geometry and its illumination conditions, or whether lightness can be computed from the contrast between image regions. I will present psychological models of lightness perception, and a recent research example to illustrate the type of questions the field is currently tackling.
"Brain reading" with MRI: Decoding mental states from human brain activity
Director of the Berlin Center for Advanced Neuroimaging
Bernstein Center for Computational Neuroscience
Charité Berlin
Recent advances in human neuroimaging have shown that it is possible to accurately decode a person's conscious experience based on non-invasive MRI measurements of their brain activity. Such 'brain reading' has mostly been studied in the domain of visual perception, where it helps reveal the way in which individual experiences are encoded in the human brain. Here several studies will be presented that directly demonstrate the ability to decode various types of mental states from brain signals. It is possible to decode simple mental states, such as visual percepts, even if they do not reach conscious awareness ("subliminal perception"). But even high-level mental states such as intentions can be read out. It is even possible to predict a person's free choices several seconds before they believe they are making up their mind. There are also several potential applications, such as the prediction of purchase decisions for consumer products. A number of fundamental challenges to the science of "brain reading" will be presented and discussed.
Generating new impulses for technology and industrial applications
Nina Gaißert
Bionic Research
Festo
Festo is constantly in search of new impulses for technology and industrial applications. More specifically, Festo is especially interested in new approaches in the field of drive, control, and gripping technology. For this reason, the Bionic Learning Network was founded in 2006. Within this cooperation between Festo, well-known universities, research institutes, and innovative companies, engineers, designers, and researchers take inspiration from nature's vast repertoire of intelligent strategies to solve technical problems. Here we present how animals like fish, elephants, or birds can inspire the field of robotics, leading to smart and energy-efficient solutions. Further, we present how new technologies like generative manufacturing, brain-machine interfaces, or tactile feedback will shape the human-technology cooperation of the future.
Techniques for Object Recognition from Range Data
Albert-Ludwigs-Universität Freiburg
In this talk we address the problem of object recognition in 3D point cloud data. We first present a novel interest point extraction method that operates on range images generated from arbitrary 3D point clouds. Our approach explicitly considers the borders of objects according to the transitions from foreground to background. We furthermore introduce a corresponding feature descriptor. We present rigorous experiments in which we analyze the usefulness of our method for object detection. We furthermore describe a novel algorithm for constructing a compact representation of 3D point clouds. Our approach extracts an alphabet of local scans from the scene. The words of this alphabet are then used to replace recurrent local 3D structures, which leads to a substantial compression of the entire point cloud. We optimize our model in terms of complexity and accuracy by minimizing the Bayesian information criterion (BIC). Experimental evaluations on large real-world data show that our method allows us to accurately reconstruct environments with as few as 70 words. We finally discuss how this method can be utilized for object recognition and loop closure detection in SLAM (Simultaneous Localization and Mapping).
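For reference (not part of the abstract itself), the Bayesian information criterion mentioned above has the standard form, with k the number of model parameters, n the number of data points, and L̂ the maximized likelihood:

```latex
\mathrm{BIC} = k \ln n - 2 \ln \hat{L}
```

Minimizing the BIC thus trades reconstruction accuracy (the likelihood term) against model complexity (the k ln n penalty), which is how a small alphabet of words can still yield an accurate reconstruction.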
Robotic Agents Performing Human-Scale Everyday Manipulation Tasks - In the Knowledge Lies the Power
Universität Bremen
Enabling robotic service agents to perform natural language instructions such as "flip the pancake" or "push the spatula under the pancake" requires us to equip robots with large amounts of knowledge. To perform such tasks adequately, robots must, for instance, be able to infer the appropriate tool to use, how to grasp it and how to operate it. They must, in particular, not push the whole spatula under the pancake, i.e. they must not interpret instructions literally but rather recover the intended meaning.
Recently, we have seen impressive examples of information systems that have scaled towards open problem-solving tasks formulated in naturalistic ways by leveraging vast sources of information, e.g. the Siri personal assistant on the iPhone and the Watson system, which outperformed the human champions in the popular US quiz show "Jeopardy!".
In this talk, I will present some of our ongoing research in which we transfer these ideas to autonomous robotics in order to realize knowledge-intensive robot control programs that use the world-wide web as a comprehensive source of knowledge.
Robotics as a Science of Integration
Universität Bielefeld
Performing even seemingly simple actions, such as taking a footstep or grasping an object, usually requires the coordination of a multitude of processes if the action has to succeed in unstructured, real-world environments.
While we can usually facilitate solutions by specializing the requirements, the deep challenge of robotics is to create capabilities that generalize to a broad range of situations. This may be very hard to achieve through a "blueprinting" of the system for all possible interaction contexts. As an alternative, we argue for systems that emphasise the properties of incrementality and functional growth, and that take inspiration from principles seen in evolution or in cognitive development, such as scaffolding, adaptivity, and the parsimonious use of resources.
We discuss how this might pave the way towards a "science of integration", striving for a principled understanding of systems and architectures that support continuous extension and functional growth, and provide examples of how the Bielefeld "Cognitive Interaction Technology" research line attempts to contribute to such a research agenda.
Movement generation and control for agile robots: applications to manipulation and locomotion
MPI Intelligent Systems
We expect to use autonomous robots such as humanoids in a wide spectrum of applications, ranging from disaster relief scenarios to personal assistants in everyday life. Such robots should be able to perform complex tasks that involve locomotion and manipulation skills at the same time (it might be necessary to grasp parts of the environment to cross difficult terrain, or to maintain balance on tip-toes while reaching for an object of interest). In all these scenarios, robots will have to perform tasks in constant interaction with an uncertain and changing environment. A very simple question then is: how should these robots move?
Several research challenges underlie this question: How should we generate and control the robot's movements? How can we make the robot versatile, i.e. able to easily acquire new skills? How can the robot quickly react to unexpected events? How can we use sensory information to adapt movements? In this talk, I will present our recent results addressing several of these questions. Taking examples from our research on legged locomotion and manipulation, I will discuss the importance of force/torque control approaches for improving task performance, how robots can acquire new skills involving fine contact interaction, and how sensory information can be used to create very reactive behaviors.