
NMMI Supporting


Recent progress in physical Human‐Robot Interaction (pHRI) research has shown in principle that humans and robots can actively and safely share a common workspace. The fundamental breakthrough that enabled these results was the human-centered design of robot mechanics and control, which made it possible to limit potential injuries due to unintentional contacts.










Although many advances have been made in the mechatronics and computational hardware of artificial hands, the state of the art appears to be only marginally closer to a satisfactory functional approximation of the human hand than it was twenty years ago. The main reasons for this concern some fundamental gaps in our understanding of the organization and control of hands, and ultimately the lack of a theory to guide a principled approach to taming the complexity of hands as the physical embodiments of the sense of active touch, comprising the sensorimotor apparatus that ultimately links perception and action.

This project aims to break through the slowly moving front of the state of the art by combining two crucial, recent innovations: (i) an approach that describes the organization of the hand sensorimotor system in terms of geometric constraints, or synergies, together with an understanding of the role of variable impedance actuation in embodying intelligent grasping and manipulation behaviours in humans; and (ii) the availability of a new generation of “robot muscles”, i.e. actuators capable of tuning their impedance to adapt to the environment and the task.
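The synergy idea above can be sketched in a few lines. This is an illustrative toy model, not the project's actual hand model: it assumes a hypothetical 20-joint hand whose postures are approximated as a mean posture plus a linear combination of a small number of synergy basis vectors, so that two activation values shape all twenty joints.

```python
import numpy as np

# Hypothetical dimensions for illustration only.
n_joints = 20          # joint angles of a simplified hand model
n_synergies = 2        # a few synergies explain most postural variance

rng = np.random.default_rng(0)
S = rng.standard_normal((n_joints, n_synergies))  # synergy basis (made-up values)
q_mean = np.zeros(n_joints)                       # mean hand posture

def posture(sigma):
    """Map low-dimensional synergy activations to a full joint posture."""
    return q_mean + S @ np.asarray(sigma, dtype=float)

q = posture([0.8, -0.3])   # two numbers drive all 20 joints
```

In practice such a basis would be extracted from recorded human grasping data (e.g. by principal component analysis) rather than drawn at random as here.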

One unique feature of the research is that the new cognitive and physical architecture for artificial hands will be integrated within a complete cognitive and physical architecture for artificial humans, in the form of a humanoid robot.


Globalization has increased the demand for, and hence the transportation of, goods. Today, most goods are shipped via sea freight in containers and then transferred onto trucks for further transportation. Surprisingly, the containers are unloaded manually, since they are frequently packed chaotically, the variety of transported goods is high, and time requirements are strict. Unloading containers is a strenuous task, as goods can weigh up to 70 kg each, and it poses health risks such as exposure to toxic pesticides and poisonous gases as well as injuries from unexpectedly falling objects. Human labor is therefore a high cost factor; combined with the unhealthy working conditions, this makes automated solutions highly desirable. Existing systems for automated unloading are restricted to specific scenarios and still have drawbacks in their flexibility, adaptability and robustness. A robotic system suited for unloading any container requires substantial cognitive capabilities, sensing and perception, together with appropriate control. The RobLog Project aims at exactly this: developing appropriate methods and technologies that meet the requirements for automating logistics processes.

The aim of the project is to advance technologies for, and understand the principles of, cognition and control in complex systems. We will meet this challenge by advancing methods for object perception, representation and manipulation, so that a robot is able to robustly manipulate objects even when those objects are unfamiliar and even though the robot has unreliable perception and action. The proposal is founded on two assumptions. The first is that the representation of the object's shape in particular, and of other properties in general, will benefit from being compositional (or, very loosely, hierarchical and part-based). The second is that manipulation planning and execution benefit from explicit reasoning about uncertainty in object pose, shape, etc., and about how that uncertainty changes under the robot's actions; the robot should plan actions that not only achieve the task but also gather information to make task achievement more reliable. These two assumptions are mirrored in the structure of the proposed work, as we will develop two main strands of work:
i) a multi-modal compositional, probabilistic representation of object properties to support perception and manipulation, and ii) algorithms for reasoning with this representation, that will estimate object properties from visual and haptic data, and also plan how to actively gather information about shape and other object properties (frictional coefficients, mass) while achieving a task. These two strands will be combined and tested on robots performing aspects of a dishwasher loading task. The outcome will be robust manipulation (i.e. under unreliable perception and action) of unfamiliar objects from familiar categories or with familiar parts.
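The second assumption, planning actions that gather information, can be illustrated with a minimal sketch. This is not the project's algorithm: it assumes a hypothetical discrete belief over three candidate object poses and two made-up sensing actions, and picks the action whose observation is expected to reduce the entropy of the belief the most.

```python
import numpy as np

def entropy(belief):
    """Shannon entropy (nats) of a discrete belief."""
    b = np.asarray(belief, dtype=float)
    b = b[b > 0]
    return float(-(b * np.log(b)).sum())

def expected_entropy_after(belief, likelihoods):
    """Expected posterior entropy after observing an action's outcome.

    likelihoods[o][s] = P(observation o | pose s) for that action.
    """
    belief = np.asarray(belief, dtype=float)
    total = 0.0
    for like in likelihoods:
        joint = np.asarray(like, dtype=float) * belief  # P(o, s)
        p_o = joint.sum()                               # P(o)
        if p_o > 0:
            total += p_o * entropy(joint / p_o)         # weighted posterior entropy
    return total

belief = [0.5, 0.3, 0.2]                     # prior over three candidate poses

# Two hypothetical sensing actions (observation models are made up):
peek = [[0.9, 0.1, 0.1], [0.1, 0.9, 0.9]]    # reliably separates pose 0 from the rest
poke = [[0.9, 0.1, 0.5], [0.1, 0.9, 0.5]]    # uninformative about pose 2

best = min([peek, poke], key=lambda a: expected_entropy_after(belief, a))
```

The same expected-information criterion extends, in principle, to continuous beliefs over shape or friction; here a discrete pose belief keeps the arithmetic transparent.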

WALK-MAN is a four-year Integrated Project (IP) funded by the European Commission through the call FP7-ICT-2013-10. The project started in September 2013 and has the goal of developing a robotic platform of anthropomorphic form that can operate outside the laboratory, in unstructured environments and workspaces resulting from natural and man-made disasters. The robot will demonstrate new skills including:

  • Dexterous, powerful manipulation skills - e.g. turning a heavy valve or lifting collapsed masonry,

  • Robust, balanced locomotion - walking and crawling over uneven terrain,

  • Physical sturdiness - e.g. operating/manipulating conventional hand tools such as pneumatic drills or cutters.


Furthermore, the robot will have sufficient perception and cognitive abilities to operate autonomously, or under reduced tele-operation when severe communication limitations (limited channel bandwidth and/or reliability) hinder remote control. The robot will demonstrate human levels of locomotion, balance and manipulation, and will be validated in realistic challenge tasks outside the laboratory environment.

Soft Manipulation (SOMA) is the key for the development of simple, compliant, yet strong, robust, and easy-to-program manipulation systems. SOMA explores a new avenue of robotic manipulation, exploiting the physical constraints imposed by the environment to enable robust grasping and manipulation in dynamic, open, and highly variable contexts.

The SoftPro project will study and design soft, synergy-based robotics technologies to develop new prostheses, exoskeletons, and assistive devices for upper limb rehabilitation, greatly enhancing their efficacy and their accessibility to a larger number of users. Building on solid methodological bases, SoftPro will produce a significant social impact, promoting advanced robotic prosthetic and assistive technology “from bench to bedside”; it will also introduce disruptively new, admittedly risky but potentially high-impact ideas and paradigms.
