

Fully dressed by robots

Machine learning supplies models for predicting arm movements, elbow position and the force applied.

19th April 2022

Innovation in Textiles
 |  Cambridge, MA, USA


Robots are already adept at tasks such as lifting objects that are too heavy or cumbersome for people to manage, as well as the precision assembly of items like watches that have large numbers of tiny parts – some so small they can barely be seen with the naked eye.

Much harder are tasks that require situational awareness and near-instantaneous adaptation to changing circumstances in the environment. Things become more complicated still when a robot has to interact with a human and the two must work together to complete a task safely and successfully.

Researchers at MIT are currently working on programming a robot to assist with dressing a human, and specifically how to deal with sleeves.

 “The robot cannot see the human arm during the entire dressing process and in particular, it cannot always see the elbow or determine its precise position or bearing,” explains Shen Li, a PhD candidate in the MIT Department of Aeronautics and Astronautics. “This affects the amount of force the robot has to apply to pull the article of clothing – such as a long-sleeve shirt – from the hand to the shoulder.”

Reasonably precise

To deal with the issue of obstructed vision, the team has developed a “state estimation algorithm” that allows them to make reasonably precise educated guesses as to where, at any given moment, the elbow is and how the arm is inclined – whether it is extended straight out or bent at the elbow, pointing upwards, downwards, or sideways – even when it is completely obscured by clothing. At each instant, the algorithm takes the robot’s measurement of the force applied to the cloth as input and then estimates the elbow’s position – not exactly, but placing it within a box that encompasses all possible positions.
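The idea of a set-based estimate – a box guaranteed to contain the elbow rather than a single point guess – can be illustrated with a minimal sketch. The class name, the motion bound, and the measurement box below are all hypothetical simplifications, not the researchers’ actual algorithm:

```python
import numpy as np

class BoxEstimator:
    """Toy set-based estimator: tracks an axis-aligned box that is
    guaranteed to contain the (unseen) elbow position."""

    def __init__(self, lo, hi):
        # Initial box: every position the elbow could plausibly occupy.
        self.lo = np.asarray(lo, dtype=float)
        self.hi = np.asarray(hi, dtype=float)

    def predict(self, max_motion):
        # Arm dynamics are unknown but bounded: inflate the box by the
        # largest distance the elbow could move in one time step.
        self.lo -= max_motion
        self.hi += max_motion

    def update(self, meas_lo, meas_hi):
        # Intersect with the set of positions consistent with the
        # latest force measurement on the cloth.
        self.lo = np.maximum(self.lo, meas_lo)
        self.hi = np.minimum(self.hi, meas_hi)

    def contains(self, p):
        # True if point p lies inside the current uncertainty box.
        return bool(np.all(self.lo <= p) and np.all(p <= self.hi))
```

Alternating `predict` (grow the box) and `update` (shrink it with each measurement) keeps the box as tight as the data allow while never excluding the true elbow position.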

That knowledge, in turn, tells the robot how to move.

“If the arm is straight, then the robot will follow a straight line and if the arm is bent, the robot will have to curve around the elbow,” says Theodoros Stouraitis, a visiting scientist in the Interactive Robotics Group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “Getting a reliable picture is important because if the elbow estimation is wrong, the robot could decide on a motion that would create an excessive and unsafe force.”

The algorithm includes a dynamic model that predicts how the arm will move in the future, and each prediction is corrected by a measurement of the force that’s being exerted on the cloth at a particular time. While other researchers have made state estimation predictions of this sort, what distinguishes this new work is that the MIT investigators and their partners can set a clear upper limit on the uncertainty and guarantee that the elbow will be somewhere within a prescribed box.  
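This predict-then-correct structure is the same pattern used by classical filters. As a point of comparison only – the MIT work provides set-membership guarantees rather than the Gaussian estimates below – here is a minimal one-dimensional Kalman-style step with assumed linear dynamics and illustrative noise values:

```python
def kalman_step(x, P, u, z, q=0.01, r=0.1):
    """One predict/correct cycle: a dynamic model forecasts the state,
    then a measurement z corrects that forecast.
    (Illustrative 1-D linear sketch, not the paper's actual model.)"""
    # Predict: propagate the state with the motion model; uncertainty grows.
    x_pred = x + u        # assumed model: previous state plus commanded motion
    P_pred = P + q        # process noise inflates the variance
    # Correct: blend in the measurement, weighted by relative uncertainty.
    K = P_pred / (P_pred + r)          # gain: how much to trust z
    x_new = x_pred + K * (z - x_pred)  # pull the forecast toward z
    P_new = (1 - K) * P_pred           # uncertainty shrinks after the update
    return x_new, P_new
```

Each force measurement plays the role of `z`, nudging the dynamic model’s forecast back toward what was actually observed.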

Machine learning

The model for predicting arm movements and elbow position and the model for measuring the force applied by the robot both incorporate machine learning techniques.

The data used to train the machine learning systems were obtained from people wearing Xsens motion tracking suits with built-in sensors that accurately track and record body movements.
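Learning a map from measured force to elbow position from such recordings can be sketched with a toy least-squares fit. The data below are synthetic stand-ins (the real Xsens suits log full-body motion), and the linear model is an assumption made purely for illustration:

```python
import numpy as np

# Hypothetical stand-in for motion-capture training data: pulling-force
# features paired with elbow positions (synthetic, not Xsens output).
rng = np.random.default_rng(0)
forces = rng.uniform(0.0, 5.0, size=(200, 3))             # force features
true_W = np.array([[0.1, 0.0], [0.0, 0.2], [0.05, 0.05]])  # ground-truth map
elbows = forces @ true_W + rng.normal(0, 0.01, (200, 2))   # elbow x, y

# Fit a linear map force -> elbow position by least squares, a toy
# analogue of the learned force/position models described above.
W, *_ = np.linalg.lstsq(forces, elbows, rcond=None)
pred = forces @ W
rmse = float(np.sqrt(np.mean((pred - elbows) ** 2)))
```

With enough paired examples, the fitted map recovers the underlying relationship to within the noise level, which is what lets the trained robot infer an unseen elbow pose from force alone.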

After the robot was trained, it was able to infer the elbow pose when putting a jacket on a human subject, a man who moved his arm in various ways during the procedure – sometimes in response to the robot’s tugging on the jacket and sometimes engaging in random motions of his own accord.

This work was strictly focused on estimation – determining the location of the elbow and the arm pose as accurately as possible – but the team of Professor Julie A. Shah of MIT’s Interactive Robotics Group has already moved on to the next phase of developing a robot that can continually adjust its movements in response to shifts in the arm and elbow orientation.

In the future, they plan to address the issue of “personalization” — developing a robot that can account for the idiosyncratic ways in which different people move. In a similar vein, they envisage robots versatile enough to work with a diverse range of fabrics, each of which may respond somewhat differently to pulling.

Although the researchers in this group are definitely interested in robot-assisted dressing, they recognise the technology’s potential for far broader utility.

“We didn’t specialise this algorithm in any way to make it work only for robot dressing,” Li notes. “Our algorithm solves the general state estimation problem and could therefore lend itself to many possible applications. The key to it all is having the ability to guess, or anticipate, the unobservable state. Such an algorithm could, for instance, guide a robot to recognise the intentions of its human partner as it works collaboratively to move blocks around in an orderly manner, or set a dinner table.”

The research was supported by the US Office of Naval Research, the Alan Turing Institute, and the Honda Research Institute Europe.

www.mit.edu
