System uses RFID tags to home in on targets; could benefit robotic manufacturing, collaborative drones, and other applications — ScienceDaily



A new system developed at MIT uses RFID tags to help robots find moving objects with unprecedented speed and accuracy. The system could allow greater collaboration and precision from robots that work on packaging and assembly and from swarms of drones that perform search and rescue missions.

In a paper being presented next week at the USENIX Symposium on Networked Systems Design and Implementation, the researchers show that robots using the system can locate tagged objects within 7.5 milliseconds, on average, and with an error of less than a centimeter.

In the system, called TurboTrack, an RFID (radio-frequency identification) tag can be applied to any object. A reader sends a wireless signal that reflects off the RFID tag and other nearby objects and bounces back to the reader. An algorithm sifts through all the reflected signals to find the RFID tag's response. Final computations then exploit the RFID tag's movement – even though this would usually decrease accuracy – to improve its localization accuracy.

The researchers say the system could replace computer vision for some robotic tasks. Like its human counterpart, computer vision is limited by what it can see, and it can fail to notice objects in cluttered environments. Radio-frequency signals have no such restrictions: they can identify targets without line of sight, amid clutter, and through walls.

To validate the system, the researchers attached one RFID tag to a cap and another to a bottle. A robotic arm located the cap and placed it onto the bottle, held by another robotic arm. In another demonstration, the researchers tracked RFID-equipped nanodrones during docking, maneuvering, and flight. In both tasks, the system was as accurate and fast as traditional computer-vision systems, while working in scenarios where computer vision fails.

"If you use RF signals for tasks typically done with computer vision, you not only enable robots to do human things, but you can also empower them to do superhuman things," says Fadel Adib, an assistant professor and principal investigator at the MIT Media Lab and founder of the Signal Kinetics Research Group. "And you can do it in a scalable way, because these RFID tags cost only 3 cents each."

In manufacturing, the system could allow robotic arms to be more precise and versatile in, say, picking up, assembling, and packaging items along an assembly line. Another promising application is using handheld "nanodrones" for search-and-rescue missions. Nanodrones currently use computer vision and methods to stitch together captured images for localization purposes. These drones often get confused in chaotic areas, lose each other behind walls, and can't uniquely identify one another. All this limits their ability, for example, to spread out over an area and collaborate in searching for a missing person. Using the researchers' system, nanodrones in swarms could better locate one another, for greater control and collaboration.

"You could enable a swarm of nanodrones to form in certain ways, fly into cluttered environments, and even environments hidden from sight, with great precision," says first author Zhihong Luo, a graduate student in the Signal Kinetics Research Group.

The other Media Lab co-authors on the paper are visiting student Qiping Zhang, postdoc Yunfei Ma, and research assistant Manish Singh.

Super Resolution

Adib's group has worked for years on using radio signals for tracking and identification purposes, such as detecting contamination in bottled foods, communicating with devices inside the body, and managing warehouse inventory.

Similar systems have attempted to use RFID tags for localization tasks. But these come with trade-offs in either accuracy or speed. To be accurate, they may take several seconds to find a moving object; to increase speed, they lose accuracy.

The challenge was achieving speed and accuracy simultaneously. To do so, the researchers drew inspiration from an imaging technique called "super-resolution imaging." These systems stitch together images from multiple angles to achieve a finer-resolution image.

"The idea was to apply these super-resolution systems to radio signals," Adib says. "As something moves, you get more perspectives in tracking it, so you can exploit the movement for accuracy."

The system combines a standard RFID reader with a "helper" component that's used to localize radio-frequency signals. The helper emits a wideband signal comprising multiple frequencies, building on a modulation scheme used in wireless communication, called orthogonal frequency-division multiplexing (OFDM).
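The key property of such a multi-frequency signal is that its subcarriers are orthogonal over one symbol period. A minimal sketch of that idea, with arbitrary illustrative parameters (the numbers below are not those of the actual TurboTrack hardware):

```python
import numpy as np

# Illustrative sketch: sum several orthogonal subcarriers, OFDM-style,
# into one wideband probe waveform. All parameter values are invented.
num_subcarriers = 8
subcarrier_spacing = 1e6              # 1 MHz spacing keeps subcarriers orthogonal
sample_rate = 32e6
symbol_duration = 1 / subcarrier_spacing   # one OFDM symbol period

t = np.arange(0, symbol_duration, 1 / sample_rate)
# Each subcarrier sits at an integer multiple of the spacing.
frequencies = subcarrier_spacing * np.arange(1, num_subcarriers + 1)
signal = sum(np.exp(2j * np.pi * f * t) for f in frequencies)

# Orthogonality check: over one symbol, distinct subcarriers integrate to ~0,
# so each frequency's reflection can be separated out at the receiver.
s1 = np.exp(2j * np.pi * frequencies[0] * t)
s2 = np.exp(2j * np.pi * frequencies[1] * t)
inner = np.abs(np.sum(s1 * np.conj(s2))) / len(t)
print(f"signal samples: {len(t)}, cross-correlation: {inner:.6f}")
```

Because each frequency reflects off the environment independently, a wideband signal like this yields many simultaneous measurements per transmission.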

The system captures all the signals that bounce off objects in the environment, including off the RFID tag. One of those signals carries a pattern that's specific to that RFID tag, because RFID tags reflect and absorb an incoming signal in a distinctive pattern, corresponding to bits of 0s and 1s, that the system can recognize.
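One simple way to picture how a known bit pattern can be picked out of many reflections is a sliding correlation: the received samples line up with the tag's pattern only at the offset where the tag actually responded. The bit pattern, noise levels, and offset below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical tag ID bits, mapped from {0, 1} to signal levels {-1, +1}.
tag_bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
tag_pattern = 2 * tag_bits - 1

rng = np.random.default_rng(0)
clutter = rng.normal(0, 1, size=64)      # reflections from other objects
received = clutter.copy()
received[20:28] += 5 * tag_pattern       # tag's backscatter buried at offset 20

# Sliding correlation against the known pattern peaks where the tag responded.
scores = np.correlate(received, tag_pattern, mode="valid")
offset = int(np.argmax(scores))
print(offset)
```

The real system works on radio-frequency samples rather than clean baseband levels, but the principle of matching a known reflect/absorb pattern is the same.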

Because these signals travel at the speed of light, the system can compute a "time of flight" – measuring distance by calculating the time a signal takes to travel between a transmitter and a receiver – to gauge the location of the tag, as well as of other objects in the environment. But this provides only a ballpark localization figure, not subcentimeter precision.
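The time-of-flight arithmetic itself is straightforward; a minimal sketch for a round-trip (out-and-back) measurement:

```python
# Minimal time-of-flight ranging sketch: distance follows from how long the
# signal takes to travel, since radio waves move at the speed of light.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_tof(round_trip_seconds: float) -> float:
    """One-way distance for a signal that travels out and back."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A 20-nanosecond round trip corresponds to roughly 3 meters one way.
d = distance_from_tof(20e-9)
print(f"{d:.2f} m")
```

The arithmetic also shows why time of flight alone stays in ballpark territory: a timing error of just one nanosecond shifts the round-trip estimate by about 15 centimeters.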

Leveraging movement

To zoom in on the tag's location, the researchers developed what they call a "space-time super-resolution" algorithm.

The algorithm combines the location estimates for all the bouncing signals, including the RFID signal, which it determines using time of flight. Using some probability calculations, it narrows that group down to a handful of potential locations for the RFID tag.

As the tag moves, its signal angle changes slightly – a change that also corresponds to a certain location. The algorithm can then use that angle change to track the tag's distance as it moves. By constantly comparing that changing distance measurement against all the other distance measurements from other signals, it can find the tag in a three-dimensional space. All this happens in a fraction of a second.
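A toy way to see how several distance measurements pin down a 3-D position is plain least-squares multilateration. This is only a stand-in for the paper's probabilistic space-time combination, and the anchor positions and distances below are made up:

```python
import numpy as np

# Hypothetical reference points (anchors) and a hypothetical tag position.
anchors = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
true_pos = np.array([0.3, 0.4, 0.2])
distances = np.linalg.norm(anchors - true_pos, axis=1)

# Subtracting the first sphere equation from the others linearizes the problem:
#   2 (a_i - a_0) . x = |a_i|^2 - |a_0|^2 - d_i^2 + d_0^2
A = 2 * (anchors[1:] - anchors[0])
b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2)
     - distances[1:] ** 2 + distances[0] ** 2)
estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(estimate, 3))
```

With noise-free distances the least-squares solution recovers the position exactly; the real system instead fuses noisy, continually changing measurements over time, which is where the motion-driven super-resolution comes in.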

"The high-level idea is that by combining these measurements over time and space, we get a better reconstruction of the tag position," says Adib.

The work was sponsored, in part, by the National Science Foundation.
