Teaching robots new tricks: remotely and at scale

August 6, 2020

Fleets of autonomous mobile robots (AMRs) are fast becoming a reality - but when you have hundreds of these mobile robots working across remote locations, how do you teach them new tricks, quickly?

Brand-new technology allows you to remotely update the AI models on a robot from anywhere in the world - and with the Rocos platform, you can even configure different sensor data feeds and outputs for your AI model, all while your robot is still in the field.

AMRS ARE COMPLEX AND OPERATE WITH A HUGE AMOUNT OF DATA

Autonomous mobile robots (AMRs) face huge challenges that fixed-arm robots don’t. Their behaviour must stay consistent and reliable in an environment that is constantly changing around them - for example, a robot reading temperature gauges must navigate dynamic environments and approach machinery of all shapes and sizes. Consequently, their software and hardware need to be incredibly sophisticated, and the deep integration of the two is not simple (one reason why the industry’s growth has lagged decades behind that of fixed-arm robots).

An AMR must also process large volumes of data - for example, when using AI models to detect objects and make decisions based on those detections. This data can be so voluminous (e.g. video feeds or point cloud data) that powerful parallel computing is required for timely processing (cloud providers such as Microsoft Azure, AWS and Google Cloud have put graphics cards in cloud-based servers for this very purpose).
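
To make the compute requirement concrete, here is a minimal sketch of running an off-the-shelf object-detection model on whatever GPU is available, falling back to the CPU otherwise. The PyTorch/torchvision stack, model choice and frame size are illustrative assumptions, not details from any particular AMR.

```python
# Minimal sketch: run an off-the-shelf detector on the robot's GPU if present.
# The model, frame size and camera stand-in are illustrative assumptions only.
import torch
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval().to(device)

# Stand-in for a single camera frame (3 x H x W, values in [0, 1]).
frame = torch.rand(3, 480, 640, device=device)

with torch.no_grad():
    detections = model([frame])[0]  # dict of boxes, labels and scores

print(detections["labels"][:5], detections["scores"][:5])
```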

MOUNTAINS OF DATA MAKE CLOUD OPERATION TRICKY

Cloud operation of AMRs is essential for any non-trivial application, but it can be expensive (and impractical) to move mountains of data from the robot to cloud servers via the internet - especially when you’re operating a whole fleet of robots with varying connectivity. It can then also be very expensive to process this data on GPUs in the cloud.

Offloading these tasks to the cloud also introduces latency, which is not ideal when a quick reaction is required from the robot, and cloud-based AI won’t work at all if the robot is in a location with low or no connectivity.

PROS AND CONS OF EDGE AI

This is where edge computing comes in. Having a GPU installed on each robot means operators pay for the hardware just once, and the robot has everything it needs to process data itself - even when it’s out of connection range.

However, until now, one challenge with running AI on the edge has been the effort involved in remotely deploying new models, configuring their sensor inputs, and updating existing models.

ADDING NEW TRICKS TO THE ROBOT: CLOUD OR IN PERSON ONLY

Say you want to add another AI capability to your robots: they’re already reading temperature gauges, but now you want all 100 robots to also identify oil spills on the ground. This new trick might require processing of a different or modified set of sensor streams. It also might require doing something different with the model outputs (for example, the output of reading an analogue temperature gauge might be to convert the picture into a digital reading and store the data, but the output of identifying an oil spill could be to message the nearest human).
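
As a loose illustration of what different inputs and outputs can mean in practice, the sketch below wires each capability to its own sensor feeds and output handler. Every name in it (Capability, store_reading, notify_nearest_operator, the topic strings) is hypothetical and not part of any real Rocos API.

```python
# Hypothetical wiring of two capabilities to different inputs and outputs.
# All names and values here are illustrative, not a real Rocos interface.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Capability:
    name: str
    inputs: List[str]       # sensor feeds the model consumes
    model: Callable         # inference function
    on_result: Callable     # what to do with the model's output

def store_reading(result):
    print(f"storing gauge reading: {result}")

def notify_nearest_operator(result):
    print(f"alerting nearest operator: oil spill at {result}")

capabilities = [
    Capability("gauge_reading", ["/camera/front"],
               lambda frames: 72.5, store_reading),
    Capability("spill_detection", ["/camera/down", "/lidar/points"],
               lambda frames: "aisle 4", notify_nearest_operator),
]

for cap in capabilities:
    result = cap.model(None)   # in practice: the latest frames from cap.inputs
    cap.on_result(result)
```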

Until now, updating an AI model has required an operator to physically visit every single robot in a large fleet and plug into each machine. In the case of cloud computing, the software can be upgraded remotely, but it is difficult and risky, because you have to update the entire robot (running a script you hope won’t break the system) and you can’t easily reconfigure inputs and outputs. It requires a lot of testing and deployment work from an operations perspective.

A NEW ABILITY TO UPDATE AI MODELS IN THE FIELD

But new technology advances have made updating AMRs on the edge incredibly easy and safe.

The Rocos software agent works in tandem with the Rocos platform, allowing the operator to remotely update inputs and outputs (such as using a different sensor, then sending that data somewhere new) without ever having to physically visit the robots - giving them new capabilities from anywhere in the world, at scale.

Rather than updating the entire robot stack, the new technology works as if it were adding an AI ‘app’ to the robot. The core system is left as is, and an ‘app’ is simply packaged and distributed, configured with its inputs and outputs.
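
As a rough sketch of what such an ‘app’ package might carry, the example below expresses a capability’s configuration as plain data. The field names, paths and values are invented for illustration; they are not a real Rocos package format.

```python
# Hypothetical manifest for an AI 'app', shipped alongside its model artifact.
# Field names, paths and values are illustrative only.
spill_detection_app = {
    "name": "spill-detection",
    "version": "1.0.0",
    "model_artifact": "models/spill_detector.onnx",  # illustrative path
    "inputs": ["/camera/down", "/lidar/points"],     # sensor feeds to subscribe to
    "outputs": [{"type": "alert", "target": "nearest_operator"}],
    "runtime": {"accelerator": "gpu", "max_memory_mb": 2048},
}

# A deployment tool could push this manifest and artifact to every robot in
# the fleet and start the app alongside the existing capabilities, leaving
# the core system untouched.
print(spill_detection_app["name"], "->", spill_detection_app["inputs"])
```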

ROBOTS NOW UPDATE WITH NO DOWNTIME

Rocos’s technology embodies a move to a more modern deployment model for robot software, revolving around microservices (an “app approach”) rather than monoliths (which could kill your whole robot if an update went wrong).

Microservice architecture lets operators deploy discrete units of functionality safely into a live running system (for example, into live trucks already on the road). This ability is a bit like being able to make phone calls while installing a phone app. Just as it’s very rare you do a full phone update, with this technology it’s rare you would have to completely reinstall your robots’ packages.
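
To show why a discrete unit is safe to swap while the rest of the robot keeps running, here is a minimal sketch of one capability packaged as its own small HTTP service that the core stack calls over localhost. The port, route and placeholder model are assumptions for illustration, not Rocos’s actual mechanism.

```python
# Minimal sketch: one AI capability as its own small service, so it can be
# stopped, replaced and restarted without touching the core robot stack.
# The port, route and placeholder inference step are illustrative only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_model(payload):
    # Placeholder inference step; a real service would load and run a model.
    return {"label": "oil_spill", "confidence": 0.91}

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(run_model(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Only this process is (re)deployed; the rest of the robot keeps running.
    HTTPServer(("0.0.0.0", 8080), InferenceHandler).serve_forever()
```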

Getting started is simple. Request a demo today.