Protected areas


At the Linley Spring Processor Conference earlier this year, Arm showed how processors designed for hard real-time control are changing. Traditionally, they have worked with limited memory. Even a gigabyte looks luxurious in the context of a 32-bit processor intended to run a car’s safety systems or an industrial robot’s motor controllers. Arm’s Cortex-R82 goes much further, with the ability to address up to 1TB.

“This is something we thought was impossible just a few years ago,” Lidwine Martinot, Arm’s IoT solutions manager, explained at the conference, noting that machine learning and analytics are largely the driving force behind this sudden increase in the amount of memory that manufacturers want to attach to real-time processors.

“It is increasingly necessary for this computation to happen very close to the source of the data. Why send an entire image back to the servers when you have the option of analyzing the images locally?” said Martinot. “The next generation of autonomous vehicles is expected to produce more than a terabyte of data per day. Today, a robot can already produce several gigabytes of data per day.”

Much of this memory will not be used directly by real-time tasks but by software running under Linux or Windows, partly because of the ready availability of open-source development environments such as PyTorch and TensorFlow, and partly because those operating systems provide easy access to huge address spaces.

For a while, this was made possible by specialized operating systems such as the real-time versions of Linux, which allow tasks with tight deadlines to be prioritized over less time-sensitive applications. That approach has largely given way to virtualization and the use of separation kernels, such as those provided by Green Hills Software, Lynx Software Technologies, and other vendors.

In this type of scheme, any access to I/O ports or to memory outside an operating system’s address space generates an exception or interrupt that is intercepted by the separation kernel or hypervisor, which can choose to block, allow, or modify the operation. Virtualizing I/O and memory access in this way makes it difficult for one guest operating system to interfere with the operation of another. That is vital for mixed-criticality systems, where one or more real-time engines controlling machinery may run alongside other modules that support cloud communications or human-machine interfaces.
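To make the mechanism concrete, here is a minimal Python sketch of the dispatch logic a separation kernel applies when a guest’s access traps. All names, addresses, and policies are invented for the example; a real kernel runs this in privileged code driven by hardware exceptions, not in Python.

```python
# Illustrative model of a separation kernel's trap handling.
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"   # let the access proceed
    BLOCK = "block"   # suppress it and fault the guest

@dataclass
class Guest:
    name: str
    allowed_ranges: list  # (start, end) physical address windows

def handle_trap(guest: Guest, addr: int) -> Verdict:
    """Runs on every trapped I/O or out-of-space memory access.
    A real kernel could also *modify* the operation, for instance by
    redirecting it to an emulated device register, before resuming."""
    for start, end in guest.allowed_ranges:
        if start <= addr < end:
            return Verdict.ALLOW
    return Verdict.BLOCK

rt_guest = Guest("motor-control", [(0x40000000, 0x40010000)])
print(handle_trap(rt_guest, 0x40000100))  # Verdict.ALLOW
print(handle_trap(rt_guest, 0x50000000))  # Verdict.BLOCK
```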

Machine learning models are expected to change regularly. A major concern is that real-world conditions drift away from the data a model was trained on. To prevent this, models are regularly retrained on new data and then distributed to the systems in the field.

Pavan Singh, vice president of product management at Lynx, said the real-time half will be treated differently. “It will use a more traditional development model: they will not be updated as frequently and will have to comply with formal safety and security rules.”

Containerization

To help deliver these upgrades to the Linux side, development teams are starting to look to techniques now common in cloud computing. In this environment, a modified form of virtualization took hold: containerization. In the server space, the motivation for containers was initially performance.

The repeated context switches that result from each virtualized I/O access quickly take their toll. Containers instead take advantage of security features built into Linux and similar operating systems that make it difficult, though not impossible, for user-level software to reach outside its own space, without incurring the overhead of frequent context switches.
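The kernel primitive this relies on fits in a few lines. The sketch below assumes Linux, glibc at "libc.so.6", and root privileges; it moves the calling process into a fresh UTS namespace so that a hostname change becomes invisible to the rest of the system. Container runtimes combine several such namespaces with cgroups.

```python
# Linux-only sketch of the kernel primitive containers build on.
# unshare() detaches this process into a fresh UTS namespace, so the
# hostname change below is invisible outside it. Needs root or
# CAP_SYS_ADMIN; "libc.so.6" assumes a glibc system.
import ctypes
import socket

CLONE_NEWUTS = 0x04000000  # new hostname/domain namespace

libc = ctypes.CDLL("libc.so.6", use_errno=True)
if libc.unshare(CLONE_NEWUTS) != 0:
    raise OSError(ctypes.get_errno(), "unshare failed (run as root?)")

socket.sethostname("container-demo")
print(socket.gethostname())  # "container-demo" -- only in this namespace
```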

A second benefit of containers for cloud users quickly emerged: the libraries and system-level functions an application needs can be bundled into a stored image, separated from the other containers running on the same processor. As long as a server blade can run the binary, the container will work more or less wherever it lands. This in turn led to the rise of orchestration, a management method in which tools such as Google’s Kubernetes automatically load, run, move, and destroy containers according to rules set by administrators.
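As a sketch of what that looks like from the administrator’s side, the following uses the official Kubernetes Python client (pip install kubernetes) to declare a two-replica deployment. The image name and labels are placeholders; the cluster, not the caller, decides where the pods actually run.

```python
# Declare a deployment and hand placement over to the orchestrator.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a cluster
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="analytics"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "analytics"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "analytics"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="analytics",
                                   image="registry.example/analytics:1.0"),
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```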

Work is no longer confined to specific servers. Instead, applications move around a data center based on the availability and cost of the hardware. The results of this trend are striking.

Analysis by security-monitoring specialist Sysdig found that nearly half of the containers deployed in the cloud last year ran for less than five minutes. Fewer than five percent lasted more than two weeks. Most performed a task and, once it was complete, were torn down, their images stored ready for future work. In the meantime, the computing resources could be used for other containers, most likely just as ephemeral.

However, Singh highlights a key difference between the likely uses in embedded systems and those in the data-center space, where the trend is towards so-called serverless computing: a deployment philosophy in which the location of the software is largely immaterial. Kubernetes and similar tools do not yet consider location important, but issues such as latency make it crucial for edge computing and embedded systems. It will not help maintain real-time behavior if the orchestrator deploys a container carrying an AI model several miles away from the robot the model is supposed to help. “In the short term, it will be a challenge to give orchestrators the intelligence to say where applications are best deployed. In the long run, it will evolve that way,” Singh said.
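One plausible way to give an orchestrator the location awareness Singh describes is a node selector that pins a pod to machines carrying a site label. The label key and value below are hypothetical; an operator would have to apply them to the relevant edge nodes.

```python
# Hypothetical site label pinning a vision-model pod near its robot.
from kubernetes import client

edge_pod_spec = client.V1PodSpec(
    node_selector={"topology.example/site": "factory-cell-3"},
    containers=[
        client.V1Container(name="vision-model",
                           image="registry.example/vision-model:2.4"),
    ],
)
```

Dropping that spec into the deployment template from the earlier sketch confines the workload to the labelled cell; richer affinity rules and latency-aware schedulers would be needed for the longer-term evolution Singh anticipates.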

Figure 1: The K3s/GitLab CI/CD pipeline automatically moves changes from build through deployment and update

Building applications

In practice, tools like Kubernetes will be used not so much to automatically move containers as to make building and deploying applications faster and easier than with traditional development environments. That is how Michel Chabroux, senior director of product management at Wind River, sees the company’s sub-100KB container engine, designed to run directly under VxWorks as well as under the Linux implementations it supplies. “The idea is to take advantage of an existing standard to simplify the deployment of embedded applications,” he said, adding that Kubernetes will mostly be used to gain better visibility of connected systems rather than to manage automated distribution and deployment.

Once in place, a containerized application will likely live on its hardware for longer than most of its cloud counterparts, but it may not always be active. Applications for predictive maintenance or analytics may run only when real-time tasks are not fully loaded. In a robot, for example, some of these containers might run only while the machine is idle or charging, taking over cores that would be used for real-time vision while it is working on the shop floor. This ability to change the roles of processor cores on the fly is one reason Arm sees architectures such as the Cortex-R82 taking over from the traditional approach of pairing an A-series processor with an R- or M-series device on a PCB. The integration offered by nanometer-scale processes allows many more workloads to be consolidated onto a single piece of silicon, and having many identical cores capable of running either real-time or server-class software maximizes the flexibility of that silicon.
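A hedged sketch of that on-the-fly role change: scale an analytics deployment between zero and two replicas as the robot’s state changes. The deployment name and the robot-state callback are hypothetical; only the scale call is real Kubernetes client API.

```python
# Scale a (hypothetical) predictive-maintenance deployment with the
# robot's duty cycle, freeing its cores while the robot is working.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

def set_analytics_replicas(n: int) -> None:
    apps.patch_namespaced_deployment_scale(
        name="predictive-maintenance",
        namespace="default",
        body={"spec": {"replicas": n}},
    )

def on_robot_state_change(busy: bool) -> None:
    # Hand the cores back to real-time vision while the robot works.
    set_analytics_replicas(0 if busy else 2)
```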

A key difference between embedded systems and cloud models lies in the container technology itself. The containers used in the cloud environment lack the security features that mixed-criticality systems require. To address this, Intel developed a concept called Clear Containers, later renamed Kata. The approach uses hardware features in its x86 processors, such as SR-IOV hardware I/O virtualization, to enforce a stronger level of isolation. The concept is now at the heart of the ACRN project which, despite attempts to port it to Arm, remains largely focused on the Intel ecosystem.
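In Kubernetes terms, a workload opts into a stronger-isolation runtime such as Kata through a RuntimeClass. The sketch below assumes a cluster whose administrator has already installed and registered a runtime class named "kata"; it is not present by default, and the image name is a placeholder.

```python
# Pod spec requesting the Kata runtime via runtimeClassName.
from kubernetes import client

isolated_pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="mixed-criticality-guest"),
    spec=client.V1PodSpec(
        runtime_class_name="kata",  # maps to V1PodSpec.runtimeClassName
        containers=[client.V1Container(name="app",
                                       image="registry.example/app:1.0")],
    ),
)
# Submitted with: client.CoreV1Api().create_namespaced_pod("default", isolated_pod)
```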

Arm has its own Project Cassini, and a more hardware-independent effort is LF Edge’s EVE, based on technology donated by start-up Zededa. Separation kernels can provide a higher level of security by enforcing full virtualization, at the cost of more software intervention.

As these technologies mature, support for hardware virtualization will likely increase in real-time processors with wide address ranges to reduce latency in guest operating systems and containers.

At the same time, the management functionality associated with containers will begin to migrate even into the real-time parts of these systems.

