ETT 2020 Recap: Accessibility and Innovation in Embedded Technologies

Devon Yablonski speaks at ETT

On January 27 and 28, Mercury Systems joined VITA and some of the leading minds in embedded systems in Atlanta, Georgia, for Embedded Technology Trends (ETT) 2020 – a comprehensive forum on the latest trends and developments in the industry. This year, suppliers of component-, board- and system-level solutions joined industry media to discuss the path forward for continued innovation in embedded systems, and how best to carry that innovation over into the government and defense sectors.

Mercury Systems is proud to work in close partnership with our peers and organizations like VITA on the most challenging and important issues facing aerospace and defense. As in years past, ETT 2020 surfaced several key observations that validate our commitment to enabling affordable public-sector access to the most advanced commercial technologies.

AI Applications Driving the Industry Forward

More than almost anything else, the promise of new AI applications for embedded computing was top of mind at ETT this year. Among other topics, attendees wrestled with how to integrate the immense computing power required by AI into existing and upcoming platforms.

On Monday, January 27, Mercury’s Devon Yablonski, Principal Project Manager for Artificial Intelligence, gave a detailed look at how new applications leveraging AI are being translated to the defense industry, specifically at the tactical edge.

Devon demonstrated how aerospace and defense platforms often operate without access to the cloud, which creates potentially significant challenges for data processing, and how a mirrored data center architecture is being built into those platforms in response. As a solution, high-performance embedded edge computing (HPEEC) transfers the data center to the edge, with the built-in security, trust, miniaturization, environmentally protective packaging and cooling required for in-theater operation. This in turn is making military platforms smarter, more independent and more autonomous.

Moore’s Law Continues to Prevail

There was also much discussion at ETT regarding how (and how quickly) embedded computing technology will develop moving forward. As advancement through transistor miniaturization approaches an end, some have questioned whether Moore’s Law – the notion that the number of transistors on a given microchip will double every two years, with simultaneous reductions in costs – will hold true.
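
As a rough, back-of-the-envelope illustration of that doubling cadence (a simple model for this discussion, not anything presented at ETT), Moore's Law can be written as a short function:

    # Illustrative only: Moore's Law modeled as a doubling every two years.
    def projected_transistors(initial_count: float, years: float) -> float:
        """Project a transistor count forward, doubling every two years."""
        return initial_count * 2 ** (years / 2)

    # Example: a 10-billion-transistor device projected six years out
    print(projected_transistors(10e9, 6))  # ~8e10, i.e. roughly 80 billion transistors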

In his January 28 presentation, Mercury’s Tom Smelker, Vice President & General Manager of Custom Microelectronics Solutions, described advances in 2.5D packaging that dispel some of those doubts, and suggested that the next phase of development will come from the heterogeneous integration of silicon chiplets – as predicted on the last page of Moore’s original paper.

Among other advancements, 2.5D packaging will help continue to drive the industry forward by improving time-to-market roughly 3x compared to monolithic design, reducing development timelines from 3-4 years to 12-18 months.
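
Taking the midpoints of those ranges as a rough check, 42 months divided by 15 months works out to roughly 2.8, in line with the approximately 3x figure.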

Open Standards Are a Must

While there was collective optimism at ETT 2020, there was also some doubt regarding the future of open standards at the chip level. At present, chiplet manufacturers design using different, sometimes proprietary, chip-to-chip interfaces, creating inherent inefficiencies that have the potential to hamper growth.

In his presentation, Tom suggested that continued stagnation on establishing universal standards might ultimately dampen projected advances in cost efficiency and development time, as companies continue to operate under multiple standards.

Of course, the question then becomes how best to move forward on open standards. While that question remains unanswered, the conversations we had at ETT 2020 – including detailed analyses of new technologies as well as best practices for consensus-building – leave us ever optimistic about the path ahead.

Rugged AI Processing

GPU Processing at the Edge

Uncompromised data center processing capability deployable anywhere

Evolving compute-intensive AI, SIGINT, autonomous vehicle, electronic warfare (EW), radar and sensor fusion applications require data center-class processing capabilities closer to the source of data origin – at the edge. This has driven the need for high-performance computing (HPC) to evolve into high-performance embedded edge computing (HPEEC). Delivering HPEEC capabilities presents challenges, as every application has its own survivability, processing, footprint, and security requirements. To address this need, we partner with technology leaders, including NVIDIA, to align technology roadmaps and deliver cutting-edge computing in scalable, field-deployable form factors that are fully configurable to each unique mission.

What it delivers: HPEEC leverages the latest data center processing and co-processing technologies to accelerate the most demanding workloads in the harshest and most contested environments. Customer benefits include:
· The ability to scale compute applications from the cloud to the edge with rugged embedded subsystems that adhere to open standards and integrate the latest commercial technologies.
· Maximized throughput with contemporary NVIDIA® graphics processing units (GPUs), Intel® Xeon® Scalable server-class processors, field-programmable gate array (FPGA) accelerators, and high-speed, low-latency networking.
· Advanced embedded security options that deliver trusted performance and safeguard critical data.

Scaled HPEEC Node
Fig 1. Compose your HPEEC solution with Mercury EnsembleSeries OpenVPX building blocks that include CPU blades powered by Intel Xeon Scalable processors, wideband PCIe switch fabrics, and powerful GPU and FPGA co-processing engines that form a truly composable HPEEC architecture. Highly rugged and with built-in BuiltSECURE SSE, these compute solutions are ideally suited to the most hostile and size, weight and power (SWaP)-constrained environments characteristic of defense and aerospace applications.

Scaling

We work closely with technology leaders to deliver a composable data center architecture that can be deployed anywhere. As a Preferred Member of the NVIDIA OEM Partner Program, our engineering teams leverage their collective capabilities to embed and secure the latest GPU co-processing resources for defense and aerospace applications. Packaged as rugged OpenVPX modules, these system building blocks are a critical HPEEC scaling element. For even greater interoperability and scalability, these GPU co-processing engines are aligned with the Sensor Open Systems Architecture (SOSA). In this age of smarter everything, SOSA seeks to place the best technology in the hands of service men and women more quickly.
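
To make the composition idea concrete, here is a minimal, purely illustrative sketch of an HPEEC node assembled from building blocks; the names and structure below are hypothetical, not a Mercury product or API:

    # Purely illustrative: an HPEEC node modeled as a composition of
    # OpenVPX-style building blocks. All names here are hypothetical.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Module:
        role: str          # "cpu", "switch", "gpu" or "fpga"
        description: str

    @dataclass
    class HpeecNode:
        modules: List[Module] = field(default_factory=list)

        def add(self, module: Module) -> "HpeecNode":
            self.modules.append(module)
            return self

    # Compose a notional node: CPU blade + PCIe switch fabric + GPU/FPGA co-processors
    node = (
        HpeecNode()
        .add(Module("cpu", "Xeon Scalable processing blade"))
        .add(Module("switch", "wideband PCIe switch fabric"))
        .add(Module("gpu", "GPU co-processing engine"))
        .add(Module("fpga", "FPGA accelerator"))
    )
    print([m.role for m in node.modules])  # ['cpu', 'switch', 'gpu', 'fpga']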

Maximized throughput

Delivering uncompromised data center performance at the edge requires environmental protection. Our proven fifth generation of advanced packaging, cooling and interconnects protects electronics from the harshest environments, keeps them cool for long, reliable service lives and enables the fastest switch fabric performance in any environment. Working closely with technology leaders like Intel enables us to package leading general-purpose processing capability with hardware-enabled AI accelerators as miniaturized OpenVPX blades that form another pillar of a truly composable HPEEC solution (fig 1).

Security

Security has always been important, and today it is critical. The closer processing moves to the edge, the more critical this requirement becomes. Proven across dozens of defense programs, our embedded BuiltSECURE™ technologies counter nation-state reverse engineering with systems security engineering (SSE). BuiltSECURE technology is extensible to deliver system-wide security that evolves over time, building in future-proofing. As countermeasures are developed to offset emerging threats, the BuiltSECURE framework keeps pace, maintaining system-wide integrity.

What’s next?

We will soon be announcing an expansion of our portfolio of NVIDIA-powered OpenVPX co-processor engines with the introduction of dual Quadro TU104 GPU-powered configurations. These rugged co-processing engines will feature enhanced BuiltSECURE capabilities, making them exportable as well as deployable anywhere. These options will have NVIDIA’s new NVLink™ high-speed GPU-to-GPU bus fully implemented to deliver uncompromised data center capability at the edge.
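
For a sense of what GPU-to-GPU connectivity looks like from software, the short sketch below checks whether two GPUs can access each other's memory directly (peer-to-peer), as they would over an NVLink or PCIe link; it assumes a system with two CUDA-capable GPUs and PyTorch installed, and is illustrative rather than specific to these modules:

    # Illustrative only: check direct GPU-to-GPU (peer-to-peer) access,
    # the capability NVLink provides at much higher bandwidth than PCIe.
    import torch

    if torch.cuda.device_count() >= 2:
        p2p = torch.cuda.can_device_access_peer(0, 1)
        print(f"GPU 0 <-> GPU 1 peer access available: {p2p}")
    else:
        print("Fewer than two GPUs visible; skipping peer-to-peer check.")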

To learn more, visit GTC and see Devon Yablonski present “GPU processing at the edge” live – #GTC19