
The world is witnessing an artificial intelligence (AI) tsunami. While the initial waves of this technological shift focused heavily on the cloud, a powerful new surge is now building at the edge. This rapid infusion of AI is set to redefine Internet of Things (IoT) devices and applications, from sophisticated smart homes to highly efficient industrial environments.
This evolution, however, has created significant fragmentation in the market. Many existing silicon providers have adopted a strategy of bolting on AI capabilities to legacy hardware originally designed for their primary end markets. This piecemeal approach has resulted in inconsistent performance, incompatible toolchains, and a confusing landscape for developers trying to deploy edge AI solutions.
To unlock the transformative potential of edge AI, the industry must pivot. We must move beyond retrofitted solutions and embrace a purpose-built, AI-native approach that integrates hardware and software from the foundational design stage.
The AI-native mandate
“AI-native” is more than a marketing term; it’s a fundamental architectural commitment where AI is the central consideration, not an afterthought. Here’s what it looks like.
- The hardware foundation: Purpose-built silicon
As IoT workloads evolve to handle data across multiple modalities, from vision and voice to audio and time series, the underlying silicon must serve as a flexible, secure platform capable of efficient processing. Core design considerations include NPU architectures that can scale, supported by highly integrated vision, voice, video, and display pipelines.
- The software ecosystem: Openness and portability
To accelerate innovation and combat fragmentation in IoT AI, the industry needs to embrace open standards. While the "language" of model formats and frameworks is becoming increasingly standardized, the ecosystem of edge AI compilers is still largely built from vendor-specific, proprietary offerings. Efficient execution of AI workloads depends heavily on optimized data movement and processing across scalar, vector, and matrix accelerator domains.
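To make the portability point concrete, here is a minimal sketch of exporting a small PyTorch model to ONNX, one of the open interchange formats that many edge toolchains accept. The network, file name, and tensor shapes are illustrative placeholders, not tied to any particular vendor's flow.

```python
import torch
import torch.nn as nn

# A tiny stand-in network; any trained PyTorch module would do.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 62 * 62, 10),
)
model.eval()

# Dummy input matching the expected on-device tensor shape (batch, channels, H, W).
dummy_input = torch.randn(1, 3, 64, 64)

# Export to ONNX, an open interchange format consumed by many edge compilers and runtimes.
torch.onnx.export(
    model,
    dummy_input,
    "edge_model.onnx",
    input_names=["image"],
    output_names=["logits"],
    opset_version=17,
)
```

From here, the same .onnx artifact can, in principle, be handed to whichever compiler or runtime a given platform provides, which is precisely the decoupling that open formats enable.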
By open-sourcing compilers, companies encourage faster innovation through broader community adoption, give developers more flexibility, and ultimately enable more robust device-to-cloud developer journeys. Synaptics is backing this approach by open-sourcing its edge AI tooling and software, including the Torq edge AI platform, developed in partnership with Google Research.
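On the device side, the same idea applies to runtimes: a portable runtime with pluggable hardware acceleration lets the application code stay the same whether or not an NPU is present. The sketch below uses the TensorFlow Lite runtime as one common example; the delegate library name is hypothetical, since each vendor ships its own, and "model.tflite" is assumed to be a model already converted for on-device use.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

MODEL_PATH = "model.tflite"  # assumed: a model already converted to the TFLite format

try:
    # "libvendor_npu_delegate.so" is a hypothetical name; real NPU vendors
    # ship their own delegate libraries under their own names.
    npu = load_delegate("libvendor_npu_delegate.so")
    interpreter = Interpreter(model_path=MODEL_PATH, experimental_delegates=[npu])
except (ValueError, OSError):
    # No NPU delegate available: fall back to CPU execution on the same code path.
    interpreter = Interpreter(model_path=MODEL_PATH)

interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy frame shaped like the model's input and run one inference.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])
print(result.shape)
```

The application logic does not change when acceleration is added or removed; only the delegate does, which is the portability property that open runtimes and compilers aim to preserve.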
- The dawn of a new device landscape
AI-native silicon will fuel the creation of entirely new device categories. We are already seeing the emergence of a class of devices designed around AI from the outset, such as wearables: smart glasses, smartwatches, and wristbands. Crucially, many of these devices are designed to operate without being constantly tethered to a smartphone.
Instead, they might soon connect to a small, dedicated computing element, perhaps a puck-like device carried in a pocket, that provides intelligence and outcomes without requiring the user to look at a traditional phone display. This marks the beginning of a more distributed intelligence ecosystem.
The need for integrated solutions
This evolving landscape is complex and demands a holistic approach. Intelligent processing must be tightly coupled with secure, reliable connectivity to deliver a seamless end-user experience. Connected IoT devices need to leverage a broad range of technologies, from the latest Wi-Fi and Bluetooth standards to Thread and Zigbee.
Chip-, device-, and system-level security are also vital, especially considering multi-tenant deployments of sensitive AI models. For intelligent IoT devices, particularly those that are battery-powered or wearable, security must be maintained consistently as the device transitions in and out of different power states. Processing, security, and power management must all work together effectively.
Navigating this new era of the AI edge requires a fundamental shift in mindset: a change from retrofitting existing technology to building products with a clear, AI-first mission. Take the Synaptics SL2610 processor, one of the industry's first AI-native, transformer-capable processors designed specifically for the edge. Running on a Linux platform, it embodies the core hardware and software principles needed for the future of intelligent devices.
By embracing purpose-built hardware, rallying around open software frameworks, and balancing self-reliance with strategic partnerships, the industry can move past the current market noise and begin building the next generation of truly intelligent, powerful, and secure devices.
Mehul Mehta is a Senior Director of Product Marketing at Synaptics Inc., where he is responsible for defining the Edge AI IoT SoC roadmap and collaborating with lead customers. Before joining Synaptics, Mehul held leadership roles at DSP Group spanning product marketing, software development, and worldwide customer support.