Developers and data scientists can create their models using any of the standard machine learning frameworks, and Qualcomm will offer an SDK to port those over to its chips.
Both systems are sampling now, and we'll likely see the first products based on them in the second quarter of this year.
Qualcomm's chips already power virtually every Android smartphone on the market, but the company has long set its sights beyond this core market and is increasingly looking at IoT as one of its next major growth areas. Earlier this year it launched its new embedded platform for IoT developers; today, it's introducing two new systems-on-a-chip for IoT, the QCS605 and QCS603, that combine a multicore ARM processor with the company's AI engine and an image signal processor.
The standard use for these chips is in smart security cameras for both consumers and industry, as well as sports cameras, wearable cameras, VR, robotics and smart displays: anywhere you need a good amount of computing power at the edge, as well as the ability to interpret images and run pre-trained machine learning models.
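Running a pre-trained model at the edge essentially means shipping fixed weights to the device and executing the forward pass locally, with no cloud roundtrip. As a minimal illustration only (the weights, features and "alert" framing below are hypothetical, not Qualcomm's actual models or SDK), a tiny logistic classifier over per-frame features might look like this:

```python
import math

# Hypothetical pre-trained weights for a tiny "raise an alert or not"
# classifier running entirely on the edge device; a real deployment
# would instead load a model ported through the vendor's SDK.
WEIGHTS = [0.8, -0.5, 1.2]
BIAS = -0.3

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def infer(features):
    """One forward pass, computed locally; no data leaves the device."""
    z = BIAS + sum(w * f for w, f in zip(WEIGHTS, features))
    return sigmoid(z)

# Example: feature values extracted from one camera frame (hypothetical).
score = infer([0.9, 0.2, 0.7])
alert = score > 0.5
```

The point of the sketch is the shape of the workload, not the model: inference is a fixed sequence of multiply-accumulates, which is exactly what a dedicated AI engine accelerates.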
Once you have all of those building blocks in place, you also need to offer developers the ability to run their software on them, of course. "Regardless of what product you are building, our customers add a lot of different software to differentiate the product they are building," said Madhavapeddy. For a VR camera, that's the software that combines the images from all the cameras, for example.
Qualcomm's IoT journey started a few years ago. Early on, this was mostly about re-purposing the company's Snapdragon mobile chips for this market. As Qualcomm's VP of product management Seshu Madhavapeddy told me, the company now believes that it has reached a critical mass of business in IoT and, thanks to that, it now also has a far better understanding of the market's needs. Instead of just reusing and tweaking existing systems, Qualcomm is now looking to create purpose-built chipsets that still leverage the building blocks of its smartphone tech but also feature custom designs.
To help developers and OEMs build on top of these new SoCs, Qualcomm also today announced a new VR camera reference design in cooperation with Altek based on the QCS605, as well as an industrial security camera reference design based on the QCS603.
For these new chips, that custom design focuses mostly on the AI engine. That part of the chip can handle 2.1 trillion operations per second for neural network inferencing. That's only a little slower than the promised performance of a Mobileye EyeQ4 chip. As Madhavapeddy stressed, it's far more efficient to bring inference to the edge, both in terms of latency and bandwidth. There's no need for the data to make a roundtrip to the data center, after all. "We anticipated that trend, we have seen that trend, and we are catering to that," he said.
All of that AI power is only useful when you have good data to work with, so the team also focused strongly on the actual image processing part of the chips. "The camera tech uses the best of the camera tech we have at Qualcomm," Madhavapeddy said. But while the underlying tech is similar, the kind of use cases the team anticipates don't quite match the expectations of smartphone users. You want to have great low-light video performance in both IoT and consumer cameras, for example, but in an IoT device like a security camera, your focus isn't on aesthetics. Similarly, the image stabilization algorithms of a sports camera that you'll attach to your mountain bike need to be tuned differently from those of a smartphone.