It is predicted that in 2020 each person on earth will create 1.7MB of data per second (roughly 147GB per day). ARM believes that 1 trillion IoT devices “will be produced between now and 2035,” with some of these devices lasting only a short time while others provide many years of service. Meanwhile, 90% of the world’s data has been created in the last two years. In the past, the lack of data volume hindered AI progress, but not anymore. The explosion in data volume, advances in AI-specific hardware (GPUs, TPUs, FPGAs), and the rapid development of open source frameworks, libraries, and tools are ushering in the fourth industrial revolution. Next year, the AI industry will see a significant jump in ML experimentation, training, and deployment across many industries.
However, much of this progress has been made running AI in the cloud, not at the edge, and the edge is the bigger market of the two. According to management consultant Chetan Sharma, the “Edge Internet Economy” will be worth $4.1T by 2030. Industry heavyweights have taken notice: the team behind PyTorch has introduced a mobile version for running ML on edge devices, and the team behind TensorFlow has introduced a “Lite” version for deploying models on IoT devices. In addition, AI startups like Latent AI have built business models around accelerating AI at the edge.
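To make the edge-deployment step concrete, here is a minimal sketch of exporting a model with PyTorch Mobile, one of the toolchains mentioned above. The choice of MobileNetV2, the input shape, and the output file name are illustrative assumptions, not details from the article.

```python
import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile

# A small, edge-friendly architecture (MobileNetV2 chosen purely for illustration).
model = torchvision.models.mobilenet_v2(weights=None)
model.eval()

# Trace the model with an example input so it can run without the Python runtime.
example_input = torch.rand(1, 3, 224, 224)
scripted_model = torch.jit.trace(model, example_input)

# Apply mobile-specific graph optimizations (e.g., operator fusion).
mobile_model = optimize_for_mobile(scripted_model)

# Save in the lite-interpreter format consumed by the PyTorch Mobile runtime.
mobile_model._save_for_lite_interpreter("mobilenet_v2_edge.ptl")
```

The saved .ptl file can then be loaded by the PyTorch Mobile runtime on an Android or iOS device.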
Latent AI, a Menlo Park startup, has developed a platform to help companies accelerate AI workflows at the edge. Deploying AI at the edge is challenging because of the hardware constraints (power, memory, and compute) of IoT devices. The startup’s answer is Adaptive AI, a philosophy and method for edge AI. In short, Latent AI helps optimize, train, and run neural networks at the edge, working within the confines of the edge hardware. The three tenets of Adaptive AI are robustness, efficiency, and agility, wherein services dynamically adjust “algorithmic performance” and computing resources based on the constraints of the specific edge device.
Best of all, customers can continue to use their own existing AI tools (BYOB). The startup currently offers two products, LEIP Compress and LEIP Compile, designed as a “modular workflow” that trains, quantizes, and deploys edge AI neural networks (a generic quantization sketch follows the product list below).
- LEIP Compress: “Compresses neural networks to balance the optimization of performance and resource usage based on the specification of the target hardware”
- LEIP Compile: “end-to-end integrated workflow, from ML training framework to an executable binary running on target edge AI hardware”
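LEIP Compress itself is proprietary, but the general idea it describes, quantizing a trained network so it fits a hardware target, can be sketched with TensorFlow Lite’s post-training quantization, which the article already references. The toy model and random representative dataset below are illustrative assumptions, not Latent AI’s workflow.

```python
import tensorflow as tf

# A small Keras model stands in for the customer's trained network (illustrative only).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# A representative dataset lets the converter calibrate activation ranges
# so weights and activations can be stored as 8-bit integers.
def representative_data():
    for _ in range(100):
        yield [tf.random.uniform((1, 28, 28, 1))]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
tflite_model = converter.convert()

# The resulting flatbuffer is roughly 4x smaller than the float32 model,
# trading a little accuracy for the memory and compute budget of an edge device.
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

This balancing of model size and accuracy against the target hardware’s resources is the trade-off LEIP Compress is pitched to automate.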
Background
- Company: Latent AI, Inc.
- HQ: Menlo Park
- Founded: 2018
- Raised: $3.5M
- # of Employees: 13
- Founders: Jags Kandasamy (CRO) and Sek Chai (CTO)
- Product: LEIP Compress and LEIP Compile
- Value Add: Helps customers extend their AI models to the edge