Cool Startup: Mythic AI Develops Breakthrough Analog AI Chips

As the AI industry grapples with the immense energy consumption and computational bottlenecks of digital GPUs, a quiet revolution is brewing. While Nvidia dominates the high-performance computing landscape, a stealthy U.S. startup named Mythic AI has been developing a radically different approach: analog AI inference chips. Instead of pushing more electrons through smaller digital transistors, Mythic leverages the fundamental physics of electricity to perform AI calculations directly within memory, promising unprecedented efficiency and speed for AI at the edge.

Mythic’s thesis is that for many real-world AI applications, especially inference on edge devices, the exact precision of digital computing is overkill. By embracing the inherent approximations of analog computation, they can deliver massive performance gains with significantly less power, opening up new possibilities for ubiquitous AI.

Background

  • Company: Mythic AI
  • Founded: 2012
  • HQ: Austin, Texas, U.S.
  • Employees: ~62 (LinkedIn)
  • Funding: ~$200M+ from investors like SoftBank, Valor Equity Partners, DFJ Growth. Valuation undisclosed, but likely in the hundreds of millions.
  • Product: Analog AI inference processors (Mythic AMP), MAPP software platform

Mythic AI was founded by Mike Henry and Dave Fick, who met at the University of Michigan, where they were researching in-memory computing. Their vision was to commercialize an approach that fundamentally redesigns how AI computations are performed. Rather than sending data back and forth between separate processing and memory units (the “Von Neumann bottleneck”), Mythic’s architecture integrates computation directly into an analog flash memory array. After years of development, the company announced its first commercial AMP products in the early 2020s, targeting demanding edge AI applications.

What Mythic AI Does

Mythic AI builds highly power-efficient and high-performance analog AI inference processors. Their core innovation lies in the Mythic Analog Matrix Processor (AMP), which uses an array of NOR flash memory cells to perform matrix multiplication, the most common operation in neural networks, in the analog domain.

The Product Stack:

  • Mythic Analog Matrix Processor (AMP): The flagship hardware chip that integrates large arrays of analog flash memory for in-memory computation. These chips are designed specifically for AI inference workloads.
  • MAPP (Mythic AI Processing Platform): A comprehensive software development kit that allows developers to port their existing AI models (trained in frameworks like TensorFlow or PyTorch) to the Mythic hardware. It includes compilers, optimizers, and a runtime environment.
  • Mythic Boards and Modules: Turnkey solutions that embed the AMP chips into various form factors, making them easier for customers to integrate into their edge devices, drones, robotics, and security cameras.
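MAPP’s actual APIs are not shown here, but the core step any such toolchain performs when porting a float model to an INT8 accelerator is weight quantization. A minimal NumPy sketch of symmetric per-tensor INT8 quantization (the function names are illustrative, not Mythic’s API):

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor INT8 quantization: map floats onto [-127, 127]."""
    scale = np.abs(weights).max() / 127.0  # one scale factor for the whole tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the INT8 codes."""
    return q.astype(np.float32) * scale

# A toy layer's weights, as a compiler might receive them from PyTorch/TensorFlow.
rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(64, 64)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Round-trip error is bounded by half a quantization step per weight.
rel_err = np.abs(w - w_hat).max() / np.abs(w).max()
print(f"max relative error: {rel_err:.4f}")
```

Real toolchains add per-channel scales, activation calibration, and accuracy-recovery passes on top of this basic step, but the mapping from floats to small integer codes is the same idea.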

The Technology: Analog Compute-in-Memory

Here’s a simplified breakdown of how Mythic’s analog chips work:

  1. Analog Flash Memory Array: Imagine a grid of tiny memory cells, similar to the flash memory in your USB drive. Each cell stores a weight (a numerical value) for the neural network as an analog electrical charge.
  2. Voltage as Input: When an input (e.g., pixel data from an image) needs to be processed, it’s converted into an analog voltage. This voltage is then applied across rows of the flash memory array.
  3. Ohm’s Law in Action: Each memory cell behaves like a tiny programmable resistor. By Ohm’s law, the current through a cell equals the applied voltage multiplied by the cell’s conductance, and that conductance encodes the stored weight — so every cell performs a multiplication in a single physical step.
  4. Current Summation: By Kirchhoff’s current law, the currents flowing into each column wire add together, so each column outputs the sum of its cells’ products — a complete dot product (multiplication and addition), computed almost instantaneously.
  5. Digital Conversion: The resulting analog current is converted back into a digital value, which represents the output of that layer of the neural network.

This entire process happens in parallel across thousands of these cells, eliminating the need to constantly shuttle data between separate processing and memory units.
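The steps above can be sketched numerically: weights become conductances, inputs become voltages, column currents sum by Kirchhoff’s law, and an 8-bit ADC digitizes the result. The array size and noise level here are illustrative assumptions, not measured Mythic figures:

```python
import numpy as np

rng = np.random.default_rng(42)

# 1. Weights stored as analog conductances in the flash array:
#    4 input rows x 8 output columns.
G = rng.uniform(0.0, 1.0, size=(4, 8))

# 2. Input activations applied as voltages on the rows.
v = rng.uniform(0.0, 1.0, size=4)

# 3-4. Ohm's law per cell (I = G * V) plus Kirchhoff summation per column
#      yields every column's dot product in one physical step.
i_exact = v @ G

# Analog non-ideality: a small random deviation in each cell's effective
# conductance (illustrative 0.5% noise).
noise = rng.normal(0.0, 0.005, size=G.shape)
i_analog = v @ (G + noise)

# 5. An 8-bit ADC digitizes each column current back to an integer code.
full_scale = G.shape[0] * 1.0                 # maximum possible column current
codes = np.round(i_analog / full_scale * 255).astype(np.int32)
recovered = codes / 255 * full_scale

print("exact dot products:", np.round(i_exact, 3))
print("recovered from ADC:", np.round(recovered, 3))
```

The recovered values track the exact dot products closely but not perfectly — which is exactly the trade Mythic makes: small, bounded error in exchange for doing the whole matrix operation in one analog step.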

Why It Matters: The Efficiency Revolution for Edge AI

Mythic AI is not directly competing with Nvidia for large-scale AI model training in data centers. Instead, they are targeting the burgeoning market for AI inference at the edge, where power consumption, latency, and form factor are critical.

  • Unprecedented Power Efficiency: By performing computations in the analog domain and in memory, Mythic’s chips can deliver substantially better energy efficiency than digital GPUs for inference. This enables sophisticated AI to run on battery-powered devices, greatly extending their operational life.
  • Lower Latency: The instantaneous nature of analog computation means results are delivered with minimal delay, crucial for real-time applications in robotics, autonomous systems, and industrial automation.
  • Small Form Factor: The integrated compute-in-memory approach reduces the overall chip area and complexity, allowing for compact designs suitable for embedded systems where space is at a premium.
  • Addressing the Inference Bottleneck: As AI models grow, running them efficiently for inference in the field is a massive challenge. Mythic’s technology provides a pathway to deploy these powerful models without requiring massive local power infrastructure or constant cloud connectivity.
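Back-of-the-envelope arithmetic shows why TOPS/W, not peak TOPS, dominates at the edge. The workload and efficiency figures below are illustrative assumptions, not vendor specifications:

```python
# Energy budget for a battery-powered smart camera (illustrative numbers).

GOPS_PER_INFERENCE = 10   # assume ~10 billion ops per frame for a vision model
FRAMES_PER_SECOND = 30
BATTERY_WH = 10           # a small 10 Wh battery

def battery_hours(tops_per_watt: float) -> float:
    """Hours of continuous inference from the battery at a given efficiency."""
    ops_per_second = GOPS_PER_INFERENCE * 1e9 * FRAMES_PER_SECOND
    watts = ops_per_second / (tops_per_watt * 1e12)  # compute-only power draw
    return BATTERY_WH / watts

for label, eff in [("analog in-memory (~7 TOPS/W)", 7.0),
                   ("edge GPU (~4 TOPS/W)", 4.0)]:
    print(f"{label}: {battery_hours(eff):.0f} hours")
```

Note this counts only compute energy; in a real device the sensor, memory traffic, and ADCs add overhead — but at a fixed frame rate, every gain in TOPS/W translates directly into runtime or thermal headroom.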

Spec Comparison: Analog Efficiency vs. Digital Power

Here’s a conceptual comparison showcasing the different strengths, rather than a direct apples-to-apples spec race, as their architectural approaches are fundamentally different.

| Feature | Mythic AMP (Analog) | Nvidia Jetson Orin (Edge Digital) | Nvidia A100 (Data Center Digital) |
|---|---|---|---|
| Architecture | Analog compute-in-memory (flash) | Digital (CPU + GPU) | Digital (GPU) |
| Primary Use | Edge AI inference (vision, NLP) | Edge AI inference/training, robotics | Data center training/inference |
| Peak Throughput | Up to ~35 TOPS (INT8) | Up to 275 TOPS (sparse INT8) | 1248 TOPS (sparse INT8) |
| Power Consumption | <5 W | 15–75 W | 300–400 W |
| Energy Efficiency | ~7 TOPS/W (35 TOPS / 5 W) | ~3–4 TOPS/W | ~3–4 TOPS/W |
| Precision | INT8 (inherently approximate) | FP32, FP16, INT8 | FP64, FP32, TF32, FP16, INT8 |
| Memory | On-chip analog flash (compute-in-memory) | External LPDDR5 | HBM2e |
| Typical Latency | Extremely low | Low | Moderate |

Key Takeaways from the Comparison:

Mythic AI’s strength lies in its power efficiency and low latency for inference workloads, particularly where every milliwatt matters. While a high-end Nvidia A100 offers vastly higher peak TOPS, it does so at a power budget (300-400W) completely unsuitable for edge devices. Even Nvidia’s edge offerings like the Jetson Orin consume significantly more power than Mythic’s chips. Mythic’s focus on INT8 precision and its analog in-memory compute allows it to deliver competitive throughput per watt, making it ideal for deployments where power and heat are primary constraints.

Challenges and Risks

  • Analog Precision: The inherent “fuzziness” of analog computation means it’s not suitable for applications requiring perfect mathematical precision (like certain scientific simulations or high-accuracy training). Mythic’s technology is optimized for inference, where some error tolerance is acceptable.
  • Software Ecosystem: While Mythic provides a robust SDK, building a developer ecosystem comparable to Nvidia’s CUDA is a massive undertaking. Developers are accustomed to digital workflows, and adapting to analog constraints requires a learning curve.
  • Manufacturing and Yields: Producing complex analog chips can present unique manufacturing challenges compared to mature digital processes, potentially impacting yields and costs.
  • Limited Flexibility: Analog chips are generally less flexible than digital ones. They are highly optimized for specific types of neural network operations, making it harder to adapt them to entirely new AI architectures that might emerge.
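The first bullet’s claim — that inference tolerates imprecision that training and scientific computing cannot — can be illustrated with a toy classifier: perturbing every weight with analog-style noise rarely changes the final argmax decision. The model and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# A toy linear classifier: 16 features -> 4 classes.
W = rng.normal(0, 1.0, size=(16, 4))
X = rng.normal(0, 1.0, size=(1000, 16))   # 1000 random "inputs"

clean_pred = (X @ W).argmax(axis=1)

# Perturb every weight with 1% analog-style noise and re-classify.
noisy_W = W + rng.normal(0, 0.01, size=W.shape)
noisy_pred = (X @ noisy_W).argmax(axis=1)

agreement = (clean_pred == noisy_pred).mean()
print(f"decisions unchanged under 1% weight noise: {agreement:.1%}")
```

The logits shift slightly, but a decision only flips when the top two classes were nearly tied to begin with — which is why classification-style inference absorbs analog error far more gracefully than workloads that need exact numerics.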

Final Thoughts

Mythic AI is a prime example of a startup pushing the boundaries of what’s possible in AI hardware by rethinking fundamental computing paradigms. By leveraging analog compute-in-memory, they offer a compelling solution to the power and latency challenges facing AI deployment at the edge. While they won’t replace Nvidia in every domain, Mythic is carving out a critical niche, proving that for many real-world AI applications, a little bit of analog “fuzziness” can lead to a lot of digital efficiency. Their success could pave the way for a new era of truly ubiquitous, low-power AI that powers everything from smart sensors to autonomous systems.
