Chips & Compute
Accelerator TCO, throughput, and supply for AI training and inference.
H100 vs H200 vs B200 TCO
Five-year TCO across the three NVIDIA generations for a training workload of a given size.
Coming soon
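A minimal sketch of the kind of model such a calculator would run: capex plus energy plus hosting over five years. All prices, TDPs, and rates below are illustrative assumptions, not vendor quotes.

```python
def five_year_tco(capex_usd, tdp_watts, power_usd_per_kwh=0.10,
                  hosting_usd_per_kw_month=150.0, years=5):
    """Per-GPU TCO: purchase price + energy + datacenter hosting.

    Assumes full utilization for the whole period and ignores
    networking, staffing, and resale value.
    """
    hours = years * 365 * 24
    energy_kwh = tdp_watts / 1000 * hours
    energy_cost = energy_kwh * power_usd_per_kwh
    hosting_cost = tdp_watts / 1000 * hosting_usd_per_kw_month * years * 12
    return capex_usd + energy_cost + hosting_cost

# Illustrative (capex, TDP) inputs -- assumed figures, not official pricing:
gpus = {"H100": (30_000, 700), "H200": (35_000, 700), "B200": (45_000, 1000)}
for name, (capex, tdp) in gpus.items():
    print(f"{name}: ${five_year_tco(capex, tdp):,.0f} over 5 years")
```

Note that with assumptions like these, opex (energy plus hosting) is a meaningful fraction of capex, which is why a per-workload comparison across generations is non-trivial.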
Inference Cost Calculator
Cost per million tokens for self-hosted inference on H100 / H200 / B200 / MI300.
Coming soon
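The core arithmetic behind such a calculator can be sketched in a few lines: amortize the hourly cost of the deployment over its sustained token throughput. The hourly rate and throughput below are placeholder assumptions.

```python
def cost_per_million_tokens(gpu_hourly_usd, tokens_per_second, num_gpus=1):
    """USD per 1M generated tokens for a self-hosted deployment.

    Assumes the quoted throughput is sustained; real serving stacks
    see lower effective throughput from batching gaps and tail latency.
    """
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hourly_usd * num_gpus / tokens_per_hour * 1_000_000

# Hypothetical example: one GPU at $2.50/hr sustaining 1,000 tok/s
# works out to roughly $0.69 per million tokens.
print(f"${cost_per_million_tokens(2.50, 1000):.2f} per 1M tokens")
```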
Memory Bandwidth Bottleneck Detector
Given a model and an accelerator, determines whether you are memory-bandwidth-bound or compute-bound.
Coming soon
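The standard way to make this call is a roofline comparison: if the workload's arithmetic intensity (FLOPs per byte moved) falls below the hardware's ridge point (peak FLOP/s divided by memory bandwidth), it is bandwidth-bound. A sketch under simplifying assumptions (the H100-class peak and bandwidth figures are approximate, and KV-cache traffic is ignored):

```python
def ridge_point(peak_flops, bytes_per_second):
    """Hardware ridge point in FLOPs/byte; below it, memory dominates."""
    return peak_flops / bytes_per_second

def decode_intensity(batch_size, bytes_per_param=2):
    """Rough FLOPs/byte for LLM decode: ~2 FLOPs per weight per token,
    with the weights streamed from HBM once per batched step.
    Ignores KV-cache reads, so this is an upper bound on intensity."""
    return 2 * batch_size / bytes_per_param

# Approximate H100-class figures: ~990 TFLOP/s dense FP16, ~3.35 TB/s HBM.
ridge = ridge_point(990e12, 3.35e12)  # roughly 300 FLOPs/byte
bandwidth_bound = decode_intensity(batch_size=1) < ridge
```

At batch size 1 the decode intensity is about 1 FLOP/byte, two orders of magnitude below the ridge, which is why single-stream decoding is overwhelmingly bandwidth-bound and why batching is the main lever for pushing toward the compute roof.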