Hardware & Frontier Models

The chips powering AI and today’s most capable models.

The Chips

NVIDIA (H100/B200)

The dominant GPU for AI training and inference. The CUDA ecosystem makes it the default for nearly all research and production work.

Paid

AMD (MI300X)

NVIDIA’s closest competitor — ROCm stack, strong for inference, growing adoption.

Paid

Google TPUs

Tensor Processing Units — Google’s custom ASICs powering Gemini and Google Cloud AI.

Paid

AWS Trainium & Inferentia

Amazon’s custom training and inference chips for cost-efficient ML at scale.

Paid

Groq LPU

Language Processing Unit — purpose-built for very fast LLM inference, with reported speeds of 700+ tokens/sec on some models (API sketch below).

Paid
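
Groq’s chips are reached through GroqCloud’s OpenAI-compatible API. A minimal sketch using the groq Python SDK, assuming GROQ_API_KEY is set and using "llama-3.3-70b-versatile" as an example hosted model (check the current catalog, as hosted model names rotate):

```python
# pip install groq
from groq import Groq

client = Groq()  # reads GROQ_API_KEY from the environment

completion = client.chat.completions.create(
    model="llama-3.3-70b-versatile",  # example hosted model
    messages=[{"role": "user", "content": "In one sentence, why is LLM inference latency-bound?"}],
)
print(completion.choices[0].message.content)
```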

Apple Silicon (M-series)

M1/M2/M3 unified memory architecture — excellent for running 7B–70B models locally (see the sketch below).

Paid
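
A minimal local-inference sketch using Apple’s MLX framework via the mlx-lm package; the checkpoint name is an example 4-bit quantized model from the mlx-community Hugging Face org, not a specific recommendation:

```python
# pip install mlx-lm   (Apple Silicon only; weights live in unified memory)
from mlx_lm import load, generate

# ~4 GB 4-bit checkpoint; fits comfortably on any M-series Mac.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

response = generate(
    model,
    tokenizer,
    prompt="Explain unified memory in one sentence.",
    max_tokens=128,
)
print(response)
```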

Cerebras WSE

Wafer-Scale Engine — single-chip AI training, eliminates inter-chip communication bottlenecks.

Paid

Frontier Models

Claude 3.5 / 4 Sonnet

Anthropic’s flagship: 200K context, excellent coding & reasoning, safety-focused (API sketch below).

Paid
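
A minimal call to the Anthropic Messages API in Python; the model alias is a point-in-time example, and ANTHROPIC_API_KEY is assumed to be set:

```python
# pip install anthropic
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # example alias; pin a dated snapshot in production
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a haiku about GPUs."}],
)
print(message.content[0].text)
```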

GPT-4o

OpenAI’s multimodal flagship — text, images, and audio in one model (see the sketch below).

Freemium
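
A minimal sketch of GPT-4o’s multimodality with the OpenAI Python SDK: text plus an image URL in a single request. The image URL is a placeholder, and OPENAI_API_KEY is assumed to be set:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder URL
        ],
    }],
)
print(response.choices[0].message.content)
```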

o1 / o3 / o4-mini

OpenAI’s reasoning models — slow deliberate thinking for math, science, code.

Paid

Gemini 1.5 Pro / 2.0

Google’s long-context powerhouse — 1M+ token context, multimodal, deeply integrated with Google’s ecosystem (API sketch below).

Freemium
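
A minimal Gemini call using the google-generativeai package; the model name is a 1.5-series example (check the model catalog for 2.0-series IDs), and GOOGLE_API_KEY is assumed to be set:

```python
# pip install google-generativeai
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-1.5-pro")  # example model ID
response = model.generate_content("Summarize the transformer attention mechanism in two sentences.")
print(response.text)
```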

DeepSeek R1

Open-weight reasoning model from China — rivals OpenAI’s o1 at a fraction of the cost.

Free Open Source

Mistral Large / 7B

French AI lab producing efficient open-weight models; Mistral 7B is excellent for local use (loading sketch below).

Recommended Open Source
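
A minimal local-loading sketch for Mistral 7B with Hugging Face transformers; the checkpoint is the instruct variant (Mistral’s repos may require accepting their license and a Hugging Face token), and device_map="auto" needs the accelerate package:

```python
# pip install transformers torch accelerate
from transformers import pipeline

# Downloads ~14 GB of weights on first run; cached afterwards.
chat = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.3",
    device_map="auto",
)

out = chat(
    [{"role": "user", "content": "Summarize Mistral 7B in two sentences."}],
    max_new_tokens=128,
)
print(out[0]["generated_text"][-1]["content"])  # last message is the assistant reply
```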

Gemma 2 / 3

Google’s lightweight open models designed for on-device and edge inference.

Open Source

Phi-4

Microsoft’s small but surprisingly capable open model — punches above its weight.

Open Source

Llama 3.3

Meta’s open-weight family — Llama 3.1 spans 8B to 405B, and 3.3 is a refined 70B. The most widely deployed open models.

Recommended Open Source