The core tools powering modern AI workflows — from chat interfaces to coding and automation.
Free tier limits and full pricing comparison are on the Home page.
ChatGPT
The world’s most widely used AI assistant by OpenAI. Excellent for writing, coding, reasoning and creative tasks.
Tags: [⭐ Recommended], [Freemium]
Claude
Anthropic’s thoughtful AI — 200K context, excellent for long documents, coding and nuanced analysis.
Tags: [⭐ Recommended], [Freemium]
Gemini
Google’s multimodal AI with a 1M-token context window. Deeply integrated with Google Workspace.
Tags: [Freemium]
Perplexity
AI search engine that synthesises real-time web results with citations. Great for research.
Tags: [Freemium]
NotebookLM
Google’s research assistant: upload papers/docs and have conversations with them.
Tags: [Free]
Ollama
Run open-source LLMs (Llama, Mistral, etc.) locally — fully private, no API needed.
Tags: [Free], [Open Source]
Microsoft Copilot
Microsoft’s AI assistant embedded in Edge, Windows and Bing. Uses GPT-4.
Tags: [Freemium]
Cursor
AI-first fork of VS Code with inline chat, codebase-wide context and agent mode.
Tags: [⭐ Recommended], [Freemium]
Windsurf
Agentic IDE by Codeium — Cascade model understands your full project context.
Tags: [Freemium]
Lovable
Build full-stack React apps from natural language. No coding required.
Tags: [Freemium]
GitHub Copilot
AI pair programmer by GitHub/OpenAI — works in all major editors and offers the tightest GitHub workflow integration.
Tags: [Paid]
Claude Code
Anthropic’s agentic coding CLI — edits files, runs tests, opens PRs from your terminal.
Tags: [Paid]
Replit
Browser-based IDE with AI coding assistant, instant deployment and multiplayer.
Tags: [Freemium]
Vercel v0
Generate shadcn/Tailwind UI components from text. Copy-paste into your project.
Tags: [Freemium]
Bolt.new
Full-stack AI app builder by StackBlitz — runs entirely in the browser.
FreemiumOpenAI’s code-focused model powering GitHub Copilot. Available via API.
Tags: [Paid]
Open-source terminal AI coding agent — works with any AI provider (OpenAI, Anthropic, Gemini). The provider-agnostic alternative to Claude Code. Run locally, point at any model.
Tags: [Open Source], [Free]
No-code automation connecting 6,000+ apps. Trigger AI actions across your stack.
Tags: [⭐ Recommended], [Freemium]
Open-source workflow automation with powerful AI nodes — self-host or use cloud.
Tags: [Freemium], [Open Source]
AI-native workflow tool designed around agent-based tasks.
Tags: [Freemium]
Python framework for orchestrating multiple AI agents working together.
Tags: [Open Source]
The most popular LLM app framework — chains, agents, tools, memory.
Tags: [Open Source]
Data framework for ingesting, indexing and querying custom data with LLMs.
Tags: [Open Source]
Managed vector database for semantic search and RAG applications.
Tags: [Freemium]
Open-source embedding database — easy to run locally for prototyping.
Tags: [Open Source]
LangChain’s observability and evaluation platform for LLM apps.
Tags: [Freemium]
Open-source vector search engine with multimodal support.
Tags: [Open Source]
Open standard for LLM tracing and observability.
Tags: [Open Source]
China has built a rapidly growing AI ecosystem largely independent of the major US labs. These tools are worth knowing — several rival or exceed Western models in specific areas.
Some Chinese AI tools may have restricted access outside mainland China, or require a Chinese phone number to register. DeepSeek and Qwen models are the most accessible internationally via Ollama and Hugging Face.
Built by a Chinese quant hedge fund (High-Flyer). DeepSeek R1 shocked the world by matching OpenAI o1 reasoning quality at a fraction of the cost. V3 is their general-purpose model. Fully open weights under MIT license.
Access: chat.deepseek.com · Ollama · Hugging Face · API
Tags: [Free], [Open Source]
Alibaba’s model family — Qwen 2.5 series covers text, code and multimodal tasks. Tongyi Qianwen is their consumer chat app. Strong multilingual performance, especially Chinese-English.
Access: tongyi.aliyun.com · Hugging Face · Alibaba Cloud API
Tags: [Freemium], [Open Source]
Chinese AI startup pushing the boundaries of context and agentic AI. Their latest Kimi K2.5 is a multimodal model with a 256K token context window, native vision, and an Agent Swarm system that coordinates up to 100 specialised AI agents simultaneously — cutting execution time by 4.5x. Kimi K2 Thinking is a 1-trillion-parameter open-source reasoning model (32B active via MoE) that can autonomously execute 200–300 sequential tool calls. Scores 50.2% on Humanity’s Last Exam at 76% lower cost than Claude Opus.
Access: kimi.com/en · Moonshot API · OpenRouter · Together AI · Hugging Face (open weights)
Tags: [Freemium], [Open Source (K2 Thinking)]
Baidu’s flagship AI — ERNIE Bot (Wenxin Yiyan) is China’s most widely deployed consumer AI assistant. ERNIE 4.0 competes with GPT-4 on Chinese-language benchmarks. Deeply integrated with Baidu Search.
Access: yiyan.baidu.com · Baidu AI Cloud API
Tags: [Freemium]
AI assistant from ByteDance, TikTok’s parent company. Based on their Skylark model family. The fastest-growing AI app in China in 2024. Strong at creative writing and entertainment use cases.
Access: doubao.com · Volcano Engine API
Tags: [Freemium]
Open-source model from Zhipu AI (spun out of Tsinghua University). GLM-4 is their flagship — bilingual, strong reasoning, widely used in Chinese enterprise AI deployments.
Access: chatglm.cn · Hugging Face · Zhipu API
Tags: [Freemium], [Open Source]
Video generation AI by Kuaishou (China’s rival to TikTok). Kling produces high-quality, physically realistic video from text or images — widely regarded as matching or exceeding Sora for many use cases.
Access: klingai.com · API
Tags: [Freemium]
Model series from 01.AI, founded by Kai-Fu Lee (former Google China head). Yi-34B was one of the strongest open-weight models when released. Focuses on bilingual English-Chinese performance.
Access: 01.ai · Hugging Face
Tags: [Freemium], [Open Source]
Consumer apps: use when you’re the user.
Examples: claude.ai, chatgpt.com, gemini.google.com, perplexity.ai
Developer APIs: use when you’re building something.
Access via: OpenAI API, Anthropic API, Google AI Studio, Perplexity API
For a full breakdown of how AI companies are structured and how other tools fit in, see the Home page ecosystem section.
You don’t need a DevOps team or cloud infrastructure. For individuals, deploying AI means building a small app that calls an AI API — and hosting it somewhere always-on. Here’s exactly how it works.
When you use ChatGPT, two things are running: the website (what you see and click) and the AI model (what actually thinks). These are separate. The model lives on OpenAI’s servers. The website is just an interface that sends your message to the model and shows the response back. When you “deploy AI,” you’re building and hosting that middle layer — the app that connects your users to the model. You never touch the model itself.
The flow: your user sends a message → your app forwards it to the model’s API → the API returns a response → your app shows it to the user.
Think of it like a restaurant: your app is the kitchen and dining room you build and run. The AI API is the food supplier. You cook and serve — they grow the ingredients.
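In code, that middle layer can be tiny. Here is a minimal sketch using only Python’s standard library — it assumes an OpenAI-style chat-completions endpoint; the URL, the `gpt-4o-mini` model name, and the `OPENAI_API_KEY` variable are conventions for that provider, so adjust all three for whichever API you use:

```python
import json
import os
import urllib.request

# The "middle layer": take a user's message, forward it to the model's
# API, and return the reply. Endpoint and payload shape assume an
# OpenAI-style chat-completions API; adjust for your provider.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(user_message: str, model: str = "gpt-4o-mini"):
    """Build the headers and JSON body your app sends on the user's behalf."""
    headers = {
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }).encode()
    return headers, body

def ask(user_message: str) -> str:
    """Send the request and pull the reply text out of the JSON response."""
    headers, body = build_request(user_message)
    req = urllib.request.Request(API_URL, data=body, headers=headers)
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

Everything your users see sits on top of `ask()` — a web form, a Slack bot, a CLI. The model itself never runs on your machine.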
Where intelligence lives — pay per use.
You call an API — they run the model. You pay fractions of a cent per message. No GPU, no servers, no setup.
Typical cost: $5–20/month for a personal project with moderate traffic.
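That estimate is easy to sanity-check yourself, since API pricing is quoted per million tokens. A back-of-envelope helper — the example rate below is a placeholder, not a real price, so plug in the numbers from your provider’s pricing page:

```python
def monthly_api_cost(messages_per_day: float,
                     tokens_per_message: float,
                     price_per_million_tokens: float) -> float:
    """Rough monthly API spend: total tokens times the per-token price."""
    tokens_per_month = messages_per_day * 30 * tokens_per_message
    return tokens_per_month * price_per_million_tokens / 1_000_000

# e.g. 200 messages/day at ~1,000 tokens each (prompt + reply),
# at a placeholder rate of $1 per million tokens:
# 200 * 30 * 1,000 = 6M tokens/month -> $6/month
```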
Your code — the middle layer you host.
Your app lives here — a website, a bot, a script — always online and accessible via a URL. These platforms take your code and run it on their servers.
Typical cost: free to $5/month depending on traffic.
Skip the code entirely — if you’re not a developer, start here.
These tools generate the app code for you from a plain English description. You describe what you want — they write and deploy the code.
Most people building personal AI tools today use these rather than writing code from scratch.
Get an API key — sign up at platform.openai.com or console.anthropic.com. Both have free credits to start. This is your access to the AI model.
Build the app — if you code, use Cursor or Claude Code to generate it fast. If you don’t, use Lovable or Bolt.new and describe what you want in plain English. Either way, your app calls the API using the key from step 1.
Deploy it — push your code to Vercel or Railway (5 minutes). Set your API key as an environment variable. You now have a live URL you can share with anyone.
Scale if needed — if your project grows and you need GPU control or private model hosting, look at Modal or RunPod for on-demand GPU access, or AWS Bedrock and Azure OpenAI for enterprise-grade managed APIs.
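The environment variable in step 3 is the one detail worth getting right: the key must never be hard-coded or committed to your repo. A sketch of the fail-fast pattern — the `OPENAI_API_KEY` name is the OpenAI convention, so use whatever name your provider expects:

```python
import os

def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Read the API key from the host's environment; fail loudly if missing."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(
            f"{name} is not set. Add it in your hosting platform's "
            "environment-variable settings, not in your source code."
        )
    return key
```

Vercel, Railway, and similar hosts all have a settings panel for environment variables; the value is injected at runtime, so the key never appears in your codebase.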
Reality check: A personal AI app serving a few hundred users costs roughly $5/month for hosting + API fees based on usage. The complexity people associate with “AI infrastructure” is the enterprise problem of running models at massive scale on private hardware. That’s not your problem as an individual.
Two slide decks that go deeper than the web pages. Preview them inline below, or download the originals to read offline.
A map of how the AI landscape fits together — labs, models, platforms and infrastructure.
Download PDF · Open full-screen
Run agentic AI workflows locally on a 16GB machine — a practical, hardware-aware walkthrough.