Confidential Inference Provider Directory
Compare providers running AI inference inside trusted execution environments. Filter by models, pricing, and API features.
9 providers · 55 models
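The starting prices listed below can be compared programmatically. A minimal Python sketch using the per-million-token figures from this directory (provider names and prices are copied from the listings; the filtering threshold is illustrative):

```python
# Starting price per million tokens ($/M) for each provider, as listed below.
providers = {
    "Tinfoil": 0.05,
    "Redpill": 0.04,
    "Chutes": 0.60,
    "NEAR AI": 0.01,
    "Maple": 1.50,
    "Privatemode": 0.15,
    "NanoGPT": 0.13,
    "PPQ.AI": 0.47,
    "Venice.ai": 0.05,
}

# Sort cheapest-first, then keep only providers starting under $0.10/M.
by_price = sorted(providers.items(), key=lambda kv: kv[1])
under_10_cents = [name for name, price in by_price if price < 0.10]
print(under_10_cents)  # ['NEAR AI', 'Redpill', 'Tinfoil', 'Venice.ai']
```

Note that "From" prices are floors for each provider's cheapest model; per-model pricing varies within a catalog.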
Tinfoil
Verifiable private inference with Intel TDX and NVIDIA H100 Confidential Computing
Models: 11 · From $0.05/M · OpenAI API · Streaming · Functions · Vision
Redpill
OpenAI-compatible confidential inference on Phala GPU TEE
Models: 15 · From $0.04/M · OpenAI API · Streaming · Functions · Vision
Chutes
Decentralized confidential inference on Bittensor with TEE
Models: 3 · From $0.60/M · OpenAI API · Streaming · Functions · Vision
NEAR AI
Private, verifiable AI on Intel TDX + NVIDIA H200 TEE with on-chain attestation
Models: 8 · From $0.01/M · OpenAI API · Streaming · Functions · Vision
Maple
Privacy-first LLM inference with AMD SEV-SNP
Models: 5 · From $1.50/M · OpenAI API · Streaming · Functions · Vision
Privatemode
EU-hosted confidential inference via Cosmian VM
Models: 6 · From $0.15/M · OpenAI API · Streaming · Functions · Vision
NanoGPT
Pay-per-prompt confidential inference with TEE-backed models and ECDSA per-request attestation
Models: 23 · From $0.13/M · OpenAI API · Streaming · Functions · Vision
PPQ.AI
Private TEE inference via hardware-attested confidential compute
Models: 7 · From $0.47/M · OpenAI API · Streaming · Functions · Vision
Venice.ai
End-to-end encrypted private AI with TEE attestation
Models: 11 · From $0.05/M · OpenAI API · Streaming · Functions · Vision
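Every provider above advertises an OpenAI-compatible API with streaming, so a request body built for one should work against another with only the base URL and model name changed. A hedged sketch of the standard chat-completions payload (the endpoint URL and model name are placeholders, not taken from any provider's documentation):

```python
import json

def build_chat_request(model: str, prompt: str, stream: bool = True) -> dict:
    """Build an OpenAI-style /chat/completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }

# Hypothetical endpoint and model name, for illustration only.
base_url = "https://api.example-provider.com/v1"
payload = build_chat_request("example-model", "Hello")
body = json.dumps(payload)  # POST this to f"{base_url}/chat/completions"
```

Because the payload shape is shared, switching providers usually amounts to swapping the base URL, API key, and model identifier while the rest of the client code stays the same.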