Local-first AI that belongs to you.
We build AI infrastructure that runs on your machine. Your conversations never leave your device.
TARX is inverting AI: from centralized to distributed, from extraction to participation, from users as products to users as owners.
| Model | Parameters | RAM Required | Best For |
|---|---|---|---|
| TX-8G | 7B | 8 GB | General use, most users |
| TX-12G | 12B | 12 GB | Complex reasoning, code |
| TX-16G | 14B | 16 GB | Maximum capability |
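To see which model fits a given machine, here is a minimal sketch that picks the largest model whose RAM requirement (from the table above) fits in total system memory. The `pick_model` helper and the `MODELS` list are illustrative only, not part of any TARX API:

```python
import psutil

# Model names and RAM requirements (GB) from the table above.
# Illustrative helper, not part of any TARX API.
MODELS = [("TX-16G", 16), ("TX-12G", 12), ("TX-8G", 8)]

def pick_model() -> str:
    """Return the largest TX model that fits in this machine's RAM."""
    total_gb = psutil.virtual_memory().total / 2**30
    for name, required_gb in MODELS:
        if total_gb >= required_gb:
            return name
    raise RuntimeError("TX models need at least 8 GB of RAM")

print(pick_model())  # e.g. "TX-12G" on a 12 GB machine
```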
Download TARX Desktop for macOS, Windows, and Linux (models included): https://tarx.com/download
Or use the models directly via Hugging Face Transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 8 GB-class model and its tokenizer
model = AutoModelForCausalLM.from_pretrained("Tarxxxxxx/TX-8G")
tokenizer = AutoTokenizer.from_pretrained("Tarxxxxxx/TX-8G")
```
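From there, generation follows the standard Transformers pattern; this is a minimal sketch, with the prompt and decoding settings as placeholders:

```python
# Minimal generation sketch using the model and tokenizer loaded above.
prompt = "Explain local-first AI in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```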
TARX is building the protocol for decentralized AI. Designed by John Wantz Jr. in Austin, Texas, USA.
This is infrastructure, not a chatbot wrapper.
Your AI. Your machine. Your data.