Works hard. Remembers everything. Answers to no one, but you.

Salmex I/O is a personal AI operations platform that lives on your machine — with persistent memory, agentic reasoning, multi-channel reach, and a safety system that escalates to you, not a corporate policy team.

See it in action

Download, set up, and watch memory, search, and scheduling work together — in under 4 minutes.

The problem with your AI

It wants you trapped

OpenAI wants you on ChatGPT. Google wants you on Gemini. Anthropic just cut off third-party tools from Claude subscriptions overnight. They build walls, not bridges — and they'd rather you stay than let you choose freely.

It won't come to you

Your real conversations happen in WhatsApp, iMessage, Telegram, Slack. Your AI never goes there. You're supposed to open their app, copy-paste context, switch tabs. You adapt to them every single time.

It answers to someone else

You ask a reasonable question and get refused — not because it's harmful, but because a policy team decided it's uncomfortable. Rate limits, their rules, their pricing. You just rent access.

Your memory is their silo

Switch from Gemini to ChatGPT and start from scratch. Your preferences, context, decisions — locked inside each provider's walled garden. Your history belongs to them.

Your AI should be infrastructure you own, not a service you rent.
It should get better the longer you use it.
It should answer to you — and only you.
That's what we're building.

What Salmex I/O does

Your Infrastructure. Your Rules.

Database, memory, channels, scheduling, config — all on your machine. Your data never leaves. Three ways to run the brain: Managed Inference (our own hosted open-source models, 5–21× cheaper than cloud AI, no keys needed), bring your own key for Anthropic, OpenAI or Gemini, or fully local with Ollama.

The brain is your choice. The infrastructure is yours.

Search. Browse. Code. Act.

Embedded coding agent that creates custom programs on the fly when a task demands it. Multi-engine search — Perplexity, Brave, Google — with deep research. Browser automation. Extensible plugin system. Not a chatbot. An agent that does things.

Skills, sub-agents, session branching — it handles the work.

Every Action Reviewed Before It Runs.

An LLM judge reviews every tool call before execution. Four risk levels. High-risk actions escalate to you for approval — via Telegram, Slack, or wherever you are. Prompt injection scanning, secret leak detection, and PII redaction ship in the binary. Not a checkbox. Three security layers deep.

Remembers You. Reaches You. Runs For You.

Persistent memory across every channel and every LLM — hybrid vector + BM25 retrieval that extracts facts, not just chat history. Multi-channel reach — Desktop, Telegram, Slack — with more channels coming. Built-in scheduler with natural language and cron. A native macOS app that feels like it belongs on your dock.

Security that ships in the binary

Every Salmex I/O install includes hardened security — running locally, with zero cloud dependency. No add-on. No extra cost.

Secret & Credential Leak Detection

Every outbound LLM request is scanned for API keys, passwords, tokens, and private keys before it leaves your machine. Catches accidental leaks before they reach any provider.

PII Redaction

Detects and masks personal data — emails, phone numbers, government IDs, credit cards — before sending to cloud LLM providers. Your private information never reaches external servers.

Prompt Injection Scanning

Every tool output and plugin response is scanned for injection attempts before reaching the agent. Essential rules ship in every release. Pro adds cloud-updated rules — multi-lingual, weekly updates, zero-day coverage.

How it's built

Eight architectural layers. JSON-RPC gateway. Lane-based concurrency. Plugin system with crash recovery. DB-driven hot-reload configuration with AES-256-GCM encrypted secrets. Built in Go and SvelteKit — not a wrapper around someone else's API.

Under the hood

Multi-provider LLM pgvector hybrid memory Cron + NL scheduling JSON-RPC 2.0 plugins AES-256-GCM config 40+ typed events Embedded coding agent
LLM providers Inference ours Anthropic OpenAI Gemini Ollama
Search Perplexity Brave Google

Cloud AI vs. your AI

Cloud AI Salmex I/O
Memory Resets every session Persistent across sessions and channels
Data Sent to their servers Never leaves your machine
Models Their models, their API Provider-agnostic, local AI support
Limits Rate-limited, tier-gated Your hardware, no usage caps
Safety Corporate policy, their veto You control, you approve or deny
Execution Chat only or locked-down actions Full and safe access to your hardware
Channels One app, one place Same agent, Telegram to Slack to Desktop
Proactive You go to them Reaches you: proactive agent, reminders
Scan before LLM Sent straight to their models PII, secrets, injection scanned before any LLM
Uptime Their infrastructure Works offline

Simple pricing. Radical savings.

Free forever for personal use. Pro unlocks the full daemon.
Credit packs for more AI — 5-21x cheaper than cloud providers.

Free

$0 forever

For individuals, students, non-profits & education

  • Unlimited memory, all features
  • Desktop + 1 external channel
  • Every LLM provider (BYOK)
  • Full safety judge, 3 active schedules
  • Secret leak & PII protection
  • Runs on your machine
Start my free plan
Add-on

Inference

From $5 per pack

Credit packs for Pro users — buy more AI messages on demand

  • Starter: $5 (~500 messages)
  • Standard: $10 (~1,200 messages) — save 17%
  • Power: $25 (~3,500 messages) — save 30%
  • 5 Inference models, up to 21× cheaper than cloud AI
  • Falls back to your own keys when exhausted
Requires Pro subscription.

Teams

Custom

For organisations needing shared agents and centralised billing

  • Everything in Pro
  • Shared agent workspaces
  • Centralised billing & seat management
  • Priority support
  • Custom integrations
Contact us Coming soon.

Token pricing — up to 21x cheaper than cloud AI.

Five Salmex I/O Inference models, powered by open-source. No API keys to manage. No rate limits.

Model Input $/MTok Output $/MTok Best for
Salmex Pro
DeepSeek V3.2
92% of Sonnet 4.6 on SWE-bench
$0.60 $1.20 Everyday balanced
Salmex Fast
Llama 4 Scout
82% of Sonnet 4.6 quality index
$0.20 $0.80 Quick answers
Salmex Reason
DeepSeek R1-0528
Matches o3 on AIME 2025
$1.50 $5.00 Hard problems
Salmex Code
Qwen3-Coder-Next
89% of Sonnet 4.6 on SWE-bench
$1.00 $3.00 Code editing
Salmex Think
Qwen3-235B
94% of Sonnet 4.6 on Arena
$0.30 $1.50 Long documents

Our pricing vs. theirs

Provider Input $/MTok Output $/MTok vs Salmex Pro
Claude Sonnet 4.6 $3.00 $15.00 5-12x more expensive
Claude Opus 4.6 $5.00 $25.00 8-21x more expensive
GPT-4o $2.50 $10.00 4-9x more expensive
GPT-5.4 $2.50 $15.00 4-13x more expensive
ChatGPT Plus $20/mo flat ~$0.025/message ~8x more per message

Starter

$5

~500 messages

~$0.01/message

Power

$25

~3,500 messages

~$0.007/message

Save 30%

All models are open-source (DeepSeek, Llama, Qwen). When packs are exhausted, you seamlessly fall back to your own API keys — you're never cut off.

Questions

Is the free tier really free forever?

Yes. No trial, no credit card, no time limit. The free tier includes unlimited memory, full agentic capabilities, the safety judge, 3 active schedules, search, plugins, and desktop chat plus one external channel. Free forever for personal, non-commercial use.

What happens when I run out of included messages?

Pro includes 100 AI messages and 50 web searches per month. If you need more, buy a credit pack (from $5). When packs are exhausted, you fall back to BYOK — bring your own API keys — which is always unlimited. You are never cut off.

Can I use my own API keys with Pro?

Absolutely. BYOK (bring your own key) works on every tier, including Pro. Credit packs are an optional convenience — you can always use your own OpenAI, Anthropic, Gemini, or Ollama keys with zero limits.

How does Salmex I/O Managed Inference compare to cloud AI?

Salmex I/O Managed Inference runs our own hosted open-source models (DeepSeek, Llama, Qwen) at 5–21x lower cost than calling Claude or GPT-4o directly. You also avoid managing API keys, rate limits, and billing across multiple providers. The trade-off: our Inference models are open-source rather than proprietary. You can always switch to your own keys for proprietary models.

What is the refund policy?

Pro comes with a 30-day money-back guarantee. If you’re not satisfied, email us within 30 days for a full refund — no questions asked. Credit packs are non-refundable once used.

Will there be a trial period?

We’re working on it. In the meantime, the free tier includes every feature except multi-channel, managed access, and commercial use — so you can evaluate the full AI agent before upgrading.

Salmex I/O is independently built. No venture capital. No ads. No data mining.
We're accountable to users, not investors.

Want someone to set it up for you?

Book a free 20-minute call with the founder. He'll install and configure Salmex I/O on your machine, live on camera.

Book a call

Why not OpenClaw?

OpenClaw hit 250,000 GitHub stars — faster than React.
Then the security audits started.

512 vulnerabilities. 8 critical.

An independent audit scored OpenClaw 1.2 out of 5 for enterprise readiness. Since then, 5+ additional CVEs have been disclosed — including command injection, SHA-1 cache poisoning, PATH hijacking, TAR path traversal, and voice-based RCE. Microsoft says it is "not appropriate to run on a standard personal or enterprise workstation."

One-click remote code execution.

CVE-2026-25253. Visit a single malicious web page, and an attacker can execute arbitrary commands on your machine. The gateway doesn't validate WebSocket origins. Over 220,000 instances have been found exposed to the public internet — 17,500+ confirmed vulnerable to RCE.

API keys stored in plain text.

Your OpenAI, AWS, GitHub, and Slack credentials sit unencrypted in ~/.openclaw by default. OpenClaw added an opt-in SecretRef system, but plaintext remains the default. RedLine, Lumma, and Vidar infostealers now specifically target that directory — Hudson Rock called it "the first observed case of an infostealer stealing the complete identity of a personal AI agent." 1,184+ malicious skills found on ClawHub, 36% containing prompt injection.

Auth disabled by default.

The gateway ships internet-accessible with zero authentication. Palo Alto Networks calls it a "lethal trifecta": access to private data, exposure to untrusted content, and the ability to communicate externally. A Meta AI alignment director's agent deleted her emails despite repeated "STOP" commands — she had to physically run to her Mac Mini to kill it. No judge system. No risk assessment. Every command runs with your full permissions.

What security experts are saying.

"Not appropriate to run on a standard personal or enterprise workstation. Should be treated as untrusted code execution with persistent credentials."

Microsoft Security Blog

"A lethal trifecta: access to private data, exposure to untrusted content, and the ability to communicate externally. Persistent memory acts as an accelerant."

Palo Alto Networks

Salmex I/O is built differently.

AES-256-GCM encrypted config.

Every secret — API keys, bot tokens, credentials — encrypted at rest in PostgreSQL. Not opt-in. Not a bolt-on. Encrypted by default from day one. DB-driven with hot-reload, change history, and rollback.

4-tier judge system.

An LLM judge reviews every tool call before execution. Four risk levels. High-risk actions require explicit approval — escalated to you via Telegram in real time. When an action is risky, the agent pauses and asks. It doesn't speed-run your inbox. Confidence scoring, decision caching, per-session context.

Real memory. Not chat history.

Hybrid vector + BM25 retrieval with confidence decay, extraction pipelines, and consolidation. Our memory system extracts and consolidates — it doesn't blindly persist injected instructions. Salmex I/O extracts facts, preferences, decisions, and entities with automatic dedup, volatility filtering, and periodic consolidation.

Go. Not Node.js.

Single binary. No npm supply chain. No 1,200-dependency node_modules. Compiled, statically typed, memory-safe. Deploys as one file. Starts in milliseconds. No runtime required.

OpenClaw Salmex I/O
Secrets Plaintext default, opt-in SecretRef AES-256-GCM in PostgreSQL
Tool safety No review, full permissions LLM judge, 4 risk tiers
Memory Chat history + basic recall Hybrid vector + BM25, RRF fusion
Runtime Node.js + 1,200 deps Single Go binary
Plugins npm skills, 1,184+ malicious found JSON-RPC 2.0, subprocess-isolated
Auth Off by default API key + HMAC verification
Scheduling Cron via skills Built-in cron + NL + DLQ
Delivery Direct send per channel Outbox, routing, retry — Pro adds quiet hours

We started with a question: can you trust an AI agent with your actual machine?

We explored running autonomous AI agents on a real dev environment. Then we read the security audits. Credentials in plain text. Auth off by default. No review on tool calls. Any malicious webpage could hijack the WebSocket and execute commands on your machine.

We wanted three things: an AI that's safe enough to trust with real hardware and real credentials, capable enough to actually act on your behalf, and fully yours — running on your machine, with your data, accountable to you.

Nothing on the market checked every box. So we built it. Salmex I/O is independent: no venture capital, no corporate roadmap, no data mining. A team building the tool we needed — and shipping it to everyone who needs it too.

Salmex I/O is ready.

Full memory. Full safety. Full AI. Free forever for personal use.

Or book a free setup call with the founder

A product by Kensington Innovation Labs

Independent product studio based in London, UK.