# I built a URL that any AI can publish to by reading the page
I kept losing work between AI sessions. Build something in Claude, need it in Cursor — gone. Copy-paste into a doc, formatting lost, context lost. Close the tab, everything gone.
So I built [wr.fi](https://wr.fi). Tell any AI "share this to wr.fi" and it reads the page, figures out the API, publishes your content, returns a short link. No API key, no plugin, no account.
Then some things happened that I didn't plan for.
---
## Try it
Open Claude, Claude Code, Codex, Cursor — anything that can make HTTP requests. Say:
> "Share this to wr.fi"
The AI reads `wr.fi`, finds the embedded API spec, POSTs your content, returns something like `wr.fi/abcd`. Humans see a rendered page with the right viewer. AI tools get raw content with metadata.
Two HTTP requests. No prior setup.
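The second of those two requests can be sketched like this. The field names below are assumptions for illustration; in practice the AI reads the real ones from the spec embedded in the page (request one), which is the whole point:

```python
import json

def build_publish_request(content: str, title: str) -> tuple[str, str, dict, bytes]:
    """Sketch of the publish request an AI would construct.

    Field names ("title", "content") are hypothetical -- the agent
    learns the actual schema from the page itself. Returns the pieces
    of the request rather than sending it.
    """
    body = json.dumps({"title": title, "content": content}).encode("utf-8")
    headers = {"Content-Type": "application/json"}
    return ("POST", "https://wr.fi", headers, body)
```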
---
## How it works
The same URL does different things depending on who's asking:
```
GET wr.fi (browser) → upload form
GET wr.fi (AI tool) → JSON API spec
POST wr.fi → create, get a URL
```
Content negotiation at the root domain. The page IS the API documentation — not a separate docs site the AI needs to find. Most platforms need 2-3 extra requests just to locate the API. Here the AI reads one page and has everything.
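The server-side decision is simple in principle. A minimal sketch, not wr.fi's actual implementation (real negotiation would also weigh `Accept` q-values and possibly the User-Agent):

```python
def negotiate(accept_header: str) -> str:
    """Decide which representation of the root URL to serve.

    Browsers send Accept headers preferring text/html and get the
    upload form; AI tools asking for JSON get the embedded API spec.
    Returns a label for the representation, not a response body.
    """
    accept = (accept_header or "").lower()
    if "application/json" in accept:
        return "api-spec"
    return "upload-form"
```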
The response includes a short URL and a two-word edit token like `Blue-Castle`. Not a UUID — two words you can say out loud, paste into a prompt, remember across sessions. Tell a different AI "update wr.fi/abcd with token Blue-Castle" and it works.
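Generating such a token is a few lines. The word lists here are stand-ins (the real vocabulary isn't published in this post); the tradeoff is that two words from, say, 2048-word lists give roughly 22 bits of entropy, which is fine for a bearer edit token only when paired with rate limiting:

```python
import secrets

# Hypothetical word lists for illustration -- a real deployment would
# use much larger, curated lists.
ADJECTIVES = ["Blue", "Green", "Silent", "Rapid", "Amber", "Quiet"]
NOUNS = ["Castle", "River", "Falcon", "Harbor", "Meadow", "Signal"]

def make_edit_token() -> str:
    """Generate a speakable two-word token like 'Blue-Castle'.

    secrets.choice gives cryptographically sound randomness; the
    human-memorable format is the feature, not the entropy.
    """
    return f"{secrets.choice(ADJECTIVES)}-{secrets.choice(NOUNS)}"
```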
---
## What I didn't expect
I started using wr.fi for my own project docs — keeping a master document that different AI sessions could read and update through the same URL.
Multiple AI sessions ended up editing the same document without knowing about each other. Claude.ai handled strategy and naming. Claude Code on my server shipped implementation and checked off items. I relayed ChatGPT's feedback through the same doc — ChatGPT caught content negotiation bugs that Claude had missed entirely. Different browsing stacks, different failure modes.
80+ versions. Four AI sessions. Two machines. Zero configuration between any of them.
The URL became shared context between AI environments that had no other way to talk to each other.
### The write conflict
Around version 35, I pushed from a stale local copy and overwrote five versions of changes. Classic last-write-wins problem, but between AI sessions that don't know about each other.
That failure led to three things:
1. Process rule — always read latest before pushing, include expected version number
2. Server-side merge warnings — response tells you if you overwrote a different session's work
3. Design insight — edit tokens need to grant read access too. Write-only access without reads guarantees data loss
The project document now contains both the bug and the fix in its own version history.
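The first two fixes amount to optimistic concurrency. A sketch under the assumption of a simple version counter, not wr.fi's actual code; whether the server warns or hard-rejects a stale write is a policy choice:

```python
def apply_update(doc: dict, new_content: str, expected_version: int) -> dict:
    """Optimistic-concurrency update for a shared document.

    The client sends the version it last read; if another session has
    written in between, the server reports the conflict instead of
    silently applying last-write-wins.
    """
    if expected_version != doc["version"]:
        return {
            "ok": False,
            "warning": (
                f"stale write: you read v{expected_version}, "
                f"server is at v{doc['version']}"
            ),
        }
    doc["version"] += 1
    doc["content"] = new_content
    return {"ok": True, "version": doc["version"]}
```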
---
## Early thoughts on the pattern
What wr.fi does — embedding machine-readable instructions in a human-facing page so AI can discover and act — could work for any website. A restaurant embeds reservation instructions in their hours page. A SaaS tool embeds ticket creation in their support page. The AI reads the page and knows what it can do there. No plugin, no registration, no manifest.
I started calling the pattern WRFI — which turned out to be what the domain already stood for: **W**eb-**R**eadable **F**unctional **I**nstructions. Sometimes the name finds you.
How it differs from what exists:
| Approach | What's missing |
|----------|---------------|
| OpenAPI | Separate document, needs discovery, often needs auth to read |
| ChatGPT Plugins | Required registration with OpenAI |
| llms.txt | Readable but not executable — no action spec |
| MCP | Capable but needs server integration and client support |
WRFI is intentionally simpler. It doesn't replace these — they're better for complex workflows. WRFI fills the gap at the bottom of the stack: AI visits a page, immediately knows what it can do.
There's a [one-page spec draft](https://github.com/wrfi/wrify). Early and probably wrong in places — feedback welcome, especially from agent builders.
---
## Security — the hard part
The obvious critique: open POST endpoint plus AI executing instructions from web pages. Two known risks chained together.
The open POST is standard. Every pastebin and contact form deals with this. Rate limiting, content scanning, abuse detection. Solved problem, needs good implementation.
The actually new problem: **what should an AI do when it autonomously discovers executable instructions on a page the user never saw?**
Traditional web: human visits page → decides to act.
With AI browsing: human asks AI → AI finds page → AI acts on instructions human never saw.
The AI becomes a trust bridge between the user and arbitrary web content. That trust boundary doesn't exist in traditional web security.
Three risks that are genuinely new:
**Transitive trust.** User trusts the AI. AI encounters a page with instructions. User never approved trusting that specific page. No human in the loop on the trust decision itself.
**Scale.** If the pattern spreads, every page an AI visits could contain executable instructions. Attackers don't need users to visit their site — they need an AI to encounter it during a browsing session.
**Action chains.** AI reads legitimate page A, follows link to compromised page B, discovers instructions on B, executes. User asked about A. Action happened via B.
What I think the guardrails should look like:
- **Same-origin restriction** — instructions can only direct actions to the same domain. Cross-domain always requires user confirmation. This is the most important rule.
- **Trust and risk levels** — instructions declare whether they need confirmation and how dangerous the action is (`read-only` through `financial`). AI tools set thresholds accordingly.
- **No action chains without consent** — following one instruction to a new domain's instructions requires stopping and asking.
- **User intent is supreme** — page instructions never override what the user asked for.
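The same-origin rule is mechanical enough to sketch. This simplifies "origin" to scheme plus hostname (a full origin check would include the port as well):

```python
from urllib.parse import urlsplit

def action_allowed(page_url: str, action_url: str) -> bool:
    """Same-origin guardrail: an instruction found on page_url may only
    target its own origin without explicit user confirmation.

    Simplified to (scheme, hostname); anything cross-origin returns
    False, meaning the agent must stop and ask the user.
    """
    page, action = urlsplit(page_url), urlsplit(action_url)
    return (page.scheme, page.hostname) == (action.scheme, action.hostname)
```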
I'm probably wrong about parts of this. There aren't established best practices for AI agents interacting with web content yet. I'd rather think about it in public than get it wrong in production.
Would especially like to hear from security researchers and agent builders on whether these are the right primitives.
---
## Details
**Built:** Push API (anonymous + authenticated), 15+ content viewers, versioning with diffs, forking, content negotiation (7/7 models tested), MCP server, CLI tool, share link import (ChatGPT/Claude/Gemini), server-side analytics, natural language tokens, SHA-256 content-addressed storage.
**Stack:** Next.js 16, SQLite, Docker, AWS Lightsail, Cloudflare. ~$49/month.
**Open source:** AGPL-3.0. [github.com/wrfi/wrify](https://github.com/wrfi/wrify)
**Self-hostable:** `docker compose up`. No public domain needed. Optional federation pushes content to wr.fi for public URLs.
**Privacy:** Finnish company, EU/GDPR by default. No tracking, no third-party cookies, no data sales. Self-hosted analytics.
---
## What I'd like to know
1. **Is the zero-config push actually useful?** What's the workflow where you'd use this over copy-paste?
2. **Is the pattern worth formalising?** Or better as an implementation detail of one product?
3. **Is the security model right?** Same-origin restriction, trust/risk levels, no-chain rules — right primitives? What's missing?
4. **Would you self-host this?** Federation lets a local instance push selected content to wr.fi — no domain, no open ports.
Live at [wr.fi](https://wr.fi). Spec at [wr.fi/wrfi](https://wr.fi/wrfi). Code at [github.com/wrfi/wrify](https://github.com/wrfi/wrify).
---
*Built in Helsinki. One person, several AI sessions, and a two-character Finnish domain that turned out to be an acronym for the thing I was building.*