In today's AI race, data access is everything. Almost every company has opened up its product to developers through an API. Even Kalshi (seriously, why does a prediction market need API access?). But here's the thing: legacy systems and smaller sites don't bother with public APIs. If you've ever wanted to build a project that tracks new products at your local Trader Joe's or pulls data from some university portal, you know the pain. You're forced to set up a custom scraping solution, fight through anti-bot blockers, and pray the site's HTML structure doesn't change next week.
That's where Docket comes in.
Just type in "Create an API for the What's New page of Trader Joe's," and Docket handles the rest. It spins up a Claude Computer Use agent that opens the site, navigates to the target section, snapshots the HTML, and transforms it into clean JSON. From there, it registers a new route, /whatsnew, and serves that data as a RESTful API. Auto-generated Swagger docs update instantly so you can test your new endpoint.
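The HTML-to-JSON step is conceptually simple: hand Claude the snapshot and ask for structured data back. Here's a minimal sketch of that transform using the official anthropic Python SDK; the model name, prompt, and html_to_json function are illustrative, not Docket's exact internals.

```python
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def html_to_json(html_snapshot: str) -> dict:
    """Ask Claude to turn a raw HTML snapshot into clean JSON."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model choice
        max_tokens=2048,
        messages=[{
            "role": "user",
            "content": (
                "Extract the product listings from this HTML as a JSON object "
                "with a `products` array. Respond with JSON only.\n\n"
                + html_snapshot
            ),
        }],
    )
    # The model is instructed to reply with JSON only, so parse the text block directly.
    return json.loads(response.content[0].text)
```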
Pulling an all-nighter at UC Berkeley's AI Hackathon, I built the entire flow solo: a Claude-powered desktop agent that emulates a user, and a Flask backend that hot-loads new blueprints without restarting. The whole thing boots from zero to live API in under a minute.
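The post doesn't detail the hot-loading mechanics, but one way to get newly scraped endpoints live without restarting Flask is to route everything through a single catch-all rule backed by a mutable registry. A rough sketch along those lines (register_endpoint and ENDPOINTS are hypothetical names, not Docket's actual internals):

```python
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Mutable registry: endpoint name -> zero-arg function returning JSON-able data.
ENDPOINTS = {}

def register_endpoint(name, fetcher):
    """Make a new endpoint live immediately, no restart required."""
    ENDPOINTS[name] = fetcher

@app.route("/<string:name>")
def dispatch(name):
    fetcher = ENDPOINTS.get(name)
    if fetcher is None:
        abort(404)
    return jsonify(fetcher())

# Once the agent has scraped Trader Joe's, it would register the data like this:
register_endpoint("whatsnew", lambda: {"products": ["placeholder item"]})
```

The upside of a dispatcher like this is that the route table never changes after startup; only the registry dict does, which sidesteps Flask's restrictions on modifying routes mid-run.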
The technical challenges were wild. Claude's Computer Use API is still in beta, so getting vision-based clicks to work reliably on macOS took hours of debugging. Flask wasn't built to unregister and re-register routes mid-run, so I had to architect a custom hot-swap system. And Claude wouldn't return valid JSON every single time, so I needed retry loops and prompt tweaks.
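The retry pattern itself is straightforward: ask, try to parse, and feed the parse error back into the prompt on failure. A sketch of the idea; the ask_claude callable stands in for whatever function actually sends the prompt to the model, and max_retries is an assumed default.

```python
import json

def get_valid_json(ask_claude, prompt, max_retries=3):
    """Retry until the model returns parseable JSON, feeding the error back each time."""
    for _ in range(max_retries):
        raw = ask_claude(prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError as err:
            # Nudge the model with the parse error and try again.
            prompt = (
                f"Your previous reply was not valid JSON ({err}). "
                "Respond again with JSON only, no prose.\n\n" + prompt
            )
    raise RuntimeError(f"No valid JSON after {max_retries} attempts")
```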
Demoing this to the Claude team was a blast. Computer Use is still in beta, so it's understandable that it was a bit unstable, but even now I can see it taking over simple tasks like UI testing.
Check out the full demo and screenshots on Devpost.