Competitive intelligence from the source.
Scraping, monitoring, and data infrastructure. We've been doing this since before it had a name.
In the late 1990s, MCWD built custom software to screen-scrape ecommerce websites and sold the resulting data to Wall Street research analysts. That was before anyone called it "web scraping" and before there was a category called "alternative data." The underlying work — reliable data collection, normalization, and delivery — has been core to the studio ever since.
Today, the clearest expression of that practice is PriceGun.io. We track 350+ beverage-alcohol retailer feeds and all 293 Total Wine locations every day, ingesting millions of listings, matching them against a canonical product catalog, and surfacing daily pricing recommendations to subscribers. The technical stack is different from what we ran in 1998. The discipline is the same.
We take data and scraping work when the target is legitimate, the use case is defensible, and the operator wants a durable system rather than a one-off extract. We're happy to explain what's feasible, what's reliable at scale, and what's likely to break.
We don't take work that violates terms of service in ways we're not comfortable with, that scrapes personal data, or that exists primarily to circumvent paywalls.
What we build
Scrapers, monitors, data pipelines, normalization and matching systems, alerting, and delivery infrastructure (CSV, API, dashboard, email).
What we target
Competitor prices, inventory, product listings, public market data, review and sentiment data, and any other structured information published on the web.
How we scope
Short-term extracts start small. Long-running monitoring systems start with a pilot — we prove reliability before you commit to anything ongoing.