The $100 Experiment: Micro-Budgets That Teach You More Than Research
Published September 18, 2025
Why $100 beats another week of research
There’s a point where more reading just makes you confident, not correct. Ask me how I know. A tiny budget, pointed at a clear question, can teach you more in 48 hours than a month of “planning.”
$100 is small enough to be painless. Big enough to get a real signal. Also? It forces you to focus. No endless tinkering—just ship, spend, learn, adjust.
What is a $100 experiment?
Simple: a time-boxed, money-capped test designed to answer a single question.
- One question: “Will people click ‘Request Demo’ for this offer?”
- One channel: search, social, newsletter swap, cold email, partner blast.
- One artifact: a landing page, a short video, a Typeform, a Calendly.
- One metric that decides: CTR, CAC-to-email, demos booked, reply rate.
- One stop rule: when you hit $100 or the metric is clearly yes/no.
That’s it. No dashboards, no grand theories. Just enough signal to decide the next move.
Guardrails (so you don’t set money on fire)
- Write the hypothesis. If you can’t write it, you’re not ready.
- Predefine success/fail. Example: “If CAC-to-email < $4 or 3+ demos book in 48 hours, proceed.” (There’s a tiny sketch of this check after the list.)
- Freeze scope. If you’re tweaking creatives every 20 minutes, it’s not an experiment, it’s avoidance.
- Ship in a day. Max two. If it takes a week, you picked the wrong test.
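Here’s a minimal sketch of that success/fail check in Python, assuming you’re logging spend, emails captured, and demos booked by hand. The $4 and 3-demo bars come straight from the example above; the function name and everything else is illustrative, not a prescription.

```python
# Minimal stop-rule check for a $100 experiment (illustrative, not a framework).
# Thresholds mirror the example above: CAC-to-email under $4, or 3+ demos booked.

def decide(spend: float, emails: int, demos: int,
           budget: float = 100.0, max_cac: float = 4.0, min_demos: int = 3) -> str:
    cac_to_email = spend / emails if emails else float("inf")
    if cac_to_email < max_cac or demos >= min_demos:
        return "proceed"        # hit either success bar: clear yes
    if spend >= budget:
        return "stop"           # budget spent, no signal: clear no
    return "keep running"       # not enough data yet

# Example: $62 spent, 19 emails, 1 demo -> 62 / 19 is about $3.26 per email -> "proceed"
print(decide(spend=62, emails=19, demos=1))
```

Writing the rule down before launch (as code, or just in your notes) is the whole point: it keeps day-7 you honest.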
Ten $100 experiments you can run this week
- Search ads → 1 landing page → email capture. $10/day for 10 days.
- Paid social (one audience, two creatives) → Calendly. Book at least 2 calls.
- Newsletter sponsor swap (free) + $100 in gift cards for 5 customer interviews ($20 each).
- Cold email sprint: 100 hand-picked prospects, a 1-sentence pitch, $100 in Clearbit/Clay credits.
- Retargeting only: did they care enough to come back? $100 says yes/no.
- Price test: put $19 vs. $49 on two otherwise identical pages. Split the ad spend evenly between them.
- Feature promise test: same product, two headlines. Which promise wins?
- Onboarding “nudge”: $100 in credits for in-app messages/tooltips. Does activation jump?
- Partnership poke: split $100 into bounties for 5 creators to try the product and tweet an honest take.
- Content wedge: write one post meant to rank for competitive keywords, then boost it with $100. Do signups come from search + paid assist?
How to design the loop
The loop: 1) Hypothesis → 2) Artifact → 3) Traffic → 4) Metric → 5) Decision.
- Hypothesis: “Ops managers will book a demo if we promise 2-hour onboarding.”
- Artifact: 1-page site with a proof snippet and ‘Book Now’ button.
- Traffic: 3 keywords, exact match. Or 1 lookalike audience. Keep it tight.
- Metric: demo bookings. Not likes, not time-on-page.
- Decision: greenlight next step, pivot promise, or stop.
Repeat weekly. Boring, yes. Boring works.
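If it helps to make the loop concrete, here’s a tiny sketch of an experiment record; the field names mirror the five steps, the values are pulled from the example above, and none of it is prescribed. If you can’t fill every field, you’re not ready to spend.

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    hypothesis: str     # one sentence, falsifiable
    artifact: str       # the one page / video / form
    traffic: str        # the one channel, kept tight
    metric: str         # the single number that decides
    success_bar: str    # written before launch, never after
    decision: str = ""  # filled in on decision day: scale, pivot promise, or stop

demo_promise = Experiment(
    hypothesis="Ops managers will book a demo if we promise 2-hour onboarding.",
    artifact="one-page site with a proof snippet and a 'Book Now' button",
    traffic="3 exact-match keywords",
    metric="demo bookings",
    success_bar="3+ demos booked in 48 hours",
)
```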
Reading the signals (without lying to yourself)
- Weak clicks, strong demos: promise is right, audience probably right. Scale cautiously.
- Strong clicks, zero conversions: curiosity without intent. Rework the offer or narrow the ICP.
- Low CTR, high conversion: your ad/subject line is off; the product pitch is probably fine.
- Great replies… that go nowhere: you’re asking for too much too soon. Add a smaller step.
If you need to squint to call it a win, it’s not a win. It’s okay. That’s the point of $100—cheap clarity.
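If you want the read-out written down before you peek at the numbers, here’s a rough sketch that maps click-through and conversion rates onto the first three signals above. The 1% and 2% cut-offs are placeholders, not benchmarks; set your own before the campaign starts so you can’t move the goalposts later.

```python
# Rough read-out for an ad-driven $100 test (placeholder thresholds, pick your own).

def read_signal(ctr: float, conversion: float,
                ctr_bar: float = 0.01, conv_bar: float = 0.02) -> str:
    strong_clicks = ctr >= ctr_bar
    strong_intent = conversion >= conv_bar
    if strong_clicks and strong_intent:
        return "clear win: greenlight the next step"
    if strong_clicks and not strong_intent:
        return "curiosity without intent: rework the offer or narrow the ICP"
    if not strong_clicks and strong_intent:
        return "promise is right, creative is off: fix the ad, scale cautiously"
    return "no pull at this promise: pivot or stop"

# Example: 0.6% CTR, 3.5% of visitors book a demo
print(read_signal(ctr=0.006, conversion=0.035))
```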
Common traps (I fall into these too)
- Changing two variables at once. Now you’ve learned… nothing.
- Declaring victory on soft metrics. “People liked the video!” Cool. Did they buy?
- Chasing edge cases. One whale saying yes is not a market.
- Spending $100 to justify a build you already wanted to do.
Set a calendar reminder for the decision moment. Then actually decide.
A 7-day $100 playbook
- Day 1: Write hypothesis, define success/fail, pick channel, draft the artifact.
- Day 2: Ship it ugly. Publish the page. Create the campaign. Don’t overthink.
- Days 3–6: Let it run. Minor fixes only (typos, broken links). Take notes.
- Day 7: Decide. Scale, pivot the promise, or stop. Document the lesson in two sentences.
Do four of these in a month and tell me you didn’t learn more than last quarter’s research bender.
The meta-win
$100 experiments build a muscle: bias to action. You stop worshiping ideas and start trusting data (even small, noisy data). You also collect artifacts—pages, scripts, segments—that compound. Next time is faster.
Related
- When to Quit Your Side Project (and When Not To)
- Why SEO is Slow but Worth It
- The Myth of ‘Perfect Product’ Before Launch
Final note
If you want help picking high-clarity experiments and turning results into weekly momentum, you’ll fit right in at Indie10k. We share playbooks, compare notes, and keep each other honest—kindly, but firmly.