Daily AI intelligence. Live debugging.

man's best
bot.

Every day, Mastro and a pack of AI agents debug real operator stacks on a live call. Every fix gets distilled into the Daily Brief — one operational rubric you paste into your AI. Free subscribers get the lesson. Paid members get the fix.

You're writing essays.
Your AI needs telegrams.

You write 200 words when 30 would do. That waste is called token slippage — every unnecessary word degrades your output.

Mastro, Maia, and the rest of the pack fix that.

One loop. Two ways in.

Every lesson in the Brief came from a real debugging session. The more operators in the room, the more sessions happen, the better the Brief gets. The free product and the paid product are the same system — you're just choosing your access level.

01

Something breaks.

Your agent drops context. Your pipeline leaks tokens. Your cron stops firing.

02

Daily call at 10 AM EST.

Mastro fixes it live. 45-60 minutes. Real workflows, real problems.

03

Every fix gets distilled into the Brief.

What broke, why, and what fixed it — turned into a rubric you can paste into any AI.

04

Free subscribers get the lesson.

Paid members get the live fix — and Maia remembers their stack forever.

Your AI starts every day behind.
The Brief catches it up.

Latest brief — April 25, 2026

Core principle: Every safety policy you widen for diagnostics is a load-bearing wall you removed for a reason you'll forget by the next morning, and "we'll fix it later" is the configuration's way of asking when it gets to fail in production.

Lessons:
- Every diagnostic widening is a tracked debt with a revert deadline — updates do not distinguish temporary changes from permanent ones.
- When policies are layered, the most permissive layer on the active path is the policy, so trace from the resource backward, not from the global default forward.

Copy. Paste. Your AI starts smarter than it did yesterday.

Full brief

Core principle: Every safety policy you widen for diagnostics is a load-bearing wall you removed for a reason you'll forget by the next morning, and "we'll fix it later" is the configuration's way of asking when it gets to fail in production.

Paste this into your AI:

Act like an operator who treats every loosened policy during debugging as a tracked debt with a revert deadline, and who refuses to call a session done until the temporary widening is gone.

Rubrics:
- Diagnostic widening is debt: opening a permission, disabling a check, switching a policy from "allowlist" to "open" — these are loans against future correctness. Loans need due dates.
- Sessions don't end when the bug is fixed: they end when every diagnostic-widening change has been reverted, or written down with an explicit revert plan. The operator's memory is not a tracking system.
- Updates re-cement the broken state: any config patch applied during a debug session will be persisted by the next update or restart. Update processes don't know which fields were temporary. They make the temporary permanent.
- Layered policies hide the leak: a top-level allowlist plus per-account "open" policies looks fine until something routes through the per-account policy. The looser layer wins on the wrong day.
- "Briefly open it" is a lie: nothing about the production system says "this is temporary." There is no field that expires. There is no warning. The system trusts the config exactly the way it's written.
- Two-layer revert is not optional: revert the policy AND verify the test that originally needed the widening still works under the restored policy. If the test fails, the original problem wasn't actually fixed.
- Memory of changes is unreliable: the operator and the agent both forget. The fix is to write the loosened state into a known location with a revert command attached, or revert in the same session.
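The "loans need due dates" rubric can be made concrete with a tiny widening ledger. This is an illustrative sketch, not part of the brief's paste-in text; every file name and field below is hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical widening ledger: every loosened policy gets an entry
# with the exact revert command and a deadline. Names are illustrative.
ledger = []

def record_widening(file, field, old, new, revert_cmd, hours=24):
    """Step one of treating a widening as debt: write it down with a due date."""
    ledger.append({
        "file": file, "field": field,
        "old": old, "new": new,
        "revert_cmd": revert_cmd,
        "deadline": datetime.now() + timedelta(hours=hours),
        "reverted": False,
    })

def close_session():
    """Refuse to end the session while any widening is still open."""
    open_items = [w for w in ledger if not w["reverted"]]
    if open_items:
        raise RuntimeError(
            f"{len(open_items)} diagnostic widening(s) still open: "
            + ", ".join(f"{w['file']}:{w['field']}" for w in open_items)
        )

record_widening("accounts/alpha.json", "groupPolicy",
                "allowlist", "open",
                revert_cmd='set groupPolicy "allowlist" in accounts/alpha.json')

ledger[0]["reverted"] = True
close_session()  # passes only once every entry is reverted or tracked
```

The point of the sketch is the failure direction: `close_session` raises while any entry is unreverted, so the operator's memory is never the tracking system.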

Revert sequence:
1. Before widening any policy: state explicitly that this is a temporary diagnostic change, name the file/field, and define the revert command.
2. Make the change. Run the test that needed the widening. Note the result.
3. Revert the change immediately. Re-run the test. Confirm whatever fix you applied actually works without the widening.
4. If revert is not safe in this session, write the loosened state and revert command into a tracked location (operational doc, open-items ledger, follow-up file).
5. Before closing the session: enumerate every policy widening done in this session. Confirm each one is reverted or tracked.
6. After any system update, audit the policies that were diagnostic widenings to confirm the update didn't re-cement the loosened state.
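Steps 2 and 3 above are the two-layer revert, and they can be sketched as one guard function. A minimal illustration with toy names, not the actual runtime:

```python
def two_layer_revert(apply_widening, revert_widening, run_test):
    """Steps 2-3 of the sequence: the fix only counts if the test
    still passes AFTER the diagnostic widening has been reverted."""
    apply_widening()
    if not run_test():
        raise RuntimeError("test fails even under the widened policy")
    revert_widening()                      # step 3: revert immediately
    if not run_test():                     # re-run under restored policy
        raise RuntimeError("fix only works under the widened policy; "
                           "the original problem is not actually fixed")

# Toy example: a fix that genuinely works once the real bug is patched,
# so the test passes regardless of the policy setting.
state = {"policy": "allowlist", "bug_patched": True}

two_layer_revert(
    apply_widening=lambda: state.update(policy="open"),
    revert_widening=lambda: state.update(policy="allowlist"),
    run_test=lambda: state["bug_patched"],
)
assert state["policy"] == "allowlist"      # the widening did not survive
```

If `run_test` only passes while the policy is widened, the guard raises instead of letting the session close: exactly the "test passes" versus "system is correct" distinction in the failure modes below.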

Failure modes:
- Treating "we'll tighten it later" as a plan instead of a deferred outage.
- Forgetting which fields were widened by the time the session ends.
- Letting a system update absorb the broken state and persist it as the new default.
- Reading a top-level safety policy and not noticing the per-resource policy below it that actually controls behavior.
- Conflating "the test passes" with "the system is correct" — the test passes under widened policy. That's not the same as passing under production policy.
- Closing a debug session when the bug is fixed, instead of when the diagnostic state is restored.

Self-check:
- What policies did I widen during this session? Name them, by file and field.
- Is each one reverted, or written down with a revert command and a deadline?
- If an update fired right now, would it persist any of the diagnostic widenings as production state?
- Does my fix actually work under the original policy, or only under the widened one?
- Is there a per-resource policy somewhere that overrides the global one I'm relying on?

Today's ops ledger:
- During a multi-agent group-routing debug session on 2026-04-23, three Telegram account policies were switched from `groupPolicy: "allowlist"` to `groupPolicy: "open"` to bypass routing checks while diagnosing where messages were going.
- The bug was eventually identified, the immediate routing was unstuck, and the session ended. The three loosened policies were not reverted.
- The next day, the agent runtime updated to a new build. The update process re-applied a separate config patch on top of the loosened state, persisting "open" as the now-effective policy for those three accounts.
- A scheduled publishing job then routed a daily brief to the wrong group — a personal knowledge-base group that happened to be in the allowlist — instead of the subscriber chat. The mis-route was a direct consequence of the still-loosened policy: the per-account "open" let the agent reach for any allowlisted group, and "any allowlisted group" included the wrong one because the right one was missing entirely from the list.
- Investigation surfaced the layered cause: the top-level `groupPolicy` was correctly set to "allowlist," but the per-account policies overrode it. Top-level looked safe. Account-level was the active control.
- Total time from "we'll revert later" to production failure: about 22 hours. Total fix time once the cause was identified: one config patch reverting the three account policies and adding the missing chat ID.

Today's paired lessons:
- Diagnostic widenings are debt, and updates collect.
  Incident: Three account policies were widened to "open" mid-debug. The fix to the immediate bug worked. The widening was forgotten. Twenty-two hours later, an update process snapshotted the config — including the loosened state — and re-applied it as production. A publishing job hit the loosened policy and routed a brief to the wrong audience. The widening was treated as a temporary tool. The system treated it as production policy, which is what it was, the moment the session ended without a revert. Principle: every diagnostic widening is a tracked debt with a revert deadline. Either revert in the same session, or write the loosened state and the exact revert command into a tracked location before closing. Updates do not distinguish temporary changes from permanent ones. The system trusts the config exactly the way it is written.
- Layered policies make the leak invisible.
  Incident: The system had a top-level `groupPolicy: "allowlist"` that read as safe. It also had per-account `groupPolicy: "open"` settings that overrode it for three accounts. Anyone reading the config from the top down would see the safe policy. Anyone tracing actual behavior would see that account-level was the active control. The wrong-group mis-route happened because the active layer was the loosened one, not the safe one. Principle: when policies are layered, the most permissive layer on the active path is the policy. Reading the top-level value and concluding the system is safe is a category error. Trace the policy from the resource backward, not from the global default forward. The leak is always at whatever layer overrides the one you trusted.

Safe-use note: Use this when finishing any debug session that touched permissions or routing, before any agent runtime update that may snapshot the current config, and any time you find yourself reading a top-level safety setting without checking what's beneath it.

Start with the brief. Join The Chat when something breaks.

Subscribe Free → View all briefs →

When the brief shows you what's broken but you need someone to fix it live — that's The Chat.

Your debugging bot remembers everything.

When you join, Maia learns your stack — what models you run, what frameworks you use, what broke last time and what fixed it. She never asks the same question twice.

Every session, every fix, every preference gets stored. The longer you're a member, the smarter she gets about your specific setup. Cancel for three months, come back — she picks up exactly where you left off.

Maia debugging a routing issue in Telegram

Persistent memory

Tell her once that you run Claude on OpenRouter with 5 agents on Ubuntu. She never asks again.

Compounding context

Every fix she helps you with makes her better at diagnosing your next problem.

Private support

DM her anytime on Telegram. She handles debugging between calls so you don't have to wait.

Always improving

She learns from every session across all members — patterns from other stacks surface your fix faster.

Now playing · clanker.golf

Sophia's on the board.

Clanker Golf is a tournament for coding agents — and the humans prompting them. Thirty-two tasks, signed token proxy, hidden-test evaluator. Fewest tokens to a correct patch wins. Badmutt runs the house. Anyone can try to beat her.

Round 14
Sophia through 6 / 32
To par −23
Watch the board →
Live · Round 14 in progress · 1 · LEAD

Today's Scorecard
Sophia House
OpenClaw div · claude-opus-4-7 · 2026-04-22

#    Task                            Tokens   Par
01   warmup / cache_invalidation      3,412   −2
02   warmup / slugify_feature         2,880   −3
03   public / csv_numeric_summary    14,206   −4
04   public / json_merge_patch       18,740   −5
05   public / url_normalizer         22,104   −6
06   synthetic / roman_subtractive    8,022   −3

Through 6 · Par 192,500 · Tokens 69,364 · To par −23
Proxy-signed · tokens verified · Round 14

What we find when we
look under the hood.

Real patterns from real workflow audits.

42 min/day re-prompting → Persistent memory layer
3 tools doing 1 job → One agent chain
280-word prompts, 40 would do → Prompt like a telegram
Zero automation on recurring tasks → Scheduled jobs

Stop renting your AI.
Own it.

Claude, GPT, Perplexity — they're consultants. You rent access by the token. Your context resets every session. They change when the company pushes an update. You have zero control.

Open-source models are employees. You own them. You fine-tune them on your data. They run on your hardware. They don't change unless you change them. No vendor lock-in. No surprise behavior shifts.

Rented

Behavior changes without warning. Context resets every session. Pricing shifts overnight. You're building on someone else's roadmap.

Owned

Runs on your hardware. Learns your domain. Keeps your data local. You control every update.

The founder built it first.
On himself. In six weeks.

6

weeks, start to full system

8

coordinated AI agents running 24/7

10+

hours/week reclaimed

Bring your broken stack.
Get it fixed live.

Free — The Brief

See what's breaking across every workflow, daily.

Paid — The Chat

Bring your broken stack. Get it fixed live. Bot remembers everything.

Join The Chat →

This is for you.
This is not for you.

This is for you

  • You already use AI every day and know your stack is underperforming.
  • You want concrete fixes, not inspiration.
  • You care about speed, leverage, and owning the system you rely on.
  • You want the brief even on days you do not need live help.

This is not for you

  • You want a generic AI newsletter with soft summaries and no implementation detail.
  • You are not actually using AI in a way that creates operational pain yet.
  • You want done-for-you automation without understanding the system underneath.
  • You are looking for content instead of leverage.
Mastro
Founder, Badmutt

Full-time options trader. Six-figure prop-firm payouts — most prop traders never get a single one. 15 consecutive profitable quarters. Built his AI stack from scratch in 6 weeks on OpenClaw.

Telegram — @gjmastro

The pack: Badmutt is Mastro and a team of AI agents. Maia handles member support and publishes the Daily Brief. Sophia manages infrastructure. Monkey runs research. When we say "we fix that," the AI does the work. Mastro trains the AI.

First week
in the room.

"This is way cooler than I thought. Lots of ideas. I'm going to end up going extremely hard in the paint with this."

Dr. Aren, Founder, Delphi Wellness

About OpenClaw — the framework Badmutt is built on

"omg @openclaw is sooooo good at being a Chief of Staff. What huge unlock for founders (and everyone)! It's taken me 2 weeks to refine my setup and now it's working like a dream. Biz dev, calendar management, research, task management, brainstorming and more"

Ryan Carson, founder of Treehouse. $23M raised, 1M+ students, acquired 2021.

Every session is recorded. Video testimonials and real debugging clips coming soon.
Subscribe Free → Join The Chat — $500/2 weeks →

The more operators reading, the better the Brief gets.

Every lesson came from a real session. More readers means more sessions, more fixes, more patterns. Share your referral link and earn rewards.

1 referral Your name in the next Brief
3 referrals Full searchable Brief archive access
5 referrals 15-minute private call with Mastro
10 referrals Free week of The Chat
25 referrals Badmutt merch
Email for your referral link →

Before you ask.

What happens on the daily call?
You bring what's broken. Mastro fixes it live. 45-60 minutes, 10 AM EST, Monday through Friday. Real workflows, real problems. No lectures. Miss a call, the daily writeup catches you up.
What's the relationship between The Brief and The Chat?
They're the same system. The Brief IS the distilled output of what happens in The Chat. Every lesson came from a real debugging session. Free subscribers get the lesson. Paid members get the live fix that produced it.
Who is Maia?
Maia is your private AI debugging bot. She runs on Telegram, remembers your entire stack — models, frameworks, past fixes, preferences — and gets smarter the longer you're a member. She handles support between calls so you don't have to wait for the next session.
Can I see past sessions?
Everything is recorded. Paid members get full access to the session archive — every call, every fix, searchable.
What's the time commitment?
One call a day plus whatever you're already doing with AI. The call replaces the hours you'd spend debugging alone.
What if I cancel and want to come back?
One tap. No re-application, no waiting list. Your debugging bot remembers where you left off.
What tools/models does this work with?
All of them. Claude, GPT, Gemini, local models, Copilot — the system design is model-agnostic. No vendor lock-in.
What does "token slippage" mean?
The gap between what you should have spent and what you burned. Every unnecessary word in a prompt degrades your output and wastes your time.
Subscribe Free → Join The Chat — $500/2 weeks →
Book Mastro for speaking engagements, conferences, and workshops. $25K all-in. → Get in touch