7-day loan cycles: how we still move daily when outcomes lag

Neeraj Kushwaha
One of the first things fintech teams tell us on calls is this:
“Our loans take about 7 days from lead to disbursal. By the time we know what worked, the money is already gone.”
They could not afford to wait a full week before touching budgets. At the same time, acting only on day one signals kept pulling them toward campaigns that looked good early and died later.
We had to find a way to move daily while thinking weekly.
Living with a 7-day delay
If you map their funnel, it looks something like this:
Day 0: click and lead
Day 0–1: basic checks, first calls
Day 1–3: deeper underwriting
Day 3–7: final approval and disbursal
Meta and Google see the first part very clearly: clicks, leads, app installs, even “sales” if you set that as the event. What they never see is the real drop-off that happens between approval and money out.
So the team was stuck:
if they optimised on early ROAS or cost per lead, they scaled campaigns that drove soft approvals
if they waited only for disbursal data, they reacted too slowly and let bad patterns run for days
We realised we needed two different clocks.
Two clocks, one brain
Clock 1: daily, for course corrections
On this clock, we look at:
click through rate
cost per lead
early form drop-offs
basic approval rate in the first 24–48 hours
We use this for “light touch” moves:
cap spend on obvious underperformers
test new creatives
keep experiments under control
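To make the daily clock concrete, here is a minimal sketch of what the light-touch checks could look like. The metric names and threshold values are illustrative assumptions, not our actual production rules or any real ad-platform API.

```python
# Illustrative "Clock 1" daily check. Field names and thresholds are
# hypothetical; a real setup would pull these from the ad platforms.

DAILY_THRESHOLDS = {
    "ctr": 0.008,             # flag if click-through rate falls below 0.8%
    "cost_per_lead": 450.0,   # flag if CPL rises above this (local currency)
    "form_completion": 0.35,  # flag if fewer than 35% of starters finish
    "early_approval": 0.10,   # flag if <10% of leads approved within 48h
}

def daily_flags(campaign: dict) -> list[str]:
    """Return light-touch actions suggested by day-one/day-two signals."""
    flags = []
    if campaign["ctr"] < DAILY_THRESHOLDS["ctr"]:
        flags.append("weak CTR: review creatives")
    if campaign["cost_per_lead"] > DAILY_THRESHOLDS["cost_per_lead"]:
        flags.append("high CPL: cap spend")
    if campaign["form_completion"] < DAILY_THRESHOLDS["form_completion"]:
        flags.append("form drop-off: check landing page")
    if campaign["early_approval"] < DAILY_THRESHOLDS["early_approval"]:
        flags.append("soft leads: tighten targeting")
    return flags

# Example: a campaign with weak CTR and an expensive lead
flags = daily_flags({
    "ctr": 0.005, "cost_per_lead": 520.0,
    "form_completion": 0.50, "early_approval": 0.12,
})
```

The point is not the exact numbers but the shape: cheap, fast signals gate small budget moves, never big ones.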
Clock 2: weekly, for real judgement
On this clock, we work with cohorts. A cohort could be “all leads from March 1” or “all leads from Campaign X in Week 10.” We watch how many of those go from:
lead → approval
approval → disbursal
Now we can see which campaigns and segments actually print money once the 7-day cycle is done.
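The weekly clock boils down to simple cohort arithmetic. Here is a sketch of that computation; the lead records and their status field are hypothetical stand-ins for whatever the CRM actually stores.

```python
# Sketch of the "Clock 2" weekly cohort view. Record shape is illustrative;
# in practice these rows would come from the CRM, keyed by lead date or campaign.
from collections import defaultdict

def cohort_rates(leads: list[dict]) -> dict:
    """Per-cohort lead → approval → disbursal conversion rates."""
    counts = defaultdict(lambda: {"leads": 0, "approved": 0, "disbursed": 0})
    for lead in leads:
        c = counts[lead["cohort"]]
        c["leads"] += 1
        if lead["status"] in ("approved", "disbursed"):
            c["approved"] += 1          # disbursed leads were approved first
        if lead["status"] == "disbursed":
            c["disbursed"] += 1
    return {
        cohort: {
            "lead_to_approval": c["approved"] / c["leads"],
            "approval_to_disbursal": c["disbursed"] / c["approved"] if c["approved"] else 0.0,
        }
        for cohort, c in counts.items()
    }

leads = [
    {"cohort": "campaign_x_w10", "status": "disbursed"},
    {"cohort": "campaign_x_w10", "status": "approved"},
    {"cohort": "campaign_x_w10", "status": "rejected"},
    {"cohort": "campaign_y_w10", "status": "approved"},
]
rates = cohort_rates(leads)
```

Run weekly over closed cohorts, this is what separates campaigns that generate leads from campaigns that generate loans.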
The key shift for us was to accept that these clocks will never sync perfectly, but they can talk to each other.
What we ask our agents to watch
Once we plugged both data layers into Thirdi, we gave our agents three simple jobs:
1. Track early signals for each new campaign
When a fresh campaign launches, we watch day-one and day-two signals for signs we have seen before: weak click-through, poor form completion, odd geography patterns.
2. Learn from past cohorts
For older campaigns, we look at full 7-day disbursal curves inside the CRM. Over time, the agent learns: “when early signals look like this, final disbursals usually look like that.”
3. Turn that into a daily to-do list
Each morning, instead of another dashboard, the marketer gets a short list:
“These 2 campaigns have early patterns that usually end badly, consider capping them.”
“This 1 ad set has modest CPL but historically strong disbursals, consider giving it more room.”
The agent is not trying to predict the future with magic. It is just standing at the intersection of both clocks and saying, “based on what we know from the past, here is where you should be more or less aggressive today.”
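One way to picture that intersection in code: match a new campaign's early signals against past cohorts with similar signals, and let their final disbursal rates drive the suggestion. This is a deliberately crude sketch; the similarity rule, the thresholds, and the records are all illustrative assumptions, not the actual Thirdi agent logic.

```python
# Hypothetical sketch of combining early signals (Clock 1) with past cohort
# outcomes (Clock 2) into a morning to-do list. All data and rules illustrative.

def similar(a: dict, b: dict, tolerance: float = 0.2) -> bool:
    """Crude signal match: every metric within a relative tolerance."""
    return all(abs(a[k] - b[k]) <= tolerance * max(abs(b[k]), 1e-9) for k in b)

def morning_todo(new_campaigns: list[dict], history: list[dict]) -> list[str]:
    """Suggest a budget move per new campaign, based on how past cohorts
    with similar early signals finished their 7-day cycle."""
    todo = []
    for camp in new_campaigns:
        matches = [h for h in history if similar(camp["signals"], h["signals"])]
        if not matches:
            continue  # no precedent: say nothing rather than guess
        avg = sum(h["disbursal_rate"] for h in matches) / len(matches)
        if avg < 0.05:
            todo.append(f"{camp['name']}: early pattern usually ends badly, consider capping")
        elif avg > 0.15:
            todo.append(f"{camp['name']}: similar cohorts disbursed well, consider more room")
    return todo

history = [
    {"signals": {"ctr": 0.010, "cpl": 400}, "disbursal_rate": 0.03},
    {"signals": {"ctr": 0.011, "cpl": 420}, "disbursal_rate": 0.04},
    {"signals": {"ctr": 0.020, "cpl": 300}, "disbursal_rate": 0.20},
]
new_campaigns = [{"name": "Campaign A", "signals": {"ctr": 0.0105, "cpl": 410}}]
todo = morning_todo(new_campaigns, history)
```

Note the explicit "say nothing" branch: when there is no comparable history, the agent stays quiet instead of inventing a recommendation.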
Why this matters for small teams
The team we worked with is small. One marketer is handling the entire growth engine, plus reporting, plus internal reviews. They do not have the luxury of a separate analytics pod.
With this two-clock setup, they no longer had to choose between:
flying blind on fast but shallow numbers, or
sitting on their hands until the CRM finally caught up
They could still move budgets daily, but with a weekly memory sitting behind every change.
Once we saw how much calmer their weeks became, we started recommending this pattern to every lending brand we speak to.