If I were Monge, Campero, Caracol Knits or Super Selectos
Ricardo Argüello — March 9, 2026
CEO & Founder
General summary
The autoresearch post mentioned companies iterating 12 times per hour. Here we make it concrete with five scenarios: consumer credit at Grupo Monge, zone-based menus at Pollo Campero, textile quality at Caracol Knits, distribution routes at El Latino Foods, and perishable inventory at Super Selectos.
- The same pattern — human defines constraints, agent iterates overnight, team reviews in the morning — works across industries
- Grupo Monge scenario: 200 credit parameter combinations tested over a weekend against 18 months of real data
- Pollo Campero scenario: AI optimizes menu pricing and combos differently for each geographic zone
- Regulated industries work fine — regulations become constraints the agent cannot cross
- A first pilot takes 6-10 weeks: 2 weeks for process mapping, 4-6 weeks for building and supervised testing
AI-generated summary
Imagine you could test 200 different versions of a business decision overnight — different prices, different terms, different product combinations — and wake up to a ranked list of what worked best. That's what autonomous iteration loops do. This post shows what that looks like at five real Latin American companies, from credit scoring in Costa Rica to supermarket inventory in El Salvador.
In the autoresearch post, I wrote that companies that figure out how to direct autonomous iteration loops will operate at a different speed: they'll compete against organizations iterating 12 times per hour.
One of IQ Source’s partners read it and asked: “What would that look like at a real company? Not Silicon Valley — around here.”
Good question. An autonomous loop needs a process with adjustable parameters and a clear metric — plus constraints the agent can’t cross. The human sets direction. The agent iterates overnight while everyone sleeps, and the team reviews results in the morning. Five scenarios across five countries and industries.
Grupo Monge (Costa Rica): 200 credit combinations before Monday
Imagine it’s Monday at 7 am at Grupo Monge’s offices. The credit manager opens his dashboard and finds 200 credit parameter combinations that an agent tested over the weekend — different down payments and terms, different rates per customer profile, each segmented by product category.
These aren’t random numbers. The agent used historical delinquency and approval data to simulate each combination against the last 18 months of real behavior. The constraints: never go below SUGEF’s regulatory threshold, never exceed the risk ceiling per category.
The finding nobody would have looked for: a differentiated down payment for the furniture line — lower than standard but with a shorter term — reduces delinquency in that category without affecting approval volume. The general rule treated furniture the same as appliances. The agent found they shouldn’t be.
The credit manager didn’t lose control. He defined the constraints and the metric (delinquency rate at a given approval volume). What he didn’t have was time to test 200 variants one by one.
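The weekend run described above is, at its core, a constrained grid search ranked by a single metric. The sketch below is illustrative only: the toy history, the `REG_MIN_DOWN` floor standing in for a SUGEF-style threshold, and the similarity rule for replaying past loans are all assumptions, not Monge's actual model.

```python
from itertools import product

# Hypothetical loan history: (category, down payment %, term in months, defaulted)
HISTORY = [
    ("furniture", 0.05, 6, False), ("furniture", 0.05, 6, False),
    ("furniture", 0.10, 12, False), ("furniture", 0.10, 12, True),
    ("appliances", 0.10, 12, False), ("appliances", 0.10, 12, True),
]

REG_MIN_DOWN = 0.05   # stand-in for the regulatory floor (assumption)
RISK_CEILING = 0.40   # maximum simulated delinquency tolerated per combo

def simulate(down_pct, term):
    """Estimate delinquency by replaying historically similar loans."""
    similar = [h for h in HISTORY
               if abs(h[1] - down_pct) <= 0.03 and abs(h[2] - term) <= 6]
    if not similar:
        return None  # no evidence for this combo: skip rather than guess
    return sum(h[3] for h in similar) / len(similar)

def search():
    results = []
    for down_pct, term in product([0.05, 0.10, 0.15], [6, 12, 24]):
        if down_pct < REG_MIN_DOWN:
            continue  # hard regulatory constraint: never crossed
        delinquency = simulate(down_pct, term)
        if delinquency is None or delinquency > RISK_CEILING:
            continue  # outside the risk ceiling: discarded, not flagged as "best"
        results.append((delinquency, down_pct, term))
    return sorted(results)  # ranked list for Monday-morning review
```

The human-defined pieces are exactly the ones the scenario names: the constraints (`REG_MIN_DOWN`, `RISK_CEILING`) and the metric (simulated delinquency). The agent only supplies the patience to walk the grid.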
Pollo Campero (Guatemala): the menu that optimizes by zone
Pollo Campero operates in very different contexts: Zona 10 in Guatemala City, Quetzaltenango, US locations. Each market has different cost structures, different customer preferences — and completely different rush-hour rhythms.
Imagine an agent testing menu configurations — which combos to feature and at what price point, during which dayparts — optimizing two metrics: margin per ticket and throughput during peak hours.
| Configuration | Margin per ticket | Average service time |
|---|---|---|
| Current standard menu | Baseline | Baseline |
| Zona 10 variant (premium combo featured) | +8% | +15 sec |
| Quetzaltenango variant (family combo featured) | +5% | −20 sec |
The finding: a combo that’s been sitting near the bottom of the menu for months has the best margin per minute of prep time. Not the highest absolute margin — the best margin per unit of kitchen time. Nobody had evaluated it with that metric because the standard report ranks by total sales, not operational efficiency.
The agent doesn’t decide to change the menu. It presents ranked options. The brand and operations teams decide what to implement and where.
Caracol Knits (Honduras): quality that doesn’t sleep
Instructions (Friday 5 pm): The plant manager defines the goal — reduce defect rate per batch without slowing production speed. Constraints: machine temperature stays below maximum, client specifications are untouchable, and minimum output per shift holds. Metric: defects per 1,000 units.
Overnight: The agent simulates combinations of line speed and thread tension against historical data from the last 6 months of production. 150 combinations. Each evaluated against the actual defect rate of similar batches.
Monday 6 am: The supervisor finds a result that contradicts standard practice. The plant had always adjusted speed OR tension, never both at the same time — out of habit, not evidence. The agent found that a specific combination of both variables reduces defects by ~12% at the same output. Nobody had tried it because simultaneous adjustment was considered risky.
What Caracol Knits delegated was the search for the optimal point across variables it already controlled — not which contracts to accept or which clients to prioritize.
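Why does joint search beat the plant's one-variable-at-a-time habit? When two variables interact, the best point can be invisible from either axis. The toy model below is an assumption (a linear defect model with an interaction term), not Caracol Knits' real data, but it shows the mechanism:

```python
from itertools import product

MIN_SPEED = 100  # minimum output per shift (hard constraint, in % of nominal)

def simulated_defects(speed, tension):
    """Toy stand-in for replaying 6 months of batch data (assumption):
    the interaction term means the optimum needs BOTH variables moved."""
    ds, dt = speed - 100, tension - 1.0
    return 14 + 0.05 * ds + 2.0 * dt - 1.3 * ds * dt

def one_at_a_time():
    """Plant habit: adjust speed OR tension from baseline, never both."""
    candidates = [(s, 1.0) for s in (100, 105, 110)] + \
                 [(100, t) for t in (0.8, 1.2)]
    return min(candidates, key=lambda st: simulated_defects(*st))

def joint_search(speeds=(100, 105, 110), tensions=(0.8, 1.0, 1.2)):
    """Agent's version: every feasible (speed, tension) combination."""
    feasible = [(s, t) for s, t in product(speeds, tensions)
                if s >= MIN_SPEED]  # output floor holds
    return min(feasible, key=lambda st: simulated_defects(*st))
```

Under this model the joint search lands on (110, 1.2) with roughly a 12% defect reduction versus baseline, while the one-at-a-time habit stalls at a much smaller improvement — the same shape as the finding in the scenario.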
El Latino Foods (Miami): routes that rewrite themselves every night
Before (manual routing):
- Fixed routes designed 18 months ago, updated quarterly
- Loading sequence based on order entry, not delivery route
- I-95 and Palmetto peak-hour traffic absorbed as a fixed cost
After (with autonomous loop):
- The agent recalculates routes nightly using the previous week’s traffic data and next-day confirmed orders
- Loading sequence optimized for the route — last in, first out
- Committed delivery windows maintained as an absolute constraint
The finding: reversing the order of three stops on the south route — hitting the Homestead area first before heading up US-1 — saves 40 minutes of peak-hour traffic. Forty minutes per truck, per day. The original route was designed when those three stops were new clients added at the end. Nobody redesigned it afterward.
Client decisions and margin calls still belong to El Latino Foods. What got delegated was route geometry — nothing more.
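With a handful of stops per route, the nightly recalculation can be as blunt as trying every visiting order against last week's peak-hour drive times, discarding any order that breaks a committed window. The matrix and windows below are invented numbers for illustration, not El Latino Foods' data:

```python
from itertools import permutations

# Toy peak-hour drive times in minutes between depot "D" and stops (assumption).
TIMES = {
    "D": {"A": 30, "B": 20, "C": 45},
    "A": {"B": 15, "C": 25, "D": 30},
    "B": {"A": 15, "C": 35, "D": 20},
    "C": {"A": 25, "B": 35, "D": 45},
}
WINDOWS = {"A": 240, "B": 240, "C": 120}  # latest arrival, minutes after departure

def route_time(order):
    """Total round-trip time and per-stop arrival times for one visiting order."""
    t, here, arrivals = 0, "D", {}
    for stop in order:
        t += TIMES[here][stop]
        arrivals[stop] = t
        here = stop
    return t + TIMES[here]["D"], arrivals

def best_route(stops=("A", "B", "C")):
    feasible = []
    for order in permutations(stops):
        total, arrivals = route_time(order)
        # Committed delivery windows are an absolute constraint, not a penalty
        if all(arrivals[s] <= WINDOWS[s] for s in order):
            feasible.append((total, order))
    return min(feasible)
```

Brute-force permutations stop scaling past a dozen stops, at which point a real implementation would swap in a routing solver — but the shape of the delegation is the same: geometry is searched, windows are sacred.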
Super Selectos (El Salvador): perishable waste that recalibrates itself
The problem: All Super Selectos locations order similar volumes of fruits, vegetables, and dairy — an inherited planogram that doesn’t distinguish between a high-turnover store in San Salvador and a lower-traffic branch. Perishable waste is absorbed as a normal operating cost.
The loop: An agent analyzes actual perishable sales per location over the last 30 days, cross-references supplier delivery schedules, and generates calibrated orders per store. Constraints: minimum stock per category can't drop, agreed supplier delivery days must be respected, and minimum shelf variety has to hold. Metric: waste percentage of perishable inventory.
The finding: Several locations order identical dairy quantities despite very different demand — a direct legacy of the copied planogram. Adjusting dairy orders per location based on actual sales reduces waste by ~18% without triggering stockouts. The agent didn’t invent a new category or switch suppliers. It just recalibrated how much to order for each store.
The delegation here is narrow: how much to order per store. Supplier decisions and product discontinuations stay with the team.
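Because the delegation is so narrow, the core of the recalibration can be sketched in a few lines: replace the copied planogram quantity with recent average demand per store, held above the category floor. Store names, sales figures, and the `safety` buffer below are all invented for illustration:

```python
# Hypothetical recent daily dairy sales per store (assumption)
SALES = {
    "san_salvador_centro": [120, 130, 125, 140, 135],
    "smaller_branch":      [40, 35, 45, 38, 42],
}
PLANOGRAM_ORDER = 130   # same quantity copied to every store (the legacy rule)
MIN_STOCK = 30          # category floor: a hard constraint, never undercut

def calibrated_order(store, days_to_next_delivery=1, safety=1.10):
    """Order = recent average demand x coverage days x safety buffer,
    floored at the minimum stock per category."""
    avg = sum(SALES[store]) / len(SALES[store])
    return max(MIN_STOCK, round(avg * days_to_next_delivery * safety))

orders = {store: calibrated_order(store) for store in SALES}
```

Under these toy numbers the smaller branch's order drops well below the copied planogram quantity while the high-turnover store's rises above it — the per-store divergence that drives the waste reduction in the scenario.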
The pattern and its limits
What’s common across all five scenarios: the human defines where to aim and where not to go. The agent runs the iterations. Then the human reviews what came back and makes the call.
All five agents found configurations that human teams wouldn't have tried, not for lack of skill but for lack of time and because of operational bias ("we've always done it this way"): a simultaneous speed-tension adjustment nobody had risked, a route that went unrevised for months, a combo ranked by the wrong metric, and a planogram blindly copied across locations.
But each scenario has a clear boundary. Monge keeps credit risk appetite. Campero keeps brand decisions. Caracol Knits decides which contracts to accept, El Latino Foods decides which clients to keep — and Super Selectos still chooses which products to stock. The agent operates within the edges the human defines. That’s precisely where its value lies.
This is the same pattern from the agent operator role: it’s not about removing humans from the process, it’s about moving them to the right place. And it requires a clear inventory of which agent does what, just as an org chart defines who’s responsible for what.
Identify your first loop candidate. If you have a manual process where someone tunes parameters, measures results, and repeats — that’s a loop waiting to go autonomous. Send us a paragraph describing the process and the metric you’d optimize — plus any constraints we should know about. We’ll tell you if it’s a candidate and what your program.md would look like. Write to us here.
Frequently Asked Questions
**Do we need an in-house technical team to start?**
Not necessarily. The pattern requires defining a process with a clear metric, historical data, and constraints — that's business work, not coding. Technical implementation can range from simple scripts to specialized agent platforms. What matters is having the process definition right before choosing the tool.
**Which processes are good candidates for an autonomous loop?**
Processes where someone manually tunes parameters and measures results: credit scoring, fraud detection rules, manufacturing quality parameters, logistics route configuration, perishable inventory management, product mix per store location. The key is a clear, quantifiable metric — not a subjective assessment.
**Does this work in regulated industries?**
Yes, with tighter constraints. Regulations become limits the agent cannot cross — regulatory thresholds for credit, client specifications in manufacturing, contractual delivery windows. The agent iterates within those boundaries. Audit trails of iteration history are built into the design from the start.
**How long does a first pilot take?**
Six to ten weeks. The first two focus on mapping the process, defining the target metric, and codifying constraints. The next four to six weeks are for configuring the agent, running supervised test cycles, and adjusting constraints based on initial real-world results.