Debugging AI-Generated Code in an Event-Driven System
The hook runs. It returns ok: true. The data looks right. But three hooks later in the pipeline, something breaks — and the stack trace points nowhere useful. Welcome to debugging in an event-driven system.
The Premise
Debugging AI-generated hooks is different from debugging hand-written code. The code looks correct — it follows every convention, passes validation, and compiles cleanly. But AI optimizes for pattern matching, not for understanding the execution context. When a bug hides in the space between hooks, traditional debugging falls apart.
The Story
A developer asks the AI to generate a hook for processing refunds. The hook queries the original invoice, calculates the refund amount, creates a credit memo, and updates the invoice status. The AI generates clean code. Tests pass. But in production, customers report that their invoice status shows "refunded" even when the credit memo creation fails.
The root cause? The AI wrote the status update before the credit memo creation — and without a shared database session, the two operations weren't atomic.
The Debugging Playbook
Step 1: Trace the Event Route
Every request in Logic Bee flows through: HTTP → Auth → FlowUser → Event Engine → Hook(s) → Response. When something goes wrong, first verify that the right hook is even being called:
```typescript
// Add temporary logging at the top of the try block
console.log(`[DEBUG] ${new Date().toISOString()} - Hook entered`, {
  hookPath: 'finance-bills/process-refund',
  docId: bob.dataPayload?.docId,
  userId: bob.flowUser?.getUserInfo()?.id
})
```
If you don't see this log, the event routing is wrong — check the @LogicHook decorator's path property.
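The same failure mode can be reproduced outside the framework. In this sketch, `hookRegistry`, `registerHook`, and `resolveHook` are hypothetical stand-ins, not Logic Bee APIs; the point is that a route lookup should fail loudly, listing what is actually registered:

```typescript
// Hypothetical registry mapping decorator-style paths to handlers.
const hookRegistry = new Map<string, () => void>()

// Stand-in for what a @LogicHook-style decorator would do at load time.
function registerHook(path: string, handler: () => void): void {
  hookRegistry.set(path, handler)
}

// Resolve an incoming event path; warn loudly when nothing matches.
function resolveHook(eventPath: string): (() => void) | undefined {
  const handler = hookRegistry.get(eventPath)
  if (!handler) {
    console.warn(
      `[DEBUG] no hook for '${eventPath}'; registered:`,
      [...hookRegistry.keys()].join(', ')
    )
  }
  return handler
}

registerHook('finance-bills/process-refund', () => console.log('hook entered'))

resolveHook('finance-bills/process-refund')?.()  // runs the handler
resolveHook('finance-bills/refund')?.()          // typo: warning, nothing runs
```

A loud "no hook for" warning with the list of registered paths turns a silent routing mismatch into a one-line diagnosis.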
Step 2: Verify the BobRequest Lifecycle
The bob object carries context through the entire execution. AI-generated code sometimes breaks the lifecycle by:
- Modifying `bob.dataPayload` directly instead of using `bob.replaceDataPayload()`
- Not passing `bob.dbSession()` to write operations, breaking transaction boundaries
- Calling `bob.executeRequestOrThrow()` without error checking
Check the bob state at each step:
```typescript
// Debug: log bob state before critical operations
console.log('[DEBUG] dataPayload:', JSON.stringify(bob.dataPayload, null, 2))
console.log('[DEBUG] dbSession exists:', !!bob.dbSession())
```
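Why direct mutation of `bob.dataPayload` is dangerous can be shown with a toy stand-in (the `ToyBob` class below is illustrative only, not the real BobRequest API): a `replaceDataPayload()` that defensively clones keeps downstream hooks safe from aliased references.

```typescript
// Toy stand-in for BobRequest, illustrating the value of a defensive copy.
class ToyBob {
  private payload: Record<string, unknown> = {}

  get dataPayload(): Record<string, unknown> {
    return this.payload
  }

  // Clone on replace, so callers holding the original object
  // cannot mutate what downstream hooks will later read.
  replaceDataPayload(next: Record<string, unknown>): void {
    this.payload = structuredClone(next)
  }
}

const bob = new ToyBob()
const external = { docId: 'inv-42' }
bob.replaceDataPayload(external)

external.docId = 'inv-99'           // caller mutates its own reference...
console.log(bob.dataPayload.docId)  // ...the hook still sees 'inv-42'
```

With direct assignment (`this.payload = next`), the second log would show `'inv-99'` and the corruption would surface several hooks downstream.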
Step 3: Check Operation Order
AI generates operations in the order they appear in the prompt, which may not be the correct execution order. In the refund story, the fix was:
```typescript
// ❌ AI-generated order (status before credit memo)
await updateInvoiceStatus(bob, 'refunded')
await createCreditMemo(bob, refundData)

// ✅ Correct order (credit memo first, then status)
const creditMemo = await createCreditMemo(bob, refundData)
if (creditMemo) {
  await updateInvoiceStatus(bob, 'refunded')
}
```
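A regression test can lock the corrected order in place: force the credit-memo step to fail and assert the status never flips. The stubs below are hypothetical, not Logic Bee APIs:

```typescript
// In-memory invoice plus a credit-memo stub that simulates a failure.
const invoice = { id: 'inv-42', status: 'paid' }

async function createCreditMemo(): Promise<{ id: string } | null> {
  return null  // simulate the memo creation failing
}

async function updateInvoiceStatus(status: string): Promise<void> {
  invoice.status = status
}

// Corrected order: the status only changes after the memo exists.
async function processRefund(): Promise<void> {
  const memo = await createCreditMemo()
  if (memo) {
    await updateInvoiceStatus('refunded')
  }
}

processRefund().then(() => {
  console.log(invoice.status)  // 'paid': no memo, no status change
})
```

The same test run against the AI-generated order would fail immediately, because the status flips to `'refunded'` before the memo failure is even observed.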
Step 4: Verify Transaction Boundaries
When a hook performs multiple writes, all should share the same database session:
```typescript
// ❌ AI sometimes creates independent write operations
await collection1.docs.flowQuery()...updateOne({ status: 'refunded' })
await collection2.docs.flowQuery()...insertOne(creditMemoDoc)

// ✅ Both must use bob.dbSession()
await collection1.docs.flowQuery()...updateOne({ status: 'refunded' }, bob.dbSession())
await collection2.docs.flowQuery()...insertOne(creditMemoDoc, bob.dbSession())
```
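The atomicity argument can be sketched without any driver: a toy session that buffers writes and applies them only on commit. Everything here is illustrative; `bob.dbSession()` presumably wraps a real database session, not this class.

```typescript
// Toy session: buffers writes, applies them only on commit,
// discards them all on abort. No half-updated state is possible.
class ToySession {
  private pending: Array<() => void> = []
  write(op: () => void): void { this.pending.push(op) }
  commit(): void { this.pending.forEach(op => op()); this.pending = [] }
  abort(): void { this.pending = [] }
}

const invoices = new Map([['inv-42', 'paid']])
const creditMemos: string[] = []

const session = new ToySession()
session.write(() => invoices.set('inv-42', 'refunded'))
session.write(() => creditMemos.push('cm-1'))

// A failure before commit aborts both writes together.
session.abort()
console.log(invoices.get('inv-42'), creditMemos.length)  // 'paid' 0
```

Writes issued outside the session would bypass the buffer entirely, which is exactly the partial-write symptom in the table below.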
Step 5: Check Sub-Request Error Propagation
When hooks call other hooks via bob.executeRequestOrThrow() or bob.runAction(), AI sometimes ignores the returned error:
```typescript
// ❌ AI ignores sub-request failure
const result = await bob.runAction('inventory.adjust-stock', { data: adjustmentData })
// continues execution even if result.ok === false

// ✅ Check the result
const result = await bob.runAction('inventory.adjust-stock', { data: adjustmentData })
if (!result.ok) {
  throw result.error || naoFormatErrorById('bad_request', {
    reason: 'Stock adjustment failed during refund processing'
  })
}
```
Common AI-Generated Bug Patterns
| Pattern | Symptom | Root Cause |
|---|---|---|
| Silent data corruption | Downstream hooks get wrong data | Direct bob.dataPayload mutation |
| Partial writes | Some documents updated, others not | Missing bob.dbSession() on some writes |
| Wrong hook called | Action does nothing or wrong thing | Decorator path doesn't match expected route |
| Ignored sub-request errors | Happy path always succeeds | Missing .ok check on runAction results |
| Race conditions | Intermittent failures under load | await missing on async calls |
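The last row of the table deserves its own sketch: a missing `await` lets the hook return while a write is still in flight, so failures only appear under load. Everything below is a self-contained illustration:

```typescript
// A slow async write; `saved` flips only after the timer fires.
let saved = false
async function saveDoc(): Promise<void> {
  await new Promise(resolve => setTimeout(resolve, 10))
  saved = true
}

async function buggyHook(): Promise<void> {
  saveDoc()  // ❌ fire-and-forget: the hook returns immediately
}

async function fixedHook(): Promise<void> {
  await saveDoc()  // ✅ the hook waits for the write to land
}

buggyHook()
  .then(() => console.log('after buggy hook, saved =', saved))  // false
  .then(() => fixedHook())
  .then(() => console.log('after fixed hook, saved =', saved))  // true
```

The buggy version passes any test that happens to run after the timer fires, which is why this class of bug is intermittent rather than reproducible.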
The Takeaway
When debugging AI-generated hooks in a pipeline:
- Start at the route — is the right hook being called?
- Check the bob lifecycle — is the request context intact?
- Verify operation order — does the sequence match the business requirement?
- Audit every write — does each one use
bob.dbSession()? - Test sub-requests — are errors from child hooks handled?
The AI writes code that looks right. Your job is to verify it runs right — in context, in sequence, and under failure.
