chat processing with typing indicators and batching support for msgs #685
Conversation
## Claude Code Review

**Summary:** This PR adds chat processing modes (sequential/batch) with typing indicators and message batching. The implementation is well-tested and mostly solid, but it has a critical race-condition bug and some architectural concerns.

### Critical Issues

**1. Race condition in sequential mode (packages/agents/src/ai-chat-agent.ts:315-326)**

Sequential mode queues incoming requests and processes them one by one. Problem: if a user sends a message and then quickly clears the chat, the message could still be queued and processed against the cleared state. The queueing call at line 326 needs to change; see issues 2 and 3 below.
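One way to close a race like this is to invalidate queued work when the chat is cleared. A minimal sketch under assumed names (`ChatState`, `generation` — none of these are the PR's actual identifiers):

```typescript
// Hypothetical sketch (not the PR's code): tag each queued message with a
// generation counter and bump the counter on clear, so entries queued
// against a chat state that no longer exists are skipped.
type QueuedMessage = { text: string; generation: number };

class ChatState {
  private generation = 0;
  private queue: QueuedMessage[] = [];
  public processed: string[] = [];

  enqueue(text: string): void {
    this.queue.push({ text, generation: this.generation });
  }

  clearChat(): void {
    this.generation++; // invalidates anything queued before the clear
  }

  drain(): void {
    for (const msg of this.queue) {
      // Only process messages queued against the current chat state.
      if (msg.generation === this.generation) {
        this.processed.push(msg.text);
      }
    }
    this.queue = [];
  }
}
```

The same effect could also be achieved by draining the queue inside the clear operation; the counter approach just avoids coordinating with an in-flight drain.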
**2. Non-awaited queue call creates silent-failure risk**

The queue system is async, but the call site doesn't await it, so errors in queue processing never surface to the caller. This violates the principle of explicit error handling.

**3. Type assertion hack (line 326)**
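The silent-failure risk in issue 2 can be illustrated with a small sketch: an async queue whose `enqueue` returns a promise the caller awaits, so task failures are observed rather than dropped. All names here are hypothetical, not the PR's API:

```typescript
// Hypothetical sketch: chaining tasks on a promise and returning the
// chain lets the caller await queue outcomes instead of losing them.
type Task = () => Promise<void>;

class SimpleQueue {
  private chain: Promise<void> = Promise.resolve();
  private errors: unknown[] = [];

  // Returns a promise the caller can await so failures propagate.
  enqueue(task: Task): Promise<void> {
    this.chain = this.chain.then(task).catch((err) => {
      this.errors.push(err); // recorded instead of silently dropped
    });
    return this.chain;
  }

  get failureCount(): number {
    return this.errors.length;
  }
}

async function demo(): Promise<number> {
  const q = new SimpleQueue();
  // Because the call is awaited, the caller runs after the task settles
  // and can inspect what happened.
  await q.enqueue(async () => {
    throw new Error("stream failed");
  });
  return q.failureCount;
}
```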
### Architectural Concerns

**4. Batch-mode lock pattern vs. sequential queue**

The implementation mixes two concurrency patterns (a lock for batch mode, a queue for sequential mode), which creates complexity and a testing burden.
**5. Missing hibernate-safety analysis**

Batch mode relies on in-memory state that does not survive Durable Object eviction. The comment at lines 337-340 mentions this but doesn't fully address it: if the DO is evicted, the pending timer is lost.

**6. No backpressure in sequential mode**

Sequential mode queues all incoming messages without limit, so a fast sender could flood the queue. Consider rate limiting or a queue size cap.

### Code Quality Issues

**7. Typing timeout logic (lines 410-422)**

The typing indicator only works in batch mode and silently does nothing in sequential mode. This asymmetry should be documented, or the client API should reflect the mode difference.

**8. Test worker configuration leak (packages/agents/src/tests/worker.ts:576-581)**
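As a sketch of the backpressure suggestion in item 6, a queue with a hard size cap might look like the following (the class name and the limit are illustrative only):

```typescript
// Hypothetical sketch: reject new messages once a size limit is hit,
// giving the handler a signal to push back on a flooding sender.
const MAX_QUEUE_SIZE = 3; // illustrative, not a library default

class BoundedQueue<T> {
  private items: T[] = [];

  push(item: T): boolean {
    if (this.items.length >= MAX_QUEUE_SIZE) {
      return false; // caller can respond with e.g. a 429 / "slow down"
    }
    this.items.push(item);
    return true;
  }

  get length(): number {
    return this.items.length;
  }
}

const q = new BoundedQueue<string>();
// Fourth push exceeds the cap and is rejected.
const results = ["a", "b", "c", "d"].map((m) => q.push(m));
```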
**9. Missing error cases in tests**

The test suite (packages/agents/src/tests/chat-queue.test.ts) doesn't cover several error cases.
### Minor Issues

**10. Client example missing error handling (examples/resumable-stream-chat/src/client.tsx:69-73)**

**11. Inconsistent timeout naming**
The typing timeout is longer than the idle timeout (lines 41-42), but the comment says "shorter delay" (line 418). This is confusing: typing sets a longer timeout (5s vs. the 3s idle), which doesn't match the comment. (Looking at lines 711-713, the code itself behaves correctly.)

### Documentation

The feature is well documented with clear JSDoc comments, and the queue.md documentation correctly describes the sequential behavior.

### Testing

Test coverage for batch mode is comprehensive, but sequential-mode testing is limited.

### Recommendation

**Request Changes** due to the race condition (#1) and the async queue bug (#2). The architectural concerns should be addressed before this pattern proliferates to other features.
Documents new features from cloudflare/agents#685:

- Chat processing modes (sequential and batch) for AIChatAgent
- Typing indicator support via onInputChange and inputProps
- Configuration options: chatProcessingMode, chatIdleTimeout, chatTypingTimeout
- Updated useAgentChat API reference with new return values
- Added comprehensive examples for both processing modes

Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
This update documents new features from PR #685 (cloudflare/agents):

- Chat processing modes (sequential vs batch)
- Typing indicators for smart message batching
- New AIChatAgent properties: chatProcessingMode, chatIdleTimeout, chatTypingTimeout
- New useAgentChat returns: onInputChange, inputProps, sendTypingIndicator

The documentation includes:

- API reference updates for AIChatAgent class properties
- Examples showing sequential and batch processing modes
- React examples demonstrating typing indicator integration
- Use cases and best practices for each mode

These features enable better handling of conversational chat patterns where users send multiple rapid messages.

Related PR: cloudflare/agents#685

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
deathbyknowledge left a comment:
Left some comments, I might've gotten the logic wrong with the timeouts
packages/agents/src/ai-chat-agent.ts (Outdated)
```ts
});
if (this.chatProcessingMode === "sequential") {
  // Sequential mode: queue all requests and process one by one
  this._chatQueue.push(request);
```
Do we not want to use the built-in queue?
packages/agents/src/ai-chat-agent.ts (Outdated)
```ts
/** Default age threshold for cleaning up completed streams (ms) - 24 hours */
const CLEANUP_AGE_THRESHOLD_MS = 24 * 60 * 60 * 1000;
/** Default idle timeout - how long to wait for user to start typing after sending a message (ms) */
const DEFAULT_CHAT_IDLE_TIMEOUT_MS = 5000;
```
So by default we wait 5 seconds to start processing a message if the user is not typing? That doesn't sound right
It only applies to batch mode (when `chatProcessingMode = "batch"`); in a normal setup there are no delays.
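A rough sketch of the batch-mode timing being described: a message arms an idle timer, and a typing signal re-arms it, so rapid follow-ups land in the same batch. The class name and timeout values here are invented for illustration and are not the library's defaults:

```typescript
// Hypothetical sketch of batch-mode debouncing: hold messages for an
// idle window; a typing indicator extends the window so the next
// message joins the same batch instead of triggering processing.
class MessageBatcher {
  private batch: string[] = [];
  private timer: ReturnType<typeof setTimeout> | undefined;
  public flushed: string[][] = [];

  constructor(
    private idleMs: number,
    private typingMs: number
  ) {}

  private arm(ms: number): void {
    if (this.timer) clearTimeout(this.timer);
    this.timer = setTimeout(() => this.flush(), ms);
  }

  addMessage(msg: string): void {
    this.batch.push(msg);
    this.arm(this.idleMs); // wait to see if the user keeps typing
  }

  typingIndicator(): void {
    if (this.batch.length > 0) this.arm(this.typingMs); // extend the window
  }

  flush(): void {
    if (this.batch.length > 0) {
      this.flushed.push(this.batch);
      this.batch = [];
    }
  }
}
```

With no pending batch, a typing indicator is a no-op, so an idle chat never delays anything; the timers only matter between a message arriving and the batch being flushed.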
packages/agents/src/ai-chat-agent.ts (Outdated)
```ts
/** Default idle timeout - how long to wait for user to start typing after sending a message (ms) */
const DEFAULT_CHAT_IDLE_TIMEOUT_MS = 5000;
/** Default typing timeout - how long to wait after user stops typing (ms) */
const DEFAULT_CHAT_TYPING_TIMEOUT_MS = 1500;
```
I'd expect this one to be higher than the one above, since the user will possibly send another message.
packages/agents/src/ai-chat-agent.ts (Outdated)
````ts
 * }
 * ```
 */
chatProcessingMode: ChatProcessingMode = "sequential";
````
The base Agent class has static options, which currently only holds `hibernate: true/false`. This feels like class-level config as well; maybe we want something similar instead of just adding class properties for developers to configure.
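A sketch of that suggestion, with simplified stand-ins for the `Agent` base class: the chat settings fold into the existing static `options` object rather than living as loose instance properties. Option and type names below mirror the PR's vocabulary but the shapes are invented:

```typescript
// Hypothetical sketch: class-level configuration via static `options`,
// alongside the existing `hibernate` flag, instead of instance fields.
type ChatProcessingMode = "sequential" | "batch";

interface AgentOptions {
  hibernate?: boolean;
  chatProcessingMode?: ChatProcessingMode;
  chatIdleTimeoutMs?: number;
}

class Agent {
  static options: AgentOptions = { hibernate: false };
}

class MyChatAgent extends Agent {
  // All configuration lives in one declarative place on the class.
  static options: AgentOptions = {
    hibernate: true,
    chatProcessingMode: "batch",
    chatIdleTimeoutMs: 1500
  };
}
```

One benefit of this shape is that the framework can read configuration before constructing an instance, which matters if mode affects how requests are routed to the object.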
Sync documentation for PR cloudflare/agents#685: "chat processing with typing indicators and batching support for msgs"

Added documentation for:

- Chat processing modes (sequential vs batch)
- Configuration options for batch mode (chatIdleTimeout, chatTypingTimeout)
- Typing indicator support via onInputChange and inputProps
- Usage examples for both modes
- Updated useAgentChat API reference with new return values

Related PR: cloudflare/agents#685
The idea is to let the user type more than one message at a time. We want to give users two options: sequential and batch processing.

Inspired by "ai chat agent should handle multiple inputs coming in at the same time" (#24).
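A hypothetical client-side sketch of that idea: the input's change handler throttles a typing-indicator send so the agent knows more messages may be coming. `sendTypingIndicator` here is a stand-in for whatever transport the agent client actually exposes, and the throttle interval is invented:

```typescript
// Hypothetical sketch: wire input changes to a throttled typing signal.
// `now` is injectable so the behavior can be tested with a fake clock.
function createTypingNotifier(
  sendTypingIndicator: () => void,
  throttleMs: number,
  now: () => number = Date.now
) {
  let lastSent = -Infinity;
  // Call this from the input field's onChange handler.
  return function onInputChange(value: string): void {
    if (value.length === 0) return; // nothing typed yet
    if (now() - lastSent >= throttleMs) {
      lastSent = now();
      sendTypingIndicator(); // tell the agent the user is still composing
    }
  };
}
```

Throttling matters here because onChange fires per keystroke; sending one indicator per throttle window keeps the wire traffic proportional to typing sessions rather than characters.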