
Conversation


@whoiskatrin whoiskatrin commented Nov 27, 2025

The idea is to let users type more than one message at a time.
We want to give users two options:


changeset-bot bot commented Nov 27, 2025

⚠️ No Changeset found

Latest commit: ecfe455

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types



pkg-pr-new bot commented Nov 27, 2025


npm i https://pkg.pr.new/cloudflare/agents@685

commit: ecfe455


claude bot commented Nov 27, 2025

Claude Code Review

Summary: This PR adds chat processing modes (sequential/batch) with typing indicators and message batching. The implementation is well-tested and mostly solid, but has a critical race condition bug and some architectural concerns.

Critical Issues

1. Race Condition in Sequential Mode (packages/agents/src/ai-chat-agent.ts:315-326)

The sequential mode uses this.queue() which is async, but doesn't await the result. Between queuing and processing, state changes (like clears or cancellations) can occur without proper synchronization.

Problem: If a user sends a message and then quickly clears the chat, the message can still be queued, and the _processQueuedChatRequest callback may then find no connection or operate on stale state.

Line 326: this.queue("_processQueuedChatRequest" as keyof this, payload);

This should either:

  • Return a promise and handle it properly, OR
  • Use a lock pattern similar to batch mode's _isProcessingChat
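
A minimal sketch of the second option, assuming nothing about the actual agents API (all names here are illustrative): a lock flag guards a drain loop, and the enqueue is awaited so errors surface to the caller instead of being lost in a fire-and-forget queue call.

```typescript
// Hypothetical sketch: guard queued chat processing with a lock flag,
// mirroring batch mode's _isProcessingChat. Not the real agents API.
class ChatQueueSketch {
  private pending: string[] = [];
  private processing = false;

  async enqueue(payload: string): Promise<void> {
    this.pending.push(payload);
    // Awaiting the drain means errors propagate to the caller.
    await this.drain();
  }

  private async drain(): Promise<void> {
    if (this.processing) return; // another drain loop already owns the queue
    this.processing = true;
    try {
      while (this.pending.length > 0) {
        const next = this.pending.shift()!;
        await this.process(next); // strictly one at a time
      }
    } finally {
      this.processing = false;
    }
  }

  // Stand-in for _processQueuedChatRequest; override in subclasses.
  protected async process(_payload: string): Promise<void> {}
}
```

Because every enqueue path funnels through the same guarded drain loop, two rapid sends are still processed strictly in order, and a throw inside `process` rejects the caller's `enqueue` promise rather than failing silently.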

2. Non-awaited Queue Call Creates Silent Failure Risk

The queue system is async but the call site doesn't await, meaning errors in queue processing won't surface to the caller. This violates the principle of explicit error handling.

3. Type Assertion Hack (line 326)

"_processQueuedChatRequest" as keyof this - This private method shouldn't be in the public queue API. Consider making it a proper callback or refactoring the queue system to support private callbacks.
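
One way to avoid the cast, sketched here under the assumption that the queue system can be refactored (the type and method names are invented for illustration): register queue callbacks in a registry typed by name, so dispatch is checked at compile time without exposing private methods through `keyof this`.

```typescript
// Hypothetical sketch: a callback registry typed by name, so queuing a
// private handler needs no `as keyof this` cast. Illustrative names only.
type QueueCallbacks = {
  processChatRequest: (payload: { id: string }) => Promise<void>;
  cleanupStreams: () => Promise<void>;
};

class TypedQueue {
  // Stored loosely internally; the public surface stays fully typed.
  private callbacks = new Map<keyof QueueCallbacks, (...args: never[]) => Promise<void>>();

  register<K extends keyof QueueCallbacks>(name: K, fn: QueueCallbacks[K]): void {
    this.callbacks.set(name, fn as (...args: never[]) => Promise<void>);
  }

  async dispatch<K extends keyof QueueCallbacks>(
    name: K,
    ...args: Parameters<QueueCallbacks[K]>
  ): Promise<void> {
    const fn = this.callbacks.get(name);
    if (!fn) throw new Error(`no callback registered for ${String(name)}`);
    await fn(...(args as never[]));
  }
}
```

With this shape, `dispatch("processChatRequest", { id: "x" })` type-checks while `dispatch("processChatRequest")` does not, and the agent's private method never appears in a public string-keyed API.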

Architectural Concerns

4. Batch Mode Lock Pattern vs Sequential Queue

The implementation mixes two concurrency patterns:

  • Batch mode: In-memory lock (_isProcessingChat) + debounce timer
  • Sequential mode: Durable queue (SQLite-backed)

This creates complexity and testing burden. Consider:

  • Why not use the queue for both modes with different enqueue policies?
  • Is the in-memory lock safe across hibernation/eviction?

5. Missing Hibernate Safety Analysis

The batch mode uses in-memory state (_pendingChatRequest, _processingTimeout, _isProcessingChat). What happens if the DO hibernates/evicts during:

  • A pending debounce timer?
  • Active processing?

The comment at line 337-340 mentions this but doesn't fully address it. If DO evicts, the timer is lost but _pendingChatRequest might persist in SQLite (if serialized), creating inconsistency.

6. No Backpressure in Sequential Mode

Sequential mode queues all incoming messages without limit. A fast sender could flood the queue. Consider rate limiting or queue size limits.
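
A queue size limit could look like this sketch (the class and limit are illustrative, not part of the PR): enqueue reports whether the item was accepted, so the caller can reject or drop messages once the bound is hit.

```typescript
// Hypothetical sketch of backpressure: a bounded queue that refuses
// enqueues past a fixed limit instead of growing without bound.
class BoundedQueue<T> {
  private items: T[] = [];

  constructor(private readonly limit: number) {}

  /** Returns false (applies backpressure) when the queue is full. */
  enqueue(item: T): boolean {
    if (this.items.length >= this.limit) return false;
    this.items.push(item);
    return true;
  }

  dequeue(): T | undefined {
    return this.items.shift();
  }

  get size(): number {
    return this.items.length;
  }
}
```

The caller can map a `false` return to an explicit error response to the client, which is usually preferable to unbounded memory or storage growth under a flood of messages.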

Code Quality Issues

7. Typing Timeout Logic (lines 410-422)

The typing indicator only works in batch mode and silently does nothing in sequential mode. This asymmetry should be documented or the client API should reflect the mode difference.

8. Test Worker Configuration Leak (packages/agents/src/tests/worker.ts:576-581)

TestChatAgent hardcodes batch mode with specific timeouts. This means you can't test sequential mode without a separate agent class. Consider parameterizing this or using a factory pattern.

9. Missing Error Cases in Tests

The test suite (packages/agents/src/tests/chat-queue.test.ts) doesn't cover:

  • What happens if onChatMessage throws an error?
  • Network failures during streaming?
  • Race between hibernation and message processing?
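
The first missing case can be sketched in isolation; `handleChat` below is a hypothetical stand-in for the agent's request path, not the real implementation. The point is that a throwing handler should yield an explicit failure result rather than hang or be swallowed.

```typescript
// Hypothetical sketch: the error path a test should exercise when the
// chat handler throws. `handleChat` stands in for the agent's request path.
async function handleChat(
  onChatMessage: (msg: string) => Promise<string>,
  msg: string
): Promise<{ ok: boolean; body: string }> {
  try {
    return { ok: true, body: await onChatMessage(msg) };
  } catch (err) {
    // Surface the failure explicitly instead of swallowing it.
    return { ok: false, body: err instanceof Error ? err.message : String(err) };
  }
}
```

A test would then assert both branches: a throwing handler produces `ok: false` with the error message, and a normal handler streams its reply through unchanged.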

Minor Issues

10. Client Example Missing Error Handling (examples/resumable-stream-chat/src/client.tsx:69-73)

onInputChange is called but there's no error boundary if it throws. Given this is example code that users will copy, it should be defensive.
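
A defensive wrapper might look like this sketch (names are illustrative, not the example's actual API): the typing-indicator callback is wrapped so that a failure is reported but never propagates into the input handler itself.

```typescript
// Hypothetical sketch: make the typing-indicator callback defensive so a
// failure in onInputChange never breaks text input. Illustrative names.
function safeOnInputChange(
  onInputChange: (value: string) => void,
  onError: (err: unknown) => void = (err) =>
    console.error("typing indicator failed", err)
): (value: string) => void {
  return (value) => {
    try {
      onInputChange(value);
    } catch (err) {
      onError(err); // report, but keep the input handler working
    }
  };
}
```

In the example client this wrapper would sit around the `onInputChange` returned by `useAgentChat` before it is wired to the input's change handler.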

11. Inconsistent Timeout Naming

  • DEFAULT_CHAT_IDLE_TIMEOUT_MS (3000ms)
  • DEFAULT_CHAT_TYPING_TIMEOUT_MS (5000ms)

The default typing timeout is longer than the default idle timeout (lines 41-42), yet the comment at line 418 promises a "shorter delay". As written, typing sets a 5s timeout versus the 3s idle timeout, contradicting that comment.

Actually, looking at line 711-713, the code correctly uses chatTypingTimeout when typing and chatIdleTimeout when not typing. The test configuration (line 580) sets chatTypingTimeout: 150 which is shorter than chatIdleTimeout: 300, matching the comment. So the default values are backwards from the intended design.
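
The selection logic described above amounts to picking one of two debounce delays. A minimal sketch, using the test configuration's values (typing shorter than idle, which matches the "shorter delay" comment — the PR's defaults are the other way around):

```typescript
// Sketch of the debounce selection: use the typing timeout while the user
// is typing, the idle timeout otherwise. Values are from the test config.
const chatIdleTimeout = 300; // ms
const chatTypingTimeout = 150; // ms

function nextFlushDelay(userIsTyping: boolean): number {
  return userIsTyping ? chatTypingTimeout : chatIdleTimeout;
}
```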

Documentation

The feature is well-documented with clear JSDoc comments. The queue.md documentation correctly describes the sequential behavior.

Testing

Comprehensive test coverage for batch mode, but sequential mode testing is limited since TestChatAgent uses batch mode.

Recommendation

Request Changes due to the race condition (#1) and the async queue bug (#2). The architectural concerns should be addressed before this pattern proliferates to other features.

agents-git-bot bot pushed a commit to cloudflare/cloudflare-docs that referenced this pull request Nov 27, 2025
Documents new features from cloudflare/agents#685:
- Chat processing modes (sequential and batch) for AIChatAgent
- Typing indicator support via onInputChange and inputProps
- Configuration options: chatProcessingMode, chatIdleTimeout, chatTypingTimeout
- Updated useAgentChat API reference with new return values
- Added comprehensive examples for both processing modes

Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
agents-git-bot bot pushed a commit to cloudflare/cloudflare-docs that referenced this pull request Nov 27, 2025
This update documents new features from PR #685 (cloudflare/agents):
- Chat processing modes (sequential vs batch)
- Typing indicators for smart message batching
- New AIChatAgent properties: chatProcessingMode, chatIdleTimeout, chatTypingTimeout
- New useAgentChat returns: onInputChange, inputProps, sendTypingIndicator

The documentation includes:
- API reference updates for AIChatAgent class properties
- Examples showing sequential and batch processing modes
- React examples demonstrating typing indicator integration
- Use cases and best practices for each mode

These features enable better handling of conversational chat patterns where
users send multiple rapid messages.

Related PR: cloudflare/agents#685

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@whoiskatrin whoiskatrin marked this pull request as ready for review November 27, 2025 18:14

@deathbyknowledge deathbyknowledge left a comment


Left some comments, I might've gotten the logic wrong with the timeouts

});
if (this.chatProcessingMode === "sequential") {
  // Sequential mode: queue all requests and process one by one
  this._chatQueue.push(request);

Do we not want to use the built-in queue?

/** Default age threshold for cleaning up completed streams (ms) - 24 hours */
const CLEANUP_AGE_THRESHOLD_MS = 24 * 60 * 60 * 1000;
/** Default idle timeout - how long to wait for user to start typing after sending a message (ms) */
const DEFAULT_CHAT_IDLE_TIMEOUT_MS = 5000;

So by default we wait 5 seconds to start processing a message if the user is not typing? That doesn't sound right

Contributor Author


It only applies to batch mode (chatProcessingMode = "batch"); in a normal setup there are no delays.

/** Default idle timeout - how long to wait for user to start typing after sending a message (ms) */
const DEFAULT_CHAT_IDLE_TIMEOUT_MS = 5000;
/** Default typing timeout - how long to wait after user stops typing (ms) */
const DEFAULT_CHAT_TYPING_TIMEOUT_MS = 1500;

I'd expect this one to be higher than the one above, since the user possibly will send another message.

* }
* ```
*/
chatProcessingMode: ChatProcessingMode = "sequential";

The base Agent class has static options that currently only hold hibernate: true/false. This feels like class-level config as well; maybe we want something similar instead of just adding class properties for developers to configure.
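
The suggestion above can be sketched as follows, with an invented `AgentOptions` shape standing in for whatever the real static options type would be:

```typescript
// Hypothetical sketch: move chat config into a static options object like
// the base class's `hibernate`, rather than instance properties.
interface AgentOptions {
  hibernate?: boolean;
  chatProcessingMode?: "sequential" | "batch";
  chatIdleTimeout?: number;
  chatTypingTimeout?: number;
}

class AgentSketch {
  static options: AgentOptions = { hibernate: true };
}

class MyChatAgent extends AgentSketch {
  // Subclasses extend the base options instead of setting loose properties.
  static options: AgentOptions = {
    ...AgentSketch.options,
    chatProcessingMode: "batch",
    chatIdleTimeout: 3000,
  };
}
```

One advantage of the static form is that the framework can read configuration before constructing the instance, which matters for decisions (like hibernation setup) made at class-registration time.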

agents-git-bot bot pushed a commit to cloudflare/cloudflare-docs that referenced this pull request Dec 8, 2025
Sync documentation for PR cloudflare/agents#685: "chat processing with typing indicators and batching support for msgs"

Added documentation for:
- Chat processing modes (sequential vs batch)
- Configuration options for batch mode (chatIdleTimeout, chatTypingTimeout)
- Typing indicator support via onInputChange and inputProps
- Usage examples for both modes
- Updated useAgentChat API reference with new return values

Related PR: cloudflare/agents#685