feat(tui): load conversation and session history
Resolves #6137, #4918, #7380. Related to #6548.
Problem
Users cannot access message history beyond the initial 100 messages loaded on session init. This makes OpenCode history for long-running sessions incomplete:
- Users need to reference earlier context (issue #7380)
- Sessions span hundreds of interactions (issue #4918)
- Important information from early messages becomes inaccessible (issue #6137)
Solution
This solution does not disrupt the natural 100-message limit for the TUI client and keeps the current message roll-off strategy in place.
Functional Core: ~38 lines (22% of changes)
- Server logic: 18 lines
- Client logic: 20 lines

The rest is either:
- UI presentation (buttons, colors, hover states)
- Auto-generated SDK types (non-disruptive additional options for Session.messages())
- Comments and structure
Elegant, non-disruptive implementation that adds on-demand message loading with two modes accessed via a "Load more messages" UI when 100+ messages are present:
- Load conversation history - Loads messages up to the next compaction summary, providing relevant context without overwhelming the UI
- Load full session history - Loads all remaining messages for complete session reconstruction
Only pulls the missing messages - thanks to the ts_before implementation, the client never reloads the entire session.
Zero breaking changes - All parameters optional, existing functionality completely unchanged.
Message roll-off maintained - The client already rolls off one message for each new message once the buffer exceeds 100 messages. This feature works with that strategy rather than around it: a user can load their conversation or full session history, continue the session, and the oldest loaded message still rolls off as new messages arrive. If the user later wants the earliest messages back, they can click the load button at the top again. This natural pruning system is important to maintain, and this solution fits the existing system without modification.
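For reference, the roll-off amounts to something like the following (a simplified sketch of the existing client behaviour, not a quote from the actual code; the buffer type and limit constant are assumptions):

```ts
// Simplified sketch of the existing roll-off behaviour (illustrative only):
// once the buffer holds more than the limit, the oldest message is dropped
// for each new message that arrives.
const LIMIT = 100

function appendWithRollOff<T>(buffer: T[], incoming: T): T[] {
  const next = [...buffer, incoming]
  return next.length > LIMIT ? next.slice(next.length - LIMIT) : next
}
```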
Implementation
Uses timestamp-based anchoring (immutable reference points) rather than offset/count tracking, eliminating state management complexity and race conditions.
Server API Enhancement - Added optional parameters to Session.messages():
- `ts_before`: Unix timestamp for loading messages older than a specific point
- `breakpoint`: Boolean controlling whether to stop at compaction summaries
This enhances the robustness of the server messaging system by providing precise temporal queries rather than simple limits. The ts_before parameter acts as an immutable anchor point that naturally handles concurrent updates and message insertion without index drift.
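As a rough illustration of the anchoring idea (not the actual server code; the Message shape and the summary flag below are assumptions), the filtering reduces to:

```ts
// Sketch of timestamp-anchored loading. The Message shape and the `summary`
// flag are illustrative assumptions, not the real server types.
interface Message {
  id: string
  time: { created: number } // unix timestamp
  summary?: boolean // assumed marker for a compaction summary
}

function olderMessages(all: Message[], tsBefore: number, breakpoint?: boolean): Message[] {
  // keep only messages strictly older than the immutable anchor timestamp
  const older = all.filter((m) => m.time.created < tsBefore)
  if (!breakpoint) return older // full-history mode: everything older

  // conversation mode: walk backwards and stop once a compaction summary is hit
  const result: Message[] = []
  for (let i = older.length - 1; i >= 0; i--) {
    result.unshift(older[i])
    if (older[i].summary) break // include the summary itself, then stop
  }
  return result
}
```

Because the anchor is a fixed timestamp, messages arriving during the request cannot shift which messages the query refers to.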
Client - Two load functions that prepend older messages to the existing array, maintaining chronological order. Toast notifications show message counts.
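A hedged sketch of the client side (the fetcher signature and message type are illustrative; the real diff's function names may differ):

```ts
// Illustrative client-side loader: fetches older messages and prepends them.
type Msg = { id: string; time: { created: number } }

async function loadOlder(
  current: Msg[],
  fetchOlder: (q: { ts_before: number; breakpoint?: boolean }) => Promise<Msg[]>,
  fullHistory: boolean,
): Promise<Msg[]> {
  if (current.length === 0) return current
  const anchor = current[0].time.created // oldest message already loaded

  const older = await fetchOlder({
    ts_before: anchor,
    // conversation mode stops at the next compaction summary; full history
    // omits the flag entirely (undefined = falsy on the server)
    ...(fullHistory ? {} : { breakpoint: true }),
  })

  // prepend so chronological order is preserved; nothing already loaded is re-fetched
  return [...older, ...current]
}
```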
Total addition: 176 lines across 7 files
Technical Details
- Server logic handles reverse iteration with optional breakpoint stopping
- Client omits the `breakpoint` parameter for full history (undefined = falsy)
- Uses the `z.coerce.boolean()` pattern, consistent with other boolean parameters like `roots` (see the sketch after this list)
- Timestamps are immutable - no race conditions or state tracking needed
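Roughly, the query-schema addition looks like the following (the two field names come from this PR; the surrounding object and usage are assumptions for illustration):

```ts
import { z } from "zod"

// Sketch of the optional query parameters added to Session.messages().
const MessagesQuery = z.object({
  ts_before: z.coerce.number().optional(), // unix timestamp anchor
  breakpoint: z.coerce.boolean().optional(), // stop at compaction summaries when truthy
})

// e.g. parsing query-string values such as "?ts_before=1736899200000&breakpoint=true"
const parsed = MessagesQuery.parse({ ts_before: "1736899200000", breakpoint: "true" })
```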
Testing
Verified with sessions containing 1000+ messages:
- Conversation history stops at compaction summaries ✅
- Full session history loads all remaining messages ✅
- Accurate message counts in toast notifications ✅
- No disruption to real-time message updates ✅
Additional Consideration
The server Session.messages() enhancement with ts_before & breakpoint would greatly benefit the web client implementation, which currently reloads all messages when loading more. The timestamp-based approach would allow incremental loading without re-fetching existing messages.
Screenshots
Option to load conversation (next breakpoint) or full session (all breakpoints)
Loading conversation history
Loading full session history
The following comment was made by an LLM; it may be inaccurate:
Based on my search, I found several related PRs that are addressing similar functionality:
Potentially Related PRs:
- feat(session): bi-directional cursor-based pagination (#6548)
  - https://github.com/anomalyco/opencode/pull/8535
  - Related to the same issue (#6548) mentioned in the current PR description. This appears to implement pagination for message loading.
- feat(tui): add configurable message_limit for session history (#6137)
  - https://github.com/anomalyco/opencode/pull/6138
  - Resolves issue #6137, which is one of the issues this PR is addressing. May have overlapping functionality for loading session history.
- session: paginate message loading
  - https://github.com/anomalyco/opencode/pull/6656
  - Directly addresses message pagination/loading, which is the core feature of PR #8627.
These PRs should be reviewed to ensure there's no duplicate work, particularly #6656 and #8535 which appear to implement similar pagination/history loading features. The current PR (#8627) may supersede or need to be coordinated with these existing PRs.
Comparison with Related PRs
PR #6138 (configurable message_limit)
- Raises the hardcoded limit from 100 to a configurable value
- Doesn't actually solve the core problem - just moves the ceiling higher
- Users with 500+ message sessions would still hit the limit
PR #6656 (cursor pagination)
- Implements cursor-based pagination with a `before` parameter
- Auto-loads on scroll near top
- Superseded by #8535
PR #8535 (bi-directional cursor pagination)
- Comprehensive implementation: RFC 5005 Link headers, memory bounding, bi-directional loading
- 500+ lines of changes, 20 tests across 4 files
- Excellent engineering, but significantly more complex than needed for the core problem
Why Timestamp-Based is More Elegant
This PR uses timestamps as immutable anchors rather than message ID cursors:
Advantages over cursor-based approaches:
- Immutable - Timestamps never change, cursors can become invalid
- No RFC compliance needed - No Link headers, simpler protocol
- Natural concurrency handling - New messages don't invalidate timestamps
- Simpler mental model - "messages before this point in time" vs "messages before this cursor"
- Future-proof - The `ts_before` parameter benefits any client (web, mobile)
Why explicit loading vs auto-scroll:
- Clear user intent (no surprise loads while reading)
- Two distinct modes: "load context" (stops at compaction) vs "load everything"
- No memory management complexity needed (user controls growth)
- 176 lines vs 500+ lines
Server API Enhancement Benefits Web Client
As noted in the PR description, the web client currently reloads all messages when loading more. The ts_before parameter allows incremental loading without re-fetching existing messages - a benefit shared across all clients.
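For example, a web client could keep only the oldest loaded timestamp and page backwards with it (a sketch; the endpoint path and response shape are assumptions):

```ts
// Sketch: incremental backfill for a web client. The endpoint path and
// response shape are assumptions for illustration only.
async function backfill(sessionID: string, oldestLoadedTs: number) {
  const res = await fetch(`/session/${sessionID}/message?ts_before=${oldestLoadedTs}`)
  const older: { time: { created: number } }[] = await res.json()
  // only the gap before `oldestLoadedTs` is transferred; already-loaded messages stay put
  return older
}
```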
Addressing the Three Issues
- #6137 - ✅ Solved: Users can now access all history beyond 100 messages
- #4918 - ✅ Solved: Progressive loading for sessions with hundreds of interactions
- #7380 - ✅ Solved: Early messages are no longer inaccessible
This implementation provides an elegant, non-disruptive solution that solves the core problems without the complexity of RFC-compliant pagination or memory management that may not be necessary for most users.
Have not yet tested, but I like the idea; if it works I would totally be in favour.
Little bit suspicious of those 2 failing checks, though. I'm sure you can sort those out. ;)
The failing tests are due to another commit:
Commit 779610d66 from PR #7360 by turculaurentiu91, merged January 15, 2026 at 08:12 UTC, uses platform.openLink() on line 300 of packages/desktop/src/index.tsx, but platform isn't created until line 304 inside the render() function. This causes the typecheck error: Cannot find name 'platform'.
I haven't looked at the failures too deeply yet. If it's just the CI system acting up - which does happen sometimes - then it could be safe to disregard; it's just worth taking a look to gain some confidence that that's what happened.