<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Xinyang Zhang | Hejie Cui</title><link>https://hejiecui.com/author/xinyang-zhang/</link><atom:link href="https://hejiecui.com/author/xinyang-zhang/index.xml" rel="self" type="application/rss+xml"/><description>Xinyang Zhang</description><generator>Source Themes Academic (https://sourcethemes.com/academic/)</generator><language>en-us</language><lastBuildDate>Thu, 30 Apr 2026 01:00:00 +0000</lastBuildDate><image><url>https://hejiecui.com/images/logo_hu55b4809d0d762654adf09f4071918d91_2611179_300x300_fit_lanczos_2.png</url><title>Xinyang Zhang</title><link>https://hejiecui.com/author/xinyang-zhang/</link></image><item><title>T²PO: Uncertainty-Guided Exploration Control for Stable Multi-Turn Agentic Reinforcement Learning</title><link>https://hejiecui.com/publication/t2po/</link><pubDate>Thu, 30 Apr 2026 01:00:00 +0000</pubDate><guid>https://hejiecui.com/publication/t2po/</guid><description>&lt;p>Recent progress in multi-turn reinforcement learning (RL) has significantly improved the performance of reasoning LLMs on complex interactive tasks. Despite advances in stabilization techniques such as fine-grained credit assignment and trajectory filtering, instability remains pervasive and often leads to training collapse. We argue that this instability stems from inefficient exploration in multi-turn settings, where policies continue to generate low-information actions that neither reduce uncertainty nor advance task progress. To address this issue, we propose Token- and Turn-level Policy Optimization (T²PO), an uncertainty-aware framework that explicitly controls exploration at fine-grained levels. At the token level, T²PO monitors uncertainty dynamics and triggers a thinking intervention once the marginal uncertainty change falls below a threshold. 
At the turn level, T²PO identifies interactions with negligible exploration progress and dynamically resamples such turns to avoid wasted rollouts. We evaluate T²PO in diverse environments, including WebShop, ALFWorld, and Search QA, demonstrating substantial gains in training stability and task performance alongside improved exploration efficiency.&lt;/p>
&lt;p>&lt;sup>*&lt;/sup> Corresponding author.&lt;/p></description></item><item><title>CoMem: Context Management with A Decoupled Long-Context Model</title><link>https://hejiecui.com/publication/comem/</link><pubDate>Thu, 30 Apr 2026 00:00:00 +0000</pubDate><guid>https://hejiecui.com/publication/comem/</guid><description>&lt;p>Context management enables agentic models to solve long-horizon tasks through iterative summarization of previous interaction histories. However, this process typically incurs substantial decoding overhead for the extra summarization tokens, which significantly affects end-to-end response latency at deployment. In this paper, we introduce CoMem, a novel framework that decouples memory management from the primary agent workflow, enabling these processes to execute in parallel. We propose a step-off asynchronous pipeline that overlaps the memory model&amp;rsquo;s summarization with the agent&amp;rsquo;s inference, effectively masking the latency of context processing. To ensure robustness under this asynchronous setting, we introduce a reward-driven training strategy that aligns the memory model to capture sufficient statistics for the agent&amp;rsquo;s decision-making. Theoretical analysis confirms that CoMem offers a superior efficiency-effectiveness trade-off compared to coupled architectures. Our extensive experimental results on SWE-Bench-Verified show that CoMem achieves a 1.4x latency improvement over vanilla long-context solutions while preserving most of their performance. Furthermore, we demonstrate that these latency gains scale favorably with increased system throughput, offering a modular path forward for the independent optimization of agent reasoning and memory compression.&lt;/p></description></item></channel></rss>