<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Yuwei Zhang | Hejie Cui</title><link>https://hejiecui.com/author/yuwei-zhang/</link><atom:link href="https://hejiecui.com/author/yuwei-zhang/index.xml" rel="self" type="application/rss+xml"/><description>Yuwei Zhang</description><generator>Source Themes Academic (https://sourcethemes.com/academic/)</generator><language>en-us</language><lastBuildDate>Thu, 30 Apr 2026 00:00:00 +0000</lastBuildDate><image><url>https://hejiecui.com/images/logo_hu55b4809d0d762654adf09f4071918d91_2611179_300x300_fit_lanczos_2.png</url><title>Yuwei Zhang</title><link>https://hejiecui.com/author/yuwei-zhang/</link></image><item><title>CoMem: Context Management with A Decoupled Long-Context Model</title><link>https://hejiecui.com/publication/comem/</link><pubDate>Thu, 30 Apr 2026 00:00:00 +0000</pubDate><guid>https://hejiecui.com/publication/comem/</guid><description>&lt;p>Context management enables agentic models to solve long-horizon tasks through iterative summarization of previous interaction histories. However, this process typically incurs substantial decoding overhead for the extra summarization tokens, which significantly increases end-to-end response latency at deployment. In this paper, we introduce CoMem, a novel framework that decouples memory management from the primary agent workflow, enabling the two processes to execute in parallel. We propose a step-off asynchronous pipeline that overlaps the memory model&amp;rsquo;s summarization with the agent&amp;rsquo;s inference, effectively masking the latency of context processing. To ensure robustness under this asynchronous setting, we introduce a reward-driven training strategy that aligns the memory model to capture sufficient statistics for the agent&amp;rsquo;s decision-making. Theoretical analysis confirms that CoMem offers a superior efficiency-effectiveness trade-off compared to coupled architectures. Our extensive experiments on SWE-Bench-Verified show that CoMem achieves a 1.4x latency improvement over vanilla long-context solutions while preserving most of their performance. Furthermore, we demonstrate that these latency gains scale favorably with increased system throughput, offering a modular path forward for the independent optimization of agent reasoning and memory compression.&lt;/p></description></item></channel></rss>