fix: resolve TypeError and improve Pydantic compatibility in async-context-compression v1.2.2

fujie
2026-01-21 21:51:58 +08:00
parent a75ee555fa
commit 500e090b11
8 changed files with 79 additions and 60 deletions


@@ -1,7 +1,7 @@
 # Async Context Compression
 <span class="category-badge filter">Filter</span>
-<span class="version-badge">v1.2.1</span>
+<span class="version-badge">v1.2.2</span>
 Reduces token consumption in long conversations through intelligent summarization while maintaining conversational coherence.


@@ -1,7 +1,7 @@
 # Async Context Compression异步上下文压缩
 <span class="category-badge filter">Filter</span>
-<span class="version-badge">v1.2.1</span>
+<span class="version-badge">v1.2.2</span>
 通过智能摘要减少长对话的 token 消耗,同时保持对话连贯。


@@ -22,7 +22,7 @@ Filters act as middleware in the message pipeline:
 Reduces token consumption in long conversations through intelligent summarization while maintaining coherence.
-**Version:** 1.2.1
+**Version:** 1.2.2
 [:octicons-arrow-right-24: Documentation](async-context-compression.md)


@@ -22,7 +22,7 @@ Filter 充当消息管线中的中间件:
 通过智能总结减少长对话的 token 消耗,同时保持连贯性。
-**版本:** 1.2.1
+**版本:** 1.2.2
 [:octicons-arrow-right-24: 查看文档](async-context-compression.md)


@@ -1,9 +1,13 @@
 # Async Context Compression Filter
-**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 1.2.1 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT
+**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 1.2.2 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT
 This filter reduces token consumption in long conversations through intelligent summarization and message compression while keeping conversations coherent.
+## What's new in 1.2.2
+- **Critical Fix**: Resolved `TypeError: 'str' object is not callable` caused by variable name conflict in logging function.
+- **Compatibility**: Enhanced `params` handling to support Pydantic objects, improving compatibility with different OpenWebUI versions.
 ## What's new in 1.2.1
 - **Smart Configuration**: Automatically detects base model settings for custom models and adds `summary_model_max_context` for independent summary limits.
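The "Critical Fix" above is a classic Python shadowing bug: a parameter named `type` hides the `type()` builtin inside the function body, so a later `type(...)` call invokes the string argument instead and raises `TypeError: 'str' object is not callable`. A minimal, hypothetical reproduction (not the filter's actual code):

```python
def log_buggy(message: str, type: str = "info") -> str:
    # `type` here is the parameter (a str such as "info"), shadowing the
    # type() builtin, so type(message) calls the string -> TypeError.
    return f"{message} ({type(message).__name__})"

def log_fixed(message: str, log_type: str = "info") -> str:
    # With the parameter renamed to log_type, type() is the builtin again.
    return f"[{log_type}] {message} ({type(message).__name__})"

try:
    log_buggy("hello")
except TypeError as e:
    print(e)  # 'str' object is not callable

print(log_fixed("hello"))  # [info] hello (str)
```

This is why the commit renames the `type` parameter of `_log` to `log_type` throughout both filter variants.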


@@ -1,11 +1,15 @@
 # 异步上下文压缩过滤器
-**作者:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **版本:** 1.2.1 | **项目:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **许可证:** MIT
+**作者:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **版本:** 1.2.2 | **项目:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **许可证:** MIT
 > **重要提示**:为了确保所有过滤器的可维护性和易用性,每个过滤器都应附带清晰、完整的文档,以确保其功能、配置和使用方法得到充分说明。
 本过滤器通过智能摘要和消息压缩技术,在保持对话连贯性的同时,显著降低长对话的 Token 消耗。
+## 1.2.2 版本更新
+- **严重错误修复**: 解决了因日志函数变量名冲突导致的 `TypeError: 'str' object is not callable` 错误。
+- **兼容性增强**: 改进了 `params` 处理逻辑以支持 Pydantic 对象,提高了对不同 OpenWebUI 版本的兼容性。
 ## 1.2.1 版本更新
 - **智能配置增强**: 自动检测自定义模型的基础模型配置,并新增 `summary_model_max_context` 参数以独立控制摘要模型的上下文限制。


@@ -5,7 +5,7 @@ author: Fu-Jie
 author_url: https://github.com/Fu-Jie/awesome-openwebui
 funding_url: https://github.com/open-webui
 description: Reduces token consumption in long conversations while maintaining coherence through intelligent summarization and message compression.
-version: 1.2.1
+version: 1.2.2
 openwebui_id: b1655bc8-6de9-4cad-8cb5-a6f7829a02ce
 license: MIT
@@ -839,7 +839,7 @@ class Filter:
         except Exception as e:
             logger.error(f"Error emitting debug log: {e}")
-    async def _log(self, message: str, type: str = "info", event_call=None):
+    async def _log(self, message: str, log_type: str = "info", event_call=None):
         """Unified logging to both backend (print) and frontend (console.log)"""
         # Backend logging
         if self.valves.debug_mode:
@@ -849,11 +849,11 @@ class Filter:
         if self.valves.show_debug_log and event_call:
             try:
                 css = "color: #3b82f6;"  # Blue default
-                if type == "error":
+                if log_type == "error":
                     css = "color: #ef4444; font-weight: bold;"  # Red
-                elif type == "warning":
+                elif log_type == "warning":
                     css = "color: #f59e0b;"  # Orange
-                elif type == "success":
+                elif log_type == "success":
                     css = "color: #10b981; font-weight: bold;"  # Green
                 # Clean message for frontend: remove separators and extra newlines
@@ -999,6 +999,7 @@ class Filter:
         # 2. For base models: check messages for role='system'
         system_prompt_content = None
+        # Try to get from DB (custom model)
         # Try to get from DB (custom model)
         try:
             model_id = body.get("model")
@@ -1026,12 +1027,17 @@ class Filter:
             # Handle case where params is a JSON string
             if isinstance(params, str):
                 params = json.loads(params)
-            # Handle dict or Pydantic object
+            # Convert Pydantic model to dict if needed
+            elif hasattr(params, "model_dump"):
+                params = params.model_dump()
+            elif hasattr(params, "dict"):
+                params = params.dict()
+            # Now params should be a dict
             if isinstance(params, dict):
                 system_prompt_content = params.get("system")
             else:
-                # Assume Pydantic model or object
+                # Fallback: try getattr
                 system_prompt_content = getattr(params, "system", None)
         if system_prompt_content:
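The new branches normalize `params` before the `system` lookup: a JSON string is parsed, a Pydantic v2 model is converted via `model_dump()`, a Pydantic v1 model via `.dict()`, and only then is the dict queried, with `getattr` as a last resort. A standalone sketch of that normalization (simplified and hypothetical; the real filter operates on the model record fetched from the OpenWebUI DB):

```python
import json

def extract_system_prompt(params):
    """Return the 'system' field regardless of how params is represented."""
    if isinstance(params, str):
        # JSON string (e.g. serialized model params)
        params = json.loads(params)
    elif hasattr(params, "model_dump"):
        # Pydantic v2 model
        params = params.model_dump()
    elif hasattr(params, "dict"):
        # Pydantic v1 model
        params = params.dict()
    if isinstance(params, dict):
        return params.get("system")
    # Fallback: arbitrary object with a .system attribute
    return getattr(params, "system", None)

print(extract_system_prompt('{"system": "Be concise."}'))  # Be concise.
print(extract_system_prompt({"system": "Be concise."}))    # Be concise.
```

Checking `isinstance(params, str)` first matters: strings would otherwise fall through to the attribute probes, and a plain dict has neither `model_dump` nor a `dict` attribute, so it safely reaches the `isinstance(params, dict)` branch.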
@@ -1050,7 +1056,7 @@ class Filter:
             if self.valves.show_debug_log and __event_call__:
                 await self._log(
                     f"[Inlet] ❌ Failed to parse model params: {e}",
-                    type="error",
+                    log_type="error",
                     event_call=__event_call__,
                 )
@@ -1071,7 +1077,7 @@ class Filter:
             if self.valves.show_debug_log and __event_call__:
                 await self._log(
                     f"[Inlet] ❌ Error fetching system prompt from DB: {e}",
-                    type="error",
+                    log_type="error",
                     event_call=__event_call__,
                 )
         if self.valves.debug_mode:
@@ -1125,7 +1131,7 @@ class Filter:
         if not chat_id:
             await self._log(
                 "[Inlet] ❌ Missing chat_id in metadata, skipping compression",
-                type="error",
+                log_type="error",
                 event_call=__event_call__,
             )
             return body
@@ -1154,7 +1160,7 @@ class Filter:
             else:
                 await self._log(
                     f"[Inlet] ⚠️ Invalid Model Configs (Raw: '{raw_config}'): No valid configs parsed. Expected format: 'model_id:threshold:max_context'",
-                    type="warning",
+                    log_type="warning",
                     event_call=__event_call__,
                 )
         else:
@@ -1258,7 +1264,7 @@ class Filter:
         if total_tokens > max_context_tokens:
             await self._log(
                 f"[Inlet] ⚠️ Candidate prompt ({total_tokens} Tokens) exceeds limit ({max_context_tokens}). Reducing history...",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
@@ -1395,7 +1401,7 @@ class Filter:
         await self._log(
             f"[Inlet] Applied summary: {system_info} + Head({len(head_messages)} msg, {head_tokens}t) + Summary({summary_tokens}t) + Tail({len(tail_messages)} msg, {tail_tokens}t) = Total({total_section_tokens}t)",
-            type="success",
+            log_type="success",
             event_call=__event_call__,
         )
@@ -1455,7 +1461,7 @@ class Filter:
         if total_tokens > max_context_tokens:
             await self._log(
                 f"[Inlet] ⚠️ Original messages ({total_tokens} Tokens) exceed limit ({max_context_tokens}). Reducing history...",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
@@ -1523,7 +1529,7 @@ class Filter:
         if not chat_id:
             await self._log(
                 "[Outlet] ❌ Missing chat_id in metadata, skipping compression",
-                type="error",
+                log_type="error",
                 event_call=__event_call__,
             )
             return body
@@ -1625,7 +1631,7 @@ class Filter:
         if current_tokens >= compression_threshold_tokens:
             await self._log(
                 f"[🔍 Background Calculation] ⚡ Compression threshold triggered (Token: {current_tokens} >= {compression_threshold_tokens})",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
@@ -1648,7 +1654,7 @@ class Filter:
         except Exception as e:
             await self._log(
                 f"[🔍 Background Calculation] ❌ Error: {str(e)}",
-                type="error",
+                log_type="error",
                 event_call=__event_call__,
             )
@@ -1687,7 +1693,7 @@ class Filter:
             target_compressed_count = max(0, len(messages) - self.valves.keep_last)
             await self._log(
                 f"[🤖 Async Summary Task] ⚠️ target_compressed_count is None, estimating: {target_compressed_count}",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
@@ -1734,7 +1740,7 @@ class Filter:
         if not summary_model_id:
             await self._log(
                 "[🤖 Async Summary Task] ⚠️ Summary model does not exist, skipping compression",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
             return
@@ -1765,7 +1771,7 @@ class Filter:
             excess_tokens = estimated_input_tokens - max_context_tokens
             await self._log(
                 f"[🤖 Async Summary Task] ⚠️ Middle messages ({middle_tokens} Tokens) + Buffer exceed summary model limit ({max_context_tokens}), need to remove approx {excess_tokens} Tokens",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
@@ -1822,7 +1828,7 @@ class Filter:
         if not new_summary:
             await self._log(
                 "[🤖 Async Summary Task] ⚠️ Summary generation returned empty result, skipping save",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
             return
@@ -1851,7 +1857,7 @@ class Filter:
         await self._log(
             f"[🤖 Async Summary Task] ✅ Complete! New summary length: {len(new_summary)} characters",
-            type="success",
+            log_type="success",
             event_call=__event_call__,
         )
         await self._log(
@@ -1957,14 +1963,14 @@ class Filter:
             except Exception as e:
                 await self._log(
                     f"[Status] Error calculating tokens: {e}",
-                    type="error",
+                    log_type="error",
                     event_call=__event_call__,
                 )
         except Exception as e:
             await self._log(
                 f"[🤖 Async Summary Task] ❌ Error: {str(e)}",
-                type="error",
+                log_type="error",
                 event_call=__event_call__,
             )
@@ -2066,7 +2072,7 @@ Based on the content above, generate the summary:
         if not model:
             await self._log(
                 "[🤖 LLM Call] ⚠️ Summary model does not exist, skipping summary generation",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
             return ""
@@ -2133,7 +2139,7 @@ Based on the content above, generate the summary:
         await self._log(
             f"[🤖 LLM Call] ✅ Successfully received summary",
-            type="success",
+            log_type="success",
             event_call=__event_call__,
         )
@@ -2154,7 +2160,7 @@ Based on the content above, generate the summary:
         await self._log(
             f"[🤖 LLM Call] ❌ {error_message}",
-            type="error",
+            log_type="error",
             event_call=__event_call__,
         )


@@ -5,7 +5,7 @@ author: Fu-Jie
 author_url: https://github.com/Fu-Jie/awesome-openwebui
 funding_url: https://github.com/open-webui
 description: 通过智能摘要和消息压缩,降低长对话的 token 消耗,同时保持对话连贯性。
-version: 1.2.1
+version: 1.2.2
 openwebui_id: 5c0617cb-a9e4-4bd6-a440-d276534ebd18
 license: MIT
@@ -787,7 +787,7 @@ class Filter:
         except Exception as e:
             print(f"Error emitting debug log: {e}")
-    async def _log(self, message: str, type: str = "info", event_call=None):
+    async def _log(self, message: str, log_type: str = "info", event_call=None):
         """统一日志输出到后端 (print) 和前端 (console.log)"""
         # 后端日志
         if self.valves.debug_mode:
@@ -797,11 +797,11 @@ class Filter:
         if self.valves.show_debug_log and event_call:
             try:
                 css = "color: #3b82f6;"  # 默认蓝色
-                if type == "error":
+                if log_type == "error":
                     css = "color: #ef4444; font-weight: bold;"  # 红色
-                elif type == "warning":
+                elif log_type == "warning":
                     css = "color: #f59e0b;"  # 橙色
-                elif type == "success":
+                elif log_type == "success":
                     css = "color: #10b981; font-weight: bold;"  # 绿色
                 # 清理前端消息:移除分隔符和多余换行
@@ -948,12 +948,17 @@ class Filter:
             # 处理 params 是 JSON 字符串的情况
             if isinstance(params, str):
                 params = json.loads(params)
-            # 处理字典或 Pydantic 对象
+            # 转换 Pydantic 模型为字典
+            elif hasattr(params, "model_dump"):
+                params = params.model_dump()
+            elif hasattr(params, "dict"):
+                params = params.dict()
+            # 处理字典
             if isinstance(params, dict):
                 system_prompt_content = params.get("system")
             else:
-                # 假设是 Pydantic 模型或对象
+                # 回退:尝试 getattr
                 system_prompt_content = getattr(params, "system", None)
         if system_prompt_content:
@@ -972,7 +977,7 @@ class Filter:
             if self.valves.show_debug_log and __event_call__:
                 await self._log(
                     f"[Inlet] ❌ 解析模型参数失败: {e}",
-                    type="error",
+                    log_type="error",
                     event_call=__event_call__,
                 )
@@ -986,7 +991,7 @@ class Filter:
             if self.valves.show_debug_log and __event_call__:
                 await self._log(
                     f"[Inlet] ❌ 数据库中未找到模型",
-                    type="warning",
+                    log_type="warning",
                     event_call=__event_call__,
                 )
@@ -994,7 +999,7 @@ class Filter:
             if self.valves.show_debug_log and __event_call__:
                 await self._log(
                     f"[Inlet] ❌ 从数据库获取系统提示词错误: {e}",
-                    type="error",
+                    log_type="error",
                     event_call=__event_call__,
                 )
         if self.valves.debug_mode:
@@ -1048,7 +1053,7 @@ class Filter:
         if not chat_id:
             await self._log(
                 "[Inlet] ❌ metadata 中缺少 chat_id跳过压缩",
-                type="error",
+                log_type="error",
                 event_call=__event_call__,
             )
             return body
@@ -1154,7 +1159,7 @@ class Filter:
         if total_tokens > max_context_tokens:
             await self._log(
                 f"[Inlet] ⚠️ 候选提示词 ({total_tokens} Tokens) 超过上限 ({max_context_tokens})。正在缩减历史记录...",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
@@ -1290,7 +1295,7 @@ class Filter:
         await self._log(
             f"[Inlet] 应用摘要: {system_info} + Head({len(head_messages)} 条, {head_tokens}t) + Summary({summary_tokens}t) + Tail({len(tail_messages)} 条, {tail_tokens}t) = Total({total_section_tokens}t)",
-            type="success",
+            log_type="success",
             event_call=__event_call__,
         )
@@ -1350,7 +1355,7 @@ class Filter:
         if total_tokens > max_context_tokens:
             await self._log(
                 f"[Inlet] ⚠️ 原始消息 ({total_tokens} Tokens) 超过上限 ({max_context_tokens})。正在缩减历史记录...",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
@@ -1420,7 +1425,7 @@ class Filter:
         if not chat_id:
             await self._log(
                 "[Outlet] ❌ metadata 中缺少 chat_id跳过压缩",
-                type="error",
+                log_type="error",
                 event_call=__event_call__,
             )
             return body
@@ -1486,7 +1491,7 @@ class Filter:
         if current_tokens >= compression_threshold_tokens:
             await self._log(
                 f"[🔍 后台计算] ⚡ 触发压缩阈值 (Token: {current_tokens} >= {compression_threshold_tokens})",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
@@ -1509,7 +1514,7 @@ class Filter:
         except Exception as e:
             await self._log(
                 f"[🔍 后台计算] ❌ 错误: {str(e)}",
-                type="error",
+                log_type="error",
                 event_call=__event_call__,
             )
@@ -1546,7 +1551,7 @@ class Filter:
             target_compressed_count = max(0, len(messages) - self.valves.keep_last)
             await self._log(
                 f"[🤖 异步摘要任务] ⚠️ target_compressed_count 为 None进行估算: {target_compressed_count}",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
@@ -1593,7 +1598,7 @@ class Filter:
         if not summary_model_id:
             await self._log(
                 "[🤖 异步摘要任务] ⚠️ 摘要模型不存在,跳过压缩",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
             return
@@ -1624,7 +1629,7 @@ class Filter:
             excess_tokens = estimated_input_tokens - max_context_tokens
             await self._log(
                 f"[🤖 异步摘要任务] ⚠️ 中间消息 ({middle_tokens} Tokens) + 缓冲超过摘要模型上限 ({max_context_tokens}),需要移除约 {excess_tokens} Token",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
@@ -1681,7 +1686,7 @@ class Filter:
         if not new_summary:
             await self._log(
                 "[🤖 异步摘要任务] ⚠️ 摘要生成返回空结果,跳过保存",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
             return
@@ -1710,7 +1715,7 @@ class Filter:
         await self._log(
             f"[🤖 异步摘要任务] ✅ 完成!新摘要长度: {len(new_summary)} 字符",
-            type="success",
+            log_type="success",
             event_call=__event_call__,
         )
         await self._log(
@@ -1821,14 +1826,14 @@ class Filter:
             except Exception as e:
                 await self._log(
                     f"[Status] 计算 Token 错误: {e}",
-                    type="error",
+                    log_type="error",
                     event_call=__event_call__,
                 )
         except Exception as e:
             await self._log(
                 f"[🤖 异步摘要任务] ❌ 错误: {str(e)}",
-                type="error",
+                log_type="error",
                 event_call=__event_call__,
             )
@@ -1928,7 +1933,7 @@ class Filter:
         if not model:
             await self._log(
                 "[🤖 LLM 调用] ⚠️ 摘要模型不存在,跳过摘要生成",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
             return ""
@@ -1995,7 +2000,7 @@ class Filter:
         await self._log(
             f"[🤖 LLM 调用] ✅ 成功接收摘要",
-            type="success",
+            log_type="success",
             event_call=__event_call__,
         )
@@ -2016,7 +2021,7 @@ class Filter:
         await self._log(
             f"[🤖 LLM 调用] ❌ {error_message}",
-            type="error",
+            log_type="error",
             event_call=__event_call__,
         )