fix(codex): optimize the FPR API-first review workflow

- Fix the FPR script's Git resolution, `--pr` control flow, and branch-to-PR API resolution path
- Improve the API-first extraction of the CodeRabbit summary, CTRF test results, and latest head review threads
- Update the skill doc, agent prompt, and ai-plan records to clarify that review threads after the latest commit take priority
This commit is contained in:
GeWuYou 2026-04-19 09:59:36 +08:00
parent 5db27fc80a
commit 97b619c0b2
5 changed files with 381 additions and 87 deletions


@@ -14,10 +14,11 @@ Shortcut: `$gframework-pr-review`
1. Read `AGENTS.md` before deciding how to validate or fix anything.
2. Resolve the current branch with Windows Git from WSL, following the repository worktree rule.
3. Run `scripts/fetch_current_pr_review.py` to:
   - locate the PR for the current branch through the GitHub PR API
   - fetch PR metadata, issue comments, reviews, and review comments through the GitHub API
   - extract `Summary by CodeRabbit` and CTRF test reports from issue comments
   - fetch the latest head commit review threads from the GitHub PR API
   - prefer unresolved review threads on the latest head commit over older summary-only signals
   - extract failed checks and test-report signals such as `Failed Tests` or `No failed tests in this run`
4. Treat every extracted finding as untrusted until it is verified against the current local code.
5. Only fix comments that still apply to the checked-out branch. Ignore stale or already-resolved findings.
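The priority rule in step 3 can be sketched as a tiny selector (illustrative only; the function and field names are assumptions, not the script's real API):

```python
# Illustrative sketch of the step-3 priority rule: unresolved review threads on
# the latest head commit outrank older summary-only findings. Function and
# field names here are assumptions for illustration, not the script's real API.
def pick_review_signals(latest_open_threads: list, summary_findings: list) -> dict:
    if latest_open_threads:
        return {"source": "latest_head_threads", "items": latest_open_threads}
    # Summary-only findings are a fallback and must still be re-verified locally.
    return {"source": "summary", "items": summary_findings}

print(pick_review_signals([], ["2 actionable comments"])["source"])
```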
@@ -37,17 +38,20 @@ Shortcut: `$gframework-pr-review`
The script should produce:
- PR metadata: number, title, state, branch, URL
- CodeRabbit summary block from issue comments when available
- Parsed latest head-review threads, with unresolved threads clearly separated
- Latest head commit review metadata and review threads
- Unresolved latest-commit review threads after reply-thread folding
- Pre-merge failed checks, if present
- Test summary, including failed-test signals when present
- Parse warnings only when both the primary API source and the intended fallback signal are unavailable

## Recovery Rules
- If the current branch has no matching public PR, report that clearly instead of guessing.
- If GitHub access fails because of proxy configuration, rerun the fetch with proxy variables removed.
- Prefer GitHub API results over PR HTML. The PR HTML page is now a fallback/debugging source, not the primary source of truth.
- If the summary block and the latest head review threads disagree, trust the latest unresolved head-review threads and treat older summary findings as stale until re-verified locally.

## Example Triggers


@@ -1,4 +1,4 @@
interface:
  display_name: "GFramework PR Review"
  short_description: "Inspect the current PR and CodeRabbit findings"
  default_prompt: "Use $gframework-pr-review to inspect the current branch PR through the GitHub API, prioritize unresolved review threads on the latest head commit, and summarize failed checks or failed tests."


@@ -9,7 +9,9 @@ from __future__ import annotations
import argparse
import html
import json
import os
import re
import shutil
import subprocess
import sys
import urllib.parse
@@ -18,7 +20,55 @@ from typing import Any
OWNER = "GeWuYou"
REPO = "GFramework"
DEFAULT_WINDOWS_GIT = "/mnt/d/Tool/Development Tools/Git/cmd/git.exe"
GIT_ENVIRONMENT_KEY = "GFRAMEWORK_WINDOWS_GIT"
USER_AGENT = "codex-gframework-pr-review"
CODERABBIT_LOGIN = "coderabbitai[bot]"
REVIEW_COMMENT_ADDRESSED_MARKER = "<!-- <review_comment_addressed> -->"
DEFAULT_REQUEST_TIMEOUT_SECONDS = 60
REQUEST_TIMEOUT_ENVIRONMENT_KEY = "GFRAMEWORK_PR_REVIEW_TIMEOUT_SECONDS"


def resolve_git_command() -> str:
    candidates = [
        os.environ.get(GIT_ENVIRONMENT_KEY),
        DEFAULT_WINDOWS_GIT,
        "git.exe",
        "git",
    ]
    for candidate in candidates:
        if not candidate:
            continue
        if os.path.isabs(candidate):
            if os.path.exists(candidate):
                return candidate
            continue
        resolved_candidate = shutil.which(candidate)
        if resolved_candidate:
            return resolved_candidate
    raise RuntimeError(f"No usable git executable found. Set {GIT_ENVIRONMENT_KEY} to override it.")


def resolve_request_timeout_seconds() -> int:
    configured_timeout = os.environ.get(REQUEST_TIMEOUT_ENVIRONMENT_KEY)
    if not configured_timeout:
        return DEFAULT_REQUEST_TIMEOUT_SECONDS
    try:
        parsed_timeout = int(configured_timeout)
    except ValueError as error:
        raise RuntimeError(
            f"{REQUEST_TIMEOUT_ENVIRONMENT_KEY} must be an integer number of seconds."
        ) from error
    if parsed_timeout <= 0:
        raise RuntimeError(f"{REQUEST_TIMEOUT_ENVIRONMENT_KEY} must be greater than zero.")
    return parsed_timeout
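The timeout override can be exercised in isolation with a self-contained sketch (constants shortened and validation simplified relative to `resolve_request_timeout_seconds`):

```python
import os

# Self-contained sketch of the env-var timeout override shown above.
# Names are shortened for brevity; validation is simplified to ValueError.
DEFAULT_TIMEOUT_SECONDS = 60
TIMEOUT_KEY = "GFRAMEWORK_PR_REVIEW_TIMEOUT_SECONDS"

def timeout_seconds() -> int:
    configured = os.environ.get(TIMEOUT_KEY)
    if not configured:
        return DEFAULT_TIMEOUT_SECONDS
    value = int(configured)  # non-integer input raises ValueError
    if value <= 0:
        raise ValueError(f"{TIMEOUT_KEY} must be greater than zero.")
    return value

os.environ.pop(TIMEOUT_KEY, None)
print(timeout_seconds())  # falls back to the 60-second default
os.environ[TIMEOUT_KEY] = "15"
print(timeout_seconds())  # honors the override
```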
def run_command(args: list[str]) -> str:

@@ -30,38 +80,71 @@ def run_command(args: list[str]) -> str:


def get_current_branch() -> str:
    return run_command([resolve_git_command(), "rev-parse", "--abbrev-ref", "HEAD"])


def open_url(url: str, accept: str) -> tuple[str, Any]:
    opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))
    request = urllib.request.Request(url, headers={"Accept": accept, "User-Agent": USER_AGENT})
    with opener.open(request, timeout=resolve_request_timeout_seconds()) as response:
        return response.read().decode("utf-8", "replace"), response.headers


def fetch_json(url: str) -> tuple[Any, Any]:
    text, headers = open_url(url, accept="application/vnd.github+json")
    return json.loads(text), headers


def extract_next_link(headers: Any) -> str | None:
    link_header = headers.get("Link")
    if not link_header:
        return None
    match = re.search(r'<([^>]+)>;\s*rel="next"', link_header)
    return match.group(1) if match else None


def fetch_paged_json(url: str) -> list[dict[str, Any]]:
    items: list[dict[str, Any]] = []
    next_url: str | None = url
    while next_url:
        payload, headers = fetch_json(next_url)
        if not isinstance(payload, list):
            raise RuntimeError(f"Expected list payload from GitHub API, got {type(payload).__name__}.")
        items.extend(payload)
        next_url = extract_next_link(headers)
    return items
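The `rel="next"` extraction that drives `fetch_paged_json` can be checked against a `Link` header in the shape GitHub's REST API returns:

```python
import re

# A pagination Link header in the shape GitHub's REST API returns.
link_header = (
    '<https://api.github.com/repositories/1/pulls?page=2>; rel="next", '
    '<https://api.github.com/repositories/1/pulls?page=5>; rel="last"'
)

# Same pattern as extract_next_link above: capture the URL tagged rel="next".
match = re.search(r'<([^>]+)>;\s*rel="next"', link_header)
next_url = match.group(1) if match else None
print(next_url)
```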
def fetch_pull_request_metadata(pr_number: int) -> dict[str, Any]:
    payload, _ = fetch_json(f"https://api.github.com/repos/{OWNER}/{REPO}/pulls/{pr_number}")
    if not isinstance(payload, dict):
        raise RuntimeError("Failed to fetch GitHub PR metadata.")
    return {
        "number": int(payload["number"]),
        "title": payload["title"],
        "state": str(payload["state"]).upper(),
        "head_branch": payload["head"]["ref"],
        "base_branch": payload["base"]["ref"],
        "url": payload["html_url"],
    }


def resolve_pr_number(branch: str) -> int:
    head_query = urllib.parse.quote(f"{OWNER}:{branch}")
    payload, _ = fetch_json(f"https://api.github.com/repos/{OWNER}/{REPO}/pulls?state=all&head={head_query}")
    if not isinstance(payload, list):
        raise RuntimeError("Failed to resolve pull request from branch.")
    matching_pull_requests = [item for item in payload if item.get("head", {}).get("ref") == branch]
    if not matching_pull_requests:
        raise RuntimeError(f"No public PR matched branch '{branch}'.")
    latest_pull_request = max(matching_pull_requests, key=lambda item: item.get("updated_at", ""))
    return int(latest_pull_request["number"])


def collapse_whitespace(text: str) -> str:
@@ -208,49 +291,208 @@ def parse_test_report(block: str) -> dict[str, Any]:
    return report


def fetch_issue_comments(pr_number: int) -> list[dict[str, Any]]:
    return fetch_paged_json(f"https://api.github.com/repos/{OWNER}/{REPO}/issues/{pr_number}/comments?per_page=100")


def select_latest_comment_body(
    comments: list[dict[str, Any]],
    predicate: Any,
    required_user: str | None = None,
) -> str:
    matching_comments = []
    for comment in comments:
        body = html.unescape(str(comment.get("body", "")))
        if required_user is not None and comment.get("user", {}).get("login") != required_user:
            continue
        if predicate(body):
            comment_copy = dict(comment)
            comment_copy["body"] = body
            matching_comments.append(comment_copy)
    if not matching_comments:
        return ""
    latest_comment = max(matching_comments, key=lambda item: (item.get("updated_at", ""), item.get("created_at", "")))
    return str(latest_comment.get("body", "")).strip()


def select_comment_bodies(
    comments: list[dict[str, Any]],
    predicate: Any,
    required_user: str | None = None,
) -> list[str]:
    matching_comments = []
    for comment in comments:
        body = html.unescape(str(comment.get("body", "")))
        if required_user is not None and comment.get("user", {}).get("login") != required_user:
            continue
        if predicate(body):
            comment_copy = dict(comment)
            comment_copy["body"] = body
            matching_comments.append(comment_copy)
    matching_comments.sort(key=lambda item: (item.get("created_at", ""), item.get("updated_at", "")))
    return [str(comment.get("body", "")).strip() for comment in matching_comments]
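The "latest matching comment wins" rule used by both selectors relies on ISO-8601 UTC timestamps comparing correctly as plain strings; a minimal check with sample data:

```python
# Minimal check of the "latest matching comment wins" rule above: ISO-8601 UTC
# timestamps sort lexicographically, so max() over the (updated_at, created_at)
# tuple picks the newest comment. The comment bodies here are sample data only.
comments = [
    {"body": "old summary", "updated_at": "2026-04-18T10:00:00Z", "created_at": "2026-04-18T10:00:00Z"},
    {"body": "new summary", "updated_at": "2026-04-19T09:00:00Z", "created_at": "2026-04-18T12:00:00Z"},
]
latest = max(comments, key=lambda item: (item.get("updated_at", ""), item.get("created_at", "")))
print(latest["body"])
```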
def summarize_review_comment(comment: dict[str, Any]) -> dict[str, Any]:
    return {
        "id": comment.get("id"),
        "path": comment.get("path") or "",
        "line": comment.get("line"),
        "side": comment.get("side") or "",
        "created_at": comment.get("created_at") or "",
        "updated_at": comment.get("updated_at") or "",
        "user": comment.get("user", {}).get("login") or "",
        "commit_id": comment.get("commit_id") or "",
        "in_reply_to_id": comment.get("in_reply_to_id"),
        "body": comment.get("body") or "",
    }
def classify_review_thread_status(latest_comment: dict[str, Any]) -> str:
    body = latest_comment.get("body") or ""
    author = latest_comment.get("user") or ""
    if author == CODERABBIT_LOGIN and REVIEW_COMMENT_ADDRESSED_MARKER in body:
        return "addressed"
    return "open"


def build_latest_commit_review_threads(comments: list[dict[str, Any]]) -> list[dict[str, Any]]:
    comment_threads: dict[int, dict[str, Any]] = {}
    # GitHub review replies point to the root comment id. Grouping them first lets
    # the skill surface the latest thread state instead of every historical reply.
    for comment in sorted(comments, key=lambda item: (item.get("created_at") or "", item.get("id") or 0)):
        comment_id = comment.get("id")
        if comment_id is None:
            continue
        summary = summarize_review_comment(comment)
        root_id = summary["in_reply_to_id"] or comment_id
        thread = comment_threads.setdefault(
            root_id,
            {
                "thread_id": root_id,
                "path": summary["path"],
                "line": summary["line"],
                "root_comment": None,
                "replies": [],
            },
        )
        if summary["in_reply_to_id"] is None:
            thread["root_comment"] = summary
            thread["path"] = summary["path"]
            thread["line"] = summary["line"]
        else:
            thread["replies"].append(summary)

    threads: list[dict[str, Any]] = []
    for thread in comment_threads.values():
        root_comment = thread.get("root_comment")
        if root_comment is None:
            continue
        ordered_comments = [root_comment, *thread["replies"]]
        latest_comment = max(ordered_comments, key=lambda item: (item.get("updated_at") or "", item.get("created_at") or ""))
        thread["latest_comment"] = latest_comment
        thread["status"] = classify_review_thread_status(latest_comment)
        threads.append(thread)
    return sorted(threads, key=lambda item: (item["path"], item["line"] or 0, item["thread_id"]))
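The reply-folding idea can be sketched with synthetic data (ids, users, and timestamps below are invented for illustration):

```python
# Self-contained sketch of the reply-folding idea above: a reply carries its
# root comment's id in in_reply_to_id, so grouping by that id collapses each
# thread to its root plus replies. All data here is invented for illustration.
comments = [
    {"id": 1, "in_reply_to_id": None, "user": "coderabbitai[bot]", "created_at": "2026-04-19T08:00:00Z"},
    {"id": 2, "in_reply_to_id": 1, "user": "GeWuYou", "created_at": "2026-04-19T09:00:00Z"},
    {"id": 3, "in_reply_to_id": None, "user": "coderabbitai[bot]", "created_at": "2026-04-19T08:05:00Z"},
]

threads: dict[int, list[dict]] = {}
for comment in comments:
    root_id = comment["in_reply_to_id"] or comment["id"]
    threads.setdefault(root_id, []).append(comment)

for root_id, grouped in threads.items():
    # The newest comment in each folded thread decides the thread's status.
    latest = max(grouped, key=lambda item: item["created_at"])
    print(root_id, len(grouped), latest["user"])
```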
def fetch_latest_commit_review(pr_number: int) -> dict[str, Any]:
    api_base = f"https://api.github.com/repos/{OWNER}/{REPO}/pulls/{pr_number}"
    commits = fetch_paged_json(f"{api_base}/commits?per_page=100")
    reviews = fetch_paged_json(f"{api_base}/reviews?per_page=100")
    comments = fetch_paged_json(f"{api_base}/comments?per_page=100")
    if not commits:
        return {
            "latest_commit": {},
            "latest_review": {},
            "threads": [],
            "open_threads": [],
        }
    latest_commit = commits[-1]
    latest_commit_sha = latest_commit.get("sha", "")
    latest_commit_reviews = [
        review for review in reviews if review.get("commit_id") == latest_commit_sha and review.get("submitted_at")
    ]
    candidate_reviews = latest_commit_reviews or [review for review in reviews if review.get("submitted_at")]
    latest_review = (
        max(candidate_reviews, key=lambda review: review.get("submitted_at", ""))
        if candidate_reviews
        else None
    )
    latest_commit_comments = [comment for comment in comments if comment.get("commit_id") == latest_commit_sha]
    threads = build_latest_commit_review_threads(latest_commit_comments)
    open_threads = [thread for thread in threads if thread["status"] == "open"]
    return {
        "latest_commit": {
            "sha": latest_commit_sha,
            "message": latest_commit.get("commit", {}).get("message", ""),
        },
        "latest_review": {
            "id": latest_review.get("id") if latest_review else None,
            "state": latest_review.get("state") if latest_review else "",
            "submitted_at": latest_review.get("submitted_at") if latest_review else "",
            "commit_id": latest_review.get("commit_id") if latest_review else "",
            "user": latest_review.get("user", {}).get("login") if latest_review else "",
            "body": latest_review.get("body") if latest_review else "",
        },
        "threads": threads,
        "open_threads": open_threads,
    }
def build_result(pr_number: int, branch: str) -> dict[str, Any]:
    warnings: list[str] = []
    pull_request_metadata = fetch_pull_request_metadata(pr_number)
    issue_comments = fetch_issue_comments(pr_number)
    summary_block = select_latest_comment_body(
        issue_comments,
        lambda body: "auto-generated comment: summarize by coderabbit.ai" in body,
        required_user=CODERABBIT_LOGIN,
    )
    actionable_block = select_latest_comment_body(
        issue_comments,
        lambda body: "Actionable comments posted:" in body and "Prompt for all review comments with AI agents" in body,
        required_user=CODERABBIT_LOGIN,
    )
    test_blocks = select_comment_bodies(
        issue_comments,
        lambda body: "CTRF PR COMMENT TAG:" in body or "### Test Results" in body,
    )
    if not summary_block:
        warnings.append("CodeRabbit summary block was not found in issue comments.")
    if not test_blocks:
        warnings.append("PR test-report block was not found in issue comments.")
    latest_commit_review: dict[str, Any] = {}
    try:
        latest_commit_review = fetch_latest_commit_review(pr_number)
    except Exception as error:  # noqa: BLE001
        warnings.append(f"Latest commit review comments could not be fetched: {error}")
    if not actionable_block and not latest_commit_review.get("threads"):
        warnings.append("CodeRabbit actionable comments block was not found in issue comments.")
    return {
        "pull_request": {
            "number": pull_request_metadata["number"],
            "title": pull_request_metadata["title"],
            "state": pull_request_metadata["state"],
            "head_branch": pull_request_metadata["head_branch"],
            "base_branch": pull_request_metadata["base_branch"],
            "url": pull_request_metadata["url"],
            "resolved_from_branch": branch,
        },
        "coderabbit_summary": {

@@ -258,6 +500,7 @@ def build_result(pr_number: int, branch: str, html_text: str) -> dict[str, Any]:
            "raw": summary_block,
        },
        "coderabbit_comments": parse_actionable_comments(actionable_block) if actionable_block else {},
        "latest_commit_review": latest_commit_review,
        "test_reports": [parse_test_report(block) for block in test_blocks],
        "parse_warnings": warnings,
    }
@@ -289,6 +532,32 @@ def format_text(result: dict[str, Any]) -> str:
            if comment["description"]:
                lines.append(f"  Description: {comment['description']}")

    latest_commit_review = result.get("latest_commit_review", {})
    latest_commit = latest_commit_review.get("latest_commit", {})
    latest_review = latest_commit_review.get("latest_review", {})
    open_threads = latest_commit_review.get("open_threads", [])
    if latest_commit:
        lines.append("")
        lines.append(f"Latest reviewed commit: {latest_commit.get('sha', '')}")
    if latest_review:
        lines.append(
            "Latest review: "
            f"{latest_review.get('state', '')} by {latest_review.get('user', '')} "
            f"at {latest_review.get('submitted_at', '')}"
        )
    lines.append(
        "Latest commit review threads: "
        f"{len(latest_commit_review.get('threads', []))} total, {len(open_threads)} open"
    )
    for thread in open_threads:
        root_comment = thread["root_comment"]
        latest_comment = thread["latest_comment"]
        lines.append(f"- {thread['path']}:{thread['line']}")
        lines.append(f"  Root by {root_comment['user']}: {collapse_whitespace(root_comment['body'])}")
        if latest_comment["id"] != root_comment["id"]:
            lines.append(f"  Latest by {latest_comment['user']}: {collapse_whitespace(latest_comment['body'])}")

    lines.append("")
    lines.append(f"Test reports: {len(result['test_reports'])}")
    for index, report in enumerate(result["test_reports"], start=1):
@@ -327,11 +596,14 @@ def parse_args() -> argparse.Namespace:


def main() -> None:
    args = parse_args()
    if args.pr is not None:
        pr_number = args.pr
        branch = args.branch or ""
    else:
        branch = args.branch or get_current_branch()
        pr_number = resolve_pr_number(branch)
    result = build_result(pr_number, branch)
    if args.format == "json":
        print(json.dumps(result, ensure_ascii=False, indent=2))
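The reworked `--pr` control flow can be sketched as a pure function (`resolve_branch` / `resolve_pr` are hypothetical stand-ins for the real helpers):

```python
# Sketch of the reworked --pr control flow: an explicit PR number now skips
# branch resolution entirely, so Windows Git is not needed in that mode.
# resolve_branch and resolve_pr are hypothetical stand-ins for the real helpers.
def choose_target(pr_arg, branch_arg, resolve_branch, resolve_pr):
    if pr_arg is not None:
        return pr_arg, branch_arg or ""
    branch = branch_arg or resolve_branch()
    return resolve_pr(branch), branch

print(choose_target(253, None, lambda: "feat/x", lambda b: 99))
print(choose_target(None, None, lambda: "feat/x", lambda b: 99))
```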


@@ -399,10 +399,10 @@
- `dotnet build GFramework.Core.Tests/GFramework.Core.Tests.csproj -c Release` passed after the CQRS generator MVP
- `dotnet test GFramework.SourceGenerators.Tests/GFramework.SourceGenerators.Tests.csproj -c Release --no-build --filter "FullyQualifiedName~GFramework.SourceGenerators.Tests.Cqrs.CqrsHandlerRegistryGeneratorTests"` passed, all `2` tests passing
- `dotnet test GFramework.Core.Tests/GFramework.Core.Tests.csproj -c Release --no-build --filter "FullyQualifiedName~GFramework.Core.Tests.Cqrs.CqrsHandlerRegistrarTests|FullyQualifiedName~GFramework.Core.Tests.Architectures.ArchitectureModulesBehaviorTests|FullyQualifiedName~GFramework.Core.Tests.Mediator.MediatorArchitectureIntegrationTests|FullyQualifiedName~GFramework.Core.Tests.Mediator.MediatorComprehensiveTests"` passed, all `41` tests passing
- `dotnet build GFramework/GFramework.sln -c Release`
  hits a pre-existing `GFramework.csproj` NuGet fallback package folder configuration issue in the current WSL environment
  (machine-local path omitted);
  unrelated to this round's CQRS changes; builds and regressions for the `GFramework.Core.Tests`-related projects have passed
- `dotnet build GFramework/GFramework.Core.Tests/GFramework.Core.Tests.csproj -c Release`
  passed after the explicit additional-assembly CQRS registration entry point landed, with only pre-existing `MA0048` warnings
- `dotnet test GFramework/GFramework.Core.Tests/GFramework.Core.Tests.csproj -c Release --no-build --filter "FullyQualifiedName~GFramework.Core.Tests.Architectures.ArchitectureAdditionalCqrsHandlersTests|FullyQualifiedName~GFramework.Core.Tests.Architectures.ArchitectureModulesBehaviorTests|FullyQualifiedName~GFramework.Core.Tests.Cqrs.CqrsHandlerRegistrarTests|FullyQualifiedName~GFramework.Core.Tests.Architectures.RegistryInitializationHookBaseTests"`
@@ -469,7 +469,7 @@
If this round is interrupted, recover in the following order:
1. Review `ai-plan/public/traces/cqrs-rewrite-migration-trace.md`
2. Confirm that the current recovery point `CQRS-REWRITE-RP-042` corresponds to the latest commit
3. Continue first with Phase 7 of `ai-plan/migration/CQRS_MODULE_SPLIT_PLAN.md`
   - First decide whether to formally support compatibility for the legacy `GFramework.Core.Abstractions.Cqrs*` / `GFramework.Core.Cqrs.Extensions` public namespaces, or explicitly require consumers to migrate to the current `GFramework.Cqrs*` paths
   - Then evaluate whether `CqrsCoroutineExtensions` stays in `GFramework.Core`, or forms a smaller migratable boundary together with the coroutine helpers it needs
@@ -701,9 +701,9 @@
- Added the project-level `$gframework-pr-review` skill
  - Directory: `.codex/skills/gframework-pr-review/`
  - Purpose: locate the GitHub PR for the current branch and extract the CodeRabbit summary, latest head commit review threads, `Failed checks`, and CTRF test results, preferring the GitHub PR / issue comments / review comments APIs
  - Constraint: no dependency on the `gh` CLI; the heavyweight PR HTML page is no longer treated as the primary data source
- Completed local fixes based on the public review content of PR `#253`:
  - The recovery heuristics in `.codex/skills/gframework-boot/SKILL.md` no longer map `next step/continue/继续` directly to `recovery`
@@ -711,6 +711,14 @@
    tracking document”
  - Legacy `Rule` namespace remnants in `Godot/script_templates/Node/*.cs` and `GFramework.Core.Abstractions/Controller/IController.cs` have been fixed in sync
- `fetch_current_pr_review.py` has been changed so that:
  - the Git path supports an environment-variable override and falls back to `git.exe` / `git`
  - `--pr` mode no longer forces reading the current branch
  - `--branch`-to-PR-number resolution now goes through the GitHub PR API
  - the CodeRabbit summary / CTRF test reports now come from the issue comments API
  - the latest-review basis is now the latest head commit review threads, not just the summary block
- `ai-plan/public/todos/cqrs-rewrite-migration-tracking.md` no longer contains machine-local absolute paths in public docs, and the recovery-point number in the next-recovery suggestions has been unified

### Phase: RP-042 verification
@@ -719,13 +727,16 @@
  - Note: `fetch_current_pr_review.py` is syntactically valid
- `python3 .codex/skills/gframework-pr-review/scripts/fetch_current_pr_review.py --pr 253`
  - Result: passed
  - Note: PR metadata, latest head commit review threads, the CodeRabbit summary, and CTRF test results are now parsed through the API-first path, with no dependency on PR HTML
- `python3 .codex/skills/gframework-pr-review/scripts/fetch_current_pr_review.py --branch feat/cqrs-optimization`
  - Result: passed
  - Note: verified that branch -> PR resolution also works through the GitHub API
- `dotnet build GFramework.Core.Abstractions/GFramework.Core.Abstractions.csproj -c Release`
  - Result: passed
  - Note: the related C# changes have passed build verification

### Next steps
1. To make the latest head review threads on PR `#253` reflect this round's local fixes, commit and push the current branch first, then rerun `$gframework-pr-review`
2. The PR's current public warnings still include `Docstring Coverage`; eliminating it will require separately planning and committing docstring-coverage improvements


@@ -1371,14 +1371,17 @@
- Established the `CQRS-REWRITE-RP-042` recovery point
- Added the project-level skill `.codex/skills/gframework-pr-review/`
  - Shortcut: `$gframework-pr-review`
  - Resolves the current branch with Windows Git and locates the branch's PR through the GitHub PR API
  - Extracts `Summary by CodeRabbit`, the latest head commit review threads, `Failed checks`, and CTRF test results through the GitHub issue comments / reviews / review comments APIs
  - No longer treats the heavyweight PR HTML page as the primary data source; it remains only a fallback idea for debugging or compatibility scenarios
  - Depends on neither the `gh` CLI nor a logged-in session; the script explicitly bypasses the broken proxy variables in the current shell
- Verified the current state of PR `#253` with the new script:
  - The latest head commit review threads can now be extracted directly from the API; until the remote's latest commit is updated, 4 open threads still show, 2 in `fetch_current_pr_review.py` and 2 in `ai-plan/public/todos/cqrs-rewrite-migration-tracking.md`
  - The PR currently has no `Failed Tests`; the CTRF test report shows `2103 passed / 0 failed`
  - `Failed checks` now reliably surfaces the `Docstring Coverage` warning; that item is a PR-level docstring-coverage issue, not a fault in the FPR skill's parsing chain
- Completed local fixes per the public suggestions on PR `#253`:
  - The recovery heuristics in `gframework-boot` now "search `ai-plan/` first, then decide between `resume` and `recovery`"
  - `AGENTS.md` narrows the requirement to write `ai-libs/**` observations into the active plan/trace to "multi-step/complex tasks or an existing active tracking document"
@@ -1393,7 +1396,11 @@
  - Note: `fetch_current_pr_review.py` is syntactically valid, and the run avoided writing `__pycache__` on a read-only filesystem
- `python3 .codex/skills/gframework-pr-review/scripts/fetch_current_pr_review.py --pr 253`
  - Result: passed
  - Note: successfully parsed the current PR metadata, latest head commit review threads, the `Docstring Coverage` warning, and the CTRF test report through the API-first path
- `python3 .codex/skills/gframework-pr-review/scripts/fetch_current_pr_review.py --branch feat/cqrs-optimization`
  - Result: passed
  - Note: verified that branch -> PR resolution no longer relies on HTML search
- `dotnet build GFramework.Core.Abstractions/GFramework.Core.Abstractions.csproj -c Release`
  - Result: passed
  - Note: both `GFramework.Cqrs.Abstractions` and `GFramework.Core.Abstractions` built successfully, 0 warnings / 0 errors
@@ -1404,4 +1411,4 @@

### Next steps
1. If the PR-driven fix workflow continues, use `$gframework-pr-review` directly to re-check CodeRabbit comments and test status on subsequent PRs
2. To verify that this round's local fixes have cleared the remote latest head review threads, rerun `$gframework-pr-review` after committing and pushing the current branch