fix(pr-review): complete multi-AI reviewer coverage and consolidate doc heading levels

- Add greptile-apps reviewer summaries, thread counts, and skill-description coverage to gframework-pr-review
- Fix the reviewer aggregation output and function docstring coverage in fetch_current_pr_review.py
- Fix heading-level issues in the minimal integration path sections of the Scene/UI docs
- Fix the code-fence detection in validate-code-blocks.sh and the README mapping in module-config
GeWuYou 2026-04-22 11:05:27 +08:00
parent ce6bab2301
commit fe4fd1aa5e
9 changed files with 228 additions and 55 deletions

View File

@@ -185,6 +185,9 @@ get_readme_paths() {
Core.SourceGenerators)
echo "GFramework.Core.SourceGenerators/README.md"
;;
Core.SourceGenerators.Abstractions)
echo "GFramework.Core.SourceGenerators.Abstractions/README.md"
;;
Game)
echo "GFramework.Game/README.md"
;;
@@ -200,6 +203,9 @@ get_readme_paths() {
Godot.SourceGenerators)
echo "GFramework.Godot.SourceGenerators/README.md"
;;
Godot.SourceGenerators.Abstractions)
echo "GFramework.Godot.SourceGenerators.Abstractions/README.md"
;;
Cqrs)
echo "GFramework.Cqrs/README.md"
;;
@@ -212,6 +218,15 @@ get_readme_paths() {
Ecs.Arch)
echo "GFramework.Ecs.Arch/README.md"
;;
Ecs.Arch.Abstractions)
echo "GFramework.Ecs.Arch.Abstractions/README.md"
;;
SourceGenerators.Common)
echo "GFramework.SourceGenerators.Common/README.md"
;;
*)
return 1
;;
esac
}
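The `case` mapping above is effectively a dictionary lookup with an explicit failure signal for unmapped modules. A minimal Python sketch of the same contract, using a subset of the mappings (the shell version signals failure with a non-zero exit code; the sketch uses `KeyError`, and the function name is illustrative):

```python
# A subset of the module -> README mapping from module-config.sh.
README_PATHS = {
    "Core.SourceGenerators": "GFramework.Core.SourceGenerators/README.md",
    "Core.SourceGenerators.Abstractions": "GFramework.Core.SourceGenerators.Abstractions/README.md",
    "Godot.SourceGenerators.Abstractions": "GFramework.Godot.SourceGenerators.Abstractions/README.md",
    "Ecs.Arch.Abstractions": "GFramework.Ecs.Arch.Abstractions/README.md",
    "SourceGenerators.Common": "GFramework.SourceGenerators.Common/README.md",
}


def get_readme_path(module: str) -> str:
    """Map a module name to its README path; fail loudly for unmapped modules."""
    try:
        return README_PATHS[module]
    except KeyError:
        # Mirrors the shell function's `return 1` for the `*)` branch.
        raise KeyError(f"unmapped module: {module}") from None
```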

View File

@@ -25,19 +25,25 @@ if [ $((CODE_FENCE_COUNT % 2)) -ne 0 ]; then
fi
LINE_NUMBER=0
while IFS= read -r LINE; do
IN_CODE_BLOCK=0
while IFS= read -r LINE || [ -n "$LINE" ]; do
LINE_NUMBER=$((LINE_NUMBER + 1))
if echo "$LINE" | grep -qE '^```(cs|c#|C#)$'; then
if [[ "$LINE" =~ ^\`\`\`(cs|c#|C#)$ ]]; then
echo "⚠ 警告: 第 $LINE_NUMBER 行使用了非标准 C# 标记,建议改为 csharp"
WARNING_COUNT=$((WARNING_COUNT + 1))
fi
if echo "$LINE" | grep -qE '^```$'; then
NEXT_LINE=$(sed -n "$((LINE_NUMBER + 1))p" "$FILE")
if [ -n "$NEXT_LINE" ] && ! echo "$NEXT_LINE" | grep -qE '^```'; then
echo "⚠ 警告: 第 $LINE_NUMBER 行的代码块缺少语言标记"
WARNING_COUNT=$((WARNING_COUNT + 1))
if [[ "$LINE" =~ ^\`\`\` ]]; then
if [ "$IN_CODE_BLOCK" -eq 0 ]; then
if [[ "$LINE" == '```' ]]; then
echo "⚠ 警告: 第 $LINE_NUMBER 行的代码块缺少语言标记"
WARNING_COUNT=$((WARNING_COUNT + 1))
fi
IN_CODE_BLOCK=1
else
IN_CODE_BLOCK=0
fi
fi
done < "$FILE"
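The rewritten loop is a two-state fence tracker: a fence line toggles in or out of a code block, and only an opening fence without a language tag warns. The same logic can be sketched in Python for illustration (not part of the repository):

```python
def find_unlabeled_fences(lines: list[str]) -> list[int]:
    """Return 1-based line numbers of opening ``` fences with no language tag."""
    warnings: list[int] = []
    in_code_block = False
    for number, line in enumerate(lines, start=1):
        if line.startswith("```"):
            if not in_code_block:
                # Opening fence: warn only when it is exactly ``` (no language tag).
                if line.strip() == "```":
                    warnings.append(number)
                in_code_block = True
            else:
                # Closing fence: always bare ```, never a warning.
                in_code_block = False
    return warnings
```

Tracking the open/closed state is what stops bare closing fences from being misreported, which was the bug in the `grep`-per-line version.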

View File

@@ -1,6 +1,6 @@
---
name: gframework-pr-review
description: Repository-specific GitHub PR review workflow for the GFramework repo. Use when Codex needs to inspect the GitHub pull request for the current branch, extract CodeRabbit summary/comments, read failed checks, MegaLinter warnings, or failed test signals from the PR page, and then verify which findings should be fixed in the local codebase. Trigger explicitly with $gframework-pr-review or with prompts such as "look at the current PR", "extract CodeRabbit comments", or "check Failed Tests on the PR".
description: Repository-specific GitHub PR review workflow for the GFramework repo. Use when Codex needs to inspect the GitHub pull request for the current branch, extract AI review findings from CodeRabbit or greptile-apps, read failed checks, MegaLinter warnings, or failed test signals from the PR page, and then verify which findings should be fixed in the local codebase. Trigger explicitly with $gframework-pr-review or with prompts such as "look at the current PR", "extract CodeRabbit comments", "extract Greptile comments", or "check Failed Tests on the PR".
---
# GFramework PR Review
@@ -16,8 +16,10 @@ Shortcut: `$gframework-pr-review`
3. Run `scripts/fetch_current_pr_review.py` to:
- locate the PR for the current branch through the GitHub PR API
- fetch PR metadata, issue comments, reviews, and review comments through the GitHub API
- extract `Summary by CodeRabbit`、GitHub Actions bot comments such as `MegaLinter analysis: Success with warnings`、and CTRF test reports from issue comments
- extract CodeRabbit-specific summary blocks such as `Summary by CodeRabbit` and actionable-comment rollups when present
- parse the latest CodeRabbit review body itself, including folded sections such as `🧹 Nitpick comments (N)` and the overall AI-agent prompt
- capture unresolved latest-head review threads for supported AI reviewers, including both `coderabbitai[bot]` and `greptile-apps[bot]`
- surface which supported AI reviewers currently have open latest-commit review threads, even when they do not use CodeRabbit-style issue comments
- fetch the latest head commit review threads from the GitHub PR API
- prefer unresolved review threads on the latest head commit over older summary-only signals
- extract failed checks, MegaLinter detailed issues, and test-report signals such as `Failed Tests` or `No failed tests in this run`
@@ -49,6 +51,7 @@ Shortcut: `$gframework-pr-review`
The script should produce:
- PR metadata: number, title, state, branch, URL
- Supported AI reviewer summary, including latest reviews and open-thread counts for `coderabbitai[bot]` and `greptile-apps[bot]`
- CodeRabbit summary block from issue comments when available
- Folded latest-review sections such as `Nitpick comments (N)` when CodeRabbit puts them in the review body instead of issue comments
- Parsed latest head-review threads, with unresolved threads clearly separated
@@ -66,6 +69,7 @@ The script should produce:
- If GitHub access fails because of proxy configuration, rerun the fetch with proxy variables removed.
- Prefer GitHub API results over PR HTML. The PR HTML page is now a fallback/debugging source, not the primary source of truth.
- If the summary block and the latest head review threads disagree, trust the latest unresolved head-review threads and treat older summary findings as stale until re-verified locally.
- Do not assume every AI reviewer behaves like CodeRabbit. `greptile-apps[bot]` findings may exist only as latest-head review threads, without CodeRabbit-style issue comments or folded review-body sections.
- Treat GitHub Actions comments with `Success with warnings` as actionable review input when they include concrete linter diagnostics such as `MegaLinter` detailed issues; do not skip them just because the parent check is green.
- Do not assume all CodeRabbit findings live in issue comments. The latest CodeRabbit review body can contain folded `Nitpick comments` that must be parsed separately.
- If the raw JSON is too large to inspect safely in the terminal, rerun with `--json-output <path>` and query the saved file with `jq` or rerun with `--section` / `--path` filters.
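The `--json-output` payload can also be queried without `jq`. A minimal Python sketch, assuming the output shape documented in this skill (`review_agents` entries carrying `login`, `detected`, and `open_thread_count`; the helper name is illustrative):

```python
import json
from pathlib import Path


def summarize_saved_review(path: str) -> dict[str, int]:
    """Count open latest-head threads per detected AI reviewer in a saved payload."""
    result = json.loads(Path(path).read_text(encoding="utf-8"))
    return {
        agent["login"]: int(agent.get("open_thread_count", 0))
        for agent in result.get("review_agents", [])
        if agent.get("detected")
    }
```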
@@ -76,5 +80,6 @@ The script should produce:
- 'Use FPR'
- `Use $gframework-pr-review on the current branch`
- `Check the current PR and extract CodeRabbit suggestions`
- `Check the current PR and extract Greptile suggestions`
- `Look for Failed Tests on the PR page`
- `先用 $gframework-pr-review 看当前分支 PR`

View File

@@ -1,4 +1,4 @@
interface:
display_name: "GFramework PR Review"
short_description: "Inspect the current PR and CodeRabbit findings"
default_prompt: "Use $gframework-pr-review to inspect the current branch PR through the GitHub API, prioritize unresolved review threads on the latest head commit, and summarize failed checks or failed tests."
short_description: "Inspect the current PR and AI review findings"
default_prompt: "Use $gframework-pr-review to inspect the current branch PR through the GitHub API, prioritize unresolved review threads on the latest head commit from supported AI reviewers such as CodeRabbit and greptile-apps, and summarize failed checks or failed tests."

View File

@@ -25,11 +25,26 @@ DEFAULT_WINDOWS_GIT = "/mnt/d/Tool/Development Tools/Git/cmd/git.exe"
GIT_ENVIRONMENT_KEY = "GFRAMEWORK_WINDOWS_GIT"
USER_AGENT = "codex-gframework-pr-review"
CODERABBIT_LOGIN = "coderabbitai[bot]"
GREPTILE_LOGIN = "greptile-apps[bot]"
GITHUB_ACTIONS_LOGIN = "github-actions[bot]"
REVIEW_COMMENT_ADDRESSED_MARKER = "<!-- <review_comment_addressed> -->"
VISIBLE_ADDRESSED_IN_COMMIT_PATTERN = re.compile(r"\s*Addressed in commit\s+[0-9a-f]{7,40}", re.I)
DEFAULT_REQUEST_TIMEOUT_SECONDS = 60
REQUEST_TIMEOUT_ENVIRONMENT_KEY = "GFRAMEWORK_PR_REVIEW_TIMEOUT_SECONDS"
SUPPORTED_AI_REVIEWERS = (
{
"slug": "coderabbit",
"login": CODERABBIT_LOGIN,
"display_name": "CodeRabbit",
"supports_review_body_parsing": True,
},
{
"slug": "greptile",
"login": GREPTILE_LOGIN,
"display_name": "Greptile",
"supports_review_body_parsing": False,
},
)
DISPLAY_SECTION_CHOICES = (
"pr",
"failed-checks",
@@ -44,6 +59,7 @@ DISPLAY_SECTION_CHOICES = (
def resolve_git_command() -> str:
"""Resolve the git executable to use for this repository."""
candidates = [
os.environ.get(GIT_ENVIRONMENT_KEY),
DEFAULT_WINDOWS_GIT,
@@ -68,6 +84,7 @@ def resolve_git_command() -> str:
def resolve_request_timeout_seconds() -> int:
"""Return the GitHub request timeout in seconds."""
configured_timeout = os.environ.get(REQUEST_TIMEOUT_ENVIRONMENT_KEY)
if not configured_timeout:
return DEFAULT_REQUEST_TIMEOUT_SECONDS
@@ -86,6 +103,7 @@ def resolve_request_timeout_seconds() -> int:
def run_command(args: list[str]) -> str:
"""Run a command and return stdout, raising on failure."""
process = subprocess.run(args, capture_output=True, text=True, check=False)
if process.returncode != 0:
stderr = process.stderr.strip()
@@ -94,10 +112,12 @@ def run_command(args: list[str]) -> str:
def get_current_branch() -> str:
"""Return the current git branch name."""
return run_command([resolve_git_command(), "rev-parse", "--abbrev-ref", "HEAD"])
def open_url(url: str, accept: str) -> tuple[str, Any]:
"""Open a URL with proxy variables disabled and return decoded text plus headers."""
opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))
request = urllib.request.Request(url, headers={"Accept": accept, "User-Agent": USER_AGENT})
with opener.open(request, timeout=resolve_request_timeout_seconds()) as response:
@@ -105,11 +125,13 @@ def open_url(url: str, accept: str) -> tuple[str, Any]:
def fetch_json(url: str) -> tuple[Any, Any]:
"""Fetch a JSON payload and its response headers from GitHub."""
text, headers = open_url(url, accept="application/vnd.github+json")
return json.loads(text), headers
def extract_next_link(headers: Any) -> str | None:
"""Extract the next-page link from GitHub pagination headers."""
link_header = headers.get("Link")
if not link_header:
return None
@@ -119,6 +141,7 @@ def extract_next_link(headers: Any) -> str | None:
def fetch_paged_json(url: str) -> list[dict[str, Any]]:
"""Fetch every page from a paginated GitHub API endpoint."""
items: list[dict[str, Any]] = []
next_url: str | None = url
while next_url:
@@ -133,6 +156,7 @@ def fetch_paged_json(url: str) -> list[dict[str, Any]]:
def fetch_pull_request_metadata(pr_number: int) -> dict[str, Any]:
"""Fetch normalized metadata for a pull request."""
payload, _ = fetch_json(f"https://api.github.com/repos/{OWNER}/{REPO}/pulls/{pr_number}")
if not isinstance(payload, dict):
raise RuntimeError("Failed to fetch GitHub PR metadata.")
@@ -148,6 +172,7 @@ def fetch_pull_request_metadata(pr_number: int) -> dict[str, Any]:
def resolve_pr_number(branch: str) -> int:
"""Resolve the most recently updated PR number for a branch."""
head_query = urllib.parse.quote(f"{OWNER}:{branch}")
payload, _ = fetch_json(f"https://api.github.com/repos/{OWNER}/{REPO}/pulls?state=all&head={head_query}")
if not isinstance(payload, list):
@@ -162,10 +187,12 @@ def resolve_pr_number(branch: str) -> int:
def collapse_whitespace(text: str) -> str:
"""Collapse repeated whitespace into single spaces."""
return re.sub(r"\s+", " ", text).strip()
def truncate_text(text: str, max_length: int) -> str:
"""Collapse whitespace and truncate long text for CLI display."""
collapsed = collapse_whitespace(text)
if max_length <= 0 or len(collapsed) <= max_length:
return collapsed
@@ -174,14 +201,17 @@ def truncate_text(text: str, max_length: int) -> str:
def strip_tags(text: str) -> str:
"""Remove HTML tags and normalize whitespace."""
return collapse_whitespace(re.sub(r"<[^>]+>", " ", text))
def strip_markdown_links(text: str) -> str:
"""Drop Markdown link targets while keeping visible link text."""
return re.sub(r"\[([^\]]+)\]\([^)]+\)", r"\1", text)
def extract_section(text: str, start_marker: str, end_markers: list[str]) -> str | None:
"""Extract text between a start marker and the earliest matching end marker."""
start = text.find(start_marker)
if start < 0:
return None
@@ -196,6 +226,7 @@ def extract_section(text: str, start_marker: str, end_markers: list[str]) -> str
def parse_failed_checks(summary_block: str) -> list[dict[str, str]]:
"""Parse CodeRabbit summary rows for failed checks."""
failed_section = extract_section(
summary_block,
"### ❌ Failed checks",
@@ -227,6 +258,7 @@ def parse_failed_checks(summary_block: str) -> list[dict[str, str]]:
def parse_actionable_comments(actionable_block: str) -> dict[str, Any]:
"""Parse CodeRabbit actionable comments from its issue-comment rollup."""
comment_count_match = re.search(r"Actionable comments posted:\s*(\d+)", actionable_block)
count = int(comment_count_match.group(1)) if comment_count_match else 0
@@ -251,6 +283,7 @@ def parse_actionable_comments(actionable_block: str) -> dict[str, Any]:
def parse_comment_cards(comment_block: str) -> list[dict[str, str]]:
"""Parse CodeRabbit comment cards from a grouped Markdown block."""
comments: list[dict[str, str]] = []
pattern = re.compile(
r"<summary>"
@@ -287,6 +320,7 @@ def parse_comment_cards(comment_block: str) -> list[dict[str, str]]:
def normalize_review_body_for_parsing(review_body: str) -> str:
"""Normalize a review body before structured section parsing."""
# CodeRabbit sometimes wraps structured HTML sections in markdown blockquotes,
# such as the CAUTION block used for outside-diff comments. Remove the quote
# prefixes for parsing while leaving the original raw body unchanged for output.
@@ -294,6 +328,7 @@ def normalize_review_body_for_parsing(review_body: str) -> str:
def find_section_block_end(review_body: str, block_start: int) -> int:
"""Find the end boundary for a nested <details> section."""
depth = 1
for tag_match in re.finditer(r"<details>|</details>", review_body[block_start:]):
tag = tag_match.group(0)
@@ -308,6 +343,7 @@ def find_section_block_end(review_body: str, block_start: int) -> int:
def parse_review_comment_group(review_body: str, section_name: str) -> dict[str, Any]:
"""Parse a folded review-body section into structured comments."""
section_match = re.search(
rf"<summary>[^<]*{re.escape(section_name)} \((?P<count>\d+)\)</summary><blockquote>\s*",
review_body,
@@ -327,6 +363,7 @@ def parse_review_comment_group(review_body: str, section_name: str) -> dict[str,
def parse_latest_review_body(review_body: str) -> dict[str, Any]:
"""Parse the latest CodeRabbit review body for grouped comment sections."""
normalized_review_body = normalize_review_body_for_parsing(review_body)
actionable_count_match = re.search(r"\*\*Actionable comments posted:\s*(\d+)\*\*", normalized_review_body)
prompt_match = re.search(
@@ -348,6 +385,7 @@ def parse_latest_review_body(review_body: str) -> dict[str, Any]:
def parse_megalinter_comment(comment_body: str) -> dict[str, Any]:
"""Parse a MegaLinter issue comment into structured report fields."""
normalized_body = html.unescape(comment_body).strip()
summary_match = re.search(
r"##\s*(?P<badges>.*?)\[MegaLinter\]\([^)]+\)\s+analysis:\s+\[(?P<status>[^\]]+)\]\((?P<run_url>[^)]+)\)",
@@ -402,6 +440,7 @@ def parse_megalinter_comment(comment_body: str) -> dict[str, Any]:
def parse_test_report(block: str) -> dict[str, Any]:
"""Parse a CTRF or GitHub test-reporter comment block."""
report: dict[str, Any] = {
"raw": block.strip(),
"stats": {},
@@ -442,6 +481,7 @@ def parse_test_report(block: str) -> dict[str, Any]:
def fetch_issue_comments(pr_number: int) -> list[dict[str, Any]]:
"""Fetch issue comments for a pull request."""
return fetch_paged_json(f"https://api.github.com/repos/{OWNER}/{REPO}/issues/{pr_number}/comments?per_page=100")
@@ -450,6 +490,7 @@ def select_latest_comment_body(
predicate: Any,
required_user: str | None = None,
) -> str:
"""Return the latest matching issue-comment body."""
matching_comments = []
for comment in comments:
body = html.unescape(str(comment.get("body", "")))
@@ -472,6 +513,7 @@ def select_comment_bodies(
predicate: Any,
required_user: str | None = None,
) -> list[str]:
"""Return all matching issue-comment bodies in chronological order."""
matching_comments = []
for comment in comments:
body = html.unescape(str(comment.get("body", "")))
@@ -487,6 +529,7 @@ def select_comment_bodies(
def summarize_review_comment(comment: dict[str, Any]) -> dict[str, Any]:
"""Normalize a GitHub review comment into the output shape used by the skill."""
return {
"id": comment.get("id"),
"path": comment.get("path") or "",
@@ -502,6 +545,7 @@ def summarize_review_comment(comment: dict[str, Any]) -> dict[str, Any]:
def classify_review_thread_status(latest_comment: dict[str, Any]) -> str:
"""Classify whether a review thread is still open or already addressed."""
body = latest_comment.get("body") or ""
author = latest_comment.get("user") or ""
if author == CODERABBIT_LOGIN and REVIEW_COMMENT_ADDRESSED_MARKER in body:
@@ -510,10 +554,12 @@ def classify_review_thread_status(latest_comment: dict[str, Any]) -> str:
def contains_visible_addressed_commit_text(body: str) -> bool:
"""Detect visible addressed-in-commit text that does not close the thread by itself."""
return bool(VISIBLE_ADDRESSED_IN_COMMIT_PATTERN.search(body))
def build_latest_commit_review_threads(comments: list[dict[str, Any]]) -> list[dict[str, Any]]:
"""Group review comments into normalized latest-commit review threads."""
comment_threads: dict[int, dict[str, Any]] = {}
# GitHub review replies point to the root comment id. Grouping them first lets
@@ -564,6 +610,7 @@ def select_latest_submitted_review(
required_user: str | None = None,
prefer_non_empty_body: bool = False,
) -> dict[str, Any] | None:
"""Select the newest submitted review, optionally filtered by user."""
filtered_reviews = [review for review in reviews if review.get("submitted_at")]
if required_user is not None:
filtered_reviews = [review for review in filtered_reviews if review.get("user", {}).get("login") == required_user]
@@ -579,7 +626,43 @@ def select_latest_submitted_review(
return max(filtered_reviews, key=lambda review: review.get("submitted_at", ""))
def summarize_submitted_review(review: dict[str, Any] | None) -> dict[str, Any]:
"""Normalize a submitted review into a stable JSON shape."""
if review is None:
return {
"id": None,
"state": "",
"submitted_at": "",
"commit_id": "",
"user": "",
"body": "",
}
return {
"id": review.get("id"),
"state": review.get("state") or "",
"submitted_at": review.get("submitted_at") or "",
"commit_id": review.get("commit_id") or "",
"user": review.get("user", {}).get("login") or "",
"body": review.get("body") or "",
}
def build_open_thread_counts_by_user(open_threads: list[dict[str, Any]]) -> dict[str, int]:
"""Count open latest-commit threads by their root-comment author."""
counts: dict[str, int] = {}
for thread in open_threads:
root_user = str(thread.get("root_comment", {}).get("user") or "")
if not root_user:
continue
counts[root_user] = counts.get(root_user, 0) + 1
return counts
def fetch_latest_commit_review(pr_number: int) -> dict[str, Any]:
"""Fetch the latest commit review, grouped threads, and AI-reviewer summaries."""
api_base = f"https://api.github.com/repos/{OWNER}/{REPO}/pulls/{pr_number}"
commits = fetch_paged_json(f"{api_base}/commits?per_page=100")
reviews = fetch_paged_json(f"{api_base}/reviews?per_page=100")
@@ -600,47 +683,37 @@ def fetch_latest_commit_review(pr_number: int) -> dict[str, Any]:
]
candidate_reviews = latest_commit_reviews or [review for review in reviews if review.get("submitted_at")]
latest_review = select_latest_submitted_review(candidate_reviews)
latest_coderabbit_review_with_body = select_latest_submitted_review(
candidate_reviews,
required_user=CODERABBIT_LOGIN,
prefer_non_empty_body=True,
)
latest_reviews_by_user: dict[str, dict[str, Any]] = {}
for agent in SUPPORTED_AI_REVIEWERS:
latest_reviews_by_user[agent["login"]] = summarize_submitted_review(
select_latest_submitted_review(
candidate_reviews,
required_user=agent["login"],
prefer_non_empty_body=True,
)
)
latest_commit_comments = [comment for comment in comments if comment.get("commit_id") == latest_commit_sha]
threads = build_latest_commit_review_threads(latest_commit_comments)
open_threads = [thread for thread in threads if thread["status"] == "open"]
open_thread_counts_by_user = build_open_thread_counts_by_user(open_threads)
return {
"latest_commit": {
"sha": latest_commit_sha,
"message": latest_commit.get("commit", {}).get("message", ""),
},
"latest_review": {
"id": latest_review.get("id") if latest_review else None,
"state": latest_review.get("state") if latest_review else "",
"submitted_at": latest_review.get("submitted_at") if latest_review else "",
"commit_id": latest_review.get("commit_id") if latest_review else "",
"user": latest_review.get("user", {}).get("login") if latest_review else "",
"body": latest_review.get("body") if latest_review else "",
},
"latest_coderabbit_review_with_body": {
"id": latest_coderabbit_review_with_body.get("id") if latest_coderabbit_review_with_body else None,
"state": latest_coderabbit_review_with_body.get("state") if latest_coderabbit_review_with_body else "",
"submitted_at": (
latest_coderabbit_review_with_body.get("submitted_at") if latest_coderabbit_review_with_body else ""
),
"commit_id": latest_coderabbit_review_with_body.get("commit_id") if latest_coderabbit_review_with_body else "",
"user": latest_coderabbit_review_with_body.get("user", {}).get("login")
if latest_coderabbit_review_with_body
else "",
"body": latest_coderabbit_review_with_body.get("body") if latest_coderabbit_review_with_body else "",
},
"latest_review": summarize_submitted_review(latest_review),
"latest_coderabbit_review_with_body": latest_reviews_by_user.get(CODERABBIT_LOGIN, {}),
"latest_reviews_by_user": latest_reviews_by_user,
"open_thread_counts_by_user": open_thread_counts_by_user,
"threads": threads,
"open_threads": open_threads,
}
def build_result(pr_number: int, branch: str) -> dict[str, Any]:
"""Build the full review result payload for the selected PR."""
warnings: list[str] = []
pull_request_metadata = fetch_pull_request_metadata(pr_number)
issue_comments = fetch_issue_comments(pr_number)
@@ -673,8 +746,26 @@ def build_result(pr_number: int, branch: str) -> dict[str, Any]:
latest_commit_review: dict[str, Any] = {}
coderabbit_review: dict[str, Any] = {}
review_agents: list[dict[str, Any]] = []
try:
latest_commit_review = fetch_latest_commit_review(pr_number)
latest_reviews_by_user = latest_commit_review.get("latest_reviews_by_user", {})
open_thread_counts_by_user = latest_commit_review.get("open_thread_counts_by_user", {})
review_agents = [
{
"slug": agent["slug"],
"login": agent["login"],
"display_name": agent["display_name"],
"supports_review_body_parsing": agent["supports_review_body_parsing"],
"latest_review": latest_reviews_by_user.get(agent["login"], {}),
"open_thread_count": int(open_thread_counts_by_user.get(agent["login"], 0)),
"detected": bool(
latest_reviews_by_user.get(agent["login"], {}).get("id")
or open_thread_counts_by_user.get(agent["login"], 0)
),
}
for agent in SUPPORTED_AI_REVIEWERS
]
latest_review = latest_commit_review.get("latest_coderabbit_review_with_body", {})
latest_review_body = str(latest_review.get("body") or "")
if latest_review.get("user") == CODERABBIT_LOGIN and latest_review_body:
@@ -723,6 +814,7 @@ def build_result(pr_number: int, branch: str) -> dict[str, Any]:
},
"coderabbit_comments": parse_actionable_comments(actionable_block) if actionable_block else {},
"coderabbit_review": coderabbit_review,
"review_agents": review_agents,
"latest_commit_review": latest_commit_review,
"megalinter_report": parse_megalinter_comment(megalinter_block) if megalinter_block else {},
"test_reports": [parse_test_report(block) for block in test_blocks],
@@ -731,6 +823,7 @@ def build_result(pr_number: int, branch: str) -> dict[str, Any]:
def write_json_output(result: dict[str, Any], output_path: str) -> str:
"""Write the full JSON result to disk and return the destination path."""
destination_path = Path(output_path).expanduser()
destination_path.parent.mkdir(parents=True, exist_ok=True)
destination_path.write_text(json.dumps(result, ensure_ascii=False, indent=2), encoding="utf-8")
@@ -738,10 +831,12 @@ def write_json_output(result: dict[str, Any], output_path: str) -> str:
def normalize_path_filters(path_filters: list[str] | None) -> list[str]:
"""Normalize CLI path filters to slash-separated fragments."""
return [path_filter.replace("\\", "/") for path_filter in (path_filters or []) if path_filter.strip()]
def path_matches_filters(path: str, normalized_path_filters: list[str]) -> bool:
"""Return whether a path matches any requested filter fragment."""
if not normalized_path_filters:
return True
@@ -753,6 +848,7 @@ def filter_comments_by_path(
comments: list[dict[str, Any]],
normalized_path_filters: list[str],
) -> list[dict[str, Any]]:
"""Filter parsed comments by CLI path fragment."""
return [comment for comment in comments if path_matches_filters(str(comment.get("path") or ""), normalized_path_filters)]
@@ -760,6 +856,7 @@ def filter_threads_by_path(
threads: list[dict[str, Any]],
normalized_path_filters: list[str],
) -> list[dict[str, Any]]:
"""Filter parsed review threads by CLI path fragment."""
return [thread for thread in threads if path_matches_filters(str(thread.get("path") or ""), normalized_path_filters)]
@@ -771,6 +868,7 @@ def format_text(
max_description_length: int = 400,
json_output_path: str | None = None,
) -> str:
"""Format the result payload into concise text output."""
lines: list[str] = []
selected_sections = set(sections or DISPLAY_SECTION_CHOICES)
normalized_path_filters = normalize_path_filters(path_filters)
@@ -865,6 +963,7 @@ def format_text(
latest_review = latest_commit_review.get("latest_review", {})
open_threads = latest_commit_review.get("open_threads", [])
visible_open_threads = filter_threads_by_path(open_threads, normalized_path_filters)
review_agents = [agent for agent in result.get("review_agents", []) if agent.get("detected")]
if latest_commit and "open-threads" in selected_sections:
lines.append("")
lines.append(f"Latest reviewed commit: {latest_commit.get('sha', '')}")
@@ -874,6 +973,21 @@ def format_text(
f"{latest_review.get('state', '')} by {latest_review.get('user', '')} "
f"at {latest_review.get('submitted_at', '')}"
)
if review_agents:
lines.append("Detected AI reviewers on latest commit:")
for agent in review_agents:
latest_agent_review = agent.get("latest_review", {})
lines.append(
"- "
f"{agent.get('display_name', '')} ({agent.get('login', '')}): "
f"open_threads={agent.get('open_thread_count', 0)}"
+ (
f", latest_review={latest_agent_review.get('state', '')} "
f"at {latest_agent_review.get('submitted_at', '')}"
if latest_agent_review.get("submitted_at")
else ""
)
)
lines.append(
"Latest commit review threads: "
@@ -961,6 +1075,7 @@ def format_text(
def parse_args() -> argparse.Namespace:
"""Parse CLI arguments."""
parser = argparse.ArgumentParser()
parser.add_argument("--branch", help="Override the current branch name.")
parser.add_argument("--pr", type=int, help="Fetch a specific PR number instead of resolving from branch.")
@@ -990,6 +1105,7 @@ def parse_args() -> argparse.Namespace:
def main() -> None:
"""Run the CLI entry point."""
args = parse_args()
if args.pr is not None:
pr_number = args.pr
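The per-reviewer thread counting added in this diff reduces each open thread to its root-comment author. A self-contained copy of that helper with illustrative sample data shaped like the script's normalized threads:

```python
from typing import Any


def build_open_thread_counts_by_user(open_threads: list[dict[str, Any]]) -> dict[str, int]:
    """Count open latest-commit threads by their root-comment author (as in the diff above)."""
    counts: dict[str, int] = {}
    for thread in open_threads:
        root_user = str(thread.get("root_comment", {}).get("user") or "")
        if not root_user:
            # Threads without a resolvable root author are skipped, not counted.
            continue
        counts[root_user] = counts.get(root_user, 0) + 1
    return counts


# Illustrative sample threads, not real PR data.
threads = [
    {"root_comment": {"user": "coderabbitai[bot]"}},
    {"root_comment": {"user": "greptile-apps[bot]"}},
    {"root_comment": {"user": "coderabbitai[bot]"}},
    {"root_comment": {}},  # missing author: skipped
]
```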

View File

@@ -7,12 +7,12 @@
## Current recovery point
- Recovery point ID: `DOCUMENTATION-GOVERNANCE-REFRESH-RP-009`
- Recovery point ID: `DOCUMENTATION-GOVERNANCE-REFRESH-RP-010`
- Current phase: `Phase 3`
- Current focus:
  - A unified public skill is in place: `.agents/skills/gframework-doc-refresh/`
  - The documentation-refactoring entry point has been consolidated from "one skill per guide/tutorial/api type" to "documentation refresh driven by source modules"
  - The currently unresolved review threads on PR #268 are being closed out: archive the active trace, add the Game Scene/UI directory conventions, fix the skill YAML
  - The currently unresolved review threads on PR #268 are being closed out: Scene/UI heading-level fixes, shared-script review fixes, and completed multi-AI-reviewer support in `gframework-pr-review`
  - The next round should use the unified skill to drive verification of the Godot-related generator pages
## Current status summary
@@ -36,15 +36,21 @@
- The `documentation-governance-and-refresh` active trace has renamed its duplicate `### 下一步` ("Next steps") headings into unique headings tagged with recovery-point identifiers, eliminating the `MD024/no-duplicate-heading` warning
- The `gframework-pr-review` script has fixed the parsing path where an empty `APPROVED` review overrode a non-empty CodeRabbit review body; the current branch can extract Nitpick comments again
- `gframework-pr-review` now explicitly treats `coderabbitai[bot]` and `greptile-apps[bot]` as supported AI reviewers and lists reviewer metadata and latest-head open-thread counts separately in the output, instead of leaving `greptile-apps` buried in the generic thread list
- `.agents/skills/gframework-pr-review/scripts/fetch_current_pr_review.py` now has a docstring on every function; a local AST count reports `44/44`, a file-level docstring coverage of `100%`
- `docs/zh-CN/core/events.md`, `property.md`, and `logging.md` have been restructured around "current role, most common entry points, boundaries, and migration advice" instead of reproducing the old exhaustive API lists
- `docs/zh-CN/core/property.md` now explicitly documents the closed-generic-type sharing semantics of `BindableProperty<T>.Comparer`, so the docs no longer mislead readers into treating `WithComparer(...)` as per-instance configuration
- `docs/zh-CN/core/state-management.md` and `coroutine.md` have been re-verified against the current runtime / tests; their current content can stay
- `docs/zh-CN/game/scene.md` has been restructured around "real public entry points, scene-stack semantics, factory/root wiring, transition handlers, and guard extension points", and no longer
  implies the framework ships unified scene registration and full engine wiring; this round added the project-side directory layout, file naming, minimal wiring, and compatibility notes
  implies the framework ships unified scene registration and full engine wiring; this round added the project-side directory layout, file naming, minimal wiring, and compatibility notes, and demoted "recommended directory and file conventions (project side)" to a subsection under "minimal integration path"
- `docs/zh-CN/game/ui.md` has been restructured around "Page stack, layer UI, input-action arbitration, World blocking, and pause semantics", making it explicit that `Show(...)`
  does not apply to `UiLayer.Page`; this round added recommended placement conventions for router, factory, root, page behavior, params, and views
  does not apply to `UiLayer.Page`; this round added recommended placement conventions for router, factory, root, page behavior, params, and views, and fixed the empty "minimal integration path" section and its misaligned heading levels
- After this round of rewrites, `cd docs && bun run build` passes again; the `game` section entry and topic-page changes do not break the site build
- `docs/zh-CN/source-generators/context-aware-generator.md` has been restructured around "actually generated members, provider/instance caching semantics, the boundary with `ContextAwareBase`, and test wiring", and no longer substitutes old simplified generated code for the current implementation
@@ -54,6 +60,11 @@
- `.agents/skills/gframework-doc-refresh/SKILL.md` is now a standard YAML-frontmatter skill and spells out the supported module inputs, evidence order, output priorities, and verification steps
- The `description` in `.agents/skills/gframework-doc-refresh/SKILL.md` is now quoted, fixing the invalid-YAML skill-loading warning caused by the colon in `Recommended command:`
- `.agents/skills/gframework-doc-refresh/scripts/validate-code-blocks.sh` now tracks opening / closing fences via `IN_CODE_BLOCK`, so closing fences are no longer misreported as missing a language tag
- `get_readme_paths()` in `.agents/skills/_shared/module-config.sh` now covers `Core.SourceGenerators.Abstractions`, `Godot.SourceGenerators.Abstractions`, `Ecs.Arch.Abstractions`, and `SourceGenerators.Common`, and returns a non-zero exit code for unmapped modules
- `.agents/skills/_shared/module-map.json` has been consolidated into a source-module mapping table covering source directories, test projects, READMEs, `docs/zh-CN` sections, and the `ai-libs/` reference entries
- The old `vitepress-api-doc`, `vitepress-batch-api`, `vitepress-doc-generator`, `vitepress-guide`, `vitepress-tutorial`, and `vitepress-validate` are no longer kept as usable public skill definition files
@@ -79,6 +90,9 @@
- Review follow-up risk: if PR-review fetching keeps preferring an empty review body, CodeRabbit Nitpick and linter follow-ups will be missed
  - Mitigation: keep the current "latest commit + latest non-empty CodeRabbit review body" parsing strategy, and re-verify against live API results whenever in doubt
- Reviewer-adaptation drift risk: if a new AI reviewer appears later while the script still maintains a fixed bot allowlist, the "threads are visible but the skill never declared coverage" gap could reappear
  - Mitigation: `coderabbitai[bot]` and `greptile-apps[bot]` are now explicitly supported; when adding a reviewer, update `.agents/skills/gframework-pr-review/SKILL.md`, `agents/openai.yaml`, and the fetch script's constants table together
## Active documents
@@ -92,6 +106,11 @@
- The active trace file has been trimmed down to the current recovery entry point per the `ai-plan` governance rules
- `cd docs && bun run build`
- `python3 -B .agents/skills/gframework-pr-review/scripts/fetch_current_pr_review.py --branch docs/sdk-update-documentation --format json --json-output /tmp/current-pr-review.json`
- `python3 -B .agents/skills/gframework-pr-review/scripts/fetch_current_pr_review.py --branch docs/sdk-update-documentation --section open-threads`
- `python3 -B -c "import ast, pathlib; path=pathlib.Path('.agents/skills/gframework-pr-review/scripts/fetch_current_pr_review.py'); tree=ast.parse(path.read_text(encoding='utf-8')); funcs=[node for node in ast.walk(tree) if isinstance(node,(ast.FunctionDef, ast.AsyncFunctionDef))]; documented=sum(1 for node in funcs if ast.get_docstring(node)); print(f'functions={len(funcs)} documented={documented} coverage={documented/len(funcs):.2%}')"`
- `bash .agents/skills/gframework-doc-refresh/scripts/validate-code-blocks.sh docs/zh-CN/game/scene.md`
- `bash .agents/skills/gframework-doc-refresh/scripts/validate-code-blocks.sh docs/zh-CN/game/ui.md`
- `bash -lc 'source .agents/skills/_shared/module-config.sh && get_readme_paths Core.SourceGenerators.Abstractions && if get_readme_paths Not.Real.Module; then exit 1; else echo unmapped-ok; fi'`
- `python3 -c "import pathlib, yaml; text = pathlib.Path('.agents/skills/gframework-doc-refresh/SKILL.md').read_text(); yaml.safe_load(text.split('---', 2)[1]); print('yaml-ok')"`
- `python3 .agents/skills/gframework-doc-refresh/scripts/scan_module_evidence.py Core`
- `python3 .agents/skills/gframework-doc-refresh/scripts/scan_module_evidence.py Godot.SourceGenerators`
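The docstring-coverage one-liner in the list above is dense; unrolled, it is just an `ast` walk over the script's function definitions:

```python
import ast
import pathlib

def docstring_coverage(path: str) -> tuple[int, int]:
    """Count (total, documented) function definitions in a Python file."""
    tree = ast.parse(pathlib.Path(path).read_text(encoding="utf-8"))
    funcs = [node for node in ast.walk(tree)
             if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))]
    documented = sum(1 for node in funcs if ast.get_docstring(node))
    return len(funcs), documented
```

Run against `fetch_current_pr_review.py`, this should report the `44/44` coverage the trace records.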
@@ -101,5 +120,5 @@
1. Continue checking the Godot-related generator pages, prioritizing `godot-project-generator.md`, `get-node-generator.md`, and
`bind-node-signal-generator.md`, letting the `gframework-doc-refresh` module scan results drive the judgment
2. Re-confirm that the `AutoLoad` / `InputActions` in `project.godot` and the `GetNode` / `BindNodeSignal` examples still match the current package relationships and generator entry points
3. If the active trace accumulates newly completed phases again, migrate them into `archive/traces/` at recovery-point granularity to keep the default startup entry from bloating again
2. After the next push, rerun `$gframework-pr-review` first to confirm whether PR #268's CodeRabbit / Greptile open threads converge as expected
3. Then continue confirming that the `AutoLoad` / `InputActions` in `project.godot` and the `GetNode` / `BindNodeSignal` examples still match the current package relationships and generator entry points

View File

@@ -2,30 +2,40 @@
## 2026-04-22
### Current recovery point: RP-009
### Current recovery point: RP-010
- This round recovered from PR #268's latest review data; no failing checks were found, and the CTRF report shows all 2139 tests passing
- The latest unresolved review threads request:
- Trimming the active trace so the default recovery entry stops bloating
- Adding project directory and file conventions to `docs/zh-CN/game/scene.md`
- Adding project directory and file conventions to `docs/zh-CN/game/ui.md`
- This round's re-check confirmed that the current PR's latest-head open threads come from both `coderabbitai[bot]` and `greptile-apps[bot]`
- Reviews that still hold have been fixed locally:
- `docs/zh-CN/game/scene.md`: demoted "Recommended directory and file conventions (project side)" to a subsection under "Minimal integration path"
- `docs/zh-CN/game/ui.md`: added a lead-in for "Minimal integration path" and fixed the misaligned sibling heading
- `.agents/skills/gframework-doc-refresh/scripts/validate-code-blocks.sh`: rewritten as an opening / closing fence state machine
- `.agents/skills/_shared/module-config.sh`: filled in the missing module mappings and made unmapped modules return a non-zero exit code
- `gframework-pr-review` now has multi-reviewer support on both the copy and the output model: the JSON reports `review_agents`
and `open_thread_counts_by_user` separately, and the text output explicitly lists CodeRabbit / Greptile
- Docstring coverage for local functions in `fetch_current_pr_review.py` has been raised to `44/44`
- Execution details for the now-closed RP-001 through RP-008 have been archived to
`ai-plan/public/documentation-governance-and-refresh/archive/traces/documentation-governance-and-refresh-rp-001-through-rp-008.md`
- This round also fixed the YAML frontmatter of `.agents/skills/gframework-doc-refresh/SKILL.md` so that long descriptions containing colons no longer break skill loading
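The two new JSON fields can be sketched as a simple aggregation over fetched threads. The `review_agents` and `open_thread_counts_by_user` field names come from the trace; the thread shape (`author`, `resolved` keys) is an assumption, and the real script's internal model may differ.

```python
from collections import Counter

def summarize_threads(threads: list[dict]) -> dict:
    """Aggregate open review threads into the two per-reviewer JSON fields."""
    open_threads = [t for t in threads if not t.get("resolved")]
    counts = Counter(t["author"] for t in open_threads)
    return {
        "review_agents": sorted(counts),             # which bots have open threads
        "open_thread_counts_by_user": dict(counts),  # open-thread count per bot
    }
```

With one open thread each from CodeRabbit and Greptile, both bots appear in `review_agents` and each gets a count of 1.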
### Current decisions
- The active trace keeps only the current recovery point, key facts, verification, and next steps; completed phases continue to move into `archive/traces/`
- The integration notes in `scene.md` and `ui.md` must cover directory layout, file naming, interface implementation relationships, minimal wiring, and compatibility notes together
- `gframework-pr-review` fetch results treat the latest unresolved head review threads as authoritative; old summaries and stale comments are reference only
- Beyond directory layout, the integration notes in `scene.md` and `ui.md` must also keep heading levels that genuinely reflect the adoption-path semantics
- `gframework-pr-review` keeps latest-head unresolved threads as the primary signal while explicitly declaring the supported AI
reviewer list, so skill declarations and actual fetch capability cannot drift apart again
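The "latest-head unresolved threads as primary signal" decision reduces to one filter. This is a sketch under assumed field names (`resolved`, `commit`), not the script's actual data model:

```python
def primary_signal(threads: list[dict], head_sha: str) -> list[dict]:
    """Keep only unresolved threads anchored on the latest head commit;
    everything else (old summaries, stale comments) is reference only."""
    return [t for t in threads
            if not t.get("resolved") and t.get("commit") == head_sha]
```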
### Verification
- `python3 -B .agents/skills/gframework-pr-review/scripts/fetch_current_pr_review.py --branch docs/sdk-update-documentation --format json --json-output /tmp/current-pr-review.json`
- `python3 -B .agents/skills/gframework-pr-review/scripts/fetch_current_pr_review.py --branch docs/sdk-update-documentation --section open-threads`
- `python3 -B -c "import ast, pathlib; path=pathlib.Path('.agents/skills/gframework-pr-review/scripts/fetch_current_pr_review.py'); tree=ast.parse(path.read_text(encoding='utf-8')); funcs=[node for node in ast.walk(tree) if isinstance(node,(ast.FunctionDef, ast.AsyncFunctionDef))]; documented=sum(1 for node in funcs if ast.get_docstring(node)); print(f'functions={len(funcs)} documented={documented} coverage={documented/len(funcs):.2%}')"`
- `bash .agents/skills/gframework-doc-refresh/scripts/validate-code-blocks.sh docs/zh-CN/game/scene.md`
- `bash .agents/skills/gframework-doc-refresh/scripts/validate-code-blocks.sh docs/zh-CN/game/ui.md`
- `bash -lc 'source .agents/skills/_shared/module-config.sh && get_readme_paths Core.SourceGenerators.Abstractions && if get_readme_paths Not.Real.Module; then exit 1; else echo unmapped-ok; fi'`
- `cd docs && bun run build`
### Next steps
1. Continue using `gframework-doc-refresh` for a real module scan of `Godot.SourceGenerators`
2. Prioritize refreshing `godot-project-generator.md`, `get-node-generator.md`, and `bind-node-signal-generator.md`
3. If `module-map.json` lacks aliases or docs mappings for the Godot scenario, backfill the shared mapping first, then update the specific pages
1. After the next push, rerun `$gframework-pr-review` to confirm whether PR #268's CodeRabbit / Greptile open threads are closed or reduced
2. Continue using `gframework-doc-refresh` for a real module scan of `Godot.SourceGenerators`
3. Prioritize refreshing `godot-project-generator.md`, `get-node-generator.md`, and `bind-node-signal-generator.md`

View File

@@ -85,7 +85,7 @@ description: 说明 GFramework.Game 场景路由的当前入口、项目侧接
Integration is recommended in the order below.
## Recommended directory and file conventions (project side)
### Recommended directory and file conventions (project side)
The directory structure of the scene system is not enforced by the framework, but it is recommended to keep "route orchestration, instance creation, engine mounting, and business scenes" in separate places so that
`SceneRouterBase` subclasses do not grow into giant coordinators.

View File

@@ -138,7 +138,9 @@ description: 说明 GFramework.Game UI 路由当前的页面栈、层级 UI、
## Minimal integration path
## Recommended directory and file conventions (project side)
Integration is recommended in the order below.
### Recommended directory and file conventions (project side)
UI system integration files should be split into "router, factory, root node, page behavior, and input parameters". This keeps `UiRouterBase` responsible only for orchestration,
leaving engine node creation and page business logic on the project side.