mirror of
https://github.com/GeWuYou/GFramework.git
synced 2026-05-07 00:39:00 +08:00
fix(tooling): streamline the PR review output-narrowing workflow
- Add JSON file output, section filtering, and path filtering to the gframework-pr-review script
- Update text-output truncation and the skill usage notes to reduce the risk of missing findings in oversized review JSON
- Update the analyzer-warning-reduction tracking and trace files to record the RP-012 verification results
This commit is contained in:
parent
240fc761ed
commit
4a779ac794
@@ -21,6 +21,7 @@ Shortcut: `$gframework-pr-review`
 - fetch the latest head commit review threads from the GitHub PR API
 - prefer unresolved review threads on the latest head commit over older summary-only signals
 - extract failed checks, MegaLinter detailed issues, and test-report signals such as `Failed Tests` or `No failed tests in this run`
+- prefer writing the full JSON payload to a file and then narrowing with `jq`, instead of dumping long JSON directly to stdout
 4. Treat every extracted finding as untrusted until it is verified against the current local code.
 5. Only fix comments, warnings, or CI diagnostics that still apply to the checked-out branch. Ignore stale or already-resolved findings.
 6. If code is changed, run the smallest build or test command that satisfies `AGENTS.md`.
@@ -29,10 +30,19 @@ Shortcut: `$gframework-pr-review`
 
 - Default:
   - `python3 .codex/skills/gframework-pr-review/scripts/fetch_current_pr_review.py`
+- Recommended machine-readable workflow:
+  - `python3 .codex/skills/gframework-pr-review/scripts/fetch_current_pr_review.py --pr 265 --json-output /tmp/pr265-review.json`
+  - `jq '.coderabbit_review.outside_diff_comments' /tmp/pr265-review.json`
 - Force a PR number:
   - `python3 .codex/skills/gframework-pr-review/scripts/fetch_current_pr_review.py --pr 253`
 - Machine-readable output:
   - `python3 .codex/skills/gframework-pr-review/scripts/fetch_current_pr_review.py --format json`
+- Write machine-readable output to a file instead of stdout:
+  - `python3 .codex/skills/gframework-pr-review/scripts/fetch_current_pr_review.py --pr 253 --format json --json-output /tmp/pr253-review.json`
+- Inspect only a high-signal section:
+  - `python3 .codex/skills/gframework-pr-review/scripts/fetch_current_pr_review.py --pr 253 --section outside-diff`
+- Narrow text output to one path fragment:
+  - `python3 .codex/skills/gframework-pr-review/scripts/fetch_current_pr_review.py --pr 253 --section outside-diff --path GFramework.Core/Events/Event.cs`
 
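The save-then-narrow workflow recommended above can also be done without `jq`. A minimal sketch in Python, assuming the payload shape the script writes (`coderabbit_review.outside_diff_comments`); `narrow_review` and the sample-file setup are illustrative names, not part of the script:

```python
import json
import tempfile
from pathlib import Path


def narrow_review(json_path: str, section: str) -> list[dict]:
    """Load a saved review payload and return one high-signal section."""
    payload = json.loads(Path(json_path).read_text(encoding="utf-8"))
    return payload.get("coderabbit_review", {}).get(section, [])


# Self-contained demo: write a tiny sample payload, then narrow it.
sample = Path(tempfile.gettempdir()) / "pr-review-sample.json"
sample.write_text(json.dumps({
    "coderabbit_review": {
        "outside_diff_comments": [
            {"path": "GFramework.Core/Events/Event.cs", "title": "listener count"}
        ]
    }
}), encoding="utf-8")

for comment in narrow_review(str(sample), "outside_diff_comments"):
    print(comment["path"], "-", comment["title"])
```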
 ## Output Expectations
 
@@ -47,6 +57,7 @@ The script should produce:
 - Pre-merge failed checks, if present
 - Latest MegaLinter status and any detailed issues posted by `github-actions[bot]`
 - Test summary, including failed-test signals when present
+- CLI support for writing full JSON to a file and printing only narrowed text sections to stdout
 - Parse warnings only when both the primary API source and the intended fallback signal are unavailable
 
 ## Recovery Rules
 
@@ -57,6 +68,7 @@ The script should produce:
 - If the summary block and the latest head review threads disagree, trust the latest unresolved head-review threads and treat older summary findings as stale until re-verified locally.
 - Treat GitHub Actions comments with `Success with warnings` as actionable review input when they include concrete linter diagnostics such as `MegaLinter` detailed issues; do not skip them just because the parent check is green.
 - Do not assume all CodeRabbit findings live in issue comments. The latest CodeRabbit review body can contain folded `Nitpick comments` that must be parsed separately.
+- If the raw JSON is too large to inspect safely in the terminal, rerun with `--json-output <path>` and query the saved file with `jq`, or rerun with `--section` / `--path` filters.
 
 ## Example Triggers
 
@@ -10,6 +10,7 @@ import argparse
 import html
 import json
 import os
+from pathlib import Path
 import re
 import shutil
 import subprocess
@@ -29,6 +30,17 @@ REVIEW_COMMENT_ADDRESSED_MARKER = "<!-- <review_comment_addressed> -->"
 VISIBLE_ADDRESSED_IN_COMMIT_PATTERN = re.compile(r"✅\s*Addressed in commit\s+[0-9a-f]{7,40}", re.I)
 DEFAULT_REQUEST_TIMEOUT_SECONDS = 60
 REQUEST_TIMEOUT_ENVIRONMENT_KEY = "GFRAMEWORK_PR_REVIEW_TIMEOUT_SECONDS"
+DISPLAY_SECTION_CHOICES = (
+    "pr",
+    "failed-checks",
+    "actionable",
+    "outside-diff",
+    "nitpick",
+    "open-threads",
+    "megalinter",
+    "tests",
+    "warnings",
+)
 
 
 def resolve_git_command() -> str:
@@ -153,6 +165,14 @@ def collapse_whitespace(text: str) -> str:
     return re.sub(r"\s+", " ", text).strip()
 
 
+def truncate_text(text: str, max_length: int) -> str:
+    collapsed = collapse_whitespace(text)
+    if max_length <= 0 or len(collapsed) <= max_length:
+        return collapsed
+
+    return collapsed[: max_length - 3].rstrip() + "..."
+
+
 def strip_tags(text: str) -> str:
     return collapse_whitespace(re.sub(r"<[^>]+>", " ", text))
 
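The `truncate_text` helper added here collapses whitespace first and only then truncates, reserving three characters for the ellipsis. A standalone copy of the same logic so the behavior can be checked in isolation:

```python
import re


def collapse_whitespace(text: str) -> str:
    return re.sub(r"\s+", " ", text).strip()


def truncate_text(text: str, max_length: int) -> str:
    collapsed = collapse_whitespace(text)
    if max_length <= 0 or len(collapsed) <= max_length:
        return collapsed  # short enough, or truncation disabled
    return collapsed[: max_length - 3].rstrip() + "..."


print(truncate_text("one   two\nthree", 100))  # → one two three
print(truncate_text("abcdefghij", 8))          # → abcde...
```

Note that the result never exceeds `max_length`: the slice keeps `max_length - 3` characters and the `"..."` fills the rest.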
@@ -710,64 +730,142 @@ def build_result(pr_number: int, branch: str) -> dict[str, Any]:
     }
 
 
-def format_text(result: dict[str, Any]) -> str:
+def write_json_output(result: dict[str, Any], output_path: str) -> str:
+    destination_path = Path(output_path).expanduser()
+    destination_path.parent.mkdir(parents=True, exist_ok=True)
+    destination_path.write_text(json.dumps(result, ensure_ascii=False, indent=2), encoding="utf-8")
+    return str(destination_path)
+
+
+def normalize_path_filters(path_filters: list[str] | None) -> list[str]:
+    return [path_filter.replace("\\", "/") for path_filter in (path_filters or []) if path_filter.strip()]
+
+
+def path_matches_filters(path: str, normalized_path_filters: list[str]) -> bool:
+    if not normalized_path_filters:
+        return True
+
+    normalized_path = path.replace("\\", "/")
+    return any(path_filter in normalized_path for path_filter in normalized_path_filters)
+
+
+def filter_comments_by_path(
+    comments: list[dict[str, Any]],
+    normalized_path_filters: list[str],
+) -> list[dict[str, Any]]:
+    return [comment for comment in comments if path_matches_filters(str(comment.get("path") or ""), normalized_path_filters)]
+
+
+def filter_threads_by_path(
+    threads: list[dict[str, Any]],
+    normalized_path_filters: list[str],
+) -> list[dict[str, Any]]:
+    return [thread for thread in threads if path_matches_filters(str(thread.get("path") or ""), normalized_path_filters)]
+
+
+def format_text(
+    result: dict[str, Any],
+    *,
+    sections: list[str] | None = None,
+    path_filters: list[str] | None = None,
+    max_description_length: int = 400,
+    json_output_path: str | None = None,
+) -> str:
     lines: list[str] = []
+    selected_sections = set(sections or DISPLAY_SECTION_CHOICES)
+    normalized_path_filters = normalize_path_filters(path_filters)
     pr = result["pull_request"]
-    lines.append(f"PR #{pr['number']}: {pr['title']}")
-    lines.append(f"State: {pr['state']}")
-    lines.append(f"Branch: {pr['head_branch']} -> {pr['base_branch']}")
-    lines.append(f"URL: {pr['url']}")
+    if "pr" in selected_sections:
+        lines.append(f"PR #{pr['number']}: {pr['title']}")
+        lines.append(f"State: {pr['state']}")
+        lines.append(f"Branch: {pr['head_branch']} -> {pr['base_branch']}")
+        lines.append(f"URL: {pr['url']}")
 
     failed_checks = result["coderabbit_summary"].get("failed_checks", [])
-    lines.append("")
-    lines.append(f"Failed checks: {len(failed_checks)}")
-    for check in failed_checks:
-        lines.append(f"- {check['name']}: {check['status']}")
-        lines.append(f"  Explanation: {check['explanation']}")
-        lines.append(f"  Resolution: {check['resolution']}")
+    if "failed-checks" in selected_sections:
+        lines.append("")
+        lines.append(f"Failed checks: {len(failed_checks)}")
+        for check in failed_checks:
+            lines.append(f"- {check['name']}: {check['status']}")
+            lines.append(f"  Explanation: {truncate_text(check['explanation'], max_description_length)}")
+            lines.append(f"  Resolution: {truncate_text(check['resolution'], max_description_length)}")
 
     coderabbit_comments = result.get("coderabbit_comments", {})
     review_feedback = result.get("coderabbit_review", {})
     comments = coderabbit_comments.get("comments", [])
+    visible_comments = filter_comments_by_path(comments, normalized_path_filters)
    actionable_count = review_feedback.get("actionable_count") or coderabbit_comments.get("count") or len(comments)
-    lines.append("")
-    lines.append(f"CodeRabbit actionable comments: {actionable_count}")
-    for comment in comments:
-        lines.append(f"- {comment['path']} {comment['range']}".rstrip())
-        if comment["title"]:
-            lines.append(f"  Title: {comment['title']}")
-        if comment["description"]:
-            lines.append(f"  Description: {comment['description']}")
-    if actionable_count and not comments:
-        lines.append("  Details: see latest-commit review threads below.")
+    if "actionable" in selected_sections:
+        lines.append("")
+        lines.append(
+            f"CodeRabbit actionable comments: {actionable_count} total"
+            + (
+                f", {len(visible_comments)} shown after path filter"
+                if normalized_path_filters
+                else ""
+            )
+        )
+        for comment in visible_comments:
+            lines.append(f"- {comment['path']} {comment['range']}".rstrip())
+            if comment["title"]:
+                lines.append(f"  Title: {truncate_text(comment['title'], max_description_length)}")
+            if comment["description"]:
+                lines.append(f"  Description: {truncate_text(comment['description'], max_description_length)}")
+        if actionable_count and not visible_comments:
+            lines.append("  Details: no actionable comments matched the current path filter.")
+        elif actionable_count and not comments:
+            lines.append("  Details: see latest-commit review threads below.")
 
     outside_diff_comments = review_feedback.get("outside_diff_comments", [])
+    visible_outside_diff_comments = filter_comments_by_path(outside_diff_comments, normalized_path_filters)
     outside_diff_count = review_feedback.get("outside_diff_count") or len(outside_diff_comments)
-    lines.append("")
-    lines.append(f"CodeRabbit outside-diff comments: {outside_diff_count} declared, {len(outside_diff_comments)} parsed")
-    for comment in outside_diff_comments:
-        lines.append(f"- {comment['path']} {comment['range']}".rstrip())
-        if comment["title"]:
-            lines.append(f"  Title: {comment['title']}")
-        if comment["description"]:
-            lines.append(f"  Description: {comment['description']}")
+    if "outside-diff" in selected_sections:
+        lines.append("")
+        lines.append(
+            f"CodeRabbit outside-diff comments: {outside_diff_count} declared, {len(outside_diff_comments)} parsed"
+            + (
+                f", {len(visible_outside_diff_comments)} shown after path filter"
+                if normalized_path_filters
+                else ""
+            )
+        )
+        for comment in visible_outside_diff_comments:
+            lines.append(f"- {comment['path']} {comment['range']}".rstrip())
+            if comment["title"]:
+                lines.append(f"  Title: {truncate_text(comment['title'], max_description_length)}")
+            if comment["description"]:
+                lines.append(f"  Description: {truncate_text(comment['description'], max_description_length)}")
+        if outside_diff_comments and not visible_outside_diff_comments:
+            lines.append("  Details: no outside-diff comments matched the current path filter.")
 
     nitpick_comments = review_feedback.get("nitpick_comments", [])
+    visible_nitpick_comments = filter_comments_by_path(nitpick_comments, normalized_path_filters)
     nitpick_count = review_feedback.get("nitpick_count") or len(nitpick_comments)
-    lines.append("")
-    lines.append(f"CodeRabbit nitpick comments: {nitpick_count} declared, {len(nitpick_comments)} parsed")
-    for comment in nitpick_comments:
-        lines.append(f"- {comment['path']} {comment['range']}".rstrip())
-        if comment["title"]:
-            lines.append(f"  Title: {comment['title']}")
-        if comment["description"]:
-            lines.append(f"  Description: {comment['description']}")
+    if "nitpick" in selected_sections:
+        lines.append("")
+        lines.append(
+            f"CodeRabbit nitpick comments: {nitpick_count} declared, {len(nitpick_comments)} parsed"
+            + (
+                f", {len(visible_nitpick_comments)} shown after path filter"
+                if normalized_path_filters
+                else ""
+            )
+        )
+        for comment in visible_nitpick_comments:
+            lines.append(f"- {comment['path']} {comment['range']}".rstrip())
+            if comment["title"]:
+                lines.append(f"  Title: {truncate_text(comment['title'], max_description_length)}")
+            if comment["description"]:
+                lines.append(f"  Description: {truncate_text(comment['description'], max_description_length)}")
+        if nitpick_comments and not visible_nitpick_comments:
+            lines.append("  Details: no nitpick comments matched the current path filter.")
 
     latest_commit_review = result.get("latest_commit_review", {})
     latest_commit = latest_commit_review.get("latest_commit", {})
     latest_review = latest_commit_review.get("latest_review", {})
     open_threads = latest_commit_review.get("open_threads", [])
-    if latest_commit:
+    visible_open_threads = filter_threads_by_path(open_threads, normalized_path_filters)
+    if latest_commit and "open-threads" in selected_sections:
         lines.append("")
         lines.append(f"Latest reviewed commit: {latest_commit.get('sha', '')}")
         if latest_review:
@@ -780,23 +878,32 @@ def format_text(result: dict[str, Any]) -> str:
         lines.append(
             "Latest commit review threads: "
             f"{len(latest_commit_review.get('threads', []))} total, {len(open_threads)} open"
+            + (
+                f", {len(visible_open_threads)} shown after path filter"
+                if normalized_path_filters
+                else ""
+            )
         )
-        for thread in open_threads:
+        for thread in visible_open_threads:
             root_comment = thread["root_comment"]
             latest_comment = thread["latest_comment"]
             lines.append(f"- {thread['path']}:{thread['line']}")
-            lines.append(f"  Root by {root_comment['user']}: {collapse_whitespace(root_comment['body'])}")
+            lines.append(f"  Root by {root_comment['user']}: {truncate_text(root_comment['body'], max_description_length)}")
             if latest_comment["id"] != root_comment["id"]:
-                lines.append(f"  Latest by {latest_comment['user']}: {collapse_whitespace(latest_comment['body'])}")
+                lines.append(
+                    f"  Latest by {latest_comment['user']}: {truncate_text(latest_comment['body'], max_description_length)}"
+                )
             if contains_visible_addressed_commit_text(root_comment["body"]) or contains_visible_addressed_commit_text(
                 latest_comment["body"]
             ):
                 lines.append(
                     "  Note: thread is still open; treat the visible 'Addressed in commit ...' text as unverified until local code matches."
                 )
+        if open_threads and not visible_open_threads:
+            lines.append("  Details: no open threads matched the current path filter.")
 
     megalinter_report = result.get("megalinter_report", {})
-    if megalinter_report:
+    if megalinter_report and "megalinter" in selected_sections:
         lines.append("")
         lines.append(
             "MegaLinter: "
@@ -818,32 +925,37 @@ def format_text(result: dict[str, Any]) -> str:
 
         for issue in megalinter_report.get("detailed_issues", []):
             lines.append(f"- Detailed issue: {issue['summary']}")
-            lines.append(f"  {collapse_whitespace(issue['details'])}")
+            lines.append(f"  {truncate_text(issue['details'], max_description_length)}")
 
-    lines.append("")
-    lines.append(f"Test reports: {len(result['test_reports'])}")
-    for index, report in enumerate(result["test_reports"], start=1):
-        stats = report.get("stats", {})
-        if stats:
-            lines.append(
-                f"- Report {index}: tests={stats.get('tests')} passed={stats.get('passed')} "
-                f"failed={stats.get('failed')} skipped={stats.get('skipped')} flaky={stats.get('flaky')} "
-                f"duration={stats.get('duration')}"
-            )
-        else:
-            lines.append(f"- Report {index}: no structured test stats parsed")
+    if "tests" in selected_sections:
+        lines.append("")
+        lines.append(f"Test reports: {len(result['test_reports'])}")
+        for index, report in enumerate(result["test_reports"], start=1):
+            stats = report.get("stats", {})
+            if stats:
+                lines.append(
+                    f"- Report {index}: tests={stats.get('tests')} passed={stats.get('passed')} "
+                    f"failed={stats.get('failed')} skipped={stats.get('skipped')} flaky={stats.get('flaky')} "
+                    f"duration={stats.get('duration')}"
+                )
+            else:
+                lines.append(f"- Report {index}: no structured test stats parsed")
 
-        if report["has_failed_tests"]:
-            for failed_test in report["failed_tests"]:
-                lines.append(f"  Failed test: {failed_test}")
-        else:
-            lines.append("  Failed tests: none reported")
+            if report["has_failed_tests"]:
+                for failed_test in report["failed_tests"]:
+                    lines.append(f"  Failed test: {truncate_text(failed_test, max_description_length)}")
+            else:
+                lines.append("  Failed tests: none reported")
 
-    if result["parse_warnings"]:
+    if result["parse_warnings"] and "warnings" in selected_sections:
         lines.append("")
         lines.append("Warnings:")
         for warning in result["parse_warnings"]:
-            lines.append(f"- {warning}")
+            lines.append(f"- {truncate_text(warning, max_description_length)}")
+
+    if json_output_path:
+        lines.append("")
+        lines.append(f"Full JSON written to: {json_output_path}")
 
     return "\n".join(lines)
 
@@ -853,6 +965,27 @@ def parse_args() -> argparse.Namespace:
     parser.add_argument("--branch", help="Override the current branch name.")
     parser.add_argument("--pr", type=int, help="Fetch a specific PR number instead of resolving from branch.")
     parser.add_argument("--format", choices=("text", "json"), default="text")
+    parser.add_argument(
+        "--json-output",
+        help="Write the full JSON result to a file. When used with --format text, stdout stays concise and points to the file.",
+    )
+    parser.add_argument(
+        "--section",
+        action="append",
+        choices=DISPLAY_SECTION_CHOICES,
+        help="Limit text output to specific sections. Can be passed multiple times.",
+    )
+    parser.add_argument(
+        "--path",
+        action="append",
+        help="Only show comments and review threads whose path contains this fragment. Can be passed multiple times.",
+    )
+    parser.add_argument(
+        "--max-description-length",
+        type=int,
+        default=400,
+        help="Truncate long text bodies in text output to this many characters.",
+    )
     return parser.parse_args()
 
 
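The repeatable `--section` / `--path` flags rely on argparse's `action="append"`, which collects each occurrence into a list and leaves the attribute as `None` when the flag is never passed (which is why `format_text` treats `None` as "all sections"). A minimal sketch with an illustrative subset of the choices:

```python
import argparse

# Illustrative subset of DISPLAY_SECTION_CHOICES.
SECTION_CHOICES = ("outside-diff", "open-threads", "megalinter")

parser = argparse.ArgumentParser()
parser.add_argument("--section", action="append", choices=SECTION_CHOICES)
parser.add_argument("--path", action="append")

args = parser.parse_args(
    ["--section", "outside-diff", "--section", "megalinter", "--path", "Events"]
)
print(args.section)                   # → ['outside-diff', 'megalinter']
print(args.path)                      # → ['Events']
print(parser.parse_args([]).section)  # → None (caller falls back to all sections)
```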
@@ -866,12 +999,27 @@ def main() -> None:
         pr_number = resolve_pr_number(branch)
 
     result = build_result(pr_number, branch)
+    json_output_path: str | None = None
+    if args.json_output:
+        json_output_path = write_json_output(result, args.json_output)
 
     if args.format == "json":
+        if json_output_path:
+            print(json_output_path)
+            return
+
         print(json.dumps(result, ensure_ascii=False, indent=2))
         return
 
-    print(format_text(result))
+    print(
+        format_text(
+            result,
+            sections=args.section,
+            path_filters=args.path,
+            max_description_length=args.max_description_length,
+            json_output_path=json_output_path,
+        )
+    )
 
 
 if __name__ == "__main__":
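`write_json_output`, added earlier in this diff, expands `~`, creates missing parent directories, and returns the resolved path, which is what lets `main` print a single stable line in `--format json --json-output` mode instead of the full payload. The same function exercised in isolation against a temporary directory:

```python
import json
import tempfile
from pathlib import Path


def write_json_output(result: dict, output_path: str) -> str:
    destination_path = Path(output_path).expanduser()
    # Tolerate missing parent directories so any output path works.
    destination_path.parent.mkdir(parents=True, exist_ok=True)
    destination_path.write_text(json.dumps(result, ensure_ascii=False, indent=2), encoding="utf-8")
    return str(destination_path)


target = Path(tempfile.mkdtemp()) / "nested" / "review.json"
written = write_json_output({"ok": True}, str(target))
print(json.loads(Path(written).read_text(encoding="utf-8")))  # → {'ok': True}
```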
@@ -7,10 +7,10 @@
 
 ## Current Recovery Point
 
-- Recovery point ID: `ANALYZER-WARNING-REDUCTION-RP-011`
-- Current phase: `Phase 11`
+- Recovery point ID: `ANALYZER-WARNING-REDUCTION-RP-012`
+- Current phase: `Phase 12`
 - Current focus:
-  - The PR #265 follow-up has narrowed to the `Event.cs` listener-count fix; the next round resumes the main `MA0046` batch
+  - The PR review workflow now supports writing JSON to a file and narrowing output by section/path; the next round resumes the main `MA0046` batch
   - Continue batching by warning type and count, rather than falling back to per-file slices
   - When a round's primary type has too few instances, other low-conflict warning types may be folded in; `MA0015` and `MA0077` are merely the most obvious low-count examples, not a fixed limit
@@ -23,6 +23,7 @@
 - Multiple rounds of CodeRabbit follow-up fixes are complete, with targeted tests and project/solution builds covering the key regression risks
 - The PR #265 review follow-up is complete: fixed the zero-capacity growth edge case in `CoroutineScheduler` and added exception-safe rollback to the `Store` dispatch scope
 - The PR #265 review follow-up continued: fixed the off-by-one in the `Event<T>` and `Event<T, TK>` listener counts and added regression tests
+- Enhanced the `gframework-pr-review` script and skill docs to reduce the risk of missing review signals when oversized JSON is dumped directly
 - The `MA0048` file/type-name conflicts in `PauseStackManager`, `Store`, `CoroutineScheduler`, and `GFramework.Core` have been removed from the active list; the remaining warnings in this theme concentrate on `MA0046` delegate shapes, `MA0016` collection abstraction interfaces, and `MA0002` comparer overloads, plus the two low-count tail items `MA0015` / `MA0077`
@@ -48,6 +49,8 @@
   `_isDispatching = true` lockup issue
 - `RP-011` continued closing out the PR #265 outside-diff comments per the supplementary review: fixed the `GetListenerCount()` off-by-one caused by the default no-op delegates in `Event<T>` / `Event<T, TK>`, and verified registration, unregistration, and count semantics with targeted event tests
+- `RP-012` added `--json-output`, `--section`, `--path`, and text truncation to `gframework-pr-review`, and updated the recommended skill usage so that
+  "write to file first, then extract selectively" becomes the default actionable path
 - The current working-tree branch `fix/analyzer-warning-reduction-batch` has a topic mapping in `ai-plan/public/README.md`
 
 ## Current Risks
@@ -111,6 +114,13 @@
   - Result: `15 Warning(s)`, `0 Error(s)`; the `Event.cs` listener-count fix introduced no new `GFramework.Core` `net8.0` build errors
 - `dotnet test GFramework.Core.Tests/GFramework.Core.Tests.csproj -c Release --filter "FullyQualifiedName~EventTests.EventT_GetListenerCount_Should_Exclude_Placeholder_Handler|FullyQualifiedName~EventTests.EventTTK_GetListenerCount_Should_Exclude_Placeholder_Handler" -m:1 -p:RestoreFallbackFolders="" -nologo`
   - Result: `2 Passed`, `0 Failed`
+- Targeted verification results for `RP-012`:
+  - `python3 -m py_compile .codex/skills/gframework-pr-review/scripts/fetch_current_pr_review.py`
+    - Result: passed; used `PYTHONPYCACHEPREFIX=/tmp/codex-pycache` to work around the read-only skill directory blocking `__pycache__` writes
+  - `python3 .codex/skills/gframework-pr-review/scripts/fetch_current_pr_review.py --help`
+    - Result: passed; `--json-output`, `--section`, `--path`, and `--max-description-length` all appear in the CLI help
+  - `dotnet build GFramework.Core/GFramework.Core.csproj -c Release --no-restore -p:TargetFramework=net8.0 -p:RestoreFallbackFolders="" -nologo`
+    - Result: `0 Warning(s)`, `0 Error(s)`
 - The active tracking file keeps only the current recovery point, active facts, risks, and next steps; it no longer duplicates long histories of completed phases
 
 ## Next Steps
 
@@ -1,5 +1,34 @@
 # Analyzer Warning Reduction Tracking
 
+## 2026-04-21 — RP-012
+
+### Phase: PR review workflow output-narrowing enhancement (RP-012)
+
+- Background: in the previous round the script could already parse `outside_diff_comments`, but dumping oversized JSON straight to the terminal could still hide high-value review signals behind output truncation
+- This round made workflow-level enhancements to `gframework-pr-review` instead of continuing to rely on shell redirection tricks:
+  - Added `--json-output <path>` to `fetch_current_pr_review.py` so the full JSON can be written reliably to a file
+  - Added `--section` to print only high-signal text summaries such as `outside-diff`, `open-threads`, and `megalinter`
+  - Added `--path` to narrow text output to specific files or path fragments
+  - Added `--max-description-length` so oversized comment bodies no longer flood text mode
+  - When text mode is combined with `--json-output`, stdout stays concise and explicitly reports the full JSON file path
+- Updated `SKILL.md` accordingly:
+  - Documented "write to file first, then narrow with `jq` or `--section` / `--path`" as the recommended machine workflow
+  - Added example commands for focusing by section and by path
+- Expected benefits:
+  - Operators no longer have to read the entire long JSON by eye
+  - outside-diff, nitpick, and open-thread findings all become first-class filterable output
+  - Even when the terminal has a token/length cap, the full result remains reliably retrievable from the file
+- Targeted verification commands:
+  - `python3 -m py_compile .codex/skills/gframework-pr-review/scripts/fetch_current_pr_review.py`
+    - Result: passed; used `PYTHONPYCACHEPREFIX=/tmp/codex-pycache` to work around the `__pycache__` write restriction
+  - `python3 .codex/skills/gframework-pr-review/scripts/fetch_current_pr_review.py --help`
+    - Result: passed; all new CLI options appear in the help output
+  - `dotnet build GFramework.Core/GFramework.Core.csproj -c Release --no-restore -p:TargetFramework=net8.0 -p:RestoreFallbackFolders="" -nologo`
+    - Result: `0 Warning(s)`, `0 Error(s)`
+- Suggested next steps:
+  - Prefer `--json-output` by default in future `$gframework-pr-review` runs
+  - During review follow-up, check the `outside-diff`, `open-threads`, and `megalinter` sections first, then decide whether the full JSON is needed
+
 ## 2026-04-21 — RP-011
 
 ### Phase: PR #265 outside-diff follow-up close-out (RP-011)
 