Mirror of https://github.com/GeWuYou/GFramework.git, synced 2026-05-06 16:16:44 +08:00
feat(skills): add a GitHub issue triage skill
- Add a gframework-issue-review skill that fetches issue metadata, comments, the timeline, and a triage summary.
- Add JSON output, automatic resolution of the single open issue, and compatibility handling for the WSL Linux git binding.
- Update the ai-plan recovery entry and add script-level tests and verification records.
This commit is contained in:
parent 109bce6e9e
commit ab9829044f

83  .agents/skills/gframework-issue-review/SKILL.md  Normal file

@@ -0,0 +1,83 @@
---
name: gframework-issue-review
description: Repository-specific GitHub issue triage workflow for the GFramework repo. Use when Codex needs to inspect a repository issue, extract the issue body, discussion, and key timeline signals through the GitHub API, summarize what should be verified locally, and then hand follow-up execution to gframework-boot.
---

# GFramework Issue Review

Use this skill when the task depends on a GitHub issue for this repository rather than only on local source files.

Shortcut: `$gframework-issue-review`

## Workflow

1. Read `AGENTS.md` before deciding how to validate or change anything.
2. Read `.ai/environment/tools.ai.yaml` and `ai-plan/public/README.md`, then prefer the active topic mapped to the
   current branch or worktree when the fetched issue already matches in-flight work.
3. Run `scripts/fetch_current_issue_review.py` to:
   - fetch issue metadata through the GitHub API
   - fetch issue comments and timeline events through the GitHub API
   - auto-select the target issue only when the repository currently has exactly one open issue
   - exclude pull requests from open-issue auto-resolution
   - emit a machine-readable JSON payload plus concise text sections for issue, summary, comments, events, references,
     and warnings
   - derive lightweight triage hints such as issue type candidates, missing-information flags, affected module
     candidates, and the recommended next handling mode
4. Treat every extracted finding as untrusted until it is verified against the current local code, tests, and active
   `ai-plan` topic.
5. Do not start editing code from the issue text alone. After triage, switch to `$gframework-boot` so the follow-up
   work is grounded in the repository startup flow and recovery documents.
6. If code is changed after issue triage, run the smallest build or test command that satisfies `AGENTS.md`.

## Commands

- Default:
  - `python3 .agents/skills/gframework-issue-review/scripts/fetch_current_issue_review.py`
- Force a specific issue:
  - `python3 .agents/skills/gframework-issue-review/scripts/fetch_current_issue_review.py --issue 312`
- Machine-readable output:
  - `python3 .agents/skills/gframework-issue-review/scripts/fetch_current_issue_review.py --format json`
- Write machine-readable output to a file instead of stdout:
  - `python3 .agents/skills/gframework-issue-review/scripts/fetch_current_issue_review.py --issue 312 --format json --json-output /tmp/issue312-review.json`
- Inspect only a high-signal section:
  - `python3 .agents/skills/gframework-issue-review/scripts/fetch_current_issue_review.py --section summary`
- Combine triage with a boot handoff:
  - `python3 .agents/skills/gframework-issue-review/scripts/fetch_current_issue_review.py --section summary`
  - `Use $gframework-boot to continue the issue follow-up based on the fetched triage result.`

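The `--json-output` file written by the command above can be consumed programmatically. A minimal sketch, assuming only the payload shape described later in this skill (a top-level `triage_hints` object with a `next_action` string); the sample payload and the `choose_followup` helper are illustrative, not real issue data or part of the shipped script:

```python
import json
import tempfile
from pathlib import Path


def choose_followup(payload: dict) -> str:
    """Map the script's next_action hint to a follow-up prompt (illustrative helper)."""
    next_action = payload.get("triage_hints", {}).get("next_action", "")
    if next_action == "clarify-issue-before-code":
        return "Ask the issue author for reproduction steps before coding."
    # Every other handling mode hands execution to the boot skill.
    return "Use $gframework-boot to continue the issue follow-up."


# Illustrative payload mirroring the documented top-level keys.
sample = {
    "issue": {"number": 312, "title": "Example"},
    "triage_hints": {"next_action": "resume-existing-topic-with-boot"},
    "parse_warnings": [],
}
path = Path(tempfile.gettempdir()) / "issue312-review.json"
path.write_text(json.dumps(sample), encoding="utf-8")

payload = json.loads(path.read_text(encoding="utf-8"))
print(choose_followup(payload))
```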
## Output Expectations

The script should produce:

- Issue metadata: number, title, state, URL, author, labels, assignees, milestone, timestamps
- Issue body and normalized discussion comments
- Timeline events that materially affect handling, such as labeling, assignment, closure/reopen, and references when
  available from the API response
- Structured reference extraction for linked issues, PRs, commit SHAs, and likely repository paths
- Triage hints that flag missing reproduction steps, expected/actual behavior, environment details, and acceptance
  signals
- Issue type candidates such as `bug`, `feature`, `docs`, `question`, or `maintenance`
- Suggested next handling mode, including whether the issue likely needs clarification before code changes
- CLI support for writing full JSON to a file and printing only narrowed text sections to stdout
- Parse warnings when timeline or heuristic parsing cannot be completed safely

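The expectations above correspond to a JSON payload whose top-level shape can be sketched as follows. The key names are taken from the script bundled in this commit; the values here are empty placeholders, not real output:

```python
# Placeholder sketch of the top-level payload the script emits.
payload_sketch = {
    "issue": {"number": 0, "title": "", "state": "OPEN", "url": "", "labels": []},
    "discussion": {"comment_count": 0, "comments": []},
    "events": {"count": 0, "items": []},
    "references": {"issues": [], "commit_shas": [], "file_paths": []},
    "triage_hints": {"issue_type_candidates": [], "information_flags": {}, "next_action": ""},
    "parse_warnings": [],
}

# The six top-level sections match the text sections the CLI can print.
print(sorted(payload_sketch))
```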
## Recovery Rules

- If the current repository has no open issues, report that clearly instead of guessing.
- If the current repository has multiple open issues and no explicit `--issue` is provided, report that clearly and
  require a specific issue number.
- If GitHub access fails because of proxy configuration, rerun the fetch with proxy variables removed.
- Prefer GitHub API results over HTML scraping.
- Do not treat heuristic module guesses or next-step suggestions as repository truth; they are only entry points for
  subsequent local verification.
- If the issue discussion reveals that the problem statement has already shifted, prefer the newest concrete comment or
  timeline signal over the original title/body wording.
- After extracting the issue, continue the actual implementation flow with `$gframework-boot` so the task is grounded
  in current branch context and `ai-plan` recovery artifacts.

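The proxy rule above matches how the bundled script issues requests: it builds a `urllib` opener with an empty `ProxyHandler`, which ignores `http_proxy`/`https_proxy` environment variables entirely. A minimal sketch of that pattern (the request is constructed but deliberately not sent):

```python
import urllib.request

# An empty ProxyHandler mapping disables proxies even when proxy
# environment variables are set in the shell.
no_proxy_opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))
request = urllib.request.Request(
    "https://api.github.com/repos/GeWuYou/GFramework/issues",
    headers={"Accept": "application/vnd.github+json", "User-Agent": "codex-gframework-issue-review"},
)
# no_proxy_opener.open(request, timeout=60) would perform the fetch without any proxy.
print(type(no_proxy_opener).__name__)
```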
## Example Triggers

- `Use $gframework-issue-review on the current repository issue`
- `Check the open GitHub issue and summarize what should be verified locally`
- `Inspect issue 312 and tell me whether this looks like bug triage or a feature request`
- `First inspect the current open issue with $gframework-issue-review, then continue with $gframework-boot`

@@ -0,0 +1,4 @@
interface:
  display_name: "GFramework Issue Review"
  short_description: "Inspect the current repository issue and triage next steps"
  default_prompt: "Use $gframework-issue-review to inspect the current repository issue through the GitHub API, summarize the issue body, discussion, and key timeline signals, highlight what must be verified locally, and then hand follow-up execution to $gframework-boot."

@@ -0,0 +1,801 @@
#!/usr/bin/env python3
"""
Fetch the current GFramework GitHub issue and extract the signals needed for
local follow-up work without relying on gh CLI.
"""

from __future__ import annotations

import argparse
import json
import os
import re
import shutil
import subprocess
import sys
import urllib.request
from pathlib import Path
from typing import Any

OWNER = "GeWuYou"
REPO = "GFramework"
WORKTREE_ROOT_DIRECTORY_NAME = "GFramework-WorkTree"
DEFAULT_WINDOWS_GIT = "/mnt/d/Tool/Development Tools/Git/cmd/git.exe"
GIT_ENVIRONMENT_KEY = "GFRAMEWORK_WINDOWS_GIT"
GIT_DIR_ENVIRONMENT_KEY = "GFRAMEWORK_GIT_DIR"
WORK_TREE_ENVIRONMENT_KEY = "GFRAMEWORK_WORK_TREE"
REQUEST_TIMEOUT_ENVIRONMENT_KEY = "GFRAMEWORK_ISSUE_REVIEW_TIMEOUT_SECONDS"
DEFAULT_REQUEST_TIMEOUT_SECONDS = 60
USER_AGENT = "codex-gframework-issue-review"
DISPLAY_SECTION_CHOICES = (
    "issue",
    "summary",
    "comments",
    "events",
    "references",
    "warnings",
)
ISSUE_TYPE_CANDIDATES = ("bug", "feature", "docs", "question", "maintenance")
ACTIVE_TOPIC_KEYWORDS: dict[str, tuple[str, ...]] = {
    "ai-first-config-system": ("config", "configuration", "gameconfig", "settings"),
    "coroutine-optimization": ("coroutine", "yield", "await", "scheduler"),
    "cqrs-rewrite": ("cqrs", "command", "query", "eventbus", "event bus"),
    "data-repository-persistence": ("repository", "serialization", "persistence", "data", "settings"),
    "runtime-generator-boundary": ("source generator", "generator", "attribute", "packaging"),
    "semantic-release-versioning": ("release", "version", "semantic-release", "tag", "publish"),
    "documentation-full-coverage-governance": ("docs", "documentation", "readme", "vitepress", "api reference"),
}
ACTUAL_BEHAVIOR_PATTERNS = (
    "actual",
    "currently",
    "instead",
    "but",
    "error",
    "exception",
    "fails",
    "failed",
    "wrong",
)
EXPECTED_BEHAVIOR_PATTERNS = (
    "expected",
    "should",
    "want",
    "would like",
    "needs to",
)
REPRODUCTION_PATTERNS = (
    "steps to reproduce",
    "reproduce",
    "reproduction",
    "how to reproduce",
    "minimal example",
    "sample",
    "demo",
)
ENVIRONMENT_PATTERNS = (
    "windows",
    "linux",
    "macos",
    "wsl",
    "godot",
    ".net",
    "sdk",
    "version",
    "environment",
)
ACCEPTANCE_PATTERNS = (
    "acceptance",
    "done when",
    "definition of done",
    "verified by",
    "test plan",
)
FILE_PATH_PATTERN = re.compile(r"\b(?:[A-Za-z0-9_.-]+/)+[A-Za-z0-9_.-]+\b")
ISSUE_REFERENCE_PATTERN = re.compile(r"(?:^|\s)#(\d+)\b")
COMMIT_REFERENCE_PATTERN = re.compile(r"\b[0-9a-f]{7,40}\b")
LINE_BREAK_NORMALIZER = re.compile(r"\n{3,}")


def resolve_git_command() -> str:
    """Resolve the git executable to use for this repository."""
    candidates = [
        os.environ.get(GIT_ENVIRONMENT_KEY),
        DEFAULT_WINDOWS_GIT,
        "git.exe",
        "git",
    ]

    for candidate in candidates:
        if not candidate:
            continue

        if os.path.isabs(candidate):
            if os.path.exists(candidate):
                return candidate
            continue

        resolved_candidate = shutil.which(candidate)
        if resolved_candidate:
            return resolved_candidate

    raise RuntimeError(f"No usable git executable found. Set {GIT_ENVIRONMENT_KEY} to override it.")


def find_repository_root(start_path: Path) -> Path | None:
    """Locate the repository root by walking parent directories for repo markers."""
    for candidate in (start_path, *start_path.parents):
        if (candidate / "AGENTS.md").exists() and (candidate / ".ai/environment/tools.ai.yaml").exists():
            return candidate

    return None


def resolve_worktree_git_dir(repository_root: Path) -> Path | None:
    """Resolve the main-repository worktree gitdir for this WSL worktree layout."""
    if repository_root.parent.name != WORKTREE_ROOT_DIRECTORY_NAME:
        return None

    primary_repository_root = repository_root.parent.parent / REPO
    candidate_git_dir = primary_repository_root / ".git" / "worktrees" / repository_root.name
    return candidate_git_dir if candidate_git_dir.exists() else None


def resolve_git_invocation() -> list[str]:
    """Resolve the git command arguments, preferring explicit WSL worktree binding."""
    configured_git_dir = os.environ.get(GIT_DIR_ENVIRONMENT_KEY)
    configured_work_tree = os.environ.get(WORK_TREE_ENVIRONMENT_KEY)
    linux_git = shutil.which("git")

    if configured_git_dir and configured_work_tree and linux_git:
        return [linux_git, f"--git-dir={configured_git_dir}", f"--work-tree={configured_work_tree}"]

    repository_root = find_repository_root(Path.cwd())
    if repository_root is not None and linux_git:
        worktree_git_dir = resolve_worktree_git_dir(repository_root)
        if worktree_git_dir is not None:
            return [linux_git, f"--git-dir={worktree_git_dir}", f"--work-tree={repository_root}"]

        root_git_dir = repository_root / ".git"
        if root_git_dir.exists():
            return [linux_git, f"--git-dir={root_git_dir}", f"--work-tree={repository_root}"]

    return [resolve_git_command()]


def resolve_request_timeout_seconds() -> int:
    """Return the GitHub request timeout in seconds."""
    configured_timeout = os.environ.get(REQUEST_TIMEOUT_ENVIRONMENT_KEY)
    if not configured_timeout:
        return DEFAULT_REQUEST_TIMEOUT_SECONDS

    try:
        parsed_timeout = int(configured_timeout)
    except ValueError as error:
        raise RuntimeError(
            f"{REQUEST_TIMEOUT_ENVIRONMENT_KEY} must be an integer number of seconds."
        ) from error

    if parsed_timeout <= 0:
        raise RuntimeError(f"{REQUEST_TIMEOUT_ENVIRONMENT_KEY} must be greater than zero.")

    return parsed_timeout


def run_command(args: list[str]) -> str:
    """Run a command and return stdout, raising on failure."""
    process = subprocess.run(args, capture_output=True, text=True, check=False)
    if process.returncode != 0:
        stderr = process.stderr.strip()
        raise RuntimeError(f"Command failed: {' '.join(args)}\n{stderr}")
    return process.stdout.strip()


def get_current_branch() -> str:
    """Return the current git branch name."""
    return run_command([*resolve_git_invocation(), "rev-parse", "--abbrev-ref", "HEAD"])


def open_url(url: str, accept: str) -> tuple[str, Any]:
    """Open a URL with proxy variables disabled and return decoded text plus headers."""
    opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))
    request = urllib.request.Request(url, headers={"Accept": accept, "User-Agent": USER_AGENT})
    with opener.open(request, timeout=resolve_request_timeout_seconds()) as response:
        return response.read().decode("utf-8", "replace"), response.headers


def fetch_json(url: str, accept: str = "application/vnd.github+json") -> tuple[Any, Any]:
    """Fetch a JSON payload and its response headers from GitHub."""
    text, headers = open_url(url, accept=accept)
    return json.loads(text), headers


def extract_next_link(headers: Any) -> str | None:
    """Extract the next-page link from GitHub pagination headers."""
    link_header = headers.get("Link")
    if not link_header:
        return None

    match = re.search(r'<([^>]+)>;\s*rel="next"', link_header)
    return match.group(1) if match else None


def fetch_paged_json(url: str, accept: str = "application/vnd.github+json") -> list[dict[str, Any]]:
    """Fetch every page from a paginated GitHub API endpoint."""
    items: list[dict[str, Any]] = []
    next_url: str | None = url
    while next_url:
        payload, headers = fetch_json(next_url, accept=accept)
        if not isinstance(payload, list):
            raise RuntimeError(f"Expected list payload from GitHub API, got {type(payload).__name__}.")

        items.extend(payload)
        next_url = extract_next_link(headers)

    return items


def collapse_whitespace(text: str) -> str:
    """Collapse repeated whitespace into single spaces while preserving paragraph intent."""
    normalized = text.replace("\r\n", "\n").replace("\r", "\n")
    normalized = LINE_BREAK_NORMALIZER.sub("\n\n", normalized)
    normalized = re.sub(r"[ \t]+", " ", normalized)
    normalized = re.sub(r" *\n *", "\n", normalized)
    return normalized.strip()


def truncate_text(text: str, max_length: int) -> str:
    """Collapse whitespace and truncate long text for CLI display."""
    collapsed = collapse_whitespace(text)
    if max_length <= 0 or len(collapsed) <= max_length:
        return collapsed

    return collapsed[: max_length - 3].rstrip() + "..."


def filter_open_issue_candidates(items: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Filter GitHub issue list responses down to non-PR issue items."""
    return [item for item in items if not item.get("pull_request")]


def select_single_open_issue_number(items: list[dict[str, Any]]) -> int:
    """Resolve the target issue number when the repository has exactly one open issue."""
    issues = filter_open_issue_candidates(items)
    if not issues:
        raise RuntimeError("No open GitHub issues found for this repository. Pass --issue <number> to inspect one.")

    if len(issues) > 1:
        numbers = ", ".join(str(item.get("number")) for item in issues[:5])
        suffix = "" if len(issues) <= 5 else ", ..."
        raise RuntimeError(
            "Multiple open GitHub issues found for this repository "
            f"({len(issues)} total: {numbers}{suffix}). Pass --issue <number> to inspect one."
        )

    return int(issues[0]["number"])


def resolve_issue_number(issue_number: int | None) -> tuple[int, str]:
    """Resolve the issue number, auto-selecting only when exactly one open issue exists."""
    if issue_number is not None:
        return issue_number, "explicit"

    open_items = fetch_paged_json(f"https://api.github.com/repos/{OWNER}/{REPO}/issues?state=open&per_page=100")
    return select_single_open_issue_number(open_items), "auto-single-open-issue"


def fetch_issue_metadata(issue_number: int) -> dict[str, Any]:
    """Fetch normalized metadata for a GitHub issue."""
    payload, _ = fetch_json(f"https://api.github.com/repos/{OWNER}/{REPO}/issues/{issue_number}")
    if not isinstance(payload, dict):
        raise RuntimeError("Failed to fetch GitHub issue metadata.")

    if payload.get("pull_request"):
        raise RuntimeError(f"Item #{issue_number} is a pull request, not a plain issue.")

    labels = []
    for label in payload.get("labels", []):
        if isinstance(label, dict) and label.get("name"):
            labels.append(str(label["name"]))

    assignees = []
    for assignee in payload.get("assignees", []):
        login = assignee.get("login")
        if login:
            assignees.append(str(login))

    milestone_title = None
    milestone = payload.get("milestone")
    if isinstance(milestone, dict) and milestone.get("title"):
        milestone_title = str(milestone["title"])

    return {
        "number": int(payload["number"]),
        "title": str(payload["title"]),
        "state": str(payload["state"]).upper(),
        "url": str(payload["html_url"]),
        # "user" can be null for deleted accounts, so guard before .get("login").
        "author": str((payload.get("user") or {}).get("login") or ""),
        "created_at": str(payload.get("created_at") or ""),
        "updated_at": str(payload.get("updated_at") or ""),
        "labels": labels,
        "assignees": assignees,
        "milestone": milestone_title,
        "body": str(payload.get("body") or ""),
    }


def fetch_issue_comments(issue_number: int) -> list[dict[str, Any]]:
    """Fetch issue comments for the selected issue."""
    return fetch_paged_json(f"https://api.github.com/repos/{OWNER}/{REPO}/issues/{issue_number}/comments?per_page=100")


def fetch_issue_timeline(issue_number: int) -> list[dict[str, Any]]:
    """Fetch issue timeline events when GitHub exposes them to the current client."""
    return fetch_paged_json(
        f"https://api.github.com/repos/{OWNER}/{REPO}/issues/{issue_number}/timeline?per_page=100",
        accept="application/vnd.github+json",
    )


def normalize_comment(comment: dict[str, Any]) -> dict[str, Any]:
    """Normalize an issue comment for structured output."""
    return {
        "id": int(comment.get("id") or 0),
        # "user" can be null for deleted accounts, so guard before .get("login").
        "author": str((comment.get("user") or {}).get("login") or ""),
        "created_at": str(comment.get("created_at") or ""),
        "updated_at": str(comment.get("updated_at") or ""),
        "body": str(comment.get("body") or ""),
    }


def normalize_timeline_event(event: dict[str, Any]) -> dict[str, Any]:
    """Normalize the GitHub timeline event fields used by triage output."""
    # "actor" can be null for some event types, so guard before .get("login").
    actor = str((event.get("actor") or {}).get("login") or "")
    created_at = str(event.get("created_at") or event.get("submitted_at") or "")
    event_type = str(event.get("event") or event.get("__typename") or "unknown")
    label_name = ""
    assignee = ""
    source_issue_number: int | None = None
    source_issue_url = ""
    commit_id = ""

    label = event.get("label")
    if isinstance(label, dict) and label.get("name"):
        label_name = str(label["name"])

    assignee_payload = event.get("assignee")
    if isinstance(assignee_payload, dict) and assignee_payload.get("login"):
        assignee = str(assignee_payload["login"])

    source = event.get("source")
    if isinstance(source, dict):
        issue_payload = source.get("issue")
        if isinstance(issue_payload, dict):
            if issue_payload.get("number"):
                source_issue_number = int(issue_payload["number"])
            if issue_payload.get("html_url"):
                source_issue_url = str(issue_payload["html_url"])

    commit_id_value = event.get("commit_id")
    if isinstance(commit_id_value, str):
        commit_id = commit_id_value

    return {
        "event": event_type,
        "actor": actor,
        "created_at": created_at,
        "label": label_name,
        "assignee": assignee,
        "commit_id": commit_id,
        "source_issue_number": source_issue_number,
        "source_issue_url": source_issue_url,
    }


def gather_text_blocks(issue: dict[str, Any], comments: list[dict[str, Any]]) -> list[str]:
    """Return the issue body plus discussion comment bodies for heuristic parsing."""
    blocks = [issue.get("body", "")]
    blocks.extend(comment.get("body", "") for comment in comments)
    return [block for block in blocks if block]


def has_any_pattern(text_blocks: list[str], patterns: tuple[str, ...]) -> bool:
    """Return whether any normalized text block contains any requested pattern."""
    lowered_blocks = [collapse_whitespace(block).lower() for block in text_blocks]
    return any(pattern in block for block in lowered_blocks for pattern in patterns)


def choose_issue_type_candidates(issue: dict[str, Any], text_blocks: list[str]) -> list[str]:
    """Infer lightweight issue-type candidates from labels and discussion text."""
    labels = [label.lower() for label in issue.get("labels", [])]
    text = "\n".join(text_blocks).lower()
    candidates: list[str] = []

    if any(label in {"bug", "regression"} for label in labels) or "bug" in text or "error" in text or "fails" in text:
        candidates.append("bug")
    if any(label in {"feature", "enhancement"} for label in labels) or "feature" in text or "support" in text:
        candidates.append("feature")
    if any(label in {"documentation", "docs"} for label in labels) or "documentation" in text or "readme" in text:
        candidates.append("docs")
    if any(label in {"question", "help wanted"} for label in labels) or "?" in issue.get("title", ""):
        candidates.append("question")
    if any(label in {"chore", "maintenance", "refactor"} for label in labels) or "cleanup" in text or "refactor" in text:
        candidates.append("maintenance")

    if not candidates:
        candidates.append("question" if issue.get("body", "").strip().endswith("?") else "bug")

    ordered_candidates: list[str] = []
    for candidate in ISSUE_TYPE_CANDIDATES:
        if candidate in candidates:
            ordered_candidates.append(candidate)

    return ordered_candidates


def extract_references_from_text(text: str) -> dict[str, list[str]]:
    """Extract issue, commit, and file-path references from one text block."""
    issue_numbers = sorted({match.group(1) for match in ISSUE_REFERENCE_PATTERN.finditer(text)}, key=int)
    commit_shas = sorted({match.group(0) for match in COMMIT_REFERENCE_PATTERN.finditer(text)})
    file_paths = sorted({match.group(0) for match in FILE_PATH_PATTERN.finditer(text)})

    return {
        "issues": [f"#{number}" for number in issue_numbers],
        "commit_shas": commit_shas,
        "file_paths": file_paths,
    }


def merge_reference_values(values: list[dict[str, list[str]]]) -> dict[str, list[str]]:
    """Merge extracted reference lists while preserving sorted unique output."""
    merged: dict[str, set[str]] = {"issues": set(), "commit_shas": set(), "file_paths": set()}
    for value in values:
        for key in merged:
            merged[key].update(value.get(key, []))

    return {
        "issues": sorted(merged["issues"], key=lambda item: int(item[1:])),
        "commit_shas": sorted(merged["commit_shas"]),
        "file_paths": sorted(merged["file_paths"]),
    }


def build_references(issue: dict[str, Any], comments: list[dict[str, Any]], events: list[dict[str, Any]]) -> dict[str, Any]:
    """Build structured references from issue text and timeline context."""
    extracted = [extract_references_from_text(issue.get("body", ""))]
    extracted.extend(extract_references_from_text(comment.get("body", "")) for comment in comments)
    merged = merge_reference_values(extracted)
    referenced_by_timeline = sorted(
        {
            f"#{event['source_issue_number']}"
            for event in events
            if event.get("source_issue_number") is not None
        },
        key=lambda item: int(item[1:]),
    )

    pull_request_references = sorted(
        {
            issue_reference
            for issue_reference in merged["issues"]
            if issue_reference != f"#{issue['number']}"
        },
        key=lambda item: int(item[1:]),
    )

    return {
        "issues": merged["issues"],
        "pull_requests_or_issues": pull_request_references,
        "commit_shas": merged["commit_shas"],
        "file_paths": merged["file_paths"],
        "timeline_cross_references": referenced_by_timeline,
    }


def build_information_flags(issue: dict[str, Any], comments: list[dict[str, Any]]) -> dict[str, bool]:
    """Derive missing-information and readiness flags from the issue discussion."""
    text_blocks = gather_text_blocks(issue, comments)
    has_reproduction_steps = has_any_pattern(text_blocks, REPRODUCTION_PATTERNS)
    has_expected_behavior = has_any_pattern(text_blocks, EXPECTED_BEHAVIOR_PATTERNS)
    has_actual_behavior = has_any_pattern(text_blocks, ACTUAL_BEHAVIOR_PATTERNS)
    has_environment_details = has_any_pattern(text_blocks, ENVIRONMENT_PATTERNS)
    has_acceptance_signals = has_any_pattern(text_blocks, ACCEPTANCE_PATTERNS)
    needs_clarification = not (
        (has_actual_behavior and (has_reproduction_steps or has_environment_details))
        or has_acceptance_signals
    )

    return {
        "has_reproduction_steps": has_reproduction_steps,
        "has_expected_behavior": has_expected_behavior,
        "has_actual_behavior": has_actual_behavior,
        "has_environment_details": has_environment_details,
        "has_acceptance_signals": has_acceptance_signals,
        "needs_clarification": needs_clarification,
    }


def choose_affected_topics(issue: dict[str, Any], comments: list[dict[str, Any]]) -> list[str]:
    """Map the issue discussion to likely active topics when obvious keyword matches exist."""
    text = "\n".join(gather_text_blocks(issue, comments)).lower()
    matches: list[str] = []
    for topic, keywords in ACTIVE_TOPIC_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            matches.append(topic)

    return matches


def choose_next_action(
    information_flags: dict[str, bool],
    issue_type_candidates: list[str],
    affected_topics: list[str],
) -> str:
    """Choose the next handling mode for boot handoff."""
    if information_flags["needs_clarification"]:
        return "clarify-issue-before-code"
    if affected_topics:
        return "resume-existing-topic-with-boot"
    if issue_type_candidates and issue_type_candidates[0] == "docs":
        return "start-new-docs-topic-with-boot"
    return "start-new-topic-with-boot"


def build_triage_hints(issue: dict[str, Any], comments: list[dict[str, Any]]) -> dict[str, Any]:
    """Build lightweight, reviewable triage hints for boot follow-up."""
    text_blocks = gather_text_blocks(issue, comments)
    issue_type_candidates = choose_issue_type_candidates(issue, text_blocks)
    information_flags = build_information_flags(issue, comments)
    affected_topics = choose_affected_topics(issue, comments)
    next_action = choose_next_action(information_flags, issue_type_candidates, affected_topics)

    return {
        "issue_type_candidates": issue_type_candidates,
        "information_flags": information_flags,
        "affected_active_topics": affected_topics,
        "next_action": next_action,
        "boot_handoff": {
            "recommended_skill": "gframework-boot",
            "mode": "resume" if affected_topics else "new",
            "notes": (
                "Use gframework-boot to verify the issue against local code and active ai-plan topics."
                if not information_flags["needs_clarification"]
                else "Use gframework-boot to record a clarification-first task before changing code."
            ),
        },
    }


def build_result(issue_number: int, branch: str, resolution_mode: str) -> dict[str, Any]:
    """Build the full issue review payload for the selected issue."""
    parse_warnings: list[str] = []
    issue = fetch_issue_metadata(issue_number)
    raw_comments = fetch_issue_comments(issue_number)
    comments = [normalize_comment(comment) for comment in raw_comments]

    events: list[dict[str, Any]] = []
    try:
        raw_events = fetch_issue_timeline(issue_number)
        events = [normalize_timeline_event(event) for event in raw_events]
    except Exception as error:  # noqa: BLE001
        parse_warnings.append(f"Issue timeline could not be fetched or parsed: {error}")

    references = build_references(issue, comments, events)
    triage_hints = build_triage_hints(issue, comments)

    return {
        "issue": {
            **issue,
            "resolved_from_branch": branch,
            "resolution_mode": resolution_mode,
        },
        "discussion": {
            "comment_count": len(comments),
            "comments": comments,
        },
        "events": {
            "count": len(events),
            "items": events,
        },
        "references": references,
        "triage_hints": triage_hints,
        "parse_warnings": parse_warnings,
    }


def write_json_output(result: dict[str, Any], output_path: str) -> str:
    """Write the full JSON result to disk and return the destination path."""
    destination_path = Path(output_path).expanduser()
    destination_path.parent.mkdir(parents=True, exist_ok=True)
    destination_path.write_text(json.dumps(result, ensure_ascii=False, indent=2), encoding="utf-8")
    return str(destination_path)


def summarize_events(events: list[dict[str, Any]]) -> list[str]:
    """Convert normalized events into concise text lines."""
    lines: list[str] = []
    for event in events:
        summary = f"- {event['event']}"
        details: list[str] = []
        if event.get("actor"):
            details.append(f"actor={event['actor']}")
        if event.get("label"):
            details.append(f"label={event['label']}")
        if event.get("assignee"):
            details.append(f"assignee={event['assignee']}")
        if event.get("source_issue_number") is not None:
            details.append(f"source_issue=#{event['source_issue_number']}")
        if event.get("commit_id"):
            details.append(f"commit={event['commit_id'][:12]}")
        if event.get("created_at"):
            details.append(f"at={event['created_at']}")
        if details:
            summary += " (" + ", ".join(details) + ")"
        lines.append(summary)
    return lines

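The formatting rule above (event name followed by comma-joined `key=value` details in parentheses) can be exercised standalone. The event dict below is hypothetical but mirrors the normalized shape the function expects:

```python
def summarize_one(event: dict) -> str:
    """Render one normalized timeline event as a '- event (k=v, ...)' line.

    Sketch of the same rule as summarize_events, reduced to two detail keys.
    """
    details = []
    if event.get("actor"):
        details.append(f"actor={event['actor']}")
    if event.get("label"):
        details.append(f"label={event['label']}")
    summary = f"- {event['event']}"
    if details:
        summary += " (" + ", ".join(details) + ")"
    return summary

line = summarize_one({"event": "labeled", "actor": "octocat", "label": "bug"})
print(line)  # → - labeled (actor=octocat, label=bug)
```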
def format_text(
    result: dict[str, Any],
    *,
    sections: list[str] | None = None,
    max_description_length: int = 400,
    json_output_path: str | None = None,
) -> str:
    """Format the result payload into concise text output."""
    lines: list[str] = []
    selected_sections = set(sections or DISPLAY_SECTION_CHOICES)
    issue = result["issue"]
    triage_hints = result["triage_hints"]
    discussion = result["discussion"]
    events = result["events"]
    references = result["references"]

    if "issue" in selected_sections:
        lines.append(f"Issue #{issue['number']}: {issue['title']}")
        lines.append(f"State: {issue['state']}")
        lines.append(f"Author: {issue['author']}")
        lines.append(f"Labels: {', '.join(issue['labels']) if issue['labels'] else '(none)'}")
        lines.append(f"Assignees: {', '.join(issue['assignees']) if issue['assignees'] else '(none)'}")
        lines.append(f"Milestone: {issue['milestone'] or '(none)'}")
        lines.append(f"Created: {issue['created_at']}")
        lines.append(f"Updated: {issue['updated_at']}")
        lines.append(f"Resolved from branch: {issue['resolved_from_branch'] or '(not branch-based)'}")
        lines.append(f"Resolution mode: {issue['resolution_mode']}")
        lines.append(f"URL: {issue['url']}")
        if issue["body"]:
            lines.append("Body:")
            lines.append(truncate_text(issue["body"], max_description_length))

    if "summary" in selected_sections:
        lines.append("")
        lines.append("Triage summary:")
        lines.append("- Issue type candidates: " + ", ".join(triage_hints["issue_type_candidates"]))
        information_flags = triage_hints["information_flags"]
        lines.append(
            "- Information flags: "
            + ", ".join(
                [
                    f"repro={'yes' if information_flags['has_reproduction_steps'] else 'no'}",
                    f"expected={'yes' if information_flags['has_expected_behavior'] else 'no'}",
                    f"actual={'yes' if information_flags['has_actual_behavior'] else 'no'}",
                    f"environment={'yes' if information_flags['has_environment_details'] else 'no'}",
                    f"acceptance={'yes' if information_flags['has_acceptance_signals'] else 'no'}",
                    f"needs_clarification={'yes' if information_flags['needs_clarification'] else 'no'}",
                ]
            )
        )
        lines.append(
            "- Affected active topics: "
            + (", ".join(triage_hints["affected_active_topics"]) if triage_hints["affected_active_topics"] else "(none)")
        )
        lines.append(f"- Next action: {triage_hints['next_action']}")
        lines.append(f"- Boot handoff: {triage_hints['boot_handoff']['notes']}")

    if "comments" in selected_sections:
        lines.append("")
        lines.append(f"Discussion comments: {discussion['comment_count']}")
        for comment in discussion["comments"]:
            lines.append(f"- {comment['author']} at {comment['created_at']}")
            lines.append(f"  {truncate_text(comment['body'], max_description_length)}")

    if "events" in selected_sections:
        lines.append("")
        lines.append(f"Timeline events: {events['count']}")
        lines.extend(summarize_events(events["items"]))

    if "references" in selected_sections:
        lines.append("")
        lines.append("References:")
        lines.append("- Mentioned issues: " + (", ".join(references["issues"]) if references["issues"] else "(none)"))
        lines.append(
            "- Cross references: "
            + (
                ", ".join(references["timeline_cross_references"])
                if references["timeline_cross_references"]
                else "(none)"
            )
        )
        lines.append(
            "- Related issue/PR mentions: "
            + (
                ", ".join(references["pull_requests_or_issues"])
                if references["pull_requests_or_issues"]
                else "(none)"
            )
        )
        lines.append("- Commit SHAs: " + (", ".join(references["commit_shas"]) if references["commit_shas"] else "(none)"))
        lines.append("- File paths: " + (", ".join(references["file_paths"]) if references["file_paths"] else "(none)"))

    if result["parse_warnings"] and "warnings" in selected_sections:
        lines.append("")
        lines.append("Warnings:")
        for warning in result["parse_warnings"]:
            lines.append(f"- {truncate_text(warning, max_description_length)}")

    if json_output_path:
        lines.append("")
        lines.append(f"Full JSON written to: {json_output_path}")

    return "\n".join(lines)

def parse_args() -> argparse.Namespace:
    """Parse CLI arguments."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--branch", help="Override the current branch name.")
    parser.add_argument("--issue", type=int, help="Fetch a specific issue number instead of auto-selecting one.")
    parser.add_argument("--format", choices=("text", "json"), default="text")
    parser.add_argument(
        "--json-output",
        help="Write the full JSON result to a file. When used with --format text, stdout stays concise and points to the file.",
    )
    parser.add_argument(
        "--section",
        action="append",
        choices=DISPLAY_SECTION_CHOICES,
        help="Limit text output to specific sections. Can be passed multiple times.",
    )
    parser.add_argument(
        "--max-description-length",
        type=int,
        default=400,
        help="Truncate long text bodies in text output to this many characters.",
    )
    return parser.parse_args()

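A standalone sketch of the CLI contract defined above; the flag names match the script, but this parser is rebuilt here for illustration rather than imported, and the section choices are assumed from the sections `format_text` handles:

```python
import argparse

# Assumed section names; the real script reads them from DISPLAY_SECTION_CHOICES.
SECTIONS = ("issue", "summary", "comments", "events", "references", "warnings")

parser = argparse.ArgumentParser()
parser.add_argument("--issue", type=int)
parser.add_argument("--format", choices=("text", "json"), default="text")
# action="append" lets --section be passed multiple times, accumulating a list.
parser.add_argument("--section", action="append", choices=SECTIONS)
parser.add_argument("--max-description-length", type=int, default=400)

args = parser.parse_args(["--issue", "327", "--section", "summary", "--section", "warnings"])
print(args.issue, args.format, args.section)  # → 327 text ['summary', 'warnings']
```

Note that with `action="append"` an unspecified `--section` yields `None`, which is why `format_text` falls back to all sections via `sections or DISPLAY_SECTION_CHOICES`.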
def main() -> None:
    """Run the CLI entry point."""
    args = parse_args()
    branch = args.branch or get_current_branch()
    issue_number, resolution_mode = resolve_issue_number(args.issue)
    result = build_result(issue_number, branch, resolution_mode)

    json_output_path: str | None = None
    if args.json_output:
        json_output_path = write_json_output(result, args.json_output)

    if args.format == "json":
        if json_output_path:
            print(json_output_path)
            return
        print(json.dumps(result, ensure_ascii=False, indent=2))
        return

    print(
        format_text(
            result,
            sections=args.section,
            max_description_length=args.max_description_length,
            json_output_path=json_output_path,
        )
    )

if __name__ == "__main__":
    try:
        main()
    except Exception as error:  # noqa: BLE001
        print(str(error), file=sys.stderr)
        sys.exit(1)
@ -0,0 +1,55 @@
#!/usr/bin/env python3
"""Regression tests for the GFramework issue review fetch helper."""

from __future__ import annotations

import importlib.util
import unittest
from pathlib import Path

SCRIPT_PATH = Path(__file__).with_name("fetch_current_issue_review.py")
MODULE_SPEC = importlib.util.spec_from_file_location("fetch_current_issue_review", SCRIPT_PATH)
if MODULE_SPEC is None or MODULE_SPEC.loader is None:
    raise RuntimeError(f"Unable to load module from {SCRIPT_PATH}.")

MODULE = importlib.util.module_from_spec(MODULE_SPEC)
MODULE_SPEC.loader.exec_module(MODULE)


class SelectSingleOpenIssueNumberTests(unittest.TestCase):
    """Cover auto-resolution rules for open GitHub issues."""

    def test_select_single_open_issue_number_filters_pull_requests(self) -> None:
        """Pull requests in the issues API must not block the single-open-issue path."""
        selected = MODULE.select_single_open_issue_number(
            [
                {"number": 10, "pull_request": {"url": "https://example.test/pr/10"}},
                {"number": 11},
            ]
        )

        self.assertEqual(selected, 11)

    def test_select_single_open_issue_number_rejects_multiple_plain_issues(self) -> None:
        """Auto-resolution must stop when more than one plain issue is open."""
        with self.assertRaisesRegex(RuntimeError, "Multiple open GitHub issues found"):
            MODULE.select_single_open_issue_number([{"number": 11}, {"number": 12}])


class ExtractReferencesFromTextTests(unittest.TestCase):
    """Cover lightweight reference extraction used by the text and JSON output."""

    def test_extract_references_from_text_finds_issue_commit_and_path_mentions(self) -> None:
        """The helper should retain the high-signal references needed for follow-up triage."""
        references = MODULE.extract_references_from_text(
            "See #123, commit abcdef1234567890, and GFramework.Core/Systems/Runner.cs for the failing path."
        )

        self.assertEqual(references["issues"], ["#123"])
        self.assertEqual(references["commit_shas"], ["abcdef1234567890"])
        self.assertEqual(references["file_paths"], ["GFramework.Core/Systems/Runner.cs"])


if __name__ == "__main__":
    unittest.main()
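The rule the first test class pins down, that entries from the GitHub issues API carrying a `pull_request` key are PRs and must be skipped, can be sketched in isolation (this is an illustrative re-implementation, not the module's actual function):

```python
def select_single_open_issue(entries: list[dict]) -> int:
    """Return the number of the only plain open issue, skipping pull requests.

    Sketch of the auto-resolution policy: GitHub's issues endpoint also lists
    PRs, marked by a "pull_request" key, which must not count as issues.
    """
    plain = [entry for entry in entries if "pull_request" not in entry]
    if len(plain) > 1:
        raise RuntimeError("Multiple open GitHub issues found; pass --issue <number>.")
    if not plain:
        raise RuntimeError("No open GitHub issues found; pass --issue <number>.")
    return plain[0]["number"]

picked = select_single_open_issue(
    [
        {"number": 10, "pull_request": {"url": "https://example.test/pr/10"}},
        {"number": 11},
    ]
)
print(picked)  # → 11
```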
@ -46,6 +46,11 @@ help the current worktree land on the right recovery documents without scanning
  - Purpose: keep runtime and abstractions packages isolated from source-generator dependencies, packaging leaks, and attribute usage.
  - Tracking: `ai-plan/public/runtime-generator-boundary/todos/runtime-generator-boundary-tracking.md`
  - Trace: `ai-plan/public/runtime-generator-boundary/traces/runtime-generator-boundary-trace.md`
- `github-issue-review-skill`
  - Purpose: add a GitHub issue triage skill that fetches the current repository issue, summarizes actionable context,
    and hands follow-up execution to `gframework-boot`.
  - Tracking: `ai-plan/public/github-issue-review-skill/todos/github-issue-review-skill-tracking.md`
  - Trace: `ai-plan/public/github-issue-review-skill/traces/github-issue-review-skill-trace.md`

## Worktree To Active Topic Map

@ -75,6 +80,9 @@ help the current worktree land on the right recovery documents without scanning
- Branch: `fix/runtime-generator-boundary`
  - Worktree hint: `GFramework`
  - Priority 1: `runtime-generator-boundary`
- Branch: `feat/github-issue-review-skill`
  - Worktree hint: `GFramework`
  - Priority 1: `github-issue-review-skill`
- Branch: `docs/sdk-update-documentation`
  - Worktree hint: `GFramework-update-documentation`
  - Priority 1: `documentation-full-coverage-governance`

@ -0,0 +1,69 @@
# GitHub Issue Review Skill Tracking

## Goal

Add a `$gframework-issue-review` skill alongside the existing `$gframework-pr-review` so the AI can quickly extract
the body, discussion, and key events from a GitHub issue, produce a structured triage result, and explicitly hand
follow-up code work to `$gframework-boot`.

- Keep the same directory layout and CLI experience as the existing PR review skill
- Support the resolution policy "auto-select when exactly one issue is currently open; otherwise require an explicit number"
- Emit structured JSON plus a high-signal text summary suitable for follow-up AI verification
- Provide minimal regression tests covering auto-selection and resolution edge cases
- Run one fetch validation against a real repository issue to confirm the default path works

## Current Recovery Point

- Recovery point ID: `ISSUE-SKILL-RP-001`
- Current phase: `Phase 2`
- Current focus:
  - Keep `$gframework-issue-review` directly reusable for subsequent issue triage
  - Continue the clarification-first handling path for issue `#327` through `$gframework-boot`
  - Require an explicit `--issue` once the open-issue count changes from `1` to `0` or `>1`

### Known Risks

- The GitHub timeline API may leave some events unstructured due to missing responses or field differences
  - Mitigation: treat timeline parsing as best-effort and record failures in `parse_warnings`
- If the repository's open-issue count changes to `0` or `>1` during validation, the default auto-resolution path fails
  - Mitigation: the script reports a clear error requiring `--issue <number>`, and validation also keeps the explicit-issue path
- Module attribution and handling suggestions derived from issue text are only heuristic and cannot replace local code verification
  - Mitigation: the skill docs explicitly require verifying against local sources afterwards via `$gframework-boot`

## Done

- Established the active topic:
  - `ai-plan/public/github-issue-review-skill/todos/`
  - `ai-plan/public/github-issue-review-skill/traces/`
- Mapped branch `feat/github-issue-review-skill` to this topic so later `boot` runs recover it first
- Added `.agents/skills/gframework-issue-review/`:
  - `SKILL.md`
  - `agents/openai.yaml`
  - `scripts/fetch_current_issue_review.py`
  - `scripts/test_fetch_current_issue_review.py`
- Implemented a GitHub API fetch skeleton mirroring `gframework-pr-review`:
  - Outputs issue metadata, comments, timeline, references, and triage hints
  - Supports `--issue`, `--format`, `--json-output`, `--section`, `--max-description-length`
  - Auto-resolves only when the repository currently has exactly one open issue; otherwise requires an explicit number
- Fixed a compatibility issue where the new script wrongly fell back to `git.exe` in the current WSL session:
  - At the main repository root, when a Linux `git` exists, it now also prefers binding explicit `--git-dir` / `--work-tree`

## Verification

- `python3 .agents/skills/gframework-issue-review/scripts/test_fetch_current_issue_review.py`
  - Result: passed
  - Notes: all `3` script-level tests passed
- `python3 .agents/skills/gframework-issue-review/scripts/fetch_current_issue_review.py --section summary --section warnings`
  - Result: passed
  - Notes: real GitHub API fetch succeeded; auto-resolved to the currently unique open issue `#327`
- `python3 .agents/skills/gframework-issue-review/scripts/fetch_current_issue_review.py --format json --json-output /tmp/gframework-open-issue-review.json`
  - Result: passed
  - Notes: JSON file written successfully, `resolution_mode=auto-single-open-issue`, `next_action=clarify-issue-before-code`
- `dotnet build GFramework.sln -c Release`
  - Result: passed
  - Notes: `0 Warning(s)`, `0 Error(s)`

## Next Steps

1. Use `$gframework-issue-review` to re-fetch (or explicitly fetch) the target issue and bring the triage result into `$gframework-boot`
2. Run the clarification-first path for issue `#327` before deciding whether to open a new code-change topic
3. Strengthen timeline parsing and script-level regression tests later if finer issue-event semantics are needed
@ -0,0 +1,48 @@
# GitHub Issue Review Skill Trace

## 2026-05-06

### Phase: Capability Groundwork (ISSUE-SKILL-RP-001)

- Read `AGENTS.md`, `.ai/environment/tools.ai.yaml`, `ai-plan/public/README.md`, and the existing
  `.agents/skills/gframework-pr-review/` implementation; confirmed the safest plan for the new skill is to reuse
  the PR review skill's GitHub API access, WSL worktree Git resolution, text section output, and script-level test skeleton
- Confirmed the task is `new` + `complex`:
  - `new`: no public recovery topic currently corresponds to an issue review skill
  - `complex`: it spans skill design, a GitHub API script, a CLI contract, tests, and the `ai-plan` recovery entry
- Fixed the default behavior according to product decisions confirmed before implementation:
  - Without an explicit issue number, auto-select only when the repository currently has exactly one open issue
  - By default the skill only fetches, triages, and hands off to boot; the script layer never edits code directly
- Created the new topic directory and mapped the current branch `feat/github-issue-review-skill` to it

### Current Execution Goals

1. Add the `gframework-issue-review` skill docs and default prompt
2. Add `fetch_current_issue_review.py` and its minimal regression tests
3. Validate the default flow against a real open issue and record the minimal verification commands

### Next Steps

1. Start follow-up work on issue `#327` directly with `$gframework-issue-review` + `$gframework-boot`
2. If multiple open issues appear in the repository later, switch uniformly to the explicit `--issue <number>` entry point

### Phase: Implementation and Verification Complete (ISSUE-SKILL-RP-001)

- Landed the new skill files:
  - `.agents/skills/gframework-issue-review/SKILL.md`
  - `.agents/skills/gframework-issue-review/agents/openai.yaml`
  - `.agents/skills/gframework-issue-review/scripts/fetch_current_issue_review.py`
  - `.agents/skills/gframework-issue-review/scripts/test_fetch_current_issue_review.py`
- First discovered during real-fetch validation that the current WSL session resolves to `git.exe`, which cannot execute
  - Fixed the new script so that whenever a Linux `git` exists at the repository root, it prefers binding explicit `--git-dir` / `--work-tree`
- Completed verification:
  - `python3 .agents/skills/gframework-issue-review/scripts/test_fetch_current_issue_review.py`
  - `python3 .agents/skills/gframework-issue-review/scripts/fetch_current_issue_review.py --section summary --section warnings`
  - `python3 .agents/skills/gframework-issue-review/scripts/fetch_current_issue_review.py --format json --json-output /tmp/gframework-open-issue-review.json`
  - `dotnet build GFramework.sln -c Release`
- Real-issue validation conclusions:
  - The current open issue auto-resolved to `#327`
  - `resolution_mode=auto-single-open-issue`
  - `comment_count=0`
  - `next_action=clarify-issue-before-code`
  - `affected_active_topics=cqrs-rewrite`