54 changes: 54 additions & 0 deletions .github/workflows/pr-analysis.yml
@@ -0,0 +1,54 @@
name: "PR Analysis"

on:
  workflow_run:
    workflows: ["Sync fabricv4 API spec"]
    types: [completed]

permissions:
  contents: read
  pull-requests: write
  actions: read

jobs:
  pr-analysis:
    name: Branch File Analysis
    runs-on: ubuntu-latest
    if: github.event.workflow_run.conclusion == 'success'
Copilot AI Feb 13, 2026

The workflow is triggered by the completion of "Sync fabricv4 API spec" workflow but doesn't verify that the workflow actually created or updated a PR. The workflow will run even if the sync workflow completed successfully but didn't create a PR (for example, if there were no changes to sync).

Consider adding a check to verify that the sync workflow actually created or found a PR before proceeding with the analysis. This could be done by checking if PR_NUMBER is empty after line 39, but it would be better to check earlier in the workflow to avoid unnecessary setup steps.

Copilot uses AI. Check for mistakes.

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

Comment on lines +21 to +22

The workflow doesn't check out the correct branch. The actions/checkout@v4 step at line 21 will check out the default branch (main), not the branch from the triggering workflow. This means the Python script will analyze the wrong branch and won't detect the actual changes made by the "Sync fabricv4 API spec" workflow.

To fix this, you need to check out the branch that triggered the workflow using:

- name: Checkout code
  uses: actions/checkout@v4
  with:
    ref: ${{ github.event.workflow_run.head_branch }}

Additionally, you need to fetch the base branch to ensure git diff has the proper reference:

- name: Fetch base branch
  run: git fetch origin main:main
Suggested change
uses: actions/checkout@v4
uses: actions/checkout@v4
  with:
    ref: ${{ github.event.workflow_run.head_branch }}
- name: Fetch base branch
  run: git fetch origin main:main

      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.9'

      - name: Run analysis and update PR
        run: |
          # Install GitHub CLI
          curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg
          echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null
          sudo apt update
          sudo apt install gh
Comment on lines +30 to +34

The GitHub CLI is being manually installed instead of using the official GitHub-provided action. GitHub-hosted runners already have gh CLI pre-installed, so lines 30-34 (installing gh) are unnecessary and add execution time.

You can remove these lines and directly use the gh command. If you need a specific version, you can use the gh version that comes with the runner or explicitly verify the version is available.
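If the preinstalled gh on GitHub-hosted runners is acceptable, the install block could be replaced by exporting a token and calling gh directly; a minimal sketch, with the step body abbreviated to the PR-lookup call:

```yaml
- name: Run analysis and update PR
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}  # gh reads GH_TOKEN; no `gh auth login` needed
  run: |
    gh --version  # gh ships preinstalled on ubuntu-latest runners
    BRANCH="${{ github.event.workflow_run.head_branch }}"
    gh pr list --head "$BRANCH" --state open --json number --jq '.[0].number // empty'
```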

          gh auth login --with-token <<< "${{ secrets.GITHUB_TOKEN }}"

The workflow uses secrets.GITHUB_TOKEN for authentication at line 35. However, when using workflow_run triggers, the GITHUB_TOKEN in the triggered workflow has limited permissions and may not have write access to pull requests created from forks or in certain repository configurations.

Consider using a GitHub App token or a Personal Access Token (PAT) with appropriate permissions, stored as a separate secret, to ensure the workflow can update PRs reliably.
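A sketch of that approach, assuming a PAT stored under the illustrative secret name PR_ANALYSIS_TOKEN:

```yaml
- name: Run analysis and update PR
  env:
    # Hypothetical PAT secret; falls back to the default token if unset
    GH_TOKEN: ${{ secrets.PR_ANALYSIS_TOKEN || secrets.GITHUB_TOKEN }}
  run: |
    gh auth status
```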


          # Get PR for branch
          BRANCH="${{ github.event.workflow_run.head_branch }}"
          PR_NUMBER=$(gh pr list --head "$BRANCH" --state open --json number --jq '.[0].number // empty')

The workflow lacks error handling for the case when no PR is found for the branch. If PR_NUMBER is empty (which can happen if the branch has no open PR), subsequent commands using $PR_NUMBER will fail with unclear errors.

Add a check after line 39:

if [ -z "$PR_NUMBER" ]; then
  echo "No open PR found for branch $BRANCH"
  exit 0
fi
Suggested change
PR_NUMBER=$(gh pr list --head "$BRANCH" --state open --json number --jq '.[0].number // empty')
PR_NUMBER=$(gh pr list --head "$BRANCH" --state open --json number --jq '.[0].number // empty')
if [ -z "$PR_NUMBER" ]; then
  echo "No open PR found for branch $BRANCH"
  exit 0
fi


          # Run Python script and get result
          ANALYSIS_RESULT=$(python3 script/branch_file_analyzer.py)

The workflow doesn't handle the case where the Python script produces no output or fails silently. If ANALYSIS_RESULT is empty, the PR description will be updated with an empty analysis section, which could be confusing.

Add validation after line 42:

if [ -z "$ANALYSIS_RESULT" ]; then
  echo "No analysis results generated"
  exit 0
fi
Suggested change
ANALYSIS_RESULT=$(python3 script/branch_file_analyzer.py)
ANALYSIS_RESULT=$(python3 script/branch_file_analyzer.py)
if [ -z "$ANALYSIS_RESULT" ]; then
  echo "No analysis results generated"
  exit 0
fi


          # Get current PR description
          CURRENT_DESC=$(gh pr view $PR_NUMBER --json body --jq '.body // ""')

          # Remove old analysis section if exists
          NEW_DESC=$(echo "$CURRENT_DESC" | sed '/## 📋 Branch File Analysis/,/^\*Auto-generated by Branch File Analysis workflow\*$/d')

The sed range deletion may not behave as intended. The closing address /^\*Auto-generated by Branch File Analysis workflow\*$/ requires an exact line match, so any drift in the marker's wording or surrounding whitespace prevents the range from ever closing; in that case sed deletes from the opening marker to the end of the input, silently dropping the rest of the PR description.

Consider a more robust approach, such as wrapping the section in unique marker lines or using a small script that handles a missing end marker gracefully.

Suggested change
NEW_DESC=$(echo "$CURRENT_DESC" | sed '/## 📋 Branch File Analysis/,/^\*Auto-generated by Branch File Analysis workflow\*$/d')
NEW_DESC=$(printf "%s" "$CURRENT_DESC" | python3 - << 'EOF'
import sys

body = sys.stdin.read()
start = "## 📋 Branch File Analysis"
end = "*Auto-generated by Branch File Analysis workflow*"
lines = body.splitlines()
out = []
buffer = []
in_block = False
for line in lines:
    if not in_block:
        if line.strip() == start:
            in_block = True
            buffer = [line]
        else:
            out.append(line)
    else:
        buffer.append(line)
        if line.strip() == end:
            # Completed a block; drop it and reset state
            in_block = False
            buffer = []
# If we were in a block but never found the end marker, restore the buffered lines
if in_block and buffer:
    out.extend(buffer)
print("\n".join(out))
EOF
)


          # Add new analysis section
          UPDATED_DESC=$(printf "%s\n\n## 📋 Branch File Analysis\n\n%s\n\n---\n*Auto-generated by Branch File Analysis workflow*" "$NEW_DESC" "$ANALYSIS_RESULT")

          # Update PR
          gh pr edit $PR_NUMBER --body "$UPDATED_DESC"
Comment on lines +47 to +54

The sed command at line 48 and the printf command at line 51 process user-generated content (PR descriptions) without proper escaping. If a PR description contains special characters like backticks, dollar signs, or backslashes, it could lead to command injection or unexpected behavior.

Consider using a more robust method to manipulate the PR description, such as:

  1. Writing the content to temporary files instead of using shell variables
  2. Using a Python or other scripting language that handles strings more safely
  3. Properly escaping the content before using it in shell commands

This is especially important since the PR description could potentially be modified by attackers if they have write access to the repository.

Suggested change
# Remove old analysis section if exists
NEW_DESC=$(echo "$CURRENT_DESC" | sed '/## 📋 Branch File Analysis/,/^\*Auto-generated by Branch File Analysis workflow\*$/d')
# Add new analysis section
UPDATED_DESC=$(printf "%s\n\n## 📋 Branch File Analysis\n\n%s\n\n---\n*Auto-generated by Branch File Analysis workflow*" "$NEW_DESC" "$ANALYSIS_RESULT")
# Update PR
gh pr edit $PR_NUMBER --body "$UPDATED_DESC"
# Export variables for Python script
export ANALYSIS_RESULT
export CURRENT_DESC
# Use Python to safely update the PR description and write it to a file
python3 - << 'PY'
import os
import re

current_desc = os.environ.get("CURRENT_DESC", "")
analysis_result = os.environ.get("ANALYSIS_RESULT", "")

# Remove existing "Branch File Analysis" section, if present
pattern = (
    r"\n?## 📋 Branch File Analysis"
    r".*?"
    r"\*Auto-generated by Branch File Analysis workflow\*"
    r"\n?"
)
new_desc = re.sub(pattern, "", current_desc, flags=re.DOTALL)
new_desc = new_desc.rstrip()

updated_desc = (
    (new_desc + "\n\n" if new_desc else "")
    + "## 📋 Branch File Analysis\n\n"
    + analysis_result
    + "\n\n---\n*Auto-generated by Branch File Analysis workflow*"
)

with open("updated_desc.txt", "w", encoding="utf-8") as f:
    f.write(updated_desc)
PY
# Update PR using the file to avoid shell interpolation issues
gh pr edit $PR_NUMBER --body-file updated_desc.txt

Comment on lines +38 to +54

The shell script uses unquoted variables in several places (e.g., $PR_NUMBER, $ANALYSIS_RESULT, $NEW_DESC). If any of these variables contain special characters or whitespace, the commands could fail or behave unexpectedly.

While GitHub Actions usually sets set -e by default (which helps catch errors), it's still a best practice to quote variables, especially when they contain user-generated content like PR descriptions. Consider quoting variables throughout:

PR_NUMBER=$(gh pr list --head "$BRANCH" --state open --json number --jq '.[0].number // empty')
CURRENT_DESC=$(gh pr view "$PR_NUMBER" --json body --jq '.body // ""')

Comment on lines +28 to +54

The entire workflow runs in a single step with a multi-line bash script. If any command in the middle fails (like the gh CLI installation or the Python script), the workflow will fail without clear indication of which specific step failed. This makes debugging difficult.

Consider splitting this into multiple named steps (e.g., "Install GitHub CLI", "Get PR Number", "Run Analysis", "Update PR Description") so that GitHub Actions can provide better visibility into which step failed and make the workflow easier to maintain.
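One possible decomposition, with the PR number passed between steps via $GITHUB_OUTPUT (the step names and the analysis.txt handoff are illustrative assumptions, not part of the PR):

```yaml
steps:
  - name: Get PR number
    id: pr
    env:
      GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    run: |
      PR=$(gh pr list --head "${{ github.event.workflow_run.head_branch }}" \
        --state open --json number --jq '.[0].number // empty')
      echo "number=$PR" >> "$GITHUB_OUTPUT"

  - name: Run analysis
    if: steps.pr.outputs.number != ''
    run: python3 script/branch_file_analyzer.py > analysis.txt

  - name: Update PR description
    if: steps.pr.outputs.number != ''
    env:
      GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    run: gh pr edit "${{ steps.pr.outputs.number }}" --body-file analysis.txt
```

Note that gh pr edit --body-file replaces the entire description, so the merge-with-existing-body logic would still need to run inside the analysis step.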

@@ -37,7 +37,6 @@ public void getMetroByCode() throws ApiException {
public void getMetros() throws ApiException {
    MetroResponse metroResponse = metrosApi.getMetros(null,1, 10);
    assertEquals(200, metrosApi.getApiClient().getStatusCode());
    boolean metroFound = metroResponse.getData().stream().anyMatch(metro -> metro.getCode().equals(metroCode));
    assertTrue(metroFound);
    assertTrue(!metroResponse.getData().isEmpty());

The assertion assertTrue(!metroResponse.getData().isEmpty()) uses double negation, which is harder to read and less idiomatic than using assertFalse.

Based on other test files in the codebase (e.g., NetworksApiTest.java:155, ServiceProfilesApiTest.java:86, 98, 143), the convention is to use assertFalse for isEmpty checks. Consider changing this to:

assertFalse(metroResponse.getData().isEmpty());

This is more consistent with the existing test code style.


This test change appears unrelated to the PR's stated purpose of "adding flow to push added, modified, renamed files after sync action to pull request description." The PR description doesn't mention any test modifications.

If this change is intentional and related to the analysis workflow (perhaps as a test case), it should be explained in the PR description. If it's unrelated, it should be moved to a separate PR to keep changes focused and easier to review.


The test assertion has been weakened. The original test verified that a specific metro (with code "SV") exists in the response, while the new test only checks that the response is not empty. This reduces test coverage and may not catch regressions where the specific metro is missing or incorrect.

The original assertion was more specific and valuable:

boolean metroFound = metroResponse.getData().stream().anyMatch(metro -> metro.getCode().equals(metroCode));
assertTrue(metroFound);

Unless there's a specific reason the "SV" metro might not always be present, consider keeping the original, more specific assertion.

Suggested change
assertTrue(!metroResponse.getData().isEmpty());
boolean metroFound = metroResponse.getData().stream().anyMatch(metro -> metro.getCode().equals(metroCode));
assertTrue(metroFound);

}
}
297 changes: 297 additions & 0 deletions script/branch_file_analyzer.py
@@ -0,0 +1,297 @@
#!/usr/bin/env python3

import subprocess
import sys
import os
from pathlib import Path
from collections import defaultdict


class BranchFileAnalyzer:
    def __init__(self, base_branch='main'):
        self.base_branch = base_branch
        self.current_branch = self._get_current_branch()
        self.base_ref = self._get_base_ref()

        # File categorization
        self.added_files = []
        self.modified_files = []
        self.deleted_files = []
        self.renamed_files = {}

    def _run_git_command(self, cmd):
        """Execute git command and return output."""
        try:
            result = subprocess.run(cmd, capture_output=True, text=True, check=True, cwd='.')
            return result.stdout.strip()
        except subprocess.CalledProcessError:
            return ""
Comment on lines +27 to +28

The _run_git_command method silently swallows all git command errors by catching CalledProcessError and returning an empty string. This can hide important errors and make debugging difficult. For example, if git is not installed or if there's a permission issue, the error will be silently ignored.

Consider logging the error or at least making the error handling more selective. For critical commands, you may want to let the exception propagate or provide more context about what failed.

Suggested change
except subprocess.CalledProcessError:
    return ""
except subprocess.CalledProcessError as e:
    print(
        f"Warning: git command failed: {cmd} (return code {e.returncode}). "
        f"stderr: {e.stderr.strip() if e.stderr else 'None'}",
        file=sys.stderr,
    )
    return ""
except FileNotFoundError as e:
    # git executable not found
    print(
        f"Error: git command not found while running: {cmd}. "
        f"Details: {e}",
        file=sys.stderr,
    )
    return ""


    def _get_current_branch(self):
        """Get current branch name."""
        branch = self._run_git_command(['git', 'branch', '--show-current'])
        if not branch:
            # Handle detached HEAD
            head_commit = self._run_git_command(['git', 'rev-parse', '--short', 'HEAD'])
            return f"HEAD-{head_commit}" if head_commit else "unknown"
        return branch

    def _get_base_ref(self):
        """Get base reference for comparison."""
        # Try different base references
        for ref in [f'origin/{self.base_branch}', self.base_branch, 'HEAD~1']:
            if self._run_git_command(['git', 'rev-parse', '--verify', ref]):
                return ref
        return 'HEAD~1'  # fallback

    def analyze_branch_changes(self):
        """Analyze all file changes in the current branch."""
        # Get all changed files with their status
        diff_output = self._run_git_command([
            'git', 'diff', '--name-status', f'{self.base_ref}..HEAD'
        ])

        if not diff_output:
            # If no diff between base and HEAD, check uncommitted changes
            print("No committed changes found. Checking uncommitted changes...", file=sys.stderr)
            self._analyze_uncommitted_changes()
            return

        # Process each line of diff output
        for line in diff_output.split('\n'):
            if not line.strip():
                continue

            self._process_file_change(line.strip())

    def _analyze_uncommitted_changes(self):
        """Analyze uncommitted changes if no committed changes found."""
        # Check staged changes
        staged_output = self._run_git_command(['git', 'diff', '--cached', '--name-status'])
        # Check unstaged changes
        unstaged_output = self._run_git_command(['git', 'diff', '--name-status'])
        # Check untracked files
        untracked_output = self._run_git_command(['git', 'ls-files', '--others', '--exclude-standard'])

        # Process staged changes
        for line in staged_output.split('\n'):
            if line.strip():
                self._process_file_change(line.strip())

        # Process unstaged changes
        for line in unstaged_output.split('\n'):
            if line.strip():
                self._process_file_change(line.strip())

        # Process untracked files (treat as added)
        for filepath in untracked_output.split('\n'):
            if filepath.strip():
                self.added_files.append(filepath.strip())

    def _process_file_change(self, line):
        """Process a single file change line."""
        parts = line.split('\t')
        if len(parts) < 2:
            return

        status = parts[0]
        filepath = parts[1]

        # Handle rename operations (R100 old_file new_file)
        if status.startswith('R'):
            if len(parts) >= 3:
                old_file = filepath
                new_file = parts[2]
                self.renamed_files[old_file] = new_file
            return

        # Handle copy operations (C100 old_file new_file)
        if status.startswith('C'):
            if len(parts) >= 3:
                # Treat copies as new files
                new_file = parts[2]
                self.added_files.append(new_file)
            return

        # Handle other status codes
        if status == 'A':
            self.added_files.append(filepath)
        elif status == 'M':
            self.modified_files.append(filepath)
        elif status == 'D':
            self.deleted_files.append(filepath)
        elif status == 'T':
            # Type change (e.g., file to symlink)
            self.modified_files.append(filepath)

    def categorize_files_by_type(self, file_list):
        """Categorize files by their type/extension."""
        categories = defaultdict(list)

        for filepath in file_list:
            file_path = Path(filepath)
            filename = file_path.name

            # Categorize by file type
            if filepath.endswith('.java'):
                if '/model/' in filepath or '/dto/' in filepath:
                    categories['Java Models'].append(filename)
                elif '/api/' in filepath and filename.endswith('Api.java'):
                    categories['API Classes'].append(filename)
                elif '/test/' in filepath or 'test' in filename.lower():
                    categories['Test Files'].append(filename)
                # Removed Java Files category - these files will be ignored
            elif filepath.endswith(('.py', '.sh', '.js', '.ts')):
                categories['Scripts'].append(filename)
            elif filepath.endswith(('.json', '.xml')):
                categories['Data Files'].append(filename)
            else:
                categories['Other Files'].append(filename)

        return categories

Comment on lines +127 to +152

The categorize_files_by_type method (lines 127-151) is defined but never used anywhere in the code. This dead code should be removed to improve code maintainability and avoid confusion.

If this method is intended for future use, consider removing it for now and adding it back when it's actually needed.

Suggested change
def categorize_files_by_type(self, file_list):
    """Categorize files by their type/extension."""
    categories = defaultdict(list)

    for filepath in file_list:
        file_path = Path(filepath)
        filename = file_path.name

        # Categorize by file type
        if filepath.endswith('.java'):
            if '/model/' in filepath or '/dto/' in filepath:
                categories['Java Models'].append(filename)
            elif '/api/' in filepath and filename.endswith('Api.java'):
                categories['API Classes'].append(filename)
            elif '/test/' in filepath or 'test' in filename.lower():
                categories['Test Files'].append(filename)
            # Removed Java Files category - these files will be ignored
        elif filepath.endswith(('.py', '.sh', '.js', '.ts')):
            categories['Scripts'].append(filename)
        elif filepath.endswith(('.json', '.xml')):
            categories['Data Files'].append(filename)
        else:
            categories['Other Files'].append(filename)

    return categories

    def generate_report(self):
        """Generate the analysis report."""
        lines = []

        # Added Files by category
        if self.added_files:
            # Separate by specific categories
            added_apis = [f for f in self.added_files if f.endswith('.java') and ('/api/' in f and f.endswith('Api.java'))]

The condition ('/api/' in f and f.endswith('Api.java')) is redundant because it checks f.endswith('Api.java') when you've already checked f.endswith('.java') in the outer condition. This pattern repeats on lines 161, 185, 209, 238, and 239.

The inner check f.endswith('Api.java') is sufficient and more specific than the combined check. Consider simplifying to:

added_apis = [f for f in self.added_files if '/api/' in f and f.endswith('Api.java')]

This removes the redundant .endswith('.java') check and makes the code clearer.

Suggested change
added_apis = [f for f in self.added_files if f.endswith('.java') and ('/api/' in f and f.endswith('Api.java'))]
added_apis = [f for f in self.added_files if '/api/' in f and f.endswith('Api.java')]

            added_models = [f for f in self.added_files if f.endswith('.java') and ('/model/' in f or '/dto/' in f)]

            # Only show header if there are files in any category
            if added_apis or added_models:
                lines.append("ADDED FILES:")

                if added_apis:
                    lines.append("  Added API ->")
                    for filepath in sorted(added_apis):
                        filename = Path(filepath).name.replace('.java', '')
                        lines.append(f"    + {filename}")

                if added_models:
                    lines.append("  Added Models ->")
                    for filepath in sorted(added_models):
                        filename = Path(filepath).name.replace('.java', '')
                        lines.append(f"    + {filename}")

                lines.append("")

        # Modified Files by category
        if self.modified_files:
            # Separate by specific categories
            modified_apis = [f for f in self.modified_files if f.endswith('.java') and ('/api/' in f and f.endswith('Api.java'))]
            modified_models = [f for f in self.modified_files if f.endswith('.java') and ('/model/' in f or '/dto/' in f)]

            # Only show header if there are files in any category
            if modified_apis or modified_models:
                lines.append("MODIFIED FILES:")

                if modified_apis:
                    lines.append("  Modified API ->")
                    for filepath in sorted(modified_apis):
                        filename = Path(filepath).name.replace('.java', '')
                        lines.append(f"    * {filename}")

                if modified_models:
                    lines.append("  Modified Models ->")
                    for filepath in sorted(modified_models):
                        filename = Path(filepath).name.replace('.java', '')
                        lines.append(f"    * {filename}")

                lines.append("")

        # Deleted Files by category
        if self.deleted_files:
            # Separate by specific categories
            deleted_apis = [f for f in self.deleted_files if f.endswith('.java') and ('/api/' in f and f.endswith('Api.java'))]
            deleted_models = [f for f in self.deleted_files if f.endswith('.java') and ('/model/' in f or '/dto/' in f)]

            # Only show header if there are files in any category
            if deleted_apis or deleted_models:
                lines.append("DELETED FILES:")

                if deleted_apis:
                    lines.append("  Deleted API ->")
                    for filepath in sorted(deleted_apis):
                        filename = Path(filepath).name.replace('.java', '')
                        lines.append(f"    - {filename}")

                if deleted_models:
                    lines.append("  Deleted Models ->")
                    for filepath in sorted(deleted_models):
                        filename = Path(filepath).name.replace('.java', '')
                        lines.append(f"    - {filename}")

                lines.append("")
Comment on lines +159 to +228

The code has significant duplication in the report generation section. The logic for filtering and formatting added, modified, deleted, and renamed files follows the same pattern but is repeated four times (lines 159-180, 183-204, 207-228, 231-267).

Consider extracting a helper method to reduce duplication:

def _format_file_section(self, title, files, symbol):
    apis = [f for f in files if '/api/' in f and f.endswith('Api.java')]
    models = [f for f in files if ('/model/' in f or '/dto/' in f) and f.endswith('.java')]
    # ... format and return lines

This would improve maintainability and reduce the risk of inconsistent changes across sections.
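A fuller sketch of such a helper, standing alone for illustration (the function name, labels, and report formatting are assumptions reconstructed from the duplicated sections, not code from the PR):

```python
from pathlib import Path

def format_file_section(title, label, files, symbol):
    """Build the report lines for one change category (hypothetical helper)."""
    apis = [f for f in files if '/api/' in f and f.endswith('Api.java')]
    models = [f for f in files if ('/model/' in f or '/dto/' in f) and f.endswith('.java')]
    lines = []
    if apis or models:
        lines.append(f"{title}:")
        for header, group in ((f"  {label} API ->", apis), (f"  {label} Models ->", models)):
            if group:
                lines.append(header)
                for filepath in sorted(group):
                    filename = Path(filepath).name.replace('.java', '')
                    lines.append(f"    {symbol} {filename}")
        lines.append("")
    return lines

# Each of the four report sections would then collapse to a single call, e.g.:
# lines += format_file_section("ADDED FILES", "Added", self.added_files, "+")
```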


        # Renamed Files with detailed categorization
        if self.renamed_files:
            # Separate renamed files by category
            renamed_apis = {}
            renamed_models = {}

            for old_file, new_file in self.renamed_files.items():
                old_is_api = old_file.endswith('.java') and ('/api/' in old_file and old_file.endswith('Api.java'))
                new_is_api = new_file.endswith('.java') and ('/api/' in new_file and new_file.endswith('Api.java'))
                old_is_model = old_file.endswith('.java') and ('/model/' in old_file or '/dto/' in old_file)
                new_is_model = new_file.endswith('.java') and ('/model/' in new_file or '/dto/' in new_file)

                if old_is_api or new_is_api:
                    renamed_apis[old_file] = new_file
                elif old_is_model or new_is_model:
                    renamed_models[old_file] = new_file

            # Only show header if there are files in any category
            if renamed_apis or renamed_models:
                lines.append("RENAMED FILES:")

                if renamed_apis:
                    lines.append("  Renamed API ->")
                    for old_file, new_file in sorted(renamed_apis.items()):
                        old_name = Path(old_file).name.replace('.java', '')
                        new_name = Path(new_file).name.replace('.java', '')
                        lines.append(f"    {old_name} -> {new_name}")

                if renamed_models:
                    lines.append("  Renamed Models ->")
                    for old_file, new_file in sorted(renamed_models.items()):
                        old_name = Path(old_file).name.replace('.java', '')
                        new_name = Path(new_file).name.replace('.java', '')
                        lines.append(f"    {old_name} -> {new_name}")

                lines.append("")

        return "\n".join(lines)


def main():
    """Main function."""
    # Check if we're in a git repository
    if not os.path.exists('.git'):
        print("Error: Not in a Git repository!", file=sys.stderr)
        sys.exit(1)

    try:
        # Create analyzer
        analyzer = BranchFileAnalyzer()

        # Analyze changes
        analyzer.analyze_branch_changes()

        # Generate and print report
        report = analyzer.generate_report()
        print(report)

    except Exception as e:
        print(f"Error during analysis: {e}", file=sys.stderr)
        sys.exit(1)


if __name__ == '__main__':
    main()