Conversation

@yujonglee
Contributor

@yujonglee yujonglee commented Dec 5, 2025

Summary

Adds an automated code analysis workflow using OpenCode CLI with Claude AI to periodically scan the codebase for bugs, refactoring opportunities, and security concerns. Findings can optionally trigger Devin sessions to create fix PRs.

The workflow runs weekly (Mondays 6 AM UTC) via cron or can be triggered manually. It analyzes 11 key components (plugins, crates, apps) with at most 3 concurrent analyses, using the OpenCode plan agent (read-only, no code changes).
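
A minimal sketch of the trigger and matrix shape described above, assuming typical GitHub Actions syntax (the target list, input defaults, and runner are illustrative, not copied from the workflow file):

# Sketch only; names and defaults are assumptions, not the exact file in this PR.
on:
  schedule:
    - cron: "0 6 * * 1"          # Mondays 06:00 UTC
  workflow_dispatch:
    inputs:
      create_devin_sessions:
        type: boolean
        default: false
      max_concurrency:
        default: "3"

jobs:
  analyze:
    runs-on: ubuntu-latest
    strategy:
      max-parallel: 3            # at most 3 concurrent analyses
      matrix:
        target:
          - plugins/listener     # ...plus the other analysis targets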

Updates since last revision

Simplified to a single workflow file with all logic inline (no separate scripts or README).

Review & Testing Checklist for Human

  • Verify OpenCode CLI authentication: The workflow uses ANTHROPIC_API_KEY but OpenCode docs show Claude Max uses OAuth via /connect. Test if API key auth works in CI or if a different auth method is needed.
  • Test workflow manually first: Run the workflow via workflow_dispatch before relying on the cron schedule. The OpenCode CLI installation and execution in CI are untested.
  • Check artifact naming: Matrix targets contain slashes (e.g., plugins/listener), which become artifact names like analysis-plugins/listener. Verify this doesn't cause upload issues (see the sanitization sketch after this list).
  • Verify JSON output format: The workflow assumes opencode run --format json outputs parseable JSON. This should be validated.
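
Regarding the artifact-naming item above, a possible sanitization sketch, assuming actions/upload-artifact@v4 (which rejects slashes in artifact names); the step id and output name are hypothetical:

      # Hypothetical step: replace "/" with "-" so the artifact name is valid.
      - name: Compute artifact-safe target name
        id: safe_name
        run: echo "name=${TARGET//\//-}" >> "$GITHUB_OUTPUT"
        env:
          TARGET: ${{ matrix.target }}

      - uses: actions/upload-artifact@v4
        with:
          name: analysis-${{ steps.safe_name.outputs.name }}
          path: /tmp/analysis.json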

Recommended test plan:

  1. Add ANTHROPIC_API_KEY secret to repo
  2. Manually trigger the workflow with create_devin_sessions: false (see the gh CLI sketch after this list)
  3. Check if OpenCode CLI installs and runs successfully
  4. Review the uploaded artifacts for valid output
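
A sketch of step 2 using the GitHub CLI; the workflow file name opencode_analysis.yaml is taken from the path reviewed later in this thread, so adjust if it differs:

# Manually trigger the workflow with Devin session creation disabled
gh workflow run opencode_analysis.yaml \
  -f create_devin_sessions=false \
  -f max_concurrency=3

# Check the result of the most recent runs of this workflow
gh run list --workflow opencode_analysis.yaml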

Notes

Requested by: yujonglee (@yujonglee)
Devin session: https://app.devin.ai/sessions/97107827f74345a5ab430b1e576218f2

- Add analyze.sh script for code analysis using OpenCode CLI with Plan agent
- Add create-devin-sessions.sh for parsing findings and creating Devin sessions
- Add GitHub Actions workflow with cron schedule (weekly) and manual trigger
- Support max 3 concurrent analyses
- Add README documentation for the workflow

Co-Authored-By: yujonglee <yujonglee.dev@gmail.com>
@devin-ai-integration
Contributor

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR that start with 'DevinAI' or '@devin'.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring

@netlify

netlify bot commented Dec 5, 2025

Deploy Preview for hyprnote ready!

🔨 Latest commit: ebcb979
🔍 Latest deploy log: https://app.netlify.com/projects/hyprnote/deploys/6932d62c43ec3a0008423512
😎 Deploy Preview: https://deploy-preview-2152--hyprnote.netlify.app

To edit notification comments on pull requests, go to your Netlify project configuration.

@netlify

netlify bot commented Dec 5, 2025

Deploy Preview for hyprnote-storybook ready!

🔨 Latest commit: ebcb979
🔍 Latest deploy log: https://app.netlify.com/projects/hyprnote-storybook/deploys/6932d62cbbfe1100087f9352
😎 Deploy Preview: https://deploy-preview-2152--hyprnote-storybook.netlify.app

To edit notification comments on pull requests, go to your Netlify project configuration.

@coderabbitai
Contributor

coderabbitai bot commented Dec 5, 2025

📝 Walkthrough

Walkthrough

Extended GitHub Actions workflow with matrix-based analysis targets and manual triggers. The analyze job now runs OpenCode CLI across multiple targets with configurable parallelism, uploading findings. The create-sessions job downloads findings and posts per-target analysis results to the Devin API for session creation.

Changes

Cohort / File(s): GitHub Actions workflow configuration (.github/workflows/opencode_analysis.yaml)
Summary: Added scheduling and manual-trigger inputs (create_devin_sessions, max_concurrency); extended the analyze job to run matrix-based targets with parallelism control, install and run the OpenCode CLI, save results to /tmp/analysis.json, and upload them as an artifact; extended the create-sessions job to download the findings artifact, iterate over findings directories, and POST per-target analysis to the Devin API using DEVIN_API_KEY; configured 30-day artifact retention.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

  • Single workflow file with added scheduling/manual triggers and new matrix configuration
  • New external service integration (OpenCode CLI and Devin API) requires verification of authentication and API usage patterns
  • Artifact handling and inter-job dependencies should be validated for correctness

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Title check ✅ Passed: The title accurately summarizes the main change (adding an OpenCode analysis workflow with Devin integration), which matches the PR objectives and changeset.
  • Description check ✅ Passed: The description is directly related to the changeset, providing detailed context about the workflow's purpose, schedule, scope, and implementation details.


…un setup

Co-Authored-By: yujonglee <yujonglee.dev@gmail.com>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (5)
scripts/opencode-analysis/analyze.sh (1)

72-74: IFS-based parsing may fail if colon appears in description.

Line 73 uses IFS=':' to parse target entries. If a description contains a colon (e.g., "Plugin: for X"), the split will break. Consider using a more robust delimiter or format (e.g., @ or structured data).

-TARGETS=(
-    "plugins/listener:Audio capture and session management plugin"
+TARGETS=(
+    "plugins/listener@Audio capture and session management plugin"

Then update line 73:

-        IFS=':' read -r target description <<< "$target_entry"
+        IFS='@' read -r target description <<< "$target_entry"
.github/workflows/opencode_analysis.yaml (1)

24-29: Quote $GITHUB_PATH for defensive coding.

Line 26 appends to $GITHUB_PATH without quoting. While $GITHUB_PATH is unlikely to contain spaces in the GitHub Actions environment, defensive quoting is a best practice:

-          echo "$HOME/.opencode/bin" >> $GITHUB_PATH
+          echo "$HOME/.opencode/bin" >> "$GITHUB_PATH"

This aligns with the shellcheck SC2086 warning and prevents edge-case failures if environment variables are unexpectedly modified.

scripts/opencode-analysis/create-devin-sessions.sh (3)

10-32: Add error handling for malformed JSON in parse_findings.

Lines 15–27 iterate over JSON files and attempt to parse them with jq without verifying that the input is valid JSON. If analyze.sh produces corrupted output (as flagged in the earlier critical issue), line 20's jq -e '.error' will fail silently, and line 25's jq operation will error, halting the script or producing an incomplete findings.json.

Add a pre-flight validation:

for file in "$OUTPUT_DIR"/*.json; do
     if [[ -f "$file" ]]; then
         local target_name=$(basename "$file" .json)
         local content=$(cat "$file")
         
+        # Validate JSON structure before processing
+        if ! echo "$content" | jq -e . > /dev/null 2>&1; then
+            echo "Skipping $target_name: invalid JSON output"
+            continue
+        fi
+        
         if echo "$content" | jq -e '.error' > /dev/null 2>&1; then
             echo "Skipping $target_name: analysis failed"
             continue
         fi
         ...

34-52: Add error handling for Devin API response parsing.

Line 43 checks for .session_id in the response without first validating that the response is valid JSON. If the Devin API returns an error or non-JSON response, jq will fail and the script will error. Add a pre-flight JSON validation:

create_devin_session() {
     local prompt="$1"
     local response
     
     response=$(curl -s -X POST "$DEVIN_API_URL" \
         -H "Authorization: Bearer $DEVIN_API_KEY" \
         -H "Content-Type: application/json" \
         -d "{\"prompt\": $(echo "$prompt" | jq -Rs .)}")
     
+    # Validate JSON response before parsing
+    if ! echo "$response" | jq -e . > /dev/null 2>&1; then
+        echo "Failed to create session: invalid JSON response"
+        return 1
+    fi
+    
     if echo "$response" | jq -e '.session_id' > /dev/null 2>&1; then
         ...

70-98: Consider splitting declare and assign to improve error capture (optional).

Shellcheck flags multiple SC2155 warnings (lines 17, 18, 44, 45, 65, 66, 71, 72) recommending that variable declarations and assignments be split. While this is valid style guidance, in "Chill" review mode these are low-priority. If you wish to address them, split assignments like:

-local target_name=$(basename "$file" .json)
+local target_name
+target_name=$(basename "$file" .json)

This allows the script to catch and fail on command failures (e.g., if basename is unavailable). However, given the script's overall robustness, this refactor can be deferred.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a51cc60 and ea273ed.

⛔ Files ignored due to path filters (1)
  • scripts/opencode-analysis/README.md is excluded by !**/README*
📒 Files selected for processing (3)
  • .github/workflows/opencode_analysis.yaml (1 hunks)
  • scripts/opencode-analysis/analyze.sh (1 hunks)
  • scripts/opencode-analysis/create-devin-sessions.sh (1 hunks)
🧰 Additional context used
🪛 actionlint (1.7.9)
.github/workflows/opencode_analysis.yaml

24-24: shellcheck reported issue in this script: SC2086:info:2:31: Double quote to prevent globbing and word splitting

(shellcheck)

🪛 Shellcheck (0.11.0)
scripts/opencode-analysis/create-devin-sessions.sh

[warning] 17-17: Declare and assign separately to avoid masking return values.

(SC2155)


[warning] 18-18: Declare and assign separately to avoid masking return values.

(SC2155)


[warning] 44-44: session_id appears unused. Verify use (or export if used externally).

(SC2034)


[warning] 44-44: Declare and assign separately to avoid masking return values.

(SC2155)


[warning] 45-45: Declare and assign separately to avoid masking return values.

(SC2155)


[warning] 65-65: Declare and assign separately to avoid masking return values.

(SC2155)


[warning] 66-66: Declare and assign separately to avoid masking return values.

(SC2155)


[warning] 71-71: Declare and assign separately to avoid masking return values.

(SC2155)


[warning] 72-72: Declare and assign separately to avoid masking return values.

(SC2155)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Devin
🔇 Additional comments (4)
.github/workflows/opencode_analysis.yaml (2)

30-35: Verify permissions and environment propagation.

The analyze.sh script is invoked with MAX_CONCURRENCY passed explicitly at line 35, which correctly overrides the default. Ensure that the script sources or inherits all necessary environment variables (e.g., ANTHROPIC_API_KEY, MAX_CONCURRENCY). The current setup looks correct, but confirm that OpenCode CLI can access the API key at runtime.
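
One way to make that explicit is to set the key on the step that invokes the script; a sketch, assuming the step layout below rather than the exact workflow in this PR:

      - name: Run analysis
        run: ./scripts/opencode-analysis/analyze.sh
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          MAX_CONCURRENCY: ${{ inputs.max_concurrency || 3 }}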


43-51: Clarify artifact handling for findings.json generation.

The workflow runs create-devin-sessions.sh parse at line 45 to generate findings.json, then uploads it as an artifact at line 50. This artifact is later downloaded in the create-sessions job (line 60–63). Confirm that:

  1. The parse command gracefully handles missing or malformed output files (e.g., if analyze.sh fails).
  2. If no valid findings are generated, an empty or minimal findings.json is still created so the downstream create-sessions job does not fail during artifact download.

Run a manual test via workflow_dispatch with create_devin_sessions: false to verify the artifact pipeline works end-to-end.

scripts/opencode-analysis/create-devin-sessions.sh (2)

54-99: DEVIN_API_KEY validation is correctly placed in process_findings.

The check at line 55 validates DEVIN_API_KEY only when the create command is invoked (via process_findings), not at script startup. This is the correct approach—it allows parse to run independently without requiring the API key to be set. The PR objectives' recommendation to move this check into process_findings() appears to already be implemented.


101-116: Entrypoint and command dispatch look solid.

The case statement cleanly separates parse and create workflows, with clear help text. The design allows the GitHub Actions workflow to invoke parse to generate findings and later invoke create conditionally, which is a good separation of concerns.

Comment on lines 55 to 63
    cd "$target_path"

    if opencode run "$prompt" --agent plan --format json > "$output_file" 2>&1; then
        echo "Completed: $target"
    else
        echo "Failed: $target"
        echo "{\"error\": \"Analysis failed\", \"target\": \"$target\"}" > "$output_file"
    fi
}

⚠️ Potential issue | 🔴 Critical

Separate stderr from JSON output to prevent corruption.

Line 57 redirects both stdout and stderr (2>&1) to the output file. If OpenCode fails, error messages are written to the JSON file, corrupting it. The fallback at line 61 then appends a malformed JSON structure, leaving the file in an unparseable state for downstream jq operations in create-devin-sessions.sh.

Separate stderr and stdout, and only write JSON output to the file:

-    if opencode run "$prompt" --agent plan --format json > "$output_file" 2>&1; then
+    if opencode run "$prompt" --agent plan --format json > "$output_file" 2>/tmp/opencode-err-${target_name}.log; then
         echo "Completed: $target"
     else
         echo "Failed: $target"
+        cat /tmp/opencode-err-${target_name}.log >&2
         echo "{\"error\": \"Analysis failed\", \"target\": \"$target\"}" > "$output_file"
     fi

Alternatively, capture stderr to a separate diagnostics file or log it without mixing into the JSON output.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In scripts/opencode-analysis/analyze.sh around lines 55 to 63, stderr is
currently merged into the JSON output file (using 2>&1) which can corrupt the
JSON; change the opencode invocation to write only stdout (the JSON) to
"$output_file" and redirect stderr to a separate diagnostics/log file (e.g.,
"$output_file.err" or a central log) so errors aren’t mixed into the JSON; keep
the existing if/else logic but ensure the failure branch still writes the
fallback JSON to "$output_file" and optionally write a human-readable error
message or the captured stderr into the diagnostics file for debugging.

Co-Authored-By: yujonglee <yujonglee.dev@gmail.com>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (3)
.github/workflows/opencode_analysis.yaml (3)

34-36: Consider pinning OpenCode CLI version. The curl | bash pattern installs the latest version without version control. Consider using a specific version pin or checksum to ensure reproducibility and reduce supply-chain risk.
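
A hedged sketch of one pinning approach: download the installer to a file, verify a checksum, then run it. The URL, version, and checksum below are placeholders, not real OpenCode release artifacts:

      - name: Install OpenCode CLI (pinned)
        run: |
          # Placeholder values; substitute the real release URL, version, and checksum.
          OPENCODE_VERSION="X.Y.Z"
          curl -fsSL -o /tmp/opencode-install.sh \
            "https://example.com/opencode/v${OPENCODE_VERSION}/install.sh"
          echo "<expected-sha256>  /tmp/opencode-install.sh" | sha256sum --check -
          bash /tmp/opencode-install.sh
          echo "$HOME/.opencode/bin" >> "$GITHUB_PATH"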


55-62: Improve error handling for missing or invalid analysis files.

Line 62 silently falls back to {} if analysis.json is missing, which can result in API calls with empty findings. This masks OpenCode failures and may create unnecessary Devin sessions. Add explicit validation: check if the file exists and log warnings if it's missing or invalid. Consider failing the job if analysis is unavailable for a target, rather than proceeding with empty data.


63-67: Add proper error logging for Devin API calls.

The || true operator on line 66 suppresses curl errors, making it hard to debug API failures. Consider capturing and logging errors, or at least checking the HTTP response code before silently continuing. Additionally, verify that the 2-second sleep between requests aligns with Devin API rate limits.
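
A sketch of surfacing failures instead of swallowing them with || true, using curl's -w "%{http_code}" to capture the status; it assumes a jq-built $payload as in the suggestion further below:

# Capture the HTTP status so API failures show up in the job log.
status=$(curl -s -o /tmp/devin-response.json -w "%{http_code}" \
  -X POST "https://api.devin.ai/v1/sessions" \
  -H "Authorization: Bearer $DEVIN_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$payload")

if [ "$status" -ge 400 ]; then
  echo "WARNING: Devin API returned HTTP $status for $target" >&2
  cat /tmp/devin-response.json >&2
fi
sleep 2   # crude pacing; confirm against the Devin API's documented rate limits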

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ea273ed and ebcb979.

📒 Files selected for processing (1)
  • .github/workflows/opencode_analysis.yaml (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: fmt
  • GitHub Check: Redirect rules - hyprnote
  • GitHub Check: Header rules - hyprnote
  • GitHub Check: Pages changed - hyprnote
  • GitHub Check: Devin
🔇 Additional comments (3)
.github/workflows/opencode_analysis.yaml (3)

1-11: LGTM—Triggers and inputs are well-configured. Cron schedule correctly targets Mondays 06:00 UTC, and workflow_dispatch inputs align with concurrency control and Devin session creation.


14-17: Verify inputs.max_concurrency behavior in scheduled runs.

Line 17 uses ${{ inputs.max_concurrency || 3 }}, but the inputs context is only available during workflow_dispatch triggers. In scheduled runs, this expression may fail or use an undefined value. Confirm that the fallback to 3 is reliable, or refactor to use an environment variable or conditional.
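
One explicit variant, as a sketch only (whether the simpler || fallback already behaves correctly on schedule triggers should still be verified in a real run):

    strategy:
      # inputs.* is empty on schedule triggers, so gate on the event name
      # and fall back to 3 otherwise.
      max-parallel: ${{ github.event_name == 'workflow_dispatch' && inputs.max_concurrency || 3 }}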


44-48: LGTM—Artifact upload is correctly configured. Per-target naming ensures uniqueness, and 30-day retention is appropriate.

Comment on lines +59 to +70
      - run: |
          for dir in findings/*/; do
            target=$(basename "$dir" | sed 's/analysis-//')
            analysis=$(cat "$dir/analysis.json" 2>/dev/null || echo "{}")
            curl -s -X POST "https://api.devin.ai/v1/sessions" \
              -H "Authorization: Bearer $DEVIN_API_KEY" \
              -H "Content-Type: application/json" \
              -d "{\"prompt\": \"Fix issues in fastrepl/hyprnote $target based on: $analysis\"}" || true
            sleep 2
          done
        env:
          DEVIN_API_KEY: ${{ secrets.DEVIN_API_KEY }}

⚠️ Potential issue | 🔴 Critical

CRITICAL: Fix JSON injection vulnerability and add DEVIN_API_KEY validation.

This script has multiple blocking issues:

  1. JSON Injection (line 66): The $analysis variable is directly interpolated into the JSON payload without escaping. If analysis contains quotes, backslashes, or newlines, the JSON will be malformed. Example: analysis='{"error": "quote\"inside"}' will break the payload.

  2. Missing DEVIN_API_KEY validation (lines 59-70): Per the PR checklist, the DEVIN_API_KEY should be validated before use. Currently, if the secret is not set, curl will POST with an empty Bearer token and fail silently due to || true. Add an early check to halt the job if the key is missing.

  3. Artifact path extraction (line 61): The sed command assumes artifact names in findings/*/, but actions/download-artifact@v4 creates subdirectories like findings/<artifact_name>/. Verify the actual structure and adjust the extraction logic accordingly.

Apply this diff to fix the issues:

      - run: |
+         if [ -z "$DEVIN_API_KEY" ]; then
+           echo "ERROR: DEVIN_API_KEY is not set" >&2
+           exit 1
+         fi
          for dir in findings/*/; do
            target=$(basename "$dir" | sed 's/analysis-//')
-           analysis=$(cat "$dir/analysis.json" 2>/dev/null || echo "{}")
-           curl -s -X POST "https://api.devin.ai/v1/sessions" \
+           if [ ! -f "$dir/analysis.json" ]; then
+             echo "WARNING: $dir/analysis.json not found, skipping $target" >&2
+             continue
+           fi
+           analysis=$(cat "$dir/analysis.json")
+           payload=$(jq -n --arg prompt "Fix issues in fastrepl/hyprnote $target based on: $analysis" '{prompt: $prompt}')
+           curl -s -X POST "https://api.devin.ai/v1/sessions" \
              -H "Authorization: Bearer $DEVIN_API_KEY" \
              -H "Content-Type: application/json" \
-             -d "{\"prompt\": \"Fix issues in fastrepl/hyprnote $target based on: $analysis\"}" || true
+             -d "$payload" || true
            sleep 2
          done

Additionally, verify the artifact directory structure after download to ensure the path extraction is correct.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

      - run: |
          if [ -z "$DEVIN_API_KEY" ]; then
            echo "ERROR: DEVIN_API_KEY is not set" >&2
            exit 1
          fi
          for dir in findings/*/; do
            target=$(basename "$dir" | sed 's/analysis-//')
            if [ ! -f "$dir/analysis.json" ]; then
              echo "WARNING: $dir/analysis.json not found, skipping $target" >&2
              continue
            fi
            analysis=$(cat "$dir/analysis.json")
            payload=$(jq -n --arg prompt "Fix issues in fastrepl/hyprnote $target based on: $analysis" '{prompt: $prompt}')
            curl -s -X POST "https://api.devin.ai/v1/sessions" \
              -H "Authorization: Bearer $DEVIN_API_KEY" \
              -H "Content-Type: application/json" \
              -d "$payload" || true
            sleep 2
          done
        env:
          DEVIN_API_KEY: ${{ secrets.DEVIN_API_KEY }}

