So far in this tutorial series, we’ve covered how to set up GitHub Spec Kit and walked through the constitution, specify, clarify, plan, and tasks commands. We are almost at the point of telling Copilot to start writing code with /implement. But before we get there, there is an optional /analyze command that can be very useful to run before implementing the tasks.
/analyze Command #
This command reads all the artifacts created so far, like the constitution, spec files, and plan files, and checks them for inconsistencies, conflicting information, and ambiguities. It’s like someone taking a final look over everything done so far, just to make sure we are absolutely ready to run the tasks.
Before running the /analyze command, let’s look at the .github/agents/speckit.analyze.agent.md file to see what is happening under the hood. If you open that file, you can see that it also accepts user input as $ARGUMENTS. In the case of the /analyze command, though, unless there is specific guidance you want to give the model, you don’t really need to pass anything. After the arguments, we can see that the goal of these instructions is to identify any inconsistencies, duplications, and ambiguities. It also says that this is strictly a read-only action, so the coding agent won’t make any changes to the files in the repository unless explicitly instructed to do so. It also instructs the model to check the constitution file and flag any conflicts between the constitution and the spec or plan files. Let’s run the /speckit.analyze command now.
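In Copilot Chat this is just the slash command on its own, though anything you type after it is passed through as $ARGUMENTS if you do want to steer the analysis. The second line below is only a hypothetical example of that kind of extra guidance:

```
/speckit.analyze
/speckit.analyze focus on conflicts between the spec and the constitution
```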
Issues #
After a few moments, we can see that the analysis is complete. The output is a table that lists all the issues found during the analysis.
| ID | Category | Severity | Location(s) | Summary | Recommendation |
|---|---|---|---|---|---|
| C1 | Constitution Alignment | CRITICAL | spec.md (Testing & Manual Validation Checklist) vs constitution.md (Workflow Constraints) | Spec includes an “Automated: Unit test generator function…” line proposing automated unit tests, which directly violates the constitution’s explicit ban on automated testing. | Remove or rewrite the automated test item in spec.md. Replace with a manual test plan (unit test matrix as manual checklists) or note that automated tests are “proposed but blocked by constitution” and require a constitution amendment. Resolve before any /speckit.implement work. |
| A1 | Ambiguity / Inconsistency | HIGH | spec.md (Success Criteria SC-003) & plan.md (Performance Goals) | Conflicting performance targets: spec’s SC-003 says “under 5 seconds”, plan’s Performance Goals say “<2 seconds”. | Pick one measurable target and make it consistent: update spec.md or plan.md to the agreed latency (e.g., “ICS generation & download ≤ 2s on typical broadband”) and add the measurement procedure (manual measurement steps). |
| A2 | Ambiguity | MEDIUM | spec.md (FR-001, FR-008) | Recurrence semantics are underspecified (e.g., how “Weekly” maps to RRULE BYDAY, timezone handling with RRULE, default DTSTART alignment). | Expand FR-008 with exact RRULE expectations: BYDAY behavior, DTSTART anchoring, timezone interaction, and acceptance criteria (example RRULE strings for each recurrence option). Add a small matrix in spec.md for expected RRULE outputs. |
| A3 | Ambiguity | MEDIUM | spec.md (Edge Cases, FR-005) | “Same start/end (allowed only if semantics defined)”; semantics are not defined (allowed as zero-length event? block?). | Define allowed semantics for equal start/end: either allow zero-length events with explicit acceptance or treat as validation error. Update FR-005 and validation rules in tasks (lib/form-utils.ts). |
| D1 | Duplication | LOW | spec.md (trailing content) | The spec file contains a repeated generic template block appended after the feature content (looks like an accidental paste). | Remove the duplicate/template tail so spec.md contains only the feature-specific content. |
| I1 | Constitution Alignment | HIGH | plan.md (Dependencies) vs constitution.md (Minimal Dependencies Only) | Plan adds date-fns and @date-fns/tz without documenting that governance approval was obtained, while constitution requires approval for new packages. | Add a governance note in plan.md documenting approval (or a rollback plan), or mark the dependency addition as needing explicit approval before implementation. |
| U1 | Underspecification | MEDIUM | tasks.md (Phase 1: Setup tasks T001–T005) | Several setup tasks (T001–T005) lack the requested “exact file paths” or exact commands (task format says include exact file paths/commands). | Expand tasks with concrete commands/paths (e.g., npm install date-fns @date-fns/tz, exact npx shadcn@latest init flags, file paths to Tailwind config edits). |
| G1 | Coverage Gap | MEDIUM | spec.md (SC-003) & tasks.md | Performance success criterion (SC-003) lacks an explicit task to measure and validate it (manual perf measurement task is not present). | Add a task to tasks.md (e.g., T037 Manual performance check: measure generation+download times across network profiles and document results) and instructions (page URL, sample events, stopwatch steps, and acceptance thresholds). |
| I2 | Inconsistency | MEDIUM | plan.md (shadcn/ui guidance) & tasks.md (T002) | Plan claims shadcn/ui will be “copy-paste components, not a runtime dependency”, but tasks instruct npx shadcn@latest init, which typically adds a package/runtime or scaffolding. | Clarify the intended approach: (A) use shadcn as copy-paste components (no package), or (B) use the shadcn CLI to scaffold components (document what is installed). Update plan and tasks to match. |
| A4 | Ambiguity (Accessibility) | MEDIUM | spec.md (Testing & Manual Validation) & constitution | Accessibility checks are mentioned as manual steps but lack measurable acceptance criteria (contrast with constitution’s “manual validation” requirement). | Add explicit accessibility acceptance criteria (e.g., keyboard-only completion, aria-labels present, Lighthouse score thresholds) and a task to run and document results. |
You can see that the agent has given us a table with an ID for each item found, a category, a severity level, the location of the issue, a summary, and a recommendation on how to fix it. At the end of the analysis report, the agent asks if you would like it to suggest remediation edits for any of the issues found. This can be very useful, as it helps you quickly fix any problems before moving on to implementation. I said to go ahead; if your analysis report contains no issues, you can skip this step.
After the agent suggested the remediation edits, I told it to apply them. Since the /analyze command is a read-only action, we have to explicitly tell the agent to make the changes to the files. After a few moments, all the issues found during the analysis were fixed.
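To give a sense of the kind of concrete detail these remediation edits pin down, issue A2 asked for explicit recurrence-to-RRULE expectations and A3 asked for defined semantics around equal start/end times. Below is a minimal sketch of how that could be expressed; the option names, the choice to reject equal start/end as a validation error, and the helper functions are assumptions for illustration, not what the agent actually generated.

```typescript
// Hypothetical sketch: map the form's recurrence options to RRULE strings
// and treat an equal start/end as a validation error. Names and rules are
// illustrative assumptions, not the agent's actual output.
type Recurrence = "none" | "daily" | "weekly" | "monthly";

const BYDAY = ["SU", "MO", "TU", "WE", "TH", "FR", "SA"] as const;

// Build an RRULE anchored to the event's start date (DTSTART).
function toRRule(recurrence: Recurrence, start: Date): string | null {
  switch (recurrence) {
    case "none":
      return null; // single event, no RRULE line in the ICS output
    case "daily":
      return "RRULE:FREQ=DAILY";
    case "weekly":
      // "Weekly" repeats on the same weekday as DTSTART.
      return `RRULE:FREQ=WEEKLY;BYDAY=${BYDAY[start.getUTCDay()]}`;
    case "monthly":
      // "Monthly" repeats on the same day-of-month as DTSTART.
      return `RRULE:FREQ=MONTHLY;BYMONTHDAY=${start.getUTCDate()}`;
  }
}

// Equal start/end is rejected here rather than allowed as a zero-length event.
function validateTimes(start: Date, end: Date): string | null {
  if (end.getTime() === start.getTime()) {
    return "End time must be after the start time.";
  }
  if (end.getTime() < start.getTime()) {
    return "End time cannot be before the start time.";
  }
  return null;
}
```

Whether the real spec ends up allowing zero-length events or choosing a different BYDAY behavior is exactly what the remediation edit decides; the point is that the acceptance criteria become something you can check by hand.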
Implementing Tasks using the /implement Command #
So we have run every command except the final one, /implement. This is the command that actually takes the task list, runs through each task, and implements it. It’s the first time we instruct the coding agent to generate working code for the project. It may feel like a fairly long and drawn-out process to get here, but keep in mind that we ran two optional commands, /clarify and /analyze. We also ran the constitution command, which we typically won’t need to repeat for every new feature. Another reason is that coding agents tend to do a better job when they have more context about the project or the feature to be implemented. So taking the time to set up the context properly will pay off in the long run with better-quality code being generated.
Before we run the /implement command, open a new chat window if you have been using the same chat for the previous commands. The context window may be full of earlier messages, which can bloat the context for the coding agent, so it’s best to start fresh. If you open the .github/agents/speckit.implement.agent.md file, you can see that it also accepts user input as $ARGUMENTS, but in this case we don’t need to pass anything. It’s a prompt that instructs the model to read the tasks and execute each one. Let’s run the /speckit.implement command now.
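As with /analyze, the invocation is just the slash command; anything typed after it would be passed as $ARGUMENTS, but here we leave it empty:

```
/speckit.implement
```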
After a few moments, you can see that the implementation is complete. The output shows a summary of the work done, including the files created and modified and any issues encountered. You can now review the changes made to the codebase to make sure everything is as expected, and work through the manual validation checklist from the spec to verify that everything behaves correctly. If there are any issues, you can ask the coding agent to help fix them.