# speckit-clarify

A skill from dceoy/speckit-agent-skills.
## What it does

Identifies and resolves underspecified areas in a feature specification by asking targeted clarification questions and updating the spec accordingly.

## Installation

Clone the repository:

`git clone https://github.com/dceoy/speckit-agent-skills.git`
Extracted from docs: dceoy/speckit-agent-skills. Added Feb 4, 2026.

## Skill Details

### SKILL.md

Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.


# Spec Kit Clarify Skill

## When to Use

  • The feature spec exists but needs targeted clarification before planning.

## Inputs

  • The current feature spec in specs//spec.md.
  • The user's clarification intent or constraints from the request.

If the request is empty or the spec is missing, ask a targeted question before proceeding.

## Workflow

Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file.

Note: This clarification workflow is expected to run (and be completed) BEFORE the speckit-plan skill. If the user explicitly states they are skipping clarification (e.g., exploratory spike), you may proceed, but must warn that downstream rework risk increases.

Execution steps:

  1. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` from the repo root once (combined `--json --paths-only` mode; PowerShell variant: `-Json -PathsOnly`). Parse these minimal JSON payload fields:

- FEATURE_DIR

- FEATURE_SPEC

- (Optionally capture IMPL_PLAN, TASKS for future chained flows.)

- If JSON parsing fails, abort and instruct the user to re-run speckit-specify or verify the feature branch environment.

- For single quotes in args like "I'm Groot", use escape syntax, e.g. `'I'\''m Groot'` (or double-quote if possible: `"I'm Groot"`).
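Step 1's payload handling can be sketched in Python. The JSON string below is a stand-in for real script output, and the concrete paths in it are illustrative assumptions, not actual `check-prerequisites.sh` output:

```python
import json

# Stand-in for the output of check-prerequisites.sh --json --paths-only;
# the paths below are illustrative, not real script output.
raw = '{"FEATURE_DIR": "specs/001-example", "FEATURE_SPEC": "specs/001-example/spec.md"}'

try:
    payload = json.loads(raw)
    feature_dir = payload["FEATURE_DIR"]
    feature_spec = payload["FEATURE_SPEC"]
    # IMPL_PLAN and TASKS are optional; absence is not an error.
    impl_plan = payload.get("IMPL_PLAN")
except (json.JSONDecodeError, KeyError):
    # Abort per the workflow: ask the user to re-run speckit-specify
    # or verify the feature branch environment.
    raise SystemExit("Could not parse prerequisite paths; re-run speckit-specify.")
```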

  2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output the raw map unless no questions will be asked).

Functional Scope & Behavior:

- Core user goals & success criteria

- Explicit out-of-scope declarations

- User roles / personas differentiation

Domain & Data Model:

- Entities, attributes, relationships

- Identity & uniqueness rules

- Lifecycle/state transitions

- Data volume / scale assumptions

Interaction & UX Flow:

- Critical user journeys / sequences

- Error/empty/loading states

- Accessibility or localization notes

Non-Functional Quality Attributes:

- Performance (latency, throughput targets)

- Scalability (horizontal/vertical, limits)

- Reliability & availability (uptime, recovery expectations)

- Observability (logging, metrics, tracing signals)

- Security & privacy (authN/Z, data protection, threat assumptions)

- Compliance / regulatory constraints (if any)

Integration & External Dependencies:

- External services/APIs and failure modes

- Data import/export formats

- Protocol/versioning assumptions

Edge Cases & Failure Handling:

- Negative scenarios

- Rate limiting / throttling

- Conflict resolution (e.g., concurrent edits)

Constraints & Tradeoffs:

- Technical constraints (language, storage, hosting)

- Explicit tradeoffs or rejected alternatives

Terminology & Consistency:

- Canonical glossary terms

- Avoided synonyms / deprecated terms

Completion Signals:

- Acceptance criteria testability

- Measurable Definition of Done style indicators

Misc / Placeholders:

- TODO markers / unresolved decisions

- Ambiguous adjectives ("robust", "intuitive") lacking quantification

For each category with Partial or Missing status, add a candidate question opportunity unless:

- Clarification would not materially change implementation or validation strategy

- Information is better deferred to planning phase (note internally)
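As a sketch, the internal coverage map can be pictured as a category-to-status mapping from which candidate question opportunities are derived. The statuses below are invented for illustration:

```python
# Illustrative coverage map produced by the scan: category -> status.
coverage = {
    "Functional Scope & Behavior": "Clear",
    "Domain & Data Model": "Partial",
    "Non-Functional Quality Attributes": "Missing",
    "Edge Cases & Failure Handling": "Partial",
    "Terminology & Consistency": "Clear",
}

# Only Partial/Missing categories become candidate question opportunities
# (still subject to the materiality and deferral exclusions above).
candidates = [cat for cat, status in coverage.items() if status in ("Partial", "Missing")]
```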

  3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:

- Maximum of 5 total questions across the whole session.

- Each question must be answerable with EITHER:

- A short multiple‑choice selection (2–5 distinct, mutually exclusive options), OR

- A one-word / short‑phrase answer (explicitly constrain: "Answer in <=5 words").

- Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.

- Ensure category coverage balance: attempt to cover the highest impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.

- Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).

- Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.

- If more than 5 categories remain unresolved, select the top 5 by (Impact × Uncertainty) heuristic.
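The Impact-times-Uncertainty cut can be sketched as a simple sort. The categories and 1-5 scores below are invented for illustration:

```python
# Invented (category, impact, uncertainty) scores on an assumed 1-5 scale.
unresolved = [
    ("Security posture", 5, 4),
    ("Data volume assumptions", 3, 5),
    ("Error/empty states", 3, 3),
    ("Localization notes", 2, 2),
    ("External API failure modes", 4, 4),
    ("Glossary terms", 2, 3),
]

# Rank by impact * uncertainty and keep at most 5 question slots.
queue = sorted(unresolved, key=lambda c: c[1] * c[2], reverse=True)[:5]
```

Here the lowest-scoring category (Localization notes, 2 × 2 = 4) is the one dropped.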

  4. Sequential questioning loop (interactive):

- Present EXACTLY ONE question at a time.

- For multiple‑choice questions:

- Analyze all options and determine the most suitable option based on:

- Best practices for the project type

- Common patterns in similar implementations

- Risk reduction (security, performance, maintainability)

- Alignment with any explicit project goals or constraints visible in the spec

- Present your recommended option prominently at the top with clear reasoning (1-2 sentences explaining why this is the best choice).

- Format as: `Recommended: Option [X] - <brief rationale>`

- Then render all options as a Markdown table:

| Option | Description |
| ------ | ----------- |
| A | <Option A description> |
| B | <Option B description> |
| C | <Option C description> (add D/E as needed, up to 5 options) |
| Short | Provide a different short answer (<=5 words) (include only if a free-form alternative is appropriate) |

- After the table, add: You can reply with the option letter (e.g., "A"), accept the recommendation by saying "yes" or "recommended", or provide your own short answer.

- For short‑answer style (no meaningful discrete options):

- Provide your suggested answer based on best practices and context.

- Format as: `Suggested: <answer> - <brief rationale>`

- Then output: Format: Short answer (<=5 words). You can accept the suggestion by saying "yes" or "suggested", or provide your own answer.

- After the user answers:

- If the user replies with "yes", "recommended", or "suggested", use your previously stated recommendation/suggestion as the answer.

- Otherwise, validate the answer maps to one option or fits the <=5 word constraint.

- If ambiguous, ask for a quick disambiguation (count still belongs to same question; do not advance).

- Once satisfactory, record it in working memory (do not yet write to disk) and move to the next queued question.

- Stop asking further questions when:

- All critical ambiguities resolved early (remaining queued items become unnecessary), OR

- User signals completion ("done", "good", "no more"), OR

- You reach 5 asked questions.

- Never reveal future queued questions in advance.

- If no valid questions exist at start, immediately report no critical ambiguities.
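Assembling one turn of the loop in the multiple-choice format can be sketched as follows; the helper name and option texts are hypothetical:

```python
def render_question(question, options, recommended, reason):
    """Build the single-question Markdown block described in the loop above."""
    lines = [
        question,
        "",
        f"Recommended: Option {recommended} - {reason}",
        "",
        "| Option | Description |",
        "| ------ | ----------- |",
    ]
    for letter, description in options.items():
        lines.append(f"| {letter} | {description} |")
    lines += [
        "",
        'You can reply with the option letter (e.g., "A"), accept the '
        'recommendation by saying "yes" or "recommended", or provide your own short answer.',
    ]
    return "\n".join(lines)
```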

  5. Integration after EACH accepted answer (incremental update approach):

- Maintain in-memory representation of the spec (loaded once at start) plus the raw file contents.

- For the first integrated answer in this session:

- Ensure a ## Clarifications section exists (create it just after the highest-level contextual/overview section per the spec template if missing).

- Under it, create (if not present) a ### Session YYYY-MM-DD subheading for today.

- Append a bullet line immediately after acceptance: `- Q: <question> → A: <answer>`.

- Then immediately apply the clarification to the most appropriate section(s):

- Functional ambiguity → Update or add a bullet in Functional Requirements.

- User interaction / actor distinction → Update User Stories or Actors subsection (if present) with clarified role, constraint, or scenario.

- Data shape / entities → Update Data Model (add fields, types, relationships) preserving ordering; note added constraints succinctly.

- Non-functional constraint → Add/modify measurable criteria in Non-Functional / Quality Attributes section (convert vague adjective to metric or explicit target).

- Edge case / negative flow → Add a new bullet under Edge Cases / Error Handling (or create such subsection if template provides placeholder for it).

- Terminology conflict → Normalize term across spec; retain original only if necessary by adding (formerly referred to as "X") once.

- If the clarification invalidates an earlier ambiguous statement, replace that statement instead of duplicating; leave no obsolete contradictory text.

- Save the spec file AFTER each integration to minimize risk of context loss (atomic overwrite).

- Preserve formatting: do not reorder unrelated sections; keep heading hierarchy intact.

- Keep each inserted clarification minimal and testable (avoid narrative drift).
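The Clarifications bookkeeping can be sketched as below. This is a simplification of the workflow above: the section is appended at the end rather than placed after the overview section, and new bullets land directly under the session heading:

```python
import datetime

def append_clarification(spec_text, question, answer):
    """Record an accepted answer under today's session heading, creating the
    '## Clarifications' section and session subheading if missing."""
    session = f"### Session {datetime.date.today().isoformat()}"
    bullet = f"- Q: {question} → A: {answer}"
    if "## Clarifications" not in spec_text:
        # Simplified placement: the real workflow inserts this section just
        # after the top-level overview section of the spec template.
        return spec_text + f"\n## Clarifications\n\n{session}\n\n{bullet}\n"
    if session not in spec_text:
        return spec_text.replace(
            "## Clarifications", f"## Clarifications\n\n{session}\n\n{bullet}", 1
        )
    # Simplified: insert the new bullet right under the session heading.
    return spec_text.replace(session, f"{session}\n\n{bullet}", 1)
```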

  6. Validation (performed after EACH write plus final pass):

- Clarifications session contains exactly one bullet per accepted answer (no duplicates).

- Total asked (accepted) questions ≤ 5.

- Updated sections contain no lingering vague placeholders the new answer was meant to resolve.

- No contradictory earlier statement remains (scan for now-invalid alternative choices removed).

- Markdown structure valid; only allowed new headings: ## Clarifications, ### Session YYYY-MM-DD.

- Terminology consistency: same canonical term used across all updated sections.
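Several of these invariants can be spot-checked mechanically. A minimal sketch, assuming the `- Q: ... → A: ...` bullet format and the session-heading pattern described above:

```python
import re

def validate_clarifications(spec_text, accepted_answers):
    """Spot-check the post-write invariants listed above."""
    bullets = re.findall(r"^- Q: .+ → A: .+$", spec_text, flags=re.MULTILINE)
    # Exactly one bullet per accepted answer, no duplicates, quota respected.
    assert len(bullets) == len(accepted_answers)
    assert len(set(bullets)) == len(bullets)
    assert len(accepted_answers) <= 5
    # Only the allowed new headings appear.
    assert "## Clarifications" in spec_text
    assert re.search(r"^### Session \d{4}-\d{2}-\d{2}$", spec_text, flags=re.MULTILINE)
    return True
```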

  7. Write the updated spec back to FEATURE_SPEC.
  8. Report completion (after questioning loop ends or early termination):

- Number of questions asked & answered.

- Path to updated spec.

- Sections touched (list names).

- Coverage summary table listing each taxonomy category with Status: Resolved (was Partial/Missing and addressed), Deferred (exceeds question quota or better suited for planning), Clear (already sufficient), Outstanding (still Partial/Missing but low impact).

- If any Outstanding or Deferred remain, recommend whether to proceed to speckit-plan or run speckit-clarify again later post-plan.

- Suggested next step.

Behavior rules:

  • If no meaningful ambiguities found (or all potential questions would be low-impact), respond: "No critical ambiguities detected worth formal clarification." and suggest proceeding.
  • If spec file missing, instruct the user to run speckit-specify first (do not create a new spec here).
  • Never exceed 5 total asked questions (clarification retries for a single question do not count as new questions).
  • Avoid speculative tech stack questions unless the absence blocks functional clarity.
  • Respect user early termination signals ("stop", "done", "proceed").
  • If no questions asked due to full coverage, output a compact coverage summary (all categories Clear) then suggest advancing.
  • If quota reached with unresolved high-impact categories remaining, explicitly flag them under Deferred with rationale.

Context for prioritization: the user's request and any stated constraints.

## Outputs

  • Updated specs//spec.md with clarifications appended and integrated

## Next Steps

After clarifications are resolved:

  • Plan implementation with speckit-plan.