speckit-tasks

From dceoy/speckit-agent-skills
What it does

Generates a dependency-ordered, actionable task list for a feature based on design artifacts and user stories.

Part of dceoy/speckit-agent-skills (10 items)

Installation

Clone the repository:

git clone https://github.com/github/speckit-agent-skills.git

Extracted from docs: dceoy/speckit-agent-skills
Added: Feb 4, 2026

Skill Details

SKILL.md

Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.

# Spec Kit Tasks Skill

Overview

When to Use

  • The implementation plan is ready and you need a dependency-ordered task list.

Inputs

  • specs//plan.md and specs//spec.md
  • Optional artifacts: data-model.md, contracts/, research.md, quickstart.md
  • Any user constraints or priorities from the request

If the plan is missing, ask the user to run speckit-plan first.

Workflow

  1. Setup: Run .specify/scripts/bash/check-prerequisites.sh --json from the repo root and parse FEATURE_DIR and the AVAILABLE_DOCS list. All paths must be absolute. For single quotes in arguments such as "I'm Groot", use escape syntax, e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
  2. Load design documents: Read from FEATURE_DIR:

- Required: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities)

- Optional: data-model.md (entities), contracts/ (API endpoints), research.md (decisions), quickstart.md (test scenarios)

- Note: Not all projects have all documents. Generate tasks based on what's available.

  3. Execute task generation workflow:

- Load plan.md and extract tech stack, libraries, project structure

- Load spec.md and extract user stories with their priorities (P1, P2, P3, etc.)

- If data-model.md exists: Extract entities and map to user stories

- If contracts/ exists: Map endpoints to user stories

- If research.md exists: Extract decisions for setup tasks

- Generate tasks organized by user story (see Task Generation Rules below)

- Generate dependency graph showing user story completion order

- Create parallel execution examples per user story

- Validate task completeness (each user story has all needed tasks, independently testable)

  4. Generate tasks.md: Use .specify/templates/tasks-template.md as structure, fill with:

- Correct feature name from plan.md

- Phase 1: Setup tasks (project initialization)

- Phase 2: Foundational tasks (blocking prerequisites for all user stories)

- Phase 3+: One phase per user story (in priority order from spec.md)

- Each phase includes: story goal, independent test criteria, tests (if requested), implementation tasks

- Final Phase: Polish & cross-cutting concerns

- All tasks must follow the strict checklist format (see Task Generation Rules below)

- Clear file paths for each task

- Dependencies section showing story completion order

- Parallel execution examples per story

- Implementation strategy section (MVP first, incremental delivery)

  5. Report: Output path to generated tasks.md and summary:

- Total task count

- Task count per user story

- Parallel opportunities identified

- Independent test criteria for each story

- Suggested MVP scope (typically just User Story 1)

- Format validation: Confirm ALL tasks follow the checklist format (checkbox, ID, labels, file paths)

Context for task generation: the user's request and any stated priorities

The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without additional context.
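Steps 1 and 2 of the workflow above can be sketched in Python. This is an illustrative helper, not part of the skill itself; it assumes the JSON printed by check-prerequisites.sh --json is an object with FEATURE_DIR and AVAILABLE_DOCS keys, and that the script's output has already been captured as a string:

```python
import json
from pathlib import Path

OPTIONAL_DOCS = ["data-model.md", "research.md", "quickstart.md"]

def load_design_docs(prereq_json: str) -> dict:
    """Parse check-prerequisites.sh --json output (assumed shape:
    {"FEATURE_DIR": ..., "AVAILABLE_DOCS": [...]}) and load design docs."""
    prereqs = json.loads(prereq_json)
    feature_dir = Path(prereqs["FEATURE_DIR"]).resolve()  # absolute paths only

    plan = feature_dir / "plan.md"
    if not plan.exists():
        # If the plan is missing, ask the user to run speckit-plan first.
        raise FileNotFoundError("plan.md not found - run speckit-plan first")

    docs = {
        "plan.md": plan.read_text(),                    # required
        "spec.md": (feature_dir / "spec.md").read_text(),  # required
    }
    for name in OPTIONAL_DOCS:
        path = feature_dir / name
        if path.exists():  # not all projects have all documents
            docs[name] = path.read_text()
    return docs
```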

Task Generation Rules

CRITICAL: Tasks MUST be organized by user story to enable independent implementation and testing.

Tests are OPTIONAL: Only generate test tasks if explicitly requested in the feature specification or if user requests TDD approach.

Checklist Format (REQUIRED)

Every task MUST strictly follow this format:

```text
- [ ] [TaskID] [P?] [Story?] Description with file path
```

Format Components:

  1. Checkbox: ALWAYS start with - [ ] (markdown checkbox)
  2. Task ID: Sequential number (T001, T002, T003...) in execution order
  3. [P] marker: Include ONLY if task is parallelizable (different files, no dependencies on incomplete tasks)
  4. [Story] label: REQUIRED for user story phase tasks only

- Format: [US1], [US2], [US3], etc. (maps to user stories from spec.md)

- Setup phase: NO story label

- Foundational phase: NO story label

- User Story phases: MUST have story label

- Polish phase: NO story label

  5. Description: Clear action with exact file path

Examples:

  • βœ… CORRECT: - [ ] T001 Create project structure per implementation plan
  • βœ… CORRECT: - [ ] T005 [P] Implement authentication middleware in src/middleware/auth.py
  • βœ… CORRECT: - [ ] T012 [P] [US1] Create User model in src/models/user.py
  • βœ… CORRECT: - [ ] T014 [US1] Implement UserService in src/services/user_service.py
  • ❌ WRONG: - [ ] Create User model (missing ID and Story label)
  • ❌ WRONG: T001 [US1] Create model (missing checkbox)
  • ❌ WRONG: - [ ] [US1] Create User model (missing Task ID)
  • ❌ WRONG: - [ ] T001 [US1] Create model (missing file path)
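The format rules above lend themselves to a mechanical check. Below is a minimal validator sketch in Python; the regex is a heuristic distilled from the examples and cannot verify that the description really ends in a file path, only that the structural components are present:

```python
import re

# Heuristic check for the required checklist format:
#   - [ ] T001 [P?] [US#?] Description with file path
TASK_RE = re.compile(
    r"^- \[ \] "        # markdown checkbox
    r"T\d{3} "          # sequential task ID (T001, T002, ...)
    r"(?:\[P\] )?"      # optional parallelizable marker
    r"(?:\[US\d+\] )?"  # optional user-story label
    r"\S.+$"            # non-empty description
)

def validate_task(line: str) -> bool:
    """Return True if a task line matches the strict checklist format."""
    return TASK_RE.match(line) is not None
```

For example, validate_task("- [ ] T012 [P] [US1] Create User model in src/models/user.py") is True, while validate_task("T001 [US1] Create model") is False because the checkbox is missing.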

Task Organization

  1. From User Stories (spec.md) - PRIMARY ORGANIZATION:

- Each user story (P1, P2, P3...) gets its own phase

- Map all related components to their story:

- Models needed for that story

- Services needed for that story

- Endpoints/UI needed for that story

- If tests requested: Tests specific to that story

- Mark story dependencies (most stories should be independent)

  2. From Contracts:

- Map each contract/endpoint β†’ to the user story it serves

- If tests requested: Each contract β†’ contract test task [P] before implementation in that story's phase

  3. From Data Model:

- Map each entity to the user story(ies) that need it

- If entity serves multiple stories: Put in earliest story or Setup phase

- Relationships β†’ service layer tasks in appropriate story phase

  4. From Setup/Infrastructure:

- Shared infrastructure β†’ Setup phase (Phase 1)

- Foundational/blocking tasks β†’ Foundational phase (Phase 2)

- Story-specific setup β†’ within that story's phase
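Story completion order follows from the declared dependencies. A sketch using Python's standard graphlib, with hypothetical story dependencies (in practice most stories should be independent):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph: each story maps to the set of
# stories that must be completed before it.
story_deps = {
    "US1": set(),      # P1: MVP, no dependencies
    "US2": set(),      # P2: independent of US1
    "US3": {"US1"},    # P3: builds on US1's model layer
}

# static_order() yields prerequisites before their dependents;
# stories with no mutual dependencies can proceed in parallel.
order = list(TopologicalSorter(story_deps).static_order())
```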

Phase Structure

  • Phase 1: Setup (project initialization)
  • Phase 2: Foundational (blocking prerequisites - MUST complete before user stories)
  • Phase 3+: User Stories in priority order (P1, P2, P3...)

- Within each story: Tests (if requested) β†’ Models β†’ Services β†’ Endpoints β†’ Integration

- Each phase should be a complete, independently testable increment

  • Final Phase: Polish & Cross-Cutting Concerns
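Under the phase layout above, a generated tasks.md might look like the following skeleton. Task IDs, story content, and file paths are hypothetical, and the exact heading style comes from tasks-template.md:

```text
## Phase 1: Setup
- [ ] T001 Create project structure per implementation plan

## Phase 2: Foundational
- [ ] T002 [P] Configure database connection in src/db/connection.py

## Phase 3: User Story 1 (P1)
Independent test: user can register and log in.
- [ ] T003 [P] [US1] Create User model in src/models/user.py
- [ ] T004 [US1] Implement UserService in src/services/user_service.py

## Final Phase: Polish & Cross-Cutting Concerns
- [ ] T005 Add structured logging in src/logging.py
```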

Outputs

  • specs//tasks.md

Next Steps

After tasks are generated:

  • Analyze cross-artifact consistency with speckit-analyze.
  • Implement the plan with speckit-implement.

More from this repository (9)

  • speckit-specify: Generates a concise feature specification and branch name from a natural language description, ensuring unique naming and context preservation.

  • speckit-implement: Implements feature tasks from tasks.md by executing a comprehensive implementation workflow with prerequisite and checklist validation.

  • speckit-plan: Generates a comprehensive implementation plan for a feature by researching unknowns, defining data models, and creating technical design artifacts.

  • speckit-constitution: Generates and synchronizes project constitution templates by interactively collecting principles and automatically updating dependent artifacts.

  • speckit-baseline: Automatically generates feature specifications by analyzing existing source code, extracting key functionality, and creating standardized documentation.

  • speckit-analyze: Analyzes cross-artifact consistency across the feature specification, implementation plan, and task list.

  • speckit-clarify: Identifies and resolves underspecified areas in a feature specification by asking targeted clarification questions and updating the spec accordingly.

  • speckit-taskstoissues: Converts tasks from a markdown file into dependency-ordered, actionable GitHub issues for a specific feature.

  • speckit-checklist: Generates tailored, domain-specific checklists that validate requirements quality by identifying completeness, clarity, and potential gaps.