replicate-cli

A Claude Code skill from the rawveg/skillsforge-marketplace collection (33 items).

Installation

Add the marketplace to Claude Code:

```bash
/plugin marketplace add rawveg/skillsforge-marketplace
```

Or run with npx:

```bash
npx skills rawveg/skillsforge-marketplace
```

Install the plugin from the marketplace:

```bash
/plugin install https://github.com/rawveg/skillsforge-marketplace/tree/main/skill-name
```

Alternatively, clone the repository and add it as a local marketplace:

```bash
git clone https://github.com/rawveg/skillsforge-marketplace.git
/plugin marketplace add ./skillsforge-marketplace
```

Extracted from docs: rawveg/skillsforge-marketplace. Last updated: Jan 17, 2026.

Skill Details

SKILL.md

This skill provides comprehensive guidance for using the Replicate CLI to run AI models, create predictions, manage deployments, and fine-tune models. Use it whenever the user wants to interact with Replicate's AI model platform from the command line, whether that means running image generation models, language models, or any other ML model hosted on Replicate, or working with the Replicate API through the CLI.

Overview


The Replicate CLI is a command-line tool for interacting with Replicate's AI model platform. It enables running predictions, managing models, creating deployments, and fine-tuning models directly from the terminal.

Authentication

Before using the Replicate CLI, set the API token:

```bash
export REPLICATE_API_TOKEN=<your-api-token>
```

Alternatively, authenticate interactively:

```bash

replicate auth login

```

Verify authentication:

```bash

replicate account current

```
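In scripts, it can help to fail fast when the token is missing rather than letting a later CLI call error out mid-run. A minimal sketch in plain POSIX shell (the `require_token` helper is illustrative, not part of the CLI):

```shell
# Illustrative helper: abort early if the Replicate API token
# is not set in the environment.
require_token() {
  if [ -z "${REPLICATE_API_TOKEN:-}" ]; then
    echo "REPLICATE_API_TOKEN is not set" >&2
    return 1
  fi
}

require_token && echo "token present" || echo "token missing"
```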

Core Commands

Running Predictions

The primary use case is running predictions against hosted models.

Basic prediction:

```bash
replicate run <owner/model-name> input_key=value
```

Examples:

Image generation:

```bash

replicate run stability-ai/sdxl prompt="a studio photo of a rainbow colored corgi"

```

Text generation with streaming:

```bash

replicate run meta/llama-2-70b-chat --stream prompt="Tell me a joke"

```

Prediction flags:

  • --stream - Stream output tokens in real-time (for text models)
  • --no-wait - Submit prediction without waiting for completion
  • --web - Open prediction in browser
  • --json - Output result as JSON
  • --save - Save outputs to local directory
  • --output-directory - Specify output directory (default: ./{prediction-id})

Input Handling

File uploads: Prefix local file paths with @:

```bash

replicate run nightmareai/real-esrgan image=@photo.jpg

```
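Since `@`-prefixed paths are read at submission time, verifying the file exists first gives a clearer error than a failed upload. A small sketch in plain shell (the `check_input` helper is illustrative, not part of the CLI):

```shell
# Illustrative helper: confirm an input file exists before passing
# it to the CLI with the @ prefix.
check_input() {
  [ -f "$1" ] || { echo "missing input file: $1" >&2; return 1; }
}

touch photo.jpg                  # stand-in for a real image
check_input photo.jpg && echo "ok: photo.jpg"
# Then: replicate run nightmareai/real-esrgan image=@photo.jpg
```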

Output chaining: Use {{.output}} template syntax to chain predictions:

```bash
replicate run stability-ai/sdxl prompt="a corgi" | \
  replicate run nightmareai/real-esrgan image={{.output[0]}}
```

Model Operations

View model schema (see required inputs and outputs):

```bash
replicate model schema <owner/model-name>

replicate model schema stability-ai/sdxl --json
```

List models:

```bash

replicate model list

replicate model list --json

```

Show model details:

```bash
replicate model show <owner/model-name>
```

Create a new model:

```bash
replicate model create <owner/model-name> \
  --hardware gpu-a100-large \
  --private \
  --description "Model description"
```

Model creation flags:

  • --hardware - Hardware SKU (see references/hardware.md)
  • --private / --public - Visibility setting
  • --description - Model description
  • --github-url - Link to source repository
  • --license-url - License information
  • --cover-image-url - Cover image for model page

Training (Fine-tuning)

Fine-tune models using the training command:

```bash
replicate train <owner/model-name> \
  --destination <owner/new-model-name> \
  input_key=value
```

Example - Fine-tune SDXL with DreamBooth:

```bash
replicate train stability-ai/sdxl \
  --destination myuser/custom-sdxl \
  --web \
  input_images=@training-images.zip \
  use_face_detection_instead=true
```

List trainings:

```bash

replicate training list

```

Show training details:

```bash
replicate training show <training-id>
```

Deployments

Deployments provide dedicated, always-on inference endpoints with predictable performance.

Create deployment:

```bash
replicate deployments create <deployment-name> \
  --model <owner/model-name> \
  --hardware <hardware-sku> \
  --min-instances 1 \
  --max-instances 3
```

Example:

```bash
replicate deployments create text-to-image \
  --model stability-ai/sdxl \
  --hardware gpu-a100-large \
  --min-instances 1 \
  --max-instances 5
```

Update deployment:

```bash
replicate deployments update <deployment-name> \
  --max-instances 10 \
  --version <version-id>
```

List deployments:

```bash

replicate deployments list

```

Show deployment details and schema:

```bash
replicate deployments show <deployment-name>

replicate deployments schema <deployment-name>
```

Hardware

List available hardware options:

```bash

replicate hardware list

```

See references/hardware.md for detailed hardware information and selection guidelines.

Scaffolding

Create a local development environment from an existing prediction:

```bash
replicate scaffold <prediction-url> --template=<template>
```

This generates a project with the prediction's model and inputs pre-configured.

Command Aliases

For convenience, these aliases are available:

| Alias | Equivalent Command |
|-------|-------------------|
| replicate run | replicate prediction create |
| replicate stream | replicate prediction create --stream |
| replicate train | replicate training create |

Short aliases for subcommands:

  • replicate m = replicate model
  • replicate p = replicate prediction
  • replicate t = replicate training
  • replicate d = replicate deployments
  • replicate hw = replicate hardware
  • replicate a = replicate account

Common Workflows

Image Generation Pipeline

Generate an image and upscale it:

```bash
replicate run stability-ai/sdxl \
  prompt="professional photo of a sunset" \
  negative_prompt="blurry, low quality" | \
replicate run nightmareai/real-esrgan \
  image={{.output[0]}} \
  --save
```

Check Model Inputs Before Running

Always check the model schema to understand required inputs:

```bash

replicate model schema owner/model-name

```

Batch Processing

Run predictions and save outputs:

```bash
for prompt in "cat" "dog" "bird"; do
  replicate run stability-ai/sdxl prompt="$prompt" --save --output-directory "./outputs/$prompt"
done
```
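The loop above works for single-word prompts; when prompts contain spaces or slashes, sanitizing them before use as directory names avoids broken or nested paths. A sketch (the `slugify` helper is illustrative and assumes GNU `tr`):

```shell
# Illustrative helper: reduce an arbitrary prompt to a filesystem-safe
# directory name (lowercase letters, digits, and hyphens only).
slugify() {
  printf '%s' "$1" | tr '[:upper:] /' '[:lower:]--' | tr -cd 'a-z0-9-'
}

for prompt in "red cat" "blue/dog"; do
  dir="./outputs/$(slugify "$prompt")"
  echo "$dir"
  # replicate run stability-ai/sdxl prompt="$prompt" --save --output-directory "$dir"
done
```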

Monitor Long-Running Tasks

Submit without waiting, then check status:

```bash
# Submit
replicate run owner/model input=value --no-wait --json > prediction.json

# Check status later
replicate prediction show $(jq -r '.id' prediction.json)
```
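The `jq` step can be exercised without touching the API. A minimal sketch using a simulated saved record (the `id` field shape is an assumption about what the `--json` output contains; `jq` must be installed):

```shell
# Simulate the record that `replicate run ... --no-wait --json` would save.
cat > prediction.json <<'EOF'
{"id": "abc123", "status": "starting"}
EOF

# Extract the prediction id for a later status check.
id=$(jq -r '.id' prediction.json)
echo "$id"
# Later: replicate prediction show "$id"
```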

Best Practices

  1. Always check schema first - Run replicate model schema to understand required and optional inputs before running predictions.
  2. Use streaming for text models - Add the --stream flag when running language models to see output in real-time.
  3. Save outputs explicitly - Use --save and --output-directory to organize prediction outputs.
  4. Use JSON output for automation - Add the --json flag when parsing outputs programmatically.
  5. Open in web for debugging - Add the --web flag to view predictions in the Replicate dashboard for detailed logs.
  6. Chain predictions efficiently - Use the {{.output}} syntax to pass outputs between models without intermediate saves.

Troubleshooting

Authentication errors:

  • Verify REPLICATE_API_TOKEN is set correctly
  • Run replicate account current to test authentication

Model not found:

  • Check model name format: owner/model-name
  • Verify model exists at replicate.com

Input validation errors:

  • Run replicate model schema to see required inputs
  • Check input types (string, number, file)

File upload issues:

  • Ensure @ prefix is used for local files
  • Verify file path is correct and file exists

Additional Resources

  • Replicate documentation: https://replicate.com/docs
  • Model explorer: https://replicate.com/explore
  • API reference: https://replicate.com/docs/reference/http
  • GitHub repository: https://github.com/replicate/cli