cloudflare-r2

Skill from akornmeier/claude-config (24 items)
What it does

Enables seamless S3-compatible object storage on Cloudflare's global network with zero egress fees, supporting file uploads, downloads, and migrations.

πŸ“¦

Part of

akornmeier/claude-config(24 items)

cloudflare-r2

Installation

npm install -g wrangler
πŸ“– Extracted from docs: akornmeier/claude-config
1Installs
-
AddedFeb 4, 2026

Skill Details

SKILL.md

Guide for implementing Cloudflare R2 - S3-compatible object storage with zero egress fees. Use when implementing file storage, uploads/downloads, data migration to/from R2, configuring buckets, integrating with Workers, or working with R2 APIs and SDKs.

Overview

# Cloudflare R2

S3-compatible object storage with zero egress bandwidth fees. Built on Cloudflare's global network for high durability (11 nines) and strong consistency.

When to Use This Skill

  • Implementing object storage for applications
  • Migrating from AWS S3 or other storage providers
  • Setting up file uploads/downloads
  • Configuring public or private buckets
  • Integrating R2 with Cloudflare Workers
  • Using R2 with S3-compatible tools and SDKs
  • Configuring CORS, lifecycles, or event notifications
  • Optimizing storage costs with zero egress fees

Prerequisites

Required:

  • Cloudflare account with R2 purchased
  • Account ID from Cloudflare dashboard

For API access:

  • R2 Access Keys (Access Key ID + Secret Access Key)
  • Generate from: Cloudflare Dashboard β†’ R2 β†’ Manage R2 API Tokens

For Wrangler CLI:

```bash

npm install -g wrangler

wrangler login

```

Core Concepts

Architecture

  • S3-compatible API - works with AWS SDKs and tools
  • Workers API - native Cloudflare Workers integration
  • Global network - strong consistency across all regions
  • Zero egress fees - no bandwidth charges for data retrieval

Storage Classes

  • Standard - default, optimized for frequent access
  • Infrequent Access - lower storage cost, retrieval fees apply, 30-day minimum
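
The class can be chosen per object at write time. A minimal sketch using the R2 Workers binding (covered below); env.MY_BUCKET and logData are placeholder names:

```javascript
// Write an archive object directly to Infrequent Access; objects written
// without the option use the bucket's default class (Standard unless changed).
await env.MY_BUCKET.put('archives/2023-logs.tar.gz', logData, {
  storageClass: 'InfrequentAccess',
});
```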

Access Methods

  1. R2 Workers Binding - serverless integration (recommended for new apps)
  2. S3 API - compatibility with existing tools
  3. Public buckets - direct HTTP access via custom domains or r2.dev
  4. Presigned URLs - temporary access without credentials

Quick Start

1. Create Bucket

Wrangler:

```bash

wrangler r2 bucket create my-bucket

```

With location hint:

```bash

wrangler r2 bucket create my-bucket --location=wnam

```

Locations: wnam (West NA), enam (East NA), weur (West EU), eeur (East EU), apac (Asia Pacific)

2. Upload Object

Wrangler:

```bash

wrangler r2 object put my-bucket/file.txt --file=./local-file.txt

```

Workers API:

```javascript
await env.MY_BUCKET.put('file.txt', fileContents, {
  httpMetadata: {
    contentType: 'text/plain',
  },
});
```

3. Download Object

Wrangler:

```bash

wrangler r2 object get my-bucket/file.txt --file=./downloaded.txt

```

Workers API:

```javascript

const object = await env.MY_BUCKET.get('file.txt');

const contents = await object.text();

```

Workers Integration

Binding Configuration

wrangler.toml:

```toml

[[r2_buckets]]

binding = "MY_BUCKET"

bucket_name = "my-bucket"

preview_bucket_name = "my-bucket-preview"

```
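
With the binding in place, the bucket is available on env inside the Worker. A minimal sketch of a Worker that proxies GET and PUT requests to the bucket (routing and auth omitted; the object key is taken from the URL path):

```javascript
export default {
  async fetch(request, env) {
    // Use the URL path (minus the leading slash) as the object key.
    const key = new URL(request.url).pathname.slice(1);

    if (request.method === 'PUT') {
      await env.MY_BUCKET.put(key, request.body);
      return new Response(`Stored ${key}`, { status: 201 });
    }

    const object = await env.MY_BUCKET.get(key);
    if (object === null) {
      return new Response('Not found', { status: 404 });
    }
    return new Response(object.body);
  },
};
```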

Common Operations

Upload with metadata:

```javascript
await env.MY_BUCKET.put('user-uploads/photo.jpg', imageData, {
  httpMetadata: {
    contentType: 'image/jpeg',
    cacheControl: 'public, max-age=31536000',
  },
  customMetadata: {
    uploadedBy: userId,
    uploadDate: new Date().toISOString(),
  },
});
```

Download with streaming:

```javascript
const object = await env.MY_BUCKET.get('large-file.mp4');

if (object === null) {
  return new Response('Not found', { status: 404 });
}

return new Response(object.body, {
  headers: {
    'Content-Type': object.httpMetadata.contentType,
    ETag: object.etag,
  },
});
```

List objects:

```javascript
const listed = await env.MY_BUCKET.list({
  prefix: 'user-uploads/',
  limit: 100,
});

for (const object of listed.objects) {
  console.log(object.key, object.size);
}
```
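
Listings are paginated: when a result is truncated, pass the returned cursor to the next call. A sketch of walking a large prefix, assuming the standard list result fields (truncated, cursor):

```javascript
// Iterate every object under a prefix, one page at a time.
let cursor;
do {
  const page = await env.MY_BUCKET.list({ prefix: 'user-uploads/', cursor });
  for (const object of page.objects) {
    console.log(object.key, object.size);
  }
  cursor = page.truncated ? page.cursor : undefined;
} while (cursor);
```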

Delete object:

```javascript

await env.MY_BUCKET.delete('old-file.txt');

```

Check if object exists:

```javascript
const object = await env.MY_BUCKET.head('file.txt');

if (object) {
  console.log('Exists:', object.size, 'bytes');
}
```
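
For very large objects it can be cheaper to read only part of the body. A sketch using the binding's range option (offsets in bytes; the returned body contains only the requested slice):

```javascript
// Read just the first 1 MiB of a large object.
const object = await env.MY_BUCKET.get('large-file.mp4', {
  range: { offset: 0, length: 1024 * 1024 },
});
if (object !== null) {
  const firstChunk = await object.arrayBuffer();
  console.log(`Read ${firstChunk.byteLength} bytes`);
}
```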

S3 SDK Integration

AWS CLI

Configure:

```bash

aws configure

# Access Key ID:

# Secret Access Key:

# Region: auto

```

Operations:

```bash
# List buckets
aws s3api list-buckets --endpoint-url https://<ACCOUNT_ID>.r2.cloudflarestorage.com

# Upload file
aws s3 cp file.txt s3://my-bucket/ --endpoint-url https://<ACCOUNT_ID>.r2.cloudflarestorage.com

# Generate presigned URL (expires in 1 hour)
aws s3 presign s3://my-bucket/file.txt --endpoint-url https://<ACCOUNT_ID>.r2.cloudflarestorage.com --expires-in 3600
```

JavaScript (AWS SDK v3)

```javascript
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  region: 'auto',
  endpoint: `https://${accountId}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY,
  },
});

await s3.send(
  new PutObjectCommand({
    Bucket: 'my-bucket',
    Key: 'file.txt',
    Body: fileContents,
  })
);
```
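
The same client can mint presigned URLs for temporary access, e.g. with the @aws-sdk/s3-request-presigner package. A sketch reusing the s3 client above; bucket and key are placeholders:

```javascript
import { GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

// The URL grants read access to this one object for an hour, then expires.
const url = await getSignedUrl(
  s3,
  new GetObjectCommand({ Bucket: 'my-bucket', Key: 'file.txt' }),
  { expiresIn: 3600 }
);
console.log(url);
```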

Python (Boto3)

```python
import boto3

s3 = boto3.client(
    service_name="s3",
    endpoint_url=f'https://{account_id}.r2.cloudflarestorage.com',
    aws_access_key_id=access_key_id,
    aws_secret_access_key=secret_access_key,
    region_name="auto",
)

# Upload file
s3.upload_fileobj(file_obj, 'my-bucket', 'file.txt')

# Download file
s3.download_file('my-bucket', 'file.txt', './local-file.txt')
```

Rclone (Large Files)

Configure:

```bash

rclone config

# Select: Amazon S3 β†’ Cloudflare R2

# Enter credentials and endpoint

```

Upload with multipart optimization:

```bash
# For large files (>100MB)
rclone copy large-video.mp4 r2:my-bucket/ \
  --s3-upload-cutoff=100M \
  --s3-chunk-size=100M
```

Public Buckets

Enable Public Access

Wrangler:

```bash

wrangler r2 bucket create my-public-bucket

# Then enable in dashboard: R2 β†’ Bucket β†’ Settings β†’ Public Access

```

Access URLs

r2.dev (development only, rate-limited):

```
https://pub-<BUCKET_HASH>.r2.dev/file.txt
```

Custom domain (recommended for production):

  1. Dashboard β†’ R2 β†’ Bucket β†’ Settings β†’ Public Access
  2. Add custom domain
  3. Cloudflare handles DNS/TLS automatically

CORS Configuration

Required for:

  • Browser-based uploads
  • Cross-origin API calls
  • Presigned URL usage from web apps

Wrangler:

```bash
wrangler r2 bucket cors put my-bucket --rules '[
  {
    "AllowedOrigins": ["https://example.com"],
    "AllowedMethods": ["GET", "PUT", "POST"],
    "AllowedHeaders": ["*"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3600
  }
]'
```

Important: Origins must match exactly (no trailing slash).
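
As a browser-side check, a cross-origin PUT to a presigned URL only succeeds once the rule above allows the page's origin and the PUT method. A minimal sketch; presignedUrl is a placeholder obtained from your backend (see the presign examples above):

```javascript
// Runs in the browser; fails with a CORS error if the origin or method
// is not covered by the bucket's CORS rules.
const response = await fetch(presignedUrl, {
  method: 'PUT',
  headers: { 'Content-Type': 'text/plain' },
  body: 'hello from the browser',
});
if (!response.ok) {
  throw new Error(`Upload failed: ${response.status}`);
}
```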

Multipart Uploads

For files >100MB or parallel uploads:

Workers API:

```javascript

const multipart = await env.MY_BUCKET.createMultipartUpload('large-file.mp4');

// Upload parts (5MiB - 5GiB each, max 10,000 parts)

const part1 = await multipart.uploadPart(1, chunk1);

const part2 = await multipart.uploadPart(2, chunk2);

// Complete upload

const object = await multipart.complete([part1, part2]);

```

Constraints:

  • Part size: 5MiB - 5GiB
  • Max parts: 10,000
  • Max object size: 5TB
  • Incomplete uploads auto-abort after 7 days (configurable via lifecycle)
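
Parts do not have to be uploaded in a single invocation: the Workers binding can re-attach to an in-progress upload from its key and uploadId in a later request. A sketch, assuming the binding's resumeMultipartUpload helper; uploadId, chunk3, and uploadedParts are placeholders persisted between requests (e.g. in KV or a database):

```javascript
// Later request: re-attach to the upload started earlier.
const multipart = env.MY_BUCKET.resumeMultipartUpload('large-file.mp4', uploadId);

// Upload the next part, then finish once all parts are recorded.
const nextPart = await multipart.uploadPart(3, chunk3);
uploadedParts.push(nextPart);

const object = await multipart.complete(uploadedParts);
console.log('Assembled', object.key, object.size, 'bytes');
```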

Data Migration

Sippy (Incremental, On-Demand)

Best for: Gradual migration, avoiding upfront egress fees

```bash
# Enable for bucket
wrangler r2 bucket sippy enable my-bucket \
  --provider=aws \
  --bucket=source-bucket \
  --region=us-east-1 \
  --access-key-id=$AWS_KEY \
  --secret-access-key=$AWS_SECRET
```

Objects migrate when first requested; subsequent requests are served from R2.

Super Slurper (Bulk, One-Time)

Best for: Complete migration, known object list

  1. Dashboard β†’ R2 β†’ Data Migration β†’ Super Slurper
  2. Select source provider (AWS, GCS, Azure)
  3. Enter credentials and bucket name
  4. Start migration

Lifecycle Rules

Auto-delete or transition storage classes:

Wrangler:

```bash
wrangler r2 bucket lifecycle put my-bucket --rules '[
  {
    "action": {"type": "AbortIncompleteMultipartUpload"},
    "filter": {},
    "abortIncompleteMultipartUploadDays": 7
  },
  {
    "action": {"type": "Transition", "storageClass": "InfrequentAccess"},
    "filter": {"prefix": "archives/"},
    "daysFromCreation": 90
  }
]'
```

Event Notifications

Trigger Workers on bucket events:

Wrangler:

```bash
wrangler r2 bucket notification create my-bucket \
  --queue=my-queue \
  --event-type=object-create
```

Supported events:

  • object-create - new uploads
  • object-delete - deletions

Message format:

```json

{

"account": "account-id",

"bucket": "my-bucket",

"object": { "key": "file.txt", "size": 1024, "etag": "..." },

"action": "PutObject",

"eventTime": "2024-01-15T12:00:00Z"

}

```
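
On the consuming side, a Worker bound to the queue receives these messages in its queue handler. A minimal sketch of a consumer that reacts to new uploads (queue binding configuration and error handling omitted):

```javascript
export default {
  async queue(batch, env) {
    for (const msg of batch.messages) {
      const { bucket, object, action } = msg.body;
      if (action === 'PutObject') {
        console.log(`New object in ${bucket}: ${object.key} (${object.size} bytes)`);
      }
      msg.ack();
    }
  },
};
```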

Best Practices

Performance

  • Use Cloudflare Cache with custom domains for frequently accessed objects (see the sketch below)
  • Multipart uploads for files >100MB (faster, more reliable)
  • Rclone for batch operations (concurrent transfers)
  • Choose location hints that match your users' geography
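
A sketch of the caching recommendation above: serve bucket reads through the Workers Cache API so repeat requests for hot objects are answered from Cloudflare's cache instead of R2 (cache key and TTL are illustrative):

```javascript
export default {
  async fetch(request, env, ctx) {
    const key = new URL(request.url).pathname.slice(1);
    const cache = caches.default;

    // Serve from the edge cache when possible.
    let response = await cache.match(request);
    if (response) return response;

    const object = await env.MY_BUCKET.get(key);
    if (object === null) return new Response('Not found', { status: 404 });

    response = new Response(object.body, {
      headers: { 'Cache-Control': 'public, max-age=3600', etag: object.httpEtag },
    });
    // Populate the cache asynchronously so the response is not delayed.
    ctx.waitUntil(cache.put(request, response.clone()));
    return response;
  },
};
```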

Security

  • Never commit Access Keys to version control
  • Use environment variables for credentials
  • Bucket-scoped tokens for least privilege
  • Presigned URLs for temporary access
  • Enable Cloudflare Access for additional protection

Cost Optimization

  • Infrequent Access storage for archives (30+ day retention)
  • Lifecycle rules to auto-transition or delete
  • Larger multipart chunks = fewer Class A operations
  • Monitor usage via dashboard analytics

Naming

  • Bucket names: lowercase, hyphens, 3-63 chars
  • Avoid sequential key prefixes for better performance; hashed prefixes spread load more evenly (see the sketch below)
  • No dots in bucket names if using custom domains with TLS
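
A sketch of the hashed-prefix idea from the naming note above: derive a short hash from the object name and use it as the key prefix so writes spread across the keyspace instead of piling onto one sequential prefix. Uses the Web Crypto API available in Workers; the helper name is illustrative:

```javascript
// Prefix each key with the first two bytes of a SHA-256 digest of the name,
// e.g. "a1f3/invoice-2024-0001.pdf" instead of "2024/invoice-2024-0001.pdf".
async function hashedKey(name) {
  const digest = await crypto.subtle.digest('SHA-256', new TextEncoder().encode(name));
  const prefix = [...new Uint8Array(digest)]
    .slice(0, 2)
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
  return `${prefix}/${name}`;
}

await env.MY_BUCKET.put(await hashedKey('invoice-2024-0001.pdf'), fileContents);
```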

Limits

  • Buckets per account: 1,000
  • Object size: 5TB max
  • Bucket name: 3-63 characters
  • Lifecycle rules: 1,000 per bucket
  • Event notification rules: 100 per bucket
  • r2.dev rate limit: 1,000 req/min (use custom domains for production)

Troubleshooting

401 Unauthorized:

  • Verify Access Keys are correct
  • Check endpoint URL includes account ID
  • Ensure region is "auto" for most operations

403 Forbidden:

  • Check bucket permissions and token scopes
  • Verify CORS configuration for browser requests
  • Confirm bucket exists and name is correct

404 Not Found:

  • Object keys are case-sensitive
  • Check bucket name spelling
  • Verify object was uploaded successfully

Presigned URLs not working:

  • Verify CORS configuration
  • Check URL expiry time
  • Ensure origin matches CORS rules exactly

Multipart upload failures:

  • Part size must be 5MiB - 5GiB
  • Max 10,000 parts per upload
  • Complete upload within 7 days (or configure lifecycle)

Reference Files

For detailed documentation, see:

  • references/api-reference.md - Complete API endpoint documentation
  • references/sdk-examples.md - SDK examples for all languages
  • references/workers-patterns.md - Advanced Workers integration patterns
  • references/pricing-guide.md - Detailed pricing and cost optimization

Additional Resources

  • Documentation: https://developers.cloudflare.com/r2/
  • Wrangler Commands: https://developers.cloudflare.com/r2/reference/wrangler-commands/
  • S3 Compatibility: https://developers.cloudflare.com/r2/api/s3/api/
  • Workers API: https://developers.cloudflare.com/r2/api/workers/workers-api-reference/

More from this repository (10)

  • postgresql-psql - Enables interactive PostgreSQL database management through psql, executing queries, scripting, and administering database operations from the command line.
  • shadcn-ui - Provides customizable, accessible React UI components using Tailwind CSS that can be directly copied and modified in your project.
  • openspec-dev - Automates OpenSpec change implementation by parsing tasks, filtering phases, and creating PRs using subagent-driven development.
  • cloudflare-workers - Enables building and deploying serverless edge functions across Cloudflare's global network using JavaScript, TypeScript, Python, and Rust.
  • better-auth - Securely implement authentication and authorization with a framework-agnostic TypeScript library supporting email/password, OAuth, 2FA, and advanced auth features.
  • mongodb - Enables comprehensive MongoDB database management, from schema design and querying to deployment, performance optimization, and secure integration across multiple platforms and languages.
  • nuxt-ui-tdd - Guides developers in building Vue 3 NuxtUI components using strict Test-Driven Development (TDD) methodology with atomic design principles and Storybook interaction tests.
  • turborepo - Accelerates monorepo development by enabling intelligent caching, parallel task execution, and efficient build performance across JavaScript and TypeScript projects.
  • skill-creator - Generates configurable AI skills with customizable parameters and templates for Claude-based applications.
  • prd-to-ux - Translates product requirements into comprehensive UX specifications by systematically analyzing user mental models, information architecture, and interaction design.