### Operation 1: View Build Logs
Access build process logs to debug compilation and dependency issues.
CLI Commands:
```bash
# Stream build logs (live)
railway logs --build
# Last 100 lines of build logs
railway logs --build --lines 100
# Export build logs to file
railway logs --build --lines 500 > build-logs.txt
```
Dashboard Access:
- Navigate to Railway Dashboard → Your Service
- Click Deployments tab
- Select deployment
- View Build Logs section
What to Look For:
- ✅ Dependency installation success
- ✅ Build steps completion
- ⚠️ Warning messages (may indicate future issues)
- ❌ Build failures and error messages
- 📊 Build time (optimization opportunities)
Common Issues:
| Issue | Solution |
|-------|----------|
| Dependencies not installing | Check package.json/requirements.txt |
| Build timeout | Optimize build process or increase timeout |
| Missing build command | Set in Railway dashboard or railway.json |
| Cache issues | Force rebuild without cache |
See Also: [railway-troubleshooting](../railway-troubleshooting/SKILL.md) for cache busting
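The common issues above can be triaged automatically against an exported build log. A minimal sketch, where the failure signatures and suggested fixes are illustrative assumptions rather than an official Railway list:

```python
import re

# Illustrative failure signatures -> suggested fixes.
# These patterns are assumptions for demonstration, not an official list.
BUILD_ISSUES = {
    r"npm ERR!|No matching distribution found": "Check package.json/requirements.txt",
    r"timed? ?out": "Optimize build process or increase timeout",
    r"no build command|missing script": "Set build command in dashboard or railway.json",
}

def triage_build_log(text: str) -> list[str]:
    """Return suggested fixes for known failure patterns found in a build log."""
    hits = []
    for pattern, fix in BUILD_ISSUES.items():
        if re.search(pattern, text, re.IGNORECASE):
            hits.append(fix)
    return hits

# Run against a log exported with: railway logs --build --lines 500 > build-logs.txt
sample = "npm ERR! code ERESOLVE\nnpm ERR! could not resolve dependency"
print(triage_build_log(sample))
```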
---
### Operation 2: View Deploy Logs
Monitor deployment lifecycle and health check status.
CLI Commands:
```bash
# Stream deployment logs (live)
railway logs --deployment
# Last N lines of deploy logs
railway logs --deployment --lines 200
# Export deploy logs
railway logs --deployment --lines 500 > deploy-logs.txt
```
Dashboard Access:
- Railway Dashboard → Service → Deployments
- Select deployment
- View Deploy Logs section
Deployment Phases:
- Building - Compiling code
- Publishing - Creating container image
- Deploying - Rolling out to infrastructure
- Health Checking - Verifying service health
- Active - Deployment live
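When tailing deploy logs in a script, the phases above can be tracked by scanning for phase names. A sketch, assuming the phase keywords appear verbatim in log lines (the exact wording Railway emits may differ):

```python
# Infer the most recent deployment phase from deploy-log lines.
# Matching on these keywords is an assumption; real log wording may vary.
PHASES = ["Building", "Publishing", "Deploying", "Health Checking", "Active"]

def latest_phase(lines):
    """Return the last deployment phase mentioned in a sequence of log lines."""
    current = None
    for line in lines:
        for phase in PHASES:
            if phase.lower() in line.lower():
                current = phase
    return current

log = [
    "=== Building ===",
    "=== Publishing image ===",
    "=== Deploying to region ===",
]
print(latest_phase(log))  # -> Deploying
```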
Health Check Debugging:
```bash
# View health check failures
railway logs --deployment | grep "health check"
# Common health check issues:
# - Port not exposed correctly
# - Application not binding to 0.0.0.0
# - Health endpoint not responding
# - Application crashing during startup
```
Troubleshooting:
- Health check failing? Verify the PORT environment variable
- Deployment stuck? Check for blocking startup processes
- Rollback occurring? Check health check configuration
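Two of these failure modes (wrong port, not binding to 0.0.0.0) come down to how the server starts. A minimal Python sketch of a health-check-friendly startup; the `/health` path and default port are assumptions to adapt to your service:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Minimal handler exposing a /health endpoint for health checks."""

    def do_GET(self):
        if self.path == "/health":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

def make_server():
    # Railway injects PORT; bind to 0.0.0.0 (not 127.0.0.1) so the
    # health checker can reach the process from outside the container.
    port = int(os.environ.get("PORT", 8080))
    return HTTPServer(("0.0.0.0", port), HealthHandler)

# To run: server = make_server(); server.serve_forever()
```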
---
### Operation 3: View Runtime Logs
Access application stdout/stderr for debugging runtime behavior.
CLI Commands:
```bash
# Stream runtime logs (live, Ctrl+C to stop)
railway logs
# Last N lines (stops streaming)
railway logs --lines 500
# Stream different service/environment
railway logs --service backend --environment production
# Filter with Railway syntax
railway logs --filter "@level:error"
railway logs --lines 100 --filter "timeout"
# Pipe to grep for local filtering
railway logs | grep ERROR
# JSON output for parsing
railway logs --json | jq 'select(.level == "error")'
```
Dashboard Access:
- Railway Dashboard → Service → Observability
- Click Logs tab
- Real-time log stream with filtering
Structured Logging Best Practices:
Railway supports structured JSON logging. Output JSON on a single line:
```javascript
// Node.js Example
console.log(JSON.stringify({
level: 'error',
message: 'Database connection failed',
error: err.message,
timestamp: new Date().toISOString(),
userId: req.user?.id
}));
```
```python
# Python Example
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(format='%(message)s')
logger = logging.getLogger()

logger.error(json.dumps({
    'level': 'error',
    'message': 'Database connection failed',
    'error': str(e),  # `e` comes from the surrounding except block
    'timestamp': datetime.now(timezone.utc).isoformat(),
    'user_id': user_id
}))
```
Supported Log Levels:
- debug - Detailed diagnostic information
- info - General informational messages
- warn - Warning messages (potential issues)
- error - Error messages (failures)
Benefits:
- All JSON fields are searchable in Railway dashboard
- Better filtering and analysis
- Integration with log aggregation tools
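The snippets above can be folded into a small reusable helper so every log line comes out as single-line JSON. A sketch; the field names beyond `level`, `message`, and `timestamp` are illustrative:

```python
import json
import sys
from datetime import datetime, timezone

def log_json(level: str, message: str, **fields):
    """Emit one single-line JSON log record on stdout.
    Extra keyword arguments become searchable fields."""
    record = {
        "level": level,
        "message": message,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        **fields,  # e.g. user_id, request_id -- any names you choose
    }
    print(json.dumps(record), file=sys.stdout)
    return record  # returned for inspection/testing

log_json("error", "Database connection failed", error="ECONNREFUSED", user_id=123)
```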
---
### Operation 4: View HTTP Logs
Analyze HTTP request patterns and debug API issues.
Dashboard Access:
- Railway Dashboard → Service → Observability
- Click HTTP tab
- View request metadata
Available Metadata:
- HTTP method (GET, POST, etc.)
- Request path
- Status code
- Response time (ms)
- Client IP address
- User agent
- Timestamp
Filtering HTTP Logs:
```
# Filter by status code
@httpStatus:500
# Filter by path
@path:"/api/users"
# Combine filters
@httpStatus:500 AND @path:"/api"
```
Use Cases:
- Identify slow endpoints (high response time)
- Find error patterns (500, 404 status codes)
- Analyze traffic patterns
- Debug API issues
- Monitor rate limits
Performance Analysis:
```bash
# Find slow requests (>1000ms)
# Filter in dashboard: responseTime > 1000
# Find all 5xx errors
# Filter: @httpStatus:5xx
# Analyze specific endpoint
# Filter: @path:"/api/checkout"
```
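Outside the dashboard, the same analysis can be run locally on logs exported with `railway logs --json`. A sketch assuming each record is one JSON object per line with `path` and `responseTime` fields (those field names are assumptions; check your actual export):

```python
import json
from collections import defaultdict

def slow_endpoints(json_lines, threshold_ms=1000):
    """Count requests slower than threshold_ms, grouped by path,
    from JSON-lines log output."""
    slow = defaultdict(int)
    for line in json_lines:
        rec = json.loads(line)
        if rec.get("responseTime", 0) > threshold_ms:
            slow[rec.get("path", "?")] += 1
    return dict(slow)

lines = [
    '{"path": "/api/checkout", "responseTime": 2400}',
    '{"path": "/api/users", "responseTime": 120}',
    '{"path": "/api/checkout", "responseTime": 1800}',
]
print(slow_endpoints(lines))  # -> {'/api/checkout': 2}
```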
---
### Operation 5: Filter and Search Logs
Use Railway's powerful filtering syntax for targeted log analysis.
Filter Syntax:
| Filter | Example | Description |
|--------|---------|-------------|
| Substring | "error" | Search for text |
| HTTP Status | @httpStatus:500 | Filter by status code |
| Service ID | @service: | Filter by service |
| Log Level | @level:error | Filter by severity |
| Custom Field | @userId:123 | Filter by JSON field |
Boolean Operators:
```
# AND - Both conditions must match
@httpStatus:500 AND @path:"/api"
# OR - Either condition matches
@level:error OR @level:warn
# NOT - Exclude matches
NOT @path:"/health"
# Grouping
(@level:error OR @level:warn) AND @service:api
```
Common Filter Patterns:
```bash
# All errors from last hour
@level:error
# Slow HTTP requests (>1000ms)
@httpStatus:200 AND responseTime > 1000
# Failed API calls
@path:"/api" AND @httpStatus:5xx
# Exclude health checks
NOT @path:"/health" NOT @path:"/metrics"
# Specific user errors
@level:error AND @userId:12345
# Database connection issues
"connection refused" OR "timeout"
```
Dashboard Filtering:
- Observability → Logs
- Enter filter in search box
- Use dropdowns for common filters
- Save frequent filters as presets
CLI Filtering:
```bash
# Use grep for basic filtering (streaming is default)
railway logs | grep ERROR
# Use jq for JSON logs
railway logs --json | jq 'select(.level == "error")'
# Complex filtering with awk
railway logs | awk '/ERROR/ || /WARN/'
```
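When `jq` is unavailable, the same JSON filtering can be done in a few lines of Python. A sketch that assumes one JSON record per line, as with `railway logs --json`:

```python
import json

def filter_logs(stream, level=None, contains=None):
    """Yield JSON log records matching an optional level and
    an optional message substring."""
    for line in stream:
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines (e.g. framework banners)
        if level and rec.get("level") != level:
            continue
        if contains and contains not in rec.get("message", ""):
            continue
        yield rec

records = [
    '{"level": "error", "message": "connection timeout"}',
    '{"level": "info", "message": "ready"}',
]
for rec in filter_logs(records, level="error", contains="timeout"):
    print(rec["message"])
```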
---
### Operation 6: Export Logs Externally
Export logs to external systems for long-term retention and analysis.
Why Export?
- Railway retention: 7-30 days (plan dependent)
- Long-term log storage
- Advanced analytics
- Compliance requirements
- Centralized multi-service logging
External Export Options:
#### Option 1: Locomotive Sidecar (Webhook Export)
Deploy a sidecar container to forward logs via webhooks.
Repository: https://github.com/railwayapp/locomotive
Setup:
```bash
# Add locomotive service to Railway project
railway service create locomotive
# Configure environment variables
WEBHOOK_URL=https://your-log-endpoint.com/ingest
WEBHOOK_METHOD=POST
WEBHOOK_HEADERS='{"Authorization": "Bearer xxx"}'
# Deploy locomotive
railway up
```
Supported Destinations:
- Custom webhooks
- Datadog
- Axiom
- BetterStack
- Logtail
- Any HTTP endpoint
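On the receiving side, a custom webhook destination only needs to parse and store incoming batches. A minimal sketch; the payload shape (a JSON array of records) is an assumption here, so check the Locomotive repository for the actual format it sends:

```python
import json

def ingest(body: bytes, store: list) -> int:
    """Parse a webhook batch of log records and append them to a store.
    Assumes a JSON array of records (or a single object); the real
    Locomotive payload may differ."""
    records = json.loads(body)
    if isinstance(records, dict):
        records = [records]
    store.extend(records)
    return len(records)

store = []
n = ingest(b'[{"message": "hello"}, {"message": "world"}]', store)
print(n, store[0]["message"])
```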
#### Option 2: OpenTelemetry (OTEL) Integration
Send logs using OTEL protocol.
Environment Variables:
```bash
# Add to your service
OTEL_EXPORTER_OTLP_ENDPOINT=https://otel-collector.example.com:4318
OTEL_EXPORTER_OTLP_HEADERS=x-api-key=xxx
OTEL_SERVICE_NAME=my-railway-service
```
Supported OTEL Collectors:
- Grafana Alloy
- OpenTelemetry Collector
- Datadog Agent
- New Relic
- Honeycomb
See Also: [observability-stack-setup](../observability-stack-setup/SKILL.md) for LGTM stack
#### Option 3: Log Streaming Script
Use the provided script to stream logs to external systems.
Usage:
```bash
# Stream to file
.claude/skills/railway-logs/scripts/stream-logs.sh --output file --path logs/
# Stream to webhook
.claude/skills/railway-logs/scripts/stream-logs.sh --output webhook \
--url https://logs.example.com/ingest \
--token YOUR_API_KEY
# Stream to S3
.claude/skills/railway-logs/scripts/stream-logs.sh --output s3 \
--bucket my-logs-bucket \
--prefix railway/
```
Features:
- Continuous streaming
- Automatic reconnection
- Buffering and batching
- Multiple output formats
#### Option 4: Manual Export
Export logs for ad-hoc analysis.
```bash
# Export last 1000 lines
railway logs --lines 1000 > logs-$(date +%Y%m%d-%H%M%S).txt
# Export and compress
railway logs --lines 5000 | gzip > logs.txt.gz
```
Scheduled Export (cron):
```bash
# Add to crontab (every 6 hours)
0 */6 * * * railway logs --lines 10000 > /backup/railway-logs-$(date +\%Y\%m\%d-\%H\%M).txt
```
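For environments without cron, the same scheduled export can be scripted. A sketch that shells out to the CLI and gzips the result; the `/backup` path and filename pattern mirror the cron example above, and `export_logs` is shown but not invoked here:

```python
import gzip
import subprocess
from datetime import datetime
from pathlib import Path

def export_path(base: Path, now: datetime) -> Path:
    """Timestamped, gzip-compressed target file for a log export."""
    return base / f"railway-logs-{now:%Y%m%d-%H%M}.txt.gz"

def export_logs(base: Path, lines: int = 10000) -> Path:
    """Capture the last `lines` log lines via the Railway CLI
    and write them compressed to a timestamped file."""
    out = export_path(base, datetime.now())
    result = subprocess.run(
        ["railway", "logs", "--lines", str(lines)],
        capture_output=True, text=True, check=True,
    )
    with gzip.open(out, "wt") as f:
        f.write(result.stdout)
    return out

print(export_path(Path("/backup"), datetime(2024, 6, 1, 12, 30)))
# -> /backup/railway-logs-20240601-1230.txt.gz
```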
---