# flow-nexus-neural

Skill from ruvnet/claude-flow

## What it does

Enables distributed neural network training and deployment across E2B sandboxes with flexible architectures and scalable computational tiers.

## Installation

```bash
npx skills add https://github.com/ruvnet/claude-flow --skill flow-nexus-neural
```

Last updated: Jan 26, 2026

## Skill Details (SKILL.md)

Train and deploy neural networks in distributed E2B sandboxes with Flow Nexus.

# Flow Nexus Neural Networks

## Overview

Deploy, train, and manage neural networks in distributed E2B sandbox environments. Train custom models with multiple architectures (feedforward, LSTM, GAN, transformer, autoencoder) or use pre-built templates from the marketplace.

## Prerequisites

```bash
# Add Flow Nexus MCP server
claude mcp add flow-nexus npx flow-nexus@latest mcp start

# Register and login
npx flow-nexus@latest register
npx flow-nexus@latest login
```

## Core Capabilities

### 1. Single-Node Neural Training

Train neural networks with custom architectures and configurations.

Available Architectures:

  • feedforward - Standard fully-connected networks
  • lstm - Long Short-Term Memory for sequences
  • gan - Generative Adversarial Networks
  • autoencoder - Dimensionality reduction
  • transformer - Attention-based models

Training Tiers:

  • nano - Minimal resources (fast, limited)
  • mini - Small models
  • small - Standard models
  • medium - Complex models
  • large - Large-scale training
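
For a quick smoke test before committing to larger tiers, the smallest tier can be paired with a tiny architecture. This is a minimal sketch: the layer sizes and epoch counts are illustrative assumptions rather than values from the source, but the parameter names follow the examples in this document.

```javascript
// Minimal nano-tier run for validating a configuration (illustrative values)
mcp__flow-nexus__neural_train({
  config: {
    architecture: {
      type: "feedforward",
      layers: [
        { type: "dense", units: 32, activation: "relu" },
        { type: "dense", units: 2, activation: "softmax" }
      ]
    },
    training: {
      epochs: 5,
      batch_size: 8,
      learning_rate: 0.001,
      optimizer: "adam"
    }
  },
  tier: "nano",
  user_id: "your_user_id"
})
```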

#### Example: Train Custom Classifier

```javascript
mcp__flow-nexus__neural_train({
  config: {
    architecture: {
      type: "feedforward",
      layers: [
        { type: "dense", units: 256, activation: "relu" },
        { type: "dropout", rate: 0.3 },
        { type: "dense", units: 128, activation: "relu" },
        { type: "dropout", rate: 0.2 },
        { type: "dense", units: 64, activation: "relu" },
        { type: "dense", units: 10, activation: "softmax" }
      ]
    },
    training: {
      epochs: 100,
      batch_size: 32,
      learning_rate: 0.001,
      optimizer: "adam"
    },
    divergent: {
      enabled: true,
      pattern: "lateral", // quantum, chaotic, associative, evolutionary
      factor: 0.5
    }
  },
  tier: "small",
  user_id: "your_user_id"
})
```

#### Example: LSTM for Time Series

```javascript
mcp__flow-nexus__neural_train({
  config: {
    architecture: {
      type: "lstm",
      layers: [
        { type: "lstm", units: 128, return_sequences: true },
        { type: "dropout", rate: 0.2 },
        { type: "lstm", units: 64 },
        { type: "dense", units: 1, activation: "linear" }
      ]
    },
    training: {
      epochs: 150,
      batch_size: 64,
      learning_rate: 0.01,
      optimizer: "adam"
    }
  },
  tier: "medium"
})
```

#### Example: Transformer Architecture

```javascript
mcp__flow-nexus__neural_train({
  config: {
    architecture: {
      type: "transformer",
      layers: [
        { type: "embedding", vocab_size: 10000, embedding_dim: 512 },
        { type: "transformer_encoder", num_heads: 8, ff_dim: 2048 },
        { type: "global_average_pooling" },
        { type: "dense", units: 128, activation: "relu" },
        { type: "dense", units: 2, activation: "softmax" }
      ]
    },
    training: {
      epochs: 50,
      batch_size: 16,
      learning_rate: 0.0001,
      optimizer: "adam"
    }
  },
  tier: "large"
})
```

### 2. Model Inference

Run predictions on trained models.

```javascript
mcp__flow-nexus__neural_predict({
  model_id: "model_abc123",
  input: [
    [0.5, 0.3, 0.2, 0.1],
    [0.8, 0.1, 0.05, 0.05],
    [0.2, 0.6, 0.15, 0.05]
  ],
  user_id: "your_user_id"
})
```

Response:

```json
{
  "predictions": [
    [0.12, 0.85, 0.03],
    [0.89, 0.08, 0.03],
    [0.05, 0.92, 0.03]
  ],
  "inference_time_ms": 45,
  "model_version": "1.0.0"
}
```

### 3. Template Marketplace

Browse and deploy pre-trained models from the marketplace.

#### List Available Templates

```javascript
mcp__flow-nexus__neural_list_templates({
  category: "classification", // timeseries, regression, nlp, vision, anomaly, generative
  tier: "free", // or "paid"
  search: "sentiment",
  limit: 20
})
```

Response:

```json
{
  "templates": [
    {
      "id": "sentiment-analysis-v2",
      "name": "Sentiment Analysis Classifier",
      "description": "Pre-trained BERT model for sentiment analysis",
      "category": "nlp",
      "accuracy": 0.94,
      "downloads": 1523,
      "tier": "free"
    },
    {
      "id": "image-classifier-resnet",
      "name": "ResNet Image Classifier",
      "description": "ResNet-50 for image classification",
      "category": "vision",
      "accuracy": 0.96,
      "downloads": 2341,
      "tier": "paid"
    }
  ]
}
```

#### Deploy Template

```javascript
mcp__flow-nexus__neural_deploy_template({
  template_id: "sentiment-analysis-v2",
  custom_config: {
    training: {
      epochs: 50,
      learning_rate: 0.0001
    }
  },
  user_id: "your_user_id"
})
```

### 4. Distributed Training Clusters

Train large models across multiple E2B sandboxes with distributed computing.

#### Initialize Cluster

```javascript
mcp__flow-nexus__neural_cluster_init({
  name: "large-model-cluster",
  architecture: "transformer", // transformer, cnn, rnn, gnn, hybrid
  topology: "mesh", // mesh, ring, star, hierarchical
  consensus: "proof-of-learning", // byzantine, raft, gossip
  daaEnabled: true, // Decentralized Autonomous Agents
  wasmOptimization: true
})
```

Response:

```json
{
  "cluster_id": "cluster_xyz789",
  "name": "large-model-cluster",
  "status": "initializing",
  "topology": "mesh",
  "max_nodes": 100,
  "created_at": "2025-10-19T10:30:00Z"
}
```

#### Deploy Worker Nodes

```javascript
// Deploy parameter server
mcp__flow-nexus__neural_node_deploy({
  cluster_id: "cluster_xyz789",
  node_type: "parameter_server",
  model: "large",
  template: "nodejs",
  capabilities: ["parameter_management", "gradient_aggregation"],
  autonomy: 0.8
})

// Deploy worker nodes
mcp__flow-nexus__neural_node_deploy({
  cluster_id: "cluster_xyz789",
  node_type: "worker",
  model: "xl",
  role: "worker",
  capabilities: ["training", "inference"],
  layers: [
    { type: "transformer_encoder", num_heads: 16 },
    { type: "feed_forward", units: 4096 }
  ],
  autonomy: 0.9
})

// Deploy aggregator
mcp__flow-nexus__neural_node_deploy({
  cluster_id: "cluster_xyz789",
  node_type: "aggregator",
  model: "large",
  capabilities: ["gradient_aggregation", "model_synchronization"]
})
```

#### Connect Cluster Topology

```javascript
mcp__flow-nexus__neural_cluster_connect({
  cluster_id: "cluster_xyz789",
  topology: "mesh" // Override default if needed
})
```

#### Start Distributed Training

```javascript
mcp__flow-nexus__neural_train_distributed({
  cluster_id: "cluster_xyz789",
  dataset: "imagenet", // or custom dataset identifier
  epochs: 100,
  batch_size: 128,
  learning_rate: 0.001,
  optimizer: "adam", // sgd, rmsprop, adagrad
  federated: true // Enable federated learning
})
```

Federated Learning Example:

```javascript
mcp__flow-nexus__neural_train_distributed({
  cluster_id: "cluster_xyz789",
  dataset: "medical_images_distributed",
  epochs: 200,
  batch_size: 64,
  learning_rate: 0.0001,
  optimizer: "adam",
  federated: true, // Data stays on local nodes
  aggregation_rounds: 50,
  min_nodes_per_round: 5
})
```

#### Monitor Cluster Status

```javascript
mcp__flow-nexus__neural_cluster_status({
  cluster_id: "cluster_xyz789"
})
```

Response:

```json
{
  "cluster_id": "cluster_xyz789",
  "status": "training",
  "nodes": [
    {
      "node_id": "node_001",
      "type": "parameter_server",
      "status": "active",
      "cpu_usage": 0.75,
      "memory_usage": 0.82
    },
    {
      "node_id": "node_002",
      "type": "worker",
      "status": "active",
      "training_progress": 0.45
    }
  ],
  "training_metrics": {
    "current_epoch": 45,
    "total_epochs": 100,
    "loss": 0.234,
    "accuracy": 0.891
  }
}
```

#### Run Distributed Inference

```javascript
mcp__flow-nexus__neural_predict_distributed({
  cluster_id: "cluster_xyz789",
  input_data: JSON.stringify([
    [0.1, 0.2, 0.3],
    [0.4, 0.5, 0.6]
  ]),
  aggregation: "ensemble" // mean, majority, weighted, ensemble
})
```

#### Terminate Cluster

```javascript
mcp__flow-nexus__neural_cluster_terminate({
  cluster_id: "cluster_xyz789"
})
```

### 5. Model Management

#### List Your Models

```javascript
mcp__flow-nexus__neural_list_models({
  user_id: "your_user_id",
  include_public: true
})
```

Response:

```json
{
  "models": [
    {
      "model_id": "model_abc123",
      "name": "Custom Classifier v1",
      "architecture": "feedforward",
      "accuracy": 0.92,
      "created_at": "2025-10-15T14:20:00Z",
      "status": "trained"
    },
    {
      "model_id": "model_def456",
      "name": "LSTM Forecaster",
      "architecture": "lstm",
      "mse": 0.0045,
      "created_at": "2025-10-18T09:15:00Z",
      "status": "training"
    }
  ]
}
```

#### Check Training Status

```javascript
mcp__flow-nexus__neural_training_status({
  job_id: "job_training_xyz"
})
```

Response:

```json
{
  "job_id": "job_training_xyz",
  "status": "training",
  "progress": 0.67,
  "current_epoch": 67,
  "total_epochs": 100,
  "current_loss": 0.234,
  "estimated_completion": "2025-10-19T12:45:00Z"
}
```

#### Performance Benchmarking

```javascript
mcp__flow-nexus__neural_performance_benchmark({
  model_id: "model_abc123",
  benchmark_type: "comprehensive" // inference, throughput, memory, comprehensive
})
```

Response:

```json
{
  "model_id": "model_abc123",
  "benchmarks": {
    "inference_latency_ms": 12.5,
    "throughput_qps": 8000,
    "memory_usage_mb": 245,
    "gpu_utilization": 0.78,
    "accuracy": 0.92,
    "f1_score": 0.89
  },
  "timestamp": "2025-10-19T11:00:00Z"
}
```

#### Create Validation Workflow

```javascript
mcp__flow-nexus__neural_validation_workflow({
  model_id: "model_abc123",
  user_id: "your_user_id",
  validation_type: "comprehensive" // performance, accuracy, robustness, comprehensive
})
```

### 6. Publishing and Marketplace

#### Publish Model as Template

```javascript
mcp__flow-nexus__neural_publish_template({
  model_id: "model_abc123",
  name: "High-Accuracy Sentiment Classifier",
  description: "Fine-tuned BERT model for sentiment analysis with 94% accuracy",
  category: "nlp",
  price: 0, // 0 for free, or credits amount
  user_id: "your_user_id"
})
```

#### Rate a Template

```javascript
mcp__flow-nexus__neural_rate_template({
  template_id: "sentiment-analysis-v2",
  rating: 5,
  review: "Excellent model! Achieved 95% accuracy on my dataset.",
  user_id: "your_user_id"
})
```

## Common Use Cases

### Image Classification with CNN

```javascript
// Initialize cluster for large-scale image training
const cluster = await mcp__flow-nexus__neural_cluster_init({
  name: "image-classification-cluster",
  architecture: "cnn",
  topology: "hierarchical",
  wasmOptimization: true
})

// Deploy worker nodes
await mcp__flow-nexus__neural_node_deploy({
  cluster_id: cluster.cluster_id,
  node_type: "worker",
  model: "large",
  capabilities: ["training", "data_augmentation"]
})

// Start training
await mcp__flow-nexus__neural_train_distributed({
  cluster_id: cluster.cluster_id,
  dataset: "custom_images",
  epochs: 100,
  batch_size: 64,
  learning_rate: 0.001,
  optimizer: "adam"
})
```

### NLP Sentiment Analysis

```javascript
// Use pre-built template
const deployment = await mcp__flow-nexus__neural_deploy_template({
  template_id: "sentiment-analysis-v2",
  custom_config: {
    training: {
      epochs: 30,
      batch_size: 16
    }
  }
})

// Run inference
const result = await mcp__flow-nexus__neural_predict({
  model_id: deployment.model_id,
  input: ["This product is amazing!", "Terrible experience."]
})
```

### Time Series Forecasting

```javascript
// Train LSTM model
const training = await mcp__flow-nexus__neural_train({
  config: {
    architecture: {
      type: "lstm",
      layers: [
        { type: "lstm", units: 128, return_sequences: true },
        { type: "dropout", rate: 0.2 },
        { type: "lstm", units: 64 },
        { type: "dense", units: 1 }
      ]
    },
    training: {
      epochs: 150,
      batch_size: 64,
      learning_rate: 0.01,
      optimizer: "adam"
    }
  },
  tier: "medium"
})

// Monitor progress
const status = await mcp__flow-nexus__neural_training_status({
  job_id: training.job_id
})
```

### Federated Learning for Privacy

```javascript
// Initialize federated cluster
const cluster = await mcp__flow-nexus__neural_cluster_init({
  name: "federated-medical-cluster",
  architecture: "transformer",
  topology: "mesh",
  consensus: "proof-of-learning",
  daaEnabled: true
})

// Deploy nodes across different locations
for (let i = 0; i < 5; i++) {
  await mcp__flow-nexus__neural_node_deploy({
    cluster_id: cluster.cluster_id,
    node_type: "worker",
    model: "large",
    autonomy: 0.9
  })
}

// Train with federated learning (data never leaves nodes)
await mcp__flow-nexus__neural_train_distributed({
  cluster_id: cluster.cluster_id,
  dataset: "medical_records_distributed",
  epochs: 200,
  federated: true,
  aggregation_rounds: 100
})
```

## Architecture Patterns

### Feedforward Networks

Best for: Classification, regression, simple pattern recognition

```javascript
{
  type: "feedforward",
  layers: [
    { type: "dense", units: 256, activation: "relu" },
    { type: "dropout", rate: 0.3 },
    { type: "dense", units: 128, activation: "relu" },
    { type: "dense", units: 10, activation: "softmax" }
  ]
}
```

### LSTM Networks

Best for: Time series, sequences, forecasting

```javascript
{
  type: "lstm",
  layers: [
    { type: "lstm", units: 128, return_sequences: true },
    { type: "lstm", units: 64 },
    { type: "dense", units: 1 }
  ]
}
```

### Transformers

Best for: NLP, attention mechanisms, large-scale text

```javascript
{
  type: "transformer",
  layers: [
    { type: "embedding", vocab_size: 10000, embedding_dim: 512 },
    { type: "transformer_encoder", num_heads: 8, ff_dim: 2048 },
    { type: "global_average_pooling" },
    { type: "dense", units: 2, activation: "softmax" }
  ]
}
```

### GANs

Best for: Generative tasks, image synthesis

```javascript
{
  type: "gan",
  generator_layers: [...],
  discriminator_layers: [...]
}
```
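
The generator and discriminator layer lists are left open above. As a purely hypothetical shape (not taken from the skill documentation), a small GAN config could be filled in using the same dense-layer fields shown elsewhere in this skill:

```javascript
// Hypothetical example only; layer fields follow the dense-layer style used in this document
{
  type: "gan",
  generator_layers: [
    { type: "dense", units: 128, activation: "relu" },
    { type: "dense", units: 256, activation: "relu" },
    { type: "dense", units: 784, activation: "tanh" } // e.g. a 28x28 image, flattened
  ],
  discriminator_layers: [
    { type: "dense", units: 256, activation: "relu" },
    { type: "dropout", rate: 0.3 },
    { type: "dense", units: 1, activation: "sigmoid" } // real vs. generated
  ]
}
```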

### Autoencoders

Best for: Dimensionality reduction, anomaly detection

```javascript
{
  type: "autoencoder",
  encoder_layers: [
    { type: "dense", units: 128, activation: "relu" },
    { type: "dense", units: 64, activation: "relu" }
  ],
  decoder_layers: [
    { type: "dense", units: 128, activation: "relu" },
    { type: "dense", units: input_dim, activation: "sigmoid" } // input_dim: dimensionality of the original input
  ]
}
```

## Best Practices

  1. Start Small: Begin with nano or mini tiers for experimentation
  2. Use Templates: Leverage marketplace templates for common tasks
  3. Monitor Training: Check status regularly to catch issues early (a polling sketch follows this list)
  4. Benchmark Models: Always benchmark before production deployment
  5. Distributed Training: Use clusters for large models (>1B parameters)
  6. Federated Learning: Use for privacy-sensitive data
  7. Version Models: Publish successful models as templates for reuse
  8. Validate Thoroughly: Use validation workflows before deployment
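
As a minimal sketch of practice 3, periodic polling can reuse the `neural_training_status` call and the response fields shown earlier. The `sleep` helper and the loop structure are assumptions for illustration, not part of the Flow Nexus API.

```javascript
// Hypothetical polling loop; field names match the status responses shown above
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function waitForTraining(job_id) {
  while (true) {
    const status = await mcp__flow-nexus__neural_training_status({ job_id });
    console.log(`epoch ${status.current_epoch}/${status.total_epochs}, loss ${status.current_loss}`);
    if (status.status !== "training") return status; // finished or failed
    await sleep(60_000); // re-check every minute
  }
}
```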

## Troubleshooting

### Training Stalled

```javascript
// Check cluster status
const status = await mcp__flow-nexus__neural_cluster_status({
  cluster_id: "cluster_id"
})

// Terminate and restart if needed
await mcp__flow-nexus__neural_cluster_terminate({
  cluster_id: "cluster_id"
})
```

### Low Accuracy

  • Increase epochs
  • Adjust learning rate
  • Add regularization (dropout)
  • Try different optimizer
  • Use data augmentation
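
As one hedged illustration of how these adjustments map onto a training call (the specific values are assumptions, not tuned recommendations from the source):

```javascript
// Illustrative adjustments: stronger dropout, more epochs, lower learning rate, alternative optimizer
mcp__flow-nexus__neural_train({
  config: {
    architecture: {
      type: "feedforward",
      layers: [
        { type: "dense", units: 256, activation: "relu" },
        { type: "dropout", rate: 0.5 }, // increased regularization
        { type: "dense", units: 10, activation: "softmax" }
      ]
    },
    training: {
      epochs: 200,           // more epochs
      batch_size: 32,
      learning_rate: 0.0005, // reduced learning rate
      optimizer: "rmsprop"   // try a different optimizer
    }
  },
  tier: "small",
  user_id: "your_user_id"
})
```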

### Out of Memory

  • Reduce batch size
  • Use smaller model tier
  • Enable gradient accumulation
  • Use distributed training
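
A brief sketch of the first two mitigations, reducing the batch size and dropping to a smaller tier (values are illustrative assumptions):

```javascript
// Smaller batch size and tier to fit within available memory
mcp__flow-nexus__neural_train({
  config: {
    architecture: {
      type: "lstm",
      layers: [
        { type: "lstm", units: 64 },
        { type: "dense", units: 1 }
      ]
    },
    training: {
      epochs: 150,
      batch_size: 16, // reduced from 64
      learning_rate: 0.01,
      optimizer: "adam"
    }
  },
  tier: "small" // down from "medium"
})
```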

## Related Skills

  • flow-nexus-sandbox - E2B sandbox management
  • flow-nexus-swarm - AI swarm orchestration
  • flow-nexus-workflow - Workflow automation

## Resources

  • Flow Nexus Docs: https://flow-nexus.ruv.io/docs
  • Neural Network Guide: https://flow-nexus.ruv.io/docs/neural
  • Template Marketplace: https://flow-nexus.ruv.io/templates
  • API Reference: https://flow-nexus.ruv.io/api

---

Note: Distributed training requires authentication. Register at https://flow-nexus.ruv.io or use npx flow-nexus@latest register.
