# Google Gemini Embeddings
A complete, production-ready guide to the Google Gemini embeddings API.
This skill provides comprehensive coverage of the `gemini-embedding-001` model for generating text embeddings, including SDK usage, REST API patterns, batch processing, RAG integration with Cloudflare Vectorize, and advanced use cases like semantic search and document clustering.
---
## Table of Contents
- [Quick Start](#1-quick-start)
- [gemini-embedding-001 Model](#2-gemini-embedding-001-model)
- [Basic Embeddings](#3-basic-embeddings)
- [Batch Embeddings](#4-batch-embeddings)
- [Task Types](#5-task-types)
- [RAG Patterns](#6-rag-patterns)
- [Semantic Search](#7-semantic-search)
- [Document Clustering](#8-document-clustering)
- [Error Handling](#9-error-handling)
- [Best Practices](#10-best-practices)
---
## 1. Quick Start

### Installation

Install the Google Generative AI SDK:
```bash
npm install @google/genai@^1.27.0
```
For TypeScript projects:
```bash
npm install -D typescript@^5.0.0
```
### Environment Setup
Set your Gemini API key as an environment variable:
```bash
export GEMINI_API_KEY="your-api-key-here"
```
Get your API key from: https://aistudio.google.com/apikey
### First Embedding Example
```typescript
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const response = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  contents: 'What is the meaning of life?',
  config: {
    taskType: 'RETRIEVAL_QUERY',
    outputDimensionality: 768
  }
});

console.log(response.embeddings[0].values); // [0.012, -0.034, ...]
console.log(response.embeddings[0].values.length); // 768
```
Result: A 768-dimension embedding vector representing the semantic meaning of the text.
---
## 2. gemini-embedding-001 Model

### Model Specifications

**Current model:** `gemini-embedding-001` (stable, production-ready)
- Status: Stable
- Experimental: `gemini-embedding-exp-03-07` (deprecated October 2025; do not use)

### Dimensions
The model supports flexible output dimensionality using Matryoshka Representation Learning:
| Dimension | Use Case | Storage | Performance |
|-----------|----------|---------|-------------|
| 768 | Recommended for most use cases | Low | Fast |
| 1536 | Balance between accuracy and efficiency | Medium | Medium |
| 3072 | Maximum accuracy (default) | High | Slower |
| 128-3071 | Custom (any value in range) | Variable | Variable |
Default: 3072 dimensions
Recommended: 768, 1536, or 3072 for optimal performance
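
Because the model is trained with MRL, you can also truncate a full 3072-dimension vector client-side instead of re-requesting at a smaller size. A minimal sketch, assuming the usual Matryoshka convention that the leading dimensions carry the most information (re-normalizing after truncation so cosine comparisons stay meaningful):

```typescript
// Sketch: derive a smaller embedding from a full 3072-dim vector by
// Matryoshka truncation. Assumption: leading dimensions matter most,
// and a truncated vector must be re-normalized to unit length.
function truncateEmbedding(values: number[], dims: number): number[] {
  const truncated = values.slice(0, dims);
  const magnitude = Math.sqrt(truncated.reduce((sum, v) => sum + v * v, 0));
  return magnitude === 0 ? truncated : truncated.map(v => v / magnitude);
}

// Usage: store one 3072-dim embedding, serve 768-dim comparisons from it
// const compact = truncateEmbedding(fullEmbedding, 768);
```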
### Context Window
- Input Limit: 2,048 tokens per text
- Input Type: Text only (no images, audio, or video)
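
Texts over the limit need to be truncated or chunked before embedding. A rough guard, assuming a ~4-characters-per-token heuristic (not an official tokenizer; swap in a real one for precise counts):

```typescript
// Sketch: keep inputs under the 2,048-token limit before calling the API.
// The 4-chars-per-token ratio is a heuristic assumption, not exact.
const MAX_TOKENS = 2048;
const CHARS_PER_TOKEN = 4;

function truncateToTokenLimit(text: string): string {
  const maxChars = MAX_TOKENS * CHARS_PER_TOKEN;
  return text.length <= maxChars ? text : text.slice(0, maxChars);
}
```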
### Rate Limits
| Tier | RPM | TPM | RPD | Requirements |
|------|-----|-----|-----|--------------|
| Free | 100 | 30,000 | 1,000 | No billing account |
| Tier 1 | 3,000 | 1,000,000 | - | Billing account linked |
| Tier 2 | 5,000 | 5,000,000 | - | $250+ spending, 30-day wait |
| Tier 3 | 10,000 | 10,000,000 | - | $1,000+ spending, 30-day wait |
*RPM = requests per minute · TPM = tokens per minute · RPD = requests per day*
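
For sustained workloads, a small client-side throttle keeps individual requests under your tier's RPM budget (a sketch; the chunked batching in Section 4 covers the bulk-ingestion case):

```typescript
// Sketch: space out requests to stay under an RPM budget.
// rpm = 100 matches the free tier; adjust for your tier.
function createThrottle(rpm: number): () => Promise<void> {
  const minIntervalMs = 60000 / rpm;
  let nextSlot = 0;
  return async () => {
    const now = Date.now();
    const wait = Math.max(0, nextSlot - now);
    nextSlot = Math.max(now, nextSlot) + minIntervalMs;
    if (wait > 0) await new Promise(resolve => setTimeout(resolve, wait));
  };
}

// Usage:
// const throttle = createThrottle(100);
// await throttle();
// await ai.models.embedContent({ /* ... */ });
```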
### Output Format

The REST `embedContent` endpoint returns a single embedding object:

```typescript
{
  embedding: {
    values: number[] // Array of floating-point numbers
  }
}
```

The `@google/genai` SDK instead exposes `response.embeddings`, an array with one `{ values: number[] }` entry per input.
---
## 3. Basic Embeddings

### SDK Approach (Node.js)

Single text embedding:
```typescript
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const response = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  contents: 'The quick brown fox jumps over the lazy dog',
  config: {
    taskType: 'SEMANTIC_SIMILARITY',
    outputDimensionality: 768
  }
});

console.log(response.embeddings[0].values);
// [0.00388, -0.00762, 0.01543, ...]
```
### Fetch Approach (Cloudflare Workers)
For Workers/edge environments without SDK support:
```typescript
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const apiKey = env.GEMINI_API_KEY;
    const text = "What is the meaning of life?";

    const response = await fetch(
      'https://generativelanguage.googleapis.com/v1beta/models/gemini-embedding-001:embedContent',
      {
        method: 'POST',
        headers: {
          'x-goog-api-key': apiKey,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({
          content: {
            parts: [{ text }]
          },
          taskType: 'RETRIEVAL_QUERY',
          outputDimensionality: 768
        })
      }
    );

    const data = await response.json();
    // Response format:
    // {
    //   embedding: {
    //     values: [0.012, -0.034, ...]
    //   }
    // }

    return new Response(JSON.stringify(data), {
      headers: { 'Content-Type': 'application/json' }
    });
  }
};
```
### Response Parsing
```typescript
// Shape of the SDK response (one entry per input)
interface EmbeddingResponse {
  embeddings: Array<{ values: number[] }>;
}

const response = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  contents: 'Sample text',
  config: { taskType: 'SEMANTIC_SIMILARITY' }
});

const embedding: number[] = response.embeddings[0].values;
const dimensions: number = embedding.length; // 3072 by default
```
---
## 4. Batch Embeddings

### Multiple Texts in One Request (SDK)

Generate embeddings for multiple texts simultaneously:
```typescript
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const texts = [
  "What is the meaning of life?",
  "How does photosynthesis work?",
  "Tell me about the history of the internet."
];

const response = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  contents: texts, // Array of strings
  config: {
    taskType: 'RETRIEVAL_DOCUMENT',
    outputDimensionality: 768
  }
});

// Process each embedding
response.embeddings.forEach((embedding, index) => {
  console.log(`Text ${index}: ${texts[index]}`);
  console.log(`Embedding: ${embedding.values.slice(0, 5)}...`);
  console.log(`Dimensions: ${embedding.values.length}`);
});
```
### Batch REST API (fetch)

Use the `batchEmbedContents` endpoint:
```typescript
const response = await fetch(
  'https://generativelanguage.googleapis.com/v1beta/models/gemini-embedding-001:batchEmbedContents',
  {
    method: 'POST',
    headers: {
      'x-goog-api-key': apiKey,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      requests: texts.map(text => ({
        model: 'models/gemini-embedding-001',
        content: {
          parts: [{ text }]
        },
        taskType: 'RETRIEVAL_DOCUMENT'
      }))
    })
  }
);

const data = await response.json();
// data.embeddings: Array of { values: number[] }
```
### Chunking for Rate Limits
When processing large datasets, chunk requests to stay within rate limits:
```typescript
async function batchEmbedWithRateLimit(
  texts: string[],
  batchSize: number = 100, // Free tier: 100 RPM
  delayMs: number = 60000  // 1 minute delay between batches
): Promise<number[][]> {
  const allEmbeddings: number[][] = [];

  for (let i = 0; i < texts.length; i += batchSize) {
    const batch = texts.slice(i, i + batchSize);
    console.log(`Processing batch ${i / batchSize + 1} (${batch.length} texts)`);

    const response = await ai.models.embedContent({
      model: 'gemini-embedding-001',
      contents: batch,
      config: {
        taskType: 'RETRIEVAL_DOCUMENT',
        outputDimensionality: 768
      }
    });

    allEmbeddings.push(...response.embeddings.map(e => e.values));

    // Wait before next batch (except after the last batch)
    if (i + batchSize < texts.length) {
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }

  return allEmbeddings;
}

// Usage
const embeddings = await batchEmbedWithRateLimit(documents, 100);
```
### Performance Optimization

Tips:
- Use batch API when embedding multiple texts (single request vs multiple requests)
- Choose lower dimensions (768) for faster processing and less storage
- Implement exponential backoff for rate limit errors
- Cache embeddings to avoid redundant API calls
---
## 5. Task Types

The `taskType` parameter optimizes embeddings for specific use cases. Always specify a task type for best results.

### Available Task Types (8 total)
| Task Type | Use Case | Example |
|-----------|----------|---------|
| RETRIEVAL_QUERY | User search queries | "How do I fix a flat tire?" |
| RETRIEVAL_DOCUMENT | Documents to be indexed/searched | Product descriptions, articles |
| SEMANTIC_SIMILARITY | Comparing text similarity | Duplicate detection, clustering |
| CLASSIFICATION | Categorizing texts | Spam detection, sentiment analysis |
| CLUSTERING | Grouping similar texts | Topic modeling, content organization |
| CODE_RETRIEVAL_QUERY | Code search queries | "function to sort array" |
| QUESTION_ANSWERING | Questions seeking answers | FAQ matching |
| FACT_VERIFICATION | Verifying claims with evidence | Fact-checking systems |
### When to Use Which

**RAG Systems (Retrieval Augmented Generation):**
```typescript
// When embedding user queries
const queryEmbedding = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  contents: userQuery,
  config: { taskType: 'RETRIEVAL_QUERY' } // ✅ Use RETRIEVAL_QUERY
});

// When embedding documents for indexing
const docEmbedding = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  contents: documentText,
  config: { taskType: 'RETRIEVAL_DOCUMENT' } // ✅ Use RETRIEVAL_DOCUMENT
});
```
**Semantic Search:**
```typescript
const embedding = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  contents: text,
  config: { taskType: 'SEMANTIC_SIMILARITY' }
});
```
**Document Clustering:**
```typescript
const embedding = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  contents: text,
  config: { taskType: 'CLUSTERING' }
});
```
### Impact on Quality
Using the correct task type significantly improves retrieval quality:
```typescript
// ❌ BAD: No task type specified
const embedding1 = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  contents: userQuery
});

// ✅ GOOD: Task type specified
const embedding2 = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  contents: userQuery,
  config: { taskType: 'RETRIEVAL_QUERY' }
});
```
Result: Using the right task type can improve search relevance by 10-30%.
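
The exact gain depends on your corpus; one way to check on your own data is to embed the same query both ways and compare retrieval scores (an illustrative harness, reusing `cosineSimilarity` from Section 7 against a 768-dim document embedding):

```typescript
// Illustrative harness: compare similarity scores for the same query
// embedded with and without a task type. The numbers depend on your data.
async function compareTaskTypes(query: string, docEmbedding: number[]) {
  const plain = await ai.models.embedContent({
    model: 'gemini-embedding-001',
    contents: query,
    config: { outputDimensionality: 768 }
  });
  const typed = await ai.models.embedContent({
    model: 'gemini-embedding-001',
    contents: query,
    config: { taskType: 'RETRIEVAL_QUERY', outputDimensionality: 768 }
  });
  console.log('no task type:   ', cosineSimilarity(plain.embeddings[0].values, docEmbedding));
  console.log('RETRIEVAL_QUERY:', cosineSimilarity(typed.embeddings[0].values, docEmbedding));
}
```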
---
## 6. RAG Patterns

RAG (Retrieval Augmented Generation) combines vector search with LLM generation to create AI systems that answer questions using custom knowledge bases.

### Complete RAG Workflow
```
1. Document Ingestion
   ├── Chunk documents into smaller pieces
   ├── Generate embeddings (RETRIEVAL_DOCUMENT)
   └── Store in Vectorize

2. Query Processing
   ├── User submits query
   ├── Generate query embedding (RETRIEVAL_QUERY)
   └── Search Vectorize for similar documents

3. Response Generation
   ├── Retrieve top-k similar documents
   ├── Pass documents as context to LLM
   └── Stream response to user
```
### Document Ingestion Pipeline
```typescript
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// 1. Chunk document into smaller pieces
function chunkDocument(text: string, chunkSize: number = 500): string[] {
  const words = text.split(' ');
  const chunks: string[] = [];

  for (let i = 0; i < words.length; i += chunkSize) {
    chunks.push(words.slice(i, i + chunkSize).join(' '));
  }

  return chunks;
}

// 2. Generate embeddings for chunks
async function embedChunks(chunks: string[]): Promise<number[][]> {
  const response = await ai.models.embedContent({
    model: 'gemini-embedding-001',
    contents: chunks,
    config: {
      taskType: 'RETRIEVAL_DOCUMENT', // ✅ Documents for indexing
      outputDimensionality: 768       // ✅ Match Vectorize index dimensions
    }
  });

  return response.embeddings.map(e => e.values);
}

// 3. Store in Cloudflare Vectorize
async function storeInVectorize(
  env: Env,
  chunks: string[],
  embeddings: number[][]
) {
  const vectors = chunks.map((chunk, i) => ({
    id: `doc-${Date.now()}-${i}`,
    values: embeddings[i],
    metadata: { text: chunk }
  }));

  await env.VECTORIZE.insert(vectors);
}

// Complete pipeline
async function ingestDocument(env: Env, documentText: string) {
  const chunks = chunkDocument(documentText, 500);
  const embeddings = await embedChunks(chunks);
  await storeInVectorize(env, chunks, embeddings);
  console.log(`Ingested ${chunks.length} chunks`);
}
```
### Query Flow (Retrieve + Generate)
```typescript
async function ragQuery(env: Env, userQuery: string): Promise<string> {
  // 1. Embed user query
  const queryResponse = await ai.models.embedContent({
    model: 'gemini-embedding-001',
    contents: userQuery,
    config: {
      taskType: 'RETRIEVAL_QUERY', // ✅ Query, not document
      outputDimensionality: 768
    }
  });
  const queryEmbedding = queryResponse.embeddings[0].values;

  // 2. Search Vectorize for similar documents
  const results = await env.VECTORIZE.query(queryEmbedding, {
    topK: 5,
    returnMetadata: true
  });

  // 3. Extract context from top results
  const context = results.matches
    .map(match => match.metadata.text)
    .join('\n\n');

  // 4. Generate response with context
  const response = await ai.models.generateContent({
    model: 'gemini-2.5-flash',
    contents: `Context:\n${context}\n\nQuestion: ${userQuery}\n\nAnswer based on the context above:`
  });

  return response.text;
}
```
### Integration with Cloudflare Vectorize

**Create Vectorize index** (768 dimensions for Gemini):
```bash
npx wrangler vectorize create gemini-embeddings --dimensions 768 --metric cosine
```
**Bind in `wrangler.jsonc`:**
```jsonc
{
"name": "my-rag-app",
"main": "src/index.ts",
"compatibility_date": "2025-10-25",
"vectorize": {
"bindings": [
{
"binding": "VECTORIZE",
"index_name": "gemini-embeddings"
}
]
}
}
```
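
The Worker examples in this guide also assume an `Env` type for the bindings. A minimal declaration might look like this (binding names must match `wrangler.jsonc`; the `VectorizeIndex` type comes from `@cloudflare/workers-types`, so adjust if your types package names it differently):

```typescript
// Minimal Env declaration assumed by the Worker examples above.
interface Env {
  GEMINI_API_KEY: string;    // set with: wrangler secret put GEMINI_API_KEY
  VECTORIZE: VectorizeIndex; // type from @cloudflare/workers-types
}
```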
**Complete RAG Worker:** see `templates/rag-with-vectorize.ts` for the full implementation.
---
## 7. Semantic Search

Semantic search finds content based on meaning, not just keyword matching.

### Cosine Similarity
Cosine similarity measures how similar two embeddings are (range: -1 to 1, where 1 = identical):
```typescript
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error('Vectors must have same length');
  }

  let dotProduct = 0;
  let magnitudeA = 0;
  let magnitudeB = 0;

  for (let i = 0; i < a.length; i++) {
    dotProduct += a[i] * b[i];
    magnitudeA += a[i] * a[i];
    magnitudeB += b[i] * b[i];
  }

  if (magnitudeA === 0 || magnitudeB === 0) {
    return 0;
  }

  return dotProduct / (Math.sqrt(magnitudeA) * Math.sqrt(magnitudeB));
}
```
### Vector Normalization

Normalize vectors to unit length for faster similarity calculations:
```typescript
function normalizeVector(vector: number[]): number[] {
  const magnitude = Math.sqrt(vector.reduce((sum, val) => sum + val * val, 0));

  if (magnitude === 0) {
    return vector;
  }

  return vector.map(val => val / magnitude);
}

// Normalized vectors allow a plain dot product in place of full cosine similarity
function dotProduct(a: number[], b: number[]): number {
  return a.reduce((sum, val, i) => sum + val * b[i], 0);
}
```
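
As a sanity check, the dot product of two normalized vectors matches their cosine similarity up to floating-point error:

```typescript
// Sanity check: for unit-length vectors, dot product == cosine similarity.
const a = normalizeVector([1, 2, 3]);
const b = normalizeVector([2, 3, 4]);
console.log(dotProduct(a, b));       // ≈ 0.9926
console.log(cosineSimilarity(a, b)); // same value, within ~1e-12
```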
### Top-K Search

Find the most similar documents to a query:
```typescript
interface Document {
  id: string;
  text: string;
  embedding: number[];
}

function topKSimilar(
  queryEmbedding: number[],
  documents: Document[],
  k: number = 5
): Array<{ document: Document; similarity: number }> {
  // Calculate similarity for each document
  const similarities = documents.map(doc => ({
    document: doc,
    similarity: cosineSimilarity(queryEmbedding, doc.embedding)
  }));

  // Sort by similarity (descending) and return top K
  return similarities
    .sort((a, b) => b.similarity - a.similarity)
    .slice(0, k);
}

// Usage
const results = topKSimilar(queryEmbedding, documents, 5);
results.forEach(result => {
  console.log(`Similarity: ${result.similarity.toFixed(4)}`);
  console.log(`Text: ${result.document.text}\n`);
});
```
### Complete Semantic Search Example

See `templates/semantic-search.ts` for the full implementation with the Gemini API.
---
## 8. Document Clustering

Clustering groups similar documents together automatically.

### K-Means Clustering
```typescript
interface Cluster {
  centroid: number[];
  documents: number[][];
}

function kMeansClustering(
  embeddings: number[][],
  k: number = 3,
  maxIterations: number = 100
): Cluster[] {
  // 1. Initialize centroids randomly
  const centroids: number[][] = [];
  for (let i = 0; i < k; i++) {
    centroids.push(embeddings[Math.floor(Math.random() * embeddings.length)]);
  }

  // 2. Iterate until convergence
  for (let iter = 0; iter < maxIterations; iter++) {
    // Assign each embedding to its nearest centroid
    const clusters: number[][][] = Array(k).fill(null).map(() => []);

    embeddings.forEach(embedding => {
      let minDistance = Infinity;
      let closestCluster = 0;

      centroids.forEach((centroid, i) => {
        const distance = 1 - cosineSimilarity(embedding, centroid);
        if (distance < minDistance) {
          minDistance = distance;
          closestCluster = i;
        }
      });

      clusters[closestCluster].push(embedding);
    });

    // Update centroids
    let changed = false;
    clusters.forEach((cluster, i) => {
      if (cluster.length > 0) {
        const newCentroid = cluster[0].map((_, dim) =>
          cluster.reduce((sum, emb) => sum + emb[dim], 0) / cluster.length
        );
        if (cosineSimilarity(centroids[i], newCentroid) < 0.9999) {
          changed = true;
        }
        centroids[i] = newCentroid;
      }
    });

    if (!changed) break;
  }

  // Build final clusters by assigning each embedding to its closest centroid
  const finalClusters: Cluster[] = centroids.map(centroid => ({
    centroid,
    documents: embeddings.filter(emb =>
      cosineSimilarity(emb, centroid) === Math.max(
        ...centroids.map(c => cosineSimilarity(emb, c))
      )
    )
  }));

  return finalClusters;
}
```
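
Putting it together, a sketch that embeds texts with the `CLUSTERING` task type and then groups them (assumes the `ai` client from Section 3):

```typescript
// Sketch: embed texts for clustering, then run k-means over the vectors.
async function clusterTexts(texts: string[], k: number): Promise<Cluster[]> {
  const response = await ai.models.embedContent({
    model: 'gemini-embedding-001',
    contents: texts,
    config: { taskType: 'CLUSTERING', outputDimensionality: 768 }
  });

  const embeddings = response.embeddings.map(e => e.values);
  const clusters = kMeansClustering(embeddings, k);

  clusters.forEach((cluster, i) =>
    console.log(`Cluster ${i}: ${cluster.documents.length} documents`)
  );
  return clusters;
}
```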
See `templates/clustering.ts` for the complete implementation with examples.
### Use Cases
- Content Organization: Automatically group similar articles, products, or documents
- Duplicate Detection: Find near-duplicate content (see the sketch after this list)
- Topic Modeling: Discover themes in large text corpora
- Customer Feedback Analysis: Group similar customer complaints or feature requests
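
For example, duplicate detection can be a thin threshold over pairwise similarity (a sketch reusing `Document` and `cosineSimilarity` from Section 7; the 0.95 threshold is an arbitrary starting point to tune per corpus):

```typescript
// Sketch: flag near-duplicate pairs by pairwise cosine similarity.
// The 0.95 threshold is an assumption; tune it on your own data.
function findNearDuplicates(
  docs: Document[],
  threshold: number = 0.95
): Array<{ a: string; b: string; similarity: number }> {
  const pairs: Array<{ a: string; b: string; similarity: number }> = [];
  for (let i = 0; i < docs.length; i++) {
    for (let j = i + 1; j < docs.length; j++) {
      const similarity = cosineSimilarity(docs[i].embedding, docs[j].embedding);
      if (similarity >= threshold) {
        pairs.push({ a: docs[i].id, b: docs[j].id, similarity });
      }
    }
  }
  return pairs;
}
```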
---
## 9. Error Handling

### Common Errors

**1. API Key Missing or Invalid**
```typescript
// ❌ Error: API key not set
const ai = new GoogleGenAI({});

// ✅ Correct
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

if (!process.env.GEMINI_API_KEY) {
  throw new Error('GEMINI_API_KEY environment variable not set');
}
```
**2. Dimension Mismatch**
```typescript
// ❌ Error: Embedding has 3072 dims, but the Vectorize index expects 768
const response = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  contents: text
  // No outputDimensionality specified → defaults to 3072
});

await env.VECTORIZE.insert([{
  id: '1',
  values: response.embeddings[0].values // 3072 dims, but index is 768!
}]);

// ✅ Correct: Match dimensions
const response = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  contents: text,
  config: { outputDimensionality: 768 } // ✅ Match index dimensions
});
```
**3. Rate Limiting**
```typescript
// ❌ Error: 429 Too Many Requests
for (let i = 0; i < 1000; i++) {
  await ai.models.embedContent({ /* ... */ }); // Exceeds 100 RPM on the free tier
}

// ✅ Correct: Implement rate limiting with retries
async function embedWithRetry(text: string, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await ai.models.embedContent({
        model: 'gemini-embedding-001',
        contents: text,
        config: { taskType: 'SEMANTIC_SIMILARITY' }
      });
    } catch (error: any) {
      if (error.status === 429 && attempt < maxRetries - 1) {
        const delay = Math.pow(2, attempt) * 1000; // Exponential backoff
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      throw error;
    }
  }
}
```
See `references/top-errors.md` for all 8 documented errors with detailed solutions.
---
## 10. Best Practices

### Always Do

**✅ Specify Task Type**
```typescript
// Task type optimizes embeddings for your use case
const embedding = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  contents: text,
  config: { taskType: 'RETRIEVAL_QUERY' } // ✅ Always specify
});
```
**✅ Match Dimensions with Vectorize**
```typescript
// Ensure embeddings match your Vectorize index dimensions
const embedding = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  contents: text,
  config: { outputDimensionality: 768 } // ✅ Match index
});
```
**✅ Implement Rate Limiting**
```typescript
// Use exponential backoff for 429 errors
async function embedWithBackoff(text: string) {
  // Implementation from the Error Handling section
}
```
**✅ Cache Embeddings**
```typescript
// Cache embeddings to avoid redundant API calls
const cache = new Map<string, number[]>();

async function getCachedEmbedding(text: string): Promise<number[]> {
  if (cache.has(text)) {
    return cache.get(text)!;
  }

  const response = await ai.models.embedContent({
    model: 'gemini-embedding-001',
    contents: text,
    config: { taskType: 'SEMANTIC_SIMILARITY' }
  });

  const embedding = response.embeddings[0].values;
  cache.set(text, embedding);
  return embedding;
}
```
**✅ Use Batch API for Multiple Texts**
```typescript
// Single batch request vs multiple individual requests
const embeddings = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  contents: texts, // Array of texts
  config: { taskType: 'RETRIEVAL_DOCUMENT' }
});
```
### Never Do

**❌ Don't Skip Task Type**
```typescript
// Reduces quality by 10-30%
const embedding = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  contents: text
  // Missing taskType!
});
```
**❌ Don't Mix Different Dimensions**
```typescript
// Can't compare embeddings with different dimensions
const emb1 = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  contents: text1,
  config: { outputDimensionality: 768 }
});

const emb2 = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  contents: text2,
  config: { outputDimensionality: 1536 } // Different dimensions!
});

// ❌ Can't calculate similarity between vectors of different dimensions
const similarity = cosineSimilarity(emb1.embeddings[0].values, emb2.embeddings[0].values);
```
**❌ Don't Use Wrong Task Type for RAG**
```typescript
// Reduces search quality
const queryEmbedding = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  contents: query,
  config: { taskType: 'RETRIEVAL_DOCUMENT' } // Wrong! Should be RETRIEVAL_QUERY
});
```
---
## Using Bundled Resources

### Templates (templates/)

- `package.json` - Package configuration with verified versions
- `basic-embeddings.ts` - Single text embedding with SDK
- `embeddings-fetch.ts` - Fetch-based embeddings for Cloudflare Workers
- `batch-embeddings.ts` - Batch processing with rate limiting
- `rag-with-vectorize.ts` - Complete RAG implementation with Vectorize
- `semantic-search.ts` - Cosine similarity and top-K search
- `clustering.ts` - K-means clustering implementation

### References (references/)

- `model-comparison.md` - Compare Gemini vs OpenAI vs Workers AI embeddings
- `vectorize-integration.md` - Cloudflare Vectorize setup and patterns
- `rag-patterns.md` - Complete RAG implementation strategies
- `dimension-guide.md` - Choosing the right dimensions (768 vs 1536 vs 3072)
- `top-errors.md` - 8 common errors and detailed solutions

### Scripts (scripts/)

- `check-versions.sh` - Verify the @google/genai package version is current
---
## Official Documentation
- Embeddings Guide: https://ai.google.dev/gemini-api/docs/embeddings
- Model Spec: https://ai.google.dev/gemini-api/docs/models/gemini#gemini-embedding-001
- Rate Limits: https://ai.google.dev/gemini-api/docs/rate-limits
- SDK Reference: https://www.npmjs.com/package/@google/genai
- Context7 Library ID: `/websites/ai_google_dev_gemini-api`
---
## Related Skills
- google-gemini-api - Main Gemini API for text/image generation
- cloudflare-vectorize - Vector database for storing embeddings
- cloudflare-workers-ai - Workers AI embeddings (BGE models)
---
## Success Metrics

- Token Savings: ~60% compared to manual implementation
- Errors Prevented: 8 documented errors with solutions
- Production Tested: ✅ Verified in RAG applications
- Package Version: @google/genai@1.27.0
- Last Updated: 2025-10-25
---
## License
MIT License - Free to use in personal and commercial projects.
---
## Questions or Issues?
- GitHub: https://github.com/jezweb/claude-skills
- Email: jeremy@jezweb.net