10 results for tag "benchmark"
Performance baseline and regression detector with four modes: page perf (Core Web Vitals + page weight + JS/CSS/image bundle sizes via browser MCP), API perf (100 hits per endpoint with p50/p95/p99 plus 10-concurrent load), build perf (cold build, HMR, test/typecheck/lint, Docker), and `baseline`/`compare` mode that diffs metrics before vs after a change. Stores baselines in git-tracked `.ecc/benchmarks/*.json` and is meant to run on every PR.
A Claude Code plugin that generates presentation slides, part of a collection including tools for LangChain usage, news extraction, subtitle processing, and content orchestration.
Provides a marketplace of Claude plugins for generating slides, using LangChain, and extracting news content from various platforms.
Benchmarks MCP servers against real GitHub issues with one command, producing hard performance numbers.
Custom coding-agent plugins for Research Software Engineering (RSE) and scientific computing tasks.
Claude Code plugin concepts for Nixtla - Generate TimeGPT pipelines, model benchmarks, and FastAPI services from natural language
Automated quality assurance for Claude Code agents using LLM-as-judge evaluation. Built by BrandCast.