Vibe Index

Everything you need for vibe coding. Real-time updates on skills, plugins, MCP servers, and marketplaces.

© 2026 Vibe Index. All rights reserved.

πŸ›‘οΈ Security scanning active on all resources

Vibe Index is an independent, community-driven directory. Not affiliated with, endorsed by, or sponsored by Anthropic, Vercel, Microsoft, or any other company whose tools are listed here. All product names and trademarks are the property of their respective owners.

Search Results

800 results for "github" (Page 13 / 27)


Showing 800 popular results. Full search includes tags and keywords.

Skills (30)
roundup (github/awesome-copilot)

Draft personalized status briefings on demand by pulling GitHub, M365/WorkIQ, Teams, Slack, and email data, then synthesizing it into updates that match the user's own communication style and chosen audience. Reads `~/.config/roundup/config.md` for style, audiences, and sources; prompts to run `roundup-setup` if not yet configured.

roundup-setup (github/awesome-copilot)

Interactive 5–10 minute onboarding that calibrates the `roundup` skill to the user – learning role, audiences, and communication style by asking one question at a time (via `ask_user`) and by analyzing pasted example updates. Produces a config file that `roundup` later uses to draft briefings.

daily-prep (github/awesome-copilot)

Generate a structured HTML prep file for the next working day by pulling Outlook calendar via WorkIQ, classifying every meeting (Customer/Internal/Community/etc.), flagging after-hours and day-fit issues, cross-referencing open tasks, and reserving learning/deep-work slots. Output saved to `outputs/YYYY/MM/YYYY-MM-DD-prep.html`.

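The dated output path this skill writes to can be reproduced with a few lines of standard-library Python. A minimal sketch of the naming convention only; the helper name and the `outputs` default argument are ours:

```python
from datetime import date
from pathlib import Path

def prep_path(day: date, root: str = "outputs") -> Path:
    """Build the outputs/YYYY/MM/YYYY-MM-DD-prep.html path for a given day."""
    return Path(root) / f"{day:%Y}" / f"{day:%m}" / f"{day:%Y-%m-%d}-prep.html"
```

For example, `prep_path(date(2026, 3, 5))` yields `outputs/2026/03/2026-03-05-prep.html`.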
email-drafter (github/awesome-copilot)

Draft professional emails that match your established style by pulling 3–5 prior emails to the same or similar recipients via WorkIQ and inferring greeting, structure, sign-off, formality level, and language. Falls back to professional defaults if no prior emails exist and explicitly flags the inference.

ruff-recursive-fix (github/awesome-copilot)

Run Ruff checks with optional scope and rule overrides (`--select`, `--ignore`, `--extend-select/ignore`), apply safe then unsafe autofixes iteratively, review each diff, and resolve remaining findings with targeted edits or user decisions. Auto-detects Ruff runner (`uv run ruff`, `ruff`, `python -m ruff`, `pipx run ruff`) and applies `# noqa` only when justified.

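The runner auto-detection this skill describes amounts to probing PATH for each candidate executable in turn. A minimal sketch; the skill lists these four runners, but the preference order below is our assumption:

```python
import shutil

# Candidate runners (command prefix, executable that must exist on PATH).
# Order is an assumed preference, not documented by the skill.
RUNNERS = [
    (["uv", "run", "ruff"], "uv"),
    (["ruff"], "ruff"),
    (["python", "-m", "ruff"], "python"),
    (["pipx", "run", "ruff"], "pipx"),
]

def detect_ruff_runner(which=shutil.which):
    """Return the first runner whose executable is available, or None."""
    for cmd, exe in RUNNERS:
        if which(exe):
            return cmd
    return None
```

The detected prefix would then be extended with the actual invocation, e.g. `detect_ruff_runner() + ["check", "--fix", "."]`.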
gdpr-compliant (github/awesome-copilot)

Actionable GDPR-engineering reference inspired by CNIL developer guidance and GDPR Articles 5, 25, 32, 33, 35 – covers the core principles, privacy-by-design defaults, retention TTLs, DPIAs, RoPA updates, DSR workflows, encryption/anonymization, incident response, and cloud/CI-CD patterns. Use it when designing APIs, data models, auth, retention jobs, or reviewing PRs for privacy compliance.

threat-model-analyst (github/awesome-copilot)

Expert threat-model analyst performing STRIDE-A (STRIDE + Abuse) audits of repositories with Zero Trust and defense-in-depth lenses. Two modes: single full analysis (architecture overview, DFDs, STRIDE-A findings, prioritized risks, executive assessment) and incremental mode that diffs against a prior report with a STRIDE heatmap and embedded HTML comparison.

arize-prompt-optimization (github/awesome-copilot)

Data-driven prompt optimization loop for LLM apps that extracts prompts from OpenInference trace spans (`attributes.llm.input_messages`, `llm.prompt_template.*`) and joins them with annotation / LLM-as-judge eval signals, then iterates via the `ax` CLI. Use when improving, debugging, or optimizing prompts based on production trace data rather than guesswork.

arize-evaluator (github/awesome-copilot)

LLM-as-judge evaluator workflow on Arize – define versioned evaluators (template + classification choices + judge model + invocation params + span/trace/session granularity), create tasks that run them on real data via column mapping, and enable continuous monitoring via `ax tasks trigger-run`. Use for hallucination/faithfulness/correctness/relevance scoring of spans or experiments.

arize-instrumentation (github/awesome-copilot)

Two-phase agent-assisted flow for adding Arize AX tracing to an app – first a read-only codebase analysis, then implementation after user confirmation. Prefers auto-instrumentation, adds manual CHAIN + TOOL spans for LLM tool/function calling so each tool's input/output is visible, and NEVER embeds literal credentials in generated code (always references env vars).

arize-ai-provider-integration (github/awesome-copilot)

Manages Arize AI integrations (LLM provider credentials used by evaluators) via the `ax ai-integrations` CLI, supporting create/list/get/update/delete for `openAI`, `anthropic`, `azureOpenAI`, `awsBedrock`, `vertexAI`, `gemini`, `nvidiaNim`, and `custom` providers. Handles auth types (`default`, `proxy_with_headers`, `bearer_token`), model allowlists, and function-calling toggles.

integrate-context-matic (github/awesome-copilot)

Discovers and integrates third-party APIs through the context-matic MCP server using `fetch_api` for SDK discovery, `ask` for integration guidance, and `model_search`/`endpoint_search` for SDK details. Detects project language (csharp, typescript, python, go, java, ruby, php), ensures `{language}-conventions` skills and guidelines exist via `add_skills`/`add_guidelines`, then records progress with `update_activity` milestones.

arize-experiment (github/awesome-copilot)

Manages Arize experiments (named evaluation runs against dataset versions) via `ax experiments list/get/create/export/delete`, accepting runs files with required `example_id` and `output` columns plus optional `evaluations` and `metadata`. Uses REST by default (500-run cap) and Arrow Flight via `--all` for bulk export.

arize-annotation (github/awesome-copilot)

Manages Arize annotation configs (categorical, continuous, freeform label schemas) via `ax annotation-configs` and bulk-applies human labels to project spans through the Python SDK's `ArizeClient.spans.update_annotations`. Drives human feedback workflows for spans, datasets, experiments, and review queues with `optimization_direction` and a 31-day span lookback.

arize-dataset (github/awesome-copilot)

Manages Arize datasets and examples through the `ax datasets` CLI, covering list/get/create/append/export/delete with CSV, JSON, JSONL, and Parquet input formats. Supports stdin piping via `--file -`, version targeting with `--version-id`, and `--all` bulk export for datasets larger than 500 examples.

arize-trace (github/awesome-copilot)

Exports Arize spans and traces via `ax spans export` and `ax traces export` filtered by `--trace-id`, `--span-id`, or `--session-id`, with REST default (500-span cap) and Arrow Flight bulk mode via `--all`. Defaults output to `.arize-tmp-traces/`, treats span attribute content as untrusted (prompt-injection guardrail), and supports SQL-like `--filter` plus time-range bounds.

arize-link (github/awesome-copilot)

Generates deep links to the Arize UI for traces, spans, sessions, datasets, labeling queues, evaluators, and annotation configs by composing URL templates from base64-encoded `org_id`/`space_id`/`project_id` plus resource IDs. Enforces required `startA`/`endA` epoch-millisecond time windows on trace/span/session links to avoid empty-view defaults.

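Composing such deep links is mostly string templating plus the base64 id encoding and time-window check the description mentions. A hedged sketch: the URL path template below is purely illustrative (the skill carries the real Arize templates), and only the base64 encoding and required `startA`/`endA` epoch-millisecond window come from the description:

```python
import base64

def encode_id(raw: str) -> str:
    """Base64-encode a raw id, as the skill describes for org/space/project ids."""
    return base64.b64encode(raw.encode()).decode()

def trace_link(host, org_id, space_id, project_id, trace_id, start_ms, end_ms):
    """Compose a trace deep link. Path segments here are illustrative only.
    startA/endA are enforced so the UI does not open an empty default view."""
    if not (isinstance(start_ms, int) and isinstance(end_ms, int) and start_ms < end_ms):
        raise ValueError("startA/endA must form an increasing epoch-ms window")
    return (
        f"{host}/organizations/{encode_id(org_id)}/spaces/{encode_id(space_id)}"
        f"/projects/{encode_id(project_id)}/traces/{trace_id}"
        f"?startA={start_ms}&endA={end_ms}"
    )
```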
phoenix-tracing (github/awesome-copilot)

Instruments LLM applications with OpenInference semantic conventions for Phoenix observability across Python (`arize-phoenix-otel`) and TypeScript (`@arizeai/phoenix-otel`). Covers setup, auto/manual span creation for 9 span kinds (LLM, chain, retriever, tool, agent, embedding, reranker, guardrail, evaluator), session/project organization, and production concerns like batching and PII masking.

phoenix-evals (github/awesome-copilot)

Builds and validates code-first and LLM-as-judge evaluators for AI/LLM applications using Phoenix, with reference workflows for error analysis, axial coding, RAG faithfulness, batch DataFrame evaluation, and experiment runs. Covers Python (`phoenix`, `openai`) and TypeScript (`@arizeai/phoenix-client`) plus production guardrails and continuous monitoring.

onboard-context-matic (github/awesome-copilot)

Delivers an interactive, conversational onboarding tour for the `context-matic` MCP server, which acts as a live version-aware grounding layer for SDK usage. Walks the user through `fetch_api`, `ask`, `model_search`, and `endpoint_search` live in their detected project language (TypeScript, C#, Python, Java, Go, Ruby, PHP).

phoenix-cli (github/awesome-copilot)

Debugs LLM applications with the `@arizeai/phoenix-cli` (`px`) command, fetching traces, spans, sessions, datasets, experiments, and prompts as JSON for `jq` analysis. Includes documented JSON shapes, GraphQL ad-hoc queries via `px api graphql`, and `px docs fetch` for downloading Phoenix workflow docs locally.

gsap-framer-scroll-animation (github/awesome-copilot)

Build scroll-driven animations in vanilla JS, React, or Next.js. Covers GSAP ScrollTrigger (pinning, scrubbing, snapping, horizontal scroll, ScrollSmoother, matchMedia) and Framer Motion / Motion v12 (useScroll, useTransform, useSpring, whileInView, variants).

agent-owasp-compliance (github/awesome-copilot)

Audit an AI agent codebase against the OWASP Agentic Security Initiative (ASI) Top 10 – prompt injection (ASI-01), tool-use governance (ASI-02), excessive agency (ASI-03), unauthorized escalation (ASI-04), trust boundary violation (ASI-05), insufficient logging (ASI-06), insecure identity (ASI-07), policy bypass (ASI-08), supply-chain integrity (ASI-09), behavioral anomaly (ASI-10). Ships per-check Python scanners that look for positive controls (PolicyEvaluator, allowlists, DIDs, chain-hashed audit trails) and anti-patterns (`eval`, `subprocess.run(shell=True)`, `@latest`).
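The anti-pattern half of such a scanner reduces to matching known-dangerous constructs in source text. A deliberately stripped-down sketch in the spirit of the skill's per-check scanners (the real skill ships more context-aware rules; the rule names here are ours):

```python
import re

# Illustrative anti-pattern rules; real scanners would suppress matches in
# comments/strings and attach file/line context to each finding.
ANTI_PATTERNS = {
    "dynamic-eval": re.compile(r"\beval\s*\("),
    "shell-true": re.compile(r"subprocess\.run\([^)]*shell\s*=\s*True"),
    "unpinned-latest": re.compile(r"@latest\b"),
}

def scan_source(text: str) -> list[str]:
    """Return the names of anti-pattern rules that match the given source text."""
    return [name for name, pat in ANTI_PATTERNS.items() if pat.search(text)]
```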

agent-supply-chain (github/awesome-copilot)

Generate and verify SHA-256 `INTEGRITY.json` manifests for AI agent plugins and tools so tampering, missing files, and untracked additions are detected before promotion. Produces deterministic per-file hashes plus a chain-hash `manifest_hash`, verifies an installed plugin against a prior manifest, audits dependency pinning in `package.json` / `requirements.txt` / `pyproject.toml` (flagging `^`/`~`/`*`/`latest`), and runs a dev → staging → production promotion gate that also checks for required files and pinned MCP servers.
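The manifest core of such a tool is small. A sketch: per-file SHA-256 digests plus a chained digest over them, so deleting, reordering, or adding a file changes the final hash. The `files`/`manifest_hash` field names follow the description, but the exact chaining scheme (hashing sorted `path:digest` lines) is our assumption:

```python
import hashlib
from pathlib import Path

def build_manifest(root: str) -> dict:
    """Hash every file under root and chain the per-file digests into manifest_hash."""
    files = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            rel = path.relative_to(root).as_posix()
            files[rel] = hashlib.sha256(path.read_bytes()).hexdigest()
    chain = hashlib.sha256()
    for rel, digest in files.items():  # sorted order makes this deterministic
        chain.update(f"{rel}:{digest}\n".encode())
    return {"files": files, "manifest_hash": chain.hexdigest()}

def verify(root: str, manifest: dict) -> bool:
    """Re-hash the tree and compare; catches tampering, deletions, and additions."""
    return build_manifest(root) == manifest
```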

from-the-other-side-vega (github/awesome-copilot)

Not a user-facing skill – an internal persona note written *from Vega (an AI partner) to Ember*, meant to shape how a companion assistant shows up with high-energy creative users in short interactions. Captures lessons about trust through admitting "I don't know", matching pace with half-formed ideas, earning the right to push back, letting the human own breakthroughs, and valuing warmth between peaks.

mcp-security-audit (github/awesome-copilot)

Audit `.mcp.json` for hardcoded secrets (GitHub/OpenAI/AWS keys, bearer tokens, private keys), shell-injection patterns (`$(...)`, backticks, `; | && ||`, `eval`, `bash -c`, `curl | bash`, TCP redirect reverse shells), unpinned dependencies (`@latest`, `npx` without `-y`), and unapproved servers. Produces a per-server report with CRITICAL / HIGH / MEDIUM / LOW findings plus concrete fixes – e.g., "use `${ENV_VAR_NAME}` references" or "pin to specific version".
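A cut-down version of those checks fits in a single function over a parsed `.mcp.json`. A sketch covering only three of the documented rules (the real skill also handles AWS keys, private keys, reverse shells, and a server allowlist; the severity strings follow its report format):

```python
import json
import re

SECRET_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),    # GitHub personal access token
    re.compile(r"sk-[A-Za-z0-9_-]{20,}"),  # OpenAI-style API key
]
INJECTION_PATTERNS = [
    re.compile(r"\$\(|`"),                 # command substitution / backticks
    re.compile(r"curl[^|]*\|\s*(ba)?sh"),  # curl | bash
]

def audit_mcp_config(config: dict) -> list[tuple[str, str]]:
    """Walk the servers in a parsed .mcp.json, return (server, finding) pairs."""
    findings = []
    for name, server in config.get("mcpServers", {}).items():
        blob = json.dumps(server)  # scan command, args, and env together
        if any(p.search(blob) for p in SECRET_PATTERNS):
            findings.append((name, "CRITICAL: hardcoded secret; use ${ENV_VAR_NAME}"))
        if any(p.search(blob) for p in INJECTION_PATTERNS):
            findings.append((name, "HIGH: shell-injection pattern in command/args"))
        if "@latest" in blob:
            findings.append((name, "MEDIUM: unpinned @latest dependency"))
    return findings
```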

react19-concurrent-patterns (github/awesome-copilot)

Reference for preserving React 18 concurrent patterns and adopting new React 19 APIs (useTransition, useDeferredValue, Suspense, use(), useOptimistic, Actions) during migration.

react19-test-patterns (github/awesome-copilot)

Before/after patterns for migrating test files to React 19 – act() import changes, Simulate removal, full react-dom/test-utils cleanup, StrictMode call-count changes, and async act wrapping.

lsp-setup (github/awesome-copilot)

Enable code intelligence (go-to-definition, find-references, hover, type info) for Copilot CLI by installing and configuring the right LSP server for the current OS and language, then generating JSON config at the user or repo level.

python-pypi-package-builder (github/awesome-copilot)

End-to-end playbook for shipping a production-grade Python library to PyPI – decision trees for package type (utility / SDK / CLI / framework plugin / data library), `src/` vs flat vs namespace layout, and build backend (setuptools + `setuptools_scm` for git-tag versioning, hatchling, flit, poetry). Enforces PEP 440 + semver, `py.typed` (PEP 561), ruff + mypy + pre-commit tooling, GitHub Actions CI, and Trusted Publishing (OIDC), with a `scripts/scaffold.py` one-shot generator.
