800 results for "github" (Page 13 / 27)
Draft personalized status briefings on demand by pulling GitHub, M365/WorkIQ, Teams, Slack, and email data, then synthesizing it into updates that match the user's own communication style and chosen audience. Reads `~/.config/roundup/config.md` for style, audiences, and sources; prompts to run `roundup-setup` if not yet configured.
Interactive 5–10 minute onboarding that calibrates the `roundup` skill to the user – learning role, audiences, and communication style – by asking one question at a time (via `ask_user`) and by analyzing pasted example updates. Produces a config file that `roundup` later uses to draft briefings.
Generate a structured HTML prep file for the next working day by pulling Outlook calendar via WorkIQ, classifying every meeting (Customer/Internal/Community/etc.), flagging after-hours and day-fit issues, cross-referencing open tasks, and reserving learning/deep-work slots. Output saved to `outputs/YYYY/MM/YYYY-MM-DD-prep.html`.
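The next-working-day path convention above can be sketched in Python; `next_working_day` and `prep_file_path` are hypothetical helper names, and the skill's actual date logic may differ:

```python
from datetime import date, timedelta
from pathlib import Path

def next_working_day(today: date) -> date:
    """Return the next Monday-to-Friday date after `today`."""
    day = today + timedelta(days=1)
    while day.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        day += timedelta(days=1)
    return day

def prep_file_path(today: date, root: Path = Path("outputs")) -> Path:
    """Build outputs/YYYY/MM/YYYY-MM-DD-prep.html for the next working day."""
    d = next_working_day(today)
    return root / f"{d:%Y}" / f"{d:%m}" / f"{d:%Y-%m-%d}-prep.html"
```

Run on a Friday, the prep file lands under the following Monday's date.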
Draft professional emails that match your established style by pulling 3–5 prior emails to the same or similar recipients via WorkIQ and inferring greeting, structure, sign-off, formality level, and language. Falls back to professional defaults if no prior emails exist and explicitly flags the inference.
Run Ruff checks with optional scope and rule overrides (`--select`, `--ignore`, `--extend-select/ignore`), apply safe then unsafe autofixes iteratively, review each diff, and resolve remaining findings with targeted edits or user decisions. Auto-detects Ruff runner (`uv run ruff`, `ruff`, `python -m ruff`, `pipx run ruff`) and applies `# noqa` only when justified.
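The runner auto-detection order could look roughly like this sketch; probing each candidate's executable on `PATH` is an assumption (e.g. `python -m ruff` also needs Ruff installed in that interpreter), and `detect_ruff_runner` is a hypothetical name:

```python
import shutil

# Candidate runners in the preference order listed above; each entry is
# (full command, executable to probe on PATH).
RUNNERS = [
    ("uv run ruff", "uv"),
    ("ruff", "ruff"),
    ("python -m ruff", "python"),
    ("pipx run ruff", "pipx"),
]

def detect_ruff_runner(which=shutil.which):
    """Return the first runner whose probe executable resolves, else None."""
    for command, probe in RUNNERS:
        if which(probe):
            return command
    return None
```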
Actionable GDPR-engineering reference inspired by CNIL developer guidance and GDPR Articles 5, 25, 32, 33, and 35 – covers the core principles, privacy-by-design defaults, retention TTLs, DPIAs, RoPA updates, DSR workflows, encryption/anonymization, incident response, and cloud/CI-CD patterns. Use it when designing APIs, data models, auth, retention jobs, or reviewing PRs for privacy compliance.
Expert threat-model analyst performing STRIDE-A (STRIDE + Abuse) audits of repositories with Zero Trust and defense-in-depth lenses. Two modes: single full analysis (architecture overview, DFDs, STRIDE-A findings, prioritized risks, executive assessment) and incremental mode that diffs against a prior report with a STRIDE heatmap and embedded HTML comparison.
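The incremental mode's STRIDE heatmap comparison might reduce to a per-category delta like this sketch (`stride_heatmap_delta` is a hypothetical name; the real report format is richer):

```python
# STRIDE-A categories: the six classic STRIDE threats plus Abuse.
STRIDE_A = [
    "Spoofing", "Tampering", "Repudiation", "Information Disclosure",
    "Denial of Service", "Elevation of Privilege", "Abuse",
]

def stride_heatmap_delta(prior, current):
    """Finding-count change per category between two reports.

    `prior` and `current` map category name -> finding count; missing
    categories count as zero.
    """
    return {c: current.get(c, 0) - prior.get(c, 0) for c in STRIDE_A}
```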
Data-driven prompt optimization loop for LLM apps that extracts prompts from OpenInference trace spans (`attributes.llm.input_messages`, `llm.prompt_template.*`) and joins them with annotation / LLM-as-judge eval signals, then iterates via the `ax` CLI. Use when improving, debugging, or optimizing prompts based on production trace data rather than guesswork.
LLM-as-judge evaluator workflow on Arize – define versioned evaluators (template + classification choices + judge model + invocation params + span/trace/session granularity), create tasks that run them on real data via column mapping, and enable continuous monitoring via `ax tasks trigger-run`. Use for hallucination/faithfulness/correctness/relevance scoring of spans or experiments.
Two-phase agent-assisted flow for adding Arize AX tracing to an app – first a read-only codebase analysis, then implementation after user confirmation. Prefers auto-instrumentation, adds manual CHAIN + TOOL spans for LLM tool/function calling so each tool's input/output is visible, and NEVER embeds literal credentials in generated code (always references env vars).
Manages Arize AI integrations (LLM provider credentials used by evaluators) via the `ax ai-integrations` CLI, supporting create/list/get/update/delete for `openAI`, `anthropic`, `azureOpenAI`, `awsBedrock`, `vertexAI`, `gemini`, `nvidiaNim`, and `custom` providers. Handles auth types (`default`, `proxy_with_headers`, `bearer_token`), model allowlists, and function-calling toggles.
Discovers and integrates third-party APIs through the context-matic MCP server using `fetch_api` for SDK discovery, `ask` for integration guidance, and `model_search`/`endpoint_search` for SDK details. Detects project language (csharp, typescript, python, go, java, ruby, php), ensures `{language}-conventions` skills and guidelines exist via `add_skills`/`add_guidelines`, then records progress with `update_activity` milestones.
Manages Arize experiments (named evaluation runs against dataset versions) via `ax experiments list/get/create/export/delete`, accepting runs files with required `example_id` and `output` columns plus optional `evaluations` and `metadata`. Uses REST by default (500-run cap) and Arrow Flight via `--all` for bulk export.
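The required/optional column contract for runs files can be sketched as a validator; `validate_runs_rows` is a hypothetical helper, not part of the `ax` CLI:

```python
# Column names from the runs-file contract described above.
REQUIRED = {"example_id", "output"}
OPTIONAL = {"evaluations", "metadata"}

def validate_runs_rows(rows):
    """Check each run row for required columns and flag unknown ones.

    Returns a list of human-readable problems; an empty list means valid.
    """
    problems = []
    for i, row in enumerate(rows):
        missing = REQUIRED - row.keys()
        if missing:
            problems.append(f"row {i}: missing {sorted(missing)}")
        unknown = row.keys() - REQUIRED - OPTIONAL
        if unknown:
            problems.append(f"row {i}: unknown {sorted(unknown)}")
    return problems
```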
Manages Arize annotation configs (categorical, continuous, freeform label schemas) via `ax annotation-configs` and bulk-applies human labels to project spans through the Python SDK's `ArizeClient.spans.update_annotations`. Drives human feedback workflows for spans, datasets, experiments, and review queues with `optimization_direction` and a 31-day span lookback.
Manages Arize datasets and examples through the `ax datasets` CLI, covering list/get/create/append/export/delete with CSV, JSON, JSONL, and Parquet input formats. Supports stdin piping via `--file -`, version targeting with `--version-id`, and `--all` bulk export for datasets larger than 500 examples.
Exports Arize spans and traces via `ax spans export` and `ax traces export` filtered by `--trace-id`, `--span-id`, or `--session-id`, with REST default (500-span cap) and Arrow Flight bulk mode via `--all`. Defaults output to `.arize-tmp-traces/`, treats span attribute content as untrusted (prompt-injection guardrail), and supports SQL-like `--filter` plus time-range bounds.
Generates deep links to the Arize UI for traces, spans, sessions, datasets, labeling queues, evaluators, and annotation configs by composing URL templates from base64-encoded `org_id`/`space_id`/`project_id` plus resource IDs. Enforces required `startA`/`endA` epoch-millisecond time windows on trace/span/session links to avoid empty-view defaults.
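Composing such a link might look like the sketch below; the path template is illustrative, not the real Arize URL scheme – only the base64-encoded IDs and the required `startA`/`endA` epoch-millisecond parameters follow the description:

```python
import base64

def b64(raw: str) -> str:
    """Base64-encode an ID the way the link templates expect (assumption)."""
    return base64.b64encode(raw.encode()).decode()

def trace_link(host, org_id, space_id, project_id, trace_id, start_ms, end_ms):
    """Compose a trace deep link with a required epoch-ms time window.

    The path segments below are a hypothetical layout for illustration.
    """
    return (
        f"{host}/organizations/{b64(org_id)}/spaces/{b64(space_id)}"
        f"/projects/{b64(project_id)}/traces/{trace_id}"
        f"?startA={start_ms}&endA={end_ms}"
    )
```

Requiring both `startA` and `endA` up front avoids the empty-view default mentioned above.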
Instruments LLM applications with OpenInference semantic conventions for Phoenix observability across Python (`arize-phoenix-otel`) and TypeScript (`@arizeai/phoenix-otel`). Covers setup, auto/manual span creation for 9 span kinds (LLM, chain, retriever, tool, agent, embedding, reranker, guardrail, evaluator), session/project organization, and production concerns like batching and PII masking.
Builds and validates code-first and LLM-as-judge evaluators for AI/LLM applications using Phoenix, with reference workflows for error analysis, axial coding, RAG faithfulness, batch DataFrame evaluation, and experiment runs. Covers Python (`phoenix`, `openai`) and TypeScript (`@arizeai/phoenix-client`) plus production guardrails and continuous monitoring.
Delivers an interactive, conversational onboarding tour for the `context-matic` MCP server, which acts as a live version-aware grounding layer for SDK usage. Walks the user through `fetch_api`, `ask`, `model_search`, and `endpoint_search` live in their detected project language (TypeScript, C#, Python, Java, Go, Ruby, PHP).
Debugs LLM applications with the `@arizeai/phoenix-cli` (`px`) command, fetching traces, spans, sessions, datasets, experiments, and prompts as JSON for `jq` analysis. Includes documented JSON shapes, GraphQL ad-hoc queries via `px api graphql`, and `px docs fetch` for downloading Phoenix workflow docs locally.
Build scroll-driven animations in vanilla JS, React, or Next.js. Covers GSAP ScrollTrigger (pinning, scrubbing, snapping, horizontal scroll, ScrollSmoother, matchMedia) and Framer Motion / Motion v12 (useScroll, useTransform, useSpring, whileInView, variants).
Audit an AI agent codebase against the OWASP Agentic Security Initiative (ASI) Top 10 – prompt injection (ASI-01), tool-use governance (ASI-02), excessive agency (ASI-03), unauthorized escalation (ASI-04), trust boundary violation (ASI-05), insufficient logging (ASI-06), insecure identity (ASI-07), policy bypass (ASI-08), supply-chain integrity (ASI-09), behavioral anomaly (ASI-10). Ships per-check Python scanners that look for positive controls (PolicyEvaluator, allowlists, DIDs, chain-hashed audit trails) and anti-patterns (`eval`, `subprocess.run(shell=True)`, `@latest`).
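A minimal version of the anti-pattern side of those scanners, assuming simple regex matching (the shipped per-check scripts are more thorough, and the check names here are illustrative):

```python
import re

# Regexes approximating the anti-pattern checks listed above.
ANTI_PATTERNS = {
    "ASI-02 unsafe eval": re.compile(r"\beval\s*\("),
    "ASI-02 shell=True": re.compile(r"subprocess\.run\([^)]*shell\s*=\s*True"),
    "ASI-09 unpinned dependency": re.compile(r"@latest\b"),
}

def scan_source(text):
    """Return the names of anti-pattern checks that fire on `text`."""
    return [name for name, rx in ANTI_PATTERNS.items() if rx.search(text)]
```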
Generate and verify SHA-256 `INTEGRITY.json` manifests for AI agent plugins and tools so tampering, missing files, and untracked additions are detected before promotion. Produces deterministic per-file hashes plus a chain-hash `manifest_hash`, verifies an installed plugin against a prior manifest, audits dependency pinning in `package.json` / `requirements.txt` / `pyproject.toml` (flagging `^`/`~`/`*`/`latest`), and runs a dev → staging → production promotion gate that also checks for required files and pinned MCP servers.
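The deterministic per-file hashing and chain hash could be sketched as follows; the field names mirror the description above, but the exact `INTEGRITY.json` schema is an assumption, not the tool's documented format:

```python
import hashlib
from pathlib import Path

def build_manifest(root: Path, files):
    """Hash each file and chain the per-file digests into a manifest_hash."""
    entries = {}
    chain = hashlib.sha256()
    for rel in sorted(files):            # sorted -> deterministic chain order
        digest = hashlib.sha256((root / rel).read_bytes()).hexdigest()
        entries[rel] = digest
        chain.update(digest.encode())    # chain-hash over per-file digests
    return {"files": entries, "manifest_hash": chain.hexdigest()}

def verify_manifest(root: Path, manifest):
    """Return (missing, tampered, untracked) file lists vs. the manifest."""
    recorded = manifest["files"]
    on_disk = {p.relative_to(root).as_posix()
               for p in root.rglob("*") if p.is_file()}
    missing = sorted(set(recorded) - on_disk)
    untracked = sorted(on_disk - set(recorded))
    tampered = sorted(
        rel for rel in set(recorded) & on_disk
        if hashlib.sha256((root / rel).read_bytes()).hexdigest() != recorded[rel]
    )
    return missing, tampered, untracked
```

Chaining over sorted per-file digests means any single-file change also changes `manifest_hash`.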
Not a user-facing skill β an internal persona note written *from Vega (an AI partner) to Ember*, meant to shape how a companion assistant shows up with high-energy creative users in short interactions. Captures lessons about trust through admitting "I don't know", matching pace with half-formed ideas, earning the right to push back, letting the human own breakthroughs, and valuing warmth between peaks.
Audit `.mcp.json` for hardcoded secrets (GitHub/OpenAI/AWS keys, bearer tokens, private keys), shell-injection patterns (`$(...)`, backticks, `; | && ||`, `eval`, `bash -c`, `curl | bash`, TCP redirect reverse shells), unpinned dependencies (`@latest`, `npx` without `-y`), and unapproved servers. Produces a per-server report with CRITICAL / HIGH / MEDIUM / LOW findings plus concrete fixes – e.g., "use `${ENV_VAR_NAME}` references" or "pin to specific version".
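A stripped-down version of that audit, covering only a few of the listed patterns (the severities and regexes here are illustrative, not the skill's actual rule set):

```python
import re

# A sample of the secret and shell-injection patterns described above.
FINDINGS = [
    ("CRITICAL", "hardcoded GitHub token", re.compile(r"ghp_[A-Za-z0-9]{36}")),
    ("CRITICAL", "hardcoded OpenAI key", re.compile(r"sk-[A-Za-z0-9]{20,}")),
    ("HIGH", "shell substitution", re.compile(r"\$\(|`")),
    ("MEDIUM", "unpinned @latest", re.compile(r"@latest\b")),
]

def audit_mcp_config(text):
    """Scan raw .mcp.json text, returning (severity, message) findings."""
    return [(sev, msg) for sev, msg, rx in FINDINGS if rx.search(text)]
```

Scanning the raw text (rather than parsed JSON) also catches secrets hidden in comments or malformed files.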
Reference for preserving React 18 concurrent patterns and adopting new React 19 APIs (useTransition, useDeferredValue, Suspense, use(), useOptimistic, Actions) during migration.
Before/after patterns for migrating test files to React 19 β act() import changes, Simulate removal, full react-dom/test-utils cleanup, StrictMode call-count changes, and async act wrapping.
Enable code intelligence (go-to-definition, find-references, hover, type info) for Copilot CLI by installing and configuring the right LSP server for the current OS and language, then generating JSON config at the user or repo level.
End-to-end playbook for shipping a production-grade Python library to PyPI – decision trees for package type (utility / SDK / CLI / framework plugin / data library), `src/` vs flat vs namespace layout, and build backend (setuptools + `setuptools_scm` for git-tag versioning, hatchling, flit, poetry). Enforces PEP 440 + semver, `py.typed` (PEP 561), ruff + mypy + pre-commit tooling, GitHub Actions CI, and Trusted Publishing (OIDC), with a `scripts/scaffold.py` one-shot generator.
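A minimal `src/` layout scaffold, as a sketch of what a generator like `scripts/scaffold.py` might emit (the exact files and pyproject contents are assumptions):

```python
from pathlib import Path

def scaffold_src_layout(root: Path, package: str):
    """Create a minimal src/ layout with a PEP 561 marker and pyproject."""
    pkg = root / "src" / package
    pkg.mkdir(parents=True, exist_ok=True)
    (pkg / "__init__.py").write_text("__all__ = []\n")
    (pkg / "py.typed").write_text("")        # PEP 561 typing marker
    (root / "pyproject.toml").write_text(
        "[build-system]\n"
        'requires = ["setuptools>=68", "setuptools_scm>=8"]\n'
        'build-backend = "setuptools.build_meta"\n'
    )
    return sorted(p.relative_to(root).as_posix()
                  for p in root.rglob("*") if p.is_file())
```

The `src/` layout keeps the package out of `sys.path` during development, so tests exercise the installed distribution rather than the working tree.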