
Devinity: AI engineering management with context for what your teams actually ship.

Devinity is an AI-forward engineering management workspace: it connects initiatives, code changes, releases, and team narratives so leaders are not rebuilding a picture of reality from fragmented artifacts scattered across chat, tickets, and decks.


Why

Running engineering at scale is less about “tracking story points” and more about maintaining a *shared, honest model* of what is being built, by whom, under which constraints, and what actually reached production. In most organizations that model decays: Jira tickets diverge from Git reality, roadmaps are PowerPoint fiction, and new engineers spend weeks building mental maps from fragmented docs.

The pain intensifies when work spans multiple teams and vendors—you need a living context layer that ties epics to repositories, releases to customer commitments, and technical debt to business risk. Without that layer, “alignment” meetings recycle the same questions and leaders optimize for visibility instead of throughput. Devinity targets that gap: an eng-specific context fabric that helps teams grow without losing the thread of what shipped, why it shipped, and what is next.

How

Architecture — NestJS feature modules with clear boundaries (initiatives, repos, releases, people, AI summaries); Drizzle schemas against PostgreSQL for relational integrity; Redis for session-adjacent and rate-sensitive reads; shared DTOs via `@repo/api` so web and API agree on shapes.
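The shared-DTO idea can be sketched as follows. This is a minimal illustration, not the real `@repo/api` contract: `InitiativeDto` and its fields are hypothetical, and the point is simply that one declared shape plus one runtime guard is consumed by both web and API.

```typescript
// Hypothetical DTO shape shared between web and API.
// Field names are illustrative, not Devinity's actual schema.
interface InitiativeDto {
  id: string;
  title: string;
  status: "planned" | "active" | "shipped";
  repoIds: string[];
}

// Runtime guard so the web client can validate API responses
// against the same shape the server serializes.
function isInitiativeDto(value: unknown): value is InitiativeDto {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.id === "string" &&
    typeof v.title === "string" &&
    ["planned", "active", "shipped"].includes(v.status as string) &&
    Array.isArray(v.repoIds) &&
    v.repoIds.every((r) => typeof r === "string")
  );
}
```

Because both sides import the same type and guard, a schema drift between server and client fails loudly at the boundary instead of silently in a view.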

AI layer — Retrieval over approved sources (PR descriptions, RFCs, release notes—not raw secret data) to generate concise “state of initiative” briefs, risk callouts, and onboarding digests. Human review is always the contract: models propose, leads confirm.
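The "models propose, leads confirm" contract can be made concrete in types. A minimal sketch, with invented names (`Brief`, `Claim`, `approvedBy` are hypothetical stand-ins, not Devinity's actual schema): every claim must carry at least one citation, and a brief stays a proposal until a human approver is recorded.

```typescript
// Hypothetical shapes for a "state of initiative" brief in which
// every AI-generated claim must cite an approved source artifact.
type SourceKind = "pr_description" | "rfc" | "release_note";

interface Citation {
  kind: SourceKind;
  url: string;
}

interface Claim {
  text: string;
  citations: Citation[];
}

interface Brief {
  initiativeId: string;
  claims: Claim[];
  approvedBy?: string; // set only after a lead confirms
}

// Enforce the contract: reject uncited claims, and treat the brief
// as a proposal until a human approver is recorded.
function validateBrief(brief: Brief): string[] {
  const errors: string[] = [];
  brief.claims.forEach((claim, i) => {
    if (claim.citations.length === 0) {
      errors.push(`claim ${i} has no source citation`);
    }
  });
  if (!brief.approvedBy) {
    errors.push("brief is a proposal: no lead has confirmed it");
  }
  return errors;
}
```

Encoding the rule in the data model (rather than in a prompt) means an uncited claim cannot be persisted as fact, whatever the model emits.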

Web — Next.js shell for authenticated leadership views and public marketing pages; styled similarly to the rest of the monorepo so experimentation is cheap.

Principles — event-sourced release markers where possible, immutable links to source artifacts, least-privilege service tokens, and explicit data residency flags for future enterprise pilots.
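The event-sourcing principle can be sketched in a few lines. Event and field names here are invented for illustration: releases are an append-only log, and the current state is derived by replaying it, so history is never rewritten and the commit link stays immutable.

```typescript
// Hypothetical event-sourced release markers: state is a fold over
// an append-only log rather than a mutable row.
type ReleaseEvent =
  | { kind: "release_cut"; version: string; commitSha: string; at: string }
  | { kind: "release_deployed"; version: string; at: string }
  | { kind: "release_rolled_back"; version: string; at: string };

interface ReleaseState {
  version: string;
  commitSha: string; // immutable link back to the source artifact
  status: "cut" | "deployed" | "rolled_back";
}

function replay(events: ReleaseEvent[]): Map<string, ReleaseState> {
  const state = new Map<string, ReleaseState>();
  for (const e of events) {
    if (e.kind === "release_cut") {
      state.set(e.version, { version: e.version, commitSha: e.commitSha, status: "cut" });
    } else {
      const current = state.get(e.version);
      if (!current) continue; // ignore events for unknown releases
      current.status = e.kind === "release_deployed" ? "deployed" : "rolled_back";
    }
  }
  return state;
}
```

The same log also answers audit questions ("when did 1.2.0 roll back, and what commit was it?") without extra bookkeeping.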

Product requirements

Customer discovery — I ran a structured customer interview with my former CTO, walking through how they rebuilt quarterly planning after a reorganization. We mapped where context lived (Notion, Jira, GitHub, slides), where it lied, and how much time exec staff spent “re-briefing” teams after every escalation.

What we validated —

  • Leaders want a *single narrative spine* per initiative that links business intent → technical scope → shipping evidence—not another dashboard of vanity metrics.
  • ICs will ignore tools that require duplicate entry; ingestion must be API- and Git-native with light human curation.
  • AI is acceptable only when every claim cites a source artifact and edits are attributable.

MVP scope — initiative timelines with release anchors, repo ↔ team ownership graph, AI-generated weekly briefs with citations, and export to existing slide decks so Devinity augments—not replaces—current rituals.

Explicit non-goals for v0 — automated performance management, headcount planning, or HRIS replacement.

Analytics & measurement

Product analytics — Track `Brief Generated`, `Brief Edited`, `Source Link Followed`, and `Weekly Digest Exported` to measure whether AI output is trusted (editing without deletion is a positive signal). Funnel from invited leader → connected Git org → first pinned initiative.
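The "editing without deletion" trust signal could be computed roughly like this. Note the assumption: `Brief Deleted` is an extra event name not in the tracking plan above, added here only so deletion is observable in the sketch.

```typescript
// Hypothetical analytics events; "Brief Deleted" is an assumed
// extra event name, not part of the stated tracking plan.
interface AnalyticsEvent {
  name: "Brief Generated" | "Brief Edited" | "Brief Deleted" | "Source Link Followed" | "Weekly Digest Exported";
  briefId: string;
}

// Trust signal: of all generated briefs, the share that were edited
// but never deleted (editing without deletion is a positive signal).
function trustSignal(events: AnalyticsEvent[]): number {
  const generated = new Set<string>();
  const edited = new Set<string>();
  const deleted = new Set<string>();
  for (const e of events) {
    if (e.name === "Brief Generated") generated.add(e.briefId);
    if (e.name === "Brief Edited") edited.add(e.briefId);
    if (e.name === "Brief Deleted") deleted.add(e.briefId);
  }
  let trusted = 0;
  for (const id of generated) {
    if (edited.has(id) && !deleted.has(id)) trusted++;
  }
  return generated.size === 0 ? 0 : trusted / generated.size;
}
```

A brief that is generated and then ignored counts as neutral, not trusted; only engagement that keeps the output alive moves the metric.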

Engineering health — Deployment frequency and change-failure proxies per team (from CI/CD hooks), correlated—but not equated—with DORA metrics; the goal is *explainability*, not gamification.
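A rough sketch of those per-team proxies, assuming the CI/CD hooks are reduced to a simplified `DeployEvent` payload (an illustrative shape, not a real webhook format): count deploys per team and the share followed by a rollback or hotfix marker.

```typescript
// Hypothetical reduced CI/CD hook payload.
interface DeployEvent {
  team: string;
  failed: boolean; // true if this deploy triggered a rollback or hotfix
}

interface TeamHealth {
  deploys: number;
  changeFailureRate: number; // proxy, not a true DORA measurement
}

function teamHealth(events: DeployEvent[]): Map<string, TeamHealth> {
  const byTeam = new Map<string, { deploys: number; failures: number }>();
  for (const e of events) {
    const t = byTeam.get(e.team) ?? { deploys: 0, failures: 0 };
    t.deploys++;
    if (e.failed) t.failures++;
    byTeam.set(e.team, t);
  }
  const out = new Map<string, TeamHealth>();
  for (const [team, t] of byTeam) {
    out.set(team, { deploys: t.deploys, changeFailureRate: t.failures / t.deploys });
  }
  return out;
}
```

Surfacing the raw counts alongside the rate keeps the metric explainable: a 25% failure rate over four deploys reads very differently from the same rate over four hundred.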

Privacy — aggregate-only reporting for comparative team views until explicit opt-in; audit logs for who viewed which AI summary (future enterprise requirement).