Participate in our

AI Engineering Practices Benchmark

Understand how your organization uses AI tools for software engineering, and how that compares to the industry

Research Problem
Tool licenses and surveys give incomplete signals of how organizations actually use AI in software engineering.
Our Method
We analyze artifacts in your repos to benchmark how AI tools are being used in your software org.
Since 2022, we've worked with 600+ organizations and 120K+ engineers.
Our research has been featured in
Business Insider

Criteria for Participation

We work exclusively with companies and organizations that meet the following criteria:
Any Geography & Industry
Minimum Company Size: 50+ Software Engineers
Git Only: GitHub, GitLab, Bitbucket, or Azure DevOps

Receive Insights in 3 Steps

1
Run our Docker image
Scans for AI‑related artifacts (prompt files, agent configs, workflows)
2
Review your local report
Produces a report inside your environment summarizing your AI Engineering Practice Levels (L0–L4)
3
Optionally share anonymized metrics with SWEPR
You choose whether to send back a redacted summary (no source code). We use your metrics to build cross-industry benchmarks.
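To make the "redacted summary" concrete, here is a minimal sketch of what collapsing per-repo scan results into shareable aggregate metrics could look like. The field names (`repo_count`, `level_histogram`, `max_level`) and the input shape are assumptions for illustration, not SWEPR's actual schema; the point is that only counts leave your environment, never file contents or source code.

```python
def redact(scan_results: list[dict]) -> dict:
    """Collapse per-repo scan results into anonymized aggregate metrics.

    Each element of scan_results is assumed to carry a "level" key (0-4).
    The output contains only aggregate counts: no paths, no file contents.
    """
    levels = [r["level"] for r in scan_results]
    return {
        "repo_count": len(scan_results),
        "level_histogram": {f"L{lvl}": levels.count(lvl) for lvl in range(5)},
        "max_level": max(levels, default=0),
    }
```

A summary in this shape can be inspected locally before anything is sent back, which is the design point of step 3: the decision to share stays with you.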

AI Engineering Practice Levels

What we measure
We scan for concrete traces in version control rather than relying on surveys
What we look for
Files and directories for AI assistants, prompts, agents, and workflows
How we classify
Each repo gets a level (L0–L4) based on artifacts found
Level | Name | Definition
L0 | No observable AI use | No trace of AI use in version control or usage APIs
L1 | Opportunistic prompting | AI use is ad hoc, not reusable or shared; identified via usage APIs
L2 | Systematized prompting | Prompts and rules are stored and versioned so others can reuse them
L3 | Agent‑backed development | Teams build callable agents or scripts that offload parts of dev work
L4 | Orchestrated agentic workflows | Multiple agents/tools are coordinated via workflows or DAGs
For each level we attach a confidence score based on commit activity, recency, and number of contributors.
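The artifact-based classification above can be sketched as a simple filesystem scan. The specific file and directory patterns below are illustrative assumptions (the actual scanner's rules are not published here), and note that L1, which is identified via usage APIs rather than repo artifacts, cannot be detected this way:

```python
from pathlib import Path

# Hypothetical artifact patterns mapped to the practice level they suggest.
# Real AI-assistant config files exist with names like these, but the actual
# SWEPR pattern list is an assumption here.
ARTIFACT_PATTERNS = {
    ".cursorrules": 2,                        # versioned prompt/rule file -> L2
    "CLAUDE.md": 2,                           # versioned assistant instructions -> L2
    ".github/copilot-instructions.md": 2,     # shared Copilot rules -> L2
    "agents": 3,                              # callable agent code (hypothetical dir) -> L3
    "agent_workflows": 4,                     # multi-agent workflow defs (hypothetical dir) -> L4
}

def classify_repo(repo_root: str) -> int:
    """Return the highest practice level (0-4) suggested by artifacts found.

    Returns 0 when no artifacts match; L1 (API-identified use) is out of
    scope for a pure filesystem scan.
    """
    root = Path(repo_root)
    level = 0
    for rel_path, lvl in ARTIFACT_PATTERNS.items():
        if (root / rel_path).exists():
            level = max(level, lvl)
    return level
```

A production scanner would additionally weigh commit activity, recency, and contributor count per artifact to produce the confidence score mentioned above; this sketch only shows the level assignment.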

Other Ongoing Research

Productivity Research

Software Engineering Productivity Research

Get data-driven insights on the productivity of your software engineering organization.
AI Impact

Impact of AI on Engineering Productivity

Understand how AI tools like GitHub Copilot affect developer productivity and code quality.

Contact Us

Want to get in touch with the research team?
© 2025 Stanford Software Engineering Productivity Research Group