Getting Started: From Hope to Proof in 10 Minutes
Stop assuming your tests are reliable. Start proving it.
New to Obvyr?
Read the Introduction first to understand how Obvyr replaces testing assumptions with evidence, or jump straight in below to start collecting data.
What You'll Achieve
By the end of this 10-minute guide, you'll transform from assumption-based to evidence-based testing:
Instead of:
- "I think this test is reliable" → "This test passed in 47/47 executions"
- "Coverage looks good" → "These tests caught 12 bugs this month"
- "It works locally" → "Local and CI environments match on 98% of executions"
You'll have:
- ✅ Your first project collecting comprehensive test execution data
- ✅ An agent wrapping your test commands with zero workflow disruption
- ✅ Evidence-based test reliability metrics in your dashboard
- ✅ Pattern insights revealing flaky tests, environment drift, or test value gaps
Time investment: 10 minutes | Value delivered: Immediate evidence-based testing confidence
Prerequisites
Before you start, ensure you have:
- Python 3.8 or later installed
- An existing test command you'd like to monitor (e.g., pytest, npm test, mypy)
- Access to create projects in your Obvyr organisation
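You can confirm the Python requirement from your shell before installing anything:

```bash
# Confirm Python 3.8+ is available (the interpreter may be named
# python3 on your system).
python --version
```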
Step 1: Create Your Project Structure
Time: 2 minutes | Value: Organized data collection for targeted insights
- Log into the Obvyr dashboard
- Navigate to Projects and click Create Project
- Give your project a meaningful name (e.g., "My API", "Frontend App", or "Payment Service")
Project Naming for Insights
Choose names that match how you want to analyse your testing data:
- By codebase: "Payment API", "User Frontend" - Compare test reliability across services
- By team: "Platform Team", "Product Team" - Track team-specific testing practices
- By environment: "Staging", "Production" - Analyse environmental differences
The goal is evidence-based insights, not rigid hierarchy. Organize however makes sense for your team.
Step 2: Create Your First CLI Agent
Time: 3 minutes | Value: Separate pattern analysis for each test type
- Open your newly created project
- Click Create Agent
- Name your CLI agent based on what it will monitor (e.g., "Unit Tests", "Lint", "Type Check")
- Copy the API key: you'll need it in the next step, and it is displayed only once, when the CLI agent is first created.
Why separate CLI agents matter: Different test types have different reliability patterns. Unit tests might be flaky due to timing issues, while linting never fails but takes too long. Separate CLI agents let Obvyr analyse each test type's specific patterns, giving you targeted insights like:
- "Your unit tests have a 2.3% flaky rate, concentrated in authentication tests"
- "Your linting never catches bugs but accounts for 15% of CI time"
- "Your type checking catches 94% of type-related production bugs"
Keep Your API Key Safe
Treat this API key like a password: store it only in environment variables or local files (such as .env) that are never committed to source control.
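One way to keep the key out of your repository, assuming the .env file from Option B in Step 4 lives at the project root:

```bash
# Store the key in .env (read in Step 4, Option B) and ensure the
# file itself is never committed.
echo 'OBVYR_PROFILES__DEFAULT__API_KEY=your-api-key-here' >> .env
echo '.env' >> .gitignore
```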
Step 3: Install the Obvyr CLI
Time: 1 minute | Value: Zero-friction test execution data collection
Install the Obvyr CLI tool:
```bash
pip install obvyr-cli
```
Verify the installation:
```bash
obvyr --version
```
What this enables: The CLI wraps your existing test commands with zero workflow disruption. Your tests run exactly the same way, but now every execution contributes to pattern analysis. No code changes, no test modifications, no infrastructure changes required.
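If you prefer not to install globally, a virtual environment works just as well; this is standard pip practice rather than anything Obvyr-specific:

```bash
# Optional: keep the CLI in an isolated virtual environment.
python -m venv .venv
source .venv/bin/activate
pip install obvyr-cli
obvyr --version
```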
Step 4: Configure and Run Your First Command
Time: 2 minutes | Value: Start collecting evidence immediately
Set environment variables with your CLI agent details:
Option A: Environment Variables
```bash
export OBVYR_CLI_USER="timmah"
export OBVYR_PROFILES__DEFAULT__API_KEY="your-api-key-here"
```
Option B: .env File
```bash
OBVYR_CLI_USER=timmah
OBVYR_PROFILES__DEFAULT__API_KEY=your-api-key-here
```
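With Option A, you can sanity-check that the variables are exported before running anything (with Option B the values live in the file rather than your shell, so this check applies to Option A only):

```bash
# List the Obvyr variables exported in the current shell. Note the
# output includes the API key, so avoid running this where it is logged.
env | grep '^OBVYR_'
```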
Now wrap your existing test command with the Obvyr CLI:
```bash
# Instead of: pytest tests/
obvyr pytest tests/

# Instead of: npm test
obvyr npm test

# Instead of: mypy src/
obvyr mypy src/
```
What happens: Your tests run exactly as before. But now Obvyr captures comprehensive execution data that becomes pattern insights after multiple runs.
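Because the wrapper is transparent, the wrapped command's exit status should come back unchanged, which is what keeps existing scripts and CI gates working. A quick way to see that for yourself (assuming exit-code pass-through, which "runs exactly as before" implies):

```bash
# Run a wrapped command, then inspect the exit code it reported.
obvyr pytest tests/
echo "exit code: $?"
```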
What Obvyr Captures (And Why It Matters)
The Obvyr CLI captures complete execution context:
- Command output (stdout and stderr) → Analyse failure patterns and error messages
- Execution duration → Identify tests getting slower over time
- Exit code (success/failure status) → Track test reliability trends
- User who ran the command → Understand team testing practices
- Timestamp and environment context → Compare local vs. CI patterns
- Test framework metadata (e.g. JUnit XML) → Deep test-level insights (see the example below)
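On that last point: most test runners can emit JUnit XML themselves; pytest, for example, has a built-in --junitxml flag. Whether Obvyr picks the file up automatically depends on your CLI configuration, so treat this as a sketch:

```bash
# Have pytest write JUnit XML alongside the run Obvyr observes.
# The report path is illustrative.
obvyr pytest tests/ --junitxml=report.xml
```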
Individual observations are data. Patterns across hundreds of observations are insights.
After 50+ executions, Obvyr reveals:
- Which tests are truly flaky vs. genuinely broken
- How local environment differs from CI
- Which tests never catch bugs
- Where your test suite provides the most value
Step 5: View Your Evidence
Time: 2 minutes | Value: Immediate visibility into test execution
- Return to the Obvyr dashboard
- Navigate to your project
- Click on the CLI agent you just used
- View your first observation with complete execution details
What you'll see:
- Full command output (stdout/stderr)
- Execution timing and duration
- Environment context (user, timestamp, system info)
- Test framework metadata (if available, e.g., JUnit XML parsing)
What this becomes: After your team runs tests 50-100 times, individual observations become patterns:
- First run: "Test passed in 1.2s" (data point)
- After 50 runs: "Test passed in 47/50 runs (94% reliable), fails only on CI runner 'ci-3', timing pattern suggests network timeout" (insight)
The transformation: From "test passed" to "test is 94% reliable with known environmental trigger"
Next Steps: From Data to Insights
You've completed the setup! Now capture the full value:
Immediate Actions (Today)
1. Add CI/CD Integration (10 minutes)
- Add OBVYR_PROFILES__DEFAULT__API_KEY to your CI secrets
- Set OBVYR_CLI_USER=ci in your CI environment
- Wrap your CI test commands with obvyr
- Value: Start comparing local vs. CI patterns immediately
2. Add More Test Types (5 minutes per CLI agent)
- Create separate CLI agents for: linting, type checking, integration tests
- Wrap each command type: obvyr npm run lint, obvyr mypy src/
- Value: Pattern analysis specific to each quality check type
3. Team Adoption (15 minutes)
- Share CLI agent configuration with team members
- Add to team documentation: "Wrap test commands with obvyr" (one convention sketched below)
- Value: Comprehensive team-wide data collection starts immediately
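One low-friction way to roll this out is a small shared entry point, so teammates pick up the wrapper without thinking about it. The script path and test command here are illustrative, not an Obvyr convention:

```bash
#!/usr/bin/env bash
# scripts/test.sh (hypothetical): the single test entry point for the team.
# Every run goes through obvyr, so data collection needs no extra discipline.
set -euo pipefail
obvyr pytest tests/
```

Commit the script and reference it in your docs; the wrapping then travels with the repository.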
Short-Term Value (Week 1-2)
After 50-100 test executions, Obvyr reveals:
- Flaky test identification: Which tests fail inconsistently
- Environment comparison: Local vs. CI execution differences
- Initial patterns: Quick wins like obviously problematic tests
Expected value: Identify 2-5 flaky tests, prevent first environment-drift incident
Medium-Term Value (Month 1)
After 500-1,000 test executions, Obvyr provides:
- Comprehensive flaky test analysis: Root cause, correlation patterns
- Test value assessment: Which tests catch bugs vs. slow down CI
- CI optimization opportunities: Evidence-based suite optimization
- AI test quality baseline: If using AI code generation
Expected value: Measurable time savings and incident prevention (varies by team)
Long-Term Value (Quarter 1+)
Sustained data collection enables:
- Full flaky test resolution: 90%+ reduction in debugging time
- Environment drift prevention: Proactive issue detection
- Evidence-based CI optimization: 60-70% reduction in pipeline time
- AI quality validation at scale: Automated test quality assurance
Expected value: Sustained cost reduction and quality improvement (calculate your specific ROI using our business case framework)
Integration with CI/CD
The same obvyr your-command pattern works seamlessly in CI/CD:
GitHub Actions:
```yaml
env:
  OBVYR_CLI_USER: github-ci
  OBVYR_PROFILES__DEFAULT__API_KEY: ${{ secrets.OBVYR_AGENT_API_KEY }}
steps:
  - name: Run tests with Obvyr
    run: obvyr pytest tests/
```
GitLab CI:
```yaml
variables:
  OBVYR_CLI_USER: gitlab-ci
test:
  script:
    - obvyr pytest tests/
```
Jenkins:
```groovy
environment {
    OBVYR_CLI_USER = 'jenkins-ci'
    OBVYR_PROFILES__DEFAULT__API_KEY = credentials('obvyr-api-key')
}
steps {
    sh 'obvyr pytest tests/'
}
```
Critical for Environment Comparison
CI integration is essential for comparing local vs. CI execution patterns. Environment drift detection requires data from both contexts. Add CI integration within the first week for maximum value.
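If one script serves both local and CI runs, you can set the user label in a single place. GitHub Actions and GitLab CI both export a CI variable; other providers may differ, so the detection below is an assumption to adapt:

```bash
# Label executions by context so local vs. CI patterns stay comparable.
if [ -n "${CI:-}" ]; then
  export OBVYR_CLI_USER="ci"
else
  export OBVYR_CLI_USER="$(whoami)"
fi
obvyr pytest tests/
```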
Explore Further
Ready to dive deeper?
- Why Obvyr? - Understand the complete value proposition and differentiators
- Problems Solved - See detailed scenarios of specific testing challenges Obvyr solves
- AI-Era Testing - Learn why testing insights matter more than ever with AI development
- ROI & Business Case - Calculate the quantified value for your team
- CLI Configuration - Advanced CLI setup for complex workflows
You're Now Evidence-Based
Congratulations! You've transformed from assumption-based to evidence-based testing:
Before: "I think our tests are reliable" (hope)
After: "Our tests are 94% reliable with identified patterns" (proof)
Every test run from every developer and every CI pipeline now contributes to proving test reliability. Welcome to evidence-based software quality.
Keep Building Evidence
The more you and your team use Obvyr, the stronger your evidence becomes. Patterns emerge from comprehensive data. Run your tests through Obvyr, and insights will follow.