
Frequently Asked Questions

General Questions

What is Obvyr?

Obvyr is a testing insights platform that proves test reliability through comprehensive data collection and pattern recognition. Instead of assuming your tests are reliable because they pass, Obvyr analyses patterns across thousands of test executions to provide evidence-based insights into test quality, flaky tests, environment drift, and test value.

How is Obvyr different from traditional testing tools?

Traditional testing tools show you point-in-time results: "test passed" or "test failed." Obvyr analyses patterns over time to reveal insights like:

  • "This test passed in 847/850 executions (99.6% reliable)"
  • "This test fails only on CI runner 'ci-3' due to network timeout"
  • "These 412 tests never caught a bug but account for 62% of CI time"

Key difference: Patterns over time vs. snapshots in time. Evidence vs. assumptions.

How quickly can we see value?

Immediate value (Week 1): After 50-100 test executions, Obvyr identifies obviously flaky tests and initial patterns.

Significant value (Month 1): After 500-1,000 executions, comprehensive flaky test analysis, test value assessment, and CI optimisation opportunities become clear.

Full value (Quarter 1): Sustained data collection enables complete flaky test resolution, environment drift prevention, and evidence-based CI optimisation.

ROI timeline: Varies by team, depending on how severe your testing challenges are (see Business Case for an evaluation framework).

Implementation & Integration

How hard is it to integrate Obvyr?

Extremely easy. Total setup time: 10 minutes.

  1. Create project and agent in Obvyr dashboard (5 minutes)
  2. Install obvyr-cli (1 minute)
  3. Wrap test commands: obvyr pytest tests/ (1 minute)
  4. View first observation in dashboard (3 minutes)

No code changes. No test modifications. No infrastructure changes. Zero workflow disruption.
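
For example, wrapping an existing pytest run is a one-word prefix change (pytest is shown here; the same pattern applies to any command):

  pytest tests/           # before
  obvyr pytest tests/     # after: identical behaviour, plus observation data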

Does Obvyr work with our existing CI/CD?

Yes. Obvyr works with any CI/CD platform. Simply:

  1. Add OBVYR_PROFILES__DEFAULT__API_KEY to CI secrets
  2. Set OBVYR_CLI_USER=ci environment variable
  3. Wrap test commands with obvyr

Supports: GitHub Actions, GitLab CI, Jenkins, CircleCI, Travis CI, Buildkite, and any other CI platform.
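
As a sketch, a CI step might look like this (the secret reference is illustrative; the variable names are as documented above):

  export OBVYR_PROFILES__DEFAULT__API_KEY="${OBVYR_API_KEY}"   # injected from CI secrets
  export OBVYR_CLI_USER=ci
  obvyr pytest tests/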

What test frameworks does Obvyr support?

Obvyr works with any test framework or quality check tool because it wraps commands, not tests:

  • Python: pytest, unittest, nose, tox
  • JavaScript/TypeScript: Jest, Vitest, Mocha, Jasmine, Cypress
  • Go: go test
  • Java: JUnit, TestNG
  • Ruby: RSpec, Minitest
  • Any linting tool: ESLint, Ruff, Rubocop, etc.
  • Any type checker: mypy, TypeScript, Flow

If you can run it from the command line, Obvyr can wrap it.
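
For illustration, the same obvyr prefix works across tools (each invocation is a typical example for that tool, not Obvyr-specific syntax):

  obvyr pytest tests/        # Python
  obvyr npx jest             # JavaScript
  obvyr go test ./...        # Go
  obvyr bundle exec rspec    # Ruby
  obvyr ruff check .         # linting
  obvyr mypy src/            # type checking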

Do we need to change our tests?

No. Obvyr requires zero test modifications. Your tests run exactly as they did before. Obvyr captures execution data without changing test behaviour.

Can we try Obvyr without committing our whole team?

Yes. Start with:

  • One project (e.g., your main API)
  • One agent (e.g., unit tests)
  • One developer or CI pipeline

Expand based on value demonstrated. No all-or-nothing commitment required.

Data & Security

What data does Obvyr collect?

Obvyr collects comprehensive test execution context:

  • Command executed and arguments
  • Execution duration and timing
  • Exit code (success/failure)
  • Command output (stdout/stderr)
  • User who ran the command
  • Environment variables (configurable)
  • Test framework metadata (e.g., JUnit XML)

No source code is collected. Only execution metadata.

Is our test output data secure?

Yes. Security measures include:

  • Encryption in transit: All data transmitted via TLS 1.3
  • Encryption at rest: All data encrypted in DynamoDB with AWS KMS
  • Multi-tenant isolation: Complete data separation between organisations
  • Access control: Role-based permissions for team members
  • API key security: Agent tokens are long-lived but revocable per-agent
  • Audit logging: All data access logged for security monitoring

See our Security Whitepaper for comprehensive security documentation.

Does Obvyr help with compliance requirements?

Yes. Obvyr provides automated compliance evidence collection for organisations with regulatory or audit requirements:

Audit Trail:

  • Complete record of all test executions (who, what, when, where, result)
  • Immutable historical data for regulatory compliance
  • Comprehensive evidence without manual documentation

Compliance Frameworks: Obvyr helps organisations meet testing-related requirements for:

  • Quality management systems (ISO 9001, etc.)
  • Change control and deployment validation
  • Security compliance (SOC 2, ISO 27001, etc.)
  • Industry-specific regulations (financial services, healthcare, government)

Value:

  • Reduce audit preparation time by 95% (40 hours → 2 hours)
  • Automated evidence collection as by-product of development
  • Prove systematic testing practices to auditors and customers

See Compliance Value for detailed information.

Can we use Obvyr for regulatory audits?

Yes. Obvyr provides audit-ready evidence:

What auditors need:

  • Proof of systematic testing practices
  • Historical verification of quality assurance
  • Evidence of pre-deployment validation
  • Change control documentation

What Obvyr provides:

  • Complete test execution history with timestamps
  • User attribution for all test runs
  • Environment verification records
  • Pre-deployment test validation proof
  • Exportable compliance reports

Common audit scenarios:

  1. Security reviews: Prove security tests execute before every deployment
  2. Quality audits: Demonstrate systematic regression testing
  3. Change control: Show evidence of testing at each change
  4. Customer audits: Provide comprehensive testing documentation for enterprise deals

Where is data stored?

Data is stored in AWS (region configurable):

  • Database: DynamoDB with encryption at rest
  • Artifacts: S3 with server-side encryption
  • Caching: ElastiCache Redis (ephemeral session data)

All data remains within your configured AWS region.

Can we self-host Obvyr?

Currently: Obvyr is a SaaS platform hosted on AWS.

Future: Self-hosted and private cloud deployment options are on our roadmap. Contact us if this is a requirement for your organisation.

What happens if Obvyr is unavailable?

Your tests continue working normally. If the Obvyr API is unavailable:

  1. The agent executes your command as usual
  2. Test results are returned to your terminal/CI
  3. Observation data is not sent (graceful degradation)
  4. No impact on test execution or results

Obvyr never blocks your development workflow.
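
Conceptually, the wrapper degrades like this shell sketch (illustrative only, not Obvyr's actual internals; submit_observation stands in for the upload step):

  pytest tests/                   # run the wrapped command as usual
  status=$?                       # capture its exit code
  (submit_observation || true) &  # asynchronous, best-effort upload (hypothetical helper)
  exit $status                    # your terminal/CI sees the original result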

Cost & ROI

How much does Obvyr cost?

Pricing is based on team size:

  • Small teams (10-20 developers): $60,000/year
  • Mid-size teams (30-50 developers): $120,000/year
  • Large teams (100+ developers): $240,000/year

ROI: Varies based on your team's specific testing challenges. See Business Case for a framework to calculate your actual ROI.

What's the value calculation based on?

Value delivered through four areas (actual impact varies by team):

  1. Flaky test resolution: Reduce time spent debugging false negatives
  2. Environment drift prevention: Prevent incidents from environmental differences
  3. CI/CD optimisation: Identify and remove low-value tests safely
  4. AI quality assurance: Automated test quality validation at AI development speeds

Your ROI depends on:

  • How much time your team currently spends on flaky tests
  • Your incident frequency and costs
  • Your CI/CD compute costs and pipeline duration
  • Your AI adoption level and code review burden

Use our business case framework to estimate based on your actual situation.

How does Obvyr compare to hiring more QA engineers?

Hiring 3 senior QA engineers:

  • Cost: $500,000/year
  • Effectiveness: Manual processes, reactive, doesn't scale at AI speeds
  • Coverage: Limited by human capacity

Obvyr:

  • Cost: $120,000/year (76% less expensive)
  • Effectiveness: Automated pattern detection at scale, proactive
  • Coverage: Every test execution from every developer

Obvyr advantage: More than 4x lower cost, 10x more effective at providing test insights.

Can we build this ourselves?

Estimated internal build cost:

  • Development: 2 engineers × 6 months = $180,000
  • Ongoing maintenance: 0.5 engineer/year = $90,000/year
  • Infrastructure: $24,000/year
  • Year 1 total: $294,000
  • Ongoing: $114,000/year

Risks:

  • 6-month delay before value delivery
  • Opportunity cost: $2.4M (6 months of unresolved problems)
  • Uncertain effectiveness
  • Maintenance burden
  • Incomplete feature set

Obvyr advantage: Lower cost ($140K year 1 vs. $294K), immediate value (10 days vs. 6 months), proven effectiveness, no maintenance burden.

Technical Questions

How does Obvyr handle large test suites?

Obvyr scales to any test suite size:

  • Tested with: 10,000+ tests per suite
  • Performance: No impact on test execution time
  • Data processing: Asynchronous, doesn't block development
  • Analysis: Scales with comprehensive data, more data = better patterns

Large test suites are where Obvyr provides the most value—revealing which tests provide value vs. noise.

Does Obvyr slow down our tests?

No. Obvyr adds ~100-200ms overhead for:

  • Command wrapper startup
  • Data collection
  • Asynchronous API submission

For typical test suites (10+ seconds of execution), overhead is under 2% and often imperceptible. Tests run at effectively the same speed.
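
You can verify this on your own suite by timing a run with and without the wrapper (standard shell timing, nothing Obvyr-specific):

  time pytest tests/           # baseline
  time obvyr pytest tests/     # wrapped; expect roughly 100-200ms extra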

How does flaky test detection work?

Obvyr uses pattern analysis across hundreds of executions:

  1. Collect data: Every test run captured with full context
  2. Identify inconsistency: Tests that pass sometimes, fail sometimes
  3. Correlation analysis: Match failures to environment, user, CI runner, timing
  4. Root cause identification: "92% of failures occur on CI runner 'ci-3' between 2-5s execution"

Result: Not just "this test is flaky" but "this test fails due to a network timeout on a specific CI runner".

How does environment drift detection work?

Obvyr compares execution patterns across environments:

  1. Local executions: Developer machine test runs
  2. CI executions: CI/CD pipeline test runs
  3. Pattern comparison: Identify systematic differences

Example detection:

  test_payment_processing:
    • Local: 100% pass rate, 0.8s avg duration, uses mock gateway
    • CI: 78% pass rate, 1.5s avg duration, uses staging gateway
    • Drift identified: Configuration file present locally, missing in CI

Can Obvyr analyse AI-generated test quality?

Yes. Obvyr compares AI-generated test patterns to human-written baselines:

AI test analysis:

  • Execution patterns: AI tests that never fail are likely only testing happy paths
  • Coverage comparison: Human tests catch failures, AI tests don't
  • Gap identification: Specific scenarios AI missed (error handling, timeouts, edge cases)
  • Recommendations: Exact tests to add for comprehensive coverage

Value: Automated AI test quality validation at AI development speeds.

Comparison Questions

How is Obvyr different from code coverage tools?

Coverage tools show: Lines of code executed by tests (the "what")

Obvyr shows: Which tests actually catch bugs and provide value (the "why")

Example:

  • Coverage: "This file has 92% coverage"
  • Obvyr: "These 12 tests cover this file and caught 47 bugs; these 8 tests cover this file and never caught a bug"

Complementary, not competitive: Coverage metrics + Obvyr insights = complete picture

How is Obvyr different from test runners (pytest, Jest, etc.)?

Test runners: Execute tests and report pass/fail results

Obvyr: Analyses patterns across hundreds of test executions to reveal insights

Not competitive: Obvyr wraps test runners, doesn't replace them. You continue using pytest, Jest, etc. Obvyr adds pattern analysis on top.

How is Obvyr different from CI/CD platforms?

CI/CD platforms: Automate test execution in pipelines

Obvyr: Analyses test execution patterns from CI and local environments

Integration, not replacement: Obvyr integrates with your existing CI/CD (GitHub Actions, Jenkins, etc.) by wrapping test commands. Provides insights into CI performance and reliability.

How is Obvyr different from monitoring tools (Datadog, New Relic)?

Monitoring tools: Application performance and production observability

Obvyr: Test execution and quality observability

Different focus: Monitoring tools watch production. Obvyr watches test quality. Complementary tools for complete observability.

How is Obvyr different from audit logging tools?

Audit logging tools: Track application user actions for security/compliance

Obvyr: Track test execution for quality assurance and compliance

Different scope:

  • Audit logs: Who accessed what data in production
  • Obvyr: Who ran which tests with what results

Compliance value:

  • Audit logs: Prove data access controls
  • Obvyr: Prove testing practices and quality assurance

Both provide compliance value for different aspects of your system.

Use Case Questions

Is Obvyr only for large teams?

No. Obvyr provides value at any team size:

  • Small teams (10-20 developers): Flaky tests have proportionally larger impact per developer
  • Mid-size teams (30-50 developers): All four value areas (flaky tests, environment drift, CI optimisation, AI quality) typically apply
  • Large teams (100+ developers): Pattern detection scales massively, CI cost savings multiply

Value scales with team size, but smaller teams often see higher impact per developer because flaky tests disrupt proportionally more of the team's capacity.

Do we need AI development to benefit from Obvyr?

No. AI-related value is significant upside, not a requirement.

Core value without AI:

  • Flaky test resolution (time savings)
  • Environment drift prevention (incident reduction)
  • CI/CD optimisation (compute savings + productivity)

AI quality validation is additional value for teams using AI coding tools, not a prerequisite for Obvyr's core benefits.

Can we use Obvyr for integration/E2E tests?

Yes. Obvyr works with any test type:

  • Unit tests: Flaky test detection, performance trends
  • Integration tests: Environment drift, timing analysis
  • E2E tests: Cross-service reliability, flaky scenario identification
  • Performance tests: Execution time trends, degradation detection
  • Security tests: Proof of systematic execution for compliance

Different test types reveal different patterns. All provide value.

Is Obvyr useful for organisations with compliance requirements?

Extremely valuable. Organisations in regulated industries gain significant benefits:

Industries with high compliance value:

  • Financial services: Regulatory audits, change control documentation
  • Healthcare: HIPAA compliance, quality assurance evidence
  • Government contracting: Security reviews, systematic testing proof
  • Enterprise SaaS: Customer security audits, SOC 2 compliance

Compliance-specific value:

  • Automated audit trail (no manual documentation)
  • Historical evidence for regulatory reviews
  • Proof of systematic quality assurance
  • Pre-deployment validation records
  • Customer audit readiness

Time savings: Reduce audit preparation from 40 hours to 2 hours per audit

See Compliance Problem for detailed scenario.

Does Obvyr work for mobile app testing?

Yes. If you can run tests from the command line, Obvyr can wrap them:

  • iOS: XCTest via xcodebuild test
  • Android: Espresso via ./gradlew test
  • React Native: Jest/Detox tests
  • Flutter: flutter test

Wrap the command with obvyr and gain the same pattern insights.
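
For example (standard invocations for each toolchain, prefixed as described above; the scheme name is illustrative):

  obvyr xcodebuild test -scheme MyApp    # iOS
  obvyr ./gradlew test                   # Android
  obvyr flutter test                     # Flutter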

Getting Started Questions

What's the minimum team size to get value?

Even solo developers benefit. Pattern analysis works with:

  • Minimum: 1 developer running tests regularly
  • Ideal: Full team (comprehensive data = better patterns)

Value scales with usage: More test executions = more comprehensive patterns.

How long until we see patterns?

Timeline:

  • 50 executions: Initial flaky test identification
  • 100 executions: Reliable pattern detection
  • 500 executions: Comprehensive analysis
  • 1,000+ executions: High-confidence insights

Team of 10 developers: Reach 500 executions in 1-2 weeks with normal testing cadence (e.g., 10 developers × 5 test runs per day ≈ 500 executions in two working weeks).

What if our team doesn't adopt it?

Adoption is trivial. Wrapping test commands requires zero workflow change:

Before: pytest tests/
After: obvyr pytest tests/

No learning curve. Tests run the same way. No new tools to learn. No process changes.

Value increases with adoption: More users = more data = better patterns. But even partial adoption provides value.

Can we integrate Obvyr gradually?

Yes. Recommended approach:

Week 1: One project, one agent, one test type
Week 2: Add CI integration for environment comparison
Week 3: Add more test types (linting, type checking)
Week 4: Full team adoption

Prove value incrementally. No big-bang deployment required.

Support & Documentation

What support is provided?

Standard support includes:

  • Email support (24-hour response SLA)
  • Documentation and guides
  • Regular product updates
  • Community forum access

Premium support available:

  • 4-hour response SLA
  • Dedicated support engineer
  • Custom integration assistance
  • Quarterly business reviews

Is there training available?

Self-service training: Documentation, guides, and the community forum are available to all customers.

Custom training available:

  • Team onboarding sessions
  • Dashboard training
  • Best practices workshops

How often is Obvyr updated?

Continuous deployment:

  • Feature updates: Weekly
  • Security patches: As needed (immediately)
  • Documentation updates: Continuous

Major releases: Quarterly with new capabilities

No downtime: Rolling deployments with zero service interruption

Where can I get help?

Resources: Documentation, community forum, and email support.

Response times:

  • Standard support: 24 hours
  • Premium support: 4 hours
  • Critical issues: Immediate escalation

Still Have Questions?

Can't find your answer here? Contact us.

Get Started Now

Most questions are answered by trying Obvyr. Set up your first project in 10 minutes and see the value for yourself.
