Frequently Asked Questions
How do you pronounce "Obvyr"?
It's pronounced "ob-veer" (rhymes with "beer", "clear", "here"). It's a trendy 5-letter domain with minimal vowels, related to observability. Yes, we know it's weird. Don't worry, we still use vowels in our code... mostly.
General Questions
What is Obvyr?
Obvyr is a testing insights platform that proves test reliability by parsing JUnit XML output from your test runs. Instead of assuming your tests are reliable because they pass, Obvyr parses individual test results from JUnit XML and analyses patterns across thousands of test executions to provide evidence-based insights into test quality, flaky tests, and test reliability.
Core mechanism: Obvyr wraps your test commands (e.g., obvyr-cli pytest --junitxml=junit.xml tests/), captures the JUnit XML file, and parses individual test results to track pass rates, identify flaky tests, and monitor execution times over time.
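Conceptually, the wrapping step works like any command wrapper: run the command, time it, capture the exit code and output, then submit that metadata (plus the JUnit XML) to the API. A minimal Python sketch of the idea follows; this is illustrative only, not the actual obvyr-cli implementation:

```python
import subprocess
import sys
import time


def run_and_observe(cmd):
    """Run a test command and capture the metadata a wrapper would submit."""
    start = time.monotonic()
    result = subprocess.run(cmd, capture_output=True, text=True)
    observation = {
        "command": " ".join(cmd),
        "duration_s": time.monotonic() - start,
        "exit_code": result.returncode,
        "stdout": result.stdout,
        "stderr": result.stderr,
    }
    # A real wrapper would send `observation` (and any JUnit XML file)
    # to the API here; the test command's own result is left untouched.
    return observation["exit_code"]


# Example: observe a trivially passing "test command".
exit_code = run_and_observe([sys.executable, "-c", "print('1 passed')"])
```

The key property this sketch illustrates: the wrapper's return value is the wrapped command's own exit code, so your terminal and CI see exactly the result they would have seen without the wrapper.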
How is Obvyr different from traditional testing tools?
Traditional testing tools show you point-in-time results: "test passed" or "test failed." Obvyr analyses patterns over time to reveal insights like:
- "This test passed in 847/850 executions (99.6% reliable)"
- "This test failed in 23/150 executions (15% flaky) - pattern suggests timing issue"
- "Test execution trends show pass rate declining from 98% to 89% over past month"
Key difference: Patterns over time vs. snapshots in time. Evidence vs. assumptions.
How quickly can we see value?
Initial patterns (50+ executions): Obvyr begins identifying obviously flaky tests and execution patterns.
Comprehensive insights (100-500 executions): Flaky test detection becomes highly reliable, performance trends emerge, and you can track test reliability across environments (local vs CI).
Timeline: A team of 10 developers running tests regularly can reach 500 executions in 1-2 weeks.
Implementation & Integration
How hard is it to integrate Obvyr?
Extremely easy. Total setup time: ~10 minutes.
- Create project and CLI agent in Obvyr dashboard (2 minutes)
- Install obvyr-cli (2 minutes)
- Configure environment variables (2 minutes)
- Wrap test commands: obvyr-cli pytest tests/ (2 minutes)
- View first observation in dashboard (2 minutes)
No code changes. No test modifications. No infrastructure changes. Zero workflow disruption.
See the Getting Started guide for step-by-step instructions.
Does Obvyr work with our existing CI/CD?
Yes. Obvyr works with any CI/CD platform. Simply:
- Add OBVYR_PROFILES__DEFAULT__API_KEY to CI secrets
- Set OBVYR_CLI_USER to execution context (e.g., github-ci, jenkins-ci)
- Wrap test commands with obvyr-cli
Supports: GitHub Actions, GitLab CI, Jenkins, CircleCI, Travis CI, Buildkite, and any other CI platform.
See Getting Started - CI/CD Integration for examples.
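For illustration, a GitHub Actions job step might look like the following. This is a hypothetical snippet, not taken from Obvyr's documentation; the secret name (OBVYR_API_KEY) and the wrapped command are placeholders to adapt to your setup:

```yaml
# Hypothetical GitHub Actions step - adapt secret and command names.
- name: Run tests via Obvyr
  env:
    OBVYR_PROFILES__DEFAULT__API_KEY: ${{ secrets.OBVYR_API_KEY }}
    OBVYR_CLI_USER: github-ci
  run: obvyr-cli pytest --junitxml=junit.xml tests/
```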
Is JUnit XML output required?
Yes. JUnit XML parsing is how Obvyr provides test-level insights. Without it, Obvyr only captures command output - you won't get:
- Individual test pass rates
- Flaky test detection
- Test-level execution times
- Failure pattern analysis
The good news: Nearly all modern test frameworks support JUnit XML output. See below for configuration examples.
What test frameworks does Obvyr support?
Obvyr works with any test framework that can output JUnit XML:
- Python: pytest, unittest, nose, tox
- JavaScript/TypeScript: Jest, Vitest, Mocha, Jasmine, Cypress
- Go: go test
- Java: JUnit, TestNG, Maven (mvn test), Gradle (gradle test)
- Ruby: RSpec, Minitest
- .NET: NUnit, xUnit, MSTest
- PHP: PHPUnit
For best results, configure your test runner to generate JUnit XML output. This enables test-level insights (individual test pass rates, flaky detection, execution times).
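To see what test-level data the JUnit XML format exposes, here is a small Python sketch that parses it with the standard library. The sample XML is illustrative of what a runner such as pytest produces with --junitxml:

```python
import xml.etree.ElementTree as ET

# Minimal JUnit XML, as produced by e.g. `pytest --junitxml=junit.xml tests/`.
SAMPLE = """
<testsuite name="pytest" tests="3" failures="1" time="0.042">
  <testcase classname="tests.test_api" name="test_create" time="0.010"/>
  <testcase classname="tests.test_api" name="test_delete" time="0.012"/>
  <testcase classname="tests.test_api" name="test_flaky" time="0.020">
    <failure message="AssertionError">expected 200, got 503</failure>
  </testcase>
</testsuite>
"""


def parse_results(junit_xml):
    """Extract per-test outcomes: the raw material for test-level insights."""
    suite = ET.fromstring(junit_xml)
    results = []
    for case in suite.iter("testcase"):
        failed = case.find("failure") is not None or case.find("error") is not None
        results.append({
            "test": f"{case.get('classname')}.{case.get('name')}",
            "time_s": float(case.get("time", 0)),
            "passed": not failed,
        })
    return results


results = parse_results(SAMPLE)
```

Each testcase element carries a name, a duration, and any failure or error detail, which is why JUnit XML output unlocks per-test pass rates and timing trends rather than a single command-level exit code.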
Can I wrap other commands?
Yes! Obvyr can wrap any command-line tool (linters, type checkers, build scripts). However, test-level insights require JUnit XML output from test frameworks. Other commands are tracked as observations without test-level detail.
Examples of other wrappable commands:
- Linters: ESLint, Ruff, Rubocop
- Type checkers: TypeScript (tsc), Flow
- Build tools: make, npm run build
Do we need to change our tests?
No. Obvyr requires zero test modifications. Your tests run exactly as they did before. Obvyr captures execution data without changing test behaviour.
Can we try Obvyr without committing our whole team?
Yes. Start with:
- One project (e.g., your main API)
- One agent (e.g., unit tests)
- One developer or CI pipeline
Expand based on value demonstrated. No all-or-nothing commitment required.
Data & Security
What data does Obvyr collect?
Obvyr collects comprehensive test execution context:
- Command executed and arguments
- Execution duration and timing
- Exit code (success/failure)
- Command output (stdout/stderr)
- Execution context identifier (see OBVYR_CLI_USER below)
- Environment variables (configurable)
- Test framework metadata (JUnit XML)
- File attachments (test reports, coverage data - configurable)
No source code is collected. Only execution metadata and test results.
Privacy note: OBVYR_CLI_USER should be set to an execution context identifier (e.g., local-dev, github-ci, jenkins-prod), NOT individual developer names. This ensures GDPR compliance and focuses analysis on environment patterns (local vs CI) rather than individual tracking.
Is our test output data secure?
Yes. Security measures implemented in Obvyr's infrastructure include:
- Encryption in transit: HTTPS via AWS Application Load Balancer with ACM certificates, S3 access enforces TLS
- Encryption at rest: DynamoDB server-side encryption (AWS managed keys), S3 server-side encryption (AES-256)
- Multi-tenant isolation: Account-based data partitioning in DynamoDB ensures complete separation between organisations
- Access control: AWS Cognito authentication for dashboard users, revocable API tokens for CLI agents (6-month expiry)
- VPC security: ECS services run in private subnets with VPC endpoint enforcement for S3 access
Where is data stored?
Data is stored in AWS:
- Database: DynamoDB with encryption at rest
- File attachments: S3 with server-side encryption (AES-256) and automatic lifecycle management (7-year retention)
- Session cache: ElastiCache Redis (ephemeral data only)
All data remains within AWS us-east-1 region.
Can we self-host Obvyr?
Currently: Obvyr is a SaaS platform hosted on AWS.
Future: Self-hosted deployment options may be available. Contact support@obvyr.com if this is a requirement for your organisation.
What happens if Obvyr is unavailable?
Your tests continue working normally. If Obvyr API is unavailable:
- The agent executes your command as usual
- Test results are returned to your terminal/CI
- Observation data is not sent (graceful degradation)
- No impact on test execution or results
Obvyr never blocks your development workflow.
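The degradation behaviour can be sketched as follows. This is illustrative Python, not the agent's actual internals; submit_observation here simulates an unreachable API:

```python
def submit_observation(observation):
    """Stand-in for the API call; here it simulates an unreachable API."""
    raise ConnectionError("Obvyr API unreachable")


def finish_run(test_exit_code, observation):
    """Report results if possible, but never alter the test outcome."""
    try:
        submit_observation(observation)
    except Exception:
        # Graceful degradation: the observation is dropped, while the
        # test result still reaches your terminal/CI unchanged.
        pass
    return test_exit_code


exit_code = finish_run(1, {"command": "pytest tests/", "exit_code": 1})
```

Because submission failures are swallowed rather than propagated, an outage costs you one observation's worth of data, never a broken build.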
Technical Questions
How does Obvyr handle large test suites?
Obvyr scales to any test suite size:
- Performance: Minimal impact on test execution time
- Data processing: Asynchronous, doesn't block development
- Analysis: More data = better pattern detection
Large test suites particularly benefit from flaky test detection across hundreds of executions.
Does Obvyr slow down our tests?
No. Obvyr adds minimal overhead for:
- Command wrapper startup
- Data collection
- Asynchronous API submission
For typical test suites, overhead is imperceptible. Tests run at effectively the same speed.
How does flaky test detection work?
Obvyr uses pattern analysis across hundreds of executions:
- Collect data: Every test run captured with full context (execution environment, timing, user context, output)
- Identify inconsistency: Track which tests pass sometimes and fail sometimes
- Pattern analysis: Identify execution patterns across test results
- Metrics: Calculate pass rates, failure frequencies, and execution time trends for each test
Result: Instead of just "test passed" or "test failed", you see "this test passed in 47/50 executions (94% reliable)" with detailed execution history.
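The metrics step reduces to simple arithmetic over each test's execution history. A sketch, using the 47/50 figures from the example (the 98% threshold is illustrative, not necessarily Obvyr's actual cutoff):

```python
def reliability(history):
    """Pass rate for one test across its recorded executions."""
    return sum(history) / len(history)


def is_flaky(history, threshold=0.98):
    """Flaky = sometimes passes, sometimes fails, below the threshold.

    A test that always fails is broken rather than flaky, so a 0% pass
    rate is excluded.
    """
    rate = reliability(history)
    return 0 < rate < threshold


# 47 passes out of 50 executions, as in the example above:
history = [True] * 47 + [False] * 3
rate = reliability(history)
flaky = is_flaky(history)
```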
Comparison Questions
How is Obvyr different from code coverage tools?
Coverage tools show: Lines of code executed by tests (the "what")
Obvyr shows: Test reliability and execution patterns over time (the "how reliable")
Example:
- Coverage: "This file has 92% coverage"
- Obvyr: "These 12 tests have 99.8% pass rate across 500 executions; these 8 tests are flaky (85% pass rate)"
Complementary, not competitive: Coverage metrics + Obvyr insights = complete picture of test quality
How is Obvyr different from test runners (pytest, Jest, etc.)?
Test runners: Execute tests and report pass/fail results
Obvyr: Analyses patterns across hundreds of test executions to reveal insights
Not competitive: Obvyr wraps test runners, doesn't replace them. You continue using pytest, Jest, etc. Obvyr adds pattern analysis on top.
How is Obvyr different from CI/CD platforms?
CI/CD platforms: Automate test execution in pipelines
Obvyr: Analyses test execution patterns from CI and local environments
Integration, not replacement: Obvyr integrates with your existing CI/CD (GitHub Actions, Jenkins, etc.) by wrapping test commands. Provides insights into CI performance and reliability.
How is Obvyr different from monitoring tools (Datadog, New Relic)?
Monitoring tools: Application performance and production observability
Obvyr: Test execution and quality observability
Different focus: Monitoring tools watch production. Obvyr watches test quality. Complementary tools for complete observability.
How is Obvyr different from audit logging tools?
Audit logging tools: Track application user actions for security monitoring
Obvyr: Track test execution for quality analysis and pattern detection
Different scope:
- Audit logs: Who accessed what data in production
- Obvyr: Test execution patterns, reliability metrics, flaky test detection
Different tools for different purposes - production security vs. test quality.
Use Case Questions
Is Obvyr only for large teams?
No. Obvyr provides value at any team size:
- Small teams (10-20 developers): Flaky tests have proportionally larger impact per developer
- Mid-size teams (30-50 developers): Pattern detection reveals test reliability across environments
- Large teams (100+ developers): Pattern detection scales with more comprehensive execution data
Value scales with usage: More test executions = more comprehensive pattern analysis, regardless of team size.
Can we use Obvyr for integration/E2E tests?
Yes. Obvyr works with any test type that generates JUnit XML:
- Unit tests: Flaky test detection, pass rate tracking, performance trends
- Integration tests: Reliability metrics, timing analysis, execution patterns
- E2E tests: Cross-service reliability, flaky scenario identification
- Performance tests: Execution time trends, degradation detection
Different test types reveal different patterns. All provide value when configured to generate JUnit XML output.
Does Obvyr work for mobile app testing?
Yes. If you can run tests from command line and they generate JUnit XML, Obvyr can wrap them:
- iOS: XCTest via xcodebuild test (with a JUnit reporter)
- Android: Espresso via ./gradlew test (generates JUnit XML automatically)
- React Native: Jest/Detox tests (configure a JUnit reporter)
- Flutter: flutter test (with the junitreport package)
Wrap the command with obvyr-cli and configure JUnit XML output to gain test-level insights.
Getting Started Questions
What's the minimum team size to get value?
Even solo developers benefit. Pattern analysis works with:
- Minimum: 1 developer running tests regularly
- Ideal: Full team (comprehensive data = better patterns)
Value scales with usage: More test executions = more comprehensive patterns.
How long until we see patterns?
Timeline:
- 50 executions: Initial flaky test identification
- 100 executions: Reliable pattern detection
- 500 executions: Comprehensive analysis
- 1,000+ executions: High-confidence insights
Team of 10 developers: Reach 500 executions in 1-2 weeks with normal testing cadence.
What if our team doesn't adopt it?
Adoption is trivial: wrapping test commands requires zero workflow change.
Before: pytest tests/
After: obvyr-cli pytest tests/
No learning curve. Tests run the same way. No new tools to learn. No process changes.
Value increases with adoption: More users = more data = better patterns. But even partial adoption provides value.
Can we integrate Obvyr gradually?
Yes. Recommended approach:
- Week 1: One project, one CLI agent, one test suite (e.g., unit tests)
- Week 2: Add CI/CD integration for both local and CI execution data
- Week 3: Add more test suites (integration tests, E2E tests)
- Week 4: Full team adoption across all test types
Prove value incrementally. No big-bang deployment required.
Support & Documentation
Where can I get help?
Documentation:
- Getting Started Guide - 10-minute setup
- CLI Reference - Complete CLI documentation
- CLI Configuration - Advanced configuration options
- Roadmap - Current MVP features and planned features
Support:
- Email: support@obvyr.com - Technical questions and implementation help
Still Have Questions?
Can't find your answer here? Email us at support@obvyr.com
Get Started Now
Most questions are answered by trying Obvyr. Set up your first project in 10 minutes and see the value for yourself.