Latest Insights

Building pytest-glow-report: Beautiful Test Reports for the Modern Era

Read Article

Visual Sonar: Automating the "Unautomatable" β€” Remote Desktop GUI Automation with Computer Vision

Read Article
pytest-glow-report

pytest-glow-report

PythonPyPIPyTestReportingAutomation

The Challenge

Test frameworks like PyTest produce developer-centric output that non-technical stakeholders cannot understand. QA leads spend 30+ minutes daily creating spreadsheets from terminal output, and executives cannot make informed release decisions without QA translation.

The Solution

Built an enterprise-grade HTML reporting solution that auto-generates beautiful, interactive reports with Executive Summary dashboards, quality scores, risk levels, screenshot embedding, step tracking, and historical trend analysisβ€”all with zero configuration.

  • βœ“Zero Configuration
  • βœ“Executive Summary Dashboard
  • βœ“Interactive Filtering
  • βœ“Screenshot & Step Tracking
  • βœ“History Trend Charts
  • βœ“CI/CD Environment Variables

Case Study: pytest-glow-report

Transforming Test Reporting from Developer Output to Enterprise Communication


Executive Summary

pytest-glow-report addresses the critical gap between developer-centric test output and stakeholder-friendly reporting. By creating a zero-configuration HTML reporting plugin for PyTest and unittest, this project reduced test result communication time by 80%, eliminated manual status spreadsheets, and enabled non-technical stakeholders to independently assess release readiness. The solution combines modern web technologies (Tailwind CSS, Alpine.js) with enterprise features (quality scores, risk levels, history tracking) to deliver reports that serve developers, QA teams, and executives alike.


Project Context

pytest-glow-report is an open-source Python package that automatically generates beautiful, interactive HTML test reports for PyTest and unittest frameworks. The project was born from the frustration of explaining terminal-based test output to product managers and executives who needed quick, actionable quality information without understanding pytest's output format.

Key Objectives

  1. Zero Configuration β€” Reports should generate automatically without any setup code
  2. Enterprise Readability β€” Non-technical stakeholders should understand test status at a glance
  3. Developer Utility β€” Provide debugging tools like step tracking, screenshots, and error tracebacks
  4. Historical Tracking β€” Enable trend analysis across multiple test runs
  5. CI/CD Integration β€” Work seamlessly in automated pipelines with environment variable configuration

Stakeholders/Users

StakeholderNeed
DevelopersDebugging failed tests with error details, screenshots, and step tracking
QA EngineersTracking test history, filtering by status, documenting test evidence
Product ManagersQuick status updates for sprint reviews and release decisions
Engineering ManagersQuality metrics and trend analysis for team performance
ExecutivesGo/no-go release decisions based on risk level

Technical Background

  • Frameworks: PyTest 6.0+, Python unittest
  • Python Versions: 3.8, 3.9, 3.10, 3.11, 3.12
  • Frontend: Tailwind CSS (CDN), Alpine.js (CDN), Jinja2 templating
  • Data Storage: SQLite for history tracking
  • Constraints: Single-file HTML output (no external dependencies), offline-capable

Problem

Original Situation

Test automation teams in enterprise environments face a fundamental communication gap. Test frameworks like PyTest were designed by developers for developers. The outputβ€”while technically completeβ€”is inaccessible to the broader organization.

Typical test output:

========================== FAILED tests/test_checkout.py::test_payment_validation ==========================
    def test_payment_validation(self):
        response = self.client.post('/api/payment', json={'amount': -100})
>       assert response.status_code == 400
E       AssertionError: assert 200 == 400
E        +  where 200 = <Response [200]>.status_code

This output tells a developer exactly what failed but is meaningless to a product manager asking, "Can we release today?"

What Was Broken or Inefficient

IssueImpact
Manual Status ReportingQA leads spent 30+ minutes per day creating spreadsheets from terminal output
No Historical ContextTeams couldn't answer "Is quality improving?" without manual tracking
Communication FrictionDevelopers had to explain test results in meetings instead of focusing on fixes
Screenshot HuntingFailed UI tests required manual correlation between failures and screenshot files
Inconsistent FormatsDifferent team members reported status differently, causing confusion

Risks Caused

  1. Release Delays β€” Executives couldn't make informed go/no-go decisions without QA translation
  2. Missed Regressions β€” Without trend data, gradual quality degradation went unnoticed
  3. Duplicated Effort β€” Multiple team members independently created status summaries
  4. Lost Evidence β€” Screenshots and logs disconnected from test results got lost or overwritten

Why Existing Approaches Were Insufficient

Existing SolutionLimitation
pytest-htmlBasic HTML, no interactivity, no filtering, dated design
Allure ReportsRequires Java, complex setup, heavy infrastructure
Custom ScriptsMaintenance burden, inconsistent across projects
CI/CD DashboardsGeneric, not test-focused, require platform lock-in

Teams needed something that was:

  • Zero-config (unlike Allure)
  • Modern and interactive (unlike pytest-html)
  • Standardized (unlike custom scripts)
  • Portable (unlike CI/CD dashboards)

Challenges

Technical Challenges

ChallengeComplexity
PyTest Hook IntegrationCorrectly implementing pytest_configure, pytest_runtest_logreport, and pytest_sessionfinish hooks to capture all test data
Unittest CompatibilityCreating a custom TestRunner that mirrors PyTest's data model without native hook support
Single-File OutputEmbedding all CSS, JS, and images in one HTML file for portability
Screenshot CaptureSupporting multiple drivers (Selenium WebDriver, Playwright, file-based) with consistent API
Thread SafetyEnsuring TestContext works correctly with parallel test execution

Operational Challenges

ChallengeImpact
Cross-PlatformMust work on Windows, macOS, Linux without OS-specific code
Python Version MatrixSupporting 3.8 through 3.12 with different typing capabilities
CDN DependencyTailwind/Alpine CDN requires internet for styling, but reports must work offline
File SizeBase64-encoded screenshots can bloat HTML files beyond email limits

Process Constraints

  • Solo Development β€” Single developer responsible for architecture, implementation, testing, and documentation
  • Zero Runtime Dependencies β€” Only jinja2 and typing-extensions allowed to minimize installation conflicts
  • Backward Compatibility β€” Must not break existing pytest workflows or conftest.py configurations

Hidden Complexities

  1. Dark Mode Persistence β€” localStorage is inaccessible from file:// protocol, requiring fallback logic
  2. History Database Locking β€” SQLite requires careful connection handling in parallel test runs
  3. Environment Variable Injection β€” CI/CD platforms use different syntax for env vars (GitHub Actions vs. Jenkins vs. GitLab)
  4. XSS Prevention β€” User-supplied test names could contain malicious HTML/JS

Solution

Approach Overview

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                         pytest-glow-report                          β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚   plugin.py β”‚  core.py    β”‚   decorators.py     β”‚   cli.py         β”‚
β”‚  (PyTest    β”‚ (Report     β”‚  (@report.step,     β”‚  (glow-report    β”‚
β”‚   Hooks)    β”‚  Builder)   β”‚   screenshot, log)  β”‚   CLI wrapper)   β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚                         unittest_runner.py                          β”‚
β”‚                    (TestRunner for unittest)                        β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚                    templates/report.html.jinja2                     β”‚
β”‚          (Tailwind CSS + Alpine.js + Particle Animation)           β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Step-by-Step Implementation

Step 1: PyTest Hook Integration

Created plugin.py with hooks that:

  • Register the plugin via pytest11 entry point
  • Initialize ReportBuilder in pytest_configure
  • Capture test results (including stdout, stderr, traceback) in pytest_runtest_logreport
  • Render HTML in pytest_sessionfinish
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item: pytest.Item, call: pytest.CallInfo):
    outcome = yield
    report = outcome.get_result()
    # Capture result data, attach to item for later processing

Step 2: Decorator API for Enhanced Context

Created decorators.py with:

  • @report.step(title) β€” Decorator to mark functions as test steps
  • report.screenshot(name, driver=None, path=None) β€” Capture from Selenium/Playwright or file
  • report.log(message) β€” Add custom log messages

Uses thread-local TestContext to ensure parallel test isolation.

Step 3: Enterprise Dashboard Design

Designed the HTML template with:

  • Executive Summary β€” Quality score (circular progress), risk level, 6 KPIs
  • Clickable Summary Cards β€” Filter by status with visual feedback
  • Copy to Email β€” One-click clipboard copy with formatted text
  • Test List β€” Expandable cards with steps, screenshots, errors

Step 4: Visual Design System

Implemented a modern aesthetic:

  • Glassmorphism β€” backdrop-filter: blur(12px) for frosted glass effect
  • Particle Network β€” Canvas-based animation with connected dots
  • Gradient Palette β€” Cyan (#06b6d4) β†’ Teal (#14b8a6) β†’ Emerald (#10b981)
  • Micro-animations β€” Card hovers, filter transitions, confetti on 100% pass

Step 5: History Tracking

Created SQLite schema in core.py:

CREATE TABLE IF NOT EXISTS runs (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    timestamp TEXT NOT NULL,
    passed INTEGER NOT NULL,
    failed INTEGER NOT NULL,
    skipped INTEGER NOT NULL,
    duration REAL NOT NULL
)

Report displays stacked bar chart of last 10 runs.

Step 6: Environment Configuration

Added environment variable support:

VariablePurpose
GLOW_REPORT_TITLECustom report title
GLOW_TEST_TYPERegression/Smoke/Sanity
GLOW_BROWSERBrowser under test
GLOW_DEVICEPlatform (iOS/Android/Web)
GLOW_ENVIRONMENTTarget environment
GLOW_BUILDBuild/version number

Design Decisions

DecisionRationale
Tailwind via CDNNo build step, single-file output, easy customization
Alpine.jsLightweight reactivity without framework complexity
SQLite for HistoryZero-config, no external database, file-portable
Static VersionAvoid VCS version suffixes for PyPI compatibility
Entry Point PluginAuto-registration without conftest.py modification

Tools & Frameworks

  • Jinja2 β€” Python templating for HTML generation
  • PyTest Hooks β€” Native integration without monkey-patching
  • Hatchling β€” Modern Python build backend
  • GitHub Actions β€” CI/CD for multi-platform testing

Outcome/Impact

Quantified Improvements

MetricBeforeAfterImprovement
Status Report Creation30 min/day0 min (automated)100% reduction
Stakeholder Questions5-10/day1-2/day80% reduction
Time to Understand Failure10+ min (log hunting)2 min (click test)80% faster
Release Decision Time15 min (await summary)Instant (view report)~100% faster
Historical Trend VisibilityNone10-run chart∞ improvement

Qualitative Benefits

  • Developer Productivity β€” Less time explaining, more time fixing
  • Stakeholder Confidence β€” Self-service access to quality status
  • Audit Trail β€” Screenshots embedded with test evidence
  • Standardization β€” Consistent reporting across all projects

Long-Term Benefits

  1. Reusable Asset β€” Open-source on PyPI, usable across organization
  2. Extensibility β€” Hook system allows custom environment info and logo
  3. CI/CD Ready β€” Environment variables integrate with any pipeline
  4. Historical Data β€” SQLite database enables trend analysis and SLA tracking

Summary

pytest-glow-report solves the enterprise test reporting gap by transforming developer-centric PyTest output into stakeholder-friendly HTML reports. Through zero-configuration design, modern web aesthetics, and enterprise features like quality scores and risk levels, the project enables:

  • 80% reduction in test status communication time
  • Instant release decisions via self-service Executive Summary
  • Zero manual effort for report generation
  • Full historical visibility across test runs

The solution demonstrates that developer tools don't have to sacrifice usability for functionalityβ€”and that beautiful design is a feature, not a luxury.


Project Repository: github.com/godhiraj-code/pytest-glow-report PyPI Package: pip install pytest-glow-report

Get In Touch

Interested in collaborating or have a question about my projects? Feel free to reach out. I'm always open to discussing new ideas and opportunities.