AI Web Feeds

Test Infrastructure

Comprehensive test suite with pytest, uv, and advanced testing features

Overview

AI Web Feeds includes a production-ready test suite with 100+ tests covering unit, integration, and end-to-end scenarios. The infrastructure uses modern tools for fast, reliable testing.

All tests run through uv (whose dependency installation is typically 10-100x faster than pip) and pytest with 9+ advanced plugins.

Test Execution Architecture

Centralized Test Execution

All test execution logic is centralized using uv scripts defined in the workspace root pyproject.toml. The scripts delegate to the CLI for consistent test execution across all environments.

Execution Flow

uv scripts (workspace pyproject.toml) → CLI test commands → pytest (test execution)

Alternative entry point for backward compatibility:

tests/run_tests.py → uv scripts → CLI → pytest
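
The legacy wrapper can stay a thin shim over the uv scripts. A minimal sketch of what tests/run_tests.py might look like (the subcommand-to-script mapping here is an assumption for illustration, not the actual file):

```python
"""Hypothetical sketch of tests/run_tests.py: forward a subcommand to a uv script."""
import subprocess

# Assumed mapping from legacy subcommands to uv script names.
SCRIPTS = {
    "all": "test-all",
    "unit": "test-unit",
    "integration": "test-integration",
    "e2e": "test-e2e",
    "coverage": "test-coverage",
    "quick": "test-quick",
    "watch": "test-watch",
}

def main(argv: list[str]) -> int:
    """Resolve the subcommand (defaulting to `all`) and delegate to `uv run <script>`."""
    subcommand = argv[0] if argv else "all"
    script = SCRIPTS.get(subcommand, "test-all")
    return subprocess.run(["uv", "run", script]).returncode
```

Because the shim only picks a script name, the actual test behavior stays defined in one place: the CLI commands the uv scripts delegate to.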

Multiple Entry Points

You can run tests using any of these methods:

Using uv scripts (recommended):

# Run all tests
uv run test

# Run unit tests
uv run test-unit

# Run unit tests (skip slow)
uv run test-unit-fast

# Run with coverage and open in browser
uv run test-coverage-open

# Quick test run
uv run test-quick

# Debug mode
uv run test-debug

# Watch mode
uv run test-watch

# List available scripts
uv run --help

Using the CLI directly:

# Run all tests
uv run aiwebfeeds test all

# Run unit tests with options
uv run aiwebfeeds test unit --fast

# Run with coverage
uv run aiwebfeeds test coverage --open

# E2E tests only
uv run aiwebfeeds test e2e

# Get help
uv run aiwebfeeds test --help

Using the legacy run_tests.py wrapper:

cd tests

# Run all tests
./run_tests.py all

# Run unit tests
./run_tests.py unit

# Run with coverage
./run_tests.py coverage

# Quick run
./run_tests.py quick

# Get help
./run_tests.py help

Quick Reference

Common Commands

# Quick test (TDD workflow)
uv run test-quick

# Watch mode (auto-rerun)
uv run test-watch

# Unit tests only
uv run test-unit-fast

# With coverage
uv run test-coverage-open

# Full test suite with coverage
uv run test-coverage

# All tests
uv run test-all

# E2E tests only
uv run test-e2e

# Integration tests
uv run test-integration

# Debug mode (with pdb)
uv run test-debug

# Or use CLI directly with a specific test
uv run aiwebfeeds test file test_models.py -k "twitter"

# Verbose output (local variables are shown on failure via --showlocals)
uv run aiwebfeeds test all --verbose

Test Suite Statistics

  • 11 test files created
  • 35+ test classes
  • 100+ individual tests
  • 15+ reusable fixtures
  • 2,500+ lines of test code

Test Structure

Tests mirror the source code structure:

packages/ai_web_feeds/src/ai_web_feeds/
├── models.py      →  tests/.../test_models.py
├── storage.py     →  tests/.../test_storage.py
├── fetcher.py     →  tests/.../test_fetcher.py
├── config.py      →  tests/.../test_config.py
├── utils.py       →  tests/.../test_utils.py
└── analytics.py   →  tests/.../test_analytics.py

Test Categories

Unit Tests (@pytest.mark.unit)

Fast, isolated tests with no external dependencies:

  • test_models.py - Model validation with property-based testing
  • test_storage.py - Database CRUD operations
  • test_fetcher.py - Feed fetching with mocking
  • test_config.py - Configuration management
  • test_utils.py - Utility functions (platform detection, URL generation)
  • test_analytics.py - Analytics calculations
  • test_commands.py - CLI command tests

Integration Tests (@pytest.mark.integration)

Multi-component workflows:

  • test_integration.py - Database + Fetcher integration
  • test_cli_integration.py - CLI integration

E2E Tests (@pytest.mark.e2e)

Complete user workflows:

  • test_workflows.py - Full workflows (onboarding, bulk operations, export)

Advanced Features

Property-Based Testing

Using Hypothesis for robust input validation:

from hypothesis import given, strategies as st

@given(st.text())
def test_sanitize_text_property_based(text):
    """Property-based test for text sanitization."""
    result = sanitize_text(text)
    assert isinstance(result, str)

Test Fixtures

Comprehensive fixtures in conftest.py:

Database Fixtures:

  • temp_db_path - Temporary SQLite database
  • db_engine - Test database engine
  • db_session - Test database session

Model Fixtures:

  • sample_feed_source - Single feed source
  • sample_feed_items - Multiple feed items (5)
  • sample_topic - Topic instance

Mock Fixtures:

  • mock_httpx_response - Mocked HTTP response
  • mock_feedparser_result - Mocked feedparser

File Fixtures:

  • temp_yaml_file - Temporary YAML
  • sample_rss_feed - Sample RSS XML
  • sample_atom_feed - Sample Atom XML

Test Markers

Available markers for filtering:

  • unit - Unit tests (fast, no external dependencies)
  • integration - Integration tests (multiple components)
  • e2e - End-to-end tests (full workflows)
  • slow - Slow running tests
  • network - Tests requiring network access
  • database - Tests requiring database
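
Because --strict-markers is enabled (see Pytest Settings below), every marker must be registered before use. A sketch of that registration in tests/pyproject.toml, assuming the markers are declared there:

```toml
[tool.pytest.ini_options]
markers = [
    "unit: fast, isolated tests with no external dependencies",
    "integration: tests spanning multiple components",
    "e2e: end-to-end tests covering full workflows",
    "slow: slow running tests",
    "network: tests requiring network access",
    "database: tests requiring a database",
]
```
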
# List all markers
aiwebfeeds test markers

# Run specific markers
uv run --directory tests pytest -m "unit and not slow"

Coverage Reporting

Generate comprehensive coverage reports:

# HTML + terminal report
aiwebfeeds test coverage

# Open in browser
aiwebfeeds test coverage --open

# Coverage reports saved to: tests/reports/coverage/

Coverage Configuration:

[tool.coverage.run]
source = ["ai_web_feeds"]
branch = true
omit = ["*/tests/*", "*/test_*.py"]

[tool.coverage.report]
precision = 2
show_missing = true
exclude_lines = [
    "pragma: no cover",
    "def __repr__",
    "if __name__ == .__main__.:",
    "if TYPE_CHECKING:",
]

Test Configuration

All configuration in tests/pyproject.toml:

Pytest Settings

[tool.pytest.ini_options]
python_files = "test_*.py"
python_classes = "Test*"
python_functions = "test_*"
testpaths = ["."]

addopts = [
    "-v",                    # Verbose
    "--strict-markers",      # Enforce markers
    "--showlocals",          # Show locals in errors
    "--cov=ai_web_feeds",   # Coverage
    "--emoji",               # Emoji output
    "--icdiff",              # Better diffs
    "--instafail",           # Instant failures
    "--timeout=300",         # Test timeout
]

Pytest Plugins

  • pytest-cov - Coverage reporting
  • pytest-emoji - Emoji test output
  • pytest-icdiff - Better diff display
  • pytest-instafail - Instant failure reporting
  • pytest-html - HTML reports
  • pytest-timeout - Timeout protection
  • pytest-mock - Mocking support
  • pytest-sugar - Better output
  • pytest-xdist - Parallel execution
  • hypothesis - Property-based testing

CLI Test Command

UV Scripts Configuration

The workspace pyproject.toml defines test scripts for convenience:

[tool.uv.scripts]
# Test execution commands (delegates to CLI)
test = "aiwebfeeds test all"
test-all = "aiwebfeeds test all"
test-unit = "aiwebfeeds test unit"
test-unit-fast = "aiwebfeeds test unit --fast"
test-integration = "aiwebfeeds test integration"
test-e2e = "aiwebfeeds test e2e"
test-coverage = "aiwebfeeds test coverage"
test-coverage-open = "aiwebfeeds test coverage --open"
test-quick = "aiwebfeeds test quick"
test-debug = "aiwebfeeds test debug"
test-watch = "aiwebfeeds test watch"
test-markers = "aiwebfeeds test markers"

UV Integration

All commands use uv run internally:

import subprocess
from pathlib import Path
from typing import Optional

def run_uv_command(args: list[str], cwd: Optional[Path] = None) -> int:
    """Run a uv command and return its exit code."""
    cmd = ["uv", "run"] + args
    result = subprocess.run(cmd, cwd=cwd)
    return result.returncode
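
On top of this helper, each subcommand assembles its pytest arguments before delegating. A hedged sketch of how `test unit --fast` could map its flag onto marker selection (the helper name and exact argument layout are assumptions, not the actual CLI source):

```python
def pytest_args_for_unit(fast: bool = False) -> list[str]:
    """Build the pytest argument list for the `test unit` subcommand (hypothetical)."""
    # --fast deselects tests carrying the `slow` marker.
    marker = "unit and not slow" if fast else "unit"
    return ["pytest", "-m", marker]
```

The resulting list would then be handed to run_uv_command, e.g. `run_uv_command(pytest_args_for_unit(fast=True))`, so every entry point shares the same selection logic.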

Available Subcommands

  • test all - Run all tests. Options: --verbose, --coverage, --parallel. uv script: uv run test
  • test unit - Unit tests only. Options: --fast (skip slow). uv script: uv run test-unit
  • test integration - Integration tests. Options: --verbose. uv script: uv run test-integration
  • test e2e - E2E tests. Options: --verbose. uv script: uv run test-e2e
  • test coverage - Run with coverage. Options: --open (open browser). uv script: uv run test-coverage
  • test quick - Fast unit tests. uv script: uv run test-quick
  • test watch - Watch mode. uv script: uv run test-watch
  • test file <path> - Run a specific file. Options: -k <keyword>. uv script: N/A (use CLI)
  • test debug - Debug mode. uv script: uv run test-debug
  • test markers - List markers. uv script: uv run test-markers

Examples

# Recommended: Use uv scripts
uv run test-quick                # Quick development cycle
uv run test-coverage-open        # Full test with coverage
uv run test-watch                # Watch mode for TDD

# Alternative: Use CLI directly
uv run aiwebfeeds test all --verbose --coverage
uv run aiwebfeeds test unit --fast
uv run aiwebfeeds test debug packages/ai_web_feeds/unit/test_models.py

# Legacy: Use run_tests.py wrapper
cd tests
./run_tests.py quick
./run_tests.py coverage

Benefits of This Architecture

Single Source of Truth: All test execution logic lives in the CLI commands, with uv scripts providing convenient shortcuts. This eliminates duplication and makes maintenance easier.

Key advantages:

  1. Native uv Integration - Uses uv's built-in script system
  2. Multiple Entry Points - Choose the interface that works best for you
  3. Consistent Behavior - All methods use the same underlying CLI
  4. Easy Discovery - uv run --help lists all available scripts
  5. Backward Compatible - Legacy run_tests.py still works

CI/CD Integration

GitHub Actions Example

name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Install uv
        run: curl -LsSf https://astral.sh/uv/install.sh | sh

      - name: Run tests with uv scripts
        run: uv run test-coverage

      - name: Upload coverage
        uses: codecov/codecov-action@v3

Migration from Legacy Commands

If you're updating CI/CD pipelines:

Before:

- run: python tests/run_tests.py coverage

After (Recommended):

- run: uv run test-coverage

Alternative:

- run: uv run aiwebfeeds test coverage

Docker Testing

FROM python:3.13-slim

WORKDIR /app
COPY . .

RUN pip install uv
RUN cd tests && uv sync

CMD ["uv", "run", "--directory", "tests", "pytest", "-v"]

Performance

Test Execution Speed

  • Quick tests: ~2-5 seconds
  • Unit tests: ~10-15 seconds
  • Integration tests: ~20-30 seconds
  • Full suite: ~30-45 seconds
  • With coverage: ~45-60 seconds
  • Parallel execution: 50-70% faster

Optimization Tips

  1. Use quick mode for rapid feedback during development
  2. Run unit tests before integration/E2E
  3. Enable parallel execution with --parallel
  4. Skip slow tests with --fast flag
  5. Use watch mode for TDD workflow

Best Practices

Writing Tests

  1. Mirror structure - Test files match source files
  2. Use fixtures - Reusable test data
  3. Mark appropriately - Use @pytest.mark.unit, etc.
  4. Property-based - Use Hypothesis for edge cases
  5. Descriptive names - Clear test method names
  6. AAA pattern - Arrange, Act, Assert
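
As an illustration of points 5 and 6, a self-contained sketch (the function under test, `normalize_platform_tag`, is invented for this example):

```python
def normalize_platform_tag(tag: str) -> str:
    """Function under test: trim whitespace and lowercase a platform tag."""
    return tag.strip().lower()

def test_normalize_platform_tag_strips_and_lowercases():
    """Descriptive name plus explicit Arrange-Act-Assert structure."""
    # Arrange
    raw = "  Twitter  "
    # Act
    result = normalize_platform_tag(raw)
    # Assert
    assert result == "twitter"
```
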

Running Tests

  1. Quick first - Run quick tests during development
  2. Full before commit - Run all tests before committing
  3. Coverage regularly - Check coverage weekly
  4. E2E before release - Run E2E tests before releases
  5. CI/CD always - All tests in CI/CD pipeline

Troubleshooting

Tests Not Found

# Sync dependencies
cd tests
uv sync

# Verify discovery
uv run pytest --collect-only

Import Errors

# From workspace root
uv sync

# Verify package installed
uv run --directory tests python -c "import ai_web_feeds"

Slow Tests

# Skip slow tests
aiwebfeeds test unit --fast

# Show slowest tests
uv run --directory tests pytest --durations=10

Coverage Issues

# Clear coverage data
rm -rf tests/reports/.coverage tests/reports/coverage

# Regenerate
aiwebfeeds test coverage

Documentation

All test infrastructure documentation is integrated into this Fumadocs site.

Future Enhancements

  • Mutation testing with mutmut
  • Performance benchmarking with pytest-benchmark
  • Async testing with pytest-asyncio
  • Snapshot testing
  • Contract testing
  • Load testing