# Test Infrastructure

Comprehensive test suite with pytest, uv, and advanced testing features.

## Overview

AI Web Feeds includes a production-ready test suite with 100+ tests covering unit, integration, and end-to-end scenarios. The infrastructure uses modern tools for fast, reliable testing.

## Test Execution Architecture

### Centralized Test Execution

All test execution logic is centralized in uv scripts defined in the workspace root `pyproject.toml`. The scripts delegate to the CLI for consistent test execution across all environments.

### Execution Flow
```text
uv scripts (workspace pyproject.toml)
        ↓
CLI Test Commands
        ↓
pytest (test execution)
```

Alternative entry point for backward compatibility:

```text
tests/run_tests.py → uv scripts → CLI → pytest
```

### Multiple Entry Points
You can run tests using any of these methods:

**uv scripts (recommended):**

```bash
# Run all tests
uv run test

# Run unit tests
uv run test-unit

# Run unit tests (skip slow)
uv run test-unit-fast

# Run with coverage and open in browser
uv run test-coverage-open

# Quick test run
uv run test-quick

# Debug mode
uv run test-debug

# Watch mode
uv run test-watch

# List available scripts
uv run --help
```

**CLI directly:**

```bash
# Run all tests
uv run aiwebfeeds test all

# Run unit tests with options
uv run aiwebfeeds test unit --fast

# Run with coverage
uv run aiwebfeeds test coverage --open

# E2E tests only
uv run aiwebfeeds test e2e

# Get help
uv run aiwebfeeds test --help
```

**Legacy wrapper:**

```bash
cd tests

# Run all tests
./run_tests.py all

# Run unit tests
./run_tests.py unit

# Run with coverage
./run_tests.py coverage

# Quick run
./run_tests.py quick

# Get help
./run_tests.py help
```

## Quick Reference

### Common Commands
**Development:**

```bash
# Quick test (TDD workflow)
uv run test-quick

# Watch mode (auto-rerun)
uv run test-watch

# Unit tests only
uv run test-unit-fast

# With coverage
uv run test-coverage-open
```

**Full suite:**

```bash
# Full test suite with coverage
uv run test-coverage

# All tests
uv run test-all

# E2E tests only
uv run test-e2e

# Integration tests
uv run test-integration
```

**Debugging:**

```bash
# Debug mode (with pdb)
uv run test-debug

# Or use the CLI directly with a specific test
uv run aiwebfeeds test file test_models.py -k "twitter"

# Show local variables
uv run aiwebfeeds test all --verbose
```

### Test Suite Statistics
- 11 test files created
- 35+ test classes
- 100+ individual tests
- 15+ reusable fixtures
- 2,500+ lines of test code
## Test Structure

Tests mirror the source code structure:
```text
packages/ai_web_feeds/src/ai_web_feeds/
├── models.py     → tests/.../test_models.py
├── storage.py    → tests/.../test_storage.py
├── fetcher.py    → tests/.../test_fetcher.py
├── config.py     → tests/.../test_config.py
├── utils.py      → tests/.../test_utils.py
└── analytics.py  → tests/.../test_analytics.py
```

## Test Categories
### Unit Tests (`@pytest.mark.unit`)

Fast, isolated tests with no external dependencies:

- `test_models.py` - Model validation with property-based testing
- `test_storage.py` - Database CRUD operations
- `test_fetcher.py` - Feed fetching with mocking
- `test_config.py` - Configuration management
- `test_utils.py` - Utility functions (platform detection, URL generation)
- `test_analytics.py` - Analytics calculations
- `test_commands.py` - CLI command tests
### Integration Tests (`@pytest.mark.integration`)

Multi-component workflows:

- `test_integration.py` - Database + Fetcher integration
- `test_cli_integration.py` - CLI integration

### E2E Tests (`@pytest.mark.e2e`)

Complete user workflows:

- `test_workflows.py` - Full workflows (onboarding, bulk operations, export)

## Advanced Features

### Property-Based Testing

Using Hypothesis for robust input validation:
```python
from hypothesis import given, strategies as st

from ai_web_feeds.utils import sanitize_text


@given(st.text())
def test_sanitize_text_property_based(text):
    """Property-based test for text sanitization."""
    result = sanitize_text(text)
    assert isinstance(result, str)
```

### Test Fixtures
Comprehensive fixtures in `conftest.py`:

**Database Fixtures:**

- `temp_db_path` - Temporary SQLite database
- `db_engine` - Test database engine
- `db_session` - Test database session

**Model Fixtures:**

- `sample_feed_source` - Single feed source
- `sample_feed_items` - Multiple feed items (5)
- `sample_topic` - Topic instance

**Mock Fixtures:**

- `mock_httpx_response` - Mocked HTTP response
- `mock_feedparser_result` - Mocked feedparser result

**File Fixtures:**

- `temp_yaml_file` - Temporary YAML file
- `sample_rss_feed` - Sample RSS XML
- `sample_atom_feed` - Sample Atom XML
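Fixtures like these are typically defined along the following lines; this is a minimal sketch, not the project's actual `conftest.py`, and the `make_feed_items` helper and its field names are illustrative assumptions:

```python
from pathlib import Path

import pytest


def make_feed_items(count: int = 5) -> list[dict]:
    """Build minimal feed-item dicts; the field names are illustrative."""
    return [
        {"id": i, "title": f"Item {i}", "url": f"https://example.com/{i}"}
        for i in range(count)
    ]


@pytest.fixture
def temp_db_path(tmp_path: Path) -> Path:
    """Path for a throwaway SQLite database; pytest removes tmp_path afterwards."""
    return tmp_path / "test.db"


@pytest.fixture
def sample_feed_items() -> list[dict]:
    """Five sample feed items, mirroring the fixture listed above."""
    return make_feed_items()
```

A test then receives the data simply by naming the fixture as a parameter, e.g. `def test_store(sample_feed_items): ...`.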
## Test Markers

Available markers for filtering:

| Marker | Description |
|---|---|
| `unit` | Unit tests (fast, no external dependencies) |
| `integration` | Integration tests (multiple components) |
| `e2e` | End-to-end tests (full workflows) |
| `slow` | Slow running tests |
| `network` | Tests requiring network access |
| `database` | Tests requiring database |

```bash
# List all markers
aiwebfeeds test markers

# Run specific markers
uv run --directory tests pytest -m "unit and not slow"
```

## Coverage Reporting
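Because the pytest settings pass `--strict-markers`, each of these markers must be registered in the pytest configuration or collection fails. A sketch of the corresponding entry (the descriptions mirror the table above; the exact wording in `tests/pyproject.toml` may differ):

```toml
[tool.pytest.ini_options]
markers = [
    "unit: Unit tests (fast, no external dependencies)",
    "integration: Integration tests (multiple components)",
    "e2e: End-to-end tests (full workflows)",
    "slow: Slow running tests",
    "network: Tests requiring network access",
    "database: Tests requiring database",
]
```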
Generate comprehensive coverage reports:

```bash
# HTML + terminal report
aiwebfeeds test coverage

# Open in browser
aiwebfeeds test coverage --open

# Coverage reports saved to: tests/reports/coverage/
```

**Coverage Configuration:**

```toml
[tool.coverage.run]
source = ["ai_web_feeds"]
branch = true
omit = ["*/tests/*", "*/test_*.py"]

[tool.coverage.report]
precision = 2
show_missing = true
exclude_lines = [
    "pragma: no cover",
    "def __repr__",
    "if __name__ == .__main__.:",
    "if TYPE_CHECKING:",
]
```

## Test Configuration
All configuration lives in `tests/pyproject.toml`:

### Pytest Settings

```toml
[tool.pytest.ini_options]
python_files = "test_*.py"
python_classes = "Test*"
python_functions = "test_*"
testpaths = ["."]
addopts = [
    "-v",                  # Verbose
    "--strict-markers",    # Enforce markers
    "--showlocals",        # Show locals in errors
    "--cov=ai_web_feeds",  # Coverage
    "--emoji",             # Emoji output
    "--icdiff",            # Better diffs
    "--instafail",         # Instant failures
    "--timeout=300",       # Test timeout
]
```

### Pytest Plugins
- `pytest-cov` - Coverage reporting
- `pytest-emoji` - Emoji test output
- `pytest-icdiff` - Better diff display
- `pytest-instafail` - Instant failure reporting
- `pytest-html` - HTML reports
- `pytest-timeout` - Timeout protection
- `pytest-mock` - Mocking support
- `pytest-sugar` - Better output
- `pytest-xdist` - Parallel execution
- `hypothesis` - Property-based testing

## CLI Test Command

### UV Scripts Configuration

The workspace `pyproject.toml` defines test scripts for convenience:
```toml
[tool.uv.scripts]
# Test execution commands (delegate to the CLI)
test = "aiwebfeeds test all"
test-all = "aiwebfeeds test all"
test-unit = "aiwebfeeds test unit"
test-unit-fast = "aiwebfeeds test unit --fast"
test-integration = "aiwebfeeds test integration"
test-e2e = "aiwebfeeds test e2e"
test-coverage = "aiwebfeeds test coverage"
test-coverage-open = "aiwebfeeds test coverage --open"
test-quick = "aiwebfeeds test quick"
test-debug = "aiwebfeeds test debug"
test-watch = "aiwebfeeds test watch"
test-markers = "aiwebfeeds test markers"
```

### UV Integration
All commands use `uv run` internally:

```python
import subprocess
from pathlib import Path
from typing import Optional


def run_uv_command(args: list[str], cwd: Optional[Path] = None) -> int:
    """Run a uv command and return its exit code."""
    cmd = ["uv", "run"] + args
    result = subprocess.run(cmd, cwd=cwd)
    return result.returncode
```

### Available Subcommands
| Command | Description | Options | uv Script |
|---|---|---|---|
| `test all` | Run all tests | `--verbose`, `--coverage`, `--parallel` | `uv run test` |
| `test unit` | Unit tests only | `--fast` (skip slow) | `uv run test-unit` |
| `test integration` | Integration tests | `--verbose` | `uv run test-integration` |
| `test e2e` | E2E tests | `--verbose` | `uv run test-e2e` |
| `test coverage` | With coverage | `--open` (open browser) | `uv run test-coverage` |
| `test quick` | Fast unit tests | None | `uv run test-quick` |
| `test watch` | Watch mode | None | `uv run test-watch` |
| `test file <path>` | Specific file | `-k <keyword>` | N/A (use CLI) |
| `test debug` | Debug mode | None | `uv run test-debug` |
| `test markers` | List markers | None | `uv run test-markers` |
### Examples

```bash
# Recommended: use uv scripts
uv run test-quick          # Quick development cycle
uv run test-coverage-open  # Full test run with coverage
uv run test-watch          # Watch mode for TDD

# Alternative: use the CLI directly
uv run aiwebfeeds test all --verbose --coverage
uv run aiwebfeeds test unit --fast
uv run aiwebfeeds test debug packages/ai_web_feeds/unit/test_models.py

# Legacy: use the run_tests.py wrapper
cd tests
./run_tests.py quick
./run_tests.py coverage
```

### Benefits of This Architecture

Key advantages:

- **Native uv integration** - Uses uv's built-in script system
- **Multiple entry points** - Choose the interface that works best for you
- **Consistent behavior** - All methods use the same underlying CLI
- **Easy discovery** - `uv run --help` lists all available scripts
- **Backward compatible** - The legacy `run_tests.py` wrapper still works
## CI/CD Integration

### GitHub Actions Example

```yaml
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install uv
        run: curl -LsSf https://astral.sh/uv/install.sh | sh
      - name: Run tests with uv scripts
        run: uv run test-coverage
      - name: Upload coverage
        uses: codecov/codecov-action@v3
```

### Migration from Legacy Commands
If you're updating CI/CD pipelines:

**Before:**

```yaml
- run: python tests/run_tests.py coverage
```

**After (recommended):**

```yaml
- run: uv run test-coverage
```

**Alternative:**

```yaml
- run: uv run aiwebfeeds test coverage
```

### Docker Testing

```dockerfile
FROM python:3.13-slim

WORKDIR /app
COPY . .

RUN pip install uv
RUN cd tests && uv sync

CMD ["uv", "run", "--directory", "tests", "pytest", "-v"]
```

## Performance
### Test Execution Speed

- Quick tests: ~2-5 seconds
- Unit tests: ~10-15 seconds
- Integration tests: ~20-30 seconds
- Full suite: ~30-45 seconds
- With coverage: ~45-60 seconds
- Parallel execution: 50-70% faster

### Optimization Tips

- Use quick mode for rapid feedback during development
- Run unit tests before integration/E2E
- Enable parallel execution with `--parallel`
- Skip slow tests with the `--fast` flag
- Use watch mode for TDD workflows
## Best Practices

### Writing Tests

- **Mirror structure** - Test files match source files
- **Use fixtures** - Reusable test data
- **Mark appropriately** - Use `@pytest.mark.unit`, etc.
- **Property-based** - Use Hypothesis for edge cases
- **Descriptive names** - Clear test method names
- **AAA pattern** - Arrange, Act, Assert
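These conventions combine as in the following minimal sketch of the AAA pattern with a marker; `sanitize_text` here is an illustrative stand-in defined inline, not the project's actual helper:

```python
import pytest


def sanitize_text(text: str) -> str:
    """Stand-in helper: collapse runs of whitespace into single spaces."""
    return " ".join(text.split())


@pytest.mark.unit
def test_sanitize_text_collapses_whitespace():
    # Arrange
    raw = "  hello \t world  "
    # Act
    result = sanitize_text(raw)
    # Assert
    assert result == "hello world"
```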
### Running Tests

- **Quick first** - Run quick tests during development
- **Full before commit** - Run all tests before committing
- **Coverage regularly** - Check coverage weekly
- **E2E before release** - Run E2E tests before releases
- **CI/CD always** - Run all tests in the CI/CD pipeline
## Troubleshooting

### Tests Not Found

```bash
# Sync dependencies
cd tests
uv sync

# Verify discovery
uv run pytest --collect-only
```

### Import Errors

```bash
# From the workspace root
uv sync

# Verify the package is installed
uv run --directory tests python -c "import ai_web_feeds"
```

### Slow Tests

```bash
# Skip slow tests
aiwebfeeds test unit --fast

# Show the slowest tests
uv run --directory tests pytest --durations=10
```

### Coverage Issues

```bash
# Clear coverage data
rm -rf tests/reports/.coverage tests/reports/coverage

# Regenerate
aiwebfeeds test coverage
```

## Documentation
All test infrastructure documentation is integrated into this Fumadocs site:

- **Testing Guide** - Quick start and overview
- **This page** - Comprehensive test infrastructure reference
- **Twitter/arXiv Integration** - Platform-specific testing
- `tests/README.md` - Technical reference (in the repository)

## Future Enhancements
- Mutation testing with `mutmut`
- Performance benchmarking with `pytest-benchmark`
- Async testing with `pytest-asyncio`
- Snapshot testing
- Contract testing
- Load testing