Plugin Architecture Testing Strategy

This document explains how datason implements and tests its plugin-style architecture: the core package has zero required dependencies, and functionality is added automatically when optional dependencies are installed.

🎯 Architecture Goals

Core Principle: Zero Dependencies

pip install datason  # ← Works with no additional dependencies!

Enhanced Functionality

pip install datason[numpy]     # ← Adds NumPy support
pip install datason[pandas]    # ← Adds Pandas support
pip install datason[ml]        # ← Adds ML library support
pip install datason[crypto]    # ← Adds digital signature support
pip install datason[all]      # ← All optional features
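
For illustration, the same top-level call handles standard library types on a bare install and transparently picks up NumPy support when that extra is present. A minimal sketch, assuming datason's serialize() API:

import datetime
import uuid

import datason

# Works on a bare `pip install datason` -- standard library types only.
payload = {
    "id": uuid.uuid4(),
    "created": datetime.datetime.now(),
    "values": [1, 2, 3],
}
result = datason.serialize(payload)

# With `pip install datason[numpy]`, arrays go through the same call.
try:
    import numpy as np

    datason.serialize({"array": np.arange(5)})
except ImportError:
    pass  # numpy extra not installed; core behavior is unchanged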

🧪 Testing Strategy

CI Matrix Testing

Our CI tests multiple dependency scenarios:

  1. minimal - Core functionality only (no optional dependencies)
  2. with-numpy - Core + NumPy support
  3. with-pandas - Core + Pandas support
  4. with-ml-deps - Core + ML dependencies (sklearn, etc.)
  5. full - All dependencies (complete test suite)

Test Categories

🟢 Core Tests (Always Run)

  • Basic JSON serialization
  • Datetime handling
  • UUID handling
  • Error handling
  • Security features

Files: tests/core/test_core.py, tests/core/test_deserializers.py, tests/core/test_converters.py, tests/core/test_security.py, tests/core/test_edge_cases.py, tests/core/test_circular_references.py, tests/core/test_dataframe_orientation_regression.py

🟡 Feature Tests (Conditional)

  • NumPy array serialization
  • Pandas DataFrame handling
  • ML model serialization
  • Chunked streaming
  • Template deserialization
  • Auto-detection and metadata

Files: tests/features/test_ml_serializers.py, tests/features/test_chunked_streaming.py, tests/features/test_auto_detection_and_metadata.py, tests/features/test_template_deserialization.py

🔵 Integration Tests (Multi-component)

  • Configuration system integration
  • Optional dependency integration
  • Pickle bridge functionality

Files: tests/integration/test_config_and_type_handlers.py, tests/integration/test_optional_dependencies.py, tests/integration/test_pickle_bridge.py

⚪ Coverage Tests (Edge cases)

  • Coverage boost for specific modules
  • Edge case testing
  • Error condition testing

Files: tests/coverage/test_*_coverage_boost.py

🔴 Performance Tests (Separate Pipeline)

  • Benchmark tests
  • Performance regression detection
  • Memory usage analysis

Files: tests/benchmarks/test_*_benchmarks.py (runs in separate performance pipeline)

๐Ÿ“ Test Markers

Use pytest markers to categorize tests:

import pytest
from unittest.mock import patch

from conftest import requires_numpy, requires_pandas, requires_sklearn

@pytest.mark.core
def test_basic_serialization():
    """Core functionality - always runs."""
    pass

@requires_numpy
@pytest.mark.numpy
def test_numpy_arrays():
    """Only runs when numpy is available."""
    pass

@requires_pandas  
@pytest.mark.pandas
def test_pandas_dataframes():
    """Only runs when pandas is available."""
    pass

@pytest.mark.fallback
def test_numpy_fallback():
    """Tests fallback when numpy isn't available."""
    with patch("datason.core.np", None):
        # Test fallback behavior
        pass
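
The requires_* decorators imported above are typically thin wrappers around pytest.mark.skipif. A minimal conftest.py sketch of how such helpers can be defined (datason's actual conftest may differ):

import pytest

# Probe each optional dependency once at collection time.
try:
    import numpy  # noqa: F401
    HAS_NUMPY = True
except ImportError:
    HAS_NUMPY = False

try:
    import pandas  # noqa: F401
    HAS_PANDAS = True
except ImportError:
    HAS_PANDAS = False

try:
    import sklearn  # noqa: F401
    HAS_SKLEARN = True
except ImportError:
    HAS_SKLEARN = False

requires_numpy = pytest.mark.skipif(not HAS_NUMPY, reason="numpy not available")
requires_pandas = pytest.mark.skipif(not HAS_PANDAS, reason="pandas not available")
requires_sklearn = pytest.mark.skipif(not HAS_SKLEARN, reason="scikit-learn not available")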

🚀 Running Tests Locally

Test Minimal Install

# Create clean environment
python -m venv test_minimal
source test_minimal/bin/activate
pip install -e .
pip install pytest pytest-cov

# Run core tests only
pytest tests/core/ -v
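
A tiny smoke-test script for the clean environment is also useful; the import succeeding is itself the core check, since any hard dependency on an optional package would raise ImportError. A sketch, again assuming the top-level serialize() API:

# smoke_test_minimal.py -- run inside the clean virtual environment.
# Any hard dependency on numpy/pandas would raise ImportError on import.
import datason

print(datason.serialize({"ok": True, "count": 3}))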

Test With Specific Dependencies

# Test with numpy
pip install numpy
pytest tests/core/ tests/features/test_ml_serializers.py -v

# Test with pandas
pip install pandas  
pytest tests/core/ tests/features/test_auto_detection_and_metadata.py tests/features/test_chunked_streaming.py tests/features/test_template_deserialization.py -v

# Test ML features
pip install numpy pandas scikit-learn
pytest tests/core/ tests/features/ tests/integration/ -v

Test Full Suite

pip install -e ".[dev]"
pytest tests/core/ tests/features/ tests/integration/ tests/coverage/ -v

Test Performance (Separate)

# Performance tests run in separate pipeline
pytest tests/benchmarks/ --benchmark-only -v
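
The --benchmark-only flag comes from the pytest-benchmark plugin; a benchmark test is an ordinary pytest function that receives the benchmark fixture. A minimal sketch (the payload and test name are illustrative):

import datason


def test_serialize_flat_dict(benchmark):
    # pytest-benchmark calls the target repeatedly and records timing stats.
    payload = {"values": list(range(1000))}
    result = benchmark(datason.serialize, payload)
    assert result is not None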

📋 CI Test Matrix

Each CI job tests a specific scenario:

| Job | Dependencies | Tests Run | Purpose |
|-----|--------------|-----------|---------|
| minimal | None | tests/core/ | Verify zero-dependency functionality |
| with-numpy | numpy | tests/core/ + NumPy feature tests | Basic array support |
| with-pandas | pandas | tests/core/ + Pandas feature tests | DataFrame support |
| with-ml-deps | numpy, pandas, sklearn | tests/core/ + tests/features/ | ML model serialization |
| full | All deps | tests/core/ + tests/features/ + tests/integration/ + tests/coverage/ | Complete functional testing |
| performance | All deps | tests/benchmarks/ (separate pipeline) | Performance regression tracking |

🔧 Adding New Optional Features

When adding support for a new optional dependency:

  1. Add to pyproject.toml:

    [project.optional-dependencies]
    crypto = ["cryptography>=42.0.0"]
    

  2. Add the conditional import (see the call-time gating sketch after this list):

    try:
        import newlibrary
        HAS_NEWLIBRARY = True
    except ImportError:
        HAS_NEWLIBRARY = False
    

  3. Add to CI matrix:

    - name: "with-crypto"
      install: "pip install -e . && pip install cryptography"
      description: "Core + digital signature support"
    

  4. Add tests with proper markers:

    @pytest.mark.skipif(not HAS_NEWLIBRARY, reason="newlibrary not available")
    @pytest.mark.newlibrary
    def test_newlibrary_feature():
        pass
    
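
Steps 2 and 4 pair naturally with call-time gating, so a user who invokes the feature without the extra installed gets an actionable message instead of a bare ImportError. A sketch reusing the placeholder newlibrary name (sign_payload and newlibrary.sign are hypothetical):

try:
    import newlibrary
    HAS_NEWLIBRARY = True
except ImportError:
    newlibrary = None
    HAS_NEWLIBRARY = False


def sign_payload(payload: bytes) -> bytes:
    """Optional feature, available only when newlibrary is installed."""
    if not HAS_NEWLIBRARY:
        raise ImportError(
            "sign_payload requires the optional 'newlibrary' dependency. "
            "Install it with: pip install datason[crypto]"
        )
    # Hypothetical call -- substitute the real library API here.
    return newlibrary.sign(payload)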

✅ Benefits

  • Lightweight: Users only install what they need
  • Robust: Core functionality always works
  • Flexible: Easy to add new optional features
  • Well-tested: All scenarios covered in CI
  • User-friendly: Clear error messages when dependencies are missing

This architecture ensures datason works for everyone while providing enhanced functionality for users who need it.