# DSS Test Suite

Comprehensive test suite for Design System Server.

## Running Tests

```bash
# Install pytest if not already installed
pip install pytest pytest-asyncio

# Run all tests
pytest

# Run specific test file
pytest tests/test_ingestion.py

# Run with verbose output
pytest -v

# Run with coverage (requires pytest-cov)
pip install pytest-cov
pytest --cov=tools --cov-report=html

# Run only fast tests (skip slow integration tests)
pytest -m "not slow"
```
## Test Structure

```
tests/
├── conftest.py         # Shared fixtures and configuration
├── test_ingestion.py   # Token ingestion tests (CSS, SCSS, JSON)
├── test_merge.py       # Token merging and conflict resolution
└── README.md           # This file
```

## Test Categories

### Unit Tests

Fast, isolated tests for individual functions/classes.

- Token parsing
- Merge strategies
- Collection operations

### Integration Tests (marked with `@pytest.mark.slow`)

Tests that interact with external systems or files.

- Figma API (requires `FIGMA_TOKEN`)
- File system operations
- Database operations

### Async Tests (marked with `@pytest.mark.asyncio`)

Tests for async functions.

- All ingestion operations
- API endpoints
- MCP tools
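Both markers must be known to pytest: `asyncio` is registered by the pytest-asyncio plugin, but `slow` is project-specific. A minimal sketch of registering it via pytest's `pytest_configure` hook (assuming registration lives in `conftest.py` rather than an ini file):

```python
# conftest.py (sketch): register the custom "slow" marker so that
# `pytest -m "not slow"` works without PytestUnknownMarkWarning.
def pytest_configure(config):
    config.addinivalue_line("markers", "slow: slow integration tests")
```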
## Fixtures

Available in `conftest.py`:

- `temp_dir`: Temporary directory for file operations
- `sample_css`: Sample CSS custom properties
- `sample_scss`: Sample SCSS variables
- `sample_json_tokens`: Sample W3C JSON tokens
- `sample_token_collection`: Pre-built token collection
- `tailwind_config_path`: Temporary Tailwind config file
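For orientation, the first two fixtures might look roughly like this (a sketch only; the bodies and sample values are illustrative assumptions, not the actual `conftest.py` contents):

```python
import pytest

@pytest.fixture
def temp_dir(tmp_path):
    """Temporary directory for file operations (wraps pytest's built-in tmp_path)."""
    return tmp_path

@pytest.fixture
def sample_css():
    """Sample CSS custom properties (illustrative values)."""
    return ":root { --color-primary: #3366ff; --spacing-sm: 8px; }"
```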
## Writing New Tests

### Unit Test Example

```python
import pytest

from tools.ingest.css import CSSTokenSource

@pytest.mark.asyncio
async def test_css_parsing(sample_css):
    """Test CSS token extraction."""
    parser = CSSTokenSource()
    result = await parser.extract(sample_css)

    assert len(result.tokens) > 0
    assert result.name
```
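Note that `sample_css` is never imported: pytest injects it by matching the argument name against the fixtures in `conftest.py` (see Fixtures above).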
### Integration Test Example

```python
import pytest

@pytest.mark.slow
@pytest.mark.asyncio
async def test_figma_integration():
    """Test Figma API integration."""
    # Test code here
    pass
```
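Because Figma integration tests need a live `FIGMA_TOKEN` (see Known Issues below), it is worth guarding them so unconfigured runs skip rather than fail. A minimal sketch (the `requires_figma` name is illustrative, not part of the suite):

```python
import os

import pytest

# Skip Figma-backed tests when no token is configured (e.g., local runs, forks).
requires_figma = pytest.mark.skipif(
    not os.getenv("FIGMA_TOKEN"),
    reason="FIGMA_TOKEN environment variable not set",
)

@requires_figma
@pytest.mark.slow
@pytest.mark.asyncio
async def test_figma_integration():
    """Test Figma API integration."""
    ...
```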
## Continuous Integration

Tests run automatically on:

- Pull requests
- Commits to main branch
- Nightly builds

### CI Configuration

```yaml
# .github/workflows/test.yml
- name: Run tests
  run: pytest --cov=tools --cov-report=xml

- name: Upload coverage
  uses: codecov/codecov-action@v3
```
## Coverage Goals

Target: 80% code coverage

Current coverage by module:

- tools.ingest: ~85%
- tools.analyze: ~70%
- tools.storybook: ~65%
- tools.figma: ~60% (requires API mocking)
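To keep the target enforced rather than aspirational, pytest-cov's `--cov-fail-under=80` flag makes the run fail when coverage drops below the threshold.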
## Mocking External Services

### Figma API

```python
from unittest.mock import AsyncMock, patch

import pytest

@pytest.mark.asyncio
async def test_with_mocked_figma():
    # Patch the client class so `async with httpx.AsyncClient()` yields our mock.
    with patch('tools.figma.figma_tools.httpx.AsyncClient') as mock:
        mock.return_value.__aenter__.return_value.get = AsyncMock(
            return_value={"status": "ok"}
        )
        # Test code here
```
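Patching the class where it is looked up (`tools.figma.figma_tools.httpx.AsyncClient`) rather than `httpx.AsyncClient` globally keeps the mock scoped to the module under test; the `__aenter__` hop mirrors what `async with` calls on the client.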
### Database

```python
import sqlite3

import pytest

@pytest.fixture
def mock_db(temp_dir):
    """Create temporary test database."""
    db_path = temp_dir / "test.db"
    # Initialize the test DB (sqlite3 here is illustrative -- swap in the
    # project's actual schema/setup).
    sqlite3.connect(db_path).close()
    return db_path
```
## Test Data

Test fixtures use realistic but minimal data:

- ~5-10 tokens per collection
- Simple color and spacing values
- W3C-compliant JSON format
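As a rough illustration of the shape involved, the `sample_json_tokens` fixture presumably returns a payload along these lines (values invented here; the draft W3C Design Tokens format keys entries by `$value` and `$type`):

```python
# Illustrative W3C/DTCG-style tokens -- not the actual fixture data.
SAMPLE_JSON_TOKENS = {
    "color": {
        "primary": {"$value": "#3366ff", "$type": "color"},
    },
    "spacing": {
        "sm": {"$value": "8px", "$type": "dimension"},
    },
}
```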
## Debugging Failed Tests

```bash
# Run with detailed output
pytest -vv

# Run with pdb on failure
pytest --pdb

# Run last failed tests only
pytest --lf

# Show print statements
pytest -s
```
## Performance Testing

```bash
# Run with duration report
pytest --durations=10

# Profile slow tests
python -m cProfile -o profile.stats -m pytest
```
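The resulting `profile.stats` file can then be inspected with the standard library's `pstats`:

```python
import pstats

# Show the 20 most expensive calls from the cProfile run, by cumulative time.
stats = pstats.Stats("profile.stats")
stats.sort_stats("cumulative").print_stats(20)
```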
## Contributing Tests

1. Write tests for new features
2. Maintain >80% coverage
3. Use descriptive test names
4. Add docstrings to test functions
5. Use fixtures for common setup
6. Mark slow tests with `@pytest.mark.slow`
## Known Issues

- Tailwind parser tests may fail due to regex limitations (non-blocking)
- Figma tests require a valid `FIGMA_TOKEN` environment variable
- Some integration tests may be slow (~5s each)