Major cleanup: Remove redundant code, consolidate knowledge base
- Delete redundant directories: demo/, server/, orchestrator/, team-portal/, servers/
- Remove all human-readable documentation (docs/, .dss/*.md, admin-ui/*.md)
- Consolidate 4 knowledge JSON files into single DSS_CORE.json
- Clear browser logs (7.5MB), backups, temp files
- Remove obsolete configs (.cursorrules, .dss-boundaries.yaml, .ds-swarm/)
- Reduce project from 20MB to ~8MB

Kept: tools/, admin-ui/, cli/, dss-claude-plugin/, .dss/schema/

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
tests/README.md (201 deletions)
# DSS Test Suite

Comprehensive test suite for Design System Server.

## Running Tests

```bash
# Install pytest if not already installed
pip install pytest pytest-asyncio

# Run all tests
pytest

# Run specific test file
pytest tests/test_ingestion.py

# Run with verbose output
pytest -v

# Run with coverage (requires pytest-cov)
pip install pytest-cov
pytest --cov=tools --cov-report=html

# Run only fast tests (skip slow integration tests)
pytest -m "not slow"
```
## Test Structure

```
tests/
├── conftest.py         # Shared fixtures and configuration
├── test_ingestion.py   # Token ingestion tests (CSS, SCSS, JSON)
├── test_merge.py       # Token merging and conflict resolution
└── README.md           # This file
```
## Test Categories

### Unit Tests
Fast, isolated tests for individual functions/classes.
- Token parsing
- Merge strategies
- Collection operations

### Integration Tests (marked with `@pytest.mark.slow`)
Tests that interact with external systems or files.
- Figma API (requires FIGMA_TOKEN)
- File system operations
- Database operations

### Async Tests (marked with `@pytest.mark.asyncio`)
Tests for async functions.
- All ingestion operations
- API endpoints
- MCP tools
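
The custom `slow` marker must be registered, or recent pytest versions warn about unknown marks (`asyncio` is registered by the pytest-asyncio plugin). A minimal sketch, assuming registration lives in `conftest.py` rather than `pytest.ini`:

```python
# conftest.py (sketch): register the custom `slow` marker so
# `pytest -m "not slow"` works without PytestUnknownMarkWarning.
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "slow: integration tests that hit external systems"
    )
```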
## Fixtures

Available in `conftest.py` (a definition sketch follows the list):

- `temp_dir`: Temporary directory for file operations
- `sample_css`: Sample CSS custom properties
- `sample_scss`: Sample SCSS variables
- `sample_json_tokens`: Sample W3C JSON tokens
- `sample_token_collection`: Pre-built token collection
- `tailwind_config_path`: Temporary Tailwind config file
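
A minimal sketch of how two of these fixtures might be defined; the CSS values are illustrative, not the actual fixture data:

```python
# conftest.py (illustrative sketch; real fixture values may differ)
import tempfile
from pathlib import Path

import pytest

@pytest.fixture
def temp_dir():
    """Temporary directory for file operations, cleaned up after the test."""
    with tempfile.TemporaryDirectory() as d:
        yield Path(d)

@pytest.fixture
def sample_css():
    """Sample CSS custom properties."""
    return ":root { --color-primary: #0066cc; --spacing-sm: 4px; }"
```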
## Writing New Tests

### Unit Test Example

```python
import pytest
from tools.ingest.css import CSSTokenSource

@pytest.mark.asyncio
async def test_css_parsing(sample_css):
    """Test CSS token extraction."""
    parser = CSSTokenSource()
    result = await parser.extract(sample_css)

    assert len(result.tokens) > 0
    assert result.name
```
### Integration Test Example

```python
import os

import pytest

@pytest.mark.slow
@pytest.mark.asyncio
async def test_figma_integration():
    """Test Figma API integration."""
    # Skip cleanly when the required credential is absent (see Known Issues)
    if not os.environ.get("FIGMA_TOKEN"):
        pytest.skip("FIGMA_TOKEN not set")
    # Test code here
```
## Continuous Integration

Tests run automatically on:
- Pull requests
- Commits to main branch
- Nightly builds

### CI Configuration

```yaml
# .github/workflows/test.yml (excerpt)
- name: Run tests
  run: pytest --cov=tools --cov-report=xml

- name: Upload coverage
  uses: codecov/codecov-action@v3
```
## Coverage Goals

Target: 80% code coverage

Current coverage by module:
- tools.ingest: ~85%
- tools.analyze: ~70%
- tools.storybook: ~65%
- tools.figma: ~60% (requires API mocking)
## Mocking External Services

### Figma API

```python
from unittest.mock import AsyncMock, MagicMock, patch

import pytest

@pytest.mark.asyncio
async def test_with_mocked_figma():
    # httpx responses expose .json(), so mock a response object
    # rather than returning a bare dict from get()
    mock_response = MagicMock()
    mock_response.json.return_value = {"status": "ok"}
    with patch('tools.figma.figma_tools.httpx.AsyncClient') as mock:
        mock.return_value.__aenter__.return_value.get = AsyncMock(
            return_value=mock_response
        )
        # Test code here
```
### Database

```python
import sqlite3

@pytest.fixture
def mock_db(temp_dir):
    """Create temporary test database."""
    db_path = temp_dir / "test.db"
    # Initialize test DB (SQLite assumed here for illustration)
    sqlite3.connect(db_path).close()
    return db_path
```
## Test Data

Test fixtures use realistic but minimal data (see the sketch after this list):
- ~5-10 tokens per collection
- Simple color and spacing values
- W3C-compliant JSON format
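
For reference, a minimal W3C-style token payload of the kind `sample_json_tokens` might provide; the names and values here are illustrative only:

```python
# Illustrative W3C design-token structure (values are examples only)
SAMPLE_JSON_TOKENS = {
    "color": {
        "primary": {"$type": "color", "$value": "#0066cc"},
        "secondary": {"$type": "color", "$value": "#6c757d"},
    },
    "spacing": {
        "sm": {"$type": "dimension", "$value": "4px"},
        "md": {"$type": "dimension", "$value": "8px"},
    },
}
```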
## Debugging Failed Tests

```bash
# Run with detailed output
pytest -vv

# Run with pdb on failure
pytest --pdb

# Run last failed tests only
pytest --lf

# Show print statements
pytest -s
```
## Performance Testing

```bash
# Run with duration report
pytest --durations=10

# Profile slow tests
python -m cProfile -o profile.stats -m pytest
```
## Contributing Tests

1. Write tests for new features
2. Maintain >80% coverage
3. Use descriptive test names (see the sketch after this list)
4. Add docstrings to test functions
5. Use fixtures for common setup
6. Mark slow tests with `@pytest.mark.slow`
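
A minimal sketch that follows these conventions, reusing `CSSTokenSource` and the `sample_css` fixture from the examples above; the test name and assertion are illustrative:

```python
import pytest

from tools.ingest.css import CSSTokenSource

@pytest.mark.asyncio
async def test_css_ingestion_extracts_custom_properties(sample_css):
    """Every CSS custom property in the sample should yield a token."""
    parser = CSSTokenSource()
    result = await parser.extract(sample_css)
    assert len(result.tokens) > 0
```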
## Known Issues

- Tailwind parser tests may fail due to regex limitations (non-blocking)
- Figma tests require a valid FIGMA_TOKEN environment variable
- Some integration tests may be slow (~5s each)