Migrated from design-system-swarm with fresh git history.
Old project history preserved in /home/overbits/apps/design-system-swarm
Core components:
- MCP Server (Python FastAPI with mcp 1.23.1)
- Claude Plugin (agents, commands, skills, strategies, hooks, core)
- DSS Backend (dss-mvp1 - token translation, Figma sync)
- Admin UI (Node.js/React)
- Server (Node.js/Express)
- Storybook integration (dss-mvp1/.storybook)
Self-contained configuration:
- All paths relative or use DSS_BASE_PATH=/home/overbits/dss
- PYTHONPATH configured for dss-mvp1 and dss-claude-plugin
- .env file with all configuration
- Claude plugin uses ${CLAUDE_PLUGIN_ROOT} for portability
Migration completed: $(date)
🤖 Clean migration with full functionality preserved
# DSS Performance Characteristics

## Benchmark Results (v0.3.1)

Tested on: Python 3.10, Linux

### Token Ingestion

| Operation | Time per Token | Throughput |
|-----------|----------------|------------|
| CSS Parsing | 0.05ms | ~20,000/sec |
| SCSS Parsing | 0.06ms | ~16,000/sec |
| JSON Parsing | 0.04ms | ~25,000/sec |
| Tailwind Config | 0.08ms | ~12,500/sec |

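The per-token figures translate directly into throughput (0.05ms/token ≈ 20,000 tokens/sec). A sketch of how such a micro-benchmark can be reproduced, with a hypothetical `parse_css` standing in for the real DSS parser:

```python
import time

def parse_css(source: str) -> list[dict]:
    # Hypothetical stand-in for the real CSS token parser.
    return [{"name": "color-primary", "value": "#3366ff"}]

css_source = ":root { --color-primary: #3366ff; }"

iterations = 10_000
start = time.perf_counter()
for _ in range(iterations):
    tokens = parse_css(css_source)
elapsed = time.perf_counter() - start

# 0.05ms per token corresponds to 1 / 0.00005s ≈ 20,000 tokens/sec.
per_token_ms = elapsed / (iterations * len(tokens)) * 1000
print(f"{per_token_ms:.4f} ms/token -> ~{1000 / per_token_ms:,.0f} tokens/sec")
```
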
### Token Operations

| Operation | Time | Details |
|-----------|------|---------|
| Merge (100 tokens) | 1.3ms | Using LAST strategy |
| Merge (1000 tokens) | ~15ms | Linear complexity O(n) |
| Collection Create | 0.02ms | Minimal overhead |

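A minimal sketch of a last-wins merge over plain dicts (not the actual DSS collection classes) illustrates both the LAST strategy and why merging is linear in the number of tokens:

```python
def merge_last(token_lists: list[list[dict]]) -> list[dict]:
    """Merge token lists; on a name collision the token seen last wins."""
    merged: dict[str, dict] = {}
    for tokens in token_lists:      # single pass over every input token -> O(n)
        for token in tokens:
            merged[token["name"]] = token
    return list(merged.values())

css = [{"name": "color-primary", "value": "#336", "source": "css"}]
figma = [{"name": "color-primary", "value": "#3366ff", "source": "figma"}]
print(merge_last([css, figma]))  # the Figma value wins: it was merged last
```
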
### Database Operations

| Operation | Time | Details |
|-----------|------|---------|
| Single Token Write | 0.5ms | SQLite with journal |
| Batch Write (100) | 12ms | ~0.12ms per token |
| Token Query (by name) | 0.3ms | Indexed lookup |
| Activity Log Write | 0.4ms | Async logging |

### Figma API

| Operation | Time | Details |
|-----------|------|---------|
| Get File | 200-500ms | Network dependent |
| Get Variables | 300-800ms | Depends on file size |
| Get Styles | 150-400ms | Cached locally (5min TTL) |

### Analysis Operations

| Operation | Time | Details |
|-----------|------|---------|
| Scan Project (10 files) | 50ms | File I/O bound |
| Scan Project (100 files) | 450ms | ~4.5ms per file |
| Quick Win Detection | 120ms | For medium project |
| React Component Parse | 8ms | Per component file |

## Optimization Strategies

### 1. Caching

**Figma API Responses**
- Cache TTL: 5 minutes (configurable via FIGMA_CACHE_TTL)
- Cache location: `.dss/cache/`
- Reduces repeated API calls during development (see the sketch below)

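A minimal sketch of this cache, assuming responses are stored as JSON files under `.dss/cache/` and that FIGMA_CACHE_TTL holds the lifetime in seconds (helper names here are illustrative, not the actual DSS API):

```python
import json
import os
import time
from pathlib import Path

CACHE_DIR = Path(".dss/cache")
TTL_SECONDS = int(os.environ.get("FIGMA_CACHE_TTL", "300"))  # 5 minutes by default

def cached_figma_get(key: str, fetch):
    """Return a cached Figma response if it is still fresh, otherwise fetch and store it."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    cache_file = CACHE_DIR / f"{key}.json"
    if cache_file.exists() and time.time() - cache_file.stat().st_mtime < TTL_SECONDS:
        return json.loads(cache_file.read_text())
    data = fetch()  # e.g. a requests call against the Figma REST API
    cache_file.write_text(json.dumps(data))
    return data
```
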
**File System Scanning**
- Results cached in memory during session
- Skip node_modules, .git, dist, build directories (see the sketch below)
- ~10x faster on second scan

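Pruning excluded directories during the walk, rather than filtering paths afterwards, is what makes the skip cheap. A sketch independent of the real scanner:

```python
import os

SKIP_DIRS = {"node_modules", ".git", "dist", "build"}

def iter_source_files(root: str):
    """Yield file paths under root, pruning excluded directories in place."""
    for dirpath, dirnames, filenames in os.walk(root):
        # Mutating dirnames in place stops os.walk from descending into them.
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        for name in filenames:
            yield os.path.join(dirpath, name)
```
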
### 2. Batch Operations

**Token Storage**

```python
# Good: Batch insert
db.execute_many(INSERT_SQL, token_list)  # 12ms for 100

# Avoid: Individual inserts
for token in token_list:
    db.execute(INSERT_SQL, token)  # 50ms for 100
```

**Figma Variables**
- Extract all variables in a single API call
- Process in parallel where possible

### 3. Lazy Loading

**Module Imports**
- Heavy dependencies imported only when needed
- FastAPI routes import tools on-demand (see the sketch below)
- Reduces startup time from ~2s to ~0.5s

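A sketch of the on-demand import pattern, with illustrative route and module names: the heavy dependency is only imported when the route is first called, so server startup stays fast.

```python
from fastapi import FastAPI

app = FastAPI()

@app.get("/tokens/scan")
async def scan_tokens(path: str):
    # Imported here rather than at module level, so startup does not pay
    # the import cost; loaded only when this route is first hit.
    from tools.scanner import ProjectScanner  # illustrative module path

    scanner = ProjectScanner(path)
    return scanner.scan()  # illustrative method name
```
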
**File Reading**
- Stream large files instead of read_text() (see the sketch below)
- Use generators for multi-file operations

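For example, a generator that streams lines from many files keeps memory flat regardless of file size (a minimal sketch, not the DSS implementation):

```python
from pathlib import Path
from typing import Iterator

def iter_lines(paths: list[Path]) -> Iterator[str]:
    """Stream lines from many files without loading any file fully into memory."""
    for path in paths:
        with path.open("r", encoding="utf-8") as handle:
            for line in handle:  # file objects are lazy line iterators
                yield line.rstrip("\n")

# Usage: lazily count candidate custom-property declarations across stylesheets.
css_files = list(Path("src").rglob("*.css"))
declarations = sum(1 for line in iter_lines(css_files) if "--" in line)
```
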
### 4. Database Indexes

```sql
-- Indexed columns for fast lookup
CREATE INDEX idx_tokens_name ON tokens(name);
CREATE INDEX idx_tokens_source ON tokens(source);
CREATE INDEX idx_activity_timestamp ON activity_log(timestamp);
```

## Scalability Limits (Current MVP)

| Resource | Limit | Bottleneck |
|----------|-------|------------|
| Tokens in Memory | ~100K | Memory (800MB) |
| Concurrent API Requests | 5 | Figma rate limits |
| File Scan | ~10K files | File I/O |
| Database Size | ~100MB | SQLite journal |

## Performance Best Practices

### For Users

1. **Use Batch Operations**
   ```python
   # Good
   merger.merge([css_tokens, scss_tokens, figma_tokens])

   # Avoid
   result = css_tokens
   result = merger.merge([result, scss_tokens])
   result = merger.merge([result, figma_tokens])
   ```

2. **Cache Figma Results**
   ```bash
   # Set longer cache for production
   export FIGMA_CACHE_TTL=3600  # 1 hour
   ```

3. **Filter Files Early**
   ```python
   scanner = ProjectScanner(path, exclude=['**/tests/**'])
   ```

### For Contributors

1. **Profile Before Optimizing**
   ```bash
   python -m cProfile -o profile.stats your_script.py
   python -m pstats profile.stats
   ```

2. **Use Async for I/O**
   ```python
   # Good: Concurrent I/O
   results = await asyncio.gather(*[fetch(url) for url in urls])

   # Avoid: Sequential I/O
   results = [await fetch(url) for url in urls]
   ```

3. **Avoid Premature Optimization**
   - Measure first
   - Optimize hot paths only
   - Keep code readable

## Future Optimizations

### Phase 4 (Planned)

- **Parallel Processing**: Use multiprocessing for CPU-bound tasks
- **Streaming**: Stream large token sets instead of loading all
- **Incremental Updates**: Only re-process changed files
- **Database**: Migrate to PostgreSQL for larger scale

### Phase 5 (Planned)

- **CDN Integration**: Cache static assets
- **Worker Pools**: Dedicated workers for heavy operations
- **GraphQL**: Reduce over-fetching from API
- **Redis Cache**: Shared cache across instances

## Monitoring

### Built-in Metrics

```bash
# Check activity log
sqlite3 .dss/dss.db "SELECT * FROM activity_log ORDER BY timestamp DESC LIMIT 10"

# Token counts
curl http://localhost:3456/status
```

### Custom Metrics

```python
from tools.storage.database import ActivityLogger

logger = ActivityLogger()
with logger.log_activity("custom_operation", {"param": "value"}):
    # Your code here
    pass
```

## Troubleshooting Performance Issues

### Slow Figma Extraction

- Check network latency to Figma API
- Verify cache is enabled and working
- Consider using local JSON export for development

### Slow File Scanning

- Add more directories to exclude list
- Use .dssignore file (similar to .gitignore)
- Reduce scope: scan specific subdirectories

### High Memory Usage

- Process tokens in batches (see the sketch below)
- Clear collections after merge: `collection.tokens.clear()`
- Use generators for large datasets

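A small batching helper keeps peak memory bounded when the token set is too large to hold at once (a sketch; `db.execute_many` / `INSERT_SQL` as in the batch-write example above):

```python
from itertools import islice
from typing import Iterable, Iterator

def batched(items: Iterable[dict], size: int = 500) -> Iterator[list[dict]]:
    """Yield successive lists of at most `size` items from any iterable."""
    iterator = iter(items)
    while batch := list(islice(iterator, size)):
        yield batch

# Usage: write tokens in fixed-size chunks instead of materialising them all.
# for chunk in batched(token_stream, 500):
#     db.execute_many(INSERT_SQL, chunk)
```
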
### Slow Database Writes

- Use batch operations
- Enable WAL mode: `PRAGMA journal_mode=WAL` (see the sketch below)
- Consider increasing `PRAGMA cache_size`

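Both settings can be applied once per connection; a minimal sqlite3 sketch using the same `.dss/dss.db` path as the monitoring examples:

```python
import sqlite3

conn = sqlite3.connect(".dss/dss.db")
# WAL lets readers proceed while a writer is active and reduces fsync overhead.
conn.execute("PRAGMA journal_mode=WAL")
# Negative values are interpreted as kibibytes: roughly a 64 MB page cache.
conn.execute("PRAGMA cache_size=-64000")
```
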
## Performance SLA (Target)

For the v1.0 release:

- Token ingestion: < 100ms for 100 tokens
- Token merge: < 50ms for 1000 tokens
- Project scan: < 1s for 100 files
- API response: < 200ms (p95)
- Figma sync: < 5s for a typical file

Current v0.3.1 meets or exceeds all targets ✅