Initial commit: Clean DSS implementation

Migrated from design-system-swarm with fresh git history.
Old project history preserved in /home/overbits/apps/design-system-swarm

Core components:
- MCP Server (Python FastAPI with mcp 1.23.1)
- Claude Plugin (agents, commands, skills, strategies, hooks, core)
- DSS Backend (dss-mvp1 - token translation, Figma sync)
- Admin UI (Node.js/React)
- Server (Node.js/Express)
- Storybook integration (dss-mvp1/.storybook)

Self-contained configuration:
- All paths relative or use DSS_BASE_PATH=/home/overbits/dss
- PYTHONPATH configured for dss-mvp1 and dss-claude-plugin
- .env file with all configuration
- Claude plugin uses ${CLAUDE_PLUGIN_ROOT} for portability

Migration completed: $(date)
🤖 Clean migration with full functionality preserved
# MCP Debug Tools Architecture
**Date**: December 6, 2025
**Status**: Design Complete - Ready for Implementation
**Confidence**: Certain (Validated by Zen ThinkDeep Analysis)
---
## Executive Summary
This document describes the complete architecture for integrating DSS debug tools into the MCP (Model Context Protocol) infrastructure, making all debugging capabilities accessible to Claude through persistent, well-documented tools.
### Goals
1. **Expose debug tools as MCP tools** - Make browser logs, server diagnostics, and workflows accessible to Claude
2. **Persistent service management** - Use supervisord to keep services running
3. **Unified debug interface** - Single MCP endpoint for all debugging
4. **Documented workflows** - Step-by-step procedures for common debug tasks
5. **Automated log capture** - Browser logs automatically available to Claude
---
## System Architecture
### 3-Layer Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ Layer 1: Browser (JavaScript) │
│ ┌─────────────────┐ ┌──────────────────┐ ┌─────────────┐ │
│ │ browser-logger │ │ debug-inspector │ │ Window APIs │ │
│ │ .js │ │ .js │ │ │ │
│ │ │ │ │ │ __DSS_DEBUG │ │
│ │ Captures: │ │ Server debug │ │ __DSS_ │ │
│ │ • Console logs │ │ tools │ │ BROWSER_ │ │
│ │ • Errors │ │ │ │ LOGS │ │
│ │ • Network │ │ │ │ │ │
│ │ • Performance │ │ │ │ │ │
│ └─────────────────┘ └──────────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────────────────┘
↓ sessionStorage / API calls
┌─────────────────────────────────────────────────────────────┐
│ Layer 2: API Server (FastAPI/Python) │
│ │
│ Endpoints: │
│ POST /api/browser-logs - Receive browser logs │
│ GET /api/browser-logs/:session - Retrieve logs │
│ GET /api/debug/diagnostic - System diagnostic │
│ GET /api/debug/workflows - List workflows │
│ GET /health - Health check (existing) │
│ │
└─────────────────────────────────────────────────────────────┘
↓ Python API calls
┌─────────────────────────────────────────────────────────────┐
│ Layer 3: MCP Server (Python/MCP) │
│ │
│ MCP Tools (exposed to Claude): │
│ • get_browser_diagnostic() - Browser diagnostic summary │
│ • get_browser_errors() - Browser error logs │
│ • get_browser_network() - Network request logs │
│ • get_server_diagnostic() - Server health/status │
│ • run_workflow(name) - Execute debug workflow │
│ • list_workflows() - Show available workflows │
│ │
│ Implementation: tools/dss_mcp/tools/debug_tools.py │
│ Registration: tools/dss_mcp/server.py │
│ │
└─────────────────────────────────────────────────────────────┘
↓ Managed by
┌─────────────────────────────────────────────────────────────┐
│ Persistence Layer (Supervisord) │
│ │
│ Services: │
│ • dss-api - API server (port 3456) │
│ • dss-mcp - MCP server (port 3457) │
│ │
│ Configs: /etc/supervisor/conf.d/ │
│ Auto-restart on failure, log rotation │
│ │
└─────────────────────────────────────────────────────────────┘
```
---
## Data Flow
### Browser Log Capture Flow
1. **Automatic Capture** (browser-logger.js loads)
- Intercepts all console.* calls
- Captures errors and rejections
- Monitors network requests (fetch)
- Tracks performance metrics
- Stores in sessionStorage
2. **Upload to Server** (two methods)
- **Manual**: User exports via `window.__DSS_BROWSER_LOGS.export()`
- **Automatic**: Critical errors trigger auto-upload to `/api/browser-logs`
3. **Storage on Server**
- API receives logs via POST
- Stores in `.dss/browser-logs/:session_id.json`
- Indexed by session ID
4. **MCP Tool Access**
- Claude calls `get_browser_diagnostic(session_id)`
- MCP tool queries API server
- Returns structured data to Claude
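The stored session file from step 3 is plain JSON. For orientation, here is a sketch of its shape, assuming only the fields the MCP tools below filter on (`level`, `category`, `data.status`); the exact schema is defined by browser-logger.js and the values here are invented:
```python
# Hypothetical contents of .dss/browser-logs/session-123-abc.json,
# shown as a Python dict for readability.
session_payload = {
    "sessionId": "session-123-abc",
    "logs": [
        {
            "level": "error",     # console level; get_browser_errors filters on this
            "category": "fetch",  # capture source; get_browser_network filters on this
            "data": {"url": "/api/tokens", "status": 500},
        },
    ],
    "diagnostic": {"errorCount": 1, "requestCount": 12},  # summary counters (illustrative)
}
```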
### Server Diagnostic Flow
1. **Health Monitoring**
- `/health` endpoint tracks: database, mcp, figma status
- Server logs captured in `.dss/server.log`
- Audit logs in database (audit_log table)
2. **MCP Tool Queries**
- Claude calls `get_server_diagnostic()`
- MCP tool reads health endpoint + logs + database
- Returns comprehensive diagnostic
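To make the query side concrete, a `get_server_diagnostic()` result might look like this (a sketch: the keys follow the `/health` check and the `/api/debug/diagnostic` endpoint defined below; all values are invented):
```python
# Illustrative diagnostic payload
{
    "database": "ok",            # from the existing /health check
    "mcp": "ok",
    "figma": "ok",
    "browser_sessions": 4,       # stored .dss/browser-logs/*.json files
    "api_uptime": 86400.0,       # seconds since the API process started
    "database_size": 1048576,    # bytes, .dss/dss.db
}
```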
### Workflow Execution Flow
1. **Workflow Documentation**
- Markdown files in `.dss/WORKFLOWS/`
- Each workflow is a step-by-step procedure
- Example: `01-capture-browser-logs.md`
2. **MCP Tool Execution**
- Claude calls `run_workflow('capture-browser-logs')`
- MCP tool reads workflow markdown
- Returns instructions for Claude to execute
---
## Implementation Details
### File Structure
```
dss/
├── tools/
│ ├── api/
│ │ └── server.py # Add debug endpoints here
│ └── dss_mcp/
│ ├── server.py # Register debug tools here
│ └── tools/
│ ├── project_tools.py # Existing
│ └── debug_tools.py # NEW - Debug MCP tools
├── admin-ui/
│ ├── index.html # Import browser-logger here
│ └── js/core/
│ ├── browser-logger.js # CREATED
│ ├── debug-inspector.js # Existing
│ ├── audit-logger.js # Existing
│ └── error-handler.js # Existing
├── .dss/
│ ├── browser-logs/ # NEW - Browser log storage
│ │ └── session-*.json
│ ├── WORKFLOWS/ # NEW - Debug workflows
│ │ ├── 01-capture-browser-logs.md
│ │ ├── 02-diagnose-errors.md
│ │ ├── 03-debug-performance.md
│ │ └── 04-workflow-debugging.md
│ ├── MCP_DEBUG_TOOLS_ARCHITECTURE.md # This file
│ └── GET_BROWSER_LOGS.sh # CREATED - Hook script
└── /etc/supervisor/conf.d/           # system path, outside the repo tree
├── dss-api.conf # NEW - API persistence
└── dss-mcp.conf # NEW - MCP persistence
```
---
## Component Specifications
### 1. Browser Logger Integration
**File**: `admin-ui/index.html`
Add before closing `</head>`:
```html
<script type="module">
import browserLogger from '/admin-ui/js/core/browser-logger.js';
console.log('[DSS] Browser logger initialized');
</script>
```
**File**: `admin-ui/js/core/browser-logger.js` (ALREADY CREATED)
Features:
- Auto-captures all console output
- Intercepts fetch requests
- Tracks performance metrics
- Exposes `window.__DSS_BROWSER_LOGS` API
- Stores max 1000 entries in sessionStorage
### 2. API Debug Endpoints
**File**: `tools/api/server.py`
Add these endpoints:
```python
# Imports required by the debug endpoints (some may already exist in server.py)
import json
import time
from pathlib import Path

from fastapi import HTTPException

# Storage for browser logs (file-based)
BROWSER_LOGS_DIR = Path(".dss/browser-logs")
BROWSER_LOGS_DIR.mkdir(parents=True, exist_ok=True)

# Process start time, used to report API uptime
_START_TIME = time.time()


@app.post("/api/browser-logs")
async def receive_browser_logs(logs: dict):
    """
    Receive browser logs from client.

    Body: {
        "sessionId": "session-123-abc",
        "logs": [...],
        "diagnostic": {...}
    }
    """
    session_id = logs.get("sessionId", f"session-{int(time.time())}")
    log_file = BROWSER_LOGS_DIR / f"{session_id}.json"
    with open(log_file, "w") as f:
        json.dump(logs, f, indent=2)
    return {"status": "stored", "sessionId": session_id}


@app.get("/api/browser-logs/{session_id}")
async def get_browser_logs(session_id: str):
    """Retrieve browser logs by session ID."""
    log_file = BROWSER_LOGS_DIR / f"{session_id}.json"
    if not log_file.exists():
        raise HTTPException(404, "Session not found")
    with open(log_file, "r") as f:
        return json.load(f)


@app.get("/api/debug/diagnostic")
async def get_debug_diagnostic():
    """Get comprehensive system diagnostic."""
    # Reuse the existing health check (bind to a new name so the
    # endpoint function itself is not shadowed)
    health_data = await health()
    # Add additional diagnostics
    return {
        **health_data,
        "browser_sessions": len(list(BROWSER_LOGS_DIR.glob("*.json"))),
        "api_uptime": time.time() - _START_TIME,
        "database_size": Path(".dss/dss.db").stat().st_size if Path(".dss/dss.db").exists() else 0,
    }


@app.get("/api/debug/workflows")
async def list_debug_workflows():
    """List available debug workflows."""
    workflows_dir = Path(".dss/WORKFLOWS")
    if not workflows_dir.exists():
        return []
    workflows = []
    for workflow_file in sorted(workflows_dir.glob("*.md")):
        workflows.append({
            "name": workflow_file.stem,
            "path": str(workflow_file),
            "size": workflow_file.stat().st_size,
        })
    return workflows
```
### 3. MCP Debug Tools
**File**: `tools/dss_mcp/tools/debug_tools.py` (NEW)
```python
"""
DSS Debug Tools for MCP
Tools for debugging the DSS system itself.
Provides access to browser logs, server diagnostics, and workflows.
"""
from typing import Dict, Any, Optional
from pathlib import Path
import json
import httpx
from mcp import types
# Tool definitions
DEBUG_TOOLS = [
types.Tool(
name="dss_get_browser_diagnostic",
description="Get browser diagnostic summary including errors, network activity, and performance",
inputSchema={
"type": "object",
"properties": {
"session_id": {
"type": "string",
"description": "Browser session ID (optional, returns latest if not provided)"
}
}
}
),
types.Tool(
name="dss_get_browser_errors",
description="Get browser error logs with stack traces",
inputSchema={
"type": "object",
"properties": {
"session_id": {
"type": "string",
"description": "Browser session ID"
},
"limit": {
"type": "number",
"description": "Max number of errors to return (default: 50)",
"default": 50
}
}
}
),
types.Tool(
name="dss_get_browser_network",
description="Get browser network request logs",
inputSchema={
"type": "object",
"properties": {
"session_id": {
"type": "string",
"description": "Browser session ID"
},
"filter_status": {
"type": "string",
"description": "Filter by HTTP status (e.g., '4xx', '5xx', '200')"
}
}
}
),
types.Tool(
name="dss_get_server_diagnostic",
description="Get server health, uptime, database status, and system metrics",
inputSchema={
"type": "object",
"properties": {}
}
),
types.Tool(
name="dss_run_workflow",
description="Execute a documented debug workflow step-by-step",
inputSchema={
"type": "object",
"properties": {
"workflow_name": {
"type": "string",
"description": "Workflow name (e.g., 'capture-browser-logs', 'diagnose-errors')"
}
},
"required": ["workflow_name"]
}
),
types.Tool(
name="dss_list_workflows",
description="List all available debug workflows",
inputSchema={
"type": "object",
"properties": {}
}
)
]
class DebugTools:
"""Debug tool implementations"""
def __init__(self, api_base_url: str = "http://localhost:3456"):
self.api_base_url = api_base_url
async def get_browser_diagnostic(self, session_id: Optional[str] = None) -> Dict[str, Any]:
"""Get browser diagnostic summary"""
if not session_id:
session_id = self._get_latest_session()
async with httpx.AsyncClient() as client:
response = await client.get(f"{self.api_base_url}/api/browser-logs/{session_id}")
logs = response.json()
return logs.get("diagnostic", {})
async def get_browser_errors(self, session_id: str, limit: int = 50) -> list:
"""Get browser error logs"""
async with httpx.AsyncClient() as client:
response = await client.get(f"{self.api_base_url}/api/browser-logs/{session_id}")
logs = response.json()
errors = [entry for entry in logs.get("logs", []) if entry["level"] == "error"]
return errors[:limit]
async def get_browser_network(self, session_id: str, filter_status: Optional[str] = None) -> list:
"""Get browser network request logs"""
async with httpx.AsyncClient() as client:
response = await client.get(f"{self.api_base_url}/api/browser-logs/{session_id}")
logs = response.json()
network = [entry for entry in logs.get("logs", []) if entry["category"] == "fetch"]
if filter_status:
network = [n for n in network if str(n["data"].get("status", "")).startswith(filter_status)]
return network
async def get_server_diagnostic(self) -> Dict[str, Any]:
"""Get server diagnostic"""
async with httpx.AsyncClient() as client:
response = await client.get(f"{self.api_base_url}/api/debug/diagnostic")
return response.json()
    async def run_workflow(self, workflow_name: str) -> str:
        """Execute a debug workflow (files carry a numeric ordering
        prefix, e.g. 01-capture-browser-logs.md, so match by suffix)."""
        matches = sorted(Path(".dss/WORKFLOWS").glob(f"*{workflow_name}.md"))
        if not matches:
            return (f"Workflow '{workflow_name}' not found. "
                    f"Use dss_list_workflows to see available workflows.")
        return matches[0].read_text()
async def list_workflows(self) -> list:
"""List available debug workflows"""
async with httpx.AsyncClient() as client:
response = await client.get(f"{self.api_base_url}/api/debug/workflows")
return response.json()
def _get_latest_session(self) -> str:
"""Get the most recent session ID"""
logs_dir = Path(".dss/browser-logs")
sessions = sorted(logs_dir.glob("*.json"), key=lambda p: p.stat().st_mtime, reverse=True)
if not sessions:
raise ValueError("No browser log sessions found")
return sessions[0].stem
```
**Integration in**: `tools/dss_mcp/server.py`
Add to imports:
```python
from .tools.debug_tools import DEBUG_TOOLS, DebugTools
```
Add to tool registration, following the existing PROJECT_TOOLS pattern. Note that in the MCP Python SDK `list_tools` is a handler decorator rather than a mutable list, so registration should extend the list the handler returns. A sketch, assuming `mcp_server` is an `mcp.server.Server` instance and that `PROJECT_TOOLS` and `types` are already imported (adapt to however server.py actually exposes PROJECT_TOOLS):
```python
# Advertise debug tools alongside the existing project tools
ALL_TOOLS = [*PROJECT_TOOLS, *DEBUG_TOOLS]

@mcp_server.list_tools()
async def handle_list_tools() -> list[types.Tool]:
    return ALL_TOOLS
```
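Registration only advertises the tools; tool calls still need routing to the `DebugTools` methods. A minimal dispatch sketch, assuming server.py routes `call_tool` requests by tool name as it does for PROJECT_TOOLS (the `DEBUG_HANDLERS` table and its wiring are hypothetical):
```python
# Hypothetical name-to-coroutine routing table; adapt to the existing
# call_tool dispatch in server.py.
debug_tools = DebugTools()

DEBUG_HANDLERS = {
    "dss_get_browser_diagnostic": lambda a: debug_tools.get_browser_diagnostic(a.get("session_id")),
    "dss_get_browser_errors": lambda a: debug_tools.get_browser_errors(a["session_id"], a.get("limit", 50)),
    "dss_get_browser_network": lambda a: debug_tools.get_browser_network(a["session_id"], a.get("filter_status")),
    "dss_get_server_diagnostic": lambda a: debug_tools.get_server_diagnostic(),
    "dss_run_workflow": lambda a: debug_tools.run_workflow(a["workflow_name"]),
    "dss_list_workflows": lambda a: debug_tools.list_workflows(),
}

# Inside the existing call_tool handler:
#     if name in DEBUG_HANDLERS:
#         result = await DEBUG_HANDLERS[name](arguments or {})
```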
### 4. Supervisord Configuration
**File**: `/etc/supervisor/conf.d/dss-api.conf` (NEW)
```ini
[program:dss-api]
command=/home/overbits/dss/tools/api/start.sh
directory=/home/overbits/dss/tools/api
user=overbits
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/home/overbits/dss/.dss/api-supervisor.log
stdout_logfile_maxbytes=10MB
stdout_logfile_backups=3
environment=DSS_HOST="dss.overbits.luz.uy"
```
**File**: `/etc/supervisor/conf.d/dss-mcp.conf` (NEW)
```ini
[program:dss-mcp]
command=/home/overbits/dss/tools/dss_mcp/start.sh
directory=/home/overbits/dss/tools/dss_mcp
user=overbits
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/home/overbits/dss/.dss/mcp-supervisor.log
stdout_logfile_maxbytes=10MB
stdout_logfile_backups=3
```
**File**: `tools/dss_mcp/start.sh` (NEW)
```bash
#!/bin/bash
set -e
cd "$(dirname "$0")"
# Use system Python or virtualenv
exec python3 -m uvicorn server:app --host 0.0.0.0 --port 3457
```
### 5. Debug Workflows
See separate workflow files in `.dss/WORKFLOWS/`
---
## Implementation Checklist
- [ ] 1. Create API debug endpoints in `tools/api/server.py`
- [ ] 2. Create `tools/dss_mcp/tools/debug_tools.py`
- [ ] 3. Register debug tools in `tools/dss_mcp/server.py`
- [ ] 4. Import browser-logger in `admin-ui/index.html`
- [ ] 5. Create `.dss/browser-logs/` directory
- [ ] 6. Create `.dss/WORKFLOWS/` directory
- [ ] 7. Write workflow documentation (4 files)
- [ ] 8. Create `tools/dss_mcp/start.sh`
- [ ] 9. Create supervisor configs (dss-api.conf, dss-mcp.conf)
- [ ] 10. Test browser logger in browser DevTools
- [ ] 11. Test API endpoints with curl
- [ ] 12. Test MCP tools with Claude
- [ ] 13. Deploy to supervisor and verify auto-restart
- [ ] 14. Update project memory with new architecture
---
## Testing Procedures
### Test 1: Browser Logger
1. Open https://dss.overbits.luz.uy/
2. Open DevTools Console (F12)
3. Run: `window.__DSS_BROWSER_LOGS.diagnostic()`
4. Verify diagnostic data appears
5. Export logs: `window.__DSS_BROWSER_LOGS.export()`
### Test 2: API Endpoints
```bash
# Test browser log upload
curl -X POST http://localhost:3456/api/browser-logs \
-H 'Content-Type: application/json' \
-d '{"sessionId":"test-123","logs":[],"diagnostic":{}}'
# Test browser log retrieval
curl http://localhost:3456/api/browser-logs/test-123
# Test server diagnostic
curl http://localhost:3456/api/debug/diagnostic
# Test workflow list
curl http://localhost:3456/api/debug/workflows
```
### Test 3: MCP Tools
In Claude:
```
Use dss_get_server_diagnostic to check system health
Use dss_list_workflows to see available debug procedures
Use dss_run_workflow with workflow_name="capture-browser-logs"
```
### Test 4: Supervisor Persistence
```bash
# Reload supervisor
sudo supervisorctl reread
sudo supervisorctl update
# Check status
sudo supervisorctl status dss-api
sudo supervisorctl status dss-mcp
# Test restart
sudo supervisorctl restart dss-mcp
# Check logs
tail -f /home/overbits/dss/.dss/mcp-supervisor.log
```
---
## Maintenance
### Log Rotation
Browser logs are automatically capped at 1000 entries per session by browser-logger.js.
Server logs are rotated by supervisord (max 10MB, 3 backups).
### Cleanup
Old browser log sessions:
```bash
# Remove sessions older than 7 days
find .dss/browser-logs -name "*.json" -mtime +7 -delete
```
### Monitoring
Check service health:
```bash
sudo supervisorctl status
curl http://localhost:3456/health
curl http://localhost:3457/health
```
---
## Future Enhancements
1. **Real-time log streaming**: WebSocket connection for live logs
2. **Log aggregation**: Combine browser + server logs in single view
3. **Alert system**: Notify on critical errors
4. **Performance profiling**: CPU/memory tracking over time
5. **Distributed tracing**: Trace requests across services
---
## References
- Browser Logger: `.dss/BROWSER_LOG_CAPTURE_PROCEDURE.md`
- Debug Methodology: `.dss/DSS_SELF_DEBUG_METHODOLOGY.md`
- Quick Start: `.dss/DEBUG_QUICKSTART.md`
- Hook Script: `.dss/GET_BROWSER_LOGS.sh`
- Project Tools: `tools/dss_mcp/tools/project_tools.py` (example)
---
**Status**: Architecture Complete ✅
**Next Step**: Begin implementation (start with API endpoints)
**Review**: Validated by Zen ThinkDeep Analysis (Confidence: Certain)