
Squashed inappropriate git messages

Atridad Lahiji 2024-12-04 19:09:32 -06:00
commit 40f9e1d6be
Signed by: atridad
SSH key fingerprint: SHA256:LGomp8Opq0jz+7kbwNcdfTcuaLRb5Nh0k5AchDDb438
53 changed files with 72233 additions and 0 deletions

.dockerignore Normal file (+3)

@@ -0,0 +1,3 @@
# flyctl launch added from .gitignore
**/.env
fly.toml

.env.example Normal file (+3)

@@ -0,0 +1,3 @@
TURSO_URL=
TURSO_AUTH_TOKEN=
REDIS_URL=

.gitignore vendored Normal file (+2)

@@ -0,0 +1,2 @@
.env
.DS_Store

Dockerfile Normal file (+16)

@@ -0,0 +1,16 @@
FROM golang:1.23.1 AS build
WORKDIR /app
COPY . .
RUN go mod download
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o /go/bin/app
FROM gcr.io/distroless/base-debian12
COPY --from=build /go/bin/app /
EXPOSE 8080
CMD [ "/app" ]

README.md Normal file (+82)

@@ -0,0 +1,82 @@
# Distributed Service Performance Monitor
A real-time performance monitoring server for testing distributed systems, featuring live metrics visualization and robust data collection.
## Features
### Metrics Collection
- Real-time service performance monitoring
- Database operation timing
- Cache performance tracking
- Automatic data aggregation and processing
### Storage & Processing
- Distributed SQLite storage (Turso)
- Redis caching layer
- Asynchronous processing queue
- Retry mechanisms with exponential backoff
- Connection pooling and transaction management
### Dashboard
- Real-time metrics visualization
- Customizable time ranges (30m, 1h, 24h, 7d, custom)
- Performance statistics (avg, P50, P95, P99)
- Database and cache activity monitoring
- CSV export functionality
- Interactive time series charts
## Architecture
The system uses a multi-layered architecture:
1. Frontend: React-based dashboard with Chart.js
2. Storage: Turso Database (distributed SQLite) + Redis cache
3. Processing: Async queue with multiple workers
4. Collection: Distributed metrics collection with retry logic
## Technical Stack
- **Frontend**: React, Chart.js, Tailwind CSS
- **Database**: Turso (distributed SQLite)
- **Cache**: Redis
- **Language**: Go 1.23
- **Deployment**: Docker + Fly.io
## Setup
1. Deploy using fly.io:
```bash
fly launch
fly deploy
```
## Development
For local development:
1. Install dependencies:
```bash
go mod download
```
2. Start the service:
```bash
go run main.go
```
3. Access the dashboard at `http://localhost:8080`
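As a quick smoke test (a minimal sketch, assuming the service is listening on `localhost:8080`), you can seed a record and read it back through the API:
```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func main() {
	base := "http://localhost:8080"

	// POST seeds a fresh test record and invalidates the cache.
	resp, err := http.Post(base+"/api/request", "application/json", strings.NewReader(""))
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()

	// GET reads it back: Redis first, falling back to Turso on a cache miss.
	resp, err = http.Get(base + "/api/request")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```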
## Architecture Notes
- The system uses a queue-based architecture for processing metrics
- Implements automatic retries for failed operations
- Features connection pooling for database operations
- Supports distributed deployment through Fly.io
- Dashboard polls the metrics API (once per second) for near-real-time updates
## Performance Considerations
- Metrics are processed asynchronously to prevent blocking
- Connection pooling optimizes database access
- Redis caching reduces database load
- Configurable retry mechanisms ensure reliability
- Dashboard uses data bucketing for better visualization

commands.md Normal file (+2)

@@ -0,0 +1,2 @@
# Commands
./loadr -rate=20 -max=10000 -url=https://cmpt815perf.fly.dev/api/request -pattern=2p3g

fly.toml Normal file (+19)

@@ -0,0 +1,19 @@
# fly.toml app configuration file generated for cmpt815perf on 2024-12-01T13:51:44-06:00
#
# See https://fly.io/docs/reference/configuration/ for information about how to use this file.
#
app = 'cmpt815perf'
primary_region = 'ord'
[build]
[http_service]
internal_port = 8080
force_https = true
auto_start_machines = true
min_machines_running = 0
processes = ['app']
[[vm]]
size = 'shared-cpu-1x'

go.mod Normal file (+17)

@@ -0,0 +1,17 @@
module atri.dad/distributedperf
go 1.23
require (
github.com/joho/godotenv v1.5.1
github.com/redis/go-redis/v9 v9.7.0
github.com/tursodatabase/libsql-client-go v0.0.0-20240902231107-85af5b9d094d
)
require (
github.com/antlr4-go/antlr/v4 v4.13.1 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/coder/websocket v1.8.12 // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
golang.org/x/exp v0.0.0-20241108190413-2d47ceb2692f // indirect
)

go.sum Normal file (+20)

@@ -0,0 +1,20 @@
github.com/antlr4-go/antlr/v4 v4.13.1 h1:SqQKkuVZ+zWkMMNkjy5FZe5mr5WURWnlpmOuzYWrPrQ=
github.com/antlr4-go/antlr/v4 v4.13.1/go.mod h1:GKmUxMtwp6ZgGwZSva4eWPC5mS6vUAmOABFgjdkM7Nw=
github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c=
github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/coder/websocket v1.8.12 h1:5bUXkEPPIbewrnkU8LTCLVaxi4N4J8ahufH2vlo4NAo=
github.com/coder/websocket v1.8.12/go.mod h1:LNVeNrXQZfe5qhS9ALED3uA+l5pPqvwXg3CKoDBB2gs=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/joho/godotenv v1.5.1 h1:7eLL/+HRGLY0ldzfGMeQkb7vMd0as4CfYvUVzLqw0N0=
github.com/joho/godotenv v1.5.1/go.mod h1:f4LDr5Voq0i2e/R5DDNOoa2zzDfwtkZa6DnEwAbqwq4=
github.com/redis/go-redis/v9 v9.7.0 h1:HhLSs+B6O021gwzl+locl0zEDnyNkxMtf/Z3NNBMa9E=
github.com/redis/go-redis/v9 v9.7.0/go.mod h1:f6zhXITC7JUJIlPEiBOTXxJgPLdZcA93GewI7inzyWw=
github.com/tursodatabase/libsql-client-go v0.0.0-20240902231107-85af5b9d094d h1:dOMI4+zEbDI37KGb0TI44GUAwxHF9cMsIoDTJ7UmgfU=
github.com/tursodatabase/libsql-client-go v0.0.0-20240902231107-85af5b9d094d/go.mod h1:l8xTsYB90uaVdMHXMCxKKLSgw5wLYBwBKKefNIUnm9s=
golang.org/x/exp v0.0.0-20241108190413-2d47ceb2692f h1:XdNn9LlyWAhLVp6P/i8QYBW+hlyhrhei9uErw2B5GJo=
golang.org/x/exp v0.0.0-20241108190413-2d47ceb2692f/go.mod h1:D5SMRVC3C2/4+F/DB1wZsLRnSNimn2Sp/NPsCrsv8ak=

lib/redis.go Normal file (+91)

@@ -0,0 +1,91 @@
package lib
import (
"context"
"encoding/json"
"fmt"
"log"
"time"
"github.com/redis/go-redis/v9"
)
// RedisStorage implements a Redis-backed caching layer for test data.
// It provides fast access to frequently requested data while reducing database load.
type RedisStorage struct {
client *redis.Client
}
// NewRedisStorage creates and initializes a new Redis connection with the provided URL.
// It verifies the connection and configures default timeouts.
func NewRedisStorage(url string) (*RedisStorage, error) {
opt, err := redis.ParseURL(url)
if err != nil {
return nil, fmt.Errorf("failed to parse Redis URL: %v", err)
}
client := redis.NewClient(opt)
// Verify connection is working
ctx := context.Background()
if err := client.Ping(ctx).Err(); err != nil {
return nil, fmt.Errorf("failed to connect to Redis: %v", err)
}
log.Printf("Successfully connected to Redis")
return &RedisStorage{client: client}, nil
}
// GetTestData retrieves cached test data if available.
// Returns (nil, redis.Nil) if key doesn't exist.
func (s *RedisStorage) GetTestData(ctx context.Context) (*TestData, error) {
data, err := s.client.Get(ctx, "test_data").Bytes()
if err != nil {
if err == redis.Nil {
log.Printf("Redis: Cache miss - key not found")
} else {
log.Printf("Redis: Error retrieving data: %v", err)
}
return nil, err
}
var testData TestData
if err := json.Unmarshal(data, &testData); err != nil {
log.Printf("Redis: Error deserializing cached data: %v", err)
return nil, err
}
log.Printf("Redis: Cache hit - retrieved data: %+v", testData)
return &testData, nil
}
// SaveTestData caches the provided test data with a 1-hour TTL.
// Existing data for the same key will be overwritten.
func (s *RedisStorage) SaveTestData(ctx context.Context, data *TestData) error {
jsonData, err := json.Marshal(data)
if err != nil {
log.Printf("Redis: Error serializing data: %v", err)
return err
}
err = s.client.Set(ctx, "test_data", jsonData, 1*time.Hour).Err()
if err != nil {
log.Printf("Redis: Error writing to cache: %v", err)
return err
}
log.Printf("Redis: Successfully cached data: %+v", data)
return nil
}
// InvalidateTestData removes the test data from cache.
// This is typically called when the underlying data is updated.
func (s *RedisStorage) InvalidateTestData(ctx context.Context) error {
err := s.client.Del(ctx, "test_data").Err()
if err != nil {
log.Printf("Redis: Error invalidating cache: %v", err)
} else {
log.Printf("Redis: Successfully invalidated cached data")
}
return err
}
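
Below is a minimal usage sketch for this cache layer (a hypothetical standalone program, assuming `REDIS_URL` is set as in `.env.example` and the `lib` package is importable per `go.mod`):
```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	"atri.dad/distributedperf/lib"
)

func main() {
	cache, err := lib.NewRedisStorage(os.Getenv("REDIS_URL"))
	if err != nil {
		log.Fatalf("cache init: %v", err)
	}
	ctx := context.Background()

	// Cache a record with the 1-hour TTL described above, read it back,
	// then invalidate it the way a write path would.
	if err := cache.SaveTestData(ctx, &lib.TestData{Data: "example", Timestamp: time.Now()}); err != nil {
		log.Fatal(err)
	}
	if data, err := cache.GetTestData(ctx); err == nil {
		log.Printf("cache hit: %+v", data)
	}
	if err := cache.InvalidateTestData(ctx); err != nil {
		log.Fatal(err)
	}
}
```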

lib/turso.go Normal file (+338)

@@ -0,0 +1,338 @@
package lib
import (
"context"
"database/sql"
"fmt"
"log"
"time"
_ "github.com/tursodatabase/libsql-client-go/libsql"
)
// TursoStorage implements a Turso (distributed SQLite) backed storage layer
// for persisting test data and performance metrics with automatic connection management.
type TursoStorage struct {
db *sql.DB // Database connection pool
}
// NewTursoStorage initializes a new database connection pool with the provided credentials.
// It configures connection pooling and timeout settings for optimal performance.
func NewTursoStorage(url, token string) (*TursoStorage, error) {
db, err := sql.Open("libsql", url+"?authToken="+token)
if err != nil {
return nil, fmt.Errorf("database connection failed: %v", err)
}
// Configure connection pool settings
db.SetMaxOpenConns(200) // Maximum concurrent connections
db.SetConnMaxLifetime(5 * time.Minute) // Connection time-to-live
db.SetMaxIdleConns(25) // Connections maintained when idle
return &TursoStorage{db: db}, nil
}
// Close safely shuts down the database connection pool.
// Should be called during application shutdown to prevent connection leaks.
func (s *TursoStorage) Close() error {
if err := s.db.Close(); err != nil {
return fmt.Errorf("error closing database connections: %v", err)
}
log.Printf("Database connections closed successfully")
return nil
}
// InitTables ensures all required database tables and indices exist.
// This should be called once during application startup.
func (s *TursoStorage) InitTables() error {
log.Printf("Initializing database schema...")
// Verify existing schema
var count int
err := s.db.QueryRow("SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='metrics'").Scan(&count)
if err != nil {
log.Printf("Error checking existing schema: %v", err)
} else {
log.Printf("Found %d existing metrics tables", count)
}
// Create required tables and indices
_, err = s.db.Exec(`
CREATE TABLE IF NOT EXISTS metrics (
timestamp INTEGER, -- Unix timestamp in milliseconds
service_time REAL, -- Total request processing time (ms)
db_time REAL, -- Database operation time (ms)
cache_time REAL, -- Cache operation time (ms)
db_rows_read INTEGER DEFAULT 0,
db_rows_written INTEGER DEFAULT 0,
db_total_rows INTEGER DEFAULT 0,
cache_hits INTEGER DEFAULT 0,
cache_misses INTEGER DEFAULT 0
);
CREATE TABLE IF NOT EXISTS test_data (
id INTEGER PRIMARY KEY AUTOINCREMENT,
data TEXT NOT NULL,
timestamp DATETIME NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_test_timestamp ON test_data(timestamp);
`)
if err != nil {
return fmt.Errorf("schema initialization failed: %v", err)
}
// Verify tables were created
tables := []string{"metrics", "test_data"}
for _, table := range tables {
var count int
err := s.db.QueryRow("SELECT COUNT(*) FROM " + table).Scan(&count)
if err != nil {
log.Printf("Error verifying table %s: %v", table, err)
} else {
log.Printf("Table %s exists with %d rows", table, count)
}
}
return nil
}
// GetTotalRows returns the total number of rows in the test_data table.
// Used for monitoring database growth over time.
func (s *TursoStorage) GetTotalRows(ctx context.Context) (int64, error) {
var count int64
err := s.db.QueryRowContext(ctx, "SELECT COUNT(*) FROM test_data").Scan(&count)
return count, err
}
// SaveTestData stores a new test data record in the database.
// It first clears existing data to maintain a single active test record.
func (s *TursoStorage) SaveTestData(ctx context.Context, data *TestData) error {
// Clear existing data
_, err := s.db.ExecContext(ctx, "DELETE FROM test_data")
if err != nil {
return fmt.Errorf("failed to clear existing data: %v", err)
}
// Insert new record
result, err := s.db.ExecContext(ctx, `
INSERT INTO test_data (data, timestamp)
VALUES (?, ?)
`, data.Data, data.Timestamp)
if err != nil {
return fmt.Errorf("failed to insert test data: %v", err)
}
// Update ID of inserted record
id, err := result.LastInsertId()
if err != nil {
return fmt.Errorf("failed to get inserted ID: %v", err)
}
data.ID = id
return nil
}
// GetTestData retrieves the most recent test data record.
func (s *TursoStorage) GetTestData(ctx context.Context) (*TestData, error) {
var data TestData
err := s.db.QueryRowContext(ctx, `
SELECT id, data, timestamp
FROM test_data
ORDER BY timestamp DESC
LIMIT 1
`).Scan(&data.ID, &data.Data, &data.Timestamp)
if err != nil {
return nil, fmt.Errorf("failed to retrieve test data: %v", err)
}
return &data, nil
}
// SaveMetrics stores a new performance metrics data point.
// This data is used for monitoring and visualization.
func (s *TursoStorage) SaveMetrics(ctx context.Context, point DataPoint) error {
log.Printf("Storing metrics - Service: %.2fms, DB: %.2fms, Cache: %.2fms",
point.ServiceTime, point.DBTime, point.CacheTime)
_, err := s.db.ExecContext(ctx, `
INSERT INTO metrics (
timestamp,
service_time,
db_time,
cache_time,
db_rows_read,
db_rows_written,
db_total_rows,
cache_hits,
cache_misses
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)`,
point.Timestamp,
point.ServiceTime,
point.DBTime,
point.CacheTime,
point.DBRowsRead,
point.DBRowsWritten,
point.DBTotalRows,
point.CacheHits,
point.CacheMisses,
)
if err != nil {
return fmt.Errorf("failed to store metrics: %v", err)
}
log.Printf("Metrics stored successfully")
return nil
}
// ClearDB removes all data from both metrics and test_data tables.
// This operation is atomic - either all data is cleared or none is.
func (s *TursoStorage) ClearDB(ctx context.Context) error {
// Use transaction for atomicity
tx, err := s.db.BeginTx(ctx, nil)
if err != nil {
return fmt.Errorf("failed to start transaction: %v", err)
}
defer tx.Rollback() // Ensure rollback on error
// Clear metrics table
if _, err := tx.ExecContext(ctx, "DELETE FROM metrics"); err != nil {
return fmt.Errorf("failed to clear metrics: %v", err)
}
// Clear test_data table
if _, err := tx.ExecContext(ctx, "DELETE FROM test_data"); err != nil {
return fmt.Errorf("failed to clear test data: %v", err)
}
// Commit transaction
if err := tx.Commit(); err != nil {
return fmt.Errorf("failed to commit clear operation: %v", err)
}
log.Printf("Database cleared successfully")
return nil
}
// GetMetrics retrieves performance metrics within the specified time range.
// Returns metrics sorted by timestamp in descending order, limited to 10000 points.
func (s *TursoStorage) GetMetrics(ctx context.Context, start, end time.Time) ([]DataPoint, error) {
log.Printf("Retrieving metrics from %v to %v", start, end)
// Convert timestamps to Unix milliseconds for storage
startMs := start.UnixMilli()
endMs := end.UnixMilli()
// Prepare query with time range filter
query := `
SELECT
timestamp, service_time, db_time, cache_time,
db_rows_read, db_rows_written, db_total_rows,
cache_hits, cache_misses
FROM metrics
WHERE timestamp BETWEEN ? AND ?
ORDER BY timestamp DESC
LIMIT 10000 -- Protect against excessive memory usage
`
log.Printf("Executing query with range: %d to %d", startMs, endMs)
rows, err := s.db.QueryContext(ctx, query, startMs, endMs)
if err != nil {
log.Printf("Query failed: %v", err)
return nil, err
}
defer rows.Close()
// Collect all matching metrics
points := make([]DataPoint, 0)
for rows.Next() {
var p DataPoint
if err := rows.Scan(
&p.Timestamp, &p.ServiceTime, &p.DBTime, &p.CacheTime,
&p.DBRowsRead, &p.DBRowsWritten, &p.DBTotalRows,
&p.CacheHits, &p.CacheMisses,
); err != nil {
log.Printf("Row scan failed: %v", err)
return nil, err
}
points = append(points, p)
}
// Log summary of retrieved data
if len(points) > 0 {
log.Printf("First point: %v (%v)",
points[0].Timestamp,
time.UnixMilli(points[0].Timestamp))
log.Printf("Last point: %v (%v)",
points[len(points)-1].Timestamp,
time.UnixMilli(points[len(points)-1].Timestamp))
}
log.Printf("Retrieved %d metric points", len(points))
return points, rows.Err()
}
// DebugMetrics performs diagnostic checks on the metrics table.
// Used during startup and for troubleshooting system state.
func (s *TursoStorage) DebugMetrics(ctx context.Context) error {
// Check total metrics count
var count int
err := s.db.QueryRowContext(ctx, "SELECT COUNT(*) FROM metrics").Scan(&count)
if err != nil {
return fmt.Errorf("failed to count metrics: %v", err)
}
log.Printf("Debug: Total metrics in database: %d", count)
if count == 0 {
log.Printf("Debug: Metrics table is empty")
return nil
}
// Check timestamp range of stored metrics
var minTs, maxTs int64
err = s.db.QueryRowContext(ctx, "SELECT MIN(timestamp), MAX(timestamp) FROM metrics").Scan(&minTs, &maxTs)
if err != nil {
return fmt.Errorf("failed to get timestamp range: %v", err)
}
log.Printf("Debug: Metrics timestamp range: %v to %v",
time.UnixMilli(minTs),
time.UnixMilli(maxTs))
// Sample recent metrics for verification
rows, err := s.db.QueryContext(ctx, `
SELECT timestamp, service_time, db_time, cache_time,
db_rows_read, db_rows_written, db_total_rows,
cache_hits, cache_misses
FROM metrics
ORDER BY timestamp DESC
LIMIT 5
`)
if err != nil {
return fmt.Errorf("failed to query recent metrics: %v", err)
}
defer rows.Close()
log.Printf("Debug: Most recent metrics:")
for rows.Next() {
var p DataPoint
if err := rows.Scan(
&p.Timestamp, &p.ServiceTime, &p.DBTime, &p.CacheTime,
&p.DBRowsRead, &p.DBRowsWritten, &p.DBTotalRows,
&p.CacheHits, &p.CacheMisses,
); err != nil {
return fmt.Errorf("failed to scan row: %v", err)
}
log.Printf("Time: %v, Service: %.2fms, DB: %.2fms, Cache: %.2fms, "+
"Reads: %d, Writes: %d, Total: %d, Hits: %d, Misses: %d",
time.UnixMilli(p.Timestamp),
p.ServiceTime,
p.DBTime,
p.CacheTime,
p.DBRowsRead,
p.DBRowsWritten,
p.DBTotalRows,
p.CacheHits,
p.CacheMisses)
}
return rows.Err()
}
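
A minimal sketch of driving this storage layer on its own (hypothetical, assuming `TURSO_URL` and `TURSO_AUTH_TOKEN` are set as in `.env.example`):
```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	"atri.dad/distributedperf/lib"
)

func main() {
	store, err := lib.NewTursoStorage(os.Getenv("TURSO_URL"), os.Getenv("TURSO_AUTH_TOKEN"))
	if err != nil {
		log.Fatalf("db init: %v", err)
	}
	defer store.Close()

	if err := store.InitTables(); err != nil {
		log.Fatalf("schema: %v", err)
	}

	ctx := context.Background()

	// Record one metrics point, then query the last 30 minutes back out.
	point := lib.DataPoint{Timestamp: time.Now().UnixMilli(), ServiceTime: 12.5}
	if err := store.SaveMetrics(ctx, point); err != nil {
		log.Fatal(err)
	}
	points, err := store.GetMetrics(ctx, time.Now().Add(-30*time.Minute), time.Now())
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("retrieved %d points", len(points))
}
```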

lib/types.go Normal file (+25)

@@ -0,0 +1,25 @@
package lib
import "time"
// DataPoint represents a single metrics measurement containing performance statistics
// and counters for database and cache operations.
type DataPoint struct {
SessionID string `json:"session_id"`
Timestamp int64 `json:"timestamp"`
ServiceTime float64 `json:"service_time"`
DBTime float64 `json:"db_time"`
CacheTime float64 `json:"cache_time"`
DBRowsRead int64 `json:"db_rows_read"`
DBRowsWritten int64 `json:"db_rows_written"`
DBTotalRows int64 `json:"db_total_rows"`
CacheHits int64 `json:"cache_hits"`
CacheMisses int64 `json:"cache_misses"`
}
// TestData represents a test record used for performance measurements.
type TestData struct {
ID int64 `json:"id"`
Data string `json:"data"`
Timestamp time.Time `json:"timestamp"`
}
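
For reference, serializing a `DataPoint` with these tags yields the snake_case JSON the dashboard consumes; a small hypothetical snippet:
```go
package main

import (
	"encoding/json"
	"fmt"
	"time"

	"atri.dad/distributedperf/lib"
)

func main() {
	p := lib.DataPoint{
		Timestamp:   time.Now().UnixMilli(),
		ServiceTime: 12.5,
		CacheHits:   3,
	}
	b, _ := json.Marshal(p)
	// Prints keys like "timestamp", "service_time", "cache_hits", matching the struct tags.
	fmt.Println(string(b))
}
```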

main.go Normal file (+351)

@@ -0,0 +1,351 @@
package main
import (
"context"
"embed"
"encoding/json"
"fmt"
"html/template"
"io"
"log"
"net/http"
"os"
"strconv"
"sync"
"time"
"atri.dad/distributedperf/lib"
"github.com/joho/godotenv"
)
//go:embed static
var static embed.FS // Embedded filesystem for static web assets
// Global storage interfaces for database and cache access
var (
db *lib.TursoStorage
cache *lib.RedisStorage
)
// Thread-safe performance counters for monitoring system behavior
var (
cumulativeCacheHits int64
cumulativeCacheMisses int64
cumulativeRowsRead int64
cumulativeRowsWritten int64
counterMutex sync.RWMutex
)
// resetCounters safely zeroes all performance counters and logs the before/after values.
// This is typically called when clearing all data or starting a new test run.
func resetCounters() {
counterMutex.Lock()
defer counterMutex.Unlock()
// Log current values before reset for historical reference
log.Printf("Resetting counters - Current values: hits=%d, misses=%d, reads=%d, writes=%d",
cumulativeCacheHits, cumulativeCacheMisses, cumulativeRowsRead, cumulativeRowsWritten)
// Zero all counters atomically
cumulativeCacheHits = 0
cumulativeCacheMisses = 0
cumulativeRowsRead = 0
cumulativeRowsWritten = 0
// Confirm reset was successful
log.Printf("Counters after reset: hits=%d, misses=%d, reads=%d, writes=%d",
cumulativeCacheHits, cumulativeCacheMisses, cumulativeRowsRead, cumulativeRowsWritten)
}
// handleRequest processes GET and POST requests for test data using a cache-aside pattern.
// GET requests attempt cache retrieval before falling back to database.
// POST requests invalidate cache and write directly to database.
func handleRequest(w http.ResponseWriter, r *http.Request) {
requestStart := time.Now()
var data *lib.TestData
var err error
var dbTime, cacheTime time.Duration
log.Printf("Starting %s request", r.Method)
switch r.Method {
case http.MethodGet:
// Try cache first for better performance
cacheStart := time.Now()
data, err = cache.GetTestData(r.Context())
cacheTime = time.Since(cacheStart)
cacheHit := (err == nil && data != nil)
if cacheHit {
// Update cache hit statistics
counterMutex.Lock()
log.Printf("Before cache hit increment: hits=%d", cumulativeCacheHits)
cumulativeCacheHits++
log.Printf("After cache hit increment: hits=%d", cumulativeCacheHits)
counterMutex.Unlock()
log.Printf("Cache HIT - total hits now: %d", cumulativeCacheHits)
} else {
// Handle cache miss - fallback to database
counterMutex.Lock()
log.Printf("Before cache miss increment: misses=%d", cumulativeCacheMisses)
cumulativeCacheMisses++
log.Printf("After cache miss increment: misses=%d", cumulativeCacheMisses)
counterMutex.Unlock()
log.Printf("Cache MISS - total misses now: %d", cumulativeCacheMisses)
// Retrieve from database
dbStart := time.Now()
data, err = db.GetTestData(r.Context())
dbTime = time.Since(dbStart)
if err == nil && data != nil {
counterMutex.Lock()
cumulativeRowsRead++
counterMutex.Unlock()
log.Printf("DB read successful - total rows read: %d", cumulativeRowsRead)
// Update cache with fresh data
if err := cache.SaveTestData(r.Context(), data); err != nil {
log.Printf("Failed to update cache: %v", err)
} else {
log.Printf("Cache updated with fresh data")
}
}
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
}
case http.MethodPost:
// Invalidate cache before write to maintain consistency
if err := cache.InvalidateTestData(r.Context()); err != nil {
log.Printf("Warning: Cache invalidation failed: %v", err)
} else {
log.Printf("Cache invalidated for POST operation")
}
// Create new test data record
data = &lib.TestData{
Data: fmt.Sprintf("test-%d", time.Now().Unix()),
Timestamp: time.Now(),
}
// Write to database
dbStart := time.Now()
err = db.SaveTestData(r.Context(), data)
dbTime = time.Since(dbStart)
if err == nil {
counterMutex.Lock()
cumulativeRowsWritten++
counterMutex.Unlock()
log.Printf("DB write successful - total rows written: %d", cumulativeRowsWritten)
}
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
}
// Send response to client
json.NewEncoder(w).Encode(data)
// Calculate core operation time before metrics processing
serviceTime := time.Since(requestStart)
// Process metrics asynchronously to minimize request latency
go func(svcTime time.Duration, dbT time.Duration, cacheT time.Duration) {
// Set timeout for metrics processing
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
// Get total row count for monitoring
totalRows, err := db.GetTotalRows(ctx)
if err != nil {
log.Printf("Failed to get total row count: %v", err)
totalRows = 0
}
// Capture current counter values atomically
counterMutex.RLock()
metrics := lib.DataPoint{
Timestamp: time.Now().UnixMilli(),
ServiceTime: float64(svcTime.Milliseconds()),
DBTime: float64(dbT.Milliseconds()),
CacheTime: float64(cacheT.Milliseconds()),
DBRowsRead: cumulativeRowsRead,
DBRowsWritten: cumulativeRowsWritten,
DBTotalRows: totalRows,
CacheHits: cumulativeCacheHits,
CacheMisses: cumulativeCacheMisses,
}
counterMutex.RUnlock()
// Store metrics
if err := db.SaveMetrics(ctx, metrics); err != nil {
log.Printf("Failed to save performance metrics: %v", err)
}
}(serviceTime, dbTime, cacheTime)
}
// getMetrics retrieves performance metrics within the specified time range.
// Supports both absolute and relative time ranges.
func getMetrics(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
// Parse time range parameters
var start, end time.Time
end = time.Now()
// Parse start time if provided
if startStr := r.URL.Query().Get("start"); startStr != "" {
if ts, err := strconv.ParseInt(startStr, 10, 64); err == nil {
start = time.UnixMilli(ts)
} else {
http.Error(w, "Invalid start time format", http.StatusBadRequest)
return
}
}
// Parse end time if provided
if endStr := r.URL.Query().Get("end"); endStr != "" {
if ts, err := strconv.ParseInt(endStr, 10, 64); err == nil {
end = time.UnixMilli(ts)
} else {
http.Error(w, "Invalid end time format", http.StatusBadRequest)
return
}
}
// Default to last 30 minutes if no start time specified
if start.IsZero() {
start = end.Add(-30 * time.Minute)
}
log.Printf("Retrieving metrics from %v to %v", start, end)
points, err := db.GetMetrics(ctx, start, end)
if err != nil {
log.Printf("Metrics retrieval failed: %v", err)
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
log.Printf("Retrieved %d metric points", len(points))
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(points)
}
// clearDB removes all test data and metrics, resetting the system to initial state.
func clearDB(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPost {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
log.Printf("Initiating system-wide data clear...")
// Reset performance counters
resetCounters()
log.Printf("Performance counters reset")
// Clear cache
if err := cache.InvalidateTestData(r.Context()); err != nil {
log.Printf("Warning: Cache clear failed: %v", err)
} else {
log.Printf("Cache cleared successfully")
}
// Clear database
if err := db.ClearDB(r.Context()); err != nil {
log.Printf("Database clear failed: %v", err)
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
log.Printf("Database cleared successfully")
w.Write([]byte("OK"))
}
func main() {
var err error
// Load environment configuration
err = godotenv.Load()
if err != nil {
log.Printf("Warning: .env file not found: %v", err)
}
// Validate required environment variables
dbURL := os.Getenv("TURSO_URL")
dbToken := os.Getenv("TURSO_AUTH_TOKEN")
redisURL := os.Getenv("REDIS_URL")
if dbURL == "" || dbToken == "" || redisURL == "" {
log.Fatal("Missing required environment variables")
}
// Initialize storage systems
db, err = lib.NewTursoStorage(dbURL, dbToken)
if err != nil {
log.Fatalf("Database initialization failed: %v", err)
}
cache, err = lib.NewRedisStorage(redisURL)
if err != nil {
log.Fatalf("Cache initialization failed: %v", err)
}
// Initialize database schema
if err := db.InitTables(); err != nil {
log.Fatalf("Schema initialization failed: %v", err)
}
// Verify metrics system
if err := db.DebugMetrics(context.Background()); err != nil {
log.Printf("Warning: Metrics verification failed: %v", err)
}
// Configure API routes
http.HandleFunc("/api/request", handleRequest)
http.HandleFunc("/api/metrics", getMetrics)
http.HandleFunc("/api/clear", clearDB)
// Configure static file serving
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path == "/" {
// Serve index page
tmpl := template.Must(template.ParseFS(static, "static/index.html"))
tmpl.Execute(w, nil)
return
}
// Serve static files from embedded filesystem
fsys := http.FS(static)
path := "static" + r.URL.Path
file, err := fsys.Open(path)
if err != nil {
http.Error(w, "File not found", http.StatusNotFound)
return
}
defer file.Close()
stat, err := file.Stat()
if err != nil {
http.Error(w, "File error", http.StatusInternalServerError)
return
}
http.ServeContent(w, r, stat.Name(), stat.ModTime(), file.(io.ReadSeeker))
})
// Start HTTP server
port := os.Getenv("PORT")
if port == "" {
port = "8080"
}
log.Printf("Server starting on port %s", port)
log.Fatal(http.ListenAndServe(":"+port, nil))
}
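
To query a metrics window the way the dashboard does, a client-side sketch (assuming a local server on port 8080):
```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"time"

	"atri.dad/distributedperf/lib"
)

func main() {
	// /api/metrics takes Unix-millisecond start/end; the server defaults to
	// the last 30 minutes when start is omitted.
	end := time.Now().UnixMilli()
	start := end - 30*60*1000
	url := fmt.Sprintf("http://localhost:8080/api/metrics?start=%d&end=%d", start, end)

	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var points []lib.DataPoint
	if err := json.NewDecoder(resp.Body).Decode(&points); err != nil {
		log.Fatal(err)
	}
	log.Printf("got %d points", len(points))
}
```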

Binary file added, not shown (792 KiB).

Binary file added, not shown (650 KiB).

@@ -0,0 +1,16 @@
Load Test Report
=============
Endpoint: https://cmpt815perf.fly.dev/api/request
Pattern: 2p → 3g
Performance Metrics
-----------------
Total Requests Sent: 10004
Total Responses Received: 6047
Average Latency: 234.348839ms
Max Latency: 2.347729229s
Min Latency: 2.588263ms
Requests/sec (Target): 20.00
Requests/sec (Actual): 19.97
Responses/sec: 12.07

File diff suppressed because it is too large.

Binary file added, not shown (1.1 MiB).

Binary file added, not shown (976 KiB).

@@ -0,0 +1,16 @@
Load Test Report
=============
Endpoint: https://cmpt815perf.fly.dev/api/request
Pattern: 2p → 3g
Performance Metrics
-----------------
Total Requests Sent: 10010
Total Responses Received: 7366
Average Latency: 553.241317ms
Max Latency: 2.173345621s
Min Latency: 2.776339ms
Requests/sec (Target): 20.00
Requests/sec (Actual): 19.95
Responses/sec: 14.68

File diff suppressed because it is too large.

Binary file added, not shown (818 KiB).

Binary file added, not shown (676 KiB).

@@ -0,0 +1,16 @@
Load Test Report
=============
Endpoint: https://cmpt815perf.fly.dev/api/request
Pattern: 2p → 3g
Performance Metrics
-----------------
Total Requests Sent: 10003
Total Responses Received: 5879
Average Latency: 198.5433ms
Max Latency: 922.48238ms
Min Latency: 2.981613ms
Requests/sec (Target): 20.00
Requests/sec (Actual): 19.98
Responses/sec: 11.74

File diff suppressed because it is too large.

Binary file added, not shown (1.1 MiB).

Binary file added, not shown (1,002 KiB).

@@ -0,0 +1,16 @@
Load Test Report
=============
Endpoint: https://cmpt815perf.fly.dev/api/request
Pattern: 2p → 3g
Performance Metrics
-----------------
Total Requests Sent: 10009
Total Responses Received: 6580
Average Latency: 450.610561ms
Max Latency: 1.657144732s
Min Latency: 2.64786ms
Requests/sec (Target): 20.00
Requests/sec (Actual): 19.97
Responses/sec: 13.13

File diff suppressed because it is too large.

Binary file added, not shown (735 KiB).

Binary file added, not shown (591 KiB).

@@ -0,0 +1,16 @@
Load Test Report
=============
Endpoint: https://cmpt815perf.fly.dev/api/request
Pattern: 2p → 3g
Performance Metrics
-----------------
Total Requests Sent: 10003
Total Responses Received: 8029
Average Latency: 257.887412ms
Max Latency: 427.077416ms
Min Latency: 103.071016ms
Requests/sec (Target): 20.00
Requests/sec (Actual): 19.98
Responses/sec: 16.04

File diff suppressed because it is too large.

Binary file added, not shown (1.2 MiB).

Binary file added, not shown (1 MiB).

@@ -0,0 +1,16 @@
Load Test Report
=============
Endpoint: https://cmpt815perf.fly.dev/api/request
Pattern: 2p → 3g
Performance Metrics
-----------------
Total Requests Sent: 10010
Total Responses Received: 6841
Average Latency: 566.702525ms
Max Latency: 1.161745611s
Min Latency: 222.852737ms
Requests/sec (Target): 20.00
Requests/sec (Actual): 19.97
Responses/sec: 13.65

File diff suppressed because it is too large.

Binary file added, not shown (993 KiB).

Binary file added, not shown (850 KiB).

@@ -0,0 +1,16 @@
Load Test Report
=============
Endpoint: https://cmpt815perf.fly.dev/api/request
Pattern: 2p → 3g
Performance Metrics
-----------------
Total Requests Sent: 10000
Total Responses Received: 9959
Average Latency: 18.28291ms
Max Latency: 306.765966ms
Min Latency: 2.619982ms
Requests/sec (Target): 20.00
Requests/sec (Actual): 19.99
Responses/sec: 19.91

File diff suppressed because it is too large.

Binary file added, not shown (983 KiB).

Binary file added, not shown (839 KiB).

@@ -0,0 +1,16 @@
Load Test Report
=============
Endpoint: https://cmpt815perf.fly.dev/api/request
Pattern: 2p → 3g
Performance Metrics
-----------------
Total Requests Sent: 10002
Total Responses Received: 9975
Average Latency: 118.671554ms
Max Latency: 386.153542ms
Min Latency: 103.560937ms
Requests/sec (Target): 20.00
Requests/sec (Actual): 19.99
Responses/sec: 19.93

File diff suppressed because it is too large.

Binary file added, not shown (798 KiB).

Binary file added, not shown (939 KiB).

@@ -0,0 +1,16 @@
Load Test Report
=============
Endpoint: https://cmpt815perf.fly.dev/api/request
Pattern: 2p → 3g
Performance Metrics
-----------------
Total Requests Sent: 10004
Total Responses Received: 9934
Average Latency: 256.204585ms
Max Latency: 603.606934ms
Min Latency: 236.504125ms
Requests/sec (Target): 20.00
Requests/sec (Actual): 19.98
Responses/sec: 19.84

File diff suppressed because it is too large.

static/app.js Normal file (+474)

@@ -0,0 +1,474 @@
const { useState, useEffect, useRef } = React;
const MetricCard = ({ title, stats }) => (
<div className="bg-white shadow rounded-lg p-4">
<div className="pb-2">
<h2 className="text-lg font-semibold">{title}</h2>
</div>
<div className="space-y-2">
<div className="grid grid-cols-2 gap-2">
<div>Avg: {stats.avg} ms</div>
<div>P50: {stats.p50} ms</div>
<div>P95: {stats.p95} ms</div>
<div>P99: {stats.p99} ms</div>
</div>
</div>
</div>
);
const DBActivityCard = ({ stats }) => (
<div className="bg-white shadow rounded-lg p-4">
<div className="pb-2">
<h2 className="text-lg font-semibold">Database Activity</h2>
</div>
<div className="space-y-2">
<div className="grid grid-cols-2 gap-2">
<div>Rows Read: {stats.rowsRead}</div>
<div>Rows Written: {stats.rowsWritten}</div>
<div>Total Rows: {stats.totalRows}</div>
</div>
</div>
</div>
);
const CacheActivityCard = ({ stats }) => (
<div className="bg-white shadow rounded-lg p-4">
<div className="pb-2">
<h2 className="text-lg font-semibold">Cache Activity</h2>
</div>
<div className="space-y-2">
<div className="grid grid-cols-2 gap-2">
<div>Cache Hits: {stats.hits}</div>
<div>Cache Misses: {stats.misses}</div>
<div>Hit Rate: {stats.hitRate}%</div>
</div>
</div>
</div>
);
const MetricsDashboard = () => {
const [data, setData] = useState([]);
const [timeRange, setTimeRange] = useState("30m");
const [customStart, setCustomStart] = useState("");
const [customEnd, setCustomEnd] = useState("");
const [stats, setStats] = useState({
service: { avg: 0, p50: 0, p95: 0, p99: 0 },
db: { avg: 0, p50: 0, p95: 0, p99: 0 },
cache: { avg: 0, p50: 0, p95: 0, p99: 0 },
dbActivity: { rowsRead: 0, rowsWritten: 0, totalRows: 0 },
cacheActivity: { hits: 0, misses: 0, hitRate: 0 },
});
const chartRef = useRef(null);
const chartInstance = useRef(null);
const getTimeRangeParams = () => {
const now = Date.now();
switch (timeRange) {
case "30m":
return `?start=${now - 30 * 60 * 1000}&end=${now}`;
case "1h":
return `?start=${now - 60 * 60 * 1000}&end=${now}`;
case "24h":
return `?start=${now - 24 * 60 * 60 * 1000}&end=${now}`;
case "7d":
return `?start=${now - 7 * 24 * 60 * 60 * 1000}&end=${now}`;
case "custom":
if (customStart && customEnd) {
return `?start=${new Date(customStart).getTime()}&end=${new Date(customEnd).getTime()}`;
}
return "";
case "all":
return "";
default:
return `?start=${now - 30 * 60 * 1000}&end=${now}`;
}
};
const bucketDataForChart = (data, bucketSizeMs = 1000) => {
const buckets = {};
data.forEach((point) => {
const bucketKey =
Math.floor(point.timestamp / bucketSizeMs) * bucketSizeMs;
if (!buckets[bucketKey]) {
buckets[bucketKey] = {
timestamp: bucketKey,
service_time: [],
db_time: [],
cache_time: [],
db_rows_read: [],
db_rows_written: [],
cache_hits: [],
cache_misses: [],
};
}
buckets[bucketKey].service_time.push(point.service_time);
buckets[bucketKey].db_time.push(point.db_time);
buckets[bucketKey].cache_time.push(point.cache_time);
buckets[bucketKey].db_rows_read.push(point.db_rows_read);
buckets[bucketKey].db_rows_written.push(point.db_rows_written);
buckets[bucketKey].cache_hits.push(point.cache_hits);
buckets[bucketKey].cache_misses.push(point.cache_misses);
});
return Object.values(buckets).map((bucket) => ({
timestamp: bucket.timestamp,
service_time: _.mean(bucket.service_time),
db_time: _.mean(bucket.db_time),
cache_time: _.mean(bucket.cache_time),
db_rows_read: _.sum(bucket.db_rows_read),
db_rows_written: _.sum(bucket.db_rows_written),
cache_hits: _.sum(bucket.cache_hits),
cache_misses: _.sum(bucket.cache_misses),
}));
};
const calculateStats = (data) => {
// Input validation with early return
if (!data?.length) {
return {
service: { avg: 0, p50: 0, p95: 0, p99: 0 },
db: { avg: 0, p50: 0, p95: 0, p99: 0 },
cache: { avg: 0, p50: 0, p95: 0, p99: 0 },
dbActivity: { rowsRead: 0, rowsWritten: 0, totalRows: 0 },
cacheActivity: { hits: 0, misses: 0, hitRate: 0 },
};
}
// Create separate arrays for each metric type and sort them independently
const serviceValues = data
.map((d) => Number(d.service_time) || 0)
.sort((a, b) => a - b);
const dbValues = data
.map((d) => Number(d.db_time) || 0)
.sort((a, b) => a - b);
const cacheValues = data
.map((d) => Number(d.cache_time) || 0)
.sort((a, b) => a - b);
// Calculate percentile indices
const len = data.length;
const p50idx = Math.floor(len * 0.5);
const p95idx = Math.floor(len * 0.95);
const p99idx = Math.floor(len * 0.99);
// Log the actual values we're using
console.log("Sorted Values Sample:", {
service: serviceValues.slice(0, 5),
db: dbValues.slice(0, 5),
cache: cacheValues.slice(0, 5),
});
console.log("Median Values:", {
service: serviceValues[p50idx],
db: dbValues[p50idx],
cache: cacheValues[p50idx],
});
// Get latest values for activity metrics
const latest = data[0] || {
cache_hits: 0,
cache_misses: 0,
db_rows_read: 0,
db_rows_written: 0,
db_total_rows: 0,
};
const totalCacheRequests = latest.cache_hits + latest.cache_misses;
const hitRate =
totalCacheRequests > 0
? (latest.cache_hits / totalCacheRequests) * 100
: 0;
const stats = {
service: {
avg: _.round(_.mean(serviceValues), 2),
p50: _.round(serviceValues[p50idx] || 0, 2),
p95: _.round(serviceValues[p95idx] || 0, 2),
p99: _.round(serviceValues[p99idx] || 0, 2),
},
db: {
avg: _.round(_.mean(dbValues), 2),
p50: _.round(dbValues[p50idx] || 0, 2),
p95: _.round(dbValues[p95idx] || 0, 2),
p99: _.round(dbValues[p99idx] || 0, 2),
},
cache: {
avg: _.round(_.mean(cacheValues), 2),
p50: _.round(cacheValues[p50idx] || 0, 2),
p95: _.round(cacheValues[p95idx] || 0, 2),
p99: _.round(cacheValues[p99idx] || 0, 2),
},
dbActivity: {
rowsRead: latest.db_rows_read,
rowsWritten: latest.db_rows_written,
totalRows: latest.db_total_rows,
},
cacheActivity: {
hits: latest.cache_hits,
misses: latest.cache_misses,
hitRate: _.round(hitRate, 1),
},
};
// Log final calculated stats
console.log("Final Stats:", stats);
return stats;
};
const updateChart = (forceReset = false) => {
if (!chartRef.current) return;
if (!chartInstance.current || forceReset) {
if (chartInstance.current) {
chartInstance.current.destroy();
}
initializeChart();
}
const bucketedData = bucketDataForChart(data);
chartInstance.current.data.datasets[0].data = bucketedData.map((d) => ({
x: d.timestamp,
y: d.service_time,
}));
chartInstance.current.data.datasets[1].data = bucketedData.map((d) => ({
x: d.timestamp,
y: d.db_time,
}));
chartInstance.current.data.datasets[2].data = bucketedData.map((d) => ({
x: d.timestamp,
y: d.cache_time,
}));
chartInstance.current.update("none");
};
const initializeChart = () => {
if (!chartRef.current) {
console.log("Chart ref not ready");
return;
}
console.log("Initializing chart");
const ctx = chartRef.current.getContext("2d");
chartInstance.current = new Chart(ctx, {
type: "line",
data: {
datasets: [
{
label: "Service Time",
borderColor: "#8884d8",
data: [],
tension: 0.1,
},
{
label: "DB Time",
borderColor: "#82ca9d",
data: [],
tension: 0.1,
},
{
label: "Cache Time",
borderColor: "#ffc658",
data: [],
tension: 0.1,
},
],
},
options: {
responsive: true,
maintainAspectRatio: false,
animation: false,
scales: {
x: {
type: "time",
time: {
parser: "MM/DD/YYYY HH:mm",
tooltipFormat: "ll HH:mm",
unit: "second",
displayFormats: {
second: "HH:mm:ss",
},
},
title: {
display: true,
text: "Time",
},
},
y: {
beginAtZero: true,
title: {
display: true,
text: "Time (ms)",
},
},
},
},
});
};
const fetchMetrics = async () => {
try {
console.log("Fetching metrics with params:", getTimeRangeParams());
const response = await fetch(`/api/metrics${getTimeRangeParams()}`);
const newData = await response.json();
console.log("Received metrics data:", newData);
if (!newData || newData.length === 0) {
console.log("No data received");
setData([]);
setStats(calculateStats([]));
return;
}
const newStats = calculateStats(newData);
console.log("Calculated stats:", newStats);
if (newStats) {
setStats(newStats);
}
setData(newData || []);
} catch (error) {
console.error("Error fetching metrics:", error);
setData([]);
setStats(calculateStats([]));
}
};
useEffect(() => {
console.log("Initial fetch and chart setup");
fetchMetrics();
updateChart(true);
let interval;
if (timeRange !== "all" && timeRange !== "custom") {
interval = setInterval(fetchMetrics, 1000);
console.log("Set up polling interval");
}
return () => {
if (interval) {
console.log("Cleaning up interval");
clearInterval(interval);
}
};
}, [timeRange, customStart, customEnd]);
useEffect(() => {
console.log("Data updated:", data.length, "points");
if (data.length > 0 && chartRef.current) {
updateChart();
}
}, [data]);
const exportCSV = () => {
try {
console.log("Exporting data:", data);
const csv = Papa.unparse(data);
const blob = new Blob([csv], { type: "text/csv" });
const url = window.URL.createObjectURL(blob);
const a = document.createElement("a");
a.href = url;
a.download = `metrics_export_${new Date().toISOString()}.csv`;
document.body.appendChild(a);
a.click();
document.body.removeChild(a);
window.URL.revokeObjectURL(url);
} catch (error) {
console.error("Error exporting CSV:", error);
}
};
return (
<div className="container mx-auto p-4 space-y-4">
<div className="flex justify-between items-center">
<h1 className="text-2xl font-bold">Service Performance Metrics</h1>
<div className="flex items-center space-x-4">
<div className="flex items-center space-x-2">
<select
value={timeRange}
onChange={(e) => setTimeRange(e.target.value)}
className="rounded border p-2"
>
<option value="30m">Last 30m</option>
<option value="1h">Last 1h</option>
<option value="24h">Last 24h</option>
<option value="7d">Last 7d</option>
<option value="custom">Custom Range</option>
<option value="all">All Data</option>
</select>
</div>
{timeRange === "custom" && (
<div className="flex items-center space-x-2">
<input
type="datetime-local"
value={customStart}
onChange={(e) => setCustomStart(e.target.value)}
className="rounded border p-2"
/>
<span>to</span>
<input
type="datetime-local"
value={customEnd}
onChange={(e) => setCustomEnd(e.target.value)}
className="rounded border p-2"
/>
</div>
)}
<div className="space-x-2">
<button
onClick={exportCSV}
className="bg-blue-500 text-white px-4 py-2 rounded hover:bg-blue-600"
>
Export CSV
</button>
<button
onClick={async () => {
await fetch("/api/clear", { method: "POST" });
setData([]);
if (chartInstance.current) {
chartInstance.current.data.datasets.forEach((dataset) => {
dataset.data = [];
});
chartInstance.current.update();
}
fetchMetrics();
}}
className="bg-red-500 text-white px-4 py-2 rounded hover:bg-red-600"
>
Clear Data
</button>
</div>
</div>
</div>
<div className="grid grid-cols-1 md:grid-cols-3 gap-4">
<MetricCard title="Service Time" stats={stats.service} />
<MetricCard title="Database Time" stats={stats.db} />
<MetricCard title="Cache Time" stats={stats.cache} />
</div>
<div className="grid grid-cols-1 md:grid-cols-2 gap-4">
<DBActivityCard stats={stats.dbActivity} />
<CacheActivityCard stats={stats.cacheActivity} />
</div>
<div className="w-full h-96">
<canvas ref={chartRef}></canvas>
</div>
</div>
);
};
// Render the app
ReactDOM.createRoot(document.getElementById("root")).render(
<MetricsDashboard />,
);

static/index.html Normal file (+27)

@@ -0,0 +1,27 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Service Performance Metrics</title>
<!-- Tailwind CSS -->
<script src="https://cdn.tailwindcss.com"></script>
<!-- React and ReactDOM -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/18.2.0/umd/react.development.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/18.2.0/umd/react-dom.development.js"></script>
<!-- Lodash -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.21/lodash.min.js"></script>
<!-- Chart.js and its dependencies (order matters) -->
<script src="https://cdn.jsdelivr.net/npm/chart.js@4.4.0/dist/chart.umd.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/moment@2.29.4/moment.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chartjs-adapter-moment@1.0.1/dist/chartjs-adapter-moment.min.js"></script>
<!-- PapaParse for CSV handling -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/PapaParse/5.4.1/papaparse.min.js"></script>
<!-- Babel for JSX transformation -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/babel-standalone/7.23.5/babel.min.js"></script>
</head>
<body>
<div id="root"></div>
<script type="text/babel" src="/app.js"></script>
</body>
</html>