v0.1.1 improved config, added ratelimiter (buggy), readme not fully updated

2025-07-01 14:12:20 -04:00
parent 294771653c
commit bd13103a81
11 changed files with 1329 additions and 194 deletions

README.md
@@ -8,12 +8,14 @@ LogWisp follows the Unix philosophy: do one thing and do it well. It monitors lo
## Features
- Monitors multiple files and directories simultaneously
- Streams log updates in real-time via SSE
- Supports both plain text and JSON formatted logs
- Automatic file rotation detection
- Configurable rate limiting
- Environment variable support
- Simple TOML configuration
- Atomic configuration management

## Quick Start
@@ -34,32 +36,219 @@ curl -N http://localhost:8080/stream
## Configuration
LogWisp uses a three-level configuration hierarchy:
1. **Environment variables** (highest priority)
2. **Configuration file** (~/.config/logwisp.toml)
3. **Default values** (lowest priority)
### Configuration File Location
Default: `~/.config/logwisp.toml`
Override with environment variables:
- `LOGWISP_CONFIG_DIR` - Directory containing config file
- `LOGWISP_CONFIG_FILE` - Config filename (absolute or relative)
Examples:
```bash
# Use config from current directory
LOGWISP_CONFIG_DIR=. ./logwisp
# Use specific config file
LOGWISP_CONFIG_FILE=/etc/logwisp/prod.toml ./logwisp
# Use custom directory and filename
LOGWISP_CONFIG_DIR=/opt/configs LOGWISP_CONFIG_FILE=myapp.toml ./logwisp
```
### Environment Variables
All configuration values can be overridden via environment variables:
| Environment Variable | Config Path | Description |
|---------------------|-------------|-------------|
| `LOGWISP_PORT` | `port` | HTTP listen port |
| `LOGWISP_MONITOR_CHECK_INTERVAL_MS` | `monitor.check_interval_ms` | File check interval |
| `LOGWISP_MONITOR_TARGETS` | `monitor.targets` | Comma-separated targets |
| `LOGWISP_STREAM_BUFFER_SIZE` | `stream.buffer_size` | Client buffer size |
| `LOGWISP_STREAM_RATE_LIMIT_ENABLED` | `stream.rate_limit.enabled` | Enable rate limiting |
| `LOGWISP_STREAM_RATE_LIMIT_REQUESTS_PER_SEC` | `stream.rate_limit.requests_per_second` | Rate limit |
| `LOGWISP_STREAM_RATE_LIMIT_BURST_SIZE` | `stream.rate_limit.burst_size` | Burst size |
| `LOGWISP_STREAM_RATE_LIMIT_CLEANUP_INTERVAL` | `stream.rate_limit.cleanup_interval_s` | Cleanup interval |
### Monitor Targets Format
The `LOGWISP_MONITOR_TARGETS` environment variable uses a special format:
```
path:pattern:isfile,path2:pattern2:isfile
```
Examples:
```bash
# Monitor directory and specific file
LOGWISP_MONITOR_TARGETS="/var/log:*.log:false,/app/app.log::true" ./logwisp
# Multiple directories
LOGWISP_MONITOR_TARGETS="/var/log:*.log:false,/opt/app/logs:app-*.log:false" ./logwisp
```
### Example Configuration
```toml
port = 8080

[monitor]
check_interval_ms = 100

# Monitor directory (all .log files)
[[monitor.targets]]
path = "/var/log"
pattern = "*.log"
is_file = false

# Monitor specific file
[[monitor.targets]]
path = "/app/logs/app.log"
pattern = "" # Ignored for files
is_file = true

# Monitor with specific pattern
[[monitor.targets]]
path = "/var/log/nginx"
pattern = "access*.log"
is_file = false

[stream]
buffer_size = 1000

[stream.rate_limit]
enabled = true
requests_per_second = 10
burst_size = 20
cleanup_interval_s = 60
```
## Color Support
LogWisp can pass through ANSI color escape codes from monitored logs to SSE clients using the `-c` flag.
```bash
# Enable color pass-through
./logwisp -c
# Or via systemd
ExecStart=/opt/logwisp/bin/logwisp -c
```
## How It Works
When color mode is enabled (`-c` flag), LogWisp preserves ANSI escape codes in log messages. These are properly JSON-escaped in the SSE stream.
### Example Log with Colors
Original log file content:
```
\033[31mERROR\033[0m: Database connection failed
\033[33mWARN\033[0m: High memory usage detected
\033[32mINFO\033[0m: Service started successfully
```
SSE output with `-c`:
```json
{
"time": "2024-01-01T12:00:00.123456Z",
"source": "app.log",
"message": "\u001b[31mERROR\u001b[0m: Database connection failed"
}
```
## Client-Side Handling
### Terminal Clients
For terminal-based clients (like curl), the escape codes will render as colors:
```bash
# Strip the SSE "data: " prefix, then render the colored message with jq
curl -N http://localhost:8080/stream | sed -u -n 's/^data: //p' | jq -r --unbuffered '.message'
```
### Web Clients
For web-based clients, you'll need to convert ANSI codes to HTML:
```javascript
// Example using the ansi-to-html npm library (bundled for the browser)
const AnsiToHtml = require('ansi-to-html');
const convert = new AnsiToHtml();

const eventSource = new EventSource('http://localhost:8080/stream');
eventSource.onmessage = (event) => {
  const data = JSON.parse(event.data);
  const html = convert.toHtml(data.message);
  document.getElementById('log').innerHTML += html + '<br>';
};
```
### Custom Processing
```python
# Python example with colorama and requests
import json

import requests
from colorama import init

init()  # Initialize colorama for Windows terminal support

# Process the SSE stream line by line
with requests.get("http://localhost:8080/stream", stream=True) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if line and line.startswith("data: "):
            data = json.loads(line[len("data: "):])
            # Colorama/the terminal handles the ANSI codes
            print(data["message"])
```
### Common ANSI Color Codes
| Code | Color/Style |
|------|-------------|
| `\033[0m` | Reset |
| `\033[1m` | Bold |
| `\033[31m` | Red |
| `\033[32m` | Green |
| `\033[33m` | Yellow |
| `\033[34m` | Blue |
| `\033[35m` | Magenta |
| `\033[36m` | Cyan |
### Limitations
1. **JSON Escaping**: ANSI codes are JSON-escaped in the stream (e.g., `\033` becomes `\u001b`)
2. **Client Support**: The client must support or convert ANSI codes
3. **Performance**: No significant impact, but slightly larger message sizes
### Security Note
Color codes are passed through as-is. Ensure monitored logs come from trusted sources to avoid terminal escape sequence attacks.
### Disabling Colors
To strip color codes instead of passing them through:
- Don't use the `-c` flag
- Or set up a preprocessing pipeline:
```bash
tail -f colored.log | sed 's/\x1b\[[0-9;]*m//g' > plain.log
```
## API

### Endpoints
- `GET /stream` - Server-Sent Events stream of log entries
- `GET /status` - Service status information

### Log Entry Format
```json
{
  "time": "2024-01-01T12:00:00Z",
@@ -70,52 +259,54 @@ Log entry format:
}
```
## Usage Examples

### Basic Usage
```bash
# Start with defaults
./logwisp

# View logs
curl -N http://localhost:8080/stream
```
### With Environment Variables
```bash
# Change port and add rate limiting
LOGWISP_PORT=9090 \
LOGWISP_STREAM_RATE_LIMIT_ENABLED=true \
LOGWISP_STREAM_RATE_LIMIT_REQUESTS_PER_SEC=5 \
./logwisp
```

### Monitor Multiple Locations
```bash
# Via environment variable
LOGWISP_MONITOR_TARGETS="/var/log:*.log:false,/app/logs:*.json:false,/tmp/debug.log::true" \
./logwisp

# Or via config file
cat > ~/.config/logwisp.toml << EOF
[[monitor.targets]]
path = "/var/log"
pattern = "*.log"
is_file = false

[[monitor.targets]]
path = "/app/logs"
pattern = "*.json"
is_file = false

[[monitor.targets]]
path = "/tmp/debug.log"
is_file = true
EOF
```
### Production Deployment
Example systemd service with environment overrides:
```ini
[Unit]
Description=LogWisp Log Streaming Service
@@ -127,17 +318,69 @@ User=logwisp
ExecStart=/usr/local/bin/logwisp
Restart=always

# Environment overrides
Environment="LOGWISP_PORT=8080"
Environment="LOGWISP_STREAM_RATE_LIMIT_ENABLED=true"
Environment="LOGWISP_STREAM_RATE_LIMIT_REQUESTS_PER_SEC=100"
Environment="LOGWISP_MONITOR_TARGETS=/var/log:*.log:false,/opt/app/logs:*.log:false"

# Security hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadOnlyPaths=/
ReadWritePaths=/var/log

[Install]
WantedBy=multi-user.target
```
## Rate Limiting
When enabled, rate limiting is applied per client IP address:
```toml
[stream.rate_limit]
enabled = true
requests_per_second = 10 # Sustained rate
burst_size = 20 # Allow bursts up to this size
cleanup_interval_s = 60 # Clean old clients every minute
```
Rate limiting uses the `X-Forwarded-For` header if present, falling back to `RemoteAddr`.
## Building from Source
Requirements:
- Go 1.23 or later
```bash
go mod download
go build -o logwisp ./src/cmd/logwisp
```
## File Rotation Detection
LogWisp automatically detects log file rotation by:
- Monitoring file inode changes (Linux/Unix)
- Detecting file size decrease
- Resetting read position when rotation is detected
## Security Notes
1. **No built-in authentication** - Use a reverse proxy for auth
2. **No TLS support** - Use a reverse proxy for HTTPS
3. **Path validation** - Monitors only specified paths
4. **Rate limiting** - Optional but recommended for internet-facing deployments
## Design Decisions
- **Unix philosophy**: Single purpose - stream logs
- **No CLI arguments**: Configuration via file and environment only
- **SSE over WebSocket**: Simpler, works everywhere
- **Atomic config management**: Using LixenWraith/config package
- **Graceful shutdown**: Proper cleanup on SIGINT/SIGTERM

## License


@@ -1,26 +1,70 @@
# LogWisp Configuration
# Default location: ~/.config/logwisp.toml
# Override with: LOGWISP_CONFIG_DIR and LOGWISP_CONFIG_FILE

# Port to listen on
# Environment: LOGWISP_PORT
port = 8080

# Environment: LOGWISP_COLOR
color = false

[monitor]
# How often to check for file changes (milliseconds)
# Environment: LOGWISP_MONITOR_CHECK_INTERVAL_MS
check_interval_ms = 100

# Paths to monitor
# Environment: LOGWISP_MONITOR_TARGETS (format: "path:pattern:isfile,path2:pattern2:isfile")
# Example: LOGWISP_MONITOR_TARGETS="/var/log:*.log:false,/app/app.log::true"

# Monitor all .log files in current directory
[[monitor.targets]]
path = "./"
pattern = "*.log"
is_file = false

# Monitor all logs in /var/log
[[monitor.targets]]
path = "/var/log"
pattern = "*.log"
is_file = false

# Monitor specific application log file
#[[monitor.targets]]
#path = "/home/user/app/app.log"
#pattern = "" # Ignored for files
#is_file = true

# Monitor nginx access logs
#[[monitor.targets]]
#path = "/var/log/nginx"
#pattern = "access*.log"
#is_file = false

# Monitor systemd journal exported logs
#[[monitor.targets]]
#path = "/var/log/journal"
#pattern = "*.log"
#is_file = false

[stream]
# Buffer size for each client connection
# Environment: LOGWISP_STREAM_BUFFER_SIZE
buffer_size = 10000

[stream.rate_limit]
# Enable rate limiting
# Environment: LOGWISP_STREAM_RATE_LIMIT_ENABLED
enabled = true

# Requests per second per client
# Environment: LOGWISP_STREAM_RATE_LIMIT_REQUESTS_PER_SEC
requests_per_second = 10

# Burst size (max requests at once)
# Environment: LOGWISP_STREAM_RATE_LIMIT_BURST_SIZE
burst_size = 20

# How often to clean up old client limiters (seconds)
# Environment: LOGWISP_STREAM_RATE_LIMIT_CLEANUP_INTERVAL
cleanup_interval_s = 5

doc/files.md

@@ -0,0 +1,39 @@
# Directory structure:
```
logwisp/
├── build.sh
├── go.mod
├── go.sum
├── README.md
├── test_logwisp.sh
├── examples/
│ └── env_usage.sh
└── src/
├── cmd/
│ └── logwisp/
│ └── main.go
└── internal/
├── config/
│ └── config.go # Uses LixenWraith/config
├── middleware/
│ └── ratelimit.go # Rate limiting middleware
├── monitor/
│ └── monitor.go # Enhanced file/directory monitoring
└── stream/
└── stream.go # SSE streaming handler
```
# Configuration locations:
~/.config/logwisp.toml # Default config location
$LOGWISP_CONFIG_DIR/ # Override via environment
$LOGWISP_CONFIG_FILE # Override via environment
# Environment variables:
LOGWISP_CONFIG_DIR # Config directory override
LOGWISP_CONFIG_FILE # Config filename override
LOGWISP_PORT # Port override
LOGWISP_MONITOR_CHECK_INTERVAL_MS # Check interval override
LOGWISP_MONITOR_TARGETS # Targets override (special format)
LOGWISP_STREAM_BUFFER_SIZE # Buffer size override
LOGWISP_STREAM_RATE_LIMIT_* # Rate limit overrides


go.mod

@@ -1,5 +1,15 @@
module logwisp

go 1.24.2

toolchain go1.24.4

require (
	github.com/lixenwraith/config v0.0.0-20250701170607-8515fa0543b6
	golang.org/x/time v0.12.0
)

require (
	github.com/BurntSushi/toml v1.5.0 // indirect
	github.com/mitchellh/mapstructure v1.5.0 // indirect
)

go.sum

@@ -1,2 +1,16 @@
github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg=
github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/lixenwraith/config v0.0.0-20250701170607-8515fa0543b6 h1:qE4SpAJWFaLkdRyE0FjTPBBRYE7LOvcmRCB5p86W73Q=
github.com/lixenwraith/config v0.0.0-20250701170607-8515fa0543b6/go.mod h1:4wPJ3HnLrYrtUwTinngCsBgtdIXsnxkLa7q4KAIbwY8=
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE=
golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

src/cmd/logwisp/main.go

@@ -3,21 +3,85 @@ package main
import (
"context"
"encoding/json"
"flag"
"fmt"
"net/http"
"os"
"os/signal"
"strings"
"sync"
"syscall"
"time"
"logwisp/src/internal/config"
"logwisp/src/internal/middleware"
"logwisp/src/internal/monitor"
"logwisp/src/internal/stream"
)
func main() {
// CHANGED: Parse flags manually without init()
var colorMode bool
flag.BoolVar(&colorMode, "c", false, "Enable color pass-through for escape codes in logs")
// Additional CLI flags that override config
var (
port = flag.Int("port", 0, "HTTP port (overrides config)")
bufferSize = flag.Int("buffer-size", 0, "Stream buffer size (overrides config)")
checkInterval = flag.Int("check-interval", 0, "File check interval in ms (overrides config)")
rateLimit = flag.Bool("rate-limit", false, "Enable rate limiting (overrides config)")
rateRequests = flag.Int("rate-requests", 0, "Rate limit requests/sec (overrides config)")
rateBurst = flag.Int("rate-burst", 0, "Rate limit burst size (overrides config)")
configFile = flag.String("config", "", "Config file path (overrides LOGWISP_CONFIG_FILE)")
)
flag.Parse()
// Set config file env var if specified via CLI
if *configFile != "" {
os.Setenv("LOGWISP_CONFIG_FILE", *configFile)
}
// Build CLI override args for config package
var cliArgs []string
if *port > 0 {
cliArgs = append(cliArgs, fmt.Sprintf("--port=%d", *port))
}
if *bufferSize > 0 {
cliArgs = append(cliArgs, fmt.Sprintf("--stream.buffer_size=%d", *bufferSize))
}
if *checkInterval > 0 {
cliArgs = append(cliArgs, fmt.Sprintf("--monitor.check_interval_ms=%d", *checkInterval))
}
if flag.Lookup("rate-limit").DefValue != flag.Lookup("rate-limit").Value.String() {
// Rate limit flag was explicitly set
cliArgs = append(cliArgs, fmt.Sprintf("--stream.rate_limit.enabled=%v", *rateLimit))
}
if *rateRequests > 0 {
cliArgs = append(cliArgs, fmt.Sprintf("--stream.rate_limit.requests_per_second=%d", *rateRequests))
}
if *rateBurst > 0 {
cliArgs = append(cliArgs, fmt.Sprintf("--stream.rate_limit.burst_size=%d", *rateBurst))
}
// Parse remaining args as monitor targets
for _, arg := range flag.Args() {
if strings.Contains(arg, ":") {
// Format: path:pattern:isfile
cliArgs = append(cliArgs, fmt.Sprintf("--monitor.targets.add=%s", arg))
} else if stat, err := os.Stat(arg); err == nil {
// Auto-detect file vs directory
if stat.IsDir() {
cliArgs = append(cliArgs, fmt.Sprintf("--monitor.targets.add=%s:*.log:false", arg))
} else {
cliArgs = append(cliArgs, fmt.Sprintf("--monitor.targets.add=%s::true", arg))
}
}
}
// Load configuration with CLI overrides
cfg, err := config.LoadWithCLI(cliArgs)
if err != nil {
fmt.Fprintf(os.Stderr, "Failed to load config: %v\n", err)
os.Exit(1)
@@ -25,19 +89,25 @@ func main() {
// Create context for graceful shutdown
ctx, cancel := context.WithCancel(context.Background())
// Setup signal handling
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM)
// WaitGroup for tracking all goroutines
var wg sync.WaitGroup
// Create components
// colorMode is now separate from config
streamer := stream.NewWithOptions(cfg.Stream.BufferSize, colorMode)
mon := monitor.New(streamer.Publish)
// Set monitor check interval from config
mon.SetCheckInterval(time.Duration(cfg.Monitor.CheckIntervalMs) * time.Millisecond)
// Add monitor targets from config
for _, target := range cfg.Monitor.Targets {
if err := mon.AddTarget(target.Path, target.Pattern, target.IsFile); err != nil {
fmt.Fprintf(os.Stderr, "Failed to add target %s: %v\n", target.Path, err)
}
}
@@ -50,16 +120,80 @@ func main() {
// Setup HTTP server
mux := http.NewServeMux()
// Create handler with optional rate limiting
var handler http.Handler = streamer
var rateLimiter *middleware.RateLimiter
if cfg.Stream.RateLimit.Enabled {
rateLimiter = middleware.NewRateLimiter(
cfg.Stream.RateLimit.RequestsPerSecond,
cfg.Stream.RateLimit.BurstSize,
cfg.Stream.RateLimit.CleanupIntervalS,
)
handler = rateLimiter.Middleware(handler)
fmt.Printf("Rate limiting enabled: %d req/s, burst %d\n",
cfg.Stream.RateLimit.RequestsPerSecond,
cfg.Stream.RateLimit.BurstSize)
}
mux.Handle("/stream", handler)
// Enhanced status endpoint
mux.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
status := map[string]interface{}{
"service": "LogWisp",
"version": "2.0.0", // CHANGED: Version bump for config integration
"port": cfg.Port,
"color_mode": colorMode,
"config": map[string]interface{}{
"monitor": map[string]interface{}{
"check_interval_ms": cfg.Monitor.CheckIntervalMs,
"targets_count": len(cfg.Monitor.Targets),
},
"stream": map[string]interface{}{
"buffer_size": cfg.Stream.BufferSize,
"rate_limit": map[string]interface{}{
"enabled": cfg.Stream.RateLimit.Enabled,
"requests_per_second": cfg.Stream.RateLimit.RequestsPerSecond,
"burst_size": cfg.Stream.RateLimit.BurstSize,
},
},
},
}
// Add runtime stats
if rateLimiter != nil {
status["rate_limiter"] = rateLimiter.Stats()
}
status["streamer"] = streamer.Stats()
json.NewEncoder(w).Encode(status)
})
server := &http.Server{
Addr: fmt.Sprintf(":%d", cfg.Port),
Handler: mux,
// Add timeouts for better shutdown behavior
ReadTimeout: 10 * time.Second,
WriteTimeout: 10 * time.Second,
IdleTimeout: 120 * time.Second,
}
// Start server in goroutine
wg.Add(1)
go func() {
defer wg.Done()
fmt.Printf("LogWisp streaming on http://localhost:%d/stream\n", cfg.Port)
fmt.Printf("Status available at http://localhost:%d/status\n", cfg.Port)
if colorMode {
fmt.Println("Color pass-through enabled")
}
// CHANGED: Log config source information
fmt.Printf("Config loaded from: %s\n", config.GetConfigPath())
if err := server.ListenAndServe(); err != nil && err != http.ErrServerClosed {
fmt.Fprintf(os.Stderr, "Server error: %v\n", err)
}
@@ -69,15 +203,39 @@ func main() {
<-sigChan
fmt.Println("\nShutting down...")
// Cancel context to stop all components
cancel()
// Create shutdown context with timeout
shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), 5*time.Second)
defer shutdownCancel()
// Shutdown server first
if err := server.Shutdown(shutdownCtx); err != nil {
fmt.Fprintf(os.Stderr, "Server shutdown error: %v\n", err)
// Force close if graceful shutdown fails
server.Close()
}
// Stop all components
mon.Stop()
streamer.Stop()
if rateLimiter != nil {
rateLimiter.Stop()
}
// Wait for all goroutines with timeout
done := make(chan struct{})
go func() {
wg.Wait()
close(done)
}()
select {
case <-done:
fmt.Println("Shutdown complete")
case <-time.After(2 * time.Second):
fmt.Println("Shutdown timeout, forcing exit")
}
}

src/internal/config/config.go

@@ -5,8 +5,9 @@ import (
"fmt"
"os"
"path/filepath"
"strings"

lconfig "github.com/lixenwraith/config"
)
// Config holds the complete configuration
@@ -24,72 +25,237 @@ type MonitorConfig struct {
// MonitorTarget represents a path to monitor
type MonitorTarget struct {
Path string `toml:"path"` // File or directory path
Pattern string `toml:"pattern"` // Glob pattern for directories
IsFile bool `toml:"is_file"` // True if monitoring specific file
}
// StreamConfig holds streaming settings
type StreamConfig struct {
BufferSize int `toml:"buffer_size"`
RateLimit RateLimitConfig `toml:"rate_limit"`
}
// RateLimitConfig holds rate limiting settings
type RateLimitConfig struct {
Enabled bool `toml:"enabled"`
RequestsPerSecond int `toml:"requests_per_second"`
BurstSize int `toml:"burst_size"`
CleanupIntervalS int64 `toml:"cleanup_interval_s"`
}
// defaults returns configuration with default values
func defaults() *Config {
return &Config{
Port: 8080,
Monitor: MonitorConfig{
CheckIntervalMs: 100,
Targets: []MonitorTarget{
{Path: "./", Pattern: "*.log", IsFile: false},
},
},
Stream: StreamConfig{
BufferSize: 1000,
RateLimit: RateLimitConfig{
Enabled: false,
RequestsPerSecond: 10,
BurstSize: 20,
CleanupIntervalS: 60,
},
},
}
}
// Load reads configuration using lixenwraith/config Builder pattern
// CHANGED: Now uses config.Builder for all source handling
func Load() (*Config, error) {
configPath := GetConfigPath()
// CHANGED: Use Builder pattern with custom environment transform
cfg, err := lconfig.NewBuilder().
WithDefaults(defaults()).
WithEnvPrefix("LOGWISP_").
WithFile(configPath).
WithEnvTransform(customEnvTransform).
WithSources(
// CHANGED: CLI args removed here - handled separately in LoadWithCLI
lconfig.SourceEnv,
lconfig.SourceFile,
lconfig.SourceDefault,
).
Build()
if err != nil {
// Only fail on actual errors, not missing config file
if !strings.Contains(err.Error(), "not found") {
return nil, fmt.Errorf("failed to load config: %w", err)
}
}
// Special handling for LOGWISP_MONITOR_TARGETS env var
if err := handleMonitorTargetsEnv(cfg); err != nil {
return nil, err
}
// Scan into final config
finalConfig := &Config{}
if err := cfg.Scan("", finalConfig); err != nil {
return nil, fmt.Errorf("failed to scan config: %w", err)
}
return finalConfig, finalConfig.validate()
}
// LoadWithCLI loads configuration and applies CLI arguments
// CHANGED: New function that properly integrates CLI args with config package
func LoadWithCLI(cliArgs []string) (*Config, error) {
configPath := GetConfigPath()
// Convert CLI args to config format
convertedArgs := convertCLIArgs(cliArgs)
cfg, err := lconfig.NewBuilder().
WithDefaults(defaults()).
WithEnvPrefix("LOGWISP_").
WithFile(configPath).
WithArgs(convertedArgs). // CHANGED: Use WithArgs for CLI
WithEnvTransform(customEnvTransform).
WithSources(
lconfig.SourceCLI, // CLI highest priority
lconfig.SourceEnv,
lconfig.SourceFile,
lconfig.SourceDefault,
).
Build()
if err != nil {
if !strings.Contains(err.Error(), "not found") {
return nil, fmt.Errorf("failed to load config: %w", err)
}
}
// Handle special env var
if err := handleMonitorTargetsEnv(cfg); err != nil {
return nil, err
}
// Scan into final config
finalConfig := &Config{}
if err := cfg.Scan("", finalConfig); err != nil {
return nil, fmt.Errorf("failed to scan config: %w", err)
}
return finalConfig, finalConfig.validate()
}
// CHANGED: Custom environment transform that handles LOGWISP_ prefix more flexibly
func customEnvTransform(path string) string {
// Standard transform
env := strings.ReplaceAll(path, ".", "_")
env = strings.ToUpper(env)
env = "LOGWISP_" + env
// Also check for some common variations
// This allows both LOGWISP_STREAM_RATE_LIMIT_REQUESTS_PER_SEC
// and LOGWISP_STREAM_RATE_LIMIT_REQUESTS_PER_SECOND
switch env {
case "LOGWISP_STREAM_RATE_LIMIT_REQUESTS_PER_SECOND":
if _, exists := os.LookupEnv("LOGWISP_STREAM_RATE_LIMIT_REQUESTS_PER_SEC"); exists {
return "LOGWISP_STREAM_RATE_LIMIT_REQUESTS_PER_SEC"
}
case "LOGWISP_STREAM_RATE_LIMIT_CLEANUP_INTERVAL_S":
		if _, exists := os.LookupEnv("LOGWISP_STREAM_RATE_LIMIT_CLEANUP_INTERVAL"); exists {
			return "LOGWISP_STREAM_RATE_LIMIT_CLEANUP_INTERVAL"
		}
	}
	return env
}

// CHANGED: Convert CLI args to config package format
func convertCLIArgs(args []string) []string {
	var converted []string
	for _, arg := range args {
		switch {
		case arg == "-c" || arg == "--color":
			// Color mode is handled separately by main.go
			continue
		case strings.HasPrefix(arg, "--config="):
			// Config file path handled separately
			continue
		case strings.HasPrefix(arg, "--"):
			// Pass through other long flags
			converted = append(converted, arg)
		}
	}
	return converted
}

// GetConfigPath returns the configuration file path
// CHANGED: Exported and simplified - now just returns the path, no manual env handling
func GetConfigPath() string {
	// Check explicit config file paths
	if configFile := os.Getenv("LOGWISP_CONFIG_FILE"); configFile != "" {
		if filepath.IsAbs(configFile) {
			return configFile
		}
		if configDir := os.Getenv("LOGWISP_CONFIG_DIR"); configDir != "" {
			return filepath.Join(configDir, configFile)
		}
		return configFile
	}
	if configDir := os.Getenv("LOGWISP_CONFIG_DIR"); configDir != "" {
		return filepath.Join(configDir, "logwisp.toml")
	}
	// Default location
	if homeDir, err := os.UserHomeDir(); err == nil {
		return filepath.Join(homeDir, ".config", "logwisp.toml")
	}
	return "logwisp.toml"
}

// CHANGED: Special handling for comma-separated monitor targets env var
func handleMonitorTargetsEnv(cfg *lconfig.Config) error {
	if targetsStr := os.Getenv("LOGWISP_MONITOR_TARGETS"); targetsStr != "" {
		// Clear any existing targets from file/defaults
		cfg.Set("monitor.targets", []MonitorTarget{})
		// Parse comma-separated format: path:pattern:isfile,path2:pattern2:isfile
		parts := strings.Split(targetsStr, ",")
		for i, part := range parts {
			targetParts := strings.Split(part, ":")
			if len(targetParts) >= 1 && targetParts[0] != "" {
				cfg.Set(fmt.Sprintf("monitor.targets.%d.path", i), targetParts[0])
				if len(targetParts) >= 2 && targetParts[1] != "" {
					cfg.Set(fmt.Sprintf("monitor.targets.%d.pattern", i), targetParts[1])
				} else {
					cfg.Set(fmt.Sprintf("monitor.targets.%d.pattern", i), "*.log")
				}
				if len(targetParts) >= 3 {
					cfg.Set(fmt.Sprintf("monitor.targets.%d.is_file", i), targetParts[2] == "true")
				} else {
					cfg.Set(fmt.Sprintf("monitor.targets.%d.is_file", i), false)
				}
			}
		}
	}
	return nil
}
// validate ensures configuration is valid
func (c *Config) validate() error {
	if c.Port < 1 || c.Port > 65535 {
		return fmt.Errorf("invalid port: %d", c.Port)
	}
@@ -99,10 +265,6 @@ func (c *Config) validate() error {
		return fmt.Errorf("check interval too small: %d ms", c.Monitor.CheckIntervalMs)
	}
	if len(c.Monitor.Targets) == 0 {
		return fmt.Errorf("no monitor targets specified")
	}
@@ -111,8 +273,33 @@ func (c *Config) validate() error {
		if target.Path == "" {
			return fmt.Errorf("target %d: empty path", i)
		}
		if !target.IsFile && target.Pattern == "" {
			return fmt.Errorf("target %d: pattern required for directory monitoring", i)
		}
		// SECURITY: Validate paths don't contain directory traversal
		if strings.Contains(target.Path, "..") {
			return fmt.Errorf("target %d: path contains directory traversal", i)
		}
	}
	if c.Stream.BufferSize < 1 {
		return fmt.Errorf("buffer size must be positive: %d", c.Stream.BufferSize)
	}
	if c.Stream.RateLimit.Enabled {
		if c.Stream.RateLimit.RequestsPerSecond < 1 {
			return fmt.Errorf("rate limit requests per second must be positive: %d",
				c.Stream.RateLimit.RequestsPerSecond)
		}
		if c.Stream.RateLimit.BurstSize < 1 {
			return fmt.Errorf("rate limit burst size must be positive: %d",
				c.Stream.RateLimit.BurstSize)
		}
		if c.Stream.RateLimit.CleanupIntervalS < 1 {
			return fmt.Errorf("rate limit cleanup interval must be positive: %d",
				c.Stream.RateLimit.CleanupIntervalS)
		}
	}
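The rate-limit checks above imply a config shape like the following sketch. These TOML key names are assumptions inferred from the Go struct fields (`Enabled`, `RequestsPerSecond`, `BurstSize`, `CleanupIntervalS`); the commit does not show the authoritative key names, so check the config loader before copying this.

```toml
# Hypothetical keys, inferred from the validated struct fields
[stream.rate_limit]
enabled = true
requests_per_second = 10
burst_size = 20
cleanup_interval_s = 60
```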

@@ -0,0 +1,126 @@
// File: logwisp/src/internal/middleware/ratelimit.go
package middleware

import (
	"fmt"
	"net/http"
	"strings"
	"sync"
	"time"

	"golang.org/x/time/rate"
)

// RateLimiter provides per-client rate limiting
type RateLimiter struct {
	clients         sync.Map // map[string]*clientLimiter
	requestsPerSec  int
	burstSize       int
	cleanupInterval time.Duration
	done            chan struct{}
}

type clientLimiter struct {
	limiter  *rate.Limiter
	lastSeen time.Time
}

// NewRateLimiter creates a new rate limiting middleware
func NewRateLimiter(requestsPerSec, burstSize int, cleanupIntervalSec int64) *RateLimiter {
	rl := &RateLimiter{
		requestsPerSec:  requestsPerSec,
		burstSize:       burstSize,
		cleanupInterval: time.Duration(cleanupIntervalSec) * time.Second,
		done:            make(chan struct{}),
	}
	// Start cleanup routine
	go rl.cleanup()
	return rl
}

// Middleware returns an HTTP middleware function
func (rl *RateLimiter) Middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Get client IP; with X-Forwarded-For, use only the first (client)
		// entry rather than the whole comma-separated proxy chain
		clientIP := r.RemoteAddr
		if forwarded := r.Header.Get("X-Forwarded-For"); forwarded != "" {
			clientIP = strings.TrimSpace(strings.Split(forwarded, ",")[0])
		}
		// Get or create limiter for client
		limiter := rl.getLimiter(clientIP)
		// Check rate limit
		if !limiter.Allow() {
			http.Error(w, "Rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		// Continue to next handler
		next.ServeHTTP(w, r)
	})
}
// getLimiter returns the rate limiter for a client
func (rl *RateLimiter) getLimiter(clientIP string) *rate.Limiter {
	// LoadOrStore avoids the check-then-store race that could create
	// duplicate limiters for the same client under concurrent requests
	fresh := &clientLimiter{
		limiter:  rate.NewLimiter(rate.Limit(rl.requestsPerSec), rl.burstSize),
		lastSeen: time.Now(),
	}
	val, _ := rl.clients.LoadOrStore(clientIP, fresh)
	client := val.(*clientLimiter)
	client.lastSeen = time.Now()
	return client.limiter
}
// cleanup removes old client limiters
func (rl *RateLimiter) cleanup() {
	ticker := time.NewTicker(rl.cleanupInterval)
	defer ticker.Stop()
	for {
		select {
		case <-rl.done:
			return
		case <-ticker.C:
			rl.removeOldClients()
		}
	}
}

// removeOldClients removes limiters that haven't been seen recently
func (rl *RateLimiter) removeOldClients() {
	threshold := time.Now().Add(-rl.cleanupInterval * 2) // Keep for 2x cleanup interval
	rl.clients.Range(func(key, value interface{}) bool {
		client := value.(*clientLimiter)
		if client.lastSeen.Before(threshold) {
			rl.clients.Delete(key)
		}
		return true
	})
}

// Stop gracefully shuts down the rate limiter
func (rl *RateLimiter) Stop() {
	close(rl.done)
}

// Stats returns current rate limiter statistics
func (rl *RateLimiter) Stats() string {
	count := 0
	rl.clients.Range(func(_, _ interface{}) bool {
		count++
		return true
	})
	return fmt.Sprintf("Active clients: %d", count)
}

@@ -9,7 +9,10 @@ import (
	"io"
	"os"
	"path/filepath"
	"regexp"
	"strings"
	"sync"
	"syscall"
	"time"
)
@@ -24,72 +27,84 @@ type LogEntry struct {
// Monitor watches files and directories for log entries
type Monitor struct {
	callback      func(LogEntry)
	targets       []target
	watchers      map[string]*fileWatcher
	mu            sync.RWMutex
	ctx           context.Context
	cancel        context.CancelFunc
	wg            sync.WaitGroup
	checkInterval time.Duration
}

type target struct {
	path    string
	pattern string
	isFile  bool
	regex   *regexp.Regexp // FIXED: Compiled pattern for performance
}

// New creates a new monitor instance
func New(callback func(LogEntry)) *Monitor {
	return &Monitor{
		callback:      callback,
		watchers:      make(map[string]*fileWatcher),
		checkInterval: 100 * time.Millisecond,
	}
}
// SetCheckInterval configures the file check frequency
func (m *Monitor) SetCheckInterval(interval time.Duration) {
	m.mu.Lock()
	m.checkInterval = interval
	m.mu.Unlock()
}

// AddTarget adds a path to monitor with enhanced pattern support
func (m *Monitor) AddTarget(path, pattern string, isFile bool) error {
	absPath, err := filepath.Abs(path)
	if err != nil {
		return fmt.Errorf("invalid path %s: %w", path, err)
	}
	var compiledRegex *regexp.Regexp
	if !isFile && pattern != "" {
		// FIXED: Convert glob pattern to regex for better matching
		regexPattern := globToRegex(pattern)
		compiledRegex, err = regexp.Compile(regexPattern)
		if err != nil {
			return fmt.Errorf("invalid pattern %s: %w", pattern, err)
		}
	}
	m.mu.Lock()
	m.targets = append(m.targets, target{
		path:    absPath,
		pattern: pattern,
		isFile:  isFile,
		regex:   compiledRegex,
	})
	m.mu.Unlock()
	return nil
}
// Start begins monitoring with configurable interval
func (m *Monitor) Start(ctx context.Context) error {
	m.ctx, m.cancel = context.WithCancel(ctx)
	m.wg.Add(1)
	go m.monitorLoop()
	return nil
}

// Stop halts monitoring
func (m *Monitor) Stop() {
	if m.cancel != nil {
		m.cancel()
	}
	m.wg.Wait()
	m.mu.Lock()
	for _, w := range m.watchers {
		w.close()
@@ -97,11 +112,18 @@ func (m *Monitor) Stop() {
	m.mu.Unlock()
}
// FIXED: Enhanced monitoring loop with configurable interval
func (m *Monitor) monitorLoop() {
	defer m.wg.Done()
	// Initial scan
	m.checkTargets()
	m.mu.RLock()
	interval := m.checkInterval
	m.mu.RUnlock()
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
@@ -110,11 +132,22 @@ func (m *Monitor) monitorLoop() {
			return
		case <-ticker.C:
			m.checkTargets()
			// Update ticker interval if changed
			m.mu.RLock()
			newInterval := m.checkInterval
			m.mu.RUnlock()
			if newInterval != interval {
				ticker.Stop()
				ticker = time.NewTicker(newInterval)
				interval = newInterval
			}
		}
	}
}
// FIXED: Enhanced target checking with better file discovery
func (m *Monitor) checkTargets() {
	m.mu.RLock()
	targets := make([]target, len(m.targets))
@@ -122,18 +155,46 @@ func (m *Monitor) checkTargets() {
	m.mu.RUnlock()
	for _, t := range targets {
		if t.isFile {
			m.ensureWatcher(t.path)
		} else {
			// FIXED: More efficient directory scanning
			files, err := m.scanDirectory(t.path, t.regex)
			if err != nil {
				continue
			}
			for _, file := range files {
				m.ensureWatcher(file)
			}
		}
	}
	m.cleanupWatchers()
}

// FIXED: Optimized directory scanning
func (m *Monitor) scanDirectory(dir string, pattern *regexp.Regexp) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var files []string
	for _, entry := range entries {
		if entry.IsDir() {
			continue
		}
		name := entry.Name()
		if pattern == nil || pattern.MatchString(name) {
			files = append(files, filepath.Join(dir, name))
		}
	}
	return files, nil
}
// ensureWatcher creates a watcher if it doesn't exist
func (m *Monitor) ensureWatcher(path string) {
	m.mu.Lock()
	defer m.mu.Unlock()
@@ -142,6 +203,10 @@ func (m *Monitor) ensureWatcher(path string) {
		return
	}
	if _, err := os.Stat(path); os.IsNotExist(err) {
		return
	}
	w := newFileWatcher(path, m.callback)
	m.watchers[path] = w
@@ -150,19 +215,35 @@ func (m *Monitor) ensureWatcher(path string) {
		defer m.wg.Done()
		w.watch(m.ctx)
		m.mu.Lock()
		delete(m.watchers, path)
		m.mu.Unlock()
	}()
}
func (m *Monitor) cleanupWatchers() {
	m.mu.Lock()
	defer m.mu.Unlock()
	for path, w := range m.watchers {
		if _, err := os.Stat(path); os.IsNotExist(err) {
			w.stop()
			delete(m.watchers, path)
		}
	}
}

// fileWatcher with enhanced rotation detection
type fileWatcher struct {
	path        string
	callback    func(LogEntry)
	position    int64
	size        int64
	inode       uint64
	modTime     time.Time
	mu          sync.Mutex
	stopped     bool
	rotationSeq int // FIXED: Track rotation sequence for logging
}
func newFileWatcher(path string, callback func(LogEntry)) *fileWatcher {
@@ -172,9 +253,7 @@ func newFileWatcher(path string, callback func(LogEntry)) *fileWatcher {
	}
}
func (w *fileWatcher) watch(ctx context.Context) {
	if err := w.seekToEnd(); err != nil {
		return
	}
@@ -187,12 +266,15 @@ func (w *fileWatcher) watch(ctx context.Context) {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if w.isStopped() {
				return
			}
			w.checkFile()
		}
	}
}
// FIXED: Enhanced file state tracking for better rotation detection
func (w *fileWatcher) seekToEnd() error {
	file, err := os.Open(w.path)
	if err != nil {
@@ -200,6 +282,11 @@ func (w *fileWatcher) seekToEnd() error {
	}
	defer file.Close()
	info, err := file.Stat()
	if err != nil {
		return err
	}
	pos, err := file.Seek(0, io.SeekEnd)
	if err != nil {
		return err
@@ -207,12 +294,19 @@ func (w *fileWatcher) seekToEnd() error {
	w.mu.Lock()
	w.position = pos
	w.size = info.Size()
	w.modTime = info.ModTime()
	// Get inode for rotation detection (Unix-specific)
	if stat, ok := info.Sys().(*syscall.Stat_t); ok {
		w.inode = stat.Ino
	}
	w.mu.Unlock()
	return nil
}
// FIXED: Enhanced rotation detection with multiple signals
func (w *fileWatcher) checkFile() error {
	file, err := os.Open(w.path)
	if err != nil {
@@ -220,28 +314,81 @@ func (w *fileWatcher) checkFile() error {
	}
	defer file.Close()
	info, err := file.Stat()
	if err != nil {
		return err
	}
	w.mu.Lock()
	oldPos := w.position
	oldSize := w.size
	oldInode := w.inode
	oldModTime := w.modTime
	w.mu.Unlock()
	currentSize := info.Size()
	currentModTime := info.ModTime()
	var currentInode uint64
	if stat, ok := info.Sys().(*syscall.Stat_t); ok {
		currentInode = stat.Ino
	}
	// FIXED: Multiple rotation detection methods
	rotated := false
	rotationReason := ""
	// Method 1: Inode change (most reliable on Unix)
	if oldInode != 0 && currentInode != 0 && currentInode != oldInode {
		rotated = true
		rotationReason = "inode change"
	}
	// Method 2: File size decrease
	if !rotated && currentSize < oldSize {
		rotated = true
		rotationReason = "size decrease"
	}
	// Method 3: File modification time reset while size is same or smaller
	if !rotated && currentModTime.Before(oldModTime) && currentSize <= oldSize {
		rotated = true
		rotationReason = "modification time reset"
	}
	// Method 4: Large position vs current size discrepancy
	if !rotated && oldPos > currentSize+1024 { // Allow some buffer
		rotated = true
		rotationReason = "position beyond file size"
	}
	newPos := oldPos
	if rotated {
		newPos = 0
		w.mu.Lock()
		w.rotationSeq++
		seq := w.rotationSeq
		w.inode = currentInode
		w.mu.Unlock()
		// Log rotation event
		w.callback(LogEntry{
			Time:    time.Now(),
			Source:  filepath.Base(w.path),
			Level:   "INFO",
			Message: fmt.Sprintf("Log rotation detected (#%d): %s", seq, rotationReason),
		})
	}
	// Seek to position and read new content
	if _, err := file.Seek(newPos, io.SeekStart); err != nil {
		return err
	}
	scanner := bufio.NewScanner(file)
	scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024) // 1MB max line
	lineCount := 0
	for scanner.Scan() {
		line := scanner.Text()
		if line == "" {
@@ -250,22 +397,23 @@ func (w *fileWatcher) checkFile() error {
		entry := w.parseLine(line)
		w.callback(entry)
		lineCount++
	}
	// Update file state
	if currentPos, err := file.Seek(0, io.SeekCurrent); err == nil {
		w.mu.Lock()
		w.position = currentPos
		w.size = currentSize
		w.modTime = currentModTime
		w.mu.Unlock()
	}
	return scanner.Err()
}
// FIXED: Enhanced log parsing with more level detection patterns
func (w *fileWatcher) parseLine(line string) LogEntry {
	var jsonLog struct {
		Time   string          `json:"time"`
		Level  string          `json:"level"`
@@ -273,8 +421,8 @@ func (w *fileWatcher) parseLine(line string) LogEntry {
		Fields json.RawMessage `json:"fields"`
	}
	if err := json.Unmarshal([]byte(line), &jsonLog); err == nil {
		timestamp, err := time.Parse(time.RFC3339Nano, jsonLog.Time)
		if err != nil {
			timestamp = time.Now()
@@ -289,15 +437,62 @@ func (w *fileWatcher) parseLine(line string) LogEntry {
		}
	}
	// Plain text with enhanced level extraction
	level := extractLogLevel(line)
	return LogEntry{
		Time:    time.Now(),
		Source:  filepath.Base(w.path),
		Level:   level,
		Message: line,
	}
}
// FIXED: More comprehensive log level extraction
func extractLogLevel(line string) string {
	patterns := []struct {
		patterns []string
		level    string
	}{
		{[]string{"[ERROR]", "ERROR:", " ERROR ", "ERR:", "[ERR]", "FATAL:", "[FATAL]"}, "ERROR"},
		{[]string{"[WARN]", "WARN:", " WARN ", "WARNING:", "[WARNING]"}, "WARN"},
		{[]string{"[INFO]", "INFO:", " INFO ", "[INF]", "INF:"}, "INFO"},
		{[]string{"[DEBUG]", "DEBUG:", " DEBUG ", "[DBG]", "DBG:"}, "DEBUG"},
		{[]string{"[TRACE]", "TRACE:", " TRACE "}, "TRACE"},
	}
	upperLine := strings.ToUpper(line)
	for _, group := range patterns {
		for _, pattern := range group.patterns {
			if strings.Contains(upperLine, pattern) {
				return group.level
			}
		}
	}
	return ""
}

// FIXED: Convert glob patterns to regex
func globToRegex(glob string) string {
	regex := regexp.QuoteMeta(glob)
	regex = strings.ReplaceAll(regex, `\*`, `.*`)
	regex = strings.ReplaceAll(regex, `\?`, `.`)
	return "^" + regex + "$"
}

func (w *fileWatcher) close() {
	w.stop()
}

func (w *fileWatcher) stop() {
	w.mu.Lock()
	w.stopped = true
	w.mu.Unlock()
}

func (w *fileWatcher) isStopped() bool {
	w.mu.Lock()
	defer w.mu.Unlock()
	return w.stopped
}

@@ -13,67 +13,120 @@ import (
// Streamer handles Server-Sent Events streaming
type Streamer struct {
	clients    map[string]*clientConnection
	register   chan *clientConnection
	unregister chan string
	broadcast  chan monitor.LogEntry
	mu         sync.RWMutex
	bufferSize int
	done       chan struct{}
	colorMode  bool
	wg         sync.WaitGroup
}

type clientConnection struct {
	id           string
	channel      chan monitor.LogEntry
	lastActivity time.Time
	dropped      int64 // Count of dropped messages
}

// New creates a new SSE streamer
func New(bufferSize int) *Streamer {
	return NewWithOptions(bufferSize, false)
}

// NewWithOptions creates a new SSE streamer with options
func NewWithOptions(bufferSize int, colorMode bool) *Streamer {
	s := &Streamer{
		clients:    make(map[string]*clientConnection),
		register:   make(chan *clientConnection),
		unregister: make(chan string),
		broadcast:  make(chan monitor.LogEntry, bufferSize),
		bufferSize: bufferSize,
		done:       make(chan struct{}),
		colorMode:  colorMode,
	}
	s.wg.Add(1)
	go s.run()
	return s
}
// run manages client connections with timeout cleanup
func (s *Streamer) run() {
	defer s.wg.Done()
	// Periodic cleanup for stale/slow clients
	cleanupTicker := time.NewTicker(30 * time.Second)
	defer cleanupTicker.Stop()
	for {
		select {
		case c := <-s.register:
			s.mu.Lock()
			s.clients[c.id] = c
			s.mu.Unlock()
		case id := <-s.unregister:
			s.mu.Lock()
			if client, ok := s.clients[id]; ok {
				close(client.channel)
				delete(s.clients, id)
			}
			s.mu.Unlock()
		case entry := <-s.broadcast:
			// Full write lock: delivery mutates per-client counters and may
			// remove slow clients, so a read lock would race with Stats()
			s.mu.Lock()
			now := time.Now()
			for id, client := range s.clients {
				select {
				case client.channel <- entry:
					client.lastActivity = now
				default:
					// Track dropped messages and remove slow clients:
					// >100 drops or >2min of inactivity
					client.dropped++
					if client.dropped > 100 || now.Sub(client.lastActivity) > 2*time.Minute {
						close(client.channel)
						delete(s.clients, id)
					}
				}
			}
			s.mu.Unlock()
		case <-cleanupTicker.C:
			// Periodic cleanup of inactive clients
			s.mu.Lock()
			now := time.Now()
			for id, client := range s.clients {
				if now.Sub(client.lastActivity) > 5*time.Minute {
					close(client.channel)
					delete(s.clients, id)
				}
			}
			s.mu.Unlock()
		case <-s.done:
			s.mu.Lock()
			for _, client := range s.clients {
				close(client.channel)
			}
			s.clients = make(map[string]*clientConnection)
			s.mu.Unlock()
			return
		}
	}
}
@@ -85,8 +138,8 @@ func (s *Streamer) Publish(entry monitor.LogEntry) {
	case s.broadcast <- entry:
		// Sent to broadcast channel
	default:
		// Drop entry if broadcast buffer is full;
		// this prevents memory exhaustion under high load
	}
}
@@ -102,43 +155,84 @@ func (s *Streamer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	clientID := fmt.Sprintf("%d", time.Now().UnixNano())
	ch := make(chan monitor.LogEntry, s.bufferSize)
	client := &clientConnection{
		id:           clientID,
		channel:      ch,
		lastActivity: time.Now(),
		dropped:      0,
	}
	// Register client; bail out if the streamer is already shut down,
	// since run() is then no longer receiving on these channels
	select {
	case s.register <- client:
	case <-s.done:
		return
	}
	defer func() {
		select {
		case s.unregister <- clientID:
		case <-s.done:
		}
	}()
	// Send initial connection event
	fmt.Fprintf(w, "event: connected\ndata: {\"client_id\":\"%s\"}\n\n", clientID)
	if flusher, ok := w.(http.Flusher); ok {
		flusher.Flush()
	}
	// Create ticker for heartbeat
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()
	// Add timeout for slow clients
	clientTimeout := time.NewTimer(10 * time.Minute)
	defer clientTimeout.Stop()
	// Stream events
	for {
		select {
		case <-r.Context().Done():
			return
		case entry, ok := <-ch:
			if !ok {
				// Channel was closed (client removed due to slowness)
				fmt.Fprintf(w, "event: disconnected\ndata: {\"reason\":\"slow_client\"}\n\n")
				if flusher, ok := w.(http.Flusher); ok {
					flusher.Flush()
				}
				return
			}
			// Reset client timeout on successful read
			if !clientTimeout.Stop() {
				<-clientTimeout.C
			}
			clientTimeout.Reset(10 * time.Minute)
			// Process entry for color if needed
			if s.colorMode {
				entry = s.processColorEntry(entry)
			}
			data, err := json.Marshal(entry)
			if err != nil {
				continue
			}
			fmt.Fprintf(w, "data: %s\n\n", data)
			if flusher, ok := w.(http.Flusher); ok {
				flusher.Flush()
			}
		case <-ticker.C:
			// Heartbeat with UTC timestamp
			fmt.Fprintf(w, ": heartbeat %s\n\n", time.Now().UTC().Format("2006-01-02T15:04:05.000000Z07:00"))
			if flusher, ok := w.(http.Flusher); ok {
				flusher.Flush()
			}
		case <-clientTimeout.C:
			// Client timeout - close connection
			fmt.Fprintf(w, "event: timeout\ndata: {\"reason\":\"client_timeout\"}\n\n")
			if flusher, ok := w.(http.Flusher); ok {
				flusher.Flush()
			}
			return
		}
	}
}
@@ -146,11 +240,36 @@ func (s *Streamer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
// Stop gracefully shuts down the streamer
func (s *Streamer) Stop() {
	close(s.done)
	s.wg.Wait()
	// register/unregister/broadcast are intentionally left open: closing them
	// would panic any concurrent sender, and run() already closes every
	// client channel when it observes <-s.done
}

// Enhanced color processing with proper ANSI handling
func (s *Streamer) processColorEntry(entry monitor.LogEntry) monitor.LogEntry {
	// For color mode, we preserve ANSI codes but ensure they're properly handled
	// The JSON marshaling will escape them correctly for transmission
	// Client-side handling is required for proper display
	return entry
}
// Stats returns current streamer statistics
func (s *Streamer) Stats() map[string]interface{} {
	s.mu.RLock()
	defer s.mu.RUnlock()
	stats := map[string]interface{}{
		"active_clients": len(s.clients),
		"buffer_size":    s.bufferSize,
		"color_mode":     s.colorMode,
	}
	totalDropped := int64(0)
	for _, client := range s.clients {
		totalDropped += client.dropped
	}
	stats["total_dropped"] = totalDropped
	return stats
}
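The streamer frames each entry as an SSE `data:` event terminated by a blank line, which is what `curl -N http://localhost:8080/stream` receives. A minimal sketch of that wire format, using a trimmed stand-in for `monitor.LogEntry` (the real struct also carries time, source, and fields):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// LogEntry is a simplified stand-in for the monitor.LogEntry the streamer marshals.
type LogEntry struct {
	Level   string `json:"level"`
	Message string `json:"message"`
}

// frame reproduces the per-entry wire format ServeHTTP writes:
// a "data:" line carrying the JSON payload, terminated by a blank line.
func frame(entry LogEntry) string {
	data, _ := json.Marshal(entry)
	return fmt.Sprintf("data: %s\n\n", data)
}

func main() {
	fmt.Print(frame(LogEntry{Level: "INFO", Message: "started"}))
	// prints: data: {"level":"INFO","message":"started"}  (followed by a blank line)
}
```

The trailing blank line is what delimits events in the `text/event-stream` format; without it, an SSE client would keep buffering and never dispatch the message.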