v0.1.3: switched the HTTP stream from net/http to fasthttp and the TCP stream to gnet; added heartbeat configuration

This commit is contained in:
2025-07-01 23:43:51 -04:00
parent a3450a9589
commit a7595061ba
13 changed files with 1134 additions and 1474 deletions

README.md — 625 lines changed

<p align="center">
<img src="assets/logo.svg" alt="LogWisp Logo" width="200"/>
</p>

# LogWisp - Dual-Stack Log Streaming

A high-performance log streaming service with a dual-stack architecture: raw TCP streaming via gnet and HTTP/SSE streaming via fasthttp.
## Features

- **Dual streaming modes**: TCP (gnet) and HTTP/SSE (fasthttp)
- **Fan-out architecture**: multiple independent consumers
- **Real-time updates**: file monitoring with rotation detection
- **Minimal dependencies**: only gnet and fasthttp beyond the standard library
- **High performance**: non-blocking I/O throughout
## Quick Start

```bash
# Build
go build -o logwisp ./src/cmd/logwisp

# Run with HTTP only (default)
./logwisp

# Enable both TCP and HTTP
./logwisp --enable-tcp --tcp-port 9090

# Monitor specific paths
./logwisp /var/log:*.log /app/logs:error*.log
```
## Architecture

```
Monitor (Publisher) → [Subscriber Channels] → TCP Server  (default port 9090)
                                            ↘ HTTP Server (default port 8080)
```
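The fan-out above can be sketched with plain Go channels: the monitor publishes every entry to each subscriber's buffered channel using a non-blocking send, so one slow consumer cannot stall the monitor or the other consumers. This is an illustrative sketch, not LogWisp's actual internals; the names `Hub`, `Subscribe`, and `Publish` are hypothetical.

```go
package main

import "fmt"

// Hub fans entries out to independent subscriber channels.
// A slow subscriber's full buffer causes drops for that subscriber
// only; delivery never blocks.
type Hub struct {
	subs    []chan string
	bufSize int
	Dropped int // total entries dropped across all subscribers
}

func NewHub(bufSize int) *Hub { return &Hub{bufSize: bufSize} }

// Subscribe returns a buffered channel that receives future entries.
func (h *Hub) Subscribe() <-chan string {
	ch := make(chan string, h.bufSize)
	h.subs = append(h.subs, ch)
	return ch
}

// Publish delivers entry to every subscriber without blocking.
func (h *Hub) Publish(entry string) {
	for _, ch := range h.subs {
		select {
		case ch <- entry:
		default: // buffer full: drop for this subscriber only
			h.Dropped++
		}
	}
}

func main() {
	h := NewHub(1)
	fast := h.Subscribe()
	h.Subscribe() // a subscriber that never reads
	h.Publish("line 1")
	h.Publish("line 2") // overflows both 1-slot buffers
	fmt.Println(<-fast, h.Dropped) // → line 1 2
}
```

The same pattern would back both servers: the TCP and HTTP handlers each hold their own subscriber channel, which is why per-protocol buffer sizes are configured independently.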
## Command Line Options

```
logwisp [OPTIONS] [TARGET...]

OPTIONS:
  --config FILE           Config file path
  --check-interval MS     File check interval (default: 100)

  # TCP Server
  --enable-tcp            Enable TCP server
  --tcp-port PORT         TCP port (default: 9090)
  --tcp-buffer-size SIZE  TCP buffer size (default: 1000)

  # HTTP Server
  --enable-http           Enable HTTP server (default: true)
  --http-port PORT        HTTP port (default: 8080)
  --http-buffer-size SIZE HTTP buffer size (default: 1000)

  # Legacy compatibility
  --port PORT             Same as --http-port
  --buffer-size SIZE      Same as --http-buffer-size

TARGET:
  path[:pattern[:isfile]]  Path to monitor
    pattern: glob pattern for directories
    isfile:  true/false (auto-detected if omitted)
```
## Configuration

Config file location: `~/.config/logwisp.toml`

```toml
[monitor]
check_interval_ms = 100

[[monitor.targets]]
path = "./"
pattern = "*.log"
is_file = false

[tcpserver]
enabled = false
port = 9090
buffer_size = 1000

[httpserver]
enabled = true
port = 8080
buffer_size = 1000
```
## Clients

### TCP Stream

```bash
# Simple TCP client
nc localhost 9090

# Using telnet
telnet localhost 9090

# Using socat
socat - TCP:localhost:9090
```
### HTTP/SSE Stream

```bash
# Stream logs
curl -N http://localhost:8080/stream

# Check status
curl http://localhost:8080/status
```
## Environment Variables

All config values can be set via environment variables:

- `LOGWISP_MONITOR_CHECK_INTERVAL_MS`
- `LOGWISP_MONITOR_TARGETS` (format: `path:pattern:isfile,...`)
- `LOGWISP_TCPSERVER_ENABLED`
- `LOGWISP_TCPSERVER_PORT`
- `LOGWISP_HTTPSERVER_ENABLED`
- `LOGWISP_HTTPSERVER_PORT`
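The `LOGWISP_MONITOR_TARGETS` value is a comma-separated list of `path:pattern:isfile` triples, with trailing fields optional. A parsing sketch follows; `Target` and `parseTargets` are illustrative names, not LogWisp's internal types.

```go
package main

import (
	"fmt"
	"strings"
)

// Target is an illustrative stand-in for a monitor target.
type Target struct {
	Path    string
	Pattern string
	IsFile  bool
}

// parseTargets decodes the "path:pattern:isfile,..." format.
// Omitted fields keep zero values (empty pattern, isfile=false).
func parseTargets(s string) []Target {
	var targets []Target
	for _, item := range strings.Split(s, ",") {
		parts := strings.SplitN(item, ":", 3)
		t := Target{Path: parts[0]}
		if len(parts) > 1 {
			t.Pattern = parts[1]
		}
		if len(parts) > 2 {
			t.IsFile = parts[2] == "true"
		}
		targets = append(targets, t)
	}
	return targets
}

func main() {
	ts := parseTargets("/var/log:*.log:false,/app/app.log::true")
	fmt.Println(len(ts), ts[1].Path, ts[1].IsFile) // → 2 /app/app.log true
}
```

Note the empty-pattern form (`/app/app.log::true`) matches the file-target examples elsewhere in this document: the pattern field is ignored when `isfile` is true.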
## Log Entry Format

```json
{
  "time": "2024-01-01T12:00:00.123456Z",
  "source": "app.log",
  "message": "Service started successfully"
}
```
## API Endpoints

### TCP Protocol

- Raw JSON lines, one entry per line
- No headers or authentication
- Instant connection; streaming starts immediately

### HTTP Endpoints

- `GET /stream` - SSE stream of log entries
- `GET /status` - Service status JSON

### SSE Events

- `connected` - Initial connection with client_id
- `data` - Log entry JSON
- `:` - Heartbeat comment (30s interval)
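In the SSE wire format, each payload arrives on a `data: ` line and heartbeats arrive as `:`-prefixed comment lines, so a minimal client only needs line-based parsing. The sketch below feeds a string through the parser in place of a real HTTP response body from `GET /stream`; `parseSSE` is a hypothetical helper.

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseSSE splits a raw SSE stream into data payloads and comment
// (heartbeat) lines. A real client would read the HTTP response body
// of GET /stream; a string stands in for the wire bytes here.
func parseSSE(raw string) (data []string, comments []string) {
	sc := bufio.NewScanner(strings.NewReader(raw))
	for sc.Scan() {
		line := sc.Text()
		switch {
		case strings.HasPrefix(line, "data: "):
			data = append(data, strings.TrimPrefix(line, "data: "))
		case strings.HasPrefix(line, ":"):
			comments = append(comments, line)
		}
	}
	return data, comments
}

func main() {
	raw := "event: connected\ndata: {\"client_id\":\"123\"}\n\n" +
		": heartbeat 2024-01-01T12:00:00Z\n\n" +
		"data: {\"message\":\"hello\"}\n\n"
	data, comments := parseSSE(raw)
	fmt.Println(len(data), len(comments)) // → 2 1
}
```

Browsers get this parsing for free via `EventSource`; the manual version matters for non-browser consumers such as CLI tools.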
## Heartbeat Configuration

LogWisp supports configurable heartbeat messages for both HTTP/SSE and TCP streams, to detect stale connections and report server statistics.

**HTTP/SSE heartbeat:**
- Format options:
  - `comment`: SSE comment format (`: heartbeat ...`)
  - `json`: standard data message with a JSON payload
- Content options:
  - `include_timestamp`: add the current UTC timestamp
  - `include_stats`: add the active client count and server uptime

**TCP heartbeat:**
- Always uses JSON format
- Same content options as HTTP
- Useful for detecting disconnected clients

**Example heartbeat messages:**

Comment format (HTTP):
```
: heartbeat 2024-01-01T12:00:00Z clients=5 uptime=3600s
```

JSON format:
```json
{"type":"heartbeat","timestamp":"2024-01-01T12:00:00Z","active_clients":5,"uptime_seconds":3600}
```
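With `format = "json"`, heartbeats share the stream with log entries, so clients distinguish them by the `type` field. A decoding sketch, with struct fields mirroring the example payload above (`isHeartbeat` is an illustrative helper, not part of LogWisp):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Heartbeat mirrors the documented JSON heartbeat payload.
type Heartbeat struct {
	Type          string `json:"type"`
	Timestamp     string `json:"timestamp"`
	ActiveClients int    `json:"active_clients"`
	UptimeSeconds int    `json:"uptime_seconds"`
}

// isHeartbeat reports whether a stream line is a heartbeat and,
// if so, returns the decoded payload.
func isHeartbeat(line []byte) (Heartbeat, bool) {
	var hb Heartbeat
	if err := json.Unmarshal(line, &hb); err != nil {
		return Heartbeat{}, false
	}
	return hb, hb.Type == "heartbeat"
}

func main() {
	line := []byte(`{"type":"heartbeat","timestamp":"2024-01-01T12:00:00Z","active_clients":5,"uptime_seconds":3600}`)
	hb, ok := isHeartbeat(line)
	fmt.Println(ok, hb.ActiveClients) // → true 5
}
```

Log entries have no `type` field, so they unmarshal cleanly but fail the `type == "heartbeat"` check and fall through to normal entry handling.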
**Configuration:**

```toml
[httpserver.heartbeat]
enabled = true
interval_seconds = 30
include_timestamp = true
include_stats = true
format = "json"
```

**Environment variables:**

- `LOGWISP_HTTPSERVER_HEARTBEAT_ENABLED`
- `LOGWISP_HTTPSERVER_HEARTBEAT_INTERVAL_SECONDS`
- `LOGWISP_TCPSERVER_HEARTBEAT_ENABLED`
- `LOGWISP_TCPSERVER_HEARTBEAT_INTERVAL_SECONDS`
## Summary

**Fixed:**
- Removed duplicate `globToRegex` functions (never used)
- Added missing TCP heartbeat support
- Made the HTTP heartbeat configurable

**Enhanced:**
- Configurable heartbeat interval
- Multiple format options (comment/JSON)
- Optional timestamp and statistics
- Per-protocol configuration

**⚠️ SECURITY:** Heartbeat statistics expose minimal server state (connection count, uptime). If this is sensitive in your environment, disable `include_stats`.

## Deployment
### Systemd Service

```ini
[Unit]
Description=LogWisp Log Streaming
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/logwisp --enable-tcp --enable-http
Restart=always
Environment="LOGWISP_TCPSERVER_PORT=9090"
Environment="LOGWISP_HTTPSERVER_PORT=8080"

[Install]
WantedBy=multi-user.target
```
### Docker

```dockerfile
FROM golang:1.24 AS builder
WORKDIR /app
COPY . .
RUN go build -o logwisp ./src/cmd/logwisp

FROM debian:bookworm-slim
COPY --from=builder /app/logwisp /usr/local/bin/
EXPOSE 8080 9090
CMD ["logwisp", "--enable-tcp", "--enable-http"]
```
## Performance Tuning

- **Buffer size**: increase for burst traffic (5000+)
- **Check interval**: decrease for lower latency (10-50ms)
- **TCP**: best for high-volume system consumers
- **HTTP**: best for web browsers and REST clients

### Message Dropping and Client Behavior

LogWisp uses non-blocking message delivery to maintain system stability. When a client cannot keep up with the log stream, messages are dropped rather than blocking other clients or the monitor.

**Common causes of dropped messages:**
- **Browser throttling**: browsers may throttle background tabs, reducing JavaScript execution frequency
- **Network congestion**: slow connections or high latency can cause client buffers to fill
- **Client processing**: heavy client-side processing (parsing, rendering) can create backpressure
- **System resources**: CPU/memory constraints on client machines affect consumption rate

**TCP vs HTTP behavior:**
- **TCP**: raw stream with kernel-level buffering; drops occur when the TCP send buffer fills
- **HTTP/SSE**: application-level buffering; each client has a dedicated channel (default: 1000 entries)

**Mitigation strategies:**
1. Increase buffer sizes for burst tolerance: `--tcp-buffer-size 5000` or `--http-buffer-size 5000`
2. Implement client-side flow control (pause/resume based on queue depth)
3. Use TCP for high-volume consumers that need guaranteed delivery
4. Keep browser tabs in the foreground for real-time monitoring
5. Consider log aggregation/filtering at the source for high-volume scenarios

**Monitoring drops:**
- HTTP: check the `/status` endpoint for drop statistics
- TCP: monitor connection count and system TCP metrics
- Both: watch for "channel full" indicators in client implementations
## Building from Source

```bash
git clone https://github.com/yourusername/logwisp
cd logwisp
go mod download
go build -o logwisp ./src/cmd/logwisp
```


# 2. Environment variables (LOGWISP_ prefix)
# 3. This configuration file
# 4. Built-in defaults

[monitor]
# File check interval (milliseconds)
# Lower = more responsive, higher CPU usage
# Environment: LOGWISP_MONITOR_CHECK_INTERVAL_MS
# CLI: --check-interval MS
check_interval_ms = 100

# Monitor targets
# Environment: LOGWISP_MONITOR_TARGETS="path:pattern:isfile,path2:pattern2:isfile"
# CLI: logwisp [path[:pattern[:isfile]]] ...
[[monitor.targets]]
path = "./"        # Directory or file path
pattern = "*.log"  # Glob pattern (ignored for files)
is_file = false    # true = file, false = directory

# # Example: Specific file
# [[monitor.targets]]
# path = "/var/log/app.log"
# pattern = ""
# is_file = true

# # Example: System logs
# [[monitor.targets]]
# path = "/var/log"
# pattern = "*.log"
# is_file = false

[tcpserver]
# Raw TCP streaming server (gnet)
# Environment: LOGWISP_TCPSERVER_ENABLED
# CLI: --enable-tcp
enabled = false

# TCP port
# Environment: LOGWISP_TCPSERVER_PORT
# CLI: --tcp-port PORT
port = 9090

# Per-client buffer size
# Environment: LOGWISP_TCPSERVER_BUFFER_SIZE
# CLI: --tcp-buffer-size SIZE
buffer_size = 1000

# TLS/SSL settings (not implemented in PoC)
ssl_enabled = false
ssl_cert_file = ""
ssl_key_file = ""

[tcpserver.heartbeat]
# Enable/disable heartbeat messages
# Environment: LOGWISP_TCPSERVER_HEARTBEAT_ENABLED
enabled = false

# Heartbeat interval in seconds
# Environment: LOGWISP_TCPSERVER_HEARTBEAT_INTERVAL_SECONDS
interval_seconds = 30

# Include timestamp in heartbeat
# Environment: LOGWISP_TCPSERVER_HEARTBEAT_INCLUDE_TIMESTAMP
include_timestamp = true

# Include server statistics (active connections, uptime)
# Environment: LOGWISP_TCPSERVER_HEARTBEAT_INCLUDE_STATS
include_stats = false

# Format: "json" only for TCP
# Environment: LOGWISP_TCPSERVER_HEARTBEAT_FORMAT
format = "json"

[httpserver]
# HTTP/SSE streaming server (fasthttp)
# Environment: LOGWISP_HTTPSERVER_ENABLED
# CLI: --enable-http
enabled = true

# HTTP port
# Environment: LOGWISP_HTTPSERVER_PORT
# CLI: --http-port PORT (or legacy --port)
port = 8080

# Per-client buffer size
# Environment: LOGWISP_HTTPSERVER_BUFFER_SIZE
# CLI: --http-buffer-size SIZE (or legacy --buffer-size)
buffer_size = 1000

# TLS/SSL settings (not implemented in PoC)
ssl_enabled = false
ssl_cert_file = ""
ssl_key_file = ""

[httpserver.heartbeat]
# Enable/disable heartbeat messages
# Environment: LOGWISP_HTTPSERVER_HEARTBEAT_ENABLED
enabled = true

# Heartbeat interval in seconds
# Environment: LOGWISP_HTTPSERVER_HEARTBEAT_INTERVAL_SECONDS
interval_seconds = 30

# Include timestamp in heartbeat
# Environment: LOGWISP_HTTPSERVER_HEARTBEAT_INCLUDE_TIMESTAMP
include_timestamp = true

# Include server statistics (active clients, uptime)
# Environment: LOGWISP_HTTPSERVER_HEARTBEAT_INCLUDE_STATS
include_stats = false

# Format: "comment" (SSE comment) or "json" (data message)
# Environment: LOGWISP_HTTPSERVER_HEARTBEAT_FORMAT
format = "comment"

# Production example:
# [tcpserver]
# enabled = true
# port = 9090
# buffer_size = 5000
#
# [httpserver]
# enabled = true
# port = 443
# buffer_size = 5000
# ssl_enabled = true
# ssl_cert_file = "/etc/ssl/certs/logwisp.crt"
# ssl_key_file = "/etc/ssl/private/logwisp.key"

go.mod — 12 lines changed

require (
	github.com/lixenwraith/config v0.0.0-20250701170607-8515fa0543b6
	github.com/panjf2000/gnet/v2 v2.9.1
	github.com/valyala/fasthttp v1.63.0
)

require (
	github.com/BurntSushi/toml v1.5.0 // indirect
	github.com/andybalholm/brotli v1.2.0 // indirect
	github.com/klauspost/compress v1.18.0 // indirect
	github.com/mitchellh/mapstructure v1.5.0 // indirect
	github.com/panjf2000/ants/v2 v2.11.3 // indirect
	github.com/valyala/bytebufferpool v1.0.0 // indirect
	go.uber.org/multierr v1.11.0 // indirect
	go.uber.org/zap v1.27.0 // indirect
	golang.org/x/sync v0.15.0 // indirect
	golang.org/x/sys v0.33.0 // indirect
	gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
)

go.sum — 28 lines changed

github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg=
github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
github.com/andybalholm/brotli v1.2.0 h1:ukwgCxwYrmACq68yiUqwIWnGY0cTPox/M94sVwToPjQ=
github.com/andybalholm/brotli v1.2.0/go.mod h1:rzTDkvFWvIrjDXZHkuS16NPggd91W3kUSvPlQ1pLaKY=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/lixenwraith/config v0.0.0-20250701170607-8515fa0543b6 h1:qE4SpAJWFaLkdRyE0FjTPBBRYE7LOvcmRCB5p86W73Q=
github.com/lixenwraith/config v0.0.0-20250701170607-8515fa0543b6/go.mod h1:4wPJ3HnLrYrtUwTinngCsBgtdIXsnxkLa7q4KAIbwY8=
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/panjf2000/ants/v2 v2.11.3 h1:AfI0ngBoXJmYOpDh9m516vjqoUu2sLrIVgppI9TZVpg=
github.com/panjf2000/ants/v2 v2.11.3/go.mod h1:8u92CYMUc6gyvTIw8Ru7Mt7+/ESnJahz5EVtqfrilek=
github.com/panjf2000/gnet/v2 v2.9.1 h1:bKewICy/0xnQ9PMzNaswpe/Ah14w1TrRk91LHTcbIlA=
github.com/panjf2000/gnet/v2 v2.9.1/go.mod h1:WQTxDWYuQ/hz3eccH0FN32IVuvZ19HewEWx0l62fx7E=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasthttp v1.63.0 h1:DisIL8OjB7ul2d7cBaMRcKTQDYnrGy56R4FCiuDP0Ns=
github.com/valyala/fasthttp v1.63.0/go.mod h1:REc4IeW+cAEyLrRPa5A81MIjvz0QE1laoTX2EaPHKJM=
github.com/xyproto/randomstring v1.0.5 h1:YtlWPoRdgMu3NZtP45drfy1GKoojuR7hmRcnhZqKjWU=
github.com/xyproto/randomstring v1.0.5/go.mod h1:rgmS5DeNXLivK7YprL0pY+lTuhNQW3iGxZ18UQApw/E=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
golang.org/x/sync v0.15.0 h1:KWH3jNZsfyT6xfAfKiz6MRNmd46ByHDYaZ7KSkCtdW8=
golang.org/x/sync v0.15.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
gopkg.in/natefinch/lumberjack.v2 v2.2.1 h1:bBRl1b0OH9s/DuPhuXpNl+VtCaJXFZ5/uEFST95x9zc=
gopkg.in/natefinch/lumberjack.v2 v2.2.1/go.mod h1:YD8tP3GAjkrDg1eZH7EGmyESg/lsYskCTPBJVb9jqSc=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

@@ -1,230 +1,174 @@
// FILE: src/cmd/logwisp/main.go
package main

import (
    "context"
    "flag"
    "fmt"
    "os"
    "os/signal"
    "sync"
    "syscall"
    "time"

    "logwisp/src/internal/config"
    "logwisp/src/internal/monitor"
    "logwisp/src/internal/stream"
)

func main() {
    // Parse CLI flags
    var (
        configFile = flag.String("config", "", "Config file path")
        // Legacy compatibility flags
        port       = flag.Int("port", 0, "HTTP port (legacy, maps to --http-port)")
        bufferSize = flag.Int("buffer-size", 0, "Buffer size (legacy, maps to --http-buffer-size)")
        // New explicit flags
        httpPort      = flag.Int("http-port", 0, "HTTP server port")
        httpBuffer    = flag.Int("http-buffer-size", 0, "HTTP server buffer size")
        tcpPort       = flag.Int("tcp-port", 0, "TCP server port")
        tcpBuffer     = flag.Int("tcp-buffer-size", 0, "TCP server buffer size")
        enableTCP     = flag.Bool("enable-tcp", false, "Enable TCP server")
        enableHTTP    = flag.Bool("enable-http", false, "Enable HTTP server")
        checkInterval = flag.Int("check-interval", 0, "File check interval in ms")
    )
    flag.Parse()

    if *configFile != "" {
        os.Setenv("LOGWISP_CONFIG_FILE", *configFile)
    }

    // Build CLI args for config
    var cliArgs []string
    // Legacy mapping
    if *port > 0 {
        cliArgs = append(cliArgs, fmt.Sprintf("--httpserver.port=%d", *port))
    }
    if *bufferSize > 0 {
        cliArgs = append(cliArgs, fmt.Sprintf("--httpserver.buffer_size=%d", *bufferSize))
    }
    // New flags
    if *httpPort > 0 {
        cliArgs = append(cliArgs, fmt.Sprintf("--httpserver.port=%d", *httpPort))
    }
    if *httpBuffer > 0 {
        cliArgs = append(cliArgs, fmt.Sprintf("--httpserver.buffer_size=%d", *httpBuffer))
    }
    if *tcpPort > 0 {
        cliArgs = append(cliArgs, fmt.Sprintf("--tcpserver.port=%d", *tcpPort))
    }
    if *tcpBuffer > 0 {
        cliArgs = append(cliArgs, fmt.Sprintf("--tcpserver.buffer_size=%d", *tcpBuffer))
    }
    if flag.Lookup("enable-tcp").DefValue != flag.Lookup("enable-tcp").Value.String() {
        cliArgs = append(cliArgs, fmt.Sprintf("--tcpserver.enabled=%v", *enableTCP))
    }
    if flag.Lookup("enable-http").DefValue != flag.Lookup("enable-http").Value.String() {
        cliArgs = append(cliArgs, fmt.Sprintf("--httpserver.enabled=%v", *enableHTTP))
    }
    if *checkInterval > 0 {
        cliArgs = append(cliArgs, fmt.Sprintf("--monitor.check_interval_ms=%d", *checkInterval))
    }

    // Parse monitor targets from remaining args
    for _, arg := range flag.Args() {
        cliArgs = append(cliArgs, fmt.Sprintf("--monitor.targets.add=%s", arg))
    }

    // Load configuration
    cfg, err := config.LoadWithCLI(cliArgs)
    if err != nil {
        fmt.Fprintf(os.Stderr, "Failed to load config: %v\n", err)
        os.Exit(1)
    }

    // Create context for shutdown
    ctx, cancel := context.WithCancel(context.Background())

    // Setup signal handling
    sigChan := make(chan os.Signal, 1)
    signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM)

    var wg sync.WaitGroup

    // Create monitor
    mon := monitor.New()
    mon.SetCheckInterval(time.Duration(cfg.Monitor.CheckIntervalMs) * time.Millisecond)

    // Add targets
    for _, target := range cfg.Monitor.Targets {
        if err := mon.AddTarget(target.Path, target.Pattern, target.IsFile); err != nil {
            fmt.Fprintf(os.Stderr, "Failed to add target %s: %v\n", target.Path, err)
        }
    }

    // Start monitor
    if err := mon.Start(ctx); err != nil {
        fmt.Fprintf(os.Stderr, "Failed to start monitor: %v\n", err)
        os.Exit(1)
    }

    var tcpServer *stream.TCPStreamer
    var httpServer *stream.HTTPStreamer

    // Start TCP server if enabled
    if cfg.TCPServer.Enabled {
        tcpChan := mon.Subscribe()
        tcpServer = stream.NewTCPStreamer(tcpChan, cfg.TCPServer)
        wg.Add(1)
        go func() {
            defer wg.Done()
            if err := tcpServer.Start(); err != nil {
                fmt.Fprintf(os.Stderr, "TCP server error: %v\n", err)
            }
        }()

        fmt.Printf("TCP streaming on port %d\n", cfg.TCPServer.Port)
    }

    // Start HTTP server if enabled
    if cfg.HTTPServer.Enabled {
        httpChan := mon.Subscribe()
        httpServer = stream.NewHTTPStreamer(httpChan, cfg.HTTPServer)
        wg.Add(1)
        go func() {
            defer wg.Done()
            if err := httpServer.Start(); err != nil {
                fmt.Fprintf(os.Stderr, "HTTP server error: %v\n", err)
            }
        }()

        fmt.Printf("HTTP/SSE streaming on http://localhost:%d/stream\n", cfg.HTTPServer.Port)
        fmt.Printf("Status available at http://localhost:%d/status\n", cfg.HTTPServer.Port)
    }

    if !cfg.TCPServer.Enabled && !cfg.HTTPServer.Enabled {
        fmt.Fprintln(os.Stderr, "No servers enabled. Enable at least one server in config.")
        os.Exit(1)
    }

    // Wait for shutdown
    <-sigChan
    fmt.Println("\nShutting down...")

    // Stop servers first
    if tcpServer != nil {
        tcpServer.Stop()
    }
    if httpServer != nil {
        httpServer.Stop()
    }

    // Cancel context and stop monitor
    cancel()
    mon.Stop()

    // Wait for completion
    done := make(chan struct{})
    go func() {
        wg.Wait()
        close(done)
    }()

    select {
    case <-done:
        fmt.Println("Shutdown complete")
    case <-time.After(2 * time.Second):
        fmt.Println("Shutdown timeout")
    }
}


@@ -1,4 +1,4 @@
// FILE: src/internal/config/config.go
package config

import (
@@ -10,114 +10,97 @@ import (
    lconfig "github.com/lixenwraith/config"
)

type Config struct {
    Monitor    MonitorConfig `toml:"monitor"`
    TCPServer  TCPConfig     `toml:"tcpserver"`
    HTTPServer HTTPConfig    `toml:"httpserver"`
}

type MonitorConfig struct {
    CheckIntervalMs int             `toml:"check_interval_ms"`
    Targets         []MonitorTarget `toml:"targets"`
}

type MonitorTarget struct {
    Path    string `toml:"path"`
    Pattern string `toml:"pattern"`
    IsFile  bool   `toml:"is_file"`
}

type TCPConfig struct {
    Enabled     bool            `toml:"enabled"`
    Port        int             `toml:"port"`
    BufferSize  int             `toml:"buffer_size"`
    SSLEnabled  bool            `toml:"ssl_enabled"`
    SSLCertFile string          `toml:"ssl_cert_file"`
    SSLKeyFile  string          `toml:"ssl_key_file"`
    Heartbeat   HeartbeatConfig `toml:"heartbeat"`
}

type HTTPConfig struct {
    Enabled     bool            `toml:"enabled"`
    Port        int             `toml:"port"`
    BufferSize  int             `toml:"buffer_size"`
    SSLEnabled  bool            `toml:"ssl_enabled"`
    SSLCertFile string          `toml:"ssl_cert_file"`
    SSLKeyFile  string          `toml:"ssl_key_file"`
    Heartbeat   HeartbeatConfig `toml:"heartbeat"`
}

type HeartbeatConfig struct {
    Enabled          bool   `toml:"enabled"`
    IntervalSeconds  int    `toml:"interval_seconds"`
    IncludeTimestamp bool   `toml:"include_timestamp"`
    IncludeStats     bool   `toml:"include_stats"`
    Format           string `toml:"format"` // "comment" or "json"
}

func defaults() *Config {
    return &Config{
        Monitor: MonitorConfig{
            CheckIntervalMs: 100,
            Targets: []MonitorTarget{
                {Path: "./", Pattern: "*.log", IsFile: false},
            },
        },
        TCPServer: TCPConfig{
            Enabled:    false,
            Port:       9090,
            BufferSize: 1000,
            Heartbeat: HeartbeatConfig{
                Enabled:          false,
                IntervalSeconds:  30,
                IncludeTimestamp: true,
                IncludeStats:     false,
                Format:           "json",
            },
        },
        HTTPServer: HTTPConfig{
            Enabled:    true,
            Port:       8080,
            BufferSize: 1000,
            Heartbeat: HeartbeatConfig{
                Enabled:          true,
                IntervalSeconds:  30,
                IncludeTimestamp: true,
                IncludeStats:     false,
                Format:           "comment",
            },
        },
    }
}

func LoadWithCLI(cliArgs []string) (*Config, error) {
    configPath := GetConfigPath()

    cfg, err := lconfig.NewBuilder().
        WithDefaults(defaults()).
        WithEnvPrefix("LOGWISP_").
        WithFile(configPath).
        WithArgs(cliArgs).
        WithEnvTransform(customEnvTransform).
        WithSources(
            lconfig.SourceCLI,
            lconfig.SourceEnv,
            lconfig.SourceFile,
            lconfig.SourceDefault,
@@ -130,12 +113,10 @@ func LoadWithCLI(cliArgs []string) (*Config, error) {
        }
    }

    if err := handleMonitorTargetsEnv(cfg); err != nil {
        return nil, err
    }

    finalConfig := &Config{}
    if err := cfg.Scan("", finalConfig); err != nil {
        return nil, fmt.Errorf("failed to scan config: %w", err)
@@ -144,52 +125,14 @@ func LoadWithCLI(cliArgs []string) (*Config, error) {
    return finalConfig, finalConfig.validate()
}

func customEnvTransform(path string) string {
    env := strings.ReplaceAll(path, ".", "_")
    env = strings.ToUpper(env)
    env = "LOGWISP_" + env
    return env
}

func GetConfigPath() string {
    if configFile := os.Getenv("LOGWISP_CONFIG_FILE"); configFile != "" {
        if filepath.IsAbs(configFile) {
            return configFile
@@ -204,7 +147,6 @@ func GetConfigPath() string {
        return filepath.Join(configDir, "logwisp.toml")
    }

    if homeDir, err := os.UserHomeDir(); err == nil {
        return filepath.Join(homeDir, ".config", "logwisp.toml")
    }
@@ -212,13 +154,10 @@ func GetConfigPath() string {
    return "logwisp.toml"
}

func handleMonitorTargetsEnv(cfg *lconfig.Config) error {
    if targetsStr := os.Getenv("LOGWISP_MONITOR_TARGETS"); targetsStr != "" {
        cfg.Set("monitor.targets", []MonitorTarget{})

        parts := strings.Split(targetsStr, ",")
        for i, part := range parts {
            targetParts := strings.Split(part, ":")
@@ -248,12 +187,7 @@ func handleMonitorTargetsEnv(cfg *lconfig.Config) error {
    return nil
}

func (c *Config) validate() error {
    if c.Monitor.CheckIntervalMs < 10 {
        return fmt.Errorf("check interval too small: %d ms", c.Monitor.CheckIntervalMs)
    }
@@ -266,33 +200,44 @@ func (c *Config) validate() error {
        if target.Path == "" {
            return fmt.Errorf("target %d: empty path", i)
        }
        if strings.Contains(target.Path, "..") {
            return fmt.Errorf("target %d: path contains directory traversal", i)
        }
    }

    if c.TCPServer.Enabled {
        if c.TCPServer.Port < 1 || c.TCPServer.Port > 65535 {
            return fmt.Errorf("invalid TCP port: %d", c.TCPServer.Port)
        }
        if c.TCPServer.BufferSize < 1 {
            return fmt.Errorf("TCP buffer size must be positive: %d", c.TCPServer.BufferSize)
        }
    }

    if c.HTTPServer.Enabled {
        if c.HTTPServer.Port < 1 || c.HTTPServer.Port > 65535 {
            return fmt.Errorf("invalid HTTP port: %d", c.HTTPServer.Port)
        }
        if c.HTTPServer.BufferSize < 1 {
            return fmt.Errorf("HTTP buffer size must be positive: %d", c.HTTPServer.BufferSize)
        }
    }

    if c.TCPServer.Enabled && c.TCPServer.Heartbeat.Enabled {
        if c.TCPServer.Heartbeat.IntervalSeconds < 1 {
            return fmt.Errorf("TCP heartbeat interval must be positive: %d", c.TCPServer.Heartbeat.IntervalSeconds)
        }
        if c.TCPServer.Heartbeat.Format != "json" && c.TCPServer.Heartbeat.Format != "comment" {
            return fmt.Errorf("TCP heartbeat format must be 'json' or 'comment': %s", c.TCPServer.Heartbeat.Format)
        }
    }

    if c.HTTPServer.Enabled && c.HTTPServer.Heartbeat.Enabled {
        if c.HTTPServer.Heartbeat.IntervalSeconds < 1 {
            return fmt.Errorf("HTTP heartbeat interval must be positive: %d", c.HTTPServer.Heartbeat.IntervalSeconds)
        }
        if c.HTTPServer.Heartbeat.Format != "json" && c.HTTPServer.Heartbeat.Format != "comment" {
            return fmt.Errorf("HTTP heartbeat format must be 'json' or 'comment': %s", c.HTTPServer.Heartbeat.Format)
        }
    }


@@ -1,126 +0,0 @@
// File: logwisp/src/internal/middleware/ratelimit.go
package middleware
import (
"fmt"
"net/http"
"sync"
"time"
"golang.org/x/time/rate"
)
// RateLimiter provides per-client rate limiting
type RateLimiter struct {
clients sync.Map // map[string]*clientLimiter
requestsPerSec int
burstSize int
cleanupInterval time.Duration
done chan struct{}
}
type clientLimiter struct {
limiter *rate.Limiter
lastSeen time.Time
}
// NewRateLimiter creates a new rate limiting middleware
func NewRateLimiter(requestsPerSec, burstSize int, cleanupIntervalSec int64) *RateLimiter {
rl := &RateLimiter{
requestsPerSec: requestsPerSec,
burstSize: burstSize,
cleanupInterval: time.Duration(cleanupIntervalSec) * time.Second,
done: make(chan struct{}),
}
// Start cleanup routine
go rl.cleanup()
return rl
}
// Middleware returns an HTTP middleware function
func (rl *RateLimiter) Middleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Get client IP
clientIP := r.RemoteAddr
if forwarded := r.Header.Get("X-Forwarded-For"); forwarded != "" {
clientIP = forwarded
}
// Get or create limiter for client
limiter := rl.getLimiter(clientIP)
// Check rate limit
if !limiter.Allow() {
http.Error(w, "Rate limit exceeded", http.StatusTooManyRequests)
return
}
// Continue to next handler
next.ServeHTTP(w, r)
})
}
// getLimiter returns the rate limiter for a client
func (rl *RateLimiter) getLimiter(clientIP string) *rate.Limiter {
// Try to get existing limiter
if val, ok := rl.clients.Load(clientIP); ok {
client := val.(*clientLimiter)
client.lastSeen = time.Now()
return client.limiter
}
// Create new limiter
limiter := rate.NewLimiter(rate.Limit(rl.requestsPerSec), rl.burstSize)
client := &clientLimiter{
limiter: limiter,
lastSeen: time.Now(),
}
rl.clients.Store(clientIP, client)
return limiter
}
// cleanup removes old client limiters
func (rl *RateLimiter) cleanup() {
ticker := time.NewTicker(rl.cleanupInterval)
defer ticker.Stop()
for {
select {
case <-rl.done:
return
case <-ticker.C:
rl.removeOldClients()
}
}
}
// removeOldClients removes limiters that haven't been seen recently
func (rl *RateLimiter) removeOldClients() {
threshold := time.Now().Add(-rl.cleanupInterval * 2) // Keep for 2x cleanup interval
rl.clients.Range(func(key, value interface{}) bool {
client := value.(*clientLimiter)
if client.lastSeen.Before(threshold) {
rl.clients.Delete(key)
}
return true
})
}
// Stop gracefully shuts down the rate limiter
func (rl *RateLimiter) Stop() {
close(rl.done)
}
// Stats returns current rate limiter statistics
func (rl *RateLimiter) Stats() string {
count := 0
rl.clients.Range(func(_, _ interface{}) bool {
count++
return true
})
return fmt.Sprintf("Active clients: %d", count)
}


@@ -0,0 +1,261 @@
package monitor
import (
"bufio"
"context"
"encoding/json"
"fmt"
"io"
"os"
"path/filepath"
"regexp"
"strings"
"sync"
"syscall"
"time"
)
type fileWatcher struct {
path string
callback func(LogEntry)
position int64
size int64
inode uint64
modTime time.Time
mu sync.Mutex
stopped bool
rotationSeq int
}
func newFileWatcher(path string, callback func(LogEntry)) *fileWatcher {
return &fileWatcher{
path: path,
callback: callback,
}
}
func (w *fileWatcher) watch(ctx context.Context) {
if err := w.seekToEnd(); err != nil {
return
}
ticker := time.NewTicker(100 * time.Millisecond)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
if w.isStopped() {
return
}
w.checkFile()
}
}
}
func (w *fileWatcher) seekToEnd() error {
file, err := os.Open(w.path)
if err != nil {
return err
}
defer file.Close()
info, err := file.Stat()
if err != nil {
return err
}
pos, err := file.Seek(0, io.SeekEnd)
if err != nil {
return err
}
w.mu.Lock()
w.position = pos
w.size = info.Size()
w.modTime = info.ModTime()
if stat, ok := info.Sys().(*syscall.Stat_t); ok {
w.inode = stat.Ino
}
w.mu.Unlock()
return nil
}
func (w *fileWatcher) checkFile() error {
file, err := os.Open(w.path)
if err != nil {
return err
}
defer file.Close()
info, err := file.Stat()
if err != nil {
return err
}
w.mu.Lock()
oldPos := w.position
oldSize := w.size
oldInode := w.inode
oldModTime := w.modTime
w.mu.Unlock()
currentSize := info.Size()
currentModTime := info.ModTime()
var currentInode uint64
if stat, ok := info.Sys().(*syscall.Stat_t); ok {
currentInode = stat.Ino
}
rotated := false
rotationReason := ""
if oldInode != 0 && currentInode != 0 && currentInode != oldInode {
rotated = true
rotationReason = "inode change"
}
if !rotated && currentSize < oldSize {
rotated = true
rotationReason = "size decrease"
}
if !rotated && currentModTime.Before(oldModTime) && currentSize <= oldSize {
rotated = true
rotationReason = "modification time reset"
}
if !rotated && oldPos > currentSize+1024 {
rotated = true
rotationReason = "position beyond file size"
}
newPos := oldPos
if rotated {
newPos = 0
w.mu.Lock()
w.rotationSeq++
seq := w.rotationSeq
w.inode = currentInode
w.mu.Unlock()
w.callback(LogEntry{
Time: time.Now(),
Source: filepath.Base(w.path),
Level: "INFO",
Message: fmt.Sprintf("Log rotation detected (#%d): %s", seq, rotationReason),
})
}
if _, err := file.Seek(newPos, io.SeekStart); err != nil {
return err
}
scanner := bufio.NewScanner(file)
scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)
for scanner.Scan() {
line := scanner.Text()
if line == "" {
continue
}
entry := w.parseLine(line)
w.callback(entry)
}
if currentPos, err := file.Seek(0, io.SeekCurrent); err == nil {
w.mu.Lock()
w.position = currentPos
w.size = currentSize
w.modTime = currentModTime
w.mu.Unlock()
}
return scanner.Err()
}
func (w *fileWatcher) parseLine(line string) LogEntry {
var jsonLog struct {
Time string `json:"time"`
Level string `json:"level"`
Message string `json:"msg"`
Fields json.RawMessage `json:"fields"`
}
if err := json.Unmarshal([]byte(line), &jsonLog); err == nil {
timestamp, err := time.Parse(time.RFC3339Nano, jsonLog.Time)
if err != nil {
timestamp = time.Now()
}
return LogEntry{
Time: timestamp,
Source: filepath.Base(w.path),
Level: jsonLog.Level,
Message: jsonLog.Message,
Fields: jsonLog.Fields,
}
}
level := extractLogLevel(line)
return LogEntry{
Time: time.Now(),
Source: filepath.Base(w.path),
Level: level,
Message: line,
}
}
func extractLogLevel(line string) string {
patterns := []struct {
patterns []string
level string
}{
{[]string{"[ERROR]", "ERROR:", " ERROR ", "ERR:", "[ERR]", "FATAL:", "[FATAL]"}, "ERROR"},
{[]string{"[WARN]", "WARN:", " WARN ", "WARNING:", "[WARNING]"}, "WARN"},
{[]string{"[INFO]", "INFO:", " INFO ", "[INF]", "INF:"}, "INFO"},
{[]string{"[DEBUG]", "DEBUG:", " DEBUG ", "[DBG]", "DBG:"}, "DEBUG"},
{[]string{"[TRACE]", "TRACE:", " TRACE "}, "TRACE"},
}
upperLine := strings.ToUpper(line)
for _, group := range patterns {
for _, pattern := range group.patterns {
if strings.Contains(upperLine, pattern) {
return group.level
}
}
}
return ""
}
func globToRegex(glob string) string {
regex := regexp.QuoteMeta(glob)
regex = strings.ReplaceAll(regex, `\*`, `.*`)
regex = strings.ReplaceAll(regex, `\?`, `.`)
return "^" + regex + "$"
}
func (w *fileWatcher) close() {
w.stop()
}
func (w *fileWatcher) stop() {
w.mu.Lock()
w.stopped = true
w.mu.Unlock()
}
func (w *fileWatcher) isStopped() bool {
w.mu.Lock()
defer w.mu.Unlock()
return w.stopped
}

src/internal/monitor/monitor.go (modified)

@ -1,22 +1,17 @@
-// File: logwisp/src/internal/monitor/monitor.go
+// FILE: src/internal/monitor/monitor.go
 package monitor

 import (
-	"bufio"
 	"context"
 	"encoding/json"
 	"fmt"
-	"io"
 	"os"
 	"path/filepath"
 	"regexp"
-	"strings"
 	"sync"
-	"syscall"
 	"time"
 )

-// LogEntry represents a log line to be streamed
 type LogEntry struct {
 	Time   time.Time `json:"time"`
 	Source string    `json:"source"`
@ -25,9 +20,8 @@ type LogEntry struct {
 	Fields json.RawMessage `json:"fields,omitempty"`
 }

-// Monitor watches files and directories for log entries
 type Monitor struct {
-	callback func(LogEntry)
+	subscribers []chan LogEntry
 	targets  []target
 	watchers map[string]*fileWatcher
 	mu       sync.RWMutex
@ -41,26 +35,44 @@ type target struct {
 	path    string
 	pattern string
 	isFile  bool
-	regex   *regexp.Regexp // FIXED: Compiled pattern for performance
+	regex   *regexp.Regexp
 }

-// New creates a new monitor instance
-func New(callback func(LogEntry)) *Monitor {
+func New() *Monitor {
 	return &Monitor{
-		callback:      callback,
 		watchers:      make(map[string]*fileWatcher),
 		checkInterval: 100 * time.Millisecond,
 	}
 }

-// SetCheckInterval configures the file check frequency
+func (m *Monitor) Subscribe() chan LogEntry {
+	m.mu.Lock()
+	defer m.mu.Unlock()
+	ch := make(chan LogEntry, 1000)
+	m.subscribers = append(m.subscribers, ch)
+	return ch
+}
+
+func (m *Monitor) publish(entry LogEntry) {
+	m.mu.RLock()
+	defer m.mu.RUnlock()
+	for _, ch := range m.subscribers {
+		select {
+		case ch <- entry:
+		default:
+			// Drop message if channel full
+		}
+	}
+}
+
 func (m *Monitor) SetCheckInterval(interval time.Duration) {
 	m.mu.Lock()
 	m.checkInterval = interval
 	m.mu.Unlock()
 }

-// AddTarget adds a path to monitor with enhanced pattern support
 func (m *Monitor) AddTarget(path, pattern string, isFile bool) error {
 	absPath, err := filepath.Abs(path)
 	if err != nil {
@ -69,7 +81,6 @@ func (m *Monitor) AddTarget(path, pattern string, isFile bool) error {
 	var compiledRegex *regexp.Regexp
 	if !isFile && pattern != "" {
-		// FIXED: Convert glob pattern to regex for better matching
 		regexPattern := globToRegex(pattern)
 		compiledRegex, err = regexp.Compile(regexPattern)
 		if err != nil {
@ -89,13 +100,10 @@ func (m *Monitor) AddTarget(path, pattern string, isFile bool) error {
 	return nil
 }

-// Start begins monitoring with configurable interval
 func (m *Monitor) Start(ctx context.Context) error {
 	m.ctx, m.cancel = context.WithCancel(ctx)
 	m.wg.Add(1)
 	go m.monitorLoop()
 	return nil
 }
@ -109,14 +117,15 @@ func (m *Monitor) Stop() {
 	for _, w := range m.watchers {
 		w.close()
 	}
+	for _, ch := range m.subscribers {
+		close(ch)
+	}
 	m.mu.Unlock()
 }

-// FIXED: Enhanced monitoring loop with configurable interval
 func (m *Monitor) monitorLoop() {
 	defer m.wg.Done()

-	// Initial scan
 	m.checkTargets()

 	m.mu.RLock()
@ -133,7 +142,6 @@ func (m *Monitor) monitorLoop() {
 		case <-ticker.C:
 			m.checkTargets()

-			// Update ticker interval if changed
 			m.mu.RLock()
 			newInterval := m.checkInterval
 			m.mu.RUnlock()
@ -147,7 +155,6 @@ func (m *Monitor) monitorLoop() {
 	}
 }

-// FIXED: Enhanced target checking with better file discovery
 func (m *Monitor) checkTargets() {
 	m.mu.RLock()
 	targets := make([]target, len(m.targets))
@ -158,12 +165,10 @@ func (m *Monitor) checkTargets() {
 		if t.isFile {
 			m.ensureWatcher(t.path)
 		} else {
-			// FIXED: More efficient directory scanning
 			files, err := m.scanDirectory(t.path, t.regex)
 			if err != nil {
 				continue
 			}
 			for _, file := range files {
 				m.ensureWatcher(file)
 			}
@ -173,7 +178,6 @@ func (m *Monitor) checkTargets() {
 	m.cleanupWatchers()
 }

-// FIXED: Optimized directory scanning
 func (m *Monitor) scanDirectory(dir string, pattern *regexp.Regexp) ([]string, error) {
 	entries, err := os.ReadDir(dir)
 	if err != nil {
@ -207,7 +211,7 @@ func (m *Monitor) ensureWatcher(path string) {
 		return
 	}

-	w := newFileWatcher(path, m.callback)
+	w := newFileWatcher(path, m.publish)
 	m.watchers[path] = w
 	m.wg.Add(1)
@ -231,268 +235,4 @@ func (m *Monitor) cleanupWatchers() {
 			delete(m.watchers, path)
 		}
 	}
}
// fileWatcher with enhanced rotation detection
type fileWatcher struct {
path string
callback func(LogEntry)
position int64
size int64
inode uint64
modTime time.Time
mu sync.Mutex
stopped bool
rotationSeq int // FIXED: Track rotation sequence for logging
}
func newFileWatcher(path string, callback func(LogEntry)) *fileWatcher {
return &fileWatcher{
path: path,
callback: callback,
}
}
func (w *fileWatcher) watch(ctx context.Context) {
if err := w.seekToEnd(); err != nil {
return
}
ticker := time.NewTicker(100 * time.Millisecond)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
if w.isStopped() {
return
}
w.checkFile()
}
}
}
// FIXED: Enhanced file state tracking for better rotation detection
func (w *fileWatcher) seekToEnd() error {
file, err := os.Open(w.path)
if err != nil {
return err
}
defer file.Close()
info, err := file.Stat()
if err != nil {
return err
}
pos, err := file.Seek(0, io.SeekEnd)
if err != nil {
return err
}
w.mu.Lock()
w.position = pos
w.size = info.Size()
w.modTime = info.ModTime()
// Get inode for rotation detection (Unix-specific)
if stat, ok := info.Sys().(*syscall.Stat_t); ok {
w.inode = stat.Ino
}
w.mu.Unlock()
return nil
}
// FIXED: Enhanced rotation detection with multiple signals
func (w *fileWatcher) checkFile() error {
file, err := os.Open(w.path)
if err != nil {
return err
}
defer file.Close()
info, err := file.Stat()
if err != nil {
return err
}
w.mu.Lock()
oldPos := w.position
oldSize := w.size
oldInode := w.inode
oldModTime := w.modTime
w.mu.Unlock()
currentSize := info.Size()
currentModTime := info.ModTime()
var currentInode uint64
if stat, ok := info.Sys().(*syscall.Stat_t); ok {
currentInode = stat.Ino
}
// FIXED: Multiple rotation detection methods
rotated := false
rotationReason := ""
// Method 1: Inode change (most reliable on Unix)
if oldInode != 0 && currentInode != 0 && currentInode != oldInode {
rotated = true
rotationReason = "inode change"
}
// Method 2: File size decrease
if !rotated && currentSize < oldSize {
rotated = true
rotationReason = "size decrease"
}
// Method 3: File modification time reset while size is same or smaller
if !rotated && currentModTime.Before(oldModTime) && currentSize <= oldSize {
rotated = true
rotationReason = "modification time reset"
}
// Method 4: Large position vs current size discrepancy
if !rotated && oldPos > currentSize+1024 { // Allow some buffer
rotated = true
rotationReason = "position beyond file size"
}
newPos := oldPos
if rotated {
newPos = 0
w.mu.Lock()
w.rotationSeq++
seq := w.rotationSeq
w.inode = currentInode
w.mu.Unlock()
// Log rotation event
w.callback(LogEntry{
Time: time.Now(),
Source: filepath.Base(w.path),
Level: "INFO",
Message: fmt.Sprintf("Log rotation detected (#%d): %s", seq, rotationReason),
})
}
// Seek to position and read new content
if _, err := file.Seek(newPos, io.SeekStart); err != nil {
return err
}
scanner := bufio.NewScanner(file)
scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024) // 1MB max line
lineCount := 0
for scanner.Scan() {
line := scanner.Text()
if line == "" {
continue
}
entry := w.parseLine(line)
w.callback(entry)
lineCount++
}
// Update file state
if currentPos, err := file.Seek(0, io.SeekCurrent); err == nil {
w.mu.Lock()
w.position = currentPos
w.size = currentSize
w.modTime = currentModTime
w.mu.Unlock()
}
return scanner.Err()
}
// FIXED: Enhanced log parsing with more level detection patterns
func (w *fileWatcher) parseLine(line string) LogEntry {
var jsonLog struct {
Time string `json:"time"`
Level string `json:"level"`
Message string `json:"msg"`
Fields json.RawMessage `json:"fields"`
}
// Try JSON parsing first
if err := json.Unmarshal([]byte(line), &jsonLog); err == nil {
timestamp, err := time.Parse(time.RFC3339Nano, jsonLog.Time)
if err != nil {
timestamp = time.Now()
}
return LogEntry{
Time: timestamp,
Source: filepath.Base(w.path),
Level: jsonLog.Level,
Message: jsonLog.Message,
Fields: jsonLog.Fields,
}
}
// Plain text with enhanced level extraction
level := extractLogLevel(line)
return LogEntry{
Time: time.Now(),
Source: filepath.Base(w.path),
Level: level,
Message: line,
}
}
// FIXED: More comprehensive log level extraction
func extractLogLevel(line string) string {
patterns := []struct {
patterns []string
level string
}{
{[]string{"[ERROR]", "ERROR:", " ERROR ", "ERR:", "[ERR]", "FATAL:", "[FATAL]"}, "ERROR"},
{[]string{"[WARN]", "WARN:", " WARN ", "WARNING:", "[WARNING]"}, "WARN"},
{[]string{"[INFO]", "INFO:", " INFO ", "[INF]", "INF:"}, "INFO"},
{[]string{"[DEBUG]", "DEBUG:", " DEBUG ", "[DBG]", "DBG:"}, "DEBUG"},
{[]string{"[TRACE]", "TRACE:", " TRACE "}, "TRACE"},
}
upperLine := strings.ToUpper(line)
for _, group := range patterns {
for _, pattern := range group.patterns {
if strings.Contains(upperLine, pattern) {
return group.level
}
}
}
return ""
}
// FIXED: Convert glob patterns to regex
func globToRegex(glob string) string {
regex := regexp.QuoteMeta(glob)
regex = strings.ReplaceAll(regex, `\*`, `.*`)
regex = strings.ReplaceAll(regex, `\?`, `.`)
return "^" + regex + "$"
}
func (w *fileWatcher) close() {
w.stop()
}
func (w *fileWatcher) stop() {
w.mu.Lock()
w.stopped = true
w.mu.Unlock()
}
func (w *fileWatcher) isStopped() bool {
w.mu.Lock()
defer w.mu.Unlock()
return w.stopped
} }

src/internal/stream/http.go (new file, 192 lines)

@ -0,0 +1,192 @@
// FILE: src/internal/stream/http.go
package stream
import (
"bufio"
"encoding/json"
"fmt"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/valyala/fasthttp"
"logwisp/src/internal/config"
"logwisp/src/internal/monitor"
)
type HTTPStreamer struct {
logChan chan monitor.LogEntry
config config.HTTPConfig
server *fasthttp.Server
activeClients atomic.Int32
mu sync.RWMutex
startTime time.Time
}
func NewHTTPStreamer(logChan chan monitor.LogEntry, cfg config.HTTPConfig) *HTTPStreamer {
return &HTTPStreamer{
logChan: logChan,
config: cfg,
startTime: time.Now(),
}
}
func (h *HTTPStreamer) Start() error {
h.server = &fasthttp.Server{
Handler: h.requestHandler,
DisableKeepalive: false,
StreamRequestBody: true,
Logger: nil, // nil falls back to fasthttp's default logger
}
addr := fmt.Sprintf(":%d", h.config.Port)
return h.server.ListenAndServe(addr)
}
func (h *HTTPStreamer) Stop() {
if h.server != nil {
h.server.Shutdown()
}
}
func (h *HTTPStreamer) requestHandler(ctx *fasthttp.RequestCtx) {
path := string(ctx.Path())
switch path {
case "/stream":
h.handleStream(ctx)
case "/status":
h.handleStatus(ctx)
default:
ctx.SetStatusCode(fasthttp.StatusNotFound)
}
}
func (h *HTTPStreamer) handleStream(ctx *fasthttp.RequestCtx) {
// Set SSE headers
ctx.Response.Header.Set("Content-Type", "text/event-stream")
ctx.Response.Header.Set("Cache-Control", "no-cache")
ctx.Response.Header.Set("Connection", "keep-alive")
ctx.Response.Header.Set("Access-Control-Allow-Origin", "*")
ctx.Response.Header.Set("X-Accel-Buffering", "no")
h.activeClients.Add(1)
defer h.activeClients.Add(-1)
// Create subscription for this client
clientChan := make(chan monitor.LogEntry, h.config.BufferSize)
// Subscribe to monitor's broadcast
go func() {
for entry := range h.logChan {
select {
case clientChan <- entry:
default:
// Drop if client buffer full
}
}
close(clientChan)
}()
// Define the stream writer function
streamFunc := func(w *bufio.Writer) {
// Send initial connected event
clientID := fmt.Sprintf("%d", time.Now().UnixNano())
fmt.Fprintf(w, "event: connected\ndata: {\"client_id\":\"%s\"}\n\n", clientID)
w.Flush()
var ticker *time.Ticker
var tickerChan <-chan time.Time
if h.config.Heartbeat.Enabled {
ticker = time.NewTicker(time.Duration(h.config.Heartbeat.IntervalSeconds) * time.Second)
tickerChan = ticker.C
defer ticker.Stop()
}
for {
select {
case entry, ok := <-clientChan:
if !ok {
return
}
data, err := json.Marshal(entry)
if err != nil {
continue
}
fmt.Fprintf(w, "data: %s\n\n", data)
if err := w.Flush(); err != nil {
return
}
case <-tickerChan:
if heartbeat := h.formatHeartbeat(); heartbeat != "" {
fmt.Fprint(w, heartbeat)
if err := w.Flush(); err != nil {
return
}
}
}
}
}
ctx.SetBodyStreamWriter(streamFunc)
}
func (h *HTTPStreamer) formatHeartbeat() string {
if !h.config.Heartbeat.Enabled {
return ""
}
if h.config.Heartbeat.Format == "json" {
data := make(map[string]interface{})
data["type"] = "heartbeat"
if h.config.Heartbeat.IncludeTimestamp {
data["timestamp"] = time.Now().UTC().Format(time.RFC3339)
}
if h.config.Heartbeat.IncludeStats {
data["active_clients"] = h.activeClients.Load()
data["uptime_seconds"] = int(time.Since(h.startTime).Seconds())
}
jsonData, _ := json.Marshal(data)
return fmt.Sprintf("data: %s\n\n", jsonData)
}
// Default comment format
var parts []string
parts = append(parts, "heartbeat")
if h.config.Heartbeat.IncludeTimestamp {
parts = append(parts, time.Now().UTC().Format(time.RFC3339))
}
if h.config.Heartbeat.IncludeStats {
parts = append(parts, fmt.Sprintf("clients=%d", h.activeClients.Load()))
parts = append(parts, fmt.Sprintf("uptime=%ds", int(time.Since(h.startTime).Seconds())))
}
return fmt.Sprintf(": %s\n\n", strings.Join(parts, " "))
}
func (h *HTTPStreamer) handleStatus(ctx *fasthttp.RequestCtx) {
ctx.SetContentType("application/json")
status := map[string]interface{}{
"service": "LogWisp",
"version": "3.0.0",
"http_server": map[string]interface{}{
"port": h.config.Port,
"active_clients": h.activeClients.Load(),
"buffer_size": h.config.BufferSize,
},
}
data, _ := json.Marshal(status)
ctx.SetBody(data)
}

src/internal/stream/noop_logger.go (new file, 11 lines)

@ -0,0 +1,11 @@
// FILE: src/internal/stream/noop_logger.go
package stream
// noopLogger implements gnet's Logger interface but discards everything
type noopLogger struct{}
func (n noopLogger) Debugf(format string, args ...any) {}
func (n noopLogger) Infof(format string, args ...any) {}
func (n noopLogger) Warnf(format string, args ...any) {}
func (n noopLogger) Errorf(format string, args ...any) {}
func (n noopLogger) Fatalf(format string, args ...any) {}
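The heartbeat settings consumed above (`Enabled`, `IntervalSeconds`, `Format`, `IncludeTimestamp`, `IncludeStats`) map naturally onto a TOML block. The key names below are an assumption for illustration only; the config file schema is not part of this diff:

```toml
# Hypothetical logwisp.toml fragment -- key names are illustrative.
[http.heartbeat]
enabled = true
interval_seconds = 30
format = "comment"        # "comment" emits SSE comments, "json" emits data events
include_timestamp = true
include_stats = true

[tcp.heartbeat]
enabled = true
interval_seconds = 30
include_timestamp = true
include_stats = true
```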

src/internal/stream/stream.go (deleted, 245 lines)

@ -1,245 +0,0 @@
// File: logwisp/src/internal/stream/stream.go
package stream
import (
"encoding/json"
"fmt"
"net/http"
"sync"
"sync/atomic"
"time"
"logwisp/src/internal/monitor"
)
// Streamer handles Server-Sent Events streaming
type Streamer struct {
clients map[string]*clientConnection
register chan *clientConnection
unregister chan string
broadcast chan monitor.LogEntry
mu sync.RWMutex
bufferSize int
done chan struct{}
colorMode bool
wg sync.WaitGroup
// Metrics
totalDropped atomic.Int64
}
type clientConnection struct {
id string
channel chan monitor.LogEntry
lastActivity time.Time
dropped atomic.Int64 // Track per-client dropped messages
}
// New creates a new SSE streamer
func New(bufferSize int) *Streamer {
return NewWithOptions(bufferSize, false)
}
// NewWithOptions creates a new SSE streamer with options
func NewWithOptions(bufferSize int, colorMode bool) *Streamer {
s := &Streamer{
clients: make(map[string]*clientConnection),
register: make(chan *clientConnection),
unregister: make(chan string),
broadcast: make(chan monitor.LogEntry, bufferSize),
bufferSize: bufferSize,
done: make(chan struct{}),
colorMode: colorMode,
}
s.wg.Add(1)
go s.run()
return s
}
// run manages client connections - SIMPLIFIED: no forced disconnections
func (s *Streamer) run() {
defer s.wg.Done()
for {
select {
case c := <-s.register:
s.mu.Lock()
s.clients[c.id] = c
s.mu.Unlock()
case id := <-s.unregister:
s.mu.Lock()
if client, ok := s.clients[id]; ok {
close(client.channel)
delete(s.clients, id)
}
s.mu.Unlock()
case entry := <-s.broadcast:
s.mu.RLock()
now := time.Now()
for _, client := range s.clients {
select {
case client.channel <- entry:
// Successfully sent
client.lastActivity = now
client.dropped.Store(0) // Reset dropped counter on success
default:
// Buffer full - skip this message for this client
// Don't disconnect, just track dropped messages
dropped := client.dropped.Add(1)
s.totalDropped.Add(1)
// Log significant drop milestones for monitoring
if dropped == 100 || dropped == 1000 || dropped%10000 == 0 {
// Could add logging here if needed
}
}
}
s.mu.RUnlock()
case <-s.done:
s.mu.Lock()
for _, client := range s.clients {
close(client.channel)
}
s.clients = make(map[string]*clientConnection)
s.mu.Unlock()
return
}
}
}
// Publish sends a log entry to all connected clients
func (s *Streamer) Publish(entry monitor.LogEntry) {
select {
case s.broadcast <- entry:
// Sent to broadcast channel
default:
// Broadcast buffer full - drop the message globally
s.totalDropped.Add(1)
}
}
// ServeHTTP implements http.Handler for SSE - SIMPLIFIED
func (s *Streamer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
// Set SSE headers
w.Header().Set("Content-Type", "text/event-stream")
w.Header().Set("Cache-Control", "no-cache")
w.Header().Set("Connection", "keep-alive")
w.Header().Set("X-Accel-Buffering", "no") // Disable nginx buffering
// SECURITY: Prevent XSS
w.Header().Set("X-Content-Type-Options", "nosniff")
// Create client
clientID := fmt.Sprintf("%d", time.Now().UnixNano())
ch := make(chan monitor.LogEntry, s.bufferSize)
client := &clientConnection{
id: clientID,
channel: ch,
lastActivity: time.Now(),
}
// Register client
s.register <- client
defer func() {
s.unregister <- clientID
}()
// Send initial connection event
fmt.Fprintf(w, "event: connected\ndata: {\"client_id\":\"%s\",\"buffer_size\":%d}\n\n",
clientID, s.bufferSize)
if flusher, ok := w.(http.Flusher); ok {
flusher.Flush()
}
// Create ticker for heartbeat - keeps connection alive through proxies
ticker := time.NewTicker(30 * time.Second)
defer ticker.Stop()
// Stream events until client disconnects
for {
select {
case <-r.Context().Done():
// Client disconnected
return
case entry, ok := <-ch:
if !ok {
// Channel closed
return
}
// Process entry for color if needed
if s.colorMode {
entry = s.processColorEntry(entry)
}
data, err := json.Marshal(entry)
if err != nil {
continue
}
fmt.Fprintf(w, "data: %s\n\n", data)
if flusher, ok := w.(http.Flusher); ok {
flusher.Flush()
}
case <-ticker.C:
// Send heartbeat as SSE comment
fmt.Fprintf(w, ": heartbeat %s\n\n", time.Now().UTC().Format(time.RFC3339))
if flusher, ok := w.(http.Flusher); ok {
flusher.Flush()
}
}
}
}
// Stop gracefully shuts down the streamer
func (s *Streamer) Stop() {
close(s.done)
s.wg.Wait()
close(s.register)
close(s.unregister)
close(s.broadcast)
}
// processColorEntry preserves ANSI codes in JSON
func (s *Streamer) processColorEntry(entry monitor.LogEntry) monitor.LogEntry {
return entry
}
// Stats returns current streamer statistics
func (s *Streamer) Stats() map[string]interface{} {
s.mu.RLock()
defer s.mu.RUnlock()
stats := map[string]interface{}{
"active_clients": len(s.clients),
"buffer_size": s.bufferSize,
"color_mode": s.colorMode,
"total_dropped": s.totalDropped.Load(),
}
// Include per-client dropped counts if any are significant
var clientsWithDrops []map[string]interface{}
for id, client := range s.clients {
dropped := client.dropped.Load()
if dropped > 0 {
clientsWithDrops = append(clientsWithDrops, map[string]interface{}{
"id": id,
"dropped": dropped,
})
}
}
if len(clientsWithDrops) > 0 {
stats["clients_with_drops"] = clientsWithDrops
}
return stats
}

src/internal/stream/tcp.go (new file, 144 lines)

@ -0,0 +1,144 @@
// FILE: src/internal/stream/tcp.go
package stream
import (
"encoding/json"
"fmt"
"sync"
"sync/atomic"
"time"
"github.com/panjf2000/gnet/v2"
"logwisp/src/internal/config"
"logwisp/src/internal/monitor"
)
type TCPStreamer struct {
logChan chan monitor.LogEntry
config config.TCPConfig
server *tcpServer
done chan struct{}
activeConns atomic.Int32
startTime time.Time
}
type tcpServer struct {
gnet.BuiltinEventEngine
streamer *TCPStreamer
connections sync.Map
}
func NewTCPStreamer(logChan chan monitor.LogEntry, cfg config.TCPConfig) *TCPStreamer {
return &TCPStreamer{
logChan: logChan,
config: cfg,
done: make(chan struct{}),
startTime: time.Now(),
}
}
func (t *TCPStreamer) Start() error {
t.server = &tcpServer{streamer: t}
// Start log broadcast loop
go t.broadcastLoop()
// Configure gnet with no-op logger
addr := fmt.Sprintf("tcp://:%d", t.config.Port)
err := gnet.Run(t.server, addr,
gnet.WithLogger(noopLogger{}), // No-op logger: discard everything
gnet.WithMulticore(true),
gnet.WithReusePort(true),
)
return err
}
func (t *TCPStreamer) Stop() {
close(t.done)
// No engine to stop with gnet v2
}
func (t *TCPStreamer) broadcastLoop() {
var ticker *time.Ticker
var tickerChan <-chan time.Time
if t.config.Heartbeat.Enabled {
ticker = time.NewTicker(time.Duration(t.config.Heartbeat.IntervalSeconds) * time.Second)
tickerChan = ticker.C
defer ticker.Stop()
}
for {
select {
case entry := <-t.logChan:
data, err := json.Marshal(entry)
if err != nil {
continue
}
data = append(data, '\n')
t.server.connections.Range(func(key, value interface{}) bool {
conn := key.(gnet.Conn)
conn.AsyncWrite(data, nil)
return true
})
case <-tickerChan:
if heartbeat := t.formatHeartbeat(); heartbeat != nil {
t.server.connections.Range(func(key, value interface{}) bool {
conn := key.(gnet.Conn)
conn.AsyncWrite(heartbeat, nil)
return true
})
}
case <-t.done:
return
}
}
}
func (t *TCPStreamer) formatHeartbeat() []byte {
if !t.config.Heartbeat.Enabled {
return nil
}
data := make(map[string]interface{})
data["type"] = "heartbeat"
if t.config.Heartbeat.IncludeTimestamp {
data["time"] = time.Now().UTC().Format(time.RFC3339Nano)
}
if t.config.Heartbeat.IncludeStats {
data["active_connections"] = t.activeConns.Load()
data["uptime_seconds"] = int(time.Since(t.startTime).Seconds())
}
jsonData, _ := json.Marshal(data)
return append(jsonData, '\n')
}
func (s *tcpServer) OnBoot(eng gnet.Engine) gnet.Action {
return gnet.None
}
func (s *tcpServer) OnOpen(c gnet.Conn) (out []byte, action gnet.Action) {
s.connections.Store(c, struct{}{})
s.streamer.activeConns.Add(1)
return nil, gnet.None
}
func (s *tcpServer) OnClose(c gnet.Conn, err error) gnet.Action {
s.connections.Delete(c)
s.streamer.activeConns.Add(-1)
return gnet.None
}
func (s *tcpServer) OnTraffic(c gnet.Conn) gnet.Action {
// We don't expect input from clients, just discard
c.Discard(-1)
return gnet.None
}