diff --git a/README.md b/README.md
index 5eda4ae..e99666c 100644
--- a/README.md
+++ b/README.md
@@ -2,41 +2,39 @@
-# LogWisp - Simple Log Streaming
+# LogWisp - Dual-Stack Log Streaming
-A lightweight log streaming service that monitors files and streams updates via Server-Sent Events (SSE).
-
-## Philosophy
-
-LogWisp follows the Unix philosophy: do one thing and do it well. It monitors log files and streams them over HTTP/SSE. That's it.
+A high-performance log streaming service with dual-stack architecture: raw TCP streaming via gnet and HTTP/SSE streaming via fasthttp.
## Features
-- Monitors multiple files and directories simultaneously
-- Streams log updates in real-time via SSE
-- Supports both plain text and JSON formatted logs
-- Automatic file rotation detection
-- Configurable rate limiting
-- Environment variable support
-- Simple TOML configuration
-- Atomic configuration management
-- Optional ANSI color pass-through
+- **Dual streaming modes**: TCP (gnet) and HTTP/SSE (fasthttp)
+- **Fan-out architecture**: Multiple independent consumers
+- **Real-time updates**: File monitoring with rotation detection
+- **Minimal dependencies**: only gnet, fasthttp, and lixenwraith/config beyond stdlib
+- **High performance**: Non-blocking I/O throughout
## Quick Start
-1. Build:
```bash
-./build.sh
-```
+# Build
+go build -o logwisp ./src/cmd/logwisp
-2. Run with defaults (monitors current directory):
-```bash
+# Run with HTTP only (default)
./logwisp
+
+# Enable both TCP and HTTP
+./logwisp --enable-tcp --tcp-port 9090
+
+# Monitor specific paths
+./logwisp /var/log:*.log /app/logs:error*.log
```
-3. View logs:
-```bash
-curl -N http://localhost:8080/stream
+## Architecture
+
+```
+Monitor (Publisher) → [Subscriber Channels] → TCP Server  (default port 9090)
+                                            ↘ HTTP Server (default port 8080)
```
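+
+The fan-out can be sketched as a publisher holding one buffered channel per subscriber and sending non-blockingly, so a slow consumer never stalls the monitor or the other server (a simplified illustration of the pattern, not LogWisp's actual types):
+
+```go
+package main
+
+import "fmt"
+
+// Publisher fans each log entry out to every subscriber channel.
+type Publisher struct {
+	subs []chan string
+}
+
+// Subscribe registers a new buffered consumer channel.
+func (p *Publisher) Subscribe(buffer int) <-chan string {
+	ch := make(chan string, buffer)
+	p.subs = append(p.subs, ch)
+	return ch
+}
+
+// Publish delivers the entry to every subscriber; a full buffer
+// drops the entry for that subscriber only.
+func (p *Publisher) Publish(entry string) {
+	for _, ch := range p.subs {
+		select {
+		case ch <- entry:
+		default: // subscriber is behind: drop rather than block
+		}
+	}
+}
+
+func main() {
+	p := &Publisher{}
+	tcp := p.Subscribe(10)
+	http := p.Subscribe(10)
+	p.Publish(`{"message":"service started"}`)
+	fmt.Println(<-tcp, <-http) // both consumers receive the entry
+}
+```
+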
## Command Line Options
@@ -45,279 +43,87 @@ curl -N http://localhost:8080/stream
logwisp [OPTIONS] [TARGET...]
OPTIONS:
- -c, --color Enable color pass-through for ANSI escape codes
- --config FILE Config file path (default: ~/.config/logwisp.toml)
- --port PORT HTTP port (default: 8080)
- --buffer-size SIZE Stream buffer size (default: 1000)
- --check-interval MS File check interval in ms (default: 100)
- --rate-limit Enable rate limiting
- --rate-requests N Rate limit requests/sec (default: 10)
- --rate-burst N Rate limit burst size (default: 20)
+ --config FILE Config file path
+ --check-interval MS File check interval (default: 100)
+
+ # TCP Server
+ --enable-tcp Enable TCP server
+ --tcp-port PORT TCP port (default: 9090)
+ --tcp-buffer-size SIZE TCP buffer size (default: 1000)
+
+ # HTTP Server
+ --enable-http Enable HTTP server (default: true)
+ --http-port PORT HTTP port (default: 8080)
+ --http-buffer-size SIZE HTTP buffer size (default: 1000)
+
+ # Legacy compatibility
+ --port PORT Same as --http-port
+ --buffer-size SIZE Same as --http-buffer-size
TARGET:
- path[:pattern[:isfile]] Path to monitor (file or directory)
- pattern: glob pattern for directories (default: *.log)
- isfile: true/false (auto-detected if omitted)
-
-EXAMPLES:
- # Monitor current directory for *.log files
- logwisp
-
- # Monitor specific file with color support
- logwisp -c /var/log/app.log
-
- # Monitor multiple locations
- logwisp /var/log:*.log /app/logs:error*.log:false /tmp/debug.log::true
-
- # Custom port with rate limiting
- logwisp --port 9090 --rate-limit --rate-requests 100 --rate-burst 200
+ path[:pattern[:isfile]] Path to monitor
+ pattern: glob pattern for directories
+ isfile: true/false (auto-detected if omitted)
```
## Configuration
-LogWisp uses a three-level configuration hierarchy:
-
-1. **Command-line arguments** (highest priority)
-2. **Environment variables**
-3. **Configuration file** (~/.config/logwisp.toml)
-4. **Default values** (lowest priority)
-
-### Default Values
-
-| Setting | Default | Description |
-|---------|---------|-------------|
-| `port` | 8080 | HTTP listen port |
-| `monitor.check_interval_ms` | 100 | File check interval (milliseconds) |
-| `monitor.targets` | [{"path": "./", "pattern": "*.log", "is_file": false}] | Paths to monitor |
-| `stream.buffer_size` | 1000 | Per-client event buffer size |
-| `stream.rate_limit.enabled` | false | Enable rate limiting |
-| `stream.rate_limit.requests_per_second` | 10 | Sustained request rate |
-| `stream.rate_limit.burst_size` | 20 | Maximum burst size |
-| `stream.rate_limit.cleanup_interval_s` | 60 | Client cleanup interval |
-
-### Configuration File Location
-
-Default: `~/.config/logwisp.toml`
-
-Override with environment variables:
-- `LOGWISP_CONFIG_DIR` - Directory containing config file
-- `LOGWISP_CONFIG_FILE` - Config filename (absolute or relative)
-
-Examples:
-```bash
-# Use config from current directory
-LOGWISP_CONFIG_DIR=. ./logwisp
-
-# Use specific config file
-LOGWISP_CONFIG_FILE=/etc/logwisp/prod.toml ./logwisp
-
-# Use custom directory and filename
-LOGWISP_CONFIG_DIR=/opt/configs LOGWISP_CONFIG_FILE=myapp.toml ./logwisp
-```
-
-### Environment Variables
-
-All configuration values can be overridden via environment variables:
-
-| Environment Variable | Config Path | Description |
-|---------------------|-------------|-------------|
-| `LOGWISP_PORT` | `port` | HTTP listen port |
-| `LOGWISP_MONITOR_CHECK_INTERVAL_MS` | `monitor.check_interval_ms` | File check interval |
-| `LOGWISP_MONITOR_TARGETS` | `monitor.targets` | Comma-separated targets |
-| `LOGWISP_STREAM_BUFFER_SIZE` | `stream.buffer_size` | Client buffer size |
-| `LOGWISP_STREAM_RATE_LIMIT_ENABLED` | `stream.rate_limit.enabled` | Enable rate limiting |
-| `LOGWISP_STREAM_RATE_LIMIT_REQUESTS_PER_SEC` | `stream.rate_limit.requests_per_second` | Rate limit |
-| `LOGWISP_STREAM_RATE_LIMIT_BURST_SIZE` | `stream.rate_limit.burst_size` | Burst size |
-| `LOGWISP_STREAM_RATE_LIMIT_CLEANUP_INTERVAL_S` | `stream.rate_limit.cleanup_interval_s` | Cleanup interval |
-
-### Monitor Targets Format
-
-The `LOGWISP_MONITOR_TARGETS` environment variable uses a special format:
-```
-path:pattern:isfile,path2:pattern2:isfile
-```
-
-Examples:
-```bash
-# Monitor directory and specific file
-LOGWISP_MONITOR_TARGETS="/var/log:*.log:false,/app/app.log::true" ./logwisp
-
-# Multiple directories
-LOGWISP_MONITOR_TARGETS="/var/log:*.log:false,/opt/app/logs:app-*.log:false" ./logwisp
-```
-
-### Complete Configuration Example
+Config file location: `~/.config/logwisp.toml`
```toml
-# Port to listen on (default: 8080)
-port = 8080
-
[monitor]
-# How often to check for file changes in milliseconds (default: 100)
check_interval_ms = 100
-# Paths to monitor
-# Default: [{"path": "./", "pattern": "*.log", "is_file": false}]
-
-# Monitor all .log files in current directory
[[monitor.targets]]
path = "./"
pattern = "*.log"
is_file = false
-# Monitor specific file
-[[monitor.targets]]
-path = "/app/logs/app.log"
-pattern = "" # Ignored for files
-is_file = true
-
-# Monitor with specific pattern
-[[monitor.targets]]
-path = "/var/log/nginx"
-pattern = "access*.log"
-is_file = false
-
-[stream]
-# Buffer size for each client connection (default: 1000)
-# Controls how many log entries can be queued per client
+[tcpserver]
+enabled = false
+port = 9090
buffer_size = 1000
-[stream.rate_limit]
-# Enable rate limiting (default: false)
-enabled = false
-
-# Requests per second per client (default: 10)
-# This is the sustained rate
-requests_per_second = 10
-
-# Burst size - max requests at once (default: 20)
-# Allows temporary bursts above the sustained rate
-burst_size = 20
-
-# How often to clean up old client limiters in seconds (default: 60)
-# Clients inactive for 2x this duration are removed
-cleanup_interval_s = 60
+[httpserver]
+enabled = true
+port = 8080
+buffer_size = 1000
```
-## Color Support
-
-LogWisp can pass through ANSI color escape codes from monitored logs to SSE clients using the `-c` flag.
+## Clients
+### TCP Stream
```bash
-# Enable color pass-through
-./logwisp -c
+# Simple TCP client
+nc localhost 9090
-# Or via systemd
-ExecStart=/opt/logwisp/bin/logwisp -c
+# Using telnet
+telnet localhost 9090
+
+# Using socat
+socat - TCP:localhost:9090
```
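+
+Beyond `nc` and `telnet`, a programmatic consumer is just a TCP dial plus newline-delimited JSON decoding. The sketch below substitutes an in-process stand-in server so it runs anywhere; against a real instance you would dial `localhost:9090` instead:
+
+```go
+package main
+
+import (
+	"bufio"
+	"encoding/json"
+	"fmt"
+	"net"
+)
+
+// decodeLine parses one newline-delimited JSON log entry.
+func decodeLine(line []byte) (map[string]any, error) {
+	var entry map[string]any
+	err := json.Unmarshal(line, &entry)
+	return entry, err
+}
+
+func main() {
+	// Stand-in for LogWisp's TCP server: one JSON entry per line, no framing.
+	ln, err := net.Listen("tcp", "127.0.0.1:0")
+	if err != nil {
+		panic(err)
+	}
+	go func() {
+		conn, _ := ln.Accept()
+		fmt.Fprintln(conn, `{"source":"app.log","message":"service started"}`)
+		conn.Close()
+	}()
+
+	// Client side: dial, scan lines, decode each entry.
+	conn, err := net.Dial("tcp", ln.Addr().String())
+	if err != nil {
+		panic(err)
+	}
+	defer conn.Close()
+	sc := bufio.NewScanner(conn)
+	for sc.Scan() {
+		entry, err := decodeLine(sc.Bytes())
+		if err != nil {
+			continue // skip malformed lines (e.g. heartbeats you don't care about)
+		}
+		fmt.Println(entry["source"], entry["message"])
+	}
+}
+```
+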
-### How It Works
-
-When color mode is enabled (`-c` flag), LogWisp preserves ANSI escape codes in log messages. These are properly JSON-escaped in the SSE stream.
-
-### Example Log with Colors
-
-Original log file content:
-```
-\033[31mERROR\033[0m: Database connection failed
-\033[33mWARN\033[0m: High memory usage detected
-\033[32mINFO\033[0m: Service started successfully
-```
-
-SSE output with `-c`:
-```json
-{
- "time": "2024-01-01T12:00:00.123456Z",
- "source": "app.log",
- "message": "\u001b[31mERROR\u001b[0m: Database connection failed"
-}
-```
-
-### Client-Side Handling
-
-#### Terminal Clients
-
-For terminal-based clients (like curl), the escape codes will render as colors:
-
+### HTTP/SSE Stream
```bash
-# This will show colored output in terminals that support ANSI codes
-curl -N http://localhost:8080/stream | jq -r '.message'
+# Stream logs
+curl -N http://localhost:8080/stream
+
+# Check status
+curl http://localhost:8080/status
```
-#### Web Clients
+## Environment Variables
-For web-based clients, you'll need to convert ANSI codes to HTML:
+All config values can be set via environment:
+- `LOGWISP_MONITOR_CHECK_INTERVAL_MS`
+- `LOGWISP_MONITOR_TARGETS` (format: "path:pattern:isfile,...")
+- `LOGWISP_TCPSERVER_ENABLED`
+- `LOGWISP_TCPSERVER_PORT`
+- `LOGWISP_HTTPSERVER_ENABLED`
+- `LOGWISP_HTTPSERVER_PORT`
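+
+Each `LOGWISP_MONITOR_TARGETS` entry is a `path:pattern:isfile` triple, comma-separated. A sketch of how such a value decomposes (the `Target` type and `parseTargets` helper are illustrative, not LogWisp's internals):
+
+```go
+package main
+
+import (
+	"fmt"
+	"strings"
+)
+
+// Target mirrors one path:pattern:isfile triple (illustrative type).
+type Target struct {
+	Path    string
+	Pattern string
+	IsFile  bool
+}
+
+// parseTargets splits "path:pattern:isfile,..." into targets.
+// Omitted pattern/isfile fields are left at their zero values.
+func parseTargets(s string) []Target {
+	var out []Target
+	for _, item := range strings.Split(s, ",") {
+		parts := strings.SplitN(item, ":", 3)
+		t := Target{Path: parts[0]}
+		if len(parts) > 1 {
+			t.Pattern = parts[1]
+		}
+		if len(parts) > 2 {
+			t.IsFile = parts[2] == "true"
+		}
+		out = append(out, t)
+	}
+	return out
+}
+
+func main() {
+	for _, t := range parseTargets("/var/log:*.log:false,/app/app.log::true") {
+		fmt.Printf("%+v\n", t)
+	}
+}
+```
+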
-```javascript
-// Example using ansi-to-html library
-const AnsiToHtml = require('ansi-to-html');
-const convert = new AnsiToHtml();
-
-eventSource.onmessage = (event) => {
- const data = JSON.parse(event.data);
- const html = convert.toHtml(data.message);
- document.getElementById('log').innerHTML += html + '<br>';
-};
-```
-
-#### Custom Processing
-
-```python
-# Python example with colorama
-import json
-import colorama
-from colorama import init
-
-init() # Initialize colorama for Windows support
-
-# Process SSE stream
-for line in stream:
- if line.startswith('data: '):
- data = json.loads(line[6:])
- # Colorama will handle ANSI codes automatically
- print(data['message'])
-```
-
-### Common ANSI Color Codes
-
-| Code | Color/Style |
-|------|-------------|
-| `\033[0m` | Reset |
-| `\033[1m` | Bold |
-| `\033[31m` | Red |
-| `\033[32m` | Green |
-| `\033[33m` | Yellow |
-| `\033[34m` | Blue |
-| `\033[35m` | Magenta |
-| `\033[36m` | Cyan |
-
-### Limitations
-
-1. **JSON Escaping**: ANSI codes are JSON-escaped in the stream (e.g., `\033` becomes `\u001b`)
-2. **Client Support**: The client must support or convert ANSI codes
-3. **Performance**: No significant impact, but slightly larger message sizes
-
-### Security Note
-
-Color codes are passed through as-is. Ensure monitored logs come from trusted sources to avoid terminal escape sequence attacks.
-
-### Disabling Colors
-
-To strip color codes instead of passing them through:
-- Don't use the `-c` flag
-- Or set up a preprocessing pipeline:
- ```bash
- tail -f colored.log | sed 's/\x1b\[[0-9;]*m//g' > plain.log
- ```
-
-## API
-
-### Endpoints
-
-- `GET /stream` - Server-Sent Events stream of log entries
-- `GET /status` - Service status and configuration information
-
-### Log Entry Format
+## Log Entry Format
```json
{
@@ -329,223 +135,156 @@ To strip color codes instead of passing them through:
}
```
-### SSE Event Types
+## API Endpoints
-| Event | Description | Data Format |
-|-------|-------------|-------------|
-| `connected` | Initial connection | `{"client_id": "123456789"}` |
-| `data` | Log entry | JSON log entry |
-| `disconnected` | Client disconnected | `{"reason": "slow_client"}` |
-| `timeout` | Client timeout | `{"reason": "client_timeout"}` |
-| `:` | Heartbeat (comment) | ISO timestamp |
+### TCP Protocol
+- Raw JSON lines, one entry per line
+- No headers or authentication
+- Instant connection, streaming starts immediately
-### Status Response Format
+### HTTP Endpoints
+- `GET /stream` - SSE stream of log entries
+- `GET /status` - Service status JSON
+### SSE Events
+- `connected` - Initial connection with client_id
+- `data` - Log entry JSON
+- `:` - Heartbeat comment (default 30s interval, configurable)
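+
+A consumer can tell these frames apart by their line prefix; a minimal classifier (not a full SSE parser — multi-line `data:` fields and `id:`/`retry:` handling are omitted for brevity):
+
+```go
+package main
+
+import (
+	"fmt"
+	"strings"
+)
+
+// classify returns the SSE frame kind and payload for one stream line.
+func classify(line string) (kind, payload string) {
+	switch {
+	case strings.HasPrefix(line, ":"):
+		return "heartbeat", strings.TrimSpace(line[1:])
+	case strings.HasPrefix(line, "event:"):
+		return "event", strings.TrimSpace(line[len("event:"):])
+	case strings.HasPrefix(line, "data:"):
+		return "data", strings.TrimSpace(line[len("data:"):])
+	case line == "":
+		return "dispatch", "" // blank line terminates an event
+	}
+	return "field", line
+}
+
+func main() {
+	for _, l := range []string{
+		"event: connected",
+		`data: {"client_id":"123"}`,
+		"",
+		": heartbeat 2024-01-01T12:00:00Z",
+	} {
+		k, p := classify(l)
+		fmt.Println(k, p)
+	}
+}
+```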
+
+## Heartbeat Configuration
+
+LogWisp supports configurable heartbeat messages for both HTTP/SSE and TCP streams to detect stale connections and provide server statistics.
+
+**HTTP/SSE Heartbeat:**
+- **Format Options:**
+ - `comment`: SSE comment format (`: heartbeat ...`)
+ - `json`: Standard data message with JSON payload
+- **Content Options:**
+ - `include_timestamp`: Add current UTC timestamp
+ - `include_stats`: Add active clients count and server uptime
+
+**TCP Heartbeat:**
+- Always uses JSON format
+- Same content options as HTTP
+- Useful for detecting disconnected clients
+
+**Example Heartbeat Messages:**
+
+HTTP Comment format:
+```
+: heartbeat 2024-01-01T12:00:00Z clients=5 uptime=3600s
+```
+
+JSON format:
```json
-{
- "service": "LogWisp",
- "version": "2.0.0",
- "port": 8080,
- "color_mode": false,
- "config": {
- "monitor": {
- "check_interval_ms": 100,
- "targets_count": 2
- },
- "stream": {
- "buffer_size": 1000,
- "rate_limit": {
- "enabled": true,
- "requests_per_second": 10,
- "burst_size": 20
- }
- }
- },
- "streamer": {
- "active_clients": 5,
- "buffer_size": 1000,
- "color_mode": false,
- "total_dropped": 42
- },
- "rate_limiter": "Active clients: 3"
-}
+{"type":"heartbeat","timestamp":"2024-01-01T12:00:00Z","active_clients":5,"uptime_seconds":3600}
```
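+
+A consumer can unmarshal the JSON heartbeat into a small struct; the field names below are taken from the example payload above:
+
+```go
+package main
+
+import (
+	"encoding/json"
+	"fmt"
+)
+
+// Heartbeat matches the JSON-format heartbeat payload shown above.
+type Heartbeat struct {
+	Type          string `json:"type"`
+	Timestamp     string `json:"timestamp"`
+	ActiveClients int    `json:"active_clients"`
+	UptimeSeconds int    `json:"uptime_seconds"`
+}
+
+func main() {
+	raw := `{"type":"heartbeat","timestamp":"2024-01-01T12:00:00Z","active_clients":5,"uptime_seconds":3600}`
+	var hb Heartbeat
+	if err := json.Unmarshal([]byte(raw), &hb); err != nil {
+		panic(err)
+	}
+	fmt.Println(hb.Type, hb.ActiveClients, hb.UptimeSeconds)
+}
+```
+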
-## Usage Examples
-
-### Basic Usage
-```bash
-# Start with defaults
-./logwisp
-
-# Monitor specific file
-./logwisp /var/log/app.log
-
-# View logs
-curl -N http://localhost:8080/stream
+**Configuration:**
+```toml
+[httpserver.heartbeat]
+enabled = true
+interval_seconds = 30
+include_timestamp = true
+include_stats = true
+format = "json"
```
-### With Environment Variables
-```bash
-# Change port and add rate limiting
-LOGWISP_PORT=9090 \
-LOGWISP_STREAM_RATE_LIMIT_ENABLED=true \
-LOGWISP_STREAM_RATE_LIMIT_REQUESTS_PER_SEC=5 \
-./logwisp
-```
+**Environment Variables:**
+- `LOGWISP_HTTPSERVER_HEARTBEAT_ENABLED`
+- `LOGWISP_HTTPSERVER_HEARTBEAT_INTERVAL_SECONDS`
+- `LOGWISP_TCPSERVER_HEARTBEAT_ENABLED`
+- `LOGWISP_TCPSERVER_HEARTBEAT_INTERVAL_SECONDS`
-### Monitor Multiple Locations
-```bash
-# Via command line
-./logwisp /var/log:*.log /app/logs:*.json /tmp/debug.log
+## Summary
-# Via environment variable
-LOGWISP_MONITOR_TARGETS="/var/log:*.log:false,/app/logs:*.json:false,/tmp/debug.log::true" \
-./logwisp
+**Fixed:**
+- Removed duplicate `globToRegex` functions (never used)
+- Added missing TCP heartbeat support
+- Made HTTP heartbeat configurable
-# Or via config file
-cat > ~/.config/logwisp.toml << EOF
-[[monitor.targets]]
-path = "/var/log"
-pattern = "*.log"
-is_file = false
+**Enhanced:**
+- Configurable heartbeat interval
+- Multiple format options (comment/JSON)
+- Optional timestamp and statistics
+- Per-protocol configuration
-[[monitor.targets]]
-path = "/app/logs"
-pattern = "*.json"
-is_file = false
+**⚠️ SECURITY:** Heartbeat statistics expose minimal server state (connection count, uptime). If this is sensitive in your environment, disable `include_stats`.
-[[monitor.targets]]
-path = "/tmp/debug.log"
-is_file = true
-EOF
-```
-
-### Production Deployment
-
-Example systemd service with environment overrides:
+## Deployment
+### Systemd Service
```ini
[Unit]
-Description=LogWisp Log Streaming Service
+Description=LogWisp Log Streaming
After=network.target
[Service]
Type=simple
-User=logwisp
-ExecStart=/usr/local/bin/logwisp -c
+ExecStart=/usr/local/bin/logwisp --enable-tcp --enable-http
Restart=always
-RestartSec=5
-
-# Environment overrides
-Environment="LOGWISP_PORT=8080"
-Environment="LOGWISP_STREAM_BUFFER_SIZE=5000"
-Environment="LOGWISP_STREAM_RATE_LIMIT_ENABLED=true"
-Environment="LOGWISP_STREAM_RATE_LIMIT_REQUESTS_PER_SEC=100"
-Environment="LOGWISP_MONITOR_TARGETS=/var/log:*.log:false,/opt/app/logs:*.log:false"
-
-# Security hardening
-NoNewPrivileges=true
-PrivateTmp=true
-ProtectSystem=strict
-ProtectHome=true
-ReadOnlyPaths=/
-ReadWritePaths=/var/log /opt/app/logs
+Environment="LOGWISP_TCPSERVER_PORT=9090"
+Environment="LOGWISP_HTTPSERVER_PORT=8080"
[Install]
WantedBy=multi-user.target
```
+### Docker
+```dockerfile
+FROM golang:1.24 AS builder
+WORKDIR /app
+COPY . .
+RUN go build -o logwisp ./src/cmd/logwisp
+
+FROM debian:bookworm-slim
+COPY --from=builder /app/logwisp /usr/local/bin/
+EXPOSE 8080 9090
+CMD ["logwisp", "--enable-tcp", "--enable-http"]
+```
+
## Performance Tuning
-### Buffer Size
+- **Buffer Size**: Increase for burst traffic (5000+)
+- **Check Interval**: Decrease for lower latency (10-50ms)
+- **TCP**: Best for high-volume system consumers
+- **HTTP**: Best for web browsers and REST clients
-The `stream.buffer_size` setting controls how many log entries can be queued per client:
-- **Small buffers (100-500)**: Lower memory usage, clients skip entries during bursts
-- **Default (1000)**: Good balance for most use cases
-- **Large buffers (5000+)**: Handle burst traffic better, higher memory usage
+### Message Dropping and Client Behavior
-When a client's buffer is full, new messages are skipped for that client until it catches up. The client remains connected and will receive future messages once buffer space is available.
+LogWisp uses non-blocking message delivery to maintain system stability. When a client cannot keep up with the log stream, messages are dropped rather than blocking other clients or the monitor.
-### Check Interval
+**Common causes of dropped messages:**
+- **Browser throttling**: Browsers may throttle background tabs, reducing JavaScript execution frequency
+- **Network congestion**: Slow connections or high latency can cause client buffers to fill
+- **Client processing**: Heavy client-side processing (parsing, rendering) can create backpressure
+- **System resources**: CPU/memory constraints on client machines affect consumption rate
-The `monitor.check_interval_ms` setting controls file polling frequency:
-- **Fast (10-50ms)**: Near real-time updates, higher CPU usage
-- **Default (100ms)**: Good balance
-- **Slow (500ms+)**: Lower CPU usage, more latency
+**TCP vs HTTP behavior:**
+- **TCP**: Raw stream with kernel-level buffering. Drops occur when TCP send buffer fills
+- **HTTP/SSE**: Application-level buffering. Each client has a dedicated channel (default: 1000 entries)
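+
+The HTTP/SSE drop decision reduces to Go's non-blocking send idiom, sketched here:
+
+```go
+package main
+
+import "fmt"
+
+// trySend delivers entry without blocking; returns false on a full buffer.
+func trySend(ch chan string, entry string) bool {
+	select {
+	case ch <- entry:
+		return true
+	default:
+		return false // client buffer full: drop and count it
+	}
+}
+
+func main() {
+	ch := make(chan string, 1)
+	fmt.Println(trySend(ch, "a")) // succeeds: buffer has room
+	fmt.Println(trySend(ch, "b")) // fails: buffer already full
+}
+```
+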
-### Rate Limiting
+**Mitigation strategies:**
+1. Increase buffer sizes for burst tolerance: `--tcp-buffer-size 5000` or `--http-buffer-size 5000`
+2. Implement client-side flow control (pause/resume based on queue depth)
+3. Use TCP for high-volume system consumers; kernel-level buffering absorbs short stalls better than per-client SSE buffers
+4. Keep browser tabs in foreground for real-time monitoring
+5. Consider log aggregation/filtering at source for high-volume scenarios
-When to enable rate limiting:
-- Internet-facing deployments
-- Shared environments
-- Protection against misbehaving clients
-
-Rate limiting applies only to establishing SSE connections, not to individual messages. Once connected, clients receive all messages (subject to buffer capacity).
-
-## Troubleshooting
-
-### Client Missing Messages
-
-If clients miss messages during bursts:
-1. Check `total_dropped` and `clients_with_drops` in status endpoint
-2. Increase `stream.buffer_size` to handle larger bursts
-3. Messages are skipped when buffer is full, but clients stay connected
-
-### High Memory Usage
-
-If memory usage is high:
-1. Reduce `stream.buffer_size`
-2. Enable rate limiting to limit concurrent connections
-3. Each client uses `buffer_size * avg_message_size` memory
-
-### Browser Stops Receiving Updates
-
-This shouldn't happen with the current implementation. If it does:
-1. Check browser developer console for errors
-2. Verify no proxy/firewall is timing out the connection
-3. Ensure reverse proxy (if used) doesn't buffer SSE responses
-
-## File Rotation Detection
-
-LogWisp automatically detects log file rotation using multiple methods:
-- Inode changes (Linux/Unix)
-- File size decrease
-- Modification time reset
-- Read position beyond file size
-
-When rotation is detected, LogWisp:
-1. Logs a rotation event
-2. Resets read position to beginning
-3. Continues streaming from new file
-
-## Security Notes
-
-1. **No built-in authentication** - Use a reverse proxy for auth
-2. **No TLS support** - Use a reverse proxy for HTTPS
-3. **Path validation** - Only specified paths can be monitored
-4. **Directory traversal protection** - Paths containing ".." are rejected
-5. **Rate limiting** - Optional but recommended for public deployments
-6. **ANSI escape sequences** - Only enable color mode for trusted log sources
-
-## Design Decisions
-
-- **Unix philosophy**: Single purpose - stream logs
-- **SSE over WebSocket**: Simpler, works everywhere, built-in reconnect
-- **No database**: Stateless operation, instant startup
-- **Atomic config management**: Using LixenWraith/config package
-- **Graceful shutdown**: Proper cleanup on SIGINT/SIGTERM
-- **Platform agnostic**: POSIX-compliant where possible
+**Monitoring drops:**
+- HTTP: Check `/status` endpoint for drop statistics
+- TCP: Monitor connection count and system TCP metrics
+- Both: Watch for "channel full" indicators in client implementations
## Building from Source
-Requirements:
-- Go 1.23 or later
-
```bash
git clone https://github.com/yourusername/logwisp
cd logwisp
-go mod download
+# Dependencies (gnet, fasthttp, lixenwraith/config) are recorded in go.mod
+# and fetched automatically by go build
go build -o logwisp ./src/cmd/logwisp
```
diff --git a/config/logwisp.toml b/config/logwisp.toml
index cf7c71f..aed8653 100644
--- a/config/logwisp.toml
+++ b/config/logwisp.toml
@@ -6,107 +6,128 @@
# 2. Environment variables (LOGWISP_ prefix)
# 3. This configuration file
# 4. Built-in defaults
-#
-# All settings shown below with their default values
-
-# Port to listen on
-# Environment: LOGWISP_PORT
-# CLI: --port PORT
-# Default: 8080
-port = 8080
[monitor]
-# How often to check for file changes (milliseconds)
-# Lower values = more responsive but higher CPU usage
+# File check interval (milliseconds)
+# Lower = more responsive, higher CPU usage
# Environment: LOGWISP_MONITOR_CHECK_INTERVAL_MS
# CLI: --check-interval MS
-# Default: 100
check_interval_ms = 100
-# Paths to monitor for log files
-# Environment: LOGWISP_MONITOR_TARGETS (format: "path:pattern:isfile,path2:pattern2:isfile")
+# Monitor targets
+# Environment: LOGWISP_MONITOR_TARGETS="path:pattern:isfile,path2:pattern2:isfile"
# CLI: logwisp [path[:pattern[:isfile]]] ...
-# Default: Monitor current directory for *.log files
[[monitor.targets]]
-path = "./" # Directory or file path to monitor
-pattern = "*.log" # Glob pattern for directory monitoring (ignored for files)
-is_file = false # true = monitor specific file, false = monitor directory
+path = "./" # Directory or file path
+pattern = "*.log" # Glob pattern (ignored for files)
+is_file = false # true = file, false = directory
-# Additional target examples (uncomment to use):
-
-# # Monitor specific log file
+# # Example: Specific file
# [[monitor.targets]]
-# path = "/var/log/application.log"
-# pattern = "" # Pattern ignored when is_file = true
+# path = "/var/log/app.log"
+# pattern = ""
# is_file = true
-# # Monitor system logs
+# # Example: System logs
# [[monitor.targets]]
# path = "/var/log"
# pattern = "*.log"
# is_file = false
-# # Monitor nginx access logs with pattern
-# [[monitor.targets]]
-# path = "/var/log/nginx"
-# pattern = "access*.log"
-# is_file = false
-
-# # Monitor journal export directory
-# [[monitor.targets]]
-# path = "/var/log/journal"
-# pattern = "*.log"
-# is_file = false
-
-# # Monitor multiple application logs
-# [[monitor.targets]]
-# path = "/opt/myapp/logs"
-# pattern = "app-*.log"
-# is_file = false
-
-[stream]
-# Buffer size for each client connection
-# Number of log entries that can be queued per client
-# When buffer is full, new messages are skipped (not sent to that client)
-# Increase for burst traffic, decrease for memory conservation
-# Environment: LOGWISP_STREAM_BUFFER_SIZE
-# CLI: --buffer-size SIZE
-# Default: 1000
-buffer_size = 1000
-
-[stream.rate_limit]
-# Enable rate limiting per client IP
-# Prevents resource exhaustion from misbehaving clients
-# Environment: LOGWISP_STREAM_RATE_LIMIT_ENABLED
-# CLI: --rate-limit
-# Default: false
+[tcpserver]
+# Raw TCP streaming server (gnet)
+# Environment: LOGWISP_TCPSERVER_ENABLED
+# CLI: --enable-tcp
enabled = false
-# Sustained requests per second allowed per client
-# Clients can make this many requests per second continuously
-# Environment: LOGWISP_STREAM_RATE_LIMIT_REQUESTS_PER_SEC
-# CLI: --rate-requests N
-# Default: 10
-requests_per_second = 10
+# TCP port
+# Environment: LOGWISP_TCPSERVER_PORT
+# CLI: --tcp-port PORT
+port = 9090
-# Maximum burst size per client
-# Allows temporary bursts above the sustained rate
-# Should be >= requests_per_second
-# Environment: LOGWISP_STREAM_RATE_LIMIT_BURST_SIZE
-# CLI: --rate-burst N
-# Default: 20
-burst_size = 20
+# Per-client buffer size
+# Environment: LOGWISP_TCPSERVER_BUFFER_SIZE
+# CLI: --tcp-buffer-size SIZE
+buffer_size = 1000
-# How often to clean up inactive client rate limiters (seconds)
-# Clients inactive for 2x this duration are removed from tracking
-# Lower values = more frequent cleanup, higher values = less overhead
-# Environment: LOGWISP_STREAM_RATE_LIMIT_CLEANUP_INTERVAL_S
-# Default: 60
-cleanup_interval_s = 60
+# TLS/SSL settings (not implemented in PoC)
+ssl_enabled = false
+ssl_cert_file = ""
+ssl_key_file = ""
-# Production configuration example:
-# [stream.rate_limit]
+[tcpserver.heartbeat]
+# Enable/disable heartbeat messages
+# Environment: LOGWISP_TCPSERVER_HEARTBEAT_ENABLED
+enabled = false
+
+# Heartbeat interval in seconds
+# Environment: LOGWISP_TCPSERVER_HEARTBEAT_INTERVAL_SECONDS
+interval_seconds = 30
+
+# Include timestamp in heartbeat
+# Environment: LOGWISP_TCPSERVER_HEARTBEAT_INCLUDE_TIMESTAMP
+include_timestamp = true
+
+# Include server statistics (active connections, uptime)
+# Environment: LOGWISP_TCPSERVER_HEARTBEAT_INCLUDE_STATS
+include_stats = false
+
+# Format: "json" only for TCP
+# Environment: LOGWISP_TCPSERVER_HEARTBEAT_FORMAT
+format = "json"
+
+[httpserver]
+# HTTP/SSE streaming server (fasthttp)
+# Environment: LOGWISP_HTTPSERVER_ENABLED
+# CLI: --enable-http
+enabled = true
+
+# HTTP port
+# Environment: LOGWISP_HTTPSERVER_PORT
+# CLI: --http-port PORT (or legacy --port)
+port = 8080
+
+# Per-client buffer size
+# Environment: LOGWISP_HTTPSERVER_BUFFER_SIZE
+# CLI: --http-buffer-size SIZE (or legacy --buffer-size)
+buffer_size = 1000
+
+# TLS/SSL settings (not implemented in PoC)
+ssl_enabled = false
+ssl_cert_file = ""
+ssl_key_file = ""
+
+[httpserver.heartbeat]
+# Enable/disable heartbeat messages
+# Environment: LOGWISP_HTTPSERVER_HEARTBEAT_ENABLED
+enabled = true
+
+# Heartbeat interval in seconds
+# Environment: LOGWISP_HTTPSERVER_HEARTBEAT_INTERVAL_SECONDS
+interval_seconds = 30
+
+# Include timestamp in heartbeat
+# Environment: LOGWISP_HTTPSERVER_HEARTBEAT_INCLUDE_TIMESTAMP
+include_timestamp = true
+
+# Include server statistics (active clients, uptime)
+# Environment: LOGWISP_HTTPSERVER_HEARTBEAT_INCLUDE_STATS
+include_stats = false
+
+# Format: "comment" (SSE comment) or "json" (data message)
+# Environment: LOGWISP_HTTPSERVER_HEARTBEAT_FORMAT
+format = "comment"
+
+# Production example:
+# [tcpserver]
# enabled = true
-# requests_per_second = 100 # Higher limit for production
-# burst_size = 200 # Allow larger bursts
-# cleanup_interval_s = 300 # Less frequent cleanup
\ No newline at end of file
+# port = 9090
+# buffer_size = 5000
+#
+# [httpserver]
+# enabled = true
+# port = 443
+# buffer_size = 5000
+# ssl_enabled = true
+# ssl_cert_file = "/etc/ssl/certs/logwisp.crt"
+# ssl_key_file = "/etc/ssl/private/logwisp.key"
\ No newline at end of file
diff --git a/go.mod b/go.mod
index 5e1393f..510cc77 100644
--- a/go.mod
+++ b/go.mod
@@ -6,10 +6,20 @@ toolchain go1.24.4
require (
github.com/lixenwraith/config v0.0.0-20250701170607-8515fa0543b6
- golang.org/x/time v0.12.0
+ github.com/panjf2000/gnet/v2 v2.9.1
+ github.com/valyala/fasthttp v1.63.0
)
require (
github.com/BurntSushi/toml v1.5.0 // indirect
+ github.com/andybalholm/brotli v1.2.0 // indirect
+ github.com/klauspost/compress v1.18.0 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
+ github.com/panjf2000/ants/v2 v2.11.3 // indirect
+ github.com/valyala/bytebufferpool v1.0.0 // indirect
+ go.uber.org/multierr v1.11.0 // indirect
+ go.uber.org/zap v1.27.0 // indirect
+ golang.org/x/sync v0.15.0 // indirect
+ golang.org/x/sys v0.33.0 // indirect
+ gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
)
diff --git a/go.sum b/go.sum
index 98821ae..0e750ea 100644
--- a/go.sum
+++ b/go.sum
@@ -1,16 +1,40 @@
github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg=
github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
+github.com/andybalholm/brotli v1.2.0 h1:ukwgCxwYrmACq68yiUqwIWnGY0cTPox/M94sVwToPjQ=
+github.com/andybalholm/brotli v1.2.0/go.mod h1:rzTDkvFWvIrjDXZHkuS16NPggd91W3kUSvPlQ1pLaKY=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
+github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/lixenwraith/config v0.0.0-20250701170607-8515fa0543b6 h1:qE4SpAJWFaLkdRyE0FjTPBBRYE7LOvcmRCB5p86W73Q=
github.com/lixenwraith/config v0.0.0-20250701170607-8515fa0543b6/go.mod h1:4wPJ3HnLrYrtUwTinngCsBgtdIXsnxkLa7q4KAIbwY8=
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
+github.com/panjf2000/ants/v2 v2.11.3 h1:AfI0ngBoXJmYOpDh9m516vjqoUu2sLrIVgppI9TZVpg=
+github.com/panjf2000/ants/v2 v2.11.3/go.mod h1:8u92CYMUc6gyvTIw8Ru7Mt7+/ESnJahz5EVtqfrilek=
+github.com/panjf2000/gnet/v2 v2.9.1 h1:bKewICy/0xnQ9PMzNaswpe/Ah14w1TrRk91LHTcbIlA=
+github.com/panjf2000/gnet/v2 v2.9.1/go.mod h1:WQTxDWYuQ/hz3eccH0FN32IVuvZ19HewEWx0l62fx7E=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
-golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE=
-golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
+github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
+github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
+github.com/valyala/fasthttp v1.63.0 h1:DisIL8OjB7ul2d7cBaMRcKTQDYnrGy56R4FCiuDP0Ns=
+github.com/valyala/fasthttp v1.63.0/go.mod h1:REc4IeW+cAEyLrRPa5A81MIjvz0QE1laoTX2EaPHKJM=
+github.com/xyproto/randomstring v1.0.5 h1:YtlWPoRdgMu3NZtP45drfy1GKoojuR7hmRcnhZqKjWU=
+github.com/xyproto/randomstring v1.0.5/go.mod h1:rgmS5DeNXLivK7YprL0pY+lTuhNQW3iGxZ18UQApw/E=
+go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
+go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
+go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
+go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
+go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
+go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
+golang.org/x/sync v0.15.0 h1:KWH3jNZsfyT6xfAfKiz6MRNmd46ByHDYaZ7KSkCtdW8=
+golang.org/x/sync v0.15.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
+golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
+golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
+gopkg.in/natefinch/lumberjack.v2 v2.2.1 h1:bBRl1b0OH9s/DuPhuXpNl+VtCaJXFZ5/uEFST95x9zc=
+gopkg.in/natefinch/lumberjack.v2 v2.2.1/go.mod h1:YD8tP3GAjkrDg1eZH7EGmyESg/lsYskCTPBJVb9jqSc=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
diff --git a/src/cmd/logwisp/main.go b/src/cmd/logwisp/main.go
index f3bc523..6b42b78 100644
--- a/src/cmd/logwisp/main.go
+++ b/src/cmd/logwisp/main.go
@@ -1,230 +1,174 @@
-// File: logwisp/src/cmd/logwisp/main.go
+// FILE: src/cmd/logwisp/main.go
package main
import (
"context"
- "encoding/json"
"flag"
"fmt"
- "net/http"
"os"
"os/signal"
- "strings"
"sync"
"syscall"
"time"
"logwisp/src/internal/config"
- "logwisp/src/internal/middleware"
"logwisp/src/internal/monitor"
"logwisp/src/internal/stream"
)
func main() {
- // Parse flags manually without init()
- var colorMode bool
- flag.BoolVar(&colorMode, "c", false, "Enable color pass-through for escape codes in logs")
- flag.BoolVar(&colorMode, "color", false, "Enable color pass-through for escape codes in logs")
-
- // Additional CLI flags that override config
+ // Parse CLI flags
var (
- port = flag.Int("port", 0, "HTTP port (overrides config)")
- bufferSize = flag.Int("buffer-size", 0, "Stream buffer size (overrides config)")
- checkInterval = flag.Int("check-interval", 0, "File check interval in ms (overrides config)")
- rateLimit = flag.Bool("rate-limit", false, "Enable rate limiting (overrides config)")
- rateRequests = flag.Int("rate-requests", 0, "Rate limit requests/sec (overrides config)")
- rateBurst = flag.Int("rate-burst", 0, "Rate limit burst size (overrides config)")
- configFile = flag.String("config", "", "Config file path (overrides LOGWISP_CONFIG_FILE)")
+ configFile = flag.String("config", "", "Config file path")
+ // Legacy compatibility flags
+ port = flag.Int("port", 0, "HTTP port (legacy, maps to --http-port)")
+ bufferSize = flag.Int("buffer-size", 0, "Buffer size (legacy, maps to --http-buffer-size)")
+ // New explicit flags
+ httpPort = flag.Int("http-port", 0, "HTTP server port")
+ httpBuffer = flag.Int("http-buffer-size", 0, "HTTP server buffer size")
+ tcpPort = flag.Int("tcp-port", 0, "TCP server port")
+ tcpBuffer = flag.Int("tcp-buffer-size", 0, "TCP server buffer size")
+ enableTCP = flag.Bool("enable-tcp", false, "Enable TCP server")
+ enableHTTP = flag.Bool("enable-http", false, "Enable HTTP server")
+ checkInterval = flag.Int("check-interval", 0, "File check interval in ms")
)
-
flag.Parse()
- // Set config file env var if specified via CLI
if *configFile != "" {
os.Setenv("LOGWISP_CONFIG_FILE", *configFile)
}
- // Build CLI override args for config package
+ // Build CLI args for config
var cliArgs []string
+
+ // Legacy mapping
if *port > 0 {
- cliArgs = append(cliArgs, fmt.Sprintf("--port=%d", *port))
+ cliArgs = append(cliArgs, fmt.Sprintf("--httpserver.port=%d", *port))
}
if *bufferSize > 0 {
- cliArgs = append(cliArgs, fmt.Sprintf("--stream.buffer_size=%d", *bufferSize))
+ cliArgs = append(cliArgs, fmt.Sprintf("--httpserver.buffer_size=%d", *bufferSize))
+ }
+
+ // New flags
+ if *httpPort > 0 {
+ cliArgs = append(cliArgs, fmt.Sprintf("--httpserver.port=%d", *httpPort))
+ }
+ if *httpBuffer > 0 {
+ cliArgs = append(cliArgs, fmt.Sprintf("--httpserver.buffer_size=%d", *httpBuffer))
+ }
+ if *tcpPort > 0 {
+ cliArgs = append(cliArgs, fmt.Sprintf("--tcpserver.port=%d", *tcpPort))
+ }
+ if *tcpBuffer > 0 {
+ cliArgs = append(cliArgs, fmt.Sprintf("--tcpserver.buffer_size=%d", *tcpBuffer))
+ }
+ if flag.Lookup("enable-tcp").DefValue != flag.Lookup("enable-tcp").Value.String() {
+ cliArgs = append(cliArgs, fmt.Sprintf("--tcpserver.enabled=%v", *enableTCP))
+ }
+ if flag.Lookup("enable-http").DefValue != flag.Lookup("enable-http").Value.String() {
+ cliArgs = append(cliArgs, fmt.Sprintf("--httpserver.enabled=%v", *enableHTTP))
}
if *checkInterval > 0 {
cliArgs = append(cliArgs, fmt.Sprintf("--monitor.check_interval_ms=%d", *checkInterval))
}
- if flag.Lookup("rate-limit").DefValue != flag.Lookup("rate-limit").Value.String() {
- // Rate limit flag was explicitly set
- cliArgs = append(cliArgs, fmt.Sprintf("--stream.rate_limit.enabled=%v", *rateLimit))
- }
- if *rateRequests > 0 {
- cliArgs = append(cliArgs, fmt.Sprintf("--stream.rate_limit.requests_per_second=%d", *rateRequests))
- }
- if *rateBurst > 0 {
- cliArgs = append(cliArgs, fmt.Sprintf("--stream.rate_limit.burst_size=%d", *rateBurst))
- }
- // Parse remaining args as monitor targets
+ // Parse monitor targets from remaining args
for _, arg := range flag.Args() {
- if strings.Contains(arg, ":") {
- // Format: path:pattern:isfile
- cliArgs = append(cliArgs, fmt.Sprintf("--monitor.targets.add=%s", arg))
- } else if stat, err := os.Stat(arg); err == nil {
- // Auto-detect file vs directory
- if stat.IsDir() {
- cliArgs = append(cliArgs, fmt.Sprintf("--monitor.targets.add=%s:*.log:false", arg))
- } else {
- cliArgs = append(cliArgs, fmt.Sprintf("--monitor.targets.add=%s::true", arg))
- }
- }
+ cliArgs = append(cliArgs, fmt.Sprintf("--monitor.targets.add=%s", arg))
}
- // Load configuration with CLI overrides
+ // Load configuration
cfg, err := config.LoadWithCLI(cliArgs)
if err != nil {
fmt.Fprintf(os.Stderr, "Failed to load config: %v\n", err)
os.Exit(1)
}
- // Create context for graceful shutdown
+ // Create context for shutdown
ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
// Setup signal handling
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM)
- // WaitGroup for tracking all goroutines
var wg sync.WaitGroup
- // Create components
- streamer := stream.NewWithOptions(cfg.Stream.BufferSize, colorMode)
- mon := monitor.New(streamer.Publish)
-
- // Set monitor check interval from config
+ // Create monitor
+ mon := monitor.New()
mon.SetCheckInterval(time.Duration(cfg.Monitor.CheckIntervalMs) * time.Millisecond)
- // Add monitor targets from config
+ // Add targets
for _, target := range cfg.Monitor.Targets {
if err := mon.AddTarget(target.Path, target.Pattern, target.IsFile); err != nil {
fmt.Fprintf(os.Stderr, "Failed to add target %s: %v\n", target.Path, err)
}
}
- // Start monitoring
+ // Start monitor
if err := mon.Start(ctx); err != nil {
fmt.Fprintf(os.Stderr, "Failed to start monitor: %v\n", err)
os.Exit(1)
}
- // Setup HTTP server
- mux := http.NewServeMux()
+ var tcpServer *stream.TCPStreamer
+ var httpServer *stream.HTTPStreamer
- // Create handler with optional rate limiting
- var handler http.Handler = streamer
- var rateLimiter *middleware.RateLimiter
+ // Start TCP server if enabled
+ if cfg.TCPServer.Enabled {
+ tcpChan := mon.Subscribe()
+ tcpServer = stream.NewTCPStreamer(tcpChan, cfg.TCPServer)
- if cfg.Stream.RateLimit.Enabled {
- rateLimiter = middleware.NewRateLimiter(
- cfg.Stream.RateLimit.RequestsPerSecond,
- cfg.Stream.RateLimit.BurstSize,
- cfg.Stream.RateLimit.CleanupIntervalS,
- )
- handler = rateLimiter.Middleware(handler)
- fmt.Printf("Rate limiting enabled: %d req/s, burst %d\n",
- cfg.Stream.RateLimit.RequestsPerSecond,
- cfg.Stream.RateLimit.BurstSize)
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ if err := tcpServer.Start(); err != nil {
+ fmt.Fprintf(os.Stderr, "TCP server error: %v\n", err)
+ }
+ }()
+
+ fmt.Printf("TCP streaming on port %d\n", cfg.TCPServer.Port)
}
- mux.Handle("/stream", handler)
+ // Start HTTP server if enabled
+ if cfg.HTTPServer.Enabled {
+ httpChan := mon.Subscribe()
+ httpServer = stream.NewHTTPStreamer(httpChan, cfg.HTTPServer)
- // Enhanced status endpoint
- mux.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
- w.Header().Set("Content-Type", "application/json")
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ if err := httpServer.Start(); err != nil {
+ fmt.Fprintf(os.Stderr, "HTTP server error: %v\n", err)
+ }
+ }()
- status := map[string]interface{}{
- "service": "LogWisp",
- "version": "2.0.0",
- "port": cfg.Port,
- "color_mode": colorMode,
- "config": map[string]interface{}{
- "monitor": map[string]interface{}{
- "check_interval_ms": cfg.Monitor.CheckIntervalMs,
- "targets_count": len(cfg.Monitor.Targets),
- },
- "stream": map[string]interface{}{
- "buffer_size": cfg.Stream.BufferSize,
- "rate_limit": map[string]interface{}{
- "enabled": cfg.Stream.RateLimit.Enabled,
- "requests_per_second": cfg.Stream.RateLimit.RequestsPerSecond,
- "burst_size": cfg.Stream.RateLimit.BurstSize,
- },
- },
- },
- }
-
- // Add runtime stats
- if rateLimiter != nil {
- status["rate_limiter"] = rateLimiter.Stats()
- }
- status["streamer"] = streamer.Stats()
-
- json.NewEncoder(w).Encode(status)
- })
-
- server := &http.Server{
- Addr: fmt.Sprintf(":%d", cfg.Port),
- Handler: mux,
- // Add timeouts for better shutdown behavior
- ReadTimeout: 10 * time.Second,
- WriteTimeout: 10 * time.Second,
- IdleTimeout: 120 * time.Second,
+ fmt.Printf("HTTP/SSE streaming on http://localhost:%d/stream\n", cfg.HTTPServer.Port)
+ fmt.Printf("Status available at http://localhost:%d/status\n", cfg.HTTPServer.Port)
}
- // Start server in goroutine
- wg.Add(1)
- go func() {
- defer wg.Done()
- fmt.Printf("LogWisp streaming on http://localhost:%d/stream\n", cfg.Port)
- fmt.Printf("Status available at http://localhost:%d/status\n", cfg.Port)
- if colorMode {
- fmt.Println("Color pass-through enabled")
- }
- fmt.Printf("Config loaded from: %s\n", config.GetConfigPath())
+ if !cfg.TCPServer.Enabled && !cfg.HTTPServer.Enabled {
+ fmt.Fprintln(os.Stderr, "No servers enabled. Enable at least one server in config.")
+ os.Exit(1)
+ }
- if err := server.ListenAndServe(); err != nil && err != http.ErrServerClosed {
- fmt.Fprintf(os.Stderr, "Server error: %v\n", err)
- }
- }()
-
- // Wait for shutdown signal
+ // Wait for shutdown
<-sigChan
fmt.Println("\nShutting down...")
- // Cancel context to stop all components
+ // Stop servers first
+ if tcpServer != nil {
+ tcpServer.Stop()
+ }
+ if httpServer != nil {
+ httpServer.Stop()
+ }
+
+ // Cancel context and stop monitor
cancel()
-
- // Create shutdown context with timeout
- shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), 5*time.Second)
- defer shutdownCancel()
-
- // Shutdown server first
- if err := server.Shutdown(shutdownCtx); err != nil {
- fmt.Fprintf(os.Stderr, "Server shutdown error: %v\n", err)
- // Force close if graceful shutdown fails
- server.Close()
- }
-
- // Stop all components
mon.Stop()
- streamer.Stop()
- if rateLimiter != nil {
- rateLimiter.Stop()
- }
-
- // Wait for all goroutines with timeout
+ // Wait for completion
done := make(chan struct{})
go func() {
wg.Wait()
@@ -235,6 +179,6 @@ func main() {
case <-done:
fmt.Println("Shutdown complete")
case <-time.After(2 * time.Second):
- fmt.Println("Shutdown timeout, forcing exit")
+ fmt.Println("Shutdown timeout")
}
}
\ No newline at end of file
diff --git a/src/internal/config/config.go b/src/internal/config/config.go
index e102d75..03a3ca3 100644
--- a/src/internal/config/config.go
+++ b/src/internal/config/config.go
@@ -1,4 +1,4 @@
-// File: logwisp/src/internal/config/config.go
+// FILE: src/internal/config/config.go
package config
import (
@@ -10,114 +10,97 @@ import (
lconfig "github.com/lixenwraith/config"
)
-// Config holds the complete configuration
type Config struct {
- Port int `toml:"port"`
- Monitor MonitorConfig `toml:"monitor"`
- Stream StreamConfig `toml:"stream"`
+ Monitor MonitorConfig `toml:"monitor"`
+ TCPServer TCPConfig `toml:"tcpserver"`
+ HTTPServer HTTPConfig `toml:"httpserver"`
}
-// MonitorConfig holds monitoring settings
type MonitorConfig struct {
CheckIntervalMs int `toml:"check_interval_ms"`
Targets []MonitorTarget `toml:"targets"`
}
-// MonitorTarget represents a path to monitor
type MonitorTarget struct {
- Path string `toml:"path"` // File or directory path
- Pattern string `toml:"pattern"` // Glob pattern for directories
- IsFile bool `toml:"is_file"` // True if monitoring specific file
+ Path string `toml:"path"`
+ Pattern string `toml:"pattern"`
+ IsFile bool `toml:"is_file"`
}
-// StreamConfig holds streaming settings
-type StreamConfig struct {
- BufferSize int `toml:"buffer_size"`
- RateLimit RateLimitConfig `toml:"rate_limit"`
+type TCPConfig struct {
+ Enabled bool `toml:"enabled"`
+ Port int `toml:"port"`
+ BufferSize int `toml:"buffer_size"`
+ SSLEnabled bool `toml:"ssl_enabled"`
+ SSLCertFile string `toml:"ssl_cert_file"`
+ SSLKeyFile string `toml:"ssl_key_file"`
+ Heartbeat HeartbeatConfig `toml:"heartbeat"`
}
-// RateLimitConfig holds rate limiting settings
-type RateLimitConfig struct {
- Enabled bool `toml:"enabled"`
- RequestsPerSecond int `toml:"requests_per_second"`
- BurstSize int `toml:"burst_size"`
- CleanupIntervalS int64 `toml:"cleanup_interval_s"`
+type HTTPConfig struct {
+ Enabled bool `toml:"enabled"`
+ Port int `toml:"port"`
+ BufferSize int `toml:"buffer_size"`
+ SSLEnabled bool `toml:"ssl_enabled"`
+ SSLCertFile string `toml:"ssl_cert_file"`
+ SSLKeyFile string `toml:"ssl_key_file"`
+ Heartbeat HeartbeatConfig `toml:"heartbeat"`
+}
+
+type HeartbeatConfig struct {
+ Enabled bool `toml:"enabled"`
+ IntervalSeconds int `toml:"interval_seconds"`
+ IncludeTimestamp bool `toml:"include_timestamp"`
+ IncludeStats bool `toml:"include_stats"`
+ Format string `toml:"format"` // "comment" or "json"
}
-// defaults returns configuration with default values
func defaults() *Config {
return &Config{
- Port: 8080,
Monitor: MonitorConfig{
CheckIntervalMs: 100,
Targets: []MonitorTarget{
{Path: "./", Pattern: "*.log", IsFile: false},
},
},
- Stream: StreamConfig{
+ TCPServer: TCPConfig{
+ Enabled: false,
+ Port: 9090,
BufferSize: 1000,
- RateLimit: RateLimitConfig{
- Enabled: false,
- RequestsPerSecond: 10,
- BurstSize: 20,
- CleanupIntervalS: 60,
+ Heartbeat: HeartbeatConfig{
+ Enabled: false,
+ IntervalSeconds: 30,
+ IncludeTimestamp: true,
+ IncludeStats: false,
+ Format: "json",
+ },
+ },
+ HTTPServer: HTTPConfig{
+ Enabled: true,
+ Port: 8080,
+ BufferSize: 1000,
+ Heartbeat: HeartbeatConfig{
+ Enabled: true,
+ IntervalSeconds: 30,
+ IncludeTimestamp: true,
+ IncludeStats: false,
+ Format: "comment",
},
},
}
}
-// Load reads configuration using lixenwraith/config Builder pattern
-func Load() (*Config, error) {
- configPath := GetConfigPath()
-
- cfg, err := lconfig.NewBuilder().
- WithDefaults(defaults()).
- WithEnvPrefix("LOGWISP_").
- WithFile(configPath).
- WithEnvTransform(customEnvTransform).
- WithSources(
- lconfig.SourceEnv,
- lconfig.SourceFile,
- lconfig.SourceDefault,
- ).
- Build()
-
- if err != nil {
- // Only fail on actual errors, not missing config file
- if !strings.Contains(err.Error(), "not found") {
- return nil, fmt.Errorf("failed to load config: %w", err)
- }
- }
-
- // Special handling for LOGWISP_MONITOR_TARGETS env var
- if err := handleMonitorTargetsEnv(cfg); err != nil {
- return nil, err
- }
-
- // Scan into final config
- finalConfig := &Config{}
- if err := cfg.Scan("", finalConfig); err != nil {
- return nil, fmt.Errorf("failed to scan config: %w", err)
- }
-
- return finalConfig, finalConfig.validate()
-}
-
-// LoadWithCLI loads configuration and applies CLI arguments
func LoadWithCLI(cliArgs []string) (*Config, error) {
configPath := GetConfigPath()
- // Convert CLI args to config format
- convertedArgs := convertCLIArgs(cliArgs)
-
cfg, err := lconfig.NewBuilder().
WithDefaults(defaults()).
WithEnvPrefix("LOGWISP_").
WithFile(configPath).
- WithArgs(convertedArgs).
+ WithArgs(cliArgs).
WithEnvTransform(customEnvTransform).
WithSources(
- lconfig.SourceCLI, // CLI highest priority
+ lconfig.SourceCLI,
lconfig.SourceEnv,
lconfig.SourceFile,
lconfig.SourceDefault,
@@ -130,12 +113,10 @@ func LoadWithCLI(cliArgs []string) (*Config, error) {
}
}
- // Handle special env var
if err := handleMonitorTargetsEnv(cfg); err != nil {
return nil, err
}
- // Scan into final config
finalConfig := &Config{}
if err := cfg.Scan("", finalConfig); err != nil {
return nil, fmt.Errorf("failed to scan config: %w", err)
@@ -144,52 +125,14 @@ func LoadWithCLI(cliArgs []string) (*Config, error) {
return finalConfig, finalConfig.validate()
}
-// customEnvTransform handles LOGWISP_ prefix environment variables
func customEnvTransform(path string) string {
- // Standard transform
env := strings.ReplaceAll(path, ".", "_")
env = strings.ToUpper(env)
env = "LOGWISP_" + env
-
- // Handle common variations
- switch env {
- case "LOGWISP_STREAM_RATE_LIMIT_REQUESTS_PER_SECOND":
- if _, exists := os.LookupEnv("LOGWISP_STREAM_RATE_LIMIT_REQUESTS_PER_SEC"); exists {
- return "LOGWISP_STREAM_RATE_LIMIT_REQUESTS_PER_SEC"
- }
- case "LOGWISP_STREAM_RATE_LIMIT_CLEANUP_INTERVAL_S":
- if _, exists := os.LookupEnv("LOGWISP_STREAM_RATE_LIMIT_CLEANUP_INTERVAL"); exists {
- return "LOGWISP_STREAM_RATE_LIMIT_CLEANUP_INTERVAL"
- }
- }
-
return env
}
-// convertCLIArgs converts CLI args to config package format
-func convertCLIArgs(args []string) []string {
- var converted []string
-
- for _, arg := range args {
- switch {
- case arg == "-c" || arg == "--color":
- // Color mode is handled separately by main.go
- continue
- case strings.HasPrefix(arg, "--config="):
- // Config file path handled separately
- continue
- case strings.HasPrefix(arg, "--"):
- // Pass through other long flags
- converted = append(converted, arg)
- }
- }
-
- return converted
-}
-
-// GetConfigPath returns the configuration file path
func GetConfigPath() string {
- // Check explicit config file paths
if configFile := os.Getenv("LOGWISP_CONFIG_FILE"); configFile != "" {
if filepath.IsAbs(configFile) {
return configFile
@@ -204,7 +147,6 @@ func GetConfigPath() string {
return filepath.Join(configDir, "logwisp.toml")
}
- // Default location
if homeDir, err := os.UserHomeDir(); err == nil {
return filepath.Join(homeDir, ".config", "logwisp.toml")
}
@@ -212,13 +154,10 @@ func GetConfigPath() string {
return "logwisp.toml"
}
-// handleMonitorTargetsEnv handles comma-separated monitor targets env var
func handleMonitorTargetsEnv(cfg *lconfig.Config) error {
if targetsStr := os.Getenv("LOGWISP_MONITOR_TARGETS"); targetsStr != "" {
- // Clear any existing targets from file/defaults
cfg.Set("monitor.targets", []MonitorTarget{})
- // Parse comma-separated format: path:pattern:isfile,path2:pattern2:isfile
parts := strings.Split(targetsStr, ",")
for i, part := range parts {
targetParts := strings.Split(part, ":")
@@ -248,12 +187,7 @@ func handleMonitorTargetsEnv(cfg *lconfig.Config) error {
return nil
}
-// validate ensures configuration is valid
func (c *Config) validate() error {
- if c.Port < 1 || c.Port > 65535 {
- return fmt.Errorf("invalid port: %d", c.Port)
- }
-
if c.Monitor.CheckIntervalMs < 10 {
return fmt.Errorf("check interval too small: %d ms", c.Monitor.CheckIntervalMs)
}
@@ -266,33 +200,44 @@ func (c *Config) validate() error {
if target.Path == "" {
return fmt.Errorf("target %d: empty path", i)
}
-
- if !target.IsFile && target.Pattern == "" {
- return fmt.Errorf("target %d: pattern required for directory monitoring", i)
- }
-
- // SECURITY: Validate paths don't contain directory traversal
if strings.Contains(target.Path, "..") {
return fmt.Errorf("target %d: path contains directory traversal", i)
}
}
- if c.Stream.BufferSize < 1 {
- return fmt.Errorf("buffer size must be positive: %d", c.Stream.BufferSize)
+ if c.TCPServer.Enabled {
+ if c.TCPServer.Port < 1 || c.TCPServer.Port > 65535 {
+ return fmt.Errorf("invalid TCP port: %d", c.TCPServer.Port)
+ }
+ if c.TCPServer.BufferSize < 1 {
+ return fmt.Errorf("TCP buffer size must be positive: %d", c.TCPServer.BufferSize)
+ }
}
- if c.Stream.RateLimit.Enabled {
- if c.Stream.RateLimit.RequestsPerSecond < 1 {
- return fmt.Errorf("rate limit requests per second must be positive: %d",
- c.Stream.RateLimit.RequestsPerSecond)
+ if c.HTTPServer.Enabled {
+ if c.HTTPServer.Port < 1 || c.HTTPServer.Port > 65535 {
+ return fmt.Errorf("invalid HTTP port: %d", c.HTTPServer.Port)
}
- if c.Stream.RateLimit.BurstSize < 1 {
- return fmt.Errorf("rate limit burst size must be positive: %d",
- c.Stream.RateLimit.BurstSize)
+ if c.HTTPServer.BufferSize < 1 {
+ return fmt.Errorf("HTTP buffer size must be positive: %d", c.HTTPServer.BufferSize)
}
- if c.Stream.RateLimit.CleanupIntervalS < 1 {
- return fmt.Errorf("rate limit cleanup interval must be positive: %d",
- c.Stream.RateLimit.CleanupIntervalS)
+ }
+
+ if c.TCPServer.Enabled && c.TCPServer.Heartbeat.Enabled {
+ if c.TCPServer.Heartbeat.IntervalSeconds < 1 {
+ return fmt.Errorf("TCP heartbeat interval must be positive: %d", c.TCPServer.Heartbeat.IntervalSeconds)
+ }
+ if c.TCPServer.Heartbeat.Format != "json" && c.TCPServer.Heartbeat.Format != "comment" {
+ return fmt.Errorf("TCP heartbeat format must be 'json' or 'comment': %s", c.TCPServer.Heartbeat.Format)
+ }
+ }
+
+ if c.HTTPServer.Enabled && c.HTTPServer.Heartbeat.Enabled {
+ if c.HTTPServer.Heartbeat.IntervalSeconds < 1 {
+ return fmt.Errorf("HTTP heartbeat interval must be positive: %d", c.HTTPServer.Heartbeat.IntervalSeconds)
+ }
+ if c.HTTPServer.Heartbeat.Format != "json" && c.HTTPServer.Heartbeat.Format != "comment" {
+ return fmt.Errorf("HTTP heartbeat format must be 'json' or 'comment': %s", c.HTTPServer.Heartbeat.Format)
}
}
diff --git a/src/internal/middleware/ratelimiter.go b/src/internal/middleware/ratelimiter.go
deleted file mode 100644
index f9af14b..0000000
--- a/src/internal/middleware/ratelimiter.go
+++ /dev/null
@@ -1,126 +0,0 @@
-// File: logwisp/src/internal/middleware/ratelimit.go
-package middleware
-
-import (
- "fmt"
- "net/http"
- "sync"
- "time"
-
- "golang.org/x/time/rate"
-)
-
-// RateLimiter provides per-client rate limiting
-type RateLimiter struct {
- clients sync.Map // map[string]*clientLimiter
- requestsPerSec int
- burstSize int
- cleanupInterval time.Duration
- done chan struct{}
-}
-
-type clientLimiter struct {
- limiter *rate.Limiter
- lastSeen time.Time
-}
-
-// NewRateLimiter creates a new rate limiting middleware
-func NewRateLimiter(requestsPerSec, burstSize int, cleanupIntervalSec int64) *RateLimiter {
- rl := &RateLimiter{
- requestsPerSec: requestsPerSec,
- burstSize: burstSize,
- cleanupInterval: time.Duration(cleanupIntervalSec) * time.Second,
- done: make(chan struct{}),
- }
-
- // Start cleanup routine
- go rl.cleanup()
-
- return rl
-}
-
-// Middleware returns an HTTP middleware function
-func (rl *RateLimiter) Middleware(next http.Handler) http.Handler {
- return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
- // Get client IP
- clientIP := r.RemoteAddr
- if forwarded := r.Header.Get("X-Forwarded-For"); forwarded != "" {
- clientIP = forwarded
- }
-
- // Get or create limiter for client
- limiter := rl.getLimiter(clientIP)
-
- // Check rate limit
- if !limiter.Allow() {
- http.Error(w, "Rate limit exceeded", http.StatusTooManyRequests)
- return
- }
-
- // Continue to next handler
- next.ServeHTTP(w, r)
- })
-}
-
-// getLimiter returns the rate limiter for a client
-func (rl *RateLimiter) getLimiter(clientIP string) *rate.Limiter {
- // Try to get existing limiter
- if val, ok := rl.clients.Load(clientIP); ok {
- client := val.(*clientLimiter)
- client.lastSeen = time.Now()
- return client.limiter
- }
-
- // Create new limiter
- limiter := rate.NewLimiter(rate.Limit(rl.requestsPerSec), rl.burstSize)
- client := &clientLimiter{
- limiter: limiter,
- lastSeen: time.Now(),
- }
-
- rl.clients.Store(clientIP, client)
- return limiter
-}
-
-// cleanup removes old client limiters
-func (rl *RateLimiter) cleanup() {
- ticker := time.NewTicker(rl.cleanupInterval)
- defer ticker.Stop()
-
- for {
- select {
- case <-rl.done:
- return
- case <-ticker.C:
- rl.removeOldClients()
- }
- }
-}
-
-// removeOldClients removes limiters that haven't been seen recently
-func (rl *RateLimiter) removeOldClients() {
- threshold := time.Now().Add(-rl.cleanupInterval * 2) // Keep for 2x cleanup interval
-
- rl.clients.Range(func(key, value interface{}) bool {
- client := value.(*clientLimiter)
- if client.lastSeen.Before(threshold) {
- rl.clients.Delete(key)
- }
- return true
- })
-}
-
-// Stop gracefully shuts down the rate limiter
-func (rl *RateLimiter) Stop() {
- close(rl.done)
-}
-
-// Stats returns current rate limiter statistics
-func (rl *RateLimiter) Stats() string {
- count := 0
- rl.clients.Range(func(_, _ interface{}) bool {
- count++
- return true
- })
- return fmt.Sprintf("Active clients: %d", count)
-}
\ No newline at end of file
diff --git a/src/internal/monitor/file_watcher.go b/src/internal/monitor/file_watcher.go
new file mode 100644
index 0000000..902171a
--- /dev/null
+++ b/src/internal/monitor/file_watcher.go
@@ -0,0 +1,261 @@
+package monitor
+
+import (
+ "bufio"
+ "context"
+ "encoding/json"
+ "fmt"
+ "io"
+ "os"
+ "path/filepath"
+ "regexp"
+ "strings"
+ "sync"
+ "syscall"
+ "time"
+)
+
+type fileWatcher struct {
+ path string
+ callback func(LogEntry)
+ position int64
+ size int64
+ inode uint64
+ modTime time.Time
+ mu sync.Mutex
+ stopped bool
+ rotationSeq int
+}
+
+func newFileWatcher(path string, callback func(LogEntry)) *fileWatcher {
+ return &fileWatcher{
+ path: path,
+ callback: callback,
+ }
+}
+
+func (w *fileWatcher) watch(ctx context.Context) {
+ if err := w.seekToEnd(); err != nil {
+ return // cannot open/stat the file at startup; watcher goroutine exits
+ }
+
+ ticker := time.NewTicker(100 * time.Millisecond) // fixed poll interval, independent of monitor.check_interval_ms
+ defer ticker.Stop()
+
+ for {
+ select {
+ case <-ctx.Done():
+ return
+ case <-ticker.C:
+ if w.isStopped() {
+ return
+ }
+ w.checkFile() // error ignored; transient read failures are retried on the next tick
+ }
+ }
+}
+
+func (w *fileWatcher) seekToEnd() error {
+ file, err := os.Open(w.path)
+ if err != nil {
+ return err
+ }
+ defer file.Close()
+
+ info, err := file.Stat()
+ if err != nil {
+ return err
+ }
+
+ pos, err := file.Seek(0, io.SeekEnd)
+ if err != nil {
+ return err
+ }
+
+ w.mu.Lock()
+ w.position = pos
+ w.size = info.Size()
+ w.modTime = info.ModTime()
+
+ if stat, ok := info.Sys().(*syscall.Stat_t); ok { // inode tracking; available on Unix-like systems only
+ w.inode = stat.Ino
+ }
+ w.mu.Unlock()
+
+ return nil
+}
+
+func (w *fileWatcher) checkFile() error {
+ file, err := os.Open(w.path)
+ if err != nil {
+ return err
+ }
+ defer file.Close()
+
+ info, err := file.Stat()
+ if err != nil {
+ return err
+ }
+
+ w.mu.Lock()
+ oldPos := w.position
+ oldSize := w.size
+ oldInode := w.inode
+ oldModTime := w.modTime
+ w.mu.Unlock()
+
+ currentSize := info.Size()
+ currentModTime := info.ModTime()
+ var currentInode uint64
+
+ if stat, ok := info.Sys().(*syscall.Stat_t); ok {
+ currentInode = stat.Ino
+ }
+
+ rotated := false
+ rotationReason := ""
+
+ if oldInode != 0 && currentInode != 0 && currentInode != oldInode {
+ rotated = true
+ rotationReason = "inode change"
+ }
+
+ if !rotated && currentSize < oldSize {
+ rotated = true
+ rotationReason = "size decrease"
+ }
+
+ if !rotated && currentModTime.Before(oldModTime) && currentSize <= oldSize {
+ rotated = true
+ rotationReason = "modification time reset"
+ }
+
+ if !rotated && oldPos > currentSize+1024 { // allow 1KB slack before treating the saved position as stale
+ rotated = true
+ rotationReason = "position beyond file size"
+ }
+
+ newPos := oldPos
+ if rotated {
+ newPos = 0
+ w.mu.Lock()
+ w.rotationSeq++
+ seq := w.rotationSeq
+ w.inode = currentInode
+ w.mu.Unlock()
+
+ w.callback(LogEntry{
+ Time: time.Now(),
+ Source: filepath.Base(w.path),
+ Level: "INFO",
+ Message: fmt.Sprintf("Log rotation detected (#%d): %s", seq, rotationReason),
+ })
+ }
+
+ if _, err := file.Seek(newPos, io.SeekStart); err != nil {
+ return err
+ }
+
+ scanner := bufio.NewScanner(file)
+ scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)
+
+ for scanner.Scan() {
+ line := scanner.Text()
+ if line == "" {
+ continue
+ }
+
+ entry := w.parseLine(line)
+ w.callback(entry)
+ }
+
+ if currentPos, err := file.Seek(0, io.SeekCurrent); err == nil {
+ w.mu.Lock()
+ w.position = currentPos
+ w.size = currentSize
+ w.modTime = currentModTime
+ w.mu.Unlock()
+ }
+
+ return scanner.Err()
+}
+
+func (w *fileWatcher) parseLine(line string) LogEntry {
+ var jsonLog struct {
+ Time string `json:"time"`
+ Level string `json:"level"`
+ Message string `json:"msg"`
+ Fields json.RawMessage `json:"fields"`
+ }
+
+ if err := json.Unmarshal([]byte(line), &jsonLog); err == nil {
+ timestamp, err := time.Parse(time.RFC3339Nano, jsonLog.Time)
+ if err != nil {
+ timestamp = time.Now()
+ }
+
+ return LogEntry{
+ Time: timestamp,
+ Source: filepath.Base(w.path),
+ Level: jsonLog.Level,
+ Message: jsonLog.Message,
+ Fields: jsonLog.Fields,
+ }
+ }
+
+ level := extractLogLevel(line)
+
+ return LogEntry{
+ Time: time.Now(),
+ Source: filepath.Base(w.path),
+ Level: level,
+ Message: line,
+ }
+}
+
+func extractLogLevel(line string) string {
+ patterns := []struct {
+ patterns []string
+ level string
+ }{
+ {[]string{"[ERROR]", "ERROR:", " ERROR ", "ERR:", "[ERR]", "FATAL:", "[FATAL]"}, "ERROR"},
+ {[]string{"[WARN]", "WARN:", " WARN ", "WARNING:", "[WARNING]"}, "WARN"},
+ {[]string{"[INFO]", "INFO:", " INFO ", "[INF]", "INF:"}, "INFO"},
+ {[]string{"[DEBUG]", "DEBUG:", " DEBUG ", "[DBG]", "DBG:"}, "DEBUG"},
+ {[]string{"[TRACE]", "TRACE:", " TRACE "}, "TRACE"},
+ }
+
+ upperLine := strings.ToUpper(line)
+ for _, group := range patterns {
+ for _, pattern := range group.patterns {
+ if strings.Contains(upperLine, pattern) {
+ return group.level
+ }
+ }
+ }
+
+ return ""
+}
+
+func globToRegex(glob string) string {
+ regex := regexp.QuoteMeta(glob)
+ regex = strings.ReplaceAll(regex, `\*`, `.*`)
+ regex = strings.ReplaceAll(regex, `\?`, `.`)
+ return "^" + regex + "$"
+}
+
+func (w *fileWatcher) close() {
+ w.stop()
+}
+
+func (w *fileWatcher) stop() {
+ w.mu.Lock()
+ w.stopped = true
+ w.mu.Unlock()
+}
+
+func (w *fileWatcher) isStopped() bool {
+ w.mu.Lock()
+ defer w.mu.Unlock()
+ return w.stopped
+}
\ No newline at end of file
diff --git a/src/internal/monitor/monitor.go b/src/internal/monitor/monitor.go
index cd89b06..f67b2e1 100644
--- a/src/internal/monitor/monitor.go
+++ b/src/internal/monitor/monitor.go
@@ -1,22 +1,17 @@
-// File: logwisp/src/internal/monitor/monitor.go
+// FILE: src/internal/monitor/monitor.go
package monitor
import (
- "bufio"
"context"
"encoding/json"
"fmt"
- "io"
"os"
"path/filepath"
"regexp"
- "strings"
"sync"
- "syscall"
"time"
)
-// LogEntry represents a log line to be streamed
type LogEntry struct {
Time time.Time `json:"time"`
Source string `json:"source"`
@@ -25,9 +20,8 @@ type LogEntry struct {
Fields json.RawMessage `json:"fields,omitempty"`
}
-// Monitor watches files and directories for log entries
type Monitor struct {
- callback func(LogEntry)
+ subscribers []chan LogEntry
targets []target
watchers map[string]*fileWatcher
mu sync.RWMutex
@@ -41,26 +35,44 @@ type target struct {
path string
pattern string
isFile bool
- regex *regexp.Regexp // FIXED: Compiled pattern for performance
+ regex *regexp.Regexp
}
-// New creates a new monitor instance
-func New(callback func(LogEntry)) *Monitor {
+func New() *Monitor {
return &Monitor{
- callback: callback,
watchers: make(map[string]*fileWatcher),
checkInterval: 100 * time.Millisecond,
}
}
-// SetCheckInterval configures the file check frequency
+func (m *Monitor) Subscribe() chan LogEntry {
+ m.mu.Lock()
+ defer m.mu.Unlock()
+
+ ch := make(chan LogEntry, 1000) // buffered; publish() drops entries when full
+ m.subscribers = append(m.subscribers, ch)
+ return ch
+}
+
+func (m *Monitor) publish(entry LogEntry) {
+ m.mu.RLock()
+ defer m.mu.RUnlock()
+
+ for _, ch := range m.subscribers {
+ select {
+ case ch <- entry:
+ default:
+ // Subscriber channel full: drop the entry rather than block the watcher
+ }
+ }
+}
+
func (m *Monitor) SetCheckInterval(interval time.Duration) {
m.mu.Lock()
m.checkInterval = interval
m.mu.Unlock()
}
-// AddTarget adds a path to monitor with enhanced pattern support
func (m *Monitor) AddTarget(path, pattern string, isFile bool) error {
absPath, err := filepath.Abs(path)
if err != nil {
@@ -69,7 +81,6 @@ func (m *Monitor) AddTarget(path, pattern string, isFile bool) error {
var compiledRegex *regexp.Regexp
if !isFile && pattern != "" {
- // FIXED: Convert glob pattern to regex for better matching
regexPattern := globToRegex(pattern)
compiledRegex, err = regexp.Compile(regexPattern)
if err != nil {
@@ -89,13 +100,10 @@ func (m *Monitor) AddTarget(path, pattern string, isFile bool) error {
return nil
}
-// Start begins monitoring with configurable interval
func (m *Monitor) Start(ctx context.Context) error {
m.ctx, m.cancel = context.WithCancel(ctx)
-
m.wg.Add(1)
go m.monitorLoop()
-
return nil
}
@@ -109,14 +117,15 @@ func (m *Monitor) Stop() {
for _, w := range m.watchers {
w.close()
}
+ for _, ch := range m.subscribers {
+ close(ch)
+ }
m.mu.Unlock()
}
-// FIXED: Enhanced monitoring loop with configurable interval
func (m *Monitor) monitorLoop() {
defer m.wg.Done()
- // Initial scan
m.checkTargets()
m.mu.RLock()
@@ -133,7 +142,6 @@ func (m *Monitor) monitorLoop() {
case <-ticker.C:
m.checkTargets()
- // Update ticker interval if changed
m.mu.RLock()
newInterval := m.checkInterval
m.mu.RUnlock()
@@ -147,7 +155,6 @@ func (m *Monitor) monitorLoop() {
}
}
-// FIXED: Enhanced target checking with better file discovery
func (m *Monitor) checkTargets() {
m.mu.RLock()
targets := make([]target, len(m.targets))
@@ -158,12 +165,10 @@ func (m *Monitor) checkTargets() {
if t.isFile {
m.ensureWatcher(t.path)
} else {
- // FIXED: More efficient directory scanning
files, err := m.scanDirectory(t.path, t.regex)
if err != nil {
continue
}
-
for _, file := range files {
m.ensureWatcher(file)
}
@@ -173,7 +178,6 @@ func (m *Monitor) checkTargets() {
m.cleanupWatchers()
}
-// FIXED: Optimized directory scanning
func (m *Monitor) scanDirectory(dir string, pattern *regexp.Regexp) ([]string, error) {
entries, err := os.ReadDir(dir)
if err != nil {
@@ -207,7 +211,7 @@ func (m *Monitor) ensureWatcher(path string) {
return
}
- w := newFileWatcher(path, m.callback)
+ w := newFileWatcher(path, m.publish)
m.watchers[path] = w
m.wg.Add(1)
@@ -231,268 +235,4 @@ func (m *Monitor) cleanupWatchers() {
delete(m.watchers, path)
}
}
-}
-
-// fileWatcher with enhanced rotation detection
-type fileWatcher struct {
- path string
- callback func(LogEntry)
- position int64
- size int64
- inode uint64
- modTime time.Time
- mu sync.Mutex
- stopped bool
- rotationSeq int // FIXED: Track rotation sequence for logging
-}
-
-func newFileWatcher(path string, callback func(LogEntry)) *fileWatcher {
- return &fileWatcher{
- path: path,
- callback: callback,
- }
-}
-
-func (w *fileWatcher) watch(ctx context.Context) {
- if err := w.seekToEnd(); err != nil {
- return
- }
-
- ticker := time.NewTicker(100 * time.Millisecond)
- defer ticker.Stop()
-
- for {
- select {
- case <-ctx.Done():
- return
- case <-ticker.C:
- if w.isStopped() {
- return
- }
- w.checkFile()
- }
- }
-}
-
-// FIXED: Enhanced file state tracking for better rotation detection
-func (w *fileWatcher) seekToEnd() error {
- file, err := os.Open(w.path)
- if err != nil {
- return err
- }
- defer file.Close()
-
- info, err := file.Stat()
- if err != nil {
- return err
- }
-
- pos, err := file.Seek(0, io.SeekEnd)
- if err != nil {
- return err
- }
-
- w.mu.Lock()
- w.position = pos
- w.size = info.Size()
- w.modTime = info.ModTime()
-
- // Get inode for rotation detection (Unix-specific)
- if stat, ok := info.Sys().(*syscall.Stat_t); ok {
- w.inode = stat.Ino
- }
- w.mu.Unlock()
-
- return nil
-}
-
-// FIXED: Enhanced rotation detection with multiple signals
-func (w *fileWatcher) checkFile() error {
- file, err := os.Open(w.path)
- if err != nil {
- return err
- }
- defer file.Close()
-
- info, err := file.Stat()
- if err != nil {
- return err
- }
-
- w.mu.Lock()
- oldPos := w.position
- oldSize := w.size
- oldInode := w.inode
- oldModTime := w.modTime
- w.mu.Unlock()
-
- currentSize := info.Size()
- currentModTime := info.ModTime()
- var currentInode uint64
-
- if stat, ok := info.Sys().(*syscall.Stat_t); ok {
- currentInode = stat.Ino
- }
-
- // FIXED: Multiple rotation detection methods
- rotated := false
- rotationReason := ""
-
- // Method 1: Inode change (most reliable on Unix)
- if oldInode != 0 && currentInode != 0 && currentInode != oldInode {
- rotated = true
- rotationReason = "inode change"
- }
-
- // Method 2: File size decrease
- if !rotated && currentSize < oldSize {
- rotated = true
- rotationReason = "size decrease"
- }
-
- // Method 3: File modification time reset while size is same or smaller
- if !rotated && currentModTime.Before(oldModTime) && currentSize <= oldSize {
- rotated = true
- rotationReason = "modification time reset"
- }
-
- // Method 4: Large position vs current size discrepancy
- if !rotated && oldPos > currentSize+1024 { // Allow some buffer
- rotated = true
- rotationReason = "position beyond file size"
- }
-
- newPos := oldPos
- if rotated {
- newPos = 0
- w.mu.Lock()
- w.rotationSeq++
- seq := w.rotationSeq
- w.inode = currentInode
- w.mu.Unlock()
-
- // Log rotation event
- w.callback(LogEntry{
- Time: time.Now(),
- Source: filepath.Base(w.path),
- Level: "INFO",
- Message: fmt.Sprintf("Log rotation detected (#%d): %s", seq, rotationReason),
- })
- }
-
- // Seek to position and read new content
- if _, err := file.Seek(newPos, io.SeekStart); err != nil {
- return err
- }
-
- scanner := bufio.NewScanner(file)
- scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024) // 1MB max line
-
- lineCount := 0
- for scanner.Scan() {
- line := scanner.Text()
- if line == "" {
- continue
- }
-
- entry := w.parseLine(line)
- w.callback(entry)
- lineCount++
- }
-
- // Update file state
- if currentPos, err := file.Seek(0, io.SeekCurrent); err == nil {
- w.mu.Lock()
- w.position = currentPos
- w.size = currentSize
- w.modTime = currentModTime
- w.mu.Unlock()
- }
-
- return scanner.Err()
-}
-
-// FIXED: Enhanced log parsing with more level detection patterns
-func (w *fileWatcher) parseLine(line string) LogEntry {
- var jsonLog struct {
- Time string `json:"time"`
- Level string `json:"level"`
- Message string `json:"msg"`
- Fields json.RawMessage `json:"fields"`
- }
-
- // Try JSON parsing first
- if err := json.Unmarshal([]byte(line), &jsonLog); err == nil {
- timestamp, err := time.Parse(time.RFC3339Nano, jsonLog.Time)
- if err != nil {
- timestamp = time.Now()
- }
-
- return LogEntry{
- Time: timestamp,
- Source: filepath.Base(w.path),
- Level: jsonLog.Level,
- Message: jsonLog.Message,
- Fields: jsonLog.Fields,
- }
- }
-
- // Plain text with enhanced level extraction
- level := extractLogLevel(line)
-
- return LogEntry{
- Time: time.Now(),
- Source: filepath.Base(w.path),
- Level: level,
- Message: line,
- }
-}
-
-// FIXED: More comprehensive log level extraction
-func extractLogLevel(line string) string {
- patterns := []struct {
- patterns []string
- level string
- }{
- {[]string{"[ERROR]", "ERROR:", " ERROR ", "ERR:", "[ERR]", "FATAL:", "[FATAL]"}, "ERROR"},
- {[]string{"[WARN]", "WARN:", " WARN ", "WARNING:", "[WARNING]"}, "WARN"},
- {[]string{"[INFO]", "INFO:", " INFO ", "[INF]", "INF:"}, "INFO"},
- {[]string{"[DEBUG]", "DEBUG:", " DEBUG ", "[DBG]", "DBG:"}, "DEBUG"},
- {[]string{"[TRACE]", "TRACE:", " TRACE "}, "TRACE"},
- }
-
- upperLine := strings.ToUpper(line)
- for _, group := range patterns {
- for _, pattern := range group.patterns {
- if strings.Contains(upperLine, pattern) {
- return group.level
- }
- }
- }
-
- return ""
-}
-
-// FIXED: Convert glob patterns to regex
-func globToRegex(glob string) string {
- regex := regexp.QuoteMeta(glob)
- regex = strings.ReplaceAll(regex, `\*`, `.*`)
- regex = strings.ReplaceAll(regex, `\?`, `.`)
- return "^" + regex + "$"
-}
-
-func (w *fileWatcher) close() {
- w.stop()
-}
-
-func (w *fileWatcher) stop() {
- w.mu.Lock()
- w.stopped = true
- w.mu.Unlock()
-}
-
-func (w *fileWatcher) isStopped() bool {
- w.mu.Lock()
- defer w.mu.Unlock()
- return w.stopped
}
\ No newline at end of file
diff --git a/src/internal/stream/http.go b/src/internal/stream/http.go
new file mode 100644
index 0000000..daa617e
--- /dev/null
+++ b/src/internal/stream/http.go
@@ -0,0 +1,192 @@
+// FILE: src/internal/stream/http.go
+package stream
+
+import (
+ "bufio"
+ "encoding/json"
+ "fmt"
+ "strings"
+ "sync"
+ "sync/atomic"
+ "time"
+
+ "github.com/valyala/fasthttp"
+ "logwisp/src/internal/config"
+ "logwisp/src/internal/monitor"
+)
+
+type HTTPStreamer struct {
+	logChan       chan monitor.LogEntry
+	config        config.HTTPConfig
+	server        *fasthttp.Server
+	activeClients atomic.Int32
+	clients       map[uint64]chan monitor.LogEntry
+	nextClientID  atomic.Uint64
+	mu            sync.RWMutex
+	startTime     time.Time
+}
+
+func NewHTTPStreamer(logChan chan monitor.LogEntry, cfg config.HTTPConfig) *HTTPStreamer {
+	return &HTTPStreamer{
+		logChan:   logChan,
+		config:    cfg,
+		clients:   make(map[uint64]chan monitor.LogEntry),
+		startTime: time.Now(),
+	}
+}
+
+func (h *HTTPStreamer) Start() error {
+	// A single dispatcher fans entries out to every connected client.
+	// Without it, concurrent clients would compete for entries on the
+	// shared logChan and each entry would reach only one of them.
+	go h.dispatch()
+
+	h.server = &fasthttp.Server{
+		Handler:           h.requestHandler,
+		StreamRequestBody: true,
+	}
+
+	addr := fmt.Sprintf(":%d", h.config.Port)
+	return h.server.ListenAndServe(addr)
+}
+
+func (h *HTTPStreamer) dispatch() {
+	for entry := range h.logChan {
+		h.mu.RLock()
+		for _, ch := range h.clients {
+			select {
+			case ch <- entry:
+			default:
+				// Drop if client buffer full
+			}
+		}
+		h.mu.RUnlock()
+	}
+
+	// Upstream channel closed: disconnect all clients.
+	h.mu.Lock()
+	for id, ch := range h.clients {
+		close(ch)
+		delete(h.clients, id)
+	}
+	h.mu.Unlock()
+}
+
+func (h *HTTPStreamer) Stop() {
+	if h.server != nil {
+		h.server.Shutdown()
+	}
+}
+
+func (h *HTTPStreamer) requestHandler(ctx *fasthttp.RequestCtx) {
+	path := string(ctx.Path())
+
+	switch path {
+	case "/stream":
+		h.handleStream(ctx)
+	case "/status":
+		h.handleStatus(ctx)
+	default:
+		ctx.SetStatusCode(fasthttp.StatusNotFound)
+	}
+}
+
+func (h *HTTPStreamer) handleStream(ctx *fasthttp.RequestCtx) {
+	// Set SSE headers
+	ctx.Response.Header.Set("Content-Type", "text/event-stream")
+	ctx.Response.Header.Set("Cache-Control", "no-cache")
+	ctx.Response.Header.Set("Connection", "keep-alive")
+	ctx.Response.Header.Set("Access-Control-Allow-Origin", "*")
+	ctx.Response.Header.Set("X-Accel-Buffering", "no")
+
+	// Register this client with the dispatcher.
+	clientID := h.nextClientID.Add(1)
+	clientChan := make(chan monitor.LogEntry, h.config.BufferSize)
+	h.mu.Lock()
+	h.clients[clientID] = clientChan
+	h.mu.Unlock()
+
+	streamFunc := func(w *bufio.Writer) {
+		// fasthttp invokes this writer after the handler returns, so
+		// connection accounting and cleanup belong here, not in the
+		// handler body.
+		h.activeClients.Add(1)
+		defer func() {
+			h.activeClients.Add(-1)
+			h.mu.Lock()
+			delete(h.clients, clientID)
+			h.mu.Unlock()
+		}()
+
+		// Send initial connected event
+		fmt.Fprintf(w, "event: connected\ndata: {\"client_id\":\"%d\"}\n\n", clientID)
+		w.Flush()
+
+ var ticker *time.Ticker
+ var tickerChan <-chan time.Time
+
+ if h.config.Heartbeat.Enabled {
+ ticker = time.NewTicker(time.Duration(h.config.Heartbeat.IntervalSeconds) * time.Second)
+ tickerChan = ticker.C
+ defer ticker.Stop()
+ }
+
+ for {
+ select {
+ case entry, ok := <-clientChan:
+ if !ok {
+ return
+ }
+
+ data, err := json.Marshal(entry)
+ if err != nil {
+ continue
+ }
+
+ fmt.Fprintf(w, "data: %s\n\n", data)
+ if err := w.Flush(); err != nil {
+ return
+ }
+
+ case <-tickerChan:
+ if heartbeat := h.formatHeartbeat(); heartbeat != "" {
+ fmt.Fprint(w, heartbeat)
+ if err := w.Flush(); err != nil {
+ return
+ }
+ }
+ }
+ }
+ }
+
+ ctx.SetBodyStreamWriter(streamFunc)
+}
+
+func (h *HTTPStreamer) formatHeartbeat() string {
+ if !h.config.Heartbeat.Enabled {
+ return ""
+ }
+
+ if h.config.Heartbeat.Format == "json" {
+ data := make(map[string]interface{})
+ data["type"] = "heartbeat"
+
+ if h.config.Heartbeat.IncludeTimestamp {
+ data["timestamp"] = time.Now().UTC().Format(time.RFC3339)
+ }
+
+ if h.config.Heartbeat.IncludeStats {
+ data["active_clients"] = h.activeClients.Load()
+ data["uptime_seconds"] = int(time.Since(h.startTime).Seconds())
+ }
+
+ jsonData, _ := json.Marshal(data)
+ return fmt.Sprintf("data: %s\n\n", jsonData)
+ }
+
+ // Default comment format
+ var parts []string
+ parts = append(parts, "heartbeat")
+
+ if h.config.Heartbeat.IncludeTimestamp {
+ parts = append(parts, time.Now().UTC().Format(time.RFC3339))
+ }
+
+ if h.config.Heartbeat.IncludeStats {
+ parts = append(parts, fmt.Sprintf("clients=%d", h.activeClients.Load()))
+ parts = append(parts, fmt.Sprintf("uptime=%ds", int(time.Since(h.startTime).Seconds())))
+ }
+
+ return fmt.Sprintf(": %s\n\n", strings.Join(parts, " "))
+}
+
+func (h *HTTPStreamer) handleStatus(ctx *fasthttp.RequestCtx) {
+ ctx.SetContentType("application/json")
+
+ status := map[string]interface{}{
+ "service": "LogWisp",
+ "version": "3.0.0",
+ "http_server": map[string]interface{}{
+ "port": h.config.Port,
+ "active_clients": h.activeClients.Load(),
+ "buffer_size": h.config.BufferSize,
+ },
+ }
+
+ data, _ := json.Marshal(status)
+ ctx.SetBody(data)
+}
\ No newline at end of file
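On the wire, `handleStream` emits one JSON document per SSE `data:` line, terminated by a blank line. A standalone sketch of that framing (the `sseFrame` helper is illustrative, not part of the package):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// sseFrame wraps any JSON-encodable value in the SSE framing used by
// handleStream: a single "data:" line followed by a blank line.
func sseFrame(v interface{}) (string, error) {
	data, err := json.Marshal(v)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("data: %s\n\n", data), nil
}

func main() {
	// json.Marshal emits map keys in sorted order.
	frame, _ := sseFrame(map[string]string{"source": "app.log", "message": "hello"})
	fmt.Print(frame)
}
```

Browsers consume this directly via `EventSource`; the blank line is what delimits one event from the next, which is why every write ends in `\n\n`.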
diff --git a/src/internal/stream/noop_logger.go b/src/internal/stream/noop_logger.go
new file mode 100644
index 0000000..61f89fc
--- /dev/null
+++ b/src/internal/stream/noop_logger.go
@@ -0,0 +1,11 @@
+// FILE: src/internal/stream/noop_logger.go
+package stream
+
+// noopLogger implements gnet's Logger interface but discards everything
+type noopLogger struct{}
+
+func (n noopLogger) Debugf(format string, args ...any) {}
+func (n noopLogger) Infof(format string, args ...any) {}
+func (n noopLogger) Warnf(format string, args ...any) {}
+func (n noopLogger) Errorf(format string, args ...any) {}
+func (n noopLogger) Fatalf(format string, args ...any) {}
\ No newline at end of file
diff --git a/src/internal/stream/stream.go b/src/internal/stream/stream.go
deleted file mode 100644
index be8e6d4..0000000
--- a/src/internal/stream/stream.go
+++ /dev/null
@@ -1,245 +0,0 @@
-// File: logwisp/src/internal/stream/stream.go
-package stream
-
-import (
- "encoding/json"
- "fmt"
- "net/http"
- "sync"
- "sync/atomic"
- "time"
-
- "logwisp/src/internal/monitor"
-)
-
-// Streamer handles Server-Sent Events streaming
-type Streamer struct {
- clients map[string]*clientConnection
- register chan *clientConnection
- unregister chan string
- broadcast chan monitor.LogEntry
- mu sync.RWMutex
- bufferSize int
- done chan struct{}
- colorMode bool
- wg sync.WaitGroup
-
- // Metrics
- totalDropped atomic.Int64
-}
-
-type clientConnection struct {
- id string
- channel chan monitor.LogEntry
- lastActivity time.Time
- dropped atomic.Int64 // Track per-client dropped messages
-}
-
-// New creates a new SSE streamer
-func New(bufferSize int) *Streamer {
- return NewWithOptions(bufferSize, false)
-}
-
-// NewWithOptions creates a new SSE streamer with options
-func NewWithOptions(bufferSize int, colorMode bool) *Streamer {
- s := &Streamer{
- clients: make(map[string]*clientConnection),
- register: make(chan *clientConnection),
- unregister: make(chan string),
- broadcast: make(chan monitor.LogEntry, bufferSize),
- bufferSize: bufferSize,
- done: make(chan struct{}),
- colorMode: colorMode,
- }
-
- s.wg.Add(1)
- go s.run()
- return s
-}
-
-// run manages client connections - SIMPLIFIED: no forced disconnections
-func (s *Streamer) run() {
- defer s.wg.Done()
-
- for {
- select {
- case c := <-s.register:
- s.mu.Lock()
- s.clients[c.id] = c
- s.mu.Unlock()
-
- case id := <-s.unregister:
- s.mu.Lock()
- if client, ok := s.clients[id]; ok {
- close(client.channel)
- delete(s.clients, id)
- }
- s.mu.Unlock()
-
- case entry := <-s.broadcast:
- s.mu.RLock()
- now := time.Now()
-
- for _, client := range s.clients {
- select {
- case client.channel <- entry:
- // Successfully sent
- client.lastActivity = now
- client.dropped.Store(0) // Reset dropped counter on success
- default:
- // Buffer full - skip this message for this client
- // Don't disconnect, just track dropped messages
- dropped := client.dropped.Add(1)
- s.totalDropped.Add(1)
-
- // Log significant drop milestones for monitoring
- if dropped == 100 || dropped == 1000 || dropped%10000 == 0 {
- // Could add logging here if needed
- }
- }
- }
- s.mu.RUnlock()
-
- case <-s.done:
- s.mu.Lock()
- for _, client := range s.clients {
- close(client.channel)
- }
- s.clients = make(map[string]*clientConnection)
- s.mu.Unlock()
- return
- }
- }
-}
-
-// Publish sends a log entry to all connected clients
-func (s *Streamer) Publish(entry monitor.LogEntry) {
- select {
- case s.broadcast <- entry:
- // Sent to broadcast channel
- default:
- // Broadcast buffer full - drop the message globally
- s.totalDropped.Add(1)
- }
-}
-
-// ServeHTTP implements http.Handler for SSE - SIMPLIFIED
-func (s *Streamer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
- // Set SSE headers
- w.Header().Set("Content-Type", "text/event-stream")
- w.Header().Set("Cache-Control", "no-cache")
- w.Header().Set("Connection", "keep-alive")
- w.Header().Set("X-Accel-Buffering", "no") // Disable nginx buffering
-
- // SECURITY: Prevent XSS
- w.Header().Set("X-Content-Type-Options", "nosniff")
-
- // Create client
- clientID := fmt.Sprintf("%d", time.Now().UnixNano())
- ch := make(chan monitor.LogEntry, s.bufferSize)
-
- client := &clientConnection{
- id: clientID,
- channel: ch,
- lastActivity: time.Now(),
- }
-
- // Register client
- s.register <- client
- defer func() {
- s.unregister <- clientID
- }()
-
- // Send initial connection event
- fmt.Fprintf(w, "event: connected\ndata: {\"client_id\":\"%s\",\"buffer_size\":%d}\n\n",
- clientID, s.bufferSize)
- if flusher, ok := w.(http.Flusher); ok {
- flusher.Flush()
- }
-
- // Create ticker for heartbeat - keeps connection alive through proxies
- ticker := time.NewTicker(30 * time.Second)
- defer ticker.Stop()
-
- // Stream events until client disconnects
- for {
- select {
- case <-r.Context().Done():
- // Client disconnected
- return
-
- case entry, ok := <-ch:
- if !ok {
- // Channel closed
- return
- }
-
- // Process entry for color if needed
- if s.colorMode {
- entry = s.processColorEntry(entry)
- }
-
- data, err := json.Marshal(entry)
- if err != nil {
- continue
- }
-
- fmt.Fprintf(w, "data: %s\n\n", data)
- if flusher, ok := w.(http.Flusher); ok {
- flusher.Flush()
- }
-
- case <-ticker.C:
- // Send heartbeat as SSE comment
- fmt.Fprintf(w, ": heartbeat %s\n\n", time.Now().UTC().Format(time.RFC3339))
- if flusher, ok := w.(http.Flusher); ok {
- flusher.Flush()
- }
- }
- }
-}
-
-// Stop gracefully shuts down the streamer
-func (s *Streamer) Stop() {
- close(s.done)
- s.wg.Wait()
- close(s.register)
- close(s.unregister)
- close(s.broadcast)
-}
-
-// processColorEntry preserves ANSI codes in JSON
-func (s *Streamer) processColorEntry(entry monitor.LogEntry) monitor.LogEntry {
- return entry
-}
-
-// Stats returns current streamer statistics
-func (s *Streamer) Stats() map[string]interface{} {
- s.mu.RLock()
- defer s.mu.RUnlock()
-
- stats := map[string]interface{}{
- "active_clients": len(s.clients),
- "buffer_size": s.bufferSize,
- "color_mode": s.colorMode,
- "total_dropped": s.totalDropped.Load(),
- }
-
- // Include per-client dropped counts if any are significant
- var clientsWithDrops []map[string]interface{}
- for id, client := range s.clients {
- dropped := client.dropped.Load()
- if dropped > 0 {
- clientsWithDrops = append(clientsWithDrops, map[string]interface{}{
- "id": id,
- "dropped": dropped,
- })
- }
- }
-
- if len(clientsWithDrops) > 0 {
- stats["clients_with_drops"] = clientsWithDrops
- }
-
- return stats
-}
\ No newline at end of file
diff --git a/src/internal/stream/tcp.go b/src/internal/stream/tcp.go
new file mode 100644
index 0000000..54275a4
--- /dev/null
+++ b/src/internal/stream/tcp.go
@@ -0,0 +1,144 @@
+// FILE: src/internal/stream/tcp.go
+package stream
+
+import (
+ "encoding/json"
+ "fmt"
+ "sync"
+ "sync/atomic"
+ "time"
+
+ "github.com/panjf2000/gnet/v2"
+ "logwisp/src/internal/config"
+ "logwisp/src/internal/monitor"
+)
+
+type TCPStreamer struct {
+ logChan chan monitor.LogEntry
+ config config.TCPConfig
+ server *tcpServer
+ done chan struct{}
+ activeConns atomic.Int32
+ startTime time.Time
+}
+
+type tcpServer struct {
+ gnet.BuiltinEventEngine
+ streamer *TCPStreamer
+ connections sync.Map
+}
+
+func NewTCPStreamer(logChan chan monitor.LogEntry, cfg config.TCPConfig) *TCPStreamer {
+ return &TCPStreamer{
+ logChan: logChan,
+ config: cfg,
+ done: make(chan struct{}),
+ startTime: time.Now(),
+ }
+}
+
+func (t *TCPStreamer) Start() error {
+ t.server = &tcpServer{streamer: t}
+
+ // Start log broadcast loop
+ go t.broadcastLoop()
+
+ // Configure gnet with no-op logger
+ addr := fmt.Sprintf("tcp://:%d", t.config.Port)
+
+ err := gnet.Run(t.server, addr,
+ gnet.WithLogger(noopLogger{}), // No-op logger: discard everything
+ gnet.WithMulticore(true),
+ gnet.WithReusePort(true),
+ )
+
+ return err
+}
+
+func (t *TCPStreamer) Stop() {
+	close(t.done)
+	// This stops the broadcast loop only; the gnet event loop keeps
+	// running. A full shutdown would also stop the engine (Engine.Stop).
+}
+
+func (t *TCPStreamer) broadcastLoop() {
+ var ticker *time.Ticker
+ var tickerChan <-chan time.Time
+
+ if t.config.Heartbeat.Enabled {
+ ticker = time.NewTicker(time.Duration(t.config.Heartbeat.IntervalSeconds) * time.Second)
+ tickerChan = ticker.C
+ defer ticker.Stop()
+ }
+
+ for {
+ select {
+ case entry := <-t.logChan:
+ data, err := json.Marshal(entry)
+ if err != nil {
+ continue
+ }
+ data = append(data, '\n')
+
+ t.server.connections.Range(func(key, value interface{}) bool {
+ conn := key.(gnet.Conn)
+ conn.AsyncWrite(data, nil)
+ return true
+ })
+
+ case <-tickerChan:
+ if heartbeat := t.formatHeartbeat(); heartbeat != nil {
+ t.server.connections.Range(func(key, value interface{}) bool {
+ conn := key.(gnet.Conn)
+ conn.AsyncWrite(heartbeat, nil)
+ return true
+ })
+ }
+
+ case <-t.done:
+ return
+ }
+ }
+}
+
+func (t *TCPStreamer) formatHeartbeat() []byte {
+ if !t.config.Heartbeat.Enabled {
+ return nil
+ }
+
+ data := make(map[string]interface{})
+ data["type"] = "heartbeat"
+
+ if t.config.Heartbeat.IncludeTimestamp {
+ data["time"] = time.Now().UTC().Format(time.RFC3339Nano)
+ }
+
+ if t.config.Heartbeat.IncludeStats {
+ data["active_connections"] = t.activeConns.Load()
+ data["uptime_seconds"] = int(time.Since(t.startTime).Seconds())
+ }
+
+ jsonData, _ := json.Marshal(data)
+ return append(jsonData, '\n')
+}
+
+func (s *tcpServer) OnBoot(eng gnet.Engine) gnet.Action {
+ return gnet.None
+}
+
+func (s *tcpServer) OnOpen(c gnet.Conn) (out []byte, action gnet.Action) {
+ s.connections.Store(c, struct{}{})
+ s.streamer.activeConns.Add(1)
+ return nil, gnet.None
+}
+
+func (s *tcpServer) OnClose(c gnet.Conn, err error) gnet.Action {
+ s.connections.Delete(c)
+ s.streamer.activeConns.Add(-1)
+ return gnet.None
+}
+
+func (s *tcpServer) OnTraffic(c gnet.Conn) gnet.Action {
+ // We don't expect input from clients, just discard
+ c.Discard(-1)
+ return gnet.None
+}
\ No newline at end of file