Compare commits

...

10 Commits

84 changed files with 7939 additions and 6908 deletions

.gitignore vendored
View File

@@ -7,6 +7,5 @@ cert
bin
script
build
test
*.log
*.toml

View File

@@ -6,7 +6,7 @@
<td>
<h1>LogWisp</h1>
<p>
<a href="https://golang.org"><img src="https://img.shields.io/badge/Go-1.24-00ADD8?style=flat&logo=go" alt="Go"></a>
<a href="https://golang.org"><img src="https://img.shields.io/badge/Go-1.25-00ADD8?style=flat&logo=go" alt="Go"></a>
<a href="https://opensource.org/licenses/BSD-3-Clause"><img src="https://img.shields.io/badge/License-BSD_3--Clause-blue.svg" alt="License"></a>
<a href="doc/"><img src="https://img.shields.io/badge/Docs-Available-green.svg" alt="Documentation"></a>
</p>
@@ -14,41 +14,81 @@
</tr>
</table>
**Flexible log monitoring with real-time streaming over HTTP/SSE and TCP**
# LogWisp
LogWisp watches log files and streams updates to connected clients in real-time using a pipeline architecture: **sources → filters → sinks**. Perfect for monitoring multiple applications, filtering noise, and routing logs to multiple destinations.
A high-performance, pipeline-based log transport and processing system built in Go. LogWisp provides flexible log collection, filtering, formatting, and distribution with enterprise-grade security and reliability features.
## 🚀 Quick Start
## Features
```bash
# Install
git clone https://github.com/lixenwraith/logwisp.git
cd logwisp
make install
### Core Capabilities
- **Pipeline Architecture**: Independent processing pipelines with source(s) → filter → format → sink(s) flow
- **Multiple Input Sources**: Directory monitoring, stdin, HTTP, TCP
- **Flexible Output Sinks**: Console, file, HTTP SSE, TCP streaming, HTTP/TCP forwarding
- **Real-time Processing**: Sub-millisecond latency with configurable buffering
- **Hot Configuration Reload**: Update pipelines without service restart
# Run with defaults (monitors *.log in current directory)
logwisp
```
### Data Processing
- **Pattern-based Filtering**: Chainable include/exclude filters with regex support
- **Multiple Formatters**: Raw, JSON, and template-based text formatting
- **Rate Limiting**: Pipeline rate control
### Security & Reliability
- **Authentication**: Basic, token, and mTLS for HTTPS; SCRAM for TCP
- **TLS Encryption**: TLS 1.2/1.3 support for HTTP connections
- **Access Control**: IP whitelisting/blacklisting, connection limits
- **Automatic Reconnection**: Resilient client connections with exponential backoff
- **File Rotation**: Size-based rotation with retention policies
### Operational Features
- **Status Monitoring**: Real-time statistics and health endpoints
- **Signal Handling**: Graceful shutdown and configuration reload via signals
- **Background Mode**: Daemon operation with proper signal handling
- **Quiet Mode**: Silent operation for automated deployments
## Documentation
Available in the `doc/` directory.
- [Installation Guide](doc/installation.md) - Platform setup and service configuration
- [Architecture Overview](doc/architecture.md) - System design and component interaction
- [Configuration Reference](doc/configuration.md) - TOML structure and configuration methods
- [Input Sources](doc/sources.md) - Available source types and configurations
- [Output Sinks](doc/sinks.md) - Sink types and output options
- [Filters](doc/filters.md) - Pattern-based log filtering
- [Formatters](doc/formatters.md) - Log formatting and transformation
- [Authentication](doc/authentication.md) - Security configurations and auth methods
- [Networking](doc/networking.md) - TLS, rate limiting, and network features
- [Command Line Interface](doc/cli.md) - CLI flags and subcommands
- [Operations Guide](doc/operations.md) - Running and maintaining LogWisp
## Quick Start
Install LogWisp and create a basic configuration:
```toml
[[pipelines]]
name = "default"
[[pipelines.sources]]
type = "directory"
[pipelines.sources.directory]
path = "./"
pattern = "*.log"
[[pipelines.sinks]]
type = "console"
[pipelines.sinks.console]
target = "stdout"
```
## ✨ Key Features
Run with: `logwisp -c config.toml`
- **🔧 Pipeline Architecture** - Flexible source → filter → sink processing
- **📡 Real-time Streaming** - SSE (HTTP) and TCP protocols
- **🔍 Pattern Filtering** - Include/exclude logs with regex patterns
- **🛡️ Rate Limiting** - Protect against abuse with configurable limits
- **📊 Multi-pipeline** - Process different log sources simultaneously
- **🔄 Rotation Aware** - Handles log rotation seamlessly
- **⚡ High Performance** - Minimal CPU/memory footprint
## System Requirements
## 📖 Documentation
- **Operating Systems**: Linux (kernel 6.10+), FreeBSD (14.0+)
- **Architecture**: amd64
- **Go Version**: 1.25+ (for building from source)
Complete documentation is available in the [`doc/`](doc/) directory:
## License
- [**Quick Start Guide**](doc/quickstart.md) - Get running in 5 minutes
- [**Configuration**](doc/configuration.md) - All configuration options
- [**CLI Reference**](doc/cli.md) - Command-line interface
- [**Examples**](doc/examples/) - Ready-to-use configurations
## 📄 License
BSD-3-Clause
BSD 3-Clause License

View File

@@ -1,261 +1,408 @@
# LogWisp Configuration Reference
# Default location: ~/.config/logwisp/logwisp.toml
# Override: logwisp --config /path/to/config.toml
#
# All values shown are defaults unless marked (required)
###############################################################################
### LogWisp Configuration
### Default location: ~/.config/logwisp/logwisp.toml
### Configuration Precedence: CLI flags > Environment > File > Defaults
### Default values shown - uncommented lines represent active configuration
###############################################################################
# ============================================================================
# GLOBAL OPTIONS
# ============================================================================
# router = false # Enable router mode (multi-pipeline HTTP routing)
# background = false # Run as background daemon
# quiet = false # Suppress all output
# disable_status_reporter = false # Disable periodic status logging
# config_auto_reload = false # Auto-reload on config change
# config_save_on_exit = false # Save config on shutdown
###############################################################################
### Global Settings
###############################################################################
background = false # Run as daemon
quiet = false # Suppress console output
disable_status_reporter = false # Disable periodic status logging
config_auto_reload = false # Reload config on file change
###############################################################################
### Logging Configuration
###############################################################################
# ============================================================================
# LOGGING (LogWisp's operational logs)
# ============================================================================
[logging]
output = "stderr" # file, stdout, stderr, both, none
level = "info" # debug, info, warn, error
output = "stdout" # file|stdout|stderr|split|all|none
level = "info" # debug|info|warn|error
[logging.file]
directory = "./logs" # Log file directory
name = "logwisp" # Base filename
max_size_mb = 100 # Rotate after size
max_total_size_mb = 1000 # Total size limit for all logs
retention_hours = 168.0 # Delete logs older than (0 = disabled)
# [logging.file]
# directory = "./log" # Log directory path
# name = "logwisp" # Base filename
# max_size_mb = 100 # Rotation threshold
# max_total_size_mb = 1000 # Total size limit
# retention_hours = 168.0 # Delete logs older than (7 days)
[logging.console]
target = "stderr" # stdout, stderr, split (split: info→stdout, error→stderr)
format = "txt" # txt, json
target = "stdout" # stdout|stderr|split
format = "txt" # txt|json
# ============================================================================
# PIPELINES
# ============================================================================
# Define one or more [[pipelines]] blocks
# Each pipeline: sources → [rate_limit] → [filters] → [format] → sinks
###############################################################################
### Pipeline Configuration
###############################################################################
[[pipelines]]
name = "default" # (required) Unique identifier
name = "default" # Pipeline identifier
###============================================================================
### Rate Limiting (Pipeline-level)
###============================================================================
# ----------------------------------------------------------------------------
# PIPELINE RATE LIMITING (optional)
# ----------------------------------------------------------------------------
# [pipelines.rate_limit]
# rate = 1000.0 # Entries per second (0 = unlimited)
# burst = 1000.0 # Max burst size (defaults to rate)
# policy = "drop" # drop, pass
# max_entry_size_bytes = 0 # Max size per entry (0 = unlimited)
# rate = 1000.0 # Entries per second (0=disabled)
# burst = 2000.0 # Burst capacity (defaults to rate)
# policy = "drop" # pass|drop
# max_entry_size_bytes = 0 # Max entry size (0=unlimited)
# ----------------------------------------------------------------------------
# SOURCES
# ----------------------------------------------------------------------------
[[pipelines.sources]]
type = "directory" # directory, file, stdin, http, tcp
###============================================================================
### Filters
###============================================================================
# Directory source options
[pipelines.sources.options]
path = "./" # (required) Directory path
pattern = "*.log" # Glob pattern
check_interval_ms = 100 # Scan interval (min: 10)
### ⚠️ Example: Include only ERROR and WARN logs
## [[pipelines.filters]]
## type = "include" # include|exclude
## logic = "or" # or|and
## patterns = [".*ERROR.*", ".*WARN.*"]
# File source options (alternative)
# type = "file"
# [pipelines.sources.options]
# path = "/var/log/app.log" # (required) File path
### ⚠️ Example: Exclude debug logs
## [[pipelines.filters]]
## type = "exclude"
## patterns = [".*DEBUG.*"]
# HTTP source options (alternative)
# type = "http"
# [pipelines.sources.options]
# port = 8081 # (required) Listen port
# ingest_path = "/ingest" # POST endpoint
# buffer_size = 1000 # Entry buffer size
# net_limit = { enabled = true, requests_per_second = 100.0, burst_size = 200, limit_by = "ip" } # Rate limiting; limit_by: ip|global
###============================================================================
### Format Configuration
###============================================================================
# TCP source options (alternative)
# type = "tcp"
# [pipelines.sources.options]
# port = 9091 # (required) Listen port
# buffer_size = 1000 # Entry buffer size
# net_limit = { ... } # Same as HTTP
# [pipelines.format]
# type = "raw" # json|txt|raw
# ----------------------------------------------------------------------------
# FILTERS (optional)
# ----------------------------------------------------------------------------
# [[pipelines.filters]]
# type = "include" # include (whitelist), exclude (blacklist)
# logic = "or" # or (any match), and (all match)
# patterns = [ # Regular expressions
# "ERROR",
# "(?i)warn", # Case-insensitive
# "\\bfatal\\b" # Word boundary
# ]
### Raw formatter options (default)
# [pipelines.format.raw]
# add_new_line = true # Add newline to messages
# ----------------------------------------------------------------------------
# FORMAT (optional)
# ----------------------------------------------------------------------------
# format = "raw" # raw, json, text
# [pipelines.format_options]
# # JSON formatter options
# pretty = false # Pretty print JSON
# timestamp_field = "timestamp" # Field name for timestamp
# level_field = "level" # Field name for log level
# message_field = "message" # Field name for message
# source_field = "source" # Field name for source
#
# # Text formatter options
### JSON formatter options
# [pipelines.format.json]
# pretty = false # Pretty print JSON
# timestamp_field = "timestamp" # Field name for timestamp
# level_field = "level" # Field name for log level
# message_field = "message" # Field name for message
# source_field = "source" # Field name for source
### Text formatter options
# [pipelines.format.txt]
# template = "[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Source}} - {{.Message}}"
# timestamp_format = "2006-01-02T15:04:05Z07:00" # Go time format
# timestamp_format = "2006-01-02T15:04:05.000Z07:00" # Go time format string
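### ⚠️ Example output of the default template (illustrative values):
##   [2025-08-01T12:00:00.000Z] [ERROR] app.log - request failed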
# ----------------------------------------------------------------------------
# SINKS
# ----------------------------------------------------------------------------
[[pipelines.sinks]]
type = "http" # http, tcp, http_client, tcp_client, file, stdout, stderr
###============================================================================
### Sources (Input Sources)
###============================================================================
# HTTP sink options (streaming server)
[pipelines.sinks.options]
port = 8080 # (required) Listen port
buffer_size = 1000 # Entry buffer size
stream_path = "/stream" # SSE endpoint
status_path = "/status" # Status endpoint
###----------------------------------------------------------------------------
### Directory Source (Active Default)
[[pipelines.sources]]
type = "directory"
[pipelines.sinks.options.heartbeat]
enabled = true # Send periodic heartbeats
interval_seconds = 30 # Heartbeat interval
format = "comment" # comment, json
include_timestamp = true # Include timestamp in heartbeat
include_stats = false # Include statistics
[pipelines.sources.directory]
path = "./" # Watch directory
pattern = "*.log" # File pattern (glob)
check_interval_ms = 100 # Poll interval
recursive = false # Scan subdirectories
[pipelines.sinks.options.net_limit]
enabled = false # Enable rate limiting
requests_per_second = 10.0 # Request rate limit
burst_size = 20 # Token bucket burst
limit_by = "ip" # ip, global
max_connections_per_ip = 5 # Per-IP connection limit
max_total_connections = 100 # Total connection limit
response_code = 429 # HTTP response code
response_message = "Rate limit exceeded"
###----------------------------------------------------------------------------
### Stdin Source
# [[pipelines.sources]]
# type = "stdin"
# TCP sink options (alternative)
# type = "tcp"
# [pipelines.sinks.options]
# port = 9090 # (required) Listen port
# buffer_size = 1000
# heartbeat = { ... } # Same as HTTP
# net_limit = { ... } # Same as HTTP
# [pipelines.sources.stdin]
# buffer_size = 1000 # Internal buffer size
# HTTP client sink options (forward to remote)
# type = "http_client"
# [pipelines.sinks.options]
# url = "https://logs.example.com/ingest" # (required) Target URL
# batch_size = 100 # Entries per batch
# batch_delay_ms = 1000 # Batch timeout
# timeout_seconds = 30 # Request timeout
# max_retries = 3 # Retry attempts
# retry_delay_ms = 1000 # Initial retry delay
# retry_backoff = 2.0 # Exponential backoff multiplier
# insecure_skip_verify = false # Skip TLS verification
# headers = { "Authorization" = "Bearer token", "X-Custom" = "value" } # Custom headers
###----------------------------------------------------------------------------
### HTTP Source (Receives via POST)
# [[pipelines.sources]]
# type = "http"
# TCP client sink options (forward to remote)
# type = "tcp_client"
# [pipelines.sinks.options]
# address = "logs.example.com:9090" # (required) host:port
# buffer_size = 1000
# dial_timeout_seconds = 10 # Connection timeout
# write_timeout_seconds = 30 # Write timeout
# keep_alive_seconds = 30 # TCP keepalive
# reconnect_delay_ms = 1000 # Initial reconnect delay
# max_reconnect_delay_seconds = 30 # Max reconnect delay
# reconnect_backoff = 1.5 # Exponential backoff
# [pipelines.sources.http]
# host = "0.0.0.0" # Listen address
# port = 8081 # Listen port
# ingest_path = "/ingest" # Ingest endpoint
# buffer_size = 1000 # Internal buffer size
# max_body_size = 1048576 # Max request body (1MB)
# read_timeout_ms = 10000 # Read timeout
# write_timeout_ms = 10000 # Write timeout
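### ⚠️ Example: push a log entry to the HTTP source (request body shape assumed)
## curl -X POST http://localhost:8081/ingest \
##   -H "Content-Type: application/json" \
##   -d '{"level":"info","message":"hello"}'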
# File sink options
# type = "file"
# [pipelines.sinks.options]
# directory = "/var/log/logwisp" # (required) Output directory
# name = "app" # (required) Base filename
# max_size_mb = 100 # Rotate after size
# max_total_size_mb = 0 # Total size limit (0 = unlimited)
# retention_hours = 0.0 # Delete old files (0 = disabled)
# min_disk_free_mb = 1000 # Maintain free disk space
### TLS configuration
# [pipelines.sources.http.tls]
# enabled = false
# cert_file = "/path/to/cert.pem"
# key_file = "/path/to/key.pem"
# ca_file = "/path/to/ca.pem"
# min_version = "TLS1.2" # TLS1.2|TLS1.3
# client_auth = false # Require client certs
# client_ca_file = "/path/to/ca.pem" # CA to validate client certs
# verify_client_cert = true # Require valid client cert
# Console sink options
# type = "stdout" # or "stderr"
# [pipelines.sinks.options]
# buffer_size = 1000
# target = "stdout" # Override for split mode
### ⚠️ Example: TLS configuration (required to enable auth)
## [pipelines.sources.http.tls]
## enabled = true # MUST be true for auth
## cert_file = "/path/to/server.pem"
## key_file = "/path/to/server.key"
# ----------------------------------------------------------------------------
# AUTHENTICATION (optional, for network sinks)
# ----------------------------------------------------------------------------
# [pipelines.auth]
# type = "none" # none, basic, bearer
# ip_whitelist = [] # Allowed IPs (empty = all)
# ip_blacklist = [] # Blocked IPs
#
# [pipelines.auth.basic_auth]
# realm = "LogWisp" # WWW-Authenticate realm
# users_file = "" # External users file
# [[pipelines.auth.basic_auth.users]]
### Network limiting (access control)
# [pipelines.sources.http.net_limit]
# enabled = false
# max_connections_per_ip = 10
# max_connections_total = 100
# requests_per_second = 100.0 # Rate limit per client
# burst_size = 200 # Token bucket burst
# response_code = 429 # HTTP rate limit response code
# response_message = "Rate limit exceeded"
# ip_whitelist = []
# ip_blacklist = []
### Authentication (validates clients)
### ☢ SECURITY: HTTP auth REQUIRES TLS to be enabled
# [pipelines.sources.http.auth]
# type = "none" # none|basic|token|mtls (NO scram)
# realm = "LogWisp" # For basic auth
### Basic auth users
# [[pipelines.sources.http.auth.basic.users]]
# username = "admin"
# password_hash = "$2a$10$..." # bcrypt hash
#
# [pipelines.auth.bearer_auth]
# tokens = ["token1", "token2"] # Static tokens
# [pipelines.auth.bearer_auth.jwt]
# jwks_url = "" # JWKS endpoint
# signing_key = "" # Static key (if not using JWKS)
# issuer = "" # Expected issuer
# audience = "" # Expected audience
# password_hash = "$argon2..." # Argon2 hash
# ============================================================================
# HOT RELOAD
# ============================================================================
# Enable with: --config-auto-reload
# Manual reload: kill -HUP $(pidof logwisp)
# Updates pipelines, filters, formatters without restart
# Logging changes require restart
### Token auth tokens
# [pipelines.sources.http.auth.token]
# tokens = ["token1", "token2"]
# ============================================================================
# ROUTER MODE
# ============================================================================
# Enable with: logwisp --router or router = true
# Combines multiple pipeline HTTP sinks on shared ports
# Access pattern: http://localhost:8080/{pipeline_name}/stream
# Global status: http://localhost:8080/status
###----------------------------------------------------------------------------
### TCP Source (Receives logs via TCP Client Sink)
# [[pipelines.sources]]
# type = "tcp"
# ============================================================================
# SIGNALS
# ============================================================================
# SIGINT/SIGTERM: Graceful shutdown
# SIGHUP/SIGUSR1: Reload config (when auto-reload enabled)
# SIGKILL: Immediate shutdown
# [pipelines.sources.tcp]
# host = "0.0.0.0" # Listen address
# port = 9091 # Listen port
# buffer_size = 1000 # Internal buffer size
# read_timeout_ms = 10000 # Read timeout
# keep_alive = true # Enable TCP keep-alive
# keep_alive_period_ms = 30000 # Keep-alive interval
# ============================================================================
# CLI FLAGS
# ============================================================================
# --config, -c PATH # Config file path
# --router, -r # Enable router mode
# --background, -b # Run as daemon
# --quiet, -q # Suppress output
# --version, -v # Show version
### ☣ WARNING: TCP has NO TLS support (gnet limitation)
### Use HTTP with TLS for encrypted transport
# ============================================================================
# ENVIRONMENT VARIABLES
# ============================================================================
# LOGWISP_CONFIG_FILE # Config filename
# LOGWISP_CONFIG_DIR # Config directory
# LOGWISP_CONSOLE_TARGET # Override console target
# Any config value: LOGWISP_<SECTION>_<KEY> (uppercase, dots → underscores)
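# Examples: LOGWISP_LOGGING_LEVEL=debug
#           LOGWISP_PIPELINES_0_NAME=app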
### Network limiting (access control)
# [pipelines.sources.tcp.net_limit]
# enabled = false
# max_connections_per_ip = 10
# max_connections_total = 100
# requests_per_second = 100.0
# burst_size = 200
# ip_whitelist = []
# ip_blacklist = []
### Authentication
# [pipelines.sources.tcp.auth]
# type = "none" # none|scram ONLY (no basic/token/mtls)
### SCRAM auth users for TCP Source
# [[pipelines.sources.tcp.auth.scram.users]]
# username = "user1"
# stored_key = "base64..." # Pre-computed SCRAM keys
# server_key = "base64..."
# salt = "base64..."
# argon_time = 3
# argon_memory = 65536
# argon_threads = 4
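### ⚠️ Tip: generate these values with: logwisp auth -u user1 -s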
###============================================================================
### Sinks (Output Destinations)
###============================================================================
###----------------------------------------------------------------------------
### Console Sink (Active Default)
[[pipelines.sinks]]
type = "console"
[pipelines.sinks.console]
target = "stdout" # stdout|stderr|split
colorize = false # Enable colored output
buffer_size = 100 # Internal buffer size
###----------------------------------------------------------------------------
### File Sink
# [[pipelines.sinks]]
# type = "file"
# [pipelines.sinks.file]
# directory = "./logs" # Output directory
# name = "output" # Base filename
# max_size_mb = 100 # Rotation threshold
# max_total_size_mb = 1000 # Total size limit
# min_disk_free_mb = 500 # Minimum free disk space
# retention_hours = 168.0 # Delete logs older than (7 days)
# buffer_size = 1000 # Internal buffer size
# flush_interval_ms = 1000 # Force flush interval
###----------------------------------------------------------------------------
### HTTP Sink (SSE streaming to browser/HTTP client)
# [[pipelines.sinks]]
# type = "http"
# [pipelines.sinks.http]
# host = "0.0.0.0" # Listen address
# port = 8080 # Listen port
# stream_path = "/stream" # SSE stream endpoint
# status_path = "/status" # Status endpoint
# buffer_size = 1000 # Internal buffer size
# max_connections = 100 # Max concurrent clients
# read_timeout_ms = 10000 # Read timeout
# write_timeout_ms = 10000 # Write timeout
### Heartbeat configuration (keeps SSE alive)
# [pipelines.sinks.http.heartbeat]
# enabled = true
# interval_ms = 30000 # 30 seconds
# include_timestamp = true
# include_stats = false
# format = "comment" # comment|event|json
### TLS configuration
# [pipelines.sinks.http.tls]
# enabled = false
# cert_file = "/path/to/cert.pem"
# key_file = "/path/to/key.pem"
# ca_file = "/path/to/ca.pem"
# min_version = "TLS1.2" # TLS1.2|TLS1.3
# client_auth = false # Require client certs
### Network limiting (access control)
# [pipelines.sinks.http.net_limit]
# enabled = false
# max_connections_per_ip = 10
# max_connections_total = 100
# ip_whitelist = ["192.168.1.0/24"]
# ip_blacklist = []
### Authentication (for clients)
### ☢ SECURITY: HTTP auth REQUIRES TLS to be enabled
# [pipelines.sinks.http.auth]
# type = "none" # none|basic|bearer|mtls
###----------------------------------------------------------------------------
### TCP Sink (Server - accepts connections from TCP clients)
# [[pipelines.sinks]]
# type = "tcp"
# [pipelines.sinks.tcp]
# host = "0.0.0.0" # Listen address
# port = 9090 # Listen port
# buffer_size = 1000 # Internal buffer size
# max_connections = 100 # Max concurrent clients
# keep_alive = true # Enable TCP keep-alive
# keep_alive_period_ms = 30000 # Keep-alive interval
### Heartbeat configuration
# [pipelines.sinks.tcp.heartbeat]
# enabled = false
# interval_ms = 30000
# include_timestamp = true
# include_stats = false
# format = "json" # json|txt
### ☣ WARNING: TCP has NO TLS support (gnet limitation)
### Use HTTP with TLS for encrypted transport
### Network limiting
# [pipelines.sinks.tcp.net_limit]
# enabled = false
# max_connections_per_ip = 10
# max_connections_total = 100
# ip_whitelist = []
# ip_blacklist = []
### ☣ WARNING: TCP Sink has NO AUTH support (intended for debugging only)
### Use HTTP with TLS for encrypted transport
###----------------------------------------------------------------------------
### HTTP Client Sink (POST to HTTP Source endpoint)
# [[pipelines.sinks]]
# type = "http_client"
# [pipelines.sinks.http_client]
# url = "https://logs.example.com/ingest"
# buffer_size = 1000
# batch_size = 100 # Logs per request
# batch_delay_ms = 1000 # Max wait before sending
# timeout_seconds = 30 # Request timeout
# max_retries = 3 # Retry attempts
# retry_delay_ms = 1000 # Initial retry delay
# retry_backoff = 2.0 # Exponential backoff
# insecure_skip_verify = false # Skip TLS verification
### TLS configuration
# [pipelines.sinks.http_client.tls]
# enabled = false
# server_name = "logs.example.com" # For verification
# skip_verify = false # Skip verification
# cert_file = "/path/to/client.pem" # Client cert for mTLS
# key_file = "/path/to/client.key" # Client key for mTLS
### ⚠️ Example: HTTP Client Sink → HTTP Source with mTLS
## HTTP Source with mTLS:
## [pipelines.sources.http.tls]
## enabled = true
## cert_file = "/path/to/server.pem"
## key_file = "/path/to/server.key"
## client_auth = true # Enable client cert verification
## client_ca_file = "/path/to/ca.pem"
## HTTP Client with client cert:
## [pipelines.sinks.http_client.tls]
## enabled = true
## cert_file = "/path/to/client.pem" # Client certificate
## key_file = "/path/to/client.key"
### Client authentication
### ☢ SECURITY: HTTP auth REQUIRES TLS to be enabled
# [pipelines.sinks.http_client.auth]
# type = "none" # none|basic|token|mtls (NO scram)
# # token = "your-token" # For token auth
# # username = "user" # For basic auth
# # password = "pass" # For basic auth
###----------------------------------------------------------------------------
### TCP Client Sink (Connect to TCP Source server)
# [[pipelines.sinks]]
# type = "tcp_client"
# [pipelines.sinks.tcp_client]
# host = "logs.example.com" # Target host
# port = 9090 # Target port
# buffer_size = 1000 # Internal buffer size
# dial_timeout = 10 # Connection timeout (seconds)
# write_timeout = 30 # Write timeout (seconds)
# read_timeout = 10 # Read timeout (seconds)
# keep_alive = 30 # TCP keep-alive (seconds)
# reconnect_delay_ms = 1000 # Initial reconnect delay
# max_reconnect_delay_ms = 30000 # Max reconnect delay
# reconnect_backoff = 1.5 # Exponential backoff
### ☣ WARNING: TCP has NO TLS support (gnet limitation)
### Use HTTP with TLS for encrypted transport
### Client authentication
# [pipelines.sinks.tcp_client.auth]
# type = "none" # none|scram ONLY (no basic/token/mtls)
# # username = "user" # For SCRAM auth
# # password = "pass" # For SCRAM auth

View File

@@ -1,42 +0,0 @@
# LogWisp Minimal Configuration
# Save as: ~/.config/logwisp/logwisp.toml
# Basic pipeline monitoring application logs
[[pipelines]]
name = "app"
# Source: Monitor log directory
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/myapp", pattern = "*.log", check_interval_ms = 100 }
# Sink: HTTP streaming
[[pipelines.sinks]]
type = "http"
options = { port = 8080, buffer_size = 1000, stream_path = "/stream", status_path = "/status" }
# Optional: Filter for errors only
# [[pipelines.filters]]
# type = "include"
# patterns = ["ERROR", "WARN", "CRITICAL"]
# Optional: Add rate limiting to HTTP sink
# [[pipelines.sinks]]
# type = "http"
# options = { port = 8080, buffer_size = 1000, stream_path = "/stream", status_path = "/status", net_limit = { enabled = true, requests_per_second = 10.0, burst_size = 20 } }
# Optional: Add file output
# [[pipelines.sinks]]
# type = "file"
# options = { directory = "/var/log/logwisp", name = "app" }

View File

@@ -1,27 +1,76 @@
# LogWisp Documentation
# LogWisp
Documentation covers installation, configuration, and usage of LogWisp's pipeline-based log monitoring system.
A high-performance, pipeline-based log transport and processing system built in Go. LogWisp provides flexible log collection, filtering, formatting, and distribution with security and reliability features.
## 📚 Documentation Index
## Features
### Getting Started
- **[Installation Guide](installation.md)** - Platform-specific installation
- **[Quick Start](quickstart.md)** - Get running in 5 minutes
- **[Architecture Overview](architecture.md)** - Pipeline design
### Core Capabilities
- **Pipeline Architecture**: Independent processing pipelines with source(s) → filter → format → sink(s) flow
- **Multiple Input Sources**: Directory monitoring, stdin, HTTP, TCP
- **Flexible Output Sinks**: Console, file, HTTP SSE, TCP streaming, HTTP/TCP forwarding
- **Real-time Processing**: Sub-millisecond latency with configurable buffering
- **Hot Configuration Reload**: Update pipelines without service restart
### Configuration
- **[Configuration Guide](configuration.md)** - Complete reference
- **[Environment Variables](environment.md)** - Container configuration
- **[Command Line Options](cli.md)** - CLI reference
- **[Sample Configurations](../config/)** - Default & Minimal Config
### Data Processing
- **Pattern-based Filtering**: Chainable include/exclude filters with regex support
- **Multiple Formatters**: Raw, JSON, and template-based text formatting
- **Rate Limiting**: Pipeline rate controls
### Features
- **[Status Monitoring](status.md)** - Health checks
- **[Filters Guide](filters.md)** - Pattern-based filtering
- **[Rate Limiting](ratelimiting.md)** - Connection protection
- **[Router Mode](router.md)** - Multi-pipeline routing
- **[Authentication](authentication.md)** - Access control *(planned)*
### Security & Reliability
- **Authentication**: Basic, token, SCRAM, and mTLS support
- **TLS Encryption**: Full TLS 1.2/1.3 support for HTTP connections
- **Access Control**: IP whitelisting/blacklisting, connection limits
- **Automatic Reconnection**: Resilient client connections with exponential backoff
- **File Rotation**: Size-based rotation with retention policies
## 📝 License
### Operational Features
- **Status Monitoring**: Real-time statistics and health endpoints
- **Signal Handling**: Graceful shutdown and configuration reload via signals
- **Background Mode**: Daemon operation with proper signal handling
- **Quiet Mode**: Silent operation for automated deployments
BSD-3-Clause
## Documentation
- [Installation Guide](installation.md) - Platform setup and service configuration
- [Architecture Overview](architecture.md) - System design and component interaction
- [Configuration Reference](configuration.md) - TOML structure and configuration methods
- [Input Sources](sources.md) - Available source types and configurations
- [Output Sinks](sinks.md) - Sink types and output options
- [Filters](filters.md) - Pattern-based log filtering
- [Formatters](formatters.md) - Log formatting and transformation
- [Authentication](authentication.md) - Security configurations and auth methods
- [Networking](networking.md) - TLS, rate limiting, and network features
- [Command Line Interface](cli.md) - CLI flags and subcommands
- [Operations Guide](operations.md) - Running and maintaining LogWisp
## Quick Start
Install LogWisp and create a basic configuration:
```toml
[[pipelines]]
name = "default"
[[pipelines.sources]]
type = "directory"
[pipelines.sources.directory]
path = "./"
pattern = "*.log"
[[pipelines.sinks]]
type = "console"
[pipelines.sinks.console]
target = "stdout"
```
Run with: `logwisp -c config.toml`
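As a quick smoke test (assuming the configuration above, which watches `*.log` in the current directory), append a line to a matching file and it should appear on stdout almost immediately:

```bash
echo "$(date) INFO hello from logwisp" >> test.log
```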
## System Requirements
- **Operating Systems**: Linux (kernel 6.10+), FreeBSD (14.0+)
- **Architecture**: amd64
- **Go Version**: 1.25+ (for building from source)
## License
BSD 3-Clause License

View File

@@ -1,343 +1,168 @@
# Architecture Overview
LogWisp implements a flexible pipeline architecture for real-time log processing and streaming.
LogWisp implements a pipeline-based architecture for flexible log processing and distribution.
## Core Architecture
## Core Concepts
### Pipeline Model
Each pipeline operates independently with a source → filter → format → sink flow. Multiple pipelines can run concurrently within a single LogWisp instance, each processing different log streams with unique configurations.
### Component Hierarchy
```
┌─────────────────────────────────────────────────────────────────────────┐
│ LogWisp Service │
├─────────────────────────────────────────────────────────────────────────┤
┌─────────────────────────── Pipeline 1 ───────────────────────────┐ │
│ │ │
│ Sources Filters Sinks │ │
│ │ ┌──────┐ ┌────────┐ ┌──────┐ │ │
│ │ Dir │──┐ │Include │ ┌────│ HTTP │←── Client 1 │ │
│ │ └──────┘ ├────▶│ ERROR │ │ └──────┘ │ │
│ │ │ │ WARN │────▶├────┌──────┐ │ │
│ │ ┌──────┐ │ └────┬───┘ │ │ File │ │ │
│ │ │ HTTP │──┤ ▼ │ └──────┘ │ │
│ │ └──────┘ │ ┌────────┐ │ ┌──────┐ │ │
│ │ ┌──────┐ │ │Exclude │ └────│ TCP │←── Client 2 │ │
│ │ │ TCP │──┘ │ DEBUG │ └──────┘ │ │
│ │ └──────┘ └────────┘ │ │
│ └──────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────── Pipeline 2 ───────────────────────────┐ │
│ │ │ │
│ │ ┌──────┐ ┌───────────┐ │ │
│ │ │Stdin │───────────────────────┬───▶│HTTP Client│──► Remote │ │
│ │ └──────┘ (No Filters) │ └───────────┘ │ │
│ │ │ ┌───────────┐ │ │
│ │ └────│TCP Client │──► Remote │ │
│ │ └───────────┘ │ │
│ └──────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────── Pipeline N ───────────────────────────┐ │
│ │ Multiple Sources → Filter Chain → Multiple Sinks │ │
│ └──────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────┘
Service (Main Process)
├── Pipeline 1
│   ├── Sources (1 or more)
│   ├── Rate Limiter (optional)
│   ├── Filter Chain (optional)
│   ├── Formatter (optional)
│   └── Sinks (1 or more)
├── Pipeline 2
│   └── [Same structure]
└── Status Reporter (optional)
```
## Data Flow
```
Log Entry Flow:
### Processing Stages
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
│ Source │ │ Parse │ │ Filter │ │ Sink │
│ Monitor │────▶│ Entry │────▶│ Chain │────▶│ Deliver │
└─────────┘ └─────────┘ └─────────┘ └─────────┘
│ │ │ │
▼ ▼ ▼ ▼
Detect Extract Include/ Send to
Input & Format Exclude Clients
1. **Source Stage**: Sources monitor inputs and generate log entries
2. **Rate Limiting**: Optional pipeline-level rate control
3. **Filtering**: Pattern-based inclusion/exclusion
4. **Formatting**: Transform entries to desired output format
5. **Distribution**: Fan-out to multiple sinks
### Entry Lifecycle
Entry Processing:
Log entries flow through the pipeline as `core.LogEntry` structures containing:
- **Time**: Entry timestamp
- **Level**: Log level (DEBUG, INFO, WARN, ERROR)
- **Source**: Origin identifier
- **Message**: Log content
- **Fields**: Additional metadata (JSON)
- **RawSize**: Original entry size
1. Source Detection 2. Entry Creation 3. Filter Application
┌──────────┐ ┌────────────┐ ┌─────────────┐
│New Entry │ │ Timestamp │ │ Filter 1 │
│Detected │──────────▶│ Level │────────▶│ Include? │
└──────────┘ │ Message │ └──────┬──────┘
└────────────┘ │
4. Sink Distribution ┌─────────────┐
┌──────────┐ │ Filter 2 │
│ HTTP │◀───┐ │ Exclude? │
└──────────┘ │ └──────┬──────┘
┌──────────┐ │ │
│ TCP │◀───┼────────── Entry ◀──────────────────┘
└──────────┘ │ (if passed)
┌──────────┐ │
│ File │◀───┤
└──────────┘ │
┌──────────┐ │
│ HTTP/TCP │◀───┘
│ Client │
└──────────┘
```
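The `core.LogEntry` fields listed in the Entry Lifecycle above might look like the following minimal Go sketch; the field names come from the description, while the exact types are assumptions:

```go
package core

import (
	"encoding/json"
	"time"
)

// LogEntry sketches the unit that flows through a pipeline.
// Types are illustrative; the actual core.LogEntry may differ.
type LogEntry struct {
	Time    time.Time       // entry timestamp
	Level   string          // DEBUG, INFO, WARN, ERROR
	Source  string          // origin identifier
	Message string          // log content
	Fields  json.RawMessage // additional metadata (JSON)
	RawSize int             // original entry size in bytes
}
```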
### Buffering Strategy
## Component Details
Each component maintains internal buffers to handle burst traffic:
- Sources: Configurable buffer size (default 1000 entries)
- Sinks: Independent buffers per sink
- Network components: Additional TCP/HTTP buffers
### Sources
## Component Types
Sources monitor inputs and generate log entries:
### Sources (Input)
```
Directory Source:
┌─────────────────────────────────┐
│ Directory Monitor │
├─────────────────────────────────┤
│ • Pattern Matching (*.log) │
│ • File Rotation Detection │
│ • Position Tracking │
│ • Concurrent File Watching │
└─────────────────────────────────┘
┌──────────────┐
│ File Watcher │ (per file)
├──────────────┤
│ • Read New │
│ • Track Pos │
│ • Detect Rot │
└──────────────┘
- **Directory Source**: File system monitoring with rotation detection
- **Stdin Source**: Standard input processing
- **HTTP Source**: REST endpoint for log ingestion
- **TCP Source**: Raw TCP socket listener
HTTP/TCP Sources:
┌─────────────────────────────────┐
│ Network Listener │
├─────────────────────────────────┤
│ • JSON Parsing │
│ • Rate Limiting │
│ • Connection Management │
│ • Input Validation │
└─────────────────────────────────┘
```
### Sinks (Output)
### Filters
- **Console Sink**: stdout/stderr output
- **File Sink**: Rotating file writer
- **HTTP Sink**: Server-Sent Events (SSE) streaming
- **TCP Sink**: TCP server for client connections
- **HTTP Client Sink**: Forward to remote HTTP endpoints
- **TCP Client Sink**: Forward to remote TCP servers
Filters process entries through pattern matching:
### Processing Components
```
Filter Chain:
┌─────────────┐
Entry ──────────▶│ Filter 1 │
│ (Include) │
└──────┬──────┘
│ Pass?
┌─────────────┐
│ Filter 2 │
│ (Exclude) │
└──────┬──────┘
│ Pass?
┌─────────────┐
│ Filter N │
└──────┬──────┘
To Sinks
```
### Sinks
Sinks deliver processed entries to destinations:
```
HTTP Sink (SSE):
┌───────────────────────────────────┐
│ HTTP Server │
├───────────────────────────────────┤
│ ┌─────────┐ ┌─────────┐ │
│ │ Stream │ │ Status │ │
│ │Endpoint │ │Endpoint │ │
│ └────┬────┘ └────┬────┘ │
│ │ │ │
│ ┌────▼──────────────▼────┐ │
│ │ Connection Manager │ │
│ ├────────────────────────┤ │
│ │ • Rate Limiting │ │
│ │ • Heartbeat │ │
│ │ • Buffer Management │ │
│ └────────────────────────┘ │
└───────────────────────────────────┘
TCP Sink:
┌───────────────────────────────────┐
│ TCP Server │
├───────────────────────────────────┤
│ ┌────────────────────────┐ │
│ │ gnet Event Loop │ │
│ ├────────────────────────┤ │
│ │ • Async I/O │ │
│ │ • Connection Pool │ │
│ │ • Rate Limiting │ │
│ └────────────────────────┘ │
└───────────────────────────────────┘
Client Sinks:
┌───────────────────────────────────┐
│ HTTP/TCP Client │
├───────────────────────────────────┤
│ ┌────────────────────────┐ │
│ │ Output Manager │ │
│ ├────────────────────────┤ │
│ │ • Batching │ │
│ │ • Retry Logic │ │
│ │ • Connection Pooling │ │
│ │ • Failover │ │
│ └────────────────────────┘ │
└───────────────────────────────────┘
```
## Router Mode
In router mode, multiple pipelines share HTTP ports:
```
Router Architecture:
┌─────────────────┐
│ HTTP Router │
│ Port 8080 │
└────────┬────────┘
┌────────────────────┼────────────────────┐
│ │ │
/app/stream /db/stream /sys/stream
│ │ │
┌────▼────┐ ┌────▼────┐ ┌────▼────┐
│Pipeline │ │Pipeline │ │Pipeline │
│ "app" │ │ "db" │ │ "sys" │
└─────────┘ └─────────┘ └─────────┘
Path Routing:
Client Request ──▶ Router ──▶ Parse Path ──▶ Find Pipeline ──▶ Route
Extract Pipeline Name
from /pipeline/endpoint
```
## Memory Management
```
Buffer Flow:
┌──────────┐ ┌──────────┐ ┌──────────┐
│ Source │ │ Pipeline │ │ Sink │
│ Buffer │────▶│ Buffer │────▶│ Buffer │
│ (1000) │ │ (chan) │ │ (1000) │
└──────────┘ └──────────┘ └──────────┘
│ │ │
▼ ▼ ▼
Drop if full Backpressure Drop if full
(counted) (blocking) (counted)
Client Sinks:
┌──────────┐ ┌──────────┐ ┌──────────┐
│ Entry │ │ Batch │ │ Send │
│ Buffer │────▶│ Buffer │────▶│ Queue │
│ (1000) │ │ (100) │ │ (retry) │
└──────────┘ └──────────┘ └──────────┘
```
## Rate Limiting
```
Token Bucket Algorithm:
┌─────────────────────────────┐
│ Token Bucket │
├─────────────────────────────┤
│ Capacity: burst_size │
│ Refill: requests_per_second │
│ │
│ ┌─────────────────────┐ │
│ │ ● ● ● ● ● ● ○ ○ ○ ○ │ │
│ └─────────────────────┘ │
│ 6/10 tokens available │
└─────────────────────────────┘
Request arrives
Token available? ──No──▶ Reject (429)
Yes
Consume token ──▶ Allow request
```
- **Rate Limiter**: Token bucket algorithm for flow control
- **Filter Chain**: Sequential pattern matching
- **Formatters**: Raw, JSON, or template-based text transformation
## Concurrency Model
```
Goroutine Structure:
### Goroutine Architecture
Main ────┬──── Pipeline 1 ────┬──── Source Reader 1
         │                    ├──── Source Reader 2
         │                    ├──── HTTP Server
         │                    ├──── TCP Server
         │                    ├──── Filter Processor
         │                    ├──── HTTP Client Writer
         │                    └──── TCP Client Writer
         ├──── Pipeline 2 ────┬──── Source Reader
         │                    └──── Sink Writers
         └──── HTTP Router (if enabled)
- Each source runs in dedicated goroutines for monitoring
- Sinks operate independently with their own processing loops
- Network listeners use optimized event loops (gnet for TCP)
- Pipeline processing uses channel-based communication
Channel Communication:
Source ──chan──▶ Filter ──chan──▶ Sink
│ │
└── Non-blocking send ────────────┘
(drop & count if full)
```
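The non-blocking send shown above corresponds to a `select` with a `default` branch; a sketch of the drop-and-count behavior:

```go
package pipeline

import "sync/atomic"

// deliver hands an entry to a sink buffer without blocking the
// pipeline: if the buffer is full, the entry is dropped and counted.
func deliver(ch chan<- string, entry string, dropped *atomic.Uint64) {
	select {
	case ch <- entry:
	default:
		dropped.Add(1)
	}
}
```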
### Synchronization
## Configuration Loading
- Atomic counters for statistics
- Read-write mutexes for configuration access
- Context-based cancellation for graceful shutdown
- Wait groups for coordinated startup/shutdown
```
Priority Order:
1. CLI Flags ─────────┐
2. Environment Vars ──┼──▶ Merge ──▶ Final Config
3. Config File ───────┤
4. Defaults ──────────┘
## Network Architecture
Example:
CLI: --logging.level debug
Env: LOGWISP_PIPELINES_0_NAME=app
File: pipelines.toml
Default: buffer_size = 1000
```
### Connection Patterns
## Security Architecture
**Chaining Design**:
- TCP Client Sink → TCP Source: Direct TCP forwarding
- HTTP Client Sink → HTTP Source: HTTP-based forwarding
```
Security Layers:
**Monitoring Design**:
- TCP Sink: Debugging interface
- HTTP Sink: Browser-based live monitoring
┌─────────────────────────────────────┐
│ Network Layer │
├─────────────────────────────────────┤
│ • Rate Limiting (per IP/global) │
│ • Connection Limits │
│ • TLS/SSL (planned) │
└──────────────┬──────────────────────┘
┌──────────────▼──────────────────────┐
│ Authentication Layer │
├─────────────────────────────────────┤
│ • Basic Auth (planned) │
│ • Bearer Tokens (planned) │
│ • IP Whitelisting (planned) │
└──────────────┬──────────────────────┘
┌──────────────▼──────────────────────┐
│ Application Layer │
├─────────────────────────────────────┤
│ • Input Validation │
│ • Path Traversal Prevention │
│ • Resource Limits │
└─────────────────────────────────────┘
```
### Protocol Support
- HTTP/1.1 and HTTP/2 for HTTP connections
- Raw TCP with optional SCRAM authentication
- TLS 1.2/1.3 for HTTP connections (not available for TCP)
- Server-Sent Events for real-time streaming
## Resource Management
### Memory Management
- Bounded buffers prevent unbounded growth
- Automatic garbage collection via Go runtime
- Connection limits prevent resource exhaustion
### File Management
- Automatic rotation based on size thresholds
- Retention policies for old log files
- Minimum disk space checks before writing
### Connection Management
- Per-IP connection limits
- Global connection caps
- Automatic reconnection with exponential backoff
- Keep-alive for persistent connections
## Reliability Features
### Fault Tolerance
- Panic recovery in pipeline processing
- Independent pipeline operation
- Automatic source restart on failure
- Sink failure isolation
### Data Integrity
- Entry validation at ingestion
- Size limits for entries and batches
- Duplicate detection in file monitoring
- Position tracking for file reads
## Performance Characteristics
### Throughput
- Pipeline rate limiting: Configurable (default 1000 entries/second)
- Network throughput: Limited by network and sink capacity
- File monitoring: Sub-second detection (default 100ms interval)
### Latency
- Entry processing: Sub-millisecond in-memory
- Network forwarding: Depends on batch configuration
- File detection: Configurable check interval
### Scalability
- Horizontal: Multiple LogWisp instances with different configurations
- Vertical: Multiple pipelines per instance
- Fan-out: Multiple sinks per pipeline
- Fan-in: Multiple sources per pipeline

doc/authentication.md Normal file
View File

@@ -0,0 +1,237 @@
# Authentication
LogWisp supports multiple authentication methods for securing network connections.
## Authentication Methods
### Overview
| Method | HTTP Source | HTTP Sink | HTTP Client | TCP Source | TCP Client | TCP Sink |
|--------|------------|-----------|-------------|------------|------------|----------|
| None | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Basic | ✓ (TLS req) | ✓ (TLS req) | ✓ (TLS req) | ✗ | ✗ | ✗ |
| Token | ✓ (TLS req) | ✓ (TLS req) | ✓ (TLS req) | ✗ | ✗ | ✗ |
| SCRAM | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ |
| mTLS | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ |
**Important Notes:**
- HTTP authentication **requires** TLS to be enabled
- TCP connections are **always** unencrypted
- TCP Sink has **no** authentication (debugging only)
## Basic Authentication
HTTP/HTTPS connections with username/password.
### Configuration
```toml
[pipelines.sources.http.auth]
type = "basic"
realm = "LogWisp"
[[pipelines.sources.http.auth.basic.users]]
username = "admin"
password_hash = "$argon2id$v=19$m=65536,t=3,p=2$..."
```
### Generating Credentials
Use the `auth` command:
```bash
logwisp auth -u admin -b
```
Output includes:
- Argon2id password hash for configuration
- TOML configuration snippet
### Password Hash Format
LogWisp uses Argon2id with parameters:
- Memory: 65536 KB
- Iterations: 3
- Parallelism: 2
- Salt: Random 16 bytes
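In Go these parameters map onto `argon2.IDKey` from `golang.org/x/crypto/argon2`. This is a sketch of the hashing step only; `logwisp auth` produces the actual hash string, so treat the encoding below as illustrative:

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"

	"golang.org/x/crypto/argon2"
)

func main() {
	salt := make([]byte, 16) // random 16-byte salt
	if _, err := rand.Read(salt); err != nil {
		panic(err)
	}
	// iterations=3, memory=65536 KB, parallelism=2, 32-byte key
	key := argon2.IDKey([]byte("s3cret"), salt, 3, 64*1024, 2, 32)
	fmt.Printf("$argon2id$v=19$m=65536,t=3,p=2$%s$%s\n",
		base64.RawStdEncoding.EncodeToString(salt),
		base64.RawStdEncoding.EncodeToString(key))
}
```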
## Token Authentication
Bearer token authentication for HTTP/HTTPS.
### Configuration
```toml
[pipelines.sources.http.auth]
type = "token"
[pipelines.sources.http.auth.token]
tokens = ["token1", "token2", "token3"]
```
### Generating Tokens
```bash
logwisp auth -k -l 32
```
Generates:
- Base64-encoded token
- Hex-encoded token
- Configuration snippet
### Token Usage
Include in requests:
```
Authorization: Bearer <token>
```
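For example, posting to an HTTP source that uses token auth (host, port, and ingest path are the defaults from the configuration reference; the payload is illustrative):

```bash
curl -X POST https://localhost:8081/ingest \
  -H "Authorization: Bearer token1" \
  -d '{"message":"hello"}'
```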
## SCRAM Authentication
Secure Challenge-Response for TCP connections.
### Configuration
```toml
[pipelines.sources.tcp.auth]
type = "scram"
[[pipelines.sources.tcp.auth.scram.users]]
username = "tcpuser"
stored_key = "base64..."
server_key = "base64..."
salt = "base64..."
argon_time = 3
argon_memory = 65536
argon_threads = 4
```
### Generating SCRAM Credentials
```bash
logwisp auth -u tcpuser -s
```
### SCRAM Features
- Argon2-SCRAM-SHA256 algorithm
- Challenge-response mechanism
- No password transmission
- Replay attack protection
- Works over unencrypted connections
## mTLS (Mutual TLS)
Certificate-based authentication for HTTPS.
### Server Configuration
```toml
[pipelines.sources.http.tls]
enabled = true
cert_file = "/path/to/server.pem"
key_file = "/path/to/server.key"
client_auth = true
client_ca_file = "/path/to/ca.pem"
verify_client_cert = true
[pipelines.sources.http.auth]
type = "mtls"
```
### Client Configuration
```toml
[pipelines.sinks.http_client.tls]
enabled = true
cert_file = "/path/to/client.pem"
key_file = "/path/to/client.key"
[pipelines.sinks.http_client.auth]
type = "mtls"
```
### Certificate Generation
Use the `tls` command:
```bash
# Generate CA
logwisp tls -ca -o ca
# Generate server certificate
logwisp tls -server -ca-cert ca.pem -ca-key ca.key -host localhost -o server
# Generate client certificate
logwisp tls -client -ca-cert ca.pem -ca-key ca.key -o client
```
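With the generated files, a client can present its certificate when connecting; for example, reading an HTTP sink's SSE stream (port and path are the defaults from the configuration reference):

```bash
curl -N --cacert ca.pem --cert client.pem --key client.key \
  https://localhost:8080/stream
```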
## Authentication Command
### Usage
```bash
logwisp auth [options]
```
### Options
| Flag | Description |
|------|-------------|
| `-u, --user` | Username for credential generation |
| `-p, --password` | Password (prompts if not provided) |
| `-b, --basic` | Generate basic auth (HTTP/HTTPS) |
| `-s, --scram` | Generate SCRAM auth (TCP) |
| `-k, --token` | Generate bearer token |
| `-l, --length` | Token length in bytes (default: 32) |
### Security Best Practices
1. **Always use TLS** for HTTP authentication
2. **Never hardcode passwords** in configuration
3. **Use strong passwords** (minimum 12 characters)
4. **Rotate tokens regularly**
5. **Limit user permissions** to minimum required
6. **Store password hashes only**, never plaintext
7. **Use unique credentials** per service/user
## Access Control Lists
Combine authentication with IP-based access control:
```toml
[pipelines.sources.http.net_limit]
enabled = true
ip_whitelist = ["192.168.1.0/24", "10.0.0.0/8"]
ip_blacklist = ["192.168.1.100"]
```
Priority order:
1. Blacklist (checked first, immediate deny)
2. Whitelist (if configured, must match)
3. Authentication (if configured)
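In sketch form (illustrative; the real checks match CIDR ranges such as `192.168.1.0/24`, not exact strings):

```go
package netlimit

// allow applies the documented order: blacklist first, then
// whitelist (only when one is configured), then authentication.
func allow(ip string, authenticated bool, whitelist, blacklist map[string]bool) bool {
	if blacklist[ip] {
		return false // blacklist: immediate deny
	}
	if len(whitelist) > 0 && !whitelist[ip] {
		return false // whitelist configured but no match
	}
	return authenticated // authentication result (if configured)
}
```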
## Credential Storage
### Configuration File
Store hashes in TOML:
```toml
[[pipelines.sources.http.auth.basic.users]]
username = "admin"
password_hash = "$argon2id$..."
```
### Environment Variables
Override via environment:
```bash
export LOGWISP_PIPELINES_0_SOURCES_0_HTTP_AUTH_BASIC_USERS_0_USERNAME=admin
export LOGWISP_PIPELINES_0_SOURCES_0_HTTP_AUTH_BASIC_USERS_0_PASSWORD_HASH='$argon2id$...'
```
### External Files
Future support planned for:
- External user databases
- LDAP/AD integration
- OAuth2/OIDC providers

View File

@@ -1,196 +1,260 @@
# Command Line Interface
LogWisp CLI options for controlling behavior without modifying configuration files.
LogWisp CLI reference for commands and options.
## Synopsis
```bash
logwisp [command] [options]
logwisp [options]
```
## General Options
## Commands
### `--config <path>`
Configuration file location.
- **Default**: `~/.config/logwisp/logwisp.toml`
- **Example**: `logwisp --config /etc/logwisp/production.toml`
### Main Commands
### `--router`
Enable HTTP router mode for path-based routing.
- **Default**: `false`
- **Example**: `logwisp --router`
| Command | Description |
|---------|-------------|
| `auth` | Generate authentication credentials |
| `tls` | Generate TLS certificates |
| `version` | Display version information |
| `help` | Show help information |
### auth Command
Generate authentication credentials.
```bash
logwisp auth [options]
```
**Options:**
| Flag | Description | Default |
|------|-------------|---------|
| `-u, --user` | Username | Required for password auth |
| `-p, --password` | Password | Prompts if not provided |
| `-b, --basic` | Generate basic auth | - |
| `-s, --scram` | Generate SCRAM auth | - |
| `-k, --token` | Generate bearer token | - |
| `-l, --length` | Token length in bytes | 32 |
### tls Command
Generate TLS certificates.
```bash
logwisp tls [options]
```
**Options:**
| Flag | Description | Default |
|------|-------------|---------|
| `-ca` | Generate CA certificate | - |
| `-server` | Generate server certificate | - |
| `-client` | Generate client certificate | - |
| `-host` | Comma-separated hosts/IPs | localhost |
| `-o` | Output file prefix | Required |
| `-ca-cert` | CA certificate file | Required for server/client |
| `-ca-key` | CA key file | Required for server/client |
| `-days` | Certificate validity days | 365 |
### version Command
### `--version`
Display version information.
### `--background`
Run as background process.
- **Example**: `logwisp --background`
### `--quiet`
Suppress all output except sink output (overrides logging configuration).
- **Example**: `logwisp --quiet`
### `--disable-status-reporter`
Disable periodic status reporting.
- **Example**: `logwisp --disable-status-reporter`
### `--config-auto-reload`
Enable automatic configuration reloading on file changes.
- **Example**: `logwisp --config-auto-reload --config /etc/logwisp/config.toml`
- Monitors configuration file for changes
- Reloads pipelines without restart
- Preserves connections during reload
### `--config-save-on-exit`
Save current configuration to file on exit.
- **Example**: `logwisp --config-save-on-exit`
- Useful with runtime modifications
- Requires valid config file path
## Logging Options
Override configuration file settings:
### `--logging.output <mode>`
LogWisp's operational log output.
- **Values**: `file`, `stdout`, `stderr`, `both`, `none`
- **Example**: `logwisp --logging.output both`
### `--logging.level <level>`
Minimum log level.
- **Values**: `debug`, `info`, `warn`, `error`
- **Example**: `logwisp --logging.level debug`
### `--logging.file.directory <path>`
Log directory (with file output).
- **Example**: `logwisp --logging.file.directory /var/log/logwisp`
### `--logging.file.name <name>`
Log file name (with file output).
- **Example**: `logwisp --logging.file.name app`
### `--logging.file.max_size_mb <size>`
Maximum log file size in MB.
- **Example**: `logwisp --logging.file.max_size_mb 200`
### `--logging.file.max_total_size_mb <size>`
Maximum total log size in MB.
- **Example**: `logwisp --logging.file.max_total_size_mb 2000`
### `--logging.file.retention_hours <hours>`
Log retention period in hours.
- **Example**: `logwisp --logging.file.retention_hours 336`
### `--logging.console.target <target>`
Console output destination.
- **Values**: `stdout`, `stderr`, `split`
- **Example**: `logwisp --logging.console.target split`
### `--logging.console.format <format>`
Console output format.
- **Values**: `txt`, `json`
- **Example**: `logwisp --logging.console.format json`
## Pipeline Options
Configure pipelines via CLI (N = array index, 0-based):
### `--pipelines.N.name <name>`
Pipeline name.
- **Example**: `logwisp --pipelines.0.name myapp`
### `--pipelines.N.sources.N.type <type>`
Source type.
- **Example**: `logwisp --pipelines.0.sources.0.type directory`
### `--pipelines.N.sources.N.options.<key> <value>`
Source options.
- **Example**: `logwisp --pipelines.0.sources.0.options.path /var/log`
### `--pipelines.N.filters.N.type <type>`
Filter type.
- **Example**: `logwisp --pipelines.0.filters.0.type include`
### `--pipelines.N.filters.N.patterns <json>`
Filter patterns (JSON array).
- **Example**: `logwisp --pipelines.0.filters.0.patterns '["ERROR","WARN"]'`
### `--pipelines.N.sinks.N.type <type>`
Sink type.
- **Example**: `logwisp --pipelines.0.sinks.0.type http`
### `--pipelines.N.sinks.N.options.<key> <value>`
Sink options.
- **Example**: `logwisp --pipelines.0.sinks.0.options.port 8080`
## Examples
### Basic Usage
```bash
# Default configuration
logwisp
# Specific configuration
logwisp --config /etc/logwisp/production.toml
logwisp version
logwisp -v
logwisp --version
```
### Development
```bash
# Debug mode
logwisp --logging.output stderr --logging.level debug
Output includes:
- Version number
- Build date
- Git commit hash
- Go version
# With file output
logwisp --logging.output both --logging.level debug --logging.file.directory ./debug-logs
```
## Global Options
### Configuration Options
| Flag | Description | Default |
|------|-------------|---------|
| `-c, --config` | Configuration file path | `./logwisp.toml` |
| `-b, --background` | Run as daemon | false |
| `-q, --quiet` | Suppress console output | false |
| `--disable-status-reporter` | Disable status logging | false |
| `--config-auto-reload` | Enable config hot reload | false |
### Logging Options
| Flag | Description | Values |
|------|-------------|--------|
| `--logging.output` | Log output mode | file, stdout, stderr, split, all, none |
| `--logging.level` | Log level | debug, info, warn, error |
| `--logging.file.directory` | Log directory | Path |
| `--logging.file.name` | Log filename | String |
| `--logging.file.max_size_mb` | Max file size | Integer |
| `--logging.file.max_total_size_mb` | Total size limit | Integer |
| `--logging.file.retention_hours` | Retention period | Float |
| `--logging.console.target` | Console target | stdout, stderr, split |
| `--logging.console.format` | Output format | txt, json |
### Pipeline Options
Configure pipelines via CLI (N = array index, 0-based).
**Pipeline Configuration:**
| Flag | Description |
|------|-------------|
| `--pipelines.N.name` | Pipeline name |
| `--pipelines.N.sources.N.type` | Source type |
| `--pipelines.N.filters.N.type` | Filter type |
| `--pipelines.N.sinks.N.type` | Sink type |
## Flag Formats
### Boolean Flags
```bash
logwisp --quiet
logwisp --quiet=true
logwisp --quiet=false
```
### String Flags
```bash
logwisp --config /etc/logwisp/config.toml
logwisp -c config.toml
```
### Pipeline Configuration via CLI
```bash
# Simple pipeline
logwisp --pipelines.0.name app \
  --pipelines.0.sources.0.type directory \
  --pipelines.0.sources.0.options.path /var/log/app \
  --pipelines.0.sinks.0.type http \
  --pipelines.0.sinks.0.options.port 8080

# With filters
logwisp --pipelines.0.name filtered \
  --pipelines.0.sources.0.type stdin \
  --pipelines.0.filters.0.type include \
  --pipelines.0.filters.0.patterns '["ERROR","CRITICAL"]' \
  --pipelines.0.sinks.0.type stdout
```

### Nested Configuration
```bash
logwisp --logging.level=debug
logwisp --pipelines.0.name=myapp
logwisp --pipelines.0.sources.0.type=stdin
```

### Array Values (JSON)
```bash
logwisp --pipelines.0.filters.0.patterns='["ERROR","WARN"]'
```
## Environment Variables
All flags can be set via environment:
```bash
export LOGWISP_QUIET=true
export LOGWISP_LOGGING_LEVEL=debug
export LOGWISP_PIPELINES_0_NAME=myapp
```
## Configuration Precedence
1. Command-line flags (highest; demonstrated below)
2. Environment variables
3. Configuration file
4. Built-in defaults (lowest)
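For example, a flag overrides a conflicting environment variable:
```bash
# The effective log level is "warn": the flag beats the variable
export LOGWISP_LOGGING_LEVEL=debug
logwisp --logging.level=warn
```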
## Exit Codes
- `0`: Success
- `1`: General error
- `2`: Configuration file not found
- `137`: SIGKILL received
## Signal Handling
- `SIGINT` (Ctrl+C): Graceful shutdown
- `SIGTERM`: Graceful shutdown
- `SIGHUP`: Reload configuration (when auto-reload enabled)
- `SIGUSR1`: Reload configuration (when auto-reload enabled)
- `SIGKILL`: Immediate termination (exit code 137)
## Usage Patterns
### Development Mode
```bash
# Verbose logging to console
logwisp --logging.output=stderr --logging.level=debug
# Quick test with stdin
logwisp --pipelines.0.sources.0.type=stdin --pipelines.0.sinks.0.type=console
```
### Production Deployment
```bash
# Background with file logging
logwisp --background --config /etc/logwisp/prod.toml --logging.output=file
# Systemd service
ExecStart=/usr/local/bin/logwisp --config /etc/logwisp/config.toml
```
### Debugging
```bash
# Check configuration
logwisp --config test.toml --logging.level=debug --disable-status-reporter
# Dry run (verify config only)
logwisp --config test.toml --quiet
```
### Quick Commands
```bash
# Generate admin password
logwisp auth -u admin -b
# Create self-signed certs
logwisp tls -server -host localhost -o server
# Check version
logwisp version
```
## Help System
### General Help
```bash
logwisp --help
logwisp -h
logwisp help
```
### Command Help
```bash
logwisp auth --help
logwisp tls --help
logwisp help auth
```
## Special Flags
### Internal Flags
These flags are for internal use:
- `--background-daemon`: Child process indicator
- `--config-save-on-exit`: Save config on shutdown
### Hidden Behaviors
- SIGHUP ignored by default (nohup behavior)
- Automatic panic recovery in pipelines
- Resource cleanup on shutdown


@ -1,512 +1,198 @@
# Configuration Reference
LogWisp configuration uses TOML format with flexible override mechanisms.
## Configuration Precedence
Configuration sources are evaluated in order:
1. **Command-line flags** (highest priority)
2. **Environment variables**
3. **Configuration file**
4. **Built-in defaults** (lowest priority)
### Complete Configuration Reference
| Category | CLI Flag | Environment Variable | TOML File |
|----------|----------|---------------------|-----------|
| **Top-level** |
| Router mode | `--router` | `LOGWISP_ROUTER` | `router = true` |
| Background mode | `--background` | `LOGWISP_BACKGROUND` | `background = true` |
| Show version | `--version` | `LOGWISP_VERSION` | `version = true` |
| Quiet mode | `--quiet` | `LOGWISP_QUIET` | `quiet = true` |
| Disable status reporter | `--disable-status-reporter` | `LOGWISP_DISABLE_STATUS_REPORTER` | `disable_status_reporter = true` |
| Config auto-reload | `--config-auto-reload` | `LOGWISP_CONFIG_AUTO_RELOAD` | `config_auto_reload = true` |
| Config save on exit | `--config-save-on-exit` | `LOGWISP_CONFIG_SAVE_ON_EXIT` | `config_save_on_exit = true` |
| Config file | `--config <path>` | `LOGWISP_CONFIG_FILE` | N/A |
| Config directory | N/A | `LOGWISP_CONFIG_DIR` | N/A |
| **Logging** |
| Output mode | `--logging.output <mode>` | `LOGWISP_LOGGING_OUTPUT` | `[logging]`<br>`output = "stderr"` |
| Log level | `--logging.level <level>` | `LOGWISP_LOGGING_LEVEL` | `[logging]`<br>`level = "info"` |
| File directory | `--logging.file.directory <path>` | `LOGWISP_LOGGING_FILE_DIRECTORY` | `[logging.file]`<br>`directory = "./logs"` |
| File name | `--logging.file.name <name>` | `LOGWISP_LOGGING_FILE_NAME` | `[logging.file]`<br>`name = "logwisp"` |
| Max file size | `--logging.file.max_size_mb <size>` | `LOGWISP_LOGGING_FILE_MAX_SIZE_MB` | `[logging.file]`<br>`max_size_mb = 100` |
| Max total size | `--logging.file.max_total_size_mb <size>` | `LOGWISP_LOGGING_FILE_MAX_TOTAL_SIZE_MB` | `[logging.file]`<br>`max_total_size_mb = 1000` |
| Retention hours | `--logging.file.retention_hours <hours>` | `LOGWISP_LOGGING_FILE_RETENTION_HOURS` | `[logging.file]`<br>`retention_hours = 168` |
| Console target | `--logging.console.target <target>` | `LOGWISP_LOGGING_CONSOLE_TARGET` | `[logging.console]`<br>`target = "stderr"` |
| Console format | `--logging.console.format <format>` | `LOGWISP_LOGGING_CONSOLE_FORMAT` | `[logging.console]`<br>`format = "txt"` |
| **Pipelines** |
| Pipeline name | `--pipelines.N.name <name>` | `LOGWISP_PIPELINES_N_NAME` | `[[pipelines]]`<br>`name = "default"` |
| Source type | `--pipelines.N.sources.N.type <type>` | `LOGWISP_PIPELINES_N_SOURCES_N_TYPE` | `[[pipelines.sources]]`<br>`type = "directory"` |
| Source options | `--pipelines.N.sources.N.options.<key> <value>` | `LOGWISP_PIPELINES_N_SOURCES_N_OPTIONS_<KEY>` | `[[pipelines.sources]]`<br>`options = { ... }` |
| Filter type | `--pipelines.N.filters.N.type <type>` | `LOGWISP_PIPELINES_N_FILTERS_N_TYPE` | `[[pipelines.filters]]`<br>`type = "include"` |
| Filter logic | `--pipelines.N.filters.N.logic <logic>` | `LOGWISP_PIPELINES_N_FILTERS_N_LOGIC` | `[[pipelines.filters]]`<br>`logic = "or"` |
| Filter patterns | `--pipelines.N.filters.N.patterns <json>` | `LOGWISP_PIPELINES_N_FILTERS_N_PATTERNS` | `[[pipelines.filters]]`<br>`patterns = [...]` |
| Sink type | `--pipelines.N.sinks.N.type <type>` | `LOGWISP_PIPELINES_N_SINKS_N_TYPE` | `[[pipelines.sinks]]`<br>`type = "http"` |
| Sink options | `--pipelines.N.sinks.N.options.<key> <value>` | `LOGWISP_PIPELINES_N_SINKS_N_OPTIONS_<KEY>` | `[[pipelines.sinks]]`<br>`options = { ... }` |
| Auth type | `--pipelines.N.auth.type <type>` | `LOGWISP_PIPELINES_N_AUTH_TYPE` | `[pipelines.auth]`<br>`type = "none"` |
Note: `N` represents array indices (0-based).

## File Location
LogWisp searches for configuration in order:
1. Path specified via `--config` flag
2. Path from `LOGWISP_CONFIG_FILE` environment variable (optionally combined with `LOGWISP_CONFIG_DIR`)
3. `~/.config/logwisp/logwisp.toml`
4. `./logwisp.toml` in current directory
## Global Settings
Top-level configuration options:
| Setting | Type | Default | Description |
|---------|------|---------|-------------|
| `background` | bool | false | Run as daemon process |
| `quiet` | bool | false | Suppress console output |
| `disable_status_reporter` | bool | false | Disable periodic status logging |
| `config_auto_reload` | bool | false | Enable file watch for auto-reload |
## Hot Reload
LogWisp supports automatic configuration reloading without restart:
```bash
# Enable hot reload
logwisp --config-auto-reload --config /etc/logwisp/config.toml
# Manual reload via signal
kill -HUP $(pidof logwisp) # or SIGUSR1
```
Hot reload updates:
- Pipeline configurations
- Sources and sinks
- Filters and formatters
- Rate limits
- Router mode changes

Not reloaded (requires restart):
- Logging configuration
- Background mode
- Global settings

## Logging Configuration
LogWisp's internal operational logging:
```toml
[logging]
output = "stdout"     # file|stdout|stderr|split|all|none
level = "info"        # debug|info|warn|error

[logging.file]
directory = "./log"
name = "logwisp"
max_size_mb = 100
max_total_size_mb = 1000
retention_hours = 168.0

[logging.console]
target = "stdout"     # stdout|stderr|split
format = "txt"        # txt|json
```
### Output Modes
- **file**: Write to log files only
- **stdout**: Write to standard output
- **stderr**: Write to standard error
- **split**: INFO/DEBUG to stdout, WARN/ERROR to stderr (see the example below)
- **all**: Write to both file and console
- **none**: Disable all logging
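For example, split mode pairs naturally with shell redirection:
```bash
# INFO/DEBUG land in info.log, WARN/ERROR in errors.log
logwisp --logging.output=split 1>info.log 2>errors.log
```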
## Pipeline Configuration
Each `[[pipelines]]` section defines an independent processing pipeline:
```toml
[[pipelines]]
name = "pipeline-name"

# Rate limiting (optional)
[pipelines.rate_limit]
rate = 1000.0
burst = 2000.0
policy = "drop"               # pass|drop
max_entry_size_bytes = 0      # 0=unlimited

# Format configuration (optional)
[pipelines.format]
type = "json"                 # raw|json|txt

# Sources (required, 1+)
[[pipelines.sources]]
type = "directory"
# ... source-specific config

# Filters (optional)
[[pipelines.filters]]
type = "include"
logic = "or"
patterns = ["ERROR", "WARN"]

# Sinks (required, 1+)
[[pipelines.sinks]]
type = "http"
# ... sink-specific config
```
## Environment Variables
All configuration options support environment variable overrides:

### Naming Convention
- Prefix: `LOGWISP_`
- Path separator: `_` (underscore)
- Array indices: Numeric suffix (0-based)
- Case: UPPERCASE

### Mapping Examples

| TOML Path | Environment Variable |
|-----------|---------------------|
| `quiet` | `LOGWISP_QUIET` |
| `logging.level` | `LOGWISP_LOGGING_LEVEL` |
| `pipelines[0].name` | `LOGWISP_PIPELINES_0_NAME` |
| `pipelines[0].sources[0].type` | `LOGWISP_PIPELINES_0_SOURCES_0_TYPE` |

## Command-Line Overrides
All configuration options can be overridden via CLI flags:
```bash
logwisp --quiet \
  --logging.level=debug \
  --pipelines.0.name=myapp \
  --pipelines.0.sources.0.type=stdin
```
## Configuration Validation
LogWisp validates configuration at startup:
- Required fields presence
- Type correctness
- Port conflicts
- Path accessibility
- Pattern compilation
- Network address formats
## Sources
Input data sources:

### Directory Source
```toml
[[pipelines.sources]]
type = "directory"
[pipelines.sources.options]
path = "/var/log/myapp"      # Directory to monitor
pattern = "*.log"            # File pattern (glob)
check_interval_ms = 100      # Check interval (10-60000)
```
### File Source
```toml
[[pipelines.sources]]
type = "file"
options = { path = "/var/log/app.log" }   # Specific file
```
### Stdin Source
```toml
[[pipelines.sources]]
type = "stdin"
options = {}
```
### HTTP Source
```toml
[[pipelines.sources]]
type = "http"
[pipelines.sources.options]
port = 8081                  # Port to listen on
ingest_path = "/ingest"      # Path for POST requests
buffer_size = 1000           # Input buffer size

# Optional rate limiting
[pipelines.sources.options.rate_limit]
enabled = true
requests_per_second = 10.0
burst_size = 20
limit_by = "ip"
```
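A quick way to smoke-test the ingest endpoint; the JSON payload here is illustrative, not a required schema:
```bash
# POST a log entry to the HTTP source configured above
curl -X POST http://localhost:8081/ingest \
  -H "Content-Type: application/json" \
  -d '{"level":"info","message":"hello from curl"}'
```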
### TCP Source
```toml
[[pipelines.sources]]
type = "tcp"
[pipelines.sources.options]
port = 9091                  # Port to listen on
buffer_size = 1000           # Input buffer size

# Optional rate limiting
[pipelines.sources.options.rate_limit]
enabled = true
requests_per_second = 5.0
burst_size = 10
limit_by = "ip"
```
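The TCP source reads newline-delimited entries, so any TCP client can feed it:
```bash
# Send two entries to the TCP source configured above
printf 'ERROR: disk full\nWARN: low memory\n' | nc localhost 9091
```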
## Filters
Control which log entries pass through:
```toml
# Include filter - only matching logs pass
[[pipelines.filters]]
type = "include"
logic = "or" # or: match any, and: match all
patterns = [
"ERROR",
"(?i)warn", # Case-insensitive
"\\bfatal\\b" # Word boundary
]
# Exclude filter - matching logs are dropped
[[pipelines.filters]]
type = "exclude"
patterns = ["DEBUG", "health-check"]
```
## Sinks
Output destinations:
### HTTP Sink (SSE)
```toml
[[pipelines.sinks]]
type = "http"
[pipelines.sinks.options]
port = 8080
buffer_size = 1000
stream_path = "/stream"
status_path = "/status"

# Heartbeat
[pipelines.sinks.options.heartbeat]
enabled = true
interval_seconds = 30
format = "comment"            # comment or json
include_timestamp = true
include_stats = false

# Rate limiting
[pipelines.sinks.options.rate_limit]
enabled = true
requests_per_second = 10.0
burst_size = 20
limit_by = "ip"               # ip or global
max_connections_per_ip = 5
max_total_connections = 100
response_code = 429
response_message = "Rate limit exceeded"
```
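Clients can then follow the stream with any SSE-capable tool:
```bash
# Follow the SSE stream and query status (paths from the example above)
curl -N http://localhost:8080/stream
curl http://localhost:8080/status
```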
### TCP Sink
```toml
[[pipelines.sinks]]
type = "tcp"
[pipelines.sinks.options]
port = 9090
buffer_size = 5000
heartbeat = { enabled = true, interval_seconds = 60, format = "json" }
rate_limit = { enabled = true, requests_per_second = 5.0, burst_size = 10 }
```
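A plain TCP client is enough to follow the stream:
```bash
# Follow the TCP stream (port from the example above)
nc localhost 9090
```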
### HTTP Client Sink
```toml
[[pipelines.sinks]]
type = "http_client"
[pipelines.sinks.options]
url = "https://remote-log-server.com/ingest"
buffer_size = 1000
batch_size = 100
batch_delay_ms = 1000
timeout_seconds = 30
max_retries = 3
retry_delay_ms = 1000
retry_backoff = 2.0
insecure_skip_verify = false

[pipelines.sinks.options.headers]
"Authorization" = "Bearer <API_KEY_HERE>"
"X-Custom-Header" = "value"
```
### TCP Client Sink
```toml
[[pipelines.sinks]]
type = "tcp_client"
[pipelines.sinks.options]
address = "remote-server.com:9090"
buffer_size = 1000
dial_timeout_seconds = 10
write_timeout_seconds = 30
keep_alive_seconds = 30
reconnect_delay_ms = 1000
max_reconnect_delay_seconds = 30
reconnect_backoff = 1.5
```
### File Sink
```toml
[[pipelines.sinks]]
type = "file"
[pipelines.sinks.options]
directory = "/var/log/logwisp"
name = "app"
max_size_mb = 100
max_total_size_mb = 1000
retention_hours = 168.0
min_disk_free_mb = 1000
buffer_size = 2000
```
### Console Sinks
```toml
[[pipelines.sinks]]
type = "stdout" # or "stderr"
[pipelines.sinks.options]
buffer_size = 500
target = "stdout"     # stdout, stderr, or split
```
## Complete Examples
### Basic Application Monitoring
```toml
[[pipelines]]
name = "app"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
```
### Hot Reload with JSON Output
```toml
config_auto_reload = true
config_save_on_exit = true
[[pipelines]]
name = "app"

[pipelines.format]
type = "json"

[pipelines.format.json]
pretty = true
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
```
### Filtering
```toml
[logging]
output = "file"
level = "info"
[[pipelines]]
name = "production"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log", check_interval_ms = 50 }
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "WARN", "CRITICAL"]
[[pipelines.filters]]
type = "exclude"
patterns = ["/health", "/metrics"]
[[pipelines.sinks]]
type = "http"
options = { port = 8080, rate_limit = { enabled = true, requests_per_second = 25.0 } }
[[pipelines.sinks]]
type = "file"
options = { directory = "/var/log/archive", name = "errors" }
```
### Multi-Source Aggregation
```toml
[[pipelines]]
name = "aggregated"

[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/nginx", pattern = "*.log" }

[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }

[[pipelines.sources]]
type = "stdin"
options = {}

[[pipelines.sources]]
type = "http"
options = { port = 8081, ingest_path = "/logs" }

[[pipelines.sinks]]
type = "tcp"
options = { port = 9090 }
```

## Default Configuration
Minimal working configuration:
```toml
[[pipelines]]
name = "default"

[[pipelines.sources]]
type = "directory"

[pipelines.sources.directory]
path = "./"
pattern = "*.log"

[[pipelines.sinks]]
type = "console"

[pipelines.sinks.console]
target = "stdout"
```
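Running with this configuration (or with no configuration file at all) tails `*.log` in the current directory:
```bash
logwisp &
echo "test entry" >> app.log   # appears on stdout via the console sink
```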
### Router Mode
```toml
# Run with: logwisp --router
router = true
[[pipelines]]
name = "api"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/api", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 } # Same port OK in router mode
[[pipelines]]
name = "web"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/nginx", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 } # Shared port
# Access:
# http://localhost:8080/api/stream
# http://localhost:8080/web/stream
# http://localhost:8080/status
```
### Remote Log Forwarding
```toml
[[pipelines]]
name = "forwarder"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "WARN"]
[[pipelines.sinks]]
type = "http_client"
[pipelines.sinks.options]
url = "https://log-aggregator.example.com/ingest"
batch_size = 100
batch_delay_ms = 5000
headers = { "Authorization" = "Bearer <API_KEY_HERE>" }
[[pipelines.sinks]]
type = "tcp_client"
options = { address = "backup-logger.example.com:9090", reconnect_delay_ms = 5000 }
```
## Configuration Schema

### Type Reference

| TOML Type | Go Type | Environment Format |
|-----------|---------|-------------------|
| String | string | Plain text |
| Integer | int64 | Numeric string |
| Float | float64 | Decimal string |
| Boolean | bool | true/false |
| Array | []T | JSON array string |
| Table | struct | Nested with `_` |


@ -1,274 +0,0 @@
# Environment Variables
Configure LogWisp through environment variables for containerized deployments.
## Naming Convention
- **Prefix**: `LOGWISP_`
- **Path separator**: `_` (underscore)
- **Array indices**: Numeric suffix (0-based)
- **Case**: UPPERCASE
Examples:
- `logging.level` → `LOGWISP_LOGGING_LEVEL`
- `pipelines[0].name` → `LOGWISP_PIPELINES_0_NAME`
## General Variables
```bash
LOGWISP_CONFIG_FILE=/etc/logwisp/config.toml
LOGWISP_CONFIG_DIR=/etc/logwisp
LOGWISP_BACKGROUND=true
LOGWISP_QUIET=true
LOGWISP_DISABLE_STATUS_REPORTER=true
LOGWISP_CONFIG_AUTO_RELOAD=true
LOGWISP_CONFIG_SAVE_ON_EXIT=true
```
### `LOGWISP_CONFIG_FILE`
Configuration file path.
```bash
export LOGWISP_CONFIG_FILE=/etc/logwisp/config.toml
```
### `LOGWISP_CONFIG_DIR`
Configuration directory.
```bash
export LOGWISP_CONFIG_DIR=/etc/logwisp
export LOGWISP_CONFIG_FILE=production.toml
```
### `LOGWISP_ROUTER`
Enable router mode.
```bash
export LOGWISP_ROUTER=true
```
### `LOGWISP_BACKGROUND`
Run in background.
```bash
export LOGWISP_BACKGROUND=true
```
### `LOGWISP_QUIET`
Suppress all output.
```bash
export LOGWISP_QUIET=true
```
### `LOGWISP_DISABLE_STATUS_REPORTER`
Disable periodic status reporting.
```bash
export LOGWISP_DISABLE_STATUS_REPORTER=true
```
## Logging Variables
```bash
# Output mode
LOGWISP_LOGGING_OUTPUT=all
# Log level
LOGWISP_LOGGING_LEVEL=debug
# File logging
LOGWISP_LOGGING_FILE_DIRECTORY=/var/log/logwisp
LOGWISP_LOGGING_FILE_NAME=logwisp
LOGWISP_LOGGING_FILE_MAX_SIZE_MB=100
LOGWISP_LOGGING_FILE_MAX_TOTAL_SIZE_MB=1000
LOGWISP_LOGGING_FILE_RETENTION_HOURS=168
# Console logging
LOGWISP_LOGGING_CONSOLE_TARGET=stderr
LOGWISP_LOGGING_CONSOLE_FORMAT=json
# Special console target override
LOGWISP_CONSOLE_TARGET=split # Overrides sink console targets
```
## Pipeline Configuration
### Basic Pipeline
```bash
# Pipeline name
LOGWISP_PIPELINES_0_NAME=app
# Source configuration
LOGWISP_PIPELINES_0_SOURCES_0_TYPE=directory
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PATH=/var/log/app
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PATTERN="*.log"
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_CHECK_INTERVAL_MS=100
# Sink configuration
LOGWISP_PIPELINES_0_SINKS_0_TYPE=http
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_PORT=8080
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_BUFFER_SIZE=1000
```
### Pipeline with Formatter
```bash
# Pipeline name and format
LOGWISP_PIPELINES_0_NAME=app
LOGWISP_PIPELINES_0_FORMAT=json
# Format options
LOGWISP_PIPELINES_0_FORMAT_OPTIONS_PRETTY=true
LOGWISP_PIPELINES_0_FORMAT_OPTIONS_TIMESTAMP_FIELD=ts
LOGWISP_PIPELINES_0_FORMAT_OPTIONS_LEVEL_FIELD=severity
```
### Filters
```bash
# Include filter
LOGWISP_PIPELINES_0_FILTERS_0_TYPE=include
LOGWISP_PIPELINES_0_FILTERS_0_LOGIC=or
LOGWISP_PIPELINES_0_FILTERS_0_PATTERNS='["ERROR","WARN"]'
# Exclude filter
LOGWISP_PIPELINES_0_FILTERS_1_TYPE=exclude
LOGWISP_PIPELINES_0_FILTERS_1_PATTERNS='["DEBUG"]'
```
### HTTP Source
```bash
LOGWISP_PIPELINES_0_SOURCES_0_TYPE=http
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PORT=8081
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_INGEST_PATH=/ingest
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_BUFFER_SIZE=1000
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_RATE_LIMIT_ENABLED=true
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_RATE_LIMIT_REQUESTS_PER_SECOND=10.0
```
### TCP Source
```bash
LOGWISP_PIPELINES_0_SOURCES_0_TYPE=tcp
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PORT=9091
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_BUFFER_SIZE=1000
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_RATE_LIMIT_ENABLED=true
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_RATE_LIMIT_REQUESTS_PER_SECOND=5.0
```
### HTTP Sink Options
```bash
# Basic
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_STREAM_PATH=/stream
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_STATUS_PATH=/status
# Heartbeat
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_ENABLED=true
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_INTERVAL_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_FORMAT=comment
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_INCLUDE_TIMESTAMP=true
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_INCLUDE_STATS=false
# Rate Limiting
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_ENABLED=true
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_REQUESTS_PER_SECOND=10.0
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_BURST_SIZE=20
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_LIMIT_BY=ip
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_MAX_CONNECTIONS_PER_IP=5
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_MAX_TOTAL_CONNECTIONS=100
```
### HTTP Client Sink
```bash
LOGWISP_PIPELINES_0_SINKS_0_TYPE=http_client
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_URL=https://log-server.com/ingest
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_BATCH_SIZE=100
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_BATCH_DELAY_MS=5000
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_TIMEOUT_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MAX_RETRIES=3
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RETRY_DELAY_MS=1000
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RETRY_BACKOFF=2.0
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEADERS='{"Authorization":"Bearer <API_KEY_HERE>"}'
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_INSECURE_SKIP_VERIFY=false
```
### TCP Client Sink
```bash
LOGWISP_PIPELINES_0_SINKS_0_TYPE=tcp_client
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_ADDRESS=remote-server.com:9090
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_DIAL_TIMEOUT_SECONDS=10
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_WRITE_TIMEOUT_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_KEEP_ALIVE_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RECONNECT_DELAY_MS=1000
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MAX_RECONNECT_DELAY_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RECONNECT_BACKOFF=1.5
```
### File Sink
```bash
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_DIRECTORY=/var/log/logwisp
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_NAME=app
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MAX_SIZE_MB=100
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MAX_TOTAL_SIZE_MB=1000
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RETENTION_HOURS=168
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MIN_DISK_FREE_MB=1000
```
### Console Sinks
```bash
LOGWISP_PIPELINES_0_SINKS_0_TYPE=stdout
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_BUFFER_SIZE=500
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_TARGET=stdout
```
## Example
```bash
#!/usr/bin/env bash
# General settings
export LOGWISP_DISABLE_STATUS_REPORTER=false
# Logging
export LOGWISP_LOGGING_OUTPUT=all
export LOGWISP_LOGGING_LEVEL=info
# Pipeline 0: Application logs
export LOGWISP_PIPELINES_0_NAME=app
export LOGWISP_PIPELINES_0_SOURCES_0_TYPE=directory
export LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PATH=/var/log/myapp
export LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PATTERN="*.log"
# Filters
export LOGWISP_PIPELINES_0_FILTERS_0_TYPE=include
export LOGWISP_PIPELINES_0_FILTERS_0_PATTERNS='["ERROR","WARN"]'
# HTTP sink
export LOGWISP_PIPELINES_0_SINKS_0_TYPE=http
export LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_PORT=8080
export LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_ENABLED=true
export LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_REQUESTS_PER_SECOND=25.0
# Pipeline 1: System logs
export LOGWISP_PIPELINES_1_NAME=system
export LOGWISP_PIPELINES_1_SOURCES_0_TYPE=file
export LOGWISP_PIPELINES_1_SOURCES_0_OPTIONS_PATH=/var/log/syslog
# TCP sink
export LOGWISP_PIPELINES_1_SINKS_0_TYPE=tcp
export LOGWISP_PIPELINES_1_SINKS_0_OPTIONS_PORT=9090
# Pipeline 2: Remote forwarding
export LOGWISP_PIPELINES_2_NAME=forwarder
export LOGWISP_PIPELINES_2_SOURCES_0_TYPE=http
export LOGWISP_PIPELINES_2_SOURCES_0_OPTIONS_PORT=8081
export LOGWISP_PIPELINES_2_SOURCES_0_OPTIONS_INGEST_PATH=/logs
# HTTP client sink
export LOGWISP_PIPELINES_2_SINKS_0_TYPE=http_client
export LOGWISP_PIPELINES_2_SINKS_0_OPTIONS_URL=https://log-aggregator.example.com/ingest
export LOGWISP_PIPELINES_2_SINKS_0_OPTIONS_BATCH_SIZE=100
export LOGWISP_PIPELINES_2_SINKS_0_OPTIONS_HEADERS='{"Authorization":"Bearer <API_KEY_HERE>"}'
logwisp
```
## Precedence
1. Command-line flags (highest)
2. Environment variables
3. Configuration file
4. Defaults (lowest)


@ -1,268 +1,185 @@
# Filters
LogWisp filters control which log entries pass through the pipeline using pattern matching.

## How Filters Work
- **Include**: Only matching logs pass (whitelist)
- **Exclude**: Matching logs are dropped (blacklist)
- Multiple filters apply sequentially - all must pass

## Filter Types

### Include Filter
Only entries matching patterns pass through.
```toml
[[pipelines.filters]]
type = "include"
logic = "or"      # or|and
patterns = [
    "ERROR",
    "WARN",
    "CRITICAL"
]
```

### Exclude Filter
Entries matching patterns are dropped.
```toml
[[pipelines.filters]]
type = "exclude"
patterns = [
    "DEBUG",
    "TRACE",
    "health-check"
]
```
## Configuration Options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `type` | string | Required | Filter type (include/exclude) |
| `logic` | string | "or" | Pattern matching logic (or/and) |
| `patterns` | []string | Required | Pattern list |
## Pattern Syntax
Patterns are Go regular expressions (RE2):
### Basic Patterns
- **Literal match**: `"ERROR"` - matches "ERROR" anywhere
- **Case-insensitive**: `"(?i)error"` - matches "error", "ERROR", "Error"
- **Word boundary**: `"\\berror\\b"` - matches whole word only
### Advanced Patterns
- **Alternation**: `"ERROR|WARN|FATAL"`
- **Character classes**: `"[0-9]{3}"`
- **Wildcards**: `".*exception.*"`
- **Line anchors**: `"^ERROR"` (start), `"ERROR$"` (end)
### Special Characters
Escape special regex characters with backslash:
- `.` → `\\.`
- `*` → `\\*`
- `[` → `\\[`
- `(` → `\\(`
## Filter Logic
### OR Logic (default)
Entry passes if ANY pattern matches:
```toml
"ERROR" # Substring match
"(?i)error" # Case-insensitive
"\\berror\\b" # Word boundaries
"^ERROR" # Start of line
"ERROR$" # End of line
"error|fail|warn" # Alternatives
logic = "or"
patterns = ["ERROR", "WARN"]
# Passes: "ERROR in module", "WARN: low memory"
# Blocks: "INFO: started"
```
### AND Logic
Entry passes only if ALL patterns match:
```toml
logic = "and"
patterns = ["database", "ERROR"]
# Passes: "ERROR: database connection failed"
# Blocks: "ERROR: file not found"
```
## Filter Chain
Multiple filters execute sequentially:
```toml
# First filter: Include errors and warnings
[[pipelines.filters]]
type = "include"
patterns = ["(?i)\\b(error|fail|critical)\\b"]
patterns = ["ERROR", "WARN"]
# Exclude known non-issues
# Second filter: Exclude test environments
[[pipelines.filters]]
type = "exclude"
patterns = ["Error: Expected", "/health"]
patterns = ["test-env", "staging"]
```
Processing order:
1. Entry arrives from source
2. Include filter evaluates
3. If passed, exclude filter evaluates
4. If passed all filters, entry continues to sink
## Performance Considerations
### Pattern Compilation
- Patterns compile once at startup
- Invalid patterns cause startup failure
- Complex patterns may impact performance
### Optimization Tips
- Place most selective filters first
- Use simple patterns when possible
- Combine related patterns with alternation
- Avoid excessive wildcards (`.*`)
## Filter Statistics
Filters track:
- Total entries evaluated
- Entries passed
- Entries blocked
- Processing time per pattern
## Common Use Cases
### Log Level Filtering
```toml
[[pipelines.filters]]
type = "include"
patterns = ["/api/", "/v[0-9]+/"]
patterns = ["ERROR", "WARN", "FATAL", "CRITICAL"]
```
### Application Filtering
```toml
[[pipelines.filters]]
type = "include"
patterns = ["app1", "app2", "app3"]
```
### Noise Reduction
```toml
[[pipelines.filters]]
type = "exclude"
patterns = [
"health-check",
"ping",
"/metrics",
"heartbeat"
]
```
### Security Filtering
```toml
[[pipelines.filters]]
type = "exclude"
patterns = [
    "password",
    "token",
    "api[_-]key",
    "secret"
]
```

## Testing Filters
```bash
# Generate test entries
echo "[ERROR] Test" >> test.log
echo "[INFO] Test" >> test.log

# Run with debug logging
logwisp --logging.level=debug

# Check output
curl -N http://localhost:8080/stream
```
## Regex Pattern Guide
LogWisp uses Go's standard regex engine (RE2). It includes most common features but omits backtracking-heavy syntax.

For complex logic, chain multiple filters (e.g., an `include` followed by an `exclude`) rather than writing one complex regex.
### Basic Matching
| Pattern | Description | Example |
| :--- | :--- | :--- |
| `literal` | Matches the exact text. | `"ERROR"` matches any log with "ERROR". |
| `.` | Matches any single character (except newline). | `"user."` matches "userA", "userB", etc. |
| `a\|b` | Matches expression `a` OR expression `b`. | `"error\|fail"` matches lines with "error" or "fail". |
### Anchors and Boundaries
Anchors tie your pattern to a specific position in the line.
| Pattern | Description | Example |
| :--- | :--- | :--- |
| `^` | Matches the beginning of the line. | `"^ERROR"` matches lines *starting* with "ERROR". |
| `$` | Matches the end of the line. | `"crashed$"` matches lines *ending* with "crashed". |
| `\b` | Matches a word boundary. | `"\berror\b"` matches "error" but not "terrorist". |
### Character Classes
| Pattern | Description | Example |
| :--- | :--- | :--- |
| `[abc]` | Matches `a`, `b`, or `c`. | `"[aeiou]"` matches any vowel. |
| `[^abc]` | Matches any character *except* `a`, `b`, or `c`. | `"[^0-9]"` matches any non-digit. |
| `[a-z]` | Matches any character in the range `a` to `z`. | `"[a-zA-Z]"` matches any letter. |
| `\d` | Matches any digit (`[0-9]`). | `\d{3}` matches three digits, like "123". |
| `\w` | Matches any word character (`[a-zA-Z0-9_]`). | `\w+` matches one or more word characters. |
| `\s` | Matches any whitespace character. | `\s+` matches one or more spaces or tabs. |
### Quantifiers
Quantifiers specify how many times a character or group must appear.
| Pattern | Description | Example |
| :--- | :--- | :--- |
| `*` | Zero or more times. | `"a*"` matches "", "a", "aa". |
| `+` | One or more times. | `"a+"` matches "a", "aa", but not "". |
| `?` | Zero or one time. | `"colou?r"` matches "color" and "colour". |
| `{n}` | Exactly `n` times. | `\d{4}` matches a 4-digit number. |
| `{n,}` | `n` or more times. | `\d{2,}` matches numbers with 2 or more digits. |
| `{n,m}` | Between `n` and `m` times. | `\d{1,3}` matches numbers with 1 to 3 digits. |
### Grouping
| Pattern | Description | Example |
| :--- | :--- | :--- |
| `(...)` | Groups an expression and captures the match. | `(ERROR\|WARN)` captures "ERROR" or "WARN". |
| `(?:...)`| Groups an expression *without* capturing. Faster. | `(?:ERROR\|WARN)` is more efficient if you just need to group. |
### Flags and Modifiers
Flags are placed at the beginning of a pattern to change its behavior.
| Pattern | Description |
| :--- | :--- |
| `(?i)` | Case-insensitive matching. |
| `(?m)` | Multi-line mode (`^` and `$` match start/end of lines). |
**Example:** `"(?i)error"` matches "error", "ERROR", and "Error".
### Practical Examples for Logging
* **Match an IP Address:**
```
\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b
```
* **Match HTTP 4xx or 5xx Status Codes:**
```
"status[= ](4|5)\d{2}"
```
* **Match a slow database query (>100ms):**
```
"Query took [1-9]\d{2,}ms"
```
* **Match key-value pairs:**
```
"user=(admin|guest)"
```
* **Match Java exceptions:**
```
"Exception:|at .+\.java:\d+"
```

doc/formatters.md Normal file

@ -0,0 +1,215 @@
# Formatters
LogWisp formatters transform log entries before output to sinks.
## Formatter Types
### Raw Formatter
Outputs the log message as-is with optional newline.
```toml
[pipelines.format]
type = "raw"
[pipelines.format.raw]
add_new_line = true
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `add_new_line` | bool | true | Append newline to messages |
### JSON Formatter
Produces structured JSON output.
```toml
[pipelines.format]
type = "json"
[pipelines.format.json]
pretty = false
timestamp_field = "timestamp"
level_field = "level"
message_field = "message"
source_field = "source"
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `pretty` | bool | false | Pretty print JSON |
| `timestamp_field` | string | "timestamp" | Field name for timestamp |
| `level_field` | string | "level" | Field name for log level |
| `message_field` | string | "message" | Field name for message |
| `source_field` | string | "source" | Field name for source |
**Output Structure:**
```json
{
"timestamp": "2024-01-01T12:00:00Z",
"level": "ERROR",
"source": "app",
"message": "Connection failed"
}
```
### Text Formatter
Template-based text formatting.
```toml
[pipelines.format]
type = "txt"
[pipelines.format.txt]
template = "[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Source}} - {{.Message}}"
timestamp_format = "2006-01-02T15:04:05.000Z07:00"
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `template` | string | See below | Go template string |
| `timestamp_format` | string | RFC3339 | Go time format string |
**Default Template:**
```
[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Source}} - {{.Message}}{{ if .Fields }} {{.Fields}}{{ end }}
```
## Template Functions
Available functions in text templates:
| Function | Description | Example |
|----------|-------------|---------|
| `FmtTime` | Format timestamp | `{{.Timestamp \| FmtTime}}` |
| `ToUpper` | Convert to uppercase | `{{.Level \| ToUpper}}` |
| `ToLower` | Convert to lowercase | `{{.Source \| ToLower}}` |
| `TrimSpace` | Remove whitespace | `{{.Message \| TrimSpace}}` |
## Template Variables
Available variables in templates:
| Variable | Type | Description |
|----------|------|-------------|
| `.Timestamp` | time.Time | Entry timestamp |
| `.Level` | string | Log level |
| `.Source` | string | Source identifier |
| `.Message` | string | Log message |
| `.Fields` | string | Additional fields (JSON) |
## Time Format Strings
Common Go time format patterns:
| Pattern | Example Output |
|---------|---------------|
| `2006-01-02T15:04:05Z07:00` | 2024-01-02T15:04:05Z |
| `2006-01-02 15:04:05` | 2024-01-02 15:04:05 |
| `Jan 2 15:04:05` | Jan 2 15:04:05 |
| `15:04:05.000` | 15:04:05.123 |
| `2006/01/02` | 2024/01/02 |
## Format Selection
### Default Behavior
If no formatter specified:
- **HTTP/TCP sinks**: JSON format
- **Console/File sinks**: Raw format
- **Client sinks**: JSON format
### Per-Pipeline Configuration
Each pipeline can have its own formatter:
```toml
[[pipelines]]
name = "json-pipeline"
[pipelines.format]
type = "json"
[[pipelines]]
name = "text-pipeline"
[pipelines.format]
type = "txt"
```
## Message Processing
### JSON Message Handling
When using JSON formatter with JSON log messages:
1. Attempts to parse message as JSON
2. Merges fields with LogWisp metadata
3. LogWisp fields take precedence
4. Falls back to string if parsing fails (sketched below)
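A minimal sketch of this behavior, assuming a stdin source feeding a console sink with the JSON formatter enabled in the pipeline's TOML:
```bash
echo '{"user":"alice","message":"login failed"}' | logwisp \
  --pipelines.0.name=merge-demo \
  --pipelines.0.sources.0.type=stdin \
  --pipelines.0.sinks.0.type=stdout
# With [pipelines.format] type = "json", fields parsed from the input line
# are merged with LogWisp's timestamp/level/source metadata; LogWisp's
# fields win on conflict, and unparseable input is kept as a plain string.
```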
### Field Preservation
LogWisp metadata always includes:
- Timestamp (from source or current time)
- Level (detected or default)
- Source (origin identifier)
- Message (original content)
## Performance Characteristics
### Formatter Performance
Relative performance (fastest to slowest):
1. **Raw**: Direct passthrough
2. **Text**: Template execution
3. **JSON**: Serialization
4. **JSON (pretty)**: Formatted serialization
### Optimization Tips
- Use raw format for high throughput
- Cache template compilation (automatic)
- Minimize template complexity
- Avoid pretty JSON in production
## Common Configurations
### Structured Logging
```toml
[pipelines.format]
type = "json"
[pipelines.format.json]
pretty = false
```
### Human-Readable Logs
```toml
[pipelines.format]
type = "txt"
[pipelines.format.txt]
template = "{{.Timestamp | FmtTime}} [{{.Level}}] {{.Message}}"
timestamp_format = "15:04:05"
```
### Syslog Format
```toml
[pipelines.format]
type = "txt"
[pipelines.format.txt]
template = "{{.Timestamp | FmtTime}} {{.Source}} {{.Level}}: {{.Message}}"
timestamp_format = "Jan 2 15:04:05"
```
### Minimal Output
```toml
[pipelines.format]
type = "txt"
[pipelines.format.txt]
template = "{{.Message}}"
```


@ -1,77 +1,76 @@
# Installation Guide
LogWisp installation and service configuration for Linux and FreeBSD systems.
## Requirements
- **OS**: Linux, FreeBSD
- **Architecture**: amd64
- **Go**: 1.25+ (for building)
## Installation Methods
### Pre-built Binaries
Download the latest release binary for your platform and install to `/usr/local/bin`:
```bash
# Linux amd64
wget https://github.com/yourusername/logwisp/releases/latest/download/logwisp-linux-amd64
chmod +x logwisp-linux-amd64
sudo mv logwisp-linux-amd64 /usr/local/bin/logwisp

# FreeBSD amd64
fetch https://github.com/yourusername/logwisp/releases/latest/download/logwisp-freebsd-amd64
chmod +x logwisp-freebsd-amd64
sudo mv logwisp-freebsd-amd64 /usr/local/bin/logwisp

# Verify
logwisp --version
```
### Building from Source
Requires Go 1.25 or newer:
```bash
git clone https://github.com/yourusername/logwisp.git
cd logwisp
go build -o logwisp ./src/cmd/logwisp
sudo install -m 755 logwisp /usr/local/bin/
```
### Go Install Method
Install directly using Go:
```bash
go install github.com/yourusername/logwisp/src/cmd/logwisp@latest
```
Note: A binary built this way will not contain embedded version information.
## Service Configuration
### Linux (systemd)
Create systemd service file `/etc/systemd/system/logwisp.service`:
```ini
[Unit]
Description=LogWisp Log Transport Service
After=network.target
[Service]
Type=simple
User=logwisp
Group=logwisp
ExecStart=/usr/local/bin/logwisp -c /etc/logwisp/logwisp.toml
Restart=on-failure
RestartSec=10
StandardOutput=journal
StandardError=journal
WorkingDirectory=/var/lib/logwisp
[Install]
WantedBy=multi-user.target
```
Setup service user and directories:
```bash
# Create service user
sudo useradd -r -s /bin/false logwisp

# Create directories
sudo mkdir -p /etc/logwisp /var/lib/logwisp /var/log/logwisp
sudo chown logwisp:logwisp /var/lib/logwisp /var/log/logwisp

# Enable and start
sudo systemctl daemon-reload
sudo systemctl enable logwisp
sudo systemctl start logwisp
```

@ -79,141 +78,90 @@
### FreeBSD (rc.d)
Create rc script `/usr/local/etc/rc.d/logwisp`:
```sh
#!/bin/sh

# PROVIDE: logwisp
# REQUIRE: DAEMON NETWORKING
# KEYWORD: shutdown

. /etc/rc.subr

name="logwisp"
rcvar="${name}_enable"
pidfile="/var/run/${name}.pid"
command="/usr/local/bin/logwisp"
command_args="-c /usr/local/etc/logwisp/logwisp.toml"

load_rc_config $name
: ${logwisp_enable:="NO"}
: ${logwisp_config:="/usr/local/etc/logwisp/logwisp.toml"}

run_rc_command "$1"
```
Setup service:
```bash
sudo chmod +x /usr/local/etc/rc.d/logwisp

# Create service user
sudo pw useradd logwisp -d /nonexistent -s /usr/sbin/nologin

# Create directories
sudo mkdir -p /usr/local/etc/logwisp /var/log/logwisp
sudo chown logwisp:logwisp /var/log/logwisp

# Enable and start service
sudo sysrc logwisp_enable="YES"
sudo service logwisp start
```
## Directory Structure
Standard installation directories:
| Purpose | Linux | FreeBSD |
|---------|-------|---------|
| Binary | `/usr/local/bin/logwisp` | `/usr/local/bin/logwisp` |
| Configuration | `/etc/logwisp/` | `/usr/local/etc/logwisp/` |
| Working Directory | `/var/lib/logwisp/` | `/var/db/logwisp/` |
| Log Files | `/var/log/logwisp/` | `/var/log/logwisp/` |
| PID File | `/var/run/logwisp.pid` | `/var/run/logwisp.pid` |
## Post-Installation Verification
Verify the installation:
```bash
# Check version
logwisp version

# Test configuration
logwisp -c /etc/logwisp/logwisp.toml --disable-status-reporter

# Check service status (Linux)
sudo systemctl status logwisp

# Check service status (FreeBSD)
sudo service logwisp status
```
### Initial Configuration
Create a basic configuration file:
```toml
# /etc/logwisp/logwisp.toml (Linux)
# /usr/local/etc/logwisp/logwisp.toml (FreeBSD)
[[pipelines]]
name = "myapp"
[[pipelines.sources]]
type = "directory"
options = { path = "/path/to/application/logs", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
```
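After starting the service, confirm the pipeline is serving (port 8080 comes from the example above):
```bash
curl http://localhost:8080/status
curl -N http://localhost:8080/stream
```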
Restart service after configuration changes:
**Linux:**
```bash
sudo systemctl restart logwisp
```
**FreeBSD:**
```bash
sudo service logwisp restart
```
## Uninstallation
### Linux
```bash
sudo systemctl stop logwisp
sudo systemctl disable logwisp
sudo rm /usr/local/bin/logwisp
sudo rm /etc/systemd/system/logwisp.service
sudo rm -rf /etc/logwisp /var/lib/logwisp /var/log/logwisp
sudo userdel logwisp
```
### FreeBSD
```bash
sudo service logwisp stop
sudo sysrc -x logwisp_enable
sudo rm /usr/local/bin/logwisp
sudo rm /usr/local/etc/rc.d/logwisp
sudo rm -rf /usr/local/etc/logwisp /var/db/logwisp /var/log/logwisp
sudo pw userdel logwisp
```

doc/networking.md Normal file

@ -0,0 +1,289 @@
# Networking
Network configuration for LogWisp connections, including TLS, rate limiting, and access control.
## TLS Configuration
### TLS Support Matrix
| Component | TLS Support | Notes |
|-----------|-------------|-------|
| HTTP Source | ✓ | Full TLS 1.2/1.3 |
| HTTP Sink | ✓ | Full TLS 1.2/1.3 |
| HTTP Client | ✓ | Client certificates |
| TCP Source | ✗ | No encryption |
| TCP Sink | ✗ | No encryption |
| TCP Client | ✗ | No encryption |
### Server TLS Configuration
```toml
[pipelines.sources.http.tls]
enabled = true
cert_file = "/path/to/server.pem"
key_file = "/path/to/server.key"
ca_file = "/path/to/ca.pem"
min_version = "TLS1.2" # TLS1.2|TLS1.3
client_auth = false
client_ca_file = "/path/to/client-ca.pem"
verify_client_cert = true
```
### Client TLS Configuration
```toml
[pipelines.sinks.http_client.tls]
enabled = true
server_name = "logs.example.com"
skip_verify = false
cert_file = "/path/to/client.pem" # For mTLS
key_file = "/path/to/client.key" # For mTLS
```
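To verify a TLS-enabled endpoint from the command line (hostname and certificate paths here are illustrative):
```bash
# Inspect the handshake with your CA bundle, then follow the stream
openssl s_client -connect logs.example.com:8080 -CAfile ca.pem </dev/null
curl --cacert ca.pem -N https://logs.example.com:8080/stream
```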
### TLS Certificate Generation
Using the `tls` command:
```bash
# Generate CA certificate
logwisp tls -ca -o myca
# Generate server certificate
logwisp tls -server -ca-cert myca.pem -ca-key myca.key -host localhost,server.example.com -o server
# Generate client certificate
logwisp tls -client -ca-cert myca.pem -ca-key myca.key -o client
```
Command options:
| Flag | Description |
|------|-------------|
| `-ca` | Generate CA certificate |
| `-server` | Generate server certificate |
| `-client` | Generate client certificate |
| `-host` | Comma-separated hostnames/IPs |
| `-o` | Output file prefix |
| `-days` | Certificate validity (default: 365) |
## Network Rate Limiting
### Configuration Options
```toml
[pipelines.sources.http.net_limit]
enabled = true
max_connections_per_ip = 10
max_connections_total = 100
requests_per_second = 100.0
burst_size = 200
response_code = 429
response_message = "Rate limit exceeded"
ip_whitelist = ["192.168.1.0/24"]
ip_blacklist = ["10.0.0.0/8"]
```
### Rate Limiting Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `enabled` | bool | Enable rate limiting |
| `max_connections_per_ip` | int | Per-IP connection limit |
| `max_connections_total` | int | Global connection limit |
| `requests_per_second` | float | Request rate limit |
| `burst_size` | int | Token bucket burst capacity |
| `response_code` | int | HTTP response code when limited |
| `response_message` | string | Response message when limited |
### IP Access Control
**Whitelist**: Only specified IPs/networks allowed
```toml
ip_whitelist = [
"192.168.1.0/24", # Local network
"10.0.0.0/8", # Private network
"203.0.113.5" # Specific IP
]
```
**Blacklist**: Specified IPs/networks denied
```toml
ip_blacklist = [
"192.168.1.100", # Blocked host
"10.0.0.0/16" # Blocked subnet
]
```
Processing order:
1. Blacklist (immediate deny if matched)
2. Whitelist (must match if configured)
3. Rate limiting
4. Authentication
## Connection Management
### TCP Keep-Alive
```toml
[pipelines.sources.tcp]
keep_alive = true
keep_alive_period_ms = 30000 # 30 seconds
```
Benefits:
- Detect dead connections
- Prevent connection timeout
- Maintain NAT mappings
### Connection Timeouts
```toml
[pipelines.sources.http]
read_timeout_ms = 10000 # 10 seconds
write_timeout_ms = 10000 # 10 seconds
[pipelines.sinks.tcp_client]
dial_timeout = 10 # Connection timeout
write_timeout = 30 # Write timeout
read_timeout = 10 # Read timeout
```
### Connection Limits
Global limits:
```toml
max_connections = 100 # Total concurrent connections
```
Per-IP limits:
```toml
max_connections_per_ip = 10
```
## Heartbeat Configuration
Keep connections alive with periodic heartbeats:
### HTTP Sink Heartbeat
```toml
[pipelines.sinks.http.heartbeat]
enabled = true
interval_ms = 30000
include_timestamp = true
include_stats = false
format = "comment" # comment|event|json
```
Formats:
- **comment**: SSE comment (`: heartbeat`)
- **event**: SSE event with data
- **json**: JSON-formatted heartbeat
### TCP Sink Heartbeat
```toml
[pipelines.sinks.tcp.heartbeat]
enabled = true
interval_ms = 30000
include_timestamp = true
include_stats = false
format = "json" # json|txt
```
## Network Protocols
### HTTP/HTTPS
- HTTP/1.1 and HTTP/2 support
- Persistent connections
- Chunked transfer encoding
- Server-Sent Events (SSE)
### TCP
- Raw TCP sockets
- Newline-delimited protocol
- Binary-safe transmission
- No encryption available
## Port Configuration
### Default Ports
| Service | Default Port | Protocol |
|---------|--------------|----------|
| HTTP Source | 8081 | HTTP/HTTPS |
| HTTP Sink | 8080 | HTTP/HTTPS |
| TCP Source | 9091 | TCP |
| TCP Sink | 9090 | TCP |
### Port Conflict Prevention
LogWisp validates port usage at startup:
- Detects port conflicts across pipelines
- Prevents duplicate bindings
- Suggests alternative ports
## Network Security
### Best Practices
1. **Use TLS for HTTP** connections when possible
2. **Implement rate limiting** to prevent DoS
3. **Configure IP whitelists** for restricted access
4. **Enable authentication** for all network endpoints
5. **Use non-standard ports** to reduce scanning exposure
6. **Monitor connection metrics** for anomalies
7. **Set appropriate timeouts** to prevent resource exhaustion
### Security Warnings
- TCP connections are **always unencrypted**
- HTTP Basic/Token auth **requires TLS**
- Avoid `skip_verify` in production
- Never expose unauthenticated endpoints publicly
## Load Balancing
### Client-Side Load Balancing
Configure multiple endpoints (future feature):
```toml
[[pipelines.sinks.http_client]]
urls = [
"https://log1.example.com/ingest",
"https://log2.example.com/ingest"
]
strategy = "round-robin" # round-robin|random|least-conn
```
### Server-Side Considerations
- Use reverse proxy for load distribution
- Configure session affinity if needed
- Monitor individual instance health
## Troubleshooting
### Common Issues
**Connection Refused**
- Check firewall rules
- Verify service is running
- Confirm correct port/host
**TLS Handshake Failure**
- Verify certificate validity
- Check certificate chain
- Confirm TLS versions match
**Rate Limit Exceeded**
- Adjust rate limit parameters
- Add IP to whitelist
- Implement client-side throttling
**Connection Timeout**
- Increase timeout values
- Check network latency
- Verify keep-alive settings

doc/operations.md Normal file

@ -0,0 +1,358 @@
# Operations Guide
Running, monitoring, and maintaining LogWisp in production.
## Starting LogWisp
### Manual Start
```bash
# Foreground with default config
logwisp
# Background mode
logwisp --background
# With specific configuration
logwisp --config /etc/logwisp/production.toml
```
### Service Management
**Linux (systemd):**
```bash
sudo systemctl start logwisp
sudo systemctl stop logwisp
sudo systemctl restart logwisp
sudo systemctl status logwisp
```
**FreeBSD (rc.d):**
```bash
sudo service logwisp start
sudo service logwisp stop
sudo service logwisp restart
sudo service logwisp status
```
## Configuration Management
### Hot Reload
Enable automatic configuration reload:
```toml
config_auto_reload = true
```
Or via command line:
```bash
logwisp --config-auto-reload
```
Trigger manual reload:
```bash
kill -HUP $(pidof logwisp)
# or
kill -USR1 $(pidof logwisp)
```
### Configuration Validation
Test configuration without starting:
```bash
logwisp --config test.toml --quiet --disable-status-reporter
```
Check for errors:
- Port conflicts
- Invalid patterns
- Missing required fields
- File permissions
## Monitoring
### Status Reporter
Built-in periodic status logging (30-second intervals):
```
[INFO] Status report active_pipelines=2 time=15:04:05
[INFO] Pipeline status pipeline=app entries_processed=10523
[INFO] Pipeline status pipeline=system entries_processed=5231
```
Disable if not needed:
```toml
disable_status_reporter = true
```
### HTTP Status Endpoint
When using HTTP sink:
```bash
curl http://localhost:8080/status | jq .
```
Response structure:
```json
{
"uptime": "2h15m30s",
"pipelines": {
"default": {
"sources": 1,
"sinks": 2,
"processed": 15234,
"filtered": 523,
"dropped": 12
}
}
}
```
### Metrics Collection
Track via logs:
- Total entries processed
- Entries filtered
- Entries dropped
- Active connections
- Buffer utilization
## Log Management
### LogWisp's Operational Logs
Configuration for LogWisp's own logs:
```toml
[logging]
output = "file"
level = "info"
[logging.file]
directory = "/var/log/logwisp"
name = "logwisp"
max_size_mb = 100
retention_hours = 168
```
### Log Rotation
Automatic rotation based on:
- File size threshold
- Total size limit
- Retention period
Manual rotation:
```bash
# Move current log
mv /var/log/logwisp/logwisp.log /var/log/logwisp/logwisp.log.1
# Send signal to reopen
kill -USR1 $(pidof logwisp)
```
### Log Levels
Operational log levels:
- **debug**: Detailed debugging information
- **info**: General operational messages
- **warn**: Warning conditions
- **error**: Error conditions
Production recommendation: `info` or `warn`
## Performance Tuning
### Buffer Sizing
Adjust buffers based on load:
```toml
# High-volume source
[[pipelines.sources]]
type = "http"
[pipelines.sources.http]
buffer_size = 5000 # Increase for burst traffic
# Slow consumer sink
[[pipelines.sinks]]
type = "http_client"
[pipelines.sinks.http_client]
buffer_size = 10000 # Larger buffer for slow endpoints
batch_size = 500 # Larger batches
```
### Rate Limiting
Protect against overload:
```toml
[pipelines.rate_limit]
rate = 1000.0 # Entries per second
burst = 2000.0 # Burst capacity
policy = "drop" # Drop excess entries
```
### Connection Limits
Prevent resource exhaustion:
```toml
[pipelines.sources.http.net_limit]
max_connections_total = 1000
max_connections_per_ip = 50
```
## Troubleshooting
### Common Issues
**High Memory Usage**
- Check buffer sizes
- Monitor goroutine count
- Review retention settings
**Dropped Entries**
- Increase buffer sizes
- Add rate limiting
- Check sink performance
**Connection Errors**
- Verify network connectivity
- Check firewall rules
- Review TLS certificates
### Debug Mode
Enable detailed logging:
```bash
logwisp --logging.level=debug --logging.output=stderr
```
### Health Checks
Implement external monitoring:
```bash
#!/bin/bash
# Health check script
if ! curl -sf http://localhost:8080/status > /dev/null; then
echo "LogWisp health check failed"
exit 1
fi
```
## Backup and Recovery
### Configuration Backup
```bash
# Backup configuration
cp /etc/logwisp/logwisp.toml /backup/logwisp-$(date +%Y%m%d).toml
# Version control
git add /etc/logwisp/
git commit -m "LogWisp config update"
```
### State Recovery
LogWisp maintains minimal state:
- File read positions (automatic)
- Connection state (automatic)
Recovery after crash:
1. Service automatically restarts (systemd/rc.d)
2. File sources resume from last position
3. Network sources accept new connections
4. Clients reconnect automatically
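A post-restart sanity check can confirm these steps completed. A minimal sketch, assuming systemd and an HTTP sink on the default port:
```bash
systemctl is-active logwisp                           # service restarted
curl -s http://localhost:8080/status | jq '.uptime'   # endpoint back, uptime reset
```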
## Security Operations
### Certificate Management
Monitor certificate expiration:
```bash
openssl x509 -in /path/to/cert.pem -noout -enddate
```
Rotate certificates:
1. Generate new certificates
2. Update configuration
3. Reload service (SIGHUP)
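For unattended monitoring, `openssl x509 -checkend` converts the expiry date into an exit code. A sketch that warns 30 days (2592000 seconds) ahead, using whatever certificate path your sink configuration references:
```bash
if ! openssl x509 -in /path/to/cert.pem -noout -checkend 2592000; then
    echo "Certificate expires within 30 days - rotate now"
fi
```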
### Credential Rotation
Update authentication:
```bash
# Generate new credentials
logwisp auth -u admin -b
# Update configuration
vim /etc/logwisp/logwisp.toml
# Reload service
kill -HUP $(pidof logwisp)
```
### Access Auditing
Monitor access patterns:
- Review connection logs
- Track authentication failures
- Monitor rate limit hits
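With file logging enabled, failures can be tallied straight from the operational log. A rough sketch, assuming the log location configured earlier and that failures are logged with a message containing "auth" (the exact wording may differ between versions):
```bash
grep -ci "auth" /var/log/logwisp/logwisp.log

# Break the failures down by client IP, if the log line includes one
grep -i "auth" /var/log/logwisp/logwisp.log \
  | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | sort | uniq -c | sort -rn
```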
## Maintenance
### Planned Maintenance
1. Notify users of maintenance window
2. Stop accepting new connections
3. Drain existing connections
4. Perform maintenance
5. Restart service
### Upgrade Process
1. Download new version
2. Test with current configuration
3. Stop old version
4. Install new version
5. Start service
6. Verify operation
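Scripted, steps 3 through 6 might look like the following; the binary and install paths are assumptions to adapt to your layout:
```bash
sudo systemctl stop logwisp
sudo install -m 755 /tmp/logwisp-new /usr/local/bin/logwisp
sudo systemctl start logwisp
curl -sf http://localhost:8080/status > /dev/null && echo "upgrade OK"
```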
### Cleanup Tasks
Regular maintenance:
- Remove old log files
- Clean temporary files
- Verify disk space
- Update documentation
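A cron-friendly sketch for the first three tasks, assuming the log directory used earlier (LogWisp's own retention normally handles rotation, so this only mops up strays):
```bash
find /var/log/logwisp -name "*.log.*" -mtime +7 -delete   # old rotated logs
find /tmp -name "logwisp-*" -mtime +1 -delete             # stale temp files (naming is an assumption)
df -h /var/log/logwisp                                    # verify free disk space
```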
## Disaster Recovery
### Backup Strategy
- Configuration files: Daily
- TLS certificates: After generation
- Authentication credentials: Secure storage
### Recovery Procedures
Service failure:
1. Check service status
2. Review error logs
3. Verify configuration
4. Restart service
Data loss:
1. Restore configuration from backup
2. Regenerate certificates if needed
3. Recreate authentication credentials
4. Restart service
### Business Continuity
- Run multiple instances for redundancy
- Use load balancer for distribution
- Implement monitoring alerts
- Document recovery procedures

View File

@ -1,215 +0,0 @@
# Quick Start Guide
Get LogWisp up and running in minutes:
## Installation
### From Source
```bash
git clone https://github.com/lixenwraith/logwisp.git
cd logwisp
make install
```
### Using Go Install
```bash
go install github.com/lixenwraith/logwisp/src/cmd/logwisp@latest
```
## Basic Usage
### 1. Monitor Current Directory
Start LogWisp with defaults (monitors `*.log` files in current directory):
```bash
logwisp
```
### 2. Stream Logs
Connect to the log stream:
```bash
# SSE stream
curl -N http://localhost:8080/stream
# Check status
curl http://localhost:8080/status | jq .
```
### 3. Generate Test Logs
```bash
echo "[ERROR] Something went wrong!" >> test.log
echo "[INFO] Application started" >> test.log
echo "[WARN] Low memory warning" >> test.log
```
## Common Scenarios
### Monitor Specific Directory
Create `~/.config/logwisp/logwisp.toml`:
```toml
[[pipelines]]
name = "myapp"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/myapp", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
```
### Filter Only Errors
```toml
[[pipelines]]
name = "errors"
[[pipelines.sources]]
type = "directory"
options = { path = "./", pattern = "*.log" }
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "WARN", "CRITICAL"]
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
```
### Multiple Outputs
Send logs to both HTTP stream and file:
```toml
[[pipelines]]
name = "multi-output"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
# HTTP streaming
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
# File archival
[[pipelines.sinks]]
type = "file"
options = { directory = "/var/log/archive", name = "app" }
```
### TCP Streaming
For high-performance streaming:
```toml
[[pipelines]]
name = "highperf"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
[[pipelines.sinks]]
type = "tcp"
options = { port = 9090, buffer_size = 5000 }
```
Connect with netcat:
```bash
nc localhost 9090
```
### Router Mode
Run multiple pipelines on shared ports:
```bash
logwisp --router
# Access pipelines at:
# http://localhost:8080/myapp/stream
# http://localhost:8080/errors/stream
# http://localhost:8080/status (global)
```
### Remote Log Collection
Receive logs via HTTP/TCP and forward to remote servers:
```toml
[[pipelines]]
name = "collector"
# Receive logs via HTTP POST
[[pipelines.sources]]
type = "http"
options = { port = 8081, ingest_path = "/ingest" }
# Forward to remote server
[[pipelines.sinks]]
type = "http_client"
options.url = "https://log-server.com/ingest"
options.batch_size = 100
options.headers = { "Authorization" = "Bearer <API_KEY_HERE>" }
```
Send logs to collector:
```bash
curl -X POST http://localhost:8081/ingest \
-H "Content-Type: application/json" \
-d '{"message": "Test log", "level": "INFO"}'
```
## Quick Tips
### Enable Debug Logging
```bash
logwisp --logging.level debug --logging.output stderr
```
### Quiet Mode
```bash
logwisp --quiet
```
### Rate Limiting
```toml
[[pipelines.sinks]]
type = "http"
options.port = 8080
options.rate_limit = { enabled = true, requests_per_second = 10.0, burst_size = 20 }
```
### Console Output
```toml
[[pipelines.sinks]]
type = "stdout" # or "stderr"
options = {}
```
### Split Console Output
```toml
# INFO/DEBUG to stdout, ERROR/WARN to stderr
[[pipelines.sinks]]
type = "stdout"
options = { target = "split" }
```

View File

@ -1,125 +0,0 @@
# Rate Limiting Guide
LogWisp provides configurable rate limiting to protect against abuse and ensure fair access.
## How It Works
Token bucket algorithm:
1. Each client gets a bucket with fixed capacity
2. Tokens refill at configured rate
3. Each request consumes one token
4. No tokens = request rejected
## Configuration
```toml
[[pipelines.sinks]]
type = "http" # or "tcp"
options.port = 8080

[pipelines.sinks.options.rate_limit]
enabled = true
requests_per_second = 10.0
burst_size = 20
limit_by = "ip" # or "global"
max_connections_per_ip = 5
max_total_connections = 100
response_code = 429
response_message = "Rate limit exceeded"
```
## Strategies
### Per-IP Limiting (Default)
Each IP gets its own bucket:
```toml
limit_by = "ip"
requests_per_second = 10.0
# Client A: 10 req/sec
# Client B: 10 req/sec
```
### Global Limiting
All clients share one bucket:
```toml
limit_by = "global"
requests_per_second = 50.0
# All clients combined: 50 req/sec
```
## Connection Limits
```toml
max_connections_per_ip = 5 # Per IP
max_total_connections = 100 # Total
```
## Response Behavior
### HTTP
Returns JSON with configured status:
```json
{
"error": "Rate limit exceeded",
"retry_after": "60"
}
```
### TCP
Connections silently dropped.
## Examples
### Light Protection
```toml
rate_limit = { enabled = true, requests_per_second = 50.0, burst_size = 100 }
```
### Moderate Protection
```toml
rate_limit = { enabled = true, requests_per_second = 10.0, burst_size = 30, max_connections_per_ip = 5 }
```
### Strict Protection
```toml
rate_limit = { enabled = true, requests_per_second = 2.0, burst_size = 5, max_connections_per_ip = 2, response_code = 503 }
```
## Monitoring
Check statistics:
```bash
curl http://localhost:8080/status | jq '.sinks[0].details.rate_limit'
```
## Testing
```bash
# Test rate limits
for i in {1..20}; do
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/status
done
```
## Tuning
- **requests_per_second**: Set to the sustained load you expect
- **burst_size**: 2-3× requests_per_second to absorb short spikes
- **Connection limits**: Size according to available memory per connection

View File

@ -1,158 +0,0 @@
# Router Mode Guide
Router mode enables multiple pipelines to share HTTP ports through path-based routing.
## Overview
**Standard mode**: Each pipeline needs its own port
- Pipeline 1: `http://localhost:8080/stream`
- Pipeline 2: `http://localhost:8081/stream`
**Router mode**: Pipelines share ports via paths
- Pipeline 1: `http://localhost:8080/app/stream`
- Pipeline 2: `http://localhost:8080/database/stream`
- Global status: `http://localhost:8080/status`
## Enabling Router Mode
```bash
logwisp --router --config /etc/logwisp/multi-pipeline.toml
```
## Configuration
```toml
# All pipelines can use the same port
[[pipelines]]
name = "app"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 } # Same port OK
[[pipelines]]
name = "database"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/postgresql", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 } # Shared port
```
## Path Structure
Paths are prefixed with pipeline name:
| Pipeline | Config Path | Router Path |
|----------|-------------|-------------|
| `app` | `/stream` | `/app/stream` |
| `app` | `/status` | `/app/status` |
| `database` | `/stream` | `/database/stream` |
### Custom Paths
```toml
[[pipelines.sinks]]
type = "http"
options.stream_path = "/logs" # Becomes /app/logs
options.status_path = "/health" # Becomes /app/health
```
## Endpoints
### Pipeline Endpoints
```bash
# SSE stream
curl -N http://localhost:8080/app/stream
# Pipeline status
curl http://localhost:8080/database/status
```
### Global Status
```bash
curl http://localhost:8080/status
```
Returns:
```json
{
"service": "LogWisp Router",
"pipelines": {
"app": { /* stats */ },
"database": { /* stats */ }
},
"total_pipelines": 2
}
```
## Use Cases
### Microservices
```toml
[[pipelines]]
name = "frontend"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/frontend", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
[[pipelines]]
name = "backend"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/backend", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
# Access:
# http://localhost:8080/frontend/stream
# http://localhost:8080/backend/stream
```
### Environment-Based
```toml
[[pipelines]]
name = "prod"
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "WARN"]
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
[[pipelines]]
name = "dev"
# No filters - all logs
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
```
## Limitations
1. **HTTP Only**: Router mode only works for HTTP/SSE
2. **No TCP Routing**: TCP remains on separate ports
3. **Path Conflicts**: Pipeline names must be unique
## Load Balancer Integration
```nginx
upstream logwisp {
server logwisp1:8080;
server logwisp2:8080;
}
location /logs/ {
proxy_pass http://logwisp/;
proxy_buffering off;
}
```
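With the proxy above in place, clients reach each pipeline through the shared prefix; `proxy_buffering off` is what keeps SSE delivery real-time instead of nginx batching it. For example, against a hypothetical load balancer host:
```bash
curl -N http://lb.example.com/logs/app/stream
```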

293
doc/sinks.md Normal file
View File

@ -0,0 +1,293 @@
# Output Sinks
LogWisp sinks deliver processed log entries to various destinations.
## Sink Types
### Console Sink
Output to stdout/stderr.
```toml
[[pipelines.sinks]]
type = "console"
[pipelines.sinks.console]
target = "stdout" # stdout|stderr|split
colorize = false
buffer_size = 100
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `target` | string | "stdout" | Output target (stdout/stderr/split) |
| `colorize` | bool | false | Enable colored output |
| `buffer_size` | int | 100 | Internal buffer size |
**Target Modes:**
- **stdout**: All output to standard output
- **stderr**: All output to standard error
- **split**: INFO/DEBUG to stdout, WARN/ERROR to stderr
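Because `split` writes to the process's standard streams, the two severity groups can be separated with ordinary shell redirection. A sketch, assuming a config file (name hypothetical) using the console sink above; note LogWisp's own operational logs share these streams unless routed to a file:
```bash
logwisp --config console.toml > info.log 2> errors.log
```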
### File Sink
Write logs to rotating files.
```toml
[[pipelines.sinks]]
type = "file"
[pipelines.sinks.file]
directory = "./logs"
name = "output"
max_size_mb = 100
max_total_size_mb = 1000
min_disk_free_mb = 500
retention_hours = 168.0
buffer_size = 1000
flush_interval_ms = 1000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `directory` | string | Required | Output directory |
| `name` | string | Required | Base filename |
| `max_size_mb` | int | 100 | Rotation threshold |
| `max_total_size_mb` | int | 1000 | Total size limit |
| `min_disk_free_mb` | int | 500 | Minimum free disk space |
| `retention_hours` | float | 168 | Delete files older than this many hours |
| `buffer_size` | int | 1000 | Internal buffer size |
| `flush_interval_ms` | int | 1000 | Force flush interval |
**Features:**
- Automatic rotation on size
- Retention management
- Disk space monitoring
- Periodic flushing
### HTTP Sink
SSE (Server-Sent Events) streaming server.
```toml
[[pipelines.sinks]]
type = "http"
[pipelines.sinks.http]
host = "0.0.0.0"
port = 8080
stream_path = "/stream"
status_path = "/status"
buffer_size = 1000
max_connections = 100
read_timeout_ms = 10000
write_timeout_ms = 10000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | "0.0.0.0" | Bind address |
| `port` | int | Required | Listen port |
| `stream_path` | string | "/stream" | SSE stream endpoint |
| `status_path` | string | "/status" | Status endpoint |
| `buffer_size` | int | 1000 | Internal buffer size |
| `max_connections` | int | 100 | Maximum concurrent clients |
| `read_timeout_ms` | int | 10000 | Read timeout |
| `write_timeout_ms` | int | 10000 | Write timeout |
**Heartbeat Configuration:**
```toml
[pipelines.sinks.http.heartbeat]
enabled = true
interval_ms = 30000
include_timestamp = true
include_stats = false
format = "comment" # comment|event|json
```
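Heartbeats are easiest to verify with a raw SSE client; in `comment` format they arrive as `:`-prefixed lines that browsers silently ignore. Assuming the sink configured above:
```bash
curl -N http://localhost:8080/stream   # expect a ':' comment line every interval_ms
```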
### TCP Sink
TCP streaming server for debugging.
```toml
[[pipelines.sinks]]
type = "tcp"
[pipelines.sinks.tcp]
host = "0.0.0.0"
port = 9090
buffer_size = 1000
max_connections = 100
keep_alive = true
keep_alive_period_ms = 30000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | "0.0.0.0" | Bind address |
| `port` | int | Required | Listen port |
| `buffer_size` | int | 1000 | Internal buffer size |
| `max_connections` | int | 100 | Maximum concurrent clients |
| `keep_alive` | bool | true | Enable TCP keep-alive |
| `keep_alive_period_ms` | int | 30000 | Keep-alive interval |
**Note:** TCP Sink has no authentication support (debugging only).
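To attach a debug client, any raw TCP tool will do. Assuming the sink above:
```bash
nc localhost 9090        # or: telnet localhost 9090
```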
### HTTP Client Sink
Forward logs to remote HTTP endpoints.
```toml
[[pipelines.sinks]]
type = "http_client"
[pipelines.sinks.http_client]
url = "https://logs.example.com/ingest"
buffer_size = 1000
batch_size = 100
batch_delay_ms = 1000
timeout_seconds = 30
max_retries = 3
retry_delay_ms = 1000
retry_backoff = 2.0
insecure_skip_verify = false
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `url` | string | Required | Target URL |
| `buffer_size` | int | 1000 | Internal buffer size |
| `batch_size` | int | 100 | Logs per request |
| `batch_delay_ms` | int | 1000 | Max wait before sending |
| `timeout_seconds` | int | 30 | Request timeout |
| `max_retries` | int | 3 | Retry attempts |
| `retry_delay_ms` | int | 1000 | Initial retry delay |
| `retry_backoff` | float | 2.0 | Exponential backoff multiplier |
| `insecure_skip_verify` | bool | false | Skip TLS verification |
### TCP Client Sink
Forward logs to remote TCP servers.
```toml
[[pipelines.sinks]]
type = "tcp_client"
[pipelines.sinks.tcp_client]
host = "logs.example.com"
port = 9090
buffer_size = 1000
dial_timeout = 10
write_timeout = 30
read_timeout = 10
keep_alive = 30
reconnect_delay_ms = 1000
max_reconnect_delay_ms = 30000
reconnect_backoff = 1.5
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | Required | Target host |
| `port` | int | Required | Target port |
| `buffer_size` | int | 1000 | Internal buffer size |
| `dial_timeout` | int | 10 | Connection timeout (seconds) |
| `write_timeout` | int | 30 | Write timeout (seconds) |
| `read_timeout` | int | 10 | Read timeout (seconds) |
| `keep_alive` | int | 30 | TCP keep-alive (seconds) |
| `reconnect_delay_ms` | int | 1000 | Initial reconnect delay |
| `max_reconnect_delay_ms` | int | 30000 | Maximum reconnect delay |
| `reconnect_backoff` | float | 1.5 | Backoff multiplier |
## Network Sink Features
### Network Rate Limiting
Available for HTTP and TCP sinks:
```toml
[pipelines.sinks.http.net_limit]
enabled = true
max_connections_per_ip = 10
max_connections_total = 100
ip_whitelist = ["192.168.1.0/24"]
ip_blacklist = ["10.0.0.0/8"]
```
### TLS Configuration (HTTP Only)
```toml
[pipelines.sinks.http.tls]
enabled = true
cert_file = "/path/to/cert.pem"
key_file = "/path/to/key.pem"
ca_file = "/path/to/ca.pem"
min_version = "TLS1.2"
client_auth = false
```
HTTP Client TLS:
```toml
[pipelines.sinks.http_client.tls]
enabled = true
server_name = "logs.example.com"
skip_verify = false
cert_file = "/path/to/client.pem" # For mTLS
key_file = "/path/to/client.key" # For mTLS
```
### Authentication
HTTP/HTTP Client authentication:
```toml
[pipelines.sinks.http_client.auth]
type = "basic" # none|basic|token|mtls
username = "user"
password = "pass"
token = "bearer-token"
```
TCP Client authentication:
```toml
[pipelines.sinks.tcp_client.auth]
type = "scram" # none|scram
username = "user"
password = "pass"
```
## Sink Chaining
Supported patterns for chaining LogWisp instances:
### Log Aggregation
- **HTTP Client Sink → HTTP Source**: HTTPS with authentication
- **TCP Client Sink → TCP Source**: Raw TCP with SCRAM
### Live Monitoring
- **HTTP Sink**: Browser-based SSE streaming
- **TCP Sink**: Debug interface (telnet/netcat)
## Sink Statistics
All sinks track:
- Total entries processed
- Active connections
- Failed sends
- Retry attempts
- Last processed timestamp
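When a pipeline includes an HTTP sink, these counters are visible on its status endpoint. A sketch using the default path:
```bash
curl -s http://localhost:8080/status \
  | jq '.sinks[] | {type, total_processed, active_connections}'
```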

214
doc/sources.md Normal file
View File

@ -0,0 +1,214 @@
# Input Sources
LogWisp sources monitor various inputs and generate log entries for pipeline processing.
## Source Types
### Directory Source
Monitors a directory for log files matching a pattern.
```toml
[[pipelines.sources]]
type = "directory"
[pipelines.sources.directory]
path = "/var/log/myapp"
pattern = "*.log" # Glob pattern
check_interval_ms = 100 # Poll interval
recursive = false # Scan subdirectories
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `path` | string | Required | Directory to monitor |
| `pattern` | string | "*" | File pattern (glob) |
| `check_interval_ms` | int | 100 | File check interval in milliseconds |
| `recursive` | bool | false | Include subdirectories |
**Features:**
- Automatic file rotation detection
- Position tracking (resume after restart)
- Concurrent file monitoring
- Pattern-based file selection
### Stdin Source
Reads log entries from standard input.
```toml
[[pipelines.sources]]
type = "stdin"
[pipelines.sources.stdin]
buffer_size = 1000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `buffer_size` | int | 1000 | Internal buffer size |
**Features:**
- Line-based processing
- Automatic level detection
- Non-blocking reads
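The usual pattern is to pipe another process's output straight into LogWisp. A sketch, assuming a config file (name hypothetical) whose pipeline uses the stdin source above:
```bash
journalctl -f | logwisp --config stdin-pipeline.toml
```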
### HTTP Source
REST endpoint for log ingestion.
```toml
[[pipelines.sources]]
type = "http"
[pipelines.sources.http]
host = "0.0.0.0"
port = 8081
ingest_path = "/ingest"
buffer_size = 1000
max_body_size = 1048576 # 1MB
read_timeout_ms = 10000
write_timeout_ms = 10000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | "0.0.0.0" | Bind address |
| `port` | int | Required | Listen port |
| `ingest_path` | string | "/ingest" | Ingestion endpoint path |
| `buffer_size` | int | 1000 | Internal buffer size |
| `max_body_size` | int | 1048576 | Maximum request body size |
| `read_timeout_ms` | int | 10000 | Read timeout |
| `write_timeout_ms` | int | 10000 | Write timeout |
**Input Formats:**
- Single JSON object
- JSON array
- Newline-delimited JSON (NDJSON)
- Plain text (one entry per line)
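All four formats are accepted on the same endpoint. Examples against the source configured above (lenient content-type handling for the text forms is an assumption):
```bash
# Single JSON object
curl -X POST http://localhost:8081/ingest \
  -H "Content-Type: application/json" \
  -d '{"message": "single entry", "level": "INFO"}'

# NDJSON, one object per line
printf '{"message":"a"}\n{"message":"b"}\n' \
  | curl -X POST http://localhost:8081/ingest --data-binary @-

# Plain text, one entry per line
curl -X POST http://localhost:8081/ingest -d 'plain text entry'
```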
### TCP Source
Raw TCP socket listener for log ingestion.
```toml
[[pipelines.sources]]
type = "tcp"
[pipelines.sources.tcp]
host = "0.0.0.0"
port = 9091
buffer_size = 1000
read_timeout_ms = 10000
keep_alive = true
keep_alive_period_ms = 30000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | "0.0.0.0" | Bind address |
| `port` | int | Required | Listen port |
| `buffer_size` | int | 1000 | Internal buffer size |
| `read_timeout_ms` | int | 10000 | Read timeout |
| `keep_alive` | bool | true | Enable TCP keep-alive |
| `keep_alive_period_ms` | int | 30000 | Keep-alive interval |
**Protocol:**
- Newline-delimited JSON
- One log entry per line
- UTF-8 encoding
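Entries can be pushed with any tool that writes newline-terminated lines. Assuming the source above:
```bash
printf '{"message":"hello from tcp","level":"INFO"}\n' | nc localhost 9091
```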
## Network Source Features
### Network Rate Limiting
Available for HTTP and TCP sources:
```toml
[pipelines.sources.http.net_limit]
enabled = true
max_connections_per_ip = 10
max_connections_total = 100
requests_per_second = 100.0
burst_size = 200
response_code = 429
response_message = "Rate limit exceeded"
ip_whitelist = ["192.168.1.0/24"]
ip_blacklist = ["10.0.0.0/8"]
```
### TLS Configuration (HTTP Only)
```toml
[pipelines.sources.http.tls]
enabled = true
cert_file = "/path/to/cert.pem"
key_file = "/path/to/key.pem"
ca_file = "/path/to/ca.pem"
min_version = "TLS1.2"
client_auth = true
client_ca_file = "/path/to/client-ca.pem"
verify_client_cert = true
```
### Authentication
HTTP Source authentication options:
```toml
[pipelines.sources.http.auth]
type = "basic" # none|basic|token|mtls
realm = "LogWisp"
# Basic auth
[[pipelines.sources.http.auth.basic.users]]
username = "admin"
password_hash = "$argon2..."
# Token auth
[pipelines.sources.http.auth.token]
tokens = ["token1", "token2"]
```
TCP Source authentication:
```toml
[pipelines.sources.tcp.auth]
type = "scram" # none|scram
# SCRAM users
[[pipelines.sources.tcp.auth.scram.users]]
username = "user1"
stored_key = "base64..."
server_key = "base64..."
salt = "base64..."
argon_time = 3
argon_memory = 65536
argon_threads = 4
```
## Source Statistics
All sources track:
- Total entries received
- Dropped entries (buffer full)
- Invalid entries
- Last entry timestamp
- Active connections (network sources)
- Source-specific metrics
## Buffer Management
Each source maintains internal buffers:
- Default size: 1000 entries
- Drop policy when full
- Configurable per source
- Non-blocking writes

View File

@ -1,148 +0,0 @@
# Status Monitoring
LogWisp provides comprehensive monitoring through status endpoints and operational logs.
## Status Endpoints
### Pipeline Status
```bash
# Standalone mode
curl http://localhost:8080/status
# Router mode
curl http://localhost:8080/pipelinename/status
```
Example response:
```json
{
"service": "LogWisp",
"version": "1.0.0",
"server": {
"type": "http",
"port": 8080,
"active_clients": 5,
"buffer_size": 1000,
"uptime_seconds": 3600,
"mode": {"standalone": true, "router": false}
},
"sources": [{
"type": "directory",
"total_entries": 152341,
"dropped_entries": 12,
"active_watchers": 3
}],
"filters": {
"filter_count": 2,
"total_processed": 152341,
"total_passed": 48234
},
"sinks": [{
"type": "http",
"total_processed": 48234,
"active_connections": 5,
"details": {
"port": 8080,
"buffer_size": 1000,
"rate_limit": {
"enabled": true,
"total_requests": 98234,
"blocked_requests": 234
}
}
}],
"endpoints": {
"transport": "/stream",
"status": "/status"
},
"features": {
"heartbeat": {
"enabled": true,
"interval": 30,
"format": "comment"
},
"ssl": {
"enabled": false
},
"rate_limit": {
"enabled": true,
"requests_per_second": 10.0,
"burst_size": 20
}
}
}
```
## Key Metrics
### Source Metrics
| Metric | Description | Healthy Range |
|--------|-------------|---------------|
| `active_watchers` | Files being watched | 1-1000 |
| `total_entries` | Entries processed | Increasing |
| `dropped_entries` | Buffer overflows | < 1% of total |
| `active_connections` | Network connections (HTTP/TCP sources) | Within limits |
### Sink Metrics
| Metric | Description | Warning Signs |
|--------|-------------|---------------|
| `active_connections` | Current clients | Near limit |
| `total_processed` | Entries sent | Should match filter output |
| `total_batches` | Batches sent (client sinks) | Not increasing under load |
| `failed_batches` | Failed sends (client sinks) | > 0 indicates issues |
### Filter Metrics
| Metric | Description | Notes |
|--------|-------------|-------|
| `total_processed` | Entries checked | All entries |
| `total_passed` | Passed filters | Check if too low/high |
| `total_matched` | Pattern matches | Per filter stats |
### Rate Limit Metrics
| Metric | Description | Action |
|--------|-------------|--------|
| `blocked_requests` | Rejected requests | Increase limits if high |
| `active_ips` | Unique IPs tracked | Monitor for attacks |
| `total_connections` | Current connections | Check against limits |
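A `blocked_requests` counter that climbs steadily is usually the first sign of abuse. It can be polled from the status endpoint shown earlier (index `0` assumes a single HTTP sink):
```bash
watch -n 5 'curl -s http://localhost:8080/status | jq ".sinks[0].details.rate_limit.blocked_requests"'
```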
## Operational Logging
### Log Levels
```toml
[logging]
level = "info" # debug, info, warn, error
```
## Health Checks
### Basic Check
```bash
#!/usr/bin/env bash
if curl -s -f http://localhost:8080/status > /dev/null; then
echo "Healthy"
else
echo "Unhealthy"
exit 1
fi
```
### Advanced Check
```bash
#!/usr/bin/env bash
STATUS=$(curl -s http://localhost:8080/status)
DROPPED=$(echo "$STATUS" | jq '.sources[0].dropped_entries // 0')
TOTAL=$(echo "$STATUS" | jq '.sources[0].total_entries // 0')
# Guard against division by zero on a freshly started instance
if [ "$TOTAL" -gt 0 ] && [ $((DROPPED * 100 / TOTAL)) -gt 5 ]; then
echo "High drop rate"
exit 1
fi
# Check client sink failures
FAILED=$(echo "$STATUS" | jq '.sinks[] | select(.type=="http_client") | .details.failed_batches // 0' | head -1)
if [ "$FAILED" -gt 10 ]; then
echo "High failure rate"
exit 1
fi
```

18
go.mod
View File

@ -3,28 +3,26 @@ module logwisp
go 1.25.1
require (
github.com/golang-jwt/jwt/v5 v5.3.0
github.com/lixenwraith/config v0.0.0-20250908085506-537a4d49d2c3
github.com/lixenwraith/log v0.0.0-20250908085352-2df52dfb9208
github.com/panjf2000/gnet/v2 v2.9.3
github.com/valyala/fasthttp v1.65.0
golang.org/x/crypto v0.42.0
golang.org/x/term v0.35.0
golang.org/x/time v0.13.0
github.com/lixenwraith/config v0.0.0-20251003140149-580459b815f6
github.com/lixenwraith/log v0.0.0-20251010094026-6a161eb2b686
github.com/panjf2000/gnet/v2 v2.9.4
github.com/valyala/fasthttp v1.68.0
golang.org/x/crypto v0.43.0
golang.org/x/term v0.36.0
)
require (
github.com/BurntSushi/toml v1.5.0 // indirect
github.com/andybalholm/brotli v1.2.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/klauspost/compress v1.18.1 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/panjf2000/ants/v2 v2.11.3 // indirect
github.com/valyala/bytebufferpool v1.0.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.0 // indirect
golang.org/x/sync v0.17.0 // indirect
golang.org/x/sys v0.36.0 // indirect
golang.org/x/sys v0.37.0 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

36
go.sum
View File

@ -6,26 +6,24 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/go-viper/mapstructure v1.6.0 h1:0WdPOF2rmmQDN1xo8qIgxyugvLp71HrZSWyGLxofobw=
github.com/go-viper/mapstructure v1.6.0/go.mod h1:FcbLReH7/cjaC0RVQR+LHFIrBhHF3s1e/ud1KMDoBVw=
github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo=
github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/lixenwraith/config v0.0.0-20250908085506-537a4d49d2c3 h1:+RwUb7dUz9mGdUSW+E0WuqJgTVg1yFnPb94Wyf5ma/0=
github.com/lixenwraith/config v0.0.0-20250908085506-537a4d49d2c3/go.mod h1:I7ddNPT8MouXXz/ae4DQfBKMq5EisxdDLRX0C7Dv4O0=
github.com/lixenwraith/log v0.0.0-20250908085352-2df52dfb9208 h1:IB1O/HLv9VR/4mL1Tkjlr91lk+r8anP6bab7rYdS/oE=
github.com/lixenwraith/log v0.0.0-20250908085352-2df52dfb9208/go.mod h1:E7REMCVTr6DerzDtd2tpEEaZ9R9nduyAIKQFOqHqKr0=
github.com/klauspost/compress v1.18.1 h1:bcSGx7UbpBqMChDtsF28Lw6v/G94LPrrbMbdC3JH2co=
github.com/klauspost/compress v1.18.1/go.mod h1:ZQFFVG+MdnR0P+l6wpXgIL4NTtwiKIdBnrBd8Nrxr+0=
github.com/lixenwraith/config v0.0.0-20251003140149-580459b815f6 h1:G9qP8biXBT6bwBOjEe1tZwjA0gPuB5DC+fLBRXDNXqo=
github.com/lixenwraith/config v0.0.0-20251003140149-580459b815f6/go.mod h1:I7ddNPT8MouXXz/ae4DQfBKMq5EisxdDLRX0C7Dv4O0=
github.com/lixenwraith/log v0.0.0-20251010094026-6a161eb2b686 h1:STgvFUpjvZquBF322PNLXaU67oEScewGDLy0aV+lIkY=
github.com/lixenwraith/log v0.0.0-20251010094026-6a161eb2b686/go.mod h1:E7REMCVTr6DerzDtd2tpEEaZ9R9nduyAIKQFOqHqKr0=
github.com/panjf2000/ants/v2 v2.11.3 h1:AfI0ngBoXJmYOpDh9m516vjqoUu2sLrIVgppI9TZVpg=
github.com/panjf2000/ants/v2 v2.11.3/go.mod h1:8u92CYMUc6gyvTIw8Ru7Mt7+/ESnJahz5EVtqfrilek=
github.com/panjf2000/gnet/v2 v2.9.3 h1:auV3/A9Na3jiBDmYAAU00rPhFKnsAI+TnI1F7YUJMHQ=
github.com/panjf2000/gnet/v2 v2.9.3/go.mod h1:WQTxDWYuQ/hz3eccH0FN32IVuvZ19HewEWx0l62fx7E=
github.com/panjf2000/gnet/v2 v2.9.4 h1:XvPCcaFwO4XWg4IgSfZnNV4dfDy5g++HIEx7sH0ldHc=
github.com/panjf2000/gnet/v2 v2.9.4/go.mod h1:WQTxDWYuQ/hz3eccH0FN32IVuvZ19HewEWx0l62fx7E=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasthttp v1.65.0 h1:j/u3uzFEGFfRxw79iYzJN+TteTJwbYkru9uDp3d0Yf8=
github.com/valyala/fasthttp v1.65.0/go.mod h1:P/93/YkKPMsKSnATEeELUCkG8a7Y+k99uxNHVbKINr4=
github.com/valyala/fasthttp v1.68.0 h1:v12Nx16iepr8r9ySOwqI+5RBJ/DqTxhOy1HrHoDFnok=
github.com/valyala/fasthttp v1.68.0/go.mod h1:5EXiRfYQAoiO/khu4oU9VISC/eVY6JqmSpPJoHCKsz4=
github.com/xyproto/randomstring v1.0.5 h1:YtlWPoRdgMu3NZtP45drfy1GKoojuR7hmRcnhZqKjWU=
github.com/xyproto/randomstring v1.0.5/go.mod h1:rgmS5DeNXLivK7YprL0pY+lTuhNQW3iGxZ18UQApw/E=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
@ -34,16 +32,14 @@ go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
golang.org/x/crypto v0.42.0 h1:chiH31gIWm57EkTXpwnqf8qeuMUi0yekh6mT2AvFlqI=
golang.org/x/crypto v0.42.0/go.mod h1:4+rDnOTJhQCx2q7/j6rAN5XDw8kPjeaXEUR2eL94ix8=
golang.org/x/crypto v0.43.0 h1:dduJYIi3A3KOfdGOHX8AVZ/jGiyPa3IbBozJ5kNuE04=
golang.org/x/crypto v0.43.0/go.mod h1:BFbav4mRNlXJL4wNeejLpWxB7wMbc79PdRGhWKncxR0=
golang.org/x/sync v0.17.0 h1:l60nONMj9l5drqw6jlhIELNv9I0A4OFgRsG9k2oT9Ug=
golang.org/x/sync v0.17.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.36.0 h1:KVRy2GtZBrk1cBYA7MKu5bEZFxQk4NIDV6RLVcC8o0k=
golang.org/x/sys v0.36.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.35.0 h1:bZBVKBudEyhRcajGcNc3jIfWPqV4y/Kt2XcoigOWtDQ=
golang.org/x/term v0.35.0/go.mod h1:TPGtkTLesOwf2DE8CgVYiZinHAOuy5AYUYT1lENIZnA=
golang.org/x/time v0.13.0 h1:eUlYslOIt32DgYD6utsuUeHs4d7AsEYLuIAdg7FlYgI=
golang.org/x/time v0.13.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/sys v0.37.0 h1:fdNQudmxPjkdUTPnLn5mdQv7Zwvbvpaxqs831goi9kQ=
golang.org/x/sys v0.37.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.36.0 h1:zMPR+aF8gfksFprF/Nc/rd1wRS1EI6nDBGyWAvDzx2Q=
golang.org/x/term v0.36.0/go.mod h1:Qu394IJq6V6dCBRgwqshf3mPF85AqzYEzofzRdZkWss=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/natefinch/lumberjack.v2 v2.2.1 h1:bBRl1b0OH9s/DuPhuXpNl+VtCaJXFZ5/uEFST95x9zc=

View File

@ -1,110 +0,0 @@
// FILE: logwisp/src/cmd/auth-gen/main.go
package main
import (
"crypto/rand"
"encoding/base64"
"flag"
"fmt"
"os"
"syscall"
"golang.org/x/crypto/bcrypt"
"golang.org/x/term"
)
func main() {
var (
username = flag.String("u", "", "Username for basic auth")
password = flag.String("p", "", "Password to hash (will prompt if not provided)")
cost = flag.Int("c", 10, "Bcrypt cost (10-31)")
genToken = flag.Bool("t", false, "Generate random bearer token")
tokenLen = flag.Int("l", 32, "Token length in bytes")
)
flag.Usage = func() {
fmt.Fprintf(os.Stderr, "LogWisp Authentication Utility\n\n")
fmt.Fprintf(os.Stderr, "Usage:\n")
fmt.Fprintf(os.Stderr, " Generate bcrypt hash: %s -u <username> [-p <password>]\n", os.Args[0])
fmt.Fprintf(os.Stderr, " Generate bearer token: %s -t [-l <length>]\n", os.Args[0])
fmt.Fprintf(os.Stderr, "\nOptions:\n")
flag.PrintDefaults()
}
flag.Parse()
if *genToken {
generateToken(*tokenLen)
return
}
if *username == "" {
fmt.Fprintf(os.Stderr, "Error: Username required for basic auth\n")
flag.Usage()
os.Exit(1)
}
// Get password
pass := *password
if pass == "" {
pass = promptPassword("Enter password: ")
confirm := promptPassword("Confirm password: ")
if pass != confirm {
fmt.Fprintf(os.Stderr, "Error: Passwords don't match\n")
os.Exit(1)
}
}
// Generate bcrypt hash
hash, err := bcrypt.GenerateFromPassword([]byte(pass), *cost)
if err != nil {
fmt.Fprintf(os.Stderr, "Error generating hash: %v\n", err)
os.Exit(1)
}
// Output TOML config format
fmt.Println("\n# Add to logwisp.toml under [[pipelines.auth.basic_auth.users]]:")
fmt.Printf("[[pipelines.auth.basic_auth.users]]\n")
fmt.Printf("username = \"%s\"\n", *username)
fmt.Printf("password_hash = \"%s\"\n", string(hash))
// Also output for users file format
fmt.Println("\n# Or add to users file:")
fmt.Printf("%s:%s\n", *username, string(hash))
}
func promptPassword(prompt string) string {
fmt.Fprint(os.Stderr, prompt)
password, err := term.ReadPassword(int(syscall.Stdin))
fmt.Fprintln(os.Stderr)
if err != nil {
fmt.Fprintf(os.Stderr, "Error reading password: %v\n", err)
os.Exit(1)
}
return string(password)
}
func generateToken(length int) {
if length < 16 {
fmt.Fprintf(os.Stderr, "Warning: Token length < 16 bytes is insecure\n")
}
token := make([]byte, length)
if _, err := rand.Read(token); err != nil {
fmt.Fprintf(os.Stderr, "Error generating token: %v\n", err)
os.Exit(1)
}
// Output in various formats
b64 := base64.URLEncoding.WithPadding(base64.NoPadding).EncodeToString(token)
hex := fmt.Sprintf("%x", token)
fmt.Println("\n# Add to logwisp.toml under [pipelines.auth.bearer_auth]:")
fmt.Printf("tokens = [\"%s\"]\n", b64)
fmt.Println("\n# Alternative hex encoding:")
fmt.Printf("# tokens = [\"%s\"]\n", hex)
fmt.Printf("\n# Token (base64): %s\n", b64)
fmt.Printf("# Token (hex): %s\n", hex)
}

View File

@ -13,10 +13,10 @@ import (
"github.com/lixenwraith/log"
)
// bootstrapService creates and initializes the log transport service
// Creates and initializes the log transport service
func bootstrapService(ctx context.Context, cfg *config.Config) (*service.Service, error) {
// Create service with logger dependency injection
svc := service.New(ctx, logger)
svc := service.NewService(ctx, logger)
// Initialize pipelines
successCount := 0
@ -24,7 +24,7 @@ func bootstrapService(ctx context.Context, cfg *config.Config) (*service.Service
logger.Info("msg", "Initializing pipeline", "pipeline", pipelineCfg.Name)
// Create the pipeline
if err := svc.NewPipeline(pipelineCfg); err != nil {
if err := svc.NewPipeline(&pipelineCfg); err != nil {
logger.Error("msg", "Failed to create pipeline",
"pipeline", pipelineCfg.Name,
"error", err)
@ -45,7 +45,7 @@ func bootstrapService(ctx context.Context, cfg *config.Config) (*service.Service
return svc, nil
}
// initializeLogger sets up the logger based on configuration
// Sets up the logger based on configuration
func initializeLogger(cfg *config.Config) error {
logger = log.NewLogger()
logCfg := log.DefaultConfig()
@ -53,8 +53,8 @@ func initializeLogger(cfg *config.Config) error {
if cfg.Quiet {
// In quiet mode, disable ALL logging output
logCfg.Level = 255 // A level that disables all output
logCfg.DisableFile = true
logCfg.EnableStdout = false
logCfg.EnableFile = false
logCfg.EnableConsole = false
return logger.ApplyConfig(logCfg)
}
@ -68,23 +68,29 @@ func initializeLogger(cfg *config.Config) error {
// Configure based on output mode
switch cfg.Logging.Output {
case "none":
logCfg.DisableFile = true
logCfg.EnableStdout = false
logCfg.EnableFile = false
logCfg.EnableConsole = false
case "stdout":
logCfg.DisableFile = true
logCfg.EnableStdout = true
logCfg.StdoutTarget = "stdout"
logCfg.EnableFile = false
logCfg.EnableConsole = true
logCfg.ConsoleTarget = "stdout"
case "stderr":
logCfg.DisableFile = true
logCfg.EnableStdout = true
logCfg.StdoutTarget = "stderr"
logCfg.EnableFile = false
logCfg.EnableConsole = true
logCfg.ConsoleTarget = "stderr"
case "split":
logCfg.EnableFile = false
logCfg.EnableConsole = true
logCfg.ConsoleTarget = "split"
case "file":
logCfg.EnableStdout = false
logCfg.EnableFile = true
logCfg.EnableConsole = false
configureFileLogging(logCfg, cfg)
case "both":
logCfg.EnableStdout = true
case "all":
logCfg.EnableFile = true
logCfg.EnableConsole = true
logCfg.ConsoleTarget = "split"
configureFileLogging(logCfg, cfg)
configureConsoleTarget(logCfg, cfg)
default:
return fmt.Errorf("invalid log output mode: %s", cfg.Logging.Output)
}
@ -97,7 +103,7 @@ func initializeLogger(cfg *config.Config) error {
return logger.ApplyConfig(logCfg)
}
// configureFileLogging sets up file-based logging parameters
// Sets up file-based logging parameters
func configureFileLogging(logCfg *log.Config, cfg *config.Config) {
if cfg.Logging.File != nil {
logCfg.Directory = cfg.Logging.File.Directory
@ -110,18 +116,6 @@ func configureFileLogging(logCfg *log.Config, cfg *config.Config) {
}
}
// configureConsoleTarget sets up console output parameters
func configureConsoleTarget(logCfg *log.Config, cfg *config.Config) {
target := "stderr" // default
if cfg.Logging.Console != nil && cfg.Logging.Console.Target != "" {
target = cfg.Logging.Console.Target
}
// Set the target, which can be "stdout", "stderr", or "split"
logCfg.StdoutTarget = target
}
func parseLogLevel(level string) (int64, error) {
switch strings.ToLower(level) {
case "debug":

View File

@ -0,0 +1,355 @@
// FILE: src/cmd/logwisp/commands/auth.go
package commands
import (
"crypto/rand"
"encoding/base64"
"flag"
"fmt"
"io"
"os"
"strings"
"syscall"
"logwisp/src/internal/auth"
"logwisp/src/internal/core"
"golang.org/x/term"
)
type AuthCommand struct {
output io.Writer
errOut io.Writer
}
func NewAuthCommand() *AuthCommand {
return &AuthCommand{
output: os.Stdout,
errOut: os.Stderr,
}
}
func (ac *AuthCommand) Execute(args []string) error {
cmd := flag.NewFlagSet("auth", flag.ContinueOnError)
cmd.SetOutput(ac.errOut)
var (
// User credentials
username = cmd.String("u", "", "Username")
usernameLong = cmd.String("user", "", "Username")
password = cmd.String("p", "", "Password (will prompt if not provided)")
passwordLong = cmd.String("password", "", "Password (will prompt if not provided)")
// Auth type selection (multiple ways to specify)
authType = cmd.String("t", "", "Auth type: basic, scram, or token")
authTypeLong = cmd.String("type", "", "Auth type: basic, scram, or token")
useScram = cmd.Bool("s", false, "Generate SCRAM credentials (TCP)")
useScramLong = cmd.Bool("scram", false, "Generate SCRAM credentials (TCP)")
useBasic = cmd.Bool("b", false, "Generate basic auth credentials (HTTP)")
useBasicLong = cmd.Bool("basic", false, "Generate basic auth credentials (HTTP)")
// Token generation
genToken = cmd.Bool("k", false, "Generate random bearer token")
genTokenLong = cmd.Bool("token", false, "Generate random bearer token")
tokenLen = cmd.Int("l", 32, "Token length in bytes")
tokenLenLong = cmd.Int("length", 32, "Token length in bytes")
// Migration option
migrate = cmd.Bool("m", false, "Convert basic auth PHC to SCRAM")
migrateLong = cmd.Bool("migrate", false, "Convert basic auth PHC to SCRAM")
phcHash = cmd.String("phc", "", "PHC hash to migrate (required with --migrate)")
)
cmd.Usage = func() {
fmt.Fprintln(ac.errOut, "Generate authentication credentials for LogWisp")
fmt.Fprintln(ac.errOut, "\nUsage: logwisp auth [options]")
fmt.Fprintln(ac.errOut, "\nExamples:")
fmt.Fprintln(ac.errOut, " # Generate basic auth hash for HTTP sources/sinks")
fmt.Fprintln(ac.errOut, " logwisp auth -u admin -b")
fmt.Fprintln(ac.errOut, " logwisp auth --user=admin --basic")
fmt.Fprintln(ac.errOut, " ")
fmt.Fprintln(ac.errOut, " # Generate SCRAM credentials for TCP")
fmt.Fprintln(ac.errOut, " logwisp auth -u tcpuser -s")
fmt.Fprintln(ac.errOut, " logwisp auth --user=tcpuser --scram")
fmt.Fprintln(ac.errOut, " ")
fmt.Fprintln(ac.errOut, " # Generate bearer token")
fmt.Fprintln(ac.errOut, " logwisp auth -k -l 64")
fmt.Fprintln(ac.errOut, " logwisp auth --token --length=64")
fmt.Fprintln(ac.errOut, "\nOptions:")
cmd.PrintDefaults()
}
if err := cmd.Parse(args); err != nil {
return err
}
// Check for unparsed arguments
if cmd.NArg() > 0 {
return fmt.Errorf("unexpected argument(s): %s", strings.Join(cmd.Args(), " "))
}
// Merge short and long form values
finalUsername := coalesceString(*username, *usernameLong)
finalPassword := coalesceString(*password, *passwordLong)
finalAuthType := coalesceString(*authType, *authTypeLong)
finalGenToken := coalesceBool(*genToken, *genTokenLong)
finalTokenLen := coalesceInt(*tokenLen, *tokenLenLong, core.DefaultTokenLength)
finalUseScram := coalesceBool(*useScram, *useScramLong)
finalUseBasic := coalesceBool(*useBasic, *useBasicLong)
finalMigrate := coalesceBool(*migrate, *migrateLong)
// Handle migration mode
if finalMigrate {
if *phcHash == "" || finalUsername == "" || finalPassword == "" {
return fmt.Errorf("--migrate requires --user, --password, and --phc flags")
}
return ac.migrateToScram(finalUsername, finalPassword, *phcHash)
}
// Determine auth type from flags
if finalGenToken || finalAuthType == "token" {
return ac.generateToken(finalTokenLen)
}
// Determine credential type
credType := "basic" // default
// Check explicit type flags
if finalUseScram || finalAuthType == "scram" {
credType = "scram"
} else if finalUseBasic || finalAuthType == "basic" {
credType = "basic"
} else if finalAuthType != "" {
return fmt.Errorf("invalid auth type: %s (valid: basic, scram, token)", finalAuthType)
}
// Username required for password-based auth
if finalUsername == "" {
cmd.Usage()
return fmt.Errorf("username required for %s auth generation", credType)
}
return ac.generatePasswordHash(finalUsername, finalPassword, credType)
}
func (ac *AuthCommand) Description() string {
return "Generate authentication credentials (passwords, tokens, SCRAM)"
}
func (ac *AuthCommand) Help() string {
return `Auth Command - Generate authentication credentials for LogWisp
Usage:
logwisp auth [options]
Authentication Types:
HTTP/HTTPS Sources & Sinks (TLS required):
- Basic Auth: Username/password with Argon2id hashing
- Bearer Token: Random cryptographic tokens
TCP Sources & Sinks (No TLS):
- SCRAM: Argon2-SCRAM-SHA256 for plaintext connections
Options:
-u, --user <name> Username for credential generation
-p, --password <pass> Password (will prompt if not provided)
-t, --type <type> Auth type: "basic", "scram", or "token"
-b, --basic Generate basic auth credentials (HTTP/HTTPS)
-s, --scram Generate SCRAM credentials (TCP)
-k, --token Generate random bearer token
-l, --length <bytes> Token length in bytes (default: 32)
Examples:
# Generate basic auth hash for HTTP/HTTPS (with TLS)
logwisp auth -u admin -b
logwisp auth --user=admin --basic
# Generate SCRAM credentials for TCP (without TLS)
logwisp auth -u tcpuser -s
logwisp auth --user=tcpuser --type=scram
# Generate 64-byte bearer token
logwisp auth -k -l 64
logwisp auth --token --length=64
# Convert existing basic auth to SCRAM (HTTPS to TCP conversion)
logwisp auth -u admin -m --phc='$argon2id$v=19$m=65536...' --password='secret'
Output:
The command outputs configuration snippets ready to paste into logwisp.toml
and the raw credential values for external auth files.
Security Notes:
- Basic auth and tokens require TLS encryption for HTTP connections
- SCRAM provides authentication but NOT encryption for TCP connections
- Use strong passwords (12+ characters with mixed case, numbers, symbols)
- Store credentials securely and never commit them to version control
`
}
func (ac *AuthCommand) generatePasswordHash(username, password, credType string) error {
// Get password if not provided
if password == "" {
var err error
password, err = ac.promptForPassword()
if err != nil {
return err
}
}
switch credType {
case "basic":
return ac.generateBasicAuth(username, password)
case "scram":
return ac.generateScramAuth(username, password)
default:
return fmt.Errorf("invalid credential type: %s", credType)
}
}
// promptForPassword handles password prompting with confirmation
func (ac *AuthCommand) promptForPassword() (string, error) {
pass1 := ac.promptPassword("Enter password: ")
pass2 := ac.promptPassword("Confirm password: ")
if pass1 != pass2 {
return "", fmt.Errorf("passwords don't match")
}
return pass1, nil
}
func (ac *AuthCommand) promptPassword(prompt string) string {
fmt.Fprint(ac.errOut, prompt)
password, err := term.ReadPassword(syscall.Stdin)
fmt.Fprintln(ac.errOut)
if err != nil {
fmt.Fprintf(ac.errOut, "Failed to read password: %v\n", err)
os.Exit(1)
}
return string(password)
}
// generateBasicAuth creates Argon2id hash for HTTP basic auth
func (ac *AuthCommand) generateBasicAuth(username, password string) error {
// Generate salt
salt := make([]byte, core.Argon2SaltLen)
if _, err := rand.Read(salt); err != nil {
return fmt.Errorf("failed to generate salt: %w", err)
}
// Generate Argon2id hash
cred, err := auth.DeriveCredential(username, password, salt,
core.Argon2Time, core.Argon2Memory, core.Argon2Threads)
if err != nil {
return fmt.Errorf("failed to derive credential: %w", err)
}
// Output configuration snippets
fmt.Fprintln(ac.output, "\n# Basic Auth Configuration (HTTP sources/sinks)")
fmt.Fprintln(ac.output, "# REQUIRES HTTPS/TLS for security")
fmt.Fprintln(ac.output, "# Add to logwisp.toml under [[pipelines]]:")
fmt.Fprintln(ac.output, "")
fmt.Fprintln(ac.output, "[pipelines.auth]")
fmt.Fprintln(ac.output, `type = "basic"`)
fmt.Fprintln(ac.output, "")
fmt.Fprintln(ac.output, "[[pipelines.auth.basic_auth.users]]")
fmt.Fprintf(ac.output, "username = %q\n", username)
fmt.Fprintf(ac.output, "password_hash = %q\n\n", cred.PHCHash)
fmt.Fprintln(ac.output, "# For external users file:")
fmt.Fprintf(ac.output, "%s:%s\n", username, cred.PHCHash)
return nil
}
// generateScramAuth creates Argon2id-SCRAM-SHA256 credentials for TCP
func (ac *AuthCommand) generateScramAuth(username, password string) error {
// Generate salt
salt := make([]byte, core.Argon2SaltLen)
if _, err := rand.Read(salt); err != nil {
return fmt.Errorf("failed to generate salt: %w", err)
}
// Use internal auth package to derive SCRAM credentials
cred, err := auth.DeriveCredential(username, password, salt,
core.Argon2Time, core.Argon2Memory, core.Argon2Threads)
if err != nil {
return fmt.Errorf("failed to derive SCRAM credential: %w", err)
}
// Output SCRAM configuration
fmt.Fprintln(ac.output, "\n# SCRAM Auth Configuration (TCP sources/sinks)")
fmt.Fprintln(ac.output, "# Provides authentication but NOT encryption")
fmt.Fprintln(ac.output, "# Add to logwisp.toml under [[pipelines]]:")
fmt.Fprintln(ac.output, "")
fmt.Fprintln(ac.output, "[pipelines.auth]")
fmt.Fprintln(ac.output, `type = "scram"`)
fmt.Fprintln(ac.output, "")
fmt.Fprintln(ac.output, "[[pipelines.auth.scram_auth.users]]")
fmt.Fprintf(ac.output, "username = %q\n", username)
fmt.Fprintf(ac.output, "stored_key = %q\n", base64.StdEncoding.EncodeToString(cred.StoredKey))
fmt.Fprintf(ac.output, "server_key = %q\n", base64.StdEncoding.EncodeToString(cred.ServerKey))
fmt.Fprintf(ac.output, "salt = %q\n", base64.StdEncoding.EncodeToString(cred.Salt))
fmt.Fprintf(ac.output, "argon_time = %d\n", cred.ArgonTime)
fmt.Fprintf(ac.output, "argon_memory = %d\n", cred.ArgonMemory)
fmt.Fprintf(ac.output, "argon_threads = %d\n\n", cred.ArgonThreads)
fmt.Fprintln(ac.output, "# Note: SCRAM provides authentication only.")
fmt.Fprintln(ac.output, "# Use TLS/mTLS for encryption if needed.")
return nil
}
func (ac *AuthCommand) generateToken(length int) error {
if length < 16 {
fmt.Fprintln(ac.errOut, "Warning: tokens < 16 bytes are cryptographically weak")
}
if length > 512 {
return fmt.Errorf("token length exceeds maximum (512 bytes)")
}
token := make([]byte, length)
if _, err := rand.Read(token); err != nil {
return fmt.Errorf("failed to generate random bytes: %w", err)
}
b64 := base64.URLEncoding.WithPadding(base64.NoPadding).EncodeToString(token)
hex := fmt.Sprintf("%x", token)
fmt.Fprintln(ac.output, "\n# Token Configuration")
fmt.Fprintln(ac.output, "# Add to logwisp.toml:")
fmt.Fprintf(ac.output, "tokens = [%q]\n\n", b64)
fmt.Fprintln(ac.output, "# Generated Token:")
fmt.Fprintf(ac.output, "Base64: %s\n", b64)
fmt.Fprintf(ac.output, "Hex: %s\n", hex)
return nil
}
// migrateToScram converts basic auth PHC hash to SCRAM credentials
func (ac *AuthCommand) migrateToScram(username, password, phcHash string) error {
// CHANGED: Moved from internal/auth to CLI command layer
cred, err := auth.MigrateFromPHC(username, password, phcHash)
if err != nil {
return fmt.Errorf("migration failed: %w", err)
}
// Output SCRAM configuration (reuse format from generateScramAuth)
fmt.Fprintln(ac.output, "\n# Migrated SCRAM Credentials")
fmt.Fprintln(ac.output, "# Add to logwisp.toml under [[pipelines]]:")
fmt.Fprintln(ac.output, "")
fmt.Fprintln(ac.output, "[pipelines.auth]")
fmt.Fprintln(ac.output, `type = "scram"`)
fmt.Fprintln(ac.output, "")
fmt.Fprintln(ac.output, "[[pipelines.auth.scram_auth.users]]")
fmt.Fprintf(ac.output, "username = %q\n", username)
fmt.Fprintf(ac.output, "stored_key = %q\n", base64.StdEncoding.EncodeToString(cred.StoredKey))
fmt.Fprintf(ac.output, "server_key = %q\n", base64.StdEncoding.EncodeToString(cred.ServerKey))
fmt.Fprintf(ac.output, "salt = %q\n", base64.StdEncoding.EncodeToString(cred.Salt))
fmt.Fprintf(ac.output, "argon_time = %d\n", cred.ArgonTime)
fmt.Fprintf(ac.output, "argon_memory = %d\n", cred.ArgonMemory)
fmt.Fprintf(ac.output, "argon_threads = %d\n", cred.ArgonThreads)
return nil
}

View File

@ -0,0 +1,123 @@
// FILE: src/cmd/logwisp/commands/help.go
package commands
import (
"fmt"
"sort"
"strings"
)
const generalHelpTemplate = `LogWisp: A flexible log transport and processing tool.
Usage:
logwisp [command] [options]
logwisp [options]
Commands:
%s
Application Options:
-c, --config <path> Path to configuration file (default: logwisp.toml)
-h, --help Display this help message and exit
-v, --version Display version information and exit
-b, --background Run LogWisp in the background as a daemon
-q, --quiet Suppress all console output, including errors
Runtime Options:
--disable-status-reporter Disable the periodic status reporter
--config-auto-reload Enable config reload on file change
For command-specific help:
logwisp help <command>
logwisp <command> --help
Configuration Sources (Precedence: CLI > Env > File > Defaults):
- CLI flags override all other settings
- Environment variables override file settings
- TOML configuration file is the primary method
Examples:
# Generate password for admin user
logwisp auth -u admin
# Start service with custom config
logwisp -c /etc/logwisp/prod.toml
# Run in background with config reload
logwisp -b --config-auto-reload
For detailed configuration options, please refer to the documentation.
`
// HelpCommand handles help display
type HelpCommand struct {
router *CommandRouter
}
// NewHelpCommand creates a new help command
func NewHelpCommand(router *CommandRouter) *HelpCommand {
return &HelpCommand{router: router}
}
// Execute displays help information
func (c *HelpCommand) Execute(args []string) error {
// Check if help is requested for a specific command
if len(args) > 0 && args[0] != "" {
cmdName := args[0]
if handler, exists := c.router.GetCommand(cmdName); exists {
fmt.Print(handler.Help())
return nil
}
return fmt.Errorf("unknown command: %s", cmdName)
}
// Display general help with command list
fmt.Printf(generalHelpTemplate, c.formatCommandList())
return nil
}
// formatCommandList creates a formatted list of available commands
func (c *HelpCommand) formatCommandList() string {
commands := c.router.GetCommands()
// Sort command names for consistent output
names := make([]string, 0, len(commands))
maxLen := 0
for name := range commands {
names = append(names, name)
if len(name) > maxLen {
maxLen = len(name)
}
}
sort.Strings(names)
// Format each command with aligned descriptions
var lines []string
for _, name := range names {
handler := commands[name]
padding := strings.Repeat(" ", maxLen-len(name)+2)
lines = append(lines, fmt.Sprintf(" %s%s%s", name, padding, handler.Description()))
}
return strings.Join(lines, "\n")
}
func (c *HelpCommand) Description() string {
return "Display help information"
}
func (c *HelpCommand) Help() string {
return `Help Command - Display help information
Usage:
logwisp help Show general help
logwisp help <command> Show help for a specific command
Examples:
logwisp help # Show general help
logwisp help auth # Show auth command help
logwisp auth --help # Alternative way to get command help
`
}

View File

@ -0,0 +1,118 @@
// FILE: src/cmd/logwisp/commands/router.go
package commands
import (
"fmt"
"os"
)
// Handler defines the interface for subcommands
type Handler interface {
Execute(args []string) error
Description() string
Help() string
}
// CommandRouter handles subcommand routing before main app initialization
type CommandRouter struct {
commands map[string]Handler
}
// NewCommandRouter creates and initializes the command router
func NewCommandRouter() *CommandRouter {
router := &CommandRouter{
commands: make(map[string]Handler),
}
// Register available commands
router.commands["auth"] = NewAuthCommand()
router.commands["tls"] = NewTLSCommand()
router.commands["version"] = NewVersionCommand()
router.commands["help"] = NewHelpCommand(router)
return router
}
// Route checks for and executes subcommands
func (r *CommandRouter) Route(args []string) (bool, error) {
if len(args) < 2 {
return false, nil // No command specified, let main app continue
}
cmdName := args[1]
// Special case: help flag at any position shows general help
for _, arg := range args[1:] {
if arg == "-h" || arg == "--help" {
// If it's after a valid command, show command-specific help
if handler, exists := r.commands[cmdName]; exists && cmdName != "help" {
fmt.Print(handler.Help())
return true, nil
}
// Otherwise show general help
return true, r.commands["help"].Execute(nil)
}
}
// Check if this is a known command
handler, exists := r.commands[cmdName]
if !exists {
// Check if it looks like a mistyped command (guard against an empty arg before indexing)
if cmdName != "" && cmdName[0] != '-' {
return false, fmt.Errorf("unknown command: %s\n\nRun 'logwisp help' for usage", cmdName)
}
// It's a flag, let main app handle it
return false, nil
}
// Execute the command
return true, handler.Execute(args[2:])
}
// GetCommand returns a command handler by name
func (r *CommandRouter) GetCommand(name string) (Handler, bool) {
cmd, exists := r.commands[name]
return cmd, exists
}
// GetCommands returns all registered commands
func (r *CommandRouter) GetCommands() map[string]Handler {
return r.commands
}
// ShowCommands displays available subcommands
func (r *CommandRouter) ShowCommands() {
for name, handler := range r.commands {
fmt.Fprintf(os.Stderr, " %-10s %s\n", name, handler.Description())
}
fmt.Fprintln(os.Stderr, "\nUse 'logwisp <command> --help' for command-specific help")
}
// Helper functions to merge short and long options
func coalesceString(values ...string) string {
for _, v := range values {
if v != "" {
return v
}
}
return ""
}
func coalesceInt(primary, secondary, defaultVal int) int {
if primary != defaultVal {
return primary
}
if secondary != defaultVal {
return secondary
}
return defaultVal
}
func coalesceBool(values ...bool) bool {
for _, v := range values {
if v {
return true
}
}
return false
}
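
The `Handler` interface is what keeps the router extensible. As a minimal sketch, a hypothetical `ping` subcommand (name and behavior invented purely for illustration, not part of this changeset) only needs the three methods:

```go
package commands

import "fmt"

// PingCommand is a hypothetical example command.
type PingCommand struct{}

func NewPingCommand() *PingCommand { return &PingCommand{} }

// Execute ignores its arguments and prints a fixed response.
func (c *PingCommand) Execute(args []string) error {
	fmt.Println("pong")
	return nil
}

func (c *PingCommand) Description() string { return "Respond with pong" }

func (c *PingCommand) Help() string {
	return "Ping Command - Respond with pong\n\nUsage:\n  logwisp ping\n"
}
```

Registering it would be one extra line in `NewCommandRouter`: `router.commands["ping"] = NewPingCommand()`.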

View File

@ -0,0 +1,563 @@
// FILE: src/cmd/logwisp/commands/tls.go
package commands
import (
"crypto/rand"
"crypto/rsa"
"crypto/x509"
"crypto/x509/pkix"
"encoding/pem"
"flag"
"fmt"
"io"
"math/big"
"net"
"os"
"strings"
"time"
)
type TLSCommand struct {
output io.Writer
errOut io.Writer
}
func NewTLSCommand() *TLSCommand {
return &TLSCommand{
output: os.Stdout,
errOut: os.Stderr,
}
}
func (tc *TLSCommand) Execute(args []string) error {
cmd := flag.NewFlagSet("tls", flag.ContinueOnError)
cmd.SetOutput(tc.errOut)
// Certificate type flags
var (
genCA = cmd.Bool("ca", false, "Generate CA certificate")
genServer = cmd.Bool("server", false, "Generate server certificate")
genClient = cmd.Bool("client", false, "Generate client certificate")
selfSign = cmd.Bool("self-signed", false, "Generate self-signed certificate")
// Common options - short forms
commonName = cmd.String("cn", "", "Common name (required)")
org = cmd.String("o", "", "Organization") // empty default so the long form below is not masked during merge
country = cmd.String("c", "", "Country code") // empty default so the long form below is not masked during merge
validDays = cmd.Int("d", 365, "Validity period in days")
keySize = cmd.Int("b", 2048, "RSA key size")
// Common options - long forms
commonNameLong = cmd.String("common-name", "", "Common name (required)")
orgLong = cmd.String("org", "LogWisp", "Organization")
countryLong = cmd.String("country", "US", "Country code")
validDaysLong = cmd.Int("days", 365, "Validity period in days")
keySizeLong = cmd.Int("bits", 2048, "RSA key size")
// Server/Client specific - short forms
hosts = cmd.String("h", "", "Comma-separated hostnames/IPs")
caFile = cmd.String("ca-cert", "", "CA certificate file")
caKey = cmd.String("ca-key", "", "CA key file")
// Server/Client specific - long forms
hostsLong = cmd.String("hosts", "", "Comma-separated hostnames/IPs")
// Output files
certOut = cmd.String("cert-out", "", "Output certificate file")
keyOut = cmd.String("key-out", "", "Output key file")
)
cmd.Usage = func() {
fmt.Fprintln(tc.errOut, "Generate TLS certificates for LogWisp")
fmt.Fprintln(tc.errOut, "\nUsage: logwisp tls [options]")
fmt.Fprintln(tc.errOut, "\nExamples:")
fmt.Fprintln(tc.errOut, " # Generate self-signed certificate")
fmt.Fprintln(tc.errOut, " logwisp tls --self-signed --cn localhost --hosts localhost,127.0.0.1")
fmt.Fprintln(tc.errOut, " ")
fmt.Fprintln(tc.errOut, " # Generate CA certificate")
fmt.Fprintln(tc.errOut, " logwisp tls --ca --cn \"LogWisp CA\" --cert-out ca.crt --key-out ca.key")
fmt.Fprintln(tc.errOut, " ")
fmt.Fprintln(tc.errOut, " # Generate server certificate signed by CA")
fmt.Fprintln(tc.errOut, " logwisp tls --server --cn server.example.com --hosts server.example.com \\")
fmt.Fprintln(tc.errOut, " --ca-cert ca.crt --ca-key ca.key")
fmt.Fprintln(tc.errOut, "\nOptions:")
cmd.PrintDefaults()
fmt.Fprintln(tc.errOut)
}
if err := cmd.Parse(args); err != nil {
return err
}
// Check for unparsed arguments
if cmd.NArg() > 0 {
return fmt.Errorf("unexpected argument(s): %s", strings.Join(cmd.Args(), " "))
}
// Merge short and long options
finalCN := coalesceString(*commonName, *commonNameLong)
finalOrg := coalesceString(*org, *orgLong, "LogWisp")
finalCountry := coalesceString(*country, *countryLong, "US")
finalDays := coalesceInt(*validDays, *validDaysLong, 365)
finalKeySize := coalesceInt(*keySize, *keySizeLong, 2048)
finalHosts := coalesceString(*hosts, *hostsLong)
finalCAFile := *caFile // no short form
finalCAKey := *caKey // no short form
finalCertOut := *certOut // no short form
finalKeyOut := *keyOut // no short form
// Validate common name
if finalCN == "" {
cmd.Usage()
return fmt.Errorf("common name (--cn) is required")
}
// Validate RSA key size
if finalKeySize != 2048 && finalKeySize != 3072 && finalKeySize != 4096 {
return fmt.Errorf("invalid key size: %d (valid: 2048, 3072, 4096)", finalKeySize)
}
// Route to appropriate generator
switch {
case *genCA:
return tc.generateCA(finalCN, finalOrg, finalCountry, finalDays, finalKeySize, finalCertOut, finalKeyOut)
case *selfSign:
return tc.generateSelfSigned(finalCN, finalOrg, finalCountry, finalHosts, finalDays, finalKeySize, finalCertOut, finalKeyOut)
case *genServer:
return tc.generateServerCert(finalCN, finalOrg, finalCountry, finalHosts, finalCAFile, finalCAKey, finalDays, finalKeySize, finalCertOut, finalKeyOut)
case *genClient:
return tc.generateClientCert(finalCN, finalOrg, finalCountry, finalCAFile, finalCAKey, finalDays, finalKeySize, finalCertOut, finalKeyOut)
default:
cmd.Usage()
return fmt.Errorf("specify certificate type: --ca, --self-signed, --server, or --client")
}
}
func (tc *TLSCommand) Description() string {
return "Generate TLS certificates (CA, server, client, self-signed)"
}
func (tc *TLSCommand) Help() string {
return `TLS Command - Generate TLS certificates for LogWisp
Usage:
logwisp tls [options]
Certificate Types:
--ca Generate Certificate Authority (CA) certificate
--server Generate server certificate (requires CA or self-signed)
--client Generate client certificate (for mTLS)
--self-signed Generate self-signed certificate (single cert for testing)
Common Options:
--cn, --common-name <name> Common Name (required)
-o, --org <organization> Organization name (default: "LogWisp")
-c, --country <code> Country code (default: "US")
-d, --days <number> Validity period in days (default: 365)
-b, --bits <size> RSA key size (default: 2048)
Server Certificate Options:
--hosts <list> Comma-separated hostnames/IPs (-h is intercepted as help and cannot be used here)
Example: "localhost,10.0.0.1,example.com"
--ca-cert <file> CA certificate file (for signing)
--ca-key <file> CA key file (for signing)
Output Options:
--cert-out <file> Output certificate file (default: stdout)
--key-out <file> Output private key file (default: stdout)
Examples:
# Generate self-signed certificate for testing
logwisp tls --self-signed --cn localhost --hosts "localhost,127.0.0.1" \
--cert-out server.crt --key-out server.key
# Generate CA certificate
logwisp tls --ca --cn "LogWisp CA" --days 3650 \
--cert-out ca.crt --key-out ca.key
# Generate server certificate signed by CA
logwisp tls --server --cn "logwisp.example.com" \
--hosts "logwisp.example.com,10.0.0.100" \
--ca-cert ca.crt --ca-key ca.key \
--cert-out server.crt --key-out server.key
# Generate client certificate for mTLS
logwisp tls --client --cn "client1" \
--ca-cert ca.crt --ca-key ca.key \
--cert-out client.crt --key-out client.key
Security Notes:
- Keep private keys secure and never share them
- Use 2048-bit RSA minimum, 3072 or 4096 for higher security
- For production, use certificates from a trusted CA
- Self-signed certificates are only for development/testing
- Rotate certificates before expiration
`
}
// Create and manage private CA
func (tc *TLSCommand) generateCA(cn, org, country string, days, bits int, certFile, keyFile string) error {
// Generate RSA key
priv, err := rsa.GenerateKey(rand.Reader, bits)
if err != nil {
return fmt.Errorf("failed to generate key: %w", err)
}
// Create certificate template
serialNumber, _ := rand.Int(rand.Reader, new(big.Int).Lsh(big.NewInt(1), 128))
template := x509.Certificate{
SerialNumber: serialNumber,
Subject: pkix.Name{
Organization: []string{org},
Country: []string{country},
CommonName: cn,
},
NotBefore: time.Now(),
NotAfter: time.Now().AddDate(0, 0, days),
KeyUsage: x509.KeyUsageCertSign | x509.KeyUsageCRLSign,
BasicConstraintsValid: true,
IsCA: true,
}
// Generate certificate
certDER, err := x509.CreateCertificate(rand.Reader, &template, &template, &priv.PublicKey, priv)
if err != nil {
return fmt.Errorf("failed to create certificate: %w", err)
}
// Default output files
if certFile == "" {
certFile = "ca.crt"
}
if keyFile == "" {
keyFile = "ca.key"
}
// Save certificate
if err := saveCert(certFile, certDER); err != nil {
return err
}
if err := saveKey(keyFile, priv); err != nil {
return err
}
fmt.Printf("✓ CA certificate generated:\n")
fmt.Printf(" Certificate: %s\n", certFile)
fmt.Printf(" Private key: %s (mode 0600)\n", keyFile)
fmt.Printf(" Valid for: %d days\n", days)
fmt.Printf(" Common name: %s\n", cn)
return nil
}
func parseHosts(hostList string) ([]string, []net.IP) {
var dnsNames []string
var ipAddrs []net.IP
if hostList == "" {
return dnsNames, ipAddrs
}
hosts := strings.Split(hostList, ",")
for _, h := range hosts {
h = strings.TrimSpace(h)
if ip := net.ParseIP(h); ip != nil {
ipAddrs = append(ipAddrs, ip)
} else {
dnsNames = append(dnsNames, h)
}
}
return dnsNames, ipAddrs
}
// Generate self-signed certificate
func (tc *TLSCommand) generateSelfSigned(cn, org, country, hosts string, days, bits int, certFile, keyFile string) error {
// 1. Generate an RSA private key with the specified bit size
priv, err := rsa.GenerateKey(rand.Reader, bits)
if err != nil {
return fmt.Errorf("failed to generate private key: %w", err)
}
// 2. Parse the hosts string into DNS names and IP addresses
dnsNames, ipAddrs := parseHosts(hosts)
// 3. Create the certificate template
serialNumber, _ := rand.Int(rand.Reader, new(big.Int).Lsh(big.NewInt(1), 128))
template := x509.Certificate{
SerialNumber: serialNumber,
Subject: pkix.Name{
CommonName: cn,
Organization: []string{org},
Country: []string{country},
},
NotBefore: time.Now(),
NotAfter: time.Now().AddDate(0, 0, days),
KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
IsCA: false,
DNSNames: dnsNames,
IPAddresses: ipAddrs,
}
// 4. Create the self-signed certificate
certDER, err := x509.CreateCertificate(rand.Reader, &template, &template, &priv.PublicKey, priv)
if err != nil {
return fmt.Errorf("failed to create certificate: %w", err)
}
// 5. Default output filenames
if certFile == "" {
certFile = "server.crt"
}
if keyFile == "" {
keyFile = "server.key"
}
// 6. Save the certificate with 0644 permissions
if err := saveCert(certFile, certDER); err != nil {
return err
}
if err := saveKey(keyFile, priv); err != nil {
return err
}
// 7. Print summary
fmt.Printf("\n✓ Self-signed certificate generated:\n")
fmt.Printf(" Certificate: %s\n", certFile)
fmt.Printf(" Private Key: %s (mode 0600)\n", keyFile)
fmt.Printf(" Valid for: %d days\n", days)
fmt.Printf(" Common Name: %s\n", cn)
if len(hosts) > 0 {
fmt.Printf(" Hosts (SANs): %s\n", hosts)
}
return nil
}
// Generate server cert with CA
func (tc *TLSCommand) generateServerCert(cn, org, country, hosts, caFile, caKeyFile string, days, bits int, certFile, keyFile string) error {
caCert, caKey, err := loadCA(caFile, caKeyFile)
if err != nil {
return err
}
priv, err := rsa.GenerateKey(rand.Reader, bits)
if err != nil {
return fmt.Errorf("failed to generate server private key: %w", err)
}
dnsNames, ipAddrs := parseHosts(hosts)
serialNumber, _ := rand.Int(rand.Reader, new(big.Int).Lsh(big.NewInt(1), 128))
certExpiry := time.Now().AddDate(0, 0, days)
if certExpiry.After(caCert.NotAfter) {
return fmt.Errorf("certificate validity period (%d days) exceeds CA expiry (%s)", days, caCert.NotAfter.Format(time.RFC3339))
}
template := x509.Certificate{
SerialNumber: serialNumber,
Subject: pkix.Name{
CommonName: cn,
Organization: []string{org},
Country: []string{country},
},
NotBefore: time.Now(),
NotAfter: certExpiry,
KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
DNSNames: dnsNames,
IPAddresses: ipAddrs,
}
certDER, err := x509.CreateCertificate(rand.Reader, &template, caCert, &priv.PublicKey, caKey)
if err != nil {
return fmt.Errorf("failed to sign server certificate: %w", err)
}
if certFile == "" {
certFile = "server.crt"
}
if keyFile == "" {
keyFile = "server.key"
}
if err := saveCert(certFile, certDER); err != nil {
return err
}
if err := saveKey(keyFile, priv); err != nil {
return err
}
fmt.Printf("\n✓ Server certificate generated:\n")
fmt.Printf(" Certificate: %s\n", certFile)
fmt.Printf(" Private Key: %s (mode 0600)\n", keyFile)
fmt.Printf(" Signed by: CN=%s\n", caCert.Subject.CommonName)
if len(hosts) > 0 {
fmt.Printf(" Hosts (SANs): %s\n", hosts)
}
return nil
}
// Generate client cert with CA
func (tc *TLSCommand) generateClientCert(cn, org, country, caFile, caKeyFile string, days, bits int, certFile, keyFile string) error {
caCert, caKey, err := loadCA(caFile, caKeyFile)
if err != nil {
return err
}
priv, err := rsa.GenerateKey(rand.Reader, bits)
if err != nil {
return fmt.Errorf("failed to generate client private key: %w", err)
}
serialNumber, _ := rand.Int(rand.Reader, new(big.Int).Lsh(big.NewInt(1), 128))
certExpiry := time.Now().AddDate(0, 0, days)
if certExpiry.After(caCert.NotAfter) {
return fmt.Errorf("certificate validity period (%d days) exceeds CA expiry (%s)", days, caCert.NotAfter.Format(time.RFC3339))
}
template := x509.Certificate{
SerialNumber: serialNumber,
Subject: pkix.Name{
CommonName: cn,
Organization: []string{org},
Country: []string{country},
},
NotBefore: time.Now(),
NotAfter: certExpiry,
KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
}
certDER, err := x509.CreateCertificate(rand.Reader, &template, caCert, &priv.PublicKey, caKey)
if err != nil {
return fmt.Errorf("failed to sign client certificate: %w", err)
}
if certFile == "" {
certFile = "client.crt"
}
if keyFile == "" {
keyFile = "client.key"
}
if err := saveCert(certFile, certDER); err != nil {
return err
}
if err := saveKey(keyFile, priv); err != nil {
return err
}
fmt.Printf("\n✓ Client certificate generated:\n")
fmt.Printf(" Certificate: %s\n", certFile)
fmt.Printf(" Private Key: %s (mode 0600)\n", keyFile)
fmt.Printf(" Signed by: CN=%s\n", caCert.Subject.CommonName)
return nil
}
// Load cert with CA
func loadCA(certFile, keyFile string) (*x509.Certificate, *rsa.PrivateKey, error) {
// Load CA certificate
certPEM, err := os.ReadFile(certFile)
if err != nil {
return nil, nil, fmt.Errorf("failed to read CA certificate: %w", err)
}
certBlock, _ := pem.Decode(certPEM)
if certBlock == nil || certBlock.Type != "CERTIFICATE" {
return nil, nil, fmt.Errorf("invalid CA certificate format")
}
caCert, err := x509.ParseCertificate(certBlock.Bytes)
if err != nil {
return nil, nil, fmt.Errorf("failed to parse CA certificate: %w", err)
}
// Load CA private key
keyPEM, err := os.ReadFile(keyFile)
if err != nil {
return nil, nil, fmt.Errorf("failed to read CA key: %w", err)
}
keyBlock, _ := pem.Decode(keyPEM)
if keyBlock == nil {
return nil, nil, fmt.Errorf("invalid CA key format")
}
var caKey *rsa.PrivateKey
switch keyBlock.Type {
case "RSA PRIVATE KEY":
caKey, err = x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
case "PRIVATE KEY":
parsedKey, err := x509.ParsePKCS8PrivateKey(keyBlock.Bytes)
if err != nil {
return nil, nil, fmt.Errorf("failed to parse CA key: %w", err)
}
var ok bool
caKey, ok = parsedKey.(*rsa.PrivateKey)
if !ok {
return nil, nil, fmt.Errorf("CA key is not RSA")
}
default:
return nil, nil, fmt.Errorf("unsupported CA key type: %s", keyBlock.Type)
}
if err != nil {
return nil, nil, fmt.Errorf("failed to parse CA private key: %w", err)
}
// Verify CA certificate is actually a CA
if !caCert.IsCA {
return nil, nil, fmt.Errorf("certificate is not a CA certificate")
}
return caCert, caKey, nil
}
func saveCert(filename string, certDER []byte) error {
certFile, err := os.Create(filename)
if err != nil {
return fmt.Errorf("failed to create certificate file: %w", err)
}
defer certFile.Close()
if err := pem.Encode(certFile, &pem.Block{
Type: "CERTIFICATE",
Bytes: certDER,
}); err != nil {
return fmt.Errorf("failed to write certificate: %w", err)
}
// Set readable permissions
if err := os.Chmod(filename, 0644); err != nil {
return fmt.Errorf("failed to set certificate permissions: %w", err)
}
return nil
}
func saveKey(filename string, key *rsa.PrivateKey) error {
keyFile, err := os.Create(filename)
if err != nil {
return fmt.Errorf("failed to create key file: %w", err)
}
defer keyFile.Close()
privKeyDER := x509.MarshalPKCS1PrivateKey(key)
if err := pem.Encode(keyFile, &pem.Block{
Type: "RSA PRIVATE KEY",
Bytes: privKeyDER,
}); err != nil {
return fmt.Errorf("failed to write private key: %w", err)
}
// Set restricted permissions for private key
if err := os.Chmod(filename, 0600); err != nil {
return fmt.Errorf("failed to set key permissions: %w", err)
}
return nil
}
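
To sanity-check the output of `--server` against a CA from `--ca`, a standalone sketch using only the standard library works; the file names follow the help examples above, and the DNS name is an assumption:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	caPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	roots := x509.NewCertPool()
	if !roots.AppendCertsFromPEM(caPEM) {
		panic("no CA certificate found in ca.crt")
	}

	certPEM, err := os.ReadFile("server.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(certPEM)
	if block == nil {
		panic("server.crt is not PEM")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	// Chain and SAN verification in one call; DNSName must match a SAN.
	if _, err := cert.Verify(x509.VerifyOptions{
		Roots:   roots,
		DNSName: "logwisp.example.com",
	}); err != nil {
		panic(err)
	}
	fmt.Println("certificate chain OK")
}
```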

View File

@ -0,0 +1,41 @@
// FILE: src/cmd/logwisp/commands/version.go
package commands
import (
"fmt"
"logwisp/src/internal/version"
)
// VersionCommand handles version display
type VersionCommand struct{}
// NewVersionCommand creates a new version command
func NewVersionCommand() *VersionCommand {
return &VersionCommand{}
}
func (c *VersionCommand) Execute(args []string) error {
fmt.Println(version.String())
return nil
}
func (c *VersionCommand) Description() string {
return "Show version information"
}
func (c *VersionCommand) Help() string {
return `Version Command - Show LogWisp version information
Usage:
logwisp version
logwisp -v
logwisp --version
Output includes:
- Version number
- Build date
- Git commit hash (if available)
- Go version used for compilation
`
}

View File

@ -1,56 +0,0 @@
// FILE: logwisp/src/cmd/logwisp/help.go
package main
import (
"fmt"
"os"
)
const helpText = `LogWisp: A flexible log transport and processing tool.
Usage: logwisp [options]
Application Control:
-c, --config <path> (string) Path to configuration file (default: logwisp.toml).
-h, --help Display this help message and exit.
-v, --version Display version information and exit.
-b, --background Run LogWisp in the background as a daemon.
-q, --quiet Suppress all console output, including errors.
Runtime Behavior:
--disable-status-reporter Disable the periodic status reporter.
--config-auto-reload Enable config reload and pipeline reconfiguration on config file change.
Configuration Sources (Precedence: CLI > Env > File > Defaults):
- CLI flags override all other settings.
- Environment variables override file settings.
- TOML configuration file is the primary method for defining pipelines.
Logging ([logging] section or LOGWISP_LOGGING_* env vars):
output = "stderr" (string) Log output: none, stdout, stderr, file, both.
level = "info" (string) Log level: debug, info, warn, error.
[logging.file] Settings for file logging (directory, name, rotation).
[logging.console] Settings for console logging (target, format).
Pipelines ([[pipelines]] array in TOML):
Each pipeline defines a complete data flow from sources to sinks.
name = "my_pipeline" (string) Unique name for the pipeline.
sources = [...] (array) Data inputs (e.g., directory, stdin, http, tcp).
sinks = [...] (array) Data outputs (e.g., http, tcp, file, stdout, stderr, http_client).
filters = [...] (array) Optional filters to include/exclude logs based on regex.
rate_limit = {...} (object) Optional rate limiting for the entire pipeline.
auth = {...} (object) Optional authentication for network sinks.
format = "json" (string) Optional output formatter for the pipeline (raw, text, json).
For detailed configuration options, please refer to the documentation.
`
// CheckAndDisplayHelp scans arguments for help flags and prints help text if found.
func CheckAndDisplayHelp(args []string) {
for _, arg := range args {
if arg == "-h" || arg == "--help" {
fmt.Fprint(os.Stdout, helpText)
os.Exit(0)
}
}
}

View File

@ -11,6 +11,7 @@ import (
"syscall"
"time"
"logwisp/src/cmd/logwisp/commands"
"logwisp/src/internal/config"
"logwisp/src/internal/version"
@ -20,12 +21,27 @@ import (
var logger *log.Logger
func main() {
// Handle subcommands before any config loading
// This prevents flag conflicts with lixenwraith/config
router := commands.NewCommandRouter()
handled, err := router.Route(os.Args)
if err != nil {
// Command execution error
fmt.Fprintf(os.Stderr, "Error: %v\n", err)
os.Exit(1)
}
if handled {
// Command was successfully handled
os.Exit(0)
}
// No subcommand, continue with main application
// Emulates nohup
signal.Ignore(syscall.SIGHUP)
// Early check for help flag to avoid unnecessary config loading
CheckAndDisplayHelp(os.Args[1:])
// Load configuration with automatic CLI parsing
cfg, err := config.Load(os.Args[1:])
if err != nil {
@ -153,8 +169,6 @@ func main() {
select {
case <-done:
// Save configuration after graceful shutdown (no reload manager in static mode)
saveConfigurationOnExit(cfg, nil, logger)
logger.Info("msg", "Shutdown complete")
case <-shutdownCtx.Done():
logger.Error("msg", "Shutdown timeout exceeded - forcing exit")
@ -167,9 +181,6 @@ func main() {
// Wait for context cancellation
<-ctx.Done()
// Save configuration before final shutdown, handled by reloadManager
saveConfigurationOnExit(cfg, reloadManager, logger)
// Shutdown is handled by ReloadManager.Shutdown() in defer
logger.Info("msg", "Shutdown complete")
}
@ -181,48 +192,4 @@ func shutdownLogger() {
Error("Logger shutdown error: %v\n", err)
}
}
}
// saveConfigurationOnExit saves the configuration to file on exit
func saveConfigurationOnExit(cfg *config.Config, reloadManager *ReloadManager, logger *log.Logger) {
// Only save if explicitly enabled and we have a valid path
if !cfg.ConfigSaveOnExit || cfg.ConfigFile == "" {
return
}
// Create a context with timeout for save operation
saveCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
// Perform save in goroutine to respect timeout
done := make(chan error, 1)
go func() {
var err error
if reloadManager != nil && reloadManager.lcfg != nil {
// Use existing lconfig instance from reload manager
// This ensures we save through the same configuration system
err = reloadManager.lcfg.Save(cfg.ConfigFile)
} else {
// Static mode: create temporary lconfig for saving
err = cfg.SaveToFile(cfg.ConfigFile)
}
done <- err
}()
select {
case err := <-done:
if err != nil {
logger.Error("msg", "Failed to save configuration on exit",
"path", cfg.ConfigFile,
"error", err)
// Don't fail the exit on save error
} else {
logger.Info("msg", "Configuration saved successfully",
"path", cfg.ConfigFile)
}
case <-saveCtx.Done():
logger.Error("msg", "Configuration save timeout exceeded",
"path", cfg.ConfigFile,
"timeout", "5s")
}
}

View File

@ -8,7 +8,7 @@ import (
"sync"
)
// OutputHandler manages all application output respecting quiet mode
// Manages all application output respecting quiet mode
type OutputHandler struct {
quiet bool
mu sync.RWMutex
@ -19,7 +19,7 @@ type OutputHandler struct {
// Global output handler instance
var output *OutputHandler
// InitOutputHandler initializes the global output handler
// Initializes the global output handler
func InitOutputHandler(quiet bool) {
output = &OutputHandler{
quiet: quiet,
@ -28,7 +28,7 @@ func InitOutputHandler(quiet bool) {
}
}
// Print writes to stdout if not in quiet mode
// Writes to stdout if not in quiet mode
func (o *OutputHandler) Print(format string, args ...any) {
o.mu.RLock()
defer o.mu.RUnlock()
@ -38,7 +38,7 @@ func (o *OutputHandler) Print(format string, args ...any) {
}
}
// Error writes to stderr if not in quiet mode
// Writes to stderr if not in quiet mode
func (o *OutputHandler) Error(format string, args ...any) {
o.mu.RLock()
defer o.mu.RUnlock()
@ -48,20 +48,20 @@ func (o *OutputHandler) Error(format string, args ...any) {
}
}
// FatalError writes to stderr and exits (respects quiet mode)
// Writes to stderr and exits (respects quiet mode)
func (o *OutputHandler) FatalError(code int, format string, args ...any) {
o.Error(format, args...)
os.Exit(code)
}
// IsQuiet returns the current quiet mode status
// Returns the current quiet mode status
func (o *OutputHandler) IsQuiet() bool {
o.mu.RLock()
defer o.mu.RUnlock()
return o.quiet
}
// SetQuiet updates quiet mode (useful for testing)
// Updates quiet mode (useful for testing)
func (o *OutputHandler) SetQuiet(quiet bool) {
o.mu.Lock()
defer o.mu.Unlock()

View File

@ -4,8 +4,10 @@ package main
import (
"context"
"fmt"
"os"
"strings"
"sync"
"syscall"
"time"
"logwisp/src/internal/config"
@ -15,7 +17,7 @@ import (
"github.com/lixenwraith/log"
)
// ReloadManager handles configuration hot reload
// Handles configuration hot reload
type ReloadManager struct {
configPath string
service *service.Service
@ -33,7 +35,7 @@ type ReloadManager struct {
statusReporterMu sync.Mutex
}
// NewReloadManager creates a new reload manager
// Creates a new reload manager
func NewReloadManager(configPath string, initialCfg *config.Config, logger *log.Logger) *ReloadManager {
return &ReloadManager{
configPath: configPath,
@ -43,7 +45,7 @@ func NewReloadManager(configPath string, initialCfg *config.Config, logger *log.
}
}
// Start begins watching for configuration changes
// Begins watching for configuration changes
func (rm *ReloadManager) Start(ctx context.Context) error {
// Bootstrap initial service
svc, err := bootstrapService(ctx, rm.cfg)
@ -60,18 +62,11 @@ func (rm *ReloadManager) Start(ctx context.Context) error {
rm.startStatusReporter(ctx, svc)
}
// Create lconfig instance for file watching, logwisp config is always TOML
lcfg, err := lconfig.NewBuilder().
WithFile(rm.configPath).
WithTarget(rm.cfg).
WithFileFormat("toml").
WithSecurityOptions(lconfig.SecurityOptions{
PreventPathTraversal: true,
MaxFileSize: 10 * 1024 * 1024,
}).
Build()
if err != nil {
return fmt.Errorf("failed to create config watcher: %w", err)
// Use the same lconfig instance from initial load
lcfg := config.GetConfigManager()
if lcfg == nil {
// Config manager not initialized - potential for config bypass
return fmt.Errorf("config manager not initialized - cannot enable hot reload")
}
rm.lcfg = lcfg
@ -81,7 +76,7 @@ func (rm *ReloadManager) Start(ctx context.Context) error {
PollInterval: time.Second,
Debounce: 500 * time.Millisecond,
ReloadTimeout: 30 * time.Second,
VerifyPermissions: true, // TODO: Prevent malicious config replacement, to be implemented
VerifyPermissions: true,
}
lcfg.AutoUpdateWithOptions(watchOpts)
@ -95,7 +90,7 @@ func (rm *ReloadManager) Start(ctx context.Context) error {
return nil
}
// watchLoop monitors configuration changes
// Monitors configuration changes
func (rm *ReloadManager) watchLoop(ctx context.Context) {
defer rm.wg.Done()
@ -115,7 +110,7 @@ func (rm *ReloadManager) watchLoop(ctx context.Context) {
"action", "keeping current configuration")
continue
case "permissions_changed":
// SECURITY: Config file permissions changed suspiciously
// Config file permissions changed suspiciously; overlaps with the explicit permission check below
rm.logger.Error("msg", "Configuration file permissions changed",
"action", "reload blocked for security")
continue
@ -132,6 +127,15 @@ func (rm *ReloadManager) watchLoop(ctx context.Context) {
}
}
// Verify file permissions before reload
if err := verifyFilePermissions(rm.configPath); err != nil {
rm.logger.Error("msg", "Configuration file permission check failed",
"path", rm.configPath,
"error", err,
"action", "reload blocked for security")
continue
}
// Trigger reload for any pipeline-related change
if rm.shouldReload(changedPath) {
rm.triggerReload(ctx)
@ -140,7 +144,37 @@ func (rm *ReloadManager) watchLoop(ctx context.Context) {
}
}
// shouldReload determines if a config change requires service reload
// Verify file permissions for security
func verifyFilePermissions(path string) error {
info, err := os.Stat(path)
if err != nil {
return fmt.Errorf("failed to stat config file: %w", err)
}
// Extract file mode and system stats
mode := info.Mode()
stat, ok := info.Sys().(*syscall.Stat_t)
if !ok {
return fmt.Errorf("unable to get file ownership info")
}
// Check ownership - must be current user or root
currentUID := uint32(os.Getuid())
if stat.Uid != currentUID && stat.Uid != 0 {
return fmt.Errorf("config file owned by uid %d, expected %d or 0", stat.Uid, currentUID)
}
// Check permissions - must not be writable by group or other
perm := mode.Perm()
if perm&0022 != 0 {
// Group or other has write permission
return fmt.Errorf("insecure permissions %04o - file must not be writable by group/other", perm)
}
return nil
}
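
Operators can pre-empt the `perm&0022` rejection by clearing group/other write bits before enabling `--config-auto-reload`. A minimal sketch, assuming the default `logwisp.toml` path:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "logwisp.toml" // assumed config location
	info, err := os.Stat(path)
	if err != nil {
		panic(err)
	}
	perm := info.Mode().Perm()
	if perm&0022 != 0 {
		// Clear group/other write bits, mirroring the check above.
		if err := os.Chmod(path, perm&^0022); err != nil {
			panic(err)
		}
		fmt.Printf("tightened %s from %04o to %04o\n", path, perm, perm&^0022)
	}
}
```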
// Determines if a config change requires service reload
func (rm *ReloadManager) shouldReload(path string) bool {
// Pipeline changes always require reload
if strings.HasPrefix(path, "pipelines.") || path == "pipelines" {
@ -160,7 +194,7 @@ func (rm *ReloadManager) shouldReload(path string) bool {
return false
}
// triggerReload performs the actual reload
// Performs the actual reload
func (rm *ReloadManager) triggerReload(ctx context.Context) {
// Prevent concurrent reloads
rm.reloadingMu.Lock()
@ -194,7 +228,7 @@ func (rm *ReloadManager) triggerReload(ctx context.Context) {
rm.logger.Info("msg", "Configuration hot reload completed successfully")
}
// performReload executes the reload process
// Executes the reload process
func (rm *ReloadManager) performReload(ctx context.Context) error {
// Get updated config from lconfig
updatedCfg, err := rm.lcfg.AsStruct()
@ -202,8 +236,14 @@ func (rm *ReloadManager) performReload(ctx context.Context) error {
return fmt.Errorf("failed to get updated config: %w", err)
}
// AsStruct returns the target pointer, not a new instance
newCfg := updatedCfg.(*config.Config)
// Validate the new config
if err := config.ValidateConfig(newCfg); err != nil {
return fmt.Errorf("updated config validation failed: %w", err)
}
// Get current service snapshot
rm.mu.RLock()
oldService := rm.service
@ -226,14 +266,13 @@ func (rm *ReloadManager) performReload(ctx context.Context) error {
// Stop old status reporter and start new one
rm.restartStatusReporter(ctx, newService)
// Gracefully shutdown old services
// This happens after the swap to minimize downtime
// Gracefully shutdown old services after swap to minimize downtime
go rm.shutdownOldServices(oldService)
return nil
}
// shutdownOldServices gracefully shuts down old services
// Gracefully shuts down old services
func (rm *ReloadManager) shutdownOldServices(svc *service.Service) {
// Give connections time to drain
rm.logger.Debug("msg", "Draining connections from old services")
@ -247,7 +286,7 @@ func (rm *ReloadManager) shutdownOldServices(svc *service.Service) {
rm.logger.Debug("msg", "Old services shutdown complete")
}
// startStatusReporter starts a new status reporter
// Starts a new status reporter
func (rm *ReloadManager) startStatusReporter(ctx context.Context, svc *service.Service) {
rm.statusReporterMu.Lock()
defer rm.statusReporterMu.Unlock()
@ -260,7 +299,7 @@ func (rm *ReloadManager) startStatusReporter(ctx context.Context, svc *service.S
rm.logger.Debug("msg", "Started status reporter")
}
// restartStatusReporter stops old and starts new status reporter
// Stops old and starts new status reporter
func (rm *ReloadManager) restartStatusReporter(ctx context.Context, newService *service.Service) {
if rm.cfg.DisableStatusReporter {
// Just stop the old one if disabled
@ -285,7 +324,7 @@ func (rm *ReloadManager) restartStatusReporter(ctx context.Context, newService *
rm.logger.Debug("msg", "Started new status reporter")
}
// stopStatusReporter stops the status reporter
// Stops the status reporter
func (rm *ReloadManager) stopStatusReporter() {
rm.statusReporterMu.Lock()
defer rm.statusReporterMu.Unlock()
@ -297,15 +336,7 @@ func (rm *ReloadManager) stopStatusReporter() {
}
}
// SaveConfig is a wrapper to save the config
func (rm *ReloadManager) SaveConfig(path string) error {
if rm.lcfg == nil {
return fmt.Errorf("no lconfig instance available")
}
return rm.lcfg.Save(path)
}
// Shutdown stops the reload manager
// Stops the reload manager
func (rm *ReloadManager) Shutdown() {
rm.logger.Info("msg", "Shutting down reload manager")
@ -332,7 +363,7 @@ func (rm *ReloadManager) Shutdown() {
}
}
// GetService returns the current service (thread-safe)
// Returns the current service (thread-safe)
func (rm *ReloadManager) GetService() *service.Service {
rm.mu.RLock()
defer rm.mu.RUnlock()

View File

@ -10,14 +10,14 @@ import (
"github.com/lixenwraith/log"
)
// SignalHandler manages OS signals
// Manages OS signals
type SignalHandler struct {
reloadManager *ReloadManager
logger *log.Logger
sigChan chan os.Signal
}
// NewSignalHandler creates a signal handler
// Creates a signal handler
func NewSignalHandler(rm *ReloadManager, logger *log.Logger) *SignalHandler {
sh := &SignalHandler{
reloadManager: rm,
@ -36,7 +36,7 @@ func NewSignalHandler(rm *ReloadManager, logger *log.Logger) *SignalHandler {
return sh
}
// Handle processes signals
// Processes signals
func (sh *SignalHandler) Handle(ctx context.Context) os.Signal {
for {
select {
@ -58,7 +58,7 @@ func (sh *SignalHandler) Handle(ctx context.Context) os.Signal {
}
}
// Stop cleans up signal handling
// Cleans up signal handling
func (sh *SignalHandler) Stop() {
signal.Stop(sh.sigChan)
close(sh.sigChan)

View File

@ -10,7 +10,7 @@ import (
"logwisp/src/internal/service"
)
// statusReporter periodically logs service status
// Periodically logs service status
func statusReporter(service *service.Service, ctx context.Context) {
ticker := time.NewTicker(30 * time.Second)
defer ticker.Stop()
@ -60,7 +60,7 @@ func statusReporter(service *service.Service, ctx context.Context) {
}
}
// logPipelineStatus logs the status of an individual pipeline
// Logs the status of an individual pipeline
func logPipelineStatus(name string, stats map[string]any) {
statusFields := []any{
"msg", "Pipeline status",
@ -108,84 +108,146 @@ func logPipelineStatus(name string, stats map[string]any) {
logger.Debug(statusFields...)
}
// displayPipelineEndpoints logs the configured endpoints for a pipeline
// Logs the configured endpoints for a pipeline
func displayPipelineEndpoints(cfg config.PipelineConfig) {
// Display sink endpoints
for i, sinkCfg := range cfg.Sinks {
switch sinkCfg.Type {
case "tcp":
if port, ok := sinkCfg.Options["port"].(int64); ok {
if sinkCfg.TCP != nil {
host := "0.0.0.0"
if sinkCfg.TCP.Host != "" {
host = sinkCfg.TCP.Host
}
logger.Info("msg", "TCP endpoint configured",
"component", "main",
"pipeline", cfg.Name,
"sink_index", i,
"port", port)
"listen", fmt.Sprintf("%s:%d", host, sinkCfg.TCP.Port))
// Display net limit info if configured
if rl, ok := sinkCfg.Options["net_limit"].(map[string]any); ok {
if enabled, ok := rl["enabled"].(bool); ok && enabled {
logger.Info("msg", "TCP net limiting enabled",
"pipeline", cfg.Name,
"sink_index", i,
"requests_per_second", rl["requests_per_second"],
"burst_size", rl["burst_size"])
}
if sinkCfg.TCP.NetLimit != nil && sinkCfg.TCP.NetLimit.Enabled {
logger.Info("msg", "TCP net limiting enabled",
"pipeline", cfg.Name,
"sink_index", i,
"requests_per_second", sinkCfg.TCP.NetLimit.RequestsPerSecond,
"burst_size", sinkCfg.TCP.NetLimit.BurstSize)
}
}
case "http":
if port, ok := sinkCfg.Options["port"].(int64); ok {
streamPath := "/transport"
statusPath := "/status"
if path, ok := sinkCfg.Options["stream_path"].(string); ok {
streamPath = path
if sinkCfg.HTTP != nil {
host := "0.0.0.0"
if sinkCfg.HTTP.Host != "" {
host = sinkCfg.HTTP.Host
}
if path, ok := sinkCfg.Options["status_path"].(string); ok {
statusPath = path
streamPath := "/stream"
statusPath := "/status"
if sinkCfg.HTTP.StreamPath != "" {
streamPath = sinkCfg.HTTP.StreamPath
}
if sinkCfg.HTTP.StatusPath != "" {
statusPath = sinkCfg.HTTP.StatusPath
}
logger.Info("msg", "HTTP endpoints configured",
"pipeline", cfg.Name,
"sink_index", i,
"stream_url", fmt.Sprintf("http://localhost:%d%s", port, streamPath),
"status_url", fmt.Sprintf("http://localhost:%d%s", port, statusPath))
"listen", fmt.Sprintf("%s:%d", host, sinkCfg.HTTP.Port),
"stream_url", fmt.Sprintf("http://%s:%d%s", host, sinkCfg.HTTP.Port, streamPath),
"status_url", fmt.Sprintf("http://%s:%d%s", host, sinkCfg.HTTP.Port, statusPath))
// Display net limit info if configured
if rl, ok := sinkCfg.Options["net_limit"].(map[string]any); ok {
if enabled, ok := rl["enabled"].(bool); ok && enabled {
logger.Info("msg", "HTTP net limiting enabled",
"pipeline", cfg.Name,
"sink_index", i,
"requests_per_second", rl["requests_per_second"],
"burst_size", rl["burst_size"],
"limit_by", rl["limit_by"])
}
if sinkCfg.HTTP.NetLimit != nil && sinkCfg.HTTP.NetLimit.Enabled {
logger.Info("msg", "HTTP net limiting enabled",
"pipeline", cfg.Name,
"sink_index", i,
"requests_per_second", sinkCfg.HTTP.NetLimit.RequestsPerSecond,
"burst_size", sinkCfg.HTTP.NetLimit.BurstSize)
}
}
case "file":
if dir, ok := sinkCfg.Options["directory"].(string); ok {
name, _ := sinkCfg.Options["name"].(string)
if sinkCfg.File != nil {
logger.Info("msg", "File sink configured",
"pipeline", cfg.Name,
"sink_index", i,
"directory", dir,
"name", name)
"directory", sinkCfg.File.Directory,
"name", sinkCfg.File.Name)
}
case "stdout", "stderr":
logger.Info("msg", "Console sink configured",
"pipeline", cfg.Name,
"sink_index", i,
"type", sinkCfg.Type)
case "console":
if sinkCfg.Console != nil {
logger.Info("msg", "Console sink configured",
"pipeline", cfg.Name,
"sink_index", i,
"target", sinkCfg.Console.Target)
}
}
}
// Display authentication information
if cfg.Auth != nil && cfg.Auth.Type != "none" {
logger.Info("msg", "Authentication enabled",
"pipeline", cfg.Name,
"auth_type", cfg.Auth.Type)
// Display source endpoints with host support
for i, sourceCfg := range cfg.Sources {
switch sourceCfg.Type {
case "http":
if sourceCfg.HTTP != nil {
host := "0.0.0.0"
if sourceCfg.HTTP.Host != "" {
host = sourceCfg.HTTP.Host
}
displayHost := host
if host == "0.0.0.0" {
displayHost = "localhost"
}
ingestPath := "/ingest"
if sourceCfg.HTTP.IngestPath != "" {
ingestPath = sourceCfg.HTTP.IngestPath
}
logger.Info("msg", "HTTP source configured",
"pipeline", cfg.Name,
"source_index", i,
"listen", fmt.Sprintf("%s:%d", host, sourceCfg.HTTP.Port),
"ingest_url", fmt.Sprintf("http://%s:%d%s", displayHost, sourceCfg.HTTP.Port, ingestPath))
}
case "tcp":
if sourceCfg.TCP != nil {
host := "0.0.0.0"
if sourceCfg.TCP.Host != "" {
host = sourceCfg.TCP.Host
}
displayHost := host
if host == "0.0.0.0" {
displayHost = "localhost"
}
logger.Info("msg", "TCP source configured",
"pipeline", cfg.Name,
"source_index", i,
"listen", fmt.Sprintf("%s:%d", host, sourceCfg.TCP.Port),
"endpoint", fmt.Sprintf("%s:%d", displayHost, sourceCfg.TCP.Port))
}
case "directory":
if sourceCfg.Directory != nil {
logger.Info("msg", "Directory source configured",
"pipeline", cfg.Name,
"source_index", i,
"path", sourceCfg.Directory.Path,
"pattern", sourceCfg.Directory.Pattern)
}
case "stdin":
logger.Info("msg", "Stdin source configured",
"pipeline", cfg.Name,
"source_index", i)
}
}
// Display filter information

View File

@ -2,129 +2,69 @@
package auth
import (
"bufio"
"crypto/rand"
"encoding/base64"
"fmt"
"net"
"os"
"strings"
"sync"
"time"
"logwisp/src/internal/config"
"github.com/golang-jwt/jwt/v5"
"github.com/lixenwraith/log"
"golang.org/x/crypto/bcrypt"
"golang.org/x/time/rate"
)
// Prevent unbounded map growth
const maxAuthTrackedIPs = 10000
// Authenticator handles all authentication methods for a pipeline
// Handles all authentication methods for a pipeline
type Authenticator struct {
config *config.AuthConfig
logger *log.Logger
basicUsers map[string]string // username -> password hash
bearerTokens map[string]bool // token -> valid
jwtParser *jwt.Parser
jwtKeyFunc jwt.Keyfunc
mu sync.RWMutex
config *config.ServerAuthConfig
logger *log.Logger
tokens map[string]bool // token -> valid
mu sync.RWMutex
// Session tracking
sessions map[string]*Session
sessionMu sync.RWMutex
// Brute-force protection
ipAuthAttempts map[string]*ipAuthState
authMu sync.RWMutex
}
// ADDED: Per-IP auth attempt tracking
type ipAuthState struct {
limiter *rate.Limiter
failCount int
lastAttempt time.Time
blockedUntil time.Time
}
// Session represents an authenticated connection
// TODO: only one connection per user, token, mtls
// TODO: implement tracker logic
// Represents an authenticated connection
type Session struct {
ID string
Username string
Method string // basic, bearer, jwt, mtls
Method string // basic, token, mtls
RemoteAddr string
CreatedAt time.Time
LastActivity time.Time
Metadata map[string]any
}
// New creates a new authenticator from config
func New(cfg *config.AuthConfig, logger *log.Logger) (*Authenticator, error) {
if cfg == nil || cfg.Type == "none" {
// Creates a new authenticator from config
func NewAuthenticator(cfg *config.ServerAuthConfig, logger *log.Logger) (*Authenticator, error) {
// SCRAM is handled by ScramManager in sources
if cfg == nil || cfg.Type == "none" || cfg.Type == "scram" {
return nil, nil
}
a := &Authenticator{
config: cfg,
logger: logger,
basicUsers: make(map[string]string),
bearerTokens: make(map[string]bool),
sessions: make(map[string]*Session),
ipAuthAttempts: make(map[string]*ipAuthState),
config: cfg,
logger: logger,
tokens: make(map[string]bool),
sessions: make(map[string]*Session),
}
// Initialize Basic Auth users
if cfg.Type == "basic" && cfg.BasicAuth != nil {
for _, user := range cfg.BasicAuth.Users {
a.basicUsers[user.Username] = user.PasswordHash
}
// Load users from file if specified
if cfg.BasicAuth.UsersFile != "" {
if err := a.loadUsersFile(cfg.BasicAuth.UsersFile); err != nil {
return nil, fmt.Errorf("failed to load users file: %w", err)
}
}
}
// Initialize Bearer tokens
if cfg.Type == "bearer" && cfg.BearerAuth != nil {
for _, token := range cfg.BearerAuth.Tokens {
a.bearerTokens[token] = true
}
// Setup JWT validation if configured
if cfg.BearerAuth.JWT != nil {
a.jwtParser = jwt.NewParser(
jwt.WithValidMethods([]string{"HS256", "HS384", "HS512", "RS256", "RS384", "RS512", "ES256", "ES384", "ES512"}),
jwt.WithLeeway(5*time.Second),
jwt.WithExpirationRequired(),
)
// Setup key function
if cfg.BearerAuth.JWT.SigningKey != "" {
// Static key
key := []byte(cfg.BearerAuth.JWT.SigningKey)
a.jwtKeyFunc = func(token *jwt.Token) (interface{}, error) {
return key, nil
}
} else if cfg.BearerAuth.JWT.JWKSURL != "" {
// JWKS support would require additional implementation
// ☢ SECURITY: JWKS rotation not implemented - tokens won't refresh keys
return nil, fmt.Errorf("JWKS support not yet implemented")
}
// Initialize tokens
if cfg.Type == "token" && cfg.Token != nil {
for _, token := range cfg.Token.Tokens {
a.tokens[token] = true
}
}
// Start session cleanup
go a.sessionCleanup()
// Start auth attempt cleanup
go a.authAttemptCleanup()
logger.Info("msg", "Authenticator initialized",
"component", "auth",
"type", cfg.Type)
@ -132,130 +72,7 @@ func New(cfg *config.AuthConfig, logger *log.Logger) (*Authenticator, error) {
return a, nil
}
// Check and enforce rate limits
func (a *Authenticator) checkRateLimit(remoteAddr string) error {
ip, _, err := net.SplitHostPort(remoteAddr)
if err != nil {
ip = remoteAddr // Fallback for malformed addresses
}
a.authMu.Lock()
defer a.authMu.Unlock()
state, exists := a.ipAuthAttempts[ip]
now := time.Now()
if !exists {
// Check map size limit before creating new entry
if len(a.ipAuthAttempts) >= maxAuthTrackedIPs {
// Evict an old entry using simplified LRU
// Sample 20 random entries and evict the oldest
const sampleSize = 20
var oldestIP string
oldestTime := now
// Build sample
sampled := 0
for sampledIP, sampledState := range a.ipAuthAttempts {
if sampledState.lastAttempt.Before(oldestTime) {
oldestIP = sampledIP
oldestTime = sampledState.lastAttempt
}
sampled++
if sampled >= sampleSize {
break
}
}
// Evict the oldest from our sample
if oldestIP != "" {
delete(a.ipAuthAttempts, oldestIP)
a.logger.Debug("msg", "Evicted old auth attempt state",
"component", "auth",
"evicted_ip", oldestIP,
"last_seen", oldestTime)
}
}
// Create new state for this IP
// 5 attempts per minute, burst of 3
state = &ipAuthState{
limiter: rate.NewLimiter(rate.Every(12*time.Second), 3),
lastAttempt: now,
}
a.ipAuthAttempts[ip] = state
}
// Check if IP is temporarily blocked
if now.Before(state.blockedUntil) {
remaining := state.blockedUntil.Sub(now)
a.logger.Warn("msg", "IP temporarily blocked",
"component", "auth",
"ip", ip,
"remaining", remaining)
// Sleep to slow down even blocked attempts
time.Sleep(2 * time.Second)
return fmt.Errorf("temporarily blocked, try again in %v", remaining.Round(time.Second))
}
// Check rate limit
if !state.limiter.Allow() {
state.failCount++
// Only set new blockedUntil if not already blocked
// This prevents indefinite block extension
if state.blockedUntil.IsZero() || now.After(state.blockedUntil) {
// Progressive blocking: 2^failCount minutes
blockMinutes := 1 << min(state.failCount, 6) // Cap at 64 minutes
state.blockedUntil = now.Add(time.Duration(blockMinutes) * time.Minute)
a.logger.Warn("msg", "Rate limit exceeded, blocking IP",
"component", "auth",
"ip", ip,
"fail_count", state.failCount,
"block_duration", time.Duration(blockMinutes)*time.Minute)
}
return fmt.Errorf("rate limit exceeded")
}
state.lastAttempt = now
return nil
}
// Record failed attempt
func (a *Authenticator) recordFailure(remoteAddr string) {
ip, _, _ := net.SplitHostPort(remoteAddr)
if ip == "" {
ip = remoteAddr
}
a.authMu.Lock()
defer a.authMu.Unlock()
if state, exists := a.ipAuthAttempts[ip]; exists {
state.failCount++
state.lastAttempt = time.Now()
}
}
// Reset failure count on success
func (a *Authenticator) recordSuccess(remoteAddr string) {
ip, _, _ := net.SplitHostPort(remoteAddr)
if ip == "" {
ip = remoteAddr
}
a.authMu.Lock()
defer a.authMu.Unlock()
if state, exists := a.ipAuthAttempts[ip]; exists {
state.failCount = 0
state.blockedUntil = time.Time{}
}
}
// AuthenticateHTTP handles HTTP authentication headers
// Handles HTTP authentication headers
func (a *Authenticator) AuthenticateHTTP(authHeader, remoteAddr string) (*Session, error) {
if a == nil || a.config.Type == "none" {
return &Session{
@ -266,143 +83,27 @@ func (a *Authenticator) AuthenticateHTTP(authHeader, remoteAddr string) (*Sessio
}, nil
}
// Check rate limit
if err := a.checkRateLimit(remoteAddr); err != nil {
return nil, err
}
var session *Session
var err error
switch a.config.Type {
case "basic":
session, err = a.authenticateBasic(authHeader, remoteAddr)
case "bearer":
session, err = a.authenticateBearer(authHeader, remoteAddr)
case "token":
session, err = a.authenticateToken(authHeader, remoteAddr)
default:
err = fmt.Errorf("unsupported auth type: %s", a.config.Type)
}
if err != nil {
a.recordFailure(remoteAddr)
time.Sleep(500 * time.Millisecond)
return nil, err
}
a.recordSuccess(remoteAddr)
return session, nil
}
// AuthenticateTCP handles TCP connection authentication
func (a *Authenticator) AuthenticateTCP(method, credentials, remoteAddr string) (*Session, error) {
if a == nil || a.config.Type == "none" {
return &Session{
ID: generateSessionID(),
Method: "none",
RemoteAddr: remoteAddr,
CreatedAt: time.Now(),
}, nil
}
// Check rate limit first
if err := a.checkRateLimit(remoteAddr); err != nil {
return nil, err
}
var session *Session
var err error
// TCP auth protocol: AUTH <method> <credentials>
switch strings.ToLower(method) {
case "token":
if a.config.Type != "bearer" {
err = fmt.Errorf("token auth not configured")
} else {
session, err = a.validateToken(credentials, remoteAddr)
}
case "basic":
if a.config.Type != "basic" {
err = fmt.Errorf("basic auth not configured")
} else {
// Expect base64(username:password)
decoded, decErr := base64.StdEncoding.DecodeString(credentials)
if decErr != nil {
err = fmt.Errorf("invalid credentials encoding")
} else {
parts := strings.SplitN(string(decoded), ":", 2)
if len(parts) != 2 {
err = fmt.Errorf("invalid credentials format")
} else {
session, err = a.validateBasicAuth(parts[0], parts[1], remoteAddr)
}
}
}
default:
err = fmt.Errorf("unsupported auth method: %s", method)
}
if err != nil {
a.recordFailure(remoteAddr)
// Add delay on failure
time.Sleep(500 * time.Millisecond)
return nil, err
}
a.recordSuccess(remoteAddr)
return session, nil
}
func (a *Authenticator) authenticateBasic(authHeader, remoteAddr string) (*Session, error) {
if !strings.HasPrefix(authHeader, "Basic ") {
return nil, fmt.Errorf("invalid basic auth header")
}
payload, err := base64.StdEncoding.DecodeString(authHeader[6:])
if err != nil {
return nil, fmt.Errorf("invalid base64 encoding")
}
parts := strings.SplitN(string(payload), ":", 2)
if len(parts) != 2 {
return nil, fmt.Errorf("invalid credentials format")
}
return a.validateBasicAuth(parts[0], parts[1], remoteAddr)
}
func (a *Authenticator) validateBasicAuth(username, password, remoteAddr string) (*Session, error) {
a.mu.RLock()
expectedHash, exists := a.basicUsers[username]
a.mu.RUnlock()
if !exists {
// ☢ SECURITY: Perform bcrypt anyway to prevent timing attacks
bcrypt.CompareHashAndPassword([]byte("$2a$10$dummy.hash.to.prevent.timing.attacks"), []byte(password))
return nil, fmt.Errorf("invalid credentials")
}
if err := bcrypt.CompareHashAndPassword([]byte(expectedHash), []byte(password)); err != nil {
return nil, fmt.Errorf("invalid credentials")
}
session := &Session{
ID: generateSessionID(),
Username: username,
Method: "basic",
RemoteAddr: remoteAddr,
CreatedAt: time.Now(),
LastActivity: time.Now(),
}
a.storeSession(session)
return session, nil
}
func (a *Authenticator) authenticateBearer(authHeader, remoteAddr string) (*Session, error) {
if !strings.HasPrefix(authHeader, "Bearer ") {
return nil, fmt.Errorf("invalid bearer auth header")
func (a *Authenticator) authenticateToken(authHeader, remoteAddr string) (*Session, error) {
if !strings.HasPrefix(authHeader, "Token") {
return nil, fmt.Errorf("invalid token auth header")
}
token := authHeader[6:]
@ -412,97 +113,22 @@ func (a *Authenticator) authenticateBearer(authHeader, remoteAddr string) (*Sess
func (a *Authenticator) validateToken(token, remoteAddr string) (*Session, error) {
// Check static tokens first
a.mu.RLock()
isStatic := a.bearerTokens[token]
isValid := a.tokens[token]
a.mu.RUnlock()
if isStatic {
session := &Session{
ID: generateSessionID(),
Method: "bearer",
RemoteAddr: remoteAddr,
CreatedAt: time.Now(),
LastActivity: time.Now(),
Metadata: map[string]any{"token_type": "static"},
}
a.storeSession(session)
return session, nil
if !isValid {
return nil, fmt.Errorf("invalid token")
}
// Try JWT validation if configured
if a.jwtParser != nil && a.jwtKeyFunc != nil {
claims := jwt.MapClaims{}
parsedToken, err := a.jwtParser.ParseWithClaims(token, claims, a.jwtKeyFunc)
if err != nil {
return nil, fmt.Errorf("JWT validation failed: %w", err)
}
if !parsedToken.Valid {
return nil, fmt.Errorf("invalid JWT token")
}
// Explicit expiration check
if exp, ok := claims["exp"].(float64); ok {
if time.Now().Unix() > int64(exp) {
return nil, fmt.Errorf("token expired")
}
} else {
// Reject tokens without expiration
return nil, fmt.Errorf("token missing expiration claim")
}
// Check not-before claim
if nbf, ok := claims["nbf"].(float64); ok {
if time.Now().Unix() < int64(nbf) {
return nil, fmt.Errorf("token not yet valid")
}
}
// Check issuer if configured
if a.config.BearerAuth.JWT.Issuer != "" {
if iss, ok := claims["iss"].(string); !ok || iss != a.config.BearerAuth.JWT.Issuer {
return nil, fmt.Errorf("invalid token issuer")
}
}
// Check audience if configured
if a.config.BearerAuth.JWT.Audience != "" {
// Handle both string and []string audience formats
audValid := false
switch aud := claims["aud"].(type) {
case string:
audValid = aud == a.config.BearerAuth.JWT.Audience
case []interface{}:
for _, aa := range aud {
if audStr, ok := aa.(string); ok && audStr == a.config.BearerAuth.JWT.Audience {
audValid = true
break
}
}
}
if !audValid {
return nil, fmt.Errorf("invalid token audience")
}
}
username := ""
if sub, ok := claims["sub"].(string); ok {
username = sub
}
session := &Session{
ID: generateSessionID(),
Username: username,
Method: "jwt",
RemoteAddr: remoteAddr,
CreatedAt: time.Now(),
LastActivity: time.Now(),
Metadata: map[string]any{"claims": claims},
}
a.storeSession(session)
return session, nil
session := &Session{
ID: generateSessionID(),
Method: "token",
RemoteAddr: remoteAddr,
CreatedAt: time.Now(),
LastActivity: time.Now(),
}
return nil, fmt.Errorf("invalid token")
a.storeSession(session)
return session, nil
}
func (a *Authenticator) storeSession(session *Session) {
@ -537,70 +163,6 @@ func (a *Authenticator) sessionCleanup() {
}
}
// Cleanup old auth attempts
func (a *Authenticator) authAttemptCleanup() {
ticker := time.NewTicker(5 * time.Minute)
defer ticker.Stop()
for range ticker.C {
a.authMu.Lock()
now := time.Now()
for ip, state := range a.ipAuthAttempts {
// Remove entries older than 1 hour with no recent activity
if now.Sub(state.lastAttempt) > time.Hour {
delete(a.ipAuthAttempts, ip)
a.logger.Debug("msg", "Cleaned up auth attempt state",
"component", "auth",
"ip", ip)
}
}
a.authMu.Unlock()
}
}
func (a *Authenticator) loadUsersFile(path string) error {
file, err := os.Open(path)
if err != nil {
return fmt.Errorf("could not open users file: %w", err)
}
defer file.Close()
scanner := bufio.NewScanner(file)
lineNumber := 0
for scanner.Scan() {
lineNumber++
line := strings.TrimSpace(scanner.Text())
if line == "" || strings.HasPrefix(line, "#") {
continue // Skip empty lines and comments
}
parts := strings.SplitN(line, ":", 2)
if len(parts) != 2 {
a.logger.Warn("msg", "Skipping malformed line in users file",
"component", "auth",
"path", path,
"line_number", lineNumber)
continue
}
username, hash := strings.TrimSpace(parts[0]), strings.TrimSpace(parts[1])
if username != "" && hash != "" {
// File-based users can overwrite inline users if names conflict
a.basicUsers[username] = hash
}
}
if err := scanner.Err(); err != nil {
return fmt.Errorf("error reading users file: %w", err)
}
a.logger.Info("msg", "Loaded users from file",
"component", "auth",
"path", path,
"user_count", len(a.basicUsers))
return nil
}
func generateSessionID() string {
b := make([]byte, 32)
if _, err := rand.Read(b); err != nil {
@ -610,7 +172,7 @@ func generateSessionID() string {
return base64.URLEncoding.EncodeToString(b)
}
// ValidateSession checks if a session is still valid
// Checks if a session is still valid
func (a *Authenticator) ValidateSession(sessionID string) bool {
if a == nil {
return true
@ -632,7 +194,7 @@ func (a *Authenticator) ValidateSession(sessionID string) bool {
return true
}
// GetStats returns authentication statistics
// Returns authentication statistics
func (a *Authenticator) GetStats() map[string]any {
if a == nil {
return map[string]any{"enabled": false}
@ -646,7 +208,6 @@ func (a *Authenticator) GetStats() map[string]any {
"enabled": true,
"type": a.config.Type,
"active_sessions": sessionCount,
"basic_users": len(a.basicUsers),
"static_tokens": len(a.bearerTokens),
"static_tokens": len(a.tokens),
}
}
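
A sketch of how a network sink might drive the authenticator for the token scheme. The call signatures are taken from this diff, but the construction of `cfg` is omitted because the nested token struct's type name is not shown here; `logger` and `remoteAddr` are assumed to be in scope:

```go
// cfg is a *config.ServerAuthConfig with Type: "token" and at least
// one entry in cfg.Token.Tokens (construction omitted; see above).
authr, err := auth.NewAuthenticator(cfg, logger)
if err != nil {
	return err
}
// The header scheme is "Token <value>", mirroring authenticateToken.
session, err := authr.AuthenticateHTTP("Token s3cret", remoteAddr)
if err != nil {
	return err // reject with 401
}
logger.Info("msg", "authenticated", "session", session.ID, "method", session.Method)
```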

View File

@ -0,0 +1,106 @@
// FILE: src/internal/auth/scram_client.go
package auth
import (
"crypto/rand"
"crypto/sha256"
"crypto/subtle"
"encoding/base64"
"fmt"
"golang.org/x/crypto/argon2"
)
// ScramClient handles SCRAM client-side authentication
type ScramClient struct {
Username string
Password string
// Handshake state
clientNonce string
serverFirst *ServerFirst
authMessage string
serverKey []byte
}
// NewScramClient creates a SCRAM client
func NewScramClient(username, password string) *ScramClient {
return &ScramClient{
Username: username,
Password: password,
}
}
// StartAuthentication generates ClientFirst message
func (c *ScramClient) StartAuthentication() (*ClientFirst, error) {
// Generate client nonce
nonce := make([]byte, 32)
if _, err := rand.Read(nonce); err != nil {
return nil, fmt.Errorf("failed to generate nonce: %w", err)
}
c.clientNonce = base64.StdEncoding.EncodeToString(nonce)
return &ClientFirst{
Username: c.Username,
ClientNonce: c.clientNonce,
}, nil
}
// ProcessServerFirst handles server challenge
func (c *ScramClient) ProcessServerFirst(msg *ServerFirst) (*ClientFinal, error) {
c.serverFirst = msg
// Decode salt
salt, err := base64.StdEncoding.DecodeString(msg.Salt)
if err != nil {
return nil, fmt.Errorf("invalid salt encoding: %w", err)
}
// Derive keys using Argon2id
saltedPassword := argon2.IDKey([]byte(c.Password), salt,
msg.ArgonTime, msg.ArgonMemory, msg.ArgonThreads, 32)
clientKey := computeHMAC(saltedPassword, []byte("Client Key"))
serverKey := computeHMAC(saltedPassword, []byte("Server Key"))
storedKey := sha256.Sum256(clientKey)
// Build auth message
clientFirstBare := fmt.Sprintf("u=%s,n=%s", c.Username, c.clientNonce)
clientFinalBare := fmt.Sprintf("r=%s", msg.FullNonce)
c.authMessage = clientFirstBare + "," + msg.Marshal() + "," + clientFinalBare
// Compute client proof
clientSignature := computeHMAC(storedKey[:], []byte(c.authMessage))
clientProof := xorBytes(clientKey, clientSignature)
// Store server key for verification
c.serverKey = serverKey
return &ClientFinal{
FullNonce: msg.FullNonce,
ClientProof: base64.StdEncoding.EncodeToString(clientProof),
}, nil
}
// VerifyServerFinal validates server signature
func (c *ScramClient) VerifyServerFinal(msg *ServerFinal) error {
if c.authMessage == "" || c.serverKey == nil {
return fmt.Errorf("invalid handshake state")
}
// Compute expected server signature
expectedSig := computeHMAC(c.serverKey, []byte(c.authMessage))
// Decode received signature
receivedSig, err := base64.StdEncoding.DecodeString(msg.ServerSignature)
if err != nil {
return fmt.Errorf("invalid signature encoding: %w", err)
}
// ☢ SECURITY: Constant-time comparison
if subtle.ConstantTimeCompare(expectedSig, receivedSig) != 1 {
return fmt.Errorf("server authentication failed")
}
return nil
}
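For orientation, a minimal sketch of driving `ScramClient` through all three steps over a generic newline-delimited JSON transport. `clientHandshake` is a hypothetical helper, not part of this change, and the framing here is illustrative only (the real TCP wire format lives in `scram_protocol.go`):

```go
// Sketch only: clientHandshake is hypothetical and not part of this PR.
// encoding/json and io are assumed imported.
func clientHandshake(rw io.ReadWriter, username, password string) error {
	enc, dec := json.NewEncoder(rw), json.NewDecoder(rw)
	c := NewScramClient(username, password)

	first, err := c.StartAuthentication()
	if err != nil {
		return err
	}
	if err := enc.Encode(first); err != nil {
		return err
	}

	var sf ServerFirst
	if err := dec.Decode(&sf); err != nil {
		return err
	}
	final, err := c.ProcessServerFirst(&sf)
	if err != nil {
		return err
	}
	if err := enc.Encode(final); err != nil {
		return err
	}

	var sfin ServerFinal
	if err := dec.Decode(&sfin); err != nil {
		return err
	}
	// VerifyServerFinal also authenticates the server to the client
	return c.VerifyServerFinal(&sfin)
}
```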

View File

@ -0,0 +1,108 @@
// FILE: src/internal/auth/scram_credential.go
package auth
import (
"crypto/hmac"
"crypto/sha256"
"crypto/subtle"
"encoding/base64"
"fmt"
"strings"
"logwisp/src/internal/core"
"golang.org/x/crypto/argon2"
)
// Credential stores SCRAM authentication data
type Credential struct {
Username string
Salt []byte // 16+ bytes
ArgonTime uint32 // e.g., 3
ArgonMemory uint32 // e.g., 64*1024 KiB
ArgonThreads uint8 // e.g., 4
StoredKey []byte // SHA256(ClientKey)
ServerKey []byte // For server auth
PHCHash string
}
// DeriveCredential creates SCRAM credential from password
func DeriveCredential(username, password string, salt []byte, time, memory uint32, threads uint8) (*Credential, error) {
if len(salt) < 16 {
return nil, fmt.Errorf("salt must be at least 16 bytes")
}
// Derive salted password using Argon2id
saltedPassword := argon2.IDKey([]byte(password), salt, time, memory, threads, core.Argon2KeyLen)
// Construct PHC format for basic auth compatibility
saltB64 := base64.RawStdEncoding.EncodeToString(salt)
hashB64 := base64.RawStdEncoding.EncodeToString(saltedPassword)
phcHash := fmt.Sprintf("$argon2id$v=%d$m=%d,t=%d,p=%d$%s$%s",
argon2.Version, memory, time, threads, saltB64, hashB64)
// Derive keys
clientKey := computeHMAC(saltedPassword, []byte("Client Key"))
serverKey := computeHMAC(saltedPassword, []byte("Server Key"))
storedKey := sha256.Sum256(clientKey)
return &Credential{
Username: username,
Salt: salt,
ArgonTime: time,
ArgonMemory: memory,
ArgonThreads: threads,
StoredKey: storedKey[:],
ServerKey: serverKey,
PHCHash: phcHash,
}, nil
}
// MigrateFromPHC converts existing Argon2 PHC hash to SCRAM credential
func MigrateFromPHC(username, password, phcHash string) (*Credential, error) {
// Parse PHC: $argon2id$v=19$m=65536,t=3,p=4$salt$hash
parts := strings.Split(phcHash, "$")
if len(parts) != 6 || parts[1] != "argon2id" {
return nil, fmt.Errorf("invalid PHC format")
}
var memory, time uint32
var threads uint8
if _, err := fmt.Sscanf(parts[3], "m=%d,t=%d,p=%d", &memory, &time, &threads); err != nil {
return nil, fmt.Errorf("invalid PHC parameters: %w", err)
}
salt, err := base64.RawStdEncoding.DecodeString(parts[4])
if err != nil {
return nil, fmt.Errorf("invalid salt encoding: %w", err)
}
expectedHash, err := base64.RawStdEncoding.DecodeString(parts[5])
if err != nil {
return nil, fmt.Errorf("invalid hash encoding: %w", err)
}
// Verify password matches
computedHash := argon2.IDKey([]byte(password), salt, time, memory, threads, uint32(len(expectedHash)))
if subtle.ConstantTimeCompare(computedHash, expectedHash) != 1 {
return nil, fmt.Errorf("password verification failed")
}
// Now derive SCRAM credential
return DeriveCredential(username, password, salt, time, memory, threads)
}
func computeHMAC(key, message []byte) []byte {
mac := hmac.New(sha256.New, key)
mac.Write(message)
return mac.Sum(nil)
}
func xorBytes(a, b []byte) []byte {
if len(a) != len(b) {
panic("xor length mismatch")
}
result := make([]byte, len(a))
for i := range a {
result[i] = a[i] ^ b[i]
}
return result
}
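`DeriveCredential` doubles as an offline provisioning primitive. A hedged sketch of a one-off generator that emits the base64 fields the `ScramUser` config struct expects; the tool itself is hypothetical, and the parameters mirror the core Argon2 defaults:

```go
// Hypothetical provisioning tool - not shipped with this PR.
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"

	"logwisp/src/internal/auth"
)

func main() {
	salt := make([]byte, 16)
	if _, err := rand.Read(salt); err != nil {
		panic(err)
	}
	// 3 iterations, 64 MiB, 4 threads - matching core's Argon2 defaults
	cred, err := auth.DeriveCredential("alice", "s3cret", salt, 3, 64*1024, 4)
	if err != nil {
		panic(err)
	}
	fmt.Printf("stored_key = %q\n", base64.StdEncoding.EncodeToString(cred.StoredKey))
	fmt.Printf("server_key = %q\n", base64.StdEncoding.EncodeToString(cred.ServerKey))
	fmt.Printf("salt = %q\n", base64.StdEncoding.EncodeToString(cred.Salt))
}
```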

View File

@ -0,0 +1,83 @@
// FILE: src/internal/auth/scram_manager.go
package auth
import (
"crypto/rand"
"encoding/base64"
"fmt"
"logwisp/src/internal/config"
)
// ScramManager provides high-level SCRAM operations with rate limiting
type ScramManager struct {
server *ScramServer
}
// NewScramManager creates SCRAM manager
func NewScramManager(scramAuthCfg *config.ScramAuthConfig) *ScramManager {
manager := &ScramManager{
server: NewScramServer(),
}
// Load users from SCRAM config
for _, user := range scramAuthCfg.Users {
storedKey, err := base64.StdEncoding.DecodeString(user.StoredKey)
if err != nil {
// Skip user with invalid stored key
continue
}
serverKey, err := base64.StdEncoding.DecodeString(user.ServerKey)
if err != nil {
// Skip user with invalid server key
continue
}
salt, err := base64.StdEncoding.DecodeString(user.Salt)
if err != nil {
// Skip user with invalid salt
continue
}
cred := &Credential{
Username: user.Username,
StoredKey: storedKey,
ServerKey: serverKey,
Salt: salt,
ArgonTime: user.ArgonTime,
ArgonMemory: user.ArgonMemory,
ArgonThreads: user.ArgonThreads,
}
manager.server.AddCredential(cred)
}
return manager
}
// RegisterUser creates new user credential
func (sm *ScramManager) RegisterUser(username, password string) error {
salt := make([]byte, 16)
if _, err := rand.Read(salt); err != nil {
return fmt.Errorf("salt generation failed: %w", err)
}
cred, err := DeriveCredential(username, password, salt,
sm.server.DefaultTime, sm.server.DefaultMemory, sm.server.DefaultThreads)
if err != nil {
return err
}
sm.server.AddCredential(cred)
return nil
}
// HandleClientFirst wraps server's HandleClientFirst
func (sm *ScramManager) HandleClientFirst(msg *ClientFirst) (*ServerFirst, error) {
return sm.server.HandleClientFirst(msg)
}
// HandleClientFinal wraps server's HandleClientFinal
func (sm *ScramManager) HandleClientFinal(msg *ClientFinal) (*ServerFinal, error) {
return sm.server.HandleClientFinal(msg)
}
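Typical wiring, illustrative only; `serverAuth` is an assumed `*config.ServerAuthConfig` with `Type == "scram"` and a populated user list:

```go
// Illustrative wiring - serverAuth is an assumption, not shown in this PR.
mgr := NewScramManager(serverAuth.Scram)

// Runtime registration derives a fresh credential with the server defaults.
if err := mgr.RegisterUser("ops", "pager-duty"); err != nil {
	// handle registration failure
}
```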

View File

@ -0,0 +1,38 @@
// FILE: src/internal/auth/scram_message.go
package auth
import (
"fmt"
)
// ClientFirst initiates authentication
type ClientFirst struct {
Username string `json:"u"`
ClientNonce string `json:"n"`
}
// ServerFirst contains server challenge
type ServerFirst struct {
FullNonce string `json:"r"` // client_nonce + server_nonce
Salt string `json:"s"` // base64
ArgonTime uint32 `json:"t"`
ArgonMemory uint32 `json:"m"`
ArgonThreads uint8 `json:"p"`
}
// ClientFinal contains client proof
type ClientFinal struct {
FullNonce string `json:"r"`
ClientProof string `json:"p"` // base64
}
// ServerFinal contains server signature for mutual auth
type ServerFinal struct {
ServerSignature string `json:"v"` // base64
SessionID string `json:"sid,omitempty"`
}
func (sf *ServerFirst) Marshal() string {
return fmt.Sprintf("r=%s,s=%s,t=%d,m=%d,p=%d",
sf.FullNonce, sf.Salt, sf.ArgonTime, sf.ArgonMemory, sf.ArgonThreads)
}
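Because the TCP handler splits each line on the first space, the JSON payloads must stay compact; the single-letter field names come from the struct tags above. An illustrative `ClientFirst` with made-up values:

```go
// Made-up values; shows the compact JSON carried on the TCP protocol lines.
msg := ClientFirst{Username: "alice", ClientNonce: "3q2+7w=="}
b, _ := json.Marshal(msg)
fmt.Println(string(b)) // {"u":"alice","n":"3q2+7w=="}
```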

View File

@ -0,0 +1,117 @@
// FILE: src/internal/auth/scram_protocol.go
package auth
import (
"encoding/json"
"fmt"
"strings"
"time"
"github.com/lixenwraith/log"
"github.com/panjf2000/gnet/v2"
)
// ScramProtocolHandler handles SCRAM message exchange for TCP
type ScramProtocolHandler struct {
manager *ScramManager
logger *log.Logger
}
// NewScramProtocolHandler creates protocol handler
func NewScramProtocolHandler(manager *ScramManager, logger *log.Logger) *ScramProtocolHandler {
return &ScramProtocolHandler{
manager: manager,
logger: logger,
}
}
// HandleAuthMessage processes a complete auth line from buffer
func (sph *ScramProtocolHandler) HandleAuthMessage(line []byte, conn gnet.Conn) (authenticated bool, session *Session, err error) {
// Parse SCRAM messages
// Split on the first space only; the JSON payload may itself contain spaces
parts := strings.SplitN(strings.TrimSpace(string(line)), " ", 2)
if len(parts) < 2 {
conn.AsyncWrite([]byte("SCRAM-FAIL Invalid message format\n"), nil)
return false, nil, fmt.Errorf("invalid message format")
}
switch parts[0] {
case "SCRAM-FIRST":
// Parse ClientFirst JSON
var clientFirst ClientFirst
if err := json.Unmarshal([]byte(parts[1]), &clientFirst); err != nil {
conn.AsyncWrite([]byte("SCRAM-FAIL Invalid JSON\n"), nil)
return false, nil, fmt.Errorf("invalid JSON")
}
// Process with SCRAM server
serverFirst, err := sph.manager.HandleClientFirst(&clientFirst)
if err != nil {
// Still send challenge to prevent user enumeration
response, _ := json.Marshal(serverFirst)
conn.AsyncWrite([]byte(fmt.Sprintf("SCRAM-CHALLENGE %s\n", response)), nil)
return false, nil, err
}
// Send ServerFirst challenge
response, _ := json.Marshal(serverFirst)
conn.AsyncWrite([]byte(fmt.Sprintf("SCRAM-CHALLENGE %s\n", response)), nil)
return false, nil, nil // Not authenticated yet
case "SCRAM-PROOF":
// Parse ClientFinal JSON
var clientFinal ClientFinal
if err := json.Unmarshal([]byte(parts[1]), &clientFinal); err != nil {
conn.AsyncWrite([]byte("SCRAM-FAIL Invalid JSON\n"), nil)
return false, nil, fmt.Errorf("invalid JSON")
}
// Verify proof
serverFinal, err := sph.manager.HandleClientFinal(&clientFinal)
if err != nil {
conn.AsyncWrite([]byte("SCRAM-FAIL Authentication failed\n"), nil)
return false, nil, err
}
// Authentication successful
session = &Session{
ID: serverFinal.SessionID,
Method: "scram-sha-256",
RemoteAddr: conn.RemoteAddr().String(),
CreatedAt: time.Now(),
}
// Send ServerFinal with signature
response, _ := json.Marshal(serverFinal)
conn.AsyncWrite([]byte(fmt.Sprintf("SCRAM-OK %s\n", response)), nil)
return true, session, nil
default:
conn.AsyncWrite([]byte("SCRAM-FAIL Unknown command\n"), nil)
return false, nil, fmt.Errorf("unknown command: %s", parts[0])
}
}
// FormatSCRAMRequest formats a SCRAM protocol message for TCP
func FormatSCRAMRequest(command string, data any) (string, error) {
jsonData, err := json.Marshal(data)
if err != nil {
return "", fmt.Errorf("failed to marshal %s: %w", command, err)
}
return fmt.Sprintf("%s %s\n", command, jsonData), nil
}
// ParseSCRAMResponse parses a SCRAM protocol response from TCP
func ParseSCRAMResponse(response string) (command string, data string, err error) {
response = strings.TrimSpace(response)
parts := strings.SplitN(response, " ", 2)
command = parts[0]
if command == "" {
return "", "", fmt.Errorf("empty response")
}
if len(parts) > 1 {
data = parts[1]
}
return command, data, nil
}
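Putting the pieces together, a hedged sketch of a TCP client authenticating with this line protocol. `scramOverTCP` is hypothetical, the address and credentials are placeholders, and `bufio`, `encoding/json`, `fmt`, and `net` are assumed imported:

```go
// Hypothetical client-side flow for the SCRAM-FIRST / SCRAM-PROOF exchange.
func scramOverTCP(addr, user, pass string) error {
	conn, err := net.Dial("tcp", addr)
	if err != nil {
		return err
	}
	defer conn.Close()
	r := bufio.NewReader(conn)
	c := NewScramClient(user, pass)

	first, err := c.StartAuthentication()
	if err != nil {
		return err
	}
	req, err := FormatSCRAMRequest("SCRAM-FIRST", first)
	if err != nil {
		return err
	}
	if _, err := conn.Write([]byte(req)); err != nil {
		return err
	}

	line, err := r.ReadString('\n')
	if err != nil {
		return err
	}
	cmd, data, err := ParseSCRAMResponse(line)
	if err != nil || cmd != "SCRAM-CHALLENGE" {
		return fmt.Errorf("unexpected reply: %s", line)
	}
	var sf ServerFirst
	if err := json.Unmarshal([]byte(data), &sf); err != nil {
		return err
	}

	final, err := c.ProcessServerFirst(&sf)
	if err != nil {
		return err
	}
	req, err = FormatSCRAMRequest("SCRAM-PROOF", final)
	if err != nil {
		return err
	}
	if _, err := conn.Write([]byte(req)); err != nil {
		return err
	}

	line, err = r.ReadString('\n')
	if err != nil {
		return err
	}
	cmd, data, err = ParseSCRAMResponse(line)
	if err != nil || cmd != "SCRAM-OK" {
		return fmt.Errorf("authentication rejected: %s", line)
	}
	var sfin ServerFinal
	if err := json.Unmarshal([]byte(data), &sfin); err != nil {
		return err
	}
	return c.VerifyServerFinal(&sfin)
}
```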

View File

@ -0,0 +1,174 @@
// FILE: src/internal/auth/scram_server.go
package auth
import (
"crypto/rand"
"crypto/sha256"
"crypto/subtle"
"encoding/base64"
"fmt"
"sync"
"time"
"logwisp/src/internal/core"
)
// ScramServer handles SCRAM server-side authentication
type ScramServer struct {
credentials map[string]*Credential
handshakes map[string]*HandshakeState
mu sync.RWMutex
// TODO: decide whether these should be configurable via config, or refactored to use the core constants directly for simplicity
// Default Argon2 params for new registrations
DefaultTime uint32
DefaultMemory uint32
DefaultThreads uint8
}
// HandshakeState tracks ongoing authentication
type HandshakeState struct {
Username string
ClientNonce string
ServerNonce string
FullNonce string
Credential *Credential
CreatedAt time.Time
}
// NewScramServer creates SCRAM server
func NewScramServer() *ScramServer {
return &ScramServer{
credentials: make(map[string]*Credential),
handshakes: make(map[string]*HandshakeState),
DefaultTime: core.Argon2Time,
DefaultMemory: core.Argon2Memory,
DefaultThreads: core.Argon2Threads,
}
}
// AddCredential registers user credential
func (s *ScramServer) AddCredential(cred *Credential) {
s.mu.Lock()
defer s.mu.Unlock()
s.credentials[cred.Username] = cred
}
// HandleClientFirst processes initial auth request
func (s *ScramServer) HandleClientFirst(msg *ClientFirst) (*ServerFirst, error) {
s.mu.Lock()
defer s.mu.Unlock()
// Check if user exists
cred, exists := s.credentials[msg.Username]
if !exists {
// Prevent user enumeration - still generate response
salt := make([]byte, 16)
if _, err := rand.Read(salt); err != nil {
return nil, fmt.Errorf("salt generation failed: %w", err)
}
serverNonce := generateNonce()
return &ServerFirst{
FullNonce: msg.ClientNonce + serverNonce,
Salt: base64.StdEncoding.EncodeToString(salt),
ArgonTime: s.DefaultTime,
ArgonMemory: s.DefaultMemory,
ArgonThreads: s.DefaultThreads,
}, fmt.Errorf("invalid credentials")
}
// Generate server nonce
serverNonce := generateNonce()
fullNonce := msg.ClientNonce + serverNonce
// Store handshake state
state := &HandshakeState{
Username: msg.Username,
ClientNonce: msg.ClientNonce,
ServerNonce: serverNonce,
FullNonce: fullNonce,
Credential: cred,
CreatedAt: time.Now(),
}
s.handshakes[fullNonce] = state
// Cleanup old handshakes
s.cleanupHandshakes()
return &ServerFirst{
FullNonce: fullNonce,
Salt: base64.StdEncoding.EncodeToString(cred.Salt),
ArgonTime: cred.ArgonTime,
ArgonMemory: cred.ArgonMemory,
ArgonThreads: cred.ArgonThreads,
}, nil
}
// HandleClientFinal verifies client proof
func (s *ScramServer) HandleClientFinal(msg *ClientFinal) (*ServerFinal, error) {
s.mu.Lock()
defer s.mu.Unlock()
state, exists := s.handshakes[msg.FullNonce]
if !exists {
return nil, fmt.Errorf("invalid nonce or expired handshake")
}
defer delete(s.handshakes, msg.FullNonce)
// Check timeout
if time.Since(state.CreatedAt) > 60*time.Second {
return nil, fmt.Errorf("handshake timeout")
}
// Decode client proof
clientProof, err := base64.StdEncoding.DecodeString(msg.ClientProof)
if err != nil {
return nil, fmt.Errorf("invalid proof encoding")
}
// Build auth message
clientFirstBare := fmt.Sprintf("u=%s,n=%s", state.Username, state.ClientNonce)
serverFirst := &ServerFirst{
FullNonce: state.FullNonce,
Salt: base64.StdEncoding.EncodeToString(state.Credential.Salt),
ArgonTime: state.Credential.ArgonTime,
ArgonMemory: state.Credential.ArgonMemory,
ArgonThreads: state.Credential.ArgonThreads,
}
clientFinalBare := fmt.Sprintf("r=%s", msg.FullNonce)
authMessage := clientFirstBare + "," + serverFirst.Marshal() + "," + clientFinalBare
// Compute client signature
clientSignature := computeHMAC(state.Credential.StoredKey, []byte(authMessage))
// XOR to get ClientKey
clientKey := xorBytes(clientProof, clientSignature)
// Verify by computing StoredKey
computedStoredKey := sha256.Sum256(clientKey)
if subtle.ConstantTimeCompare(computedStoredKey[:], state.Credential.StoredKey) != 1 {
return nil, fmt.Errorf("authentication failed")
}
// Generate server signature for mutual auth
serverSignature := computeHMAC(state.Credential.ServerKey, []byte(authMessage))
return &ServerFinal{
ServerSignature: base64.StdEncoding.EncodeToString(serverSignature),
SessionID: generateSessionID(),
}, nil
}
func (s *ScramServer) cleanupHandshakes() {
cutoff := time.Now().Add(-60 * time.Second)
for nonce, state := range s.handshakes {
if state.CreatedAt.Before(cutoff) {
delete(s.handshakes, nonce)
}
}
}
func generateNonce() string {
b := make([]byte, 32)
if _, err := rand.Read(b); err != nil {
// crypto/rand failure is unrecoverable; fail fast
panic(fmt.Sprintf("crypto/rand failure: %v", err))
}
return base64.StdEncoding.EncodeToString(b)
}
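The two halves compose into an in-process round trip, which doubles as a unit-test skeleton for this package. A hedged sketch (no such test is included in this change); `crypto/rand` and `testing` are assumed imported:

```go
// Sketch of a round-trip test: derive a credential, register it with the
// server, then run the full client/server handshake in memory.
func TestScramRoundTrip(t *testing.T) {
	srv := NewScramServer()
	salt := make([]byte, 16)
	if _, err := rand.Read(salt); err != nil {
		t.Fatal(err)
	}
	cred, err := DeriveCredential("alice", "s3cret", salt,
		srv.DefaultTime, srv.DefaultMemory, srv.DefaultThreads)
	if err != nil {
		t.Fatal(err)
	}
	srv.AddCredential(cred)

	c := NewScramClient("alice", "s3cret")
	first, err := c.StartAuthentication()
	if err != nil {
		t.Fatal(err)
	}
	sf, err := srv.HandleClientFirst(first)
	if err != nil {
		t.Fatal(err)
	}
	final, err := c.ProcessServerFirst(sf)
	if err != nil {
		t.Fatal(err)
	}
	sfin, err := srv.HandleClientFinal(final)
	if err != nil {
		t.Fatal(err)
	}
	// Mutual auth: client verifies the server signature as well.
	if err := c.VerifyServerFinal(sfin); err != nil {
		t.Fatal(err)
	}
}
```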

View File

@ -1,77 +0,0 @@
// FILE: logwisp/src/internal/config/auth.go
package config
import (
"fmt"
)
type AuthConfig struct {
// Authentication type: "none", "basic", "bearer", "mtls"
Type string `toml:"type"`
// Basic auth
BasicAuth *BasicAuthConfig `toml:"basic_auth"`
// Bearer token auth
BearerAuth *BearerAuthConfig `toml:"bearer_auth"`
}
type BasicAuthConfig struct {
// Static users (for simple deployments)
Users []BasicAuthUser `toml:"users"`
// External auth file
UsersFile string `toml:"users_file"`
// Realm for WWW-Authenticate header
Realm string `toml:"realm"`
}
type BasicAuthUser struct {
Username string `toml:"username"`
// Password hash (bcrypt)
PasswordHash string `toml:"password_hash"`
}
type BearerAuthConfig struct {
// Static tokens
Tokens []string `toml:"tokens"`
// JWT validation
JWT *JWTConfig `toml:"jwt"`
}
type JWTConfig struct {
// JWKS URL for key discovery
JWKSURL string `toml:"jwks_url"`
// Static signing key (if not using JWKS)
SigningKey string `toml:"signing_key"`
// Expected issuer
Issuer string `toml:"issuer"`
// Expected audience
Audience string `toml:"audience"`
}
func validateAuth(pipelineName string, auth *AuthConfig) error {
if auth == nil {
return nil
}
validTypes := map[string]bool{"none": true, "basic": true, "bearer": true, "mtls": true}
if !validTypes[auth.Type] {
return fmt.Errorf("pipeline '%s': invalid auth type: %s", pipelineName, auth.Type)
}
if auth.Type == "basic" && auth.BasicAuth == nil {
return fmt.Errorf("pipeline '%s': basic auth type specified but config missing", pipelineName)
}
if auth.Type == "bearer" && auth.BearerAuth == nil {
return fmt.Errorf("pipeline '%s': bearer auth type specified but config missing", pipelineName)
}
return nil
}

View File

@ -1,6 +1,8 @@
// FILE: logwisp/src/internal/config/config.go
package config
// --- LogWisp Configuration Options ---
type Config struct {
// Top-level flags for application control
Background bool `toml:"background"`
@ -10,15 +12,368 @@ type Config struct {
// Runtime behavior flags
DisableStatusReporter bool `toml:"disable_status_reporter"`
ConfigAutoReload bool `toml:"config_auto_reload"`
ConfigSaveOnExit bool `toml:"config_save_on_exit"`
// Internal flag indicating daemonized child process
BackgroundDaemon bool `toml:"background-daemon"`
// Internal flag indicating daemonized child process (DO NOT SET IN CONFIG FILE)
BackgroundDaemon bool
// Configuration file path
ConfigFile string `toml:"config"`
ConfigFile string `toml:"config_file"`
// Existing fields
Logging *LogConfig `toml:"logging"`
Pipelines []PipelineConfig `toml:"pipelines"`
}
// --- Logging Options ---
// Represents logging configuration for LogWisp
type LogConfig struct {
// Output mode: "file", "stdout", "stderr", "split", "all", "none"
Output string `toml:"output"`
// Log level: "debug", "info", "warn", "error"
Level string `toml:"level"`
// File output settings (when Output includes "file" or "all")
File *LogFileConfig `toml:"file"`
// Console output settings
Console *LogConsoleConfig `toml:"console"`
}
type LogFileConfig struct {
// Directory for log files
Directory string `toml:"directory"`
// Base name for log files
Name string `toml:"name"`
// Maximum size per log file in MB
MaxSizeMB int64 `toml:"max_size_mb"`
// Maximum total size of all logs in MB
MaxTotalSizeMB int64 `toml:"max_total_size_mb"`
// Log retention in hours (0 = disabled)
RetentionHours float64 `toml:"retention_hours"`
}
type LogConsoleConfig struct {
// Target for console output: "stdout", "stderr", "split"
// "split": info/debug to stdout, warn/error to stderr
Target string `toml:"target"`
// Format: "txt" or "json"
Format string `toml:"format"`
}
// --- Pipeline Options ---
type PipelineConfig struct {
Name string `toml:"name"`
Sources []SourceConfig `toml:"sources"`
RateLimit *RateLimitConfig `toml:"rate_limit"`
Filters []FilterConfig `toml:"filters"`
Format *FormatConfig `toml:"format"`
Sinks []SinkConfig `toml:"sinks"`
// Auth *ServerAuthConfig `toml:"auth"` // Global auth for pipeline
}
// Common configuration structs used across components
type NetLimitConfig struct {
Enabled bool `toml:"enabled"`
MaxConnections int64 `toml:"max_connections"`
RequestsPerSecond float64 `toml:"requests_per_second"`
BurstSize int64 `toml:"burst_size"`
ResponseMessage string `toml:"response_message"`
ResponseCode int64 `toml:"response_code"` // Default: 429
MaxConnectionsPerIP int64 `toml:"max_connections_per_ip"`
MaxConnectionsTotal int64 `toml:"max_connections_total"`
IPWhitelist []string `toml:"ip_whitelist"`
IPBlacklist []string `toml:"ip_blacklist"`
}
type TLSConfig struct {
Enabled bool `toml:"enabled"`
CertFile string `toml:"cert_file"`
KeyFile string `toml:"key_file"`
CAFile string `toml:"ca_file"`
ServerName string `toml:"server_name"` // for client verification
SkipVerify bool `toml:"skip_verify"`
// Client certificate authentication
ClientAuth bool `toml:"client_auth"`
ClientCAFile string `toml:"client_ca_file"`
VerifyClientCert bool `toml:"verify_client_cert"`
// TLS version constraints
MinVersion string `toml:"min_version"` // "TLS1.2", "TLS1.3"
MaxVersion string `toml:"max_version"`
// Cipher suites (comma-separated list)
CipherSuites string `toml:"cipher_suites"`
}
type HeartbeatConfig struct {
Enabled bool `toml:"enabled"`
IntervalMS int64 `toml:"interval_ms"`
IncludeTimestamp bool `toml:"include_timestamp"`
IncludeStats bool `toml:"include_stats"`
Format string `toml:"format"`
}
type ClientAuthConfig struct {
Type string `toml:"type"` // "none", "basic", "token", "scram"
Username string `toml:"username"`
Password string `toml:"password"`
Token string `toml:"token"`
}
// --- Source Options ---
type SourceConfig struct {
Type string `toml:"type"`
// Polymorphic - only one populated based on type
Directory *DirectorySourceOptions `toml:"directory,omitempty"`
Stdin *StdinSourceOptions `toml:"stdin,omitempty"`
HTTP *HTTPSourceOptions `toml:"http,omitempty"`
TCP *TCPSourceOptions `toml:"tcp,omitempty"`
}
type DirectorySourceOptions struct {
Path string `toml:"path"`
Pattern string `toml:"pattern"` // glob pattern
CheckIntervalMS int64 `toml:"check_interval_ms"`
Recursive bool `toml:"recursive"` // TODO: implement logic
}
type StdinSourceOptions struct {
BufferSize int64 `toml:"buffer_size"`
}
type HTTPSourceOptions struct {
Host string `toml:"host"`
Port int64 `toml:"port"`
IngestPath string `toml:"ingest_path"`
BufferSize int64 `toml:"buffer_size"`
MaxRequestBodySize int64 `toml:"max_body_size"`
ReadTimeout int64 `toml:"read_timeout_ms"`
WriteTimeout int64 `toml:"write_timeout_ms"`
NetLimit *NetLimitConfig `toml:"net_limit"`
TLS *TLSConfig `toml:"tls"`
Auth *ServerAuthConfig `toml:"auth"`
}
type TCPSourceOptions struct {
Host string `toml:"host"`
Port int64 `toml:"port"`
BufferSize int64 `toml:"buffer_size"`
ReadTimeout int64 `toml:"read_timeout_ms"`
KeepAlive bool `toml:"keep_alive"`
KeepAlivePeriod int64 `toml:"keep_alive_period_ms"`
NetLimit *NetLimitConfig `toml:"net_limit"`
Auth *ServerAuthConfig `toml:"auth"`
}
// --- Sink Options ---
type SinkConfig struct {
Type string `toml:"type"`
// Polymorphic - only one populated based on type
Console *ConsoleSinkOptions `toml:"console,omitempty"`
File *FileSinkOptions `toml:"file,omitempty"`
HTTP *HTTPSinkOptions `toml:"http,omitempty"`
TCP *TCPSinkOptions `toml:"tcp,omitempty"`
HTTPClient *HTTPClientSinkOptions `toml:"http_client,omitempty"`
TCPClient *TCPClientSinkOptions `toml:"tcp_client,omitempty"`
}
type ConsoleSinkOptions struct {
Target string `toml:"target"` // "stdout", "stderr", "split"
Colorize bool `toml:"colorize"`
BufferSize int64 `toml:"buffer_size"`
}
type FileSinkOptions struct {
Directory string `toml:"directory"`
Name string `toml:"name"`
MaxSizeMB int64 `toml:"max_size_mb"`
MaxTotalSizeMB int64 `toml:"max_total_size_mb"`
MinDiskFreeMB int64 `toml:"min_disk_free_mb"`
RetentionHours float64 `toml:"retention_hours"`
BufferSize int64 `toml:"buffer_size"`
FlushInterval int64 `toml:"flush_interval_ms"`
}
type HTTPSinkOptions struct {
Host string `toml:"host"`
Port int64 `toml:"port"`
StreamPath string `toml:"stream_path"`
StatusPath string `toml:"status_path"`
BufferSize int64 `toml:"buffer_size"`
WriteTimeout int64 `toml:"write_timeout_ms"`
Heartbeat *HeartbeatConfig `toml:"heartbeat"`
NetLimit *NetLimitConfig `toml:"net_limit"`
TLS *TLSConfig `toml:"tls"`
Auth *ServerAuthConfig `toml:"auth"`
}
type TCPSinkOptions struct {
Host string `toml:"host"`
Port int64 `toml:"port"`
BufferSize int64 `toml:"buffer_size"`
WriteTimeout int64 `toml:"write_timeout_ms"`
KeepAlive bool `toml:"keep_alive"`
KeepAlivePeriod int64 `toml:"keep_alive_period_ms"`
Heartbeat *HeartbeatConfig `toml:"heartbeat"`
NetLimit *NetLimitConfig `toml:"net_limit"`
Auth *ServerAuthConfig `toml:"auth"`
}
type HTTPClientSinkOptions struct {
URL string `toml:"url"`
BufferSize int64 `toml:"buffer_size"`
BatchSize int64 `toml:"batch_size"`
BatchDelayMS int64 `toml:"batch_delay_ms"`
Timeout int64 `toml:"timeout_seconds"`
MaxRetries int64 `toml:"max_retries"`
RetryDelayMS int64 `toml:"retry_delay_ms"`
RetryBackoff float64 `toml:"retry_backoff"`
InsecureSkipVerify bool `toml:"insecure_skip_verify"`
TLS *TLSConfig `toml:"tls"`
Auth *ClientAuthConfig `toml:"auth"`
}
type TCPClientSinkOptions struct {
Host string `toml:"host"`
Port int64 `toml:"port"`
BufferSize int64 `toml:"buffer_size"`
DialTimeout int64 `toml:"dial_timeout_seconds"`
WriteTimeout int64 `toml:"write_timeout_seconds"`
ReadTimeout int64 `toml:"read_timeout_seconds"`
KeepAlive int64 `toml:"keep_alive_seconds"`
ReconnectDelayMS int64 `toml:"reconnect_delay_ms"`
MaxReconnectDelayMS int64 `toml:"max_reconnect_delay_ms"`
ReconnectBackoff float64 `toml:"reconnect_backoff"`
Auth *ClientAuthConfig `toml:"auth"`
}
// --- Rate Limit Options ---
// Defines the action to take when a rate limit is exceeded.
type RateLimitPolicy int
const (
// PolicyPass allows all logs through, effectively disabling the limiter.
PolicyPass RateLimitPolicy = iota
// PolicyDrop drops logs that exceed the rate limit.
PolicyDrop
)
// Defines the configuration for pipeline-level rate limiting.
type RateLimitConfig struct {
// Rate is the number of log entries allowed per second. Default: 0 (disabled).
Rate float64 `toml:"rate"`
// Burst is the maximum number of log entries that can be sent in a short burst. Defaults to the Rate.
Burst float64 `toml:"burst"`
// Policy defines the action to take when the limit is exceeded. "pass" or "drop".
Policy string `toml:"policy"`
// MaxEntrySizeBytes is the maximum allowed size for a single log entry. 0 = no limit.
MaxEntrySizeBytes int64 `toml:"max_entry_size_bytes"`
}
// --- Filter Options ---
// Represents the filter type
type FilterType string
const (
FilterTypeInclude FilterType = "include" // Whitelist - only matching logs pass
FilterTypeExclude FilterType = "exclude" // Blacklist - matching logs are dropped
)
// Represents how multiple patterns are combined
type FilterLogic string
const (
FilterLogicOr FilterLogic = "or" // Match any pattern
FilterLogicAnd FilterLogic = "and" // Match all patterns
)
// Represents filter configuration
type FilterConfig struct {
Type FilterType `toml:"type"`
Logic FilterLogic `toml:"logic"`
Patterns []string `toml:"patterns"`
}
// --- Formatter Options ---
type FormatConfig struct {
// Format configuration - polymorphic like sources/sinks
Type string `toml:"type"` // "json", "txt", "raw"
// Only one will be populated based on format type
JSONFormatOptions *JSONFormatterOptions `toml:"json,omitempty"`
TxtFormatOptions *TxtFormatterOptions `toml:"txt,omitempty"`
RawFormatOptions *RawFormatterOptions `toml:"raw,omitempty"`
}
type JSONFormatterOptions struct {
Pretty bool `toml:"pretty"`
TimestampField string `toml:"timestamp_field"`
LevelField string `toml:"level_field"`
MessageField string `toml:"message_field"`
SourceField string `toml:"source_field"`
}
type TxtFormatterOptions struct {
Template string `toml:"template"`
TimestampFormat string `toml:"timestamp_format"`
}
type RawFormatterOptions struct {
AddNewLine bool `toml:"add_new_line"`
}
// --- Server-side Auth (for sources) ---
type BasicAuthConfig struct {
Users []BasicAuthUser `toml:"users"`
Realm string `toml:"realm"`
}
type BasicAuthUser struct {
Username string `toml:"username"`
PasswordHash string `toml:"password_hash"` // Argon2
}
type ScramAuthConfig struct {
Users []ScramUser `toml:"users"`
}
type ScramUser struct {
Username string `toml:"username"`
StoredKey string `toml:"stored_key"` // base64
ServerKey string `toml:"server_key"` // base64
Salt string `toml:"salt"` // base64
ArgonTime uint32 `toml:"argon_time"`
ArgonMemory uint32 `toml:"argon_memory"`
ArgonThreads uint8 `toml:"argon_threads"`
}
type TokenAuthConfig struct {
Tokens []string `toml:"tokens"`
}
// Server auth wrapper (for sources accepting connections)
type ServerAuthConfig struct {
Type string `toml:"type"` // "none", "basic", "token", "scram"
Basic *BasicAuthConfig `toml:"basic,omitempty"`
Token *TokenAuthConfig `toml:"token,omitempty"`
Scram *ScramAuthConfig `toml:"scram,omitempty"`
}
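For reference, an illustrative TOML pipeline exercising the polymorphic layout above; every value here is a placeholder, not a recommended setting:

```toml
# Illustrative pipeline matching the structs above (placeholder values)
[[pipelines]]
name = "app"

[pipelines.rate_limit]
rate = 1000.0
policy = "drop"

[pipelines.format]
type = "json"

[[pipelines.sources]]
type = "directory"
[pipelines.sources.directory]
path = "/var/log/app"
pattern = "*.log"
check_interval_ms = 100

[[pipelines.filters]]
type = "exclude"
logic = "or"
patterns = ["DEBUG", "healthcheck"]

[[pipelines.sinks]]
type = "http"
[pipelines.sinks.http]
host = "0.0.0.0"
port = 8080
stream_path = "/stream"
status_path = "/status"
[pipelines.sinks.http.auth]
type = "basic"
[[pipelines.sinks.http.auth.basic.users]]
username = "viewer"
password_hash = "$argon2id$..."
```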

View File

@ -1,65 +0,0 @@
// FILE: logwisp/src/internal/config/filter.go
package config
import (
"fmt"
"regexp"
)
// FilterType represents the filter type
type FilterType string
const (
FilterTypeInclude FilterType = "include" // Whitelist - only matching logs pass
FilterTypeExclude FilterType = "exclude" // Blacklist - matching logs are dropped
)
// FilterLogic represents how multiple patterns are combined
type FilterLogic string
const (
FilterLogicOr FilterLogic = "or" // Match any pattern
FilterLogicAnd FilterLogic = "and" // Match all patterns
)
// FilterConfig represents filter configuration
type FilterConfig struct {
Type FilterType `toml:"type"`
Logic FilterLogic `toml:"logic"`
Patterns []string `toml:"patterns"`
}
func validateFilter(pipelineName string, filterIndex int, cfg *FilterConfig) error {
// Validate filter type
switch cfg.Type {
case FilterTypeInclude, FilterTypeExclude, "":
// Valid types
default:
return fmt.Errorf("pipeline '%s' filter[%d]: invalid type '%s' (must be 'include' or 'exclude')",
pipelineName, filterIndex, cfg.Type)
}
// Validate filter logic
switch cfg.Logic {
case FilterLogicOr, FilterLogicAnd, "":
// Valid logic
default:
return fmt.Errorf("pipeline '%s' filter[%d]: invalid logic '%s' (must be 'or' or 'and')",
pipelineName, filterIndex, cfg.Logic)
}
// Empty patterns is valid - passes everything
if len(cfg.Patterns) == 0 {
return nil
}
// Validate regex patterns
for i, pattern := range cfg.Patterns {
if _, err := regexp.Compile(pattern); err != nil {
return fmt.Errorf("pipeline '%s' filter[%d] pattern[%d] '%s': invalid regex: %w",
pipelineName, filterIndex, i, pattern, err)
}
}
return nil
}

View File

@ -1,58 +0,0 @@
// FILE: logwisp/src/internal/config/ratelimit.go
package config
import (
"fmt"
"strings"
)
// RateLimitPolicy defines the action to take when a rate limit is exceeded.
type RateLimitPolicy int
const (
// PolicyPass allows all logs through, effectively disabling the limiter.
PolicyPass RateLimitPolicy = iota
// PolicyDrop drops logs that exceed the rate limit.
PolicyDrop
)
// RateLimitConfig defines the configuration for pipeline-level rate limiting.
type RateLimitConfig struct {
// Rate is the number of log entries allowed per second. Default: 0 (disabled).
Rate float64 `toml:"rate"`
// Burst is the maximum number of log entries that can be sent in a short burst. Defaults to the Rate.
Burst float64 `toml:"burst"`
// Policy defines the action to take when the limit is exceeded. "pass" or "drop".
Policy string `toml:"policy"`
// MaxEntrySizeBytes is the maximum allowed size for a single log entry. 0 = no limit.
MaxEntrySizeBytes int64 `toml:"max_entry_size_bytes"`
}
func validateRateLimit(pipelineName string, cfg *RateLimitConfig) error {
if cfg == nil {
return nil
}
if cfg.Rate < 0 {
return fmt.Errorf("pipeline '%s': rate limit rate cannot be negative", pipelineName)
}
if cfg.Burst < 0 {
return fmt.Errorf("pipeline '%s': rate limit burst cannot be negative", pipelineName)
}
if cfg.MaxEntrySizeBytes < 0 {
return fmt.Errorf("pipeline '%s': max entry size bytes cannot be negative", pipelineName)
}
// Validate policy
switch strings.ToLower(cfg.Policy) {
case "", "pass", "drop":
// Valid policies
default:
return fmt.Errorf("pipeline '%s': invalid rate limit policy '%s' (must be 'pass' or 'drop')",
pipelineName, cfg.Policy)
}
return nil
}

View File

@ -11,9 +11,11 @@ import (
lconfig "github.com/lixenwraith/config"
)
// LoadContext holds all configuration sources
type LoadContext struct {
FlagConfig any // Parsed command-line flags from main
var configManager *lconfig.Config
// Hot reload access
func GetConfigManager() *lconfig.Config {
return configManager
}
func defaults() *Config {
@ -26,41 +28,46 @@ func defaults() *Config {
// Runtime behavior defaults
DisableStatusReporter: false,
ConfigAutoReload: false,
ConfigSaveOnExit: false,
// Child process indicator
BackgroundDaemon: false,
// Existing defaults
Logging: DefaultLogConfig(),
Logging: &LogConfig{
Output: "stdout",
Level: "info",
File: &LogFileConfig{
Directory: "./log",
Name: "logwisp",
MaxSizeMB: 100,
MaxTotalSizeMB: 1000,
RetentionHours: 168, // 7 days
},
Console: &LogConsoleConfig{
Target: "stdout",
Format: "txt",
},
},
Pipelines: []PipelineConfig{
{
Name: "default",
Sources: []SourceConfig{
{
Type: "directory",
Options: map[string]any{
"path": "./",
"pattern": "*.log",
"check_interval_ms": int64(100),
Directory: &DirectorySourceOptions{
Path: "./",
Pattern: "*.log",
CheckIntervalMS: int64(100),
},
},
},
Sinks: []SinkConfig{
{
Type: "http",
Options: map[string]any{
"port": int64(8080),
"buffer_size": int64(1000),
"stream_path": "/stream",
"status_path": "/status",
"heartbeat": map[string]any{
"enabled": true,
"interval_seconds": int64(30),
"include_timestamp": true,
"include_stats": false,
"format": "comment",
},
Type: "console",
Console: &ConsoleSinkOptions{
Target: "stdout",
Colorize: false,
BufferSize: 100,
},
},
},
@ -69,22 +76,34 @@ func defaults() *Config {
}
}
// Load is the single entry point for loading all configuration
// Single entry point for loading all configuration
func Load(args []string) (*Config, error) {
configPath, isExplicit := resolveConfigPath(args)
// Build configuration with all sources
// Create target config instance that will be populated
finalConfig := &Config{}
// Builder handles loading, populating the target struct, and validation
cfg, err := lconfig.NewBuilder().
WithDefaults(defaults()).
WithEnvPrefix("LOGWISP_").
WithEnvTransform(customEnvTransform).
WithArgs(args).
WithFile(configPath).
WithTarget(finalConfig). // Typed target struct
WithDefaults(defaults()). // Default values
WithSources(
lconfig.SourceCLI,
lconfig.SourceEnv,
lconfig.SourceFile,
lconfig.SourceDefault,
).
WithEnvTransform(customEnvTransform). // Convert '.' separators to '_' for env var names
WithEnvPrefix("LOGWISP_"). // Environment variable prefix
WithArgs(args). // Command-line arguments
WithFile(configPath). // TOML config file
WithFileFormat("toml"). // Explicit format
WithTypedValidator(ValidateConfig). // Centralized validation
WithSecurityOptions(lconfig.SecurityOptions{
PreventPathTraversal: true,
MaxFileSize: 10 * 1024 * 1024, // 10MB max config
}).
Build()
if err != nil {
@ -93,42 +112,26 @@ func Load(args []string) (*Config, error) {
if isExplicit {
return nil, fmt.Errorf("config file not found: %s", configPath)
}
// If the default config file is not found, it's not an error
// If the default config file is not found, it's not an error; defaults, CLI, and env values are used
} else {
return nil, fmt.Errorf("failed to load config: %w", err)
return nil, fmt.Errorf("failed to load or validate config: %w", err)
}
}
// Scan into final config struct - using new interface
finalConfig := &Config{}
if err := cfg.Scan(finalConfig); err != nil {
return nil, fmt.Errorf("failed to scan config: %w", err)
}
// Store the config file path for hot reload
finalConfig.ConfigFile = configPath
// Set config file path if it exists
if _, err := os.Stat(configPath); err == nil {
finalConfig.ConfigFile = configPath
}
// Store the manager for hot reload
configManager = cfg
// Ensure critical fields are not nil
if finalConfig.Logging == nil {
finalConfig.Logging = DefaultLogConfig()
}
// Apply console target overrides if needed
if err := applyConsoleTargetOverrides(finalConfig); err != nil {
return nil, fmt.Errorf("failed to apply console target overrides: %w", err)
}
// Validate configuration
return finalConfig, finalConfig.validate()
return finalConfig, nil
}
// resolveConfigPath returns the configuration file path
// Returns the configuration file path
func resolveConfigPath(args []string) (path string, isExplicit bool) {
// 1. Check for --config flag in command-line arguments (highest precedence)
for i, arg := range args {
if (arg == "--config" || arg == "-c") && i+1 < len(args) {
if arg == "-c" {
return args[i+1], true
}
if strings.HasPrefix(arg, "--config=") {
@ -165,43 +168,4 @@ func customEnvTransform(path string) string {
env = strings.ToUpper(env)
// env = "LOGWISP_" + env // already added by WithEnvPrefix
return env
}
// applyConsoleTargetOverrides centralizes console target configuration
func applyConsoleTargetOverrides(cfg *Config) error {
// Check environment variable for console target override
consoleTarget := os.Getenv("LOGWISP_CONSOLE_TARGET")
if consoleTarget == "" {
return nil
}
// Validate console target value
validTargets := map[string]bool{
"stdout": true,
"stderr": true,
"split": true,
}
if !validTargets[consoleTarget] {
return fmt.Errorf("invalid LOGWISP_CONSOLE_TARGET value: %s", consoleTarget)
}
// Apply to all console sinks
for i, pipeline := range cfg.Pipelines {
for j, sink := range pipeline.Sinks {
if sink.Type == "stdout" || sink.Type == "stderr" {
if sink.Options == nil {
cfg.Pipelines[i].Sinks[j].Options = make(map[string]any)
}
// Set target for split mode handling
cfg.Pipelines[i].Sinks[j].Options["target"] = consoleTarget
}
}
}
// Also update logging console target if applicable
if cfg.Logging.Console != nil && consoleTarget == "split" {
cfg.Logging.Console.Target = "split"
}
return nil
}

View File

@ -1,99 +0,0 @@
// FILE: logwisp/src/internal/config/logging.go
package config
import "fmt"
// LogConfig represents logging configuration for LogWisp
type LogConfig struct {
// Output mode: "file", "stdout", "stderr", "both", "none"
Output string `toml:"output"`
// Log level: "debug", "info", "warn", "error"
Level string `toml:"level"`
// File output settings (when Output includes "file" or "both")
File *LogFileConfig `toml:"file"`
// Console output settings
Console *LogConsoleConfig `toml:"console"`
}
type LogFileConfig struct {
// Directory for log files
Directory string `toml:"directory"`
// Base name for log files
Name string `toml:"name"`
// Maximum size per log file in MB
MaxSizeMB int64 `toml:"max_size_mb"`
// Maximum total size of all logs in MB
MaxTotalSizeMB int64 `toml:"max_total_size_mb"`
// Log retention in hours (0 = disabled)
RetentionHours float64 `toml:"retention_hours"`
}
type LogConsoleConfig struct {
// Target for console output: "stdout", "stderr", "split"
// "split": info/debug to stdout, warn/error to stderr
Target string `toml:"target"`
// Format: "txt" or "json"
Format string `toml:"format"`
}
// DefaultLogConfig returns sensible logging defaults
func DefaultLogConfig() *LogConfig {
return &LogConfig{
Output: "stderr",
Level: "info",
File: &LogFileConfig{
Directory: "./logs",
Name: "logwisp",
MaxSizeMB: 100,
MaxTotalSizeMB: 1000,
RetentionHours: 168, // 7 days
},
Console: &LogConsoleConfig{
Target: "stderr",
Format: "txt",
},
}
}
func validateLogConfig(cfg *LogConfig) error {
validOutputs := map[string]bool{
"file": true, "stdout": true, "stderr": true,
"both": true, "none": true,
}
if !validOutputs[cfg.Output] {
return fmt.Errorf("invalid log output mode: %s", cfg.Output)
}
validLevels := map[string]bool{
"debug": true, "info": true, "warn": true, "error": true,
}
if !validLevels[cfg.Level] {
return fmt.Errorf("invalid log level: %s", cfg.Level)
}
if cfg.Console != nil {
validTargets := map[string]bool{
"stdout": true, "stderr": true, "split": true,
}
if !validTargets[cfg.Console.Target] {
return fmt.Errorf("invalid console target: %s", cfg.Console.Target)
}
validFormats := map[string]bool{
"txt": true, "json": true, "": true,
}
if !validFormats[cfg.Console.Format] {
return fmt.Errorf("invalid console format: %s", cfg.Console.Format)
}
}
return nil
}

View File

@ -1,383 +0,0 @@
// FILE: logwisp/src/internal/config/pipeline.go
package config
import (
"fmt"
"net"
"net/url"
"path/filepath"
"strings"
)
// PipelineConfig represents a data processing pipeline
type PipelineConfig struct {
// Pipeline identifier (used in logs and metrics)
Name string `toml:"name"`
// Data sources for this pipeline
Sources []SourceConfig `toml:"sources"`
// Rate limiting
RateLimit *RateLimitConfig `toml:"rate_limit"`
// Filter configuration
Filters []FilterConfig `toml:"filters"`
// Log formatting configuration
Format string `toml:"format"`
FormatOptions map[string]any `toml:"format_options"`
// Output sinks for this pipeline
Sinks []SinkConfig `toml:"sinks"`
// Authentication/Authorization (applies to network sinks)
Auth *AuthConfig `toml:"auth"`
}
// SourceConfig represents an input data source
type SourceConfig struct {
// Source type: "directory", "file", "stdin", etc.
Type string `toml:"type"`
// Type-specific configuration options
Options map[string]any `toml:"options"`
}
// SinkConfig represents an output destination
type SinkConfig struct {
// Sink type: "http", "tcp", "file", "stdout", "stderr"
Type string `toml:"type"`
// Type-specific configuration options
Options map[string]any `toml:"options"`
}
func validateSource(pipelineName string, sourceIndex int, cfg *SourceConfig) error {
if cfg.Type == "" {
return fmt.Errorf("pipeline '%s' source[%d]: missing type", pipelineName, sourceIndex)
}
switch cfg.Type {
case "directory":
// Validate directory source options
path, ok := cfg.Options["path"].(string)
if !ok || path == "" {
return fmt.Errorf("pipeline '%s' source[%d]: directory source requires 'path' option",
pipelineName, sourceIndex)
}
// Check for directory traversal
if strings.Contains(path, "..") {
return fmt.Errorf("pipeline '%s' source[%d]: path contains directory traversal",
pipelineName, sourceIndex)
}
// Validate pattern if provided
if pattern, ok := cfg.Options["pattern"].(string); ok && pattern != "" {
// Try to compile as glob pattern (will be converted to regex internally)
if strings.Count(pattern, "*") == 0 && strings.Count(pattern, "?") == 0 {
// If no wildcards, ensure it's a valid filename
if filepath.Base(pattern) != pattern {
return fmt.Errorf("pipeline '%s' source[%d]: pattern contains path separators",
pipelineName, sourceIndex)
}
}
}
// Validate check interval if provided
if interval, ok := cfg.Options["check_interval_ms"]; ok {
if intVal, ok := interval.(int64); ok {
if intVal < 10 {
return fmt.Errorf("pipeline '%s' source[%d]: check interval too small: %d ms (min: 10ms)",
pipelineName, sourceIndex, intVal)
}
} else {
return fmt.Errorf("pipeline '%s' source[%d]: invalid check_interval_ms type",
pipelineName, sourceIndex)
}
}
case "stdin":
// No specific validation needed for stdin
case "http":
// Validate HTTP source options
port, ok := cfg.Options["port"].(int64)
if !ok || port < 1 || port > 65535 {
return fmt.Errorf("pipeline '%s' source[%d]: invalid or missing HTTP port",
pipelineName, sourceIndex)
}
// Validate path if provided
if ingestPath, ok := cfg.Options["ingest_path"].(string); ok {
if !strings.HasPrefix(ingestPath, "/") {
return fmt.Errorf("pipeline '%s' source[%d]: ingest path must start with /: %s",
pipelineName, sourceIndex, ingestPath)
}
}
// Validate net_limit if present within Options
if rl, ok := cfg.Options["net_limit"].(map[string]any); ok {
if err := validateNetLimitOptions("HTTP source", pipelineName, sourceIndex, rl); err != nil {
return err
}
}
// CHANGED: Validate SSL if present
if ssl, ok := cfg.Options["ssl"].(map[string]any); ok {
if err := validateSSLOptions("HTTP source", pipelineName, sourceIndex, ssl); err != nil {
return err
}
}
case "tcp":
// Validate TCP source options
port, ok := cfg.Options["port"].(int64)
if !ok || port < 1 || port > 65535 {
return fmt.Errorf("pipeline '%s' source[%d]: invalid or missing TCP port",
pipelineName, sourceIndex)
}
// Validate net_limit if present within Options
if rl, ok := cfg.Options["net_limit"].(map[string]any); ok {
if err := validateNetLimitOptions("TCP source", pipelineName, sourceIndex, rl); err != nil {
return err
}
}
// CHANGED: Validate SSL if present
if ssl, ok := cfg.Options["ssl"].(map[string]any); ok {
if err := validateSSLOptions("TCP source", pipelineName, sourceIndex, ssl); err != nil {
return err
}
}
default:
return fmt.Errorf("pipeline '%s' source[%d]: unknown source type '%s'",
pipelineName, sourceIndex, cfg.Type)
}
return nil
}
func validateSink(pipelineName string, sinkIndex int, cfg *SinkConfig, allPorts map[int64]string) error {
if cfg.Type == "" {
return fmt.Errorf("pipeline '%s' sink[%d]: missing type", pipelineName, sinkIndex)
}
switch cfg.Type {
case "http":
// Extract and validate HTTP configuration
port, ok := cfg.Options["port"].(int64)
if !ok || port < 1 || port > 65535 {
return fmt.Errorf("pipeline '%s' sink[%d]: invalid or missing HTTP port",
pipelineName, sinkIndex)
}
// Check port conflicts
if existing, exists := allPorts[port]; exists {
return fmt.Errorf("pipeline '%s' sink[%d]: HTTP port %d already used by %s",
pipelineName, sinkIndex, port, existing)
}
allPorts[port] = fmt.Sprintf("%s-http[%d]", pipelineName, sinkIndex)
// Validate buffer size
if bufSize, ok := cfg.Options["buffer_size"].(int64); ok {
if bufSize < 1 {
return fmt.Errorf("pipeline '%s' sink[%d]: HTTP buffer size must be positive: %d",
pipelineName, sinkIndex, bufSize)
}
}
// Validate paths if provided
if streamPath, ok := cfg.Options["stream_path"].(string); ok {
if !strings.HasPrefix(streamPath, "/") {
return fmt.Errorf("pipeline '%s' sink[%d]: stream path must start with /: %s",
pipelineName, sinkIndex, streamPath)
}
}
if statusPath, ok := cfg.Options["status_path"].(string); ok {
if !strings.HasPrefix(statusPath, "/") {
return fmt.Errorf("pipeline '%s' sink[%d]: status path must start with /: %s",
pipelineName, sinkIndex, statusPath)
}
}
// Validate heartbeat if present
if hb, ok := cfg.Options["heartbeat"].(map[string]any); ok {
if err := validateHeartbeatOptions("HTTP", pipelineName, sinkIndex, hb); err != nil {
return err
}
}
// Validate SSL if present
if ssl, ok := cfg.Options["ssl"].(map[string]any); ok {
if err := validateSSLOptions("HTTP", pipelineName, sinkIndex, ssl); err != nil {
return err
}
}
// Validate net limit if present
if rl, ok := cfg.Options["net_limit"].(map[string]any); ok {
if err := validateNetLimitOptions("HTTP", pipelineName, sinkIndex, rl); err != nil {
return err
}
}
case "tcp":
// Extract and validate TCP configuration
port, ok := cfg.Options["port"].(int64)
if !ok || port < 1 || port > 65535 {
return fmt.Errorf("pipeline '%s' sink[%d]: invalid or missing TCP port",
pipelineName, sinkIndex)
}
// Check port conflicts
if existing, exists := allPorts[port]; exists {
return fmt.Errorf("pipeline '%s' sink[%d]: TCP port %d already used by %s",
pipelineName, sinkIndex, port, existing)
}
allPorts[port] = fmt.Sprintf("%s-tcp[%d]", pipelineName, sinkIndex)
// Validate buffer size
if bufSize, ok := cfg.Options["buffer_size"].(int64); ok {
if bufSize < 1 {
return fmt.Errorf("pipeline '%s' sink[%d]: TCP buffer size must be positive: %d",
pipelineName, sinkIndex, bufSize)
}
}
// Validate heartbeat if present
if hb, ok := cfg.Options["heartbeat"].(map[string]any); ok {
if err := validateHeartbeatOptions("TCP", pipelineName, sinkIndex, hb); err != nil {
return err
}
}
// Validate SSL if present
if ssl, ok := cfg.Options["ssl"].(map[string]any); ok {
if err := validateSSLOptions("TCP", pipelineName, sinkIndex, ssl); err != nil {
return err
}
}
// Validate net limit if present
if rl, ok := cfg.Options["net_limit"].(map[string]any); ok {
if err := validateNetLimitOptions("TCP", pipelineName, sinkIndex, rl); err != nil {
return err
}
}
case "http_client":
// Validate URL
urlStr, ok := cfg.Options["url"].(string)
if !ok || urlStr == "" {
return fmt.Errorf("pipeline '%s' sink[%d]: http_client sink requires 'url' option",
pipelineName, sinkIndex)
}
// Validate URL format
parsedURL, err := url.Parse(urlStr)
if err != nil {
return fmt.Errorf("pipeline '%s' sink[%d]: invalid URL: %w",
pipelineName, sinkIndex, err)
}
if parsedURL.Scheme != "http" && parsedURL.Scheme != "https" {
return fmt.Errorf("pipeline '%s' sink[%d]: URL must use http or https scheme",
pipelineName, sinkIndex)
}
// Validate batch size
if batchSize, ok := cfg.Options["batch_size"].(int64); ok {
if batchSize < 1 {
return fmt.Errorf("pipeline '%s' sink[%d]: batch_size must be positive: %d",
pipelineName, sinkIndex, batchSize)
}
}
// Validate timeout
if timeout, ok := cfg.Options["timeout_seconds"].(int64); ok {
if timeout < 1 {
return fmt.Errorf("pipeline '%s' sink[%d]: timeout_seconds must be positive: %d",
pipelineName, sinkIndex, timeout)
}
}
case "tcp_client":
// FIXED: Added validation for TCP client sink
// Validate address
address, ok := cfg.Options["address"].(string)
if !ok || address == "" {
return fmt.Errorf("pipeline '%s' sink[%d]: tcp_client sink requires 'address' option",
pipelineName, sinkIndex)
}
// Validate address format
_, _, err := net.SplitHostPort(address)
if err != nil {
return fmt.Errorf("pipeline '%s' sink[%d]: invalid address format (expected host:port): %w",
pipelineName, sinkIndex, err)
}
// Validate timeouts
if dialTimeout, ok := cfg.Options["dial_timeout_seconds"].(int64); ok {
if dialTimeout < 1 {
return fmt.Errorf("pipeline '%s' sink[%d]: dial_timeout_seconds must be positive: %d",
pipelineName, sinkIndex, dialTimeout)
}
}
if writeTimeout, ok := cfg.Options["write_timeout_seconds"].(int64); ok {
if writeTimeout < 1 {
return fmt.Errorf("pipeline '%s' sink[%d]: write_timeout_seconds must be positive: %d",
pipelineName, sinkIndex, writeTimeout)
}
}
case "file":
// Validate file sink options
directory, ok := cfg.Options["directory"].(string)
if !ok || directory == "" {
return fmt.Errorf("pipeline '%s' sink[%d]: file sink requires 'directory' option",
pipelineName, sinkIndex)
}
name, ok := cfg.Options["name"].(string)
if !ok || name == "" {
return fmt.Errorf("pipeline '%s' sink[%d]: file sink requires 'name' option",
pipelineName, sinkIndex)
}
// Validate numeric options
if maxSize, ok := cfg.Options["max_size_mb"].(int64); ok {
if maxSize < 1 {
return fmt.Errorf("pipeline '%s' sink[%d]: max_size_mb must be positive: %d",
pipelineName, sinkIndex, maxSize)
}
}
if maxTotalSize, ok := cfg.Options["max_total_size_mb"].(int64); ok {
if maxTotalSize < 0 {
return fmt.Errorf("pipeline '%s' sink[%d]: max_total_size_mb cannot be negative: %d",
pipelineName, sinkIndex, maxTotalSize)
}
}
if retention, ok := cfg.Options["retention_hours"].(float64); ok {
if retention < 0 {
return fmt.Errorf("pipeline '%s' sink[%d]: retention_hours cannot be negative: %f",
pipelineName, sinkIndex, retention)
}
}
case "stdout", "stderr":
// No specific validation needed for console sinks
default:
return fmt.Errorf("pipeline '%s' sink[%d]: unknown sink type '%s'",
pipelineName, sinkIndex, cfg.Type)
}
return nil
}

View File

@ -1,34 +0,0 @@
// FILE: logwisp/src/internal/config/saver.go
package config
import (
"fmt"
lconfig "github.com/lixenwraith/config"
)
// SaveToFile saves the configuration to the specified file path.
// It uses the lconfig library's atomic file saving capabilities.
func (c *Config) SaveToFile(path string) error {
if path == "" {
return fmt.Errorf("cannot save config: path is empty")
}
// Create a temporary lconfig instance just for saving
// This avoids the need to track lconfig throughout the application
lcfg, err := lconfig.NewBuilder().
WithFile(path).
WithTarget(c).
WithFileFormat("toml").
Build()
if err != nil {
return fmt.Errorf("failed to create config builder: %w", err)
}
// Use lconfig's Save method which handles atomic writes
if err := lcfg.Save(path); err != nil {
return fmt.Errorf("failed to save config: %w", err)
}
return nil
}

View File

@ -1,205 +0,0 @@
// FILE: logwisp/src/internal/config/server.go
package config
import (
"fmt"
"net"
"strings"
)
type TCPConfig struct {
Enabled bool `toml:"enabled"`
Port int64 `toml:"port"`
BufferSize int64 `toml:"buffer_size"`
// SSL/TLS Configuration
SSL *SSLConfig `toml:"ssl"`
// Net limiting
NetLimit *NetLimitConfig `toml:"net_limit"`
// Heartbeat
Heartbeat *HeartbeatConfig `toml:"heartbeat"`
}
type HTTPConfig struct {
Enabled bool `toml:"enabled"`
Port int64 `toml:"port"`
BufferSize int64 `toml:"buffer_size"`
// Endpoint paths
StreamPath string `toml:"stream_path"`
StatusPath string `toml:"status_path"`
// SSL/TLS Configuration
SSL *SSLConfig `toml:"ssl"`
// Net limiting
NetLimit *NetLimitConfig `toml:"net_limit"`
// Heartbeat
Heartbeat *HeartbeatConfig `toml:"heartbeat"`
}
type HeartbeatConfig struct {
Enabled bool `toml:"enabled"`
IntervalSeconds int64 `toml:"interval_seconds"`
IncludeTimestamp bool `toml:"include_timestamp"`
IncludeStats bool `toml:"include_stats"`
Format string `toml:"format"`
}
type NetLimitConfig struct {
// Enable net limiting
Enabled bool `toml:"enabled"`
// IP Access Control Lists
IPWhitelist []string `toml:"ip_whitelist"`
IPBlacklist []string `toml:"ip_blacklist"`
// Requests per second per client
RequestsPerSecond float64 `toml:"requests_per_second"`
// Burst size (token bucket)
BurstSize int64 `toml:"burst_size"`
// Net limit by: "ip", "user", "token", "global"
LimitBy string `toml:"limit_by"`
// Response when net limited
ResponseCode int64 `toml:"response_code"` // Default: 429
ResponseMessage string `toml:"response_message"` // Default: "Net limit exceeded"
// Connection limits
MaxConnectionsPerIP int64 `toml:"max_connections_per_ip"`
MaxTotalConnections int64 `toml:"max_total_connections"`
}
func validateHeartbeatOptions(serverType, pipelineName string, sinkIndex int, hb map[string]any) error {
if enabled, ok := hb["enabled"].(bool); ok && enabled {
interval, ok := hb["interval_seconds"].(int64)
if !ok || interval < 1 {
return fmt.Errorf("pipeline '%s' sink[%d] %s: heartbeat interval must be positive",
pipelineName, sinkIndex, serverType)
}
if format, ok := hb["format"].(string); ok {
if format != "json" && format != "comment" {
return fmt.Errorf("pipeline '%s' sink[%d] %s: heartbeat format must be 'json' or 'comment': %s",
pipelineName, sinkIndex, serverType, format)
}
}
}
return nil
}
func validateNetLimitOptions(serverType, pipelineName string, sinkIndex int, rl map[string]any) error {
if enabled, ok := rl["enabled"].(bool); !ok || !enabled {
return nil
}
// Validate IP lists if present
if ipWhitelist, ok := rl["ip_whitelist"].([]any); ok {
for i, entry := range ipWhitelist {
entryStr, ok := entry.(string)
if !ok {
continue
}
if err := validateIPv4Entry(entryStr); err != nil {
return fmt.Errorf("pipeline '%s' sink[%d] %s: whitelist[%d] %v",
pipelineName, sinkIndex, serverType, i, err)
}
}
}
if ipBlacklist, ok := rl["ip_blacklist"].([]any); ok {
for i, entry := range ipBlacklist {
entryStr, ok := entry.(string)
if !ok {
continue
}
if err := validateIPv4Entry(entryStr); err != nil {
return fmt.Errorf("pipeline '%s' sink[%d] %s: blacklist[%d] %v",
pipelineName, sinkIndex, serverType, i, err)
}
}
}
// Validate requests per second
rps, ok := rl["requests_per_second"].(float64)
if !ok || rps <= 0 {
return fmt.Errorf("pipeline '%s' sink[%d] %s: requests_per_second must be positive",
pipelineName, sinkIndex, serverType)
}
// Validate burst size
burst, ok := rl["burst_size"].(int64)
if !ok || burst < 1 {
return fmt.Errorf("pipeline '%s' sink[%d] %s: burst_size must be at least 1",
pipelineName, sinkIndex, serverType)
}
// Validate limit_by
if limitBy, ok := rl["limit_by"].(string); ok && limitBy != "" {
validLimitBy := map[string]bool{"ip": true, "global": true}
if !validLimitBy[limitBy] {
return fmt.Errorf("pipeline '%s' sink[%d] %s: invalid limit_by value: %s (must be 'ip' or 'global')",
pipelineName, sinkIndex, serverType, limitBy)
}
}
// Validate response code
if respCode, ok := rl["response_code"].(int64); ok {
if respCode > 0 && (respCode < 400 || respCode >= 600) {
return fmt.Errorf("pipeline '%s' sink[%d] %s: response_code must be 4xx or 5xx: %d",
pipelineName, sinkIndex, serverType, respCode)
}
}
// Validate connection limits
maxPerIP, perIPOk := rl["max_connections_per_ip"].(int64)
maxTotal, totalOk := rl["max_total_connections"].(int64)
if perIPOk && totalOk && maxPerIP > 0 && maxTotal > 0 {
if maxPerIP > maxTotal {
return fmt.Errorf("pipeline '%s' sink[%d] %s: max_connections_per_ip (%d) cannot exceed max_total_connections (%d)",
pipelineName, sinkIndex, serverType, maxPerIP, maxTotal)
}
}
return nil
}
// validateIPv4Entry ensures an IP or CIDR is IPv4
func validateIPv4Entry(entry string) error {
// Handle single IP
if !strings.Contains(entry, "/") {
ip := net.ParseIP(entry)
if ip == nil {
return fmt.Errorf("invalid IP address: %s", entry)
}
if ip.To4() == nil {
return fmt.Errorf("IPv6 not supported (IPv4-only): %s", entry)
}
return nil
}
// Handle CIDR
ipAddr, ipNet, err := net.ParseCIDR(entry)
if err != nil {
return fmt.Errorf("invalid CIDR: %s", entry)
}
// Check if the IP is IPv4
if ipAddr.To4() == nil {
return fmt.Errorf("IPv6 CIDR not supported (IPv4-only): %s", entry)
}
// Verify the network mask is appropriate for IPv4
_, bits := ipNet.Mask.Size()
if bits != 32 {
return fmt.Errorf("invalid IPv4 CIDR mask (got %d bits, expected 32): %s", bits, entry)
}
return nil
}
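A quick driver (hypothetical, not part of the diff) showing how `validateIPv4Entry` classifies typical list entries:

```go
for _, entry := range []string{
	"192.168.1.10", // ok: single IPv4
	"10.0.0.0/8",   // ok: IPv4 CIDR
	"2001:db8::1",  // error: IPv6 not supported
	"10.0.0.0/33",  // error: invalid CIDR
} {
	if err := validateIPv4Entry(entry); err != nil {
		fmt.Printf("%-14s -> %v\n", entry, err)
	} else {
		fmt.Printf("%-14s -> ok\n", entry)
	}
}
```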

View File

@ -1,79 +0,0 @@
// FILE: logwisp/src/internal/config/ssl.go
package config
import (
"fmt"
"os"
)
type SSLConfig struct {
Enabled bool `toml:"enabled"`
CertFile string `toml:"cert_file"`
KeyFile string `toml:"key_file"`
// Client certificate authentication
ClientAuth bool `toml:"client_auth"`
ClientCAFile string `toml:"client_ca_file"`
VerifyClientCert bool `toml:"verify_client_cert"`
// Skip certificate verification for client connections (insecure)
InsecureSkipVerify bool `toml:"insecure_skip_verify"`
// TLS version constraints
MinVersion string `toml:"min_version"` // "TLS1.2", "TLS1.3"
MaxVersion string `toml:"max_version"`
// Cipher suites (comma-separated list)
CipherSuites string `toml:"cipher_suites"`
}
func validateSSLOptions(serverType, pipelineName string, sinkIndex int, ssl map[string]any) error {
if enabled, ok := ssl["enabled"].(bool); ok && enabled {
certFile, certOk := ssl["cert_file"].(string)
keyFile, keyOk := ssl["key_file"].(string)
if !certOk || certFile == "" || !keyOk || keyFile == "" {
return fmt.Errorf("pipeline '%s' sink[%d] %s: SSL enabled but cert/key files not specified",
pipelineName, sinkIndex, serverType)
}
// Validate that certificate files exist and are readable
if _, err := os.Stat(certFile); err != nil {
return fmt.Errorf("pipeline '%s' sink[%d] %s: cert_file is not accessible: %w",
pipelineName, sinkIndex, serverType, err)
}
if _, err := os.Stat(keyFile); err != nil {
return fmt.Errorf("pipeline '%s' sink[%d] %s: key_file is not accessible: %w",
pipelineName, sinkIndex, serverType, err)
}
if clientAuth, ok := ssl["client_auth"].(bool); ok && clientAuth {
caFile, caOk := ssl["client_ca_file"].(string)
if !caOk || caFile == "" {
return fmt.Errorf("pipeline '%s' sink[%d] %s: client auth enabled but CA file not specified",
pipelineName, sinkIndex, serverType)
}
// Validate that the client CA file exists and is readable
if _, err := os.Stat(caFile); err != nil {
return fmt.Errorf("pipeline '%s' sink[%d] %s: client_ca_file is not accessible: %w",
pipelineName, sinkIndex, serverType, err)
}
}
// Validate TLS versions
validVersions := map[string]bool{"TLS1.0": true, "TLS1.1": true, "TLS1.2": true, "TLS1.3": true}
if minVer, ok := ssl["min_version"].(string); ok && minVer != "" {
if !validVersions[minVer] {
return fmt.Errorf("pipeline '%s' sink[%d] %s: invalid min TLS version: %s",
pipelineName, sinkIndex, serverType, minVer)
}
}
if maxVer, ok := ssl["max_version"].(string); ok && maxVer != "" {
if !validVersions[maxVer] {
return fmt.Errorf("pipeline '%s' sink[%d] %s: invalid max TLS version: %s",
pipelineName, sinkIndex, serverType, maxVer)
}
}
}
return nil
}
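An SSL options map that would satisfy this validator, assuming the referenced files exist (paths hypothetical):

```go
ssl := map[string]any{
	"enabled":     true,
	"cert_file":   "/etc/logwisp/cert.pem", // must exist and be readable
	"key_file":    "/etc/logwisp/key.pem",
	"min_version": "TLS1.2",
	"max_version": "TLS1.3",
}
if err := validateSSLOptions("http", "default", 0, ssl); err != nil {
	// fails on missing cert/key files or an unrecognized TLS version
}
```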

File diff suppressed because it is too large

View File

@ -0,0 +1,13 @@
// FILE: logwisp/src/internal/core/const.go
package core
// Argon2id parameters
const (
Argon2Time = 3
Argon2Memory = 64 * 1024 // 64 MB
Argon2Threads = 4
Argon2SaltLen = 16
Argon2KeyLen = 32
)
const DefaultTokenLength = 32
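These constants presumably parameterize `golang.org/x/crypto/argon2`; a minimal, self-contained sketch (my assumption, the hashing call site is not shown in this diff) of deriving a key with them:

```go
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/argon2"
)

func main() {
	// Mirrors core.Argon2* from above; argon2 memory is in KiB, so 64*1024 KiB = 64 MB.
	salt := make([]byte, 16)
	if _, err := rand.Read(salt); err != nil {
		panic(err)
	}
	key := argon2.IDKey([]byte("s3cret"), salt, 3, 64*1024, 4, 32)
	fmt.Printf("salt=%x\nkey=%x\n", salt, key)
}
```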

View File

@ -6,7 +6,7 @@ import (
"time"
)
// LogEntry represents a single log record flowing through the pipeline
// Represents a single log record flowing through the pipeline
type LogEntry struct {
Time time.Time `json:"time"`
Source string `json:"source"`

View File

@ -11,7 +11,7 @@ import (
"github.com/lixenwraith/log"
)
// Chain manages multiple filters in sequence
// Manages multiple filters in sequence
type Chain struct {
filters []*Filter
logger *log.Logger
@ -21,7 +21,7 @@ type Chain struct {
totalPassed atomic.Uint64
}
// NewChain creates a new filter chain from configurations
// Creates a new filter chain from configurations
func NewChain(configs []config.FilterConfig, logger *log.Logger) (*Chain, error) {
chain := &Chain{
filters: make([]*Filter, 0, len(configs)),
@ -29,7 +29,7 @@ func NewChain(configs []config.FilterConfig, logger *log.Logger) (*Chain, error)
}
for i, cfg := range configs {
filter, err := New(cfg, logger)
filter, err := NewFilter(cfg, logger)
if err != nil {
return nil, fmt.Errorf("filter[%d]: %w", i, err)
}
@ -42,8 +42,7 @@ func NewChain(configs []config.FilterConfig, logger *log.Logger) (*Chain, error)
return chain, nil
}
// Apply runs all filters in sequence
// Returns true if the entry passes all filters
// Runs all filters in sequence, returns true if the entry passes all filters
func (c *Chain) Apply(entry core.LogEntry) bool {
c.totalProcessed.Add(1)
@ -68,7 +67,7 @@ func (c *Chain) Apply(entry core.LogEntry) bool {
return true
}
// GetStats returns chain statistics
// Returns chain statistics
func (c *Chain) GetStats() map[string]any {
filterStats := make([]map[string]any, len(c.filters))
for i, filter := range c.filters {

View File

@ -13,7 +13,7 @@ import (
"github.com/lixenwraith/log"
)
// Filter applies regex-based filtering to log entries
// Applies regex-based filtering to log entries
type Filter struct {
config config.FilterConfig
patterns []*regexp.Regexp
@ -26,8 +26,8 @@ type Filter struct {
totalDropped atomic.Uint64
}
// New creates a new filter from configuration
func New(cfg config.FilterConfig, logger *log.Logger) (*Filter, error) {
// Creates a new filter from configuration
func NewFilter(cfg config.FilterConfig, logger *log.Logger) (*Filter, error) {
// Set defaults
if cfg.Type == "" {
cfg.Type = config.FilterTypeInclude
@ -60,12 +60,15 @@ func New(cfg config.FilterConfig, logger *log.Logger) (*Filter, error) {
return f, nil
}
// Apply checks if a log entry should be passed through
// Checks if a log entry should be passed through
func (f *Filter) Apply(entry core.LogEntry) bool {
f.totalProcessed.Add(1)
// No patterns means pass everything
if len(f.patterns) == 0 {
f.logger.Debug("msg", "No patterns configured, passing entry",
"component", "filter",
"type", f.config.Type)
return true
}
@ -78,10 +81,32 @@ func (f *Filter) Apply(entry core.LogEntry) bool {
text = entry.Source + " " + text
}
f.logger.Debug("msg", "Filter checking entry",
"component", "filter",
"type", f.config.Type,
"logic", f.config.Logic,
"entry_level", entry.Level,
"entry_source", entry.Source,
"entry_message", entry.Message[:min(100, len(entry.Message))], // First 100 chars
"text_to_match", text[:min(150, len(text))], // First 150 chars
"patterns", f.config.Patterns)
for i, pattern := range f.config.Patterns {
isMatch := f.patterns[i].MatchString(text)
f.logger.Debug("msg", "Pattern match result",
"component", "filter",
"pattern_index", i,
"pattern", pattern,
"matched", isMatch)
}
matched := f.matches(text)
if matched {
f.totalMatched.Add(1)
}
f.logger.Debug("msg", "Filter final match result",
"component", "filter",
"matched", matched)
// Determine if we should pass or drop
shouldPass := false
@ -92,6 +117,12 @@ func (f *Filter) Apply(entry core.LogEntry) bool {
shouldPass = !matched
}
f.logger.Debug("msg", "Filter decision",
"component", "filter",
"type", f.config.Type,
"matched", matched,
"should_pass", shouldPass)
if !shouldPass {
f.totalDropped.Add(1)
}
@ -99,7 +130,7 @@ func (f *Filter) Apply(entry core.LogEntry) bool {
return shouldPass
}
// matches checks if text matches the patterns according to the logic
// Checks if text matches the patterns according to the logic
func (f *Filter) matches(text string) bool {
switch f.config.Logic {
case config.FilterLogicOr:
@ -129,7 +160,7 @@ func (f *Filter) matches(text string) bool {
}
}
// GetStats returns filter statistics
// Returns filter statistics
func (f *Filter) GetStats() map[string]any {
return map[string]any{
"type": f.config.Type,
@ -141,7 +172,7 @@ func (f *Filter) GetStats() map[string]any {
}
}
// UpdatePatterns allows dynamic pattern updates
// Allows dynamic pattern updates
func (f *Filter) UpdatePatterns(patterns []string) error {
compiled := make([]*regexp.Regexp, 0, len(patterns))
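To illustrate the include/exclude semantics, a hypothetical exclude filter dropping health-check noise (the `FilterTypeExclude` constant name is my assumption; only `FilterTypeInclude` appears in this diff):

```go
cfg := config.FilterConfig{
	Type:     config.FilterTypeExclude,
	Patterns: []string{`GET /healthz`},
}
f, err := filter.NewFilter(cfg, logger)
if err != nil {
	// invalid regex in Patterns
}
ok := f.Apply(core.LogEntry{Message: "GET /healthz 200"}) // false: matched, so dropped
_ = ok
```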

View File

@ -4,12 +4,13 @@ package format
import (
"fmt"
"logwisp/src/internal/config"
"logwisp/src/internal/core"
"github.com/lixenwraith/log"
)
// Formatter defines the interface for transforming a LogEntry into a byte slice.
// Defines the interface for transforming a LogEntry into a byte slice.
type Formatter interface {
// Format takes a LogEntry and returns the formatted log as a byte slice.
Format(entry core.LogEntry) ([]byte, error)
@ -18,21 +19,16 @@ type Formatter interface {
Name() string
}
// New creates a new Formatter based on the provided configuration.
func New(name string, options map[string]any, logger *log.Logger) (Formatter, error) {
// Default to raw if no format specified
if name == "" {
name = "raw"
}
switch name {
// Creates a new Formatter based on the provided configuration.
func NewFormatter(cfg *config.FormatConfig, logger *log.Logger) (Formatter, error) {
switch cfg.Type {
case "json":
return NewJSONFormatter(options, logger)
case "text":
return NewTextFormatter(options, logger)
case "raw":
return NewRawFormatter(options, logger)
return NewJSONFormatter(cfg.JSONFormatOptions, logger)
case "txt":
return NewTxtFormatter(cfg.TxtFormatOptions, logger)
case "raw", "":
return NewRawFormatter(cfg.RawFormatOptions, logger)
default:
return nil, fmt.Errorf("unknown formatter type: %s", name)
return nil, fmt.Errorf("unknown formatter type: %s", cfg.Type)
}
}
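A sketch of wiring the dispatcher, with `FormatConfig` field names inferred from the code above (the exact struct layout is not shown in this hunk):

```go
cfg := &config.FormatConfig{
	Type: "json",
	JSONFormatOptions: &config.JSONFormatterOptions{
		Pretty:         false,
		TimestampField: "timestamp",
		LevelField:     "level",
		MessageField:   "message",
		SourceField:    "source",
	},
}
formatter, err := format.NewFormatter(cfg, logger)
if err != nil {
	// unknown formatter type
}
_ = formatter
```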

View File

@ -6,60 +6,37 @@ import (
"fmt"
"time"
"logwisp/src/internal/config"
"logwisp/src/internal/core"
"github.com/lixenwraith/log"
)
// JSONFormatter produces structured JSON logs
// Produces structured JSON logs
type JSONFormatter struct {
pretty bool
timestampField string
levelField string
messageField string
sourceField string
logger *log.Logger
config *config.JSONFormatterOptions
logger *log.Logger
}
// NewJSONFormatter creates a new JSON formatter
func NewJSONFormatter(options map[string]any, logger *log.Logger) (*JSONFormatter, error) {
// Creates a new JSON formatter
func NewJSONFormatter(opts *config.JSONFormatterOptions, logger *log.Logger) (*JSONFormatter, error) {
f := &JSONFormatter{
timestampField: "timestamp",
levelField: "level",
messageField: "message",
sourceField: "source",
logger: logger,
}
// Extract options
if pretty, ok := options["pretty"].(bool); ok {
f.pretty = pretty
}
if field, ok := options["timestamp_field"].(string); ok && field != "" {
f.timestampField = field
}
if field, ok := options["level_field"].(string); ok && field != "" {
f.levelField = field
}
if field, ok := options["message_field"].(string); ok && field != "" {
f.messageField = field
}
if field, ok := options["source_field"].(string); ok && field != "" {
f.sourceField = field
config: opts,
logger: logger,
}
return f, nil
}
// Format formats the log entry as JSON
// Formats the log entry as JSON
func (f *JSONFormatter) Format(entry core.LogEntry) ([]byte, error) {
// Start with a clean map
output := make(map[string]any)
// First, populate with LogWisp metadata
output[f.timestampField] = entry.Time.Format(time.RFC3339Nano)
output[f.levelField] = entry.Level
output[f.sourceField] = entry.Source
output[f.config.TimestampField] = entry.Time.Format(time.RFC3339Nano)
output[f.config.LevelField] = entry.Level
output[f.config.SourceField] = entry.Source
// Try to parse the message as JSON
var msgData map[string]any
@ -68,21 +45,21 @@ func (f *JSONFormatter) Format(entry core.LogEntry) ([]byte, error) {
// LogWisp metadata takes precedence
for k, v := range msgData {
// Don't overwrite our standard fields
if k != f.timestampField && k != f.levelField && k != f.sourceField {
if k != f.config.TimestampField && k != f.config.LevelField && k != f.config.SourceField {
output[k] = v
}
}
// If the original JSON had these fields, log that we're overriding
if _, hasTime := msgData[f.timestampField]; hasTime {
if _, hasTime := msgData[f.config.TimestampField]; hasTime {
f.logger.Debug("msg", "Overriding timestamp from JSON message",
"component", "json_formatter",
"original", msgData[f.timestampField],
"logwisp", output[f.timestampField])
"original", msgData[f.config.TimestampField],
"logwisp", output[f.config.TimestampField])
}
} else {
// Message is not valid JSON - add as message field
output[f.messageField] = entry.Message
output[f.config.MessageField] = entry.Message
}
// Add any additional fields from LogEntry.Fields
@ -101,7 +78,7 @@ func (f *JSONFormatter) Format(entry core.LogEntry) ([]byte, error) {
// Marshal to JSON
var result []byte
var err error
if f.pretty {
if f.config.Pretty {
result, err = json.MarshalIndent(output, "", " ")
} else {
result, err = json.Marshal(output)
@ -115,12 +92,12 @@ func (f *JSONFormatter) Format(entry core.LogEntry) ([]byte, error) {
return append(result, '\n'), nil
}
// Name returns the formatter name
// Returns the formatter name
func (f *JSONFormatter) Name() string {
return "json"
}
// FormatBatch formats multiple entries as a JSON array
// Formats multiple entries as a JSON array
// This is a special method for sinks that need to batch entries
func (f *JSONFormatter) FormatBatch(entries []core.LogEntry) ([]byte, error) {
// For batching, we need to create an array of formatted objects
@ -147,7 +124,7 @@ func (f *JSONFormatter) FormatBatch(entries []core.LogEntry) ([]byte, error) {
// Marshal the entire batch as an array
var result []byte
var err error
if f.pretty {
if f.config.Pretty {
result, err = json.MarshalIndent(batch, "", " ")
} else {
result, err = json.Marshal(batch)

View File

@ -2,30 +2,37 @@
package format
import (
"logwisp/src/internal/config"
"logwisp/src/internal/core"
"github.com/lixenwraith/log"
)
// RawFormatter outputs the log message as-is with a newline
// Outputs the log message as-is with a newline
type RawFormatter struct {
config *config.RawFormatterOptions
logger *log.Logger
}
// NewRawFormatter creates a new raw formatter
func NewRawFormatter(options map[string]any, logger *log.Logger) (*RawFormatter, error) {
// Creates a new raw formatter
func NewRawFormatter(cfg *config.RawFormatterOptions, logger *log.Logger) (*RawFormatter, error) {
return &RawFormatter{
config: cfg,
logger: logger,
}, nil
}
// Format returns the message with a newline appended
// Returns the message with a newline appended
func (f *RawFormatter) Format(entry core.LogEntry) ([]byte, error) {
// Simply return the message with newline
return append([]byte(entry.Message), '\n'), nil
// TODO: standardize whether raw processing appends "\n"; check lixenwraith/log for consistent behavior
if f.config.AddNewLine {
return append([]byte(entry.Message), '\n'), nil
} else {
return []byte(entry.Message), nil
}
}
// Name returns the formatter name
// Returns the formatter name
func (f *RawFormatter) Name() string {
return "raw"
}

View File

@ -1,4 +1,4 @@
// FILE: logwisp/src/internal/format/text.go
// FILE: logwisp/src/internal/format/txt.go
package format
import (
@ -8,48 +8,37 @@ import (
"text/template"
"time"
"logwisp/src/internal/config"
"logwisp/src/internal/core"
"github.com/lixenwraith/log"
)
// TextFormatter produces human-readable text logs using templates
type TextFormatter struct {
template *template.Template
timestampFormat string
logger *log.Logger
// Produces human-readable text logs using templates
type TxtFormatter struct {
config *config.TxtFormatterOptions
template *template.Template
logger *log.Logger
}
// NewTextFormatter creates a new text formatter
func NewTextFormatter(options map[string]any, logger *log.Logger) (*TextFormatter, error) {
// Default template
templateStr := "[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Source}} - {{.Message}}{{ if .Fields }} {{.Fields}}{{ end }}"
if tmpl, ok := options["template"].(string); ok && tmpl != "" {
templateStr = tmpl
}
// Default timestamp format
timestampFormat := time.RFC3339
if tsFormat, ok := options["timestamp_format"].(string); ok && tsFormat != "" {
timestampFormat = tsFormat
}
f := &TextFormatter{
timestampFormat: timestampFormat,
logger: logger,
// Creates a new text formatter
func NewTxtFormatter(opts *config.TxtFormatterOptions, logger *log.Logger) (*TxtFormatter, error) {
f := &TxtFormatter{
config: opts,
logger: logger,
}
// Create template with helper functions
funcMap := template.FuncMap{
"FmtTime": func(t time.Time) string {
return t.Format(f.timestampFormat)
return t.Format(f.config.TimestampFormat)
},
"ToUpper": strings.ToUpper,
"ToLower": strings.ToLower,
"TrimSpace": strings.TrimSpace,
}
tmpl, err := template.New("log").Funcs(funcMap).Parse(templateStr)
tmpl, err := template.New("log").Funcs(funcMap).Parse(f.config.Template)
if err != nil {
return nil, fmt.Errorf("invalid template: %w", err)
}
@ -58,8 +47,8 @@ func NewTextFormatter(options map[string]any, logger *log.Logger) (*TextFormatte
return f, nil
}
// Format formats the log entry using the template
func (f *TextFormatter) Format(entry core.LogEntry) ([]byte, error) {
// Formats the log entry using the template
func (f *TxtFormatter) Format(entry core.LogEntry) ([]byte, error) {
// Prepare data for template
data := map[string]any{
"Timestamp": entry.Time,
@ -82,11 +71,11 @@ func (f *TextFormatter) Format(entry core.LogEntry) ([]byte, error) {
if err := f.template.Execute(&buf, data); err != nil {
// Fallback: return a basic formatted message
f.logger.Debug("msg", "Template execution failed, using fallback",
"component", "text_formatter",
"component", "txt_formatter",
"error", err)
fallback := fmt.Sprintf("[%s] [%s] %s - %s\n",
entry.Time.Format(f.timestampFormat),
entry.Time.Format(f.config.TimestampFormat),
strings.ToUpper(entry.Level),
entry.Source,
entry.Message)
@ -102,7 +91,7 @@ func (f *TextFormatter) Format(entry core.LogEntry) ([]byte, error) {
return result, nil
}
// Name returns the formatter name
func (f *TextFormatter) Name() string {
return "text"
// Returns the formatter name
func (f *TxtFormatter) Name() string {
return "txt"
}
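To make the template mechanics concrete, a self-contained sketch using the same helper funcs; the template string mirrors the old default and the output line is illustrative:

```go
package main

import (
	"os"
	"strings"
	"text/template"
	"time"
)

func main() {
	funcMap := template.FuncMap{
		"FmtTime": func(t time.Time) string { return t.Format(time.RFC3339) },
		"ToUpper": strings.ToUpper,
		"ToLower": strings.ToLower,
	}
	const tmplStr = "[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Source}} - {{.Message}}\n"
	tmpl := template.Must(template.New("log").Funcs(funcMap).Parse(tmplStr))

	data := map[string]any{
		"Timestamp": time.Date(2025, 1, 2, 3, 4, 5, 0, time.UTC),
		"Level":     "info",
		"Source":    "app.log",
		"Message":   "service started",
	}
	_ = tmpl.Execute(os.Stdout, data)
	// Output: [2025-01-02T03:04:05Z] [INFO] app.log - service started
}
```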

View File

@ -17,6 +17,7 @@ import (
// DenialReason indicates why a request was denied
type DenialReason string
// NOTE: this program is IPv4-only; IPv6 is not supported
const (
// IPv4Only is the enforcement message for IPv6 rejection
IPv4Only = "IPv4-only (IPv6 not supported)"
@ -33,7 +34,7 @@ const (
// NetLimiter manages net limiting for a transport
type NetLimiter struct {
config config.NetLimitConfig
config *config.NetLimitConfig
logger *log.Logger
// IP Access Control Lists
@ -48,8 +49,11 @@ type NetLimiter struct {
globalLimiter *TokenBucket
// Connection tracking
ipConnections map[string]*connTracker
connMu sync.RWMutex
ipConnections map[string]*connTracker
userConnections map[string]*connTracker
tokenConnections map[string]*connTracker
totalConnections atomic.Int64
connMu sync.RWMutex
// Statistics
totalRequests atomic.Uint64
@ -85,7 +89,11 @@ type connTracker struct {
}
// Creates a new net limiter
func NewNetLimiter(cfg config.NetLimitConfig, logger *log.Logger) *NetLimiter {
func NewNetLimiter(cfg *config.NetLimitConfig, logger *log.Logger) *NetLimiter {
if cfg == nil {
return nil
}
// Return nil only if nothing is configured
hasACL := len(cfg.IPWhitelist) > 0 || len(cfg.IPBlacklist) > 0
hasRateLimit := cfg.Enabled
@ -101,28 +109,22 @@ func NewNetLimiter(cfg config.NetLimitConfig, logger *log.Logger) *NetLimiter {
ctx, cancel := context.WithCancel(context.Background())
l := &NetLimiter{
config: cfg,
logger: logger,
ipWhitelist: make([]*net.IPNet, 0),
ipBlacklist: make([]*net.IPNet, 0),
ipLimiters: make(map[string]*ipLimiter),
ipConnections: make(map[string]*connTracker),
lastCleanup: time.Now(),
ctx: ctx,
cancel: cancel,
cleanupDone: make(chan struct{}),
config: cfg,
logger: logger,
ipWhitelist: make([]*net.IPNet, 0),
ipBlacklist: make([]*net.IPNet, 0),
ipLimiters: make(map[string]*ipLimiter),
ipConnections: make(map[string]*connTracker),
userConnections: make(map[string]*connTracker),
tokenConnections: make(map[string]*connTracker),
lastCleanup: time.Now(),
ctx: ctx,
cancel: cancel,
cleanupDone: make(chan struct{}),
}
// Parse IP lists
l.parseIPLists(cfg)
// Create global limiter if configured
if cfg.Enabled && cfg.LimitBy == "global" {
l.globalLimiter = NewTokenBucket(
float64(cfg.BurstSize),
cfg.RequestsPerSecond,
)
}
l.parseIPLists()
// Start cleanup goroutine only if rate limiting is enabled
if cfg.Enabled {
@ -137,22 +139,23 @@ func NewNetLimiter(cfg config.NetLimitConfig, logger *log.Logger) *NetLimiter {
"blacklist_rules", len(l.ipBlacklist),
"requests_per_second", cfg.RequestsPerSecond,
"burst_size", cfg.BurstSize,
"limit_by", cfg.LimitBy)
"max_connections_per_ip", cfg.MaxConnectionsPerIP,
"max_connections_total", cfg.MaxConnectionsTotal)
return l
}
// parseIPLists parses and validates IP whitelist/blacklist
func (l *NetLimiter) parseIPLists(cfg config.NetLimitConfig) {
func (l *NetLimiter) parseIPLists() {
// Parse whitelist
for _, entry := range cfg.IPWhitelist {
for _, entry := range l.config.IPWhitelist {
if ipNet := l.parseIPEntry(entry, "whitelist"); ipNet != nil {
l.ipWhitelist = append(l.ipWhitelist, ipNet)
}
}
// Parse blacklist
for _, entry := range cfg.IPBlacklist {
for _, entry := range l.config.IPBlacklist {
if ipNet := l.parseIPEntry(entry, "blacklist"); ipNet != nil {
l.ipBlacklist = append(l.ipBlacklist, ipNet)
}
@ -275,7 +278,7 @@ func (l *NetLimiter) Shutdown() {
}
}
// Checks if an HTTP request should be allowed
// Checks if an HTTP request should be allowed: IP access control + connection limits (IP only) + rate limit via checkIPLimit()
func (l *NetLimiter) CheckHTTP(remoteAddr string) (allowed bool, statusCode int64, message string) {
if l == nil {
return true, 0, ""
@ -342,7 +345,7 @@ func (l *NetLimiter) CheckHTTP(remoteAddr string) (allowed bool, statusCode int6
}
// Check rate limit
if !l.checkLimit(ipStr) {
if !l.checkIPLimit(ipStr) {
l.blockedByRateLimit.Add(1)
statusCode = l.config.ResponseCode
if statusCode == 0 {
@ -371,7 +374,7 @@ func (l *NetLimiter) updateConnectionActivity(ip string) {
}
}
// Checks if a TCP connection should be allowed
// Checks if a TCP connection should be allowed: IP access control + calls checkIPLimit()
func (l *NetLimiter) CheckTCP(remoteAddr net.Addr) bool {
if l == nil {
return true
@ -411,7 +414,7 @@ func (l *NetLimiter) CheckTCP(remoteAddr net.Addr) bool {
// Check rate limit
ipStr := tcpAddr.IP.String()
if !l.checkLimit(ipStr) {
if !l.checkIPLimit(ipStr) {
l.blockedByRateLimit.Add(1)
return false
}
@ -530,17 +533,40 @@ func (l *NetLimiter) GetStats() map[string]any {
return map[string]any{"enabled": false}
}
// Get active rate limiters count
l.ipMu.RLock()
activeIPs := len(l.ipLimiters)
l.ipMu.RUnlock()
// Get connection tracker counts and calculate total active connections
l.connMu.RLock()
totalConnections := 0
ipConnTrackers := len(l.ipConnections)
userConnTrackers := len(l.userConnections)
tokenConnTrackers := len(l.tokenConnections)
// Calculate the actual connection count by summing all IP connections;
// potentially more accurate than the totalConnections counter, which may drift
// TODO: verify the counter and the sum stay in sync, then refactor one out
actualIPConnections := 0
for _, tracker := range l.ipConnections {
totalConnections += int(tracker.connections.Load())
actualIPConnections += int(tracker.connections.Load())
}
actualUserConnections := 0
for _, tracker := range l.userConnections {
actualUserConnections += int(tracker.connections.Load())
}
actualTokenConnections := 0
for _, tracker := range l.tokenConnections {
actualTokenConnections += int(tracker.connections.Load())
}
// Use the counter for total (should match actualIPConnections in most cases)
totalConns := l.totalConnections.Load()
l.connMu.RUnlock()
// Calculate total blocked
totalBlocked := l.blockedByBlacklist.Load() +
l.blockedByWhitelist.Load() +
l.blockedByRateLimit.Load() +
@ -558,23 +584,37 @@ func (l *NetLimiter) GetStats() map[string]any {
"conn_limit": l.blockedByConnLimit.Load(),
"invalid_ip": l.blockedByInvalidIP.Load(),
},
"active_ips": activeIPs,
"total_connections": totalConnections,
"acl": map[string]int{
"whitelist_rules": len(l.ipWhitelist),
"blacklist_rules": len(l.ipBlacklist),
},
"rate_limit": map[string]any{
"rate_limiting": map[string]any{
"enabled": l.config.Enabled,
"requests_per_second": l.config.RequestsPerSecond,
"burst_size": l.config.BurstSize,
"limit_by": l.config.LimitBy,
"active_ip_limiters": activeIPs, // IPs being rate-limited
},
"access_control": map[string]any{
"whitelist_rules": len(l.ipWhitelist),
"blacklist_rules": len(l.ipBlacklist),
},
"connections": map[string]any{
// Actual counts
"total_active": totalConns, // Counter-based total
"active_ip_connections": actualIPConnections, // Sum of all IP connections
"active_user_connections": actualUserConnections, // Sum of all user connections
"active_token_connections": actualTokenConnections, // Sum of all token connections
// Tracker counts (number of unique IPs/users/tokens being tracked)
"tracked_ips": ipConnTrackers,
"tracked_users": userConnTrackers,
"tracked_tokens": tokenConnTrackers,
// Configuration limits (0 = disabled)
"limit_per_ip": l.config.MaxConnectionsPerIP,
"limit_total": l.config.MaxConnectionsTotal,
},
}
}
// Performs the actual net limit check
func (l *NetLimiter) checkLimit(ip string) bool {
// Performs IP net limit check (req/sec)
func (l *NetLimiter) checkIPLimit(ip string) bool {
// Validate IP format
parsedIP := net.ParseIP(ip)
if parsedIP == nil || !isIPv4(parsedIP) {
@ -587,53 +627,36 @@ func (l *NetLimiter) checkLimit(ip string) bool {
// Maybe run cleanup
l.maybeCleanup()
switch l.config.LimitBy {
case "global":
return l.globalLimiter.Allow()
case "ip", "":
// Default to per-IP limiting
l.ipMu.Lock()
lim, exists := l.ipLimiters[ip]
if !exists {
// Create new limiter for this IP
lim = &ipLimiter{
bucket: NewTokenBucket(
float64(l.config.BurstSize),
l.config.RequestsPerSecond,
),
lastSeen: time.Now(),
}
l.ipLimiters[ip] = lim
l.uniqueIPs.Add(1)
l.logger.Debug("msg", "Created new IP limiter",
"ip", ip,
"total_ips", l.uniqueIPs.Load())
} else {
lim.lastSeen = time.Now()
// IP limit
l.ipMu.Lock()
lim, exists := l.ipLimiters[ip]
if !exists {
// Create new limiter for this IP
lim = &ipLimiter{
bucket: NewTokenBucket(
float64(l.config.BurstSize),
l.config.RequestsPerSecond,
),
lastSeen: time.Now(),
}
l.ipMu.Unlock()
l.ipLimiters[ip] = lim
l.uniqueIPs.Add(1)
// Check connection limit if configured
if l.config.MaxConnectionsPerIP > 0 {
l.connMu.RLock()
tracker, exists := l.ipConnections[ip]
l.connMu.RUnlock()
if exists && tracker.connections.Load() >= l.config.MaxConnectionsPerIP {
return false
}
}
return lim.bucket.Allow()
default:
// Unknown limit_by value, allow by default
l.logger.Warn("msg", "Unknown limit_by value",
"limit_by", l.config.LimitBy)
return true
l.logger.Debug("msg", "Created new IP limiter",
"ip", ip,
"total_ips", l.uniqueIPs.Load())
} else {
lim.lastSeen = time.Now()
}
l.ipMu.Unlock()
// Rate limit check; callers (CheckHTTP/CheckTCP) record blockedByRateLimit
// on denial, so it is not incremented here as well to avoid double counting
return lim.bucket.Allow()
}
// Runs cleanup if enough time has passed
@ -690,25 +713,57 @@ func (l *NetLimiter) cleanup() {
// Clean up stale connection trackers
l.connMu.Lock()
connCleaned := 0
// Clean IP connections
ipCleaned := 0
for ip, tracker := range l.ipConnections {
tracker.mu.Lock()
lastSeen := tracker.lastSeen
tracker.mu.Unlock()
// Remove if no activity for 5 minutes AND no active connections
if now.Sub(lastSeen) > staleTimeout && tracker.connections.Load() <= 0 {
delete(l.ipConnections, ip)
connCleaned++
ipCleaned++
}
}
// Clean user connections
userCleaned := 0
for user, tracker := range l.userConnections {
tracker.mu.Lock()
lastSeen := tracker.lastSeen
tracker.mu.Unlock()
if now.Sub(lastSeen) > staleTimeout && tracker.connections.Load() <= 0 {
delete(l.userConnections, user)
userCleaned++
}
}
// Clean token connections
tokenCleaned := 0
for token, tracker := range l.tokenConnections {
tracker.mu.Lock()
lastSeen := tracker.lastSeen
tracker.mu.Unlock()
if now.Sub(lastSeen) > staleTimeout && tracker.connections.Load() <= 0 {
delete(l.tokenConnections, token)
tokenCleaned++
}
}
l.connMu.Unlock()
if connCleaned > 0 {
if ipCleaned > 0 || userCleaned > 0 || tokenCleaned > 0 {
l.logger.Debug("msg", "Cleaned up stale connection trackers",
"component", "netlimit",
"cleaned", connCleaned,
"remaining", len(l.ipConnections))
"ip_cleaned", ipCleaned,
"user_cleaned", userCleaned,
"token_cleaned", tokenCleaned,
"ip_remaining", len(l.ipConnections),
"user_remaining", len(l.userConnections),
"token_remaining", len(l.tokenConnections))
}
}
@ -729,4 +784,110 @@ func (l *NetLimiter) cleanupLoop() {
l.cleanup()
}
}
}
// Tracks a new connection with optional user/token info; enforces per-IP and total connection limits (TCP only)
func (l *NetLimiter) TrackConnection(ip string, user string, token string) bool {
if l == nil {
return true
}
l.connMu.Lock()
defer l.connMu.Unlock()
// Check total connections limit (0 = disabled)
if l.config.MaxConnectionsTotal > 0 {
currentTotal := l.totalConnections.Load()
if currentTotal >= l.config.MaxConnectionsTotal {
l.blockedByConnLimit.Add(1)
l.logger.Debug("msg", "TCP connection blocked by total limit",
"component", "netlimit",
"current_total", currentTotal,
"max_connections_total", l.config.MaxConnectionsTotal)
return false
}
}
// Check per-IP connection limit (0 = disabled)
if l.config.MaxConnectionsPerIP > 0 && ip != "" {
tracker, exists := l.ipConnections[ip]
if !exists {
tracker = &connTracker{lastSeen: time.Now()}
l.ipConnections[ip] = tracker
}
if tracker.connections.Load() >= l.config.MaxConnectionsPerIP {
l.blockedByConnLimit.Add(1)
l.logger.Debug("msg", "TCP connection blocked by IP limit",
"component", "netlimit",
"ip", ip,
"current", tracker.connections.Load(),
"max", l.config.MaxConnectionsPerIP)
return false
}
}
// All checks passed, increment counters
l.totalConnections.Add(1)
if ip != "" && l.config.MaxConnectionsPerIP > 0 {
if tracker, exists := l.ipConnections[ip]; exists {
tracker.connections.Add(1)
tracker.mu.Lock()
tracker.lastSeen = time.Now()
tracker.mu.Unlock()
}
}
// Track user/token connections so ReleaseConnection has matching counters to decrement
if user != "" {
tracker, exists := l.userConnections[user]
if !exists {
tracker = &connTracker{lastSeen: time.Now()}
l.userConnections[user] = tracker
}
tracker.connections.Add(1)
tracker.mu.Lock()
tracker.lastSeen = time.Now()
tracker.mu.Unlock()
}
if token != "" {
tracker, exists := l.tokenConnections[token]
if !exists {
tracker = &connTracker{lastSeen: time.Now()}
l.tokenConnections[token] = tracker
}
tracker.connections.Add(1)
tracker.mu.Lock()
tracker.lastSeen = time.Now()
tracker.mu.Unlock()
}
return true
}
// Releases a tracked connection
func (l *NetLimiter) ReleaseConnection(ip string, user string, token string) {
if l == nil {
return
}
l.connMu.Lock()
defer l.connMu.Unlock()
// Decrement total
if l.totalConnections.Load() > 0 {
l.totalConnections.Add(-1)
}
// Decrement IP counter
if ip != "" {
if tracker, exists := l.ipConnections[ip]; exists {
if tracker.connections.Load() > 0 {
tracker.connections.Add(-1)
}
tracker.mu.Lock()
tracker.lastSeen = time.Now()
tracker.mu.Unlock()
}
}
// Decrement user counter
if user != "" {
if tracker, exists := l.userConnections[user]; exists {
if tracker.connections.Load() > 0 {
tracker.connections.Add(-1)
}
tracker.mu.Lock()
tracker.lastSeen = time.Now()
tracker.mu.Unlock()
}
}
// Decrement token counter
if token != "" {
if tracker, exists := l.tokenConnections[token]; exists {
if tracker.connections.Load() > 0 {
tracker.connections.Add(-1)
}
tracker.mu.Lock()
tracker.lastSeen = time.Now()
tracker.mu.Unlock()
}
}
}
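A sketch of how a TCP server might bracket a connection with these calls; the accept-loop shape and `handle` are assumptions, only `TrackConnection`/`ReleaseConnection` come from this diff:

```go
for {
	conn, err := listener.Accept()
	if err != nil {
		return
	}
	ip, _, _ := net.SplitHostPort(conn.RemoteAddr().String())
	if !limiter.TrackConnection(ip, "", "") {
		conn.Close() // over the per-IP or total connection limit
		continue
	}
	go func(c net.Conn, ip string) {
		defer limiter.ReleaseConnection(ip, "", "")
		defer c.Close()
		handle(c) // hypothetical connection handler
	}(conn, ip)
}
```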

View File

@ -11,7 +11,7 @@ import (
"github.com/lixenwraith/log"
)
// RateLimiter enforces rate limits on log entries flowing through a pipeline.
// Enforces rate limits on log entries flowing through a pipeline.
type RateLimiter struct {
bucket *TokenBucket
policy config.RateLimitPolicy
@ -23,7 +23,7 @@ type RateLimiter struct {
droppedCount atomic.Uint64
}
// NewRateLimiter creates a new rate limiter. If cfg.Rate is 0, it returns nil.
// Creates a new rate limiter. If cfg.Rate is 0, it returns nil.
func NewRateLimiter(cfg config.RateLimitConfig, logger *log.Logger) (*RateLimiter, error) {
if cfg.Rate <= 0 {
return nil, nil // No rate limit
@ -56,7 +56,7 @@ func NewRateLimiter(cfg config.RateLimitConfig, logger *log.Logger) (*RateLimite
return l, nil
}
// Allow checks if a log entry is allowed to pass based on the rate limit.
// Checks if a log entry is allowed to pass based on the rate limit.
// It returns true if the entry should pass, false if it should be dropped.
func (l *RateLimiter) Allow(entry core.LogEntry) bool {
if l == nil || l.policy == config.PolicyPass {

View File

@ -16,7 +16,7 @@ type TokenBucket struct {
mu sync.Mutex
}
// NewTokenBucket creates a new token bucket with given capacity and refill rate
// Creates a new token bucket with given capacity and refill rate
func NewTokenBucket(capacity float64, refillRate float64) *TokenBucket {
return &TokenBucket{
capacity: capacity,
@ -26,12 +26,12 @@ func NewTokenBucket(capacity float64, refillRate float64) *TokenBucket {
}
}
// Allow attempts to consume one token, returns true if allowed
// Attempts to consume one token, returns true if allowed
func (tb *TokenBucket) Allow() bool {
return tb.AllowN(1)
}
// AllowN attempts to consume n tokens, returns true if allowed
// Attempts to consume n tokens, returns true if allowed
func (tb *TokenBucket) AllowN(n float64) bool {
tb.mu.Lock()
defer tb.mu.Unlock()
@ -45,7 +45,7 @@ func (tb *TokenBucket) AllowN(n float64) bool {
return false
}
// Tokens returns the current number of available tokens
// Returns the current number of available tokens
func (tb *TokenBucket) Tokens() float64 {
tb.mu.Lock()
defer tb.mu.Unlock()
@ -54,7 +54,7 @@ func (tb *TokenBucket) Tokens() float64 {
return tb.tokens
}
// refill adds tokens based on time elapsed since last refill
// Adds tokens based on time elapsed since last refill
// MUST be called with mutex held
func (tb *TokenBucket) refill() {
now := time.Now()
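For intuition, the bucket allows an initial burst up to capacity, then admits at the refill rate; a small sketch (numbers illustrative):

```go
tb := NewTokenBucket(5, 1) // capacity 5 tokens, refill 1 token/second

for i := 0; i < 7; i++ {
	fmt.Println(i, tb.Allow()) // first 5 print true (burst), then false
}

time.Sleep(2 * time.Second)
fmt.Println(tb.Allow()) // true again: roughly 2 tokens have refilled
```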

View File

@ -3,12 +3,14 @@ package service
import (
"context"
"fmt"
"sync"
"sync/atomic"
"time"
"logwisp/src/internal/config"
"logwisp/src/internal/filter"
"logwisp/src/internal/format"
"logwisp/src/internal/limit"
"logwisp/src/internal/sink"
"logwisp/src/internal/source"
@ -16,10 +18,9 @@ import (
"github.com/lixenwraith/log"
)
// Pipeline manages the flow of data from sources through filters to sinks
// Manages the flow of data from sources through filters to sinks
type Pipeline struct {
Name string
Config config.PipelineConfig
Config *config.PipelineConfig
Sources []source.Source
RateLimiter *limit.RateLimiter
FilterChain *filter.Chain
@ -32,7 +33,7 @@ type Pipeline struct {
wg sync.WaitGroup
}
// PipelineStats contains statistics for a pipeline
// Contains statistics for a pipeline
type PipelineStats struct {
StartTime time.Time
TotalEntriesProcessed atomic.Uint64
@ -43,11 +44,116 @@ type PipelineStats struct {
FilterStats map[string]any
}
// Shutdown gracefully stops the pipeline
// Creates and starts a new pipeline
func (s *Service) NewPipeline(cfg *config.PipelineConfig) error {
s.mu.Lock()
defer s.mu.Unlock()
if _, exists := s.pipelines[cfg.Name]; exists {
err := fmt.Errorf("pipeline '%s' already exists", cfg.Name)
s.logger.Error("msg", "Failed to create pipeline - duplicate name",
"component", "service",
"pipeline", cfg.Name,
"error", err)
return err
}
s.logger.Debug("msg", "Creating pipeline", "pipeline", cfg.Name)
// Create pipeline context
pipelineCtx, pipelineCancel := context.WithCancel(s.ctx)
// Create pipeline instance
pipeline := &Pipeline{
Config: cfg,
Stats: &PipelineStats{
StartTime: time.Now(),
},
ctx: pipelineCtx,
cancel: pipelineCancel,
logger: s.logger,
}
// Create sources
for i, srcCfg := range cfg.Sources {
src, err := s.createSource(&srcCfg)
if err != nil {
pipelineCancel()
return fmt.Errorf("failed to create source[%d]: %w", i, err)
}
pipeline.Sources = append(pipeline.Sources, src)
}
// Create pipeline rate limiter
if cfg.RateLimit != nil {
limiter, err := limit.NewRateLimiter(*cfg.RateLimit, s.logger)
if err != nil {
pipelineCancel()
return fmt.Errorf("failed to create pipeline rate limiter: %w", err)
}
pipeline.RateLimiter = limiter
}
// Create filter chain
if len(cfg.Filters) > 0 {
chain, err := filter.NewChain(cfg.Filters, s.logger)
if err != nil {
pipelineCancel()
return fmt.Errorf("failed to create filter chain: %w", err)
}
pipeline.FilterChain = chain
}
// Create formatter for the pipeline
formatter, err := format.NewFormatter(cfg.Format, s.logger)
if err != nil {
pipelineCancel()
return fmt.Errorf("failed to create formatter: %w", err)
}
// Create sinks
for i, sinkCfg := range cfg.Sinks {
sinkInst, err := s.createSink(sinkCfg, formatter)
if err != nil {
pipelineCancel()
return fmt.Errorf("failed to create sink[%d]: %w", i, err)
}
pipeline.Sinks = append(pipeline.Sinks, sinkInst)
}
// Start all sources
for i, src := range pipeline.Sources {
if err := src.Start(); err != nil {
pipeline.Shutdown()
return fmt.Errorf("failed to start source[%d]: %w", i, err)
}
}
// Start all sinks
for i, sinkInst := range pipeline.Sinks {
if err := sinkInst.Start(pipelineCtx); err != nil {
pipeline.Shutdown()
return fmt.Errorf("failed to start sink[%d]: %w", i, err)
}
}
// Wire sources to sinks through filters
s.wirePipeline(pipeline)
// Start stats updater
pipeline.startStatsUpdater(pipelineCtx)
s.pipelines[cfg.Name] = pipeline
s.logger.Info("msg", "Pipeline created successfully",
"pipeline", cfg.Name)
return nil
}
// Gracefully stops the pipeline
func (p *Pipeline) Shutdown() {
p.logger.Info("msg", "Shutting down pipeline",
"component", "pipeline",
"pipeline", p.Name)
"pipeline", p.Config.Name)
// Cancel context to stop processing
p.cancel()
@ -78,17 +184,17 @@ func (p *Pipeline) Shutdown() {
p.logger.Info("msg", "Pipeline shutdown complete",
"component", "pipeline",
"pipeline", p.Name)
"pipeline", p.Config.Name)
}
// GetStats returns pipeline statistics
// Returns pipeline statistics
func (p *Pipeline) GetStats() map[string]any {
// Recovery to handle concurrent access during shutdown
// When service is shutting down, sources/sinks might be nil or partially stopped
defer func() {
if r := recover(); r != nil {
p.logger.Error("msg", "Panic getting pipeline stats",
"pipeline", p.Name,
"pipeline", p.Config.Name,
"panic", r)
}
}()
@ -142,7 +248,7 @@ func (p *Pipeline) GetStats() map[string]any {
}
return map[string]any{
"name": p.Name,
"name": p.Config.Name,
"uptime_seconds": int(time.Since(p.Stats.StartTime).Seconds()),
"total_processed": p.Stats.TotalEntriesProcessed.Load(),
"total_dropped_rate_limit": p.Stats.TotalEntriesDroppedByRateLimit.Load(),
@ -157,7 +263,7 @@ func (p *Pipeline) GetStats() map[string]any {
}
}
// startStatsUpdater runs periodic stats updates
// Runs periodic stats updates
func (p *Pipeline) startStatsUpdater(ctx context.Context) {
go func() {
ticker := time.NewTicker(1 * time.Second)

View File

@ -5,13 +5,10 @@ import (
"context"
"fmt"
"sync"
"time"
"logwisp/src/internal/config"
"logwisp/src/internal/core"
"logwisp/src/internal/filter"
"logwisp/src/internal/format"
"logwisp/src/internal/limit"
"logwisp/src/internal/sink"
"logwisp/src/internal/source"
@ -28,8 +25,8 @@ type Service struct {
logger *log.Logger
}
// New creates a new service
func New(ctx context.Context, logger *log.Logger) *Service {
// Creates a new service
func NewService(ctx context.Context, logger *log.Logger) *Service {
serviceCtx, cancel := context.WithCancel(ctx)
return &Service{
pipelines: make(map[string]*Pipeline),
@ -39,125 +36,7 @@ func New(ctx context.Context, logger *log.Logger) *Service {
}
}
// NewPipeline creates and starts a new pipeline
func (s *Service) NewPipeline(cfg config.PipelineConfig) error {
s.mu.Lock()
defer s.mu.Unlock()
if _, exists := s.pipelines[cfg.Name]; exists {
err := fmt.Errorf("pipeline '%s' already exists", cfg.Name)
s.logger.Error("msg", "Failed to create pipeline - duplicate name",
"component", "service",
"pipeline", cfg.Name,
"error", err)
return err
}
s.logger.Debug("msg", "Creating pipeline", "pipeline", cfg.Name)
// Create pipeline context
pipelineCtx, pipelineCancel := context.WithCancel(s.ctx)
// Create pipeline instance
pipeline := &Pipeline{
Name: cfg.Name,
Config: cfg,
Stats: &PipelineStats{
StartTime: time.Now(),
},
ctx: pipelineCtx,
cancel: pipelineCancel,
logger: s.logger,
}
// Create sources
for i, srcCfg := range cfg.Sources {
src, err := s.createSource(srcCfg)
if err != nil {
pipelineCancel()
return fmt.Errorf("failed to create source[%d]: %w", i, err)
}
pipeline.Sources = append(pipeline.Sources, src)
}
// Create pipeline rate limiter
if cfg.RateLimit != nil {
limiter, err := limit.NewRateLimiter(*cfg.RateLimit, s.logger)
if err != nil {
pipelineCancel()
return fmt.Errorf("failed to create pipeline rate limiter: %w", err)
}
pipeline.RateLimiter = limiter
}
// Create filter chain
if len(cfg.Filters) > 0 {
chain, err := filter.NewChain(cfg.Filters, s.logger)
if err != nil {
pipelineCancel()
return fmt.Errorf("failed to create filter chain: %w", err)
}
pipeline.FilterChain = chain
}
// Create formatter for the pipeline
var formatter format.Formatter
var err error
if cfg.Format != "" || len(cfg.FormatOptions) > 0 {
formatter, err = format.New(cfg.Format, cfg.FormatOptions, s.logger)
if err != nil {
pipelineCancel()
return fmt.Errorf("failed to create formatter: %w", err)
}
}
// Create sinks
for i, sinkCfg := range cfg.Sinks {
sinkInst, err := s.createSink(sinkCfg, formatter)
if err != nil {
pipelineCancel()
return fmt.Errorf("failed to create sink[%d]: %w", i, err)
}
pipeline.Sinks = append(pipeline.Sinks, sinkInst)
}
// Start all sources
for i, src := range pipeline.Sources {
if err := src.Start(); err != nil {
pipeline.Shutdown()
return fmt.Errorf("failed to start source[%d]: %w", i, err)
}
}
// Start all sinks
for i, sinkInst := range pipeline.Sinks {
if err := sinkInst.Start(pipelineCtx); err != nil {
pipeline.Shutdown()
return fmt.Errorf("failed to start sink[%d]: %w", i, err)
}
}
// Configure authentication for sinks that support it
for _, sinkInst := range pipeline.Sinks {
if setter, ok := sinkInst.(sink.AuthSetter); ok {
setter.SetAuthConfig(cfg.Auth)
}
}
// Wire sources to sinks through filters
s.wirePipeline(pipeline)
// Start stats updater
pipeline.startStatsUpdater(pipelineCtx)
s.pipelines[cfg.Name] = pipeline
s.logger.Info("msg", "Pipeline created successfully",
"pipeline", cfg.Name,
"auth_enabled", cfg.Auth != nil && cfg.Auth.Type != "none")
return nil
}
// wirePipeline connects sources to sinks through filters
// Connects sources to sinks through filters
func (s *Service) wirePipeline(p *Pipeline) {
// For each source, subscribe and process entries
for _, src := range p.Sources {
@ -172,17 +51,17 @@ func (s *Service) wirePipeline(p *Pipeline) {
defer func() {
if r := recover(); r != nil {
s.logger.Error("msg", "Panic in pipeline processing",
"pipeline", p.Name,
"pipeline", p.Config.Name,
"source", source.GetStats().Type,
"panic", r)
// Ensure failed pipelines don't leave resources hanging
go func() {
s.logger.Warn("msg", "Shutting down pipeline due to panic",
"pipeline", p.Name)
if err := s.RemovePipeline(p.Name); err != nil {
"pipeline", p.Config.Name)
if err := s.RemovePipeline(p.Config.Name); err != nil {
s.logger.Error("msg", "Failed to remove panicked pipeline",
"pipeline", p.Name,
"pipeline", p.Config.Name,
"error", err)
}
}()
@ -225,7 +104,7 @@ func (s *Service) wirePipeline(p *Pipeline) {
default:
// Drop if the sink buffer is full; this debug log may flood for a slow client
s.logger.Debug("msg", "Dropped log entry - sink buffer full",
"pipeline", p.Name)
"pipeline", p.Config.Name)
}
}
}
@ -234,60 +113,52 @@ func (s *Service) wirePipeline(p *Pipeline) {
}
}
// createSource creates a source instance based on configuration
func (s *Service) createSource(cfg config.SourceConfig) (source.Source, error) {
// Creates a source instance based on configuration
func (s *Service) createSource(cfg *config.SourceConfig) (source.Source, error) {
switch cfg.Type {
case "directory":
return source.NewDirectorySource(cfg.Options, s.logger)
return source.NewDirectorySource(cfg.Directory, s.logger)
case "stdin":
return source.NewStdinSource(cfg.Options, s.logger)
return source.NewStdinSource(cfg.Stdin, s.logger)
case "http":
return source.NewHTTPSource(cfg.Options, s.logger)
return source.NewHTTPSource(cfg.HTTP, s.logger)
case "tcp":
return source.NewTCPSource(cfg.Options, s.logger)
return source.NewTCPSource(cfg.TCP, s.logger)
default:
return nil, fmt.Errorf("unknown source type: %s", cfg.Type)
}
}
// createSink creates a sink instance based on configuration
// Creates a sink instance based on configuration
func (s *Service) createSink(cfg config.SinkConfig, formatter format.Formatter) (sink.Sink, error) {
if formatter == nil {
// Default formatters for different sink types
defaultFormat := "raw"
switch cfg.Type {
case "http", "tcp", "http_client", "tcp_client":
defaultFormat = "json"
}
var err error
formatter, err = format.New(defaultFormat, nil, s.logger)
if err != nil {
return nil, fmt.Errorf("failed to create default formatter: %w", err)
}
}
switch cfg.Type {
case "http":
return sink.NewHTTPSink(cfg.Options, s.logger, formatter)
if cfg.HTTP == nil {
return nil, fmt.Errorf("HTTP sink configuration missing")
}
return sink.NewHTTPSink(cfg.HTTP, s.logger, formatter)
case "tcp":
return sink.NewTCPSink(cfg.Options, s.logger, formatter)
if cfg.TCP == nil {
return nil, fmt.Errorf("TCP sink configuration missing")
}
return sink.NewTCPSink(cfg.TCP, s.logger, formatter)
case "http_client":
return sink.NewHTTPClientSink(cfg.Options, s.logger, formatter)
return sink.NewHTTPClientSink(cfg.HTTPClient, s.logger, formatter)
case "tcp_client":
return sink.NewTCPClientSink(cfg.Options, s.logger, formatter)
return sink.NewTCPClientSink(cfg.TCPClient, s.logger, formatter)
case "file":
return sink.NewFileSink(cfg.Options, s.logger, formatter)
case "stdout":
return sink.NewStdoutSink(cfg.Options, s.logger, formatter)
case "stderr":
return sink.NewStderrSink(cfg.Options, s.logger, formatter)
return sink.NewFileSink(cfg.File, s.logger, formatter)
case "console":
return sink.NewConsoleSink(cfg.Console, s.logger, formatter)
default:
return nil, fmt.Errorf("unknown sink type: %s", cfg.Type)
}
}
// GetPipeline returns a pipeline by name
// Returns a pipeline by name
func (s *Service) GetPipeline(name string) (*Pipeline, error) {
s.mu.RLock()
defer s.mu.RUnlock()
@ -299,14 +170,7 @@ func (s *Service) GetPipeline(name string) (*Pipeline, error) {
return pipeline, nil
}
// ListStreams is deprecated, use ListPipelines
func (s *Service) ListStreams() []string {
s.logger.Warn("msg", "ListStreams is deprecated, use ListPipelines",
"component", "service")
return s.ListPipelines()
}
// ListPipelines returns all pipeline names
// Returns all pipeline names
func (s *Service) ListPipelines() []string {
s.mu.RLock()
defer s.mu.RUnlock()
@ -318,14 +182,7 @@ func (s *Service) ListPipelines() []string {
return names
}
// RemoveStream is deprecated, use RemovePipeline
func (s *Service) RemoveStream(name string) error {
s.logger.Warn("msg", "RemoveStream is deprecated, use RemovePipeline",
"component", "service")
return s.RemovePipeline(name)
}
// RemovePipeline stops and removes a pipeline
// Stops and removes a pipeline
func (s *Service) RemovePipeline(name string) error {
s.mu.Lock()
defer s.mu.Unlock()
@ -346,7 +203,7 @@ func (s *Service) RemovePipeline(name string) error {
return nil
}
// Shutdown stops all pipelines
// Stops all pipelines
func (s *Service) Shutdown() {
s.logger.Info("msg", "Service shutdown initiated")
@ -374,7 +231,7 @@ func (s *Service) Shutdown() {
s.logger.Info("msg", "Service shutdown complete")
}
// GetGlobalStats returns statistics for all pipelines
// Returns statistics for all pipelines
func (s *Service) GetGlobalStats() map[string]any {
s.mu.RLock()
defer s.mu.RUnlock()

View File

@ -2,33 +2,28 @@
package sink
import (
"bytes"
"context"
"io"
"os"
"fmt"
"strings"
"sync/atomic"
"time"
"logwisp/src/internal/config"
"logwisp/src/internal/core"
"logwisp/src/internal/format"
"github.com/lixenwraith/log"
)
// ConsoleConfig holds common configuration for console sinks
type ConsoleConfig struct {
Target string // "stdout", "stderr", or "split"
BufferSize int64
}
// StdoutSink writes log entries to stdout
type StdoutSink struct {
// ConsoleSink writes log entries to the console (stdout/stderr) using a dedicated logger instance
type ConsoleSink struct {
config *config.ConsoleSinkOptions
input chan core.LogEntry
config ConsoleConfig
output io.Writer
writer *log.Logger // Dedicated internal logger instance for console writing
done chan struct{}
startTime time.Time
logger *log.Logger
logger *log.Logger // Application logger for app logs
formatter format.Formatter
// Statistics
@ -36,29 +31,41 @@ type StdoutSink struct {
lastProcessed atomic.Value // time.Time
}
// NewStdoutSink creates a new stdout sink
func NewStdoutSink(options map[string]any, logger *log.Logger, formatter format.Formatter) (*StdoutSink, error) {
config := ConsoleConfig{
Target: "stdout",
BufferSize: 1000,
// Creates a new console sink
func NewConsoleSink(opts *config.ConsoleSinkOptions, appLogger *log.Logger, formatter format.Formatter) (*ConsoleSink, error) {
if opts == nil {
return nil, fmt.Errorf("console sink options cannot be nil")
}
// Check for split mode configuration
if target, ok := options["target"].(string); ok {
config.Target = target
// Set defaults if not configured
if opts.Target == "" {
opts.Target = "stdout"
}
if opts.BufferSize <= 0 {
opts.BufferSize = 1000
}
if bufSize, ok := options["buffer_size"].(int64); ok && bufSize > 0 {
config.BufferSize = bufSize
// Dedicated logger instance as console writer
writer, err := log.NewBuilder().
EnableFile(false).
EnableConsole(true).
ConsoleTarget(opts.Target).
Format("raw"). // Passthrough pre-formatted messages
ShowTimestamp(false). // Disable writer's own timestamp
ShowLevel(false). // Disable writer's own level prefix
Build()
if err != nil {
return nil, fmt.Errorf("failed to create console writer: %w", err)
}
s := &StdoutSink{
input: make(chan core.LogEntry, config.BufferSize),
config: config,
output: os.Stdout,
s := &ConsoleSink{
config: opts,
input: make(chan core.LogEntry, opts.BufferSize),
writer: writer,
done: make(chan struct{}),
startTime: time.Now(),
logger: logger,
logger: appLogger,
formatter: formatter,
}
s.lastProcessed.Store(time.Time{})
@ -66,39 +73,52 @@ func NewStdoutSink(options map[string]any, logger *log.Logger, formatter format.
return s, nil
}
func (s *StdoutSink) Input() chan<- core.LogEntry {
func (s *ConsoleSink) Input() chan<- core.LogEntry {
return s.input
}
func (s *StdoutSink) Start(ctx context.Context) error {
func (s *ConsoleSink) Start(ctx context.Context) error {
// Start the internal writer's processing goroutine.
if err := s.writer.Start(); err != nil {
return fmt.Errorf("failed to start console writer: %w", err)
}
go s.processLoop(ctx)
s.logger.Info("msg", "Stdout sink started",
"component", "stdout_sink",
"target", s.config.Target)
s.logger.Info("msg", "Console sink started",
"component", "console_sink",
"target", s.writer.GetConfig().ConsoleTarget)
return nil
}
func (s *StdoutSink) Stop() {
s.logger.Info("msg", "Stopping stdout sink")
func (s *ConsoleSink) Stop() {
target := s.writer.GetConfig().ConsoleTarget
s.logger.Info("msg", "Stopping console sink", "target", target)
close(s.done)
s.logger.Info("msg", "Stdout sink stopped")
// Shutdown the internal writer with a timeout.
if err := s.writer.Shutdown(2 * time.Second); err != nil {
s.logger.Error("msg", "Error shutting down console writer",
"component", "console_sink",
"error", err)
}
s.logger.Info("msg", "Console sink stopped", "target", target)
}
func (s *StdoutSink) GetStats() SinkStats {
func (s *ConsoleSink) GetStats() SinkStats {
lastProc, _ := s.lastProcessed.Load().(time.Time)
return SinkStats{
Type: "stdout",
Type: "console",
TotalProcessed: s.totalProcessed.Load(),
StartTime: s.startTime,
LastProcessed: lastProc,
Details: map[string]any{
"target": s.config.Target,
"target": s.writer.GetConfig().ConsoleTarget,
},
}
}
func (s *StdoutSink) processLoop(ctx context.Context) {
// processLoop reads entries, formats them, and passes them to the internal writer.
func (s *ConsoleSink) processLoop(ctx context.Context) {
for {
select {
case entry, ok := <-s.input:
@ -109,136 +129,31 @@ func (s *StdoutSink) processLoop(ctx context.Context) {
s.totalProcessed.Add(1)
s.lastProcessed.Store(time.Now())
// Handle split mode - only process INFO/DEBUG for stdout
if s.config.Target == "split" {
upperLevel := strings.ToUpper(entry.Level)
if upperLevel == "ERROR" || upperLevel == "WARN" || upperLevel == "WARNING" {
// Skip ERROR/WARN levels in stdout when in split mode
continue
}
}
// Format and write
// Format the entry using the pipeline's configured formatter.
formatted, err := s.formatter.Format(entry)
if err != nil {
s.logger.Error("msg", "Failed to format log entry for stdout", "error", err)
s.logger.Error("msg", "Failed to format log entry for console",
"component", "console_sink",
"error", err)
continue
}
s.output.Write(formatted)
case <-ctx.Done():
return
case <-s.done:
return
}
}
}
// StderrSink writes log entries to stderr
type StderrSink struct {
input chan core.LogEntry
config ConsoleConfig
output io.Writer
done chan struct{}
startTime time.Time
logger *log.Logger
formatter format.Formatter
// Statistics
totalProcessed atomic.Uint64
lastProcessed atomic.Value // time.Time
}
// NewStderrSink creates a new stderr sink
func NewStderrSink(options map[string]any, logger *log.Logger, formatter format.Formatter) (*StderrSink, error) {
config := ConsoleConfig{
Target: "stderr",
BufferSize: 1000,
}
// Check for split mode configuration
if target, ok := options["target"].(string); ok {
config.Target = target
}
if bufSize, ok := options["buffer_size"].(int64); ok && bufSize > 0 {
config.BufferSize = bufSize
}
s := &StderrSink{
input: make(chan core.LogEntry, config.BufferSize),
config: config,
output: os.Stderr,
done: make(chan struct{}),
startTime: time.Now(),
logger: logger,
formatter: formatter,
}
s.lastProcessed.Store(time.Time{})
return s, nil
}
func (s *StderrSink) Input() chan<- core.LogEntry {
return s.input
}
func (s *StderrSink) Start(ctx context.Context) error {
go s.processLoop(ctx)
s.logger.Info("msg", "Stderr sink started",
"component", "stderr_sink",
"target", s.config.Target)
return nil
}
func (s *StderrSink) Stop() {
s.logger.Info("msg", "Stopping stderr sink")
close(s.done)
s.logger.Info("msg", "Stderr sink stopped")
}
func (s *StderrSink) GetStats() SinkStats {
lastProc, _ := s.lastProcessed.Load().(time.Time)
return SinkStats{
Type: "stderr",
TotalProcessed: s.totalProcessed.Load(),
StartTime: s.startTime,
LastProcessed: lastProc,
Details: map[string]any{
"target": s.config.Target,
},
}
}
func (s *StderrSink) processLoop(ctx context.Context) {
for {
select {
case entry, ok := <-s.input:
if !ok {
return
// Convert to string to prevent hex encoding of []byte by the log package;
// strip the trailing newline, the writer adds its own
message := string(bytes.TrimSuffix(formatted, []byte{'\n'}))
switch strings.ToUpper(entry.Level) {
case "DEBUG":
s.writer.Debug(message)
case "INFO":
s.writer.Info(message)
case "WARN", "WARNING":
s.writer.Warn(message)
case "ERROR", "FATAL":
s.writer.Error(message)
default:
s.writer.Message(message)
}
s.totalProcessed.Add(1)
s.lastProcessed.Store(time.Now())
// Handle split mode - only process ERROR/WARN for stderr
if s.config.Target == "split" {
upperLevel := strings.ToUpper(entry.Level)
if upperLevel != "ERROR" && upperLevel != "WARN" && upperLevel != "WARNING" {
// Skip non-ERROR/WARN levels in stderr when in split mode
continue
}
}
// Format and write
formatted, err := s.formatter.Format(entry)
if err != nil {
s.logger.Error("msg", "Failed to format log entry for stderr", "error", err)
continue
}
s.output.Write(formatted)
case <-ctx.Done():
return
case <-s.done:

View File

@ -2,19 +2,22 @@
package sink
import (
"bytes"
"context"
"fmt"
"sync/atomic"
"time"
"logwisp/src/internal/config"
"logwisp/src/internal/core"
"logwisp/src/internal/format"
"github.com/lixenwraith/log"
)
// FileSink writes log entries to files with rotation
// Writes log entries to files with rotation
type FileSink struct {
config *config.FileSinkOptions
input chan core.LogEntry
writer *log.Logger // Internal logger instance for file writing
done chan struct{}
@ -27,63 +30,28 @@ type FileSink struct {
lastProcessed atomic.Value // time.Time
}
// NewFileSink creates a new file sink
func NewFileSink(options map[string]any, logger *log.Logger, formatter format.Formatter) (*FileSink, error) {
directory, ok := options["directory"].(string)
if !ok || directory == "" {
return nil, fmt.Errorf("file sink requires 'directory' option")
}
name, ok := options["name"].(string)
if !ok || name == "" {
return nil, fmt.Errorf("file sink requires 'name' option")
// Creates a new file sink
func NewFileSink(opts *config.FileSinkOptions, logger *log.Logger, formatter format.Formatter) (*FileSink, error) {
if opts == nil {
return nil, fmt.Errorf("file sink options cannot be nil")
}
// Create configuration for the internal log writer
writerConfig := log.DefaultConfig()
writerConfig.Directory = directory
writerConfig.Name = name
writerConfig.EnableStdout = false // File only
writerConfig.Directory = opts.Directory
writerConfig.Name = opts.Name
writerConfig.EnableConsole = false // File only
writerConfig.ShowTimestamp = false // We already have timestamps in entries
writerConfig.ShowLevel = false // We already have levels in entries
// Add optional configurations
if maxSize, ok := options["max_size_mb"].(int64); ok && maxSize > 0 {
writerConfig.MaxSizeKB = maxSize * 1000
}
if maxTotalSize, ok := options["max_total_size_mb"].(int64); ok && maxTotalSize >= 0 {
writerConfig.MaxTotalSizeKB = maxTotalSize * 1000
}
if retention, ok := options["retention_hours"].(int64); ok && retention > 0 {
writerConfig.RetentionPeriodHrs = float64(retention)
}
if minDiskFree, ok := options["min_disk_free_mb"].(int64); ok && minDiskFree > 0 {
writerConfig.MinDiskFreeKB = minDiskFree * 1000
}
// Create internal logger for file writing
writer := log.NewLogger()
if err := writer.ApplyConfig(writerConfig); err != nil {
return nil, fmt.Errorf("failed to initialize file writer: %w", err)
}
// Start the internal file writer
if err := writer.Start(); err != nil {
return nil, fmt.Errorf("failed to start file writer: %w", err)
}
// Buffer size for input channel
// TODO: Make this configurable
bufferSize := int64(1000)
if bufSize, ok := options["buffer_size"].(int64); ok && bufSize > 0 {
bufferSize = bufSize
}
fs := &FileSink{
input: make(chan core.LogEntry, bufferSize),
input: make(chan core.LogEntry, opts.BufferSize),
writer: writer,
done: make(chan struct{}),
startTime: time.Now(),
@ -100,6 +68,11 @@ func (fs *FileSink) Input() chan<- core.LogEntry {
}
func (fs *FileSink) Start(ctx context.Context) error {
// Start the internal file writer
if err := fs.writer.Start(); err != nil {
return fmt.Errorf("failed to start sink file writer: %w", err)
}
go fs.processLoop(ctx)
fs.logger.Info("msg", "File sink started", "component", "file_sink")
return nil
@ -151,11 +124,9 @@ func (fs *FileSink) processLoop(ctx context.Context) {
continue
}
// Write formatted bytes (strip newline as writer adds it)
message := string(formatted)
if len(message) > 0 && message[len(message)-1] == '\n' {
message = message[:len(message)-1]
}
// Convert to string to prevent hex encoding of []byte by the log package
// Strip the trailing newline; the writer adds its own
message := string(bytes.TrimSuffix(formatted, []byte{'\n'}))
fs.writer.Message(message)
case <-ctx.Done():

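// Illustrative sketch (not part of this changeset): the MB-to-KB unit
// mapping the old constructor performed when translating file sink
// options into the internal writer's config. The option struct here is
// a simplified stand-in; the writer-side field names follow the diff above.
package main

import "fmt"

type fileOptions struct {
	Directory      string
	Name           string
	MaxSizeMB      int64
	MaxTotalSizeMB int64
	RetentionHours int64
	MinDiskFreeMB  int64
}

type writerConfig struct {
	Directory          string
	Name               string
	MaxSizeKB          int64
	MaxTotalSizeKB     int64
	RetentionPeriodHrs float64
	MinDiskFreeKB      int64
}

func toWriterConfig(o fileOptions) writerConfig {
	return writerConfig{
		Directory:          o.Directory,
		Name:               o.Name,
		MaxSizeKB:          o.MaxSizeMB * 1000, // decimal megabytes, as in the code above
		MaxTotalSizeKB:     o.MaxTotalSizeMB * 1000,
		RetentionPeriodHrs: float64(o.RetentionHours),
		MinDiskFreeKB:      o.MinDiskFreeMB * 1000,
	}
}

func main() {
	fmt.Printf("%+v\n", toWriterConfig(fileOptions{Directory: "/var/log/app", Name: "app", MaxSizeMB: 100}))
}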
View File

@ -24,10 +24,13 @@ import (
"github.com/valyala/fasthttp"
)
// HTTPSink streams log entries via Server-Sent Events
// Streams log entries via Server-Sent Events
type HTTPSink struct {
// Configuration reference (NOT a copy)
config *config.HTTPSinkOptions
// Runtime
input chan core.LogEntry
config HTTPConfig
server *fasthttp.Server
activeClients atomic.Int64
mu sync.RWMutex
@ -37,14 +40,16 @@ type HTTPSink struct {
logger *log.Logger
formatter format.Formatter
// Broker architecture
clients map[uint64]chan core.LogEntry
clientsMu sync.RWMutex
unregister chan uint64
nextClientID atomic.Uint64
// Security components
authenticator *auth.Authenticator
tlsManager *tls.Manager
authConfig *config.AuthConfig
// Path configuration
streamPath string
statusPath string
authConfig *config.ServerAuthConfig
// Net limiting
netLimiter *limit.NetLimiter
@ -56,138 +61,58 @@ type HTTPSink struct {
authSuccesses atomic.Uint64
}
// HTTPConfig holds HTTP sink configuration
type HTTPConfig struct {
Port int64
BufferSize int64
StreamPath string
StatusPath string
Heartbeat *config.HeartbeatConfig
SSL *config.SSLConfig
NetLimit *config.NetLimitConfig
}
// NewHTTPSink creates a new HTTP streaming sink
func NewHTTPSink(options map[string]any, logger *log.Logger, formatter format.Formatter) (*HTTPSink, error) {
cfg := HTTPConfig{
Port: 8080,
BufferSize: 1000,
StreamPath: "/transport",
StatusPath: "/status",
}
// Extract configuration from options
if port, ok := options["port"].(int64); ok {
cfg.Port = port
}
if bufSize, ok := options["buffer_size"].(int64); ok {
cfg.BufferSize = bufSize
}
if path, ok := options["stream_path"].(string); ok {
cfg.StreamPath = path
}
if path, ok := options["status_path"].(string); ok {
cfg.StatusPath = path
}
// Extract heartbeat config
if hb, ok := options["heartbeat"].(map[string]any); ok {
cfg.Heartbeat = &config.HeartbeatConfig{}
cfg.Heartbeat.Enabled, _ = hb["enabled"].(bool)
if interval, ok := hb["interval_seconds"].(int64); ok {
cfg.Heartbeat.IntervalSeconds = interval
}
cfg.Heartbeat.IncludeTimestamp, _ = hb["include_timestamp"].(bool)
cfg.Heartbeat.IncludeStats, _ = hb["include_stats"].(bool)
if hbFormat, ok := hb["format"].(string); ok {
cfg.Heartbeat.Format = hbFormat
}
}
// Extract SSL config
if ssl, ok := options["ssl"].(map[string]any); ok {
cfg.SSL = &config.SSLConfig{}
cfg.SSL.Enabled, _ = ssl["enabled"].(bool)
if certFile, ok := ssl["cert_file"].(string); ok {
cfg.SSL.CertFile = certFile
}
if keyFile, ok := ssl["key_file"].(string); ok {
cfg.SSL.KeyFile = keyFile
}
cfg.SSL.ClientAuth, _ = ssl["client_auth"].(bool)
if caFile, ok := ssl["client_ca_file"].(string); ok {
cfg.SSL.ClientCAFile = caFile
}
cfg.SSL.VerifyClientCert, _ = ssl["verify_client_cert"].(bool)
if minVer, ok := ssl["min_version"].(string); ok {
cfg.SSL.MinVersion = minVer
}
if maxVer, ok := ssl["max_version"].(string); ok {
cfg.SSL.MaxVersion = maxVer
}
if ciphers, ok := ssl["cipher_suites"].(string); ok {
cfg.SSL.CipherSuites = ciphers
}
}
// Extract net limit config
if rl, ok := options["net_limit"].(map[string]any); ok {
cfg.NetLimit = &config.NetLimitConfig{}
cfg.NetLimit.Enabled, _ = rl["enabled"].(bool)
if rps, ok := rl["requests_per_second"].(float64); ok {
cfg.NetLimit.RequestsPerSecond = rps
}
if burst, ok := rl["burst_size"].(int64); ok {
cfg.NetLimit.BurstSize = burst
}
if limitBy, ok := rl["limit_by"].(string); ok {
cfg.NetLimit.LimitBy = limitBy
}
if respCode, ok := rl["response_code"].(int64); ok {
cfg.NetLimit.ResponseCode = respCode
}
if msg, ok := rl["response_message"].(string); ok {
cfg.NetLimit.ResponseMessage = msg
}
if maxPerIP, ok := rl["max_connections_per_ip"].(int64); ok {
cfg.NetLimit.MaxConnectionsPerIP = maxPerIP
}
if maxTotal, ok := rl["max_total_connections"].(int64); ok {
cfg.NetLimit.MaxTotalConnections = maxTotal
}
if ipWhitelist, ok := rl["ip_whitelist"].([]any); ok {
cfg.NetLimit.IPWhitelist = make([]string, 0, len(ipWhitelist))
for _, entry := range ipWhitelist {
if str, ok := entry.(string); ok {
cfg.NetLimit.IPWhitelist = append(cfg.NetLimit.IPWhitelist, str)
}
}
}
if ipBlacklist, ok := rl["ip_blacklist"].([]any); ok {
cfg.NetLimit.IPBlacklist = make([]string, 0, len(ipBlacklist))
for _, entry := range ipBlacklist {
if str, ok := entry.(string); ok {
cfg.NetLimit.IPBlacklist = append(cfg.NetLimit.IPBlacklist, str)
}
}
}
// Creates a new HTTP streaming sink
func NewHTTPSink(opts *config.HTTPSinkOptions, logger *log.Logger, formatter format.Formatter) (*HTTPSink, error) {
if opts == nil {
return nil, fmt.Errorf("HTTP sink options cannot be nil")
}
h := &HTTPSink{
input: make(chan core.LogEntry, cfg.BufferSize),
config: cfg,
startTime: time.Now(),
done: make(chan struct{}),
streamPath: cfg.StreamPath,
statusPath: cfg.StatusPath,
logger: logger,
formatter: formatter,
config: opts, // Direct reference to config struct
input: make(chan core.LogEntry, opts.BufferSize),
startTime: time.Now(),
done: make(chan struct{}),
logger: logger,
formatter: formatter,
clients: make(map[uint64]chan core.LogEntry),
}
h.lastProcessed.Store(time.Time{})
// Initialize TLS manager if configured
if opts.TLS != nil && opts.TLS.Enabled {
tlsManager, err := tls.NewManager(opts.TLS, logger)
if err != nil {
return nil, fmt.Errorf("failed to create TLS manager: %w", err)
}
h.tlsManager = tlsManager
logger.Info("msg", "TLS enabled",
"component", "http_sink")
}
// Initialize net limiter if configured
if cfg.NetLimit != nil && cfg.NetLimit.Enabled {
h.netLimiter = limit.NewNetLimiter(*cfg.NetLimit, logger)
if opts.NetLimit != nil && (opts.NetLimit.Enabled ||
len(opts.NetLimit.IPWhitelist) > 0 ||
len(opts.NetLimit.IPBlacklist) > 0) {
h.netLimiter = limit.NewNetLimiter(opts.NetLimit, logger)
}
// Initialize authenticator if auth is not "none"
if opts.Auth != nil && opts.Auth.Type != "none" {
// Only "basic" and "token" are valid for HTTP sink
if opts.Auth.Type != "basic" && opts.Auth.Type != "token" {
return nil, fmt.Errorf("invalid auth type '%s' for HTTP sink (valid: none, basic, token)", opts.Auth.Type)
}
authenticator, err := auth.NewAuthenticator(opts.Auth, logger)
if err != nil {
return nil, fmt.Errorf("failed to create authenticator: %w", err)
}
h.authenticator = authenticator
h.authConfig = opts.Auth
logger.Info("msg", "Authentication enabled",
"component", "http_sink",
"type", opts.Auth.Type)
}
return h, nil
@ -198,14 +123,22 @@ func (h *HTTPSink) Input() chan<- core.LogEntry {
}
func (h *HTTPSink) Start(ctx context.Context) error {
// Start central broker goroutine
h.wg.Add(1)
go h.brokerLoop(ctx)
// Create fasthttp adapter for logging
fasthttpLogger := compat.NewFastHTTPAdapter(h.logger)
h.server = &fasthttp.Server{
Name: fmt.Sprintf("LogWisp/%s", version.Short()),
Handler: h.requestHandler,
DisableKeepalive: false,
StreamRequestBody: true,
Logger: fasthttpLogger,
// ReadTimeout: time.Duration(h.config.ReadTimeout) * time.Millisecond,
WriteTimeout: time.Duration(h.config.WriteTimeout) * time.Millisecond,
// MaxRequestBodySize: int(h.config.MaxBodySize),
}
// Configure TLS if enabled
@ -216,22 +149,24 @@ func (h *HTTPSink) Start(ctx context.Context) error {
"port", h.config.Port)
}
addr := fmt.Sprintf(":%d", h.config.Port)
// Use configured host and port
addr := fmt.Sprintf("%s:%d", h.config.Host, h.config.Port)
// Run server in separate goroutine to avoid blocking
errChan := make(chan error, 1)
go func() {
h.logger.Info("msg", "HTTP server started",
"component", "http_sink",
"host", h.config.Host,
"port", h.config.Port,
"stream_path", h.streamPath,
"status_path", h.statusPath,
"stream_path", h.config.StreamPath,
"status_path", h.config.StatusPath,
"tls_enabled", h.tlsManager != nil)
var err error
if h.tlsManager != nil {
// HTTPS server
err = h.server.ListenAndServeTLS(addr, h.config.SSL.CertFile, h.config.SSL.KeyFile)
err = h.server.ListenAndServeTLS(addr, h.config.TLS.CertFile, h.config.TLS.KeyFile)
} else {
// HTTP server
err = h.server.ListenAndServe(addr)
@ -262,6 +197,99 @@ func (h *HTTPSink) Start(ctx context.Context) error {
}
}
// Broadcasts only to active clients
func (h *HTTPSink) brokerLoop(ctx context.Context) {
defer h.wg.Done()
var ticker *time.Ticker
var tickerChan <-chan time.Time
if h.config.Heartbeat != nil && h.config.Heartbeat.Enabled {
ticker = time.NewTicker(time.Duration(h.config.Heartbeat.IntervalMS) * time.Millisecond)
tickerChan = ticker.C
defer ticker.Stop()
}
for {
select {
case <-ctx.Done():
h.logger.Debug("msg", "Broker loop stopping due to context cancellation",
"component", "http_sink")
return
case <-h.done:
h.logger.Debug("msg", "Broker loop stopping due to shutdown signal",
"component", "http_sink")
return
case clientID := <-h.unregister:
// Broker owns channel cleanup
h.clientsMu.Lock()
if clientChan, exists := h.clients[clientID]; exists {
delete(h.clients, clientID)
close(clientChan)
h.logger.Debug("msg", "Unregistered client",
"component", "http_sink",
"client_id", clientID)
}
h.clientsMu.Unlock()
case entry, ok := <-h.input:
if !ok {
h.logger.Debug("msg", "Input channel closed, broker stopping",
"component", "http_sink")
return
}
h.totalProcessed.Add(1)
h.lastProcessed.Store(time.Now())
// Broadcast to all active clients
h.clientsMu.RLock()
clientCount := len(h.clients)
if clientCount > 0 {
slowClients := 0
for id, ch := range h.clients {
select {
case ch <- entry:
// Successfully sent
default:
// Client buffer full
slowClients++
if slowClients == 1 { // Log only once per broadcast
h.logger.Debug("msg", "Dropped entry for slow client(s)",
"component", "http_sink",
"client_id", id,
"slow_clients", slowClients,
"total_clients", clientCount)
}
}
}
}
// If no clients connected, entry is discarded (no buffering)
h.clientsMu.RUnlock()
case <-tickerChan:
// Send global heartbeat to all clients
if h.config.Heartbeat != nil && h.config.Heartbeat.Enabled {
heartbeatEntry := h.createHeartbeatEntry()
h.clientsMu.RLock()
for id, ch := range h.clients {
select {
case ch <- heartbeatEntry:
default:
// Client buffer full, skip heartbeat
h.logger.Debug("msg", "Skipped heartbeat for slow client",
"component", "http_sink",
"client_id", id)
}
}
h.clientsMu.RUnlock()
}
}
}
}
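// Illustrative sketch (not part of this changeset): the broker fan-out
// pattern used by brokerLoop above; one goroutine owns the client map
// and performs non-blocking sends, dropping entries for clients whose
// buffers are full instead of stalling the whole pipeline.
package main

import "fmt"

type entry struct{ msg string }

func broadcast(clients map[uint64]chan entry, e entry) (sent, dropped int) {
	for _, ch := range clients {
		select {
		case ch <- e: // buffered send succeeds while the client keeps up
			sent++
		default: // buffer full; drop for this client rather than block the broker
			dropped++
		}
	}
	return sent, dropped
}

func main() {
	clients := map[uint64]chan entry{
		1: make(chan entry, 1),
		2: make(chan entry), // unbuffered and never read: permanently "slow"
	}
	sent, dropped := broadcast(clients, entry{"hello"})
	fmt.Println("sent:", sent, "dropped:", dropped) // sent: 1 dropped: 1
}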
func (h *HTTPSink) Stop() {
h.logger.Info("msg", "Stopping HTTP sink")
@ -278,6 +306,17 @@ func (h *HTTPSink) Stop() {
// Wait for all active client handlers to finish
h.wg.Wait()
// Close unregister channel after all clients have finished
close(h.unregister)
// Close all client channels
h.clientsMu.Lock()
for _, ch := range h.clients {
close(ch)
}
h.clients = make(map[uint64]chan core.LogEntry)
h.clientsMu.Unlock()
h.logger.Info("msg", "HTTP sink stopped")
}
@ -311,8 +350,8 @@ func (h *HTTPSink) GetStats() SinkStats {
"port": h.config.Port,
"buffer_size": h.config.BufferSize,
"endpoints": map[string]string{
"stream": h.streamPath,
"status": h.statusPath,
"stream": h.config.StreamPath,
"status": h.config.StatusPath,
},
"net_limit": netLimitStats,
"auth": authStats,
@ -341,10 +380,25 @@ func (h *HTTPSink) requestHandler(ctx *fasthttp.RequestCtx) {
}
}
// Enforce TLS for authentication
if h.authenticator != nil && h.authConfig.Type != "none" {
isTLS := ctx.IsTLS() || h.tlsManager != nil
if !isTLS {
ctx.SetStatusCode(fasthttp.StatusForbidden)
ctx.SetContentType("application/json")
json.NewEncoder(ctx).Encode(map[string]string{
"error": "TLS required for authentication",
"hint": "Use HTTPS for authenticated connections",
})
return
}
}
path := string(ctx.Path())
// Status endpoint doesn't require auth
if path == h.statusPath {
if path == h.config.StatusPath {
h.handleStatus(ctx)
return
}
@ -364,14 +418,14 @@ func (h *HTTPSink) requestHandler(ctx *fasthttp.RequestCtx) {
// Return 401 with WWW-Authenticate header
ctx.SetStatusCode(fasthttp.StatusUnauthorized)
if h.authConfig.Type == "basic" && h.authConfig.BasicAuth != nil {
realm := h.authConfig.BasicAuth.Realm
if h.authConfig.Type == "basic" && h.authConfig.Basic != nil {
realm := h.authConfig.Basic.Realm
if realm == "" {
realm = "Restricted"
}
ctx.Response.Header.Set("WWW-Authenticate", fmt.Sprintf("Basic realm=\"%s\"", realm))
} else if h.authConfig.Type == "bearer" {
ctx.Response.Header.Set("WWW-Authenticate", "Bearer")
} else if h.authConfig.Type == "token" {
ctx.Response.Header.Set("WWW-Authenticate", "Token")
}
ctx.SetContentType("application/json")
@ -381,10 +435,19 @@ func (h *HTTPSink) requestHandler(ctx *fasthttp.RequestCtx) {
return
}
h.authSuccesses.Add(1)
} else {
// Create anonymous session for unauthenticated connections
session = &auth.Session{
ID: fmt.Sprintf("anon-%d", time.Now().UnixNano()),
Username: "anonymous",
Method: "none",
RemoteAddr: remoteAddr,
CreatedAt: time.Now(),
}
}
switch path {
case h.streamPath:
case h.config.StreamPath:
h.handleStream(ctx, session)
default:
ctx.SetStatusCode(fasthttp.StatusNotFound)
@ -393,6 +456,15 @@ func (h *HTTPSink) requestHandler(ctx *fasthttp.RequestCtx) {
"error": "Not Found",
})
}
}
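// Illustrative sketch (not part of this changeset): the 401 handshake
// above as seen from a plain net/http client. The URL, path, and
// credentials are placeholders.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "https://localhost:8080/stream", nil)
	if err != nil {
		panic(err)
	}
	// Sends "Authorization: Basic base64(user:secret)"
	req.SetBasicAuth("user", "secret")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// On bad credentials the sink replies 401 and names the realm in
	// WWW-Authenticate, as built in requestHandler above.
	fmt.Println(resp.Status, resp.Header.Get("WWW-Authenticate"))
}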
func (h *HTTPSink) handleStream(ctx *fasthttp.RequestCtx, session *auth.Session) {
@ -404,102 +476,93 @@ func (h *HTTPSink) handleStream(ctx *fasthttp.RequestCtx, session *auth.Session)
}
// Set SSE headers
ctx.Response.Header.Set("Content-Type", "text/event-transport")
ctx.Response.Header.Set("Content-Type", "text/event-stream")
ctx.Response.Header.Set("Cache-Control", "no-cache")
ctx.Response.Header.Set("Connection", "keep-alive")
ctx.Response.Header.Set("Access-Control-Allow-Origin", "*")
ctx.Response.Header.Set("X-Accel-Buffering", "no")
// Create subscription for this client
// Register new client with broker
clientID := h.nextClientID.Add(1)
clientChan := make(chan core.LogEntry, h.config.BufferSize)
clientDone := make(chan struct{})
// Subscribe to input channel
go func() {
defer close(clientChan)
for {
select {
case entry, ok := <-h.input:
if !ok {
return
}
h.totalProcessed.Add(1)
h.lastProcessed.Store(time.Now())
h.clientsMu.Lock()
h.clients[clientID] = clientChan
h.clientsMu.Unlock()
select {
case clientChan <- entry:
case <-clientDone:
return
case <-h.done:
return
default:
// Drop if client buffer full
h.logger.Debug("msg", "Dropped entry for slow client",
"component", "http_sink",
"remote_addr", remoteAddr)
}
case <-clientDone:
return
case <-h.done:
return
}
}
}()
// Define the transport writer function
// Define the stream writer function
streamFunc := func(w *bufio.Writer) {
newCount := h.activeClients.Add(1)
connectCount := h.activeClients.Add(1)
h.logger.Debug("msg", "HTTP client connected",
"component", "http_sink",
"remote_addr", remoteAddr,
"username", session.Username,
"auth_method", session.Method,
"active_clients", newCount)
"client_id", clientID,
"active_clients", connectCount)
// Track goroutine lifecycle with waitgroup
h.wg.Add(1)
// Cleanup signals unregister
defer func() {
close(clientDone)
newCount := h.activeClients.Add(-1)
disconnectCount := h.activeClients.Add(-1)
h.logger.Debug("msg", "HTTP client disconnected",
"component", "http_sink",
"remote_addr", remoteAddr,
"username", session.Username,
"active_clients", newCount)
"client_id", clientID,
"active_clients", disconnectCount)
// Signal broker to cleanup this client's channel
select {
case h.unregister <- clientID:
case <-h.done:
// Shutting down, don't block
}
h.wg.Done()
}()
// Send initial connected event
clientID := fmt.Sprintf("%d", time.Now().UnixNano())
// Send initial connected event with metadata
connectionInfo := map[string]any{
"client_id": clientID,
"client_id": fmt.Sprintf("%d", clientID),
"username": session.Username,
"auth_method": session.Method,
"stream_path": h.streamPath,
"status_path": h.statusPath,
"stream_path": h.config.StreamPath,
"status_path": h.config.StatusPath,
"buffer_size": h.config.BufferSize,
"tls": h.tlsManager != nil,
}
data, _ := json.Marshal(connectionInfo)
fmt.Fprintf(w, "event: connected\ndata: %s\n\n", data)
w.Flush()
if err := w.Flush(); err != nil {
return
}
// Setup heartbeat ticker if enabled
var ticker *time.Ticker
var tickerChan <-chan time.Time
if h.config.Heartbeat.Enabled {
ticker = time.NewTicker(time.Duration(h.config.Heartbeat.IntervalSeconds) * time.Second)
if h.config.Heartbeat != nil && h.config.Heartbeat.Enabled {
ticker = time.NewTicker(time.Duration(h.config.Heartbeat.IntervalMS) * time.Millisecond)
tickerChan = ticker.C
defer ticker.Stop()
}
// Main streaming loop
for {
select {
case entry, ok := <-clientChan:
if !ok {
// Channel closed, client being removed
return
}
if err := h.formatEntryForSSE(w, entry); err != nil {
h.logger.Error("msg", "Failed to format log entry",
"component", "http_sink",
"client_id", clientID,
"error", err,
"entry_source", entry.Source)
continue
@ -512,18 +575,22 @@ func (h *HTTPSink) handleStream(ctx *fasthttp.RequestCtx, session *auth.Session)
case <-tickerChan:
// Validate session is still active
if h.authenticator != nil && !h.authenticator.ValidateSession(session.ID) {
if h.authenticator != nil && session != nil && !h.authenticator.ValidateSession(session.ID) {
fmt.Fprintf(w, "event: disconnect\ndata: {\"reason\":\"session_expired\"}\n\n")
w.Flush()
return
}
heartbeatEntry := h.createHeartbeatEntry()
if err := h.formatEntryForSSE(w, heartbeatEntry); err != nil {
h.logger.Error("msg", "Failed to format heartbeat",
"component", "http_sink",
"error", err)
// The broker sends the global heartbeat; an additional client-specific
// heartbeat is sent here to provide per-client session validation
sessionHB := map[string]any{
"type": "session_heartbeat",
"client_id": fmt.Sprintf("%d", clientID),
"session_valid": true,
}
hbData, _ := json.Marshal(sessionHB)
fmt.Fprintf(w, "event: heartbeat\ndata: %s\n\n", hbData)
if err := w.Flush(); err != nil {
return
}
@ -552,8 +619,7 @@ func (h *HTTPSink) formatEntryForSSE(w *bufio.Writer, entry core.LogEntry) error
// Multi-line content handler
lines := bytes.Split(formatted, []byte{'\n'})
for _, line := range lines {
// SSE needs "data: " prefix for each line
// TODO: validate above, is 'data: ' really necessary? make it optional if it works without it?
// SSE needs "data: " prefix for each line based on W3C spec
fmt.Fprintf(w, "data: %s\n", line)
}
fmt.Fprintf(w, "\n") // Empty line to terminate event
@ -568,7 +634,7 @@ func (h *HTTPSink) createHeartbeatEntry() core.LogEntry {
fields := make(map[string]any)
fields["type"] = "heartbeat"
if h.config.Heartbeat.IncludeStats {
if h.config.Heartbeat.Enabled {
fields["active_clients"] = h.activeClients.Load()
fields["uptime_seconds"] = int(time.Since(h.startTime).Seconds())
}
@ -627,14 +693,14 @@ func (h *HTTPSink) handleStatus(ctx *fasthttp.RequestCtx) {
"uptime_seconds": int(time.Since(h.startTime).Seconds()),
},
"endpoints": map[string]string{
"transport": h.streamPath,
"status": h.statusPath,
"transport": h.config.StreamPath,
"status": h.config.StatusPath,
},
"features": map[string]any{
"heartbeat": map[string]any{
"enabled": h.config.Heartbeat.Enabled,
"interval": h.config.Heartbeat.IntervalSeconds,
"format": h.config.Heartbeat.Format,
"enabled": h.config.Heartbeat.Enabled,
"interval_ms": h.config.Heartbeat.IntervalMS,
"format": h.config.Heartbeat.Format,
},
"tls": tlsStats,
"auth": authStats,
@ -651,39 +717,22 @@ func (h *HTTPSink) handleStatus(ctx *fasthttp.RequestCtx) {
ctx.SetBody(data)
}
// GetActiveConnections returns the current number of active clients
// Returns the current number of active clients
func (h *HTTPSink) GetActiveConnections() int64 {
return h.activeClients.Load()
}
// GetStreamPath returns the configured transport endpoint path
// Returns the configured transport endpoint path
func (h *HTTPSink) GetStreamPath() string {
return h.streamPath
return h.config.StreamPath
}
// GetStatusPath returns the configured status endpoint path
// Returns the configured status endpoint path
func (h *HTTPSink) GetStatusPath() string {
return h.statusPath
return h.config.StatusPath
}
// SetAuthConfig configures http sink authentication
func (h *HTTPSink) SetAuthConfig(authCfg *config.AuthConfig) {
if authCfg == nil || authCfg.Type == "none" {
return
}
h.authConfig = authCfg
authenticator, err := auth.New(authCfg, h.logger)
if err != nil {
h.logger.Error("msg", "Failed to initialize authenticator for HTTP sink",
"component", "http_sink",
"error", err)
// Continue without auth
return
}
h.authenticator = authenticator
h.logger.Info("msg", "Authentication configured for HTTP sink",
"component", "http_sink",
"auth_type", authCfg.Type)
// Returns the configured host
func (h *HTTPSink) GetHost() string {
return h.config.Host
}

View File

@ -6,33 +6,38 @@ import (
"context"
"crypto/tls"
"crypto/x509"
"encoding/base64"
"fmt"
"net/url"
"os"
"strings"
"sync"
"sync/atomic"
"time"
"logwisp/src/internal/auth"
"logwisp/src/internal/config"
"logwisp/src/internal/core"
"logwisp/src/internal/format"
"logwisp/src/internal/version"
"github.com/lixenwraith/log"
"github.com/valyala/fasthttp"
)
// HTTPClientSink forwards log entries to a remote HTTP endpoint
// TODO: implement heartbeat for HTTP Client Sink, similar to HTTP Sink
// Forwards log entries to a remote HTTP endpoint
type HTTPClientSink struct {
input chan core.LogEntry
config HTTPClientConfig
client *fasthttp.Client
batch []core.LogEntry
batchMu sync.Mutex
done chan struct{}
wg sync.WaitGroup
startTime time.Time
logger *log.Logger
formatter format.Formatter
input chan core.LogEntry
config *config.HTTPClientSinkOptions
client *fasthttp.Client
batch []core.LogEntry
batchMu sync.Mutex
done chan struct{}
wg sync.WaitGroup
startTime time.Time
logger *log.Logger
formatter format.Formatter
authenticator *auth.Authenticator
// Statistics
totalProcessed atomic.Uint64
@ -43,103 +48,16 @@ type HTTPClientSink struct {
activeConnections atomic.Int64
}
// HTTPClientConfig holds HTTP client sink configuration
type HTTPClientConfig struct {
URL string
BufferSize int64
BatchSize int64
BatchDelay time.Duration
Timeout time.Duration
Headers map[string]string
// Retry configuration
MaxRetries int64
RetryDelay time.Duration
RetryBackoff float64 // Multiplier for exponential backoff
// TLS configuration
InsecureSkipVerify bool
CAFile string
}
// NewHTTPClientSink creates a new HTTP client sink
func NewHTTPClientSink(options map[string]any, logger *log.Logger, formatter format.Formatter) (*HTTPClientSink, error) {
cfg := HTTPClientConfig{
BufferSize: int64(1000),
BatchSize: int64(100),
BatchDelay: time.Second,
Timeout: 30 * time.Second,
MaxRetries: int64(3),
RetryDelay: time.Second,
RetryBackoff: float64(2.0),
Headers: make(map[string]string),
}
// Extract URL
urlStr, ok := options["url"].(string)
if !ok || urlStr == "" {
return nil, fmt.Errorf("http_client sink requires 'url' option")
}
// Validate URL
parsedURL, err := url.Parse(urlStr)
if err != nil {
return nil, fmt.Errorf("invalid URL: %w", err)
}
if parsedURL.Scheme != "http" && parsedURL.Scheme != "https" {
return nil, fmt.Errorf("URL must use http or https scheme")
}
cfg.URL = urlStr
// Extract other options
if bufSize, ok := options["buffer_size"].(int64); ok && bufSize > 0 {
cfg.BufferSize = bufSize
}
if batchSize, ok := options["batch_size"].(int64); ok && batchSize > 0 {
cfg.BatchSize = batchSize
}
if delayMs, ok := options["batch_delay_ms"].(int64); ok && delayMs > 0 {
cfg.BatchDelay = time.Duration(delayMs) * time.Millisecond
}
if timeoutSec, ok := options["timeout_seconds"].(int64); ok && timeoutSec > 0 {
cfg.Timeout = time.Duration(timeoutSec) * time.Second
}
if maxRetries, ok := options["max_retries"].(int64); ok && maxRetries >= 0 {
cfg.MaxRetries = maxRetries
}
if retryDelayMs, ok := options["retry_delay_ms"].(int64); ok && retryDelayMs > 0 {
cfg.RetryDelay = time.Duration(retryDelayMs) * time.Millisecond
}
if backoff, ok := options["retry_backoff"].(float64); ok && backoff >= 1.0 {
cfg.RetryBackoff = backoff
}
if insecure, ok := options["insecure_skip_verify"].(bool); ok {
cfg.InsecureSkipVerify = insecure
}
// Extract headers
if headers, ok := options["headers"].(map[string]any); ok {
for k, v := range headers {
if strVal, ok := v.(string); ok {
cfg.Headers[k] = strVal
}
}
}
// Set default Content-Type if not specified
if _, exists := cfg.Headers["Content-Type"]; !exists {
cfg.Headers["Content-Type"] = "application/json"
}
// Extract TLS options
if caFile, ok := options["ca_file"].(string); ok && caFile != "" {
cfg.CAFile = caFile
// Creates a new HTTP client sink
func NewHTTPClientSink(opts *config.HTTPClientSinkOptions, logger *log.Logger, formatter format.Formatter) (*HTTPClientSink, error) {
if opts == nil {
return nil, fmt.Errorf("HTTP client sink options cannot be nil")
}
h := &HTTPClientSink{
input: make(chan core.LogEntry, cfg.BufferSize),
config: cfg,
batch: make([]core.LogEntry, 0, cfg.BatchSize),
config: opts,
input: make(chan core.LogEntry, opts.BufferSize),
batch: make([]core.LogEntry, 0, opts.BatchSize),
done: make(chan struct{}),
startTime: time.Now(),
logger: logger,
@ -152,31 +70,48 @@ func NewHTTPClientSink(options map[string]any, logger *log.Logger, formatter for
h.client = &fasthttp.Client{
MaxConnsPerHost: 10,
MaxIdleConnDuration: 10 * time.Second,
ReadTimeout: cfg.Timeout,
WriteTimeout: cfg.Timeout,
ReadTimeout: time.Duration(opts.Timeout) * time.Second,
WriteTimeout: time.Duration(opts.Timeout) * time.Second,
DisableHeaderNamesNormalizing: true,
}
// Configure TLS if using HTTPS
if strings.HasPrefix(cfg.URL, "https://") {
if strings.HasPrefix(opts.URL, "https://") {
tlsConfig := &tls.Config{
InsecureSkipVerify: cfg.InsecureSkipVerify,
InsecureSkipVerify: opts.InsecureSkipVerify,
}
// Load custom CA if provided
if cfg.CAFile != "" {
caCert, err := os.ReadFile(cfg.CAFile)
if err != nil {
return nil, fmt.Errorf("failed to read CA file: %w", err)
// Use TLS config if provided
if opts.TLS != nil {
// Load custom CA for server verification
if opts.TLS.CAFile != "" {
caCert, err := os.ReadFile(opts.TLS.CAFile)
if err != nil {
return nil, fmt.Errorf("failed to read CA file '%s': %w", opts.TLS.CAFile, err)
}
caCertPool := x509.NewCertPool()
if !caCertPool.AppendCertsFromPEM(caCert) {
return nil, fmt.Errorf("failed to parse CA certificate from '%s'", opts.TLS.CAFile)
}
tlsConfig.RootCAs = caCertPool
logger.Debug("msg", "Custom CA loaded for server verification",
"component", "http_client_sink",
"ca_file", opts.TLS.CAFile)
}
caCertPool := x509.NewCertPool()
if !caCertPool.AppendCertsFromPEM(caCert) {
return nil, fmt.Errorf("failed to parse CA certificate")
// Load client certificate for mTLS if provided
if opts.TLS.CertFile != "" && opts.TLS.KeyFile != "" {
cert, err := tls.LoadX509KeyPair(opts.TLS.CertFile, opts.TLS.KeyFile)
if err != nil {
return nil, fmt.Errorf("failed to load client certificate: %w", err)
}
tlsConfig.Certificates = []tls.Certificate{cert}
logger.Info("msg", "Client certificate loaded for mTLS",
"component", "http_client_sink",
"cert_file", opts.TLS.CertFile)
}
tlsConfig.RootCAs = caCertPool
}
// Set TLS config directly on the client
h.client.TLSConfig = tlsConfig
}
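// Illustrative sketch (not part of this changeset): assembling a client
// tls.Config with a custom CA pool for server verification plus an
// optional certificate for mTLS, mirroring the branch above. File paths
// are placeholders.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
)

func clientTLSConfig(caFile, certFile, keyFile string) (*tls.Config, error) {
	cfg := &tls.Config{}
	if caFile != "" {
		pem, err := os.ReadFile(caFile)
		if err != nil {
			return nil, fmt.Errorf("read CA file: %w", err)
		}
		pool := x509.NewCertPool()
		if !pool.AppendCertsFromPEM(pem) {
			return nil, fmt.Errorf("parse CA certificate from %q", caFile)
		}
		cfg.RootCAs = pool // replaces the system pool for server verification
	}
	if certFile != "" && keyFile != "" {
		cert, err := tls.LoadX509KeyPair(certFile, keyFile)
		if err != nil {
			return nil, fmt.Errorf("load client certificate: %w", err)
		}
		cfg.Certificates = []tls.Certificate{cert} // presented during the handshake for mTLS
	}
	return cfg, nil
}

func main() {
	cfg, err := clientTLSConfig("ca.pem", "client.pem", "client-key.pem")
	fmt.Println(cfg != nil, err)
}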
@ -196,7 +131,7 @@ func (h *HTTPClientSink) Start(ctx context.Context) error {
"component", "http_client_sink",
"url", h.config.URL,
"batch_size", h.config.BatchSize,
"batch_delay", h.config.BatchDelay)
"batch_delay_ms", h.config.BatchDelayMS)
return nil
}
@ -287,7 +222,7 @@ func (h *HTTPClientSink) processLoop(ctx context.Context) {
func (h *HTTPClientSink) batchTimer(ctx context.Context) {
defer h.wg.Done()
ticker := time.NewTicker(h.config.BatchDelay)
ticker := time.NewTicker(time.Duration(h.config.BatchDelayMS) * time.Millisecond)
defer ticker.Stop()
for {
@ -356,8 +291,9 @@ func (h *HTTPClientSink) sendBatch(batch []core.LogEntry) {
// Retry logic
var lastErr error
retryDelay := h.config.RetryDelay
retryDelay := time.Duration(h.config.RetryDelayMS) * time.Millisecond
// TODO: verify retry loop placement; should it instead start after the resources are acquired (req := ...)?
for attempt := int64(0); attempt <= h.config.MaxRetries; attempt++ {
if attempt > 0 {
// Wait before retry
@ -367,9 +303,10 @@ func (h *HTTPClientSink) sendBatch(batch []core.LogEntry) {
newDelay := time.Duration(float64(retryDelay) * h.config.RetryBackoff)
// Cap at maximum to prevent integer overflow
if newDelay > h.config.Timeout || newDelay < retryDelay {
timeout := time.Duration(h.config.Timeout) * time.Second
if newDelay > timeout || newDelay < retryDelay {
// Either exceeded max or overflowed (negative/wrapped)
retryDelay = h.config.Timeout
retryDelay = timeout
} else {
retryDelay = newDelay
}
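// Illustrative sketch (not part of this changeset): the overflow-safe
// backoff step used above. Multiplying a time.Duration by a float64 can
// wrap negative, so a result that shrank is treated the same as one
// that exceeded the cap.
package main

import (
	"fmt"
	"time"
)

func nextDelay(current time.Duration, factor float64, limit time.Duration) time.Duration {
	next := time.Duration(float64(current) * factor)
	if next > limit || next < current { // exceeded the cap, or overflowed and wrapped
		return limit
	}
	return next
}

func main() {
	d := time.Second
	for i := 0; i < 6; i++ {
		d = nextDelay(d, 2.0, 30*time.Second)
		fmt.Println(d) // 2s, 4s, 8s, 16s, 30s, 30s
	}
}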
@ -381,15 +318,31 @@ func (h *HTTPClientSink) sendBatch(batch []core.LogEntry) {
req.SetRequestURI(h.config.URL)
req.Header.SetMethod("POST")
req.Header.SetContentType("application/json")
req.SetBody(body)
// Set headers
for k, v := range h.config.Headers {
req.Header.Set(k, v)
req.Header.Set("User-Agent", fmt.Sprintf("LogWisp/%s", version.Short()))
// Add authentication based on auth type
switch h.config.Auth.Type {
case "basic":
creds := h.config.Auth.Username + ":" + h.config.Auth.Password
encodedCreds := base64.StdEncoding.EncodeToString([]byte(creds))
req.Header.Set("Authorization", "Basic "+encodedCreds)
case "token":
req.Header.Set("Authorization", "Token "+h.config.Auth.Token)
case "mtls":
// mTLS auth is handled at TLS layer via client certificates
// No Authorization header needed
case "none":
// No authentication
}
// Send request
err := h.client.DoTimeout(req, resp, h.config.Timeout)
err := h.client.DoTimeout(req, resp, time.Duration(h.config.Timeout)*time.Second)
// Capture response before releasing
statusCode := resp.StatusCode()
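// Illustrative sketch (not part of this changeset): deriving the
// Authorization header for each client auth type handled above.
// Credentials are placeholders; "mtls" authenticates at the TLS layer
// and "none" sends no header at all.
package main

import (
	"encoding/base64"
	"fmt"
)

func authHeader(authType, user, pass, token string) string {
	switch authType {
	case "basic":
		return "Basic " + base64.StdEncoding.EncodeToString([]byte(user+":"+pass))
	case "token":
		return "Token " + token
	default:
		return ""
	}
}

func main() {
	fmt.Println(authHeader("basic", "user", "secret", ""))
	fmt.Println(authHeader("token", "", "", "abc123"))
}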

View File

@ -5,26 +5,25 @@ import (
"context"
"time"
"logwisp/src/internal/config"
"logwisp/src/internal/core"
)
// Sink represents an output destination for log entries
// Represents an output data stream
type Sink interface {
// Input returns the channel for sending log entries to this sink
// Returns the channel for sending log entries to this sink
Input() chan<- core.LogEntry
// Start begins processing log entries
// Begins processing log entries
Start(ctx context.Context) error
// Stop gracefully shuts down the sink
// Gracefully shuts down the sink
Stop()
// GetStats returns sink statistics
// Returns sink statistics
GetStats() SinkStats
}
// SinkStats contains statistics about a sink
// Contains statistics about a sink
type SinkStats struct {
Type string
TotalProcessed uint64
@ -32,9 +31,4 @@ type SinkStats struct {
StartTime time.Time
LastProcessed time.Time
Details map[string]any
}
// AuthSetter is an interface for sinks that can accept an AuthConfig.
type AuthSetter interface {
SetAuthConfig(auth *config.AuthConfig)
}
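// Illustrative sketch (not part of this changeset): the smallest useful
// implementation of the Sink contract above; it counts entries and
// discards them. LogEntry is a local stand-in for core.LogEntry.
package main

import (
	"context"
	"fmt"
	"sync/atomic"
	"time"
)

type LogEntry struct{ Message string }

type countingSink struct {
	input chan LogEntry
	done  chan struct{}
	n     atomic.Uint64
}

func newCountingSink() *countingSink {
	return &countingSink{input: make(chan LogEntry, 16), done: make(chan struct{})}
}

func (s *countingSink) Input() chan<- LogEntry { return s.input }

func (s *countingSink) Start(ctx context.Context) error {
	go func() {
		for {
			select {
			case <-ctx.Done():
				return
			case <-s.done:
				return
			case _, ok := <-s.input:
				if !ok {
					return
				}
				s.n.Add(1)
			}
		}
	}()
	return nil
}

func (s *countingSink) Stop() { close(s.done) }

func main() {
	s := newCountingSink()
	_ = s.Start(context.Background())
	s.Input() <- LogEntry{Message: "hello"}
	time.Sleep(10 * time.Millisecond) // let the goroutine drain the channel
	s.Stop()
	fmt.Println("processed:", s.n.Load())
}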

View File

@ -7,7 +7,6 @@ import (
"encoding/json"
"fmt"
"net"
"strings"
"sync"
"sync/atomic"
"time"
@ -17,17 +16,16 @@ import (
"logwisp/src/internal/core"
"logwisp/src/internal/format"
"logwisp/src/internal/limit"
"logwisp/src/internal/tls"
"github.com/lixenwraith/log"
"github.com/lixenwraith/log/compat"
"github.com/panjf2000/gnet/v2"
)
// TCPSink streams log entries via TCP
// Streams log entries via TCP
type TCPSink struct {
input chan core.LogEntry
config TCPConfig
config *config.TCPSinkOptions
server *tcpServer
done chan struct{}
activeConns atomic.Int64
@ -39,16 +37,9 @@ type TCPSink struct {
logger *log.Logger
formatter format.Formatter
// Security components
authenticator *auth.Authenticator
tlsManager *tls.Manager
authConfig *config.AuthConfig
// Statistics
totalProcessed atomic.Uint64
lastProcessed atomic.Value // time.Time
authFailures atomic.Uint64
authSuccesses atomic.Uint64
// Write error tracking
writeErrors atomic.Uint64
@ -56,116 +47,24 @@ type TCPSink struct {
errorMu sync.Mutex
}
// TCPConfig holds TCP sink configuration
// Holds TCP sink configuration
type TCPConfig struct {
Host string
Port int64
BufferSize int64
Heartbeat *config.HeartbeatConfig
SSL *config.SSLConfig
NetLimit *config.NetLimitConfig
}
// NewTCPSink creates a new TCP streaming sink
func NewTCPSink(options map[string]any, logger *log.Logger, formatter format.Formatter) (*TCPSink, error) {
cfg := TCPConfig{
Port: int64(9090),
BufferSize: int64(1000),
}
// Extract configuration from options
if port, ok := options["port"].(int64); ok {
cfg.Port = port
}
if bufSize, ok := options["buffer_size"].(int64); ok {
cfg.BufferSize = bufSize
}
// Extract heartbeat config
if hb, ok := options["heartbeat"].(map[string]any); ok {
cfg.Heartbeat = &config.HeartbeatConfig{}
cfg.Heartbeat.Enabled, _ = hb["enabled"].(bool)
if interval, ok := hb["interval_seconds"].(int64); ok {
cfg.Heartbeat.IntervalSeconds = interval
}
cfg.Heartbeat.IncludeTimestamp, _ = hb["include_timestamp"].(bool)
cfg.Heartbeat.IncludeStats, _ = hb["include_stats"].(bool)
if hbFormat, ok := hb["format"].(string); ok {
cfg.Heartbeat.Format = hbFormat
}
}
// Extract SSL config
if ssl, ok := options["ssl"].(map[string]any); ok {
cfg.SSL = &config.SSLConfig{}
cfg.SSL.Enabled, _ = ssl["enabled"].(bool)
if certFile, ok := ssl["cert_file"].(string); ok {
cfg.SSL.CertFile = certFile
}
if keyFile, ok := ssl["key_file"].(string); ok {
cfg.SSL.KeyFile = keyFile
}
cfg.SSL.ClientAuth, _ = ssl["client_auth"].(bool)
if caFile, ok := ssl["client_ca_file"].(string); ok {
cfg.SSL.ClientCAFile = caFile
}
cfg.SSL.VerifyClientCert, _ = ssl["verify_client_cert"].(bool)
if minVer, ok := ssl["min_version"].(string); ok {
cfg.SSL.MinVersion = minVer
}
if maxVer, ok := ssl["max_version"].(string); ok {
cfg.SSL.MaxVersion = maxVer
}
if ciphers, ok := ssl["cipher_suites"].(string); ok {
cfg.SSL.CipherSuites = ciphers
}
}
// Extract net limit config
if rl, ok := options["net_limit"].(map[string]any); ok {
cfg.NetLimit = &config.NetLimitConfig{}
cfg.NetLimit.Enabled, _ = rl["enabled"].(bool)
if rps, ok := rl["requests_per_second"].(float64); ok {
cfg.NetLimit.RequestsPerSecond = rps
}
if burst, ok := rl["burst_size"].(int64); ok {
cfg.NetLimit.BurstSize = burst
}
if limitBy, ok := rl["limit_by"].(string); ok {
cfg.NetLimit.LimitBy = limitBy
}
if respCode, ok := rl["response_code"].(int64); ok {
cfg.NetLimit.ResponseCode = respCode
}
if msg, ok := rl["response_message"].(string); ok {
cfg.NetLimit.ResponseMessage = msg
}
if maxPerIP, ok := rl["max_connections_per_ip"].(int64); ok {
cfg.NetLimit.MaxConnectionsPerIP = maxPerIP
}
if maxTotal, ok := rl["max_total_connections"].(int64); ok {
cfg.NetLimit.MaxTotalConnections = maxTotal
}
if ipWhitelist, ok := rl["ip_whitelist"].([]any); ok {
cfg.NetLimit.IPWhitelist = make([]string, 0, len(ipWhitelist))
for _, entry := range ipWhitelist {
if str, ok := entry.(string); ok {
cfg.NetLimit.IPWhitelist = append(cfg.NetLimit.IPWhitelist, str)
}
}
}
if ipBlacklist, ok := rl["ip_blacklist"].([]any); ok {
cfg.NetLimit.IPBlacklist = make([]string, 0, len(ipBlacklist))
for _, entry := range ipBlacklist {
if str, ok := entry.(string); ok {
cfg.NetLimit.IPBlacklist = append(cfg.NetLimit.IPBlacklist, str)
}
}
}
// Creates a new TCP streaming sink
func NewTCPSink(opts *config.TCPSinkOptions, logger *log.Logger, formatter format.Formatter) (*TCPSink, error) {
if opts == nil {
return nil, fmt.Errorf("TCP sink options cannot be nil")
}
t := &TCPSink{
input: make(chan core.LogEntry, cfg.BufferSize),
config: cfg,
config: opts, // Direct reference to config
input: make(chan core.LogEntry, opts.BufferSize),
done: make(chan struct{}),
startTime: time.Now(),
logger: logger,
@ -173,9 +72,11 @@ func NewTCPSink(options map[string]any, logger *log.Logger, formatter format.For
}
t.lastProcessed.Store(time.Time{})
// Initialize net limiter
if cfg.NetLimit != nil && cfg.NetLimit.Enabled {
t.netLimiter = limit.NewNetLimiter(*cfg.NetLimit, logger)
// Initialize net limiter (config now passed by pointer)
if opts.NetLimit != nil && (opts.NetLimit.Enabled ||
len(opts.NetLimit.IPWhitelist) > 0 ||
len(opts.NetLimit.IPBlacklist) > 0) {
t.netLimiter = limit.NewNetLimiter(opts.NetLimit, logger)
}
return t, nil
@ -199,7 +100,7 @@ func (t *TCPSink) Start(ctx context.Context) error {
}()
// Configure gnet options
addr := fmt.Sprintf("tcp://:%d", t.config.Port)
addr := fmt.Sprintf("tcp://%s:%d", t.config.Host, t.config.Port)
// Create a gnet adapter using the existing logger instance
gnetLogger := compat.NewGnetAdapter(t.logger)
@ -216,8 +117,7 @@ func (t *TCPSink) Start(ctx context.Context) error {
go func() {
t.logger.Info("msg", "Starting TCP server",
"component", "tcp_sink",
"port", t.config.Port,
"auth", t.authenticator != nil)
"port", t.config.Port)
err := gnet.Run(t.server, addr, opts...)
if err != nil {
@ -285,18 +185,6 @@ func (t *TCPSink) GetStats() SinkStats {
netLimitStats = t.netLimiter.GetStats()
}
var authStats map[string]any
if t.authenticator != nil {
authStats = t.authenticator.GetStats()
authStats["failures"] = t.authFailures.Load()
authStats["successes"] = t.authSuccesses.Load()
}
var tlsStats map[string]any
if t.tlsManager != nil {
tlsStats = t.tlsManager.GetStats()
}
return SinkStats{
Type: "tcp",
TotalProcessed: t.totalProcessed.Load(),
@ -307,8 +195,7 @@ func (t *TCPSink) GetStats() SinkStats {
"port": t.config.Port,
"buffer_size": t.config.BufferSize,
"net_limit": netLimitStats,
"auth": authStats,
"tls": tlsStats,
"auth": map[string]any{"enabled": false},
},
}
}
@ -317,8 +204,8 @@ func (t *TCPSink) broadcastLoop(ctx context.Context) {
var ticker *time.Ticker
var tickerChan <-chan time.Time
if t.config.Heartbeat.Enabled {
ticker = time.NewTicker(time.Duration(t.config.Heartbeat.IntervalSeconds) * time.Second)
if t.config.Heartbeat != nil && t.config.Heartbeat.Enabled {
ticker = time.NewTicker(time.Duration(t.config.Heartbeat.IntervalMS) * time.Millisecond)
tickerChan = ticker.C
defer ticker.Stop()
}
@ -342,37 +229,7 @@ func (t *TCPSink) broadcastLoop(ctx context.Context) {
"entry_source", entry.Source)
continue
}
// Broadcast only to authenticated clients
t.server.mu.RLock()
for conn, client := range t.server.clients {
if client.authenticated {
// Send through TLS bridge if present
if client.tlsBridge != nil {
if _, err := client.tlsBridge.Write(data); err != nil {
// TLS write failed, connection likely dead
t.logger.Debug("msg", "TLS write failed",
"component", "tcp_sink",
"error", err)
conn.Close()
}
} else {
conn.AsyncWrite(data, func(c gnet.Conn, err error) error {
if err != nil {
t.writeErrors.Add(1)
t.handleWriteError(c, err)
} else {
// Reset consecutive error count on success
t.errorMu.Lock()
delete(t.consecutiveWriteErrors, c)
t.errorMu.Unlock()
}
return nil
})
}
}
}
t.server.mu.RUnlock()
t.broadcastData(data)
case <-tickerChan:
heartbeatEntry := t.createHeartbeatEntry()
@ -383,37 +240,7 @@ func (t *TCPSink) broadcastLoop(ctx context.Context) {
"error", err)
continue
}
t.server.mu.RLock()
for conn, client := range t.server.clients {
if client.authenticated {
// Validate session is still active
if t.authenticator != nil && client.session != nil {
if !t.authenticator.ValidateSession(client.session.ID) {
// Session expired, close connection
conn.Close()
continue
}
}
if client.tlsBridge != nil {
if _, err := client.tlsBridge.Write(data); err != nil {
t.logger.Debug("msg", "TLS heartbeat write failed",
"component", "tcp_sink",
"error", err)
conn.Close()
}
} else {
conn.AsyncWrite(data, func(c gnet.Conn, err error) error {
if err != nil {
t.writeErrors.Add(1)
t.handleWriteError(c, err)
}
return nil
})
}
}
}
t.server.mu.RUnlock()
t.broadcastData(data)
case <-t.done:
return
@ -421,6 +248,26 @@ func (t *TCPSink) broadcastLoop(ctx context.Context) {
}
}
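// Illustrative sketch (not part of this changeset): the nil-channel
// ticker idiom both broadcast loops above rely on. When heartbeats are
// disabled, tickerChan stays nil, and receiving from a nil channel
// blocks forever, so that select case simply never fires.
package main

import (
	"fmt"
	"time"
)

func run(heartbeat bool, intervalMS int64, work <-chan string) {
	var ticker *time.Ticker
	var tickerChan <-chan time.Time
	if heartbeat {
		ticker = time.NewTicker(time.Duration(intervalMS) * time.Millisecond)
		tickerChan = ticker.C
		defer ticker.Stop()
	}
	for {
		select {
		case msg, ok := <-work:
			if !ok {
				return
			}
			fmt.Println("entry:", msg)
		case <-tickerChan: // never fires while tickerChan is nil
			fmt.Println("heartbeat")
		}
	}
}

func main() {
	work := make(chan string, 1)
	work <- "hello"
	close(work)
	run(false, 1000, work) // prints only "entry: hello"
}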
func (t *TCPSink) broadcastData(data []byte) {
t.server.mu.RLock()
defer t.server.mu.RUnlock()
for conn := range t.server.clients {
conn.AsyncWrite(data, func(c gnet.Conn, err error) error {
if err != nil {
t.writeErrors.Add(1)
t.handleWriteError(c, err)
} else {
// Reset consecutive error count on success
t.errorMu.Lock()
delete(t.consecutiveWriteErrors, c)
t.errorMu.Unlock()
}
return nil
})
}
}
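// Illustrative sketch (not part of this changeset): the threshold-based
// termination pattern behind handleWriteError below; consecutive async
// write failures are counted per connection, any success resets the
// streak, and crossing the threshold closes the connection. The
// threshold value is a placeholder, and locking is omitted since this
// sketch runs on a single goroutine.
package main

import "fmt"

type errorTracker struct {
	consecutive map[string]int
	threshold   int
}

// record returns true when the connection should be terminated.
func (t *errorTracker) record(conn string, failed bool) bool {
	if !failed {
		delete(t.consecutive, conn) // success resets the streak
		return false
	}
	t.consecutive[conn]++
	return t.consecutive[conn] >= t.threshold
}

func main() {
	t := &errorTracker{consecutive: map[string]int{}, threshold: 3}
	for i := 0; i < 3; i++ {
		fmt.Println("close?", t.record("10.0.0.5:51234", true)) // false, false, true
	}
}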
// Handle write errors with threshold-based connection termination
func (t *TCPSink) handleWriteError(c gnet.Conn, err error) {
t.errorMu.Lock()
@ -475,23 +322,20 @@ func (t *TCPSink) createHeartbeatEntry() core.LogEntry {
}
}
// GetActiveConnections returns the current number of connections
// Returns the current number of connections
func (t *TCPSink) GetActiveConnections() int64 {
return t.activeConns.Load()
}
// tcpClient represents a connected TCP client with auth state
// Represents a connected TCP client with auth state
type tcpClient struct {
conn gnet.Conn
buffer bytes.Buffer
authenticated bool
session *auth.Session
authTimeout time.Time
tlsBridge *tls.GNetTLSConn
authTimeoutSet bool
conn gnet.Conn
buffer bytes.Buffer
authTimeout time.Time
session *auth.Session
}
// tcpServer handles gnet events with authentication
// Handles gnet events with authentication
type tcpServer struct {
gnet.BuiltinEventEngine
sink *TCPSink
@ -515,7 +359,7 @@ func (s *tcpServer) OnOpen(c gnet.Conn) (out []byte, action gnet.Action) {
remoteAddr := c.RemoteAddr()
s.sink.logger.Debug("msg", "TCP connection attempt", "remote_addr", remoteAddr)
// Reject IPv6 connections immediately
// Reject IPv6 connections
if tcpAddr, ok := remoteAddr.(*net.TCPAddr); ok {
if tcpAddr.IP.To4() == nil {
return []byte("IPv4-only (IPv6 not supported)\n"), gnet.Close
@ -543,26 +387,10 @@ func (s *tcpServer) OnOpen(c gnet.Conn) (out []byte, action gnet.Action) {
s.sink.netLimiter.AddConnection(remoteStr)
}
// Create client state without auth timeout initially
// TCP Sink accepts all connections without authentication
client := &tcpClient{
conn: c,
authenticated: s.sink.authenticator == nil, // No auth = auto authenticated
authTimeoutSet: false, // Auth timeout not started yet
}
// Initialize TLS bridge if enabled
if s.sink.tlsManager != nil {
tlsConfig := s.sink.tlsManager.GetTCPConfig()
client.tlsBridge = tls.NewServerConn(c, tlsConfig)
client.tlsBridge.Handshake() // Start async handshake
s.sink.logger.Debug("msg", "TLS handshake initiated",
"component", "tcp_sink",
"remote_addr", remoteAddr)
} else if s.sink.authenticator != nil {
// Only set auth timeout if no TLS (plain connection)
client.authTimeout = time.Now().Add(30 * time.Second) // TODO: configurable or non-hardcoded timer
client.authTimeoutSet = true
conn: c,
buffer: bytes.Buffer{},
}
s.mu.Lock()
@ -572,14 +400,7 @@ func (s *tcpServer) OnOpen(c gnet.Conn) (out []byte, action gnet.Action) {
newCount := s.sink.activeConns.Add(1)
s.sink.logger.Debug("msg", "TCP connection opened",
"remote_addr", remoteAddr,
"active_connections", newCount,
"requires_auth", s.sink.authenticator != nil)
// Send auth prompt if authentication is required
if s.sink.authenticator != nil && s.sink.tlsManager == nil {
authPrompt := []byte("AUTH REQUIRED\nFormat: AUTH <method> <credentials>\nMethods: basic, token\n")
return authPrompt, gnet.None
}
"active_connections", newCount)
return nil, gnet.None
}
@ -589,17 +410,9 @@ func (s *tcpServer) OnClose(c gnet.Conn, err error) gnet.Action {
// Remove client state
s.mu.Lock()
client := s.clients[c]
delete(s.clients, c)
s.mu.Unlock()
// Clean up TLS bridge if present
if client != nil && client.tlsBridge != nil {
client.tlsBridge.Close()
s.sink.logger.Debug("msg", "TLS connection closed",
"remote_addr", remoteAddr)
}
// Clean up write error tracking
s.sink.errorMu.Lock()
delete(s.sink.consecutiveWriteErrors, c)
@ -619,204 +432,7 @@ func (s *tcpServer) OnClose(c gnet.Conn, err error) gnet.Action {
}
func (s *tcpServer) OnTraffic(c gnet.Conn) gnet.Action {
s.mu.RLock()
client, exists := s.clients[c]
s.mu.RUnlock()
if !exists {
return gnet.Close
}
// // Check auth timeout
// if !client.authenticated && time.Now().After(client.authTimeout) {
// s.sink.logger.Warn("msg", "Authentication timeout",
// "component", "tcp_sink",
// "remote_addr", c.RemoteAddr().String())
// if client.tlsBridge != nil && client.tlsBridge.IsHandshakeDone() {
// client.tlsBridge.Write([]byte("AUTH TIMEOUT\n"))
// } else if client.tlsBridge == nil {
// c.AsyncWrite([]byte("AUTH TIMEOUT\n"), nil)
// }
// return gnet.Close
// }
// Read all available data
data, err := c.Next(-1)
if err != nil {
s.sink.logger.Error("msg", "Error reading from connection",
"component", "tcp_sink",
"error", err)
return gnet.Close
}
// Process through TLS bridge if present
if client.tlsBridge != nil {
// Feed encrypted data into TLS engine
if err := client.tlsBridge.ProcessIncoming(data); err != nil {
s.sink.logger.Error("msg", "TLS processing error",
"component", "tcp_sink",
"remote_addr", c.RemoteAddr().String(),
"error", err)
return gnet.Close
}
// Check if handshake is complete
if !client.tlsBridge.IsHandshakeDone() {
// Still handshaking, wait for more data
return gnet.None
}
// Check handshake result
_, hsErr := client.tlsBridge.HandshakeComplete()
if hsErr != nil {
s.sink.logger.Error("msg", "TLS handshake failed",
"component", "tcp_sink",
"remote_addr", c.RemoteAddr().String(),
"error", hsErr)
return gnet.Close
}
// Set auth timeout only after TLS handshake completes
if !client.authTimeoutSet && s.sink.authenticator != nil && !client.authenticated {
client.authTimeout = time.Now().Add(30 * time.Second)
client.authTimeoutSet = true
s.sink.logger.Debug("msg", "Auth timeout started after TLS handshake",
"component", "tcp_sink",
"remote_addr", c.RemoteAddr().String())
}
// Read decrypted plaintext
data = client.tlsBridge.Read()
if data == nil || len(data) == 0 {
// No plaintext available yet
return gnet.None
}
// First data after TLS handshake - send auth prompt if needed
if s.sink.authenticator != nil && !client.authenticated &&
len(client.buffer.Bytes()) == 0 {
authPrompt := []byte("AUTH REQUIRED\n")
client.tlsBridge.Write(authPrompt)
}
}
// Only check auth timeout if it has been set
if !client.authenticated && client.authTimeoutSet && time.Now().After(client.authTimeout) {
s.sink.logger.Warn("msg", "Authentication timeout",
"component", "tcp_sink",
"remote_addr", c.RemoteAddr().String())
if client.tlsBridge != nil && client.tlsBridge.IsHandshakeDone() {
client.tlsBridge.Write([]byte("AUTH TIMEOUT\n"))
} else if client.tlsBridge == nil {
c.AsyncWrite([]byte("AUTH TIMEOUT\n"), nil)
}
return gnet.Close
}
// If not authenticated, expect auth command
if !client.authenticated {
client.buffer.Write(data)
// Look for complete auth line
if line, err := client.buffer.ReadBytes('\n'); err == nil {
line = bytes.TrimSpace(line)
// Parse AUTH command: AUTH <method> <credentials>
parts := strings.SplitN(string(line), " ", 3)
if len(parts) != 3 || parts[0] != "AUTH" {
// Send error through TLS if enabled
errMsg := []byte("AUTH FAILED\n")
if client.tlsBridge != nil {
client.tlsBridge.Write(errMsg)
} else {
c.AsyncWrite(errMsg, nil)
}
return gnet.None
}
// Authenticate
session, err := s.sink.authenticator.AuthenticateTCP(parts[1], parts[2], c.RemoteAddr().String())
if err != nil {
s.sink.authFailures.Add(1)
s.sink.logger.Warn("msg", "TCP authentication failed",
"remote_addr", c.RemoteAddr().String(),
"method", parts[1],
"error", err)
// Send error through TLS if enabled
errMsg := []byte("AUTH FAILED\n")
if client.tlsBridge != nil {
client.tlsBridge.Write(errMsg)
} else {
c.AsyncWrite(errMsg, nil)
}
return gnet.Close
}
// Authentication successful
s.sink.authSuccesses.Add(1)
s.mu.Lock()
client.authenticated = true
client.session = session
s.mu.Unlock()
s.sink.logger.Info("msg", "TCP client authenticated",
"component", "tcp_sink",
"remote_addr", c.RemoteAddr().String(),
"username", session.Username,
"method", session.Method,
"tls", client.tlsBridge != nil)
// Send success through TLS if enabled
successMsg := []byte("AUTH OK\n")
if client.tlsBridge != nil {
client.tlsBridge.Write(successMsg)
} else {
c.AsyncWrite(successMsg, nil)
}
// Clear buffer after auth
client.buffer.Reset()
}
return gnet.None
}
// Authenticated clients shouldn't send data, just discard
// TCP Sink doesn't expect any data from clients, discard all
c.Discard(-1)
return gnet.None
}
// SetAuthConfig configures tcp sink authentication
func (t *TCPSink) SetAuthConfig(authCfg *config.AuthConfig) {
if authCfg == nil || authCfg.Type == "none" {
return
}
t.authConfig = authCfg
authenticator, err := auth.New(authCfg, t.logger)
if err != nil {
t.logger.Error("msg", "Failed to initialize authenticator for TCP sink",
"component", "tcp_sink",
"error", err)
return
}
t.authenticator = authenticator
// Initialize TLS manager if SSL is configured
if t.config.SSL != nil && t.config.SSL.Enabled {
tlsManager, err := tls.NewManager(t.config.SSL, t.logger)
if err != nil {
t.logger.Error("msg", "Failed to create TLS manager",
"component", "tcp_sink",
"error", err)
// Continue without TLS
return
}
t.tlsManager = tlsManager
}
t.logger.Info("msg", "Authentication configured for TCP sink",
"component", "tcp_sink",
"auth_type", authCfg.Type,
"tls_enabled", t.tlsManager != nil,
"tls_bridge", t.tlsManager != nil)
}

View File

@ -2,27 +2,32 @@
package sink
import (
"bufio"
"context"
"crypto/tls"
"encoding/json"
"errors"
"fmt"
"net"
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
"logwisp/src/internal/auth"
"logwisp/src/internal/config"
"logwisp/src/internal/core"
"logwisp/src/internal/format"
tlspkg "logwisp/src/internal/tls"
"github.com/lixenwraith/log"
)
// TCPClientSink forwards log entries to a remote TCP endpoint
// TODO: implement heartbeat for TCP Client Sink, similar to TCP Sink
// Forwards log entries to a remote TCP endpoint
type TCPClientSink struct {
input chan core.LogEntry
config TCPClientConfig
config *config.TCPClientSinkOptions
address string
conn net.Conn
connMu sync.RWMutex
done chan struct{}
@ -31,10 +36,6 @@ type TCPClientSink struct {
logger *log.Logger
formatter format.Formatter
// TLS support
tlsManager *tlspkg.Manager
tlsConfig *tls.Config
// Reconnection state
reconnecting atomic.Bool
lastConnectErr error
@ -48,93 +49,17 @@ type TCPClientSink struct {
connectionUptime atomic.Value // time.Duration
}
// TCPClientConfig holds TCP client sink configuration
type TCPClientConfig struct {
Address string
BufferSize int64
DialTimeout time.Duration
WriteTimeout time.Duration
KeepAlive time.Duration
// Reconnection settings
ReconnectDelay time.Duration
MaxReconnectDelay time.Duration
ReconnectBackoff float64
// TLS config
SSL *config.SSLConfig
}
// NewTCPClientSink creates a new TCP client sink
func NewTCPClientSink(options map[string]any, logger *log.Logger, formatter format.Formatter) (*TCPClientSink, error) {
cfg := TCPClientConfig{
BufferSize: int64(1000),
DialTimeout: 10 * time.Second,
WriteTimeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
ReconnectDelay: time.Second,
MaxReconnectDelay: 30 * time.Second,
ReconnectBackoff: float64(1.5),
}
// Extract address
address, ok := options["address"].(string)
if !ok || address == "" {
return nil, fmt.Errorf("tcp_client sink requires 'address' option")
}
// Validate address format
_, _, err := net.SplitHostPort(address)
if err != nil {
return nil, fmt.Errorf("invalid address format (expected host:port): %w", err)
}
cfg.Address = address
// Extract other options
if bufSize, ok := options["buffer_size"].(int64); ok && bufSize > 0 {
cfg.BufferSize = bufSize
}
if dialTimeout, ok := options["dial_timeout_seconds"].(int64); ok && dialTimeout > 0 {
cfg.DialTimeout = time.Duration(dialTimeout) * time.Second
}
if writeTimeout, ok := options["write_timeout_seconds"].(int64); ok && writeTimeout > 0 {
cfg.WriteTimeout = time.Duration(writeTimeout) * time.Second
}
if keepAlive, ok := options["keep_alive_seconds"].(int64); ok && keepAlive > 0 {
cfg.KeepAlive = time.Duration(keepAlive) * time.Second
}
if reconnectDelay, ok := options["reconnect_delay_ms"].(int64); ok && reconnectDelay > 0 {
cfg.ReconnectDelay = time.Duration(reconnectDelay) * time.Millisecond
}
if maxReconnectDelay, ok := options["max_reconnect_delay_seconds"].(int64); ok && maxReconnectDelay > 0 {
cfg.MaxReconnectDelay = time.Duration(maxReconnectDelay) * time.Second
}
if backoff, ok := options["reconnect_backoff"].(float64); ok && backoff >= 1.0 {
cfg.ReconnectBackoff = backoff
}
// Extract SSL config
if ssl, ok := options["ssl"].(map[string]any); ok {
cfg.SSL = &config.SSLConfig{}
cfg.SSL.Enabled, _ = ssl["enabled"].(bool)
if certFile, ok := ssl["cert_file"].(string); ok {
cfg.SSL.CertFile = certFile
}
if keyFile, ok := ssl["key_file"].(string); ok {
cfg.SSL.KeyFile = keyFile
}
cfg.SSL.ClientAuth, _ = ssl["client_auth"].(bool)
if caFile, ok := ssl["client_ca_file"].(string); ok {
cfg.SSL.ClientCAFile = caFile
}
if insecure, ok := ssl["insecure_skip_verify"].(bool); ok {
cfg.SSL.InsecureSkipVerify = insecure
}
// Creates a new TCP client sink
func NewTCPClientSink(opts *config.TCPClientSinkOptions, logger *log.Logger, formatter format.Formatter) (*TCPClientSink, error) {
// Validation and defaults are handled in config package
if opts == nil {
return nil, fmt.Errorf("TCP client sink options cannot be nil")
}
t := &TCPClientSink{
input: make(chan core.LogEntry, cfg.BufferSize),
config: cfg,
config: opts,
address: opts.Host + ":" + strconv.Itoa(int(opts.Port)),
input: make(chan core.LogEntry, opts.BufferSize),
done: make(chan struct{}),
startTime: time.Now(),
logger: logger,
@ -143,34 +68,6 @@ func NewTCPClientSink(options map[string]any, logger *log.Logger, formatter form
t.lastProcessed.Store(time.Time{})
t.connectionUptime.Store(time.Duration(0))
// Initialize TLS manager if SSL is configured
if cfg.SSL != nil && cfg.SSL.Enabled {
tlsManager, err := tlspkg.NewManager(cfg.SSL, logger)
if err != nil {
return nil, fmt.Errorf("failed to create TLS manager: %w", err)
}
t.tlsManager = tlsManager
// Get client TLS config
t.tlsConfig = tlsManager.GetTCPConfig()
// ADDED: Client-specific TLS config adjustments
t.tlsConfig.InsecureSkipVerify = cfg.SSL.InsecureSkipVerify
// Extract server name from address for SNI
host, _, err := net.SplitHostPort(cfg.Address)
if err != nil {
return nil, fmt.Errorf("failed to parse address for SNI: %w", err)
}
t.tlsConfig.ServerName = host
logger.Info("msg", "TLS enabled for TCP client",
"component", "tcp_client_sink",
"address", cfg.Address,
"server_name", host,
"insecure", cfg.SSL.InsecureSkipVerify)
}
return t, nil
}
@ -189,7 +86,8 @@ func (t *TCPClientSink) Start(ctx context.Context) error {
t.logger.Info("msg", "TCP client sink started",
"component", "tcp_client_sink",
"address", t.config.Address)
"host", t.config.Host,
"port", t.config.Port)
return nil
}
@ -231,7 +129,7 @@ func (t *TCPClientSink) GetStats() SinkStats {
StartTime: t.startTime,
LastProcessed: lastProc,
Details: map[string]any{
"address": t.config.Address,
"address": t.address,
"connected": connected,
"reconnecting": t.reconnecting.Load(),
"total_failed": t.totalFailed.Load(),
@ -245,7 +143,7 @@ func (t *TCPClientSink) GetStats() SinkStats {
func (t *TCPClientSink) connectionManager(ctx context.Context) {
defer t.wg.Done()
reconnectDelay := t.config.ReconnectDelay
reconnectDelay := time.Duration(t.config.ReconnectDelayMS) * time.Millisecond
for {
select {
@ -265,9 +163,9 @@ func (t *TCPClientSink) connectionManager(ctx context.Context) {
t.lastConnectErr = err
t.logger.Warn("msg", "Failed to connect to TCP server",
"component", "tcp_client_sink",
"address", t.config.Address,
"address", t.address,
"error", err,
"retry_delay", reconnectDelay)
"retry_delay_ms", reconnectDelay)
// Wait before retry
select {
@ -280,15 +178,15 @@ func (t *TCPClientSink) connectionManager(ctx context.Context) {
// Exponential backoff
reconnectDelay = time.Duration(float64(reconnectDelay) * t.config.ReconnectBackoff)
if reconnectDelay > time.Duration(t.config.MaxReconnectDelayMS)*time.Millisecond {
reconnectDelay = time.Duration(t.config.MaxReconnectDelayMS) * time.Millisecond
}
continue
}
// Connection successful
t.lastConnectErr = nil
reconnectDelay = time.Duration(t.config.ReconnectDelayMS) * time.Millisecond // Reset backoff
t.connectTime = time.Now()
t.totalReconnects.Add(1)
@ -298,7 +196,7 @@ func (t *TCPClientSink) connectionManager(ctx context.Context) {
t.logger.Info("msg", "Connected to TCP server",
"component", "tcp_client_sink",
"address", t.config.Address,
"address", t.address,
"local_addr", conn.LocalAddr())
// Monitor connection
@ -315,18 +213,18 @@ func (t *TCPClientSink) connectionManager(ctx context.Context) {
t.logger.Warn("msg", "Lost connection to TCP server",
"component", "tcp_client_sink",
"address", t.config.Address,
"address", t.address,
"uptime", uptime)
}
}
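The reconnect loop above grows the delay multiplicatively and caps it at the configured maximum. A tiny standalone sketch of the resulting delay sequence, assuming illustrative settings of `reconnect_delay_ms = 1000`, `reconnect_backoff = 2.0`, and `max_reconnect_delay_ms = 30000` (these values are examples, not defaults from the repository):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed illustrative settings, not values taken from the repository.
	delay := 1000 * time.Millisecond
	const backoff = 2.0
	maxDelay := 30000 * time.Millisecond

	for i := 0; i < 8; i++ {
		fmt.Println(delay) // 1s, 2s, 4s, 8s, 16s, 30s, 30s, 30s
		delay = time.Duration(float64(delay) * backoff)
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```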
func (t *TCPClientSink) connect() (net.Conn, error) {
dialer := &net.Dialer{
Timeout: time.Duration(t.config.DialTimeout) * time.Second,
KeepAlive: time.Duration(t.config.KeepAlive) * time.Second,
}
conn, err := dialer.Dial("tcp", t.config.Address)
conn, err := dialer.Dial("tcp", t.address)
if err != nil {
return nil, err
}
@ -334,41 +232,132 @@ func (t *TCPClientSink) connect() (net.Conn, error) {
// Set TCP keep-alive
if tcpConn, ok := conn.(*net.TCPConn); ok {
tcpConn.SetKeepAlive(true)
tcpConn.SetKeepAlivePeriod(time.Duration(t.config.KeepAlive) * time.Second)
}
// SCRAM authentication if credentials configured
if t.config.Auth != nil && t.config.Auth.Type == "scram" {
if err := t.performSCRAMAuth(conn); err != nil {
conn.Close()
return nil, fmt.Errorf("SCRAM authentication failed: %w", err)
}
t.logger.Debug("msg", "SCRAM authentication completed",
"component", "tcp_client_sink",
"address", t.address)
}
return conn, nil
}
func (t *TCPClientSink) performSCRAMAuth(conn net.Conn) error {
reader := bufio.NewReader(conn)
// Create SCRAM client
scramClient := auth.NewScramClient(t.config.Auth.Username, t.config.Auth.Password)
// Wait for AUTH_REQUIRED from server
authPrompt, err := reader.ReadString('\n')
if err != nil {
return fmt.Errorf("failed to read auth prompt: %w", err)
}
if strings.TrimSpace(authPrompt) != "AUTH_REQUIRED" {
return fmt.Errorf("unexpected server greeting: %s", authPrompt)
}
// Step 1: Send ClientFirst
clientFirst, err := scramClient.StartAuthentication()
if err != nil {
return fmt.Errorf("failed to start SCRAM: %w", err)
}
msg, err := auth.FormatSCRAMRequest("SCRAM-FIRST", clientFirst)
if err != nil {
return err
}
if _, err := conn.Write([]byte(msg)); err != nil {
return fmt.Errorf("failed to send SCRAM-FIRST: %w", err)
}
// Step 2: Receive ServerFirst challenge
response, err := reader.ReadString('\n')
if err != nil {
return fmt.Errorf("failed to read SCRAM challenge: %w", err)
}
command, data, err := auth.ParseSCRAMResponse(response)
if err != nil {
return err
}
if command != "SCRAM-CHALLENGE" {
return fmt.Errorf("unexpected server response: %s", command)
}
var serverFirst auth.ServerFirst
if err := json.Unmarshal([]byte(data), &serverFirst); err != nil {
return fmt.Errorf("failed to parse server challenge: %w", err)
}
// Step 3: Process challenge and send proof
clientFinal, err := scramClient.ProcessServerFirst(&serverFirst)
if err != nil {
return fmt.Errorf("failed to process challenge: %w", err)
}
msg, err = auth.FormatSCRAMRequest("SCRAM-PROOF", clientFinal)
if err != nil {
return err
}
if _, err := conn.Write([]byte(msg)); err != nil {
return fmt.Errorf("failed to send SCRAM-PROOF: %w", err)
}
// Step 4: Receive ServerFinal
response, err = reader.ReadString('\n')
if err != nil {
return fmt.Errorf("failed to read SCRAM result: %w", err)
}
command, data, err = auth.ParseSCRAMResponse(response)
if err != nil {
return err
}
switch command {
case "SCRAM-OK":
var serverFinal auth.ServerFinal
if err := json.Unmarshal([]byte(data), &serverFinal); err != nil {
return fmt.Errorf("failed to parse server signature: %w", err)
}
// Verify server signature
if err := scramClient.VerifyServerFinal(&serverFinal); err != nil {
return fmt.Errorf("server signature verification failed: %w", err)
}
t.logger.Info("msg", "SCRAM authentication successful",
"component", "tcp_client_sink",
"address", t.address,
"username", t.config.Auth.Username,
"session_id", serverFinal.SessionID)
return nil
case "SCRAM-FAIL":
reason := data
if reason == "" {
reason = "unknown"
}
return fmt.Errorf("authentication failed: %s", reason)
default:
return fmt.Errorf("unexpected response: %s", command)
}
}
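The exchange above is line-framed: the server opens with `AUTH_REQUIRED`, and every subsequent message travels as a single newline-terminated line. A minimal sketch of such framing helpers, assuming a `COMMAND <payload>` layout; the actual wire format is defined by the internal `auth` package (`FormatSCRAMRequest`, `ParseSCRAMResponse`) and may differ:

```go
package wire

import (
	"bufio"
	"fmt"
	"net"
	"strings"
)

// WriteMessage frames a command and its payload as one
// newline-terminated line (assumed layout, for illustration only).
func WriteMessage(conn net.Conn, command, payload string) error {
	_, err := fmt.Fprintf(conn, "%s %s\n", command, payload)
	return err
}

// ReadMessage reads one line and splits it into command and payload.
func ReadMessage(r *bufio.Reader) (command, payload string, err error) {
	line, err := r.ReadString('\n')
	if err != nil {
		return "", "", err
	}
	parts := strings.SplitN(strings.TrimSpace(line), " ", 2)
	command = parts[0]
	if len(parts) == 2 {
		payload = parts[1]
	}
	return command, payload, nil
}
```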
func (t *TCPClientSink) monitorConnection(conn net.Conn) {
// Simple connection monitoring by periodic zero-byte reads
ticker := time.NewTicker(5 * time.Second)
@ -381,8 +370,7 @@ func (t *TCPClientSink) monitorConnection(conn net.Conn) {
return
case <-ticker.C:
// Set read deadline
if err := conn.SetReadDeadline(time.Now().Add(time.Duration(t.config.ReadTimeout) * time.Second)); err != nil {
t.logger.Debug("msg", "Failed to set read deadline", "error", err)
return
}
@ -448,7 +436,7 @@ func (t *TCPClientSink) sendEntry(entry core.LogEntry) error {
}
// Set write deadline
if err := conn.SetWriteDeadline(time.Now().Add(time.Duration(t.config.WriteTimeout) * time.Second)); err != nil {
return fmt.Errorf("failed to set write deadline: %w", err)
}
@ -464,20 +452,4 @@ func (t *TCPClientSink) sendEntry(entry core.LogEntry) error {
}
return nil
}
// tlsVersionString returns human-readable TLS version
func tlsVersionString(version uint16) string {
switch version {
case tls.VersionTLS10:
return "TLS1.0"
case tls.VersionTLS11:
return "TLS1.1"
case tls.VersionTLS12:
return "TLS1.2"
case tls.VersionTLS13:
return "TLS1.3"
default:
return fmt.Sprintf("0x%04x", version)
}
}

View File

@ -13,16 +13,15 @@ import (
"sync/atomic"
"time"
"logwisp/src/internal/config"
"logwisp/src/internal/core"
"github.com/lixenwraith/log"
)
// Monitors a directory for log files
type DirectorySource struct {
config *config.DirectorySourceOptions
subscribers []chan core.LogEntry
watchers map[string]*fileWatcher
mu sync.RWMutex
@ -36,35 +35,17 @@ type DirectorySource struct {
logger *log.Logger
}
// Creates a new directory monitoring source
func NewDirectorySource(opts *config.DirectorySourceOptions, logger *log.Logger) (*DirectorySource, error) {
if opts == nil {
return nil, fmt.Errorf("directory source options cannot be nil")
}
ds := &DirectorySource{
config: opts,
watchers: make(map[string]*fileWatcher),
startTime: time.Now(),
logger: logger,
}
ds.lastEntryTime.Store(time.Time{})
@ -87,9 +68,9 @@ func (ds *DirectorySource) Start() error {
ds.logger.Info("msg", "Directory source started",
"component", "directory_source",
"path", ds.path,
"pattern", ds.pattern,
"check_interval_ms", ds.checkInterval.Milliseconds())
"path", ds.config.Path,
"pattern", ds.config.Pattern,
"check_interval_ms", ds.config.CheckIntervalMS)
return nil
}
@ -110,7 +91,7 @@ func (ds *DirectorySource) Stop() {
ds.logger.Info("msg", "Directory source stopped",
"component", "directory_source",
"path", ds.path)
"path", ds.config.Path)
}
func (ds *DirectorySource) GetStats() SourceStats {
@ -170,7 +151,7 @@ func (ds *DirectorySource) monitorLoop() {
ds.checkTargets()
ticker := time.NewTicker(time.Duration(ds.config.CheckIntervalMS) * time.Millisecond)
defer ticker.Stop()
for {
@ -188,8 +169,8 @@ func (ds *DirectorySource) checkTargets() {
if err != nil {
ds.logger.Warn("msg", "Failed to scan directory",
"component", "directory_source",
"path", ds.path,
"pattern", ds.pattern,
"path", ds.config.Path,
"pattern", ds.config.Pattern,
"error", err)
return
}
@ -202,13 +183,13 @@ func (ds *DirectorySource) checkTargets() {
}
func (ds *DirectorySource) scanDirectory() ([]string, error) {
entries, err := os.ReadDir(ds.config.Path)
if err != nil {
return nil, err
}
// Convert glob pattern to regex
regexPattern := globToRegex(ds.config.Pattern)
re, err := regexp.Compile(regexPattern)
if err != nil {
return nil, fmt.Errorf("invalid pattern regex: %w", err)
@ -222,7 +203,7 @@ func (ds *DirectorySource) scanDirectory() ([]string, error) {
name := entry.Name()
if re.MatchString(name) {
files = append(files, filepath.Join(ds.config.Path, name))
}
}
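`scanDirectory` leans on a `globToRegex` helper that is not shown in this diff. A plausible sketch of such a conversion, anchoring the pattern and escaping regex metacharacters; the name `globToRegexSketch` is a hypothetical stand-in and the project's real implementation may differ:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// globToRegexSketch converts a shell-style glob such as "*.log" into an
// anchored regular expression (illustrative sketch only).
func globToRegexSketch(glob string) string {
	var b strings.Builder
	b.WriteString("^")
	for _, r := range glob {
		switch r {
		case '*':
			b.WriteString(".*")
		case '?':
			b.WriteString(".")
		default:
			b.WriteString(regexp.QuoteMeta(string(r)))
		}
	}
	b.WriteString("$")
	return b.String()
}

func main() {
	re := regexp.MustCompile(globToRegexSketch("*.log"))
	fmt.Println(re.MatchString("app.log"))  // true
	fmt.Println(re.MatchString("app.logx")) // false
}
```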

View File

@ -20,7 +20,7 @@ import (
"github.com/lixenwraith/log"
)
// Contains information about a file watcher
type WatcherInfo struct {
Path string
Size int64
@ -81,7 +81,6 @@ func (w *fileWatcher) watch(ctx context.Context) error {
}
}
func (w *fileWatcher) seekToEnd() error {
file, err := os.Open(w.path)
if err != nil {

View File

@ -4,36 +4,41 @@ package source
import (
"encoding/json"
"fmt"
"logwisp/src/internal/tls"
"net"
"sync"
"sync/atomic"
"time"
"logwisp/src/internal/auth"
"logwisp/src/internal/config"
"logwisp/src/internal/core"
"logwisp/src/internal/limit"
"logwisp/src/internal/tls"
"github.com/lixenwraith/log"
"github.com/valyala/fasthttp"
)
// Receives log entries via HTTP POST requests
type HTTPSource struct {
config *config.HTTPSourceOptions
// Application
server *fasthttp.Server
subscribers []chan core.LogEntry
netLimiter *limit.NetLimiter
logger *log.Logger
// Runtime
mu sync.RWMutex
done chan struct{}
wg sync.WaitGroup
// Security
authenticator *auth.Authenticator
authFailures atomic.Uint64
authSuccesses atomic.Uint64
tlsManager *tls.Manager
// Statistics
totalEntries atomic.Uint64
@ -43,83 +48,53 @@ type HTTPSource struct {
lastEntryTime atomic.Value // time.Time
}
// Creates a new HTTP server source
func NewHTTPSource(opts *config.HTTPSourceOptions, logger *log.Logger) (*HTTPSource, error) {
// Validation done in config package
if opts == nil {
return nil, fmt.Errorf("HTTP source options cannot be nil")
}
h := &HTTPSource{
config: opts,
done: make(chan struct{}),
startTime: time.Now(),
logger: logger,
}
h.lastEntryTime.Store(time.Time{})
// Initialize net limiter if configured
if rl, ok := options["net_limit"].(map[string]any); ok {
if enabled, _ := rl["enabled"].(bool); enabled {
cfg := config.NetLimitConfig{
Enabled: true,
}
if rps, ok := toFloat(rl["requests_per_second"]); ok {
cfg.RequestsPerSecond = rps
}
if burst, ok := rl["burst_size"].(int64); ok {
cfg.BurstSize = burst
}
if limitBy, ok := rl["limit_by"].(string); ok {
cfg.LimitBy = limitBy
}
if respCode, ok := rl["response_code"].(int64); ok {
cfg.ResponseCode = respCode
}
if msg, ok := rl["response_message"].(string); ok {
cfg.ResponseMessage = msg
}
if maxPerIP, ok := rl["max_connections_per_ip"].(int64); ok {
cfg.MaxConnectionsPerIP = maxPerIP
}
h.netLimiter = limit.NewNetLimiter(cfg, logger)
}
if opts.NetLimit != nil && (opts.NetLimit.Enabled ||
len(opts.NetLimit.IPWhitelist) > 0 ||
len(opts.NetLimit.IPBlacklist) > 0) {
h.netLimiter = limit.NewNetLimiter(opts.NetLimit, logger)
}
// Initialize TLS manager if configured
if opts.TLS != nil && opts.TLS.Enabled {
tlsManager, err := tls.NewManager(opts.TLS, logger)
if err != nil {
return nil, fmt.Errorf("failed to create TLS manager: %w", err)
}
h.tlsManager = tlsManager
}
// Initialize authenticator if configured
if opts.Auth != nil && opts.Auth.Type != "none" && opts.Auth.Type != "" {
// Verify TLS is enabled for auth (validation should have caught this)
if h.tlsManager == nil {
return nil, fmt.Errorf("authentication requires TLS to be enabled")
}
authenticator, err := auth.NewAuthenticator(opts.Auth, logger)
if err != nil {
return nil, fmt.Errorf("failed to create authenticator: %w", err)
}
h.authenticator = authenticator
logger.Info("msg", "Authentication configured for HTTP source",
"component", "http_source",
"auth_type", opts.Auth.Type)
}
return h, nil
@ -129,51 +104,63 @@ func (h *HTTPSource) Subscribe() <-chan core.LogEntry {
h.mu.Lock()
defer h.mu.Unlock()
ch := make(chan core.LogEntry, h.config.BufferSize)
h.subscribers = append(h.subscribers, ch)
return ch
}
func (h *HTTPSource) Start() error {
h.server = &fasthttp.Server{
Handler: h.requestHandler,
DisableKeepalive: false,
StreamRequestBody: true,
CloseOnShutdown: true,
ReadTimeout: time.Duration(h.config.ReadTimeout) * time.Millisecond,
WriteTimeout: time.Duration(h.config.WriteTimeout) * time.Millisecond,
MaxRequestBodySize: int(h.config.MaxRequestBodySize),
}
addr := fmt.Sprintf(":%d", h.port)
// Use configured host and port
addr := fmt.Sprintf("%s:%d", h.config.Host, h.config.Port)
// Start server in background
h.wg.Add(1)
errChan := make(chan error, 1)
go func() {
defer h.wg.Done()
h.logger.Info("msg", "HTTP source server starting",
"component", "http_source",
"port", h.port,
"ingest_path", h.ingestPath,
"tls_enabled", h.tlsManager != nil)
"port", h.config.Port,
"ingest_path", h.config.IngestPath,
"tls_enabled", h.tlsManager != nil,
"auth_enabled", h.authenticator != nil)
var err error
// Check for TLS manager and start the appropriate server type
if h.tlsManager != nil {
// HTTPS server
h.server.TLSConfig = h.tlsManager.GetHTTPConfig()
err = h.server.ListenAndServeTLS(addr, h.config.TLS.CertFile, h.config.TLS.KeyFile)
} else {
// HTTP server
err = h.server.ListenAndServe(addr)
}
if err != nil {
h.logger.Error("msg", "HTTP source server failed",
"component", "http_source",
"port", h.port,
"port", h.config.Port,
"error", err)
errChan <- err
}
}()
// Wait briefly for server startup
select {
case err := <-errChan:
return fmt.Errorf("HTTP server failed to start: %w", err)
case <-time.After(250 * time.Millisecond):
return nil
}
}
func (h *HTTPSource) Stop() {
@ -213,6 +200,21 @@ func (h *HTTPSource) GetStats() SourceStats {
netLimitStats = h.netLimiter.GetStats()
}
var authStats map[string]any
if h.authenticator != nil {
authStats = map[string]any{
"enabled": true,
"type": h.config.Auth.Type,
"failures": h.authFailures.Load(),
"successes": h.authSuccesses.Load(),
}
}
var tlsStats map[string]any
if h.tlsManager != nil {
tlsStats = h.tlsManager.GetStats()
}
return SourceStats{
Type: "http",
TotalEntries: h.totalEntries.Load(),
@ -220,28 +222,21 @@ func (h *HTTPSource) GetStats() SourceStats {
StartTime: h.startTime,
LastEntryTime: lastEntry,
Details: map[string]any{
"port": h.port,
"ingest_path": h.ingestPath,
"host": h.config.Host,
"port": h.config.Port,
"path": h.config.IngestPath,
"invalid_entries": h.invalidEntries.Load(),
"net_limit": netLimitStats,
"auth": authStats,
"tls": tlsStats,
},
}
}
func (h *HTTPSource) requestHandler(ctx *fasthttp.RequestCtx) {
// Extract and validate IP
remoteAddr := ctx.RemoteAddr().String()
// 1. IPv6 check (early reject)
ipStr, _, err := net.SplitHostPort(remoteAddr)
if err == nil {
if ip := net.ParseIP(ipStr); ip != nil && ip.To4() == nil {
@ -254,7 +249,7 @@ func (h *HTTPSource) requestHandler(ctx *fasthttp.RequestCtx) {
}
}
// 2. Net limit check (early reject)
if h.netLimiter != nil {
if allowed, statusCode, message := h.netLimiter.CheckHTTP(remoteAddr); !allowed {
ctx.SetStatusCode(int(statusCode))
@ -267,9 +262,72 @@ func (h *HTTPSource) requestHandler(ctx *fasthttp.RequestCtx) {
}
}
// 3. Check TLS requirement for auth
if h.authenticator != nil {
isTLS := ctx.IsTLS() || h.tlsManager != nil
if !isTLS {
ctx.SetStatusCode(fasthttp.StatusForbidden)
ctx.SetContentType("application/json")
json.NewEncoder(ctx).Encode(map[string]string{
"error": "TLS required for authentication",
"hint": "Use HTTPS to submit authenticated requests",
})
return
}
// Authenticate request
authHeader := string(ctx.Request.Header.Peek("Authorization"))
session, err := h.authenticator.AuthenticateHTTP(authHeader, remoteAddr)
if err != nil {
h.authFailures.Add(1)
h.logger.Warn("msg", "Authentication failed",
"component", "http_source",
"remote_addr", remoteAddr,
"error", err)
ctx.SetStatusCode(fasthttp.StatusUnauthorized)
if h.config.Auth.Type == "basic" && h.config.Auth.Basic != nil && h.config.Auth.Basic.Realm != "" {
ctx.Response.Header.Set("WWW-Authenticate", fmt.Sprintf(`Basic realm="%s"`, h.config.Auth.Basic.Realm))
}
ctx.SetContentType("application/json")
json.NewEncoder(ctx).Encode(map[string]string{
"error": "Authentication failed",
})
return
}
h.authSuccesses.Add(1)
_ = session // Session can be used for audit logging
}
// 4. Path check
path := string(ctx.Path())
if path != h.config.IngestPath {
ctx.SetStatusCode(fasthttp.StatusNotFound)
ctx.SetContentType("application/json")
json.NewEncoder(ctx).Encode(map[string]string{
"error": "Not Found",
"hint": fmt.Sprintf("POST logs to %s", h.config.IngestPath),
})
return
}
// 5. Method check (only accepts POST)
if string(ctx.Method()) != "POST" {
ctx.SetStatusCode(fasthttp.StatusMethodNotAllowed)
ctx.SetContentType("application/json")
ctx.Response.Header.Set("Allow", "POST")
json.NewEncoder(ctx).Encode(map[string]string{
"error": "Method not allowed",
"hint": "Use POST to submit logs",
})
return
}
// 6. Process log entry
body := ctx.PostBody()
if len(body) == 0 {
h.invalidEntries.Add(1)
ctx.SetStatusCode(fasthttp.StatusBadRequest)
ctx.SetContentType("application/json")
json.NewEncoder(ctx).Encode(map[string]string{
@ -278,32 +336,34 @@ func (h *HTTPSource) requestHandler(ctx *fasthttp.RequestCtx) {
return
}
var entry core.LogEntry
if err := json.Unmarshal(body, &entry); err != nil {
h.invalidEntries.Add(1)
ctx.SetStatusCode(fasthttp.StatusBadRequest)
ctx.SetContentType("application/json")
json.NewEncoder(ctx).Encode(map[string]string{
"error": fmt.Sprintf("Invalid log format: %v", err),
"error": fmt.Sprintf("Invalid JSON: %v", err),
})
return
}
// Set defaults
if entry.Time.IsZero() {
entry.Time = time.Now()
}
if entry.Source == "" {
entry.Source = "http"
}
entry.RawSize = int64(len(body))
// Publish to subscribers
h.publish(entry)
// Success response
ctx.SetStatusCode(fasthttp.StatusAccepted)
ctx.SetContentType("application/json")
json.NewEncoder(ctx).Encode(map[string]string{
"status": "accepted",
})
}
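For reference, a minimal client sketch against the ingest endpoint. The port, path, and JSON field names here are assumptions for illustration; they depend on the pipeline configuration and on how `core.LogEntry` is tagged:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical endpoint and entry shape; adjust to your configuration.
	body := []byte(`{"source":"demo","message":"hello from the HTTP source"}`)
	resp, err := http.Post("http://localhost:8080/ingest", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // the handler above replies 202 Accepted on success
}
```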
@ -331,7 +391,8 @@ func (h *HTTPSource) parseEntries(body []byte) ([]core.LogEntry, error) {
// Try to parse as JSON array
var array []core.LogEntry
if err := json.Unmarshal(body, &array); err == nil {
// For array, divide total size by entry count as approximation
// Accurate calculation adds too much complexity and processing
approxSizePerEntry := int64(len(body) / len(array))
for i, entry := range array {
if entry.Message == "" {
@ -343,7 +404,6 @@ func (h *HTTPSource) parseEntries(body []byte) ([]core.LogEntry, error) {
if entry.Source == "" {
array[i].Source = "http"
}
array[i].RawSize = approxSizePerEntry
}
return array, nil
@ -382,32 +442,25 @@ func (h *HTTPSource) parseEntries(body []byte) ([]core.LogEntry, error) {
return entries, nil
}
func (h *HTTPSource) publish(entry core.LogEntry) {
h.mu.RLock()
defer h.mu.RUnlock()
h.totalEntries.Add(1)
h.lastEntryTime.Store(entry.Time)
for _, ch := range h.subscribers {
select {
case ch <- entry:
default:
h.droppedEntries.Add(1)
h.logger.Debug("msg", "Dropped log entry - subscriber buffer full",
"component", "http_source")
}
}
}
// Splits bytes into lines, handling both \n and \r\n
func splitLines(data []byte) [][]byte {
var lines [][]byte
start := 0
@ -430,18 +483,4 @@ func splitLines(data []byte) [][]byte {
}
return lines
}
// Helper function for type conversion
func toFloat(v any) (float64, bool) {
switch val := v.(type) {
case float64:
return val, true
case int:
return float64(val), true
case int64:
return float64(val), true
default:
return 0, false
}
}

View File

@ -7,22 +7,22 @@ import (
"logwisp/src/internal/core"
)
// Represents an input data stream
type Source interface {
// Returns a channel that receives log entries
Subscribe() <-chan core.LogEntry
// Begins reading from the source
Start() error
// Gracefully shuts down the source
Stop()
// Returns source statistics
GetStats() SourceStats
}
// Contains statistics about a source
type SourceStats struct {
Type string
TotalEntries uint64

View File

@ -7,13 +7,15 @@ import (
"sync/atomic"
"time"
"logwisp/src/internal/config"
"logwisp/src/internal/core"
"github.com/lixenwraith/log"
)
// Reads log entries from standard input
type StdinSource struct {
config *config.StdinSourceOptions
subscribers []chan core.LogEntry
done chan struct{}
totalEntries atomic.Uint64
@ -23,19 +25,26 @@ type StdinSource struct {
logger *log.Logger
}
func NewStdinSource(opts *config.StdinSourceOptions, logger *log.Logger) (*StdinSource, error) {
if opts == nil {
opts = &config.StdinSourceOptions{
BufferSize: 1000, // Default
}
}
source := &StdinSource{
config: opts,
subscribers: make([]chan core.LogEntry, 0),
done: make(chan struct{}),
logger: logger,
startTime: time.Now(),
}
source.lastEntryTime.Store(time.Time{})
return source, nil
}
func (s *StdinSource) Subscribe() <-chan core.LogEntry {
ch := make(chan core.LogEntry, s.config.BufferSize)
s.subscribers = append(s.subscribers, ch)
return ch
}

View File

@ -5,9 +5,9 @@ import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"net"
"strings"
"sync"
"sync/atomic"
"time"
@ -16,7 +16,6 @@ import (
"logwisp/src/internal/config"
"logwisp/src/internal/core"
"logwisp/src/internal/limit"
"logwisp/src/internal/tls"
"github.com/lixenwraith/log"
"github.com/lixenwraith/log/compat"
@ -24,27 +23,25 @@ import (
)
const (
maxClientBufferSize = 10 * 1024 * 1024 // 10MB max per client
maxLineLength = 1 * 1024 * 1024 // 1MB max per log line
)
// Receives log entries via TCP connections
type TCPSource struct {
config *config.TCPSourceOptions
server *tcpSourceServer
subscribers []chan core.LogEntry
mu sync.RWMutex
done chan struct{}
engine *gnet.Engine
engineMu sync.Mutex
wg sync.WaitGroup
authenticator *auth.Authenticator
netLimiter *limit.NetLimiter
logger *log.Logger
scramManager *auth.ScramManager
scramProtocolHandler *auth.ScramProtocolHandler
// Statistics
totalEntries atomic.Uint64
@ -53,80 +50,41 @@ type TCPSource struct {
activeConns atomic.Int64
startTime time.Time
lastEntryTime atomic.Value // time.Time
authFailures atomic.Uint64
authSuccesses atomic.Uint64
}
// Creates a new TCP server source
func NewTCPSource(opts *config.TCPSourceOptions, logger *log.Logger) (*TCPSource, error) {
// Accept typed config - validation done in config package
if opts == nil {
return nil, fmt.Errorf("TCP source options cannot be nil")
}
t := &TCPSource{
config: opts,
done: make(chan struct{}),
startTime: time.Now(),
logger: logger,
}
t.lastEntryTime.Store(time.Time{})
// Initialize net limiter if configured
if rl, ok := options["net_limit"].(map[string]any); ok {
if enabled, _ := rl["enabled"].(bool); enabled {
cfg := config.NetLimitConfig{
Enabled: true,
}
if rps, ok := toFloat(rl["requests_per_second"]); ok {
cfg.RequestsPerSecond = rps
}
if burst, ok := rl["burst_size"].(int64); ok {
cfg.BurstSize = burst
}
if limitBy, ok := rl["limit_by"].(string); ok {
cfg.LimitBy = limitBy
}
if maxPerIP, ok := rl["max_connections_per_ip"].(int64); ok {
cfg.MaxConnectionsPerIP = maxPerIP
}
if maxTotal, ok := rl["max_total_connections"].(int64); ok {
cfg.MaxTotalConnections = maxTotal
}
t.netLimiter = limit.NewNetLimiter(cfg, logger)
}
if opts.NetLimit != nil && (opts.NetLimit.Enabled ||
len(opts.NetLimit.IPWhitelist) > 0 ||
len(opts.NetLimit.IPBlacklist) > 0) {
t.netLimiter = limit.NewNetLimiter(opts.NetLimit, logger)
}
// Initialize SCRAM
if opts.Auth != nil && opts.Auth.Type == "scram" && opts.Auth.Scram != nil {
t.scramManager = auth.NewScramManager(opts.Auth.Scram)
t.scramProtocolHandler = auth.NewScramProtocolHandler(t.scramManager, logger)
logger.Info("msg", "SCRAM authentication configured for TCP source",
"component", "tcp_source",
"users", len(opts.Auth.Scram.Users))
} else if opts.Auth != nil && opts.Auth.Type != "none" && opts.Auth.Type != "" {
return nil, fmt.Errorf("TCP source only supports 'none' or 'scram' auth")
}
return t, nil
@ -136,7 +94,7 @@ func (t *TCPSource) Subscribe() <-chan core.LogEntry {
t.mu.Lock()
defer t.mu.Unlock()
ch := make(chan core.LogEntry, t.config.BufferSize)
t.subscribers = append(t.subscribers, ch)
return ch
}
@ -147,7 +105,8 @@ func (t *TCPSource) Start() error {
clients: make(map[gnet.Conn]*tcpClient),
}
addr := fmt.Sprintf("tcp://:%d", t.port)
// Use configured host and port
addr := fmt.Sprintf("tcp://%s:%d", t.config.Host, t.config.Port)
// Create a gnet adapter using the existing logger instance
gnetLogger := compat.NewGnetAdapter(t.logger)
@ -159,18 +118,19 @@ func (t *TCPSource) Start() error {
defer t.wg.Done()
t.logger.Info("msg", "TCP source server starting",
"component", "tcp_source",
"port", t.port,
"tls_enabled", t.tlsManager != nil)
"port", t.config.Port,
"auth_enabled", t.authenticator != nil)
err := gnet.Run(t.server, addr,
gnet.WithLogger(gnetLogger),
gnet.WithMulticore(true),
gnet.WithReusePort(true),
gnet.WithTCPKeepAlive(time.Duration(t.config.KeepAlivePeriod)*time.Millisecond),
)
if err != nil {
t.logger.Error("msg", "TCP source server failed",
"component", "tcp_source",
"port", t.port,
"port", t.config.Port,
"error", err)
}
errChan <- err
@ -185,7 +145,7 @@ func (t *TCPSource) Start() error {
return err
case <-time.After(100 * time.Millisecond):
// Server started successfully
t.logger.Info("msg", "TCP server started", "port", t.port)
t.logger.Info("msg", "TCP server started", "port", t.config.Port)
return nil
}
}
@ -230,6 +190,16 @@ func (t *TCPSource) GetStats() SourceStats {
netLimitStats = t.netLimiter.GetStats()
}
var authStats map[string]any
if t.authenticator != nil {
authStats = map[string]any{
"enabled": true,
"type": t.config.Auth.Type,
"failures": t.authFailures.Load(),
"successes": t.authSuccesses.Load(),
}
}
return SourceStats{
Type: "tcp",
TotalEntries: t.totalEntries.Load(),
@ -237,52 +207,44 @@ func (t *TCPSource) GetStats() SourceStats {
StartTime: t.startTime,
LastEntryTime: lastEntry,
Details: map[string]any{
"port": t.port,
"port": t.config.Port,
"active_connections": t.activeConns.Load(),
"invalid_entries": t.invalidEntries.Load(),
"net_limit": netLimitStats,
"auth": authStats,
},
}
}
func (t *TCPSource) publish(entry core.LogEntry) {
t.mu.RLock()
defer t.mu.RUnlock()
t.totalEntries.Add(1)
t.lastEntryTime.Store(entry.Time)
for _, ch := range t.subscribers {
select {
case ch <- entry:
default:
t.droppedEntries.Add(1)
t.logger.Debug("msg", "Dropped log entry - subscriber buffer full",
"component", "tcp_source")
}
}
}
// Represents a connected TCP client
type tcpClient struct {
conn gnet.Conn
buffer *bytes.Buffer
authenticated bool
authTimeout time.Time
session *auth.Session
maxBufferSeen int
}
// Handles gnet events
type tcpSourceServer struct {
gnet.BuiltinEventEngine
source *TCPSource
@ -298,7 +260,7 @@ func (s *tcpSourceServer) OnBoot(eng gnet.Engine) gnet.Action {
s.source.logger.Debug("msg", "TCP source server booted",
"component", "tcp_source",
"port", s.source.port)
"port", s.source.config.Port)
return gnet.None
}
@ -319,6 +281,16 @@ func (s *tcpSourceServer) OnOpen(c gnet.Conn) (out []byte, action gnet.Action) {
return nil, gnet.Close
}
// Check if connection is allowed
ip := tcpAddr.IP
if ip.To4() == nil {
// Reject IPv6
s.source.logger.Warn("msg", "IPv6 connection rejected",
"component", "tcp_source",
"remote_addr", remoteAddr)
return []byte("IPv4-only (IPv6 not supported)\n"), gnet.Close
}
if !s.source.netLimiter.CheckTCP(tcpAddr) {
s.source.logger.Warn("msg", "TCP connection net limited",
"component", "tcp_source",
@ -327,65 +299,66 @@ func (s *tcpSourceServer) OnOpen(c gnet.Conn) (out []byte, action gnet.Action) {
}
// Track connection
// s.source.netLimiter.AddConnection(remoteAddr)
if !s.source.netLimiter.TrackConnection(ip.String(), "", "") {
s.source.logger.Warn("msg", "TCP connection limit exceeded",
"component", "tcp_source",
"remote_addr", remoteAddr)
return nil, gnet.Close
}
}
// Create client state
client := &tcpClient{
conn: c,
buffer: bytes.NewBuffer(nil),
authenticated: s.source.authenticator == nil, // No auth = auto authenticated
}
if s.source.authenticator != nil {
// Set auth timeout
client.authTimeout = time.Now().Add(10 * time.Second)
// Send auth challenge for SCRAM
if s.source.config.Auth.Type == "scram" {
out = []byte("AUTH_REQUIRED\n")
}
}
s.mu.Lock()
s.clients[c] = client
s.mu.Unlock()
s.source.activeConns.Add(1)
s.source.logger.Debug("msg", "TCP connection opened",
"component", "tcp_source",
"remote_addr", remoteAddr,
"active_connections", newCount,
"tls_enabled", s.source.tlsManager != nil)
"auth_enabled", s.source.authenticator != nil)
return out, gnet.None
}
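The IPv4-only gate in `OnOpen` hinges on `net.IP.To4`, which returns nil for addresses that cannot be represented as IPv4. A quick illustration:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	fmt.Println(net.ParseIP("192.0.2.1").To4() != nil)   // true: accepted
	fmt.Println(net.ParseIP("2001:db8::1").To4() != nil) // false: rejected
}
```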
func (s *tcpSourceServer) OnClose(c gnet.Conn, err error) gnet.Action {
remoteAddr := c.RemoteAddr().String()
// Untrack connection
if s.source.netLimiter != nil {
if tcpAddr, err := net.ResolveTCPAddr("tcp", remoteAddr); err == nil {
s.source.netLimiter.ReleaseConnection(tcpAddr.IP.String(), "", "")
// s.source.netLimiter.RemoveConnection(remoteAddr)
}
}
// Remove client state
s.mu.Lock()
delete(s.clients, c)
s.mu.Unlock()
newConnectionCount := s.source.activeConns.Add(-1)
s.source.logger.Debug("msg", "TCP connection closed",
"component", "tcp_source",
"remote_addr", remoteAddr,
"active_connections", newCount,
"active_connections", newConnectionCount,
"error", err)
return gnet.None
}
@ -408,79 +381,85 @@ func (s *tcpSourceServer) OnTraffic(c gnet.Conn) gnet.Action {
return gnet.Close
}
// SCRAM Authentication phase
if !client.authenticated && s.source.scramManager != nil {
// Check auth timeout
if !client.authTimeout.IsZero() && time.Now().After(client.authTimeout) {
s.source.logger.Warn("msg", "Authentication timeout",
"component", "tcp_source",
"remote_addr", c.RemoteAddr().String())
s.source.authFailures.Add(1)
c.AsyncWrite([]byte("AUTH_TIMEOUT\n"), nil)
return gnet.Close
}
if len(data) == 0 {
return gnet.None
}
client.buffer.Write(data)
// Use centralized SCRAM protocol handler
if s.source.scramProtocolHandler == nil {
s.source.scramProtocolHandler = auth.NewScramProtocolHandler(s.source.scramManager, s.source.logger)
}
// Look for complete auth line
for {
idx := bytes.IndexByte(client.buffer.Bytes(), '\n')
if idx < 0 {
break
}
line := client.buffer.Bytes()[:idx]
client.buffer.Next(idx + 1)
// Process auth message through handler
authenticated, session, err := s.source.scramProtocolHandler.HandleAuthMessage(line, c)
if err != nil {
s.source.logger.Warn("msg", "SCRAM authentication failed",
"component", "tcp_source",
"remote_addr", c.RemoteAddr().String(),
"error", err)
if strings.Contains(err.Error(), "unknown command") {
return gnet.Close
}
// Continue for other errors (might be multi-step auth)
}
if authenticated && session != nil {
// Authentication successful
s.mu.Lock()
client.authenticated = true
client.session = session
s.mu.Unlock()
s.source.logger.Info("msg", "Client authenticated via SCRAM",
"component", "tcp_source",
"remote_addr", c.RemoteAddr().String(),
"session_id", session.ID)
// Clear auth buffer
client.buffer.Reset()
break
}
}
return gnet.None
}
return s.processLogData(c, client, data)
}
func (s *tcpSourceServer) processLogData(c gnet.Conn, client *tcpClient, data []byte) gnet.Action {
// Check if appending the new data would exceed the client buffer limit.
if client.buffer.Len()+len(data) > maxClientBufferSize {
s.source.logger.Warn("msg", "Client buffer limit exceeded",
s.source.logger.Warn("msg", "Client buffer limit exceeded, closing connection.",
"component", "tcp_source",
"remote_addr", c.RemoteAddr().String(),
"buffer_size", client.buffer.Len(),
"incoming_size", len(data))
"incoming_size", len(data),
"limit", maxClientBufferSize)
s.source.invalidEntries.Add(1)
return gnet.Close
}
@ -563,14 +542,4 @@ func (s *tcpSourceServer) OnTraffic(c gnet.Conn) gnet.Action {
}
return gnet.None
}
// noopLogger implements gnet's Logger interface but discards everything
// type noopLogger struct{}
// func (n noopLogger) Debugf(format string, args ...any) {}
// func (n noopLogger) Infof(format string, args ...any) {}
// func (n noopLogger) Warnf(format string, args ...any) {}
// func (n noopLogger) Errorf(format string, args ...any) {}
// func (n noopLogger) Fatalf(format string, args ...any) {}
// Usage: gnet.Run(..., gnet.WithLogger(noopLogger{}), ...)
}

View File

@ -1,341 +0,0 @@
// FILE: src/internal/tls/gnet_bridge.go
package tls
import (
"crypto/tls"
"errors"
"io"
"net"
"sync"
"sync/atomic"
"time"
"github.com/panjf2000/gnet/v2"
)
var (
ErrTLSBackpressure = errors.New("TLS processing backpressure")
ErrConnectionClosed = errors.New("connection closed")
ErrPlaintextBufferExceeded = errors.New("plaintext buffer size exceeded")
)
// Maximum plaintext buffer size to prevent memory exhaustion
const maxPlaintextBufferSize = 32 * 1024 * 1024 // 32MB
// GNetTLSConn bridges gnet.Conn with crypto/tls via io.Pipe
type GNetTLSConn struct {
gnetConn gnet.Conn
tlsConn *tls.Conn
config *tls.Config
// Buffered channels for non-blocking operation
incomingCipher chan []byte // Network → TLS (encrypted)
outgoingCipher chan []byte // TLS → Network (encrypted)
// Handshake state
handshakeOnce sync.Once
handshakeDone chan struct{}
handshakeErr error
// Decrypted data buffer
plainBuf []byte
plainMu sync.Mutex
// Lifecycle
closed atomic.Bool
closeOnce sync.Once
wg sync.WaitGroup
// Error tracking
lastErr atomic.Value // error
logger interface{ Warn(args ...any) } // Minimal logger interface
}
// NewServerConn creates a server-side TLS bridge
func NewServerConn(gnetConn gnet.Conn, config *tls.Config) *GNetTLSConn {
tc := &GNetTLSConn{
gnetConn: gnetConn,
config: config,
handshakeDone: make(chan struct{}),
// Buffered channels sized for throughput without blocking
incomingCipher: make(chan []byte, 128), // 128 packets buffer
outgoingCipher: make(chan []byte, 128),
plainBuf: make([]byte, 0, 65536), // 64KB initial capacity
}
// Create TLS conn with channel-based transport
rawConn := &channelConn{
incoming: tc.incomingCipher,
outgoing: tc.outgoingCipher,
localAddr: gnetConn.LocalAddr(),
remoteAddr: gnetConn.RemoteAddr(),
tc: tc,
}
tc.tlsConn = tls.Server(rawConn, config)
// Start pump goroutines
tc.wg.Add(2)
go tc.pumpCipherToNetwork()
go tc.pumpPlaintextFromTLS()
return tc
}
// NewClientConn creates a client-side TLS bridge (similar changes)
func NewClientConn(gnetConn gnet.Conn, config *tls.Config, serverName string) *GNetTLSConn {
tc := &GNetTLSConn{
gnetConn: gnetConn,
config: config,
handshakeDone: make(chan struct{}),
incomingCipher: make(chan []byte, 128),
outgoingCipher: make(chan []byte, 128),
plainBuf: make([]byte, 0, 65536),
}
if config.ServerName == "" {
config = config.Clone()
config.ServerName = serverName
}
rawConn := &channelConn{
incoming: tc.incomingCipher,
outgoing: tc.outgoingCipher,
localAddr: gnetConn.LocalAddr(),
remoteAddr: gnetConn.RemoteAddr(),
tc: tc,
}
tc.tlsConn = tls.Client(rawConn, config)
tc.wg.Add(2)
go tc.pumpCipherToNetwork()
go tc.pumpPlaintextFromTLS()
return tc
}
// ProcessIncoming feeds encrypted data from network into TLS engine (non-blocking)
func (tc *GNetTLSConn) ProcessIncoming(encryptedData []byte) error {
if tc.closed.Load() {
return ErrConnectionClosed
}
// Non-blocking send with backpressure detection
select {
case tc.incomingCipher <- encryptedData:
return nil
default:
// Channel full - TLS processing can't keep up
// Drop connection under backpressure vs blocking event loop
if tc.logger != nil {
tc.logger.Warn("msg", "TLS backpressure, dropping data",
"remote_addr", tc.gnetConn.RemoteAddr())
}
return ErrTLSBackpressure
}
}
// pumpCipherToNetwork sends TLS-encrypted data to network
func (tc *GNetTLSConn) pumpCipherToNetwork() {
defer tc.wg.Done()
for {
select {
case data, ok := <-tc.outgoingCipher:
if !ok {
return
}
// Send to network
if err := tc.gnetConn.AsyncWrite(data, nil); err != nil {
tc.lastErr.Store(err)
tc.Close()
return
}
case <-time.After(30 * time.Second):
// Keepalive/timeout check
if tc.closed.Load() {
return
}
}
}
}
// pumpPlaintextFromTLS reads decrypted data from TLS
func (tc *GNetTLSConn) pumpPlaintextFromTLS() {
defer tc.wg.Done()
buf := make([]byte, 32768) // 32KB read buffer
for {
n, err := tc.tlsConn.Read(buf)
if n > 0 {
tc.plainMu.Lock()
// Check buffer size limit before appending to prevent memory exhaustion
if len(tc.plainBuf)+n > maxPlaintextBufferSize {
tc.plainMu.Unlock()
// Log warning about buffer limit
if tc.logger != nil {
tc.logger.Warn("msg", "Plaintext buffer limit exceeded, closing connection",
"remote_addr", tc.gnetConn.RemoteAddr(),
"buffer_size", len(tc.plainBuf),
"incoming_size", n,
"limit", maxPlaintextBufferSize)
}
// Store error and close connection
tc.lastErr.Store(ErrPlaintextBufferExceeded)
tc.Close()
return
}
tc.plainBuf = append(tc.plainBuf, buf[:n]...)
tc.plainMu.Unlock()
}
if err != nil {
if err != io.EOF {
tc.lastErr.Store(err)
}
tc.Close()
return
}
}
}
// Read returns available decrypted plaintext (non-blocking)
func (tc *GNetTLSConn) Read() []byte {
tc.plainMu.Lock()
defer tc.plainMu.Unlock()
if len(tc.plainBuf) == 0 {
return nil
}
// Atomic buffer swap under mutex protection to prevent race condition
data := tc.plainBuf
tc.plainBuf = make([]byte, 0, cap(tc.plainBuf))
return data
}
// Write encrypts plaintext and queues for network transmission
func (tc *GNetTLSConn) Write(plaintext []byte) (int, error) {
if tc.closed.Load() {
return 0, ErrConnectionClosed
}
if !tc.IsHandshakeDone() {
return 0, errors.New("handshake not complete")
}
return tc.tlsConn.Write(plaintext)
}
// Handshake initiates TLS handshake asynchronously
func (tc *GNetTLSConn) Handshake() {
tc.handshakeOnce.Do(func() {
go func() {
tc.handshakeErr = tc.tlsConn.Handshake()
close(tc.handshakeDone)
}()
})
}
// IsHandshakeDone checks if handshake is complete
func (tc *GNetTLSConn) IsHandshakeDone() bool {
select {
case <-tc.handshakeDone:
return true
default:
return false
}
}
// HandshakeComplete waits for handshake completion
func (tc *GNetTLSConn) HandshakeComplete() (<-chan struct{}, error) {
<-tc.handshakeDone
return tc.handshakeDone, tc.handshakeErr
}
// Close shuts down the bridge
func (tc *GNetTLSConn) Close() error {
tc.closeOnce.Do(func() {
tc.closed.Store(true)
// Close TLS connection
tc.tlsConn.Close()
// Close channels to stop pumps
close(tc.incomingCipher)
close(tc.outgoingCipher)
})
// Wait for pumps to finish
tc.wg.Wait()
return nil
}
// GetConnectionState returns TLS connection state
func (tc *GNetTLSConn) GetConnectionState() tls.ConnectionState {
return tc.tlsConn.ConnectionState()
}
// GetError returns last error
func (tc *GNetTLSConn) GetError() error {
if err, ok := tc.lastErr.Load().(error); ok {
return err
}
return nil
}
// channelConn implements net.Conn over channels
type channelConn struct {
incoming <-chan []byte
outgoing chan<- []byte
localAddr net.Addr
remoteAddr net.Addr
tc *GNetTLSConn
readBuf []byte
}
func (c *channelConn) Read(b []byte) (int, error) {
// Use buffered read for efficiency
if len(c.readBuf) > 0 {
n := copy(b, c.readBuf)
c.readBuf = c.readBuf[n:]
return n, nil
}
// Wait for new data
select {
case data, ok := <-c.incoming:
if !ok {
return 0, io.EOF
}
n := copy(b, data)
if n < len(data) {
c.readBuf = data[n:] // Buffer remainder
}
return n, nil
case <-time.After(30 * time.Second):
return 0, errors.New("read timeout")
}
}
func (c *channelConn) Write(b []byte) (int, error) {
if c.tc.closed.Load() {
return 0, ErrConnectionClosed
}
// Make a copy since TLS may hold reference
data := make([]byte, len(b))
copy(data, b)
select {
case c.outgoing <- data:
return len(b), nil
case <-time.After(5 * time.Second):
return 0, errors.New("write timeout")
}
}
func (c *channelConn) Close() error { return nil }
func (c *channelConn) LocalAddr() net.Addr { return c.localAddr }
func (c *channelConn) RemoteAddr() net.Addr { return c.remoteAddr }
func (c *channelConn) SetDeadline(t time.Time) error { return nil }
func (c *channelConn) SetReadDeadline(t time.Time) error { return nil }
func (c *channelConn) SetWriteDeadline(t time.Time) error { return nil }

View File

@ -13,15 +13,15 @@ import (
"github.com/lixenwraith/log"
)
// Handles TLS configuration for servers
type Manager struct {
config *config.TLSConfig
tlsConfig *tls.Config
logger *log.Logger
}
// Creates a TLS configuration from TLS config
func NewManager(cfg *config.TLSConfig, logger *log.Logger) (*Manager, error) {
if cfg == nil || !cfg.Enabled {
return nil, nil
}
@ -83,7 +83,6 @@ func NewManager(cfg *config.SSLConfig, logger *log.Logger) (*Manager, error) {
}
// Set secure defaults
m.tlsConfig.SessionTicketsDisabled = false
m.tlsConfig.Renegotiation = tls.RenegotiateNever
@ -97,7 +96,7 @@ func NewManager(cfg *config.SSLConfig, logger *log.Logger) (*Manager, error) {
return m, nil
}
// Returns the TLS configuration
func (m *Manager) GetConfig() *tls.Config {
if m == nil {
return nil
@ -106,7 +105,7 @@ func (m *Manager) GetConfig() *tls.Config {
return m.tlsConfig.Clone()
}
// Returns TLS config suitable for HTTP servers
func (m *Manager) GetHTTPConfig() *tls.Config {
if m == nil {
return nil
@ -118,19 +117,7 @@ func (m *Manager) GetHTTPConfig() *tls.Config {
return cfg
}
// Validates a client certificate for mTLS
func (m *Manager) ValidateClientCert(rawCerts [][]byte) error {
if m == nil || !m.config.ClientAuth {
return nil
@ -175,6 +162,21 @@ func (m *Manager) ValidateClientCert(rawCerts [][]byte) error {
return nil
}
// Returns TLS statistics
func (m *Manager) GetStats() map[string]any {
if m == nil {
return map[string]any{"enabled": false}
}
return map[string]any{
"enabled": true,
"min_version": tlsVersionString(m.tlsConfig.MinVersion),
"max_version": tlsVersionString(m.tlsConfig.MaxVersion),
"client_auth": m.config.ClientAuth,
"cipher_suites": len(m.tlsConfig.CipherSuites),
}
}
func parseTLSVersion(version string, defaultVersion uint16) uint16 {
switch strings.ToUpper(version) {
case "TLS1.0", "TLS10":
@ -218,21 +220,6 @@ func parseCipherSuites(suites string) []uint16 {
return result
}
func tlsVersionString(version uint16) string {
switch version {
case tls.VersionTLS10:

View File

@ -10,7 +10,7 @@ var (
BuildTime = "unknown"
)
// Returns a formatted version string
func String() string {
if Version == "dev" {
return fmt.Sprintf("dev (commit: %s, built: %s)", GitCommit, BuildTime)
@ -18,7 +18,7 @@ func String() string {
return fmt.Sprintf("%s (commit: %s, built: %s)", Version, GitCommit, BuildTime)
}
// Returns just the version tag
func Short() string {
return Version
}

12
test/README.md Normal file
View File

@ -0,0 +1,12 @@
### Usage:
- Copy the logwisp executable to the test folder (to build it, run `make build` in the logwisp top-level directory).
- Run the test script for each scenario.
### Notes:
- The tests create configuration files and log files. Most tests set logging to debug level and do not clean up the temporary files they create in the current working directory.
- Some tests may need to be run on different hosts (containers can be used).