v0.7.1 default config and documentation update, refactor

This commit is contained in:
2025-10-10 13:03:03 -04:00
parent 89e6a4ea05
commit 33bf36f27e
34 changed files with 2877 additions and 2794 deletions

View File

@ -6,7 +6,7 @@
<td> <td>
<h1>LogWisp</h1> <h1>LogWisp</h1>
<p> <p>
<a href="https://golang.org"><img src="https://img.shields.io/badge/Go-1.24-00ADD8?style=flat&logo=go" alt="Go"></a> <a href="https://golang.org"><img src="https://img.shields.io/badge/Go-1.25-00ADD8?style=flat&logo=go" alt="Go"></a>
<a href="https://opensource.org/licenses/BSD-3-Clause"><img src="https://img.shields.io/badge/License-BSD_3--Clause-blue.svg" alt="License"></a> <a href="https://opensource.org/licenses/BSD-3-Clause"><img src="https://img.shields.io/badge/License-BSD_3--Clause-blue.svg" alt="License"></a>
<a href="doc/"><img src="https://img.shields.io/badge/Docs-Available-green.svg" alt="Documentation"></a> <a href="doc/"><img src="https://img.shields.io/badge/Docs-Available-green.svg" alt="Documentation"></a>
</p> </p>
@ -14,41 +14,81 @@
</tr> </tr>
</table> </table>
**Flexible log monitoring with real-time streaming over HTTP/SSE and TCP** # LogWisp
LogWisp watches log files and streams updates to connected clients in real-time using a pipeline architecture: **sources → filters → sinks**. Perfect for monitoring multiple applications, filtering noise, and routing logs to multiple destinations. A high-performance, pipeline-based log transport and processing system built in Go. LogWisp provides flexible log collection, filtering, formatting, and distribution with enterprise-grade security and reliability features.
## 🚀 Quick Start ## Features
```bash ### Core Capabilities
# Install - **Pipeline Architecture**: Independent processing pipelines with source → filter → format → sink flow.
git clone https://github.com/lixenwraith/logwisp.git - **Multiple Input Sources**: Directory monitoring, stdin, HTTP, TCP.
cd logwisp - **Flexible Output Sinks**: Console, file, HTTP SSE, TCP streaming, HTTP/TCP forwarding.
make install - **Real-time Processing**: Sub-millisecond latency with configurable buffering.
- **Hot Configuration Reload**: Update pipelines without service restart.
# Run with defaults (monitors *.log in current directory) ### Data Processing
logwisp - **Pattern-based Filtering**: Chainable include/exclude filters with regex support.
- **Multiple Formatters**: Raw, JSON, and template-based text formatting.
- **Rate Limiting**: Pipeline rate control.
### Security & Reliability
- **Authentication**: Basic, token, and mTLS for HTTPS; SCRAM for TCP.
- **TLS Encryption**: TLS 1.2/1.3 support for HTTP connections.
- **Access Control**: IP whitelisting/blacklisting, connection limits.
- **Automatic Reconnection**: Resilient client connections with exponential backoff.
- **File Rotation**: Size-based rotation with retention policies.
### Operational Features
- **Status Monitoring**: Real-time statistics and health endpoints.
- **Signal Handling**: Graceful shutdown and configuration reload via signals.
- **Background Mode**: Daemon operation with proper signal handling.
- **Quiet Mode**: Silent operation for automated deployments.
## Documentation
Available in the `doc/` directory.
- [Installation Guide](installation.md) - Platform setup and service configuration
- [Architecture Overview](architecture.md) - System design and component interaction
- [Configuration Reference](configuration.md) - TOML structure and configuration methods
- [Input Sources](sources.md) - Available source types and configurations
- [Output Sinks](sinks.md) - Sink types and output options
- [Filters](filters.md) - Pattern-based log filtering
- [Formatters](formatters.md) - Log formatting and transformation
- [Authentication](authentication.md) - Security configurations and auth methods
- [Networking](networking.md) - TLS, rate limiting, and network features
- [Command Line Interface](cli.md) - CLI flags and subcommands
- [Operations Guide](operations.md) - Running and maintaining LogWisp
## Quick Start
Install LogWisp and create a basic configuration:
```toml
[[pipelines]]
name = "default"
[[pipelines.sources]]
type = "directory"
[pipelines.sources.directory]
path = "./"
pattern = "*.log"
[[pipelines.sinks]]
type = "console"
[pipelines.sinks.console]
target = "stdout"
``` ```
## ✨ Key Features Run with: `logwisp -c config.toml`
- **🔧 Pipeline Architecture** - Flexible source → filter → sink processing ## System Requirements
- **📡 Real-time Streaming** - SSE (HTTP) and TCP protocols
- **🔍 Pattern Filtering** - Include/exclude logs with regex patterns
- **🛡️ Rate Limiting** - Protect against abuse with configurable limits
- **📊 Multi-pipeline** - Process different log sources simultaneously
- **🔄 Rotation Aware** - Handles log rotation seamlessly
- **⚡ High Performance** - Minimal CPU/memory footprint
## 📖 Documentation - **Operating Systems**: Linux (kernel 6.10+), FreeBSD (14.0+)
- **Architecture**: amd64
- **Go Version**: 1.25+ (for building from source)
Complete documentation is available in the [`doc/`](doc/) directory: ## License
- [**Quick Start Guide**](doc/quickstart.md) - Get running in 5 minutes BSD 3-Clause License
- [**Configuration**](doc/configuration.md) - All configuration options
- [**CLI Reference**](doc/cli.md) - Command-line interface
- [**Examples**](doc/examples/) - Ready-to-use configurations
## 📄 License
BSD-3-Clause

View File

@ -1,319 +1,408 @@
###############################################################################
### LogWisp Configuration ### LogWisp Configuration
### Default location: ~/.config/logwisp/logwisp.toml ### Default location: ~/.config/logwisp/logwisp.toml
### Configuration Precedence: CLI flags > Environment > File > Defaults ### Configuration Precedence: CLI flags > Environment > File > Defaults
### Default values shown - uncommented lines represent active configuration ### Default values shown - uncommented lines represent active configuration
###############################################################################
###############################################################################
### Global Settings ### Global Settings
###############################################################################
background = false # Run as daemon background = false # Run as daemon
quiet = false # Suppress console output quiet = false # Suppress console output
disable_status_reporter = false # Disable status logging disable_status_reporter = false # Disable periodic status logging
config_auto_reload = false # Reload config on file change config_auto_reload = false # Reload config on file change
config_save_on_exit = false # Persist runtime changes
###############################################################################
### Logging Configuration ### Logging Configuration
###############################################################################
[logging] [logging]
output = "stdout" # file|stdout|stderr|both|none output = "stdout" # file|stdout|stderr|split|all|none
level = "info" # debug|info|warn|error level = "info" # debug|info|warn|error
[logging.file] # [logging.file]
directory = "./log" # Log directory path # directory = "./log" # Log directory path
name = "logwisp" # Base filename # name = "logwisp" # Base filename
max_size_mb = 100 # Rotation threshold # max_size_mb = 100 # Rotation threshold
max_total_size_mb = 1000 # Total size limit # max_total_size_mb = 1000 # Total size limit
retention_hours = 168.0 # Delete logs older than (7 days) # retention_hours = 168.0 # Delete logs older than (7 days)
[logging.console] [logging.console]
target = "stdout" # stdout|stderr|split target = "stdout" # stdout|stderr|split
format = "txt" # txt|json format = "txt" # txt|json
###############################################################################
### Pipeline Configuration ### Pipeline Configuration
###############################################################################
[[pipelines]] [[pipelines]]
name = "default" # Pipeline identifier name = "default" # Pipeline identifier
###============================================================================
### Rate Limiting (Pipeline-level) ### Rate Limiting (Pipeline-level)
###============================================================================
# [pipelines.rate_limit] # [pipelines.rate_limit]
# rate = 0.0 # Entries per second (0=disabled) # rate = 1000.0 # Entries per second (0=disabled)
# burst = 0.0 # Burst capacity (defaults to rate) # burst = 2000.0 # Burst capacity (defaults to rate)
# policy = "pass" # pass|drop # policy = "drop" # pass|drop
# max_entry_size_bytes = 0 # Max entry size (0=unlimited) # max_entry_size_bytes = 0 # Max entry size (0=unlimited)
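The pipeline rate limiter above follows token-bucket semantics: `rate` is the refill per second, `burst` the bucket capacity, and `policy` decides what happens to entries when the bucket is empty. A minimal sketch of that mechanism (illustrative only, not LogWisp's implementation):

```python
# Token-bucket sketch mirroring the rate/burst/policy fields above.
# Time is passed in explicitly to keep the example deterministic.
class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate        # tokens refilled per second
        self.burst = burst      # maximum bucket capacity
        self.tokens = burst
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at burst.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False            # caller applies the "drop" or "pass" policy

bucket = TokenBucket(rate=2.0, burst=2.0)
results = [bucket.allow(0.0), bucket.allow(0.0), bucket.allow(0.0), bucket.allow(1.0)]
# Two entries pass on the full bucket, the third is rejected,
# and the fourth passes after one second of refill.
```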
###============================================================================
### Filters ### Filters
# [[pipelines.filters]] ###============================================================================
# type = "include" # include|exclude
# logic = "or" # or|and
# patterns = [".*ERROR.*", ".*WARN.*"] # Regex patterns
### Sources ### ⚠️ Example: Include only ERROR and WARN logs
## [[pipelines.filters]]
## type = "include" # include|exclude
## logic = "or" # or|and
## patterns = [".*ERROR.*", ".*WARN.*"]
### Directory Source ### ⚠️ Example: Exclude debug logs
## [[pipelines.filters]]
## type = "exclude"
## patterns = [".*DEBUG.*"]
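The two example filters compose as a chain: an entry must match each include filter (its patterns combined with `or`/`and` per the `logic` field) and must not match any exclude filter. A rough Python sketch of that evaluation, assuming the semantics described in the comments above:

```python
import re

def passes(line, filters):
    """Apply a chain of include/exclude filters to one log line.

    Each filter is {"type": "include"|"exclude", "logic": "or"|"and",
    "patterns": [...]}, mirroring the TOML fields above."""
    for f in filters:
        pats = [re.compile(p) for p in f["patterns"]]
        if f.get("logic", "or") == "and":
            matched = all(p.search(line) for p in pats)
        else:
            matched = any(p.search(line) for p in pats)
        if f["type"] == "include" and not matched:
            return False        # entry must match every include filter
        if f["type"] == "exclude" and matched:
            return False        # entry must not match any exclude filter
    return True

chain = [
    {"type": "include", "logic": "or", "patterns": [r".*ERROR.*", r".*WARN.*"]},
    {"type": "exclude", "patterns": [r".*DEBUG.*"]},
]
```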
###============================================================================
### Format Configuration
###============================================================================
# [pipelines.format]
# type = "raw" # json|txt|raw
### Raw formatter options (default)
# [pipelines.format.raw]
# add_new_line = true # Add newline to messages
### JSON formatter options
# [pipelines.format.json]
# pretty = false # Pretty print JSON
# timestamp_field = "timestamp" # Field name for timestamp
# level_field = "level" # Field name for log level
# message_field = "message" # Field name for message
# source_field = "source" # Field name for source
### Text formatter options
# [pipelines.format.txt]
# template = "[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Source}} - {{.Message}}"
# timestamp_format = "2006-01-02T15:04:05.000Z07:00" # Go time format string
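The template above is Go `text/template` syntax, with `FmtTime` and `ToUpper` as pipeline helpers applied to the entry fields. An approximation of the same rendering in Python for illustration (the field names and helper behavior are assumptions read off the template; Python's `strftime` layout also differs slightly from Go's reference-time format):

```python
from datetime import datetime, timezone

def format_txt(entry, timestamp_format="%Y-%m-%dT%H:%M:%S.%f%z"):
    # Mirrors "[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Source}} - {{.Message}}"
    ts = entry["timestamp"].strftime(timestamp_format)
    return f"[{ts}] [{entry['level'].upper()}] {entry['source']} - {entry['message']}"

entry = {
    "timestamp": datetime(2025, 10, 10, 13, 3, 3, tzinfo=timezone.utc),
    "level": "info",
    "source": "app",
    "message": "started",
}
```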
###============================================================================
### Sources (Input Sources)
###============================================================================
###----------------------------------------------------------------------------
### Directory Source (Active Default)
[[pipelines.sources]] [[pipelines.sources]]
type = "directory" type = "directory"
[pipelines.sources.options] [pipelines.sources.directory]
path = "./" # Directory to monitor path = "./" # Watch directory
pattern = "*.log" # Glob pattern pattern = "*.log" # File pattern (glob)
check_interval_ms = 100 # Scan interval (min: 10ms) check_interval_ms = 100 # Poll interval
recursive = false # Scan subdirectories
###----------------------------------------------------------------------------
### Stdin Source ### Stdin Source
# [[pipelines.sources]] # [[pipelines.sources]]
# type = "stdin" # type = "stdin"
# [pipelines.sources.options] # [pipelines.sources.stdin]
# buffer_size = 1000 # Input buffer size # buffer_size = 1000 # Internal buffer size
### HTTP Source ###----------------------------------------------------------------------------
### HTTP Source (Receives via POST)
# [[pipelines.sources]] # [[pipelines.sources]]
# type = "http" # type = "http"
# [pipelines.sources.options] # [pipelines.sources.http]
# host = "0.0.0.0" # Listen address # host = "0.0.0.0" # Listen address
# port = 8081 # Listen port # port = 8081 # Listen port
# ingest_path = "/ingest" # Ingest endpoint # ingest_path = "/ingest" # Ingest endpoint
# buffer_size = 1000 # Input buffer size # buffer_size = 1000 # Internal buffer size
# max_body_size = 1048576 # Max request size bytes # max_body_size = 1048576 # Max request body (1MB)
# read_timeout_ms = 10000 # Read timeout
# write_timeout_ms = 10000 # Write timeout
# [pipelines.sources.options.tls] ### TLS configuration
# enabled = false # Enable TLS # [pipelines.sources.http.tls]
# cert_file = "" # Server certificate # enabled = false
# key_file = "" # Server key # cert_file = "/path/to/cert.pem"
# key_file = "/path/to/key.pem"
# ca_file = "/path/to/ca.pem"
# min_version = "TLS1.2" # TLS1.2|TLS1.3
# client_auth = false # Require client certs # client_auth = false # Require client certs
# client_ca_file = "" # Client CA cert # client_ca_file = "/path/to/ca.pem" # CA to validate client certs
# verify_client_cert = false # Verify client certs # verify_client_cert = true # Require valid client cert
# insecure_skip_verify = false # Skip verification (server-side)
# ca_file = "" # Custom CA file
# min_version = "TLS1.2" # Min TLS version
# max_version = "TLS1.3" # Max TLS version
# cipher_suites = "" # Comma-separated list
# [pipelines.sources.options.net_limit] ### ⚠️ Example: TLS configuration required to enable auth
# enabled = false # Enable rate limiting ## [pipelines.sources.http.tls]
# ip_whitelist = [] # Allowed IPs/CIDRs (IPv4 only) ## enabled = true # MUST be true for auth
# ip_blacklist = [] # Blocked IPs/CIDRs (IPv4 only) ## cert_file = "/path/to/server.pem"
## key_file = "/path/to/server.key"
### Network limiting (access control)
# [pipelines.sources.http.net_limit]
# enabled = false
# max_connections_per_ip = 10
# max_connections_total = 100
# requests_per_second = 100.0 # Rate limit per client # requests_per_second = 100.0 # Rate limit per client
# burst_size = 100 # Burst capacity # burst_size = 200 # Token bucket burst
# response_code = 429 # HTTP status when limited # response_code = 429 # HTTP rate limit response code
# response_message = "Rate limit exceeded" # response_message = "Rate limit exceeded"
# max_connections_per_ip = 10 # Max concurrent per IP # ip_whitelist = []
# max_connections_total = 1000 # Max total connections # ip_blacklist = []
### TCP Source ### Authentication (validates clients)
### ☢ SECURITY: HTTP auth REQUIRES TLS to be enabled
# [pipelines.sources.http.auth]
# type = "none" # none|basic|token|mtls (NO scram)
# realm = "LogWisp" # For basic auth
### Basic auth users
# [[pipelines.sources.http.auth.basic.users]]
# username = "admin"
# password_hash = "$argon2..." # Argon2 hash
### Token auth tokens
# [pipelines.sources.http.auth.token]
# tokens = ["token1", "token2"]
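A client authenticating to the HTTP source with token auth presents its token on the POST to the ingest endpoint. A sketch using Python's stdlib — the URL, port, token, and the bearer-style `Authorization` header are placeholders/assumptions here; check the authentication docs for the exact scheme:

```python
import json
import urllib.request

def build_ingest_request(url, token, record):
    # Placeholder endpoint and token; the header form is an assumption.
    return urllib.request.Request(
        url,
        data=json.dumps(record).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

req = build_ingest_request(
    "https://localhost:8081/ingest", "token1", {"message": "service started"}
)
# urllib.request.urlopen(req) would send it; omitted to stay self-contained.
```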
###----------------------------------------------------------------------------
### TCP Source (Receives logs via TCP Client Sink)
# [[pipelines.sources]] # [[pipelines.sources]]
# type = "tcp" # type = "tcp"
# [pipelines.sources.options] # [pipelines.sources.tcp]
# host = "0.0.0.0" # Listen address # host = "0.0.0.0" # Listen address
# port = 9091 # Listen port # port = 9091 # Listen port
# buffer_size = 1000 # Input buffer size # buffer_size = 1000 # Internal buffer size
# read_timeout_ms = 10000 # Read timeout
# keep_alive = true # Enable TCP keep-alive
# keep_alive_period_ms = 30000 # Keep-alive interval
# [pipelines.sources.options.net_limit] ### ☣ WARNING: TCP has NO TLS support (gnet limitation)
# enabled = false # Enable rate limiting ### Use HTTP with TLS for encrypted transport
# ip_whitelist = [] # Allowed IPs/CIDRs (IPv4 only)
# ip_blacklist = [] # Blocked IPs/CIDRs (IPv4 only)
# requests_per_second = 100.0 # Rate limit per client
# burst_size = 100 # Burst capacity
# response_code = 429 # TCP rejection
# response_message = "Rate limit exceeded"
# max_connections_per_ip = 10 # Max concurrent per IP
# max_connections_per_user = 10 # Max concurrent per user
# max_connections_per_token = 10 # Max concurrent per token
# max_connections_total = 1000 # Max total connections
### Format Configuration ### Network limiting (access control)
# [pipelines.sources.tcp.net_limit]
# enabled = false
# max_connections_per_ip = 10
# max_connections_total = 100
# requests_per_second = 100.0
# burst_size = 200
# ip_whitelist = []
# ip_blacklist = []
### Raw formatter (default - passes through unchanged) ### Authentication
# format = "raw" # [pipelines.sources.tcp.auth]
### No options for raw formatter # type = "none" # none|scram ONLY (no basic/token/mtls)
### JSON formatter ### SCRAM auth users for TCP Source
# format = "json" # [[pipelines.sources.tcp.auth.scram.users]]
# [pipelines.format_options] # username = "user1"
# pretty = false # Pretty-print JSON # stored_key = "base64..." # Pre-computed SCRAM keys
# timestamp_field = "timestamp" # Timestamp field name # server_key = "base64..."
# level_field = "level" # Level field name # salt = "base64..."
# message_field = "message" # Message field name # argon_time = 3
# source_field = "source" # Source field name # argon_memory = 65536
# argon_threads = 4
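The `stored_key`/`server_key`/`salt` fields are the server-side SCRAM verifier values (RFC 5802): the password is stretched with a KDF (here Argon2, per the `argon_*` parameters), then StoredKey = H(HMAC(salted, "Client Key")) and ServerKey = HMAC(salted, "Server Key"). A sketch of that derivation, substituting stdlib PBKDF2 for Argon2 since Argon2 is not in Python's standard library:

```python
import base64
import hashlib
import hmac
import os

def scram_verifier(password, salt, iterations=4096):
    # Stand-in KDF: RFC 5802's Hi() is PBKDF2; LogWisp's config uses
    # Argon2 parameters (argon_time/argon_memory/argon_threads) instead.
    salted = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    stored_key = hashlib.sha256(client_key).digest()  # what the server stores
    server_key = hmac.new(salted, b"Server Key", hashlib.sha256).digest()
    return {
        "salt": base64.b64encode(salt).decode(),
        "stored_key": base64.b64encode(stored_key).decode(),
        "server_key": base64.b64encode(server_key).decode(),
    }

verifier = scram_verifier("correct horse", os.urandom(16))
```

The server never stores the password itself; during authentication it can verify a client proof against `stored_key` without being able to reproduce the client's credentials.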
### Text formatter ###============================================================================
# format = "txt" ### Sinks (Output Destinations)
# [pipelines.format_options] ###============================================================================
# template = "[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Source}} - {{.Message}}{{ if .Fields }} {{.Fields}}{{ end }}"
# timestamp_format = "2006-01-02T15:04:05Z07:00" # Go time format
### Sinks ###----------------------------------------------------------------------------
### Console Sink (Active Default)
### HTTP Sink (SSE Server)
[[pipelines.sinks]] [[pipelines.sinks]]
type = "http" type = "console"
[pipelines.sinks.options] [pipelines.sinks.console]
host = "0.0.0.0" # Listen address target = "stdout" # stdout|stderr|split
port = 8080 # Server port colorize = false # Enable colored output
buffer_size = 1000 # Buffer size buffer_size = 100 # Internal buffer size
stream_path = "/stream" # SSE endpoint
status_path = "/status" # Status endpoint
[pipelines.sinks.options.heartbeat]
enabled = true # Send heartbeats
interval_seconds = 30 # Heartbeat interval
include_timestamp = true # Include timestamp
include_stats = false # Include statistics
format = "comment" # comment|message
# [pipelines.sinks.options.tls]
# enabled = false # Enable TLS
# cert_file = "" # Server certificate
# key_file = "" # Server key
# client_auth = false # Require client certs
# client_ca_file = "" # Client CA cert
# verify_client_cert = false # Verify client certs
# insecure_skip_verify = false # Skip verification
# ca_file = "" # Custom CA file
# min_version = "TLS1.2" # Min TLS version
# max_version = "TLS1.3" # Max TLS version
# cipher_suites = "" # Comma-separated list
# [pipelines.sinks.options.net_limit]
# enabled = false # Enable rate limiting
# ip_whitelist = [] # Allowed IPs/CIDRs (IPv4 only)
# ip_blacklist = [] # Blocked IPs/CIDRs (IPv4 only)
# requests_per_second = 100.0 # Rate limit per client
# burst_size = 100 # Burst capacity
# response_code = 429 # HTTP status when limited
# response_message = "Rate limit exceeded"
# max_connections_per_ip = 10 # Max concurrent per IP
# max_connections_total = 1000 # Max total connections
### TCP Sink (TCP Server)
# [[pipelines.sinks]]
# type = "tcp"
# [pipelines.sinks.options]
# host = "0.0.0.0" # Listen address
# port = 9090 # Server port
# buffer_size = 1000 # Buffer size
# auth_type = "none" # none|scram
# [pipelines.sinks.options.heartbeat]
# enabled = false # Send heartbeats
# interval_seconds = 30 # Heartbeat interval
# include_timestamp = false # Include timestamp
# include_stats = false # Include statistics
# format = "comment" # comment|message
# [pipelines.sinks.options.net_limit]
# enabled = false # Enable rate limiting
# ip_whitelist = [] # Allowed IPs/CIDRs (IPv4 only)
# ip_blacklist = [] # Blocked IPs/CIDRs (IPv4 only)
# requests_per_second = 100.0 # Rate limit per client
# burst_size = 100 # Burst capacity
# response_code = 429 # TCP rejection code
# response_message = "Rate limit exceeded"
# max_connections_per_ip = 10 # Max concurrent per IP
# max_connections_per_user = 10 # Max concurrent per user
# max_connections_per_token = 10 # Max concurrent per token
# max_connections_total = 1000 # Max total connections
# [pipelines.sinks.options.scram]
# username = "" # SCRAM auth username
# password = "" # SCRAM auth password
### HTTP Client Sink (Forward to remote HTTP endpoint)
# [[pipelines.sinks]]
# type = "http_client"
# [pipelines.sinks.options]
# url = "" # Target URL (required)
# buffer_size = 1000 # Buffer size
# batch_size = 100 # Entries per batch
# batch_delay_ms = 1000 # Batch timeout
# timeout_seconds = 30 # Request timeout
# max_retries = 3 # Retry attempts
# retry_delay_ms = 1000 # Initial retry delay
# retry_backoff = 2.0 # Exponential backoff multiplier
# insecure_skip_verify = false # Skip TLS verification
# auth_type = "none" # none|basic|bearer|mtls
# [pipelines.sinks.options.basic]
# username = "" # Basic auth username
# password_hash = "" # Argon2 password hash
# [pipelines.sinks.options.bearer]
# token = "" # Bearer token
## Custom HTTP headers
# [pipelines.sinks.options.headers]
# Content-Type = "application/json"
# Authorization = "Bearer token"
## Client certificate for mTLS
# [pipelines.sinks.options.tls]
# ca_file = "" # Custom CA certificate
# cert_file = "" # Client certificate
# key_file = "" # Client key
### TCP Client Sink (Forward to remote TCP endpoint)
# [[pipelines.sinks]]
# type = "tcp_client"
# [pipelines.sinks.options]
# address = "" # host:port (required)
# buffer_size = 1000 # Buffer size
# dial_timeout_seconds = 10 # Connection timeout
# write_timeout_seconds = 30 # Write timeout
# read_timeout_seconds = 10 # Read timeout
# keep_alive_seconds = 30 # TCP keepalive
# reconnect_delay_ms = 1000 # Initial reconnect delay
# max_reconnect_delay_seconds = 30 # Max reconnect delay
# reconnect_backoff = 1.5 # Exponential backoff multiplier
# [pipelines.sinks.options.scram]
# username = "" # Auth username
# password_hash = "" # Argon2 password hash
###----------------------------------------------------------------------------
### File Sink ### File Sink
# [[pipelines.sinks]] # [[pipelines.sinks]]
# type = "file" # type = "file"
# [pipelines.sinks.options] # [pipelines.sinks.file]
# directory = "./" # Output dir # directory = "./logs" # Output directory
# name = "logwisp.output" # Base name # name = "output" # Base filename
# buffer_size = 1000 # Input channel buffer # max_size_mb = 100 # Rotation threshold
# max_size_mb = 100 # Rotation size # max_total_size_mb = 1000 # Total size limit
# max_total_size_mb = 0 # Total limit (0=unlimited) # min_disk_free_mb = 500 # Minimum free disk space
# retention_hours = 0.0 # Retention (0=disabled) # retention_hours = 168.0 # Delete logs older than (7 days)
# min_disk_free_mb = 1000 # Disk space guard # buffer_size = 1000 # Internal buffer size
# flush_interval_ms = 1000 # Force flush interval
### Console Sinks ###----------------------------------------------------------------------------
### HTTP Sink (SSE streaming to browser/HTTP client)
# [[pipelines.sinks]] # [[pipelines.sinks]]
# type = "console" # type = "http"
# [pipelines.sinks.options] # [pipelines.sinks.http]
# target = "stdout" # stdout|stderr|split # host = "0.0.0.0" # Listen address
# buffer_size = 1000 # Buffer size # port = 8080 # Listen port
# stream_path = "/stream" # SSE stream endpoint
# status_path = "/status" # Status endpoint
# buffer_size = 1000 # Internal buffer size
# max_connections = 100 # Max concurrent clients
# read_timeout_ms = 10000 # Read timeout
# write_timeout_ms = 10000 # Write timeout
### Authentication Configuration ### Heartbeat configuration (keeps SSE alive)
# [pipelines.auth] # [pipelines.sinks.http.heartbeat]
# enabled = true
# interval_ms = 30000 # 30 seconds
# include_timestamp = true
# include_stats = false
# format = "comment" # comment|event|json
### TLS configuration
# [pipelines.sinks.http.tls]
# enabled = false
# cert_file = "/path/to/cert.pem"
# key_file = "/path/to/key.pem"
# ca_file = "/path/to/ca.pem"
# min_version = "TLS1.2" # TLS1.2|TLS1.3
# client_auth = false # Require client certs
### Network limiting (access control)
# [pipelines.sinks.http.net_limit]
# enabled = false
# max_connections_per_ip = 10
# max_connections_total = 100
# ip_whitelist = ["192.168.1.0/24"]
# ip_blacklist = []
### Authentication (for clients)
### ☢ SECURITY: HTTP auth REQUIRES TLS to be enabled
# [pipelines.sinks.http.auth]
# type = "none" # none|basic|bearer|mtls # type = "none" # none|basic|bearer|mtls
### Basic Authentication ###----------------------------------------------------------------------------
# [pipelines.auth.basic_auth] ### TCP Sink (Server - accepts connections from TCP clients)
# realm = "LogWisp" # WWW-Authenticate realm # [[pipelines.sinks]]
# users_file = "" # External users file path # type = "tcp"
# [[pipelines.auth.basic_auth.users]] # [pipelines.sinks.tcp]
# username = "" # Username # host = "0.0.0.0" # Listen address
# password_hash = "" # Argon2 password hash # port = 9090 # Listen port
# buffer_size = 1000 # Internal buffer size
# max_connections = 100 # Max concurrent clients
# keep_alive = true # Enable TCP keep-alive
# keep_alive_period_ms = 30000 # Keep-alive interval
### Bearer Token Authentication ### Heartbeat configuration
# [pipelines.auth.bearer_auth] # [pipelines.sinks.tcp.heartbeat]
# tokens = [] # Static bearer tokens # enabled = false
# interval_ms = 30000
# include_timestamp = true
# include_stats = false
# format = "json" # json|txt
### JWT Validation ### ☣ WARNING: TCP has NO TLS support (gnet limitation)
# [pipelines.auth.bearer_auth.jwt] ### Use HTTP with TLS for encrypted transport
# jwks_url = "" # JWKS endpoint for key discovery
# signing_key = "" # Static signing key (if not using JWKS) ### Network limiting
# issuer = "" # Expected issuer claim # [pipelines.sinks.tcp.net_limit]
# audience = "" # Expected audience claim # enabled = false
# max_connections_per_ip = 10
# max_connections_total = 100
# ip_whitelist = []
# ip_blacklist = []
### ☣ WARNING: TCP Sink has NO AUTH support (intended for debugging)
### Use the HTTP sink with TLS and auth for secure transport
###----------------------------------------------------------------------------
### HTTP Client Sink (POST to HTTP Source endpoint)
# [[pipelines.sinks]]
# type = "http_client"
# [pipelines.sinks.http_client]
# url = "https://logs.example.com/ingest"
# buffer_size = 1000
# batch_size = 100 # Logs per request
# batch_delay_ms = 1000 # Max wait before sending
# timeout_seconds = 30 # Request timeout
# max_retries = 3 # Retry attempts
# retry_delay_ms = 1000 # Initial retry delay
# retry_backoff = 2.0 # Exponential backoff
# insecure_skip_verify = false # Skip TLS verification
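The retry settings above describe capped exponential backoff: before retry attempt n, the client waits retry_delay_ms × retry_backoff^n, for up to max_retries attempts. A quick sketch of the resulting delay schedule (illustrative only):

```python
def retry_delays(initial_ms=1000, backoff=2.0, max_retries=3):
    """Delay in ms before each retry, mirroring retry_delay_ms/retry_backoff."""
    return [int(initial_ms * backoff ** attempt) for attempt in range(max_retries)]

# With the defaults above: wait 1s, then 2s, then 4s between attempts.
```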
### TLS configuration
# [pipelines.sinks.http_client.tls]
# enabled = false
# server_name = "logs.example.com" # For verification
# skip_verify = false # Skip verification
# cert_file = "/path/to/client.pem" # Client cert for mTLS
# key_file = "/path/to/client.key" # Client key for mTLS
### ⚠️ Example: HTTP Client Sink → HTTP Source with mTLS
## HTTP Source with mTLS:
## [pipelines.sources.http.tls]
## enabled = true
## cert_file = "/path/to/server.pem"
## key_file = "/path/to/server.key"
## client_auth = true # Enable client cert verification
## client_ca_file = "/path/to/ca.pem"
## HTTP Client with client cert:
## [pipelines.sinks.http_client.tls]
## enabled = true
## cert_file = "/path/to/client.pem" # Client certificate
## key_file = "/path/to/client.key"
### Client authentication
### ☢ SECURITY: HTTP auth REQUIRES TLS to be enabled
# [pipelines.sinks.http_client.auth]
# type = "none" # none|basic|token|mtls (NO scram)
# # token = "your-token" # For token auth
# # username = "user" # For basic auth
# # password = "pass" # For basic auth
###----------------------------------------------------------------------------
### TCP Client Sink (Connect to TCP Source server)
# [[pipelines.sinks]]
# type = "tcp_client"
# [pipelines.sinks.tcp_client]
# host = "logs.example.com" # Target host
# port = 9090 # Target port
# buffer_size = 1000 # Internal buffer size
# dial_timeout = 10 # Connection timeout (seconds)
# write_timeout = 30 # Write timeout (seconds)
# read_timeout = 10 # Read timeout (seconds)
# keep_alive = 30 # TCP keep-alive (seconds)
# reconnect_delay_ms = 1000 # Initial reconnect delay
# max_reconnect_delay_ms = 30000 # Max reconnect delay
# reconnect_backoff = 1.5 # Exponential backoff
### ☣ WARNING: TCP has NO TLS support (gnet limitation)
### Use HTTP with TLS for encrypted transport
### Client authentication
# [pipelines.sinks.tcp_client.auth]
# type = "none" # none|scram ONLY (no basic/token/mtls)
# # username = "user" # For SCRAM auth
# # password = "pass" # For SCRAM auth

View File

@ -1,27 +1,77 @@
# LogWisp Documentation # LogWisp
Documentation covers installation, configuration, and usage of LogWisp's pipeline-based log monitoring system. A high-performance, pipeline-based log transport and processing system built in Go. LogWisp provides flexible log collection, filtering, formatting, and distribution with enterprise-grade security and reliability features.
## 📚 Documentation Index ## Features
### Getting Started ### Core Capabilities
- **[Installation Guide](installation.md)** - Platform-specific installation - **Pipeline Architecture**: Independent processing pipelines with source → filter → format → sink flow
- **[Quick Start](quickstart.md)** - Get running in 5 minutes - **Multiple Input Sources**: Directory monitoring, stdin, HTTP, TCP
- **[Architecture Overview](architecture.md)** - Pipeline design - **Flexible Output Sinks**: Console, file, HTTP SSE, TCP streaming, HTTP/TCP forwarding
- **Real-time Processing**: Sub-millisecond latency with configurable buffering
- **Hot Configuration Reload**: Update pipelines without service restart
### Configuration ### Data Processing
- **[Configuration Guide](configuration.md)** - Complete reference - **Pattern-based Filtering**: Include/exclude filters with regex support
- **[Environment Variables](environment.md)** - Container configuration - **Multiple Formatters**: Raw, JSON, and template-based text formatting
- **[Command Line Options](cli.md)** - CLI reference - **Rate Limiting**: Pipeline and per-connection rate controls
- **[Sample Configurations](../config/)** - Default & Minimal Config - **Batch Processing**: Configurable batching for HTTP/TCP clients
### Features ### Security & Reliability
- **[Status Monitoring](status.md)** - Health checks - **Authentication**: Basic, token, SCRAM, and mTLS support
- **[Filters Guide](filters.md)** - Pattern-based filtering - **TLS Encryption**: Full TLS 1.2/1.3 support for HTTP connections
- **[Rate Limiting](ratelimiting.md)** - Connection protection - **Access Control**: IP whitelisting/blacklisting, connection limits
- **[Router Mode](router.md)** - Multi-pipeline routing - **Automatic Reconnection**: Resilient client connections with exponential backoff
- **[Authentication](authentication.md)** - Access control *(planned)* - **File Rotation**: Size-based rotation with retention policies
## 📝 License ### Operational Features
- **Status Monitoring**: Real-time statistics and health endpoints
- **Signal Handling**: Graceful shutdown and configuration reload via signals
- **Background Mode**: Daemon operation with proper signal handling
- **Quiet Mode**: Silent operation for automated deployments
BSD-3-Clause ## Documentation
- [Installation Guide](installation.md) - Platform setup and service configuration
- [Architecture Overview](architecture.md) - System design and component interaction
- [Configuration Reference](configuration.md) - TOML structure and configuration methods
- [Input Sources](sources.md) - Available source types and configurations
- [Output Sinks](sinks.md) - Sink types and output options
- [Filters](filters.md) - Pattern-based log filtering
- [Formatters](formatters.md) - Log formatting and transformation
- [Authentication](authentication.md) - Security configurations and auth methods
- [Networking](networking.md) - TLS, rate limiting, and network features
- [Command Line Interface](cli.md) - CLI flags and subcommands
- [Operations Guide](operations.md) - Running and maintaining LogWisp
## Quick Start
Install LogWisp and create a basic configuration:
```toml
[[pipelines]]
name = "default"
[[pipelines.sources]]
type = "directory"
[pipelines.sources.directory]
path = "./"
pattern = "*.log"
[[pipelines.sinks]]
type = "console"
[pipelines.sinks.console]
target = "stdout"
```
Run with: `logwisp -c config.toml`
## System Requirements
- **Operating Systems**: Linux (kernel 3.10+), FreeBSD (12.0+)
- **Architecture**: amd64
- **Go Version**: 1.25+ (for building from source)
## License
BSD 3-Clause License
# Architecture Overview

LogWisp implements a pipeline-based architecture for flexible log processing and distribution.

## Core Concepts

### Pipeline Model

Each pipeline operates independently with a source → filter → format → sink flow. Multiple pipelines can run concurrently within a single LogWisp instance, each processing different log streams with unique configurations.

### Component Hierarchy
```
Service (Main Process)
├── Pipeline 1
│   ├── Sources (1 or more)
│   ├── Rate Limiter (optional)
│   ├── Filter Chain (optional)
│   ├── Formatter (optional)
│   └── Sinks (1 or more)
├── Pipeline 2
│   └── [Same structure]
└── Status Reporter (optional)
```
## Data Flow

### Processing Stages

1. **Source Stage**: Sources monitor inputs and generate log entries
2. **Rate Limiting**: Optional pipeline-level rate control
3. **Filtering**: Pattern-based inclusion/exclusion
4. **Formatting**: Transform entries to desired output format
5. **Distribution**: Fan-out to multiple sinks
### Entry Lifecycle

Log entries flow through the pipeline as `core.LogEntry` structures containing:
- **Time**: Entry timestamp
- **Level**: Log level (DEBUG, INFO, WARN, ERROR)
- **Source**: Origin identifier
- **Message**: Log content
- **Fields**: Additional metadata (JSON)
- **RawSize**: Original entry size
### Buffering Strategy

Each component maintains internal buffers to handle burst traffic:
- Sources: Configurable buffer size (default 1000 entries)
- Sinks: Independent buffers per sink
- Network components: Additional TCP/HTTP buffers
## Component Types

### Sources (Input)

- **Directory Source**: File system monitoring with rotation detection
- **Stdin Source**: Standard input processing
- **HTTP Source**: REST endpoint for log ingestion
- **TCP Source**: Raw TCP socket listener
### Sinks (Output)
- **Console Sink**: stdout/stderr output
- **File Sink**: Rotating file writer
- **HTTP Sink**: Server-Sent Events (SSE) streaming
- **TCP Sink**: TCP server for client connections
- **HTTP Client Sink**: Forward to remote HTTP endpoints
- **TCP Client Sink**: Forward to remote TCP servers
### Processing Components

- **Rate Limiter**: Token bucket algorithm for flow control
- **Filter Chain**: Sequential pattern matching
- **Formatters**: Raw, JSON, or template-based text transformation
## Concurrency Model

### Goroutine Architecture

- Each source runs in dedicated goroutines for monitoring
- Sinks operate independently with their own processing loops
- Network listeners use optimized event loops (gnet for TCP)
- Pipeline processing uses channel-based communication
### Synchronization
- Atomic counters for statistics
- Read-write mutexes for configuration access
- Context-based cancellation for graceful shutdown
- Wait groups for coordinated startup/shutdown
## Network Architecture

### Connection Patterns

**Chaining Design**:

- TCP Client Sink → TCP Source: Direct TCP forwarding
- HTTP Client Sink → HTTP Source: HTTP-based forwarding

**Monitoring Design**:
- TCP Sink: Debugging interface
- HTTP Sink: Browser-based live monitoring

### Protocol Support

- HTTP/1.1 and HTTP/2 for HTTP connections
- Raw TCP with optional SCRAM authentication
- TLS 1.2/1.3 encryption (HTTP connections only; TCP is always unencrypted)
- Server-Sent Events for real-time streaming
## Resource Management

### Memory Management

- Bounded buffers prevent unbounded growth
- Automatic garbage collection via Go runtime
- Connection limits prevent resource exhaustion

### File Management

- Automatic rotation based on size thresholds
- Retention policies for old log files
- Minimum disk space checks before writing

### Connection Management

- Per-IP connection limits
- Global connection caps
- Automatic reconnection with exponential backoff
- Keep-alive for persistent connections
## Reliability Features
### Fault Tolerance
- Panic recovery in pipeline processing
- Independent pipeline operation
- Automatic source restart on failure
- Sink failure isolation
### Data Integrity
- Entry validation at ingestion
- Size limits for entries and batches
- Duplicate detection in file monitoring
- Position tracking for file reads
## Performance Characteristics
### Throughput
- Pipeline rate limiting: Configurable (default 1000 entries/second)
- Network throughput: Limited by network and sink capacity
- File monitoring: Sub-second detection (default 100ms interval)
### Latency
- Entry processing: Sub-millisecond in-memory
- Network forwarding: Depends on batch configuration
- File detection: Configurable check interval
### Scalability
- Horizontal: Multiple LogWisp instances with different configurations
- Vertical: Multiple pipelines per instance
- Fan-out: Multiple sinks per pipeline
- Fan-in: Multiple sources per pipeline
# Authentication
LogWisp supports multiple authentication methods for securing network connections.
## Authentication Methods
### Overview
| Method | HTTP Source | HTTP Sink | HTTP Client | TCP Source | TCP Client | TCP Sink |
|--------|------------|-----------|-------------|------------|------------|----------|
| None | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Basic | ✓ (TLS req) | ✓ (TLS req) | ✓ (TLS req) | ✗ | ✗ | ✗ |
| Token | ✓ (TLS req) | ✓ (TLS req) | ✓ (TLS req) | ✗ | ✗ | ✗ |
| SCRAM | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ |
| mTLS | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ |
**Important Notes:**
- HTTP authentication **requires** TLS to be enabled
- TCP connections are **always** unencrypted
- TCP Sink has **no** authentication (debugging only)
## Basic Authentication
HTTP/HTTPS connections with username/password.
### Configuration
```toml
[pipelines.sources.http.auth]
type = "basic"
realm = "LogWisp"
[[pipelines.sources.http.auth.basic.users]]
username = "admin"
password_hash = "$argon2id$v=19$m=65536,t=3,p=2$..."
```
### Generating Credentials
Use the `auth` command:
```bash
logwisp auth -u admin -b
```
Output includes:
- Argon2id password hash for configuration
- TOML configuration snippet
### Password Hash Format
LogWisp uses Argon2id with parameters:
- Memory: 65536 KB
- Iterations: 3
- Parallelism: 2
- Salt: Random 16 bytes
## Token Authentication
Bearer token authentication for HTTP/HTTPS.
### Configuration
```toml
[pipelines.sources.http.auth]
type = "token"
[pipelines.sources.http.auth.token]
tokens = ["token1", "token2", "token3"]
```
### Generating Tokens
```bash
logwisp auth -k -l 32
```
Generates:
- Base64-encoded token
- Hex-encoded token
- Configuration snippet
### Token Usage
Include in requests:
```
Authorization: Bearer <token>
```
## SCRAM Authentication
Secure Challenge-Response for TCP connections.
### Configuration
```toml
[pipelines.sources.tcp.auth]
type = "scram"
[[pipelines.sources.tcp.auth.scram.users]]
username = "tcpuser"
stored_key = "base64..."
server_key = "base64..."
salt = "base64..."
argon_time = 3
argon_memory = 65536
argon_threads = 4
```
### Generating SCRAM Credentials
```bash
logwisp auth -u tcpuser -s
```
### SCRAM Features
- Argon2-SCRAM-SHA256 algorithm
- Challenge-response mechanism
- No password transmission
- Replay attack protection
- Works over unencrypted connections
## mTLS (Mutual TLS)
Certificate-based authentication for HTTPS.
### Server Configuration
```toml
[pipelines.sources.http.tls]
enabled = true
cert_file = "/path/to/server.pem"
key_file = "/path/to/server.key"
client_auth = true
client_ca_file = "/path/to/ca.pem"
verify_client_cert = true
[pipelines.sources.http.auth]
type = "mtls"
```
### Client Configuration
```toml
[pipelines.sinks.http_client.tls]
enabled = true
cert_file = "/path/to/client.pem"
key_file = "/path/to/client.key"
[pipelines.sinks.http_client.auth]
type = "mtls"
```
### Certificate Generation
Use the `tls` command:
```bash
# Generate CA
logwisp tls -ca -o ca
# Generate server certificate
logwisp tls -server -ca-cert ca.pem -ca-key ca.key -host localhost -o server
# Generate client certificate
logwisp tls -client -ca-cert ca.pem -ca-key ca.key -o client
```
## Authentication Command
### Usage
```bash
logwisp auth [options]
```
### Options
| Flag | Description |
|------|-------------|
| `-u, --user` | Username for credential generation |
| `-p, --password` | Password (prompts if not provided) |
| `-b, --basic` | Generate basic auth (HTTP/HTTPS) |
| `-s, --scram` | Generate SCRAM auth (TCP) |
| `-k, --token` | Generate bearer token |
| `-l, --length` | Token length in bytes (default: 32) |
### Security Best Practices
1. **Always use TLS** for HTTP authentication
2. **Never hardcode passwords** in configuration
3. **Use strong passwords** (minimum 12 characters)
4. **Rotate tokens regularly**
5. **Limit user permissions** to minimum required
6. **Store password hashes only**, never plaintext
7. **Use unique credentials** per service/user
## Access Control Lists
Combine authentication with IP-based access control:
```toml
[pipelines.sources.http.net_limit]
enabled = true
ip_whitelist = ["192.168.1.0/24", "10.0.0.0/8"]
ip_blacklist = ["192.168.1.100"]
```
Priority order:
1. Blacklist (checked first, immediate deny)
2. Whitelist (if configured, must match)
3. Authentication (if configured)
## Credential Storage
### Configuration File
Store hashes in TOML:
```toml
[[pipelines.sources.http.auth.basic.users]]
username = "admin"
password_hash = "$argon2id$..."
```
### Environment Variables
Override via environment:
```bash
export LOGWISP_PIPELINES_0_SOURCES_0_HTTP_AUTH_BASIC_USERS_0_USERNAME=admin
export LOGWISP_PIPELINES_0_SOURCES_0_HTTP_AUTH_BASIC_USERS_0_PASSWORD_HASH='$argon2id$...'
```
### External Files
Future support planned for:
- External user databases
- LDAP/AD integration
- OAuth2/OIDC providers

# Command Line Interface

LogWisp CLI reference for commands and options.
## Synopsis

```bash
logwisp [command] [options]
logwisp [options]
```
## Commands

### Main Commands

| Command | Description |
|---------|-------------|
| `auth` | Generate authentication credentials |
| `tls` | Generate TLS certificates |
| `version` | Display version information |
| `help` | Show help information |
### auth Command
Generate authentication credentials.
```bash
logwisp auth [options]
```
**Options:**
| Flag | Description | Default |
|------|-------------|---------|
| `-u, --user` | Username | Required for password auth |
| `-p, --password` | Password | Prompts if not provided |
| `-b, --basic` | Generate basic auth | - |
| `-s, --scram` | Generate SCRAM auth | - |
| `-k, --token` | Generate bearer token | - |
| `-l, --length` | Token length in bytes | 32 |
### tls Command
Generate TLS certificates.
```bash
logwisp tls [options]
```
**Options:**
| Flag | Description | Default |
|------|-------------|---------|
| `-ca` | Generate CA certificate | - |
| `-server` | Generate server certificate | - |
| `-client` | Generate client certificate | - |
| `-host` | Comma-separated hosts/IPs | localhost |
| `-o` | Output file prefix | Required |
| `-ca-cert` | CA certificate file | Required for server/client |
| `-ca-key` | CA key file | Required for server/client |
| `-days` | Certificate validity days | 365 |
### version Command
Display version information.
```bash
logwisp version
logwisp -v
logwisp --version
```

Output includes:

- Version number
- Build date
- Git commit hash
- Go version

## Global Options
### Configuration Options
| Flag | Description | Default |
|------|-------------|---------|
| `-c, --config` | Configuration file path | `./logwisp.toml` |
| `-b, --background` | Run as daemon | false |
| `-q, --quiet` | Suppress console output | false |
| `--disable-status-reporter` | Disable status logging | false |
| `--config-auto-reload` | Enable config hot reload | false |
### Logging Options
| Flag | Description | Values |
|------|-------------|--------|
| `--logging.output` | Log output mode | file, stdout, stderr, split, all, none |
| `--logging.level` | Log level | debug, info, warn, error |
| `--logging.file.directory` | Log directory | Path |
| `--logging.file.name` | Log filename | String |
| `--logging.file.max_size_mb` | Max file size | Integer |
| `--logging.file.max_total_size_mb` | Total size limit | Integer |
| `--logging.file.retention_hours` | Retention period | Float |
| `--logging.console.target` | Console target | stdout, stderr, split |
| `--logging.console.format` | Output format | txt, json |
### Pipeline Options
Configure pipelines via CLI (N = array index, 0-based).
**Pipeline Configuration:**
| Flag | Description |
|------|-------------|
| `--pipelines.N.name` | Pipeline name |
| `--pipelines.N.sources.N.type` | Source type |
| `--pipelines.N.filters.N.type` | Filter type |
| `--pipelines.N.sinks.N.type` | Sink type |
## Flag Formats
### Boolean Flags
```bash
logwisp --quiet
logwisp --quiet=true
logwisp --quiet=false
``` ```
### String Flags

```bash
logwisp --config /etc/logwisp/config.toml
logwisp -c config.toml
```
### Nested Configuration

```bash
logwisp --logging.level=debug
logwisp --pipelines.0.name=myapp
logwisp --pipelines.0.sources.0.type=stdin
```
### Array Values (JSON)

```bash
logwisp --pipelines.0.filters.0.patterns='["ERROR","WARN"]'
```
## Exit Codes

| Code | Description |
|------|-------------|
| 0 | Success |
| 1 | General error |
| 2 | Configuration file not found |
| 137 | SIGKILL received |
## Signal Handling

| Signal | Action |
|--------|--------|
| SIGINT (Ctrl+C) | Graceful shutdown |
| SIGTERM | Graceful shutdown |
| SIGHUP | Reload configuration |
| SIGUSR1 | Reload configuration |
| SIGKILL | Immediate termination |
## Usage Patterns
### Development Mode
```bash
# Verbose logging to console
logwisp --logging.output=stderr --logging.level=debug
# Quick test with stdin
logwisp --pipelines.0.sources.0.type=stdin --pipelines.0.sinks.0.type=console
```
### Production Deployment
```bash
# Background with file logging
logwisp --background --config /etc/logwisp/prod.toml --logging.output=file
# Systemd service
ExecStart=/usr/local/bin/logwisp --config /etc/logwisp/config.toml
```
### Debugging
```bash
# Check configuration
logwisp --config test.toml --logging.level=debug --disable-status-reporter
# Dry run (verify config only)
logwisp --config test.toml --quiet
```
### Quick Commands
```bash
# Generate admin password
logwisp auth -u admin -b
# Create self-signed certs
logwisp tls -server -host localhost -o server
# Check version
logwisp version
```
## Help System
### General Help
```bash
logwisp --help
logwisp -h
logwisp help
```
### Command Help
```bash
logwisp auth --help
logwisp tls --help
logwisp help auth
```
## Special Flags
### Internal Flags
These flags are for internal use:
- `--background-daemon`: Child process indicator
- `--config-save-on-exit`: Save config on shutdown
### Hidden Behaviors
- SIGHUP ignored by default (nohup behavior)
- Automatic panic recovery in pipelines
- Resource cleanup on shutdown

# Configuration Reference

LogWisp configuration uses TOML format with flexible override mechanisms.

## Configuration Precedence

Configuration sources are evaluated in order:

1. **Command-line flags** (highest priority)
2. **Environment variables**
3. **Configuration file**
4. **Built-in defaults** (lowest priority)
## File Location

LogWisp searches for configuration in order:

1. Path specified via `--config` flag
2. Path from `LOGWISP_CONFIG_FILE` environment variable
3. `~/.config/logwisp/logwisp.toml`
4. `./logwisp.toml` in current directory
Note: `N` represents array indices (0-based).
## Configuration File Location
1. Command line: `--config /path/to/config.toml`
2. Environment: `$LOGWISP_CONFIG_FILE` and `$LOGWISP_CONFIG_DIR`
3. User config: `~/.config/logwisp/logwisp.toml`
4. Current directory: `./logwisp.toml`
## Global Settings
Top-level configuration options:
| Setting | Type | Default | Description |
|---------|------|---------|-------------|
| `background` | bool | false | Run as daemon process |
| `quiet` | bool | false | Suppress console output |
| `disable_status_reporter` | bool | false | Disable periodic status logging |
| `config_auto_reload` | bool | false | Enable file watch for auto-reload |
## Hot Reload
LogWisp supports automatic configuration reloading without restart:
```bash
# Enable hot reload
logwisp --config-auto-reload --config /etc/logwisp/config.toml
# Manual reload via signal
kill -HUP $(pidof logwisp) # or SIGUSR1
```
Hot reload updates:
- Pipeline configurations
- Filters
- Formatters
- Rate limits
- Router mode changes
Not reloaded (requires restart):
- Logging configuration
- Background mode
## Configuration Structure
```toml
# Optional: Enable router mode
router = false
# Optional: Background mode
background = false
# Optional: Quiet mode
quiet = false
# Optional: Disable status reporter
disable_status_reporter = false
```
## Logging Configuration
LogWisp's internal operational logging:
```toml
[logging]
output = "stdout" # file|stdout|stderr|split|all|none
level = "info"    # debug|info|warn|error
[logging.file]
directory = "./log"
name = "logwisp"
max_size_mb = 100
max_total_size_mb = 1000
retention_hours = 168.0
[logging.console]
target = "stdout" # stdout|stderr|split
format = "txt"    # txt|json
```
### Output Modes
- **file**: Write to log files only
- **stdout**: Write to standard output
- **stderr**: Write to standard error
- **split**: INFO/DEBUG to stdout, WARN/ERROR to stderr
- **all**: Write to both file and console
- **none**: Disable all logging
## Pipeline Configuration
Each `[[pipelines]]` section defines an independent processing pipeline:
```toml
[[pipelines]]
name = "pipeline-name"
# Rate limiting (optional)
[pipelines.rate_limit]
rate = 1000.0
burst = 2000.0
policy = "drop" # pass|drop
max_entry_size_bytes = 0 # 0=unlimited
# Format configuration (optional)
[pipelines.format]
type = "json" # raw|json|txt
# Sources (required, 1+)
[[pipelines.sources]]
type = "directory"
# ... source-specific config
# Filters (optional)
[[pipelines.filters]]
type = "include"
logic = "or"
patterns = ["ERROR", "WARN"]
# Sinks (required, 1+)
[[pipelines.sinks]]
type = "http"
# ... sink-specific config
```
## Environment Variables
All configuration options support environment variable overrides:
### Naming Convention
- Prefix: `LOGWISP_`
- Path separator: `_` (underscore)
- Array indices: Numeric suffix (0-based)
- Case: UPPERCASE
### Mapping Examples
| TOML Path | Environment Variable |
|-----------|---------------------|
| `quiet` | `LOGWISP_QUIET` |
| `logging.level` | `LOGWISP_LOGGING_LEVEL` |
| `pipelines[0].name` | `LOGWISP_PIPELINES_0_NAME` |
| `pipelines[0].sources[0].type` | `LOGWISP_PIPELINES_0_SOURCES_0_TYPE` |
## Command-Line Overrides
All configuration options can be overridden via CLI flags:
```bash
logwisp --quiet \
    --logging.level=debug \
    --pipelines.0.name=myapp \
    --pipelines.0.sources.0.type=stdin
```
## Configuration Validation
LogWisp validates configuration at startup:
- Required fields presence
- Type correctness
- Port conflicts
- Path accessibility
- Pattern compilation
- Network address formats
### Pipeline Formatters
Control output format per pipeline:
```toml
[[pipelines]]
name = "json-output"
[pipelines.format]
type = "json" # raw|json|txt
# JSON formatter options
[pipelines.format.json]
pretty = false
timestamp_field = "timestamp"
level_field = "level"
message_field = "message"
source_field = "source"
# Text formatter options
[pipelines.format.txt]
template = "[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Message}}"
timestamp_format = "2006-01-02T15:04:05Z07:00"
```
### Sources
Input data sources:
#### Directory Source
```toml
[[pipelines.sources]]
type = "directory"
options = {
path = "/var/log/myapp", # Directory to monitor
pattern = "*.log", # File pattern (glob)
check_interval_ms = 100 # Check interval (10-60000)
}
```
#### File Source
```toml
[[pipelines.sources]]
type = "file"
options = {
path = "/var/log/app.log" # Specific file
}
```
#### Stdin Source
```toml
[[pipelines.sources]]
type = "stdin"
options = {}
```
#### HTTP Source
```toml
[[pipelines.sources]]
type = "http"
options = {
port = 8081, # Port to listen on
ingest_path = "/ingest", # Path for POST requests
buffer_size = 1000, # Input buffer size
rate_limit = { # Optional rate limiting
enabled = true,
requests_per_second = 10.0,
burst_size = 20,
limit_by = "ip"
}
}
```
#### TCP Source
```toml
[[pipelines.sources]]
type = "tcp"
options = {
port = 9091, # Port to listen on
buffer_size = 1000, # Input buffer size
rate_limit = { # Optional rate limiting
enabled = true,
requests_per_second = 5.0,
burst_size = 10,
limit_by = "ip"
}
}
```
### Filters
Control which log entries pass through:
```toml
# Include filter - only matching logs pass
[[pipelines.filters]]
type = "include"
logic = "or" # or: match any, and: match all
patterns = [
"ERROR",
"(?i)warn", # Case-insensitive
"\\bfatal\\b" # Word boundary
]
# Exclude filter - matching logs are dropped
[[pipelines.filters]]
type = "exclude"
patterns = ["DEBUG", "health-check"]
```
### Sinks
Output destinations:
#### HTTP Sink (SSE)
```toml
[[pipelines.sinks]]
type = "http"
options = {
port = 8080,
buffer_size = 1000,
stream_path = "/stream",
status_path = "/status",
# Heartbeat
heartbeat = {
enabled = true,
interval_seconds = 30,
format = "comment", # comment or json
include_timestamp = true,
include_stats = false
},
# Rate limiting
rate_limit = {
enabled = true,
requests_per_second = 10.0,
burst_size = 20,
limit_by = "ip", # ip or global
max_connections_per_ip = 5,
max_total_connections = 100,
response_code = 429,
response_message = "Rate limit exceeded"
}
}
```
#### TCP Sink
```toml
[[pipelines.sinks]]
type = "tcp"
options = {
port = 9090,
buffer_size = 5000,
heartbeat = { enabled = true, interval_seconds = 60, format = "json" },
rate_limit = { enabled = true, requests_per_second = 5.0, burst_size = 10 }
}
```
#### HTTP Client Sink
```toml
[[pipelines.sinks]]
type = "http_client"
options = {
url = "https://remote-log-server.com/ingest",
buffer_size = 1000,
batch_size = 100,
batch_delay_ms = 1000,
timeout_seconds = 30,
max_retries = 3,
retry_delay_ms = 1000,
retry_backoff = 2.0,
headers = {
"Authorization" = "Bearer <API_KEY_HERE>",
"X-Custom-Header" = "value"
},
insecure_skip_verify = false
}
```
#### TCP Client Sink
```toml
[[pipelines.sinks]]
type = "tcp_client"
options = {
address = "remote-server.com:9090",
buffer_size = 1000,
dial_timeout_seconds = 10,
write_timeout_seconds = 30,
keep_alive_seconds = 30,
reconnect_delay_ms = 1000,
max_reconnect_delay_seconds = 30,
reconnect_backoff = 1.5
}
```
#### File Sink
```toml
[[pipelines.sinks]]
type = "file"
options = {
directory = "/var/log/logwisp",
name = "app",
max_size_mb = 100,
max_total_size_mb = 1000,
retention_hours = 168.0,
min_disk_free_mb = 1000,
buffer_size = 2000
}
```
#### Console Sinks
```toml
[[pipelines.sinks]]
type = "stdout" # or "stderr"
options = {
buffer_size = 500,
target = "stdout" # stdout, stderr, or split
}
```
## Complete Examples
### Basic Application Monitoring
```toml
[[pipelines]]
name = "app"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
```
### Hot Reload with JSON Output
```toml
config_auto_reload = true
config_save_on_exit = true
[[pipelines]]
name = "app"
[pipelines.format]
type = "json"
[pipelines.format.json]
pretty = true
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
```
### Filtering
```toml
[logging]
output = "file"
level = "info"
[[pipelines]]
name = "production"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log", check_interval_ms = 50 }
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "WARN", "CRITICAL"]
[[pipelines.filters]]
type = "exclude"
patterns = ["/health", "/metrics"]
[[pipelines.sinks]]
type = "http"
options = {
    port = 8080,
    rate_limit = { enabled = true, requests_per_second = 25.0 }
}
[[pipelines.sinks]]
type = "file"
options = { directory = "/var/log/archive", name = "errors" }
```
### Multi-Source Aggregation
```toml
[[pipelines]]
name = "aggregated"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/nginx", pattern = "*.log" }
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
[[pipelines.sources]]
type = "stdin"
options = {}
[[pipelines.sources]]
type = "http"
options = { port = 8081, ingest_path = "/logs" }
[[pipelines.sinks]]
type = "tcp"
options = { port = 9090 }
```
## Hot Reload Behavior
Reload triggers:
- File modification detection
- SIGHUP or SIGUSR1 signals
Reloadable items:
- Pipeline configurations
- Sources and sinks
- Filters and formatters
- Rate limits
Non-reloadable (requires restart):
- Logging configuration
- Background mode
- Global settings
## Default Configuration
Minimal working configuration:
```toml
[[pipelines]]
name = "default"
[[pipelines.sources]]
type = "directory"
[pipelines.sources.directory]
path = "./"
pattern = "*.log"
[[pipelines.sinks]]
type = "console"
[pipelines.sinks.console]
target = "stdout"
```
### Router Mode
```toml
# Run with: logwisp --router
router = true
[[pipelines]]
name = "api"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/api", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 } # Same port OK in router mode
[[pipelines]]
name = "web"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/nginx", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 } # Shared port
# Access:
# http://localhost:8080/api/stream
# http://localhost:8080/web/stream
# http://localhost:8080/status
```
### Remote Log Forwarding
```toml
[[pipelines]]
name = "forwarder"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "WARN"]
[[pipelines.sinks]]
type = "http_client"
options = {
url = "https://log-aggregator.example.com/ingest",
batch_size = 100,
batch_delay_ms = 5000,
headers = { "Authorization" = "Bearer <API_KEY_HERE>" }
}
[[pipelines.sinks]]
type = "tcp_client"
options = {
address = "backup-logger.example.com:9090",
reconnect_delay_ms = 5000
}
```
## Configuration Schema
### Type Reference
| TOML Type | Go Type | Environment Format |
|-----------|---------|-------------------|
| String | string | Plain text |
| Integer | int64 | Numeric string |
| Float | float64 | Decimal string |
| Boolean | bool | true/false |
| Array | []T | JSON array string |
| Table | struct | Nested with `_` |
# Environment Variables
Configure LogWisp through environment variables for containerized deployments.
## Naming Convention
- **Prefix**: `LOGWISP_`
- **Path separator**: `_` (underscore)
- **Array indices**: Numeric suffix (0-based)
- **Case**: UPPERCASE
Examples:
- `logging.level` → `LOGWISP_LOGGING_LEVEL`
- `pipelines[0].name` → `LOGWISP_PIPELINES_0_NAME`
## General Variables
```bash
LOGWISP_CONFIG_FILE=/etc/logwisp/config.toml
LOGWISP_CONFIG_DIR=/etc/logwisp
LOGWISP_BACKGROUND=true
LOGWISP_QUIET=true
LOGWISP_DISABLE_STATUS_REPORTER=true
LOGWISP_CONFIG_AUTO_RELOAD=true
LOGWISP_CONFIG_SAVE_ON_EXIT=true
```
### `LOGWISP_CONFIG_FILE`
Configuration file path.
```bash
export LOGWISP_CONFIG_FILE=/etc/logwisp/config.toml
```
### `LOGWISP_CONFIG_DIR`
Configuration directory.
```bash
export LOGWISP_CONFIG_DIR=/etc/logwisp
export LOGWISP_CONFIG_FILE=production.toml
```
### `LOGWISP_ROUTER`
Enable router mode.
```bash
export LOGWISP_ROUTER=true
```
### `LOGWISP_BACKGROUND`
Run in background.
```bash
export LOGWISP_BACKGROUND=true
```
### `LOGWISP_QUIET`
Suppress all output.
```bash
export LOGWISP_QUIET=true
```
### `LOGWISP_DISABLE_STATUS_REPORTER`
Disable periodic status reporting.
```bash
export LOGWISP_DISABLE_STATUS_REPORTER=true
```
## Logging Variables
```bash
# Output mode
LOGWISP_LOGGING_OUTPUT=both
# Log level
LOGWISP_LOGGING_LEVEL=debug
# File logging
LOGWISP_LOGGING_FILE_DIRECTORY=/var/log/logwisp
LOGWISP_LOGGING_FILE_NAME=logwisp
LOGWISP_LOGGING_FILE_MAX_SIZE_MB=100
LOGWISP_LOGGING_FILE_MAX_TOTAL_SIZE_MB=1000
LOGWISP_LOGGING_FILE_RETENTION_HOURS=168
# Console logging
LOGWISP_LOGGING_CONSOLE_TARGET=stderr
LOGWISP_LOGGING_CONSOLE_FORMAT=json
# Special console target override
LOGWISP_CONSOLE_TARGET=split # Overrides sink console targets
```
## Pipeline Configuration
### Basic Pipeline
```bash
# Pipeline name
LOGWISP_PIPELINES_0_NAME=app
# Source configuration
LOGWISP_PIPELINES_0_SOURCES_0_TYPE=directory
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PATH=/var/log/app
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PATTERN="*.log"
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_CHECK_INTERVAL_MS=100
# Sink configuration
LOGWISP_PIPELINES_0_SINKS_0_TYPE=http
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_PORT=8080
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_BUFFER_SIZE=1000
```
### Pipeline with Formatter
```bash
# Pipeline name and format
LOGWISP_PIPELINES_0_NAME=app
LOGWISP_PIPELINES_0_FORMAT=json
# Format options
LOGWISP_PIPELINES_0_FORMAT_OPTIONS_PRETTY=true
LOGWISP_PIPELINES_0_FORMAT_OPTIONS_TIMESTAMP_FIELD=ts
LOGWISP_PIPELINES_0_FORMAT_OPTIONS_LEVEL_FIELD=severity
```
### Filters
```bash
# Include filter
LOGWISP_PIPELINES_0_FILTERS_0_TYPE=include
LOGWISP_PIPELINES_0_FILTERS_0_LOGIC=or
LOGWISP_PIPELINES_0_FILTERS_0_PATTERNS='["ERROR","WARN"]'
# Exclude filter
LOGWISP_PIPELINES_0_FILTERS_1_TYPE=exclude
LOGWISP_PIPELINES_0_FILTERS_1_PATTERNS='["DEBUG"]'
```
### HTTP Source
```bash
LOGWISP_PIPELINES_0_SOURCES_0_TYPE=http
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PORT=8081
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_INGEST_PATH=/ingest
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_BUFFER_SIZE=1000
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_RATE_LIMIT_ENABLED=true
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_RATE_LIMIT_REQUESTS_PER_SECOND=10.0
```
### TCP Source
```bash
LOGWISP_PIPELINES_0_SOURCES_0_TYPE=tcp
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PORT=9091
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_BUFFER_SIZE=1000
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_RATE_LIMIT_ENABLED=true
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_RATE_LIMIT_REQUESTS_PER_SECOND=5.0
```
### HTTP Sink Options
```bash
# Basic
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_STREAM_PATH=/stream
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_STATUS_PATH=/status
# Heartbeat
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_ENABLED=true
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_INTERVAL_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_FORMAT=comment
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_INCLUDE_TIMESTAMP=true
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_INCLUDE_STATS=false
# Rate Limiting
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_ENABLED=true
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_REQUESTS_PER_SECOND=10.0
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_BURST_SIZE=20
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_LIMIT_BY=ip
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_MAX_CONNECTIONS_PER_IP=5
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_MAX_TOTAL_CONNECTIONS=100
```
### HTTP Client Sink
```bash
LOGWISP_PIPELINES_0_SINKS_0_TYPE=http_client
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_URL=https://log-server.com/ingest
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_BATCH_SIZE=100
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_BATCH_DELAY_MS=5000
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_TIMEOUT_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MAX_RETRIES=3
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RETRY_DELAY_MS=1000
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RETRY_BACKOFF=2.0
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEADERS='{"Authorization":"Bearer <API_KEY_HERE>"}'
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_INSECURE_SKIP_VERIFY=false
```
### TCP Client Sink
```bash
LOGWISP_PIPELINES_0_SINKS_0_TYPE=tcp_client
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_ADDRESS=remote-server.com:9090
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_DIAL_TIMEOUT_SECONDS=10
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_WRITE_TIMEOUT_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_KEEP_ALIVE_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RECONNECT_DELAY_MS=1000
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MAX_RECONNECT_DELAY_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RECONNECT_BACKOFF=1.5
```
### File Sink
```bash
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_DIRECTORY=/var/log/logwisp
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_NAME=app
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MAX_SIZE_MB=100
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MAX_TOTAL_SIZE_MB=1000
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RETENTION_HOURS=168
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MIN_DISK_FREE_MB=1000
```
### Console Sinks
```bash
LOGWISP_PIPELINES_0_SINKS_0_TYPE=stdout
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_BUFFER_SIZE=500
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_TARGET=stdout
```
## Example
```bash
#!/usr/bin/env bash
# General settings
export LOGWISP_DISABLE_STATUS_REPORTER=false
# Logging
export LOGWISP_LOGGING_OUTPUT=both
export LOGWISP_LOGGING_LEVEL=info
# Pipeline 0: Application logs
export LOGWISP_PIPELINES_0_NAME=app
export LOGWISP_PIPELINES_0_SOURCES_0_TYPE=directory
export LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PATH=/var/log/myapp
export LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PATTERN="*.log"
# Filters
export LOGWISP_PIPELINES_0_FILTERS_0_TYPE=include
export LOGWISP_PIPELINES_0_FILTERS_0_PATTERNS='["ERROR","WARN"]'
# HTTP sink
export LOGWISP_PIPELINES_0_SINKS_0_TYPE=http
export LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_PORT=8080
export LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_ENABLED=true
export LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_REQUESTS_PER_SECOND=25.0
# Pipeline 1: System logs
export LOGWISP_PIPELINES_1_NAME=system
export LOGWISP_PIPELINES_1_SOURCES_0_TYPE=file
export LOGWISP_PIPELINES_1_SOURCES_0_OPTIONS_PATH=/var/log/syslog
# TCP sink
export LOGWISP_PIPELINES_1_SINKS_0_TYPE=tcp
export LOGWISP_PIPELINES_1_SINKS_0_OPTIONS_PORT=9090
# Pipeline 2: Remote forwarding
export LOGWISP_PIPELINES_2_NAME=forwarder
export LOGWISP_PIPELINES_2_SOURCES_0_TYPE=http
export LOGWISP_PIPELINES_2_SOURCES_0_OPTIONS_PORT=8081
export LOGWISP_PIPELINES_2_SOURCES_0_OPTIONS_INGEST_PATH=/logs
# HTTP client sink
export LOGWISP_PIPELINES_2_SINKS_0_TYPE=http_client
export LOGWISP_PIPELINES_2_SINKS_0_OPTIONS_URL=https://log-aggregator.example.com/ingest
export LOGWISP_PIPELINES_2_SINKS_0_OPTIONS_BATCH_SIZE=100
export LOGWISP_PIPELINES_2_SINKS_0_OPTIONS_HEADERS='{"Authorization":"Bearer <API_KEY_HERE>"}'
logwisp
```
## Precedence
1. Command-line flags (highest)
2. Environment variables
3. Configuration file
4. Defaults (lowest)
# Filter Guide
LogWisp filters control which log entries pass through the pipeline using regular-expression pattern matching.
## How Filters Work
- **Include**: Only matching logs pass (whitelist)
- **Exclude**: Matching logs are dropped (blacklist)
- Multiple filters apply sequentially; an entry must pass all of them
## Configuration
```toml
[[pipelines.filters]]
type = "include" # or "exclude"
logic = "or" # or "and"
patterns = [
    "pattern1",
    "pattern2"
]
```
### Filter Types
#### Include Filter
Only entries matching the patterns pass through.
```toml
[[pipelines.filters]]
type = "include"
logic = "or" # or|and
patterns = [
    "ERROR",
    "WARN",
    "CRITICAL"
]
```
#### Exclude Filter
Entries matching the patterns are dropped.
```toml
[[pipelines.filters]]
type = "exclude"
patterns = [
    "DEBUG",
    "TRACE",
    "health-check"
]
```
### Configuration Options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `type` | string | Required | Filter type (include/exclude) |
| `logic` | string | "or" | Pattern matching logic (or/and) |
| `patterns` | []string | Required | Pattern list |
### Logic Operators
- **OR**: Match ANY pattern (default)
- **AND**: Match ALL patterns
```toml
# OR logic
logic = "or"
patterns = ["ERROR", "FAIL"]
# Matches: "ERROR: disk full" OR "FAIL: timeout"
# AND logic
logic = "and"
patterns = ["database", "timeout", "ERROR"]
# Matches: "ERROR: database connection timeout"
# Not: "ERROR: file not found"
```
## Pattern Syntax
Patterns use Go regular expressions (RE2):
```toml
"ERROR"           # Substring match
"(?i)error"       # Case-insensitive
"\\berror\\b"     # Word boundaries
"^ERROR"          # Start of line
"ERROR$"          # End of line
"error|fail|warn" # Alternatives
```
### Basic Patterns
- **Literal match**: `"ERROR"` - matches "ERROR" anywhere
- **Case-insensitive**: `"(?i)error"` - matches "error", "ERROR", "Error"
- **Word boundary**: `"\\berror\\b"` - matches whole word only
### Advanced Patterns
- **Alternation**: `"ERROR|WARN|FATAL"`
- **Character classes**: `"[0-9]{3}"`
- **Wildcards**: `".*exception.*"`
- **Line anchors**: `"^ERROR"` (start), `"ERROR$"` (end)
### Special Characters
Escape special regex characters with a backslash:
- `.` becomes `\\.`
- `*` becomes `\\*`
- `[` becomes `\\[`
- `(` becomes `\\(`
## Filter Logic
### OR Logic (default)
Entry passes if ANY pattern matches:
```toml
logic = "or"
patterns = ["ERROR", "WARN"]
# Passes: "ERROR in module", "WARN: low memory"
# Blocks: "INFO: started"
```
### AND Logic
Entry passes only if ALL patterns match:
```toml
logic = "and"
patterns = ["database", "ERROR"]
# Passes: "ERROR: database connection failed"
# Blocks: "ERROR: file not found"
```
## Common Patterns
### Log Levels
```toml
patterns = [
    "\\[(ERROR|WARN|INFO)\\]",   # [ERROR] format
    "(?i)\\b(error|warning)\\b", # Word boundaries
    "level=(error|warn)",        # key=value format
]
```
### Application Errors
```toml
# Java
patterns = [
    "Exception",
    "at .+\\.java:[0-9]+",
    "NullPointerException"
]
# Python
patterns = [
    "Traceback",
    "File \".+\\.py\", line [0-9]+",
    "ValueError|TypeError"
]
# Go
patterns = [
    "panic:",
    "goroutine [0-9]+",
    "runtime error:"
]
```
### Performance Issues
```toml
patterns = [
    "took [0-9]{4,}ms", # >999ms operations
    "timeout|timed out",
    "slow query",
    "high cpu|cpu usage: [8-9][0-9]%"
]
```
### HTTP Patterns
```toml
patterns = [
    "status[=:][4-5][0-9]{2}", # 4xx/5xx codes
    "HTTP/[0-9.]+ [4-5][0-9]{2}",
    "\"/api/v[0-9]+/", # API paths
]
```
## Filter Chains
Multiple filters execute sequentially:
```toml
# First filter: include errors and warnings
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "WARN"]
# Second filter: exclude test environments
[[pipelines.filters]]
type = "exclude"
patterns = ["test-env", "staging"]
```
Processing order:
1. Entry arrives from source
2. Include filter evaluates
3. If passed, exclude filter evaluates
4. If passed all filters, entry continues to sink
### Error Monitoring
```toml
# Include errors
[[pipelines.filters]]
type = "include"
patterns = ["(?i)\\b(error|fail|critical)\\b"]
# Exclude known non-issues
[[pipelines.filters]]
type = "exclude"
patterns = ["Error: Expected", "/health"]
```
## Performance Considerations
### Pattern Compilation
- Patterns compile once at startup
- Invalid patterns cause startup failure
- Complex patterns may impact performance
### Optimization Tips
- Place most selective filters first
- Use simple patterns when possible
- Combine related patterns with alternation
- Avoid excessive wildcards (`.*`)
## Filter Statistics
Filters track:
- Total entries evaluated
- Entries passed
- Entries blocked
- Processing time per pattern
### API Monitoring
```toml
# Include API calls
[[pipelines.filters]]
type = "include"
patterns = ["/api/", "/v[0-9]+/"]
# Exclude successful responses
[[pipelines.filters]]
type = "exclude"
patterns = ["\" 2[0-9]{2} "]
```
## Common Use Cases
### Log Level Filtering
```toml
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "WARN", "FATAL", "CRITICAL"]
```
### Application Filtering
```toml
[[pipelines.filters]]
type = "include"
patterns = ["app1", "app2", "app3"]
```
### Noise Reduction
```toml
[[pipelines.filters]]
type = "exclude"
patterns = [
    "health-check",
    "ping",
    "/metrics",
    "heartbeat"
]
```
### Security Filtering
```toml
[[pipelines.filters]]
type = "exclude"
patterns = [
    "password",
    "token",
    "api[_-]key",
    "secret"
]
```
## Performance Tips
1. **Use anchors**: `^ERROR` is faster than `ERROR`
2. **Avoid nested quantifiers**: `((a+)+)+`
3. **Use non-capturing groups**: `(?:error|warn)`
4. **Order by frequency**: Most common patterns first
5. **Prefer simple patterns**: Faster than complex regex
## Testing Filters
```bash
# Test configuration
echo "[ERROR] Test" >> test.log
echo "[INFO] Test" >> test.log
# Run with debug
logwisp --log-level debug
# Check output
curl -N http://localhost:8080/stream
```
### Multi-stage Filtering
```toml
# Include production logs
[[pipelines.filters]]
type = "include"
patterns = ["prod-", "production"]
# Include only errors
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "EXCEPTION", "FATAL"]
# Exclude known issues
[[pipelines.filters]]
type = "exclude"
patterns = ["ECONNRESET", "broken pipe"]
```
## Regex Pattern Guide
LogWisp uses Go's standard regex engine (RE2). It includes most common features but omits backtracking-heavy syntax.
For complex logic, chain multiple filters (e.g., an `include` followed by an `exclude`) rather than writing one complex regex.
### Basic Matching
| Pattern | Description | Example |
| :--- | :--- | :--- |
| `literal` | Matches the exact text. | `"ERROR"` matches any log with "ERROR". |
| `.` | Matches any single character (except newline). | `"user."` matches "userA", "userB", etc. |
| `a\|b` | Matches expression `a` OR expression `b`. | `"error\|fail"` matches lines with "error" or "fail". |
### Anchors and Boundaries
Anchors tie your pattern to a specific position in the line.
| Pattern | Description | Example |
| :--- | :--- | :--- |
| `^` | Matches the beginning of the line. | `"^ERROR"` matches lines *starting* with "ERROR". |
| `$` | Matches the end of the line. | `"crashed$"` matches lines *ending* with "crashed". |
| `\b` | Matches a word boundary. | `"\berror\b"` matches "error" but not "terrorist". |
### Character Classes
| Pattern | Description | Example |
| :--- | :--- | :--- |
| `[abc]` | Matches `a`, `b`, or `c`. | `"[aeiou]"` matches any vowel. |
| `[^abc]` | Matches any character *except* `a`, `b`, or `c`. | `"[^0-9]"` matches any non-digit. |
| `[a-z]` | Matches any character in the range `a` to `z`. | `"[a-zA-Z]"` matches any letter. |
| `\d` | Matches any digit (`[0-9]`). | `\d{3}` matches three digits, like "123". |
| `\w` | Matches any word character (`[a-zA-Z0-9_]`). | `\w+` matches one or more word characters. |
| `\s` | Matches any whitespace character. | `\s+` matches one or more spaces or tabs. |
### Quantifiers
Quantifiers specify how many times a character or group must appear.
| Pattern | Description | Example |
| :--- | :--- | :--- |
| `*` | Zero or more times. | `"a*"` matches "", "a", "aa". |
| `+` | One or more times. | `"a+"` matches "a", "aa", but not "". |
| `?` | Zero or one time. | `"colou?r"` matches "color" and "colour". |
| `{n}` | Exactly `n` times. | `\d{4}` matches a 4-digit number. |
| `{n,}` | `n` or more times. | `\d{2,}` matches numbers with 2 or more digits. |
| `{n,m}` | Between `n` and `m` times. | `\d{1,3}` matches numbers with 1 to 3 digits. |
### Grouping
| Pattern | Description | Example |
| :--- | :--- | :--- |
| `(...)` | Groups an expression and captures the match. | `(ERROR\|WARN)` captures "ERROR" or "WARN". |
| `(?:...)`| Groups an expression *without* capturing. Faster. | `(?:ERROR\|WARN)` is more efficient if you just need to group. |
### Flags and Modifiers
Flags are placed at the beginning of a pattern to change its behavior.
| Pattern | Description |
| :--- | :--- |
| `(?i)` | Case-insensitive matching. |
| `(?m)` | Multi-line mode (`^` and `$` match start/end of lines). |
**Example:** `"(?i)error"` matches "error", "ERROR", and "Error".
### Practical Examples for Logging
* **Match an IP Address:**
```
\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b
```
* **Match HTTP 4xx or 5xx Status Codes:**
```
"status[= ](4|5)\d{2}"
```
* **Match a slow database query (>100ms):**
```
"Query took [1-9]\d{2,}ms"
```
* **Match key-value pairs:**
```
"user=(admin|guest)"
```
* **Match Java exceptions:**
```
"Exception:|at .+\.java:\d+"
```
# Formatters
LogWisp formatters transform log entries before output to sinks.
## Formatter Types
### Raw Formatter
Outputs the log message as-is with optional newline.
```toml
[pipelines.format]
type = "raw"
[pipelines.format.raw]
add_new_line = true
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `add_new_line` | bool | true | Append newline to messages |
### JSON Formatter
Produces structured JSON output.
```toml
[pipelines.format]
type = "json"
[pipelines.format.json]
pretty = false
timestamp_field = "timestamp"
level_field = "level"
message_field = "message"
source_field = "source"
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `pretty` | bool | false | Pretty print JSON |
| `timestamp_field` | string | "timestamp" | Field name for timestamp |
| `level_field` | string | "level" | Field name for log level |
| `message_field` | string | "message" | Field name for message |
| `source_field` | string | "source" | Field name for source |
**Output Structure:**
```json
{
"timestamp": "2024-01-01T12:00:00Z",
"level": "ERROR",
"source": "app",
"message": "Connection failed"
}
```
### Text Formatter
Template-based text formatting.
```toml
[pipelines.format]
type = "txt"
[pipelines.format.txt]
template = "[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Source}} - {{.Message}}"
timestamp_format = "2006-01-02T15:04:05.000Z07:00"
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `template` | string | See below | Go template string |
| `timestamp_format` | string | RFC3339 | Go time format string |
**Default Template:**
```
[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Source}} - {{.Message}}{{ if .Fields }} {{.Fields}}{{ end }}
```
## Template Functions
Available functions in text templates:
| Function | Description | Example |
|----------|-------------|---------|
| `FmtTime` | Format timestamp | `{{.Timestamp \| FmtTime}}` |
| `ToUpper` | Convert to uppercase | `{{.Level \| ToUpper}}` |
| `ToLower` | Convert to lowercase | `{{.Source \| ToLower}}` |
| `TrimSpace` | Remove whitespace | `{{.Message \| TrimSpace}}` |
## Template Variables
Available variables in templates:
| Variable | Type | Description |
|----------|------|-------------|
| `.Timestamp` | time.Time | Entry timestamp |
| `.Level` | string | Log level |
| `.Source` | string | Source identifier |
| `.Message` | string | Log message |
| `.Fields` | string | Additional fields (JSON) |
## Time Format Strings
Common Go time format patterns:
| Pattern | Example Output |
|---------|---------------|
| `2006-01-02T15:04:05Z07:00` | 2024-01-02T15:04:05Z |
| `2006-01-02 15:04:05` | 2024-01-02 15:04:05 |
| `Jan 2 15:04:05` | Jan 2 15:04:05 |
| `15:04:05.000` | 15:04:05.123 |
| `2006/01/02` | 2024/01/02 |
## Format Selection
### Default Behavior
If no formatter is specified:
- **HTTP/TCP sinks**: JSON format
- **Console/File sinks**: Raw format
- **Client sinks**: JSON format
### Per-Pipeline Configuration
Each pipeline can have its own formatter:
```toml
[[pipelines]]
name = "json-pipeline"
[pipelines.format]
type = "json"
[[pipelines]]
name = "text-pipeline"
[pipelines.format]
type = "txt"
```
## Message Processing
### JSON Message Handling
When using the JSON formatter with JSON log messages, LogWisp:
1. Attempts to parse message as JSON
2. Merges fields with LogWisp metadata
3. LogWisp fields take precedence
4. Falls back to string if parsing fails
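The merge behavior can be sketched as follows; the `mergeEntry` helper and its field names are illustrative assumptions, not LogWisp's internal API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// mergeEntry sketches the documented behavior: parse the message as JSON,
// merge its fields with LogWisp metadata, and let metadata win on conflict.
func mergeEntry(message string, meta map[string]any) map[string]any {
	out := map[string]any{}
	var parsed map[string]any
	if err := json.Unmarshal([]byte(message), &parsed); err == nil {
		for k, v := range parsed {
			out[k] = v
		}
	} else {
		// Fallback: keep the raw string when the message is not valid JSON.
		out["message"] = message
	}
	for k, v := range meta { // LogWisp fields take precedence
		out[k] = v
	}
	return out
}

func main() {
	meta := map[string]any{"level": "ERROR", "source": "app"}
	merged := mergeEntry(`{"user":"alice","level":"debug"}`, meta)
	fmt.Println(merged["user"], merged["level"]) // alice ERROR
}
```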
### Field Preservation
LogWisp metadata always includes:
- Timestamp (from source or current time)
- Level (detected or default)
- Source (origin identifier)
- Message (original content)
## Performance Characteristics
### Formatter Performance
Relative performance (fastest to slowest):
1. **Raw**: Direct passthrough
2. **Text**: Template execution
3. **JSON**: Serialization
4. **JSON (pretty)**: Formatted serialization
### Optimization Tips
- Use raw format for high throughput
- Cache template compilation (automatic)
- Minimize template complexity
- Avoid pretty JSON in production
## Common Configurations
### Structured Logging
```toml
[pipelines.format]
type = "json"
[pipelines.format.json]
pretty = false
```
### Human-Readable Logs
```toml
[pipelines.format]
type = "txt"
[pipelines.format.txt]
template = "{{.Timestamp | FmtTime}} [{{.Level}}] {{.Message}}"
timestamp_format = "15:04:05"
```
### Syslog Format
```toml
[pipelines.format]
type = "txt"
[pipelines.format.txt]
template = "{{.Timestamp | FmtTime}} {{.Source}} {{.Level}}: {{.Message}}"
timestamp_format = "Jan 2 15:04:05"
```
### Minimal Output
```toml
[pipelines.format]
type = "txt"
[pipelines.format.txt]
template = "{{.Message}}"
```
@ -1,77 +1,76 @@
# Installation Guide
LogWisp installation and service configuration for Linux and FreeBSD systems.
## Installation Methods
### Pre-built Binaries
Download the latest release binary for your platform and install to `/usr/local/bin`:
```bash
# Linux amd64
wget https://github.com/yourusername/logwisp/releases/latest/download/logwisp-linux-amd64
chmod +x logwisp-linux-amd64
sudo mv logwisp-linux-amd64 /usr/local/bin/logwisp

# FreeBSD amd64
fetch https://github.com/yourusername/logwisp/releases/latest/download/logwisp-freebsd-amd64
chmod +x logwisp-freebsd-amd64
sudo mv logwisp-freebsd-amd64 /usr/local/bin/logwisp
```
### Building from Source
Requires Go 1.24 or newer:
```bash
git clone https://github.com/yourusername/logwisp.git
cd logwisp
go build -o logwisp ./src/cmd/logwisp
sudo install -m 755 logwisp /usr/local/bin/
```
### Go Install Method
Install directly using Go (version information will not be embedded):
```bash
go install github.com/yourusername/logwisp/src/cmd/logwisp@latest
```
## Service Configuration
### Linux (systemd)
Create systemd service file `/etc/systemd/system/logwisp.service`:
```ini
[Unit]
Description=LogWisp Log Transport Service
After=network.target

[Service]
Type=simple
User=logwisp
Group=logwisp
ExecStart=/usr/local/bin/logwisp -c /etc/logwisp/logwisp.toml
Restart=on-failure
RestartSec=10
StandardOutput=journal
StandardError=journal
WorkingDirectory=/var/lib/logwisp

[Install]
WantedBy=multi-user.target
```
Setup service user and directories:
```bash
sudo useradd -r -s /bin/false logwisp
sudo mkdir -p /etc/logwisp /var/lib/logwisp /var/log/logwisp
sudo chown logwisp:logwisp /var/lib/logwisp /var/log/logwisp
sudo systemctl daemon-reload
sudo systemctl enable logwisp
sudo systemctl start logwisp
```
### FreeBSD (rc.d)
Create rc script `/usr/local/etc/rc.d/logwisp`:
```sh
#!/bin/sh
# PROVIDE: logwisp
# REQUIRE: DAEMON NETWORKING
# KEYWORD: shutdown

. /etc/rc.subr

name="logwisp"
rcvar="${name}_enable"
pidfile="/var/run/${name}.pid"
command="/usr/local/bin/logwisp"
command_args="-c /usr/local/etc/logwisp/logwisp.toml"

load_rc_config $name
: ${logwisp_enable:="NO"}
: ${logwisp_config:="/usr/local/etc/logwisp/logwisp.toml"}

run_rc_command "$1"
```
Setup service:
```bash
sudo chmod +x /usr/local/etc/rc.d/logwisp
sudo pw useradd logwisp -d /nonexistent -s /usr/sbin/nologin
sudo mkdir -p /usr/local/etc/logwisp /var/log/logwisp
sudo chown logwisp:logwisp /var/log/logwisp
sudo sysrc logwisp_enable="YES"
sudo service logwisp start
```
## Directory Structure
Standard installation directories:
| Purpose | Linux | FreeBSD |
|---------|-------|---------|
| Binary | `/usr/local/bin/logwisp` | `/usr/local/bin/logwisp` |
| Configuration | `/etc/logwisp/` | `/usr/local/etc/logwisp/` |
| Working Directory | `/var/lib/logwisp/` | `/var/db/logwisp/` |
| Log Files | `/var/log/logwisp/` | `/var/log/logwisp/` |
| PID File | `/var/run/logwisp.pid` | `/var/run/logwisp.pid` |
## Post-Installation Verification
Verify the installation:
```bash
# Check version
logwisp version

# Test configuration
logwisp -c /etc/logwisp/logwisp.toml --disable-status-reporter

# Check service status (Linux)
sudo systemctl status logwisp

# Check service status (FreeBSD)
sudo service logwisp status
```
## Uninstallation
### Linux
```bash
sudo systemctl stop logwisp
sudo systemctl disable logwisp
sudo rm /usr/local/bin/logwisp
sudo rm /etc/systemd/system/logwisp.service
sudo rm -rf /etc/logwisp /var/lib/logwisp /var/log/logwisp
sudo userdel logwisp
```
### FreeBSD
```bash
sudo service logwisp stop
sudo sysrc -x logwisp_enable
sudo rm /usr/local/bin/logwisp
sudo rm /usr/local/etc/rc.d/logwisp
sudo rm -rf /usr/local/etc/logwisp /var/db/logwisp /var/log/logwisp
sudo pw userdel logwisp
```
doc/networking.md Normal file
@ -0,0 +1,289 @@
# Networking
Network configuration for LogWisp connections, including TLS, rate limiting, and access control.
## TLS Configuration
### TLS Support Matrix
| Component | TLS Support | Notes |
|-----------|-------------|-------|
| HTTP Source | ✓ | Full TLS 1.2/1.3 |
| HTTP Sink | ✓ | Full TLS 1.2/1.3 |
| HTTP Client | ✓ | Client certificates |
| TCP Source | ✗ | No encryption |
| TCP Sink | ✗ | No encryption |
| TCP Client | ✗ | No encryption |
### Server TLS Configuration
```toml
[pipelines.sources.http.tls]
enabled = true
cert_file = "/path/to/server.pem"
key_file = "/path/to/server.key"
ca_file = "/path/to/ca.pem"
min_version = "TLS1.2" # TLS1.2|TLS1.3
client_auth = false
client_ca_file = "/path/to/client-ca.pem"
verify_client_cert = true
```
### Client TLS Configuration
```toml
[pipelines.sinks.http_client.tls]
enabled = true
server_name = "logs.example.com"
skip_verify = false
cert_file = "/path/to/client.pem" # For mTLS
key_file = "/path/to/client.key" # For mTLS
```
### TLS Certificate Generation
Using the `tls` command:
```bash
# Generate CA certificate
logwisp tls -ca -o myca
# Generate server certificate
logwisp tls -server -ca-cert myca.pem -ca-key myca.key -host localhost,server.example.com -o server
# Generate client certificate
logwisp tls -client -ca-cert myca.pem -ca-key myca.key -o client
```
Command options:
| Flag | Description |
|------|-------------|
| `-ca` | Generate CA certificate |
| `-server` | Generate server certificate |
| `-client` | Generate client certificate |
| `-host` | Comma-separated hostnames/IPs |
| `-o` | Output file prefix |
| `-days` | Certificate validity (default: 365) |
## Network Rate Limiting
### Configuration Options
```toml
[pipelines.sources.http.net_limit]
enabled = true
max_connections_per_ip = 10
max_connections_total = 100
requests_per_second = 100.0
burst_size = 200
response_code = 429
response_message = "Rate limit exceeded"
ip_whitelist = ["192.168.1.0/24"]
ip_blacklist = ["10.0.0.0/8"]
```
### Rate Limiting Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `enabled` | bool | Enable rate limiting |
| `max_connections_per_ip` | int | Per-IP connection limit |
| `max_connections_total` | int | Global connection limit |
| `requests_per_second` | float | Request rate limit |
| `burst_size` | int | Token bucket burst capacity |
| `response_code` | int | HTTP response code when limited |
| `response_message` | string | Response message when limited |
### IP Access Control
**Whitelist**: Only specified IPs/networks allowed
```toml
ip_whitelist = [
"192.168.1.0/24", # Local network
"10.0.0.0/8", # Private network
"203.0.113.5" # Specific IP
]
```
**Blacklist**: Specified IPs/networks denied
```toml
ip_blacklist = [
"192.168.1.100", # Blocked host
"10.0.0.0/16" # Blocked subnet
]
```
Processing order:
1. Blacklist (immediate deny if matched)
2. Whitelist (must match if configured)
3. Rate limiting
4. Authentication
## Connection Management
### TCP Keep-Alive
```toml
[pipelines.sources.tcp]
keep_alive = true
keep_alive_period_ms = 30000 # 30 seconds
```
Benefits:
- Detect dead connections
- Prevent connection timeout
- Maintain NAT mappings
### Connection Timeouts
```toml
[pipelines.sources.http]
read_timeout_ms = 10000 # 10 seconds
write_timeout_ms = 10000 # 10 seconds
[pipelines.sinks.tcp_client]
dial_timeout = 10 # Connection timeout
write_timeout = 30 # Write timeout
read_timeout = 10 # Read timeout
```
### Connection Limits
Global limits:
```toml
max_connections = 100 # Total concurrent connections
```
Per-IP limits:
```toml
max_connections_per_ip = 10
```
## Heartbeat Configuration
Keep connections alive with periodic heartbeats:
### HTTP Sink Heartbeat
```toml
[pipelines.sinks.http.heartbeat]
enabled = true
interval_ms = 30000
include_timestamp = true
include_stats = false
format = "comment" # comment|event|json
```
Formats:
- **comment**: SSE comment (`: heartbeat`)
- **event**: SSE event with data
- **json**: JSON-formatted heartbeat
### TCP Sink Heartbeat
```toml
[pipelines.sinks.tcp.heartbeat]
enabled = true
interval_ms = 30000
include_timestamp = true
include_stats = false
format = "json" # json|txt
```
## Network Protocols
### HTTP/HTTPS
- HTTP/1.1 and HTTP/2 support
- Persistent connections
- Chunked transfer encoding
- Server-Sent Events (SSE)
### TCP
- Raw TCP sockets
- Newline-delimited protocol
- Binary-safe transmission
- No encryption available
## Port Configuration
### Default Ports
| Service | Default Port | Protocol |
|---------|--------------|----------|
| HTTP Source | 8081 | HTTP/HTTPS |
| HTTP Sink | 8080 | HTTP/HTTPS |
| TCP Source | 9091 | TCP |
| TCP Sink | 9090 | TCP |
### Port Conflict Prevention
LogWisp validates port usage at startup:
- Detects port conflicts across pipelines
- Prevents duplicate bindings
- Suggests alternative ports
## Network Security
### Best Practices
1. **Use TLS for HTTP** connections when possible
2. **Implement rate limiting** to prevent DoS
3. **Configure IP whitelists** for restricted access
4. **Enable authentication** for all network endpoints
5. **Use non-standard ports** to reduce scanning exposure
6. **Monitor connection metrics** for anomalies
7. **Set appropriate timeouts** to prevent resource exhaustion
### Security Warnings
- TCP connections are **always unencrypted**
- HTTP Basic/Token auth **requires TLS**
- Avoid `skip_verify` in production
- Never expose unauthenticated endpoints publicly
## Load Balancing
### Client-Side Load Balancing
Configure multiple endpoints (future feature):
```toml
[[pipelines.sinks.http_client]]
urls = [
"https://log1.example.com/ingest",
"https://log2.example.com/ingest"
]
strategy = "round-robin" # round-robin|random|least-conn
```
### Server-Side Considerations
- Use reverse proxy for load distribution
- Configure session affinity if needed
- Monitor individual instance health
## Troubleshooting
### Common Issues
**Connection Refused**
- Check firewall rules
- Verify service is running
- Confirm correct port/host
**TLS Handshake Failure**
- Verify certificate validity
- Check certificate chain
- Confirm TLS versions match
**Rate Limit Exceeded**
- Adjust rate limit parameters
- Add IP to whitelist
- Implement client-side throttling
**Connection Timeout**
- Increase timeout values
- Check network latency
- Verify keep-alive settings
doc/operations.md Normal file
@ -0,0 +1,358 @@
# Operations Guide
Running, monitoring, and maintaining LogWisp in production.
## Starting LogWisp
### Manual Start
```bash
# Foreground with default config
logwisp
# Background mode
logwisp --background
# With specific configuration
logwisp --config /etc/logwisp/production.toml
```
### Service Management
**Linux (systemd):**
```bash
sudo systemctl start logwisp
sudo systemctl stop logwisp
sudo systemctl restart logwisp
sudo systemctl status logwisp
```
**FreeBSD (rc.d):**
```bash
sudo service logwisp start
sudo service logwisp stop
sudo service logwisp restart
sudo service logwisp status
```
## Configuration Management
### Hot Reload
Enable automatic configuration reload:
```toml
config_auto_reload = true
```
Or via command line:
```bash
logwisp --config-auto-reload
```
Trigger manual reload:
```bash
kill -HUP $(pidof logwisp)
# or
kill -USR1 $(pidof logwisp)
```
### Configuration Validation
Test configuration without starting:
```bash
logwisp --config test.toml --quiet --disable-status-reporter
```
Check for errors:
- Port conflicts
- Invalid patterns
- Missing required fields
- File permissions
## Monitoring
### Status Reporter
Built-in periodic status logging (30-second intervals):
```
[INFO] Status report active_pipelines=2 time=15:04:05
[INFO] Pipeline status pipeline=app entries_processed=10523
[INFO] Pipeline status pipeline=system entries_processed=5231
```
Disable if not needed:
```toml
disable_status_reporter = true
```
### HTTP Status Endpoint
When using HTTP sink:
```bash
curl http://localhost:8080/status | jq .
```
Response structure:
```json
{
"uptime": "2h15m30s",
"pipelines": {
"default": {
"sources": 1,
"sinks": 2,
"processed": 15234,
"filtered": 523,
"dropped": 12
}
}
}
```
### Metrics Collection
Track via logs:
- Total entries processed
- Entries filtered
- Entries dropped
- Active connections
- Buffer utilization
## Log Management
### LogWisp's Operational Logs
Configuration for LogWisp's own logs:
```toml
[logging]
output = "file"
level = "info"
[logging.file]
directory = "/var/log/logwisp"
name = "logwisp"
max_size_mb = 100
retention_hours = 168
```
### Log Rotation
Automatic rotation based on:
- File size threshold
- Total size limit
- Retention period
Manual rotation:
```bash
# Move current log
mv /var/log/logwisp/logwisp.log /var/log/logwisp/logwisp.log.1
# Send signal to reopen
kill -USR1 $(pidof logwisp)
```
### Log Levels
Operational log levels:
- **debug**: Detailed debugging information
- **info**: General operational messages
- **warn**: Warning conditions
- **error**: Error conditions
Production recommendation: `info` or `warn`
## Performance Tuning
### Buffer Sizing
Adjust buffers based on load:
```toml
# High-volume source
[[pipelines.sources]]
type = "http"
[pipelines.sources.http]
buffer_size = 5000 # Increase for burst traffic
# Slow consumer sink
[[pipelines.sinks]]
type = "http_client"
[pipelines.sinks.http_client]
buffer_size = 10000 # Larger buffer for slow endpoints
batch_size = 500 # Larger batches
```
### Rate Limiting
Protect against overload:
```toml
[pipelines.rate_limit]
rate = 1000.0 # Entries per second
burst = 2000.0 # Burst capacity
policy = "drop" # Drop excess entries
```
### Connection Limits
Prevent resource exhaustion:
```toml
[pipelines.sources.http.net_limit]
max_connections_total = 1000
max_connections_per_ip = 50
```
## Troubleshooting
### Common Issues
**High Memory Usage**
- Check buffer sizes
- Monitor goroutine count
- Review retention settings
**Dropped Entries**
- Increase buffer sizes
- Add rate limiting
- Check sink performance
**Connection Errors**
- Verify network connectivity
- Check firewall rules
- Review TLS certificates
### Debug Mode
Enable detailed logging:
```bash
logwisp --logging.level=debug --logging.output=stderr
```
### Health Checks
Implement external monitoring:
```bash
#!/bin/bash
# Health check script
if ! curl -sf http://localhost:8080/status > /dev/null; then
echo "LogWisp health check failed"
exit 1
fi
```
## Backup and Recovery
### Configuration Backup
```bash
# Backup configuration
cp /etc/logwisp/logwisp.toml /backup/logwisp-$(date +%Y%m%d).toml
# Version control
git add /etc/logwisp/
git commit -m "LogWisp config update"
```
### State Recovery
LogWisp maintains minimal state:
- File read positions (automatic)
- Connection state (automatic)
Recovery after crash:
1. Service automatically restarts (systemd/rc.d)
2. File sources resume from last position
3. Network sources accept new connections
4. Clients reconnect automatically
## Security Operations
### Certificate Management
Monitor certificate expiration:
```bash
openssl x509 -in /path/to/cert.pem -noout -enddate
```
Rotate certificates:
1. Generate new certificates
2. Update configuration
3. Reload service (SIGHUP)
### Credential Rotation
Update authentication:
```bash
# Generate new credentials
logwisp auth -u admin -b
# Update configuration
vim /etc/logwisp/logwisp.toml
# Reload service
kill -HUP $(pidof logwisp)
```
### Access Auditing
Monitor access patterns:
- Review connection logs
- Track authentication failures
- Monitor rate limit hits
## Maintenance
### Planned Maintenance
1. Notify users of maintenance window
2. Stop accepting new connections
3. Drain existing connections
4. Perform maintenance
5. Restart service
### Upgrade Process
1. Download new version
2. Test with current configuration
3. Stop old version
4. Install new version
5. Start service
6. Verify operation
### Cleanup Tasks
Regular maintenance:
- Remove old log files
- Clean temporary files
- Verify disk space
- Update documentation
## Disaster Recovery
### Backup Strategy
- Configuration files: Daily
- TLS certificates: After generation
- Authentication credentials: Secure storage
### Recovery Procedures
Service failure:
1. Check service status
2. Review error logs
3. Verify configuration
4. Restart service
Data loss:
1. Restore configuration from backup
2. Regenerate certificates if needed
3. Recreate authentication credentials
4. Restart service
### Business Continuity
- Run multiple instances for redundancy
- Use load balancer for distribution
- Implement monitoring alerts
- Document recovery procedures
@ -1,215 +0,0 @@
# Quick Start Guide
Get LogWisp up and running in minutes:
## Installation
### From Source
```bash
git clone https://github.com/lixenwraith/logwisp.git
cd logwisp
make install
```
### Using Go Install
```bash
go install github.com/lixenwraith/logwisp/src/cmd/logwisp@latest
```
## Basic Usage
### 1. Monitor Current Directory
Start LogWisp with defaults (monitors `*.log` files in current directory):
```bash
logwisp
```
### 2. Stream Logs
Connect to the log stream:
```bash
# SSE stream
curl -N http://localhost:8080/stream
# Check status
curl http://localhost:8080/status | jq .
```
### 3. Generate Test Logs
```bash
echo "[ERROR] Something went wrong!" >> test.log
echo "[INFO] Application started" >> test.log
echo "[WARN] Low memory warning" >> test.log
```
## Common Scenarios
### Monitor Specific Directory
Create `~/.config/logwisp/logwisp.toml`:
```toml
[[pipelines]]
name = "myapp"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/myapp", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
```
### Filter Only Errors
```toml
[[pipelines]]
name = "errors"
[[pipelines.sources]]
type = "directory"
options = { path = "./", pattern = "*.log" }
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "WARN", "CRITICAL"]
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
```
### Multiple Outputs
Send logs to both HTTP stream and file:
```toml
[[pipelines]]
name = "multi-output"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
# HTTP streaming
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
# File archival
[[pipelines.sinks]]
type = "file"
options = { directory = "/var/log/archive", name = "app" }
```
### TCP Streaming
For high-performance streaming:
```toml
[[pipelines]]
name = "highperf"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
[[pipelines.sinks]]
type = "tcp"
options = { port = 9090, buffer_size = 5000 }
```
Connect with netcat:
```bash
nc localhost 9090
```
### Router Mode
Run multiple pipelines on shared ports:
```bash
logwisp --router
# Access pipelines at:
# http://localhost:8080/myapp/stream
# http://localhost:8080/errors/stream
# http://localhost:8080/status (global)
```
### Remote Log Collection
Receive logs via HTTP/TCP and forward to remote servers:
```toml
[[pipelines]]
name = "collector"
# Receive logs via HTTP POST
[[pipelines.sources]]
type = "http"
options = { port = 8081, ingest_path = "/ingest" }
# Forward to remote server
[[pipelines.sinks]]
type = "http_client"
options = {
url = "https://log-server.com/ingest",
batch_size = 100,
headers = { "Authorization" = "Bearer <API_KEY_HERE>" }
}
```
Send logs to collector:
```bash
curl -X POST http://localhost:8081/ingest \
-H "Content-Type: application/json" \
-d '{"message": "Test log", "level": "INFO"}'
```
## Quick Tips
### Enable Debug Logging
```bash
logwisp --logging.level debug --logging.output stderr
```
### Quiet Mode
```bash
logwisp --quiet
```
### Rate Limiting
```toml
[[pipelines.sinks]]
type = "http"
options = {
port = 8080,
rate_limit = {
enabled = true,
requests_per_second = 10.0,
burst_size = 20
}
}
```
### Console Output
```toml
[[pipelines.sinks]]
type = "stdout" # or "stderr"
options = {}
```
### Split Console Output
```toml
# INFO/DEBUG to stdout, ERROR/WARN to stderr
[[pipelines.sinks]]
type = "stdout"
options = { target = "split" }
```
@ -1,125 +0,0 @@
# Rate Limiting Guide
LogWisp provides configurable rate limiting to protect against abuse and ensure fair access.
## How It Works
Token bucket algorithm:
1. Each client gets a bucket with fixed capacity
2. Tokens refill at configured rate
3. Each request consumes one token
4. No tokens = request rejected
## Configuration
```toml
[[pipelines.sinks]]
type = "http" # or "tcp"
options = {
port = 8080,
rate_limit = {
enabled = true,
requests_per_second = 10.0,
burst_size = 20,
limit_by = "ip", # or "global"
max_connections_per_ip = 5,
max_total_connections = 100,
response_code = 429,
response_message = "Rate limit exceeded"
}
}
```
## Strategies
### Per-IP Limiting (Default)
Each IP gets its own bucket:
```toml
limit_by = "ip"
requests_per_second = 10.0
# Client A: 10 req/sec
# Client B: 10 req/sec
```
### Global Limiting
All clients share one bucket:
```toml
limit_by = "global"
requests_per_second = 50.0
# All clients combined: 50 req/sec
```
## Connection Limits
```toml
max_connections_per_ip = 5 # Per IP
max_total_connections = 100 # Total
```
## Response Behavior
### HTTP
Returns JSON with configured status:
```json
{
"error": "Rate limit exceeded",
"retry_after": "60"
}
```
### TCP
Connections silently dropped.
## Examples
### Light Protection
```toml
rate_limit = {
enabled = true,
requests_per_second = 50.0,
burst_size = 100
}
```
### Moderate Protection
```toml
rate_limit = {
enabled = true,
requests_per_second = 10.0,
burst_size = 30,
max_connections_per_ip = 5
}
```
### Strict Protection
```toml
rate_limit = {
enabled = true,
requests_per_second = 2.0,
burst_size = 5,
max_connections_per_ip = 2,
response_code = 503
}
```
## Monitoring
Check statistics:
```bash
curl http://localhost:8080/status | jq '.sinks[0].details.rate_limit'
```
## Testing
```bash
# Test rate limits
for i in {1..20}; do
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/status
done
```
## Tuning
- **requests_per_second**: Expected load
- **burst_size**: 2-3× requests_per_second
- **Connection limits**: Based on memory
@ -1,158 +0,0 @@
# Router Mode Guide
Router mode enables multiple pipelines to share HTTP ports through path-based routing.
## Overview
**Standard mode**: Each pipeline needs its own port
- Pipeline 1: `http://localhost:8080/stream`
- Pipeline 2: `http://localhost:8081/stream`
**Router mode**: Pipelines share ports via paths
- Pipeline 1: `http://localhost:8080/app/stream`
- Pipeline 2: `http://localhost:8080/database/stream`
- Global status: `http://localhost:8080/status`
## Enabling Router Mode
```bash
logwisp --router --config /etc/logwisp/multi-pipeline.toml
```
## Configuration
```toml
# All pipelines can use the same port
[[pipelines]]
name = "app"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 } # Same port OK
[[pipelines]]
name = "database"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/postgresql", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 } # Shared port
```
## Path Structure
Paths are prefixed with pipeline name:
| Pipeline | Config Path | Router Path |
|----------|-------------|-------------|
| `app` | `/stream` | `/app/stream` |
| `app` | `/status` | `/app/status` |
| `database` | `/stream` | `/database/stream` |
### Custom Paths
```toml
[[pipelines.sinks]]
type = "http"
options = {
stream_path = "/logs", # Becomes /app/logs
status_path = "/health" # Becomes /app/health
}
```
## Endpoints
### Pipeline Endpoints
```bash
# SSE stream
curl -N http://localhost:8080/app/stream
# Pipeline status
curl http://localhost:8080/database/status
```
### Global Status
```bash
curl http://localhost:8080/status
```
Returns:
```json
{
"service": "LogWisp Router",
"pipelines": {
"app": { /* stats */ },
"database": { /* stats */ }
},
"total_pipelines": 2
}
```
## Use Cases
### Microservices
```toml
[[pipelines]]
name = "frontend"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/frontend", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
[[pipelines]]
name = "backend"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/backend", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
# Access:
# http://localhost:8080/frontend/stream
# http://localhost:8080/backend/stream
```
### Environment-Based
```toml
[[pipelines]]
name = "prod"
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "WARN"]
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
[[pipelines]]
name = "dev"
# No filters - all logs
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
```
## Limitations
1. **HTTP Only**: Router mode applies only to HTTP/SSE sinks
2. **No TCP Routing**: TCP sinks remain on separate ports
3. **Unique Names**: Pipeline names must be unique to avoid path conflicts
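A TCP sink cannot join the router's shared HTTP port; it needs its own listener. A minimal sketch (port numbers are illustrative):

```toml
[[pipelines.sinks]]
type = "tcp"
options = { port = 9090 } # TCP cannot share the router's 8080
```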
## Load Balancer Integration
```nginx
upstream logwisp {
server logwisp1:8080;
server logwisp2:8080;
}
location /logs/ {
proxy_pass http://logwisp/;
proxy_buffering off;
}
```
doc/sinks.md Normal file
@@ -0,0 +1,293 @@
# Output Sinks
LogWisp sinks deliver processed log entries to various destinations.
## Sink Types
### Console Sink
Output to stdout/stderr.
```toml
[[pipelines.sinks]]
type = "console"
[pipelines.sinks.console]
target = "stdout" # stdout|stderr|split
colorize = false
buffer_size = 100
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `target` | string | "stdout" | Output target (stdout/stderr/split) |
| `colorize` | bool | false | Enable colored output |
| `buffer_size` | int | 100 | Internal buffer size |
**Target Modes:**
- **stdout**: All output to standard output
- **stderr**: All output to standard error
- **split**: INFO/DEBUG to stdout, WARN/ERROR to stderr
### File Sink
Write logs to rotating files.
```toml
[[pipelines.sinks]]
type = "file"
[pipelines.sinks.file]
directory = "./logs"
name = "output"
max_size_mb = 100
max_total_size_mb = 1000
min_disk_free_mb = 500
retention_hours = 168.0
buffer_size = 1000
flush_interval_ms = 1000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `directory` | string | Required | Output directory |
| `name` | string | Required | Base filename |
| `max_size_mb` | int | 100 | Rotation threshold |
| `max_total_size_mb` | int | 1000 | Total size limit |
| `min_disk_free_mb` | int | 500 | Minimum free disk space |
| `retention_hours` | float | 168 | Delete files older than this many hours |
| `buffer_size` | int | 1000 | Internal buffer size |
| `flush_interval_ms` | int | 1000 | Force flush interval |
**Features:**
- Automatic rotation on size
- Retention management
- Disk space monitoring
- Periodic flushing
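The retention behavior can be pictured as a two-pass prune: drop files older than `retention_hours`, then delete oldest-first until under `max_total_size_mb`. This is an illustrative Python model, not LogWisp's implementation; the file-record fields are invented for the example:

```python
def prune(files, retention_hours: float, max_total_mb: int):
    """Two-pass prune: age first, then total size (oldest files removed first)."""
    keep = [f for f in files if f["age_hours"] <= retention_hours]
    keep.sort(key=lambda f: f["age_hours"])  # newest first
    while keep and sum(f["size_mb"] for f in keep) > max_total_mb:
        keep.pop()  # drop the oldest remaining file
    return keep

files = [
    {"name": "output-1.log", "age_hours": 200, "size_mb": 100},
    {"name": "output-2.log", "age_hours": 50, "size_mb": 600},
    {"name": "output-3.log", "age_hours": 1, "size_mb": 600},
]
print([f["name"] for f in prune(files, retention_hours=168, max_total_mb=1000)])  # ['output-3.log']
```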
### HTTP Sink
SSE (Server-Sent Events) streaming server.
```toml
[[pipelines.sinks]]
type = "http"
[pipelines.sinks.http]
host = "0.0.0.0"
port = 8080
stream_path = "/stream"
status_path = "/status"
buffer_size = 1000
max_connections = 100
read_timeout_ms = 10000
write_timeout_ms = 10000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | "0.0.0.0" | Bind address |
| `port` | int | Required | Listen port |
| `stream_path` | string | "/stream" | SSE stream endpoint |
| `status_path` | string | "/status" | Status endpoint |
| `buffer_size` | int | 1000 | Internal buffer size |
| `max_connections` | int | 100 | Maximum concurrent clients |
| `read_timeout_ms` | int | 10000 | Read timeout |
| `write_timeout_ms` | int | 10000 | Write timeout |
**Heartbeat Configuration:**
```toml
[pipelines.sinks.http.heartbeat]
enabled = true
interval_ms = 30000
include_timestamp = true
include_stats = false
format = "comment" # comment|event|json
```
### TCP Sink
TCP streaming server for debugging.
```toml
[[pipelines.sinks]]
type = "tcp"
[pipelines.sinks.tcp]
host = "0.0.0.0"
port = 9090
buffer_size = 1000
max_connections = 100
keep_alive = true
keep_alive_period_ms = 30000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | "0.0.0.0" | Bind address |
| `port` | int | Required | Listen port |
| `buffer_size` | int | 1000 | Internal buffer size |
| `max_connections` | int | 100 | Maximum concurrent clients |
| `keep_alive` | bool | true | Enable TCP keep-alive |
| `keep_alive_period_ms` | int | 30000 | Keep-alive interval |
**Note:** TCP Sink has no authentication support (debugging only).
### HTTP Client Sink
Forward logs to remote HTTP endpoints.
```toml
[[pipelines.sinks]]
type = "http_client"
[pipelines.sinks.http_client]
url = "https://logs.example.com/ingest"
buffer_size = 1000
batch_size = 100
batch_delay_ms = 1000
timeout_seconds = 30
max_retries = 3
retry_delay_ms = 1000
retry_backoff = 2.0
insecure_skip_verify = false
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `url` | string | Required | Target URL |
| `buffer_size` | int | 1000 | Internal buffer size |
| `batch_size` | int | 100 | Logs per request |
| `batch_delay_ms` | int | 1000 | Max wait before sending |
| `timeout_seconds` | int | 30 | Request timeout |
| `max_retries` | int | 3 | Retry attempts |
| `retry_delay_ms` | int | 1000 | Initial retry delay |
| `retry_backoff` | float | 2.0 | Exponential backoff multiplier |
| `insecure_skip_verify` | bool | false | Skip TLS verification |
### TCP Client Sink
Forward logs to remote TCP servers.
```toml
[[pipelines.sinks]]
type = "tcp_client"
[pipelines.sinks.tcp_client]
host = "logs.example.com"
port = 9090
buffer_size = 1000
dial_timeout = 10
write_timeout = 30
read_timeout = 10
keep_alive = 30
reconnect_delay_ms = 1000
max_reconnect_delay_ms = 30000
reconnect_backoff = 1.5
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | Required | Target host |
| `port` | int | Required | Target port |
| `buffer_size` | int | 1000 | Internal buffer size |
| `dial_timeout` | int | 10 | Connection timeout (seconds) |
| `write_timeout` | int | 30 | Write timeout (seconds) |
| `read_timeout` | int | 10 | Read timeout (seconds) |
| `keep_alive` | int | 30 | TCP keep-alive (seconds) |
| `reconnect_delay_ms` | int | 1000 | Initial reconnect delay |
| `max_reconnect_delay_ms` | int | 30000 | Maximum reconnect delay |
| `reconnect_backoff` | float | 1.5 | Backoff multiplier |
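Assuming each delay grows as `delay * reconnect_backoff` capped at `max_reconnect_delay_ms` (a reasonable reading of the options above, not a quote of the implementation), the reconnect schedule looks like:

```python
def reconnect_delays(initial_ms: int = 1000, max_ms: int = 30000,
                     backoff: float = 1.5, attempts: int = 10) -> list:
    """Delay before each reconnect attempt: geometric growth capped at max_ms."""
    delays, d = [], float(initial_ms)
    for _ in range(attempts):
        delays.append(int(d))
        d = min(d * backoff, max_ms)
    return delays

print(reconnect_delays()[:4])  # [1000, 1500, 2250, 3375]
```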
## Network Sink Features
### Network Rate Limiting
Available for HTTP and TCP sinks:
```toml
[pipelines.sinks.http.net_limit]
enabled = true
max_connections_per_ip = 10
max_connections_total = 100
ip_whitelist = ["192.168.1.0/24"]
ip_blacklist = ["10.0.0.0/8"]
```
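The list semantics can be sketched with Python's `ipaddress` module. The precedence shown here (blacklist checked first, then whitelist-only admission when a whitelist is present) is an assumption about LogWisp's behavior, not a statement of it:

```python
import ipaddress

def ip_allowed(ip, whitelist, blacklist):
    addr = ipaddress.ip_address(ip)
    if any(addr in ipaddress.ip_network(net) for net in blacklist):
        return False  # blacklisted networks always lose
    if whitelist:
        # when a whitelist is present, only listed networks are admitted
        return any(addr in ipaddress.ip_network(net) for net in whitelist)
    return True

print(ip_allowed("192.168.1.50", ["192.168.1.0/24"], ["10.0.0.0/8"]))  # True
print(ip_allowed("10.1.2.3", ["192.168.1.0/24"], ["10.0.0.0/8"]))      # False
```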
### TLS Configuration (HTTP Only)
```toml
[pipelines.sinks.http.tls]
enabled = true
cert_file = "/path/to/cert.pem"
key_file = "/path/to/key.pem"
ca_file = "/path/to/ca.pem"
min_version = "TLS1.2"
client_auth = false
```
HTTP Client TLS:
```toml
[pipelines.sinks.http_client.tls]
enabled = true
server_name = "logs.example.com"
skip_verify = false
cert_file = "/path/to/client.pem" # For mTLS
key_file = "/path/to/client.key" # For mTLS
```
### Authentication
HTTP/HTTP Client authentication:
```toml
[pipelines.sinks.http_client.auth]
type = "basic" # none|basic|token|mtls
username = "user"
password = "pass"
token = "bearer-token"
```
TCP Client authentication:
```toml
[pipelines.sinks.tcp_client.auth]
type = "scram" # none|scram
username = "user"
password = "pass"
```
## Sink Chaining
Sinks are designed to pair with matching sources on a remote LogWisp instance:
### Log Aggregation
- **HTTP Client Sink → HTTP Source**: HTTPS with authentication
- **TCP Client Sink → TCP Source**: Raw TCP with SCRAM
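As a sketch, an edge instance forwarding to an aggregator would pair an `http_client` sink with an `http` source like this (hostname, port, and path are placeholders):

```toml
# Edge instance: forward batches upstream
[[pipelines.sinks]]
type = "http_client"
[pipelines.sinks.http_client]
url = "https://aggregator.example.com:8081/ingest"

# Aggregator instance: matching HTTP source
[[pipelines.sources]]
type = "http"
[pipelines.sources.http]
port = 8081
ingest_path = "/ingest"
```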
### Live Monitoring
- **HTTP Sink**: Browser-based SSE streaming
- **TCP Sink**: Debug interface (telnet/netcat)
## Sink Statistics
All sinks track:
- Total entries processed
- Active connections
- Failed sends
- Retry attempts
- Last processed timestamp
doc/sources.md Normal file
@@ -0,0 +1,214 @@
# Input Sources
LogWisp sources monitor various inputs and generate log entries for pipeline processing.
## Source Types
### Directory Source
Monitors a directory for log files matching a pattern.
```toml
[[pipelines.sources]]
type = "directory"
[pipelines.sources.directory]
path = "/var/log/myapp"
pattern = "*.log" # Glob pattern
check_interval_ms = 100 # Poll interval
recursive = false # Scan subdirectories
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `path` | string | Required | Directory to monitor |
| `pattern` | string | "*" | File pattern (glob) |
| `check_interval_ms` | int | 100 | File check interval in milliseconds |
| `recursive` | bool | false | Include subdirectories |
**Features:**
- Automatic file rotation detection
- Position tracking (resume after restart)
- Concurrent file monitoring
- Pattern-based file selection
### Stdin Source
Reads log entries from standard input.
```toml
[[pipelines.sources]]
type = "stdin"
[pipelines.sources.stdin]
buffer_size = 1000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `buffer_size` | int | 1000 | Internal buffer size |
**Features:**
- Line-based processing
- Automatic level detection
- Non-blocking reads
### HTTP Source
REST endpoint for log ingestion.
```toml
[[pipelines.sources]]
type = "http"
[pipelines.sources.http]
host = "0.0.0.0"
port = 8081
ingest_path = "/ingest"
buffer_size = 1000
max_body_size = 1048576 # 1MB
read_timeout_ms = 10000
write_timeout_ms = 10000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | "0.0.0.0" | Bind address |
| `port` | int | Required | Listen port |
| `ingest_path` | string | "/ingest" | Ingestion endpoint path |
| `buffer_size` | int | 1000 | Internal buffer size |
| `max_body_size` | int | 1048576 | Maximum request body size |
| `read_timeout_ms` | int | 10000 | Read timeout |
| `write_timeout_ms` | int | 10000 | Write timeout |
**Input Formats:**
- Single JSON object
- JSON array
- Newline-delimited JSON (NDJSON)
- Plain text (one entry per line)
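The four accepted shapes can be normalized with a small dispatcher. This is an illustrative model of the parsing, not LogWisp's code, and the `message` field name used to wrap plain text is an assumption:

```python
import json

def parse_payload(body: str):
    """Normalize a request body into a list of entry dicts."""
    body = body.strip()
    if body.startswith("["):              # JSON array
        return json.loads(body)
    entries = []
    for line in body.splitlines():        # NDJSON, single object, or plain text
        line = line.strip()
        if not line:
            continue
        try:
            entries.append(json.loads(line))
        except json.JSONDecodeError:
            entries.append({"message": line})  # assumed plain-text wrapping
    return entries

print(len(parse_payload('{"message":"a"}\n{"message":"b"}')))  # 2
```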
### TCP Source
Raw TCP socket listener for log ingestion.
```toml
[[pipelines.sources]]
type = "tcp"
[pipelines.sources.tcp]
host = "0.0.0.0"
port = 9091
buffer_size = 1000
read_timeout_ms = 10000
keep_alive = true
keep_alive_period_ms = 30000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | "0.0.0.0" | Bind address |
| `port` | int | Required | Listen port |
| `buffer_size` | int | 1000 | Internal buffer size |
| `read_timeout_ms` | int | 10000 | Read timeout |
| `keep_alive` | bool | true | Enable TCP keep-alive |
| `keep_alive_period_ms` | int | 30000 | Keep-alive interval |
**Protocol:**
- Newline-delimited JSON
- One log entry per line
- UTF-8 encoding
## Network Source Features
### Network Rate Limiting
Available for HTTP and TCP sources:
```toml
[pipelines.sources.http.net_limit]
enabled = true
max_connections_per_ip = 10
max_connections_total = 100
requests_per_second = 100.0
burst_size = 200
response_code = 429
response_message = "Rate limit exceeded"
ip_whitelist = ["192.168.1.0/24"]
ip_blacklist = ["10.0.0.0/8"]
```
### TLS Configuration (HTTP Only)
```toml
[pipelines.sources.http.tls]
enabled = true
cert_file = "/path/to/cert.pem"
key_file = "/path/to/key.pem"
ca_file = "/path/to/ca.pem"
min_version = "TLS1.2"
client_auth = true
client_ca_file = "/path/to/client-ca.pem"
verify_client_cert = true
```
### Authentication
HTTP Source authentication options:
```toml
[pipelines.sources.http.auth]
type = "basic" # none|basic|token|mtls
realm = "LogWisp"
# Basic auth
[[pipelines.sources.http.auth.basic.users]]
username = "admin"
password_hash = "$argon2..."
# Token auth
[pipelines.sources.http.auth.token]
tokens = ["token1", "token2"]
```
TCP Source authentication:
```toml
[pipelines.sources.tcp.auth]
type = "scram" # none|scram
# SCRAM users
[[pipelines.sources.tcp.auth.scram.users]]
username = "user1"
stored_key = "base64..."
server_key = "base64..."
salt = "base64..."
argon_time = 3
argon_memory = 65536
argon_threads = 4
```
## Source Statistics
All sources track:
- Total entries received
- Dropped entries (buffer full)
- Invalid entries
- Last entry timestamp
- Active connections (network sources)
- Source-specific metrics
## Buffer Management
Each source maintains internal buffers:
- Default size: 1000 entries
- Drop policy when full
- Configurable per source
- Non-blocking writes
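The drop policy can be modeled as a bounded, non-blocking queue; a minimal sketch of the behavior described above (illustrative only, not LogWisp's implementation):

```python
from collections import deque

class DropBuffer:
    """Bounded buffer: writes never block; overflow entries are counted and dropped."""
    def __init__(self, size: int = 1000):
        self.q = deque()
        self.size = size
        self.dropped = 0

    def push(self, entry) -> bool:
        if len(self.q) >= self.size:
            self.dropped += 1
            return False
        self.q.append(entry)
        return True

buf = DropBuffer(size=2)
for i in range(3):
    buf.push(i)
print(buf.dropped)  # 1
```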
@@ -1,148 +0,0 @@
# Status Monitoring
LogWisp provides comprehensive monitoring through status endpoints and operational logs.
## Status Endpoints
### Pipeline Status
```bash
# Standalone mode
curl http://localhost:8080/status
# Router mode
curl http://localhost:8080/pipelinename/status
```
Example response:
```json
{
"service": "LogWisp",
"version": "1.0.0",
"server": {
"type": "http",
"port": 8080,
"active_clients": 5,
"buffer_size": 1000,
"uptime_seconds": 3600,
"mode": {"standalone": true, "router": false}
},
"sources": [{
"type": "directory",
"total_entries": 152341,
"dropped_entries": 12,
"active_watchers": 3
}],
"filters": {
"filter_count": 2,
"total_processed": 152341,
"total_passed": 48234
},
"sinks": [{
"type": "http",
"total_processed": 48234,
"active_connections": 5,
"details": {
"port": 8080,
"buffer_size": 1000,
"rate_limit": {
"enabled": true,
"total_requests": 98234,
"blocked_requests": 234
}
}
}],
"endpoints": {
"transport": "/stream",
"status": "/status"
},
"features": {
"heartbeat": {
"enabled": true,
"interval": 30,
"format": "comment"
},
"ssl": {
"enabled": false
},
"rate_limit": {
"enabled": true,
"requests_per_second": 10.0,
"burst_size": 20
}
}
}
```
## Key Metrics
### Source Metrics
| Metric | Description | Healthy Range |
|--------|-------------|---------------|
| `active_watchers` | Files being watched | 1-1000 |
| `total_entries` | Entries processed | Increasing |
| `dropped_entries` | Buffer overflows | < 1% of total |
| `active_connections` | Network connections (HTTP/TCP sources) | Within limits |
### Sink Metrics
| Metric | Description | Warning Signs |
|--------|-------------|---------------|
| `active_connections` | Current clients | Near limit |
| `total_processed` | Entries sent | Should match filter output |
| `total_batches` | Batches sent (client sinks) | Increasing |
| `failed_batches` | Failed sends (client sinks) | > 0 indicates issues |
### Filter Metrics
| Metric | Description | Notes |
|--------|-------------|-------|
| `total_processed` | Entries checked | All entries |
| `total_passed` | Passed filters | Check if too low/high |
| `total_matched` | Pattern matches | Per filter stats |
### Rate Limit Metrics
| Metric | Description | Action |
|--------|-------------|--------|
| `blocked_requests` | Rejected requests | Increase limits if high |
| `active_ips` | Unique IPs tracked | Monitor for attacks |
| `total_connections` | Current connections | Check against limits |
## Operational Logging
### Log Levels
```toml
[logging]
level = "info" # debug, info, warn, error
```
## Health Checks
### Basic Check
```bash
#!/usr/bin/env bash
if curl -s -f http://localhost:8080/status > /dev/null; then
echo "Healthy"
else
echo "Unhealthy"
exit 1
fi
```
### Advanced Check
```bash
#!/usr/bin/env bash
STATUS=$(curl -s http://localhost:8080/status)
DROPPED=$(echo "$STATUS" | jq '.sources[0].dropped_entries')
TOTAL=$(echo "$STATUS" | jq '.sources[0].total_entries')
if [ "$TOTAL" -gt 0 ] && [ $((DROPPED * 100 / TOTAL)) -gt 5 ]; then
echo "High drop rate"
exit 1
fi
# Check client sink failures
FAILED=$(echo "$STATUS" | jq '.sinks[] | select(.type=="http_client") | .details.failed_batches // 0' | head -1)
if [ "$FAILED" -gt 10 ]; then
echo "High failure rate"
exit 1
fi
```
go.mod
@@ -4,7 +4,7 @@ go 1.25.1
 require (
 	github.com/lixenwraith/config v0.0.0-20251003140149-580459b815f6
-	github.com/lixenwraith/log v0.0.0-20250929145347-45cc8a5099c2
+	github.com/lixenwraith/log v0.0.0-20251010094026-6a161eb2b686
 	github.com/panjf2000/gnet/v2 v2.9.4
 	github.com/valyala/fasthttp v1.67.0
 	golang.org/x/crypto v0.43.0
go.sum
@@ -10,8 +10,8 @@ github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zt
 github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
 github.com/lixenwraith/config v0.0.0-20251003140149-580459b815f6 h1:G9qP8biXBT6bwBOjEe1tZwjA0gPuB5DC+fLBRXDNXqo=
 github.com/lixenwraith/config v0.0.0-20251003140149-580459b815f6/go.mod h1:I7ddNPT8MouXXz/ae4DQfBKMq5EisxdDLRX0C7Dv4O0=
-github.com/lixenwraith/log v0.0.0-20250929145347-45cc8a5099c2 h1:9Qf+BR83sKjok2E1Nct+3Sfzoj2dLGwC/zyQDVNmmqs=
-github.com/lixenwraith/log v0.0.0-20250929145347-45cc8a5099c2/go.mod h1:E7REMCVTr6DerzDtd2tpEEaZ9R9nduyAIKQFOqHqKr0=
+github.com/lixenwraith/log v0.0.0-20251010094026-6a161eb2b686 h1:STgvFUpjvZquBF322PNLXaU67oEScewGDLy0aV+lIkY=
+github.com/lixenwraith/log v0.0.0-20251010094026-6a161eb2b686/go.mod h1:E7REMCVTr6DerzDtd2tpEEaZ9R9nduyAIKQFOqHqKr0=
 github.com/panjf2000/ants/v2 v2.11.3 h1:AfI0ngBoXJmYOpDh9m516vjqoUu2sLrIVgppI9TZVpg=
 github.com/panjf2000/ants/v2 v2.11.3/go.mod h1:8u92CYMUc6gyvTIw8Ru7Mt7+/ESnJahz5EVtqfrilek=
 github.com/panjf2000/gnet/v2 v2.9.4 h1:XvPCcaFwO4XWg4IgSfZnNV4dfDy5g++HIEx7sH0ldHc=
@@ -62,18 +62,11 @@ func (rm *ReloadManager) Start(ctx context.Context) error {
 		rm.startStatusReporter(ctx, svc)
 	}

-	// Create lconfig instance for file watching, logwisp config is always TOML
-	lcfg, err := lconfig.NewBuilder().
-		WithFile(rm.configPath).
-		WithTarget(rm.cfg).
-		WithFileFormat("toml").
-		WithSecurityOptions(lconfig.SecurityOptions{
-			PreventPathTraversal: true,
-			MaxFileSize:          10 * 1024 * 1024,
-		}).
-		Build()
-	if err != nil {
-		return fmt.Errorf("failed to create config watcher: %w", err)
+	// Use the same lconfig instance from initial load
+	lcfg := config.GetConfigManager()
+	if lcfg == nil {
+		// Config manager not initialized - potential for config bypass
+		return fmt.Errorf("config manager not initialized - cannot enable hot reload")
 	}

 	rm.lcfg = lcfg
@@ -83,7 +76,7 @@ func (rm *ReloadManager) Start(ctx context.Context) error {
 		PollInterval:      time.Second,
 		Debounce:          500 * time.Millisecond,
 		ReloadTimeout:     30 * time.Second,
-		VerifyPermissions: true, // TODO: Prevent malicious config replacement, to be implemented
+		VerifyPermissions: true,
 	}

 	lcfg.AutoUpdateWithOptions(watchOpts)
@@ -243,8 +236,14 @@ func (rm *ReloadManager) performReload(ctx context.Context) error {
 		return fmt.Errorf("failed to get updated config: %w", err)
 	}

+	// AsStruct returns the target pointer, not a new instance
 	newCfg := updatedCfg.(*config.Config)

+	// Validate the new config
+	if err := config.ValidateConfig(newCfg); err != nil {
+		return fmt.Errorf("updated config validation failed: %w", err)
+	}
+
 	// Get current service snapshot
 	rm.mu.RLock()
 	oldService := rm.service
@@ -267,8 +266,7 @@ func (rm *ReloadManager) performReload(ctx context.Context) error {
 	// Stop old status reporter and start new one
 	rm.restartStatusReporter(ctx, newService)

-	// Gracefully shutdown old services
-	// This happens after the swap to minimize downtime
+	// Gracefully shutdown old services after swap to minimize downtime
 	go rm.shutdownOldServices(oldService)

 	return nil
@@ -29,6 +29,8 @@ type Authenticator struct {
 	sessionMu sync.RWMutex
 }

+// TODO: only one connection per user, token, mtls
+// TODO: implement tracker logic
 // Represents an authenticated connection
 type Session struct {
 	ID string
@@ -13,11 +13,11 @@ type Config struct {
 	DisableStatusReporter bool `toml:"disable_status_reporter"`
 	ConfigAutoReload      bool `toml:"config_auto_reload"`

-	// Internal flag indicating demonized child process
-	BackgroundDaemon bool `toml:"background-daemon"`
+	// Internal flag indicating demonized child process (DO NOT SET IN CONFIG FILE)
+	BackgroundDaemon bool

 	// Configuration file path
-	ConfigFile string `toml:"config"`
+	ConfigFile string `toml:"config_file"`

 	// Existing fields
 	Logging *LogConfig `toml:"logging"`
@@ -90,8 +90,6 @@ type NetLimitConfig struct {
 	ResponseMessage     string `toml:"response_message"`
 	ResponseCode        int64  `toml:"response_code"` // Default: 429
 	MaxConnectionsPerIP int64  `toml:"max_connections_per_ip"`
-	MaxConnectionsPerUser  int64 `toml:"max_connections_per_user"`
-	MaxConnectionsPerToken int64 `toml:"max_connections_per_token"`
 	MaxConnectionsTotal int64    `toml:"max_connections_total"`
 	IPWhitelist         []string `toml:"ip_whitelist"`
 	IPBlacklist         []string `toml:"ip_blacklist"`
@@ -120,7 +118,7 @@ type TLSConfig struct {
 type HeartbeatConfig struct {
 	Enabled          bool  `toml:"enabled"`
-	Interval         int64 `toml:"interval_ms"`
+	IntervalMS       int64 `toml:"interval_ms"`
 	IncludeTimestamp bool   `toml:"include_timestamp"`
 	IncludeStats     bool   `toml:"include_stats"`
 	Format           string `toml:"format"`
@@ -149,10 +147,7 @@ type DirectorySourceOptions struct {
 	Path            string `toml:"path"`
 	Pattern         string `toml:"pattern"` // glob pattern
 	CheckIntervalMS int64  `toml:"check_interval_ms"`
-	Recursive       bool   `toml:"recursive"`
-	FollowSymlinks  bool   `toml:"follow_symlinks"`
-	DeleteAfterRead bool   `toml:"delete_after_read"`
-	MoveToDirectory string `toml:"move_to_directory"` // move after processing
+	Recursive       bool   `toml:"recursive"` // TODO: implement logic
 }

 type StdinSourceOptions struct {
@@ -206,7 +201,6 @@ type ConsoleSinkOptions struct {
 type FileSinkOptions struct {
 	Directory string `toml:"directory"`
 	Name      string `toml:"name"`
-	// Extension string `toml:"extension"`
 	MaxSizeMB      int64 `toml:"max_size_mb"`
 	MaxTotalSizeMB int64 `toml:"max_total_size_mb"`
 	MinDiskFreeMB  int64 `toml:"min_disk_free_mb"`
@@ -242,7 +236,6 @@ type TCPSinkOptions struct {
 type HTTPClientSinkOptions struct {
 	URL string `toml:"url"`
-	Headers      map[string]string `toml:"headers"`
 	BufferSize   int64 `toml:"buffer_size"`
 	BatchSize    int64 `toml:"batch_size"`
 	BatchDelayMS int64 `toml:"batch_delay_ms"`
@@ -322,12 +315,12 @@ type FilterConfig struct {
 type FormatConfig struct {
 	// Format configuration - polymorphic like sources/sinks
-	Type string `toml:"type"` // "json", "text", "raw"
+	Type string `toml:"type"` // "json", "txt", "raw"

 	// Only one will be populated based on format type
-	JSONFormatOptions *JSONFormatterOptions `toml:"json_format,omitempty"`
-	TextFormatOptions *TextFormatterOptions `toml:"text_format,omitempty"`
-	RawFormatOptions  *RawFormatterOptions  `toml:"raw_format,omitempty"`
+	JSONFormatOptions *JSONFormatterOptions `toml:"json,omitempty"`
+	TxtFormatOptions  *TxtFormatterOptions  `toml:"txt,omitempty"`
+	RawFormatOptions  *RawFormatterOptions  `toml:"raw,omitempty"`
 }

 type JSONFormatterOptions struct {
@@ -338,7 +331,7 @@ type JSONFormatterOptions struct {
 	SourceField string `toml:"source_field"`
 }

-type TextFormatterOptions struct {
+type TxtFormatterOptions struct {
 	Template        string `toml:"template"`
 	TimestampFormat string `toml:"timestamp_format"`
 }
@@ -13,6 +13,11 @@ import (
 var configManager *lconfig.Config

+// Hot reload access
+func GetConfigManager() *lconfig.Config {
+	return configManager
+}
+
 func defaults() *Config {
 	return &Config{
 		// Top-level flag defaults
@@ -79,7 +84,7 @@ func Load(args []string) (*Config, error) {
 	// Create target config instance that will be populated
 	finalConfig := &Config{}

-	// The builder now handles loading, populating the target struct, and validation
+	// Builder handles loading, populating the target struct, and validation
 	cfg, err := lconfig.NewBuilder().
 		WithTarget(finalConfig).  // Typed target struct
 		WithDefaults(defaults()). // Default values
@@ -94,7 +99,7 @@ func Load(args []string) (*Config, error) {
 		WithArgs(args).         // Command-line arguments
 		WithFile(configPath).   // TOML config file
 		WithFileFormat("toml"). // Explicit format
-		WithTypedValidator(validateConfig). // Centralized validation
+		WithTypedValidator(ValidateConfig). // Centralized validation
 		WithSecurityOptions(lconfig.SecurityOptions{
 			PreventPathTraversal: true,
 			MaxFileSize:          10 * 1024 * 1024, // 10MB max config
@@ -117,9 +122,7 @@ func Load(args []string) (*Config, error) {
 	finalConfig.ConfigFile = configPath

 	// Store the manager for hot reload
-	if cfg != nil {
-		configManager = cfg
-	}
+	configManager = cfg

 	return finalConfig, nil
 }
@@ -13,7 +13,7 @@ import (
 // validateConfig is the centralized validator for the entire configuration
 // This replaces the old (c *Config) validate() method
-func validateConfig(cfg *Config) error {
+func ValidateConfig(cfg *Config) error {
 	if cfg == nil {
 		return fmt.Errorf("config is nil")
 	}
@@ -599,14 +599,6 @@ func validateHTTPClientSink(pipelineName string, index int, opts *HTTPClientSink
 	if opts.RetryBackoff < 1.0 {
 		opts.RetryBackoff = 2.0
 	}

-	if opts.Headers == nil {
-		opts.Headers = make(map[string]string)
-	}
-
-	// Set default Content-Type if not specified
-	if _, exists := opts.Headers["Content-Type"]; !exists {
-		opts.Headers["Content-Type"] = "application/json"
-	}

 	// Validate auth configuration
 	if opts.Auth != nil {
@@ -748,20 +740,20 @@ func validateFormatterConfig(p *PipelineConfig) error {
 	}

 	case "txt":
-		if p.Format.TextFormatOptions == nil {
-			p.Format.TextFormatOptions = &TextFormatterOptions{}
+		if p.Format.TxtFormatOptions == nil {
+			p.Format.TxtFormatOptions = &TxtFormatterOptions{}
 		}

 		// Default template format
 		templateStr := "[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Source}} - {{.Message}}{{ if .Fields }} {{.Fields}}{{ end }}"
-		if p.Format.TextFormatOptions.Template != "" {
-			p.Format.TextFormatOptions.Template = templateStr
+		if p.Format.TxtFormatOptions.Template != "" {
+			p.Format.TxtFormatOptions.Template = templateStr
 		}

 		// Default timestamp format
 		timestampFormat := time.RFC3339
-		if p.Format.TextFormatOptions.TimestampFormat != "" {
-			p.Format.TextFormatOptions.TimestampFormat = timestampFormat
+		if p.Format.TxtFormatOptions.TimestampFormat != "" {
+			p.Format.TxtFormatOptions.TimestampFormat = timestampFormat
 		}

 	case "json":
@@ -810,7 +802,7 @@ func validateHeartbeat(pipelineName, location string, hb *HeartbeatConfig) error
 		return nil // Skip validation if disabled
 	}

-	if hb.Interval < 1000 { // At least 1 second
+	if hb.IntervalMS < 1000 { // At least 1 second
 		return fmt.Errorf("pipeline '%s' %s: heartbeat interval must be at least 1000ms", pipelineName, location)
 	}

@ -3,8 +3,8 @@ package format
import ( import (
"fmt" "fmt"
"logwisp/src/internal/config"
"logwisp/src/internal/config"
"logwisp/src/internal/core" "logwisp/src/internal/core"
"github.com/lixenwraith/log" "github.com/lixenwraith/log"
@ -25,7 +25,7 @@ func NewFormatter(cfg *config.FormatConfig, logger *log.Logger) (Formatter, erro
case "json": case "json":
return NewJSONFormatter(cfg.JSONFormatOptions, logger) return NewJSONFormatter(cfg.JSONFormatOptions, logger)
case "txt": case "txt":
return NewTextFormatter(cfg.TextFormatOptions, logger) return NewTxtFormatter(cfg.TxtFormatOptions, logger)
case "raw", "": case "raw", "":
return NewRawFormatter(cfg.RawFormatOptions, logger) return NewRawFormatter(cfg.RawFormatOptions, logger)
default: default:
@@ -1,29 +1,29 @@
-// FILE: logwisp/src/internal/format/text.go
+// FILE: logwisp/src/internal/format/txt.go
 package format

 import (
 	"bytes"
 	"fmt"
-	"logwisp/src/internal/config"
 	"strings"
 	"text/template"
 	"time"

+	"logwisp/src/internal/config"
 	"logwisp/src/internal/core"

 	"github.com/lixenwraith/log"
 )

 // Produces human-readable text logs using templates
-type TextFormatter struct {
-	config   *config.TextFormatterOptions
+type TxtFormatter struct {
+	config   *config.TxtFormatterOptions
 	template *template.Template
 	logger   *log.Logger
 }

 // Creates a new text formatter
-func NewTextFormatter(opts *config.TextFormatterOptions, logger *log.Logger) (*TextFormatter, error) {
-	f := &TextFormatter{
+func NewTxtFormatter(opts *config.TxtFormatterOptions, logger *log.Logger) (*TxtFormatter, error) {
+	f := &TxtFormatter{
 		config: opts,
 		logger: logger,
 	}
@@ -48,7 +48,7 @@ func NewTextFormatter(opts *config.TextFormatterOptions, logger *log.Logger) (*T
 }

 // Formats the log entry using the template
-func (f *TextFormatter) Format(entry core.LogEntry) ([]byte, error) {
+func (f *TxtFormatter) Format(entry core.LogEntry) ([]byte, error) {
 	// Prepare data for template
 	data := map[string]any{
 		"Timestamp": entry.Time,
@@ -71,7 +71,7 @@ func (f *TextFormatter) Format(entry core.LogEntry) ([]byte, error) {
 	if err := f.template.Execute(&buf, data); err != nil {
 		// Fallback: return a basic formatted message
 		f.logger.Debug("msg", "Template execution failed, using fallback",
-			"component", "text_formatter",
+			"component", "txt_formatter",
 			"error", err)

 		fallback := fmt.Sprintf("[%s] [%s] %s - %s\n",
@@ -92,6 +92,6 @@ }
 }

 // Returns the formatter name
-func (f *TextFormatter) Name() string {
+func (f *TxtFormatter) Name() string {
 	return "txt"
 }

View File

@@ -140,8 +140,6 @@ func NewNetLimiter(cfg *config.NetLimitConfig, logger *log.Logger) *NetLimiter {
 		"requests_per_second", cfg.RequestsPerSecond,
 		"burst_size", cfg.BurstSize,
 		"max_connections_per_ip", cfg.MaxConnectionsPerIP,
-		"max_connections_per_user", cfg.MaxConnectionsPerUser,
-		"max_connections_per_token", cfg.MaxConnectionsPerToken,
 		"max_connections_total", cfg.MaxConnectionsTotal)
 	return l
@@ -610,8 +608,6 @@ func (l *NetLimiter) GetStats() map[string]any {
 		// Configuration limits (0 = disabled)
 		"limit_per_ip": l.config.MaxConnectionsPerIP,
-		"limit_per_user": l.config.MaxConnectionsPerUser,
-		"limit_per_token": l.config.MaxConnectionsPerToken,
 		"limit_total": l.config.MaxConnectionsTotal,
 	},
 }
@@ -807,7 +803,7 @@ func (l *NetLimiter) TrackConnection(ip string, user string, token string) bool
 		l.logger.Debug("msg", "TCP connection blocked by total limit",
 			"component", "netlimit",
 			"current_total", currentTotal,
-			"max_total", l.config.MaxConnectionsTotal)
+			"max_connections_total", l.config.MaxConnectionsTotal)
 		return false
 	}
 }
@@ -830,42 +826,6 @@ func (l *NetLimiter) TrackConnection(ip string, user string, token string) bool
 	}
 }
-	// Check per-user connection limit (0 = disabled)
-	if l.config.MaxConnectionsPerUser > 0 && user != "" {
-		tracker, exists := l.userConnections[user]
-		if !exists {
-			tracker = &connTracker{lastSeen: time.Now()}
-			l.userConnections[user] = tracker
-		}
-		if tracker.connections.Load() >= l.config.MaxConnectionsPerUser {
-			l.blockedByConnLimit.Add(1)
-			l.logger.Debug("msg", "TCP connection blocked by user limit",
-				"component", "netlimit",
-				"user", user,
-				"current", tracker.connections.Load(),
-				"max", l.config.MaxConnectionsPerUser)
-			return false
-		}
-	}
-	// Check per-token connection limit (0 = disabled)
-	if l.config.MaxConnectionsPerToken > 0 && token != "" {
-		tracker, exists := l.tokenConnections[token]
-		if !exists {
-			tracker = &connTracker{lastSeen: time.Now()}
-			l.tokenConnections[token] = tracker
-		}
-		if tracker.connections.Load() >= l.config.MaxConnectionsPerToken {
-			l.blockedByConnLimit.Add(1)
-			l.logger.Debug("msg", "TCP connection blocked by token limit",
-				"component", "netlimit",
-				"token", token,
-				"current", tracker.connections.Load(),
-				"max", l.config.MaxConnectionsPerToken)
-			return false
-		}
-	}
 	// All checks passed, increment counters
 	l.totalConnections.Add(1)
@@ -878,24 +838,6 @@ func (l *NetLimiter) TrackConnection(ip string, user string, token string) bool
 	}
 }
-	if user != "" && l.config.MaxConnectionsPerUser > 0 {
-		if tracker, exists := l.userConnections[user]; exists {
-			tracker.connections.Add(1)
-			tracker.mu.Lock()
-			tracker.lastSeen = time.Now()
-			tracker.mu.Unlock()
-		}
-	}
-	if token != "" && l.config.MaxConnectionsPerToken > 0 {
-		if tracker, exists := l.tokenConnections[token]; exists {
-			tracker.connections.Add(1)
-			tracker.mu.Lock()
-			tracker.lastSeen = time.Now()
-			tracker.mu.Unlock()
-		}
-	}
 	return true
 }

View File

@@ -205,7 +205,7 @@ func (h *HTTPSink) brokerLoop(ctx context.Context) {
 	var tickerChan <-chan time.Time
 	if h.config.Heartbeat != nil && h.config.Heartbeat.Enabled {
-		ticker = time.NewTicker(time.Duration(h.config.Heartbeat.Interval) * time.Second)
+		ticker = time.NewTicker(time.Duration(h.config.Heartbeat.IntervalMS) * time.Millisecond)
 		tickerChan = ticker.C
 		defer ticker.Stop()
 	}
@@ -545,7 +545,7 @@ func (h *HTTPSink) handleStream(ctx *fasthttp.RequestCtx, session *auth.Session)
 	var tickerChan <-chan time.Time
 	if h.config.Heartbeat != nil && h.config.Heartbeat.Enabled {
-		ticker = time.NewTicker(time.Duration(h.config.Heartbeat.Interval) * time.Second)
+		ticker = time.NewTicker(time.Duration(h.config.Heartbeat.IntervalMS) * time.Millisecond)
 		tickerChan = ticker.C
 		defer ticker.Stop()
 	}
@@ -699,7 +699,7 @@ func (h *HTTPSink) handleStatus(ctx *fasthttp.RequestCtx) {
 	"features": map[string]any{
 		"heartbeat": map[string]any{
 			"enabled": h.config.Heartbeat.Enabled,
-			"interval": h.config.Heartbeat.Interval,
+			"interval_ms": h.config.Heartbeat.IntervalMS,
 			"format": h.config.Heartbeat.Format,
 		},
 		"tls": tlsStats,

View File

@@ -24,6 +24,7 @@ import (
 	"github.com/valyala/fasthttp"
 )
+// TODO: implement heartbeat for HTTP Client Sink, similar to HTTP Sink
 // Forwards log entries to a remote HTTP endpoint
 type HTTPClientSink struct {
 	input chan core.LogEntry
@@ -340,11 +341,6 @@ func (h *HTTPClientSink) sendBatch(batch []core.LogEntry) {
 		// No authentication
 	}
-	// Set headers
-	for k, v := range h.config.Headers {
-		req.Header.Set(k, v)
-	}
 	// Send request
 	err := h.client.DoTimeout(req, resp, time.Duration(h.config.Timeout)*time.Second)

View File

@@ -205,7 +205,7 @@ func (t *TCPSink) broadcastLoop(ctx context.Context) {
 	var tickerChan <-chan time.Time
 	if t.config.Heartbeat != nil && t.config.Heartbeat.Enabled {
-		ticker = time.NewTicker(time.Duration(t.config.Heartbeat.Interval) * time.Second)
+		ticker = time.NewTicker(time.Duration(t.config.Heartbeat.IntervalMS) * time.Millisecond)
 		tickerChan = ticker.C
 		defer ticker.Stop()
 	}

View File

@@ -7,7 +7,6 @@ import (
 	"encoding/json"
 	"errors"
 	"fmt"
-	"logwisp/src/internal/auth"
 	"net"
 	"strconv"
 	"strings"
@@ -15,6 +14,7 @@ import (
 	"sync/atomic"
 	"time"
+	"logwisp/src/internal/auth"
 	"logwisp/src/internal/config"
 	"logwisp/src/internal/core"
 	"logwisp/src/internal/format"
@@ -22,6 +22,7 @@ import (
 	"github.com/lixenwraith/log"
 )
+// TODO: implement heartbeat for TCP Client Sink, similar to TCP Sink
 // Forwards log entries to a remote TCP endpoint
 type TCPClientSink struct {
 	input chan core.LogEntry

View File

@@ -1,149 +0,0 @@
#!/usr/bin/env bash
# FILE: test-basic-auth.sh
# Creates test directories and starts network services
set -e
# Create test directories
mkdir -p test-logs test-data
# Generate Argon2id hash using logwisp auth
echo "=== Generating Argon2id hash ==="
./logwisp auth -u testuser -p secret123 > auth_output.txt 2>&1
HASH=$(grep 'password_hash = ' auth_output.txt | cut -d'"' -f2)
if [ -z "$HASH" ]; then
echo "Failed to generate hash. Output:"
cat auth_output.txt
exit 1
fi
echo "Generated hash format: ${HASH:0:15}..." # Show hash format prefix
echo "Full hash: $HASH"
# Determine hash type
if [[ "$HASH" == "\$argon2id\$"* ]]; then
echo "Hash type: Argon2id"
elif [[ "$HASH" == "\$2a\$"* ]] || [[ "$HASH" == "\$2b\$"* ]]; then
echo "Hash type: bcrypt"
else
echo "Hash type: Unknown"
fi
# Create test config with debug logging to stdout
cat > test-auth.toml << EOF
# General LogWisp settings
log_dir = "test-logs"
log_level = "debug"
data_dir = "test-data"
# Logging configuration for troubleshooting
[logging]
target = "all"
level = "debug"
[logging.console]
enabled = true
target = "stdout"
format = "txt"
[[pipelines]]
name = "tcp-test"
[pipelines.auth]
type = "basic"
[[pipelines.auth.basic_auth.users]]
username = "testuser"
password_hash = "$HASH"
[[pipelines.sources]]
type = "tcp"
[pipelines.sources.options]
port = 5514
host = "127.0.0.1"
[[pipelines.sinks]]
type = "console"
[pipelines.sinks.options]
target = "stdout"
# Second pipeline for HTTP
[[pipelines]]
name = "http-test"
[pipelines.auth]
type = "basic"
[[pipelines.auth.basic_auth.users]]
username = "httpuser"
password_hash = "$HASH"
[[pipelines.sources]]
type = "http"
[pipelines.sources.options]
port = 8080
host = "127.0.0.1"
path = "/ingest"
[[pipelines.sinks]]
type = "console"
[pipelines.sinks.options]
target = "stdout"
EOF
# Start LogWisp with visible debug output
echo "=== Starting LogWisp with debug logging ==="
./logwisp -c test-auth.toml 2>&1 | tee logwisp-debug.log &
LOGWISP_PID=$!
# Wait for startup with longer timeout
echo "Waiting for LogWisp to start..."
for i in {1..20}; do
if nc -z 127.0.0.1 5514 2>/dev/null && nc -z 127.0.0.1 8080 2>/dev/null; then
echo "LogWisp started successfully"
break
fi
if [ $i -eq 20 ]; then
echo "LogWisp failed to start. Check logwisp-debug.log"
kill $LOGWISP_PID 2>/dev/null || true
exit 1
fi
sleep 1
done
# Give extra time for auth initialization
sleep 2
echo "=== Testing HTTP Auth ==="
# Test with verbose curl to see headers
echo "Testing no auth (expecting 401)..."
curl -v -s -o response.txt -w "STATUS:%{http_code}\n" \
http://127.0.0.1:8080/ingest -d '{"test":"data"}' 2>&1 | tee curl-noauth.log | grep -E "STATUS:|< HTTP"
# Test invalid auth
echo "Testing invalid auth (expecting 401)..."
curl -v -s -o response.txt -w "STATUS:%{http_code}\n" \
-u baduser:badpass http://127.0.0.1:8080/ingest -d '{"test":"data"}' 2>&1 | tee curl-badauth.log | grep -E "STATUS:|< HTTP"
# Test valid auth with detailed output
echo "Testing valid auth (expecting 202/200)..."
curl -v -s -o response.txt -w "STATUS:%{http_code}\n" \
-u httpuser:secret123 http://127.0.0.1:8080/ingest \
-H "Content-Type: application/json" \
-d '{"message":"test log","level":"info"}' 2>&1 | tee curl-validauth.log | grep -E "STATUS:|< HTTP"
# Show response body if not 200/202
STATUS=$(grep "STATUS:" curl-validauth.log | cut -d: -f2)
if [ "$STATUS" != "200" ] && [ "$STATUS" != "202" ]; then
echo "Response body:"
cat response.txt
fi
# Check logs for auth-related errors
echo "=== Checking logs for auth errors ==="
grep -i "auth" logwisp-debug.log | grep -i "error" | tail -5 || echo "No auth errors found"
grep -i "authenticator" logwisp-debug.log | tail -5 || echo "No authenticator messages"
# Cleanup
echo "=== Cleanup ==="
kill $LOGWISP_PID 2>/dev/null || true
echo "Logs saved to logwisp-debug.log, curl-*.log"
# Optionally keep logs for analysis
# rm -f test-auth.toml auth_output.txt response.txt
# rm -rf test-logs test-data