Compare commits

...

12 Commits

81 changed files with 8690 additions and 8898 deletions

.gitignore vendored

@@ -7,6 +7,6 @@ cert
 bin
 script
 build
-test
 *.log
 *.toml
+build.sh


<td>
<h1>LogWisp</h1>
<p>
<a href="https://golang.org"><img src="https://img.shields.io/badge/Go-1.25-00ADD8?style=flat&logo=go" alt="Go"></a>
<a href="https://opensource.org/licenses/BSD-3-Clause"><img src="https://img.shields.io/badge/License-BSD_3--Clause-blue.svg" alt="License"></a>
<a href="doc/"><img src="https://img.shields.io/badge/Docs-Available-green.svg" alt="Documentation"></a>
</p>
</tr>
</table>
# LogWisp

A high-performance, pipeline-based log transport and processing system built in Go. LogWisp provides flexible log collection, filtering, formatting, and distribution with enterprise-grade security and reliability features.

## Features

### Core Capabilities

- **Pipeline Architecture**: Independent processing pipelines with source(s) → filter → format → sink(s) flow
- **Multiple Input Sources**: Directory monitoring, stdin, HTTP, TCP
- **Flexible Output Sinks**: Console, file, HTTP SSE, TCP streaming, HTTP/TCP forwarding
- **Real-time Processing**: Sub-millisecond latency with configurable buffering
- **Hot Configuration Reload**: Update pipelines without service restart

### Data Processing

- **Pattern-based Filtering**: Chainable include/exclude filters with regex support
- **Multiple Formatters**: Raw, JSON, and template-based text formatting
- **Rate Limiting**: Pipeline rate control

### Security & Reliability

- **Authentication**: mTLS support for HTTPS
- **TLS Encryption**: TLS 1.2/1.3 support for HTTP connections
- **Access Control**: IP whitelisting/blacklisting, connection limits
- **Automatic Reconnection**: Resilient client connections with exponential backoff
- **File Rotation**: Size-based rotation with retention policies

### Operational Features

- **Status Monitoring**: Real-time statistics and health endpoints
- **Signal Handling**: Graceful shutdown and configuration reload via signals
- **Background Mode**: Daemon operation with proper signal handling
- **Quiet Mode**: Silent operation for automated deployments

## Documentation

Available in the `doc/` directory:

- [Installation Guide](doc/installation.md) - Platform setup and service configuration
- [Architecture Overview](doc/architecture.md) - System design and component interaction
- [Configuration Reference](doc/configuration.md) - TOML structure and configuration methods
- [Input Sources](doc/sources.md) - Available source types and configurations
- [Output Sinks](doc/sinks.md) - Sink types and output options
- [Filters](doc/filters.md) - Pattern-based log filtering
- [Formatters](doc/formatters.md) - Log formatting and transformation
- [Security](doc/security.md) - mTLS configurations and access control
- [Networking](doc/networking.md) - TLS, rate limiting, and network features
- [Command Line Interface](doc/cli.md) - CLI flags and subcommands
- [Operations Guide](doc/operations.md) - Running and maintaining LogWisp

## Quick Start

Install LogWisp and create a basic configuration:

```toml
[[pipelines]]
name = "default"

[[pipelines.sources]]
type = "directory"

[pipelines.sources.directory]
path = "./"
pattern = "*.log"

[[pipelines.sinks]]
type = "console"

[pipelines.sinks.console]
target = "stdout"
```

Run with: `logwisp -c config.toml`

## System Requirements

- **Operating Systems**: Linux (kernel 6.10+), FreeBSD (14.0+)
- **Architecture**: amd64
- **Go Version**: 1.25+ (for building from source)

## License

BSD 3-Clause License


###############################################################################
### LogWisp Configuration
### Default location: ~/.config/logwisp/logwisp.toml
### Configuration Precedence: CLI flags > Environment > File > Defaults
### Default values shown - uncommented lines represent active configuration
###############################################################################

###############################################################################
### Global Settings
###############################################################################
background = false                # Run as daemon
quiet = false                     # Suppress console output
disable_status_reporter = false   # Disable periodic status logging
config_auto_reload = false        # Reload config on file change
# config_save_on_exit = false     # Save config on shutdown
###############################################################################
### Logging Configuration (LogWisp's internal operational logging)
###############################################################################
[logging]
output = "stdout"                 # file|stdout|stderr|split|all|none
level = "info"                    # debug|info|warn|error

# [logging.file]
# directory = "./log"             # Log directory path
# name = "logwisp"                # Base filename
# max_size_mb = 100               # Rotation threshold
# max_total_size_mb = 1000        # Total size limit
# retention_hours = 168.0         # Delete logs older than (7 days)

[logging.console]
target = "stdout"                 # stdout|stderr|split
format = "txt"                    # txt|json
###############################################################################
### Pipeline Configuration
### Each pipeline: sources -> rate_limit -> filters -> format -> sinks
###############################################################################

[[pipelines]]
name = "default"                  # Pipeline identifier
###============================================================================
### Rate Limiting (Pipeline-level)
###============================================================================
# [pipelines.rate_limit]
# rate = 1000.0                   # Entries per second (0=disabled)
# burst = 2000.0                  # Burst capacity (defaults to rate)
# policy = "drop"                 # pass|drop
# max_entry_size_bytes = 0        # Max entry size (0=unlimited)
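The `rate`/`burst` pair above describes a classic token bucket: tokens refill at `rate` per second up to `burst`, and each entry spends one token or falls to the `policy`. A minimal Go sketch of that accounting (an illustration, not LogWisp's actual implementation):

```go
package main

import "fmt"

// bucket is a minimal token-bucket rate limiter: tokens refill at `rate`
// per second up to `burst`; each entry consumes one token or is dropped.
type bucket struct {
	rate, burst, tokens float64
	last                float64 // time of last refill, in seconds
}

// allow reports whether an entry arriving at time `now` (seconds) passes.
func (b *bucket) allow(now float64) bool {
	b.tokens += (now - b.last) * b.rate
	b.last = now
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	b := &bucket{rate: 2, burst: 2, tokens: 2}
	// Three entries at t=0: the burst of 2 passes, the third is dropped.
	fmt.Println(b.allow(0), b.allow(0), b.allow(0)) // true true false
	// Half a second later one token has refilled at rate 2/s.
	fmt.Println(b.allow(0.5)) // true
}
```

With `policy = "pass"` a real limiter would forward the entry anyway and only record the violation; the sketch models the `"drop"` case.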
###============================================================================
### Filters (Sequential pattern matching)
###============================================================================

### ⚠️ Example: Include only ERROR and WARN logs
## [[pipelines.filters]]
## type = "include"               # include|exclude
## logic = "or"                   # or|and
## patterns = [".*ERROR.*", ".*WARN.*"]

### ⚠️ Example: Exclude debug logs
## [[pipelines.filters]]
## type = "exclude"
## patterns = [".*DEBUG.*"]
###============================================================================
### Format (Log transformation)
###============================================================================

# [pipelines.format]
# type = "raw"                   # raw|json|txt

## JSON formatting
# [pipelines.format.json]
# pretty = false                 # Pretty-print JSON
# timestamp_field = "timestamp"  # Field name for timestamp
# level_field = "level"          # Field name for log level
# message_field = "message"      # Field name for message
# source_field = "source"        # Field name for source

## Text templating
# [pipelines.format.txt]
# template = "{{.Timestamp | FmtTime}} [{{.Level}}] {{.Message}}"
# timestamp_format = "2006-01-02 15:04:05"

## Raw templating
# [pipelines.format.raw]
# add_new_line = true            # Preserve new line delimiter between log entries
###============================================================================
### SOURCES (Inputs)
### Architecture: Pipeline can have multiple sources
###============================================================================

###----------------------------------------------------------------------------
### File Source (File monitoring)
[[pipelines.sources]]
type = "file"

[pipelines.sources.file]
directory = "./"               # Directory to monitor
pattern = "*.log"              # Glob pattern
check_interval_ms = 100        # File check interval
recursive = false              # Recursive monitoring (TODO)
###----------------------------------------------------------------------------
### Console Source
# [[pipelines.sources]]
# type = "console"
# [pipelines.sources.console]
# buffer_size = 1000
###----------------------------------------------------------------------------
### HTTP Source (Server mode - receives logs via HTTP POST)
# [[pipelines.sources]]
# type = "http"
# [pipelines.sources.http]
# host = "0.0.0.0" # Listen interface
# port = 8081 # Listen port
# ingest_path = "/ingest" # Ingestion endpoint
# buffer_size = 1000
# max_body_size = 1048576 # 1MB
# read_timeout_ms = 10000
# write_timeout_ms = 10000
### Network access control
# [pipelines.sources.http.acl]
# enabled = false
# max_connections_per_ip = 10 # Max simultaneous connections from a single IP
# max_connections_total = 100 # Max simultaneous connections for this component
# requests_per_second = 100.0 # Per-IP request rate limit
# burst_size = 200 # Per-IP request burst limit
# response_message = "Rate limit exceeded"
# response_code = 429
# ip_whitelist = ["192.168.1.0/24"]
# ip_blacklist = ["10.0.0.100"]
### TLS configuration (mTLS support)
# [pipelines.sources.http.tls]
# enabled = false
# cert_file = "/path/to/server.pem" # Server certificate
# key_file = "/path/to/server.key" # Server private key
# client_auth = false # Enable mTLS
# client_ca_file = "/path/to/ca.pem" # CA for client verification
# verify_client_cert = true # Verify client certificates
# min_version = "TLS1.2" # TLS1.0|TLS1.1|TLS1.2|TLS1.3
# max_version = "TLS1.3"
# cipher_suites = "" # Comma-separated cipher list
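A remote sender talks to the HTTP source by POSTing log data to the ingest endpoint. A sketch with `sendLine` and `newIngestStub` as hypothetical helper names, and an `httptest` server standing in for a real LogWisp instance (which would listen on e.g. `http://host:8081/ingest`):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// sendLine posts one log line to a LogWisp-style HTTP ingest endpoint and
// returns the response status code.
func sendLine(url, line string) (int, error) {
	resp, err := http.Post(url, "text/plain", strings.NewReader(line))
	if err != nil {
		return 0, err
	}
	resp.Body.Close()
	return resp.StatusCode, nil
}

// newIngestStub stands in for a LogWisp HTTP source during local testing.
func newIngestStub() *httptest.Server {
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusAccepted)
	}))
}

func main() {
	srv := newIngestStub()
	defer srv.Close()
	code, err := sendLine(srv.URL+"/ingest", "ERROR disk full")
	fmt.Println(code, err)
}
```

The stub's `202 Accepted` response is an assumption for the sketch; a real deployment would also need any configured TLS and ACL settings satisfied.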
###----------------------------------------------------------------------------
### TCP Source (Server mode - receives logs via TCP)
# [[pipelines.sources]]
# type = "tcp"
# [pipelines.sources.tcp]
# host = "0.0.0.0"
# port = 9091
# buffer_size = 1000
# read_timeout_ms = 10000
# keep_alive = true
# keep_alive_period_ms = 30000
### Network access control
# [pipelines.sources.tcp.acl]
# enabled = false
# max_connections_per_ip = 10 # Max simultaneous connections from a single IP
# max_connections_total = 100 # Max simultaneous connections for this component
# requests_per_second = 100.0 # Per-IP request rate limit
# burst_size = 200 # Per-IP request burst limit
# response_message = "Rate limit exceeded"
# response_code = 429
# ip_whitelist = ["192.168.1.0/24"]
# ip_blacklist = ["10.0.0.100"]
### ⚠️ IMPORTANT: TCP does NOT support TLS/mTLS (gnet limitation)
### Use HTTP Source with TLS for encrypted transport
###============================================================================
### SINKS (Outputs)
### Architecture: Pipeline can have multiple sinks (fan-out)
###============================================================================
###----------------------------------------------------------------------------
### Console Sink
# [[pipelines.sinks]]
# type = "console"
# [pipelines.sinks.console]
# target = "stdout" # stdout|stderr|split
# colorize = false # Colorized output
# buffer_size = 100
###----------------------------------------------------------------------------
### File Sink (Rotating logs)
# [[pipelines.sinks]]
# type = "file"
# [pipelines.sinks.file]
# directory = "./logs"
# name = "output"
# max_size_mb = 100
# max_total_size_mb = 1000
# min_disk_free_mb = 100
# retention_hours = 168.0 # 7 days
# buffer_size = 1000
# flush_interval_ms = 1000
###----------------------------------------------------------------------------
### HTTP Sink (Server mode - SSE streaming for clients)
[[pipelines.sinks]]
type = "http"
[pipelines.sinks.http]
host = "0.0.0.0"
port = 8080
stream_path = "/stream" # SSE streaming endpoint
status_path = "/status" # Status endpoint
buffer_size = 1000
write_timeout_ms = 10000
### Heartbeat configuration (keep connections alive)
[pipelines.sinks.http.heartbeat]
enabled = true
interval_ms = 30000 # 30 seconds
include_timestamp = true
include_stats = false
format = "comment" # comment|event|json
### Network access control
# [pipelines.sinks.http.acl]
# enabled = false
# max_connections_per_ip = 10 # Max simultaneous connections from a single IP
# max_connections_total = 100 # Max simultaneous connections for this component
# requests_per_second = 100.0 # Per-IP request rate limit
# burst_size = 200 # Per-IP request burst limit
# response_message = "Rate limit exceeded"
# response_code = 429
# ip_whitelist = ["192.168.1.0/24"]
# ip_blacklist = ["10.0.0.100"]
### TLS configuration (mTLS support)
# [pipelines.sinks.http.tls]
# enabled = false
# cert_file = "/path/to/server.pem" # Server certificate
# key_file = "/path/to/server.key" # Server private key
# client_auth = false # Enable mTLS
# client_ca_file = "/path/to/ca.pem" # CA for client verification
# verify_client_cert = true # Verify client certificates
# min_version = "TLS1.2" # TLS1.0|TLS1.1|TLS1.2|TLS1.3
# max_version = "TLS1.3"
# cipher_suites = "" # Comma-separated cipher list
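The `/stream` endpoint and its `comment`-format heartbeat follow the standard Server-Sent Events wire format: data frames carry log entries, comment frames (lines starting with `:`) keep idle connections alive. A sketch of the two frame shapes a client would see (the exact payload contents are an assumption):

```go
package main

import "fmt"

// sseData wraps a log line as a Server-Sent Events data frame, the wire
// format the HTTP sink's /stream endpoint speaks.
func sseData(line string) string {
	return "data: " + line + "\n\n"
}

// sseComment builds a comment frame, usable as a keep-alive heartbeat
// (heartbeat format = "comment" in the config above).
func sseComment(text string) string {
	return ": " + text + "\n\n"
}

func main() {
	fmt.Print(sseData("ERROR disk full"))
	fmt.Print(sseComment("heartbeat 2025-01-02T15:04:05Z"))
}
```

Browsers consume this natively via `EventSource`; comment frames are ignored by the parser, which is what makes them suitable as heartbeats.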
###----------------------------------------------------------------------------
### TCP Sink (Server mode - TCP streaming for clients)
# [[pipelines.sinks]]
# type = "tcp"
# [pipelines.sinks.tcp]
# host = "0.0.0.0"
# port = 9090
# buffer_size = 1000
# write_timeout_ms = 10000
# keep_alive = true
# keep_alive_period_ms = 30000
### Heartbeat configuration
# [pipelines.sinks.tcp.heartbeat]
# enabled = false
# interval_ms = 30000
# include_timestamp = true
# include_stats = false
# format = "json" # json|txt
### Network access control
# [pipelines.sinks.tcp.acl]
# enabled = false
# max_connections_per_ip = 10 # Max simultaneous connections from a single IP
# max_connections_total = 100 # Max simultaneous connections for this component
# requests_per_second = 100.0 # Per-IP request rate limit
# burst_size = 200 # Per-IP request burst limit
# response_message = "Rate limit exceeded"
# response_code = 429
# ip_whitelist = ["192.168.1.0/24"]
# ip_blacklist = ["10.0.0.100"]
### ⚠️ IMPORTANT: TCP does NOT support TLS/mTLS (gnet limitation)
### Use HTTP Sink with TLS for encrypted transport
###----------------------------------------------------------------------------
### HTTP Client Sink (Forward to remote HTTP endpoint)
# [[pipelines.sinks]]
# type = "http_client"

# [pipelines.sinks.http_client]
# url = "https://logs.example.com/ingest"
# buffer_size = 1000
# batch_size = 100              # Entries per batch
# batch_delay_ms = 1000         # Max wait before sending
# timeout_seconds = 30
# max_retries = 3
# retry_delay_ms = 1000
# retry_backoff = 2.0           # Exponential backoff multiplier
# insecure_skip_verify = false  # Skip TLS verification
### TLS configuration for client
# [pipelines.sinks.http_client.tls]
# enabled = false # Enable TLS for the outgoing connection
# server_ca_file = "/path/to/ca.pem" # CA for verifying the remote server's certificate
# server_name = "logs.example.com" # For server certificate validation (SNI)
# insecure_skip_verify = false # Skip server verification, use with caution
# client_cert_file = "/path/to/client.pem" # Client's certificate to present to the server for mTLS
# client_key_file = "/path/to/client.key" # Client's private key for mTLS
# min_version = "TLS1.2"
# max_version = "TLS1.3"
# cipher_suites = ""
### ⚠️ Example: HTTP Client Sink → HTTP Source with mTLS
## HTTP Source with mTLS:
## [pipelines.sources.http.tls]
## enabled = true
## cert_file = "/path/to/server.pem"
## key_file = "/path/to/server.key"
## client_auth = true # Enable client cert verification
## client_ca_file = "/path/to/ca.pem"
## verify_client_cert = true
## HTTP Client with client cert:
## [pipelines.sinks.http_client.tls]
## enabled = true
## server_ca_file = "/path/to/ca.pem" # Verify server
## client_cert_file = "/path/to/client.pem" # Client certificate
## client_key_file = "/path/to/client.key"
###----------------------------------------------------------------------------
### TCP Client Sink (Forward to remote TCP endpoint)
# [[pipelines.sinks]]
# type = "tcp_client"

# [pipelines.sinks.tcp_client]
# host = "logs.example.com"
# port = 9090
# buffer_size = 1000
# dial_timeout_seconds = 10       # Connection timeout
# write_timeout_seconds = 30      # Write timeout
# read_timeout_seconds = 10       # Read timeout
# keep_alive_seconds = 30         # TCP keep-alive
# reconnect_delay_ms = 1000       # Initial reconnect delay
# max_reconnect_delay_ms = 30000  # Max reconnect delay
# reconnect_backoff = 1.5         # Exponential backoff
### ⚠️ WARNING: TCP Client has NO TLS support
### Use HTTP Client with TLS for encrypted transport
###############################################################################
### Common Usage Patterns
###############################################################################

### Pattern 1: Log Aggregation (Client → Server)
### - HTTP Client Sink → HTTP Source (with optional TLS/mTLS)
### - TCP Client Sink → TCP Source (unencrypted only)

# ----------------------------------------------------------------------------
# AUTHENTICATION (optional, for network sinks)
# ----------------------------------------------------------------------------
# [pipelines.auth]
# type = "none" # none, basic, bearer
# ip_whitelist = [] # Allowed IPs (empty = all)
# ip_blacklist = [] # Blocked IPs
#
# [pipelines.auth.basic_auth]
# realm = "LogWisp" # WWW-Authenticate realm
# users_file = "" # External users file
# [[pipelines.auth.basic_auth.users]]
# username = "admin"
# password_hash = "$2a$10$..." # bcrypt hash
#
# [pipelines.auth.bearer_auth]
# tokens = ["token1", "token2"] # Static tokens
# [pipelines.auth.bearer_auth.jwt]
# jwks_url = "" # JWKS endpoint
# signing_key = "" # Static key (if not using JWKS)
# issuer = "" # Expected issuer
# audience = "" # Expected audience
### Pattern 2: Live Monitoring
### - HTTP Sink: Browser-based SSE streaming (https://host:8080/stream)
### - TCP Sink: Debug interface (telnet/netcat to port 9090)

# ============================================================================
# HOT RELOAD
# ============================================================================
# Enable with: --config-auto-reload
# Manual reload: kill -HUP $(pidof logwisp)
# Updates pipelines, filters, formatters without restart
# Logging changes require restart
### Pattern 3: Log Collection & Distribution
### - File Source → Multiple Sinks (fan-out)
### - Multiple Sources → Single Pipeline → Multiple Sinks

# ============================================================================
# ROUTER MODE
# ============================================================================
# Enable with: logwisp --router or router = true
# Combines multiple pipeline HTTP sinks on shared ports
# Access pattern: http://localhost:8080/{pipeline_name}/stream
# Global status: http://localhost:8080/status
# ============================================================================
# SIGNALS
# ============================================================================
# SIGINT/SIGTERM: Graceful shutdown
# SIGHUP/SIGUSR1: Reload config (when auto-reload enabled)
# SIGKILL: Immediate shutdown
# ============================================================================
# CLI FLAGS
# ============================================================================
# --config, -c PATH # Config file path
# --router, -r # Enable router mode
# --background, -b # Run as daemon
# --quiet, -q # Suppress output
# --version, -v # Show version
# ============================================================================
# ENVIRONMENT VARIABLES
# ============================================================================
# LOGWISP_CONFIG_FILE # Config filename
# LOGWISP_CONFIG_DIR # Config directory
# LOGWISP_CONSOLE_TARGET # Override console target
# Any config value: LOGWISP_<SECTION>_<KEY> (uppercase, dots → underscores)
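The `LOGWISP_<SECTION>_<KEY>` scheme maps a dotted config path to an environment variable by uppercasing it and replacing dots with underscores. A sketch of that mapping:

```go
package main

import (
	"fmt"
	"strings"
)

// envKey maps a dotted config path to its LOGWISP_* environment variable,
// following the scheme above: uppercase, dots become underscores.
func envKey(path string) string {
	return "LOGWISP_" + strings.ToUpper(strings.ReplaceAll(path, ".", "_"))
}

func main() {
	fmt.Println(envKey("logging.level"))          // LOGWISP_LOGGING_LEVEL
	fmt.Println(envKey("logging.console.target")) // LOGWISP_LOGGING_CONSOLE_TARGET
}
```

For example, `LOGWISP_LOGGING_LEVEL=debug logwisp` would override `level` in the `[logging]` section, subject to the precedence order stated at the top of the file.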


@ -1,42 +0,0 @@
# LogWisp Minimal Configuration
# Save as: ~/.config/logwisp/logwisp.toml
# Basic pipeline monitoring application logs
[[pipelines]]
name = "app"
# Source: Monitor log directory
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/myapp", pattern = "*.log", check_interval_ms = 100 }
# Sink: HTTP streaming
[[pipelines.sinks]]
type = "http"
options = {
port = 8080,
buffer_size = 1000,
stream_path = "/stream",
status_path = "/status"
}
# Optional: Filter for errors only
# [[pipelines.filters]]
# type = "include"
# patterns = ["ERROR", "WARN", "CRITICAL"]
# Optional: Add rate limiting to HTTP sink
# [[pipelines.sinks]]
# type = "http"
# options = {
# port = 8080,
# buffer_size = 1000,
# stream_path = "/stream",
# status_path = "/status",
# net_limit = { enabled = true, requests_per_second = 10.0, burst_size = 20 }
# }
# Optional: Add file output
# [[pipelines.sinks]]
# type = "file"
# options = { directory = "/var/log/logwisp", name = "app" }


# LogWisp

A high-performance, pipeline-based log transport and processing system built in Go. LogWisp provides flexible log collection, filtering, formatting, and distribution with security and reliability features.

## Features

### Core Capabilities

- **Pipeline Architecture**: Independent processing pipelines with source(s) → filter → format → sink(s) flow
- **Multiple Input Sources**: Directory monitoring, stdin, HTTP, TCP
- **Flexible Output Sinks**: Console, file, HTTP SSE, TCP streaming, HTTP/TCP forwarding
- **Real-time Processing**: Sub-millisecond latency with configurable buffering
- **Hot Configuration Reload**: Update pipelines without service restart

### Data Processing

- **Pattern-based Filtering**: Chainable include/exclude filters with regex support
- **Multiple Formatters**: Raw, JSON, and template-based text formatting
- **Rate Limiting**: Pipeline rate controls

### Security & Reliability

- **Authentication**: mTLS support
- **Access Control**: IP whitelisting/blacklisting, connection limits
- **TLS Encryption**: Full TLS 1.2/1.3 support for HTTP connections
- **Automatic Reconnection**: Resilient client connections with exponential backoff
- **File Rotation**: Size-based rotation with retention policies

### Operational Features

- **Status Monitoring**: Real-time statistics and health endpoints
- **Signal Handling**: Graceful shutdown and configuration reload via signals
- **Background Mode**: Daemon operation with proper signal handling
- **Quiet Mode**: Silent operation for automated deployments

## Documentation
- [Installation Guide](installation.md) - Platform setup and service configuration
- [Architecture Overview](architecture.md) - System design and component interaction
- [Configuration Reference](configuration.md) - TOML structure and configuration methods
- [Input Sources](sources.md) - Available source types and configurations
- [Output Sinks](sinks.md) - Sink types and output options
- [Filters](filters.md) - Pattern-based log filtering
- [Formatters](formatters.md) - Log formatting and transformation
- [Security](security.md) - IP-based access control configuration and mTLS
- [Networking](networking.md) - TLS, rate limiting, and network features
- [Command Line Interface](cli.md) - CLI flags and subcommands
- [Operations Guide](operations.md) - Running and maintaining LogWisp
## Quick Start
Install LogWisp and create a basic configuration:
```toml
[[pipelines]]
name = "default"
[[pipelines.sources]]
type = "directory"
[pipelines.sources.directory]
path = "./"
pattern = "*.log"
[[pipelines.sinks]]
type = "console"
[pipelines.sinks.console]
target = "stdout"
```
Run with: `logwisp -c config.toml`
## System Requirements
- **Operating Systems**: Linux (kernel 6.10+), FreeBSD (14.0+)
- **Architecture**: amd64
- **Go Version**: 1.25+ (for building from source)
## License
BSD 3-Clause License

# Architecture Overview

LogWisp implements a pipeline-based architecture for flexible log processing and distribution.

## Core Concepts
### Pipeline Model
Each pipeline operates independently with a source → filter → format → sink flow. Multiple pipelines can run concurrently within a single LogWisp instance, each processing different log streams with unique configurations.
### Component Hierarchy
```
Service (Main Process)
├── Pipeline 1
│   ├── Sources (1 or more)
│   ├── Rate Limiter (optional)
│   ├── Filter Chain (optional)
│   ├── Formatter (optional)
│   └── Sinks (1 or more)
├── Pipeline 2
│   └── [Same structure]
└── Status Reporter (optional)
```
## Data Flow

### Processing Stages

1. **Source Stage**: Sources monitor inputs and generate log entries
2. **Rate Limiting**: Optional pipeline-level rate control
3. **Filtering**: Pattern-based inclusion/exclusion
4. **Formatting**: Transform entries to desired output format
5. **Distribution**: Fan-out to multiple sinks
### Entry Lifecycle
Log entries flow through the pipeline as `core.LogEntry` structures containing:
- **Time**: Entry timestamp
- **Level**: Log level (DEBUG, INFO, WARN, ERROR)
- **Source**: Origin identifier
- **Message**: Log content
- **Fields**: Additional metadata (JSON)
- **RawSize**: Original entry size
### Buffering Strategy

Each component maintains internal buffers to handle burst traffic:
- Sources: Configurable buffer size (default 1000 entries)
- Sinks: Independent buffers per sink
- Network components: Additional TCP/HTTP buffers
## Component Types

### Sources (Input)

- **Directory Source**: File system monitoring with rotation detection
- **Stdin Source**: Standard input processing
- **HTTP Source**: REST endpoint for log ingestion
- **TCP Source**: Raw TCP socket listener

### Sinks (Output)

- **Console Sink**: stdout/stderr output
- **File Sink**: Rotating file writer
- **HTTP Sink**: Server-Sent Events (SSE) streaming
- **TCP Sink**: TCP server for client connections
- **HTTP Client Sink**: Forward to remote HTTP endpoints
- **TCP Client Sink**: Forward to remote TCP servers
### Processing Components

- **Rate Limiter**: Token bucket algorithm for flow control
- **Filter Chain**: Sequential pattern matching
- **Formatters**: Raw, JSON, or template-based text transformation
## Concurrency Model

### Goroutine Architecture

- Each source runs in dedicated goroutines for monitoring
- Sinks operate independently with their own processing loops
- Network listeners use optimized event loops (gnet for TCP)
- Pipeline processing uses channel-based communication

### Synchronization

- Atomic counters for statistics
- Read-write mutexes for configuration access
- Context-based cancellation for graceful shutdown
- Wait groups for coordinated startup/shutdown
## Network Architecture

### Connection Patterns

**Chaining Design**:
- TCP Client Sink → TCP Source: Direct TCP forwarding
- HTTP Client Sink → HTTP Source: HTTP-based forwarding

**Monitoring Design**:
- TCP Sink: Debugging interface
- HTTP Sink: Browser-based live monitoring

### Protocol Support

- HTTP/1.1 and HTTP/2 for HTTP connections
- Raw TCP connections
- TLS 1.2/1.3 for HTTPS connections (HTTP only)
- Server-Sent Events for real-time streaming

## Resource Management

### Memory Management

- Bounded buffers prevent unbounded growth
- Automatic garbage collection via Go runtime
- Connection limits prevent resource exhaustion

### File Management

- Automatic rotation based on size thresholds
- Retention policies for old log files
- Minimum disk space checks before writing

### Connection Management

- Per-IP connection limits
- Global connection caps
- Automatic reconnection with exponential backoff
- Keep-alive for persistent connections
## Reliability Features
### Fault Tolerance
- Panic recovery in pipeline processing
- Independent pipeline operation
- Automatic source restart on failure
- Sink failure isolation
### Data Integrity
- Entry validation at ingestion
- Size limits for entries and batches
- Duplicate detection in file monitoring
- Position tracking for file reads
## Performance Characteristics
### Throughput
- Pipeline rate limiting: Configurable (default 1000 entries/second)
- Network throughput: Limited by network and sink capacity
- File monitoring: Sub-second detection (default 100ms interval)
### Latency
- Entry processing: Sub-millisecond in-memory
- Network forwarding: Depends on batch configuration
- File detection: Configurable check interval
### Scalability
- Horizontal: Multiple LogWisp instances with different configurations
- Vertical: Multiple pipelines per instance
- Fan-out: Multiple sinks per pipeline
- Fan-in: Multiple sources per pipeline

# Command Line Interface

LogWisp CLI reference for commands and options.

## Synopsis

```bash
logwisp [command] [options]
logwisp [options]
```

## Commands

### Main Commands

| Command | Description |
|---------|-------------|
| `tls` | Generate TLS certificates |
| `version` | Display version information |
| `help` | Show help information |
### tls Command
Generate TLS certificates.
```bash
logwisp tls [options]
```
**Options:**
| Flag | Description | Default |
|------|-------------|---------|
| `-ca` | Generate CA certificate | - |
| `-server` | Generate server certificate | - |
| `-client` | Generate client certificate | - |
| `-host` | Comma-separated hosts/IPs | localhost |
| `-o` | Output file prefix | Required |
| `-ca-cert` | CA certificate file | Required for server/client |
| `-ca-key` | CA key file | Required for server/client |
| `-days` | Certificate validity days | 365 |
### version Command
Display version information.

```bash
logwisp version
logwisp -v
logwisp --version
```

Output includes:
- Version number
- Build date
- Git commit hash
- Go version

## Global Options
### Configuration Options
| Flag | Description | Default |
|------|-------------|---------|
| `-c, --config` | Configuration file path | `./logwisp.toml` |
| `-b, --background` | Run as daemon | false |
| `-q, --quiet` | Suppress console output | false |
| `--disable-status-reporter` | Disable status logging | false |
| `--config-auto-reload` | Enable config hot reload | false |
### Logging Options
| Flag | Description | Values |
|------|-------------|--------|
| `--logging.output` | Log output mode | file, stdout, stderr, split, all, none |
| `--logging.level` | Log level | debug, info, warn, error |
| `--logging.file.directory` | Log directory | Path |
| `--logging.file.name` | Log filename | String |
| `--logging.file.max_size_mb` | Max file size | Integer |
| `--logging.file.max_total_size_mb` | Total size limit | Integer |
| `--logging.file.retention_hours` | Retention period | Float |
| `--logging.console.target` | Console target | stdout, stderr, split |
| `--logging.console.format` | Output format | txt, json |
### Pipeline Options
Configure pipelines via CLI (N = array index, 0-based).
**Pipeline Configuration:**
| Flag | Description |
|------|-------------|
| `--pipelines.N.name` | Pipeline name |
| `--pipelines.N.sources.N.type` | Source type |
| `--pipelines.N.filters.N.type` | Filter type |
| `--pipelines.N.sinks.N.type` | Sink type |
## Flag Formats
### Boolean Flags
```bash
logwisp --quiet
logwisp --quiet=true
logwisp --quiet=false
```
### String Flags

```bash
logwisp --config /etc/logwisp/config.toml
logwisp -c config.toml
```
### Nested Configuration

```bash
logwisp --logging.level=debug
logwisp --pipelines.0.name=myapp
logwisp --pipelines.0.sources.0.type=stdin
```
### Array Values (JSON)

```bash
logwisp --pipelines.0.filters.0.patterns='["ERROR","WARN"]'
```
## Configuration Precedence
1. Command-line flags (highest)
2. Environment variables
3. Configuration file
4. Built-in defaults (lowest)
## Exit Codes

| Code | Description |
|------|-------------|
| 0 | Success |
| 1 | General error |
| 2 | Configuration file not found |
| 137 | SIGKILL received |
## Signal Handling

| Signal | Action |
|--------|--------|
| SIGINT (Ctrl+C) | Graceful shutdown |
| SIGTERM | Graceful shutdown |
| SIGHUP | Reload configuration (when auto-reload enabled) |
| SIGUSR1 | Reload configuration (when auto-reload enabled) |
| SIGKILL | Immediate termination |
## Usage Patterns
### Development Mode
```bash
# Verbose logging to console
logwisp --logging.output=stderr --logging.level=debug
# Quick test with stdin
logwisp --pipelines.0.sources.0.type=stdin --pipelines.0.sinks.0.type=console
```
### Production Deployment
```bash
# Background with file logging
logwisp --background --config /etc/logwisp/prod.toml --logging.output=file
# Systemd service
ExecStart=/usr/local/bin/logwisp --config /etc/logwisp/config.toml
```
### Debugging
```bash
# Check configuration
logwisp --config test.toml --logging.level=debug --disable-status-reporter
# Dry run (verify config only)
logwisp --config test.toml --quiet
```
### Quick Commands
```bash
# Generate admin password
logwisp auth -u admin -b
# Create self-signed certs
logwisp tls -server -host localhost -o server
# Check version
logwisp version
```
## Help System
### General Help
```bash
logwisp --help
logwisp -h
logwisp help
```
### Command Help
```bash
logwisp auth --help
logwisp tls --help
logwisp help auth
```
## Special Flags
### Internal Flags
These flags are for internal use:
- `--background-daemon`: Child process indicator
- `--config-save-on-exit`: Save config on shutdown
### Hidden Behaviors
- SIGHUP ignored by default (nohup behavior)
- Automatic panic recovery in pipelines
- Resource cleanup on shutdown

# Configuration Reference

LogWisp configuration uses TOML format with flexible override mechanisms.

## Configuration Precedence

Configuration sources are evaluated in order:

1. **Command-line flags** (highest priority)
2. **Environment variables**
3. **Configuration file**
4. **Built-in defaults** (lowest priority)
## File Location

LogWisp searches for configuration in order:

1. Path specified via `--config` flag
2. Path from `LOGWISP_CONFIG_FILE` environment variable
3. `~/.config/logwisp/logwisp.toml`
4. `./logwisp.toml` in current directory

## Global Settings

Top-level configuration options:

| Setting | Type | Default | Description |
|---------|------|---------|-------------|
| `background` | bool | false | Run as daemon process |
| `quiet` | bool | false | Suppress console output |
| `disable_status_reporter` | bool | false | Disable periodic status logging |
| `config_auto_reload` | bool | false | Enable file watch for auto-reload |
## Logging Configuration

LogWisp's internal operational logging:

```toml
[logging]
output = "stdout"        # file|stdout|stderr|split|all|none
level = "info"           # debug|info|warn|error

[logging.file]
directory = "./log"
name = "logwisp"
max_size_mb = 100
max_total_size_mb = 1000
retention_hours = 168.0

[logging.console]
target = "stdout"        # stdout|stderr|split
format = "txt"           # txt|json
```
### Output Modes
- **file**: Write to log files only
- **stdout**: Write to standard output
- **stderr**: Write to standard error
- **split**: INFO/DEBUG to stdout, WARN/ERROR to stderr
- **all**: Write to both file and console
- **none**: Disable all logging
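For example, a development setup might keep everything on the console while separating severities by stream (an illustrative fragment using values from the list above):

```toml
[logging]
output = "split"   # INFO/DEBUG go to stdout, WARN/ERROR to stderr
level = "debug"
```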
## Pipeline Configuration
Each `[[pipelines]]` section defines an independent processing pipeline:
```toml
[[pipelines]]
name = "pipeline-name"

# Rate limiting (optional)
[pipelines.rate_limit]
rate = 1000.0
burst = 2000.0
policy = "drop"              # pass|drop
max_entry_size_bytes = 0     # 0=unlimited

# Format configuration (optional)
[pipelines.format]
type = "json"                # raw|json|txt

# Sources (required, 1+)
[[pipelines.sources]]
type = "directory"
# ... source-specific config

# Filters (optional)
[[pipelines.filters]]
type = "include"
logic = "or"
patterns = ["ERROR", "WARN"]

# Sinks (required, 1+)
[[pipelines.sinks]]
type = "http"
# ... sink-specific config
```
## Pipeline Configuration ## Environment Variables
Each `[[pipelines]]` section defines an independent processing pipeline. All configuration options support environment variable overrides:
### Naming Convention
- Prefix: `LOGWISP_`
- Path separator: `_` (underscore)
- Array indices: Numeric suffix (0-based)
- Case: UPPERCASE
### Mapping Examples
| TOML Path | Environment Variable |
|-----------|---------------------|
| `quiet` | `LOGWISP_QUIET` |
| `logging.level` | `LOGWISP_LOGGING_LEVEL` |
| `pipelines[0].name` | `LOGWISP_PIPELINES_0_NAME` |
| `pipelines[0].sources[0].type` | `LOGWISP_PIPELINES_0_SOURCES_0_TYPE` |
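The path-to-variable mapping above can be sketched as a small helper. This is a hypothetical illustration of the naming rule, not LogWisp's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// envName converts a dotted TOML path with optional [n] array indices
// into a LOGWISP_-prefixed environment variable name.
func envName(path string) string {
	// "[" opens an index (becomes "_"), "]" closes it (dropped),
	// and "." separates path segments (becomes "_").
	r := strings.NewReplacer("[", "_", "]", "", ".", "_")
	return "LOGWISP_" + strings.ToUpper(r.Replace(path))
}

func main() {
	fmt.Println(envName("logging.level"))                // LOGWISP_LOGGING_LEVEL
	fmt.Println(envName("pipelines[0].sources[0].type")) // LOGWISP_PIPELINES_0_SOURCES_0_TYPE
}
```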
## Command-Line Overrides
All configuration options can be overridden via CLI flags:
```bash
logwisp --quiet \
--logging.level=debug \
--pipelines.0.name=myapp \
--pipelines.0.sources.0.type=stdin
```
## Configuration Validation
LogWisp validates configuration at startup:
- Required fields presence
- Type correctness
- Port conflicts
- Path accessibility
- Pattern compilation
- Network address formats
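Pattern compilation, for example, can be checked the same way LogWisp plausibly does it at startup: every filter pattern must compile as a Go (RE2) regular expression. A minimal sketch, not the actual implementation:

```go
package main

import (
	"fmt"
	"regexp"
)

// validatePatterns rejects any pattern that does not compile
// as a Go (RE2) regular expression.
func validatePatterns(patterns []string) error {
	for _, p := range patterns {
		if _, err := regexp.Compile(p); err != nil {
			return fmt.Errorf("invalid pattern %q: %w", p, err)
		}
	}
	return nil
}

func main() {
	fmt.Println(validatePatterns([]string{"ERROR", `\bfatal\b`})) // <nil>
	fmt.Println(validatePatterns([]string{"("}) != nil)           // true
}
```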
## Hot Reload
Enable configuration hot reload:
```toml
config_auto_reload = true
```
Or via command line:
```bash
logwisp --config-auto-reload
```
Reload triggers:
- File modification detection
- SIGHUP or SIGUSR1 signals
Reloadable items:
- Pipeline configurations
- Sources and sinks
- Filters and formatters
- Rate limits
Non-reloadable (requires restart):
- Logging configuration
- Background mode
- Global settings
## Default Configuration
Minimal working configuration:
```toml
[[pipelines]]
name = "default"
[[pipelines.sources]]
type = "directory"
[pipelines.sources.directory]
path = "./"
pattern = "*.log"
[[pipelines.sinks]]
type = "console"
[pipelines.sinks.console]
target = "stdout"
```
## Configuration Schema
### Type Reference
| TOML Type | Go Type | Environment Format |
|-----------|---------|-------------------|
| String | string | Plain text |
| Integer | int64 | Numeric string |
| Float | float64 | Decimal string |
| Boolean | bool | true/false |
| Array | []T | JSON array string |
| Table | struct | Nested with `_` |
# Environment Variables
Configure LogWisp through environment variables for containerized deployments.
## Naming Convention
- **Prefix**: `LOGWISP_`
- **Path separator**: `_` (underscore)
- **Array indices**: Numeric suffix (0-based)
- **Case**: UPPERCASE
Examples:
- `logging.level` → `LOGWISP_LOGGING_LEVEL`
- `pipelines[0].name` → `LOGWISP_PIPELINES_0_NAME`
## General Variables
```bash
LOGWISP_CONFIG_FILE=/etc/logwisp/config.toml
LOGWISP_CONFIG_DIR=/etc/logwisp
LOGWISP_BACKGROUND=true
LOGWISP_QUIET=true
LOGWISP_DISABLE_STATUS_REPORTER=true
LOGWISP_CONFIG_AUTO_RELOAD=true
LOGWISP_CONFIG_SAVE_ON_EXIT=true
```
### `LOGWISP_CONFIG_FILE`
Configuration file path.
```bash
export LOGWISP_CONFIG_FILE=/etc/logwisp/config.toml
```
### `LOGWISP_CONFIG_DIR`
Configuration directory.
```bash
export LOGWISP_CONFIG_DIR=/etc/logwisp
export LOGWISP_CONFIG_FILE=production.toml
```
### `LOGWISP_ROUTER`
Enable router mode.
```bash
export LOGWISP_ROUTER=true
```
### `LOGWISP_BACKGROUND`
Run in background.
```bash
export LOGWISP_BACKGROUND=true
```
### `LOGWISP_QUIET`
Suppress all output.
```bash
export LOGWISP_QUIET=true
```
### `LOGWISP_DISABLE_STATUS_REPORTER`
Disable periodic status reporting.
```bash
export LOGWISP_DISABLE_STATUS_REPORTER=true
```
## Logging Variables
```bash
# Output mode
LOGWISP_LOGGING_OUTPUT=both
# Log level
LOGWISP_LOGGING_LEVEL=debug
# File logging
LOGWISP_LOGGING_FILE_DIRECTORY=/var/log/logwisp
LOGWISP_LOGGING_FILE_NAME=logwisp
LOGWISP_LOGGING_FILE_MAX_SIZE_MB=100
LOGWISP_LOGGING_FILE_MAX_TOTAL_SIZE_MB=1000
LOGWISP_LOGGING_FILE_RETENTION_HOURS=168
# Console logging
LOGWISP_LOGGING_CONSOLE_TARGET=stderr
LOGWISP_LOGGING_CONSOLE_FORMAT=json
# Special console target override
LOGWISP_CONSOLE_TARGET=split # Overrides sink console targets
```
## Pipeline Configuration
### Basic Pipeline
```bash
# Pipeline name
LOGWISP_PIPELINES_0_NAME=app
# Source configuration
LOGWISP_PIPELINES_0_SOURCES_0_TYPE=directory
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PATH=/var/log/app
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PATTERN="*.log"
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_CHECK_INTERVAL_MS=100
# Sink configuration
LOGWISP_PIPELINES_0_SINKS_0_TYPE=http
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_PORT=8080
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_BUFFER_SIZE=1000
```
### Pipeline with Formatter
```bash
# Pipeline name and format
LOGWISP_PIPELINES_0_NAME=app
LOGWISP_PIPELINES_0_FORMAT=json
# Format options
LOGWISP_PIPELINES_0_FORMAT_OPTIONS_PRETTY=true
LOGWISP_PIPELINES_0_FORMAT_OPTIONS_TIMESTAMP_FIELD=ts
LOGWISP_PIPELINES_0_FORMAT_OPTIONS_LEVEL_FIELD=severity
```
### Filters
```bash
# Include filter
LOGWISP_PIPELINES_0_FILTERS_0_TYPE=include
LOGWISP_PIPELINES_0_FILTERS_0_LOGIC=or
LOGWISP_PIPELINES_0_FILTERS_0_PATTERNS='["ERROR","WARN"]'
# Exclude filter
LOGWISP_PIPELINES_0_FILTERS_1_TYPE=exclude
LOGWISP_PIPELINES_0_FILTERS_1_PATTERNS='["DEBUG"]'
```
### HTTP Source
```bash
LOGWISP_PIPELINES_0_SOURCES_0_TYPE=http
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PORT=8081
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_INGEST_PATH=/ingest
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_BUFFER_SIZE=1000
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_RATE_LIMIT_ENABLED=true
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_RATE_LIMIT_REQUESTS_PER_SECOND=10.0
```
### TCP Source
```bash
LOGWISP_PIPELINES_0_SOURCES_0_TYPE=tcp
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PORT=9091
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_BUFFER_SIZE=1000
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_RATE_LIMIT_ENABLED=true
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_RATE_LIMIT_REQUESTS_PER_SECOND=5.0
```
### HTTP Sink Options
```bash
# Basic
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_STREAM_PATH=/stream
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_STATUS_PATH=/status
# Heartbeat
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_ENABLED=true
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_INTERVAL_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_FORMAT=comment
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_INCLUDE_TIMESTAMP=true
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_INCLUDE_STATS=false
# Rate Limiting
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_ENABLED=true
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_REQUESTS_PER_SECOND=10.0
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_BURST_SIZE=20
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_LIMIT_BY=ip
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_MAX_CONNECTIONS_PER_IP=5
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_MAX_TOTAL_CONNECTIONS=100
```
### HTTP Client Sink
```bash
LOGWISP_PIPELINES_0_SINKS_0_TYPE=http_client
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_URL=https://log-server.com/ingest
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_BATCH_SIZE=100
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_BATCH_DELAY_MS=5000
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_TIMEOUT_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MAX_RETRIES=3
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RETRY_DELAY_MS=1000
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RETRY_BACKOFF=2.0
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEADERS='{"Authorization":"Bearer <API_KEY_HERE>"}'
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_INSECURE_SKIP_VERIFY=false
```
### TCP Client Sink
```bash
LOGWISP_PIPELINES_0_SINKS_0_TYPE=tcp_client
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_ADDRESS=remote-server.com:9090
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_DIAL_TIMEOUT_SECONDS=10
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_WRITE_TIMEOUT_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_KEEP_ALIVE_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RECONNECT_DELAY_MS=1000
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MAX_RECONNECT_DELAY_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RECONNECT_BACKOFF=1.5
```
### File Sink
```bash
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_DIRECTORY=/var/log/logwisp
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_NAME=app
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MAX_SIZE_MB=100
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MAX_TOTAL_SIZE_MB=1000
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RETENTION_HOURS=168
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MIN_DISK_FREE_MB=1000
```
### Console Sinks
```bash
LOGWISP_PIPELINES_0_SINKS_0_TYPE=stdout
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_BUFFER_SIZE=500
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_TARGET=stdout
```
## Example
```bash
#!/usr/bin/env bash
# General settings
export LOGWISP_DISABLE_STATUS_REPORTER=false
# Logging
export LOGWISP_LOGGING_OUTPUT=both
export LOGWISP_LOGGING_LEVEL=info
# Pipeline 0: Application logs
export LOGWISP_PIPELINES_0_NAME=app
export LOGWISP_PIPELINES_0_SOURCES_0_TYPE=directory
export LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PATH=/var/log/myapp
export LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PATTERN="*.log"
# Filters
export LOGWISP_PIPELINES_0_FILTERS_0_TYPE=include
export LOGWISP_PIPELINES_0_FILTERS_0_PATTERNS='["ERROR","WARN"]'
# HTTP sink
export LOGWISP_PIPELINES_0_SINKS_0_TYPE=http
export LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_PORT=8080
export LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_ENABLED=true
export LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_REQUESTS_PER_SECOND=25.0
# Pipeline 1: System logs
export LOGWISP_PIPELINES_1_NAME=system
export LOGWISP_PIPELINES_1_SOURCES_0_TYPE=file
export LOGWISP_PIPELINES_1_SOURCES_0_OPTIONS_PATH=/var/log/syslog
# TCP sink
export LOGWISP_PIPELINES_1_SINKS_0_TYPE=tcp
export LOGWISP_PIPELINES_1_SINKS_0_OPTIONS_PORT=9090
# Pipeline 2: Remote forwarding
export LOGWISP_PIPELINES_2_NAME=forwarder
export LOGWISP_PIPELINES_2_SOURCES_0_TYPE=http
export LOGWISP_PIPELINES_2_SOURCES_0_OPTIONS_PORT=8081
export LOGWISP_PIPELINES_2_SOURCES_0_OPTIONS_INGEST_PATH=/logs
# HTTP client sink
export LOGWISP_PIPELINES_2_SINKS_0_TYPE=http_client
export LOGWISP_PIPELINES_2_SINKS_0_OPTIONS_URL=https://log-aggregator.example.com/ingest
export LOGWISP_PIPELINES_2_SINKS_0_OPTIONS_BATCH_SIZE=100
export LOGWISP_PIPELINES_2_SINKS_0_OPTIONS_HEADERS='{"Authorization":"Bearer <API_KEY_HERE>"}'
logwisp
```
## Precedence
1. Command-line flags (highest)
2. Environment variables
3. Configuration file
4. Defaults (lowest)
# Filters
LogWisp filters control which log entries pass through the pipeline using pattern matching.
## Filter Types
### Include Filter
Only entries matching patterns pass through.
```toml
[[pipelines.filters]]
type = "include"
logic = "or" # or|and
patterns = [
  "ERROR",
  "WARN",
  "CRITICAL"
]
```
### Exclude Filter
Entries matching patterns are dropped.
```toml
[[pipelines.filters]]
type = "exclude"
patterns = [
  "DEBUG",
  "TRACE",
  "health-check"
]
```
## Configuration Options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `type` | string | Required | Filter type (include/exclude) |
| `logic` | string | "or" | Pattern matching logic (or/and) |
| `patterns` | []string | Required | Pattern list |
## Pattern Syntax
Patterns support Go (RE2) regular expression syntax:
## Filter Logic
### OR Logic (default)
Entry passes if ANY pattern matches:
```toml ```toml
logic = "or"
patterns = ["ERROR", "WARN"]
# Passes: "ERROR in module", "WARN: low memory"
# Blocks: "INFO: started"
```
### AND Logic
Entry passes only if ALL patterns match:
```toml
logic = "and"
patterns = ["database", "ERROR"]
# Passes: "ERROR: database connection failed"
# Blocks: "ERROR: file not found"
```
## Filter Chain
Multiple filters execute sequentially:
```toml ```toml
# Java # First filter: Include errors and warnings
patterns = [
"Exception",
"at .+\\.java:[0-9]+",
"NullPointerException"
]
# Python
patterns = [
"Traceback",
"File \".+\\.py\", line [0-9]+",
"ValueError|TypeError"
]
# Go
patterns = [
"panic:",
"goroutine [0-9]+",
"runtime error:"
]
```
### Performance Issues
```toml
patterns = [
"took [0-9]{4,}ms", # >999ms operations
"timeout|timed out",
"slow query",
"high cpu|cpu usage: [8-9][0-9]%"
]
```
### HTTP Patterns
```toml
patterns = [
"status[=:][4-5][0-9]{2}", # 4xx/5xx codes
"HTTP/[0-9.]+ [4-5][0-9]{2}",
"\"/api/v[0-9]+/", # API paths
]
```
## Filter Chains
### Error Monitoring
```toml
# Include errors
[[pipelines.filters]] [[pipelines.filters]]
type = "include" type = "include"
patterns = ["(?i)\\b(error|fail|critical)\\b"] patterns = ["ERROR", "WARN"]
# Exclude known non-issues # Second filter: Exclude test environments
[[pipelines.filters]] [[pipelines.filters]]
type = "exclude" type = "exclude"
patterns = ["Error: Expected", "/health"] patterns = ["test-env", "staging"]
``` ```
Processing order:
1. Entry arrives from source
2. Include filter evaluates
3. If passed, exclude filter evaluates
4. If passed all filters, entry continues to sink
## Performance Considerations
### Pattern Compilation
- Patterns compile once at startup
- Invalid patterns cause startup failure
- Complex patterns may impact performance
### Optimization Tips
- Place most selective filters first
- Use simple patterns when possible
- Combine related patterns with alternation
- Avoid excessive wildcards (`.*`)
## Filter Statistics
Filters track:
- Total entries evaluated
- Entries passed
- Entries blocked
- Processing time per pattern
## Common Use Cases
### Log Level Filtering
```toml ```toml
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "WARN", "FATAL", "CRITICAL"]
```
### Application Filtering
```toml
[[pipelines.filters]]
type = "include"
patterns = ["app1", "app2", "app3"]
```
### Noise Reduction
```toml
[[pipelines.filters]]
type = "exclude"
patterns = [
"health-check",
"ping",
"/metrics",
"heartbeat"
]
```
### Security Filtering
```toml
[[pipelines.filters]]
type = "exclude"
patterns = [
  "password",
  "token",
  "api[_-]key",
  "secret"
]
```
### Multi-stage Filtering
```toml
# Include production logs
[[pipelines.filters]]
type = "include"
patterns = ["prod-", "production"]
# Include only errors
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "EXCEPTION", "FATAL"]
# Exclude known issues
[[pipelines.filters]]
type = "exclude"
patterns = ["ECONNRESET", "broken pipe"]
```
doc/formatters.md
# Formatters
LogWisp formatters transform log entries before output to sinks.
## Formatter Types
### Raw Formatter
Outputs the log message as-is with optional newline.
```toml
[pipelines.format]
type = "raw"
[pipelines.format.raw]
add_new_line = true
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `add_new_line` | bool | true | Append newline to messages |
### JSON Formatter
Produces structured JSON output.
```toml
[pipelines.format]
type = "json"
[pipelines.format.json]
pretty = false
timestamp_field = "timestamp"
level_field = "level"
message_field = "message"
source_field = "source"
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `pretty` | bool | false | Pretty print JSON |
| `timestamp_field` | string | "timestamp" | Field name for timestamp |
| `level_field` | string | "level" | Field name for log level |
| `message_field` | string | "message" | Field name for message |
| `source_field` | string | "source" | Field name for source |
**Output Structure:**
```json
{
"timestamp": "2024-01-01T12:00:00Z",
"level": "ERROR",
"source": "app",
"message": "Connection failed"
}
```
### Text Formatter
Template-based text formatting.
```toml
[pipelines.format]
type = "txt"
[pipelines.format.txt]
template = "[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Source}} - {{.Message}}"
timestamp_format = "2006-01-02T15:04:05.000Z07:00"
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `template` | string | See below | Go template string |
| `timestamp_format` | string | RFC3339 | Go time format string |
**Default Template:**
```
[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Source}} - {{.Message}}{{ if .Fields }} {{.Fields}}{{ end }}
```
## Template Functions
Available functions in text templates:
| Function | Description | Example |
|----------|-------------|---------|
| `FmtTime` | Format timestamp | `{{.Timestamp \| FmtTime}}` |
| `ToUpper` | Convert to uppercase | `{{.Level \| ToUpper}}` |
| `ToLower` | Convert to lowercase | `{{.Source \| ToLower}}` |
| `TrimSpace` | Remove whitespace | `{{.Message \| TrimSpace}}` |
## Template Variables
Available variables in templates:
| Variable | Type | Description |
|----------|------|-------------|
| `.Timestamp` | time.Time | Entry timestamp |
| `.Level` | string | Log level |
| `.Source` | string | Source identifier |
| `.Message` | string | Log message |
| `.Fields` | string | Additional fields (JSON) |
## Time Format Strings
Common Go time format patterns:
| Pattern | Example Output |
|---------|---------------|
| `2006-01-02T15:04:05Z07:00` | 2024-01-02T15:04:05Z |
| `2006-01-02 15:04:05` | 2024-01-02 15:04:05 |
| `Jan 2 15:04:05` | Jan 2 15:04:05 |
| `15:04:05.000` | 15:04:05.123 |
| `2006/01/02` | 2024/01/02 |
## Format Selection
### Default Behavior
If no formatter specified:
- **HTTP/TCP sinks**: JSON format
- **Console/File sinks**: Raw format
- **Client sinks**: JSON format
### Per-Pipeline Configuration
Each pipeline can have its own formatter:
```toml
[[pipelines]]
name = "json-pipeline"
[pipelines.format]
type = "json"
[[pipelines]]
name = "text-pipeline"
[pipelines.format]
type = "txt"
```
## Message Processing
### JSON Message Handling
When using JSON formatter with JSON log messages:
1. Attempts to parse message as JSON
2. Merges fields with LogWisp metadata
3. LogWisp fields take precedence
4. Falls back to string if parsing fails
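The merge behavior can be sketched as follows. This is an illustrative model of the four steps above, under the stated precedence rules, not LogWisp's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// merge parses the message as JSON when possible, then overlays the
// metadata map so LogWisp fields take precedence; on parse failure the
// raw string is kept under "message".
func merge(message string, meta map[string]any) map[string]any {
	out := map[string]any{}
	var fields map[string]any
	if err := json.Unmarshal([]byte(message), &fields); err == nil {
		for k, v := range fields {
			out[k] = v
		}
	} else {
		out["message"] = message // fallback for non-JSON messages
	}
	for k, v := range meta {
		out[k] = v // metadata wins on conflict
	}
	return out
}

func main() {
	meta := map[string]any{"level": "ERROR", "source": "app"}
	fmt.Println(merge(`{"user":"bob","level":"info"}`, meta)["level"]) // ERROR
	fmt.Println(merge("plain text", meta)["message"])                  // plain text
}
```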
### Field Preservation
LogWisp metadata always includes:
- Timestamp (from source or current time)
- Level (detected or default)
- Source (origin identifier)
- Message (original content)
## Performance Characteristics
### Formatter Performance
Relative performance (fastest to slowest):
1. **Raw**: Direct passthrough
2. **Text**: Template execution
3. **JSON**: Serialization
4. **JSON (pretty)**: Formatted serialization
### Optimization Tips
- Use raw format for high throughput
- Cache template compilation (automatic)
- Minimize template complexity
- Avoid pretty JSON in production
## Common Configurations
### Structured Logging
```toml
[pipelines.format]
type = "json"
[pipelines.format.json]
pretty = false
```
### Human-Readable Logs
```toml
[pipelines.format]
type = "txt"
[pipelines.format.txt]
template = "{{.Timestamp | FmtTime}} [{{.Level}}] {{.Message}}"
timestamp_format = "15:04:05"
```
### Syslog Format
```toml
[pipelines.format]
type = "txt"
[pipelines.format.txt]
template = "{{.Timestamp | FmtTime}} {{.Source}} {{.Level}}: {{.Message}}"
timestamp_format = "Jan 2 15:04:05"
```
### Minimal Output
```toml
[pipelines.format]
type = "txt"
[pipelines.format.txt]
template = "{{.Message}}"
```
# Installation Guide
LogWisp installation and service configuration for Linux and FreeBSD systems.
## Installation Methods
### Pre-built Binaries
Download the latest release binary for your platform and install to `/usr/local/bin`:
```bash
# Linux amd64
wget https://github.com/lixenwraith/logwisp/releases/latest/download/logwisp-linux-amd64
chmod +x logwisp-linux-amd64
sudo mv logwisp-linux-amd64 /usr/local/bin/logwisp
# FreeBSD amd64
fetch https://github.com/lixenwraith/logwisp/releases/latest/download/logwisp-freebsd-amd64
chmod +x logwisp-freebsd-amd64
sudo mv logwisp-freebsd-amd64 /usr/local/bin/logwisp
```
### Building from Source
Requires Go 1.24 or newer:
```bash
git clone https://github.com/lixenwraith/logwisp.git
cd logwisp
go build -o logwisp ./src/cmd/logwisp
sudo install -m 755 logwisp /usr/local/bin/
```
### Go Install Method
Install directly using Go (version information will not be embedded):
```bash
go install github.com/lixenwraith/logwisp/src/cmd/logwisp@latest
```
## Platform-Specific ## Service Configuration
### Linux (systemd) ### Linux (systemd)
```bash Create systemd service file `/etc/systemd/system/logwisp.service`:
# Create service
sudo tee /etc/systemd/system/logwisp.service << EOF ```ini
[Unit] [Unit]
Description=LogWisp Log Monitoring Service Description=LogWisp Log Transport Service
After=network.target After=network.target
[Service] [Service]
Type=simple Type=simple
User=logwisp User=logwisp
ExecStart=/usr/local/bin/logwisp --config /etc/logwisp/logwisp.toml Group=logwisp
Restart=always ExecStart=/usr/local/bin/logwisp -c /etc/logwisp/logwisp.toml
Restart=on-failure
RestartSec=10
StandardOutput=journal StandardOutput=journal
StandardError=journal StandardError=journal
WorkingDirectory=/var/lib/logwisp
[Install] [Install]
WantedBy=multi-user.target WantedBy=multi-user.target
EOF ```
# Create user Setup service user and directories:
```bash
sudo useradd -r -s /bin/false logwisp sudo useradd -r -s /bin/false logwisp
sudo mkdir -p /etc/logwisp /var/lib/logwisp /var/log/logwisp
# Create service user sudo chown logwisp:logwisp /var/lib/logwisp /var/log/logwisp
sudo useradd -r -s /bin/false logwisp
# Create configuration directory
sudo mkdir -p /etc/logwisp
sudo chown logwisp:logwisp /etc/logwisp
# Enable and start
sudo systemctl daemon-reload sudo systemctl daemon-reload
sudo systemctl enable logwisp sudo systemctl enable logwisp
sudo systemctl start logwisp sudo systemctl start logwisp
@ -79,141 +78,90 @@ sudo systemctl start logwisp
### FreeBSD (rc.d)
Create the rc script `/usr/local/etc/rc.d/logwisp`:
```sh
#!/bin/sh

# PROVIDE: logwisp
# REQUIRE: DAEMON NETWORKING
# KEYWORD: shutdown

. /etc/rc.subr

name="logwisp"
rcvar="${name}_enable"
pidfile="/var/run/${name}.pid"
command="/usr/local/bin/logwisp"
command_args="-c /usr/local/etc/logwisp/logwisp.toml"

load_rc_config $name
: ${logwisp_enable:="NO"}
: ${logwisp_config:="/usr/local/etc/logwisp/logwisp.toml"}

run_rc_command "$1"
```
Set up the service user and directories, then enable and start the service:
```bash
sudo chmod +x /usr/local/etc/rc.d/logwisp
sudo pw useradd logwisp -d /nonexistent -s /usr/sbin/nologin
sudo mkdir -p /usr/local/etc/logwisp /var/log/logwisp
sudo chown logwisp:logwisp /var/log/logwisp

sudo sysrc logwisp_enable="YES"
sudo service logwisp start
```
## Directory Structure
Standard installation directories:

| Purpose | Linux | FreeBSD |
|---------|-------|---------|
| Binary | `/usr/local/bin/logwisp` | `/usr/local/bin/logwisp` |
| Configuration | `/etc/logwisp/` | `/usr/local/etc/logwisp/` |
| Working Directory | `/var/lib/logwisp/` | `/var/db/logwisp/` |
| Log Files | `/var/log/logwisp/` | `/var/log/logwisp/` |
| PID File | `/var/run/logwisp.pid` | `/var/run/logwisp.pid` |
## Post-Installation Verification
Verify the installation:
```bash
# Check version
logwisp version

# Test configuration
logwisp -c /etc/logwisp/logwisp.toml --disable-status-reporter

# Check service status (Linux)
sudo systemctl status logwisp

# Check service status (FreeBSD)
sudo service logwisp status
```
## Uninstallation
### Linux
```bash
sudo systemctl stop logwisp
sudo systemctl disable logwisp
sudo rm /usr/local/bin/logwisp
sudo rm /etc/systemd/system/logwisp.service
sudo rm -rf /etc/logwisp /var/lib/logwisp /var/log/logwisp
sudo userdel logwisp
```
### FreeBSD
```bash
sudo service logwisp stop
sudo sysrc -x logwisp_enable
sudo rm /usr/local/bin/logwisp
sudo rm /usr/local/etc/rc.d/logwisp
sudo rm -rf /usr/local/etc/logwisp /var/db/logwisp /var/log/logwisp
sudo pw userdel logwisp
```

doc/networking.md Normal file
View File

@ -0,0 +1,289 @@
# Networking
Network configuration for LogWisp connections, including TLS, rate limiting, and access control.
## TLS Configuration
### TLS Support Matrix
| Component | TLS Support | Notes |
|-----------|-------------|-------|
| HTTP Source | ✓ | Full TLS 1.2/1.3 |
| HTTP Sink | ✓ | Full TLS 1.2/1.3 |
| HTTP Client | ✓ | Client certificates |
| TCP Source | ✗ | No encryption |
| TCP Sink | ✗ | No encryption |
| TCP Client | ✗ | No encryption |
### Server TLS Configuration
```toml
[pipelines.sources.http.tls]
enabled = true
cert_file = "/path/to/server.pem"
key_file = "/path/to/server.key"
min_version = "TLS1.2" # TLS1.2|TLS1.3
client_auth = false
client_ca_file = "/path/to/client-ca.pem"
verify_client_cert = true
```
### Client TLS Configuration
```toml
[pipelines.sinks.http_client.tls]
enabled = true
server_ca_file = "/path/to/ca.pem" # For server verification
server_name = "logs.example.com"
insecure_skip_verify = false
client_cert_file = "/path/to/client.pem" # For mTLS
client_key_file = "/path/to/client.key" # For mTLS
```
### TLS Certificate Generation
Using the `tls` command:
```bash
# Generate CA certificate
logwisp tls -ca -o myca
# Generate server certificate
logwisp tls -server -ca-cert myca.pem -ca-key myca.key -host localhost,server.example.com -o server
# Generate client certificate
logwisp tls -client -ca-cert myca.pem -ca-key myca.key -o client
```
Command options:
| Flag | Description |
|------|-------------|
| `-ca` | Generate CA certificate |
| `-server` | Generate server certificate |
| `-client` | Generate client certificate |
| `-host` | Comma-separated hostnames/IPs |
| `-o` | Output file prefix |
| `-days` | Certificate validity (default: 365) |
## Network Rate Limiting
### Configuration Options
```toml
[pipelines.sources.http.net_limit]
enabled = true
max_connections_per_ip = 10
max_connections_total = 100
requests_per_second = 100.0
burst_size = 200
response_code = 429
response_message = "Rate limit exceeded"
ip_whitelist = ["192.168.1.0/24"]
ip_blacklist = ["10.0.0.0/8"]
```
### Rate Limiting Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `enabled` | bool | Enable rate limiting |
| `max_connections_per_ip` | int | Per-IP connection limit |
| `max_connections_total` | int | Global connection limit |
| `requests_per_second` | float | Request rate limit |
| `burst_size` | int | Token bucket burst capacity |
| `response_code` | int | HTTP response code when limited |
| `response_message` | string | Response message when limited |
### IP Access Control
**Whitelist**: Only specified IPs/networks allowed
```toml
ip_whitelist = [
"192.168.1.0/24", # Local network
"10.0.0.0/8", # Private network
"203.0.113.5" # Specific IP
]
```
**Blacklist**: Specified IPs/networks denied
```toml
ip_blacklist = [
"192.168.1.100", # Blocked host
"10.0.0.0/16" # Blocked subnet
]
```
Processing order:
1. Blacklist (immediate deny if matched)
2. Whitelist (must match if configured)
3. Rate limiting
4. Authentication
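A minimal sketch of the first two steps of that order, using Go's `net/netip` package (the function name `allowed` and its shape are illustrative, not LogWisp's implementation):

```go
package main

import (
	"fmt"
	"net/netip"
)

// allowed applies the first two steps of the processing order above:
// blacklist is checked first (immediate deny), then the whitelist
// (must match when non-empty). Rate limiting and authentication
// would follow for connections that pass.
func allowed(ip string, whitelist, blacklist []string) bool {
	addr, err := netip.ParseAddr(ip)
	if err != nil {
		return false
	}
	inAny := func(cidrs []string) bool {
		for _, c := range cidrs {
			// Accept both CIDR ranges and bare IPs, as in the config examples.
			if p, err := netip.ParsePrefix(c); err == nil && p.Contains(addr) {
				return true
			}
			if a, err := netip.ParseAddr(c); err == nil && a == addr {
				return true
			}
		}
		return false
	}
	if inAny(blacklist) {
		return false
	}
	if len(whitelist) > 0 && !inAny(whitelist) {
		return false
	}
	return true
}

func main() {
	wl := []string{"192.168.1.0/24"}
	bl := []string{"192.168.1.100"}
	fmt.Println(allowed("192.168.1.50", wl, bl))  // true
	fmt.Println(allowed("192.168.1.100", wl, bl)) // false: blacklist wins
	fmt.Println(allowed("10.0.0.1", wl, bl))      // false: not whitelisted
}
```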
## Connection Management
### TCP Keep-Alive
```toml
[pipelines.sources.tcp]
keep_alive = true
keep_alive_period_ms = 30000 # 30 seconds
```
Benefits:
- Detect dead connections
- Prevent connection timeout
- Maintain NAT mappings
### Connection Timeouts
```toml
[pipelines.sources.http]
read_timeout_ms = 10000 # 10 seconds
write_timeout_ms = 10000 # 10 seconds
[pipelines.sinks.tcp_client]
dial_timeout = 10 # Connection timeout
write_timeout = 30 # Write timeout
read_timeout = 10 # Read timeout
```
### Connection Limits
Global limits:
```toml
max_connections = 100 # Total concurrent connections
```
Per-IP limits:
```toml
max_connections_per_ip = 10
```
## Heartbeat Configuration
Keep connections alive with periodic heartbeats:
### HTTP Sink Heartbeat
```toml
[pipelines.sinks.http.heartbeat]
enabled = true
interval_ms = 30000
include_timestamp = true
include_stats = false
format = "comment" # comment|event|json
```
Formats:
- **comment**: SSE comment (`: heartbeat`)
- **event**: SSE event with data
- **json**: JSON-formatted heartbeat
### TCP Sink Heartbeat
```toml
[pipelines.sinks.tcp.heartbeat]
enabled = true
interval_ms = 30000
include_timestamp = true
include_stats = false
format = "json" # json|txt
```
## Network Protocols
### HTTP/HTTPS
- HTTP/1.1 and HTTP/2 support
- Persistent connections
- Chunked transfer encoding
- Server-Sent Events (SSE)
### TCP
- Raw TCP sockets
- Newline-delimited protocol
- Binary-safe transmission
- No encryption available
## Port Configuration
### Default Ports
| Service | Default Port | Protocol |
|---------|--------------|----------|
| HTTP Source | 8081 | HTTP/HTTPS |
| HTTP Sink | 8080 | HTTP/HTTPS |
| TCP Source | 9091 | TCP |
| TCP Sink | 9090 | TCP |
### Port Conflict Prevention
LogWisp validates port usage at startup:
- Detects port conflicts across pipelines
- Prevents duplicate bindings
- Suggests alternative ports
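A sketch of how such startup validation can be implemented (the types and names here are illustrative, not LogWisp's internals; router mode, where sharing a port is intentional, is out of scope):

```go
package main

import "fmt"

// endpoint pairs a pipeline name with a host:port binding.
type endpoint struct {
	pipeline string
	host     string
	port     int
}

// findConflicts reports every pipeline that tries to bind a
// host:port already claimed by an earlier pipeline.
func findConflicts(eps []endpoint) []string {
	seen := map[string]string{}
	var conflicts []string
	for _, e := range eps {
		key := fmt.Sprintf("%s:%d", e.host, e.port)
		if prev, ok := seen[key]; ok {
			conflicts = append(conflicts,
				fmt.Sprintf("%s conflicts with %s on %s", e.pipeline, prev, key))
			continue
		}
		seen[key] = e.pipeline
	}
	return conflicts
}

func main() {
	eps := []endpoint{
		{"app", "0.0.0.0", 8080},
		{"db", "0.0.0.0", 8081},
		{"audit", "0.0.0.0", 8080},
	}
	for _, c := range findConflicts(eps) {
		fmt.Println(c) // audit conflicts with app on 0.0.0.0:8080
	}
}
```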
## Network Security
### Best Practices
1. **Use TLS for HTTP** connections when possible
2. **Implement rate limiting** to prevent DoS
3. **Configure IP whitelists** for restricted access
4. **Enable authentication** for all network endpoints
5. **Use non-standard ports** to reduce scanning exposure
6. **Monitor connection metrics** for anomalies
7. **Set appropriate timeouts** to prevent resource exhaustion
### Security Warnings
- TCP connections are **always unencrypted**
- HTTP Basic/Token auth **requires TLS**
- Avoid `insecure_skip_verify` in production
- Never expose unauthenticated endpoints publicly
## Load Balancing
### Client-Side Load Balancing
Configure multiple endpoints (future feature):
```toml
[[pipelines.sinks.http_client]]
urls = [
"https://log1.example.com/ingest",
"https://log2.example.com/ingest"
]
strategy = "round-robin" # round-robin|random|least-conn
```
### Server-Side Considerations
- Use reverse proxy for load distribution
- Configure session affinity if needed
- Monitor individual instance health
## Troubleshooting
### Common Issues
**Connection Refused**
- Check firewall rules
- Verify service is running
- Confirm correct port/host
**TLS Handshake Failure**
- Verify certificate validity
- Check certificate chain
- Confirm TLS versions match
**Rate Limit Exceeded**
- Adjust rate limit parameters
- Add IP to whitelist
- Implement client-side throttling
**Connection Timeout**
- Increase timeout values
- Check network latency
- Verify keep-alive settings

doc/operations.md Normal file
View File

@ -0,0 +1,343 @@
# Operations Guide
Running, monitoring, and maintaining LogWisp in production.
## Starting LogWisp
### Manual Start
```bash
# Foreground with default config
logwisp
# Background mode
logwisp --background
# With specific configuration
logwisp --config /etc/logwisp/production.toml
```
### Service Management
**Linux (systemd):**
```bash
sudo systemctl start logwisp
sudo systemctl stop logwisp
sudo systemctl restart logwisp
sudo systemctl status logwisp
```
**FreeBSD (rc.d):**
```bash
sudo service logwisp start
sudo service logwisp stop
sudo service logwisp restart
sudo service logwisp status
```
## Configuration Management
### Hot Reload
Enable automatic configuration reload:
```toml
config_auto_reload = true
```
Or via command line:
```bash
logwisp --config-auto-reload
```
Trigger manual reload:
```bash
kill -HUP $(pidof logwisp)
# or
kill -USR1 $(pidof logwisp)
```
### Configuration Validation
Test configuration without starting:
```bash
logwisp --config test.toml --quiet --disable-status-reporter
```
Check for errors:
- Port conflicts
- Invalid patterns
- Missing required fields
- File permissions
## Monitoring
### Status Reporter
Built-in periodic status logging (30-second intervals):
```
[INFO] Status report active_pipelines=2 time=15:04:05
[INFO] Pipeline status pipeline=app entries_processed=10523
[INFO] Pipeline status pipeline=system entries_processed=5231
```
Disable if not needed:
```toml
disable_status_reporter = true
```
### HTTP Status Endpoint
When using HTTP sink:
```bash
curl http://localhost:8080/status | jq .
```
Response structure:
```json
{
"uptime": "2h15m30s",
"pipelines": {
"default": {
"sources": 1,
"sinks": 2,
"processed": 15234,
"filtered": 523,
"dropped": 12
}
}
}
```
### Metrics Collection
Track via logs:
- Total entries processed
- Entries filtered
- Entries dropped
- Active connections
- Buffer utilization
## Log Management
### LogWisp's Operational Logs
Configuration for LogWisp's own logs:
```toml
[logging]
output = "file"
level = "info"
[logging.file]
directory = "/var/log/logwisp"
name = "logwisp"
max_size_mb = 100
retention_hours = 168
```
### Log Rotation
Automatic rotation based on:
- File size threshold
- Total size limit
- Retention period
Manual rotation:
```bash
# Move current log
mv /var/log/logwisp/logwisp.log /var/log/logwisp/logwisp.log.1
# Send signal to reopen
kill -USR1 $(pidof logwisp)
```
### Log Levels
Operational log levels:
- **debug**: Detailed debugging information
- **info**: General operational messages
- **warn**: Warning conditions
- **error**: Error conditions
Production recommendation: `info` or `warn`
## Performance Tuning
### Buffer Sizing
Adjust buffers based on load:
```toml
# High-volume source
[[pipelines.sources]]
type = "http"
[pipelines.sources.http]
buffer_size = 5000 # Increase for burst traffic
# Slow consumer sink
[[pipelines.sinks]]
type = "http_client"
[pipelines.sinks.http_client]
buffer_size = 10000 # Larger buffer for slow endpoints
batch_size = 500 # Larger batches
```
### Rate Limiting
Protect against overload:
```toml
[pipelines.rate_limit]
rate = 1000.0 # Entries per second
burst = 2000.0 # Burst capacity
policy = "drop" # Drop excess entries
```
### Connection Limits
Prevent resource exhaustion:
```toml
[pipelines.sources.http.net_limit]
max_connections_total = 1000
max_connections_per_ip = 50
```
## Troubleshooting
### Common Issues
**High Memory Usage**
- Check buffer sizes
- Monitor goroutine count
- Review retention settings
**Dropped Entries**
- Increase buffer sizes
- Add rate limiting
- Check sink performance
**Connection Errors**
- Verify network connectivity
- Check firewall rules
- Review TLS certificates
### Debug Mode
Enable detailed logging:
```bash
logwisp --logging.level=debug --logging.output=stderr
```
### Health Checks
Implement external monitoring:
```bash
#!/bin/bash
# Health check script
if ! curl -sf http://localhost:8080/status > /dev/null; then
echo "LogWisp health check failed"
exit 1
fi
```
## Backup and Recovery
### Configuration Backup
```bash
# Backup configuration
cp /etc/logwisp/logwisp.toml /backup/logwisp-$(date +%Y%m%d).toml
# Version control
git add /etc/logwisp/
git commit -m "LogWisp config update"
```
### State Recovery
LogWisp maintains minimal state:
- File read positions (automatic)
- Connection state (automatic)
Recovery after crash:
1. Service automatically restarts (systemd/rc.d)
2. File sources resume from last position
3. Network sources accept new connections
4. Clients reconnect automatically
## Security Operations
### Certificate Management
Monitor certificate expiration:
```bash
openssl x509 -in /path/to/cert.pem -noout -enddate
```
Rotate certificates:
1. Generate new certificates
2. Update configuration
3. Reload service (SIGHUP)
### Access Auditing
Monitor access patterns:
- Review connection logs
- Monitor rate limit hits
## Maintenance
### Planned Maintenance
1. Notify users of maintenance window
2. Stop accepting new connections
3. Drain existing connections
4. Perform maintenance
5. Restart service
### Upgrade Process
1. Download new version
2. Test with current configuration
3. Stop old version
4. Install new version
5. Start service
6. Verify operation
### Cleanup Tasks
Regular maintenance:
- Remove old log files
- Clean temporary files
- Verify disk space
- Update documentation
## Disaster Recovery
### Backup Strategy
- Configuration files: Daily
- TLS certificates: After generation
- Authentication credentials: Secure storage
### Recovery Procedures
Service failure:
1. Check service status
2. Review error logs
3. Verify configuration
4. Restart service
Data loss:
1. Restore configuration from backup
2. Regenerate certificates if needed
3. Recreate authentication credentials
4. Restart service
### Business Continuity
- Run multiple instances for redundancy
- Use load balancer for distribution
- Implement monitoring alerts
- Document recovery procedures

View File

@ -1,215 +0,0 @@
# Quick Start Guide
Get LogWisp up and running in minutes.
## Installation
### From Source
```bash
git clone https://github.com/lixenwraith/logwisp.git
cd logwisp
make install
```
### Using Go Install
```bash
go install github.com/lixenwraith/logwisp/src/cmd/logwisp@latest
```
## Basic Usage
### 1. Monitor Current Directory
Start LogWisp with defaults (monitors `*.log` files in current directory):
```bash
logwisp
```
### 2. Stream Logs
Connect to the log stream:
```bash
# SSE stream
curl -N http://localhost:8080/stream
# Check status
curl http://localhost:8080/status | jq .
```
### 3. Generate Test Logs
```bash
echo "[ERROR] Something went wrong!" >> test.log
echo "[INFO] Application started" >> test.log
echo "[WARN] Low memory warning" >> test.log
```
## Common Scenarios
### Monitor Specific Directory
Create `~/.config/logwisp/logwisp.toml`:
```toml
[[pipelines]]
name = "myapp"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/myapp", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
```
### Filter Only Errors
```toml
[[pipelines]]
name = "errors"
[[pipelines.sources]]
type = "directory"
options = { path = "./", pattern = "*.log" }
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "WARN", "CRITICAL"]
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
```
### Multiple Outputs
Send logs to both HTTP stream and file:
```toml
[[pipelines]]
name = "multi-output"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
# HTTP streaming
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
# File archival
[[pipelines.sinks]]
type = "file"
options = { directory = "/var/log/archive", name = "app" }
```
### TCP Streaming
For high-performance streaming:
```toml
[[pipelines]]
name = "highperf"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
[[pipelines.sinks]]
type = "tcp"
options = { port = 9090, buffer_size = 5000 }
```
Connect with netcat:
```bash
nc localhost 9090
```
### Router Mode
Run multiple pipelines on shared ports:
```bash
logwisp --router
# Access pipelines at:
# http://localhost:8080/myapp/stream
# http://localhost:8080/errors/stream
# http://localhost:8080/status (global)
```
### Remote Log Collection
Receive logs via HTTP/TCP and forward to remote servers:
```toml
[[pipelines]]
name = "collector"
# Receive logs via HTTP POST
[[pipelines.sources]]
type = "http"
options = { port = 8081, ingest_path = "/ingest" }
# Forward to remote server
[[pipelines.sinks]]
type = "http_client"
options = {
url = "https://log-server.com/ingest",
batch_size = 100,
headers = { "Authorization" = "Bearer <API_KEY_HERE>" }
}
```
Send logs to collector:
```bash
curl -X POST http://localhost:8081/ingest \
-H "Content-Type: application/json" \
-d '{"message": "Test log", "level": "INFO"}'
```
## Quick Tips
### Enable Debug Logging
```bash
logwisp --logging.level debug --logging.output stderr
```
### Quiet Mode
```bash
logwisp --quiet
```
### Rate Limiting
```toml
[[pipelines.sinks]]
type = "http"
options = {
port = 8080,
rate_limit = {
enabled = true,
requests_per_second = 10.0,
burst_size = 20
}
}
```
### Console Output
```toml
[[pipelines.sinks]]
type = "stdout" # or "stderr"
options = {}
```
### Split Console Output
```toml
# INFO/DEBUG to stdout, ERROR/WARN to stderr
[[pipelines.sinks]]
type = "stdout"
options = { target = "split" }
```

View File

@ -1,125 +0,0 @@
# Rate Limiting Guide
LogWisp provides configurable rate limiting to protect against abuse and ensure fair access.
## How It Works
Token bucket algorithm:
1. Each client gets a bucket with fixed capacity
2. Tokens refill at configured rate
3. Each request consumes one token
4. No tokens = request rejected
## Configuration
```toml
[[pipelines.sinks]]
type = "http" # or "tcp"
options = {
port = 8080,
rate_limit = {
enabled = true,
requests_per_second = 10.0,
burst_size = 20,
limit_by = "ip", # or "global"
max_connections_per_ip = 5,
max_total_connections = 100,
response_code = 429,
response_message = "Rate limit exceeded"
}
}
```
## Strategies
### Per-IP Limiting (Default)
Each IP gets its own bucket:
```toml
limit_by = "ip"
requests_per_second = 10.0
# Client A: 10 req/sec
# Client B: 10 req/sec
```
### Global Limiting
All clients share one bucket:
```toml
limit_by = "global"
requests_per_second = 50.0
# All clients combined: 50 req/sec
```
## Connection Limits
```toml
max_connections_per_ip = 5 # Per IP
max_total_connections = 100 # Total
```
## Response Behavior
### HTTP
Returns JSON with configured status:
```json
{
"error": "Rate limit exceeded",
"retry_after": "60"
}
```
### TCP
Connections silently dropped.
## Examples
### Light Protection
```toml
rate_limit = {
enabled = true,
requests_per_second = 50.0,
burst_size = 100
}
```
### Moderate Protection
```toml
rate_limit = {
enabled = true,
requests_per_second = 10.0,
burst_size = 30,
max_connections_per_ip = 5
}
```
### Strict Protection
```toml
rate_limit = {
enabled = true,
requests_per_second = 2.0,
burst_size = 5,
max_connections_per_ip = 2,
response_code = 503
}
```
## Monitoring
Check statistics:
```bash
curl http://localhost:8080/status | jq '.sinks[0].details.rate_limit'
```
## Testing
```bash
# Test rate limits
for i in {1..20}; do
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/status
done
```
## Tuning
- **requests_per_second**: Expected load
- **burst_size**: 2-3× requests_per_second
- **Connection limits**: Based on memory

View File

@ -1,158 +0,0 @@
# Router Mode Guide
Router mode enables multiple pipelines to share HTTP ports through path-based routing.
## Overview
**Standard mode**: Each pipeline needs its own port
- Pipeline 1: `http://localhost:8080/stream`
- Pipeline 2: `http://localhost:8081/stream`
**Router mode**: Pipelines share ports via paths
- Pipeline 1: `http://localhost:8080/app/stream`
- Pipeline 2: `http://localhost:8080/database/stream`
- Global status: `http://localhost:8080/status`
## Enabling Router Mode
```bash
logwisp --router --config /etc/logwisp/multi-pipeline.toml
```
## Configuration
```toml
# All pipelines can use the same port
[[pipelines]]
name = "app"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 } # Same port OK
[[pipelines]]
name = "database"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/postgresql", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 } # Shared port
```
## Path Structure
Paths are prefixed with pipeline name:
| Pipeline | Config Path | Router Path |
|----------|-------------|-------------|
| `app` | `/stream` | `/app/stream` |
| `app` | `/status` | `/app/status` |
| `database` | `/stream` | `/database/stream` |
### Custom Paths
```toml
[[pipelines.sinks]]
type = "http"
options = {
stream_path = "/logs", # Becomes /app/logs
status_path = "/health" # Becomes /app/health
}
```
## Endpoints
### Pipeline Endpoints
```bash
# SSE stream
curl -N http://localhost:8080/app/stream
# Pipeline status
curl http://localhost:8080/database/status
```
### Global Status
```bash
curl http://localhost:8080/status
```
Returns:
```json
{
"service": "LogWisp Router",
"pipelines": {
"app": { /* stats */ },
"database": { /* stats */ }
},
"total_pipelines": 2
}
```
## Use Cases
### Microservices
```toml
[[pipelines]]
name = "frontend"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/frontend", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
[[pipelines]]
name = "backend"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/backend", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
# Access:
# http://localhost:8080/frontend/stream
# http://localhost:8080/backend/stream
```
### Environment-Based
```toml
[[pipelines]]
name = "prod"
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "WARN"]
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
[[pipelines]]
name = "dev"
# No filters - all logs
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
```
## Limitations
1. **HTTP Only**: Router mode only works for HTTP/SSE
2. **No TCP Routing**: TCP remains on separate ports
3. **Path Conflicts**: Pipeline names must be unique
## Load Balancer Integration
```nginx
upstream logwisp {
server logwisp1:8080;
server logwisp2:8080;
}
location /logs/ {
proxy_pass http://logwisp/;
proxy_buffering off;
}
```

doc/security.md Normal file
View File

@ -0,0 +1,58 @@
# Security
## mTLS (Mutual TLS)
Certificate-based authentication for HTTPS.
### Server Configuration
```toml
[pipelines.sources.http.tls]
enabled = true
cert_file = "/path/to/server.pem"
key_file = "/path/to/server.key"
client_auth = true
client_ca_file = "/path/to/ca.pem"
verify_client_cert = true
```
### Client Configuration
```toml
[pipelines.sinks.http_client.tls]
enabled = true
cert_file = "/path/to/client.pem"
key_file = "/path/to/client.key"
```
### Certificate Generation
Use the `tls` command:
```bash
# Generate CA
logwisp tls -ca -o ca
# Generate server certificate
logwisp tls -server -ca-cert ca.pem -ca-key ca.key -host localhost -o server
# Generate client certificate
logwisp tls -client -ca-cert ca.pem -ca-key ca.key -o client
```
## Access Control
LogWisp provides IP-based access control for network connections.
## IP-Based Access Control
Configure IP-based access control for sources:
```toml
[pipelines.sources.http.net_limit]
enabled = true
ip_whitelist = ["192.168.1.0/24", "10.0.0.0/8"]
ip_blacklist = ["192.168.1.100"]
```
Priority order:
1. Blacklist (checked first, immediate deny)
2. Whitelist (if configured, must match)

doc/sinks.md Normal file
View File

@ -0,0 +1,273 @@
# Output Sinks
LogWisp sinks deliver processed log entries to various destinations.
## Sink Types
### Console Sink
Output to stdout/stderr.
```toml
[[pipelines.sinks]]
type = "console"
[pipelines.sinks.console]
target = "stdout" # stdout|stderr|split
colorize = false
buffer_size = 100
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `target` | string | "stdout" | Output target (stdout/stderr/split) |
| `colorize` | bool | false | Enable colored output |
| `buffer_size` | int | 100 | Internal buffer size |
**Target Modes:**
- **stdout**: All output to standard output
- **stderr**: All output to standard error
- **split**: INFO/DEBUG to stdout, WARN/ERROR to stderr
### File Sink
Write logs to rotating files.
```toml
[[pipelines.sinks]]
type = "file"
[pipelines.sinks.file]
directory = "./logs"
name = "output"
max_size_mb = 100
max_total_size_mb = 1000
min_disk_free_mb = 500
retention_hours = 168.0
buffer_size = 1000
flush_interval_ms = 1000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `directory` | string | Required | Output directory |
| `name` | string | Required | Base filename |
| `max_size_mb` | int | 100 | Rotation threshold |
| `max_total_size_mb` | int | 1000 | Total size limit |
| `min_disk_free_mb` | int | 500 | Minimum free disk space |
| `retention_hours` | float | 168 | Delete files older than |
| `buffer_size` | int | 1000 | Internal buffer size |
| `flush_interval_ms` | int | 1000 | Force flush interval |
**Features:**
- Automatic rotation on size
- Retention management
- Disk space monitoring
- Periodic flushing
### HTTP Sink
SSE (Server-Sent Events) streaming server.
```toml
[[pipelines.sinks]]
type = "http"
[pipelines.sinks.http]
host = "0.0.0.0"
port = 8080
stream_path = "/stream"
status_path = "/status"
buffer_size = 1000
max_connections = 100
read_timeout_ms = 10000
write_timeout_ms = 10000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | "0.0.0.0" | Bind address |
| `port` | int | Required | Listen port |
| `stream_path` | string | "/stream" | SSE stream endpoint |
| `status_path` | string | "/status" | Status endpoint |
| `buffer_size` | int | 1000 | Internal buffer size |
| `max_connections` | int | 100 | Maximum concurrent clients |
| `read_timeout_ms` | int | 10000 | Read timeout |
| `write_timeout_ms` | int | 10000 | Write timeout |
**Heartbeat Configuration:**
```toml
[pipelines.sinks.http.heartbeat]
enabled = true
interval_ms = 30000
include_timestamp = true
include_stats = false
format = "comment" # comment|event|json
```
### TCP Sink
TCP streaming server for debugging.
```toml
[[pipelines.sinks]]
type = "tcp"
[pipelines.sinks.tcp]
host = "0.0.0.0"
port = 9090
buffer_size = 1000
max_connections = 100
keep_alive = true
keep_alive_period_ms = 30000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | "0.0.0.0" | Bind address |
| `port` | int | Required | Listen port |
| `buffer_size` | int | 1000 | Internal buffer size |
| `max_connections` | int | 100 | Maximum concurrent clients |
| `keep_alive` | bool | true | Enable TCP keep-alive |
| `keep_alive_period_ms` | int | 30000 | Keep-alive interval |
**Note:** TCP Sink has no authentication support (debugging only).
### HTTP Client Sink
Forward logs to remote HTTP endpoints.
```toml
[[pipelines.sinks]]
type = "http_client"
[pipelines.sinks.http_client]
url = "https://logs.example.com/ingest"
buffer_size = 1000
batch_size = 100
batch_delay_ms = 1000
timeout_seconds = 30
max_retries = 3
retry_delay_ms = 1000
retry_backoff = 2.0
insecure_skip_verify = false
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `url` | string | Required | Target URL |
| `buffer_size` | int | 1000 | Internal buffer size |
| `batch_size` | int | 100 | Logs per request |
| `batch_delay_ms` | int | 1000 | Max wait before sending |
| `timeout_seconds` | int | 30 | Request timeout |
| `max_retries` | int | 3 | Retry attempts |
| `retry_delay_ms` | int | 1000 | Initial retry delay |
| `retry_backoff` | float | 2.0 | Exponential backoff multiplier |
| `insecure_skip_verify` | bool | false | Skip TLS verification |
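Batching sends a request when either `batch_size` entries have accumulated or `batch_delay_ms` has elapsed with entries pending, whichever comes first. A minimal sketch of that trigger (`shouldFlush` is a hypothetical helper, not LogWisp's actual code):

```go
package main

import "fmt"

// shouldFlush mirrors the two triggers in the table above: a full
// batch (batch_size) or a non-empty batch that has waited at least
// batch_delay_ms.
func shouldFlush(pending, batchSize, waitedMs, batchDelayMs int) bool {
	return pending >= batchSize || (pending > 0 && waitedMs >= batchDelayMs)
}

func main() {
	fmt.Println(shouldFlush(100, 100, 0, 1000))  // full batch
	fmt.Println(shouldFlush(3, 100, 1200, 1000)) // delay expired
	fmt.Println(shouldFlush(0, 100, 5000, 1000)) // nothing to send
}
```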
### TCP Client Sink
Forward logs to remote TCP servers.
```toml
[[pipelines.sinks]]
type = "tcp_client"
[pipelines.sinks.tcp_client]
host = "logs.example.com"
port = 9090
buffer_size = 1000
dial_timeout = 10
write_timeout = 30
read_timeout = 10
keep_alive = 30
reconnect_delay_ms = 1000
max_reconnect_delay_ms = 30000
reconnect_backoff = 1.5
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | Required | Target host |
| `port` | int | Required | Target port |
| `buffer_size` | int | 1000 | Internal buffer size |
| `dial_timeout` | int | 10 | Connection timeout (seconds) |
| `write_timeout` | int | 30 | Write timeout (seconds) |
| `read_timeout` | int | 10 | Read timeout (seconds) |
| `keep_alive` | int | 30 | TCP keep-alive (seconds) |
| `reconnect_delay_ms` | int | 1000 | Initial reconnect delay |
| `max_reconnect_delay_ms` | int | 30000 | Maximum reconnect delay |
| `reconnect_backoff` | float | 1.5 | Backoff multiplier |
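The reconnect delay grows by `reconnect_backoff` on each failed attempt and is capped at `max_reconnect_delay_ms`. Sketched with the defaults from the table above (illustrative only; whether LogWisp resets the delay after a successful connection is not shown here):

```go
package main

import "fmt"

// reconnectDelay computes the delay in milliseconds before reconnect
// attempt n (0-based), multiplying by the backoff factor each time and
// capping at the configured maximum.
func reconnectDelay(attempt int, initialMs, maxMs, backoff float64) int {
	d := initialMs
	for i := 0; i < attempt; i++ {
		d *= backoff
		if d > maxMs {
			return int(maxMs)
		}
	}
	return int(d)
}

func main() {
	// With the defaults: 1000, 1500, 2250, ... capped at 30000.
	for n := 0; n < 10; n++ {
		fmt.Println(reconnectDelay(n, 1000, 30000, 1.5))
	}
}
```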
## Network Sink Features
### Network Rate Limiting
Available for HTTP and TCP sinks:
```toml
[pipelines.sinks.http.net_limit]
enabled = true
max_connections_per_ip = 10
max_connections_total = 100
ip_whitelist = ["192.168.1.0/24"]
ip_blacklist = ["10.0.0.0/8"]
```
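One plausible reading of the whitelist/blacklist interaction — the blacklist always rejects, and a non-empty whitelist admits only listed networks — can be sketched with `net/netip` (this precedence is an assumption for illustration, not a statement of LogWisp's exact rules):

```go
package main

import (
	"fmt"
	"net/netip"
)

// ipAllowed rejects blacklisted addresses outright; when a whitelist
// is configured, only whitelisted addresses pass.
func ipAllowed(ip string, whitelist, blacklist []string) bool {
	addr, err := netip.ParseAddr(ip)
	if err != nil {
		return false
	}
	for _, cidr := range blacklist {
		if p, err := netip.ParsePrefix(cidr); err == nil && p.Contains(addr) {
			return false
		}
	}
	if len(whitelist) == 0 {
		return true
	}
	for _, cidr := range whitelist {
		if p, err := netip.ParsePrefix(cidr); err == nil && p.Contains(addr) {
			return true
		}
	}
	return false
}

func main() {
	wl := []string{"192.168.1.0/24"}
	bl := []string{"10.0.0.0/8"}
	fmt.Println(ipAllowed("192.168.1.42", wl, bl)) // whitelisted
	fmt.Println(ipAllowed("10.1.2.3", wl, bl))     // blacklisted
	fmt.Println(ipAllowed("203.0.113.9", wl, bl))  // not whitelisted
}
```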
### TLS Configuration (HTTP Only)
```toml
[pipelines.sinks.http.tls]
enabled = true
cert_file = "/path/to/cert.pem"
key_file = "/path/to/key.pem"
ca_file = "/path/to/ca.pem"
min_version = "TLS1.2"
client_auth = false
```
HTTP Client TLS:
```toml
[pipelines.sinks.http_client.tls]
enabled = true
server_ca_file = "/path/to/ca.pem" # For server verification
server_name = "logs.example.com"
insecure_skip_verify = false
client_cert_file = "/path/to/client.pem" # For mTLS
client_key_file = "/path/to/client.key" # For mTLS
```
## Sink Chaining
Client sinks can feed another LogWisp instance's sources, chaining pipelines across processes or hosts. Supported connection patterns:
### Log Aggregation
- **HTTP Client Sink → HTTP Source**: HTTP/HTTPS (optional mTLS for HTTPS)
- **TCP Client Sink → TCP Source**: Raw TCP
### Live Monitoring
- **HTTP Sink**: Browser-based SSE streaming
- **TCP Sink**: Debug interface (telnet/netcat)
## Sink Statistics
All sinks track:
- Total entries processed
- Active connections
- Failed sends
- Retry attempts
- Last processed timestamp
doc/sources.md Normal file
@@ -0,0 +1,177 @@
# Input Sources
LogWisp sources monitor various inputs and generate log entries for pipeline processing.
## Source Types
### Directory Source
Monitors a directory for log files matching a pattern.
```toml
[[pipelines.sources]]
type = "directory"
[pipelines.sources.directory]
path = "/var/log/myapp"
pattern = "*.log" # Glob pattern
check_interval_ms = 100 # Poll interval
recursive = false # Scan subdirectories
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `path` | string | Required | Directory to monitor |
| `pattern` | string | "*" | File pattern (glob) |
| `check_interval_ms` | int | 100 | File check interval in milliseconds |
| `recursive` | bool | false | Include subdirectories |
**Features:**
- Automatic file rotation detection
- Position tracking (resume after restart)
- Concurrent file monitoring
- Pattern-based file selection
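Pattern-based selection behaves like a glob match against each file's base name, as with `pattern = "*.log"` above. A sketch of the semantics using `path/filepath` (illustrative, not LogWisp's implementation):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// matchesPattern applies the configured glob to the file's base name.
func matchesPattern(pattern, path string) bool {
	ok, err := filepath.Match(pattern, filepath.Base(path))
	return err == nil && ok
}

func main() {
	fmt.Println(matchesPattern("*.log", "/var/log/myapp/app.log"))
	// Rotated files with a numeric suffix do not match "*.log":
	fmt.Println(matchesPattern("*.log", "/var/log/myapp/app.log.1"))
}
```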
### Stdin Source
Reads log entries from standard input.
```toml
[[pipelines.sources]]
type = "console"
[pipelines.sources.stdin]
buffer_size = 1000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `buffer_size` | int | 1000 | Internal buffer size |
**Features:**
- Line-based processing
- Automatic level detection
- Non-blocking reads
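Automatic level detection can be as simple as scanning each line for a severity token; a minimal sketch (the token list and the INFO default are assumptions, not LogWisp's actual detector):

```go
package main

import (
	"fmt"
	"strings"
)

// detectLevel returns the first severity token found in the line,
// checked from most to least severe, defaulting to INFO.
func detectLevel(line string) string {
	upper := strings.ToUpper(line)
	for _, lvl := range []string{"ERROR", "WARN", "INFO", "DEBUG"} {
		if strings.Contains(upper, lvl) {
			return lvl
		}
	}
	return "INFO"
}

func main() {
	fmt.Println(detectLevel("2025-01-01 12:00:00 error: disk full"))
	fmt.Println(detectLevel("plain message"))
}
```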
### HTTP Source
An HTTP endpoint for log ingestion.
```toml
[[pipelines.sources]]
type = "http"
[pipelines.sources.http]
host = "0.0.0.0"
port = 8081
ingest_path = "/ingest"
buffer_size = 1000
max_body_size = 1048576 # 1MB
read_timeout_ms = 10000
write_timeout_ms = 10000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | "0.0.0.0" | Bind address |
| `port` | int | Required | Listen port |
| `ingest_path` | string | "/ingest" | Ingestion endpoint path |
| `buffer_size` | int | 1000 | Internal buffer size |
| `max_body_size` | int | 1048576 | Maximum request body size |
| `read_timeout_ms` | int | 10000 | Read timeout |
| `write_timeout_ms` | int | 10000 | Write timeout |
**Input Formats:**
- Single JSON object
- JSON array
- Newline-delimited JSON (NDJSON)
- Plain text (one entry per line)
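The four accepted body formats can be normalized into individual entries with a try-array, try-object, then line-by-line fallback; a sketch of that order (illustrative only, not LogWisp's actual parser):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// splitEntries normalizes a request body into individual entries:
// JSON array first, then a single JSON object, then NDJSON or plain
// text as one entry per non-empty line.
func splitEntries(body string) []string {
	body = strings.TrimSpace(body)
	var arr []json.RawMessage
	if err := json.Unmarshal([]byte(body), &arr); err == nil {
		out := make([]string, len(arr))
		for i, m := range arr {
			out[i] = string(m)
		}
		return out
	}
	var obj map[string]any
	if err := json.Unmarshal([]byte(body), &obj); err == nil {
		return []string{body}
	}
	var out []string
	for _, line := range strings.Split(body, "\n") {
		if line = strings.TrimSpace(line); line != "" {
			out = append(out, line)
		}
	}
	return out
}

func main() {
	fmt.Println(len(splitEntries(`[{"msg":"a"},{"msg":"b"}]`))) // JSON array
	fmt.Println(len(splitEntries(`{"msg":"a"}`)))               // single object
	fmt.Println(len(splitEntries("line one\nline two\nline three")))
}
```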
### TCP Source
Raw TCP socket listener for log ingestion.
```toml
[[pipelines.sources]]
type = "tcp"
[pipelines.sources.tcp]
host = "0.0.0.0"
port = 9091
buffer_size = 1000
read_timeout_ms = 10000
keep_alive = true
keep_alive_period_ms = 30000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | "0.0.0.0" | Bind address |
| `port` | int | Required | Listen port |
| `buffer_size` | int | 1000 | Internal buffer size |
| `read_timeout_ms` | int | 10000 | Read timeout |
| `keep_alive` | bool | true | Enable TCP keep-alive |
| `keep_alive_period_ms` | int | 30000 | Keep-alive interval |
**Protocol:**
- Newline-delimited JSON
- One log entry per line
- UTF-8 encoding
## Network Source Features
### Network Rate Limiting
Available for HTTP and TCP sources:
```toml
[pipelines.sources.http.net_limit]
enabled = true
max_connections_per_ip = 10
max_connections_total = 100
requests_per_second = 100.0
burst_size = 200
response_code = 429
response_message = "Rate limit exceeded"
ip_whitelist = ["192.168.1.0/24"]
ip_blacklist = ["10.0.0.0/8"]
```
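`requests_per_second` and `burst_size` describe a token bucket: tokens refill continuously at the configured rate up to the burst capacity, and each request spends one token. A minimal sketch with tiny limits for illustration (not LogWisp's implementation):

```go
package main

import "fmt"

// bucket is a minimal token bucket: requests_per_second maps to rate,
// burst_size to burst.
type bucket struct {
	tokens, burst, rate float64
	lastSec             float64
}

// allow refills tokens for the elapsed time, then spends one if available.
func (b *bucket) allow(nowSec float64) bool {
	b.tokens += (nowSec - b.lastSec) * b.rate
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
	b.lastSec = nowSec
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	b := &bucket{tokens: 2, burst: 2, rate: 1}
	fmt.Println(b.allow(0)) // burst token
	fmt.Println(b.allow(0)) // burst token
	fmt.Println(b.allow(0)) // bucket empty
	fmt.Println(b.allow(1)) // one second refilled one token
}
```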
### TLS Configuration (HTTP Only)
```toml
[pipelines.sources.http.tls]
enabled = true
cert_file = "/path/to/cert.pem"
key_file = "/path/to/key.pem"
min_version = "TLS1.2"
client_auth = true
client_ca_file = "/path/to/client-ca.pem"
verify_client_cert = true
```
## Source Statistics
All sources track:
- Total entries received
- Dropped entries (buffer full)
- Invalid entries
- Last entry timestamp
- Active connections (network sources)
- Source-specific metrics
## Buffer Management
Each source maintains internal buffers:
- Default size: 1000 entries
- Drop policy when full
- Configurable per source
- Non-blocking writes
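The drop policy amounts to a non-blocking channel send; a minimal sketch of drop-when-full (`tryEnqueue` is a hypothetical helper):

```go
package main

import "fmt"

// tryEnqueue performs a non-blocking send and reports whether the
// entry was buffered or dropped.
func tryEnqueue(buf chan string, entry string) bool {
	select {
	case buf <- entry:
		return true
	default: // buffer full: drop rather than block the source
		return false
	}
}

func main() {
	buf := make(chan string, 2) // tiny buffer for illustration
	fmt.Println(tryEnqueue(buf, "a"))
	fmt.Println(tryEnqueue(buf, "b"))
	fmt.Println(tryEnqueue(buf, "c")) // dropped
}
```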
@@ -1,148 +0,0 @@
# Status Monitoring
LogWisp provides comprehensive monitoring through status endpoints and operational logs.
## Status Endpoints
### Pipeline Status
```bash
# Standalone mode
curl http://localhost:8080/status
# Router mode
curl http://localhost:8080/pipelinename/status
```
Example response:
```json
{
"service": "LogWisp",
"version": "1.0.0",
"server": {
"type": "http",
"port": 8080,
"active_clients": 5,
"buffer_size": 1000,
"uptime_seconds": 3600,
"mode": {"standalone": true, "router": false}
},
"sources": [{
"type": "directory",
"total_entries": 152341,
"dropped_entries": 12,
"active_watchers": 3
}],
"filters": {
"filter_count": 2,
"total_processed": 152341,
"total_passed": 48234
},
"sinks": [{
"type": "http",
"total_processed": 48234,
"active_connections": 5,
"details": {
"port": 8080,
"buffer_size": 1000,
"rate_limit": {
"enabled": true,
"total_requests": 98234,
"blocked_requests": 234
}
}
}],
"endpoints": {
"transport": "/stream",
"status": "/status"
},
"features": {
"heartbeat": {
"enabled": true,
"interval": 30,
"format": "comment"
},
"ssl": {
"enabled": false
},
"rate_limit": {
"enabled": true,
"requests_per_second": 10.0,
"burst_size": 20
}
}
}
```
## Key Metrics
### Source Metrics
| Metric | Description | Healthy Range |
|--------|-------------|---------------|
| `active_watchers` | Files being watched | 1-1000 |
| `total_entries` | Entries processed | Increasing |
| `dropped_entries` | Buffer overflows | < 1% of total |
| `active_connections` | Network connections (HTTP/TCP sources) | Within limits |
### Sink Metrics
| Metric | Description | Warning Signs |
|--------|-------------|---------------|
| `active_connections` | Current clients | Near limit |
| `total_processed` | Entries sent | Should match filter output |
| `total_batches` | Batches sent (client sinks) | Increasing |
| `failed_batches` | Failed sends (client sinks) | > 0 indicates issues |
### Filter Metrics
| Metric | Description | Notes |
|--------|-------------|-------|
| `total_processed` | Entries checked | All entries |
| `total_passed` | Passed filters | Check if too low/high |
| `total_matched` | Pattern matches | Per filter stats |
### Rate Limit Metrics
| Metric | Description | Action |
|--------|-------------|--------|
| `blocked_requests` | Rejected requests | Increase limits if high |
| `active_ips` | Unique IPs tracked | Monitor for attacks |
| `total_connections` | Current connections | Check against limits |
## Operational Logging
### Log Levels
```toml
[logging]
level = "info" # debug, info, warn, error
```
## Health Checks
### Basic Check
```bash
#!/usr/bin/env bash
if curl -s -f http://localhost:8080/status > /dev/null; then
echo "Healthy"
else
echo "Unhealthy"
exit 1
fi
```
### Advanced Check
```bash
#!/usr/bin/env bash
STATUS=$(curl -s http://localhost:8080/status)
DROPPED=$(echo "$STATUS" | jq '.sources[0].dropped_entries')
TOTAL=$(echo "$STATUS" | jq '.sources[0].total_entries')
if [ "$TOTAL" -gt 0 ] && [ $((DROPPED * 100 / TOTAL)) -gt 5 ]; then
echo "High drop rate"
exit 1
fi
# Check client sink failures
FAILED=$(echo "$STATUS" | jq '.sinks[] | select(.type=="http_client") | .details.failed_batches // 0' | head -1)
if [ "$FAILED" -gt 10 ]; then
echo "High failure rate"
exit 1
fi
```
go.mod
```diff
@@ -1,30 +1,27 @@
 module logwisp

-go 1.25.1
+go 1.25.4

 require (
-	github.com/golang-jwt/jwt/v5 v5.3.0
-	github.com/lixenwraith/config v0.0.0-20250908085506-537a4d49d2c3
-	github.com/lixenwraith/log v0.0.0-20250908085352-2df52dfb9208
-	github.com/panjf2000/gnet/v2 v2.9.3
-	github.com/valyala/fasthttp v1.65.0
-	golang.org/x/crypto v0.42.0
-	golang.org/x/term v0.35.0
-	golang.org/x/time v0.13.0
+	github.com/lixenwraith/config v0.1.0
+	github.com/lixenwraith/log v0.0.0-20251010094026-6a161eb2b686
+	github.com/panjf2000/gnet/v2 v2.9.5
+	github.com/valyala/fasthttp v1.68.0
 )

 require (
 	github.com/BurntSushi/toml v1.5.0 // indirect
 	github.com/andybalholm/brotli v1.2.0 // indirect
 	github.com/davecgh/go-spew v1.1.1 // indirect
-	github.com/klauspost/compress v1.18.0 // indirect
+	github.com/go-viper/mapstructure/v2 v2.4.0 // indirect
+	github.com/klauspost/compress v1.18.1 // indirect
 	github.com/mitchellh/mapstructure v1.5.0 // indirect
 	github.com/panjf2000/ants/v2 v2.11.3 // indirect
 	github.com/valyala/bytebufferpool v1.0.0 // indirect
 	go.uber.org/multierr v1.11.0 // indirect
 	go.uber.org/zap v1.27.0 // indirect
-	golang.org/x/sync v0.17.0 // indirect
-	golang.org/x/sys v0.36.0 // indirect
+	golang.org/x/sync v0.18.0 // indirect
+	golang.org/x/sys v0.38.0 // indirect
 	gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
 	gopkg.in/yaml.v3 v3.0.1 // indirect
 )
```
go.sum
```diff
@@ -6,26 +6,30 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/go-viper/mapstructure v1.6.0 h1:0WdPOF2rmmQDN1xo8qIgxyugvLp71HrZSWyGLxofobw=
 github.com/go-viper/mapstructure v1.6.0/go.mod h1:FcbLReH7/cjaC0RVQR+LHFIrBhHF3s1e/ud1KMDoBVw=
-github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo=
-github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
-github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
-github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
-github.com/lixenwraith/config v0.0.0-20250908085506-537a4d49d2c3 h1:+RwUb7dUz9mGdUSW+E0WuqJgTVg1yFnPb94Wyf5ma/0=
-github.com/lixenwraith/config v0.0.0-20250908085506-537a4d49d2c3/go.mod h1:I7ddNPT8MouXXz/ae4DQfBKMq5EisxdDLRX0C7Dv4O0=
-github.com/lixenwraith/log v0.0.0-20250908085352-2df52dfb9208 h1:IB1O/HLv9VR/4mL1Tkjlr91lk+r8anP6bab7rYdS/oE=
-github.com/lixenwraith/log v0.0.0-20250908085352-2df52dfb9208/go.mod h1:E7REMCVTr6DerzDtd2tpEEaZ9R9nduyAIKQFOqHqKr0=
+github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs=
+github.com/go-viper/mapstructure/v2 v2.4.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
+github.com/klauspost/compress v1.18.1 h1:bcSGx7UbpBqMChDtsF28Lw6v/G94LPrrbMbdC3JH2co=
+github.com/klauspost/compress v1.18.1/go.mod h1:ZQFFVG+MdnR0P+l6wpXgIL4NTtwiKIdBnrBd8Nrxr+0=
+github.com/lixenwraith/config v0.0.0-20251003140149-580459b815f6 h1:G9qP8biXBT6bwBOjEe1tZwjA0gPuB5DC+fLBRXDNXqo=
+github.com/lixenwraith/config v0.0.0-20251003140149-580459b815f6/go.mod h1:I7ddNPT8MouXXz/ae4DQfBKMq5EisxdDLRX0C7Dv4O0=
+github.com/lixenwraith/config v0.1.0 h1:MI+qubcsckVayztW3XPuf/Xa5AyPZcgVR/0THbwIbMQ=
+github.com/lixenwraith/config v0.1.0/go.mod h1:roNPTSCT5HSV9dru/zi/Catwc3FZVCFf7vob2pSlNW0=
+github.com/lixenwraith/log v0.0.0-20251010094026-6a161eb2b686 h1:STgvFUpjvZquBF322PNLXaU67oEScewGDLy0aV+lIkY=
+github.com/lixenwraith/log v0.0.0-20251010094026-6a161eb2b686/go.mod h1:E7REMCVTr6DerzDtd2tpEEaZ9R9nduyAIKQFOqHqKr0=
 github.com/panjf2000/ants/v2 v2.11.3 h1:AfI0ngBoXJmYOpDh9m516vjqoUu2sLrIVgppI9TZVpg=
 github.com/panjf2000/ants/v2 v2.11.3/go.mod h1:8u92CYMUc6gyvTIw8Ru7Mt7+/ESnJahz5EVtqfrilek=
-github.com/panjf2000/gnet/v2 v2.9.3 h1:auV3/A9Na3jiBDmYAAU00rPhFKnsAI+TnI1F7YUJMHQ=
-github.com/panjf2000/gnet/v2 v2.9.3/go.mod h1:WQTxDWYuQ/hz3eccH0FN32IVuvZ19HewEWx0l62fx7E=
+github.com/panjf2000/gnet/v2 v2.9.4 h1:XvPCcaFwO4XWg4IgSfZnNV4dfDy5g++HIEx7sH0ldHc=
+github.com/panjf2000/gnet/v2 v2.9.4/go.mod h1:WQTxDWYuQ/hz3eccH0FN32IVuvZ19HewEWx0l62fx7E=
+github.com/panjf2000/gnet/v2 v2.9.5 h1:h/APp9rAFRVAspPl/prruU+FcjqilGyjHDJZ4eTB8Cw=
+github.com/panjf2000/gnet/v2 v2.9.5/go.mod h1:WQTxDWYuQ/hz3eccH0FN32IVuvZ19HewEWx0l62fx7E=
 github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
 github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
 github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
 github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
-github.com/valyala/fasthttp v1.65.0 h1:j/u3uzFEGFfRxw79iYzJN+TteTJwbYkru9uDp3d0Yf8=
-github.com/valyala/fasthttp v1.65.0/go.mod h1:P/93/YkKPMsKSnATEeELUCkG8a7Y+k99uxNHVbKINr4=
+github.com/valyala/fasthttp v1.68.0 h1:v12Nx16iepr8r9ySOwqI+5RBJ/DqTxhOy1HrHoDFnok=
+github.com/valyala/fasthttp v1.68.0/go.mod h1:5EXiRfYQAoiO/khu4oU9VISC/eVY6JqmSpPJoHCKsz4=
 github.com/xyproto/randomstring v1.0.5 h1:YtlWPoRdgMu3NZtP45drfy1GKoojuR7hmRcnhZqKjWU=
 github.com/xyproto/randomstring v1.0.5/go.mod h1:rgmS5DeNXLivK7YprL0pY+lTuhNQW3iGxZ18UQApw/E=
 go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
@@ -34,16 +38,14 @@ go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
 go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
 go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
 go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
-golang.org/x/crypto v0.42.0 h1:chiH31gIWm57EkTXpwnqf8qeuMUi0yekh6mT2AvFlqI=
-golang.org/x/crypto v0.42.0/go.mod h1:4+rDnOTJhQCx2q7/j6rAN5XDw8kPjeaXEUR2eL94ix8=
 golang.org/x/sync v0.17.0 h1:l60nONMj9l5drqw6jlhIELNv9I0A4OFgRsG9k2oT9Ug=
 golang.org/x/sync v0.17.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
-golang.org/x/sys v0.36.0 h1:KVRy2GtZBrk1cBYA7MKu5bEZFxQk4NIDV6RLVcC8o0k=
-golang.org/x/sys v0.36.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
-golang.org/x/term v0.35.0 h1:bZBVKBudEyhRcajGcNc3jIfWPqV4y/Kt2XcoigOWtDQ=
-golang.org/x/term v0.35.0/go.mod h1:TPGtkTLesOwf2DE8CgVYiZinHAOuy5AYUYT1lENIZnA=
-golang.org/x/time v0.13.0 h1:eUlYslOIt32DgYD6utsuUeHs4d7AsEYLuIAdg7FlYgI=
-golang.org/x/time v0.13.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
+golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
+golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
+golang.org/x/sys v0.37.0 h1:fdNQudmxPjkdUTPnLn5mdQv7Zwvbvpaxqs831goi9kQ=
+golang.org/x/sys v0.37.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
+golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
+golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/natefinch/lumberjack.v2 v2.2.1 h1:bBRl1b0OH9s/DuPhuXpNl+VtCaJXFZ5/uEFST95x9zc=
```
@@ -1,110 +0,0 @@
// FILE: logwisp/src/cmd/auth-gen/main.go
package main
import (
"crypto/rand"
"encoding/base64"
"flag"
"fmt"
"os"
"syscall"
"golang.org/x/crypto/bcrypt"
"golang.org/x/term"
)
func main() {
var (
username = flag.String("u", "", "Username for basic auth")
password = flag.String("p", "", "Password to hash (will prompt if not provided)")
cost = flag.Int("c", 10, "Bcrypt cost (10-31)")
genToken = flag.Bool("t", false, "Generate random bearer token")
tokenLen = flag.Int("l", 32, "Token length in bytes")
)
flag.Usage = func() {
fmt.Fprintf(os.Stderr, "LogWisp Authentication Utility\n\n")
fmt.Fprintf(os.Stderr, "Usage:\n")
fmt.Fprintf(os.Stderr, " Generate bcrypt hash: %s -u <username> [-p <password>]\n", os.Args[0])
fmt.Fprintf(os.Stderr, " Generate bearer token: %s -t [-l <length>]\n", os.Args[0])
fmt.Fprintf(os.Stderr, "\nOptions:\n")
flag.PrintDefaults()
}
flag.Parse()
if *genToken {
generateToken(*tokenLen)
return
}
if *username == "" {
fmt.Fprintf(os.Stderr, "Error: Username required for basic auth\n")
flag.Usage()
os.Exit(1)
}
// Get password
pass := *password
if pass == "" {
pass = promptPassword("Enter password: ")
confirm := promptPassword("Confirm password: ")
if pass != confirm {
fmt.Fprintf(os.Stderr, "Error: Passwords don't match\n")
os.Exit(1)
}
}
// Generate bcrypt hash
hash, err := bcrypt.GenerateFromPassword([]byte(pass), *cost)
if err != nil {
fmt.Fprintf(os.Stderr, "Error generating hash: %v\n", err)
os.Exit(1)
}
// Output TOML config format
fmt.Println("\n# Add to logwisp.toml under [[pipelines.auth.basic_auth.users]]:")
fmt.Printf("[[pipelines.auth.basic_auth.users]]\n")
fmt.Printf("username = \"%s\"\n", *username)
fmt.Printf("password_hash = \"%s\"\n", string(hash))
// Also output for users file format
fmt.Println("\n# Or add to users file:")
fmt.Printf("%s:%s\n", *username, string(hash))
}
func promptPassword(prompt string) string {
fmt.Fprint(os.Stderr, prompt)
password, err := term.ReadPassword(int(syscall.Stdin))
fmt.Fprintln(os.Stderr)
if err != nil {
fmt.Fprintf(os.Stderr, "Error reading password: %v\n", err)
os.Exit(1)
}
return string(password)
}
func generateToken(length int) {
if length < 16 {
fmt.Fprintf(os.Stderr, "Warning: Token length < 16 bytes is insecure\n")
}
token := make([]byte, length)
if _, err := rand.Read(token); err != nil {
fmt.Fprintf(os.Stderr, "Error generating token: %v\n", err)
os.Exit(1)
}
// Output in various formats
b64 := base64.URLEncoding.WithPadding(base64.NoPadding).EncodeToString(token)
hex := fmt.Sprintf("%x", token)
fmt.Println("\n# Add to logwisp.toml under [pipelines.auth.bearer_auth]:")
fmt.Printf("tokens = [\"%s\"]\n", b64)
fmt.Println("\n# Alternative hex encoding:")
fmt.Printf("# tokens = [\"%s\"]\n", hex)
fmt.Printf("\n# Token (base64): %s\n", b64)
fmt.Printf("# Token (hex): %s\n", hex)
}
```diff
@@ -13,10 +13,10 @@ import (
 	"github.com/lixenwraith/log"
 )

-// bootstrapService creates and initializes the log transport service
+// bootstrapService creates and initializes the main log transport service and its pipelines.
 func bootstrapService(ctx context.Context, cfg *config.Config) (*service.Service, error) {
 	// Create service with logger dependency injection
-	svc := service.New(ctx, logger)
+	svc := service.NewService(ctx, logger)

 	// Initialize pipelines
 	successCount := 0
@@ -24,7 +24,7 @@ func bootstrapService(ctx context.Context, cfg *config.Config) (*service.Service
 		logger.Info("msg", "Initializing pipeline", "pipeline", pipelineCfg.Name)

 		// Create the pipeline
-		if err := svc.NewPipeline(pipelineCfg); err != nil {
+		if err := svc.NewPipeline(&pipelineCfg); err != nil {
 			logger.Error("msg", "Failed to create pipeline",
 				"pipeline", pipelineCfg.Name,
 				"error", err)
@@ -45,7 +45,7 @@ func bootstrapService(ctx context.Context, cfg *config.Config) (*service.Service
 	return svc, nil
 }

-// initializeLogger sets up the logger based on configuration
+// initializeLogger sets up the global logger based on the application's configuration.
 func initializeLogger(cfg *config.Config) error {
 	logger = log.NewLogger()
 	logCfg := log.DefaultConfig()
@@ -53,8 +53,8 @@ func initializeLogger(cfg *config.Config) error {
 	if cfg.Quiet {
 		// In quiet mode, disable ALL logging output
 		logCfg.Level = 255 // A level that disables all output
-		logCfg.DisableFile = true
-		logCfg.EnableStdout = false
+		logCfg.EnableFile = false
+		logCfg.EnableConsole = false
 		return logger.ApplyConfig(logCfg)
 	}

@@ -68,23 +68,29 @@ func initializeLogger(cfg *config.Config) error {
 	// Configure based on output mode
 	switch cfg.Logging.Output {
 	case "none":
-		logCfg.DisableFile = true
-		logCfg.EnableStdout = false
+		logCfg.EnableFile = false
+		logCfg.EnableConsole = false
 	case "stdout":
-		logCfg.DisableFile = true
-		logCfg.EnableStdout = true
-		logCfg.StdoutTarget = "stdout"
+		logCfg.EnableFile = false
+		logCfg.EnableConsole = true
+		logCfg.ConsoleTarget = "stdout"
 	case "stderr":
-		logCfg.DisableFile = true
-		logCfg.EnableStdout = true
-		logCfg.StdoutTarget = "stderr"
+		logCfg.EnableFile = false
+		logCfg.EnableConsole = true
+		logCfg.ConsoleTarget = "stderr"
+	case "split":
+		logCfg.EnableFile = false
+		logCfg.EnableConsole = true
+		logCfg.ConsoleTarget = "split"
 	case "file":
-		logCfg.EnableStdout = false
+		logCfg.EnableFile = true
+		logCfg.EnableConsole = false
 		configureFileLogging(logCfg, cfg)
-	case "both":
-		logCfg.EnableStdout = true
+	case "all":
+		logCfg.EnableFile = true
+		logCfg.EnableConsole = true
+		logCfg.ConsoleTarget = "split"
 		configureFileLogging(logCfg, cfg)
-		configureConsoleTarget(logCfg, cfg)
 	default:
 		return fmt.Errorf("invalid log output mode: %s", cfg.Logging.Output)
 	}
@@ -97,7 +103,7 @@ func initializeLogger(cfg *config.Config) error {
 	return logger.ApplyConfig(logCfg)
 }

-// configureFileLogging sets up file-based logging parameters
+// configureFileLogging sets up file-based logging parameters from the configuration.
 func configureFileLogging(logCfg *log.Config, cfg *config.Config) {
 	if cfg.Logging.File != nil {
 		logCfg.Directory = cfg.Logging.File.Directory
@@ -110,18 +116,7 @@ func configureFileLogging(logCfg *log.Config, cfg *config.Config) {
 	}
 }

-// configureConsoleTarget sets up console output parameters
-func configureConsoleTarget(logCfg *log.Config, cfg *config.Config) {
-	target := "stderr" // default
-	if cfg.Logging.Console != nil && cfg.Logging.Console.Target != "" {
-		target = cfg.Logging.Console.Target
-	}
-	// Set the target, which can be "stdout", "stderr", or "split"
-	logCfg.StdoutTarget = target
-}
-
+// parseLogLevel converts a string log level to its corresponding integer value.
 func parseLogLevel(level string) (int64, error) {
 	switch strings.ToLower(level) {
 	case "debug":
```
@@ -0,0 +1,123 @@
// FILE: src/cmd/logwisp/commands/help.go
package commands
import (
"fmt"
"sort"
"strings"
)
// generalHelpTemplate is the default help message shown when no specific command is requested.
const generalHelpTemplate = `LogWisp: A flexible log transport and processing tool.
Usage:
logwisp [command] [options]
logwisp [options]
Commands:
%s
Application Options:
-c, --config <path> Path to configuration file (default: logwisp.toml)
-h, --help Display this help message and exit
-v, --version Display version information and exit
-b, --background Run LogWisp in the background as a daemon
-q, --quiet Suppress all console output, including errors
Runtime Options:
--disable-status-reporter Disable the periodic status reporter
--config-auto-reload Enable config reload on file change
For command-specific help:
logwisp help <command>
logwisp <command> --help
Configuration Sources (Precedence: CLI > Env > File > Defaults):
- CLI flags override all other settings
- Environment variables override file settings
- TOML configuration file is the primary method
Examples:
# Start service with custom config
logwisp -c /etc/logwisp/prod.toml
# Run in background with config reload
logwisp -b --config-auto-reload
For detailed configuration options, please refer to the documentation.
`
// HelpCommand handles the display of general or command-specific help messages.
type HelpCommand struct {
router *CommandRouter
}
// NewHelpCommand creates a new help command handler.
func NewHelpCommand(router *CommandRouter) *HelpCommand {
return &HelpCommand{router: router}
}
// Execute displays the appropriate help message based on the provided arguments.
func (c *HelpCommand) Execute(args []string) error {
// Check if help is requested for a specific command
if len(args) > 0 && args[0] != "" {
cmdName := args[0]
if handler, exists := c.router.GetCommand(cmdName); exists {
fmt.Print(handler.Help())
return nil
}
return fmt.Errorf("unknown command: %s", cmdName)
}
// Display general help with command list
fmt.Printf(generalHelpTemplate, c.formatCommandList())
return nil
}
// Description returns a brief one-line description of the command.
func (c *HelpCommand) Description() string {
return "Display help information"
}
// Help returns the detailed help text for the 'help' command itself.
func (c *HelpCommand) Help() string {
return `Help Command - Display help information
Usage:
logwisp help Show general help
logwisp help <command> Show help for a specific command
Examples:
logwisp help # Show general help
logwisp help auth # Show auth command help
logwisp auth --help # Alternative way to get command help
`
}
// formatCommandList creates a formatted and aligned list of all available commands.
func (c *HelpCommand) formatCommandList() string {
commands := c.router.GetCommands()
// Sort command names for consistent output
names := make([]string, 0, len(commands))
maxLen := 0
for name := range commands {
names = append(names, name)
if len(name) > maxLen {
maxLen = len(name)
}
}
sort.Strings(names)
// Format each command with aligned descriptions
var lines []string
for _, name := range names {
handler := commands[name]
padding := strings.Repeat(" ", maxLen-len(name)+2)
lines = append(lines, fmt.Sprintf(" %s%s%s", name, padding, handler.Description()))
}
return strings.Join(lines, "\n")
}
@@ -0,0 +1,119 @@
// FILE: src/cmd/logwisp/commands/router.go
package commands
import (
"fmt"
"os"
)
// Handler defines the interface required for all subcommands.
type Handler interface {
Execute(args []string) error
Description() string
Help() string
}
// CommandRouter handles the routing of CLI arguments to the appropriate subcommand handler.
type CommandRouter struct {
commands map[string]Handler
}
// NewCommandRouter creates and initializes the command router with all available commands.
func NewCommandRouter() *CommandRouter {
router := &CommandRouter{
commands: make(map[string]Handler),
}
// Register available commands
router.commands["tls"] = NewTLSCommand()
router.commands["version"] = NewVersionCommand()
router.commands["help"] = NewHelpCommand(router)
return router
}
// Route checks for and executes a subcommand based on the provided CLI arguments.
func (r *CommandRouter) Route(args []string) (bool, error) {
if len(args) < 2 {
return false, nil // No command specified, let main app continue
}
cmdName := args[1]
// Special case: help flag at any position shows general help
for _, arg := range args[1:] {
if arg == "-h" || arg == "--help" {
// If it's after a valid command, show command-specific help
if handler, exists := r.commands[cmdName]; exists && cmdName != "help" {
fmt.Print(handler.Help())
return true, nil
}
// Otherwise show general help
return true, r.commands["help"].Execute(nil)
}
}
// Check if this is a known command
handler, exists := r.commands[cmdName]
if !exists {
// Check if it looks like a mistyped command (not a flag)
if len(cmdName) > 0 && cmdName[0] != '-' {
return false, fmt.Errorf("unknown command: %s\n\nRun 'logwisp help' for usage", cmdName)
}
// It's a flag, let main app handle it
return false, nil
}
// Execute the command
return true, handler.Execute(args[2:])
}
// GetCommand returns a specific command handler by its name.
func (r *CommandRouter) GetCommand(name string) (Handler, bool) {
cmd, exists := r.commands[name]
return cmd, exists
}
// GetCommands returns a map of all registered commands.
func (r *CommandRouter) GetCommands() map[string]Handler {
return r.commands
}
// ShowCommands displays a list of available subcommands to stderr.
func (r *CommandRouter) ShowCommands() {
for name, handler := range r.commands {
fmt.Fprintf(os.Stderr, " %-10s %s\n", name, handler.Description())
}
fmt.Fprintln(os.Stderr, "\nUse 'logwisp <command> --help' for command-specific help")
}
// coalesceString returns the first non-empty string from a list of arguments.
func coalesceString(values ...string) string {
for _, v := range values {
if v != "" {
return v
}
}
return ""
}
// coalesceInt returns the first non-default integer from a list of arguments.
func coalesceInt(primary, secondary, defaultVal int) int {
if primary != defaultVal {
return primary
}
if secondary != defaultVal {
return secondary
}
return defaultVal
}
// coalesceBool returns true if any of the boolean arguments is true.
func coalesceBool(values ...bool) bool {
for _, v := range values {
if v {
return true
}
}
return false
}


@ -0,0 +1,571 @@
// FILE: src/cmd/logwisp/commands/tls.go
package commands
import (
"crypto/rand"
"crypto/rsa"
"crypto/x509"
"crypto/x509/pkix"
"encoding/pem"
"flag"
"fmt"
"io"
"math/big"
"net"
"os"
"strings"
"time"
)
// TLSCommand handles the generation of TLS certificates.
type TLSCommand struct {
output io.Writer
errOut io.Writer
}
// NewTLSCommand creates a new TLS command handler.
func NewTLSCommand() *TLSCommand {
return &TLSCommand{
output: os.Stdout,
errOut: os.Stderr,
}
}
// Execute parses flags and routes to the appropriate certificate generation function.
func (tc *TLSCommand) Execute(args []string) error {
cmd := flag.NewFlagSet("tls", flag.ContinueOnError)
cmd.SetOutput(tc.errOut)
// Certificate type flags
var (
genCA = cmd.Bool("ca", false, "Generate CA certificate")
genServer = cmd.Bool("server", false, "Generate server certificate")
genClient = cmd.Bool("client", false, "Generate client certificate")
selfSign = cmd.Bool("self-signed", false, "Generate self-signed certificate")
// Common options - short forms
commonName = cmd.String("cn", "", "Common name (required)")
org = cmd.String("o", "LogWisp", "Organization")
country = cmd.String("c", "US", "Country code")
validDays = cmd.Int("d", 365, "Validity period in days")
keySize = cmd.Int("b", 2048, "RSA key size")
// Common options - long forms
commonNameLong = cmd.String("common-name", "", "Common name (required)")
orgLong = cmd.String("org", "LogWisp", "Organization")
countryLong = cmd.String("country", "US", "Country code")
validDaysLong = cmd.Int("days", 365, "Validity period in days")
keySizeLong = cmd.Int("bits", 2048, "RSA key size")
// Server/Client specific - short forms
hosts = cmd.String("h", "", "Comma-separated hostnames/IPs")
caFile = cmd.String("ca-cert", "", "CA certificate file")
caKey = cmd.String("ca-key", "", "CA key file")
// Server/Client specific - long forms
hostsLong = cmd.String("hosts", "", "Comma-separated hostnames/IPs")
// Output files
certOut = cmd.String("cert-out", "", "Output certificate file")
keyOut = cmd.String("key-out", "", "Output key file")
)
cmd.Usage = func() {
fmt.Fprintln(tc.errOut, "Generate TLS certificates for LogWisp")
fmt.Fprintln(tc.errOut, "\nUsage: logwisp tls [options]")
fmt.Fprintln(tc.errOut, "\nExamples:")
fmt.Fprintln(tc.errOut, " # Generate self-signed certificate")
fmt.Fprintln(tc.errOut, " logwisp tls --self-signed --cn localhost --hosts localhost,127.0.0.1")
fmt.Fprintln(tc.errOut, " ")
fmt.Fprintln(tc.errOut, " # Generate CA certificate")
fmt.Fprintln(tc.errOut, " logwisp tls --ca --cn \"LogWisp CA\" --cert-out ca.crt --key-out ca.key")
fmt.Fprintln(tc.errOut, " ")
fmt.Fprintln(tc.errOut, " # Generate server certificate signed by CA")
fmt.Fprintln(tc.errOut, " logwisp tls --server --cn server.example.com --hosts server.example.com \\")
fmt.Fprintln(tc.errOut, " --ca-cert ca.crt --ca-key ca.key")
fmt.Fprintln(tc.errOut, "\nOptions:")
cmd.PrintDefaults()
fmt.Fprintln(tc.errOut)
}
if err := cmd.Parse(args); err != nil {
return err
}
// Check for unparsed arguments
if cmd.NArg() > 0 {
return fmt.Errorf("unexpected argument(s): %s", strings.Join(cmd.Args(), " "))
}
// Merge short and long options
finalCN := coalesceString(*commonName, *commonNameLong)
finalOrg := coalesceString(*org, *orgLong, "LogWisp")
finalCountry := coalesceString(*country, *countryLong, "US")
finalDays := coalesceInt(*validDays, *validDaysLong, 365)
finalKeySize := coalesceInt(*keySize, *keySizeLong, 2048)
finalHosts := coalesceString(*hosts, *hostsLong)
finalCAFile := *caFile // no short form
finalCAKey := *caKey // no short form
finalCertOut := *certOut // no short form
finalKeyOut := *keyOut // no short form
// Validate common name
if finalCN == "" {
cmd.Usage()
return fmt.Errorf("common name (--cn) is required")
}
// Validate RSA key size
if finalKeySize != 2048 && finalKeySize != 3072 && finalKeySize != 4096 {
return fmt.Errorf("invalid key size: %d (valid: 2048, 3072, 4096)", finalKeySize)
}
// Route to appropriate generator
switch {
case *genCA:
return tc.generateCA(finalCN, finalOrg, finalCountry, finalDays, finalKeySize, finalCertOut, finalKeyOut)
case *selfSign:
return tc.generateSelfSigned(finalCN, finalOrg, finalCountry, finalHosts, finalDays, finalKeySize, finalCertOut, finalKeyOut)
case *genServer:
return tc.generateServerCert(finalCN, finalOrg, finalCountry, finalHosts, finalCAFile, finalCAKey, finalDays, finalKeySize, finalCertOut, finalKeyOut)
case *genClient:
return tc.generateClientCert(finalCN, finalOrg, finalCountry, finalCAFile, finalCAKey, finalDays, finalKeySize, finalCertOut, finalKeyOut)
default:
cmd.Usage()
return fmt.Errorf("specify certificate type: --ca, --self-signed, --server, or --client")
}
}
// Description returns a brief one-line description of the command.
func (tc *TLSCommand) Description() string {
return "Generate TLS certificates (CA, server, client, self-signed)"
}
// Help returns the detailed help text for the command.
func (tc *TLSCommand) Help() string {
return `TLS Command - Generate TLS certificates for LogWisp
Usage:
logwisp tls [options]
Certificate Types:
--ca Generate Certificate Authority (CA) certificate
--server Generate server certificate (requires CA or self-signed)
--client Generate client certificate (for mTLS)
--self-signed Generate self-signed certificate (single cert for testing)
Common Options:
--cn, --common-name <name> Common Name (required)
-o, --org <organization> Organization name (default: "LogWisp")
-c, --country <code> Country code (default: "US")
-d, --days <number> Validity period in days (default: 365)
-b, --bits <size> RSA key size (default: 2048)
Server Certificate Options:
-h, --hosts <list> Comma-separated hostnames/IPs
Example: "localhost,10.0.0.1,example.com"
--ca-cert <file> CA certificate file (for signing)
--ca-key <file> CA key file (for signing)
Output Options:
--cert-out <file> Output certificate file (default: stdout)
--key-out <file> Output private key file (default: stdout)
Examples:
# Generate self-signed certificate for testing
logwisp tls --self-signed --cn localhost --hosts "localhost,127.0.0.1" \
--cert-out server.crt --key-out server.key
# Generate CA certificate
logwisp tls --ca --cn "LogWisp CA" --days 3650 \
--cert-out ca.crt --key-out ca.key
# Generate server certificate signed by CA
logwisp tls --server --cn "logwisp.example.com" \
--hosts "logwisp.example.com,10.0.0.100" \
--ca-cert ca.crt --ca-key ca.key \
--cert-out server.crt --key-out server.key
# Generate client certificate for mTLS
logwisp tls --client --cn "client1" \
--ca-cert ca.crt --ca-key ca.key \
--cert-out client.crt --key-out client.key
Security Notes:
- Keep private keys secure and never share them
- Use 2048-bit RSA minimum, 3072 or 4096 for higher security
- For production, use certificates from a trusted CA
- Self-signed certificates are only for development/testing
- Rotate certificates before expiration
`
}
// generateCA creates a new Certificate Authority (CA) certificate and private key.
func (tc *TLSCommand) generateCA(cn, org, country string, days, bits int, certFile, keyFile string) error {
// Generate RSA key
priv, err := rsa.GenerateKey(rand.Reader, bits)
if err != nil {
return fmt.Errorf("failed to generate key: %w", err)
}
// Create certificate template
serialNumber, _ := rand.Int(rand.Reader, new(big.Int).Lsh(big.NewInt(1), 128))
template := x509.Certificate{
SerialNumber: serialNumber,
Subject: pkix.Name{
Organization: []string{org},
Country: []string{country},
CommonName: cn,
},
NotBefore: time.Now(),
NotAfter: time.Now().AddDate(0, 0, days),
KeyUsage: x509.KeyUsageCertSign | x509.KeyUsageCRLSign,
BasicConstraintsValid: true,
IsCA: true,
}
// Generate certificate
certDER, err := x509.CreateCertificate(rand.Reader, &template, &template, &priv.PublicKey, priv)
if err != nil {
return fmt.Errorf("failed to create certificate: %w", err)
}
// Default output files
if certFile == "" {
certFile = "ca.crt"
}
if keyFile == "" {
keyFile = "ca.key"
}
// Save certificate
if err := saveCert(certFile, certDER); err != nil {
return err
}
if err := saveKey(keyFile, priv); err != nil {
return err
}
fmt.Fprintf(tc.output, "\n✓ CA certificate generated:\n")
fmt.Fprintf(tc.output, " Certificate: %s\n", certFile)
fmt.Fprintf(tc.output, " Private key: %s (mode 0600)\n", keyFile)
fmt.Fprintf(tc.output, " Valid for: %d days\n", days)
fmt.Fprintf(tc.output, " Common name: %s\n", cn)
return nil
}
// generateSelfSigned creates a new self-signed server certificate and private key.
func (tc *TLSCommand) generateSelfSigned(cn, org, country, hosts string, days, bits int, certFile, keyFile string) error {
// 1. Generate an RSA private key with the specified bit size
priv, err := rsa.GenerateKey(rand.Reader, bits)
if err != nil {
return fmt.Errorf("failed to generate private key: %w", err)
}
// 2. Parse the hosts string into DNS names and IP addresses
dnsNames, ipAddrs := parseHosts(hosts)
// 3. Create the certificate template
serialNumber, _ := rand.Int(rand.Reader, new(big.Int).Lsh(big.NewInt(1), 128))
template := x509.Certificate{
SerialNumber: serialNumber,
Subject: pkix.Name{
CommonName: cn,
Organization: []string{org},
Country: []string{country},
},
NotBefore: time.Now(),
NotAfter: time.Now().AddDate(0, 0, days),
KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
IsCA: false,
DNSNames: dnsNames,
IPAddresses: ipAddrs,
}
// 4. Create the self-signed certificate
certDER, err := x509.CreateCertificate(rand.Reader, &template, &template, &priv.PublicKey, priv)
if err != nil {
return fmt.Errorf("failed to create certificate: %w", err)
}
// 5. Default output filenames
if certFile == "" {
certFile = "server.crt"
}
if keyFile == "" {
keyFile = "server.key"
}
// 6. Save the certificate with 0644 permissions
if err := saveCert(certFile, certDER); err != nil {
return err
}
if err := saveKey(keyFile, priv); err != nil {
return err
}
// 7. Print summary
fmt.Fprintf(tc.output, "\n✓ Self-signed certificate generated:\n")
fmt.Fprintf(tc.output, " Certificate: %s\n", certFile)
fmt.Fprintf(tc.output, " Private Key: %s (mode 0600)\n", keyFile)
fmt.Fprintf(tc.output, " Valid for: %d days\n", days)
fmt.Fprintf(tc.output, " Common Name: %s\n", cn)
if hosts != "" {
fmt.Fprintf(tc.output, " Hosts (SANs): %s\n", hosts)
}
return nil
}
// generateServerCert creates a new server certificate signed by a provided CA.
func (tc *TLSCommand) generateServerCert(cn, org, country, hosts, caFile, caKeyFile string, days, bits int, certFile, keyFile string) error {
caCert, caKey, err := loadCA(caFile, caKeyFile)
if err != nil {
return err
}
priv, err := rsa.GenerateKey(rand.Reader, bits)
if err != nil {
return fmt.Errorf("failed to generate server private key: %w", err)
}
dnsNames, ipAddrs := parseHosts(hosts)
serialNumber, _ := rand.Int(rand.Reader, new(big.Int).Lsh(big.NewInt(1), 128))
certExpiry := time.Now().AddDate(0, 0, days)
if certExpiry.After(caCert.NotAfter) {
return fmt.Errorf("certificate validity period (%d days) exceeds CA expiry (%s)", days, caCert.NotAfter.Format(time.RFC3339))
}
template := x509.Certificate{
SerialNumber: serialNumber,
Subject: pkix.Name{
CommonName: cn,
Organization: []string{org},
Country: []string{country},
},
NotBefore: time.Now(),
NotAfter: certExpiry,
KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
DNSNames: dnsNames,
IPAddresses: ipAddrs,
}
certDER, err := x509.CreateCertificate(rand.Reader, &template, caCert, &priv.PublicKey, caKey)
if err != nil {
return fmt.Errorf("failed to sign server certificate: %w", err)
}
if certFile == "" {
certFile = "server.crt"
}
if keyFile == "" {
keyFile = "server.key"
}
if err := saveCert(certFile, certDER); err != nil {
return err
}
if err := saveKey(keyFile, priv); err != nil {
return err
}
fmt.Fprintf(tc.output, "\n✓ Server certificate generated:\n")
fmt.Fprintf(tc.output, " Certificate: %s\n", certFile)
fmt.Fprintf(tc.output, " Private Key: %s (mode 0600)\n", keyFile)
fmt.Fprintf(tc.output, " Signed by: CN=%s\n", caCert.Subject.CommonName)
if hosts != "" {
fmt.Fprintf(tc.output, " Hosts (SANs): %s\n", hosts)
}
return nil
}
// generateClientCert creates a new client certificate signed by a provided CA for mTLS.
func (tc *TLSCommand) generateClientCert(cn, org, country, caFile, caKeyFile string, days, bits int, certFile, keyFile string) error {
caCert, caKey, err := loadCA(caFile, caKeyFile)
if err != nil {
return err
}
priv, err := rsa.GenerateKey(rand.Reader, bits)
if err != nil {
return fmt.Errorf("failed to generate client private key: %w", err)
}
serialNumber, _ := rand.Int(rand.Reader, new(big.Int).Lsh(big.NewInt(1), 128))
certExpiry := time.Now().AddDate(0, 0, days)
if certExpiry.After(caCert.NotAfter) {
return fmt.Errorf("certificate validity period (%d days) exceeds CA expiry (%s)", days, caCert.NotAfter.Format(time.RFC3339))
}
template := x509.Certificate{
SerialNumber: serialNumber,
Subject: pkix.Name{
CommonName: cn,
Organization: []string{org},
Country: []string{country},
},
NotBefore: time.Now(),
NotAfter: certExpiry,
KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
}
certDER, err := x509.CreateCertificate(rand.Reader, &template, caCert, &priv.PublicKey, caKey)
if err != nil {
return fmt.Errorf("failed to sign client certificate: %w", err)
}
if certFile == "" {
certFile = "client.crt"
}
if keyFile == "" {
keyFile = "client.key"
}
if err := saveCert(certFile, certDER); err != nil {
return err
}
if err := saveKey(keyFile, priv); err != nil {
return err
}
fmt.Fprintf(tc.output, "\n✓ Client certificate generated:\n")
fmt.Fprintf(tc.output, " Certificate: %s\n", certFile)
fmt.Fprintf(tc.output, " Private Key: %s (mode 0600)\n", keyFile)
fmt.Fprintf(tc.output, " Signed by: CN=%s\n", caCert.Subject.CommonName)
return nil
}
// loadCA reads and parses a CA certificate and its corresponding private key from files.
func loadCA(certFile, keyFile string) (*x509.Certificate, *rsa.PrivateKey, error) {
// Load CA certificate
certPEM, err := os.ReadFile(certFile)
if err != nil {
return nil, nil, fmt.Errorf("failed to read CA certificate: %w", err)
}
certBlock, _ := pem.Decode(certPEM)
if certBlock == nil || certBlock.Type != "CERTIFICATE" {
return nil, nil, fmt.Errorf("invalid CA certificate format")
}
caCert, err := x509.ParseCertificate(certBlock.Bytes)
if err != nil {
return nil, nil, fmt.Errorf("failed to parse CA certificate: %w", err)
}
// Load CA private key
keyPEM, err := os.ReadFile(keyFile)
if err != nil {
return nil, nil, fmt.Errorf("failed to read CA key: %w", err)
}
keyBlock, _ := pem.Decode(keyPEM)
if keyBlock == nil {
return nil, nil, fmt.Errorf("invalid CA key format")
}
var caKey *rsa.PrivateKey
switch keyBlock.Type {
case "RSA PRIVATE KEY":
caKey, err = x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
case "PRIVATE KEY":
parsedKey, err := x509.ParsePKCS8PrivateKey(keyBlock.Bytes)
if err != nil {
return nil, nil, fmt.Errorf("failed to parse CA key: %w", err)
}
var ok bool
caKey, ok = parsedKey.(*rsa.PrivateKey)
if !ok {
return nil, nil, fmt.Errorf("CA key is not RSA")
}
default:
return nil, nil, fmt.Errorf("unsupported CA key type: %s", keyBlock.Type)
}
if err != nil {
return nil, nil, fmt.Errorf("failed to parse CA private key: %w", err)
}
// Verify CA certificate is actually a CA
if !caCert.IsCA {
return nil, nil, fmt.Errorf("certificate is not a CA certificate")
}
return caCert, caKey, nil
}
// saveCert saves a DER-encoded certificate to a file in PEM format.
func saveCert(filename string, certDER []byte) error {
certFile, err := os.Create(filename)
if err != nil {
return fmt.Errorf("failed to create certificate file: %w", err)
}
defer certFile.Close()
if err := pem.Encode(certFile, &pem.Block{
Type: "CERTIFICATE",
Bytes: certDER,
}); err != nil {
return fmt.Errorf("failed to write certificate: %w", err)
}
// Set readable permissions
if err := os.Chmod(filename, 0644); err != nil {
return fmt.Errorf("failed to set certificate permissions: %w", err)
}
return nil
}
// saveKey saves an RSA private key to a file in PEM format with restricted permissions.
func saveKey(filename string, key *rsa.PrivateKey) error {
keyFile, err := os.Create(filename)
if err != nil {
return fmt.Errorf("failed to create key file: %w", err)
}
defer keyFile.Close()
privKeyDER := x509.MarshalPKCS1PrivateKey(key)
if err := pem.Encode(keyFile, &pem.Block{
Type: "RSA PRIVATE KEY",
Bytes: privKeyDER,
}); err != nil {
return fmt.Errorf("failed to write private key: %w", err)
}
// Set restricted permissions for private key
if err := os.Chmod(filename, 0600); err != nil {
return fmt.Errorf("failed to set key permissions: %w", err)
}
return nil
}
// parseHosts splits a comma-separated string of hosts into slices of DNS names and IP addresses.
func parseHosts(hostList string) ([]string, []net.IP) {
var dnsNames []string
var ipAddrs []net.IP
if hostList == "" {
return dnsNames, ipAddrs
}
hosts := strings.Split(hostList, ",")
for _, h := range hosts {
h = strings.TrimSpace(h)
if ip := net.ParseIP(h); ip != nil {
ipAddrs = append(ipAddrs, ip)
} else {
dnsNames = append(dnsNames, h)
}
}
return dnsNames, ipAddrs
}


@ -0,0 +1,44 @@
// FILE: src/cmd/logwisp/commands/version.go
package commands
import (
"fmt"
"logwisp/src/internal/version"
)
// VersionCommand handles the display of the application's version information.
type VersionCommand struct{}
// NewVersionCommand creates a new version command handler.
func NewVersionCommand() *VersionCommand {
return &VersionCommand{}
}
// Execute prints the detailed version string to stdout.
func (c *VersionCommand) Execute(args []string) error {
fmt.Println(version.String())
return nil
}
// Description returns a brief one-line description of the command.
func (c *VersionCommand) Description() string {
return "Show version information"
}
// Help returns the detailed help text for the command.
func (c *VersionCommand) Help() string {
return `Version Command - Show LogWisp version information
Usage:
logwisp version
logwisp -v
logwisp --version
Output includes:
- Version number
- Build date
- Git commit hash (if available)
- Go version used for compilation
`
}


@ -1,56 +0,0 @@
// FILE: logwisp/src/cmd/logwisp/help.go
package main
import (
"fmt"
"os"
)
const helpText = `LogWisp: A flexible log transport and processing tool.
Usage: logwisp [options]
Application Control:
-c, --config <path> (string) Path to configuration file (default: logwisp.toml).
-h, --help Display this help message and exit.
-v, --version Display version information and exit.
-b, --background Run LogWisp in the background as a daemon.
-q, --quiet Suppress all console output, including errors.
Runtime Behavior:
--disable-status-reporter Disable the periodic status reporter.
--config-auto-reload Enable config reload and pipeline reconfiguration on config file change.
Configuration Sources (Precedence: CLI > Env > File > Defaults):
- CLI flags override all other settings.
- Environment variables override file settings.
- TOML configuration file is the primary method for defining pipelines.
Logging ([logging] section or LOGWISP_LOGGING_* env vars):
output = "stderr" (string) Log output: none, stdout, stderr, file, both.
level = "info" (string) Log level: debug, info, warn, error.
[logging.file] Settings for file logging (directory, name, rotation).
[logging.console] Settings for console logging (target, format).
Pipelines ([[pipelines]] array in TOML):
Each pipeline defines a complete data flow from sources to sinks.
name = "my_pipeline" (string) Unique name for the pipeline.
sources = [...] (array) Data inputs (e.g., directory, stdin, http, tcp).
sinks = [...] (array) Data outputs (e.g., http, tcp, file, stdout, stderr, http_client).
filters = [...] (array) Optional filters to include/exclude logs based on regex.
rate_limit = {...} (object) Optional rate limiting for the entire pipeline.
auth = {...} (object) Optional authentication for network sinks.
format = "json" (string) Optional output formatter for the pipeline (raw, text, json).
For detailed configuration options, please refer to the documentation.
`
// CheckAndDisplayHelp scans arguments for help flags and prints help text if found.
func CheckAndDisplayHelp(args []string) {
for _, arg := range args {
if arg == "-h" || arg == "--help" {
fmt.Fprint(os.Stdout, helpText)
os.Exit(0)
}
}
}


@ -4,28 +4,45 @@ package main
 import (
 "context"
 "fmt"
+"logwisp/src/cmd/logwisp/commands"
+"logwisp/src/internal/config"
+"logwisp/src/internal/core"
+"logwisp/src/internal/version"
 "os"
 "os/exec"
 "os/signal"
 "strings"
 "syscall"
-"time"
-"logwisp/src/internal/config"
-"logwisp/src/internal/version"
 "github.com/lixenwraith/log"
 )
+// logger is the global logger instance for the application.
 var logger *log.Logger
+// main is the entry point for the LogWisp application.
 func main() {
+// Handle subcommands before any config loading
+// This prevents flag conflicts with lixenwraith/config
+router := commands.NewCommandRouter()
+handled, err := router.Route(os.Args)
+if err != nil {
+// Command execution error
+fmt.Fprintf(os.Stderr, "Error: %v\n", err)
+os.Exit(1)
+}
+if handled {
+// Command was successfully handled
+os.Exit(0)
+}
+// No subcommand, continue with main application
 // Emulates nohup
 signal.Ignore(syscall.SIGHUP)
-// Early check for help flag to avoid unnecessary config loading
-CheckAndDisplayHelp(os.Args[1:])
 // Load configuration with automatic CLI parsing
 cfg, err := config.Load(os.Args[1:])
 if err != nil {
@ -142,7 +159,7 @@ func main() {
 logger.Info("msg", "Shutdown signal received, starting graceful shutdown...")
 // Shutdown service with timeout
-shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), 10*time.Second)
+shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), core.ShutdownTimeout)
 defer shutdownCancel()
 done := make(chan struct{})
@ -153,8 +170,6 @@ func main() {
 select {
 case <-done:
-// Save configuration after graceful shutdown (no reload manager in static mode)
-saveConfigurationOnExit(cfg, nil, logger)
 logger.Info("msg", "Shutdown complete")
 case <-shutdownCtx.Done():
 logger.Error("msg", "Shutdown timeout exceeded - forcing exit")
@ -167,62 +182,16 @@ func main() {
 // Wait for context cancellation
 <-ctx.Done()
-// Save configuration before final shutdown, handled by reloadManager
-saveConfigurationOnExit(cfg, reloadManager, logger)
 // Shutdown is handled by ReloadManager.Shutdown() in defer
 logger.Info("msg", "Shutdown complete")
 }
+// shutdownLogger gracefully shuts down the global logger.
 func shutdownLogger() {
 if logger != nil {
-if err := logger.Shutdown(2 * time.Second); err != nil {
+if err := logger.Shutdown(core.LoggerShutdownTimeout); err != nil {
 // Best effort - can't log the shutdown error
 Error("Logger shutdown error: %v\n", err)
 }
 }
 }
-// saveConfigurationOnExit saves the configuration to file on exist
-func saveConfigurationOnExit(cfg *config.Config, reloadManager *ReloadManager, logger *log.Logger) {
-// Only save if explicitly enabled and we have a valid path
-if !cfg.ConfigSaveOnExit || cfg.ConfigFile == "" {
-return
-}
-// Create a context with timeout for save operation
-saveCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
-defer cancel()
-// Perform save in goroutine to respect timeout
-done := make(chan error, 1)
-go func() {
-var err error
-if reloadManager != nil && reloadManager.lcfg != nil {
-// Use existing lconfig instance from reload manager
-// This ensures we save through the same configuration system
-err = reloadManager.lcfg.Save(cfg.ConfigFile)
-} else {
-// Static mode: create temporary lconfig for saving
-err = cfg.SaveToFile(cfg.ConfigFile)
-}
-done <- err
-}()
-select {
-case err := <-done:
-if err != nil {
-logger.Error("msg", "Failed to save configuration on exit",
-"path", cfg.ConfigFile,
-"error", err)
-// Don't fail the exit on save error
-} else {
-logger.Info("msg", "Configuration saved successfully",
-"path", cfg.ConfigFile)
-}
-case <-saveCtx.Done():
-logger.Error("msg", "Configuration save timeout exceeded",
-"path", cfg.ConfigFile,
-"timeout", "5s")
-}
-}


@ -8,7 +8,7 @@ import (
 "sync"
 )
-// OutputHandler manages all application output respecting quiet mode
+// OutputHandler manages all application output, respecting the global quiet mode.
 type OutputHandler struct {
 quiet bool
 mu sync.RWMutex
@ -16,10 +16,10 @@ type OutputHandler struct {
 stderr io.Writer
 }
-// Global output handler instance
+// output is the global instance of the OutputHandler.
 var output *OutputHandler
-// InitOutputHandler initializes the global output handler
+// InitOutputHandler initializes the global output handler.
 func InitOutputHandler(quiet bool) {
 output = &OutputHandler{
 quiet: quiet,
@ -28,59 +28,21 @@ func InitOutputHandler(quiet bool) {
 }
 }
-// Print writes to stdout if not in quiet mode
-func (o *OutputHandler) Print(format string, args ...any) {
-o.mu.RLock()
-defer o.mu.RUnlock()
-if !o.quiet {
-fmt.Fprintf(o.stdout, format, args...)
-}
-}
-// Error writes to stderr if not in quiet mode
-func (o *OutputHandler) Error(format string, args ...any) {
-o.mu.RLock()
-defer o.mu.RUnlock()
-if !o.quiet {
-fmt.Fprintf(o.stderr, format, args...)
-}
-}
-// FatalError writes to stderr and exits (respects quiet mode)
-func (o *OutputHandler) FatalError(code int, format string, args ...any) {
-o.Error(format, args...)
-os.Exit(code)
-}
-// IsQuiet returns the current quiet mode status
-func (o *OutputHandler) IsQuiet() bool {
-o.mu.RLock()
-defer o.mu.RUnlock()
-return o.quiet
-}
-// SetQuiet updates quiet mode (useful for testing)
-func (o *OutputHandler) SetQuiet(quiet bool) {
-o.mu.Lock()
-defer o.mu.Unlock()
-o.quiet = quiet
-}
-// Helper functions for global output handler
+// Print writes to stdout.
 func Print(format string, args ...any) {
 if output != nil {
 output.Print(format, args...)
 }
 }
+// Error writes to stderr.
 func Error(format string, args ...any) {
 if output != nil {
 output.Error(format, args...)
 }
 }
+// FatalError writes to stderr and exits the application.
 func FatalError(code int, format string, args ...any) {
 if output != nil {
 output.FatalError(code, format, args...)
@ -90,3 +52,43 @@ func FatalError(code int, format string, args ...any) {
 os.Exit(code)
 }
 }
+// Print writes a formatted string to stdout if not in quiet mode.
+func (o *OutputHandler) Print(format string, args ...any) {
+o.mu.RLock()
+defer o.mu.RUnlock()
+if !o.quiet {
+fmt.Fprintf(o.stdout, format, args...)
+}
+}
+// Error writes a formatted string to stderr if not in quiet mode.
+func (o *OutputHandler) Error(format string, args ...any) {
+o.mu.RLock()
+defer o.mu.RUnlock()
+if !o.quiet {
+fmt.Fprintf(o.stderr, format, args...)
+}
+}
+// FatalError writes a formatted string to stderr and exits with the given code.
+func (o *OutputHandler) FatalError(code int, format string, args ...any) {
+o.Error(format, args...)
+os.Exit(code)
+}
+// IsQuiet returns the current quiet mode status.
+func (o *OutputHandler) IsQuiet() bool {
+o.mu.RLock()
+defer o.mu.RUnlock()
+return o.quiet
+}
+// SetQuiet updates the quiet mode status.
+func (o *OutputHandler) SetQuiet(quiet bool) {
+o.mu.Lock()
+defer o.mu.Unlock()
+o.quiet = quiet
+}


@ -4,8 +4,11 @@ package main
import (
"context"
"fmt"
+"logwisp/src/internal/core"
+"os"
"strings"
"sync"
+"syscall"
"time"
"logwisp/src/internal/config"
@ -15,7 +18,7 @@ import (
"github.com/lixenwraith/log"
)
-// ReloadManager handles configuration hot reload
+// ReloadManager handles the configuration hot-reloading functionality.
type ReloadManager struct {
configPath string
service *service.Service
@ -33,7 +36,7 @@ type ReloadManager struct {
statusReporterMu sync.Mutex
}
-// NewReloadManager creates a new reload manager
+// NewReloadManager creates a new reload manager.
func NewReloadManager(configPath string, initialCfg *config.Config, logger *log.Logger) *ReloadManager {
return &ReloadManager{
configPath: configPath,
@ -43,7 +46,7 @@ func NewReloadManager(configPath string, initialCfg *config.Config, logger *log.
}
}
-// Start begins watching for configuration changes
+// Start bootstraps the initial service and begins watching for configuration changes.
func (rm *ReloadManager) Start(ctx context.Context) error {
// Bootstrap initial service
svc, err := bootstrapService(ctx, rm.cfg)
@ -60,28 +63,21 @@ func (rm *ReloadManager) Start(ctx context.Context) error {
rm.startStatusReporter(ctx, svc)
}
-// Create lconfig instance for file watching, logwisp config is always TOML
-lcfg, err := lconfig.NewBuilder().
-WithFile(rm.configPath).
-WithTarget(rm.cfg).
-WithFileFormat("toml").
-WithSecurityOptions(lconfig.SecurityOptions{
-PreventPathTraversal: true,
-MaxFileSize: 10 * 1024 * 1024,
-}).
-Build()
-if err != nil {
-return fmt.Errorf("failed to create config watcher: %w", err)
-}
+// Use the same lconfig instance from initial load
+lcfg := config.GetConfigManager()
+if lcfg == nil {
+// Config manager not initialized - potential for config bypass
+return fmt.Errorf("config manager not initialized - cannot enable hot reload")
+}
rm.lcfg = lcfg
// Enable auto-update with custom options
watchOpts := lconfig.WatchOptions{
-PollInterval: time.Second,
-Debounce: 500 * time.Millisecond,
-ReloadTimeout: 30 * time.Second,
-VerifyPermissions: true, // TODO: Prevent malicious config replacement, to be implemented
+PollInterval: core.ReloadWatchPollInterval,
+Debounce: core.ReloadWatchDebounce,
+ReloadTimeout: core.ReloadWatchTimeout,
+VerifyPermissions: true,
}
lcfg.AutoUpdateWithOptions(watchOpts)
@ -95,217 +91,7 @@ func (rm *ReloadManager) Start(ctx context.Context) error {
return nil
}
-// watchLoop monitors configuration changes
+// Shutdown gracefully stops the reload manager and the currently active service.
func (rm *ReloadManager) watchLoop(ctx context.Context) {
defer rm.wg.Done()
changeCh := rm.lcfg.Watch()
for {
select {
case <-ctx.Done():
return
case <-rm.shutdownCh:
return
case changedPath := <-changeCh:
// Handle special notifications
switch changedPath {
case "file_deleted":
rm.logger.Error("msg", "Configuration file deleted",
"action", "keeping current configuration")
continue
case "permissions_changed":
// SECURITY: Config file permissions changed suspiciously
rm.logger.Error("msg", "Configuration file permissions changed",
"action", "reload blocked for security")
continue
case "reload_timeout":
rm.logger.Error("msg", "Configuration reload timed out",
"action", "keeping current configuration")
continue
default:
if strings.HasPrefix(changedPath, "reload_error:") {
rm.logger.Error("msg", "Configuration reload error",
"error", strings.TrimPrefix(changedPath, "reload_error:"),
"action", "keeping current configuration")
continue
}
}
// Trigger reload for any pipeline-related change
if rm.shouldReload(changedPath) {
rm.triggerReload(ctx)
}
}
}
}
// shouldReload determines if a config change requires service reload
func (rm *ReloadManager) shouldReload(path string) bool {
// Pipeline changes always require reload
if strings.HasPrefix(path, "pipelines.") || path == "pipelines" {
return true
}
// Logging changes don't require service reload
if strings.HasPrefix(path, "logging.") {
return false
}
// Status reporter changes
if path == "disable_status_reporter" {
return true
}
return false
}
// triggerReload performs the actual reload
func (rm *ReloadManager) triggerReload(ctx context.Context) {
// Prevent concurrent reloads
rm.reloadingMu.Lock()
if rm.isReloading {
rm.reloadingMu.Unlock()
rm.logger.Debug("msg", "Reload already in progress, skipping")
return
}
rm.isReloading = true
rm.reloadingMu.Unlock()
defer func() {
rm.reloadingMu.Lock()
rm.isReloading = false
rm.reloadingMu.Unlock()
}()
rm.logger.Info("msg", "Starting configuration hot reload")
// Create reload context with timeout
reloadCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
defer cancel()
if err := rm.performReload(reloadCtx); err != nil {
rm.logger.Error("msg", "Hot reload failed",
"error", err,
"action", "keeping current configuration and services")
return
}
rm.logger.Info("msg", "Configuration hot reload completed successfully")
}
// performReload executes the reload process
func (rm *ReloadManager) performReload(ctx context.Context) error {
// Get updated config from lconfig
updatedCfg, err := rm.lcfg.AsStruct()
if err != nil {
return fmt.Errorf("failed to get updated config: %w", err)
}
newCfg := updatedCfg.(*config.Config)
// Get current service snapshot
rm.mu.RLock()
oldService := rm.service
rm.mu.RUnlock()
// Try to bootstrap with new configuration
rm.logger.Debug("msg", "Bootstrapping new service with updated config")
newService, err := bootstrapService(ctx, newCfg)
if err != nil {
// Bootstrap failed - keep old services running
return fmt.Errorf("failed to bootstrap new service (old service still active): %w", err)
}
// Bootstrap succeeded - swap services atomically
rm.mu.Lock()
rm.service = newService
rm.cfg = newCfg
rm.mu.Unlock()
// Stop old status reporter and start new one
rm.restartStatusReporter(ctx, newService)
// Gracefully shutdown old services
// This happens after the swap to minimize downtime
go rm.shutdownOldServices(oldService)
return nil
}
// shutdownOldServices gracefully shuts down old services
func (rm *ReloadManager) shutdownOldServices(svc *service.Service) {
// Give connections time to drain
rm.logger.Debug("msg", "Draining connections from old services")
time.Sleep(2 * time.Second)
if svc != nil {
rm.logger.Info("msg", "Shutting down old service")
svc.Shutdown()
}
rm.logger.Debug("msg", "Old services shutdown complete")
}
// startStatusReporter starts a new status reporter
func (rm *ReloadManager) startStatusReporter(ctx context.Context, svc *service.Service) {
rm.statusReporterMu.Lock()
defer rm.statusReporterMu.Unlock()
// Create cancellable context for status reporter
reporterCtx, cancel := context.WithCancel(ctx)
rm.statusReporterCancel = cancel
go statusReporter(svc, reporterCtx)
rm.logger.Debug("msg", "Started status reporter")
}
// restartStatusReporter stops old and starts new status reporter
func (rm *ReloadManager) restartStatusReporter(ctx context.Context, newService *service.Service) {
if rm.cfg.DisableStatusReporter {
// Just stop the old one if disabled
rm.stopStatusReporter()
return
}
rm.statusReporterMu.Lock()
defer rm.statusReporterMu.Unlock()
// Stop old reporter
if rm.statusReporterCancel != nil {
rm.statusReporterCancel()
rm.logger.Debug("msg", "Stopped old status reporter")
}
// Start new reporter
reporterCtx, cancel := context.WithCancel(ctx)
rm.statusReporterCancel = cancel
go statusReporter(newService, reporterCtx)
rm.logger.Debug("msg", "Started new status reporter")
}
// stopStatusReporter stops the status reporter
func (rm *ReloadManager) stopStatusReporter() {
rm.statusReporterMu.Lock()
defer rm.statusReporterMu.Unlock()
if rm.statusReporterCancel != nil {
rm.statusReporterCancel()
rm.statusReporterCancel = nil
rm.logger.Debug("msg", "Stopped status reporter")
}
}
// SaveConfig is a wrapper to save the config
func (rm *ReloadManager) SaveConfig(path string) error {
if rm.lcfg == nil {
return fmt.Errorf("no lconfig instance available")
}
return rm.lcfg.Save(path)
}
-// Shutdown stops the reload manager
func (rm *ReloadManager) Shutdown() {
rm.logger.Info("msg", "Shutting down reload manager")
@ -332,9 +118,255 @@ func (rm *ReloadManager) Shutdown() {
}
}
-// GetService returns the current service (thread-safe)
+// GetService returns the currently active service instance in a thread-safe manner.
func (rm *ReloadManager) GetService() *service.Service {
rm.mu.RLock()
defer rm.mu.RUnlock()
return rm.service
}
// triggerReload initiates the configuration reload process.
func (rm *ReloadManager) triggerReload(ctx context.Context) {
// Prevent concurrent reloads
rm.reloadingMu.Lock()
if rm.isReloading {
rm.reloadingMu.Unlock()
rm.logger.Debug("msg", "Reload already in progress, skipping")
return
}
rm.isReloading = true
rm.reloadingMu.Unlock()
defer func() {
rm.reloadingMu.Lock()
rm.isReloading = false
rm.reloadingMu.Unlock()
}()
rm.logger.Info("msg", "Starting configuration hot reload")
// Create reload context with timeout
reloadCtx, cancel := context.WithTimeout(ctx, core.ConfigReloadTimeout)
defer cancel()
if err := rm.performReload(reloadCtx); err != nil {
rm.logger.Error("msg", "Hot reload failed",
"error", err,
"action", "keeping current configuration and services")
return
}
rm.logger.Info("msg", "Configuration hot reload completed successfully")
}
// watchLoop is the main goroutine that monitors for configuration file changes.
func (rm *ReloadManager) watchLoop(ctx context.Context) {
defer rm.wg.Done()
changeCh := rm.lcfg.Watch()
for {
select {
case <-ctx.Done():
return
case <-rm.shutdownCh:
return
case changedPath := <-changeCh:
// Handle special notifications
switch changedPath {
case "file_deleted":
rm.logger.Error("msg", "Configuration file deleted",
"action", "keeping current configuration")
continue
case "permissions_changed":
// Config file permissions changed suspiciously; overlaps with the explicit permission check performed before reload
rm.logger.Error("msg", "Configuration file permissions changed",
"action", "reload blocked for security")
continue
case "reload_timeout":
rm.logger.Error("msg", "Configuration reload timed out",
"action", "keeping current configuration")
continue
default:
if strings.HasPrefix(changedPath, "reload_error:") {
rm.logger.Error("msg", "Configuration reload error",
"error", strings.TrimPrefix(changedPath, "reload_error:"),
"action", "keeping current configuration")
continue
}
}
// Verify file permissions before reload
if err := verifyFilePermissions(rm.configPath); err != nil {
rm.logger.Error("msg", "Configuration file permission check failed",
"path", rm.configPath,
"error", err,
"action", "reload blocked for security")
continue
}
// Trigger reload for any pipeline-related change
if rm.shouldReload(changedPath) {
rm.triggerReload(ctx)
}
}
}
}
// performReload executes the steps to validate and apply a new configuration.
func (rm *ReloadManager) performReload(ctx context.Context) error {
// Get updated config from lconfig
updatedCfg, err := rm.lcfg.AsStruct()
if err != nil {
return fmt.Errorf("failed to get updated config: %w", err)
}
// AsStruct returns the target pointer, not a new instance
newCfg := updatedCfg.(*config.Config)
// Validate the new config
if err := config.ValidateConfig(newCfg); err != nil {
return fmt.Errorf("updated config validation failed: %w", err)
}
// Get current service snapshot
rm.mu.RLock()
oldService := rm.service
rm.mu.RUnlock()
// Try to bootstrap with new configuration
rm.logger.Debug("msg", "Bootstrapping new service with updated config")
newService, err := bootstrapService(ctx, newCfg)
if err != nil {
// Bootstrap failed - keep old services running
return fmt.Errorf("failed to bootstrap new service (old service still active): %w", err)
}
// Bootstrap succeeded - swap services atomically
rm.mu.Lock()
rm.service = newService
rm.cfg = newCfg
rm.mu.Unlock()
// Stop old status reporter and start new one
rm.restartStatusReporter(ctx, newService)
// Gracefully shutdown old services after swap to minimize downtime
go rm.shutdownOldServices(oldService)
return nil
}
// shouldReload determines if a given configuration change requires a full service reload.
func (rm *ReloadManager) shouldReload(path string) bool {
// Pipeline changes always require reload
if strings.HasPrefix(path, "pipelines.") || path == "pipelines" {
return true
}
// Logging changes don't require service reload
if strings.HasPrefix(path, "logging.") {
return false
}
// Status reporter changes
if path == "disable_status_reporter" {
return true
}
return false
}
// verifyFilePermissions checks the ownership and permissions of the config file for security.
func verifyFilePermissions(path string) error {
info, err := os.Stat(path)
if err != nil {
return fmt.Errorf("failed to stat config file: %w", err)
}
// Extract file mode and system stats
mode := info.Mode()
stat, ok := info.Sys().(*syscall.Stat_t)
if !ok {
return fmt.Errorf("unable to get file ownership info")
}
// Check ownership - must be current user or root
currentUID := uint32(os.Getuid())
if stat.Uid != currentUID && stat.Uid != 0 {
return fmt.Errorf("config file owned by uid %d, expected %d or 0", stat.Uid, currentUID)
}
// Check permissions - must not be writable by group or other
perm := mode.Perm()
if perm&0022 != 0 {
// Group or other has write permission
return fmt.Errorf("insecure permissions %04o - file must not be writable by group/other", perm)
}
return nil
}
// shutdownOldServices gracefully shuts down the previous service instance after a successful reload.
func (rm *ReloadManager) shutdownOldServices(svc *service.Service) {
// Give connections time to drain
rm.logger.Debug("msg", "Draining connections from old services")
time.Sleep(2 * time.Second)
if svc != nil {
rm.logger.Info("msg", "Shutting down old service")
svc.Shutdown()
}
rm.logger.Debug("msg", "Old services shutdown complete")
}
// startStatusReporter starts a new status reporter for the given service.
func (rm *ReloadManager) startStatusReporter(ctx context.Context, svc *service.Service) {
rm.statusReporterMu.Lock()
defer rm.statusReporterMu.Unlock()
// Create cancellable context for status reporter
reporterCtx, cancel := context.WithCancel(ctx)
rm.statusReporterCancel = cancel
go statusReporter(svc, reporterCtx)
rm.logger.Debug("msg", "Started status reporter")
}
// stopStatusReporter stops the currently running status reporter.
func (rm *ReloadManager) stopStatusReporter() {
rm.statusReporterMu.Lock()
defer rm.statusReporterMu.Unlock()
if rm.statusReporterCancel != nil {
rm.statusReporterCancel()
rm.statusReporterCancel = nil
rm.logger.Debug("msg", "Stopped status reporter")
}
}
// restartStatusReporter stops the old status reporter and starts a new one.
func (rm *ReloadManager) restartStatusReporter(ctx context.Context, newService *service.Service) {
if rm.cfg.DisableStatusReporter {
// Just stop the old one if disabled
rm.stopStatusReporter()
return
}
rm.statusReporterMu.Lock()
defer rm.statusReporterMu.Unlock()
// Stop old reporter
if rm.statusReporterCancel != nil {
rm.statusReporterCancel()
rm.logger.Debug("msg", "Stopped old status reporter")
}
// Start new reporter
reporterCtx, cancel := context.WithCancel(ctx)
rm.statusReporterCancel = cancel
go statusReporter(newService, reporterCtx)
rm.logger.Debug("msg", "Started new status reporter")
}


@ -10,14 +10,14 @@ import (
"github.com/lixenwraith/log"
)
-// SignalHandler manages OS signals
+// SignalHandler manages OS signals for shutdown and configuration reloads.
type SignalHandler struct {
reloadManager *ReloadManager
logger *log.Logger
sigChan chan os.Signal
}
-// NewSignalHandler creates a signal handler
+// NewSignalHandler creates a new signal handler.
func NewSignalHandler(rm *ReloadManager, logger *log.Logger) *SignalHandler {
sh := &SignalHandler{
reloadManager: rm,
@ -36,7 +36,7 @@ func NewSignalHandler(rm *ReloadManager, logger *log.Logger) *SignalHandler {
return sh
}
-// Handle processes signals
+// Handle blocks and processes incoming OS signals.
func (sh *SignalHandler) Handle(ctx context.Context) os.Signal {
for {
select {
@ -58,7 +58,7 @@ func (sh *SignalHandler) Handle(ctx context.Context) os.Signal {
}
}
-// Stop cleans up signal handling
+// Stop cleans up the signal handling channel.
func (sh *SignalHandler) Stop() {
signal.Stop(sh.sigChan)
close(sh.sigChan)


@ -10,7 +10,7 @@ import (
"logwisp/src/internal/service"
)
-// statusReporter periodically logs service status
+// statusReporter is a goroutine that periodically logs the health and statistics of the service.
func statusReporter(service *service.Service, ctx context.Context) {
ticker := time.NewTicker(30 * time.Second)
defer ticker.Stop()
@ -60,7 +60,175 @@ func statusReporter(service *service.Service, ctx context.Context) {
}
}
-// logPipelineStatus logs the status of an individual pipeline
+// displayPipelineEndpoints logs the configured source and sink endpoints for a pipeline at startup.
func displayPipelineEndpoints(cfg config.PipelineConfig) {
// Display sink endpoints
for i, sinkCfg := range cfg.Sinks {
switch sinkCfg.Type {
case "tcp":
if sinkCfg.TCP != nil {
host := "0.0.0.0"
if sinkCfg.TCP.Host != "" {
host = sinkCfg.TCP.Host
}
logger.Info("msg", "TCP endpoint configured",
"component", "main",
"pipeline", cfg.Name,
"sink_index", i,
"listen", fmt.Sprintf("%s:%d", host, sinkCfg.TCP.Port))
// Display net limit info if configured
if sinkCfg.TCP.ACL != nil && sinkCfg.TCP.ACL.Enabled {
logger.Info("msg", "TCP net limiting enabled",
"pipeline", cfg.Name,
"sink_index", i,
"requests_per_second", sinkCfg.TCP.ACL.RequestsPerSecond,
"burst_size", sinkCfg.TCP.ACL.BurstSize)
}
}
case "http":
if sinkCfg.HTTP != nil {
host := "0.0.0.0"
if sinkCfg.HTTP.Host != "" {
host = sinkCfg.HTTP.Host
}
streamPath := "/stream"
statusPath := "/status"
if sinkCfg.HTTP.StreamPath != "" {
streamPath = sinkCfg.HTTP.StreamPath
}
if sinkCfg.HTTP.StatusPath != "" {
statusPath = sinkCfg.HTTP.StatusPath
}
logger.Info("msg", "HTTP endpoints configured",
"pipeline", cfg.Name,
"sink_index", i,
"listen", fmt.Sprintf("%s:%d", host, sinkCfg.HTTP.Port),
"stream_url", fmt.Sprintf("http://%s:%d%s", host, sinkCfg.HTTP.Port, streamPath),
"status_url", fmt.Sprintf("http://%s:%d%s", host, sinkCfg.HTTP.Port, statusPath))
// Display net limit info if configured
if sinkCfg.HTTP.ACL != nil && sinkCfg.HTTP.ACL.Enabled {
logger.Info("msg", "HTTP net limiting enabled",
"pipeline", cfg.Name,
"sink_index", i,
"requests_per_second", sinkCfg.HTTP.ACL.RequestsPerSecond,
"burst_size", sinkCfg.HTTP.ACL.BurstSize)
}
}
case "file":
if sinkCfg.File != nil {
logger.Info("msg", "File sink configured",
"pipeline", cfg.Name,
"sink_index", i,
"directory", sinkCfg.File.Directory,
"name", sinkCfg.File.Name)
}
case "console":
if sinkCfg.Console != nil {
logger.Info("msg", "Console sink configured",
"pipeline", cfg.Name,
"sink_index", i,
"target", sinkCfg.Console.Target)
}
}
}
// Display source endpoints with host support
for i, sourceCfg := range cfg.Sources {
switch sourceCfg.Type {
case "tcp":
if sourceCfg.TCP != nil {
host := "0.0.0.0"
if sourceCfg.TCP.Host != "" {
host = sourceCfg.TCP.Host
}
displayHost := host
if host == "0.0.0.0" {
displayHost = "localhost"
}
logger.Info("msg", "TCP source configured",
"pipeline", cfg.Name,
"source_index", i,
"listen", fmt.Sprintf("%s:%d", host, sourceCfg.TCP.Port),
"endpoint", fmt.Sprintf("%s:%d", displayHost, sourceCfg.TCP.Port))
// Display net limit info if configured
if sourceCfg.TCP.ACL != nil && sourceCfg.TCP.ACL.Enabled {
logger.Info("msg", "TCP net limiting enabled",
"pipeline", cfg.Name,
"source_index", i,
"requests_per_second", sourceCfg.TCP.ACL.RequestsPerSecond,
"burst_size", sourceCfg.TCP.ACL.BurstSize)
}
}
case "http":
if sourceCfg.HTTP != nil {
host := "0.0.0.0"
if sourceCfg.HTTP.Host != "" {
host = sourceCfg.HTTP.Host
}
displayHost := host
if host == "0.0.0.0" {
displayHost = "localhost"
}
ingestPath := "/ingest"
if sourceCfg.HTTP.IngestPath != "" {
ingestPath = sourceCfg.HTTP.IngestPath
}
logger.Info("msg", "HTTP source configured",
"pipeline", cfg.Name,
"source_index", i,
"listen", fmt.Sprintf("%s:%d", host, sourceCfg.HTTP.Port),
"ingest_url", fmt.Sprintf("http://%s:%d%s", displayHost, sourceCfg.HTTP.Port, ingestPath))
// Display net limit info if configured
if sourceCfg.HTTP.ACL != nil && sourceCfg.HTTP.ACL.Enabled {
logger.Info("msg", "HTTP net limiting enabled",
"pipeline", cfg.Name,
"source_index", i,
"requests_per_second", sourceCfg.HTTP.ACL.RequestsPerSecond,
"burst_size", sourceCfg.HTTP.ACL.BurstSize)
}
}
case "file":
if sourceCfg.File != nil {
logger.Info("msg", "File source configured",
"pipeline", cfg.Name,
"source_index", i,
"path", sourceCfg.File.Directory,
"pattern", sourceCfg.File.Pattern)
}
case "console":
logger.Info("msg", "Console source configured",
"pipeline", cfg.Name,
"source_index", i)
}
}
// Display filter information
if len(cfg.Filters) > 0 {
logger.Info("msg", "Filters configured",
"pipeline", cfg.Name,
"filter_count", len(cfg.Filters))
}
}
// logPipelineStatus logs the detailed status and statistics of an individual pipeline.
func logPipelineStatus(name string, stats map[string]any) {
statusFields := []any{
"msg", "Pipeline status",
@ -107,91 +275,3 @@ func logPipelineStatus(name string, stats map[string]any) {
logger.Debug(statusFields...)
}
// displayPipelineEndpoints logs the configured endpoints for a pipeline
func displayPipelineEndpoints(cfg config.PipelineConfig) {
// Display sink endpoints
for i, sinkCfg := range cfg.Sinks {
switch sinkCfg.Type {
case "tcp":
if port, ok := sinkCfg.Options["port"].(int64); ok {
logger.Info("msg", "TCP endpoint configured",
"component", "main",
"pipeline", cfg.Name,
"sink_index", i,
"port", port)
// Display net limit info if configured
if rl, ok := sinkCfg.Options["net_limit"].(map[string]any); ok {
if enabled, ok := rl["enabled"].(bool); ok && enabled {
logger.Info("msg", "TCP net limiting enabled",
"pipeline", cfg.Name,
"sink_index", i,
"requests_per_second", rl["requests_per_second"],
"burst_size", rl["burst_size"])
}
}
}
case "http":
if port, ok := sinkCfg.Options["port"].(int64); ok {
streamPath := "/transport"
statusPath := "/status"
if path, ok := sinkCfg.Options["stream_path"].(string); ok {
streamPath = path
}
if path, ok := sinkCfg.Options["status_path"].(string); ok {
statusPath = path
}
logger.Info("msg", "HTTP endpoints configured",
"pipeline", cfg.Name,
"sink_index", i,
"stream_url", fmt.Sprintf("http://localhost:%d%s", port, streamPath),
"status_url", fmt.Sprintf("http://localhost:%d%s", port, statusPath))
// Display net limit info if configured
if rl, ok := sinkCfg.Options["net_limit"].(map[string]any); ok {
if enabled, ok := rl["enabled"].(bool); ok && enabled {
logger.Info("msg", "HTTP net limiting enabled",
"pipeline", cfg.Name,
"sink_index", i,
"requests_per_second", rl["requests_per_second"],
"burst_size", rl["burst_size"],
"limit_by", rl["limit_by"])
}
}
}
case "file":
if dir, ok := sinkCfg.Options["directory"].(string); ok {
name, _ := sinkCfg.Options["name"].(string)
logger.Info("msg", "File sink configured",
"pipeline", cfg.Name,
"sink_index", i,
"directory", dir,
"name", name)
}
case "stdout", "stderr":
logger.Info("msg", "Console sink configured",
"pipeline", cfg.Name,
"sink_index", i,
"type", sinkCfg.Type)
}
}
// Display authentication information
if cfg.Auth != nil && cfg.Auth.Type != "none" {
logger.Info("msg", "Authentication enabled",
"pipeline", cfg.Name,
"auth_type", cfg.Auth.Type)
}
// Display filter information
if len(cfg.Filters) > 0 {
logger.Info("msg", "Filters configured",
"pipeline", cfg.Name,
"filter_count", len(cfg.Filters))
}
}


@ -1,652 +0,0 @@
// FILE: logwisp/src/internal/auth/authenticator.go
package auth
import (
"bufio"
"crypto/rand"
"encoding/base64"
"fmt"
"net"
"os"
"strings"
"sync"
"time"
"logwisp/src/internal/config"
"github.com/golang-jwt/jwt/v5"
"github.com/lixenwraith/log"
"golang.org/x/crypto/bcrypt"
"golang.org/x/time/rate"
)
// Prevent unbounded map growth
const maxAuthTrackedIPs = 10000
// Authenticator handles all authentication methods for a pipeline
type Authenticator struct {
config *config.AuthConfig
logger *log.Logger
basicUsers map[string]string // username -> password hash
bearerTokens map[string]bool // token -> valid
jwtParser *jwt.Parser
jwtKeyFunc jwt.Keyfunc
mu sync.RWMutex
// Session tracking
sessions map[string]*Session
sessionMu sync.RWMutex
// Brute-force protection
ipAuthAttempts map[string]*ipAuthState
authMu sync.RWMutex
}
// ipAuthState tracks per-IP authentication attempts for brute-force protection
type ipAuthState struct {
limiter *rate.Limiter
failCount int
lastAttempt time.Time
blockedUntil time.Time
}
// Session represents an authenticated connection
type Session struct {
ID string
Username string
Method string // basic, bearer, jwt, mtls
RemoteAddr string
CreatedAt time.Time
LastActivity time.Time
Metadata map[string]any
}
// New creates a new authenticator from config
func New(cfg *config.AuthConfig, logger *log.Logger) (*Authenticator, error) {
if cfg == nil || cfg.Type == "none" {
return nil, nil
}
a := &Authenticator{
config: cfg,
logger: logger,
basicUsers: make(map[string]string),
bearerTokens: make(map[string]bool),
sessions: make(map[string]*Session),
ipAuthAttempts: make(map[string]*ipAuthState),
}
// Initialize Basic Auth users
if cfg.Type == "basic" && cfg.BasicAuth != nil {
for _, user := range cfg.BasicAuth.Users {
a.basicUsers[user.Username] = user.PasswordHash
}
// Load users from file if specified
if cfg.BasicAuth.UsersFile != "" {
if err := a.loadUsersFile(cfg.BasicAuth.UsersFile); err != nil {
return nil, fmt.Errorf("failed to load users file: %w", err)
}
}
}
// Initialize Bearer tokens
if cfg.Type == "bearer" && cfg.BearerAuth != nil {
for _, token := range cfg.BearerAuth.Tokens {
a.bearerTokens[token] = true
}
// Setup JWT validation if configured
if cfg.BearerAuth.JWT != nil {
a.jwtParser = jwt.NewParser(
jwt.WithValidMethods([]string{"HS256", "HS384", "HS512", "RS256", "RS384", "RS512", "ES256", "ES384", "ES512"}),
jwt.WithLeeway(5*time.Second),
jwt.WithExpirationRequired(),
)
// Setup key function
if cfg.BearerAuth.JWT.SigningKey != "" {
// Static key
key := []byte(cfg.BearerAuth.JWT.SigningKey)
a.jwtKeyFunc = func(token *jwt.Token) (interface{}, error) {
return key, nil
}
} else if cfg.BearerAuth.JWT.JWKSURL != "" {
// JWKS support would require additional implementation
// ☢ SECURITY: JWKS rotation not implemented - tokens won't refresh keys
return nil, fmt.Errorf("JWKS support not yet implemented")
}
}
}
// Start session cleanup
go a.sessionCleanup()
// Start auth attempt cleanup
go a.authAttemptCleanup()
logger.Info("msg", "Authenticator initialized",
"component", "auth",
"type", cfg.Type)
return a, nil
}
// Check and enforce rate limits
func (a *Authenticator) checkRateLimit(remoteAddr string) error {
ip, _, err := net.SplitHostPort(remoteAddr)
if err != nil {
ip = remoteAddr // Fallback for malformed addresses
}
a.authMu.Lock()
defer a.authMu.Unlock()
state, exists := a.ipAuthAttempts[ip]
now := time.Now()
if !exists {
// Check map size limit before creating new entry
if len(a.ipAuthAttempts) >= maxAuthTrackedIPs {
// Evict an old entry using simplified LRU
// Sample 20 random entries and evict the oldest
const sampleSize = 20
var oldestIP string
oldestTime := now
// Build sample
sampled := 0
for sampledIP, sampledState := range a.ipAuthAttempts {
if sampledState.lastAttempt.Before(oldestTime) {
oldestIP = sampledIP
oldestTime = sampledState.lastAttempt
}
sampled++
if sampled >= sampleSize {
break
}
}
// Evict the oldest from our sample
if oldestIP != "" {
delete(a.ipAuthAttempts, oldestIP)
a.logger.Debug("msg", "Evicted old auth attempt state",
"component", "auth",
"evicted_ip", oldestIP,
"last_seen", oldestTime)
}
}
// Create new state for this IP
// 5 attempts per minute, burst of 3
state = &ipAuthState{
limiter: rate.NewLimiter(rate.Every(12*time.Second), 3),
lastAttempt: now,
}
a.ipAuthAttempts[ip] = state
}
// Check if IP is temporarily blocked
if now.Before(state.blockedUntil) {
remaining := state.blockedUntil.Sub(now)
a.logger.Warn("msg", "IP temporarily blocked",
"component", "auth",
"ip", ip,
"remaining", remaining)
// Sleep to slow down even blocked attempts
time.Sleep(2 * time.Second)
return fmt.Errorf("temporarily blocked, try again in %v", remaining.Round(time.Second))
}
// Check rate limit
if !state.limiter.Allow() {
state.failCount++
// Only set new blockedUntil if not already blocked
// This prevents indefinite block extension
if state.blockedUntil.IsZero() || now.After(state.blockedUntil) {
// Progressive blocking: 2^failCount minutes
blockMinutes := 1 << min(state.failCount, 6) // Cap at 64 minutes
state.blockedUntil = now.Add(time.Duration(blockMinutes) * time.Minute)
a.logger.Warn("msg", "Rate limit exceeded, blocking IP",
"component", "auth",
"ip", ip,
"fail_count", state.failCount,
"block_duration", time.Duration(blockMinutes)*time.Minute)
}
return fmt.Errorf("rate limit exceeded")
}
state.lastAttempt = now
return nil
}
// Record failed attempt
func (a *Authenticator) recordFailure(remoteAddr string) {
ip, _, _ := net.SplitHostPort(remoteAddr)
if ip == "" {
ip = remoteAddr
}
a.authMu.Lock()
defer a.authMu.Unlock()
if state, exists := a.ipAuthAttempts[ip]; exists {
state.failCount++
state.lastAttempt = time.Now()
}
}
// Reset failure count on success
func (a *Authenticator) recordSuccess(remoteAddr string) {
ip, _, _ := net.SplitHostPort(remoteAddr)
if ip == "" {
ip = remoteAddr
}
a.authMu.Lock()
defer a.authMu.Unlock()
if state, exists := a.ipAuthAttempts[ip]; exists {
state.failCount = 0
state.blockedUntil = time.Time{}
}
}
// AuthenticateHTTP handles HTTP authentication headers
func (a *Authenticator) AuthenticateHTTP(authHeader, remoteAddr string) (*Session, error) {
if a == nil || a.config.Type == "none" {
return &Session{
ID: generateSessionID(),
Method: "none",
RemoteAddr: remoteAddr,
CreatedAt: time.Now(),
}, nil
}
// Check rate limit
if err := a.checkRateLimit(remoteAddr); err != nil {
return nil, err
}
var session *Session
var err error
switch a.config.Type {
case "basic":
session, err = a.authenticateBasic(authHeader, remoteAddr)
case "bearer":
session, err = a.authenticateBearer(authHeader, remoteAddr)
default:
err = fmt.Errorf("unsupported auth type: %s", a.config.Type)
}
if err != nil {
a.recordFailure(remoteAddr)
time.Sleep(500 * time.Millisecond)
return nil, err
}
a.recordSuccess(remoteAddr)
return session, nil
}
// AuthenticateTCP handles TCP connection authentication
func (a *Authenticator) AuthenticateTCP(method, credentials, remoteAddr string) (*Session, error) {
if a == nil || a.config.Type == "none" {
return &Session{
ID: generateSessionID(),
Method: "none",
RemoteAddr: remoteAddr,
CreatedAt: time.Now(),
}, nil
}
// Check rate limit first
if err := a.checkRateLimit(remoteAddr); err != nil {
return nil, err
}
var session *Session
var err error
// TCP auth protocol: AUTH <method> <credentials>
switch strings.ToLower(method) {
case "token":
if a.config.Type != "bearer" {
err = fmt.Errorf("token auth not configured")
} else {
session, err = a.validateToken(credentials, remoteAddr)
}
case "basic":
if a.config.Type != "basic" {
err = fmt.Errorf("basic auth not configured")
} else {
// Expect base64(username:password)
decoded, decErr := base64.StdEncoding.DecodeString(credentials)
if decErr != nil {
err = fmt.Errorf("invalid credentials encoding")
} else {
parts := strings.SplitN(string(decoded), ":", 2)
if len(parts) != 2 {
err = fmt.Errorf("invalid credentials format")
} else {
session, err = a.validateBasicAuth(parts[0], parts[1], remoteAddr)
}
}
}
default:
err = fmt.Errorf("unsupported auth method: %s", method)
}
if err != nil {
a.recordFailure(remoteAddr)
// Add delay on failure
time.Sleep(500 * time.Millisecond)
return nil, err
}
a.recordSuccess(remoteAddr)
return session, nil
}
func (a *Authenticator) authenticateBasic(authHeader, remoteAddr string) (*Session, error) {
if !strings.HasPrefix(authHeader, "Basic ") {
return nil, fmt.Errorf("invalid basic auth header")
}
payload, err := base64.StdEncoding.DecodeString(authHeader[6:])
if err != nil {
return nil, fmt.Errorf("invalid base64 encoding")
}
parts := strings.SplitN(string(payload), ":", 2)
if len(parts) != 2 {
return nil, fmt.Errorf("invalid credentials format")
}
return a.validateBasicAuth(parts[0], parts[1], remoteAddr)
}
func (a *Authenticator) validateBasicAuth(username, password, remoteAddr string) (*Session, error) {
a.mu.RLock()
expectedHash, exists := a.basicUsers[username]
a.mu.RUnlock()
	if !exists {
		// ☢ SECURITY: Run bcrypt against a valid dummy hash so that unknown
		// usernames cost the same as known ones (timing-attack mitigation).
		// The dummy must be a well-formed bcrypt hash; on a malformed string
		// CompareHashAndPassword fails fast and the timing difference remains.
		bcrypt.CompareHashAndPassword([]byte("$2a$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy"), []byte(password))
		return nil, fmt.Errorf("invalid credentials")
	}
if err := bcrypt.CompareHashAndPassword([]byte(expectedHash), []byte(password)); err != nil {
return nil, fmt.Errorf("invalid credentials")
}
session := &Session{
ID: generateSessionID(),
Username: username,
Method: "basic",
RemoteAddr: remoteAddr,
CreatedAt: time.Now(),
LastActivity: time.Now(),
}
a.storeSession(session)
return session, nil
}
func (a *Authenticator) authenticateBearer(authHeader, remoteAddr string) (*Session, error) {
if !strings.HasPrefix(authHeader, "Bearer ") {
return nil, fmt.Errorf("invalid bearer auth header")
}
token := authHeader[7:]
return a.validateToken(token, remoteAddr)
}
func (a *Authenticator) validateToken(token, remoteAddr string) (*Session, error) {
// Check static tokens first
a.mu.RLock()
isStatic := a.bearerTokens[token]
a.mu.RUnlock()
if isStatic {
session := &Session{
ID: generateSessionID(),
Method: "bearer",
RemoteAddr: remoteAddr,
CreatedAt: time.Now(),
LastActivity: time.Now(),
Metadata: map[string]any{"token_type": "static"},
}
a.storeSession(session)
return session, nil
}
// Try JWT validation if configured
if a.jwtParser != nil && a.jwtKeyFunc != nil {
claims := jwt.MapClaims{}
parsedToken, err := a.jwtParser.ParseWithClaims(token, claims, a.jwtKeyFunc)
if err != nil {
return nil, fmt.Errorf("JWT validation failed: %w", err)
}
if !parsedToken.Valid {
return nil, fmt.Errorf("invalid JWT token")
}
// Explicit expiration check
if exp, ok := claims["exp"].(float64); ok {
if time.Now().Unix() > int64(exp) {
return nil, fmt.Errorf("token expired")
}
} else {
// Reject tokens without expiration
return nil, fmt.Errorf("token missing expiration claim")
}
// Check not-before claim
if nbf, ok := claims["nbf"].(float64); ok {
if time.Now().Unix() < int64(nbf) {
return nil, fmt.Errorf("token not yet valid")
}
}
// Check issuer if configured
if a.config.BearerAuth.JWT.Issuer != "" {
if iss, ok := claims["iss"].(string); !ok || iss != a.config.BearerAuth.JWT.Issuer {
return nil, fmt.Errorf("invalid token issuer")
}
}
// Check audience if configured
if a.config.BearerAuth.JWT.Audience != "" {
// Handle both string and []string audience formats
audValid := false
switch aud := claims["aud"].(type) {
case string:
audValid = aud == a.config.BearerAuth.JWT.Audience
		case []any:
for _, aa := range aud {
if audStr, ok := aa.(string); ok && audStr == a.config.BearerAuth.JWT.Audience {
audValid = true
break
}
}
}
if !audValid {
return nil, fmt.Errorf("invalid token audience")
}
}
username := ""
if sub, ok := claims["sub"].(string); ok {
username = sub
}
session := &Session{
ID: generateSessionID(),
Username: username,
Method: "jwt",
RemoteAddr: remoteAddr,
CreatedAt: time.Now(),
LastActivity: time.Now(),
Metadata: map[string]any{"claims": claims},
}
a.storeSession(session)
return session, nil
}
return nil, fmt.Errorf("invalid token")
}
func (a *Authenticator) storeSession(session *Session) {
a.sessionMu.Lock()
a.sessions[session.ID] = session
a.sessionMu.Unlock()
a.logger.Info("msg", "Session created",
"component", "auth",
"session_id", session.ID,
"username", session.Username,
"method", session.Method,
"remote_addr", session.RemoteAddr)
}
func (a *Authenticator) sessionCleanup() {
ticker := time.NewTicker(5 * time.Minute)
defer ticker.Stop()
for range ticker.C {
a.sessionMu.Lock()
now := time.Now()
for id, session := range a.sessions {
if now.Sub(session.LastActivity) > 30*time.Minute {
delete(a.sessions, id)
a.logger.Debug("msg", "Session expired",
"component", "auth",
"session_id", id)
}
}
a.sessionMu.Unlock()
}
}
// Cleanup old auth attempts
func (a *Authenticator) authAttemptCleanup() {
ticker := time.NewTicker(5 * time.Minute)
defer ticker.Stop()
for range ticker.C {
a.authMu.Lock()
now := time.Now()
for ip, state := range a.ipAuthAttempts {
// Remove entries older than 1 hour with no recent activity
if now.Sub(state.lastAttempt) > time.Hour {
delete(a.ipAuthAttempts, ip)
a.logger.Debug("msg", "Cleaned up auth attempt state",
"component", "auth",
"ip", ip)
}
}
a.authMu.Unlock()
}
}
func (a *Authenticator) loadUsersFile(path string) error {
file, err := os.Open(path)
if err != nil {
return fmt.Errorf("could not open users file: %w", err)
}
defer file.Close()
scanner := bufio.NewScanner(file)
lineNumber := 0
for scanner.Scan() {
lineNumber++
line := strings.TrimSpace(scanner.Text())
if line == "" || strings.HasPrefix(line, "#") {
continue // Skip empty lines and comments
}
parts := strings.SplitN(line, ":", 2)
if len(parts) != 2 {
a.logger.Warn("msg", "Skipping malformed line in users file",
"component", "auth",
"path", path,
"line_number", lineNumber)
continue
}
username, hash := strings.TrimSpace(parts[0]), strings.TrimSpace(parts[1])
if username != "" && hash != "" {
// File-based users can overwrite inline users if names conflict
a.basicUsers[username] = hash
}
}
if err := scanner.Err(); err != nil {
return fmt.Errorf("error reading users file: %w", err)
}
a.logger.Info("msg", "Loaded users from file",
"component", "auth",
"path", path,
"user_count", len(a.basicUsers))
return nil
}
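The parser above implies a simple users-file layout: one `username:bcrypt-hash` entry per line, with `#` comments and blank lines ignored, and later entries overriding inline users on name conflict. A hypothetical example (path and hash are illustrative):

```
# /etc/logwisp/users — file entries override inline [basic_auth] users
alice:$2a$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy
# bob is disabled for now
#bob:$2a$10$...
```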
func generateSessionID() string {
b := make([]byte, 32)
if _, err := rand.Read(b); err != nil {
// Fallback to a less secure method if crypto/rand fails
return fmt.Sprintf("fallback-%d", time.Now().UnixNano())
}
return base64.URLEncoding.EncodeToString(b)
}
// ValidateSession checks if a session is still valid
func (a *Authenticator) ValidateSession(sessionID string) bool {
if a == nil {
return true
}
a.sessionMu.RLock()
session, exists := a.sessions[sessionID]
a.sessionMu.RUnlock()
if !exists {
return false
}
// Update activity
a.sessionMu.Lock()
session.LastActivity = time.Now()
a.sessionMu.Unlock()
return true
}
// GetStats returns authentication statistics
func (a *Authenticator) GetStats() map[string]any {
if a == nil {
return map[string]any{"enabled": false}
}
a.sessionMu.RLock()
sessionCount := len(a.sessions)
a.sessionMu.RUnlock()
return map[string]any{
"enabled": true,
"type": a.config.Type,
"active_sessions": sessionCount,
"basic_users": len(a.basicUsers),
"static_tokens": len(a.bearerTokens),
}
}


@@ -1,77 +0,0 @@
// FILE: logwisp/src/internal/config/auth.go
package config
import (
"fmt"
)
type AuthConfig struct {
// Authentication type: "none", "basic", "bearer", "mtls"
Type string `toml:"type"`
// Basic auth
BasicAuth *BasicAuthConfig `toml:"basic_auth"`
// Bearer token auth
BearerAuth *BearerAuthConfig `toml:"bearer_auth"`
}
type BasicAuthConfig struct {
// Static users (for simple deployments)
Users []BasicAuthUser `toml:"users"`
// External auth file
UsersFile string `toml:"users_file"`
// Realm for WWW-Authenticate header
Realm string `toml:"realm"`
}
type BasicAuthUser struct {
Username string `toml:"username"`
// Password hash (bcrypt)
PasswordHash string `toml:"password_hash"`
}
type BearerAuthConfig struct {
// Static tokens
Tokens []string `toml:"tokens"`
// JWT validation
JWT *JWTConfig `toml:"jwt"`
}
type JWTConfig struct {
// JWKS URL for key discovery
JWKSURL string `toml:"jwks_url"`
// Static signing key (if not using JWKS)
SigningKey string `toml:"signing_key"`
// Expected issuer
Issuer string `toml:"issuer"`
// Expected audience
Audience string `toml:"audience"`
}
func validateAuth(pipelineName string, auth *AuthConfig) error {
if auth == nil {
return nil
}
validTypes := map[string]bool{"none": true, "basic": true, "bearer": true, "mtls": true}
if !validTypes[auth.Type] {
return fmt.Errorf("pipeline '%s': invalid auth type: %s", pipelineName, auth.Type)
}
if auth.Type == "basic" && auth.BasicAuth == nil {
return fmt.Errorf("pipeline '%s': basic auth type specified but config missing", pipelineName)
}
if auth.Type == "bearer" && auth.BearerAuth == nil {
return fmt.Errorf("pipeline '%s': bearer auth type specified but config missing", pipelineName)
}
return nil
}
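Mapped to TOML, the structs above yield blocks like the following. The placement is assumed (exact nesting depends on where `AuthConfig` is embedded in a source or sink table), and the hash is illustrative:

```toml
type = "basic"

[basic_auth]
realm = "LogWisp"
users_file = "/etc/logwisp/users"

[[basic_auth.users]]
username = "alice"
password_hash = "$2a$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy"
```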


@@ -1,6 +1,9 @@
// FILE: logwisp/src/internal/config/config.go
package config
+// --- LogWisp Configuration Options ---
+// Config is the top-level configuration structure for the LogWisp application.
type Config struct {
	// Top-level flags for application control
	Background bool `toml:"background"`
@@ -10,15 +13,368 @@ type Config struct {
	// Runtime behavior flags
	DisableStatusReporter bool `toml:"disable_status_reporter"`
	ConfigAutoReload bool `toml:"config_auto_reload"`
+	ConfigSaveOnExit bool `toml:"config_save_on_exit"`
-	// Internal flag indicating daemonized child process
-	BackgroundDaemon bool `toml:"background-daemon"`
+	// Internal flag indicating daemonized child process (DO NOT SET IN CONFIG FILE)
+	BackgroundDaemon bool
	// Configuration file path
-	ConfigFile string `toml:"config"`
+	ConfigFile string `toml:"config_file"`
	// Existing fields
	Logging *LogConfig `toml:"logging"`
	Pipelines []PipelineConfig `toml:"pipelines"`
}
// --- Logging Options ---
// LogConfig represents the logging configuration for the LogWisp application itself.
type LogConfig struct {
// Output mode: "file", "stdout", "stderr", "split", "all", "none"
Output string `toml:"output"`
// Log level: "debug", "info", "warn", "error"
Level string `toml:"level"`
// File output settings (when Output includes "file" or "all")
File *LogFileConfig `toml:"file"`
// Console output settings
Console *LogConsoleConfig `toml:"console"`
}
// LogFileConfig defines settings for file-based application logging.
type LogFileConfig struct {
// Directory for log files
Directory string `toml:"directory"`
// Base name for log files
Name string `toml:"name"`
// Maximum size per log file in MB
MaxSizeMB int64 `toml:"max_size_mb"`
// Maximum total size of all logs in MB
MaxTotalSizeMB int64 `toml:"max_total_size_mb"`
// Log retention in hours (0 = disabled)
RetentionHours float64 `toml:"retention_hours"`
}
// LogConsoleConfig defines settings for console-based application logging.
type LogConsoleConfig struct {
// Target for console output: "stdout", "stderr", "split"
// "split": info/debug to stdout, warn/error to stderr
Target string `toml:"target"`
// Format: "txt" or "json"
Format string `toml:"format"`
}
// --- Pipeline Options ---
// PipelineConfig defines a complete data flow from sources to sinks.
type PipelineConfig struct {
Name string `toml:"name"`
Sources []SourceConfig `toml:"sources"`
RateLimit *RateLimitConfig `toml:"rate_limit"`
Filters []FilterConfig `toml:"filters"`
Format *FormatConfig `toml:"format"`
Sinks []SinkConfig `toml:"sinks"`
}
// Common configuration structs used across components
// ACLConfig defines network-level access control and rate limiting rules.
type ACLConfig struct {
Enabled bool `toml:"enabled"`
RequestsPerSecond float64 `toml:"requests_per_second"`
BurstSize int64 `toml:"burst_size"`
ResponseMessage string `toml:"response_message"`
ResponseCode int64 `toml:"response_code"` // Default: 429
MaxConnectionsPerIP int64 `toml:"max_connections_per_ip"`
MaxConnectionsTotal int64 `toml:"max_connections_total"`
IPWhitelist []string `toml:"ip_whitelist"`
IPBlacklist []string `toml:"ip_blacklist"`
}
// TLSServerConfig defines TLS settings for a server (HTTP Source, HTTP Sink).
type TLSServerConfig struct {
Enabled bool `toml:"enabled"`
CertFile string `toml:"cert_file"` // Server's certificate file.
KeyFile string `toml:"key_file"` // Server's private key file.
ClientAuth bool `toml:"client_auth"` // Enable/disable mTLS.
ClientCAFile string `toml:"client_ca_file"` // CA for verifying client certificates.
VerifyClientCert bool `toml:"verify_client_cert"` // Require and verify client certs.
// Common TLS settings
MinVersion string `toml:"min_version"` // "TLS1.2", "TLS1.3"
MaxVersion string `toml:"max_version"`
CipherSuites string `toml:"cipher_suites"`
}
// TLSClientConfig defines TLS settings for a client (HTTP Client Sink).
type TLSClientConfig struct {
Enabled bool `toml:"enabled"`
ServerCAFile string `toml:"server_ca_file"` // CA for verifying the remote server's certificate.
ClientCertFile string `toml:"client_cert_file"` // Client's certificate for mTLS.
ClientKeyFile string `toml:"client_key_file"` // Client's private key for mTLS.
ServerName string `toml:"server_name"` // For server certificate validation (SNI).
	InsecureSkipVerify bool `toml:"insecure_skip_verify"` // Skip server certificate verification. Use with caution.
// Common TLS settings
MinVersion string `toml:"min_version"`
MaxVersion string `toml:"max_version"`
CipherSuites string `toml:"cipher_suites"`
}
// HeartbeatConfig defines settings for periodic keep-alive or status messages.
type HeartbeatConfig struct {
Enabled bool `toml:"enabled"`
IntervalMS int64 `toml:"interval_ms"`
IncludeTimestamp bool `toml:"include_timestamp"`
IncludeStats bool `toml:"include_stats"`
Format string `toml:"format"`
}
// TODO: Future implementation
// ClientAuthConfig defines settings for client-side authentication.
type ClientAuthConfig struct {
Type string `toml:"type"` // "none"
}
// --- Source Options ---
// SourceConfig is a polymorphic struct representing a single data source.
type SourceConfig struct {
Type string `toml:"type"`
// Polymorphic - only one populated based on type
File *FileSourceOptions `toml:"file,omitempty"`
Console *ConsoleSourceOptions `toml:"console,omitempty"`
HTTP *HTTPSourceOptions `toml:"http,omitempty"`
TCP *TCPSourceOptions `toml:"tcp,omitempty"`
}
// FileSourceOptions defines settings for a file-based source.
type FileSourceOptions struct {
Directory string `toml:"directory"`
Pattern string `toml:"pattern"` // glob pattern
CheckIntervalMS int64 `toml:"check_interval_ms"`
Recursive bool `toml:"recursive"` // TODO: implement logic
}
// ConsoleSourceOptions defines settings for a stdin-based source.
type ConsoleSourceOptions struct {
BufferSize int64 `toml:"buffer_size"`
}
// HTTPSourceOptions defines settings for an HTTP server source.
type HTTPSourceOptions struct {
Host string `toml:"host"`
Port int64 `toml:"port"`
IngestPath string `toml:"ingest_path"`
BufferSize int64 `toml:"buffer_size"`
MaxRequestBodySize int64 `toml:"max_body_size"`
ReadTimeout int64 `toml:"read_timeout_ms"`
WriteTimeout int64 `toml:"write_timeout_ms"`
ACL *ACLConfig `toml:"acl"`
TLS *TLSServerConfig `toml:"tls"`
Auth *ServerAuthConfig `toml:"auth"`
}
// TCPSourceOptions defines settings for a TCP server source.
type TCPSourceOptions struct {
Host string `toml:"host"`
Port int64 `toml:"port"`
BufferSize int64 `toml:"buffer_size"`
ReadTimeout int64 `toml:"read_timeout_ms"`
KeepAlive bool `toml:"keep_alive"`
KeepAlivePeriod int64 `toml:"keep_alive_period_ms"`
ACL *ACLConfig `toml:"acl"`
Auth *ServerAuthConfig `toml:"auth"`
}
// --- Sink Options ---
// SinkConfig is a polymorphic struct representing a single data sink.
type SinkConfig struct {
Type string `toml:"type"`
// Polymorphic - only one populated based on type
Console *ConsoleSinkOptions `toml:"console,omitempty"`
File *FileSinkOptions `toml:"file,omitempty"`
HTTP *HTTPSinkOptions `toml:"http,omitempty"`
TCP *TCPSinkOptions `toml:"tcp,omitempty"`
HTTPClient *HTTPClientSinkOptions `toml:"http_client,omitempty"`
TCPClient *TCPClientSinkOptions `toml:"tcp_client,omitempty"`
}
// ConsoleSinkOptions defines settings for a console-based sink.
type ConsoleSinkOptions struct {
Target string `toml:"target"` // "stdout", "stderr", "split"
Colorize bool `toml:"colorize"`
BufferSize int64 `toml:"buffer_size"`
}
// FileSinkOptions defines settings for a file-based sink.
type FileSinkOptions struct {
Directory string `toml:"directory"`
Name string `toml:"name"`
MaxSizeMB int64 `toml:"max_size_mb"`
MaxTotalSizeMB int64 `toml:"max_total_size_mb"`
MinDiskFreeMB int64 `toml:"min_disk_free_mb"`
RetentionHours float64 `toml:"retention_hours"`
BufferSize int64 `toml:"buffer_size"`
FlushInterval int64 `toml:"flush_interval_ms"`
}
// HTTPSinkOptions defines settings for an HTTP server sink.
type HTTPSinkOptions struct {
Host string `toml:"host"`
Port int64 `toml:"port"`
StreamPath string `toml:"stream_path"`
StatusPath string `toml:"status_path"`
BufferSize int64 `toml:"buffer_size"`
WriteTimeout int64 `toml:"write_timeout_ms"`
Heartbeat *HeartbeatConfig `toml:"heartbeat"`
ACL *ACLConfig `toml:"acl"`
TLS *TLSServerConfig `toml:"tls"`
Auth *ServerAuthConfig `toml:"auth"`
}
// TCPSinkOptions defines settings for a TCP server sink.
type TCPSinkOptions struct {
Host string `toml:"host"`
Port int64 `toml:"port"`
BufferSize int64 `toml:"buffer_size"`
WriteTimeout int64 `toml:"write_timeout_ms"`
KeepAlive bool `toml:"keep_alive"`
KeepAlivePeriod int64 `toml:"keep_alive_period_ms"`
Heartbeat *HeartbeatConfig `toml:"heartbeat"`
ACL *ACLConfig `toml:"acl"`
Auth *ServerAuthConfig `toml:"auth"`
}
// HTTPClientSinkOptions defines settings for an HTTP client sink.
type HTTPClientSinkOptions struct {
URL string `toml:"url"`
BufferSize int64 `toml:"buffer_size"`
BatchSize int64 `toml:"batch_size"`
BatchDelayMS int64 `toml:"batch_delay_ms"`
Timeout int64 `toml:"timeout_seconds"`
MaxRetries int64 `toml:"max_retries"`
RetryDelayMS int64 `toml:"retry_delay_ms"`
RetryBackoff float64 `toml:"retry_backoff"`
InsecureSkipVerify bool `toml:"insecure_skip_verify"`
TLS *TLSClientConfig `toml:"tls"`
Auth *ClientAuthConfig `toml:"auth"`
}
// TCPClientSinkOptions defines settings for a TCP client sink.
type TCPClientSinkOptions struct {
Host string `toml:"host"`
Port int64 `toml:"port"`
BufferSize int64 `toml:"buffer_size"`
DialTimeout int64 `toml:"dial_timeout_seconds"`
WriteTimeout int64 `toml:"write_timeout_seconds"`
ReadTimeout int64 `toml:"read_timeout_seconds"`
KeepAlive int64 `toml:"keep_alive_seconds"`
ReconnectDelayMS int64 `toml:"reconnect_delay_ms"`
MaxReconnectDelayMS int64 `toml:"max_reconnect_delay_ms"`
ReconnectBackoff float64 `toml:"reconnect_backoff"`
Auth *ClientAuthConfig `toml:"auth"`
}
// --- Rate Limit Options ---
// RateLimitPolicy defines the action to take when a rate limit is exceeded.
type RateLimitPolicy int
const (
// PolicyPass allows all logs through, effectively disabling the limiter.
PolicyPass RateLimitPolicy = iota
// PolicyDrop drops logs that exceed the rate limit.
PolicyDrop
)
// RateLimitConfig defines the configuration for pipeline-level rate limiting.
type RateLimitConfig struct {
// Rate is the number of log entries allowed per second. Default: 0 (disabled).
Rate float64 `toml:"rate"`
// Burst is the maximum number of log entries that can be sent in a short burst. Defaults to the Rate.
Burst float64 `toml:"burst"`
// Policy defines the action to take when the limit is exceeded. "pass" or "drop".
Policy string `toml:"policy"`
// MaxEntrySizeBytes is the maximum allowed size for a single log entry. 0 = no limit.
MaxEntrySizeBytes int64 `toml:"max_entry_size_bytes"`
}
// --- Filter Options ---
// FilterType represents the filter's behavior (include or exclude).
type FilterType string
const (
// FilterTypeInclude specifies that only matching logs will pass.
FilterTypeInclude FilterType = "include" // Whitelist - only matching logs pass
// FilterTypeExclude specifies that matching logs will be dropped.
FilterTypeExclude FilterType = "exclude" // Blacklist - matching logs are dropped
)
// FilterLogic represents how multiple filter patterns are combined.
type FilterLogic string
const (
// FilterLogicOr specifies that a match on any pattern is sufficient.
FilterLogicOr FilterLogic = "or" // Match any pattern
// FilterLogicAnd specifies that all patterns must match.
FilterLogicAnd FilterLogic = "and" // Match all patterns
)
// FilterConfig represents the configuration for a single filter.
type FilterConfig struct {
Type FilterType `toml:"type"`
Logic FilterLogic `toml:"logic"`
Patterns []string `toml:"patterns"`
}
// --- Formatter Options ---
// FormatConfig is a polymorphic struct representing log entry formatting options.
type FormatConfig struct {
// Format configuration - polymorphic like sources/sinks
Type string `toml:"type"` // "json", "txt", "raw"
// Only one will be populated based on format type
JSONFormatOptions *JSONFormatterOptions `toml:"json,omitempty"`
TxtFormatOptions *TxtFormatterOptions `toml:"txt,omitempty"`
RawFormatOptions *RawFormatterOptions `toml:"raw,omitempty"`
}
// JSONFormatterOptions defines settings for the JSON formatter.
type JSONFormatterOptions struct {
Pretty bool `toml:"pretty"`
TimestampField string `toml:"timestamp_field"`
LevelField string `toml:"level_field"`
MessageField string `toml:"message_field"`
SourceField string `toml:"source_field"`
}
// TxtFormatterOptions defines settings for the text template formatter.
type TxtFormatterOptions struct {
Template string `toml:"template"`
TimestampFormat string `toml:"timestamp_format"`
}
// RawFormatterOptions defines settings for the raw pass-through formatter.
type RawFormatterOptions struct {
AddNewLine bool `toml:"add_new_line"`
}
// --- Server-side Auth (for sources) ---
// TODO: future implementation
// ServerAuthConfig defines settings for server-side authentication.
type ServerAuthConfig struct {
Type string `toml:"type"` // "none"
}
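Putting the source and sink structs together, a minimal pipeline in TOML might look like this. Values are illustrative; key names follow the `toml` tags above:

```toml
[[pipelines]]
name = "default"

[[pipelines.sources]]
type = "file"

[pipelines.sources.file]
directory = "./"
pattern = "*.log"
check_interval_ms = 100

[[pipelines.sinks]]
type = "console"

[pipelines.sinks.console]
target = "stdout"
colorize = false
buffer_size = 100
```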


@@ -1,65 +0,0 @@
// FILE: logwisp/src/internal/config/filter.go
package config
import (
"fmt"
"regexp"
)
// FilterType represents the filter type
type FilterType string
const (
FilterTypeInclude FilterType = "include" // Whitelist - only matching logs pass
FilterTypeExclude FilterType = "exclude" // Blacklist - matching logs are dropped
)
// FilterLogic represents how multiple patterns are combined
type FilterLogic string
const (
FilterLogicOr FilterLogic = "or" // Match any pattern
FilterLogicAnd FilterLogic = "and" // Match all patterns
)
// FilterConfig represents filter configuration
type FilterConfig struct {
Type FilterType `toml:"type"`
Logic FilterLogic `toml:"logic"`
Patterns []string `toml:"patterns"`
}
func validateFilter(pipelineName string, filterIndex int, cfg *FilterConfig) error {
// Validate filter type
switch cfg.Type {
case FilterTypeInclude, FilterTypeExclude, "":
// Valid types
default:
return fmt.Errorf("pipeline '%s' filter[%d]: invalid type '%s' (must be 'include' or 'exclude')",
pipelineName, filterIndex, cfg.Type)
}
// Validate filter logic
switch cfg.Logic {
case FilterLogicOr, FilterLogicAnd, "":
// Valid logic
default:
return fmt.Errorf("pipeline '%s' filter[%d]: invalid logic '%s' (must be 'or' or 'and')",
pipelineName, filterIndex, cfg.Logic)
}
// Empty patterns is valid - passes everything
if len(cfg.Patterns) == 0 {
return nil
}
// Validate regex patterns
for i, pattern := range cfg.Patterns {
if _, err := regexp.Compile(pattern); err != nil {
return fmt.Errorf("pipeline '%s' filter[%d] pattern[%d] '%s': invalid regex: %w",
pipelineName, filterIndex, i, pattern, err)
}
}
return nil
}
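A filter that passes this validation could be written as follows; patterns are illustrative Go regexps, and the placement inside a `[[pipelines]]` table is assumed:

```toml
[[pipelines.filters]]
type = "exclude"   # drop matching entries
logic = "or"       # a match on any pattern is enough
patterns = ["level=debug", "GET /healthz"]
```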


@@ -1,58 +0,0 @@
// FILE: logwisp/src/internal/config/ratelimit.go
package config
import (
"fmt"
"strings"
)
// RateLimitPolicy defines the action to take when a rate limit is exceeded.
type RateLimitPolicy int
const (
// PolicyPass allows all logs through, effectively disabling the limiter.
PolicyPass RateLimitPolicy = iota
// PolicyDrop drops logs that exceed the rate limit.
PolicyDrop
)
// RateLimitConfig defines the configuration for pipeline-level rate limiting.
type RateLimitConfig struct {
// Rate is the number of log entries allowed per second. Default: 0 (disabled).
Rate float64 `toml:"rate"`
// Burst is the maximum number of log entries that can be sent in a short burst. Defaults to the Rate.
Burst float64 `toml:"burst"`
// Policy defines the action to take when the limit is exceeded. "pass" or "drop".
Policy string `toml:"policy"`
// MaxEntrySizeBytes is the maximum allowed size for a single log entry. 0 = no limit.
MaxEntrySizeBytes int64 `toml:"max_entry_size_bytes"`
}
func validateRateLimit(pipelineName string, cfg *RateLimitConfig) error {
if cfg == nil {
return nil
}
if cfg.Rate < 0 {
return fmt.Errorf("pipeline '%s': rate limit rate cannot be negative", pipelineName)
}
if cfg.Burst < 0 {
return fmt.Errorf("pipeline '%s': rate limit burst cannot be negative", pipelineName)
}
if cfg.MaxEntrySizeBytes < 0 {
return fmt.Errorf("pipeline '%s': max entry size bytes cannot be negative", pipelineName)
}
// Validate policy
switch strings.ToLower(cfg.Policy) {
case "", "pass", "drop":
// Valid policies
default:
return fmt.Errorf("pipeline '%s': invalid rate limit policy '%s' (must be 'pass' or 'drop')",
pipelineName, cfg.Policy)
}
return nil
}
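A configuration accepted by this validator, expressed in TOML within a `[[pipelines]]` table (values illustrative):

```toml
[pipelines.rate_limit]
rate = 1000.0                # entries per second; 0 disables limiting
burst = 2000.0               # defaults to rate when omitted
policy = "drop"              # "pass" or "drop"
max_entry_size_bytes = 65536 # 0 = no per-entry size limit
```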


@@ -11,11 +11,66 @@ import (
	lconfig "github.com/lixenwraith/config"
)
-// LoadContext holds all configuration sources
-type LoadContext struct {
-	FlagConfig any // Parsed command-line flags from main
+// configManager holds the global instance of the configuration manager.
+var configManager *lconfig.Config
// Load is the single entry point for loading all application configuration.
func Load(args []string) (*Config, error) {
configPath, isExplicit := resolveConfigPath(args)
// Build configuration with all sources
// Create target config instance that will be populated
finalConfig := &Config{}
// Builder handles loading, populating the target struct, and validation
cfg, err := lconfig.NewBuilder().
WithTarget(finalConfig). // Typed target struct
WithDefaults(defaults()). // Default values
WithSources(
lconfig.SourceCLI,
lconfig.SourceEnv,
lconfig.SourceFile,
lconfig.SourceDefault,
).
WithEnvTransform(customEnvTransform). // Convert '.' to '_' in env separation
WithEnvPrefix("LOGWISP_"). // Environment variable prefix
WithArgs(args). // Command-line arguments
WithFile(configPath). // TOML config file
WithFileFormat("toml"). // Explicit format
WithTypedValidator(ValidateConfig). // Centralized validation
WithSecurityOptions(lconfig.SecurityOptions{
PreventPathTraversal: true,
MaxFileSize: 10 * 1024 * 1024, // 10MB max config
}).
Build()
if err != nil {
// Handle file not found errors - maintain existing behavior
if errors.Is(err, lconfig.ErrConfigNotFound) {
if isExplicit {
return nil, fmt.Errorf("config file not found: %s", configPath)
}
// If the default config file is not found, it's not an error, default/cli/env will be used
} else {
return nil, fmt.Errorf("failed to load or validate config: %w", err)
}
}
// Store the config file path for hot reload
finalConfig.ConfigFile = configPath
// Store the manager for hot reload
configManager = cfg
return finalConfig, nil
}
// GetConfigManager returns the global configuration manager instance for hot-reloading.
func GetConfigManager() *lconfig.Config {
return configManager
}
// defaults provides the default configuration values for the application.
func defaults() *Config {
	return &Config{
		// Top-level flag defaults
@@ -26,41 +81,46 @@ func defaults() *Config {
		// Runtime behavior defaults
		DisableStatusReporter: false,
		ConfigAutoReload: false,
+		ConfigSaveOnExit: false,
		// Child process indicator
		BackgroundDaemon: false,
		// Existing defaults
-		Logging: DefaultLogConfig(),
+		Logging: &LogConfig{
+			Output: "stdout",
+			Level: "info",
+			File: &LogFileConfig{
+				Directory: "./log",
+				Name: "logwisp",
+				MaxSizeMB: 100,
+				MaxTotalSizeMB: 1000,
+				RetentionHours: 168, // 7 days
+			},
+			Console: &LogConsoleConfig{
+				Target: "stdout",
+				Format: "txt",
+			},
+		},
		Pipelines: []PipelineConfig{
			{
				Name: "default",
				Sources: []SourceConfig{
					{
-						Type: "directory",
-						Options: map[string]any{
-							"path": "./",
-							"pattern": "*.log",
-							"check_interval_ms": int64(100),
+						Type: "file",
+						File: &FileSourceOptions{
+							Directory: "./",
+							Pattern: "*.log",
+							CheckIntervalMS: int64(100),
						},
					},
				},
				Sinks: []SinkConfig{
					{
-						Type: "http",
-						Options: map[string]any{
-							"port": int64(8080),
-							"buffer_size": int64(1000),
-							"stream_path": "/stream",
-							"status_path": "/status",
-							"heartbeat": map[string]any{
-								"enabled": true,
-								"interval_seconds": int64(30),
-								"include_timestamp": true,
-								"include_stats": false,
-								"format": "comment",
-							},
+						Type: "console",
+						Console: &ConsoleSinkOptions{
+							Target: "stdout",
+							Colorize: false,
+							BufferSize: 100,
						},
					},
				},
@@ -69,66 +129,11 @@
	}
}
-// Load is the single entry point for loading all configuration
+// resolveConfigPath determines the configuration file path based on CLI args, env vars, and default locations.
func Load(args []string) (*Config, error) {
configPath, isExplicit := resolveConfigPath(args)
// Build configuration with all sources
cfg, err := lconfig.NewBuilder().
WithDefaults(defaults()).
WithEnvPrefix("LOGWISP_").
WithEnvTransform(customEnvTransform).
WithArgs(args).
WithFile(configPath).
WithSources(
lconfig.SourceCLI,
lconfig.SourceEnv,
lconfig.SourceFile,
lconfig.SourceDefault,
).
Build()
if err != nil {
// Handle file not found errors - maintain existing behavior
if errors.Is(err, lconfig.ErrConfigNotFound) {
if isExplicit {
return nil, fmt.Errorf("config file not found: %s", configPath)
}
// If the default config file is not found, it's not an error
} else {
return nil, fmt.Errorf("failed to load config: %w", err)
}
}
// Scan into final config struct - using new interface
finalConfig := &Config{}
if err := cfg.Scan(finalConfig); err != nil {
return nil, fmt.Errorf("failed to scan config: %w", err)
}
// Set config file path if it exists
if _, err := os.Stat(configPath); err == nil {
finalConfig.ConfigFile = configPath
}
// Ensure critical fields are not nil
if finalConfig.Logging == nil {
finalConfig.Logging = DefaultLogConfig()
}
// Apply console target overrides if needed
if err := applyConsoleTargetOverrides(finalConfig); err != nil {
return nil, fmt.Errorf("failed to apply console target overrides: %w", err)
}
// Validate configuration
return finalConfig, finalConfig.validate()
}
-// resolveConfigPath returns the configuration file path
func resolveConfigPath(args []string) (path string, isExplicit bool) {
	// 1. Check for --config flag in command-line arguments (highest precedence)
	for i, arg := range args {
-		if (arg == "--config" || arg == "-c") && i+1 < len(args) {
+		if arg == "-c" {
			return args[i+1], true
		}
		if strings.HasPrefix(arg, "--config=") {
@@ -160,48 +165,10 @@ func resolveConfigPath(args []string) (path string, isExplicit bool) {
	return "logwisp.toml", false
}
+// customEnvTransform converts TOML-style config paths (e.g., logging.level) to environment variable format (LOGGING_LEVEL).
func customEnvTransform(path string) string {
	env := strings.ReplaceAll(path, ".", "_")
	env = strings.ToUpper(env)
	// env = "LOGWISP_" + env // already added by WithEnvPrefix
	return env
}
// applyConsoleTargetOverrides centralizes console target configuration
func applyConsoleTargetOverrides(cfg *Config) error {
// Check environment variable for console target override
consoleTarget := os.Getenv("LOGWISP_CONSOLE_TARGET")
if consoleTarget == "" {
return nil
}
// Validate console target value
validTargets := map[string]bool{
"stdout": true,
"stderr": true,
"split": true,
}
if !validTargets[consoleTarget] {
return fmt.Errorf("invalid LOGWISP_CONSOLE_TARGET value: %s", consoleTarget)
}
// Apply to all console sinks
for i, pipeline := range cfg.Pipelines {
for j, sink := range pipeline.Sinks {
if sink.Type == "stdout" || sink.Type == "stderr" {
if sink.Options == nil {
cfg.Pipelines[i].Sinks[j].Options = make(map[string]any)
}
// Set target for split mode handling
cfg.Pipelines[i].Sinks[j].Options["target"] = consoleTarget
}
}
}
// Also update logging console target if applicable
if cfg.Logging.Console != nil && consoleTarget == "split" {
cfg.Logging.Console.Target = "split"
}
return nil
}

@ -1,99 +0,0 @@
// FILE: logwisp/src/internal/config/logging.go
package config
import "fmt"
// LogConfig represents logging configuration for LogWisp
type LogConfig struct {
// Output mode: "file", "stdout", "stderr", "both", "none"
Output string `toml:"output"`
// Log level: "debug", "info", "warn", "error"
Level string `toml:"level"`
// File output settings (when Output includes "file" or "both")
File *LogFileConfig `toml:"file"`
// Console output settings
Console *LogConsoleConfig `toml:"console"`
}
type LogFileConfig struct {
// Directory for log files
Directory string `toml:"directory"`
// Base name for log files
Name string `toml:"name"`
// Maximum size per log file in MB
MaxSizeMB int64 `toml:"max_size_mb"`
// Maximum total size of all logs in MB
MaxTotalSizeMB int64 `toml:"max_total_size_mb"`
// Log retention in hours (0 = disabled)
RetentionHours float64 `toml:"retention_hours"`
}
type LogConsoleConfig struct {
// Target for console output: "stdout", "stderr", "split"
// "split": info/debug to stdout, warn/error to stderr
Target string `toml:"target"`
// Format: "txt" or "json"
Format string `toml:"format"`
}
// DefaultLogConfig returns sensible logging defaults
func DefaultLogConfig() *LogConfig {
return &LogConfig{
Output: "stderr",
Level: "info",
File: &LogFileConfig{
Directory: "./logs",
Name: "logwisp",
MaxSizeMB: 100,
MaxTotalSizeMB: 1000,
RetentionHours: 168, // 7 days
},
Console: &LogConsoleConfig{
Target: "stderr",
Format: "txt",
},
}
}
func validateLogConfig(cfg *LogConfig) error {
validOutputs := map[string]bool{
"file": true, "stdout": true, "stderr": true,
"both": true, "none": true,
}
if !validOutputs[cfg.Output] {
return fmt.Errorf("invalid log output mode: %s", cfg.Output)
}
validLevels := map[string]bool{
"debug": true, "info": true, "warn": true, "error": true,
}
if !validLevels[cfg.Level] {
return fmt.Errorf("invalid log level: %s", cfg.Level)
}
if cfg.Console != nil {
validTargets := map[string]bool{
"stdout": true, "stderr": true, "split": true,
}
if !validTargets[cfg.Console.Target] {
return fmt.Errorf("invalid console target: %s", cfg.Console.Target)
}
validFormats := map[string]bool{
"txt": true, "json": true, "": true,
}
if !validFormats[cfg.Console.Format] {
return fmt.Errorf("invalid console format: %s", cfg.Console.Format)
}
}
return nil
}

View File

@ -1,383 +0,0 @@
// FILE: logwisp/src/internal/config/pipeline.go
package config
import (
"fmt"
"net"
"net/url"
"path/filepath"
"strings"
)
// PipelineConfig represents a data processing pipeline
type PipelineConfig struct {
// Pipeline identifier (used in logs and metrics)
Name string `toml:"name"`
// Data sources for this pipeline
Sources []SourceConfig `toml:"sources"`
// Rate limiting
RateLimit *RateLimitConfig `toml:"rate_limit"`
// Filter configuration
Filters []FilterConfig `toml:"filters"`
// Log formatting configuration
Format string `toml:"format"`
FormatOptions map[string]any `toml:"format_options"`
// Output sinks for this pipeline
Sinks []SinkConfig `toml:"sinks"`
// Authentication/Authorization (applies to network sinks)
Auth *AuthConfig `toml:"auth"`
}
// SourceConfig represents an input data source
type SourceConfig struct {
// Source type: "directory", "file", "stdin", etc.
Type string `toml:"type"`
// Type-specific configuration options
Options map[string]any `toml:"options"`
}
// SinkConfig represents an output destination
type SinkConfig struct {
// Sink type: "http", "tcp", "file", "stdout", "stderr"
Type string `toml:"type"`
// Type-specific configuration options
Options map[string]any `toml:"options"`
}
func validateSource(pipelineName string, sourceIndex int, cfg *SourceConfig) error {
if cfg.Type == "" {
return fmt.Errorf("pipeline '%s' source[%d]: missing type", pipelineName, sourceIndex)
}
switch cfg.Type {
case "directory":
// Validate directory source options
path, ok := cfg.Options["path"].(string)
if !ok || path == "" {
return fmt.Errorf("pipeline '%s' source[%d]: directory source requires 'path' option",
pipelineName, sourceIndex)
}
// Check for directory traversal
if strings.Contains(path, "..") {
return fmt.Errorf("pipeline '%s' source[%d]: path contains directory traversal",
pipelineName, sourceIndex)
}
// Validate pattern if provided
if pattern, ok := cfg.Options["pattern"].(string); ok && pattern != "" {
// Try to compile as glob pattern (will be converted to regex internally)
if strings.Count(pattern, "*") == 0 && strings.Count(pattern, "?") == 0 {
// If no wildcards, ensure it's a valid filename
if filepath.Base(pattern) != pattern {
return fmt.Errorf("pipeline '%s' source[%d]: pattern contains path separators",
pipelineName, sourceIndex)
}
}
}
// Validate check interval if provided
if interval, ok := cfg.Options["check_interval_ms"]; ok {
if intVal, ok := interval.(int64); ok {
if intVal < 10 {
return fmt.Errorf("pipeline '%s' source[%d]: check interval too small: %d ms (min: 10ms)",
pipelineName, sourceIndex, intVal)
}
} else {
return fmt.Errorf("pipeline '%s' source[%d]: invalid check_interval_ms type",
pipelineName, sourceIndex)
}
}
case "stdin":
// No specific validation needed for stdin
case "http":
// Validate HTTP source options
port, ok := cfg.Options["port"].(int64)
if !ok || port < 1 || port > 65535 {
return fmt.Errorf("pipeline '%s' source[%d]: invalid or missing HTTP port",
pipelineName, sourceIndex)
}
// Validate path if provided
if ingestPath, ok := cfg.Options["ingest_path"].(string); ok {
if !strings.HasPrefix(ingestPath, "/") {
return fmt.Errorf("pipeline '%s' source[%d]: ingest path must start with /: %s",
pipelineName, sourceIndex, ingestPath)
}
}
// Validate net_limit if present within Options
if rl, ok := cfg.Options["net_limit"].(map[string]any); ok {
if err := validateNetLimitOptions("HTTP source", pipelineName, sourceIndex, rl); err != nil {
return err
}
}
// Validate SSL if present
if ssl, ok := cfg.Options["ssl"].(map[string]any); ok {
if err := validateSSLOptions("HTTP source", pipelineName, sourceIndex, ssl); err != nil {
return err
}
}
case "tcp":
// Validate TCP source options
port, ok := cfg.Options["port"].(int64)
if !ok || port < 1 || port > 65535 {
return fmt.Errorf("pipeline '%s' source[%d]: invalid or missing TCP port",
pipelineName, sourceIndex)
}
// Validate net_limit if present within Options
if rl, ok := cfg.Options["net_limit"].(map[string]any); ok {
if err := validateNetLimitOptions("TCP source", pipelineName, sourceIndex, rl); err != nil {
return err
}
}
// Validate SSL if present
if ssl, ok := cfg.Options["ssl"].(map[string]any); ok {
if err := validateSSLOptions("TCP source", pipelineName, sourceIndex, ssl); err != nil {
return err
}
}
default:
return fmt.Errorf("pipeline '%s' source[%d]: unknown source type '%s'",
pipelineName, sourceIndex, cfg.Type)
}
return nil
}
func validateSink(pipelineName string, sinkIndex int, cfg *SinkConfig, allPorts map[int64]string) error {
if cfg.Type == "" {
return fmt.Errorf("pipeline '%s' sink[%d]: missing type", pipelineName, sinkIndex)
}
switch cfg.Type {
case "http":
// Extract and validate HTTP configuration
port, ok := cfg.Options["port"].(int64)
if !ok || port < 1 || port > 65535 {
return fmt.Errorf("pipeline '%s' sink[%d]: invalid or missing HTTP port",
pipelineName, sinkIndex)
}
// Check port conflicts
if existing, exists := allPorts[port]; exists {
return fmt.Errorf("pipeline '%s' sink[%d]: HTTP port %d already used by %s",
pipelineName, sinkIndex, port, existing)
}
allPorts[port] = fmt.Sprintf("%s-http[%d]", pipelineName, sinkIndex)
// Validate buffer size
if bufSize, ok := cfg.Options["buffer_size"].(int64); ok {
if bufSize < 1 {
return fmt.Errorf("pipeline '%s' sink[%d]: HTTP buffer size must be positive: %d",
pipelineName, sinkIndex, bufSize)
}
}
// Validate paths if provided
if streamPath, ok := cfg.Options["stream_path"].(string); ok {
if !strings.HasPrefix(streamPath, "/") {
return fmt.Errorf("pipeline '%s' sink[%d]: stream path must start with /: %s",
pipelineName, sinkIndex, streamPath)
}
}
if statusPath, ok := cfg.Options["status_path"].(string); ok {
if !strings.HasPrefix(statusPath, "/") {
return fmt.Errorf("pipeline '%s' sink[%d]: status path must start with /: %s",
pipelineName, sinkIndex, statusPath)
}
}
// Validate heartbeat if present
if hb, ok := cfg.Options["heartbeat"].(map[string]any); ok {
if err := validateHeartbeatOptions("HTTP", pipelineName, sinkIndex, hb); err != nil {
return err
}
}
// Validate SSL if present
if ssl, ok := cfg.Options["ssl"].(map[string]any); ok {
if err := validateSSLOptions("HTTP", pipelineName, sinkIndex, ssl); err != nil {
return err
}
}
// Validate net limit if present
if rl, ok := cfg.Options["net_limit"].(map[string]any); ok {
if err := validateNetLimitOptions("HTTP", pipelineName, sinkIndex, rl); err != nil {
return err
}
}
case "tcp":
// Extract and validate TCP configuration
port, ok := cfg.Options["port"].(int64)
if !ok || port < 1 || port > 65535 {
return fmt.Errorf("pipeline '%s' sink[%d]: invalid or missing TCP port",
pipelineName, sinkIndex)
}
// Check port conflicts
if existing, exists := allPorts[port]; exists {
return fmt.Errorf("pipeline '%s' sink[%d]: TCP port %d already used by %s",
pipelineName, sinkIndex, port, existing)
}
allPorts[port] = fmt.Sprintf("%s-tcp[%d]", pipelineName, sinkIndex)
// Validate buffer size
if bufSize, ok := cfg.Options["buffer_size"].(int64); ok {
if bufSize < 1 {
return fmt.Errorf("pipeline '%s' sink[%d]: TCP buffer size must be positive: %d",
pipelineName, sinkIndex, bufSize)
}
}
// Validate heartbeat if present
if hb, ok := cfg.Options["heartbeat"].(map[string]any); ok {
if err := validateHeartbeatOptions("TCP", pipelineName, sinkIndex, hb); err != nil {
return err
}
}
// Validate SSL if present
if ssl, ok := cfg.Options["ssl"].(map[string]any); ok {
if err := validateSSLOptions("TCP", pipelineName, sinkIndex, ssl); err != nil {
return err
}
}
// Validate net limit if present
if rl, ok := cfg.Options["net_limit"].(map[string]any); ok {
if err := validateNetLimitOptions("TCP", pipelineName, sinkIndex, rl); err != nil {
return err
}
}
case "http_client":
// Validate URL
urlStr, ok := cfg.Options["url"].(string)
if !ok || urlStr == "" {
return fmt.Errorf("pipeline '%s' sink[%d]: http_client sink requires 'url' option",
pipelineName, sinkIndex)
}
// Validate URL format
parsedURL, err := url.Parse(urlStr)
if err != nil {
return fmt.Errorf("pipeline '%s' sink[%d]: invalid URL: %w",
pipelineName, sinkIndex, err)
}
if parsedURL.Scheme != "http" && parsedURL.Scheme != "https" {
return fmt.Errorf("pipeline '%s' sink[%d]: URL must use http or https scheme",
pipelineName, sinkIndex)
}
// Validate batch size
if batchSize, ok := cfg.Options["batch_size"].(int64); ok {
if batchSize < 1 {
return fmt.Errorf("pipeline '%s' sink[%d]: batch_size must be positive: %d",
pipelineName, sinkIndex, batchSize)
}
}
// Validate timeout
if timeout, ok := cfg.Options["timeout_seconds"].(int64); ok {
if timeout < 1 {
return fmt.Errorf("pipeline '%s' sink[%d]: timeout_seconds must be positive: %d",
pipelineName, sinkIndex, timeout)
}
}
case "tcp_client":
// Validate address
address, ok := cfg.Options["address"].(string)
if !ok || address == "" {
return fmt.Errorf("pipeline '%s' sink[%d]: tcp_client sink requires 'address' option",
pipelineName, sinkIndex)
}
// Validate address format
_, _, err := net.SplitHostPort(address)
if err != nil {
return fmt.Errorf("pipeline '%s' sink[%d]: invalid address format (expected host:port): %w",
pipelineName, sinkIndex, err)
}
// Validate timeouts
if dialTimeout, ok := cfg.Options["dial_timeout_seconds"].(int64); ok {
if dialTimeout < 1 {
return fmt.Errorf("pipeline '%s' sink[%d]: dial_timeout_seconds must be positive: %d",
pipelineName, sinkIndex, dialTimeout)
}
}
if writeTimeout, ok := cfg.Options["write_timeout_seconds"].(int64); ok {
if writeTimeout < 1 {
return fmt.Errorf("pipeline '%s' sink[%d]: write_timeout_seconds must be positive: %d",
pipelineName, sinkIndex, writeTimeout)
}
}
case "file":
// Validate file sink options
directory, ok := cfg.Options["directory"].(string)
if !ok || directory == "" {
return fmt.Errorf("pipeline '%s' sink[%d]: file sink requires 'directory' option",
pipelineName, sinkIndex)
}
name, ok := cfg.Options["name"].(string)
if !ok || name == "" {
return fmt.Errorf("pipeline '%s' sink[%d]: file sink requires 'name' option",
pipelineName, sinkIndex)
}
// Validate numeric options
if maxSize, ok := cfg.Options["max_size_mb"].(int64); ok {
if maxSize < 1 {
return fmt.Errorf("pipeline '%s' sink[%d]: max_size_mb must be positive: %d",
pipelineName, sinkIndex, maxSize)
}
}
if maxTotalSize, ok := cfg.Options["max_total_size_mb"].(int64); ok {
if maxTotalSize < 0 {
return fmt.Errorf("pipeline '%s' sink[%d]: max_total_size_mb cannot be negative: %d",
pipelineName, sinkIndex, maxTotalSize)
}
}
if retention, ok := cfg.Options["retention_hours"].(float64); ok {
if retention < 0 {
return fmt.Errorf("pipeline '%s' sink[%d]: retention_hours cannot be negative: %f",
pipelineName, sinkIndex, retention)
}
}
case "stdout", "stderr":
// No specific validation needed for console sinks
default:
return fmt.Errorf("pipeline '%s' sink[%d]: unknown sink type '%s'",
pipelineName, sinkIndex, cfg.Type)
}
return nil
}

@ -1,34 +0,0 @@
// FILE: logwisp/src/internal/config/saver.go
package config
import (
"fmt"
lconfig "github.com/lixenwraith/config"
)
// SaveToFile saves the configuration to the specified file path.
// It uses the lconfig library's atomic file saving capabilities.
func (c *Config) SaveToFile(path string) error {
if path == "" {
return fmt.Errorf("cannot save config: path is empty")
}
// Create a temporary lconfig instance just for saving
// This avoids the need to track lconfig throughout the application
lcfg, err := lconfig.NewBuilder().
WithFile(path).
WithTarget(c).
WithFileFormat("toml").
Build()
if err != nil {
return fmt.Errorf("failed to create config builder: %w", err)
}
// Use lconfig's Save method which handles atomic writes
if err := lcfg.Save(path); err != nil {
return fmt.Errorf("failed to save config: %w", err)
}
return nil
}

@ -1,205 +0,0 @@
// FILE: logwisp/src/internal/config/server.go
package config
import (
"fmt"
"net"
"strings"
)
type TCPConfig struct {
Enabled bool `toml:"enabled"`
Port int64 `toml:"port"`
BufferSize int64 `toml:"buffer_size"`
// SSL/TLS Configuration
SSL *SSLConfig `toml:"ssl"`
// Net limiting
NetLimit *NetLimitConfig `toml:"net_limit"`
// Heartbeat
Heartbeat *HeartbeatConfig `toml:"heartbeat"`
}
type HTTPConfig struct {
Enabled bool `toml:"enabled"`
Port int64 `toml:"port"`
BufferSize int64 `toml:"buffer_size"`
// Endpoint paths
StreamPath string `toml:"stream_path"`
StatusPath string `toml:"status_path"`
// SSL/TLS Configuration
SSL *SSLConfig `toml:"ssl"`
// Net limiting
NetLimit *NetLimitConfig `toml:"net_limit"`
// Heartbeat
Heartbeat *HeartbeatConfig `toml:"heartbeat"`
}
type HeartbeatConfig struct {
Enabled bool `toml:"enabled"`
IntervalSeconds int64 `toml:"interval_seconds"`
IncludeTimestamp bool `toml:"include_timestamp"`
IncludeStats bool `toml:"include_stats"`
Format string `toml:"format"`
}
type NetLimitConfig struct {
// Enable net limiting
Enabled bool `toml:"enabled"`
// IP Access Control Lists
IPWhitelist []string `toml:"ip_whitelist"`
IPBlacklist []string `toml:"ip_blacklist"`
// Requests per second per client
RequestsPerSecond float64 `toml:"requests_per_second"`
// Burst size (token bucket)
BurstSize int64 `toml:"burst_size"`
// Net limit by: "ip", "global"
LimitBy string `toml:"limit_by"`
// Response when net limited
ResponseCode int64 `toml:"response_code"` // Default: 429
ResponseMessage string `toml:"response_message"` // Default: "Net limit exceeded"
// Connection limits
MaxConnectionsPerIP int64 `toml:"max_connections_per_ip"`
MaxTotalConnections int64 `toml:"max_total_connections"`
}
func validateHeartbeatOptions(serverType, pipelineName string, sinkIndex int, hb map[string]any) error {
if enabled, ok := hb["enabled"].(bool); ok && enabled {
interval, ok := hb["interval_seconds"].(int64)
if !ok || interval < 1 {
return fmt.Errorf("pipeline '%s' sink[%d] %s: heartbeat interval must be positive",
pipelineName, sinkIndex, serverType)
}
if format, ok := hb["format"].(string); ok {
if format != "json" && format != "comment" {
return fmt.Errorf("pipeline '%s' sink[%d] %s: heartbeat format must be 'json' or 'comment': %s",
pipelineName, sinkIndex, serverType, format)
}
}
}
return nil
}
func validateNetLimitOptions(serverType, pipelineName string, sinkIndex int, rl map[string]any) error {
if enabled, ok := rl["enabled"].(bool); !ok || !enabled {
return nil
}
// Validate IP lists if present
if ipWhitelist, ok := rl["ip_whitelist"].([]any); ok {
for i, entry := range ipWhitelist {
entryStr, ok := entry.(string)
if !ok {
continue
}
if err := validateIPv4Entry(entryStr); err != nil {
return fmt.Errorf("pipeline '%s' sink[%d] %s: whitelist[%d] %v",
pipelineName, sinkIndex, serverType, i, err)
}
}
}
if ipBlacklist, ok := rl["ip_blacklist"].([]any); ok {
for i, entry := range ipBlacklist {
entryStr, ok := entry.(string)
if !ok {
continue
}
if err := validateIPv4Entry(entryStr); err != nil {
return fmt.Errorf("pipeline '%s' sink[%d] %s: blacklist[%d] %v",
pipelineName, sinkIndex, serverType, i, err)
}
}
}
// Validate requests per second
rps, ok := rl["requests_per_second"].(float64)
if !ok || rps <= 0 {
return fmt.Errorf("pipeline '%s' sink[%d] %s: requests_per_second must be positive",
pipelineName, sinkIndex, serverType)
}
// Validate burst size
burst, ok := rl["burst_size"].(int64)
if !ok || burst < 1 {
return fmt.Errorf("pipeline '%s' sink[%d] %s: burst_size must be at least 1",
pipelineName, sinkIndex, serverType)
}
// Validate limit_by
if limitBy, ok := rl["limit_by"].(string); ok && limitBy != "" {
validLimitBy := map[string]bool{"ip": true, "global": true}
if !validLimitBy[limitBy] {
return fmt.Errorf("pipeline '%s' sink[%d] %s: invalid limit_by value: %s (must be 'ip' or 'global')",
pipelineName, sinkIndex, serverType, limitBy)
}
}
// Validate response code
if respCode, ok := rl["response_code"].(int64); ok {
if respCode > 0 && (respCode < 400 || respCode >= 600) {
return fmt.Errorf("pipeline '%s' sink[%d] %s: response_code must be 4xx or 5xx: %d",
pipelineName, sinkIndex, serverType, respCode)
}
}
// Validate connection limits
maxPerIP, perIPOk := rl["max_connections_per_ip"].(int64)
maxTotal, totalOk := rl["max_total_connections"].(int64)
if perIPOk && totalOk && maxPerIP > 0 && maxTotal > 0 {
if maxPerIP > maxTotal {
return fmt.Errorf("pipeline '%s' sink[%d] %s: max_connections_per_ip (%d) cannot exceed max_total_connections (%d)",
pipelineName, sinkIndex, serverType, maxPerIP, maxTotal)
}
}
return nil
}
// validateIPv4Entry ensures an IP or CIDR is IPv4
func validateIPv4Entry(entry string) error {
// Handle single IP
if !strings.Contains(entry, "/") {
ip := net.ParseIP(entry)
if ip == nil {
return fmt.Errorf("invalid IP address: %s", entry)
}
if ip.To4() == nil {
return fmt.Errorf("IPv6 not supported (IPv4-only): %s", entry)
}
return nil
}
// Handle CIDR
ipAddr, ipNet, err := net.ParseCIDR(entry)
if err != nil {
return fmt.Errorf("invalid CIDR: %s", entry)
}
// Check if the IP is IPv4
if ipAddr.To4() == nil {
return fmt.Errorf("IPv6 CIDR not supported (IPv4-only): %s", entry)
}
// Verify the network mask is appropriate for IPv4
_, bits := ipNet.Mask.Size()
if bits != 32 {
return fmt.Errorf("invalid IPv4 CIDR mask (got %d bits, expected 32): %s", bits, entry)
}
return nil
}

@ -1,79 +0,0 @@
// FILE: logwisp/src/internal/config/ssl.go
package config
import (
"fmt"
"os"
)
type SSLConfig struct {
Enabled bool `toml:"enabled"`
CertFile string `toml:"cert_file"`
KeyFile string `toml:"key_file"`
// Client certificate authentication
ClientAuth bool `toml:"client_auth"`
ClientCAFile string `toml:"client_ca_file"`
VerifyClientCert bool `toml:"verify_client_cert"`
// Option to skip verification for clients
InsecureSkipVerify bool `toml:"insecure_skip_verify"`
// TLS version constraints
MinVersion string `toml:"min_version"` // "TLS1.2", "TLS1.3"
MaxVersion string `toml:"max_version"`
// Cipher suites (comma-separated list)
CipherSuites string `toml:"cipher_suites"`
}
func validateSSLOptions(serverType, pipelineName string, sinkIndex int, ssl map[string]any) error {
if enabled, ok := ssl["enabled"].(bool); ok && enabled {
certFile, certOk := ssl["cert_file"].(string)
keyFile, keyOk := ssl["key_file"].(string)
if !certOk || certFile == "" || !keyOk || keyFile == "" {
return fmt.Errorf("pipeline '%s' sink[%d] %s: SSL enabled but cert/key files not specified",
pipelineName, sinkIndex, serverType)
}
// Validate that certificate files exist and are readable
if _, err := os.Stat(certFile); err != nil {
return fmt.Errorf("pipeline '%s' sink[%d] %s: cert_file is not accessible: %w",
pipelineName, sinkIndex, serverType, err)
}
if _, err := os.Stat(keyFile); err != nil {
return fmt.Errorf("pipeline '%s' sink[%d] %s: key_file is not accessible: %w",
pipelineName, sinkIndex, serverType, err)
}
if clientAuth, ok := ssl["client_auth"].(bool); ok && clientAuth {
caFile, caOk := ssl["client_ca_file"].(string)
if !caOk || caFile == "" {
return fmt.Errorf("pipeline '%s' sink[%d] %s: client auth enabled but CA file not specified",
pipelineName, sinkIndex, serverType)
}
// Validate that the client CA file exists and is readable
if _, err := os.Stat(caFile); err != nil {
return fmt.Errorf("pipeline '%s' sink[%d] %s: client_ca_file is not accessible: %w",
pipelineName, sinkIndex, serverType, err)
}
}
// Validate TLS versions
validVersions := map[string]bool{"TLS1.0": true, "TLS1.1": true, "TLS1.2": true, "TLS1.3": true}
if minVer, ok := ssl["min_version"].(string); ok && minVer != "" {
if !validVersions[minVer] {
return fmt.Errorf("pipeline '%s' sink[%d] %s: invalid min TLS version: %s",
pipelineName, sinkIndex, serverType, minVer)
}
}
if maxVer, ok := ssl["max_version"].(string); ok && maxVer != "" {
if !validVersions[maxVer] {
return fmt.Errorf("pipeline '%s' sink[%d] %s: invalid max TLS version: %s",
pipelineName, sinkIndex, serverType, maxVer)
}
}
}
return nil
}

@ -3,22 +3,26 @@ package config
import (
"fmt"
"net/url"
"path/filepath"
"regexp"
"strings"
"time"
lconfig "github.com/lixenwraith/config"
)
// ValidateConfig is the centralized validator for the entire configuration structure.
func ValidateConfig(cfg *Config) error {
if cfg == nil {
return fmt.Errorf("config is nil")
}
if len(cfg.Pipelines) == 0 {
return fmt.Errorf("no pipelines configured")
}
if err := validateLogConfig(cfg.Logging); err != nil {
return fmt.Errorf("logging config: %w", err)
}
allPorts := make(map[int64]string)
pipelineNames := make(map[string]bool)
for i, pipeline := range cfg.Pipelines {
if err := validatePipeline(i, &pipeline, pipelineNames, allPorts); err != nil {
return err
}
}
return nil
}
// validateLogConfig validates the application's own logging settings.
func validateLogConfig(cfg *LogConfig) error {
validOutputs := map[string]bool{
"file": true, "stdout": true, "stderr": true,
"split": true, "all": true, "none": true,
}
if !validOutputs[cfg.Output] {
return fmt.Errorf("invalid log output mode: %s", cfg.Output)
}
validLevels := map[string]bool{
"debug": true, "info": true, "warn": true, "error": true,
}
if !validLevels[cfg.Level] {
return fmt.Errorf("invalid log level: %s", cfg.Level)
}
if cfg.Console != nil {
validTargets := map[string]bool{
"stdout": true, "stderr": true, "split": true,
}
if !validTargets[cfg.Console.Target] {
return fmt.Errorf("invalid console target: %s", cfg.Console.Target)
}
validFormats := map[string]bool{
"txt": true, "json": true, "": true,
}
if !validFormats[cfg.Console.Format] {
return fmt.Errorf("invalid console format: %s", cfg.Console.Format)
}
}
return nil
}
// validatePipeline validates a single pipeline's configuration.
func validatePipeline(index int, p *PipelineConfig, pipelineNames map[string]bool, allPorts map[int64]string) error {
// Validate pipeline name
if err := lconfig.NonEmpty(p.Name); err != nil {
return fmt.Errorf("pipeline %d: missing name", index)
}
if pipelineNames[p.Name] {
return fmt.Errorf("pipeline %d: duplicate name '%s'", index, p.Name)
}
pipelineNames[p.Name] = true
// Must have at least one source
if len(p.Sources) == 0 {
return fmt.Errorf("pipeline '%s': no sources specified", p.Name)
}
// Validate each source
for j, source := range p.Sources {
if err := validateSourceConfig(p.Name, j, &source); err != nil {
return err
}
}
// Validate rate limit if present
if p.RateLimit != nil {
if err := validateRateLimit(p.Name, p.RateLimit); err != nil {
return err
}
}
// Validate filters
for j, filter := range p.Filters {
if err := validateFilter(p.Name, j, &filter); err != nil {
return err
}
}
// Validate formatter configuration
if err := validateFormatterConfig(p); err != nil {
return fmt.Errorf("pipeline '%s': %w", p.Name, err)
}
// Must have at least one sink
if len(p.Sinks) == 0 {
return fmt.Errorf("pipeline '%s': no sinks specified", p.Name)
}
// Validate each sink
for j, sink := range p.Sinks {
if err := validateSinkConfig(p.Name, j, &sink, allPorts); err != nil {
return err
}
}
return nil
}
// validateSourceConfig validates a polymorphic source configuration.
func validateSourceConfig(pipelineName string, index int, s *SourceConfig) error {
if err := lconfig.NonEmpty(s.Type); err != nil {
return fmt.Errorf("pipeline '%s' source[%d]: missing type", pipelineName, index)
}
// Count how many source configs are populated
populated := 0
var populatedType string
if s.File != nil {
populated++
populatedType = "file"
}
if s.Console != nil {
populated++
populatedType = "console"
}
if s.HTTP != nil {
populated++
populatedType = "http"
}
if s.TCP != nil {
populated++
populatedType = "tcp"
}
if populated == 0 {
return fmt.Errorf("pipeline '%s' source[%d]: no configuration provided for type '%s'",
pipelineName, index, s.Type)
}
if populated > 1 {
return fmt.Errorf("pipeline '%s' source[%d]: multiple configurations provided, only one allowed",
pipelineName, index)
}
if populatedType != s.Type {
return fmt.Errorf("pipeline '%s' source[%d]: type mismatch - type is '%s' but config is for '%s'",
pipelineName, index, s.Type, populatedType)
}
// Validate specific source type
switch s.Type {
case "file":
return validateDirectorySource(pipelineName, index, s.File)
case "console":
return validateConsoleSource(pipelineName, index, s.Console)
case "http":
return validateHTTPSource(pipelineName, index, s.HTTP)
case "tcp":
return validateTCPSource(pipelineName, index, s.TCP)
default:
return fmt.Errorf("pipeline '%s' source[%d]: unknown type '%s'", pipelineName, index, s.Type)
}
}
// validateSinkConfig validates a polymorphic sink configuration.
func validateSinkConfig(pipelineName string, index int, s *SinkConfig, allPorts map[int64]string) error {
if err := lconfig.NonEmpty(s.Type); err != nil {
return fmt.Errorf("pipeline '%s' sink[%d]: missing type", pipelineName, index)
}
// Count populated sink configs
populated := 0
var populatedType string
if s.Console != nil {
populated++
populatedType = "console"
}
if s.File != nil {
populated++
populatedType = "file"
}
if s.HTTP != nil {
populated++
populatedType = "http"
}
if s.TCP != nil {
populated++
populatedType = "tcp"
}
if s.HTTPClient != nil {
populated++
populatedType = "http_client"
}
if s.TCPClient != nil {
populated++
populatedType = "tcp_client"
}
if populated == 0 {
return fmt.Errorf("pipeline '%s' sink[%d]: no configuration provided for type '%s'",
pipelineName, index, s.Type)
}
if populated > 1 {
return fmt.Errorf("pipeline '%s' sink[%d]: multiple configurations provided, only one allowed",
pipelineName, index)
}
if populatedType != s.Type {
return fmt.Errorf("pipeline '%s' sink[%d]: type mismatch - type is '%s' but config is for '%s'",
pipelineName, index, s.Type, populatedType)
}
// Validate specific sink type
switch s.Type {
case "console":
return validateConsoleSink(pipelineName, index, s.Console)
case "file":
return validateFileSink(pipelineName, index, s.File)
case "http":
return validateHTTPSink(pipelineName, index, s.HTTP, allPorts)
case "tcp":
return validateTCPSink(pipelineName, index, s.TCP, allPorts)
case "http_client":
return validateHTTPClientSink(pipelineName, index, s.HTTPClient)
case "tcp_client":
return validateTCPClientSink(pipelineName, index, s.TCPClient)
default:
return fmt.Errorf("pipeline '%s' sink[%d]: unknown type '%s'", pipelineName, index, s.Type)
}
}
// validateFormatterConfig validates formatter configuration
func validateFormatterConfig(p *PipelineConfig) error {
if p.Format == nil {
p.Format = &FormatConfig{
Type: "raw",
}
} else if p.Format.Type == "" {
p.Format.Type = "raw" // Default
}
switch p.Format.Type {
case "raw":
if p.Format.RawFormatOptions == nil {
p.Format.RawFormatOptions = &RawFormatterOptions{}
}
case "txt":
if p.Format.TxtFormatOptions == nil {
p.Format.TxtFormatOptions = &TxtFormatterOptions{}
}
// Default template format
templateStr := "[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Source}} - {{.Message}}{{ if .Fields }} {{.Fields}}{{ end }}"
if p.Format.TxtFormatOptions.Template == "" {
p.Format.TxtFormatOptions.Template = templateStr
}
// Default timestamp format
timestampFormat := time.RFC3339
if p.Format.TxtFormatOptions.TimestampFormat != "" {
p.Format.TxtFormatOptions.TimestampFormat = timestampFormat
}
case "json":
if p.Format.JSONFormatOptions == nil {
p.Format.JSONFormatOptions = &JSONFormatterOptions{}
}
}
return nil
}
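The default txt template above pipes fields through `FmtTime` and `ToUpper` helpers that the formatter is expected to register. A minimal standalone sketch of how such a template renders, assuming a hypothetical `FuncMap` wiring (not LogWisp's actual formatter code):

```go
package main

import (
	"bytes"
	"strings"
	"text/template"
	"time"
)

// render executes the default txt template with illustrative FmtTime/ToUpper helpers.
func render() string {
	funcs := template.FuncMap{
		"FmtTime": func(t time.Time) string { return t.Format(time.RFC3339) },
		"ToUpper": strings.ToUpper,
	}
	tmpl := template.Must(template.New("txt").Funcs(funcs).Parse(
		"[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Source}} - {{.Message}}{{ if .Fields }} {{.Fields}}{{ end }}"))
	var buf bytes.Buffer
	entry := map[string]any{
		"Timestamp": time.Date(2024, 1, 2, 3, 4, 5, 0, time.UTC),
		"Level":     "info",
		"Source":    "app",
		"Message":   "started",
	}
	tmpl.Execute(&buf, entry)
	return buf.String()
}

func main() {
	println(render()) // [2024-01-02T03:04:05Z] [INFO] app - started
}
```

Because `.Fields` is absent from the map, the trailing `{{ if .Fields }}` branch renders nothing.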
// validateRateLimit validates the pipeline-level rate limit settings.
func validateRateLimit(pipelineName string, cfg *RateLimitConfig) error {
if cfg == nil {
return nil
}
if cfg.Rate < 0 {
return fmt.Errorf("pipeline '%s': rate limit rate cannot be negative", pipelineName)
}
if cfg.Burst < 0 {
return fmt.Errorf("pipeline '%s': rate limit burst cannot be negative", pipelineName)
}
if cfg.MaxEntrySizeBytes < 0 {
return fmt.Errorf("pipeline '%s': max entry size bytes cannot be negative", pipelineName)
}
// Validate policy
switch strings.ToLower(cfg.Policy) {
case "", "pass", "drop":
// Valid policies
default:
return fmt.Errorf("pipeline '%s': invalid rate limit policy '%s' (must be 'pass' or 'drop')",
pipelineName, cfg.Policy)
}
return nil
}
// validateFilter validates a single filter's configuration.
func validateFilter(pipelineName string, filterIndex int, cfg *FilterConfig) error {
// Validate filter type
switch cfg.Type {
case FilterTypeInclude, FilterTypeExclude, "":
// Valid types
default:
return fmt.Errorf("pipeline '%s' filter[%d]: invalid type '%s' (must be 'include' or 'exclude')",
pipelineName, filterIndex, cfg.Type)
}
// Validate filter logic
switch cfg.Logic {
case FilterLogicOr, FilterLogicAnd, "":
// Valid logic
default:
return fmt.Errorf("pipeline '%s' filter[%d]: invalid logic '%s' (must be 'or' or 'and')",
pipelineName, filterIndex, cfg.Logic)
}
// Empty patterns is valid - passes everything
if len(cfg.Patterns) == 0 {
return nil
}
// Validate regex patterns
for i, pattern := range cfg.Patterns {
if _, err := regexp.Compile(pattern); err != nil {
return fmt.Errorf("pipeline '%s' filter[%d] pattern[%d] '%s': invalid regex: %w",
pipelineName, filterIndex, i, pattern, err)
}
}
return nil
}
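The type/logic combinations accepted above can be illustrated with a small standalone sketch: an `include` filter passes entries matching any (`or`) or all (`and`) patterns, and `exclude` inverts that decision. This is a simplified model of the semantics, not LogWisp's filter package:

```go
package main

import (
	"fmt"
	"regexp"
)

// passes models include/exclude filtering with "or"/"and" logic
// over pre-validated regex patterns.
func passes(text, typ, logic string, patterns []string) bool {
	if len(patterns) == 0 {
		return true // empty pattern list passes everything, matching the validator
	}
	matched := logic == "and"
	for _, p := range patterns {
		m := regexp.MustCompile(p).MatchString(text)
		if logic == "and" {
			matched = matched && m
		} else {
			matched = matched || m
		}
	}
	if typ == "exclude" {
		return !matched
	}
	return matched
}

func main() {
	fmt.Println(passes("ERROR disk full", "include", "or", []string{"ERROR", "FATAL"})) // true
	fmt.Println(passes("DEBUG heartbeat", "exclude", "or", []string{"DEBUG"}))          // false
}
```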
// validateDirectorySource validates the settings for a directory source.
func validateDirectorySource(pipelineName string, index int, opts *FileSourceOptions) error {
if err := lconfig.NonEmpty(opts.Directory); err != nil {
return fmt.Errorf("pipeline '%s' source[%d]: directory requires 'path'", pipelineName, index)
}
// Check for directory traversal before cleaning the path (filepath.Abs removes ".." elements)
if strings.Contains(opts.Directory, "..") {
return fmt.Errorf("pipeline '%s' source[%d]: path contains directory traversal", pipelineName, index)
}
absPath, err := filepath.Abs(opts.Directory)
if err != nil {
return fmt.Errorf("pipeline '%s' source[%d]: invalid path %s: %w", pipelineName, index, opts.Directory, err)
}
opts.Directory = absPath
// Validate pattern if provided
if opts.Pattern != "" {
if strings.Count(opts.Pattern, "*") == 0 && strings.Count(opts.Pattern, "?") == 0 {
// If no wildcards, ensure valid filename
if filepath.Base(opts.Pattern) != opts.Pattern {
return fmt.Errorf("pipeline '%s' source[%d]: pattern contains path separators", pipelineName, index)
}
}
} else {
opts.Pattern = "*"
}
// Validate check interval
if opts.CheckIntervalMS < 10 {
return fmt.Errorf("pipeline '%s' source[%d]: check_interval_ms must be at least 10ms", pipelineName, index)
}
return nil
}
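The pattern validated above behaves like a shell glob applied to base file names, with `"*"` as the match-everything default. A quick sketch of how such matching behaves using the standard library (an assumption about the watcher's matching, illustrated with `filepath.Match`):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// matches reports whether a file name satisfies a glob pattern,
// applied to the base name only.
func matches(pattern, name string) bool {
	ok, _ := filepath.Match(pattern, filepath.Base(name))
	return ok
}

func main() {
	for _, name := range []string{"app.log", "app.log.1", "notes.txt"} {
		fmt.Println(name, matches("*.log", name))
	}
	// app.log true
	// app.log.1 false
	// notes.txt false
}
```

Note that `*.log` does not match rotated files like `app.log.1`, which is why rotation-aware setups often watch with a broader pattern.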
// validateConsoleSource validates the settings for a console source.
func validateConsoleSource(pipelineName string, index int, opts *ConsoleSourceOptions) error {
if opts.BufferSize < 0 {
return fmt.Errorf("pipeline '%s' source[%d]: buffer_size cannot be negative", pipelineName, index)
} else if opts.BufferSize == 0 {
opts.BufferSize = 1000 // Default buffer size
}
return nil
}
// validateHTTPSource validates the settings for an HTTP source.
func validateHTTPSource(pipelineName string, index int, opts *HTTPSourceOptions) error {
// Validate port
if err := lconfig.Port(opts.Port); err != nil {
return fmt.Errorf("pipeline '%s' source[%d]: %w", pipelineName, index, err)
}
// Set defaults
if opts.Host == "" {
opts.Host = "0.0.0.0"
}
if opts.IngestPath == "" {
opts.IngestPath = "/ingest"
}
if opts.MaxRequestBodySize <= 0 {
opts.MaxRequestBodySize = 10 * 1024 * 1024 // 10MB default
}
if opts.ReadTimeout <= 0 {
opts.ReadTimeout = 5000 // 5 seconds
}
if opts.WriteTimeout <= 0 {
opts.WriteTimeout = 5000 // 5 seconds
}
// Validate host if specified
if opts.Host != "" && opts.Host != "0.0.0.0" {
if err := lconfig.IPAddress(opts.Host); err != nil {
return fmt.Errorf("pipeline '%s' source[%d]: %w", pipelineName, index, err)
}
}
// Validate paths
if !strings.HasPrefix(opts.IngestPath, "/") {
return fmt.Errorf("pipeline '%s' source[%d]: ingest_path must start with /", pipelineName, index)
}
// Validate auth configuration
validHTTPSourceAuthTypes := map[string]bool{"basic": true, "token": true, "mtls": true}
if opts.Auth != nil && opts.Auth.Type != "none" && opts.Auth.Type != "" {
if !validHTTPSourceAuthTypes[opts.Auth.Type] {
return fmt.Errorf("pipeline '%s' source[%d]: %s is not a valid auth type",
pipelineName, index, opts.Auth.Type)
}
// All non-none auth types require TLS for HTTP
if opts.TLS == nil || !opts.TLS.Enabled {
return fmt.Errorf("pipeline '%s' source[%d]: %s auth requires TLS to be enabled",
pipelineName, index, opts.Auth.Type)
}
}
// Validate nested configs
if opts.ACL != nil {
if err := validateACL(pipelineName, fmt.Sprintf("source[%d]", index), opts.ACL); err != nil {
return err
}
}
if opts.TLS != nil {
if err := validateTLSServer(pipelineName, fmt.Sprintf("source[%d]", index), opts.TLS); err != nil {
return err
}
}
return nil
}
// validateTCPSource validates the settings for a TCP source.
func validateTCPSource(pipelineName string, index int, opts *TCPSourceOptions) error {
// Validate port
if err := lconfig.Port(opts.Port); err != nil {
return fmt.Errorf("pipeline '%s' source[%d]: %w", pipelineName, index, err)
}
// Set defaults
if opts.Host == "" {
opts.Host = "0.0.0.0"
}
if opts.ReadTimeout <= 0 {
opts.ReadTimeout = 5000 // 5 seconds
}
// KeepAlive is always enabled: a plain bool cannot distinguish "unset" from "false"
opts.KeepAlive = true
if opts.KeepAlivePeriod <= 0 {
opts.KeepAlivePeriod = 30000 // 30 seconds
}
// Validate host if specified
if opts.Host != "" && opts.Host != "0.0.0.0" {
if err := lconfig.IPAddress(opts.Host); err != nil {
return fmt.Errorf("pipeline '%s' source[%d]: %w", pipelineName, index, err)
}
}
// Validate ACL if present
if opts.ACL != nil {
if err := validateACL(pipelineName, fmt.Sprintf("source[%d]", index), opts.ACL); err != nil {
return err
}
}
return nil
}
// validateConsoleSink validates the settings for a console sink.
func validateConsoleSink(pipelineName string, index int, opts *ConsoleSinkOptions) error {
if opts.BufferSize < 1 {
return fmt.Errorf("pipeline '%s' sink[%d]: buffer_size must be positive", pipelineName, index)
}
return nil
}
// validateFileSink validates the settings for a file sink.
func validateFileSink(pipelineName string, index int, opts *FileSinkOptions) error {
if err := lconfig.NonEmpty(opts.Directory); err != nil {
return fmt.Errorf("pipeline '%s' sink[%d]: file requires 'directory'", pipelineName, index)
}
if err := lconfig.NonEmpty(opts.Name); err != nil {
return fmt.Errorf("pipeline '%s' sink[%d]: file requires 'name'", pipelineName, index)
}
if opts.BufferSize <= 0 {
return fmt.Errorf("pipeline '%s' sink[%d]: buffer_size must be positive", pipelineName, index)
}
// Validate sizes
if opts.MaxSizeMB < 0 {
return fmt.Errorf("pipeline '%s' sink[%d]: max_size_mb cannot be negative", pipelineName, index)
}
if opts.MaxTotalSizeMB <= 0 {
return fmt.Errorf("pipeline '%s' sink[%d]: max_total_size_mb must be positive", pipelineName, index)
}
if opts.MinDiskFreeMB < 0 {
return fmt.Errorf("pipeline '%s' sink[%d]: min_disk_free_mb cannot be negative", pipelineName, index)
}
if opts.RetentionHours <= 0 {
return fmt.Errorf("pipeline '%s' sink[%d]: retention_hours must be positive", pipelineName, index)
}
return nil
}
// validateHTTPSink validates the settings for an HTTP sink.
func validateHTTPSink(pipelineName string, index int, opts *HTTPSinkOptions, allPorts map[int64]string) error {
// Validate port
if err := lconfig.Port(opts.Port); err != nil {
return fmt.Errorf("pipeline '%s' sink[%d]: %w", pipelineName, index, err)
}
// Check port conflicts
if existing, exists := allPorts[opts.Port]; exists {
return fmt.Errorf("pipeline '%s' sink[%d]: port %d already used by %s",
pipelineName, index, opts.Port, existing)
}
allPorts[opts.Port] = fmt.Sprintf("%s-http[%d]", pipelineName, index)
// Validate host if specified
if opts.Host != "" {
if err := lconfig.IPAddress(opts.Host); err != nil {
return fmt.Errorf("pipeline '%s' sink[%d]: %w", pipelineName, index, err)
}
}
// Validate paths
if !strings.HasPrefix(opts.StreamPath, "/") {
return fmt.Errorf("pipeline '%s' sink[%d]: stream_path must start with /", pipelineName, index)
}
if !strings.HasPrefix(opts.StatusPath, "/") {
return fmt.Errorf("pipeline '%s' sink[%d]: status_path must start with /", pipelineName, index)
}
// Validate buffer
if opts.BufferSize < 1 {
return fmt.Errorf("pipeline '%s' sink[%d]: buffer_size must be positive", pipelineName, index)
}
// Validate nested configs
if opts.Heartbeat != nil {
if err := validateHeartbeat(pipelineName, fmt.Sprintf("sink[%d]", index), opts.Heartbeat); err != nil {
return err
}
}
if opts.ACL != nil {
if err := validateACL(pipelineName, fmt.Sprintf("sink[%d]", index), opts.ACL); err != nil {
return err
}
}
if opts.TLS != nil {
if err := validateTLSServer(pipelineName, fmt.Sprintf("sink[%d]", index), opts.TLS); err != nil {
return err
}
}
return nil
}
// validateTCPSink validates the settings for a TCP sink.
func validateTCPSink(pipelineName string, index int, opts *TCPSinkOptions, allPorts map[int64]string) error {
// Validate port
if err := lconfig.Port(opts.Port); err != nil {
return fmt.Errorf("pipeline '%s' sink[%d]: %w", pipelineName, index, err)
}
// Check port conflicts
if existing, exists := allPorts[opts.Port]; exists {
return fmt.Errorf("pipeline '%s' sink[%d]: port %d already used by %s",
pipelineName, index, opts.Port, existing)
}
allPorts[opts.Port] = fmt.Sprintf("%s-tcp[%d]", pipelineName, index)
// Validate host if specified
if opts.Host != "" {
if err := lconfig.IPAddress(opts.Host); err != nil {
return fmt.Errorf("pipeline '%s' sink[%d]: %w", pipelineName, index, err)
}
}
// Validate buffer
if opts.BufferSize < 1 {
return fmt.Errorf("pipeline '%s' sink[%d]: buffer_size must be positive", pipelineName, index)
}
// Validate nested configs
if opts.Heartbeat != nil {
if err := validateHeartbeat(pipelineName, fmt.Sprintf("sink[%d]", index), opts.Heartbeat); err != nil {
return err
}
}
if opts.ACL != nil {
if err := validateACL(pipelineName, fmt.Sprintf("sink[%d]", index), opts.ACL); err != nil {
return err
}
}
return nil
}
// validateHTTPClientSink validates the settings for an HTTP client sink.
func validateHTTPClientSink(pipelineName string, index int, opts *HTTPClientSinkOptions) error {
// Validate URL
if err := lconfig.NonEmpty(opts.URL); err != nil {
return fmt.Errorf("pipeline '%s' sink[%d]: http_client requires 'url'", pipelineName, index)
}
parsedURL, err := url.Parse(opts.URL)
if err != nil {
return fmt.Errorf("pipeline '%s' sink[%d]: invalid URL: %w", pipelineName, index, err)
}
if parsedURL.Scheme != "http" && parsedURL.Scheme != "https" {
return fmt.Errorf("pipeline '%s' sink[%d]: URL must use http or https scheme", pipelineName, index)
}
// Set defaults for unspecified fields
if opts.BufferSize <= 0 {
opts.BufferSize = 1000
}
if opts.BatchSize <= 0 {
opts.BatchSize = 100
}
if opts.BatchDelayMS <= 0 {
opts.BatchDelayMS = 1000 // 1 second in ms
}
if opts.Timeout <= 0 {
opts.Timeout = 30 // 30 seconds
}
if opts.MaxRetries < 0 {
opts.MaxRetries = 3
}
if opts.RetryDelayMS <= 0 {
opts.RetryDelayMS = 1000 // 1 second in ms
}
if opts.RetryBackoff < 1.0 {
opts.RetryBackoff = 2.0
}
// Validate TLS config if present
if opts.TLS != nil {
if err := validateTLSClient(pipelineName, fmt.Sprintf("sink[%d]", index), opts.TLS); err != nil {
return err
}
}
return nil
}
// validateTCPClientSink validates the settings for a TCP client sink.
func validateTCPClientSink(pipelineName string, index int, opts *TCPClientSinkOptions) error {
// Validate host and port
if err := lconfig.NonEmpty(opts.Host); err != nil {
return fmt.Errorf("pipeline '%s' sink[%d]: tcp_client requires 'host'", pipelineName, index)
}
if err := lconfig.Port(opts.Port); err != nil {
return fmt.Errorf("pipeline '%s' sink[%d]: %w", pipelineName, index, err)
}
// Set defaults
if opts.BufferSize <= 0 {
opts.BufferSize = 1000
}
if opts.DialTimeout <= 0 {
opts.DialTimeout = 10
}
if opts.WriteTimeout <= 0 {
opts.WriteTimeout = 30 // 30 seconds
}
if opts.ReadTimeout <= 0 {
opts.ReadTimeout = 10 // 10 seconds
}
if opts.KeepAlive <= 0 {
opts.KeepAlive = 30 // 30 seconds
}
if opts.ReconnectDelayMS <= 0 {
opts.ReconnectDelayMS = 1000 // 1 second in ms
}
if opts.MaxReconnectDelayMS <= 0 {
opts.MaxReconnectDelayMS = 30000 // 30 seconds in ms
}
if opts.ReconnectBackoff < 1.0 {
opts.ReconnectBackoff = 1.5
}
return nil
}
// validateACL validates nested ACLConfig settings.
func validateACL(pipelineName, location string, nl *ACLConfig) error {
if !nl.Enabled {
return nil // Skip validation if disabled
}
if nl.MaxConnectionsPerIP < 0 {
return fmt.Errorf("pipeline '%s' %s: max_connections_per_ip cannot be negative", pipelineName, location)
}
if nl.MaxConnectionsTotal < 0 {
return fmt.Errorf("pipeline '%s' %s: max_connections_total cannot be negative", pipelineName, location)
}
if nl.MaxConnectionsTotal < nl.MaxConnectionsPerIP && nl.MaxConnectionsTotal != 0 {
return fmt.Errorf("pipeline '%s' %s: max_connections_total cannot be less than max_connections_per_ip", pipelineName, location)
}
if nl.BurstSize < 0 {
return fmt.Errorf("pipeline '%s' %s: burst_size cannot be negative", pipelineName, location)
}
return nil
}
// validateTLSServer validates the new TLSServerConfig struct.
func validateTLSServer(pipelineName, location string, tls *TLSServerConfig) error {
if !tls.Enabled {
return nil // Skip validation if disabled
}
// If TLS is enabled for a server, cert and key files are mandatory.
if tls.CertFile == "" || tls.KeyFile == "" {
return fmt.Errorf("pipeline '%s' %s: TLS enabled requires both cert_file and key_file", pipelineName, location)
}
// If mTLS (ClientAuth) is enabled, a client CA file is mandatory.
if tls.ClientAuth && tls.ClientCAFile == "" {
return fmt.Errorf("pipeline '%s' %s: client_auth is enabled, which requires a client_ca_file", pipelineName, location)
}
return nil
}
// validateTLSClient validates the new TLSClientConfig struct.
func validateTLSClient(pipelineName, location string, tls *TLSClientConfig) error {
if !tls.Enabled {
return nil // Skip validation if disabled
}
// If verification is not skipped, a server CA file must be provided.
if !tls.InsecureSkipVerify && tls.ServerCAFile == "" {
return fmt.Errorf("pipeline '%s' %s: TLS verification is enabled (insecure_skip_verify=false) but server_ca_file is not provided", pipelineName, location)
}
// For client mTLS, both the cert and key must be provided together.
if (tls.ClientCertFile != "" && tls.ClientKeyFile == "") || (tls.ClientCertFile == "" && tls.ClientKeyFile != "") {
return fmt.Errorf("pipeline '%s' %s: for client mTLS, both client_cert_file and client_key_file must be provided", pipelineName, location)
}
return nil
}
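The client-side rules enforced above (a server CA when verification is on, cert and key supplied together for mTLS) map directly onto `crypto/tls`. A hedged sketch of building such a config; the function and parameter names are illustrative, not LogWisp's config keys:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
)

// clientTLS builds a *tls.Config following the same rules validateTLSClient enforces.
func clientTLS(caFile, certFile, keyFile string, insecureSkipVerify bool) (*tls.Config, error) {
	cfg := &tls.Config{InsecureSkipVerify: insecureSkipVerify}
	if !insecureSkipVerify {
		// server_ca_file is mandatory when verification is enabled
		pem, err := os.ReadFile(caFile)
		if err != nil {
			return nil, err
		}
		pool := x509.NewCertPool()
		if !pool.AppendCertsFromPEM(pem) {
			return nil, fmt.Errorf("no certificates found in %s", caFile)
		}
		cfg.RootCAs = pool
	}
	// for client mTLS, both cert and key must be provided together
	if certFile != "" || keyFile != "" {
		cert, err := tls.LoadX509KeyPair(certFile, keyFile)
		if err != nil {
			return nil, err
		}
		cfg.Certificates = []tls.Certificate{cert}
	}
	return cfg, nil
}

func main() {
	// The skip-verify path needs no files at all
	cfg, err := clientTLS("", "", "", true)
	fmt.Println(err == nil, cfg.InsecureSkipVerify)
}
```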
// validateHeartbeat validates nested HeartbeatConfig settings.
func validateHeartbeat(pipelineName, location string, hb *HeartbeatConfig) error {
if !hb.Enabled {
return nil // Skip validation if disabled
}
if hb.IntervalMS < 1000 { // At least 1 second
return fmt.Errorf("pipeline '%s' %s: heartbeat interval must be at least 1000ms", pipelineName, location)
}
return nil
}

View File

@ -0,0 +1,42 @@
// FILE: logwisp/src/internal/core/const.go
package core
import (
"time"
)
const (
MaxLogEntryBytes = 1024 * 1024
MaxSessionTime = time.Minute * 30
FileWatcherPollInterval = 100 * time.Millisecond
HttpServerStartTimeout = 100 * time.Millisecond
HttpServerShutdownTimeout = 2 * time.Second
SessionDefaultMaxIdleTime = 30 * time.Minute
SessionCleanupInterval = 5 * time.Minute
NetLimitCleanupInterval = 30 * time.Second
NetLimitCleanupTimeout = 2 * time.Second
NetLimitStaleTimeout = 5 * time.Minute
NetLimitPeriodicCleanupInterval = 1 * time.Minute
ServiceStatsUpdateInterval = 1 * time.Second
ShutdownTimeout = 10 * time.Second
ConfigReloadTimeout = 30 * time.Second
LoggerShutdownTimeout = 2 * time.Second
ReloadWatchPollInterval = time.Second
ReloadWatchDebounce = 500 * time.Millisecond
ReloadWatchTimeout = 30 * time.Second
)

View File

@ -1,4 +1,4 @@
-// FILE: logwisp/src/internal/core/types.go
+// FILE: logwisp/src/internal/core/entry.go
package core
import (
@ -6,7 +6,7 @@ import (
"time"
)
-// LogEntry represents a single log record flowing through the pipeline
+// Represents a single log record flowing through the pipeline
type LogEntry struct {
Time time.Time `json:"time"`
Source string `json:"source"`

View File

@ -11,7 +11,7 @@ import (
"github.com/lixenwraith/log"
)
-// Chain manages multiple filters in sequence
+// Chain manages a sequence of filters, applying them in order.
type Chain struct {
filters []*Filter
logger *log.Logger
@ -21,7+21,7 @@ type Chain struct {
totalPassed atomic.Uint64
}
-// NewChain creates a new filter chain from configurations
+// NewChain creates a new filter chain from a slice of filter configurations.
func NewChain(configs []config.FilterConfig, logger *log.Logger) (*Chain, error) {
chain := &Chain{
filters: make([]*Filter, 0, len(configs)),
@ -29,7 +29,7 @@ func NewChain(configs []config.FilterConfig, logger *log.Logger) (*Chain, error)
}
for i, cfg := range configs {
-filter, err := New(cfg, logger)
+filter, err := NewFilter(cfg, logger)
if err != nil {
return nil, fmt.Errorf("filter[%d]: %w", i, err)
}
@ -42,8 +42,7 @@ func NewChain(configs []config.FilterConfig, logger *log.Logger) (*Chain, error)
return chain, nil
}
-// Apply runs all filters in sequence
-// Returns true if the entry passes all filters
+// Apply runs a log entry through all filters in the chain.
func (c *Chain) Apply(entry core.LogEntry) bool {
c.totalProcessed.Add(1)
@ -68,7 +67,7 @@ func (c *Chain) Apply(entry core.LogEntry) bool {
return true
}
-// GetStats returns chain statistics
+// GetStats returns aggregated statistics for the entire chain.
func (c *Chain) GetStats() map[string]any {
filterStats := make([]map[string]any, len(c.filters))
for i, filter := range c.filters {

View File

@ -13,7 +13,7 @@ import (
"github.com/lixenwraith/log"
)
-// Filter applies regex-based filtering to log entries
+// Filter applies regex-based filtering to log entries.
type Filter struct {
config config.FilterConfig
patterns []*regexp.Regexp
@ -26,8 +26,8 @@ type Filter struct {
totalDropped atomic.Uint64
}
-// New creates a new filter from configuration
-func New(cfg config.FilterConfig, logger *log.Logger) (*Filter, error) {
+// NewFilter creates a new filter from a configuration.
+func NewFilter(cfg config.FilterConfig, logger *log.Logger) (*Filter, error) {
// Set defaults
if cfg.Type == "" {
cfg.Type = config.FilterTypeInclude
@ -60,12 +60,15 @@ func New(cfg config.FilterConfig, logger *log.Logger) (*Filter, error) {
return f, nil
}
-// Apply checks if a log entry should be passed through
+// Apply determines if a log entry should be passed through based on the filter's rules.
func (f *Filter) Apply(entry core.LogEntry) bool {
f.totalProcessed.Add(1)
// No patterns means pass everything
if len(f.patterns) == 0 {
+f.logger.Debug("msg", "No patterns configured, passing entry",
+"component", "filter",
+"type", f.config.Type)
return true
}
@ -78,10 +81,32 @@ func (f *Filter) Apply(entry core.LogEntry) bool {
text = entry.Source + " " + text
}
+f.logger.Debug("msg", "Filter checking entry",
+"component", "filter",
+"type", f.config.Type,
+"logic", f.config.Logic,
+"entry_level", entry.Level,
+"entry_source", entry.Source,
+"entry_message", entry.Message[:min(100, len(entry.Message))], // First 100 chars
+"text_to_match", text[:min(150, len(text))], // First 150 chars
+"patterns", f.config.Patterns)
+for i, pattern := range f.config.Patterns {
+isMatch := f.patterns[i].MatchString(text)
+f.logger.Debug("msg", "Pattern match result",
+"component", "filter",
+"pattern_index", i,
+"pattern", pattern,
+"matched", isMatch)
+}
matched := f.matches(text)
if matched {
f.totalMatched.Add(1)
}
+f.logger.Debug("msg", "Filter final match result",
+"component", "filter",
+"matched", matched)
// Determine if we should pass or drop
shouldPass := false
@ -92,6 +117,12 @@ func (f *Filter) Apply(entry core.LogEntry) bool {
shouldPass = !matched
}
+f.logger.Debug("msg", "Filter decision",
+"component", "filter",
+"type", f.config.Type,
+"matched", matched,
+"should_pass", shouldPass)
if !shouldPass {
f.totalDropped.Add(1)
}
@ -99,7 +130,44 @@ func (f *Filter) Apply(entry core.LogEntry) bool {
return shouldPass
}
-// matches checks if text matches the patterns according to the logic
+// GetStats returns the filter's current statistics.
+func (f *Filter) GetStats() map[string]any {
+return map[string]any{
+"type": f.config.Type,
+"logic": f.config.Logic,
+"pattern_count": len(f.patterns),
+"total_processed": f.totalProcessed.Load(),
+"total_matched": f.totalMatched.Load(),
+"total_dropped": f.totalDropped.Load(),
+}
+}
+// UpdatePatterns allows for dynamic, thread-safe updates to the filter's regex patterns.
+func (f *Filter) UpdatePatterns(patterns []string) error {
+compiled := make([]*regexp.Regexp, 0, len(patterns))
+// Compile all patterns first
+for i, pattern := range patterns {
+re, err := regexp.Compile(pattern)
+if err != nil {
+return fmt.Errorf("invalid regex pattern[%d] '%s': %w", i, pattern, err)
+}
+compiled = append(compiled, re)
+}
+// Update atomically
+f.mu.Lock()
+f.patterns = compiled
+f.config.Patterns = patterns
+f.mu.Unlock()
+f.logger.Info("msg", "Filter patterns updated",
+"component", "filter",
+"pattern_count", len(patterns))
+return nil
+}
+// matches checks if the given text matches the filter's patterns according to its logic.
func (f *Filter) matches(text string) bool {
switch f.config.Logic {
case config.FilterLogicOr:
@ -128,40 +196,3 @@ func (f *Filter) matches(text string) bool {
return false
}
}
-// GetStats returns filter statistics
-func (f *Filter) GetStats() map[string]any {
-return map[string]any{
-"type": f.config.Type,
-"logic": f.config.Logic,
-"pattern_count": len(f.patterns),
-"total_processed": f.totalProcessed.Load(),
-"total_matched": f.totalMatched.Load(),
-"total_dropped": f.totalDropped.Load(),
-}
-}
-// UpdatePatterns allows dynamic pattern updates
-func (f *Filter) UpdatePatterns(patterns []string) error {
-compiled := make([]*regexp.Regexp, 0, len(patterns))
-// Compile all patterns first
-for i, pattern := range patterns {
-re, err := regexp.Compile(pattern)
-if err != nil {
-return fmt.Errorf("invalid regex pattern[%d] '%s': %w", i, pattern, err)
-}
-compiled = append(compiled, re)
-}
-// Update atomically
-f.mu.Lock()
-f.patterns = compiled
-f.config.Patterns = patterns
-f.mu.Unlock()
-f.logger.Info("msg", "Filter patterns updated",
-"component", "filter",
-"pattern_count", len(patterns))
-return nil
-}

View File

@ -1,5 +1,5 @@
-// FILE: logwisp/src/internal/limit/rate.go
+// FILE: src/internal/flow/rate.go
-package limit
+package flow
import (
"strings"
@ -7,13 +7,14 @@ import (
"logwisp/src/internal/config"
"logwisp/src/internal/core"
+"logwisp/src/internal/tokenbucket"
"github.com/lixenwraith/log"
)
// RateLimiter enforces rate limits on log entries flowing through a pipeline.
type RateLimiter struct {
-bucket *TokenBucket
+bucket *tokenbucket.TokenBucket
policy config.RateLimitPolicy
logger *log.Logger
@ -23,7 +24,7 @@ type RateLimiter struct {
droppedCount atomic.Uint64
}
-// NewRateLimiter creates a new rate limiter. If cfg.Rate is 0, it returns nil.
+// NewRateLimiter creates a new pipeline-level rate limiter from configuration.
func NewRateLimiter(cfg config.RateLimitConfig, logger *log.Logger) (*RateLimiter, error) {
if cfg.Rate <= 0 {
return nil, nil // No rate limit
@ -43,21 +44,16 @@ func NewRateLimiter(cfg config.RateLimitConfig, logger *log.Logger) (*RateLimite
}
l := &RateLimiter{
-bucket: NewTokenBucket(burst, cfg.Rate),
+bucket: tokenbucket.New(burst, cfg.Rate),
policy: policy,
logger: logger,
maxEntrySizeBytes: cfg.MaxEntrySizeBytes,
}
-if cfg.Rate > 0 {
-l.bucket = NewTokenBucket(burst, cfg.Rate)
-}
return l, nil
}
-// Allow checks if a log entry is allowed to pass based on the rate limit.
-// It returns true if the entry should pass, false if it should be dropped.
+// Allow checks if a log entry is permitted to pass based on the rate limit.
func (l *RateLimiter) Allow(entry core.LogEntry) bool {
if l == nil || l.policy == config.PolicyPass {
return true
@ -83,7 +79,7 @@ func (l *RateLimiter) Allow(entry core.LogEntry) bool {
return true
}
-// GetStats returns the statistics for the limiter.
+// GetStats returns statistics for the rate limiter.
func (l *RateLimiter) GetStats() map[string]any {
if l == nil {
return map[string]any{
@ -106,7 +102,7 @@ func (l *RateLimiter) GetStats() map[string]any {
return stats
}
-// policyString returns the string representation of the policy.
+// policyString returns the string representation of a rate limit policy.
func policyString(p config.RateLimitPolicy) string {
switch p {
case config.PolicyDrop:

View File

@ -4,6 +4,7 @@ package format
import (
"fmt"
+"logwisp/src/internal/config"
"logwisp/src/internal/core"
"github.com/lixenwraith/log"
@ -14,25 +15,27 @@
// Format takes a LogEntry and returns the formatted log as a byte slice.
Format(entry core.LogEntry) ([]byte, error)
-// Name returns the formatter type name
+// Name returns the formatter's type name (e.g., "json", "raw").
Name() string
}
-// New creates a new Formatter based on the provided configuration.
-func New(name string, options map[string]any, logger *log.Logger) (Formatter, error) {
-// Default to raw if no format specified
-if name == "" {
-name = "raw"
-}
-switch name {
+// NewFormatter is a factory function that creates a Formatter based on the provided configuration.
+func NewFormatter(cfg *config.FormatConfig, logger *log.Logger) (Formatter, error) {
+if cfg == nil {
+// Fallback to raw when no formatter configured
+return NewRawFormatter(&config.RawFormatterOptions{
+AddNewLine: true,
+}, logger)
+}
+switch cfg.Type {
case "json":
-return NewJSONFormatter(options, logger)
+return NewJSONFormatter(cfg.JSONFormatOptions, logger)
-case "text":
-return NewTextFormatter(options, logger)
+case "txt":
+return NewTxtFormatter(cfg.TxtFormatOptions, logger)
case "raw":
-return NewRawFormatter(options, logger)
+return NewRawFormatter(cfg.RawFormatOptions, logger)
default:
-return nil, fmt.Errorf("unknown formatter type: %s", name)
+return nil, fmt.Errorf("unknown formatter type: %s", cfg.Type)
}
}

View File

@ -6,60 +6,37 @@ import (
"fmt"
"time"
+"logwisp/src/internal/config"
"logwisp/src/internal/core"
"github.com/lixenwraith/log"
)
-// JSONFormatter produces structured JSON logs
+// JSONFormatter produces structured JSON logs from LogEntry objects.
type JSONFormatter struct {
-pretty bool
-timestampField string
-levelField string
-messageField string
-sourceField string
+config *config.JSONFormatterOptions
logger *log.Logger
}
-// NewJSONFormatter creates a new JSON formatter
-func NewJSONFormatter(options map[string]any, logger *log.Logger) (*JSONFormatter, error) {
+// NewJSONFormatter creates a new JSON formatter from configuration options.
+func NewJSONFormatter(opts *config.JSONFormatterOptions, logger *log.Logger) (*JSONFormatter, error) {
f := &JSONFormatter{
-timestampField: "timestamp",
-levelField: "level",
-messageField: "message",
-sourceField: "source",
+config: opts,
logger: logger,
}
-// Extract options
-if pretty, ok := options["pretty"].(bool); ok {
-f.pretty = pretty
-}
-if field, ok := options["timestamp_field"].(string); ok && field != "" {
-f.timestampField = field
-}
-if field, ok := options["level_field"].(string); ok && field != "" {
-f.levelField = field
-}
-if field, ok := options["message_field"].(string); ok && field != "" {
-f.messageField = field
-}
-if field, ok := options["source_field"].(string); ok && field != "" {
-f.sourceField = field
-}
return f, nil
}
-// Format formats the log entry as JSON
+// Format transforms a single LogEntry into a JSON byte slice.
func (f *JSONFormatter) Format(entry core.LogEntry) ([]byte, error) {
// Start with a clean map
output := make(map[string]any)
// First, populate with LogWisp metadata
-output[f.timestampField] = entry.Time.Format(time.RFC3339Nano)
-output[f.levelField] = entry.Level
-output[f.sourceField] = entry.Source
+output[f.config.TimestampField] = entry.Time.Format(time.RFC3339Nano)
+output[f.config.LevelField] = entry.Level
+output[f.config.SourceField] = entry.Source
// Try to parse the message as JSON
var msgData map[string]any
@ -68,21 +45,21 @@ func (f *JSONFormatter) Format(entry core.LogEntry) ([]byte, error) {
// LogWisp metadata takes precedence
for k, v := range msgData {
// Don't overwrite our standard fields
-if k != f.timestampField && k != f.levelField && k != f.sourceField {
+if k != f.config.TimestampField && k != f.config.LevelField && k != f.config.SourceField {
output[k] = v
}
}
// If the original JSON had these fields, log that we're overriding
-if _, hasTime := msgData[f.timestampField]; hasTime {
+if _, hasTime := msgData[f.config.TimestampField]; hasTime {
f.logger.Debug("msg", "Overriding timestamp from JSON message",
"component", "json_formatter",
-"original", msgData[f.timestampField],
-"logwisp", output[f.timestampField])
+"original", msgData[f.config.TimestampField],
+"logwisp", output[f.config.TimestampField])
}
} else {
// Message is not valid JSON - add as message field
-output[f.messageField] = entry.Message
+output[f.config.MessageField] = entry.Message
}
// Add any additional fields from LogEntry.Fields
@ -101,7 +78,7 @@ func (f *JSONFormatter) Format(entry core.LogEntry) ([]byte, error) {
// Marshal to JSON
var result []byte
var err error
-if f.pretty {
+if f.config.Pretty {
result, err = json.MarshalIndent(output, "", " ")
} else {
result, err = json.Marshal(output)
@ -115,13 +92,12 @@ func (f *JSONFormatter) Format(entry core.LogEntry) ([]byte, error) {
return append(result, '\n'), nil
}
-// Name returns the formatter name
+// Name returns the formatter's type name.
func (f *JSONFormatter) Name() string {
return "json"
}
-// FormatBatch formats multiple entries as a JSON array
-// This is a special method for sinks that need to batch entries
+// FormatBatch transforms a slice of LogEntry objects into a single JSON array byte slice.
func (f *JSONFormatter) FormatBatch(entries []core.LogEntry) ([]byte, error) {
// For batching, we need to create an array of formatted objects // For batching, we need to create an array of formatted objects
batch := make([]json.RawMessage, 0, len(entries)) batch := make([]json.RawMessage, 0, len(entries))
@ -147,7 +123,7 @@ func (f *JSONFormatter) FormatBatch(entries []core.LogEntry) ([]byte, error) {
// Marshal the entire batch as an array // Marshal the entire batch as an array
var result []byte var result []byte
var err error var err error
if f.pretty { if f.config.Pretty {
result, err = json.MarshalIndent(batch, "", " ") result, err = json.MarshalIndent(batch, "", " ")
} else { } else {
result, err = json.Marshal(batch) result, err = json.Marshal(batch)


@ -2,30 +2,36 @@
package format package format
import ( import (
"logwisp/src/internal/config"
"logwisp/src/internal/core" "logwisp/src/internal/core"
"github.com/lixenwraith/log" "github.com/lixenwraith/log"
) )
// RawFormatter outputs the log message as-is with a newline // RawFormatter outputs the raw log message, optionally with a newline.
type RawFormatter struct { type RawFormatter struct {
config *config.RawFormatterOptions
logger *log.Logger logger *log.Logger
} }
// NewRawFormatter creates a new raw formatter // NewRawFormatter creates a new raw pass-through formatter.
func NewRawFormatter(options map[string]any, logger *log.Logger) (*RawFormatter, error) { func NewRawFormatter(opts *config.RawFormatterOptions, logger *log.Logger) (*RawFormatter, error) {
return &RawFormatter{ return &RawFormatter{
config: opts,
logger: logger, logger: logger,
}, nil }, nil
} }
// Format returns the message with a newline appended // Format returns the raw message from the LogEntry as a byte slice.
func (f *RawFormatter) Format(entry core.LogEntry) ([]byte, error) { func (f *RawFormatter) Format(entry core.LogEntry) ([]byte, error) {
// Simply return the message with newline if f.config.AddNewLine {
return append([]byte(entry.Message), '\n'), nil return append([]byte(entry.Message), '\n'), nil // Add back the trimmed newline
} else {
return []byte(entry.Message), nil // Newlines between log entries are trimmed
}
} }
// Name returns the formatter name // Name returns the formatter's type name.
func (f *RawFormatter) Name() string { func (f *RawFormatter) Name() string {
return "raw" return "raw"
} }
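The `AddNewLine` toggle matters because sources trim the trailing newline from each entry; the raw sink can either restore it or emit the bytes untouched. A trivial stand-alone sketch of that behavior (names are illustrative):

```go
package main

import "fmt"

// formatRaw mirrors the pass-through behavior described above:
// sources trim the trailing newline, and addNewLine restores it.
func formatRaw(msg string, addNewLine bool) []byte {
	if addNewLine {
		return append([]byte(msg), '\n')
	}
	return []byte(msg)
}

func main() {
	fmt.Printf("%q\n", formatRaw("hello", true))  // "hello\n"
	fmt.Printf("%q\n", formatRaw("hello", false)) // "hello"
}
```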


@ -1,4 +1,4 @@
// FILE: logwisp/src/internal/format/text.go // FILE: logwisp/src/internal/format/txt.go
package format package format
import ( import (
@ -8,48 +8,37 @@ import (
"text/template" "text/template"
"time" "time"
"logwisp/src/internal/config"
"logwisp/src/internal/core" "logwisp/src/internal/core"
"github.com/lixenwraith/log" "github.com/lixenwraith/log"
) )
// TextFormatter produces human-readable text logs using templates // TxtFormatter produces human-readable, template-based text logs.
type TextFormatter struct { type TxtFormatter struct {
config *config.TxtFormatterOptions
template *template.Template template *template.Template
timestampFormat string
logger *log.Logger logger *log.Logger
} }
// NewTextFormatter creates a new text formatter // NewTxtFormatter creates a new text formatter from a template configuration.
func NewTextFormatter(options map[string]any, logger *log.Logger) (*TextFormatter, error) { func NewTxtFormatter(opts *config.TxtFormatterOptions, logger *log.Logger) (*TxtFormatter, error) {
// Default template f := &TxtFormatter{
templateStr := "[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Source}} - {{.Message}}{{ if .Fields }} {{.Fields}}{{ end }}" config: opts,
if tmpl, ok := options["template"].(string); ok && tmpl != "" {
templateStr = tmpl
}
// Default timestamp format
timestampFormat := time.RFC3339
if tsFormat, ok := options["timestamp_format"].(string); ok && tsFormat != "" {
timestampFormat = tsFormat
}
f := &TextFormatter{
timestampFormat: timestampFormat,
logger: logger, logger: logger,
} }
// Create template with helper functions // Create template with helper functions
funcMap := template.FuncMap{ funcMap := template.FuncMap{
"FmtTime": func(t time.Time) string { "FmtTime": func(t time.Time) string {
return t.Format(f.timestampFormat) return t.Format(f.config.TimestampFormat)
}, },
"ToUpper": strings.ToUpper, "ToUpper": strings.ToUpper,
"ToLower": strings.ToLower, "ToLower": strings.ToLower,
"TrimSpace": strings.TrimSpace, "TrimSpace": strings.TrimSpace,
} }
tmpl, err := template.New("log").Funcs(funcMap).Parse(templateStr) tmpl, err := template.New("log").Funcs(funcMap).Parse(f.config.Template)
if err != nil { if err != nil {
return nil, fmt.Errorf("invalid template: %w", err) return nil, fmt.Errorf("invalid template: %w", err)
} }
@ -58,8 +47,8 @@ func NewTextFormatter(options map[string]any, logger *log.Logger) (*TextFormatte
return f, nil return f, nil
} }
// Format formats the log entry using the template // Format transforms a LogEntry into a text byte slice using the configured template.
func (f *TextFormatter) Format(entry core.LogEntry) ([]byte, error) { func (f *TxtFormatter) Format(entry core.LogEntry) ([]byte, error) {
// Prepare data for template // Prepare data for template
data := map[string]any{ data := map[string]any{
"Timestamp": entry.Time, "Timestamp": entry.Time,
@ -82,11 +71,11 @@ func (f *TextFormatter) Format(entry core.LogEntry) ([]byte, error) {
if err := f.template.Execute(&buf, data); err != nil { if err := f.template.Execute(&buf, data); err != nil {
// Fallback: return a basic formatted message // Fallback: return a basic formatted message
f.logger.Debug("msg", "Template execution failed, using fallback", f.logger.Debug("msg", "Template execution failed, using fallback",
"component", "text_formatter", "component", "txt_formatter",
"error", err) "error", err)
fallback := fmt.Sprintf("[%s] [%s] %s - %s\n", fallback := fmt.Sprintf("[%s] [%s] %s - %s\n",
entry.Time.Format(f.timestampFormat), entry.Time.Format(f.config.TimestampFormat),
strings.ToUpper(entry.Level), strings.ToUpper(entry.Level),
entry.Source, entry.Source,
entry.Message) entry.Message)
@ -102,7 +91,7 @@ func (f *TextFormatter) Format(entry core.LogEntry) ([]byte, error) {
return result, nil return result, nil
} }
// Name returns the formatter name // Name returns the formatter's type name.
func (f *TextFormatter) Name() string { func (f *TxtFormatter) Name() string {
return "text" return "txt"
} }

File diff suppressed because it is too large

@ -3,25 +3,27 @@ package service
import ( import (
"context" "context"
"fmt"
"sync" "sync"
"sync/atomic" "sync/atomic"
"time" "time"
"logwisp/src/internal/config" "logwisp/src/internal/config"
"logwisp/src/internal/core"
"logwisp/src/internal/filter" "logwisp/src/internal/filter"
"logwisp/src/internal/limit" "logwisp/src/internal/flow"
"logwisp/src/internal/format"
"logwisp/src/internal/sink" "logwisp/src/internal/sink"
"logwisp/src/internal/source" "logwisp/src/internal/source"
"github.com/lixenwraith/log" "github.com/lixenwraith/log"
) )
// Pipeline manages the flow of data from sources through filters to sinks // Pipeline manages the flow of data from sources, through filters, to sinks.
type Pipeline struct { type Pipeline struct {
Name string Config *config.PipelineConfig
Config config.PipelineConfig
Sources []source.Source Sources []source.Source
RateLimiter *limit.RateLimiter RateLimiter *flow.RateLimiter
FilterChain *filter.Chain FilterChain *filter.Chain
Sinks []sink.Sink Sinks []sink.Sink
Stats *PipelineStats Stats *PipelineStats
@ -32,7 +34,7 @@ type Pipeline struct {
wg sync.WaitGroup wg sync.WaitGroup
} }
// PipelineStats contains statistics for a pipeline // PipelineStats contains runtime statistics for a pipeline.
type PipelineStats struct { type PipelineStats struct {
StartTime time.Time StartTime time.Time
TotalEntriesProcessed atomic.Uint64 TotalEntriesProcessed atomic.Uint64
@ -43,11 +45,116 @@ type PipelineStats struct {
FilterStats map[string]any FilterStats map[string]any
} }
// Shutdown gracefully stops the pipeline // NewPipeline creates, configures, and starts a new pipeline within the service.
func (s *Service) NewPipeline(cfg *config.PipelineConfig) error {
s.mu.Lock()
defer s.mu.Unlock()
if _, exists := s.pipelines[cfg.Name]; exists {
err := fmt.Errorf("pipeline '%s' already exists", cfg.Name)
s.logger.Error("msg", "Failed to create pipeline - duplicate name",
"component", "service",
"pipeline", cfg.Name,
"error", err)
return err
}
s.logger.Debug("msg", "Creating pipeline", "pipeline", cfg.Name)
// Create pipeline context
pipelineCtx, pipelineCancel := context.WithCancel(s.ctx)
// Create pipeline instance
pipeline := &Pipeline{
Config: cfg,
Stats: &PipelineStats{
StartTime: time.Now(),
},
ctx: pipelineCtx,
cancel: pipelineCancel,
logger: s.logger,
}
// Create sources
for i, srcCfg := range cfg.Sources {
src, err := s.createSource(&srcCfg)
if err != nil {
pipelineCancel()
return fmt.Errorf("failed to create source[%d]: %w", i, err)
}
pipeline.Sources = append(pipeline.Sources, src)
}
// Create pipeline rate limiter
if cfg.RateLimit != nil {
limiter, err := flow.NewRateLimiter(*cfg.RateLimit, s.logger)
if err != nil {
pipelineCancel()
return fmt.Errorf("failed to create pipeline rate limiter: %w", err)
}
pipeline.RateLimiter = limiter
}
// Create filter chain
if len(cfg.Filters) > 0 {
chain, err := filter.NewChain(cfg.Filters, s.logger)
if err != nil {
pipelineCancel()
return fmt.Errorf("failed to create filter chain: %w", err)
}
pipeline.FilterChain = chain
}
// Create formatter for the pipeline
formatter, err := format.NewFormatter(cfg.Format, s.logger)
if err != nil {
pipelineCancel()
return fmt.Errorf("failed to create formatter: %w", err)
}
// Create sinks
for i, sinkCfg := range cfg.Sinks {
sinkInst, err := s.createSink(sinkCfg, formatter)
if err != nil {
pipelineCancel()
return fmt.Errorf("failed to create sink[%d]: %w", i, err)
}
pipeline.Sinks = append(pipeline.Sinks, sinkInst)
}
// Start all sources
for i, src := range pipeline.Sources {
if err := src.Start(); err != nil {
pipeline.Shutdown()
return fmt.Errorf("failed to start source[%d]: %w", i, err)
}
}
// Start all sinks
for i, sinkInst := range pipeline.Sinks {
if err := sinkInst.Start(pipelineCtx); err != nil {
pipeline.Shutdown()
return fmt.Errorf("failed to start sink[%d]: %w", i, err)
}
}
// Wire sources to sinks through filters
s.wirePipeline(pipeline)
// Start stats updater
pipeline.startStatsUpdater(pipelineCtx)
s.pipelines[cfg.Name] = pipeline
s.logger.Info("msg", "Pipeline created successfully",
"pipeline", cfg.Name)
return nil
}
// Shutdown gracefully stops the pipeline and all its components.
func (p *Pipeline) Shutdown() { func (p *Pipeline) Shutdown() {
p.logger.Info("msg", "Shutting down pipeline", p.logger.Info("msg", "Shutting down pipeline",
"component", "pipeline", "component", "pipeline",
"pipeline", p.Name) "pipeline", p.Config.Name)
// Cancel context to stop processing // Cancel context to stop processing
p.cancel() p.cancel()
@ -78,17 +185,17 @@ func (p *Pipeline) Shutdown() {
p.logger.Info("msg", "Pipeline shutdown complete", p.logger.Info("msg", "Pipeline shutdown complete",
"component", "pipeline", "component", "pipeline",
"pipeline", p.Name) "pipeline", p.Config.Name)
} }
// GetStats returns pipeline statistics // GetStats returns a map of the pipeline's current statistics.
func (p *Pipeline) GetStats() map[string]any { func (p *Pipeline) GetStats() map[string]any {
// Recovery to handle concurrent access during shutdown // Recovery to handle concurrent access during shutdown
// When service is shutting down, sources/sinks might be nil or partially stopped // When service is shutting down, sources/sinks might be nil or partially stopped
defer func() { defer func() {
if r := recover(); r != nil { if r := recover(); r != nil {
p.logger.Error("msg", "Panic getting pipeline stats", p.logger.Error("msg", "Panic getting pipeline stats",
"pipeline", p.Name, "pipeline", p.Config.Name,
"panic", r) "panic", r)
} }
}() }()
@ -142,7 +249,7 @@ func (p *Pipeline) GetStats() map[string]any {
} }
return map[string]any{ return map[string]any{
"name": p.Name, "name": p.Config.Name,
"uptime_seconds": int(time.Since(p.Stats.StartTime).Seconds()), "uptime_seconds": int(time.Since(p.Stats.StartTime).Seconds()),
"total_processed": p.Stats.TotalEntriesProcessed.Load(), "total_processed": p.Stats.TotalEntriesProcessed.Load(),
"total_dropped_rate_limit": p.Stats.TotalEntriesDroppedByRateLimit.Load(), "total_dropped_rate_limit": p.Stats.TotalEntriesDroppedByRateLimit.Load(),
@ -157,10 +264,11 @@ func (p *Pipeline) GetStats() map[string]any {
} }
} }
// startStatsUpdater runs periodic stats updates // TODO: incomplete implementation
// startStatsUpdater runs a periodic stats updater.
func (p *Pipeline) startStatsUpdater(ctx context.Context) { func (p *Pipeline) startStatsUpdater(ctx context.Context) {
go func() { go func() {
ticker := time.NewTicker(1 * time.Second) ticker := time.NewTicker(core.ServiceStatsUpdateInterval)
defer ticker.Stop() defer ticker.Stop()
for { for {


@ -5,20 +5,17 @@ import (
"context" "context"
"fmt" "fmt"
"sync" "sync"
"time"
"logwisp/src/internal/config" "logwisp/src/internal/config"
"logwisp/src/internal/core" "logwisp/src/internal/core"
"logwisp/src/internal/filter"
"logwisp/src/internal/format" "logwisp/src/internal/format"
"logwisp/src/internal/limit"
"logwisp/src/internal/sink" "logwisp/src/internal/sink"
"logwisp/src/internal/source" "logwisp/src/internal/source"
"github.com/lixenwraith/log" "github.com/lixenwraith/log"
) )
// Service manages multiple pipelines // Service manages a collection of log processing pipelines.
type Service struct { type Service struct {
pipelines map[string]*Pipeline pipelines map[string]*Pipeline
mu sync.RWMutex mu sync.RWMutex
@ -28,8 +25,8 @@ type Service struct {
logger *log.Logger logger *log.Logger
} }
// New creates a new service // NewService creates a new, empty service.
func New(ctx context.Context, logger *log.Logger) *Service { func NewService(ctx context.Context, logger *log.Logger) *Service {
serviceCtx, cancel := context.WithCancel(ctx) serviceCtx, cancel := context.WithCancel(ctx)
return &Service{ return &Service{
pipelines: make(map[string]*Pipeline), pipelines: make(map[string]*Pipeline),
@ -39,125 +36,97 @@ func New(ctx context.Context, logger *log.Logger) *Service {
} }
} }
// NewPipeline creates and starts a new pipeline // GetPipeline returns a pipeline by its name.
func (s *Service) NewPipeline(cfg config.PipelineConfig) error { func (s *Service) GetPipeline(name string) (*Pipeline, error) {
s.mu.RLock()
defer s.mu.RUnlock()
pipeline, exists := s.pipelines[name]
if !exists {
return nil, fmt.Errorf("pipeline '%s' not found", name)
}
return pipeline, nil
}
// ListPipelines returns the names of all currently managed pipelines.
func (s *Service) ListPipelines() []string {
s.mu.RLock()
defer s.mu.RUnlock()
names := make([]string, 0, len(s.pipelines))
for name := range s.pipelines {
names = append(names, name)
}
return names
}
// RemovePipeline stops and removes a pipeline from the service.
func (s *Service) RemovePipeline(name string) error {
s.mu.Lock() s.mu.Lock()
defer s.mu.Unlock() defer s.mu.Unlock()
if _, exists := s.pipelines[cfg.Name]; exists { pipeline, exists := s.pipelines[name]
err := fmt.Errorf("pipeline '%s' already exists", cfg.Name) if !exists {
s.logger.Error("msg", "Failed to create pipeline - duplicate name", err := fmt.Errorf("pipeline '%s' not found", name)
s.logger.Warn("msg", "Cannot remove non-existent pipeline",
"component", "service", "component", "service",
"pipeline", cfg.Name, "pipeline", name,
"error", err) "error", err)
return err return err
} }
s.logger.Debug("msg", "Creating pipeline", "pipeline", cfg.Name) s.logger.Info("msg", "Removing pipeline", "pipeline", name)
// Create pipeline context
pipelineCtx, pipelineCancel := context.WithCancel(s.ctx)
// Create pipeline instance
pipeline := &Pipeline{
Name: cfg.Name,
Config: cfg,
Stats: &PipelineStats{
StartTime: time.Now(),
},
ctx: pipelineCtx,
cancel: pipelineCancel,
logger: s.logger,
}
// Create sources
for i, srcCfg := range cfg.Sources {
src, err := s.createSource(srcCfg)
if err != nil {
pipelineCancel()
return fmt.Errorf("failed to create source[%d]: %w", i, err)
}
pipeline.Sources = append(pipeline.Sources, src)
}
// Create pipeline rate limiter
if cfg.RateLimit != nil {
limiter, err := limit.NewRateLimiter(*cfg.RateLimit, s.logger)
if err != nil {
pipelineCancel()
return fmt.Errorf("failed to create pipeline rate limiter: %w", err)
}
pipeline.RateLimiter = limiter
}
// Create filter chain
if len(cfg.Filters) > 0 {
chain, err := filter.NewChain(cfg.Filters, s.logger)
if err != nil {
pipelineCancel()
return fmt.Errorf("failed to create filter chain: %w", err)
}
pipeline.FilterChain = chain
}
// Create formatter for the pipeline
var formatter format.Formatter
var err error
if cfg.Format != "" || len(cfg.FormatOptions) > 0 {
formatter, err = format.New(cfg.Format, cfg.FormatOptions, s.logger)
if err != nil {
pipelineCancel()
return fmt.Errorf("failed to create formatter: %w", err)
}
}
// Create sinks
for i, sinkCfg := range cfg.Sinks {
sinkInst, err := s.createSink(sinkCfg, formatter)
if err != nil {
pipelineCancel()
return fmt.Errorf("failed to create sink[%d]: %w", i, err)
}
pipeline.Sinks = append(pipeline.Sinks, sinkInst)
}
// Start all sources
for i, src := range pipeline.Sources {
if err := src.Start(); err != nil {
pipeline.Shutdown() pipeline.Shutdown()
return fmt.Errorf("failed to start source[%d]: %w", i, err) delete(s.pipelines, name)
}
}
// Start all sinks
for i, sinkInst := range pipeline.Sinks {
if err := sinkInst.Start(pipelineCtx); err != nil {
pipeline.Shutdown()
return fmt.Errorf("failed to start sink[%d]: %w", i, err)
}
}
// Configure authentication for sinks that support it
for _, sinkInst := range pipeline.Sinks {
if setter, ok := sinkInst.(sink.AuthSetter); ok {
setter.SetAuthConfig(cfg.Auth)
}
}
// Wire sources to sinks through filters
s.wirePipeline(pipeline)
// Start stats updater
pipeline.startStatsUpdater(pipelineCtx)
s.pipelines[cfg.Name] = pipeline
s.logger.Info("msg", "Pipeline created successfully",
"pipeline", cfg.Name,
"auth_enabled", cfg.Auth != nil && cfg.Auth.Type != "none")
return nil return nil
} }
// wirePipeline connects sources to sinks through filters // Shutdown gracefully stops all pipelines managed by the service.
func (s *Service) Shutdown() {
s.logger.Info("msg", "Service shutdown initiated")
s.mu.Lock()
pipelines := make([]*Pipeline, 0, len(s.pipelines))
for _, pipeline := range s.pipelines {
pipelines = append(pipelines, pipeline)
}
s.mu.Unlock()
// Stop all pipelines concurrently
var wg sync.WaitGroup
for _, pipeline := range pipelines {
wg.Add(1)
go func(p *Pipeline) {
defer wg.Done()
p.Shutdown()
}(pipeline)
}
wg.Wait()
s.cancel()
s.wg.Wait()
s.logger.Info("msg", "Service shutdown complete")
}
// GetGlobalStats returns statistics for all pipelines.
func (s *Service) GetGlobalStats() map[string]any {
s.mu.RLock()
defer s.mu.RUnlock()
stats := map[string]any{
"pipelines": make(map[string]any),
"total_pipelines": len(s.pipelines),
}
for name, pipeline := range s.pipelines {
stats["pipelines"].(map[string]any)[name] = pipeline.GetStats()
}
return stats
}
// wirePipeline connects a pipeline's sources to its sinks through its filter chain.
func (s *Service) wirePipeline(p *Pipeline) { func (s *Service) wirePipeline(p *Pipeline) {
// For each source, subscribe and process entries // For each source, subscribe and process entries
for _, src := range p.Sources { for _, src := range p.Sources {
@ -172,17 +141,17 @@ func (s *Service) wirePipeline(p *Pipeline) {
defer func() { defer func() {
if r := recover(); r != nil { if r := recover(); r != nil {
s.logger.Error("msg", "Panic in pipeline processing", s.logger.Error("msg", "Panic in pipeline processing",
"pipeline", p.Name, "pipeline", p.Config.Name,
"source", source.GetStats().Type, "source", source.GetStats().Type,
"panic", r) "panic", r)
// Ensure failed pipelines don't leave resources hanging // Ensure failed pipelines don't leave resources hanging
go func() { go func() {
s.logger.Warn("msg", "Shutting down pipeline due to panic", s.logger.Warn("msg", "Shutting down pipeline due to panic",
"pipeline", p.Name) "pipeline", p.Config.Name)
if err := s.RemovePipeline(p.Name); err != nil { if err := s.RemovePipeline(p.Config.Name); err != nil {
s.logger.Error("msg", "Failed to remove panicked pipeline", s.logger.Error("msg", "Failed to remove panicked pipeline",
"pipeline", p.Name, "pipeline", p.Config.Name,
"error", err) "error", err)
} }
}() }()
@ -225,7 +194,7 @@ func (s *Service) wirePipeline(p *Pipeline) {
default: default:
// Drop if sink buffer is full, may flood logging for slow client // Drop if sink buffer is full, may flood logging for slow client
s.logger.Debug("msg", "Dropped log entry - sink buffer full", s.logger.Debug("msg", "Dropped log entry - sink buffer full",
"pipeline", p.Name) "pipeline", p.Config.Name)
} }
} }
} }
@ -234,159 +203,47 @@ func (s *Service) wirePipeline(p *Pipeline) {
} }
} }
// createSource creates a source instance based on configuration // createSource is a factory function for creating a source instance from configuration.
func (s *Service) createSource(cfg config.SourceConfig) (source.Source, error) { func (s *Service) createSource(cfg *config.SourceConfig) (source.Source, error) {
switch cfg.Type { switch cfg.Type {
case "directory": case "file":
return source.NewDirectorySource(cfg.Options, s.logger) return source.NewFileSource(cfg.File, s.logger)
case "stdin": case "console":
return source.NewStdinSource(cfg.Options, s.logger) return source.NewConsoleSource(cfg.Console, s.logger)
case "http": case "http":
return source.NewHTTPSource(cfg.Options, s.logger) return source.NewHTTPSource(cfg.HTTP, s.logger)
case "tcp": case "tcp":
return source.NewTCPSource(cfg.Options, s.logger) return source.NewTCPSource(cfg.TCP, s.logger)
default: default:
return nil, fmt.Errorf("unknown source type: %s", cfg.Type) return nil, fmt.Errorf("unknown source type: %s", cfg.Type)
} }
} }
// createSink creates a sink instance based on configuration // createSink is a factory function for creating a sink instance from configuration.
func (s *Service) createSink(cfg config.SinkConfig, formatter format.Formatter) (sink.Sink, error) { func (s *Service) createSink(cfg config.SinkConfig, formatter format.Formatter) (sink.Sink, error) {
if formatter == nil {
// Default formatters for different sink types
defaultFormat := "raw"
switch cfg.Type {
case "http", "tcp", "http_client", "tcp_client":
defaultFormat = "json"
}
var err error
formatter, err = format.New(defaultFormat, nil, s.logger)
if err != nil {
return nil, fmt.Errorf("failed to create default formatter: %w", err)
}
}
switch cfg.Type { switch cfg.Type {
case "http": case "http":
return sink.NewHTTPSink(cfg.Options, s.logger, formatter) if cfg.HTTP == nil {
return nil, fmt.Errorf("HTTP sink configuration missing")
}
return sink.NewHTTPSink(cfg.HTTP, s.logger, formatter)
case "tcp": case "tcp":
return sink.NewTCPSink(cfg.Options, s.logger, formatter) if cfg.TCP == nil {
return nil, fmt.Errorf("TCP sink configuration missing")
}
return sink.NewTCPSink(cfg.TCP, s.logger, formatter)
case "http_client": case "http_client":
return sink.NewHTTPClientSink(cfg.Options, s.logger, formatter) return sink.NewHTTPClientSink(cfg.HTTPClient, s.logger, formatter)
case "tcp_client": case "tcp_client":
return sink.NewTCPClientSink(cfg.Options, s.logger, formatter) return sink.NewTCPClientSink(cfg.TCPClient, s.logger, formatter)
case "file": case "file":
return sink.NewFileSink(cfg.Options, s.logger, formatter) return sink.NewFileSink(cfg.File, s.logger, formatter)
case "stdout": case "console":
return sink.NewStdoutSink(cfg.Options, s.logger, formatter) return sink.NewConsoleSink(cfg.Console, s.logger, formatter)
case "stderr":
return sink.NewStderrSink(cfg.Options, s.logger, formatter)
default: default:
return nil, fmt.Errorf("unknown sink type: %s", cfg.Type) return nil, fmt.Errorf("unknown sink type: %s", cfg.Type)
} }
} }
// GetPipeline returns a pipeline by name
func (s *Service) GetPipeline(name string) (*Pipeline, error) {
s.mu.RLock()
defer s.mu.RUnlock()
pipeline, exists := s.pipelines[name]
if !exists {
return nil, fmt.Errorf("pipeline '%s' not found", name)
}
return pipeline, nil
}
// ListStreams is deprecated, use ListPipelines
func (s *Service) ListStreams() []string {
s.logger.Warn("msg", "ListStreams is deprecated, use ListPipelines",
"component", "service")
return s.ListPipelines()
}
// ListPipelines returns all pipeline names
func (s *Service) ListPipelines() []string {
s.mu.RLock()
defer s.mu.RUnlock()
names := make([]string, 0, len(s.pipelines))
for name := range s.pipelines {
names = append(names, name)
}
return names
}
// RemoveStream is deprecated, use RemovePipeline
func (s *Service) RemoveStream(name string) error {
s.logger.Warn("msg", "RemoveStream is deprecated, use RemovePipeline",
"component", "service")
return s.RemovePipeline(name)
}
// RemovePipeline stops and removes a pipeline
func (s *Service) RemovePipeline(name string) error {
s.mu.Lock()
defer s.mu.Unlock()
pipeline, exists := s.pipelines[name]
if !exists {
err := fmt.Errorf("pipeline '%s' not found", name)
s.logger.Warn("msg", "Cannot remove non-existent pipeline",
"component", "service",
"pipeline", name,
"error", err)
return err
}
s.logger.Info("msg", "Removing pipeline", "pipeline", name)
pipeline.Shutdown()
delete(s.pipelines, name)
return nil
}
// Shutdown stops all pipelines
func (s *Service) Shutdown() {
s.logger.Info("msg", "Service shutdown initiated")
s.mu.Lock()
pipelines := make([]*Pipeline, 0, len(s.pipelines))
for _, pipeline := range s.pipelines {
pipelines = append(pipelines, pipeline)
}
s.mu.Unlock()
// Stop all pipelines concurrently
var wg sync.WaitGroup
for _, pipeline := range pipelines {
wg.Add(1)
go func(p *Pipeline) {
defer wg.Done()
p.Shutdown()
}(pipeline)
}
wg.Wait()
s.cancel()
s.wg.Wait()
s.logger.Info("msg", "Service shutdown complete")
}
// GetGlobalStats returns statistics for all pipelines
func (s *Service) GetGlobalStats() map[string]any {
s.mu.RLock()
defer s.mu.RUnlock()
stats := map[string]any{
"pipelines": make(map[string]any),
"total_pipelines": len(s.pipelines),
}
for name, pipeline := range s.pipelines {
stats["pipelines"].(map[string]any)[name] = pipeline.GetStats()
}
return stats
}


@ -0,0 +1,292 @@
// FILE: src/internal/session/session.go
package session
import (
"crypto/rand"
"encoding/base64"
"fmt"
"sync"
"time"
"logwisp/src/internal/core"
)
// Session represents a connection session.
type Session struct {
ID string // Unique session identifier
RemoteAddr string // Client address
CreatedAt time.Time // Session creation time
LastActivity time.Time // Last activity timestamp
Metadata map[string]any // Optional metadata (e.g., TLS info)
// Connection context
Source string // Source type: "tcp_source", "http_source", "tcp_sink", etc.
}
// Manager handles the lifecycle of sessions.
type Manager struct {
sessions map[string]*Session
mu sync.RWMutex
// Cleanup configuration
maxIdleTime time.Duration
cleanupTicker *time.Ticker
done chan struct{}
// Expiry callbacks by source type
expiryCallbacks map[string]func(sessionID, remoteAddr string)
callbacksMu sync.RWMutex
}
// NewManager creates a new session manager with a specified idle timeout.
func NewManager(maxIdleTime time.Duration) *Manager {
if maxIdleTime == 0 {
maxIdleTime = core.SessionDefaultMaxIdleTime
}
m := &Manager{
sessions: make(map[string]*Session),
maxIdleTime: maxIdleTime,
done: make(chan struct{}),
}
// Start cleanup routine
m.startCleanup()
return m
}
// CreateSession creates and stores a new session for a connection.
func (m *Manager) CreateSession(remoteAddr string, source string, metadata map[string]any) *Session {
session := &Session{
ID: generateSessionID(),
RemoteAddr: remoteAddr,
CreatedAt: time.Now(),
LastActivity: time.Now(),
Source: source,
Metadata: metadata,
}
if metadata == nil {
session.Metadata = make(map[string]any)
}
m.StoreSession(session)
return session
}
// StoreSession adds a session to the manager.
func (m *Manager) StoreSession(session *Session) {
m.mu.Lock()
defer m.mu.Unlock()
m.sessions[session.ID] = session
}
// GetSession retrieves a session by its unique ID.
func (m *Manager) GetSession(sessionID string) (*Session, bool) {
m.mu.RLock()
defer m.mu.RUnlock()
session, exists := m.sessions[sessionID]
return session, exists
}
// RemoveSession removes a session from the manager.
func (m *Manager) RemoveSession(sessionID string) {
m.mu.Lock()
defer m.mu.Unlock()
delete(m.sessions, sessionID)
}
// UpdateActivity updates the last activity timestamp for a session.
func (m *Manager) UpdateActivity(sessionID string) {
m.mu.Lock()
defer m.mu.Unlock()
if session, exists := m.sessions[sessionID]; exists {
session.LastActivity = time.Now()
}
}
// IsSessionActive checks if a session exists and has not been idle for too long.
func (m *Manager) IsSessionActive(sessionID string) bool {
m.mu.RLock()
defer m.mu.RUnlock()
if session, exists := m.sessions[sessionID]; exists {
// Session exists and hasn't exceeded idle timeout
return time.Since(session.LastActivity) < m.maxIdleTime
}
return false
}
// GetActiveSessions returns a snapshot of all currently active sessions.
func (m *Manager) GetActiveSessions() []*Session {
m.mu.RLock()
defer m.mu.RUnlock()
sessions := make([]*Session, 0, len(m.sessions))
for _, session := range m.sessions {
sessions = append(sessions, session)
}
return sessions
}
// GetSessionCount returns the number of active sessions.
func (m *Manager) GetSessionCount() int {
m.mu.RLock()
defer m.mu.RUnlock()
return len(m.sessions)
}
// GetSessionsBySource returns all sessions matching a specific source type.
func (m *Manager) GetSessionsBySource(source string) []*Session {
m.mu.RLock()
defer m.mu.RUnlock()
var sessions []*Session
for _, session := range m.sessions {
if session.Source == source {
sessions = append(sessions, session)
}
}
return sessions
}
// GetActiveSessionsBySource returns all active sessions for a given source.
func (m *Manager) GetActiveSessionsBySource(source string) []*Session {
m.mu.RLock()
defer m.mu.RUnlock()
var sessions []*Session
now := time.Now()
for _, session := range m.sessions {
if session.Source == source && now.Sub(session.LastActivity) < m.maxIdleTime {
sessions = append(sessions, session)
}
}
return sessions
}
// GetStats returns statistics about the session manager.
func (m *Manager) GetStats() map[string]any {
m.mu.RLock()
defer m.mu.RUnlock()
sourceCounts := make(map[string]int)
var totalSessions int
var oldestSession time.Time
var newestSession time.Time
for _, session := range m.sessions {
totalSessions++
sourceCounts[session.Source]++
if oldestSession.IsZero() || session.CreatedAt.Before(oldestSession) {
oldestSession = session.CreatedAt
}
if newestSession.IsZero() || session.CreatedAt.After(newestSession) {
newestSession = session.CreatedAt
}
}
stats := map[string]any{
"total_sessions": totalSessions,
"sessions_by_type": sourceCounts,
"max_idle_time": m.maxIdleTime.String(),
}
if !oldestSession.IsZero() {
stats["oldest_session_age"] = time.Since(oldestSession).String()
}
if !newestSession.IsZero() {
stats["newest_session_age"] = time.Since(newestSession).String()
}
return stats
}
// Stop gracefully stops the session manager and its cleanup goroutine.
func (m *Manager) Stop() {
close(m.done)
if m.cleanupTicker != nil {
m.cleanupTicker.Stop()
}
}
// RegisterExpiryCallback registers a callback function to be executed when a session expires.
func (m *Manager) RegisterExpiryCallback(source string, callback func(sessionID, remoteAddr string)) {
m.callbacksMu.Lock()
defer m.callbacksMu.Unlock()
if m.expiryCallbacks == nil {
m.expiryCallbacks = make(map[string]func(sessionID, remoteAddr string))
}
m.expiryCallbacks[source] = callback
}
// UnregisterExpiryCallback removes an expiry callback for a given source type.
func (m *Manager) UnregisterExpiryCallback(source string) {
m.callbacksMu.Lock()
defer m.callbacksMu.Unlock()
delete(m.expiryCallbacks, source)
}
// startCleanup initializes the periodic cleanup of idle sessions.
func (m *Manager) startCleanup() {
m.cleanupTicker = time.NewTicker(core.SessionCleanupInterval)
go func() {
for {
select {
case <-m.cleanupTicker.C:
m.cleanupIdleSessions()
case <-m.done:
return
}
}
}()
}
// cleanupIdleSessions removes sessions that have exceeded the maximum idle time.
func (m *Manager) cleanupIdleSessions() {
	m.mu.Lock()
	now := time.Now()
	expiredSessions := make([]*Session, 0)
	for id, session := range m.sessions {
		if now.Sub(session.LastActivity) > m.maxIdleTime {
			expiredSessions = append(expiredSessions, session)
			delete(m.sessions, id)
		}
	}
	// Note: no deferred unlock here; the mutex is released explicitly so
	// expiry callbacks run outside the session lock (a deferred unlock on
	// top of this explicit one would panic with "unlock of unlocked mutex").
	m.mu.Unlock()
	// Call callbacks outside of lock
	if len(expiredSessions) > 0 {
		m.callbacksMu.RLock()
		defer m.callbacksMu.RUnlock()
		for _, session := range expiredSessions {
			if callback, exists := m.expiryCallbacks[session.Source]; exists {
				// Call callback to notify owner
				go callback(session.ID, session.RemoteAddr)
			}
		}
	}
}
// generateSessionID creates a unique, random session identifier.
func generateSessionID() string {
b := make([]byte, 16)
if _, err := rand.Read(b); err != nil {
// Fallback to timestamp-based ID
return fmt.Sprintf("session_%d", time.Now().UnixNano())
}
return base64.URLEncoding.WithPadding(base64.NoPadding).EncodeToString(b)
}
@@ -3,62 +3,73 @@ package sink

import (
	"context"
	"fmt"
	"strings"
	"sync/atomic"
	"time"

	"logwisp/src/internal/config"
	"logwisp/src/internal/core"
	"logwisp/src/internal/format"

	"github.com/lixenwraith/log"
)

// ConsoleSink writes log entries to the console (stdout/stderr) using a dedicated logger instance.
type ConsoleSink struct {
	// Configuration
	config *config.ConsoleSinkOptions

	// Application
	input     chan core.LogEntry
	writer    *log.Logger // dedicated logger for console output
	formatter format.Formatter
	logger    *log.Logger // application logger

	// Runtime
	done      chan struct{}
	startTime time.Time

	// Statistics
	totalProcessed atomic.Uint64
	lastProcessed  atomic.Value // time.Time
}

// NewConsoleSink creates a new console sink.
func NewConsoleSink(opts *config.ConsoleSinkOptions, appLogger *log.Logger, formatter format.Formatter) (*ConsoleSink, error) {
	if opts == nil {
		return nil, fmt.Errorf("console sink options cannot be nil")
	}

	// Set defaults if not configured
	if opts.Target == "" {
		opts.Target = "stdout"
	}
	if opts.BufferSize <= 0 {
		opts.BufferSize = 1000
	}

	// Dedicated logger instance as console writer
	writer, err := log.NewBuilder().
		EnableFile(false).
		EnableConsole(true).
		ConsoleTarget(opts.Target).
		Format("raw").        // Passthrough pre-formatted messages
		ShowTimestamp(false). // Disable writer's own timestamp
		ShowLevel(false).     // Disable writer's own level prefix
		Build()
	if err != nil {
		return nil, fmt.Errorf("failed to create console writer: %w", err)
	}

	s := &ConsoleSink{
		config:    opts,
		input:     make(chan core.LogEntry, opts.BufferSize),
		writer:    writer,
		done:      make(chan struct{}),
		startTime: time.Now(),
		logger:    appLogger,
		formatter: formatter,
	}
	s.lastProcessed.Store(time.Time{})
@@ -66,39 +77,56 @@
	return s, nil
}

// Input returns the channel for sending log entries.
func (s *ConsoleSink) Input() chan<- core.LogEntry {
	return s.input
}

// Start begins the processing loop for the sink.
func (s *ConsoleSink) Start(ctx context.Context) error {
	// Start the internal writer's processing goroutine.
	if err := s.writer.Start(); err != nil {
		return fmt.Errorf("failed to start console writer: %w", err)
	}

	go s.processLoop(ctx)
	s.logger.Info("msg", "Console sink started",
		"component", "console_sink",
		"target", s.writer.GetConfig().ConsoleTarget)
	return nil
}

// Stop gracefully shuts down the sink.
func (s *ConsoleSink) Stop() {
	target := s.writer.GetConfig().ConsoleTarget
	s.logger.Info("msg", "Stopping console sink", "target", target)
	close(s.done)

	// Shutdown the internal writer with a timeout.
	if err := s.writer.Shutdown(2 * time.Second); err != nil {
		s.logger.Error("msg", "Error shutting down console writer",
			"component", "console_sink",
			"error", err)
	}
	s.logger.Info("msg", "Console sink stopped", "target", target)
}

// GetStats returns the sink's statistics.
func (s *ConsoleSink) GetStats() SinkStats {
	lastProc, _ := s.lastProcessed.Load().(time.Time)
	return SinkStats{
		Type:           "console",
		TotalProcessed: s.totalProcessed.Load(),
		StartTime:      s.startTime,
		LastProcessed:  lastProc,
		Details: map[string]any{
			"target": s.writer.GetConfig().ConsoleTarget,
		},
	}
}

// processLoop reads entries, formats them, and writes to the console.
func (s *ConsoleSink) processLoop(ctx context.Context) {
	for {
		select {
		case entry, ok := <-s.input:
@@ -109,135 +137,29 @@
			s.totalProcessed.Add(1)
			s.lastProcessed.Store(time.Now())

			// Format the entry using the pipeline's configured formatter.
			formatted, err := s.formatter.Format(entry)
			if err != nil {
				s.logger.Error("msg", "Failed to format log entry for console",
					"component", "console_sink",
					"error", err)
				continue
			}

			// Convert to string to prevent hex encoding of []byte by log package
			message := string(formatted)
			switch strings.ToUpper(entry.Level) {
			case "DEBUG":
				s.writer.Debug(message)
			case "INFO":
				s.writer.Info(message)
			case "WARN", "WARNING":
				s.writer.Warn(message)
			case "ERROR", "FATAL":
				s.writer.Error(message)
			default:
				s.writer.Message(message)
			}

		case <-ctx.Done():
			return
@@ -7,83 +7,55 @@
	"sync/atomic"
	"time"

	"logwisp/src/internal/config"
	"logwisp/src/internal/core"
	"logwisp/src/internal/format"

	"github.com/lixenwraith/log"
)

// FileSink writes log entries to files with rotation.
type FileSink struct {
	// Configuration
	config *config.FileSinkOptions

	// Application
	input     chan core.LogEntry
	writer    *log.Logger // internal logger for file writing
	formatter format.Formatter
	logger    *log.Logger // application logger

	// Runtime
	done      chan struct{}
	startTime time.Time

	// Statistics
	totalProcessed atomic.Uint64
	lastProcessed  atomic.Value // time.Time
}

// NewFileSink creates a new file sink.
func NewFileSink(opts *config.FileSinkOptions, logger *log.Logger, formatter format.Formatter) (*FileSink, error) {
	if opts == nil {
		return nil, fmt.Errorf("file sink options cannot be nil")
	}

	// Create configuration for the internal log writer
	writerConfig := log.DefaultConfig()
	writerConfig.Directory = opts.Directory
	writerConfig.Name = opts.Name
	writerConfig.EnableConsole = false // File only
	writerConfig.ShowTimestamp = false // We already have timestamps in entries
	writerConfig.ShowLevel = false     // We already have levels in entries

	// Create internal logger for file writing
	writer := log.NewLogger()
	if err := writer.ApplyConfig(writerConfig); err != nil {
		return nil, fmt.Errorf("failed to initialize file writer: %w", err)
	}

	fs := &FileSink{
		input:     make(chan core.LogEntry, opts.BufferSize),
		writer:    writer,
		done:      make(chan struct{}),
		startTime: time.Now(),
@@ -95,16 +67,24 @@
	return fs, nil
}

// Input returns the channel for sending log entries.
func (fs *FileSink) Input() chan<- core.LogEntry {
	return fs.input
}

// Start begins the processing loop for the sink.
func (fs *FileSink) Start(ctx context.Context) error {
	// Start the internal file writer
	if err := fs.writer.Start(); err != nil {
		return fmt.Errorf("failed to start sink file writer: %w", err)
	}

	go fs.processLoop(ctx)
	fs.logger.Info("msg", "File sink started", "component", "file_sink")
	return nil
}

// Stop gracefully shuts down the sink.
func (fs *FileSink) Stop() {
	fs.logger.Info("msg", "Stopping file sink")
	close(fs.done)
@@ -119,6 +99,7 @@
	fs.logger.Info("msg", "File sink stopped")
}

// GetStats returns the sink's statistics.
func (fs *FileSink) GetStats() SinkStats {
	lastProc, _ := fs.lastProcessed.Load().(time.Time)
@@ -131,6 +112,7 @@
	}
}

// processLoop reads entries, formats them, and writes to a file.
func (fs *FileSink) processLoop(ctx context.Context) {
	for {
		select {
@@ -151,11 +133,8 @@
				continue
			}

			// Convert to string to prevent hex encoding of []byte by log package
			message := string(formatted)
			fs.writer.Message(message)

		case <-ctx.Done():

File diff suppressed because it is too large

@@ -5,34 +5,50 @@
	"bytes"
	"context"
	"crypto/tls"
	"fmt"
	"strings"
	"sync"
	"sync/atomic"
	"time"

	"logwisp/src/internal/config"
	"logwisp/src/internal/core"
	"logwisp/src/internal/format"
	"logwisp/src/internal/session"
	ltls "logwisp/src/internal/tls"
	"logwisp/src/internal/version"

	"github.com/lixenwraith/log"
	"github.com/valyala/fasthttp"
)

// TODO: add heartbeat
// HTTPClientSink forwards log entries to a remote HTTP endpoint.
type HTTPClientSink struct {
	// Configuration
	config *config.HTTPClientSinkOptions

	// Network
	client     *fasthttp.Client
	tlsManager *ltls.ClientManager

	// Application
	input     chan core.LogEntry
	formatter format.Formatter
	logger    *log.Logger

	// Runtime
	done      chan struct{}
	wg        sync.WaitGroup
	startTime time.Time

	// Batching
	batch   []core.LogEntry
	batchMu sync.Mutex

	// Security & Session
	sessionID      string
	sessionManager *session.Manager

	// Statistics
	totalProcessed atomic.Uint64
@@ -43,107 +59,21 @@
	activeConnections atomic.Int64
}

// NewHTTPClientSink creates a new HTTP client sink.
func NewHTTPClientSink(opts *config.HTTPClientSinkOptions, logger *log.Logger, formatter format.Formatter) (*HTTPClientSink, error) {
	if opts == nil {
		return nil, fmt.Errorf("HTTP client sink options cannot be nil")
	}

	h := &HTTPClientSink{
		config:         opts,
		input:          make(chan core.LogEntry, opts.BufferSize),
		batch:          make([]core.LogEntry, 0, opts.BatchSize),
		done:           make(chan struct{}),
		startTime:      time.Now(),
		logger:         logger,
		formatter:      formatter,
		sessionManager: session.NewManager(30 * time.Minute),
	}
	h.lastProcessed.Store(time.Time{})
	h.lastBatchSent.Store(time.Time{})
@@ -152,42 +82,53 @@
	h.client = &fasthttp.Client{
		MaxConnsPerHost:               10,
		MaxIdleConnDuration:           10 * time.Second,
		ReadTimeout:                   time.Duration(opts.Timeout) * time.Second,
		WriteTimeout:                  time.Duration(opts.Timeout) * time.Second,
		DisableHeaderNamesNormalizing: true,
	}

	// Configure TLS for HTTPS
	if strings.HasPrefix(opts.URL, "https://") {
		if opts.TLS != nil && opts.TLS.Enabled {
			// Use the new ClientManager with the clear client-specific config
			tlsManager, err := ltls.NewClientManager(opts.TLS, logger)
			if err != nil {
				return nil, fmt.Errorf("failed to create TLS client manager: %w", err)
			}
			h.tlsManager = tlsManager

			// Get the generated config
			h.client.TLSConfig = tlsManager.GetConfig()

			logger.Info("msg", "Client TLS configured",
				"component", "http_client_sink",
				"has_client_cert", opts.TLS.ClientCertFile != "", // Clearer check
				"has_server_ca", opts.TLS.ServerCAFile != "", // Clearer check
				"min_version", opts.TLS.MinVersion)
		} else if opts.InsecureSkipVerify { // Use the new clear field
			// TODO: document this behavior
			h.client.TLSConfig = &tls.Config{
				InsecureSkipVerify: true,
			}
		}
	}

	return h, nil
}

// Input returns the channel for sending log entries.
func (h *HTTPClientSink) Input() chan<- core.LogEntry {
	return h.input
}

// Start begins the processing and batching loops.
func (h *HTTPClientSink) Start(ctx context.Context) error {
	// Create session for HTTP client sink lifetime
	sess := h.sessionManager.CreateSession(h.config.URL, "http_client_sink", map[string]any{
		"batch_size": h.config.BatchSize,
		"timeout":    h.config.Timeout,
	})
	h.sessionID = sess.ID

	h.wg.Add(2)
	go h.processLoop(ctx)
	go h.batchTimer(ctx)
@@ -196,10 +137,12 @@
		"component", "http_client_sink",
		"url", h.config.URL,
		"batch_size", h.config.BatchSize,
		"batch_delay_ms", h.config.BatchDelayMS,
		"session_id", h.sessionID)
	return nil
}

// Stop gracefully shuts down the sink, sending any remaining batched entries.
func (h *HTTPClientSink) Stop() {
	h.logger.Info("msg", "Stopping HTTP client sink")
	close(h.done)
@@ -216,12 +159,21 @@
		h.batchMu.Unlock()
	}

	// Remove session and stop manager
	if h.sessionID != "" {
		h.sessionManager.RemoveSession(h.sessionID)
	}
	if h.sessionManager != nil {
		h.sessionManager.Stop()
	}

	h.logger.Info("msg", "HTTP client sink stopped",
		"total_processed", h.totalProcessed.Load(),
		"total_batches", h.totalBatches.Load(),
		"failed_batches", h.failedBatches.Load())
}

// GetStats returns the sink's statistics.
func (h *HTTPClientSink) GetStats() SinkStats {
	lastProc, _ := h.lastProcessed.Load().(time.Time)
	lastBatch, _ := h.lastBatchSent.Load().(time.Time)
@@ -230,6 +182,23 @@
	pendingEntries := len(h.batch)
	h.batchMu.Unlock()

	// Get session information
	var sessionInfo map[string]any
	if h.sessionID != "" {
		if sess, exists := h.sessionManager.GetSession(h.sessionID); exists {
			sessionInfo = map[string]any{
				"session_id":    sess.ID,
				"created_at":    sess.CreatedAt,
				"last_activity": sess.LastActivity,
			}
		}
	}

	var tlsStats map[string]any
	if h.tlsManager != nil {
		tlsStats = h.tlsManager.GetStats()
	}

	return SinkStats{
		Type:           "http_client",
		TotalProcessed: h.totalProcessed.Load(),
@@ -243,10 +212,13 @@
			"total_batches":   h.totalBatches.Load(),
			"failed_batches":  h.failedBatches.Load(),
			"last_batch_sent": lastBatch,
			"session":         sessionInfo,
			"tls":             tlsStats,
		},
	}
}

// processLoop collects incoming log entries into a batch.
func (h *HTTPClientSink) processLoop(ctx context.Context) {
	defer h.wg.Done()
@@ -284,10 +256,11 @@
	}
}

// batchTimer periodically triggers sending of the current batch.
func (h *HTTPClientSink) batchTimer(ctx context.Context) {
	defer h.wg.Done()

	ticker := time.NewTicker(time.Duration(h.config.BatchDelayMS) * time.Millisecond)
	defer ticker.Stop()

	for {
@@ -313,6 +286,7 @@
	}
}

// sendBatch sends a batch of log entries to the remote endpoint with retry logic.
func (h *HTTPClientSink) sendBatch(batch []core.LogEntry) {
	h.activeConnections.Add(1)
	defer h.activeConnections.Add(-1)
@@ -356,7 +330,7 @@
	// Retry logic
	var lastErr error
	retryDelay := time.Duration(h.config.RetryDelayMS) * time.Millisecond

	for attempt := int64(0); attempt <= h.config.MaxRetries; attempt++ {
		if attempt > 0 {
@@ -367,9 +341,10 @@
			newDelay := time.Duration(float64(retryDelay) * h.config.RetryBackoff)
			// Cap at maximum to prevent integer overflow
			timeout := time.Duration(h.config.Timeout) * time.Second
			if newDelay > timeout || newDelay < retryDelay {
				// Either exceeded max or overflowed (negative/wrapped)
				retryDelay = timeout
			} else {
				retryDelay = newDelay
			}
@@ -381,15 +356,13 @@
		req.SetRequestURI(h.config.URL)
		req.Header.SetMethod("POST")
		req.Header.SetContentType("application/json")
		req.SetBody(body)

		req.Header.Set("User-Agent", fmt.Sprintf("LogWisp/%s", version.Short()))

		// Send request
		err := h.client.DoTimeout(req, resp, time.Duration(h.config.Timeout)*time.Second)

		// Capture response before releasing
		statusCode := resp.StatusCode()
@@ -417,6 +390,12 @@
		// Check response status
		if statusCode >= 200 && statusCode < 300 {
			// Success
			// Update session activity on successful batch send
			if h.sessionID != "" {
				h.sessionManager.UpdateActivity(h.sessionID)
			}

			h.logger.Debug("msg", "Batch sent successfully",
				"component", "http_client_sink",
				"batch_size", len(batch),

View File

@ -5,26 +5,25 @@
import (
"context"
"time"
-"logwisp/src/internal/config"
"logwisp/src/internal/core"
)
-// Sink represents an output destination for log entries
+// Sink represents an output data stream.
type Sink interface {
-// Input returns the channel for sending log entries to this sink
+// Input returns the channel for sending log entries to this sink.
Input() chan<- core.LogEntry
-// Start begins processing log entries
+// Start begins processing log entries.
Start(ctx context.Context) error
-// Stop gracefully shuts down the sink
+// Stop gracefully shuts down the sink.
Stop()
-// GetStats returns sink statistics
+// GetStats returns sink statistics.
GetStats() SinkStats
}
-// SinkStats contains statistics about a sink
+// SinkStats contains statistics about a sink.
type SinkStats struct {
Type string
TotalProcessed uint64
@ -33,8 +32,3 @@ type SinkStats struct {
LastProcessed time.Time
Details map[string]any
}
-// AuthSetter is an interface for sinks that can accept an AuthConfig.
-type AuthSetter interface {
-SetAuthConfig(auth *config.AuthConfig)
-}

View File

@ -7,190 +7,110 @@
import (
"encoding/json"
"fmt"
"net"
-"strings"
"sync"
"sync/atomic"
"time"
-"logwisp/src/internal/auth"
"logwisp/src/internal/config"
"logwisp/src/internal/core"
"logwisp/src/internal/format"
-"logwisp/src/internal/limit"
-"logwisp/src/internal/tls"
+"logwisp/src/internal/network"
+"logwisp/src/internal/session"
"github.com/lixenwraith/log"
"github.com/lixenwraith/log/compat"
"github.com/panjf2000/gnet/v2"
)
-// TCPSink streams log entries via TCP
+// TCPSink streams log entries to connected TCP clients.
type TCPSink struct {
-input chan core.LogEntry
-config TCPConfig
-server *tcpServer
-done chan struct{}
-activeConns atomic.Int64
-startTime time.Time
-engine *gnet.Engine
-engineMu sync.Mutex
-wg sync.WaitGroup
-netLimiter *limit.NetLimiter
-logger *log.Logger
-formatter format.Formatter
-// Security components
-authenticator *auth.Authenticator
-tlsManager *tls.Manager
-authConfig *config.AuthConfig
+// Configuration
+config *config.TCPSinkOptions
+// Network
+server *tcpServer
+engine *gnet.Engine
+engineMu sync.Mutex
+netLimiter *network.NetLimiter
+// Application
+input chan core.LogEntry
+formatter format.Formatter
+logger *log.Logger
+// Runtime
+done chan struct{}
+wg sync.WaitGroup
+startTime time.Time
+// Security & Session
+sessionManager *session.Manager
// Statistics
+activeConns atomic.Int64
totalProcessed atomic.Uint64
lastProcessed atomic.Value // time.Time
-authFailures atomic.Uint64
-authSuccesses atomic.Uint64
-// Write error tracking
+// Error tracking
writeErrors atomic.Uint64
consecutiveWriteErrors map[gnet.Conn]int
errorMu sync.Mutex
}
-// TCPConfig holds TCP sink configuration
+// TCPConfig holds configuration for the TCPSink.
type TCPConfig struct {
+Host string
Port int64
BufferSize int64
Heartbeat *config.HeartbeatConfig
-SSL *config.SSLConfig
-NetLimit *config.NetLimitConfig
+ACL *config.ACLConfig
}
-// NewTCPSink creates a new TCP streaming sink
-func NewTCPSink(options map[string]any, logger *log.Logger, formatter format.Formatter) (*TCPSink, error) {
-cfg := TCPConfig{
-Port: int64(9090),
+// NewTCPSink creates a new TCP streaming sink.
+func NewTCPSink(opts *config.TCPSinkOptions, logger *log.Logger, formatter format.Formatter) (*TCPSink, error) {
+if opts == nil {
+return nil, fmt.Errorf("TCP sink options cannot be nil")
BufferSize: int64(1000),
}
// Extract configuration from options
if port, ok := options["port"].(int64); ok {
cfg.Port = port
}
if bufSize, ok := options["buffer_size"].(int64); ok {
cfg.BufferSize = bufSize
}
// Extract heartbeat config
if hb, ok := options["heartbeat"].(map[string]any); ok {
cfg.Heartbeat = &config.HeartbeatConfig{}
cfg.Heartbeat.Enabled, _ = hb["enabled"].(bool)
if interval, ok := hb["interval_seconds"].(int64); ok {
cfg.Heartbeat.IntervalSeconds = interval
}
cfg.Heartbeat.IncludeTimestamp, _ = hb["include_timestamp"].(bool)
cfg.Heartbeat.IncludeStats, _ = hb["include_stats"].(bool)
if hbFormat, ok := hb["format"].(string); ok {
cfg.Heartbeat.Format = hbFormat
}
}
// Extract SSL config
if ssl, ok := options["ssl"].(map[string]any); ok {
cfg.SSL = &config.SSLConfig{}
cfg.SSL.Enabled, _ = ssl["enabled"].(bool)
if certFile, ok := ssl["cert_file"].(string); ok {
cfg.SSL.CertFile = certFile
}
if keyFile, ok := ssl["key_file"].(string); ok {
cfg.SSL.KeyFile = keyFile
}
cfg.SSL.ClientAuth, _ = ssl["client_auth"].(bool)
if caFile, ok := ssl["client_ca_file"].(string); ok {
cfg.SSL.ClientCAFile = caFile
}
cfg.SSL.VerifyClientCert, _ = ssl["verify_client_cert"].(bool)
if minVer, ok := ssl["min_version"].(string); ok {
cfg.SSL.MinVersion = minVer
}
if maxVer, ok := ssl["max_version"].(string); ok {
cfg.SSL.MaxVersion = maxVer
}
if ciphers, ok := ssl["cipher_suites"].(string); ok {
cfg.SSL.CipherSuites = ciphers
}
}
// Extract net limit config
if rl, ok := options["net_limit"].(map[string]any); ok {
cfg.NetLimit = &config.NetLimitConfig{}
cfg.NetLimit.Enabled, _ = rl["enabled"].(bool)
if rps, ok := rl["requests_per_second"].(float64); ok {
cfg.NetLimit.RequestsPerSecond = rps
}
if burst, ok := rl["burst_size"].(int64); ok {
cfg.NetLimit.BurstSize = burst
}
if limitBy, ok := rl["limit_by"].(string); ok {
cfg.NetLimit.LimitBy = limitBy
}
if respCode, ok := rl["response_code"].(int64); ok {
cfg.NetLimit.ResponseCode = respCode
}
if msg, ok := rl["response_message"].(string); ok {
cfg.NetLimit.ResponseMessage = msg
}
if maxPerIP, ok := rl["max_connections_per_ip"].(int64); ok {
cfg.NetLimit.MaxConnectionsPerIP = maxPerIP
}
if maxTotal, ok := rl["max_total_connections"].(int64); ok {
cfg.NetLimit.MaxTotalConnections = maxTotal
}
if ipWhitelist, ok := rl["ip_whitelist"].([]any); ok {
cfg.NetLimit.IPWhitelist = make([]string, 0, len(ipWhitelist))
for _, entry := range ipWhitelist {
if str, ok := entry.(string); ok {
cfg.NetLimit.IPWhitelist = append(cfg.NetLimit.IPWhitelist, str)
}
}
}
if ipBlacklist, ok := rl["ip_blacklist"].([]any); ok {
cfg.NetLimit.IPBlacklist = make([]string, 0, len(ipBlacklist))
for _, entry := range ipBlacklist {
if str, ok := entry.(string); ok {
cfg.NetLimit.IPBlacklist = append(cfg.NetLimit.IPBlacklist, str)
}
}
}
}
t := &TCPSink{
-input: make(chan core.LogEntry, cfg.BufferSize),
-config: cfg,
+config: opts,
+input: make(chan core.LogEntry, opts.BufferSize),
done: make(chan struct{}),
startTime: time.Now(),
logger: logger,
formatter: formatter,
consecutiveWriteErrors: make(map[gnet.Conn]int),
+sessionManager: session.NewManager(30 * time.Minute),
}
t.lastProcessed.Store(time.Time{})
-// Initialize net limiter
-if cfg.NetLimit != nil && cfg.NetLimit.Enabled {
-t.netLimiter = limit.NewNetLimiter(*cfg.NetLimit, logger)
+// Initialize net limiter with pointer
+if opts.ACL != nil && (opts.ACL.Enabled ||
+len(opts.ACL.IPWhitelist) > 0 ||
+len(opts.ACL.IPBlacklist) > 0) {
+t.netLimiter = network.NewNetLimiter(opts.ACL, logger)
}
return t, nil
}
+// Input returns the channel for sending log entries.
func (t *TCPSink) Input() chan<- core.LogEntry {
return t.input
}
+// Start initializes the TCP server and begins the broadcast loop.
func (t *TCPSink) Start(ctx context.Context) error {
t.server = &tcpServer{
sink: t,
clients: make(map[gnet.Conn]*tcpClient),
}
+// Register expiry callback
+t.sessionManager.RegisterExpiryCallback("tcp_sink", func(sessionID, remoteAddr string) {
+t.handleSessionExpiry(sessionID, remoteAddr)
+})
// Start log broadcast loop
t.wg.Add(1)
go func() {
@ -199,7 +119,7 @@ func (t *TCPSink) Start(ctx context.Context) error {
}()
// Configure gnet options
-addr := fmt.Sprintf("tcp://:%d", t.config.Port)
+addr := fmt.Sprintf("tcp://%s:%d", t.config.Host, t.config.Port)
// Create a gnet adapter using the existing logger instance
gnetLogger := compat.NewGnetAdapter(t.logger)
@ -216,8 +136,7 @@ func (t *TCPSink) Start(ctx context.Context) error {
go func() {
t.logger.Info("msg", "Starting TCP server",
"component", "tcp_sink",
-"port", t.config.Port,
-"auth", t.authenticator != nil)
+"port", t.config.Port)
err := gnet.Run(t.server, addr, opts...)
if err != nil {
@ -255,8 +174,13 @@ func (t *TCPSink) Start(ctx context.Context) error {
}
}
+// Stop gracefully shuts down the TCP server.
func (t *TCPSink) Stop() {
t.logger.Info("msg", "Stopping TCP sink")
+// Unregister callback
+t.sessionManager.UnregisterExpiryCallback("tcp_sink")
// Signal broadcast loop to stop
close(t.done)
@ -274,9 +198,15 @@ func (t *TCPSink) Stop() {
// Wait for broadcast loop to finish
t.wg.Wait()
+// Stop session manager
+if t.sessionManager != nil {
+t.sessionManager.Stop()
+}
t.logger.Info("msg", "TCP sink stopped")
}
+// GetStats returns the sink's statistics.
func (t *TCPSink) GetStats() SinkStats {
lastProc, _ := t.lastProcessed.Load().(time.Time)
@ -285,16 +215,9 @@ func (t *TCPSink) GetStats() SinkStats {
netLimitStats = t.netLimiter.GetStats()
}
-var authStats map[string]any
-if t.authenticator != nil {
-authStats = t.authenticator.GetStats()
-authStats["failures"] = t.authFailures.Load()
-authStats["successes"] = t.authSuccesses.Load()
-}
-var tlsStats map[string]any
-if t.tlsManager != nil {
-tlsStats = t.tlsManager.GetStats()
-}
+var sessionStats map[string]any
+if t.sessionManager != nil {
+sessionStats = t.sessionManager.GetStats()
+}
return SinkStats{
@ -307,18 +230,38 @@ func (t *TCPSink) GetStats() SinkStats {
"port": t.config.Port,
"buffer_size": t.config.BufferSize,
"net_limit": netLimitStats,
-"auth": authStats,
-"tls": tlsStats,
+"sessions": sessionStats,
},
}
}
// GetActiveConnections returns the current number of active connections.
func (t *TCPSink) GetActiveConnections() int64 {
return t.activeConns.Load()
}
// tcpServer implements the gnet.EventHandler interface for the TCP sink.
type tcpServer struct {
gnet.BuiltinEventEngine
sink *TCPSink
clients map[gnet.Conn]*tcpClient
mu sync.RWMutex
}
// tcpClient represents a connected TCP client.
type tcpClient struct {
conn gnet.Conn
buffer bytes.Buffer
sessionID string
}
// broadcastLoop manages the central broadcasting of log entries to all clients.
func (t *TCPSink) broadcastLoop(ctx context.Context) {
var ticker *time.Ticker
var tickerChan <-chan time.Time
-if t.config.Heartbeat.Enabled {
-ticker = time.NewTicker(time.Duration(t.config.Heartbeat.IntervalSeconds) * time.Second)
+if t.config.Heartbeat != nil && t.config.Heartbeat.Enabled {
+ticker = time.NewTicker(time.Duration(t.config.Heartbeat.IntervalMS) * time.Millisecond)
tickerChan = ticker.C
defer ticker.Stop()
}
@ -342,21 +285,190 @@ func (t *TCPSink) broadcastLoop(ctx context.Context) {
"entry_source", entry.Source)
continue
}
-// Broadcast only to authenticated clients
-t.server.mu.RLock()
-for conn, client := range t.server.clients {
-if client.authenticated {
-// Send through TLS bridge if present
-if client.tlsBridge != nil {
-if _, err := client.tlsBridge.Write(data); err != nil {
-// TLS write failed, connection likely dead
-t.logger.Debug("msg", "TLS write failed",
-"component", "tcp_sink",
-"error", err)
-conn.Close()
-}
-} else {
+t.broadcastData(data)
+case <-tickerChan:
+heartbeatEntry := t.createHeartbeatEntry()
+data, err := t.formatter.Format(heartbeatEntry)
+if err != nil {
+t.logger.Error("msg", "Failed to format heartbeat",
+"component", "tcp_sink",
+"error", err)
+continue
+}
+t.broadcastData(data)
+case <-t.done:
+return
+}
+}
+}
// OnBoot is called when the server starts.
func (s *tcpServer) OnBoot(eng gnet.Engine) gnet.Action {
// Store engine reference for shutdown
s.sink.engineMu.Lock()
s.sink.engine = &eng
s.sink.engineMu.Unlock()
s.sink.logger.Debug("msg", "TCP server booted",
"component", "tcp_sink",
"port", s.sink.config.Port)
return gnet.None
}
// OnOpen is called when a new connection is established.
func (s *tcpServer) OnOpen(c gnet.Conn) (out []byte, action gnet.Action) {
remoteAddr := c.RemoteAddr()
remoteAddrStr := remoteAddr.String()
s.sink.logger.Debug("msg", "TCP connection attempt", "remote_addr", remoteAddrStr)
// Reject IPv6 connections
if tcpAddr, ok := remoteAddr.(*net.TCPAddr); ok {
if tcpAddr.IP.To4() == nil {
return []byte("IPv4-only (IPv6 not supported)\n"), gnet.Close
}
}
// Check net limit
if s.sink.netLimiter != nil {
tcpAddr, err := net.ResolveTCPAddr("tcp", remoteAddrStr)
if err != nil {
s.sink.logger.Warn("msg", "Failed to parse TCP address",
"remote_addr", remoteAddrStr,
"error", err)
return nil, gnet.Close
}
if !s.sink.netLimiter.CheckTCP(tcpAddr) {
s.sink.logger.Warn("msg", "TCP connection net limited",
"remote_addr", remoteAddrStr)
return nil, gnet.Close
}
// Register connection post-establishment
s.sink.netLimiter.RegisterConnection(remoteAddrStr)
}
// Create session for tracking
sess := s.sink.sessionManager.CreateSession(remoteAddrStr, "tcp_sink", nil)
// TCP Sink accepts all connections without authentication
client := &tcpClient{
conn: c,
buffer: bytes.Buffer{},
sessionID: sess.ID,
}
s.mu.Lock()
s.clients[c] = client
s.mu.Unlock()
newCount := s.sink.activeConns.Add(1)
s.sink.logger.Debug("msg", "TCP connection opened",
"remote_addr", remoteAddr,
"session_id", sess.ID,
"active_connections", newCount)
return nil, gnet.None
}
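OnOpen above rejects any peer whose IP has no 4-byte form (`To4() == nil`). That gate can be checked in isolation; a small sketch with an illustrative helper name:

```go
package main

import (
	"fmt"
	"net"
)

// isIPv4 mirrors the To4() gate in OnOpen: addresses whose IP cannot
// be represented in 4 bytes (i.e. real IPv6 peers) are rejected.
func isIPv4(addr string) bool {
	tcpAddr, err := net.ResolveTCPAddr("tcp", addr)
	if err != nil {
		return false
	}
	return tcpAddr.IP.To4() != nil
}

func main() {
	fmt.Println(isIPv4("192.168.1.10:9090"))   // IPv4 peer
	fmt.Println(isIPv4("[2001:db8::1]:9090"))  // IPv6 peer
}
```

Note that `To4()` also returns non-nil for IPv4-mapped IPv6 addresses (`::ffff:a.b.c.d`), so dual-stack listeners still admit IPv4 clients arriving over a mapped address.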
// OnClose is called when a connection is closed.
func (s *tcpServer) OnClose(c gnet.Conn, err error) gnet.Action {
remoteAddrStr := c.RemoteAddr().String()
// Get client to retrieve session ID
s.mu.RLock()
client, exists := s.clients[c]
s.mu.RUnlock()
if exists && client.sessionID != "" {
// Remove session
s.sink.sessionManager.RemoveSession(client.sessionID)
s.sink.logger.Debug("msg", "Session removed",
"component", "tcp_sink",
"session_id", client.sessionID,
"remote_addr", remoteAddrStr)
}
// Remove client state
s.mu.Lock()
delete(s.clients, c)
s.mu.Unlock()
// Clean up write error tracking
s.sink.errorMu.Lock()
delete(s.sink.consecutiveWriteErrors, c)
s.sink.errorMu.Unlock()
// Release connection
if s.sink.netLimiter != nil {
s.sink.netLimiter.ReleaseConnection(remoteAddrStr)
}
newCount := s.sink.activeConns.Add(-1)
s.sink.logger.Debug("msg", "TCP connection closed",
"remote_addr", remoteAddrStr,
"active_connections", newCount,
"error", err)
return gnet.None
}
// OnTraffic is called when data is received from a connection.
func (s *tcpServer) OnTraffic(c gnet.Conn) gnet.Action {
s.mu.RLock()
client, exists := s.clients[c]
s.mu.RUnlock()
// Update session activity when client sends data
if exists && client.sessionID != "" {
s.sink.sessionManager.UpdateActivity(client.sessionID)
}
// TCP Sink doesn't expect any data from clients, discard all
c.Discard(-1)
return gnet.None
}
// handleSessionExpiry is the callback for cleaning up expired sessions.
func (t *TCPSink) handleSessionExpiry(sessionID, remoteAddr string) {
t.server.mu.RLock()
defer t.server.mu.RUnlock()
// Find connection by session ID
for conn, client := range t.server.clients {
if client.sessionID == sessionID {
t.logger.Info("msg", "Closing expired session connection",
"component", "tcp_sink",
"session_id", sessionID,
"remote_addr", remoteAddr)
// Close connection
conn.Close()
return
}
}
}
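The expiry flow above is callback-based: the session manager invokes a named callback, and the sink closes whichever connection owns the expired session. A toy stand-in manager; only `RegisterExpiryCallback` and the `(sessionID, remoteAddr)` callback signature mirror the code, everything else is invented for illustration:

```go
package main

import (
	"fmt"
	"sync"
)

// expiryManager is a toy stand-in for the session manager: it keeps
// callbacks by name and invokes every callback when a session expires.
type expiryManager struct {
	mu        sync.Mutex
	callbacks map[string]func(sessionID, remoteAddr string)
}

func newExpiryManager() *expiryManager {
	return &expiryManager{callbacks: make(map[string]func(string, string))}
}

func (m *expiryManager) RegisterExpiryCallback(name string, cb func(sessionID, remoteAddr string)) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.callbacks[name] = cb
}

func (m *expiryManager) UnregisterExpiryCallback(name string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	delete(m.callbacks, name)
}

// expire simulates the manager's reaper firing for one session.
func (m *expiryManager) expire(sessionID, remoteAddr string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	for _, cb := range m.callbacks {
		cb(sessionID, remoteAddr)
	}
}

func main() {
	m := newExpiryManager()
	var closed []string
	// The sink side: close the connection that owns the expired session.
	m.RegisterExpiryCallback("tcp_sink", func(id, addr string) {
		closed = append(closed, id+"@"+addr)
	})
	m.expire("sess-1", "10.0.0.1:4321")
	fmt.Println(closed)
}
```

Registering under a stable name ("tcp_sink") is what lets `Stop()` unregister exactly its own callback without disturbing other components sharing the manager.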
// broadcastData sends a formatted byte slice to all connected clients.
func (t *TCPSink) broadcastData(data []byte) {
t.server.mu.RLock()
defer t.server.mu.RUnlock()
// Track clients to remove after iteration
var staleClients []gnet.Conn
for conn, client := range t.server.clients {
// Update session activity before sending data
if client.sessionID != "" {
if !t.sessionManager.IsSessionActive(client.sessionID) {
// Session expired, mark for cleanup
staleClients = append(staleClients, conn)
continue
}
t.sessionManager.UpdateActivity(client.sessionID)
}
conn.AsyncWrite(data, func(c gnet.Conn, err error) error {
if err != nil {
t.writeErrors.Add(1)
@ -370,59 +482,17 @@ func (t *TCPSink) broadcastLoop(ctx context.Context) {
return nil
})
}
-}
-}
-t.server.mu.RUnlock()
-case <-tickerChan:
-heartbeatEntry := t.createHeartbeatEntry()
-data, err := t.formatter.Format(heartbeatEntry)
-if err != nil {
-t.logger.Error("msg", "Failed to format heartbeat",
-"component", "tcp_sink",
-"error", err)
-continue
-}
-t.server.mu.RLock()
-for conn, client := range t.server.clients {
-if client.authenticated {
-// Validate session is still active
-if t.authenticator != nil && client.session != nil {
-if !t.authenticator.ValidateSession(client.session.ID) {
-// Session expired, close connection
-conn.Close()
-continue
-}
-}
-if client.tlsBridge != nil {
-if _, err := client.tlsBridge.Write(data); err != nil {
-t.logger.Debug("msg", "TLS heartbeat write failed",
-"component", "tcp_sink",
-"error", err)
-conn.Close()
-}
-} else {
-conn.AsyncWrite(data, func(c gnet.Conn, err error) error {
-if err != nil {
-t.writeErrors.Add(1)
-t.handleWriteError(c, err)
-}
-return nil
-})
-}
-}
-}
-t.server.mu.RUnlock()
-case <-t.done:
-return
-}
-}
+// Clean up stale connections outside the read lock
+if len(staleClients) > 0 {
+go t.cleanupStaleConnections(staleClients)
+}
}
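The AsyncWrite callbacks above count consecutive failures per connection and drop the connection once a threshold is hit; a success resets the count. A reduced sketch of that bookkeeping, with plain string keys standing in for `gnet.Conn`:

```go
package main

import "fmt"

// errorTracker closes a connection after N consecutive write failures;
// a successful write resets the count, mirroring the threshold logic
// in handleWriteError.
type errorTracker struct {
	threshold int
	counts    map[string]int
}

// record registers one write outcome and reports whether the
// connection should now be closed.
func (t *errorTracker) record(conn string, failed bool) (closeConn bool) {
	if !failed {
		delete(t.counts, conn)
		return false
	}
	t.counts[conn]++
	if t.counts[conn] >= t.threshold {
		delete(t.counts, conn)
		return true
	}
	return false
}

func main() {
	tr := &errorTracker{threshold: 3, counts: map[string]int{}}
	fmt.Println(tr.record("c1", true))  // 1st failure
	fmt.Println(tr.record("c1", true))  // 2nd failure
	fmt.Println(tr.record("c1", true))  // 3rd failure, threshold reached
}
```

Deleting the entry both on success and on close keeps the map from growing with dead connections, which is also why OnClose clears `consecutiveWriteErrors` for the departing connection.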
-// Handle write errors with threshold-based connection termination
+// handleWriteError manages errors during async writes, closing faulty connections.
func (t *TCPSink) handleWriteError(c gnet.Conn, err error) {
+remoteAddrStr := c.RemoteAddr().String()
t.errorMu.Lock()
defer t.errorMu.Unlock()
@ -436,7 +506,7 @@ func (t *TCPSink) handleWriteError(c gnet.Conn, err error) {
t.logger.Debug("msg", "AsyncWrite error",
"component", "tcp_sink",
-"remote_addr", c.RemoteAddr(),
+"remote_addr", remoteAddrStr,
"error", err,
"consecutive_errors", errorCount)
@ -444,14 +514,14 @@ func (t *TCPSink) handleWriteError(c gnet.Conn, err error) {
if errorCount >= 3 {
t.logger.Warn("msg", "Closing connection due to repeated write errors",
"component", "tcp_sink",
-"remote_addr", c.RemoteAddr(),
+"remote_addr", remoteAddrStr,
"error_count", errorCount)
delete(t.consecutiveWriteErrors, c)
c.Close()
}
}
-// Create heartbeat as a proper LogEntry
+// createHeartbeatEntry generates a new heartbeat log entry.
func (t *TCPSink) createHeartbeatEntry() core.LogEntry {
message := "heartbeat"
@ -475,348 +545,12 @@ func (t *TCPSink) createHeartbeatEntry() core.LogEntry {
}
}
-// GetActiveConnections returns the current number of connections
-func (t *TCPSink) GetActiveConnections() int64 {
-return t.activeConns.Load()
-}
-// tcpClient represents a connected TCP client with auth state
-type tcpClient struct {
-conn gnet.Conn
+// cleanupStaleConnections closes connections associated with expired sessions.
+func (t *TCPSink) cleanupStaleConnections(staleConns []gnet.Conn) {
+for _, conn := range staleConns {
+t.logger.Info("msg", "Closing stale connection",
+"component", "tcp_sink",
+"remote_addr", conn.RemoteAddr().String())
+conn.Close()
+}
+}
buffer bytes.Buffer
authenticated bool
session *auth.Session
authTimeout time.Time
tlsBridge *tls.GNetTLSConn
authTimeoutSet bool
}
// tcpServer handles gnet events with authentication
type tcpServer struct {
gnet.BuiltinEventEngine
sink *TCPSink
clients map[gnet.Conn]*tcpClient
mu sync.RWMutex
}
func (s *tcpServer) OnBoot(eng gnet.Engine) gnet.Action {
// Store engine reference for shutdown
s.sink.engineMu.Lock()
s.sink.engine = &eng
s.sink.engineMu.Unlock()
s.sink.logger.Debug("msg", "TCP server booted",
"component", "tcp_sink",
"port", s.sink.config.Port)
return gnet.None
}
func (s *tcpServer) OnOpen(c gnet.Conn) (out []byte, action gnet.Action) {
remoteAddr := c.RemoteAddr()
s.sink.logger.Debug("msg", "TCP connection attempt", "remote_addr", remoteAddr)
// Reject IPv6 connections immediately
if tcpAddr, ok := remoteAddr.(*net.TCPAddr); ok {
if tcpAddr.IP.To4() == nil {
return []byte("IPv4-only (IPv6 not supported)\n"), gnet.Close
}
}
// Check net limit
if s.sink.netLimiter != nil {
remoteStr := c.RemoteAddr().String()
tcpAddr, err := net.ResolveTCPAddr("tcp", remoteStr)
if err != nil {
s.sink.logger.Warn("msg", "Failed to parse TCP address",
"remote_addr", remoteAddr,
"error", err)
return nil, gnet.Close
}
if !s.sink.netLimiter.CheckTCP(tcpAddr) {
s.sink.logger.Warn("msg", "TCP connection net limited",
"remote_addr", remoteAddr)
return nil, gnet.Close
}
// Track connection
s.sink.netLimiter.AddConnection(remoteStr)
}
// Create client state without auth timeout initially
client := &tcpClient{
conn: c,
authenticated: s.sink.authenticator == nil, // No auth = auto authenticated
authTimeoutSet: false, // Auth timeout not started yet
}
// Initialize TLS bridge if enabled
if s.sink.tlsManager != nil {
tlsConfig := s.sink.tlsManager.GetTCPConfig()
client.tlsBridge = tls.NewServerConn(c, tlsConfig)
client.tlsBridge.Handshake() // Start async handshake
s.sink.logger.Debug("msg", "TLS handshake initiated",
"component", "tcp_sink",
"remote_addr", remoteAddr)
} else if s.sink.authenticator != nil {
// Only set auth timeout if no TLS (plain connection)
client.authTimeout = time.Now().Add(30 * time.Second) // TODO: configurable or non-hardcoded timer
client.authTimeoutSet = true
}
s.mu.Lock()
s.clients[c] = client
s.mu.Unlock()
newCount := s.sink.activeConns.Add(1)
s.sink.logger.Debug("msg", "TCP connection opened",
"remote_addr", remoteAddr,
"active_connections", newCount,
"requires_auth", s.sink.authenticator != nil)
// Send auth prompt if authentication is required
if s.sink.authenticator != nil && s.sink.tlsManager == nil {
authPrompt := []byte("AUTH REQUIRED\nFormat: AUTH <method> <credentials>\nMethods: basic, token\n")
return authPrompt, gnet.None
}
return nil, gnet.None
}
func (s *tcpServer) OnClose(c gnet.Conn, err error) gnet.Action {
remoteAddr := c.RemoteAddr().String()
// Remove client state
s.mu.Lock()
client := s.clients[c]
delete(s.clients, c)
s.mu.Unlock()
// Clean up TLS bridge if present
if client != nil && client.tlsBridge != nil {
client.tlsBridge.Close()
s.sink.logger.Debug("msg", "TLS connection closed",
"remote_addr", remoteAddr)
}
// Clean up write error tracking
s.sink.errorMu.Lock()
delete(s.sink.consecutiveWriteErrors, c)
s.sink.errorMu.Unlock()
// Remove connection tracking
if s.sink.netLimiter != nil {
s.sink.netLimiter.RemoveConnection(remoteAddr)
}
newCount := s.sink.activeConns.Add(-1)
s.sink.logger.Debug("msg", "TCP connection closed",
"remote_addr", remoteAddr,
"active_connections", newCount,
"error", err)
return gnet.None
}
func (s *tcpServer) OnTraffic(c gnet.Conn) gnet.Action {
s.mu.RLock()
client, exists := s.clients[c]
s.mu.RUnlock()
if !exists {
return gnet.Close
}
// // Check auth timeout
// if !client.authenticated && time.Now().After(client.authTimeout) {
// s.sink.logger.Warn("msg", "Authentication timeout",
// "component", "tcp_sink",
// "remote_addr", c.RemoteAddr().String())
// if client.tlsBridge != nil && client.tlsBridge.IsHandshakeDone() {
// client.tlsBridge.Write([]byte("AUTH TIMEOUT\n"))
// } else if client.tlsBridge == nil {
// c.AsyncWrite([]byte("AUTH TIMEOUT\n"), nil)
// }
// return gnet.Close
// }
// Read all available data
data, err := c.Next(-1)
if err != nil {
s.sink.logger.Error("msg", "Error reading from connection",
"component", "tcp_sink",
"error", err)
return gnet.Close
}
// Process through TLS bridge if present
if client.tlsBridge != nil {
// Feed encrypted data into TLS engine
if err := client.tlsBridge.ProcessIncoming(data); err != nil {
s.sink.logger.Error("msg", "TLS processing error",
"component", "tcp_sink",
"remote_addr", c.RemoteAddr().String(),
"error", err)
return gnet.Close
}
// Check if handshake is complete
if !client.tlsBridge.IsHandshakeDone() {
// Still handshaking, wait for more data
return gnet.None
}
// Check handshake result
_, hsErr := client.tlsBridge.HandshakeComplete()
if hsErr != nil {
s.sink.logger.Error("msg", "TLS handshake failed",
"component", "tcp_sink",
"remote_addr", c.RemoteAddr().String(),
"error", hsErr)
return gnet.Close
}
// Set auth timeout only after TLS handshake completes
if !client.authTimeoutSet && s.sink.authenticator != nil && !client.authenticated {
client.authTimeout = time.Now().Add(30 * time.Second)
client.authTimeoutSet = true
s.sink.logger.Debug("msg", "Auth timeout started after TLS handshake",
"component", "tcp_sink",
"remote_addr", c.RemoteAddr().String())
}
// Read decrypted plaintext
data = client.tlsBridge.Read()
if data == nil || len(data) == 0 {
// No plaintext available yet
return gnet.None
}
// First data after TLS handshake - send auth prompt if needed
if s.sink.authenticator != nil && !client.authenticated &&
len(client.buffer.Bytes()) == 0 {
authPrompt := []byte("AUTH REQUIRED\n")
client.tlsBridge.Write(authPrompt)
}
}
// Only check auth timeout if it has been set
if !client.authenticated && client.authTimeoutSet && time.Now().After(client.authTimeout) {
s.sink.logger.Warn("msg", "Authentication timeout",
"component", "tcp_sink",
"remote_addr", c.RemoteAddr().String())
if client.tlsBridge != nil && client.tlsBridge.IsHandshakeDone() {
client.tlsBridge.Write([]byte("AUTH TIMEOUT\n"))
} else if client.tlsBridge == nil {
c.AsyncWrite([]byte("AUTH TIMEOUT\n"), nil)
}
return gnet.Close
}
// If not authenticated, expect auth command
if !client.authenticated {
client.buffer.Write(data)
// Look for complete auth line
if line, err := client.buffer.ReadBytes('\n'); err == nil {
line = bytes.TrimSpace(line)
// Parse AUTH command: AUTH <method> <credentials>
parts := strings.SplitN(string(line), " ", 3)
if len(parts) != 3 || parts[0] != "AUTH" {
// Send error through TLS if enabled
errMsg := []byte("AUTH FAILED\n")
if client.tlsBridge != nil {
client.tlsBridge.Write(errMsg)
} else {
c.AsyncWrite(errMsg, nil)
}
return gnet.None
}
// Authenticate
session, err := s.sink.authenticator.AuthenticateTCP(parts[1], parts[2], c.RemoteAddr().String())
if err != nil {
s.sink.authFailures.Add(1)
s.sink.logger.Warn("msg", "TCP authentication failed",
"remote_addr", c.RemoteAddr().String(),
"method", parts[1],
"error", err)
// Send error through TLS if enabled
errMsg := []byte("AUTH FAILED\n")
if client.tlsBridge != nil {
client.tlsBridge.Write(errMsg)
} else {
c.AsyncWrite(errMsg, nil)
}
return gnet.Close
}
// Authentication successful
s.sink.authSuccesses.Add(1)
s.mu.Lock()
client.authenticated = true
client.session = session
s.mu.Unlock()
s.sink.logger.Info("msg", "TCP client authenticated",
"component", "tcp_sink",
"remote_addr", c.RemoteAddr().String(),
"username", session.Username,
"method", session.Method,
"tls", client.tlsBridge != nil)
// Send success through TLS if enabled
successMsg := []byte("AUTH OK\n")
if client.tlsBridge != nil {
client.tlsBridge.Write(successMsg)
} else {
c.AsyncWrite(successMsg, nil)
}
// Clear buffer after auth
client.buffer.Reset()
}
return gnet.None
}
// Authenticated clients shouldn't send data, just discard
c.Discard(-1)
return gnet.None
}
// SetAuthConfig configures tcp sink authentication
func (t *TCPSink) SetAuthConfig(authCfg *config.AuthConfig) {
if authCfg == nil || authCfg.Type == "none" {
return
}
t.authConfig = authCfg
authenticator, err := auth.New(authCfg, t.logger)
if err != nil {
t.logger.Error("msg", "Failed to initialize authenticator for TCP sink",
"component", "tcp_sink",
"error", err)
return
}
t.authenticator = authenticator
// Initialize TLS manager if SSL is configured
if t.config.SSL != nil && t.config.SSL.Enabled {
tlsManager, err := tls.NewManager(t.config.SSL, t.logger)
if err != nil {
t.logger.Error("msg", "Failed to create TLS manager",
"component", "tcp_sink",
"error", err)
// Continue without TLS
return
}
t.tlsManager = tlsManager
}
t.logger.Info("msg", "Authentication configured for TCP sink",
"component", "tcp_sink",
"auth_type", authCfg.Type,
"tls_enabled", t.tlsManager != nil,
"tls_bridge", t.tlsManager != nil)
}

View File

@ -3,10 +3,10 @@ package sink
import (
"context"
-"crypto/tls"
"errors"
"fmt"
"net"
-"strconv"
"sync"
"sync/atomic"
"time"
@ -14,32 +14,41 @@
"logwisp/src/internal/config"
"logwisp/src/internal/core"
"logwisp/src/internal/format"
-tlspkg "logwisp/src/internal/tls"
+"logwisp/src/internal/session"
"github.com/lixenwraith/log"
)
-// TCPClientSink forwards log entries to a remote TCP endpoint
+// TODO: add heartbeat
+// TCPClientSink forwards log entries to a remote TCP endpoint.
type TCPClientSink struct {
-input chan core.LogEntry
-config TCPClientConfig
+// Configuration
+config *config.TCPClientSinkOptions
+address string // computed from host:port
+// Network
conn net.Conn
connMu sync.RWMutex
+// Application
+input chan core.LogEntry
+formatter format.Formatter
+logger *log.Logger
+// Runtime
done chan struct{}
wg sync.WaitGroup
startTime time.Time
-logger *log.Logger
-formatter format.Formatter
-// TLS support
-tlsManager *tlspkg.Manager
-tlsConfig *tls.Config
-// Reconnection state
+// Connection state
reconnecting atomic.Bool
lastConnectErr error
connectTime time.Time
+// Security & Session
+sessionID string
+sessionManager *session.Manager
// Statistics
totalProcessed atomic.Uint64
totalFailed atomic.Uint64
@ -48,136 +57,35 @@ type TCPClientSink struct {
connectionUptime atomic.Value // time.Duration
}
// TCPClientConfig holds TCP client sink configuration // NewTCPClientSink creates a new TCP client sink.
type TCPClientConfig struct { func NewTCPClientSink(opts *config.TCPClientSinkOptions, logger *log.Logger, formatter format.Formatter) (*TCPClientSink, error) {
Address string // Validation and defaults are handled in config package
BufferSize int64 if opts == nil {
DialTimeout time.Duration return nil, fmt.Errorf("TCP client sink options cannot be nil")
WriteTimeout time.Duration
KeepAlive time.Duration
// Reconnection settings
ReconnectDelay time.Duration
MaxReconnectDelay time.Duration
ReconnectBackoff float64
// TLS config
SSL *config.SSLConfig
}
// NewTCPClientSink creates a new TCP client sink
func NewTCPClientSink(options map[string]any, logger *log.Logger, formatter format.Formatter) (*TCPClientSink, error) {
cfg := TCPClientConfig{
BufferSize: int64(1000),
DialTimeout: 10 * time.Second,
WriteTimeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
ReconnectDelay: time.Second,
MaxReconnectDelay: 30 * time.Second,
ReconnectBackoff: float64(1.5),
}
// Extract address
address, ok := options["address"].(string)
if !ok || address == "" {
return nil, fmt.Errorf("tcp_client sink requires 'address' option")
}
// Validate address format
_, _, err := net.SplitHostPort(address)
if err != nil {
return nil, fmt.Errorf("invalid address format (expected host:port): %w", err)
}
cfg.Address = address
// Extract other options
if bufSize, ok := options["buffer_size"].(int64); ok && bufSize > 0 {
cfg.BufferSize = bufSize
}
if dialTimeout, ok := options["dial_timeout_seconds"].(int64); ok && dialTimeout > 0 {
cfg.DialTimeout = time.Duration(dialTimeout) * time.Second
}
if writeTimeout, ok := options["write_timeout_seconds"].(int64); ok && writeTimeout > 0 {
cfg.WriteTimeout = time.Duration(writeTimeout) * time.Second
}
if keepAlive, ok := options["keep_alive_seconds"].(int64); ok && keepAlive > 0 {
cfg.KeepAlive = time.Duration(keepAlive) * time.Second
}
if reconnectDelay, ok := options["reconnect_delay_ms"].(int64); ok && reconnectDelay > 0 {
cfg.ReconnectDelay = time.Duration(reconnectDelay) * time.Millisecond
}
if maxReconnectDelay, ok := options["max_reconnect_delay_seconds"].(int64); ok && maxReconnectDelay > 0 {
cfg.MaxReconnectDelay = time.Duration(maxReconnectDelay) * time.Second
}
if backoff, ok := options["reconnect_backoff"].(float64); ok && backoff >= 1.0 {
cfg.ReconnectBackoff = backoff
}
// Extract SSL config
if ssl, ok := options["ssl"].(map[string]any); ok {
cfg.SSL = &config.SSLConfig{}
cfg.SSL.Enabled, _ = ssl["enabled"].(bool)
if certFile, ok := ssl["cert_file"].(string); ok {
cfg.SSL.CertFile = certFile
}
if keyFile, ok := ssl["key_file"].(string); ok {
cfg.SSL.KeyFile = keyFile
}
cfg.SSL.ClientAuth, _ = ssl["client_auth"].(bool)
if caFile, ok := ssl["client_ca_file"].(string); ok {
cfg.SSL.ClientCAFile = caFile
}
if insecure, ok := ssl["insecure_skip_verify"].(bool); ok {
cfg.SSL.InsecureSkipVerify = insecure
}
} }
t := &TCPClientSink{ t := &TCPClientSink{
input: make(chan core.LogEntry, cfg.BufferSize), config: opts,
config: cfg, address: opts.Host + ":" + strconv.Itoa(int(opts.Port)),
input: make(chan core.LogEntry, opts.BufferSize),
done: make(chan struct{}), done: make(chan struct{}),
startTime: time.Now(), startTime: time.Now(),
logger: logger, logger: logger,
formatter: formatter, formatter: formatter,
sessionManager: session.NewManager(30 * time.Minute),
} }
t.lastProcessed.Store(time.Time{}) t.lastProcessed.Store(time.Time{})
t.connectionUptime.Store(time.Duration(0)) t.connectionUptime.Store(time.Duration(0))
// Initialize TLS manager if SSL is configured
if cfg.SSL != nil && cfg.SSL.Enabled {
tlsManager, err := tlspkg.NewManager(cfg.SSL, logger)
if err != nil {
return nil, fmt.Errorf("failed to create TLS manager: %w", err)
}
t.tlsManager = tlsManager
// Get client TLS config
t.tlsConfig = tlsManager.GetTCPConfig()
// ADDED: Client-specific TLS config adjustments
t.tlsConfig.InsecureSkipVerify = cfg.SSL.InsecureSkipVerify
// Extract server name from address for SNI
host, _, err := net.SplitHostPort(cfg.Address)
if err != nil {
return nil, fmt.Errorf("failed to parse address for SNI: %w", err)
}
t.tlsConfig.ServerName = host
logger.Info("msg", "TLS enabled for TCP client",
"component", "tcp_client_sink",
"address", cfg.Address,
"server_name", host,
"insecure", cfg.SSL.InsecureSkipVerify)
}
return t, nil return t, nil
} }
// Input returns the channel for sending log entries.
func (t *TCPClientSink) Input() chan<- core.LogEntry { func (t *TCPClientSink) Input() chan<- core.LogEntry {
return t.input return t.input
} }
// Start begins the connection and processing loops.
func (t *TCPClientSink) Start(ctx context.Context) error { func (t *TCPClientSink) Start(ctx context.Context) error {
// Start connection manager // Start connection manager
t.wg.Add(1) t.wg.Add(1)
@ -189,10 +97,12 @@ func (t *TCPClientSink) Start(ctx context.Context) error {
t.logger.Info("msg", "TCP client sink started",
"component", "tcp_client_sink",
- "address", t.config.Address)
+ "host", t.config.Host,
+ "port", t.config.Port)
return nil return nil
} }
// Stop gracefully shuts down the sink and its connection.
func (t *TCPClientSink) Stop() { func (t *TCPClientSink) Stop() {
t.logger.Info("msg", "Stopping TCP client sink") t.logger.Info("msg", "Stopping TCP client sink")
close(t.done) close(t.done)
@ -205,12 +115,21 @@ func (t *TCPClientSink) Stop() {
} }
t.connMu.Unlock() t.connMu.Unlock()
// Remove session and stop manager
if t.sessionID != "" {
t.sessionManager.RemoveSession(t.sessionID)
}
if t.sessionManager != nil {
t.sessionManager.Stop()
}
t.logger.Info("msg", "TCP client sink stopped", t.logger.Info("msg", "TCP client sink stopped",
"total_processed", t.totalProcessed.Load(), "total_processed", t.totalProcessed.Load(),
"total_failed", t.totalFailed.Load(), "total_failed", t.totalFailed.Load(),
"total_reconnects", t.totalReconnects.Load()) "total_reconnects", t.totalReconnects.Load())
} }
// GetStats returns the sink's statistics.
func (t *TCPClientSink) GetStats() SinkStats { func (t *TCPClientSink) GetStats() SinkStats {
lastProc, _ := t.lastProcessed.Load().(time.Time) lastProc, _ := t.lastProcessed.Load().(time.Time)
uptime, _ := t.connectionUptime.Load().(time.Duration) uptime, _ := t.connectionUptime.Load().(time.Duration)
@ -224,6 +143,19 @@ func (t *TCPClientSink) GetStats() SinkStats {
activeConns = 1 activeConns = 1
} }
// Get session stats
var sessionInfo map[string]any
if t.sessionID != "" {
if sess, exists := t.sessionManager.GetSession(t.sessionID); exists {
sessionInfo = map[string]any{
"session_id": sess.ID,
"created_at": sess.CreatedAt,
"last_activity": sess.LastActivity,
"remote_addr": sess.RemoteAddr,
}
}
}
return SinkStats{ return SinkStats{
Type: "tcp_client", Type: "tcp_client",
TotalProcessed: t.totalProcessed.Load(), TotalProcessed: t.totalProcessed.Load(),
@ -231,21 +163,23 @@ func (t *TCPClientSink) GetStats() SinkStats {
StartTime: t.startTime, StartTime: t.startTime,
LastProcessed: lastProc, LastProcessed: lastProc,
Details: map[string]any{ Details: map[string]any{
"address": t.config.Address, "address": t.address,
"connected": connected, "connected": connected,
"reconnecting": t.reconnecting.Load(), "reconnecting": t.reconnecting.Load(),
"total_failed": t.totalFailed.Load(), "total_failed": t.totalFailed.Load(),
"total_reconnects": t.totalReconnects.Load(), "total_reconnects": t.totalReconnects.Load(),
"connection_uptime": uptime.Seconds(), "connection_uptime": uptime.Seconds(),
"last_error": fmt.Sprintf("%v", t.lastConnectErr), "last_error": fmt.Sprintf("%v", t.lastConnectErr),
"session": sessionInfo,
}, },
} }
} }
// connectionManager handles the lifecycle of the TCP connection, including reconnections.
func (t *TCPClientSink) connectionManager(ctx context.Context) { func (t *TCPClientSink) connectionManager(ctx context.Context) {
defer t.wg.Done()
- reconnectDelay := t.config.ReconnectDelay
+ reconnectDelay := time.Duration(t.config.ReconnectDelayMS) * time.Millisecond
for { for {
select { select {
@ -256,6 +190,11 @@ func (t *TCPClientSink) connectionManager(ctx context.Context) {
default: default:
} }
if t.sessionID != "" {
t.sessionManager.RemoveSession(t.sessionID)
t.sessionID = ""
}
// Attempt to connect // Attempt to connect
t.reconnecting.Store(true) t.reconnecting.Store(true)
conn, err := t.connect() conn, err := t.connect()
@ -265,9 +204,9 @@ func (t *TCPClientSink) connectionManager(ctx context.Context) {
t.lastConnectErr = err
t.logger.Warn("msg", "Failed to connect to TCP server",
"component", "tcp_client_sink",
- "address", t.config.Address,
+ "address", t.address,
"error", err,
- "retry_delay", reconnectDelay)
+ "retry_delay_ms", reconnectDelay.Milliseconds())
// Wait before retry // Wait before retry
select { select {
@ -280,26 +219,34 @@ func (t *TCPClientSink) connectionManager(ctx context.Context) {
// Exponential backoff
reconnectDelay = time.Duration(float64(reconnectDelay) * t.config.ReconnectBackoff)
- if reconnectDelay > t.config.MaxReconnectDelay {
- reconnectDelay = t.config.MaxReconnectDelay
+ if reconnectDelay > time.Duration(t.config.MaxReconnectDelayMS)*time.Millisecond {
+ reconnectDelay = time.Duration(t.config.MaxReconnectDelayMS) * time.Millisecond
}
continue continue
} }
// Connection successful // Connection successful
t.lastConnectErr = nil t.lastConnectErr = nil
reconnectDelay = t.config.ReconnectDelay // Reset backoff reconnectDelay = time.Duration(t.config.ReconnectDelayMS) * time.Millisecond // Reset backoff
t.connectTime = time.Now() t.connectTime = time.Now()
t.totalReconnects.Add(1) t.totalReconnects.Add(1)
// Create session for the connection
sess := t.sessionManager.CreateSession(t.address, "tcp_client_sink", map[string]any{
"local_addr": conn.LocalAddr().String(),
"sink_type": "tcp_client",
})
t.sessionID = sess.ID
t.connMu.Lock() t.connMu.Lock()
t.conn = conn t.conn = conn
t.connMu.Unlock() t.connMu.Unlock()
t.logger.Info("msg", "Connected to TCP server", t.logger.Info("msg", "Connected to TCP server",
"component", "tcp_client_sink", "component", "tcp_client_sink",
"address", t.config.Address, "address", t.address,
"local_addr", conn.LocalAddr()) "local_addr", conn.LocalAddr(),
"session_id", t.sessionID)
// Monitor connection // Monitor connection
t.monitorConnection(conn) t.monitorConnection(conn)
@ -315,93 +262,13 @@ func (t *TCPClientSink) connectionManager(ctx context.Context) {
t.logger.Warn("msg", "Lost connection to TCP server", t.logger.Warn("msg", "Lost connection to TCP server",
"component", "tcp_client_sink", "component", "tcp_client_sink",
"address", t.config.Address, "address", t.address,
"uptime", uptime) "uptime", uptime,
} "session_id", t.sessionID)
}
func (t *TCPClientSink) connect() (net.Conn, error) {
dialer := &net.Dialer{
Timeout: t.config.DialTimeout,
KeepAlive: t.config.KeepAlive,
}
conn, err := dialer.Dial("tcp", t.config.Address)
if err != nil {
return nil, err
}
// Set TCP keep-alive
if tcpConn, ok := conn.(*net.TCPConn); ok {
tcpConn.SetKeepAlive(true)
tcpConn.SetKeepAlivePeriod(t.config.KeepAlive)
}
// Wrap with TLS if configured
if t.tlsConfig != nil {
t.logger.Debug("msg", "Initiating TLS handshake",
"component", "tcp_client_sink",
"address", t.config.Address)
tlsConn := tls.Client(conn, t.tlsConfig)
// Perform handshake with timeout
handshakeCtx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
if err := tlsConn.HandshakeContext(handshakeCtx); err != nil {
conn.Close()
return nil, fmt.Errorf("TLS handshake failed: %w", err)
}
// Log connection details
state := tlsConn.ConnectionState()
t.logger.Info("msg", "TLS connection established",
"component", "tcp_client_sink",
"address", t.config.Address,
"tls_version", tlsVersionString(state.Version),
"cipher_suite", tls.CipherSuiteName(state.CipherSuite),
"server_name", state.ServerName)
return tlsConn, nil
}
return conn, nil
}
func (t *TCPClientSink) monitorConnection(conn net.Conn) {
// Simple connection monitoring by periodic zero-byte reads
ticker := time.NewTicker(5 * time.Second)
defer ticker.Stop()
buf := make([]byte, 1)
for {
select {
case <-t.done:
return
case <-ticker.C:
// Set read deadline
// TODO: Add t.config.ReadTimeout and after addition use it instead of static value
if err := conn.SetReadDeadline(time.Now().Add(100 * time.Millisecond)); err != nil {
t.logger.Debug("msg", "Failed to set read deadline", "error", err)
return
}
// Try to read (we don't expect any data)
_, err := conn.Read(buf)
if err != nil {
var netErr net.Error
if errors.As(err, &netErr) && netErr.Timeout() {
// Timeout is expected, connection is still alive
continue
}
// Real error, connection is dead
return
}
}
} }
} }
// processLoop reads entries from the input channel and sends them.
func (t *TCPClientSink) processLoop(ctx context.Context) { func (t *TCPClientSink) processLoop(ctx context.Context) {
defer t.wg.Done() defer t.wg.Done()
@ -421,6 +288,21 @@ func (t *TCPClientSink) processLoop(ctx context.Context) {
t.logger.Debug("msg", "Failed to send log entry", t.logger.Debug("msg", "Failed to send log entry",
"component", "tcp_client_sink", "component", "tcp_client_sink",
"error", err) "error", err)
} else {
// Update session activity on successful send
if t.sessionID != "" {
t.sessionManager.UpdateActivity(t.sessionID)
} else {
// Close invalid connection without session
t.logger.Warn("msg", "Connection without session detected, forcing reconnection",
"component", "tcp_client_sink")
t.connMu.Lock()
if t.conn != nil {
_ = t.conn.Close()
t.conn = nil
}
t.connMu.Unlock()
}
} }
case <-ctx.Done(): case <-ctx.Done():
@ -431,6 +313,61 @@ func (t *TCPClientSink) processLoop(ctx context.Context) {
} }
} }
// connect attempts to establish a connection to the remote server.
func (t *TCPClientSink) connect() (net.Conn, error) {
dialer := &net.Dialer{
Timeout: time.Duration(t.config.DialTimeout) * time.Second,
KeepAlive: time.Duration(t.config.KeepAlive) * time.Second,
}
conn, err := dialer.Dial("tcp", t.address)
if err != nil {
return nil, err
}
// Set TCP keep-alive
if tcpConn, ok := conn.(*net.TCPConn); ok {
tcpConn.SetKeepAlive(true)
tcpConn.SetKeepAlivePeriod(time.Duration(t.config.KeepAlive) * time.Second)
}
return conn, nil
}
// monitorConnection checks the health of the connection.
func (t *TCPClientSink) monitorConnection(conn net.Conn) {
// Simple connection monitoring via periodic 1-byte reads with a short deadline
ticker := time.NewTicker(5 * time.Second)
defer ticker.Stop()
buf := make([]byte, 1)
for {
select {
case <-t.done:
return
case <-ticker.C:
// Set read deadline
if err := conn.SetReadDeadline(time.Now().Add(time.Duration(t.config.ReadTimeout) * time.Second)); err != nil {
t.logger.Debug("msg", "Failed to set read deadline", "error", err)
return
}
// Try to read (we don't expect any data)
_, err := conn.Read(buf)
if err != nil {
var netErr net.Error
if errors.As(err, &netErr) && netErr.Timeout() {
// Timeout is expected, connection is still alive
continue
}
// Real error, connection is dead
return
}
}
}
}
// sendEntry formats and sends a single log entry over the connection.
func (t *TCPClientSink) sendEntry(entry core.LogEntry) error { func (t *TCPClientSink) sendEntry(entry core.LogEntry) error {
// Get current connection // Get current connection
t.connMu.RLock() t.connMu.RLock()
@ -448,7 +385,7 @@ func (t *TCPClientSink) sendEntry(entry core.LogEntry) error {
} }
// Set write deadline // Set write deadline
if err := conn.SetWriteDeadline(time.Now().Add(t.config.WriteTimeout)); err != nil { if err := conn.SetWriteDeadline(time.Now().Add(time.Duration(t.config.WriteTimeout) * time.Second)); err != nil {
return fmt.Errorf("failed to set write deadline: %w", err) return fmt.Errorf("failed to set write deadline: %w", err)
} }
@ -465,19 +402,3 @@ func (t *TCPClientSink) sendEntry(entry core.LogEntry) error {
return nil return nil
} }
// tlsVersionString returns human-readable TLS version
func tlsVersionString(version uint16) string {
switch version {
case tls.VersionTLS10:
return "TLS1.0"
case tls.VersionTLS11:
return "TLS1.1"
case tls.VersionTLS12:
return "TLS1.2"
case tls.VersionTLS13:
return "TLS1.3"
default:
return fmt.Sprintf("0x%04x", version)
}
}


@ -0,0 +1,141 @@
// FILE: logwisp/src/internal/source/console.go
package source
import (
"bufio"
"os"
"sync/atomic"
"time"
"logwisp/src/internal/config"
"logwisp/src/internal/core"
"github.com/lixenwraith/log"
)
// ConsoleSource reads log entries from the standard input stream.
type ConsoleSource struct {
// Configuration
config *config.ConsoleSourceOptions
// Application
subscribers []chan core.LogEntry
logger *log.Logger
// Runtime
done chan struct{}
// Statistics
totalEntries atomic.Uint64
droppedEntries atomic.Uint64
startTime time.Time
lastEntryTime atomic.Value // time.Time
}
// NewConsoleSource creates a new console (stdin) source.
func NewConsoleSource(opts *config.ConsoleSourceOptions, logger *log.Logger) (*ConsoleSource, error) {
if opts == nil {
opts = &config.ConsoleSourceOptions{
BufferSize: 1000, // Default
}
}
source := &ConsoleSource{
config: opts,
subscribers: make([]chan core.LogEntry, 0),
done: make(chan struct{}),
logger: logger,
startTime: time.Now(),
}
source.lastEntryTime.Store(time.Time{})
return source, nil
}
// Subscribe returns a channel for receiving log entries.
// Call it before Start: the subscriber slice is not mutex-guarded,
// so adding subscribers while the read loop is publishing is a data race.
func (s *ConsoleSource) Subscribe() <-chan core.LogEntry {
ch := make(chan core.LogEntry, s.config.BufferSize)
s.subscribers = append(s.subscribers, ch)
return ch
}
// Start begins reading from the standard input.
func (s *ConsoleSource) Start() error {
go s.readLoop()
s.logger.Info("msg", "Console source started", "component", "console_source")
return nil
}
// Stop signals the source to stop reading.
func (s *ConsoleSource) Stop() {
close(s.done)
for _, ch := range s.subscribers {
close(ch)
}
s.logger.Info("msg", "Console source stopped", "component", "console_source")
}
// GetStats returns the source's statistics.
func (s *ConsoleSource) GetStats() SourceStats {
lastEntry, _ := s.lastEntryTime.Load().(time.Time)
return SourceStats{
Type: "console",
TotalEntries: s.totalEntries.Load(),
DroppedEntries: s.droppedEntries.Load(),
StartTime: s.startTime,
LastEntryTime: lastEntry,
Details: map[string]any{},
}
}
// readLoop continuously reads lines from stdin and publishes them.
func (s *ConsoleSource) readLoop() {
scanner := bufio.NewScanner(os.Stdin)
for scanner.Scan() {
select {
case <-s.done:
return
default:
// Get raw line
lineBytes := scanner.Bytes()
if len(lineBytes) == 0 {
continue
}
// Copy the line before re-adding the newline: appending to
// scanner.Bytes() can write into the Scanner's internal buffer
// and corrupt the next token.
line := string(lineBytes)
entry := core.LogEntry{
Time: time.Now(),
Source: "console",
Message: line + "\n", // Keep newline (scanner strips it)
Level: extractLogLevel(line),
RawSize: int64(len(line) + 1),
}
s.publish(entry)
}
}
if err := scanner.Err(); err != nil {
s.logger.Error("msg", "Scanner error reading stdin",
"component", "console_source",
"error", err)
}
}
// publish sends a log entry to all subscribers.
func (s *ConsoleSource) publish(entry core.LogEntry) {
s.totalEntries.Add(1)
s.lastEntryTime.Store(entry.Time)
for _, ch := range s.subscribers {
select {
case ch <- entry:
default:
s.droppedEntries.Add(1)
s.logger.Debug("msg", "Dropped log entry - subscriber buffer full",
"component", "console_source")
}
}
}


@ -1,4 +1,4 @@
// FILE: logwisp/src/internal/source/directory.go // FILE: logwisp/src/internal/source/file.go
package source package source
import ( import (
@ -13,55 +13,43 @@ import (
"sync/atomic" "sync/atomic"
"time" "time"
"logwisp/src/internal/config"
"logwisp/src/internal/core" "logwisp/src/internal/core"
"github.com/lixenwraith/log" "github.com/lixenwraith/log"
) )
// DirectorySource monitors a directory for log files // FileSource monitors log files and tails them.
type DirectorySource struct { type FileSource struct {
path string // Configuration
pattern string config *config.FileSourceOptions
checkInterval time.Duration
// Application
subscribers []chan core.LogEntry subscribers []chan core.LogEntry
watchers map[string]*fileWatcher watchers map[string]*fileWatcher
logger *log.Logger
// Runtime
mu sync.RWMutex mu sync.RWMutex
ctx context.Context ctx context.Context
cancel context.CancelFunc cancel context.CancelFunc
wg sync.WaitGroup wg sync.WaitGroup
// Statistics
totalEntries atomic.Uint64 totalEntries atomic.Uint64
droppedEntries atomic.Uint64 droppedEntries atomic.Uint64
startTime time.Time startTime time.Time
lastEntryTime atomic.Value // time.Time lastEntryTime atomic.Value // time.Time
logger *log.Logger
} }
// NewDirectorySource creates a new directory monitoring source // NewFileSource creates a new file monitoring source.
func NewDirectorySource(options map[string]any, logger *log.Logger) (*DirectorySource, error) { func NewFileSource(opts *config.FileSourceOptions, logger *log.Logger) (*FileSource, error) {
path, ok := options["path"].(string) if opts == nil {
if !ok { return nil, fmt.Errorf("file source options cannot be nil")
return nil, fmt.Errorf("directory source requires 'path' option")
} }
pattern, _ := options["pattern"].(string) ds := &FileSource{
if pattern == "" { config: opts,
pattern = "*"
}
checkInterval := 100 * time.Millisecond
if ms, ok := options["check_interval_ms"].(int64); ok && ms > 0 {
checkInterval = time.Duration(ms) * time.Millisecond
}
absPath, err := filepath.Abs(path)
if err != nil {
return nil, fmt.Errorf("invalid path %s: %w", path, err)
}
ds := &DirectorySource{
path: absPath,
pattern: pattern,
checkInterval: checkInterval,
watchers: make(map[string]*fileWatcher), watchers: make(map[string]*fileWatcher),
startTime: time.Now(), startTime: time.Now(),
logger: logger, logger: logger,
@ -71,7 +59,8 @@ func NewDirectorySource(options map[string]any, logger *log.Logger) (*DirectoryS
return ds, nil return ds, nil
} }
func (ds *DirectorySource) Subscribe() <-chan core.LogEntry { // Subscribe returns a channel for receiving log entries.
func (ds *FileSource) Subscribe() <-chan core.LogEntry {
ds.mu.Lock() ds.mu.Lock()
defer ds.mu.Unlock() defer ds.mu.Unlock()
@ -80,20 +69,22 @@ func (ds *DirectorySource) Subscribe() <-chan core.LogEntry {
return ch return ch
} }
func (ds *DirectorySource) Start() error { // Start begins the file monitoring loop.
func (ds *FileSource) Start() error {
ds.ctx, ds.cancel = context.WithCancel(context.Background()) ds.ctx, ds.cancel = context.WithCancel(context.Background())
ds.wg.Add(1) ds.wg.Add(1)
go ds.monitorLoop() go ds.monitorLoop()
ds.logger.Info("msg", "Directory source started", ds.logger.Info("msg", "File source started",
"component", "directory_source", "component", "file_source",
"path", ds.path, "path", ds.config.Directory,
"pattern", ds.pattern, "pattern", ds.config.Pattern,
"check_interval_ms", ds.checkInterval.Milliseconds()) "check_interval_ms", ds.config.CheckIntervalMS)
return nil return nil
} }
func (ds *DirectorySource) Stop() { // Stop gracefully shuts down the file source and all file watchers.
func (ds *FileSource) Stop() {
if ds.cancel != nil { if ds.cancel != nil {
ds.cancel() ds.cancel()
} }
@ -101,19 +92,20 @@ func (ds *DirectorySource) Stop() {
ds.mu.Lock() ds.mu.Lock()
for _, w := range ds.watchers { for _, w := range ds.watchers {
w.close() w.stop()
} }
for _, ch := range ds.subscribers { for _, ch := range ds.subscribers {
close(ch) close(ch)
} }
ds.mu.Unlock() ds.mu.Unlock()
ds.logger.Info("msg", "Directory source stopped", ds.logger.Info("msg", "File source stopped",
"component", "directory_source", "component", "file_source",
"path", ds.path) "path", ds.config.Directory)
} }
func (ds *DirectorySource) GetStats() SourceStats { // GetStats returns the source's statistics, including active watchers.
func (ds *FileSource) GetStats() SourceStats {
lastEntry, _ := ds.lastEntryTime.Load().(time.Time) lastEntry, _ := ds.lastEntryTime.Load().(time.Time)
ds.mu.RLock() ds.mu.RLock()
@ -125,7 +117,7 @@ func (ds *DirectorySource) GetStats() SourceStats {
for _, w := range ds.watchers { for _, w := range ds.watchers {
info := w.getInfo() info := w.getInfo()
watchers = append(watchers, map[string]any{ watchers = append(watchers, map[string]any{
"path": info.Path, "directory": info.Directory,
"size": info.Size, "size": info.Size,
"position": info.Position, "position": info.Position,
"entries_read": info.EntriesRead, "entries_read": info.EntriesRead,
@ -138,7 +130,7 @@ func (ds *DirectorySource) GetStats() SourceStats {
ds.mu.RUnlock() ds.mu.RUnlock()
return SourceStats{ return SourceStats{
Type: "directory", Type: "file",
TotalEntries: ds.totalEntries.Load(), TotalEntries: ds.totalEntries.Load(),
DroppedEntries: ds.droppedEntries.Load(), DroppedEntries: ds.droppedEntries.Load(),
StartTime: ds.startTime, StartTime: ds.startTime,
@ -147,30 +139,13 @@ func (ds *DirectorySource) GetStats() SourceStats {
} }
} }
func (ds *DirectorySource) publish(entry core.LogEntry) { // monitorLoop periodically scans path for new or changed files.
ds.mu.RLock() func (ds *FileSource) monitorLoop() {
defer ds.mu.RUnlock()
ds.totalEntries.Add(1)
ds.lastEntryTime.Store(entry.Time)
for _, ch := range ds.subscribers {
select {
case ch <- entry:
default:
ds.droppedEntries.Add(1)
ds.logger.Debug("msg", "Dropped log entry - subscriber buffer full",
"component", "directory_source")
}
}
}
func (ds *DirectorySource) monitorLoop() {
defer ds.wg.Done() defer ds.wg.Done()
ds.checkTargets() ds.checkTargets()
ticker := time.NewTicker(ds.checkInterval) ticker := time.NewTicker(time.Duration(ds.config.CheckIntervalMS) * time.Millisecond)
defer ticker.Stop() defer ticker.Stop()
for { for {
@ -183,13 +158,14 @@ func (ds *DirectorySource) monitorLoop() {
} }
} }
func (ds *DirectorySource) checkTargets() { // checkTargets finds matching files and ensures watchers are running for them.
files, err := ds.scanDirectory() func (ds *FileSource) checkTargets() {
files, err := ds.scanFile()
if err != nil { if err != nil {
ds.logger.Warn("msg", "Failed to scan directory", ds.logger.Warn("msg", "Failed to scan file",
"component", "directory_source", "component", "file_source",
"path", ds.path, "path", ds.config.Directory,
"pattern", ds.pattern, "pattern", ds.config.Pattern,
"error", err) "error", err)
return return
} }
@ -201,14 +177,88 @@ func (ds *DirectorySource) checkTargets() {
ds.cleanupWatchers() ds.cleanupWatchers()
} }
func (ds *DirectorySource) scanDirectory() ([]string, error) { // ensureWatcher creates and starts a new file watcher if one doesn't exist for the given path.
entries, err := os.ReadDir(ds.path) func (ds *FileSource) ensureWatcher(path string) {
ds.mu.Lock()
defer ds.mu.Unlock()
if _, exists := ds.watchers[path]; exists {
return
}
w := newFileWatcher(path, ds.publish, ds.logger)
ds.watchers[path] = w
ds.logger.Debug("msg", "Created file watcher",
"component", "file_source",
"path", path)
ds.wg.Add(1)
go func() {
defer ds.wg.Done()
if err := w.watch(ds.ctx); err != nil {
if errors.Is(err, context.Canceled) {
ds.logger.Debug("msg", "Watcher cancelled",
"component", "file_source",
"path", path)
} else {
ds.logger.Error("msg", "Watcher failed",
"component", "file_source",
"path", path,
"error", err)
}
}
ds.mu.Lock()
delete(ds.watchers, path)
ds.mu.Unlock()
}()
}
// cleanupWatchers stops and removes watchers for files that no longer exist.
func (ds *FileSource) cleanupWatchers() {
ds.mu.Lock()
defer ds.mu.Unlock()
for path, w := range ds.watchers {
if _, err := os.Stat(path); os.IsNotExist(err) {
w.stop()
delete(ds.watchers, path)
ds.logger.Debug("msg", "Cleaned up watcher for non-existent file",
"component", "file_source",
"path", path)
}
}
}
// publish sends a log entry to all subscribers.
func (ds *FileSource) publish(entry core.LogEntry) {
ds.mu.RLock()
defer ds.mu.RUnlock()
ds.totalEntries.Add(1)
ds.lastEntryTime.Store(entry.Time)
for _, ch := range ds.subscribers {
select {
case ch <- entry:
default:
ds.droppedEntries.Add(1)
ds.logger.Debug("msg", "Dropped log entry - subscriber buffer full",
"component", "file_source")
}
}
}
// scanFile finds all files in the configured path that match the pattern.
func (ds *FileSource) scanFile() ([]string, error) {
entries, err := os.ReadDir(ds.config.Directory)
if err != nil { if err != nil {
return nil, err return nil, err
} }
// Convert glob pattern to regex // Convert glob pattern to regex
regexPattern := globToRegex(ds.pattern) regexPattern := globToRegex(ds.config.Pattern)
re, err := regexp.Compile(regexPattern) re, err := regexp.Compile(regexPattern)
if err != nil { if err != nil {
return nil, fmt.Errorf("invalid pattern regex: %w", err) return nil, fmt.Errorf("invalid pattern regex: %w", err)
@ -222,65 +272,14 @@ func (ds *DirectorySource) scanDirectory() ([]string, error) {
name := entry.Name() name := entry.Name()
if re.MatchString(name) { if re.MatchString(name) {
files = append(files, filepath.Join(ds.path, name)) files = append(files, filepath.Join(ds.config.Directory, name))
} }
} }
return files, nil return files, nil
} }
func (ds *DirectorySource) ensureWatcher(path string) { // globToRegex converts a simple glob pattern to a regular expression.
ds.mu.Lock()
defer ds.mu.Unlock()
if _, exists := ds.watchers[path]; exists {
return
}
w := newFileWatcher(path, ds.publish, ds.logger)
ds.watchers[path] = w
ds.logger.Debug("msg", "Created file watcher",
"component", "directory_source",
"path", path)
ds.wg.Add(1)
go func() {
defer ds.wg.Done()
if err := w.watch(ds.ctx); err != nil {
if errors.Is(err, context.Canceled) {
ds.logger.Debug("msg", "Watcher cancelled",
"component", "directory_source",
"path", path)
} else {
ds.logger.Error("msg", "Watcher failed",
"component", "directory_source",
"path", path,
"error", err)
}
}
ds.mu.Lock()
delete(ds.watchers, path)
ds.mu.Unlock()
}()
}
func (ds *DirectorySource) cleanupWatchers() {
ds.mu.Lock()
defer ds.mu.Unlock()
for path, w := range ds.watchers {
if _, err := os.Stat(path); os.IsNotExist(err) {
w.stop()
delete(ds.watchers, path)
ds.logger.Debug("msg", "Cleaned up watcher for non-existent file",
"component", "directory_source",
"path", path)
}
}
}
func globToRegex(glob string) string { func globToRegex(glob string) string {
regex := regexp.QuoteMeta(glob) regex := regexp.QuoteMeta(glob)
regex = strings.ReplaceAll(regex, `\*`, `.*`) regex = strings.ReplaceAll(regex, `\*`, `.*`)


@ -20,9 +20,9 @@ import (
"github.com/lixenwraith/log" "github.com/lixenwraith/log"
) )
// WatcherInfo contains information about a file watcher // WatcherInfo contains snapshot information about a file watcher's state.
type WatcherInfo struct { type WatcherInfo struct {
Path string Directory string
Size int64 Size int64
Position int64 Position int64
ModTime time.Time ModTime time.Time
@ -31,8 +31,9 @@ type WatcherInfo struct {
Rotations int64 Rotations int64
} }
// fileWatcher tails a single file, handles rotations, and sends new lines to a callback.
type fileWatcher struct { type fileWatcher struct {
path string directory string
callback func(core.LogEntry) callback func(core.LogEntry)
position int64 position int64
size int64 size int64
@ -46,9 +47,10 @@ type fileWatcher struct {
logger *log.Logger logger *log.Logger
} }
func newFileWatcher(path string, callback func(core.LogEntry), logger *log.Logger) *fileWatcher { // newFileWatcher creates a new watcher for a specific file path.
func newFileWatcher(directory string, callback func(core.LogEntry), logger *log.Logger) *fileWatcher {
w := &fileWatcher{ w := &fileWatcher{
path: path, directory: directory,
callback: callback, callback: callback,
position: -1, position: -1,
logger: logger, logger: logger,
@ -57,12 +59,13 @@ func newFileWatcher(path string, callback func(core.LogEntry), logger *log.Logge
return w return w
} }
// watch starts the main monitoring loop for the file.
func (w *fileWatcher) watch(ctx context.Context) error { func (w *fileWatcher) watch(ctx context.Context) error {
if err := w.seekToEnd(); err != nil { if err := w.seekToEnd(); err != nil {
return fmt.Errorf("seekToEnd failed: %w", err) return fmt.Errorf("seekToEnd failed: %w", err)
} }
ticker := time.NewTicker(100 * time.Millisecond) ticker := time.NewTicker(core.FileWatcherPollInterval)
defer ticker.Stop() defer ticker.Stop()
for { for {
@ -81,52 +84,36 @@ func (w *fileWatcher) watch(ctx context.Context) error {
} }
} }
// FILE: logwisp/src/internal/source/file_watcher.go // stop signals the watcher to terminate its loop.
func (w *fileWatcher) seekToEnd() error { func (w *fileWatcher) stop() {
file, err := os.Open(w.path)
if err != nil {
if os.IsNotExist(err) {
w.mu.Lock() w.mu.Lock()
w.position = 0 w.stopped = true
w.size = 0
w.modTime = time.Now()
w.inode = 0
w.mu.Unlock() w.mu.Unlock()
return nil
}
return err
}
defer file.Close()
info, err := file.Stat()
if err != nil {
return err
}
w.mu.Lock()
defer w.mu.Unlock()
// Keep existing position (including 0)
// First time initialization seeks to the end of the file
if w.position == -1 {
pos, err := file.Seek(0, io.SeekEnd)
if err != nil {
return err
}
w.position = pos
}
w.size = info.Size()
w.modTime = info.ModTime()
if stat, ok := info.Sys().(*syscall.Stat_t); ok {
w.inode = stat.Ino
}
return nil
} }
// getInfo returns a snapshot of the watcher's current statistics.
func (w *fileWatcher) getInfo() WatcherInfo {
w.mu.Lock()
info := WatcherInfo{
Directory: w.directory,
Size: w.size,
Position: w.position,
ModTime: w.modTime,
EntriesRead: w.entriesRead.Load(),
Rotations: w.rotationSeq,
}
w.mu.Unlock()
if lastRead, ok := w.lastReadTime.Load().(time.Time); ok {
info.LastReadTime = lastRead
}
return info
}
// checkFile examines the file for changes, rotations, or new content.
func (w *fileWatcher) checkFile() error { func (w *fileWatcher) checkFile() error {
file, err := os.Open(w.path) file, err := os.Open(w.directory)
if err != nil { if err != nil {
if os.IsNotExist(err) { if os.IsNotExist(err) {
// File doesn't exist yet, keep watching // File doesn't exist yet, keep watching
@ -134,7 +121,7 @@ func (w *fileWatcher) checkFile() error {
} }
w.logger.Error("msg", "Failed to open file for checking", w.logger.Error("msg", "Failed to open file for checking",
"component", "file_watcher", "component", "file_watcher",
"path", w.path, "directory", w.directory,
"error", err) "error", err)
return err return err
} }
@ -144,7 +131,7 @@ func (w *fileWatcher) checkFile() error {
if err != nil { if err != nil {
w.logger.Error("msg", "Failed to stat file", w.logger.Error("msg", "Failed to stat file",
"component", "file_watcher", "component", "file_watcher",
"path", w.path, "directory", w.directory,
"error", err) "error", err)
return err return err
} }
@ -214,7 +201,7 @@ func (w *fileWatcher) checkFile() error {
w.logger.Debug("msg", "Atomic file update detected", w.logger.Debug("msg", "Atomic file update detected",
"component", "file_watcher", "component", "file_watcher",
"path", w.path, "directory", w.directory,
"old_inode", oldInode, "old_inode", oldInode,
"new_inode", currentInode, "new_inode", currentInode,
"position", oldPos, "position", oldPos,
@ -233,26 +220,26 @@ func (w *fileWatcher) checkFile() error {
w.callback(core.LogEntry{ w.callback(core.LogEntry{
Time: time.Now(), Time: time.Now(),
Source: filepath.Base(w.path), Source: filepath.Base(w.directory),
Level: "INFO", Level: "INFO",
Message: fmt.Sprintf("Log rotation detected (#%d): %s", seq, rotationReason), Message: fmt.Sprintf("Log rotation detected (#%d): %s", seq, rotationReason),
}) })
w.logger.Info("msg", "Log rotation detected", w.logger.Info("msg", "Log rotation detected",
"component", "file_watcher", "component", "file_watcher",
"path", w.path, "directory", w.directory,
"sequence", seq, "sequence", seq,
"reason", rotationReason) "reason", rotationReason)
} }
// Only read if there's new content // Read if there's new content OR if we need to continue from position
if currentSize > startPos { if currentSize > startPos {
if _, err := file.Seek(startPos, io.SeekStart); err != nil { if _, err := file.Seek(startPos, io.SeekStart); err != nil {
return err return err
} }
scanner := bufio.NewScanner(file) scanner := bufio.NewScanner(file)
scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024) scanner.Buffer(make([]byte, 0, 64*1024), core.MaxLogEntryBytes)
for scanner.Scan() { for scanner.Scan() {
line := scanner.Text() line := scanner.Text()
@ -272,7 +259,7 @@ func (w *fileWatcher) checkFile() error {
if err := scanner.Err(); err != nil { if err := scanner.Err(); err != nil {
w.logger.Error("msg", "Scanner error while reading file", w.logger.Error("msg", "Scanner error while reading file",
"component", "file_watcher", "component", "file_watcher",
"path", w.path, "directory", w.directory,
"position", startPos, "position", startPos,
"error", err) "error", err)
return err return err
@ -311,6 +298,58 @@ func (w *fileWatcher) checkFile() error {
return nil return nil
} }
// seekToEnd sets the initial read position to the end of the file.
func (w *fileWatcher) seekToEnd() error {
file, err := os.Open(w.directory)
if err != nil {
if os.IsNotExist(err) {
w.mu.Lock()
w.position = 0
w.size = 0
w.modTime = time.Now()
w.inode = 0
w.mu.Unlock()
return nil
}
return err
}
defer file.Close()
info, err := file.Stat()
if err != nil {
return err
}
w.mu.Lock()
defer w.mu.Unlock()
// Keep existing position (including 0)
// First time initialization seeks to the end of the file
if w.position == -1 {
pos, err := file.Seek(0, io.SeekEnd)
if err != nil {
return err
}
w.position = pos
}
w.size = info.Size()
w.modTime = info.ModTime()
if stat, ok := info.Sys().(*syscall.Stat_t); ok {
w.inode = stat.Ino
}
return nil
}
// isStopped checks if the watcher has been instructed to stop.
func (w *fileWatcher) isStopped() bool {
w.mu.Lock()
defer w.mu.Unlock()
return w.stopped
}
// parseLine attempts to parse a line as JSON, falling back to plain text.
func (w *fileWatcher) parseLine(line string) core.LogEntry { func (w *fileWatcher) parseLine(line string) core.LogEntry {
var jsonLog struct { var jsonLog struct {
Time string `json:"time"` Time string `json:"time"`
@ -327,7 +366,7 @@ func (w *fileWatcher) parseLine(line string) core.LogEntry {
return core.LogEntry{ return core.LogEntry{
Time: timestamp, Time: timestamp,
Source: filepath.Base(w.path), Source: filepath.Base(w.directory),
Level: jsonLog.Level, Level: jsonLog.Level,
Message: jsonLog.Message, Message: jsonLog.Message,
Fields: jsonLog.Fields, Fields: jsonLog.Fields,
@ -338,12 +377,13 @@ func (w *fileWatcher) parseLine(line string) core.LogEntry {
return core.LogEntry{ return core.LogEntry{
Time: time.Now(), Time: time.Now(),
Source: filepath.Base(w.path), Source: filepath.Base(w.directory),
Level: level, Level: level,
Message: line, Message: line,
} }
} }
// extractLogLevel heuristically determines the log level from a line of text.
func extractLogLevel(line string) string { func extractLogLevel(line string) string {
patterns := []struct { patterns := []struct {
patterns []string patterns []string
@ -367,38 +407,3 @@ func extractLogLevel(line string) string {
return "" return ""
} }
func (w *fileWatcher) getInfo() WatcherInfo {
w.mu.Lock()
info := WatcherInfo{
Path: w.path,
Size: w.size,
Position: w.position,
ModTime: w.modTime,
EntriesRead: w.entriesRead.Load(),
Rotations: w.rotationSeq,
}
w.mu.Unlock()
if lastRead, ok := w.lastReadTime.Load().(time.Time); ok {
info.LastReadTime = lastRead
}
return info
}
func (w *fileWatcher) close() {
w.stop()
}
func (w *fileWatcher) stop() {
w.mu.Lock()
w.stopped = true
w.mu.Unlock()
}
func (w *fileWatcher) isStopped() bool {
w.mu.Lock()
defer w.mu.Unlock()
return w.stopped
}

View File

@ -2,9 +2,9 @@
package source package source
import ( import (
"crypto/tls"
"encoding/json" "encoding/json"
"fmt" "fmt"
"logwisp/src/internal/tls"
"net" "net"
"sync" "sync"
"sync/atomic" "sync/atomic"
@ -12,28 +12,37 @@ import (
"logwisp/src/internal/config" "logwisp/src/internal/config"
"logwisp/src/internal/core" "logwisp/src/internal/core"
"logwisp/src/internal/limit" "logwisp/src/internal/network"
"logwisp/src/internal/session"
ltls "logwisp/src/internal/tls"
"github.com/lixenwraith/log" "github.com/lixenwraith/log"
"github.com/valyala/fasthttp" "github.com/valyala/fasthttp"
) )
// HTTPSource receives log entries via HTTP POST requests // HTTPSource receives log entries via HTTP POST requests.
type HTTPSource struct { type HTTPSource struct {
port int64 // Configuration
ingestPath string config *config.HTTPSourceOptions
bufferSize int64
// Network
server *fasthttp.Server server *fasthttp.Server
netLimiter *network.NetLimiter
// Application
subscribers []chan core.LogEntry subscribers []chan core.LogEntry
logger *log.Logger
// Runtime
mu sync.RWMutex mu sync.RWMutex
done chan struct{} done chan struct{}
wg sync.WaitGroup wg sync.WaitGroup
netLimiter *limit.NetLimiter
logger *log.Logger
// CHANGED: Add TLS support // Security & Session
tlsManager *tls.Manager httpSessions sync.Map // remoteAddr -> sessionID
sslConfig *config.SSLConfig sessionManager *session.Manager
tlsManager *ltls.ServerManager
tlsStates sync.Map // remoteAddr -> *tls.ConnectionState
// Statistics // Statistics
totalEntries atomic.Uint64 totalEntries atomic.Uint64
@ -43,141 +52,142 @@ type HTTPSource struct {
lastEntryTime atomic.Value // time.Time lastEntryTime atomic.Value // time.Time
} }
// NewHTTPSource creates a new HTTP server source // NewHTTPSource creates a new HTTP server source.
func NewHTTPSource(options map[string]any, logger *log.Logger) (*HTTPSource, error) { func NewHTTPSource(opts *config.HTTPSourceOptions, logger *log.Logger) (*HTTPSource, error) {
port, ok := options["port"].(int64) // Validation done in config package
if !ok || port < 1 || port > 65535 { if opts == nil {
return nil, fmt.Errorf("http source requires valid 'port' option") return nil, fmt.Errorf("HTTP source options cannot be nil")
}
ingestPath := "/ingest"
if path, ok := options["ingest_path"].(string); ok && path != "" {
ingestPath = path
}
bufferSize := int64(1000)
if bufSize, ok := options["buffer_size"].(int64); ok && bufSize > 0 {
bufferSize = bufSize
} }
h := &HTTPSource{ h := &HTTPSource{
port: port, config: opts,
ingestPath: ingestPath,
bufferSize: bufferSize,
done: make(chan struct{}), done: make(chan struct{}),
startTime: time.Now(), startTime: time.Now(),
logger: logger, logger: logger,
sessionManager: session.NewManager(core.MaxSessionTime),
} }
h.lastEntryTime.Store(time.Time{}) h.lastEntryTime.Store(time.Time{})
// Initialize net limiter if configured // Initialize net limiter if configured
if rl, ok := options["net_limit"].(map[string]any); ok { if opts.ACL != nil && (opts.ACL.Enabled ||
if enabled, _ := rl["enabled"].(bool); enabled { len(opts.ACL.IPWhitelist) > 0 ||
cfg := config.NetLimitConfig{ len(opts.ACL.IPBlacklist) > 0) {
Enabled: true, h.netLimiter = network.NewNetLimiter(opts.ACL, logger)
} }
if rps, ok := toFloat(rl["requests_per_second"]); ok { // Initialize TLS manager if configured
cfg.RequestsPerSecond = rps if opts.TLS != nil && opts.TLS.Enabled {
} tlsManager, err := ltls.NewServerManager(opts.TLS, logger)
if burst, ok := rl["burst_size"].(int64); ok {
cfg.BurstSize = burst
}
if limitBy, ok := rl["limit_by"].(string); ok {
cfg.LimitBy = limitBy
}
if respCode, ok := rl["response_code"].(int64); ok {
cfg.ResponseCode = respCode
}
if msg, ok := rl["response_message"].(string); ok {
cfg.ResponseMessage = msg
}
if maxPerIP, ok := rl["max_connections_per_ip"].(int64); ok {
cfg.MaxConnectionsPerIP = maxPerIP
}
h.netLimiter = limit.NewNetLimiter(cfg, logger)
}
}
// Extract SSL config after existing options
if ssl, ok := options["ssl"].(map[string]any); ok {
h.sslConfig = &config.SSLConfig{}
h.sslConfig.Enabled, _ = ssl["enabled"].(bool)
if certFile, ok := ssl["cert_file"].(string); ok {
h.sslConfig.CertFile = certFile
}
if keyFile, ok := ssl["key_file"].(string); ok {
h.sslConfig.KeyFile = keyFile
}
// TODO: extract other SSL options similar to tcp_client_sink
// Create TLS manager
if h.sslConfig.Enabled {
tlsManager, err := tls.NewManager(h.sslConfig, logger)
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to create TLS manager: %w", err) return nil, fmt.Errorf("failed to create TLS manager: %w", err)
} }
h.tlsManager = tlsManager h.tlsManager = tlsManager
} }
}
return h, nil return h, nil
} }
// Subscribe returns a channel for receiving log entries.
func (h *HTTPSource) Subscribe() <-chan core.LogEntry { func (h *HTTPSource) Subscribe() <-chan core.LogEntry {
h.mu.Lock() h.mu.Lock()
defer h.mu.Unlock() defer h.mu.Unlock()
ch := make(chan core.LogEntry, h.bufferSize) ch := make(chan core.LogEntry, h.config.BufferSize)
h.subscribers = append(h.subscribers, ch) h.subscribers = append(h.subscribers, ch)
return ch return ch
} }
// Start initializes and starts the HTTP server.
func (h *HTTPSource) Start() error { func (h *HTTPSource) Start() error {
// Register expiry callback
h.sessionManager.RegisterExpiryCallback("http_source", func(sessionID, remoteAddrStr string) {
h.handleSessionExpiry(sessionID, remoteAddrStr)
})
h.server = &fasthttp.Server{ h.server = &fasthttp.Server{
Handler: h.requestHandler, Handler: h.requestHandler,
DisableKeepalive: false, DisableKeepalive: false,
StreamRequestBody: true, StreamRequestBody: true,
CloseOnShutdown: true, CloseOnShutdown: true,
ReadTimeout: time.Duration(h.config.ReadTimeout) * time.Millisecond,
WriteTimeout: time.Duration(h.config.WriteTimeout) * time.Millisecond,
MaxRequestBodySize: int(h.config.MaxRequestBodySize),
} }
addr := fmt.Sprintf(":%d", h.port) // TLS and mTLS configuration
if h.tlsManager != nil {
h.server.TLSConfig = h.tlsManager.GetHTTPConfig()
// Enforce mTLS configuration from the TLSServerConfig struct.
if h.config.TLS.ClientAuth {
if h.config.TLS.VerifyClientCert {
h.server.TLSConfig.ClientAuth = tls.RequireAndVerifyClientCert
} else {
h.server.TLSConfig.ClientAuth = tls.RequireAnyClientCert
}
}
}
// Use configured host and port
addr := fmt.Sprintf("%s:%d", h.config.Host, h.config.Port)
// Start server in background // Start server in background
h.wg.Add(1) h.wg.Add(1)
errChan := make(chan error, 1)
go func() { go func() {
defer h.wg.Done() defer h.wg.Done()
h.logger.Info("msg", "HTTP source server starting", h.logger.Info("msg", "HTTP source server starting",
"component", "http_source", "component", "http_source",
"port", h.port, "port", h.config.Port,
"ingest_path", h.ingestPath, "ingest_path", h.config.IngestPath,
"tls_enabled", h.tlsManager != nil) "tls_enabled", h.tlsManager != nil,
"mtls_enabled", h.config.TLS != nil && h.config.TLS.ClientAuth,
)
var err error var err error
// Check for TLS manager and start the appropriate server type
if h.tlsManager != nil { if h.tlsManager != nil {
h.server.TLSConfig = h.tlsManager.GetHTTPConfig() h.server.TLSConfig = h.tlsManager.GetHTTPConfig()
err = h.server.ListenAndServeTLS(addr, h.sslConfig.CertFile, h.sslConfig.KeyFile)
// Add certificate verification callback
if h.config.TLS.ClientAuth {
h.server.TLSConfig.ClientAuth = tls.RequireAndVerifyClientCert
if h.config.TLS.ClientCAFile != "" {
// ClientCAs already set by tls.Manager
}
}
// HTTPS server
err = h.server.ListenAndServeTLS(addr, h.config.TLS.CertFile, h.config.TLS.KeyFile)
} else { } else {
// HTTP server
err = h.server.ListenAndServe(addr) err = h.server.ListenAndServe(addr)
} }
if err != nil { if err != nil {
h.logger.Error("msg", "HTTP source server failed", h.logger.Error("msg", "HTTP source server failed",
"component", "http_source", "component", "http_source",
"port", h.port, "port", h.config.Port,
"error", err) "error", err)
errChan <- err
} }
}() }()
// Give server time to start // Wait briefly for server startup
time.Sleep(100 * time.Millisecond) // TODO: standardize and better manage timers select {
case err := <-errChan:
return fmt.Errorf("HTTP server failed to start: %w", err)
case <-time.After(250 * time.Millisecond):
return nil return nil
}
} }
// Stop gracefully shuts down the HTTP server.
func (h *HTTPSource) Stop() { func (h *HTTPSource) Stop() {
h.logger.Info("msg", "Stopping HTTP source") h.logger.Info("msg", "Stopping HTTP source")
// Unregister callback
h.sessionManager.UnregisterExpiryCallback("http_source")
close(h.done) close(h.done)
if h.server != nil { if h.server != nil {
@ -202,9 +212,15 @@ func (h *HTTPSource) Stop() {
} }
h.mu.Unlock() h.mu.Unlock()
// Stop session manager
if h.sessionManager != nil {
h.sessionManager.Stop()
}
h.logger.Info("msg", "HTTP source stopped") h.logger.Info("msg", "HTTP source stopped")
} }
// GetStats returns the source's statistics.
func (h *HTTPSource) GetStats() SourceStats { func (h *HTTPSource) GetStats() SourceStats {
lastEntry, _ := h.lastEntryTime.Load().(time.Time) lastEntry, _ := h.lastEntryTime.Load().(time.Time)
@ -213,6 +229,16 @@ func (h *HTTPSource) GetStats() SourceStats {
netLimitStats = h.netLimiter.GetStats() netLimitStats = h.netLimiter.GetStats()
} }
var sessionStats map[string]any
if h.sessionManager != nil {
sessionStats = h.sessionManager.GetStats()
}
var tlsStats map[string]any
if h.tlsManager != nil {
tlsStats = h.tlsManager.GetStats()
}
return SourceStats{ return SourceStats{
Type: "http", Type: "http",
TotalEntries: h.totalEntries.Load(), TotalEntries: h.totalEntries.Load(),
@ -220,29 +246,23 @@ func (h *HTTPSource) GetStats() SourceStats {
StartTime: h.startTime, StartTime: h.startTime,
LastEntryTime: lastEntry, LastEntryTime: lastEntry,
Details: map[string]any{ Details: map[string]any{
"port": h.port, "host": h.config.Host,
"ingest_path": h.ingestPath, "port": h.config.Port,
"path": h.config.IngestPath,
"invalid_entries": h.invalidEntries.Load(), "invalid_entries": h.invalidEntries.Load(),
"net_limit": netLimitStats, "net_limit": netLimitStats,
"sessions": sessionStats,
"tls": tlsStats,
}, },
} }
} }
// requestHandler is the main entry point for all incoming HTTP requests.
func (h *HTTPSource) requestHandler(ctx *fasthttp.RequestCtx) { func (h *HTTPSource) requestHandler(ctx *fasthttp.RequestCtx) {
// Only handle POST to the configured ingest path remoteAddrStr := ctx.RemoteAddr().String()
if string(ctx.Method()) != "POST" || string(ctx.Path()) != h.ingestPath {
ctx.SetStatusCode(fasthttp.StatusNotFound)
ctx.SetContentType("application/json")
json.NewEncoder(ctx).Encode(map[string]string{
"error": "Not Found",
"hint": fmt.Sprintf("POST logs to %s", h.ingestPath),
})
return
}
// Extract and validate IP // 1. IPv6 check (early reject)
remoteAddr := ctx.RemoteAddr().String() ipStr, _, err := net.SplitHostPort(remoteAddrStr)
ipStr, _, err := net.SplitHostPort(remoteAddr)
if err == nil { if err == nil {
if ip := net.ParseIP(ipStr); ip != nil && ip.To4() == nil { if ip := net.ParseIP(ipStr); ip != nil && ip.To4() == nil {
ctx.SetStatusCode(fasthttp.StatusForbidden) ctx.SetStatusCode(fasthttp.StatusForbidden)
@ -254,9 +274,9 @@ func (h *HTTPSource) requestHandler(ctx *fasthttp.RequestCtx) {
} }
} }
// Check net limit // 2. Net limit check (early reject)
if h.netLimiter != nil { if h.netLimiter != nil {
if allowed, statusCode, message := h.netLimiter.CheckHTTP(remoteAddr); !allowed { if allowed, statusCode, message := h.netLimiter.CheckHTTP(remoteAddrStr); !allowed {
ctx.SetStatusCode(int(statusCode)) ctx.SetStatusCode(int(statusCode))
ctx.SetContentType("application/json") ctx.SetContentType("application/json")
json.NewEncoder(ctx).Encode(map[string]any{ json.NewEncoder(ctx).Encode(map[string]any{
@ -265,11 +285,69 @@ func (h *HTTPSource) requestHandler(ctx *fasthttp.RequestCtx) {
}) })
return return
} }
// Reserve connection slot and release when finished
if !h.netLimiter.ReserveConnection(remoteAddrStr) {
ctx.SetStatusCode(fasthttp.StatusTooManyRequests)
ctx.SetContentType("application/json")
json.NewEncoder(ctx).Encode(map[string]string{
"error": "Connection limit exceeded",
})
return
}
defer h.netLimiter.ReleaseConnection(remoteAddrStr)
} }
// Process the request body // 3. Create session for connections
var sess *session.Session
if savedID, exists := h.httpSessions.Load(remoteAddrStr); exists {
if s, found := h.sessionManager.GetSession(savedID.(string)); found {
sess = s
h.sessionManager.UpdateActivity(savedID.(string))
}
}
if sess == nil {
// New connection
sess = h.sessionManager.CreateSession(remoteAddrStr, "http_source", map[string]any{
"tls": ctx.IsTLS() || h.tlsManager != nil,
"mtls_enabled": h.config.TLS != nil && h.config.TLS.ClientAuth,
})
h.httpSessions.Store(remoteAddrStr, sess.ID)
// Setup connection close handler
ctx.SetConnectionClose()
go h.cleanupHTTPSession(remoteAddrStr, sess.ID)
}
// 4. Path check
path := string(ctx.Path())
if path != h.config.IngestPath {
ctx.SetStatusCode(fasthttp.StatusNotFound)
ctx.SetContentType("application/json")
json.NewEncoder(ctx).Encode(map[string]string{
"error": "Not Found",
"hint": fmt.Sprintf("POST logs to %s", h.config.IngestPath),
})
return
}
// 5. Method check (only accepts POST)
if string(ctx.Method()) != "POST" {
ctx.SetStatusCode(fasthttp.StatusMethodNotAllowed)
ctx.SetContentType("application/json")
ctx.Response.Header.Set("Allow", "POST")
json.NewEncoder(ctx).Encode(map[string]string{
"error": "Method not allowed",
"hint": "Use POST to submit logs",
})
return
}
// 6. Process log entry
body := ctx.PostBody() body := ctx.PostBody()
if len(body) == 0 { if len(body) == 0 {
h.invalidEntries.Add(1)
ctx.SetStatusCode(fasthttp.StatusBadRequest) ctx.SetStatusCode(fasthttp.StatusBadRequest)
ctx.SetContentType("application/json") ctx.SetContentType("application/json")
json.NewEncoder(ctx).Encode(map[string]string{ json.NewEncoder(ctx).Encode(map[string]string{
@ -278,35 +356,81 @@ func (h *HTTPSource) requestHandler(ctx *fasthttp.RequestCtx) {
return return
} }
// Parse the log entries var entry core.LogEntry
entries, err := h.parseEntries(body) if err := json.Unmarshal(body, &entry); err != nil {
if err != nil {
h.invalidEntries.Add(1) h.invalidEntries.Add(1)
ctx.SetStatusCode(fasthttp.StatusBadRequest) ctx.SetStatusCode(fasthttp.StatusBadRequest)
ctx.SetContentType("application/json") ctx.SetContentType("application/json")
json.NewEncoder(ctx).Encode(map[string]string{ json.NewEncoder(ctx).Encode(map[string]string{
"error": fmt.Sprintf("Invalid log format: %v", err), "error": fmt.Sprintf("Invalid JSON: %v", err),
}) })
return return
} }
// Publish entries // Set defaults
accepted := 0 if entry.Time.IsZero() {
for _, entry := range entries { entry.Time = time.Now()
if h.publish(entry) {
accepted++
} }
if entry.Source == "" {
entry.Source = "http"
} }
entry.RawSize = int64(len(body))
// Return success response // Publish to subscribers
h.publish(entry)
// Update session activity after successful processing
h.sessionManager.UpdateActivity(sess.ID)
// Success response
ctx.SetStatusCode(fasthttp.StatusAccepted) ctx.SetStatusCode(fasthttp.StatusAccepted)
ctx.SetContentType("application/json") ctx.SetContentType("application/json")
json.NewEncoder(ctx).Encode(map[string]any{ json.NewEncoder(ctx).Encode(map[string]string{
"accepted": accepted, "status": "accepted",
"total": len(entries), "session_id": sess.ID,
}) })
} }
// publish sends a log entry to all subscribers.
func (h *HTTPSource) publish(entry core.LogEntry) {
h.mu.RLock()
defer h.mu.RUnlock()
h.totalEntries.Add(1)
h.lastEntryTime.Store(entry.Time)
for _, ch := range h.subscribers {
select {
case ch <- entry:
default:
h.droppedEntries.Add(1)
h.logger.Debug("msg", "Dropped log entry - subscriber buffer full",
"component", "http_source")
}
}
}
// handleSessionExpiry is the callback for cleaning up expired sessions.
func (h *HTTPSource) handleSessionExpiry(sessionID, remoteAddrStr string) {
h.logger.Info("msg", "Removing expired HTTP session",
"component", "http_source",
"session_id", sessionID,
"remote_addr", remoteAddrStr)
// Remove from mapping
h.httpSessions.Delete(remoteAddrStr)
}
// cleanupHTTPSession removes a session when a client connection is closed.
func (h *HTTPSource) cleanupHTTPSession(addr, sessionID string) {
// Wait for connection to actually close
time.Sleep(100 * time.Millisecond)
h.httpSessions.CompareAndDelete(addr, sessionID)
h.sessionManager.RemoveSession(sessionID)
}
// parseEntries attempts to parse a request body as a single JSON object, a JSON array, or newline-delimited JSON.
func (h *HTTPSource) parseEntries(body []byte) ([]core.LogEntry, error) { func (h *HTTPSource) parseEntries(body []byte) ([]core.LogEntry, error) {
var entries []core.LogEntry var entries []core.LogEntry
@ -331,7 +455,8 @@ func (h *HTTPSource) parseEntries(body []byte) ([]core.LogEntry, error) {
// Try to parse as JSON array // Try to parse as JSON array
var array []core.LogEntry var array []core.LogEntry
if err := json.Unmarshal(body, &array); err == nil { if err := json.Unmarshal(body, &array); err == nil {
// NOTE: Placeholder; For array, divide total size by entry count as approximation // For array, divide total size by entry count as approximation
// Accurate calculation adds too much complexity and processing
approxSizePerEntry := int64(len(body) / len(array)) approxSizePerEntry := int64(len(body) / len(array))
for i, entry := range array { for i, entry := range array {
if entry.Message == "" { if entry.Message == "" {
@ -343,7 +468,6 @@ func (h *HTTPSource) parseEntries(body []byte) ([]core.LogEntry, error) {
if entry.Source == "" { if entry.Source == "" {
array[i].Source = "http" array[i].Source = "http"
} }
// NOTE: Placeholder
array[i].RawSize = approxSizePerEntry array[i].RawSize = approxSizePerEntry
} }
return array, nil return array, nil
@ -382,32 +506,7 @@ func (h *HTTPSource) parseEntries(body []byte) ([]core.LogEntry, error) {
return entries, nil return entries, nil
} }
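Per its new doc comment, `parseEntries` accepts three body shapes: a single JSON object, a JSON array, or newline-delimited JSON. A simplified sketch of that fallback chain under those assumptions (types reduced to one field; the detection order here may differ from the actual implementation):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

type entry struct {
	Message string `json:"message"`
}

// parseBody tries a JSON array first, then a single JSON object,
// then newline-delimited JSON objects, mirroring the documented
// accepted formats.
func parseBody(body []byte) ([]entry, error) {
	var arr []entry
	if err := json.Unmarshal(body, &arr); err == nil {
		return arr, nil
	}
	var one entry
	if err := json.Unmarshal(body, &one); err == nil {
		return []entry{one}, nil
	}
	var out []entry
	for _, line := range bytes.Split(body, []byte("\n")) {
		line = bytes.TrimSpace(line)
		if len(line) == 0 {
			continue
		}
		var e entry
		if err := json.Unmarshal(line, &e); err != nil {
			return nil, fmt.Errorf("invalid NDJSON line: %w", err)
		}
		out = append(out, e)
	}
	return out, nil
}

func main() {
	a, _ := parseBody([]byte(`[{"message":"x"},{"message":"y"}]`))
	b, _ := parseBody([]byte(`{"message":"solo"}`))
	c, _ := parseBody([]byte("{\"message\":\"l1\"}\n{\"message\":\"l2\"}"))
	fmt.Println(len(a), len(b), len(c)) // 2 1 2
}
```

The NDJSON branch works because `json.Unmarshal` rejects trailing data after a top-level value, so a multi-line body falls through the first two attempts.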
func (h *HTTPSource) publish(entry core.LogEntry) bool { // splitLines splits a byte slice into lines, handling both \n and \r\n.
h.mu.RLock()
defer h.mu.RUnlock()
h.totalEntries.Add(1)
h.lastEntryTime.Store(entry.Time)
dropped := false
for _, ch := range h.subscribers {
select {
case ch <- entry:
default:
dropped = true
h.droppedEntries.Add(1)
}
}
if dropped {
h.logger.Debug("msg", "Dropped log entry - subscriber buffer full",
"component", "http_source")
}
return true
}
// splitLines splits bytes into lines, handling both \n and \r\n
func splitLines(data []byte) [][]byte { func splitLines(data []byte) [][]byte {
var lines [][]byte var lines [][]byte
start := 0 start := 0
@ -431,17 +530,3 @@ func splitLines(data []byte) [][]byte {
return lines return lines
} }
// Helper function for type conversion
func toFloat(v any) (float64, bool) {
switch val := v.(type) {
case float64:
return val, true
case int:
return float64(val), true
case int64:
return float64(val), true
default:
return 0, false
}
}

View File

@ -7,22 +7,22 @@ import (
"logwisp/src/internal/core" "logwisp/src/internal/core"
) )
// Source represents an input data stream // Source represents an input data stream for log entries.
type Source interface { type Source interface {
// Subscribe returns a channel that receives log entries // Subscribe returns a channel that receives log entries from the source.
Subscribe() <-chan core.LogEntry Subscribe() <-chan core.LogEntry
// Start begins reading from the source // Start begins reading from the source.
Start() error Start() error
// Stop gracefully shuts down the source // Stop gracefully shuts down the source.
Stop() Stop()
// GetStats returns source statistics // GetStats returns source statistics.
GetStats() SourceStats GetStats() SourceStats
} }
// SourceStats contains statistics about a source // SourceStats contains statistics about a source.
type SourceStats struct { type SourceStats struct {
Type string Type string
TotalEntries uint64 TotalEntries uint64

View File

@ -1,114 +0,0 @@
// FILE: logwisp/src/internal/source/stdin.go
package source
import (
"bufio"
"os"
"sync/atomic"
"time"
"logwisp/src/internal/core"
"github.com/lixenwraith/log"
)
// StdinSource reads log entries from standard input
type StdinSource struct {
subscribers []chan core.LogEntry
done chan struct{}
totalEntries atomic.Uint64
droppedEntries atomic.Uint64
startTime time.Time
lastEntryTime atomic.Value // time.Time
logger *log.Logger
}
// NewStdinSource creates a new stdin source
func NewStdinSource(options map[string]any, logger *log.Logger) (*StdinSource, error) {
s := &StdinSource{
done: make(chan struct{}),
startTime: time.Now(),
logger: logger,
}
s.lastEntryTime.Store(time.Time{})
return s, nil
}
func (s *StdinSource) Subscribe() <-chan core.LogEntry {
ch := make(chan core.LogEntry, 1000)
s.subscribers = append(s.subscribers, ch)
return ch
}
func (s *StdinSource) Start() error {
go s.readLoop()
s.logger.Info("msg", "Stdin source started", "component", "stdin_source")
return nil
}
func (s *StdinSource) Stop() {
close(s.done)
for _, ch := range s.subscribers {
close(ch)
}
s.logger.Info("msg", "Stdin source stopped", "component", "stdin_source")
}
func (s *StdinSource) GetStats() SourceStats {
lastEntry, _ := s.lastEntryTime.Load().(time.Time)
return SourceStats{
Type: "stdin",
TotalEntries: s.totalEntries.Load(),
DroppedEntries: s.droppedEntries.Load(),
StartTime: s.startTime,
LastEntryTime: lastEntry,
Details: map[string]any{},
}
}
func (s *StdinSource) readLoop() {
scanner := bufio.NewScanner(os.Stdin)
for scanner.Scan() {
select {
case <-s.done:
return
default:
line := scanner.Text()
if line == "" {
continue
}
entry := core.LogEntry{
Time: time.Now(),
Source: "stdin",
Message: line,
Level: extractLogLevel(line),
RawSize: int64(len(line)),
}
s.publish(entry)
}
}
if err := scanner.Err(); err != nil {
s.logger.Error("msg", "Scanner error reading stdin",
"component", "stdin_source",
"error", err)
}
}
func (s *StdinSource) publish(entry core.LogEntry) {
s.totalEntries.Add(1)
s.lastEntryTime.Store(entry.Time)
for _, ch := range s.subscribers {
select {
case ch <- entry:
default:
s.droppedEntries.Add(1)
s.logger.Debug("msg", "Dropped log entry - subscriber buffer full",
"component", "stdin_source")
}
}
}

View File

@ -5,18 +5,16 @@ import (
"bytes" "bytes"
"context" "context"
"encoding/json" "encoding/json"
"errors"
"fmt" "fmt"
"net" "net"
"sync" "sync"
"sync/atomic" "sync/atomic"
"time" "time"
"logwisp/src/internal/auth"
"logwisp/src/internal/config" "logwisp/src/internal/config"
"logwisp/src/internal/core" "logwisp/src/internal/core"
"logwisp/src/internal/limit" "logwisp/src/internal/network"
"logwisp/src/internal/tls" "logwisp/src/internal/session"
"github.com/lixenwraith/log" "github.com/lixenwraith/log"
"github.com/lixenwraith/log/compat" "github.com/lixenwraith/log/compat"
@ -26,26 +24,31 @@ import (
const ( const (
maxClientBufferSize = 10 * 1024 * 1024 // 10MB max per client maxClientBufferSize = 10 * 1024 * 1024 // 10MB max per client
maxLineLength = 1 * 1024 * 1024 // 1MB max per log line maxLineLength = 1 * 1024 * 1024 // 1MB max per log line
maxEncryptedDataPerRead = 1 * 1024 * 1024 // 1MB max encrypted data per read
maxCumulativeEncrypted = 20 * 1024 * 1024 // 20MB total encrypted before processing
) )
// TCPSource receives log entries via TCP connections // TCPSource receives log entries via TCP connections.
type TCPSource struct { type TCPSource struct {
port int64 // Configuration
bufferSize int64 config *config.TCPSourceOptions
// Network
server *tcpSourceServer server *tcpSourceServer
subscribers []chan core.LogEntry
mu sync.RWMutex
done chan struct{}
engine *gnet.Engine engine *gnet.Engine
engineMu sync.Mutex engineMu sync.Mutex
wg sync.WaitGroup netLimiter *network.NetLimiter
netLimiter *limit.NetLimiter
tlsManager *tls.Manager // Application
sslConfig *config.SSLConfig subscribers []chan core.LogEntry
logger *log.Logger logger *log.Logger
// Runtime
mu sync.RWMutex
done chan struct{}
wg sync.WaitGroup
// Security & Session
sessionManager *session.Manager
// Statistics // Statistics
totalEntries atomic.Uint64 totalEntries atomic.Uint64
droppedEntries atomic.Uint64 droppedEntries atomic.Uint64
@@ -55,99 +58,56 @@ type TCPSource struct {
lastEntryTime atomic.Value // time.Time lastEntryTime atomic.Value // time.Time
} }
// NewTCPSource creates a new TCP server source // NewTCPSource creates a new TCP server source.
func NewTCPSource(options map[string]any, logger *log.Logger) (*TCPSource, error) { func NewTCPSource(opts *config.TCPSourceOptions, logger *log.Logger) (*TCPSource, error) {
port, ok := options["port"].(int64) // Accept typed config - validation done in config package
if !ok || port < 1 || port > 65535 { if opts == nil {
return nil, fmt.Errorf("tcp source requires valid 'port' option") return nil, fmt.Errorf("TCP source options cannot be nil")
}
bufferSize := int64(1000)
if bufSize, ok := options["buffer_size"].(int64); ok && bufSize > 0 {
bufferSize = bufSize
} }
t := &TCPSource{ t := &TCPSource{
port: port, config: opts,
bufferSize: bufferSize,
done: make(chan struct{}), done: make(chan struct{}),
startTime: time.Now(), startTime: time.Now(),
logger: logger, logger: logger,
sessionManager: session.NewManager(core.MaxSessionTime),
} }
t.lastEntryTime.Store(time.Time{}) t.lastEntryTime.Store(time.Time{})
// Initialize net limiter if configured // Initialize net limiter if configured
if rl, ok := options["net_limit"].(map[string]any); ok { if opts.ACL != nil && (opts.ACL.Enabled ||
if enabled, _ := rl["enabled"].(bool); enabled { len(opts.ACL.IPWhitelist) > 0 ||
cfg := config.NetLimitConfig{ len(opts.ACL.IPBlacklist) > 0) {
Enabled: true, t.netLimiter = network.NewNetLimiter(opts.ACL, logger)
}
if rps, ok := toFloat(rl["requests_per_second"]); ok {
cfg.RequestsPerSecond = rps
}
if burst, ok := rl["burst_size"].(int64); ok {
cfg.BurstSize = burst
}
if limitBy, ok := rl["limit_by"].(string); ok {
cfg.LimitBy = limitBy
}
if maxPerIP, ok := rl["max_connections_per_ip"].(int64); ok {
cfg.MaxConnectionsPerIP = maxPerIP
}
if maxTotal, ok := rl["max_total_connections"].(int64); ok {
cfg.MaxTotalConnections = maxTotal
}
t.netLimiter = limit.NewNetLimiter(cfg, logger)
}
}
// Extract SSL config and initialize TLS manager
if ssl, ok := options["ssl"].(map[string]any); ok {
t.sslConfig = &config.SSLConfig{}
t.sslConfig.Enabled, _ = ssl["enabled"].(bool)
if certFile, ok := ssl["cert_file"].(string); ok {
t.sslConfig.CertFile = certFile
}
if keyFile, ok := ssl["key_file"].(string); ok {
t.sslConfig.KeyFile = keyFile
}
t.sslConfig.ClientAuth, _ = ssl["client_auth"].(bool)
if caFile, ok := ssl["client_ca_file"].(string); ok {
t.sslConfig.ClientCAFile = caFile
}
t.sslConfig.VerifyClientCert, _ = ssl["verify_client_cert"].(bool)
// Create TLS manager if enabled
if t.sslConfig.Enabled {
tlsManager, err := tls.NewManager(t.sslConfig, logger)
if err != nil {
return nil, fmt.Errorf("failed to create TLS manager: %w", err)
}
t.tlsManager = tlsManager
}
} }
return t, nil return t, nil
} }
// Subscribe returns a channel for receiving log entries.
func (t *TCPSource) Subscribe() <-chan core.LogEntry { func (t *TCPSource) Subscribe() <-chan core.LogEntry {
t.mu.Lock() t.mu.Lock()
defer t.mu.Unlock() defer t.mu.Unlock()
ch := make(chan core.LogEntry, t.bufferSize) ch := make(chan core.LogEntry, t.config.BufferSize)
t.subscribers = append(t.subscribers, ch) t.subscribers = append(t.subscribers, ch)
return ch return ch
} }
// Start initializes and starts the TCP server.
func (t *TCPSource) Start() error { func (t *TCPSource) Start() error {
t.server = &tcpSourceServer{ t.server = &tcpSourceServer{
source: t, source: t,
clients: make(map[gnet.Conn]*tcpClient), clients: make(map[gnet.Conn]*tcpClient),
} }
addr := fmt.Sprintf("tcp://:%d", t.port) // Register expiry callback
t.sessionManager.RegisterExpiryCallback("tcp_source", func(sessionID, remoteAddrStr string) {
t.handleSessionExpiry(sessionID, remoteAddrStr)
})
// Use configured host and port
addr := fmt.Sprintf("tcp://%s:%d", t.config.Host, t.config.Port)
// Create a gnet adapter using the existing logger instance // Create a gnet adapter using the existing logger instance
gnetLogger := compat.NewGnetAdapter(t.logger) gnetLogger := compat.NewGnetAdapter(t.logger)
@@ -159,18 +119,19 @@ func (t *TCPSource) Start() error {
defer t.wg.Done() defer t.wg.Done()
t.logger.Info("msg", "TCP source server starting", t.logger.Info("msg", "TCP source server starting",
"component", "tcp_source", "component", "tcp_source",
"port", t.port, "port", t.config.Port,
"tls_enabled", t.tlsManager != nil) )
err := gnet.Run(t.server, addr, err := gnet.Run(t.server, addr,
gnet.WithLogger(gnetLogger), gnet.WithLogger(gnetLogger),
gnet.WithMulticore(true), gnet.WithMulticore(true),
gnet.WithReusePort(true), gnet.WithReusePort(true),
gnet.WithTCPKeepAlive(time.Duration(t.config.KeepAlivePeriod)*time.Millisecond),
) )
if err != nil { if err != nil {
t.logger.Error("msg", "TCP source server failed", t.logger.Error("msg", "TCP source server failed",
"component", "tcp_source", "component", "tcp_source",
"port", t.port, "port", t.config.Port,
"error", err) "error", err)
} }
errChan <- err errChan <- err
@@ -185,13 +146,18 @@ func (t *TCPSource) Start() error {
return err return err
case <-time.After(100 * time.Millisecond): case <-time.After(100 * time.Millisecond):
// Server started successfully // Server started successfully
t.logger.Info("msg", "TCP server started", "port", t.port) t.logger.Info("msg", "TCP server started", "port", t.config.Port)
return nil return nil
} }
} }
// Stop gracefully shuts down the TCP server.
func (t *TCPSource) Stop() { func (t *TCPSource) Stop() {
t.logger.Info("msg", "Stopping TCP source") t.logger.Info("msg", "Stopping TCP source")
// Unregister callback
t.sessionManager.UnregisterExpiryCallback("tcp_source")
close(t.done) close(t.done)
// Stop gnet engine if running // Stop gnet engine if running
@@ -222,6 +188,7 @@ func (t *TCPSource) Stop() {
t.logger.Info("msg", "TCP source stopped") t.logger.Info("msg", "TCP source stopped")
} }
// GetStats returns the source's statistics.
func (t *TCPSource) GetStats() SourceStats { func (t *TCPSource) GetStats() SourceStats {
lastEntry, _ := t.lastEntryTime.Load().(time.Time) lastEntry, _ := t.lastEntryTime.Load().(time.Time)
@@ -230,6 +197,11 @@ func (t *TCPSource) GetStats() SourceStats {
netLimitStats = t.netLimiter.GetStats() netLimitStats = t.netLimiter.GetStats()
} }
var sessionStats map[string]any
if t.sessionManager != nil {
sessionStats = t.sessionManager.GetStats()
}
return SourceStats{ return SourceStats{
Type: "tcp", Type: "tcp",
TotalEntries: t.totalEntries.Load(), TotalEntries: t.totalEntries.Load(),
@@ -237,52 +209,16 @@ func (t *TCPSource) GetStats() SourceStats {
StartTime: t.startTime, StartTime: t.startTime,
LastEntryTime: lastEntry, LastEntryTime: lastEntry,
Details: map[string]any{ Details: map[string]any{
"port": t.port, "port": t.config.Port,
"active_connections": t.activeConns.Load(), "active_connections": t.activeConns.Load(),
"invalid_entries": t.invalidEntries.Load(), "invalid_entries": t.invalidEntries.Load(),
"net_limit": netLimitStats, "net_limit": netLimitStats,
"sessions": sessionStats,
}, },
} }
} }
func (t *TCPSource) publish(entry core.LogEntry) bool { // tcpSourceServer implements the gnet.EventHandler interface for the source.
t.mu.RLock()
defer t.mu.RUnlock()
t.totalEntries.Add(1)
t.lastEntryTime.Store(entry.Time)
dropped := false
for _, ch := range t.subscribers {
select {
case ch <- entry:
default:
dropped = true
t.droppedEntries.Add(1)
}
}
if dropped {
t.logger.Debug("msg", "Dropped log entry - subscriber buffer full",
"component", "tcp_source")
}
return true
}
// tcpClient represents a connected TCP client
type tcpClient struct {
conn gnet.Conn
buffer bytes.Buffer
authenticated bool
session *auth.Session
authTimeout time.Time
tlsBridge *tls.GNetTLSConn
maxBufferSeen int
cumulativeEncrypted int64
}
// tcpSourceServer handles gnet events
type tcpSourceServer struct { type tcpSourceServer struct {
gnet.BuiltinEventEngine gnet.BuiltinEventEngine
source *TCPSource source *TCPSource
@@ -290,6 +226,15 @@ type tcpSourceServer struct {
mu sync.RWMutex mu sync.RWMutex
} }
// tcpClient represents a connected TCP client and its state.
type tcpClient struct {
conn gnet.Conn
buffer *bytes.Buffer
sessionID string
maxBufferSeen int
}
// OnBoot is called when the server starts.
func (s *tcpSourceServer) OnBoot(eng gnet.Engine) gnet.Action { func (s *tcpSourceServer) OnBoot(eng gnet.Engine) gnet.Action {
// Store engine reference for shutdown // Store engine reference for shutdown
s.source.engineMu.Lock() s.source.engineMu.Lock()
@@ -298,98 +243,111 @@ func (s *tcpSourceServer) OnBoot(eng gnet.Engine) gnet.Action {
s.source.logger.Debug("msg", "TCP source server booted", s.source.logger.Debug("msg", "TCP source server booted",
"component", "tcp_source", "component", "tcp_source",
"port", s.source.port) "port", s.source.config.Port)
return gnet.None return gnet.None
} }
// OnOpen is called when a new connection is established.
func (s *tcpSourceServer) OnOpen(c gnet.Conn) (out []byte, action gnet.Action) { func (s *tcpSourceServer) OnOpen(c gnet.Conn) (out []byte, action gnet.Action) {
remoteAddr := c.RemoteAddr().String() remoteAddrStr := c.RemoteAddr().String()
s.source.logger.Debug("msg", "TCP connection attempt", s.source.logger.Debug("msg", "TCP connection attempt",
"component", "tcp_source", "component", "tcp_source",
"remote_addr", remoteAddr) "remote_addr", remoteAddrStr)
// Check net limit // Check net limit
if s.source.netLimiter != nil { if s.source.netLimiter != nil {
tcpAddr, err := net.ResolveTCPAddr("tcp", remoteAddr) tcpAddr, err := net.ResolveTCPAddr("tcp", remoteAddrStr)
if err != nil { if err != nil {
s.source.logger.Warn("msg", "Failed to parse TCP address", s.source.logger.Warn("msg", "Failed to parse TCP address",
"component", "tcp_source", "component", "tcp_source",
"remote_addr", remoteAddr, "remote_addr", remoteAddrStr,
"error", err) "error", err)
return nil, gnet.Close return nil, gnet.Close
} }
// Check if connection is allowed
ip := tcpAddr.IP
if ip.To4() == nil {
// Reject IPv6
s.source.logger.Warn("msg", "IPv6 connection rejected",
"component", "tcp_source",
"remote_addr", remoteAddrStr)
return []byte("IPv4-only (IPv6 not supported)\n"), gnet.Close
}
if !s.source.netLimiter.CheckTCP(tcpAddr) { if !s.source.netLimiter.CheckTCP(tcpAddr) {
s.source.logger.Warn("msg", "TCP connection net limited", s.source.logger.Warn("msg", "TCP connection net limited",
"component", "tcp_source", "component", "tcp_source",
"remote_addr", remoteAddr) "remote_addr", remoteAddrStr)
return nil, gnet.Close return nil, gnet.Close
} }
// Track connection // Reserve connection atomically
s.source.netLimiter.AddConnection(remoteAddr) if !s.source.netLimiter.ReserveConnection(remoteAddrStr) {
} s.source.logger.Warn("msg", "TCP connection limit exceeded",
// Create client state
client := &tcpClient{conn: c}
// Initialize TLS bridge if enabled
if s.source.tlsManager != nil {
tlsConfig := s.source.tlsManager.GetTCPConfig()
client.tlsBridge = tls.NewServerConn(c, tlsConfig)
client.tlsBridge.Handshake() // Start async handshake
s.source.logger.Debug("msg", "TLS handshake initiated",
"component", "tcp_source", "component", "tcp_source",
"remote_addr", remoteAddr) "remote_addr", remoteAddrStr)
return nil, gnet.Close
}
} }
// Create session
sess := s.source.sessionManager.CreateSession(remoteAddrStr, "tcp_source", nil)
// Create client state // Create client state
client := &tcpClient{
conn: c,
buffer: bytes.NewBuffer(nil),
sessionID: sess.ID,
}
s.mu.Lock() s.mu.Lock()
s.clients[c] = &tcpClient{conn: c} s.clients[c] = client
s.mu.Unlock() s.mu.Unlock()
newCount := s.source.activeConns.Add(1) s.source.activeConns.Add(1)
s.source.logger.Debug("msg", "TCP connection opened", s.source.logger.Debug("msg", "TCP connection opened",
"component", "tcp_source", "component", "tcp_source",
"remote_addr", remoteAddr, "remote_addr", remoteAddrStr,
"active_connections", newCount, "session_id", sess.ID)
"tls_enabled", s.source.tlsManager != nil)
return nil, gnet.None return out, gnet.None
} }
// OnClose is called when a connection is closed.
func (s *tcpSourceServer) OnClose(c gnet.Conn, err error) gnet.Action { func (s *tcpSourceServer) OnClose(c gnet.Conn, err error) gnet.Action {
remoteAddr := c.RemoteAddr().String() remoteAddrStr := c.RemoteAddr().String()
// Get client to retrieve session ID
s.mu.RLock()
client, exists := s.clients[c]
s.mu.RUnlock()
if exists && client.sessionID != "" {
// Remove session
s.source.sessionManager.RemoveSession(client.sessionID)
}
// Release connection
if s.source.netLimiter != nil {
s.source.netLimiter.ReleaseConnection(remoteAddrStr)
}
// Remove client state // Remove client state
s.mu.Lock() s.mu.Lock()
client := s.clients[c]
delete(s.clients, c) delete(s.clients, c)
s.mu.Unlock() s.mu.Unlock()
// Clean up TLS bridge if present newConnectionCount := s.source.activeConns.Add(-1)
if client != nil && client.tlsBridge != nil {
client.tlsBridge.Close()
s.source.logger.Debug("msg", "TLS connection closed",
"component", "tcp_source",
"remote_addr", remoteAddr)
}
// Remove connection tracking
if s.source.netLimiter != nil {
s.source.netLimiter.RemoveConnection(remoteAddr)
}
newCount := s.source.activeConns.Add(-1)
s.source.logger.Debug("msg", "TCP connection closed", s.source.logger.Debug("msg", "TCP connection closed",
"component", "tcp_source", "component", "tcp_source",
"remote_addr", remoteAddr, "remote_addr", remoteAddrStr,
"active_connections", newCount, "active_connections", newConnectionCount,
"error", err) "error", err)
return gnet.None return gnet.None
} }
// OnTraffic is called when data is received from a connection.
func (s *tcpSourceServer) OnTraffic(c gnet.Conn) gnet.Action { func (s *tcpSourceServer) OnTraffic(c gnet.Conn) gnet.Action {
s.mu.RLock() s.mu.RLock()
client, exists := s.clients[c] client, exists := s.clients[c]
@@ -399,6 +357,11 @@ func (s *tcpSourceServer) OnTraffic(c gnet.Conn) gnet.Action {
return gnet.Close return gnet.Close
} }
// Update session activity when client sends data
if client.sessionID != "" {
s.source.sessionManager.UpdateActivity(client.sessionID)
}
// Read all available data // Read all available data
data, err := c.Next(-1) data, err := c.Next(-1)
if err != nil { if err != nil {
@@ -408,79 +371,19 @@ func (s *tcpSourceServer) OnTraffic(c gnet.Conn) gnet.Action {
return gnet.Close return gnet.Close
} }
// Check encrypted data size BEFORE processing through TLS return s.processLogData(c, client, data)
if len(data) > maxEncryptedDataPerRead { }
s.source.logger.Warn("msg", "Encrypted data per read limit exceeded",
"component", "tcp_source",
"remote_addr", c.RemoteAddr().String(),
"data_size", len(data),
"limit", maxEncryptedDataPerRead)
s.source.invalidEntries.Add(1)
return gnet.Close
}
// Track cumulative encrypted data to prevent slow accumulation // processLogData processes raw data from a client, parsing and publishing log entries.
client.cumulativeEncrypted += int64(len(data)) func (s *tcpSourceServer) processLogData(c gnet.Conn, client *tcpClient, data []byte) gnet.Action {
if client.cumulativeEncrypted > maxCumulativeEncrypted { // Check if appending the new data would exceed the client buffer limit.
s.source.logger.Warn("msg", "Cumulative encrypted data limit exceeded",
"component", "tcp_source",
"remote_addr", c.RemoteAddr().String(),
"total_encrypted", client.cumulativeEncrypted,
"limit", maxCumulativeEncrypted)
s.source.invalidEntries.Add(1)
return gnet.Close
}
// Process through TLS bridge if present
if client.tlsBridge != nil {
// Feed encrypted data into TLS engine
if err := client.tlsBridge.ProcessIncoming(data); err != nil {
if errors.Is(err, tls.ErrTLSBackpressure) {
s.source.logger.Warn("msg", "TLS backpressure, closing slow client",
"component", "tcp_source",
"remote_addr", c.RemoteAddr().String())
} else {
s.source.logger.Error("msg", "TLS processing error",
"component", "tcp_source",
"remote_addr", c.RemoteAddr().String(),
"error", err)
}
return gnet.Close
}
// Check if handshake is complete
if !client.tlsBridge.IsHandshakeDone() {
// Still handshaking, wait for more data
return gnet.None
}
// Check handshake result
_, hsErr := client.tlsBridge.HandshakeComplete()
if hsErr != nil {
s.source.logger.Error("msg", "TLS handshake failed",
"component", "tcp_source",
"remote_addr", c.RemoteAddr().String(),
"error", hsErr)
return gnet.Close
}
// Read decrypted plaintext
data = client.tlsBridge.Read()
if data == nil || len(data) == 0 {
// No plaintext available yet
return gnet.None
}
// Reset cumulative counter after successful decryption and processing
client.cumulativeEncrypted = 0
}
// Check buffer size before appending
if client.buffer.Len()+len(data) > maxClientBufferSize { if client.buffer.Len()+len(data) > maxClientBufferSize {
s.source.logger.Warn("msg", "Client buffer limit exceeded", s.source.logger.Warn("msg", "Client buffer limit exceeded, closing connection.",
"component", "tcp_source", "component", "tcp_source",
"remote_addr", c.RemoteAddr().String(), "remote_addr", c.RemoteAddr().String(),
"buffer_size", client.buffer.Len(), "buffer_size", client.buffer.Len(),
"incoming_size", len(data)) "incoming_size", len(data),
"limit", maxClientBufferSize)
s.source.invalidEntries.Add(1) s.source.invalidEntries.Add(1)
return gnet.Close return gnet.Close
} }
@@ -565,12 +468,41 @@ func (s *tcpSourceServer) OnTraffic(c gnet.Conn) gnet.Action {
return gnet.None return gnet.None
} }
// noopLogger implements gnet's Logger interface but discards everything // publish sends a log entry to all subscribers.
// type noopLogger struct{} func (t *TCPSource) publish(entry core.LogEntry) {
// func (n noopLogger) Debugf(format string, args ...any) {} t.mu.RLock()
// func (n noopLogger) Infof(format string, args ...any) {} defer t.mu.RUnlock()
// func (n noopLogger) Warnf(format string, args ...any) {}
// func (n noopLogger) Errorf(format string, args ...any) {}
// func (n noopLogger) Fatalf(format string, args ...any) {}
// Usage: gnet.Run(..., gnet.WithLogger(noopLogger{}), ...) t.totalEntries.Add(1)
t.lastEntryTime.Store(entry.Time)
for _, ch := range t.subscribers {
select {
case ch <- entry:
default:
t.droppedEntries.Add(1)
t.logger.Debug("msg", "Dropped log entry - subscriber buffer full",
"component", "tcp_source")
}
}
}
// handleSessionExpiry is the callback for cleaning up expired sessions.
func (t *TCPSource) handleSessionExpiry(sessionID, remoteAddrStr string) {
t.server.mu.RLock()
defer t.server.mu.RUnlock()
// Find connection by session ID
for conn, client := range t.server.clients {
if client.sessionID == sessionID {
t.logger.Info("msg", "Closing expired session connection",
"component", "tcp_source",
"session_id", sessionID,
"remote_addr", remoteAddrStr)
// Close connection
conn.Close()
return
}
}
}
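The new session-expiry path relies on the internal `session.Manager` supporting named callback registration (`RegisterExpiryCallback` / `UnregisterExpiryCallback`); that API isn't shown in this diff, so the following is a hypothetical sketch of the registry shape the source appears to depend on, not the real implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// expiryRegistry is an assumed, minimal stand-in for the callback side of
// session.Manager: components register a named callback and the manager
// invokes every callback when a session expires.
type expiryRegistry struct {
	mu        sync.Mutex
	callbacks map[string]func(sessionID, remoteAddr string)
}

func newExpiryRegistry() *expiryRegistry {
	return &expiryRegistry{callbacks: make(map[string]func(string, string))}
}

func (r *expiryRegistry) Register(name string, cb func(sessionID, remoteAddr string)) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.callbacks[name] = cb
}

func (r *expiryRegistry) Unregister(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.callbacks, name)
}

// expire notifies every registered callback of an expired session.
func (r *expiryRegistry) expire(sessionID, remoteAddr string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	for _, cb := range r.callbacks {
		cb(sessionID, remoteAddr)
	}
}

func main() {
	r := newExpiryRegistry()
	r.Register("tcp_source", func(id, addr string) {
		fmt.Println("closing expired session", id, addr)
	})
	r.expire("sess-1", "10.0.0.1:5000")
}
```

This keeps the ownership clean: the manager detects expiry, and each source decides how to tear down its own connection (here, `handleSessionExpiry` closes the matching `gnet.Conn`).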


@@ -0,0 +1,94 @@
// FILE: src/internal/tls/client.go
package tls
import (
"crypto/tls"
"crypto/x509"
"fmt"
"os"
"logwisp/src/internal/config"
"github.com/lixenwraith/log"
)
// ClientManager handles TLS configuration for client components.
type ClientManager struct {
config *config.TLSClientConfig
tlsConfig *tls.Config
logger *log.Logger
}
// NewClientManager creates a TLS manager for clients (HTTP Client Sink).
func NewClientManager(cfg *config.TLSClientConfig, logger *log.Logger) (*ClientManager, error) {
if cfg == nil || !cfg.Enabled {
return nil, nil
}
m := &ClientManager{
config: cfg,
logger: logger,
tlsConfig: &tls.Config{
MinVersion: parseTLSVersion(cfg.MinVersion, tls.VersionTLS12),
MaxVersion: parseTLSVersion(cfg.MaxVersion, tls.VersionTLS13),
},
}
// Cipher suite configuration
if cfg.CipherSuites != "" {
m.tlsConfig.CipherSuites = parseCipherSuites(cfg.CipherSuites)
}
// Load client certificate for mTLS, if provided.
if cfg.ClientCertFile != "" && cfg.ClientKeyFile != "" {
clientCert, err := tls.LoadX509KeyPair(cfg.ClientCertFile, cfg.ClientKeyFile)
if err != nil {
return nil, fmt.Errorf("failed to load client cert/key: %w", err)
}
m.tlsConfig.Certificates = []tls.Certificate{clientCert}
} else if cfg.ClientCertFile != "" || cfg.ClientKeyFile != "" {
return nil, fmt.Errorf("both client_cert_file and client_key_file must be provided for mTLS")
}
// Load server CA for verification.
if cfg.ServerCAFile != "" {
caCert, err := os.ReadFile(cfg.ServerCAFile)
if err != nil {
return nil, fmt.Errorf("failed to read server CA file: %w", err)
}
caCertPool := x509.NewCertPool()
if !caCertPool.AppendCertsFromPEM(caCert) {
return nil, fmt.Errorf("failed to parse server CA certificate")
}
m.tlsConfig.RootCAs = caCertPool
}
m.tlsConfig.InsecureSkipVerify = cfg.InsecureSkipVerify
m.tlsConfig.ServerName = cfg.ServerName
logger.Info("msg", "TLS Client Manager initialized", "component", "tls")
return m, nil
}
// GetConfig returns the client's TLS configuration.
func (m *ClientManager) GetConfig() *tls.Config {
if m == nil {
return nil
}
return m.tlsConfig.Clone()
}
// GetStats returns statistics about the current client TLS configuration.
func (m *ClientManager) GetStats() map[string]any {
if m == nil {
return map[string]any{"enabled": false}
}
return map[string]any{
"enabled": true,
"min_version": tlsVersionString(m.tlsConfig.MinVersion),
"max_version": tlsVersionString(m.tlsConfig.MaxVersion),
"has_client_cert": m.config.ClientCertFile != "",
"has_server_ca": m.config.ServerCAFile != "",
"insecure_skip_verify": m.config.InsecureSkipVerify,
}
}
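`NewClientManager` enforces a both-or-neither rule for the mTLS client certificate pair: `ClientCertFile` and `ClientKeyFile` must be set together or not at all. That validation can be isolated into a tiny helper (a sketch, not code from the repo):

```go
package main

import (
	"errors"
	"fmt"
)

// checkClientCertPair mirrors the validation in NewClientManager: a client
// certificate and key are only meaningful as a pair, so exactly one being
// set is a configuration error.
func checkClientCertPair(certFile, keyFile string) error {
	if (certFile == "") != (keyFile == "") {
		return errors.New("both client_cert_file and client_key_file must be provided for mTLS")
	}
	return nil
}

func main() {
	fmt.Println(checkClientCertPair("client.pem", "client.key")) // valid pair
	fmt.Println(checkClientCertPair("client.pem", ""))           // error: key missing
}
```

Catching this at construction time beats the alternative, where `tls.LoadX509KeyPair` would only fail later (or, worse, the client would silently skip mTLS).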


@@ -1,341 +0,0 @@
// FILE: src/internal/tls/gnet_bridge.go
package tls
import (
"crypto/tls"
"errors"
"io"
"net"
"sync"
"sync/atomic"
"time"
"github.com/panjf2000/gnet/v2"
)
var (
ErrTLSBackpressure = errors.New("TLS processing backpressure")
ErrConnectionClosed = errors.New("connection closed")
ErrPlaintextBufferExceeded = errors.New("plaintext buffer size exceeded")
)
// Maximum plaintext buffer size to prevent memory exhaustion
const maxPlaintextBufferSize = 32 * 1024 * 1024 // 32MB
// GNetTLSConn bridges gnet.Conn with crypto/tls via io.Pipe
type GNetTLSConn struct {
gnetConn gnet.Conn
tlsConn *tls.Conn
config *tls.Config
// Buffered channels for non-blocking operation
incomingCipher chan []byte // Network → TLS (encrypted)
outgoingCipher chan []byte // TLS → Network (encrypted)
// Handshake state
handshakeOnce sync.Once
handshakeDone chan struct{}
handshakeErr error
// Decrypted data buffer
plainBuf []byte
plainMu sync.Mutex
// Lifecycle
closed atomic.Bool
closeOnce sync.Once
wg sync.WaitGroup
// Error tracking
lastErr atomic.Value // error
logger interface{ Warn(args ...any) } // Minimal logger interface
}
// NewServerConn creates a server-side TLS bridge
func NewServerConn(gnetConn gnet.Conn, config *tls.Config) *GNetTLSConn {
tc := &GNetTLSConn{
gnetConn: gnetConn,
config: config,
handshakeDone: make(chan struct{}),
// Buffered channels sized for throughput without blocking
incomingCipher: make(chan []byte, 128), // 128 packets buffer
outgoingCipher: make(chan []byte, 128),
plainBuf: make([]byte, 0, 65536), // 64KB initial capacity
}
// Create TLS conn with channel-based transport
rawConn := &channelConn{
incoming: tc.incomingCipher,
outgoing: tc.outgoingCipher,
localAddr: gnetConn.LocalAddr(),
remoteAddr: gnetConn.RemoteAddr(),
tc: tc,
}
tc.tlsConn = tls.Server(rawConn, config)
// Start pump goroutines
tc.wg.Add(2)
go tc.pumpCipherToNetwork()
go tc.pumpPlaintextFromTLS()
return tc
}
// NewClientConn creates a client-side TLS bridge (similar changes)
func NewClientConn(gnetConn gnet.Conn, config *tls.Config, serverName string) *GNetTLSConn {
tc := &GNetTLSConn{
gnetConn: gnetConn,
config: config,
handshakeDone: make(chan struct{}),
incomingCipher: make(chan []byte, 128),
outgoingCipher: make(chan []byte, 128),
plainBuf: make([]byte, 0, 65536),
}
if config.ServerName == "" {
config = config.Clone()
config.ServerName = serverName
}
rawConn := &channelConn{
incoming: tc.incomingCipher,
outgoing: tc.outgoingCipher,
localAddr: gnetConn.LocalAddr(),
remoteAddr: gnetConn.RemoteAddr(),
tc: tc,
}
tc.tlsConn = tls.Client(rawConn, config)
tc.wg.Add(2)
go tc.pumpCipherToNetwork()
go tc.pumpPlaintextFromTLS()
return tc
}
// ProcessIncoming feeds encrypted data from network into TLS engine (non-blocking)
func (tc *GNetTLSConn) ProcessIncoming(encryptedData []byte) error {
if tc.closed.Load() {
return ErrConnectionClosed
}
// Non-blocking send with backpressure detection
select {
case tc.incomingCipher <- encryptedData:
return nil
default:
// Channel full - TLS processing can't keep up
// Drop connection under backpressure vs blocking event loop
if tc.logger != nil {
tc.logger.Warn("msg", "TLS backpressure, dropping data",
"remote_addr", tc.gnetConn.RemoteAddr())
}
return ErrTLSBackpressure
}
}
// pumpCipherToNetwork sends TLS-encrypted data to network
func (tc *GNetTLSConn) pumpCipherToNetwork() {
defer tc.wg.Done()
for {
select {
case data, ok := <-tc.outgoingCipher:
if !ok {
return
}
// Send to network
if err := tc.gnetConn.AsyncWrite(data, nil); err != nil {
tc.lastErr.Store(err)
tc.Close()
return
}
case <-time.After(30 * time.Second):
// Keepalive/timeout check
if tc.closed.Load() {
return
}
}
}
}
// pumpPlaintextFromTLS reads decrypted data from TLS
func (tc *GNetTLSConn) pumpPlaintextFromTLS() {
defer tc.wg.Done()
buf := make([]byte, 32768) // 32KB read buffer
for {
n, err := tc.tlsConn.Read(buf)
if n > 0 {
tc.plainMu.Lock()
// Check buffer size limit before appending to prevent memory exhaustion
if len(tc.plainBuf)+n > maxPlaintextBufferSize {
tc.plainMu.Unlock()
// Log warning about buffer limit
if tc.logger != nil {
tc.logger.Warn("msg", "Plaintext buffer limit exceeded, closing connection",
"remote_addr", tc.gnetConn.RemoteAddr(),
"buffer_size", len(tc.plainBuf),
"incoming_size", n,
"limit", maxPlaintextBufferSize)
}
// Store error and close connection
tc.lastErr.Store(ErrPlaintextBufferExceeded)
tc.Close()
return
}
tc.plainBuf = append(tc.plainBuf, buf[:n]...)
tc.plainMu.Unlock()
}
if err != nil {
if err != io.EOF {
tc.lastErr.Store(err)
}
tc.Close()
return
}
}
}
// Read returns available decrypted plaintext (non-blocking)
func (tc *GNetTLSConn) Read() []byte {
tc.plainMu.Lock()
defer tc.plainMu.Unlock()
if len(tc.plainBuf) == 0 {
return nil
}
// Atomic buffer swap under mutex protection to prevent race condition
data := tc.plainBuf
tc.plainBuf = make([]byte, 0, cap(tc.plainBuf))
return data
}
// Write encrypts plaintext and queues for network transmission
func (tc *GNetTLSConn) Write(plaintext []byte) (int, error) {
if tc.closed.Load() {
return 0, ErrConnectionClosed
}
if !tc.IsHandshakeDone() {
return 0, errors.New("handshake not complete")
}
return tc.tlsConn.Write(plaintext)
}
// Handshake initiates TLS handshake asynchronously
func (tc *GNetTLSConn) Handshake() {
tc.handshakeOnce.Do(func() {
go func() {
tc.handshakeErr = tc.tlsConn.Handshake()
close(tc.handshakeDone)
}()
})
}
// IsHandshakeDone checks if handshake is complete
func (tc *GNetTLSConn) IsHandshakeDone() bool {
select {
case <-tc.handshakeDone:
return true
default:
return false
}
}
// HandshakeComplete waits for handshake completion
func (tc *GNetTLSConn) HandshakeComplete() (<-chan struct{}, error) {
<-tc.handshakeDone
return tc.handshakeDone, tc.handshakeErr
}
// Close shuts down the bridge
func (tc *GNetTLSConn) Close() error {
tc.closeOnce.Do(func() {
tc.closed.Store(true)
// Close TLS connection
tc.tlsConn.Close()
// Close channels to stop pumps
close(tc.incomingCipher)
close(tc.outgoingCipher)
})
// Wait for pumps to finish
tc.wg.Wait()
return nil
}
// GetConnectionState returns TLS connection state
func (tc *GNetTLSConn) GetConnectionState() tls.ConnectionState {
return tc.tlsConn.ConnectionState()
}
// GetError returns last error
func (tc *GNetTLSConn) GetError() error {
if err, ok := tc.lastErr.Load().(error); ok {
return err
}
return nil
}
// channelConn implements net.Conn over channels
type channelConn struct {
incoming <-chan []byte
outgoing chan<- []byte
localAddr net.Addr
remoteAddr net.Addr
tc *GNetTLSConn
readBuf []byte
}
func (c *channelConn) Read(b []byte) (int, error) {
// Use buffered read for efficiency
if len(c.readBuf) > 0 {
n := copy(b, c.readBuf)
c.readBuf = c.readBuf[n:]
return n, nil
}
// Wait for new data
select {
case data, ok := <-c.incoming:
if !ok {
return 0, io.EOF
}
n := copy(b, data)
if n < len(data) {
c.readBuf = data[n:] // Buffer remainder
}
return n, nil
case <-time.After(30 * time.Second):
return 0, errors.New("read timeout")
}
}
func (c *channelConn) Write(b []byte) (int, error) {
if c.tc.closed.Load() {
return 0, ErrConnectionClosed
}
// Make a copy since TLS may hold reference
data := make([]byte, len(b))
copy(data, b)
select {
case c.outgoing <- data:
return len(b), nil
case <-time.After(5 * time.Second):
return 0, errors.New("write timeout")
}
}
func (c *channelConn) Close() error { return nil }
func (c *channelConn) LocalAddr() net.Addr { return c.localAddr }
func (c *channelConn) RemoteAddr() net.Addr { return c.remoteAddr }
func (c *channelConn) SetDeadline(t time.Time) error { return nil }
func (c *channelConn) SetReadDeadline(t time.Time) error { return nil }
func (c *channelConn) SetWriteDeadline(t time.Time) error { return nil }


@@ -1,249 +0,0 @@
// FILE: logwisp/src/internal/tls/manager.go
package tls
import (
"crypto/tls"
"crypto/x509"
"fmt"
"os"
"strings"
"logwisp/src/internal/config"
"github.com/lixenwraith/log"
)
// Manager handles TLS configuration for servers
type Manager struct {
config *config.SSLConfig
tlsConfig *tls.Config
logger *log.Logger
}
// NewManager creates a TLS configuration from SSL config
func NewManager(cfg *config.SSLConfig, logger *log.Logger) (*Manager, error) {
if cfg == nil || !cfg.Enabled {
return nil, nil
}
m := &Manager{
config: cfg,
logger: logger,
}
	// Load certificate and key
	cert, err := tls.LoadX509KeyPair(cfg.CertFile, cfg.KeyFile)
	if err != nil {
		return nil, fmt.Errorf("failed to load cert/key: %w", err)
	}

	// Create base TLS config
	m.tlsConfig = &tls.Config{
		Certificates: []tls.Certificate{cert},
		MinVersion:   parseTLSVersion(cfg.MinVersion, tls.VersionTLS12),
		MaxVersion:   parseTLSVersion(cfg.MaxVersion, tls.VersionTLS13),
	}

	// Configure cipher suites if specified (applies to TLS 1.2 and below;
	// TLS 1.3 suites are fixed by crypto/tls and not configurable)
	if cfg.CipherSuites != "" {
		m.tlsConfig.CipherSuites = parseCipherSuites(cfg.CipherSuites)
	} else {
		// Use secure defaults
		m.tlsConfig.CipherSuites = []uint16{
			tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,
			tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
		}
	}

	// Configure client authentication (mTLS)
	if cfg.ClientAuth {
		if cfg.VerifyClientCert {
			m.tlsConfig.ClientAuth = tls.RequireAndVerifyClientCert
		} else {
			m.tlsConfig.ClientAuth = tls.RequireAnyClientCert
		}
		// Load client CA if specified
		if cfg.ClientCAFile != "" {
			caCert, err := os.ReadFile(cfg.ClientCAFile)
			if err != nil {
				return nil, fmt.Errorf("failed to read client CA: %w", err)
			}
			caCertPool := x509.NewCertPool()
			if !caCertPool.AppendCertsFromPEM(caCert) {
				return nil, fmt.Errorf("failed to parse client CA certificate")
			}
			m.tlsConfig.ClientCAs = caCertPool
		}
	}

	// Set secure defaults
	m.tlsConfig.PreferServerCipherSuites = true // deprecated and ignored since Go 1.17; kept for older toolchains
	m.tlsConfig.SessionTicketsDisabled = false
	m.tlsConfig.Renegotiation = tls.RenegotiateNever

	logger.Info("msg", "TLS manager initialized",
		"component", "tls",
		"min_version", cfg.MinVersion,
		"max_version", cfg.MaxVersion,
		"client_auth", cfg.ClientAuth,
		"cipher_count", len(m.tlsConfig.CipherSuites))
	return m, nil
}

// GetConfig returns the TLS configuration
func (m *Manager) GetConfig() *tls.Config {
	if m == nil {
		return nil
	}
	// Return a clone to prevent callers from mutating the shared config
	return m.tlsConfig.Clone()
}

// GetHTTPConfig returns TLS config suitable for HTTP servers
func (m *Manager) GetHTTPConfig() *tls.Config {
	if m == nil {
		return nil
	}
	cfg := m.tlsConfig.Clone()
	// Enable HTTP/2 via ALPN
	cfg.NextProtos = []string{"h2", "http/1.1"}
	return cfg
}

// GetTCPConfig returns TLS config for raw TCP connections
func (m *Manager) GetTCPConfig() *tls.Config {
	if m == nil {
		return nil
	}
	cfg := m.tlsConfig.Clone()
	// No ALPN for raw TCP
	cfg.NextProtos = nil
	return cfg
}

// ValidateClientCert validates a client certificate for mTLS
func (m *Manager) ValidateClientCert(rawCerts [][]byte) error {
	if m == nil || !m.config.ClientAuth {
		return nil
	}
	if len(rawCerts) == 0 {
		return fmt.Errorf("no client certificate provided")
	}
	cert, err := x509.ParseCertificate(rawCerts[0])
	if err != nil {
		return fmt.Errorf("failed to parse client certificate: %w", err)
	}

	// Verify against CA if configured
	if m.tlsConfig.ClientCAs != nil {
		opts := x509.VerifyOptions{
			Roots:         m.tlsConfig.ClientCAs,
			Intermediates: x509.NewCertPool(),
			KeyUsages:     []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
		}
		// Add any intermediate certs presented after the leaf
		for i := 1; i < len(rawCerts); i++ {
			intermediate, err := x509.ParseCertificate(rawCerts[i])
			if err != nil {
				continue
			}
			opts.Intermediates.AddCert(intermediate)
		}
		if _, err := cert.Verify(opts); err != nil {
			return fmt.Errorf("client certificate verification failed: %w", err)
		}
	}

	m.logger.Debug("msg", "Client certificate validated",
		"component", "tls",
		"subject", cert.Subject.String(),
		"serial", cert.SerialNumber.String())
	return nil
}

func parseTLSVersion(version string, defaultVersion uint16) uint16 {
	switch strings.ToUpper(version) {
	case "TLS1.0", "TLS10":
		return tls.VersionTLS10
	case "TLS1.1", "TLS11":
		return tls.VersionTLS11
	case "TLS1.2", "TLS12":
		return tls.VersionTLS12
	case "TLS1.3", "TLS13":
		return tls.VersionTLS13
	default:
		return defaultVersion
	}
}

func parseCipherSuites(suites string) []uint16 {
	var result []uint16
	// Map of cipher suite names to IDs
	suiteMap := map[string]uint16{
		// TLS 1.2 ECDHE suites (preferred)
		"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384":         tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
		"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256":         tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
		"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384":       tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
		"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256":       tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
		"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256":   tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,
		"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256": tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
		// RSA suites (less preferred; no forward secrecy)
		"TLS_RSA_WITH_AES_256_GCM_SHA384": tls.TLS_RSA_WITH_AES_256_GCM_SHA384,
		"TLS_RSA_WITH_AES_128_GCM_SHA256": tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
	}
	for _, suite := range strings.Split(suites, ",") {
		suite = strings.TrimSpace(suite)
		if id, ok := suiteMap[suite]; ok {
			result = append(result, id)
		}
	}
	return result
}

// GetStats returns TLS statistics
func (m *Manager) GetStats() map[string]any {
	if m == nil {
		return map[string]any{"enabled": false}
	}
	return map[string]any{
		"enabled":       true,
		"min_version":   tlsVersionString(m.tlsConfig.MinVersion),
		"max_version":   tlsVersionString(m.tlsConfig.MaxVersion),
		"client_auth":   m.config.ClientAuth,
		"cipher_suites": len(m.tlsConfig.CipherSuites),
	}
}

func tlsVersionString(version uint16) string {
	switch version {
	case tls.VersionTLS10:
		return "TLS1.0"
	case tls.VersionTLS11:
		return "TLS1.1"
	case tls.VersionTLS12:
		return "TLS1.2"
	case tls.VersionTLS13:
		return "TLS1.3"
	default:
		return fmt.Sprintf("0x%04x", version)
	}
}

src/internal/tls/parse.go (new file, 69 lines)
// FILE: logwisp/src/internal/tls/parse.go
package tls

import (
	"crypto/tls"
	"fmt"
	"strings"
)

// parseTLSVersion converts a string representation (e.g., "TLS1.2") into a Go crypto/tls constant.
func parseTLSVersion(version string, defaultVersion uint16) uint16 {
	switch strings.ToUpper(version) {
	case "TLS1.0", "TLS10":
		return tls.VersionTLS10
	case "TLS1.1", "TLS11":
		return tls.VersionTLS11
	case "TLS1.2", "TLS12":
		return tls.VersionTLS12
	case "TLS1.3", "TLS13":
		return tls.VersionTLS13
	default:
		return defaultVersion
	}
}

// parseCipherSuites converts a comma-separated string of cipher suite names into a slice of Go constants.
func parseCipherSuites(suites string) []uint16 {
	var result []uint16
	// Map of cipher suite names to IDs
	suiteMap := map[string]uint16{
		// TLS 1.2 ECDHE suites (preferred)
		"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384":         tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
		"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256":         tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
		"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384":       tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
		"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256":       tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
		"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256":   tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,
		"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256": tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
		// RSA suites
		"TLS_RSA_WITH_AES_256_GCM_SHA384": tls.TLS_RSA_WITH_AES_256_GCM_SHA384,
		"TLS_RSA_WITH_AES_128_GCM_SHA256": tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
	}
	for _, suite := range strings.Split(suites, ",") {
		suite = strings.TrimSpace(suite)
		if id, ok := suiteMap[suite]; ok {
			result = append(result, id)
		}
	}
	return result
}

// tlsVersionString converts a Go crypto/tls version constant back into a string representation.
func tlsVersionString(version uint16) string {
	switch version {
	case tls.VersionTLS10:
		return "TLS1.0"
	case tls.VersionTLS11:
		return "TLS1.1"
	case tls.VersionTLS12:
		return "TLS1.2"
	case tls.VersionTLS13:
		return "TLS1.3"
	default:
		return fmt.Sprintf("0x%04x", version)
	}
}

src/internal/tls/server.go (new file, 99 lines)
// FILE: src/internal/tls/server.go
package tls

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"

	"logwisp/src/internal/config"

	"github.com/lixenwraith/log"
)

// ServerManager handles TLS configuration for server components.
type ServerManager struct {
	config    *config.TLSServerConfig
	tlsConfig *tls.Config
	logger    *log.Logger
}

// NewServerManager creates a TLS manager for servers (HTTP Source/Sink).
func NewServerManager(cfg *config.TLSServerConfig, logger *log.Logger) (*ServerManager, error) {
	if cfg == nil || !cfg.Enabled {
		return nil, nil
	}
	m := &ServerManager{
		config: cfg,
		logger: logger,
	}

	cert, err := tls.LoadX509KeyPair(cfg.CertFile, cfg.KeyFile)
	if err != nil {
		return nil, fmt.Errorf("failed to load server cert/key: %w", err)
	}

	// Default to TLS 1.2 minimum / TLS 1.3 maximum unless overridden
	m.tlsConfig = &tls.Config{
		Certificates: []tls.Certificate{cert},
		MinVersion:   parseTLSVersion(cfg.MinVersion, tls.VersionTLS12),
		MaxVersion:   parseTLSVersion(cfg.MaxVersion, tls.VersionTLS13),
	}

	if cfg.CipherSuites != "" {
		m.tlsConfig.CipherSuites = parseCipherSuites(cfg.CipherSuites)
	} else {
		// Use secure defaults
		m.tlsConfig.CipherSuites = []uint16{
			tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
		}
	}

	// Configure client authentication (mTLS)
	if cfg.ClientAuth {
		if cfg.ClientCAFile == "" {
			return nil, fmt.Errorf("client_auth is enabled but client_ca_file is not specified")
		}
		caCert, err := os.ReadFile(cfg.ClientCAFile)
		if err != nil {
			return nil, fmt.Errorf("failed to read client CA file: %w", err)
		}
		caCertPool := x509.NewCertPool()
		if !caCertPool.AppendCertsFromPEM(caCert) {
			return nil, fmt.Errorf("failed to parse client CA certificate")
		}
		m.tlsConfig.ClientCAs = caCertPool
		// Require and verify client certificates; without setting ClientAuth
		// the CA pool above would never be consulted
		m.tlsConfig.ClientAuth = tls.RequireAndVerifyClientCert
	}

	logger.Info("msg", "TLS Server Manager initialized", "component", "tls")
	return m, nil
}

// GetHTTPConfig returns a TLS configuration suitable for HTTP servers.
func (m *ServerManager) GetHTTPConfig() *tls.Config {
	if m == nil {
		return nil
	}
	cfg := m.tlsConfig.Clone()
	cfg.NextProtos = []string{"h2", "http/1.1"}
	return cfg
}

// GetStats returns statistics about the current server TLS configuration.
func (m *ServerManager) GetStats() map[string]any {
	if m == nil {
		return map[string]any{"enabled": false}
	}
	return map[string]any{
		"enabled":       true,
		"min_version":   tlsVersionString(m.tlsConfig.MinVersion),
		"max_version":   tlsVersionString(m.tlsConfig.MaxVersion),
		"client_auth":   m.config.ClientAuth,
		"cipher_suites": len(m.tlsConfig.CipherSuites),
	}
}


--- logwisp/src/internal/limit/token_bucket.go
+++ src/internal/tokenbucket/bucket.go
@@ -1,13 +1,12 @@
-// FILE: logwisp/src/internal/limit/token_bucket.go
+// FILE: src/internal/tokenbucket/bucket.go
-package limit
+package tokenbucket
 
 import (
 	"sync"
 	"time"
 )
 
-// TokenBucket implements a token bucket rate limiter
-// Safe for concurrent use.
+// TokenBucket implements a thread-safe token bucket rate limiter.
 type TokenBucket struct {
 	capacity float64
 	tokens   float64
@@ -16,8 +15,8 @@ type TokenBucket struct {
 	mu sync.Mutex
 }
 
-// NewTokenBucket creates a new token bucket with given capacity and refill rate
+// New creates a new token bucket with given capacity and refill rate.
-func NewTokenBucket(capacity float64, refillRate float64) *TokenBucket {
+func New(capacity float64, refillRate float64) *TokenBucket {
 	return &TokenBucket{
 		capacity: capacity,
 		tokens:   capacity, // Start full
@@ -26,12 +25,12 @@ func NewTokenBucket(capacity float64, refillRate float64) *TokenBucket {
 	}
 }
 
-// Allow attempts to consume one token, returns true if allowed
+// Allow attempts to consume one token, returns true if allowed.
 func (tb *TokenBucket) Allow() bool {
 	return tb.AllowN(1)
 }
 
-// AllowN attempts to consume n tokens, returns true if allowed
+// AllowN attempts to consume n tokens, returns true if allowed.
 func (tb *TokenBucket) AllowN(n float64) bool {
 	tb.mu.Lock()
 	defer tb.mu.Unlock()
@@ -45,7 +44,7 @@ func (tb *TokenBucket) AllowN(n float64) bool {
 	return false
 }
 
-// Tokens returns the current number of available tokens
+// Tokens returns the current number of available tokens.
 func (tb *TokenBucket) Tokens() float64 {
 	tb.mu.Lock()
 	defer tb.mu.Unlock()
@@ -54,8 +53,8 @@ func (tb *TokenBucket) Tokens() float64 {
 	return tb.tokens
 }
 
-// refill adds tokens based on time elapsed since last refill
-// MUST be called with mutex held
+// refill adds tokens based on time elapsed since last refill.
+// MUST be called with mutex held.
 func (tb *TokenBucket) refill() {
 	now := time.Now()
 	elapsed := now.Sub(tb.lastRefill).Seconds()
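The hunks above elide several struct fields and the body of `AllowN`. A self-contained sketch of the refactored `tokenbucket` API, where the `refillRate` and `lastRefill` fields and the clamping logic are assumptions filling those gaps:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// TokenBucket implements a thread-safe token bucket rate limiter.
type TokenBucket struct {
	capacity   float64
	tokens     float64
	refillRate float64 // tokens added per second (assumed field)
	lastRefill time.Time
	mu         sync.Mutex
}

// New creates a new token bucket with given capacity and refill rate.
func New(capacity, refillRate float64) *TokenBucket {
	return &TokenBucket{
		capacity:   capacity,
		tokens:     capacity, // Start full
		refillRate: refillRate,
		lastRefill: time.Now(),
	}
}

// Allow attempts to consume one token, returns true if allowed.
func (tb *TokenBucket) Allow() bool { return tb.AllowN(1) }

// AllowN attempts to consume n tokens, returns true if allowed.
func (tb *TokenBucket) AllowN(n float64) bool {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	tb.refill()
	if tb.tokens >= n {
		tb.tokens -= n
		return true
	}
	return false
}

// refill adds tokens based on time elapsed since last refill.
// MUST be called with mutex held.
func (tb *TokenBucket) refill() {
	now := time.Now()
	elapsed := now.Sub(tb.lastRefill).Seconds()
	tb.tokens = min(tb.capacity, tb.tokens+elapsed*tb.refillRate)
	tb.lastRefill = now
}

func main() {
	tb := New(2, 1) // burst of 2, refilled at 1 token/s
	fmt.Println(tb.Allow(), tb.Allow(), tb.Allow())
}
```

The refill happens lazily inside `AllowN` rather than on a timer, so an idle bucket costs nothing; `min` clamps the balance so a long idle period cannot bank more than `capacity` tokens.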

View File

@@ -4,13 +4,15 @@ package version
 import "fmt"
 
 var (
-	// Version is set at compile time via -ldflags
+	// Version is the application version, set at compile time via -ldflags.
 	Version = "dev"
+	// GitCommit is the git commit hash, set at compile time.
 	GitCommit = "unknown"
+	// BuildTime is the application build time, set at compile time.
 	BuildTime = "unknown"
 )
 
-// returns a formatted version string
+// String returns a detailed, formatted version string including commit and build time.
 func String() string {
 	if Version == "dev" {
 		return fmt.Sprintf("dev (commit: %s, built: %s)", GitCommit, BuildTime)
@@ -18,7 +20,7 @@ func String() string {
 	return fmt.Sprintf("%s (commit: %s, built: %s)", Version, GitCommit, BuildTime)
 }
 
-// returns just the version tag
+// Short returns just the version tag.
 func Short() string {
 	return Version
 }