# Compare commits

**17 commits** (`45b2093569...flow-plugi`)

- `fa7f41c059`
- `36e7f8a6a9`
- `15f484f98b`
- `70bf6a8060`
- `98ace914f7`
- `22652f9e53`
- `dcf803bac1`
- `7e542b660a`
- `33bf36f27e`
- `89e6a4ea05`
- `490fb777ab`
- `3047e556f7`
- `c33ec148ba`
- `15d72baafd`
- `9111d054fd`
- `5ea4dc8f5f`
- `94b5f3111e`
### .gitignore (vendored, 3 changes)

```diff
@@ -7,6 +7,7 @@ cert
 bin
 script
 build
-test
 *.log
 *.toml
+build.sh
+catalog.txt
```
### README.md (98 changes)

````diff
@@ -6,7 +6,7 @@
 <td>
 <h1>LogWisp</h1>
 <p>
-<a href="https://golang.org"><img src="https://img.shields.io/badge/Go-1.24-00ADD8?style=flat&logo=go" alt="Go"></a>
+<a href="https://golang.org"><img src="https://img.shields.io/badge/Go-1.25-00ADD8?style=flat&logo=go" alt="Go"></a>
 <a href="https://opensource.org/licenses/BSD-3-Clause"><img src="https://img.shields.io/badge/License-BSD_3--Clause-blue.svg" alt="License"></a>
 <a href="doc/"><img src="https://img.shields.io/badge/Docs-Available-green.svg" alt="Documentation"></a>
 </p>
@@ -14,41 +14,81 @@
 </tr>
 </table>
 
-**Flexible log monitoring with real-time streaming over HTTP/SSE and TCP**
+# LogWisp
 
-LogWisp watches log files and streams updates to connected clients in real-time using a pipeline architecture: **sources → filters → sinks**. Perfect for monitoring multiple applications, filtering noise, and routing logs to multiple destinations.
+A high-performance, pipeline-based log transport and processing system built in Go. LogWisp provides flexible log collection, filtering, formatting, and distribution with enterprise-grade security and reliability features.
 
-## 🚀 Quick Start
+## Features
 
-```bash
-# Install
-git clone https://github.com/lixenwraith/logwisp.git
-cd logwisp
-make install
-
-# Run with defaults (monitors *.log in current directory)
-logwisp
-```
-
-## ✨ Key Features
-
-- **🔧 Pipeline Architecture** - Flexible source → filter → sink processing
-- **📡 Real-time Streaming** - SSE (HTTP) and TCP protocols
-- **🔍 Pattern Filtering** - Include/exclude logs with regex patterns
-- **🛡️ Rate Limiting** - Protect against abuse with configurable limits
-- **📊 Multi-pipeline** - Process different log sources simultaneously
-- **🔄 Rotation Aware** - Handles log rotation seamlessly
-- **⚡ High Performance** - Minimal CPU/memory footprint
-
-## 📖 Documentation
-
-Complete documentation is available in the [`doc/`](doc/) directory:
-
-- [**Quick Start Guide**](doc/quickstart.md) - Get running in 5 minutes
-- [**Configuration**](doc/configuration.md) - All configuration options
-- [**CLI Reference**](doc/cli.md) - Command-line interface
-- [**Examples**](doc/examples/) - Ready-to-use configurations
-
-## 📄 License
-
-BSD-3-Clause
+### Core Capabilities
+- **Pipeline Architecture**: Independent processing pipelines with source(s) → filter → format → sink(s) flow
+- **Multiple Input Sources**: Directory monitoring, stdin, HTTP, TCP
+- **Flexible Output Sinks**: Console, file, HTTP SSE, TCP streaming, HTTP/TCP forwarding
+- **Real-time Processing**: Sub-millisecond latency with configurable buffering
+- **Hot Configuration Reload**: Update pipelines without service restart
+
+### Data Processing
+- **Pattern-based Filtering**: Chainable include/exclude filters with regex support
+- **Multiple Formatters**: Raw, JSON, and template-based text formatting
+- **Rate Limiting**: Pipeline rate control
+
+### Security & Reliability
+- **Authentication**: mTLS support for HTTPS
+- **TLS Encryption**: TLS 1.2/1.3 support for HTTP connections
+- **Access Control**: IP whitelisting/blacklisting, connection limits
+- **Automatic Reconnection**: Resilient client connections with exponential backoff
+- **File Rotation**: Size-based rotation with retention policies
+
+### Operational Features
+- **Status Monitoring**: Real-time statistics and health endpoints
+- **Signal Handling**: Graceful shutdown and configuration reload via signals
+- **Background Mode**: Daemon operation with proper signal handling
+- **Quiet Mode**: Silent operation for automated deployments
+
+## Documentation
+
+Available in the `doc/` directory:
+
+- [Installation Guide](doc/installation.md) - Platform setup and service configuration
+- [Architecture Overview](doc/architecture.md) - System design and component interaction
+- [Configuration Reference](doc/configuration.md) - TOML structure and configuration methods
+- [Input Sources](doc/sources.md) - Available source types and configurations
+- [Output Sinks](doc/sinks.md) - Sink types and output options
+- [Filters](doc/filters.md) - Pattern-based log filtering
+- [Formatters](doc/formatters.md) - Log formatting and transformation
+- [Security](doc/security.md) - mTLS configuration and access control
+- [Networking](doc/networking.md) - TLS, rate limiting, and network features
+- [Command Line Interface](doc/cli.md) - CLI flags and subcommands
+- [Operations Guide](doc/operations.md) - Running and maintaining LogWisp
+
+## Quick Start
+
+Install LogWisp and create a basic configuration:
+
+```toml
+[[pipelines]]
+name = "default"
+
+[[pipelines.sources]]
+type = "directory"
+[pipelines.sources.directory]
+path = "./"
+pattern = "*.log"
+
+[[pipelines.sinks]]
+type = "console"
+[pipelines.sinks.console]
+target = "stdout"
+```
+
+Run with: `logwisp -c config.toml`
+
+## System Requirements
+
+- **Operating Systems**: Linux (kernel 6.10+), FreeBSD (14.0+)
+- **Architecture**: amd64
+- **Go Version**: 1.25+ (for building from source)
+
+## License
+
+BSD 3-Clause License
````
### Configuration reference

```diff
@@ -1,261 +1,372 @@
-# LogWisp Configuration Reference
-# Default location: ~/.config/logwisp/logwisp.toml
-# Override: logwisp --config /path/to/config.toml
-#
-# All values shown are defaults unless marked (required)
+###############################################################################
+### LogWisp Configuration
+### Default location: ~/.config/logwisp/logwisp.toml
+### Configuration Precedence: CLI flags > Environment > File > Defaults
+### Default values shown - uncommented lines represent active configuration
+###############################################################################
 
-# ============================================================================
-# GLOBAL OPTIONS
-# ============================================================================
-# router = false                   # Enable router mode (multi-pipeline HTTP routing)
-# background = false               # Run as background daemon
-# quiet = false                    # Suppress all output
-# disable_status_reporter = false  # Disable periodic status logging
-# config_auto_reload = false       # Auto-reload on config change
-# config_save_on_exit = false      # Save config on shutdown
+###############################################################################
+### Global Settings
+###############################################################################
+quiet = false           # Enable quiet mode, suppress console output
+status_reporter = true  # Enable periodic status logging
+auto_reload = false     # Enable config auto-reload on file change
 
-# ============================================================================
-# LOGGING (LogWisp's operational logs)
-# ============================================================================
+###############################################################################
+### Logging Configuration (LogWisp's internal operational logging)
+###############################################################################
+
 [logging]
-output = "stderr"  # file, stdout, stderr, both, none
-level = "info"     # debug, info, warn, error
+output = "stdout"  # file|stdout|stderr|split|all|none
+level = "info"     # debug|info|warn|error
 
-[logging.file]
-directory = "./logs"      # Log file directory
-name = "logwisp"          # Base filename
-max_size_mb = 100         # Rotate after size
-max_total_size_mb = 1000  # Total size limit for all logs
-retention_hours = 168.0   # Delete logs older than (0 = disabled)
+# [logging.file]
+# directory = "./log"        # Log directory path
+# name = "logwisp"           # Base filename
+# max_size_mb = 100          # Rotation threshold
+# max_total_size_mb = 1000   # Total size limit
+# retention_hours = 168.0    # Delete logs older than (7 days)
 
 [logging.console]
-target = "stderr"  # stdout, stderr, split (split: info→stdout, error→stderr)
-format = "txt"     # txt, json
+target = "stdout"  # stdout|stderr|split
+format = "txt"     # txt|json
 
```
```diff
-# ============================================================================
-# PIPELINES
-# ============================================================================
-# Define one or more [[pipelines]] blocks
-# Each pipeline: sources → [rate_limit] → [filters] → [format] → sinks
+###############################################################################
+### Pipeline Configuration
+### Each pipeline: sources -> rate_limit -> filters -> format -> sinks
+###############################################################################
 
 [[pipelines]]
-name = "default"  # (required) Unique identifier
+name = "default"  # Pipeline identifier
 
-# ----------------------------------------------------------------------------
-# PIPELINE RATE LIMITING (optional)
-# ----------------------------------------------------------------------------
+###============================================================================
+### Rate Limiting (Pipeline-level)
+###============================================================================
+
 # [pipelines.rate_limit]
-# rate = 1000.0             # Entries per second (0 = unlimited)
-# burst = 1000.0            # Max burst size (defaults to rate)
-# policy = "drop"           # drop, pass
-# max_entry_size_bytes = 0  # Max size per entry (0 = unlimited)
+# rate = 1000.0             # Entries per second (0=disabled)
+# burst = 2000.0            # Burst capacity (defaults to rate)
+# policy = "drop"           # pass|drop
+# max_entry_size_bytes = 0  # Max entry size (0=unlimited)
 
```
```diff
-# ----------------------------------------------------------------------------
-# SOURCES
-# ----------------------------------------------------------------------------
-[[pipelines.sources]]
-type = "directory"  # directory, file, stdin, http, tcp
+###============================================================================
+### Filters (Sequential pattern matching)
+###============================================================================
 
-# Directory source options
-[pipelines.sources.options]
-path = "./"              # (required) Directory path
-pattern = "*.log"        # Glob pattern
-check_interval_ms = 100  # Scan interval (min: 10)
+### ⚠️ Example: Include only ERROR and WARN logs
+## [[pipelines.filters]]
+## type = "include"  # include|exclude
+## logic = "or"      # or|and
+## patterns = [".*ERROR.*", ".*WARN.*"]
 
-# File source options (alternative)
-# type = "file"
-# [pipelines.sources.options]
-# path = "/var/log/app.log"  # (required) File path
+### ⚠️ Example: Exclude debug logs
+## [[pipelines.filters]]
+## type = "exclude"
+## patterns = [".*DEBUG.*"]
 
-# HTTP source options (alternative)
-# type = "http"
-# [pipelines.sources.options]
-# port = 8081              # (required) Listen port
-# ingest_path = "/ingest"  # POST endpoint
-# buffer_size = 1000       # Entry buffer size
-# net_limit = {            # Rate limiting
-#   enabled = true,
-#   requests_per_second = 100.0,
-#   burst_size = 200,
-#   limit_by = "ip"        # ip, global
-# }
+###============================================================================
+### Format (Log transformation)
+###============================================================================
 
```
```diff
-# TCP source options (alternative)
-# type = "tcp"
-# [pipelines.sources.options]
-# port = 9091          # (required) Listen port
-# buffer_size = 1000   # Entry buffer size
-# net_limit = { ... }  # Same as HTTP
-
-# ----------------------------------------------------------------------------
-# FILTERS (optional)
-# ----------------------------------------------------------------------------
-# [[pipelines.filters]]
-# type = "include"  # include (whitelist), exclude (blacklist)
-# logic = "or"      # or (any match), and (all match)
-# patterns = [      # Regular expressions
-#   "ERROR",
-#   "(?i)warn",     # Case-insensitive
-#   "\\bfatal\\b"   # Word boundary
-# ]
-
-# ----------------------------------------------------------------------------
-# FORMAT (optional)
-# ----------------------------------------------------------------------------
-# format = "raw"  # raw, json, text
-# [pipelines.format_options]
-# # JSON formatter options
-# pretty = false                 # Pretty print JSON
+# [pipelines.format]
+# type = "raw"  # raw|json|txt
+
+## JSON formatting
+# [pipelines.format.json]
+# pretty = false                 # Pretty-print JSON
 # timestamp_field = "timestamp"  # Field name for timestamp
 # level_field = "level"          # Field name for log level
 # message_field = "message"      # Field name for message
 # source_field = "source"        # Field name for source
-#
-# # Text formatter options
-# template = "[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Source}} - {{.Message}}"
-# timestamp_format = "2006-01-02T15:04:05Z07:00"  # Go time format
 
```
```diff
-# ----------------------------------------------------------------------------
-# SINKS
-# ----------------------------------------------------------------------------
-[[pipelines.sinks]]
-type = "http"  # http, tcp, http_client, tcp_client, file, stdout, stderr
-
-# HTTP sink options (streaming server)
-[pipelines.sinks.options]
-port = 8080              # (required) Listen port
-buffer_size = 1000       # Entry buffer size
-stream_path = "/stream"  # SSE endpoint
-status_path = "/status"  # Status endpoint
-
-[pipelines.sinks.options.heartbeat]
-enabled = true            # Send periodic heartbeats
-interval_seconds = 30     # Heartbeat interval
-format = "comment"        # comment, json
-include_timestamp = true  # Include timestamp in heartbeat
-include_stats = false     # Include statistics
-
-[pipelines.sinks.options.net_limit]
-enabled = false              # Enable rate limiting
-requests_per_second = 10.0   # Request rate limit
-burst_size = 20              # Token bucket burst
-limit_by = "ip"              # ip, global
-max_connections_per_ip = 5   # Per-IP connection limit
-max_total_connections = 100  # Total connection limit
-response_code = 429          # HTTP response code
-response_message = "Rate limit exceeded"
-
-# TCP sink options (alternative)
-# type = "tcp"
-# [pipelines.sinks.options]
-# port = 9090  # (required) Listen port
+## Text templating
+# [pipelines.format.txt]
+# template = "{{.Timestamp | FmtTime}} [{{.Level}}] {{.Message}}"
+# timestamp_format = "2006-01-02 15:04:05"
+
+## Raw templating
+# [pipelines.format.raw]
+# add_new_line = true  # Preserve new line delimiter between log entries
+
+###============================================================================
+### SOURCES (Inputs)
+### Architecture: Pipeline can have multiple sources
+###============================================================================
+
+###----------------------------------------------------------------------------
+### File Source (File monitoring)
+[[pipelines.sources]]
+type = "file"
+
+[pipelines.sources.file]
+directory = "./"         # Directory to monitor
+pattern = "*.log"        # Glob pattern
+check_interval_ms = 100  # File check interval
+recursive = false        # Recursive monitoring (TODO)
+
+###----------------------------------------------------------------------------
+### Console Source
+# [[pipelines.sources]]
+# type = "console"
+
+# [pipelines.sources.console]
 # buffer_size = 1000
-# heartbeat = { ... }  # Same as HTTP
-# net_limit = { ... }  # Same as HTTP
 
```
```diff
-# HTTP client sink options (forward to remote)
+###----------------------------------------------------------------------------
+### HTTP Source (Server mode - receives logs via HTTP POST)
+# [[pipelines.sources]]
+# type = "http"
+
+# [pipelines.sources.http]
+# host = "0.0.0.0"         # Listen interface
+# port = 8081              # Listen port
+# ingest_path = "/ingest"  # Ingestion endpoint
+# buffer_size = 1000
+# max_body_size = 1048576  # 1MB
+# read_timeout_ms = 10000
+# write_timeout_ms = 10000
+
+### Network access control
+# [pipelines.sources.http.acl]
+# enabled = false
+# max_connections_per_ip = 10  # Max simultaneous connections from a single IP
+# max_connections_total = 100  # Max simultaneous connections for this component
+# requests_per_second = 100.0  # Per-IP request rate limit
+# burst_size = 200             # Per-IP request burst limit
+# response_message = "Rate limit exceeded"
+# response_code = 429
+# ip_whitelist = ["192.168.1.0/24"]
+# ip_blacklist = ["10.0.0.100"]
+
+### TLS configuration (mTLS support)
+# [pipelines.sources.http.tls]
+# enabled = false
+# cert_file = "/path/to/server.pem"   # Server certificate
+# key_file = "/path/to/server.key"    # Server private key
+# client_auth = false                 # Enable mTLS
+# client_ca_file = "/path/to/ca.pem"  # CA for client verification
+# verify_client_cert = true           # Verify client certificates
+# min_version = "TLS1.2"              # TLS1.0|TLS1.1|TLS1.2|TLS1.3
+# max_version = "TLS1.3"
+# cipher_suites = ""                  # Comma-separated cipher list
+
+###----------------------------------------------------------------------------
+### TCP Source (Server mode - receives logs via TCP)
+# [[pipelines.sources]]
+# type = "tcp"
+
+# [pipelines.sources.tcp]
+# host = "0.0.0.0"
+# port = 9091
+# buffer_size = 1000
+# read_timeout_ms = 10000
+# keep_alive = true
+# keep_alive_period_ms = 30000
+
+### Network access control
+# [pipelines.sources.tcp.acl]
+# enabled = false
+# max_connections_per_ip = 10  # Max simultaneous connections from a single IP
+# max_connections_total = 100  # Max simultaneous connections for this component
+# requests_per_second = 100.0  # Per-IP request rate limit
+# burst_size = 200             # Per-IP request burst limit
+# response_message = "Rate limit exceeded"
+# response_code = 429
+# ip_whitelist = ["192.168.1.0/24"]
+# ip_blacklist = ["10.0.0.100"]
+
+### ⚠️ IMPORTANT: TCP does NOT support TLS/mTLS (gnet limitation)
+### Use HTTP Source with TLS for encrypted transport
+
+###============================================================================
+### SINKS (Outputs)
+### Architecture: Pipeline can have multiple sinks (fan-out)
+###============================================================================
+
+###----------------------------------------------------------------------------
+### Console Sink
+# [[pipelines.sinks]]
+# type = "console"
+
+# [pipelines.sinks.console]
+# target = "stdout"  # stdout|stderr|split
+# colorize = false   # Colorized output
+# buffer_size = 100
+
+###----------------------------------------------------------------------------
+### File Sink (Rotating logs)
+# [[pipelines.sinks]]
+# type = "file"
+
+# [pipelines.sinks.file]
+# directory = "./logs"
+# name = "output"
+# max_size_mb = 100
+# max_total_size_mb = 1000
+# min_disk_free_mb = 100
+# retention_hours = 168.0  # 7 days
+# buffer_size = 1000
+# flush_interval_ms = 1000
+
+###----------------------------------------------------------------------------
+### HTTP Sink (Server mode - SSE streaming for clients)
+[[pipelines.sinks]]
+type = "http"
+
+[pipelines.sinks.http]
+host = "0.0.0.0"
+port = 8080
+stream_path = "/stream"  # SSE streaming endpoint
+status_path = "/status"  # Status endpoint
+buffer_size = 1000
+write_timeout_ms = 10000
+
+### Heartbeat configuration (keep connections alive)
+[pipelines.sinks.http.heartbeat]
+enabled = true
+interval_ms = 30000  # 30 seconds
+include_timestamp = true
+include_stats = false
+format = "comment"  # comment|event|json
+
+### Network access control
+# [pipelines.sinks.http.acl]
+# enabled = false
+# max_connections_per_ip = 10  # Max simultaneous connections from a single IP
+# max_connections_total = 100  # Max simultaneous connections for this component
+# requests_per_second = 100.0  # Per-IP request rate limit
+# burst_size = 200             # Per-IP request burst limit
+# response_message = "Rate limit exceeded"
+# response_code = 429
+# ip_whitelist = ["192.168.1.0/24"]
+# ip_blacklist = ["10.0.0.100"]
+
+### TLS configuration (mTLS support)
+# [pipelines.sinks.http.tls]
+# enabled = false
+# cert_file = "/path/to/server.pem"   # Server certificate
+# key_file = "/path/to/server.key"    # Server private key
+# client_auth = false                 # Enable mTLS
+# client_ca_file = "/path/to/ca.pem"  # CA for client verification
+# verify_client_cert = true           # Verify client certificates
+# min_version = "TLS1.2"              # TLS1.0|TLS1.1|TLS1.2|TLS1.3
+# max_version = "TLS1.3"
+# cipher_suites = ""                  # Comma-separated cipher list
+
+###----------------------------------------------------------------------------
+### TCP Sink (Server mode - TCP streaming for clients)
+# [[pipelines.sinks]]
+# type = "tcp"
+
+# [pipelines.sinks.tcp]
+# host = "0.0.0.0"
+# port = 9090
+# buffer_size = 1000
+# write_timeout_ms = 10000
+# keep_alive = true
+# keep_alive_period_ms = 30000
+
+### Heartbeat configuration
+# [pipelines.sinks.tcp.heartbeat]
+# enabled = false
+# interval_ms = 30000
+# include_timestamp = true
+# include_stats = false
+# format = "json"  # json|txt
+
+### Network access control
+# [pipelines.sinks.tcp.acl]
+# enabled = false
+# max_connections_per_ip = 10  # Max simultaneous connections from a single IP
+# max_connections_total = 100  # Max simultaneous connections for this component
+# requests_per_second = 100.0  # Per-IP request rate limit
+# burst_size = 200             # Per-IP request burst limit
+# response_message = "Rate limit exceeded"
+# response_code = 429
+# ip_whitelist = ["192.168.1.0/24"]
+# ip_blacklist = ["10.0.0.100"]
+
+### ⚠️ IMPORTANT: TCP does NOT support TLS/mTLS (gnet limitation)
+### Use HTTP Sink with TLS for encrypted transport
+
+###----------------------------------------------------------------------------
+### HTTP Client Sink (Forward to remote HTTP endpoint)
+# [[pipelines.sinks]]
```
```diff
 # type = "http_client"
-# [pipelines.sinks.options]
-# url = "https://logs.example.com/ingest"  # (required) Target URL
+
+# [pipelines.sinks.http_client]
+# url = "https://logs.example.com/ingest"
+# buffer_size = 1000
 # batch_size = 100        # Entries per batch
-# batch_delay_ms = 1000   # Batch timeout
-# timeout_seconds = 30    # Request timeout
-# max_retries = 3         # Retry attempts
-# retry_delay_ms = 1000   # Initial retry delay
+# batch_delay_ms = 1000   # Max wait before sending
+# timeout_seconds = 30
+# max_retries = 3
+# retry_delay_ms = 1000
 # retry_backoff = 2.0           # Exponential backoff multiplier
 # insecure_skip_verify = false  # Skip TLS verification
-# headers = {  # Custom headers
-#   "Authorization" = "Bearer token",
-#   "X-Custom" = "value"
-# }
-
-# TCP client sink options (forward to remote)
+
+### TLS configuration for client
+# [pipelines.sinks.http_client.tls]
+# enabled = false                           # Enable TLS for the outgoing connection
+# server_ca_file = "/path/to/ca.pem"        # CA for verifying the remote server's certificate
+# server_name = "logs.example.com"          # For server certificate validation (SNI)
+# insecure_skip_verify = false              # Skip server verification, use with caution
+# client_cert_file = "/path/to/client.pem"  # Client's certificate to present to the server for mTLS
+# client_key_file = "/path/to/client.key"   # Client's private key for mTLS
+# min_version = "TLS1.2"
+# max_version = "TLS1.3"
+# cipher_suites = ""
+
+### ⚠️ Example: HTTP Client Sink → HTTP Source with mTLS
+## HTTP Source with mTLS:
+## [pipelines.sources.http.tls]
+## enabled = true
+## cert_file = "/path/to/server.pem"
+## key_file = "/path/to/server.key"
+## client_auth = true  # Enable client cert verification
+## client_ca_file = "/path/to/ca.pem"
+## verify_client_cert = true
+
+## HTTP Client with client cert:
+## [pipelines.sinks.http_client.tls]
+## enabled = true
+## server_ca_file = "/path/to/ca.pem"        # Verify server
+## client_cert_file = "/path/to/client.pem"  # Client certificate
+## client_key_file = "/path/to/client.key"
 
```
```diff
+###----------------------------------------------------------------------------
+### TCP Client Sink (Forward to remote TCP endpoint)
+# [[pipelines.sinks]]
 # type = "tcp_client"
-# [pipelines.sinks.options]
-# address = "logs.example.com:9090"  # (required) host:port
+
+# [pipelines.sinks.tcp_client]
+# host = "logs.example.com"
+# port = 9090
 # buffer_size = 1000
 # dial_timeout_seconds = 10   # Connection timeout
 # write_timeout_seconds = 30  # Write timeout
-# keep_alive_seconds = 30     # TCP keepalive
+# read_timeout_seconds = 10   # Read timeout
+# keep_alive_seconds = 30     # TCP keep-alive
 # reconnect_delay_ms = 1000   # Initial reconnect delay
-# max_reconnect_delay_seconds = 30  # Max reconnect delay
+# max_reconnect_delay_ms = 30000  # Max reconnect delay
 # reconnect_backoff = 1.5         # Exponential backoff
 
-# File sink options
-# type = "file"
-# [pipelines.sinks.options]
-# directory = "/var/log/logwisp"  # (required) Output directory
-# name = "app"                    # (required) Base filename
-# max_size_mb = 100               # Rotate after size
-# max_total_size_mb = 0           # Total size limit (0 = unlimited)
-# retention_hours = 0.0           # Delete old files (0 = disabled)
-# min_disk_free_mb = 1000         # Maintain free disk space
-
-# Console sink options
-# type = "stdout"  # or "stderr"
-# [pipelines.sinks.options]
-# buffer_size = 1000
-# target = "stdout"  # Override for split mode
-
-# ----------------------------------------------------------------------------
-# AUTHENTICATION (optional, for network sinks)
-# ----------------------------------------------------------------------------
-# [pipelines.auth]
-# type = "none"      # none, basic, bearer
-# ip_whitelist = []  # Allowed IPs (empty = all)
-# ip_blacklist = []  # Blocked IPs
-#
-# [pipelines.auth.basic_auth]
-# realm = "LogWisp"  # WWW-Authenticate realm
-# users_file = ""    # External users file
-# [[pipelines.auth.basic_auth.users]]
-# username = "admin"
-# password_hash = "$2a$10$..."  # bcrypt hash
-#
-# [pipelines.auth.bearer_auth]
-# tokens = ["token1", "token2"]  # Static tokens
-# [pipelines.auth.bearer_auth.jwt]
-# jwks_url = ""     # JWKS endpoint
-# signing_key = ""  # Static key (if not using JWKS)
-# issuer = ""       # Expected issuer
-# audience = ""     # Expected audience
-
-# ============================================================================
-# HOT RELOAD
-# ============================================================================
-# Enable with: --config-auto-reload
-# Manual reload: kill -HUP $(pidof logwisp)
-# Updates pipelines, filters, formatters without restart
-# Logging changes require restart
-
-# ============================================================================
-# ROUTER MODE
-# ============================================================================
-# Enable with: logwisp --router or router = true
-# Combines multiple pipeline HTTP sinks on shared ports
-# Access pattern: http://localhost:8080/{pipeline_name}/stream
+### ⚠️ WARNING: TCP Client has NO TLS support
+### Use HTTP Client with TLS for encrypted transport
+
+###############################################################################
+### Common Usage Patterns
+###############################################################################
+
+### Pattern 1: Log Aggregation (Client → Server)
+### - HTTP Client Sink → HTTP Source (with optional TLS/mTLS)
+### - TCP Client Sink → TCP Source (unencrypted only)
+
+### Pattern 2: Live Monitoring
+### - HTTP Sink: Browser-based SSE streaming (https://host:8080/stream)
+### - TCP Sink: Debug interface (telnet/netcat to port 9090)
+
+### Pattern 3: Log Collection & Distribution
+### - File Source → Multiple Sinks (fan-out)
+### - Multiple Sources → Single Pipeline → Multiple Sinks
```
# Global status: http://localhost:8080/status
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# SIGNALS
|
|
||||||
# ============================================================================
|
|
||||||
# SIGINT/SIGTERM: Graceful shutdown
|
|
||||||
# SIGHUP/SIGUSR1: Reload config (when auto-reload enabled)
|
|
||||||
# SIGKILL: Immediate shutdown
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# CLI FLAGS
|
|
||||||
# ============================================================================
|
|
||||||
# --config, -c PATH # Config file path
|
|
||||||
# --router, -r # Enable router mode
|
|
||||||
# --background, -b # Run as daemon
|
|
||||||
# --quiet, -q # Suppress output
|
|
||||||
# --version, -v # Show version
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# ENVIRONMENT VARIABLES
|
|
||||||
# ============================================================================
|
|
||||||
# LOGWISP_CONFIG_FILE # Config filename
|
|
||||||
# LOGWISP_CONFIG_DIR # Config directory
|
|
||||||
# LOGWISP_CONSOLE_TARGET # Override console target
|
|
||||||
# Any config value: LOGWISP_<SECTION>_<KEY> (uppercase, dots → underscores)
|
|
||||||
@ -1,42 +0,0 @@
-# LogWisp Minimal Configuration
-# Save as: ~/.config/logwisp/logwisp.toml
-
-# Basic pipeline monitoring application logs
-[[pipelines]]
-name = "app"
-
-# Source: Monitor log directory
-[[pipelines.sources]]
-type = "directory"
-options = { path = "/var/log/myapp", pattern = "*.log", check_interval_ms = 100 }
-
-# Sink: HTTP streaming
-[[pipelines.sinks]]
-type = "http"
-options = {
-    port = 8080,
-    buffer_size = 1000,
-    stream_path = "/stream",
-    status_path = "/status"
-}
-
-# Optional: Filter for errors only
-# [[pipelines.filters]]
-# type = "include"
-# patterns = ["ERROR", "WARN", "CRITICAL"]
-
-# Optional: Add rate limiting to HTTP sink
-# [[pipelines.sinks]]
-# type = "http"
-# options = {
-#     port = 8080,
-#     buffer_size = 1000,
-#     stream_path = "/stream",
-#     status_path = "/status",
-#     net_limit = { enabled = true, requests_per_second = 10.0, burst_size = 20 }
-# }
-
-# Optional: Add file output
-# [[pipelines.sinks]]
-# type = "file"
-# options = { directory = "/var/log/logwisp", name = "app" }
@ -1,27 +1,76 @@
-# LogWisp Documentation
+# LogWisp

-Documentation covers installation, configuration, and usage of LogWisp's pipeline-based log monitoring system.
+A pipeline-based log transport and processing system built in Go. LogWisp provides flexible log collection, filtering, formatting, and distribution with security and reliability features.

-## 📚 Documentation Index
+## Features

-### Getting Started
-- **[Installation Guide](installation.md)** - Platform-specific installation
-- **[Quick Start](quickstart.md)** - Get running in 5 minutes
-- **[Architecture Overview](architecture.md)** - Pipeline design
+### Core Capabilities
+- **Pipeline Architecture**: Independent processing pipelines with source(s) → filter → format → sink(s) flow
+- **Multiple Input Sources**: Directory monitoring, stdin, HTTP, TCP
+- **Flexible Output Sinks**: Console, file, HTTP SSE, TCP streaming, HTTP/TCP forwarding
+- **Real-time Processing**: Sub-millisecond latency with configurable buffering
+- **Hot Configuration Reload**: Update pipelines without service restart

-### Configuration
-- **[Configuration Guide](configuration.md)** - Complete reference
-- **[Environment Variables](environment.md)** - Container configuration
-- **[Command Line Options](cli.md)** - CLI reference
-- **[Sample Configurations](../config/)** - Default & Minimal Config
+### Data Processing
+- **Pattern-based Filtering**: Chainable include/exclude filters with regex support
+- **Multiple Formatters**: Raw, JSON, and template-based text formatting
+- **Rate Limiting**: Pipeline rate controls

-### Features
-- **[Status Monitoring](status.md)** - Health checks
-- **[Filters Guide](filters.md)** - Pattern-based filtering
-- **[Rate Limiting](ratelimiting.md)** - Connection protection
-- **[Router Mode](router.md)** - Multi-pipeline routing
-- **[Authentication](authentication.md)** - Access control *(planned)*
+### Security & Reliability
+- **Authentication**: mTLS support
+- **Access Control**: IP whitelisting/blacklisting, connection limits
+- **TLS Encryption**: Full TLS 1.2/1.3 support for HTTP connections
+- **Automatic Reconnection**: Resilient client connections with exponential backoff
+- **File Rotation**: Size-based rotation with retention policies
+
+### Operational Features
+- **Status Monitoring**: Real-time statistics and health endpoints
+- **Signal Handling**: Graceful shutdown and configuration reload via signals
+- **Background Mode**: Daemon operation with proper signal handling
+- **Quiet Mode**: Silent operation for automated deployments

-## 📝 License
+## Documentation
+
+- [Installation Guide](installation.md) - Platform setup and service configuration
+- [Architecture Overview](architecture.md) - System design and component interaction
+- [Configuration Reference](configuration.md) - TOML structure and configuration methods
+- [Input Sources](sources.md) - Available source types and configurations
+- [Output Sinks](sinks.md) - Sink types and output options
+- [Filters](filters.md) - Pattern-based log filtering
+- [Formatters](formatters.md) - Log formatting and transformation
+- [Security](security.md) - IP-based access control configuration and mTLS
+- [Networking](networking.md) - TLS, rate limiting, and network features
+- [Command Line Interface](cli.md) - CLI flags and subcommands
+- [Operations Guide](operations.md) - Running and maintaining LogWisp
+
+## Quick Start
+
+Install LogWisp and create a basic configuration:
+
+```toml
+[[pipelines]]
+name = "default"
+
+[[pipelines.sources]]
+type = "directory"
+[pipelines.sources.directory]
+path = "./"
+pattern = "*.log"
+
+[[pipelines.sinks]]
+type = "console"
+[pipelines.sinks.console]
+target = "stdout"
+```
+
+Run with: `logwisp -c config.toml`
+
+## System Requirements
+
+- **Operating Systems**: Linux (kernel 6.10+), FreeBSD (14.0+)
+- **Architecture**: amd64
+- **Go Version**: 1.25+ (for building from source)

-BSD-3-Clause
+## License
+
+BSD 3-Clause License
@ -1,343 +1,168 @@
 # Architecture Overview

-LogWisp implements a flexible pipeline architecture for real-time log processing and streaming.
+LogWisp implements a pipeline-based architecture for flexible log processing and distribution.

-## Core Architecture
+## Core Concepts
+
+### Pipeline Model
+
+Each pipeline operates independently with a source → filter → format → sink flow. Multiple pipelines can run concurrently within a single LogWisp instance, each processing different log streams with unique configurations.
+
+### Component Hierarchy

 ```
-[ASCII diagram: LogWisp Service containing Pipeline 1 (directory/HTTP/TCP sources → include ERROR/WARN and exclude DEBUG filters → HTTP, file, and TCP sinks serving clients), Pipeline 2 (stdin source, no filters → HTTP Client and TCP Client sinks forwarding to remote hosts), and Pipeline N (multiple sources → filter chain → multiple sinks)]
+Service (Main Process)
+├── Pipeline 1
+│   ├── Sources (1 or more)
+│   ├── Rate Limiter (optional)
+│   ├── Filter Chain (optional)
+│   ├── Formatter (optional)
+│   └── Sinks (1 or more)
+├── Pipeline 2
+│   └── [Same structure]
+└── Status Reporter (optional)
 ```
 ## Data Flow

-[ASCII diagrams: log entry flow Source Monitor → Parse Entry → Filter Chain → Sink Deliver (detect input, extract & format, include/exclude, send to clients); entry processing from new-entry detection through timestamp/level/message creation and include/exclude filter application to fan-out across HTTP, TCP, file, and HTTP/TCP client sinks]
-
-## Component Details
+### Processing Stages
+
+1. **Source Stage**: Sources monitor inputs and generate log entries
+2. **Rate Limiting**: Optional pipeline-level rate control
+3. **Filtering**: Pattern-based inclusion/exclusion
+4. **Formatting**: Transform entries to desired output format
+5. **Distribution**: Fan-out to multiple sinks
+
+### Entry Lifecycle
+
+Log entries flow through the pipeline as `core.LogEntry` structures containing:
+
+- **Time**: Entry timestamp
+- **Level**: Log level (DEBUG, INFO, WARN, ERROR)
+- **Source**: Origin identifier
+- **Message**: Log content
+- **Fields**: Additional metadata (JSON)
+- **RawSize**: Original entry size
+
+### Buffering Strategy
+
+Each component maintains internal buffers to handle burst traffic:
+
+- Sources: Configurable buffer size (default 1000 entries)
+- Sinks: Independent buffers per sink
+- Network components: Additional TCP/HTTP buffers
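The `core.LogEntry` fields listed in the new Entry Lifecycle section can be pictured as a plain Go struct. The field types below are assumptions inferred from the descriptions, not LogWisp's actual `core` package definition:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// LogEntry is an illustrative sketch of the entry structure the
// architecture doc describes; field types are guesses, not the real
// core.LogEntry.
type LogEntry struct {
	Time    time.Time       // entry timestamp
	Level   string          // DEBUG, INFO, WARN, ERROR
	Source  string          // origin identifier
	Message string          // log content
	Fields  json.RawMessage // additional metadata (JSON)
	RawSize int64           // original entry size in bytes
}

func main() {
	raw := `{"level":"ERROR","msg":"disk full"}`
	e := LogEntry{
		Time:    time.Now(),
		Level:   "ERROR",
		Source:  "app.log",
		Message: "disk full",
		Fields:  json.RawMessage(`{"disk":"/dev/sda1"}`),
		RawSize: int64(len(raw)),
	}
	fmt.Println(e.Level, e.Message, e.RawSize)
}
```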
-### Sources
+## Component Types

-Sources monitor inputs and generate log entries:
+### Sources (Input)

-[ASCII diagrams: Directory Monitor (pattern matching *.log, file rotation detection, position tracking, concurrent file watching) feeding per-file File Watchers (read new, track position, detect rotation); HTTP/TCP Network Listener (JSON parsing, rate limiting, connection management, input validation)]
+- **Directory Source**: File system monitoring with rotation detection
+- **Stdin Source**: Standard input processing
+- **HTTP Source**: REST endpoint for log ingestion
+- **TCP Source**: Raw TCP socket listener

-### Filters
+### Sinks (Output)

-Filters process entries through pattern matching:
+- **Console Sink**: stdout/stderr output
+- **File Sink**: Rotating file writer
+- **HTTP Sink**: Server-Sent Events (SSE) streaming
+- **TCP Sink**: TCP server for client connections
+- **HTTP Client Sink**: Forward to remote HTTP endpoints
+- **TCP Client Sink**: Forward to remote TCP servers

-[ASCII diagram: filter chain — an entry passes Filter 1 (include), Filter 2 (exclude), … Filter N in sequence before reaching the sinks]
+### Processing Components
+
+- **Rate Limiter**: Token bucket algorithm for flow control
+- **Filter Chain**: Sequential pattern matching
+- **Formatters**: Raw, JSON, or template-based text transformation
-### Sinks
-
-Sinks deliver processed entries to destinations:
-
-[ASCII diagrams: HTTP Sink (SSE) — HTTP server with stream and status endpoints behind a connection manager handling rate limiting, heartbeat, and buffer management; TCP Sink — gnet event loop with async I/O, connection pool, and rate limiting; HTTP/TCP Client sinks — output manager with batching, retry logic, connection pooling, and failover]
-
-## Router Mode
-
-In router mode, multiple pipelines share HTTP ports:
-
-[ASCII diagram: HTTP Router on port 8080 dispatching /app/stream, /db/stream, and /sys/stream to pipelines "app", "db", and "sys"; path routing parses the request path and extracts the pipeline name from /pipeline/endpoint]
-
-## Memory Management
-
-[ASCII diagram: source buffer (1000) → pipeline channel → sink buffer (1000); full source/sink buffers drop and count entries while the channel applies blocking backpressure; client sinks chain entry buffer (1000) → batch buffer (100) → retrying send queue]
-
-## Rate Limiting
-
-[ASCII diagram: token bucket with capacity burst_size refilled at requests_per_second; an arriving request consumes a token when one is available, otherwise it is rejected with 429]
 ## Concurrency Model

-[ASCII diagram: main goroutine spawning, per pipeline, source readers, HTTP/TCP servers, a filter processor, and HTTP/TCP client writers, plus the optional HTTP router; channels connect source → filter → sink with non-blocking sends that drop and count entries when full]
+### Goroutine Architecture
+
+- Each source runs in dedicated goroutines for monitoring
+- Sinks operate independently with their own processing loops
+- Network listeners use optimized event loops (gnet for TCP)
+- Pipeline processing uses channel-based communication
+### Synchronization
+
+- Atomic counters for statistics
+- Read-write mutexes for configuration access
+- Context-based cancellation for graceful shutdown
+- Wait groups for coordinated startup/shutdown

-## Configuration Loading
-
-[Priority order: 1. CLI flags, 2. environment variables, 3. config file, 4. defaults, merged into the final config — e.g. CLI --logging.level debug, env LOGWISP_PIPELINES_0_NAME=app, file pipelines.toml, default buffer_size = 1000]
-
-## Security Architecture
-
-[ASCII diagram: layered security — network layer (per-IP/global rate limiting, connection limits, TLS/SSL planned), authentication layer (basic auth, bearer tokens, IP whitelisting — all planned), application layer (input validation, path traversal prevention, resource limits)]
+## Network Architecture
+
+### Connection Patterns
+
+**Chaining Design**:
+- TCP Client Sink → TCP Source: Direct TCP forwarding
+- HTTP Client Sink → HTTP Source: HTTP-based forwarding
+
+**Monitoring Design**:
+- TCP Sink: Debugging interface
+- HTTP Sink: Browser-based live monitoring
+
+### Protocol Support
+
+- HTTP/1.1 and HTTP/2 for HTTP connections
+- Raw TCP connections
+- TLS 1.2/1.3 for HTTPS connections (HTTP only)
+- Server-Sent Events for real-time streaming
+
+## Resource Management
+
+### Memory Management
+
+- Bounded buffers prevent unbounded growth
+- Automatic garbage collection via Go runtime
+- Connection limits prevent resource exhaustion
+
+### File Management
+
+- Automatic rotation based on size thresholds
+- Retention policies for old log files
+- Minimum disk space checks before writing
+
+### Connection Management
+
+- Per-IP connection limits
+- Global connection caps
+- Automatic reconnection with exponential backoff
+- Keep-alive for persistent connections
+
+## Reliability Features
+
+### Fault Tolerance
+
+- Panic recovery in pipeline processing
+- Independent pipeline operation
+- Automatic source restart on failure
+- Sink failure isolation
+
+### Data Integrity
+
+- Entry validation at ingestion
+- Size limits for entries and batches
+- Duplicate detection in file monitoring
+- Position tracking for file reads
+
+## Performance Characteristics
+
+### Throughput
+
+- Pipeline rate limiting: Configurable (default 1000 entries/second)
+- Network throughput: Limited by network and sink capacity
+- File monitoring: Sub-second detection (default 100ms interval)
+
+### Latency
+
+- Entry processing: Sub-millisecond in-memory
+- Network forwarding: Depends on batch configuration
+- File detection: Configurable check interval
+
+### Scalability
+
+- Horizontal: Multiple LogWisp instances with different configurations
+- Vertical: Multiple pipelines per instance
+- Fan-out: Multiple sinks per pipeline
+- Fan-in: Multiple sources per pipeline
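The configuration-loading priority the old architecture page spelled out (CLI flags over environment variables over config file over compiled-in defaults) amounts to a first-non-empty merge per setting. A hypothetical sketch; LogWisp's real loader is more involved:

```go
package main

import "fmt"

// firstSet returns the first non-empty value, mirroring the precedence
// order the old doc listed: CLI > environment > config file > default.
// (Illustrative only, not LogWisp's actual loader.)
func firstSet(values ...string) string {
	for _, v := range values {
		if v != "" {
			return v
		}
	}
	return ""
}

func main() {
	cli := ""      // --logging.level not passed on the command line
	env := "debug" // e.g. LOGWISP_LOGGING_LEVEL=debug
	file := "info" // logging.level = "info" in the TOML file
	def := "warn"  // compiled-in default
	fmt.Println(firstSet(cli, env, file, def))
}
```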
372  doc/cli.md
@ -1,196 +1,240 @@
|
|||||||
# Command Line Interface
|
# Command Line Interface
|
||||||
|
|
||||||
LogWisp CLI options for controlling behavior without modifying configuration files.
|
LogWisp CLI reference for commands and options.
|
||||||
|
|
||||||
## Synopsis
|
## Synopsis
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
|
logwisp [command] [options]
|
||||||
logwisp [options]
|
logwisp [options]
|
||||||
```
|
```
|
||||||
|
|
||||||
## General Options
|
## Commands
|
||||||
|
|
||||||
### `--config <path>`
|
### Main Commands
|
||||||
Configuration file location.
|
|
||||||
- **Default**: `~/.config/logwisp/logwisp.toml`
|
|
||||||
- **Example**: `logwisp --config /etc/logwisp/production.toml`
|
|
||||||
|
|
||||||
### `--router`
|
| Command | Description |
|
||||||
Enable HTTP router mode for path-based routing.
|
|---------|-------------|
|
||||||
- **Default**: `false`
|
| `tls` | Generate TLS certificates |
|
||||||
- **Example**: `logwisp --router`
|
| `version` | Display version information |
|
||||||
|
| `help` | Show help information |
|
||||||
|
|
||||||
|
### tls Command
|
||||||
|
|
||||||
|
Generate TLS certificates.
|
||||||
|
|
||||||
|
```bash
|
||||||
|
logwisp tls [options]
|
||||||
|
```
|
||||||
|
|
||||||
|
**Options:**
|
||||||
|
|
||||||
|
| Flag | Description | Default |
|
||||||
|
|------|-------------|---------|
|
||||||
|
| `-ca` | Generate CA certificate | - |
|
||||||
|
| `-server` | Generate server certificate | - |
|
||||||
|
| `-client` | Generate client certificate | - |
|
||||||
|
| `-host` | Comma-separated hosts/IPs | localhost |
|
||||||
|
| `-o` | Output file prefix | Required |
|
||||||
|
| `-ca-cert` | CA certificate file | Required for server/client |
|
||||||
|
| `-ca-key` | CA key file | Required for server/client |
|
||||||
|
| `-days` | Certificate validity days | 365 |
|
||||||
|
|
||||||
|
### version Command
|
||||||
|
|
||||||
### `--version`
|
|
||||||
Display version information.
|
Display version information.
|
||||||
|
|
||||||
### `--background`
|
|
||||||
Run as background process.
|
|
||||||
- **Example**: `logwisp --background`
|
|
||||||
|
|
||||||
### `--quiet`
|
|
||||||
Suppress all output (overrides logging configuration) except sinks.
|
|
||||||
- **Example**: `logwisp --quiet`
|
|
||||||
|
|
||||||
### `--disable-status-reporter`
|
|
||||||
Disable periodic status reporting.
|
|
||||||
- **Example**: `logwisp --disable-status-reporter`
|
|
||||||
|
|
||||||
### `--config-auto-reload`
|
|
||||||
Enable automatic configuration reloading on file changes.
|
|
||||||
- **Example**: `logwisp --config-auto-reload --config /etc/logwisp/config.toml`
|
|
||||||
- Monitors configuration file for changes
|
|
||||||
- Reloads pipelines without restart
|
|
||||||
- Preserves connections during reload
|
|
||||||
|
|
||||||
### `--config-save-on-exit`
|
|
||||||
Save current configuration to file on exit.
|
|
||||||
- **Example**: `logwisp --config-save-on-exit`
|
|
||||||
- Useful with runtime modifications
|
|
||||||
- Requires valid config file path
|
|
||||||
|
|
||||||
## Logging Options
|
|
||||||
|
|
||||||
Override configuration file settings:
|
|
||||||
|
|
||||||
### `--logging.output <mode>`
|
|
||||||
LogWisp's operational log output.
|
|
||||||
- **Values**: `file`, `stdout`, `stderr`, `both`, `none`
|
|
||||||
- **Example**: `logwisp --logging.output both`
|
|
||||||
|
|
||||||
### `--logging.level <level>`
|
|
||||||
Minimum log level.
|
|
||||||
- **Values**: `debug`, `info`, `warn`, `error`
|
|
||||||
- **Example**: `logwisp --logging.level debug`
|
|
||||||
|
|
||||||
### `--logging.file.directory <path>`
|
|
||||||
Log directory (with file output).
|
|
||||||
- **Example**: `logwisp --logging.file.directory /var/log/logwisp`
|
|
||||||
|
|
||||||
### `--logging.file.name <name>`
|
|
||||||
Log file name (with file output).
|
|
||||||
- **Example**: `logwisp --logging.file.name app`
|
|
||||||
|
|
||||||
### `--logging.file.max_size_mb <size>`
|
|
||||||
Maximum log file size in MB.
|
|
||||||
- **Example**: `logwisp --logging.file.max_size_mb 200`
|
|
||||||
|
|
||||||
### `--logging.file.max_total_size_mb <size>`
|
|
||||||
Maximum total log size in MB.
|
|
||||||
- **Example**: `logwisp --logging.file.max_total_size_mb 2000`
|
|
||||||
|
|
||||||
### `--logging.file.retention_hours <hours>`
|
|
||||||
Log retention period in hours.
|
|
||||||
- **Example**: `logwisp --logging.file.retention_hours 336`
|
|
||||||
|
|
||||||
### `--logging.console.target <target>`
|
|
||||||
Console output destination.
|
|
||||||
- **Values**: `stdout`, `stderr`, `split`
|
|
||||||
- **Example**: `logwisp --logging.console.target split`
|
|
||||||
|
|
||||||
### `--logging.console.format <format>`
|
|
||||||
Console output format.
|
|
||||||
- **Values**: `txt`, `json`
|
|
||||||
- **Example**: `logwisp --logging.console.format json`
|
|
||||||
|
|
||||||
## Pipeline Options
|
|
||||||
|
|
||||||
Configure pipelines via CLI (N = array index, 0-based):
|
|
||||||
|
|
||||||
### `--pipelines.N.name <name>`

Pipeline name.

- **Example**: `logwisp --pipelines.0.name myapp`

### `--pipelines.N.sources.N.type <type>`

Source type.

- **Example**: `logwisp --pipelines.0.sources.0.type directory`

### `--pipelines.N.sources.N.options.<key> <value>`

Source options.

- **Example**: `logwisp --pipelines.0.sources.0.options.path /var/log`

### `--pipelines.N.filters.N.type <type>`

Filter type.

- **Example**: `logwisp --pipelines.0.filters.0.type include`

### `--pipelines.N.filters.N.patterns <json>`

Filter patterns (JSON array).

- **Example**: `logwisp --pipelines.0.filters.0.patterns '["ERROR","WARN"]'`

### `--pipelines.N.sinks.N.type <type>`

Sink type.

- **Example**: `logwisp --pipelines.0.sinks.0.type http`

### `--pipelines.N.sinks.N.options.<key> <value>`

Sink options.

- **Example**: `logwisp --pipelines.0.sinks.0.options.port 8080`

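Combined, the flags above can define a complete pipeline with no configuration file at all. A sketch using only flags documented in this section (the path and port are illustrative):

```shell
# Watch *.log files under /var/log/myapp, keep only ERROR/WARN lines,
# and serve the result over HTTP on port 8080.
logwisp \
  --pipelines.0.name myapp \
  --pipelines.0.sources.0.type directory \
  --pipelines.0.sources.0.options.path /var/log/myapp \
  --pipelines.0.filters.0.type include \
  --pipelines.0.filters.0.patterns '["ERROR","WARN"]' \
  --pipelines.0.sinks.0.type http \
  --pipelines.0.sinks.0.options.port 8080
```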
## Examples

### Basic Usage

```bash
logwisp version
logwisp -v
logwisp --version
```

Output includes:

- Version number
- Build date
- Git commit hash
- Go version

## Global Options

### Configuration Options

| Flag | Description | Default |
|------|-------------|---------|
| `-c, --config` | Configuration file path | `./logwisp.toml` |
| `-b, --background` | Run as daemon | false |
| `-q, --quiet` | Suppress console output | false |
| `--disable-status-reporter` | Disable status logging | false |
| `--config-auto-reload` | Enable config hot reload | false |

### Logging Options

| Flag | Description | Values |
|------|-------------|--------|
| `--logging.output` | Log output mode | file, stdout, stderr, split, all, none |
| `--logging.level` | Log level | debug, info, warn, error |
| `--logging.file.directory` | Log directory | Path |
| `--logging.file.name` | Log filename | String |
| `--logging.file.max_size_mb` | Max file size | Integer |
| `--logging.file.max_total_size_mb` | Total size limit | Integer |
| `--logging.file.retention_hours` | Retention period | Float |
| `--logging.console.target` | Console target | stdout, stderr, split |
| `--logging.console.format` | Output format | txt, json |

### Pipeline Options

Configure pipelines via CLI (N = array index, 0-based).

**Pipeline Configuration:**

| Flag | Description |
|------|-------------|
| `--pipelines.N.name` | Pipeline name |
| `--pipelines.N.sources.N.type` | Source type |
| `--pipelines.N.filters.N.type` | Filter type |
| `--pipelines.N.sinks.N.type` | Sink type |

## Flag Formats

### Boolean Flags

```bash
logwisp --quiet
logwisp --quiet=true
logwisp --quiet=false
```

### String Flags

```bash
logwisp --config /etc/logwisp/config.toml
logwisp -c config.toml
```

### Nested Configuration

```bash
logwisp --logging.level=debug
logwisp --pipelines.0.name=myapp
logwisp --pipelines.0.sources.0.type=stdin
```

### Array Values (JSON)

```bash
logwisp --pipelines.0.filters.0.patterns='["ERROR","WARN"]'
```
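A dotted flag is just a path into the configuration tree plus a value. Splitting one apart makes the mapping explicit (a sketch with plain shell parameter expansion, not LogWisp's actual parser):

```shell
# Split --pipelines.0.sources.0.type=stdin into its config path and value.
flag='--pipelines.0.sources.0.type=stdin'
key=${flag%%=*}   # drop the value part
key=${key#--}     # drop the leading dashes
value=${flag#*=}
echo "$key = $value"
```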

## Environment Variables

All flags can be set via environment:

```bash
export LOGWISP_QUIET=true
export LOGWISP_LOGGING_LEVEL=debug
export LOGWISP_PIPELINES_0_NAME=myapp
```

## Configuration Precedence

1. Command-line flags (highest)
2. Environment variables
3. Configuration file
4. Built-in defaults (lowest)

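The precedence rule amounts to "first defined source wins". A minimal stand-in that mirrors the behavior (illustration only, not LogWisp code):

```shell
# resolve: return the first non-empty argument, mirroring
# flag > environment > config file > built-in default.
resolve() {
  for v in "$@"; do
    if [ -n "$v" ]; then echo "$v"; return; fi
  done
}

# Flag set: the flag wins.
resolve "debug" "warn" "info" "info"
# No flag, env set: the environment wins.
resolve "" "warn" "info" "info"
```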
## Exit Codes

| Code | Description |
|------|-------------|
| 0 | Success |
| 1 | General error |
| 2 | Configuration file not found |
| 137 | SIGKILL received |

## Signal Handling

| Signal | Action |
|--------|--------|
| SIGINT (Ctrl+C) | Graceful shutdown |
| SIGTERM | Graceful shutdown |
| SIGHUP | Reload configuration |
| SIGUSR1 | Reload configuration |
| SIGKILL | Immediate termination |

## Usage Patterns

### Development Mode

```bash
# Verbose logging to console
logwisp --logging.output=stderr --logging.level=debug

# Quick test with stdin
logwisp --pipelines.0.sources.0.type=stdin --pipelines.0.sinks.0.type=console
```

### Production Deployment

```bash
# Background with file logging
logwisp --background --config /etc/logwisp/prod.toml --logging.output=file

# Systemd service
ExecStart=/usr/local/bin/logwisp --config /etc/logwisp/config.toml
```

### Debugging

```bash
# Check configuration
logwisp --config test.toml --logging.level=debug --disable-status-reporter

# Dry run (verify config only)
logwisp --config test.toml --quiet
```

### Quick Commands

```bash
# Generate admin password
logwisp auth -u admin -b

# Create self-signed certs
logwisp tls -server -host localhost -o server

# Check version
logwisp version
```

## Help System

### General Help

```bash
logwisp --help
logwisp -h
logwisp help
```

### Command Help

```bash
logwisp auth --help
logwisp tls --help
logwisp help auth
```

## Special Flags

### Internal Flags

These flags are for internal use:

- `--background-daemon`: Child process indicator
- `--config-save-on-exit`: Save config on shutdown

### Hidden Behaviors

- SIGHUP ignored by default (nohup behavior)
- Automatic panic recovery in pipelines
- Resource cleanup on shutdown

# Configuration Reference

LogWisp configuration uses TOML format with flexible override mechanisms.

## Configuration Precedence

Configuration sources are evaluated in order:

1. **Command-line flags** (highest priority)
2. **Environment variables**
3. **Configuration file**
4. **Built-in defaults** (lowest priority)

## File Location

LogWisp searches for configuration in order:

1. Path specified via `--config` flag
2. Path from `LOGWISP_CONFIG_FILE` environment variable
3. `~/.config/logwisp/logwisp.toml`
4. `./logwisp.toml` in current directory

## Global Settings

Top-level configuration options:

| Setting | Type | Default | Description |
|---------|------|---------|-------------|
| `background` | bool | false | Run as daemon process |
| `quiet` | bool | false | Suppress console output |
| `disable_status_reporter` | bool | false | Disable periodic status logging |
| `config_auto_reload` | bool | false | Enable file watch for auto-reload |

## Logging Configuration

LogWisp's internal operational logging:

```toml
[logging]
output = "stdout"  # file|stdout|stderr|split|all|none
level = "info"     # debug|info|warn|error

[logging.file]
directory = "./log"
name = "logwisp"
max_size_mb = 100
max_total_size_mb = 1000
retention_hours = 168.0

[logging.console]
target = "stdout"  # stdout|stderr|split
format = "txt"     # txt|json
```

### Output Modes

- **file**: Write to log files only
- **stdout**: Write to standard output
- **stderr**: Write to standard error
- **split**: INFO/DEBUG to stdout, WARN/ERROR to stderr
- **all**: Write to both file and console
- **none**: Disable all logging

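The split mode can be pictured as a small stream router. A stand-in sketch of the behavior (illustration only, not LogWisp's implementation):

```shell
# Route WARN/ERROR lines to stderr and everything else to stdout,
# as the "split" console mode does.
split_route() {
  while IFS= read -r line; do
    case "$line" in
      *WARN*|*ERROR*) echo "$line" >&2 ;;
      *)              echo "$line" ;;
    esac
  done
}

printf '%s\n' 'INFO started' 'ERROR disk full' | split_route
```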
## Pipeline Configuration

Each `[[pipelines]]` section defines an independent processing pipeline:

```toml
[[pipelines]]
name = "pipeline-name"

# Rate limiting (optional)
[pipelines.rate_limit]
rate = 1000.0
burst = 2000.0
policy = "drop"           # pass|drop
max_entry_size_bytes = 0  # 0=unlimited

# Format configuration (optional)
[pipelines.format]
type = "json"  # raw|json|txt

# Sources (required, 1+)
[[pipelines.sources]]
type = "directory"
# ... source-specific config

# Filters (optional)
[[pipelines.filters]]
type = "include"
logic = "or"
patterns = ["ERROR", "WARN"]

# Sinks (required, 1+)
[[pipelines.sinks]]
type = "http"
# ... sink-specific config
```

## Environment Variables

All configuration options support environment variable overrides:

### Naming Convention

- Prefix: `LOGWISP_`
- Path separator: `_` (underscore)
- Array indices: Numeric suffix (0-based)
- Case: UPPERCASE

### Mapping Examples

| TOML Path | Environment Variable |
|-----------|---------------------|
| `quiet` | `LOGWISP_QUIET` |
| `logging.level` | `LOGWISP_LOGGING_LEVEL` |
| `pipelines[0].name` | `LOGWISP_PIPELINES_0_NAME` |
| `pipelines[0].sources[0].type` | `LOGWISP_PIPELINES_0_SOURCES_0_TYPE` |

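The naming convention is mechanical enough to script. A sketch (`toml_to_env` is a hypothetical helper for illustration, not part of LogWisp):

```shell
# Derive the LOGWISP_* variable name from a TOML path:
# [N] indices become _N, dots become underscores, result is uppercased.
toml_to_env() {
  echo "LOGWISP_$1" \
    | sed -e 's/\[\([0-9][0-9]*\)\]/_\1/g' -e 's/\./_/g' \
    | tr '[:lower:]' '[:upper:]'
}

toml_to_env 'logging.level'       # LOGWISP_LOGGING_LEVEL
toml_to_env 'pipelines[0].name'   # LOGWISP_PIPELINES_0_NAME
```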
## Command-Line Overrides

All configuration options can be overridden via CLI flags:

```bash
logwisp --quiet \
  --logging.level=debug \
  --pipelines.0.name=myapp \
  --pipelines.0.sources.0.type=stdin
```

## Configuration Validation

LogWisp validates configuration at startup:

- Required fields presence
- Type correctness
- Port conflicts
- Path accessibility
- Pattern compilation
- Network address formats

## Hot Reload

Enable configuration hot reload:

```toml
config_auto_reload = true
```

Or via command line:

```bash
logwisp --config-auto-reload
```

Reload triggers:

- File modification detection
- SIGHUP or SIGUSR1 signals

Reloadable items:

- Pipeline configurations
- Sources and sinks
- Filters and formatters
- Rate limits

Non-reloadable (requires restart):

- Logging configuration
- Background mode
- Global settings

## Default Configuration

Minimal working configuration:

```toml
[[pipelines]]
name = "default"

[[pipelines.sources]]
type = "directory"
[pipelines.sources.directory]
path = "./"
pattern = "*.log"

[[pipelines.sinks]]
type = "console"
[pipelines.sinks.console]
target = "stdout"
```

## Configuration Schema

### Type Reference

| TOML Type | Go Type | Environment Format |
|-----------|---------|-------------------|
| String | string | Plain text |
| Integer | int64 | Numeric string |
| Float | float64 | Decimal string |
| Boolean | bool | true/false |
| Array | []T | JSON array string |
| Table | struct | Nested with `_` |

# Environment Variables
|
|
||||||
|
|
||||||
Configure LogWisp through environment variables for containerized deployments.
|
|
||||||
|
|
||||||
## Naming Convention
|
|
||||||
|
|
||||||
- **Prefix**: `LOGWISP_`
|
|
||||||
- **Path separator**: `_` (underscore)
|
|
||||||
- **Array indices**: Numeric suffix (0-based)
|
|
||||||
- **Case**: UPPERCASE
|
|
||||||
|
|
||||||
Examples:
|
|
||||||
- `logging.level` → `LOGWISP_LOGGING_LEVEL`
|
|
||||||
- `pipelines[0].name` → `LOGWISP_PIPELINES_0_NAME`

## General Variables

```bash
LOGWISP_CONFIG_FILE=/etc/logwisp/config.toml
LOGWISP_CONFIG_DIR=/etc/logwisp
LOGWISP_BACKGROUND=true
LOGWISP_QUIET=true
LOGWISP_DISABLE_STATUS_REPORTER=true
LOGWISP_CONFIG_AUTO_RELOAD=true
LOGWISP_CONFIG_SAVE_ON_EXIT=true
```

### `LOGWISP_CONFIG_FILE`
Configuration file path.
```bash
export LOGWISP_CONFIG_FILE=/etc/logwisp/config.toml
```

### `LOGWISP_CONFIG_DIR`
Configuration directory.
```bash
export LOGWISP_CONFIG_DIR=/etc/logwisp
export LOGWISP_CONFIG_FILE=production.toml
```

### `LOGWISP_ROUTER`
Enable router mode.
```bash
export LOGWISP_ROUTER=true
```

### `LOGWISP_BACKGROUND`
Run in the background.
```bash
export LOGWISP_BACKGROUND=true
```

### `LOGWISP_QUIET`
Suppress all output.
```bash
export LOGWISP_QUIET=true
```

### `LOGWISP_DISABLE_STATUS_REPORTER`
Disable periodic status reporting.
```bash
export LOGWISP_DISABLE_STATUS_REPORTER=true
```
## Logging Variables

```bash
# Output mode
LOGWISP_LOGGING_OUTPUT=both

# Log level
LOGWISP_LOGGING_LEVEL=debug

# File logging
LOGWISP_LOGGING_FILE_DIRECTORY=/var/log/logwisp
LOGWISP_LOGGING_FILE_NAME=logwisp
LOGWISP_LOGGING_FILE_MAX_SIZE_MB=100
LOGWISP_LOGGING_FILE_MAX_TOTAL_SIZE_MB=1000
LOGWISP_LOGGING_FILE_RETENTION_HOURS=168

# Console logging
LOGWISP_LOGGING_CONSOLE_TARGET=stderr
LOGWISP_LOGGING_CONSOLE_FORMAT=json

# Special console target override
LOGWISP_CONSOLE_TARGET=split # Overrides sink console targets
```
## Pipeline Configuration

### Basic Pipeline

```bash
# Pipeline name
LOGWISP_PIPELINES_0_NAME=app

# Source configuration
LOGWISP_PIPELINES_0_SOURCES_0_TYPE=directory
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PATH=/var/log/app
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PATTERN="*.log"
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_CHECK_INTERVAL_MS=100

# Sink configuration
LOGWISP_PIPELINES_0_SINKS_0_TYPE=http
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_PORT=8080
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_BUFFER_SIZE=1000
```
### Pipeline with Formatter

```bash
# Pipeline name and format
LOGWISP_PIPELINES_0_NAME=app
LOGWISP_PIPELINES_0_FORMAT=json

# Format options
LOGWISP_PIPELINES_0_FORMAT_OPTIONS_PRETTY=true
LOGWISP_PIPELINES_0_FORMAT_OPTIONS_TIMESTAMP_FIELD=ts
LOGWISP_PIPELINES_0_FORMAT_OPTIONS_LEVEL_FIELD=severity
```

### Filters

```bash
# Include filter
LOGWISP_PIPELINES_0_FILTERS_0_TYPE=include
LOGWISP_PIPELINES_0_FILTERS_0_LOGIC=or
LOGWISP_PIPELINES_0_FILTERS_0_PATTERNS='["ERROR","WARN"]'

# Exclude filter
LOGWISP_PIPELINES_0_FILTERS_1_TYPE=exclude
LOGWISP_PIPELINES_0_FILTERS_1_PATTERNS='["DEBUG"]'
```

### HTTP Source

```bash
LOGWISP_PIPELINES_0_SOURCES_0_TYPE=http
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PORT=8081
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_INGEST_PATH=/ingest
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_BUFFER_SIZE=1000
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_RATE_LIMIT_ENABLED=true
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_RATE_LIMIT_REQUESTS_PER_SECOND=10.0
```

### TCP Source

```bash
LOGWISP_PIPELINES_0_SOURCES_0_TYPE=tcp
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PORT=9091
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_BUFFER_SIZE=1000
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_RATE_LIMIT_ENABLED=true
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_RATE_LIMIT_REQUESTS_PER_SECOND=5.0
```
### HTTP Sink Options

```bash
# Basic
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_STREAM_PATH=/stream
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_STATUS_PATH=/status

# Heartbeat
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_ENABLED=true
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_INTERVAL_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_FORMAT=comment
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_INCLUDE_TIMESTAMP=true
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_INCLUDE_STATS=false

# Rate Limiting
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_ENABLED=true
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_REQUESTS_PER_SECOND=10.0
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_BURST_SIZE=20
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_LIMIT_BY=ip
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_MAX_CONNECTIONS_PER_IP=5
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_MAX_TOTAL_CONNECTIONS=100
```

### HTTP Client Sink

```bash
LOGWISP_PIPELINES_0_SINKS_0_TYPE=http_client
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_URL=https://log-server.com/ingest
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_BATCH_SIZE=100
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_BATCH_DELAY_MS=5000
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_TIMEOUT_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MAX_RETRIES=3
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RETRY_DELAY_MS=1000
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RETRY_BACKOFF=2.0
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEADERS='{"Authorization":"Bearer <API_KEY_HERE>"}'
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_INSECURE_SKIP_VERIFY=false
```

### TCP Client Sink

```bash
LOGWISP_PIPELINES_0_SINKS_0_TYPE=tcp_client
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_ADDRESS=remote-server.com:9090
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_DIAL_TIMEOUT_SECONDS=10
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_WRITE_TIMEOUT_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_KEEP_ALIVE_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RECONNECT_DELAY_MS=1000
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MAX_RECONNECT_DELAY_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RECONNECT_BACKOFF=1.5
```

### File Sink

```bash
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_DIRECTORY=/var/log/logwisp
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_NAME=app
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MAX_SIZE_MB=100
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MAX_TOTAL_SIZE_MB=1000
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RETENTION_HOURS=168
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MIN_DISK_FREE_MB=1000
```

### Console Sinks

```bash
LOGWISP_PIPELINES_0_SINKS_0_TYPE=stdout
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_BUFFER_SIZE=500
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_TARGET=stdout
```
## Example

```bash
#!/usr/bin/env bash

# General settings
export LOGWISP_DISABLE_STATUS_REPORTER=false

# Logging
export LOGWISP_LOGGING_OUTPUT=both
export LOGWISP_LOGGING_LEVEL=info

# Pipeline 0: Application logs
export LOGWISP_PIPELINES_0_NAME=app
export LOGWISP_PIPELINES_0_SOURCES_0_TYPE=directory
export LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PATH=/var/log/myapp
export LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PATTERN="*.log"

# Filters
export LOGWISP_PIPELINES_0_FILTERS_0_TYPE=include
export LOGWISP_PIPELINES_0_FILTERS_0_PATTERNS='["ERROR","WARN"]'

# HTTP sink
export LOGWISP_PIPELINES_0_SINKS_0_TYPE=http
export LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_PORT=8080
export LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_ENABLED=true
export LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_REQUESTS_PER_SECOND=25.0

# Pipeline 1: System logs
export LOGWISP_PIPELINES_1_NAME=system
export LOGWISP_PIPELINES_1_SOURCES_0_TYPE=file
export LOGWISP_PIPELINES_1_SOURCES_0_OPTIONS_PATH=/var/log/syslog

# TCP sink
export LOGWISP_PIPELINES_1_SINKS_0_TYPE=tcp
export LOGWISP_PIPELINES_1_SINKS_0_OPTIONS_PORT=9090

# Pipeline 2: Remote forwarding
export LOGWISP_PIPELINES_2_NAME=forwarder
export LOGWISP_PIPELINES_2_SOURCES_0_TYPE=http
export LOGWISP_PIPELINES_2_SOURCES_0_OPTIONS_PORT=8081
export LOGWISP_PIPELINES_2_SOURCES_0_OPTIONS_INGEST_PATH=/logs

# HTTP client sink
export LOGWISP_PIPELINES_2_SINKS_0_TYPE=http_client
export LOGWISP_PIPELINES_2_SINKS_0_OPTIONS_URL=https://log-aggregator.example.com/ingest
export LOGWISP_PIPELINES_2_SINKS_0_OPTIONS_BATCH_SIZE=100
export LOGWISP_PIPELINES_2_SINKS_0_OPTIONS_HEADERS='{"Authorization":"Bearer <API_KEY_HERE>"}'

logwisp
```

## Precedence

1. Command-line flags (highest)
2. Environment variables
3. Configuration file
4. Defaults (lowest)
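The layered lookup above can be sketched as follows (illustrative only; `resolve` is not a LogWisp API):

```go
package main

import "fmt"

// resolve walks the documented precedence order, flag > env > file > default,
// and returns the first layer that defines the key.
func resolve(key string, flags, env, file, defaults map[string]string) string {
	for _, layer := range []map[string]string{flags, env, file, defaults} {
		if v, ok := layer[key]; ok {
			return v
		}
	}
	return ""
}

func main() {
	flags := map[string]string{}
	env := map[string]string{"logging.level": "debug"}      // LOGWISP_LOGGING_LEVEL=debug
	file := map[string]string{"logging.level": "info"}      // config.toml
	defaults := map[string]string{"logging.level": "warn"}  // built-in default
	fmt.Println(resolve("logging.level", flags, env, file, defaults)) // debug
}
```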
# Filters

LogWisp filters control which log entries pass through a pipeline using regular-expression pattern matching.

- **Include**: only matching entries pass (whitelist)
- **Exclude**: matching entries are dropped (blacklist)
- Multiple filters apply sequentially; an entry must pass all of them

## Filter Types

### Include Filter

Only entries matching the patterns pass through.

```toml
[[pipelines.filters]]
type = "include"
logic = "or" # "or" | "and"
patterns = ["ERROR", "WARN", "CRITICAL"]
```

### Exclude Filter

Entries matching the patterns are dropped.

```toml
[[pipelines.filters]]
type = "exclude"
patterns = ["DEBUG", "TRACE", "health-check"]
```

## Configuration Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `type` | string | Required | Filter type (`include`/`exclude`) |
| `logic` | string | `"or"` | Pattern matching logic (`or`/`and`) |
| `patterns` | []string | Required | List of regular expressions |

## Pattern Syntax

Patterns use Go's RE2 regular expression syntax:

### Basic Patterns
- **Literal match**: `"ERROR"` matches "ERROR" anywhere in the line
- **Case-insensitive**: `"(?i)error"` matches "error", "ERROR", "Error"
- **Word boundary**: `"\\berror\\b"` matches the whole word only

### Advanced Patterns
- **Alternation**: `"ERROR|WARN|FATAL"`
- **Character classes**: `"[0-9]{3}"`
- **Wildcards**: `".*exception.*"`
- **Line anchors**: `"^ERROR"` (start), `"ERROR$"` (end)

### Special Characters
Escape special regex characters with a backslash:
- `.` → `\\.`
- `*` → `\\*`
- `[` → `\\[`
- `(` → `\\(`

## Filter Logic

### OR Logic (default)

An entry passes if ANY pattern matches:

```toml
logic = "or"
patterns = ["ERROR", "WARN"]
# Passes: "ERROR in module", "WARN: low memory"
# Blocks: "INFO: started"
```

### AND Logic

An entry passes only if ALL patterns match:

```toml
logic = "and"
patterns = ["database", "ERROR"]
# Passes: "ERROR: database connection failed"
# Blocks: "ERROR: file not found"
```

## Filter Chain

Multiple filters execute sequentially:

```toml
# First filter: include errors and warnings
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "WARN"]

# Second filter: exclude test environments
[[pipelines.filters]]
type = "exclude"
patterns = ["test-env", "staging"]
```

Processing order:

1. Entry arrives from a source
2. The include filter evaluates
3. If passed, the exclude filter evaluates
4. If all filters pass, the entry continues to the sinks
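The chain semantics above can be sketched with Go's `regexp` package (the `filter` type here is illustrative, not LogWisp's internal representation):

```go
package main

import (
	"fmt"
	"regexp"
)

// filter sketches include/exclude matching: with "or" logic an entry
// matches if any pattern hits; with "and" logic all patterns must hit.
type filter struct {
	include  bool // true = include filter, false = exclude
	andLogic bool // true = "and", false = "or" (default)
	patterns []*regexp.Regexp
}

// pass reports whether a line survives this filter.
func (f filter) pass(line string) bool {
	matched := f.andLogic
	for _, p := range f.patterns {
		m := p.MatchString(line)
		if f.andLogic {
			matched = matched && m
		} else {
			matched = matched || m
		}
	}
	if f.include {
		return matched
	}
	return !matched
}

func compile(pats ...string) []*regexp.Regexp {
	out := make([]*regexp.Regexp, len(pats))
	for i, p := range pats {
		out[i] = regexp.MustCompile(p)
	}
	return out
}

func main() {
	// The two-filter chain from the TOML above: include ERROR/WARN,
	// then exclude test environments. An entry must pass both.
	inc := filter{include: true, patterns: compile("ERROR", "WARN")}
	exc := filter{patterns: compile("test-env", "staging")}
	for _, line := range []string{"ERROR in prod", "ERROR in test-env", "INFO ok"} {
		fmt.Println(line, "->", inc.pass(line) && exc.pass(line))
	}
}
```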

## Performance Considerations

### Pattern Compilation
- Patterns compile once at startup
- Invalid patterns cause startup failure
- Complex patterns may reduce throughput

### Optimization Tips
- Place the most selective filters first
- Use simple patterns when possible; anchored patterns (`^ERROR`) are faster than unanchored ones
- Prefer non-capturing groups (`(?:error|warn)`) when you only need grouping
- Combine related patterns with alternation
- Avoid excessive wildcards (`.*`)

## Filter Statistics

Filters track:

- Total entries evaluated
- Entries passed
- Entries blocked
- Processing time per pattern
## Common Use Cases

### Log Level Filtering

```toml
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "WARN", "FATAL", "CRITICAL"]
```

### Application Filtering

```toml
[[pipelines.filters]]
type = "include"
patterns = ["app1", "app2", "app3"]
```

### Noise Reduction

```toml
[[pipelines.filters]]
type = "exclude"
patterns = [
    "health-check",
    "ping",
    "/metrics",
    "heartbeat"
]
```
### Security Filtering

```toml
[[pipelines.filters]]
type = "exclude"
patterns = [
    "password",
    "token",
    "api[_-]key",
    "secret"
]
```

### Multi-stage Filtering

```toml
# Include production logs
[[pipelines.filters]]
type = "include"
patterns = ["prod-", "production"]

# Include only errors
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "EXCEPTION", "FATAL"]

# Exclude known issues
[[pipelines.filters]]
type = "exclude"
patterns = ["ECONNRESET", "broken pipe"]
```

## Testing Filters

```bash
# Generate test entries
echo "[ERROR] Test" >> test.log
echo "[INFO] Test" >> test.log

# Run with debug logging
logwisp --log-level debug

# Check the stream output
curl -N http://localhost:8080/stream
```

## Regex Pattern Guide

LogWisp uses Go's standard regex engine (RE2). It supports most common features but omits backtracking-heavy syntax. For complex logic, chain multiple filters (e.g., an `include` followed by an `exclude`) rather than writing one complex regex.

### Basic Matching

| Pattern | Description | Example |
| :--- | :--- | :--- |
| `literal` | Matches the exact text. | `"ERROR"` matches any log with "ERROR". |
| `.` | Matches any single character (except newline). | `"user."` matches "userA", "userB", etc. |
| `a\|b` | Matches expression `a` OR expression `b`. | `"error\|fail"` matches lines with "error" or "fail". |

### Anchors and Boundaries

Anchors tie a pattern to a specific position in the line.

| Pattern | Description | Example |
| :--- | :--- | :--- |
| `^` | Matches the beginning of the line. | `"^ERROR"` matches lines *starting* with "ERROR". |
| `$` | Matches the end of the line. | `"crashed$"` matches lines *ending* with "crashed". |
| `\b` | Matches a word boundary. | `"\berror\b"` matches "error" but not "terrorist". |

### Character Classes

| Pattern | Description | Example |
| :--- | :--- | :--- |
| `[abc]` | Matches `a`, `b`, or `c`. | `"[aeiou]"` matches any vowel. |
| `[^abc]` | Matches any character *except* `a`, `b`, or `c`. | `"[^0-9]"` matches any non-digit. |
| `[a-z]` | Matches any character in the range `a` to `z`. | `"[a-zA-Z]"` matches any letter. |
| `\d` | Matches any digit (`[0-9]`). | `\d{3}` matches three digits, like "123". |
| `\w` | Matches any word character (`[a-zA-Z0-9_]`). | `\w+` matches one or more word characters. |
| `\s` | Matches any whitespace character. | `\s+` matches one or more spaces or tabs. |

### Quantifiers

Quantifiers specify how many times a character or group must appear.

| Pattern | Description | Example |
| :--- | :--- | :--- |
| `*` | Zero or more times. | `"a*"` matches "", "a", "aa". |
| `+` | One or more times. | `"a+"` matches "a", "aa", but not "". |
| `?` | Zero or one time. | `"colou?r"` matches "color" and "colour". |
| `{n}` | Exactly `n` times. | `\d{4}` matches a 4-digit number. |
| `{n,}` | `n` or more times. | `\d{2,}` matches numbers with 2 or more digits. |
| `{n,m}` | Between `n` and `m` times. | `\d{1,3}` matches numbers with 1 to 3 digits. |

### Grouping

| Pattern | Description | Example |
| :--- | :--- | :--- |
| `(...)` | Groups an expression and captures the match. | `(ERROR\|WARN)` captures "ERROR" or "WARN". |
| `(?:...)` | Groups an expression *without* capturing. Faster. | `(?:ERROR\|WARN)` is more efficient if you only need grouping. |

### Flags and Modifiers

Flags are placed at the beginning of a pattern to change its behavior.

| Pattern | Description |
| :--- | :--- |
| `(?i)` | Case-insensitive matching. |
| `(?m)` | Multi-line mode (`^` and `$` match start/end of lines). |

**Example:** `"(?i)error"` matches "error", "ERROR", and "Error".

### Practical Examples for Logging

* **Match an IP address:**
  ```
  \b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b
  ```

* **Match HTTP 4xx or 5xx status codes:**
  ```
  "status[= ](4|5)\d{2}"
  ```

* **Match a slow database query (>100ms):**
  ```
  "Query took [1-9]\d{2,}ms"
  ```

* **Match key-value pairs:**
  ```
  "user=(admin|guest)"
  ```

* **Match Java exceptions:**
  ```
  "Exception:|at .+\.java:\d+"
  ```
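Since LogWisp patterns run on Go's RE2 engine, the practical patterns above can be sanity-checked directly with the standard `regexp` package before putting them in a config:

```go
package main

import (
	"fmt"
	"regexp"
)

// Package-level patterns copied from the examples above.
var (
	ipPat     = regexp.MustCompile(`\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b`)
	statusPat = regexp.MustCompile(`status[= ](4|5)\d{2}`)
	slowPat   = regexp.MustCompile(`Query took [1-9]\d{2,}ms`)
)

func main() {
	fmt.Println(ipPat.MatchString("client 192.168.1.10 connected")) // true
	fmt.Println(statusPat.MatchString("GET /api status=503"))       // true
	fmt.Println(slowPat.MatchString("Query took 87ms"))             // false (only 2 digits)
}
```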
# Formatters

LogWisp formatters transform log entries before output to sinks.

## Formatter Types

### Raw Formatter

Outputs the log message as-is, with an optional trailing newline.

```toml
[pipelines.format]
type = "raw"

[pipelines.format.raw]
add_new_line = true
```

**Configuration Options:**

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `add_new_line` | bool | true | Append newline to messages |

### JSON Formatter

Produces structured JSON output.

```toml
[pipelines.format]
type = "json"

[pipelines.format.json]
pretty = false
timestamp_field = "timestamp"
level_field = "level"
message_field = "message"
source_field = "source"
```

**Configuration Options:**

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `pretty` | bool | false | Pretty-print JSON |
| `timestamp_field` | string | "timestamp" | Field name for the timestamp |
| `level_field` | string | "level" | Field name for the log level |
| `message_field` | string | "message" | Field name for the message |
| `source_field` | string | "source" | Field name for the source |

**Output Structure:**

```json
{
  "timestamp": "2024-01-01T12:00:00Z",
  "level": "ERROR",
  "source": "app",
  "message": "Connection failed"
}
```

### Text Formatter

Template-based text formatting.

```toml
[pipelines.format]
type = "txt"

[pipelines.format.txt]
template = "[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Source}} - {{.Message}}"
timestamp_format = "2006-01-02T15:04:05.000Z07:00"
```

**Configuration Options:**

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `template` | string | See below | Go template string |
| `timestamp_format` | string | RFC3339 | Go time format string |

**Default Template:**
```
[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Source}} - {{.Message}}{{ if .Fields }} {{.Fields}}{{ end }}
```

## Template Functions

Available functions in text templates:

| Function | Description | Example |
|----------|-------------|---------|
| `FmtTime` | Format timestamp | `{{.Timestamp \| FmtTime}}` |
| `ToUpper` | Convert to uppercase | `{{.Level \| ToUpper}}` |
| `ToLower` | Convert to lowercase | `{{.Source \| ToLower}}` |
| `TrimSpace` | Remove surrounding whitespace | `{{.Message \| TrimSpace}}` |

## Template Variables

Available variables in templates:

| Variable | Type | Description |
|----------|------|-------------|
| `.Timestamp` | time.Time | Entry timestamp |
| `.Level` | string | Log level |
| `.Source` | string | Source identifier |
| `.Message` | string | Log message |
| `.Fields` | string | Additional fields (JSON) |

## Time Format Strings

Common Go time format patterns (Go formats times against the reference time `Mon Jan 2 15:04:05 MST 2006`):

| Pattern | Example Output |
|---------|---------------|
| `2006-01-02T15:04:05Z07:00` | 2024-01-02T15:04:05Z |
| `2006-01-02 15:04:05` | 2024-01-02 15:04:05 |
| `Jan 2 15:04:05` | Jan 2 15:04:05 |
| `15:04:05.000` | 15:04:05.123 |
| `2006/01/02` | 2024/01/02 |

## Format Selection

### Default Behavior

If no formatter is specified:

- **HTTP/TCP sinks**: JSON format
- **Console/File sinks**: Raw format
- **Client sinks**: JSON format

### Per-Pipeline Configuration

Each pipeline can have its own formatter:

```toml
[[pipelines]]
name = "json-pipeline"
[pipelines.format]
type = "json"

[[pipelines]]
name = "text-pipeline"
[pipelines.format]
type = "txt"
```

## Message Processing

### JSON Message Handling

When the JSON formatter receives a message that is itself JSON:

1. It attempts to parse the message as JSON
2. Parsed fields are merged with LogWisp metadata
3. LogWisp fields take precedence on conflicts
4. If parsing fails, the message falls back to a plain string
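A sketch of steps 1–4, with `mergeJSON` as a hypothetical helper name (not LogWisp's actual implementation):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// mergeJSON sketches the documented handling: if the message parses as a
// JSON object its fields are kept, but LogWisp metadata wins on conflicts;
// otherwise the message is emitted as a plain string field.
func mergeJSON(message string, meta map[string]any) map[string]any {
	out := map[string]any{}
	var parsed map[string]any
	if err := json.Unmarshal([]byte(message), &parsed); err == nil && parsed != nil {
		for k, v := range parsed {
			out[k] = v
		}
	} else {
		out["message"] = message // fallback for non-JSON messages
	}
	for k, v := range meta { // LogWisp metadata takes precedence
		out[k] = v
	}
	return out
}

func main() {
	meta := map[string]any{"level": "ERROR", "source": "app"}
	b, _ := json.Marshal(mergeJSON(`{"user":"alice","level":"info"}`, meta))
	fmt.Println(string(b))
}
```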

### Field Preservation

LogWisp metadata always includes:

- Timestamp (from source or current time)
- Level (detected or default)
- Source (origin identifier)
- Message (original content)

## Performance Characteristics

### Formatter Performance

Relative performance (fastest to slowest):

1. **Raw**: direct passthrough
2. **Text**: template execution
3. **JSON**: serialization
4. **JSON (pretty)**: formatted serialization

### Optimization Tips

- Use the raw format for high throughput
- Template compilation is cached automatically
- Minimize template complexity
- Avoid pretty-printed JSON in production

## Common Configurations

### Structured Logging

```toml
[pipelines.format]
type = "json"
[pipelines.format.json]
pretty = false
```

### Human-Readable Logs

```toml
[pipelines.format]
type = "txt"
[pipelines.format.txt]
template = "{{.Timestamp | FmtTime}} [{{.Level}}] {{.Message}}"
timestamp_format = "15:04:05"
```

### Syslog Format

```toml
[pipelines.format]
type = "txt"
[pipelines.format.txt]
template = "{{.Timestamp | FmtTime}} {{.Source}} {{.Level}}: {{.Message}}"
timestamp_format = "Jan 2 15:04:05"
```

### Minimal Output

```toml
[pipelines.format]
type = "txt"
[pipelines.format.txt]
template = "{{.Message}}"
```
# Installation Guide

LogWisp installation and service configuration for Linux and FreeBSD systems.

## Requirements

- **OS**: Linux, FreeBSD
- **Architecture**: amd64
- **Go**: 1.24+ (for building from source)

## Installation Methods

### Pre-built Binaries

Download the latest release binary for your platform and install it to `/usr/local/bin`:

```bash
# Linux amd64
wget https://github.com/lixenwraith/logwisp/releases/latest/download/logwisp-linux-amd64
chmod +x logwisp-linux-amd64
sudo mv logwisp-linux-amd64 /usr/local/bin/logwisp

# FreeBSD amd64
fetch https://github.com/lixenwraith/logwisp/releases/latest/download/logwisp-freebsd-amd64
chmod +x logwisp-freebsd-amd64
sudo mv logwisp-freebsd-amd64 /usr/local/bin/logwisp

# Verify
logwisp --version
```

### Building from Source

Requires Go 1.24 or newer:

```bash
git clone https://github.com/lixenwraith/logwisp.git
cd logwisp
go build -o logwisp ./src/cmd/logwisp
sudo install -m 755 logwisp /usr/local/bin/
```

### Go Install Method

Install directly with Go (version information will not be embedded in the binary):

```bash
go install github.com/lixenwraith/logwisp/src/cmd/logwisp@latest
```

## Service Configuration

### Linux (systemd)

Create the systemd service file `/etc/systemd/system/logwisp.service`:

```ini
[Unit]
Description=LogWisp Log Transport Service
After=network.target

[Service]
Type=simple
User=logwisp
ExecStart=/usr/local/bin/logwisp --config /etc/logwisp/logwisp.toml
|
Group=logwisp
|
||||||
Restart=always
|
ExecStart=/usr/local/bin/logwisp -c /etc/logwisp/logwisp.toml
|
||||||
|
Restart=on-failure
|
||||||
|
RestartSec=10
|
||||||
StandardOutput=journal
|
StandardOutput=journal
|
||||||
StandardError=journal
|
StandardError=journal
|
||||||
|
WorkingDirectory=/var/lib/logwisp
|
||||||
|
|
||||||
[Install]
|
[Install]
|
||||||
WantedBy=multi-user.target
|
WantedBy=multi-user.target
|
||||||
EOF
|
```
|
||||||
|
|
||||||
# Create user
|
Setup service user and directories:
|
||||||
|
|
||||||
|
```bash
|
||||||
sudo useradd -r -s /bin/false logwisp
|
sudo useradd -r -s /bin/false logwisp
|
||||||
|
sudo mkdir -p /etc/logwisp /var/lib/logwisp /var/log/logwisp
|
||||||
# Create service user
|
sudo chown logwisp:logwisp /var/lib/logwisp /var/log/logwisp
|
||||||
sudo useradd -r -s /bin/false logwisp
|
|
||||||
|
|
||||||
# Create configuration directory
|
|
||||||
sudo mkdir -p /etc/logwisp
|
|
||||||
sudo chown logwisp:logwisp /etc/logwisp
|
|
||||||
|
|
||||||
# Enable and start
|
|
||||||
sudo systemctl daemon-reload
|
sudo systemctl daemon-reload
|
||||||
sudo systemctl enable logwisp
|
sudo systemctl enable logwisp
|
||||||
sudo systemctl start logwisp
|
sudo systemctl start logwisp
|
||||||
@ -79,141 +78,90 @@ sudo systemctl start logwisp
|
|||||||
|
|
||||||
### FreeBSD (rc.d)
|
### FreeBSD (rc.d)
|
||||||
|
|
||||||
```bash
|
Create rc script `/usr/local/etc/rc.d/logwisp`:
|
||||||
# Create service script
|
|
||||||
sudo tee /usr/local/etc/rc.d/logwisp << 'EOF'
|
```sh
|
||||||
#!/bin/sh
|
#!/bin/sh
|
||||||
|
|
||||||
# PROVIDE: logwisp
|
# PROVIDE: logwisp
|
||||||
# REQUIRE: DAEMON
|
# REQUIRE: DAEMON NETWORKING
|
||||||
# KEYWORD: shutdown
|
# KEYWORD: shutdown
|
||||||
|
|
||||||
. /etc/rc.subr
|
. /etc/rc.subr
|
||||||
|
|
||||||
name="logwisp"
|
name="logwisp"
|
||||||
rcvar="${name}_enable"
|
rcvar="${name}_enable"
|
||||||
command="/usr/local/bin/logwisp"
|
|
||||||
command_args="--config /usr/local/etc/logwisp/logwisp.toml"
|
|
||||||
pidfile="/var/run/${name}.pid"
|
pidfile="/var/run/${name}.pid"
|
||||||
start_cmd="logwisp_start"
|
command="/usr/local/bin/logwisp"
|
||||||
stop_cmd="logwisp_stop"
|
command_args="-c /usr/local/etc/logwisp/logwisp.toml"
|
||||||
|
|
||||||
logwisp_start()
|
|
||||||
{
|
|
||||||
echo "Starting logwisp service..."
|
|
||||||
/usr/sbin/daemon -c -f -p ${pidfile} ${command} ${command_args}
|
|
||||||
}
|
|
||||||
|
|
||||||
logwisp_stop()
|
|
||||||
{
|
|
||||||
if [ -f ${pidfile} ]; then
|
|
||||||
echo "Stopping logwisp service..."
|
|
||||||
kill $(cat ${pidfile})
|
|
||||||
rm -f ${pidfile}
|
|
||||||
fi
|
|
||||||
}
|
|
||||||
|
|
||||||
load_rc_config $name
|
load_rc_config $name
|
||||||
: ${logwisp_enable:="NO"}
|
: ${logwisp_enable:="NO"}
|
||||||
: ${logwisp_config:="/usr/local/etc/logwisp/logwisp.toml"}
|
|
||||||
|
|
||||||
run_rc_command "$1"
|
run_rc_command "$1"
|
||||||
EOF
|
```
|
||||||
|
|
||||||
# Make executable
|
Setup service:
|
||||||
|
|
||||||
|
```bash
|
||||||
sudo chmod +x /usr/local/etc/rc.d/logwisp
|
sudo chmod +x /usr/local/etc/rc.d/logwisp
|
||||||
|
|
||||||
# Create service user
|
|
||||||
sudo pw useradd logwisp -d /nonexistent -s /usr/sbin/nologin
|
sudo pw useradd logwisp -d /nonexistent -s /usr/sbin/nologin
|
||||||
|
sudo mkdir -p /usr/local/etc/logwisp /var/log/logwisp
|
||||||
# Create configuration directory
|
sudo chown logwisp:logwisp /var/log/logwisp
|
||||||
sudo mkdir -p /usr/local/etc/logwisp
|
|
||||||
sudo chown logwisp:logwisp /usr/local/etc/logwisp
|
|
||||||
|
|
||||||
# Enable service
|
|
||||||
sudo sysrc logwisp_enable="YES"
|
sudo sysrc logwisp_enable="YES"
|
||||||
|
|
||||||
# Start service
|
|
||||||
sudo service logwisp start
|
sudo service logwisp start
|
||||||
```
|
```
|
||||||
|
|
||||||
## Post-Installation
|
## Directory Structure
|
||||||
|
|
||||||
|
Standard installation directories:
|
||||||
|
|
||||||
|
| Purpose | Linux | FreeBSD |
|
||||||
|
|---------|-------|---------|
|
||||||
|
| Binary | `/usr/local/bin/logwisp` | `/usr/local/bin/logwisp` |
|
||||||
|
| Configuration | `/etc/logwisp/` | `/usr/local/etc/logwisp/` |
|
||||||
|
| Working Directory | `/var/lib/logwisp/` | `/var/db/logwisp/` |
|
||||||
|
| Log Files | `/var/log/logwisp/` | `/var/log/logwisp/` |
|
||||||
|
| PID File | `/var/run/logwisp.pid` | `/var/run/logwisp.pid` |
|
||||||
|
|
||||||
|
## Post-Installation Verification
|
||||||
|
|
||||||
|
Verify the installation:
|
||||||
|
|
||||||
### Verify Installation
|
|
||||||
```bash
|
```bash
|
||||||
# Check version
|
# Check version
|
||||||
logwisp --version
|
logwisp version
|
||||||
|
|
||||||
# Test configuration
|
# Test configuration
|
||||||
logwisp --config /etc/logwisp/logwisp.toml --log-level debug
|
logwisp -c /etc/logwisp/logwisp.toml --disable-status-reporter
|
||||||
|
|
||||||
# Check service
|
# Check service status (Linux)
|
||||||
sudo systemctl status logwisp
|
sudo systemctl status logwisp
|
||||||
```
|
|
||||||
|
|
||||||
### Linux Service Status
|
# Check service status (FreeBSD)
|
||||||
```bash
|
|
||||||
sudo systemctl status logwisp
|
|
||||||
```
|
|
||||||
|
|
||||||
### FreeBSD Service Status
|
|
||||||
```bash
|
|
||||||
sudo service logwisp status
|
sudo service logwisp status
|
||||||
```
|
```
|
||||||
|
|
||||||
### Initial Configuration
|
|
||||||
|
|
||||||
Create a basic configuration file:
|
|
||||||
|
|
||||||
```toml
|
|
||||||
# /etc/logwisp/logwisp.toml (Linux)
|
|
||||||
# /usr/local/etc/logwisp/logwisp.toml (FreeBSD)
|
|
||||||
|
|
||||||
[[pipelines]]
|
|
||||||
name = "myapp"
|
|
||||||
|
|
||||||
[[pipelines.sources]]
|
|
||||||
type = "directory"
|
|
||||||
options = {
|
|
||||||
path = "/path/to/application/logs",
|
|
||||||
pattern = "*.log"
|
|
||||||
}
|
|
||||||
|
|
||||||
[[pipelines.sinks]]
|
|
||||||
type = "http"
|
|
||||||
options = { port = 8080 }
|
|
||||||
```
|
|
||||||
|
|
||||||
Restart service after configuration changes:
|
|
||||||
|
|
||||||
**Linux:**
|
|
||||||
```bash
|
|
||||||
sudo systemctl restart logwisp
|
|
||||||
```
|
|
||||||
|
|
||||||
**FreeBSD:**
|
|
||||||
```bash
|
|
||||||
sudo service logwisp restart
|
|
||||||
```
|
|
||||||
|
|
||||||
## Uninstallation
|
## Uninstallation
|
||||||
|
|
||||||
### Linux
|
### Linux
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
sudo systemctl stop logwisp
|
sudo systemctl stop logwisp
|
||||||
sudo systemctl disable logwisp
|
sudo systemctl disable logwisp
|
||||||
sudo rm /usr/local/bin/logwisp
|
sudo rm /usr/local/bin/logwisp
|
||||||
sudo rm /etc/systemd/system/logwisp.service
|
sudo rm /etc/systemd/system/logwisp.service
|
||||||
sudo rm -rf /etc/logwisp
|
sudo rm -rf /etc/logwisp /var/lib/logwisp /var/log/logwisp
|
||||||
sudo userdel logwisp
|
sudo userdel logwisp
|
||||||
```
|
```
|
||||||
|
|
||||||
### FreeBSD
|
### FreeBSD
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
sudo service logwisp stop
|
sudo service logwisp stop
|
||||||
sudo sysrc logwisp_enable="NO"
|
sudo sysrc -x logwisp_enable
|
||||||
sudo rm /usr/local/bin/logwisp
|
sudo rm /usr/local/bin/logwisp
|
||||||
sudo rm /usr/local/etc/rc.d/logwisp
|
sudo rm /usr/local/etc/rc.d/logwisp
|
||||||
sudo rm -rf /usr/local/etc/logwisp
|
sudo rm -rf /usr/local/etc/logwisp /var/db/logwisp /var/log/logwisp
|
||||||
sudo pw userdel logwisp
|
sudo pw userdel logwisp
|
||||||
```
|
```
|
||||||
289 doc/networking.md Normal file

# Networking

Network configuration for LogWisp connections, including TLS, rate limiting, and access control.

## TLS Configuration

### TLS Support Matrix

| Component | TLS Support | Notes |
|-----------|-------------|-------|
| HTTP Source | ✓ | Full TLS 1.2/1.3 |
| HTTP Sink | ✓ | Full TLS 1.2/1.3 |
| HTTP Client | ✓ | Client certificates |
| TCP Source | ✗ | No encryption |
| TCP Sink | ✗ | No encryption |
| TCP Client | ✗ | No encryption |

### Server TLS Configuration

```toml
[pipelines.sources.http.tls]
enabled = true
cert_file = "/path/to/server.pem"
key_file = "/path/to/server.key"
min_version = "TLS1.2"  # TLS1.2|TLS1.3
client_auth = false
client_ca_file = "/path/to/client-ca.pem"
verify_client_cert = true
```

### Client TLS Configuration

```toml
[pipelines.sinks.http_client.tls]
enabled = true
server_ca_file = "/path/to/ca.pem"  # For server verification
server_name = "logs.example.com"
insecure_skip_verify = false
client_cert_file = "/path/to/client.pem"  # For mTLS
client_key_file = "/path/to/client.key"   # For mTLS
```

### TLS Certificate Generation

Using the `tls` command:

```bash
# Generate CA certificate
logwisp tls -ca -o myca

# Generate server certificate
logwisp tls -server -ca-cert myca.pem -ca-key myca.key -host localhost,server.example.com -o server

# Generate client certificate
logwisp tls -client -ca-cert myca.pem -ca-key myca.key -o client
```

Command options:

| Flag | Description |
|------|-------------|
| `-ca` | Generate CA certificate |
| `-server` | Generate server certificate |
| `-client` | Generate client certificate |
| `-host` | Comma-separated hostnames/IPs |
| `-o` | Output file prefix |
| `-days` | Certificate validity (default: 365) |

## Network Rate Limiting

### Configuration Options

```toml
[pipelines.sources.http.net_limit]
enabled = true
max_connections_per_ip = 10
max_connections_total = 100
requests_per_second = 100.0
burst_size = 200
response_code = 429
response_message = "Rate limit exceeded"
ip_whitelist = ["192.168.1.0/24"]
ip_blacklist = ["10.0.0.0/8"]
```

### Rate Limiting Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| `enabled` | bool | Enable rate limiting |
| `max_connections_per_ip` | int | Per-IP connection limit |
| `max_connections_total` | int | Global connection limit |
| `requests_per_second` | float | Request rate limit |
| `burst_size` | int | Token bucket burst capacity |
| `response_code` | int | HTTP response code when limited |
| `response_message` | string | Response message when limited |
### IP Access Control

**Whitelist**: Only specified IPs/networks allowed
```toml
ip_whitelist = [
    "192.168.1.0/24",  # Local network
    "10.0.0.0/8",      # Private network
    "203.0.113.5"      # Specific IP
]
```

**Blacklist**: Specified IPs/networks denied
```toml
ip_blacklist = [
    "192.168.1.100",   # Blocked host
    "10.0.0.0/16"      # Blocked subnet
]
```

Processing order:
1. Blacklist (immediate deny if matched)
2. Whitelist (must match if configured)
3. Rate limiting
4. Authentication
## Connection Management

### TCP Keep-Alive

```toml
[pipelines.sources.tcp]
keep_alive = true
keep_alive_period_ms = 30000  # 30 seconds
```

Benefits:
- Detect dead connections
- Prevent connection timeout
- Maintain NAT mappings

### Connection Timeouts

```toml
[pipelines.sources.http]
read_timeout_ms = 10000   # 10 seconds
write_timeout_ms = 10000  # 10 seconds

[pipelines.sinks.tcp_client]
dial_timeout = 10   # Connection timeout
write_timeout = 30  # Write timeout
read_timeout = 10   # Read timeout
```

### Connection Limits

Global limits:
```toml
max_connections = 100  # Total concurrent connections
```

Per-IP limits:
```toml
max_connections_per_ip = 10
```

## Heartbeat Configuration

Keep connections alive with periodic heartbeats:

### HTTP Sink Heartbeat

```toml
[pipelines.sinks.http.heartbeat]
enabled = true
interval_ms = 30000
include_timestamp = true
include_stats = false
format = "comment"  # comment|event|json
```

Formats:
- **comment**: SSE comment (`: heartbeat`)
- **event**: SSE event with data
- **json**: JSON-formatted heartbeat

### TCP Sink Heartbeat

```toml
[pipelines.sinks.tcp.heartbeat]
enabled = true
interval_ms = 30000
include_timestamp = true
include_stats = false
format = "json"  # json|txt
```
## Network Protocols

### HTTP/HTTPS

- HTTP/1.1 and HTTP/2 support
- Persistent connections
- Chunked transfer encoding
- Server-Sent Events (SSE)

### TCP

- Raw TCP sockets
- Newline-delimited protocol
- Binary-safe transmission
- No encryption available

## Port Configuration

### Default Ports

| Service | Default Port | Protocol |
|---------|--------------|----------|
| HTTP Source | 8081 | HTTP/HTTPS |
| HTTP Sink | 8080 | HTTP/HTTPS |
| TCP Source | 9091 | TCP |
| TCP Sink | 9090 | TCP |

### Port Conflict Prevention

LogWisp validates port usage at startup:
- Detects port conflicts across pipelines
- Prevents duplicate bindings
- Suggests alternative ports

## Network Security

### Best Practices

1. **Use TLS for HTTP** connections when possible
2. **Implement rate limiting** to prevent DoS
3. **Configure IP whitelists** for restricted access
4. **Enable authentication** for all network endpoints
5. **Use non-standard ports** to reduce scanning exposure
6. **Monitor connection metrics** for anomalies
7. **Set appropriate timeouts** to prevent resource exhaustion

### Security Warnings

- TCP connections are **always unencrypted**
- HTTP Basic/Token auth **requires TLS**
- Avoid `skip_verify` in production
- Never expose unauthenticated endpoints publicly

## Load Balancing

### Client-Side Load Balancing

Configure multiple endpoints (future feature):
```toml
[[pipelines.sinks.http_client]]
urls = [
    "https://log1.example.com/ingest",
    "https://log2.example.com/ingest"
]
strategy = "round-robin"  # round-robin|random|least-conn
```

### Server-Side Considerations

- Use reverse proxy for load distribution
- Configure session affinity if needed
- Monitor individual instance health

## Troubleshooting

### Common Issues

**Connection Refused**
- Check firewall rules
- Verify service is running
- Confirm correct port/host

**TLS Handshake Failure**
- Verify certificate validity
- Check certificate chain
- Confirm TLS versions match

**Rate Limit Exceeded**
- Adjust rate limit parameters
- Add IP to whitelist
- Implement client-side throttling

**Connection Timeout**
- Increase timeout values
- Check network latency
- Verify keep-alive settings
343 doc/operations.md Normal file

# Operations Guide

Running, monitoring, and maintaining LogWisp in production.

## Starting LogWisp

### Manual Start

```bash
# Foreground with default config
logwisp

# Background mode
logwisp --background

# With specific configuration
logwisp --config /etc/logwisp/production.toml
```

### Service Management

**Linux (systemd):**
```bash
sudo systemctl start logwisp
sudo systemctl stop logwisp
sudo systemctl restart logwisp
sudo systemctl status logwisp
```

**FreeBSD (rc.d):**
```bash
sudo service logwisp start
sudo service logwisp stop
sudo service logwisp restart
sudo service logwisp status
```

## Configuration Management

### Hot Reload

Enable automatic configuration reload:
```toml
config_auto_reload = true
```

Or via command line:
```bash
logwisp --config-auto-reload
```

Trigger manual reload:
```bash
kill -HUP $(pidof logwisp)
# or
kill -USR1 $(pidof logwisp)
```

### Configuration Validation

Test configuration without starting:
```bash
logwisp --config test.toml --quiet --disable-status-reporter
```

Check for errors:
- Port conflicts
- Invalid patterns
- Missing required fields
- File permissions

## Monitoring

### Status Reporter

Built-in periodic status logging (30-second intervals):

```
[INFO] Status report active_pipelines=2 time=15:04:05
[INFO] Pipeline status pipeline=app entries_processed=10523
[INFO] Pipeline status pipeline=system entries_processed=5231
```

Disable if not needed:
```toml
disable_status_reporter = true
```

### HTTP Status Endpoint

When using HTTP sink:
```bash
curl http://localhost:8080/status | jq .
```

Response structure:
```json
{
  "uptime": "2h15m30s",
  "pipelines": {
    "default": {
      "sources": 1,
      "sinks": 2,
      "processed": 15234,
      "filtered": 523,
      "dropped": 12
    }
  }
}
```
### Metrics Collection

Track via logs:
- Total entries processed
- Entries filtered
- Entries dropped
- Active connections
- Buffer utilization

## Log Management

### LogWisp's Operational Logs

Configuration for LogWisp's own logs:

```toml
[logging]
output = "file"
level = "info"

[logging.file]
directory = "/var/log/logwisp"
name = "logwisp"
max_size_mb = 100
retention_hours = 168
```

### Log Rotation

Automatic rotation based on:
- File size threshold
- Total size limit
- Retention period

Manual rotation:
```bash
# Move current log
mv /var/log/logwisp/logwisp.log /var/log/logwisp/logwisp.log.1
# Send signal to reopen
kill -USR1 $(pidof logwisp)
```

### Log Levels

Operational log levels:
- **debug**: Detailed debugging information
- **info**: General operational messages
- **warn**: Warning conditions
- **error**: Error conditions

Production recommendation: `info` or `warn`

## Performance Tuning

### Buffer Sizing

Adjust buffers based on load:

```toml
# High-volume source
[[pipelines.sources]]
type = "http"
[pipelines.sources.http]
buffer_size = 5000  # Increase for burst traffic

# Slow consumer sink
[[pipelines.sinks]]
type = "http_client"
[pipelines.sinks.http_client]
buffer_size = 10000  # Larger buffer for slow endpoints
batch_size = 500     # Larger batches
```

### Rate Limiting

Protect against overload:

```toml
[pipelines.rate_limit]
rate = 1000.0   # Entries per second
burst = 2000.0  # Burst capacity
policy = "drop" # Drop excess entries
```

### Connection Limits

Prevent resource exhaustion:

```toml
[pipelines.sources.http.net_limit]
max_connections_total = 1000
max_connections_per_ip = 50
```

## Troubleshooting

### Common Issues

**High Memory Usage**
- Check buffer sizes
- Monitor goroutine count
- Review retention settings

**Dropped Entries**
- Increase buffer sizes
- Add rate limiting
- Check sink performance

**Connection Errors**
- Verify network connectivity
- Check firewall rules
- Review TLS certificates

### Debug Mode

Enable detailed logging:
```bash
logwisp --logging.level=debug --logging.output=stderr
```

### Health Checks

Implement external monitoring:
```bash
#!/bin/bash
# Health check script
if ! curl -sf http://localhost:8080/status > /dev/null; then
    echo "LogWisp health check failed"
    exit 1
fi
```

## Backup and Recovery

### Configuration Backup

```bash
# Backup configuration
cp /etc/logwisp/logwisp.toml /backup/logwisp-$(date +%Y%m%d).toml

# Version control
git add /etc/logwisp/
git commit -m "LogWisp config update"
```

### State Recovery

LogWisp maintains minimal state:
- File read positions (automatic)
- Connection state (automatic)

Recovery after crash:
1. Service automatically restarts (systemd/rc.d)
2. File sources resume from last position
3. Network sources accept new connections
4. Clients reconnect automatically

## Security Operations

### Certificate Management

Monitor certificate expiration:
```bash
openssl x509 -in /path/to/cert.pem -noout -enddate
```

Rotate certificates:
1. Generate new certificates
2. Update configuration
3. Reload service (SIGHUP)
### Access Auditing

Monitor access patterns:
- Review connection logs
- Monitor rate limit hits

## Maintenance

### Planned Maintenance

1. Notify users of maintenance window
2. Stop accepting new connections
3. Drain existing connections
4. Perform maintenance
5. Restart service

### Upgrade Process

1. Download new version
2. Test with current configuration
3. Stop old version
4. Install new version
5. Start service
6. Verify operation

### Cleanup Tasks

Regular maintenance:
- Remove old log files
- Clean temporary files
- Verify disk space
- Update documentation

## Disaster Recovery

### Backup Strategy

- Configuration files: Daily
- TLS certificates: After generation
- Authentication credentials: Secure storage

### Recovery Procedures

Service failure:
1. Check service status
2. Review error logs
3. Verify configuration
4. Restart service

Data loss:
1. Restore configuration from backup
2. Regenerate certificates if needed
3. Recreate authentication credentials
4. Restart service

### Business Continuity

- Run multiple instances for redundancy
- Use load balancer for distribution
- Implement monitoring alerts
- Document recovery procedures
# Quick Start Guide

Get LogWisp up and running in minutes:

## Installation

### From Source

```bash
git clone https://github.com/lixenwraith/logwisp.git
cd logwisp
make install
```

### Using Go Install

```bash
go install github.com/lixenwraith/logwisp/src/cmd/logwisp@latest
```

## Basic Usage

### 1. Monitor Current Directory

Start LogWisp with defaults (monitors `*.log` files in current directory):

```bash
logwisp
```

### 2. Stream Logs

Connect to the log stream:

```bash
# SSE stream
curl -N http://localhost:8080/stream

# Check status
curl http://localhost:8080/status | jq .
```
|
|
||||||
### 3. Generate Test Logs
|
|
||||||
|
|
||||||
```bash
|
|
||||||
echo "[ERROR] Something went wrong!" >> test.log
|
|
||||||
echo "[INFO] Application started" >> test.log
|
|
||||||
echo "[WARN] Low memory warning" >> test.log
|
|
||||||
```
|
|
||||||
|
|
||||||
## Common Scenarios
|
|
||||||
|
|
||||||
### Monitor Specific Directory
|
|
||||||
|
|
||||||
Create `~/.config/logwisp/logwisp.toml`:
|
|
||||||
|
|
||||||
```toml
|
|
||||||
[[pipelines]]
|
|
||||||
name = "myapp"
|
|
||||||
|
|
||||||
[[pipelines.sources]]
|
|
||||||
type = "directory"
|
|
||||||
options = { path = "/var/log/myapp", pattern = "*.log" }
|
|
||||||
|
|
||||||
[[pipelines.sinks]]
|
|
||||||
type = "http"
|
|
||||||
options = { port = 8080 }
|
|
||||||
```
|
|
||||||
|
|
||||||
### Filter Only Errors
|
|
||||||
|
|
||||||
```toml
|
|
||||||
[[pipelines]]
|
|
||||||
name = "errors"
|
|
||||||
|
|
||||||
[[pipelines.sources]]
|
|
||||||
type = "directory"
|
|
||||||
options = { path = "./", pattern = "*.log" }
|
|
||||||
|
|
||||||
[[pipelines.filters]]
|
|
||||||
type = "include"
|
|
||||||
patterns = ["ERROR", "WARN", "CRITICAL"]
|
|
||||||
|
|
||||||
[[pipelines.sinks]]
|
|
||||||
type = "http"
|
|
||||||
options = { port = 8080 }
|
|
||||||
```
|
|
||||||
|
|
||||||
### Multiple Outputs
|
|
||||||
|
|
||||||
Send logs to both HTTP stream and file:
|
|
||||||
|
|
||||||
```toml
|
|
||||||
[[pipelines]]
|
|
||||||
name = "multi-output"
|
|
||||||
|
|
||||||
[[pipelines.sources]]
|
|
||||||
type = "directory"
|
|
||||||
options = { path = "/var/log/app", pattern = "*.log" }
|
|
||||||
|
|
||||||
# HTTP streaming
|
|
||||||
[[pipelines.sinks]]
|
|
||||||
type = "http"
|
|
||||||
options = { port = 8080 }
|
|
||||||
|
|
||||||
# File archival
|
|
||||||
[[pipelines.sinks]]
|
|
||||||
type = "file"
|
|
||||||
options = { directory = "/var/log/archive", name = "app" }
|
|
||||||
```
|
|
||||||
|
|
||||||
### TCP Streaming

For high-performance streaming:

```toml
[[pipelines]]
name = "highperf"

[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }

[[pipelines.sinks]]
type = "tcp"
options = { port = 9090, buffer_size = 5000 }
```

Connect with netcat:

```bash
nc localhost 9090
```

### Router Mode

Run multiple pipelines on shared ports:

```bash
logwisp --router

# Access pipelines at:
# http://localhost:8080/myapp/stream
# http://localhost:8080/errors/stream
# http://localhost:8080/status (global)
```

### Remote Log Collection

Receive logs via HTTP/TCP and forward to remote servers:

```toml
[[pipelines]]
name = "collector"

# Receive logs via HTTP POST
[[pipelines.sources]]
type = "http"
options = { port = 8081, ingest_path = "/ingest" }

# Forward to remote server
[[pipelines.sinks]]
type = "http_client"
options = {
    url = "https://log-server.com/ingest",
    batch_size = 100,
    headers = { "Authorization" = "Bearer <API_KEY_HERE>" }
}
```

Send logs to collector:

```bash
curl -X POST http://localhost:8081/ingest \
  -H "Content-Type: application/json" \
  -d '{"message": "Test log", "level": "INFO"}'
```

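The same collector pattern works over raw TCP by pairing the TCP source with the TCP client sink (both described in the sources and sinks references). A minimal sketch, with illustrative port numbers:

```toml
[[pipelines]]
name = "tcp-collector"

# Receive newline-delimited JSON over TCP
[[pipelines.sources]]
type = "tcp"
options = { port = 9091 }

# Forward to a remote TCP server
[[pipelines.sinks]]
type = "tcp_client"
options = { host = "logs.example.com", port = 9090 }
```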
## Quick Tips

### Enable Debug Logging
```bash
logwisp --logging.level debug --logging.output stderr
```

### Quiet Mode
```bash
logwisp --quiet
```

### Rate Limiting
```toml
[[pipelines.sinks]]
type = "http"
options = {
    port = 8080,
    rate_limit = {
        enabled = true,
        requests_per_second = 10.0,
        burst_size = 20
    }
}
```

### Console Output
```toml
[[pipelines.sinks]]
type = "stdout" # or "stderr"
options = {}
```

### Split Console Output
```toml
# INFO/DEBUG to stdout, ERROR/WARN to stderr
[[pipelines.sinks]]
type = "stdout"
options = { target = "split" }
```
@@ -1,125 +0,0 @@
# Rate Limiting Guide

LogWisp provides configurable rate limiting to protect against abuse and ensure fair access.

## How It Works

Token bucket algorithm:
1. Each client gets a bucket with fixed capacity
2. Tokens refill at configured rate
3. Each request consumes one token
4. No tokens = request rejected

## Configuration

```toml
[[pipelines.sinks]]
type = "http" # or "tcp"
options = {
    port = 8080,
    rate_limit = {
        enabled = true,
        requests_per_second = 10.0,
        burst_size = 20,
        limit_by = "ip", # or "global"
        max_connections_per_ip = 5,
        max_total_connections = 100,
        response_code = 429,
        response_message = "Rate limit exceeded"
    }
}
```

## Strategies

### Per-IP Limiting (Default)
Each IP gets its own bucket:
```toml
limit_by = "ip"
requests_per_second = 10.0
# Client A: 10 req/sec
# Client B: 10 req/sec
```

### Global Limiting
All clients share one bucket:
```toml
limit_by = "global"
requests_per_second = 50.0
# All clients combined: 50 req/sec
```

## Connection Limits

```toml
max_connections_per_ip = 5   # Per IP
max_total_connections = 100  # Total
```

## Response Behavior

### HTTP
Returns JSON with the configured status code:
```json
{
  "error": "Rate limit exceeded",
  "retry_after": "60"
}
```

### TCP
Connections are silently dropped.

## Examples

### Light Protection
```toml
rate_limit = {
    enabled = true,
    requests_per_second = 50.0,
    burst_size = 100
}
```

### Moderate Protection
```toml
rate_limit = {
    enabled = true,
    requests_per_second = 10.0,
    burst_size = 30,
    max_connections_per_ip = 5
}
```

### Strict Protection
```toml
rate_limit = {
    enabled = true,
    requests_per_second = 2.0,
    burst_size = 5,
    max_connections_per_ip = 2,
    response_code = 503
}
```

## Monitoring

Check statistics:
```bash
curl http://localhost:8080/status | jq '.sinks[0].details.rate_limit'
```

## Testing

```bash
# Test rate limits
for i in {1..20}; do
    curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/status
done
```

## Tuning

- **requests_per_second**: Expected load
- **burst_size**: 2-3× requests_per_second
- **Connection limits**: Based on memory
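Applying these rules of thumb to a service that legitimately peaks around 20 requests per second gives, for example:

```toml
rate_limit = {
    enabled = true,
    requests_per_second = 20.0,
    burst_size = 50   # 2.5x requests_per_second
}
```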
doc/router.md
@@ -1,158 +0,0 @@
# Router Mode Guide

Router mode enables multiple pipelines to share HTTP ports through path-based routing.

## Overview

**Standard mode**: Each pipeline needs its own port
- Pipeline 1: `http://localhost:8080/stream`
- Pipeline 2: `http://localhost:8081/stream`

**Router mode**: Pipelines share ports via paths
- Pipeline 1: `http://localhost:8080/app/stream`
- Pipeline 2: `http://localhost:8080/database/stream`
- Global status: `http://localhost:8080/status`

## Enabling Router Mode

```bash
logwisp --router --config /etc/logwisp/multi-pipeline.toml
```

## Configuration

```toml
# All pipelines can use the same port
[[pipelines]]
name = "app"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 } # Same port OK

[[pipelines]]
name = "database"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/postgresql", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 } # Shared port
```

## Path Structure

Paths are prefixed with the pipeline name:

| Pipeline | Config Path | Router Path |
|----------|-------------|-------------|
| `app` | `/stream` | `/app/stream` |
| `app` | `/status` | `/app/status` |
| `database` | `/stream` | `/database/stream` |

### Custom Paths

```toml
[[pipelines.sinks]]
type = "http"
options = {
    stream_path = "/logs",  # Becomes /app/logs
    status_path = "/health" # Becomes /app/health
}
```

## Endpoints

### Pipeline Endpoints
```bash
# SSE stream
curl -N http://localhost:8080/app/stream

# Pipeline status
curl http://localhost:8080/database/status
```

### Global Status
```bash
curl http://localhost:8080/status
```

Returns:
```json
{
  "service": "LogWisp Router",
  "pipelines": {
    "app": { /* stats */ },
    "database": { /* stats */ }
  },
  "total_pipelines": 2
}
```

## Use Cases

### Microservices
```toml
[[pipelines]]
name = "frontend"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/frontend", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }

[[pipelines]]
name = "backend"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/backend", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }

# Access:
# http://localhost:8080/frontend/stream
# http://localhost:8080/backend/stream
```

### Environment-Based
```toml
[[pipelines]]
name = "prod"
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "WARN"]
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }

[[pipelines]]
name = "dev"
# No filters - all logs
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
```

## Limitations

1. **HTTP Only**: Router mode only works for HTTP/SSE
2. **No TCP Routing**: TCP remains on separate ports
3. **Path Conflicts**: Pipeline names must be unique

## Load Balancer Integration

```nginx
upstream logwisp {
    server logwisp1:8080;
    server logwisp2:8080;
}

location /logs/ {
    proxy_pass http://logwisp/;
    proxy_buffering off;
}
```

doc/security.md (new file)
@@ -0,0 +1,58 @@
# Security

## mTLS (Mutual TLS)

Certificate-based authentication for HTTPS.

### Server Configuration

```toml
[pipelines.sources.http.tls]
enabled = true
cert_file = "/path/to/server.pem"
key_file = "/path/to/server.key"
client_auth = true
client_ca_file = "/path/to/ca.pem"
verify_client_cert = true
```

### Client Configuration

```toml
[pipelines.sinks.http_client.tls]
enabled = true
cert_file = "/path/to/client.pem"
key_file = "/path/to/client.key"
```

### Certificate Generation

Use the `tls` command:
```bash
# Generate CA
logwisp tls -ca -o ca

# Generate server certificate
logwisp tls -server -ca-cert ca.pem -ca-key ca.key -host localhost -o server

# Generate client certificate
logwisp tls -client -ca-cert ca.pem -ca-key ca.key -o client
```

## Access Control

LogWisp provides IP-based access control for network connections.

Configure IP-based access control for sources:
```toml
[pipelines.sources.http.net_limit]
enabled = true
ip_whitelist = ["192.168.1.0/24", "10.0.0.0/8"]
ip_blacklist = ["192.168.1.100"]
```

Priority order:
1. Blacklist (checked first, immediate deny)
2. Whitelist (if configured, must match)
doc/sinks.md (new file)
@@ -0,0 +1,273 @@
# Output Sinks

LogWisp sinks deliver processed log entries to various destinations.

## Sink Types

### Console Sink

Output to stdout/stderr.

```toml
[[pipelines.sinks]]
type = "console"

[pipelines.sinks.console]
target = "stdout" # stdout|stderr|split
colorize = false
buffer_size = 100
```

**Configuration Options:**

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `target` | string | "stdout" | Output target (stdout/stderr/split) |
| `colorize` | bool | false | Enable colored output |
| `buffer_size` | int | 100 | Internal buffer size |

**Target Modes:**
- **stdout**: All output to standard output
- **stderr**: All output to standard error
- **split**: INFO/DEBUG to stdout, WARN/ERROR to stderr

### File Sink

Write logs to rotating files.

```toml
[[pipelines.sinks]]
type = "file"

[pipelines.sinks.file]
directory = "./logs"
name = "output"
max_size_mb = 100
max_total_size_mb = 1000
min_disk_free_mb = 500
retention_hours = 168.0
buffer_size = 1000
flush_interval_ms = 1000
```

**Configuration Options:**

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `directory` | string | Required | Output directory |
| `name` | string | Required | Base filename |
| `max_size_mb` | int | 100 | Rotation threshold |
| `max_total_size_mb` | int | 1000 | Total size limit |
| `min_disk_free_mb` | int | 500 | Minimum free disk space |
| `retention_hours` | float | 168 | Delete files older than this |
| `buffer_size` | int | 1000 | Internal buffer size |
| `flush_interval_ms` | int | 1000 | Force flush interval |

**Features:**
- Automatic rotation on size
- Retention management
- Disk space monitoring
- Periodic flushing

### HTTP Sink

SSE (Server-Sent Events) streaming server.

```toml
[[pipelines.sinks]]
type = "http"

[pipelines.sinks.http]
host = "0.0.0.0"
port = 8080
stream_path = "/stream"
status_path = "/status"
buffer_size = 1000
max_connections = 100
read_timeout_ms = 10000
write_timeout_ms = 10000
```

**Configuration Options:**

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | "0.0.0.0" | Bind address |
| `port` | int | Required | Listen port |
| `stream_path` | string | "/stream" | SSE stream endpoint |
| `status_path` | string | "/status" | Status endpoint |
| `buffer_size` | int | 1000 | Internal buffer size |
| `max_connections` | int | 100 | Maximum concurrent clients |
| `read_timeout_ms` | int | 10000 | Read timeout |
| `write_timeout_ms` | int | 10000 | Write timeout |

**Heartbeat Configuration:**

```toml
[pipelines.sinks.http.heartbeat]
enabled = true
interval_ms = 30000
include_timestamp = true
include_stats = false
format = "comment" # comment|event|json
```

### TCP Sink

TCP streaming server for debugging.

```toml
[[pipelines.sinks]]
type = "tcp"

[pipelines.sinks.tcp]
host = "0.0.0.0"
port = 9090
buffer_size = 1000
max_connections = 100
keep_alive = true
keep_alive_period_ms = 30000
```

**Configuration Options:**

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | "0.0.0.0" | Bind address |
| `port` | int | Required | Listen port |
| `buffer_size` | int | 1000 | Internal buffer size |
| `max_connections` | int | 100 | Maximum concurrent clients |
| `keep_alive` | bool | true | Enable TCP keep-alive |
| `keep_alive_period_ms` | int | 30000 | Keep-alive interval |

**Note:** The TCP sink has no authentication support (debugging only).

### HTTP Client Sink

Forward logs to remote HTTP endpoints.

```toml
[[pipelines.sinks]]
type = "http_client"

[pipelines.sinks.http_client]
url = "https://logs.example.com/ingest"
buffer_size = 1000
batch_size = 100
batch_delay_ms = 1000
timeout_seconds = 30
max_retries = 3
retry_delay_ms = 1000
retry_backoff = 2.0
insecure_skip_verify = false
```

**Configuration Options:**

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `url` | string | Required | Target URL |
| `buffer_size` | int | 1000 | Internal buffer size |
| `batch_size` | int | 100 | Logs per request |
| `batch_delay_ms` | int | 1000 | Max wait before sending |
| `timeout_seconds` | int | 30 | Request timeout |
| `max_retries` | int | 3 | Retry attempts |
| `retry_delay_ms` | int | 1000 | Initial retry delay |
| `retry_backoff` | float | 2.0 | Exponential backoff multiplier |
| `insecure_skip_verify` | bool | false | Skip TLS verification |

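With the defaults above (`retry_delay_ms = 1000`, `retry_backoff = 2.0`, `max_retries = 3`), the retry schedule is 1s, 2s, 4s. A quick Go sketch of the arithmetic, assuming the delay is multiplied by the backoff after each attempt:

```go
package main

import "fmt"

func main() {
	// Retry schedule implied by the documented defaults.
	delay, total := 1000, 0
	for attempt := 1; attempt <= 3; attempt++ {
		fmt.Printf("retry %d waits %dms\n", attempt, delay)
		total += delay
		delay *= 2 // retry_backoff = 2.0
	}
	fmt.Printf("worst-case added latency: %dms\n", total) // 1000+2000+4000
}
```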
### TCP Client Sink

Forward logs to remote TCP servers.

```toml
[[pipelines.sinks]]
type = "tcp_client"

[pipelines.sinks.tcp_client]
host = "logs.example.com"
port = 9090
buffer_size = 1000
dial_timeout = 10
write_timeout = 30
read_timeout = 10
keep_alive = 30
reconnect_delay_ms = 1000
max_reconnect_delay_ms = 30000
reconnect_backoff = 1.5
```

**Configuration Options:**

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | Required | Target host |
| `port` | int | Required | Target port |
| `buffer_size` | int | 1000 | Internal buffer size |
| `dial_timeout` | int | 10 | Connection timeout (seconds) |
| `write_timeout` | int | 30 | Write timeout (seconds) |
| `read_timeout` | int | 10 | Read timeout (seconds) |
| `keep_alive` | int | 30 | TCP keep-alive (seconds) |
| `reconnect_delay_ms` | int | 1000 | Initial reconnect delay |
| `max_reconnect_delay_ms` | int | 30000 | Maximum reconnect delay |
| `reconnect_backoff` | float | 1.5 | Backoff multiplier |

## Network Sink Features

### Network Rate Limiting

Available for HTTP and TCP sinks:

```toml
[pipelines.sinks.http.net_limit]
enabled = true
max_connections_per_ip = 10
max_connections_total = 100
ip_whitelist = ["192.168.1.0/24"]
ip_blacklist = ["10.0.0.0/8"]
```

### TLS Configuration (HTTP Only)

```toml
[pipelines.sinks.http.tls]
enabled = true
cert_file = "/path/to/cert.pem"
key_file = "/path/to/key.pem"
ca_file = "/path/to/ca.pem"
min_version = "TLS1.2"
client_auth = false
```

HTTP Client TLS:

```toml
[pipelines.sinks.http_client.tls]
enabled = true
server_ca_file = "/path/to/ca.pem" # For server verification
server_name = "logs.example.com"
insecure_skip_verify = false
client_cert_file = "/path/to/client.pem" # For mTLS
client_key_file = "/path/to/client.key"  # For mTLS
```

## Sink Chaining

Designed connection patterns:

### Log Aggregation
- **HTTP Client Sink → HTTP Source**: HTTP/HTTPS (optional mTLS for HTTPS)
- **TCP Client Sink → TCP Source**: Raw TCP

### Live Monitoring
- **HTTP Sink**: Browser-based SSE streaming
- **TCP Sink**: Debug interface (telnet/netcat)

## Sink Statistics

All sinks track:
- Total entries processed
- Active connections
- Failed sends
- Retry attempts
- Last processed timestamp
doc/sources.md (new file)
@@ -0,0 +1,177 @@
# Input Sources

LogWisp sources monitor various inputs and generate log entries for pipeline processing.

## Source Types

### Directory Source

Monitors a directory for log files matching a pattern.

```toml
[[pipelines.sources]]
type = "directory"

[pipelines.sources.directory]
path = "/var/log/myapp"
pattern = "*.log"       # Glob pattern
check_interval_ms = 100 # Poll interval
recursive = false       # Scan subdirectories
```

**Configuration Options:**

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `path` | string | Required | Directory to monitor |
| `pattern` | string | "*" | File pattern (glob) |
| `check_interval_ms` | int | 100 | File check interval in milliseconds |
| `recursive` | bool | false | Include subdirectories |

**Features:**
- Automatic file rotation detection
- Position tracking (resume after restart)
- Concurrent file monitoring
- Pattern-based file selection

### Stdin Source

Reads log entries from standard input.

```toml
[[pipelines.sources]]
type = "console"

[pipelines.sources.stdin]
buffer_size = 1000
```

**Configuration Options:**

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `buffer_size` | int | 1000 | Internal buffer size |

**Features:**
- Line-based processing
- Automatic level detection
- Non-blocking reads

### HTTP Source

REST endpoint for log ingestion.

```toml
[[pipelines.sources]]
type = "http"

[pipelines.sources.http]
host = "0.0.0.0"
port = 8081
ingest_path = "/ingest"
buffer_size = 1000
max_body_size = 1048576 # 1MB
read_timeout_ms = 10000
write_timeout_ms = 10000
```

**Configuration Options:**

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | "0.0.0.0" | Bind address |
| `port` | int | Required | Listen port |
| `ingest_path` | string | "/ingest" | Ingestion endpoint path |
| `buffer_size` | int | 1000 | Internal buffer size |
| `max_body_size` | int | 1048576 | Maximum request body size |
| `read_timeout_ms` | int | 10000 | Read timeout |
| `write_timeout_ms` | int | 10000 | Write timeout |

**Input Formats:**
- Single JSON object
- JSON array
- Newline-delimited JSON (NDJSON)
- Plain text (one entry per line)

### TCP Source

Raw TCP socket listener for log ingestion.

```toml
[[pipelines.sources]]
type = "tcp"

[pipelines.sources.tcp]
host = "0.0.0.0"
port = 9091
buffer_size = 1000
read_timeout_ms = 10000
keep_alive = true
keep_alive_period_ms = 30000
```

**Configuration Options:**

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | "0.0.0.0" | Bind address |
| `port` | int | Required | Listen port |
| `buffer_size` | int | 1000 | Internal buffer size |
| `read_timeout_ms` | int | 10000 | Read timeout |
| `keep_alive` | bool | true | Enable TCP keep-alive |
| `keep_alive_period_ms` | int | 30000 | Keep-alive interval |

**Protocol:**
- Newline-delimited JSON
- One log entry per line
- UTF-8 encoding

## Network Source Features

### Network Rate Limiting

Available for HTTP and TCP sources:

```toml
[pipelines.sources.http.net_limit]
enabled = true
max_connections_per_ip = 10
max_connections_total = 100
requests_per_second = 100.0
burst_size = 200
response_code = 429
response_message = "Rate limit exceeded"
ip_whitelist = ["192.168.1.0/24"]
ip_blacklist = ["10.0.0.0/8"]
```

### TLS Configuration (HTTP Only)

```toml
[pipelines.sources.http.tls]
enabled = true
cert_file = "/path/to/cert.pem"
key_file = "/path/to/key.pem"
min_version = "TLS1.2"
client_auth = true
client_ca_file = "/path/to/client-ca.pem"
verify_client_cert = true
```

## Source Statistics

All sources track:
- Total entries received
- Dropped entries (buffer full)
- Invalid entries
- Last entry timestamp
- Active connections (network sources)
- Source-specific metrics

## Buffer Management

Each source maintains internal buffers:
- Default size: 1000 entries
- Drop policy when full
- Configurable per source
- Non-blocking writes
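The drop-when-full, non-blocking write policy described above can be sketched with a Go channel send that never blocks (an illustrative model only, not LogWisp's implementation):

```go
package main

import "fmt"

// tryEnqueue attempts a non-blocking write into the buffer; when the
// buffer is full the entry is dropped and the function reports it.
func tryEnqueue(buf chan string, entry string) bool {
	select {
	case buf <- entry:
		return true
	default:
		return false // buffer full: drop instead of blocking the source
	}
}

func main() {
	buf := make(chan string, 2) // tiny buffer for illustration
	fmt.Println(tryEnqueue(buf, "a")) // accepted
	fmt.Println(tryEnqueue(buf, "b")) // accepted
	fmt.Println(tryEnqueue(buf, "c")) // dropped: buffer full
}
```

Dropped writes are what the `dropped_entries` statistic counts.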
doc/status.md
@@ -1,148 +0,0 @@
# Status Monitoring

LogWisp provides comprehensive monitoring through status endpoints and operational logs.

## Status Endpoints

### Pipeline Status

```bash
# Standalone mode
curl http://localhost:8080/status

# Router mode
curl http://localhost:8080/pipelinename/status
```

Example response:
```json
{
  "service": "LogWisp",
  "version": "1.0.0",
  "server": {
    "type": "http",
    "port": 8080,
    "active_clients": 5,
    "buffer_size": 1000,
    "uptime_seconds": 3600,
    "mode": {"standalone": true, "router": false}
  },
  "sources": [{
    "type": "directory",
    "total_entries": 152341,
    "dropped_entries": 12,
    "active_watchers": 3
  }],
  "filters": {
    "filter_count": 2,
    "total_processed": 152341,
    "total_passed": 48234
  },
  "sinks": [{
    "type": "http",
    "total_processed": 48234,
    "active_connections": 5,
    "details": {
      "port": 8080,
      "buffer_size": 1000,
      "rate_limit": {
        "enabled": true,
        "total_requests": 98234,
        "blocked_requests": 234
      }
    }
  }],
  "endpoints": {
    "transport": "/stream",
    "status": "/status"
  },
  "features": {
    "heartbeat": {
      "enabled": true,
      "interval": 30,
      "format": "comment"
    },
    "ssl": {
      "enabled": false
    },
    "rate_limit": {
      "enabled": true,
      "requests_per_second": 10.0,
      "burst_size": 20
    }
  }
}
```

## Key Metrics

### Source Metrics
| Metric | Description | Healthy Range |
|--------|-------------|---------------|
| `active_watchers` | Files being watched | 1-1000 |
| `total_entries` | Entries processed | Increasing |
| `dropped_entries` | Buffer overflows | < 1% of total |
| `active_connections` | Network connections (HTTP/TCP sources) | Within limits |

### Sink Metrics
| Metric | Description | Warning Signs |
|--------|-------------|---------------|
| `active_connections` | Current clients | Near limit |
| `total_processed` | Entries sent | Should match filter output |
| `total_batches` | Batches sent (client sinks) | Increasing |
| `failed_batches` | Failed sends (client sinks) | > 0 indicates issues |

### Filter Metrics
| Metric | Description | Notes |
|--------|-------------|-------|
| `total_processed` | Entries checked | All entries |
| `total_passed` | Passed filters | Check if too low/high |
| `total_matched` | Pattern matches | Per-filter stats |

### Rate Limit Metrics
| Metric | Description | Action |
|--------|-------------|--------|
| `blocked_requests` | Rejected requests | Increase limits if high |
| `active_ips` | Unique IPs tracked | Monitor for attacks |
| `total_connections` | Current connections | Check against limits |

## Operational Logging

### Log Levels
```toml
[logging]
level = "info" # debug, info, warn, error
```

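The CLI help also documents an `output` mode and `[logging.file]` / `[logging.console]` sub-tables. A hedged sketch of a file-logging configuration — only `output`, `level`, and `directory` appear elsewhere in this repository's help text and bootstrap code; the directory value is an illustrative placeholder:

```toml
[logging]
output = "file"   # help text lists: none, stdout, stderr, file, both
level = "debug"

[logging.file]
directory = "/var/log/logwisp"  # assumed path; help text mentions directory, name, rotation settings
```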
## Health Checks

### Basic Check
```bash
#!/usr/bin/env bash
if curl -s -f http://localhost:8080/status > /dev/null; then
    echo "Healthy"
else
    echo "Unhealthy"
    exit 1
fi
```

|
|
||||||
### Advanced Check
|
|
||||||
```bash
|
|
||||||
#!/usr/bin/env bash
|
|
||||||
STATUS=$(curl -s http://localhost:8080/status)
|
|
||||||
DROPPED=$(echo "$STATUS" | jq '.sources[0].dropped_entries')
|
|
||||||
TOTAL=$(echo "$STATUS" | jq '.sources[0].total_entries')
|
|
||||||
|
|
||||||
if [ $((DROPPED * 100 / TOTAL)) -gt 5 ]; then
|
|
||||||
echo "High drop rate"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Check client sink failures
|
|
||||||
FAILED=$(echo "$STATUS" | jq '.sinks[] | select(.type=="http_client") | .details.failed_batches // 0' | head -1)
|
|
||||||
if [ "$FAILED" -gt 10 ]; then
|
|
||||||
echo "High failure rate"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
```
|
|
||||||
30 go.mod
@@ -1,32 +1,26 @@
 module logwisp
 
-go 1.25.1
+go 1.25.4
 
 require (
-	github.com/golang-jwt/jwt/v5 v5.3.0
-	github.com/lixenwraith/config v0.0.0-20250908085506-537a4d49d2c3
-	github.com/lixenwraith/log v0.0.0-20250908085352-2df52dfb9208
-	github.com/panjf2000/gnet/v2 v2.9.3
-	github.com/valyala/fasthttp v1.65.0
-	golang.org/x/crypto v0.42.0
-	golang.org/x/term v0.35.0
-	golang.org/x/time v0.13.0
+	github.com/lixenwraith/config v0.1.1-0.20251114180219-f7875023a51b
+	github.com/lixenwraith/log v0.1.1-0.20251115213227-55d2c92d483f
+	github.com/panjf2000/gnet/v2 v2.9.7
+	github.com/valyala/fasthttp v1.68.0
 )
 
 require (
-	github.com/BurntSushi/toml v1.5.0 // indirect
+	github.com/BurntSushi/toml v1.6.0 // indirect
 	github.com/andybalholm/brotli v1.2.0 // indirect
 	github.com/davecgh/go-spew v1.1.1 // indirect
-	github.com/klauspost/compress v1.18.0 // indirect
-	github.com/mitchellh/mapstructure v1.5.0 // indirect
-	github.com/panjf2000/ants/v2 v2.11.3 // indirect
+	github.com/go-viper/mapstructure/v2 v2.4.0 // indirect
+	github.com/klauspost/compress v1.18.2 // indirect
+	github.com/panjf2000/ants/v2 v2.11.4 // indirect
 	github.com/valyala/bytebufferpool v1.0.0 // indirect
 	go.uber.org/multierr v1.11.0 // indirect
-	go.uber.org/zap v1.27.0 // indirect
-	golang.org/x/sync v0.17.0 // indirect
-	golang.org/x/sys v0.36.0 // indirect
+	go.uber.org/zap v1.27.1 // indirect
+	golang.org/x/sync v0.19.0 // indirect
+	golang.org/x/sys v0.39.0 // indirect
 	gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
 	gopkg.in/yaml.v3 v3.0.1 // indirect
 )
-
-replace github.com/mitchellh/mapstructure => github.com/go-viper/mapstructure v1.6.0
54 go.sum
@@ -1,31 +1,33 @@
-github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg=
-github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
+github.com/BurntSushi/toml v1.6.0 h1:dRaEfpa2VI55EwlIW72hMRHdWouJeRF7TPYhI+AUQjk=
+github.com/BurntSushi/toml v1.6.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
 github.com/andybalholm/brotli v1.2.0 h1:ukwgCxwYrmACq68yiUqwIWnGY0cTPox/M94sVwToPjQ=
 github.com/andybalholm/brotli v1.2.0/go.mod h1:rzTDkvFWvIrjDXZHkuS16NPggd91W3kUSvPlQ1pLaKY=
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
-github.com/go-viper/mapstructure v1.6.0 h1:0WdPOF2rmmQDN1xo8qIgxyugvLp71HrZSWyGLxofobw=
-github.com/go-viper/mapstructure v1.6.0/go.mod h1:FcbLReH7/cjaC0RVQR+LHFIrBhHF3s1e/ud1KMDoBVw=
-github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo=
-github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
-github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
-github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
-github.com/lixenwraith/config v0.0.0-20250908085506-537a4d49d2c3 h1:+RwUb7dUz9mGdUSW+E0WuqJgTVg1yFnPb94Wyf5ma/0=
-github.com/lixenwraith/config v0.0.0-20250908085506-537a4d49d2c3/go.mod h1:I7ddNPT8MouXXz/ae4DQfBKMq5EisxdDLRX0C7Dv4O0=
-github.com/lixenwraith/log v0.0.0-20250908085352-2df52dfb9208 h1:IB1O/HLv9VR/4mL1Tkjlr91lk+r8anP6bab7rYdS/oE=
-github.com/lixenwraith/log v0.0.0-20250908085352-2df52dfb9208/go.mod h1:E7REMCVTr6DerzDtd2tpEEaZ9R9nduyAIKQFOqHqKr0=
+github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs=
+github.com/go-viper/mapstructure/v2 v2.4.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
+github.com/klauspost/compress v1.18.1 h1:bcSGx7UbpBqMChDtsF28Lw6v/G94LPrrbMbdC3JH2co=
+github.com/klauspost/compress v1.18.1/go.mod h1:ZQFFVG+MdnR0P+l6wpXgIL4NTtwiKIdBnrBd8Nrxr+0=
+github.com/klauspost/compress v1.18.2 h1:iiPHWW0YrcFgpBYhsA6D1+fqHssJscY/Tm/y2Uqnapk=
+github.com/klauspost/compress v1.18.2/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=
+github.com/lixenwraith/config v0.1.1-0.20251114180219-f7875023a51b h1:TzTV0ArJ+nzVGPN8aiEJ2MknUqJdmHRP/0/RSfov2Qw=
+github.com/lixenwraith/config v0.1.1-0.20251114180219-f7875023a51b/go.mod h1:roNPTSCT5HSV9dru/zi/Catwc3FZVCFf7vob2pSlNW0=
+github.com/lixenwraith/log v0.1.1-0.20251115213227-55d2c92d483f h1:X2LX5FQEuWYGBS3qp5z7XxBB1sWAlqumf/oW7n/f9c0=
+github.com/lixenwraith/log v0.1.1-0.20251115213227-55d2c92d483f/go.mod h1:XcRPRuijAs+43Djk8VmioUJhcK8irRzUjCZaZqkd3gg=
 github.com/panjf2000/ants/v2 v2.11.3 h1:AfI0ngBoXJmYOpDh9m516vjqoUu2sLrIVgppI9TZVpg=
 github.com/panjf2000/ants/v2 v2.11.3/go.mod h1:8u92CYMUc6gyvTIw8Ru7Mt7+/ESnJahz5EVtqfrilek=
-github.com/panjf2000/gnet/v2 v2.9.3 h1:auV3/A9Na3jiBDmYAAU00rPhFKnsAI+TnI1F7YUJMHQ=
-github.com/panjf2000/gnet/v2 v2.9.3/go.mod h1:WQTxDWYuQ/hz3eccH0FN32IVuvZ19HewEWx0l62fx7E=
+github.com/panjf2000/ants/v2 v2.11.4 h1:UJQbtN1jIcI5CYNocTj0fuAUYvsLjPoYi0YuhqV/Y48=
+github.com/panjf2000/ants/v2 v2.11.4/go.mod h1:8u92CYMUc6gyvTIw8Ru7Mt7+/ESnJahz5EVtqfrilek=
+github.com/panjf2000/gnet/v2 v2.9.7 h1:6zW7Jl3oAfXwSuh1PxHLndoL2MQRWx0AJR6aaQjxUgA=
+github.com/panjf2000/gnet/v2 v2.9.7/go.mod h1:WQTxDWYuQ/hz3eccH0FN32IVuvZ19HewEWx0l62fx7E=
 github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
 github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
 github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
 github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
-github.com/valyala/fasthttp v1.65.0 h1:j/u3uzFEGFfRxw79iYzJN+TteTJwbYkru9uDp3d0Yf8=
-github.com/valyala/fasthttp v1.65.0/go.mod h1:P/93/YkKPMsKSnATEeELUCkG8a7Y+k99uxNHVbKINr4=
+github.com/valyala/fasthttp v1.68.0 h1:v12Nx16iepr8r9ySOwqI+5RBJ/DqTxhOy1HrHoDFnok=
+github.com/valyala/fasthttp v1.68.0/go.mod h1:5EXiRfYQAoiO/khu4oU9VISC/eVY6JqmSpPJoHCKsz4=
 github.com/xyproto/randomstring v1.0.5 h1:YtlWPoRdgMu3NZtP45drfy1GKoojuR7hmRcnhZqKjWU=
 github.com/xyproto/randomstring v1.0.5/go.mod h1:rgmS5DeNXLivK7YprL0pY+lTuhNQW3iGxZ18UQApw/E=
 go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
@@ -34,16 +36,16 @@ go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
 go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
 go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
 go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
-golang.org/x/crypto v0.42.0 h1:chiH31gIWm57EkTXpwnqf8qeuMUi0yekh6mT2AvFlqI=
-golang.org/x/crypto v0.42.0/go.mod h1:4+rDnOTJhQCx2q7/j6rAN5XDw8kPjeaXEUR2eL94ix8=
-golang.org/x/sync v0.17.0 h1:l60nONMj9l5drqw6jlhIELNv9I0A4OFgRsG9k2oT9Ug=
-golang.org/x/sync v0.17.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
-golang.org/x/sys v0.36.0 h1:KVRy2GtZBrk1cBYA7MKu5bEZFxQk4NIDV6RLVcC8o0k=
-golang.org/x/sys v0.36.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
-golang.org/x/term v0.35.0 h1:bZBVKBudEyhRcajGcNc3jIfWPqV4y/Kt2XcoigOWtDQ=
-golang.org/x/term v0.35.0/go.mod h1:TPGtkTLesOwf2DE8CgVYiZinHAOuy5AYUYT1lENIZnA=
-golang.org/x/time v0.13.0 h1:eUlYslOIt32DgYD6utsuUeHs4d7AsEYLuIAdg7FlYgI=
-golang.org/x/time v0.13.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
+go.uber.org/zap v1.27.1 h1:08RqriUEv8+ArZRYSTXy1LeBScaMpVSTBhCeaZYfMYc=
+go.uber.org/zap v1.27.1/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
+golang.org/x/sync v0.11.0 h1:GGz8+XQP4FvTTrjZPzNKTMFtSXH80RAzG+5ghFPgK9w=
+golang.org/x/sync v0.11.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
+golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
+golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
+golang.org/x/sys v0.37.0 h1:fdNQudmxPjkdUTPnLn5mdQv7Zwvbvpaxqs831goi9kQ=
+golang.org/x/sys v0.37.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
+golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk=
+golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/natefinch/lumberjack.v2 v2.2.1 h1:bBRl1b0OH9s/DuPhuXpNl+VtCaJXFZ5/uEFST95x9zc=
logwisp/src/cmd/auth-gen/main.go (file removed)
@@ -1,110 +0,0 @@
// FILE: logwisp/src/cmd/auth-gen/main.go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"flag"
	"fmt"
	"os"
	"syscall"

	"golang.org/x/crypto/bcrypt"
	"golang.org/x/term"
)

func main() {
	var (
		username = flag.String("u", "", "Username for basic auth")
		password = flag.String("p", "", "Password to hash (will prompt if not provided)")
		cost     = flag.Int("c", 10, "Bcrypt cost (10-31)")
		genToken = flag.Bool("t", false, "Generate random bearer token")
		tokenLen = flag.Int("l", 32, "Token length in bytes")
	)

	flag.Usage = func() {
		fmt.Fprintf(os.Stderr, "LogWisp Authentication Utility\n\n")
		fmt.Fprintf(os.Stderr, "Usage:\n")
		fmt.Fprintf(os.Stderr, "  Generate bcrypt hash:  %s -u <username> [-p <password>]\n", os.Args[0])
		fmt.Fprintf(os.Stderr, "  Generate bearer token: %s -t [-l <length>]\n", os.Args[0])
		fmt.Fprintf(os.Stderr, "\nOptions:\n")
		flag.PrintDefaults()
	}

	flag.Parse()

	if *genToken {
		generateToken(*tokenLen)
		return
	}

	if *username == "" {
		fmt.Fprintf(os.Stderr, "Error: Username required for basic auth\n")
		flag.Usage()
		os.Exit(1)
	}

	// Get password
	pass := *password
	if pass == "" {
		pass = promptPassword("Enter password: ")
		confirm := promptPassword("Confirm password: ")
		if pass != confirm {
			fmt.Fprintf(os.Stderr, "Error: Passwords don't match\n")
			os.Exit(1)
		}
	}

	// Generate bcrypt hash
	hash, err := bcrypt.GenerateFromPassword([]byte(pass), *cost)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Error generating hash: %v\n", err)
		os.Exit(1)
	}

	// Output TOML config format
	fmt.Println("\n# Add to logwisp.toml under [[pipelines.auth.basic_auth.users]]:")
	fmt.Printf("[[pipelines.auth.basic_auth.users]]\n")
	fmt.Printf("username = \"%s\"\n", *username)
	fmt.Printf("password_hash = \"%s\"\n", string(hash))

	// Also output for users file format
	fmt.Println("\n# Or add to users file:")
	fmt.Printf("%s:%s\n", *username, string(hash))
}

func promptPassword(prompt string) string {
	fmt.Fprint(os.Stderr, prompt)
	password, err := term.ReadPassword(int(syscall.Stdin))
	fmt.Fprintln(os.Stderr)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Error reading password: %v\n", err)
		os.Exit(1)
	}
	return string(password)
}

func generateToken(length int) {
	if length < 16 {
		fmt.Fprintf(os.Stderr, "Warning: Token length < 16 bytes is insecure\n")
	}

	token := make([]byte, length)
	if _, err := rand.Read(token); err != nil {
		fmt.Fprintf(os.Stderr, "Error generating token: %v\n", err)
		os.Exit(1)
	}

	// Output in various formats
	b64 := base64.URLEncoding.WithPadding(base64.NoPadding).EncodeToString(token)
	hex := fmt.Sprintf("%x", token)

	fmt.Println("\n# Add to logwisp.toml under [pipelines.auth.bearer_auth]:")
	fmt.Printf("tokens = [\"%s\"]\n", b64)

	fmt.Println("\n# Alternative hex encoding:")
	fmt.Printf("# tokens = [\"%s\"]\n", hex)

	fmt.Printf("\n# Token (base64): %s\n", b64)
	fmt.Printf("# Token (hex):    %s\n", hex)
}
logwisp/src/cmd/logwisp/bootstrap.go
@@ -1,10 +1,19 @@
-// FILE: logwisp/src/cmd/logwisp/bootstrap.go
 package main
 
 import (
 	"context"
 	"fmt"
-	"strings"
 
+	_ "logwisp/src/internal/source/console"
+	_ "logwisp/src/internal/source/file"
+	_ "logwisp/src/internal/source/null"
+	_ "logwisp/src/internal/source/random"
+
+	_ "logwisp/src/internal/sink/console"
+	_ "logwisp/src/internal/sink/file"
+	_ "logwisp/src/internal/sink/http"
+	_ "logwisp/src/internal/sink/null"
+	_ "logwisp/src/internal/sink/tcp"
+
 	"logwisp/src/internal/config"
 	"logwisp/src/internal/service"
@@ -13,39 +22,97 @@ import (
 	"github.com/lixenwraith/log"
 )
 
-// bootstrapService creates and initializes the log transport service
+// bootstrapInitial handles initial service startup with status reporter
+func bootstrapInitial(ctx context.Context, cfg *config.Config) (*service.Service, context.CancelFunc, error) {
+	svc, err := bootstrapService(ctx, cfg)
+	if err != nil {
+		return nil, nil, fmt.Errorf("failed to bootstrap service: %w", err)
+	}
+
+	if err := svc.Start(); err != nil {
+		return nil, nil, fmt.Errorf("failed to start service pipelines: %w", err)
+	}
+
+	var statusCancel context.CancelFunc
+	if cfg.StatusReporter {
+		statusCancel = startStatusReporter(ctx, svc)
+	}
+
+	return svc, statusCancel, nil
+}
+
+// handleReload orchestrates the entire hot-reload process including status reporter lifecycle
+func handleReload(ctx context.Context, oldSvc *service.Service, statusCancel context.CancelFunc) (*service.Service, *config.Config, context.CancelFunc, error) {
+	logger.Info("msg", "Starting configuration hot reload")
+
+	// Get updated config from the lixenwraith/config manager
+	lcfg := config.GetConfigManager()
+	if lcfg == nil {
+		err := fmt.Errorf("config manager not available for reload")
+		logger.Error("msg", "Reload failed", "error", err)
+		return nil, nil, nil, err
+	}
+
+	updatedCfgStruct, err := lcfg.AsStruct()
+	if err != nil {
+		logger.Error("msg", "Failed to get updated config for reload", "error", err, "action", "keeping current configuration")
+		return nil, nil, nil, err
+	}
+	newCfg := updatedCfgStruct.(*config.Config)
+
+	// Bootstrap a new service to ensure it's valid before touching the old one
+	logger.Debug("msg", "Bootstrapping new service with updated config")
+	newService, err := bootstrapService(ctx, newCfg)
+	if err != nil {
+		logger.Error("msg", "Failed to bootstrap new service, keeping old service running", "error", err)
+		return nil, nil, nil, err
+	}
+
+	// Gracefully shut down the old service
+	if oldSvc != nil {
+		logger.Info("msg", "Shutting down old service before activating new one")
+		oldSvc.Shutdown()
+	}
+
+	// Start the new service
+	if err := newService.Start(); err != nil {
+		logger.Error("msg", "Failed to start new service pipelines after reload. The application may be in a non-functional state.", "error", err)
+		return nil, nil, nil, fmt.Errorf("failed to start new service: %w", err)
+	}
+
+	// Manage status reporter lifecycle
+	if statusCancel != nil {
+		statusCancel()
+	}
+
+	var newStatusCancel context.CancelFunc
+	if newCfg.StatusReporter {
+		newStatusCancel = startStatusReporter(ctx, newService)
+	}
+
+	logger.Info("msg", "Configuration hot reload completed successfully")
+	return newService, newCfg, newStatusCancel, nil
+}
+
+// bootstrapService creates and initializes the main log transport service and its pipelines
 func bootstrapService(ctx context.Context, cfg *config.Config) (*service.Service, error) {
 	// Create service with logger dependency injection
-	svc := service.New(ctx, logger)
-
-	// Initialize pipelines
-	successCount := 0
-	for _, pipelineCfg := range cfg.Pipelines {
-		logger.Info("msg", "Initializing pipeline", "pipeline", pipelineCfg.Name)
-
-		// Create the pipeline
-		if err := svc.NewPipeline(pipelineCfg); err != nil {
-			logger.Error("msg", "Failed to create pipeline",
-				"pipeline", pipelineCfg.Name,
-				"error", err)
-			continue
-		}
-		successCount++
-		displayPipelineEndpoints(pipelineCfg)
-	}
-
-	if successCount == 0 {
-		return nil, fmt.Errorf("no pipelines successfully started (attempted %d)", len(cfg.Pipelines))
+	svc, err := service.NewService(ctx, cfg, logger)
+	if err != nil {
+		logger.Error("msg", "Failed to initialize service",
+			"component", "bootstrap",
+		)
+		return nil, err
 	}
 
 	logger.Info("msg", "LogWisp started",
 		"version", version.Short(),
-		"pipelines", successCount)
+	)
 
 	return svc, nil
 }
 
-// initializeLogger sets up the logger based on configuration
+// initializeLogger sets up the global logger based on the application's configuration
 func initializeLogger(cfg *config.Config) error {
 	logger = log.NewLogger()
 	logCfg := log.DefaultConfig()
@@ -53,13 +120,13 @@ func initializeLogger(cfg *config.Config) error {
 	if cfg.Quiet {
 		// In quiet mode, disable ALL logging output
 		logCfg.Level = 255 // A level that disables all output
-		logCfg.DisableFile = true
-		logCfg.EnableStdout = false
+		logCfg.EnableFile = false
+		logCfg.EnableConsole = false
 		return logger.ApplyConfig(logCfg)
 	}
 
 	// Determine log level
-	levelValue, err := parseLogLevel(cfg.Logging.Level)
+	levelValue, err := log.Level(cfg.Logging.Level)
 	if err != nil {
 		return fmt.Errorf("invalid log level: %w", err)
 	}
@@ -68,36 +135,37 @@ func initializeLogger(cfg *config.Config) error {
 	// Configure based on output mode
 	switch cfg.Logging.Output {
 	case "none":
-		logCfg.DisableFile = true
-		logCfg.EnableStdout = false
+		logCfg.EnableFile = false
+		logCfg.EnableConsole = false
 	case "stdout":
-		logCfg.DisableFile = true
-		logCfg.EnableStdout = true
-		logCfg.StdoutTarget = "stdout"
+		logCfg.EnableFile = false
+		logCfg.EnableConsole = true
+		logCfg.ConsoleTarget = "stdout"
 	case "stderr":
-		logCfg.DisableFile = true
-		logCfg.EnableStdout = true
-		logCfg.StdoutTarget = "stderr"
+		logCfg.EnableFile = false
+		logCfg.EnableConsole = true
+		logCfg.ConsoleTarget = "stderr"
+	case "split":
+		logCfg.EnableFile = false
+		logCfg.EnableConsole = true
+		logCfg.ConsoleTarget = "split"
 	case "file":
-		logCfg.EnableStdout = false
+		logCfg.EnableFile = true
+		logCfg.EnableConsole = false
 		configureFileLogging(logCfg, cfg)
-	case "both":
-		logCfg.EnableStdout = true
+	case "all":
+		logCfg.EnableFile = true
+		logCfg.EnableConsole = true
+		logCfg.ConsoleTarget = "split"
 		configureFileLogging(logCfg, cfg)
-		configureConsoleTarget(logCfg, cfg)
 	default:
 		return fmt.Errorf("invalid log output mode: %s", cfg.Logging.Output)
 	}
 
-	// Apply format if specified
-	if cfg.Logging.Console != nil && cfg.Logging.Console.Format != "" {
-		logCfg.Format = cfg.Logging.Console.Format
-	}
-
 	return logger.ApplyConfig(logCfg)
 }
 
-// configureFileLogging sets up file-based logging parameters
+// configureFileLogging sets up file-based logging parameters from the configuration
 func configureFileLogging(logCfg *log.Config, cfg *config.Config) {
 	if cfg.Logging.File != nil {
 		logCfg.Directory = cfg.Logging.File.Directory
@@ -109,30 +177,3 @@ func configureFileLogging(logCfg *log.Config, cfg *config.Config) {
 		}
 	}
 }
-
-// configureConsoleTarget sets up console output parameters
-func configureConsoleTarget(logCfg *log.Config, cfg *config.Config) {
-	target := "stderr" // default
-
-	if cfg.Logging.Console != nil && cfg.Logging.Console.Target != "" {
-		target = cfg.Logging.Console.Target
-	}
-
-	// Set the target, which can be "stdout", "stderr", or "split"
-	logCfg.StdoutTarget = target
-}
-
-func parseLogLevel(level string) (int64, error) {
-	switch strings.ToLower(level) {
-	case "debug":
-		return log.LevelDebug, nil
-	case "info":
-		return log.LevelInfo, nil
-	case "warn", "warning":
-		return log.LevelWarn, nil
-	case "error":
-		return log.LevelError, nil
-	default:
-		return 0, fmt.Errorf("unknown log level: %s", level)
-	}
-}
logwisp/src/cmd/logwisp/help.go (file removed)
@@ -1,56 +0,0 @@
// FILE: logwisp/src/cmd/logwisp/help.go
package main

import (
	"fmt"
	"os"
)

const helpText = `LogWisp: A flexible log transport and processing tool.

Usage: logwisp [options]

Application Control:
  -c, --config <path>        (string) Path to configuration file (default: logwisp.toml).
  -h, --help                 Display this help message and exit.
  -v, --version              Display version information and exit.
  -b, --background           Run LogWisp in the background as a daemon.
  -q, --quiet                Suppress all console output, including errors.

Runtime Behavior:
  --disable-status-reporter  Disable the periodic status reporter.
  --config-auto-reload       Enable config reload and pipeline reconfiguration on config file change.

Configuration Sources (Precedence: CLI > Env > File > Defaults):
  - CLI flags override all other settings.
  - Environment variables override file settings.
  - TOML configuration file is the primary method for defining pipelines.

Logging ([logging] section or LOGWISP_LOGGING_* env vars):
  output = "stderr"          (string) Log output: none, stdout, stderr, file, both.
  level = "info"             (string) Log level: debug, info, warn, error.
  [logging.file]             Settings for file logging (directory, name, rotation).
  [logging.console]          Settings for console logging (target, format).

Pipelines ([[pipelines]] array in TOML):
  Each pipeline defines a complete data flow from sources to sinks.
  name = "my_pipeline"       (string) Unique name for the pipeline.
  sources = [...]            (array) Data inputs (e.g., directory, stdin, http, tcp).
  sinks = [...]              (array) Data outputs (e.g., http, tcp, file, stdout, stderr, http_client).
  filters = [...]            (array) Optional filters to include/exclude logs based on regex.
  rate_limit = {...}         (object) Optional rate limiting for the entire pipeline.
  auth = {...}               (object) Optional authentication for network sinks.
  format = "json"            (string) Optional output formatter for the pipeline (raw, text, json).

For detailed configuration options, please refer to the documentation.
`

// CheckAndDisplayHelp scans arguments for help flags and prints help text if found.
func CheckAndDisplayHelp(args []string) {
	for _, arg := range args {
		if arg == "-h" || arg == "--help" {
			fmt.Fprint(os.Stdout, helpText)
			os.Exit(0)
		}
	}
}
@@ -1,31 +1,30 @@
-// FILE: logwisp/src/cmd/logwisp/main.go
 package main
 
 import (
 	"context"
 	"fmt"
 	"os"
-	"os/exec"
 	"os/signal"
 	"strings"
 	"syscall"
 	"time"
 
 	"logwisp/src/internal/config"
+	"logwisp/src/internal/core"
 	"logwisp/src/internal/version"
 
 	"github.com/lixenwraith/log"
 )
 
+// logger is the global logger instance for the application
 var logger *log.Logger
 
+// main is the entry point for the LogWisp application
 func main() {
+	// --- 1. Initial setup ---
 	// Emulates nohup
 	signal.Ignore(syscall.SIGHUP)
 
-	// Early check for help flag to avoid unnecessary config loading
-	CheckAndDisplayHelp(os.Args[1:])
-
 	// Load configuration with automatic CLI parsing
 	cfg, err := config.Load(os.Args[1:])
 	if err != nil {
@@ -46,21 +45,6 @@ func main() {
 		os.Exit(0)
 	}
 
-	// Background mode spawns a child with internal --background-daemon flag.
-	if cfg.Background && !cfg.BackgroundDaemon {
-		// Prepare arguments for the child process, including originals and daemon flag.
-		args := append(os.Args[1:], "--background-daemon")
-
-		cmd := exec.Command(os.Args[0], args...)
-
-		if err := cmd.Start(); err != nil {
-			FatalError(1, "Failed to start background process: %v\n", err)
-		}
-
-		Print("Started LogWisp in background (PID: %d)\n", cmd.Process.Pid)
-		os.Exit(0) // The parent process exits successfully.
-	}
-
 	// Initialize logger instance and apply configuration
 	if err := initializeLogger(cfg); err != nil {
 		FatalError(1, "Failed to initialize logger: %v\n", err)
@@ -77,152 +61,95 @@ func main() {
 		"version", version.String(),
 		"config_file", cfg.ConfigFile,
 		"log_output", cfg.Logging.Output,
-		"background_mode", cfg.Background)
+		"status_reporter", cfg.StatusReporter,
+		"auto_reload", cfg.ConfigAutoReload)
 
+	time.Sleep(time.Second)
 
 	// Create context for shutdown
 	ctx, cancel := context.WithCancel(context.Background())
 	defer cancel()
 
-	// Service and hot reload management
-	var reloadManager *ReloadManager
+	// --- 2. Bootstrap initial service ---
+	svc, statusReporterCancel, err := bootstrapInitial(ctx, cfg)
 
-	if cfg.ConfigAutoReload && cfg.ConfigFile != "" {
-		// Use reload manager for dynamic configuration
-		logger.Info("msg", "Config auto-reload enabled",
-			"config_file", cfg.ConfigFile)
-
-		reloadManager = NewReloadManager(cfg.ConfigFile, cfg, logger)
-
-		if err := reloadManager.Start(ctx); err != nil {
-			logger.Error("msg", "Failed to start reload manager", "error", err)
-			os.Exit(1)
-		}
-		defer reloadManager.Shutdown()
-
-		// Setup signal handler with reload support
-		signalHandler := NewSignalHandler(reloadManager, logger)
-		defer signalHandler.Stop()
-
-		// Handle signals in background
-		go func() {
-			sig := signalHandler.Handle(ctx)
-			if sig != nil {
-				logger.Info("msg", "Shutdown signal received",
-					"signal", sig)
-				cancel() // Trigger shutdown
-			}
-		}()
-	} else {
-		// Traditional static bootstrap
-		logger.Info("msg", "Config auto-reload disabled")
-
-		svc, err := bootstrapService(ctx, cfg)
 	if err != nil {
-		logger.Error("msg", "Failed to bootstrap service", "error", err)
+		logger.Error("msg", "Failed to initialize service", "error", err)
 		os.Exit(1)
 	}
 
-	// Start status reporter if enabled (static mode)
-	if !cfg.DisableStatusReporter {
-		go statusReporter(svc, ctx)
-	}
-
-	// Setup traditional signal handling
+	// --- 3. Setup signals and shutdown ---
 	sigChan := make(chan os.Signal, 1)
-	signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM, syscall.SIGKILL)
+	signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM, syscall.SIGHUP, syscall.SIGUSR1)
 
-	// Wait for shutdown signal
-	sig := <-sigChan
-
-	// Handle SIGKILL for immediate shutdown
-	if sig == syscall.SIGKILL {
-		os.Exit(137) // Standard exit code for SIGKILL (128 + 9)
+	var configChanges <-chan string
+	lcfg := config.GetConfigManager()
+	if cfg.ConfigAutoReload && lcfg != nil {
+		configChanges = lcfg.Watch()
+		logger.Info("msg", "Config auto-reload enabled", "config_file", cfg.ConfigFile)
+	} else {
+		logger.Info("msg", "Config auto-reload disabled")
 	}
 
-	logger.Info("msg", "Shutdown signal received, starting graceful shutdown...")
-
-	// Shutdown service with timeout
-	shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), 10*time.Second)
-	defer shutdownCancel()
-
-	done := make(chan struct{})
-	go func() {
+	// Service shutdown sequence
+	defer func() {
+		logger.Info("msg", "Shutdown initiated")
+		if statusReporterCancel != nil {
+			statusReporterCancel()
+		}
+		if svc != nil {
 		svc.Shutdown()
-		close(done)
+		}
+		if lcfg != nil {
+			lcfg.StopAutoUpdate()
+		}
+		logger.Info("msg", "Shutdown complete")
+		// Deferred logger shutdown will run after this
 	}()
 
+	// --- 4. Main Application Event Loop ---
+	logger.Info("msg", "Application started, waiting for signals or config changes")
+	for {
 	select {
-	case <-done:
-		// Save configuration after graceful shutdown (no reload manager in static mode)
-		saveConfigurationOnExit(cfg, nil, logger)
-		logger.Info("msg", "Shutdown complete")
-	case <-shutdownCtx.Done():
-		logger.Error("msg", "Shutdown timeout exceeded - forcing exit")
-		os.Exit(1)
+	case sig := <-sigChan:
+		if sig == syscall.SIGHUP || sig == syscall.SIGUSR1 {
+			logger.Info("msg", "Reload signal received, triggering manual reload", "signal", sig)
+			newSvc, newCfg, newStatusCancel, err := handleReload(ctx, svc, statusReporterCancel)
+			if err == nil {
+				svc = newSvc
+				cfg = newCfg
+				statusReporterCancel = newStatusCancel
+			}
+		} else {
+			logger.Info("msg", "Shutdown signal received", "signal", sig)
+			cancel() // Trigger service shutdown via context
+		}
 	}
 
-	return // Exit from static mode
+	case event, ok := <-configChanges:
+		if !ok {
+			logger.Warn("msg", "Configuration watch channel closed, disabling auto-reload")
+			configChanges = nil // Stop selecting on this channel
+			continue
+		}
+		logger.Info("msg", "Configuration file change detected, triggering reload", "event", event)
+		newSvc, newCfg, newStatusCancel, err := handleReload(ctx, svc, statusReporterCancel)
+		if err == nil {
+			svc = newSvc
+			cfg = newCfg
+			statusReporterCancel = newStatusCancel
+		}
 	}
 
-	// Wait for context cancellation
-	<-ctx.Done()
-	// Save configuration before final shutdown, handled by reloadManager
-	saveConfigurationOnExit(cfg, reloadManager, logger)
-
-	// Shutdown is handled by ReloadManager.Shutdown() in defer
-	logger.Info("msg", "Shutdown complete")
+	case <-ctx.Done():
+		return // Exit the loop and trigger deferred shutdown
+	}
+	}
 }
 
+// shutdownLogger gracefully shuts down the global logger.
 func shutdownLogger() {
 	if logger != nil {
-		if err := logger.Shutdown(2 * time.Second); err != nil {
+		if err := logger.Shutdown(core.LoggerShutdownTimeout); err != nil {
 			// Best effort - can't log the shutdown error
 			Error("Logger shutdown error: %v\n", err)
 		}
 	}
 }
 
-// saveConfigurationOnExit saves the configuration to file on exist
-func saveConfigurationOnExit(cfg *config.Config, reloadManager *ReloadManager, logger *log.Logger) {
-	// Only save if explicitly enabled and we have a valid path
-	if !cfg.ConfigSaveOnExit || cfg.ConfigFile == "" {
-		return
-	}
-
-	// Create a context with timeout for save operation
-	saveCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
-	defer cancel()
-
-	// Perform save in goroutine to respect timeout
-	done := make(chan error, 1)
-	go func() {
-		var err error
-		if reloadManager != nil && reloadManager.lcfg != nil {
-			// Use existing lconfig instance from reload manager
-			// This ensures we save through the same configuration system
-			err = reloadManager.lcfg.Save(cfg.ConfigFile)
-		} else {
-			// Static mode: create temporary lconfig for saving
-			err = cfg.SaveToFile(cfg.ConfigFile)
-		}
-		done <- err
-	}()
-
-	select {
-	case err := <-done:
-		if err != nil {
-			logger.Error("msg", "Failed to save configuration on exit",
-				"path", cfg.ConfigFile,
-				"error", err)
-			// Don't fail the exit on save error
-		} else {
-			logger.Info("msg", "Configuration saved successfully",
-				"path", cfg.ConfigFile)
-		}
-	case <-saveCtx.Done():
-		logger.Error("msg", "Configuration save timeout exceeded",
-			"path", cfg.ConfigFile,
-			"timeout", "5s")
-	}
-}
@@ -1,4 +1,3 @@
-// FILE: logwisp/src/cmd/logwisp/output.go
 package main
 
 import (
@@ -8,7 +7,7 @@ import (
 	"sync"
 )
 
-// OutputHandler manages all application output respecting quiet mode
+// OutputHandler manages all application output, respecting the global quiet mode
 type OutputHandler struct {
 	quiet bool
 	mu    sync.RWMutex
@@ -16,7 +15,7 @@ type OutputHandler struct {
 	stderr io.Writer
 }
 
-// Global output handler instance
+// output is the global instance of the OutputHandler
 var output *OutputHandler
 
 // InitOutputHandler initializes the global output handler
@@ -28,59 +27,21 @@ func InitOutputHandler(quiet bool) {
 	}
 }
 
-// Print writes to stdout if not in quiet mode
-func (o *OutputHandler) Print(format string, args ...any) {
-	o.mu.RLock()
-	defer o.mu.RUnlock()
-
-	if !o.quiet {
-		fmt.Fprintf(o.stdout, format, args...)
-	}
-}
-
-// Error writes to stderr if not in quiet mode
-func (o *OutputHandler) Error(format string, args ...any) {
-	o.mu.RLock()
-	defer o.mu.RUnlock()
-
-	if !o.quiet {
-		fmt.Fprintf(o.stderr, format, args...)
-	}
-}
-
-// FatalError writes to stderr and exits (respects quiet mode)
-func (o *OutputHandler) FatalError(code int, format string, args ...any) {
-	o.Error(format, args...)
-	os.Exit(code)
-}
-
-// IsQuiet returns the current quiet mode status
-func (o *OutputHandler) IsQuiet() bool {
-	o.mu.RLock()
-	defer o.mu.RUnlock()
-	return o.quiet
-}
-
-// SetQuiet updates quiet mode (useful for testing)
-func (o *OutputHandler) SetQuiet(quiet bool) {
-	o.mu.Lock()
-	defer o.mu.Unlock()
-	o.quiet = quiet
-}
-
-// Helper functions for global output handler
+// Print writes to stdout
 func Print(format string, args ...any) {
 	if output != nil {
 		output.Print(format, args...)
 	}
 }
 
+// Error writes to stderr
 func Error(format string, args ...any) {
 	if output != nil {
 		output.Error(format, args...)
 	}
 }
 
+// FatalError writes to stderr and exits the application
 func FatalError(code int, format string, args ...any) {
 	if output != nil {
 		output.FatalError(code, format, args...)
@@ -90,3 +51,43 @@ func FatalError(code int, format string, args ...any) {
 	os.Exit(code)
 	}
 }
 
+// Print writes a formatted string to stdout if not in quiet mode
+func (o *OutputHandler) Print(format string, args ...any) {
+	o.mu.RLock()
+	defer o.mu.RUnlock()
+
+	if !o.quiet {
+		fmt.Fprintf(o.stdout, format, args...)
+	}
+}
+
+// Error writes a formatted string to stderr if not in quiet mode
+func (o *OutputHandler) Error(format string, args ...any) {
+	o.mu.RLock()
+	defer o.mu.RUnlock()
+
+	if !o.quiet {
+		fmt.Fprintf(o.stderr, format, args...)
+	}
+}
+
+// FatalError writes a formatted string to stderr and exits with the given code.
+func (o *OutputHandler) FatalError(code int, format string, args ...any) {
+	o.Error(format, args...)
+	os.Exit(code)
+}
+
+// IsQuiet returns the current quiet mode status.
+func (o *OutputHandler) IsQuiet() bool {
+	o.mu.RLock()
+	defer o.mu.RUnlock()
+	return o.quiet
+}
+
+// SetQuiet updates the quiet mode status.
+func (o *OutputHandler) SetQuiet(quiet bool) {
+	o.mu.Lock()
+	defer o.mu.Unlock()
+	o.quiet = quiet
+}
@@ -1,340 +0,0 @@
-// FILE: src/cmd/logwisp/reload.go
-package main
-
-import (
-	"context"
-	"fmt"
-	"strings"
-	"sync"
-	"time"
-
-	"logwisp/src/internal/config"
-	"logwisp/src/internal/service"
-
-	lconfig "github.com/lixenwraith/config"
-	"github.com/lixenwraith/log"
-)
-
-// ReloadManager handles configuration hot reload
-type ReloadManager struct {
-	configPath  string
-	service     *service.Service
-	cfg         *config.Config
-	lcfg        *lconfig.Config
-	logger      *log.Logger
-	mu          sync.RWMutex
-	reloadingMu sync.Mutex
-	isReloading bool
-	shutdownCh  chan struct{}
-	wg          sync.WaitGroup
-
-	// Status reporter management
-	statusReporterCancel context.CancelFunc
-	statusReporterMu     sync.Mutex
-}
-
-// NewReloadManager creates a new reload manager
-func NewReloadManager(configPath string, initialCfg *config.Config, logger *log.Logger) *ReloadManager {
-	return &ReloadManager{
-		configPath: configPath,
-		cfg:        initialCfg,
-		logger:     logger,
-		shutdownCh: make(chan struct{}),
-	}
-}
-
-// Start begins watching for configuration changes
-func (rm *ReloadManager) Start(ctx context.Context) error {
-	// Bootstrap initial service
-	svc, err := bootstrapService(ctx, rm.cfg)
-	if err != nil {
-		return fmt.Errorf("failed to bootstrap initial service: %w", err)
-	}
-
-	rm.mu.Lock()
-	rm.service = svc
-	rm.mu.Unlock()
-
-	// Start status reporter for initial service
-	if !rm.cfg.DisableStatusReporter {
-		rm.startStatusReporter(ctx, svc)
-	}
-
-	// Create lconfig instance for file watching, logwisp config is always TOML
-	lcfg, err := lconfig.NewBuilder().
-		WithFile(rm.configPath).
-		WithTarget(rm.cfg).
-		WithFileFormat("toml").
-		WithSecurityOptions(lconfig.SecurityOptions{
-			PreventPathTraversal: true,
-			MaxFileSize:          10 * 1024 * 1024,
-		}).
-		Build()
-	if err != nil {
-		return fmt.Errorf("failed to create config watcher: %w", err)
-	}
-
-	rm.lcfg = lcfg
-
-	// Enable auto-update with custom options
-	watchOpts := lconfig.WatchOptions{
-		PollInterval:      time.Second,
-		Debounce:          500 * time.Millisecond,
-		ReloadTimeout:     30 * time.Second,
-		VerifyPermissions: true, // TODO: Prevent malicious config replacement, to be implemented
-	}
-	lcfg.AutoUpdateWithOptions(watchOpts)
-
-	// Start watching for changes
-	rm.wg.Add(1)
-	go rm.watchLoop(ctx)
-
-	rm.logger.Info("msg", "Configuration hot reload enabled",
-		"config_file", rm.configPath)
-
-	return nil
-}
-
-// watchLoop monitors configuration changes
-func (rm *ReloadManager) watchLoop(ctx context.Context) {
-	defer rm.wg.Done()
-
-	changeCh := rm.lcfg.Watch()
-
-	for {
-		select {
-		case <-ctx.Done():
-			return
-		case <-rm.shutdownCh:
-			return
-		case changedPath := <-changeCh:
-			// Handle special notifications
-			switch changedPath {
-			case "file_deleted":
-				rm.logger.Error("msg", "Configuration file deleted",
-					"action", "keeping current configuration")
-				continue
-			case "permissions_changed":
-				// SECURITY: Config file permissions changed suspiciously
-				rm.logger.Error("msg", "Configuration file permissions changed",
-					"action", "reload blocked for security")
-				continue
-			case "reload_timeout":
-				rm.logger.Error("msg", "Configuration reload timed out",
-					"action", "keeping current configuration")
-				continue
-			default:
-				if strings.HasPrefix(changedPath, "reload_error:") {
-					rm.logger.Error("msg", "Configuration reload error",
-						"error", strings.TrimPrefix(changedPath, "reload_error:"),
-						"action", "keeping current configuration")
-					continue
-				}
-			}
-
-			// Trigger reload for any pipeline-related change
-			if rm.shouldReload(changedPath) {
-				rm.triggerReload(ctx)
-			}
-		}
-	}
-}
-
-// shouldReload determines if a config change requires service reload
-func (rm *ReloadManager) shouldReload(path string) bool {
-	// Pipeline changes always require reload
-	if strings.HasPrefix(path, "pipelines.") || path == "pipelines" {
-		return true
-	}
-
-	// Logging changes don't require service reload
-	if strings.HasPrefix(path, "logging.") {
-		return false
-	}
-
-	// Status reporter changes
-	if path == "disable_status_reporter" {
-		return true
-	}
-
-	return false
-}
-
-// triggerReload performs the actual reload
-func (rm *ReloadManager) triggerReload(ctx context.Context) {
-	// Prevent concurrent reloads
-	rm.reloadingMu.Lock()
-	if rm.isReloading {
-		rm.reloadingMu.Unlock()
-		rm.logger.Debug("msg", "Reload already in progress, skipping")
-		return
-	}
-	rm.isReloading = true
-	rm.reloadingMu.Unlock()
-
-	defer func() {
-		rm.reloadingMu.Lock()
-		rm.isReloading = false
-		rm.reloadingMu.Unlock()
-	}()
-
-	rm.logger.Info("msg", "Starting configuration hot reload")
-
-	// Create reload context with timeout
-	reloadCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
-	defer cancel()
-
-	if err := rm.performReload(reloadCtx); err != nil {
-		rm.logger.Error("msg", "Hot reload failed",
-			"error", err,
-			"action", "keeping current configuration and services")
-		return
-	}
-
-	rm.logger.Info("msg", "Configuration hot reload completed successfully")
-}
-
-// performReload executes the reload process
-func (rm *ReloadManager) performReload(ctx context.Context) error {
-	// Get updated config from lconfig
-	updatedCfg, err := rm.lcfg.AsStruct()
-	if err != nil {
-		return fmt.Errorf("failed to get updated config: %w", err)
-	}
-
-	newCfg := updatedCfg.(*config.Config)
-
-	// Get current service snapshot
-	rm.mu.RLock()
-	oldService := rm.service
-	rm.mu.RUnlock()
-
-	// Try to bootstrap with new configuration
-	rm.logger.Debug("msg", "Bootstrapping new service with updated config")
-	newService, err := bootstrapService(ctx, newCfg)
-	if err != nil {
-		// Bootstrap failed - keep old services running
-		return fmt.Errorf("failed to bootstrap new service (old service still active): %w", err)
-	}
-
-	// Bootstrap succeeded - swap services atomically
-	rm.mu.Lock()
-	rm.service = newService
-	rm.cfg = newCfg
-	rm.mu.Unlock()
-
-	// Stop old status reporter and start new one
-	rm.restartStatusReporter(ctx, newService)
-
-	// Gracefully shutdown old services
-	// This happens after the swap to minimize downtime
-	go rm.shutdownOldServices(oldService)
-
-	return nil
-}
-
-// shutdownOldServices gracefully shuts down old services
-func (rm *ReloadManager) shutdownOldServices(svc *service.Service) {
-	// Give connections time to drain
-	rm.logger.Debug("msg", "Draining connections from old services")
-	time.Sleep(2 * time.Second)
-
-	if svc != nil {
-		rm.logger.Info("msg", "Shutting down old service")
-		svc.Shutdown()
-	}
-
-	rm.logger.Debug("msg", "Old services shutdown complete")
-}
-
-// startStatusReporter starts a new status reporter
-func (rm *ReloadManager) startStatusReporter(ctx context.Context, svc *service.Service) {
-	rm.statusReporterMu.Lock()
-	defer rm.statusReporterMu.Unlock()
-
-	// Create cancellable context for status reporter
-	reporterCtx, cancel := context.WithCancel(ctx)
-	rm.statusReporterCancel = cancel
-
-	go statusReporter(svc, reporterCtx)
-	rm.logger.Debug("msg", "Started status reporter")
-}
-
-// restartStatusReporter stops old and starts new status reporter
-func (rm *ReloadManager) restartStatusReporter(ctx context.Context, newService *service.Service) {
-	if rm.cfg.DisableStatusReporter {
-		// Just stop the old one if disabled
-		rm.stopStatusReporter()
-		return
-	}
-
-	rm.statusReporterMu.Lock()
-	defer rm.statusReporterMu.Unlock()
-
-	// Stop old reporter
-	if rm.statusReporterCancel != nil {
-		rm.statusReporterCancel()
-		rm.logger.Debug("msg", "Stopped old status reporter")
-	}
-
-	// Start new reporter
-	reporterCtx, cancel := context.WithCancel(ctx)
-	rm.statusReporterCancel = cancel
-
-	go statusReporter(newService, reporterCtx)
-	rm.logger.Debug("msg", "Started new status reporter")
-}
-
-// stopStatusReporter stops the status reporter
-func (rm *ReloadManager) stopStatusReporter() {
-	rm.statusReporterMu.Lock()
-	defer rm.statusReporterMu.Unlock()
-
-	if rm.statusReporterCancel != nil {
-		rm.statusReporterCancel()
-		rm.statusReporterCancel = nil
-		rm.logger.Debug("msg", "Stopped status reporter")
-	}
-}
-
-// SaveConfig is a wrapper to save the config
-func (rm *ReloadManager) SaveConfig(path string) error {
-	if rm.lcfg == nil {
-		return fmt.Errorf("no lconfig instance available")
-	}
-	return rm.lcfg.Save(path)
-}
-
-// Shutdown stops the reload manager
-func (rm *ReloadManager) Shutdown() {
-	rm.logger.Info("msg", "Shutting down reload manager")
-
-	// Stop status reporter
-	rm.stopStatusReporter()
-
-	// Stop watching
-	close(rm.shutdownCh)
-	rm.wg.Wait()
-
-	// Stop config watching
-	if rm.lcfg != nil {
-		rm.lcfg.StopAutoUpdate()
-	}
-
-	// Shutdown current services
-	rm.mu.RLock()
-	currentService := rm.service
-	rm.mu.RUnlock()
-
-	if currentService != nil {
-		rm.logger.Info("msg", "Shutting down service")
-		currentService.Shutdown()
-	}
-}
-
-// GetService returns the current service (thread-safe)
-func (rm *ReloadManager) GetService() *service.Service {
-	rm.mu.RLock()
-	defer rm.mu.RUnlock()
-	return rm.service
-}
@@ -1,65 +0,0 @@
-// FILE: src/cmd/logwisp/signals.go
-package main
-
-import (
-	"context"
-	"os"
-	"os/signal"
-	"syscall"
-
-	"github.com/lixenwraith/log"
-)
-
-// SignalHandler manages OS signals
-type SignalHandler struct {
-	reloadManager *ReloadManager
-	logger        *log.Logger
-	sigChan       chan os.Signal
-}
-
-// NewSignalHandler creates a signal handler
-func NewSignalHandler(rm *ReloadManager, logger *log.Logger) *SignalHandler {
-	sh := &SignalHandler{
-		reloadManager: rm,
-		logger:        logger,
-		sigChan:       make(chan os.Signal, 1),
-	}
-
-	// Register for signals
-	signal.Notify(sh.sigChan,
-		syscall.SIGINT,
-		syscall.SIGTERM,
-		syscall.SIGHUP,  // Traditional reload signal
-		syscall.SIGUSR1, // Alternative reload signal
-	)
-
-	return sh
-}
-
-// Handle processes signals
-func (sh *SignalHandler) Handle(ctx context.Context) os.Signal {
-	for {
-		select {
-		case sig := <-sh.sigChan:
-			switch sig {
-			case syscall.SIGHUP, syscall.SIGUSR1:
-				sh.logger.Info("msg", "Reload signal received",
-					"signal", sig)
-				// Trigger manual reload
-				go sh.reloadManager.triggerReload(ctx)
-				// Continue handling signals
-			default:
-				// Return termination signals
-				return sig
-			}
-		case <-ctx.Done():
-			return nil
-		}
-	}
-}
-
-// Stop cleans up signal handling
-func (sh *SignalHandler) Stop() {
-	signal.Stop(sh.sigChan)
-	close(sh.sigChan)
-}
@ -1,4 +1,3 @@
|
|||||||
// FILE: logwisp/src/cmd/logwisp/status.go
|
|
||||||
package main
|
package main
|
||||||
|
|
||||||
import (
|
import (
|
||||||
@ -6,11 +5,18 @@ import (
|
|||||||
"fmt"
|
"fmt"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"logwisp/src/internal/config"
|
|
||||||
"logwisp/src/internal/service"
|
"logwisp/src/internal/service"
|
||||||
)
|
)
|
||||||
|
|
||||||
// statusReporter periodically logs service status
|
// startStatusReporter starts a new status reporter for a service and returns its cancel function.
|
||||||
|
func startStatusReporter(ctx context.Context, svc *service.Service) context.CancelFunc {
|
||||||
|
reporterCtx, cancel := context.WithCancel(ctx)
|
||||||
|
go statusReporter(svc, reporterCtx)
|
||||||
|
logger.Debug("msg", "Started status reporter")
|
||||||
|
return cancel
|
||||||
|
}
|
||||||
|
|
||||||
|
// statusReporter periodically logs the health and statistics of the service
|
||||||
func statusReporter(service *service.Service, ctx context.Context) {
|
func statusReporter(service *service.Service, ctx context.Context) {
|
||||||
ticker := time.NewTicker(30 * time.Second)
|
ticker := time.NewTicker(30 * time.Second)
|
||||||
defer ticker.Stop()
|
defer ticker.Stop()
|
||||||
@ -18,7 +24,6 @@ func statusReporter(service *service.Service, ctx context.Context) {
|
|||||||
for {
|
for {
|
||||||
select {
|
select {
|
||||||
case <-ctx.Done():
|
case <-ctx.Done():
|
||||||
// Clean shutdown
|
|
||||||
return
|
return
|
||||||
case <-ticker.C:
|
case <-ticker.C:
|
||||||
if service == nil {
|
if service == nil {
|
||||||
```diff
@@ -45,153 +50,99 @@ func statusReporter(service *service.Service, ctx context.Context) {
 				return
 			}

+			// Log service-level summary
 			logger.Debug("msg", "Status report",
 				"component", "status_reporter",
 				"active_pipelines", totalPipelines,
 				"time", time.Now().Format("15:04:05"))

-			// Log individual pipeline status
-			pipelines := stats["pipelines"].(map[string]any)
-			for name, pipelineStats := range pipelines {
-				logPipelineStatus(name, pipelineStats.(map[string]any))
+			// Log each pipeline's stats recursively
+			if pipelines, ok := stats["pipelines"].(map[string]any); ok {
+				for name, pipelineStats := range pipelines {
+					logStats("Pipeline status", name, pipelineStats)
+				}
 			}
 		}()
 	}
 }

-// logPipelineStatus logs the status of an individual pipeline
-func logPipelineStatus(name string, stats map[string]any) {
-	statusFields := []any{
-		"msg", "Pipeline status",
-		"pipeline", name,
+// logStats recursively logs statistics with automatic field extraction
+func logStats(msg string, name string, stats any) {
+	// Build base log fields
+	fields := []any{
+		"msg", msg,
+		"name", name,
 	}

-	// Add processing statistics
-	if totalProcessed, ok := stats["total_processed"].(uint64); ok {
-		statusFields = append(statusFields, "entries_processed", totalProcessed)
-	}
-	if totalFiltered, ok := stats["total_filtered"].(uint64); ok {
-		statusFields = append(statusFields, "entries_filtered", totalFiltered)
-	}
-
-	// Add source count
-	if sourceCount, ok := stats["source_count"].(int); ok {
-		statusFields = append(statusFields, "sources", sourceCount)
-	}
-
-	// Add sink statistics
-	if sinks, ok := stats["sinks"].([]map[string]any); ok {
-		tcpConns := int64(0)
-		httpConns := int64(0)
-
-		for _, sink := range sinks {
-			sinkType := sink["type"].(string)
-			if activeConns, ok := sink["active_connections"].(int64); ok {
-				switch sinkType {
-				case "tcp":
-					tcpConns += activeConns
-				case "http":
-					httpConns += activeConns
-				}
-			}
-		}
-
-		if tcpConns > 0 {
-			statusFields = append(statusFields, "tcp_connections", tcpConns)
-		}
-		if httpConns > 0 {
-			statusFields = append(statusFields, "http_connections", httpConns)
-		}
-	}
-
-	logger.Debug(statusFields...)
+	// Extract and flatten important metrics from stats map
+	if statsMap, ok := stats.(map[string]any); ok {
+		// Add scalar values directly
+		for key, value := range statsMap {
+			switch v := value.(type) {
+			case string, bool, int, int64, uint64, float64:
+				fields = append(fields, key, v)
+			case time.Time:
+				if !v.IsZero() {
+					fields = append(fields, key, v.Format(time.RFC3339))
+				}
+			case map[string]any:
+				// For nested maps, log summary counts if they contain arrays/maps
+				if count := getItemCount(v); count > 0 {
+					fields = append(fields, fmt.Sprintf("%s_count", key), count)
+				}
+			case []any, []map[string]any:
+				// For arrays, just log the count
+				fields = append(fields, fmt.Sprintf("%s_count", key), getArrayLength(value))
+			}
+		}
+
+		// Log the flattened stats
+		logger.Debug(fields...)
+
+		// Recursively log nested structures with detail
+		for key, value := range statsMap {
+			switch v := value.(type) {
+			case map[string]any:
+				// Log nested component stats
+				if key == "flow" || key == "rate_limiter" || key == "filters" {
+					logStats(fmt.Sprintf("%s %s", name, key), key, v)
+				}
+			case []map[string]any:
+				// Log array items (sources, sinks, filters)
+				for i, item := range v {
+					if itemName, ok := item["id"].(string); ok {
+						logStats(fmt.Sprintf("%s %s", name, key), itemName, item)
+					} else {
+						logStats(fmt.Sprintf("%s %s", name, key), fmt.Sprintf("%s[%d]", key, i), item)
+					}
+				}
+			}
+		}
+	}
 }

-// displayPipelineEndpoints logs the configured endpoints for a pipeline
-func displayPipelineEndpoints(cfg config.PipelineConfig) {
-	// Display sink endpoints
-	for i, sinkCfg := range cfg.Sinks {
-		switch sinkCfg.Type {
-		case "tcp":
-			if port, ok := sinkCfg.Options["port"].(int64); ok {
-				logger.Info("msg", "TCP endpoint configured",
-					"component", "main",
-					"pipeline", cfg.Name,
-					"sink_index", i,
-					"port", port)
-
-				// Display net limit info if configured
-				if rl, ok := sinkCfg.Options["net_limit"].(map[string]any); ok {
-					if enabled, ok := rl["enabled"].(bool); ok && enabled {
-						logger.Info("msg", "TCP net limiting enabled",
-							"pipeline", cfg.Name,
-							"sink_index", i,
-							"requests_per_second", rl["requests_per_second"],
-							"burst_size", rl["burst_size"])
-					}
-				}
-			}
-
-		case "http":
-			if port, ok := sinkCfg.Options["port"].(int64); ok {
-				streamPath := "/transport"
-				statusPath := "/status"
-				if path, ok := sinkCfg.Options["stream_path"].(string); ok {
-					streamPath = path
-				}
-				if path, ok := sinkCfg.Options["status_path"].(string); ok {
-					statusPath = path
-				}
-
-				logger.Info("msg", "HTTP endpoints configured",
-					"pipeline", cfg.Name,
-					"sink_index", i,
-					"stream_url", fmt.Sprintf("http://localhost:%d%s", port, streamPath),
-					"status_url", fmt.Sprintf("http://localhost:%d%s", port, statusPath))
-
-				// Display net limit info if configured
-				if rl, ok := sinkCfg.Options["net_limit"].(map[string]any); ok {
-					if enabled, ok := rl["enabled"].(bool); ok && enabled {
-						logger.Info("msg", "HTTP net limiting enabled",
-							"pipeline", cfg.Name,
-							"sink_index", i,
-							"requests_per_second", rl["requests_per_second"],
-							"burst_size", rl["burst_size"],
-							"limit_by", rl["limit_by"])
-					}
-				}
-			}
-
-		case "file":
-			if dir, ok := sinkCfg.Options["directory"].(string); ok {
-				name, _ := sinkCfg.Options["name"].(string)
-				logger.Info("msg", "File sink configured",
-					"pipeline", cfg.Name,
-					"sink_index", i,
-					"directory", dir,
-					"name", name)
-			}
-
-		case "stdout", "stderr":
-			logger.Info("msg", "Console sink configured",
-				"pipeline", cfg.Name,
-				"sink_index", i,
-				"type", sinkCfg.Type)
-		}
-	}
-
-	// Display authentication information
-	if cfg.Auth != nil && cfg.Auth.Type != "none" {
-		logger.Info("msg", "Authentication enabled",
-			"pipeline", cfg.Name,
-			"auth_type", cfg.Auth.Type)
-	}
-
-	// Display filter information
-	if len(cfg.Filters) > 0 {
-		logger.Info("msg", "Filters configured",
-			"pipeline", cfg.Name,
-			"filter_count", len(cfg.Filters))
-	}
+// getItemCount returns the count of items in a map (for nested structures)
+func getItemCount(m map[string]any) int {
+	for _, v := range m {
+		switch v.(type) {
+		case []any:
+			return len(v.([]any))
+		case []map[string]any:
+			return len(v.([]map[string]any))
+		}
+	}
+	return 0
+}
+
+// getArrayLength safely gets the length of various array types
+func getArrayLength(v any) int {
+	switch arr := v.(type) {
+	case []any:
+		return len(arr)
+	case []map[string]any:
+		return len(arr)
+	default:
+		return 0
 	}
 }
```
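The new `logStats` replaces a hand-maintained list of known metric keys with a type switch that flattens whatever scalars the stats map happens to contain, and summarizes collections as `<key>_count` fields. A minimal standalone sketch of that flattening idea, using a plain slice and `fmt.Println` in place of the project's structured logger (the `flatten` helper name is this sketch's own, not from the codebase):

```go
package main

import (
	"fmt"
	"sort"
)

// flatten extracts scalar values from a stats map and summarizes
// nested collections as "<key>_count", mirroring the logStats approach.
func flatten(name string, stats map[string]any) []any {
	fields := []any{"name", name}

	// Sort keys so the demo output is deterministic.
	keys := make([]string, 0, len(stats))
	for k := range stats {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	for _, key := range keys {
		switch v := stats[key].(type) {
		case string, bool, int, int64, uint64, float64:
			fields = append(fields, key, v) // scalars pass through
		case []any:
			fields = append(fields, key+"_count", len(v)) // arrays become counts
		case map[string]any:
			fields = append(fields, key+"_count", len(v)) // nested maps become counts
		}
	}
	return fields
}

func main() {
	stats := map[string]any{
		"total_processed": 42,
		"sinks":           []any{"tcp", "http"},
	}
	fmt.Println(flatten("app-logs", stats))
}
```

The advantage of this shape is that new metrics added by a source or sink show up in status reports without touching the reporter code.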
**`logwisp/src/internal/auth/authenticator.go`** (deleted, `@@ -1,652 +0,0 @@`):

```go
// FILE: logwisp/src/internal/auth/authenticator.go
package auth

import (
	"bufio"
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"net"
	"os"
	"strings"
	"sync"
	"time"

	"logwisp/src/internal/config"

	"github.com/golang-jwt/jwt/v5"
	"github.com/lixenwraith/log"
	"golang.org/x/crypto/bcrypt"
	"golang.org/x/time/rate"
)

// Prevent unbounded map growth
const maxAuthTrackedIPs = 10000

// Authenticator handles all authentication methods for a pipeline
type Authenticator struct {
	config       *config.AuthConfig
	logger       *log.Logger
	basicUsers   map[string]string // username -> password hash
	bearerTokens map[string]bool   // token -> valid
	jwtParser    *jwt.Parser
	jwtKeyFunc   jwt.Keyfunc
	mu           sync.RWMutex

	// Session tracking
	sessions  map[string]*Session
	sessionMu sync.RWMutex

	// Brute-force protection
	ipAuthAttempts map[string]*ipAuthState
	authMu         sync.RWMutex
}

// ADDED: Per-IP auth attempt tracking
type ipAuthState struct {
	limiter      *rate.Limiter
	failCount    int
	lastAttempt  time.Time
	blockedUntil time.Time
}

// Session represents an authenticated connection
type Session struct {
	ID           string
	Username     string
	Method       string // basic, bearer, jwt, mtls
	RemoteAddr   string
	CreatedAt    time.Time
	LastActivity time.Time
	Metadata     map[string]any
}

// New creates a new authenticator from config
func New(cfg *config.AuthConfig, logger *log.Logger) (*Authenticator, error) {
	if cfg == nil || cfg.Type == "none" {
		return nil, nil
	}

	a := &Authenticator{
		config:         cfg,
		logger:         logger,
		basicUsers:     make(map[string]string),
		bearerTokens:   make(map[string]bool),
		sessions:       make(map[string]*Session),
		ipAuthAttempts: make(map[string]*ipAuthState),
	}

	// Initialize Basic Auth users
	if cfg.Type == "basic" && cfg.BasicAuth != nil {
		for _, user := range cfg.BasicAuth.Users {
			a.basicUsers[user.Username] = user.PasswordHash
		}

		// Load users from file if specified
		if cfg.BasicAuth.UsersFile != "" {
			if err := a.loadUsersFile(cfg.BasicAuth.UsersFile); err != nil {
				return nil, fmt.Errorf("failed to load users file: %w", err)
			}
		}
	}

	// Initialize Bearer tokens
	if cfg.Type == "bearer" && cfg.BearerAuth != nil {
		for _, token := range cfg.BearerAuth.Tokens {
			a.bearerTokens[token] = true
		}

		// Setup JWT validation if configured
		if cfg.BearerAuth.JWT != nil {
			a.jwtParser = jwt.NewParser(
				jwt.WithValidMethods([]string{"HS256", "HS384", "HS512", "RS256", "RS384", "RS512", "ES256", "ES384", "ES512"}),
				jwt.WithLeeway(5*time.Second),
				jwt.WithExpirationRequired(),
			)

			// Setup key function
			if cfg.BearerAuth.JWT.SigningKey != "" {
				// Static key
				key := []byte(cfg.BearerAuth.JWT.SigningKey)
				a.jwtKeyFunc = func(token *jwt.Token) (interface{}, error) {
					return key, nil
				}
			} else if cfg.BearerAuth.JWT.JWKSURL != "" {
				// JWKS support would require additional implementation
				// ☢ SECURITY: JWKS rotation not implemented - tokens won't refresh keys
				return nil, fmt.Errorf("JWKS support not yet implemented")
			}
		}
	}

	// Start session cleanup
	go a.sessionCleanup()

	// Start auth attempt cleanup
	go a.authAttemptCleanup()

	logger.Info("msg", "Authenticator initialized",
		"component", "auth",
		"type", cfg.Type)

	return a, nil
}

// Check and enforce rate limits
func (a *Authenticator) checkRateLimit(remoteAddr string) error {
	ip, _, err := net.SplitHostPort(remoteAddr)
	if err != nil {
		ip = remoteAddr // Fallback for malformed addresses
	}

	a.authMu.Lock()
	defer a.authMu.Unlock()

	state, exists := a.ipAuthAttempts[ip]
	now := time.Now()

	if !exists {
		// Check map size limit before creating new entry
		if len(a.ipAuthAttempts) >= maxAuthTrackedIPs {
			// Evict an old entry using simplified LRU
			// Sample 20 random entries and evict the oldest
			const sampleSize = 20
			var oldestIP string
			oldestTime := now

			// Build sample
			sampled := 0
			for sampledIP, sampledState := range a.ipAuthAttempts {
				if sampledState.lastAttempt.Before(oldestTime) {
					oldestIP = sampledIP
					oldestTime = sampledState.lastAttempt
				}
				sampled++
				if sampled >= sampleSize {
					break
				}
			}

			// Evict the oldest from our sample
			if oldestIP != "" {
				delete(a.ipAuthAttempts, oldestIP)
				a.logger.Debug("msg", "Evicted old auth attempt state",
					"component", "auth",
					"evicted_ip", oldestIP,
					"last_seen", oldestTime)
			}
		}

		// Create new state for this IP
		// 5 attempts per minute, burst of 3
		state = &ipAuthState{
			limiter:     rate.NewLimiter(rate.Every(12*time.Second), 3),
			lastAttempt: now,
		}
		a.ipAuthAttempts[ip] = state
	}

	// Check if IP is temporarily blocked
	if now.Before(state.blockedUntil) {
		remaining := state.blockedUntil.Sub(now)
		a.logger.Warn("msg", "IP temporarily blocked",
			"component", "auth",
			"ip", ip,
			"remaining", remaining)
		// Sleep to slow down even blocked attempts
		time.Sleep(2 * time.Second)
		return fmt.Errorf("temporarily blocked, try again in %v", remaining.Round(time.Second))
	}

	// Check rate limit
	if !state.limiter.Allow() {
		state.failCount++

		// Only set new blockedUntil if not already blocked
		// This prevents indefinite block extension
		if state.blockedUntil.IsZero() || now.After(state.blockedUntil) {
			// Progressive blocking: 2^failCount minutes
			blockMinutes := 1 << min(state.failCount, 6) // Cap at 64 minutes
			state.blockedUntil = now.Add(time.Duration(blockMinutes) * time.Minute)

			a.logger.Warn("msg", "Rate limit exceeded, blocking IP",
				"component", "auth",
				"ip", ip,
				"fail_count", state.failCount,
				"block_duration", time.Duration(blockMinutes)*time.Minute)
		}

		return fmt.Errorf("rate limit exceeded")
	}

	state.lastAttempt = now
	return nil
}

// Record failed attempt
func (a *Authenticator) recordFailure(remoteAddr string) {
	ip, _, _ := net.SplitHostPort(remoteAddr)
	if ip == "" {
		ip = remoteAddr
	}

	a.authMu.Lock()
	defer a.authMu.Unlock()

	if state, exists := a.ipAuthAttempts[ip]; exists {
		state.failCount++
		state.lastAttempt = time.Now()
	}
}

// Reset failure count on success
func (a *Authenticator) recordSuccess(remoteAddr string) {
	ip, _, _ := net.SplitHostPort(remoteAddr)
	if ip == "" {
		ip = remoteAddr
	}

	a.authMu.Lock()
	defer a.authMu.Unlock()

	if state, exists := a.ipAuthAttempts[ip]; exists {
		state.failCount = 0
		state.blockedUntil = time.Time{}
	}
}

// AuthenticateHTTP handles HTTP authentication headers
func (a *Authenticator) AuthenticateHTTP(authHeader, remoteAddr string) (*Session, error) {
	if a == nil || a.config.Type == "none" {
		return &Session{
			ID:         generateSessionID(),
			Method:     "none",
			RemoteAddr: remoteAddr,
			CreatedAt:  time.Now(),
		}, nil
	}

	// Check rate limit
	if err := a.checkRateLimit(remoteAddr); err != nil {
		return nil, err
	}

	var session *Session
	var err error

	switch a.config.Type {
	case "basic":
		session, err = a.authenticateBasic(authHeader, remoteAddr)
	case "bearer":
		session, err = a.authenticateBearer(authHeader, remoteAddr)
	default:
		err = fmt.Errorf("unsupported auth type: %s", a.config.Type)
	}

	if err != nil {
		a.recordFailure(remoteAddr)
		time.Sleep(500 * time.Millisecond)
		return nil, err
	}

	a.recordSuccess(remoteAddr)
	return session, nil
}

// AuthenticateTCP handles TCP connection authentication
func (a *Authenticator) AuthenticateTCP(method, credentials, remoteAddr string) (*Session, error) {
	if a == nil || a.config.Type == "none" {
		return &Session{
			ID:         generateSessionID(),
			Method:     "none",
			RemoteAddr: remoteAddr,
			CreatedAt:  time.Now(),
		}, nil
	}

	// Check rate limit first
	if err := a.checkRateLimit(remoteAddr); err != nil {
		return nil, err
	}

	var session *Session
	var err error

	// TCP auth protocol: AUTH <method> <credentials>
	switch strings.ToLower(method) {
	case "token":
		if a.config.Type != "bearer" {
			err = fmt.Errorf("token auth not configured")
		} else {
			session, err = a.validateToken(credentials, remoteAddr)
		}

	case "basic":
		if a.config.Type != "basic" {
			err = fmt.Errorf("basic auth not configured")
		} else {
			// Expect base64(username:password)
			decoded, decErr := base64.StdEncoding.DecodeString(credentials)
			if decErr != nil {
				err = fmt.Errorf("invalid credentials encoding")
			} else {
				parts := strings.SplitN(string(decoded), ":", 2)
				if len(parts) != 2 {
					err = fmt.Errorf("invalid credentials format")
				} else {
					session, err = a.validateBasicAuth(parts[0], parts[1], remoteAddr)
				}
			}
		}

	default:
		err = fmt.Errorf("unsupported auth method: %s", method)
	}

	if err != nil {
		a.recordFailure(remoteAddr)
		// Add delay on failure
		time.Sleep(500 * time.Millisecond)
		return nil, err
	}

	a.recordSuccess(remoteAddr)
	return session, nil
}

func (a *Authenticator) authenticateBasic(authHeader, remoteAddr string) (*Session, error) {
	if !strings.HasPrefix(authHeader, "Basic ") {
		return nil, fmt.Errorf("invalid basic auth header")
	}

	payload, err := base64.StdEncoding.DecodeString(authHeader[6:])
	if err != nil {
		return nil, fmt.Errorf("invalid base64 encoding")
	}

	parts := strings.SplitN(string(payload), ":", 2)
	if len(parts) != 2 {
		return nil, fmt.Errorf("invalid credentials format")
	}

	return a.validateBasicAuth(parts[0], parts[1], remoteAddr)
}

func (a *Authenticator) validateBasicAuth(username, password, remoteAddr string) (*Session, error) {
	a.mu.RLock()
	expectedHash, exists := a.basicUsers[username]
	a.mu.RUnlock()

	if !exists {
		// ☢ SECURITY: Perform bcrypt anyway to prevent timing attacks
		bcrypt.CompareHashAndPassword([]byte("$2a$10$dummy.hash.to.prevent.timing.attacks"), []byte(password))
		return nil, fmt.Errorf("invalid credentials")
	}

	if err := bcrypt.CompareHashAndPassword([]byte(expectedHash), []byte(password)); err != nil {
		return nil, fmt.Errorf("invalid credentials")
	}

	session := &Session{
		ID:           generateSessionID(),
		Username:     username,
		Method:       "basic",
		RemoteAddr:   remoteAddr,
		CreatedAt:    time.Now(),
		LastActivity: time.Now(),
	}

	a.storeSession(session)
	return session, nil
}

func (a *Authenticator) authenticateBearer(authHeader, remoteAddr string) (*Session, error) {
	if !strings.HasPrefix(authHeader, "Bearer ") {
		return nil, fmt.Errorf("invalid bearer auth header")
	}

	token := authHeader[7:]
	return a.validateToken(token, remoteAddr)
}

func (a *Authenticator) validateToken(token, remoteAddr string) (*Session, error) {
	// Check static tokens first
	a.mu.RLock()
	isStatic := a.bearerTokens[token]
	a.mu.RUnlock()

	if isStatic {
		session := &Session{
			ID:           generateSessionID(),
			Method:       "bearer",
			RemoteAddr:   remoteAddr,
			CreatedAt:    time.Now(),
			LastActivity: time.Now(),
			Metadata:     map[string]any{"token_type": "static"},
		}
		a.storeSession(session)
		return session, nil
	}

	// Try JWT validation if configured
	if a.jwtParser != nil && a.jwtKeyFunc != nil {
		claims := jwt.MapClaims{}
		parsedToken, err := a.jwtParser.ParseWithClaims(token, claims, a.jwtKeyFunc)
		if err != nil {
			return nil, fmt.Errorf("JWT validation failed: %w", err)
		}

		if !parsedToken.Valid {
			return nil, fmt.Errorf("invalid JWT token")
		}

		// Explicit expiration check
		if exp, ok := claims["exp"].(float64); ok {
			if time.Now().Unix() > int64(exp) {
				return nil, fmt.Errorf("token expired")
			}
		} else {
			// Reject tokens without expiration
			return nil, fmt.Errorf("token missing expiration claim")
		}

		// Check not-before claim
		if nbf, ok := claims["nbf"].(float64); ok {
			if time.Now().Unix() < int64(nbf) {
				return nil, fmt.Errorf("token not yet valid")
			}
		}

		// Check issuer if configured
		if a.config.BearerAuth.JWT.Issuer != "" {
			if iss, ok := claims["iss"].(string); !ok || iss != a.config.BearerAuth.JWT.Issuer {
				return nil, fmt.Errorf("invalid token issuer")
			}
		}

		// Check audience if configured
		if a.config.BearerAuth.JWT.Audience != "" {
			// Handle both string and []string audience formats
			audValid := false
			switch aud := claims["aud"].(type) {
			case string:
				audValid = aud == a.config.BearerAuth.JWT.Audience
			case []interface{}:
				for _, aa := range aud {
					if audStr, ok := aa.(string); ok && audStr == a.config.BearerAuth.JWT.Audience {
						audValid = true
						break
					}
				}
			}
			if !audValid {
				return nil, fmt.Errorf("invalid token audience")
			}
		}

		username := ""
		if sub, ok := claims["sub"].(string); ok {
			username = sub
		}

		session := &Session{
			ID:           generateSessionID(),
			Username:     username,
			Method:       "jwt",
			RemoteAddr:   remoteAddr,
			CreatedAt:    time.Now(),
			LastActivity: time.Now(),
			Metadata:     map[string]any{"claims": claims},
		}
		a.storeSession(session)
		return session, nil
	}

	return nil, fmt.Errorf("invalid token")
}

func (a *Authenticator) storeSession(session *Session) {
	a.sessionMu.Lock()
	a.sessions[session.ID] = session
	a.sessionMu.Unlock()

	a.logger.Info("msg", "Session created",
		"component", "auth",
		"session_id", session.ID,
		"username", session.Username,
		"method", session.Method,
		"remote_addr", session.RemoteAddr)
}

func (a *Authenticator) sessionCleanup() {
	ticker := time.NewTicker(5 * time.Minute)
	defer ticker.Stop()

	for range ticker.C {
		a.sessionMu.Lock()
		now := time.Now()
		for id, session := range a.sessions {
			if now.Sub(session.LastActivity) > 30*time.Minute {
				delete(a.sessions, id)
				a.logger.Debug("msg", "Session expired",
					"component", "auth",
					"session_id", id)
			}
		}
		a.sessionMu.Unlock()
	}
}

// Cleanup old auth attempts
func (a *Authenticator) authAttemptCleanup() {
	ticker := time.NewTicker(5 * time.Minute)
	defer ticker.Stop()

	for range ticker.C {
		a.authMu.Lock()
		now := time.Now()
		for ip, state := range a.ipAuthAttempts {
			// Remove entries older than 1 hour with no recent activity
			if now.Sub(state.lastAttempt) > time.Hour {
				delete(a.ipAuthAttempts, ip)
				a.logger.Debug("msg", "Cleaned up auth attempt state",
					"component", "auth",
					"ip", ip)
			}
		}
		a.authMu.Unlock()
	}
}

func (a *Authenticator) loadUsersFile(path string) error {
	file, err := os.Open(path)
	if err != nil {
		return fmt.Errorf("could not open users file: %w", err)
	}
	defer file.Close()

	scanner := bufio.NewScanner(file)
	lineNumber := 0
	for scanner.Scan() {
		lineNumber++
		line := strings.TrimSpace(scanner.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue // Skip empty lines and comments
		}

		parts := strings.SplitN(line, ":", 2)
		if len(parts) != 2 {
			a.logger.Warn("msg", "Skipping malformed line in users file",
				"component", "auth",
				"path", path,
				"line_number", lineNumber)
			continue
		}
		username, hash := strings.TrimSpace(parts[0]), strings.TrimSpace(parts[1])
		if username != "" && hash != "" {
			// File-based users can overwrite inline users if names conflict
			a.basicUsers[username] = hash
		}
	}

	if err := scanner.Err(); err != nil {
		return fmt.Errorf("error reading users file: %w", err)
	}

	a.logger.Info("msg", "Loaded users from file",
		"component", "auth",
		"path", path,
		"user_count", len(a.basicUsers))

	return nil
}

func generateSessionID() string {
	b := make([]byte, 32)
	if _, err := rand.Read(b); err != nil {
		// Fallback to a less secure method if crypto/rand fails
		return fmt.Sprintf("fallback-%d", time.Now().UnixNano())
	}
	return base64.URLEncoding.EncodeToString(b)
}

// ValidateSession checks if a session is still valid
func (a *Authenticator) ValidateSession(sessionID string) bool {
	if a == nil {
		return true
	}

	a.sessionMu.RLock()
	session, exists := a.sessions[sessionID]
	a.sessionMu.RUnlock()

	if !exists {
		return false
	}

	// Update activity
	a.sessionMu.Lock()
	session.LastActivity = time.Now()
	a.sessionMu.Unlock()

	return true
}

// GetStats returns authentication statistics
func (a *Authenticator) GetStats() map[string]any {
	if a == nil {
		return map[string]any{"enabled": false}
	}

	a.sessionMu.RLock()
	sessionCount := len(a.sessions)
	a.sessionMu.RUnlock()

	return map[string]any{
		"enabled":         true,
		"type":            a.config.Type,
		"active_sessions": sessionCount,
		"basic_users":     len(a.basicUsers),
		"static_tokens":   len(a.bearerTokens),
	}
}
```
**`logwisp/src/internal/config/auth.go`** (deleted, `@@ -1,77 +0,0 @@`):

```go
// FILE: logwisp/src/internal/config/auth.go
package config

import (
	"fmt"
)

type AuthConfig struct {
	// Authentication type: "none", "basic", "bearer", "mtls"
	Type string `toml:"type"`

	// Basic auth
	BasicAuth *BasicAuthConfig `toml:"basic_auth"`

	// Bearer token auth
	BearerAuth *BearerAuthConfig `toml:"bearer_auth"`
}

type BasicAuthConfig struct {
	// Static users (for simple deployments)
	Users []BasicAuthUser `toml:"users"`

	// External auth file
	UsersFile string `toml:"users_file"`

	// Realm for WWW-Authenticate header
	Realm string `toml:"realm"`
}

type BasicAuthUser struct {
	Username string `toml:"username"`
	// Password hash (bcrypt)
	PasswordHash string `toml:"password_hash"`
}

type BearerAuthConfig struct {
	// Static tokens
	Tokens []string `toml:"tokens"`

	// JWT validation
	JWT *JWTConfig `toml:"jwt"`
}

type JWTConfig struct {
	// JWKS URL for key discovery
	JWKSURL string `toml:"jwks_url"`

	// Static signing key (if not using JWKS)
	SigningKey string `toml:"signing_key"`

	// Expected issuer
	Issuer string `toml:"issuer"`

	// Expected audience
	Audience string `toml:"audience"`
}

func validateAuth(pipelineName string, auth *AuthConfig) error {
	if auth == nil {
		return nil
	}

	validTypes := map[string]bool{"none": true, "basic": true, "bearer": true, "mtls": true}
	if !validTypes[auth.Type] {
		return fmt.Errorf("pipeline '%s': invalid auth type: %s", pipelineName, auth.Type)
	}

	if auth.Type == "basic" && auth.BasicAuth == nil {
		return fmt.Errorf("pipeline '%s': basic auth type specified but config missing", pipelineName)
	}

	if auth.Type == "bearer" && auth.BearerAuth == nil {
		return fmt.Errorf("pipeline '%s': bearer auth type specified but config missing", pipelineName)
	}

	return nil
}
```
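The `toml` struct tags on `AuthConfig` and its nested types imply a per-pipeline auth section shaped roughly as follows. This is a hedged sketch: the field names come from the tags above, but the pipeline table layout and all values are illustrative placeholders, not a configuration shipped with the project:

```toml
[[pipelines]]
name = "example"

[pipelines.auth]
type = "basic"  # one of: "none", "basic", "bearer", "mtls"

[pipelines.auth.basic_auth]
realm = "LogWisp"
users_file = "/etc/logwisp/users"  # lines of "username:bcrypt-hash"

[[pipelines.auth.basic_auth.users]]
username = "alice"
password_hash = "$2a$10$..."  # bcrypt hash (placeholder)
```

For bearer auth, `[pipelines.auth.bearer_auth]` would instead carry `tokens` and an optional `[pipelines.auth.bearer_auth.jwt]` table with `signing_key`, `issuer`, and `audience`.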
@@ -1,24 +1,266 @@
-// FILE: logwisp/src/internal/config/config.go
 package config
 
+// --- LogWisp Configuration Options ---
+
+// Config is the top-level configuration structure for the LogWisp application
 type Config struct {
 	// Top-level flags for application control
-	Background bool `toml:"background"`
 	ShowVersion bool `toml:"version"`
 	Quiet bool `toml:"quiet"`
 
 	// Runtime behavior flags
-	DisableStatusReporter bool `toml:"disable_status_reporter"`
-	ConfigAutoReload bool `toml:"config_auto_reload"`
-	ConfigSaveOnExit bool `toml:"config_save_on_exit"`
+	StatusReporter bool `toml:"status_reporter"`
+	ConfigAutoReload bool `toml:"auto_reload"`
 
-	// Internal flag indicating daemonized child process
-	BackgroundDaemon bool `toml:"background-daemon"`
-
 	// Configuration file path
-	ConfigFile string `toml:"config"`
+	ConfigFile string `toml:"config_file"`
 
 	// Existing fields
 	Logging *LogConfig `toml:"logging"`
 	Pipelines []PipelineConfig `toml:"pipelines"`
 }
 
+// --- Logging Options ---
+
+// LogConfig represents the logging configuration for the LogWisp application itself
+type LogConfig struct {
+	// Output mode: "file", "stdout", "stderr", "split", "all", "none"
+	Output string `toml:"output"`
+
+	// Log level: "debug", "info", "warn", "error"
+	Level string `toml:"level"`
+
+	// Format: "raw", "txt", "json"
+	Format string `toml:"format"`
+
+	// Sanitization policy for console output
+	Sanitization string `toml:"sanitization"`
+
+	// File output settings (when Output includes "file" or "all")
+	File *LogFileConfig `toml:"file"`
+
+	// Console output settings
+	Console *LogConsoleConfig `toml:"console"`
+}
+
+// LogFileConfig defines settings for file-based application logging
+type LogFileConfig struct {
+	// Directory for log files
+	Directory string `toml:"directory"`
+
+	// Base name for log files
+	Name string `toml:"name"`
+
+	// Maximum size per log file in MB
+	MaxSizeMB int64 `toml:"max_size_mb"`
+
+	// Maximum total size of all logs in MB
+	MaxTotalSizeMB int64 `toml:"max_total_size_mb"`
+
+	// Log retention in hours (0 = disabled)
+	RetentionHours float64 `toml:"retention_hours"`
+}
+
+// LogConsoleConfig defines settings for console-based application logging
+type LogConsoleConfig struct {
+	// Target for console output: "stdout", "stderr"
+	Target string `toml:"target"`
+}
+
+// --- Pipeline ---
+
+// PipelineConfig defines a complete data flow from sources to sinks
+type PipelineConfig struct {
+	Name string `toml:"name"`
+	Flow *FlowConfig `toml:"flow"`
+
+	PluginSources []PluginSourceConfig `toml:"plugin_sources,omitempty"`
+	PluginSinks []PluginSinkConfig `toml:"plugin_sinks,omitempty"`
+}
+
+// --- Flow ---
+
+// FlowConfig consolidates all processing stages between sources and sinks
+type FlowConfig struct {
+	Heartbeat *HeartbeatConfig `toml:"heartbeat"`
+	RateLimit *RateLimitConfig `toml:"rate_limit"`
+	Filters []FilterConfig `toml:"filters"`
+	Format *FormatConfig `toml:"format"`
+}
+
+// --- Heartbeat Options ---
+
+// HeartbeatConfig defines settings for periodic keep-alive or status messages
+type HeartbeatConfig struct {
+	Enabled bool `toml:"enabled"`
+	IntervalMS int64 `toml:"interval_ms"`
+	IncludeTimestamp bool `toml:"include_timestamp"`
+	IncludeStats bool `toml:"include_stats"`
+	Format string `toml:"format"`
+}
+
+// --- Formatter Options ---
+
+// FormatConfig is a polymorphic struct representing log entry formatting options
+type FormatConfig struct {
+	Type string `toml:"type"` // "json", "txt", "raw"
+	Flags int64 `toml:"flags"`
+	TimestampFormat string `toml:"timestamp_format"`
+	SanitizerPolicy string `toml:"sanitizer_policy"` // "raw", "json", "txt", "shell"
+}
+
+// --- Rate Limit Options ---
+
+// RateLimitPolicy defines the action to take when a rate limit is exceeded
+type RateLimitPolicy int
+
+const (
+	// PolicyPass allows all logs through, effectively disabling the limiter
+	PolicyPass RateLimitPolicy = iota
+	// PolicyDrop drops logs that exceed the rate limit
+	PolicyDrop
+)
+
+// RateLimitConfig defines the configuration for pipeline-level rate limiting
+type RateLimitConfig struct {
+	// Rate is the number of log entries allowed per second. Default: 0 (disabled)
+	Rate float64 `toml:"rate"`
+	// Burst is the maximum number of log entries that can be sent in a short burst. Defaults to the Rate
+	Burst float64 `toml:"burst"`
+	// Policy defines the action to take when the limit is exceeded. "pass" or "drop"
+	Policy string `toml:"policy"`
+	// MaxEntrySizeBytes is the maximum allowed size for a single log entry. 0 = no limit
+	MaxEntrySizeBytes int64 `toml:"max_entry_size_bytes"`
+}
+
+// --- Filter Options ---
+
+// FilterType represents the filter's behavior (include or exclude)
+type FilterType string
+
+const (
+	// FilterTypeInclude specifies that only matching logs will pass
+	FilterTypeInclude FilterType = "include" // Whitelist - only matching logs pass
+	// FilterTypeExclude specifies that matching logs will be dropped
+	FilterTypeExclude FilterType = "exclude" // Blacklist - matching logs are dropped
+)
+
+// FilterLogic represents how multiple filter patterns are combined
+type FilterLogic string
+
+const (
+	// FilterLogicOr specifies that a match on any pattern is sufficient
+	FilterLogicOr FilterLogic = "or" // Match any pattern
+	// FilterLogicAnd specifies that all patterns must match
+	FilterLogicAnd FilterLogic = "and" // Match all patterns
+)
+
+// FilterConfig represents the configuration for a single filter
+type FilterConfig struct {
+	Type FilterType `toml:"type"`
+	Logic FilterLogic `toml:"logic"`
+	Patterns []string `toml:"patterns"`
+}
+
+// --- Source Options ---
+
+// PluginSourceConfig represents a source plugin instance configuration
+type PluginSourceConfig struct {
+	ID string `toml:"id"`
+	Type string `toml:"type"`
+	Config map[string]any `toml:"config"`
+	ConfigFile string `toml:"config_file,omitempty"` // TODO: support for include/source mechanism for nested config
+}
+
+// // SourceConfig is a polymorphic struct representing a single data source
+// type SourceConfig struct {
+// 	Type string `toml:"type"`
+//
+// 	// Polymorphic - only one populated based on type
+// 	File *FileSourceOptions `toml:"file,omitempty"`
+// 	Console *ConsoleSourceOptions `toml:"console,omitempty"`
+// }
+
+// NullSourceOptions defines settings for a null source (no configuration needed)
+type NullSourceOptions struct{}
+
+// RandomSourceOptions defines settings for a random log generator source
+type RandomSourceOptions struct {
+	IntervalMS int64 `toml:"interval_ms"`
+	JitterMS int64 `toml:"jitter_ms"`
+	Format string `toml:"format"`
+	Length int64 `toml:"length"`
+	Special bool `toml:"special"`
+}
+
+// FileSourceOptions defines settings for a file-based source
+type FileSourceOptions struct {
+	Directory string `toml:"directory"`
+	Pattern string `toml:"pattern"` // glob pattern
+	CheckIntervalMS int64 `toml:"check_interval_ms"`
+}
+
+// ConsoleSourceOptions defines settings for a stdin-based source
+type ConsoleSourceOptions struct {
+	BufferSize int64 `toml:"buffer_size"`
+}
+
+// --- Sink Options ---
+
+// PluginSinkConfig represents a sink plugin instance configuration
+type PluginSinkConfig struct {
+	ID string `toml:"id"`
+	Type string `toml:"type"`
+	Config map[string]any `toml:"config"`
+	ConfigFile string `toml:"config_file,omitempty"` // TODO: support for include/source mechanism for nested config
+}
+
+// // SinkConfig is a polymorphic struct representing a single data sink
+// type SinkConfig struct {
+// 	Type string `toml:"type"`
+//
+// 	// Polymorphic - only one populated based on type
+// 	Console *ConsoleSinkOptions `toml:"console,omitempty"`
+// 	File *FileSinkOptions `toml:"file,omitempty"`
+// }
+
+// NullSinkOptions defines settings for a null sink (no configuration needed)
+type NullSinkOptions struct{}
+
+// ConsoleSinkOptions defines settings for a console-based sink
+type ConsoleSinkOptions struct {
+	Target string `toml:"target"` // "stdout", "stderr"
+	BufferSize int64 `toml:"buffer_size"`
+}
+
+// FileSinkOptions defines settings for a file-based sink
+type FileSinkOptions struct {
+	Directory string `toml:"directory"`
+	Name string `toml:"name"`
+	MaxSizeMB int64 `toml:"max_size_mb"`
+	MaxTotalSizeMB int64 `toml:"max_total_size_mb"`
+	MinDiskFreeMB int64 `toml:"min_disk_free_mb"`
+	RetentionHours float64 `toml:"retention_hours"`
+	BufferSize int64 `toml:"buffer_size"`
+	FlushIntervalMs int64 `toml:"flush_interval_ms"`
+}
+
+// TCPSinkOptions defines settings for a TCP server sink
+type TCPSinkOptions struct {
+	Host string `toml:"host"`
+	Port int64 `toml:"port"`
+	BufferSize int64 `toml:"buffer_size"`
+	WriteTimeout int64 `toml:"write_timeout_ms"`
+	KeepAlive bool `toml:"keep_alive"`
+	KeepAlivePeriod int64 `toml:"keep_alive_period_ms"`
+}
+
+// HTTPSinkOptions defines settings for an HTTP SSE server sink
+type HTTPSinkOptions struct {
+	Host string `toml:"host"`
+	Port int64 `toml:"port"`
+	StreamPath string `toml:"stream_path"`
+	StatusPath string `toml:"status_path"`
+	BufferSize int64 `toml:"buffer_size"`
+	WriteTimeout int64 `toml:"write_timeout_ms"`
+}
@@ -1,65 +0,0 @@
-// FILE: logwisp/src/internal/config/filter.go
-package config
-
-import (
-	"fmt"
-	"regexp"
-)
-
-// FilterType represents the filter type
-type FilterType string
-
-const (
-	FilterTypeInclude FilterType = "include" // Whitelist - only matching logs pass
-	FilterTypeExclude FilterType = "exclude" // Blacklist - matching logs are dropped
-)
-
-// FilterLogic represents how multiple patterns are combined
-type FilterLogic string
-
-const (
-	FilterLogicOr  FilterLogic = "or"  // Match any pattern
-	FilterLogicAnd FilterLogic = "and" // Match all patterns
-)
-
-// FilterConfig represents filter configuration
-type FilterConfig struct {
-	Type     FilterType  `toml:"type"`
-	Logic    FilterLogic `toml:"logic"`
-	Patterns []string    `toml:"patterns"`
-}
-
-func validateFilter(pipelineName string, filterIndex int, cfg *FilterConfig) error {
-	// Validate filter type
-	switch cfg.Type {
-	case FilterTypeInclude, FilterTypeExclude, "":
-		// Valid types
-	default:
-		return fmt.Errorf("pipeline '%s' filter[%d]: invalid type '%s' (must be 'include' or 'exclude')",
-			pipelineName, filterIndex, cfg.Type)
-	}
-
-	// Validate filter logic
-	switch cfg.Logic {
-	case FilterLogicOr, FilterLogicAnd, "":
-		// Valid logic
-	default:
-		return fmt.Errorf("pipeline '%s' filter[%d]: invalid logic '%s' (must be 'or' or 'and')",
-			pipelineName, filterIndex, cfg.Logic)
-	}
-
-	// Empty patterns is valid - passes everything
-	if len(cfg.Patterns) == 0 {
-		return nil
-	}
-
-	// Validate regex patterns
-	for i, pattern := range cfg.Patterns {
-		if _, err := regexp.Compile(pattern); err != nil {
-			return fmt.Errorf("pipeline '%s' filter[%d] pattern[%d] '%s': invalid regex: %w",
-				pipelineName, filterIndex, i, pattern, err)
-		}
-	}
-
-	return nil
-}
@@ -1,58 +0,0 @@
-// FILE: logwisp/src/internal/config/ratelimit.go
-package config
-
-import (
-	"fmt"
-	"strings"
-)
-
-// RateLimitPolicy defines the action to take when a rate limit is exceeded.
-type RateLimitPolicy int
-
-const (
-	// PolicyPass allows all logs through, effectively disabling the limiter.
-	PolicyPass RateLimitPolicy = iota
-	// PolicyDrop drops logs that exceed the rate limit.
-	PolicyDrop
-)
-
-// RateLimitConfig defines the configuration for pipeline-level rate limiting.
-type RateLimitConfig struct {
-	// Rate is the number of log entries allowed per second. Default: 0 (disabled).
-	Rate float64 `toml:"rate"`
-	// Burst is the maximum number of log entries that can be sent in a short burst. Defaults to the Rate.
-	Burst float64 `toml:"burst"`
-	// Policy defines the action to take when the limit is exceeded. "pass" or "drop".
-	Policy string `toml:"policy"`
-	// MaxEntrySizeBytes is the maximum allowed size for a single log entry. 0 = no limit.
-	MaxEntrySizeBytes int64 `toml:"max_entry_size_bytes"`
-}
-
-func validateRateLimit(pipelineName string, cfg *RateLimitConfig) error {
-	if cfg == nil {
-		return nil
-	}
-
-	if cfg.Rate < 0 {
-		return fmt.Errorf("pipeline '%s': rate limit rate cannot be negative", pipelineName)
-	}
-
-	if cfg.Burst < 0 {
-		return fmt.Errorf("pipeline '%s': rate limit burst cannot be negative", pipelineName)
-	}
-
-	if cfg.MaxEntrySizeBytes < 0 {
-		return fmt.Errorf("pipeline '%s': max entry size bytes cannot be negative", pipelineName)
-	}
-
-	// Validate policy
-	switch strings.ToLower(cfg.Policy) {
-	case "", "pass", "drop":
-		// Valid policies
-	default:
-		return fmt.Errorf("pipeline '%s': invalid rate limit policy '%s' (must be 'pass' or 'drop')",
-			pipelineName, cfg.Policy)
-	}
-
-	return nil
-}
@@ -1,4 +1,3 @@
-// FILE: logwisp/src/internal/config/loader.go
 package config
 
 import (
@@ -8,127 +7,157 @@ import (
 	"path/filepath"
 	"strings"
 
+	"logwisp/src/internal/core"
+
 	lconfig "github.com/lixenwraith/config"
 )
 
-// LoadContext holds all configuration sources
-type LoadContext struct {
-	FlagConfig any // Parsed command-line flags from main
-}
+// configManager holds the global instance of the configuration manager
+var configManager *lconfig.Config
 
-func defaults() *Config {
-	return &Config{
-		// Top-level flag defaults
-		Background: false,
-		ShowVersion: false,
-		Quiet: false,
-
-		// Runtime behavior defaults
-		DisableStatusReporter: false,
-		ConfigAutoReload: false,
-		ConfigSaveOnExit: false,
-
-		// Child process indicator
-		BackgroundDaemon: false,
-
-		// Existing defaults
-		Logging: DefaultLogConfig(),
-		Pipelines: []PipelineConfig{
-			{
-				Name: "default",
-				Sources: []SourceConfig{
-					{
-						Type: "directory",
-						Options: map[string]any{
-							"path": "./",
-							"pattern": "*.log",
-							"check_interval_ms": int64(100),
-						},
-					},
-				},
-				Sinks: []SinkConfig{
-					{
-						Type: "http",
-						Options: map[string]any{
-							"port": int64(8080),
-							"buffer_size": int64(1000),
-							"stream_path": "/stream",
-							"status_path": "/status",
-							"heartbeat": map[string]any{
-								"enabled": true,
-								"interval_seconds": int64(30),
-								"include_timestamp": true,
-								"include_stats": false,
-								"format": "comment",
-							},
-						},
-					},
-				},
-			},
-		},
-	}
-}
-
-// Load is the single entry point for loading all configuration
+// Load is the single entry point for loading all application configuration
 func Load(args []string) (*Config, error) {
 	configPath, isExplicit := resolveConfigPath(args)
 	// Build configuration with all sources
 
+	// Create target config instance that will be populated
+	finalConfig := &Config{}
+
+	// Builder handles loading, populating the target struct, and validation
 	cfg, err := lconfig.NewBuilder().
-		WithDefaults(defaults()).
-		WithEnvPrefix("LOGWISP_").
-		WithEnvTransform(customEnvTransform).
-		WithArgs(args).
-		WithFile(configPath).
+		WithTarget(finalConfig).              // Typed target struct
+		WithDefaults(defaults()).             // Default values
 		WithSources(
 			lconfig.SourceCLI,
 			lconfig.SourceEnv,
 			lconfig.SourceFile,
 			lconfig.SourceDefault,
 		).
+		WithEnvTransform(customEnvTransform). // Convert '.' to '_' in env separation
+		WithEnvPrefix("LOGWISP_").            // Environment variable prefix
+		WithArgs(args).                       // Command-line arguments
+		WithFile(configPath).                 // TOML config file
+		WithFileFormat("toml").               // Explicit format
+		WithTypedValidator(ValidateConfig).   // Centralized validation
+		WithSecurityOptions(lconfig.SecurityOptions{
+			PreventPathTraversal: true,
+			MaxFileSize: 10 * 1024 * 1024, // 10MB max config
+		}).
 		Build()
 
 	if err != nil {
 		// Handle file not found errors - maintain existing behavior
 		if errors.Is(err, lconfig.ErrConfigNotFound) {
 			if isExplicit {
-				return nil, fmt.Errorf("config file not found: %s", configPath)
+				// Return empty config with file path
+				finalConfig.ConfigFile = configPath
+				return finalConfig, fmt.Errorf("config file not found: %s", configPath)
 			}
-			// If the default config file is not found, it's not an error
+			// If the default config file is not found, it's not an error, default/cli/env will be used
 		} else {
-			return nil, fmt.Errorf("failed to load config: %w", err)
+			return nil, fmt.Errorf("failed to load or validate config: %w", err)
 		}
 	}
 
-	// Scan into final config struct - using new interface
-	finalConfig := &Config{}
-	if err := cfg.Scan(finalConfig); err != nil {
-		return nil, fmt.Errorf("failed to scan config: %w", err)
-	}
-
-	// Set config file path if it exists
-	if _, err := os.Stat(configPath); err == nil {
-		finalConfig.ConfigFile = configPath
-	}
-
-	// Ensure critical fields are not nil
-	if finalConfig.Logging == nil {
-		finalConfig.Logging = DefaultLogConfig()
-	}
+	// Store the config file path for hot reload
+	finalConfig.ConfigFile = configPath
+
+	// Store the manager for hot reload
+	configManager = cfg
+
+	// Start watcher if auto-reload is enabled
+	if finalConfig.ConfigAutoReload {
+		watchOpts := lconfig.WatchOptions{
+			PollInterval: core.ReloadWatchPollInterval,
+			Debounce: core.ReloadWatchDebounce,
+			ReloadTimeout: core.ReloadWatchTimeout,
+			VerifyPermissions: true,
+		}
+		cfg.AutoUpdateWithOptions(watchOpts)
+	}
 
-	// Apply console target overrides if needed
-	if err := applyConsoleTargetOverrides(finalConfig); err != nil {
-		return nil, fmt.Errorf("failed to apply console target overrides: %w", err)
-	}
-
-	// Validate configuration
-	return finalConfig, finalConfig.validate()
+	return finalConfig, nil
 }
 
-// resolveConfigPath returns the configuration file path
+// GetConfigManager returns the global configuration manager instance for hot-reloading
+func GetConfigManager() *lconfig.Config {
+	return configManager
+}
+
+// defaults provides the default configuration values for the application
+func defaults() *Config {
+	return &Config{
+		// Top-level flag defaults
+		ShowVersion: false,
+		Quiet: false,
+
+		// Runtime behavior defaults
+		StatusReporter: true,
+		ConfigAutoReload: false,
+
+		// Existing defaults
+		Logging: &LogConfig{
+			Output: "stdout",
+			Level: "info",
+			File: &LogFileConfig{
+				Directory: "./log",
+				Name: "logwisp",
+				MaxSizeMB: 100,
+				MaxTotalSizeMB: 1000,
+				RetentionHours: 168, // 7 days
+			},
+			Console: &LogConsoleConfig{
+				Target: "stdout",
+			},
+		},
+		Pipelines: []PipelineConfig{
+			{
+				Name: "default_pipeline",
+				Flow: &FlowConfig{
+					RateLimit: &RateLimitConfig{
+						Rate: 5,
+						Burst: 10,
+						Policy: "drop",
+						MaxEntrySizeBytes: 65536,
+					},
+					Format: &FormatConfig{
+						Type: "json",
+						SanitizerPolicy: "json",
+					},
+				},
+				PluginSources: []PluginSourceConfig{
+					{
+						ID: "default_source",
+						Type: "random",
+						Config: map[string]any{
+							"special": true,
+						},
+						// Config: &FileSourceOptions{
+						// 	Directory: "./",
+						// 	Pattern: "*.log",
+						// 	CheckIntervalMS: int64(100),
+					},
+				},
+				PluginSinks: []PluginSinkConfig{
+					{
+						ID: "default_sink",
+						Type: "console",
+						Config: map[string]any{
+							"target": "stdout",
+							"buffer_size": 100,
+						},
+					},
+				},
+			},
+		},
+	}
+}
+
+// resolveConfigPath determines the configuration file path based on CLI args, env vars, and default locations
 func resolveConfigPath(args []string) (path string, isExplicit bool) {
 	// 1. Check for --config flag in command-line arguments (highest precedence)
 	for i, arg := range args {
-		if (arg == "--config" || arg == "-c") && i+1 < len(args) {
+		if arg == "-c" {
 			return args[i+1], true
 		}
 		if strings.HasPrefix(arg, "--config=") {
@@ -160,48 +189,10 @@ func resolveConfigPath(args []string) (path string, isExplicit bool) {
 	return "logwisp.toml", false
 }
 
+// customEnvTransform converts TOML-style config paths (e.g., logging.level) to environment variable format (LOGGING_LEVEL)
 func customEnvTransform(path string) string {
 	env := strings.ReplaceAll(path, ".", "_")
 	env = strings.ToUpper(env)
 	// env = "LOGWISP_" + env // already added by WithEnvPrefix
 	return env
 }
 
-// applyConsoleTargetOverrides centralizes console target configuration
-func applyConsoleTargetOverrides(cfg *Config) error {
-	// Check environment variable for console target override
-	consoleTarget := os.Getenv("LOGWISP_CONSOLE_TARGET")
-	if consoleTarget == "" {
-		return nil
-	}
-
-	// Validate console target value
-	validTargets := map[string]bool{
-		"stdout": true,
-		"stderr": true,
-		"split": true,
-	}
-	if !validTargets[consoleTarget] {
-		return fmt.Errorf("invalid LOGWISP_CONSOLE_TARGET value: %s", consoleTarget)
-	}
-
-	// Apply to all console sinks
-	for i, pipeline := range cfg.Pipelines {
-		for j, sink := range pipeline.Sinks {
-			if sink.Type == "stdout" || sink.Type == "stderr" {
-				if sink.Options == nil {
-					cfg.Pipelines[i].Sinks[j].Options = make(map[string]any)
-				}
-				// Set target for split mode handling
-				cfg.Pipelines[i].Sinks[j].Options["target"] = consoleTarget
-			}
-		}
-	}
-
-	// Also update logging console target if applicable
-	if cfg.Logging.Console != nil && consoleTarget == "split" {
-		cfg.Logging.Console.Target = "split"
-	}
-
-	return nil
-}
@@ -1,99 +0,0 @@
-// FILE: logwisp/src/internal/config/logging.go
-package config
-
-import "fmt"
-
-// LogConfig represents logging configuration for LogWisp
-type LogConfig struct {
-	// Output mode: "file", "stdout", "stderr", "both", "none"
-	Output string `toml:"output"`
-
-	// Log level: "debug", "info", "warn", "error"
-	Level string `toml:"level"`
-
-	// File output settings (when Output includes "file" or "both")
-	File *LogFileConfig `toml:"file"`
-
-	// Console output settings
-	Console *LogConsoleConfig `toml:"console"`
-}
-
-type LogFileConfig struct {
-	// Directory for log files
-	Directory string `toml:"directory"`
-
-	// Base name for log files
-	Name string `toml:"name"`
-
-	// Maximum size per log file in MB
-	MaxSizeMB int64 `toml:"max_size_mb"`
-
-	// Maximum total size of all logs in MB
-	MaxTotalSizeMB int64 `toml:"max_total_size_mb"`
-
-	// Log retention in hours (0 = disabled)
-	RetentionHours float64 `toml:"retention_hours"`
-}
-
-type LogConsoleConfig struct {
-	// Target for console output: "stdout", "stderr", "split"
-	// "split": info/debug to stdout, warn/error to stderr
-	Target string `toml:"target"`
-
-	// Format: "txt" or "json"
-	Format string `toml:"format"`
-}
-
-// DefaultLogConfig returns sensible logging defaults
-func DefaultLogConfig() *LogConfig {
-	return &LogConfig{
-		Output: "stderr",
-		Level: "info",
-		File: &LogFileConfig{
-			Directory: "./logs",
-			Name: "logwisp",
-			MaxSizeMB: 100,
-			MaxTotalSizeMB: 1000,
-			RetentionHours: 168, // 7 days
-		},
-		Console: &LogConsoleConfig{
-			Target: "stderr",
-			Format: "txt",
-		},
-	}
-}
-
-func validateLogConfig(cfg *LogConfig) error {
-	validOutputs := map[string]bool{
-		"file": true, "stdout": true, "stderr": true,
-		"both": true, "none": true,
-	}
-	if !validOutputs[cfg.Output] {
-		return fmt.Errorf("invalid log output mode: %s", cfg.Output)
-	}
-
-	validLevels := map[string]bool{
-		"debug": true, "info": true, "warn": true, "error": true,
-	}
-	if !validLevels[cfg.Level] {
-		return fmt.Errorf("invalid log level: %s", cfg.Level)
-	}
-
-	if cfg.Console != nil {
-		validTargets := map[string]bool{
-			"stdout": true, "stderr": true, "split": true,
-		}
-		if !validTargets[cfg.Console.Target] {
-			return fmt.Errorf("invalid console target: %s", cfg.Console.Target)
-		}
-
-		validFormats := map[string]bool{
-			"txt": true, "json": true, "": true,
-		}
-		if !validFormats[cfg.Console.Format] {
-			return fmt.Errorf("invalid console format: %s", cfg.Console.Format)
-		}
-	}
-
-	return nil
-}
@ -1,383 +0,0 @@
// FILE: logwisp/src/internal/config/pipeline.go
package config

import (
	"fmt"
	"net"
	"net/url"
	"path/filepath"
	"strings"
)

// PipelineConfig represents a data processing pipeline
type PipelineConfig struct {
	// Pipeline identifier (used in logs and metrics)
	Name string `toml:"name"`

	// Data sources for this pipeline
	Sources []SourceConfig `toml:"sources"`

	// Rate limiting
	RateLimit *RateLimitConfig `toml:"rate_limit"`

	// Filter configuration
	Filters []FilterConfig `toml:"filters"`

	// Log formatting configuration
	Format        string         `toml:"format"`
	FormatOptions map[string]any `toml:"format_options"`

	// Output sinks for this pipeline
	Sinks []SinkConfig `toml:"sinks"`

	// Authentication/Authorization (applies to network sinks)
	Auth *AuthConfig `toml:"auth"`
}

// SourceConfig represents an input data source
type SourceConfig struct {
	// Source type: "directory", "file", "stdin", etc.
	Type string `toml:"type"`

	// Type-specific configuration options
	Options map[string]any `toml:"options"`
}

// SinkConfig represents an output destination
type SinkConfig struct {
	// Sink type: "http", "tcp", "file", "stdout", "stderr"
	Type string `toml:"type"`

	// Type-specific configuration options
	Options map[string]any `toml:"options"`
}

func validateSource(pipelineName string, sourceIndex int, cfg *SourceConfig) error {
	if cfg.Type == "" {
		return fmt.Errorf("pipeline '%s' source[%d]: missing type", pipelineName, sourceIndex)
	}

	switch cfg.Type {
	case "directory":
		// Validate directory source options
		path, ok := cfg.Options["path"].(string)
		if !ok || path == "" {
			return fmt.Errorf("pipeline '%s' source[%d]: directory source requires 'path' option",
				pipelineName, sourceIndex)
		}

		// Check for directory traversal
		if strings.Contains(path, "..") {
			return fmt.Errorf("pipeline '%s' source[%d]: path contains directory traversal",
				pipelineName, sourceIndex)
		}

		// Validate pattern if provided
		if pattern, ok := cfg.Options["pattern"].(string); ok && pattern != "" {
			// Try to compile as glob pattern (will be converted to regex internally)
			if strings.Count(pattern, "*") == 0 && strings.Count(pattern, "?") == 0 {
				// If no wildcards, ensure it's a valid filename
				if filepath.Base(pattern) != pattern {
					return fmt.Errorf("pipeline '%s' source[%d]: pattern contains path separators",
						pipelineName, sourceIndex)
				}
			}
		}

		// Validate check interval if provided
		if interval, ok := cfg.Options["check_interval_ms"]; ok {
			if intVal, ok := interval.(int64); ok {
				if intVal < 10 {
					return fmt.Errorf("pipeline '%s' source[%d]: check interval too small: %d ms (min: 10ms)",
						pipelineName, sourceIndex, intVal)
				}
			} else {
				return fmt.Errorf("pipeline '%s' source[%d]: invalid check_interval_ms type",
					pipelineName, sourceIndex)
			}
		}

	case "stdin":
		// No specific validation needed for stdin

	case "http":
		// Validate HTTP source options
		port, ok := cfg.Options["port"].(int64)
		if !ok || port < 1 || port > 65535 {
			return fmt.Errorf("pipeline '%s' source[%d]: invalid or missing HTTP port",
				pipelineName, sourceIndex)
		}

		// Validate path if provided
		if ingestPath, ok := cfg.Options["ingest_path"].(string); ok {
			if !strings.HasPrefix(ingestPath, "/") {
				return fmt.Errorf("pipeline '%s' source[%d]: ingest path must start with /: %s",
					pipelineName, sourceIndex, ingestPath)
			}
		}

		// Validate net_limit if present within Options
		if rl, ok := cfg.Options["net_limit"].(map[string]any); ok {
			if err := validateNetLimitOptions("HTTP source", pipelineName, sourceIndex, rl); err != nil {
				return err
			}
		}

		// Validate SSL if present
		if ssl, ok := cfg.Options["ssl"].(map[string]any); ok {
			if err := validateSSLOptions("HTTP source", pipelineName, sourceIndex, ssl); err != nil {
				return err
			}
		}

	case "tcp":
		// Validate TCP source options
		port, ok := cfg.Options["port"].(int64)
		if !ok || port < 1 || port > 65535 {
			return fmt.Errorf("pipeline '%s' source[%d]: invalid or missing TCP port",
				pipelineName, sourceIndex)
		}

		// Validate net_limit if present within Options
		if rl, ok := cfg.Options["net_limit"].(map[string]any); ok {
			if err := validateNetLimitOptions("TCP source", pipelineName, sourceIndex, rl); err != nil {
				return err
			}
		}

		// Validate SSL if present
		if ssl, ok := cfg.Options["ssl"].(map[string]any); ok {
			if err := validateSSLOptions("TCP source", pipelineName, sourceIndex, ssl); err != nil {
				return err
			}
		}

	default:
		return fmt.Errorf("pipeline '%s' source[%d]: unknown source type '%s'",
			pipelineName, sourceIndex, cfg.Type)
	}

	return nil
}

func validateSink(pipelineName string, sinkIndex int, cfg *SinkConfig, allPorts map[int64]string) error {
	if cfg.Type == "" {
		return fmt.Errorf("pipeline '%s' sink[%d]: missing type", pipelineName, sinkIndex)
	}

	switch cfg.Type {
	case "http":
		// Extract and validate HTTP configuration
		port, ok := cfg.Options["port"].(int64)
		if !ok || port < 1 || port > 65535 {
			return fmt.Errorf("pipeline '%s' sink[%d]: invalid or missing HTTP port",
				pipelineName, sinkIndex)
		}

		// Check port conflicts
		if existing, exists := allPorts[port]; exists {
			return fmt.Errorf("pipeline '%s' sink[%d]: HTTP port %d already used by %s",
				pipelineName, sinkIndex, port, existing)
		}
		allPorts[port] = fmt.Sprintf("%s-http[%d]", pipelineName, sinkIndex)

		// Validate buffer size
		if bufSize, ok := cfg.Options["buffer_size"].(int64); ok {
			if bufSize < 1 {
				return fmt.Errorf("pipeline '%s' sink[%d]: HTTP buffer size must be positive: %d",
					pipelineName, sinkIndex, bufSize)
			}
		}

		// Validate paths if provided
		if streamPath, ok := cfg.Options["stream_path"].(string); ok {
			if !strings.HasPrefix(streamPath, "/") {
				return fmt.Errorf("pipeline '%s' sink[%d]: stream path must start with /: %s",
					pipelineName, sinkIndex, streamPath)
			}
		}

		if statusPath, ok := cfg.Options["status_path"].(string); ok {
			if !strings.HasPrefix(statusPath, "/") {
				return fmt.Errorf("pipeline '%s' sink[%d]: status path must start with /: %s",
					pipelineName, sinkIndex, statusPath)
			}
		}

		// Validate heartbeat if present
		if hb, ok := cfg.Options["heartbeat"].(map[string]any); ok {
			if err := validateHeartbeatOptions("HTTP", pipelineName, sinkIndex, hb); err != nil {
				return err
			}
		}

		// Validate SSL if present
		if ssl, ok := cfg.Options["ssl"].(map[string]any); ok {
			if err := validateSSLOptions("HTTP", pipelineName, sinkIndex, ssl); err != nil {
				return err
			}
		}

		// Validate net limit if present
		if rl, ok := cfg.Options["net_limit"].(map[string]any); ok {
			if err := validateNetLimitOptions("HTTP", pipelineName, sinkIndex, rl); err != nil {
				return err
			}
		}

	case "tcp":
		// Extract and validate TCP configuration
		port, ok := cfg.Options["port"].(int64)
		if !ok || port < 1 || port > 65535 {
			return fmt.Errorf("pipeline '%s' sink[%d]: invalid or missing TCP port",
				pipelineName, sinkIndex)
		}

		// Check port conflicts
		if existing, exists := allPorts[port]; exists {
			return fmt.Errorf("pipeline '%s' sink[%d]: TCP port %d already used by %s",
				pipelineName, sinkIndex, port, existing)
		}
		allPorts[port] = fmt.Sprintf("%s-tcp[%d]", pipelineName, sinkIndex)

		// Validate buffer size
		if bufSize, ok := cfg.Options["buffer_size"].(int64); ok {
			if bufSize < 1 {
				return fmt.Errorf("pipeline '%s' sink[%d]: TCP buffer size must be positive: %d",
					pipelineName, sinkIndex, bufSize)
			}
		}

		// Validate heartbeat if present
		if hb, ok := cfg.Options["heartbeat"].(map[string]any); ok {
			if err := validateHeartbeatOptions("TCP", pipelineName, sinkIndex, hb); err != nil {
				return err
			}
		}

		// Validate SSL if present
		if ssl, ok := cfg.Options["ssl"].(map[string]any); ok {
			if err := validateSSLOptions("TCP", pipelineName, sinkIndex, ssl); err != nil {
				return err
			}
		}

		// Validate net limit if present
		if rl, ok := cfg.Options["net_limit"].(map[string]any); ok {
			if err := validateNetLimitOptions("TCP", pipelineName, sinkIndex, rl); err != nil {
				return err
			}
		}

	case "http_client":
		// Validate URL
		urlStr, ok := cfg.Options["url"].(string)
		if !ok || urlStr == "" {
			return fmt.Errorf("pipeline '%s' sink[%d]: http_client sink requires 'url' option",
				pipelineName, sinkIndex)
		}

		// Validate URL format
		parsedURL, err := url.Parse(urlStr)
		if err != nil {
			return fmt.Errorf("pipeline '%s' sink[%d]: invalid URL: %w",
				pipelineName, sinkIndex, err)
		}
		if parsedURL.Scheme != "http" && parsedURL.Scheme != "https" {
			return fmt.Errorf("pipeline '%s' sink[%d]: URL must use http or https scheme",
				pipelineName, sinkIndex)
		}

		// Validate batch size
		if batchSize, ok := cfg.Options["batch_size"].(int64); ok {
			if batchSize < 1 {
				return fmt.Errorf("pipeline '%s' sink[%d]: batch_size must be positive: %d",
					pipelineName, sinkIndex, batchSize)
			}
		}

		// Validate timeout
		if timeout, ok := cfg.Options["timeout_seconds"].(int64); ok {
			if timeout < 1 {
				return fmt.Errorf("pipeline '%s' sink[%d]: timeout_seconds must be positive: %d",
					pipelineName, sinkIndex, timeout)
			}
		}

	case "tcp_client":
		// Validate address
		address, ok := cfg.Options["address"].(string)
		if !ok || address == "" {
			return fmt.Errorf("pipeline '%s' sink[%d]: tcp_client sink requires 'address' option",
				pipelineName, sinkIndex)
		}

		// Validate address format
		_, _, err := net.SplitHostPort(address)
		if err != nil {
			return fmt.Errorf("pipeline '%s' sink[%d]: invalid address format (expected host:port): %w",
				pipelineName, sinkIndex, err)
		}

		// Validate timeouts
		if dialTimeout, ok := cfg.Options["dial_timeout_seconds"].(int64); ok {
			if dialTimeout < 1 {
				return fmt.Errorf("pipeline '%s' sink[%d]: dial_timeout_seconds must be positive: %d",
					pipelineName, sinkIndex, dialTimeout)
			}
		}

		if writeTimeout, ok := cfg.Options["write_timeout_seconds"].(int64); ok {
			if writeTimeout < 1 {
				return fmt.Errorf("pipeline '%s' sink[%d]: write_timeout_seconds must be positive: %d",
					pipelineName, sinkIndex, writeTimeout)
			}
		}

	case "file":
		// Validate file sink options
		directory, ok := cfg.Options["directory"].(string)
		if !ok || directory == "" {
			return fmt.Errorf("pipeline '%s' sink[%d]: file sink requires 'directory' option",
				pipelineName, sinkIndex)
		}

		name, ok := cfg.Options["name"].(string)
		if !ok || name == "" {
			return fmt.Errorf("pipeline '%s' sink[%d]: file sink requires 'name' option",
				pipelineName, sinkIndex)
		}

		// Validate numeric options
		if maxSize, ok := cfg.Options["max_size_mb"].(int64); ok {
			if maxSize < 1 {
				return fmt.Errorf("pipeline '%s' sink[%d]: max_size_mb must be positive: %d",
					pipelineName, sinkIndex, maxSize)
			}
		}

		if maxTotalSize, ok := cfg.Options["max_total_size_mb"].(int64); ok {
			if maxTotalSize < 0 {
				return fmt.Errorf("pipeline '%s' sink[%d]: max_total_size_mb cannot be negative: %d",
					pipelineName, sinkIndex, maxTotalSize)
			}
		}

		if retention, ok := cfg.Options["retention_hours"].(float64); ok {
			if retention < 0 {
				return fmt.Errorf("pipeline '%s' sink[%d]: retention_hours cannot be negative: %f",
					pipelineName, sinkIndex, retention)
			}
		}

	case "stdout", "stderr":
		// No specific validation needed for console sinks

	default:
		return fmt.Errorf("pipeline '%s' sink[%d]: unknown sink type '%s'",
			pipelineName, sinkIndex, cfg.Type)
	}

	return nil
}
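Since `SourceConfig` and `SinkConfig` above carry `toml` tags with free-form `options` maps, a pipeline definition serializes to TOML roughly as follows. This is a sketch only: the top-level array name `pipelines` and all paths and ports are illustrative assumptions, not taken from the repository.

```toml
[[pipelines]]
name = "app"

[[pipelines.sources]]
type = "directory"
[pipelines.sources.options]
path = "/var/log/myapp"      # hypothetical path
pattern = "*.log"
check_interval_ms = 100      # must be >= 10 per validateSource

[[pipelines.sinks]]
type = "http"
[pipelines.sinks.options]
port = 8080                  # checked for conflicts across all pipelines
buffer_size = 1000
stream_path = "/stream"      # must start with "/"
status_path = "/status"
```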
@ -1,34 +0,0 @@
// FILE: logwisp/src/internal/config/saver.go
package config

import (
	"fmt"

	lconfig "github.com/lixenwraith/config"
)

// SaveToFile saves the configuration to the specified file path.
// It uses the lconfig library's atomic file saving capabilities.
func (c *Config) SaveToFile(path string) error {
	if path == "" {
		return fmt.Errorf("cannot save config: path is empty")
	}

	// Create a temporary lconfig instance just for saving.
	// This avoids the need to track lconfig throughout the application.
	lcfg, err := lconfig.NewBuilder().
		WithFile(path).
		WithTarget(c).
		WithFileFormat("toml").
		Build()
	if err != nil {
		return fmt.Errorf("failed to create config builder: %w", err)
	}

	// Use lconfig's Save method which handles atomic writes
	if err := lcfg.Save(path); err != nil {
		return fmt.Errorf("failed to save config: %w", err)
	}

	return nil
}
@ -1,205 +0,0 @@
// FILE: logwisp/src/internal/config/server.go
package config

import (
	"fmt"
	"net"
	"strings"
)

type TCPConfig struct {
	Enabled    bool  `toml:"enabled"`
	Port       int64 `toml:"port"`
	BufferSize int64 `toml:"buffer_size"`

	// SSL/TLS Configuration
	SSL *SSLConfig `toml:"ssl"`

	// Net limiting
	NetLimit *NetLimitConfig `toml:"net_limit"`

	// Heartbeat
	Heartbeat *HeartbeatConfig `toml:"heartbeat"`
}

type HTTPConfig struct {
	Enabled    bool  `toml:"enabled"`
	Port       int64 `toml:"port"`
	BufferSize int64 `toml:"buffer_size"`

	// Endpoint paths
	StreamPath string `toml:"stream_path"`
	StatusPath string `toml:"status_path"`

	// SSL/TLS Configuration
	SSL *SSLConfig `toml:"ssl"`

	// Net limiting
	NetLimit *NetLimitConfig `toml:"net_limit"`

	// Heartbeat
	Heartbeat *HeartbeatConfig `toml:"heartbeat"`
}

type HeartbeatConfig struct {
	Enabled          bool   `toml:"enabled"`
	IntervalSeconds  int64  `toml:"interval_seconds"`
	IncludeTimestamp bool   `toml:"include_timestamp"`
	IncludeStats     bool   `toml:"include_stats"`
	Format           string `toml:"format"`
}

type NetLimitConfig struct {
	// Enable net limiting
	Enabled bool `toml:"enabled"`

	// IP Access Control Lists
	IPWhitelist []string `toml:"ip_whitelist"`
	IPBlacklist []string `toml:"ip_blacklist"`

	// Requests per second per client
	RequestsPerSecond float64 `toml:"requests_per_second"`

	// Burst size (token bucket)
	BurstSize int64 `toml:"burst_size"`

	// Net limit by: "ip", "user", "token", "global"
	LimitBy string `toml:"limit_by"`

	// Response when net limited
	ResponseCode    int64  `toml:"response_code"`    // Default: 429
	ResponseMessage string `toml:"response_message"` // Default: "Net limit exceeded"

	// Connection limits
	MaxConnectionsPerIP int64 `toml:"max_connections_per_ip"`
	MaxTotalConnections int64 `toml:"max_total_connections"`
}

func validateHeartbeatOptions(serverType, pipelineName string, sinkIndex int, hb map[string]any) error {
	if enabled, ok := hb["enabled"].(bool); ok && enabled {
		interval, ok := hb["interval_seconds"].(int64)
		if !ok || interval < 1 {
			return fmt.Errorf("pipeline '%s' sink[%d] %s: heartbeat interval must be positive",
				pipelineName, sinkIndex, serverType)
		}

		if format, ok := hb["format"].(string); ok {
			if format != "json" && format != "comment" {
				return fmt.Errorf("pipeline '%s' sink[%d] %s: heartbeat format must be 'json' or 'comment': %s",
					pipelineName, sinkIndex, serverType, format)
			}
		}
	}
	return nil
}

func validateNetLimitOptions(serverType, pipelineName string, sinkIndex int, rl map[string]any) error {
	if enabled, ok := rl["enabled"].(bool); !ok || !enabled {
		return nil
	}

	// Validate IP lists if present
	if ipWhitelist, ok := rl["ip_whitelist"].([]any); ok {
		for i, entry := range ipWhitelist {
			entryStr, ok := entry.(string)
			if !ok {
				continue
			}
			if err := validateIPv4Entry(entryStr); err != nil {
				return fmt.Errorf("pipeline '%s' sink[%d] %s: whitelist[%d] %v",
					pipelineName, sinkIndex, serverType, i, err)
			}
		}
	}

	if ipBlacklist, ok := rl["ip_blacklist"].([]any); ok {
		for i, entry := range ipBlacklist {
			entryStr, ok := entry.(string)
			if !ok {
				continue
			}
			if err := validateIPv4Entry(entryStr); err != nil {
				return fmt.Errorf("pipeline '%s' sink[%d] %s: blacklist[%d] %v",
					pipelineName, sinkIndex, serverType, i, err)
			}
		}
	}

	// Validate requests per second
	rps, ok := rl["requests_per_second"].(float64)
	if !ok || rps <= 0 {
		return fmt.Errorf("pipeline '%s' sink[%d] %s: requests_per_second must be positive",
			pipelineName, sinkIndex, serverType)
	}

	// Validate burst size
	burst, ok := rl["burst_size"].(int64)
	if !ok || burst < 1 {
		return fmt.Errorf("pipeline '%s' sink[%d] %s: burst_size must be at least 1",
			pipelineName, sinkIndex, serverType)
	}

	// Validate limit_by
	if limitBy, ok := rl["limit_by"].(string); ok && limitBy != "" {
		validLimitBy := map[string]bool{"ip": true, "global": true}
		if !validLimitBy[limitBy] {
			return fmt.Errorf("pipeline '%s' sink[%d] %s: invalid limit_by value: %s (must be 'ip' or 'global')",
				pipelineName, sinkIndex, serverType, limitBy)
		}
	}

	// Validate response code
	if respCode, ok := rl["response_code"].(int64); ok {
		if respCode > 0 && (respCode < 400 || respCode >= 600) {
			return fmt.Errorf("pipeline '%s' sink[%d] %s: response_code must be 4xx or 5xx: %d",
				pipelineName, sinkIndex, serverType, respCode)
		}
	}

	// Validate connection limits
	maxPerIP, perIPOk := rl["max_connections_per_ip"].(int64)
	maxTotal, totalOk := rl["max_total_connections"].(int64)

	if perIPOk && totalOk && maxPerIP > 0 && maxTotal > 0 {
		if maxPerIP > maxTotal {
			return fmt.Errorf("pipeline '%s' sink[%d] %s: max_connections_per_ip (%d) cannot exceed max_total_connections (%d)",
				pipelineName, sinkIndex, serverType, maxPerIP, maxTotal)
		}
	}

	return nil
}

// validateIPv4Entry ensures an IP or CIDR entry is IPv4
func validateIPv4Entry(entry string) error {
	// Handle single IP
	if !strings.Contains(entry, "/") {
		ip := net.ParseIP(entry)
		if ip == nil {
			return fmt.Errorf("invalid IP address: %s", entry)
		}
		if ip.To4() == nil {
			return fmt.Errorf("IPv6 not supported (IPv4-only): %s", entry)
		}
		return nil
	}

	// Handle CIDR
	ipAddr, ipNet, err := net.ParseCIDR(entry)
	if err != nil {
		return fmt.Errorf("invalid CIDR: %s", entry)
	}

	// Check if the IP is IPv4
	if ipAddr.To4() == nil {
		return fmt.Errorf("IPv6 CIDR not supported (IPv4-only): %s", entry)
	}

	// Verify the network mask is appropriate for IPv4
	_, bits := ipNet.Mask.Size()
	if bits != 32 {
		return fmt.Errorf("invalid IPv4 CIDR mask (got %d bits, expected 32): %s", bits, entry)
	}

	return nil
}
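The IPv4-only check above can be exercised in isolation. The sketch below mirrors `validateIPv4Entry`'s logic as a standalone program (same `net.ParseIP`/`net.ParseCIDR` calls, simplified error text) so the accept/reject behavior is easy to verify:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// ipv4Only mirrors validateIPv4Entry: accept a bare IPv4 address or an
// IPv4 CIDR, reject IPv6 and malformed input.
func ipv4Only(entry string) error {
	if !strings.Contains(entry, "/") {
		ip := net.ParseIP(entry)
		if ip == nil {
			return fmt.Errorf("invalid IP address: %s", entry)
		}
		if ip.To4() == nil {
			return fmt.Errorf("IPv6 not supported: %s", entry)
		}
		return nil
	}
	ipAddr, ipNet, err := net.ParseCIDR(entry)
	if err != nil {
		return fmt.Errorf("invalid CIDR: %s", entry)
	}
	if ipAddr.To4() == nil {
		return fmt.Errorf("IPv6 CIDR not supported: %s", entry)
	}
	// For IPv4 CIDRs the mask is 32 bits wide.
	if _, bits := ipNet.Mask.Size(); bits != 32 {
		return fmt.Errorf("not an IPv4 CIDR: %s", entry)
	}
	return nil
}

func main() {
	for _, e := range []string{"10.0.0.1", "10.0.0.0/8", "::1", "not-an-ip"} {
		fmt.Printf("%-12s %v\n", e, ipv4Only(e))
	}
}
```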
@ -1,79 +0,0 @@
// FILE: logwisp/src/internal/config/ssl.go
package config

import (
	"fmt"
	"os"
)

type SSLConfig struct {
	Enabled  bool   `toml:"enabled"`
	CertFile string `toml:"cert_file"`
	KeyFile  string `toml:"key_file"`

	// Client certificate authentication
	ClientAuth       bool   `toml:"client_auth"`
	ClientCAFile     string `toml:"client_ca_file"`
	VerifyClientCert bool   `toml:"verify_client_cert"`

	// Option to skip verification for clients
	InsecureSkipVerify bool `toml:"insecure_skip_verify"`

	// TLS version constraints
	MinVersion string `toml:"min_version"` // "TLS1.2", "TLS1.3"
	MaxVersion string `toml:"max_version"`

	// Cipher suites (comma-separated list)
	CipherSuites string `toml:"cipher_suites"`
}

func validateSSLOptions(serverType, pipelineName string, sinkIndex int, ssl map[string]any) error {
	if enabled, ok := ssl["enabled"].(bool); ok && enabled {
		certFile, certOk := ssl["cert_file"].(string)
		keyFile, keyOk := ssl["key_file"].(string)

		if !certOk || certFile == "" || !keyOk || keyFile == "" {
			return fmt.Errorf("pipeline '%s' sink[%d] %s: SSL enabled but cert/key files not specified",
				pipelineName, sinkIndex, serverType)
		}

		// Validate that certificate files exist and are readable
		if _, err := os.Stat(certFile); err != nil {
			return fmt.Errorf("pipeline '%s' sink[%d] %s: cert_file is not accessible: %w",
				pipelineName, sinkIndex, serverType, err)
		}
		if _, err := os.Stat(keyFile); err != nil {
			return fmt.Errorf("pipeline '%s' sink[%d] %s: key_file is not accessible: %w",
				pipelineName, sinkIndex, serverType, err)
		}

		if clientAuth, ok := ssl["client_auth"].(bool); ok && clientAuth {
			caFile, caOk := ssl["client_ca_file"].(string)
			if !caOk || caFile == "" {
				return fmt.Errorf("pipeline '%s' sink[%d] %s: client auth enabled but CA file not specified",
					pipelineName, sinkIndex, serverType)
			}
			// Validate that the client CA file exists and is readable
			if _, err := os.Stat(caFile); err != nil {
				return fmt.Errorf("pipeline '%s' sink[%d] %s: client_ca_file is not accessible: %w",
					pipelineName, sinkIndex, serverType, err)
			}
		}

		// Validate TLS versions
		validVersions := map[string]bool{"TLS1.0": true, "TLS1.1": true, "TLS1.2": true, "TLS1.3": true}
		if minVer, ok := ssl["min_version"].(string); ok && minVer != "" {
			if !validVersions[minVer] {
				return fmt.Errorf("pipeline '%s' sink[%d] %s: invalid min TLS version: %s",
					pipelineName, sinkIndex, serverType, minVer)
			}
		}
		if maxVer, ok := ssl["max_version"].(string); ok && maxVer != "" {
			if !validVersions[maxVer] {
				return fmt.Errorf("pipeline '%s' sink[%d] %s: invalid max TLS version: %s",
					pipelineName, sinkIndex, serverType, maxVer)
			}
		}
	}
	return nil
}
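Given the `toml` tags on `SSLConfig` above, an SSL section for a sink serializes along these lines. The nesting under sink options and all file paths are illustrative assumptions:

```toml
# inside a sink's options table
[pipelines.sinks.options.ssl]
enabled = true
cert_file = "/etc/logwisp/server.crt"    # must exist and be readable
key_file = "/etc/logwisp/server.key"     # must exist and be readable
client_auth = true
client_ca_file = "/etc/logwisp/ca.crt"   # required when client_auth = true
min_version = "TLS1.2"
max_version = "TLS1.3"
```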
src/internal/config/validate.go (new file, 63 lines)
@ -0,0 +1,63 @@
package config

import (
	"fmt"

	lconfig "github.com/lixenwraith/config"
)

// ValidateConfig validates top-level structure only.
// Value range validation is delegated to component constructors.
func ValidateConfig(cfg *Config) error {
	if cfg == nil {
		return fmt.Errorf("config is nil")
	}

	if len(cfg.Pipelines) == 0 {
		return fmt.Errorf("no pipelines configured")
	}

	if err := validateLogConfig(cfg.Logging); err != nil {
		return fmt.Errorf("logging: %w", err)
	}

	for i, p := range cfg.Pipelines {
		if err := lconfig.NonEmpty(p.Name); err != nil {
			return fmt.Errorf("pipeline[%d].name: %w", i, err)
		}
		if len(p.PluginSources) == 0 {
			return fmt.Errorf("pipeline[%d]: no sources defined", i)
		}
		if len(p.PluginSinks) == 0 {
			return fmt.Errorf("pipeline[%d]: no sinks defined", i)
		}
	}

	return nil
}

// validateLogConfig validates application logging settings
func validateLogConfig(cfg *LogConfig) error {
	if cfg == nil {
		return nil
	}

	validateOutput := lconfig.OneOf("file", "stdout", "stderr", "split", "all", "none")
	if err := validateOutput(cfg.Output); err != nil {
		return fmt.Errorf("output: %w", err)
	}

	validateLevel := lconfig.OneOf("debug", "info", "warn", "error")
	if err := validateLevel(cfg.Level); err != nil {
		return fmt.Errorf("level: %w", err)
	}

	if cfg.Console != nil {
		validateTarget := lconfig.OneOf("stdout", "stderr", "split")
		if err := validateTarget(cfg.Console.Target); err != nil {
			return fmt.Errorf("console.target: %w", err)
		}
	}

	return nil
}
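The `lconfig.OneOf` calls above follow a common validator-factory pattern: a function that takes the allowed values and returns a closure validating one value. The sketch below illustrates that pattern in plain Go; it is an assumption-level stand-in, not lconfig's actual API or implementation.

```go
package main

import "fmt"

// oneOf returns a validator closure that accepts only the listed values,
// mirroring the enum-style checks used in validateLogConfig above.
func oneOf(allowed ...string) func(string) error {
	set := make(map[string]bool, len(allowed))
	for _, v := range allowed {
		set[v] = true
	}
	return func(v string) error {
		if !set[v] {
			return fmt.Errorf("value %q not in %v", v, allowed)
		}
		return nil
	}
}

func main() {
	validateLevel := oneOf("debug", "info", "warn", "error")
	fmt.Println(validateLevel("info"))  // → <nil>
	fmt.Println(validateLevel("trace")) // rejected with an error
}
```

Building the validators once and reusing the closures keeps the call sites one line each, which is why the new validate.go reads more compactly than the map-literal checks it replaces.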
@ -1,82 +0,0 @@
// FILE: logwisp/src/internal/config/validation.go
package config

import (
	"fmt"
)

func (c *Config) validate() error {
	if c == nil {
		return fmt.Errorf("config is nil")
	}

	if c.Logging == nil {
		c.Logging = DefaultLogConfig()
	}

	if len(c.Pipelines) == 0 {
		return fmt.Errorf("no pipelines configured")
	}

	if err := validateLogConfig(c.Logging); err != nil {
		return fmt.Errorf("logging config: %w", err)
	}

	// Track used ports across all pipelines
	allPorts := make(map[int64]string)
	pipelineNames := make(map[string]bool)

	for i, pipeline := range c.Pipelines {
		if pipeline.Name == "" {
			return fmt.Errorf("pipeline %d: missing name", i)
		}

		if pipelineNames[pipeline.Name] {
			return fmt.Errorf("pipeline %d: duplicate name '%s'", i, pipeline.Name)
		}
		pipelineNames[pipeline.Name] = true

		// Pipeline must have at least one source
		if len(pipeline.Sources) == 0 {
			return fmt.Errorf("pipeline '%s': no sources specified", pipeline.Name)
		}

		// Validate sources
		for j, source := range pipeline.Sources {
			if err := validateSource(pipeline.Name, j, &source); err != nil {
				return err
			}
		}

		// Validate rate limit if present
		if err := validateRateLimit(pipeline.Name, pipeline.RateLimit); err != nil {
			return err
		}

		// Validate filters
		for j, filterCfg := range pipeline.Filters {
			if err := validateFilter(pipeline.Name, j, &filterCfg); err != nil {
				return err
			}
		}

		// Pipeline must have at least one sink
		if len(pipeline.Sinks) == 0 {
			return fmt.Errorf("pipeline '%s': no sinks specified", pipeline.Name)
		}

		// Validate sinks and check for port conflicts
		for j, sink := range pipeline.Sinks {
			if err := validateSink(pipeline.Name, j, &sink, allPorts); err != nil {
				return err
			}
		}

		// Validate auth if present
		if err := validateAuth(pipeline.Name, pipeline.Auth); err != nil {
			return err
		}
	}

	return nil
}
src/internal/core/capability.go (new file, 20 lines)
@ -0,0 +1,20 @@
|
package core
|
||||||
|
|
||||||
|
// Capability represents a plugin feature
|
||||||
|
type Capability string
|
||||||
|
|
||||||
|
const (
|
||||||
|
// Network capabilities
|
||||||
|
CapNetLimit Capability = "netlimit"
|
||||||
|
CapTLS Capability = "tls"
|
||||||
|
CapAuth Capability = "auth"
|
||||||
|
|
||||||
|
// Session capabilities
|
||||||
|
CapSessionAware Capability = "session_aware"
|
||||||
|
CapMultiSession Capability = "multi_session"
|
||||||
|
CapSingleInstance Capability = "single_instance"
|
||||||
|
|
||||||
|
// Stream capabilities
|
||||||
|
CapBidirectional Capability = "bidirectional"
|
||||||
|
CapCompression Capability = "compression"
|
||||||
|
)
|
||||||
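String-typed capability constants like these are typically consumed as a membership set: a plugin advertises a list, and the host checks for required features. A minimal sketch of that pattern (the `capSet` helper and the constants reproduced here are illustrative, not LogWisp's actual plugin API):

```go
package main

import "fmt"

// Capability mirrors the string-typed constants in core/capability.go.
type Capability string

const (
	CapTLS  Capability = "tls"
	CapAuth Capability = "auth"
)

// capSet builds a membership set from a plugin's advertised capabilities.
func capSet(caps ...Capability) map[Capability]bool {
	s := make(map[Capability]bool, len(caps))
	for _, c := range caps {
		s[c] = true
	}
	return s
}

func main() {
	caps := capSet(CapTLS)
	fmt.Println(caps[CapTLS], caps[CapAuth]) // → true false
}
```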
29  src/internal/core/const.go  Normal file
@@ -0,0 +1,29 @@
package core

import (
	"time"
)

const (
	MaxLogEntryBytes = 1024 * 1024

	FileWatcherPollInterval = 100 * time.Millisecond

	SessionDefaultMaxIdleTime = 30 * time.Minute

	SessionCleanupInterval = 5 * time.Minute

	ServiceStatsUpdateInterval = 1 * time.Second

	ShutdownTimeout = 10 * time.Second

	ConfigReloadTimeout = 30 * time.Second

	LoggerShutdownTimeout = 2 * time.Second

	ReloadWatchPollInterval = time.Second

	ReloadWatchDebounce = 500 * time.Millisecond

	ReloadWatchTimeout = 30 * time.Second
)
src/internal/core/types.go
@@ -1,4 +1,3 @@
-// FILE: logwisp/src/internal/core/types.go
 package core
 
 import (
@@ -6,7 +5,7 @@ import (
 	"time"
 )
 
-// LogEntry represents a single log record flowing through the pipeline
+// Represents a single log record flowing through the pipeline
 type LogEntry struct {
 	Time   time.Time `json:"time"`
 	Source string    `json:"source"`
@@ -15,3 +14,10 @@ type LogEntry struct {
 	Fields  json.RawMessage `json:"fields,omitempty"`
 	RawSize int64           `json:"-"`
 }
+
+// TransportEvent contains the final payload and minimal metadata needed by sinks
+type TransportEvent struct {
+	Time time.Time
+	// Formatted, serialized log payload
+	Payload []byte
+}
src/internal/filter/chain.go
@@ -1,4 +1,3 @@
-// FILE: logwisp/src/internal/filter/chain.go
 package filter
 
 import (
@@ -11,7 +10,7 @@ import (
 	"github.com/lixenwraith/log"
 )
 
-// Chain manages multiple filters in sequence
+// Chain manages a sequence of filters, applying them in order
 type Chain struct {
 	filters []*Filter
 	logger  *log.Logger
@@ -21,7 +20,7 @@ type Chain struct {
 	totalPassed atomic.Uint64
 }
 
-// NewChain creates a new filter chain from configurations
+// NewChain creates a new filter chain from a slice of filter configurations
 func NewChain(configs []config.FilterConfig, logger *log.Logger) (*Chain, error) {
 	chain := &Chain{
 		filters: make([]*Filter, 0, len(configs)),
@@ -29,7 +28,7 @@ func NewChain(configs []config.FilterConfig, logger *log.Logger) (*Chain, error)
 	}
 
 	for i, cfg := range configs {
-		filter, err := New(cfg, logger)
+		filter, err := NewFilter(cfg, logger)
 		if err != nil {
 			return nil, fmt.Errorf("filter[%d]: %w", i, err)
 		}
@@ -42,8 +41,7 @@ func NewChain(configs []config.FilterConfig, logger *log.Logger) (*Chain, error)
 	return chain, nil
 }
 
-// Apply runs all filters in sequence
-// Returns true if the entry passes all filters
+// Apply runs a log entry through all filters in the chain
 func (c *Chain) Apply(entry core.LogEntry) bool {
 	c.totalProcessed.Add(1)
 
@@ -68,7 +66,7 @@ func (c *Chain) Apply(entry core.LogEntry) bool {
 	return true
 }
 
-// GetStats returns chain statistics
+// GetStats returns aggregated statistics for the entire chain
 func (c *Chain) GetStats() map[string]any {
 	filterStats := make([]map[string]any, len(c.filters))
 	for i, filter := range c.filters {
src/internal/filter/filter.go
@@ -1,4 +1,3 @@
-// FILE: logwisp/src/internal/filter/filter.go
 package filter
 
 import (
@@ -10,6 +9,7 @@ import (
 	"logwisp/src/internal/config"
 	"logwisp/src/internal/core"
 
+	lconfig "github.com/lixenwraith/config"
 	"github.com/lixenwraith/log"
 )
 
@@ -26,8 +26,22 @@ type Filter struct {
 	totalDropped atomic.Uint64
 }
 
-// New creates a new filter from configuration
-func New(cfg config.FilterConfig, logger *log.Logger) (*Filter, error) {
+// NewFilter creates a new filter from a configuration
+func NewFilter(cfg config.FilterConfig, logger *log.Logger) (*Filter, error) {
+	// Validate enums before setting defaults
+	if cfg.Type != "" {
+		validateType := lconfig.OneOf(config.FilterTypeInclude, config.FilterTypeExclude)
+		if err := validateType(cfg.Type); err != nil {
+			return nil, fmt.Errorf("type: %w", err)
+		}
+	}
+	if cfg.Logic != "" {
+		validateLogic := lconfig.OneOf(config.FilterLogicOr, config.FilterLogicAnd)
+		if err := validateLogic(cfg.Logic); err != nil {
+			return nil, fmt.Errorf("logic: %w", err)
+		}
+	}
+
 	// Set defaults
 	if cfg.Type == "" {
 		cfg.Type = config.FilterTypeInclude
@@ -46,7 +60,7 @@ func NewFilter(cfg config.FilterConfig, logger *log.Logger) (*Filter, error) {
 	for i, pattern := range cfg.Patterns {
 		re, err := regexp.Compile(pattern)
 		if err != nil {
-			return nil, fmt.Errorf("invalid regex pattern[%d] '%s': %w", i, pattern, err)
+			return nil, fmt.Errorf("pattern[%d] '%s': %w", i, pattern, err)
 		}
 		f.patterns = append(f.patterns, re)
 	}
@@ -60,12 +74,15 @@ func NewFilter(cfg config.FilterConfig, logger *log.Logger) (*Filter, error) {
 	return f, nil
 }
 
-// Apply checks if a log entry should be passed through
+// Apply determines if a log entry should be passed through based on the filter's rules
 func (f *Filter) Apply(entry core.LogEntry) bool {
 	f.totalProcessed.Add(1)
 
 	// No patterns means pass everything
 	if len(f.patterns) == 0 {
+		f.logger.Debug("msg", "No patterns configured, passing entry",
+			"component", "filter",
+			"type", f.config.Type)
 		return true
 	}
 
@@ -78,10 +95,32 @@ func (f *Filter) Apply(entry core.LogEntry) bool {
 		text = entry.Source + " " + text
 	}
 
+	f.logger.Debug("msg", "Filter checking entry",
+		"component", "filter",
+		"type", f.config.Type,
+		"logic", f.config.Logic,
+		"entry_level", entry.Level,
+		"entry_source", entry.Source,
+		"entry_message", entry.Message[:min(100, len(entry.Message))], // First 100 chars
+		"text_to_match", text[:min(150, len(text))], // First 150 chars
+		"patterns", f.config.Patterns)
+
+	for i, pattern := range f.config.Patterns {
+		isMatch := f.patterns[i].MatchString(text)
+		f.logger.Debug("msg", "Pattern match result",
+			"component", "filter",
+			"pattern_index", i,
+			"pattern", pattern,
+			"matched", isMatch)
+	}
+
 	matched := f.matches(text)
 	if matched {
 		f.totalMatched.Add(1)
 	}
+	f.logger.Debug("msg", "Filter final match result",
+		"component", "filter",
+		"matched", matched)
 
 	// Determine if we should pass or drop
 	shouldPass := false
@@ -92,6 +131,12 @@ func (f *Filter) Apply(entry core.LogEntry) bool {
 		shouldPass = !matched
 	}
 
+	f.logger.Debug("msg", "Filter decision",
+		"component", "filter",
+		"type", f.config.Type,
+		"matched", matched,
+		"should_pass", shouldPass)
+
 	if !shouldPass {
 		f.totalDropped.Add(1)
 	}
@@ -99,7 +144,44 @@ func (f *Filter) Apply(entry core.LogEntry) bool {
 	return shouldPass
 }
 
-// matches checks if text matches the patterns according to the logic
+// GetStats returns the filter's current statistics
+func (f *Filter) GetStats() map[string]any {
+	return map[string]any{
+		"type":            f.config.Type,
+		"logic":           f.config.Logic,
+		"pattern_count":   len(f.patterns),
+		"total_processed": f.totalProcessed.Load(),
+		"total_matched":   f.totalMatched.Load(),
+		"total_dropped":   f.totalDropped.Load(),
+	}
+}
+
+// UpdatePatterns allows for dynamic, thread-safe updates to the filter's regex patterns
+func (f *Filter) UpdatePatterns(patterns []string) error {
+	compiled := make([]*regexp.Regexp, 0, len(patterns))
+
+	// Compile all patterns first
+	for i, pattern := range patterns {
+		re, err := regexp.Compile(pattern)
+		if err != nil {
+			return fmt.Errorf("invalid regex pattern[%d] '%s': %w", i, pattern, err)
+		}
+		compiled = append(compiled, re)
+	}
+
+	// Update atomically
+	f.mu.Lock()
+	f.patterns = compiled
+	f.config.Patterns = patterns
+	f.mu.Unlock()
+
+	f.logger.Info("msg", "Filter patterns updated",
+		"component", "filter",
+		"pattern_count", len(patterns))
+	return nil
+}
+
+// matches checks if the given text matches the filter's patterns according to its logic
 func (f *Filter) matches(text string) bool {
 	switch f.config.Logic {
 	case config.FilterLogicOr:
@@ -128,40 +210,3 @@ func (f *Filter) matches(text string) bool {
 		return false
 	}
 }
-
-// GetStats returns filter statistics
-func (f *Filter) GetStats() map[string]any {
-	return map[string]any{
-		"type":            f.config.Type,
-		"logic":           f.config.Logic,
-		"pattern_count":   len(f.patterns),
-		"total_processed": f.totalProcessed.Load(),
-		"total_matched":   f.totalMatched.Load(),
-		"total_dropped":   f.totalDropped.Load(),
-	}
-}
-
-// UpdatePatterns allows dynamic pattern updates
-func (f *Filter) UpdatePatterns(patterns []string) error {
-	compiled := make([]*regexp.Regexp, 0, len(patterns))
-
-	// Compile all patterns first
-	for i, pattern := range patterns {
-		re, err := regexp.Compile(pattern)
-		if err != nil {
-			return fmt.Errorf("invalid regex pattern[%d] '%s': %w", i, pattern, err)
-		}
-		compiled = append(compiled, re)
-	}
-
-	// Update atomically
-	f.mu.Lock()
-	f.patterns = compiled
-	f.config.Patterns = patterns
-	f.mu.Unlock()
-
-	f.logger.Info("msg", "Filter patterns updated",
-		"component", "filter",
-		"pattern_count", len(patterns))
-	return nil
-}
162  src/internal/flow/flow.go  Normal file
@@ -0,0 +1,162 @@
package flow

import (
	"context"
	"fmt"
	"sync/atomic"

	"logwisp/src/internal/config"
	"logwisp/src/internal/core"
	"logwisp/src/internal/filter"
	"logwisp/src/internal/format"

	"github.com/lixenwraith/log"
)

// Flow manages the complete processing pipeline for log entries:
// LogEntry -> Rate Limiter -> Filters -> Formatter (with Sanitizer) -> TransportEvent
type Flow struct {
	rateLimiter *RateLimiter
	filterChain *filter.Chain
	formatter   format.Formatter
	heartbeat   *HeartbeatGenerator
	logger      *log.Logger

	// Statistics
	totalProcessed atomic.Uint64
	totalDropped   atomic.Uint64
	totalFormatted atomic.Uint64
}

// NewFlow creates a flow processor from configuration
func NewFlow(cfg *config.FlowConfig, logger *log.Logger) (*Flow, error) {
	if cfg == nil {
		cfg = &config.FlowConfig{}
	}

	f := &Flow{
		logger: logger,
	}

	// Create rate limiter if configured
	if cfg.RateLimit != nil {
		limiter, err := NewRateLimiter(*cfg.RateLimit, logger)
		if err != nil {
			return nil, fmt.Errorf("failed to create rate limiter: %w", err)
		}
		f.rateLimiter = limiter
	}

	// Create filter chain if configured
	if len(cfg.Filters) > 0 {
		chain, err := filter.NewChain(cfg.Filters, logger)
		if err != nil {
			return nil, fmt.Errorf("failed to create filter chain: %w", err)
		}
		f.filterChain = chain
	}

	// Create formatter with sanitizer integration
	formatter, err := format.NewFormatter(cfg.Format)
	if err != nil {
		return nil, fmt.Errorf("failed to create formatter: %w", err)
	}
	f.formatter = formatter

	// Create heartbeat generator with the same formatter if configured
	if cfg.Heartbeat != nil {
		hb, err := NewHeartbeatGenerator(cfg.Heartbeat, formatter, logger)
		if err != nil {
			return nil, fmt.Errorf("heartbeat: %w", err)
		}
		f.heartbeat = hb
	}

	logger.Info("msg", "Flow processor created",
		"component", "flow",
		"rate_limiter", f.rateLimiter != nil,
		"filter_chain", f.filterChain != nil,
		"formatter", formatter.Name(),
		"heartbeat", f.heartbeat != nil)

	return f, nil
}

// Process applies all flow stages to a log entry
// Returns TransportEvent and whether entry passed all stages
func (f *Flow) Process(entry core.LogEntry) (core.TransportEvent, bool) {
	f.totalProcessed.Add(1)

	// Stage 1: Rate limiting
	if f.rateLimiter != nil {
		if !f.rateLimiter.Allow(entry) {
			f.totalDropped.Add(1)
			return core.TransportEvent{}, false
		}
	}

	// Stage 2: Filtering
	if f.filterChain != nil {
		if !f.filterChain.Apply(entry) {
			f.totalDropped.Add(1)
			return core.TransportEvent{}, false
		}
	}

	// Stage 3: Formatting
	formatted, err := f.formatter.Format(entry)
	if err != nil {
		f.logger.Error("msg", "Failed to format log entry",
			"component", "flow",
			"error", err)
		f.totalDropped.Add(1)
		return core.TransportEvent{}, false
	}

	f.totalFormatted.Add(1)

	// Create transport event
	event := core.TransportEvent{
		Time:    entry.Time,
		Payload: formatted,
	}

	return event, true
}

// StartHeartbeat starts the heartbeat generator if configured
// Returns channel that emits heartbeat events
func (f *Flow) StartHeartbeat(ctx context.Context) <-chan core.TransportEvent {
	if f.heartbeat == nil {
		return nil
	}
	return f.heartbeat.Start(ctx)
}

// GetStats returns flow statistics
func (f *Flow) GetStats() map[string]any {
	stats := map[string]any{
		"total_processed": f.totalProcessed.Load(),
		"total_dropped":   f.totalDropped.Load(),
		"total_formatted": f.totalFormatted.Load(),
	}

	if f.rateLimiter != nil {
		stats["rate_limiter"] = f.rateLimiter.GetStats()
	}

	if f.filterChain != nil {
		stats["filters"] = f.filterChain.GetStats()
	}

	if f.formatter != nil {
		stats["formatter"] = f.formatter.Name()
	}

	if f.heartbeat != nil {
		stats["heartbeat_enabled"] = true
		stats["heartbeat_interval_ms"] = f.heartbeat.IntervalMS()
	}

	return stats
}
168  src/internal/flow/heartbeat.go  Normal file
@@ -0,0 +1,168 @@
package flow

import (
	"context"
	"encoding/json"
	"fmt"
	"sync/atomic"
	"time"

	"logwisp/src/internal/config"
	"logwisp/src/internal/core"
	"logwisp/src/internal/format"

	lconfig "github.com/lixenwraith/config"
	"github.com/lixenwraith/log"
	"github.com/lixenwraith/log/formatter"
)

const (
	MinHeartbeatIntervalMS     = 100
	DefaultHeartbeatIntervalMS = 1000
	DefaultHeartbeatFormat     = "txt"
)

// HeartbeatGenerator produces periodic heartbeat events
type HeartbeatGenerator struct {
	config    *config.HeartbeatConfig
	formatter format.Formatter // Use flow's formatter
	logger    *log.Logger
	beatCount atomic.Uint64
	lastBeat  atomic.Value // time.Time
}

// NewHeartbeatGenerator creates a new heartbeat generator
func NewHeartbeatGenerator(cfg *config.HeartbeatConfig, formatter format.Formatter, logger *log.Logger) (*HeartbeatGenerator, error) {
	if cfg == nil || !cfg.Enabled {
		return nil, nil
	}

	// Validate
	if cfg.IntervalMS == 0 {
		cfg.IntervalMS = DefaultHeartbeatIntervalMS
	} else if cfg.IntervalMS < MinHeartbeatIntervalMS {
		return nil, fmt.Errorf("interval_ms: must be >= %d, got %d", MinHeartbeatIntervalMS, cfg.IntervalMS)
	}

	validateFormat := lconfig.OneOf("txt", "json", "raw", "")
	if err := validateFormat(cfg.Format); err != nil {
		return nil, fmt.Errorf("format: %w", err)
	}

	// Defaults
	if cfg.Format == "" {
		cfg.Format = DefaultHeartbeatFormat
	}

	hg := &HeartbeatGenerator{
		config:    cfg,
		formatter: formatter,
		logger:    logger,
	}
	hg.lastBeat.Store(time.Time{})
	return hg, nil
}

// Start begins generating heartbeat events
func (hg *HeartbeatGenerator) Start(ctx context.Context) <-chan core.TransportEvent {
	ch := make(chan core.TransportEvent)

	go func() {
		defer close(ch)

		ticker := time.NewTicker(time.Duration(hg.config.IntervalMS) * time.Millisecond)
		defer ticker.Stop()

		for {
			select {
			case <-ctx.Done():
				return
			case t := <-ticker.C:
				event := hg.generateHeartbeat(t)
				select {
				case ch <- event:
					hg.beatCount.Add(1)
					hg.lastBeat.Store(t)
				case <-ctx.Done():
					return
				}
			}
		}
	}()

	return ch
}

// generateHeartbeat creates a heartbeat transport event
func (hg *HeartbeatGenerator) generateHeartbeat(t time.Time) core.TransportEvent {
	// Create heartbeat as LogEntry for consistent formatting
	entry := core.LogEntry{
		Time:    t,
		Source:  "heartbeat",
		Level:   "INFO",
		Message: "heartbeat",
	}

	// Add stats if configured
	if hg.config.IncludeStats {
		fields := map[string]any{
			"type":       "heartbeat",
			"beat_count": hg.beatCount.Load(),
		}

		if last, ok := hg.lastBeat.Load().(time.Time); ok && !last.IsZero() {
			fields["interval_ms"] = t.Sub(last).Milliseconds()
		}

		fieldsJSON, _ := json.Marshal(fields)
		entry.Fields = fieldsJSON
	}

	// Use formatter to generate payload
	var payload []byte
	var err error

	// Check if we need special formatting for heartbeat
	if hg.config.Format == "comment" {
		// SSE comment format - bypass formatter for this special case
		if hg.config.IncludeStats {
			beatNum := hg.beatCount.Load()
			// Format the counter as a decimal number
			payload = []byte(fmt.Sprintf(": heartbeat %s [#%d]\n", t.Format(time.RFC3339), beatNum))
		} else {
			payload = []byte(": heartbeat " + t.Format(time.RFC3339) + "\n")
		}
	} else {
		// Use flow's formatter for consistent formatting
		if adapter, ok := hg.formatter.(*format.FormatterAdapter); ok {
			// Customize flags for heartbeat if needed
			customFlags := int64(0)
			if !hg.config.IncludeTimestamp {
				// Remove timestamp flag if not wanted
				customFlags = formatter.FlagShowLevel
			} else {
				customFlags = formatter.FlagDefault
			}
			payload, err = adapter.FormatWithFlags(entry, customFlags)
		} else {
			// Fallback to standard format
			payload, err = hg.formatter.Format(entry)
		}

		if err != nil {
			hg.logger.Error("msg", "Failed to format heartbeat",
				"error", err)
			// Fallback to simple text
			payload = []byte("heartbeat: " + t.Format(time.RFC3339) + "\n")
		}
	}

	return core.TransportEvent{
		Time:    t,
		Payload: payload,
	}
}

// IntervalMS returns the heartbeat interval in milliseconds
func (hg *HeartbeatGenerator) IntervalMS() int64 {
	return hg.config.IntervalMS
}
src/internal/limit/rate.go
@@ -1,19 +1,21 @@
-// FILE: logwisp/src/internal/limit/rate.go
-package limit
+package flow
 
 import (
+	"fmt"
 	"strings"
 	"sync/atomic"
 
 	"logwisp/src/internal/config"
 	"logwisp/src/internal/core"
+	"logwisp/src/internal/tokenbucket"
 
+	lconfig "github.com/lixenwraith/config"
 	"github.com/lixenwraith/log"
 )
 
-// RateLimiter enforces rate limits on log entries flowing through a pipeline.
+// RateLimiter enforces rate limits on log entries flowing through a pipeline
 type RateLimiter struct {
-	bucket *TokenBucket
+	bucket *tokenbucket.TokenBucket
 	policy config.RateLimitPolicy
 	logger *log.Logger
 
@@ -23,41 +25,51 @@ type RateLimiter struct {
 	droppedCount atomic.Uint64
 }
 
-// NewRateLimiter creates a new rate limiter. If cfg.Rate is 0, it returns nil.
+// NewRateLimiter creates a new pipeline-level rate limiter from configuration
 func NewRateLimiter(cfg config.RateLimitConfig, logger *log.Logger) (*RateLimiter, error) {
+	// Rate <= 0 means disabled
 	if cfg.Rate <= 0 {
 		return nil, nil // No rate limit
 	}
 
+	// Validate
+	if err := lconfig.NonNegative(cfg.Rate); err != nil {
+		return nil, fmt.Errorf("rate: %w", err)
+	}
+	if err := lconfig.NonNegative(cfg.Burst); err != nil {
+		return nil, fmt.Errorf("burst: %w", err)
+	}
+	if err := lconfig.NonNegative(cfg.MaxEntrySizeBytes); err != nil {
+		return nil, fmt.Errorf("max_entry_size_bytes: %w", err)
+	}
+
+	// Defaults
 	burst := cfg.Burst
 	if burst <= 0 {
-		burst = cfg.Rate // Default burst to rate
+		burst = cfg.Rate
 	}
 
 	var policy config.RateLimitPolicy
 	switch strings.ToLower(cfg.Policy) {
 	case "drop":
 		policy = config.PolicyDrop
-	default:
+	case "pass", "":
 		policy = config.PolicyPass
+	default:
+		return nil, fmt.Errorf("policy: must be one of [drop, pass], got %s", cfg.Policy)
 	}
 
 	l := &RateLimiter{
-		bucket: NewTokenBucket(burst, cfg.Rate),
+		bucket: tokenbucket.New(burst, cfg.Rate),
 		policy: policy,
 		logger: logger,
 		maxEntrySizeBytes: cfg.MaxEntrySizeBytes,
 	}
 
-	if cfg.Rate > 0 {
-		l.bucket = NewTokenBucket(burst, cfg.Rate)
-	}
-
 	return l, nil
 }
 
-// Allow checks if a log entry is allowed to pass based on the rate limit.
-// It returns true if the entry should pass, false if it should be dropped.
+// Allow checks if a log entry is permitted to pass based on the rate limit
 func (l *RateLimiter) Allow(entry core.LogEntry) bool {
 	if l == nil || l.policy == config.PolicyPass {
 		return true
@@ -83,7 +95,7 @@ func (l *RateLimiter) Allow(entry core.LogEntry) bool {
 	return true
 }
 
-// GetStats returns the statistics for the limiter.
+// GetStats returns statistics for the rate limiter
 func (l *RateLimiter) GetStats() map[string]any {
 	if l == nil {
 		return map[string]any{
@@ -93,6 +105,8 @@ func (l *RateLimiter) GetStats() map[string]any {
 
 	stats := map[string]any{
 		"enabled": true,
+		"rate":    l.bucket.Rate(),
+		"burst":   l.bucket.Capacity(),
 		"dropped_total":         l.droppedCount.Load(),
 		"dropped_by_size_total": l.droppedBySizeCount.Load(),
 		"policy":                policyString(l.policy),
@@ -100,13 +114,13 @@ func (l *RateLimiter) GetStats() map[string]any {
 	}
 
 	if l.bucket != nil {
-		stats["tokens"] = l.bucket.Tokens()
+		stats["available_tokens"] = l.bucket.Tokens()
 	}
 
 	return stats
 }
 
-// policyString returns the string representation of the policy.
+// policyString returns the string representation of a rate limit policy
 func policyString(p config.RateLimitPolicy) string {
 	switch p {
 	case config.PolicyDrop:
src/internal/format/adapter.go (new file, 152 lines):

package format

import (
	"encoding/json"
	"fmt"

	"logwisp/src/internal/config"
	"logwisp/src/internal/core"

	lconfig "github.com/lixenwraith/config"
	"github.com/lixenwraith/log/formatter"
	"github.com/lixenwraith/log/sanitizer"
)

const (
	DefaultFormatType = "raw"
)

// FormatterAdapter wraps log/formatter for logwisp compatibility
type FormatterAdapter struct {
	formatter *formatter.Formatter
	format    string
	flags     int64
}

// NewFormatterAdapter creates adapter from config
func NewFormatterAdapter(cfg *config.FormatConfig) (*FormatterAdapter, error) {
	// Validate
	if cfg.Type != "" {
		validateType := lconfig.OneOf("json", "txt", "text", "raw")
		if err := validateType(cfg.Type); err != nil {
			return nil, fmt.Errorf("type: %w", err)
		}
	}

	if cfg.SanitizerPolicy != "" {
		validatePolicy := lconfig.OneOf("raw", "json", "txt", "shell")
		if err := validatePolicy(cfg.SanitizerPolicy); err != nil {
			return nil, fmt.Errorf("sanitizer_policy: %w", err)
		}
	}

	// Defaults
	if cfg.Type == "" {
		cfg.Type = DefaultFormatType
	}

	// Create sanitizer based on policy
	var s *sanitizer.Sanitizer
	if cfg.SanitizerPolicy != "" {
		s = sanitizer.New().Policy(sanitizer.PolicyPreset(cfg.SanitizerPolicy))
	} else {
		// Default sanitizer policy based on format type
		switch cfg.Type {
		case "json":
			s = sanitizer.New().Policy(sanitizer.PolicyJSON)
		case "txt", "text":
			s = sanitizer.New().Policy(sanitizer.PolicyTxt)
		default:
			s = sanitizer.New().Policy(sanitizer.PolicyRaw)
		}
	}

	// Create formatter with sanitizer
	f := formatter.New(s).Type(cfg.Type)

	if cfg.TimestampFormat != "" {
		f.TimestampFormat(cfg.TimestampFormat)
	}

	// Build flags from config
	flags := cfg.Flags
	if flags == 0 {
		if cfg.Type == "raw" {
			flags = formatter.FlagRaw
		} else {
			flags = formatter.FlagDefault
		}
	}

	return &FormatterAdapter{
		formatter: f,
		format:    cfg.Type,
		flags:     flags,
	}, nil
}

// Format implements Formatter interface
func (a *FormatterAdapter) Format(entry core.LogEntry) ([]byte, error) {
	// Map logwisp LogEntry to formatter args
	level := mapLevel(entry.Level)

	// Build args based on whether we have structured fields
	var args []any

	if len(entry.Fields) > 0 {
		// Parse fields JSON
		var fields map[string]any
		if err := json.Unmarshal(entry.Fields, &fields); err == nil && len(fields) > 0 {
			// Use structured JSON format for fields
			args = []any{entry.Message, fields}
			// Add structured flag to properly format fields as JSON object
			effectiveFlags := a.flags | formatter.FlagStructuredJSON
			return a.formatter.Format(effectiveFlags, entry.Time, level, entry.Source, args), nil
		}
	}

	// Simple message without fields
	args = []any{entry.Message}
	return a.formatter.Format(a.flags, entry.Time, level, entry.Source, args), nil
}

// FormatWithFlags allows custom flags for specific formatting needs
func (a *FormatterAdapter) FormatWithFlags(entry core.LogEntry, customFlags int64) ([]byte, error) {
	level := mapLevel(entry.Level)

	var args []any
	if len(entry.Fields) > 0 {
		var fields map[string]any
		if err := json.Unmarshal(entry.Fields, &fields); err == nil && len(fields) > 0 {
			args = []any{entry.Message, fields}
			customFlags |= formatter.FlagStructuredJSON
		} else {
			args = []any{entry.Message}
		}
	} else {
		args = []any{entry.Message}
	}

	return a.formatter.Format(customFlags, entry.Time, level, entry.Source, args), nil
}

// Name returns formatter type
func (a *FormatterAdapter) Name() string {
	return a.format
}

// mapLevel maps string level to int64
func mapLevel(level string) int64 {
	switch level {
	case "DEBUG", "debug":
		return -4
	case "INFO", "info":
		return 0
	case "WARN", "warn", "WARNING", "warning":
		return 4
	case "ERROR", "error":
		return 8
	default:
		return 0
	}
}
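The core decision in `FormatterAdapter.Format` is whether `entry.Fields` holds a usable JSON object: if so, the message and the parsed fields are passed together with a structured-output flag; otherwise only the message is passed. A standalone sketch of that branching (illustrative only, not the adapter itself):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildArgs mirrors the branching in FormatterAdapter.Format:
// if rawFields parses to a non-empty JSON object, emit message + fields
// (the structured case); otherwise fall back to the bare message.
func buildArgs(message string, rawFields []byte) (args []any, structured bool) {
	if len(rawFields) > 0 {
		var fields map[string]any
		if err := json.Unmarshal(rawFields, &fields); err == nil && len(fields) > 0 {
			return []any{message, fields}, true // caller would set FlagStructuredJSON
		}
	}
	return []any{message}, false
}

func main() {
	args, structured := buildArgs("request done", []byte(`{"status":200}`))
	fmt.Println(len(args), structured) // structured path: message plus fields map

	args, structured = buildArgs("plain line", nil)
	fmt.Println(len(args), structured) // simple path: message only
}
```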
src/internal/format/format.go
@@ -1,38 +1,28 @@
-// FILE: logwisp/src/internal/format/format.go
 package format

 import (
-	"fmt"
+	"logwisp/src/internal/config"

 	"logwisp/src/internal/core"
-
-	"github.com/lixenwraith/log"
 )

-// Formatter defines the interface for transforming a LogEntry into a byte slice.
+// Formatter defines the interface for transforming a LogEntry into a byte slice
 type Formatter interface {
-	// Format takes a LogEntry and returns the formatted log as a byte slice.
+	// Format takes a LogEntry and returns the formatted log as a byte slice
 	Format(entry core.LogEntry) ([]byte, error)

-	// Name returns the formatter type name
+	// Name returns the formatter's type name (e.g., "json", "raw")
 	Name() string
 }

-// New creates a new Formatter based on the provided configuration.
-func New(name string, options map[string]any, logger *log.Logger) (Formatter, error) {
-	// Default to raw if no format specified
-	if name == "" {
-		name = "raw"
-	}
-
-	switch name {
-	case "json":
-		return NewJSONFormatter(options, logger)
-	case "text":
-		return NewTextFormatter(options, logger)
-	case "raw":
-		return NewRawFormatter(options, logger)
-	default:
-		return nil, fmt.Errorf("unknown formatter type: %s", name)
-	}
+// NewFormatter creates a Formatter using formatter/sanitizer packages
+func NewFormatter(cfg *config.FormatConfig) (Formatter, error) {
+	if cfg == nil {
+		cfg = &config.FormatConfig{
+			Type:            DefaultFormatType,
+			Flags:           0,
+			SanitizerPolicy: "raw",
+		}
+	}
+
+	return NewFormatterAdapter(cfg)
 }
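The new `NewFormatter` replaces the string-switch factory with nil-config defaulting: a caller that passes no configuration gets the raw/no-flags default rather than an error. A tiny sketch of that pattern, using a hypothetical local copy of the config struct (the real `FormatConfig` lives in `logwisp/src/internal/config`):

```go
package main

import "fmt"

// FormatConfig is a hypothetical local stand-in for the shape
// consumed by NewFormatter; field names follow the diff above.
type FormatConfig struct {
	Type            string
	Flags           int64
	SanitizerPolicy string
}

// withDefaults mirrors NewFormatter's nil handling: nil becomes
// the raw default, a non-nil config passes through unchanged.
func withDefaults(cfg *FormatConfig) *FormatConfig {
	if cfg == nil {
		return &FormatConfig{Type: "raw", Flags: 0, SanitizerPolicy: "raw"}
	}
	return cfg
}

func main() {
	cfg := withDefaults(nil)
	fmt.Println(cfg.Type, cfg.SanitizerPolicy) // the raw defaults

	own := &FormatConfig{Type: "json"}
	fmt.Println(withDefaults(own).Type) // caller's config is untouched
}
```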
src/internal/format/json.go (deleted, 157 lines):

// FILE: logwisp/src/internal/format/json.go
package format

import (
	"encoding/json"
	"fmt"
	"time"

	"logwisp/src/internal/core"

	"github.com/lixenwraith/log"
)

// JSONFormatter produces structured JSON logs
type JSONFormatter struct {
	pretty         bool
	timestampField string
	levelField     string
	messageField   string
	sourceField    string
	logger         *log.Logger
}

// NewJSONFormatter creates a new JSON formatter
func NewJSONFormatter(options map[string]any, logger *log.Logger) (*JSONFormatter, error) {
	f := &JSONFormatter{
		timestampField: "timestamp",
		levelField:     "level",
		messageField:   "message",
		sourceField:    "source",
		logger:         logger,
	}

	// Extract options
	if pretty, ok := options["pretty"].(bool); ok {
		f.pretty = pretty
	}
	if field, ok := options["timestamp_field"].(string); ok && field != "" {
		f.timestampField = field
	}
	if field, ok := options["level_field"].(string); ok && field != "" {
		f.levelField = field
	}
	if field, ok := options["message_field"].(string); ok && field != "" {
		f.messageField = field
	}
	if field, ok := options["source_field"].(string); ok && field != "" {
		f.sourceField = field
	}

	return f, nil
}

// Format formats the log entry as JSON
func (f *JSONFormatter) Format(entry core.LogEntry) ([]byte, error) {
	// Start with a clean map
	output := make(map[string]any)

	// First, populate with LogWisp metadata
	output[f.timestampField] = entry.Time.Format(time.RFC3339Nano)
	output[f.levelField] = entry.Level
	output[f.sourceField] = entry.Source

	// Try to parse the message as JSON
	var msgData map[string]any
	if err := json.Unmarshal([]byte(entry.Message), &msgData); err == nil {
		// Message is valid JSON - merge fields
		// LogWisp metadata takes precedence
		for k, v := range msgData {
			// Don't overwrite our standard fields
			if k != f.timestampField && k != f.levelField && k != f.sourceField {
				output[k] = v
			}
		}

		// If the original JSON had these fields, log that we're overriding
		if _, hasTime := msgData[f.timestampField]; hasTime {
			f.logger.Debug("msg", "Overriding timestamp from JSON message",
				"component", "json_formatter",
				"original", msgData[f.timestampField],
				"logwisp", output[f.timestampField])
		}
	} else {
		// Message is not valid JSON - add as message field
		output[f.messageField] = entry.Message
	}

	// Add any additional fields from LogEntry.Fields
	if len(entry.Fields) > 0 {
		var fields map[string]any
		if err := json.Unmarshal(entry.Fields, &fields); err == nil {
			// Merge additional fields, but don't override existing
			for k, v := range fields {
				if _, exists := output[k]; !exists {
					output[k] = v
				}
			}
		}
	}

	// Marshal to JSON
	var result []byte
	var err error
	if f.pretty {
		result, err = json.MarshalIndent(output, "", "  ")
	} else {
		result, err = json.Marshal(output)
	}

	if err != nil {
		return nil, fmt.Errorf("failed to marshal JSON: %w", err)
	}

	// Add newline
	return append(result, '\n'), nil
}

// Name returns the formatter name
func (f *JSONFormatter) Name() string {
	return "json"
}

// FormatBatch formats multiple entries as a JSON array
// This is a special method for sinks that need to batch entries
func (f *JSONFormatter) FormatBatch(entries []core.LogEntry) ([]byte, error) {
	// For batching, we need to create an array of formatted objects
	batch := make([]json.RawMessage, 0, len(entries))

	for _, entry := range entries {
		// Format each entry without the trailing newline
		formatted, err := f.Format(entry)
		if err != nil {
			f.logger.Warn("msg", "Failed to format entry in batch",
				"component", "json_formatter",
				"error", err)
			continue
		}

		// Remove the trailing newline for array elements
		if len(formatted) > 0 && formatted[len(formatted)-1] == '\n' {
			formatted = formatted[:len(formatted)-1]
		}

		batch = append(batch, formatted)
	}

	// Marshal the entire batch as an array
	var result []byte
	var err error
	if f.pretty {
		result, err = json.MarshalIndent(batch, "", "  ")
	} else {
		result, err = json.Marshal(batch)
	}

	return result, err
}
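The precedence rule in the deleted `JSONFormatter` is worth keeping in mind when migrating: fields parsed out of a JSON message body were merged in, but pipeline metadata (timestamp, level, source) always won. A standalone sketch of that merge order (illustrative, not the removed implementation):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// mergeMessage reproduces the deleted formatter's precedence rule:
// reserved metadata keys win over keys parsed from a JSON message;
// a non-JSON message is kept verbatim under "message".
func mergeMessage(meta map[string]any, message string) map[string]any {
	out := make(map[string]any, len(meta))
	for k, v := range meta {
		out[k] = v
	}
	var msg map[string]any
	if err := json.Unmarshal([]byte(message), &msg); err == nil {
		for k, v := range msg {
			if _, reserved := out[k]; !reserved {
				out[k] = v // only non-reserved keys merge in
			}
		}
	} else {
		out["message"] = message // non-JSON message kept as-is
	}
	return out
}

func main() {
	meta := map[string]any{"level": "INFO", "source": "app"}
	out := mergeMessage(meta, `{"level":"debug","user":"alice"}`)
	fmt.Println(out["level"], out["user"]) // metadata level wins, user merges in
}
```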
src/internal/format/raw.go (deleted, 31 lines):

// FILE: logwisp/src/internal/format/raw.go
package format

import (
	"logwisp/src/internal/core"

	"github.com/lixenwraith/log"
)

// RawFormatter outputs the log message as-is with a newline
type RawFormatter struct {
	logger *log.Logger
}

// NewRawFormatter creates a new raw formatter
func NewRawFormatter(options map[string]any, logger *log.Logger) (*RawFormatter, error) {
	return &RawFormatter{
		logger: logger,
	}, nil
}

// Format returns the message with a newline appended
func (f *RawFormatter) Format(entry core.LogEntry) ([]byte, error) {
	// Simply return the message with newline
	return append([]byte(entry.Message), '\n'), nil
}

// Name returns the formatter name
func (f *RawFormatter) Name() string {
	return "raw"
}
src/internal/format/text.go (deleted, 108 lines):

// FILE: logwisp/src/internal/format/text.go
package format

import (
	"bytes"
	"fmt"
	"strings"
	"text/template"
	"time"

	"logwisp/src/internal/core"

	"github.com/lixenwraith/log"
)

// TextFormatter produces human-readable text logs using templates
type TextFormatter struct {
	template        *template.Template
	timestampFormat string
	logger          *log.Logger
}

// NewTextFormatter creates a new text formatter
func NewTextFormatter(options map[string]any, logger *log.Logger) (*TextFormatter, error) {
	// Default template
	templateStr := "[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Source}} - {{.Message}}{{ if .Fields }} {{.Fields}}{{ end }}"
	if tmpl, ok := options["template"].(string); ok && tmpl != "" {
		templateStr = tmpl
	}

	// Default timestamp format
	timestampFormat := time.RFC3339
	if tsFormat, ok := options["timestamp_format"].(string); ok && tsFormat != "" {
		timestampFormat = tsFormat
	}

	f := &TextFormatter{
		timestampFormat: timestampFormat,
		logger:          logger,
	}

	// Create template with helper functions
	funcMap := template.FuncMap{
		"FmtTime": func(t time.Time) string {
			return t.Format(f.timestampFormat)
		},
		"ToUpper":   strings.ToUpper,
		"ToLower":   strings.ToLower,
		"TrimSpace": strings.TrimSpace,
	}

	tmpl, err := template.New("log").Funcs(funcMap).Parse(templateStr)
	if err != nil {
		return nil, fmt.Errorf("invalid template: %w", err)
	}

	f.template = tmpl
	return f, nil
}

// Format formats the log entry using the template
func (f *TextFormatter) Format(entry core.LogEntry) ([]byte, error) {
	// Prepare data for template
	data := map[string]any{
		"Timestamp": entry.Time,
		"Level":     entry.Level,
		"Source":    entry.Source,
		"Message":   entry.Message,
	}

	// Set default level if empty
	if data["Level"] == "" {
		data["Level"] = "INFO"
	}

	// Add fields if present
	if len(entry.Fields) > 0 {
		data["Fields"] = string(entry.Fields)
	}

	var buf bytes.Buffer
	if err := f.template.Execute(&buf, data); err != nil {
		// Fallback: return a basic formatted message
		f.logger.Debug("msg", "Template execution failed, using fallback",
			"component", "text_formatter",
			"error", err)

		fallback := fmt.Sprintf("[%s] [%s] %s - %s\n",
			entry.Time.Format(f.timestampFormat),
			strings.ToUpper(entry.Level),
			entry.Source,
			entry.Message)
		return []byte(fallback), nil
	}

	// Ensure newline at end
	result := buf.Bytes()
	if len(result) == 0 || result[len(result)-1] != '\n' {
		result = append(result, '\n')
	}

	return result, nil
}

// Name returns the formatter name
func (f *TextFormatter) Name() string {
	return "text"
}
src/internal/limit/net.go (deleted, 732 lines):

// FILE: logwisp/src/internal/limit/net.go
package limit

import (
	"context"
	"net"
	"strings"
	"sync"
	"sync/atomic"
	"time"

	"logwisp/src/internal/config"

	"github.com/lixenwraith/log"
)

// DenialReason indicates why a request was denied
type DenialReason string

const (
	// IPv4Only is the enforcement message for IPv6 rejection
	IPv4Only = "IPv4-only (IPv6 not supported)"
)

const (
	ReasonAllowed           DenialReason = ""
	ReasonBlacklisted       DenialReason = "IP denied by blacklist"
	ReasonNotWhitelisted    DenialReason = "IP not in whitelist"
	ReasonRateLimited       DenialReason = "Rate limit exceeded"
	ReasonConnectionLimited DenialReason = "Connection limit exceeded"
	ReasonInvalidIP         DenialReason = "Invalid IP address"
)

// NetLimiter manages net limiting for a transport
type NetLimiter struct {
	config config.NetLimitConfig
	logger *log.Logger

	// IP Access Control Lists
	ipWhitelist []*net.IPNet
	ipBlacklist []*net.IPNet

	// Per-IP limiters
	ipLimiters map[string]*ipLimiter
	ipMu       sync.RWMutex

	// Global limiter for the transport
	globalLimiter *TokenBucket

	// Connection tracking
	ipConnections map[string]*connTracker
	connMu        sync.RWMutex

	// Statistics
	totalRequests      atomic.Uint64
	blockedByBlacklist atomic.Uint64
	blockedByWhitelist atomic.Uint64
	blockedByRateLimit atomic.Uint64
	blockedByConnLimit atomic.Uint64
	blockedByInvalidIP atomic.Uint64
	uniqueIPs          atomic.Uint64

	// Cleanup
	lastCleanup   time.Time
	cleanupMu     sync.Mutex
	cleanupActive atomic.Bool

	// Lifecycle management
	ctx         context.Context
	cancel      context.CancelFunc
	cleanupDone chan struct{}
}

type ipLimiter struct {
	bucket      *TokenBucket
	lastSeen    time.Time
	connections atomic.Int64
}

// Connection tracking with activity timestamp
type connTracker struct {
	connections atomic.Int64
	lastSeen    time.Time
	mu          sync.Mutex
}

// Creates a new net limiter
func NewNetLimiter(cfg config.NetLimitConfig, logger *log.Logger) *NetLimiter {
	// Return nil only if nothing is configured
	hasACL := len(cfg.IPWhitelist) > 0 || len(cfg.IPBlacklist) > 0
	hasRateLimit := cfg.Enabled

	if !hasACL && !hasRateLimit {
		return nil
	}

	if logger == nil {
		panic("netlimit.New: logger cannot be nil")
	}

	ctx, cancel := context.WithCancel(context.Background())

	l := &NetLimiter{
		config:        cfg,
		logger:        logger,
		ipWhitelist:   make([]*net.IPNet, 0),
		ipBlacklist:   make([]*net.IPNet, 0),
		ipLimiters:    make(map[string]*ipLimiter),
		ipConnections: make(map[string]*connTracker),
		lastCleanup:   time.Now(),
		ctx:           ctx,
		cancel:        cancel,
		cleanupDone:   make(chan struct{}),
	}

	// Parse IP lists
	l.parseIPLists(cfg)

	// Create global limiter if configured
	if cfg.Enabled && cfg.LimitBy == "global" {
		l.globalLimiter = NewTokenBucket(
			float64(cfg.BurstSize),
			cfg.RequestsPerSecond,
		)
	}

	// Start cleanup goroutine only if rate limiting is enabled
	if cfg.Enabled {
		go l.cleanupLoop()
	}

	logger.Info("msg", "Net limiter initialized",
		"component", "netlimit",
		"acl_enabled", hasACL,
		"rate_limiting", cfg.Enabled,
		"whitelist_rules", len(l.ipWhitelist),
		"blacklist_rules", len(l.ipBlacklist),
		"requests_per_second", cfg.RequestsPerSecond,
		"burst_size", cfg.BurstSize,
		"limit_by", cfg.LimitBy)

	return l
}

// parseIPLists parses and validates IP whitelist/blacklist
func (l *NetLimiter) parseIPLists(cfg config.NetLimitConfig) {
	// Parse whitelist
	for _, entry := range cfg.IPWhitelist {
		if ipNet := l.parseIPEntry(entry, "whitelist"); ipNet != nil {
			l.ipWhitelist = append(l.ipWhitelist, ipNet)
		}
	}

	// Parse blacklist
	for _, entry := range cfg.IPBlacklist {
		if ipNet := l.parseIPEntry(entry, "blacklist"); ipNet != nil {
			l.ipBlacklist = append(l.ipBlacklist, ipNet)
		}
	}
}

// parseIPEntry parses a single IP or CIDR entry
func (l *NetLimiter) parseIPEntry(entry, listType string) *net.IPNet {
	// Handle single IP
	if !strings.Contains(entry, "/") {
		ip := net.ParseIP(entry)
		if ip == nil {
			l.logger.Warn("msg", "Invalid IP entry",
				"component", "netlimit",
				"list", listType,
				"entry", entry)
			return nil
		}

		// Reject IPv6
		if ip.To4() == nil {
			l.logger.Warn("msg", "IPv6 address rejected",
				"component", "netlimit",
				"list", listType,
				"entry", entry,
				"reason", IPv4Only)
			return nil
		}

		return &net.IPNet{IP: ip.To4(), Mask: net.CIDRMask(32, 32)}
	}

	// Parse CIDR
	ipAddr, ipNet, err := net.ParseCIDR(entry)
	if err != nil {
		l.logger.Warn("msg", "Invalid CIDR entry",
			"component", "netlimit",
			"list", listType,
			"entry", entry,
			"error", err)
		return nil
	}

	// Reject IPv6 CIDR
	if ipAddr.To4() == nil {
		l.logger.Warn("msg", "IPv6 CIDR rejected",
			"component", "netlimit",
			"list", listType,
			"entry", entry,
			"reason", IPv4Only)
		return nil
	}

	// Ensure mask is IPv4
	_, bits := ipNet.Mask.Size()
	if bits != 32 {
		l.logger.Warn("msg", "Non-IPv4 CIDR mask rejected",
			"component", "netlimit",
			"list", listType,
			"entry", entry,
			"mask_bits", bits,
			"reason", IPv4Only)
		return nil
	}

	return &net.IPNet{IP: ipAddr.To4(), Mask: ipNet.Mask}
}
// checkIPAccess checks if an IP is allowed by ACLs
func (l *NetLimiter) checkIPAccess(ip net.IP) DenialReason {
	// 1. Check blacklist first (deny takes precedence)
	for _, ipNet := range l.ipBlacklist {
		if ipNet.Contains(ip) {
			l.blockedByBlacklist.Add(1)
			l.logger.Debug("msg", "IP denied by blacklist",
				"component", "netlimit",
				"ip", ip.String(),
				"rule", ipNet.String())
			return ReasonBlacklisted
		}
	}

	// 2. If whitelist is configured, IP must be in it
	if len(l.ipWhitelist) > 0 {
		for _, ipNet := range l.ipWhitelist {
			if ipNet.Contains(ip) {
				l.logger.Debug("msg", "IP allowed by whitelist",
					"component", "netlimit",
					"ip", ip.String(),
					"rule", ipNet.String())
				return ReasonAllowed
			}
		}
		l.blockedByWhitelist.Add(1)
		l.logger.Debug("msg", "IP not in whitelist",
			"component", "netlimit",
			"ip", ip.String())
		return ReasonNotWhitelisted
	}

	return ReasonAllowed
}

func (l *NetLimiter) Shutdown() {
	if l == nil {
		return
	}

	l.logger.Info("msg", "Shutting down net limiter", "component", "netlimit")

	// Cancel context to stop cleanup goroutine
	l.cancel()

	// Wait for cleanup goroutine to finish
	select {
	case <-l.cleanupDone:
		l.logger.Debug("msg", "Cleanup goroutine stopped", "component", "netlimit")
	case <-time.After(2 * time.Second):
		l.logger.Warn("msg", "Cleanup goroutine shutdown timeout", "component", "netlimit")
	}
}

// Checks if an HTTP request should be allowed
func (l *NetLimiter) CheckHTTP(remoteAddr string) (allowed bool, statusCode int64, message string) {
	if l == nil {
		return true, 0, ""
	}

	l.totalRequests.Add(1)

	// Parse IP address
	ipStr, _, err := net.SplitHostPort(remoteAddr)
	if err != nil {
		l.logger.Warn("msg", "Failed to parse remote addr",
			"component", "netlimit",
			"remote_addr", remoteAddr,
			"error", err)
		return true, 0, ""
	}

	ip := net.ParseIP(ipStr)
	if ip == nil {
		l.blockedByInvalidIP.Add(1)
		l.logger.Warn("msg", "Failed to parse IP",
			"component", "netlimit",
			"ip", ipStr)
		return false, 403, string(ReasonInvalidIP)
	}

	// Reject IPv6 connections
	if !isIPv4(ip) {
		l.blockedByInvalidIP.Add(1)
		l.logger.Warn("msg", "IPv6 connection rejected",
			"component", "netlimit",
			"ip", ipStr,
			"reason", IPv4Only)
		return false, 403, IPv4Only
	}

	// Normalize to IPv4 representation
	ip = ip.To4()

	// Check IP access control
	if reason := l.checkIPAccess(ip); reason != ReasonAllowed {
		return false, 403, string(reason)
	}

	// If rate limiting is not enabled, allow
	if !l.config.Enabled {
		return true, 0, ""
	}

	// Check connection limits
	if l.config.MaxConnectionsPerIP > 0 {
		l.connMu.RLock()
		tracker, exists := l.ipConnections[ipStr]
		l.connMu.RUnlock()

		if exists && tracker.connections.Load() >= l.config.MaxConnectionsPerIP {
			l.blockedByConnLimit.Add(1)
			statusCode = l.config.ResponseCode
			if statusCode == 0 {
				statusCode = 429
			}
			return false, statusCode, string(ReasonConnectionLimited)
		}
	}

	// Check rate limit
	if !l.checkLimit(ipStr) {
		l.blockedByRateLimit.Add(1)
		statusCode = l.config.ResponseCode
		if statusCode == 0 {
			statusCode = 429
		}
		message = l.config.ResponseMessage
		if message == "" {
			message = string(ReasonRateLimited)
		}
		return false, statusCode, message
	}

	return true, 0, ""
}

// Update connection activity
func (l *NetLimiter) updateConnectionActivity(ip string) {
	l.connMu.RLock()
	tracker, exists := l.ipConnections[ip]
	l.connMu.RUnlock()

	if exists {
		tracker.mu.Lock()
		tracker.lastSeen = time.Now()
		tracker.mu.Unlock()
	}
}

// Checks if a TCP connection should be allowed
func (l *NetLimiter) CheckTCP(remoteAddr net.Addr) bool {
	if l == nil {
		return true
	}

	l.totalRequests.Add(1)

	// Extract IP from TCP addr
	tcpAddr, ok := remoteAddr.(*net.TCPAddr)
	if !ok {
		l.blockedByInvalidIP.Add(1)
		return false
	}

	// Reject IPv6 connections
	if !isIPv4(tcpAddr.IP) {
		l.blockedByInvalidIP.Add(1)
		l.logger.Warn("msg", "IPv6 TCP connection rejected",
			"component", "netlimit",
			"ip", tcpAddr.IP.String(),
			"reason", IPv4Only)
		return false
	}

	// Normalize to IPv4 representation
	ip := tcpAddr.IP.To4()

	// Check IP access control
	if reason := l.checkIPAccess(ip); reason != ReasonAllowed {
		return false
	}

	// If rate limiting is not enabled, allow
	if !l.config.Enabled {
		return true
	}

	// Check rate limit
	ipStr := tcpAddr.IP.String()
	if !l.checkLimit(ipStr) {
		l.blockedByRateLimit.Add(1)
		return false
	}

	return true
}

func isIPv4(ip net.IP) bool {
	return ip.To4() != nil
}

// Tracks a new connection for an IP
func (l *NetLimiter) AddConnection(remoteAddr string) {
	if l == nil {
		return
	}

	ip, _, err := net.SplitHostPort(remoteAddr)
	if err != nil {
		l.logger.Warn("msg", "Failed to parse remote address in AddConnection",
			"component", "netlimit",
			"remote_addr", remoteAddr,
			"error", err)
		return
	}

	// IP validation
	parsedIP := net.ParseIP(ip)
	if parsedIP == nil {
		l.logger.Warn("msg", "Failed to parse IP in AddConnection",
			"component", "netlimit",
			"ip", ip)
		return
	}

	// Only supporting ipv4
	if !isIPv4(parsedIP) {
		return
	}

	l.connMu.Lock()
	tracker, exists := l.ipConnections[ip]
	if !exists {
		// Create new tracker with timestamp
		tracker = &connTracker{
			lastSeen: time.Now(),
		}
		l.ipConnections[ip] = tracker
	}
	l.connMu.Unlock()

	newCount := tracker.connections.Add(1)
	// Update activity timestamp
	tracker.mu.Lock()
	tracker.lastSeen = time.Now()
	tracker.mu.Unlock()

	l.logger.Debug("msg", "Connection added",
		"ip", ip,
		"connections", newCount)
}

// Removes a connection for an IP
func (l *NetLimiter) RemoveConnection(remoteAddr string) {
	if l == nil {
		return
	}

	ip, _, err := net.SplitHostPort(remoteAddr)
	if err != nil {
		l.logger.Warn("msg", "Failed to parse remote address in RemoveConnection",
			"component", "netlimit",
			"remote_addr", remoteAddr,
			"error", err)
		return
	}

	// IP validation
	parsedIP := net.ParseIP(ip)
	if parsedIP == nil {
		l.logger.Warn("msg", "Failed to parse IP in RemoveConnection",
			"component", "netlimit",
			"ip", ip)
		return
	}

	// Only supporting ipv4
	if !isIPv4(parsedIP) {
		return
	}

	l.connMu.RLock()
	tracker, exists := l.ipConnections[ip]
	l.connMu.RUnlock()

	if exists {
		newCount := tracker.connections.Add(-1)
		l.logger.Debug("msg", "Connection removed",
			"ip", ip,
			"connections", newCount)

		if newCount <= 0 {
			// Clean up if no more connections
			l.connMu.Lock()
			if tracker.connections.Load() <= 0 {
				delete(l.ipConnections, ip)
			}
			l.connMu.Unlock()
		}
	}
}

// Returns net limiter statistics
func (l *NetLimiter) GetStats() map[string]any {
	if l == nil {
		return map[string]any{"enabled": false}
	}

	l.ipMu.RLock()
	activeIPs := len(l.ipLimiters)
	l.ipMu.RUnlock()

	l.connMu.RLock()
	totalConnections := 0
	for _, tracker := range l.ipConnections {
		totalConnections += int(tracker.connections.Load())
	}
	l.connMu.RUnlock()

	totalBlocked := l.blockedByBlacklist.Load() +
		l.blockedByWhitelist.Load() +
|
||||||
l.blockedByRateLimit.Load() +
|
|
||||||
l.blockedByConnLimit.Load() +
|
|
||||||
l.blockedByInvalidIP.Load()
|
|
||||||
|
|
||||||
return map[string]any{
|
|
||||||
"enabled": true,
|
|
||||||
"total_requests": l.totalRequests.Load(),
|
|
||||||
"total_blocked": totalBlocked,
|
|
||||||
"blocked_breakdown": map[string]uint64{
|
|
||||||
"blacklist": l.blockedByBlacklist.Load(),
|
|
||||||
"whitelist": l.blockedByWhitelist.Load(),
|
|
||||||
"rate_limit": l.blockedByRateLimit.Load(),
|
|
||||||
"conn_limit": l.blockedByConnLimit.Load(),
|
|
||||||
"invalid_ip": l.blockedByInvalidIP.Load(),
|
|
||||||
},
|
|
||||||
"active_ips": activeIPs,
|
|
||||||
"total_connections": totalConnections,
|
|
||||||
"acl": map[string]int{
|
|
||||||
"whitelist_rules": len(l.ipWhitelist),
|
|
||||||
"blacklist_rules": len(l.ipBlacklist),
|
|
||||||
},
|
|
||||||
"rate_limit": map[string]any{
|
|
||||||
"enabled": l.config.Enabled,
|
|
||||||
"requests_per_second": l.config.RequestsPerSecond,
|
|
||||||
"burst_size": l.config.BurstSize,
|
|
||||||
"limit_by": l.config.LimitBy,
|
|
||||||
},
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Performs the actual net limit check
|
|
||||||
func (l *NetLimiter) checkLimit(ip string) bool {
|
|
||||||
// Validate IP format
|
|
||||||
parsedIP := net.ParseIP(ip)
|
|
||||||
if parsedIP == nil || !isIPv4(parsedIP) {
|
|
||||||
l.logger.Warn("msg", "Invalid or non-IPv4 address in rate limiter",
|
|
||||||
"component", "netlimit",
|
|
||||||
"ip", ip)
|
|
||||||
return false
|
|
||||||
}
|
|
||||||
|
|
||||||
// Maybe run cleanup
|
|
||||||
l.maybeCleanup()
|
|
||||||
|
|
||||||
switch l.config.LimitBy {
|
|
||||||
case "global":
|
|
||||||
return l.globalLimiter.Allow()
|
|
||||||
|
|
||||||
case "ip", "":
|
|
||||||
// Default to per-IP limiting
|
|
||||||
l.ipMu.Lock()
|
|
||||||
lim, exists := l.ipLimiters[ip]
|
|
||||||
if !exists {
|
|
||||||
// Create new limiter for this IP
|
|
||||||
lim = &ipLimiter{
|
|
||||||
bucket: NewTokenBucket(
|
|
||||||
float64(l.config.BurstSize),
|
|
||||||
l.config.RequestsPerSecond,
|
|
||||||
),
|
|
||||||
lastSeen: time.Now(),
|
|
||||||
}
|
|
||||||
l.ipLimiters[ip] = lim
|
|
||||||
l.uniqueIPs.Add(1)
|
|
||||||
|
|
||||||
l.logger.Debug("msg", "Created new IP limiter",
|
|
||||||
"ip", ip,
|
|
||||||
"total_ips", l.uniqueIPs.Load())
|
|
||||||
} else {
|
|
||||||
lim.lastSeen = time.Now()
|
|
||||||
}
|
|
||||||
l.ipMu.Unlock()
|
|
||||||
|
|
||||||
// Check connection limit if configured
|
|
||||||
if l.config.MaxConnectionsPerIP > 0 {
|
|
||||||
l.connMu.RLock()
|
|
||||||
tracker, exists := l.ipConnections[ip]
|
|
||||||
l.connMu.RUnlock()
|
|
||||||
|
|
||||||
if exists && tracker.connections.Load() >= l.config.MaxConnectionsPerIP {
|
|
||||||
return false
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
return lim.bucket.Allow()
|
|
||||||
|
|
||||||
default:
|
|
||||||
// Unknown limit_by value, allow by default
|
|
||||||
l.logger.Warn("msg", "Unknown limit_by value",
|
|
||||||
"limit_by", l.config.LimitBy)
|
|
||||||
return true
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Runs cleanup if enough time has passed
|
|
||||||
func (l *NetLimiter) maybeCleanup() {
|
|
||||||
l.cleanupMu.Lock()
|
|
||||||
|
|
||||||
// Check if enough time has passed
|
|
||||||
if time.Since(l.lastCleanup) < 30*time.Second {
|
|
||||||
l.cleanupMu.Unlock()
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
// Check if cleanup already running
|
|
||||||
if !l.cleanupActive.CompareAndSwap(false, true) {
|
|
||||||
l.cleanupMu.Unlock()
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
l.lastCleanup = time.Now()
|
|
||||||
l.cleanupMu.Unlock()
|
|
||||||
|
|
||||||
// Run cleanup async
|
|
||||||
go func() {
|
|
||||||
defer l.cleanupActive.Store(false)
|
|
||||||
l.cleanup()
|
|
||||||
}()
|
|
||||||
}
|
|
||||||
|
|
||||||
// cleanup removes stale IP limiters and connection trackers
func (l *NetLimiter) cleanup() {
	staleTimeout := 5 * time.Minute
	now := time.Now()

	// Clean up rate limiters
	l.ipMu.Lock()
	cleaned := 0
	for ip, lim := range l.ipLimiters {
		if now.Sub(lim.lastSeen) > staleTimeout {
			delete(l.ipLimiters, ip)
			cleaned++
		}
	}
	remaining := len(l.ipLimiters)
	l.ipMu.Unlock()

	if cleaned > 0 {
		l.logger.Debug("msg", "Cleaned up stale IP limiters",
			"component", "netlimit",
			"cleaned", cleaned,
			"remaining", remaining)
	}

	// Clean up stale connection trackers
	l.connMu.Lock()
	connCleaned := 0
	for ip, tracker := range l.ipConnections {
		tracker.mu.Lock()
		lastSeen := tracker.lastSeen
		tracker.mu.Unlock()

		// Remove if no activity for 5 minutes AND no active connections
		if now.Sub(lastSeen) > staleTimeout && tracker.connections.Load() <= 0 {
			delete(l.ipConnections, ip)
			connCleaned++
		}
	}
	connRemaining := len(l.ipConnections)
	l.connMu.Unlock()

	if connCleaned > 0 {
		l.logger.Debug("msg", "Cleaned up stale connection trackers",
			"component", "netlimit",
			"cleaned", connCleaned,
			"remaining", connRemaining)
	}
}
// cleanupLoop runs periodic cleanup
func (l *NetLimiter) cleanupLoop() {
	defer close(l.cleanupDone)

	ticker := time.NewTicker(1 * time.Minute)
	defer ticker.Stop()

	for {
		select {
		case <-l.ctx.Done():
			// Exit when context is cancelled
			l.logger.Debug("msg", "Cleanup loop stopping", "component", "netlimit")
			return
		case <-ticker.C:
			l.cleanup()
		}
	}
}
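`checkLimit` delegates the actual throttling to `NewTokenBucket(float64(burstSize), requestsPerSecond)` and `bucket.Allow()`, which are defined elsewhere in the package. Below is a minimal self-contained sketch of that token-bucket contract, assuming the conventional capacity-plus-refill-rate semantics; it is an illustrative stand-in, not the project's implementation.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// TokenBucket is a hypothetical stand-in for the bucket used by NetLimiter:
// it holds up to `capacity` tokens, refills at `rate` tokens per second,
// and Allow consumes one token if available.
type TokenBucket struct {
	mu       sync.Mutex
	capacity float64
	tokens   float64
	rate     float64 // tokens per second
	last     time.Time
}

func NewTokenBucket(capacity, rate float64) *TokenBucket {
	return &TokenBucket{capacity: capacity, tokens: capacity, rate: rate, last: time.Now()}
}

func (b *TokenBucket) Allow() bool {
	b.mu.Lock()
	defer b.mu.Unlock()

	// Refill proportionally to elapsed time, capped at capacity
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	b.last = now

	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	bucket := NewTokenBucket(3, 1) // burst of 3, refilled at 1 req/s
	allowed := 0
	for i := 0; i < 5; i++ {
		if bucket.Allow() {
			allowed++
		}
	}
	fmt.Println(allowed) // prints 3: only the burst capacity is admitted immediately
}
```

The per-IP map in `checkLimit` simply keeps one such bucket per client address, so a chatty client exhausts only its own bucket.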
src/internal/pipeline/pipeline.go (new file, 452 lines)
@@ -0,0 +1,452 @@
package pipeline

import (
	"context"
	"fmt"
	"sync"
	"sync/atomic"
	"time"

	"logwisp/src/internal/config"
	"logwisp/src/internal/core"
	"logwisp/src/internal/flow"
	"logwisp/src/internal/session"
	"logwisp/src/internal/sink"
	"logwisp/src/internal/source"

	"github.com/lixenwraith/log"
)

// Pipeline manages the flow of data from sources, through filters, to sinks
type Pipeline struct {
	Config *config.PipelineConfig

	// Components
	Registry *Registry
	Sources  map[string]source.Source // Track instances by ID
	Sinks    map[string]sink.Sink
	Sessions *session.Manager

	// Pipeline flow
	Flow   *flow.Flow
	Stats  *PipelineStats
	logger *log.Logger

	// Runtime
	ctx     context.Context
	cancel  context.CancelFunc
	wg      sync.WaitGroup
	running atomic.Bool
}

// PipelineStats contains runtime statistics for a pipeline
type PipelineStats struct {
	StartTime                      time.Time
	TotalEntriesProcessed          atomic.Uint64
	TotalEntriesDroppedByRateLimit atomic.Uint64
	TotalEntriesFiltered           atomic.Uint64
	SourceStats                    []source.SourceStats
	SinkStats                      []sink.SinkStats
	FlowStats                      map[string]any
}

// NewPipeline creates a new pipeline with registry support
func NewPipeline(
	cfg *config.PipelineConfig,
	logger *log.Logger,
) (*Pipeline, error) {
	// Create pipeline context
	pipelineCtx, pipelineCancel := context.WithCancel(context.Background())

	// Create session manager with default timeout
	sessionManager := session.NewManager(core.SessionDefaultMaxIdleTime)

	// Create pipeline instance with registry
	pipeline := &Pipeline{
		Config:   cfg,
		Registry: NewRegistry(cfg.Name, logger),
		Sessions: sessionManager,
		Sources:  make(map[string]source.Source),
		Sinks:    make(map[string]sink.Sink),
		Stats:    &PipelineStats{},
		logger:   logger,
		ctx:      pipelineCtx,
		cancel:   pipelineCancel,
	}

	// Create flow processor
	flowProcessor, err := flow.NewFlow(cfg.Flow, logger)
	if err != nil {
		// If flow fails, stop session manager
		sessionManager.Stop()
		return nil, fmt.Errorf("failed to create flow processor: %w", err)
	}
	pipeline.Flow = flowProcessor

	// Initialize sources and sinks
	if err := pipeline.initializeComponents(); err != nil {
		pipelineCancel()
		return nil, err
	}

	return pipeline, nil
}
func (p *Pipeline) initializeComponents() error {
	// Create sources based on plugin config if available
	if len(p.Config.PluginSources) > 0 {
		for _, srcCfg := range p.Config.PluginSources {
			// Create session proxy for this source instance
			sessionProxy := session.NewProxy(p.Sessions, srcCfg.ID)

			src, err := p.Registry.CreateSource(
				srcCfg.ID,
				srcCfg.Type,
				srcCfg.Config,
				p.logger,
				sessionProxy,
			)
			if err != nil {
				return fmt.Errorf("failed to create source %s: %w", srcCfg.ID, err)
			}

			// Check and inject capabilities using core interfaces
			if err := p.initSourceCapabilities(src, srcCfg); err != nil {
				return fmt.Errorf("failed to initiate capabilities for source %s: %w", srcCfg.ID, err)
			}

			p.Sources[srcCfg.ID] = src
		}
	} else {
		return fmt.Errorf("no plugin sources defined")
	}

	// Create sinks based on plugin config if available
	if len(p.Config.PluginSinks) > 0 {
		for _, sinkCfg := range p.Config.PluginSinks {
			// Create session proxy for this sink instance
			sessionProxy := session.NewProxy(p.Sessions, sinkCfg.ID)

			snk, err := p.Registry.CreateSink(
				sinkCfg.ID,
				sinkCfg.Type,
				sinkCfg.Config,
				p.logger,
				sessionProxy,
			)
			if err != nil {
				return fmt.Errorf("failed to create sink %s: %w", sinkCfg.ID, err)
			}

			// Check and inject capabilities using core interfaces
			if err := p.initSinkCapabilities(snk, sinkCfg); err != nil {
				return fmt.Errorf("failed to initiate capabilities for sink %s: %w", sinkCfg.ID, err)
			}

			p.Sinks[sinkCfg.ID] = snk
		}
	} else {
		return fmt.Errorf("no plugin sinks defined")
	}

	return nil
}
// initSourceCapabilities checks and injects optional capabilities
func (p *Pipeline) initSourceCapabilities(s source.Source, cfg config.PluginSourceConfig) error {
	// Initiate and activate source capabilities
	for _, c := range s.Capabilities() {
		switch c {
		// Network capabilities
		case core.CapNetLimit, core.CapTLS, core.CapAuth:
			continue // No-op for now, placeholder

		// Session capabilities
		case core.CapSessionAware:
		case core.CapMultiSession:
			continue // TODO

		default:
			return fmt.Errorf("unknown capability type: %s", c)
		}
	}

	return nil
}

// initSinkCapabilities checks and injects optional capabilities
func (p *Pipeline) initSinkCapabilities(s sink.Sink, cfg config.PluginSinkConfig) error {
	// Initiate and activate sink capabilities
	for _, c := range s.Capabilities() {
		switch c {
		// Network capabilities
		case core.CapNetLimit, core.CapTLS, core.CapAuth:
			continue // No-op for now, placeholder

		// Session capabilities
		case core.CapSessionAware:
		case core.CapMultiSession:
			continue // TODO

		default:
			return fmt.Errorf("unknown capability type: %s", c)
		}
	}

	return nil
}
// run is the central processing loop that connects sources, flow, and sinks
func (p *Pipeline) run() {
	defer p.wg.Done()
	defer p.logger.Info("msg", "Pipeline processing loop stopped", "pipeline", p.Config.Name)

	var componentWg sync.WaitGroup

	// Start a goroutine for each source to fan-in data
	for _, src := range p.Sources {
		componentWg.Add(1)
		go func(s source.Source) {
			defer componentWg.Done()
			ch := s.Subscribe()
			for {
				select {
				case entry, ok := <-ch:
					if !ok {
						return
					}
					// Process and distribute the log entry
					if event, passed := p.Flow.Process(entry); passed {
						// Fan-out to all sinks
						for _, snk := range p.Sinks {
							snk.Input() <- event
						}
					}
				case <-p.ctx.Done():
					return
				}
			}
		}(src)
	}

	// Start heartbeat generator if enabled
	if heartbeatCh := p.Flow.StartHeartbeat(p.ctx); heartbeatCh != nil {
		componentWg.Add(1)
		go func() {
			defer componentWg.Done()
			for {
				select {
				case event, ok := <-heartbeatCh:
					if !ok {
						return
					}
					// Fan-out heartbeat to all sinks
					for _, snk := range p.Sinks {
						snk.Input() <- event
					}
				case <-p.ctx.Done():
					return
				}
			}
		}()
	}

	componentWg.Wait()
}
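The run loop above is a fan-in/fan-out: one goroutine per source drains its channel, and every event that passes the flow is forwarded to every sink's input channel. Stripped of filtering and context handling, the pattern can be sketched with generic channel types (not the project's types):

```go
package main

import (
	"fmt"
	"sync"
)

// fanInFanOut drains every source channel in its own goroutine (fan-in) and
// forwards each entry to every sink channel (fan-out), closing the sinks
// once all sources are exhausted.
func fanInFanOut(sources []<-chan string, sinks []chan<- string) {
	var wg sync.WaitGroup
	for _, src := range sources {
		wg.Add(1)
		go func(ch <-chan string) {
			defer wg.Done()
			for entry := range ch { // exits when the source channel is closed
				for _, snk := range sinks {
					snk <- entry
				}
			}
		}(src)
	}
	wg.Wait()
	for _, snk := range sinks {
		close(snk)
	}
}

func main() {
	src := make(chan string, 2)
	sinkA := make(chan string, 4)
	sinkB := make(chan string, 4)

	src <- "log line 1"
	src <- "log line 2"
	close(src)

	fanInFanOut([]<-chan string{src}, []chan<- string{sinkA, sinkB})

	fmt.Println(len(sinkA), len(sinkB)) // prints 2 2: each sink got every entry
}
```

Note the blocking send `snk <- entry` mirrors `snk.Input() <- event` above: a slow sink applies backpressure to every source goroutine feeding it.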
// Start starts the pipeline operation and all its components including flow, sources, and sinks
func (p *Pipeline) Start() error {
	if !p.running.CompareAndSwap(false, true) {
		return fmt.Errorf("pipeline %s is already running", p.Config.Name)
	}

	p.logger.Info("msg", "Starting pipeline", "pipeline", p.Config.Name)
	p.ctx, p.cancel = context.WithCancel(context.Background())

	// Start all sinks
	for id, s := range p.Sinks {
		if err := s.Start(p.ctx); err != nil {
			return fmt.Errorf("failed to start sink %s: %w", id, err)
		}
	}

	// Start all sources
	for id, src := range p.Sources {
		if err := src.Start(); err != nil {
			return fmt.Errorf("failed to start source %s: %w", id, err)
		}
	}

	// Start the central processing loop
	p.Stats.StartTime = time.Now()
	p.wg.Add(1)
	go p.run()

	return nil
}
// Stop stops the pipeline operation and all its components including flow, sources, and sinks
func (p *Pipeline) Stop() error {
	if !p.running.CompareAndSwap(true, false) {
		return fmt.Errorf("pipeline %s is not running", p.Config.Name)
	}

	p.logger.Info("msg", "Stopping pipeline", "pipeline", p.Config.Name)

	// Signal all components and the run loop to stop
	p.cancel()

	// Stop all sources concurrently to halt new data ingress
	var sourceWg sync.WaitGroup
	for _, src := range p.Sources {
		sourceWg.Add(1)
		go func(s source.Source) {
			defer sourceWg.Done()
			s.Stop()
		}(src)
	}
	sourceWg.Wait()

	// Wait for the run loop to finish processing and sending all in-flight data
	p.wg.Wait()

	// Stop all sinks concurrently now that no new data will be sent
	var sinkWg sync.WaitGroup
	for _, s := range p.Sinks {
		sinkWg.Add(1)
		go func(snk sink.Sink) {
			defer sinkWg.Done()
			snk.Stop()
		}(s)
	}
	sinkWg.Wait()

	p.logger.Info("msg", "Pipeline stopped", "pipeline", p.Config.Name)
	return nil
}
// Shutdown gracefully stops the pipeline and all its components, deinitializing them for app shutdown or complete pipeline removal by the service
func (p *Pipeline) Shutdown() {
	p.logger.Info("msg", "Shutting down pipeline",
		"component", "pipeline",
		"pipeline", p.Config.Name)

	// Ensure the pipeline is stopped before shutting down
	if p.running.Load() {
		if err := p.Stop(); err != nil {
			p.logger.Error("msg", "Error stopping pipeline during shutdown", "error", err)
		}
	}

	// Stop long-running components
	if p.Sessions != nil {
		p.Sessions.Stop()
	}

	p.logger.Info("msg", "Pipeline shutdown complete",
		"component", "pipeline",
		"pipeline", p.Config.Name)
}
// GetStats returns a map of pipeline statistics
func (p *Pipeline) GetStats() map[string]any {
	// Recovery to handle concurrent access during shutdown:
	// when the service is shutting down, sources/sinks might be nil or partially stopped
	defer func() {
		if r := recover(); r != nil {
			p.logger.Error("msg", "Panic getting pipeline stats",
				"pipeline", p.Config.Name,
				"panic", r)
		}
	}()

	// Collect source stats
	sourceStats := make([]map[string]any, 0, len(p.Sources))
	for _, src := range p.Sources {
		if src == nil {
			continue // Skip nil sources
		}

		stats := src.GetStats()
		sourceStats = append(sourceStats, map[string]any{
			"id":              stats.ID,
			"type":            stats.Type,
			"total_entries":   stats.TotalEntries,
			"dropped_entries": stats.DroppedEntries,
			"start_time":      stats.StartTime,
			"last_entry_time": stats.LastEntryTime,
			"details":         stats.Details,
		})
	}

	// Collect sink stats
	sinkStats := make([]map[string]any, 0, len(p.Sinks))
	for _, s := range p.Sinks {
		if s == nil {
			continue // Skip nil sinks
		}

		stats := s.GetStats()
		sinkStats = append(sinkStats, map[string]any{
			"id":                 stats.ID,
			"type":               stats.Type,
			"total_processed":    stats.TotalProcessed,
			"active_connections": stats.ActiveConnections,
			"start_time":         stats.StartTime,
			"last_processed":     stats.LastProcessed,
			"details":            stats.Details,
		})
	}

	// Get flow stats
	var flowStats map[string]any
	var totalFiltered uint64
	if p.Flow != nil {
		flowStats = p.Flow.GetStats()
		// Extract total_filtered from flow for top-level visibility
		if filters, ok := flowStats["filters"].(map[string]any); ok {
			if totalPassed, ok := filters["total_passed"].(uint64); ok {
				if totalProcessed, ok := filters["total_processed"].(uint64); ok {
					totalFiltered = totalProcessed - totalPassed
				}
			}
		}
	}

	var uptime int
	if p.running.Load() && !p.Stats.StartTime.IsZero() {
		uptime = int(time.Since(p.Stats.StartTime).Seconds())
	}

	return map[string]any{
		"name":            p.Config.Name,
		"running":         p.running.Load(),
		"uptime_seconds":  uptime,
		"total_processed": p.Stats.TotalEntriesProcessed.Load(),
		"total_filtered":  totalFiltered,
		"source_count":    len(p.Sources),
		"sources":         sourceStats,
		"sink_count":      len(p.Sinks),
		"sinks":           sinkStats,
		"flow":            flowStats,
	}
}
// TODO: incomplete implementation
// startStatsUpdater runs a periodic stats updater
func (p *Pipeline) startStatsUpdater(ctx context.Context) {
	go func() {
		ticker := time.NewTicker(core.ServiceStatsUpdateInterval)
		defer ticker.Stop()

		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				// Periodic stats updates if needed
			}
		}
	}()
}
src/internal/pipeline/registry.go (new file, 222 lines)
@@ -0,0 +1,222 @@
package pipeline

import (
	"fmt"
	"sync"

	"logwisp/src/internal/plugin"
	"logwisp/src/internal/session"
	"logwisp/src/internal/sink"
	"logwisp/src/internal/source"

	"github.com/lixenwraith/log"
)

// SourceFactory creates source instances with required dependencies
type SourceFactory func(
	id string,
	config map[string]any,
	logger *log.Logger,
	sessions *session.Proxy,
) (source.Source, error)

// SinkFactory creates sink instances with required dependencies
type SinkFactory func(
	id string,
	config map[string]any,
	logger *log.Logger,
	sessions *session.Proxy,
) (sink.Sink, error)

// Registry manages plugin instances for a single pipeline
type Registry struct {
	pipelineName string

	// Instance tracking
	sourceInstances map[string]source.Source
	sinkInstances   map[string]sink.Sink
	// Type count tracking (for single instance enforcement)
	sourceTypeCounts map[string]int
	sinkTypeCounts   map[string]int

	mu     sync.RWMutex
	logger *log.Logger
}

// NewRegistry creates a new registry for a pipeline
func NewRegistry(pipelineName string, logger *log.Logger) *Registry {
	return &Registry{
		pipelineName:     pipelineName,
		sourceInstances:  make(map[string]source.Source),
		sinkInstances:    make(map[string]sink.Sink),
		sourceTypeCounts: make(map[string]int),
		sinkTypeCounts:   make(map[string]int),
		logger:           logger,
	}
}
// CreateSource creates and tracks a source instance
func (r *Registry) CreateSource(
	id string,
	pluginType string,
	config map[string]any,
	logger *log.Logger,
	proxy *session.Proxy,
) (source.Source, error) {
	r.mu.Lock()
	defer r.mu.Unlock()

	// Check for duplicate instance ID
	if _, exists := r.sourceInstances[id]; exists {
		return nil, fmt.Errorf("source instance with ID %s already exists", id)
	}

	// Check single instance constraint
	if meta, ok := plugin.GetSourceMetadata(pluginType); ok {
		if meta.MaxInstances == 1 && r.sourceTypeCounts[pluginType] >= 1 {
			return nil, fmt.Errorf("source type %s only allows single instance", pluginType)
		}
	}

	// Get source constructor
	constructor, ok := plugin.GetSource(pluginType)
	if !ok {
		return nil, fmt.Errorf("unknown source type: %s", pluginType)
	}

	// Create instance
	src, err := constructor(id, config, logger, proxy)
	if err != nil {
		return nil, fmt.Errorf("failed to create source %s: %w", id, err)
	}

	// Track instance
	r.sourceInstances[id] = src
	r.sourceTypeCounts[pluginType]++

	r.logger.Info("msg", "Created source instance",
		"pipeline", r.pipelineName,
		"id", id,
		"type", pluginType)

	return src, nil
}

// CreateSink creates and tracks a sink instance
func (r *Registry) CreateSink(
	id string,
	pluginType string,
	config map[string]any,
	logger *log.Logger,
	proxy *session.Proxy,
) (sink.Sink, error) {
	r.mu.Lock()
	defer r.mu.Unlock()

	// Check for duplicate instance ID
	if _, exists := r.sinkInstances[id]; exists {
		return nil, fmt.Errorf("sink instance with ID %s already exists", id)
	}

	// Check single instance constraint
	if meta, ok := plugin.GetSinkMetadata(pluginType); ok {
		if meta.MaxInstances == 1 && r.sinkTypeCounts[pluginType] >= 1 {
			return nil, fmt.Errorf("sink type %s only allows single instance", pluginType)
		}
	}

	// Get sink constructor
	constructor, ok := plugin.GetSink(pluginType)
	if !ok {
		return nil, fmt.Errorf("unknown sink type: %s", pluginType)
	}

	// Create instance
	snk, err := constructor(id, config, logger, proxy)
	if err != nil {
		return nil, fmt.Errorf("failed to create sink %s: %w", id, err)
	}

	// Track instance
	r.sinkInstances[id] = snk
	r.sinkTypeCounts[pluginType]++

	r.logger.Info("msg", "Created sink instance",
		"pipeline", r.pipelineName,
		"id", id,
		"type", pluginType)

	return snk, nil
}
// GetSourceInstance retrieves a source instance by ID
func (r *Registry) GetSourceInstance(id string) (source.Source, bool) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	src, exists := r.sourceInstances[id]
	return src, exists
}

// GetSinkInstance retrieves a sink instance by ID
func (r *Registry) GetSinkInstance(id string) (sink.Sink, bool) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	snk, exists := r.sinkInstances[id]
	return snk, exists
}

// GetAllSources returns all source instances
func (r *Registry) GetAllSources() map[string]source.Source {
	r.mu.RLock()
	defer r.mu.RUnlock()

	sources := make(map[string]source.Source, len(r.sourceInstances))
	for k, v := range r.sourceInstances {
		sources[k] = v
	}
	return sources
}

// GetAllSinks returns all sink instances
func (r *Registry) GetAllSinks() map[string]sink.Sink {
	r.mu.RLock()
	defer r.mu.RUnlock()

	sinks := make(map[string]sink.Sink, len(r.sinkInstances))
	for k, v := range r.sinkInstances {
		sinks[k] = v
	}
	return sinks
}

// RemoveSource removes a source instance
func (r *Registry) RemoveSource(id string) {
	r.mu.Lock()
	defer r.mu.Unlock()

	// Decrement type count
	if src, exists := r.sourceInstances[id]; exists {
		stats := src.GetStats()
		if pluginType, ok := stats.Details["type"].(string); ok {
			r.sourceTypeCounts[pluginType]--
		}
	}

	delete(r.sourceInstances, id)
}

// RemoveSink removes a sink instance
func (r *Registry) RemoveSink(id string) {
	r.mu.Lock()
	defer r.mu.Unlock()

	// Decrement type count
	if snk, exists := r.sinkInstances[id]; exists {
		stats := snk.GetStats()
		if pluginType, ok := stats.Details["type"].(string); ok {
			r.sinkTypeCounts[pluginType]--
		}
	}

	delete(r.sinkInstances, id)
}
src/internal/plugin/factory.go (new file, 204 lines)
@@ -0,0 +1,204 @@
|
package plugin

import (
	"fmt"
	"sync"

	"logwisp/src/internal/core"
	"logwisp/src/internal/session"
	"logwisp/src/internal/sink"
	"logwisp/src/internal/source"

	"github.com/lixenwraith/log"
)

// SourceFactory creates source instances
type SourceFactory func(
	id string,
	configMap map[string]any,
	logger *log.Logger,
	sessions *session.Proxy,
) (source.Source, error)

// SinkFactory creates sink instances
type SinkFactory func(
	id string,
	configMap map[string]any,
	logger *log.Logger,
	sessions *session.Proxy,
) (sink.Sink, error)

// PluginMetadata stores metadata about a plugin type
type PluginMetadata struct {
	Capabilities []core.Capability
	MaxInstances int // 0 = unlimited, 1 = single instance only
}

// // global variables holding available source and sink plugins
// var (
// 	sourceFactories map[string]SourceFactory
// 	sinkFactories   map[string]SinkFactory
// 	sourceMetadata  map[string]*PluginMetadata
// 	sinkMetadata    map[string]*PluginMetadata
// 	mu              sync.RWMutex
// 	// once sync.Once
// )

// registry encapsulates all plugin factories with lazy initialization
type registry struct {
	sourceFactories map[string]SourceFactory
	sinkFactories   map[string]SinkFactory
	sourceMetadata  map[string]*PluginMetadata
	sinkMetadata    map[string]*PluginMetadata
	mu              sync.RWMutex
}

var (
	globalRegistry *registry
	once           sync.Once
)

// getRegistry returns the singleton registry, initializing on first access
func getRegistry() *registry {
	once.Do(func() {
		globalRegistry = &registry{
			sourceFactories: make(map[string]SourceFactory),
			sinkFactories:   make(map[string]SinkFactory),
			sourceMetadata:  make(map[string]*PluginMetadata),
			sinkMetadata:    make(map[string]*PluginMetadata),
		}
	})
	return globalRegistry
}

// func init() {
// 	sourceFactories = make(map[string]SourceFactory)
// 	sinkFactories = make(map[string]SinkFactory)
// }

// RegisterSource registers a source factory function
func RegisterSource(name string, constructor SourceFactory) error {
	r := getRegistry()
	r.mu.Lock()
	defer r.mu.Unlock()

	if _, exists := r.sourceFactories[name]; exists {
		return fmt.Errorf("source type %s already registered", name)
	}
	r.sourceFactories[name] = constructor

	// Set default metadata
	r.sourceMetadata[name] = &PluginMetadata{
		MaxInstances: 0, // Unlimited by default
	}

	return nil
}

// RegisterSink registers a sink factory function
func RegisterSink(name string, constructor SinkFactory) error {
	r := getRegistry()
	r.mu.Lock()
	defer r.mu.Unlock()

	if _, exists := r.sinkFactories[name]; exists {
		return fmt.Errorf("sink type %s already registered", name)
	}
	r.sinkFactories[name] = constructor

	// Set default metadata
	r.sinkMetadata[name] = &PluginMetadata{
		MaxInstances: 0, // Unlimited by default
	}

	return nil
}

// SetSourceMetadata sets metadata for a source type (call after RegisterSource)
func SetSourceMetadata(name string, metadata *PluginMetadata) error {
	r := getRegistry()
	r.mu.Lock()
	defer r.mu.Unlock()

	if _, exists := r.sourceFactories[name]; !exists {
		return fmt.Errorf("source type %s not registered", name)
	}
	r.sourceMetadata[name] = metadata

	return nil
}

// SetSinkMetadata sets metadata for a sink type (call after RegisterSink)
func SetSinkMetadata(name string, metadata *PluginMetadata) error {
	r := getRegistry()
	r.mu.Lock()
	defer r.mu.Unlock()

	if _, exists := r.sinkFactories[name]; !exists {
		return fmt.Errorf("sink type %s not registered", name)
	}
	r.sinkMetadata[name] = metadata
	return nil
}

// GetSource retrieves a source factory function
func GetSource(name string) (SourceFactory, bool) {
	r := getRegistry()
	r.mu.RLock()
	defer r.mu.RUnlock()
	constructor, exists := r.sourceFactories[name]
	return constructor, exists
}

// GetSink retrieves a sink factory function
func GetSink(name string) (SinkFactory, bool) {
	r := getRegistry()
	r.mu.RLock()
	defer r.mu.RUnlock()
	constructor, exists := r.sinkFactories[name]
	return constructor, exists
}

// GetSourceMetadata retrieves metadata for a source type
func GetSourceMetadata(name string) (*PluginMetadata, bool) {
	r := getRegistry()
	r.mu.RLock()
	defer r.mu.RUnlock()
	meta, exists := r.sourceMetadata[name]
	return meta, exists
}

// GetSinkMetadata retrieves metadata for a sink type
func GetSinkMetadata(name string) (*PluginMetadata, bool) {
	r := getRegistry()
	r.mu.RLock()
	defer r.mu.RUnlock()
	meta, exists := r.sinkMetadata[name]
	return meta, exists
}

// ListSources returns all registered source types
func ListSources() []string {
	r := getRegistry()
	r.mu.RLock()
	defer r.mu.RUnlock()

	types := make([]string, 0, len(r.sourceFactories))
	for t := range r.sourceFactories {
		types = append(types, t)
	}
	return types
}

// ListSinks returns all registered sink types
func ListSinks() []string {
	r := getRegistry()
	r.mu.RLock()
	defer r.mu.RUnlock()

	types := make([]string, 0, len(r.sinkFactories))
	for t := range r.sinkFactories {
		types = append(types, t)
	}
	return types
}
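The file above guards its maps with a `sync.Once`-built singleton so that plugins can call `RegisterSource`/`RegisterSink` from their own `init()` functions in any order without racing registry initialization, and duplicate names are rejected. A condensed, standalone sketch of that same pattern (the generic `factory` signature here is illustrative, not the LogWisp one):

```go
package main

import (
	"fmt"
	"sync"
)

type factory func(id string) (string, error) // illustrative signature

type reg struct {
	mu        sync.RWMutex
	factories map[string]factory
}

var (
	global *reg
	once   sync.Once
)

// getReg lazily builds the singleton; safe to call from any init()
func getReg() *reg {
	once.Do(func() {
		global = &reg{factories: make(map[string]factory)}
	})
	return global
}

// Register rejects duplicate names, as RegisterSource/RegisterSink do
func Register(name string, f factory) error {
	r := getReg()
	r.mu.Lock()
	defer r.mu.Unlock()
	if _, exists := r.factories[name]; exists {
		return fmt.Errorf("type %s already registered", name)
	}
	r.factories[name] = f
	return nil
}

// Get returns the factory for a name, if registered
func Get(name string) (factory, bool) {
	r := getReg()
	r.mu.RLock()
	defer r.mu.RUnlock()
	f, ok := r.factories[name]
	return f, ok
}

func main() {
	_ = Register("tcp", func(id string) (string, error) { return "tcp:" + id, nil })
	fmt.Println(Register("tcp", nil) != nil) // duplicate rejected: prints true
	if f, ok := Get("tcp"); ok {
		s, _ := f("s1")
		fmt.Println(s) // prints tcp:s1
	}
}
```

The `sync.Once` avoids relying on package-level `init()` ordering (the approach the commented-out legacy code used), which matters once registrations come from many packages.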
src/internal/sanitize/sanitize.go (70 lines, new file)
@ -0,0 +1,70 @@
package sanitize

import (
	"encoding/hex"
	"strconv"
	"strings"
	"unicode/utf8"
)

// String sanitizes a string by replacing non-printable characters with hex encoding
// Non-printable characters are encoded as <hex> (e.g., newline becomes <0a>)
func String(data string) string {
	// Fast path: check if sanitization is needed
	needsSanitization := false
	for _, r := range data {
		if !strconv.IsPrint(r) {
			needsSanitization = true
			break
		}
	}

	if !needsSanitization {
		return data
	}

	// Pre-allocate builder for efficiency
	var builder strings.Builder
	builder.Grow(len(data))

	for _, r := range data {
		if strconv.IsPrint(r) {
			builder.WriteRune(r)
		} else {
			// Encode non-printable rune as <hex>
			var runeBytes [utf8.UTFMax]byte
			n := utf8.EncodeRune(runeBytes[:], r)
			builder.WriteByte('<')
			builder.WriteString(hex.EncodeToString(runeBytes[:n]))
			builder.WriteByte('>')
		}
	}

	return builder.String()
}

// Bytes sanitizes a byte slice by converting to string and sanitizing
func Bytes(data []byte) []byte {
	return []byte(String(string(data)))
}

// Rune sanitizes a single rune, returning its string representation
func Rune(r rune) string {
	if strconv.IsPrint(r) {
		return string(r)
	}

	var runeBytes [utf8.UTFMax]byte
	n := utf8.EncodeRune(runeBytes[:], r)
	return "<" + hex.EncodeToString(runeBytes[:n]) + ">"
}

// IsSafe checks if a string contains only printable characters
func IsSafe(data string) bool {
	for _, r := range data {
		if !strconv.IsPrint(r) {
			return false
		}
	}
	return true
}
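To see the encoding concretely: each non-printable rune is converted to its UTF-8 bytes and those bytes are emitted as lowercase hex between angle brackets. A runnable condensed sketch of the core loop (the fast-path check is omitted for brevity):

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strconv"
	"strings"
	"unicode/utf8"
)

// sanitizeString mirrors sanitize.String above: printable runes pass
// through unchanged, everything else becomes its UTF-8 bytes
// hex-encoded as <hex>.
func sanitizeString(data string) string {
	var b strings.Builder
	b.Grow(len(data))
	for _, r := range data {
		if strconv.IsPrint(r) {
			b.WriteRune(r)
			continue
		}
		var buf [utf8.UTFMax]byte
		n := utf8.EncodeRune(buf[:], r)
		b.WriteByte('<')
		b.WriteString(hex.EncodeToString(buf[:n]))
		b.WriteByte('>')
	}
	return b.String()
}

func main() {
	fmt.Println(sanitizeString("line1\nline2")) // prints line1<0a>line2
	fmt.Println(sanitizeString("tab\there"))    // prints tab<09>here
}
```

Because the encoding operates on runes rather than bytes, a multi-byte non-printable rune yields one bracketed group containing all of its UTF-8 bytes, which keeps control sequences in log lines from reaching terminals or downstream parsers intact.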
@ -1,175 +0,0 @@ (file deleted)
// FILE: logwisp/src/internal/service/pipeline.go
package service

import (
	"context"
	"sync"
	"sync/atomic"
	"time"

	"logwisp/src/internal/config"
	"logwisp/src/internal/filter"
	"logwisp/src/internal/limit"
	"logwisp/src/internal/sink"
	"logwisp/src/internal/source"

	"github.com/lixenwraith/log"
)

// Pipeline manages the flow of data from sources through filters to sinks
type Pipeline struct {
	Name        string
	Config      config.PipelineConfig
	Sources     []source.Source
	RateLimiter *limit.RateLimiter
	FilterChain *filter.Chain
	Sinks       []sink.Sink
	Stats       *PipelineStats
	logger      *log.Logger

	ctx    context.Context
	cancel context.CancelFunc
	wg     sync.WaitGroup
}

// PipelineStats contains statistics for a pipeline
type PipelineStats struct {
	StartTime                      time.Time
	TotalEntriesProcessed          atomic.Uint64
	TotalEntriesDroppedByRateLimit atomic.Uint64
	TotalEntriesFiltered           atomic.Uint64
	SourceStats                    []source.SourceStats
	SinkStats                      []sink.SinkStats
	FilterStats                    map[string]any
}

// Shutdown gracefully stops the pipeline
func (p *Pipeline) Shutdown() {
	p.logger.Info("msg", "Shutting down pipeline",
		"component", "pipeline",
		"pipeline", p.Name)

	// Cancel context to stop processing
	p.cancel()

	// Stop all sinks first
	var wg sync.WaitGroup
	for _, s := range p.Sinks {
		wg.Add(1)
		go func(sink sink.Sink) {
			defer wg.Done()
			sink.Stop()
		}(s)
	}
	wg.Wait()

	// Stop all sources
	for _, src := range p.Sources {
		wg.Add(1)
		go func(source source.Source) {
			defer wg.Done()
			source.Stop()
		}(src)
	}
	wg.Wait()

	// Wait for processing goroutines
	p.wg.Wait()

	p.logger.Info("msg", "Pipeline shutdown complete",
		"component", "pipeline",
		"pipeline", p.Name)
}

// GetStats returns pipeline statistics
func (p *Pipeline) GetStats() map[string]any {
	// Recovery to handle concurrent access during shutdown
	// When service is shutting down, sources/sinks might be nil or partially stopped
	defer func() {
		if r := recover(); r != nil {
			p.logger.Error("msg", "Panic getting pipeline stats",
				"pipeline", p.Name,
				"panic", r)
		}
	}()

	// Collect source stats
	sourceStats := make([]map[string]any, 0, len(p.Sources))
	for _, src := range p.Sources {
		if src == nil {
			continue // Skip nil sources
		}

		stats := src.GetStats()
		sourceStats = append(sourceStats, map[string]any{
			"type":            stats.Type,
			"total_entries":   stats.TotalEntries,
			"dropped_entries": stats.DroppedEntries,
			"start_time":      stats.StartTime,
			"last_entry_time": stats.LastEntryTime,
			"details":         stats.Details,
		})
	}

	// Collect rate limit stats
	var rateLimitStats map[string]any
	if p.RateLimiter != nil {
		rateLimitStats = p.RateLimiter.GetStats()
	}

	// Collect filter stats
	var filterStats map[string]any
	if p.FilterChain != nil {
		filterStats = p.FilterChain.GetStats()
	}

	// Collect sink stats
	sinkStats := make([]map[string]any, 0, len(p.Sinks))
	for _, s := range p.Sinks {
		if s == nil {
			continue // Skip nil sinks
		}

		stats := s.GetStats()
		sinkStats = append(sinkStats, map[string]any{
			"type":               stats.Type,
			"total_processed":    stats.TotalProcessed,
			"active_connections": stats.ActiveConnections,
			"start_time":         stats.StartTime,
			"last_processed":     stats.LastProcessed,
			"details":            stats.Details,
		})
	}

	return map[string]any{
		"name":                     p.Name,
		"uptime_seconds":           int(time.Since(p.Stats.StartTime).Seconds()),
		"total_processed":          p.Stats.TotalEntriesProcessed.Load(),
		"total_dropped_rate_limit": p.Stats.TotalEntriesDroppedByRateLimit.Load(),
		"total_filtered":           p.Stats.TotalEntriesFiltered.Load(),
		"sources":                  sourceStats,
		"rate_limiter":             rateLimitStats,
		"sinks":                    sinkStats,
		"filters":                  filterStats,
		"source_count":             len(p.Sources),
		"sink_count":               len(p.Sinks),
		"filter_count":             len(p.Config.Filters),
	}
}

// startStatsUpdater runs periodic stats updates
func (p *Pipeline) startStatsUpdater(ctx context.Context) {
	go func() {
		ticker := time.NewTicker(1 * time.Second)
		defer ticker.Stop()

		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				// Periodic stats updates if needed
			}
		}
	}()
}
@ -1,26 +1,20 @@
|
|||||||
// FILE: logwisp/src/internal/service/service.go
|
|
||||||
package service
|
package service
|
||||||
|
|
||||||
import (
|
import (
|
||||||
"context"
|
"context"
|
||||||
|
"errors"
|
||||||
"fmt"
|
"fmt"
|
||||||
"sync"
|
"sync"
|
||||||
"time"
|
|
||||||
|
|
||||||
"logwisp/src/internal/config"
|
"logwisp/src/internal/config"
|
||||||
"logwisp/src/internal/core"
|
"logwisp/src/internal/pipeline"
|
||||||
"logwisp/src/internal/filter"
|
|
||||||
"logwisp/src/internal/format"
|
|
||||||
"logwisp/src/internal/limit"
|
|
||||||
"logwisp/src/internal/sink"
|
|
||||||
"logwisp/src/internal/source"
|
|
||||||
|
|
||||||
"github.com/lixenwraith/log"
|
"github.com/lixenwraith/log"
|
||||||
)
|
)
|
||||||
|
|
||||||
// Service manages multiple pipelines
|
// Service manages a collection of log processing pipelines
|
||||||
type Service struct {
|
type Service struct {
|
||||||
pipelines map[string]*Pipeline
|
pipelines map[string]*pipeline.Pipeline
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
ctx context.Context
|
ctx context.Context
|
||||||
cancel context.CancelFunc
|
cancel context.CancelFunc
|
||||||
@ -28,364 +22,190 @@ type Service struct {
|
|||||||
logger *log.Logger
|
logger *log.Logger
|
||||||
}
|
}
|
||||||
|
|
||||||
// New creates a new service
|
// NewService creates a new, empty service
|
||||||
func New(ctx context.Context, logger *log.Logger) *Service {
|
func NewService(ctx context.Context, cfg *config.Config, logger *log.Logger) (*Service, error) {
|
||||||
serviceCtx, cancel := context.WithCancel(ctx)
|
serviceCtx, cancel := context.WithCancel(ctx)
|
||||||
return &Service{
|
svc := &Service{
|
||||||
pipelines: make(map[string]*Pipeline),
|
pipelines: make(map[string]*pipeline.Pipeline),
|
||||||
ctx: serviceCtx,
|
ctx: serviceCtx,
|
||||||
cancel: cancel,
|
cancel: cancel,
|
||||||
logger: logger,
|
logger: logger,
|
||||||
}
|
}
|
||||||
}
|
|
||||||
|
|
||||||
// NewPipeline creates and starts a new pipeline
|
var errs error
|
||||||
func (s *Service) NewPipeline(cfg config.PipelineConfig) error {
|
// Initialize pipelines
|
||||||
s.mu.Lock()
|
for _, pipelineCfg := range cfg.Pipelines {
|
||||||
defer s.mu.Unlock()
|
pipelineName := pipelineCfg.Name
|
||||||
|
logger.Info("msg", "Initializing pipeline", "pipeline", pipelineName)
|
||||||
|
|
||||||
if _, exists := s.pipelines[cfg.Name]; exists {
|
// Create the pipeline
|
||||||
err := fmt.Errorf("pipeline '%s' already exists", cfg.Name)
|
if pl, err := pipeline.NewPipeline(&pipelineCfg, logger); err != nil {
|
||||||
s.logger.Error("msg", "Failed to create pipeline - duplicate name",
|
logger.Error("msg", "Failed to create pipeline",
|
||||||
"component", "service",
|
"pipeline", pipelineCfg.Name,
|
||||||
"pipeline", cfg.Name,
|
|
||||||
"error", err)
|
"error", err)
|
||||||
return err
|
errs = errors.Join(errs, fmt.Errorf("failed to initialize pipeline %s: %w", pipelineName, err))
|
||||||
}
|
} else {
|
||||||
|
svc.pipelines[pipelineName] = pl
|
||||||
s.logger.Debug("msg", "Creating pipeline", "pipeline", cfg.Name)
|
|
||||||
|
|
||||||
// Create pipeline context
|
|
||||||
pipelineCtx, pipelineCancel := context.WithCancel(s.ctx)
|
|
||||||
|
|
||||||
// Create pipeline instance
|
|
||||||
pipeline := &Pipeline{
|
|
||||||
Name: cfg.Name,
|
|
||||||
Config: cfg,
|
|
||||||
Stats: &PipelineStats{
|
|
||||||
StartTime: time.Now(),
|
|
||||||
},
|
|
||||||
ctx: pipelineCtx,
|
|
||||||
cancel: pipelineCancel,
|
|
||||||
logger: s.logger,
|
|
||||||
}
|
|
||||||
|
|
||||||
// Create sources
|
|
||||||
for i, srcCfg := range cfg.Sources {
|
|
||||||
src, err := s.createSource(srcCfg)
|
|
||||||
if err != nil {
|
|
||||||
pipelineCancel()
|
|
||||||
return fmt.Errorf("failed to create source[%d]: %w", i, err)
|
|
||||||
}
|
|
||||||
pipeline.Sources = append(pipeline.Sources, src)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Create pipeline rate limiter
|
|
||||||
if cfg.RateLimit != nil {
|
|
||||||
limiter, err := limit.NewRateLimiter(*cfg.RateLimit, s.logger)
|
|
||||||
if err != nil {
|
|
||||||
pipelineCancel()
|
|
||||||
return fmt.Errorf("failed to create pipeline rate limiter: %w", err)
|
|
||||||
}
|
|
||||||
pipeline.RateLimiter = limiter
|
|
||||||
}
|
|
||||||
|
|
||||||
// Create filter chain
|
|
||||||
if len(cfg.Filters) > 0 {
|
|
||||||
chain, err := filter.NewChain(cfg.Filters, s.logger)
|
|
||||||
if err != nil {
|
|
||||||
pipelineCancel()
|
|
||||||
return fmt.Errorf("failed to create filter chain: %w", err)
|
|
||||||
}
|
|
||||||
pipeline.FilterChain = chain
|
|
||||||
}
|
|
||||||
|
|
||||||
// Create formatter for the pipeline
|
|
||||||
var formatter format.Formatter
|
|
||||||
var err error
|
|
||||||
if cfg.Format != "" || len(cfg.FormatOptions) > 0 {
|
|
||||||
formatter, err = format.New(cfg.Format, cfg.FormatOptions, s.logger)
|
|
||||||
if err != nil {
|
|
||||||
pipelineCancel()
|
|
||||||
return fmt.Errorf("failed to create formatter: %w", err)
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
// Create sinks
|
logger.Info("msg", "Service initialization completed", "pipelines", len(svc.pipelines))
|
||||||
for i, sinkCfg := range cfg.Sinks {
|
|
||||||
sinkInst, err := s.createSink(sinkCfg, formatter)
|
|
||||||
if err != nil {
|
|
||||||
pipelineCancel()
|
|
||||||
return fmt.Errorf("failed to create sink[%d]: %w", i, err)
|
|
||||||
}
|
|
||||||
pipeline.Sinks = append(pipeline.Sinks, sinkInst)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Start all sources
|
return svc, errs
|
||||||
for i, src := range pipeline.Sources {
|
|
||||||
if err := src.Start(); err != nil {
|
|
||||||
pipeline.Shutdown()
|
|
||||||
return fmt.Errorf("failed to start source[%d]: %w", i, err)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Start all sinks
|
|
||||||
for i, sinkInst := range pipeline.Sinks {
|
|
||||||
if err := sinkInst.Start(pipelineCtx); err != nil {
|
|
||||||
pipeline.Shutdown()
|
|
||||||
return fmt.Errorf("failed to start sink[%d]: %w", i, err)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Configure authentication for sinks that support it
|
|
||||||
for _, sinkInst := range pipeline.Sinks {
|
|
||||||
if setter, ok := sinkInst.(sink.AuthSetter); ok {
|
|
||||||
setter.SetAuthConfig(cfg.Auth)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Wire sources to sinks through filters
|
|
||||||
s.wirePipeline(pipeline)
|
|
||||||
|
|
||||||
// Start stats updater
|
|
||||||
pipeline.startStatsUpdater(pipelineCtx)
|
|
||||||
|
|
||||||
s.pipelines[cfg.Name] = pipeline
|
|
||||||
s.logger.Info("msg", "Pipeline created successfully",
|
|
||||||
"pipeline", cfg.Name,
|
|
||||||
"auth_enabled", cfg.Auth != nil && cfg.Auth.Type != "none")
|
|
||||||
return nil
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// wirePipeline connects sources to sinks through filters
|
// Start starts all or specific pipelines
|
||||||
func (s *Service) wirePipeline(p *Pipeline) {
|
func (svc *Service) Start(names ...string) error {
|
||||||
// For each source, subscribe and process entries
|
svc.mu.RLock()
|
||||||
for _, src := range p.Sources {
|
defer svc.mu.RUnlock()
|
||||||
srcChan := src.Subscribe()
|
|
||||||
|
|
||||||
// Create a processing goroutine for this source
|
var errs error
|
||||||
p.wg.Add(1)
|
// If no names are provided, start all pipelines
|
||||||
go func(source source.Source, entries <-chan core.LogEntry) {
|
if len(names) == 0 {
|
||||||
defer p.wg.Done()
|
svc.logger.Info("msg", "Starting all pipelines")
|
||||||
|
for name, p := range svc.pipelines {
|
||||||
// Panic recovery to prevent single source from crashing pipeline
|
if err := p.Start(); err != nil {
|
||||||
defer func() {
|
errs = errors.Join(errs, fmt.Errorf("failed to start pipeline %s: %w", name, err))
|
||||||
if r := recover(); r != nil {
|
|
||||||
s.logger.Error("msg", "Panic in pipeline processing",
|
|
||||||
"pipeline", p.Name,
|
|
||||||
"source", source.GetStats().Type,
|
|
||||||
"panic", r)
|
|
||||||
|
|
||||||
// Ensure failed pipelines don't leave resources hanging
|
|
||||||
go func() {
|
|
||||||
s.logger.Warn("msg", "Shutting down pipeline due to panic",
|
|
||||||
"pipeline", p.Name)
|
|
||||||
if err := s.RemovePipeline(p.Name); err != nil {
|
|
||||||
s.logger.Error("msg", "Failed to remove panicked pipeline",
|
|
||||||
"pipeline", p.Name,
|
|
||||||
"error", err)
|
|
||||||
}
|
}
|
||||||
}()
|
|
||||||
}
|
}
|
||||||
}()
|
} else {
|
||||||
|
// Start only the specified pipelines
|
||||||
for {
|
svc.logger.Info("msg", "Starting specified pipelines", "pipelines", names)
|
||||||
select {
|
for _, name := range names {
|
||||||
case <-p.ctx.Done():
|
if p, exists := svc.pipelines[name]; exists {
|
||||||
return
|
if err := p.Start(); err != nil {
|
||||||
case entry, ok := <-entries:
|
errs = errors.Join(errs, fmt.Errorf("failed to start pipeline %s: %w", name, err))
|
||||||
if !ok {
|
}
|
||||||
return
|
} else {
|
||||||
|
errs = errors.Join(errs, fmt.Errorf("pipeline %s not found", name))
|
||||||
}
|
}
|
||||||
|
|
||||||
p.Stats.TotalEntriesProcessed.Add(1)
|
|
||||||
|
|
||||||
// Apply pipeline rate limiter
|
|
||||||
if p.RateLimiter != nil {
|
|
||||||
if !p.RateLimiter.Allow(entry) {
|
|
||||||
p.Stats.TotalEntriesDroppedByRateLimit.Add(1)
|
|
||||||
continue // Drop the entry
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
// Apply filters if configured
|
svc.logger.Debug("msg", "Finished starting pipeline(s)", "pipelines", names)
|
||||||
if p.FilterChain != nil {
|
|
||||||
if !p.FilterChain.Apply(entry) {
|
|
||||||
p.Stats.TotalEntriesFiltered.Add(1)
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Send to all sinks
|
return errs
|
||||||
for _, sinkInst := range p.Sinks {
|
|
||||||
select {
|
|
||||||
case sinkInst.Input() <- entry:
|
|
||||||
case <-p.ctx.Done():
|
|
||||||
return
|
|
||||||
default:
|
|
||||||
// Drop if sink buffer is full, may flood logging for slow client
|
|
||||||
s.logger.Debug("msg", "Dropped log entry - sink buffer full",
|
|
||||||
"pipeline", p.Name)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}(src, srcChan)
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// createSource creates a source instance based on configuration
|
// Stop stops all or specific pipeline
|
||||||
func (s *Service) createSource(cfg config.SourceConfig) (source.Source, error) {
|
func (svc *Service) Stop(names ...string) error {
|
||||||
switch cfg.Type {
|
svc.mu.RLock()
|
||||||
case "directory":
|
defer svc.mu.RUnlock()
|
||||||
return source.NewDirectorySource(cfg.Options, s.logger)
|
|
||||||
case "stdin":
|
var errs error
|
||||||
return source.NewStdinSource(cfg.Options, s.logger)
|
|
||||||
case "http":
|
// If no names are provided, stop all pipelines
|
||||||
return source.NewHTTPSource(cfg.Options, s.logger)
|
if len(names) == 0 {
|
||||||
case "tcp":
|
svc.logger.Info("msg", "Stopping all pipelines")
|
||||||
return source.NewTCPSource(cfg.Options, s.logger)
|
for name, p := range svc.pipelines {
|
||||||
default:
|
if err := p.Stop(); err != nil {
|
||||||
return nil, fmt.Errorf("unknown source type: %s", cfg.Type)
|
errs = errors.Join(errs, fmt.Errorf("failed to stop pipeline %s: %w", name, err))
|
||||||
}
|
}
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
// Stop only the specified pipelines
|
||||||
|
svc.logger.Info("msg", "Stopping specified pipelines", "pipelines", names)
|
||||||
|
for _, name := range names {
|
||||||
|
if p, exists := svc.pipelines[name]; exists {
|
||||||
|
if err := p.Stop(); err != nil {
|
||||||
|
errs = errors.Join(errs, fmt.Errorf("failed to stop pipeline %s: %w", name, err))
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
errs = errors.Join(errs, fmt.Errorf("pipeline %s not found", name))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
svc.logger.Debug("msg", "Finished stopping pipeline(s)", "pipelines", names)
|
||||||
|
|
||||||
|
return errs
|
||||||
}
|
}
|
||||||
|
|
||||||
// createSink creates a sink instance based on configuration
|
// GetPipeline returns a pipeline by its name
|
||||||
func (s *Service) createSink(cfg config.SinkConfig, formatter format.Formatter) (sink.Sink, error) {
|
func (svc *Service) GetPipeline(name string) (*pipeline.Pipeline, error) {
|
||||||
if formatter == nil {
|
svc.mu.RLock()
|
||||||
// Default formatters for different sink types
|
defer svc.mu.RUnlock()
|
||||||
defaultFormat := "raw"
|
|
||||||
switch cfg.Type {
|
|
||||||
case "http", "tcp", "http_client", "tcp_client":
|
|
||||||
defaultFormat = "json"
|
|
||||||
}
|
|
||||||
|
|
||||||
var err error
|
pipeline, exists := svc.pipelines[name]
|
||||||
formatter, err = format.New(defaultFormat, nil, s.logger)
|
|
||||||
if err != nil {
|
|
||||||
return nil, fmt.Errorf("failed to create default formatter: %w", err)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
switch cfg.Type {
|
|
||||||
case "http":
|
|
||||||
return sink.NewHTTPSink(cfg.Options, s.logger, formatter)
|
|
||||||
case "tcp":
|
|
||||||
return sink.NewTCPSink(cfg.Options, s.logger, formatter)
|
|
||||||
case "http_client":
|
|
||||||
return sink.NewHTTPClientSink(cfg.Options, s.logger, formatter)
|
|
||||||
case "tcp_client":
|
|
||||||
return sink.NewTCPClientSink(cfg.Options, s.logger, formatter)
|
|
||||||
case "file":
|
|
||||||
return sink.NewFileSink(cfg.Options, s.logger, formatter)
|
|
||||||
case "stdout":
|
|
||||||
return sink.NewStdoutSink(cfg.Options, s.logger, formatter)
|
|
||||||
case "stderr":
|
|
||||||
return sink.NewStderrSink(cfg.Options, s.logger, formatter)
|
|
||||||
default:
|
|
||||||
return nil, fmt.Errorf("unknown sink type: %s", cfg.Type)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// GetPipeline returns a pipeline by name
|
|
||||||
func (s *Service) GetPipeline(name string) (*Pipeline, error) {
|
|
||||||
s.mu.RLock()
|
|
||||||
defer s.mu.RUnlock()
|
|
||||||
|
|
||||||
pipeline, exists := s.pipelines[name]
|
|
||||||
if !exists {
|
if !exists {
|
||||||
return nil, fmt.Errorf("pipeline '%s' not found", name)
|
return nil, fmt.Errorf("pipeline '%s' not found", name)
|
||||||
}
|
}
|
||||||
return pipeline, nil
|
return pipeline, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// ListStreams is deprecated, use ListPipelines
|
// ListPipelines returns the names of all currently managed pipelines
|
||||||
func (s *Service) ListStreams() []string {
|
func (svc *Service) ListPipelines() []string {
|
||||||
s.logger.Warn("msg", "ListStreams is deprecated, use ListPipelines",
|
svc.mu.RLock()
|
||||||
"component", "service")
|
defer svc.mu.RUnlock()
|
||||||
return s.ListPipelines()
|
|
||||||
}
|
|
||||||
|
|
||||||
// ListPipelines returns all pipeline names
|
names := make([]string, 0, len(svc.pipelines))
|
||||||
func (s *Service) ListPipelines() []string {
|
for name := range svc.pipelines {
|
||||||
s.mu.RLock()
|
|
||||||
defer s.mu.RUnlock()
|
|
||||||
|
|
||||||
names := make([]string, 0, len(s.pipelines))
|
|
||||||
for name := range s.pipelines {
|
|
||||||
names = append(names, name)
|
names = append(names, name)
|
||||||
}
|
}
|
||||||
return names
|
return names
|
||||||
}
|
}
|
||||||
|
|
||||||
// RemoveStream is deprecated, use RemovePipeline
|
// RemovePipeline stops and removes a pipeline from the service
|
```diff
-func (s *Service) RemoveStream(name string) error {
-	s.logger.Warn("msg", "RemoveStream is deprecated, use RemovePipeline",
-		"component", "service")
-	return s.RemovePipeline(name)
-}
-
-// RemovePipeline stops and removes a pipeline
-func (s *Service) RemovePipeline(name string) error {
-	s.mu.Lock()
-	defer s.mu.Unlock()
-
-	pipeline, exists := s.pipelines[name]
+func (svc *Service) RemovePipeline(name string) error {
+	svc.mu.Lock()
+	defer svc.mu.Unlock()
+
+	pl, exists := svc.pipelines[name]
 	if !exists {
 		err := fmt.Errorf("pipeline '%s' not found", name)
-		s.logger.Warn("msg", "Cannot remove non-existent pipeline",
+		svc.logger.Warn("msg", "Cannot remove non-existent pipeline",
 			"component", "service",
 			"pipeline", name,
 			"error", err)
 		return err
 	}
 
-	s.logger.Info("msg", "Removing pipeline", "pipeline", name)
-	pipeline.Shutdown()
-	delete(s.pipelines, name)
+	svc.logger.Info("msg", "Removing pipeline", "pipeline", name)
+	pl.Shutdown()
+	delete(svc.pipelines, name)
 	return nil
 }
 
-// Shutdown stops all pipelines
-func (s *Service) Shutdown() {
-	s.logger.Info("msg", "Service shutdown initiated")
+// Shutdown gracefully stops all pipelines managed by the service
+func (svc *Service) Shutdown() {
+	svc.logger.Info("msg", "Service shutdown initiated")
 
-	s.mu.Lock()
-	pipelines := make([]*Pipeline, 0, len(s.pipelines))
-	for _, pipeline := range s.pipelines {
-		pipelines = append(pipelines, pipeline)
+	svc.mu.Lock()
+	pipelines := make([]*pipeline.Pipeline, 0, len(svc.pipelines))
+	for _, pl := range svc.pipelines {
+		pipelines = append(pipelines, pl)
 	}
-	s.mu.Unlock()
+	svc.mu.Unlock()
 
 	// Stop all pipelines concurrently
 	var wg sync.WaitGroup
-	for _, pipeline := range pipelines {
+	for _, pl := range pipelines {
 		wg.Add(1)
-		go func(p *Pipeline) {
+		go func(p *pipeline.Pipeline) {
 			defer wg.Done()
 			p.Shutdown()
-		}(pipeline)
+		}(pl)
 	}
 	wg.Wait()
 
-	s.cancel()
-	s.wg.Wait()
+	svc.cancel()
+	svc.wg.Wait()
 
-	s.logger.Info("msg", "Service shutdown complete")
+	svc.logger.Info("msg", "Service shutdown complete")
 }
 
 // GetGlobalStats returns statistics for all pipelines
-func (s *Service) GetGlobalStats() map[string]any {
-	s.mu.RLock()
-	defer s.mu.RUnlock()
+func (svc *Service) GetGlobalStats() map[string]any {
+	svc.mu.RLock()
+	defer svc.mu.RUnlock()
 
 	stats := map[string]any{
 		"pipelines":       make(map[string]any),
-		"total_pipelines": len(s.pipelines),
+		"total_pipelines": len(svc.pipelines),
 	}
 
-	for name, pipeline := range s.pipelines {
-		stats["pipelines"].(map[string]any)[name] = pipeline.GetStats()
+	for name, pl := range svc.pipelines {
+		stats["pipelines"].(map[string]any)[name] = pl.GetStats()
 	}
 
 	return stats
```
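The refactored `Shutdown` keeps the same fan-out shape: one goroutine per pipeline, joined with a `sync.WaitGroup` so the service only proceeds once every pipeline has stopped. A self-contained sketch of that pattern (the `pipeline` stub here is illustrative, not the project's `pipeline.Pipeline`):

```go
package main

import (
	"fmt"
	"sync"
)

// pipeline is a stand-in for the real *pipeline.Pipeline type.
type pipeline struct{ name string }

func (p *pipeline) Shutdown() {}

// shutdownAll stops every pipeline concurrently and blocks until all
// goroutines finish, mirroring the WaitGroup fan-out in Service.Shutdown.
func shutdownAll(pipelines []*pipeline) int {
	var wg sync.WaitGroup
	for _, pl := range pipelines {
		wg.Add(1)
		go func(p *pipeline) {
			defer wg.Done()
			p.Shutdown()
		}(pl)
	}
	wg.Wait() // returns only after every Shutdown has completed
	return len(pipelines)
}

func main() {
	pls := []*pipeline{{"a"}, {"b"}, {"c"}}
	fmt.Println(shutdownAll(pls)) // → 3
}
```

Passing `pl` as a parameter to the goroutine (rather than closing over the loop variable) is what makes this safe on Go versions before 1.22's per-iteration loop variables.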
src/internal/session/proxy.go (new file, +82 lines)
```go
package session

import (
	"sync"
)

// Proxy provides filtered access to session management for a specific plugin instance
type Proxy struct {
	manager    *Manager
	instanceID string
	mu         sync.RWMutex
}

// NewProxy creates a session proxy for a specific plugin instance
func NewProxy(manager *Manager, instanceID string) *Proxy {
	return &Proxy{
		manager:    manager,
		instanceID: instanceID,
	}
}

// CreateSession creates a new session scoped to this instance
func (p *Proxy) CreateSession(remoteAddr string, metadata map[string]any) *Session {
	if metadata == nil {
		metadata = make(map[string]any)
	}

	// Add instance ID to metadata
	metadata["instance_id"] = p.instanceID

	// Create session with instance-scoped source
	session := p.manager.CreateSession(remoteAddr, p.instanceID, metadata)
	session.InstanceID = p.instanceID

	return session
}

// GetSession retrieves a session if it belongs to this instance
func (p *Proxy) GetSession(sessionID string) (*Session, bool) {
	session, exists := p.manager.GetSession(sessionID)
	if !exists || session.InstanceID != p.instanceID {
		return nil, false
	}
	return session, true
}

// RemoveSession removes a session if it belongs to this instance
func (p *Proxy) RemoveSession(sessionID string) bool {
	if session, exists := p.GetSession(sessionID); exists {
		p.manager.RemoveSession(session.ID)
		return true
	}
	return false
}

// GetActiveSessions returns all active sessions for this instance
func (p *Proxy) GetActiveSessions() []*Session {
	allSessions := p.manager.GetSessionsBySource(p.instanceID)

	// Filter by instance ID
	var filtered []*Session
	for _, session := range allSessions {
		if session.InstanceID == p.instanceID {
			filtered = append(filtered, session)
		}
	}
	return filtered
}

// UpdateActivity updates activity for a session if it belongs to this instance
func (p *Proxy) UpdateActivity(sessionID string) bool {
	if session, exists := p.GetSession(sessionID); exists {
		p.manager.UpdateActivity(session.ID)
		return true
	}
	return false
}

// GetInstanceID returns the instance ID this proxy is bound to
func (p *Proxy) GetInstanceID() string {
	return p.instanceID
}
```
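`Proxy.GetActiveSessions` narrows the manager's session set down to those owned by one plugin instance. The ownership filter reduces to a small pure function; a self-contained sketch with a stand-in `Session` struct (not the package's real type):

```go
package main

import "fmt"

// Session is a minimal stand-in carrying only the fields the filter needs.
type Session struct {
	ID         string
	InstanceID string
}

// filterByInstance keeps only sessions owned by the given plugin instance,
// the same check Proxy.GetActiveSessions applies after the source lookup.
func filterByInstance(sessions []Session, instanceID string) []Session {
	var out []Session
	for _, s := range sessions {
		if s.InstanceID == instanceID {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	all := []Session{
		{ID: "a", InstanceID: "sink-1"},
		{ID: "b", InstanceID: "sink-2"},
		{ID: "c", InstanceID: "sink-1"},
	}
	fmt.Println(len(filterByInstance(all, "sink-1"))) // → 2
}
```

Because the proxy also uses the instance ID as the session's source, the filter is usually a no-op second guard; it matters only if a session's source and `InstanceID` ever diverge.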
src/internal/session/session.go (new file, +294 lines)
```go
package session

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"sync"
	"time"

	"logwisp/src/internal/core"
)

// Session represents a connection session
type Session struct {
	InstanceID   string         // Plugin instance identifier
	ID           string         // Unique session identifier
	RemoteAddr   string         // Client address
	CreatedAt    time.Time      // Session creation time
	LastActivity time.Time      // Last activity timestamp
	Metadata     map[string]any // Optional metadata (e.g., TLS info)

	// Connection context
	Source string // Source type: "tcp_source", "http_source", "tcp_sink", etc.
}

// Manager handles the lifecycle of sessions
type Manager struct {
	sessions map[string]*Session
	mu       sync.RWMutex

	// Cleanup configuration
	maxIdleTime   time.Duration
	cleanupTicker *time.Ticker
	done          chan struct{}

	// Expiry callbacks by source type
	expiryCallbacks map[string]func(sessionID, remoteAddr string)
	callbacksMu     sync.RWMutex
}

// NewManager creates a new session manager with a specified idle timeout
func NewManager(maxIdleTime time.Duration) *Manager {
	if maxIdleTime == 0 {
		maxIdleTime = core.SessionDefaultMaxIdleTime
	}

	m := &Manager{
		sessions:        make(map[string]*Session),
		maxIdleTime:     maxIdleTime,
		done:            make(chan struct{}),
		expiryCallbacks: make(map[string]func(sessionID, remoteAddr string)),
	}

	// Start cleanup routine
	m.startCleanup()

	return m
}

// CreateSession creates and stores a new session for a connection
func (m *Manager) CreateSession(remoteAddr string, source string, metadata map[string]any) *Session {
	session := &Session{
		ID:           generateSessionID(),
		RemoteAddr:   remoteAddr,
		CreatedAt:    time.Now(),
		LastActivity: time.Now(),
		Source:       source,
		Metadata:     metadata,
	}

	if metadata == nil {
		session.Metadata = make(map[string]any)
	}

	m.StoreSession(session)
	return session
}

// StoreSession adds a session to the manager
func (m *Manager) StoreSession(session *Session) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.sessions[session.ID] = session
}

// GetSession retrieves a session by its unique ID
func (m *Manager) GetSession(sessionID string) (*Session, bool) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	session, exists := m.sessions[sessionID]
	return session, exists
}

// RemoveSession removes a session from the manager
func (m *Manager) RemoveSession(sessionID string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	delete(m.sessions, sessionID)
}

// UpdateActivity updates the last activity timestamp for a session
func (m *Manager) UpdateActivity(sessionID string) {
	m.mu.Lock()
	defer m.mu.Unlock()

	if session, exists := m.sessions[sessionID]; exists {
		session.LastActivity = time.Now()
	}
}

// IsSessionActive checks if a session exists and has not been idle for too long
func (m *Manager) IsSessionActive(sessionID string) bool {
	m.mu.RLock()
	defer m.mu.RUnlock()

	if session, exists := m.sessions[sessionID]; exists {
		// Session exists and hasn't exceeded idle timeout
		return time.Since(session.LastActivity) < m.maxIdleTime
	}
	return false
}

// GetActiveSessions returns a snapshot of all currently active sessions
func (m *Manager) GetActiveSessions() []*Session {
	m.mu.RLock()
	defer m.mu.RUnlock()

	sessions := make([]*Session, 0, len(m.sessions))
	for _, session := range m.sessions {
		sessions = append(sessions, session)
	}
	return sessions
}

// GetSessionCount returns the number of active sessions
func (m *Manager) GetSessionCount() int {
	m.mu.RLock()
	defer m.mu.RUnlock()
	return len(m.sessions)
}

// GetSessionsBySource returns all sessions matching a specific source type
func (m *Manager) GetSessionsBySource(source string) []*Session {
	m.mu.RLock()
	defer m.mu.RUnlock()

	var sessions []*Session
	for _, session := range m.sessions {
		if session.Source == source {
			sessions = append(sessions, session)
		}
	}
	return sessions
}

// GetActiveSessionsBySource returns all active sessions for a given source
func (m *Manager) GetActiveSessionsBySource(source string) []*Session {
	m.mu.RLock()
	defer m.mu.RUnlock()

	var sessions []*Session
	now := time.Now()

	for _, session := range m.sessions {
		if session.Source == source && now.Sub(session.LastActivity) < m.maxIdleTime {
			sessions = append(sessions, session)
		}
	}
	return sessions
}

// GetStats returns statistics about the session manager
func (m *Manager) GetStats() map[string]any {
	m.mu.RLock()
	defer m.mu.RUnlock()

	sourceCounts := make(map[string]int)
	var totalSessions int
	var oldestSession time.Time
	var newestSession time.Time

	for _, session := range m.sessions {
		totalSessions++
		sourceCounts[session.Source]++

		if oldestSession.IsZero() || session.CreatedAt.Before(oldestSession) {
			oldestSession = session.CreatedAt
		}
		if newestSession.IsZero() || session.CreatedAt.After(newestSession) {
			newestSession = session.CreatedAt
		}
	}

	stats := map[string]any{
		"total_sessions":   totalSessions,
		"sessions_by_type": sourceCounts,
		"max_idle_time":    m.maxIdleTime.String(),
	}

	if !oldestSession.IsZero() {
		stats["oldest_session_age"] = time.Since(oldestSession).String()
	}
	if !newestSession.IsZero() {
		stats["newest_session_age"] = time.Since(newestSession).String()
	}

	return stats
}

// Stop gracefully stops the session manager and its cleanup goroutine
func (m *Manager) Stop() {
	close(m.done)
	if m.cleanupTicker != nil {
		m.cleanupTicker.Stop()
	}
}

// RegisterExpiryCallback registers a callback function to be executed when a session expires
func (m *Manager) RegisterExpiryCallback(source string, callback func(sessionID, remoteAddr string)) {
	m.callbacksMu.Lock()
	defer m.callbacksMu.Unlock()

	if m.expiryCallbacks == nil {
		m.expiryCallbacks = make(map[string]func(sessionID, remoteAddr string))
	}
	m.expiryCallbacks[source] = callback
}

// UnregisterExpiryCallback removes an expiry callback for a given source type
func (m *Manager) UnregisterExpiryCallback(source string) {
	m.callbacksMu.Lock()
	defer m.callbacksMu.Unlock()

	delete(m.expiryCallbacks, source)
}

// startCleanup initializes the periodic cleanup of idle sessions
func (m *Manager) startCleanup() {
	m.cleanupTicker = time.NewTicker(core.SessionCleanupInterval)

	go func() {
		for {
			select {
			case <-m.cleanupTicker.C:
				m.cleanupIdleSessions()
			case <-m.done:
				return
			}
		}
	}()
}

// cleanupIdleSessions removes sessions that have exceeded the maximum idle time.
func (m *Manager) cleanupIdleSessions() {
	now := time.Now()
	expiredSessions := make([]*Session, 0)

	m.mu.Lock()
	for id, session := range m.sessions {
		idleTime := now.Sub(session.LastActivity)

		if idleTime > m.maxIdleTime {
			expiredSessions = append(expiredSessions, session)
			delete(m.sessions, id)
		}
	}
	m.mu.Unlock()

	if len(expiredSessions) > 0 {
		m.callbacksMu.RLock()
		callbacks := make(map[string]func(sessionID, remoteAddr string))
		for k, v := range m.expiryCallbacks {
			callbacks[k] = v
		}
		m.callbacksMu.RUnlock()

		for _, session := range expiredSessions {
			if callback, exists := callbacks[session.Source]; exists {
				// Call callback to notify owner
				go callback(session.ID, session.RemoteAddr)
			}
		}
	}
}

// generateSessionID creates a unique, random session identifier.
func generateSessionID() string {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		// Fallback to timestamp-based ID
		return fmt.Sprintf("session_%d", time.Now().UnixNano())
	}
	return base64.URLEncoding.WithPadding(base64.NoPadding).EncodeToString(b)
}
```
src/internal/sink/console.go (deleted, -248 lines)
```go
// FILE: logwisp/src/internal/sink/console.go
package sink

import (
	"context"
	"io"
	"os"
	"strings"
	"sync/atomic"
	"time"

	"logwisp/src/internal/core"
	"logwisp/src/internal/format"

	"github.com/lixenwraith/log"
)

// ConsoleConfig holds common configuration for console sinks
type ConsoleConfig struct {
	Target     string // "stdout", "stderr", or "split"
	BufferSize int64
}

// StdoutSink writes log entries to stdout
type StdoutSink struct {
	input     chan core.LogEntry
	config    ConsoleConfig
	output    io.Writer
	done      chan struct{}
	startTime time.Time
	logger    *log.Logger
	formatter format.Formatter

	// Statistics
	totalProcessed atomic.Uint64
	lastProcessed  atomic.Value // time.Time
}

// NewStdoutSink creates a new stdout sink
func NewStdoutSink(options map[string]any, logger *log.Logger, formatter format.Formatter) (*StdoutSink, error) {
	config := ConsoleConfig{
		Target:     "stdout",
		BufferSize: 1000,
	}

	// Check for split mode configuration
	if target, ok := options["target"].(string); ok {
		config.Target = target
	}

	if bufSize, ok := options["buffer_size"].(int64); ok && bufSize > 0 {
		config.BufferSize = bufSize
	}

	s := &StdoutSink{
		input:     make(chan core.LogEntry, config.BufferSize),
		config:    config,
		output:    os.Stdout,
		done:      make(chan struct{}),
		startTime: time.Now(),
		logger:    logger,
		formatter: formatter,
	}
	s.lastProcessed.Store(time.Time{})

	return s, nil
}

func (s *StdoutSink) Input() chan<- core.LogEntry {
	return s.input
}

func (s *StdoutSink) Start(ctx context.Context) error {
	go s.processLoop(ctx)
	s.logger.Info("msg", "Stdout sink started",
		"component", "stdout_sink",
		"target", s.config.Target)
	return nil
}

func (s *StdoutSink) Stop() {
	s.logger.Info("msg", "Stopping stdout sink")
	close(s.done)
	s.logger.Info("msg", "Stdout sink stopped")
}

func (s *StdoutSink) GetStats() SinkStats {
	lastProc, _ := s.lastProcessed.Load().(time.Time)

	return SinkStats{
		Type:           "stdout",
		TotalProcessed: s.totalProcessed.Load(),
		StartTime:      s.startTime,
		LastProcessed:  lastProc,
		Details: map[string]any{
			"target": s.config.Target,
		},
	}
}

func (s *StdoutSink) processLoop(ctx context.Context) {
	for {
		select {
		case entry, ok := <-s.input:
			if !ok {
				return
			}

			s.totalProcessed.Add(1)
			s.lastProcessed.Store(time.Now())

			// Handle split mode - only process INFO/DEBUG for stdout
			if s.config.Target == "split" {
				upperLevel := strings.ToUpper(entry.Level)
				if upperLevel == "ERROR" || upperLevel == "WARN" || upperLevel == "WARNING" {
					// Skip ERROR/WARN levels in stdout when in split mode
					continue
				}
			}

			// Format and write
			formatted, err := s.formatter.Format(entry)
			if err != nil {
				s.logger.Error("msg", "Failed to format log entry for stdout", "error", err)
				continue
			}
			s.output.Write(formatted)

		case <-ctx.Done():
			return
		case <-s.done:
			return
		}
	}
}

// StderrSink writes log entries to stderr
type StderrSink struct {
	input     chan core.LogEntry
	config    ConsoleConfig
	output    io.Writer
	done      chan struct{}
	startTime time.Time
	logger    *log.Logger
	formatter format.Formatter

	// Statistics
	totalProcessed atomic.Uint64
	lastProcessed  atomic.Value // time.Time
}

// NewStderrSink creates a new stderr sink
func NewStderrSink(options map[string]any, logger *log.Logger, formatter format.Formatter) (*StderrSink, error) {
	config := ConsoleConfig{
		Target:     "stderr",
		BufferSize: 1000,
	}

	// Check for split mode configuration
	if target, ok := options["target"].(string); ok {
		config.Target = target
	}

	if bufSize, ok := options["buffer_size"].(int64); ok && bufSize > 0 {
		config.BufferSize = bufSize
	}

	s := &StderrSink{
		input:     make(chan core.LogEntry, config.BufferSize),
		config:    config,
		output:    os.Stderr,
		done:      make(chan struct{}),
		startTime: time.Now(),
		logger:    logger,
		formatter: formatter,
	}
	s.lastProcessed.Store(time.Time{})

	return s, nil
}

func (s *StderrSink) Input() chan<- core.LogEntry {
	return s.input
}

func (s *StderrSink) Start(ctx context.Context) error {
	go s.processLoop(ctx)
	s.logger.Info("msg", "Stderr sink started",
		"component", "stderr_sink",
		"target", s.config.Target)
	return nil
}

func (s *StderrSink) Stop() {
	s.logger.Info("msg", "Stopping stderr sink")
	close(s.done)
	s.logger.Info("msg", "Stderr sink stopped")
}

func (s *StderrSink) GetStats() SinkStats {
	lastProc, _ := s.lastProcessed.Load().(time.Time)

	return SinkStats{
		Type:           "stderr",
		TotalProcessed: s.totalProcessed.Load(),
		StartTime:      s.startTime,
		LastProcessed:  lastProc,
		Details: map[string]any{
			"target": s.config.Target,
		},
	}
}

func (s *StderrSink) processLoop(ctx context.Context) {
	for {
		select {
		case entry, ok := <-s.input:
			if !ok {
				return
			}

			s.totalProcessed.Add(1)
			s.lastProcessed.Store(time.Now())

			// Handle split mode - only process ERROR/WARN for stderr
			if s.config.Target == "split" {
				upperLevel := strings.ToUpper(entry.Level)
				if upperLevel != "ERROR" && upperLevel != "WARN" && upperLevel != "WARNING" {
					// Skip non-ERROR/WARN levels in stderr when in split mode
					continue
				}
			}

			// Format and write
			formatted, err := s.formatter.Format(entry)
			if err != nil {
				s.logger.Error("msg", "Failed to format log entry for stderr", "error", err)
				continue
			}
			s.output.Write(formatted)

		case <-ctx.Done():
			return
		case <-s.done:
			return
		}
	}
}
```
src/internal/sink/console/console.go (new file, +209 lines)
```go
package console

import (
	"context"
	"fmt"
	"io"
	"os"
	"sync/atomic"
	"time"

	"logwisp/src/internal/config"
	"logwisp/src/internal/core"
	"logwisp/src/internal/plugin"
	"logwisp/src/internal/session"
	"logwisp/src/internal/sink"

	lconfig "github.com/lixenwraith/config"
	"github.com/lixenwraith/log"
)

// init registers the component in plugin factory
func init() {
	if err := plugin.RegisterSink("console", NewConsoleSinkPlugin); err != nil {
		panic(fmt.Sprintf("failed to register console sink: %v", err))
	}
}

// ConsoleSink writes log entries to the console (stdout/stderr) using a dedicated logger instance
type ConsoleSink struct {
	// Plugin identity and session management
	id      string
	proxy   *session.Proxy
	session *session.Session

	// Configuration
	config *config.ConsoleSinkOptions

	// Application
	input  chan core.TransportEvent
	output io.Writer
	logger *log.Logger // application logger

	// Runtime
	done      chan struct{}
	startTime time.Time

	// Statistics
	totalProcessed atomic.Uint64
	lastProcessed  atomic.Value // time.Time
}

const (
	// Defaults
	DefaultConsoleTarget     = "stdout"
	DefaultConsoleBufferSize = 1000
)

// NewConsoleSinkPlugin creates a console sink through plugin factory
func NewConsoleSinkPlugin(
	id string,
	configMap map[string]any,
	logger *log.Logger,
	proxy *session.Proxy,
) (sink.Sink, error) {
	opts := &config.ConsoleSinkOptions{}

	// Scan config map into struct
	if err := lconfig.ScanMap(configMap, opts); err != nil {
		return nil, fmt.Errorf("failed to parse config: %w", err)
	}

	// Validate and apply defaults
	if opts.Target == "" {
		opts.Target = DefaultConsoleTarget
	} else {
		validateTarget := lconfig.OneOf("stdout", "stderr")
		if err := validateTarget(opts.Target); err != nil {
			return nil, fmt.Errorf("target: %w", err)
		}
	}

	var output io.Writer
	switch opts.Target {
	case "stdout":
		output = os.Stdout
	case "stderr":
		output = os.Stderr
	}

	if opts.BufferSize <= 0 {
		opts.BufferSize = DefaultConsoleBufferSize
	}

	// Create and return plugin instance
	cs := &ConsoleSink{
		id:     id,
		proxy:  proxy,
		config: opts,
		input:  make(chan core.TransportEvent, opts.BufferSize),
		output: output,
		done:   make(chan struct{}),
		logger: logger,
	}
	cs.lastProcessed.Store(time.Time{})

	// Create session for output
	cs.session = proxy.CreateSession(
		fmt.Sprintf("console:%s", opts.Target),
		map[string]any{
			"instance_id": id,
			"type":        "console",
			"target":      opts.Target,
		},
	)

	cs.logger.Info("msg", "Console sink initialized",
		"component", "console_sink",
		"instance_id", id,
		"target", opts.Target,
	)

	return cs, nil
}

// Capabilities returns supported capabilities
func (cs *ConsoleSink) Capabilities() []core.Capability {
	return []core.Capability{
		core.CapSessionAware, // Single output session
	}
}

// Input returns the channel for sending transport events
func (cs *ConsoleSink) Input() chan<- core.TransportEvent {
	return cs.input
}

// Start begins the processing loop
func (cs *ConsoleSink) Start(ctx context.Context) error {
	cs.startTime = time.Now()
	go cs.processLoop(ctx)
	cs.logger.Info("msg", "Console sink started",
		"component", "console_sink",
		"target", cs.config.Target)
	return nil
}

// Stop gracefully shuts down the sink
func (cs *ConsoleSink) Stop() {
	cs.logger.Info("msg", "Stopping console sink", "target", cs.config.Target)

	// Remove session
	if cs.session != nil {
		cs.proxy.RemoveSession(cs.session.ID)
	}

	close(cs.done)

	cs.logger.Info("msg", "Console sink stopped",
		"instance_id", cs.id,
		"target", cs.config.Target,
	)
}

// GetStats returns sink statistics
func (cs *ConsoleSink) GetStats() sink.SinkStats {
	lastProc, _ := cs.lastProcessed.Load().(time.Time)

	return sink.SinkStats{
		ID:             cs.id,
		Type:           "console",
		TotalProcessed: cs.totalProcessed.Load(),
		StartTime:      cs.startTime,
		LastProcessed:  lastProc,
		Details: map[string]any{
			"target":      cs.config.Target,
			"buffer_size": cs.config.BufferSize,
		},
	}
}

// processLoop reads transport events and writes to console
func (cs *ConsoleSink) processLoop(ctx context.Context) {
	for {
		select {
		case event, ok := <-cs.input:
			if !ok {
				return
			}

			// Write pre-formatted payload directly to output
			if _, err := cs.output.Write(event.Payload); err != nil {
				cs.logger.Error("msg", "Failed to write to console",
					"component", "console_sink",
					"target", cs.config.Target,
					"error", err)
				continue
			}

			cs.totalProcessed.Add(1)
			cs.lastProcessed.Store(time.Now())

		case <-ctx.Done():
			return
		case <-cs.done:
			return
		}
	}
}
```
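The `init`-time `plugin.RegisterSink("console", ...)` call follows Go's blank-import registry pattern: each sink package registers its constructor under a type name when it is linked in, and the factory rejects duplicate names. A minimal self-contained sketch of such a registry (names and signatures are illustrative, not LogWisp's actual `plugin` package):

```go
package main

import (
	"fmt"
	"sort"
)

// factory is a stand-in for a sink constructor signature.
type factory func(id string) (any, error)

var sinkRegistry = map[string]factory{}

// registerSink stores a constructor under a sink type name,
// refusing to silently overwrite an existing registration.
func registerSink(name string, f factory) error {
	if _, exists := sinkRegistry[name]; exists {
		return fmt.Errorf("sink %q already registered", name)
	}
	sinkRegistry[name] = f
	return nil
}

// registeredSinks lists the known sink type names in sorted order.
func registeredSinks() []string {
	names := make([]string, 0, len(sinkRegistry))
	for n := range sinkRegistry {
		names = append(names, n)
	}
	sort.Strings(names)
	return names
}

func main() {
	_ = registerSink("console", func(id string) (any, error) { return nil, nil })
	dup := registerSink("console", func(id string) (any, error) { return nil, nil })
	fmt.Println(registeredSinks(), dup != nil) // → [console] true
}
```

Panicking on a registration error in `init`, as the console sink does, is conventional: a duplicate registration is a programmer error that should fail at startup, not at runtime.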
src/internal/sink/file.go (deleted, -167 lines)
```go
// FILE: logwisp/src/internal/sink/file.go
package sink

import (
	"context"
	"fmt"
	"sync/atomic"
	"time"

	"logwisp/src/internal/core"
	"logwisp/src/internal/format"

	"github.com/lixenwraith/log"
)

// FileSink writes log entries to files with rotation
type FileSink struct {
	input     chan core.LogEntry
	writer    *log.Logger // Internal logger instance for file writing
	done      chan struct{}
	startTime time.Time
	logger    *log.Logger // Application logger
	formatter format.Formatter

	// Statistics
	totalProcessed atomic.Uint64
	lastProcessed  atomic.Value // time.Time
}

// NewFileSink creates a new file sink
func NewFileSink(options map[string]any, logger *log.Logger, formatter format.Formatter) (*FileSink, error) {
	directory, ok := options["directory"].(string)
	if !ok || directory == "" {
		return nil, fmt.Errorf("file sink requires 'directory' option")
	}

	name, ok := options["name"].(string)
	if !ok || name == "" {
		return nil, fmt.Errorf("file sink requires 'name' option")
	}

	// Create configuration for the internal log writer
	writerConfig := log.DefaultConfig()
	writerConfig.Directory = directory
	writerConfig.Name = name
	writerConfig.EnableStdout = false  // File only
	writerConfig.ShowTimestamp = false // We already have timestamps in entries
	writerConfig.ShowLevel = false     // We already have levels in entries

	// Add optional configurations
	if maxSize, ok := options["max_size_mb"].(int64); ok && maxSize > 0 {
		writerConfig.MaxSizeKB = maxSize * 1000
	}

	if maxTotalSize, ok := options["max_total_size_mb"].(int64); ok && maxTotalSize >= 0 {
		writerConfig.MaxTotalSizeKB = maxTotalSize * 1000
	}

	if retention, ok := options["retention_hours"].(int64); ok && retention > 0 {
		writerConfig.RetentionPeriodHrs = float64(retention)
	}

	if minDiskFree, ok := options["min_disk_free_mb"].(int64); ok && minDiskFree > 0 {
		writerConfig.MinDiskFreeKB = minDiskFree * 1000
	}

	// Create internal logger for file writing
	writer := log.NewLogger()
	if err := writer.ApplyConfig(writerConfig); err != nil {
		return nil, fmt.Errorf("failed to initialize file writer: %w", err)
	}

	// Start the internal file writer
	if err := writer.Start(); err != nil {
		return nil, fmt.Errorf("failed to start file writer: %w", err)
	}

	// Buffer size for input channel
	// TODO: Make this configurable
	bufferSize := int64(1000)
	if bufSize, ok := options["buffer_size"].(int64); ok && bufSize > 0 {
		bufferSize = bufSize
	}

	fs := &FileSink{
		input:     make(chan core.LogEntry, bufferSize),
		writer:    writer,
		done:      make(chan struct{}),
		startTime: time.Now(),
		logger:    logger,
		formatter: formatter,
	}
	fs.lastProcessed.Store(time.Time{})

	return fs, nil
}

func (fs *FileSink) Input() chan<- core.LogEntry {
	return fs.input
}

func (fs *FileSink) Start(ctx context.Context) error {
	go fs.processLoop(ctx)
	fs.logger.Info("msg", "File sink started", "component", "file_sink")
	return nil
}

func (fs *FileSink) Stop() {
	fs.logger.Info("msg", "Stopping file sink")
	close(fs.done)
```
// Shutdown the writer with timeout
|
|
||||||
if err := fs.writer.Shutdown(2 * time.Second); err != nil {
|
|
||||||
fs.logger.Error("msg", "Error shutting down file writer",
|
|
||||||
"component", "file_sink",
|
|
||||||
"error", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
fs.logger.Info("msg", "File sink stopped")
|
|
||||||
}
|
|
||||||
|
|
||||||
func (fs *FileSink) GetStats() SinkStats {
|
|
||||||
lastProc, _ := fs.lastProcessed.Load().(time.Time)
|
|
||||||
|
|
||||||
return SinkStats{
|
|
||||||
Type: "file",
|
|
||||||
TotalProcessed: fs.totalProcessed.Load(),
|
|
||||||
StartTime: fs.startTime,
|
|
||||||
LastProcessed: lastProc,
|
|
||||||
Details: map[string]any{},
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func (fs *FileSink) processLoop(ctx context.Context) {
|
|
||||||
for {
|
|
||||||
select {
|
|
||||||
case entry, ok := <-fs.input:
|
|
||||||
if !ok {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
fs.totalProcessed.Add(1)
|
|
||||||
fs.lastProcessed.Store(time.Now())
|
|
||||||
|
|
||||||
// Format using the formatter instead of fmt.Sprintf
|
|
||||||
formatted, err := fs.formatter.Format(entry)
|
|
||||||
if err != nil {
|
|
||||||
fs.logger.Error("msg", "Failed to format log entry",
|
|
||||||
"component", "file_sink",
|
|
||||||
"error", err)
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
|
|
||||||
// Write formatted bytes (strip newline as writer adds it)
|
|
||||||
message := string(formatted)
|
|
||||||
if len(message) > 0 && message[len(message)-1] == '\n' {
|
|
||||||
message = message[:len(message)-1]
|
|
||||||
}
|
|
||||||
fs.writer.Message(message)
|
|
||||||
|
|
||||||
case <-ctx.Done():
|
|
||||||
return
|
|
||||||
case <-fs.done:
|
|
||||||
return
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
262	src/internal/sink/file/file.go	Normal file
@@ -0,0 +1,262 @@
package file

import (
	"context"
	"fmt"
	"sync/atomic"
	"time"

	"logwisp/src/internal/config"
	"logwisp/src/internal/core"
	"logwisp/src/internal/plugin"
	"logwisp/src/internal/session"
	"logwisp/src/internal/sink"

	lconfig "github.com/lixenwraith/config"
	"github.com/lixenwraith/log"
)

// init registers the component in the plugin factory
func init() {
	if err := plugin.RegisterSink("file", NewFileSinkPlugin); err != nil {
		panic(fmt.Sprintf("failed to register file sink: %v", err))
	}
}

// FileSink writes log entries to files with rotation
type FileSink struct {
	// Plugin identity and session management
	id      string
	proxy   *session.Proxy
	session *session.Session

	// Configuration
	config *config.FileSinkOptions

	// Application
	input  chan core.TransportEvent
	writer *log.Logger // internal logger for file writing
	logger *log.Logger // application logger

	// Runtime
	done      chan struct{}
	startTime time.Time

	// Statistics
	totalProcessed atomic.Uint64
	lastProcessed  atomic.Value // time.Time
}

const (
	// Defaults
	DefaultFileMaxSizeMB       = 100
	DefaultFileMaxTotalSizeMB  = 1000
	DefaultFileMinDiskFreeMB   = 100
	DefaultFileRetentionHours  = 168 // 7 days
	DefaultFileBufferSize      = 1000
	DefaultFileFlushIntervalMs = 100
)

// NewFileSinkPlugin creates a file sink through the plugin factory
func NewFileSinkPlugin(
	id string,
	configMap map[string]any,
	logger *log.Logger,
	proxy *session.Proxy,
) (sink.Sink, error) {
	// Create empty config struct
	opts := &config.FileSinkOptions{}

	// Scan config map into struct
	if err := lconfig.ScanMap(configMap, opts); err != nil {
		return nil, fmt.Errorf("failed to parse config: %w", err)
	}

	// Validate
	if err := lconfig.NonEmpty(opts.Directory); err != nil {
		return nil, fmt.Errorf("directory: %w", err)
	}
	if err := lconfig.NonEmpty(opts.Name); err != nil {
		return nil, fmt.Errorf("name: %w", err)
	}

	// Defaults
	if opts.MaxSizeMB <= 0 {
		opts.MaxSizeMB = DefaultFileMaxSizeMB
	}
	if opts.MaxTotalSizeMB <= 0 {
		opts.MaxTotalSizeMB = DefaultFileMaxTotalSizeMB
	}
	if opts.MinDiskFreeMB < 0 {
		opts.MinDiskFreeMB = DefaultFileMinDiskFreeMB
	}
	if opts.RetentionHours <= 0 {
		opts.RetentionHours = DefaultFileRetentionHours
	}
	if opts.BufferSize <= 0 {
		opts.BufferSize = DefaultFileBufferSize
	}
	if opts.FlushIntervalMs <= 0 {
		opts.FlushIntervalMs = DefaultFileFlushIntervalMs
	}

	// Create configuration for the internal log writer
	writerConfig := log.DefaultConfig()
	writerConfig.Directory = opts.Directory
	writerConfig.Name = opts.Name
	writerConfig.MaxSizeKB = opts.MaxSizeMB * 1000
	writerConfig.MaxTotalSizeKB = opts.MaxTotalSizeMB * 1000
	writerConfig.MinDiskFreeKB = opts.MinDiskFreeMB * 1000
	writerConfig.RetentionPeriodHrs = opts.RetentionHours
	writerConfig.BufferSize = opts.BufferSize
	writerConfig.FlushIntervalMs = opts.FlushIntervalMs
	// Sink logic
	writerConfig.EnableConsole = false
	writerConfig.EnableFile = true
	writerConfig.ShowTimestamp = false
	writerConfig.ShowLevel = false
	writerConfig.Format = "raw"

	// Create internal logger for file writing
	writer := log.NewLogger()
	if err := writer.ApplyConfig(writerConfig); err != nil {
		return nil, fmt.Errorf("failed to initialize file writer: %w", err)
	}

	fs := &FileSink{
		id:     id,
		proxy:  proxy,
		config: opts,
		input:  make(chan core.TransportEvent, opts.BufferSize),
		writer: writer,
		done:   make(chan struct{}),
		logger: logger,
	}
	fs.lastProcessed.Store(time.Time{})

	// Create session for file output
	fs.session = proxy.CreateSession(
		fmt.Sprintf("file:///%s/%s", opts.Directory, opts.Name),
		map[string]any{
			"instance_id": id,
			"type":        "file",
			"directory":   opts.Directory,
			"name":        opts.Name,
		},
	)

	fs.logger.Info("msg", "File sink initialized",
		"component", "file_sink",
		"instance_id", id,
		"directory", opts.Directory,
		"name", opts.Name)

	return fs, nil
}

// Capabilities returns supported capabilities
func (fs *FileSink) Capabilities() []core.Capability {
	return []core.Capability{
		core.CapSessionAware, // Single output session
	}
}

// Input returns the channel for sending transport events
func (fs *FileSink) Input() chan<- core.TransportEvent {
	return fs.input
}

// Start begins the processing loop for the sink
func (fs *FileSink) Start(ctx context.Context) error {
	// Start the internal file writer
	if err := fs.writer.Start(); err != nil {
		return fmt.Errorf("failed to start file writer: %w", err)
	}

	fs.startTime = time.Now()
	go fs.processLoop(ctx)

	fs.logger.Info("msg", "File sink started",
		"component", "file_sink",
	)
	fs.logger.Debug("msg", "File sink config",
		"component", "file_sink",
		"directory", fs.config.Directory,
		"name", fs.config.Name,
		"max_size_mb", fs.config.MaxSizeMB,
		"max_total_size_mb", fs.config.MaxTotalSizeMB,
		"min_disk_free_mb", fs.config.MinDiskFreeMB,
		"retention_hours", fs.config.RetentionHours,
		"buffer_size", fs.config.BufferSize,
		"flush_interval_ms", fs.config.FlushIntervalMs,
	)

	return nil
}

// Stop gracefully shuts down the sink
func (fs *FileSink) Stop() {
	fs.logger.Info("msg", "Stopping file sink",
		"component", "file_sink",
		"directory", fs.config.Directory,
		"name", fs.config.Name)

	close(fs.done)

	// Remove session
	if fs.session != nil {
		fs.proxy.RemoveSession(fs.session.ID)
	}

	// Shut down the writer with a timeout
	if err := fs.writer.Shutdown(core.LoggerShutdownTimeout); err != nil {
		fs.logger.Error("msg", "Error shutting down file writer",
			"component", "file_sink",
			"error", err)
	}

	fs.logger.Info("msg", "File sink stopped",
		"component", "file_sink",
		"instance_id", fs.id,
		"total_processed", fs.totalProcessed.Load())
}

// GetStats returns the sink's statistics
func (fs *FileSink) GetStats() sink.SinkStats {
	return sink.SinkStats{
		ID:             fs.id,
		Type:           "file",
		TotalProcessed: fs.totalProcessed.Load(),
		StartTime:      fs.startTime,
		LastProcessed:  fs.lastProcessed.Load().(time.Time),
		Details: map[string]any{
			"directory": fs.config.Directory,
			"name":      fs.config.Name,
		},
	}
}

// processLoop reads transport events and writes to file
func (fs *FileSink) processLoop(ctx context.Context) {
	for {
		select {
		case event, ok := <-fs.input:
			if !ok {
				return
			}

			// Write the pre-formatted payload directly.
			// The writer handles rotation automatically based on configuration.
			fs.writer.Write(string(event.Payload))

			fs.totalProcessed.Add(1)
			fs.lastProcessed.Store(time.Now())

		case <-ctx.Done():
			return

		case <-fs.done:
			return
		}
	}
}
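Assuming LogWisp's TOML configuration maps sink options one-to-one onto the keys that `NewFileSinkPlugin` scans into `config.FileSinkOptions`, a file sink block might look like the following. The `[[pipelines.sinks]]` table layout is illustrative; only the option keys and defaults come from the code above.

```toml
# Hypothetical pipeline sink block; key names mirror the options
# validated and defaulted in NewFileSinkPlugin.
[[pipelines.sinks]]
type = "file"

[pipelines.sinks.options]
directory = "/var/log/logwisp"  # required
name = "app"                    # required; base name for rotated files
max_size_mb = 100               # per-file rotation threshold
max_total_size_mb = 1000        # cap across all rotated files
min_disk_free_mb = 100          # stop writing below this much free space
retention_hours = 168           # 7 days
buffer_size = 1000              # input channel capacity
flush_interval_ms = 100
```

Every key is optional except `directory` and `name`; omitted or non-positive values fall back to the `DefaultFile*` constants.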
@@ -1,689 +0,0 @@
// FILE: logwisp/src/internal/sink/http.go
package sink

import (
	"bufio"
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"sync"
	"sync/atomic"
	"time"

	"logwisp/src/internal/auth"
	"logwisp/src/internal/config"
	"logwisp/src/internal/core"
	"logwisp/src/internal/format"
	"logwisp/src/internal/limit"
	"logwisp/src/internal/tls"
	"logwisp/src/internal/version"

	"github.com/lixenwraith/log"
	"github.com/lixenwraith/log/compat"
	"github.com/valyala/fasthttp"
)

// HTTPSink streams log entries via Server-Sent Events
type HTTPSink struct {
	input         chan core.LogEntry
	config        HTTPConfig
	server        *fasthttp.Server
	activeClients atomic.Int64
	mu            sync.RWMutex
	startTime     time.Time
	done          chan struct{}
	wg            sync.WaitGroup
	logger        *log.Logger
	formatter     format.Formatter

	// Security components
	authenticator *auth.Authenticator
	tlsManager    *tls.Manager
	authConfig    *config.AuthConfig

	// Path configuration
	streamPath string
	statusPath string

	// Net limiting
	netLimiter *limit.NetLimiter

	// Statistics
	totalProcessed atomic.Uint64
	lastProcessed  atomic.Value // time.Time
	authFailures   atomic.Uint64
	authSuccesses  atomic.Uint64
}

// HTTPConfig holds HTTP sink configuration
type HTTPConfig struct {
	Port       int64
	BufferSize int64
	StreamPath string
	StatusPath string
	Heartbeat  *config.HeartbeatConfig
	SSL        *config.SSLConfig
	NetLimit   *config.NetLimitConfig
}

// NewHTTPSink creates a new HTTP streaming sink
func NewHTTPSink(options map[string]any, logger *log.Logger, formatter format.Formatter) (*HTTPSink, error) {
	cfg := HTTPConfig{
		Port:       8080,
		BufferSize: 1000,
		StreamPath: "/transport",
		StatusPath: "/status",
	}

	// Extract configuration from options
	if port, ok := options["port"].(int64); ok {
		cfg.Port = port
	}
	if bufSize, ok := options["buffer_size"].(int64); ok {
		cfg.BufferSize = bufSize
	}
	if path, ok := options["stream_path"].(string); ok {
		cfg.StreamPath = path
	}
	if path, ok := options["status_path"].(string); ok {
		cfg.StatusPath = path
	}

	// Extract heartbeat config
	if hb, ok := options["heartbeat"].(map[string]any); ok {
		cfg.Heartbeat = &config.HeartbeatConfig{}
		cfg.Heartbeat.Enabled, _ = hb["enabled"].(bool)
		if interval, ok := hb["interval_seconds"].(int64); ok {
			cfg.Heartbeat.IntervalSeconds = interval
		}
		cfg.Heartbeat.IncludeTimestamp, _ = hb["include_timestamp"].(bool)
		cfg.Heartbeat.IncludeStats, _ = hb["include_stats"].(bool)
		if hbFormat, ok := hb["format"].(string); ok {
			cfg.Heartbeat.Format = hbFormat
		}
	}

	// Extract SSL config
	if ssl, ok := options["ssl"].(map[string]any); ok {
		cfg.SSL = &config.SSLConfig{}
		cfg.SSL.Enabled, _ = ssl["enabled"].(bool)
		if certFile, ok := ssl["cert_file"].(string); ok {
			cfg.SSL.CertFile = certFile
		}
		if keyFile, ok := ssl["key_file"].(string); ok {
			cfg.SSL.KeyFile = keyFile
		}
		cfg.SSL.ClientAuth, _ = ssl["client_auth"].(bool)
		if caFile, ok := ssl["client_ca_file"].(string); ok {
			cfg.SSL.ClientCAFile = caFile
		}
		cfg.SSL.VerifyClientCert, _ = ssl["verify_client_cert"].(bool)
		if minVer, ok := ssl["min_version"].(string); ok {
			cfg.SSL.MinVersion = minVer
		}
		if maxVer, ok := ssl["max_version"].(string); ok {
			cfg.SSL.MaxVersion = maxVer
		}
		if ciphers, ok := ssl["cipher_suites"].(string); ok {
			cfg.SSL.CipherSuites = ciphers
		}
	}

	// Extract net limit config
	if rl, ok := options["net_limit"].(map[string]any); ok {
		cfg.NetLimit = &config.NetLimitConfig{}
		cfg.NetLimit.Enabled, _ = rl["enabled"].(bool)
		if rps, ok := rl["requests_per_second"].(float64); ok {
			cfg.NetLimit.RequestsPerSecond = rps
		}
		if burst, ok := rl["burst_size"].(int64); ok {
			cfg.NetLimit.BurstSize = burst
		}
		if limitBy, ok := rl["limit_by"].(string); ok {
			cfg.NetLimit.LimitBy = limitBy
		}
		if respCode, ok := rl["response_code"].(int64); ok {
			cfg.NetLimit.ResponseCode = respCode
		}
		if msg, ok := rl["response_message"].(string); ok {
			cfg.NetLimit.ResponseMessage = msg
		}
		if maxPerIP, ok := rl["max_connections_per_ip"].(int64); ok {
			cfg.NetLimit.MaxConnectionsPerIP = maxPerIP
		}
		if maxTotal, ok := rl["max_total_connections"].(int64); ok {
			cfg.NetLimit.MaxTotalConnections = maxTotal
		}
		if ipWhitelist, ok := rl["ip_whitelist"].([]any); ok {
			cfg.NetLimit.IPWhitelist = make([]string, 0, len(ipWhitelist))
			for _, entry := range ipWhitelist {
				if str, ok := entry.(string); ok {
					cfg.NetLimit.IPWhitelist = append(cfg.NetLimit.IPWhitelist, str)
				}
			}
		}
		if ipBlacklist, ok := rl["ip_blacklist"].([]any); ok {
			cfg.NetLimit.IPBlacklist = make([]string, 0, len(ipBlacklist))
			for _, entry := range ipBlacklist {
				if str, ok := entry.(string); ok {
					cfg.NetLimit.IPBlacklist = append(cfg.NetLimit.IPBlacklist, str)
				}
			}
		}
	}

	h := &HTTPSink{
		input:      make(chan core.LogEntry, cfg.BufferSize),
		config:     cfg,
		startTime:  time.Now(),
		done:       make(chan struct{}),
		streamPath: cfg.StreamPath,
		statusPath: cfg.StatusPath,
		logger:     logger,
		formatter:  formatter,
	}
	h.lastProcessed.Store(time.Time{})

	// Initialize net limiter if configured
	if cfg.NetLimit != nil && cfg.NetLimit.Enabled {
		h.netLimiter = limit.NewNetLimiter(*cfg.NetLimit, logger)
	}

	return h, nil
}

func (h *HTTPSink) Input() chan<- core.LogEntry {
	return h.input
}

func (h *HTTPSink) Start(ctx context.Context) error {
	// Create fasthttp adapter for logging
	fasthttpLogger := compat.NewFastHTTPAdapter(h.logger)

	h.server = &fasthttp.Server{
		Handler:           h.requestHandler,
		DisableKeepalive:  false,
		StreamRequestBody: true,
		Logger:            fasthttpLogger,
	}

	// Configure TLS if enabled
	if h.tlsManager != nil {
		h.server.TLSConfig = h.tlsManager.GetHTTPConfig()
		h.logger.Info("msg", "TLS enabled for HTTP sink",
			"component", "http_sink",
			"port", h.config.Port)
	}

	addr := fmt.Sprintf(":%d", h.config.Port)

	// Run server in separate goroutine to avoid blocking
	errChan := make(chan error, 1)
	go func() {
		h.logger.Info("msg", "HTTP server started",
			"component", "http_sink",
			"port", h.config.Port,
			"stream_path", h.streamPath,
			"status_path", h.statusPath,
			"tls_enabled", h.tlsManager != nil)

		var err error
		if h.tlsManager != nil {
			// HTTPS server
			err = h.server.ListenAndServeTLS(addr, h.config.SSL.CertFile, h.config.SSL.KeyFile)
		} else {
			// HTTP server
			err = h.server.ListenAndServe(addr)
		}

		if err != nil {
			errChan <- err
		}
	}()

	// Monitor context for shutdown signal
	go func() {
		<-ctx.Done()
		if h.server != nil {
			shutdownCtx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
			defer cancel()
			h.server.ShutdownWithContext(shutdownCtx)
		}
	}()

	// Check if server started successfully
	select {
	case err := <-errChan:
		return err
	case <-time.After(100 * time.Millisecond):
		// Server started successfully
		return nil
	}
}

func (h *HTTPSink) Stop() {
	h.logger.Info("msg", "Stopping HTTP sink")

	// Signal all client handlers to stop
	close(h.done)

	// Shutdown HTTP server
	if h.server != nil {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		h.server.ShutdownWithContext(ctx)
	}

	// Wait for all active client handlers to finish
	h.wg.Wait()

	h.logger.Info("msg", "HTTP sink stopped")
}

func (h *HTTPSink) GetStats() SinkStats {
	lastProc, _ := h.lastProcessed.Load().(time.Time)

	var netLimitStats map[string]any
	if h.netLimiter != nil {
		netLimitStats = h.netLimiter.GetStats()
	}

	var authStats map[string]any
	if h.authenticator != nil {
		authStats = h.authenticator.GetStats()
		authStats["failures"] = h.authFailures.Load()
		authStats["successes"] = h.authSuccesses.Load()
	}

	var tlsStats map[string]any
	if h.tlsManager != nil {
		tlsStats = h.tlsManager.GetStats()
	}

	return SinkStats{
		Type:              "http",
		TotalProcessed:    h.totalProcessed.Load(),
		ActiveConnections: h.activeClients.Load(),
		StartTime:         h.startTime,
		LastProcessed:     lastProc,
		Details: map[string]any{
			"port":        h.config.Port,
			"buffer_size": h.config.BufferSize,
			"endpoints": map[string]string{
				"stream": h.streamPath,
				"status": h.statusPath,
			},
			"net_limit": netLimitStats,
			"auth":      authStats,
			"tls":       tlsStats,
		},
	}
}

func (h *HTTPSink) requestHandler(ctx *fasthttp.RequestCtx) {
	remoteAddr := ctx.RemoteAddr().String()

	// Check net limit
	if h.netLimiter != nil {
		if allowed, statusCode, message := h.netLimiter.CheckHTTP(remoteAddr); !allowed {
			ctx.SetStatusCode(int(statusCode))
			ctx.SetContentType("application/json")
			h.logger.Warn("msg", "Net limited",
				"component", "http_sink",
				"remote_addr", remoteAddr,
				"status_code", statusCode,
				"error", message)
			json.NewEncoder(ctx).Encode(map[string]any{
				"error": "Too many requests",
			})
			return
		}
	}

	path := string(ctx.Path())

	// Status endpoint doesn't require auth
	if path == h.statusPath {
		h.handleStatus(ctx)
		return
	}

	// Authenticate request
	var session *auth.Session
	if h.authenticator != nil {
		authHeader := string(ctx.Request.Header.Peek("Authorization"))
		var err error
		session, err = h.authenticator.AuthenticateHTTP(authHeader, remoteAddr)
		if err != nil {
			h.authFailures.Add(1)
			h.logger.Warn("msg", "Authentication failed",
				"component", "http_sink",
				"remote_addr", remoteAddr,
				"error", err)

			// Return 401 with WWW-Authenticate header
			ctx.SetStatusCode(fasthttp.StatusUnauthorized)
			if h.authConfig.Type == "basic" && h.authConfig.BasicAuth != nil {
				realm := h.authConfig.BasicAuth.Realm
				if realm == "" {
					realm = "Restricted"
				}
				ctx.Response.Header.Set("WWW-Authenticate", fmt.Sprintf("Basic realm=\"%s\"", realm))
			} else if h.authConfig.Type == "bearer" {
				ctx.Response.Header.Set("WWW-Authenticate", "Bearer")
			}

			ctx.SetContentType("application/json")
			json.NewEncoder(ctx).Encode(map[string]string{
				"error": "Unauthorized",
			})
			return
		}
		h.authSuccesses.Add(1)
	}

	switch path {
	case h.streamPath:
		h.handleStream(ctx, session)
	default:
		ctx.SetStatusCode(fasthttp.StatusNotFound)
		ctx.SetContentType("application/json")
		json.NewEncoder(ctx).Encode(map[string]any{
			"error": "Not Found",
		})
	}
}

func (h *HTTPSink) handleStream(ctx *fasthttp.RequestCtx, session *auth.Session) {
	// Track connection for net limiting
	remoteAddr := ctx.RemoteAddr().String()
	if h.netLimiter != nil {
		h.netLimiter.AddConnection(remoteAddr)
		defer h.netLimiter.RemoveConnection(remoteAddr)
	}

	// Set SSE headers ("text/event-stream" is the MIME type the SSE spec
	// requires; EventSource clients reject other content types)
	ctx.Response.Header.Set("Content-Type", "text/event-stream")
	ctx.Response.Header.Set("Cache-Control", "no-cache")
	ctx.Response.Header.Set("Connection", "keep-alive")
	ctx.Response.Header.Set("Access-Control-Allow-Origin", "*")
	ctx.Response.Header.Set("X-Accel-Buffering", "no")

	// Create subscription for this client
	clientChan := make(chan core.LogEntry, h.config.BufferSize)
	clientDone := make(chan struct{})

	// Subscribe to input channel
	go func() {
		defer close(clientChan)
		for {
			select {
			case entry, ok := <-h.input:
				if !ok {
					return
				}
				h.totalProcessed.Add(1)
				h.lastProcessed.Store(time.Now())

				select {
				case clientChan <- entry:
				case <-clientDone:
					return
				case <-h.done:
					return
				default:
					// Drop if client buffer full
					h.logger.Debug("msg", "Dropped entry for slow client",
						"component", "http_sink",
						"remote_addr", remoteAddr)
				}
			case <-clientDone:
				return
			case <-h.done:
				return
			}
		}
	}()

	// Define the stream writer function
	streamFunc := func(w *bufio.Writer) {
		newCount := h.activeClients.Add(1)
		h.logger.Debug("msg", "HTTP client connected",
			"remote_addr", remoteAddr,
			"username", session.Username,
			"auth_method", session.Method,
			"active_clients", newCount)

		h.wg.Add(1)
		defer func() {
			close(clientDone)
			newCount := h.activeClients.Add(-1)
			h.logger.Debug("msg", "HTTP client disconnected",
				"remote_addr", remoteAddr,
				"username", session.Username,
				"active_clients", newCount)
			h.wg.Done()
		}()

		// Send initial connected event
		clientID := fmt.Sprintf("%d", time.Now().UnixNano())
		connectionInfo := map[string]any{
			"client_id":   clientID,
			"username":    session.Username,
			"auth_method": session.Method,
			"stream_path": h.streamPath,
			"status_path": h.statusPath,
			"buffer_size": h.config.BufferSize,
			"tls":         h.tlsManager != nil,
		}
		data, _ := json.Marshal(connectionInfo)
		fmt.Fprintf(w, "event: connected\ndata: %s\n\n", data)
		w.Flush()

		var ticker *time.Ticker
		var tickerChan <-chan time.Time

		if h.config.Heartbeat != nil && h.config.Heartbeat.Enabled {
			ticker = time.NewTicker(time.Duration(h.config.Heartbeat.IntervalSeconds) * time.Second)
			tickerChan = ticker.C
			defer ticker.Stop()
		}

		for {
			select {
			case entry, ok := <-clientChan:
				if !ok {
					return
				}

				if err := h.formatEntryForSSE(w, entry); err != nil {
					h.logger.Error("msg", "Failed to format log entry",
						"component", "http_sink",
						"error", err,
						"entry_source", entry.Source)
					continue
				}

				if err := w.Flush(); err != nil {
					// Client disconnected
					return
				}

			case <-tickerChan:
				// Validate session is still active
				if h.authenticator != nil && !h.authenticator.ValidateSession(session.ID) {
					fmt.Fprintf(w, "event: disconnect\ndata: {\"reason\":\"session_expired\"}\n\n")
					w.Flush()
					return
				}

				heartbeatEntry := h.createHeartbeatEntry()
				if err := h.formatEntryForSSE(w, heartbeatEntry); err != nil {
					h.logger.Error("msg", "Failed to format heartbeat",
						"component", "http_sink",
						"error", err)
				}
				if err := w.Flush(); err != nil {
					return
				}

			case <-h.done:
				// Send final disconnect event
				fmt.Fprintf(w, "event: disconnect\ndata: {\"reason\":\"server_shutdown\"}\n\n")
				w.Flush()
				return
			}
		}
	}

	ctx.SetBodyStreamWriter(streamFunc)
}

func (h *HTTPSink) formatEntryForSSE(w *bufio.Writer, entry core.LogEntry) error {
	formatted, err := h.formatter.Format(entry)
	if err != nil {
		return err
	}

	// Remove trailing newline if present (SSE adds its own)
	formatted = bytes.TrimSuffix(formatted, []byte{'\n'})

	// Multi-line content handler: the SSE spec requires a "data: " prefix
	// on every line of an event's data field, so multi-line payloads are
	// split and each line is prefixed individually.
	lines := bytes.Split(formatted, []byte{'\n'})
	for _, line := range lines {
		fmt.Fprintf(w, "data: %s\n", line)
	}
	fmt.Fprintf(w, "\n") // Empty line to terminate event

	return nil
}

func (h *HTTPSink) createHeartbeatEntry() core.LogEntry {
	message := "heartbeat"

	// Build fields for heartbeat metadata
	fields := make(map[string]any)
	fields["type"] = "heartbeat"

	if h.config.Heartbeat != nil && h.config.Heartbeat.IncludeStats {
		fields["active_clients"] = h.activeClients.Load()
		fields["uptime_seconds"] = int(time.Since(h.startTime).Seconds())
	}

	fieldsJSON, _ := json.Marshal(fields)

	return core.LogEntry{
		Time:    time.Now(),
		Source:  "logwisp-http",
		Level:   "INFO",
		Message: message,
		Fields:  fieldsJSON,
	}
}

func (h *HTTPSink) handleStatus(ctx *fasthttp.RequestCtx) {
	ctx.SetContentType("application/json")

	var netLimitStats any
	if h.netLimiter != nil {
		netLimitStats = h.netLimiter.GetStats()
	} else {
		netLimitStats = map[string]any{
			"enabled": false,
		}
	}

	var authStats any
	if h.authenticator != nil {
		authStats = h.authenticator.GetStats()
		authStats.(map[string]any)["failures"] = h.authFailures.Load()
		authStats.(map[string]any)["successes"] = h.authSuccesses.Load()
|
|
||||||
} else {
|
|
||||||
authStats = map[string]any{
|
|
||||||
"enabled": false,
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
var tlsStats any
|
|
||||||
if h.tlsManager != nil {
|
|
||||||
tlsStats = h.tlsManager.GetStats()
|
|
||||||
} else {
|
|
||||||
tlsStats = map[string]any{
|
|
||||||
"enabled": false,
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
status := map[string]any{
|
|
||||||
"service": "LogWisp",
|
|
||||||
"version": version.Short(),
|
|
||||||
"server": map[string]any{
|
|
||||||
"type": "http",
|
|
||||||
"port": h.config.Port,
|
|
||||||
"active_clients": h.activeClients.Load(),
|
|
||||||
"buffer_size": h.config.BufferSize,
|
|
||||||
"uptime_seconds": int(time.Since(h.startTime).Seconds()),
|
|
||||||
},
|
|
||||||
"endpoints": map[string]string{
|
|
||||||
"transport": h.streamPath,
|
|
||||||
"status": h.statusPath,
|
|
||||||
},
|
|
||||||
"features": map[string]any{
|
|
||||||
"heartbeat": map[string]any{
|
|
||||||
"enabled": h.config.Heartbeat.Enabled,
|
|
||||||
"interval": h.config.Heartbeat.IntervalSeconds,
|
|
||||||
"format": h.config.Heartbeat.Format,
|
|
||||||
},
|
|
||||||
"tls": tlsStats,
|
|
||||||
"auth": authStats,
|
|
||||||
"net_limit": netLimitStats,
|
|
||||||
},
|
|
||||||
"statistics": map[string]any{
|
|
||||||
"total_processed": h.totalProcessed.Load(),
|
|
||||||
"auth_failures": h.authFailures.Load(),
|
|
||||||
"auth_successes": h.authSuccesses.Load(),
|
|
||||||
},
|
|
||||||
}
|
|
||||||
|
|
||||||
data, _ := json.Marshal(status)
|
|
||||||
ctx.SetBody(data)
|
|
||||||
}
|
|
||||||
|
|
||||||
// GetActiveConnections returns the current number of active clients
|
|
||||||
func (h *HTTPSink) GetActiveConnections() int64 {
|
|
||||||
return h.activeClients.Load()
|
|
||||||
}
|
|
||||||
|
|
||||||
// GetStreamPath returns the configured transport endpoint path
|
|
||||||
func (h *HTTPSink) GetStreamPath() string {
|
|
||||||
return h.streamPath
|
|
||||||
}
|
|
||||||
|
|
||||||
// GetStatusPath returns the configured status endpoint path
|
|
||||||
func (h *HTTPSink) GetStatusPath() string {
|
|
||||||
return h.statusPath
|
|
||||||
}
|
|
||||||
|
|
||||||
// SetAuthConfig configures http sink authentication
|
|
||||||
func (h *HTTPSink) SetAuthConfig(authCfg *config.AuthConfig) {
|
|
||||||
if authCfg == nil || authCfg.Type == "none" {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
h.authConfig = authCfg
|
|
||||||
authenticator, err := auth.New(authCfg, h.logger)
|
|
||||||
if err != nil {
|
|
||||||
h.logger.Error("msg", "Failed to initialize authenticator for HTTP sink",
|
|
||||||
"component", "http_sink",
|
|
||||||
"error", err)
|
|
||||||
// Continue without auth
|
|
||||||
return
|
|
||||||
}
|
|
||||||
h.authenticator = authenticator
|
|
||||||
|
|
||||||
h.logger.Info("msg", "Authentication configured for HTTP sink",
|
|
||||||
"component", "http_sink",
|
|
||||||
"auth_type", authCfg.Type)
|
|
||||||
}
|
|
||||||
552 src/internal/sink/http/http.go (Normal file)
@@ -0,0 +1,552 @@
package http

import (
	"bufio"
	"context"
	"encoding/json"
	"fmt"
	"net"
	"sync"
	"sync/atomic"
	"time"

	"logwisp/src/internal/config"
	"logwisp/src/internal/core"
	"logwisp/src/internal/plugin"
	"logwisp/src/internal/session"
	"logwisp/src/internal/sink"
	"logwisp/src/internal/version"

	lconfig "github.com/lixenwraith/config"
	"github.com/lixenwraith/log"
	"github.com/lixenwraith/log/compat"
	"github.com/valyala/fasthttp"
)

func init() {
	if err := plugin.RegisterSink("http", NewHTTPSinkPlugin); err != nil {
		panic(fmt.Sprintf("failed to register http sink: %v", err))
	}
}

// HTTPSink streams log entries via Server-Sent Events (SSE)
type HTTPSink struct {
	// Plugin identity and session management
	id    string
	proxy *session.Proxy

	// Configuration
	config *config.HTTPSinkOptions

	// Network
	server *fasthttp.Server

	// Application
	input  chan core.TransportEvent
	logger *log.Logger

	// Runtime
	done      chan struct{}
	wg        sync.WaitGroup
	startTime time.Time

	// Broker
	clients      map[uint64]chan []byte
	clientsMu    sync.RWMutex
	unregister   chan uint64
	nextClientID atomic.Uint64

	// Client session tracking
	clientSessions map[uint64]string // clientID -> sessionID
	sessionsMu     sync.RWMutex

	// Statistics
	activeClients  atomic.Int64
	totalProcessed atomic.Uint64
	lastProcessed  atomic.Value // time.Time
}

const (
	// Server lifecycle
	HttpServerStartTimeout    = 100 * time.Millisecond
	HttpServerShutdownTimeout = 2 * time.Second

	// Defaults
	DefaultHTTPHost       = "0.0.0.0"
	DefaultHTTPBufferSize = 1000
	DefaultHTTPStreamPath = "/stream"
	DefaultHTTPStatusPath = "/status"
	HTTPMaxPort           = 65535
)

// NewHTTPSinkPlugin creates an HTTP sink through plugin factory
func NewHTTPSinkPlugin(
	id string,
	configMap map[string]any,
	logger *log.Logger,
	proxy *session.Proxy,
) (sink.Sink, error) {
	opts := &config.HTTPSinkOptions{
		Host:         DefaultHTTPHost,
		Port:         0,
		WriteTimeout: 0, // SSE indefinite streaming
	}

	if err := lconfig.ScanMap(configMap, opts); err != nil {
		return nil, fmt.Errorf("failed to parse config: %w", err)
	}

	// Validate
	if opts.Port <= 0 || opts.Port > HTTPMaxPort {
		return nil, fmt.Errorf("port must be between 1 and %d", HTTPMaxPort)
	}

	// Defaults
	if opts.BufferSize <= 0 {
		opts.BufferSize = DefaultHTTPBufferSize
	}
	if opts.StreamPath == "" {
		opts.StreamPath = DefaultHTTPStreamPath
	}
	if opts.StatusPath == "" {
		opts.StatusPath = DefaultHTTPStatusPath
	}

	h := &HTTPSink{
		id:             id,
		proxy:          proxy,
		config:         opts,
		input:          make(chan core.TransportEvent, opts.BufferSize),
		done:           make(chan struct{}),
		logger:         logger,
		clients:        make(map[uint64]chan []byte),
		unregister:     make(chan uint64),
		clientSessions: make(map[uint64]string),
	}
	h.lastProcessed.Store(time.Time{})

	logger.Info("msg", "HTTP sink initialized",
		"component", "http_sink",
		"instance_id", id,
		"host", opts.Host,
		"port", opts.Port,
		"stream_path", opts.StreamPath,
		"status_path", opts.StatusPath)

	return h, nil
}

// Capabilities returns supported capabilities
func (h *HTTPSink) Capabilities() []core.Capability {
	return []core.Capability{
		core.CapSessionAware,
		core.CapMultiSession,
	}
}

// Input returns the channel for sending transport events
func (h *HTTPSink) Input() chan<- core.TransportEvent {
	return h.input
}

// Start initializes the HTTP server and begins the broker loop
func (h *HTTPSink) Start(ctx context.Context) error {
	h.startTime = time.Now()

	// Start central broker goroutine
	h.wg.Add(1)
	go h.brokerLoop(ctx)

	fasthttpLogger := compat.NewFastHTTPAdapter(h.logger)

	h.server = &fasthttp.Server{
		Name:              fmt.Sprintf("LogWisp/%s", version.Short()),
		Handler:           h.requestHandler,
		DisableKeepalive:  false,
		StreamRequestBody: true,
		Logger:            fasthttpLogger,
		WriteTimeout:      time.Duration(h.config.WriteTimeout) * time.Millisecond,
	}

	addr := fmt.Sprintf("%s:%d", h.config.Host, h.config.Port)

	errChan := make(chan error, 1)
	go func() {
		h.logger.Info("msg", "HTTP server starting",
			"component", "http_sink",
			"instance_id", h.id,
			"address", addr)

		err := h.server.ListenAndServe(addr)
		if err != nil {
			errChan <- err
		}
	}()

	// Monitor context for shutdown
	go func() {
		<-ctx.Done()
		if h.server != nil {
			shutdownCtx, cancel := context.WithTimeout(context.Background(), HttpServerShutdownTimeout)
			defer cancel()
			h.server.ShutdownWithContext(shutdownCtx)
		}
	}()

	// Check if server started
	select {
	case err := <-errChan:
		return err
	case <-time.After(HttpServerStartTimeout):
		h.logger.Info("msg", "HTTP server started",
			"component", "http_sink",
			"instance_id", h.id,
			"host", h.config.Host,
			"port", h.config.Port)
		return nil
	}
}

// Stop gracefully shuts down the HTTP server and all client connections
func (h *HTTPSink) Stop() {
	h.logger.Info("msg", "Stopping HTTP sink",
		"component", "http_sink",
		"instance_id", h.id)

	close(h.done)

	if h.server != nil {
		ctx, cancel := context.WithTimeout(context.Background(), HttpServerShutdownTimeout)
		defer cancel()
		h.server.ShutdownWithContext(ctx)
	}

	h.wg.Wait()

	close(h.unregister)

	h.clientsMu.Lock()
	for _, ch := range h.clients {
		close(ch)
	}
	h.clients = make(map[uint64]chan []byte)
	h.clientsMu.Unlock()

	h.logger.Info("msg", "HTTP sink stopped",
		"component", "http_sink",
		"instance_id", h.id,
		"total_processed", h.totalProcessed.Load())
}

// GetStats returns sink statistics
func (h *HTTPSink) GetStats() sink.SinkStats {
	lastProc, _ := h.lastProcessed.Load().(time.Time)

	return sink.SinkStats{
		ID:                h.id,
		Type:              "http",
		TotalProcessed:    h.totalProcessed.Load(),
		ActiveConnections: h.activeClients.Load(),
		StartTime:         h.startTime,
		LastProcessed:     lastProc,
		Details: map[string]any{
			"host":        h.config.Host,
			"port":        h.config.Port,
			"buffer_size": h.config.BufferSize,
			"endpoints": map[string]string{
				"stream": h.config.StreamPath,
				"status": h.config.StatusPath,
			},
		},
	}
}

// brokerLoop manages client connections and broadcasts transport events
func (h *HTTPSink) brokerLoop(ctx context.Context) {
	defer h.wg.Done()

	for {
		select {
		case <-ctx.Done():
			h.logger.Debug("msg", "Broker loop stopping due to context cancellation",
				"component", "http_sink")
			return

		case <-h.done:
			h.logger.Debug("msg", "Broker loop stopping due to shutdown signal",
				"component", "http_sink")
			return

		case clientID := <-h.unregister:
			h.clientsMu.Lock()
			if clientChan, exists := h.clients[clientID]; exists {
				delete(h.clients, clientID)
				close(clientChan)
				h.logger.Debug("msg", "Unregistered client",
					"component", "http_sink",
					"client_id", clientID)
			}
			h.clientsMu.Unlock()

			h.sessionsMu.Lock()
			delete(h.clientSessions, clientID)
			h.sessionsMu.Unlock()

		case event, ok := <-h.input:
			if !ok {
				h.logger.Debug("msg", "Input channel closed, broker stopping",
					"component", "http_sink")
				return
			}

			h.totalProcessed.Add(1)
			h.lastProcessed.Store(time.Now())

			h.clientsMu.RLock()
			clientCount := len(h.clients)
			if clientCount > 0 {
				var staleClients []uint64

				for id, ch := range h.clients {
					h.sessionsMu.RLock()
					sessionID, hasSession := h.clientSessions[id]
					h.sessionsMu.RUnlock()

					if !hasSession {
						staleClients = append(staleClients, id)
						continue
					}

					// Check session still exists via proxy
					if _, exists := h.proxy.GetSession(sessionID); !exists {
						staleClients = append(staleClients, id)
						continue
					}

					select {
					case ch <- event.Payload:
						h.proxy.UpdateActivity(sessionID)
					default:
						h.logger.Debug("msg", "Dropped event for slow client",
							"component", "http_sink",
							"client_id", id)
					}
				}

				if len(staleClients) > 0 {
					go func() {
						for _, clientID := range staleClients {
							select {
							case h.unregister <- clientID:
							case <-h.done:
								return
							}
						}
					}()
				}
			}
			h.clientsMu.RUnlock()
		}
	}
}

// requestHandler is the main entry point for all incoming HTTP requests
func (h *HTTPSink) requestHandler(ctx *fasthttp.RequestCtx) {
	// IPv4-only enforcement - silent drop IPv6
	remoteAddr := ctx.RemoteAddr()
	if tcpAddr, ok := remoteAddr.(*net.TCPAddr); ok {
		if tcpAddr.IP.To4() == nil {
			ctx.SetConnectionClose()
			return
		}
	}

	path := string(ctx.Path())

	switch path {
	case h.config.StatusPath:
		h.handleStatus(ctx)
	case h.config.StreamPath:
		h.handleStream(ctx)
	default:
		ctx.SetStatusCode(fasthttp.StatusNotFound)
		ctx.SetContentType("application/json")
		json.NewEncoder(ctx).Encode(map[string]any{
			"error": "Not Found",
		})
	}
}

// handleStream manages a client's Server-Sent Events (SSE) stream
func (h *HTTPSink) handleStream(ctx *fasthttp.RequestCtx) {
	remoteAddrStr := ctx.RemoteAddr().String()

	// Create session via proxy
	sess := h.proxy.CreateSession(remoteAddrStr, map[string]any{
		"type": "http_client",
	})

	// Set SSE headers
	ctx.Response.Header.Set("Content-Type", "text/event-stream")
	ctx.Response.Header.Set("Cache-Control", "no-cache")
	ctx.Response.Header.Set("Connection", "keep-alive")
	ctx.Response.Header.Set("Access-Control-Allow-Origin", "*")
	ctx.Response.Header.Set("X-Accel-Buffering", "no")

	// Register client with broker
	clientID := h.nextClientID.Add(1)
	clientChan := make(chan []byte, h.config.BufferSize)

	h.clientsMu.Lock()
	h.clients[clientID] = clientChan
	h.clientsMu.Unlock()

	h.sessionsMu.Lock()
	h.clientSessions[clientID] = sess.ID
	h.sessionsMu.Unlock()

	streamFunc := func(w *bufio.Writer) {
		connectCount := h.activeClients.Add(1)
		h.logger.Debug("msg", "HTTP client connected",
			"component", "http_sink",
			"remote_addr", remoteAddrStr,
			"session_id", sess.ID,
			"client_id", clientID,
			"active_clients", connectCount)

		h.wg.Add(1)

		defer func() {
			disconnectCount := h.activeClients.Add(-1)
			h.logger.Debug("msg", "HTTP client disconnected",
				"component", "http_sink",
				"remote_addr", remoteAddrStr,
				"session_id", sess.ID,
				"client_id", clientID,
				"active_clients", disconnectCount)

			select {
			case h.unregister <- clientID:
			case <-h.done:
			}

			h.proxy.RemoveSession(sess.ID)
			h.wg.Done()
		}()

		// Send connected event with metadata
		connectionInfo := map[string]any{
			"client_id":   fmt.Sprintf("%d", clientID),
			"session_id":  sess.ID,
			"instance_id": h.id,
			"stream_path": h.config.StreamPath,
			"status_path": h.config.StatusPath,
			"buffer_size": h.config.BufferSize,
		}
		data, _ := json.Marshal(connectionInfo)
		fmt.Fprintf(w, "event: connected\ndata: %s\n\n", data)
		if err := w.Flush(); err != nil {
			return
		}

		for {
			select {
			case payload, ok := <-clientChan:
				if !ok {
					return
				}

				if err := h.writeSSE(w, payload); err != nil {
					return
				}

				if err := w.Flush(); err != nil {
					return
				}

				h.proxy.UpdateActivity(sess.ID)

			case <-h.done:
				fmt.Fprintf(w, "event: disconnect\ndata: {\"reason\":\"server_shutdown\"}\n\n")
				w.Flush()
				return
			}
		}
	}

	ctx.SetBodyStreamWriter(streamFunc)
}

// handleStatus provides a JSON status report
func (h *HTTPSink) handleStatus(ctx *fasthttp.RequestCtx) {
	ctx.SetContentType("application/json")

	status := map[string]any{
		"service":     "LogWisp",
		"version":     version.Short(),
		"instance_id": h.id,
		"server": map[string]any{
			"type":           "http",
			"host":           h.config.Host,
			"port":           h.config.Port,
			"active_clients": h.activeClients.Load(),
			"buffer_size":    h.config.BufferSize,
			"uptime_seconds": int(time.Since(h.startTime).Seconds()),
		},
		"endpoints": map[string]string{
			"stream": h.config.StreamPath,
			"status": h.config.StatusPath,
		},
		"statistics": map[string]any{
			"total_processed": h.totalProcessed.Load(),
		},
	}

	data, _ := json.Marshal(status)
	ctx.SetBody(data)
}

// writeSSE formats payload into SSE data format
func (h *HTTPSink) writeSSE(w *bufio.Writer, payload []byte) error {
	// Handle multi-line payloads per W3C SSE spec
	lines := splitLines(payload)
	for _, line := range lines {
		if _, err := fmt.Fprintf(w, "data: %s\n", line); err != nil {
			return err
		}
	}
	// Empty line terminates event
	if _, err := w.WriteString("\n"); err != nil {
		return err
	}
	return nil
}

// splitLines splits payload by newlines, handling different line endings
func splitLines(data []byte) [][]byte {
	if len(data) == 0 {
		return nil
	}

	// Trim trailing newline if present
	if data[len(data)-1] == '\n' {
		data = data[:len(data)-1]
	}

	var lines [][]byte
	start := 0
	for i := 0; i < len(data); i++ {
		if data[i] == '\n' {
			lines = append(lines, data[start:i])
			start = i + 1
		}
	}
	if start < len(data) {
		lines = append(lines, data[start:])
	}

	if len(lines) == 0 {
		return [][]byte{data}
	}
	return lines
}
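The `splitLines` helper above drives the new SSE framing, and its edge cases (trailing newline, no newline, empty payload) are easy to exercise. This standalone sketch copies the function body verbatim into a `main` package:

```go
package main

import "fmt"

// splitLines is copied from the sink above: trailing newline trimmed,
// payload split on '\n'; an empty payload yields nil.
func splitLines(data []byte) [][]byte {
	if len(data) == 0 {
		return nil
	}
	if data[len(data)-1] == '\n' {
		data = data[:len(data)-1]
	}
	var lines [][]byte
	start := 0
	for i := 0; i < len(data); i++ {
		if data[i] == '\n' {
			lines = append(lines, data[start:i])
			start = i + 1
		}
	}
	if start < len(data) {
		lines = append(lines, data[start:])
	}
	if len(lines) == 0 {
		return [][]byte{data}
	}
	return lines
}

func main() {
	for _, in := range []string{"a\nb\n", "single", ""} {
		fmt.Printf("%q -> %d line(s)\n", in, len(splitLines([]byte(in))))
	}
}
```

Note the last branch: a payload of just `"\n"` is trimmed to empty and returned as one empty line, so the event still carries a single `data:` line rather than vanishing.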
@@ -1,456 +0,0 @@
// FILE: logwisp/src/internal/sink/http_client.go
package sink

import (
	"bytes"
	"context"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/url"
	"os"
	"strings"
	"sync"
	"sync/atomic"
	"time"

	"logwisp/src/internal/core"
	"logwisp/src/internal/format"

	"github.com/lixenwraith/log"
	"github.com/valyala/fasthttp"
)

// HTTPClientSink forwards log entries to a remote HTTP endpoint
type HTTPClientSink struct {
	input     chan core.LogEntry
	config    HTTPClientConfig
	client    *fasthttp.Client
	batch     []core.LogEntry
	batchMu   sync.Mutex
	done      chan struct{}
	wg        sync.WaitGroup
	startTime time.Time
	logger    *log.Logger
	formatter format.Formatter

	// Statistics
	totalProcessed    atomic.Uint64
	totalBatches      atomic.Uint64
	failedBatches     atomic.Uint64
	lastProcessed     atomic.Value // time.Time
	lastBatchSent     atomic.Value // time.Time
	activeConnections atomic.Int64
}

// HTTPClientConfig holds HTTP client sink configuration
type HTTPClientConfig struct {
	URL        string
	BufferSize int64
	BatchSize  int64
	BatchDelay time.Duration
	Timeout    time.Duration
	Headers    map[string]string

	// Retry configuration
	MaxRetries   int64
	RetryDelay   time.Duration
	RetryBackoff float64 // Multiplier for exponential backoff

	// TLS configuration
	InsecureSkipVerify bool
	CAFile             string
}

// NewHTTPClientSink creates a new HTTP client sink
func NewHTTPClientSink(options map[string]any, logger *log.Logger, formatter format.Formatter) (*HTTPClientSink, error) {
	cfg := HTTPClientConfig{
		BufferSize:   int64(1000),
		BatchSize:    int64(100),
		BatchDelay:   time.Second,
		Timeout:      30 * time.Second,
		MaxRetries:   int64(3),
		RetryDelay:   time.Second,
		RetryBackoff: float64(2.0),
		Headers:      make(map[string]string),
	}

	// Extract URL
	urlStr, ok := options["url"].(string)
	if !ok || urlStr == "" {
		return nil, fmt.Errorf("http_client sink requires 'url' option")
	}

	// Validate URL
	parsedURL, err := url.Parse(urlStr)
	if err != nil {
		return nil, fmt.Errorf("invalid URL: %w", err)
	}
	if parsedURL.Scheme != "http" && parsedURL.Scheme != "https" {
		return nil, fmt.Errorf("URL must use http or https scheme")
	}
	cfg.URL = urlStr

	// Extract other options
	if bufSize, ok := options["buffer_size"].(int64); ok && bufSize > 0 {
		cfg.BufferSize = bufSize
	}
	if batchSize, ok := options["batch_size"].(int64); ok && batchSize > 0 {
		cfg.BatchSize = batchSize
	}
	if delayMs, ok := options["batch_delay_ms"].(int64); ok && delayMs > 0 {
		cfg.BatchDelay = time.Duration(delayMs) * time.Millisecond
	}
	if timeoutSec, ok := options["timeout_seconds"].(int64); ok && timeoutSec > 0 {
		cfg.Timeout = time.Duration(timeoutSec) * time.Second
	}
	if maxRetries, ok := options["max_retries"].(int64); ok && maxRetries >= 0 {
		cfg.MaxRetries = maxRetries
	}
	if retryDelayMs, ok := options["retry_delay_ms"].(int64); ok && retryDelayMs > 0 {
		cfg.RetryDelay = time.Duration(retryDelayMs) * time.Millisecond
	}
	if backoff, ok := options["retry_backoff"].(float64); ok && backoff >= 1.0 {
		cfg.RetryBackoff = backoff
	}
	if insecure, ok := options["insecure_skip_verify"].(bool); ok {
		cfg.InsecureSkipVerify = insecure
	}

	// Extract headers
	if headers, ok := options["headers"].(map[string]any); ok {
		for k, v := range headers {
			if strVal, ok := v.(string); ok {
				cfg.Headers[k] = strVal
			}
		}
	}

	// Set default Content-Type if not specified
	if _, exists := cfg.Headers["Content-Type"]; !exists {
		cfg.Headers["Content-Type"] = "application/json"
	}

	// Extract TLS options
	if caFile, ok := options["ca_file"].(string); ok && caFile != "" {
		cfg.CAFile = caFile
	}

	h := &HTTPClientSink{
		input:     make(chan core.LogEntry, cfg.BufferSize),
		config:    cfg,
		batch:     make([]core.LogEntry, 0, cfg.BatchSize),
		done:      make(chan struct{}),
		startTime: time.Now(),
		logger:    logger,
		formatter: formatter,
	}
	h.lastProcessed.Store(time.Time{})
	h.lastBatchSent.Store(time.Time{})

	// Create fasthttp client
	h.client = &fasthttp.Client{
		MaxConnsPerHost:               10,
		MaxIdleConnDuration:           10 * time.Second,
		ReadTimeout:                   cfg.Timeout,
		WriteTimeout:                  cfg.Timeout,
		DisableHeaderNamesNormalizing: true,
	}

	// Configure TLS if using HTTPS
	if strings.HasPrefix(cfg.URL, "https://") {
		tlsConfig := &tls.Config{
			InsecureSkipVerify: cfg.InsecureSkipVerify,
		}

		// Load custom CA if provided
		if cfg.CAFile != "" {
			caCert, err := os.ReadFile(cfg.CAFile)
			if err != nil {
				return nil, fmt.Errorf("failed to read CA file: %w", err)
			}
			caCertPool := x509.NewCertPool()
			if !caCertPool.AppendCertsFromPEM(caCert) {
				return nil, fmt.Errorf("failed to parse CA certificate")
			}
			tlsConfig.RootCAs = caCertPool
		}

		// Set TLS config directly on the client
		h.client.TLSConfig = tlsConfig
	}

	return h, nil
}

func (h *HTTPClientSink) Input() chan<- core.LogEntry {
	return h.input
}

func (h *HTTPClientSink) Start(ctx context.Context) error {
	h.wg.Add(2)
	go h.processLoop(ctx)
	go h.batchTimer(ctx)

	h.logger.Info("msg", "HTTP client sink started",
		"component", "http_client_sink",
		"url", h.config.URL,
		"batch_size", h.config.BatchSize,
		"batch_delay", h.config.BatchDelay)
	return nil
}

func (h *HTTPClientSink) Stop() {
	h.logger.Info("msg", "Stopping HTTP client sink")
	close(h.done)
	h.wg.Wait()

	// Send any remaining batched entries
	h.batchMu.Lock()
	if len(h.batch) > 0 {
		batch := h.batch
		h.batch = make([]core.LogEntry, 0, h.config.BatchSize)
		h.batchMu.Unlock()
		h.sendBatch(batch)
	} else {
		h.batchMu.Unlock()
	}

	h.logger.Info("msg", "HTTP client sink stopped",
		"total_processed", h.totalProcessed.Load(),
		"total_batches", h.totalBatches.Load(),
		"failed_batches", h.failedBatches.Load())
}

func (h *HTTPClientSink) GetStats() SinkStats {
	lastProc, _ := h.lastProcessed.Load().(time.Time)
	lastBatch, _ := h.lastBatchSent.Load().(time.Time)

	h.batchMu.Lock()
	pendingEntries := len(h.batch)
	h.batchMu.Unlock()

	return SinkStats{
		Type:              "http_client",
		TotalProcessed:    h.totalProcessed.Load(),
		ActiveConnections: h.activeConnections.Load(),
		StartTime:         h.startTime,
		LastProcessed:     lastProc,
		Details: map[string]any{
			"url":             h.config.URL,
			"batch_size":      h.config.BatchSize,
			"pending_entries": pendingEntries,
			"total_batches":   h.totalBatches.Load(),
			"failed_batches":  h.failedBatches.Load(),
			"last_batch_sent": lastBatch,
		},
	}
}

func (h *HTTPClientSink) processLoop(ctx context.Context) {
	defer h.wg.Done()

	for {
		select {
		case entry, ok := <-h.input:
			if !ok {
				return
			}

			h.totalProcessed.Add(1)
			h.lastProcessed.Store(time.Now())

			// Add to batch
			h.batchMu.Lock()
			h.batch = append(h.batch, entry)

			// Check if batch is full
			if int64(len(h.batch)) >= h.config.BatchSize {
				batch := h.batch
				h.batch = make([]core.LogEntry, 0, h.config.BatchSize)
				h.batchMu.Unlock()

				// Send batch in background
				go h.sendBatch(batch)
			} else {
				h.batchMu.Unlock()
			}

		case <-ctx.Done():
			return
		case <-h.done:
			return
		}
	}
}

func (h *HTTPClientSink) batchTimer(ctx context.Context) {
	defer h.wg.Done()

	ticker := time.NewTicker(h.config.BatchDelay)
	defer ticker.Stop()

	for {
		select {
		case <-ticker.C:
			h.batchMu.Lock()
			if len(h.batch) > 0 {
				batch := h.batch
				h.batch = make([]core.LogEntry, 0, h.config.BatchSize)
				h.batchMu.Unlock()

				// Send batch in background
				go h.sendBatch(batch)
			} else {
				h.batchMu.Unlock()
			}

		case <-ctx.Done():
			return
		case <-h.done:
			return
		}
	}
}

func (h *HTTPClientSink) sendBatch(batch []core.LogEntry) {
	h.activeConnections.Add(1)
	defer h.activeConnections.Add(-1)

	h.totalBatches.Add(1)
	h.lastBatchSent.Store(time.Now())

	// Special handling for JSON formatter with batching
	var body []byte
	var err error

	if jsonFormatter, ok := h.formatter.(*format.JSONFormatter); ok {
		// Use the batch formatting method
		body, err = jsonFormatter.FormatBatch(batch)
	} else {
		// For non-JSON formatters, format each entry and combine
|
|
||||||
var formatted [][]byte
|
|
||||||
for _, entry := range batch {
|
|
||||||
entryBytes, err := h.formatter.Format(entry)
|
|
||||||
if err != nil {
|
|
||||||
h.logger.Error("msg", "Failed to format entry in batch",
|
|
||||||
"component", "http_client_sink",
|
|
||||||
"error", err)
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
formatted = append(formatted, entryBytes)
|
|
||||||
}
|
|
||||||
|
|
||||||
// For raw/text formats, join with newlines
|
|
||||||
body = bytes.Join(formatted, nil)
|
|
||||||
}
|
|
||||||
|
|
||||||
if err != nil {
|
|
||||||
h.logger.Error("msg", "Failed to format batch",
|
|
||||||
"component", "http_client_sink",
|
|
||||||
"error", err,
|
|
||||||
"batch_size", len(batch))
|
|
||||||
h.failedBatches.Add(1)
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
// Retry logic
|
|
||||||
var lastErr error
|
|
||||||
retryDelay := h.config.RetryDelay
|
|
||||||
|
|
||||||
for attempt := int64(0); attempt <= h.config.MaxRetries; attempt++ {
|
|
||||||
if attempt > 0 {
|
|
||||||
// Wait before retry
|
|
||||||
time.Sleep(retryDelay)
|
|
||||||
|
|
||||||
// Calculate new delay with overflow protection
|
|
||||||
newDelay := time.Duration(float64(retryDelay) * h.config.RetryBackoff)
|
|
||||||
|
|
||||||
// Cap at maximum to prevent integer overflow
|
|
||||||
if newDelay > h.config.Timeout || newDelay < retryDelay {
|
|
||||||
// Either exceeded max or overflowed (negative/wrapped)
|
|
||||||
retryDelay = h.config.Timeout
|
|
||||||
} else {
|
|
||||||
retryDelay = newDelay
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Acquire resources inside loop, release immediately after use
|
|
||||||
req := fasthttp.AcquireRequest()
|
|
||||||
resp := fasthttp.AcquireResponse()
|
|
||||||
|
|
||||||
req.SetRequestURI(h.config.URL)
|
|
||||||
req.Header.SetMethod("POST")
|
|
||||||
req.SetBody(body)
|
|
||||||
|
|
||||||
// Set headers
|
|
||||||
for k, v := range h.config.Headers {
|
|
||||||
req.Header.Set(k, v)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Send request
|
|
||||||
err := h.client.DoTimeout(req, resp, h.config.Timeout)
|
|
||||||
|
|
||||||
// Capture response before releasing
|
|
||||||
statusCode := resp.StatusCode()
|
|
||||||
var responseBody []byte
|
|
||||||
if len(resp.Body()) > 0 {
|
|
||||||
responseBody = make([]byte, len(resp.Body()))
|
|
||||||
copy(responseBody, resp.Body())
|
|
||||||
}
|
|
||||||
|
|
||||||
// Release immediately, not deferred
|
|
||||||
fasthttp.ReleaseRequest(req)
|
|
||||||
fasthttp.ReleaseResponse(resp)
|
|
||||||
|
|
||||||
// Handle errors
|
|
||||||
if err != nil {
|
|
||||||
lastErr = fmt.Errorf("request failed: %w", err)
|
|
||||||
h.logger.Warn("msg", "HTTP request failed",
|
|
||||||
"component", "http_client_sink",
|
|
||||||
"attempt", attempt+1,
|
|
||||||
"max_retries", h.config.MaxRetries,
|
|
||||||
"error", err)
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
|
|
||||||
// Check response status
|
|
||||||
if statusCode >= 200 && statusCode < 300 {
|
|
||||||
// Success
|
|
||||||
h.logger.Debug("msg", "Batch sent successfully",
|
|
||||||
"component", "http_client_sink",
|
|
||||||
"batch_size", len(batch),
|
|
||||||
"status_code", statusCode,
|
|
||||||
"attempt", attempt+1)
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
// Non-2xx status
|
|
||||||
lastErr = fmt.Errorf("server returned status %d: %s", statusCode, responseBody)
|
|
||||||
|
|
||||||
// Don't retry on 4xx errors (client errors)
|
|
||||||
if statusCode >= 400 && statusCode < 500 {
|
|
||||||
h.logger.Error("msg", "Batch rejected by server",
|
|
||||||
"component", "http_client_sink",
|
|
||||||
"status_code", statusCode,
|
|
||||||
"response", string(responseBody),
|
|
||||||
"batch_size", len(batch))
|
|
||||||
h.failedBatches.Add(1)
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
h.logger.Warn("msg", "Server returned error status",
|
|
||||||
"component", "http_client_sink",
|
|
||||||
"attempt", attempt+1,
|
|
||||||
"status_code", statusCode,
|
|
||||||
"response", string(responseBody))
|
|
||||||
}
|
|
||||||
|
|
||||||
// All retries exhausted
|
|
||||||
h.logger.Error("msg", "Failed to send batch after all retries",
|
|
||||||
"component", "http_client_sink",
|
|
||||||
"batch_size", len(batch),
|
|
||||||
"retries", h.config.MaxRetries,
|
|
||||||
"last_error", lastErr)
|
|
||||||
h.failedBatches.Add(1)
|
|
||||||
}
|
|
||||||
146 src/internal/sink/null/null.go Normal file
@@ -0,0 +1,146 @@
package null

import (
	"context"
	"fmt"
	"sync/atomic"
	"time"

	"logwisp/src/internal/core"
	"logwisp/src/internal/plugin"
	"logwisp/src/internal/session"
	"logwisp/src/internal/sink"

	"github.com/lixenwraith/log"
)

// init registers the component in plugin factory
func init() {
	if err := plugin.RegisterSink("null", NewNullSinkPlugin); err != nil {
		panic(fmt.Sprintf("failed to register null sink: %v", err))
	}
}

// NullSink discards all received transport events, used for testing
type NullSink struct {
	// Plugin identity and session management
	id      string
	proxy   *session.Proxy
	session *session.Session

	// Application
	input  chan core.TransportEvent
	logger *log.Logger

	// Runtime
	done      chan struct{}
	startTime time.Time

	// Statistics
	totalReceived atomic.Uint64
	totalBytes    atomic.Uint64
	lastReceived  atomic.Value // time.Time
}

// NewNullSinkPlugin creates a null sink through plugin factory
func NewNullSinkPlugin(
	id string,
	configMap map[string]any,
	logger *log.Logger,
	proxy *session.Proxy,
) (sink.Sink, error) {
	ns := &NullSink{
		id:     id,
		proxy:  proxy,
		input:  make(chan core.TransportEvent, 1000),
		done:   make(chan struct{}),
		logger: logger,
	}
	ns.lastReceived.Store(time.Time{})

	// Create session for null sink
	ns.session = proxy.CreateSession(
		"null://devnull",
		map[string]any{
			"instance_id": id,
			"type":        "null",
		},
	)

	logger.Debug("msg", "Null sink initialized",
		"component", "null_sink",
		"instance_id", id)

	return ns, nil
}

// Capabilities returns supported capabilities
func (ns *NullSink) Capabilities() []core.Capability {
	return []core.Capability{
		core.CapSessionAware,
	}
}

// Input returns the channel for sending transport events
func (ns *NullSink) Input() chan<- core.TransportEvent {
	return ns.input
}

// Start begins the processing loop
func (ns *NullSink) Start(ctx context.Context) error {
	ns.startTime = time.Now()
	go ns.processLoop(ctx)
	ns.logger.Debug("msg", "Null sink started",
		"component", "null_sink",
		"instance_id", ns.id)
	return nil
}

// Stop gracefully shuts down the sink
func (ns *NullSink) Stop() {
	if ns.session != nil {
		ns.proxy.RemoveSession(ns.session.ID)
	}
	close(ns.done)
	ns.logger.Debug("msg", "Null sink stopped",
		"instance_id", ns.id,
		"total_received", ns.totalReceived.Load())
}

// GetStats returns sink statistics
func (ns *NullSink) GetStats() sink.SinkStats {
	lastRcv, _ := ns.lastReceived.Load().(time.Time)

	return sink.SinkStats{
		ID:             ns.id,
		Type:           "null",
		TotalProcessed: ns.totalReceived.Load(),
		StartTime:      ns.startTime,
		LastProcessed:  lastRcv,
		Details: map[string]any{
			"total_bytes": ns.totalBytes.Load(),
		},
	}
}

// processLoop reads transport events and discards them
func (ns *NullSink) processLoop(ctx context.Context) {
	for {
		select {
		case event, ok := <-ns.input:
			if !ok {
				return
			}
			// Discard the event, only update stats
			ns.totalReceived.Add(1)
			ns.totalBytes.Add(uint64(len(event.Payload)))
			ns.lastReceived.Store(time.Now())

		case <-ctx.Done():
			return
		case <-ns.done:
			return
		}
	}
}
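The null sink's only job is bookkeeping: count events and payload bytes atomically while dropping the data, so many producer goroutines can feed it without locks. That counting pattern can be exercised in isolation (the `discardCounter` type here is a hypothetical stand-in, not part of LogWisp):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// discardCounter mirrors the null sink's statistics: received events and
// payload bytes are tracked with atomics, so concurrent producers need
// no mutex.
type discardCounter struct {
	totalReceived atomic.Uint64
	totalBytes    atomic.Uint64
}

func (d *discardCounter) consume(payload []byte) {
	d.totalReceived.Add(1)
	d.totalBytes.Add(uint64(len(payload)))
	// payload is intentionally dropped
}

func main() {
	var d discardCounter
	var wg sync.WaitGroup
	// Four concurrent producers, 100 ten-byte payloads each
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 100; j++ {
				d.consume([]byte("0123456789"))
			}
		}()
	}
	wg.Wait()
	fmt.Println(d.totalReceived.Load(), d.totalBytes.Load())
}
```

Because both counters are atomics, the totals come out exact (400 events, 4000 bytes) regardless of goroutine interleaving.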
@@ -1,31 +1,33 @@
-// FILE: logwisp/src/internal/sink/sink.go
 package sink
 
 import (
 	"context"
 	"time"
 
-	"logwisp/src/internal/config"
 	"logwisp/src/internal/core"
 )
 
-// Sink represents an output destination for log entries
+// Sink represents an output data stream.
 type Sink interface {
-	// Input returns the channel for sending log entries to this sink
-	Input() chan<- core.LogEntry
+	// Capabilities returns a slice of supported Source capabilities
+	Capabilities() []core.Capability
 
-	// Start begins processing log entries
+	// Input returns the channel for sending transport events to this sink.
+	Input() chan<- core.TransportEvent
+
+	// Start begins processing transport events.
 	Start(ctx context.Context) error
 
-	// Stop gracefully shuts down the sink
+	// Stop gracefully shuts down the sink.
 	Stop()
 
-	// GetStats returns sink statistics
+	// GetStats returns sink statistics.
 	GetStats() SinkStats
 }
 
-// SinkStats contains statistics about a sink
+// SinkStats contains statistics about a sink.
 type SinkStats struct {
+	ID                string
 	Type              string
 	TotalProcessed    uint64
 	ActiveConnections int64
@@ -33,8 +35,3 @@ type SinkStats struct {
 	LastProcessed time.Time
 	Details       map[string]any
 }
-
-// AuthSetter is an interface for sinks that can accept an AuthConfig.
-type AuthSetter interface {
-	SetAuthConfig(auth *config.AuthConfig)
-}
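This hunk changes the Sink contract from `core.LogEntry` to `core.TransportEvent` and adds `Capabilities`. After an interface change like this, a compile-time conformance check is a common way to catch implementations that missed a method. A self-contained sketch with local stand-in types (`GetStats` omitted for brevity; none of these names are from the repo, only the interface shape is):

```go
package main

import (
	"context"
	"fmt"
)

// Local stand-ins for core.TransportEvent / core.Capability.
type TransportEvent struct{ Payload []byte }
type Capability int

// Sink mirrors the revised interface shape from the diff (GetStats omitted).
type Sink interface {
	Capabilities() []Capability
	Input() chan<- TransportEvent
	Start(ctx context.Context) error
	Stop()
}

// noopSink is a minimal implementation used only for the conformance check.
type noopSink struct{ in chan TransportEvent }

func (n *noopSink) Capabilities() []Capability      { return nil }
func (n *noopSink) Input() chan<- TransportEvent    { return n.in }
func (n *noopSink) Start(ctx context.Context) error { return nil }
func (n *noopSink) Stop()                           {}

// Compile-time check: *noopSink must satisfy Sink. If a method is missing
// or has the old LogEntry signature, this line fails to compile.
var _ Sink = (*noopSink)(nil)

func main() {
	s := Sink(&noopSink{in: make(chan TransportEvent, 1)})
	s.Input() <- TransportEvent{Payload: []byte("ok")}
	fmt.Println(len(s.Input()))
}
```

The `var _ Sink = (*noopSink)(nil)` line costs nothing at runtime and turns a missed migration into a build error rather than a surprise at plugin-registration time.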
@@ -1,822 +0,0 @@
// FILE: logwisp/src/internal/sink/tcp.go
package sink

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net"
	"strings"
	"sync"
	"sync/atomic"
	"time"

	"logwisp/src/internal/auth"
	"logwisp/src/internal/config"
	"logwisp/src/internal/core"
	"logwisp/src/internal/format"
	"logwisp/src/internal/limit"
	"logwisp/src/internal/tls"

	"github.com/lixenwraith/log"
	"github.com/lixenwraith/log/compat"
	"github.com/panjf2000/gnet/v2"
)

// TCPSink streams log entries via TCP
type TCPSink struct {
	input       chan core.LogEntry
	config      TCPConfig
	server      *tcpServer
	done        chan struct{}
	activeConns atomic.Int64
	startTime   time.Time
	engine      *gnet.Engine
	engineMu    sync.Mutex
	wg          sync.WaitGroup
	netLimiter  *limit.NetLimiter
	logger      *log.Logger
	formatter   format.Formatter

	// Security components
	authenticator *auth.Authenticator
	tlsManager    *tls.Manager
	authConfig    *config.AuthConfig

	// Statistics
	totalProcessed atomic.Uint64
	lastProcessed  atomic.Value // time.Time
	authFailures   atomic.Uint64
	authSuccesses  atomic.Uint64

	// Write error tracking
	writeErrors            atomic.Uint64
	consecutiveWriteErrors map[gnet.Conn]int
	errorMu                sync.Mutex
}

// TCPConfig holds TCP sink configuration
type TCPConfig struct {
	Port       int64
	BufferSize int64
	Heartbeat  *config.HeartbeatConfig
	SSL        *config.SSLConfig
	NetLimit   *config.NetLimitConfig
}

// NewTCPSink creates a new TCP streaming sink
func NewTCPSink(options map[string]any, logger *log.Logger, formatter format.Formatter) (*TCPSink, error) {
	cfg := TCPConfig{
		Port:       int64(9090),
		BufferSize: int64(1000),
	}

	// Extract configuration from options
	if port, ok := options["port"].(int64); ok {
		cfg.Port = port
	}
	if bufSize, ok := options["buffer_size"].(int64); ok {
		cfg.BufferSize = bufSize
	}

	// Extract heartbeat config
	if hb, ok := options["heartbeat"].(map[string]any); ok {
		cfg.Heartbeat = &config.HeartbeatConfig{}
		cfg.Heartbeat.Enabled, _ = hb["enabled"].(bool)
		if interval, ok := hb["interval_seconds"].(int64); ok {
			cfg.Heartbeat.IntervalSeconds = interval
		}
		cfg.Heartbeat.IncludeTimestamp, _ = hb["include_timestamp"].(bool)
		cfg.Heartbeat.IncludeStats, _ = hb["include_stats"].(bool)
		if hbFormat, ok := hb["format"].(string); ok {
			cfg.Heartbeat.Format = hbFormat
		}
	}

	// Extract SSL config
	if ssl, ok := options["ssl"].(map[string]any); ok {
		cfg.SSL = &config.SSLConfig{}
		cfg.SSL.Enabled, _ = ssl["enabled"].(bool)
		if certFile, ok := ssl["cert_file"].(string); ok {
			cfg.SSL.CertFile = certFile
		}
		if keyFile, ok := ssl["key_file"].(string); ok {
			cfg.SSL.KeyFile = keyFile
		}
		cfg.SSL.ClientAuth, _ = ssl["client_auth"].(bool)
		if caFile, ok := ssl["client_ca_file"].(string); ok {
			cfg.SSL.ClientCAFile = caFile
		}
		cfg.SSL.VerifyClientCert, _ = ssl["verify_client_cert"].(bool)
		if minVer, ok := ssl["min_version"].(string); ok {
			cfg.SSL.MinVersion = minVer
		}
		if maxVer, ok := ssl["max_version"].(string); ok {
			cfg.SSL.MaxVersion = maxVer
		}
		if ciphers, ok := ssl["cipher_suites"].(string); ok {
			cfg.SSL.CipherSuites = ciphers
		}
	}

	// Extract net limit config
	if rl, ok := options["net_limit"].(map[string]any); ok {
		cfg.NetLimit = &config.NetLimitConfig{}
		cfg.NetLimit.Enabled, _ = rl["enabled"].(bool)
		if rps, ok := rl["requests_per_second"].(float64); ok {
			cfg.NetLimit.RequestsPerSecond = rps
		}
		if burst, ok := rl["burst_size"].(int64); ok {
			cfg.NetLimit.BurstSize = burst
		}
		if limitBy, ok := rl["limit_by"].(string); ok {
			cfg.NetLimit.LimitBy = limitBy
		}
		if respCode, ok := rl["response_code"].(int64); ok {
			cfg.NetLimit.ResponseCode = respCode
		}
		if msg, ok := rl["response_message"].(string); ok {
			cfg.NetLimit.ResponseMessage = msg
		}
		if maxPerIP, ok := rl["max_connections_per_ip"].(int64); ok {
			cfg.NetLimit.MaxConnectionsPerIP = maxPerIP
		}
		if maxTotal, ok := rl["max_total_connections"].(int64); ok {
			cfg.NetLimit.MaxTotalConnections = maxTotal
		}
		if ipWhitelist, ok := rl["ip_whitelist"].([]any); ok {
			cfg.NetLimit.IPWhitelist = make([]string, 0, len(ipWhitelist))
			for _, entry := range ipWhitelist {
				if str, ok := entry.(string); ok {
					cfg.NetLimit.IPWhitelist = append(cfg.NetLimit.IPWhitelist, str)
				}
			}
		}
		if ipBlacklist, ok := rl["ip_blacklist"].([]any); ok {
			cfg.NetLimit.IPBlacklist = make([]string, 0, len(ipBlacklist))
			for _, entry := range ipBlacklist {
				if str, ok := entry.(string); ok {
					cfg.NetLimit.IPBlacklist = append(cfg.NetLimit.IPBlacklist, str)
				}
			}
		}
	}

	t := &TCPSink{
		input:     make(chan core.LogEntry, cfg.BufferSize),
		config:    cfg,
		done:      make(chan struct{}),
		startTime: time.Now(),
		logger:    logger,
		formatter: formatter,
	}
	t.lastProcessed.Store(time.Time{})

	// Initialize net limiter
	if cfg.NetLimit != nil && cfg.NetLimit.Enabled {
		t.netLimiter = limit.NewNetLimiter(*cfg.NetLimit, logger)
	}

	return t, nil
}

func (t *TCPSink) Input() chan<- core.LogEntry {
	return t.input
}

func (t *TCPSink) Start(ctx context.Context) error {
	t.server = &tcpServer{
		sink:    t,
		clients: make(map[gnet.Conn]*tcpClient),
	}

	// Start log broadcast loop
	t.wg.Add(1)
	go func() {
		defer t.wg.Done()
		t.broadcastLoop(ctx)
	}()

	// Configure gnet options
	addr := fmt.Sprintf("tcp://:%d", t.config.Port)

	// Create a gnet adapter using the existing logger instance
	gnetLogger := compat.NewGnetAdapter(t.logger)

	var opts []gnet.Option
	opts = append(opts,
		gnet.WithLogger(gnetLogger),
		gnet.WithMulticore(true),
		gnet.WithReusePort(true),
	)

	// Start gnet server
	errChan := make(chan error, 1)
	go func() {
		t.logger.Info("msg", "Starting TCP server",
			"component", "tcp_sink",
			"port", t.config.Port,
			"auth", t.authenticator != nil)

		err := gnet.Run(t.server, addr, opts...)
		if err != nil {
			t.logger.Error("msg", "TCP server failed",
				"component", "tcp_sink",
				"port", t.config.Port,
				"error", err)
		}
		errChan <- err
	}()

	// Monitor context for shutdown
	go func() {
		<-ctx.Done()
		t.engineMu.Lock()
		if t.engine != nil {
			shutdownCtx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
			defer cancel()
			(*t.engine).Stop(shutdownCtx)
		}
		t.engineMu.Unlock()
	}()

	// Wait briefly for server to start or fail
	select {
	case err := <-errChan:
		// Server failed immediately
		close(t.done)
		t.wg.Wait()
		return err
	case <-time.After(100 * time.Millisecond):
		// Server started successfully
		t.logger.Info("msg", "TCP server started", "port", t.config.Port)
		return nil
	}
}

func (t *TCPSink) Stop() {
	t.logger.Info("msg", "Stopping TCP sink")
	// Signal broadcast loop to stop
	close(t.done)

	// Stop gnet engine if running
	t.engineMu.Lock()
	engine := t.engine
	t.engineMu.Unlock()

	if engine != nil {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		(*engine).Stop(ctx) // Dereference the pointer
	}

	// Wait for broadcast loop to finish
	t.wg.Wait()

	t.logger.Info("msg", "TCP sink stopped")
}

func (t *TCPSink) GetStats() SinkStats {
	lastProc, _ := t.lastProcessed.Load().(time.Time)

	var netLimitStats map[string]any
	if t.netLimiter != nil {
		netLimitStats = t.netLimiter.GetStats()
	}

	var authStats map[string]any
	if t.authenticator != nil {
		authStats = t.authenticator.GetStats()
		authStats["failures"] = t.authFailures.Load()
		authStats["successes"] = t.authSuccesses.Load()
	}

	var tlsStats map[string]any
	if t.tlsManager != nil {
		tlsStats = t.tlsManager.GetStats()
	}

	return SinkStats{
		Type:              "tcp",
		TotalProcessed:    t.totalProcessed.Load(),
		ActiveConnections: t.activeConns.Load(),
		StartTime:         t.startTime,
		LastProcessed:     lastProc,
		Details: map[string]any{
			"port":        t.config.Port,
			"buffer_size": t.config.BufferSize,
			"net_limit":   netLimitStats,
			"auth":        authStats,
			"tls":         tlsStats,
		},
	}
}

func (t *TCPSink) broadcastLoop(ctx context.Context) {
	var ticker *time.Ticker
	var tickerChan <-chan time.Time

	if t.config.Heartbeat.Enabled {
		ticker = time.NewTicker(time.Duration(t.config.Heartbeat.IntervalSeconds) * time.Second)
		tickerChan = ticker.C
		defer ticker.Stop()
	}

	for {
		select {
		case <-ctx.Done():
			return
		case entry, ok := <-t.input:
			if !ok {
				return
			}
			t.totalProcessed.Add(1)
			t.lastProcessed.Store(time.Now())

			data, err := t.formatter.Format(entry)
			if err != nil {
				t.logger.Error("msg", "Failed to format log entry",
					"component", "tcp_sink",
					"error", err,
					"entry_source", entry.Source)
				continue
			}

			// Broadcast only to authenticated clients
			t.server.mu.RLock()
			for conn, client := range t.server.clients {
				if client.authenticated {
					// Send through TLS bridge if present
					if client.tlsBridge != nil {
						if _, err := client.tlsBridge.Write(data); err != nil {
							// TLS write failed, connection likely dead
							t.logger.Debug("msg", "TLS write failed",
								"component", "tcp_sink",
								"error", err)
							conn.Close()
						}
					} else {
						conn.AsyncWrite(data, func(c gnet.Conn, err error) error {
							if err != nil {
								t.writeErrors.Add(1)
								t.handleWriteError(c, err)
							} else {
								// Reset consecutive error count on success
								t.errorMu.Lock()
								delete(t.consecutiveWriteErrors, c)
								t.errorMu.Unlock()
							}
							return nil
						})
					}
				}
			}
			t.server.mu.RUnlock()

		case <-tickerChan:
			heartbeatEntry := t.createHeartbeatEntry()
			data, err := t.formatter.Format(heartbeatEntry)
			if err != nil {
				t.logger.Error("msg", "Failed to format heartbeat",
					"component", "tcp_sink",
					"error", err)
				continue
			}

			t.server.mu.RLock()
			for conn, client := range t.server.clients {
				if client.authenticated {
					// Validate session is still active
					if t.authenticator != nil && client.session != nil {
						if !t.authenticator.ValidateSession(client.session.ID) {
							// Session expired, close connection
							conn.Close()
							continue
						}
					}
					if client.tlsBridge != nil {
						if _, err := client.tlsBridge.Write(data); err != nil {
							t.logger.Debug("msg", "TLS heartbeat write failed",
								"component", "tcp_sink",
								"error", err)
							conn.Close()
						}
					} else {
						conn.AsyncWrite(data, func(c gnet.Conn, err error) error {
							if err != nil {
								t.writeErrors.Add(1)
								t.handleWriteError(c, err)
							}
							return nil
						})
					}
				}
			}
			t.server.mu.RUnlock()

		case <-t.done:
			return
		}
	}
}

// Handle write errors with threshold-based connection termination
func (t *TCPSink) handleWriteError(c gnet.Conn, err error) {
	t.errorMu.Lock()
	defer t.errorMu.Unlock()

	// Track consecutive errors per connection
	if t.consecutiveWriteErrors == nil {
		t.consecutiveWriteErrors = make(map[gnet.Conn]int)
	}

	t.consecutiveWriteErrors[c]++
	errorCount := t.consecutiveWriteErrors[c]

	t.logger.Debug("msg", "AsyncWrite error",
		"component", "tcp_sink",
		"remote_addr", c.RemoteAddr(),
		"error", err,
		"consecutive_errors", errorCount)

	// Close connection after 3 consecutive write errors
	if errorCount >= 3 {
		t.logger.Warn("msg", "Closing connection due to repeated write errors",
			"component", "tcp_sink",
			"remote_addr", c.RemoteAddr(),
			"error_count", errorCount)
		delete(t.consecutiveWriteErrors, c)
		c.Close()
	}
}

// Create heartbeat as a proper LogEntry
func (t *TCPSink) createHeartbeatEntry() core.LogEntry {
	message := "heartbeat"

	// Build fields for heartbeat metadata
	fields := make(map[string]any)
	fields["type"] = "heartbeat"

	if t.config.Heartbeat.IncludeStats {
		fields["active_connections"] = t.activeConns.Load()
		fields["uptime_seconds"] = int64(time.Since(t.startTime).Seconds())
	}

	fieldsJSON, _ := json.Marshal(fields)

	return core.LogEntry{
		Time:    time.Now(),
		Source:  "logwisp-tcp",
		Level:   "INFO",
		Message: message,
		Fields:  fieldsJSON,
	}
}

// GetActiveConnections returns the current number of connections
func (t *TCPSink) GetActiveConnections() int64 {
	return t.activeConns.Load()
}

// tcpClient represents a connected TCP client with auth state
type tcpClient struct {
	conn           gnet.Conn
	buffer         bytes.Buffer
	authenticated  bool
	session        *auth.Session
	authTimeout    time.Time
	tlsBridge      *tls.GNetTLSConn
	authTimeoutSet bool
}

// tcpServer handles gnet events with authentication
type tcpServer struct {
	gnet.BuiltinEventEngine
	sink    *TCPSink
	clients map[gnet.Conn]*tcpClient
	mu      sync.RWMutex
}

func (s *tcpServer) OnBoot(eng gnet.Engine) gnet.Action {
	// Store engine reference for shutdown
	s.sink.engineMu.Lock()
	s.sink.engine = &eng
	s.sink.engineMu.Unlock()

	s.sink.logger.Debug("msg", "TCP server booted",
		"component", "tcp_sink",
		"port", s.sink.config.Port)
	return gnet.None
}

func (s *tcpServer) OnOpen(c gnet.Conn) (out []byte, action gnet.Action) {
	remoteAddr := c.RemoteAddr()
	s.sink.logger.Debug("msg", "TCP connection attempt", "remote_addr", remoteAddr)

	// Reject IPv6 connections immediately
	if tcpAddr, ok := remoteAddr.(*net.TCPAddr); ok {
		if tcpAddr.IP.To4() == nil {
			return []byte("IPv4-only (IPv6 not supported)\n"), gnet.Close
		}
	}

	// Check net limit
	if s.sink.netLimiter != nil {
		remoteStr := c.RemoteAddr().String()
		tcpAddr, err := net.ResolveTCPAddr("tcp", remoteStr)
		if err != nil {
			s.sink.logger.Warn("msg", "Failed to parse TCP address",
				"remote_addr", remoteAddr,
				"error", err)
			return nil, gnet.Close
		}

		if !s.sink.netLimiter.CheckTCP(tcpAddr) {
			s.sink.logger.Warn("msg", "TCP connection net limited",
				"remote_addr", remoteAddr)
			return nil, gnet.Close
		}

		// Track connection
		s.sink.netLimiter.AddConnection(remoteStr)
	}

	// Create client state without auth timeout initially
	client := &tcpClient{
		conn:           c,
		authenticated:  s.sink.authenticator == nil, // No auth = auto authenticated
		authTimeoutSet: false,                       // Auth timeout not started yet
	}

	// Initialize TLS bridge if enabled
	if s.sink.tlsManager != nil {
		tlsConfig := s.sink.tlsManager.GetTCPConfig()
		client.tlsBridge = tls.NewServerConn(c, tlsConfig)
		client.tlsBridge.Handshake() // Start async handshake

		s.sink.logger.Debug("msg", "TLS handshake initiated",
			"component", "tcp_sink",
			"remote_addr", remoteAddr)
	} else if s.sink.authenticator != nil {
		// Only set auth timeout if no TLS (plain connection)
		client.authTimeout = time.Now().Add(30 * time.Second) // TODO: configurable or non-hardcoded timer
		client.authTimeoutSet = true
	}

	s.mu.Lock()
	s.clients[c] = client
	s.mu.Unlock()

	newCount := s.sink.activeConns.Add(1)
	s.sink.logger.Debug("msg", "TCP connection opened",
		"remote_addr", remoteAddr,
		"active_connections", newCount,
		"requires_auth", s.sink.authenticator != nil)

	// Send auth prompt if authentication is required
	if s.sink.authenticator != nil && s.sink.tlsManager == nil {
		authPrompt := []byte("AUTH REQUIRED\nFormat: AUTH <method> <credentials>\nMethods: basic, token\n")
		return authPrompt, gnet.None
	}

	return nil, gnet.None
}

func (s *tcpServer) OnClose(c gnet.Conn, err error) gnet.Action {
	remoteAddr := c.RemoteAddr().String()

	// Remove client state
	s.mu.Lock()
	client := s.clients[c]
	delete(s.clients, c)
	s.mu.Unlock()

	// Clean up TLS bridge if present
	if client != nil && client.tlsBridge != nil {
		client.tlsBridge.Close()
		s.sink.logger.Debug("msg", "TLS connection closed",
			"remote_addr", remoteAddr)
}
|
|
||||||
|
|
||||||
// Clean up write error tracking
|
|
||||||
s.sink.errorMu.Lock()
|
|
||||||
delete(s.sink.consecutiveWriteErrors, c)
|
|
||||||
s.sink.errorMu.Unlock()
|
|
||||||
|
|
||||||
// Remove connection tracking
|
|
||||||
if s.sink.netLimiter != nil {
|
|
||||||
s.sink.netLimiter.RemoveConnection(remoteAddr)
|
|
||||||
}
|
|
||||||
|
|
||||||
newCount := s.sink.activeConns.Add(-1)
|
|
||||||
s.sink.logger.Debug("msg", "TCP connection closed",
|
|
||||||
"remote_addr", remoteAddr,
|
|
||||||
"active_connections", newCount,
|
|
||||||
"error", err)
|
|
||||||
return gnet.None
|
|
||||||
}
|
|
||||||
|
|
||||||
func (s *tcpServer) OnTraffic(c gnet.Conn) gnet.Action {
|
|
||||||
s.mu.RLock()
|
|
||||||
client, exists := s.clients[c]
|
|
||||||
s.mu.RUnlock()
|
|
||||||
|
|
||||||
if !exists {
|
|
||||||
return gnet.Close
|
|
||||||
}
|
|
||||||
|
|
||||||
// // Check auth timeout
|
|
||||||
// if !client.authenticated && time.Now().After(client.authTimeout) {
|
|
||||||
// s.sink.logger.Warn("msg", "Authentication timeout",
|
|
||||||
// "component", "tcp_sink",
|
|
||||||
// "remote_addr", c.RemoteAddr().String())
|
|
||||||
// if client.tlsBridge != nil && client.tlsBridge.IsHandshakeDone() {
|
|
||||||
// client.tlsBridge.Write([]byte("AUTH TIMEOUT\n"))
|
|
||||||
// } else if client.tlsBridge == nil {
|
|
||||||
// c.AsyncWrite([]byte("AUTH TIMEOUT\n"), nil)
|
|
||||||
// }
|
|
||||||
// return gnet.Close
|
|
||||||
// }
|
|
||||||
|
|
||||||
// Read all available data
|
|
||||||
data, err := c.Next(-1)
|
|
||||||
if err != nil {
|
|
||||||
s.sink.logger.Error("msg", "Error reading from connection",
|
|
||||||
"component", "tcp_sink",
|
|
||||||
"error", err)
|
|
||||||
return gnet.Close
|
|
||||||
}
|
|
||||||
|
|
||||||
// Process through TLS bridge if present
|
|
||||||
if client.tlsBridge != nil {
|
|
||||||
// Feed encrypted data into TLS engine
|
|
||||||
if err := client.tlsBridge.ProcessIncoming(data); err != nil {
|
|
||||||
s.sink.logger.Error("msg", "TLS processing error",
|
|
||||||
"component", "tcp_sink",
|
|
||||||
"remote_addr", c.RemoteAddr().String(),
|
|
||||||
"error", err)
|
|
||||||
return gnet.Close
|
|
||||||
}
|
|
||||||
|
|
||||||
// Check if handshake is complete
|
|
||||||
if !client.tlsBridge.IsHandshakeDone() {
|
|
||||||
// Still handshaking, wait for more data
|
|
||||||
return gnet.None
|
|
||||||
}
|
|
||||||
|
|
||||||
// Check handshake result
|
|
||||||
_, hsErr := client.tlsBridge.HandshakeComplete()
|
|
||||||
if hsErr != nil {
|
|
||||||
s.sink.logger.Error("msg", "TLS handshake failed",
|
|
||||||
"component", "tcp_sink",
|
|
||||||
"remote_addr", c.RemoteAddr().String(),
|
|
||||||
"error", hsErr)
|
|
||||||
return gnet.Close
|
|
||||||
}
|
|
||||||
|
|
||||||
// Set auth timeout only after TLS handshake completes
|
|
||||||
if !client.authTimeoutSet && s.sink.authenticator != nil && !client.authenticated {
|
|
||||||
client.authTimeout = time.Now().Add(30 * time.Second)
|
|
||||||
client.authTimeoutSet = true
|
|
||||||
s.sink.logger.Debug("msg", "Auth timeout started after TLS handshake",
|
|
||||||
"component", "tcp_sink",
|
|
||||||
"remote_addr", c.RemoteAddr().String())
|
|
||||||
}
|
|
||||||
|
|
||||||
// Read decrypted plaintext
|
|
||||||
data = client.tlsBridge.Read()
|
|
||||||
if data == nil || len(data) == 0 {
|
|
||||||
// No plaintext available yet
|
|
||||||
return gnet.None
|
|
||||||
}
|
|
||||||
|
|
||||||
// First data after TLS handshake - send auth prompt if needed
|
|
||||||
if s.sink.authenticator != nil && !client.authenticated &&
|
|
||||||
len(client.buffer.Bytes()) == 0 {
|
|
||||||
authPrompt := []byte("AUTH REQUIRED\n")
|
|
||||||
client.tlsBridge.Write(authPrompt)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Only check auth timeout if it has been set
|
|
||||||
if !client.authenticated && client.authTimeoutSet && time.Now().After(client.authTimeout) {
|
|
||||||
s.sink.logger.Warn("msg", "Authentication timeout",
|
|
||||||
"component", "tcp_sink",
|
|
||||||
"remote_addr", c.RemoteAddr().String())
|
|
||||||
if client.tlsBridge != nil && client.tlsBridge.IsHandshakeDone() {
|
|
||||||
client.tlsBridge.Write([]byte("AUTH TIMEOUT\n"))
|
|
||||||
} else if client.tlsBridge == nil {
|
|
||||||
c.AsyncWrite([]byte("AUTH TIMEOUT\n"), nil)
|
|
||||||
}
|
|
||||||
return gnet.Close
|
|
||||||
}
|
|
||||||
|
|
||||||
// If not authenticated, expect auth command
|
|
||||||
if !client.authenticated {
|
|
||||||
client.buffer.Write(data)
|
|
||||||
|
|
||||||
// Look for complete auth line
|
|
||||||
if line, err := client.buffer.ReadBytes('\n'); err == nil {
|
|
||||||
line = bytes.TrimSpace(line)
|
|
||||||
|
|
||||||
// Parse AUTH command: AUTH <method> <credentials>
|
|
||||||
parts := strings.SplitN(string(line), " ", 3)
|
|
||||||
if len(parts) != 3 || parts[0] != "AUTH" {
|
|
||||||
// Send error through TLS if enabled
|
|
||||||
errMsg := []byte("AUTH FAILED\n")
|
|
||||||
if client.tlsBridge != nil {
|
|
||||||
client.tlsBridge.Write(errMsg)
|
|
||||||
} else {
|
|
||||||
c.AsyncWrite(errMsg, nil)
|
|
||||||
}
|
|
||||||
return gnet.None
|
|
||||||
}
|
|
||||||
|
|
||||||
// Authenticate
|
|
||||||
session, err := s.sink.authenticator.AuthenticateTCP(parts[1], parts[2], c.RemoteAddr().String())
|
|
||||||
if err != nil {
|
|
||||||
s.sink.authFailures.Add(1)
|
|
||||||
s.sink.logger.Warn("msg", "TCP authentication failed",
|
|
||||||
"remote_addr", c.RemoteAddr().String(),
|
|
||||||
"method", parts[1],
|
|
||||||
"error", err)
|
|
||||||
// Send error through TLS if enabled
|
|
||||||
errMsg := []byte("AUTH FAILED\n")
|
|
||||||
if client.tlsBridge != nil {
|
|
||||||
client.tlsBridge.Write(errMsg)
|
|
||||||
} else {
|
|
||||||
c.AsyncWrite(errMsg, nil)
|
|
||||||
}
|
|
||||||
return gnet.Close
|
|
||||||
}
|
|
||||||
|
|
||||||
// Authentication successful
|
|
||||||
s.sink.authSuccesses.Add(1)
|
|
||||||
s.mu.Lock()
|
|
||||||
client.authenticated = true
|
|
||||||
client.session = session
|
|
||||||
s.mu.Unlock()
|
|
||||||
|
|
||||||
s.sink.logger.Info("msg", "TCP client authenticated",
|
|
||||||
"component", "tcp_sink",
|
|
||||||
"remote_addr", c.RemoteAddr().String(),
|
|
||||||
"username", session.Username,
|
|
||||||
"method", session.Method,
|
|
||||||
"tls", client.tlsBridge != nil)
|
|
||||||
|
|
||||||
// Send success through TLS if enabled
|
|
||||||
successMsg := []byte("AUTH OK\n")
|
|
||||||
if client.tlsBridge != nil {
|
|
||||||
client.tlsBridge.Write(successMsg)
|
|
||||||
} else {
|
|
||||||
c.AsyncWrite(successMsg, nil)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Clear buffer after auth
|
|
||||||
client.buffer.Reset()
|
|
||||||
}
|
|
||||||
return gnet.None
|
|
||||||
}
|
|
||||||
|
|
||||||
// Authenticated clients shouldn't send data, just discard
|
|
||||||
c.Discard(-1)
|
|
||||||
return gnet.None
|
|
||||||
}

// SetAuthConfig configures tcp sink authentication
func (t *TCPSink) SetAuthConfig(authCfg *config.AuthConfig) {
	if authCfg == nil || authCfg.Type == "none" {
		return
	}

	t.authConfig = authCfg
	authenticator, err := auth.New(authCfg, t.logger)
	if err != nil {
		t.logger.Error("msg", "Failed to initialize authenticator for TCP sink",
			"component", "tcp_sink",
			"error", err)
		return
	}
	t.authenticator = authenticator

	// Initialize TLS manager if SSL is configured
	if t.config.SSL != nil && t.config.SSL.Enabled {
		tlsManager, err := tls.NewManager(t.config.SSL, t.logger)
		if err != nil {
			t.logger.Error("msg", "Failed to create TLS manager",
				"component", "tcp_sink",
				"error", err)
			// Abort auth setup; TLS was requested but unavailable
			return
		}
		t.tlsManager = tlsManager
	}

	t.logger.Info("msg", "Authentication configured for TCP sink",
		"component", "tcp_sink",
		"auth_type", authCfg.Type,
		"tls_enabled", t.tlsManager != nil,
		"tls_bridge", t.tlsManager != nil)
}
src/internal/sink/tcp/tcp.go (new file, 472 lines)
@@ -0,0 +1,472 @@
package tcp

import (
	"bytes"
	"context"
	"fmt"
	"net"
	"sync"
	"sync/atomic"
	"time"

	"logwisp/src/internal/config"
	"logwisp/src/internal/core"
	"logwisp/src/internal/plugin"
	"logwisp/src/internal/session"
	"logwisp/src/internal/sink"

	lconfig "github.com/lixenwraith/config"
	"github.com/lixenwraith/log"
	"github.com/lixenwraith/log/compat"
	"github.com/panjf2000/gnet/v2"
)

func init() {
	if err := plugin.RegisterSink("tcp", NewTCPSinkPlugin); err != nil {
		panic(fmt.Sprintf("failed to register tcp sink: %v", err))
	}
}

// TCPSink streams log entries to connected TCP clients
type TCPSink struct {
	// Plugin identity and session management
	id    string
	proxy *session.Proxy

	// Configuration
	config *config.TCPSinkOptions

	// Network
	server   *tcpServer
	engine   *gnet.Engine
	engineMu sync.Mutex

	// Application
	input  chan core.TransportEvent
	logger *log.Logger

	// Runtime
	done      chan struct{}
	wg        sync.WaitGroup
	startTime time.Time

	// Statistics
	activeConns    atomic.Int64
	totalProcessed atomic.Uint64
	lastProcessed  atomic.Value // time.Time

	// Error tracking
	writeErrors            atomic.Uint64
	consecutiveWriteErrors map[gnet.Conn]int
	errorMu                sync.Mutex
}

const (
	// Server lifecycle
	TCPServerStartTimeout    = 100 * time.Millisecond
	TCPServerShutdownTimeout = 2 * time.Second

	// Connection management
	TCPMaxConsecutiveWriteErrors = 3
	TCPMaxPort                   = 65535

	// Defaults
	DefaultTCPHost            = "0.0.0.0"
	DefaultTCPBufferSize      = 1000
	DefaultTCPWriteTimeoutMS  = 5000
	DefaultTCPKeepAlivePeriod = 30000
)

// NewTCPSinkPlugin creates a TCP sink through plugin factory
func NewTCPSinkPlugin(
	id string,
	configMap map[string]any,
	logger *log.Logger,
	proxy *session.Proxy,
) (sink.Sink, error) {
	// Create config struct with defaults
	opts := &config.TCPSinkOptions{
		Host:      DefaultTCPHost,
		Port:      0,
		KeepAlive: true,
	}

	// Parse config map into struct
	if err := lconfig.ScanMap(configMap, opts); err != nil {
		return nil, fmt.Errorf("failed to parse config: %w", err)
	}

	// Validate
	if err := lconfig.Port(opts.Port); err != nil {
		return nil, fmt.Errorf("port: %w", err)
	}

	// Defaults
	if opts.BufferSize <= 0 {
		opts.BufferSize = DefaultTCPBufferSize
	}
	if opts.WriteTimeout <= 0 {
		opts.WriteTimeout = DefaultTCPWriteTimeoutMS
	}
	if opts.KeepAlivePeriod <= 0 {
		opts.KeepAlivePeriod = DefaultTCPKeepAlivePeriod
	}

	t := &TCPSink{
		id:                     id,
		proxy:                  proxy,
		config:                 opts,
		input:                  make(chan core.TransportEvent, opts.BufferSize),
		done:                   make(chan struct{}),
		logger:                 logger,
		consecutiveWriteErrors: make(map[gnet.Conn]int),
	}
	t.lastProcessed.Store(time.Time{})

	logger.Info("msg", "TCP sink initialized",
		"component", "tcp_sink",
		"instance_id", id,
		"host", opts.Host,
		"port", opts.Port)

	return t, nil
}

// Capabilities returns supported capabilities
func (t *TCPSink) Capabilities() []core.Capability {
	return []core.Capability{
		core.CapSessionAware,
		core.CapMultiSession,
	}
}

// Input returns the channel for sending transport events
func (t *TCPSink) Input() chan<- core.TransportEvent {
	return t.input
}

// Start initializes the TCP server and begins the broadcast loop
func (t *TCPSink) Start(ctx context.Context) error {
	t.server = &tcpServer{
		sink:    t,
		clients: make(map[gnet.Conn]*tcpClient),
	}

	t.startTime = time.Now()

	// Start broadcast loop
	t.wg.Add(1)
	go func() {
		defer t.wg.Done()
		t.broadcastLoop(ctx)
	}()

	// Configure gnet
	addr := fmt.Sprintf("tcp://%s:%d", t.config.Host, t.config.Port)
	gnetLogger := compat.NewGnetAdapter(t.logger)

	opts := []gnet.Option{
		gnet.WithLogger(gnetLogger),
		gnet.WithMulticore(true),
		gnet.WithReusePort(true),
	}

	// Apply TCP keep-alive settings from config
	if t.config.KeepAlive {
		opts = append(opts,
			gnet.WithTCPKeepAlive(time.Duration(t.config.KeepAlivePeriod)*time.Millisecond),
		)
	}

	// Start gnet server
	errChan := make(chan error, 1)
	go func() {
		t.logger.Info("msg", "Starting TCP server",
			"component", "tcp_sink",
			"host", t.config.Host,
			"port", t.config.Port)

		err := gnet.Run(t.server, addr, opts...)
		if err != nil {
			t.logger.Error("msg", "TCP server failed",
				"component", "tcp_sink",
				"error", err)
		}
		errChan <- err
	}()

	// Monitor context for shutdown
	go func() {
		<-ctx.Done()
		t.engineMu.Lock()
		if t.engine != nil {
			shutdownCtx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
			defer cancel()
			(*t.engine).Stop(shutdownCtx)
		}
		t.engineMu.Unlock()
	}()

	// Wait briefly for server to start or fail
	select {
	case err := <-errChan:
		close(t.done)
		t.wg.Wait()
		return err
	case <-time.After(TCPServerStartTimeout):
		t.logger.Info("msg", "TCP server started",
			"component", "tcp_sink",
			"instance_id", t.id,
			"port", t.config.Port)
		return nil
	}
}

// Stop gracefully shuts down the TCP sink
func (t *TCPSink) Stop() {
	t.logger.Info("msg", "Stopping TCP sink",
		"component", "tcp_sink",
		"instance_id", t.id)

	close(t.done)

	// Stop gnet engine
	t.engineMu.Lock()
	engine := t.engine
	t.engineMu.Unlock()

	if engine != nil {
		ctx, cancel := context.WithTimeout(context.Background(), TCPServerShutdownTimeout)
		defer cancel()
		(*engine).Stop(ctx)
	}

	t.wg.Wait()

	t.logger.Info("msg", "TCP sink stopped",
		"component", "tcp_sink",
		"instance_id", t.id,
		"total_processed", t.totalProcessed.Load())
}

// GetStats returns sink statistics
func (t *TCPSink) GetStats() sink.SinkStats {
	lastProc, _ := t.lastProcessed.Load().(time.Time)

	return sink.SinkStats{
		ID:                t.id,
		Type:              "tcp",
		TotalProcessed:    t.totalProcessed.Load(),
		ActiveConnections: t.activeConns.Load(),
		StartTime:         t.startTime,
		LastProcessed:     lastProc,
		Details: map[string]any{
			"host":         t.config.Host,
			"port":         t.config.Port,
			"buffer_size":  t.config.BufferSize,
			"write_errors": t.writeErrors.Load(),
		},
	}
}

// tcpServer implements gnet.EventHandler
type tcpServer struct {
	gnet.BuiltinEventEngine
	sink    *TCPSink
	clients map[gnet.Conn]*tcpClient
	mu      sync.RWMutex
}

// tcpClient represents a connected TCP client
type tcpClient struct {
	conn      gnet.Conn
	buffer    bytes.Buffer
	sessionID string
}

// broadcastLoop sends transport events to all connected clients
func (t *TCPSink) broadcastLoop(ctx context.Context) {
	for {
		select {
		case <-ctx.Done():
			return
		case event, ok := <-t.input:
			if !ok {
				return
			}
			t.totalProcessed.Add(1)
			t.lastProcessed.Store(time.Now())
			t.broadcastData(event.Payload)
		case <-t.done:
			return
		}
	}
}

// OnBoot is called when the server starts
func (s *tcpServer) OnBoot(eng gnet.Engine) gnet.Action {
	s.sink.engineMu.Lock()
	s.sink.engine = &eng
	s.sink.engineMu.Unlock()

	s.sink.logger.Debug("msg", "TCP server booted",
		"component", "tcp_sink",
		"instance_id", s.sink.id)
	return gnet.None
}

// OnOpen is called when a new connection is established
func (s *tcpServer) OnOpen(c gnet.Conn) (out []byte, action gnet.Action) {
	remoteAddr := c.RemoteAddr()
	remoteAddrStr := remoteAddr.String()

	s.sink.logger.Debug("msg", "TCP connection attempt",
		"component", "tcp_sink",
		"remote_addr", remoteAddrStr)

	// Reject IPv6 connections
	if tcpAddr, ok := remoteAddr.(*net.TCPAddr); ok {
		if tcpAddr.IP.To4() == nil {
			s.sink.logger.Warn("msg", "IPv6 connection rejected",
				"component", "tcp_sink",
				"remote_addr", remoteAddrStr)
			return []byte("IPv4-only (IPv6 not supported)\n"), gnet.Close
		}
	}

	// Apply write timeout from config
	if s.sink.config.WriteTimeout > 0 {
		c.SetWriteDeadline(time.Now().Add(time.Duration(s.sink.config.WriteTimeout) * time.Millisecond))
	}

	// Create session via proxy
	sess := s.sink.proxy.CreateSession(remoteAddrStr, map[string]any{
		"type":        "tcp_client",
		"remote_addr": remoteAddrStr,
	})

	client := &tcpClient{
		conn:      c,
		sessionID: sess.ID,
	}

	s.mu.Lock()
	s.clients[c] = client
	s.mu.Unlock()

	newCount := s.sink.activeConns.Add(1)
	s.sink.logger.Debug("msg", "TCP connection opened",
		"component", "tcp_sink",
		"remote_addr", remoteAddrStr,
		"session_id", sess.ID,
		"active_connections", newCount)

	return nil, gnet.None
}

// OnClose is called when a connection is closed
func (s *tcpServer) OnClose(c gnet.Conn, err error) gnet.Action {
	remoteAddrStr := c.RemoteAddr().String()

	s.mu.RLock()
	client, exists := s.clients[c]
	s.mu.RUnlock()

	if exists && client.sessionID != "" {
		s.sink.proxy.RemoveSession(client.sessionID)
		s.sink.logger.Debug("msg", "Session removed",
			"component", "tcp_sink",
			"session_id", client.sessionID,
			"remote_addr", remoteAddrStr)
	}

	s.mu.Lock()
	delete(s.clients, c)
	s.mu.Unlock()

	s.sink.errorMu.Lock()
	delete(s.sink.consecutiveWriteErrors, c)
	s.sink.errorMu.Unlock()

	newCount := s.sink.activeConns.Add(-1)
	s.sink.logger.Debug("msg", "TCP connection closed",
		"component", "tcp_sink",
		"remote_addr", remoteAddrStr,
		"active_connections", newCount,
		"error", err)

	return gnet.None
}

// OnTraffic is called when data is received from a connection
func (s *tcpServer) OnTraffic(c gnet.Conn) gnet.Action {
	s.mu.RLock()
	client, exists := s.clients[c]
	s.mu.RUnlock()

	// Update session activity
	if exists && client.sessionID != "" {
		s.sink.proxy.UpdateActivity(client.sessionID)
	}

	// TCP sink doesn't expect data from clients, discard
	c.Discard(-1)
	return gnet.None
}

// broadcastData sends data to all connected clients
func (t *TCPSink) broadcastData(data []byte) {
	t.server.mu.RLock()
	defer t.server.mu.RUnlock()

	for conn, client := range t.server.clients {
		// Update session activity
		if client.sessionID != "" {
			t.proxy.UpdateActivity(client.sessionID)
		}

		// Refresh write deadline on each write if configured
		if t.config.WriteTimeout > 0 {
			conn.SetWriteDeadline(time.Now().Add(time.Duration(t.config.WriteTimeout) * time.Millisecond))
		}

		conn.AsyncWrite(data, func(c gnet.Conn, err error) error {
			if err != nil {
				t.writeErrors.Add(1)
				t.handleWriteError(c, err)
			} else {
				t.errorMu.Lock()
				delete(t.consecutiveWriteErrors, c)
				t.errorMu.Unlock()
			}
			return nil
		})
	}
}

// handleWriteError manages errors during async writes
func (t *TCPSink) handleWriteError(c gnet.Conn, err error) {
	remoteAddrStr := c.RemoteAddr().String()

	t.errorMu.Lock()
	defer t.errorMu.Unlock()

	t.consecutiveWriteErrors[c]++
	errorCount := t.consecutiveWriteErrors[c]

	t.logger.Debug("msg", "AsyncWrite error",
		"component", "tcp_sink",
		"remote_addr", remoteAddrStr,
		"error", err,
		"consecutive_errors", errorCount)

	// Close connection after max consecutive write errors
	if errorCount >= TCPMaxConsecutiveWriteErrors {
		t.logger.Warn("msg", "Closing connection due to repeated write errors",
			"component", "tcp_sink",
			"remote_addr", remoteAddrStr,
			"error_count", errorCount)
		delete(t.consecutiveWriteErrors, c)
		c.Close()
	}
}
src/internal/sink/tcp_client.go (deleted file, 483 lines)
@@ -1,483 +0,0 @@
// FILE: logwisp/src/internal/sink/tcp_client.go
package sink

import (
	"context"
	"crypto/tls"
	"errors"
	"fmt"
	"net"
	"sync"
	"sync/atomic"
	"time"

	"logwisp/src/internal/config"
	"logwisp/src/internal/core"
	"logwisp/src/internal/format"
	tlspkg "logwisp/src/internal/tls"

	"github.com/lixenwraith/log"
)

// TCPClientSink forwards log entries to a remote TCP endpoint
type TCPClientSink struct {
	input     chan core.LogEntry
	config    TCPClientConfig
	conn      net.Conn
	connMu    sync.RWMutex
	done      chan struct{}
	wg        sync.WaitGroup
	startTime time.Time
	logger    *log.Logger
	formatter format.Formatter

	// TLS support
	tlsManager *tlspkg.Manager
	tlsConfig  *tls.Config

	// Reconnection state
	reconnecting   atomic.Bool
	lastConnectErr error
	connectTime    time.Time

	// Statistics
	totalProcessed   atomic.Uint64
	totalFailed      atomic.Uint64
	totalReconnects  atomic.Uint64
	lastProcessed    atomic.Value // time.Time
	connectionUptime atomic.Value // time.Duration
}

// TCPClientConfig holds TCP client sink configuration
type TCPClientConfig struct {
	Address      string
	BufferSize   int64
	DialTimeout  time.Duration
	WriteTimeout time.Duration
	KeepAlive    time.Duration

	// Reconnection settings
	ReconnectDelay    time.Duration
	MaxReconnectDelay time.Duration
	ReconnectBackoff  float64

	// TLS config
	SSL *config.SSLConfig
}

// NewTCPClientSink creates a new TCP client sink
func NewTCPClientSink(options map[string]any, logger *log.Logger, formatter format.Formatter) (*TCPClientSink, error) {
	cfg := TCPClientConfig{
		BufferSize:        int64(1000),
		DialTimeout:       10 * time.Second,
		WriteTimeout:      30 * time.Second,
		KeepAlive:         30 * time.Second,
		ReconnectDelay:    time.Second,
		MaxReconnectDelay: 30 * time.Second,
		ReconnectBackoff:  float64(1.5),
	}

	// Extract address
	address, ok := options["address"].(string)
	if !ok || address == "" {
		return nil, fmt.Errorf("tcp_client sink requires 'address' option")
	}

	// Validate address format
	_, _, err := net.SplitHostPort(address)
	if err != nil {
		return nil, fmt.Errorf("invalid address format (expected host:port): %w", err)
	}
	cfg.Address = address

	// Extract other options
	if bufSize, ok := options["buffer_size"].(int64); ok && bufSize > 0 {
		cfg.BufferSize = bufSize
	}
	if dialTimeout, ok := options["dial_timeout_seconds"].(int64); ok && dialTimeout > 0 {
		cfg.DialTimeout = time.Duration(dialTimeout) * time.Second
	}
	if writeTimeout, ok := options["write_timeout_seconds"].(int64); ok && writeTimeout > 0 {
		cfg.WriteTimeout = time.Duration(writeTimeout) * time.Second
	}
	if keepAlive, ok := options["keep_alive_seconds"].(int64); ok && keepAlive > 0 {
		cfg.KeepAlive = time.Duration(keepAlive) * time.Second
	}
	if reconnectDelay, ok := options["reconnect_delay_ms"].(int64); ok && reconnectDelay > 0 {
		cfg.ReconnectDelay = time.Duration(reconnectDelay) * time.Millisecond
	}
	if maxReconnectDelay, ok := options["max_reconnect_delay_seconds"].(int64); ok && maxReconnectDelay > 0 {
		cfg.MaxReconnectDelay = time.Duration(maxReconnectDelay) * time.Second
	}
	if backoff, ok := options["reconnect_backoff"].(float64); ok && backoff >= 1.0 {
		cfg.ReconnectBackoff = backoff
	}

	// Extract SSL config
	if ssl, ok := options["ssl"].(map[string]any); ok {
		cfg.SSL = &config.SSLConfig{}
		cfg.SSL.Enabled, _ = ssl["enabled"].(bool)
		if certFile, ok := ssl["cert_file"].(string); ok {
			cfg.SSL.CertFile = certFile
		}
		if keyFile, ok := ssl["key_file"].(string); ok {
			cfg.SSL.KeyFile = keyFile
		}
		cfg.SSL.ClientAuth, _ = ssl["client_auth"].(bool)
		if caFile, ok := ssl["client_ca_file"].(string); ok {
			cfg.SSL.ClientCAFile = caFile
		}
		if insecure, ok := ssl["insecure_skip_verify"].(bool); ok {
			cfg.SSL.InsecureSkipVerify = insecure
		}
	}

	t := &TCPClientSink{
		input:     make(chan core.LogEntry, cfg.BufferSize),
		config:    cfg,
		done:      make(chan struct{}),
		startTime: time.Now(),
		logger:    logger,
		formatter: formatter,
	}
	t.lastProcessed.Store(time.Time{})
	t.connectionUptime.Store(time.Duration(0))

	// Initialize TLS manager if SSL is configured
	if cfg.SSL != nil && cfg.SSL.Enabled {
		tlsManager, err := tlspkg.NewManager(cfg.SSL, logger)
		if err != nil {
			return nil, fmt.Errorf("failed to create TLS manager: %w", err)
		}
		t.tlsManager = tlsManager

		// Get client TLS config
		t.tlsConfig = tlsManager.GetTCPConfig()

		// Client-specific TLS config adjustments
		t.tlsConfig.InsecureSkipVerify = cfg.SSL.InsecureSkipVerify

		// Extract server name from address for SNI
		host, _, err := net.SplitHostPort(cfg.Address)
		if err != nil {
			return nil, fmt.Errorf("failed to parse address for SNI: %w", err)
		}
		t.tlsConfig.ServerName = host

		logger.Info("msg", "TLS enabled for TCP client",
			"component", "tcp_client_sink",
			"address", cfg.Address,
			"server_name", host,
			"insecure", cfg.SSL.InsecureSkipVerify)
	}

	return t, nil
}
|
|
||||||
|
|
||||||
func (t *TCPClientSink) Input() chan<- core.LogEntry {
|
|
||||||
return t.input
|
|
||||||
}
|
|
||||||
|
|
||||||
func (t *TCPClientSink) Start(ctx context.Context) error {
|
|
||||||
// Start connection manager
|
|
||||||
t.wg.Add(1)
|
|
||||||
go t.connectionManager(ctx)
|
|
||||||
|
|
||||||
// Start processing loop
|
|
||||||
t.wg.Add(1)
|
|
||||||
go t.processLoop(ctx)
|
|
||||||
|
|
||||||
t.logger.Info("msg", "TCP client sink started",
|
|
||||||
"component", "tcp_client_sink",
|
|
||||||
"address", t.config.Address)
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (t *TCPClientSink) Stop() {
|
|
||||||
t.logger.Info("msg", "Stopping TCP client sink")
|
|
||||||
close(t.done)
|
|
||||||
t.wg.Wait()
|
|
||||||
|
|
||||||
// Close connection
|
|
||||||
t.connMu.Lock()
|
|
||||||
if t.conn != nil {
|
|
||||||
_ = t.conn.Close()
|
|
||||||
}
|
|
||||||
t.connMu.Unlock()
|
|
||||||
|
|
||||||
t.logger.Info("msg", "TCP client sink stopped",
|
|
||||||
"total_processed", t.totalProcessed.Load(),
|
|
||||||
"total_failed", t.totalFailed.Load(),
|
|
||||||
"total_reconnects", t.totalReconnects.Load())
|
|
||||||
}
|
|
||||||
|
|
||||||
func (t *TCPClientSink) GetStats() SinkStats {
|
|
||||||
lastProc, _ := t.lastProcessed.Load().(time.Time)
|
|
||||||
uptime, _ := t.connectionUptime.Load().(time.Duration)
|
|
||||||
|
|
||||||
t.connMu.RLock()
|
|
||||||
connected := t.conn != nil
|
|
||||||
t.connMu.RUnlock()
|
|
||||||
|
|
||||||
activeConns := int64(0)
|
|
||||||
if connected {
|
|
||||||
activeConns = 1
|
|
||||||
}
|
|
||||||
|
|
||||||
return SinkStats{
|
|
||||||
Type: "tcp_client",
|
|
||||||
TotalProcessed: t.totalProcessed.Load(),
|
|
||||||
ActiveConnections: activeConns,
|
|
||||||
StartTime: t.startTime,
|
|
||||||
LastProcessed: lastProc,
|
|
||||||
Details: map[string]any{
|
|
||||||
"address": t.config.Address,
|
|
||||||
"connected": connected,
|
|
||||||
"reconnecting": t.reconnecting.Load(),
|
|
||||||
"total_failed": t.totalFailed.Load(),
|
|
||||||
"total_reconnects": t.totalReconnects.Load(),
|
|
||||||
"connection_uptime": uptime.Seconds(),
|
|
||||||
"last_error": fmt.Sprintf("%v", t.lastConnectErr),
|
|
||||||
},
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func (t *TCPClientSink) connectionManager(ctx context.Context) {
|
|
||||||
defer t.wg.Done()
|
|
||||||
|
|
||||||
reconnectDelay := t.config.ReconnectDelay
|
|
||||||
|
|
||||||
for {
|
|
||||||
select {
|
|
||||||
case <-ctx.Done():
|
|
||||||
return
|
|
||||||
case <-t.done:
|
|
||||||
return
|
|
||||||
default:
|
|
||||||
}
|
|
||||||
|
|
||||||
// Attempt to connect
|
|
||||||
t.reconnecting.Store(true)
|
|
||||||
conn, err := t.connect()
|
|
||||||
t.reconnecting.Store(false)
|
|
||||||
|
|
||||||
if err != nil {
|
|
||||||
t.lastConnectErr = err
|
|
||||||
t.logger.Warn("msg", "Failed to connect to TCP server",
|
|
||||||
"component", "tcp_client_sink",
|
|
||||||
"address", t.config.Address,
|
|
||||||
"error", err,
|
|
||||||
"retry_delay", reconnectDelay)
|
|
||||||
|
|
||||||
// Wait before retry
|
|
||||||
select {
|
|
||||||
case <-ctx.Done():
|
|
||||||
return
|
|
||||||
case <-t.done:
|
|
||||||
return
|
|
||||||
case <-time.After(reconnectDelay):
|
|
||||||
}
|
|
||||||
|
|
||||||
// Exponential backoff
|
|
||||||
reconnectDelay = time.Duration(float64(reconnectDelay) * t.config.ReconnectBackoff)
|
|
||||||
if reconnectDelay > t.config.MaxReconnectDelay {
|
|
||||||
reconnectDelay = t.config.MaxReconnectDelay
|
|
||||||
}
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
|
|
||||||
// Connection successful
|
|
||||||
t.lastConnectErr = nil
|
|
||||||
reconnectDelay = t.config.ReconnectDelay // Reset backoff
|
|
||||||
t.connectTime = time.Now()
|
|
||||||
t.totalReconnects.Add(1)
|
|
||||||
|
|
||||||
t.connMu.Lock()
|
|
||||||
t.conn = conn
|
|
||||||
t.connMu.Unlock()
|
|
||||||
|
|
||||||
t.logger.Info("msg", "Connected to TCP server",
|
|
||||||
"component", "tcp_client_sink",
|
|
||||||
"address", t.config.Address,
|
|
||||||
"local_addr", conn.LocalAddr())
|
|
||||||
|
|
||||||
// Monitor connection
|
|
||||||
t.monitorConnection(conn)
|
|
||||||
|
|
||||||
// Connection lost, clear it
|
|
||||||
t.connMu.Lock()
|
|
||||||
t.conn = nil
|
|
||||||
t.connMu.Unlock()
|
|
||||||
|
|
||||||
// Update connection uptime
|
|
||||||
uptime := time.Since(t.connectTime)
|
|
||||||
t.connectionUptime.Store(uptime)
|
|
||||||
|
|
||||||
t.logger.Warn("msg", "Lost connection to TCP server",
|
|
||||||
"component", "tcp_client_sink",
|
|
||||||
"address", t.config.Address,
|
|
||||||
"uptime", uptime)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func (t *TCPClientSink) connect() (net.Conn, error) {
|
|
||||||
dialer := &net.Dialer{
|
|
||||||
Timeout: t.config.DialTimeout,
|
|
||||||
KeepAlive: t.config.KeepAlive,
|
|
||||||
}
|
|
||||||
|
|
||||||
conn, err := dialer.Dial("tcp", t.config.Address)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
|
|
||||||
// Set TCP keep-alive
|
|
||||||
if tcpConn, ok := conn.(*net.TCPConn); ok {
|
|
||||||
tcpConn.SetKeepAlive(true)
|
|
||||||
tcpConn.SetKeepAlivePeriod(t.config.KeepAlive)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Wrap with TLS if configured
|
|
||||||
if t.tlsConfig != nil {
|
|
||||||
t.logger.Debug("msg", "Initiating TLS handshake",
|
|
||||||
"component", "tcp_client_sink",
|
|
||||||
"address", t.config.Address)
|
|
||||||
|
|
||||||
tlsConn := tls.Client(conn, t.tlsConfig)
|
|
||||||
|
|
||||||
// Perform handshake with timeout
|
|
||||||
handshakeCtx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
|
|
||||||
defer cancel()
|
|
||||||
|
|
||||||
if err := tlsConn.HandshakeContext(handshakeCtx); err != nil {
|
|
||||||
conn.Close()
|
|
||||||
return nil, fmt.Errorf("TLS handshake failed: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Log connection details
|
|
||||||
state := tlsConn.ConnectionState()
|
|
||||||
t.logger.Info("msg", "TLS connection established",
|
|
||||||
"component", "tcp_client_sink",
|
|
||||||
"address", t.config.Address,
|
|
||||||
"tls_version", tlsVersionString(state.Version),
|
|
||||||
"cipher_suite", tls.CipherSuiteName(state.CipherSuite),
|
|
||||||
"server_name", state.ServerName)
|
|
||||||
|
|
||||||
return tlsConn, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
return conn, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (t *TCPClientSink) monitorConnection(conn net.Conn) {
|
|
||||||
// Simple connection monitoring by periodic zero-byte reads
|
|
||||||
ticker := time.NewTicker(5 * time.Second)
|
|
||||||
defer ticker.Stop()
|
|
||||||
|
|
||||||
buf := make([]byte, 1)
|
|
||||||
for {
|
|
||||||
select {
|
|
||||||
case <-t.done:
|
|
||||||
return
|
|
||||||
case <-ticker.C:
|
|
||||||
// Set read deadline
|
|
||||||
// TODO: Add t.config.ReadTimeout and after addition use it instead of static value
|
|
||||||
if err := conn.SetReadDeadline(time.Now().Add(100 * time.Millisecond)); err != nil {
|
|
||||||
t.logger.Debug("msg", "Failed to set read deadline", "error", err)
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
// Try to read (we don't expect any data)
|
|
||||||
_, err := conn.Read(buf)
|
|
||||||
if err != nil {
|
|
||||||
var netErr net.Error
|
|
||||||
if errors.As(err, &netErr) && netErr.Timeout() {
|
|
||||||
// Timeout is expected, connection is still alive
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
// Real error, connection is dead
|
|
||||||
return
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func (t *TCPClientSink) processLoop(ctx context.Context) {
|
|
||||||
defer t.wg.Done()
|
|
||||||
|
|
||||||
for {
|
|
||||||
select {
|
|
||||||
case entry, ok := <-t.input:
|
|
||||||
if !ok {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
t.totalProcessed.Add(1)
|
|
||||||
t.lastProcessed.Store(time.Now())
|
|
||||||
|
|
||||||
// Send entry
|
|
||||||
if err := t.sendEntry(entry); err != nil {
|
|
||||||
t.totalFailed.Add(1)
|
|
||||||
t.logger.Debug("msg", "Failed to send log entry",
|
|
||||||
"component", "tcp_client_sink",
|
|
||||||
"error", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
case <-ctx.Done():
|
|
||||||
return
|
|
||||||
case <-t.done:
|
|
||||||
return
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func (t *TCPClientSink) sendEntry(entry core.LogEntry) error {
|
|
||||||
// Get current connection
|
|
||||||
t.connMu.RLock()
|
|
||||||
conn := t.conn
|
|
||||||
t.connMu.RUnlock()
|
|
||||||
|
|
||||||
if conn == nil {
|
|
||||||
return fmt.Errorf("not connected")
|
|
||||||
}
|
|
||||||
|
|
||||||
// Format data
|
|
||||||
data, err := t.formatter.Format(entry)
|
|
||||||
if err != nil {
|
|
||||||
return fmt.Errorf("failed to marshal entry: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Set write deadline
|
|
||||||
if err := conn.SetWriteDeadline(time.Now().Add(t.config.WriteTimeout)); err != nil {
|
|
||||||
return fmt.Errorf("failed to set write deadline: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Write data
|
|
||||||
n, err := conn.Write(data)
|
|
||||||
if err != nil {
|
|
||||||
// Connection error, it will be reconnected
|
|
||||||
return fmt.Errorf("write failed: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
if n != len(data) {
|
|
||||||
return fmt.Errorf("partial write: %d/%d bytes", n, len(data))
|
|
||||||
}
|
|
||||||
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// tlsVersionString returns human-readable TLS version
|
|
||||||
func tlsVersionString(version uint16) string {
|
|
||||||
switch version {
|
|
||||||
case tls.VersionTLS10:
|
|
||||||
return "TLS1.0"
|
|
||||||
case tls.VersionTLS11:
|
|
||||||
return "TLS1.1"
|
|
||||||
case tls.VersionTLS12:
|
|
||||||
return "TLS1.2"
|
|
||||||
case tls.VersionTLS13:
|
|
||||||
return "TLS1.3"
|
|
||||||
default:
|
|
||||||
return fmt.Sprintf("0x%04x", version)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
224 src/internal/source/console/console.go (new file)
@@ -0,0 +1,224 @@
package console

import (
	"bufio"
	"fmt"
	"os"
	"sync/atomic"
	"time"

	"logwisp/src/internal/config"
	"logwisp/src/internal/core"
	"logwisp/src/internal/plugin"
	"logwisp/src/internal/session"
	"logwisp/src/internal/source"

	lconfig "github.com/lixenwraith/config"
	"github.com/lixenwraith/log"
)

// init registers the component in plugin factory
func init() {
	if err := plugin.RegisterSource("console", NewConsoleSourcePlugin); err != nil {
		panic(fmt.Sprintf("failed to register console source: %v", err))
	}

	// Console stdin can only have one reader
	if err := plugin.SetSourceMetadata("console", &plugin.PluginMetadata{
		Capabilities: []core.Capability{core.CapSessionAware, core.CapSingleInstance},
		MaxInstances: 1,
	}); err != nil {
		panic(fmt.Sprintf("failed to set console source metadata: %v", err))
	}
}

// ConsoleSource reads log entries from the standard input stream
type ConsoleSource struct {
	// Plugin identity and session management
	id      string
	proxy   *session.Proxy
	session *session.Session

	// Configuration
	config *config.ConsoleSourceOptions

	// Application
	subscribers []chan core.LogEntry
	logger      *log.Logger

	// Runtime
	done chan struct{}

	// Statistics
	totalEntries   atomic.Uint64
	droppedEntries atomic.Uint64
	startTime      time.Time
	lastEntryTime  atomic.Value // time.Time
}

const (
	DefaultConsoleSourceBufferSize = 1000
)

// NewConsoleSourcePlugin creates a console source through plugin factory
func NewConsoleSourcePlugin(
	id string,
	configMap map[string]any,
	logger *log.Logger,
	proxy *session.Proxy,
) (source.Source, error) {
	opts := &config.ConsoleSourceOptions{}

	// Scan config map
	if err := lconfig.ScanMap(configMap, opts); err != nil {
		return nil, fmt.Errorf("failed to parse config: %w", err)
	}

	// Validate and apply defaults
	if opts.BufferSize <= 0 {
		opts.BufferSize = DefaultConsoleSourceBufferSize
	}

	// Create and return plugin instance
	cs := &ConsoleSource{
		id:          id,
		proxy:       proxy,
		config:      opts,
		subscribers: make([]chan core.LogEntry, 0),
		done:        make(chan struct{}),
		logger:      logger,
	}
	cs.lastEntryTime.Store(time.Time{})

	// Create session
	cs.session = proxy.CreateSession(
		"console_stdin",
		map[string]any{
			"instance_id": id,
			"type":        "console",
		},
	)

	cs.logger.Info("msg", "Console source initialized",
		"component", "console_source",
		"instance_id", id)

	return cs, nil
}

// Capabilities returns supported capabilities
func (s *ConsoleSource) Capabilities() []core.Capability {
	return []core.Capability{
		core.CapSessionAware, // Single console session
	}
}

// Subscribe returns a channel for receiving log entries.
func (s *ConsoleSource) Subscribe() <-chan core.LogEntry {
	ch := make(chan core.LogEntry, s.config.BufferSize)
	s.subscribers = append(s.subscribers, ch)
	return ch
}

// Start begins reading from the standard input.
func (s *ConsoleSource) Start() error {
	s.startTime = time.Now()
	go s.readLoop()

	// Update session activity
	s.proxy.UpdateActivity(s.session.ID)

	s.logger.Info("msg", "Console source started",
		"component", "console_source",
		"instance_id", s.id)
	return nil
}

// Stop signals the source to stop reading.
func (s *ConsoleSource) Stop() {
	close(s.done)

	// Remove session
	if s.session != nil {
		s.proxy.RemoveSession(s.session.ID)
	}

	// Close subscriber channels
	for _, ch := range s.subscribers {
		close(ch)
	}

	s.logger.Info("msg", "Console source stopped",
		"component", "console_source",
		"instance_id", s.id)
}

// GetStats returns the source's statistics
func (s *ConsoleSource) GetStats() source.SourceStats {
	lastEntry, _ := s.lastEntryTime.Load().(time.Time)

	return source.SourceStats{
		Type:           "console",
		TotalEntries:   s.totalEntries.Load(),
		DroppedEntries: s.droppedEntries.Load(),
		StartTime:      s.startTime,
		LastEntryTime:  lastEntry,
		Details:        map[string]any{},
	}
}

// readLoop continuously reads lines from stdin and publishes them
func (s *ConsoleSource) readLoop() {
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		select {
		case <-s.done:
			return
		default:
			// Update session activity on each read
			s.proxy.UpdateActivity(s.session.ID)

			// Get raw line
			lineBytes := scanner.Bytes()
			if len(lineBytes) == 0 {
				continue
			}

			// Add newline back (scanner strips it)
			lineWithNewline := append(lineBytes, '\n')

			entry := core.LogEntry{
				Time:    time.Now(),
				Source:  "console",
				Message: string(lineWithNewline), // Keep newline
				Level:   source.ExtractLogLevel(string(lineBytes)),
				RawSize: int64(len(lineWithNewline)),
			}

			s.publish(entry)
		}
	}

	if err := scanner.Err(); err != nil {
		s.logger.Error("msg", "Scanner error reading stdin",
			"component", "console_source",
			"instance_id", s.id,
			"error", err)
	}
}

// publish sends a log entry to all subscribers
func (s *ConsoleSource) publish(entry core.LogEntry) {
	s.totalEntries.Add(1)
	s.lastEntryTime.Store(entry.Time)

	for _, ch := range s.subscribers {
		select {
		case ch <- entry:
		default:
			s.droppedEntries.Add(1)
			s.logger.Debug("msg", "Dropped log entry - subscriber buffer full",
				"component", "console_source")
		}
	}
}
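The `publish` fan-out above (repeated in the other sources) uses a `select` with a `default` branch so that one slow subscriber never blocks stdin reading; the entry is counted as dropped instead. A minimal sketch of that non-blocking send pattern in isolation:

```go
package main

import "fmt"

// sendOrDrop attempts a non-blocking send, as in ConsoleSource.publish:
// if the subscriber's buffer is full, the value is dropped rather than
// blocking the producer.
func sendOrDrop(ch chan int, v int) bool {
	select {
	case ch <- v:
		return true
	default:
		return false
	}
}

func main() {
	ch := make(chan int, 2) // buffer of 2, no reader draining it
	dropped := 0
	for v := 1; v <= 3; v++ {
		if !sendOrDrop(ch, v) {
			dropped++
		}
	}
	fmt.Println(dropped) // 1: the third send finds the buffer full
}
```

The trade-off is deliberate: log delivery is best-effort per subscriber, and the `droppedEntries` counter surfaces backpressure in `GetStats` instead of stalling the pipeline.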
src/internal/source/directory.go (deleted file)
@@ -1,289 +0,0 @@
// FILE: logwisp/src/internal/source/directory.go
package source

import (
	"context"
	"errors"
	"fmt"
	"os"
	"path/filepath"
	"regexp"
	"strings"
	"sync"
	"sync/atomic"
	"time"

	"logwisp/src/internal/core"

	"github.com/lixenwraith/log"
)

// DirectorySource monitors a directory for log files
type DirectorySource struct {
	path           string
	pattern        string
	checkInterval  time.Duration
	subscribers    []chan core.LogEntry
	watchers       map[string]*fileWatcher
	mu             sync.RWMutex
	ctx            context.Context
	cancel         context.CancelFunc
	wg             sync.WaitGroup
	totalEntries   atomic.Uint64
	droppedEntries atomic.Uint64
	startTime      time.Time
	lastEntryTime  atomic.Value // time.Time
	logger         *log.Logger
}

// NewDirectorySource creates a new directory monitoring source
func NewDirectorySource(options map[string]any, logger *log.Logger) (*DirectorySource, error) {
	path, ok := options["path"].(string)
	if !ok {
		return nil, fmt.Errorf("directory source requires 'path' option")
	}

	pattern, _ := options["pattern"].(string)
	if pattern == "" {
		pattern = "*"
	}

	checkInterval := 100 * time.Millisecond
	if ms, ok := options["check_interval_ms"].(int64); ok && ms > 0 {
		checkInterval = time.Duration(ms) * time.Millisecond
	}

	absPath, err := filepath.Abs(path)
	if err != nil {
		return nil, fmt.Errorf("invalid path %s: %w", path, err)
	}

	ds := &DirectorySource{
		path:          absPath,
		pattern:       pattern,
		checkInterval: checkInterval,
		watchers:      make(map[string]*fileWatcher),
		startTime:     time.Now(),
		logger:        logger,
	}
	ds.lastEntryTime.Store(time.Time{})

	return ds, nil
}

func (ds *DirectorySource) Subscribe() <-chan core.LogEntry {
	ds.mu.Lock()
	defer ds.mu.Unlock()

	ch := make(chan core.LogEntry, 1000)
	ds.subscribers = append(ds.subscribers, ch)
	return ch
}

func (ds *DirectorySource) Start() error {
	ds.ctx, ds.cancel = context.WithCancel(context.Background())
	ds.wg.Add(1)
	go ds.monitorLoop()

	ds.logger.Info("msg", "Directory source started",
		"component", "directory_source",
		"path", ds.path,
		"pattern", ds.pattern,
		"check_interval_ms", ds.checkInterval.Milliseconds())
	return nil
}

func (ds *DirectorySource) Stop() {
	if ds.cancel != nil {
		ds.cancel()
	}
	ds.wg.Wait()

	ds.mu.Lock()
	for _, w := range ds.watchers {
		w.close()
	}
	for _, ch := range ds.subscribers {
		close(ch)
	}
	ds.mu.Unlock()

	ds.logger.Info("msg", "Directory source stopped",
		"component", "directory_source",
		"path", ds.path)
}

func (ds *DirectorySource) GetStats() SourceStats {
	lastEntry, _ := ds.lastEntryTime.Load().(time.Time)

	ds.mu.RLock()
	watcherCount := int64(len(ds.watchers))
	details := make(map[string]any)

	// Add watcher details
	watchers := make([]map[string]any, 0, watcherCount)
	for _, w := range ds.watchers {
		info := w.getInfo()
		watchers = append(watchers, map[string]any{
			"path":         info.Path,
			"size":         info.Size,
			"position":     info.Position,
			"entries_read": info.EntriesRead,
			"rotations":    info.Rotations,
			"last_read":    info.LastReadTime,
		})
	}
	details["watchers"] = watchers
	details["active_watchers"] = watcherCount
	ds.mu.RUnlock()

	return SourceStats{
		Type:           "directory",
		TotalEntries:   ds.totalEntries.Load(),
		DroppedEntries: ds.droppedEntries.Load(),
		StartTime:      ds.startTime,
		LastEntryTime:  lastEntry,
		Details:        details,
	}
}

func (ds *DirectorySource) publish(entry core.LogEntry) {
	ds.mu.RLock()
	defer ds.mu.RUnlock()

	ds.totalEntries.Add(1)
	ds.lastEntryTime.Store(entry.Time)

	for _, ch := range ds.subscribers {
		select {
		case ch <- entry:
		default:
			ds.droppedEntries.Add(1)
			ds.logger.Debug("msg", "Dropped log entry - subscriber buffer full",
				"component", "directory_source")
		}
	}
}

func (ds *DirectorySource) monitorLoop() {
	defer ds.wg.Done()

	ds.checkTargets()

	ticker := time.NewTicker(ds.checkInterval)
	defer ticker.Stop()

	for {
		select {
		case <-ds.ctx.Done():
			return
		case <-ticker.C:
			ds.checkTargets()
		}
	}
}

func (ds *DirectorySource) checkTargets() {
	files, err := ds.scanDirectory()
	if err != nil {
		ds.logger.Warn("msg", "Failed to scan directory",
			"component", "directory_source",
			"path", ds.path,
			"pattern", ds.pattern,
			"error", err)
		return
	}

	for _, file := range files {
		ds.ensureWatcher(file)
	}

	ds.cleanupWatchers()
}

func (ds *DirectorySource) scanDirectory() ([]string, error) {
	entries, err := os.ReadDir(ds.path)
	if err != nil {
		return nil, err
	}

	// Convert glob pattern to regex
	regexPattern := globToRegex(ds.pattern)
	re, err := regexp.Compile(regexPattern)
	if err != nil {
		return nil, fmt.Errorf("invalid pattern regex: %w", err)
	}

	var files []string
	for _, entry := range entries {
		if entry.IsDir() {
			continue
		}

		name := entry.Name()
		if re.MatchString(name) {
			files = append(files, filepath.Join(ds.path, name))
		}
	}

	return files, nil
}

func (ds *DirectorySource) ensureWatcher(path string) {
	ds.mu.Lock()
	defer ds.mu.Unlock()

	if _, exists := ds.watchers[path]; exists {
		return
	}

	w := newFileWatcher(path, ds.publish, ds.logger)
	ds.watchers[path] = w

	ds.logger.Debug("msg", "Created file watcher",
		"component", "directory_source",
		"path", path)

	ds.wg.Add(1)
	go func() {
		defer ds.wg.Done()
		if err := w.watch(ds.ctx); err != nil {
			if errors.Is(err, context.Canceled) {
				ds.logger.Debug("msg", "Watcher cancelled",
					"component", "directory_source",
					"path", path)
			} else {
				ds.logger.Error("msg", "Watcher failed",
					"component", "directory_source",
					"path", path,
					"error", err)
			}
		}

		ds.mu.Lock()
		delete(ds.watchers, path)
		ds.mu.Unlock()
	}()
}

func (ds *DirectorySource) cleanupWatchers() {
	ds.mu.Lock()
	defer ds.mu.Unlock()

	for path, w := range ds.watchers {
		if _, err := os.Stat(path); os.IsNotExist(err) {
			w.stop()
			delete(ds.watchers, path)
			ds.logger.Debug("msg", "Cleaned up watcher for non-existent file",
				"component", "directory_source",
				"path", path)
		}
	}
}

func globToRegex(glob string) string {
	regex := regexp.QuoteMeta(glob)
	regex = strings.ReplaceAll(regex, `\*`, `.*`)
	regex = strings.ReplaceAll(regex, `\?`, `.`)
	return "^" + regex + "$"
}
363 src/internal/source/file/file.go (new file)
@@ -0,0 +1,363 @@
package file

import (
	"context"
	"errors"
	"fmt"
	"os"
	"path/filepath"
	"regexp"
	"strings"
	"sync"
	"sync/atomic"
	"time"

	"logwisp/src/internal/config"
	"logwisp/src/internal/core"
	"logwisp/src/internal/plugin"
	"logwisp/src/internal/session"
	"logwisp/src/internal/source"

	lconfig "github.com/lixenwraith/config"
	"github.com/lixenwraith/log"
)

// init registers the component in plugin factory
func init() {
	if err := plugin.RegisterSource("file", NewFileSourcePlugin); err != nil {
		panic(fmt.Sprintf("failed to register file source: %v", err))
	}
}

// FileSource monitors log files and tails them
type FileSource struct {
	// Plugin identity and session management
	id      string
	proxy   *session.Proxy
	session *session.Session

	// Configuration
	config *config.FileSourceOptions

	// Application
	subscribers []chan core.LogEntry
	watchers    map[string]*fileWatcher
	logger      *log.Logger

	// Runtime
	mu     sync.RWMutex
	ctx    context.Context
	cancel context.CancelFunc
	wg     sync.WaitGroup

	// Statistics
	totalEntries   atomic.Uint64
	droppedEntries atomic.Uint64
	startTime      time.Time
	lastEntryTime  atomic.Value // time.Time
}

const (
	DefaultFileSourcePattern         = "*"
	DefaultFileSourceCheckIntervalMS = 100
	MinFileSourceCheckIntervalMS     = 10
)

// NewFileSourcePlugin creates a file source through plugin factory
func NewFileSourcePlugin(
	id string,
	configMap map[string]any,
	logger *log.Logger,
	proxy *session.Proxy,
) (source.Source, error) {
	opts := &config.FileSourceOptions{}

	// Use lconfig to scan map into struct (overriding defaults)
	if err := lconfig.ScanMap(configMap, opts); err != nil {
		return nil, fmt.Errorf("failed to parse config: %w", err)
	}

	// Validate and apply defaults
	if err := lconfig.NonEmpty(opts.Directory); err != nil {
		return nil, fmt.Errorf("directory: %w", err)
	}

	if opts.Pattern == "" {
		opts.Pattern = DefaultFileSourcePattern
	}
	if opts.CheckIntervalMS <= 0 {
		opts.CheckIntervalMS = DefaultFileSourceCheckIntervalMS
	} else if opts.CheckIntervalMS < MinFileSourceCheckIntervalMS {
		return nil, fmt.Errorf("check_interval_ms: must be >= %d", MinFileSourceCheckIntervalMS)
	}

	// Create and return plugin instance
	fs := &FileSource{
		id:          id,
		proxy:       proxy,
		config:      opts,
		subscribers: make([]chan core.LogEntry, 0),
		watchers:    make(map[string]*fileWatcher),
		logger:      logger,
	}
	fs.lastEntryTime.Store(time.Time{})

	fs.session = proxy.CreateSession(
		fmt.Sprintf("file:///%s/%s", opts.Directory, opts.Pattern),
		map[string]any{
			"instance_id": id,
			"type":        "file",
			"directory":   opts.Directory,
			"pattern":     opts.Pattern,
		},
	)

	fs.logger.Info("msg", "File source initialized",
		"component", "file_source",
		"instance_id", id,
		"directory", opts.Directory,
		"pattern", opts.Pattern)

	return fs, nil
}

// Capabilities returns supported capabilities
func (fs *FileSource) Capabilities() []core.Capability {
	return []core.Capability{
		core.CapSessionAware, // Tracks sessions per file
		core.CapMultiSession, // Multiple file sessions
	}
}

// Subscribe returns a channel for receiving log entries
func (fs *FileSource) Subscribe() <-chan core.LogEntry {
	fs.mu.Lock()
	defer fs.mu.Unlock()

	ch := make(chan core.LogEntry, 1000)
	fs.subscribers = append(fs.subscribers, ch)
	return ch
}

// Start begins the file monitoring loop
func (fs *FileSource) Start() error {
	fs.ctx, fs.cancel = context.WithCancel(context.Background())
	fs.startTime = time.Now()
	fs.wg.Add(1)
	go fs.monitorLoop()

	fs.logger.Info("msg", "File source started",
		"component", "file_source",
		"path", fs.config.Directory,
		"pattern", fs.config.Pattern,
		"check_interval_ms", fs.config.CheckIntervalMS)
	return nil
}

// Stop gracefully shuts down the file source and all file watchers
func (fs *FileSource) Stop() {
	if fs.cancel != nil {
		fs.cancel()
	}
	fs.wg.Wait()

	fs.proxy.RemoveSession(fs.id)

	fs.mu.Lock()
	for _, w := range fs.watchers {
		w.stop()
	}
	for _, ch := range fs.subscribers {
		close(ch)
	}
	fs.mu.Unlock()

	fs.logger.Info("msg", "File source stopped",
		"component", "file_source",
		"instance_id", fs.id,
		"path", fs.config.Directory)
}

// GetStats returns the source's statistics, including active watchers.
func (fs *FileSource) GetStats() source.SourceStats {
	lastEntry, _ := fs.lastEntryTime.Load().(time.Time)

	fs.mu.RLock()
	watcherCount := int64(len(fs.watchers))
	details := make(map[string]any)

	// Add watcher details
	watchers := make([]map[string]any, 0, watcherCount)
	for _, w := range fs.watchers {
		info := w.getInfo()
		watchers = append(watchers, map[string]any{
			"directory":    info.Directory,
			"size":         info.Size,
			"position":     info.Position,
			"entries_read": info.EntriesRead,
			"rotations":    info.Rotations,
			"last_read":    info.LastReadTime,
		})
	}
	details["watchers"] = watchers
	details["active_watchers"] = watcherCount
	fs.mu.RUnlock()

	return source.SourceStats{
		ID:             fs.id,
		Type:           "file",
		TotalEntries:   fs.totalEntries.Load(),
		DroppedEntries: fs.droppedEntries.Load(),
		StartTime:      fs.startTime,
		LastEntryTime:  lastEntry,
		Details:        details,
	}
}

// monitorLoop periodically scans path for new or changed files.
func (fs *FileSource) monitorLoop() {
|
||||||
|
defer fs.wg.Done()
|
||||||
|
|
||||||
|
fs.checkTargets()
|
||||||
|
|
||||||
|
ticker := time.NewTicker(time.Duration(fs.config.CheckIntervalMS) * time.Millisecond)
|
||||||
|
defer ticker.Stop()
|
||||||
|
|
||||||
|
for {
|
||||||
|
select {
|
||||||
|
case <-fs.ctx.Done():
|
||||||
|
return
|
||||||
|
case <-ticker.C:
|
||||||
|
fs.checkTargets()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// checkTargets finds matching files and ensures watchers are running for them.
|
||||||
|
func (fs *FileSource) checkTargets() {
|
||||||
|
files, err := fs.scanFile()
|
||||||
|
if err != nil {
|
||||||
|
fs.logger.Warn("msg", "Failed to scan file",
|
||||||
|
"component", "file_source",
|
||||||
|
"path", fs.config.Directory,
|
||||||
|
"pattern", fs.config.Pattern,
|
||||||
|
"error", err)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, file := range files {
|
||||||
|
fs.ensureWatcher(file)
|
||||||
|
}
|
||||||
|
|
||||||
|
fs.cleanupWatchers()
|
||||||
|
}
|
||||||
|
|
||||||
|
// ensureWatcher creates and starts a new file watcher if one doesn't exist for the given path.
|
||||||
|
func (fs *FileSource) ensureWatcher(path string) {
|
||||||
|
fs.mu.Lock()
|
||||||
|
defer fs.mu.Unlock()
|
||||||
|
|
||||||
|
if _, exists := fs.watchers[path]; exists {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
w := newFileWatcher(path, fs.publish, fs.logger)
|
||||||
|
fs.watchers[path] = w
|
||||||
|
|
||||||
|
fs.logger.Debug("msg", "Created file watcher",
|
||||||
|
"component", "file_source",
|
||||||
|
"path", path)
|
||||||
|
|
||||||
|
fs.wg.Add(1)
|
||||||
|
go func() {
|
||||||
|
defer fs.wg.Done()
|
||||||
|
if err := w.watch(fs.ctx); err != nil {
|
||||||
|
if errors.Is(err, context.Canceled) {
|
||||||
|
fs.logger.Debug("msg", "Watcher cancelled",
|
||||||
|
"component", "file_source",
|
||||||
|
"path", path)
|
||||||
|
} else {
|
||||||
|
fs.logger.Error("msg", "Watcher failed",
|
||||||
|
"component", "file_source",
|
||||||
|
"path", path,
|
||||||
|
"error", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
fs.mu.Lock()
|
||||||
|
delete(fs.watchers, path)
|
||||||
|
fs.mu.Unlock()
|
||||||
|
}()
|
||||||
|
}
|
||||||
|
|
||||||
|
// cleanupWatchers stops and removes watchers for files that no longer exist.
|
||||||
|
func (fs *FileSource) cleanupWatchers() {
|
||||||
|
fs.mu.Lock()
|
||||||
|
defer fs.mu.Unlock()
|
||||||
|
|
||||||
|
for path, w := range fs.watchers {
|
||||||
|
if _, err := os.Stat(path); os.IsNotExist(err) {
|
||||||
|
w.stop()
|
||||||
|
delete(fs.watchers, path)
|
||||||
|
fs.logger.Debug("msg", "Cleaned up watcher for non-existent file",
|
||||||
|
"component", "file_source",
|
||||||
|
"path", path)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// publish sends a log entry to all subscribers.
|
||||||
|
func (fs *FileSource) publish(entry core.LogEntry) {
|
||||||
|
fs.mu.RLock()
|
||||||
|
defer fs.mu.RUnlock()
|
||||||
|
|
||||||
|
fs.totalEntries.Add(1)
|
||||||
|
fs.lastEntryTime.Store(entry.Time)
|
||||||
|
|
||||||
|
for _, ch := range fs.subscribers {
|
||||||
|
select {
|
||||||
|
case ch <- entry:
|
||||||
|
default:
|
||||||
|
fs.droppedEntries.Add(1)
|
||||||
|
fs.logger.Debug("msg", "Dropped log entry - subscriber buffer full",
|
||||||
|
"component", "file_source")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// scanFile finds all files in the configured path that match the pattern.
|
||||||
|
func (fs *FileSource) scanFile() ([]string, error) {
|
||||||
|
entries, err := os.ReadDir(fs.config.Directory)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
// Convert glob pattern to regex
|
||||||
|
regexPattern := globToRegex(fs.config.Pattern)
|
||||||
|
re, err := regexp.Compile(regexPattern)
|
||||||
|
if err != nil {
|
||||||
|
return nil, fmt.Errorf("invalid pattern regex: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
var files []string
|
||||||
|
for _, entry := range entries {
|
||||||
|
if entry.IsDir() {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
name := entry.Name()
|
||||||
|
if re.MatchString(name) {
|
||||||
|
files = append(files, filepath.Join(fs.config.Directory, name))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return files, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// globToRegex converts a simple glob pattern to a regular expression.
|
||||||
|
func globToRegex(glob string) string {
|
||||||
|
regex := regexp.QuoteMeta(glob)
|
||||||
|
regex = strings.ReplaceAll(regex, `\*`, `.*`)
|
||||||
|
regex = strings.ReplaceAll(regex, `\?`, `.`)
|
||||||
|
return "^" + regex + "$"
|
||||||
|
}
|
||||||
@@ -1,5 +1,4 @@
-// FILE: logwisp/src/internal/source/file_watcher.go
-package file
+package source
 
 import (
 	"bufio"
@@ -9,20 +8,20 @@ import (
 	"io"
 	"os"
 	"path/filepath"
-	"strings"
 	"sync"
 	"sync/atomic"
 	"syscall"
 	"time"
 
 	"logwisp/src/internal/core"
+	"logwisp/src/internal/source"
 
 	"github.com/lixenwraith/log"
 )
 
-// WatcherInfo contains information about a file watcher
+// WatcherInfo contains snapshot information about a file watcher's state
 type WatcherInfo struct {
-	Path      string
+	Directory string
 	Size      int64
 	Position  int64
 	ModTime   time.Time
@@ -31,8 +30,9 @@ type WatcherInfo struct {
 	Rotations int64
 }
 
+// fileWatcher tails a single file, handles rotations, and sends new lines to a callback
 type fileWatcher struct {
-	path      string
+	directory string
 	callback  func(core.LogEntry)
 	position  int64
 	size      int64
@@ -46,9 +46,10 @@ type fileWatcher struct {
 	logger *log.Logger
 }
 
-func newFileWatcher(path string, callback func(core.LogEntry), logger *log.Logger) *fileWatcher {
+// newFileWatcher creates a new watcher for a specific file path
+func newFileWatcher(directory string, callback func(core.LogEntry), logger *log.Logger) *fileWatcher {
 	w := &fileWatcher{
-		path:      path,
+		directory: directory,
 		callback:  callback,
 		position:  -1,
 		logger:    logger,
@@ -57,12 +58,13 @@ func newFileWatcher(path string, callback func(core.LogEntry), logger *log.Logge
 	return w
 }
 
+// watch starts the main monitoring loop for the file
 func (w *fileWatcher) watch(ctx context.Context) error {
 	if err := w.seekToEnd(); err != nil {
 		return fmt.Errorf("seekToEnd failed: %w", err)
 	}
 
-	ticker := time.NewTicker(100 * time.Millisecond)
+	ticker := time.NewTicker(core.FileWatcherPollInterval)
 	defer ticker.Stop()
 
 	for {
@@ -81,52 +83,36 @@ func (w *fileWatcher) watch(ctx context.Context) error {
 		}
 	}
 }
 
-// FILE: logwisp/src/internal/source/file_watcher.go
-func (w *fileWatcher) seekToEnd() error {
-	file, err := os.Open(w.path)
-	if err != nil {
-		if os.IsNotExist(err) {
-			w.mu.Lock()
-			w.position = 0
-			w.size = 0
-			w.modTime = time.Now()
-			w.inode = 0
-			w.mu.Unlock()
-			return nil
-		}
-		return err
-	}
-	defer file.Close()
-
-	info, err := file.Stat()
-	if err != nil {
-		return err
-	}
-
-	w.mu.Lock()
-	defer w.mu.Unlock()
-
-	// Keep existing position (including 0)
-	// First time initialization seeks to the end of the file
-	if w.position == -1 {
-		pos, err := file.Seek(0, io.SeekEnd)
-		if err != nil {
-			return err
-		}
-		w.position = pos
-	}
-
-	w.size = info.Size()
-	w.modTime = info.ModTime()
-	if stat, ok := info.Sys().(*syscall.Stat_t); ok {
-		w.inode = stat.Ino
-	}
-
-	return nil
+// stop signals the watcher to terminate its loop
+func (w *fileWatcher) stop() {
+	w.mu.Lock()
+	w.stopped = true
+	w.mu.Unlock()
 }
 
+// getInfo returns a snapshot of the watcher's current statistics
+func (w *fileWatcher) getInfo() WatcherInfo {
+	w.mu.Lock()
+	info := WatcherInfo{
+		Directory:   w.directory,
+		Size:        w.size,
+		Position:    w.position,
+		ModTime:     w.modTime,
+		EntriesRead: w.entriesRead.Load(),
+		Rotations:   w.rotationSeq,
+	}
+	w.mu.Unlock()
+
+	if lastRead, ok := w.lastReadTime.Load().(time.Time); ok {
+		info.LastReadTime = lastRead
+	}
+
+	return info
+}
+
+// checkFile examines the file for changes, rotations, or new content
 func (w *fileWatcher) checkFile() error {
-	file, err := os.Open(w.path)
+	file, err := os.Open(w.directory)
 	if err != nil {
 		if os.IsNotExist(err) {
 			// File doesn't exist yet, keep watching
@@ -134,7 +120,7 @@ func (w *fileWatcher) checkFile() error {
 		}
 		w.logger.Error("msg", "Failed to open file for checking",
 			"component", "file_watcher",
-			"path", w.path,
+			"directory", w.directory,
 			"error", err)
 		return err
 	}
@@ -144,7 +130,7 @@ func (w *fileWatcher) checkFile() error {
 	if err != nil {
 		w.logger.Error("msg", "Failed to stat file",
 			"component", "file_watcher",
-			"path", w.path,
+			"directory", w.directory,
 			"error", err)
 		return err
 	}
@@ -214,7 +200,7 @@ func (w *fileWatcher) checkFile() error {
 
 		w.logger.Debug("msg", "Atomic file update detected",
 			"component", "file_watcher",
-			"path", w.path,
+			"directory", w.directory,
 			"old_inode", oldInode,
 			"new_inode", currentInode,
 			"position", oldPos,
@@ -233,26 +219,26 @@ func (w *fileWatcher) checkFile() error {
 
 		w.callback(core.LogEntry{
 			Time:    time.Now(),
-			Source:  filepath.Base(w.path),
+			Source:  filepath.Base(w.directory),
 			Level:   "INFO",
 			Message: fmt.Sprintf("Log rotation detected (#%d): %s", seq, rotationReason),
 		})
 
 		w.logger.Info("msg", "Log rotation detected",
 			"component", "file_watcher",
-			"path", w.path,
+			"directory", w.directory,
 			"sequence", seq,
 			"reason", rotationReason)
 	}
 
-	// Only read if there's new content
+	// Read if there's new content OR if we need to continue from position
 	if currentSize > startPos {
 		if _, err := file.Seek(startPos, io.SeekStart); err != nil {
 			return err
 		}
 
 		scanner := bufio.NewScanner(file)
-		scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)
+		scanner.Buffer(make([]byte, 0, 64*1024), core.MaxLogEntryBytes)
 
 		for scanner.Scan() {
 			line := scanner.Text()
@@ -272,7 +258,7 @@ func (w *fileWatcher) checkFile() error {
 		if err := scanner.Err(); err != nil {
 			w.logger.Error("msg", "Scanner error while reading file",
 				"component", "file_watcher",
-				"path", w.path,
+				"directory", w.directory,
 				"position", startPos,
 				"error", err)
 			return err
@@ -311,6 +297,58 @@ func (w *fileWatcher) checkFile() error {
 	return nil
 }
 
+// seekToEnd sets the initial read position to the end of the file
+func (w *fileWatcher) seekToEnd() error {
+	file, err := os.Open(w.directory)
+	if err != nil {
+		if os.IsNotExist(err) {
+			w.mu.Lock()
+			w.position = 0
+			w.size = 0
+			w.modTime = time.Now()
+			w.inode = 0
+			w.mu.Unlock()
+			return nil
+		}
+		return err
+	}
+	defer file.Close()
+
+	info, err := file.Stat()
+	if err != nil {
+		return err
+	}
+
+	w.mu.Lock()
+	defer w.mu.Unlock()
+
+	// Keep existing position (including 0)
+	// First time initialization seeks to the end of the file
+	if w.position == -1 {
+		pos, err := file.Seek(0, io.SeekEnd)
+		if err != nil {
+			return err
+		}
+		w.position = pos
+	}
+
+	w.size = info.Size()
+	w.modTime = info.ModTime()
+	if stat, ok := info.Sys().(*syscall.Stat_t); ok {
+		w.inode = stat.Ino
+	}
+
+	return nil
+}
+
+// isStopped checks if the watcher has been instructed to stop
+func (w *fileWatcher) isStopped() bool {
+	w.mu.Lock()
+	defer w.mu.Unlock()
+	return w.stopped
+}
+
+// parseLine attempts to parse a line as JSON, falling back to plain text
 func (w *fileWatcher) parseLine(line string) core.LogEntry {
 	var jsonLog struct {
 		Time string `json:"time"`
@@ -327,78 +365,19 @@ func (w *fileWatcher) parseLine(line string) core.LogEntry {
 
 		return core.LogEntry{
 			Time:    timestamp,
-			Source:  filepath.Base(w.path),
+			Source:  filepath.Base(w.directory),
 			Level:   jsonLog.Level,
 			Message: jsonLog.Message,
 			Fields:  jsonLog.Fields,
 		}
 	}
 
-	level := extractLogLevel(line)
+	level := source.ExtractLogLevel(line)
 
 	return core.LogEntry{
 		Time:    time.Now(),
-		Source:  filepath.Base(w.path),
+		Source:  filepath.Base(w.directory),
 		Level:   level,
 		Message: line,
 	}
 }
 
-func extractLogLevel(line string) string {
-	patterns := []struct {
-		patterns []string
-		level    string
-	}{
-		{[]string{"[ERROR]", "ERROR:", " ERROR ", "ERR:", "[ERR]", "FATAL:", "[FATAL]"}, "ERROR"},
-		{[]string{"[WARN]", "WARN:", " WARN ", "WARNING:", "[WARNING]"}, "WARN"},
-		{[]string{"[INFO]", "INFO:", " INFO ", "[INF]", "INF:"}, "INFO"},
-		{[]string{"[DEBUG]", "DEBUG:", " DEBUG ", "[DBG]", "DBG:"}, "DEBUG"},
-		{[]string{"[TRACE]", "TRACE:", " TRACE "}, "TRACE"},
-	}
-
-	upperLine := strings.ToUpper(line)
-	for _, group := range patterns {
-		for _, pattern := range group.patterns {
-			if strings.Contains(upperLine, pattern) {
-				return group.level
-			}
-		}
-	}
-
-	return ""
-}
-
-func (w *fileWatcher) getInfo() WatcherInfo {
-	w.mu.Lock()
-	info := WatcherInfo{
-		Path:        w.path,
-		Size:        w.size,
-		Position:    w.position,
-		ModTime:     w.modTime,
-		EntriesRead: w.entriesRead.Load(),
-		Rotations:   w.rotationSeq,
-	}
-	w.mu.Unlock()
-
-	if lastRead, ok := w.lastReadTime.Load().(time.Time); ok {
-		info.LastReadTime = lastRead
-	}
-
-	return info
-}
-
-func (w *fileWatcher) close() {
-	w.stop()
-}
-
-func (w *fileWatcher) stop() {
-	w.mu.Lock()
-	w.stopped = true
-	w.mu.Unlock()
-}
-
-func (w *fileWatcher) isStopped() bool {
-	w.mu.Lock()
-	defer w.mu.Unlock()
-	return w.stopped
-}
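The diff above moves the plain-text level detection out of the watcher and into `source.ExtractLogLevel`. The technique it uses — uppercase the line, then scan ordered marker groups from most to least severe — can be sketched standalone; the function and sample lines here are illustrative, not the package's exported API:

```go
package main

import (
	"fmt"
	"strings"
)

// extractLogLevel scans a raw log line for common level markers, checking
// severity groups in order so "ERROR" wins over an incidental "INFO" match.
func extractLogLevel(line string) string {
	groups := []struct {
		markers []string
		level   string
	}{
		{[]string{"[ERROR]", "ERROR:", " ERROR ", "ERR:", "[ERR]", "FATAL:", "[FATAL]"}, "ERROR"},
		{[]string{"[WARN]", "WARN:", " WARN ", "WARNING:", "[WARNING]"}, "WARN"},
		{[]string{"[INFO]", "INFO:", " INFO ", "[INF]", "INF:"}, "INFO"},
		{[]string{"[DEBUG]", "DEBUG:", " DEBUG ", "[DBG]", "DBG:"}, "DEBUG"},
	}

	upper := strings.ToUpper(line)
	for _, g := range groups {
		for _, m := range g.markers {
			if strings.Contains(upper, m) {
				return g.level
			}
		}
	}
	return "" // unknown level: caller keeps the line as-is
}

func main() {
	fmt.Println(extractLogLevel("2024-01-01 [error] disk full")) // ERROR
	fmt.Println(extractLogLevel("warn: retrying"))               // WARN
	fmt.Println(extractLogLevel("hello world"))                  // empty string
}
```

Matching against the uppercased copy makes detection case-insensitive without allocating per marker, which matters since this runs on every non-JSON line the watcher tails.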
@ -1,447 +0,0 @@
|
|||||||
// FILE: logwisp/src/internal/source/http.go
|
|
||||||
package source
|
|
||||||
|
|
||||||
import (
|
|
||||||
"encoding/json"
|
|
||||||
"fmt"
|
|
||||||
"logwisp/src/internal/tls"
|
|
||||||
"net"
|
|
||||||
"sync"
|
|
||||||
"sync/atomic"
|
|
||||||
"time"
|
|
||||||
|
|
||||||
"logwisp/src/internal/config"
|
|
||||||
"logwisp/src/internal/core"
|
|
||||||
"logwisp/src/internal/limit"
|
|
||||||
|
|
||||||
"github.com/lixenwraith/log"
|
|
||||||
"github.com/valyala/fasthttp"
|
|
||||||
)
|
|
||||||
|
|
||||||
// HTTPSource receives log entries via HTTP POST requests
|
|
||||||
type HTTPSource struct {
|
|
||||||
port int64
|
|
||||||
ingestPath string
|
|
||||||
bufferSize int64
|
|
||||||
server *fasthttp.Server
|
|
||||||
subscribers []chan core.LogEntry
|
|
||||||
mu sync.RWMutex
|
|
||||||
done chan struct{}
|
|
||||||
wg sync.WaitGroup
|
|
||||||
netLimiter *limit.NetLimiter
|
|
||||||
logger *log.Logger
|
|
||||||
|
|
||||||
// CHANGED: Add TLS support
|
|
||||||
tlsManager *tls.Manager
|
|
||||||
sslConfig *config.SSLConfig
|
|
||||||
|
|
||||||
// Statistics
|
|
||||||
totalEntries atomic.Uint64
|
|
||||||
droppedEntries atomic.Uint64
|
|
||||||
invalidEntries atomic.Uint64
|
|
||||||
startTime time.Time
|
|
||||||
lastEntryTime atomic.Value // time.Time
|
|
||||||
}
|
|
||||||
|
|
||||||
// NewHTTPSource creates a new HTTP server source
|
|
||||||
func NewHTTPSource(options map[string]any, logger *log.Logger) (*HTTPSource, error) {
|
|
||||||
port, ok := options["port"].(int64)
|
|
||||||
if !ok || port < 1 || port > 65535 {
|
|
||||||
return nil, fmt.Errorf("http source requires valid 'port' option")
|
|
||||||
}
|
|
||||||
|
|
||||||
ingestPath := "/ingest"
|
|
||||||
if path, ok := options["ingest_path"].(string); ok && path != "" {
|
|
||||||
ingestPath = path
|
|
||||||
}
|
|
||||||
|
|
||||||
bufferSize := int64(1000)
|
|
||||||
if bufSize, ok := options["buffer_size"].(int64); ok && bufSize > 0 {
|
|
||||||
bufferSize = bufSize
|
|
||||||
}
|
|
||||||
|
|
||||||
h := &HTTPSource{
|
|
||||||
port: port,
|
|
||||||
ingestPath: ingestPath,
|
|
||||||
bufferSize: bufferSize,
|
|
||||||
done: make(chan struct{}),
|
|
||||||
startTime: time.Now(),
|
|
||||||
logger: logger,
|
|
||||||
}
|
|
||||||
h.lastEntryTime.Store(time.Time{})
|
|
||||||
|
|
||||||
// Initialize net limiter if configured
|
|
||||||
if rl, ok := options["net_limit"].(map[string]any); ok {
|
|
||||||
if enabled, _ := rl["enabled"].(bool); enabled {
|
|
||||||
cfg := config.NetLimitConfig{
|
|
||||||
Enabled: true,
|
|
||||||
}
|
|
||||||
|
|
||||||
if rps, ok := toFloat(rl["requests_per_second"]); ok {
|
|
||||||
cfg.RequestsPerSecond = rps
|
|
||||||
}
|
|
||||||
if burst, ok := rl["burst_size"].(int64); ok {
|
|
||||||
cfg.BurstSize = burst
|
|
||||||
}
|
|
||||||
if limitBy, ok := rl["limit_by"].(string); ok {
|
|
||||||
cfg.LimitBy = limitBy
|
|
||||||
}
|
|
||||||
if respCode, ok := rl["response_code"].(int64); ok {
|
|
||||||
cfg.ResponseCode = respCode
|
|
||||||
}
|
|
||||||
if msg, ok := rl["response_message"].(string); ok {
|
|
||||||
cfg.ResponseMessage = msg
|
|
||||||
}
|
|
||||||
if maxPerIP, ok := rl["max_connections_per_ip"].(int64); ok {
|
|
||||||
cfg.MaxConnectionsPerIP = maxPerIP
|
|
||||||
}
|
|
||||||
|
|
||||||
h.netLimiter = limit.NewNetLimiter(cfg, logger)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Extract SSL config after existing options
|
|
||||||
if ssl, ok := options["ssl"].(map[string]any); ok {
|
|
||||||
h.sslConfig = &config.SSLConfig{}
|
|
||||||
h.sslConfig.Enabled, _ = ssl["enabled"].(bool)
|
|
||||||
if certFile, ok := ssl["cert_file"].(string); ok {
|
|
||||||
h.sslConfig.CertFile = certFile
|
|
||||||
}
|
|
||||||
if keyFile, ok := ssl["key_file"].(string); ok {
|
|
||||||
h.sslConfig.KeyFile = keyFile
|
|
||||||
}
|
|
||||||
// TODO: extract other SSL options similar to tcp_client_sink
|
|
||||||
|
|
||||||
// Create TLS manager
|
|
||||||
if h.sslConfig.Enabled {
|
|
||||||
tlsManager, err := tls.NewManager(h.sslConfig, logger)
|
|
||||||
if err != nil {
|
|
||||||
return nil, fmt.Errorf("failed to create TLS manager: %w", err)
|
|
||||||
}
|
|
||||||
h.tlsManager = tlsManager
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
return h, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (h *HTTPSource) Subscribe() <-chan core.LogEntry {
|
|
||||||
h.mu.Lock()
|
|
||||||
defer h.mu.Unlock()
|
|
||||||
|
|
||||||
ch := make(chan core.LogEntry, h.bufferSize)
|
|
||||||
h.subscribers = append(h.subscribers, ch)
|
|
||||||
return ch
|
|
||||||
}
|
|
||||||
|
|
||||||
func (h *HTTPSource) Start() error {
|
|
||||||
h.server = &fasthttp.Server{
|
|
||||||
Handler: h.requestHandler,
|
|
||||||
DisableKeepalive: false,
|
|
||||||
StreamRequestBody: true,
|
|
||||||
CloseOnShutdown: true,
|
|
||||||
}
|
|
||||||
|
|
||||||
addr := fmt.Sprintf(":%d", h.port)
|
|
||||||
|
|
||||||
// Start server in background
|
|
||||||
h.wg.Add(1)
|
|
||||||
go func() {
|
|
||||||
defer h.wg.Done()
|
|
||||||
h.logger.Info("msg", "HTTP source server starting",
|
|
||||||
"component", "http_source",
|
|
||||||
"port", h.port,
|
|
||||||
"ingest_path", h.ingestPath,
|
|
||||||
"tls_enabled", h.tlsManager != nil)
|
|
||||||
|
|
||||||
var err error
|
|
||||||
// Check for TLS manager and start the appropriate server type
|
|
||||||
if h.tlsManager != nil {
|
|
||||||
h.server.TLSConfig = h.tlsManager.GetHTTPConfig()
|
|
||||||
err = h.server.ListenAndServeTLS(addr, h.sslConfig.CertFile, h.sslConfig.KeyFile)
|
|
||||||
} else {
|
|
||||||
err = h.server.ListenAndServe(addr)
|
|
||||||
}
|
|
||||||
|
|
||||||
if err != nil {
|
|
||||||
h.logger.Error("msg", "HTTP source server failed",
|
|
||||||
"component", "http_source",
|
|
||||||
"port", h.port,
|
|
||||||
"error", err)
|
|
||||||
}
|
|
||||||
}()
|
|
||||||
|
|
||||||
// Give server time to start
|
|
||||||
time.Sleep(100 * time.Millisecond) // TODO: standardize and better manage timers
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (h *HTTPSource) Stop() {
|
|
||||||
h.logger.Info("msg", "Stopping HTTP source")
|
|
||||||
close(h.done)
|
|
||||||
|
|
||||||
if h.server != nil {
|
|
||||||
if err := h.server.Shutdown(); err != nil {
|
|
||||||
h.logger.Error("msg", "Error shutting down HTTP source server",
|
|
||||||
"component", "http_source",
|
|
||||||
"error", err)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Shutdown net limiter
|
|
||||||
if h.netLimiter != nil {
|
|
||||||
h.netLimiter.Shutdown()
|
|
||||||
}
|
|
||||||
|
|
||||||
h.wg.Wait()
|
|
||||||
|
|
||||||
// Close subscriber channels
|
|
||||||
h.mu.Lock()
|
|
||||||
for _, ch := range h.subscribers {
|
|
||||||
close(ch)
|
|
||||||
}
|
|
||||||
h.mu.Unlock()
|
|
||||||
|
|
||||||
h.logger.Info("msg", "HTTP source stopped")
|
|
||||||
}
|
|
||||||
|
|
||||||
func (h *HTTPSource) GetStats() SourceStats {
|
|
||||||
lastEntry, _ := h.lastEntryTime.Load().(time.Time)
|
|
||||||
|
|
||||||
var netLimitStats map[string]any
|
|
||||||
if h.netLimiter != nil {
|
|
||||||
netLimitStats = h.netLimiter.GetStats()
|
|
||||||
}
|
|
||||||
|
|
||||||
return SourceStats{
|
|
||||||
Type: "http",
|
|
||||||
TotalEntries: h.totalEntries.Load(),
|
|
||||||
DroppedEntries: h.droppedEntries.Load(),
|
|
||||||
StartTime: h.startTime,
|
|
||||||
LastEntryTime: lastEntry,
|
|
||||||
Details: map[string]any{
|
|
||||||
"port": h.port,
|
|
||||||
"ingest_path": h.ingestPath,
|
|
||||||
"invalid_entries": h.invalidEntries.Load(),
|
|
||||||
"net_limit": netLimitStats,
|
|
||||||
},
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func (h *HTTPSource) requestHandler(ctx *fasthttp.RequestCtx) {
|
|
||||||
// Only handle POST to the configured ingest path
|
|
||||||
if string(ctx.Method()) != "POST" || string(ctx.Path()) != h.ingestPath {
|
|
||||||
ctx.SetStatusCode(fasthttp.StatusNotFound)
|
|
||||||
ctx.SetContentType("application/json")
|
|
||||||
json.NewEncoder(ctx).Encode(map[string]string{
|
|
||||||
"error": "Not Found",
|
|
||||||
"hint": fmt.Sprintf("POST logs to %s", h.ingestPath),
|
|
||||||
})
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
// Extract and validate IP
|
|
||||||
remoteAddr := ctx.RemoteAddr().String()
|
|
||||||
ipStr, _, err := net.SplitHostPort(remoteAddr)
|
|
||||||
if err == nil {
|
|
||||||
if ip := net.ParseIP(ipStr); ip != nil && ip.To4() == nil {
|
|
||||||
ctx.SetStatusCode(fasthttp.StatusForbidden)
|
|
||||||
ctx.SetContentType("application/json")
|
|
||||||
json.NewEncoder(ctx).Encode(map[string]string{
|
|
||||||
"error": "IPv4-only (IPv6 not supported)",
|
|
||||||
})
|
|
||||||
return
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Check net limit
|
|
||||||
if h.netLimiter != nil {
|
|
||||||
if allowed, statusCode, message := h.netLimiter.CheckHTTP(remoteAddr); !allowed {
|
|
||||||
ctx.SetStatusCode(int(statusCode))
|
|
||||||
ctx.SetContentType("application/json")
|
|
||||||
json.NewEncoder(ctx).Encode(map[string]any{
|
|
||||||
"error": message,
|
|
||||||
"retry_after": "60",
|
|
||||||
})
|
|
||||||
return
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Process the request body
|
|
||||||
body := ctx.PostBody()
|
|
||||||
if len(body) == 0 {
|
|
||||||
ctx.SetStatusCode(fasthttp.StatusBadRequest)
|
|
||||||
ctx.SetContentType("application/json")
|
|
||||||
json.NewEncoder(ctx).Encode(map[string]string{
|
|
||||||
"error": "Empty request body",
|
|
||||||
})
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
// Parse the log entries
|
|
||||||
entries, err := h.parseEntries(body)
|
|
||||||
if err != nil {
|
|
||||||
h.invalidEntries.Add(1)
|
|
||||||
ctx.SetStatusCode(fasthttp.StatusBadRequest)
|
|
||||||
ctx.SetContentType("application/json")
|
|
||||||
json.NewEncoder(ctx).Encode(map[string]string{
|
			"error": fmt.Sprintf("Invalid log format: %v", err),
		})
		return
	}

	// Publish entries
	accepted := 0
	for _, entry := range entries {
		if h.publish(entry) {
			accepted++
		}
	}

	// Return success response
	ctx.SetStatusCode(fasthttp.StatusAccepted)
	ctx.SetContentType("application/json")
	json.NewEncoder(ctx).Encode(map[string]any{
		"accepted": accepted,
		"total":    len(entries),
	})
}

func (h *HTTPSource) parseEntries(body []byte) ([]core.LogEntry, error) {
	var entries []core.LogEntry

	// Try to parse as a single JSON object first
	var single core.LogEntry
	if err := json.Unmarshal(body, &single); err == nil {
		// Validate required fields
		if single.Message == "" {
			return nil, fmt.Errorf("missing required field: message")
		}
		if single.Time.IsZero() {
			single.Time = time.Now()
		}
		if single.Source == "" {
			single.Source = "http"
		}
		single.RawSize = int64(len(body))
		entries = append(entries, single)
		return entries, nil
	}

	// Try to parse as a JSON array (the length check also guards the
	// division below against an empty array)
	var array []core.LogEntry
	if err := json.Unmarshal(body, &array); err == nil && len(array) > 0 {
		// NOTE: Placeholder; for arrays, approximate per-entry size as total size / entry count
		approxSizePerEntry := int64(len(body)) / int64(len(array))
		for i, entry := range array {
			if entry.Message == "" {
				return nil, fmt.Errorf("entry %d missing required field: message", i)
			}
			if entry.Time.IsZero() {
				array[i].Time = time.Now()
			}
			if entry.Source == "" {
				array[i].Source = "http"
			}
			// NOTE: Placeholder
			array[i].RawSize = approxSizePerEntry
		}
		return array, nil
	}

	// Try to parse as newline-delimited JSON
	lines := splitLines(body)
	for i, line := range lines {
		if len(line) == 0 {
			continue
		}

		var entry core.LogEntry
		if err := json.Unmarshal(line, &entry); err != nil {
			return nil, fmt.Errorf("line %d: %w", i+1, err)
		}

		if entry.Message == "" {
			return nil, fmt.Errorf("line %d missing required field: message", i+1)
		}
		if entry.Time.IsZero() {
			entry.Time = time.Now()
		}
		if entry.Source == "" {
			entry.Source = "http"
		}
		entry.RawSize = int64(len(line))

		entries = append(entries, entry)
	}

	if len(entries) == 0 {
		return nil, fmt.Errorf("no valid log entries found")
	}

	return entries, nil
}
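The parser above accepts three payload shapes: a single JSON object, a JSON array, and newline-delimited JSON. A minimal standalone sketch of the NDJSON branch and its defaulting rules, using a simplified local `LogEntry` as a stand-in for `core.LogEntry` (field names assumed from the code above):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"time"
)

// LogEntry is a simplified stand-in for core.LogEntry, for illustration only.
type LogEntry struct {
	Time    time.Time `json:"time"`
	Source  string    `json:"source"`
	Message string    `json:"message"`
	RawSize int64     `json:"-"`
}

// parseNDJSON mirrors the newline-delimited branch above: one JSON object
// per line, "message" required, zero time and empty source filled in.
func parseNDJSON(body []byte) ([]LogEntry, error) {
	var entries []LogEntry
	for i, line := range bytes.Split(body, []byte("\n")) {
		if len(bytes.TrimSpace(line)) == 0 {
			continue
		}
		var e LogEntry
		if err := json.Unmarshal(line, &e); err != nil {
			return nil, fmt.Errorf("line %d: %w", i+1, err)
		}
		if e.Message == "" {
			return nil, fmt.Errorf("line %d missing required field: message", i+1)
		}
		if e.Time.IsZero() {
			e.Time = time.Now()
		}
		if e.Source == "" {
			e.Source = "http"
		}
		e.RawSize = int64(len(line))
		entries = append(entries, e)
	}
	if len(entries) == 0 {
		return nil, fmt.Errorf("no valid log entries found")
	}
	return entries, nil
}

func main() {
	body := []byte(`{"message":"boot ok"}` + "\n" + `{"message":"ready","source":"api"}`)
	entries, err := parseNDJSON(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(entries), entries[0].Source, entries[1].Source) // 2 http api
}
```

A client can therefore POST whichever shape is convenient; the handler replies with the `accepted`/`total` counts shown above either way.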

func (h *HTTPSource) publish(entry core.LogEntry) bool {
	h.mu.RLock()
	defer h.mu.RUnlock()

	h.totalEntries.Add(1)
	h.lastEntryTime.Store(entry.Time)

	dropped := false
	for _, ch := range h.subscribers {
		select {
		case ch <- entry:
		default:
			dropped = true
			h.droppedEntries.Add(1)
		}
	}

	if dropped {
		h.logger.Debug("msg", "Dropped log entry - subscriber buffer full",
			"component", "http_source")
	}

	return true
}

// splitLines splits bytes into lines, handling both \n and \r\n
func splitLines(data []byte) [][]byte {
	var lines [][]byte
	start := 0

	for i := 0; i < len(data); i++ {
		if data[i] == '\n' {
			end := i
			if i > 0 && data[i-1] == '\r' {
				end = i - 1
			}
			if end > start {
				lines = append(lines, data[start:end])
			}
			start = i + 1
		}
	}

	if start < len(data) {
		lines = append(lines, data[start:])
	}

	return lines
}
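A quick standalone check of the splitting behavior (function body reproduced from above): `\r\n` endings are normalized, empty lines are dropped, and a trailing segment with no final newline is kept.

```go
package main

import "fmt"

// splitLines splits bytes into lines, handling both \n and \r\n and
// skipping empty lines (reproduced from the source above).
func splitLines(data []byte) [][]byte {
	var lines [][]byte
	start := 0
	for i := 0; i < len(data); i++ {
		if data[i] == '\n' {
			end := i
			if i > 0 && data[i-1] == '\r' {
				end = i - 1
			}
			if end > start {
				lines = append(lines, data[start:end])
			}
			start = i + 1
		}
	}
	if start < len(data) {
		lines = append(lines, data[start:])
	}
	return lines
}

func main() {
	// CRLF line, LF line, blank line, trailing segment without newline
	for _, l := range splitLines([]byte("a\r\nb\n\nc")) {
		fmt.Printf("%q\n", l)
	}
	// "a"
	// "b"
	// "c"
}
```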

// toFloat converts common numeric types to float64
func toFloat(v any) (float64, bool) {
	switch val := v.(type) {
	case float64:
		return val, true
	case int:
		return float64(val), true
	case int64:
		return float64(val), true
	default:
		return 0, false
	}
}

src/internal/source/null/null.go (new file, 125 lines)
@@ -0,0 +1,125 @@
package null

import (
	"fmt"
	"sync/atomic"
	"time"

	"logwisp/src/internal/core"
	"logwisp/src/internal/plugin"
	"logwisp/src/internal/session"
	"logwisp/src/internal/source"

	"github.com/lixenwraith/log"
)

// init registers the component in the plugin factory
func init() {
	if err := plugin.RegisterSource("null", NewNullSourcePlugin); err != nil {
		panic(fmt.Sprintf("failed to register null source: %v", err))
	}
}

// NullSource generates no log entries; used for testing
type NullSource struct {
	// Plugin identity and session management
	id      string
	proxy   *session.Proxy
	session *session.Session

	// Application
	subscribers []chan core.LogEntry
	logger      *log.Logger

	// Runtime
	done chan struct{}

	// Statistics
	totalEntries  atomic.Uint64
	startTime     time.Time
	lastEntryTime atomic.Value // time.Time
}

// NewNullSourcePlugin creates a null source through the plugin factory
func NewNullSourcePlugin(
	id string,
	configMap map[string]any,
	logger *log.Logger,
	proxy *session.Proxy,
) (source.Source, error) {
	ns := &NullSource{
		id:          id,
		proxy:       proxy,
		subscribers: make([]chan core.LogEntry, 0),
		done:        make(chan struct{}),
		logger:      logger,
	}
	ns.lastEntryTime.Store(time.Time{})

	// Create session for the null source
	ns.session = proxy.CreateSession(
		"null://void",
		map[string]any{
			"instance_id": id,
			"type":        "null",
		},
	)

	logger.Debug("msg", "Null source initialized",
		"component", "null_source",
		"instance_id", id)

	return ns, nil
}

// Capabilities returns supported capabilities
func (ns *NullSource) Capabilities() []core.Capability {
	return []core.Capability{
		core.CapSessionAware,
	}
}

// Subscribe returns a channel for receiving log entries
func (ns *NullSource) Subscribe() <-chan core.LogEntry {
	ch := make(chan core.LogEntry, 1000)
	ns.subscribers = append(ns.subscribers, ch)
	return ch
}

// Start begins the source operation (no-op for the null source)
func (ns *NullSource) Start() error {
	ns.startTime = time.Now()
	ns.proxy.UpdateActivity(ns.session.ID)
	ns.logger.Debug("msg", "Null source started",
		"component", "null_source",
		"instance_id", ns.id)
	return nil
}

// Stop signals the source to stop
func (ns *NullSource) Stop() {
	close(ns.done)
	if ns.session != nil {
		ns.proxy.RemoveSession(ns.session.ID)
	}
	for _, ch := range ns.subscribers {
		close(ch)
	}
	ns.logger.Debug("msg", "Null source stopped",
		"component", "null_source",
		"instance_id", ns.id)
}

// GetStats returns the source's statistics
func (ns *NullSource) GetStats() source.SourceStats {
	lastEntry, _ := ns.lastEntryTime.Load().(time.Time)

	return source.SourceStats{
		ID:            ns.id,
		Type:          "null",
		TotalEntries:  ns.totalEntries.Load(),
		StartTime:     ns.startTime,
		LastEntryTime: lastEntry,
		Details:       map[string]any{},
	}
}
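The `init`-time registration above is the common Go plugin-registry idiom: importing the package (often via a blank import) is enough to make the source type available by name. A minimal self-contained sketch of the pattern, with simplified names that are not the actual `plugin` package API:

```go
package main

import "fmt"

// Factory builds a source instance by id (hypothetical, simplified signature).
type Factory func(id string) (any, error)

// registry maps source type names to their factories.
var registry = map[string]Factory{}

// RegisterSource rejects duplicate names, so a build that accidentally links
// two sources under one name fails loudly at init time.
func RegisterSource(name string, f Factory) error {
	if _, dup := registry[name]; dup {
		return fmt.Errorf("source %q already registered", name)
	}
	registry[name] = f
	return nil
}

func main() {
	if err := RegisterSource("null", func(id string) (any, error) { return struct{}{}, nil }); err != nil {
		panic(err)
	}
	// A second registration under the same name is rejected.
	err := RegisterSource("null", nil)
	fmt.Println(err) // source "null" already registered
}
```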

src/internal/source/random/random.go (new file, 358 lines)
@@ -0,0 +1,358 @@
package random

import (
	"encoding/json"
	"fmt"
	"math/rand"
	"sync"
	"sync/atomic"
	"time"

	"logwisp/src/internal/config"
	"logwisp/src/internal/core"
	"logwisp/src/internal/plugin"
	"logwisp/src/internal/session"
	"logwisp/src/internal/source"

	lconfig "github.com/lixenwraith/config"
	"github.com/lixenwraith/log"
)

// init registers the component in the plugin factory
func init() {
	if err := plugin.RegisterSource("random", NewRandomSourcePlugin); err != nil {
		panic(fmt.Sprintf("failed to register random source: %v", err))
	}
}

// RandomSource generates random log entries for testing
type RandomSource struct {
	// Plugin identity and session management
	id      string
	proxy   *session.Proxy
	session *session.Session

	// Configuration
	config *config.RandomSourceOptions

	// Application
	subscribers []chan core.LogEntry
	logger      *log.Logger
	rng         *rand.Rand
	mu          sync.RWMutex

	// Runtime
	done   chan struct{}
	wg     sync.WaitGroup
	cancel chan struct{}

	// Statistics
	totalEntries   atomic.Uint64
	droppedEntries atomic.Uint64
	startTime      time.Time
	lastEntryTime  atomic.Value // time.Time
}

const (
	DefaultRandomSourceIntervalMS = 500
	DefaultRandomSourceFormat     = "txt"
	DefaultRandomSourceLength     = 20
)

// NewRandomSourcePlugin creates a random source through the plugin factory
func NewRandomSourcePlugin(
	id string,
	configMap map[string]any,
	logger *log.Logger,
	proxy *session.Proxy,
) (source.Source, error) {
	// Step 1: Create config struct with defaults
	opts := &config.RandomSourceOptions{
		IntervalMS: DefaultRandomSourceIntervalMS,
		JitterMS:   0,
		Format:     DefaultRandomSourceFormat,
		Length:     DefaultRandomSourceLength,
		Special:    false,
	}

	// Scan config map
	if err := lconfig.ScanMap(configMap, opts); err != nil {
		return nil, fmt.Errorf("failed to parse config: %w", err)
	}

	// Re-apply defaults for zero or invalid values
	if opts.IntervalMS <= 0 {
		opts.IntervalMS = DefaultRandomSourceIntervalMS
	}
	if opts.Format == "" {
		opts.Format = DefaultRandomSourceFormat
	}
	if opts.Length <= 0 {
		opts.Length = DefaultRandomSourceLength
	}

	// Validate
	if opts.JitterMS < 0 {
		return nil, fmt.Errorf("jitter_ms cannot be negative")
	}
	if opts.JitterMS > opts.IntervalMS {
		opts.JitterMS = opts.IntervalMS
	}

	validateFormat := lconfig.OneOf("raw", "txt", "json")
	if err := validateFormat(opts.Format); err != nil {
		return nil, fmt.Errorf("format: %w", err)
	}

	rs := &RandomSource{
		id:          id,
		proxy:       proxy,
		config:      opts,
		subscribers: make([]chan core.LogEntry, 0),
		done:        make(chan struct{}),
		cancel:      make(chan struct{}),
		logger:      logger,
		rng:         rand.New(rand.NewSource(time.Now().UnixNano())),
	}
	rs.lastEntryTime.Store(time.Time{})

	// Create session for the random source
	rs.session = proxy.CreateSession(
		fmt.Sprintf("random://%s", id),
		map[string]any{
			"instance_id": id,
			"type":        "random",
			"format":      opts.Format,
			"interval_ms": opts.IntervalMS,
		},
	)

	logger.Debug("msg", "Random source initialized",
		"component", "random_source",
		"instance_id", id,
		"format", opts.Format,
		"interval_ms", opts.IntervalMS,
		"jitter_ms", opts.JitterMS)

	return rs, nil
}

// Capabilities returns supported capabilities
func (rs *RandomSource) Capabilities() []core.Capability {
	return []core.Capability{
		core.CapSessionAware,
	}
}

// Subscribe returns a channel for receiving log entries
func (rs *RandomSource) Subscribe() <-chan core.LogEntry {
	rs.mu.Lock()
	defer rs.mu.Unlock()
	ch := make(chan core.LogEntry, 1000)
	rs.subscribers = append(rs.subscribers, ch)
	return ch
}

// Start begins generating random log entries
func (rs *RandomSource) Start() error {
	rs.startTime = time.Now()
	rs.wg.Add(1)
	go rs.generateLoop()

	rs.proxy.UpdateActivity(rs.session.ID)
	rs.logger.Debug("msg", "Random source started",
		"component", "random_source",
		"instance_id", rs.id)
	return nil
}

// Stop signals the source to stop generating
func (rs *RandomSource) Stop() {
	close(rs.cancel)
	rs.wg.Wait()

	if rs.session != nil {
		rs.proxy.RemoveSession(rs.session.ID)
	}

	rs.mu.Lock()
	for _, ch := range rs.subscribers {
		close(ch)
	}
	rs.mu.Unlock()

	rs.logger.Debug("msg", "Random source stopped",
		"component", "random_source",
		"instance_id", rs.id,
		"total_entries", rs.totalEntries.Load())
}

// GetStats returns the source's statistics
func (rs *RandomSource) GetStats() source.SourceStats {
	lastEntry, _ := rs.lastEntryTime.Load().(time.Time)

	return source.SourceStats{
		ID:             rs.id,
		Type:           "random",
		TotalEntries:   rs.totalEntries.Load(),
		DroppedEntries: rs.droppedEntries.Load(),
		StartTime:      rs.startTime,
		LastEntryTime:  lastEntry,
		Details: map[string]any{
			"format":      rs.config.Format,
			"interval_ms": rs.config.IntervalMS,
			"jitter_ms":   rs.config.JitterMS,
			"length":      rs.config.Length,
			"special":     rs.config.Special,
		},
	}
}

// generateLoop continuously generates random log entries at configured intervals
func (rs *RandomSource) generateLoop() {
	defer rs.wg.Done()

	for {
		// Calculate next interval with jitter, centered on the configured interval
		interval := time.Duration(rs.config.IntervalMS) * time.Millisecond
		if rs.config.JitterMS > 0 {
			jitter := time.Duration(rs.rng.Intn(int(rs.config.JitterMS))) * time.Millisecond
			interval = interval - time.Duration(rs.config.JitterMS/2)*time.Millisecond + jitter
		}

		select {
		case <-time.After(interval):
			entry := rs.generateEntry()
			rs.publish(entry)
			rs.proxy.UpdateActivity(rs.session.ID)
		case <-rs.cancel:
			return
		case <-rs.done:
			return
		}
	}
}

// generateEntry creates a random log entry based on the configured format
func (rs *RandomSource) generateEntry() core.LogEntry {
	now := time.Now()

	switch rs.config.Format {
	case "raw":
		message := rs.generateRandomString(int(rs.config.Length))
		return core.LogEntry{
			Time:    now,
			Source:  fmt.Sprintf("random_%s", rs.id),
			Message: message,
			RawSize: int64(len(message) + 1), // +1 for newline
		}

	case "txt":
		level := rs.randomLogLevel()
		message := rs.generateRandomString(int(rs.config.Length))
		formatted := fmt.Sprintf("[%s] [%s] random_%s - %s",
			now.Format(time.RFC3339),
			level,
			rs.id,
			message)
		return core.LogEntry{
			Time:    now,
			Source:  fmt.Sprintf("random_%s", rs.id),
			Level:   level,
			Message: formatted,
			RawSize: int64(len(formatted) + 1),
		}

	case "json":
		level := rs.randomLogLevel()
		message := rs.generateRandomString(int(rs.config.Length))
		data := map[string]any{
			"time":    now.Format(time.RFC3339Nano),
			"level":   level,
			"source":  fmt.Sprintf("random_%s", rs.id),
			"message": message,
		}
		jsonBytes, _ := json.Marshal(data)
		return core.LogEntry{
			Time:    now,
			Source:  fmt.Sprintf("random_%s", rs.id),
			Level:   level,
			Message: string(jsonBytes),
			RawSize: int64(len(jsonBytes) + 1),
		}

	default:
		return core.LogEntry{}
	}
}

// generateRandomString creates a random string of the specified length
func (rs *RandomSource) generateRandomString(length int) string {
	const normalChars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 "
	const specialChars = "\t\n\r\x00\x01\x02\x03\x04\x05\x06\x07\x08\x0B\x0C\x0E\x0F"
	unicodeChars := []rune("™€¢£¥§©®°±µ¶·ÀÉÑÖÜßäëïöü←↑→↓∀∃∅∇∈∉∪∩≈≠≤≥")

	result := make([]byte, 0, length)

	if rs.config.Special && length >= 3 {
		// Reserve space for at least one special and one unicode char
		normalLength := length - 2

		// Generate normal characters
		for i := 0; i < normalLength; i++ {
			result = append(result, normalChars[rs.rng.Intn(len(normalChars))])
		}

		// Insert a special (control) character at a random position
		specialPos := rs.rng.Intn(len(result) + 1)
		specialChar := specialChars[rs.rng.Intn(len(specialChars))]
		result = append(result[:specialPos], append([]byte{specialChar}, result[specialPos:]...)...)

		// Insert a rune-aligned unicode character at a random position
		// (indexing []rune avoids slicing a multi-byte character mid-sequence)
		unicodePos := rs.rng.Intn(len(result) + 1)
		unicodeBytes := []byte(string(unicodeChars[rs.rng.Intn(len(unicodeChars))]))
		if unicodePos == len(result) {
			result = append(result, unicodeBytes...)
		} else {
			result = append(result[:unicodePos], append(unicodeBytes, result[unicodePos:]...)...)
		}

		// Trim to exact length if needed (may split the trailing rune)
		if len(result) > length {
			result = result[:length]
		}
	} else {
		// Normal generation without special characters
		for i := 0; i < length; i++ {
			result = append(result, normalChars[rs.rng.Intn(len(normalChars))])
		}
	}

	return string(result)
}

// randomLogLevel returns a random log level
func (rs *RandomSource) randomLogLevel() string {
	levels := []string{"DEBUG", "INFO", "WARN", "ERROR"}
	return levels[rs.rng.Intn(len(levels))]
}

// publish sends a log entry to all subscribers
func (rs *RandomSource) publish(entry core.LogEntry) {
	rs.mu.RLock()
	defer rs.mu.RUnlock()

	rs.totalEntries.Add(1)
	rs.lastEntryTime.Store(entry.Time)

	for _, ch := range rs.subscribers {
		select {
		case ch <- entry:
		default:
			rs.droppedEntries.Add(1)
		}
	}
}
// FILE: logwisp/src/internal/source/source.go
@@ -1,15 +1,18 @@
package source

import (
	"strings"
	"time"

	"logwisp/src/internal/core"
)

// Source represents an input data stream for log entries
type Source interface {
	// Capabilities returns a slice of supported Source capabilities
	Capabilities() []core.Capability

	// Subscribe returns a channel that receives log entries from the source
	Subscribe() <-chan core.LogEntry

	// Start begins reading from the source
@@ -18,12 +21,13 @@ type Source interface {
	// Stop gracefully shuts down the source
	Stop()

	// GetStats returns source statistics
	GetStats() SourceStats
}

// SourceStats contains statistics about a source
type SourceStats struct {
	ID             string
	Type           string
	TotalEntries   uint64
	DroppedEntries uint64
@@ -31,3 +35,28 @@ type SourceStats struct {
	LastEntryTime time.Time
	Details       map[string]any
}

// ExtractLogLevel heuristically determines the log level from a line of text
func ExtractLogLevel(line string) string {
	patterns := []struct {
		patterns []string
		level    string
	}{
		{[]string{"[ERROR]", "ERROR:", " ERROR ", "ERR:", "[ERR]", "FATAL:", "[FATAL]"}, "ERROR"},
		{[]string{"[WARN]", "WARN:", " WARN ", "WARNING:", "[WARNING]"}, "WARN"},
		{[]string{"[INFO]", "INFO:", " INFO ", "[INF]", "INF:"}, "INFO"},
		{[]string{"[DEBUG]", "DEBUG:", " DEBUG ", "[DBG]", "DBG:"}, "DEBUG"},
		{[]string{"[TRACE]", "TRACE:", " TRACE "}, "TRACE"},
	}

	upperLine := strings.ToUpper(line)
	for _, group := range patterns {
		for _, pattern := range group.patterns {
			if strings.Contains(upperLine, pattern) {
				return group.level
			}
		}
	}

	return ""
}
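A quick standalone check of the heuristic above (function reproduced verbatim): matching is case-insensitive via `ToUpper`, and groups are checked in severity order, so a line mentioning both FATAL and WARN reports ERROR.

```go
package main

import (
	"fmt"
	"strings"
)

// ExtractLogLevel heuristically determines the log level from a line of text
// (reproduced from source.go above).
func ExtractLogLevel(line string) string {
	patterns := []struct {
		patterns []string
		level    string
	}{
		{[]string{"[ERROR]", "ERROR:", " ERROR ", "ERR:", "[ERR]", "FATAL:", "[FATAL]"}, "ERROR"},
		{[]string{"[WARN]", "WARN:", " WARN ", "WARNING:", "[WARNING]"}, "WARN"},
		{[]string{"[INFO]", "INFO:", " INFO ", "[INF]", "INF:"}, "INFO"},
		{[]string{"[DEBUG]", "DEBUG:", " DEBUG ", "[DBG]", "DBG:"}, "DEBUG"},
		{[]string{"[TRACE]", "TRACE:", " TRACE "}, "TRACE"},
	}

	upperLine := strings.ToUpper(line)
	for _, group := range patterns {
		for _, pattern := range group.patterns {
			if strings.Contains(upperLine, pattern) {
				return group.level
			}
		}
	}
	return ""
}

func main() {
	fmt.Println(ExtractLogLevel("2024-01-01 [error] db down")) // ERROR
	fmt.Println(ExtractLogLevel("warning: disk almost full"))  // WARN
	// Lines with no recognizable marker return the empty string.
	fmt.Println(ExtractLogLevel("plain text, no level") == "") // true
}
```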

@@ -1,114 +0,0 @@ (file deleted)
// FILE: logwisp/src/internal/source/stdin.go
package source

import (
	"bufio"
	"os"
	"sync/atomic"
	"time"

	"logwisp/src/internal/core"

	"github.com/lixenwraith/log"
)

// StdinSource reads log entries from standard input
type StdinSource struct {
	subscribers    []chan core.LogEntry
	done           chan struct{}
	totalEntries   atomic.Uint64
	droppedEntries atomic.Uint64
	startTime      time.Time
	lastEntryTime  atomic.Value // time.Time
	logger         *log.Logger
}

// NewStdinSource creates a new stdin source
func NewStdinSource(options map[string]any, logger *log.Logger) (*StdinSource, error) {
	s := &StdinSource{
		done:      make(chan struct{}),
		startTime: time.Now(),
		logger:    logger,
	}
	s.lastEntryTime.Store(time.Time{})
	return s, nil
}

func (s *StdinSource) Subscribe() <-chan core.LogEntry {
	ch := make(chan core.LogEntry, 1000)
	s.subscribers = append(s.subscribers, ch)
	return ch
}

func (s *StdinSource) Start() error {
	go s.readLoop()
	s.logger.Info("msg", "Stdin source started", "component", "stdin_source")
	return nil
}

func (s *StdinSource) Stop() {
	close(s.done)
	for _, ch := range s.subscribers {
		close(ch)
	}
	s.logger.Info("msg", "Stdin source stopped", "component", "stdin_source")
}

func (s *StdinSource) GetStats() SourceStats {
	lastEntry, _ := s.lastEntryTime.Load().(time.Time)

	return SourceStats{
		Type:           "stdin",
		TotalEntries:   s.totalEntries.Load(),
		DroppedEntries: s.droppedEntries.Load(),
		StartTime:      s.startTime,
		LastEntryTime:  lastEntry,
		Details:        map[string]any{},
	}
}

func (s *StdinSource) readLoop() {
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		select {
		case <-s.done:
			return
		default:
			line := scanner.Text()
			if line == "" {
				continue
			}

			entry := core.LogEntry{
				Time:    time.Now(),
				Source:  "stdin",
				Message: line,
				Level:   extractLogLevel(line),
				RawSize: int64(len(line)),
			}

			s.publish(entry)
		}
	}

	if err := scanner.Err(); err != nil {
		s.logger.Error("msg", "Scanner error reading stdin",
			"component", "stdin_source",
			"error", err)
	}
}

func (s *StdinSource) publish(entry core.LogEntry) {
	s.totalEntries.Add(1)
	s.lastEntryTime.Store(entry.Time)

	for _, ch := range s.subscribers {
		select {
		case ch <- entry:
		default:
			s.droppedEntries.Add(1)
			s.logger.Debug("msg", "Dropped log entry - subscriber buffer full",
				"component", "stdin_source")
		}
	}
}

@@ -1,576 +0,0 @@ (file deleted)
// FILE: logwisp/src/internal/source/tcp.go
package source
|
|
||||||
|
|
||||||
import (
|
|
||||||
"bytes"
|
|
||||||
"context"
|
|
||||||
"encoding/json"
|
|
||||||
"errors"
|
|
||||||
"fmt"
|
|
||||||
"net"
|
|
||||||
"sync"
|
|
||||||
"sync/atomic"
|
|
||||||
"time"
|
|
||||||
|
|
||||||
"logwisp/src/internal/auth"
|
|
||||||
"logwisp/src/internal/config"
|
|
||||||
"logwisp/src/internal/core"
|
|
||||||
"logwisp/src/internal/limit"
|
|
||||||
"logwisp/src/internal/tls"
|
|
||||||
|
|
||||||
"github.com/lixenwraith/log"
|
|
||||||
"github.com/lixenwraith/log/compat"
|
|
||||||
"github.com/panjf2000/gnet/v2"
|
|
||||||
)
|
|
||||||
|
|
||||||
const (
|
|
||||||
maxClientBufferSize = 10 * 1024 * 1024 // 10MB max per client
|
|
||||||
maxLineLength = 1 * 1024 * 1024 // 1MB max per log line
|
|
||||||
maxEncryptedDataPerRead = 1 * 1024 * 1024 // 1MB max encrypted data per read
|
|
||||||
maxCumulativeEncrypted = 20 * 1024 * 1024 // 20MB total encrypted before processing
|
|
||||||
)
|
|
||||||
|
|
||||||
// TCPSource receives log entries via TCP connections
|
|
||||||
type TCPSource struct {
|
|
||||||
port int64
|
|
||||||
bufferSize int64
|
|
||||||
server *tcpSourceServer
|
|
||||||
subscribers []chan core.LogEntry
|
|
||||||
mu sync.RWMutex
|
|
||||||
done chan struct{}
|
|
||||||
engine *gnet.Engine
|
|
||||||
engineMu sync.Mutex
|
|
||||||
wg sync.WaitGroup
|
|
||||||
netLimiter *limit.NetLimiter
|
|
||||||
tlsManager *tls.Manager
|
|
||||||
sslConfig *config.SSLConfig
|
|
||||||
logger *log.Logger
|
|
||||||
|
|
||||||
// Statistics
|
|
||||||
totalEntries atomic.Uint64
|
|
||||||
droppedEntries atomic.Uint64
|
|
||||||
invalidEntries atomic.Uint64
|
|
||||||
activeConns atomic.Int64
|
|
||||||
startTime time.Time
|
|
||||||
lastEntryTime atomic.Value // time.Time
|
|
||||||
}
|
|
||||||
|
|
||||||
// NewTCPSource creates a new TCP server source
|
|
||||||
func NewTCPSource(options map[string]any, logger *log.Logger) (*TCPSource, error) {
|
|
||||||
port, ok := options["port"].(int64)
|
|
||||||
if !ok || port < 1 || port > 65535 {
|
|
||||||
return nil, fmt.Errorf("tcp source requires valid 'port' option")
|
|
||||||
}
|
|
||||||
|
|
||||||
bufferSize := int64(1000)
|
|
||||||
if bufSize, ok := options["buffer_size"].(int64); ok && bufSize > 0 {
|
|
||||||
bufferSize = bufSize
|
|
||||||
}
|
|
||||||
|
|
||||||
t := &TCPSource{
|
|
||||||
port: port,
|
|
||||||
bufferSize: bufferSize,
|
|
||||||
done: make(chan struct{}),
|
|
||||||
startTime: time.Now(),
|
|
||||||
logger: logger,
|
|
||||||
}
|
|
||||||
t.lastEntryTime.Store(time.Time{})
|
|
||||||
|
|
||||||
// Initialize net limiter if configured
|
|
||||||
if rl, ok := options["net_limit"].(map[string]any); ok {
|
|
||||||
if enabled, _ := rl["enabled"].(bool); enabled {
|
|
||||||
cfg := config.NetLimitConfig{
|
|
||||||
Enabled: true,
|
|
||||||
}
|
|
||||||
|
|
||||||
if rps, ok := toFloat(rl["requests_per_second"]); ok {
|
|
||||||
cfg.RequestsPerSecond = rps
|
|
||||||
}
|
|
||||||
if burst, ok := rl["burst_size"].(int64); ok {
|
|
||||||
cfg.BurstSize = burst
|
|
||||||
}
|
|
||||||
if limitBy, ok := rl["limit_by"].(string); ok {
|
|
||||||
cfg.LimitBy = limitBy
|
|
||||||
}
|
|
||||||
if maxPerIP, ok := rl["max_connections_per_ip"].(int64); ok {
|
|
||||||
cfg.MaxConnectionsPerIP = maxPerIP
|
|
||||||
}
|
|
||||||
if maxTotal, ok := rl["max_total_connections"].(int64); ok {
|
|
||||||
cfg.MaxTotalConnections = maxTotal
|
|
||||||
}
|
|
||||||
|
|
||||||
t.netLimiter = limit.NewNetLimiter(cfg, logger)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Extract SSL config and initialize TLS manager
|
|
||||||
if ssl, ok := options["ssl"].(map[string]any); ok {
|
|
||||||
t.sslConfig = &config.SSLConfig{}
|
|
||||||
t.sslConfig.Enabled, _ = ssl["enabled"].(bool)
|
|
||||||
if certFile, ok := ssl["cert_file"].(string); ok {
|
|
||||||
t.sslConfig.CertFile = certFile
|
|
||||||
}
|
|
||||||
if keyFile, ok := ssl["key_file"].(string); ok {
|
|
||||||
t.sslConfig.KeyFile = keyFile
|
|
||||||
}
|
|
||||||
t.sslConfig.ClientAuth, _ = ssl["client_auth"].(bool)
|
|
||||||
if caFile, ok := ssl["client_ca_file"].(string); ok {
|
|
			t.sslConfig.ClientCAFile = caFile
		}
		t.sslConfig.VerifyClientCert, _ = ssl["verify_client_cert"].(bool)

		// Create TLS manager if enabled
		if t.sslConfig.Enabled {
			tlsManager, err := tls.NewManager(t.sslConfig, logger)
			if err != nil {
				return nil, fmt.Errorf("failed to create TLS manager: %w", err)
			}
			t.tlsManager = tlsManager
		}
	}

	return t, nil
}

func (t *TCPSource) Subscribe() <-chan core.LogEntry {
	t.mu.Lock()
	defer t.mu.Unlock()

	ch := make(chan core.LogEntry, t.bufferSize)
	t.subscribers = append(t.subscribers, ch)
	return ch
}

func (t *TCPSource) Start() error {
	t.server = &tcpSourceServer{
		source:  t,
		clients: make(map[gnet.Conn]*tcpClient),
	}

	addr := fmt.Sprintf("tcp://:%d", t.port)

	// Create a gnet adapter using the existing logger instance
	gnetLogger := compat.NewGnetAdapter(t.logger)

	// Start gnet server
	errChan := make(chan error, 1)
	t.wg.Add(1)
	go func() {
		defer t.wg.Done()
		t.logger.Info("msg", "TCP source server starting",
			"component", "tcp_source",
			"port", t.port,
			"tls_enabled", t.tlsManager != nil)

		err := gnet.Run(t.server, addr,
			gnet.WithLogger(gnetLogger),
			gnet.WithMulticore(true),
			gnet.WithReusePort(true),
		)
		if err != nil {
			t.logger.Error("msg", "TCP source server failed",
				"component", "tcp_source",
				"port", t.port,
				"error", err)
		}
		errChan <- err
	}()

	// Wait briefly for the server to start or fail
	select {
	case err := <-errChan:
		// Server failed immediately
		close(t.done)
		t.wg.Wait()
		return err
	case <-time.After(100 * time.Millisecond):
		// Server started successfully
		t.logger.Info("msg", "TCP server started", "port", t.port)
		return nil
	}
}

func (t *TCPSource) Stop() {
	t.logger.Info("msg", "Stopping TCP source")
	close(t.done)

	// Stop gnet engine if running
	t.engineMu.Lock()
	engine := t.engine
	t.engineMu.Unlock()

	if engine != nil {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		(*engine).Stop(ctx)
	}

	// Shut down net limiter
	if t.netLimiter != nil {
		t.netLimiter.Shutdown()
	}

	t.wg.Wait()

	// Close subscriber channels
	t.mu.Lock()
	for _, ch := range t.subscribers {
		close(ch)
	}
	t.mu.Unlock()

	t.logger.Info("msg", "TCP source stopped")
}

func (t *TCPSource) GetStats() SourceStats {
	lastEntry, _ := t.lastEntryTime.Load().(time.Time)

	var netLimitStats map[string]any
	if t.netLimiter != nil {
		netLimitStats = t.netLimiter.GetStats()
	}

	return SourceStats{
		Type:           "tcp",
		TotalEntries:   t.totalEntries.Load(),
		DroppedEntries: t.droppedEntries.Load(),
		StartTime:      t.startTime,
		LastEntryTime:  lastEntry,
		Details: map[string]any{
			"port":               t.port,
			"active_connections": t.activeConns.Load(),
			"invalid_entries":    t.invalidEntries.Load(),
			"net_limit":          netLimitStats,
		},
	}
}

func (t *TCPSource) publish(entry core.LogEntry) bool {
	t.mu.RLock()
	defer t.mu.RUnlock()

	t.totalEntries.Add(1)
	t.lastEntryTime.Store(entry.Time)

	dropped := false
	for _, ch := range t.subscribers {
		select {
		case ch <- entry:
		default:
			dropped = true
			t.droppedEntries.Add(1)
		}
	}

	if dropped {
		t.logger.Debug("msg", "Dropped log entry - subscriber buffer full",
			"component", "tcp_source")
	}

	return true
}

// tcpClient represents a connected TCP client
type tcpClient struct {
	conn                gnet.Conn
	buffer              bytes.Buffer
	authenticated       bool
	session             *auth.Session
	authTimeout         time.Time
	tlsBridge           *tls.GNetTLSConn
	maxBufferSeen       int
	cumulativeEncrypted int64
}

// tcpSourceServer handles gnet events
type tcpSourceServer struct {
	gnet.BuiltinEventEngine
	source  *TCPSource
	clients map[gnet.Conn]*tcpClient
	mu      sync.RWMutex
}

func (s *tcpSourceServer) OnBoot(eng gnet.Engine) gnet.Action {
	// Store engine reference for shutdown
	s.source.engineMu.Lock()
	s.source.engine = &eng
	s.source.engineMu.Unlock()

	s.source.logger.Debug("msg", "TCP source server booted",
		"component", "tcp_source",
		"port", s.source.port)
	return gnet.None
}

func (s *tcpSourceServer) OnOpen(c gnet.Conn) (out []byte, action gnet.Action) {
	remoteAddr := c.RemoteAddr().String()
	s.source.logger.Debug("msg", "TCP connection attempt",
		"component", "tcp_source",
		"remote_addr", remoteAddr)

	// Check net limit
	if s.source.netLimiter != nil {
		tcpAddr, err := net.ResolveTCPAddr("tcp", remoteAddr)
		if err != nil {
			s.source.logger.Warn("msg", "Failed to parse TCP address",
				"component", "tcp_source",
				"remote_addr", remoteAddr,
				"error", err)
			return nil, gnet.Close
		}

		if !s.source.netLimiter.CheckTCP(tcpAddr) {
			s.source.logger.Warn("msg", "TCP connection net limited",
				"component", "tcp_source",
				"remote_addr", remoteAddr)
			return nil, gnet.Close
		}

		// Track connection
		s.source.netLimiter.AddConnection(remoteAddr)
	}

	// Create client state
	client := &tcpClient{conn: c}

	// Initialize TLS bridge if enabled
	if s.source.tlsManager != nil {
		tlsConfig := s.source.tlsManager.GetTCPConfig()
		client.tlsBridge = tls.NewServerConn(c, tlsConfig)
		client.tlsBridge.Handshake() // Start async handshake

		s.source.logger.Debug("msg", "TLS handshake initiated",
			"component", "tcp_source",
			"remote_addr", remoteAddr)
	}

	// Register the client created above (storing a fresh struct here would
	// discard the TLS bridge that was just attached)
	s.mu.Lock()
	s.clients[c] = client
	s.mu.Unlock()

	newCount := s.source.activeConns.Add(1)
	s.source.logger.Debug("msg", "TCP connection opened",
		"component", "tcp_source",
		"remote_addr", remoteAddr,
		"active_connections", newCount,
		"tls_enabled", s.source.tlsManager != nil)

	return nil, gnet.None
}

func (s *tcpSourceServer) OnClose(c gnet.Conn, err error) gnet.Action {
	remoteAddr := c.RemoteAddr().String()

	// Remove client state
	s.mu.Lock()
	client := s.clients[c]
	delete(s.clients, c)
	s.mu.Unlock()

	// Clean up TLS bridge if present
	if client != nil && client.tlsBridge != nil {
		client.tlsBridge.Close()
		s.source.logger.Debug("msg", "TLS connection closed",
			"component", "tcp_source",
			"remote_addr", remoteAddr)
	}

	// Remove connection tracking
	if s.source.netLimiter != nil {
		s.source.netLimiter.RemoveConnection(remoteAddr)
	}

	newCount := s.source.activeConns.Add(-1)
	s.source.logger.Debug("msg", "TCP connection closed",
		"component", "tcp_source",
		"remote_addr", remoteAddr,
		"active_connections", newCount,
		"error", err)
	return gnet.None
}

func (s *tcpSourceServer) OnTraffic(c gnet.Conn) gnet.Action {
	s.mu.RLock()
	client, exists := s.clients[c]
	s.mu.RUnlock()

	if !exists {
		return gnet.Close
	}

	// Read all available data
	data, err := c.Next(-1)
	if err != nil {
		s.source.logger.Error("msg", "Error reading from connection",
			"component", "tcp_source",
			"error", err)
		return gnet.Close
	}

	// Check encrypted data size BEFORE processing through TLS
	if len(data) > maxEncryptedDataPerRead {
		s.source.logger.Warn("msg", "Encrypted data per read limit exceeded",
			"component", "tcp_source",
			"remote_addr", c.RemoteAddr().String(),
			"data_size", len(data),
			"limit", maxEncryptedDataPerRead)
		s.source.invalidEntries.Add(1)
		return gnet.Close
	}

	// Track cumulative encrypted data to prevent slow accumulation
	client.cumulativeEncrypted += int64(len(data))
	if client.cumulativeEncrypted > maxCumulativeEncrypted {
		s.source.logger.Warn("msg", "Cumulative encrypted data limit exceeded",
			"component", "tcp_source",
			"remote_addr", c.RemoteAddr().String(),
			"total_encrypted", client.cumulativeEncrypted,
			"limit", maxCumulativeEncrypted)
		s.source.invalidEntries.Add(1)
		return gnet.Close
	}

	// Process through TLS bridge if present
	if client.tlsBridge != nil {
		// Feed encrypted data into the TLS engine
		if err := client.tlsBridge.ProcessIncoming(data); err != nil {
			if errors.Is(err, tls.ErrTLSBackpressure) {
				s.source.logger.Warn("msg", "TLS backpressure, closing slow client",
					"component", "tcp_source",
					"remote_addr", c.RemoteAddr().String())
			} else {
				s.source.logger.Error("msg", "TLS processing error",
					"component", "tcp_source",
					"remote_addr", c.RemoteAddr().String(),
					"error", err)
			}
			return gnet.Close
		}

		// Check if handshake is complete
		if !client.tlsBridge.IsHandshakeDone() {
			// Still handshaking, wait for more data
			return gnet.None
		}

		// Check handshake result
		_, hsErr := client.tlsBridge.HandshakeComplete()
		if hsErr != nil {
			s.source.logger.Error("msg", "TLS handshake failed",
				"component", "tcp_source",
				"remote_addr", c.RemoteAddr().String(),
				"error", hsErr)
			return gnet.Close
		}

		// Read decrypted plaintext
		data = client.tlsBridge.Read()
		if len(data) == 0 {
			// No plaintext available yet
			return gnet.None
		}
		// Reset cumulative counter after successful decryption and processing
		client.cumulativeEncrypted = 0
	}

	// Check buffer size before appending
	if client.buffer.Len()+len(data) > maxClientBufferSize {
		s.source.logger.Warn("msg", "Client buffer limit exceeded",
			"component", "tcp_source",
			"remote_addr", c.RemoteAddr().String(),
			"buffer_size", client.buffer.Len(),
			"incoming_size", len(data))
		s.source.invalidEntries.Add(1)
		return gnet.Close
	}

	// Append to client buffer
	client.buffer.Write(data)

	// Track the buffer's high-water mark
	if client.buffer.Len() > client.maxBufferSeen {
		client.maxBufferSeen = client.buffer.Len()
	}

	// Check for suspiciously long lines before attempting to read
	if client.buffer.Len() > maxLineLength {
		// Scan for a newline in the current buffer
		if bytes.IndexByte(client.buffer.Bytes(), '\n') < 0 {
			s.source.logger.Warn("msg", "Line too long without newline",
				"component", "tcp_source",
				"remote_addr", c.RemoteAddr().String(),
				"buffer_size", client.buffer.Len())
			s.source.invalidEntries.Add(1)
			return gnet.Close
		}
	}

	// Process complete lines
	for {
		line, err := client.buffer.ReadBytes('\n')
		if err != nil {
			// No complete line available; ReadBytes has consumed the partial
			// line, so write it back and wait for more data
			client.buffer.Write(line)
			break
		}

		// Trim newline
		line = bytes.TrimRight(line, "\r\n")
		if len(line) == 0 {
			continue
		}

		// Capture raw line size before parsing
		rawSize := int64(len(line))

		// Parse JSON log entry
		var entry core.LogEntry
		if err := json.Unmarshal(line, &entry); err != nil {
			s.source.invalidEntries.Add(1)
			s.source.logger.Debug("msg", "Invalid JSON log entry",
				"component", "tcp_source",
				"error", err,
				"data", string(line))
			continue
		}

		// Validate and set defaults
		if entry.Message == "" {
			s.source.invalidEntries.Add(1)
			continue
		}
		if entry.Time.IsZero() {
			entry.Time = time.Now()
		}
		if entry.Source == "" {
			entry.Source = "tcp"
		}

		// Set raw size
		entry.RawSize = rawSize

		// Publish the entry
		s.source.publish(entry)
	}

	return gnet.None
}

// noopLogger implements gnet's Logger interface but discards everything
// type noopLogger struct{}
// func (n noopLogger) Debugf(format string, args ...any) {}
// func (n noopLogger) Infof(format string, args ...any)  {}
// func (n noopLogger) Warnf(format string, args ...any)  {}
// func (n noopLogger) Errorf(format string, args ...any) {}
// func (n noopLogger) Fatalf(format string, args ...any) {}

// Usage: gnet.Run(..., gnet.WithLogger(noopLogger{}), ...)

@ -1,341 +0,0 @@
// FILE: src/internal/tls/gnet_bridge.go
package tls

import (
	"crypto/tls"
	"errors"
	"io"
	"net"
	"sync"
	"sync/atomic"
	"time"

	"github.com/panjf2000/gnet/v2"
)

var (
	ErrTLSBackpressure         = errors.New("TLS processing backpressure")
	ErrConnectionClosed        = errors.New("connection closed")
	ErrPlaintextBufferExceeded = errors.New("plaintext buffer size exceeded")
)

// Maximum plaintext buffer size to prevent memory exhaustion
const maxPlaintextBufferSize = 32 * 1024 * 1024 // 32MB

// GNetTLSConn bridges gnet.Conn with crypto/tls via a channel-backed net.Conn
type GNetTLSConn struct {
	gnetConn gnet.Conn
	tlsConn  *tls.Conn
	config   *tls.Config

	// Buffered channels for non-blocking operation
	incomingCipher chan []byte // Network → TLS (encrypted)
	outgoingCipher chan []byte // TLS → Network (encrypted)

	// Handshake state
	handshakeOnce sync.Once
	handshakeDone chan struct{}
	handshakeErr  error

	// Decrypted data buffer
	plainBuf []byte
	plainMu  sync.Mutex

	// Lifecycle
	closed    atomic.Bool
	closeOnce sync.Once
	wg        sync.WaitGroup

	// Error tracking
	lastErr atomic.Value                   // error
	logger  interface{ Warn(args ...any) } // Minimal logger interface
}

// NewServerConn creates a server-side TLS bridge
func NewServerConn(gnetConn gnet.Conn, config *tls.Config) *GNetTLSConn {
	tc := &GNetTLSConn{
		gnetConn:      gnetConn,
		config:        config,
		handshakeDone: make(chan struct{}),
		// Buffered channels sized for throughput without blocking
		incomingCipher: make(chan []byte, 128), // 128-packet buffer
		outgoingCipher: make(chan []byte, 128),
		plainBuf:       make([]byte, 0, 65536), // 64KB initial capacity
	}

	// Create TLS conn with channel-based transport
	rawConn := &channelConn{
		incoming:   tc.incomingCipher,
		outgoing:   tc.outgoingCipher,
		localAddr:  gnetConn.LocalAddr(),
		remoteAddr: gnetConn.RemoteAddr(),
		tc:         tc,
	}
	tc.tlsConn = tls.Server(rawConn, config)

	// Start pump goroutines
	tc.wg.Add(2)
	go tc.pumpCipherToNetwork()
	go tc.pumpPlaintextFromTLS()

	return tc
}

// NewClientConn creates a client-side TLS bridge (mirrors NewServerConn)
func NewClientConn(gnetConn gnet.Conn, config *tls.Config, serverName string) *GNetTLSConn {
	tc := &GNetTLSConn{
		gnetConn:       gnetConn,
		config:         config,
		handshakeDone:  make(chan struct{}),
		incomingCipher: make(chan []byte, 128),
		outgoingCipher: make(chan []byte, 128),
		plainBuf:       make([]byte, 0, 65536),
	}

	if config.ServerName == "" {
		config = config.Clone()
		config.ServerName = serverName
	}

	rawConn := &channelConn{
		incoming:   tc.incomingCipher,
		outgoing:   tc.outgoingCipher,
		localAddr:  gnetConn.LocalAddr(),
		remoteAddr: gnetConn.RemoteAddr(),
		tc:         tc,
	}
	tc.tlsConn = tls.Client(rawConn, config)

	tc.wg.Add(2)
	go tc.pumpCipherToNetwork()
	go tc.pumpPlaintextFromTLS()

	return tc
}

// ProcessIncoming feeds encrypted data from the network into the TLS engine (non-blocking)
func (tc *GNetTLSConn) ProcessIncoming(encryptedData []byte) error {
	if tc.closed.Load() {
		return ErrConnectionClosed
	}

	// Non-blocking send with backpressure detection
	select {
	case tc.incomingCipher <- encryptedData:
		return nil
	default:
		// Channel full - TLS processing can't keep up.
		// Drop the connection under backpressure rather than block the event loop.
		if tc.logger != nil {
			tc.logger.Warn("msg", "TLS backpressure, dropping data",
				"remote_addr", tc.gnetConn.RemoteAddr())
		}
		return ErrTLSBackpressure
	}
}

// pumpCipherToNetwork sends TLS-encrypted data to the network
func (tc *GNetTLSConn) pumpCipherToNetwork() {
	defer tc.wg.Done()

	for {
		select {
		case data, ok := <-tc.outgoingCipher:
			if !ok {
				return
			}
			// Send to network
			if err := tc.gnetConn.AsyncWrite(data, nil); err != nil {
				tc.lastErr.Store(err)
				tc.Close()
				return
			}
		case <-time.After(30 * time.Second):
			// Keepalive/timeout check
			if tc.closed.Load() {
				return
			}
		}
	}
}

// pumpPlaintextFromTLS reads decrypted data from TLS
func (tc *GNetTLSConn) pumpPlaintextFromTLS() {
	defer tc.wg.Done()
	buf := make([]byte, 32768) // 32KB read buffer

	for {
		n, err := tc.tlsConn.Read(buf)
		if n > 0 {
			tc.plainMu.Lock()
			// Check buffer size limit before appending to prevent memory exhaustion
			if len(tc.plainBuf)+n > maxPlaintextBufferSize {
				tc.plainMu.Unlock()
				// Log warning about buffer limit
				if tc.logger != nil {
					tc.logger.Warn("msg", "Plaintext buffer limit exceeded, closing connection",
						"remote_addr", tc.gnetConn.RemoteAddr(),
						"buffer_size", len(tc.plainBuf),
						"incoming_size", n,
						"limit", maxPlaintextBufferSize)
				}
				// Store error and close connection
				tc.lastErr.Store(ErrPlaintextBufferExceeded)
				tc.Close()
				return
			}
			tc.plainBuf = append(tc.plainBuf, buf[:n]...)
			tc.plainMu.Unlock()
		}
		if err != nil {
			if err != io.EOF {
				tc.lastErr.Store(err)
			}
			tc.Close()
			return
		}
	}
}

// Read returns available decrypted plaintext (non-blocking)
func (tc *GNetTLSConn) Read() []byte {
	tc.plainMu.Lock()
	defer tc.plainMu.Unlock()

	if len(tc.plainBuf) == 0 {
		return nil
	}

	// Swap the buffer under mutex protection to prevent a race condition
	data := tc.plainBuf
	tc.plainBuf = make([]byte, 0, cap(tc.plainBuf))
	return data
}

// Write encrypts plaintext and queues it for network transmission
func (tc *GNetTLSConn) Write(plaintext []byte) (int, error) {
	if tc.closed.Load() {
		return 0, ErrConnectionClosed
	}

	if !tc.IsHandshakeDone() {
		return 0, errors.New("handshake not complete")
	}

	return tc.tlsConn.Write(plaintext)
}

// Handshake initiates the TLS handshake asynchronously
func (tc *GNetTLSConn) Handshake() {
	tc.handshakeOnce.Do(func() {
		go func() {
			tc.handshakeErr = tc.tlsConn.Handshake()
			close(tc.handshakeDone)
		}()
	})
}

// IsHandshakeDone reports whether the handshake has completed
func (tc *GNetTLSConn) IsHandshakeDone() bool {
	select {
	case <-tc.handshakeDone:
		return true
	default:
		return false
	}
}

// HandshakeComplete blocks until the handshake finishes and returns its result
func (tc *GNetTLSConn) HandshakeComplete() (<-chan struct{}, error) {
	<-tc.handshakeDone
	return tc.handshakeDone, tc.handshakeErr
}

// Close shuts down the bridge
func (tc *GNetTLSConn) Close() error {
	tc.closeOnce.Do(func() {
		tc.closed.Store(true)

		// Close TLS connection
		tc.tlsConn.Close()

		// Close channels to stop pumps
		close(tc.incomingCipher)
		close(tc.outgoingCipher)
	})

	// Wait for pumps to finish
	tc.wg.Wait()
	return nil
}

// GetConnectionState returns the TLS connection state
func (tc *GNetTLSConn) GetConnectionState() tls.ConnectionState {
	return tc.tlsConn.ConnectionState()
}

// GetError returns the last recorded error
func (tc *GNetTLSConn) GetError() error {
	if err, ok := tc.lastErr.Load().(error); ok {
		return err
	}
	return nil
}

// channelConn implements net.Conn over channels
type channelConn struct {
	incoming   <-chan []byte
	outgoing   chan<- []byte
	localAddr  net.Addr
	remoteAddr net.Addr
	tc         *GNetTLSConn
	readBuf    []byte
}

func (c *channelConn) Read(b []byte) (int, error) {
	// Serve previously buffered bytes first
	if len(c.readBuf) > 0 {
		n := copy(b, c.readBuf)
		c.readBuf = c.readBuf[n:]
		return n, nil
	}

	// Wait for new data
	select {
	case data, ok := <-c.incoming:
		if !ok {
			return 0, io.EOF
		}
		n := copy(b, data)
		if n < len(data) {
			c.readBuf = data[n:] // Buffer the remainder
		}
		return n, nil
	case <-time.After(30 * time.Second):
		return 0, errors.New("read timeout")
	}
}

func (c *channelConn) Write(b []byte) (int, error) {
	if c.tc.closed.Load() {
		return 0, ErrConnectionClosed
	}

	// Make a copy since TLS may hold a reference
	data := make([]byte, len(b))
	copy(data, b)

	select {
	case c.outgoing <- data:
		return len(b), nil
	case <-time.After(5 * time.Second):
		return 0, errors.New("write timeout")
	}
}

func (c *channelConn) Close() error                       { return nil }
func (c *channelConn) LocalAddr() net.Addr                { return c.localAddr }
func (c *channelConn) RemoteAddr() net.Addr               { return c.remoteAddr }
func (c *channelConn) SetDeadline(t time.Time) error      { return nil }
func (c *channelConn) SetReadDeadline(t time.Time) error  { return nil }
func (c *channelConn) SetWriteDeadline(t time.Time) error { return nil }

@ -1,249 +0,0 @@
// FILE: logwisp/src/internal/tls/manager.go
package tls

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
	"strings"

	"logwisp/src/internal/config"

	"github.com/lixenwraith/log"
)

// Manager handles TLS configuration for servers
type Manager struct {
	config    *config.SSLConfig
	tlsConfig *tls.Config
	logger    *log.Logger
}

// NewManager creates a TLS configuration from SSL config
func NewManager(cfg *config.SSLConfig, logger *log.Logger) (*Manager, error) {
	if cfg == nil || !cfg.Enabled {
		return nil, nil
	}

	m := &Manager{
		config: cfg,
		logger: logger,
	}

	// Load certificate and key
	cert, err := tls.LoadX509KeyPair(cfg.CertFile, cfg.KeyFile)
	if err != nil {
		return nil, fmt.Errorf("failed to load cert/key: %w", err)
	}

	// Create base TLS config
	m.tlsConfig = &tls.Config{
		Certificates: []tls.Certificate{cert},
		MinVersion:   parseTLSVersion(cfg.MinVersion, tls.VersionTLS12),
		MaxVersion:   parseTLSVersion(cfg.MaxVersion, tls.VersionTLS13),
	}

	// Configure cipher suites if specified
	if cfg.CipherSuites != "" {
		m.tlsConfig.CipherSuites = parseCipherSuites(cfg.CipherSuites)
	} else {
		// Use secure defaults
		m.tlsConfig.CipherSuites = []uint16{
			tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,
			tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
		}
	}

	// Configure client authentication (mTLS)
	if cfg.ClientAuth {
		if cfg.VerifyClientCert {
			m.tlsConfig.ClientAuth = tls.RequireAndVerifyClientCert
		} else {
			m.tlsConfig.ClientAuth = tls.RequireAnyClientCert
		}

		// Load client CA if specified
		if cfg.ClientCAFile != "" {
			caCert, err := os.ReadFile(cfg.ClientCAFile)
			if err != nil {
				return nil, fmt.Errorf("failed to read client CA: %w", err)
			}

			caCertPool := x509.NewCertPool()
			if !caCertPool.AppendCertsFromPEM(caCert) {
				return nil, fmt.Errorf("failed to parse client CA certificate")
			}
			m.tlsConfig.ClientCAs = caCertPool
		}
	}

	// Set secure defaults
	m.tlsConfig.PreferServerCipherSuites = true
	m.tlsConfig.SessionTicketsDisabled = false
	m.tlsConfig.Renegotiation = tls.RenegotiateNever

	logger.Info("msg", "TLS manager initialized",
		"component", "tls",
		"min_version", cfg.MinVersion,
		"max_version", cfg.MaxVersion,
		"client_auth", cfg.ClientAuth,
		"cipher_count", len(m.tlsConfig.CipherSuites))

	return m, nil
}

// GetConfig returns the TLS configuration
func (m *Manager) GetConfig() *tls.Config {
	if m == nil {
		return nil
	}
	// Return a clone to prevent modification
	return m.tlsConfig.Clone()
}

// GetHTTPConfig returns a TLS config suitable for HTTP servers
func (m *Manager) GetHTTPConfig() *tls.Config {
	if m == nil {
		return nil
	}

	cfg := m.tlsConfig.Clone()
	// Enable HTTP/2
	cfg.NextProtos = []string{"h2", "http/1.1"}
	return cfg
}

// GetTCPConfig returns a TLS config for raw TCP connections
func (m *Manager) GetTCPConfig() *tls.Config {
	if m == nil {
		return nil
	}

	cfg := m.tlsConfig.Clone()
	// No ALPN for raw TCP
	cfg.NextProtos = nil
	return cfg
}

// ValidateClientCert validates a client certificate for mTLS
func (m *Manager) ValidateClientCert(rawCerts [][]byte) error {
	if m == nil || !m.config.ClientAuth {
		return nil
	}

	if len(rawCerts) == 0 {
		return fmt.Errorf("no client certificate provided")
	}

	cert, err := x509.ParseCertificate(rawCerts[0])
	if err != nil {
		return fmt.Errorf("failed to parse client certificate: %w", err)
	}

	// Verify against CA if configured
	if m.tlsConfig.ClientCAs != nil {
		opts := x509.VerifyOptions{
			Roots:         m.tlsConfig.ClientCAs,
			Intermediates: x509.NewCertPool(),
			KeyUsages:     []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
		}

		// Add any intermediate certs
		for i := 1; i < len(rawCerts); i++ {
			intermediate, err := x509.ParseCertificate(rawCerts[i])
			if err != nil {
				continue
			}
			opts.Intermediates.AddCert(intermediate)
		}

		if _, err := cert.Verify(opts); err != nil {
			return fmt.Errorf("client certificate verification failed: %w", err)
		}
	}

	m.logger.Debug("msg", "Client certificate validated",
		"component", "tls",
		"subject", cert.Subject.String(),
		"serial", cert.SerialNumber.String())

	return nil
}

func parseTLSVersion(version string, defaultVersion uint16) uint16 {
	switch strings.ToUpper(version) {
	case "TLS1.0", "TLS10":
		return tls.VersionTLS10
	case "TLS1.1", "TLS11":
		return tls.VersionTLS11
	case "TLS1.2", "TLS12":
		return tls.VersionTLS12
	case "TLS1.3", "TLS13":
		return tls.VersionTLS13
	default:
		return defaultVersion
	}
}

func parseCipherSuites(suites string) []uint16 {
	var result []uint16

	// Map of cipher suite names to IDs
	suiteMap := map[string]uint16{
		// TLS 1.2 ECDHE suites (preferred)
		"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384":         tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
		"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256":         tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
		"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384":       tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
		"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256":       tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
		"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256":   tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,
		"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256": tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,

		// RSA suites (less preferred)
		"TLS_RSA_WITH_AES_256_GCM_SHA384": tls.TLS_RSA_WITH_AES_256_GCM_SHA384,
		"TLS_RSA_WITH_AES_128_GCM_SHA256": tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
	}

	for _, suite := range strings.Split(suites, ",") {
		suite = strings.TrimSpace(suite)
		if id, ok := suiteMap[suite]; ok {
			result = append(result, id)
		}
	}

	return result
}
|
|
||||||
|
|
||||||
// GetStats returns TLS statistics
func (m *Manager) GetStats() map[string]any {
	if m == nil {
		return map[string]any{"enabled": false}
	}

	return map[string]any{
		"enabled":       true,
		"min_version":   tlsVersionString(m.tlsConfig.MinVersion),
		"max_version":   tlsVersionString(m.tlsConfig.MaxVersion),
		"client_auth":   m.config.ClientAuth,
		"cipher_suites": len(m.tlsConfig.CipherSuites),
	}
}

func tlsVersionString(version uint16) string {
	switch version {
	case tls.VersionTLS10:
		return "TLS1.0"
	case tls.VersionTLS11:
		return "TLS1.1"
	case tls.VersionTLS12:
		return "TLS1.2"
	case tls.VersionTLS13:
		return "TLS1.3"
	default:
		return fmt.Sprintf("0x%04x", version)
	}
}
@@ -1,13 +1,11 @@
-// FILE: logwisp/src/internal/limit/token_bucket.go
-package tokenbucket
+package limit
 
 import (
 	"sync"
 	"time"
 )
 
-// TokenBucket implements a token bucket rate limiter
-// Safe for concurrent use.
+// TokenBucket implements a thread-safe token bucket rate limiter
 type TokenBucket struct {
 	capacity float64
 	tokens   float64
@@ -16,8 +14,8 @@ type TokenBucket struct {
 	mu sync.Mutex
 }
 
-// NewTokenBucket creates a new token bucket with given capacity and refill rate
-func NewTokenBucket(capacity float64, refillRate float64) *TokenBucket {
+// New creates a new token bucket with given capacity and refill rate
+func New(capacity float64, refillRate float64) *TokenBucket {
 	return &TokenBucket{
 		capacity: capacity,
 		tokens:   capacity, // Start full
@@ -73,3 +71,17 @@ func (tb *TokenBucket) refill() {
 	}
 	tb.lastRefill = now
 }
+
+// Rate returns the refill rate in tokens per second
+func (tb *TokenBucket) Rate() float64 {
+	tb.mu.Lock()
+	defer tb.mu.Unlock()
+	return tb.refillRate
+}
+
+// Capacity returns the bucket capacity
+func (tb *TokenBucket) Capacity() float64 {
+	tb.mu.Lock()
+	defer tb.mu.Unlock()
+	return tb.capacity
+}
@@ -1,16 +1,17 @@
-// FILE: logwisp/src/internal/version/version.go
 package version
 
 import "fmt"
 
 var (
-	// Version is set at compile time via -ldflags
+	// Version is the application version, set at compile time via -ldflags
 	Version = "dev"
+	// GitCommit is the git commit hash, set at compile time
 	GitCommit = "unknown"
+	// BuildTime is the application build time, set at compile time
 	BuildTime = "unknown"
 )
 
-// returns a formatted version string
+// String returns a detailed, formatted version string including commit and build time
 func String() string {
 	if Version == "dev" {
 		return fmt.Sprintf("dev (commit: %s, built: %s)", GitCommit, BuildTime)
@@ -18,7 +19,7 @@ func String() string {
 	return fmt.Sprintf("%s (commit: %s, built: %s)", Version, GitCommit, BuildTime)
 }
 
-// returns just the version tag
+// Short returns just the version tag
 func Short() string {
 	return Version
 }