v0.7.1 default config and documentation update, refactor

2025-10-10 13:03:03 -04:00
parent 89e6a4ea05
commit 33bf36f27e
34 changed files with 2877 additions and 2794 deletions

# LogWisp
A high-performance, pipeline-based log transport and processing system built in Go. LogWisp provides flexible log collection, filtering, formatting, and distribution with enterprise-grade security and reliability features.
## Features
### Core Capabilities
- **Pipeline Architecture**: Independent processing pipelines with source → filter → format → sink flow
- **Multiple Input Sources**: Directory monitoring, stdin, HTTP, TCP
- **Flexible Output Sinks**: Console, file, HTTP SSE, TCP streaming, HTTP/TCP forwarding
- **Real-time Processing**: Sub-millisecond latency with configurable buffering
- **Hot Configuration Reload**: Update pipelines without service restart
### Data Processing
- **Pattern-based Filtering**: Include/exclude filters with regex support
- **Multiple Formatters**: Raw, JSON, and template-based text formatting
- **Rate Limiting**: Pipeline and per-connection rate controls
- **Batch Processing**: Configurable batching for HTTP/TCP clients
### Security & Reliability
- **Authentication**: Basic, token, SCRAM, and mTLS support
- **TLS Encryption**: Full TLS 1.2/1.3 support for HTTP connections
- **Access Control**: IP whitelisting/blacklisting, connection limits
- **Automatic Reconnection**: Resilient client connections with exponential backoff
- **File Rotation**: Size-based rotation with retention policies
### Operational Features
- **Status Monitoring**: Real-time statistics and health endpoints
- **Signal Handling**: Graceful shutdown and configuration reload via signals
- **Background Mode**: Daemon operation with proper signal handling
- **Quiet Mode**: Silent operation for automated deployments
## Documentation
- [Installation Guide](installation.md) - Platform setup and service configuration
- [Architecture Overview](architecture.md) - System design and component interaction
- [Configuration Reference](configuration.md) - TOML structure and configuration methods
- [Input Sources](sources.md) - Available source types and configurations
- [Output Sinks](sinks.md) - Sink types and output options
- [Filters](filters.md) - Pattern-based log filtering
- [Formatters](formatters.md) - Log formatting and transformation
- [Authentication](authentication.md) - Security configurations and auth methods
- [Networking](networking.md) - TLS, rate limiting, and network features
- [Command Line Interface](cli.md) - CLI flags and subcommands
- [Operations Guide](operations.md) - Running and maintaining LogWisp
## Quick Start
Install LogWisp and create a basic configuration:
```toml
[[pipelines]]
name = "default"
[[pipelines.sources]]
type = "directory"
[pipelines.sources.directory]
path = "./"
pattern = "*.log"
[[pipelines.sinks]]
type = "console"
[pipelines.sinks.console]
target = "stdout"
```
Run with: `logwisp -c config.toml`
## System Requirements
- **Operating Systems**: Linux (kernel 3.10+), FreeBSD (12.0+)
- **Architecture**: amd64
- **Go Version**: 1.24+ (for building from source)
## License
BSD 3-Clause License

# Architecture Overview
LogWisp implements a pipeline-based architecture for flexible log processing and distribution.
## Core Concepts
### Pipeline Model
Each pipeline operates independently with a source → filter → format → sink flow. Multiple pipelines can run concurrently within a single LogWisp instance, each processing different log streams with unique configurations.
### Component Hierarchy
```
Service (Main Process)
├── Pipeline 1
│   ├── Sources (1 or more)
│   ├── Rate Limiter (optional)
│   ├── Filter Chain (optional)
│   ├── Formatter (optional)
│   └── Sinks (1 or more)
├── Pipeline 2
│   └── [Same structure]
└── Status Reporter (optional)
```
## Data Flow
### Processing Stages
1. **Source Stage**: Sources monitor inputs and generate log entries
2. **Rate Limiting**: Optional pipeline-level rate control
3. **Filtering**: Pattern-based inclusion/exclusion
4. **Formatting**: Transform entries to desired output format
5. **Distribution**: Fan-out to multiple sinks

### Entry Lifecycle
Log entries flow through the pipeline as `core.LogEntry` structures containing:
- **Time**: Entry timestamp
- **Level**: Log level (DEBUG, INFO, WARN, ERROR)
- **Source**: Origin identifier
- **Message**: Log content
- **Fields**: Additional metadata (JSON)
- **RawSize**: Original entry size
### Buffering Strategy
Each component maintains internal buffers to handle burst traffic:
- Sources: Configurable buffer size (default 1000 entries)
- Sinks: Independent buffers per sink
- Network components: Additional TCP/HTTP buffers

## Component Types
### Sources (Input)
Sources monitor inputs and generate log entries:
- **Directory Source**: File system monitoring with rotation detection
- **Stdin Source**: Standard input processing
- **HTTP Source**: REST endpoint for log ingestion
- **TCP Source**: Raw TCP socket listener
### Sinks (Output)
Sinks deliver processed entries to destinations:
- **Console Sink**: stdout/stderr output
- **File Sink**: Rotating file writer
- **HTTP Sink**: Server-Sent Events (SSE) streaming
- **TCP Sink**: TCP server for client connections
- **HTTP Client Sink**: Forward to remote HTTP endpoints
- **TCP Client Sink**: Forward to remote TCP servers

### Processing Components
- **Rate Limiter**: Token bucket algorithm for flow control
- **Filter Chain**: Sequential pattern matching
- **Formatters**: Raw, JSON, or template-based text transformation
## Concurrency Model
### Goroutine Architecture
- Each source runs in dedicated goroutines for monitoring
- Sinks operate independently with their own processing loops
- Network listeners use optimized event loops (gnet for TCP)
- Pipeline processing uses channel-based communication

### Synchronization
- Atomic counters for statistics
- Read-write mutexes for configuration access
- Context-based cancellation for graceful shutdown
- Wait groups for coordinated startup/shutdown
## Network Architecture
### Connection Patterns
**Chaining Design**:
- TCP Client Sink → TCP Source: Direct TCP forwarding
- HTTP Client Sink → HTTP Source: HTTP-based forwarding

**Monitoring Design**:
- TCP Sink: Debugging interface
- HTTP Sink: Browser-based live monitoring
### Protocol Support
- HTTP/1.1 and HTTP/2 for HTTP connections
- Raw TCP with optional SCRAM authentication
- TLS 1.2/1.3 for HTTPS connections (HTTP transports only; TCP is unencrypted)
- Server-Sent Events for real-time streaming
## Resource Management
### Memory Management
- Bounded buffers prevent unbounded growth
- Automatic garbage collection via Go runtime
- Connection limits prevent resource exhaustion
### File Management
- Automatic rotation based on size thresholds
- Retention policies for old log files
- Minimum disk space checks before writing
### Connection Management
- Per-IP connection limits
- Global connection caps
- Automatic reconnection with exponential backoff
- Keep-alive for persistent connections
## Reliability Features
### Fault Tolerance
- Panic recovery in pipeline processing
- Independent pipeline operation
- Automatic source restart on failure
- Sink failure isolation
### Data Integrity
- Entry validation at ingestion
- Size limits for entries and batches
- Duplicate detection in file monitoring
- Position tracking for file reads
## Performance Characteristics
### Throughput
- Pipeline rate limiting: Configurable (default 1000 entries/second)
- Network throughput: Limited by network and sink capacity
- File monitoring: Sub-second detection (default 100ms interval)
### Latency
- Entry processing: Sub-millisecond in-memory
- Network forwarding: Depends on batch configuration
- File detection: Configurable check interval
### Scalability
- Horizontal: Multiple LogWisp instances with different configurations
- Vertical: Multiple pipelines per instance
- Fan-out: Multiple sinks per pipeline
- Fan-in: Multiple sources per pipeline

doc/authentication.md
# Authentication
LogWisp supports multiple authentication methods for securing network connections.
## Authentication Methods
### Overview
| Method | HTTP Source | HTTP Sink | HTTP Client | TCP Source | TCP Client | TCP Sink |
|--------|------------|-----------|-------------|------------|------------|----------|
| None | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Basic | ✓ (TLS req) | ✓ (TLS req) | ✓ (TLS req) | ✗ | ✗ | ✗ |
| Token | ✓ (TLS req) | ✓ (TLS req) | ✓ (TLS req) | ✗ | ✗ | ✗ |
| SCRAM | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ |
| mTLS | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ |
**Important Notes:**
- HTTP authentication **requires** TLS to be enabled
- TCP connections are **always** unencrypted
- TCP Sink has **no** authentication (debugging only)
## Basic Authentication
HTTP/HTTPS connections with username/password.
### Configuration
```toml
[pipelines.sources.http.auth]
type = "basic"
realm = "LogWisp"
[[pipelines.sources.http.auth.basic.users]]
username = "admin"
password_hash = "$argon2id$v=19$m=65536,t=3,p=2$..."
```
### Generating Credentials
Use the `auth` command:
```bash
logwisp auth -u admin -b
```
Output includes:
- Argon2id password hash for configuration
- TOML configuration snippet
### Password Hash Format
LogWisp uses Argon2id with parameters:
- Memory: 65536 KB
- Iterations: 3
- Parallelism: 2
- Salt: Random 16 bytes
## Token Authentication
Bearer token authentication for HTTP/HTTPS.
### Configuration
```toml
[pipelines.sources.http.auth]
type = "token"
[pipelines.sources.http.auth.token]
tokens = ["token1", "token2", "token3"]
```
### Generating Tokens
```bash
logwisp auth -k -l 32
```
Generates:
- Base64-encoded token
- Hex-encoded token
- Configuration snippet
### Token Usage
Include in requests:
```
Authorization: Bearer <token>
```
## SCRAM Authentication
Secure Challenge-Response for TCP connections.
### Configuration
```toml
[pipelines.sources.tcp.auth]
type = "scram"
[[pipelines.sources.tcp.auth.scram.users]]
username = "tcpuser"
stored_key = "base64..."
server_key = "base64..."
salt = "base64..."
argon_time = 3
argon_memory = 65536
argon_threads = 4
```
### Generating SCRAM Credentials
```bash
logwisp auth -u tcpuser -s
```
### SCRAM Features
- Argon2-SCRAM-SHA256 algorithm
- Challenge-response mechanism
- No password transmission
- Replay attack protection
- Works over unencrypted connections
## mTLS (Mutual TLS)
Certificate-based authentication for HTTPS.
### Server Configuration
```toml
[pipelines.sources.http.tls]
enabled = true
cert_file = "/path/to/server.pem"
key_file = "/path/to/server.key"
client_auth = true
client_ca_file = "/path/to/ca.pem"
verify_client_cert = true
[pipelines.sources.http.auth]
type = "mtls"
```
### Client Configuration
```toml
[pipelines.sinks.http_client.tls]
enabled = true
cert_file = "/path/to/client.pem"
key_file = "/path/to/client.key"
[pipelines.sinks.http_client.auth]
type = "mtls"
```
### Certificate Generation
Use the `tls` command:
```bash
# Generate CA
logwisp tls -ca -o ca
# Generate server certificate
logwisp tls -server -ca-cert ca.pem -ca-key ca.key -host localhost -o server
# Generate client certificate
logwisp tls -client -ca-cert ca.pem -ca-key ca.key -o client
```
## Authentication Command
### Usage
```bash
logwisp auth [options]
```
### Options
| Flag | Description |
|------|-------------|
| `-u, --user` | Username for credential generation |
| `-p, --password` | Password (prompts if not provided) |
| `-b, --basic` | Generate basic auth (HTTP/HTTPS) |
| `-s, --scram` | Generate SCRAM auth (TCP) |
| `-k, --token` | Generate bearer token |
| `-l, --length` | Token length in bytes (default: 32) |
### Security Best Practices
1. **Always use TLS** for HTTP authentication
2. **Never hardcode passwords** in configuration
3. **Use strong passwords** (minimum 12 characters)
4. **Rotate tokens regularly**
5. **Limit user permissions** to minimum required
6. **Store password hashes only**, never plaintext
7. **Use unique credentials** per service/user
## Access Control Lists
Combine authentication with IP-based access control:
```toml
[pipelines.sources.http.net_limit]
enabled = true
ip_whitelist = ["192.168.1.0/24", "10.0.0.0/8"]
ip_blacklist = ["192.168.1.100"]
```
Priority order:
1. Blacklist (checked first, immediate deny)
2. Whitelist (if configured, must match)
3. Authentication (if configured)
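The priority order above can be sketched as follows. This is illustrative, not LogWisp's implementation; it uses the standard library's `net/netip` package and writes single addresses as /32 prefixes:

```go
package main

import (
	"fmt"
	"net/netip"
)

// allowed applies the documented order: blacklist is checked first
// (immediate deny), then the whitelist must match when configured.
func allowed(ipStr string, whitelist, blacklist []string) bool {
	ip, err := netip.ParseAddr(ipStr)
	if err != nil {
		return false
	}
	inAny := func(cidrs []string) bool {
		for _, c := range cidrs {
			if p, err := netip.ParsePrefix(c); err == nil && p.Contains(ip) {
				return true
			}
		}
		return false
	}
	if inAny(blacklist) {
		return false // blacklist wins
	}
	if len(whitelist) > 0 {
		return inAny(whitelist) // whitelist must match if configured
	}
	return true
}

func main() {
	wl := []string{"192.168.1.0/24", "10.0.0.0/8"}
	bl := []string{"192.168.1.100/32"}
	fmt.Println(allowed("192.168.1.50", wl, bl))  // true: whitelisted
	fmt.Println(allowed("192.168.1.100", wl, bl)) // false: blacklisted first
	fmt.Println(allowed("8.8.8.8", wl, bl))       // false: not in whitelist
}
```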
## Credential Storage
### Configuration File
Store hashes in TOML:
```toml
[[pipelines.sources.http.auth.basic.users]]
username = "admin"
password_hash = "$argon2id$..."
```
### Environment Variables
Override via environment:
```bash
export LOGWISP_PIPELINES_0_SOURCES_0_HTTP_AUTH_BASIC_USERS_0_USERNAME=admin
export LOGWISP_PIPELINES_0_SOURCES_0_HTTP_AUTH_BASIC_USERS_0_PASSWORD_HASH='$argon2id$...'
```
### External Files
Future support planned for:
- External user databases
- LDAP/AD integration
- OAuth2/OIDC providers

# Command Line Interface
LogWisp CLI reference for commands and options.
## Synopsis
```bash
logwisp [command] [options]
```
## Commands
### Main Commands
| Command | Description |
|---------|-------------|
| `auth` | Generate authentication credentials |
| `tls` | Generate TLS certificates |
| `version` | Display version information |
| `help` | Show help information |
### auth Command
Generate authentication credentials.
```bash
logwisp auth [options]
```
**Options:**
| Flag | Description | Default |
|------|-------------|---------|
| `-u, --user` | Username | Required for password auth |
| `-p, --password` | Password | Prompts if not provided |
| `-b, --basic` | Generate basic auth | - |
| `-s, --scram` | Generate SCRAM auth | - |
| `-k, --token` | Generate bearer token | - |
| `-l, --length` | Token length in bytes | 32 |
### tls Command
Generate TLS certificates.
```bash
logwisp tls [options]
```
**Options:**
| Flag | Description | Default |
|------|-------------|---------|
| `-ca` | Generate CA certificate | - |
| `-server` | Generate server certificate | - |
| `-client` | Generate client certificate | - |
| `-host` | Comma-separated hosts/IPs | localhost |
| `-o` | Output file prefix | Required |
| `-ca-cert` | CA certificate file | Required for server/client |
| `-ca-key` | CA key file | Required for server/client |
| `-days` | Certificate validity days | 365 |
### version Command
Display version information.
```bash
logwisp version
logwisp -v
logwisp --version
```
Output includes:
- Version number
- Build date
- Git commit hash
- Go version
## Global Options
### Configuration Options
| Flag | Description | Default |
|------|-------------|---------|
| `-c, --config` | Configuration file path | `./logwisp.toml` |
| `-b, --background` | Run as daemon | false |
| `-q, --quiet` | Suppress console output | false |
| `--disable-status-reporter` | Disable status logging | false |
| `--config-auto-reload` | Enable config hot reload | false |
### Logging Options
| Flag | Description | Values |
|------|-------------|--------|
| `--logging.output` | Log output mode | file, stdout, stderr, split, all, none |
| `--logging.level` | Log level | debug, info, warn, error |
| `--logging.file.directory` | Log directory | Path |
| `--logging.file.name` | Log filename | String |
| `--logging.file.max_size_mb` | Max file size | Integer |
| `--logging.file.max_total_size_mb` | Total size limit | Integer |
| `--logging.file.retention_hours` | Retention period | Float |
| `--logging.console.target` | Console target | stdout, stderr, split |
| `--logging.console.format` | Output format | txt, json |
### Pipeline Options
Configure pipelines via CLI (N = array index, 0-based).
**Pipeline Configuration:**
| Flag | Description |
|------|-------------|
| `--pipelines.N.name` | Pipeline name |
| `--pipelines.N.sources.N.type` | Source type |
| `--pipelines.N.filters.N.type` | Filter type |
| `--pipelines.N.sinks.N.type` | Sink type |
## Flag Formats
### Boolean Flags
```bash
logwisp --quiet
logwisp --quiet=true
logwisp --quiet=false
```
### String Flags
```bash
logwisp --config /etc/logwisp/config.toml
logwisp -c config.toml
```
### Nested Configuration
```bash
logwisp --logging.level=debug
logwisp --pipelines.0.name=myapp
logwisp --pipelines.0.sources.0.type=stdin
```
### Array Values (JSON)
```bash
logwisp --pipelines.0.filters.0.patterns='["ERROR","WARN"]'
```
## Environment Variables
All flags can be set via environment:
```bash
export LOGWISP_QUIET=true
export LOGWISP_LOGGING_LEVEL=debug
export LOGWISP_PIPELINES_0_NAME=myapp
```
## Configuration Precedence
1. Command-line flags (highest)
2. Environment variables
3. Configuration file
4. Built-in defaults (lowest)
## Exit Codes
| Code | Description |
|------|-------------|
| 0 | Success |
| 1 | General error |
| 2 | Configuration file not found |
| 137 | SIGKILL received |
## Signal Handling
| Signal | Action |
|--------|--------|
| SIGINT (Ctrl+C) | Graceful shutdown |
| SIGTERM | Graceful shutdown |
| SIGHUP | Reload configuration |
| SIGUSR1 | Reload configuration |
| SIGKILL | Immediate termination |
## Usage Patterns
### Development Mode
```bash
# Verbose logging to console
logwisp --logging.output=stderr --logging.level=debug
# Quick test with stdin
logwisp --pipelines.0.sources.0.type=stdin --pipelines.0.sinks.0.type=console
```
### Production Deployment
```bash
# Background with file logging
logwisp --background --config /etc/logwisp/prod.toml --logging.output=file
# Systemd service
ExecStart=/usr/local/bin/logwisp --config /etc/logwisp/config.toml
```
### Debugging
```bash
# Check configuration
logwisp --config test.toml --logging.level=debug --disable-status-reporter
# Dry run (verify config only)
logwisp --config test.toml --quiet
```
### Quick Commands
```bash
# Generate admin password
logwisp auth -u admin -b
# Create self-signed certs
logwisp tls -server -host localhost -o server
# Check version
logwisp version
```
## Help System
### General Help
```bash
logwisp --help
logwisp -h
logwisp help
```
### Command Help
```bash
logwisp auth --help
logwisp tls --help
logwisp help auth
```
## Special Flags
### Internal Flags
These flags are for internal use:
- `--background-daemon`: Child process indicator
- `--config-save-on-exit`: Save config on shutdown
### Hidden Behaviors
- SIGHUP ignored by default (nohup behavior)
- Automatic panic recovery in pipelines
- Resource cleanup on shutdown

# Configuration Reference
LogWisp configuration uses TOML format with a flexible **source → filter → sink** pipeline architecture and layered override mechanisms.
## Configuration Precedence
Configuration sources are evaluated in order:
1. **Command-line flags** (highest priority)
2. **Environment variables**
3. **Configuration file**
4. **Built-in defaults** (lowest priority)
## Complete Configuration Reference
| Category | CLI Flag | Environment Variable | TOML File |
|----------|----------|---------------------|-----------|
| **Top-level** | | | |
| Router mode | `--router` | `LOGWISP_ROUTER` | `router = true` |
| Background mode | `--background` | `LOGWISP_BACKGROUND` | `background = true` |
| Show version | `--version` | `LOGWISP_VERSION` | `version = true` |
| Quiet mode | `--quiet` | `LOGWISP_QUIET` | `quiet = true` |
| Disable status reporter | `--disable-status-reporter` | `LOGWISP_DISABLE_STATUS_REPORTER` | `disable_status_reporter = true` |
| Config auto-reload | `--config-auto-reload` | `LOGWISP_CONFIG_AUTO_RELOAD` | `config_auto_reload = true` |
| Config save on exit | `--config-save-on-exit` | `LOGWISP_CONFIG_SAVE_ON_EXIT` | `config_save_on_exit = true` |
| Config file | `--config <path>` | `LOGWISP_CONFIG_FILE` | N/A |
| Config directory | N/A | `LOGWISP_CONFIG_DIR` | N/A |
| **Logging** | | | |
| Output mode | `--logging.output <mode>` | `LOGWISP_LOGGING_OUTPUT` | `[logging]`<br>`output = "stderr"` |
| Log level | `--logging.level <level>` | `LOGWISP_LOGGING_LEVEL` | `[logging]`<br>`level = "info"` |
| File directory | `--logging.file.directory <path>` | `LOGWISP_LOGGING_FILE_DIRECTORY` | `[logging.file]`<br>`directory = "./logs"` |
| File name | `--logging.file.name <name>` | `LOGWISP_LOGGING_FILE_NAME` | `[logging.file]`<br>`name = "logwisp"` |
| Max file size | `--logging.file.max_size_mb <size>` | `LOGWISP_LOGGING_FILE_MAX_SIZE_MB` | `[logging.file]`<br>`max_size_mb = 100` |
| Max total size | `--logging.file.max_total_size_mb <size>` | `LOGWISP_LOGGING_FILE_MAX_TOTAL_SIZE_MB` | `[logging.file]`<br>`max_total_size_mb = 1000` |
| Retention hours | `--logging.file.retention_hours <hours>` | `LOGWISP_LOGGING_FILE_RETENTION_HOURS` | `[logging.file]`<br>`retention_hours = 168` |
| Console target | `--logging.console.target <target>` | `LOGWISP_LOGGING_CONSOLE_TARGET` | `[logging.console]`<br>`target = "stderr"` |
| Console format | `--logging.console.format <format>` | `LOGWISP_LOGGING_CONSOLE_FORMAT` | `[logging.console]`<br>`format = "txt"` |
| **Pipelines** | | | |
| Pipeline name | `--pipelines.N.name <name>` | `LOGWISP_PIPELINES_N_NAME` | `[[pipelines]]`<br>`name = "default"` |
| Source type | `--pipelines.N.sources.N.type <type>` | `LOGWISP_PIPELINES_N_SOURCES_N_TYPE` | `[[pipelines.sources]]`<br>`type = "directory"` |
| Source options | `--pipelines.N.sources.N.options.<key> <value>` | `LOGWISP_PIPELINES_N_SOURCES_N_OPTIONS_<KEY>` | `[[pipelines.sources]]`<br>`options = { ... }` |
| Filter type | `--pipelines.N.filters.N.type <type>` | `LOGWISP_PIPELINES_N_FILTERS_N_TYPE` | `[[pipelines.filters]]`<br>`type = "include"` |
| Filter logic | `--pipelines.N.filters.N.logic <logic>` | `LOGWISP_PIPELINES_N_FILTERS_N_LOGIC` | `[[pipelines.filters]]`<br>`logic = "or"` |
| Filter patterns | `--pipelines.N.filters.N.patterns <json>` | `LOGWISP_PIPELINES_N_FILTERS_N_PATTERNS` | `[[pipelines.filters]]`<br>`patterns = [...]` |
| Sink type | `--pipelines.N.sinks.N.type <type>` | `LOGWISP_PIPELINES_N_SINKS_N_TYPE` | `[[pipelines.sinks]]`<br>`type = "http"` |
| Sink options | `--pipelines.N.sinks.N.options.<key> <value>` | `LOGWISP_PIPELINES_N_SINKS_N_OPTIONS_<KEY>` | `[[pipelines.sinks]]`<br>`options = { ... }` |
| Auth type | `--pipelines.N.auth.type <type>` | `LOGWISP_PIPELINES_N_AUTH_TYPE` | `[pipelines.auth]`<br>`type = "none"` |
## File Location
LogWisp searches for configuration in the following order:
1. Path specified via `--config` flag
2. Path from `LOGWISP_CONFIG_FILE` environment variable
3. `~/.config/logwisp/logwisp.toml`
4. `./logwisp.toml` in current directory
Note: `N` in the table above represents array indices (0-based).
## Global Settings
Top-level configuration options:
| Setting | Type | Default | Description |
|---------|------|---------|-------------|
| `background` | bool | false | Run as daemon process |
| `quiet` | bool | false | Suppress console output |
| `disable_status_reporter` | bool | false | Disable periodic status logging |
| `config_auto_reload` | bool | false | Enable file watch for auto-reload |
## Hot Reload
LogWisp supports automatic configuration reloading without restart:
```bash
# Enable hot reload
logwisp --config-auto-reload --config /etc/logwisp/config.toml

# Manual reload via signal
kill -HUP $(pidof logwisp)   # or SIGUSR1
```
Reload triggers:
- File modification detection (when `config_auto_reload` is enabled)
- SIGHUP or SIGUSR1 signals

Hot reload updates:
- Pipeline configurations
- Sources and sinks
- Filters and formatters
- Rate limits
- Router mode changes

Not reloaded (requires restart):
- Logging configuration
- Background mode
- Global settings

## Logging Configuration
LogWisp's internal operational logging:
```toml
[logging]
output = "stdout"   # file|stdout|stderr|split|all|none
level = "info"      # debug|info|warn|error

[logging.file]
directory = "./log"
name = "logwisp"
max_size_mb = 100
max_total_size_mb = 1000
retention_hours = 168.0

[logging.console]
target = "stdout"   # stdout|stderr|split
format = "txt"      # txt|json
```

### Output Modes
- **file**: Write to log files only
- **stdout**: Write to standard output
- **stderr**: Write to standard error
- **split**: INFO/DEBUG to stdout, WARN/ERROR to stderr
- **all**: Write to both file and console
- **none**: Disable all logging
## Pipeline Configuration
Each `[[pipelines]]` section defines an independent processing pipeline:
```toml
[[pipelines]]
name = "pipeline-name"

# Rate limiting (optional)
[pipelines.rate_limit]
rate = 1000.0
burst = 2000.0
policy = "drop"          # pass|drop
max_entry_size_bytes = 0 # 0=unlimited

# Format configuration (optional)
[pipelines.format]
type = "json"            # raw|json|txt

# Sources (required, 1+)
[[pipelines.sources]]
type = "directory"
# ... source-specific config

# Filters (optional)
[[pipelines.filters]]
type = "include"
logic = "or"
patterns = ["ERROR", "WARN"]

# Sinks (required, 1+)
[[pipelines.sinks]]
type = "http"
# ... sink-specific config
```
## Environment Variables
All configuration options support environment variable overrides.

### Naming Convention
- Prefix: `LOGWISP_`
- Path separator: `_` (underscore)
- Array indices: Numeric suffix (0-based)
- Case: UPPERCASE

### Mapping Examples
| TOML Path | Environment Variable |
|-----------|---------------------|
| `quiet` | `LOGWISP_QUIET` |
| `logging.level` | `LOGWISP_LOGGING_LEVEL` |
| `pipelines[0].name` | `LOGWISP_PIPELINES_0_NAME` |
| `pipelines[0].sources[0].type` | `LOGWISP_PIPELINES_0_SOURCES_0_TYPE` |
## Command-Line Overrides
All configuration options can be overridden via CLI flags:
```bash
logwisp --quiet \
--logging.level=debug \
--pipelines.0.name=myapp \
--pipelines.0.sources.0.type=stdin
```
## Configuration Validation
LogWisp validates configuration at startup:
- Required fields presence
- Type correctness
- Port conflicts
- Path accessibility
- Pattern compilation
- Network address formats
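As a sketch of the port-conflict check, a configuration like the following would be rejected at startup, since two HTTP sinks bind the same port outside router mode (hypothetical pipeline names and port):

```toml
# Hypothetical example: fails validation, port 8080 is bound twice
[[pipelines]]
name = "a"
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }

[[pipelines]]
name = "b"
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
```

In router mode (`--router`), shared ports are allowed and pipelines are distinguished by path prefix instead.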
### Sources
Input data sources:
#### Directory Source
```toml
[[pipelines.sources]]
type = "directory"
options = {
path = "/var/log/myapp", # Directory to monitor
pattern = "*.log", # File pattern (glob)
check_interval_ms = 100 # Check interval (10-60000)
}
```
#### File Source
```toml
[[pipelines.sources]]
type = "file"
options = {
path = "/var/log/app.log" # Specific file
}
```
#### Stdin Source
```toml
[[pipelines.sources]]
type = "stdin"
options = {}
```
#### HTTP Source
```toml
[[pipelines.sources]]
type = "http"
options = {
port = 8081, # Port to listen on
ingest_path = "/ingest", # Path for POST requests
buffer_size = 1000, # Input buffer size
rate_limit = { # Optional rate limiting
enabled = true,
requests_per_second = 10.0,
burst_size = 20,
limit_by = "ip"
}
}
```
#### TCP Source
```toml
[[pipelines.sources]]
type = "tcp"
options = {
port = 9091, # Port to listen on
buffer_size = 1000, # Input buffer size
rate_limit = { # Optional rate limiting
enabled = true,
requests_per_second = 5.0,
burst_size = 10,
limit_by = "ip"
}
}
```
### Filters
Control which log entries pass through:
```toml
# Include filter - only matching logs pass
[[pipelines.filters]]
type = "include"
logic = "or" # or: match any, and: match all
patterns = [
"ERROR",
"(?i)warn", # Case-insensitive
"\\bfatal\\b" # Word boundary
]
# Exclude filter - matching logs are dropped
[[pipelines.filters]]
type = "exclude"
patterns = ["DEBUG", "health-check"]
```
### Sinks
Output destinations:
#### HTTP Sink (SSE)
```toml
[[pipelines.sinks]]
type = "http"
options = {
port = 8080,
buffer_size = 1000,
stream_path = "/stream",
status_path = "/status",
# Heartbeat
heartbeat = {
enabled = true,
interval_seconds = 30,
format = "comment", # comment or json
include_timestamp = true,
include_stats = false
},
# Rate limiting
rate_limit = {
enabled = true,
requests_per_second = 10.0,
burst_size = 20,
limit_by = "ip", # ip or global
max_connections_per_ip = 5,
max_total_connections = 100,
response_code = 429,
response_message = "Rate limit exceeded"
}
}
```
#### TCP Sink
```toml
[[pipelines.sinks]]
type = "tcp"
options = {
port = 9090,
buffer_size = 5000,
heartbeat = { enabled = true, interval_seconds = 60, format = "json" },
rate_limit = { enabled = true, requests_per_second = 5.0, burst_size = 10 }
}
```
#### HTTP Client Sink
```toml
[[pipelines.sinks]]
type = "http_client"
options = {
url = "https://remote-log-server.com/ingest",
buffer_size = 1000,
batch_size = 100,
batch_delay_ms = 1000,
timeout_seconds = 30,
max_retries = 3,
retry_delay_ms = 1000,
retry_backoff = 2.0,
headers = {
"Authorization" = "Bearer <API_KEY_HERE>",
"X-Custom-Header" = "value"
},
insecure_skip_verify = false
}
```
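The retry settings above imply exponentially growing delays. As a rough sketch (assuming the delay is multiplied by `retry_backoff` after each failed attempt, which the option names suggest but the source does not spell out):

```shell
# Sketch: delays implied by retry_delay_ms = 1000, retry_backoff = 2.0,
# max_retries = 3 (assumes the delay doubles after each failed attempt)
delay=1000
for attempt in 1 2 3; do
    echo "retry ${attempt}: wait ${delay}ms"
    delay=$((delay * 2))
done
```

With these values the sink would wait roughly 1s, 2s, then 4s between attempts before giving up.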
#### TCP Client Sink
```toml
[[pipelines.sinks]]
type = "tcp_client"
options = {
address = "remote-server.com:9090",
buffer_size = 1000,
dial_timeout_seconds = 10,
write_timeout_seconds = 30,
keep_alive_seconds = 30,
reconnect_delay_ms = 1000,
max_reconnect_delay_seconds = 30,
reconnect_backoff = 1.5
}
```
#### File Sink
```toml
[[pipelines.sinks]]
type = "file"
options = {
directory = "/var/log/logwisp",
name = "app",
max_size_mb = 100,
max_total_size_mb = 1000,
retention_hours = 168.0,
min_disk_free_mb = 1000,
buffer_size = 2000
}
```
#### Console Sinks
```toml
[[pipelines.sinks]]
type = "stdout" # or "stderr"
options = {
buffer_size = 500,
target = "stdout" # stdout, stderr, or split
}
```
## Complete Examples
### Basic Application Monitoring
```toml
[[pipelines]]
name = "app"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
```
### Hot Reload with JSON Output
This example enables configuration hot reload and pretty-printed JSON output:
```toml
config_auto_reload = true
config_save_on_exit = true
[[pipelines]]
name = "app"
format = "json"
[pipelines.format_options]
pretty = true
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
```
### Filtering
```toml
[logging]
output = "file"
level = "info"
[[pipelines]]
name = "production"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log", check_interval_ms = 50 }
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "WARN", "CRITICAL"]
[[pipelines.filters]]
type = "exclude"
patterns = ["/health", "/metrics"]
[[pipelines.sinks]]
type = "http"
options = {
port = 8080,
rate_limit = { enabled = true, requests_per_second = 25.0 }
}
[[pipelines.sinks]]
type = "file"
options = { directory = "/var/log/archive", name = "errors" }
```
### Multi-Source Aggregation
```toml
[[pipelines]]
name = "aggregated"

[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/nginx", pattern = "*.log" }

[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }

[[pipelines.sources]]
type = "stdin"
options = {}

[[pipelines.sources]]
type = "http"
options = { port = 8081, ingest_path = "/logs" }

[[pipelines.sinks]]
type = "tcp"
options = { port = 9090 }
```
## Default Configuration
Minimal working configuration:
```toml
[[pipelines]]
name = "default"

[[pipelines.sources]]
type = "directory"
[pipelines.sources.directory]
path = "./"
pattern = "*.log"

[[pipelines.sinks]]
type = "console"
[pipelines.sinks.console]
target = "stdout"
```
### Router Mode
```toml
# Run with: logwisp --router
router = true
[[pipelines]]
name = "api"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/api", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 } # Same port OK in router mode
[[pipelines]]
name = "web"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/nginx", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 } # Shared port
# Access:
# http://localhost:8080/api/stream
# http://localhost:8080/web/stream
# http://localhost:8080/status
```
### Remote Log Forwarding
```toml
[[pipelines]]
name = "forwarder"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "WARN"]
[[pipelines.sinks]]
type = "http_client"
options = {
url = "https://log-aggregator.example.com/ingest",
batch_size = 100,
batch_delay_ms = 5000,
headers = { "Authorization" = "Bearer <API_KEY_HERE>" }
}
[[pipelines.sinks]]
type = "tcp_client"
options = {
address = "backup-logger.example.com:9090",
reconnect_delay_ms = 5000
}
```
## Configuration Schema
### Type Reference
| TOML Type | Go Type | Environment Format |
|-----------|---------|-------------------|
| String | string | Plain text |
| Integer | int64 | Numeric string |
| Float | float64 | Decimal string |
| Boolean | bool | true/false |
| Array | []T | JSON array string |
| Table | struct | Nested with `_` |
# Environment Variables
Configure LogWisp through environment variables for containerized deployments.
## Naming Convention
- **Prefix**: `LOGWISP_`
- **Path separator**: `_` (underscore)
- **Array indices**: Numeric suffix (0-based)
- **Case**: UPPERCASE
Examples:
- `logging.level` → `LOGWISP_LOGGING_LEVEL`
- `pipelines[0].name` → `LOGWISP_PIPELINES_0_NAME`
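The mapping can be sketched with a bit of shell (illustrative only; LogWisp performs this conversion internally):

```shell
# Sketch: derive the environment variable name for a TOML path,
# following the convention above (prefix, '_' separators,
# 0-based indices, uppercase)
toml_path='pipelines[0].sources[0].type'
env_name="LOGWISP_$(printf '%s' "$toml_path" \
    | sed -e 's/\[\([0-9]*\)\]/_\1/g' -e 's/\./_/g' \
    | tr '[:lower:]' '[:upper:]')"
echo "$env_name"   # LOGWISP_PIPELINES_0_SOURCES_0_TYPE
```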
## General Variables
```bash
LOGWISP_CONFIG_FILE=/etc/logwisp/config.toml
LOGWISP_CONFIG_DIR=/etc/logwisp
LOGWISP_BACKGROUND=true
LOGWISP_QUIET=true
LOGWISP_DISABLE_STATUS_REPORTER=true
LOGWISP_CONFIG_AUTO_RELOAD=true
LOGWISP_CONFIG_SAVE_ON_EXIT=true
```
### `LOGWISP_CONFIG_FILE`
Configuration file path.
```bash
export LOGWISP_CONFIG_FILE=/etc/logwisp/config.toml
```
### `LOGWISP_CONFIG_DIR`
Configuration directory.
```bash
export LOGWISP_CONFIG_DIR=/etc/logwisp
export LOGWISP_CONFIG_FILE=production.toml
```
### `LOGWISP_ROUTER`
Enable router mode.
```bash
export LOGWISP_ROUTER=true
```
### `LOGWISP_BACKGROUND`
Run in background.
```bash
export LOGWISP_BACKGROUND=true
```
### `LOGWISP_QUIET`
Suppress all output.
```bash
export LOGWISP_QUIET=true
```
### `LOGWISP_DISABLE_STATUS_REPORTER`
Disable periodic status reporting.
```bash
export LOGWISP_DISABLE_STATUS_REPORTER=true
```
## Logging Variables
```bash
# Output mode
LOGWISP_LOGGING_OUTPUT=all
# Log level
LOGWISP_LOGGING_LEVEL=debug
# File logging
LOGWISP_LOGGING_FILE_DIRECTORY=/var/log/logwisp
LOGWISP_LOGGING_FILE_NAME=logwisp
LOGWISP_LOGGING_FILE_MAX_SIZE_MB=100
LOGWISP_LOGGING_FILE_MAX_TOTAL_SIZE_MB=1000
LOGWISP_LOGGING_FILE_RETENTION_HOURS=168
# Console logging
LOGWISP_LOGGING_CONSOLE_TARGET=stderr
LOGWISP_LOGGING_CONSOLE_FORMAT=json
# Special console target override
LOGWISP_CONSOLE_TARGET=split # Overrides sink console targets
```
## Pipeline Configuration
### Basic Pipeline
```bash
# Pipeline name
LOGWISP_PIPELINES_0_NAME=app
# Source configuration
LOGWISP_PIPELINES_0_SOURCES_0_TYPE=directory
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PATH=/var/log/app
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PATTERN="*.log"
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_CHECK_INTERVAL_MS=100
# Sink configuration
LOGWISP_PIPELINES_0_SINKS_0_TYPE=http
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_PORT=8080
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_BUFFER_SIZE=1000
```
### Pipeline with Formatter
```bash
# Pipeline name and format
LOGWISP_PIPELINES_0_NAME=app
LOGWISP_PIPELINES_0_FORMAT=json
# Format options
LOGWISP_PIPELINES_0_FORMAT_OPTIONS_PRETTY=true
LOGWISP_PIPELINES_0_FORMAT_OPTIONS_TIMESTAMP_FIELD=ts
LOGWISP_PIPELINES_0_FORMAT_OPTIONS_LEVEL_FIELD=severity
```
### Filters
```bash
# Include filter
LOGWISP_PIPELINES_0_FILTERS_0_TYPE=include
LOGWISP_PIPELINES_0_FILTERS_0_LOGIC=or
LOGWISP_PIPELINES_0_FILTERS_0_PATTERNS='["ERROR","WARN"]'
# Exclude filter
LOGWISP_PIPELINES_0_FILTERS_1_TYPE=exclude
LOGWISP_PIPELINES_0_FILTERS_1_PATTERNS='["DEBUG"]'
```
### HTTP Source
```bash
LOGWISP_PIPELINES_0_SOURCES_0_TYPE=http
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PORT=8081
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_INGEST_PATH=/ingest
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_BUFFER_SIZE=1000
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_RATE_LIMIT_ENABLED=true
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_RATE_LIMIT_REQUESTS_PER_SECOND=10.0
```
### TCP Source
```bash
LOGWISP_PIPELINES_0_SOURCES_0_TYPE=tcp
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PORT=9091
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_BUFFER_SIZE=1000
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_RATE_LIMIT_ENABLED=true
LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_RATE_LIMIT_REQUESTS_PER_SECOND=5.0
```
### HTTP Sink Options
```bash
# Basic
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_STREAM_PATH=/stream
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_STATUS_PATH=/status
# Heartbeat
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_ENABLED=true
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_INTERVAL_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_FORMAT=comment
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_INCLUDE_TIMESTAMP=true
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEARTBEAT_INCLUDE_STATS=false
# Rate Limiting
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_ENABLED=true
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_REQUESTS_PER_SECOND=10.0
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_BURST_SIZE=20
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_LIMIT_BY=ip
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_MAX_CONNECTIONS_PER_IP=5
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_MAX_TOTAL_CONNECTIONS=100
```
### HTTP Client Sink
```bash
LOGWISP_PIPELINES_0_SINKS_0_TYPE=http_client
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_URL=https://log-server.com/ingest
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_BATCH_SIZE=100
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_BATCH_DELAY_MS=5000
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_TIMEOUT_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MAX_RETRIES=3
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RETRY_DELAY_MS=1000
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RETRY_BACKOFF=2.0
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_HEADERS='{"Authorization":"Bearer <API_KEY_HERE>"}'
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_INSECURE_SKIP_VERIFY=false
```
### TCP Client Sink
```bash
LOGWISP_PIPELINES_0_SINKS_0_TYPE=tcp_client
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_ADDRESS=remote-server.com:9090
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_DIAL_TIMEOUT_SECONDS=10
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_WRITE_TIMEOUT_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_KEEP_ALIVE_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RECONNECT_DELAY_MS=1000
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MAX_RECONNECT_DELAY_SECONDS=30
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RECONNECT_BACKOFF=1.5
```
### File Sink
```bash
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_DIRECTORY=/var/log/logwisp
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_NAME=app
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MAX_SIZE_MB=100
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MAX_TOTAL_SIZE_MB=1000
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RETENTION_HOURS=168
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_MIN_DISK_FREE_MB=1000
```
### Console Sinks
```bash
LOGWISP_PIPELINES_0_SINKS_0_TYPE=stdout
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_BUFFER_SIZE=500
LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_TARGET=stdout
```
## Example
```bash
#!/usr/bin/env bash
# General settings
export LOGWISP_DISABLE_STATUS_REPORTER=false
# Logging
export LOGWISP_LOGGING_OUTPUT=all
export LOGWISP_LOGGING_LEVEL=info
# Pipeline 0: Application logs
export LOGWISP_PIPELINES_0_NAME=app
export LOGWISP_PIPELINES_0_SOURCES_0_TYPE=directory
export LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PATH=/var/log/myapp
export LOGWISP_PIPELINES_0_SOURCES_0_OPTIONS_PATTERN="*.log"
# Filters
export LOGWISP_PIPELINES_0_FILTERS_0_TYPE=include
export LOGWISP_PIPELINES_0_FILTERS_0_PATTERNS='["ERROR","WARN"]'
# HTTP sink
export LOGWISP_PIPELINES_0_SINKS_0_TYPE=http
export LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_PORT=8080
export LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_ENABLED=true
export LOGWISP_PIPELINES_0_SINKS_0_OPTIONS_RATE_LIMIT_REQUESTS_PER_SECOND=25.0
# Pipeline 1: System logs
export LOGWISP_PIPELINES_1_NAME=system
export LOGWISP_PIPELINES_1_SOURCES_0_TYPE=file
export LOGWISP_PIPELINES_1_SOURCES_0_OPTIONS_PATH=/var/log/syslog
# TCP sink
export LOGWISP_PIPELINES_1_SINKS_0_TYPE=tcp
export LOGWISP_PIPELINES_1_SINKS_0_OPTIONS_PORT=9090
# Pipeline 2: Remote forwarding
export LOGWISP_PIPELINES_2_NAME=forwarder
export LOGWISP_PIPELINES_2_SOURCES_0_TYPE=http
export LOGWISP_PIPELINES_2_SOURCES_0_OPTIONS_PORT=8081
export LOGWISP_PIPELINES_2_SOURCES_0_OPTIONS_INGEST_PATH=/logs
# HTTP client sink
export LOGWISP_PIPELINES_2_SINKS_0_TYPE=http_client
export LOGWISP_PIPELINES_2_SINKS_0_OPTIONS_URL=https://log-aggregator.example.com/ingest
export LOGWISP_PIPELINES_2_SINKS_0_OPTIONS_BATCH_SIZE=100
export LOGWISP_PIPELINES_2_SINKS_0_OPTIONS_HEADERS='{"Authorization":"Bearer <API_KEY_HERE>"}'
logwisp
```
## Precedence
1. Command-line flags (highest)
2. Environment variables
3. Configuration file
4. Defaults (lowest)
# Filters
LogWisp filters control which log entries pass through pipelines using regular-expression pattern matching.

## How Filters Work
- **Include**: Only matching logs pass (whitelist)
- **Exclude**: Matching logs are dropped (blacklist)
- Multiple filters apply sequentially - all must pass

## Filter Types
### Include Filter
Only entries matching patterns pass through.
```toml
[[pipelines.filters]]
type = "include"
logic = "or" # or|and
patterns = [
"ERROR",
"WARN",
"CRITICAL"
]
```
### Exclude Filter
Entries matching patterns are dropped.
```toml
[[pipelines.filters]]
type = "exclude"
patterns = [
"DEBUG",
"TRACE",
"health-check"
]
```
## Configuration Options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `type` | string | Required | Filter type (include/exclude) |
| `logic` | string | "or" | Pattern matching logic (or/and) |
| `patterns` | []string | Required | Pattern list |
## Pattern Syntax
Patterns use Go's regular expression (RE2) syntax:
### Basic Patterns
- **Literal match**: `"ERROR"` - matches "ERROR" anywhere
- **Case-insensitive**: `"(?i)error"` - matches "error", "ERROR", "Error"
- **Word boundary**: `"\\berror\\b"` - matches whole word only
### Advanced Patterns
- **Alternation**: `"ERROR|WARN|FATAL"`
- **Character classes**: `"[0-9]{3}"`
- **Wildcards**: `".*exception.*"`
- **Line anchors**: `"^ERROR"` (start), `"ERROR$"` (end)
### Special Characters
Escape special regex characters with backslash:
- `.``\\.`
- `*``\\*`
- `[``\\[`
- `(``\\(`
## Filter Logic
### OR Logic (default)
Entry passes if ANY pattern matches:
```toml
logic = "or"
patterns = ["ERROR", "WARN"]
# Passes: "ERROR in module", "WARN: low memory"
# Blocks: "INFO: started"
```
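OR matching behaves like a regex alternation, so a pattern set can be previewed outside LogWisp with `grep -E` (illustrative only, not LogWisp itself):

```shell
# Preview OR-style matching: any line matching either pattern passes
printf '%s\n' "ERROR in module" "WARN: low memory" "INFO: started" \
    | grep -E 'ERROR|WARN'
# prints the ERROR and WARN lines; the INFO line is dropped
```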
### AND Logic
Entry passes only if ALL patterns match:
```toml
logic = "and"
patterns = ["database", "ERROR"]
# Passes: "ERROR: database connection failed"
# Blocks: "ERROR: file not found"
```

## Common Patterns
### Log Levels
```toml
patterns = [
    "\\[(ERROR|WARN|INFO)\\]",   # [ERROR] format
    "(?i)\\b(error|warning)\\b", # Word boundaries
    "level=(error|warn)",        # key=value format
]
```
### Application Errors
```toml
# Java
patterns = [
    "Exception",
    "at .+\\.java:[0-9]+",
    "NullPointerException"
]
# Python
patterns = [
    "Traceback",
    "File \".+\\.py\", line [0-9]+",
    "ValueError|TypeError"
]
# Go
patterns = [
    "panic:",
    "goroutine [0-9]+",
    "runtime error:"
]
```
### Performance Issues
```toml
patterns = [
    "took [0-9]{4,}ms",   # >999ms operations
    "timeout|timed out",
    "slow query",
    "high cpu|cpu usage: [8-9][0-9]%"
]
```
### HTTP Patterns
```toml
patterns = [
    "status[=:][4-5][0-9]{2}",     # 4xx/5xx codes
    "HTTP/[0-9.]+ [4-5][0-9]{2}",
    "\"/api/v[0-9]+/",             # API paths
]
```
## Filter Chain
Multiple filters execute sequentially:
```toml
# First filter: Include errors and warnings
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "WARN"]

# Second filter: Exclude test environments
[[pipelines.filters]]
type = "exclude"
patterns = ["test-env", "staging"]
```
Processing order:
1. Entry arrives from source
2. Include filter evaluates
3. If passed, exclude filter evaluates
4. If passed all filters, entry continues to sink
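The chain behaves much like a shell pipeline of includes and excludes. For example, an include filter followed by an exclude filter can be approximated with chained `grep` calls (illustrative only, not LogWisp itself):

```shell
# Approximate an include filter followed by an exclude filter
printf '%s\n' "ERROR on prod-api" "ERROR on test-env" "INFO on prod-api" \
    | grep -E 'ERROR|WARN' \
    | grep -Ev 'test-env|staging'
# only "ERROR on prod-api" survives both stages
```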
## Performance Considerations
### Pattern Compilation
- Patterns compile once at startup
- Invalid patterns cause startup failure
- Complex patterns may impact performance
### Optimization Tips
- Place most selective filters first
- Use simple patterns when possible
- Combine related patterns with alternation
- Avoid excessive wildcards (`.*`)
## Filter Statistics
Filters track:
- Total entries evaluated
- Entries passed
- Entries blocked
- Processing time per pattern
## Common Use Cases
### Log Level Filtering
```toml
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "WARN", "FATAL", "CRITICAL"]
```
### Application Filtering
```toml
[[pipelines.filters]]
type = "include"
patterns = ["app1", "app2", "app3"]
```
### Noise Reduction
```toml
[[pipelines.filters]]
type = "exclude"
patterns = [
"health-check",
"ping",
"/metrics",
"heartbeat"
]
```
## Performance Tips
1. **Use anchors**: `^ERROR` faster than `ERROR`
2. **Avoid nested quantifiers**: `((a+)+)+`
3. **Non-capturing groups**: `(?:error|warn)`
4. **Order by frequency**: Most common first
5. **Simple patterns**: Faster than complex regex
## Testing Filters
```bash
# Test configuration
echo "[ERROR] Test" >> test.log
echo "[INFO] Test" >> test.log
# Run with debug
logwisp --log-level debug
# Check output
curl -N http://localhost:8080/stream
### Security Filtering
```toml
[[pipelines.filters]]
type = "exclude"
patterns = [
"password",
"token",
"api[_-]key",
"secret"
]
```
## Regex Pattern Guide
### Multi-stage Filtering
```toml
# Include production logs
[[pipelines.filters]]
type = "include"
patterns = ["prod-", "production"]
LogWisp uses Go's standard regex engine (RE2). It includes most common features but omits backtracking-heavy syntax.
# Include only errors
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "EXCEPTION", "FATAL"]
For complex logic, chain multiple filters (e.g., an `include` followed by an `exclude`) rather than writing one complex regex.
### Basic Matching
| Pattern | Description | Example |
| :--- | :--- | :--- |
| `literal` | Matches the exact text. | `"ERROR"` matches any log with "ERROR". |
| `.` | Matches any single character (except newline). | `"user."` matches "userA", "userB", etc. |
| `a\|b` | Matches expression `a` OR expression `b`. | `"error\|fail"` matches lines with "error" or "fail". |
### Anchors and Boundaries
Anchors tie your pattern to a specific position in the line.
| Pattern | Description | Example |
| :--- | :--- | :--- |
| `^` | Matches the beginning of the line. | `"^ERROR"` matches lines *starting* with "ERROR". |
| `$` | Matches the end of the line. | `"crashed$"` matches lines *ending* with "crashed". |
| `\b` | Matches a word boundary. | `"\berror\b"` matches "error" but not "terrorist". |
### Character Classes
| Pattern | Description | Example |
| :--- | :--- | :--- |
| `[abc]` | Matches `a`, `b`, or `c`. | `"[aeiou]"` matches any vowel. |
| `[^abc]` | Matches any character *except* `a`, `b`, or `c`. | `"[^0-9]"` matches any non-digit. |
| `[a-z]` | Matches any character in the range `a` to `z`. | `"[a-zA-Z]"` matches any letter. |
| `\d` | Matches any digit (`[0-9]`). | `\d{3}` matches three digits, like "123". |
| `\w` | Matches any word character (`[a-zA-Z0-9_]`). | `\w+` matches one or more word characters. |
| `\s` | Matches any whitespace character. | `\s+` matches one or more spaces or tabs. |
### Quantifiers
Quantifiers specify how many times a character or group must appear.
| Pattern | Description | Example |
| :--- | :--- | :--- |
| `*` | Zero or more times. | `"a*"` matches "", "a", "aa". |
| `+` | One or more times. | `"a+"` matches "a", "aa", but not "". |
| `?` | Zero or one time. | `"colou?r"` matches "color" and "colour". |
| `{n}` | Exactly `n` times. | `\d{4}` matches a 4-digit number. |
| `{n,}` | `n` or more times. | `\d{2,}` matches numbers with 2 or more digits. |
| `{n,m}` | Between `n` and `m` times. | `\d{1,3}` matches numbers with 1 to 3 digits. |
### Grouping
| Pattern | Description | Example |
| :--- | :--- | :--- |
| `(...)` | Groups an expression and captures the match. | `(ERROR\|WARN)` captures "ERROR" or "WARN". |
| `(?:...)` | Groups an expression *without* capturing. Slightly faster. | `(?:ERROR\|WARN)` is more efficient if you just need to group. |
### Flags and Modifiers
Flags are placed at the beginning of a pattern to change its behavior.
| Pattern | Description |
| :--- | :--- |
| `(?i)` | Case-insensitive matching. |
| `(?m)` | Multi-line mode (`^` and `$` match start/end of lines). |
**Example:** `"(?i)error"` matches "error", "ERROR", and "Error".
### Practical Examples for Logging
* **Match an IP Address:**
```
\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b
```
* **Match HTTP 4xx or 5xx Status Codes:**
```
"status[= ](4|5)\d{2}"
```
* **Match a slow database query (>100ms):**
```
"Query took [1-9]\d{2,}ms"
```
* **Match key-value pairs:**
```
"user=(admin|guest)"
```
* **Match Java exceptions:**
```
"Exception:|at .+\.java:\d+"
```
```toml
# Exclude known issues
[[pipelines.filters]]
type = "exclude"
patterns = ["ECONNRESET", "broken pipe"]
```

# Formatters
LogWisp formatters transform log entries before output to sinks.
## Formatter Types
### Raw Formatter
Outputs the log message as-is with optional newline.
```toml
[pipelines.format]
type = "raw"
[pipelines.format.raw]
add_new_line = true
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `add_new_line` | bool | true | Append newline to messages |
### JSON Formatter
Produces structured JSON output.
```toml
[pipelines.format]
type = "json"
[pipelines.format.json]
pretty = false
timestamp_field = "timestamp"
level_field = "level"
message_field = "message"
source_field = "source"
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `pretty` | bool | false | Pretty print JSON |
| `timestamp_field` | string | "timestamp" | Field name for timestamp |
| `level_field` | string | "level" | Field name for log level |
| `message_field` | string | "message" | Field name for message |
| `source_field` | string | "source" | Field name for source |
**Output Structure:**
```json
{
"timestamp": "2024-01-01T12:00:00Z",
"level": "ERROR",
"source": "app",
"message": "Connection failed"
}
```
### Text Formatter
Template-based text formatting.
```toml
[pipelines.format]
type = "txt"
[pipelines.format.txt]
template = "[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Source}} - {{.Message}}"
timestamp_format = "2006-01-02T15:04:05.000Z07:00"
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `template` | string | See below | Go template string |
| `timestamp_format` | string | RFC3339 | Go time format string |
**Default Template:**
```
[{{.Timestamp | FmtTime}}] [{{.Level | ToUpper}}] {{.Source}} - {{.Message}}{{ if .Fields }} {{.Fields}}{{ end }}
```
## Template Functions
Available functions in text templates:
| Function | Description | Example |
|----------|-------------|---------|
| `FmtTime` | Format timestamp | `{{.Timestamp \| FmtTime}}` |
| `ToUpper` | Convert to uppercase | `{{.Level \| ToUpper}}` |
| `ToLower` | Convert to lowercase | `{{.Source \| ToLower}}` |
| `TrimSpace` | Remove whitespace | `{{.Message \| TrimSpace}}` |
## Template Variables
Available variables in templates:
| Variable | Type | Description |
|----------|------|-------------|
| `.Timestamp` | time.Time | Entry timestamp |
| `.Level` | string | Log level |
| `.Source` | string | Source identifier |
| `.Message` | string | Log message |
| `.Fields` | string | Additional fields (JSON) |
## Time Format Strings
Common Go time format patterns:
| Pattern | Example Output |
|---------|---------------|
| `2006-01-02T15:04:05Z07:00` | 2024-01-02T15:04:05Z |
| `2006-01-02 15:04:05` | 2024-01-02 15:04:05 |
| `Jan 2 15:04:05` | Jan 2 15:04:05 |
| `15:04:05.000` | 15:04:05.123 |
| `2006/01/02` | 2024/01/02 |
## Format Selection
### Default Behavior
If no formatter is specified:
- **HTTP/TCP sinks**: JSON format
- **Console/File sinks**: Raw format
- **Client sinks**: JSON format
### Per-Pipeline Configuration
Each pipeline can have its own formatter:
```toml
[[pipelines]]
name = "json-pipeline"
[pipelines.format]
type = "json"
[[pipelines]]
name = "text-pipeline"
[pipelines.format]
type = "txt"
```
## Message Processing
### JSON Message Handling
When using JSON formatter with JSON log messages:
1. Attempts to parse message as JSON
2. Merges fields with LogWisp metadata
3. LogWisp fields take precedence
4. Falls back to string if parsing fails
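A rough sketch of that merge logic in Go (illustration only: the fallback key name and map-based approach are assumptions, not LogWisp's actual implementation):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// mergeJSON follows the documented order: parse the message as JSON if
// possible, then overlay LogWisp's metadata, which takes precedence.
func mergeJSON(message string, meta map[string]any) map[string]any {
	out := map[string]any{}
	var parsed map[string]any
	if err := json.Unmarshal([]byte(message), &parsed); err == nil {
		for k, v := range parsed {
			out[k] = v
		}
	} else {
		// Fall back to treating the message as a plain string.
		out["message"] = message
	}
	for k, v := range meta { // LogWisp fields win on conflict
		out[k] = v
	}
	return out
}

func main() {
	meta := map[string]any{"level": "ERROR", "source": "app"}
	merged := mergeJSON(`{"level":"debug","request_id":"abc123"}`, meta)
	b, _ := json.Marshal(merged)
	fmt.Println(string(b))
	// Prints: {"level":"ERROR","request_id":"abc123","source":"app"}
}
```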
### Field Preservation
LogWisp metadata always includes:
- Timestamp (from source or current time)
- Level (detected or default)
- Source (origin identifier)
- Message (original content)
## Performance Characteristics
### Formatter Performance
Relative performance (fastest to slowest):
1. **Raw**: Direct passthrough
2. **Text**: Template execution
3. **JSON**: Serialization
4. **JSON (pretty)**: Formatted serialization
### Optimization Tips
- Use raw format for high throughput
- Cache template compilation (automatic)
- Minimize template complexity
- Avoid pretty JSON in production
## Common Configurations
### Structured Logging
```toml
[pipelines.format]
type = "json"
[pipelines.format.json]
pretty = false
```
### Human-Readable Logs
```toml
[pipelines.format]
type = "txt"
[pipelines.format.txt]
template = "{{.Timestamp | FmtTime}} [{{.Level}}] {{.Message}}"
timestamp_format = "15:04:05"
```
### Syslog Format
```toml
[pipelines.format]
type = "txt"
[pipelines.format.txt]
template = "{{.Timestamp | FmtTime}} {{.Source}} {{.Level}}: {{.Message}}"
timestamp_format = "Jan 2 15:04:05"
```
### Minimal Output
```toml
[pipelines.format]
type = "txt"
[pipelines.format.txt]
template = "{{.Message}}"
```

# Installation Guide
LogWisp installation and service configuration for Linux and FreeBSD systems.
## Requirements
- **OS**: Linux, FreeBSD
- **Architecture**: amd64
- **Go**: 1.24+ (for building)
## Installation Methods
### Pre-built Binaries
Download the latest release binary for your platform and install to `/usr/local/bin`:
```bash
# Linux amd64
wget https://github.com/lixenwraith/logwisp/releases/latest/download/logwisp-linux-amd64
chmod +x logwisp-linux-amd64
sudo mv logwisp-linux-amd64 /usr/local/bin/logwisp
# Verify
logwisp --version
# FreeBSD amd64
fetch https://github.com/lixenwraith/logwisp/releases/latest/download/logwisp-freebsd-amd64
chmod +x logwisp-freebsd-amd64
sudo mv logwisp-freebsd-amd64 /usr/local/bin/logwisp
```
### Building from Source
Requires Go 1.24 or newer:
```bash
git clone https://github.com/lixenwraith/logwisp.git
cd logwisp
go build -o logwisp ./src/cmd/logwisp
sudo install -m 755 logwisp /usr/local/bin/
```
### Go Install Method
Install directly using Go (version information will not be embedded):
```bash
go install github.com/lixenwraith/logwisp/src/cmd/logwisp@latest
```
## Service Configuration
### Linux (systemd)
Create systemd service file `/etc/systemd/system/logwisp.service`:
```ini
[Unit]
Description=LogWisp Log Transport Service
After=network.target
[Service]
Type=simple
User=logwisp
Group=logwisp
ExecStart=/usr/local/bin/logwisp -c /etc/logwisp/logwisp.toml
Restart=on-failure
RestartSec=10
StandardOutput=journal
StandardError=journal
WorkingDirectory=/var/lib/logwisp
[Install]
WantedBy=multi-user.target
```
Setup service user and directories:
```bash
# Create service user
sudo useradd -r -s /bin/false logwisp
# Create configuration, working, and log directories
sudo mkdir -p /etc/logwisp /var/lib/logwisp /var/log/logwisp
sudo chown logwisp:logwisp /var/lib/logwisp /var/log/logwisp
# Enable and start the service
sudo systemctl daemon-reload
sudo systemctl enable logwisp
sudo systemctl start logwisp
```
### FreeBSD (rc.d)
Create rc script `/usr/local/etc/rc.d/logwisp`:
```sh
#!/bin/sh
# PROVIDE: logwisp
# REQUIRE: DAEMON NETWORKING
# KEYWORD: shutdown
. /etc/rc.subr
name="logwisp"
rcvar="${name}_enable"
command="/usr/local/bin/logwisp"
command_args="-c /usr/local/etc/logwisp/logwisp.toml"
load_rc_config $name
: ${logwisp_enable:="NO"}
: ${logwisp_config:="/usr/local/etc/logwisp/logwisp.toml"}
run_rc_command "$1"
```
Setup service:
```bash
sudo chmod +x /usr/local/etc/rc.d/logwisp
# Create service user
sudo pw useradd logwisp -d /nonexistent -s /usr/sbin/nologin
# Create configuration, working, and log directories
sudo mkdir -p /usr/local/etc/logwisp /var/db/logwisp /var/log/logwisp
sudo chown logwisp:logwisp /var/db/logwisp /var/log/logwisp
# Enable the service
sudo sysrc logwisp_enable="YES"
# Start service
sudo service logwisp start
```
## Directory Structure
Standard installation directories:
| Purpose | Linux | FreeBSD |
|---------|-------|---------|
| Binary | `/usr/local/bin/logwisp` | `/usr/local/bin/logwisp` |
| Configuration | `/etc/logwisp/` | `/usr/local/etc/logwisp/` |
| Working Directory | `/var/lib/logwisp/` | `/var/db/logwisp/` |
| Log Files | `/var/log/logwisp/` | `/var/log/logwisp/` |
| PID File | `/var/run/logwisp.pid` | `/var/run/logwisp.pid` |
## Post-Installation Verification
Verify the installation:
```bash
# Check version
logwisp --version
# Test configuration
logwisp -c /etc/logwisp/logwisp.toml --disable-status-reporter
# Check service status (Linux)
sudo systemctl status logwisp
# Check service status (FreeBSD)
sudo service logwisp status
```
### Initial Configuration
Create a basic configuration file:
```toml
# /etc/logwisp/logwisp.toml (Linux)
# /usr/local/etc/logwisp/logwisp.toml (FreeBSD)
[[pipelines]]
name = "myapp"
[[pipelines.sources]]
type = "directory"
options = { path = "/path/to/application/logs", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
```
Restart service after configuration changes:
**Linux:**
```bash
sudo systemctl restart logwisp
```
**FreeBSD:**
```bash
sudo service logwisp restart
```
## Uninstallation
### Linux
```bash
sudo systemctl stop logwisp
sudo systemctl disable logwisp
sudo rm /usr/local/bin/logwisp
sudo rm /etc/systemd/system/logwisp.service
sudo rm -rf /etc/logwisp /var/lib/logwisp /var/log/logwisp
sudo userdel logwisp
```
### FreeBSD
```bash
sudo service logwisp stop
sudo sysrc -x logwisp_enable
sudo rm /usr/local/bin/logwisp
sudo rm /usr/local/etc/rc.d/logwisp
sudo rm -rf /usr/local/etc/logwisp /var/db/logwisp /var/log/logwisp
sudo pw userdel logwisp
```

# Networking
Network configuration for LogWisp connections, including TLS, rate limiting, and access control.
## TLS Configuration
### TLS Support Matrix
| Component | TLS Support | Notes |
|-----------|-------------|-------|
| HTTP Source | ✓ | Full TLS 1.2/1.3 |
| HTTP Sink | ✓ | Full TLS 1.2/1.3 |
| HTTP Client | ✓ | Client certificates |
| TCP Source | ✗ | No encryption |
| TCP Sink | ✗ | No encryption |
| TCP Client | ✗ | No encryption |
### Server TLS Configuration
```toml
[pipelines.sources.http.tls]
enabled = true
cert_file = "/path/to/server.pem"
key_file = "/path/to/server.key"
ca_file = "/path/to/ca.pem"
min_version = "TLS1.2" # TLS1.2|TLS1.3
client_auth = false
client_ca_file = "/path/to/client-ca.pem"
verify_client_cert = true
```
### Client TLS Configuration
```toml
[pipelines.sinks.http_client.tls]
enabled = true
server_name = "logs.example.com"
skip_verify = false
cert_file = "/path/to/client.pem" # For mTLS
key_file = "/path/to/client.key" # For mTLS
```
### TLS Certificate Generation
Using the `tls` command:
```bash
# Generate CA certificate
logwisp tls -ca -o myca
# Generate server certificate
logwisp tls -server -ca-cert myca.pem -ca-key myca.key -host localhost,server.example.com -o server
# Generate client certificate
logwisp tls -client -ca-cert myca.pem -ca-key myca.key -o client
```
Command options:
| Flag | Description |
|------|-------------|
| `-ca` | Generate CA certificate |
| `-server` | Generate server certificate |
| `-client` | Generate client certificate |
| `-host` | Comma-separated hostnames/IPs |
| `-o` | Output file prefix |
| `-days` | Certificate validity (default: 365) |
## Network Rate Limiting
### Configuration Options
```toml
[pipelines.sources.http.net_limit]
enabled = true
max_connections_per_ip = 10
max_connections_total = 100
requests_per_second = 100.0
burst_size = 200
response_code = 429
response_message = "Rate limit exceeded"
ip_whitelist = ["192.168.1.0/24"]
ip_blacklist = ["10.0.0.0/8"]
```
### Rate Limiting Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `enabled` | bool | Enable rate limiting |
| `max_connections_per_ip` | int | Per-IP connection limit |
| `max_connections_total` | int | Global connection limit |
| `requests_per_second` | float | Request rate limit |
| `burst_size` | int | Token bucket burst capacity |
| `response_code` | int | HTTP response code when limited |
| `response_message` | string | Response message when limited |
### IP Access Control
**Whitelist**: Only specified IPs/networks allowed
```toml
ip_whitelist = [
"192.168.1.0/24", # Local network
"10.0.0.0/8", # Private network
"203.0.113.5" # Specific IP
]
```
**Blacklist**: Specified IPs/networks denied
```toml
ip_blacklist = [
"192.168.1.100", # Blocked host
"10.0.0.0/16" # Blocked subnet
]
```
Processing order:
1. Blacklist (immediate deny if matched)
2. Whitelist (must match if configured)
3. Rate limiting
4. Authentication
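The first two steps can be sketched in a few lines of Go with the standard `net/netip` package. This is an illustration of the documented order only; LogWisp's real matcher may handle single IPs, parse errors, and IPv6 differently:

```go
package main

import (
	"fmt"
	"net/netip"
)

// allowed applies the documented order: blacklist first (immediate deny),
// then whitelist (must match if any whitelist entries are configured).
func allowed(ipStr string, blacklist, whitelist []string) bool {
	ip := netip.MustParseAddr(ipStr)
	for _, cidr := range blacklist {
		if netip.MustParsePrefix(cidr).Contains(ip) {
			return false // immediate deny
		}
	}
	if len(whitelist) == 0 {
		return true // no whitelist configured: allow
	}
	for _, cidr := range whitelist {
		if netip.MustParsePrefix(cidr).Contains(ip) {
			return true
		}
	}
	return false // whitelist configured but no match
}

func main() {
	black := []string{"10.0.0.0/16"}
	white := []string{"192.168.1.0/24"}
	fmt.Println(allowed("192.168.1.50", black, white)) // true
	fmt.Println(allowed("10.0.1.7", black, white))     // false: blacklisted
	fmt.Println(allowed("203.0.113.5", black, white))  // false: not whitelisted
}
```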
## Connection Management
### TCP Keep-Alive
```toml
[pipelines.sources.tcp]
keep_alive = true
keep_alive_period_ms = 30000 # 30 seconds
```
Benefits:
- Detect dead connections
- Prevent connection timeout
- Maintain NAT mappings
### Connection Timeouts
```toml
[pipelines.sources.http]
read_timeout_ms = 10000 # 10 seconds
write_timeout_ms = 10000 # 10 seconds
[pipelines.sinks.tcp_client]
dial_timeout = 10 # Connection timeout
write_timeout = 30 # Write timeout
read_timeout = 10 # Read timeout
```
### Connection Limits
Global limits:
```toml
max_connections = 100 # Total concurrent connections
```
Per-IP limits:
```toml
max_connections_per_ip = 10
```
## Heartbeat Configuration
Keep connections alive with periodic heartbeats:
### HTTP Sink Heartbeat
```toml
[pipelines.sinks.http.heartbeat]
enabled = true
interval_ms = 30000
include_timestamp = true
include_stats = false
format = "comment" # comment|event|json
```
Formats:
- **comment**: SSE comment (`: heartbeat`)
- **event**: SSE event with data
- **json**: JSON-formatted heartbeat
### TCP Sink Heartbeat
```toml
[pipelines.sinks.tcp.heartbeat]
enabled = true
interval_ms = 30000
include_timestamp = true
include_stats = false
format = "json" # json|txt
```
## Network Protocols
### HTTP/HTTPS
- HTTP/1.1 and HTTP/2 support
- Persistent connections
- Chunked transfer encoding
- Server-Sent Events (SSE)
### TCP
- Raw TCP sockets
- Newline-delimited protocol
- Binary-safe transmission
- No encryption available
## Port Configuration
### Default Ports
| Service | Default Port | Protocol |
|---------|--------------|----------|
| HTTP Source | 8081 | HTTP/HTTPS |
| HTTP Sink | 8080 | HTTP/HTTPS |
| TCP Source | 9091 | TCP |
| TCP Sink | 9090 | TCP |
### Port Conflict Prevention
LogWisp validates port usage at startup:
- Detects port conflicts across pipelines
- Prevents duplicate bindings
- Suggests alternative ports
## Network Security
### Best Practices
1. **Use TLS for HTTP** connections when possible
2. **Implement rate limiting** to prevent DoS
3. **Configure IP whitelists** for restricted access
4. **Enable authentication** for all network endpoints
5. **Use non-standard ports** to reduce scanning exposure
6. **Monitor connection metrics** for anomalies
7. **Set appropriate timeouts** to prevent resource exhaustion
### Security Warnings
- TCP connections are **always unencrypted**
- HTTP Basic/Token auth **requires TLS**
- Avoid `skip_verify` in production
- Never expose unauthenticated endpoints publicly
## Load Balancing
### Client-Side Load Balancing
Configure multiple endpoints (future feature):
```toml
[[pipelines.sinks.http_client]]
urls = [
"https://log1.example.com/ingest",
"https://log2.example.com/ingest"
]
strategy = "round-robin" # round-robin|random|least-conn
```
### Server-Side Considerations
- Use reverse proxy for load distribution
- Configure session affinity if needed
- Monitor individual instance health
## Troubleshooting
### Common Issues
**Connection Refused**
- Check firewall rules
- Verify service is running
- Confirm correct port/host
**TLS Handshake Failure**
- Verify certificate validity
- Check certificate chain
- Confirm TLS versions match
**Rate Limit Exceeded**
- Adjust rate limit parameters
- Add IP to whitelist
- Implement client-side throttling
**Connection Timeout**
- Increase timeout values
- Check network latency
- Verify keep-alive settings

# Operations Guide
Running, monitoring, and maintaining LogWisp in production.
## Starting LogWisp
### Manual Start
```bash
# Foreground with default config
logwisp
# Background mode
logwisp --background
# With specific configuration
logwisp --config /etc/logwisp/production.toml
```
### Service Management
**Linux (systemd):**
```bash
sudo systemctl start logwisp
sudo systemctl stop logwisp
sudo systemctl restart logwisp
sudo systemctl status logwisp
```
**FreeBSD (rc.d):**
```bash
sudo service logwisp start
sudo service logwisp stop
sudo service logwisp restart
sudo service logwisp status
```
## Configuration Management
### Hot Reload
Enable automatic configuration reload:
```toml
config_auto_reload = true
```
Or via command line:
```bash
logwisp --config-auto-reload
```
Trigger manual reload:
```bash
kill -HUP $(pidof logwisp)
# or
kill -USR1 $(pidof logwisp)
```
### Configuration Validation
Test configuration without starting:
```bash
logwisp --config test.toml --quiet --disable-status-reporter
```
Check for errors:
- Port conflicts
- Invalid patterns
- Missing required fields
- File permissions
## Monitoring
### Status Reporter
Built-in periodic status logging (30-second intervals):
```
[INFO] Status report active_pipelines=2 time=15:04:05
[INFO] Pipeline status pipeline=app entries_processed=10523
[INFO] Pipeline status pipeline=system entries_processed=5231
```
Disable if not needed:
```toml
disable_status_reporter = true
```
### HTTP Status Endpoint
When using HTTP sink:
```bash
curl http://localhost:8080/status | jq .
```
Response structure:
```json
{
"uptime": "2h15m30s",
"pipelines": {
"default": {
"sources": 1,
"sinks": 2,
"processed": 15234,
"filtered": 523,
"dropped": 12
}
}
}
```
### Metrics Collection
Track via logs:
- Total entries processed
- Entries filtered
- Entries dropped
- Active connections
- Buffer utilization
## Log Management
### LogWisp's Operational Logs
Configuration for LogWisp's own logs:
```toml
[logging]
output = "file"
level = "info"
[logging.file]
directory = "/var/log/logwisp"
name = "logwisp"
max_size_mb = 100
retention_hours = 168
```
### Log Rotation
Automatic rotation based on:
- File size threshold
- Total size limit
- Retention period
Manual rotation:
```bash
# Move current log
mv /var/log/logwisp/logwisp.log /var/log/logwisp/logwisp.log.1
# Send signal to reopen
kill -USR1 $(pidof logwisp)
```
### Log Levels
Operational log levels:
- **debug**: Detailed debugging information
- **info**: General operational messages
- **warn**: Warning conditions
- **error**: Error conditions
Production recommendation: `info` or `warn`
## Performance Tuning
### Buffer Sizing
Adjust buffers based on load:
```toml
# High-volume source
[[pipelines.sources]]
type = "http"
[pipelines.sources.http]
buffer_size = 5000 # Increase for burst traffic
# Slow consumer sink
[[pipelines.sinks]]
type = "http_client"
[pipelines.sinks.http_client]
buffer_size = 10000 # Larger buffer for slow endpoints
batch_size = 500 # Larger batches
```
### Rate Limiting
Protect against overload:
```toml
[pipelines.rate_limit]
rate = 1000.0 # Entries per second
burst = 2000.0 # Burst capacity
policy = "drop" # Drop excess entries
```
### Connection Limits
Prevent resource exhaustion:
```toml
[pipelines.sources.http.net_limit]
max_connections_total = 1000
max_connections_per_ip = 50
```
## Troubleshooting
### Common Issues
**High Memory Usage**
- Check buffer sizes
- Monitor goroutine count
- Review retention settings
**Dropped Entries**
- Increase buffer sizes
- Add rate limiting
- Check sink performance
**Connection Errors**
- Verify network connectivity
- Check firewall rules
- Review TLS certificates
### Debug Mode
Enable detailed logging:
```bash
logwisp --logging.level=debug --logging.output=stderr
```
### Health Checks
Implement external monitoring:
```bash
#!/bin/bash
# Health check script
if ! curl -sf http://localhost:8080/status > /dev/null; then
echo "LogWisp health check failed"
exit 1
fi
```
## Backup and Recovery
### Configuration Backup
```bash
# Backup configuration
cp /etc/logwisp/logwisp.toml /backup/logwisp-$(date +%Y%m%d).toml
# Version control
git add /etc/logwisp/
git commit -m "LogWisp config update"
```
### State Recovery
LogWisp maintains minimal state:
- File read positions (automatic)
- Connection state (automatic)
Recovery after crash:
1. Service automatically restarts (systemd/rc.d)
2. File sources resume from last position
3. Network sources accept new connections
4. Clients reconnect automatically
## Security Operations
### Certificate Management
Monitor certificate expiration:
```bash
openssl x509 -in /path/to/cert.pem -noout -enddate
```
Rotate certificates:
1. Generate new certificates
2. Update configuration
3. Reload service (SIGHUP)
### Credential Rotation
Update authentication:
```bash
# Generate new credentials
logwisp auth -u admin -b
# Update configuration
vim /etc/logwisp/logwisp.toml
# Reload service
kill -HUP $(pidof logwisp)
```
### Access Auditing
Monitor access patterns:
- Review connection logs
- Track authentication failures
- Monitor rate limit hits
## Maintenance
### Planned Maintenance
1. Notify users of maintenance window
2. Stop accepting new connections
3. Drain existing connections
4. Perform maintenance
5. Restart service
### Upgrade Process
1. Download new version
2. Test with current configuration
3. Stop old version
4. Install new version
5. Start service
6. Verify operation
### Cleanup Tasks
Regular maintenance:
- Remove old log files
- Clean temporary files
- Verify disk space
- Update documentation
## Disaster Recovery
### Backup Strategy
- Configuration files: Daily
- TLS certificates: After generation
- Authentication credentials: Secure storage
### Recovery Procedures
Service failure:
1. Check service status
2. Review error logs
3. Verify configuration
4. Restart service
Data loss:
1. Restore configuration from backup
2. Regenerate certificates if needed
3. Recreate authentication credentials
4. Restart service
### Business Continuity
- Run multiple instances for redundancy
- Use load balancer for distribution
- Implement monitoring alerts
- Document recovery procedures

# Quick Start Guide
Get LogWisp up and running in minutes.
## Installation
### From Source
```bash
git clone https://github.com/lixenwraith/logwisp.git
cd logwisp
make install
```
### Using Go Install
```bash
go install github.com/lixenwraith/logwisp/src/cmd/logwisp@latest
```
## Basic Usage
### 1. Monitor Current Directory
Start LogWisp with defaults (monitors `*.log` files in current directory):
```bash
logwisp
```
### 2. Stream Logs
Connect to the log stream:
```bash
# SSE stream
curl -N http://localhost:8080/stream
# Check status
curl http://localhost:8080/status | jq .
```
### 3. Generate Test Logs
```bash
echo "[ERROR] Something went wrong!" >> test.log
echo "[INFO] Application started" >> test.log
echo "[WARN] Low memory warning" >> test.log
```
## Common Scenarios
### Monitor Specific Directory
Create `~/.config/logwisp/logwisp.toml`:
```toml
[[pipelines]]
name = "myapp"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/myapp", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
```
### Filter Only Errors
```toml
[[pipelines]]
name = "errors"
[[pipelines.sources]]
type = "directory"
options = { path = "./", pattern = "*.log" }
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "WARN", "CRITICAL"]
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
```
### Multiple Outputs
Send logs to both HTTP stream and file:
```toml
[[pipelines]]
name = "multi-output"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
# HTTP streaming
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
# File archival
[[pipelines.sinks]]
type = "file"
options = { directory = "/var/log/archive", name = "app" }
```
### TCP Streaming
For high-performance streaming:
```toml
[[pipelines]]
name = "highperf"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
[[pipelines.sinks]]
type = "tcp"
options = { port = 9090, buffer_size = 5000 }
```
Connect with netcat:
```bash
nc localhost 9090
```
### Router Mode
Run multiple pipelines on shared ports:
```bash
logwisp --router
# Access pipelines at:
# http://localhost:8080/myapp/stream
# http://localhost:8080/errors/stream
# http://localhost:8080/status (global)
```
### Remote Log Collection
Receive logs via HTTP/TCP and forward to remote servers:
```toml
[[pipelines]]
name = "collector"
# Receive logs via HTTP POST
[[pipelines.sources]]
type = "http"
options = { port = 8081, ingest_path = "/ingest" }
# Forward to remote server
[[pipelines.sinks]]
type = "http_client"
options = { url = "https://log-server.com/ingest", batch_size = 100, headers = { "Authorization" = "Bearer <API_KEY_HERE>" } }
```
Send logs to collector:
```bash
curl -X POST http://localhost:8081/ingest \
-H "Content-Type: application/json" \
-d '{"message": "Test log", "level": "INFO"}'
```
## Quick Tips
### Enable Debug Logging
```bash
logwisp --logging.level debug --logging.output stderr
```
### Quiet Mode
```bash
logwisp --quiet
```
### Rate Limiting
```toml
[[pipelines.sinks]]
type = "http"
options = { port = 8080, rate_limit = { enabled = true, requests_per_second = 10.0, burst_size = 20 } }
```
### Console Output
```toml
[[pipelines.sinks]]
type = "stdout" # or "stderr"
options = {}
```
### Split Console Output
```toml
# INFO/DEBUG to stdout, ERROR/WARN to stderr
[[pipelines.sinks]]
type = "stdout"
options = { target = "split" }
```

# Rate Limiting Guide
LogWisp provides configurable rate limiting to protect against abuse and ensure fair access.
## How It Works
Token bucket algorithm:
1. Each client gets a bucket with fixed capacity
2. Tokens refill at configured rate
3. Each request consumes one token
4. No tokens = request rejected
## Configuration
```toml
[[pipelines.sinks]]
type = "http" # or "tcp"
options.port = 8080
options.rate_limit.enabled = true
options.rate_limit.requests_per_second = 10.0
options.rate_limit.burst_size = 20
options.rate_limit.limit_by = "ip" # or "global"
options.rate_limit.max_connections_per_ip = 5
options.rate_limit.max_total_connections = 100
options.rate_limit.response_code = 429
options.rate_limit.response_message = "Rate limit exceeded"
```
## Strategies
### Per-IP Limiting (Default)
Each IP gets its own bucket:
```toml
limit_by = "ip"
requests_per_second = 10.0
# Client A: 10 req/sec
# Client B: 10 req/sec
```
### Global Limiting
All clients share one bucket:
```toml
limit_by = "global"
requests_per_second = 50.0
# All clients combined: 50 req/sec
```
## Connection Limits
```toml
max_connections_per_ip = 5 # Per IP
max_total_connections = 100 # Total
```
## Response Behavior
### HTTP
Returns JSON with configured status:
```json
{
"error": "Rate limit exceeded",
"retry_after": "60"
}
```
### TCP
Connections silently dropped.
## Examples
### Light Protection
```toml
rate_limit = { enabled = true, requests_per_second = 50.0, burst_size = 100 }
```
### Moderate Protection
```toml
rate_limit = { enabled = true, requests_per_second = 10.0, burst_size = 30, max_connections_per_ip = 5 }
```
### Strict Protection
```toml
rate_limit = { enabled = true, requests_per_second = 2.0, burst_size = 5, max_connections_per_ip = 2, response_code = 503 }
```
## Monitoring
Check statistics:
```bash
curl http://localhost:8080/status | jq '.sinks[0].details.rate_limit'
```
## Testing
```bash
# Test rate limits
for i in {1..20}; do
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/status
done
```
## Tuning
- **requests_per_second**: Expected load
- **burst_size**: 2-3× requests_per_second
- **Connection limits**: Based on memory

# Router Mode Guide
Router mode enables multiple pipelines to share HTTP ports through path-based routing.
## Overview
**Standard mode**: Each pipeline needs its own port
- Pipeline 1: `http://localhost:8080/stream`
- Pipeline 2: `http://localhost:8081/stream`
**Router mode**: Pipelines share ports via paths
- Pipeline 1: `http://localhost:8080/app/stream`
- Pipeline 2: `http://localhost:8080/database/stream`
- Global status: `http://localhost:8080/status`
## Enabling Router Mode
```bash
logwisp --router --config /etc/logwisp/multi-pipeline.toml
```
## Configuration
```toml
# All pipelines can use the same port
[[pipelines]]
name = "app"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/app", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 } # Same port OK
[[pipelines]]
name = "database"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/postgresql", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 } # Shared port
```
## Path Structure
Paths are prefixed with pipeline name:
| Pipeline | Config Path | Router Path |
|----------|-------------|-------------|
| `app` | `/stream` | `/app/stream` |
| `app` | `/status` | `/app/status` |
| `database` | `/stream` | `/database/stream` |
### Custom Paths
```toml
[[pipelines.sinks]]
type = "http"
# stream_path becomes /app/logs, status_path becomes /app/health
options = { stream_path = "/logs", status_path = "/health" }
```
## Endpoints
### Pipeline Endpoints
```bash
# SSE stream
curl -N http://localhost:8080/app/stream
# Pipeline status
curl http://localhost:8080/database/status
```
### Global Status
```bash
curl http://localhost:8080/status
```
Returns:
```json
{
"service": "LogWisp Router",
"pipelines": {
"app": { /* stats */ },
"database": { /* stats */ }
},
"total_pipelines": 2
}
```
## Use Cases
### Microservices
```toml
[[pipelines]]
name = "frontend"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/frontend", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
[[pipelines]]
name = "backend"
[[pipelines.sources]]
type = "directory"
options = { path = "/var/log/backend", pattern = "*.log" }
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
# Access:
# http://localhost:8080/frontend/stream
# http://localhost:8080/backend/stream
```
### Environment-Based
```toml
[[pipelines]]
name = "prod"
[[pipelines.filters]]
type = "include"
patterns = ["ERROR", "WARN"]
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
[[pipelines]]
name = "dev"
# No filters - all logs
[[pipelines.sinks]]
type = "http"
options = { port = 8080 }
```
## Limitations
1. **HTTP Only**: Router mode only works for HTTP/SSE
2. **No TCP Routing**: TCP remains on separate ports
3. **Path Conflicts**: Pipeline names must be unique
## Load Balancer Integration
```nginx
upstream logwisp {
server logwisp1:8080;
server logwisp2:8080;
}
# inside a server { } block:
location /logs/ {
proxy_pass http://logwisp/;
proxy_buffering off;
}
```

# Output Sinks
LogWisp sinks deliver processed log entries to various destinations.
## Sink Types
### Console Sink
Output to stdout/stderr.
```toml
[[pipelines.sinks]]
type = "console"
[pipelines.sinks.console]
target = "stdout" # stdout|stderr|split
colorize = false
buffer_size = 100
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `target` | string | "stdout" | Output target (stdout/stderr/split) |
| `colorize` | bool | false | Enable colored output |
| `buffer_size` | int | 100 | Internal buffer size |
**Target Modes:**
- **stdout**: All output to standard output
- **stderr**: All output to standard error
- **split**: INFO/DEBUG to stdout, WARN/ERROR to stderr
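The `split` routing rule can be sketched as follows (illustrative; level names taken from the list above):

```python
import sys

def target_for(level):
    """'split' mode: WARN/ERROR to stderr, everything else to stdout."""
    return sys.stderr if level in ("WARN", "ERROR") else sys.stdout

def emit(level, line):
    target_for(level).write(f"[{level}] {line}\n")

emit("INFO", "listening on :8080")  # goes to stdout
emit("ERROR", "disk full")          # goes to stderr
```

This separation lets shell pipelines redirect errors independently, e.g. `logwisp 2> errors.log`.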
### File Sink
Write logs to rotating files.
```toml
[[pipelines.sinks]]
type = "file"
[pipelines.sinks.file]
directory = "./logs"
name = "output"
max_size_mb = 100
max_total_size_mb = 1000
min_disk_free_mb = 500
retention_hours = 168.0
buffer_size = 1000
flush_interval_ms = 1000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `directory` | string | Required | Output directory |
| `name` | string | Required | Base filename |
| `max_size_mb` | int | 100 | Rotation threshold |
| `max_total_size_mb` | int | 1000 | Total size limit |
| `min_disk_free_mb` | int | 500 | Minimum free disk space |
| `retention_hours` | float | 168 | Delete files older than this many hours |
| `buffer_size` | int | 1000 | Internal buffer size |
| `flush_interval_ms` | int | 1000 | Force flush interval |
**Features:**
- Automatic rotation on size
- Retention management
- Disk space monitoring
- Periodic flushing
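Rotation and retention interact roughly as follows. This is an illustrative sketch of the policy described above, not LogWisp's code; `files_to_prune` and its tuple layout are invented for the example:

```python
MB = 1024 * 1024

def should_rotate(current_size_bytes, max_size_mb=100):
    """Rotate the active file once it crosses max_size_mb."""
    return current_size_bytes >= max_size_mb * MB

def files_to_prune(files, max_total_size_mb, retention_hours, now):
    """files: [(mtime_epoch, size_bytes), ...], oldest first.
    Prune expired files first, then the oldest until under the total cap."""
    prune = [f for f in files if (now - f[0]) / 3600.0 > retention_hours]
    kept = [f for f in files if f not in prune]
    total = sum(size for _, size in kept)
    while kept and total > max_total_size_mb * MB:
        oldest = kept.pop(0)
        prune.append(oldest)
        total -= oldest[1]
    return prune

now = 1_000_000.0
expired = (now - 200 * 3600, 50 * MB)   # past the 168 h retention window
older   = (now - 10 * 3600, 60 * MB)
newest  = (now - 1 * 3600, 60 * MB)
pruned = files_to_prune([expired, older, newest],
                        max_total_size_mb=100, retention_hours=168, now=now)
```

Here `expired` is removed by retention and `older` by the 100 MB total-size cap, leaving only the newest file.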
### HTTP Sink
SSE (Server-Sent Events) streaming server.
```toml
[[pipelines.sinks]]
type = "http"
[pipelines.sinks.http]
host = "0.0.0.0"
port = 8080
stream_path = "/stream"
status_path = "/status"
buffer_size = 1000
max_connections = 100
read_timeout_ms = 10000
write_timeout_ms = 10000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | "0.0.0.0" | Bind address |
| `port` | int | Required | Listen port |
| `stream_path` | string | "/stream" | SSE stream endpoint |
| `status_path` | string | "/status" | Status endpoint |
| `buffer_size` | int | 1000 | Internal buffer size |
| `max_connections` | int | 100 | Maximum concurrent clients |
| `read_timeout_ms` | int | 10000 | Read timeout |
| `write_timeout_ms` | int | 10000 | Write timeout |
**Heartbeat Configuration:**
```toml
[pipelines.sinks.http.heartbeat]
enabled = true
interval_ms = 30000
include_timestamp = true
include_stats = false
format = "comment" # comment|event|json
```
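With `format = "comment"`, heartbeats arrive as SSE comment lines (lines starting with `:`), which clients should skip without treating them as data. A minimal line classifier, assuming standard SSE framing (the exact heartbeat payload is an assumption):

```python
def parse_sse_line(line):
    """Classify one line of an SSE stream: comment heartbeat, data, or other."""
    if line.startswith(":"):
        return ("heartbeat", line[1:].strip())
    if line.startswith("data:"):
        return ("data", line[5:].strip())
    return ("other", line)
```

Comment-format heartbeats keep idle connections alive without triggering `message` events in browser `EventSource` clients.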
### TCP Sink
TCP streaming server for debugging.
```toml
[[pipelines.sinks]]
type = "tcp"
[pipelines.sinks.tcp]
host = "0.0.0.0"
port = 9090
buffer_size = 1000
max_connections = 100
keep_alive = true
keep_alive_period_ms = 30000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | "0.0.0.0" | Bind address |
| `port` | int | Required | Listen port |
| `buffer_size` | int | 1000 | Internal buffer size |
| `max_connections` | int | 100 | Maximum concurrent clients |
| `keep_alive` | bool | true | Enable TCP keep-alive |
| `keep_alive_period_ms` | int | 30000 | Keep-alive interval |
**Note:** TCP Sink has no authentication support (debugging only).
### HTTP Client Sink
Forward logs to remote HTTP endpoints.
```toml
[[pipelines.sinks]]
type = "http_client"
[pipelines.sinks.http_client]
url = "https://logs.example.com/ingest"
buffer_size = 1000
batch_size = 100
batch_delay_ms = 1000
timeout_seconds = 30
max_retries = 3
retry_delay_ms = 1000
retry_backoff = 2.0
insecure_skip_verify = false
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `url` | string | Required | Target URL |
| `buffer_size` | int | 1000 | Internal buffer size |
| `batch_size` | int | 100 | Logs per request |
| `batch_delay_ms` | int | 1000 | Max wait before sending |
| `timeout_seconds` | int | 30 | Request timeout |
| `max_retries` | int | 3 | Retry attempts |
| `retry_delay_ms` | int | 1000 | Initial retry delay |
| `retry_backoff` | float | 2.0 | Exponential backoff multiplier |
| `insecure_skip_verify` | bool | false | Skip TLS verification |
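`batch_size` and `batch_delay_ms` bound a batch from both ends: a batch is flushed when it is full, or when its oldest entry has waited long enough. A sketch of that policy (illustrative; a real sink would also flush on a timer even when no new entries arrive):

```python
import time

class Batcher:
    """Flush when batch_size entries are queued or batch_delay_ms has elapsed."""
    def __init__(self, batch_size=100, batch_delay_ms=1000):
        self.batch_size = batch_size
        self.batch_delay = batch_delay_ms / 1000.0
        self.pending = []
        self.first_at = None

    def add(self, entry, now=None):
        now = time.monotonic() if now is None else now
        if not self.pending:
            self.first_at = now          # clock starts with the first entry
        self.pending.append(entry)
        if len(self.pending) >= self.batch_size or now - self.first_at >= self.batch_delay:
            batch, self.pending = self.pending, []
            return batch                 # would be sent as one HTTP request
        return None
```

Under light traffic the delay dominates (small, timely batches); under load the size cap dominates (large, efficient batches).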
### TCP Client Sink
Forward logs to remote TCP servers.
```toml
[[pipelines.sinks]]
type = "tcp_client"
[pipelines.sinks.tcp_client]
host = "logs.example.com"
port = 9090
buffer_size = 1000
dial_timeout = 10
write_timeout = 30
read_timeout = 10
keep_alive = 30
reconnect_delay_ms = 1000
max_reconnect_delay_ms = 30000
reconnect_backoff = 1.5
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | Required | Target host |
| `port` | int | Required | Target port |
| `buffer_size` | int | 1000 | Internal buffer size |
| `dial_timeout` | int | 10 | Connection timeout (seconds) |
| `write_timeout` | int | 30 | Write timeout (seconds) |
| `read_timeout` | int | 10 | Read timeout (seconds) |
| `keep_alive` | int | 30 | TCP keep-alive (seconds) |
| `reconnect_delay_ms` | int | 1000 | Initial reconnect delay |
| `max_reconnect_delay_ms` | int | 30000 | Maximum reconnect delay |
| `reconnect_backoff` | float | 1.5 | Backoff multiplier |
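The three reconnect settings produce an exponentially growing delay sequence, capped at the maximum:

```python
def reconnect_delays(initial_ms=1000, max_ms=30000, backoff=1.5, attempts=10):
    """Delay before the n-th reconnect: initial * backoff**n, capped at max_ms."""
    delay, delays = float(initial_ms), []
    for _ in range(attempts):
        delays.append(min(delay, max_ms))
        delay *= backoff
    return delays
```

With the defaults above, the sequence is 1000, 1500, 2250, ... ms, reaching the 30 s ceiling after about nine failed attempts.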
## Network Sink Features
### Network Rate Limiting
Available for HTTP and TCP sinks:
```toml
[pipelines.sinks.http.net_limit]
enabled = true
max_connections_per_ip = 10
max_connections_total = 100
ip_whitelist = ["192.168.1.0/24"]
ip_blacklist = ["10.0.0.0/8"]
```
### TLS Configuration (HTTP Only)
```toml
[pipelines.sinks.http.tls]
enabled = true
cert_file = "/path/to/cert.pem"
key_file = "/path/to/key.pem"
ca_file = "/path/to/ca.pem"
min_version = "TLS1.2"
client_auth = false
```
HTTP Client TLS:
```toml
[pipelines.sinks.http_client.tls]
enabled = true
server_name = "logs.example.com"
skip_verify = false
cert_file = "/path/to/client.pem" # For mTLS
key_file = "/path/to/client.key" # For mTLS
```
### Authentication
HTTP/HTTP Client authentication:
```toml
[pipelines.sinks.http_client.auth]
type = "basic" # none|basic|token|mtls
username = "user"
password = "pass"
token = "bearer-token"
```
TCP Client authentication:
```toml
[pipelines.sinks.tcp_client.auth]
type = "scram" # none|scram
username = "user"
password = "pass"
```
## Sink Chaining
Sinks are designed to pair with matching LogWisp sources:
### Log Aggregation
- **HTTP Client Sink → HTTP Source**: HTTPS with authentication
- **TCP Client Sink → TCP Source**: Raw TCP with SCRAM
### Live Monitoring
- **HTTP Sink**: Browser-based SSE streaming
- **TCP Sink**: Debug interface (telnet/netcat)
## Sink Statistics
All sinks track:
- Total entries processed
- Active connections
- Failed sends
- Retry attempts
- Last processed timestamp

# Input Sources
LogWisp sources monitor various inputs and generate log entries for pipeline processing.
## Source Types
### Directory Source
Monitors a directory for log files matching a pattern.
```toml
[[pipelines.sources]]
type = "directory"
[pipelines.sources.directory]
path = "/var/log/myapp"
pattern = "*.log" # Glob pattern
check_interval_ms = 100 # Poll interval
recursive = false # Scan subdirectories
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `path` | string | Required | Directory to monitor |
| `pattern` | string | "*" | File pattern (glob) |
| `check_interval_ms` | int | 100 | File check interval in milliseconds |
| `recursive` | bool | false | Include subdirectories |
**Features:**
- Automatic file rotation detection
- Position tracking (resume after restart)
- Concurrent file monitoring
- Pattern-based file selection
### Stdin Source
Reads log entries from standard input.
```toml
[[pipelines.sources]]
type = "stdin"
[pipelines.sources.stdin]
buffer_size = 1000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `buffer_size` | int | 1000 | Internal buffer size |
**Features:**
- Line-based processing
- Automatic level detection
- Non-blocking reads
### HTTP Source
REST endpoint for log ingestion.
```toml
[[pipelines.sources]]
type = "http"
[pipelines.sources.http]
host = "0.0.0.0"
port = 8081
ingest_path = "/ingest"
buffer_size = 1000
max_body_size = 1048576 # 1MB
read_timeout_ms = 10000
write_timeout_ms = 10000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | "0.0.0.0" | Bind address |
| `port` | int | Required | Listen port |
| `ingest_path` | string | "/ingest" | Ingestion endpoint path |
| `buffer_size` | int | 1000 | Internal buffer size |
| `max_body_size` | int | 1048576 | Maximum request body size |
| `read_timeout_ms` | int | 10000 | Read timeout |
| `write_timeout_ms` | int | 10000 | Write timeout |
**Input Formats:**
- Single JSON object
- JSON array
- Newline-delimited JSON (NDJSON)
- Plain text (one entry per line)
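A minimal NDJSON ingest client using only the standard library. The URL matches the sample config above; the `Content-Type` value is an assumption, since the source accepts several formats:

```python
import json
import urllib.request

def build_ingest_request(entries, url="http://localhost:8081/ingest"):
    """POST entries as newline-delimited JSON to the HTTP source."""
    body = "\n".join(json.dumps(e) for e in entries).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/x-ndjson"},
        method="POST",
    )

req = build_ingest_request([{"level": "INFO", "message": "hello"}])
# urllib.request.urlopen(req)  # actually sends; needs a running HTTP source
```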
### TCP Source
Raw TCP socket listener for log ingestion.
```toml
[[pipelines.sources]]
type = "tcp"
[pipelines.sources.tcp]
host = "0.0.0.0"
port = 9091
buffer_size = 1000
read_timeout_ms = 10000
keep_alive = true
keep_alive_period_ms = 30000
```
**Configuration Options:**
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `host` | string | "0.0.0.0" | Bind address |
| `port` | int | Required | Listen port |
| `buffer_size` | int | 1000 | Internal buffer size |
| `read_timeout_ms` | int | 10000 | Read timeout |
| `keep_alive` | bool | true | Enable TCP keep-alive |
| `keep_alive_period_ms` | int | 30000 | Keep-alive interval |
**Protocol:**
- Newline-delimited JSON
- One log entry per line
- UTF-8 encoding
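A client for this protocol just writes UTF-8 JSON lines over a socket. Host and port come from the sample config above; this sketch has no reconnection handling:

```python
import json
import socket

def encode_entries(entries):
    """One JSON object per line, UTF-8, newline-terminated."""
    return b"".join(json.dumps(e).encode("utf-8") + b"\n" for e in entries)

def send_entries(entries, host="localhost", port=9091):
    """Ship entries to the TCP source (no reconnect logic in this sketch)."""
    with socket.create_connection((host, port), timeout=10) as conn:
        conn.sendall(encode_entries(entries))

payload = encode_entries([{"level": "ERROR", "message": "boom"}, {"ok": True}])
```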
## Network Source Features
### Network Rate Limiting
Available for HTTP and TCP sources:
```toml
[pipelines.sources.http.net_limit]
enabled = true
max_connections_per_ip = 10
max_connections_total = 100
requests_per_second = 100.0
burst_size = 200
response_code = 429
response_message = "Rate limit exceeded"
ip_whitelist = ["192.168.1.0/24"]
ip_blacklist = ["10.0.0.0/8"]
```
### TLS Configuration (HTTP Only)
```toml
[pipelines.sources.http.tls]
enabled = true
cert_file = "/path/to/cert.pem"
key_file = "/path/to/key.pem"
ca_file = "/path/to/ca.pem"
min_version = "TLS1.2"
client_auth = true
client_ca_file = "/path/to/client-ca.pem"
verify_client_cert = true
```
### Authentication
HTTP Source authentication options:
```toml
[pipelines.sources.http.auth]
type = "basic" # none|basic|token|mtls
realm = "LogWisp"
# Basic auth
[[pipelines.sources.http.auth.basic.users]]
username = "admin"
password_hash = "$argon2..."
# Token auth
[pipelines.sources.http.auth.token]
tokens = ["token1", "token2"]
```
TCP Source authentication:
```toml
[pipelines.sources.tcp.auth]
type = "scram" # none|scram
# SCRAM users
[[pipelines.sources.tcp.auth.scram.users]]
username = "user1"
stored_key = "base64..."
server_key = "base64..."
salt = "base64..."
argon_time = 3
argon_memory = 65536
argon_threads = 4
```
## Source Statistics
All sources track:
- Total entries received
- Dropped entries (buffer full)
- Invalid entries
- Last entry timestamp
- Active connections (network sources)
- Source-specific metrics
## Buffer Management
Each source maintains internal buffers:
- Default size: 1000 entries
- Drop policy when full
- Configurable per source
- Non-blocking writes
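A bounded queue with non-blocking writes captures this drop policy. Illustrative only: whether LogWisp drops the newest or the oldest entry when full is not specified here, and this sketch drops the newest:

```python
import queue

class DropBuffer:
    """Bounded buffer with non-blocking writes; drops new entries when full."""
    def __init__(self, size=1000):
        self.q = queue.Queue(maxsize=size)
        self.dropped = 0

    def push(self, entry):
        try:
            self.q.put_nowait(entry)   # never blocks the source
            return True
        except queue.Full:
            self.dropped += 1          # surfaces as dropped_entries in stats
            return False

buf = DropBuffer(size=2)
results = [buf.push(i) for i in range(3)]
```

The third push fails and increments the drop counter, which is what the `dropped_entries` statistic reports.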

# Status Monitoring
LogWisp provides comprehensive monitoring through status endpoints and operational logs.
## Status Endpoints
### Pipeline Status
```bash
# Standalone mode
curl http://localhost:8080/status
# Router mode
curl http://localhost:8080/pipelinename/status
```
Example response:
```json
{
"service": "LogWisp",
"version": "1.0.0",
"server": {
"type": "http",
"port": 8080,
"active_clients": 5,
"buffer_size": 1000,
"uptime_seconds": 3600,
"mode": {"standalone": true, "router": false}
},
"sources": [{
"type": "directory",
"total_entries": 152341,
"dropped_entries": 12,
"active_watchers": 3
}],
"filters": {
"filter_count": 2,
"total_processed": 152341,
"total_passed": 48234
},
"sinks": [{
"type": "http",
"total_processed": 48234,
"active_connections": 5,
"details": {
"port": 8080,
"buffer_size": 1000,
"rate_limit": {
"enabled": true,
"total_requests": 98234,
"blocked_requests": 234
}
}
}],
"endpoints": {
"transport": "/stream",
"status": "/status"
},
"features": {
"heartbeat": {
"enabled": true,
"interval": 30,
"format": "comment"
},
"ssl": {
"enabled": false
},
"rate_limit": {
"enabled": true,
"requests_per_second": 10.0,
"burst_size": 20
}
}
}
```
## Key Metrics
### Source Metrics
| Metric | Description | Healthy Range |
|--------|-------------|---------------|
| `active_watchers` | Files being watched | 1-1000 |
| `total_entries` | Entries processed | Increasing |
| `dropped_entries` | Buffer overflows | < 1% of total |
| `active_connections` | Network connections (HTTP/TCP sources) | Within limits |
### Sink Metrics
| Metric | Description | Warning Signs |
|--------|-------------|---------------|
| `active_connections` | Current clients | Near limit |
| `total_processed` | Entries sent | Should match filter output |
| `total_batches` | Batches sent (client sinks) | Increasing |
| `failed_batches` | Failed sends (client sinks) | > 0 indicates issues |
### Filter Metrics
| Metric | Description | Notes |
|--------|-------------|-------|
| `total_processed` | Entries checked | All entries |
| `total_passed` | Passed filters | Check if too low/high |
| `total_matched` | Pattern matches | Per filter stats |
### Rate Limit Metrics
| Metric | Description | Action |
|--------|-------------|--------|
| `blocked_requests` | Rejected requests | Increase limits if high |
| `active_ips` | Unique IPs tracked | Monitor for attacks |
| `total_connections` | Current connections | Check against limits |
## Operational Logging
### Log Levels
```toml
[logging]
level = "info" # debug, info, warn, error
```
## Health Checks
### Basic Check
```bash
#!/usr/bin/env bash
if curl -s -f http://localhost:8080/status > /dev/null; then
echo "Healthy"
else
echo "Unhealthy"
exit 1
fi
```
### Advanced Check
```bash
#!/usr/bin/env bash
STATUS=$(curl -s http://localhost:8080/status)
DROPPED=$(echo "$STATUS" | jq '.sources[0].dropped_entries')
TOTAL=$(echo "$STATUS" | jq '.sources[0].total_entries')
if [ "$TOTAL" -gt 0 ] && [ $((DROPPED * 100 / TOTAL)) -gt 5 ]; then
echo "High drop rate"
exit 1
fi
# Check client sink failures
FAILED=$(echo "$STATUS" | jq '.sinks[] | select(.type=="http_client") | .details.failed_batches // 0' | head -1)
if [ "${FAILED:-0}" -gt 10 ]; then
echo "High failure rate"
exit 1
fi
```