e1.6.0 Documentation update.

2025-07-10 13:51:27 -04:00
parent 43c98b08f9
commit 52d6a3c86d
12 changed files with 3747 additions and 313 deletions

doc/api-reference.md
# API Reference
[← Configuration](configuration.md) | [← Back to README](../README.md) | [Logging Guide →](logging-guide.md)
Complete API documentation for the lixenwraith/log package.
## Table of Contents
- [Logger Creation](#logger-creation)
- [Initialization Methods](#initialization-methods)
- [Logging Methods](#logging-methods)
- [Trace Logging Methods](#trace-logging-methods)
- [Special Logging Methods](#special-logging-methods)
- [Control Methods](#control-methods)
- [Constants](#constants)
- [Error Types](#error-types)
## Logger Creation
### NewLogger
```go
func NewLogger() *Logger
```
Creates a new, uninitialized logger instance with default configuration parameters registered internally.
**Example:**
```go
logger := log.NewLogger()
```
## Initialization Methods
### Init
```go
func (l *Logger) Init(cfg *config.Config, basePath string) error
```
Initializes the logger using settings from a `config.Config` instance.
**Parameters:**
- `cfg`: Configuration instance containing logger settings
- `basePath`: Prefix for configuration keys (e.g., "logging" looks for "logging.level", "logging.directory", etc.)
**Returns:**
- `error`: Initialization error if configuration is invalid
**Example:**
```go
cfg := config.New()
cfg.Load("app.toml", os.Args[1:])
err := logger.Init(cfg, "logging")
```
### InitWithDefaults
```go
func (l *Logger) InitWithDefaults(overrides ...string) error
```
Initializes the logger using built-in defaults with optional overrides.
**Parameters:**
- `overrides`: Variable number of "key=value" strings
**Returns:**
- `error`: Initialization error if overrides are invalid
**Example:**
```go
err := logger.InitWithDefaults(
"directory=/var/log/app",
"level=-4",
"format=json",
)
```
### LoadConfig
```go
func (l *Logger) LoadConfig(path string, args []string) error
```
Loads configuration from a TOML file with CLI overrides.
**Parameters:**
- `path`: Path to TOML configuration file
- `args`: Command-line arguments for overrides
**Returns:**
- `error`: Load or initialization error
**Example:**
```go
err := logger.LoadConfig("config.toml", os.Args[1:])
```
### SaveConfig
```go
func (l *Logger) SaveConfig(path string) error
```
Saves the current logger configuration to a file.
**Parameters:**
- `path`: Path where configuration should be saved
**Returns:**
- `error`: Save error if write fails
**Example:**
```go
err := logger.SaveConfig("current-config.toml")
```
## Logging Methods
All logging methods accept variadic arguments, typically used as key-value pairs for structured logging.
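The pairing convention can be illustrated with a small, self-contained sketch; `pairFields` is a hypothetical helper showing how variadic arguments conventionally map to fields, not part of the package:

```go
package main

import "fmt"

// pairFields groups variadic args into key/value fields the way
// structured loggers conventionally do. This is a hypothetical sketch
// of the calling convention, not the package's internals; an odd
// trailing argument is kept under a fallback "msg" key here.
func pairFields(args ...any) map[string]any {
	fields := make(map[string]any)
	for i := 0; i+1 < len(args); i += 2 {
		fields[fmt.Sprint(args[i])] = args[i+1]
	}
	if len(args)%2 == 1 {
		fields["msg"] = args[len(args)-1]
	}
	return fields
}

func main() {
	f := pairFields("port", 8080, "tls", true)
	fmt.Println(f["port"], f["tls"]) // 8080 true
}
```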
### Debug
```go
func (l *Logger) Debug(args ...any)
```
Logs a message at debug level (-4).
**Example:**
```go
logger.Debug("Processing started", "items", 100, "mode", "batch")
```
### Info
```go
func (l *Logger) Info(args ...any)
```
Logs a message at info level (0).
**Example:**
```go
logger.Info("Server started", "port", 8080, "tls", true)
```
### Warn
```go
func (l *Logger) Warn(args ...any)
```
Logs a message at warning level (4).
**Example:**
```go
logger.Warn("High memory usage", "used_mb", 1800, "limit_mb", 2048)
```
### Error
```go
func (l *Logger) Error(args ...any)
```
Logs a message at error level (8).
**Example:**
```go
logger.Error("Database connection failed", "host", "db.example.com", "error", err)
```
## Trace Logging Methods
These methods include function call traces in the log output.
### DebugTrace
```go
func (l *Logger) DebugTrace(depth int, args ...any)
```
Logs at debug level with function call trace.
**Parameters:**
- `depth`: Number of stack frames to include (0-10)
- `args`: Log message and fields
**Example:**
```go
logger.DebugTrace(3, "Entering critical section", "mutex", "db_lock")
```
### InfoTrace
```go
func (l *Logger) InfoTrace(depth int, args ...any)
```
Logs at info level with function call trace.
### WarnTrace
```go
func (l *Logger) WarnTrace(depth int, args ...any)
```
Logs at warning level with function call trace.
### ErrorTrace
```go
func (l *Logger) ErrorTrace(depth int, args ...any)
```
Logs at error level with function call trace.
## Special Logging Methods
### Log
```go
func (l *Logger) Log(args ...any)
```
Logs with timestamp only, no level information (uses Info level internally).
**Example:**
```go
logger.Log("Checkpoint reached", "step", 5)
```
### Message
```go
func (l *Logger) Message(args ...any)
```
Logs raw message without timestamp or level.
**Example:**
```go
logger.Message("Raw output for special formatting")
```
### LogTrace
```go
func (l *Logger) LogTrace(depth int, args ...any)
```
Logs with timestamp and trace, but no level information.
**Example:**
```go
logger.LogTrace(2, "Function boundary", "entering", true)
```
## Control Methods
### Shutdown
```go
func (l *Logger) Shutdown(timeout ...time.Duration) error
```
Gracefully shuts down the logger, attempting to flush pending logs.
**Parameters:**
- `timeout`: Optional timeout duration (defaults to 2x flush interval)
**Returns:**
- `error`: Shutdown error if flush fails or timeout exceeded
**Example:**
```go
err := logger.Shutdown(5 * time.Second)
if err != nil {
fmt.Printf("Shutdown error: %v\n", err)
}
```
### Flush
```go
func (l *Logger) Flush(timeout time.Duration) error
```
Explicitly triggers a sync of the current log file buffer to disk.
**Parameters:**
- `timeout`: Maximum time to wait for flush completion
**Returns:**
- `error`: Flush error if timeout exceeded
**Example:**
```go
err := logger.Flush(1 * time.Second)
```
## Constants
### Log Levels
```go
const (
LevelDebug int64 = -4
LevelInfo int64 = 0
LevelWarn int64 = 4
LevelError int64 = 8
)
```
Standard log levels for filtering output.
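Because levels are plain integers, filtering reduces to a numeric comparison. A minimal, self-contained sketch reusing the constants above (`shouldLog` is illustrative, not a package function):

```go
package main

import "fmt"

const (
	LevelDebug int64 = -4
	LevelInfo  int64 = 0
	LevelWarn  int64 = 4
	LevelError int64 = 8
)

// shouldLog mirrors the numeric filtering rule: a record is emitted
// only when its level is at or above the configured minimum level.
func shouldLog(configured, record int64) bool {
	return record >= configured
}

func main() {
	fmt.Println(shouldLog(LevelInfo, LevelDebug)) // false: debug is below info
	fmt.Println(shouldLog(LevelInfo, LevelError)) // true: errors pass at info
}
```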
### Heartbeat Levels
```go
const (
LevelProc int64 = 12 // Process statistics
LevelDisk int64 = 16 // Disk usage statistics
LevelSys int64 = 20 // System statistics
)
```
Special levels for heartbeat monitoring that bypass level filtering.
### Format Flags
```go
const (
FlagShowTimestamp int64 = 0b01
FlagShowLevel int64 = 0b10
FlagDefault = FlagShowTimestamp | FlagShowLevel
)
```
Flags controlling log entry format.
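The flags combine with bitwise OR and can be tested or cleared with the usual bit operations; a self-contained sketch reusing the constants above:

```go
package main

import "fmt"

const (
	FlagShowTimestamp int64 = 0b01
	FlagShowLevel     int64 = 0b10
	FlagDefault             = FlagShowTimestamp | FlagShowLevel
)

func main() {
	flags := FlagDefault
	fmt.Println(flags&FlagShowTimestamp != 0) // true: default shows timestamps

	flags &^= FlagShowLevel                   // clear the level bit (AND NOT)
	fmt.Println(flags&FlagShowLevel != 0)     // false: level output disabled
}
```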
### Level Helper Function
```go
func Level(levelStr string) (int64, error)
```
Converts level string to numeric constant.
**Parameters:**
- `levelStr`: Level name ("debug", "info", "warn", "error", "proc", "disk", "sys")
**Returns:**
- `int64`: Numeric level value
- `error`: Conversion error for invalid strings
**Example:**
```go
level, err := log.Level("debug") // Returns -4
```
## Error Types
The logger returns errors prefixed with "log: " for easy identification:
```go
// Configuration errors
"log: invalid format: 'xml' (use txt or json)"
"log: buffer_size must be positive: 0"
// Initialization errors
"log: failed to create log directory '/var/log/app': permission denied"
"log: logger previously failed to initialize and is disabled"
// Runtime errors
"log: logger not initialized or already shut down"
"log: timeout waiting for flush confirmation (1s)"
```
## Thread Safety
All public methods are thread-safe and can be called concurrently from multiple goroutines. The logger uses atomic operations and channels to ensure safe concurrent access without locks in the critical path.
## Usage Examples
### Complete Service Example
```go
type Service struct {
logger *log.Logger
}
func NewService() (*Service, error) {
logger := log.NewLogger()
err := logger.InitWithDefaults(
"directory=/var/log/service",
"format=json",
"buffer_size=2048",
"heartbeat_level=1",
)
if err != nil {
return nil, fmt.Errorf("logger init: %w", err)
}
return &Service{logger: logger}, nil
}
func (s *Service) ProcessRequest(id string) error {
s.logger.InfoTrace(1, "Processing request", "id", id)
if err := s.doWork(id); err != nil {
s.logger.Error("Request failed", "id", id, "error", err)
return err
}
s.logger.Info("Request completed", "id", id)
return nil
}
func (s *Service) Shutdown() error {
return s.logger.Shutdown(5 * time.Second)
}
```
---
[← Configuration](configuration.md) | [← Back to README](../README.md) | [Logging Guide →](logging-guide.md)

# Compatibility Adapters
[← Performance](performance.md) | [← Back to README](../README.md) | [Examples →](examples.md)
Guide to using lixenwraith/log with popular Go networking frameworks through compatibility adapters.
## Table of Contents
- [Overview](#overview)
- [gnet Adapter](#gnet-adapter)
- [fasthttp Adapter](#fasthttp-adapter)
- [Builder Pattern](#builder-pattern)
- [Structured Logging](#structured-logging)
- [Advanced Configuration](#advanced-configuration)
## Overview
The `compat` package provides adapters that allow the lixenwraith/log logger to work seamlessly with:
- **gnet v2**: High-performance event-driven networking framework
- **fasthttp**: Fast HTTP implementation
### Features
- ✅ Full interface compatibility
- ✅ Preserves structured logging
- ✅ Configurable behavior
- ✅ Shared logger instances
- ✅ Optional field extraction
## gnet Adapter
### Basic Usage
```go
import (
"github.com/lixenwraith/log"
"github.com/lixenwraith/log/compat"
"github.com/panjf2000/gnet/v2"
)
// Create logger
logger := log.NewLogger()
logger.InitWithDefaults("directory=/var/log/gnet")
defer logger.Shutdown()
// Create adapter
adapter := compat.NewGnetAdapter(logger)
// Use with gnet
gnet.Run(eventHandler, "tcp://127.0.0.1:9000",
gnet.WithLogger(adapter),
)
```
### gnet Interface Implementation
The adapter implements all gnet logger methods:
```go
type GnetAdapter struct {
logger *log.Logger
}
// Methods implemented:
// - Debugf(format string, args ...interface{})
// - Infof(format string, args ...interface{})
// - Warnf(format string, args ...interface{})
// - Errorf(format string, args ...interface{})
// - Fatalf(format string, args ...interface{})
```
### Custom Fatal Behavior
Override default fatal handling:
```go
adapter := compat.NewGnetAdapter(logger,
compat.WithFatalHandler(func(msg string) {
// Custom cleanup
saveApplicationState()
notifyOperations(msg)
gracefulShutdown()
os.Exit(1)
}),
)
```
### Complete gnet Example
```go
type echoServer struct {
gnet.BuiltinEventEngine
logger gnet.Logger
}
func (es *echoServer) OnBoot(eng gnet.Engine) gnet.Action {
es.logger.Infof("Server started on %s", eng.Addrs)
return gnet.None
}
func (es *echoServer) OnTraffic(c gnet.Conn) gnet.Action {
buf, _ := c.Next(-1)
es.logger.Debugf("Received %d bytes from %s", len(buf), c.RemoteAddr())
c.Write(buf)
return gnet.None
}
func main() {
logger := log.NewLogger()
logger.InitWithDefaults(
"directory=/var/log/gnet",
"format=json",
"buffer_size=2048",
)
defer logger.Shutdown()
adapter := compat.NewGnetAdapter(logger)
gnet.Run(
&echoServer{logger: adapter},
"tcp://127.0.0.1:9000",
gnet.WithMulticore(true),
gnet.WithLogger(adapter),
)
}
```
## fasthttp Adapter
### Basic Usage
```go
import (
"github.com/lixenwraith/log"
"github.com/lixenwraith/log/compat"
"github.com/valyala/fasthttp"
)
// Create logger
logger := log.NewLogger()
logger.InitWithDefaults("directory=/var/log/fasthttp")
defer logger.Shutdown()
// Create adapter
adapter := compat.NewFastHTTPAdapter(logger)
// Configure server
server := &fasthttp.Server{
Handler: requestHandler,
Logger: adapter,
}
```
### Level Detection
The adapter automatically detects log levels from message content:
```go
// Default detection rules:
// - Contains "error", "failed", "fatal", "panic" → ERROR
// - Contains "warn", "warning", "deprecated" → WARN
// - Contains "debug", "trace" → DEBUG
// - Otherwise → INFO
```
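The listed rules amount to ordered substring matching; a self-contained sketch of an equivalent detector follows (the adapter's actual implementation may differ — note that matching "warn" also covers "warning"):

```go
package main

import (
	"fmt"
	"strings"
)

const (
	LevelDebug int64 = -4
	LevelInfo  int64 = 0
	LevelWarn  int64 = 4
	LevelError int64 = 8
)

// detectLevel applies the documented default rules by case-insensitive
// substring match, checked from most to least severe.
func detectLevel(msg string) int64 {
	m := strings.ToLower(msg)
	switch {
	case strings.Contains(m, "error"), strings.Contains(m, "failed"),
		strings.Contains(m, "fatal"), strings.Contains(m, "panic"):
		return LevelError
	case strings.Contains(m, "warn"), strings.Contains(m, "deprecated"):
		return LevelWarn
	case strings.Contains(m, "debug"), strings.Contains(m, "trace"):
		return LevelDebug
	default:
		return LevelInfo
	}
}

func main() {
	fmt.Println(detectLevel("connection failed")) // 8
	fmt.Println(detectLevel("request served"))    // 0
}
```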
### Custom Level Detection
```go
adapter := compat.NewFastHTTPAdapter(logger,
compat.WithDefaultLevel(log.LevelInfo),
compat.WithLevelDetector(func(msg string) int64 {
// Custom detection logic
if strings.Contains(msg, "CRITICAL") {
return log.LevelError
}
if strings.Contains(msg, "performance") {
return log.LevelWarn
}
// Return 0 to use default detection
return 0
}),
)
```
### Complete fasthttp Example
```go
func main() {
logger := log.NewLogger()
logger.InitWithDefaults(
"directory=/var/log/fasthttp",
"format=json",
"heartbeat_level=1",
)
defer logger.Shutdown()
adapter := compat.NewFastHTTPAdapter(logger,
compat.WithDefaultLevel(log.LevelInfo),
)
server := &fasthttp.Server{
Handler: func(ctx *fasthttp.RequestCtx) {
// Your handler logic
ctx.Success("text/plain", []byte("Hello!"))
},
Logger: adapter,
Name: "MyServer",
Concurrency: fasthttp.DefaultConcurrency,
DisableKeepalive: false,
TCPKeepalive: true,
ReduceMemoryUsage: true,
}
if err := server.ListenAndServe(":8080"); err != nil {
logger.Error("Server failed", "error", err)
}
}
```
## Builder Pattern
### Shared Configuration
Use the builder for multiple adapters with shared configuration:
```go
// Create builder
builder := compat.NewBuilder().
WithOptions(
"directory=/var/log/app",
"format=json",
"buffer_size=4096",
"max_size_mb=100",
"heartbeat_level=2",
)
// Build adapters
gnetAdapter, fasthttpAdapter, err := builder.Build()
if err != nil {
panic(err)
}
// Get logger for direct use
logger := builder.GetLogger()
defer logger.Shutdown()
// Use adapters in your servers
// ...
```
### Structured Adapters
For enhanced field extraction:
```go
// Build with structured adapters
gnetStructured, fasthttpAdapter, err := builder.BuildStructured()
```
## Structured Logging
### Field Extraction
Structured adapters can extract fields from printf-style formats:
```go
// Regular adapter output:
// "client=192.168.1.1 port=8080"
// Structured adapter output:
// {"client": "192.168.1.1", "port": 8080, "source": "gnet"}
```
### Pattern Detection
The structured adapter recognizes patterns like:
- `key=%v`
- `key: %v`
- `key = %v`
```go
adapter := compat.NewStructuredGnetAdapter(logger)
// These will extract structured fields:
adapter.Infof("client=%s port=%d", "192.168.1.1", 8080)
// → {"client": "192.168.1.1", "port": 8080}
adapter.Errorf("user: %s, error: %s", "john", "auth failed")
// → {"user": "john", "error": "auth failed"}
// These remain as messages:
adapter.Infof("Connected to server")
// → {"msg": "Connected to server"}
```
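Pattern detection of this kind can be approximated with a regular expression. The sketch below is illustrative and deliberately simpler than the adapter's real matcher (for example, it does not handle multi-word values such as "auth failed"):

```go
package main

import (
	"fmt"
	"regexp"
)

// kvPattern matches simple `key=value` and `key: value` tokens; the
// structured adapter's real matching is richer than this sketch.
var kvPattern = regexp.MustCompile(`(\w+)\s*[=:]\s*(\S+)`)

// extractFields pulls key/value tokens out of an already-formatted message.
func extractFields(msg string) map[string]string {
	fields := make(map[string]string)
	for _, m := range kvPattern.FindAllStringSubmatch(msg, -1) {
		fields[m[1]] = m[2]
	}
	return fields
}

func main() {
	f := extractFields("client=192.168.1.1 port=8080")
	fmt.Println(f["client"], f["port"]) // 192.168.1.1 8080
}
```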
## Advanced Configuration
### High-Performance Setup
```go
builder := compat.NewBuilder().
WithOptions(
"directory=/var/log/highperf",
"format=json",
"buffer_size=8192", // Large buffer
"flush_interval_ms=1000", // Batch writes
"enable_periodic_sync=false", // Reduce I/O
"heartbeat_level=1", // Monitor drops
)
```
### Development Setup
```go
builder := compat.NewBuilder().
WithOptions(
"directory=./logs",
"format=txt", // Human-readable
"level=-4", // Debug level
"trace_depth=3", // Include traces
"enable_stdout=true", // Console output
"flush_interval_ms=50", // Quick feedback
)
```
### Container Setup
```go
builder := compat.NewBuilder().
WithOptions(
"disable_file=true", // No files
"enable_stdout=true", // Console only
"format=json", // For aggregators
"level=0", // Info and above
)
```
### Helper Functions
Configure servers with adapters:
```go
// Configure gnet with options
opts := compat.ConfigureGnetServer(adapter,
gnet.WithMulticore(true),
gnet.WithReusePort(true),
)
gnet.Run(handler, addr, opts...)
// Configure fasthttp
server := &fasthttp.Server{Handler: handler}
compat.ConfigureFastHTTPServer(adapter, server)
```
### Integration Examples
#### Microservice with Both Frameworks
```go
type Service struct {
gnetAdapter *compat.GnetAdapter
fasthttpAdapter *compat.FastHTTPAdapter
logger *log.Logger
}
func NewService() (*Service, error) {
builder := compat.NewBuilder().
WithOptions(
"directory=/var/log/service",
"format=json",
"heartbeat_level=2",
)
gnetAdapter, fasthttpAdapter, err := builder.Build()
if err != nil {
return nil, err
}
return &Service{
gnetAdapter: gnetAdapter,
fasthttpAdapter: fasthttpAdapter,
logger: builder.GetLogger(),
}, nil
}
func (s *Service) StartTCPServer() error {
return gnet.Run(handler, "tcp://0.0.0.0:9000",
gnet.WithLogger(s.gnetAdapter),
)
}
func (s *Service) StartHTTPServer() error {
server := &fasthttp.Server{
Handler: s.handleHTTP,
Logger: s.fasthttpAdapter,
}
return server.ListenAndServe(":8080")
}
func (s *Service) Shutdown() error {
return s.logger.Shutdown(5 * time.Second)
}
```
#### Middleware Integration
```go
// gnet-style middleware: wraps a per-connection handler with logging
func loggingMiddleware(adapter *compat.GnetAdapter, next func(gnet.Conn) gnet.Action) func(gnet.Conn) gnet.Action {
return func(c gnet.Conn) gnet.Action {
start := time.Now()
addr := c.RemoteAddr()
// Process connection
action := next(c)
adapter.Infof("conn_duration=%v remote=%s action=%v",
time.Since(start), addr, action)
return action
}
}
// fasthttp middleware: wraps the next handler with request logging
func requestLogger(adapter *compat.FastHTTPAdapter, next fasthttp.RequestHandler) fasthttp.RequestHandler {
return func(ctx *fasthttp.RequestCtx) {
start := time.Now()
// Process request
next(ctx)
// The adapter detects the level from the message content
adapter.Printf("method=%s path=%s status=%d duration=%v",
ctx.Method(), ctx.Path(),
ctx.Response.StatusCode(),
time.Since(start))
}
}
```
---
[← Performance](performance.md) | [← Back to README](../README.md) | [Examples →](examples.md)

doc/configuration.md
# Configuration Guide
[← Getting Started](getting-started.md) | [← Back to README](../README.md) | [API Reference →](api-reference.md)
This guide covers all configuration options and methods for customizing logger behavior.
## Table of Contents
- [Configuration Methods](#configuration-methods)
- [Configuration Parameters](#configuration-parameters)
- [Configuration Examples](#configuration-examples)
- [Dynamic Reconfiguration](#dynamic-reconfiguration)
- [Configuration Best Practices](#configuration-best-practices)
## Configuration Methods
### Method 1: InitWithDefaults
Simple string-based configuration using key=value pairs:
```go
logger := log.NewLogger()
err := logger.InitWithDefaults(
"directory=/var/log/myapp",
"level=-4",
"format=json",
"max_size_mb=100",
)
```
### Method 2: Init with config.Config
Integration with external configuration management:
```go
cfg := config.New()
cfg.Load("app.toml", os.Args[1:])
logger := log.NewLogger()
err := logger.Init(cfg, "logging") // Uses [logging] section
```
Example TOML configuration:
```toml
[logging]
level = -4
directory = "/var/log/myapp"
format = "json"
max_size_mb = 100
buffer_size = 2048
heartbeat_level = 2
heartbeat_interval_s = 300
```
## Configuration Parameters
### Basic Settings
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `level` | `int64` | Minimum log level (-4=Debug, 0=Info, 4=Warn, 8=Error) | `0` |
| `name` | `string` | Base name for log files | `"log"` |
| `directory` | `string` | Directory to store log files | `"./logs"` |
| `format` | `string` | Output format: `"txt"` or `"json"` | `"txt"` |
| `extension` | `string` | Log file extension (without dot) | `"log"` |
### Output Control
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `show_timestamp` | `bool` | Include timestamps in log entries | `true` |
| `show_level` | `bool` | Include log level in entries | `true` |
| `enable_stdout` | `bool` | Mirror logs to stdout/stderr | `false` |
| `stdout_target` | `string` | Console target: `"stdout"` or `"stderr"` | `"stdout"` |
| `disable_file` | `bool` | Disable file output (console-only) | `false` |
### Performance Tuning
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `buffer_size` | `int64` | Channel buffer size for log records | `1024` |
| `flush_interval_ms` | `int64` | Buffer flush interval (milliseconds) | `100` |
| `enable_periodic_sync` | `bool` | Enable periodic disk sync | `true` |
| `trace_depth` | `int64` | Default function trace depth (0-10) | `0` |
### File Management
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `max_size_mb` | `int64` | Maximum size per log file (MB) | `10` |
| `max_total_size_mb` | `int64` | Maximum total log directory size (MB) | `50` |
| `min_disk_free_mb` | `int64` | Minimum required free disk space (MB) | `100` |
| `retention_period_hrs` | `float64` | Hours to keep log files (0=disabled) | `0.0` |
| `retention_check_mins` | `float64` | Retention check interval (minutes) | `60.0` |
### Disk Monitoring
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `disk_check_interval_ms` | `int64` | Base disk check interval (ms) | `5000` |
| `enable_adaptive_interval` | `bool` | Adjust check interval based on load | `true` |
| `min_check_interval_ms` | `int64` | Minimum adaptive interval (ms) | `100` |
| `max_check_interval_ms` | `int64` | Maximum adaptive interval (ms) | `60000` |
### Heartbeat Monitoring
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `heartbeat_level` | `int64` | Heartbeat detail (0=off, 1=proc, 2=+disk, 3=+sys) | `0` |
| `heartbeat_interval_s` | `int64` | Heartbeat interval (seconds) | `60` |
## Configuration Examples
### Development Configuration
Verbose logging with quick rotation for testing:
```go
logger.InitWithDefaults(
"directory=./logs",
"level=-4", // Debug level
"format=txt", // Human-readable
"max_size_mb=1", // Small files for testing
"flush_interval_ms=50", // Quick flushes
"trace_depth=3", // Include call traces
"enable_stdout=true", // Also print to console
)
```
### Production Configuration
Optimized for performance with monitoring:
```go
logger.InitWithDefaults(
"directory=/var/log/app",
"level=0", // Info and above
"format=json", // Machine-parseable
"buffer_size=4096", // Large buffer
"max_size_mb=1000", // 1GB files
"max_total_size_mb=50000", // 50GB total
"retention_period_hrs=168", // 7 days
"heartbeat_level=2", // Process + disk stats
"heartbeat_interval_s=300", // 5 minutes
"enable_periodic_sync=false", // Reduce I/O
)
```
### Container/Cloud Configuration
Console-only with structured output:
```go
logger.InitWithDefaults(
"enable_stdout=true",
"disable_file=true", // No file output
"format=json", // Structured for log aggregators
"level=0", // Info level
"show_timestamp=true", // Include timestamps
)
```
### High-Security Configuration
Strict disk limits with frequent cleanup:
```go
logger.InitWithDefaults(
"directory=/secure/logs",
"level=4", // Warn and Error only
"max_size_mb=100", // 100MB files
"max_total_size_mb=1000", // 1GB total max
"min_disk_free_mb=5000", // 5GB free required
"retention_period_hrs=24", // 24 hour retention
"retention_check_mins=15", // Check every 15 min
"flush_interval_ms=10", // Immediate flush
)
```
## Dynamic Reconfiguration
The logger supports hot reconfiguration without losing data:
```go
// Initial configuration
logger := log.NewLogger()
logger.InitWithDefaults("level=0", "directory=/var/log/app")
// Later, change configuration
logger.InitWithDefaults(
"level=-4", // Now debug level
"enable_stdout=true", // Add console output
"heartbeat_level=1", // Enable monitoring
)
```
During reconfiguration:
- Pending logs are preserved
- Files are rotated if needed
- New settings take effect immediately
## Configuration Best Practices
### 1. Choose Appropriate Buffer Sizes
```go
// Low-volume application
"buffer_size=256"
// Medium-volume application (default)
"buffer_size=1024"
// High-volume application
"buffer_size=4096"
// Extreme volume (with monitoring)
"buffer_size=8192"
"heartbeat_level=1" // Monitor for dropped logs
```
### 2. Set Sensible Rotation Limits
Consider your disk space and retention needs:
```go
// Development
"max_size_mb=10"
"max_total_size_mb=100"
// Production with archival
"max_size_mb=1000" // 1GB files
"max_total_size_mb=0" // No limit (external archival)
"retention_period_hrs=168" // 7 days local
// Space-constrained environment
"max_size_mb=50"
"max_total_size_mb=500"
"min_disk_free_mb=1000"
```
### 3. Use Appropriate Formats
```go
// Development/debugging
"format=txt"
"show_timestamp=true"
"show_level=true"
// Production with log aggregation
"format=json"
"show_timestamp=true" // Aggregators parse this
"show_level=true"
```
### 4. Configure Monitoring
For production systems, enable heartbeats:
```go
// Basic monitoring
"heartbeat_level=1" // Process stats only
"heartbeat_interval_s=300" // Every 5 minutes
// Full monitoring
"heartbeat_level=3" // Process + disk + system
"heartbeat_interval_s=60" // Every minute
```
### 5. Platform-Specific Paths
```go
// Linux/Unix
"directory=/var/log/myapp"
// Windows
"directory=C:\\Logs\\MyApp"
// Container (ephemeral)
"disable_file=true"
"enable_stdout=true"
```
---
[← Getting Started](getting-started.md) | [← Back to README](../README.md) | [API Reference →](api-reference.md)

doc/disk-management.md
# Disk Management
[← Logging Guide](logging-guide.md) | [← Back to README](../README.md) | [Heartbeat Monitoring →](heartbeat-monitoring.md)
Comprehensive guide to log file rotation, retention policies, and disk space management.
## Table of Contents
- [File Rotation](#file-rotation)
- [Disk Space Management](#disk-space-management)
- [Retention Policies](#retention-policies)
- [Adaptive Monitoring](#adaptive-monitoring)
- [Recovery Behavior](#recovery-behavior)
- [Best Practices](#best-practices)
## File Rotation
### Automatic Rotation
Log files are automatically rotated when they reach the configured size limit:
```go
logger.InitWithDefaults(
"max_size_mb=100", // Rotate at 100MB
)
```
### Rotation Behavior
1. **Size Check**: Before each write, the logger checks if the file would exceed `max_size_mb`
2. **New File Creation**: Creates a new file with timestamp: `appname_240115_103045_123456789.log`
3. **Seamless Transition**: No logs are lost during rotation
4. **Old File Closure**: Previous file is properly closed and synced
### File Naming Convention
```
{name}_{YYMMDD}_{HHMMSS}_{nanoseconds}.{extension}
Example: myapp_240115_143022_987654321.log
```
Components:
- `name`: Configured log name
- `YYMMDD`: Date (year, month, day)
- `HHMMSS`: Time (hour, minute, second)
- `nanoseconds`: For uniqueness
- `extension`: Configured extension
## Disk Space Management
### Space Limits
The logger enforces two types of space limits:
```go
logger.InitWithDefaults(
"max_total_size_mb=1000", // Total log directory size
"min_disk_free_mb=5000", // Minimum free disk space
)
```
### Automatic Cleanup
When limits are exceeded, the logger:
1. Identifies oldest log files
2. Deletes them until space requirements are met
3. Preserves the current active log file
4. Logs cleanup actions for audit
### Example Configuration
```go
// Conservative: Strict limits
logger.InitWithDefaults(
"max_size_mb=50", // 50MB files
"max_total_size_mb=500", // 500MB total
"min_disk_free_mb=1000", // 1GB free required
)
// Generous: Large files, external archival
logger.InitWithDefaults(
"max_size_mb=1000", // 1GB files
"max_total_size_mb=0", // No total limit
"min_disk_free_mb=100", // 100MB free required
)
// Balanced: Production defaults
logger.InitWithDefaults(
"max_size_mb=100", // 100MB files
"max_total_size_mb=5000", // 5GB total
"min_disk_free_mb=500", // 500MB free required
)
```
## Retention Policies
### Time-Based Retention
Automatically delete logs older than a specified duration:
```go
logger.InitWithDefaults(
"retention_period_hrs=168", // Keep 7 days
"retention_check_mins=60", // Check hourly
)
```
### Retention Examples
```go
// Daily logs, keep 30 days
logger.InitWithDefaults(
"retention_period_hrs=720", // 30 days
"retention_check_mins=60", // Check hourly
"max_size_mb=1000", // 1GB daily files
)
// High-frequency logs, keep 24 hours
logger.InitWithDefaults(
"retention_period_hrs=24", // 1 day
"retention_check_mins=15", // Check every 15 min
"max_size_mb=100", // 100MB files
)
// Compliance: Keep 90 days
logger.InitWithDefaults(
"retention_period_hrs=2160", // 90 days
"retention_check_mins=360", // Check every 6 hours
"max_total_size_mb=100000", // 100GB total
)
```
### Retention Priority
When multiple policies conflict, cleanup priority is:
1. **Disk free space** (highest priority)
2. **Total size limit**
3. **Retention period** (lowest priority)
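The priority order can be sketched as a simple ordered check; the function and its thresholds below are illustrative, not the library's internals:

```go
package main

import "fmt"

// cleanupReason returns why the oldest file would be deleted, applying
// the documented priority order, or "" if no cleanup is needed.
// Field names and semantics here are illustrative assumptions.
func cleanupReason(diskFreeMB, minFreeMB, totalMB, maxTotalMB, ageHrs, retentionHrs float64) string {
	switch {
	case diskFreeMB < minFreeMB:
		return "disk free space"
	case maxTotalMB > 0 && totalMB > maxTotalMB:
		return "total size limit"
	case retentionHrs > 0 && ageHrs > retentionHrs:
		return "retention period"
	default:
		return ""
	}
}

func main() {
	// All three limits violated: free-space pressure wins.
	fmt.Println(cleanupReason(100, 500, 6000, 5000, 200, 168))  // disk free space
	fmt.Println(cleanupReason(9000, 500, 6000, 5000, 200, 168)) // total size limit
}
```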
## Adaptive Monitoring
### Adaptive Disk Checks
The logger adjusts disk check frequency based on logging volume:
```go
logger.InitWithDefaults(
"enable_adaptive_interval=true",
"disk_check_interval_ms=5000", // Base: 5 seconds
"min_check_interval_ms=100", // Minimum: 100ms
"max_check_interval_ms=60000", // Maximum: 1 minute
)
```
### How It Works
1. **Low Activity**: Interval increases (up to max)
2. **High Activity**: Interval decreases (down to min)
3. **Reactive Checks**: Immediate check after 10MB written
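One plausible shape for this adjustment is multiplicative increase and decrease clamped to the configured bounds; the library's actual policy is not specified here, so treat this as a sketch:

```go
package main

import "fmt"

// nextInterval halves the check interval under heavy write activity and
// backs off otherwise, clamped to [minMS, maxMS]. Illustrative only.
func nextInterval(currentMS, minMS, maxMS int64, busy bool) int64 {
	if busy {
		currentMS /= 2
	} else {
		currentMS *= 2
	}
	if currentMS < minMS {
		return minMS
	}
	if currentMS > maxMS {
		return maxMS
	}
	return currentMS
}

func main() {
	fmt.Println(nextInterval(5000, 100, 60000, true))   // 2500
	fmt.Println(nextInterval(40000, 100, 60000, false)) // 60000
}
```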
### Monitoring Disk Usage
Check disk-related heartbeat messages:
```go
logger.InitWithDefaults(
"heartbeat_level=2", // Enable disk stats
"heartbeat_interval_s=300", // Every 5 minutes
)
```
Output:
```
2024-01-15T10:30:00Z DISK type="disk" sequence=1 rotated_files=5 deleted_files=2 total_log_size_mb="487.32" log_file_count=8 current_file_size_mb="23.45" disk_status_ok=true disk_free_mb="5234.67"
```
## Recovery Behavior
### Disk Full Handling
When disk space is exhausted:
1. **Detection**: Write failure or space check triggers recovery
2. **Cleanup Attempt**: Delete oldest logs to free space
3. **Status Update**: Set `disk_status_ok=false` if cleanup fails
4. **Log Dropping**: New logs dropped until space available
5. **Recovery**: Automatic retry on next disk check
### Monitoring Recovery
```bash
# Check for disk issues in logs
grep "disk full" /var/log/myapp/*.log
grep "cleanup failed" /var/log/myapp/*.log
# Monitor disk status in heartbeats
grep "disk_status_ok=false" /var/log/myapp/*.log
```
### Manual Intervention
If automatic cleanup fails:
```bash
# Check disk usage
df -h /var/log
# Find large log files
find /var/log/myapp -name "*.log" -size +100M
# Manual cleanup (oldest first)
ls -t /var/log/myapp/*.log | tail -n 20 | xargs rm
# Verify space
df -h /var/log
```
## Best Practices
### 1. Plan for Growth
Estimate log volume and set appropriate limits:
```go
// Calculate required space:
// - Average log entry: 200 bytes
// - Entries per second: 100
// - Daily volume: 200 * 100 * 86400 = 1.7GB
logger.InitWithDefaults(
"max_size_mb=2000", // 2GB files (~ 1 day)
"max_total_size_mb=15000", // 15GB (~ 1 week)
"retention_period_hrs=168", // 7 days
)
```
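The arithmetic in the comment generalizes to a small helper; `dailyVolumeGB` is illustrative:

```go
package main

import "fmt"

// dailyVolumeGB estimates daily log volume from average entry size and
// write rate, mirroring the sizing arithmetic above.
func dailyVolumeGB(entryBytes, entriesPerSec float64) float64 {
	return entryBytes * entriesPerSec * 86400 / 1e9
}

func main() {
	fmt.Printf("%.2f GB/day\n", dailyVolumeGB(200, 100)) // 1.73 GB/day
}
```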
### 2. External Archival
For long-term storage, implement external archival:
```go
// Configure for archival
logger.InitWithDefaults(
"max_size_mb=1000", // 1GB files for easy transfer
"max_total_size_mb=10000", // 10GB local buffer
"retention_period_hrs=48", // 2 days local
)
// Archive completed files
func archiveCompletedLogs(archivePath string) error {
files, _ := filepath.Glob("/var/log/myapp/*.log")
for _, file := range files {
if !isCurrentLogFile(file) {
// Move to archive storage (S3, NFS, etc.)
if err := archiveFile(file, archivePath); err != nil {
return err
}
os.Remove(file)
}
}
return nil
}
```
### 3. Monitor Disk Health
Set up alerts for disk issues:
```go
// Parse heartbeat logs for monitoring
type DiskStats struct {
TotalSizeMB float64
FileCount int
DiskFreeMB float64
DiskStatusOK bool
}
func monitorDiskHealth(logLine string) {
if strings.Contains(logLine, "type=\"disk\"") {
stats := parseDiskHeartbeat(logLine)
if !stats.DiskStatusOK {
alert("Log disk unhealthy")
}
if stats.DiskFreeMB < 1000 {
alert("Low disk space: %.0fMB free", stats.DiskFreeMB)
}
if stats.FileCount > 100 {
alert("Too many log files: %d", stats.FileCount)
}
}
}
```
### 4. Separate Log Volumes
Use dedicated volumes for logs:
```bash
# Create dedicated log volume
mkdir -p /mnt/logs
mount /dev/sdb1 /mnt/logs
```
Then point the logger at the dedicated volume:
```go
logger.InitWithDefaults(
"directory=/mnt/logs/myapp",
"max_total_size_mb=50000", // Use most of the volume
"min_disk_free_mb=1000", // Leave 1GB free
)
```
### 5. Test Cleanup Behavior
Verify cleanup works before production:
```go
// Test configuration
func TestDiskCleanup(t *testing.T) {
logger := log.NewLogger()
logger.InitWithDefaults(
"directory=./test_logs",
"max_size_mb=1", // Small files
"max_total_size_mb=5", // Low limit
"retention_period_hrs=0.01", // 36 seconds
"retention_check_mins=0.5", // 30 seconds
)
// Generate logs to trigger cleanup
for i := 0; i < 1000; i++ {
logger.Info(strings.Repeat("x", 1000))
}
time.Sleep(45 * time.Second)
// Verify cleanup occurred
files, _ := filepath.Glob("./test_logs/*.log")
if len(files) > 5 {
t.Errorf("Cleanup failed: %d files remain", len(files))
}
}
```
---
[← Logging Guide](logging-guide.md) | [← Back to README](../README.md) | [Heartbeat Monitoring →](heartbeat-monitoring.md)
doc/examples.md Normal file
# Examples
[← Compatibility Adapters](compatibility-adapters.md) | [← Back to README](../README.md) | [Troubleshooting →](troubleshooting.md)
Sample applications demonstrating various features and use cases of the lixenwraith/log package.
## Table of Contents
- [Example Programs](#example-programs)
- [Running Examples](#running-examples)
- [Simple Example](#simple-example)
- [Stress Test](#stress-test)
- [Heartbeat Monitoring](#heartbeat-monitoring)
- [Reconfiguration](#reconfiguration)
- [Console Output](#console-output)
- [Framework Integration](#framework-integration)
## Example Programs
The `examples/` directory contains several demonstration programs:
| Example | Description | Key Features |
|---------|-------------|--------------|
| `simple` | Basic usage with config management | Configuration, basic logging |
| `stress` | High-volume stress testing | Performance testing, cleanup |
| `heartbeat` | Heartbeat monitoring demo | All heartbeat levels |
| `reconfig` | Dynamic reconfiguration | Hot reload, state management |
| `sink` | Console output configurations | stdout/stderr, dual output |
| `gnet` | gnet framework integration | Event-driven server |
| `fasthttp` | fasthttp framework integration | HTTP server logging |
## Running Examples
### Prerequisites
```bash
# Clone the repository
git clone https://github.com/lixenwraith/log
cd log
# Get dependencies
go mod download
```
### Running Individual Examples
```bash
# Simple example
go run examples/simple/main.go
# Stress test
go run examples/stress/main.go
# Heartbeat demo
go run examples/heartbeat/main.go
# View generated logs
ls -la ./logs/
```
## Simple Example
Demonstrates basic logger usage with configuration management.
### Key Features
- Configuration file creation
- Logger initialization
- Different log levels
- Structured logging
- Graceful shutdown
### Code Highlights
```go
// Initialize with external config
cfg := config.New()
cfg.Load("simple_config.toml", nil)
logger := log.NewLogger()
err := logger.Init(cfg, "logging")
// Log at different levels
logger.Debug("Debug message", "user_id", 123)
logger.Info("Application starting...")
logger.Warn("Warning", "threshold", 0.95)
logger.Error("Error occurred!", "code", 500)
// Save configuration
cfg.Save("simple_config.toml")
```
### What to Observe
- TOML configuration file generation
- Log file creation in `./logs`
- Structured output format
- Proper shutdown sequence
## Stress Test
Tests logger performance under high load.
### Key Features
- Concurrent logging from multiple workers
- Large message generation
- File rotation testing
- Retention policy testing
- Drop detection
### Configuration
```toml
[logstress]
level = -4
buffer_size = 500 # Small buffer to test drops
max_size_mb = 1 # Force frequent rotation
max_total_size_mb = 20 # Test cleanup
retention_period_hrs = 0.0028 # ~10 seconds
retention_check_mins = 0.084 # ~5 seconds
```
### What to Observe
- Log throughput (logs/second)
- File rotation behavior
- Automatic cleanup when limits exceeded
- "Logs were dropped" messages under load
- Memory and CPU usage
### Metrics to Monitor
```bash
# Watch file rotation
watch -n 1 'ls -1 ./logs/ | wc -l'
# Monitor log growth
watch -n 1 'du -sh ./logs/'
# Check for dropped logs
grep "dropped" ./logs/*.log
```
## Heartbeat Monitoring
Demonstrates all heartbeat levels and transitions.
### Test Sequence
1. Heartbeats disabled
2. PROC only (level 1)
3. PROC + DISK (level 2)
4. PROC + DISK + SYS (level 3)
5. Scale down to level 2
6. Scale down to level 1
7. Disable heartbeats
### What to Observe
```
--- Testing heartbeat level 1: PROC heartbeats only ---
2024-01-15T10:30:00Z PROC type="proc" sequence=1 uptime_hours="0.00" processed_logs=40 dropped_logs=0
--- Testing heartbeat level 2: PROC+DISK heartbeats ---
2024-01-15T10:30:05Z PROC type="proc" sequence=2 uptime_hours="0.00" processed_logs=80 dropped_logs=0
2024-01-15T10:30:05Z DISK type="disk" sequence=2 rotated_files=0 deleted_files=0 total_log_size_mb="0.12" log_file_count=1
--- Testing heartbeat level 3: PROC+DISK+SYS heartbeats ---
2024-01-15T10:30:10Z SYS type="sys" sequence=3 alloc_mb="4.23" sys_mb="12.45" num_gc=5 num_goroutine=8
```
### Use Cases
- Understanding heartbeat output
- Testing monitoring integration
- Verifying heartbeat configuration
## Reconfiguration
Tests dynamic logger reconfiguration without data loss.
### Test Scenario
```go
// Rapid reconfiguration loop
for i := 0; i < 10; i++ {
bufSize := fmt.Sprintf("buffer_size=%d", 100*(i+1))
err := logger.InitWithDefaults(bufSize)
time.Sleep(10 * time.Millisecond)
}
```
### What to Observe
- No log loss during reconfiguration
- Smooth transitions between configurations
- File handle management
- Channel recreation
### Verification
```bash
# Count lines actually written across all log files
wc -l ./logs/*.log
# Look for drop notices; ideally none appear
grep "dropped" ./logs/*.log
```
## Console Output
Demonstrates various output configurations.
### Configurations Tested
1. **File Only** (default)
```go
"directory=./temp_logs",
"name=file_only_log"
```
2. **Console Only**
```go
"enable_stdout=true",
"disable_file=true"
```
3. **Dual Output**
```go
"enable_stdout=true",
"disable_file=false"
```
4. **Stderr Output**
```go
"enable_stdout=true",
"stdout_target=stderr"
```
### What to Observe
- Console output appearing immediately
- File creation behavior
- Transition between modes
- Separation of stdout/stderr
## Framework Integration
### gnet Example
High-performance TCP echo server:
```go
type echoServer struct {
gnet.BuiltinEventEngine
}
func main() {
logger := log.NewLogger()
logger.InitWithDefaults(
"directory=/var/log/gnet",
"format=json",
)
adapter := compat.NewGnetAdapter(logger)
gnet.Run(&echoServer{}, "tcp://127.0.0.1:9000",
gnet.WithLogger(adapter),
)
}
```
**Test with:**
```bash
# Terminal 1: Run server
go run examples/gnet/main.go
# Terminal 2: Test connection
echo "Hello gnet" | nc localhost 9000
```
### fasthttp Example
HTTP server with custom level detection:
```go
func main() {
logger := log.NewLogger()
adapter := compat.NewFastHTTPAdapter(logger,
compat.WithLevelDetector(customLevelDetector),
)
server := &fasthttp.Server{
Handler: requestHandler,
Logger: adapter,
}
server.ListenAndServe(":8080")
}
```
**Test with:**
```bash
# Terminal 1: Run server
go run examples/fasthttp/main.go
# Terminal 2: Send requests
curl http://localhost:8080/
curl http://localhost:8080/test
```
## Creating Your Own Examples
### Template Structure
```go
package main
import (
"fmt"
"time"
"github.com/lixenwraith/log"
)
func main() {
// Create logger
logger := log.NewLogger()
// Initialize with your configuration
err := logger.InitWithDefaults(
"directory=./my_logs",
"level=-4",
// Add your config...
)
if err != nil {
panic(err)
}
// Always shut down properly
defer func() {
if err := logger.Shutdown(2 * time.Second); err != nil {
fmt.Printf("Shutdown error: %v\n", err)
}
}()
// Your logging logic here
logger.Info("Example started")
// Test your specific use case
testYourFeature(logger)
}
func testYourFeature(logger *log.Logger) {
// Implementation
}
```
### Testing Checklist
When creating examples, test:
- [ ] Configuration loading
- [ ] Log output (file and/or console)
- [ ] Graceful shutdown
- [ ] Error handling
- [ ] Performance characteristics
- [ ] Resource cleanup
---
[← Compatibility Adapters](compatibility-adapters.md) | [← Back to README](../README.md) | [Troubleshooting →](troubleshooting.md)
doc/getting-started.md Normal file
# Getting Started
[← Back to README](../README.md) | [Configuration →](configuration.md)
This guide will help you get started with the lixenwraith/log package, from installation through basic usage.
## Table of Contents
- [Installation](#installation)
- [Basic Usage](#basic-usage)
- [Initialization Methods](#initialization-methods)
- [Your First Logger](#your-first-logger)
- [Console Output](#console-output)
- [Next Steps](#next-steps)
## Installation
Install the logger package:
```bash
go get github.com/lixenwraith/log
```
For advanced configuration management (optional):
```bash
go get github.com/lixenwraith/config
```
## Basic Usage
The logger follows an instance-based design. You create logger instances and call methods on them:
```go
package main
import (
"github.com/lixenwraith/log"
)
func main() {
// Create a new logger instance
logger := log.NewLogger()
// Initialize with defaults
err := logger.InitWithDefaults()
if err != nil {
panic(err)
}
defer logger.Shutdown()
// Start logging!
logger.Info("Application started")
logger.Debug("Debug mode enabled", "verbose", true)
}
```
## Initialization Methods
The logger provides two initialization methods:
### 1. Simple Initialization (Recommended for most cases)
Use `InitWithDefaults` with optional string overrides:
```go
logger := log.NewLogger()
err := logger.InitWithDefaults(
"directory=/var/log/myapp",
"level=-4", // Debug level
"format=json",
)
```
### 2. Configuration-Based Initialization
For complex applications with centralized configuration:
```go
import (
"github.com/lixenwraith/config"
"github.com/lixenwraith/log"
)
// Load configuration
cfg := config.New()
cfg.Load("app.toml", os.Args[1:])
// Initialize logger with config
logger := log.NewLogger()
err := logger.Init(cfg, "logging") // Uses [logging] section in config
```
## Your First Logger
Here's a complete example demonstrating basic logging features:
```go
package main
import (
"fmt"
"time"
"github.com/lixenwraith/log"
)
func main() {
// Create logger
logger := log.NewLogger()
// Initialize with custom settings
err := logger.InitWithDefaults(
"directory=./logs", // Log directory
"name=myapp", // Log file prefix
"level=0", // Info level and above
"format=txt", // Human-readable format
"max_size_mb=10", // Rotate at 10MB
)
if err != nil {
fmt.Printf("Failed to initialize logger: %v\n", err)
return
}
// Always shut down gracefully
defer func() {
if err := logger.Shutdown(2 * time.Second); err != nil {
fmt.Printf("Logger shutdown error: %v\n", err)
}
}()
// Log at different levels
logger.Debug("This won't appear (below Info level)")
logger.Info("Application started", "pid", 12345)
logger.Warn("Resource usage high", "cpu", 85.5)
logger.Error("Failed to connect", "host", "db.example.com", "port", 5432)
// Structured logging with key-value pairs
logger.Info("User action",
"user_id", 42,
"action", "login",
"ip", "192.168.1.100",
"timestamp", time.Now(),
)
}
```
## Console Output
For development or container environments, you might want console output:
```go
// Console-only logging (no files)
logger.InitWithDefaults(
"enable_stdout=true",
"disable_file=true",
"level=-4", // Debug level
)
// Dual output (both file and console)
logger.InitWithDefaults(
"directory=/var/log/app",
"enable_stdout=true",
"stdout_target=stderr", // Keep stdout clean
)
```
## Next Steps
Now that you have a working logger:
1. **[Learn about configuration options](configuration.md)** - Customize behavior for your needs
2. **[Explore the API](api-reference.md)** - See all available methods
3. **[Understand logging best practices](logging-guide.md)** - Write better logs
4. **[Check out examples](examples.md)** - See real-world usage patterns
## Common Patterns
### Service Initialization
```go
type Service struct {
logger *log.Logger
// other fields...
}
func NewService() (*Service, error) {
logger := log.NewLogger()
if err := logger.InitWithDefaults(
"directory=/var/log/service",
"name=service",
"format=json",
); err != nil {
return nil, fmt.Errorf("logger init failed: %w", err)
}
return &Service{
logger: logger,
}, nil
}
func (s *Service) Close() error {
return s.logger.Shutdown(5 * time.Second)
}
```
### HTTP Middleware
```go
func loggingMiddleware(logger *log.Logger) func(http.Handler) http.Handler {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
start := time.Now()
// Wrap response writer to capture status
wrapped := &responseWriter{ResponseWriter: w, status: 200}
next.ServeHTTP(wrapped, r)
logger.Info("HTTP request",
"method", r.Method,
"path", r.URL.Path,
"status", wrapped.status,
"duration_ms", time.Since(start).Milliseconds(),
"remote_addr", r.RemoteAddr,
)
})
}
}
```
---
[← Back to README](../README.md) | [Configuration →](configuration.md)
doc/heartbeat-monitoring.md Normal file
# Heartbeat Monitoring
[← Disk Management](disk-management.md) | [← Back to README](../README.md) | [Performance →](performance.md)
Guide to using heartbeat messages for operational monitoring and system health tracking.
## Table of Contents
- [Overview](#overview)
- [Heartbeat Levels](#heartbeat-levels)
- [Configuration](#configuration)
- [Heartbeat Messages](#heartbeat-messages)
- [Monitoring Integration](#monitoring-integration)
- [Use Cases](#use-cases)
## Overview
Heartbeats are periodic log messages that provide operational statistics about the logger and system. They bypass normal log level filtering, ensuring visibility even when running at higher log levels.
### Key Features
- **Always Visible**: Heartbeats use special log levels that bypass filtering
- **Multi-Level Detail**: Choose from process, disk, or system statistics
- **Production Monitoring**: Track logger health without debug logs
- **Metrics Source**: Parse heartbeats for monitoring dashboards
## Heartbeat Levels
### Level 0: Disabled (Default)
No heartbeat messages are generated.
```go
logger.InitWithDefaults(
"heartbeat_level=0", // No heartbeats
)
```
### Level 1: Process Statistics (PROC)
Basic logger operation metrics:
```go
logger.InitWithDefaults(
"heartbeat_level=1",
"heartbeat_interval_s=300", // Every 5 minutes
)
```
**Output:**
```
2024-01-15T10:30:00Z PROC type="proc" sequence=1 uptime_hours="24.50" processed_logs=1847293 dropped_logs=0
```
**Fields:**
- `sequence`: Incrementing counter
- `uptime_hours`: Logger uptime
- `processed_logs`: Successfully written logs
- `dropped_logs`: Logs lost due to buffer overflow
### Level 2: Process + Disk Statistics (DISK)
Includes file and disk usage information:
```go
logger.InitWithDefaults(
"heartbeat_level=2",
"heartbeat_interval_s=300",
)
```
**Additional Output:**
```
2024-01-15T10:30:00Z DISK type="disk" sequence=1 rotated_files=12 deleted_files=5 total_log_size_mb="487.32" log_file_count=8 current_file_size_mb="23.45" disk_status_ok=true disk_free_mb="5234.67"
```
**Additional Fields:**
- `rotated_files`: Total file rotations
- `deleted_files`: Files removed by cleanup
- `total_log_size_mb`: Size of all log files
- `log_file_count`: Number of log files
- `current_file_size_mb`: Active file size
- `disk_status_ok`: Disk health status
- `disk_free_mb`: Available disk space
### Level 3: Process + Disk + System Statistics (SYS)
Includes runtime and memory metrics:
```go
logger.InitWithDefaults(
"heartbeat_level=3",
"heartbeat_interval_s=60", // Every minute for detailed monitoring
)
```
**Additional Output:**
```
2024-01-15T10:30:00Z SYS type="sys" sequence=1 alloc_mb="45.23" sys_mb="128.45" num_gc=1523 num_goroutine=42
```
**Additional Fields:**
- `alloc_mb`: Allocated memory
- `sys_mb`: System memory reserved
- `num_gc`: Garbage collection runs
- `num_goroutine`: Active goroutines
## Configuration
### Basic Configuration
```go
logger.InitWithDefaults(
"heartbeat_level=2", // Process + Disk stats
"heartbeat_interval_s=300", // Every 5 minutes
)
```
### Interval Recommendations
| Environment | Level | Interval | Rationale |
|-------------|-------|----------|-----------|
| Development | 3 | 30s | Detailed debugging info |
| Staging | 2 | 300s | Balance detail vs noise |
| Production | 1-2 | 300-600s | Minimize overhead |
| High-Load | 1 | 600s | Reduce I/O impact |
### Dynamic Adjustment
```go
// Start with basic monitoring
logger.InitWithDefaults(
"heartbeat_level=1",
"heartbeat_interval_s=600",
)
// During incident, increase detail
logger.InitWithDefaults(
"heartbeat_level=3",
"heartbeat_interval_s=60",
)
// After resolution, reduce back
logger.InitWithDefaults(
"heartbeat_level=1",
"heartbeat_interval_s=600",
)
```
## Heartbeat Messages
### JSON Format Example
With `format=json`, heartbeats are structured for easy parsing:
```json
{
"time": "2024-01-15T10:30:00.123456789Z",
"level": "PROC",
"fields": [
"type", "proc",
"sequence", 42,
"uptime_hours", "24.50",
"processed_logs", 1847293,
"dropped_logs", 0
]
}
```
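JSON heartbeats can be decoded with the standard library. A sketch, assuming the envelope shown above (note that `fields` is a flat array of alternating keys and values):

```go
package main

import "encoding/json"

// heartbeatEnvelope matches the JSON heartbeat layout shown above.
type heartbeatEnvelope struct {
	Time   string `json:"time"`
	Level  string `json:"level"`
	Fields []any  `json:"fields"`
}

// fieldMap pairs up the flat fields array into a key → value map.
// JSON numbers decode as float64.
func fieldMap(env heartbeatEnvelope) map[string]any {
	m := make(map[string]any, len(env.Fields)/2)
	for i := 0; i+1 < len(env.Fields); i += 2 {
		if k, ok := env.Fields[i].(string); ok {
			m[k] = env.Fields[i+1]
		}
	}
	return m
}

func parseHeartbeat(line []byte) (heartbeatEnvelope, error) {
	var env heartbeatEnvelope
	err := json.Unmarshal(line, &env)
	return env, err
}
```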
### Text Format Example
With `format=txt`, heartbeats are human-readable:
```
2024-01-15T10:30:00.123456789Z PROC type="proc" sequence=42 uptime_hours="24.50" processed_logs=1847293 dropped_logs=0
```
## Monitoring Integration
### Prometheus Exporter
```go
type LoggerMetrics struct {
logger *log.Logger
uptime prometheus.Gauge
	processedTotal prometheus.Gauge // heartbeats report cumulative totals, so a gauge is set directly
	droppedTotal   prometheus.Gauge
diskUsageMB prometheus.Gauge
diskFreeSpace prometheus.Gauge
fileCount prometheus.Gauge
}
func (m *LoggerMetrics) ParseHeartbeat(line string) {
if strings.Contains(line, "type=\"proc\"") {
// Extract and update process metrics
if match := regexp.MustCompile(`processed_logs=(\d+)`).FindStringSubmatch(line); match != nil {
if val, err := strconv.ParseFloat(match[1], 64); err == nil {
m.processedTotal.Set(val)
}
}
}
if strings.Contains(line, "type=\"disk\"") {
// Extract and update disk metrics
if match := regexp.MustCompile(`total_log_size_mb="([0-9.]+)"`).FindStringSubmatch(line); match != nil {
if val, err := strconv.ParseFloat(match[1], 64); err == nil {
m.diskUsageMB.Set(val)
}
}
}
}
```
### Grafana Dashboard
Create alerts based on heartbeat metrics:
```yaml
# Dropped logs alert
- alert: HighLogDropRate
expr: rate(logger_dropped_total[5m]) > 10
annotations:
summary: "High log drop rate detected"
description: "Logger dropping {{ $value }} logs/sec"
# Disk space alert
- alert: LogDiskSpaceLow
expr: logger_disk_free_mb < 1000
annotations:
summary: "Low log disk space"
description: "Only {{ $value }}MB free on log disk"
# Logger health alert
- alert: LoggerUnhealthy
expr: logger_disk_status_ok == 0
annotations:
summary: "Logger disk status unhealthy"
```
### ELK Stack Integration
Logstash filter for parsing heartbeats:
```ruby
filter {
if [message] =~ /type="(proc|disk|sys)"/ {
grok {
match => {
"message" => [
'%{TIMESTAMP_ISO8601:timestamp} %{WORD:level} type="%{WORD:heartbeat_type}" sequence=%{NUMBER:sequence:int} uptime_hours="%{NUMBER:uptime_hours:float}" processed_logs=%{NUMBER:processed_logs:int} dropped_logs=%{NUMBER:dropped_logs:int}',
'%{TIMESTAMP_ISO8601:timestamp} %{WORD:level} type="%{WORD:heartbeat_type}" sequence=%{NUMBER:sequence:int} rotated_files=%{NUMBER:rotated_files:int} deleted_files=%{NUMBER:deleted_files:int} total_log_size_mb="%{NUMBER:total_log_size_mb:float}"'
]
}
}
mutate {
add_tag => [ "heartbeat", "metrics" ]
}
}
}
```
## Use Cases
### 1. Production Health Monitoring
```go
// Production configuration
logger.InitWithDefaults(
"level=4", // Warn and Error only
"heartbeat_level=2", // But still get disk stats
"heartbeat_interval_s=300", // Every 5 minutes
)
// Monitor for:
// - Dropped logs (buffer overflow)
// - Disk space issues
// - File rotation frequency
// - Logger uptime (crash detection)
```
### 2. Performance Tuning
```go
// Detailed monitoring during load test
logger.InitWithDefaults(
"heartbeat_level=3", // All stats
"heartbeat_interval_s=10", // Frequent updates
)
// Track:
// - Memory usage trends
// - Goroutine leaks
// - GC frequency
// - Log throughput
```
### 3. Capacity Planning
```go
// Long-term trending
logger.InitWithDefaults(
"heartbeat_level=2",
"heartbeat_interval_s=3600", // Hourly
)
// Analyze:
// - Log growth rate
// - Rotation frequency
// - Disk usage trends
// - Seasonal patterns
```
### 4. Debugging Logger Issues
```go
// When investigating logger problems
logger.InitWithDefaults(
"level=-4", // Debug everything
"heartbeat_level=3", // All heartbeats
"heartbeat_interval_s=5", // Very frequent
"enable_stdout=true", // Console output
)
```
### 5. Alerting Script
```bash
#!/bin/bash
# Monitor heartbeats for issues
tail -f /var/log/myapp/*.log | while read line; do
if [[ $line =~ type=\"proc\" ]]; then
if [[ $line =~ dropped_logs=([0-9]+) ]] && [[ ${BASH_REMATCH[1]} -gt 0 ]]; then
alert "Logs being dropped: ${BASH_REMATCH[1]}"
fi
fi
if [[ $line =~ type=\"disk\" ]]; then
if [[ $line =~ disk_status_ok=false ]]; then
alert "Logger disk unhealthy!"
fi
if [[ $line =~ disk_free_mb=\"([0-9.]+)\" ]]; then
free_mb=${BASH_REMATCH[1]}
if (( $(echo "$free_mb < 500" | bc -l) )); then
alert "Low disk space: ${free_mb}MB"
fi
fi
fi
done
```
---
[← Disk Management](disk-management.md) | [← Back to README](../README.md) | [Performance →](performance.md)
doc/logging-guide.md Normal file
# Logging Guide
[← API Reference](api-reference.md) | [← Back to README](../README.md) | [Disk Management →](disk-management.md)
Best practices and patterns for effective logging with the lixenwraith/log package.
## Table of Contents
- [Log Levels](#log-levels)
- [Structured Logging](#structured-logging)
- [Output Formats](#output-formats)
- [Function Tracing](#function-tracing)
- [Error Handling](#error-handling)
- [Performance Considerations](#performance-considerations)
- [Logging Patterns](#logging-patterns)
## Log Levels
### Understanding Log Levels
The logger uses numeric levels for efficient filtering:
| Level | Name | Value | Use Case |
|-------|------|-------|----------|
| Debug | `LevelDebug` | -4 | Detailed information for debugging |
| Info | `LevelInfo` | 0 | General informational messages |
| Warn | `LevelWarn` | 4 | Warning conditions |
| Error | `LevelError` | 8 | Error conditions |
### Level Selection Guidelines
```go
// Debug: Detailed execution flow
logger.Debug("Cache lookup", "key", cacheKey, "found", found)
// Info: Important business events
logger.Info("Order processed", "order_id", orderID, "amount", 99.99)
// Warn: Recoverable issues
logger.Warn("Retry attempt", "service", "payment", "attempt", 3)
// Error: Failures requiring attention
logger.Error("Database query failed", "query", query, "error", err)
```
### Setting Log Level
```go
// Development: See everything
logger.InitWithDefaults("level=-4") // Debug and above
// Production: Reduce noise
logger.InitWithDefaults("level=0") // Info and above
// Critical systems: Errors only
logger.InitWithDefaults("level=8") // Error only
```
## Structured Logging
### Key-Value Pairs
Always use structured key-value pairs for machine-parseable logs:
```go
// Good: Structured data
logger.Info("User login",
"user_id", user.ID,
"email", user.Email,
"ip", request.RemoteAddr,
"timestamp", time.Now(),
)
// Avoid: Unstructured strings
logger.Info(fmt.Sprintf("User %s logged in from %s", user.Email, request.RemoteAddr))
```
### Consistent Field Names
Use consistent field names across your application:
```go
// Define common fields
const (
FieldUserID = "user_id"
FieldRequestID = "request_id"
FieldDuration = "duration_ms"
FieldError = "error"
)
// Use consistently
logger.Info("API call",
FieldRequestID, reqID,
FieldUserID, userID,
FieldDuration, elapsed.Milliseconds(),
)
```
### Context Propagation
```go
type contextKey string
const requestIDKey contextKey = "request_id"
func logWithContext(ctx context.Context, logger *log.Logger, level string, msg string, fields ...any) {
// Extract common fields from context
if reqID := ctx.Value(requestIDKey); reqID != nil {
fields = append([]any{"request_id", reqID}, fields...)
}
switch level {
case "info":
logger.Info(msg, fields...)
case "error":
logger.Error(msg, fields...)
}
}
```
## Output Formats
### Text Format (Human-Readable)
Default format for development and debugging:
```
2024-01-15T10:30:45.123456789Z INFO User login user_id=42 email="user@example.com" ip="192.168.1.100"
2024-01-15T10:30:45.234567890Z WARN Rate limit approaching user_id=42 requests=95 limit=100
```
Configuration:
```go
logger.InitWithDefaults(
"format=txt",
"show_timestamp=true",
"show_level=true",
)
```
### JSON Format (Machine-Parseable)
Ideal for log aggregation and analysis:
```json
{"time":"2024-01-15T10:30:45.123456789Z","level":"INFO","fields":["User login","user_id",42,"email","user@example.com","ip","192.168.1.100"]}
{"time":"2024-01-15T10:30:45.234567890Z","level":"WARN","fields":["Rate limit approaching","user_id",42,"requests",95,"limit",100]}
```
Configuration:
```go
logger.InitWithDefaults(
"format=json",
"show_timestamp=true",
"show_level=true",
)
```
## Function Tracing
### Using Trace Methods
Include call stack information for debugging:
```go
func processPayment(amount float64) error {
logger.InfoTrace(1, "Processing payment", "amount", amount)
if err := validateAmount(amount); err != nil {
logger.ErrorTrace(3, "Payment validation failed",
"amount", amount,
"error", err,
)
return err
}
return nil
}
```
Output includes function names:
```
2024-01-15T10:30:45.123456789Z INFO processPayment Processing payment amount=99.99
2024-01-15T10:30:45.234567890Z ERROR validateAmount -> processPayment -> main Payment validation failed amount=-10 error="negative amount"
```
### Trace Depth Guidelines
- `1`: Current function only
- `2-3`: Typical for error paths
- `4-5`: Deep debugging
- `10`: Maximum supported depth
## Error Handling
### Logging Errors
Always include error details in structured fields:
```go
if err := db.Query(sql); err != nil {
logger.Error("Database query failed",
"query", sql,
"error", err.Error(), // Convert to string
"error_type", fmt.Sprintf("%T", err),
)
return fmt.Errorf("query failed: %w", err)
}
```
### Error Context Pattern
```go
func (s *Service) ProcessOrder(orderID string) error {
logger := s.logger // Use service logger
logger.Info("Processing order", "order_id", orderID)
order, err := s.db.GetOrder(orderID)
if err != nil {
logger.Error("Failed to fetch order",
"order_id", orderID,
"error", err,
"step", "fetch",
)
return fmt.Errorf("fetch order %s: %w", orderID, err)
}
if err := s.validateOrder(order); err != nil {
logger.Warn("Order validation failed",
"order_id", orderID,
"error", err,
"step", "validate",
)
return fmt.Errorf("validate order %s: %w", orderID, err)
}
// ... more processing
logger.Info("Order processed successfully", "order_id", orderID)
return nil
}
```
## Performance Considerations
### Minimize Allocations
```go
// Avoid: String concatenation
logger.Info("User " + user.Name + " logged in")
// Good: Structured fields
logger.Info("User logged in", "username", user.Name)
// Avoid: Sprintf in hot path
logger.Debug(fmt.Sprintf("Processing item %d of %d", i, total))
// Good: Direct fields
logger.Debug("Processing item", "current", i, "total", total)
```
### Conditional Expensive Operations
```go
// Only compute expensive values if they'll be logged
if logger.IsEnabled(log.LevelDebug) {
stats := computeExpensiveStats()
logger.Debug("Detailed statistics", "stats", stats)
}
```
### Batch Related Logs
```go
// Instead of logging each item
for _, item := range items {
logger.Debug("Processing", "item", item) // Noisy
}
// Log summary information
logger.Info("Batch processing",
"count", len(items),
"first_id", items[0].ID,
"last_id", items[len(items)-1].ID,
)
```
## Logging Patterns
### Request Lifecycle
```go
func handleRequest(w http.ResponseWriter, r *http.Request) {
start := time.Now()
reqID := generateRequestID()
logger.Info("Request started",
"request_id", reqID,
"method", r.Method,
"path", r.URL.Path,
"remote_addr", r.RemoteAddr,
)
defer func() {
duration := time.Since(start)
logger.Info("Request completed",
"request_id", reqID,
"duration_ms", duration.Milliseconds(),
)
}()
// Handle request...
}
```
### Background Job Pattern
```go
func (w *Worker) processJob(job Job) {
logger := w.logger
logger.Info("Job started",
"job_id", job.ID,
"type", job.Type,
"scheduled_at", job.ScheduledAt,
)
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
defer cancel()
if err := w.execute(ctx, job); err != nil {
logger.Error("Job failed",
"job_id", job.ID,
"error", err,
"duration_ms", time.Since(job.StartedAt).Milliseconds(),
)
return
}
logger.Info("Job completed",
"job_id", job.ID,
"duration_ms", time.Since(job.StartedAt).Milliseconds(),
)
}
```
### Audit Logging
```go
func (s *Service) auditAction(userID string, action string, resource string, result string) {
s.auditLogger.Info("Audit event",
"timestamp", time.Now().UTC(),
"user_id", userID,
"action", action,
"resource", resource,
"result", result,
"ip", getCurrentIP(),
"session_id", getSessionID(),
)
}
// Usage
s.auditAction(user.ID, "DELETE", "post:123", "success")
```
### Metrics Logging
```go
func (m *MetricsCollector) logMetrics() {
ticker := time.NewTicker(1 * time.Minute)
defer ticker.Stop()
for range ticker.C {
stats := m.collect()
m.logger.Info("Metrics snapshot",
"requests_per_sec", stats.RequestRate,
"error_rate", stats.ErrorRate,
"p50_latency_ms", stats.P50Latency,
"p99_latency_ms", stats.P99Latency,
"active_connections", stats.ActiveConns,
"memory_mb", stats.MemoryMB,
)
}
}
```
---
[← API Reference](api-reference.md) | [← Back to README](../README.md) | [Disk Management →](disk-management.md)
doc/performance.md Normal file
# Performance Guide
[← Heartbeat Monitoring](heartbeat-monitoring.md) | [← Back to README](../README.md) | [Compatibility Adapters →](compatibility-adapters.md)
Architecture overview and performance optimization strategies for the lixenwraith/log package.
## Table of Contents
- [Architecture Overview](#architecture-overview)
- [Performance Characteristics](#performance-characteristics)
- [Optimization Strategies](#optimization-strategies)
- [Benchmarking](#benchmarking)
- [Troubleshooting Performance](#troubleshooting-performance)
## Architecture Overview
### Lock-Free Design
The logger uses a lock-free architecture for maximum performance:
```
┌─────────────┐ Atomic Checks ┌──────────────┐
│ Logger │ ──────────────────────→│ State Check │
│ Methods │ │ (No Locks) │
└─────────────┘ └──────────────┘
│ │
│ Non-blocking │ Pass
↓ Channel Send ↓
┌─────────────┐ ┌──────────────┐
│ Buffered │←───────────────────────│ Format Data │
│ Channel │ │ (Stack Alloc)│
└─────────────┘ └──────────────┘
│ Single Consumer
↓ Goroutine
┌─────────────┐ Batch Write ┌──────────────┐
│ Processor │ ──────────────────────→│ File System │
│ Goroutine │ │ (OS) │
└─────────────┘ └──────────────┘
```
### Key Components
1. **Atomic State Management**: No mutexes in hot path
2. **Buffered Channel**: Decouples producers from I/O
3. **Single Processor**: Eliminates write contention
4. **Reusable Serializer**: Minimizes allocations
## Performance Characteristics
### Throughput
Typical performance on modern hardware:
| Scenario | Logs/Second | Latency (p99) |
|----------|-------------|---------------|
| File only | 500,000+ | < 1μs |
| File + Console | 100,000+ | < 5μs |
| JSON format | 400,000+ | < 2μs |
| With rotation | 450,000+ | < 2μs |
### Memory Usage
- **Per Logger**: ~10KB base overhead
- **Per Log Entry**: 0 allocations (reused buffer)
- **Channel Buffer**: `buffer_size * 24 bytes`
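A back-of-the-envelope check of channel memory for a given configuration, using the ~24 bytes-per-entry figure above:

```go
package main

// channelBytes estimates channel buffer memory from the
// ~24 bytes-per-entry figure quoted in the memory usage list.
func channelBytes(bufferSize int) int {
	return bufferSize * 24
}
```

With the default `buffer_size=1024` this works out to about 24 KB; an aggressive 8192-slot buffer reserves roughly 192 KB.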
### CPU Impact
- **Logging Thread**: < 0.1% CPU per 100k logs/sec
- **Processor Thread**: 1-5% CPU depending on I/O
## Optimization Strategies
### 1. Buffer Size Tuning
Choose buffer size based on burst patterns:
```go
// Low volume, consistent rate
logger.InitWithDefaults("buffer_size=256")
// Medium volume with bursts
logger.InitWithDefaults("buffer_size=1024") // Default
// High volume or large bursts
logger.InitWithDefaults("buffer_size=4096")
// Extreme bursts (monitor for drops)
logger.InitWithDefaults(
	"buffer_size=8192",
	"heartbeat_level=1", // Monitor dropped logs
)
```
### 2. Flush Interval Optimization
Balance latency vs throughput:
```go
// Low latency (more syscalls)
logger.InitWithDefaults("flush_interval_ms=10")
// Balanced (default)
logger.InitWithDefaults("flush_interval_ms=100")
// High throughput (batch writes)
logger.InitWithDefaults(
	"flush_interval_ms=1000",
	"enable_periodic_sync=false",
)
```
### 3. Format Selection
Choose format based on needs:
```go
// Maximum performance
logger.InitWithDefaults(
	"format=txt",
	"show_timestamp=false", // Skip time formatting
	"show_level=false",     // Skip level string
)
// Balanced features/performance
logger.InitWithDefaults("format=txt") // Default
// Structured but slower
logger.InitWithDefaults("format=json")
```
### 4. Disk I/O Optimization
Reduce disk operations:
```go
// Minimize disk checks
logger.InitWithDefaults(
	"disk_check_interval_ms=30000",   // 30 seconds
	"enable_adaptive_interval=false", // Fixed interval
	"enable_periodic_sync=false",     // No periodic sync
)
// Large files to reduce rotations
logger.InitWithDefaults(
	"max_size_mb=1000", // 1GB files
)
// Disable unnecessary features
logger.InitWithDefaults(
	"retention_period_hrs=0", // No retention checks
	"heartbeat_level=0",      // No heartbeats
)
```
### 5. Console Output Optimization
For development with console output:
```go
// Faster console output
logger.InitWithDefaults(
	"enable_stdout=true",
	"stdout_target=stdout", // Slightly faster than stderr
	"disable_file=true",    // Skip file I/O entirely
)
```
## Benchmarking
### Basic Benchmark
```go
func BenchmarkLogger(b *testing.B) {
	logger := log.NewLogger()
	logger.InitWithDefaults(
		"directory=./bench_logs",
		"buffer_size=4096",
		"flush_interval_ms=1000",
	)
	defer logger.Shutdown()

	b.ResetTimer()
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			// Go does not expose goroutine IDs, so log stable fields only
			logger.Info("Benchmark log",
				"iteration", 1,
				"timestamp", time.Now(),
			)
		}
	})
}
```
### Throughput Test
```go
func TestThroughput(t *testing.T) {
	logger := log.NewLogger()
	logger.InitWithDefaults("buffer_size=4096")
	defer logger.Shutdown()

	start := time.Now()
	count := 1000000
	for i := 0; i < count; i++ {
		logger.Info("msg", "seq", i)
	}
	logger.Flush(5 * time.Second)

	duration := time.Since(start)
	rate := float64(count) / duration.Seconds()
	t.Logf("Throughput: %.0f logs/sec", rate)
}
```
### Memory Profile
```go
func profileMemory() {
	logger := log.NewLogger()
	logger.InitWithDefaults()
	defer logger.Shutdown()

	// Force GC for baseline
	runtime.GC()
	var m1 runtime.MemStats
	runtime.ReadMemStats(&m1)

	// Log heavily
	for i := 0; i < 100000; i++ {
		logger.Info("Memory test", "index", i)
	}

	// Measure again; cast to int64 because Alloc can shrink after GC
	runtime.GC()
	var m2 runtime.MemStats
	runtime.ReadMemStats(&m2)
	fmt.Printf("Alloc delta: %d bytes\n", int64(m2.Alloc)-int64(m1.Alloc))
	fmt.Printf("Total alloc: %d bytes\n", m2.TotalAlloc-m1.TotalAlloc)
}
```
## Troubleshooting Performance
### 1. Detecting Dropped Logs
Monitor heartbeats for drops:
```go
logger.InitWithDefaults(
	"heartbeat_level=1",
	"heartbeat_interval_s=60",
)
// In logs: dropped_logs=1523
```
**Solutions:**
- Increase `buffer_size`
- Reduce log volume
- Optimize log formatting
### 2. High CPU Usage
Check processor goroutine:
```go
// Enable system stats
logger.InitWithDefaults(
	"heartbeat_level=3",
	"heartbeat_interval_s=10",
)
// Monitor: num_goroutine count
// Monitor: CPU usage of process
```
**Solutions:**
- Increase `flush_interval_ms`
- Disable `enable_periodic_sync`
- Reduce `heartbeat_level`
### 3. Memory Growth
```go
// Add memory monitoring
go func() {
	ticker := time.NewTicker(1 * time.Minute)
	defer ticker.Stop()
	for range ticker.C {
		var m runtime.MemStats
		runtime.ReadMemStats(&m)
		logger.Info("Memory stats",
			"alloc_mb", m.Alloc/1024/1024,
			"sys_mb", m.Sys/1024/1024,
			"num_gc", m.NumGC,
		)
	}
}()
```
**Solutions:**
- Check for logger reference leaks
- Verify `buffer_size` is reasonable
- Look for infinite log loops
### 4. Slow Disk I/O
Identify I/O bottlenecks:
```bash
# Monitor disk I/O
iostat -x 1
# Check write latency
ioping -c 10 /var/log
```
**Solutions:**
- Use faster storage (SSD)
- Increase `flush_interval_ms`
- Enable write caching
- Use separate log volume
### 5. Lock Contention
The logger is designed to avoid locks, but check for:
```go
// Profile mutex contention
import _ "net/http/pprof"
go func() {
	runtime.SetMutexProfileFraction(1)
	http.ListenAndServe("localhost:6060", nil)
}()
// Check: go tool pprof http://localhost:6060/debug/pprof/mutex
```
### Performance Checklist
Before deploying:
- [ ] Appropriate `buffer_size` for load
- [ ] Reasonable `flush_interval_ms`
- [ ] Correct `format` for use case
- [ ] Heartbeat monitoring enabled
- [ ] Disk space properly configured
- [ ] Retention policies set
- [ ] Load tested with expected volume
- [ ] Drop monitoring in place
- [ ] CPU/memory baseline established
---
[← Heartbeat Monitoring](heartbeat-monitoring.md) | [← Back to README](../README.md) | [Compatibility Adapters →](compatibility-adapters.md)

# Troubleshooting
[← Examples](examples.md) | [← Back to README](../README.md)
Common issues and solutions when using the lixenwraith/log package.
## Table of Contents
- [Common Issues](#common-issues)
- [Diagnostic Tools](#diagnostic-tools)
- [Error Messages](#error-messages)
- [Performance Issues](#performance-issues)
- [Platform-Specific Issues](#platform-specific-issues)
- [FAQ](#faq)
## Common Issues
### Logger Not Writing to File
**Symptoms:**
- No log files created
- Empty log directory
- No error messages
**Solutions:**
1. **Check initialization**
```go
logger := log.NewLogger()
err := logger.InitWithDefaults()
if err != nil {
	fmt.Printf("Init failed: %v\n", err)
}
```
2. **Verify directory permissions**
```bash
# Check directory exists and is writable
ls -la /var/log/myapp
touch /var/log/myapp/test.log
```
3. **Check if file output is disabled**
```go
// Ensure file output is enabled
logger.InitWithDefaults(
	"disable_file=false", // Default, but be explicit
	"directory=/var/log/myapp",
)
```
4. **Enable console output for debugging**
```go
logger.InitWithDefaults(
	"enable_stdout=true",
	"level=-4", // Debug level
)
```
### Logs Being Dropped
**Symptoms:**
- "Logs were dropped" messages
- Missing log entries
- `dropped_logs` count in heartbeats
**Solutions:**
1. **Increase buffer size**
```go
logger.InitWithDefaults(
	"buffer_size=4096", // Increase from default 1024
)
```
2. **Monitor with heartbeats**
```go
logger.InitWithDefaults(
	"heartbeat_level=1",
	"heartbeat_interval_s=60",
)
// Watch for: dropped_logs=N
```
3. **Reduce log volume**
```go
// Increase log level
logger.InitWithDefaults("level=0") // Info and above only
// Or batch operations
logger.Info("Batch processed", "count", 1000) // Not 1000 individual logs
```
4. **Optimize flush interval**
```go
logger.InitWithDefaults(
	"flush_interval_ms=500", // Less frequent flushes
)
```
### Disk Full Errors
**Symptoms:**
- "Log directory full or disk space low" messages
- `disk_status_ok=false` in heartbeats
- No new logs being written
**Solutions:**
1. **Configure automatic cleanup**
```go
logger.InitWithDefaults(
	"max_total_size_mb=1000",  // 1GB total limit
	"min_disk_free_mb=500",    // 500MB free required
	"retention_period_hrs=24", // Keep only 24 hours
)
```
2. **Manual cleanup**
```bash
# Find and remove old logs
find /var/log/myapp -name "*.log" -mtime +7 -delete
# Or keep only recent files
ls -t /var/log/myapp/*.log | tail -n +11 | xargs rm
```
3. **Monitor disk usage**
```bash
# Set up monitoring
df -h /var/log
du -sh /var/log/myapp
```
### Logger Initialization Failures
**Symptoms:**
- Init returns error
- "logger previously failed to initialize" errors
- Application won't start
**Common Errors and Solutions:**
1. **Invalid configuration**
```go
// Error: "invalid format: 'xml' (use txt or json)"
logger.InitWithDefaults("format=json") // Use valid format
// Error: "buffer_size must be positive"
logger.InitWithDefaults("buffer_size=1024") // Use positive value
```
2. **Directory creation failure**
```go
// Error: "failed to create log directory: permission denied"
// Solution: Check permissions or use accessible directory
logger.InitWithDefaults("directory=/tmp/logs")
```
3. **Configuration conflicts**
```go
// Error: "min_check_interval > max_check_interval"
logger.InitWithDefaults(
	"min_check_interval_ms=100",
	"max_check_interval_ms=60000", // Max must be >= min
)
```
## Diagnostic Tools
### Enable Debug Logging
```go
// Temporary debug configuration
logger.InitWithDefaults(
	"level=-4",                // Debug everything
	"enable_stdout=true",      // See logs immediately
	"trace_depth=3",           // Include call stacks
	"heartbeat_level=3",       // All statistics
	"heartbeat_interval_s=10", // Frequent updates
)
```
### Check Logger State
```go
// Add diagnostic helper
func diagnoseLogger(logger *log.Logger) {
	// Try logging at all levels
	logger.Debug("Debug test")
	logger.Info("Info test")
	logger.Warn("Warn test")
	logger.Error("Error test")

	// Force flush
	if err := logger.Flush(1 * time.Second); err != nil {
		fmt.Printf("Flush failed: %v\n", err)
	}

	// Check for output
	time.Sleep(100 * time.Millisecond)
}
```
### Monitor Resource Usage
```go
// Add resource monitoring
func monitorResources(logger *log.Logger) {
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for range ticker.C {
		var m runtime.MemStats
		runtime.ReadMemStats(&m)
		logger.Info("Resource usage",
			"goroutines", runtime.NumGoroutine(),
			"memory_mb", m.Alloc/1024/1024,
			"gc_runs", m.NumGC,
		)
	}
}
```
## Error Messages
### Configuration Errors
| Error | Cause | Solution |
|-------|-------|----------|
| `log name cannot be empty` | Empty name parameter | Provide valid name or use default |
| `invalid format: 'X' (use txt or json)` | Invalid format value | Use "txt" or "json" |
| `extension should not start with dot` | Extension has leading dot | Use "log" not ".log" |
| `buffer_size must be positive` | Zero or negative buffer | Use positive value (default: 1024) |
| `trace_depth must be between 0 and 10` | Invalid trace depth | Use 0-10 range |
### Runtime Errors
| Error | Cause | Solution |
|-------|-------|----------|
| `logger not initialized or already shut down` | Using closed logger | Check initialization order |
| `timeout waiting for flush confirmation` | Flush timeout | Increase timeout or check I/O |
| `failed to create log file: permission denied` | Directory permissions | Check directory access rights |
| `failed to write to log file: no space left` | Disk full | Free space or configure cleanup |
### Recovery Errors
| Error | Cause | Solution |
|-------|-------|----------|
| `no old logs available to delete` | Can't free space | Manual intervention needed |
| `could not free enough space` | Cleanup insufficient | Reduce limits or add storage |
| `disk check failed` | Can't check disk space | Check filesystem health |
## Performance Issues
### High CPU Usage
**Diagnosis:**
```bash
# Check process CPU
top -p $(pgrep yourapp)
# Profile application
go tool pprof http://localhost:6060/debug/pprof/profile
```
**Solutions:**
1. Increase flush interval
2. Disable periodic sync
3. Reduce heartbeat level
4. Use text format instead of JSON
### Memory Growth
**Diagnosis:**
```go
// Add to application
import _ "net/http/pprof"
go http.ListenAndServe("localhost:6060", nil)
// Check heap
go tool pprof http://localhost:6060/debug/pprof/heap
```
**Solutions:**
1. Check for logger reference leaks
2. Verify reasonable buffer size
3. Look for logging loops
### Slow Disk I/O
**Diagnosis:**
```bash
# Check disk latency
iostat -x 1
ioping -c 10 /var/log
```
**Solutions:**
1. Use SSD storage
2. Increase flush interval
3. Disable periodic sync
4. Use separate log volume
## Platform-Specific Issues
### Linux
**File Handle Limits:**
```bash
# Check limits
ulimit -n
# Increase if needed
ulimit -n 65536
```
**SELinux Issues:**
```bash
# Check SELinux denials
ausearch -m avc -ts recent
# Set context for log directory
semanage fcontext -a -t var_log_t "/var/log/myapp(/.*)?"
restorecon -R /var/log/myapp
```
### FreeBSD
**Directory Permissions:**
```bash
# Ensure log directory ownership
chown appuser:appgroup /var/log/myapp
chmod 755 /var/log/myapp
```
**Jails Configuration:**
```bash
# Allow log directory access in jail
jail -m jid=1 allow.mount.devfs=1 path=/var/log/myapp
```
### Windows
**Path Format:**
```go
// Use proper Windows paths
logger.InitWithDefaults(
	"directory=C:\\Logs\\MyApp", // Escaped backslashes
	// or
	"directory=C:/Logs/MyApp", // Forward slashes work too
)
```
**Permissions:**
- Run as Administrator for system directories
- Use user-writable locations like `%APPDATA%`
## FAQ
### Q: Can I use the logger before initialization?
Calls made before `Init` are safe but silently dropped, so always initialize first:
```go
logger := log.NewLogger()
logger.InitWithDefaults() // Must call before logging
logger.Info("Now safe to log")
```
### Q: How do I rotate logs manually?
The logger handles rotation automatically. To force rotation:
```go
// Set small size limit temporarily
logger.InitWithDefaults("max_size_mb=1") // Smallest practical limit
logger.Info("This will trigger rotation")
```
### Q: Can I change log directory at runtime?
Yes, through reconfiguration:
```go
// Change directory
logger.InitWithDefaults("directory=/new/path")
```
### Q: How do I completely disable logging?
Several options:
```go
// Option 1: Disable file output, no console
logger.InitWithDefaults(
	"disable_file=true",
	"enable_stdout=false",
)
// Option 2: Set very high log level
logger.InitWithDefaults("level=100") // Nothing will log
// Option 3: Don't initialize (logs are dropped)
logger := log.NewLogger() // Don't call Init
```
### Q: Why are my logs not appearing immediately?
Logs are buffered for performance:
```go
// For immediate output
logger.InitWithDefaults(
	"flush_interval_ms=10", // Quick flushes
	"enable_stdout=true",   // Also to console
)
// Or force flush
logger.Flush(1 * time.Second)
```
### Q: Can multiple processes write to the same log file?
No, each process should use its own log file:
```go
// Include process ID in name
logger.InitWithDefaults(
	fmt.Sprintf("name=myapp_%d", os.Getpid()),
)
```
### Q: How do I parse JSON logs?
Use any JSON parser:
```go
type LogEntry struct {
	Time   string        `json:"time"`
	Level  string        `json:"level"`
	Fields []interface{} `json:"fields"`
}

// Parse one line of output
var entry LogEntry
if err := json.Unmarshal([]byte(logLine), &entry); err != nil {
	fmt.Printf("parse failed: %v\n", err)
}
```
### Getting Help
If you encounter issues not covered here:
1. Check the [examples](examples.md) for working code
2. Enable debug logging and heartbeats
3. Review error messages carefully
4. Check system logs for permission/disk issues
5. File an issue with:
- Go version
- OS/Platform
- Minimal reproduction code
- Error messages
- Heartbeat output if available
---
[← Examples](examples.md) | [← Back to README](../README.md)