Master Log Levels in Gleam: Complete Learning Path
Log levels are a foundational concept in software development, providing a systematic way to categorize and filter application messages. Mastering them in Gleam allows you to build robust, observable, and easily debuggable systems, transforming chaotic console output into a powerful diagnostic tool for any environment.
Imagine launching your new Gleam application. It works perfectly on your machine, but in production, a subtle bug appears. You check the logs, but they're a massive, undifferentiated wall of text—startup messages, user actions, and background tasks all jumbled together. Finding the one critical error message is like finding a needle in a digital haystack. This frustrating, time-consuming scenario is precisely the problem that a disciplined logging strategy solves.
This guide will take you from zero to hero in understanding and implementing log levels in Gleam. We'll explore not just the "what," but the critical "why" and "how." You will learn to build a simple, effective logger from scratch using Gleam's powerful type system, turning your application's internal monologue into a clear, structured, and actionable narrative.
What Exactly Are Log Levels?
At their core, log levels are a mechanism for assigning a severity or importance to a log message. Instead of every message being treated equally, each one is tagged with a label—such as DEBUG, INFO, WARN, or ERROR—that indicates its context and urgency. This simple act of categorization is incredibly powerful.
Think of it like a military communication system. A message about daily troop movements is important (INFO), but a message about an imminent threat is critical (ERROR). By categorizing messages, commanders can filter out the noise and focus on what requires immediate attention. A logging framework does the same for your application.
Defining Severity and Category
The primary purpose of a log level is to create a hierarchy of importance. This hierarchy allows you to configure your application to only show messages above a certain severity. For example, in a development environment, you might want to see everything, including granular DEBUG messages. In production, however, you likely only care about INFO level messages and above to reduce noise and performance overhead.
The Standard Hierarchy of Log Levels
While you can define any levels you want, a standard, widely-accepted hierarchy has emerged in the industry. Adhering to this standard makes your logs universally understandable to developers and operations teams. The levels are typically ordered from least severe to most severe:
- DEBUG: The most verbose level. Used for fine-grained information that is only useful for developers during debugging. This includes variable states, function entry/exit points, or detailed diagnostic information.
- INFO: General information about the application's lifecycle. These messages track the normal flow of the application, such as service startup, a user logging in, or a background job completing successfully.
- WARN (Warning): Indicates a potential problem or an unexpected event that is not (yet) an error. The application can continue running, but the event should be noted. Examples include using a deprecated API, a configuration issue, or low disk space.
- ERROR: A serious issue has occurred that prevented a specific operation from completing, but the application as a whole is still functional. Examples include a failed database query, a network connection timeout, or an inability to access a required file.
- FATAL (or CRITICAL): The most severe level. This indicates a critical error that is about to cause the application to terminate. This level is reserved for unrecoverable situations where the only option is to shut down.
Why Log Levels Are a Non-Negotiable Skill for Gleam Developers
In a language like Gleam, which emphasizes explicitness and type safety, it's easy to think that extensive logging is less necessary. The compiler catches many bugs before they ever run. However, runtime logic errors, external system failures, and unexpected user behavior are inevitable. This is where a robust logging strategy becomes your most valuable diagnostic tool.
Taming the Noise: From Chaos to Clarity
Without log levels, every piece of diagnostic output from your application has the same weight. The message "Server started on port 8000" is visually indistinguishable from "FATAL: Database connection lost". By assigning levels, you can instantly filter and color-code your logs, allowing human eyes and automated systems to spot critical issues immediately.
Environment-Specific Verbosity
The information you need from your application varies dramatically between environments. A developer needs to see everything, while a Site Reliability Engineer (SRE) monitoring the production environment needs to see only what's actionable.
- Development: Set the log level to DEBUG to get maximum insight into the application's internal state while you're building and testing features.
- Staging/QA: Set the level to INFO or DEBUG to validate the application's behavior during integration testing.
- Production: Set the level to INFO or WARN. This keeps the logs concise, focusing on major application events and potential problems, while significantly reducing the performance and storage costs associated with logging. If an issue arises, you can dynamically change the log level to DEBUG for a short period to gather more data without redeploying.
Enabling Proactive Monitoring and Alerting
Modern infrastructure relies on automated monitoring. Log levels are the hooks that these systems use. You can configure your observability platform (like Datadog, Grafana Loki, or Sentry) to:
- Create a dashboard that visualizes the rate of WARN messages.
- Trigger a PagerDuty alert whenever an ERROR message is logged.
- Send a high-priority notification to the on-call engineer if a FATAL message appears.
Without levels, every log message is just a string, making it nearly impossible to build reliable automation.
How to Implement Log Levels in Gleam from Scratch
Gleam's core philosophy encourages building clear, explicit systems. While the ecosystem for third-party logging libraries is still growing, Gleam's powerful type system makes it straightforward and educational to build our own simple, type-safe logger. This approach gives you full control and a deep understanding of the mechanics.
The Gleam Way: Leveraging Custom Types
First, we'll model the log levels themselves using a custom type. This is far superior to using strings, as it allows the Gleam compiler to catch typos and ensure we only use valid levels.
We'll also add a `to_int` function and a `compare` function that returns an `order.Order` from the `gleam/order` module. Being able to compare two levels is the key to filtering messages.
// src/my_app/log_level.gleam
import gleam/int
import gleam/order

pub type LogLevel {
  Debug
  Info
  Warn
  Error
  Fatal
}

pub fn to_int(level: LogLevel) -> Int {
  case level {
    Debug -> 0
    Info -> 1
    Warn -> 2
    Error -> 3
    Fatal -> 4
  }
}

pub fn compare(a: LogLevel, b: LogLevel) -> order.Order {
  // int.compare returns an order.Order: Lt, Eq, or Gt
  int.compare(to_int(a), to_int(b))
}
By defining an explicit integer value for each level and a `compare` function, we can now programmatically check if one level is "greater than or equal to" another (e.g., is `Warn` severe enough to be shown when the minimum level is `Info`?).
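For example, with these definitions in place, the filtering check becomes a one-line pattern match. This small sketch (the `main` function here exists only for demonstration) shows how `compare` answers the "is this severe enough?" question:

```gleam
import gleam/io
import gleam/order
import my_app/log_level.{Debug, Info, Warn, compare}

pub fn main() {
  // Warn is more severe than Info, so it passes an Info threshold
  case compare(Warn, Info) {
    order.Lt -> io.println("Warn would be filtered out")
    _ -> io.println("Warn would be logged at minimum level Info")
  }
  // Debug is less severe than Info, so it would be filtered out
  case compare(Debug, Info) {
    order.Lt -> io.println("Debug would be filtered at minimum level Info")
    _ -> io.println("Debug would be logged")
  }
}
```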
Building a Simple, Configurable Logger
Next, let's create a core `log` function. This function will be the heart of our logger. It will take the minimum configured log level, the level of the message we want to log, and the message itself. It will only print the message if its level is at or above the minimum configured level.
// src/my_app/logger.gleam
import gleam/io
import gleam/order
import my_app/log_level.{type LogLevel, compare}

pub fn log(
  minimum_level: LogLevel,
  message_level: LogLevel,
  message: String,
) -> Nil {
  // Only log if the message's level is >= the minimum required level
  case compare(message_level, minimum_level) {
    order.Lt -> Nil // Less severe than the minimum, so do nothing
    order.Eq -> print_log(message_level, message)
    order.Gt -> print_log(message_level, message)
  }
}

fn print_log(level: LogLevel, message: String) -> Nil {
  let prefix = case level {
    log_level.Debug -> "[DEBUG]"
    log_level.Info -> "[INFO] "
    log_level.Warn -> "[WARN] "
    log_level.Error -> "[ERROR]"
    log_level.Fatal -> "[FATAL]"
  }
  // Write the prefixed message to standard output
  io.println(prefix <> " " <> message)
}
This logic is the fundamental principle of all logging frameworks. Here is a visual representation of that decision-making flow:
● Log Message(Warn, "Disk space low")
│
▼
┌─────────────────────────┐
│ Read Minimum Log Level │
│ (e.g., from ENV var) │
│ Current Level: Info │
└────────────┬────────────┘
│
▼
◆ Is message_level >= min_level?
(Warn >= Info)
╱ ╲
Yes No
│ │
▼ ▼
┌──────────┐ ┌───────────┐
│ Format & │ │ Discard │
│ Write to │ │ Message │
│ Output │ │ (Silent) │
│ (stdout) │ └───────────┘
└──────────┘
│
▼
● End
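The decision diamond in the diagram can also be captured as a small boolean helper, which is often easier to reuse than matching on `order.Order` at every call site. This is an optional refactoring sketch built on the `compare` function from our log_level module:

```gleam
import gleam/order
import my_app/log_level.{type LogLevel, compare}

/// True when a message at message_level should be emitted
/// given the configured minimum_level.
pub fn should_log(minimum_level: LogLevel, message_level: LogLevel) -> Bool {
  case compare(message_level, minimum_level) {
    order.Lt -> False
    _ -> True
  }
}
```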
Reading Configuration from the Environment
Hardcoding the minimum log level isn't flexible. The best practice is to read it from an environment variable. This allows you to change the log verbosity of your deployed application without changing the code.
We'll use the community envoy package (add it with `gleam add envoy`) to read an environment variable, providing a sensible default if it's not set.

// src/my_app/logger.gleam (continued)
import envoy
import my_app/log_level.{type LogLevel}

pub fn get_minimum_level_from_env() -> LogLevel {
  case envoy.get("LOG_LEVEL") {
    // Default to Info if the variable is not set
    Error(Nil) -> log_level.Info
    Ok(level_string) ->
      case level_string {
        "DEBUG" -> log_level.Debug
        "INFO" -> log_level.Info
        "WARN" -> log_level.Warn
        "ERROR" -> log_level.Error
        "FATAL" -> log_level.Fatal
        // If the value is invalid, we still default to Info
        _ -> log_level.Info
      }
  }
}
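With the core `log` function and the environment reader in place, a common ergonomic addition is a thin wrapper per level, so call sites don't have to pass the message level explicitly. This is a sketch of functions that would live alongside `log` in the same `src/my_app/logger.gleam` module (the wrapper names are our own choice, not an established API):

```gleam
import my_app/log_level.{type LogLevel, Debug, Info, Warn}

// Convenience wrappers around the log function defined earlier
// in this module.
pub fn debug(minimum_level: LogLevel, message: String) -> Nil {
  log(minimum_level, Debug, message)
}

pub fn info(minimum_level: LogLevel, message: String) -> Nil {
  log(minimum_level, Info, message)
}

pub fn warn(minimum_level: LogLevel, message: String) -> Nil {
  log(minimum_level, Warn, message)
}

pub fn error(minimum_level: LogLevel, message: String) -> Nil {
  // Error is qualified to avoid clashing with the Result constructor
  log(minimum_level, log_level.Error, message)
}
```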
Putting It All Together: A Practical Example
Now we can combine these pieces into a working application. We'll read the minimum level once at startup and then use our `log` function throughout the code.
import gleam/io
import my_app/logger
import my_app/log_level.{Debug, Info, Warn}

pub fn main() {
  // Read the configuration once when the application starts
  let minimum_level = logger.get_minimum_level_from_env()
  io.println("Logger initialized with minimum level.")
  logger.log(minimum_level, Debug, "Attempting to connect to database...")
  // ... database connection logic ...
  logger.log(minimum_level, Info, "Database connection successful.")
  logger.log(minimum_level, Info, "Processing user request for user_id=123.")
  // ... processing logic ...
  logger.log(minimum_level, Warn, "User 123 has a deprecated subscription plan.")
  // Error is qualified here so it doesn't clash with the built-in Result constructor
  logger.log(minimum_level, log_level.Error, "Failed to process payment for order 456.")
}
To run this, you can control the output from your terminal:
Running with default (INFO):
$ gleam run
Logger initialized with minimum level.
[INFO] Database connection successful.
[INFO] Processing user request for user_id=123.
[WARN] User 123 has a deprecated subscription plan.
[ERROR] Failed to process payment for order 456.
Running with DEBUG verbosity:
$ LOG_LEVEL=DEBUG gleam run
Logger initialized with minimum level.
[DEBUG] Attempting to connect to database...
[INFO] Database connection successful.
[INFO] Processing user request for user_id=123.
[WARN] User 123 has a deprecated subscription plan.
[ERROR] Failed to process payment for order 456.
As you can see, we can now control the verbosity of our application's logs without a single code change!
Where and When: Best Practices for Using Each Log Level
Knowing how to implement a logger is only half the battle. Knowing what to log and at which level is an art that separates junior and senior engineers. The examples below use a shorthand `logger.log(level, message)` for brevity. Here are some guidelines:
Debug: For the Deepest Dives
Use this level for information that is only valuable to a developer actively debugging a specific piece of code. It should be safe to disable `Debug` logs completely in production.
- Good: `logger.log(Debug, "Payload received: " <> inspect(payload))`
- Good: `logger.log(Debug, "Entering function process_order with order_id=" <> order_id)`
- Bad: `logger.log(Debug, "Order processed.")` (This is an `Info` level event).
Info: The Story of Your Application's Happy Path
Info logs should narrate the normal, healthy operation of your application. They are the high-level story of what's happening. Looking at only the `Info` logs should give you a clear picture of the system's activity.
- Good: `logger.log(Info, "HTTP server started on port 8080.")`
- Good: `logger.log(Info, "User " <> user.name <> " logged in successfully.")`
- Bad: `logger.log(Info, "Checking user password...")` (This is too granular; it's a `Debug` message).
Warn: The Canaries in the Coal Mine
Warnings are for unexpected or potentially harmful situations that do not (yet) constitute an error. They are signals that you should investigate an issue before it becomes a full-blown error.
- Good: `logger.log(Warn, "API rate limit approaching. 95% of quota used.")`
- Good: `logger.log(Warn, "Configuration file not found. Using default settings.")`
- Bad: `logger.log(Warn, "User entered incorrect password.")` (This is a normal, expected event, not a system problem. It might be an `Info` or `Debug` log at most).
Error & Fatal: When Things Go Wrong
Reserve `Error` for genuine errors that have impacted a user or a system process. These are actionable events that often require an alert. `Fatal` is for the rare case where the application cannot recover and must shut down.
- Good: `logger.log(Error, "Failed to connect to database after 3 retries.")`
- Good: `logger.log(Error, "Could not process message from queue: " <> inspect(error))`
- Good: `logger.log(Fatal, "Required configuration 'DATABASE_URL' is missing. Shutting down.")`
Common Pitfalls: The Anti-Patterns of Logging
- Logging Sensitive Information: Never log passwords, API keys, credit card numbers, or personally identifiable information (PII). This is a massive security risk.
- Meaningless Messages: A log message like `logger.log(Error, "An error occurred.")` is useless. Always include context: what failed, why it failed, and any relevant identifiers (like a user ID or request ID).
- Over-logging in Loops: Be extremely careful about placing log statements inside loops that can execute thousands of times per second. This can severely degrade performance.
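One common mitigation for the loop-logging pitfall is to accept the message as a zero-argument function, so that expensive string building only happens when the message will actually be emitted. This is a sketch of the pattern, reusing the `compare` function from our log_level module (the `lazy_log` name is our own):

```gleam
import gleam/io
import gleam/order
import my_app/log_level.{type LogLevel, compare}

/// Only call build_message (which may be expensive) if the
/// message would actually be printed.
pub fn lazy_log(
  minimum_level: LogLevel,
  message_level: LogLevel,
  build_message: fn() -> String,
) -> Nil {
  case compare(message_level, minimum_level) {
    order.Lt -> Nil
    _ -> io.println(build_message())
  }
}
```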
Beyond the Console: Structured Logging and the Modern Observability Stack
Printing plain text to the console is great for development, but modern production systems use structured logging. This means formatting logs as machine-readable data, typically JSON, so they can be easily parsed, indexed, and queried by log aggregation platforms.
What is Structured Logging (JSON)?
Instead of a simple string, a structured log is an object containing key-value pairs. This adds rich, queryable context to every message.
We can adapt our Gleam logger to output JSON. While a proper implementation would use a JSON library like `gleam/json`, here's a conceptual example using string concatenation:
// A conceptual example of a structured log function
fn print_structured_log(level: LogLevel, message: String) -> Nil {
  // In a real app, use a JSON library for proper escaping!
  let level_str = case level {
    log_level.Debug -> "DEBUG"
    log_level.Info -> "INFO"
    log_level.Warn -> "WARN"
    log_level.Error -> "ERROR"
    log_level.Fatal -> "FATAL"
  }
  // A real app would fetch the current time from a time package such as birl
  let timestamp = "2023-10-27T10:00:00Z"
  let json_string =
    "{ \"timestamp\": \"" <> timestamp <> "\", "
    <> "\"level\": \"" <> level_str <> "\", "
    <> "\"message\": \"" <> message <> "\" }"
  io.println(json_string)
}
A log message might now look like this:
{"timestamp": "2023-10-27T10:00:00Z", "level": "ERROR", "message": "Failed to connect to database", "retry_attempts": 3}
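For comparison, here is roughly what the same function could look like using the gleam_json package (added with `gleam add gleam_json`), which handles string escaping for you. The timestamp is still a hard-coded placeholder, since fetching the current time would require an additional package:

```gleam
import gleam/io
import gleam/json
import my_app/log_level.{type LogLevel}

fn print_json_log(level: LogLevel, message: String) -> Nil {
  let level_str = case level {
    log_level.Debug -> "DEBUG"
    log_level.Info -> "INFO"
    log_level.Warn -> "WARN"
    log_level.Error -> "ERROR"
    log_level.Fatal -> "FATAL"
  }
  json.object([
    // A real app would use a time package for the timestamp
    #("timestamp", json.string("2023-10-27T10:00:00Z")),
    #("level", json.string(level_str)),
    #("message", json.string(message)),
  ])
  |> json.to_string
  |> io.println
}
```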
The Journey of a Log Message
In a modern cloud-native environment, a log message goes on a journey from your application to a centralized platform where it can be analyzed. This pipeline is crucial for observability.
Gleam Application
┌─────────────────┐
│ logger.error(...)│
└────────┬────────┘
│
▼
Structured Log (JSON)
{"level":"error", "msg":"...", "ts":...}
│
▼
Log Shipper
(e.g., Vector, Fluentd)
│
▼
┌─────────────────────┐
│ Log Aggregation │
│ & Storage Platform │
│ (e.g., Grafana Loki)│
└──────────┬──────────┘
│
├──────────────────┐
│ │
▼ ▼
┌─────────┐ ┌────────────┐
│ Search │ │ Alerting │
│ & Query │ │ (on ERROR) │
└─────────┘ └────────────┘
This setup allows entire teams to search, filter, and visualize logs from hundreds of services in one place, making it possible to manage complex distributed systems.
Pros and Cons of Implementing a Logging Strategy
Like any engineering decision, adopting a formal logging strategy involves trade-offs. However, the benefits almost always outweigh the costs for any non-trivial application.
| Pros | Cons / Risks |
|---|---|
| Enhanced Debuggability | Performance Overhead |
| Quickly pinpoint issues in any environment without needing a debugger attached. | I/O operations for logging can be slow. Excessive logging can impact application latency. |
| Improved Observability | Increased Complexity |
| Gain deep insight into application behavior, performance, and usage patterns. | Requires careful planning and discipline from the entire development team. |
| Proactive Alerting | Storage and Financial Costs |
| Automatically detect and respond to critical errors before they impact many users. | Centralized logging platforms charge based on the volume of data ingested and stored. |
| Long-Term System Record | Security Risks |
| Provides a historical audit trail for security analysis and incident post-mortems. | Accidentally logging sensitive user data or credentials can create major vulnerabilities. |
Your Learning Path: The Log Levels Module
Understanding the theory is the first step. The next is to put it into practice. The Log Levels module in the kodikra.com Gleam learning path is designed to solidify these concepts through hands-on coding. You will implement a parser for a specific log format, reinforcing your understanding of string manipulation, pattern matching, and functional programming concepts in a real-world context.
This module is a critical step in your journey to becoming a proficient Gleam developer, as it teaches a skill that is essential for building and maintaining production-grade software.
Learn Log Levels step by step: Apply your knowledge by building a log-line parser and implementing the core logic discussed in this guide.
Frequently Asked Questions (FAQ)
Why not just use `io.println` for everything?
Using io.println is fine for very small scripts or "hello world" examples. However, it lacks the two most critical features for real applications: categorization (levels) and configurability (the ability to turn logs on/off by environment). Without these, your logs become an unmanageable wall of text in production.
What is the performance impact of logging in Gleam?
The performance impact depends on what you're doing. The check `compare(message_level, minimum_level)` is extremely fast. The main cost comes from I/O (writing to the console or a file) and string formatting. This is why it's crucial to set the log level to INFO or WARN in production, as discarded DEBUG messages will have almost zero performance cost.
How can I add more context to my logs, like a request ID?
This is a great question and leads to the concept of a "logger context." A more advanced logger would allow you to create a logger instance with pre-filled context, like `logger_with_context = logger.with("request_id", "xyz-123")`. All messages from that instance would automatically include the request ID. In our simple logger, you would manually add it to the message string.
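As a rough sketch of what a context-carrying logger could look like on top of our simple implementation (the `ContextLogger` record and `with` function here are hypothetical, not part of the code above):

```gleam
import my_app/log_level.{type LogLevel}

// Hypothetical: a logger value that carries pre-filled context.
pub type ContextLogger {
  ContextLogger(minimum_level: LogLevel, context: String)
}

/// Return a new logger whose context includes key=value.
pub fn with(logger: ContextLogger, key: String, value: String) -> ContextLogger {
  ContextLogger(..logger, context: logger.context <> " " <> key <> "=" <> value)
}

/// Append the accumulated context to every message, then delegate
/// to the log function defined earlier in this module.
pub fn log_with_context(
  logger: ContextLogger,
  level: LogLevel,
  message: String,
) -> Nil {
  log(logger.minimum_level, level, message <> logger.context)
}
```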
Should I log expected errors, like a user entering a wrong password?
Generally, you should not log expected, non-malicious user errors at the ERROR level. A failed login attempt is part of the normal application flow. Logging it as an ERROR would trigger false alarms. It might be logged at the INFO level for security auditing, or at the DEBUG level, but not as a system error.
Are there any third-party logging libraries for Gleam yet?
The Gleam ecosystem is growing rapidly. Libraries for structured logging and integration with observability platforms are likely to emerge. It's always a good idea to check the Hex.pm package repository for new and popular logging packages. However, understanding how to build one from scratch, as shown here, provides a solid foundation for using any library effectively.
How do I handle logging in concurrent Gleam actors?
When logging from multiple concurrent processes (actors), it's important to ensure that log writes don't interleave and corrupt messages. A common pattern is to have a dedicated "logging actor" that receives log messages from all other actors via its mailbox and writes them to the output sequentially. This centralizes the I/O and prevents race conditions.
Can I direct logs to a file instead of the console?
Yes. While Gleam's core library focuses on stdout/stderr, you can use a native Erlang or Elixir function via the FFI (Foreign Function Interface) to write to files. However, the modern cloud-native approach is to always log to standard output (stdout) and let the container orchestration system (like Docker or Kubernetes) handle collecting, forwarding, and storing the logs.
Conclusion: From Noise to Signal
Log levels are not just a feature; they are a fundamental discipline for professional software engineering. By moving beyond simple print statements and adopting a structured, level-based approach, you elevate your Gleam applications from opaque black boxes to transparent, observable systems. You empower yourself and your team to diagnose problems faster, monitor system health effectively, and build more resilient software.
The techniques covered here—using custom types for safety, configuring verbosity with environment variables, and understanding the semantic meaning of each level—are universal principles that will serve you throughout your career. Now, it's time to put this knowledge into action.
Disclaimer: The code snippets and best practices in this article are based on Gleam v1.3.0 and its standard library. As the language and its ecosystem evolve, new libraries and patterns may emerge.
Return to the Gleam Learning Roadmap
Published by Kodikra — Your trusted Gleam learning resource.