Logging

Video ref: https://www.youtube.com/watch?v=pxuXaaT1u3k

Overview

Python's built-in logging module is the standard for production logging. Avoid using print() in production — it has no severity levels, no formatting, and can't be disabled without changing code.

Log Levels

Level      Value   Use
DEBUG      10      Detailed diagnostic info
INFO       20      Confirming things work as expected
WARNING    30      Something unexpected, but still running
ERROR      40      Serious problem, function failed
CRITICAL   50      Program may not continue

Default level is WARNING — DEBUG/INFO are suppressed unless configured.
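This default is observable even without any configuration: in a fresh interpreter, a logger inherits WARNING from the root, so DEBUG and INFO checks come back disabled. (The logger name "demo" is an arbitrary choice for the sketch.)

```python
import logging

logger = logging.getLogger("demo")  # nothing configured yet

# A fresh logger inherits the root logger's default level, WARNING.
print(logging.getLevelName(logger.getEffectiveLevel()))  # WARNING

print(logger.isEnabledFor(logging.INFO))     # False
print(logger.isEnabledFor(logging.WARNING))  # True
```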


Basic Setup

import logging

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
)

logger = logging.getLogger(__name__)  # use module name, not root logger

logger.debug("Debug message")
logger.info("Server started on port 8080")
logger.warning("Disk usage above 80%")
logger.error("Failed to connect to database")
logger.critical("Out of memory — shutting down")

Logger Hierarchy

Loggers are named in a hierarchy using dots: app, app.db, app.api. Records logged on a child bubble up to the handlers of its ancestors unless logger.propagate = False.

# Module-level pattern — best practice
logger = logging.getLogger(__name__)
# In myapp/db.py this becomes "myapp.db"
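A small sketch of propagation, using the hypothetical names app and app.db: the child has no handler of its own, yet its records reach the handler attached to the parent, until propagation is switched off. A StringIO stream stands in for the console so the output is easy to inspect.

```python
import io
import logging

stream = io.StringIO()  # stand-in for the console

parent = logging.getLogger("app")
parent.setLevel(logging.DEBUG)
parent.addHandler(logging.StreamHandler(stream))

child = logging.getLogger("app.db")  # child of "app" via the dotted name

child.info("reaches the parent's handler")  # propagates up to "app"
child.propagate = False
child.info("goes nowhere now")              # no handler, no propagation

print(stream.getvalue())  # only the first message was recorded
```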

Handlers

Handlers send log records to different destinations:

import logging

logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)

# Console handler
console = logging.StreamHandler()
console.setLevel(logging.INFO)

# File handler
file_handler = logging.FileHandler("app.log")
file_handler.setLevel(logging.DEBUG)

# Formatter
fmt = logging.Formatter("%(asctime)s [%(levelname)s] %(name)s: %(message)s")
console.setFormatter(fmt)
file_handler.setFormatter(fmt)

logger.addHandler(console)
logger.addHandler(file_handler)

Common handlers:

Handler                    Purpose
StreamHandler              Console / stderr
FileHandler                Write to a file
RotatingFileHandler        Rotate when file hits max size
TimedRotatingFileHandler   Rotate daily/weekly
HTTPHandler                Send to a web server
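For example, a RotatingFileHandler capped at 1 MB with three backups (the logger name, file name, and sizes are arbitrary choices for this sketch):

```python
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("rotating.demo")
logger.setLevel(logging.INFO)

# Rotate once app.log reaches ~1 MB; keep app.log.1 .. app.log.3 as backups
handler = RotatingFileHandler("app.log", maxBytes=1_000_000, backupCount=3)
handler.setFormatter(logging.Formatter("%(asctime)s [%(levelname)s] %(message)s"))
logger.addHandler(handler)

logger.info("rotation configured")
```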

Structured Logging (Production)

For production/cloud environments, log JSON instead of plain text so tools like Datadog, CloudWatch, or ELK can parse fields.

import logging
import json

class JSONFormatter(logging.Formatter):
    def format(self, record):
        log_data = {
            "time": self.formatTime(record),
            "level": record.levelname,
            "name": record.name,
            "message": record.getMessage(),
        }
        if record.exc_info:
            log_data["exc_info"] = self.formatException(record.exc_info)
        return json.dumps(log_data)
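Wiring the formatter into a handler and parsing the output back shows why this matters: every line is valid JSON that a log pipeline can load field by field. (The class body is repeated here, minus exception handling, only so the snippet runs on its own; the logger name is arbitrary.)

```python
import io
import json
import logging

class JSONFormatter(logging.Formatter):  # same idea as the class above
    def format(self, record):
        return json.dumps({
            "time": self.formatTime(record),
            "level": record.levelname,
            "name": record.name,
            "message": record.getMessage(),
        })

logger = logging.getLogger("myapp.json")
logger.setLevel(logging.INFO)

stream = io.StringIO()  # stand-in for stdout or a file
handler = logging.StreamHandler(stream)
handler.setFormatter(JSONFormatter())
logger.addHandler(handler)

logger.info("user logged in")

parsed = json.loads(stream.getvalue())
print(parsed["level"], parsed["message"])  # INFO user logged in
```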

Or use the python-json-logger library instead of writing a formatter by hand:

pip install python-json-logger

Exception Logging

try:
    result = 1 / 0
except ZeroDivisionError:
    logger.exception("Division failed")  # logs ERROR + full traceback automatically
    # equivalent to: logger.error("msg", exc_info=True)
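Capturing the output into a StringIO (purely for demonstration; the logger name is arbitrary) confirms that the full traceback rides along with the message:

```python
import io
import logging

stream = io.StringIO()
logger = logging.getLogger("exc.demo")
logger.addHandler(logging.StreamHandler(stream))

try:
    1 / 0
except ZeroDivisionError:
    logger.exception("Division failed")  # ERROR record + attached traceback

out = stream.getvalue()
print("Traceback" in out, "ZeroDivisionError" in out)  # True True
```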

Key Rules

  • Always use logging.getLogger(__name__) — never use the root logger directly in libraries
  • Set level on the logger, not just the handler
  • Use logger.exception() inside except blocks — captures traceback automatically
  • Use %s style formatting in log messages (logger.debug("val: %s", x)) — lazy evaluation, no string built if level disabled
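The last rule is easy to misread: the argument itself is evaluated either way, but calling str() on it and building the final message only happen if the record passes the level check. A sketch with a hypothetical Report class whose string form is costly:

```python
import logging

class Report:
    """Hypothetical object whose string form is costly to build."""
    rendered = 0
    def __str__(self):
        Report.rendered += 1
        return "big report"

logger = logging.getLogger("lazy.demo")
logger.setLevel(logging.WARNING)  # DEBUG records are suppressed
report = Report()

logger.debug(f"report: {report}")   # f-string formats eagerly: __str__ runs
logger.debug("report: %s", report)  # suppressed, so __str__ never runs

print(Report.rendered)  # 1
```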