Logging
Video ref: https://www.youtube.com/watch?v=pxuXaaT1u3k
Overview
Python's built-in logging module is the standard for production logging. Avoid using print() in production — it has no severity levels, no formatting, and can't be disabled without changing code.
Log Levels
| Level | Value | Use |
|---|---|---|
| DEBUG | 10 | Detailed diagnostic info |
| INFO | 20 | Confirming things work as expected |
| WARNING | 30 | Something unexpected, but still running |
| ERROR | 40 | Serious problem, function failed |
| CRITICAL | 50 | Program may not continue |
Default level is WARNING — DEBUG/INFO are suppressed unless configured.
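A quick check confirms the default: an unconfigured logger inherits WARNING from the root logger (the name `demo.defaults` is just for illustration).

```python
import logging

# An unconfigured logger inherits its effective level from the
# root logger, which defaults to WARNING.
logger = logging.getLogger("demo.defaults")

assert logger.getEffectiveLevel() == logging.WARNING
assert not logger.isEnabledFor(logging.INFO)  # DEBUG/INFO are dropped
assert logger.isEnabledFor(logging.ERROR)     # WARNING and above pass
```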
Basic Setup
```python
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
)
logger = logging.getLogger(__name__)  # use module name, not root logger

logger.debug("Debug message")
logger.info("Server started on port 8080")
logger.warning("Disk usage above 80%")  # no args passed, so a literal % is fine
logger.error("Failed to connect to database")
logger.critical("Out of memory — shutting down")
```
Logger Hierarchy
Loggers are named in a hierarchy using dots: `app`, `app.db`, `app.api`.
Child loggers propagate records to their parents unless `propagate = False` is set.

```python
# Module-level pattern — best practice
logger = logging.getLogger(__name__)
# In myapp/db.py this becomes "myapp.db"
```
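A small sketch of propagation (the `myapp` names and the list-collecting handler are purely for demonstration):

```python
import logging

records = []

class ListHandler(logging.Handler):
    # Test handler that collects records instead of writing anywhere.
    def emit(self, record):
        records.append(record)

parent = logging.getLogger("myapp")
parent.setLevel(logging.DEBUG)
parent.addHandler(ListHandler())

child = logging.getLogger("myapp.db")  # child: no handlers of its own

child.warning("from the child")        # propagates up to myapp's handler
assert records[0].name == "myapp.db"

child.propagate = False
child.addHandler(logging.NullHandler())  # avoid the last-resort stderr fallback
child.warning("stays local")
assert len(records) == 1                 # parent's handler never saw it
```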
Handlers
Handlers send log records to different destinations:
```python
import logging

logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)

# Console handler
console = logging.StreamHandler()
console.setLevel(logging.INFO)

# File handler
file_handler = logging.FileHandler("app.log")
file_handler.setLevel(logging.DEBUG)

# Formatter
fmt = logging.Formatter("%(asctime)s [%(levelname)s] %(name)s: %(message)s")
console.setFormatter(fmt)
file_handler.setFormatter(fmt)

logger.addHandler(console)
logger.addHandler(file_handler)
```
Common handlers:
| Handler | Purpose |
|---|---|
| StreamHandler | Console / stderr |
| FileHandler | Write to a file |
| RotatingFileHandler | Rotate when file hits max size |
| TimedRotatingFileHandler | Rotate daily/weekly |
| HTTPHandler | Send to a web server |
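As a sketch of size-based rotation (the path, size limit, and logger name here are arbitrary choices for the demo):

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

# Rotate once the file reaches ~1 KB; keep at most 3 old files
# (app.log.1 .. app.log.3), deleting anything older.
log_path = os.path.join(tempfile.mkdtemp(), "app.log")
handler = RotatingFileHandler(log_path, maxBytes=1024, backupCount=3)
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))

logger = logging.getLogger("rotate.demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.propagate = False

for i in range(200):
    logger.info("event %d padded to make the file grow quickly", i)
handler.close()

assert os.path.exists(log_path)             # current file
assert os.path.exists(log_path + ".1")      # most recent rotated file
assert not os.path.exists(log_path + ".4")  # older ones were deleted
```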
Structured Logging (Production)
For production/cloud environments, log JSON instead of plain text so tools like Datadog, CloudWatch, or ELK can parse fields.
```python
import logging
import json

class JSONFormatter(logging.Formatter):
    def format(self, record):
        log_data = {
            "time": self.formatTime(record),
            "level": record.levelname,
            "name": record.name,
            "message": record.getMessage(),
        }
        if record.exc_info:
            log_data["exc_info"] = self.formatException(record.exc_info)
        return json.dumps(log_data)
```
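Wiring the formatter into a handler shows the machine-parseable output. The formatter class is repeated below so the snippet runs standalone; `json.demo` is just an illustrative logger name.

```python
import io
import json
import logging

class JSONFormatter(logging.Formatter):
    # Same formatter as above, repeated so this snippet is self-contained.
    def format(self, record):
        log_data = {
            "time": self.formatTime(record),
            "level": record.levelname,
            "name": record.name,
            "message": record.getMessage(),
        }
        if record.exc_info:
            log_data["exc_info"] = self.formatException(record.exc_info)
        return json.dumps(log_data)

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(JSONFormatter())

logger = logging.getLogger("json.demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.propagate = False

logger.info("user logged in")

# The emitted line is valid JSON with the fields defined above.
parsed = json.loads(stream.getvalue())
assert parsed["level"] == "INFO"
assert parsed["message"] == "user logged in"
```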
Or use the python-json-logger library:

```
pip install python-json-logger
```
Exception Logging
```python
try:
    result = 1 / 0
except ZeroDivisionError:
    logger.exception("Division failed")  # logs ERROR + full traceback automatically
    # equivalent to: logger.error("msg", exc_info=True)
```
Key Rules
- Always use `logging.getLogger(__name__)` — never use the root logger directly in libraries
- Set the level on the logger, not just the handler
- Use `logger.exception()` inside `except` blocks — captures the traceback automatically
- Use `%s`-style formatting in log messages (`logger.debug("val: %s", x)`) — lazy evaluation, no string is built if the level is disabled
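The last rule is worth demonstrating: with `%s` style, the argument's `repr` is only computed if the record actually passes the level check (the `Expensive` class and logger name are illustrative).

```python
import io
import logging

calls = []

class Expensive:
    # Counts how many times formatting actually happens.
    def __repr__(self):
        calls.append(1)
        return "<expensive>"

stream = io.StringIO()
logger = logging.getLogger("lazy.demo")
logger.setLevel(logging.WARNING)
logger.addHandler(logging.StreamHandler(stream))
logger.propagate = False

# DEBUG is below the threshold: the record is dropped before
# formatting, so __repr__ is never called.
logger.debug("value: %s", Expensive())
assert calls == []

# WARNING passes: the message is formatted and __repr__ runs once.
logger.warning("value: %s", Expensive())
assert calls == [1]
```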