The Complete Guide to Logging for Python Developers
Image by Author

 

# Introduction

 
Most Python developers treat logging as an afterthought. They throw print() statements around during development, maybe switch to basic logging later, and assume that’s enough. But when issues arise in production, they learn they’re missing the context needed to diagnose problems efficiently.

Proper logging practices give you visibility into application behavior, performance patterns, and error conditions. With the right approach, you can trace user actions, identify bottlenecks, and debug issues without reproducing them locally. Good logging turns debugging from guesswork into systematic problem-solving.

This article covers the essential logging patterns that Python developers can use. You’ll learn how to structure log messages for searchability, handle exceptions without losing context, and configure logging for different environments. We’ll start with the basics and work our way up to more advanced logging techniques that you can use in projects right away. We will be using only the logging module.

You can find the code on GitHub.

 

# Setting Up Your First Logger

 
Instead of jumping straight to complex configurations, let us understand what a logger actually does. We’ll create a basic logger that writes to both the console and a file.
 

import logging

logger = logging.getLogger('my_app')
logger.setLevel(logging.DEBUG)

console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)

file_handler = logging.FileHandler('app.log')
file_handler.setLevel(logging.DEBUG)

formatter = logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
console_handler.setFormatter(formatter)
file_handler.setFormatter(formatter)

logger.addHandler(console_handler)
logger.addHandler(file_handler)

logger.debug('This is a debug message')
logger.info('Application started')
logger.warning('Disk space running low')
logger.error('Failed to connect to database')
logger.critical('System shutting down')

 

Here’s what each piece of the code does.

The getLogger() function creates a named logger instance. Think of it as creating a channel for your logs. The name 'my_app' helps you identify where logs come from in larger applications.

We set the logger level to DEBUG, which means it will process all messages. Then we create two handlers: one for console output and one for file output. Handlers control where logs go.

The console handler only shows INFO level and above, while the file handler captures everything, including DEBUG messages. This is helpful because you want detailed logs in files but cleaner output on screen.

The formatter determines how your log messages look. The format string uses placeholders like %(asctime)s for the timestamp and %(levelname)s for severity.
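Beyond those two, the logging module exposes other record attributes you can drop into a format string. Here is a minimal, self-contained sketch (the 'placeholder_demo' logger name is just illustrative) showing function names and line numbers in the output:

import logging

# %(funcName)s and %(lineno)d are standard LogRecord attributes,
# handy when you need to know exactly where a message came from.
detailed_formatter = logging.Formatter(
    '%(asctime)s | %(name)s | %(levelname)s | %(funcName)s:%(lineno)d | %(message)s'
)

demo_handler = logging.StreamHandler()
demo_handler.setFormatter(detailed_formatter)

demo_logger = logging.getLogger('placeholder_demo')
demo_logger.addHandler(demo_handler)
demo_logger.setLevel(logging.INFO)

def load_config():
    demo_logger.info('Config loaded')  # logs the enclosing function name and line number

load_config()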

 

# Understanding Log Levels and When to Use Each

 
Python’s logging module has five standard levels, and knowing when to use each one is important for useful logs.

Here is an example:
 

import logging

logger = logging.getLogger('payment_processor')
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
logger.addHandler(handler)

def process_payment(user_id, amount):
    logger.debug(f'Starting payment processing for user {user_id}')

    if amount <= 0:
        logger.error(f'Invalid payment amount: {amount}')
        return False

    logger.info(f'Processing ${amount} payment for user {user_id}')

    if amount > 10000:
        logger.warning(f'Large transaction detected: ${amount}')

    try:
        # Simulate payment processing
        success = charge_card(user_id, amount)
        if success:
            logger.info(f'Payment successful for user {user_id}')
            return True
        else:
            logger.error(f'Payment failed for user {user_id}')
            return False
    except Exception as e:
        logger.critical(f'Payment system crashed: {e}', exc_info=True)
        return False

def charge_card(user_id, amount):
    # Simulated payment logic
    return True

process_payment(12345, 150.00)
process_payment(12345, 15000.00)

 

Let us break down when to use each level:

  • DEBUG is for detailed information useful during development. You’d use it for variable values, loop iterations, or step-by-step execution traces. These are usually disabled in production.
  • INFO marks normal operations that you want to record. Starting a server, completing a task, or successful transactions go here. These confirm your application is working as expected.
  • WARNING signals something unexpected but not breaking. This includes low disk space, deprecated API usage, or unusual but handled situations. The application continues running, but someone should take a look.
  • ERROR means something failed but the application can continue. Failed database queries, validation errors, or network timeouts belong here. The specific operation failed, but the app keeps running.
  • CRITICAL indicates serious problems that might cause the application to crash or lose data. Use this sparingly for catastrophic failures that need immediate attention.

When you run the above code, you’ll get:
 

DEBUG: Starting payment processing for user 12345
DEBUG:payment_processor:Starting payment processing for user 12345
INFO: Processing $150.0 payment for user 12345
INFO:payment_processor:Processing $150.0 payment for user 12345
INFO: Payment successful for user 12345
INFO:payment_processor:Payment successful for user 12345
DEBUG: Starting payment processing for user 12345
DEBUG:payment_processor:Starting payment processing for user 12345
INFO: Processing $15000.0 payment for user 12345
INFO:payment_processor:Processing $15000.0 payment for user 12345
WARNING: Large transaction detected: $15000.0
WARNING:payment_processor:Large transaction detected: $15000.0
INFO: Payment successful for user 12345
INFO:payment_processor:Payment successful for user 12345
True

 

Next, let us proceed to understand more about logging exceptions.

 

# Logging Exceptions Properly

 
When exceptions happen, you need more than just the error message; you need the full stack trace. Here is how to capture exceptions effectively.
 

import json
import logging

logger = logging.getLogger('api_handler')
logger.setLevel(logging.DEBUG)

handler = logging.FileHandler('errors.log')
formatter = logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
handler.setFormatter(formatter)
logger.addHandler(handler)

def fetch_user_data(user_id):
    logger.info(f'Fetching data for user {user_id}')

    try:
        # Simulate API call
        response = call_external_api(user_id)
        data = json.loads(response)
        logger.debug(f'Received data: {data}')
        return data
    except json.JSONDecodeError as e:
        logger.error(
            f'Failed to parse JSON for user {user_id}: {e}',
            exc_info=True
        )
        return None
    except ConnectionError as e:
        logger.error(
            f'Network error while fetching user {user_id}',
            exc_info=True
        )
        return None
    except Exception as e:
        logger.critical(
            f'Unexpected error in fetch_user_data: {e}',
            exc_info=True
        )
        raise

def call_external_api(user_id):
    # Simulated API response
    return '{"id": ' + str(user_id) + ', "name": "John"}'

fetch_user_data(123)

 

The key here is the exc_info=True parameter. This tells the logger to include the full exception traceback in your logs. Without it, you only get the error message, which often is not enough to debug the problem.

Notice how we catch specific exceptions first, then have a general Exception handler. The specific handlers let us provide context-appropriate error messages. The last handler catches anything unexpected and re-raises it because we do not know how to handle it safely.

Also notice we log at ERROR for expected exceptions (like network errors) but CRITICAL for unexpected ones. This distinction helps you prioritize when reviewing logs.
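As a shorthand, when you are already inside an except block and ERROR is the right level, the standard library’s logger.exception() behaves like logger.error() with exc_info=True. A minimal sketch (the delete_user function is hypothetical) using the api_handler logger from above:

def delete_user(user_id):
    try:
        raise ConnectionError('database unreachable')  # simulate a failure
    except ConnectionError:
        # exception() always logs at ERROR level and appends the current traceback
        logger.exception(f'Could not delete user {user_id}')

delete_user(123)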

 

# Creating a Reusable Logger Configuration

 
Copying logger setup code across files is tedious and error-prone. Let us create a configuration function you can import anywhere in your project.
 

# logger_config.py

import logging
import os
from datetime import datetime


def setup_logger(name, log_dir="logs", level=logging.INFO):
    """
    Create a configured logger instance

    Args:
        name: Logger name (usually __name__ from the calling module)
        log_dir: Directory to store log files
        level: Minimum logging level

    Returns:
        Configured logger instance
    """
    # Create logs directory if it doesn't exist

    if not os.path.exists(log_dir):
        os.makedirs(log_dir)
    logger = logging.getLogger(name)

    # Avoid adding handlers multiple times

    if logger.handlers:
        return logger
    logger.setLevel(level)

    # Console handler - INFO and above

    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.INFO)
    console_format = logging.Formatter("%(levelname)s - %(name)s - %(message)s")
    console_handler.setFormatter(console_format)

    # File handler - everything

    log_filename = os.path.join(
        log_dir, f"{name.replace('.', '_')}_{datetime.now().strftime('%Y%m%d')}.log"
    )
    file_handler = logging.FileHandler(log_filename)
    file_handler.setLevel(logging.DEBUG)
    file_format = logging.Formatter(
        "%(asctime)s - %(name)s - %(levelname)s - %(funcName)s:%(lineno)d - %(message)s"
    )
    file_handler.setFormatter(file_format)

    logger.addHandler(console_handler)
    logger.addHandler(file_handler)

    return logger

 

Now that you have set up logger_config, you can use it in your Python script like so:
 

from logger_config import setup_logger

logger = setup_logger(__name__)

def calculate_discount(price, discount_percent):
    logger.debug(f'Calculating discount: {price} * {discount_percent}%')

    if discount_percent < 0 or discount_percent > 100:
        logger.warning(f'Invalid discount percentage: {discount_percent}')
        discount_percent = max(0, min(100, discount_percent))

    discount = price * (discount_percent / 100)
    final_price = price - discount

    logger.info(f'Applied {discount_percent}% discount: ${price} -> ${final_price}')
    return final_price

calculate_discount(100, 20)
calculate_discount(100, 150)

 

This setup function handles several important things. First, it creates the logs directory if needed, preventing crashes from missing directories.

The function checks if handlers already exist before adding new ones. Without this check, calling setup_logger multiple times would create duplicate log entries.

We generate dated log filenames automatically. This prevents log files from growing indefinitely and makes it easy to find logs from specific dates.

The file handler includes more detail than the console handler, including function names and line numbers. This is invaluable when debugging but would clutter console output.

Using __name__ as the logger name creates a hierarchy that matches your module structure. This lets you control logging for specific parts of your application independently.
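For instance, because logger names are dot-separated, a logger named 'myapp.database' is treated as a child of 'myapp' and inherits its effective level and handlers. A small sketch (the names are hypothetical):

import logging

parent = logging.getLogger('myapp')           # configured once, e.g. at startup
child = logging.getLogger('myapp.database')   # what logging.getLogger(__name__) returns
                                              # inside a module at myapp/database.py

parent.setLevel(logging.WARNING)

# The child has no level of its own, so it falls back to the parent's WARNING level,
# and any records it emits propagate up to handlers attached to 'myapp'.
child.info('Filtered out by the inherited WARNING level')
child.warning('Processed and propagated to the parent logger')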

 

# Structuring Logs with Context

 
Plain text logs are fine for simple applications, but structured logs with context make debugging much easier. Let us add contextual information to our logs.
 

import json
import logging
from datetime import datetime, timezone

class ContextLogger:
    """Logger wrapper that adds contextual information to all log messages"""

    def __init__(self, name, context=None):
        self.logger = logging.getLogger(name)
        self.context = context or {}

        handler = logging.StreamHandler()
        formatter = logging.Formatter('%(message)s')
        handler.setFormatter(formatter)
        # Check if handler already exists to avoid duplicate handlers
        if not any(isinstance(h, logging.StreamHandler) and h.formatter and h.formatter._fmt == '%(message)s' for h in self.logger.handlers):
            self.logger.addHandler(handler)
        self.logger.setLevel(logging.DEBUG)

    def _format_message(self, message, level, extra_context=None):
        """Format message with context as JSON"""
        log_data = {
            'timestamp': datetime.now(timezone.utc).isoformat(),
            'level': level,
            'message': message,
            'context': {**self.context, **(extra_context or {})}
        }
        return json.dumps(log_data)

    def debug(self, message, **kwargs):
        self.logger.debug(self._format_message(message, 'DEBUG', kwargs))

    def info(self, message, **kwargs):
        self.logger.info(self._format_message(message, 'INFO', kwargs))

    def warning(self, message, **kwargs):
        self.logger.warning(self._format_message(message, 'WARNING', kwargs))

    def error(self, message, **kwargs):
        self.logger.error(self._format_message(message, 'ERROR', kwargs))

 

You can use the ContextLogger like so:
 

def process_order(order_id, user_id):
    logger = ContextLogger(__name__, context={
        'order_id': order_id,
        'user_id': user_id
    })

    logger.info('Order processing started')

    try:
        items = fetch_order_items(order_id)
        logger.info('Items fetched', item_count=len(items))

        total = calculate_total(items)
        logger.info('Total calculated', total=total)

        if total > 1000:
            logger.warning('High value order', total=total, flagged=True)

        return True
    except Exception as e:
        logger.error('Order processing failed', error=str(e))
        return False

def fetch_order_items(order_id):
    return [{'id': 1, 'price': 50}, {'id': 2, 'price': 75}]

def calculate_total(items):
    return sum(item['price'] for item in items)

process_order('ORD-12345', 'USER-789')

 

This ContextLogger wrapper does something useful: it automatically includes context in every log message. The order_id and user_id get added to all logs without repeating them in every logging call.

The JSON format makes these logs easy to parse and search.
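For example, if these JSON lines were written to a file (here a hypothetical orders.log with one entry per line), pulling out every record for a given order takes only a few lines:

import json

# Hypothetical log file with one JSON object per line, as produced by ContextLogger
with open('orders.log') as f:
    entries = [json.loads(line) for line in f if line.strip()]

# Search by the context fields rather than by grepping raw text
order_entries = [e for e in entries if e['context'].get('order_id') == 'ORD-12345']
for entry in order_entries:
    print(entry['timestamp'], entry['level'], entry['message'])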

The **kwargs in each logging method lets you add extra context to specific log messages. This combines global context (order_id, user_id) with local context (item_count, total) automatically.

This pattern is especially useful in web applications where you want request IDs, user IDs, or session IDs in every log message from a request.
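If you’d rather not maintain a wrapper class, the standard library’s logging.LoggerAdapter covers the same idea: it binds a fixed dict of context to a logger and passes it along as extra on every call. A minimal sketch (the request_id value is illustrative):

import logging

# Expose the bound context in the output via the %(request_id)s placeholder
logging.basicConfig(format='%(levelname)s [%(request_id)s] %(message)s', level=logging.INFO)

base_logger = logging.getLogger('web_app')
request_logger = logging.LoggerAdapter(base_logger, {'request_id': 'req-42'})

# Every message logged through the adapter carries the same request_id
request_logger.info('Handling request')
request_logger.warning('Slow response time')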

 

# Rotating Log Files to Prevent Disk Space Issues

 
Log files grow quickly in production. Without rotation, they’ll eventually fill your disk. Here is how to implement automatic log rotation.
 

import logging
from logging.handlers import RotatingFileHandler, TimedRotatingFileHandler

def setup_rotating_logger(name):
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)

    # Size-based rotation: rotate when file reaches 10MB
    size_handler = RotatingFileHandler(
        'app_size_rotation.log',
        maxBytes=10 * 1024 * 1024,  # 10 MB
        backupCount=5  # Keep 5 old files
    )
    size_handler.setLevel(logging.DEBUG)

    # Time-based rotation: rotate daily at midnight
    time_handler = TimedRotatingFileHandler(
        'app_time_rotation.log',
        when='midnight',
        interval=1,
        backupCount=7  # Keep 7 days
    )
    time_handler.setLevel(logging.INFO)

    formatter = logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    )
    size_handler.setFormatter(formatter)
    time_handler.setFormatter(formatter)

    logger.addHandler(size_handler)
    logger.addHandler(time_handler)

    return logger


logger = setup_rotating_logger('rotating_app')

 

Let us now try using log file rotation:
 

for i in range(1000):
    logger.info(f'Processing record {i}')
    logger.debug(f'Record {i} details: completed in {i * 0.1}ms')

 

RotatingFileHandler manages logs based on file size. When the log file reaches 10MB (specified in bytes), it gets renamed to app_size_rotation.log.1, and a new app_size_rotation.log starts. The backupCount of 5 means you’ll keep 5 old log files before the oldest gets deleted.

TimedRotatingFileHandler rotates based on time intervals. The 'midnight' parameter means it creates a new log file every day at midnight. You could also use 'H' for hourly, 'D' for daily (at any time), or 'W0' for weekly on Monday.

The interval parameter works with the when parameter. With when='H' and interval=6, logs would rotate every 6 hours.
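For example, a handler that rotates every six hours and keeps roughly one day of history might look like this (the file name is illustrative):

from logging.handlers import TimedRotatingFileHandler

six_hour_handler = TimedRotatingFileHandler(
    'app_6h_rotation.log',  # illustrative file name
    when='H',               # rotate on an hourly schedule...
    interval=6,             # ...but only every 6th hour
    backupCount=4           # keep the last 4 rotated files (about one day)
)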

These handlers are essential for production environments. Without them, your application might crash when the disk fills up with logs.

 

# Logging in Different Environments

 
Your logging needs differ between development, staging, and production. Here is how to configure logging that adapts to each environment.
 

import logging
import logging.handlers
import os

def configure_environment_logger(app_name):
    """Configure logger based on environment"""
    environment = os.getenv('APP_ENV', 'development')
    
    logger = logging.getLogger(app_name)
    
    # Clear existing handlers
    logger.handlers = []
    
    if environment == 'development':
        # Development: verbose console output
        logger.setLevel(logging.DEBUG)
        handler = logging.StreamHandler()
        handler.setLevel(logging.DEBUG)
        formatter = logging.Formatter(
            '%(levelname)s - %(name)s - %(funcName)s:%(lineno)d - %(message)s'
        )
        handler.setFormatter(formatter)
        logger.addHandler(handler)
        
    elif environment == 'staging':
        # Staging: detailed file logs + important console messages
        logger.setLevel(logging.DEBUG)

        file_handler = logging.FileHandler('staging.log')
        file_handler.setLevel(logging.DEBUG)
        file_formatter = logging.Formatter(
            '%(asctime)s - %(name)s - %(levelname)s - %(funcName)s - %(message)s'
        )
        file_handler.setFormatter(file_formatter)
        
        console_handler = logging.StreamHandler()
        console_handler.setLevel(logging.WARNING)
        console_formatter = logging.Formatter('%(levelname)s: %(message)s')
        console_handler.setFormatter(console_formatter)
        
        logger.addHandler(file_handler)
        logger.addHandler(console_handler)
        
    elif environment == 'production':
        # Production: structured logs, errors only to console
        logger.setLevel(logging.INFO)

        file_handler = logging.handlers.RotatingFileHandler(
            'production.log',
            maxBytes=50 * 1024 * 1024,  # 50 MB
            backupCount=10
        )
        file_handler.setLevel(logging.INFO)
        file_formatter = logging.Formatter(
            '{"timestamp": "%(asctime)s", "level": "%(levelname)s", '
            '"logger": "%(name)s", "message": "%(message)s"}'
        )
        file_handler.setFormatter(file_formatter)
        
        console_handler = logging.StreamHandler()
        console_handler.setLevel(logging.ERROR)
        console_formatter = logging.Formatter('%(levelname)s: %(message)s')
        console_handler.setFormatter(console_formatter)
        
        logger.addHandler(file_handler)
        logger.addHandler(console_handler)
    
    return logger

 

This environment-based configuration handles each stage differently. Development shows everything on the console with detailed information, including function names and line numbers. This makes debugging fast.

Staging balances development and production. It writes detailed logs to files for investigation but only shows warnings and errors on the console to avoid noise.

Production focuses on performance and structure. It only logs INFO level and above to files, uses JSON formatting for easy parsing, and implements log rotation to manage disk space. Console output is limited to errors only.
 

# Set environment variable (usually done by the deployment system)
os.environ['APP_ENV'] = 'production'

logger = configure_environment_logger('my_application')

logger.debug('This debug message will not appear in production')
logger.info('User logged in successfully')
logger.error('Failed to process payment')

 

The environment is determined by the APP_ENV environment variable. Your deployment system (Docker, Kubernetes, or other cloud platforms) sets this variable automatically.

Notice how we clear existing handlers before configuring. This prevents duplicate handlers if the function is called multiple times during the application lifecycle.

 

# Wrapping Up

 
Good logging makes the difference between quickly diagnosing issues and spending hours guessing what went wrong. Start with basic logging using appropriate severity levels, add structured context to make logs searchable, and configure rotation to prevent disk space problems.

The patterns shown here work for applications of any size. Start simple with basic logging, then add structured logging when you need better searchability, and implement environment-specific configuration when you deploy to production.

Happy logging!
 
 

Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she’s working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.


