Discover five useful Python decorators that tidy up your AI code, from concurrency limits, logging, and feature injection to seeding and handling LLM errors.
In AI and machine learning projects, Python decorators are incredibly useful but often underestimated tools. They let you separate core logic (such as model building or data pipelines) from cross-cutting tasks like logging, timing, testing, and data validation.
As a result, the code becomes cleaner, easier to read, and easier to extend, especially as the system grows more complex.
This article introduces five Python decorators that developers use widely and that have proven effective in improving the quality of AI code. The examples use standard Python libraries (such as `functools.wraps`) to focus on how decorators work, making them easy to apply to your own projects.
Concurrency limits when calling LLMs
When working with large language models (LLMs), especially through third-party APIs, you're likely to hit limits on the number of requests, particularly on free plans.
A concurrency-limiting decorator addresses this by controlling how many asynchronous calls can run simultaneously. Using a semaphore, it creates a 'throttle' layer that keeps you from sending too many requests at once and helps you avoid rate-limit errors.
This is a simple but highly effective way to stabilize a system that integrates an LLM.
```python
import asyncio
from functools import wraps

def limit_concurrency(limit=5):
    sem = asyncio.Semaphore(limit)

    def decorator(func):
        @wraps(func)
        async def wrapper(*args, **kwargs):
            async with sem:
                return await func(*args, **kwargs)
        return wrapper
    return decorator

# Application
@limit_concurrency(5)
async def fetch_llm_batch(prompt):
    return await async_api_client.generate(prompt)
```
Structured logger for machine learning systems
In complex machine learning systems, relying on print() quickly breaks down, especially once deployed to production: output is easily lost and difficult to trace.
A structured-logging decorator records executions and errors in JSON format. This allows for faster log searching, easier debugging, and better integration with monitoring systems.
For example, you can apply this decorator to a function that trains an epoch in a neural network model to track each step in detail.
```python
import json
import logging
import time
from functools import wraps

def json_log(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            res = func(*args, **kwargs)
            logging.info(json.dumps({
                "step": func.__name__,
                "status": "success",
                "time": time.time() - start,
            }))
            return res
        except Exception as e:
            logging.error(json.dumps({"step": func.__name__, "error": str(e)}))
            raise
    return wrapper

# Application
@json_log
def train_epoch(model, training_data):
    return model.fit(training_data)
```
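To confirm the decorator emits machine-readable records, you can attach a small log handler that collects messages and then parse one back out of JSON. This is a minimal sketch; `scale` is a hypothetical stand-in for a real training step.

```python
import json
import logging
import time
from functools import wraps

records = []

class ListHandler(logging.Handler):
    """Collects log messages so the JSON records can be inspected."""
    def emit(self, record):
        records.append(record.getMessage())

logger = logging.getLogger()
logger.addHandler(ListHandler())
logger.setLevel(logging.INFO)

def json_log(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            res = func(*args, **kwargs)
            logging.info(json.dumps({"step": func.__name__,
                                     "status": "success",
                                     "time": time.time() - start}))
            return res
        except Exception as e:
            logging.error(json.dumps({"step": func.__name__, "error": str(e)}))
            raise
    return wrapper

@json_log
def scale(values, factor):  # hypothetical stand-in for a training step
    return [v * factor for v in values]

scale([1, 2, 3], 2)
entry = json.loads(records[-1])
print(entry["step"], entry["status"])  # scale success
```

Because each record is valid JSON, a log aggregator can filter by `step` or `status` instead of grepping free-form text.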
Automatically generate input features (Feature Injector)
During the model deployment phase, a very common problem is ensuring that user input data is processed exactly like the initial training data. When migrating a model from a notebook environment (such as Jupyter) to production (e.g., FastAPI), repeating the data transformation steps is often cumbersome and prone to errors.
A 'feature injector' decorator solves this problem by automatically adding the necessary features to the input data before feeding it into the model. This ensures consistency without requiring manual processing each time.
A simple example is automatically adding an `is_weekend` feature based on the date column in the dataset, marking whether a row falls on a Saturday or Sunday.
```python
from functools import wraps

def add_weekend_feature(func):
    @wraps(func)
    def wrapper(df, *args, **kwargs):
        df = df.copy()  # Prevents Pandas mutation warnings
        df['is_weekend'] = df['date'].dt.dayofweek.isin([5, 6]).astype(int)
        return func(df, *args, **kwargs)
    return wrapper

# Application
@add_weekend_feature
def process_data(df):
    # 'is_weekend' is guaranteed to exist here
    return df.dropna()
```
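A quick way to sanity-check the injector is to run it on a tiny DataFrame with known dates. This sketch assumes pandas is installed; the three dates below are a Friday, a Saturday, and a Sunday, so the injected column should read 0, 1, 1.

```python
from functools import wraps

import pandas as pd

def add_weekend_feature(func):
    @wraps(func)
    def wrapper(df, *args, **kwargs):
        df = df.copy()  # Prevents Pandas mutation warnings
        df['is_weekend'] = df['date'].dt.dayofweek.isin([5, 6]).astype(int)
        return func(df, *args, **kwargs)
    return wrapper

@add_weekend_feature
def process_data(df):
    return df.dropna()

df = pd.DataFrame({"date": pd.to_datetime([
    "2024-01-05",  # Friday
    "2024-01-06",  # Saturday
    "2024-01-07",  # Sunday
])})
out = process_data(df)
print(out["is_weekend"].tolist())  # [0, 1, 1]
```

Note that the caller never touches `is_weekend` directly, which is exactly the consistency guarantee you want between training and serving.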
Fix the random seed to ensure reproducibility
During the testing and tuning of hyperparameters, model results can change due to random factors.
The 'seed setter' decorator helps to fix random seeds, thereby ensuring that test runs can be compared fairly with each other. This is especially important when you are testing the impact of a change, such as the learning rate.
Without seed control, it's difficult to know whether performance changes stem from the new configuration or simply from unfavorable random initialization. Fixing the seed helps isolate the variable and makes experiments like A/B testing more reliable.
```python
import random
from functools import wraps

import numpy as np

def lock_seed(seed=42):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            random.seed(seed)
            np.random.seed(seed)
            return func(*args, **kwargs)
        return wrapper
    return decorator

# Application
@lock_seed(42)
def initialize_weights():
    return np.random.randn(10, 10)
```
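The payoff is easy to demonstrate: calling the decorated function twice should produce bit-identical results, because the seed is reset on every call. A minimal sketch, assuming NumPy is installed:

```python
import random
from functools import wraps

import numpy as np

def lock_seed(seed=42):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            random.seed(seed)
            np.random.seed(seed)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@lock_seed(42)
def initialize_weights():
    return np.random.randn(10, 10)

# Two separate calls yield exactly the same weights.
w1 = initialize_weights()
w2 = initialize_weights()
print(np.array_equal(w1, w2))  # True
```

Without the decorator, each call would draw fresh random numbers and the two matrices would differ.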
Fallback in the development environment
When building AI applications, especially retrieval-augmented generation (RAG) systems, you often rely on external services such as LLM APIs. If these services fail (timeouts, exhausted quotas, etc.), the entire system can be disrupted.
The 'dev-mode fallback' decorator acts as a protection layer. When the function encounters an error, instead of crashing, it returns predefined mock data.
This is especially useful in development or CI/CD environments, helping the system remain stable even if external services experience temporary problems.
```python
from functools import wraps

def fallback_mock(mock_data):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:  # Catches timeouts, rate limits, etc.
                return mock_data
        return wrapper
    return decorator

# Application
@fallback_mock(mock_data=[0.01, -0.05, 0.02])
def get_text_embeddings(text):
    return external_api.embed(text)
```
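You can verify the fallback behavior without any external service by having the wrapped function raise, as a real API client would on an outage. A minimal sketch in which the `TimeoutError` simulates a failing dependency:

```python
from functools import wraps

def fallback_mock(mock_data):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:  # Catches timeouts, rate limits, etc.
                return mock_data
        return wrapper
    return decorator

@fallback_mock(mock_data=[0.01, -0.05, 0.02])
def get_text_embeddings(text):
    raise TimeoutError("external service unavailable")  # simulated outage

print(get_text_embeddings("hello"))  # [0.01, -0.05, 0.02]
```

One design caveat: the bare `except Exception` also swallows genuine bugs, so in production you may want to catch only the specific network or quota exceptions your client raises.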
The Python decorators in this article don't change the core logic of the AI system, but they help organize the code better and increase the reliability of the entire pipeline.
From controlling LLM requests, structured logging, ensuring data consistency, to controlling randomness and handling errors, each decorator addresses a very real problem in AI development.
When combined, they form a 'soft infrastructure' that makes code cleaner, easier to debug, and ready for production.
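The decorators in this article compose by simple stacking, since each one wraps the function returned by the one below it. A minimal sketch combining two of them (decorators apply bottom-up, so the seed is fixed first and the fallback wraps the seeded version); `sample_scores` is a hypothetical example function:

```python
import random
from functools import wraps

import numpy as np

def lock_seed(seed=42):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            random.seed(seed)
            np.random.seed(seed)
            return func(*args, **kwargs)
        return wrapper
    return decorator

def fallback_mock(mock_data):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:
                return mock_data
        return wrapper
    return decorator

# Bottom-up: lock_seed wraps sample_scores, fallback_mock wraps the result.
@fallback_mock(mock_data=np.zeros(3))
@lock_seed(0)
def sample_scores():
    return np.random.rand(3)

a = sample_scores()
b = sample_scores()
print(np.array_equal(a, b))  # True: the seed is reset on every call
```

Stacking order matters: if `lock_seed` were on top instead, it would reseed before the fallback logic rather than inside it, which happens to be harmless here but can change behavior for decorators with side effects like logging or retries.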
This article, "5 Python decorators to help write cleaner and more manageable AI code," was edited by the TipsMake team.