Watchtower: Python CloudWatch Logging

Watchtower is a log handler for Amazon Web Services CloudWatch Logs.

CloudWatch Logs is a log management service built into AWS. It is conceptually similar to services like Splunk and Loggly, but is more lightweight, cheaper, and tightly integrated with the rest of AWS.

Watchtower, in turn, is a lightweight adapter between the Python logging system and CloudWatch Logs. It uses the boto3 AWS SDK, and lets you plug your application logging directly into CloudWatch without the need to install a system-wide log collector like awscli-cwlogs and round-trip your logs through the instance's syslog. It aggregates logs into batches to avoid sending an API request per log message, while guaranteeing a delivery deadline (60 seconds by default).
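The batching strategy can be sketched in plain Python. This is a simplified illustration of the general pattern (a queue drained under a flush deadline), not Watchtower's actual implementation; the class and method names here are made up:

```python
import time
from queue import Queue, Empty

class BatchingSender:
    """Collect messages and ship them as one batch when either the batch
    is full or the oldest message has waited send_interval seconds."""

    def __init__(self, send_batch, send_interval=60, max_batch_count=10000):
        self.send_batch = send_batch          # callable that ships a list of messages
        self.send_interval = send_interval    # delivery deadline, in seconds
        self.max_batch_count = max_batch_count
        self.queue = Queue()

    def put(self, message):
        self.queue.put(message)

    def run_once(self):
        """Drain the queue into one batch, waiting at most send_interval."""
        batch, deadline = [], time.monotonic() + self.send_interval
        while len(batch) < self.max_batch_count:
            timeout = deadline - time.monotonic()
            if timeout <= 0:
                break
            try:
                batch.append(self.queue.get(timeout=timeout))
            except Empty:
                break
        if batch:
            self.send_batch(batch)  # one API call for the whole batch
        return batch
```

Watchtower additionally flushes when the batch reaches a byte-size limit (max_batch_size), which this sketch omits for brevity.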


pip install watchtower


Install awscli and set your AWS credentials (run aws configure).

import watchtower, logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
logger.addHandler(watchtower.CloudWatchLogHandler())
logger.info("Hi")
logger.info(dict(foo="bar", details={}))

After running the example, you can see the log output in your AWS console.

Example: Flask logging with Watchtower

import watchtower, flask, logging

logging.basicConfig(level=logging.INFO)
app = flask.Flask("loggable")
handler = watchtower.CloudWatchLogHandler()
app.logger.addHandler(handler)
logging.getLogger("werkzeug").addHandler(handler)

@app.route('/')
def hello_world():
    return 'Hello World!'

if __name__ == '__main__':
    app.run()


Examples: Querying CloudWatch logs

This section is not specific to Watchtower. It demonstrates the use of awscli and jq to read and search CloudWatch logs on the command line.

For the Flask example above, you can retrieve your application logs with the following two commands:

aws logs get-log-events --log-group-name watchtower --log-stream-name loggable | jq '.events[].message'
aws logs get-log-events --log-group-name watchtower --log-stream-name werkzeug | jq '.events[].message'

CloudWatch Logs supports alerting and dashboards based on metric filters, which are pattern rules that extract information from your logs and feed it to alarms and dashboard graphs. For example, you can log structured JSON data with Watchtower, set up a metric filter to extract data from the log stream, build a dashboard to visualize it, and configure an alarm that sends an email.
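The metric filter and alarm halves of that setup can be sketched with awscli. This assumes JSON log events carrying a "level" field; the filter name, metric name, namespace, alarm name, and SNS topic ARN below are all placeholders to adapt to your own setup:

```shell
# Count log events whose JSON "level" field is "ERROR"
aws logs put-metric-filter \
    --log-group-name watchtower \
    --filter-name errors \
    --filter-pattern '{ $.level = "ERROR" }' \
    --metric-transformations metricName=ErrorCount,metricNamespace=MyApp,metricValue=1

# Alarm (e.g. via an SNS email subscription) when any error is logged in a minute
aws cloudwatch put-metric-alarm \
    --alarm-name my-app-errors \
    --metric-name ErrorCount \
    --namespace MyApp \
    --statistic Sum \
    --period 60 \
    --threshold 1 \
    --comparison-operator GreaterThanOrEqualToThreshold \
    --evaluation-periods 1 \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:my-alerts
```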



Authors

  • Andrey Kislyuk


Please report bugs, issues, feature requests, etc. on GitHub.


Licensed under the terms of the Apache License, Version 2.0.

API documentation

class watchtower.CloudWatchLogHandler(log_group='watchtower', stream_name=None, use_queues=True, send_interval=60, max_batch_size=1048576, max_batch_count=10000, boto3_session=None, create_log_group=True, *args, **kwargs)

Create a new CloudWatch log handler object. This is the main entry point to the functionality of the module.

  • log_group (String) – Name of the CloudWatch log group to write logs to. By default, the name of this module is used.
  • stream_name (String) – Name of the CloudWatch log stream to write logs to. By default, the name of the logger that processed the message is used. Accepts a format string parameter of {logger_name}.
  • use_queues (Boolean) – If True, logs will be queued on a per-stream basis and sent in batches. To manage the queues, a queue handler thread will be spawned.
  • send_interval (Integer) – Maximum time (in seconds, or a timedelta) to hold messages in queue before sending a batch.
  • max_batch_size (Integer) – Maximum size (in bytes) of the queue before sending a batch. From CloudWatch Logs documentation: The maximum batch size is 1,048,576 bytes, and this size is calculated as the sum of all event messages in UTF-8, plus 26 bytes for each log event.
  • max_batch_count (Integer) – Maximum number of messages in the queue before sending a batch. From CloudWatch Logs documentation: The maximum number of log events in a batch is 10,000.
  • boto3_session (boto3.session.Session) – Session object to create boto3 logs clients. Accepts AWS credential, profile_name, and region_name from its constructor.
  • create_log_group (Boolean) – If True (the default), create the CloudWatch log group if it does not already exist.
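The max_batch_size limit above follows the CloudWatch Logs batch size formula quoted in the parameter description: the UTF-8 byte length of each message, plus 26 bytes of per-event overhead. A small helper, written here only to illustrate that arithmetic:

```python
PER_EVENT_OVERHEAD = 26     # bytes CloudWatch Logs adds per log event
MAX_BATCH_SIZE = 1_048_576  # CloudWatch Logs hard limit on a batch, in bytes

def batch_size(messages):
    """Size of a batch of message strings, as CloudWatch Logs counts it."""
    return sum(len(m.encode("utf-8")) + PER_EVENT_OVERHEAD for m in messages)

def fits_in_one_batch(messages):
    """Whether the messages can be shipped in a single PutLogEvents call."""
    return batch_size(messages) <= MAX_BATCH_SIZE
```

Note that the byte count uses the UTF-8 encoding of each message, so multi-byte characters count for more than one byte each.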
