Classified as a "Management and Governance" tool in the AWS console, AWS CloudTrail is an auditing, compliance monitoring and governance tool from Amazon Web Services (AWS).
With CloudTrail, AWS account owners can ensure every API call made to every resource in their AWS account is recorded and written to a log. An API call request can be made when:
- A user accesses a resource from the AWS console
- Someone runs an AWS Command Line Interface (AWS CLI) command
- A representational state transfer (REST) API call is made to an AWS resource
These actions can be coming from:
- Human users, e.g., when someone spins up an Amazon EC2 instance from the console
- Applications, e.g., when a bash script calls an AWS CLI command
- Another AWS service, e.g., when an AWS Lambda function writes to an Amazon S3 bucket
CloudTrail saves the API events in a secured, immutable format, which can be used for later analysis.
In this article, we will examine the basics of AWS CloudTrail, learn how to create and enable custom trails, see where the trail logs are saved and how to analyze CloudTrail logs. We will also compare CloudTrail with another AWS cloud service: CloudWatch Logs.
Why AWS CloudTrail?
DevSecOps professionals can view, search for or analyze CloudTrail logs to find:
- Any particular action that happened in the account
- The time the action happened
- The user or process that initiated the action
- The resource(s) affected by the action
Having this kind of visibility into AWS cloud infrastructure can be useful for proactively monitoring cloud vulnerabilities and threats, ensuring adherence to compliance standards and performing post-security-breach analysis. What’s more, management, data and Insights events from custom trails can trigger specific actions in solutions, such as invoking a Lambda function. Read how to read, search and analyze AWS CloudTrail logs.
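The who/what/when questions above map directly onto fields in CloudTrail's JSON records. A minimal sketch of searching a batch of records by event name (the sample records are hypothetical; real trails deliver `{"Records": [...]}` objects in gzipped files on S3):

```python
from datetime import datetime, timezone

# Hypothetical sample records in CloudTrail's JSON log format.
records = [
    {"eventName": "RunInstances", "eventTime": "2019-03-22T03:01:10Z",
     "userIdentity": {"userName": "alice"}},
    {"eventName": "DeleteDBInstance", "eventTime": "2019-03-22T03:04:34Z",
     "userIdentity": {"userName": "Administrator"}},
]

def find_events(records, event_name):
    """Return (user, timestamp) for every record matching event_name."""
    return [
        (r["userIdentity"].get("userName", "unknown"),
         datetime.strptime(r["eventTime"], "%Y-%m-%dT%H:%M:%SZ")
                 .replace(tzinfo=timezone.utc))
        for r in records
        if r["eventName"] == event_name
    ]

matches = find_events(records, "DeleteDBInstance")
print(matches)
```

The same pattern scales to answering "what did this user do?" by filtering on `userIdentity` instead of `eventName`.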
Is AWS CloudTrail enabled by default?
AWS CloudTrail is now enabled for all users by default.
AWS CloudTrail features
Amazon CloudTrail has a number of features you would expect from a monitoring and governance tool. These features include:
- AWS CloudTrail is always on, enabling you to view data from the most recent 90 days
- Event History to see all changes made regarding the creation, modification or deletion of AWS resources
- Multi-region configuration, extended to all newly launched regions, and monitoring of changes with AWS Config
- Log file integrity validation and encryption that can be used with further encryption services, including AWS KMS
- Data events, management events and CloudTrail Insights
Of note, there are three types of events that can be logged in CloudTrail:
1) Management events
2) Data events
3) CloudTrail Insights events
By default, trails and event data stores log management events but not data or Insights events. All event types use a CloudTrail JSON log format.
AWS CloudTrail offers additional functionality as a data lake via its managed service to assist customers in capturing, storing and analyzing logs. For deeper audit and security analysis, users can run SQL queries on activity logs in CloudTrail Lake and CloudTrail events. CloudTrail Lake supports multi-cloud and multisource integration, consolidating activity events from AWS and non-AWS sources. You can learn how AWS Config compares to CloudTrail.
Amazon CloudTrail pricing
Amazon CloudTrail is free of charge if you set up a single trail to deliver a single copy of management events in each region. With CloudTrail, users can download, filter, query and view data from the most recent 90 days for all management events at no cost.
Keep in mind Amazon S3 charges will apply based on your usage.
Additionally, you can use AWS CloudTrail Insights by enabling Insights events in your trails. Beyond the free tier, CloudTrail charges are based on the number of events recorded in each region. Pricing is as follows:
- Management Events: $2.00 per 100,000 events
- Data Events: $0.10 per 100,000 events
- CloudTrail Insights: $0.35 per 100,000 write management events
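Using the listed per-100,000-event rates, a rough regional cost estimate can be sketched (the event counts and the helper are hypothetical; actual AWS bills may include other charges such as S3 storage):

```python
# Per-100,000-event rates from the pricing list above (USD).
RATES = {"management": 2.00, "data": 0.10, "insights": 0.35}

def estimate_cost(event_counts):
    """Estimate regional CloudTrail charges for {event_type: event_count}."""
    return sum(RATES[kind] * count / 100_000
               for kind, count in event_counts.items())

# Hypothetical month: 500,000 paid management events and 3 million data events.
cost = estimate_cost({"management": 500_000, "data": 3_000_000})
print(f"${cost:.2f}")  # $13.00
```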
CloudTrail Event history
AWS account administrators don’t have to do anything to enable CloudTrail: it’s enabled by default when a user creates an account. Event history serves as this default trail, and the information in it is kept for the last 90 days on a rolling basis.
To view the default trail, we can open the CloudTrail console and choose “Event history” from the navigation pane:
From the Event history, a user may set a date range and specify a particular resource type or resource name, an event name, an AWS access key ID and other filters to narrow the search. The image below shows how we narrow down our search to view events related to resources of the RDS DBInstance type.
Clicking on a particular event’s record will show more information. The snippet below shows part of the JSON record for the DeleteDBInstance event:
{
  "eventVersion": "1.05",
  "userIdentity": {
    "type": "IAMUser",
    "principalId": "XXXXXXXXXXXXXXXXXXXX",
    "arn": "arn:aws:iam::1234567890:user/Administrator",
    "accountId": "1234567890",
    "accessKeyId": "XXXXXXXXXXXXXXXX",
    "userName": "Administrator",
    "sessionContext": {
      "attributes": {
        "mfaAuthenticated": "false",
        "creationDate": "2019-03-22T03:03:44Z"
      }
    },
    "invokedBy": "signin.amazonaws.com"
  },
  "eventTime": "2019-03-22T03:04:34Z",
  "eventSource": "rds.amazonaws.com",
  "eventName": "DeleteDBInstance",
  "awsRegion": "us-east-1",
  "sourceIPAddress": "12.34.56.78",
  "userAgent": "signin.amazonaws.com",
  "requestParameters": {
    "dBInstanceIdentifier": "mysql-db",
    "skipFinalSnapshot": true,
    "deleteAutomatedBackups": true
  },
  ......
We can see that an IAM user named “Administrator” deleted the “mysql-db” instance at a particular date and time in the US-East-1 region without taking a final snapshot. We can also see the user was able to log in to the console without multi-factor authentication (MFA).
Information like this can strengthen the case for requiring users to use MFA, for administrators to review access to RDS and for establishing role-based access controls where necessary.
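The security-relevant fields from a record like this can be pulled out programmatically. A minimal sketch, using a trimmed, hand-copied version of the snippet above:

```python
import json

# A trimmed version of the DeleteDBInstance record shown above.
event = json.loads("""
{
  "userIdentity": {
    "type": "IAMUser",
    "userName": "Administrator",
    "sessionContext": {"attributes": {"mfaAuthenticated": "false"}}
  },
  "eventTime": "2019-03-22T03:04:34Z",
  "eventName": "DeleteDBInstance",
  "awsRegion": "us-east-1",
  "requestParameters": {"dBInstanceIdentifier": "mysql-db",
                        "skipFinalSnapshot": true}
}
""")

attrs = event["userIdentity"].get("sessionContext", {}).get("attributes", {})
summary = {
    "who": event["userIdentity"]["userName"],
    "what": event["eventName"],
    "target": event["requestParameters"]["dBInstanceIdentifier"],
    "mfa_used": attrs.get("mfaAuthenticated") == "true",
    "final_snapshot_skipped": event["requestParameters"]["skipFinalSnapshot"],
}
print(summary)
```

A report like `summary` is exactly what a reviewer needs to flag the missing MFA and the skipped final snapshot.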
Creating a trail
It’s also possible to create custom trails. A trail is a user-created audit definition that can capture one or more types of events. Unlike Event history, CloudTrail trail logs are not limited to 90 days of retention. They can be delivered to an S3 bucket or AWS CloudWatch Logs and configured to send SNS notifications when a particular event happens.
In the image below, we can see a trail called “Trail1.” The trail’s log files are delivered to an S3 bucket called “athena-cloudtrails.”
To create a trail, a number of parameters need to be specified. The images below show each of these parameters.
First, we need to give our trail a name. This can be anything meaningful. We must also specify if the trail will audit account activities in all regions or only the current region. We can also specify if the trail should be enabled at the organization level for multi-account setups.
Next, we specify if we want to track read events like Describe or List API operations, write events like Create or Delete operations, both types of operations or none at all. These events are considered management events because they are related to actions performed on the resources, not actions happening within the resources.
Next, we have data events. Data events track API operations happening within a specific resource. At the time of writing, two types of resource operations are supported: S3 and Lambda.
With data events, it’s possible to track Amazon S3 object-level operations such as PutObject. We can track them for all buckets or for specific buckets:
For Lambda, the trail can capture an event every time any function or a particular function is invoked:
Finally, we need to specify where the trail data will be stored:
We can create a new bucket or choose an existing bucket for storing the trail log files. The files will be saved under a folder in the S3 bucket. The folder structure has the following naming style:
/AWSLogs/<account-id>/CloudTrail/<region>/yyyy/mm/dd/.
We can choose to put a file name prefix as well.
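Given that fixed folder layout, the prefix for any account, region and day can be constructed directly, which is handy when fetching log files for a specific window. A small sketch (the account ID is illustrative):

```python
from datetime import date

def cloudtrail_prefix(account_id, region, day, bucket_prefix=""):
    """Build the S3 key prefix CloudTrail uses for one account, region and day."""
    return (f"{bucket_prefix}/AWSLogs/{account_id}/CloudTrail/"
            f"{region}/{day:%Y/%m/%d}/")

prefix = cloudtrail_prefix("123456789012", "us-east-1", date(2019, 3, 22))
print(prefix)  # /AWSLogs/123456789012/CloudTrail/us-east-1/2019/03/22/
```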
There are two options for securing the logs: encrypting each file with a KMS key and, optionally, letting CloudTrail validate the files. With validation, CloudTrail can check whether a log file was altered after it was delivered to S3.
Administrators can also be notified via SNS alert whenever a CloudTrail log file is created.
AWS CloudTrail files
The image below shows files from the trail we created before (Trail1), saved under the athena-cloudtrails bucket. The files are gzipped archives and have a naming pattern of:
<account_id>_CloudTrail_<region>_yyyymmdd<time>_<unique_hex_number>.json.gz
If validation is enabled, CloudTrail creates another separate folder structure with the naming pattern:
/AWSLogs/<account_id>/CloudTrail-Digest/<region>/yyyy/mm/dd
These folders are used to store the digest files. A digest file stores the names of the log files delivered to the S3 bucket in the last hour, their hash values and a “digital signature” of the last digest file. CloudTrail uses the digital signatures and the hash values to validate that files have not been tampered with since they were stored.
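The hash-comparison half of that validation can be illustrated in a few lines. This is a simplified sketch with made-up data: the real mechanism (what `aws cloudtrail validate-log-files` performs) also verifies the digest file's digital signature against CloudTrail's public key.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical digest entry mapping a delivered log file to its recorded hash.
digest_entry = {
    "logFileName": "trail1-example.json.gz",
    "hashValue": sha256_hex(b"original log bytes"),
}

def is_untampered(log_bytes: bytes, entry: dict) -> bool:
    """Re-hash a stored log file and compare it to the digest record."""
    return sha256_hex(log_bytes) == entry["hashValue"]

print(is_untampered(b"original log bytes", digest_entry))  # True
print(is_untampered(b"altered log bytes", digest_entry))   # False
```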
Enabling or disabling trails
We can enable or disable an existing trail by toggling the button at the top right corner of the trail’s property screen:
This may be necessary when you have a number of trails running in your account and need to identify which one is used by your log management or monitoring platform. You can disable the trails one at a time and check.
Sending AWS CloudTrail events to CloudWatch
Like CloudTrail, Amazon CloudWatch is a core service of the AWS platform. Simply put, CloudWatch can be considered “the eyes and ears” of any AWS account. While CloudTrail tracks and records API calls made to objects, CloudWatch offers a number of facilities for monitoring other resources in the account, sending alerts based on the resource state, scheduling Lambda functions and other jobs, and hosting log files from different AWS services and resources.
Some typical use cases of CloudWatch can be:
- AWS billing alerts
- EC2 auto-scaling alerts
- Performance monitoring dashboards for RDS instances
- CloudWatch events for Lambda functions or data pipelines
- CloudWatch log groups and log streams for different resource types (e.g., VPC flow logs)
It’s possible to publish CloudTrail events to CloudWatch Logs. With this approach, CloudTrail can take advantage of CloudWatch features such as:
- Easy searching through logs
- Sending alerts on logged events
- Creating metric filters and dashboards from logged events
To send a trail to CloudWatch, two things need to exist:
- A CloudWatch log group and a log stream
- An IAM role and its policy that grants CloudTrail access to CloudWatch
The feature to send CloudTrail events to CloudWatch Logs is available after creating the trail. To use CloudWatch, the trail needs to be edited afterward. In the image below, we are editing and configuring our Trail1 to publish its logs to CloudWatch.
Clicking the “Configure” button brings up the next section:
CloudWatch needs to know the log group and log stream names to which the events will be published. We can choose the default log group (CloudTrail) as suggested or specify an existing log group. Each trail configured to send logs to CloudWatch will have a log stream under this log group.
When we click Continue, the wizard asks for the IAM role to use:
Once again, we can specify our own custom role and policy or let the configuration wizard create a role and policy for us. The default name for the role will be CloudTrail_CloudWatchLogs_Role. The IAM policy attached to this role will grant CloudTrail CreateLogStream and PutLogEvents rights to CloudWatch Logs.
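For reference, a minimal sketch of what such a policy looks like (the region, account ID and log group ARN here are illustrative; the wizard-generated policy may differ in detail):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
      "Resource": "arn:aws:logs:us-east-1:123456789012:log-group:CloudTrail:*"
    }
  ]
}
```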
We can save the changes once the IAM role and policy have been specified.
Next, we can browse the log group and see the log stream for the trail has been created:
The naming pattern for the log stream is:
<account_id>_CloudTrail_<region>
Inside the log stream, we can view the log events:
Although sending CloudTrail logs to CloudWatch is simple, users need to be mindful of a limitation. The maximum event size in CloudWatch Logs is 256 KB. CloudTrail respects that and will not send an event’s data to CloudWatch Logs if it’s more than that size.
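A pre-flight size check against that limit is straightforward to sketch. The events below are hypothetical; the point is simply measuring the serialized JSON payload:

```python
import json

MAX_CLOUDWATCH_EVENT_BYTES = 256 * 1024  # CloudWatch Logs per-event size limit

def fits_in_cloudwatch(event: dict) -> bool:
    """Check whether an event's serialized JSON is within the 256 KB limit."""
    return len(json.dumps(event).encode("utf-8")) <= MAX_CLOUDWATCH_EVENT_BYTES

small_event = {"eventName": "DeleteDBInstance"}
# Hypothetical oversized event, e.g. a call with a very large request payload.
huge_event = {"requestParameters": {"payload": "x" * 300_000}}
print(fits_in_cloudwatch(small_event), fits_in_cloudwatch(huge_event))  # True False
```

Running a check like this over a sample of your own logs is a quick way to gauge whether the limit is a real risk for your workload.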
CloudTrail best practices
Here are some best practice tips for using CloudTrail:
- If you are using AWS Organizations to manage your multi-account setup, create a trail to apply to the organization. This trail will be created in the master account and will log events from the master account and other member accounts. The trail will serve as a “master” log for all activities in your AWS setup.
- Include all regions in your trail. You may be inadvertently creating AWS resources in only one region, but someone may create a resource in a different region. This is typically true for proof of concepts or when people don’t pay attention. If not careful, these resources could be running undetected, costing money.
- Unless you want to find out who is accessing or listing your resources, record only “write events” in the trail. This will show resource creation, modification and deletion events and reduce log size.
- People often create multiple trails. Too many logs can increase S3 storage costs over time and cause confusion. Create one trail to capture everything unless there is a need to create separate trails. Also, don’t create separate trails in each region; choose a “master region” and configure your trail to capture events from all regions. Include all regions in your trail to ensure it’s visible from every region’s CloudTrail console. By default, a trail that does not include all regions is only visible in the region where it was created.
- If you are searching for recent events in the last 90 days, use event history instead of searching through S3 files. This will improve the query time.
- Don’t send CloudTrail logs to CloudWatch unless you are sure none of your event data will ever cross the 256 KB limit. This is even more important if you use CloudWatch alarms for security and network-related API calls. You may miss important alerts if some events are dropped because of their sizes.
- Instead of manually searching through the logs, use industry-standard log management and SIEM tools, like Sumo Logic, to contextualize activities around threat hunting and Cloud Infrastructure Security.
Final words: AWS CloudTrail
This was a high-level introduction to AWS CloudTrail. As we saw, it’s easy to set up. Once running, the logs can be a valuable source of information for forensic analysis and governance audit. Setting up the trails is the first step for DevSecOps. The next step would be searching through the logs and analyzing those for insight. Learn more about the Sumo Logic CloudTrail monitoring tools.
Complete visibility for DevSecOps
Reduce downtime and move from reactive to proactive monitoring.