Logging in a multi-account AWS environment

Collecting and storing different types of logs is crucial for security and compliance, especially when dealing with standards such as HIPAA, PCI DSS, and others.

When we build a secure multi-account infrastructure with AWS Control Tower, we get a “Log Archive” account in the initial setup. Many AWS services can generate logs: AWS CloudTrail logs, AWS Config logs, VPC Flow Logs, access logs from ELB, API Gateway or CloudFront, DNS logs, WAF logs, and application logs (Lambda functions, containers, servers, etc.). These log files allow administrators and auditors to review actions and events that have occurred.

The “Log Archive” account works as a repository for logs of API activities and resource configurations from all accounts in the landing zone. Configuring cross-account logging may not be straightforward and takes some time. In this post we will look at various sources of logs, bucket policies for dedicated S3 buckets that allow cross-account access, log formats, and some limitations. The Automat-IT Landing Zone solution is preconfigured using AWS and Automat-IT best practices, which allows launching a landing zone in days rather than weeks. Below is the high-level diagram of centralized logging in the Automat-IT Landing Zone solution.

AWS CloudTrail and AWS Config logs

AWS Control Tower is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or AWS service in AWS Control Tower. CloudTrail captures actions for AWS Control Tower as events. If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon S3 bucket, including events for AWS Control Tower. CloudTrail and AWS Config logs are automatically shipped from all accounts into the Log Archive account if you use AWS Control Tower; no extra effort is required.
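The trail itself is managed by Control Tower, but for reference, creating an organization-wide trail that delivers events to a central bucket could look roughly like this with boto3 (a minimal sketch run from the management account; the trail name and bucket name are placeholders):

import boto3

cloudtrail = boto3.client("cloudtrail")

# Placeholder bucket name; in a Control Tower landing zone the
# aws-controltower-logs-* bucket and trail already exist.
LOG_BUCKET = "aws-controltower-logs-<log-archive-account-id>-us-east-1"

# Create an organization-wide, multi-region trail delivering to the central bucket
trail = cloudtrail.create_trail(
    Name="org-trail",
    S3BucketName=LOG_BUCKET,
    IsMultiRegionTrail=True,
    IsOrganizationTrail=True,
    EnableLogFileValidation=True,
)

# A trail does not deliver events until logging is started explicitly
cloudtrail.start_logging(Name=trail["Name"])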

Below is a policy for the S3 bucket that allows only the AWS CloudTrail and AWS Config services to put log files:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSSLRequestsOnly",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
"arn:aws:s3:::aws-controltower-logs-3**********0-us-east-1",
"arn:aws:s3:::aws-controltower-logs-3**********0-us-east-1/*"
            ],
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "false"
                }
            }
        },
        {
            "Sid": "AWSBucketPermissionsCheck",
            "Effect": "Allow",
            "Principal": {
                "Service": [
                     "cloudtrail.amazonaws.com",
                     "config.amazonaws.com"
                ]
            },
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::aws-controltower-logs-3**********0-us-east-1"
        },
        {
            "Sid": "AWSConfigBucketExistenceCheck",
            "Effect": "Allow",
            "Principal": {
                "Service": [
                     "cloudtrail.amazonaws.com",
                     "config.amazonaws.com"
                ]
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::aws-controltower-logs-3**********0-us-east-1"
        },
        {
            "Sid": "AWSBucketDelivery",
            "Effect": "Allow",
            "Principal": {
                "Service": [
                     "cloudtrail.amazonaws.com",
                     "config.amazonaws.com"
                ]
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::aws-controltower-logs-3**********0-us-east-1/o-r********y/AWSLogs/*/*"
        }
    ]
}
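
Such a policy can be attached to the bucket in the Log Archive account from the console or programmatically; a small boto3 sketch (the bucket name and the policy file are placeholders, e.g. the JSON shown above saved to a file):

import json
import boto3

s3 = boto3.client("s3")

# Placeholder bucket name and policy document
bucket = "aws-controltower-logs-<log-archive-account-id>-us-east-1"
with open("bucket-policy.json") as f:
    policy = json.load(f)

# Attach (or replace) the bucket policy
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))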

The following example shows a CloudTrail log entry, illustrating the structure of a typical log file entry for an event, including a record of the identity of the user, role, or service that initiated the action.

{
   "Records":[
      {
         "eventVersion":"1.08",
         "userIdentity":{
            "type":"AWSService",
            "invokedBy":"eks.amazonaws.com"
         },
         "eventTime":"2021-12-02T00:05:13Z",
         "eventSource":"kms.amazonaws.com",
         "eventName":"Encrypt",
         "awsRegion":"ap-southeast-2",
         "sourceIPAddress":"eks.amazonaws.com",
         "userAgent":"eks.amazonaws.com",
         "requestParameters":{
            "keyId":"arn:aws:kms:ap-southeast-2:3**********9:key/6546e1ba-****",
            "encryptionContext":{
               "aws:eks:context":"ba04801b-***"
            },
            "encryptionAlgorithm":"SYMMETRIC_DEFAULT"
         },
         "responseElements":null,
         "requestID":"350ad2d3-***",
         "eventID":"b679d306-***",
         "readOnly":true,
         "resources":[
            {
               "accountId":"3**********9",
               "type":"AWS::KMS::Key",
               "ARN":"arn:aws:kms:ap-southeast-2:3**********9:key/6546e1ba-****"
            }
         ],
         "eventType":"AwsApiCall",
         "managementEvent":true,
         "recipientAccountId":"3**********9",
         "sharedEventID":"674d532c-***",
         "eventCategory":"Management"
      },
....

AWS Config logs contain a history of configuration changes and periodic compliance checks.
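
Besides reading the delivered files from S3, the same configuration history can also be queried directly from AWS Config; a short illustrative boto3 sketch (the resource type and ID are placeholders):

import boto3

config = boto3.client("config")

# Retrieve the configuration history of a single resource (placeholder values)
history = config.get_resource_config_history(
    resourceType="AWS::EC2::SecurityGroup",
    resourceId="sg-0123456789abcdef0",
    limit=10,
)

for item in history["configurationItems"]:
    print(item["configurationItemCaptureTime"], item["configurationItemStatus"])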

Amazon GuardDuty logs

The Amazon GuardDuty administrator (master) detector is usually deployed in the “Audit” account. Other AWS accounts within the organization send findings there, and all collected findings are periodically exported to an S3 bucket in the “Log Archive” account.
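The export itself is configured as a publishing destination on the GuardDuty detector in the Audit account; a minimal boto3 sketch, assuming placeholder values for the detector ID, bucket ARN and KMS key ARN:

import boto3

guardduty = boto3.client("guardduty")

# Placeholder identifiers for the delegated administrator (Audit) account
detector_id = "d2b9d0example"
bucket_arn = "arn:aws:s3:::aws-guardduty-logs-<log-archive-account-id>"
kms_key_arn = "arn:aws:kms:us-east-1:<log-archive-account-id>:key/<key-id>"

# Export all findings from this detector to the central S3 bucket, encrypted with KMS
guardduty.create_publishing_destination(
    DetectorId=detector_id,
    DestinationType="S3",
    DestinationProperties={
        "DestinationArn": bucket_arn,
        "KmsKeyArn": kms_key_arn,
    },
)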

Below is a policy for the S3 bucket that allows only the Amazon GuardDuty service to put log files:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSLogDeliveryWrite",
            "Effect": "Allow",
            "Principal": {
                "Service": "guardduty.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::aws-guardduty-logs-3**********3/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        },
        {
            "Sid": "AWSLogDeliveryAclCheck",
            "Effect": "Allow",
            "Principal": {
                "Service": "guardduty.amazonaws.com"
            },
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::aws-guardduty-logs-3**********3"
        },
        {
            "Sid": "Allow GuardDuty to use the getBucketLocation operation",
            "Effect": "Allow",
            "Principal": {
                "Service": "guardduty.amazonaws.com"
            },
            "Action": "s3:GetBucketLocation",
            "Resource": "arn:aws:s3:::aws-guardduty-logs-3**********3"
        },
        {
            "Sid": "Deny non-HTTPS access",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::aws-guardduty-logs-3**********3/*",
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "false"
                }
            }
        },
        {
            "Sid": "Deny unencrypted object uploads",
            "Effect": "Deny",
            "Principal": {
                "Service": "guardduty.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::aws-guardduty-logs-3**********3/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "aws:kms"
                }
            }
        }
    ]
}

The log format for a GuardDuty finding is as follows:

{
   "schemaVersion":"2.0",
   "accountId":"1**********4",
   "region":"us-east-1",
   "partition":"aws",
   "id":"f2bd80adf3df3ffb07810adf7b71a330",
   "arn":"arn:aws:guardduty:us-east-1:1**********4:detector/d2b9d***/finding/f2bd80a***",
   "type":"Policy:IAMUser/RootCredentialUsage",
   "resource":{
      "resourceType":"AccessKey",
      "accessKeyDetails":{
         "accessKeyId":"ASI*****************",
         "principalId":"1**********4",
         "userType":"Root",
         "userName":"research-user"
      }
   },
   "service":{
      "serviceName":"guardduty",
      "detectorId":"d2b9d***",
      "action":{
         "actionType":"AWS_API_CALL",
         "awsApiCallAction":{
            "api":"DescribeCheckSummaries",
            "serviceName":"trustedadvisor.amazonaws.com",
            "callerType":"Remote IP",
            "remoteIpDetails":{
               "ipAddressV4":"77.***.***.***",
               "organization":{
                  "asn":"12***",
                  "asnOrg":"***-Net internet services Ltd.",
                  "isp":"***net",
                  "org":"***net"
               },
               "country":{
                  "countryName":"Israel"
               },
               "city":{
                  "cityName":"Tel Aviv"
               },
               "geoLocation":{
                  "lat":32.****,
                  "lon":34.****
               }
            },
            "affectedResources":{

            }
         }
      },
      "resourceRole":"TARGET",
      "additionalInfo":{

      },
      "evidence":null,
      "eventFirstSeen":"2021-08-01T10:51:25Z",
      "eventLastSeen":"2021-08-01T10:51:57Z",
      "archived":false,
      "count":10
   },
   "severity":2,
   "createdAt":"2021-08-01T10:56:45.502Z",
   "updatedAt":"2021-08-01T10:58:55.565Z",
   "title":"API DescribeCheckSummaries was invoked using root credentials.",
   "description":"API DescribeCheckSummaries was invoked using root credentials from IP address 77.***.***.***."
}

VPC Flow Logs

VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs or Amazon S3. You can create a flow log for a VPC, a subnet, or a network interface. If you create a flow log for a subnet or VPC, each network interface in that subnet or VPC is monitored.
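
In a member account, publishing flow logs for a whole VPC to the central bucket could look roughly as follows (a boto3 sketch; the VPC ID and bucket ARN are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Placeholder values for the member account
vpc_id = "vpc-0123456789abcdef0"
log_bucket_arn = "arn:aws:s3:::aws-vpc-flow-logs-<log-archive-account-id>"

# Capture accepted and rejected traffic for the whole VPC and deliver it to S3
ec2.create_flow_logs(
    ResourceType="VPC",
    ResourceIds=[vpc_id],
    TrafficType="ALL",
    LogDestinationType="s3",
    LogDestination=log_bucket_arn,
)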

Below is a policy for the S3 bucket that allows putting VPC Flow Log files:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSLogDeliveryWrite",
            "Effect": "Allow",
            "Principal": {
                "Service": "delivery.logs.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::aws-vpc-flow-logs-3**********3/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        },
        {
            "Sid": "AWSLogDeliveryAclCheck",
            "Effect": "Allow",
            "Principal": {
                "Service": "delivery.logs.amazonaws.com"
            },
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::aws-vpc-flow-logs-3**********3"
        }
    ]
}

Flow log data for a monitored network interface is recorded as flow log records, which are log events consisting of fields that describe the traffic flow.

version account-id interface-id srcaddr dstaddr srcport dstport protocol packets bytes start end action log-status

2 14********64 eni-09972**********f9 10.103.5.192 105.***.***.68 22 60173 6 1 48 1631145759 1631145816 ACCEPT OK
2 14********64 eni-09972**********f9 192.***.***.200 10.103.5.192 47288 8181 6 1 40 1631145759 1631145816 REJECT OK
2 14********64 eni-09972**********f9 162.***.***.149 10.103.5.192 48072 8105 6 1 44 1631145759 1631145816 REJECT OK
2 14********64 eni-09972**********f9 105.***.***.68 10.103.5.192 60173 22 6 2 88 1631145759 1631145816 ACCEPT OK

Amazon CloudFront logs

You can configure CloudFront to create log files that contain detailed information about every user request that CloudFront receives. These are called standard logs, also known as access logs. If you enable standard logs, you can also specify the Amazon S3 bucket that you want CloudFront to save files in.
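
Standard logging is part of the distribution configuration, so it can also be enabled by updating the distribution; a rough boto3 sketch (the distribution ID and bucket name are placeholders):

import boto3

cloudfront = boto3.client("cloudfront")

distribution_id = "E1EXAMPLE"  # placeholder distribution ID
log_bucket = "aws-cloudfront-logs-<log-archive-account-id>.s3.amazonaws.com"  # placeholder

# Fetch the current configuration together with its ETag (required for updates)
current = cloudfront.get_distribution_config(Id=distribution_id)
config = current["DistributionConfig"]

# Enable standard (access) logs into the central bucket
config["Logging"] = {
    "Enabled": True,
    "IncludeCookies": False,
    "Bucket": log_bucket,
    "Prefix": "cloudfront/",
}

cloudfront.update_distribution(
    Id=distribution_id,
    DistributionConfig=config,
    IfMatch=current["ETag"],
)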

Below is a policy for the S3 bucket that allows a member account to add the CloudFront canonical ID to the bucket ACL, so that writing logs becomes possible:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CloudFrontPutACL",
            "Effect": "Allow",
            "Principal": {
                "AWS": ["arn:aws:iam::4**********4:root"]
            },
            "Action": [
                "s3:GetBucketAcl",
                "s3:PutBucketAcl"
            ],
            "Resource": "arn:aws:s3:::aws-cloudfront-logs-3**********3",
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalOrgID": "o-**********"
                }
            }
        }
    ]
}

Each entry in a log file gives details about a single viewer request. The log file for a distribution contains 33 fields:

#Version: 1.0
#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id x-host-header cs-protocol cs-bytes time-taken x-forwarded-for ssl-protocol ssl-cipher x-edge-response-result-type cs-protocol-version fle-status fle-encrypted-fields c-port time-to-first-byte x-edge-detailed-result-type sc-content-type sc-content-len sc-range-start sc-range-end

2021-11-19 12:46:00 AMS1-C1 516 178.***.***.167 GET dc*********e7.cloudfront.net / 404 - Mozilla/5.0%20(Windows%20NT%2010.0;%20Win64;%20x64)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/95.0.4638.69%20Safari/537.36 - - Error L5dKmiBVQGha5ntK4QMPCDaf5L_pULiCZ3H14OD9X916Y9ZY104oMQ== dc*********e7.cloudfront.net https 485 2.154 - TLSv1.3 TLS_AES_128_GCM_SHA256 Error HTTP/2.0 - - 63244 2.154 Error application/xml - - -

2021-11-19 12:46:01 AMS1-C1 2285 178.***.***.167 GET dc*********e7.cloudfront.net /app/index.html 200 - Mozilla/5.0%20(Windows%20NT%2010.0;%20Win64;%20x64)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/95.0.4638.69%20Safari/537.36 - - Miss -VoMbggQ9LhG57vO7XYRc4kIhFeh6Or0VKM9es--RD0yvQ4D5SyaZA== dc*********e7.cloudfront.net https 41 1.145 - TLSv1.3 TLS_AES_128_GCM_SHA256 Miss HTTP/2.0 - - 63244 1.145 Miss text/html 1949 - -

2021-11-19 12:46:04 AMS1-C1 480 178.***.***.167 GET dc*********e7.cloudfront.net /app/main-es2015.dbcdf78f996f8e961639.js 206 https://dc*********e7.cloudfront.net/app/index.html Mozilla/5.0%20(Windows%20NT%2010.0;%20Win64;%20x64)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/95.0.4638.69%20Safari/537.36 - - Miss 0Ufi0_3zJX9RWdY0T8iu29pVFpGN0p87Hdm0IUKx-mLG4Qtc-mhFMQ== dc*********e7.cloudfront.net https 202 2.162 - TLSv1.3 TLS_AES_128_GCM_SHA256 Miss HTTP/2.0 - - 63244 1.135 Miss application/javascript 1407813 407813

AWS WAF logging

You can enable logging to get detailed information about traffic that is analyzed by your web ACL. Logged information includes the time that AWS WAF received a web request from your AWS resource, detailed information about the request, and details about the rules that the request matched. You can send your logs to an Amazon CloudWatch Logs log group, an Amazon Simple Storage Service (Amazon S3) bucket, or an Amazon Kinesis Data Firehose delivery stream. CloudWatch Logs and S3 can only be used as destinations within the same AWS account. If we need to send logs to an S3 bucket in a different account, we need to use Kinesis Data Firehose as an intermediate point, as depicted in the high-level diagram above.
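
Associating a web ACL with the Kinesis Data Firehose delivery stream is a single API call; a minimal boto3 sketch (the ARNs are placeholders; note that the delivery stream name must start with aws-waf-logs-):

import boto3

wafv2 = boto3.client("wafv2")

# Placeholder ARNs: the web ACL in the member account and the Firehose stream
web_acl_arn = "arn:aws:wafv2:eu-central-1:<member-account-id>:regional/webacl/Fortinet-OWASP10/<id>"
firehose_arn = "arn:aws:firehose:eu-central-1:<member-account-id>:deliverystream/aws-waf-logs-kinesis"

# Send full web ACL logs to the delivery stream, which forwards them to the Log Archive bucket
wafv2.put_logging_configuration(
    LoggingConfiguration={
        "ResourceArn": web_acl_arn,
        "LogDestinationConfigs": [firehose_arn],
    }
)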

There is a small issue in the destination settings configuration for Kinesis Data Firehose: if you try to set an S3 bucket from a different account in the console, you will see an error message saying that such an S3 bucket does not exist.

However, you can easily set the bucket from the Log Archive account via the API or CloudFormation; an example is below:

DeliveryStream:
    Type: AWS::KinesisFirehose::DeliveryStream
    Properties:
      DeliveryStreamName: "aws-waf-logs-kinesis"
      DeliveryStreamType: DirectPut
      ExtendedS3DestinationConfiguration:
        BucketARN: !Sub "${DestinationBucketARN}"
        BufferingHints:
          IntervalInSeconds: 300
          SizeInMBs: 3
        ErrorOutputPrefix: !Sub "errors/${AWS::AccountId}/${AWS::Region}/"
        Prefix: !Sub "data/${AWS::AccountId}/${AWS::Region}/"
        RoleARN: !GetAtt KinesisRole.Arn

Here is a policy for the S3 bucket that allows the IAM role used by Kinesis Data Firehose in a member account to put WAF logs:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSLogDeliveryWrite",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::3**********4:role/StackSet-CustomControlTower-waf-owast-KinesisRole-***"
            },
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:ListBucketMultipartUploads",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
"arn:aws:s3:::aws-waf-logs-3**********3-us-east-1/*",
"arn:aws:s3:::aws-waf-logs-3**********3-us-east-1"
            ]
        }
    ]
}

The following example shows the WAF log format:

{
   "timestamp":1638358189751,
   "formatVersion":1,
   "webaclId":"arn:aws:wafv2:eu-central-1:3**********4:regional/webacl/Fortinet-OWASP10/f63****",
   "terminatingRuleId":"Default_Action",
   "terminatingRuleType":"REGULAR",
   "action":"ALLOW",
   "terminatingRuleMatchDetails":[
      
   ],
   "httpSourceName":"APIGW",
   "httpSourceId":"3**********4:kb******ke:Prod",
   "ruleGroupList":[
      {
         "ruleGroupId":"Fortinet#all_rules",
         "terminatingRule":null,
         "nonTerminatingMatchingRules":[
            
         ],
         "excludedRules":null
      }
   ],
   "rateBasedRuleList":[
      
   ],
   "nonTerminatingMatchingRules":[
      
   ],
   "requestHeadersInserted":null,
   "responseCodeSent":null,
   "httpRequest":{
      "clientIp":"73.***.***.**",
      "country":"US",
      "headers":[
         {
            "name":"X-Forwarded-For",
            "value":"73.***.***.**"
         },
         {
            "name":"X-Forwarded-Proto",
            "value":"https"
         },
         {
            "name":"X-Forwarded-Port",
            "value":"443"
         },
         {
            "name":"Host",
            "value":"prod-**.***.com"
         },
         {
            "name":"X-Amzn-Trace-Id",
            "value":"Root=1-61a7***"
         },
         {
            "name":"Content-Length",
            "value":"178"
         },
         {
            "name":"authorization",
            "value":"ey****"
         },
         {
            "name":"app-version",
            "value":"1.0.***"
         },
         {
            "name":"build-number",
            "value":"480"
         },
         {
            "name":"language",
            "value":"en"
         },
         {
            "name":"platform",
            "value":"android"
         },
         {
            "name":"app-instance-id",
            "value":"50c000ad-***"
         },
         {
            "name":"content-type",
            "value":"application/json; charset=UTF-8"
         },
         {
            "name":"accept-encoding",
            "value":"gzip"
         },
         {
            "name":"user-agent",
            "value":"okhttp/4.9.1"
         }
      ],
      "uri":"REDACTED",
      "args":"REDACTED",
      "httpVersion":"HTTP/1.1",
      "httpMethod":"REDACTED",
      "requestId":"Jqt***"
   }
}
"value":"okhttp/4.9.1"
         }
      ],
"uri":"REDACTED",
"args":"REDACTED",
"httpVersion":"HTTP/1.1",
"httpMethod":"REDACTED",
"requestId":"Jqt***"
   }
}

Exported application logs

The most interesting and challenging case is when we need to send Lambda logs to an S3 bucket in the Log Archive account. By default, Lambda functions write logs to a CloudWatch log group in the same account and region. A possible option here is to periodically export logs from the CloudWatch log group to the Log Archive S3 bucket. The first limitation is that the S3 bucket MUST be in the same region as the CloudWatch log group. If we have log groups in several AWS regions, we need a dedicated S3 bucket in each of those regions.

Below is a policy for the S3 bucket that allows log exports:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSLogDeliveryWrite",
            "Effect": "Allow",
            "Principal": {
                "Service": "logs.us-east-1.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::aws-app-logs-3**********3/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        },
        {
            "Sid": "AWSLogDeliveryLambda",
            "Effect": "Allow",
            "Principal": {
                "AWS": "ARO******************"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::aws-app-logs-3**********3/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        },
        {
            "Sid": "AWSLogDeliveryAclCheck",
            "Effect": "Allow",
            "Principal": {
                "Service": "logs.us-east-1.amazonaws.com"
            },
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::aws-app-logs-3**********3"
        }
    ]
}

Here is an example of a Lambda function that is executed every hour and creates log export tasks for different log groups: for example, “api_gw_logGroup”, which is created in advance for storing API Gateway logs and set in the function’s environment variables, and log groups starting with ‘/aws/lambda’. There are some limitations to this solution. The first one is that only one export task can be active at a time; that is why the Lambda function starts an export task and periodically checks its status, and only when it has completed can the next one be started. If you have a large amount of logs that cannot be exported within 15 minutes (the maximum Lambda execution time), you can either reduce the time period or redesign the solution with Step Functions. Another limitation is that the “describe_log_groups” call used here returns a maximum of 50 log groups, so if you have more than 50 Lambda functions, this solution needs to be adjusted.

Below is the Lambda code:

import boto3
import os
import time
import datetime

region = os.environ['AWS_REGION']
api_gw_logGroup = os.environ['api_gw_logGroup']
account_id = boto3.client('sts').get_caller_identity().get('Account')
destination = os.environ['Bucket_name']

# Timeframe for export task
export_to_time = int(round(time.time() * 1000))
export_from_time = export_to_time - 3600000

# Values for s3 prefixes 
previous_hour = datetime.datetime.fromtimestamp(time.time() - 3600).strftime('%H')
current_day = datetime.datetime.fromtimestamp(time.time() - 3600).strftime('%d')
current_month = datetime.datetime.fromtimestamp(time.time() - 3600).strftime('%m')
current_year = datetime.datetime.fromtimestamp(time.time() - 3600).strftime('%Y')
client = boto3.client('logs')

def wait_export_task():
    while True:
        time.sleep(2)
        response = client.describe_export_tasks(
            statusCode='RUNNING',
            limit=50
        )
        if not response['exportTasks']:
            print("There are NO Running export tasks. New task can be started")
            break
        else:
            #print(response['exportTasks'])
            print("Waiting till another export task is completed")
            time.sleep(3)
            continue
    return

def logs_export_task(task_name, log_group_name, from_time, to_time, destination_bucket, destination_prefix):
    logs_export_task = client.create_export_task(
    taskName=task_name,
    logGroupName=log_group_name,
    #logStreamNamePrefix='string',
    fromTime=from_time,
    to=to_time,
    destination=destination_bucket,
    destinationPrefix=destination_prefix
    )

def lambda_handler(event, context):
    # Start export task for the log group that contains API Gateway access logs
    destinationPrefix ='AWSLogs/' + account_id + '/api-gw-access-logs/' + region + "/" + api_gw_logGroup + "/" + str(current_year) + "/" + str(current_month) + "/" + str(current_day) + "/" + str(previous_hour)
    wait_export_task()
    task_name = api_gw_logGroup + "-" + str(current_year) + "-" + str(current_month) + "-" + str(current_day) + "-" + str(previous_hour)
    logs_export_task(task_name, api_gw_logGroup, export_from_time, export_to_time, destination, destinationPrefix)
print("Started Export api-gateway access logs. LogGroup: " + api_gw_logGroup)
    # Wait is needed, because AWS allows only one active export task at the moment
    wait_export_task()

    # retrieve a list of CloudWatch log groups for Lambda functions
    lambda_log_groups = client.describe_log_groups(
        logGroupNamePrefix='/aws/lambda',
        limit=50
    )
    # Start export task for every Lambda log group
    for log_group in lambda_log_groups['logGroups']:
        lambda_name = log_group['logGroupName'].split("/aws/lambda/")[1]
        destinationPrefix ='AWSLogs/' + account_id + '/app-logs/' + region + "/" + lambda_name + "/" + str(current_year) + "/" + str(current_month) + "/" + str(current_day) + "/" + str(previous_hour)
        # Wait is needed, because AWS allows only one active export task at the moment
        wait_export_task()
        task_name = log_group['logGroupName'] + "-" + str(current_year) + "-" + str(current_month) + "-" + str(current_day) + "-" + str(previous_hour)
        logs_export_task(task_name, log_group['logGroupName'], export_from_time, export_to_time, destination, destinationPrefix)
        print("Started Export application logs. LogGroup: " + log_group['logGroupName'])

Conclusion

Logging is a very important part of a secure and compliant cloud infrastructure and can be used for day-to-day operations, troubleshooting, and audits. Logs must be stored in a centralized location and encrypted; log integrity and confidentiality are essential. This post described centralized logging for the Automat-IT Landing Zone solution, including different log sources, log formats, and storage configurations.