Multi-account backup copy in AWS

In the previous post about the Landing Zone solution we looked at what an AWS Backup policy is and how to centrally manage the AWS Backup service across multiple AWS accounts. Backups were created in the same account and Region as the target resource, but what can we do if we need to copy backups to another account for security reasons, or to a different AWS Region for disaster recovery purposes? In this post we will look at the different ways of copying backups and their limitations. The examples and recommendations are based on the many projects implemented by Automat-IT.

Copying backups to a different AWS account

First of all, it is worth mentioning that AWS Backup added support for DocumentDB and Neptune on 8 November 2021, and we will cover them as well.

As a prerequisite, you have to create a backup vault in your destination account. You must use a vault other than the default vault to perform cross-account backup. You then assign a customer managed KMS key to encrypt backups in the destination account, and a resource-based access policy that allows AWS Backup to copy recovery points into the vault. The following CloudFormation template can be used:

AWSTemplateFormatVersion: 2010-09-09
Description: This template creates backup vault required for the cross-account backup and cross-Region copy using AWS Backup

Metadata:
  'AWS::CloudFormation::Interface':
    ParameterGroups:
      - Label:
          default: AWS Backup Configuration
        Parameters:
          - vaultname
          - organizationid
          - sourceaccountid
    ParameterLabels:
      vaultname:
        default: Backup vault name
      organizationid:
        default: Enter your AWS organizations ID
      sourceaccountid:
        default: Enter the AWS account ID for your source backup account

Parameters:
  vaultname:
    Type: String
  organizationid:
    Type: String
  sourceaccountid:
    Type: String

Resources:
  cabKey:
    Type: 'AWS::KMS::Key'
    Properties:
      Description: A symmetric CMK
      KeyPolicy:
        Version: 2012-10-17
        Id: cab-kms-key
        Statement:
          - Sid: Enable IAM User Permissions
            Effect: Allow
            Principal:
              AWS: !Sub 'arn:aws:iam::${AWS::AccountId}:root'
            Action: 'kms:*'
            Resource: '*'
          - Sid: Allow use of the key
            Effect: Allow
            Principal:
              AWS: !Sub 
                - 'arn:aws:iam::${yoursourceaccountid}:root'
                - yoursourceaccountid: !Ref sourceaccountid
            Action:
              - 'kms:DescribeKey'
              - 'kms:Encrypt'
              - 'kms:Decrypt'
              - 'kms:ReEncrypt*'
              - 'kms:GenerateDataKey'
              - 'kms:GenerateDataKeyWithoutPlaintext'
            Resource: '*'
          - Sid: Allow attachment of persistent resources
            Effect: Allow
            Principal:
              AWS: !Sub 
                - 'arn:aws:iam::${yoursourceaccountid}:root'
                - yoursourceaccountid: !Ref sourceaccountid
            Action:
              - 'kms:CreateGrant'
              - 'kms:ListGrants'
              - 'kms:RevokeGrant'
            Resource: '*'
            Condition:
              Bool:
                'kms:GrantIsForAWSResource': true
  cabvault:
    Type: 'AWS::Backup::BackupVault'
    Properties:
      AccessPolicy:
        Version: 2012-10-17
        Statement:
          - Sid: Enable backup vault access
            Effect: Allow
            Action: 'backup:CopyIntoBackupVault'
            Resource: '*'
            Principal: '*'
            Condition:
              StringEquals:
                'aws:PrincipalOrgID': !Ref organizationid
      BackupVaultName: !Ref vaultname
      EncryptionKeyArn: !GetAtt 
        - cabKey
        - Arn

Outputs:
  cabvault:
    Value: !Ref cabvault
  cabKey:
    Value: !Ref cabKey

For all services except Amazon EFS, cross-account backup only supports customer managed keys. It does not support vaults that are encrypted with AWS managed keys, including default vaults, because AWS managed keys cannot be shared between accounts. In the source account, if your resources are encrypted with a customer managed key, you must share this key with the destination account. Resources encrypted with an AWS managed key cannot be copied.
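
Sharing the source customer managed key simply means extending its key policy so that the destination account can use it. Below is a minimal boto3 sketch of that step; the key ARN, account ID, statement ID, and the exact set of actions are illustrative and should be adapted to your environment.

import json
import boto3

kms = boto3.client('kms')

# Placeholders: the source account CMK and the destination backup account ID
KEY_ID = 'arn:aws:kms:ap-southeast-2:111111111111:key/11111111-2222-3333-4444-555555555555'
DESTINATION_ACCOUNT_ID = '222222222222'

# Read the current key policy ('default' is the only policy name KMS supports)
policy = json.loads(kms.get_key_policy(KeyId=KEY_ID, PolicyName='default')['Policy'])

# Allow the destination account to use the key when copying recovery points
policy['Statement'].append({
    'Sid': 'AllowDestinationAccountUseOfTheKey',
    'Effect': 'Allow',
    'Principal': {'AWS': f'arn:aws:iam::{DESTINATION_ACCOUNT_ID}:root'},
    'Action': ['kms:DescribeKey', 'kms:Decrypt', 'kms:ReEncrypt*',
               'kms:GenerateDataKey*', 'kms:CreateGrant'],
    'Resource': '*'
})

kms.put_key_policy(KeyId=KEY_ID, PolicyName='default', Policy=json.dumps(policy))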

You can then create a backup plan and choose a destination account that is part of your organizational unit in AWS Organizations.

The full backup policy looks like the following:

{
  "plans": {
    "backup": {
      "regions": {
        "@@assign": [
"ap-southeast-2"
        ]
      },
      "rules": {
        "backup": {
          "schedule_expression": {
            "@@assign": "cron(10 14 ? * * *)"
          },
          "start_backup_window_minutes": {
            "@@assign": "60"
          },
          "complete_backup_window_minutes": {
            "@@assign": "120"
          },
          "lifecycle": {
            "delete_after_days": {
              "@@assign": "30"
            }
          },
          "target_backup_vault_name": {
            "@@assign": "Default"
          },
          "recovery_point_tags": {
            "Name": {
              "tag_key": {
                "@@assign": "Name"
              },
              "tag_value": {
                "@@assign": "BackupOrg"
              }
            }
          },
          "copy_actions": {
            "arn:aws:backup:ap-southeast-2:5**********9:backup-vault:cabvault": {
              "target_backup_vault_arn": {
                "@@assign": "arn:aws:backup:ap-southeast-2:5**********9:backup-vault:cabvault"
              },
              "lifecycle": {}
            }
          }
        }
      },
      "backup_plan_tags": {
        "Name": {
          "tag_key": {
            "@@assign": "Name"
          },
          "tag_value": {
            "@@assign": "Backup"
          }
        }
      },
      "selections": {
        "tags": {
          "backup": {
            "iam_role_arn": {
              "@@assign": "arn:aws:iam::$account:role/service-role/AWSBackupDefaultServiceRole"
            },
            "tag_key": {
              "@@assign": "Backup"
            },
            "tag_value": {
              "@@assign": [
"true"
              ]
            }
          }
        }
      },
      "advanced_backup_settings": {
        "ec2": {
          "windows_vss": {
            "@@assign": "disabled"
          }
        }
      }
    }
  }
}
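
The policy above is created and attached from the organization's management account, either in the AWS Organizations console or programmatically. A minimal boto3 sketch, assuming the policy JSON is saved locally and the target OU ID is known, could look like this (file name, policy name, and OU ID are placeholders):

import boto3

org = boto3.client('organizations')

# Load the backup policy document shown above (placeholder file name)
with open('backup-policy.json') as f:
    policy_document = f.read()

# Create the backup policy in the management account
response = org.create_policy(
    Name='daily-backup-policy',
    Description='Daily backups with a cross-account copy',
    Type='BACKUP_POLICY',
    Content=policy_document
)

# Attach it to the target organizational unit (placeholder OU ID)
org.attach_policy(
    PolicyId=response['Policy']['PolicySummary']['Id'],
    TargetId='ou-xxxx-xxxxxxxx'
)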

When backup jobs are executed on the defined schedule, you will see them in the AWS Backup console:

In the next tab you can see copy jobs:

And in the AWS Backup console of the destination account you will see the copied snapshots:

Make sure that your RDS, Aurora, and DocumentDB automated backup windows are different from the AWS Backup plan window, otherwise you can get an error message:

One more limitation is that Amazon RDS, Aurora, DocumentDB, and Neptune support either cross-Region backup or cross-account backup, but not both in the same backup plan. You can use an AWS Lambda function to accomplish both. Also, copying Amazon RDS custom option groups across AWS Regions is not supported.
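
One way to work around this limitation is to let the backup plan perform the cross-account copy and then start a second, cross-Region copy from a Lambda function once the first copy job completes (for example, triggered by the Copy Job State Change event). The sketch below shows the idea; the event field names, vault names, Region, and IAM role ARN are assumptions to adapt.

import boto3

backup = boto3.client('backup')

def lambda_handler(event, context):
    # Copy job ID from the "Copy Job State Change" event (assumed field name)
    copy_job_id = event['detail']['copyJobId']

    # Look up the recovery point created by the completed cross-account copy
    copy_job = backup.describe_copy_job(CopyJobId=copy_job_id)['CopyJob']
    recovery_point_arn = copy_job['DestinationRecoveryPointArn']

    # Start a second, cross-Region copy of that recovery point (placeholder ARNs)
    backup.start_copy_job(
        RecoveryPointArn=recovery_point_arn,
        SourceBackupVaultName='cabvault',
        DestinationBackupVaultArn='arn:aws:backup:eu-west-1:111111111111:backup-vault:dr-vault',
        IamRoleArn='arn:aws:iam::111111111111:role/service-role/AWSBackupDefaultServiceRole'
    )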

Amazon EC2 does not allow cross-account copies of AWS Marketplace AMIs.

DynamoDB did not support cross-account backup until November 2021, when advanced DynamoDB backup became available (on 24 November 2021). To use these enhanced backup features, you simply need to opt in to having AWS Backup manage your DynamoDB backups via the AWS Management Console or the AWS Backup API.
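
The opt-in can also be scripted. A minimal boto3 sketch, assuming you want AWS Backup to fully manage DynamoDB backups in the current Region:

import boto3

backup = boto3.client('backup')

# Opt in to DynamoDB backups and to the advanced backup features managed by AWS Backup
backup.update_region_settings(
    ResourceTypeOptInPreference={'DynamoDB': True},
    ResourceTypeManagementPreference={'DynamoDB': True}
)

# Verify the current settings
print(backup.describe_region_settings()['ResourceTypeManagementPreference'])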

Once a DynamoDB backup is completed, a copy job to the destination backup vault is started automatically:

In the destination account we can see recovery points:

Next we will look at one more use case: a requirement to copy backups to an AWS account outside the given AWS organization. AWS Backup can copy backups only within an AWS organization, so we need a custom solution.

Amazon DynamoDB table export to S3

While DynamoDB backups cannot be copied outside the organization with AWS Backup, we can use the feature of exporting a table to an S3 bucket. Exporting a table does not consume read capacity on the table and has no impact on table performance and availability. You can export table data to an S3 bucket owned by another AWS account, and to a different Region than the one your table is in. Your data is always encrypted end-to-end.

To export data from an Amazon DynamoDB table to an Amazon S3 bucket, point-in-time recovery (PITR) must be enabled on the source table. You can export table data from any point in time within the PITR window, up to 35 days.
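
PITR can be enabled per table in the console or programmatically; a minimal boto3 sketch (the table name is a placeholder) is shown below.

import boto3

dynamodb = boto3.client('dynamodb')

# Enable point-in-time recovery on the source table (placeholder table name)
dynamodb.update_continuous_backups(
    TableName='my-table',
    PointInTimeRecoverySpecification={'PointInTimeRecoveryEnabled': True}
)

# Confirm the PITR status
backups = dynamodb.describe_continuous_backups(TableName='my-table')
print(backups['ContinuousBackupsDescription']['PointInTimeRecoveryDescription']['PointInTimeRecoveryStatus'])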

Up to 300 export tasks, or up to 100 TB of table size, can be exported concurrently.

First of all, we need to create an S3 bucket in the target account and Region and apply the following bucket policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DynamoDBexport",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::3**********7:role/export-dynamodb-tables-lambda-role"
            },
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::dynamodb-backup/*"
        }
    ]
}

The Lambda role must have the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ExportTable",
            "Effect": "Allow",
            "Action": [
                "dynamodb:ListTables",
                "dynamodb:ExportTableToPointInTime",
                "dynamodb:DescribeExport",
                "s3:PutObjectAcl",
"s3:PutObject",
"s3:AbortMultipartUpload"
            ],
            "Resource": "*"
        }
    ]
}

The following Lambda function can be executed daily by a scheduled CloudWatch Events rule:

import boto3
import os
from datetime import datetime, date

account_id = boto3.client('sts').get_caller_identity().get('Account')
region = boto3.session.Session().region_name
S3_bucket_dest = os.environ['S3_BUCKET_DEST']
S3_bucket_owner = os.environ['S3_BUCKET_OWNER']
environment = os.environ['ENV']
current_day = date.today().strftime('%d')
current_month = date.today().strftime('%m')
current_year = date.today().strftime('%Y')


def lambda_handler(event, context):

    dynamodb = boto3.client('dynamodb')

    dynamodb_tables_list = dynamodb.list_tables(
        # ExclusiveStartTableName='string',
        Limit=99
    )
    for table in dynamodb_tables_list['TableNames']:
        try:
            start_export = dynamodb.export_table_to_point_in_time(
                TableArn=f"arn:aws:dynamodb:{region}:{account_id}:table/{table}",
                ExportTime=datetime(int(current_year), int(current_month), int(current_day)),
                S3Bucket=S3_bucket_dest,
                S3BucketOwner=S3_bucket_owner,
                S3Prefix=f"{environment}/{table}/{current_year}/{current_month}/{current_day}",
                S3SseAlgorithm='AES256',
                ExportFormat='DYNAMODB_JSON'
            )
        except Exception as e:
            print(table + ' can not be exported. ' + str(e))
            continue

In case of any exception, for example:

<table_name> can not be exported. An error occurred (PointInTimeRecoveryUnavailableException) when calling the ExportTableToPointInTime operation: Point in time recovery is not enabled for table '<table_name>'

the table will be skipped and Lambda will try to export the next one.
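
Exports run asynchronously, so it may be useful to check their status after the Lambda has started them. A minimal sketch using the ExportArn returned by export_table_to_point_in_time (the ARN value below is a placeholder):

import boto3

dynamodb = boto3.client('dynamodb')

# ExportArn is returned in start_export['ExportDescription']['ExportArn'] (placeholder value below)
export_arn = 'arn:aws:dynamodb:ap-southeast-2:111111111111:table/my-table/export/01234567890123-abcdefgh'

description = dynamodb.describe_export(ExportArn=export_arn)['ExportDescription']
print(description['ExportStatus'])        # IN_PROGRESS, COMPLETED or FAILED
print(description.get('FailureMessage'))  # populated only when the export fails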

An export output contains several files:

The manifest-summary.json file contains summary information about the export job. The manifest-files.json file contains information about the files that contain your exported table data. Checksum files for both manifests are also kept. DynamoDB also creates an empty file named _started in the same directory as the manifest files. This file verifies that the destination bucket is writable and that the export has begun. It can safely be deleted.

The data itself is located in the data/ folder.
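
If you later need to process an export, manifest-files.json can be used to locate the data objects. The sketch below reflects my understanding of the export layout; the bucket, key, and the dataFileS3Key / itemCount field names should be verified against your own output.

import json
import boto3

s3 = boto3.client('s3')

BUCKET = 'dynamodb-backup'  # destination bucket from the bucket policy above
MANIFEST_KEY = 'prod/my-table/2021/12/01/AWSDynamoDB/01234567890123-abcdefgh/manifest-files.json'  # placeholder

# manifest-files.json contains one JSON object per line, each describing a data file
manifest = s3.get_object(Bucket=BUCKET, Key=MANIFEST_KEY)['Body'].read().decode('utf-8')
for line in manifest.splitlines():
    entry = json.loads(line)
    print(entry['dataFileS3Key'], entry.get('itemCount'))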

Conclusion

Backups are important, but having a copy of a backup in a different location is necessary for a disaster recovery process, when we lose access to an entire AWS Region. Copying backups to another AWS account can protect us from unauthorized access, when an intruder gets admin access to an account and deletes the data together with its backups. AWS has already automated many of these tasks with the AWS Backup service and backup policies, but we still need to pay attention to the existing limitations and overcome them with custom solutions.