# S3 Storage
Massdriver requires blob storage for several key functions:

- OCI bundle repository and provisioning log storage (the `massdriver` bucket)
- Terraform/OpenTofu remote state storage (the `state` bucket)

Argo Workflows also requires blob storage, but only for workflow artifact storage (not to be confused with Massdriver artifacts).
By default, Massdriver installs and configures Minio for all of these use cases. However, S3 can also be used for object storage, and it's strongly recommended for production instances of Massdriver when it is accessible.
## S3 vs Minio
- **S3** is low-cost, fully managed, and highly available, making it ideal for production workloads, compliance, and scalability.
- **Minio** is a self-hosted, S3-compatible solution. It is great for local development, air-gapped environments, or when you want full control over your storage backend.
Massdriver and Argo can each have their blob storage configured independently. Massdriver can use S3 while Argo continues to use Minio, or vice versa.
## Configuring S3 for Massdriver and Argo Workflows
> **Warning:** Switching from Minio to S3 will NOT migrate your data. You must manually migrate any data stored in Minio to S3 before changing these settings, or you will suffer data loss. Changing these settings is only recommended for fresh installs or after a successful migration. For guidance on migrating data, see the Minio documentation.
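One way to perform such a migration is with the MinIO client (`mc`) and its `mirror` command. The sketch below is illustrative only: the alias names, endpoints, credentials, and bucket names are all placeholders or assumed defaults, so verify each against your actual installation before running anything.

```shell
# Register both storage backends with the MinIO client.
# Endpoint and credential values here are placeholders.
mc alias set minio http://<minio-endpoint>:9000 <minio-access-key> <minio-secret-key>
mc alias set s3 https://s3.amazonaws.com <aws-access-key> <aws-secret-key>

# Mirror each bucket's contents into its S3 counterpart.
# Source bucket names are assumed; check what your Minio instance actually contains.
mc mirror minio/massdriver s3/<massdriver-bucket>
mc mirror minio/state s3/<state-bucket>
mc mirror minio/argo-workflows s3/<argo-bucket>
```

After mirroring, spot-check object counts on both sides before pointing Massdriver at the new buckets.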
## Permissions
The following permissions are required on each bucket:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:HeadBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::<bucket>"
      ]
    },
    {
      "Action": [
        "s3:DeleteObject*",
        "s3:GetObject*",
        "s3:ListObject*",
        "s3:PutObject*",
        "s3:RestoreObject"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::<bucket>/*"
      ]
    }
  ]
}
```
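Since the same policy document is needed for each bucket with only the bucket name changing, it can be convenient to generate it programmatically. A minimal sketch (the `bucket_policy` helper is hypothetical, not part of any Massdriver tooling):

```python
import json


def bucket_policy(bucket: str) -> dict:
    """Build the per-bucket IAM policy document shown above."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Bucket-level permissions (listing and existence checks).
                "Action": ["s3:ListBucket", "s3:HeadBucket"],
                "Effect": "Allow",
                "Resource": [f"arn:aws:s3:::{bucket}"],
            },
            {
                # Object-level permissions for reads, writes, and deletes.
                "Action": [
                    "s3:DeleteObject*",
                    "s3:GetObject*",
                    "s3:ListObject*",
                    "s3:PutObject*",
                    "s3:RestoreObject",
                ],
                "Effect": "Allow",
                "Resource": [f"arn:aws:s3:::{bucket}/*"],
            },
        ],
    }


# Emit a policy document ready to paste into an IAM role.
print(json.dumps(bucket_policy("massdriver"), indent=2))
```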
Massdriver will need these permissions for both the `massdriver` and `state` buckets. Argo will need these permissions for the `argo` bucket.
## Disabling Minio
If you are moving to S3 for all object storage and want to skip installing Minio, disable it in your custom `values.yaml`:

```yaml
minio:
  enabled: false
```
## Using S3 with IRSA (Recommended for EKS)
If you are running Massdriver in an AWS EKS cluster, it is strongly recommended to use "IAM Roles for Service Accounts" (IRSA) to authenticate container workloads.
You will need to create a role with the proper trust policy and permissions. Refer to the AWS documentation for setting up the proper trust policy for more information.
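For reference, an IRSA trust policy generally follows the shape below; every bracketed value is a placeholder for your AWS account ID, your cluster's OIDC provider, and the namespace and name of the Kubernetes service account that should be allowed to assume the role. Confirm the exact structure against the AWS documentation.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<account-id>:oidc-provider/oidc.eks.<region>.amazonaws.com/id/<oidc-id>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.<region>.amazonaws.com/id/<oidc-id>:sub": "system:serviceaccount:<namespace>:<service-account-name>",
          "oidc.eks.<region>.amazonaws.com/id/<oidc-id>:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
```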
### Massdriver Configuration
```yaml
massdriver:
  blobStorage:
    type: s3
    massdriverBucket: <massdriver bucket name>
    stateBucket: <state bucket name>
    s3:
      region: <bucket region>
  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: <massdriver role ARN>
```
### Argo Workflows Configuration
```yaml
provisioner:
  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: <argo role ARN>
argo-workflows:
  artifactRepository:
    s3:
      bucket: <argo bucket name>
      endpoint: s3.amazonaws.com
      insecure: false
      accessKeySecret:
        name: "" # set to an empty string to disable credentialed access
        key: ""  # set to an empty string to disable credentialed access
      secretKeySecret:
        name: "" # set to an empty string to disable credentialed access
        key: ""  # set to an empty string to disable credentialed access
```
## Using S3 with Static Credentials
If you are running outside of EKS or prefer to use static AWS credentials, configure Massdriver to use an AWS access key and secret key:
```yaml
massdriver:
  blobStorage:
    type: s3
    massdriverBucket: <massdriver bucket name>
    stateBucket: <state bucket name>
    s3:
      region: <bucket region>
      accessKeyId: <AWS access key>
      secretAccessKey: <AWS secret key>
```
### Argo Workflows Configuration (Static Credentials)
#### Same Credentials as Massdriver
Argo can also use an access key and secret key for S3 permissions. If you want Argo to use the same AWS credentials as Massdriver, reference the Massdriver secret:
```yaml
argo-workflows:
  artifactRepository:
    s3:
      bucket: <argo bucket name>
      endpoint: s3.amazonaws.com
      insecure: false
      accessKeySecret:
        name: "massdriver-massdriver-envs"
        key: "AWS_ACCESS_KEY_ID"
      secretKeySecret:
        name: "massdriver-massdriver-envs"
        key: "AWS_SECRET_ACCESS_KEY"
```
#### Separate Credentials for Argo
If you wish to use a different AWS user for Argo, you can create a separate secret using the `extraManifests` value:
```yaml
extraManifests:
  - apiVersion: v1
    kind: Secret
    metadata:
      name: argo-aws-creds
    type: Opaque
    data:
      AWS_ACCESS_KEY_ID: <base64-encoded-access-key>
      AWS_SECRET_ACCESS_KEY: <base64-encoded-secret-key>
```
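Note that values under `data` in a Kubernetes Secret must be base64-encoded. A quick way to produce them, using hypothetical example credentials (substitute your real ones):

```python
import base64

# Hypothetical example values -- replace with your actual AWS credentials.
access_key = "AKIAEXAMPLEKEY"
secret_key = "exampleSecretKey123"

# Kubernetes Secret `data` fields expect standard base64 encoding.
print(base64.b64encode(access_key.encode()).decode())  # → QUtJQUVYQU1QTEVLRVk=
print(base64.b64encode(secret_key.encode()).decode())
```

The shell equivalent is `echo -n '<value>' | base64` (the `-n` matters: a trailing newline would be encoded into the secret).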
Then reference this secret in your Argo configuration:
```yaml
argo-workflows:
  artifactRepository:
    s3:
      accessKeySecret:
        name: "argo-aws-creds"
        key: "AWS_ACCESS_KEY_ID"
      secretKeySecret:
        name: "argo-aws-creds"
        key: "AWS_SECRET_ACCESS_KEY"
```