
GCP

| Field | Description | Scheme |
| ----- | ----------- | ------ |
| `connection` | The connection to use; mutually exclusive with `credentials` | Connection |
| `credentials` | The credentials to use for authentication | EnvVar |

There are three options for connecting to GCP:

GKE Workload Identity

GKE Workload Identity is the default when neither a connection nor credentials are specified.
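With Workload Identity, the `gcpConnection` block can be omitted entirely. A minimal sketch, reusing the bucket and check names from the other examples on this page:

```yaml
apiVersion: canaries.flanksource.com/v1
kind: Canary
metadata:
  name: database-backup-check
spec:
  interval: 60
  folder:
    - name: gcs auth test
      path: gcs://somegcsbucket
      # No gcpConnection block: credentials are resolved via GKE Workload Identity
```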

Connection
gcs-connection.yaml
```yaml
apiVersion: canaries.flanksource.com/v1
kind: Canary
metadata:
  name: database-backup-check
spec:
  interval: 60
  folder:
    - name: gcs auth test
      path: gcs://somegcsbucket
      gcpConnection:
        connection: connection://gcp/internal
```
Inline
gcp-inline.yaml
```yaml
apiVersion: canaries.flanksource.com/v1
kind: Canary
metadata:
  name: database-backup-check
spec:
  interval: 60
  folder:
    - name: gcs auth test
      path: gcs://somegcsbucket
      gcpConnection:
        credentials:
          valueFrom:
            secretKeyRef:
              name: gcp-credentials
              key: AUTH_ACCESS_TOKEN
```

Using a GCP Cloud SQL instance instead of the default Postgres StatefulSet

The flanksource/mission-control chart deploys a Postgres StatefulSet in the cluster by default. Instead, you can use a Cloud SQL instance and connect to it through the Cloud SQL proxy. The proxy uses a GCP service account to authenticate to the Cloud SQL instance via IAM authentication. To disable the Postgres StatefulSet and deploy the Cloud SQL proxy instead, follow these steps:

Create Postgres DB

Create a Postgres database named mission_control in the GCP Cloud SQL instance

Add Instance User

Enable IAM authentication on the Cloud SQL PostgreSQL instance and add a service account as a database user

Create GCP Service Account

Create a GCP service account mission-control-sa

Attach roles to GCP Service Account

Attach the roles/cloudsql.instanceUser and roles/cloudsql.client roles to the GCP service account
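The four steps above can be sketched with gcloud; this is a hedged outline, where `PROJECT_ID` and `INSTANCE` are placeholders for your own project and Cloud SQL instance names:

```shell
PROJECT_ID=my-gcp-project   # placeholder
INSTANCE=my-sql-instance    # placeholder

# 1. Create the database
gcloud sql databases create mission_control --instance="$INSTANCE"

# 2. Add the service account as an IAM database user
#    (IAM authentication must already be enabled on the instance)
gcloud sql users create \
  "mission-control-sa@${PROJECT_ID}.iam.gserviceaccount.com" \
  --instance="$INSTANCE" --type=cloud_iam_service_account

# 3. Create the GCP service account
gcloud iam service-accounts create mission-control-sa

# 4. Grant the Cloud SQL roles
for role in roles/cloudsql.instanceUser roles/cloudsql.client; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:mission-control-sa@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role="$role"
done
```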

Deploy Cloud SQL Proxy

Annotate the Kubernetes service account to enable Workload Identity Federation (WIF). Here is a sample template for deploying the Cloud SQL proxy. This template is not part of the Mission Control Helm chart.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mission-control-cloud-sql-proxy
  annotations:
    iam.gke.io/gcp-service-account: mission-control-sa@gcp-project-id.iam.gserviceaccount.com
---
apiVersion: v1
kind: Service
metadata:
  name: mission-control-cloud-sql-proxy
spec:
  selector:
    app: mission-control-cloud-sql-proxy
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mission-control-cloud-sql-proxy
spec:
  selector:
    matchLabels:
      app: mission-control-cloud-sql-proxy
  template:
    metadata:
      labels:
        app: mission-control-cloud-sql-proxy
    spec:
      serviceAccountName: mission-control-cloud-sql-proxy
      containers:
        - name: cloud-sql-proxy
          image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.8.0
          ports:
            - containerPort: 5432
          args:
            - "--address=0.0.0.0"
            - "--port=5432"
            - "{{ .Values.gcpProject }}:{{ .Values.cloudSqlProxy.sqlInstanceRegion }}:{{ .Values.sqlInstanceName }}"
            - "--auto-iam-authn"
          securityContext:
            runAsNonRoot: true
```
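The `iam.gke.io/gcp-service-account` annotation only takes effect once the GCP service account trusts the Kubernetes service account. A hedged sketch of that binding, where `PROJECT_ID` and `NAMESPACE` are placeholders for your project and the namespace the proxy runs in:

```shell
# Allow the Kubernetes SA to impersonate the GCP SA via Workload Identity;
# PROJECT_ID and NAMESPACE are placeholders.
gcloud iam service-accounts add-iam-policy-binding \
  "mission-control-sa@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/mission-control-cloud-sql-proxy]"
```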
Create k8s Secret

Create a secret with the connection string that allows the Mission Control microservices to connect to the Cloud SQL instance via the Cloud SQL proxy.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mission-control-postgres-connection
  namespace: flanksource
type: Opaque
stringData:
  DB_URL: {{ (printf "postgres://mission-control-sa%40<gcp-project-id>.iam@mission-control-cloud-sql-proxy.%s.svc.cluster.local/mission_control?sslmode=disable" (include "chart.urlencodePostgresUser" .) .Release.Namespace ) }} # Note that '@' has been replaced with '%40'
```

The format of the connection string is: `postgres://<iam user email without gserviceaccount.com>@<cloud sql proxy svc name>.<namespace of svc>.svc.cluster.local/<db>?sslmode=disable`
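Because the IAM username itself contains an `@`, it must be percent-encoded before it is placed in the userinfo section of the URL, so it is not mistaken for the user/host separator. A small sketch assembling the DSN, where the project ID is a placeholder:

```shell
# Percent-encode the '@' inside the IAM user, then assemble the DSN.
IAM_USER="mission-control-sa@my-project.iam"   # project ID is a placeholder
ENCODED_USER="${IAM_USER/@/%40}"
DB_URL="postgres://${ENCODED_USER}@mission-control-cloud-sql-proxy.flanksource.svc.cluster.local/mission_control?sslmode=disable"
echo "$DB_URL"
```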

Deploy Mission Control

Now deploy the flanksource/mission-control chart with the following values:

```yaml
db:
  create: false
  secretKeyRef:
    name: mission-control-postgres-connection
canary-checker:
  db:
    external:
      secretKeyRef:
        name: mission-control-postgres-connection
        key: DB_URL
config-db:
  db:
    external:
      secretKeyRef:
        name: mission-control-postgres-connection
        key: DB_URL
```
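A hedged sketch of the deployment itself; the chart repository URL below is an assumption and should be verified against the Flanksource documentation:

```shell
# Chart repo URL is an assumption; values.yaml holds the overrides above.
helm repo add flanksource https://flanksource.github.io/charts
helm repo update
helm upgrade --install mission-control flanksource/mission-control \
  --namespace flanksource --values values.yaml
```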