or 1940 upstream docker release with the existing installation (#3316)

* chore(docker): Adding dynamic env generator
* ci(make): Create deployment yamls
* ci(make): Generating docker envs
* change env name structure
* proper env names
* chore(docker): clickhouse
* chore(docker-compose): generate env file format
* chore(docker-compose): Adding docker-compose
* chore(docker-compose): format make
* chore(docker-compose): Update version
* chore(docker-compose): adding new secrets
* ci(make): default target
* ci(Makefile): Update common protocol
* chore(docker-compose): refactor folder structure
* ci(make): rename to docker-envs
* feat(docker): add clickhouse volume definition
  Add a persistent ClickHouse volume to the docker-compose configuration so data is preserved between container restarts.
* refactor: move env files to docker-envs directory
  Update all environment file references in docker-compose.yaml to a consistent directory structure, placing them under docker-envs/ for better organization.
* fix(docker): rename imagestorage to images
  The `imagestorage` service and its environment file are renamed to `images` for clarity and consistency; the name now reflects the service's purpose of handling images.
* feat(docker): introduce docker-compose template
  A new template generates docker-compose files from a list of services. The template uses helm syntax.
* fix: Properly set FILES variable in Makefile
  The FILES variable was not being set correctly because of subshell issues. This commit fixes the assignment so the variable is available to subsequent commands.
* feat: Refactor docker-compose template for local development
  A complete overhaul of the docker-compose template, switching from a helm-based template to a native docker-compose.yml file. This simplifies local development and makes the OpenReplay stack easier to manage. The new template includes services for PostgreSQL, ClickHouse, Redis, MinIO, Nginx and Caddy, plus migration jobs that set up the database and MinIO.
* fix(docker-compose): Add fallback empty environment
  Add an empty environment to the docker-compose template so a container can start even when its env_file is missing.
* feat(docker): Add domainname and aliases to services
  Add the `domainname` and `aliases` attributes to each service in docker-compose.yaml so services can reach each other by their fully qualified domain names. Also adds the shared volume and empty environment variables.
* update version
* chore(docker): don't pull parallel
* chore(docker-compose): proper pull
* chore(docker-compose): Update db service urls
* fix(docker-compose): clickhouse url
* chore(clickhouse): Adding clickhouse db migration
* chore(docker-compose): Adding clickhouse
* fix(tpl): variable injection
* chore(fix): compose tpl variable rendering
* chore(docker-compose): Allow override pg variable
* chore(helm): remove assist-server
* chore(helm): pg integrations
* chore(nginx): removed services
* chore(docker-compose): Multiple aliases
* chore(docker-compose): Adding more env vars
* feat(install): Dynamically generate passwords
  Dynamic password generation identifies `change_me_*` entries in `common.env` and replaces them with random passwords, improving security and simplifying initial setup. The changes: replace the hardcoded password substitutions with a loop over all `change_me_*` entries, use `grep` to find the tokens, generate a random password for each, and update `common.env` with the results (see the sketch after this message).
* chore(docker-compose): disable clickhouse password
* fix(docker-compose): clickhouse-migration
* compose: chalice env
* chore(docker-compose): overlay vars
* chore(docker): Adding ch port
* chore(docker-compose): disable clickhouse password
* fix(docker-compose): migration name
* feat(docker): skip specific values
* chore(docker-compose): define namespace

---------

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
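For readers skimming the diff, a minimal sketch of that password-generation loop (the full version lives in the updated install.sh further down; the standalone script form here is illustrative):

#!/usr/bin/env bash
# Sketch of the change_me_* replacement described above. Assumes common.env sits
# in the current directory and contains placeholders such as "change_me_pg_password",
# and that openssl is available (the real install.sh installs it on demand).

randomPass() {
  openssl rand -hex 10
}

# Replace every distinct change_me_* token with a freshly generated secret.
grep -o 'change_me_[a-zA-Z0-9_]*' common.env | sort -u | while read -r token; do
  sed -i "s/${token}/$(randomPass)/g" common.env
  echo "Generated password for ${token}"
done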
Parent: 8f67edde8d
Commit: b3594136ce
48 changed files with 922 additions and 323 deletions
1 scripts/docker-compose/.gitignore (vendored, normal file)
@ -0,0 +1 @@
hacks/yamls

@ -1,28 +0,0 @@
|
|||
ASSIST_JWT_SECRET=${COMMON_JWT_SECRET}
|
||||
ASSIST_KEY=${COMMON_JWT_SECRET}
|
||||
ASSIST_RECORDS_BUCKET=records
|
||||
ASSIST_URL="http://assist-openreplay:9001/assist/%s"
|
||||
AWS_DEFAULT_REGION="us-east-1"
|
||||
CH_COMPRESSION="false"
|
||||
PYTHONUNBUFFERED="0"
|
||||
REDIS_STRING="redis://redis:6379"
|
||||
S3_HOST="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
|
||||
S3_KEY="${COMMON_S3_KEY}"
|
||||
S3_SECRET="${COMMON_S3_SECRET}"
|
||||
SITE_URL="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
|
||||
ch_host="clickhouse"
|
||||
ch_port="9000"
|
||||
ch_port_http="8123"
|
||||
ch_username="default"
|
||||
js_cache_bucket=sessions-assets
|
||||
jwt_secret="${COMMON_JWT_SECRET}"
|
||||
pg_dbname="postgres"
|
||||
pg_host="postgresql"
|
||||
pg_password="${COMMON_PG_PASSWORD}"
|
||||
sessions_bucket=mobs
|
||||
sessions_region="us-east-1"
|
||||
sourcemaps_bucket=sourcemaps
|
||||
sourcemaps_reader="http://sourcemapreader-openreplay:9000/sourcemaps/%s/sourcemaps"
|
||||
version_number="${COMMON_VERSION}"
|
||||
CLUSTER_URL=""
|
||||
POD_NAMESPACE=""
|
||||
|
|
@ -1,10 +0,0 @@
|
|||
AWS_ACCESS_KEY_ID=${COMMON_S3_KEY}
|
||||
AWS_SECRET_ACCESS_KEY=${COMMON_S3_SECRET}
|
||||
BUCKET_NAME=sessions-assets
|
||||
LICENSE_KEY=''
|
||||
AWS_ENDPOINT='http://minio:9000'
|
||||
AWS_REGION='us-east-1'
|
||||
KAFKA_SERVERS='kafka.db.svc.cluster.local:9092'
|
||||
KAFKA_USE_SSL='false'
|
||||
ASSETS_ORIGIN='https://${COMMON_DOMAIN_NAME}:443/sessions-assets'
|
||||
REDIS_STRING='redis://redis:6379'
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
ASSIST_JWT_SECRET=${COMMON_JWT_SECRET}
|
||||
ASSIST_KEY=${COMMON_JWT_SECRET}
|
||||
AWS_DEFAULT_REGION="us-east-1"
|
||||
S3_HOST="https://${COMMON_DOMAIN_NAME}:443"
|
||||
S3_KEY=changeMeMinioAccessKey
|
||||
S3_SECRET=changeMeMinioPassword
|
||||
REDIS_URL=redis
|
||||
CLEAR_SOCKET_TIME='720'
|
||||
debug='0'
|
||||
redis='false'
|
||||
uws='false'
|
||||
|
|
@ -1,31 +0,0 @@
|
|||
ASSIST_JWT_SECRET=${COMMON_JWT_SECRET}
|
||||
ASSIST_KEY=${COMMON_JWT_SECRET}
|
||||
ASSIST_RECORDS_BUCKET=records
|
||||
ASSIST_URL="http://assist-openreplay:9001/assist/%s"
|
||||
AWS_DEFAULT_REGION="us-east-1"
|
||||
CH_COMPRESSION="false"
|
||||
PYTHONUNBUFFERED="0"
|
||||
REDIS_STRING="redis://redis:6379"
|
||||
S3_HOST="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
|
||||
S3_KEY="${COMMON_S3_KEY}"
|
||||
S3_SECRET="${COMMON_S3_SECRET}"
|
||||
SITE_URL="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
|
||||
ch_host="clickhouse"
|
||||
ch_port="9000"
|
||||
ch_port_http="8123"
|
||||
ch_username="default"
|
||||
js_cache_bucket=sessions-assets
|
||||
jwt_secret="${COMMON_JWT_SECRET}"
|
||||
pg_dbname="postgres"
|
||||
pg_host="postgresql"
|
||||
pg_password="${COMMON_PG_PASSWORD}"
|
||||
sessions_bucket=mobs
|
||||
sessions_region="us-east-1"
|
||||
sourcemaps_bucket=sourcemaps
|
||||
sourcemaps_reader="http://sourcemapreader-openreplay:9000/sourcemaps/%s/sourcemaps"
|
||||
version_number="${COMMON_VERSION}"
|
||||
CLUSTER_URL=""
|
||||
POD_NAMESPACE=""
|
||||
JWT_REFRESH_SECRET=${COMMON_JWT_REFRESH_SECRET}
|
||||
JWT_SPOT_REFRESH_SECRET=${COMMON_JWT_REFRESH_SECRET}
|
||||
JWT_SPOT_SECRET=${COMMON_JWT_SPOT_SECRET}
|
||||
|
|
@ -1,15 +1,20 @@
|
|||
COMMON_VERSION="v1.22.0"
|
||||
COMMON_PROTOCOL="https"
|
||||
COMMON_DOMAIN_NAME="change_me_domain"
|
||||
COMMON_JWT_SECRET="change_me_jwt"
|
||||
COMMON_JWT_SPOT_SECRET="change_me_jwt"
|
||||
COMMON_JWT_REFRESH_SECRET="change_me_jwt_refresh"
|
||||
COMMON_S3_KEY="change_me_s3_key"
|
||||
COMMON_S3_SECRET="change_me_s3_secret"
|
||||
COMMON_PG_PASSWORD="change_me_pg_password"
|
||||
COMMON_VERSION="v1.21.0"
|
||||
COMMON_JWT_REFRESH_SECRET="change_me_jwt_refresh"
|
||||
COMMON_JWT_SPOT_REFRESH_SECRET="change_me_jwt_spot_refresh"
|
||||
COMMON_ASSIST_JWT_SECRET="change_me_assist_jwt_secret"
|
||||
COMMON_ASSIST_KEY="change_me_assist_key"
|
||||
|
||||
## DB versions
|
||||
######################################
|
||||
POSTGRES_VERSION="14.5.0"
|
||||
POSTGRES_VERSION="17.2.0"
|
||||
REDIS_VERSION="6.0.12-debian-10-r33"
|
||||
MINIO_VERSION="2023.2.10-debian-11-r1"
|
||||
CLICKHOUSE_VERSION="25.1-alpine"
|
||||
######################################
|
||||
|
|
|
|||
|
|
@ -1,11 +0,0 @@
|
|||
CH_USERNAME='default'
|
||||
CH_PASSWORD=''
|
||||
CLICKHOUSE_STRING='clickhouse-openreplay-clickhouse.db.svc.cluster.local:9000/default'
|
||||
LICENSE_KEY=''
|
||||
KAFKA_SERVERS='kafka.db.svc.cluster.local:9092'
|
||||
KAFKA_USE_SSL='false'
|
||||
pg_password="${COMMON_PG_PASSWORD}"
|
||||
QUICKWIT_ENABLED='false'
|
||||
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql:5432/postgres"
|
||||
REDIS_STRING='redis://redis:6379'
|
||||
ch_db='default'
|
||||
|
|
@ -1,15 +1,34 @@
|
|||
|
||||
# vim: ft=yaml
|
||||
version: '3'
|
||||
|
||||
services:
|
||||
|
||||
postgresql:
|
||||
image: bitnami/postgresql:${POSTGRES_VERSION}
|
||||
container_name: postgres
|
||||
volumes:
|
||||
- pgdata:/var/lib/postgresql/data
|
||||
networks:
|
||||
- openreplay-net
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- postgresql.db.svc.cluster.local
|
||||
environment:
|
||||
POSTGRESQL_PASSWORD: ${COMMON_PG_PASSWORD}
|
||||
POSTGRESQL_PASSWORD: "${COMMON_PG_PASSWORD}"
|
||||
|
||||
clickhouse:
|
||||
image: clickhouse/clickhouse-server:${CLICKHOUSE_VERSION}
|
||||
container_name: clickhouse
|
||||
volumes:
|
||||
- clickhouse:/var/lib/clickhouse
|
||||
networks:
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- clickhouse-openreplay-clickhouse.db.svc.cluster.local
|
||||
environment:
|
||||
CLICKHOUSE_USER: "default"
|
||||
CLICKHOUSE_PASSWORD: ""
|
||||
CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT: "1"
|
||||
|
||||
redis:
|
||||
image: bitnami/redis:${REDIS_VERSION}
|
||||
|
|
@ -17,7 +36,9 @@ services:
|
|||
volumes:
|
||||
- redisdata:/bitnami/redis/data
|
||||
networks:
|
||||
- openreplay-net
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- redis-master.db.svc.cluster.local
|
||||
environment:
|
||||
ALLOW_EMPTY_PASSWORD: "yes"
|
||||
|
||||
|
|
@ -27,7 +48,9 @@ services:
|
|||
volumes:
|
||||
- miniodata:/bitnami/minio/data
|
||||
networks:
|
||||
- openreplay-net
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- minio.db.svc.cluster.local
|
||||
ports:
|
||||
- 9001:9001
|
||||
environment:
|
||||
|
|
@ -63,7 +86,7 @@ services:
|
|||
volumes:
|
||||
- ../helmcharts/openreplay/files/minio.sh:/tmp/minio.sh
|
||||
environment:
|
||||
MINIO_HOST: http://minio:9000
|
||||
MINIO_HOST: http://minio.db.svc.cluster.local:9000
|
||||
MINIO_ACCESS_KEY: ${COMMON_S3_KEY}
|
||||
MINIO_SECRET_KEY: ${COMMON_S3_SECRET}
|
||||
user: root
|
||||
|
|
@ -80,7 +103,7 @@ services:
|
|||
bash /tmp/minio.sh init || exit 100
|
||||
|
||||
db-migration:
|
||||
image: bitnami/postgresql:14.5.0
|
||||
image: bitnami/postgresql:14.5.0
|
||||
container_name: db-migration
|
||||
profiles:
|
||||
- "migration"
|
||||
|
|
@ -101,65 +124,317 @@ services:
|
|||
- /bin/bash
|
||||
- -c
|
||||
- |
|
||||
until PGPASSWORD=${COMMON_PG_PASSWORD} psql -h postgresql -U postgres -d postgres -c '\q'; do
|
||||
until psql -c '\q'; do
|
||||
echo "PostgreSQL is unavailable - sleeping"
|
||||
sleep 1
|
||||
done
|
||||
echo "PostgreSQL is up - executing command"
|
||||
psql -v ON_ERROR_STOP=1 -f /tmp/init_schema.sql
|
||||
|
||||
frontend-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/frontend:${COMMON_VERSION}
|
||||
container_name: frontend
|
||||
clickhouse-migration:
|
||||
image: clickhouse/clickhouse-server:${CLICKHOUSE_VERSION}
|
||||
container_name: clickhouse-migration
|
||||
profiles:
|
||||
- "migration"
|
||||
depends_on:
|
||||
- clickhouse
|
||||
- minio-migration
|
||||
networks:
|
||||
- openreplay-net
|
||||
restart: unless-stopped
|
||||
volumes:
|
||||
- ../schema/db/init_dbs/clickhouse/init_schema.sql:/tmp/init_schema.sql
|
||||
environment:
|
||||
CH_HOST: "clickhouse-openreplay-clickhouse.db.svc.cluster.local"
|
||||
CH_PORT: "9000"
|
||||
CH_PORT_HTTP: "8123"
|
||||
CH_USERNAME: "default"
|
||||
CH_PASSWORD: ""
|
||||
entrypoint:
|
||||
- /bin/bash
|
||||
- -c
|
||||
- |
|
||||
# Checking variable is empty. Shell independant method.
|
||||
# Wait for Minio to be ready
|
||||
until nc -z -v -w30 clickhouse-openreplay-clickhouse.db.svc.cluster.local 9000; do
|
||||
echo "Waiting for Minio server to be ready..."
|
||||
sleep 1
|
||||
done
|
||||
|
||||
echo "clickhouse is up - executing command"
|
||||
clickhouse-client -h ${CH_HOST} --user ${CH_USERNAME} ${CH_PASSWORD} --port ${CH_PORT} --multiquery < /tmp/init_schema.sql || true
|
||||
|
||||
alerts-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/alerts:${COMMON_VERSION}
|
||||
domainname: app.svc.cluster.local
|
||||
container_name: alerts
|
||||
networks:
|
||||
- openreplay-net
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- alerts-openreplay
|
||||
- alerts-openreplay.app.svc.cluster.local
|
||||
volumes:
|
||||
- shared-volume:/mnt/efs
|
||||
env_file:
|
||||
- alerts.env
|
||||
- docker-envs/alerts.env
|
||||
environment: {} # Fallback empty environment if env_file is missing
|
||||
restart: unless-stopped
|
||||
|
||||
|
||||
|
||||
analytics-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/analytics:${COMMON_VERSION}
|
||||
domainname: app.svc.cluster.local
|
||||
container_name: analytics
|
||||
networks:
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- analytics-openreplay
|
||||
- analytics-openreplay.app.svc.cluster.local
|
||||
volumes:
|
||||
- shared-volume:/mnt/efs
|
||||
env_file:
|
||||
- docker-envs/analytics.env
|
||||
environment: {} # Fallback empty environment if env_file is missing
|
||||
restart: unless-stopped
|
||||
|
||||
|
||||
http-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/http:${COMMON_VERSION}
|
||||
domainname: app.svc.cluster.local
|
||||
container_name: http
|
||||
networks:
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- http-openreplay
|
||||
- http-openreplay.app.svc.cluster.local
|
||||
volumes:
|
||||
- shared-volume:/mnt/efs
|
||||
env_file:
|
||||
- docker-envs/http.env
|
||||
environment: {} # Fallback empty environment if env_file is missing
|
||||
restart: unless-stopped
|
||||
|
||||
|
||||
images-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/images:${COMMON_VERSION}
|
||||
domainname: app.svc.cluster.local
|
||||
container_name: images
|
||||
networks:
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- images-openreplay
|
||||
- images-openreplay.app.svc.cluster.local
|
||||
volumes:
|
||||
- shared-volume:/mnt/efs
|
||||
env_file:
|
||||
- docker-envs/images.env
|
||||
environment: {} # Fallback empty environment if env_file is missing
|
||||
restart: unless-stopped
|
||||
|
||||
|
||||
integrations-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/integrations:${COMMON_VERSION}
|
||||
domainname: app.svc.cluster.local
|
||||
container_name: integrations
|
||||
networks:
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- integrations-openreplay
|
||||
- integrations-openreplay.app.svc.cluster.local
|
||||
volumes:
|
||||
- shared-volume:/mnt/efs
|
||||
env_file:
|
||||
- docker-envs/integrations.env
|
||||
environment: {} # Fallback empty environment if env_file is missing
|
||||
restart: unless-stopped
|
||||
|
||||
|
||||
sink-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/sink:${COMMON_VERSION}
|
||||
domainname: app.svc.cluster.local
|
||||
container_name: sink
|
||||
networks:
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- sink-openreplay
|
||||
- sink-openreplay.app.svc.cluster.local
|
||||
volumes:
|
||||
- shared-volume:/mnt/efs
|
||||
env_file:
|
||||
- docker-envs/sink.env
|
||||
environment: {} # Fallback empty environment if env_file is missing
|
||||
restart: unless-stopped
|
||||
|
||||
|
||||
sourcemapreader-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/sourcemapreader:${COMMON_VERSION}
|
||||
domainname: app.svc.cluster.local
|
||||
container_name: sourcemapreader
|
||||
networks:
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- sourcemapreader-openreplay
|
||||
- sourcemapreader-openreplay.app.svc.cluster.local
|
||||
volumes:
|
||||
- shared-volume:/mnt/efs
|
||||
env_file:
|
||||
- docker-envs/sourcemapreader.env
|
||||
environment: {} # Fallback empty environment if env_file is missing
|
||||
restart: unless-stopped
|
||||
|
||||
|
||||
spot-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/spot:${COMMON_VERSION}
|
||||
domainname: app.svc.cluster.local
|
||||
container_name: spot
|
||||
networks:
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- spot-openreplay
|
||||
- spot-openreplay.app.svc.cluster.local
|
||||
volumes:
|
||||
- shared-volume:/mnt/efs
|
||||
env_file:
|
||||
- docker-envs/spot.env
|
||||
environment: {} # Fallback empty environment if env_file is missing
|
||||
restart: unless-stopped
|
||||
|
||||
|
||||
storage-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/storage:${COMMON_VERSION}
|
||||
domainname: app.svc.cluster.local
|
||||
container_name: storage
|
||||
networks:
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- storage-openreplay
|
||||
- storage-openreplay.app.svc.cluster.local
|
||||
volumes:
|
||||
- shared-volume:/mnt/efs
|
||||
env_file:
|
||||
- docker-envs/storage.env
|
||||
environment: {} # Fallback empty environment if env_file is missing
|
||||
restart: unless-stopped
|
||||
|
||||
|
||||
assets-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/assets:${COMMON_VERSION}
|
||||
domainname: app.svc.cluster.local
|
||||
container_name: assets
|
||||
networks:
|
||||
- openreplay-net
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- assets-openreplay
|
||||
- assets-openreplay.app.svc.cluster.local
|
||||
volumes:
|
||||
- shared-volume:/mnt/efs
|
||||
env_file:
|
||||
- assets.env
|
||||
- docker-envs/assets.env
|
||||
environment: {} # Fallback empty environment if env_file is missing
|
||||
restart: unless-stopped
|
||||
|
||||
|
||||
|
||||
assist-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/assist:${COMMON_VERSION}
|
||||
domainname: app.svc.cluster.local
|
||||
container_name: assist
|
||||
networks:
|
||||
- openreplay-net
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- assist-openreplay
|
||||
- assist-openreplay.app.svc.cluster.local
|
||||
volumes:
|
||||
- shared-volume:/mnt/efs
|
||||
env_file:
|
||||
- assist.env
|
||||
- docker-envs/assist.env
|
||||
environment: {} # Fallback empty environment if env_file is missing
|
||||
restart: unless-stopped
|
||||
|
||||
|
||||
|
||||
canvases-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/canvases:${COMMON_VERSION}
|
||||
domainname: app.svc.cluster.local
|
||||
container_name: canvases
|
||||
networks:
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- canvases-openreplay
|
||||
- canvases-openreplay.app.svc.cluster.local
|
||||
volumes:
|
||||
- shared-volume:/mnt/efs
|
||||
env_file:
|
||||
- docker-envs/canvases.env
|
||||
environment: {} # Fallback empty environment if env_file is missing
|
||||
restart: unless-stopped
|
||||
|
||||
|
||||
chalice-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/chalice:${COMMON_VERSION}
|
||||
domainname: app.svc.cluster.local
|
||||
container_name: chalice
|
||||
networks:
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- chalice-openreplay
|
||||
- chalice-openreplay.app.svc.cluster.local
|
||||
volumes:
|
||||
- shared-volume:/mnt/efs
|
||||
env_file:
|
||||
- docker-envs/chalice.env
|
||||
environment: {} # Fallback empty environment if env_file is missing
|
||||
restart: unless-stopped
|
||||
|
||||
|
||||
db-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/db:${COMMON_VERSION}
|
||||
domainname: app.svc.cluster.local
|
||||
container_name: db
|
||||
networks:
|
||||
- openreplay-net
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- db-openreplay
|
||||
- db-openreplay.app.svc.cluster.local
|
||||
volumes:
|
||||
- shared-volume:/mnt/efs
|
||||
env_file:
|
||||
- db.env
|
||||
- docker-envs/db.env
|
||||
environment: {} # Fallback empty environment if env_file is missing
|
||||
restart: unless-stopped
|
||||
|
||||
|
||||
|
||||
ender-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/ender:${COMMON_VERSION}
|
||||
domainname: app.svc.cluster.local
|
||||
container_name: ender
|
||||
networks:
|
||||
- openreplay-net
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- ender-openreplay
|
||||
- ender-openreplay.app.svc.cluster.local
|
||||
volumes:
|
||||
- shared-volume:/mnt/efs
|
||||
env_file:
|
||||
- ender.env
|
||||
- docker-envs/ender.env
|
||||
environment: {} # Fallback empty environment if env_file is missing
|
||||
restart: unless-stopped
|
||||
|
||||
|
||||
|
||||
frontend-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/frontend:${COMMON_VERSION}
|
||||
domainname: app.svc.cluster.local
|
||||
container_name: frontend
|
||||
networks:
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- frontend-openreplay
|
||||
- frontend-openreplay.app.svc.cluster.local
|
||||
volumes:
|
||||
- shared-volume:/mnt/efs
|
||||
env_file:
|
||||
- docker-envs/frontend.env
|
||||
environment: {} # Fallback empty environment if env_file is missing
|
||||
restart: unless-stopped
|
||||
|
||||
|
||||
heuristics-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/heuristics:${COMMON_VERSION}
|
||||
domainname: app.svc.cluster.local
|
||||
|
|
@ -167,88 +442,15 @@ services:
|
|||
networks:
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- heuristics-openreplay
|
||||
- heuristics-openreplay.app.svc.cluster.local
|
||||
env_file:
|
||||
- heuristics.env
|
||||
restart: unless-stopped
|
||||
|
||||
imagestorage-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/imagestorage:${COMMON_VERSION}
|
||||
container_name: imagestorage
|
||||
env_file:
|
||||
- imagestorage.env
|
||||
networks:
|
||||
- openreplay-net
|
||||
restart: unless-stopped
|
||||
|
||||
integrations-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/integrations:${COMMON_VERSION}
|
||||
container_name: integrations
|
||||
networks:
|
||||
- openreplay-net
|
||||
env_file:
|
||||
- integrations.env
|
||||
restart: unless-stopped
|
||||
|
||||
peers-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/peers:${COMMON_VERSION}
|
||||
container_name: peers
|
||||
networks:
|
||||
- openreplay-net
|
||||
env_file:
|
||||
- peers.env
|
||||
restart: unless-stopped
|
||||
|
||||
sourcemapreader-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/sourcemapreader:${COMMON_VERSION}
|
||||
container_name: sourcemapreader
|
||||
networks:
|
||||
- openreplay-net
|
||||
env_file:
|
||||
- sourcemapreader.env
|
||||
restart: unless-stopped
|
||||
|
||||
http-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/http:${COMMON_VERSION}
|
||||
container_name: http
|
||||
networks:
|
||||
- openreplay-net
|
||||
env_file:
|
||||
- http.env
|
||||
restart: unless-stopped
|
||||
|
||||
chalice-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/chalice:${COMMON_VERSION}
|
||||
container_name: chalice
|
||||
volumes:
|
||||
- shared-volume:/mnt/efs
|
||||
networks:
|
||||
- openreplay-net
|
||||
env_file:
|
||||
- chalice.env
|
||||
restart: unless-stopped
|
||||
|
||||
sink-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/sink:${COMMON_VERSION}
|
||||
container_name: sink
|
||||
volumes:
|
||||
- shared-volume:/mnt/efs
|
||||
networks:
|
||||
- openreplay-net
|
||||
env_file:
|
||||
- sink.env
|
||||
restart: unless-stopped
|
||||
|
||||
storage-openreplay:
|
||||
image: public.ecr.aws/p1t3u8a3/storage:${COMMON_VERSION}
|
||||
container_name: storage
|
||||
volumes:
|
||||
- shared-volume:/mnt/efs
|
||||
networks:
|
||||
- openreplay-net
|
||||
env_file:
|
||||
- storage.env
|
||||
- docker-envs/heuristics.env
|
||||
environment: {} # Fallback empty environment if env_file is missing
|
||||
restart: unless-stopped
|
||||
|
||||
|
||||
nginx-openreplay:
|
||||
image: nginx:latest
|
||||
|
|
@ -280,6 +482,7 @@ services:
|
|||
|
||||
volumes:
|
||||
pgdata:
|
||||
clickhouse:
|
||||
redisdata:
|
||||
miniodata:
|
||||
shared-volume:
|
||||
|
|
|
|||
27 scripts/docker-compose/docker-envs/alerts.env (normal file)
|
|
@ -0,0 +1,27 @@
|
|||
version_number="v1.22.0"
|
||||
pg_host="postgresql.db.svc.cluster.local"
|
||||
pg_port="5432"
|
||||
pg_dbname="postgres"
|
||||
ch_host="clickhouse-openreplay-clickhouse.db.svc.cluster.local"
|
||||
ch_port="9000"
|
||||
ch_port_http="8123"
|
||||
ch_username="default"
|
||||
ch_password=""
|
||||
pg_user="postgres"
|
||||
pg_password="${COMMON_PG_PASSWORD}"
|
||||
SITE_URL="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
|
||||
S3_HOST="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
|
||||
S3_KEY="${COMMON_S3_KEY}"
|
||||
S3_SECRET="${COMMON_S3_SECRET}"
|
||||
AWS_DEFAULT_REGION="us-east-1"
|
||||
EMAIL_HOST=""
|
||||
EMAIL_PORT="587"
|
||||
EMAIL_USER=""
|
||||
EMAIL_PASSWORD=""
|
||||
EMAIL_USE_TLS="true"
|
||||
EMAIL_USE_SSL="false"
|
||||
EMAIL_SSL_KEY=""
|
||||
EMAIL_SSL_CERT=""
|
||||
EMAIL_FROM="OpenReplay<do-not-reply@openreplay.com>"
|
||||
LOGLEVEL="INFO"
|
||||
PYTHONUNBUFFERED="0"
|
||||
11 scripts/docker-compose/docker-envs/analytics.env (normal file)
|
|
@ -0,0 +1,11 @@
|
|||
TOKEN_SECRET="secret_token_string"
|
||||
LICENSE_KEY=""
|
||||
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
|
||||
KAFKA_USE_SSL="false"
|
||||
JWT_SECRET="${COMMON_JWT_SECRET}"
|
||||
CH_USERNAME="default"
|
||||
CH_PASSWORD=""
|
||||
CLICKHOUSE_STRING="clickhouse-openreplay-clickhouse.db.svc.cluster.local:9000/"
|
||||
pg_password="${COMMON_PG_PASSWORD}"
|
||||
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql.db.svc.cluster.local:5432/postgres?sslmode=disable"
|
||||
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"
|
||||
10 scripts/docker-compose/docker-envs/assets.env (normal file)
|
|
@ -0,0 +1,10 @@
|
|||
AWS_ACCESS_KEY_ID="${COMMON_S3_KEY}"
|
||||
AWS_SECRET_ACCESS_KEY="${COMMON_S3_SECRET}"
|
||||
BUCKET_NAME="sessions-assets"
|
||||
LICENSE_KEY=""
|
||||
AWS_ENDPOINT="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
|
||||
AWS_REGION="us-east-1"
|
||||
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
|
||||
KAFKA_USE_SSL="false"
|
||||
ASSETS_ORIGIN="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}/sessions-assets"
|
||||
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"
|
||||
11 scripts/docker-compose/docker-envs/assist.env (normal file)
|
|
@ -0,0 +1,11 @@
|
|||
ASSIST_JWT_SECRET="${COMMON_ASSIST_JWT_SECRET}"
|
||||
ASSIST_KEY="${COMMON_ASSIST_KEY}"
|
||||
AWS_DEFAULT_REGION="us-east-1"
|
||||
S3_HOST="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}:80"
|
||||
S3_KEY="${COMMON_S3_KEY}"
|
||||
S3_SECRET="${COMMON_S3_SECRET}"
|
||||
REDIS_URL="redis-master.db.svc.cluster.local"
|
||||
CLEAR_SOCKET_TIME="720"
|
||||
debug="0"
|
||||
redis="false"
|
||||
uws="false"
|
||||
10 scripts/docker-compose/docker-envs/canvases.env (normal file)
|
|
@ -0,0 +1,10 @@
|
|||
AWS_ACCESS_KEY_ID="${COMMON_S3_KEY}"
|
||||
AWS_SECRET_ACCESS_KEY="${COMMON_S3_SECRET}"
|
||||
AWS_ENDPOINT="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
|
||||
AWS_REGION="us-east-1"
|
||||
BUCKET_NAME="mobs"
|
||||
LICENSE_KEY=""
|
||||
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
|
||||
KAFKA_USE_SSL="false"
|
||||
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"
|
||||
FS_CLEAN_HRS="24"
|
||||
61 scripts/docker-compose/docker-envs/chalice.env (normal file)
|
|
@ -0,0 +1,61 @@
|
|||
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"
|
||||
KAFKA_SERVERS="kafka.db.svc.cluster.local"
|
||||
ch_username="default"
|
||||
ch_password=""
|
||||
ch_host="clickhouse-openreplay-clickhouse.db.svc.cluster.local"
|
||||
ch_port="9000"
|
||||
ch_port_http="8123"
|
||||
sourcemaps_reader="http://sourcemapreader-openreplay.app.svc.cluster.local:9000/%s/sourcemaps"
|
||||
ASSIST_URL="http://assist-openreplay.app.svc.cluster.local:9001/assist/%s"
|
||||
ASSIST_JWT_SECRET="${COMMON_ASSIST_JWT_SECRET}"
|
||||
JWT_SECRET="${COMMON_JWT_SECRET}"
|
||||
JWT_SPOT_SECRET="${COMMON_JWT_SPOT_SECRET}"
|
||||
ASSIST_KEY="${COMMON_ASSIST_KEY}"
|
||||
LICENSE_KEY=""
|
||||
version_number="v1.22.0"
|
||||
pg_host="postgresql.db.svc.cluster.local"
|
||||
pg_port="5432"
|
||||
pg_dbname="postgres"
|
||||
pg_user="postgres"
|
||||
pg_password="${COMMON_PG_PASSWORD}"
|
||||
SITE_URL="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
|
||||
S3_HOST="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
|
||||
S3_KEY="${COMMON_S3_KEY}"
|
||||
S3_SECRET="${COMMON_S3_SECRET}"
|
||||
AWS_DEFAULT_REGION="us-east-1"
|
||||
sessions_region="us-east-1"
|
||||
ASSIST_RECORDS_BUCKET="records"
|
||||
sessions_bucket="mobs"
|
||||
IOS_VIDEO_BUCKET="mobs"
|
||||
sourcemaps_bucket="sourcemaps"
|
||||
js_cache_bucket="sessions-assets"
|
||||
EMAIL_HOST=""
|
||||
EMAIL_PORT="587"
|
||||
EMAIL_USER=""
|
||||
EMAIL_PASSWORD=""
|
||||
EMAIL_USE_TLS="true"
|
||||
EMAIL_USE_SSL="false"
|
||||
EMAIL_SSL_KEY=""
|
||||
EMAIL_SSL_CERT=""
|
||||
EMAIL_FROM="OpenReplay<do-not-reply@openreplay.com>"
|
||||
CH_COMPRESSION="false"
|
||||
CLUSTER_URL="svc.cluster.local"
|
||||
JWT_EXPIRATION="86400"
|
||||
JWT_REFRESH_SECRET="${COMMON_JWT_REFRESH_SECRET}"
|
||||
JWT_SPOT_REFRESH_SECRET="${COMMON_JWT_SPOT_REFRESH_SECRET}"
|
||||
LOGLEVEL="INFO"
|
||||
PYTHONUNBUFFERED="0"
|
||||
SAML2_MD_URL=""
|
||||
announcement_url=""
|
||||
assist_secret=""
|
||||
async_Token=""
|
||||
captcha_key=""
|
||||
captcha_server=""
|
||||
iceServers=""
|
||||
idp_entityId=""
|
||||
idp_name=""
|
||||
idp_sls_url=""
|
||||
idp_sso_url=""
|
||||
idp_tenantKey=""
|
||||
idp_x509cert=""
|
||||
jwt_algorithm="HS512"
|
||||
11 scripts/docker-compose/docker-envs/db.env (normal file)
|
|
@ -0,0 +1,11 @@
|
|||
CH_USERNAME="default"
|
||||
CH_PASSWORD=""
|
||||
CLICKHOUSE_STRING="clickhouse-openreplay-clickhouse.db.svc.cluster.local:9000/default"
|
||||
LICENSE_KEY=""
|
||||
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
|
||||
KAFKA_USE_SSL="false"
|
||||
pg_password="${COMMON_PG_PASSWORD}"
|
||||
QUICKWIT_ENABLED="false"
|
||||
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql.db.svc.cluster.local:5432/postgres?sslmode=disable"
|
||||
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"
|
||||
ch_db="default"
|
||||
6 scripts/docker-compose/docker-envs/ender.env (normal file)
|
|
@ -0,0 +1,6 @@
|
|||
LICENSE_KEY=""
|
||||
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
|
||||
KAFKA_USE_SSL="false"
|
||||
pg_password="${COMMON_PG_PASSWORD}"
|
||||
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql.db.svc.cluster.local:5432/postgres?sslmode=disable"
|
||||
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"
|
||||
2 scripts/docker-compose/docker-envs/frontend.env (normal file)
|
|
@ -0,0 +1,2 @@
|
|||
TRACKER_HOST="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}/script"
|
||||
HTTP_PORT="80"
|
||||
4 scripts/docker-compose/docker-envs/heuristics.env (normal file)
|
|
@ -0,0 +1,4 @@
|
|||
LICENSE_KEY=""
|
||||
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
|
||||
KAFKA_USE_SSL="false"
|
||||
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"
|
||||
15 scripts/docker-compose/docker-envs/http.env (normal file)
|
|
@ -0,0 +1,15 @@
|
|||
BUCKET_NAME="uxtesting-records"
|
||||
CACHE_ASSETS="true"
|
||||
AWS_ACCESS_KEY_ID="${COMMON_S3_KEY}"
|
||||
AWS_SECRET_ACCESS_KEY="${COMMON_S3_SECRET}"
|
||||
AWS_REGION="us-east-1"
|
||||
AWS_ENDPOINT="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
|
||||
LICENSE_KEY=""
|
||||
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
|
||||
KAFKA_USE_SSL="false"
|
||||
pg_password="${COMMON_PG_PASSWORD}"
|
||||
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql.db.svc.cluster.local:5432/postgres?sslmode=disable"
|
||||
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"
|
||||
JWT_SECRET="${COMMON_JWT_SECRET}"
|
||||
JWT_SPOT_SECRET="${COMMON_JWT_SPOT_SECRET}"
|
||||
TOKEN_SECRET="${COMMON_TOKEN_SECRET}"
|
||||
10 scripts/docker-compose/docker-envs/images.env (normal file)
|
|
@ -0,0 +1,10 @@
|
|||
AWS_ACCESS_KEY_ID="${COMMON_S3_KEY}"
|
||||
AWS_SECRET_ACCESS_KEY="${COMMON_S3_SECRET}"
|
||||
AWS_ENDPOINT="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
|
||||
AWS_REGION="us-east-1"
|
||||
BUCKET_NAME="mobs"
|
||||
LICENSE_KEY=""
|
||||
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
|
||||
KAFKA_USE_SSL="false"
|
||||
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"
|
||||
FS_CLEAN_HRS="24"
|
||||
13 scripts/docker-compose/docker-envs/integrations.env (normal file)
|
|
@ -0,0 +1,13 @@
|
|||
AWS_ACCESS_KEY_ID="${COMMON_S3_KEY}"
|
||||
AWS_SECRET_ACCESS_KEY="${COMMON_S3_SECRET}"
|
||||
AWS_ENDPOINT="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
|
||||
AWS_REGION="us-east-1"
|
||||
BUCKET_NAME="mobs"
|
||||
JWT_SECRET="${COMMON_JWT_SECRET}"
|
||||
LICENSE_KEY=""
|
||||
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
|
||||
KAFKA_USE_SSL="false"
|
||||
pg_password="${COMMON_PG_PASSWORD}"
|
||||
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql.db.svc.cluster.local:5432/postgres?sslmode=disable"
|
||||
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"
|
||||
TOKEN_SECRET="secret_token_string"
|
||||
5 scripts/docker-compose/docker-envs/sink.env (normal file)
|
|
@ -0,0 +1,5 @@
|
|||
LICENSE_KEY=""
|
||||
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
|
||||
KAFKA_USE_SSL="false"
|
||||
ASSETS_ORIGIN="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}/sessions-assets"
|
||||
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"
|
||||
11 scripts/docker-compose/docker-envs/sourcemapreader.env (normal file)
|
|
@ -0,0 +1,11 @@
|
|||
SMR_HOST="0.0.0.0"
|
||||
S3_HOST="http://minio.db.svc.cluster.local:9000"
|
||||
S3_KEY="${COMMON_S3_KEY}"
|
||||
S3_SECRET="${COMMON_S3_SECRET}"
|
||||
AWS_REGION="us-east-1"
|
||||
LICENSE_KEY=""
|
||||
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"
|
||||
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
|
||||
KAFKA_USE_SSL="false"
|
||||
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql.db.svc.cluster.local:5432/postgres"
|
||||
ASSETS_ORIGIN="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}/sessions-assets"
|
||||
16 scripts/docker-compose/docker-envs/spot.env (normal file)
|
|
@ -0,0 +1,16 @@
|
|||
CACHE_ASSETS="true"
|
||||
FS_CLEAN_HRS="24"
|
||||
TOKEN_SECRET="secret_token_string"
|
||||
AWS_ACCESS_KEY_ID="${COMMON_S3_KEY}"
|
||||
AWS_SECRET_ACCESS_KEY="${COMMON_S3_SECRET}"
|
||||
BUCKET_NAME="spots"
|
||||
AWS_REGION="us-east-1"
|
||||
AWS_ENDPOINT="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
|
||||
LICENSE_KEY=""
|
||||
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
|
||||
KAFKA_USE_SSL="false"
|
||||
JWT_SECRET="${COMMON_JWT_SECRET}"
|
||||
JWT_SPOT_SECRET="${COMMON_JWT_SPOT_SECRET}"
|
||||
pg_password="${COMMON_PG_PASSWORD}"
|
||||
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql.db.svc.cluster.local:5432/postgres?sslmode=disable"
|
||||
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"
|
||||
10 scripts/docker-compose/docker-envs/storage.env (normal file)
|
|
@ -0,0 +1,10 @@
|
|||
AWS_ACCESS_KEY_ID="${COMMON_S3_KEY}"
|
||||
AWS_SECRET_ACCESS_KEY="${COMMON_S3_SECRET}"
|
||||
AWS_ENDPOINT="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
|
||||
AWS_REGION="us-east-1"
|
||||
BUCKET_NAME="mobs"
|
||||
LICENSE_KEY=""
|
||||
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
|
||||
KAFKA_USE_SSL="false"
|
||||
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"
|
||||
FS_CLEAN_HRS="24"
|
||||
|
|
@ -1,6 +0,0 @@
|
|||
LICENSE_KEY=''
|
||||
KAFKA_SERVERS='kafka.db.svc.cluster.local:9092'
|
||||
KAFKA_USE_SSL='false'
|
||||
pg_password="${COMMON_PG_PASSWORD}"
|
||||
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql:5432/postgres"
|
||||
REDIS_STRING='redis://redis:6379'
|
||||
38 scripts/docker-compose/hacks/Makefile (normal file)
|
|
@ -0,0 +1,38 @@
.PHONY: default
default: create-compose

help: ## Prints help for targets with comments
	@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m\n"} /^[a-zA-Z_0-9-]+:.*?##/ \
	{ printf " \033[36m%-25s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)

.PHONY: helm-template
helm-template:
	@rm -rf yamls
	@mkdir yamls
	@helm template op ../../helmcharts/openreplay -n app -f ../../helmcharts/vars.yaml -f vars.yaml > yamls/deployment.yaml

.PHONY: create-yamls
create-yamls: helm-template
	@awk -v RS='---' 'NR>1{kind=""; name=""; if(match($$0, /kind:[[:space:]]*([a-zA-Z]+)/, k) && \
	match($$0, /name:[[:space:]]*([a-zA-Z0-9_.-]+)/, n)) \
	{kind=k[1]; name=n[1]; if(kind == "Deployment") print $$0 > "yamls/"name".yaml";}}' yamls/deployment.yaml
	@rm yamls/ingress-nginx.yaml
	@rm yamls/deployment.yaml

.PHONY: create-envs
create-envs: create-yamls ## Create envs from deployment
	@echo Creating env vars...
	@rm -rf ../docker-envs
	@mkdir ../docker-envs
	@# @find ./ -type f -iname "Deployment" -exec templater -i env.tpl -f ../deployment.yaml {} > {}.env \;
	@find yamls/ -type f -name "*.yaml" -exec sh -c 'filename=$$(basename {} -openreplay.yaml); \
	templater -i tpls/env.tpl -f {} > ../docker-envs/$${filename}.env' \;
	@# Replace all http/https for COMMON_DOMAIN_NAME with COMMON_PROTOCOL
	@find ../docker-envs/ -type f -name "*.env" -exec sed -i 's|http[s]\?://\$${COMMON_DOMAIN_NAME}|\$${COMMON_PROTOCOL}://\$${COMMON_DOMAIN_NAME}|g' {} \;

.PHONY: create-compose
create-compose: create-envs ## Create docker-compose.yml
	@echo creating docker-compose yaml
	$(eval FILES := $(shell find yamls/ -type f -name "*.yaml" -exec basename {} .yaml \; | tr '\n' ',' | sed 's/,$$//'))
	@#echo "Files found: $(FILES)"
	@FILES=$(FILES) templater -i tpls/docker-compose.tpl -f ../../helmcharts/vars.yaml -f vars.yaml > ../docker-compose.yaml
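As a usage sketch (it assumes helm, templater, GNU awk and docker-compose are already installed; none of that is handled by this Makefile), the generate-then-start flow looks roughly like this:

# Regenerate docker-envs/*.env and ../docker-compose.yaml from the helm charts.
cd scripts/docker-compose/hacks
make create-compose   # runs helm-template -> create-yamls -> create-envs -> create-compose

# Then, from scripts/docker-compose, the install script substitutes the COMMON_* values
# and starts the stack with the one-off migration containers enabled:
cd ..
sudo -E docker-compose --profile migration up --force-recreate --build -d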
|
||||
228 scripts/docker-compose/hacks/tpls/docker-compose.tpl (normal file)
|
|
@ -0,0 +1,228 @@
|
|||
{{/* # vim: ft=helm: */}}
|
||||
# vim: ft=yaml
|
||||
version: '3'
|
||||
|
||||
services:
|
||||
|
||||
postgresql:
|
||||
image: bitnami/postgresql:${POSTGRES_VERSION}
|
||||
container_name: postgres
|
||||
volumes:
|
||||
- pgdata:/var/lib/postgresql/data
|
||||
networks:
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- postgresql.db.svc.cluster.local
|
||||
environment:
|
||||
POSTGRESQL_PASSWORD: "{{.Values.global.postgresql.postgresqlPassword}}"
|
||||
|
||||
clickhouse:
|
||||
image: clickhouse/clickhouse-server:${CLICKHOUSE_VERSION}
|
||||
container_name: clickhouse
|
||||
volumes:
|
||||
- clickhouse:/var/lib/clickhouse
|
||||
networks:
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- clickhouse-openreplay-clickhouse.db.svc.cluster.local
|
||||
environment:
|
||||
CLICKHOUSE_USER: "default"
|
||||
CLICKHOUSE_PASSWORD: ""
|
||||
CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT: "1"
|
||||
|
||||
redis:
|
||||
image: bitnami/redis:${REDIS_VERSION}
|
||||
container_name: redis
|
||||
volumes:
|
||||
- redisdata:/bitnami/redis/data
|
||||
networks:
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- redis-master.db.svc.cluster.local
|
||||
environment:
|
||||
ALLOW_EMPTY_PASSWORD: "yes"
|
||||
|
||||
minio:
|
||||
image: bitnami/minio:${MINIO_VERSION}
|
||||
container_name: minio
|
||||
volumes:
|
||||
- miniodata:/bitnami/minio/data
|
||||
networks:
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- minio.db.svc.cluster.local
|
||||
ports:
|
||||
- 9001:9001
|
||||
environment:
|
||||
MINIO_ROOT_USER: {{.Values.minio.global.minio.accessKey}}
|
||||
MINIO_ROOT_PASSWORD: {{.Values.minio.global.minio.secretKey}}
|
||||
|
||||
fs-permission:
|
||||
image: debian:stable-slim
|
||||
container_name: fs-permission
|
||||
profiles:
|
||||
- "migration"
|
||||
volumes:
|
||||
- shared-volume:/mnt/efs
|
||||
- miniodata:/mnt/minio
|
||||
- pgdata:/mnt/postgres
|
||||
entrypoint:
|
||||
- /bin/bash
|
||||
- -c
|
||||
- |
|
||||
chown -R 1001:1001 /mnt/{efs,minio,postgres}
|
||||
restart: on-failure
|
||||
|
||||
minio-migration:
|
||||
image: bitnami/minio:2020.10.9-debian-10-r6
|
||||
container_name: minio-migration
|
||||
profiles:
|
||||
- "migration"
|
||||
depends_on:
|
||||
- minio
|
||||
- fs-permission
|
||||
networks:
|
||||
- openreplay-net
|
||||
volumes:
|
||||
- ../helmcharts/openreplay/files/minio.sh:/tmp/minio.sh
|
||||
environment:
|
||||
MINIO_HOST: http://minio.db.svc.cluster.local:9000
|
||||
MINIO_ACCESS_KEY: {{.Values.minio.global.minio.accessKey}}
|
||||
MINIO_SECRET_KEY: {{.Values.minio.global.minio.secretKey}}
|
||||
user: root
|
||||
entrypoint:
|
||||
- /bin/bash
|
||||
- -c
|
||||
- |
|
||||
apt update && apt install netcat -y
|
||||
# Wait for Minio to be ready
|
||||
until nc -z -v -w30 minio 9000; do
|
||||
echo "Waiting for Minio server to be ready..."
|
||||
sleep 1
|
||||
done
|
||||
bash /tmp/minio.sh init || exit 100
|
||||
|
||||
db-migration:
|
||||
image: bitnami/postgresql:14.5.0
|
||||
container_name: db-migration
|
||||
profiles:
|
||||
- "migration"
|
||||
depends_on:
|
||||
- postgresql
|
||||
- minio-migration
|
||||
networks:
|
||||
- openreplay-net
|
||||
volumes:
|
||||
- ../schema/db/init_dbs/postgresql/init_schema.sql:/tmp/init_schema.sql
|
||||
environment:
|
||||
PGHOST: postgresql
|
||||
PGPORT: 5432
|
||||
PGDATABASE: postgres
|
||||
PGUSER: postgres
|
||||
PGPASSWORD: {{.Values.global.postgresql.postgresqlPassword}}
|
||||
entrypoint:
|
||||
- /bin/bash
|
||||
- -c
|
||||
- |
|
||||
until psql -c '\q'; do
|
||||
echo "PostgreSQL is unavailable - sleeping"
|
||||
sleep 1
|
||||
done
|
||||
echo "PostgreSQL is up - executing command"
|
||||
psql -v ON_ERROR_STOP=1 -f /tmp/init_schema.sql
|
||||
|
||||
clickhouse-migration:
|
||||
image: clickhouse/clickhouse-server:${CLICKHOUSE_VERSION}
|
||||
container_name: clickhouse-migration
|
||||
profiles:
|
||||
- "migration"
|
||||
depends_on:
|
||||
- clickhouse
|
||||
- minio-migration
|
||||
networks:
|
||||
- openreplay-net
|
||||
volumes:
|
||||
- ../schema/db/init_dbs/clickhouse/init_schema.sql:/tmp/init_schema.sql
|
||||
environment:
|
||||
CH_HOST: "{{.Values.global.clickhouse.chHost}}"
|
||||
CH_PORT: "{{.Values.global.clickhouse.service.webPort}}"
|
||||
CH_PORT_HTTP: "{{.Values.global.clickhouse.service.dataPort}}"
|
||||
CH_USERNAME: "{{.Values.global.clickhouse.username}}"
|
||||
CH_PASSWORD: "{{.Values.global.clickhouse.password}}"
|
||||
entrypoint:
|
||||
- /bin/bash
|
||||
- -c
|
||||
- |
|
||||
# Checking variable is empty. Shell independant method.
|
||||
# Wait for Minio to be ready
|
||||
until nc -z -v -w30 clickhouse-openreplay-clickhouse.db.svc.cluster.local 9000; do
|
||||
echo "Waiting for Minio server to be ready..."
|
||||
sleep 1
|
||||
done
|
||||
|
||||
echo "clickhouse is up - executing command"
|
||||
clickhouse-client -h ${CH_HOST} --user ${CH_USERNAME} ${CH_PASSWORD} --port ${CH_PORT} --multiquery < /tmp/init_schema.sql || true
|
||||
|
||||
{{- define "service" -}}
|
||||
{{- $service_name := . }}
|
||||
{{- $container_name := (splitList "-" $service_name) | first | printf "%s" }}
|
||||
{{print $service_name}}:
|
||||
image: public.ecr.aws/p1t3u8a3/{{$container_name}}:${COMMON_VERSION}
|
||||
domainname: app.svc.cluster.local
|
||||
container_name: {{print $container_name}}
|
||||
networks:
|
||||
openreplay-net:
|
||||
aliases:
|
||||
- {{print $container_name}}-openreplay
|
||||
- {{print $container_name}}-openreplay.app.svc.cluster.local
|
||||
volumes:
|
||||
- shared-volume:/mnt/efs
|
||||
env_file:
|
||||
- docker-envs/{{print $container_name}}.env
|
||||
environment: {} # Fallback empty environment if env_file is missing
|
||||
restart: unless-stopped
|
||||
{{ end -}}
|
||||
|
||||
{{- range $file := split "," (env "FILES")}}
|
||||
{{ template "service" $file}}
|
||||
{{- end}}
|
||||
|
||||
nginx-openreplay:
|
||||
image: nginx:latest
|
||||
container_name: nginx
|
||||
networks:
|
||||
- openreplay-net
|
||||
volumes:
|
||||
- ./nginx.conf:/etc/nginx/conf.d/default.conf
|
||||
restart: unless-stopped
|
||||
|
||||
|
||||
caddy:
|
||||
image: caddy:latest
|
||||
container_name: caddy
|
||||
ports:
|
||||
- "80:80"
|
||||
- "443:443"
|
||||
volumes:
|
||||
- ./Caddyfile:/etc/caddy/Caddyfile
|
||||
- caddy_data:/data
|
||||
- caddy_config:/config
|
||||
networks:
|
||||
- openreplay-net
|
||||
environment:
|
||||
- ACME_AGREE=true # Agree to Let's Encrypt Subscriber Agreement
|
||||
- CADDY_DOMAIN=${CADDY_DOMAIN}
|
||||
restart: unless-stopped
|
||||
|
||||
|
||||
volumes:
|
||||
pgdata:
|
||||
clickhouse:
|
||||
redisdata:
|
||||
miniodata:
|
||||
shared-volume:
|
||||
caddy_data:
|
||||
caddy_config:
|
||||
|
||||
networks:
|
||||
openreplay-net:
|
||||
6 scripts/docker-compose/hacks/tpls/env.tpl (normal file)
|
|
@ -0,0 +1,6 @@
{{- $excludedKeys := list "POD_NAMESPACE" -}}
{{ range (index .Values.spec.template.spec.containers 0).env -}}
{{- if not (has .name $excludedKeys) -}}
{{ .name }}="{{ .value }}"
{{ end -}}
{{ end -}}
|
||||
26 scripts/docker-compose/hacks/vars.yaml (normal file)
|
|
@ -0,0 +1,26 @@
postgresql: &postgres
  postgresqlPassword: '${COMMON_PG_PASSWORD}'
minio:
  global:
    minio:
      accessKey: &accessKey '${COMMON_S3_KEY}'
      secretKey: &secretKey '${COMMON_S3_SECRET}'
global:
  pg_connection_string: "postgres://postgres:${COMMON_PG_PASSWORD}@postgresql.db.svc.cluster.local:5432/postgres?sslmode=disable"
  postgresql: *postgres
  assistKey: '${COMMON_ASSIST_KEY}'
  assistJWTSecret: '${COMMON_ASSIST_JWT_SECRET}'
  jwtSecret: '${COMMON_JWT_SECRET}'
  jwtSpotSecret: '${COMMON_JWT_SPOT_SECRET}'
  tokenSecret: '${COMMON_TOKEN_SECRET}'
  domainName: "${COMMON_DOMAIN_NAME}"
  ORSecureAccess: false
  s3:
    accessKey: *accessKey
    secretKey: *secretKey
chalice:
  env:
    JWT_REFRESH_SECRET: "${COMMON_JWT_REFRESH_SECRET}"
    JWT_SPOT_REFRESH_SECRET: "${COMMON_JWT_SPOT_REFRESH_SECRET}"
    POD_NAMESPACE: app
    CLUSTER_URL: svc.cluster.local
||||
|
|
@ -1,4 +0,0 @@
|
|||
LICENSE_KEY=''
|
||||
KAFKA_SERVERS='kafka.db.svc.cluster.local:9092'
|
||||
KAFKA_USE_SSL='false'
|
||||
REDIS_STRING='redis://redis:6379'
|
||||
|
|
@ -1,12 +0,0 @@
|
|||
CACHE_ASSETS='true'
|
||||
TOKEN_SECRET='secret_token_string'
|
||||
AWS_ACCESS_KEY_ID=${COMMON_S3_KEY}
|
||||
AWS_SECRET_ACCESS_KEY=${COMMON_S3_SECRET}
|
||||
AWS_REGION='us-east-1'
|
||||
LICENSE_KEY=''
|
||||
KAFKA_SERVERS='kafka.db.svc.cluster.local:9092'
|
||||
KAFKA_USE_SSL='false'
|
||||
pg_password="${COMMON_PG_PASSWORD}"
|
||||
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql:5432/postgres"
|
||||
REDIS_STRING='redis://redis:6379'
|
||||
BUCKET_NAME='uxtesting-records'
|
||||
|
|
@ -1,10 +0,0 @@
|
|||
AWS_ACCESS_KEY_ID=${COMMON_S3_KEY}
|
||||
AWS_SECRET_ACCESS_KEY=${COMMON_S3_SECRET}
|
||||
AWS_ENDPOINT='http://minio:9000'
|
||||
AWS_REGION='us-east-1'
|
||||
BUCKET_NAME=mobs
|
||||
LICENSE_KEY=''
|
||||
KAFKA_SERVERS='kafka.db.svc.cluster.local:9092'
|
||||
KAFKA_USE_SSL='false'
|
||||
REDIS_STRING='redis://redis:6379'
|
||||
FS_CLEAN_HRS='24'
|
||||
|
|
@ -12,42 +12,48 @@ NC='\033[0m' # No Color
|
|||
|
||||
# --- Helper functions for logs ---
|
||||
info() {
|
||||
echo -e "${GREEN}[INFO] $1 ${NC} 👍"
|
||||
echo -e "${GREEN}[INFO] $1 ${NC} 👍"
|
||||
}
|
||||
|
||||
warn() {
|
||||
echo -e "${YELLOW}[WARN] $1 ${NC} ⚠️"
|
||||
echo -e "${YELLOW}[WARN] $1 ${NC} ⚠️"
|
||||
}
|
||||
|
||||
fatal() {
|
||||
echo -e "${RED}[FATAL] $1 ${NC} 🔥"
|
||||
exit 1
|
||||
echo -e "${RED}[FATAL] $1 ${NC} 🔥"
|
||||
exit 1
|
||||
}
|
||||
|
||||
# Function to check if a command exists
|
||||
function exists() {
|
||||
type "$1" &>/dev/null
|
||||
type "$1" &>/dev/null
|
||||
}
|
||||
|
||||
# Generate a random password using openssl
|
||||
randomPass() {
|
||||
exists openssl || {
|
||||
info "Installing openssl... 🔐"
|
||||
sudo apt update &>/dev/null
|
||||
sudo apt install openssl -y &>/dev/null
|
||||
}
|
||||
openssl rand -hex 10
|
||||
exists openssl || {
|
||||
info "Installing openssl... 🔐"
|
||||
sudo apt update &>/dev/null
|
||||
sudo apt install openssl -y &>/dev/null
|
||||
}
|
||||
openssl rand -hex 10
|
||||
}
|
||||
|
||||
# Create dynamic passwords and update the environment file
|
||||
function create_passwords() {
|
||||
info "Creating dynamic passwords..."
|
||||
sed -i "s/change_me_domain/${DOMAIN_NAME}/g" common.env
|
||||
sed -i "s/change_me_jwt/$(randomPass)/g" common.env
|
||||
sed -i "s/change_me_s3_key/$(randomPass)/g" common.env
|
||||
sed -i "s/change_me_s3_secret/$(randomPass)/g" common.env
|
||||
sed -i "s/change_me_pg_password/$(randomPass)/g" common.env
|
||||
info "Passwords created and updated in common.env file."
|
||||
info "Creating dynamic passwords..."
|
||||
|
||||
# Update domain name replacement
|
||||
sed -i "s/change_me_domain/${DOMAIN_NAME}/g" common.env
|
||||
|
||||
# Find all change_me_ entries and replace them with random passwords
|
||||
grep -o 'change_me_[a-zA-Z0-9_]*' common.env | sort -u | while read -r token; do
|
||||
random_pass=$(randomPass)
|
||||
sed -i "s/${token}/${random_pass}/g" common.env
|
||||
info "Generated password for ${token}"
|
||||
done
|
||||
|
||||
info "Passwords created and updated in common.env file."
|
||||
}
|
||||
|
||||
# update apt cache
|
||||
|
|
@ -72,23 +78,27 @@ echo -e "${GREEN}"
|
|||
read -rp "Enter DOMAIN_NAME: " DOMAIN_NAME
|
||||
echo -e "${NC}"
|
||||
if [[ -z $DOMAIN_NAME ]]; then
|
||||
fatal "DOMAIN_NAME variable is empty. Please provide a valid domain name to proceed."
|
||||
fatal "DOMAIN_NAME variable is empty. Please provide a valid domain name to proceed."
|
||||
fi
|
||||
info "Using domain name: $DOMAIN_NAME 🌐"
|
||||
echo "CADDY_DOMAIN=\"$DOMAIN_NAME\"" >> common.env
|
||||
echo "CADDY_DOMAIN=\"$DOMAIN_NAME\"" >>common.env
|
||||
|
||||
read -p "Is the domain on a public DNS? (y/n) " yn
|
||||
case $yn in
|
||||
y ) echo "$DOMAIN_NAME is on a public DNS";
|
||||
;;
|
||||
n ) echo "$DOMAIN_NAME is on a private DNS";
|
||||
#add TLS internal to caddyfile
|
||||
#In local network Caddy can't reach Let's Encrypt servers to get a certificate
|
||||
mv Caddyfile Caddyfile.public
|
||||
mv Caddyfile.private Caddyfile
|
||||
;;
|
||||
* ) echo invalid response;
|
||||
exit 1;;
|
||||
case $yn in
|
||||
y)
|
||||
echo "$DOMAIN_NAME is on a public DNS"
|
||||
;;
|
||||
n)
|
||||
echo "$DOMAIN_NAME is on a private DNS"
|
||||
#add TLS internal to caddyfile
|
||||
#In local network Caddy can't reach Let's Encrypt servers to get a certificate
|
||||
mv Caddyfile Caddyfile.public
|
||||
mv Caddyfile.private Caddyfile
|
||||
;;
|
||||
*)
|
||||
echo invalid response
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
|
||||
# Create passwords if they don't exist
|
||||
|
|
@ -103,23 +113,27 @@ set +a
|
|||
# Use the `envsubst` command to substitute the shell environment variables into reference_var.env and output to a combined .env
|
||||
find ./ -type f \( -iname "*.env" -o -iname "docker-compose.yaml" \) ! -name "common.env" -exec /bin/bash -c 'file="{}"; git checkout -- "$file"; cp "$file" "$file.bak"; envsubst < "$file.bak" > "$file"; rm "$file.bak"' \;
|
||||
|
||||
case $yn in
|
||||
y ) echo "$DOMAIN_NAME is on a public DNS";
|
||||
##No changes needed
|
||||
;;
|
||||
n ) echo "$DOMAIN_NAME is on a private DNS";
|
||||
##Add a variable to chalice.env file
|
||||
echo "SKIP_H_SSL=True" >> chalice.env
|
||||
;;
|
||||
* ) echo invalid response;
|
||||
exit 1;;
|
||||
case $yn in
|
||||
y)
|
||||
echo "$DOMAIN_NAME is on a public DNS"
|
||||
##No changes needed
|
||||
;;
|
||||
n)
|
||||
echo "$DOMAIN_NAME is on a private DNS"
|
||||
##Add a variable to chalice.env file
|
||||
echo "SKIP_H_SSL=True" >>chalice.env
|
||||
;;
|
||||
*)
|
||||
echo invalid response
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
|
||||
services=$(sudo -E docker-compose config --services)
|
||||
for service in $services; do
|
||||
echo "Pulling image for $service..."
|
||||
sudo -E docker-compose pull $service
|
||||
sleep 5
|
||||
readarray -t services < <(sudo -E docker-compose config --services)
|
||||
for service in "${services[@]}"; do
|
||||
echo "Pulling image for $service..."
|
||||
sudo -E docker-compose pull --no-parallel "$service"
|
||||
sleep 5
|
||||
done
|
||||
|
||||
sudo -E docker-compose --profile migration up --force-recreate --build -d
|
||||
|
|
|
|||
|
|
@ -1,12 +0,0 @@
|
|||
AWS_ACCESS_KEY_ID=${COMMON_S3_KEY}
|
||||
AWS_SECRET_ACCESS_KEY=${COMMON_S3_SECRET}
|
||||
AWS_ENDPOINT='http://minio:9000'
|
||||
AWS_REGION='us-east-1'
|
||||
BUCKET_NAME=mobs
|
||||
LICENSE_KEY=''
|
||||
KAFKA_SERVERS='kafka.db.svc.cluster.local:9092'
|
||||
KAFKA_USE_SSL='false'
|
||||
pg_password=${COMMON_PG_PASSWORD}
|
||||
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql:5432/postgres"
|
||||
REDIS_STRING='redis://redis:6379'
|
||||
TOKEN_SECRET='secret_token_string'
|
||||
|
|
@ -71,7 +71,7 @@ server {
|
|||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection "Upgrade";
|
||||
proxy_set_header Host $host;
|
||||
proxy_pass http://peers-openreplay:9000;
|
||||
proxy_pass http://assist-openreplay:9001;
|
||||
}
|
||||
location /ws-assist/ {
|
||||
rewrite ^/ws-assist/(.*) /$1 break;
|
||||
|
|
|
|||
|
|
@ -1,3 +0,0 @@
|
|||
ASSIST_KEY=SetARandomStringHere
|
||||
S3_KEY=${COMMON_S3_KEY}
|
||||
debug='0'
|
||||
|
|
@ -1,5 +0,0 @@
|
|||
LICENSE_KEY=''
|
||||
KAFKA_SERVERS='kafka.db.svc.cluster.local:9092'
|
||||
KAFKA_USE_SSL='false'
|
||||
ASSETS_ORIGIN="https://${COMMON_DOMAIN_NAME}:443/sessions-assets"
|
||||
REDIS_STRING='redis://redis:6379'
|
||||
|
|
@ -1,10 +0,0 @@
|
|||
SMR_HOST='0.0.0.0'
|
||||
AWS_ACCESS_KEY_ID=${COMMON_S3_KEY}
|
||||
AWS_SECRET_ACCESS_KEY=${COMMON_S3_SECRET}
|
||||
AWS_REGION='us-east-1'
|
||||
LICENSE_KEY=''
|
||||
REDIS_STRING='redis://redis:6379'
|
||||
KAFKA_SERVERS='kafka.db.svc.cluster.local:9092'
|
||||
KAFKA_USE_SSL='false'
|
||||
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql.db.svc.cluster.local:5432/postgres"
|
||||
ASSETS_ORIGIN="sourcemapreaders://${COMMON_DOMAIN_NAME}:443/sessions-assets"
|
||||
|
|
@ -1,10 +0,0 @@
|
|||
AWS_ACCESS_KEY_ID=${COMMON_S3_KEY}
|
||||
AWS_SECRET_ACCESS_KEY=${COMMON_S3_SECRET}
|
||||
AWS_ENDPOINT='http://minio:9000'
|
||||
AWS_REGION='us-east-1'
|
||||
BUCKET_NAME=mobs
|
||||
LICENSE_KEY=''
|
||||
KAFKA_SERVERS='kafka.db.svc.cluster.local:9092'
|
||||
KAFKA_USE_SSL='false'
|
||||
REDIS_STRING='redis://redis:6379'
|
||||
FS_CLEAN_HRS='24'
|
||||
|
|
@ -75,7 +75,7 @@ spec:
|
|||
value: '{{ .Values.global.postgresql.postgresqlPassword }}'
|
||||
{{- end}}
|
||||
- name: POSTGRES_STRING
|
||||
value: 'postgres://{{ .Values.global.postgresql.postgresqlUser }}:$(pg_password)@{{ .Values.global.postgresql.postgresqlHost }}:{{ .Values.global.postgresql.postgresqlPort }}/{{ .Values.global.postgresql.postgresqlDatabase }}'
|
||||
value: {{ include "openreplay.pg_connection_string" .}}
|
||||
{{- include "openreplay.env.redis_string" .Values.global.redis | nindent 12 }}
|
||||
ports:
|
||||
{{- range $key, $val := .Values.service.ports }}
|
||||
|
|
|
|||
|
|
@ -57,7 +57,7 @@ spec:
|
|||
value: '{{ .Values.global.postgresql.postgresqlPassword }}'
|
||||
{{- end}}
|
||||
- name: POSTGRES_STRING
|
||||
value: 'postgres://{{ .Values.global.postgresql.postgresqlUser }}:$(pg_password)@{{ .Values.global.postgresql.postgresqlHost }}:{{ .Values.global.postgresql.postgresqlPort }}/{{ .Values.global.postgresql.postgresqlDatabase }}'
|
||||
value: {{ include "openreplay.pg_connection_string" .}}
|
||||
- name: KAFKA_SERVERS
|
||||
value: '{{ .Values.global.kafka.kafkaHost }}:{{ .Values.global.kafka.kafkaPort }}'
|
||||
- name: KAFKA_USE_SSL
|
||||
|
|
|
|||
|
|
@ -67,7 +67,7 @@ spec:
|
|||
- name: QUICKWIT_ENABLED
|
||||
value: '{{ .Values.global.quickwit.enabled }}'
|
||||
- name: POSTGRES_STRING
|
||||
value: 'postgres://{{ .Values.global.postgresql.postgresqlUser }}:$(pg_password)@{{ .Values.global.postgresql.postgresqlHost }}:{{ .Values.global.postgresql.postgresqlPort }}/{{ .Values.global.postgresql.postgresqlDatabase }}'
|
||||
value: {{ include "openreplay.pg_connection_string" .}}
|
||||
{{- include "openreplay.env.redis_string" .Values.global.redis | nindent 12 }}
|
||||
{{- range $key, $val := .Values.global.env }}
|
||||
- name: {{ $key }}
|
||||
|
|
|
|||
|
|
@ -59,7 +59,7 @@ spec:
|
|||
value: '{{ .Values.global.postgresql.postgresqlPassword }}'
|
||||
{{- end}}
|
||||
- name: POSTGRES_STRING
|
||||
value: 'postgres://{{ .Values.global.postgresql.postgresqlUser }}:$(pg_password)@{{ .Values.global.postgresql.postgresqlHost }}:{{ .Values.global.postgresql.postgresqlPort }}/{{ .Values.global.postgresql.postgresqlDatabase }}'
|
||||
value: {{ include "openreplay.pg_connection_string" .}}
|
||||
{{- include "openreplay.env.redis_string" .Values.global.redis | nindent 12 }}
|
||||
{{- range $key, $val := .Values.global.env }}
|
||||
- name: {{ $key }}
|
||||
|
|
|
|||
|
|
@ -89,7 +89,7 @@ spec:
|
|||
value: '{{ .Values.global.postgresql.postgresqlPassword }}'
|
||||
{{- end}}
|
||||
- name: POSTGRES_STRING
|
||||
value: 'postgres://{{ .Values.global.postgresql.postgresqlUser }}:$(pg_password)@{{ .Values.global.postgresql.postgresqlHost }}:{{ .Values.global.postgresql.postgresqlPort }}/{{ .Values.global.postgresql.postgresqlDatabase }}'
|
||||
value: {{ include "openreplay.pg_connection_string" .}}
|
||||
{{- include "openreplay.env.redis_string" .Values.global.redis | nindent 12 }}
|
||||
- name: JWT_SECRET
|
||||
value: {{ .Values.global.jwtSecret }}
|
||||
|
|
|
|||
|
|
@ -85,7 +85,7 @@ spec:
|
|||
value: '{{ .Values.global.postgresql.postgresqlPassword }}'
|
||||
{{- end}}
|
||||
- name: POSTGRES_STRING
|
||||
value: 'postgres://{{ .Values.global.postgresql.postgresqlUser }}:$(pg_password)@{{ .Values.global.postgresql.postgresqlHost }}:{{ .Values.global.postgresql.postgresqlPort }}/{{ .Values.global.postgresql.postgresqlDatabase }}'
|
||||
value: {{ include "openreplay.pg_connection_string" .}}
|
||||
{{- include "openreplay.env.redis_string" .Values.global.redis | nindent 12 }}
|
||||
{{- range $key, $val := .Values.global.env }}
|
||||
- name: {{ $key }}
|
||||
|
|
|
|||
|
|
@ -95,7 +95,7 @@ spec:
|
|||
value: '{{ .Values.global.postgresql.postgresqlPassword }}'
|
||||
{{- end}}
|
||||
- name: POSTGRES_STRING
|
||||
value: 'postgres://{{ .Values.global.postgresql.postgresqlUser }}:$(pg_password)@{{ .Values.global.postgresql.postgresqlHost }}:{{ .Values.global.postgresql.postgresqlPort }}/{{ .Values.global.postgresql.postgresqlDatabase }}'
|
||||
value: {{ include "openreplay.pg_connection_string" .}}
|
||||
{{- include "openreplay.env.redis_string" .Values.global.redis | nindent 12 }}
|
||||
ports:
|
||||
{{- range $key, $val := .Values.service.ports }}
|
||||
|
|
|
|||
|
|
@ -150,3 +150,11 @@ Create the volume mount config for redis TLS certificates
|
|||
{{- include "openreplay.s3Endpoint" . }}/{{.Values.global.s3.assetsBucket}}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{- define "openreplay.pg_connection_string"}}
{{- if .Values.global.pg_connection_string }}
{{- .Values.global.pg_connection_string -}}
{{- else -}}
{{- printf "postgres://%s:$(pg_password)@%s:%s/%s" .Values.global.postgresql.postgresqlUser .Values.global.postgresql.postgresqlHost .Values.global.postgresql.postgresqlPort .Values.global.postgresql.postgresqlDatabase -}}
{{- end -}}
{{- end}}
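This helper is what lets the compose generator inject a full connection string through global.pg_connection_string in its vars.yaml; the same override can be exercised directly against the chart (sketch; the chart path and password are placeholders):

# With the value set, the helper emits it verbatim; unset, it falls back to
# postgres://<user>:$(pg_password)@<host>:<port>/<database> built from the postgresql.* values.
helm template op ./helmcharts/openreplay -n app \
  --set global.pg_connection_string="postgres://postgres:example-password@postgresql.db.svc.cluster.local:5432/postgres?sslmode=disable"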
|
||||
|
|
|
|||