or 1940 upstream docker release with the existing installation (#3316)

* chore(docker): Adding dynamic env generator
* ci(make): Create deployment yamls
* ci(make): Generating docker envs
* change env name structure
* proper env names
* chore(docker): clickhouse
* chore(docker-compose): generate env file format
* chore(docker-compose): Adding docker-compose
* chore(docker-compose): format make
* chore(docker-compose): Update version
* chore(docker-compose): adding new secrets
* ci(make): default target
* ci(Makefile): Update common protocol
* chore(docker-compose): refactor folder structure
* ci(make): rename to docker-envs
* feat(docker): add clickhouse volume definition
Add clickhouse persistent volume to the docker-compose configuration
to ensure data is preserved between container restarts.
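 A quick way to confirm the persistence behaviour (illustrative commands,
 assuming the compose file from this commit is in the working directory):

```sh
docker compose up -d clickhouse
docker compose down                 # removes containers, keeps named volumes
docker volume ls | grep clickhouse  # the data volume is still listed
docker compose up -d clickhouse     # reattaches the existing volume
```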
* refactor: move env files to docker-envs directory
Updates all environment file references in docker-compose.yaml to use a
consistent directory structure, placing them under the docker-envs/
directory for better organization.
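 A hypothetical spot-check that every service now loads its env file from
 the new location:

```sh
ls docker-envs/                              # alerts.env, chalice.env, http.env, ...
grep -n 'docker-envs/' docker-compose.yaml   # every env_file entry should match
```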
* fix(docker): rename imagestorage to images
 The `imagestorage` service and related environment file
 have been renamed to `images` for clarity and consistency.
 This change reflects the service's purpose of handling
 images.
* feat(docker): introduce docker-compose template
 Adds a new docker-compose template that generates
 docker-compose files from a list of services. The
 template uses Helm syntax.
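 A sketch of how the Makefile drives it: `templater` renders
 tpls/docker-compose.tpl once per service name passed in via the FILES
 environment variable (the service list below is illustrative, not complete):

```sh
# Render the compose file from the template; mirrors the create-compose target.
FILES="alerts,http,sink" \
  templater -i tpls/docker-compose.tpl -f ../../helmcharts/vars.yaml -f vars.yaml \
  > ../docker-compose.yaml
```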
* fix: Properly set FILES variable in Makefile
 The FILES variable was not being set correctly in the
 Makefile due to subshell issues. This commit fixes the
 variable assignment and ensures that the variable is
 accessible in subsequent commands.
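 A minimal demonstration of the pitfall, since make runs each recipe line
 in its own shell (values here are illustrative):

```sh
sh -c 'FILES=hello'                        # recipe line 1: set in its own shell
sh -c 'echo "FILES=$FILES"'                # recipe line 2: prints nothing, new shell
sh -c 'FILES=hello; echo "FILES=$FILES"'   # single shell: prints FILES=hello
```

 The fix sidesteps this by assigning FILES with make's own
 `$(eval FILES := $(shell ...))`, which is expanded before any recipe line runs.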
* feat: Refactor docker-compose template for local development
 This commit introduces a complete overhaul of the
 docker-compose template, switching from a helm-based
 template to a native docker-compose.yml file. This
 change simplifies local development and makes it easier
 to manage the OpenReplay stack.
 The new template includes services for:
 - PostgreSQL
 - ClickHouse
 - Redis
 - MinIO
 - Nginx
 - Caddy
 It also includes migration jobs for setting up the
 database and MinIO.
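 The migration jobs are gated behind a compose profile, so a plain
 `docker compose up` skips them; they are run explicitly during bootstrap,
 as install.sh in this commit does (modulo sudo and the compose v1 binary name):

```sh
docker compose --profile migration up --force-recreate --build -d
```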
* fix(docker-compose): Add fallback empty environment
 Add an empty environment to the docker-compose template to prevent
 errors when the env_file is missing. This ensures that the
 container can start even if the environment file is not present.
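 A minimal sketch of the per-service pattern (the service name is just an
 example):

```sh
cat > compose.override.yaml <<'EOF'
services:
  alerts-openreplay:
    env_file:
      - docker-envs/alerts.env
    environment: {}   # fallback empty environment if env_file is missing
EOF
docker compose -f docker-compose.yaml -f compose.override.yaml config >/dev/null
```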
* feat(docker): Add domainname and aliases to services
 This change adds the `domainname` and `aliases` attributes to each
 service in the docker-compose.yaml file. This is to ensure that
 the services can communicate with each other using their fully
 qualified domain names. Also adds shared volume and empty
 environment variables.
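 With those aliases in place, containers resolve each other under the same
 FQDNs the Helm charts use; a quick in-network check (assuming the image
 ships getent):

```sh
docker compose exec chalice getent hosts postgresql.db.svc.cluster.local
docker compose exec chalice getent hosts http-openreplay.app.svc.cluster.local
```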
* update version
* chore(docker): don't pull in parallel
* chore(docker-compose): proper pull
* chore(docker-compose): Update db service urls
* fix(docker-compose): clickhouse url
* chore(clickhouse): Adding clickhouse db migration
* chore(docker-compose): Adding clickhouse
* fix(tpl): variable injection
* chore(fix): compose tpl variable rendering
* chore(docker-compose): Allow override pg variable
* chore(helm): remove assist-server
* chore(helm): pg integrations
* chore(nginx): removed services
* chore(docker-compose): Multiple aliases
* chore(docker-compose): Adding more env vars
* feat(install): Dynamically generate passwords
 Implements dynamic password generation by identifying
 `change_me_*` entries in `common.env` and replacing them
 with random passwords. This enhances security and
 simplifies initial setup.
 The changes include:
 - Replacing hardcoded password replacements with a loop
   that iterates through all `change_me_*` entries.
 - Using `grep` to find all `change_me_*` tokens.
 - Generating a random password for each token.
 - Updating the `common.env` file with the generated
   passwords.
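 The core of the loop, per the install script in this commit:

```sh
grep -o 'change_me_[a-zA-Z0-9_]*' common.env | sort -u | while read -r token; do
  sed -i "s/${token}/$(openssl rand -hex 10)/g" common.env
done
```

 One caveat worth noting: since `change_me_jwt` is a prefix of
 `change_me_jwt_refresh`, the earlier substitution also rewrites the longer
 token, so ordering matters here.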
* chore(docker-compose): disable clickhouse password
* fix(docker-compose): clickhouse-migration
* compose: chalice env
* chore(docker-compose): overlay vars
* chore(docker): Adding ch port
* chore(docker-compose): disable clickhouse password
* fix(docker-compose): migration name
* feat(docker): skip specific values
* chore(docker-compose): define namespace
---------

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
Rajesh Rajendran 2025-04-23 10:57:19 +02:00 committed by GitHub
parent f963ff394d
commit b886e9b242
59 changed files with 922 additions and 842 deletions

scripts/docker-compose/.gitignore (vendored, new file)
View file

@ -0,0 +1 @@
hacks/yamls

View file

@ -1,28 +0,0 @@
ASSIST_JWT_SECRET=${COMMON_JWT_SECRET}
ASSIST_KEY=${COMMON_JWT_SECRET}
ASSIST_RECORDS_BUCKET=records
ASSIST_URL="http://assist-openreplay:9001/assist/%s"
AWS_DEFAULT_REGION="us-east-1"
CH_COMPRESSION="false"
PYTHONUNBUFFERED="0"
REDIS_STRING="redis://redis:6379"
S3_HOST="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
S3_KEY="${COMMON_S3_KEY}"
S3_SECRET="${COMMON_S3_SECRET}"
SITE_URL="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
ch_host="clickhouse"
ch_port="9000"
ch_port_http="8123"
ch_username="default"
js_cache_bucket=sessions-assets
jwt_secret="${COMMON_JWT_SECRET}"
pg_dbname="postgres"
pg_host="postgresql"
pg_password="${COMMON_PG_PASSWORD}"
sessions_bucket=mobs
sessions_region="us-east-1"
sourcemaps_bucket=sourcemaps
sourcemaps_reader="http://sourcemapreader-openreplay:9000/sourcemaps/%s/sourcemaps"
version_number="${COMMON_VERSION}"
CLUSTER_URL=""
POD_NAMESPACE=""

View file

@ -1,10 +0,0 @@
AWS_ACCESS_KEY_ID=${COMMON_S3_KEY}
AWS_SECRET_ACCESS_KEY=${COMMON_S3_SECRET}
BUCKET_NAME=sessions-assets
LICENSE_KEY=''
AWS_ENDPOINT='http://minio:9000'
AWS_REGION='us-east-1'
KAFKA_SERVERS='kafka.db.svc.cluster.local:9092'
KAFKA_USE_SSL='false'
ASSETS_ORIGIN='https://${COMMON_DOMAIN_NAME}:443/sessions-assets'
REDIS_STRING='redis://redis:6379'

View file

@ -1,11 +0,0 @@
ASSIST_JWT_SECRET=${COMMON_JWT_SECRET}
ASSIST_KEY=${COMMON_JWT_SECRET}
AWS_DEFAULT_REGION="us-east-1"
S3_HOST="https://${COMMON_DOMAIN_NAME}:443"
S3_KEY=changeMeMinioAccessKey
S3_SECRET=changeMeMinioPassword
REDIS_URL=redis
CLEAR_SOCKET_TIME='720'
debug='0'
redis='false'
uws='false'

View file

@ -1,31 +0,0 @@
ASSIST_JWT_SECRET=${COMMON_JWT_SECRET}
ASSIST_KEY=${COMMON_JWT_SECRET}
ASSIST_RECORDS_BUCKET=records
ASSIST_URL="http://assist-openreplay:9001/assist/%s"
AWS_DEFAULT_REGION="us-east-1"
CH_COMPRESSION="false"
PYTHONUNBUFFERED="0"
REDIS_STRING="redis://redis:6379"
S3_HOST="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
S3_KEY="${COMMON_S3_KEY}"
S3_SECRET="${COMMON_S3_SECRET}"
SITE_URL="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
ch_host="clickhouse"
ch_port="9000"
ch_port_http="8123"
ch_username="default"
js_cache_bucket=sessions-assets
jwt_secret="${COMMON_JWT_SECRET}"
pg_dbname="postgres"
pg_host="postgresql"
pg_password="${COMMON_PG_PASSWORD}"
sessions_bucket=mobs
sessions_region="us-east-1"
sourcemaps_bucket=sourcemaps
sourcemaps_reader="http://sourcemapreader-openreplay:9000/sourcemaps/%s/sourcemaps"
version_number="${COMMON_VERSION}"
CLUSTER_URL=""
POD_NAMESPACE=""
JWT_REFRESH_SECRET=${COMMON_JWT_REFRESH_SECRET}
JWT_SPOT_REFRESH_SECRET=${COMMON_JWT_REFRESH_SECRET}
JWT_SPOT_SECRET=${COMMON_JWT_SPOT_SECRET}

View file

@ -1,15 +1,20 @@
COMMON_VERSION="v1.22.0"
COMMON_PROTOCOL="https"
COMMON_DOMAIN_NAME="change_me_domain"
COMMON_JWT_SECRET="change_me_jwt"
COMMON_JWT_SPOT_SECRET="change_me_jwt"
COMMON_JWT_REFRESH_SECRET="change_me_jwt_refresh"
COMMON_S3_KEY="change_me_s3_key"
COMMON_S3_SECRET="change_me_s3_secret"
COMMON_PG_PASSWORD="change_me_pg_password"
COMMON_VERSION="v1.21.0"
COMMON_JWT_REFRESH_SECRET="change_me_jwt_refresh"
COMMON_JWT_SPOT_REFRESH_SECRET="change_me_jwt_spot_refresh"
COMMON_ASSIST_JWT_SECRET="change_me_assist_jwt_secret"
COMMON_ASSIST_KEY="change_me_assist_key"
## DB versions
######################################
POSTGRES_VERSION="14.5.0"
POSTGRES_VERSION="17.2.0"
REDIS_VERSION="6.0.12-debian-10-r33"
MINIO_VERSION="2023.2.10-debian-11-r1"
CLICKHOUSE_VERSION="25.1-alpine"
######################################

View file

@ -1,11 +0,0 @@
CH_USERNAME='default'
CH_PASSWORD=''
CLICKHOUSE_STRING='clickhouse-openreplay-clickhouse.db.svc.cluster.local:9000/default'
LICENSE_KEY=''
KAFKA_SERVERS='kafka.db.svc.cluster.local:9092'
KAFKA_USE_SSL='false'
pg_password="${COMMON_PG_PASSWORD}"
QUICKWIT_ENABLED='false'
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql:5432/postgres"
REDIS_STRING='redis://redis:6379'
ch_db='default'

View file

@ -1,15 +1,34 @@
# vim: ft=yaml
version: '3'
services:
postgresql:
image: bitnami/postgresql:${POSTGRES_VERSION}
container_name: postgres
volumes:
- pgdata:/var/lib/postgresql/data
networks:
- openreplay-net
openreplay-net:
aliases:
- postgresql.db.svc.cluster.local
environment:
POSTGRESQL_PASSWORD: ${COMMON_PG_PASSWORD}
POSTGRESQL_PASSWORD: "${COMMON_PG_PASSWORD}"
clickhouse:
image: clickhouse/clickhouse-server:${CLICKHOUSE_VERSION}
container_name: clickhouse
volumes:
- clickhouse:/var/lib/clickhouse
networks:
openreplay-net:
aliases:
- clickhouse-openreplay-clickhouse.db.svc.cluster.local
environment:
CLICKHOUSE_USER: "default"
CLICKHOUSE_PASSWORD: ""
CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT: "1"
redis:
image: bitnami/redis:${REDIS_VERSION}
@ -17,7 +36,9 @@ services:
volumes:
- redisdata:/bitnami/redis/data
networks:
- openreplay-net
openreplay-net:
aliases:
- redis-master.db.svc.cluster.local
environment:
ALLOW_EMPTY_PASSWORD: "yes"
@ -27,7 +48,9 @@ services:
volumes:
- miniodata:/bitnami/minio/data
networks:
- openreplay-net
openreplay-net:
aliases:
- minio.db.svc.cluster.local
ports:
- 9001:9001
environment:
@ -63,7 +86,7 @@ services:
volumes:
- ../helmcharts/openreplay/files/minio.sh:/tmp/minio.sh
environment:
MINIO_HOST: http://minio:9000
MINIO_HOST: http://minio.db.svc.cluster.local:9000
MINIO_ACCESS_KEY: ${COMMON_S3_KEY}
MINIO_SECRET_KEY: ${COMMON_S3_SECRET}
user: root
@ -80,7 +103,7 @@ services:
bash /tmp/minio.sh init || exit 100
db-migration:
image: bitnami/postgresql:14.5.0
container_name: db-migration
profiles:
- "migration"
@ -101,65 +124,317 @@ services:
- /bin/bash
- -c
- |
until PGPASSWORD=${COMMON_PG_PASSWORD} psql -h postgresql -U postgres -d postgres -c '\q'; do
until psql -c '\q'; do
echo "PostgreSQL is unavailable - sleeping"
sleep 1
done
echo "PostgreSQL is up - executing command"
psql -v ON_ERROR_STOP=1 -f /tmp/init_schema.sql
frontend-openreplay:
image: public.ecr.aws/p1t3u8a3/frontend:${COMMON_VERSION}
container_name: frontend
clickhouse-migration:
image: clickhouse/clickhouse-server:${CLICKHOUSE_VERSION}
container_name: clickhouse-migration
profiles:
- "migration"
depends_on:
- clickhouse
- minio-migration
networks:
- openreplay-net
restart: unless-stopped
volumes:
- ../schema/db/init_dbs/clickhouse/init_schema.sql:/tmp/init_schema.sql
environment:
CH_HOST: "clickhouse-openreplay-clickhouse.db.svc.cluster.local"
CH_PORT: "9000"
CH_PORT_HTTP: "8123"
CH_USERNAME: "default"
CH_PASSWORD: ""
entrypoint:
- /bin/bash
- -c
- |
# Checking variable is empty. Shell independent method.
# Wait for ClickHouse to be ready
until nc -z -v -w30 clickhouse-openreplay-clickhouse.db.svc.cluster.local 9000; do
echo "Waiting for ClickHouse server to be ready..."
sleep 1
done
echo "clickhouse is up - executing command"
clickhouse-client -h ${CH_HOST} --user ${CH_USERNAME} ${CH_PASSWORD} --port ${CH_PORT} --multiquery < /tmp/init_schema.sql || true
alerts-openreplay:
image: public.ecr.aws/p1t3u8a3/alerts:${COMMON_VERSION}
domainname: app.svc.cluster.local
container_name: alerts
networks:
- openreplay-net
openreplay-net:
aliases:
- alerts-openreplay
- alerts-openreplay.app.svc.cluster.local
volumes:
- shared-volume:/mnt/efs
env_file:
- alerts.env
- docker-envs/alerts.env
environment: {} # Fallback empty environment if env_file is missing
restart: unless-stopped
analytics-openreplay:
image: public.ecr.aws/p1t3u8a3/analytics:${COMMON_VERSION}
domainname: app.svc.cluster.local
container_name: analytics
networks:
openreplay-net:
aliases:
- analytics-openreplay
- analytics-openreplay.app.svc.cluster.local
volumes:
- shared-volume:/mnt/efs
env_file:
- docker-envs/analytics.env
environment: {} # Fallback empty environment if env_file is missing
restart: unless-stopped
http-openreplay:
image: public.ecr.aws/p1t3u8a3/http:${COMMON_VERSION}
domainname: app.svc.cluster.local
container_name: http
networks:
openreplay-net:
aliases:
- http-openreplay
- http-openreplay.app.svc.cluster.local
volumes:
- shared-volume:/mnt/efs
env_file:
- docker-envs/http.env
environment: {} # Fallback empty environment if env_file is missing
restart: unless-stopped
images-openreplay:
image: public.ecr.aws/p1t3u8a3/images:${COMMON_VERSION}
domainname: app.svc.cluster.local
container_name: images
networks:
openreplay-net:
aliases:
- images-openreplay
- images-openreplay.app.svc.cluster.local
volumes:
- shared-volume:/mnt/efs
env_file:
- docker-envs/images.env
environment: {} # Fallback empty environment if env_file is missing
restart: unless-stopped
integrations-openreplay:
image: public.ecr.aws/p1t3u8a3/integrations:${COMMON_VERSION}
domainname: app.svc.cluster.local
container_name: integrations
networks:
openreplay-net:
aliases:
- integrations-openreplay
- integrations-openreplay.app.svc.cluster.local
volumes:
- shared-volume:/mnt/efs
env_file:
- docker-envs/integrations.env
environment: {} # Fallback empty environment if env_file is missing
restart: unless-stopped
sink-openreplay:
image: public.ecr.aws/p1t3u8a3/sink:${COMMON_VERSION}
domainname: app.svc.cluster.local
container_name: sink
networks:
openreplay-net:
aliases:
- sink-openreplay
- sink-openreplay.app.svc.cluster.local
volumes:
- shared-volume:/mnt/efs
env_file:
- docker-envs/sink.env
environment: {} # Fallback empty environment if env_file is missing
restart: unless-stopped
sourcemapreader-openreplay:
image: public.ecr.aws/p1t3u8a3/sourcemapreader:${COMMON_VERSION}
domainname: app.svc.cluster.local
container_name: sourcemapreader
networks:
openreplay-net:
aliases:
- sourcemapreader-openreplay
- sourcemapreader-openreplay.app.svc.cluster.local
volumes:
- shared-volume:/mnt/efs
env_file:
- docker-envs/sourcemapreader.env
environment: {} # Fallback empty environment if env_file is missing
restart: unless-stopped
spot-openreplay:
image: public.ecr.aws/p1t3u8a3/spot:${COMMON_VERSION}
domainname: app.svc.cluster.local
container_name: spot
networks:
openreplay-net:
aliases:
- spot-openreplay
- spot-openreplay.app.svc.cluster.local
volumes:
- shared-volume:/mnt/efs
env_file:
- docker-envs/spot.env
environment: {} # Fallback empty environment if env_file is missing
restart: unless-stopped
storage-openreplay:
image: public.ecr.aws/p1t3u8a3/storage:${COMMON_VERSION}
domainname: app.svc.cluster.local
container_name: storage
networks:
openreplay-net:
aliases:
- storage-openreplay
- storage-openreplay.app.svc.cluster.local
volumes:
- shared-volume:/mnt/efs
env_file:
- docker-envs/storage.env
environment: {} # Fallback empty environment if env_file is missing
restart: unless-stopped
assets-openreplay:
image: public.ecr.aws/p1t3u8a3/assets:${COMMON_VERSION}
domainname: app.svc.cluster.local
container_name: assets
networks:
- openreplay-net
openreplay-net:
aliases:
- assets-openreplay
- assets-openreplay.app.svc.cluster.local
volumes:
- shared-volume:/mnt/efs
env_file:
- assets.env
- docker-envs/assets.env
environment: {} # Fallback empty environment if env_file is missing
restart: unless-stopped
assist-openreplay:
image: public.ecr.aws/p1t3u8a3/assist:${COMMON_VERSION}
domainname: app.svc.cluster.local
container_name: assist
networks:
- openreplay-net
openreplay-net:
aliases:
- assist-openreplay
- assist-openreplay.app.svc.cluster.local
volumes:
- shared-volume:/mnt/efs
env_file:
- assist.env
- docker-envs/assist.env
environment: {} # Fallback empty environment if env_file is missing
restart: unless-stopped
canvases-openreplay:
image: public.ecr.aws/p1t3u8a3/canvases:${COMMON_VERSION}
domainname: app.svc.cluster.local
container_name: canvases
networks:
openreplay-net:
aliases:
- canvases-openreplay
- canvases-openreplay.app.svc.cluster.local
volumes:
- shared-volume:/mnt/efs
env_file:
- docker-envs/canvases.env
environment: {} # Fallback empty environment if env_file is missing
restart: unless-stopped
chalice-openreplay:
image: public.ecr.aws/p1t3u8a3/chalice:${COMMON_VERSION}
domainname: app.svc.cluster.local
container_name: chalice
networks:
openreplay-net:
aliases:
- chalice-openreplay
- chalice-openreplay.app.svc.cluster.local
volumes:
- shared-volume:/mnt/efs
env_file:
- docker-envs/chalice.env
environment: {} # Fallback empty environment if env_file is missing
restart: unless-stopped
db-openreplay:
image: public.ecr.aws/p1t3u8a3/db:${COMMON_VERSION}
domainname: app.svc.cluster.local
container_name: db
networks:
- openreplay-net
openreplay-net:
aliases:
- db-openreplay
- db-openreplay.app.svc.cluster.local
volumes:
- shared-volume:/mnt/efs
env_file:
- db.env
- docker-envs/db.env
environment: {} # Fallback empty environment if env_file is missing
restart: unless-stopped
ender-openreplay:
image: public.ecr.aws/p1t3u8a3/ender:${COMMON_VERSION}
domainname: app.svc.cluster.local
container_name: ender
networks:
- openreplay-net
openreplay-net:
aliases:
- ender-openreplay
- ender-openreplay.app.svc.cluster.local
volumes:
- shared-volume:/mnt/efs
env_file:
- ender.env
- docker-envs/ender.env
environment: {} # Fallback empty environment if env_file is missing
restart: unless-stopped
frontend-openreplay:
image: public.ecr.aws/p1t3u8a3/frontend:${COMMON_VERSION}
domainname: app.svc.cluster.local
container_name: frontend
networks:
openreplay-net:
aliases:
- frontend-openreplay
- frontend-openreplay.app.svc.cluster.local
volumes:
- shared-volume:/mnt/efs
env_file:
- docker-envs/frontend.env
environment: {} # Fallback empty environment if env_file is missing
restart: unless-stopped
heuristics-openreplay:
image: public.ecr.aws/p1t3u8a3/heuristics:${COMMON_VERSION}
domainname: app.svc.cluster.local
@ -167,88 +442,15 @@ services:
networks:
openreplay-net:
aliases:
- heuristics-openreplay
- heuristics-openreplay.app.svc.cluster.local
env_file:
- heuristics.env
restart: unless-stopped
imagestorage-openreplay:
image: public.ecr.aws/p1t3u8a3/imagestorage:${COMMON_VERSION}
container_name: imagestorage
env_file:
- imagestorage.env
networks:
- openreplay-net
restart: unless-stopped
integrations-openreplay:
image: public.ecr.aws/p1t3u8a3/integrations:${COMMON_VERSION}
container_name: integrations
networks:
- openreplay-net
env_file:
- integrations.env
restart: unless-stopped
peers-openreplay:
image: public.ecr.aws/p1t3u8a3/peers:${COMMON_VERSION}
container_name: peers
networks:
- openreplay-net
env_file:
- peers.env
restart: unless-stopped
sourcemapreader-openreplay:
image: public.ecr.aws/p1t3u8a3/sourcemapreader:${COMMON_VERSION}
container_name: sourcemapreader
networks:
- openreplay-net
env_file:
- sourcemapreader.env
restart: unless-stopped
http-openreplay:
image: public.ecr.aws/p1t3u8a3/http:${COMMON_VERSION}
container_name: http
networks:
- openreplay-net
env_file:
- http.env
restart: unless-stopped
chalice-openreplay:
image: public.ecr.aws/p1t3u8a3/chalice:${COMMON_VERSION}
container_name: chalice
volumes:
- shared-volume:/mnt/efs
networks:
- openreplay-net
env_file:
- chalice.env
restart: unless-stopped
sink-openreplay:
image: public.ecr.aws/p1t3u8a3/sink:${COMMON_VERSION}
container_name: sink
volumes:
- shared-volume:/mnt/efs
networks:
- openreplay-net
env_file:
- sink.env
restart: unless-stopped
storage-openreplay:
image: public.ecr.aws/p1t3u8a3/storage:${COMMON_VERSION}
container_name: storage
volumes:
- shared-volume:/mnt/efs
networks:
- openreplay-net
env_file:
- storage.env
- docker-envs/heuristics.env
environment: {} # Fallback empty environment if env_file is missing
restart: unless-stopped
nginx-openreplay:
image: nginx:latest
@ -280,6 +482,7 @@ services:
volumes:
pgdata:
clickhouse:
redisdata:
miniodata:
shared-volume:

View file

@ -0,0 +1,27 @@
version_number="v1.22.0"
pg_host="postgresql.db.svc.cluster.local"
pg_port="5432"
pg_dbname="postgres"
ch_host="clickhouse-openreplay-clickhouse.db.svc.cluster.local"
ch_port="9000"
ch_port_http="8123"
ch_username="default"
ch_password=""
pg_user="postgres"
pg_password="${COMMON_PG_PASSWORD}"
SITE_URL="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
S3_HOST="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
S3_KEY="${COMMON_S3_KEY}"
S3_SECRET="${COMMON_S3_SECRET}"
AWS_DEFAULT_REGION="us-east-1"
EMAIL_HOST=""
EMAIL_PORT="587"
EMAIL_USER=""
EMAIL_PASSWORD=""
EMAIL_USE_TLS="true"
EMAIL_USE_SSL="false"
EMAIL_SSL_KEY=""
EMAIL_SSL_CERT=""
EMAIL_FROM="OpenReplay<do-not-reply@openreplay.com>"
LOGLEVEL="INFO"
PYTHONUNBUFFERED="0"

View file

@ -0,0 +1,11 @@
TOKEN_SECRET="secret_token_string"
LICENSE_KEY=""
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
KAFKA_USE_SSL="false"
JWT_SECRET="${COMMON_JWT_SECRET}"
CH_USERNAME="default"
CH_PASSWORD=""
CLICKHOUSE_STRING="clickhouse-openreplay-clickhouse.db.svc.cluster.local:9000/"
pg_password="${COMMON_PG_PASSWORD}"
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql.db.svc.cluster.local:5432/postgres?sslmode=disable"
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"

View file

@ -0,0 +1,10 @@
AWS_ACCESS_KEY_ID="${COMMON_S3_KEY}"
AWS_SECRET_ACCESS_KEY="${COMMON_S3_SECRET}"
BUCKET_NAME="sessions-assets"
LICENSE_KEY=""
AWS_ENDPOINT="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
AWS_REGION="us-east-1"
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
KAFKA_USE_SSL="false"
ASSETS_ORIGIN="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}/sessions-assets"
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"

View file

@ -0,0 +1,11 @@
ASSIST_JWT_SECRET="${COMMON_ASSIST_JWT_SECRET}"
ASSIST_KEY="${COMMON_ASSIST_KEY}"
AWS_DEFAULT_REGION="us-east-1"
S3_HOST="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}:80"
S3_KEY="${COMMON_S3_KEY}"
S3_SECRET="${COMMON_S3_SECRET}"
REDIS_URL="redis-master.db.svc.cluster.local"
CLEAR_SOCKET_TIME="720"
debug="0"
redis="false"
uws="false"

View file

@ -0,0 +1,10 @@
AWS_ACCESS_KEY_ID="${COMMON_S3_KEY}"
AWS_SECRET_ACCESS_KEY="${COMMON_S3_SECRET}"
AWS_ENDPOINT="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
AWS_REGION="us-east-1"
BUCKET_NAME="mobs"
LICENSE_KEY=""
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
KAFKA_USE_SSL="false"
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"
FS_CLEAN_HRS="24"

View file

@ -0,0 +1,61 @@
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"
KAFKA_SERVERS="kafka.db.svc.cluster.local"
ch_username="default"
ch_password=""
ch_host="clickhouse-openreplay-clickhouse.db.svc.cluster.local"
ch_port="9000"
ch_port_http="8123"
sourcemaps_reader="http://sourcemapreader-openreplay.app.svc.cluster.local:9000/%s/sourcemaps"
ASSIST_URL="http://assist-openreplay.app.svc.cluster.local:9001/assist/%s"
ASSIST_JWT_SECRET="${COMMON_ASSIST_JWT_SECRET}"
JWT_SECRET="${COMMON_JWT_SECRET}"
JWT_SPOT_SECRET="${COMMON_JWT_SPOT_SECRET}"
ASSIST_KEY="${COMMON_ASSIST_KEY}"
LICENSE_KEY=""
version_number="v1.22.0"
pg_host="postgresql.db.svc.cluster.local"
pg_port="5432"
pg_dbname="postgres"
pg_user="postgres"
pg_password="${COMMON_PG_PASSWORD}"
SITE_URL="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
S3_HOST="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
S3_KEY="${COMMON_S3_KEY}"
S3_SECRET="${COMMON_S3_SECRET}"
AWS_DEFAULT_REGION="us-east-1"
sessions_region="us-east-1"
ASSIST_RECORDS_BUCKET="records"
sessions_bucket="mobs"
IOS_VIDEO_BUCKET="mobs"
sourcemaps_bucket="sourcemaps"
js_cache_bucket="sessions-assets"
EMAIL_HOST=""
EMAIL_PORT="587"
EMAIL_USER=""
EMAIL_PASSWORD=""
EMAIL_USE_TLS="true"
EMAIL_USE_SSL="false"
EMAIL_SSL_KEY=""
EMAIL_SSL_CERT=""
EMAIL_FROM="OpenReplay<do-not-reply@openreplay.com>"
CH_COMPRESSION="false"
CLUSTER_URL="svc.cluster.local"
JWT_EXPIRATION="86400"
JWT_REFRESH_SECRET="${COMMON_JWT_REFRESH_SECRET}"
JWT_SPOT_REFRESH_SECRET="${COMMON_JWT_SPOT_REFRESH_SECRET}"
LOGLEVEL="INFO"
PYTHONUNBUFFERED="0"
SAML2_MD_URL=""
announcement_url=""
assist_secret=""
async_Token=""
captcha_key=""
captcha_server=""
iceServers=""
idp_entityId=""
idp_name=""
idp_sls_url=""
idp_sso_url=""
idp_tenantKey=""
idp_x509cert=""
jwt_algorithm="HS512"

View file

@ -0,0 +1,11 @@
CH_USERNAME="default"
CH_PASSWORD=""
CLICKHOUSE_STRING="clickhouse-openreplay-clickhouse.db.svc.cluster.local:9000/default"
LICENSE_KEY=""
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
KAFKA_USE_SSL="false"
pg_password="${COMMON_PG_PASSWORD}"
QUICKWIT_ENABLED="false"
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql.db.svc.cluster.local:5432/postgres?sslmode=disable"
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"
ch_db="default"

View file

@ -0,0 +1,6 @@
LICENSE_KEY=""
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
KAFKA_USE_SSL="false"
pg_password="${COMMON_PG_PASSWORD}"
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql.db.svc.cluster.local:5432/postgres?sslmode=disable"
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"

View file

@ -0,0 +1,2 @@
TRACKER_HOST="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}/script"
HTTP_PORT="80"

View file

@ -0,0 +1,4 @@
LICENSE_KEY=""
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
KAFKA_USE_SSL="false"
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"

View file

@ -0,0 +1,15 @@
BUCKET_NAME="uxtesting-records"
CACHE_ASSETS="true"
AWS_ACCESS_KEY_ID="${COMMON_S3_KEY}"
AWS_SECRET_ACCESS_KEY="${COMMON_S3_SECRET}"
AWS_REGION="us-east-1"
AWS_ENDPOINT="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
LICENSE_KEY=""
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
KAFKA_USE_SSL="false"
pg_password="${COMMON_PG_PASSWORD}"
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql.db.svc.cluster.local:5432/postgres?sslmode=disable"
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"
JWT_SECRET="${COMMON_JWT_SECRET}"
JWT_SPOT_SECRET="${COMMON_JWT_SPOT_SECRET}"
TOKEN_SECRET="${COMMON_TOKEN_SECRET}"

View file

@ -0,0 +1,10 @@
AWS_ACCESS_KEY_ID="${COMMON_S3_KEY}"
AWS_SECRET_ACCESS_KEY="${COMMON_S3_SECRET}"
AWS_ENDPOINT="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
AWS_REGION="us-east-1"
BUCKET_NAME="mobs"
LICENSE_KEY=""
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
KAFKA_USE_SSL="false"
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"
FS_CLEAN_HRS="24"

View file

@ -0,0 +1,13 @@
AWS_ACCESS_KEY_ID="${COMMON_S3_KEY}"
AWS_SECRET_ACCESS_KEY="${COMMON_S3_SECRET}"
AWS_ENDPOINT="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
AWS_REGION="us-east-1"
BUCKET_NAME="mobs"
JWT_SECRET="${COMMON_JWT_SECRET}"
LICENSE_KEY=""
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
KAFKA_USE_SSL="false"
pg_password="${COMMON_PG_PASSWORD}"
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql.db.svc.cluster.local:5432/postgres?sslmode=disable"
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"
TOKEN_SECRET="secret_token_string"

View file

@ -0,0 +1,5 @@
LICENSE_KEY=""
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
KAFKA_USE_SSL="false"
ASSETS_ORIGIN="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}/sessions-assets"
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"

View file

@ -0,0 +1,11 @@
SMR_HOST="0.0.0.0"
S3_HOST="http://minio.db.svc.cluster.local:9000"
S3_KEY="${COMMON_S3_KEY}"
S3_SECRET="${COMMON_S3_SECRET}"
AWS_REGION="us-east-1"
LICENSE_KEY=""
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
KAFKA_USE_SSL="false"
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql.db.svc.cluster.local:5432/postgres"
ASSETS_ORIGIN="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}/sessions-assets"

View file

@ -0,0 +1,16 @@
CACHE_ASSETS="true"
FS_CLEAN_HRS="24"
TOKEN_SECRET="secret_token_string"
AWS_ACCESS_KEY_ID="${COMMON_S3_KEY}"
AWS_SECRET_ACCESS_KEY="${COMMON_S3_SECRET}"
BUCKET_NAME="spots"
AWS_REGION="us-east-1"
AWS_ENDPOINT="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
LICENSE_KEY=""
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
KAFKA_USE_SSL="false"
JWT_SECRET="${COMMON_JWT_SECRET}"
JWT_SPOT_SECRET="${COMMON_JWT_SPOT_SECRET}"
pg_password="${COMMON_PG_PASSWORD}"
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql.db.svc.cluster.local:5432/postgres?sslmode=disable"
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"

View file

@ -0,0 +1,10 @@
AWS_ACCESS_KEY_ID="${COMMON_S3_KEY}"
AWS_SECRET_ACCESS_KEY="${COMMON_S3_SECRET}"
AWS_ENDPOINT="${COMMON_PROTOCOL}://${COMMON_DOMAIN_NAME}"
AWS_REGION="us-east-1"
BUCKET_NAME="mobs"
LICENSE_KEY=""
KAFKA_SERVERS="kafka.db.svc.cluster.local:9092"
KAFKA_USE_SSL="false"
REDIS_STRING="redis://redis-master.db.svc.cluster.local:6379"
FS_CLEAN_HRS="24"

View file

@ -1,6 +0,0 @@
LICENSE_KEY=''
KAFKA_SERVERS='kafka.db.svc.cluster.local:9092'
KAFKA_USE_SSL='false'
pg_password="${COMMON_PG_PASSWORD}"
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql:5432/postgres"
REDIS_STRING='redis://redis:6379'

View file

@ -0,0 +1,38 @@
.PHONY: default
default: create-compose
help: ## Prints help for targets with comments
@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m\n"} /^[a-zA-Z_0-9-]+:.*?##/ \
{ printf " \033[36m%-25s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)
.PHONY: helm-template
helm-template:
@rm -rf yamls
@mkdir yamls
@helm template op ../../helmcharts/openreplay -n app -f ../../helmcharts/vars.yaml -f vars.yaml > yamls/deployment.yaml
.PHONY: create-yamls
create-yamls: helm-template
@awk -v RS='---' 'NR>1{kind=""; name=""; if(match($$0, /kind:[[:space:]]*([a-zA-Z]+)/, k) && \
match($$0, /name:[[:space:]]*([a-zA-Z0-9_.-]+)/, n)) \
{kind=k[1]; name=n[1]; if(kind == "Deployment") print $$0 > "yamls/"name".yaml";}}' yamls/deployment.yaml
@rm yamls/ingress-nginx.yaml
@rm yamls/deployment.yaml
.PHONY: create-envs
create-envs: create-yamls ## Create envs from deployment
@echo Creating env vars...
@rm -rf ../docker-envs
@mkdir ../docker-envs
@# @find ./ -type f -iname "Deployment" -exec templater -i env.tpl -f ../deployment.yaml {} > {}.env \;
@find yamls/ -type f -name "*.yaml" -exec sh -c 'filename=$$(basename {} -openreplay.yaml); \
templater -i tpls/env.tpl -f {} > ../docker-envs/$${filename}.env' \;
@# Replace all http/https for COMMON_DOMAIN_NAME with COMMON_PROTOCOL
@find ../docker-envs/ -type f -name "*.env" -exec sed -i 's|http[s]\?://\$${COMMON_DOMAIN_NAME}|\$${COMMON_PROTOCOL}://\$${COMMON_DOMAIN_NAME}|g' {} \;
.PHONY: create-compose
create-compose: create-envs ## Create docker-compose.yml
@echo creating docker-compose yaml
$(eval FILES := $(shell find yamls/ -type f -name "*.yaml" -exec basename {} .yaml \; | tr '\n' ',' | sed 's/,$$//'))
@#echo "Files found: $(FILES)"
@FILES=$(FILES) templater -i tpls/docker-compose.tpl -f ../../helmcharts/vars.yaml -f vars.yaml > ../docker-compose.yaml

View file

@ -0,0 +1,228 @@
{{/* # vim: ft=helm: */}}
# vim: ft=yaml
version: '3'
services:
postgresql:
image: bitnami/postgresql:${POSTGRES_VERSION}
container_name: postgres
volumes:
- pgdata:/var/lib/postgresql/data
networks:
openreplay-net:
aliases:
- postgresql.db.svc.cluster.local
environment:
POSTGRESQL_PASSWORD: "{{.Values.global.postgresql.postgresqlPassword}}"
clickhouse:
image: clickhouse/clickhouse-server:${CLICKHOUSE_VERSION}
container_name: clickhouse
volumes:
- clickhouse:/var/lib/clickhouse
networks:
openreplay-net:
aliases:
- clickhouse-openreplay-clickhouse.db.svc.cluster.local
environment:
CLICKHOUSE_USER: "default"
CLICKHOUSE_PASSWORD: ""
CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT: "1"
redis:
image: bitnami/redis:${REDIS_VERSION}
container_name: redis
volumes:
- redisdata:/bitnami/redis/data
networks:
openreplay-net:
aliases:
- redis-master.db.svc.cluster.local
environment:
ALLOW_EMPTY_PASSWORD: "yes"
minio:
image: bitnami/minio:${MINIO_VERSION}
container_name: minio
volumes:
- miniodata:/bitnami/minio/data
networks:
openreplay-net:
aliases:
- minio.db.svc.cluster.local
ports:
- 9001:9001
environment:
MINIO_ROOT_USER: {{.Values.minio.global.minio.accessKey}}
MINIO_ROOT_PASSWORD: {{.Values.minio.global.minio.secretKey}}
fs-permission:
image: debian:stable-slim
container_name: fs-permission
profiles:
- "migration"
volumes:
- shared-volume:/mnt/efs
- miniodata:/mnt/minio
- pgdata:/mnt/postgres
entrypoint:
- /bin/bash
- -c
- |
chown -R 1001:1001 /mnt/{efs,minio,postgres}
restart: on-failure
minio-migration:
image: bitnami/minio:2020.10.9-debian-10-r6
container_name: minio-migration
profiles:
- "migration"
depends_on:
- minio
- fs-permission
networks:
- openreplay-net
volumes:
- ../helmcharts/openreplay/files/minio.sh:/tmp/minio.sh
environment:
MINIO_HOST: http://minio.db.svc.cluster.local:9000
MINIO_ACCESS_KEY: {{.Values.minio.global.minio.accessKey}}
MINIO_SECRET_KEY: {{.Values.minio.global.minio.secretKey}}
user: root
entrypoint:
- /bin/bash
- -c
- |
apt update && apt install netcat -y
# Wait for Minio to be ready
until nc -z -v -w30 minio 9000; do
echo "Waiting for Minio server to be ready..."
sleep 1
done
bash /tmp/minio.sh init || exit 100
db-migration:
image: bitnami/postgresql:14.5.0
container_name: db-migration
profiles:
- "migration"
depends_on:
- postgresql
- minio-migration
networks:
- openreplay-net
volumes:
- ../schema/db/init_dbs/postgresql/init_schema.sql:/tmp/init_schema.sql
environment:
PGHOST: postgresql
PGPORT: 5432
PGDATABASE: postgres
PGUSER: postgres
PGPASSWORD: {{.Values.global.postgresql.postgresqlPassword}}
entrypoint:
- /bin/bash
- -c
- |
until psql -c '\q'; do
echo "PostgreSQL is unavailable - sleeping"
sleep 1
done
echo "PostgreSQL is up - executing command"
psql -v ON_ERROR_STOP=1 -f /tmp/init_schema.sql
clickhouse-migration:
image: clickhouse/clickhouse-server:${CLICKHOUSE_VERSION}
container_name: clickhouse-migration
profiles:
- "migration"
depends_on:
- clickhouse
- minio-migration
networks:
- openreplay-net
volumes:
- ../schema/db/init_dbs/clickhouse/init_schema.sql:/tmp/init_schema.sql
environment:
CH_HOST: "{{.Values.global.clickhouse.chHost}}"
CH_PORT: "{{.Values.global.clickhouse.service.webPort}}"
CH_PORT_HTTP: "{{.Values.global.clickhouse.service.dataPort}}"
CH_USERNAME: "{{.Values.global.clickhouse.username}}"
CH_PASSWORD: "{{.Values.global.clickhouse.password}}"
entrypoint:
- /bin/bash
- -c
- |
# Checking variable is empty. Shell independent method.
# Wait for ClickHouse to be ready
until nc -z -v -w30 clickhouse-openreplay-clickhouse.db.svc.cluster.local 9000; do
echo "Waiting for ClickHouse server to be ready..."
sleep 1
done
echo "clickhouse is up - executing command"
clickhouse-client -h ${CH_HOST} --user ${CH_USERNAME} ${CH_PASSWORD} --port ${CH_PORT} --multiquery < /tmp/init_schema.sql || true
{{- define "service" -}}
{{- $service_name := . }}
{{- $container_name := (splitList "-" $service_name) | first | printf "%s" }}
{{print $service_name}}:
image: public.ecr.aws/p1t3u8a3/{{$container_name}}:${COMMON_VERSION}
domainname: app.svc.cluster.local
container_name: {{print $container_name}}
networks:
openreplay-net:
aliases:
- {{print $container_name}}-openreplay
- {{print $container_name}}-openreplay.app.svc.cluster.local
volumes:
- shared-volume:/mnt/efs
env_file:
- docker-envs/{{print $container_name}}.env
environment: {} # Fallback empty environment if env_file is missing
restart: unless-stopped
{{ end -}}
{{- range $file := split "," (env "FILES")}}
{{ template "service" $file}}
{{- end}}
nginx-openreplay:
image: nginx:latest
container_name: nginx
networks:
- openreplay-net
volumes:
- ./nginx.conf:/etc/nginx/conf.d/default.conf
restart: unless-stopped
caddy:
image: caddy:latest
container_name: caddy
ports:
- "80:80"
- "443:443"
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile
- caddy_data:/data
- caddy_config:/config
networks:
- openreplay-net
environment:
- ACME_AGREE=true # Agree to Let's Encrypt Subscriber Agreement
- CADDY_DOMAIN=${CADDY_DOMAIN}
restart: unless-stopped
volumes:
pgdata:
clickhouse:
redisdata:
miniodata:
shared-volume:
caddy_data:
caddy_config:
networks:
openreplay-net:

View file

@ -0,0 +1,6 @@
{{- $excludedKeys := list "POD_NAMESPACE" -}}
{{ range (index .Values.spec.template.spec.containers 0).env -}}
{{- if not (has .name $excludedKeys) -}}
{{ .name }}="{{ .value }}"
{{ end -}}
{{ end -}}

View file

@ -0,0 +1,26 @@
postgresql: &postgres
postgresqlPassword: '${COMMON_PG_PASSWORD}'
minio:
global:
minio:
accessKey: &accessKey '${COMMON_S3_KEY}'
secretKey: &secretKey '${COMMON_S3_SECRET}'
global:
pg_connection_string: "postgres://postgres:${COMMON_PG_PASSWORD}@postgresql.db.svc.cluster.local:5432/postgres?sslmode=disable"
postgresql: *postgres
assistKey: '${COMMON_ASSIST_KEY}'
assistJWTSecret: '${COMMON_ASSIST_JWT_SECRET}'
jwtSecret: '${COMMON_JWT_SECRET}'
jwtSpotSecret: '${COMMON_JWT_SPOT_SECRET}'
tokenSecret: '${COMMON_TOKEN_SECRET}'
domainName: "${COMMON_DOMAIN_NAME}"
ORSecureAccess: false
s3:
accessKey: *accessKey
secretKey: *secretKey
chalice:
env:
JWT_REFRESH_SECRET: "${COMMON_JWT_REFRESH_SECRET}"
JWT_SPOT_REFRESH_SECRET: "${COMMON_JWT_SPOT_REFRESH_SECRET}"
POD_NAMESPACE: app
CLUSTER_URL: svc.cluster.local

View file

@ -1,4 +0,0 @@
LICENSE_KEY=''
KAFKA_SERVERS='kafka.db.svc.cluster.local:9092'
KAFKA_USE_SSL='false'
REDIS_STRING='redis://redis:6379'

View file

@ -1,12 +0,0 @@
CACHE_ASSETS='true'
TOKEN_SECRET='secret_token_string'
AWS_ACCESS_KEY_ID=${COMMON_S3_KEY}
AWS_SECRET_ACCESS_KEY=${COMMON_S3_SECRET}
AWS_REGION='us-east-1'
LICENSE_KEY=''
KAFKA_SERVERS='kafka.db.svc.cluster.local:9092'
KAFKA_USE_SSL='false'
pg_password="${COMMON_PG_PASSWORD}"
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql:5432/postgres"
REDIS_STRING='redis://redis:6379'
BUCKET_NAME='uxtesting-records'

View file

@ -1,10 +0,0 @@
AWS_ACCESS_KEY_ID=${COMMON_S3_KEY}
AWS_SECRET_ACCESS_KEY=${COMMON_S3_SECRET}
AWS_ENDPOINT='http://minio:9000'
AWS_REGION='us-east-1'
BUCKET_NAME=mobs
LICENSE_KEY=''
KAFKA_SERVERS='kafka.db.svc.cluster.local:9092'
KAFKA_USE_SSL='false'
REDIS_STRING='redis://redis:6379'
FS_CLEAN_HRS='24'

View file

@ -12,42 +12,48 @@ NC='\033[0m' # No Color
# --- Helper functions for logs ---
info() {
echo -e "${GREEN}[INFO] $1 ${NC} 👍"
echo -e "${GREEN}[INFO] $1 ${NC} 👍"
}
warn() {
echo -e "${YELLOW}[WARN] $1 ${NC} ⚠️"
echo -e "${YELLOW}[WARN] $1 ${NC} ⚠️"
}
fatal() {
echo -e "${RED}[FATAL] $1 ${NC} 🔥"
exit 1
echo -e "${RED}[FATAL] $1 ${NC} 🔥"
exit 1
}
# Function to check if a command exists
function exists() {
type "$1" &>/dev/null
type "$1" &>/dev/null
}
# Generate a random password using openssl
randomPass() {
exists openssl || {
info "Installing openssl... 🔐"
sudo apt update &>/dev/null
sudo apt install openssl -y &>/dev/null
}
openssl rand -hex 10
}
# Create dynamic passwords and update the environment file
function create_passwords() {
info "Creating dynamic passwords..."
sed -i "s/change_me_domain/${DOMAIN_NAME}/g" common.env
sed -i "s/change_me_jwt/$(randomPass)/g" common.env
sed -i "s/change_me_s3_key/$(randomPass)/g" common.env
sed -i "s/change_me_s3_secret/$(randomPass)/g" common.env
sed -i "s/change_me_pg_password/$(randomPass)/g" common.env
info "Passwords created and updated in common.env file."
info "Creating dynamic passwords..."
# Update domain name replacement
sed -i "s/change_me_domain/${DOMAIN_NAME}/g" common.env
# Find all change_me_ entries and replace them with random passwords
grep -o 'change_me_[a-zA-Z0-9_]*' common.env | sort -u | while read -r token; do
random_pass=$(randomPass)
sed -i "s/${token}/${random_pass}/g" common.env
info "Generated password for ${token}"
done
info "Passwords created and updated in common.env file."
}
# update apt cache
@ -72,23 +78,27 @@ echo -e "${GREEN}"
read -rp "Enter DOMAIN_NAME: " DOMAIN_NAME
echo -e "${NC}"
if [[ -z $DOMAIN_NAME ]]; then
fatal "DOMAIN_NAME variable is empty. Please provide a valid domain name to proceed."
fatal "DOMAIN_NAME variable is empty. Please provide a valid domain name to proceed."
fi
info "Using domain name: $DOMAIN_NAME 🌐"
echo "CADDY_DOMAIN=\"$DOMAIN_NAME\"" >> common.env
echo "CADDY_DOMAIN=\"$DOMAIN_NAME\"" >>common.env
read -p "Is the domain on a public DNS? (y/n) " yn
case $yn in
y ) echo "$DOMAIN_NAME is on a public DNS";
;;
n ) echo "$DOMAIN_NAME is on a private DNS";
#add TLS internal to caddyfile
#In local network Caddy can't reach Let's Encrypt servers to get a certificate
mv Caddyfile Caddyfile.public
mv Caddyfile.private Caddyfile
;;
* ) echo invalid response;
exit 1;;
case $yn in
y)
echo "$DOMAIN_NAME is on a public DNS"
;;
n)
echo "$DOMAIN_NAME is on a private DNS"
#add TLS internal to caddyfile
#In local network Caddy can't reach Let's Encrypt servers to get a certificate
mv Caddyfile Caddyfile.public
mv Caddyfile.private Caddyfile
;;
*)
echo invalid response
exit 1
;;
esac
# Create passwords if they don't exist
@ -103,23 +113,27 @@ set +a
# Use the `envsubst` command to substitute the shell environment variables into reference_var.env and output to a combined .env
find ./ -type f \( -iname "*.env" -o -iname "docker-compose.yaml" \) ! -name "common.env" -exec /bin/bash -c 'file="{}"; git checkout -- "$file"; cp "$file" "$file.bak"; envsubst < "$file.bak" > "$file"; rm "$file.bak"' \;
case $yn in
y ) echo "$DOMAIN_NAME is on a public DNS";
##No changes needed
;;
n ) echo "$DOMAIN_NAME is on a private DNS";
##Add a variable to chalice.env file
echo "SKIP_H_SSL=True" >> chalice.env
;;
* ) echo invalid response;
exit 1;;
case $yn in
y)
echo "$DOMAIN_NAME is on a public DNS"
##No changes needed
;;
n)
echo "$DOMAIN_NAME is on a private DNS"
##Add a variable to chalice.env file
echo "SKIP_H_SSL=True" >>chalice.env
;;
*)
echo invalid response
exit 1
;;
esac
services=$(sudo -E docker-compose config --services)
for service in $services; do
echo "Pulling image for $service..."
sudo -E docker-compose pull $service
sleep 5
readarray -t services < <(sudo -E docker-compose config --services)
for service in "${services[@]}"; do
echo "Pulling image for $service..."
sudo -E docker-compose pull --no-parallel "$service"
sleep 5
done
sudo -E docker-compose --profile migration up --force-recreate --build -d

View file

@ -1,12 +0,0 @@
AWS_ACCESS_KEY_ID=${COMMON_S3_KEY}
AWS_SECRET_ACCESS_KEY=${COMMON_S3_SECRET}
AWS_ENDPOINT='http://minio:9000'
AWS_REGION='us-east-1'
BUCKET_NAME=mobs
LICENSE_KEY=''
KAFKA_SERVERS='kafka.db.svc.cluster.local:9092'
KAFKA_USE_SSL='false'
pg_password=${COMMON_PG_PASSWORD}
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql:5432/postgres"
REDIS_STRING='redis://redis:6379'
TOKEN_SECRET='secret_token_string'

View file

@ -71,7 +71,7 @@ server {
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $host;
proxy_pass http://peers-openreplay:9000;
proxy_pass http://assist-openreplay:9001;
}
location /ws-assist/ {
rewrite ^/ws-assist/(.*) /$1 break;

View file

@ -1,3 +0,0 @@
ASSIST_KEY=SetARandomStringHere
S3_KEY=${COMMON_S3_KEY}
debug='0'

View file

@ -1,5 +0,0 @@
LICENSE_KEY=''
KAFKA_SERVERS='kafka.db.svc.cluster.local:9092'
KAFKA_USE_SSL='false'
ASSETS_ORIGIN="https://${COMMON_DOMAIN_NAME}:443/sessions-assets"
REDIS_STRING='redis://redis:6379'

View file

@ -1,10 +0,0 @@
SMR_HOST='0.0.0.0'
AWS_ACCESS_KEY_ID=${COMMON_S3_KEY}
AWS_SECRET_ACCESS_KEY=${COMMON_S3_SECRET}
AWS_REGION='us-east-1'
LICENSE_KEY=''
REDIS_STRING='redis://redis:6379'
KAFKA_SERVERS='kafka.db.svc.cluster.local:9092'
KAFKA_USE_SSL='false'
POSTGRES_STRING="postgres://postgres:${COMMON_PG_PASSWORD}@postgresql.db.svc.cluster.local:5432/postgres"
ASSETS_ORIGIN="sourcemapreaders://${COMMON_DOMAIN_NAME}:443/sessions-assets"

View file

@ -1,10 +0,0 @@
AWS_ACCESS_KEY_ID=${COMMON_S3_KEY}
AWS_SECRET_ACCESS_KEY=${COMMON_S3_SECRET}
AWS_ENDPOINT='http://minio:9000'
AWS_REGION='us-east-1'
BUCKET_NAME=mobs
LICENSE_KEY=''
KAFKA_SERVERS='kafka.db.svc.cluster.local:9092'
KAFKA_USE_SSL='false'
REDIS_STRING='redis://redis:6379'
FS_CLEAN_HRS='24'

View file

@ -75,7 +75,7 @@ spec:
value: '{{ .Values.global.postgresql.postgresqlPassword }}'
{{- end}}
- name: POSTGRES_STRING
value: 'postgres://{{ .Values.global.postgresql.postgresqlUser }}:$(pg_password)@{{ .Values.global.postgresql.postgresqlHost }}:{{ .Values.global.postgresql.postgresqlPort }}/{{ .Values.global.postgresql.postgresqlDatabase }}'
value: {{ include "openreplay.pg_connection_string" .}}
{{- include "openreplay.env.redis_string" .Values.global.redis | nindent 12 }}
ports:
{{- range $key, $val := .Values.service.ports }}

View file

@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View file

@ -1,24 +0,0 @@
apiVersion: v2
name: assist-server
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful assist-server or functions for the chart developer. They're included as
# a dependency of application charts to inject those assist-server and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "v1.22.0"

View file

@ -1,22 +0,0 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "assist-server.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "assist-server.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "assist-server.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "assist-server.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT
{{- end }}

View file

@ -1,65 +0,0 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "assist-server.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "assist-server.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "assist-server.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "assist-server.labels" -}}
helm.sh/chart: {{ include "assist-server.chart" . }}
{{ include "assist-server.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- if .Values.global.appLabels }}
{{- .Values.global.appLabels | toYaml | nindent 0}}
{{- end}}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "assist-server.selectorLabels" -}}
app.kubernetes.io/name: {{ include "assist-server.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "assist-server.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "assist-server.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

View file

@ -1,113 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "assist-server.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "assist-server.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "assist-server.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "assist-server.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "assist-server.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
shareProcessNamespace: true
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
{{- if .Values.global.enterpriseEditionLicense }}
image: "{{ tpl .Values.image.repository . }}:{{ .Values.image.tag | default .Chart.AppVersion }}-ee"
{{- else }}
image: "{{ tpl .Values.image.repository . }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
{{- end }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- if .Values.healthCheck}}
{{- .Values.healthCheck | toYaml | nindent 10}}
{{- end}}
env:
- name: ASSIST_JWT_SECRET
value: {{ .Values.global.assistJWTSecret }}
- name: ASSIST_KEY
value: {{ .Values.global.assistKey }}
- name: AWS_DEFAULT_REGION
value: "{{ .Values.global.s3.region }}"
- name: S3_HOST
{{- if contains "minio" .Values.global.s3.endpoint }}
value: '{{ ternary "https" "http" .Values.global.ORSecureAccess}}://{{ .Values.global.domainName }}:{{ ternary .Values.global.ingress.controller.service.ports.https .Values.global.ingress.controller.service.ports.http .Values.global.ORSecureAccess }}'
{{- else}}
value: '{{ .Values.global.s3.endpoint }}'
{{- end}}
- name: S3_KEY
{{- if .Values.global.s3.existingSecret }}
valueFrom:
secretKeyRef:
name: {{ .Values.global.s3.existingSecret }}
key: access-key
{{- else }}
value: {{ .Values.global.s3.accessKey }}
{{- end }}
- name: S3_SECRET
{{- if .Values.global.s3.existingSecret }}
valueFrom:
secretKeyRef:
name: {{ .Values.global.s3.existingSecret }}
key: secret-key
{{- else }}
value: {{ .Values.global.s3.secretKey }}
{{- end }}
- name: REDIS_URL
value: {{ .Values.global.redis.redisHost }}
{{- range $key, $val := .Values.global.env }}
- name: {{ $key }}
value: '{{ $val }}'
{{- end }}
{{- range $key, $val := .Values.env }}
- name: {{ $key }}
value: '{{ $val }}'
{{- end}}
ports:
{{- range $key, $val := .Values.service.ports }}
- name: {{ $key }}
containerPort: {{ $val }}
{{- end }}
protocol: TCP
{{- with .Values.persistence.mounts }}
volumeMounts:
{{- toYaml . | nindent 12 }}
{{- end }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.persistence.volumes }}
volumes:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}

View file

@ -1,33 +0,0 @@
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "assist-server.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "assist-server.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "assist-server.fullname" . }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
{{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
{{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
{{- end }}
{{- end }}

View file

@ -1,56 +0,0 @@
{{- if .Values.ingress.enabled }}
{{- $fullName := include "assist-server.fullname" . -}}
{{- $socketioSvcPort := .Values.service.ports.socketio -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ $fullName }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "assist-server.labels" . | nindent 4 }}
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/configuration-snippet: |
#set $sticky_used "no";
#if ($sessionid != "") {
# set $sticky_used "yes";
#}
#add_header X-Debug-Session-ID $sessionid;
#add_header X-Debug-Session-Type "wss";
#add_header X-Sticky-Session-Used $sticky_used;
#add_header X-Upstream-Server $upstream_addr;
proxy_hide_header access-control-allow-headers;
proxy_hide_header Access-Control-Allow-Origin;
add_header 'Access-Control-Allow-Origin' $http_origin always;
add_header 'Access-Control-Allow-Methods' 'GET, OPTIONS' always;
add_header 'Access-Control-Allow-Headers' 'sessionid, Content-Type, Authorization' always;
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Content-Type' 'text/plain charset=UTF-8';
nginx.ingress.kubernetes.io/upstream-hash-by: $sessionid
{{- with .Values.ingress.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
ingressClassName: "{{ tpl .Values.ingress.className . }}"
tls:
- hosts:
- {{ .Values.global.domainName }}
{{- if .Values.ingress.tls.secretName}}
secretName: {{ .Values.ingress.tls.secretName }}
{{- end}}
rules:
- host: {{ .Values.global.domainName }}
http:
paths:
- pathType: Prefix
backend:
service:
name: {{ $fullName }}
port:
number: {{ $socketioSvcPort }}
path: /ws-assist-server/(.*)
{{- end }}

View file

@ -1,18 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "assist-server.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "assist-server.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
{{- range $key, $val := .Values.service.ports }}
- port: {{ $val }}
targetPort: {{ $key }}
protocol: TCP
name: {{ $key }}
{{- end}}
selector:
{{- include "assist-server.selectorLabels" . | nindent 4 }}

View file

@ -1,18 +0,0 @@
{{- if and ( .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" ) ( .Values.serviceMonitor.enabled ) }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ include "assist-server.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "assist-server.labels" . | nindent 4 }}
{{- if .Values.serviceMonitor.additionalLabels }}
{{- toYaml .Values.serviceMonitor.additionalLabels | nindent 4 }}
{{- end }}
spec:
endpoints:
{{- .Values.serviceMonitor.scrapeConfigs | toYaml | nindent 4 }}
selector:
matchLabels:
{{- include "assist-server.selectorLabels" . | nindent 6 }}
{{- end }}

View file

@ -1,13 +0,0 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "assist-server.serviceAccountName" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "assist-server.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}

@@ -1,134 +0,0 @@
# Default values for openreplay.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: "{{ .Values.global.openReplayContainerRegistry }}/assist-server"
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

imagePullSecrets: []
nameOverride: "assist-server"
fullnameOverride: "assist-server-openreplay"

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

securityContext:
  runAsUser: 1001
  runAsGroup: 1001

podSecurityContext:
  runAsUser: 1001
  runAsGroup: 1001
  fsGroup: 1001
  fsGroupChangePolicy: "OnRootMismatch"

# podSecurityContext: {}
#   fsGroup: 2000

# securityContext: {}
#   capabilities:
#     drop:
#       - ALL
#   readOnlyRootFilesystem: true
#   runAsNonRoot: true
#   runAsUser: 1000

#service:
#  type: ClusterIP
#  port: 9000

serviceMonitor:
  enabled: false
  additionalLabels:
    release: observability
  scrapeConfigs:
    - port: metrics
      honorLabels: true
      interval: 15s
      path: /metrics
      scheme: http
      scrapeTimeout: 10s

service:
  type: ClusterIP
  ports:
    socketio: 9001
    metrics: 8888

ingress:
  enabled: true
  className: "{{ .Values.global.ingress.controller.ingressClassResource.name }}"
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      add_header X-Debug-Session-ID $http_sessionid;
      add_header X-Debug-Session-Type "wss";
      # CORS configuration
      # We don't need the upstream header
      proxy_hide_header Access-Control-Allow-Origin;
      add_header 'Access-Control-Allow-Origin' $http_origin always;
      add_header 'Access-Control-Allow-Methods' 'GET, OPTIONS' always;
      add_header 'Access-Control-Allow-Headers' 'sessionid, Content-Type, Authorization' always;
      add_header 'Access-Control-Max-Age' 1728000;
      add_header 'Content-Type' 'text/plain charset=UTF-8';
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  tls:
    secretName: openreplay-ssl

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

env:
  debug: 0
  uws: false
  redis: false
  CLEAR_SOCKET_TIME: 720

nodeSelector: {}
tolerations: []
affinity: {}

persistence: {}
# # Spec of spec.template.spec.containers[*].volumeMounts
# mounts:
#   - name: kafka-ssl
#     mountPath: /opt/kafka/ssl
# # Spec of spec.template.spec.volumes
# volumes:
#   - name: kafka-ssl
#     secret:
#       secretName: kafka-ssl
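As a usage note, uncommenting the persistence sketch above mounts a secret into the pod; everything here comes straight from the commented example:

persistence:
  mounts:          # spec.template.spec.containers[*].volumeMounts
    - name: kafka-ssl
      mountPath: /opt/kafka/ssl
  volumes:         # spec.template.spec.volumes
    - name: kafka-ssl
      secret:
        secretName: kafka-ssl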

@@ -57,7 +57,7 @@ spec:
               value: '{{ .Values.global.postgresql.postgresqlPassword }}'
             {{- end}}
             - name: POSTGRES_STRING
-              value: 'postgres://{{ .Values.global.postgresql.postgresqlUser }}:$(pg_password)@{{ .Values.global.postgresql.postgresqlHost }}:{{ .Values.global.postgresql.postgresqlPort }}/{{ .Values.global.postgresql.postgresqlDatabase }}'
+              value: {{ include "openreplay.pg_connection_string" .}}
             - name: KAFKA_SERVERS
               value: '{{ .Values.global.kafka.kafkaHost }}:{{ .Values.global.kafka.kafkaPort }}'
             - name: KAFKA_USE_SSL

@@ -67,7 +67,7 @@ spec:
             - name: QUICKWIT_ENABLED
               value: '{{ .Values.global.quickwit.enabled }}'
             - name: POSTGRES_STRING
-              value: 'postgres://{{ .Values.global.postgresql.postgresqlUser }}:$(pg_password)@{{ .Values.global.postgresql.postgresqlHost }}:{{ .Values.global.postgresql.postgresqlPort }}/{{ .Values.global.postgresql.postgresqlDatabase }}'
+              value: {{ include "openreplay.pg_connection_string" .}}
             {{- include "openreplay.env.redis_string" .Values.global.redis | nindent 12 }}
             {{- range $key, $val := .Values.global.env }}
             - name: {{ $key }}

@@ -59,7 +59,7 @@ spec:
               value: '{{ .Values.global.postgresql.postgresqlPassword }}'
             {{- end}}
             - name: POSTGRES_STRING
-              value: 'postgres://{{ .Values.global.postgresql.postgresqlUser }}:$(pg_password)@{{ .Values.global.postgresql.postgresqlHost }}:{{ .Values.global.postgresql.postgresqlPort }}/{{ .Values.global.postgresql.postgresqlDatabase }}'
+              value: {{ include "openreplay.pg_connection_string" .}}
             {{- include "openreplay.env.redis_string" .Values.global.redis | nindent 12 }}
             {{- range $key, $val := .Values.global.env }}
             - name: {{ $key }}

@@ -89,7 +89,7 @@ spec:
               value: '{{ .Values.global.postgresql.postgresqlPassword }}'
             {{- end}}
             - name: POSTGRES_STRING
-              value: 'postgres://{{ .Values.global.postgresql.postgresqlUser }}:$(pg_password)@{{ .Values.global.postgresql.postgresqlHost }}:{{ .Values.global.postgresql.postgresqlPort }}/{{ .Values.global.postgresql.postgresqlDatabase }}'
+              value: {{ include "openreplay.pg_connection_string" .}}
             {{- include "openreplay.env.redis_string" .Values.global.redis | nindent 12 }}
             - name: JWT_SECRET
               value: {{ .Values.global.jwtSecret }}

@@ -85,7 +85,7 @@ spec:
               value: '{{ .Values.global.postgresql.postgresqlPassword }}'
             {{- end}}
             - name: POSTGRES_STRING
-              value: 'postgres://{{ .Values.global.postgresql.postgresqlUser }}:$(pg_password)@{{ .Values.global.postgresql.postgresqlHost }}:{{ .Values.global.postgresql.postgresqlPort }}/{{ .Values.global.postgresql.postgresqlDatabase }}'
+              value: {{ include "openreplay.pg_connection_string" .}}
             {{- include "openreplay.env.redis_string" .Values.global.redis | nindent 12 }}
             {{- range $key, $val := .Values.global.env }}
             - name: {{ $key }}

@@ -95,7 +95,7 @@ spec:
               value: '{{ .Values.global.postgresql.postgresqlPassword }}'
             {{- end}}
             - name: POSTGRES_STRING
-              value: 'postgres://{{ .Values.global.postgresql.postgresqlUser }}:$(pg_password)@{{ .Values.global.postgresql.postgresqlHost }}:{{ .Values.global.postgresql.postgresqlPort }}/{{ .Values.global.postgresql.postgresqlDatabase }}'
+              value: {{ include "openreplay.pg_connection_string" .}}
             {{- include "openreplay.env.redis_string" .Values.global.redis | nindent 12 }}
           ports:
             {{- range $key, $val := .Values.service.ports }}

@@ -153,3 +153,11 @@ Create the volume mount config for redis TLS certificates
 {{- include "openreplay.s3Endpoint" . }}/{{.Values.global.s3.assetsBucket}}
 {{- end }}
 {{- end }}
+
+{{- define "openreplay.pg_connection_string"}}
+{{- if .Values.global.pg_connection_string }}
+{{- .Values.global.pg_connection_string -}}
+{{- else -}}
+{{- printf "postgres://%s:$(pg_password)@%s:%s/%s" .Values.global.postgresql.postgresqlUser .Values.global.postgresql.postgresqlHost .Values.global.postgresql.postgresqlPort .Values.global.postgresql.postgresqlDatabase -}}
+{{- end -}}
+{{- end}}
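This new helper is what the identical one-line substitutions above call into: an explicit global.pg_connection_string wins, otherwise the string is composed from the individual postgresql values. A sketch of both paths (host, user, and database names below are illustrative, not taken from this diff):

# Fallback path: with postgresqlUser=postgres, postgresqlHost=postgresql.db.svc.cluster.local,
# postgresqlPort="5432" and postgresqlDatabase=postgres, the printf yields
#   postgres://postgres:$(pg_password)@postgresql.db.svc.cluster.local:5432/postgres
# Helm leaves $(pg_password) untouched; Kubernetes expands it at container start from
# the pg_password env var defined earlier in each container spec, so the password
# never appears in the rendered manifest.
#
# Override path: point the stack at an external database from a values file:
global:
  pg_connection_string: "postgres://replay:$(pg_password)@external-pg.example.com:5432/openreplay"

One caveat: printf's %s expects postgresqlPort to be a string in values; a numeric port would render as something like %!s(int=5432), so the port should stay quoted (or be piped through toString).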