Installing OpenReplay on any VM (Debian-based, preferably Ubuntu 20.04)
You can start testing OpenReplay by installing it on any VM with at least 2 vCPUs, 8 GB of RAM and 50 GB of storage. The script initializes a single-node Kubernetes cluster with k3s and installs OpenReplay on it.
cd helm && bash install.sh
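Before running the installer, you can verify that the VM meets the minimums stated above. The snippet below is a hypothetical pre-flight check (it is not part of `install.sh`); the thresholds come straight from the requirements listed in this README:

```shell
#!/usr/bin/env bash
# Hypothetical pre-flight check (not part of install.sh): compares this VM
# against the documented minimums of 2 vCPUs, 8 GB of RAM and 50 GB of storage.

# check_min LABEL ACTUAL REQUIRED -> prints OK/LOW; returns 1 when too low
check_min() {
  if [ "$2" -ge "$3" ]; then
    echo "$1: OK ($2 >= $3)"
  else
    echo "$1: LOW ($2 < $3)"
    return 1
  fi
}

cpus=$(nproc)
ram_mb=$(free -m | awk '/^Mem:/ {print $2}')
disk_gb=$(df -BG --output=avail / | tail -n1 | tr -dc '0-9')

check_min "vCPUs"     "$cpus"    2    || failed=1
check_min "RAM (MB)"  "$ram_mb"  7900 || failed=1   # ~8 GB, minus kernel-reserved memory
check_min "Disk (GB)" "$disk_gb" 50   || failed=1

[ -z "${failed:-}" ] && echo "VM meets the minimums" || echo "VM is below the documented minimums"
```

The RAM threshold is set slightly under 8192 MB because `free` reports memory after the kernel's own reservations.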
Installing OpenReplay on Kubernetes
OpenReplay runs 100% on Kubernetes. If you have a Kubernetes cluster, preferably one dedicated to OpenReplay (a single node with 4 vCPUs, 8 GB of RAM and 50 GB of storage), you can run the script below, which internally uses Helm to install OpenReplay.
Make sure your cluster can provision a Service of type LoadBalancer, which is needed to expose OpenReplay on the internet.
cd helm && bash kube-install.sh
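Since the script drives the cluster through standard tooling, it is worth confirming the tools it depends on are on your PATH first. This is a hypothetical helper, not part of `kube-install.sh`:

```shell
#!/usr/bin/env bash
# Hypothetical helper (not part of kube-install.sh): checks that required
# command-line tools are installed before you run the installer.

# require_tools TOOL... -> fails on the first missing tool
require_tools() {
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing required tool: $tool" >&2
      return 1
    fi
  done
  echo "all required tools present: $*"
}

# What you would typically run before kube-install.sh:
# require_tools kubectl helm
```

`command -v` is the portable way to test for a tool's presence, so the helper works under any POSIX shell.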
OpenReplay CLI
The CLI helps you manage basic aspects of your OpenReplay instance, such as restarting or reinstalling a service, accessing a component's logs, or checking the status of your backend services. It covers the following operations:
- status: status of the running services
- logs: logs of a specific service
- stop: stop one or all services
- start: start one or all services
- restart: restart one or all services
For more information:
cd helm && openreplay-cli -h
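The subcommand-plus-service shape of the operations above can be sketched as a small dispatcher. This is purely illustrative (the real `openreplay-cli` internals are not shown in this README, and the echoed messages stand in for the actual operations):

```shell
#!/usr/bin/env bash
# Illustrative sketch of how a CLI like openreplay-cli maps subcommands to
# actions. Each echo is a placeholder for the real status/logs/stop/start/
# restart operation; "all" is the default when no service is named.
cli() {
  cmd="${1:-}"; svc="${2:-all}"
  case "$cmd" in
    status)             echo "status of service(s): $svc" ;;
    logs)               echo "logs of service: $svc" ;;
    stop|start|restart) echo "$cmd service(s): $svc" ;;
    *)                  echo "usage: cli {status|logs|stop|start|restart} [service]" >&2
                        return 64 ;;  # EX_USAGE
  esac
}

cli status          # -> status of service(s): all
cli restart http    # -> restart service(s): http
```

Grouping `stop|start|restart` in one case arm mirrors the README's list, where all three accept either one service or all of them.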