Compare commits


2 commits

Author       SHA1         Message                                                                                          Date
rjshrjndrn   e0f81076a2   cp (Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>)                                            2022-02-15 07:12:54 +01:00
rjshrjndrn   9e23c0ba3b   chore(install): enable debug log for minio (Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>)    2022-02-15 06:34:30 +01:00
7478 changed files with 296196 additions and 454375 deletions

View file

@@ -1,33 +0,0 @@
---
name: Bug report
about: Report an issue and help improve OpenReplay
title: ''
labels: bug
assignees: estradino
---
**Describe the issue**
A short description of what the issue is.
**Steps to reproduce the issue**
1. Step 1
2. Step 2
3. You got it :)
**Expected behavior**
What you expected to happen.
**Screenshots**
If possible, that would make our life easier.
**OpenReplay Environment**
- Frontend stack: [e.g. React/Axios/MobX, Next]
- OpenReplay version: [e.g. 1.6.0]
- Tracker version: [e.g. 3.5.10]
- Plugins used: [e.g. Fetch, Redux]
- Cloud provider: [e.g. AWS, GCP]
- System specs: [e.g. 2 vCPU/16GB with 50GB of storage]
**Additional context**
Add any additional information you think might be relevant to this issue.

View file

@@ -1,11 +0,0 @@
blank_issues_enabled: true
contact_links:
- name: Documentation Request
url: https://github.com/openreplay/documentation/issues
about: Report a mistake or suggest anything we might be missing in the docs
- name: Discussions
url: https://github.com/openreplay/openreplay/discussions
about: Ask and answer various questions on GitHub Discussions
- name: Join our Slack Community
url: https://slack.openreplay.com
about: Take the discussion further by joining our community on Slack

View file

@@ -1,10 +0,0 @@
---
name: Feature request
about: Suggest an idea or a feature to improve OpenReplay
title: ''
labels: feature-request
assignees: estradino
---
Briefly describe the feature you would like to see shipped in upcoming versions of OpenReplay, the use case (very important to us), and the alternative solutions you've considered so far.

View file

@@ -1,74 +0,0 @@
name: 'Update Keys'
description: 'Updates keys'
inputs:
domain_name:
required: true
description: 'Domain Name'
license_key:
required: true
description: 'License Key'
jwt_secret:
required: true
description: 'JWT Secret'
jwt_spot_secret:
required: true
description: 'JWT Spot Secret'
minio_access_key:
required: true
description: 'MinIO Access Key'
minio_secret_key:
required: true
description: 'MinIO Secret Key'
pg_password:
required: true
description: 'PostgreSQL Password'
registry_url:
required: true
description: 'Registry URL'
runs:
using: "composite"
steps:
- name: Downloading yq
run: |
VERSION="v4.42.1"
sudo wget https://github.com/mikefarah/yq/releases/download/${VERSION}/yq_linux_amd64 -O /usr/bin/yq
sudo chmod +x /usr/bin/yq
shell: bash
- name: "Updating OSS secrets"
run: |
cd scripts/helmcharts/
vars=(
"ASSIST_JWT_SECRET:.global.assistJWTSecret"
"ASSIST_KEY:.global.assistKey"
"DOMAIN_NAME:.global.domainName"
"JWT_REFRESH_SECRET:.chalice.env.JWT_REFRESH_SECRET"
"JWT_SECRET:.global.jwtSecret"
"JWT_SPOT_REFRESH_SECRET:.chalice.env.JWT_SPOT_REFRESH_SECRET"
"JWT_SPOT_SECRET:.global.jwtSpotSecret"
"LICENSE_KEY:.global.enterpriseEditionLicense"
"MINIO_ACCESS_KEY:.global.s3.accessKey"
"MINIO_SECRET_KEY:.global.s3.secretKey"
"PG_PASSWORD:.postgresql.postgresqlPassword"
"REGISTRY_URL:.global.openReplayContainerRegistry"
)
for var in "${vars[@]}"; do
IFS=":" read -r env_var yq_path <<<"$var"
yq e -i "${yq_path} = strenv(${env_var})" vars.yaml
done
shell: bash
env:
ASSIST_JWT_SECRET: ${{ inputs.assist_jwt_secret }}
ASSIST_KEY: ${{ inputs.assist_key }}
DOMAIN_NAME: ${{ inputs.domain_name }}
JWT_REFRESH_SECRET: ${{ inputs.jwt_refresh_secret }}
JWT_SECRET: ${{ inputs.jwt_secret }}
JWT_SPOT_REFRESH_SECRET: ${{ inputs.jwt_spot_refresh_secret }}
JWT_SPOT_SECRET: ${{ inputs.jwt_spot_secret }}
LICENSE_KEY: ${{ inputs.license_key }}
MINIO_ACCESS_KEY: ${{ inputs.minio_access_key }}
MINIO_SECRET_KEY: ${{ inputs.minio_secret_key }}
PG_PASSWORD: ${{ inputs.pg_password }}
REGISTRY_URL: ${{ inputs.registry_url }}
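For reference, a minimal sketch of what a single iteration of the loop above does, with a hypothetical domain value (mikefarah yq v4 reads exported environment variables via strenv):

    # Pair "DOMAIN_NAME:.global.domainName" from the vars array:
    export DOMAIN_NAME="openreplay.example.com"   # hypothetical value
    yq e -i '.global.domainName = strenv(DOMAIN_NAME)' vars.yaml
    # vars.yaml afterwards contains (fragment):
    # global:
    #   domainName: openreplay.example.com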

View file

@@ -1,12 +0,0 @@
version: 2
updates:
- package-ecosystem: "npm"
directory: "/"
schedule:
interval: "weekly"
target-branch: "dev"
- package-ecosystem: "pip"
directory: "/"
schedule:
interval: "weekly"
target-branch: "dev"

View file

@@ -1,162 +0,0 @@
# This action will push the alerts changes to aws
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip Security checks if there is a unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
- api-*
paths:
- "ee/api/**"
- "api/**"
- "!api/.gitignore"
- "!api/routers"
- "!api/app.py"
- "!api/*-dev.sh"
- "!api/requirements.txt"
- "!api/requirements-crons.txt"
- "!ee/api/.gitignore"
- "!ee/api/routers"
- "!ee/api/app.py"
- "!ee/api/*-dev.sh"
- "!ee/api/requirements.txt"
- "!ee/api/requirements-crons.txt"
name: Build and Deploy Alerts EE
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.EE_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.EE_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.EE_LICENSE_KEY }}
minio_access_key: ${{ secrets.EE_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.EE_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.EE_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
# Caching docker images
- uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
- name: Building and Pushing api image
id: build-image
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}-ee
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd api
PUSH_IMAGE=0 bash -x ./build_alerts.sh ee
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("alerts")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("alerts")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
# We've to strip off the -ee, as helm will append it.
tag: `echo ${image_array[1]} | cut -d '-' -f 1`
EOF
done
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
# Update changed image tag
sed -i "/alerts/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,alerts,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks --kube-version=$k_version | kubectl apply -f -
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# We're not passing -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
- name: Alert slack
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: ee
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
SLACK_USERNAME: "OR Bot"
SLACK_MESSAGE: "Build failed :bomb:"
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true
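A rough illustration of the two deploy-side transforms above, using a hypothetical pod image tag; only the shape of the data comes from the workflow, the values are made up:

    # After cut -d '/' -f3, one line of /tmp/image_tag.txt looks like "name:tag":
    line="alerts:v1.10.0-ee"                          # hypothetical
    image_array=($(echo "$line" | tr ':' '\n'))       # -> ("alerts" "v1.10.0-ee")
    echo "${image_array[1]}" | cut -d '-' -f 1        # -> v1.10.0 (the -ee suffix is dropped)
    # The heredoc then appends roughly this to /tmp/image_override.yaml:
    #   alerts:
    #     image:
    #       tag: v1.10.0
    # and sed "/alerts/{n;n;n;s/.*/    tag: ${IMAGE_TAG}/}" rewrites the tag line
    # three lines after the "alerts" match (three n's because a comment line sits
    # between image: and tag:; the OSS variant of this workflow uses n;n; since it
    # writes no comment line).

The chart shuffle that follows (moving ingress-nginx, alerts, quickwit and connector aside, deleting the remaining charts, then moving them back) appears to leave only those subcharts under openreplay/charts, so the helm template | kubectl apply pair re-renders and applies just the alerts-related manifests.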

View file

@@ -1,160 +0,0 @@
# This action will push the alerts changes to aws
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip Security checks if there is a unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
- api-*
paths:
- "api/**"
- "!api/.gitignore"
- "!api/routers"
- "!api/app.py"
- "!api/*-dev.sh"
- "!api/requirements.txt"
- "!api/requirements-crons.txt"
name: Build and Deploy Alerts
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.OSS_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.OSS_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.OSS_LICENSE_KEY }}
minio_access_key: ${{ secrets.OSS_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.OSS_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.OSS_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
# Caching docker images
- uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
- name: Building and Pushing Alerts image
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd api
PUSH_IMAGE=0 bash -x ./build_alerts.sh
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("alerts")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("alerts")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
tag: ${image_array[1]}
EOF
done
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
## Update secrets
sed -i "s#openReplayContainerRegistry.*#openReplayContainerRegistry: \"${{ secrets.OSS_REGISTRY_URL }}\"#g" vars.yaml
sed -i "s/postgresqlPassword: \"changeMePassword\"/postgresqlPassword: \"${{ secrets.OSS_PG_PASSWORD }}\"/g" vars.yaml
sed -i "s/accessKey: \"changeMeMinioAccessKey\"/accessKey: \"${{ secrets.OSS_MINIO_ACCESS_KEY }}\"/g" vars.yaml
sed -i "s/secretKey: \"changeMeMinioPassword\"/secretKey: \"${{ secrets.OSS_MINIO_SECRET_KEY }}\"/g" vars.yaml
sed -i "s/jwt_secret: \"SetARandomStringHere\"/jwt_secret: \"${{ secrets.OSS_JWT_SECRET }}\"/g" vars.yaml
sed -i "s/domainName: \"\"/domainName: \"${{ secrets.OSS_DOMAIN_NAME }}\"/g" vars.yaml
# Update changed image tag
sed -i "/alerts/{n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,alerts,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks | kubectl apply -n app -f -
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
- name: Alert slack
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: foss
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
SLACK_USERNAME: "OR Bot"
SLACK_MESSAGE: "Build failed :bomb:"
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true

View file

@@ -1,27 +1,10 @@
# This action will push the chalice changes to aws
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip Security checks if there is a unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
- api-*
paths:
- "ee/api/**"
- "api/**"
- "!api/.gitignore"
- "!api/app_alerts.py"
- "!api/*-dev.sh"
- "!api/requirements-*.txt"
- "!ee/api/.gitignore"
- "!ee/api/app_alerts.py"
- "!ee/api/app_crons.py"
- "!ee/api/*-dev.sh"
- "!ee/api/requirements-*.txt"
- ee/api/**
name: Build and Deploy Chalice EE
@@ -31,129 +14,51 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.EE_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.EE_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.EE_LICENSE_KEY }}
minio_access_key: ${{ secrets.EE_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.EE_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.EE_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
- name: Docker login
run: |
docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
# Caching docker images
- uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
- name: Building and Pushing api image
id: build-image
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}-ee
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd api
PUSH_IMAGE=0 bash -x ./build.sh ee
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("chalice")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("chalice")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
# We've to strip off the -ee, as helm will append it.
tag: `echo ${image_array[1]} | cut -d '-' -f 1`
EOF
done
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
# Update changed image tag
sed -i "/chalice/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,chalice,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks --kube-version=$k_version | kubectl apply -f -
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# We're not passing -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
- name: Alert slack
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: ee
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
SLACK_USERNAME: "OR Bot"
SLACK_MESSAGE: "Build failed :bomb:"
- name: Building and Pushing api image
id: build-image
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
IMAGE_TAG: ee-${{ github.sha }}
ENVIRONMENT: staging
run: |
cd api
PUSH_IMAGE=1 bash build.sh ee
- name: Deploy to kubernetes
run: |
cd scripts/helm/
sed -i "s#minio_access_key.*#minio_access_key: \"${{ secrets.EE_MINIO_ACCESS_KEY }}\" #g" vars.yaml
sed -i "s#minio_secret_key.*#minio_secret_key: \"${{ secrets.EE_MINIO_SECRET_KEY }}\" #g" vars.yaml
sed -i "s#domain_name.*#domain_name: \"foss.openreplay.com\" #g" vars.yaml
sed -i "s#kubeconfig.*#kubeconfig_path: ${KUBECONFIG}#g" vars.yaml
sed -i "s/image_tag:.*/image_tag: \"$IMAGE_TAG\"/g" vars.yaml
bash kube-install.sh --app chalice
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
IMAGE_TAG: ee-${{ github.sha }}
ENVIRONMENT: staging
# - name: Debug Job
# # if: ${{ failure() }}
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# IMAGE_TAG: ee-${{ github.sha }}
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true
#

View file

@@ -1,21 +1,10 @@
# This action will push the chalice changes to aws
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip Security checks if there is a unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
- api-*
paths:
- "api/**"
- "!api/.gitignore"
- "!api/app_alerts.py"
- "!api/*-dev.sh"
- "!api/requirements-*.txt"
- api/**
name: Build and Deploy Chalice
@@ -25,126 +14,51 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.OSS_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.OSS_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.OSS_LICENSE_KEY }}
minio_access_key: ${{ secrets.OSS_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.OSS_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.OSS_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- name: Docker login
run: |
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Building and Pushing api image
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.sha }}
ENVIRONMENT: staging
run: |
cd api
PUSH_IMAGE=1 bash build.sh
- name: Deploy to kubernetes
run: |
cd scripts/helm/
sed -i "s#minio_access_key.*#minio_access_key: \"${{ secrets.OSS_MINIO_ACCESS_KEY }}\" #g" vars.yaml
sed -i "s#minio_secret_key.*#minio_secret_key: \"${{ secrets.OSS_MINIO_SECRET_KEY }}\" #g" vars.yaml
sed -i "s#domain_name.*#domain_name: \"foss.openreplay.com\" #g" vars.yaml
sed -i "s#kubeconfig.*#kubeconfig_path: ${KUBECONFIG}#g" vars.yaml
sed -i "s/image_tag:.*/image_tag: \"$IMAGE_TAG\"/g" vars.yaml
bash kube-install.sh --app chalice
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.sha }}
ENVIRONMENT: staging
# Caching docker images
- uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
- name: Building and Pushing api image
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd api
PUSH_IMAGE=0 bash -x ./build.sh
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("chalice")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("chalice")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
tag: ${image_array[1]}
EOF
done
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
# Update changed image tag
sed -i "/chalice/{n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,chalice,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks | kubectl apply -n app -f -
env:
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
- name: Alert slack
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: foss
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
SLACK_USERNAME: "OR Bot"
SLACK_MESSAGE: "Build failed :bomb:"
# - name: Debug Job
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true
# - name: Debug Job
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}
# ENVIRONMENT: staging
#

View file

@@ -1,134 +0,0 @@
# This action will push the assist changes to aws
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip Security checks if there is a unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
paths:
- "ee/assist/**"
- "assist/**"
- "!assist/.gitignore"
- "!assist/*-dev.sh"
name: Build and Deploy Assist EE
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.EE_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.EE_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.EE_LICENSE_KEY }}
minio_access_key: ${{ secrets.EE_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.EE_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.EE_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Building and Pushing Assist image
id: build-image
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}-ee
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd assist
PUSH_IMAGE=0 bash -x ./build.sh ee
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("assist")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("assist")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
# We've to strip off the -ee, as helm will append it.
tag: `echo ${image_array[1]} | cut -d '-' -f 1`
EOF
done
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
# Update changed image tag
sed -i "/assist/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,assist,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks --kube-version=$k_version | kubectl apply -f -
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# We're not passing -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
#           limit-access-to-actor: true

View file

@@ -1,122 +0,0 @@
# This action will push the assist changes to aws
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip Security checks if there is a unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
paths:
- "ee/assist-server/**"
name: Build and Deploy Assist-Server EE
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.EE_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.EE_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.EE_LICENSE_KEY }}
minio_access_key: ${{ secrets.EE_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.EE_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.EE_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Building and Pushing Assist-Server image
id: build-image
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}-ee
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd assist-server
PUSH_IMAGE=0 bash -x ./build.sh ee
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("assist-server")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("assist-server")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
# We've to strip off the -ee, as helm will append it.
tag: `echo ${image_array[1]} | cut -d '-' -f 1`
EOF
done
- name: Deploy to kubernetes
run: |
pwd
cd scripts/helmcharts/
# Update changed image tag
sed -i "/assist-server/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,assist-server,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks --kube-version=$k_version | kubectl apply -f -
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# We're not passing -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging

View file

@@ -1,162 +0,0 @@
# This action will push the assist-stats changes to aws
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip Security checks if there is a unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
paths:
- "assist-stats/**"
- "!assist-stats/.gitignore"
- "!assist-stats/*-dev.sh"
- "!assist-stats/requirements-*.txt"
name: Build and Deploy Assist Stats EE
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.OSS_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.OSS_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.OSS_LICENSE_KEY }}
minio_access_key: ${{ secrets.OSS_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.OSS_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.OSS_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
# Caching docker images
- uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
- name: Building and Pushing assist-stats image
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}-ee
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd assist-stats
PUSH_IMAGE=0 bash -x ./build.sh ee
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("assist-stats")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("assist-stats")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
### Enterprise code deployment
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontextee
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.EE_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.EE_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.EE_LICENSE_KEY }}
minio_access_key: ${{ secrets.EE_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.EE_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.EE_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Deploy to kubernetes ee
run: |
cd scripts/helmcharts/
cat <<EOF>/tmp/image_override.yaml
assist-stats:
image:
# We've to strip off the -ee, as helm will append it.
tag: ${IMAGE_TAG}
EOF
export IMAGE_TAG=${IMAGE_TAG}
# Update changed image tag
yq '.utilities.apiCrons.assiststats.image.tag = strenv(IMAGE_TAG)' -i /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,assist-stats,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks | kubectl apply -f -
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# We're not passing -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
- name: Alert slack
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: foss
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
SLACK_USERNAME: "OR Bot"
SLACK_MESSAGE: "Build failed :bomb:"
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true

View file

@@ -1,133 +0,0 @@
# This action will push the assist changes to aws
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip Security checks if there is a unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
paths:
- "assist/**"
- "!assist/.gitignore"
- "!assist/*-dev.sh"
name: Build and Deploy Assist
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.OSS_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.OSS_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.OSS_LICENSE_KEY }}
minio_access_key: ${{ secrets.OSS_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.OSS_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.OSS_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Building and Pushing Assist image
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd assist
PUSH_IMAGE=0 bash -x ./build.sh
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("assist")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("assist")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
# We've to strip off the -ee, as helm will append it.
tag: `echo ${image_array[1]} | cut -d '-' -f 1`
EOF
done
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
# Update changed image tag
sed -i "/assist/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,assist,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks --kube-version=$k_version | kubectl apply -f -
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
# We're not passing -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
#           limit-access-to-actor: true

View file

@@ -1,159 +0,0 @@
# This action will push the crons changes to aws
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip Security checks if there is a unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
- api-*
paths:
- "ee/api/**"
- "api/**"
- "!api/.gitignore"
- "!api/app.py"
- "!api/app_alerts.py"
- "!api/*-dev.sh"
- "!api/requirements.txt"
- "!api/requirements-alerts.txt"
- "!ee/api/.gitignore"
- "!ee/api/app.py"
- "!ee/api/app_alerts.py"
- "!ee/api/*-dev.sh"
- "!ee/api/requirements.txt"
- "!ee/api/requirements-crons.txt"
name: Build and Deploy Crons EE
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.EE_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.EE_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.EE_LICENSE_KEY }}
minio_access_key: ${{ secrets.EE_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.EE_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.EE_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
# Caching docker images
- uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
- name: Building and Pushing api image
id: build-image
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}-ee
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd api
PUSH_IMAGE=0 bash -x ./build_crons.sh ee
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("crons")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("crons")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
env:
# We're not passing -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
run: |
cd scripts/helmcharts/
cat <<EOF>/tmp/image_override.yaml
image: &image
tag: "${IMAGE_TAG}"
utilities:
apiCrons:
assiststats:
image: *image
report:
image: *image
sessionsCleaner:
image: *image
projectsStats:
image: *image
fixProjectsStats:
image: *image
EOF
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,utilities,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks --kube-version=$k_version | kubectl apply -f -
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
ENVIRONMENT: staging
- name: Alert slack
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: ee
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
SLACK_USERNAME: "OR Bot"
SLACK_MESSAGE: "Build failed :bomb:"
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
#           limit-access-to-actor: true
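The crons override above leans on a YAML anchor: image: &image defines the tag mapping once and every image: *image alias reuses it. A minimal expansion sketch with a hypothetical tag:

    image: &image
      tag: "dev_abc123"          # hypothetical IMAGE_TAG value
    utilities:
      apiCrons:
        report:
          image: *image          # resolves to { tag: "dev_abc123" }
        sessionsCleaner:
          image: *image          # same mapping, written once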

View file

@@ -1,156 +0,0 @@
name: Database migration Deployment
on:
workflow_dispatch:
push:
branches:
- dev
paths:
- ee/scripts/helm/db/init_dbs/**
- scripts/helm/db/init_dbs/**
# Disable previous workflows for this action.
concurrency:
group: ${{ github.workflow }} #-${{ github.ref }}
cancel-in-progress: false
jobs:
db-migration:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- name: Checking whether migration is needed for OSS
id: check-migration
run: |-
[[ `git --no-pager diff --name-only HEAD HEAD~1 | grep -E "scripts/helm/db/init_dbs" | grep -vE ^ee/` ]] || echo "::set-output name=skip_migration_oss::true"
- uses: azure/k8s-set-context@v1
if: ${{ steps.check-migration.outputs.skip_migration_oss != 'true' }}
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Creating old image input
if: ${{ steps.check-migration.outputs.skip_migration_oss != 'true' }}
run: |
set -x
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
tag: ${image_array[1]}
EOF
done
- uses: ./.github/composite-actions/update-keys
with:
domain_name: ${{ secrets.OSS_DOMAIN_NAME }}
license_key: ${{ secrets.OSS_LICENSE_KEY }}
jwt_secret: ${{ secrets.OSS_JWT_SECRET }}
minio_access_key: ${{ secrets.OSS_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.OSS_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.OSS_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Deploy to kubernetes foss
if: ${{ steps.check-migration.outputs.skip_migration_oss != 'true' }}
run: |
cd scripts/helmcharts/
sed -i "s/domainName: \"\"/domainName: \"${{ secrets.OSS_DOMAIN_NAME }}\"/g" vars.yaml
cat /tmp/image_override.yaml
# Deploy command
helm upgrade --install openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --atomic --set forceMigration=true --set dbMigrationUpstreamBranch=${IMAGE_TAG}
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.sha }}
ENVIRONMENT: staging
### Enterprise code deployment
- name: cleaning old assets
run: |
rm -rf /tmp/image_*
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontextee
- name: Creating old image input
env:
IMAGE_TAG: ${{ github.sha }}
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
# We've to strip off the -ee, as helm will append it.
tag: `echo ${image_array[1]} | cut -d '-' -f 1`
EOF
done
- uses: ./.github/composite-actions/update-keys
with:
domain_name: ${{ secrets.EE_DOMAIN_NAME }}
license_key: ${{ secrets.EE_LICENSE_KEY }}
jwt_secret: ${{ secrets.EE_JWT_SECRET }}
minio_access_key: ${{ secrets.EE_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.EE_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.EE_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Deploy to kubernetes ee
run: |
cd scripts/helmcharts/
cat /tmp/image_override.yaml
# Deploy command
helm upgrade --install openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --atomic --set forceMigration=true --set dbMigrationUpstreamBranch=${IMAGE_TAG}
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# We're not passing -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.sha }}
ENVIRONMENT: staging
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true
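For clarity, a walk-through of the OSS skip check above with hypothetical changed files:

    git --no-pager diff --name-only HEAD HEAD~1
    # ee/scripts/helm/db/init_dbs/postgresql/init.sql      (hypothetical)
    # scripts/helm/db/init_dbs/postgresql/init.sql         (hypothetical)
    # grep -E "scripts/helm/db/init_dbs" keeps both lines, grep -vE ^ee/ drops the
    # ee/ one, so the [[ ... ]] test sees non-empty output and skip_migration_oss
    # is never set (the OSS migration runs). If only ee/ paths changed, the output
    # is empty and the step emits ::set-output name=skip_migration_oss::true.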

View file

@@ -1,85 +0,0 @@
name: Frontend Dev Deployment
on: workflow_dispatch
# Disable previous workflows for this action.
concurrency:
group: ${{ github.workflow }} #-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Cache node modules
uses: actions/cache@v1
with:
path: node_modules
key: ${{ runner.OS }}-build-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.OS }}-build-
${{ runner.OS }}-
- uses: ./.github/composite-actions/update-keys
with:
domain_name: ${{ secrets.DEV_DOMAIN_NAME }}
license_key: ${{ secrets.DEV_LICENSE_KEY }}
jwt_secret: ${{ secrets.DEV_JWT_SECRET }}
minio_access_key: ${{ secrets.DEV_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.DEV_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.DEV_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.DEV_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Building and Pushing frontend image
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
run: |
set -x
cd frontend
mv .env.sample .env
docker run --rm -v /etc/passwd:/etc/passwd -u `id -u`:`id -g` -v $(pwd):/home/${USER} -w /home/${USER} --name node_build node:20-slim /bin/bash -c "yarn && yarn build"
# https://github.com/docker/cli/issues/1134#issuecomment-613516912
DOCKER_BUILDKIT=1 docker build --target=cicd -t $DOCKER_REPO/frontend:${IMAGE_TAG} .
docker tag $DOCKER_REPO/frontend:${IMAGE_TAG} $DOCKER_REPO/frontend:${IMAGE_TAG}-ee
docker push $DOCKER_REPO/frontend:${IMAGE_TAG}
docker push $DOCKER_REPO/frontend:${IMAGE_TAG}-ee
- name: Deploy to kubernetes foss
run: |
cd scripts/helmcharts/
set -x
cat <<EOF>>/tmp/image_override.yaml
frontend:
image:
tag: ${IMAGE_TAG}
EOF
# Update changed image tag
sed -i "/frontend/{n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,frontend,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks | kubectl apply -n app -f -
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
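The build step above compiles the frontend in a throwaway node:20-slim container before the image build; mounting /etc/passwd and passing -u $(id -u):$(id -g) presumably keeps the yarn output owned by the CI user rather than root. A minimal local equivalent, assuming Docker, a frontend/ checkout, and a hypothetical local tag:

    cd frontend && cp .env.sample .env                    # the workflow uses mv
    docker run --rm -u "$(id -u):$(id -g)" -v /etc/passwd:/etc/passwd:ro \
      -v "$(pwd)":/home/"$USER" -w /home/"$USER" node:20-slim \
      bash -c "yarn && yarn build"
    # cicd is an existing stage in frontend/Dockerfile (per the workflow above)
    DOCKER_BUILDKIT=1 docker build --target=cicd -t frontend:local .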

View file

@@ -1,159 +1,51 @@
name: Frontend Foss Deployment
name: Frontend FOSS Deployment
on:
workflow_dispatch:
push:
branches:
- dev
paths:
- frontend/**
# Disable previous workflows for this action.
concurrency:
group: ${{ github.workflow }} #-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Checkout
uses: actions/checkout@v2
- name: Cache node modules
uses: actions/cache@v4
with:
path: |
/home/runner/work/openreplay/openreplay/frontend/node_modules
/home/runner/work/openreplay/openreplay/frontend/.yarn
key: ${{ runner.OS }}-build-${{ hashFiles('frontend/yarn.lock') }}
restore-keys: |
${{ runner.OS }}-build-
${{ runner.OS }}-
- name: Cache node modules
uses: actions/cache@v1
with:
path: node_modules
key: ${{ runner.OS }}-build-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.OS }}-build-
${{ runner.OS }}-
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.OSS_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.OSS_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.OSS_LICENSE_KEY }}
minio_access_key: ${{ secrets.OSS_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.OSS_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.OSS_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
# - name: Install
# run: npm install
- name: Docker login
run: |
docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Building and Pushing frontend image
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
run: |
set -x
cd frontend
mv .env.sample .env
docker run --rm -v /etc/passwd:/etc/passwd -u `id -u`:`id -g` -v $(pwd):/home/${USER} -w /home/${USER} --name node_build node:20-slim /bin/bash -c "yarn && yarn build"
# https://github.com/docker/cli/issues/1134#issuecomment-613516912
DOCKER_BUILDKIT=1 docker build --target=cicd -t $DOCKER_REPO/frontend:${IMAGE_TAG} .
docker tag $DOCKER_REPO/frontend:${IMAGE_TAG} $DOCKER_REPO/frontend:${IMAGE_TAG}-ee
docker push $DOCKER_REPO/frontend:${IMAGE_TAG}
docker push $DOCKER_REPO/frontend:${IMAGE_TAG}-ee
- name: Deploy to kubernetes foss
run: |
cd scripts/helmcharts/
set -x
cat <<EOF>>/tmp/image_override.yaml
frontend:
image:
tag: ${IMAGE_TAG}
EOF
# Update changed image tag
sed -i "/frontend/{n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,frontend,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks | kubectl apply -n app -f -
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
### Enterprise code deployment
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontextee
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.EE_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.EE_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.EE_LICENSE_KEY }}
minio_access_key: ${{ secrets.EE_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.EE_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.EE_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Deploy to kubernetes ee
run: |
cd scripts/helmcharts/
cat <<EOF>/tmp/image_override.yaml
frontend:
image:
# We have to strip off the -ee, as helm will append it.
tag: ${IMAGE_TAG}
EOF
# Update changed image tag
sed -i "/frontend/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,frontend,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks | kubectl apply -n app -f -
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# We're not passing the -ee flag, because helm will add it.
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
- name: Build and deploy
run: |
cd frontend
bash build.sh
cp -arl public frontend
minio_pod=$(kubectl get po -n db -l app.kubernetes.io/name=minio -n db --output custom-columns=name:.metadata.name | tail -n+2)
echo $minio_pod
echo copying frontend to container.
kubectl -n db cp frontend $minio_pod:/data/
rm -rf frontend
# - name: Debug Job
# # if: ${{ failure() }}
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true
# AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
# AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
# AWS_REGION: eu-central-1
# AWS_S3_BUCKET_NAME: ${{ secrets.AWS_S3_BUCKET_NAME }}
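The last step of this job ships the locally built frontend straight into the MinIO pod instead of going through an image. A minimal sketch of that copy, assuming the same db namespace and MinIO chart labels as above:

    # Locate the MinIO pod by its helm label and copy a local build into its data directory.
    minio_pod=$(kubectl get po -n db -l app.kubernetes.io/name=minio \
      --output custom-columns=name:.metadata.name | tail -n +2)
    echo "Copying frontend build into ${minio_pod}"
    kubectl -n db cp ./frontend "${minio_pod}:/data/"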

View file

@ -1,189 +0,0 @@
# Ref: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions
on:
workflow_dispatch:
inputs:
services:
description: 'Comma-separated names of services to build (in lowercase).'
required: true
default: 'chalice,frontend'
tag:
description: 'Tag to update.'
required: true
type: string
branch:
description: 'Branch to build patches from. Make sure the branch is up to date with the tag, otherwise it will cause missing commits.'
required: true
type: string
name: Build patches from tag, rewrite commit HEAD to older timestamp, and Push the tag
jobs:
deploy:
name: Build Patch from old tag
runs-on: ubuntu-latest
env:
DEPOT_TOKEN: ${{ secrets.DEPOT_TOKEN }}
DEPOT_PROJECT_ID: ${{ secrets.DEPOT_PROJECT_ID }}
steps:
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 4
ref: ${{ github.event.inputs.tag }}
- name: Set Remote with GITHUB_TOKEN
run: |
git config --unset http.https://github.com/.extraheader
git remote set-url origin https://x-access-token:${{ secrets.ACTIONS_COMMMIT_TOKEN }}@github.com/${{ github.repository }}.git
- name: Create backup tag with timestamp
run: |
set -e # Exit immediately if a command exits with a non-zero status
TIMESTAMP=$(date +%Y%m%d%H%M%S)
BACKUP_TAG="${{ github.event.inputs.tag }}-backup-${TIMESTAMP}"
echo "BACKUP_TAG=${BACKUP_TAG}" >> $GITHUB_ENV
echo "INPUT_TAG=${{ github.event.inputs.tag }}" >> $GITHUB_ENV
git tag $BACKUP_TAG || { echo "Failed to create backup tag"; exit 1; }
git push origin $BACKUP_TAG || { echo "Failed to push backup tag"; exit 1; }
echo "Created backup tag: $BACKUP_TAG"
# Get the oldest commit date from the last 3 commits in raw format
OLDEST_COMMIT_TIMESTAMP=$(git log -3 --pretty=format:"%at" | tail -1)
echo "Oldest commit timestamp: $OLDEST_COMMIT_TIMESTAMP"
# Add 1 second to the timestamp
NEW_TIMESTAMP=$((OLDEST_COMMIT_TIMESTAMP + 1))
echo "NEW_TIMESTAMP=$NEW_TIMESTAMP" >> $GITHUB_ENV
- name: Setup yq
uses: mikefarah/yq@master
# Configure AWS credentials for the first registry
- name: Configure AWS credentials for RELEASE_ARM_REGISTRY
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_DEPOT_ACCESS_KEY }}
aws-secret-access-key: ${{ secrets.AWS_DEPOT_SECRET_KEY }}
aws-region: ${{ secrets.AWS_DEPOT_DEFAULT_REGION }}
- name: Login to Amazon ECR for RELEASE_ARM_REGISTRY
id: login-ecr-arm
run: |
aws ecr get-login-password --region ${{ secrets.AWS_DEPOT_DEFAULT_REGION }} | docker login --username AWS --password-stdin ${{ secrets.RELEASE_ARM_REGISTRY }}
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin ${{ secrets.RELEASE_OSS_REGISTRY }}
- uses: depot/setup-action@v1
- name: Get HEAD Commit ID
run: echo "HEAD_COMMIT_ID=$(git rev-parse HEAD)" >> $GITHUB_ENV
- name: Define Branch Name
run: echo "BRANCH_NAME=${{inputs.branch}}" >> $GITHUB_ENV
- name: Build
id: build-image
env:
DOCKER_REPO_ARM: ${{ secrets.RELEASE_ARM_REGISTRY }}
DOCKER_REPO_OSS: ${{ secrets.RELEASE_OSS_REGISTRY }}
MSAAS_REPO_CLONE_TOKEN: ${{ secrets.MSAAS_REPO_CLONE_TOKEN }}
MSAAS_REPO_URL: ${{ secrets.MSAAS_REPO_URL }}
MSAAS_REPO_FOLDER: /tmp/msaas
run: |
set -exo pipefail
git config --local user.email "action@github.com"
git config --local user.name "GitHub Action"
git checkout -b $BRANCH_NAME
working_dir=$(pwd)
function image_version(){
local service=$1
chart_path="$working_dir/scripts/helmcharts/openreplay/charts/$service/Chart.yaml"
current_version=$(yq eval '.AppVersion' $chart_path)
new_version=$(echo $current_version | awk -F. '{$NF += 1 ; print $1"."$2"."$3}')
echo $new_version
# yq eval ".AppVersion = \"$new_version\"" -i $chart_path
}
function clone_msaas() {
[ -d $MSAAS_REPO_FOLDER ] || {
git clone -b $INPUT_TAG --recursive https://x-access-token:$MSAAS_REPO_CLONE_TOKEN@$MSAAS_REPO_URL $MSAAS_REPO_FOLDER
cd $MSAAS_REPO_FOLDER
cd openreplay && git fetch origin && git checkout $INPUT_TAG
git log -1
cd $MSAAS_REPO_FOLDER
bash git-init.sh
git checkout
}
}
function build_managed() {
local service=$1
local version=$2
echo building managed
clone_msaas
if [[ $service == 'chalice' ]]; then
cd $MSAAS_REPO_FOLDER/openreplay/api
else
cd $MSAAS_REPO_FOLDER/openreplay/$service
fi
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash build.sh >> /tmp/arm.txt
}
# Checking for backend images
ls backend/cmd >> /tmp/backend.txt
echo Services: "${{ github.event.inputs.services }}"
IFS=',' read -ra SERVICES <<< "${{ github.event.inputs.services }}"
BUILD_SCRIPT_NAME="build.sh"
# Build FOSS
for SERVICE in "${SERVICES[@]}"; do
# Check if service is backend
if grep -q $SERVICE /tmp/backend.txt; then
cd backend
foss_build_args="nil $SERVICE"
ee_build_args="ee $SERVICE"
else
[[ $SERVICE == 'chalice' || $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && cd $working_dir/api || cd $SERVICE
[[ $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && BUILD_SCRIPT_NAME="build_${SERVICE}.sh"
ee_build_args="ee"
fi
version=$(image_version $SERVICE)
echo IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
echo IMAGE_TAG=$version-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
IMAGE_TAG=$version-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
if [[ "$SERVICE" != "chalice" && "$SERVICE" != "frontend" ]]; then
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
echo IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
else
build_managed $SERVICE $version
fi
cd $working_dir
chart_path="$working_dir/scripts/helmcharts/openreplay/charts/$SERVICE/Chart.yaml"
yq eval ".AppVersion = \"$version\"" -i $chart_path
git add $chart_path
git commit -m "Increment $SERVICE chart version"
done
- name: Change commit timestamp
run: |
# Convert the timestamp to a date format git can understand
NEW_DATE=$(perl -le 'print scalar gmtime($ARGV[0])." +0000"' $NEW_TIMESTAMP)
echo "Setting commit date to: $NEW_DATE"
# Amend the commit with the new date
GIT_COMMITTER_DATE="$NEW_DATE" git commit --amend --no-edit --date="$NEW_DATE"
# Verify the change
git log -1 --pretty=format:"Commit now dated: %cD"
# git tag and push
git tag $INPUT_TAG -f
git push origin $INPUT_TAG -f
# - name: Debug Job
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO_ARM: ${{ secrets.RELEASE_ARM_REGISTRY }}
# DOCKER_REPO_OSS: ${{ secrets.RELEASE_OSS_REGISTRY }}
# MSAAS_REPO_CLONE_TOKEN: ${{ secrets.MSAAS_REPO_CLONE_TOKEN }}
# MSAAS_REPO_URL: ${{ secrets.MSAAS_REPO_URL }}
# MSAAS_REPO_FOLDER: /tmp/msaas
# with:
# limit-access-to-actor: true
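The image_version helper above bumps the patch component of a chart's AppVersion with awk. A standalone sketch of just that bump; the chart path and version are hypothetical:

    # Read AppVersion from a service chart and increment its last (patch) component.
    chart_path="scripts/helmcharts/openreplay/charts/frontend/Chart.yaml"  # example service
    current_version=$(yq eval '.AppVersion' "$chart_path")                 # e.g. 1.22.4
    new_version=$(echo "$current_version" | awk -F. '{$NF += 1; print $1"."$2"."$3}')
    echo "$current_version -> $new_version"                                # 1.22.4 -> 1.22.5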

View file

@ -1,261 +0,0 @@
# Ref: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions
on:
workflow_dispatch:
inputs:
services:
description: 'Comma-separated names of services to build (in lowercase).'
required: true
default: 'chalice,frontend'
name: Build patches from main branch, Raise PR to Main, and Push to tag
jobs:
deploy:
name: Build Patch from main
runs-on: ubuntu-latest
env:
DEPOT_TOKEN: ${{ secrets.DEPOT_TOKEN }}
DEPOT_PROJECT_ID: ${{ secrets.DEPOT_PROJECT_ID }}
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Rebase with main branch, to make sure the code has latest main changes
if: github.ref != 'refs/heads/main'
run: |
git remote -v
git config --global user.email "action@github.com"
git config --global user.name "GitHub Action"
git config --global rebase.autoStash true
git fetch origin main:main
git rebase main
git log -3
- name: Downloading yq
run: |
VERSION="v4.42.1"
sudo wget https://github.com/mikefarah/yq/releases/download/${VERSION}/yq_linux_amd64 -O /usr/bin/yq
sudo chmod +x /usr/bin/yq
# Configure AWS credentials for the first registry
- name: Configure AWS credentials for RELEASE_ARM_REGISTRY
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_DEPOT_ACCESS_KEY }}
aws-secret-access-key: ${{ secrets.AWS_DEPOT_SECRET_KEY }}
aws-region: ${{ secrets.AWS_DEPOT_DEFAULT_REGION }}
- name: Login to Amazon ECR for RELEASE_ARM_REGISTRY
id: login-ecr-arm
run: |
aws ecr get-login-password --region ${{ secrets.AWS_DEPOT_DEFAULT_REGION }} | docker login --username AWS --password-stdin ${{ secrets.RELEASE_ARM_REGISTRY }}
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin ${{ secrets.RELEASE_OSS_REGISTRY }}
- uses: depot/setup-action@v1
env:
DEPOT_TOKEN: ${{ secrets.DEPOT_TOKEN }}
- name: Get HEAD Commit ID
run: echo "HEAD_COMMIT_ID=$(git rev-parse HEAD)" >> $GITHUB_ENV
- name: Define Branch Name
run: echo "BRANCH_NAME=patch/main/${HEAD_COMMIT_ID}" >> $GITHUB_ENV
- name: Set Remote with GITHUB_TOKEN
run: |
git config --unset http.https://github.com/.extraheader
git remote set-url origin https://x-access-token:${{ secrets.ACTIONS_COMMMIT_TOKEN }}@github.com/${{ github.repository }}.git
- name: Build
id: build-image
env:
DOCKER_REPO_ARM: ${{ secrets.RELEASE_ARM_REGISTRY }}
DOCKER_REPO_OSS: ${{ secrets.RELEASE_OSS_REGISTRY }}
MSAAS_REPO_CLONE_TOKEN: ${{ secrets.MSAAS_REPO_CLONE_TOKEN }}
MSAAS_REPO_URL: ${{ secrets.MSAAS_REPO_URL }}
MSAAS_REPO_FOLDER: /tmp/msaas
SERVICES_INPUT: ${{ github.event.inputs.services }}
run: |
#!/bin/bash
set -euo pipefail
# Configuration
readonly WORKING_DIR=$(pwd)
readonly BUILD_SCRIPT_NAME="build.sh"
readonly BACKEND_SERVICES_FILE="/tmp/backend.txt"
# Initialize git configuration
setup_git() {
git config --local user.email "action@github.com"
git config --local user.name "GitHub Action"
git checkout -b "$BRANCH_NAME"
}
# Get and increment image version
image_version() {
local service=$1
local chart_path="$WORKING_DIR/scripts/helmcharts/openreplay/charts/$service/Chart.yaml"
local current_version new_version
current_version=$(yq eval '.AppVersion' "$chart_path")
new_version=$(echo "$current_version" | awk -F. '{$NF += 1; print $1"."$2"."$3}')
echo "$new_version"
}
# Clone MSAAS repository if not exists
clone_msaas() {
if [[ ! -d "$MSAAS_REPO_FOLDER" ]]; then
git clone -b dev --recursive "https://x-access-token:${MSAAS_REPO_CLONE_TOKEN}@${MSAAS_REPO_URL}" "$MSAAS_REPO_FOLDER"
cd "$MSAAS_REPO_FOLDER"
cd openreplay && git fetch origin && git checkout main
git log -1
cd "$MSAAS_REPO_FOLDER"
bash git-init.sh
git checkout
fi
}
# Build managed services
build_managed() {
local service=$1
local version=$2
echo "Building managed service: $service"
clone_msaas
if [[ $service == 'chalice' ]]; then
cd "$MSAAS_REPO_FOLDER/openreplay/api"
else
cd "$MSAAS_REPO_FOLDER/openreplay/$service"
fi
local build_cmd="IMAGE_TAG=$version DOCKER_RUNTIME=depot DOCKER_BUILD_ARGS=--push ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash build.sh"
echo "Executing: $build_cmd"
if ! eval "$build_cmd" 2>&1; then
echo "Build failed for $service"
exit 1
fi
}
# Build service with given arguments
build_service() {
local service=$1
local version=$2
local build_args=$3
local build_script=${4:-$BUILD_SCRIPT_NAME}
local command="IMAGE_TAG=$version DOCKER_RUNTIME=depot DOCKER_BUILD_ARGS=--push ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash $build_script $build_args"
echo "Executing: $command"
eval "$command"
}
# Update chart version and commit changes
update_chart_version() {
local service=$1
local version=$2
local chart_path="$WORKING_DIR/scripts/helmcharts/openreplay/charts/$service/Chart.yaml"
# Ensure we're in the original working directory/repository
cd "$WORKING_DIR"
yq eval ".AppVersion = \"$version\"" -i "$chart_path"
git add "$chart_path"
git commit -m "Increment $service chart version to $version"
git push --set-upstream origin "$BRANCH_NAME"
cd -
}
# Main execution
main() {
setup_git
# Get backend services list
ls backend/cmd >"$BACKEND_SERVICES_FILE"
# Parse services input (fix for GitHub Actions syntax)
echo "Services: ${SERVICES_INPUT:-$1}"
IFS=',' read -ra services <<<"${SERVICES_INPUT:-$1}"
# Process each service
for service in "${services[@]}"; do
echo "Processing service: $service"
cd "$WORKING_DIR"
local foss_build_args="" ee_build_args="" build_script="$BUILD_SCRIPT_NAME"
# Determine build configuration based on service type
if grep -q "$service" "$BACKEND_SERVICES_FILE"; then
# Backend service
cd backend
foss_build_args="nil $service"
ee_build_args="ee $service"
else
# Non-backend service
case "$service" in
chalice | alerts | crons)
cd "$WORKING_DIR/api"
;;
*)
cd "$service"
;;
esac
# Special build scripts for alerts/crons
if [[ $service == 'alerts' || $service == 'crons' ]]; then
build_script="build_${service}.sh"
fi
ee_build_args="ee"
fi
# Get version and build
local version
version=$(image_version "$service")
# Build FOSS and EE versions
build_service "$service" "$version" "$foss_build_args"
build_service "$service" "${version}-ee" "$ee_build_args"
# Build managed version for specific services
if [[ "$service" != "chalice" && "$service" != "frontend" ]]; then
echo "Nothing to build in managed for service $service"
else
build_managed "$service" "$version"
fi
# Update chart and commit
update_chart_version "$service" "$version"
done
cd "$WORKING_DIR"
# Cleanup
rm -f "$BACKEND_SERVICES_FILE"
}
echo "Working directory: $WORKING_DIR"
# Run main function with all arguments
main "$SERVICES_INPUT"
- name: Create Pull Request
uses: repo-sync/pull-request@v2
with:
github_token: ${{ secrets.ACTIONS_COMMMIT_TOKEN }}
source_branch: ${{ env.BRANCH_NAME }}
destination_branch: "main"
pr_title: "Updated patch build from main ${{ env.HEAD_COMMIT_ID }}"
pr_body: |
This PR updates the Helm chart version after building the patch from $HEAD_COMMIT_ID.
Once this PR is merged, tag update job will run automatically.
# - name: Debug Job
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO_ARM: ${{ secrets.RELEASE_ARM_REGISTRY }}
# DOCKER_REPO_OSS: ${{ secrets.RELEASE_OSS_REGISTRY }}
# MSAAS_REPO_CLONE_TOKEN: ${{ secrets.MSAAS_REPO_CLONE_TOKEN }}
# MSAAS_REPO_URL: ${{ secrets.MSAAS_REPO_URL }}
# MSAAS_REPO_FOLDER: /tmp/msaas
# with:
# limit-access-to-actor: true
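Both patch workflows decide how to build a service by checking whether it owns a folder under backend/cmd. A short sketch of that check in isolation; the service name is hypothetical, and grep -x is used here to avoid the partial-name matches a plain grep -q would allow:

    # A service is treated as a backend worker if backend/cmd/<name> exists.
    ls backend/cmd > /tmp/backend.txt
    service="sink"                                  # hypothetical service name
    if grep -qx "$service" /tmp/backend.txt; then
      echo "$service: built from backend/ via 'build.sh ee $service'"
    else
      echo "$service: built from its own folder (api/ covers chalice, alerts and crons)"
    fi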

View file

@ -1,86 +0,0 @@
name: PR-Env-Delete
on:
workflow_dispatch:
inputs:
env_origin_url:
description: |
URL of the origin of the PR env to be deleted. Example: https://pr-1717-ee.openreplay.tools
required: true
jobs:
create-vcluster-pr:
runs-on: ubuntu-latest
env:
build_service: ${{ github.event.inputs.build_service }}
env_flavour: ${{ github.event.inputs.env_flavour }}
steps:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.OR_PR_AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.OR_PR_AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.OR_PR_AWS_DEFAULT_REGION}}
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.PR_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Install vCluster CLI
run: |
# Replace with the command to install vCluster CLI
curl -s -L "https://github.com/loft-sh/vcluster/releases/download/v0.16.4/vcluster-linux-amd64" -o /usr/local/bin/vcluster
chmod +x /usr/local/bin/vcluster
- name: Deleting vcluster
run: |
url=${{ github.event.inputs.env_origin_url }}
# Remove the protocol part of the URL
url_no_protocol=${url#*//}
# Extract the subdomain and domain
subdomain=$(echo $url_no_protocol | cut -d"." -f1)
domain=$(echo $url_no_protocol | cut -d"." -f2-)
echo "subdomain=$subdomain" >> $GITHUB_ENV
echo "domain=$domain" >> $GITHUB_ENV
vcluster delete -n $subdomain-vcluster $subdomain-vcluster
echo $subdomain $domain
- name: Get LoadBalancer IP
id: lb-ip
run: |
LB_IP=$(kubectl get svc ingress-ingress-nginx-controller -n default -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "::set-output name=ip::$LB_IP"
- name: Delete dns record
env:
AWS_ACCESS_KEY_ID: ${{ secrets.OR_PR_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.OR_PR_AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: ${{ secrets.OR_PR_AWS_DEFAULT_REGION }}
run: |
DOMAIN_NAME_1=$subdomain.$domain
DOMAIN_NAME_2=$subdomain-vcluster.$domain
cat <<EOF > route53-changes.json
{
"Comment": "Create record set for VCluster",
"Changes": [
{
"Action": "DELETE",
"ResourceRecordSet": {
"Name": "$DOMAIN_NAME_1",
"Type": "CNAME",
"TTL": 300,
"ResourceRecords": [{ "Value": "${{ steps.lb-ip.outputs.ip }}" }]
}
},
{
"Action": "DELETE",
"ResourceRecordSet": {
"Name": "$DOMAIN_NAME_2",
"Type": "CNAME",
"TTL": 300,
"ResourceRecords": [{ "Value": "${{ steps.lb-ip.outputs.ip }}" }]
}
}
]
}
EOF
aws route53 change-resource-record-sets --hosted-zone-id ${{ secrets.OR_PR_HOSTED_ZONE_ID }} --change-batch file://route53-changes.json
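The deletion step above derives the vcluster namespace and DNS records from the env_origin_url input using parameter expansion and cut. A standalone sketch with the example URL from the input description:

    url="https://pr-1717-ee.openreplay.tools"            # example from the input description
    url_no_protocol=${url#*//}                           # pr-1717-ee.openreplay.tools
    subdomain=$(echo "$url_no_protocol" | cut -d"." -f1) # pr-1717-ee
    domain=$(echo "$url_no_protocol" | cut -d"." -f2-)   # openreplay.tools
    echo "namespace: ${subdomain}-vcluster, records: ${subdomain}.${domain} and ${subdomain}-vcluster.${domain}"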

View file

@ -1,340 +0,0 @@
name: PR-Deployment
on:
workflow_dispatch:
inputs:
build_service:
description: |
Name of a single service to build (in lowercase), e.g. api or frontend. Use backend:service-name to build a backend service.
If an image is not built, it'll be deployed from the latest release.
Options: none/all/service-name/backend:{app1/app1,app2,app3/all}
required: false
default: none
env_flavour:
description: 'Which env to build. Values: foss/ee'
required: false
jobs:
create-vcluster-pr:
runs-on: ubuntu-latest
env:
build_service: ${{ github.event.inputs.build_service }}
env_flavour: ${{ github.event.inputs.env_flavour }}
steps:
- name: Checkout Code
uses: actions/checkout@v2
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.OR_PR_AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.OR_PR_AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.OR_PR_AWS_DEFAULT_REGION}}
- name: Setting up env variables
run: |
# Fetching details open/draft PR for current branch
PR_DATA=$(curl -s -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
"https://api.github.com/repos/${{ github.repository }}/pulls" \
| jq -r --arg BRANCH "${{ github.ref_name }}" '.[] | select((.head.ref==$BRANCH) and (.state=="open") and (.draft==true or .draft==false))')
# Extracting PR number
PR_NUMBER=$(echo "$PR_DATA" | jq -r '.number' | head -n 1)
if [ -z "$PR_NUMBER" ]; then
echo "No PR found for ${{ github.ref_name}}"
exit 100
fi
echo "PR_NUMBER_PRE=$PR_NUMBER" >> $GITHUB_ENV
PR_NUMBER=pr-$PR_NUMBER
if [ "$env_flavour" == "ee" ]; then
PR_NUMBER=$PR_NUMBER-ee
fi
echo "PR number: $PR_NUMBER"
echo "PR_NUMBER=$PR_NUMBER" >> $GITHUB_ENV
# Extracting PR status (open, closed, merged)
PR_STATUS=$(echo "$PR_DATA" | jq -r '.state' | head -n 1)
echo "PR status: $PR_STATUS"
echo "PR_STATUS=$PR_STATUS" >> $GITHUB_ENV
- name: Install vCluster CLI
run: |
# Replace with the command to install vCluster CLI
curl -s -L "https://github.com/loft-sh/vcluster/releases/download/v0.16.4/vcluster-linux-amd64" -o /usr/local/bin/vcluster
chmod +x /usr/local/bin/vcluster
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.PR_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Check existing vcluster
id: vcluster_exists
continue-on-error: true
run: |
if ! vcluster list | grep -q "$PR_NUMBER"; then
echo "no cluster found for $PR_NUMBER"
echo "::set-output name=failed::true"
exit 100
fi
DOMAIN_NAME=${PR_NUMBER}-vcluster.${{ secrets.OR_PR_DOMAIN_NAME }}
vcluster connect ${PR_NUMBER}-vcluster --update-current=false --server=https://$DOMAIN_NAME
mv kubeconfig.yaml /tmp/kubeconfig.yaml
- name: Get LoadBalancer IP
if: steps.vcluster_exists.outputs.failed == 'true'
id: lb-ip
run: |
# LB_IP=$(kubectl get svc ingress-ingress-nginx-controller -n default -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
LB_IP=$(kubectl get svc ingress-ingress-nginx-controller -n default -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "::set-output name=ip::$LB_IP"
- name: Create vCluster
if: steps.vcluster_exists.outputs.failed == 'true'
run: |
# Replace with the actual command to create a vCluster
pwd
cd scripts/pr-env/
bash create.sh ${PR_NUMBER}.${{ secrets.OR_PR_DOMAIN_NAME }}
cp kubeconfig.yaml /tmp/
- name: Update AWS Route53 Record
if: steps.vcluster_exists.outputs.failed == 'true'
env:
AWS_ACCESS_KEY_ID: ${{ secrets.OR_PR_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.OR_PR_AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: ${{ secrets.OR_PR_AWS_DEFAULT_REGION }}
run: |
DOMAIN_NAME_1=$PR_NUMBER-vcluster.${{ secrets.OR_PR_DOMAIN_NAME }}
DOMAIN_NAME_2=$PR_NUMBER.${{ secrets.OR_PR_DOMAIN_NAME }}
cat <<EOF > route53-changes.json
{
"Comment": "Create record set for VCluster",
"Changes": [
{
"Action": "CREATE",
"ResourceRecordSet": {
"Name": "$DOMAIN_NAME_1",
"Type": "CNAME",
"TTL": 300,
"ResourceRecords": [{ "Value": "${{ steps.lb-ip.outputs.ip }}" }]
}
},
{
"Action": "CREATE",
"ResourceRecordSet": {
"Name": "$DOMAIN_NAME_2",
"Type": "CNAME",
"TTL": 300,
"ResourceRecords": [{ "Value": "${{ steps.lb-ip.outputs.ip }}" }]
}
}
]
}
EOF
#
NEW_IP=${{ steps.lb-ip.outputs.ip }}
# Get the current IP address associated with the domain
CURRENT_IP=$(dig +short $DOMAIN_NAME_1 @1.1.1.1)
echo "current ip: $CURRENT_IP"
# Check if the domain has no IP association or if the IPs are different
if [ -z "$CURRENT_IP" ] || [ "$CURRENT_IP" != "$NEW_IP" ]; then
aws route53 change-resource-record-sets --hosted-zone-id ${{ secrets.OR_PR_HOSTED_ZONE_ID }} --change-batch file://route53-changes.json
fi
- name: Wait for DNS Propagation
if: steps.vcluster_exists.outputs.failed == 'true'
env:
EXPECTED_IP: ${{ steps.lb-ip.outputs.ip }}
run: |
DOMAIN_NAME="$PR_NUMBER-vcluster.${{ secrets.OR_PR_DOMAIN_NAME }}"
MAX_ATTEMPTS=30
attempt=1
until [[ $attempt -gt $MAX_ATTEMPTS ]]
do
# Use dig to query DNS records
DNS_RESULT=$(dig +short $DOMAIN_NAME @1.1.1.1)
# Check if DNS result is empty
if [ -z "$DNS_RESULT" ]; then
echo "No IP or CNAME records found for $DOMAIN_NAME."
else
echo "DNS records found for $DOMAIN_NAME:"
echo "$DNS_RESULT"
break
fi
echo "Waiting for DNS propagation... Attempt $attempt of $MAX_ATTEMPTS"
((attempt++))
sleep 20
done
if [[ $attempt -gt $MAX_ATTEMPTS ]]; then
echo "DNS propagation check failed for $DOMAIN_NAME after $MAX_ATTEMPTS attempts."
exit 1
fi
- name: Install openreplay
if: steps.vcluster_exists.outputs.failed == 'true'
env:
KUBECONFIG: /tmp/kubeconfig.yaml
run: |
DOMAIN_NAME=$PR_NUMBER.${{ secrets.OR_PR_DOMAIN_NAME }}
cd scripts/helmcharts
sed -i "s/domainName: \"\"/domainName: \"${DOMAIN_NAME}\"/g" vars.yaml
# If ee cluster, enable the following
if [ "$env_flavour" == "ee" ]; then
# Explanation for the sed command:
# /clickhouse:/: Matches lines containing "clickhouse:".
# {:a: Starts a block with label 'a'.
# n;: Reads the next line.
# /enabled:/s/false/true/: If the line contains 'enabled:', replace 'false' with 'true'.
# t done;: If the substitution was made, branch to label 'done'.
# ba;: Go back to label 'a' if no substitution was made.
# :done}: Label 'done', where the script goes after a successful substitution.
sed -i '/clickhouse:/{:a;n;/enabled:/s/false/true/;t done; ba; :done}' vars.yaml
sed -i '/kafka:/{:a;n;/# enabled:/s/# enabled: .*/enabled: true/;t done; ba; :done}' vars.yaml
sed -i '/redis:/{:a;n;/enabled:/s/true/false/;t done; ba; :done}' vars.yaml
sed -i "s/enterpriseEditionLicense: \"\"/enterpriseEditionLicense: \"${{ secrets.EE_LICENSE_KEY }}\"/g" vars.yaml
sed -i "s/domainName: \"\"/domainName: \"${DOMAIN_NAME}\"/g" vars.yaml
fi
helm upgrade -i databases -n db ./databases -f vars.yaml --create-namespace --wait -f ../pr-env/resources.yaml
helm upgrade -i openreplay -n app ./openreplay -f vars.yaml --create-namespace --set ingress-nginx.enabled=false -f ../pr-env/resources.yaml --wait
- name: Build and deploy application
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
env: ${{ github.event.inputs.env_flavour }}
run: |
set -x
app_name=${{github.event.inputs.build_service}}
echo "building and deploying $app_name"
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
export KUBECONFIG=/tmp/kubeconfig.yaml
function build_and_deploy {
apps_to_build=$1
case $apps_to_build in
backend*)
echo "Building backend build"
cd $GITHUB_WORKSPACE/backend
components=()
if [ $apps_to_build == "backend:all" ]; then
# Append all folder names from 'cmd/' directory to the array
for folder in cmd/*/; do
# Use basename to extract the folder name without path
folder_name=$(basename "$folder")
components+=("$folder_name")
done
else
# "${apps_to_build#*:}" :: Strip backend: and output app1,app2,app3 to read -ra
IFS=',' read -ra components <<< "${apps_to_build#*:}"
fi
echo "Building components: " ${components[@]}
for component in "${components[@]}"; do
if docker manifest inspect ${DOCKER_REPO}/$component:${IMAGE_TAG} > /dev/null 2>&1; then
echo Image present upstream. Skipping build: $component
else
echo "Building backend:$component"
PUSH_IMAGE=1 bash -x ./build.sh $env $component
fi
kubectl set image -n app deployment/$component-openreplay $component=${DOCKER_REPO}/$component:${IMAGE_TAG}
done
;;
chalice|api)
echo "Chalice build"
component=chalice
cd $GITHUB_WORKSPACE/api || { echo "Nothing to build: $component"; exit 100; }
if docker manifest inspect ${DOCKER_REPO}/$component:${IMAGE_TAG} > /dev/null 2>&1; then
echo Image present upstream. Skipping build: $component
else
echo "Building backend:$component"
PUSH_IMAGE=1 bash -x ./build.sh $env $component
fi
kubectl set image -n app deployment/$component-openreplay $component=${DOCKER_REPO}/$component:${IMAGE_TAG}
;;
*)
echo "$apps_to_build build"
cd $GITHUB_WORKSPACE/$apps_to_build || { echo "Nothing to build: $apps_to_build"; exit 100; }
component=$apps_to_build
if docker manifest inspect ${DOCKER_REPO}/$component:${IMAGE_TAG} > /dev/null 2>&1; then
echo Image present upstream. Skipping build: $component
else
echo "Building backend:$component"
PUSH_IMAGE=1 bash -x ./build.sh $env $component
fi
kubectl set image -n app deployment/$apps_to_build-openreplay $apps_to_build=${DOCKER_REPO}/$apps_to_build:${IMAGE_TAG}
;;
esac
}
case $app_name in
all)
build_and_deploy "backend:all"
build_and_deploy "frontend"
build_and_deploy "chalice"
build_and_deploy "sourcemapreader"
build_and_deploy "assist-stats"
;;
none)
echo "Nothing to build"
;;
*)
build_and_deploy $app_name
;;
esac
- name: Sent results to slack
if: steps.vcluster_exists.outputs.failed == 'true'
env:
SLACK_BOT_TOKEN: ${{ secrets.OR_PR_SLACK_BOT_TOKEN }}
SLACK_CHANNEL: ${{ secrets.OR_PR_SLACK_CHANNEL }}
run: |
echo hi ${{ steps.vcluster_exists.outputs.failed }}
DOMAIN_NAME=https://$PR_NUMBER.${{ secrets.OR_PR_DOMAIN_NAME }}
# Variables
PR_NUMBER=https://github.com/${{ github.repository }}/pull/${PR_NUMBER_PRE}
BRANCH_NAME=${{ github.ref_name }}
ORIGIN=$DOMAIN_NAME
ASSETS_HOST=$DOMAIN_NAME/assets
API_EDP=$DOMAIN_NAME/api
INGEST_POINT=$DOMAIN_NAME/ingest
# File to be uploaded
FILE_PATH="/tmp/kubeconfig.yaml"
if [ ! -f "$FILE_PATH" ]; then
echo "Kubeconfig file not found: $FILE_PATH"
exit 100
fi
# Form the message payload
PAYLOAD=$(cat <<EOF
{
"channel": "$SLACK_CHANNEL",
"text": "Deployment Information:\n- PR#: $PR_NUMBER\n- PR Status: $PR_STATUS\n- Branch Name: $BRANCH_NAME\n- Origin: $ORIGIN\n- Assets Host: $ASSETS_HOST\n- API Endpoint: $API_EDP\n- Ingest Point: $INGEST_POINT\n- To use the cluster: download the following file and run the following commands, \n export KUBECONFIG=/path/to/kubeconfig.yaml\n k9s"
}
EOF
)
# Send the message to Slack
curl -X POST -H "Authorization: Bearer $SLACK_BOT_TOKEN" -H 'Content-type: application/json' --data "$PAYLOAD" https://slack.com/api/chat.postMessage > /dev/null
# Upload the file to Slack
curl -F file=@"$FILE_PATH" -F channels="$SLACK_CHANNEL" -F token="$SLACK_BOT_TOKEN" https://slack.com/api/files.upload > /dev/null
# - name: Cleanup
# if: always()
# run: |
# # Add any cleanup commands if necessary
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true
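The build_and_deploy function above skips a build when the requested tag already exists in the registry by testing the exit code of docker manifest inspect. A standalone sketch of that check; the registry, tag and flavour argument are placeholders:

    # 'docker manifest inspect' exits non-zero when the tag is missing upstream.
    DOCKER_REPO="registry.example.com/foss"       # placeholder registry
    component="frontend"
    IMAGE_TAG="dev_0123abc"                       # placeholder tag
    if docker manifest inspect "${DOCKER_REPO}/${component}:${IMAGE_TAG}" >/dev/null 2>&1; then
      echo "Image present upstream. Skipping build: $component"
    else
      PUSH_IMAGE=1 bash ./build.sh foss "$component"   # flavour argument as the workflow passes $env
    fi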

View file

@ -1,103 +0,0 @@
name: Release Deployment
on:
workflow_dispatch:
inputs:
services:
description: 'Comma-separated list of services to deploy. eg: frontend,api,sink'
required: true
branch:
description: 'Branch to deploy (defaults to dev)'
required: false
default: 'dev'
env:
IMAGE_REGISTRY_URL: ${{ secrets.OSS_REGISTRY_URL }}
DEPOT_PROJECT_ID: ${{ secrets.DEPOT_PROJECT_ID }}
DEPOT_TOKEN: ${{ secrets.DEPOT_TOKEN }}
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
with:
ref: ${{ github.event.inputs.branch }}
- name: Docker login
run: |
docker login $IMAGE_REGISTRY_URL -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- name: Set image tag with branch info
run: |
SHORT_SHA=$(git rev-parse --short HEAD)
echo "IMAGE_TAG=${{ github.event.inputs.branch }}-${SHORT_SHA}" >> $GITHUB_ENV
echo "Using image tag: $IMAGE_TAG"
- uses: depot/setup-action@v1
- name: Build and push Docker images
run: |
# Parse the comma-separated services list into an array
IFS=',' read -ra SERVICES <<< "${{ github.event.inputs.services }}"
working_dir=$(pwd)
# Define backend services (consider moving this to workflow inputs or repo config)
ls backend/cmd >> /tmp/backend.txt
BUILD_SCRIPT_NAME="build.sh"
for SERVICE in "${SERVICES[@]}"; do
# Check if service is backend
if grep -q $SERVICE /tmp/backend.txt; then
cd $working_dir/backend
foss_build_args="nil $SERVICE"
ee_build_args="ee $SERVICE"
else
cd $working_dir
[[ $SERVICE == 'chalice' || $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && cd $working_dir/api || cd $SERVICE
[[ $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && BUILD_SCRIPT_NAME="build_${SERVICE}.sh"
ee_build_args="ee"
fi
{
echo IMAGE_TAG=$IMAGE_TAG DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$IMAGE_REGISTRY_URL PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
IMAGE_TAG=$IMAGE_TAG DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$IMAGE_REGISTRY_URL PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
}&
{
echo IMAGE_TAG=${IMAGE_TAG}-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$IMAGE_REGISTRY_URL PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
IMAGE_TAG=${IMAGE_TAG}-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$IMAGE_REGISTRY_URL PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
}&
done
wait
- uses: azure/k8s-set-context@v1
name: Using ee release cluster
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_RELEASE_KUBECONFIG }}
- name: Deploy to ee release Kubernetes
run: |
echo "Deploying services to EE cluster: ${{ github.event.inputs.services }}"
IFS=',' read -ra SERVICES <<< "${{ github.event.inputs.services }}"
for SERVICE in "${SERVICES[@]}"; do
SERVICE=$(echo $SERVICE | xargs) # Trim whitespace
echo "Deploying $SERVICE to EE cluster with image tag: ${IMAGE_TAG}"
kubectl set image deployment/$SERVICE-openreplay -n app $SERVICE=${IMAGE_REGISTRY_URL}/$SERVICE:${IMAGE_TAG}-ee
done
- uses: azure/k8s-set-context@v1
name: Using foss release cluster
with:
method: kubeconfig
kubeconfig: ${{ secrets.FOSS_RELEASE_KUBECONFIG }}
- name: Deploy to FOSS release Kubernetes
run: |
echo "Deploying services to FOSS cluster: ${{ github.event.inputs.services }}"
IFS=',' read -ra SERVICES <<< "${{ github.event.inputs.services }}"
for SERVICE in "${SERVICES[@]}"; do
SERVICE=$(echo $SERVICE | xargs) # Trim whitespace
echo "Deploying $SERVICE to FOSS cluster with image tag: ${IMAGE_TAG}"
echo "Deploying $SERVICE to FOSS cluster with image tag: ${IMAGE_TAG}"
kubectl set image deployment/$SERVICE-openreplay -n app $SERVICE=${IMAGE_REGISTRY_URL}/$SERVICE:${IMAGE_TAG}
done
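The build step above runs the FOSS and EE variants of each requested service as background jobs and gates the deploy on a single wait. A reduced sketch of that pattern; note that a bare wait returns zero regardless of the background jobs' exit codes, so a failed build does not fail the step by itself:

    IMAGE_TAG="dev-0123abc"                      # placeholder tag
    for SERVICE in frontend chalice; do          # hypothetical service list
      ( IMAGE_TAG="$IMAGE_TAG"      bash build.sh    ) &   # FOSS build in the background
      ( IMAGE_TAG="${IMAGE_TAG}-ee" bash build.sh ee ) &   # EE build in the background
    done
    wait   # waits for every background build; exit status is 0 either way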

View file

@ -1,150 +0,0 @@
# This action will push the sourcemapreader changes to ee
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip Security checks if there is a unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
paths:
- "ee/sourcemap-reader/**"
- "sourcemap-reader/**"
- "!sourcemap-reader/.gitignore"
- "!sourcemap-reader/*-dev.sh"
name: Build and Deploy sourcemap-reader EE
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.EE_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.EE_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.EE_LICENSE_KEY }}
minio_access_key: ${{ secrets.EE_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.EE_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.EE_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
# Caching docker images
- uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
- name: Building and Pushing sourcemaps-reader image
id: build-image
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}-ee
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd sourcemap-reader
PUSH_IMAGE=0 bash -x ./build.sh
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("sourcemaps-reader")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("sourcemaps-reader")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[:space:]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
tag: ${image_array[1]}
EOF
done
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
# Update changed image tag
sed -i "/sourcemaps-reader/{n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
sed -i "s/sourcemaps-reader/sourcemapreader/g" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,sourcemapreader,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks | kubectl apply -n app -f -
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
- name: Alert slack
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: ee
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
SLACK_USERNAME: "OR Bot"
SLACK_MESSAGE: "Build failed :bomb:"
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true
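The "Creating old image input" step above snapshots the tag of every image currently running in the app namespace so that untouched services keep their tags on the next template-and-apply. A compact sketch of that snapshot; sort -u stands in for the workflow's sort | uniq -c, whose count prefix is simply discarded by the cut on '/':

    # Collect running images, keep the '<service>:<tag>' part, and emit a helm override file.
    kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" \
      | tr -s '[:space:]' '\n' | sort -u | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
    echo > /tmp/image_override.yaml
    while IFS=: read -r name tag; do
      printf '%s:\n  image:\n    tag: %s\n' "$name" "$tag" >> /tmp/image_override.yaml
    done < /tmp/image_tag.txt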

View file

@ -1,149 +0,0 @@
# This action will push the sourcemapreader changes to aws
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip Security checks if there is a unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
paths:
- "sourcemap-reader/**"
- "!sourcemap-reader/.gitignore"
- "!sourcemap-reader/*-dev.sh"
name: Build and Deploy sourcemap-reader
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.OSS_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.OSS_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.OSS_LICENSE_KEY }}
minio_access_key: ${{ secrets.OSS_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.OSS_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.OSS_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
# Caching docker images
- uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
- name: Building and Pushing sourcemaps-reader image
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd sourcemap-reader
PUSH_IMAGE=0 bash -x ./build.sh
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("sourcemaps-reader")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("sourcemaps-reader")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[:space:]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
tag: ${image_array[1]}
EOF
done
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
# Update changed image tag
sed -i "/sourcemaps-reader/{n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
sed -i "s/sourcemaps-reader/sourcemapreader/g" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,sourcemapreader,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks | kubectl apply -n app -f -
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
- name: Alert slack
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: foss
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
SLACK_USERNAME: "OR Bot"
SLACK_MESSAGE: "Build failed :bomb:"
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true

View file

@ -1,67 +0,0 @@
# Checking unit tests for tracker and assist
name: Tracker tests
on:
workflow_dispatch:
push:
branches: [ "main", "dev" ]
paths:
- tracker/**
pull_request:
branches: [ "dev", "main" ]
paths:
- tracker/**
jobs:
build-and-test:
runs-on: macos-latest
name: Build and test Tracker
steps:
- uses: oven-sh/setup-bun@v2
with:
bun-version: latest
- uses: actions/checkout@v3
- name: Cache tracker modules
uses: actions/cache@v3
with:
path: tracker/tracker/node_modules
key: ${{ runner.OS }}-test_tracker_build-${{ hashFiles('**/bun.lockb') }}
restore-keys: |
${{ runner.OS }}-test_tracker_build-
${{ runner.OS }}-
- name: Cache tracker-assist modules
uses: actions/cache@v3
with:
path: tracker/tracker-assist/node_modules
key: ${{ runner.OS }}-test_tracker_build-${{ hashFiles('**/bun.lockb') }}
restore-keys: |
${{ runner.OS }}-test_tracker_build-
${{ runner.OS }}-
- name: Setup Testing packages
run: |
cd tracker/tracker
bun install
- name: Jest tests
run: |
cd tracker/tracker
bun run test:ci
- name: Building test
run: |
cd tracker/tracker
bun run build
- name: (TA) Setup Testing packages
run: |
cd tracker/tracker-assist
bun install
- name: (TA) Jest tests
run: |
cd tracker/tracker-assist
bun run test:ci
- name: (TA) Building test
run: |
cd tracker/tracker-assist
bun run build
- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v3
with:
token: ${{ secrets.CODECOV_TOKEN }}
flags: tracker
name: tracker

View file

@ -1,157 +0,0 @@
# Checking unit and visual tests locally on every merge request to dev and main
name: Frontend tests
on:
workflow_dispatch:
push:
branches: [ "main" ]
paths:
- frontend/**
- tracker/**
pull_request:
branches: [ "dev", "main" ]
paths:
- frontend/**
- tracker/**
env:
API: ${{ secrets.E2E_API_ORIGIN }}
ASSETS: ${{ secrets.E2E_ASSETS_ORIGIN }}
APIEDP: ${{ secrets.E2E_EDP_ORIGIN }}
CY_ACC: ${{ secrets.CYPRESS_ACCOUNT }}
CY_PASS: ${{ secrets.CYPRESS_PASSWORD }}
FOSS_PROJECT_KEY: ${{ secrets.FOSS_PROJECT_KEY }}
FOSS_INGEST: ${{ secrets.FOSS_INGEST }}
jobs:
build-and-test:
runs-on: macos-latest
name: Build and test Tracker plus Replayer
strategy:
matrix:
node-version: [ 20.x ]
steps:
- uses: oven-sh/setup-bun@v2
with:
bun-version: latest
- uses: actions/checkout@v3
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v3
with:
node-version: ${{ matrix.node-version }}
- name: Cache tracker modules
uses: actions/cache@v3
with:
path: tracker/tracker/node_modules
key: ${{ runner.OS }}-test_tracker_build-${{ hashFiles('tracker/tracker/bun.lockb') }}
restore-keys: |
${{ runner.OS }}-test_tracker_build-
${{ runner.OS }}-
- name: Setup Testing packages
run: |
cd tracker/tracker
bun install
- name: Build tracker inst
run: |
cd tracker/tracker
bun run build
- name: Setup Testing UI Env
run: |
cd tracker/tracker-testing-playground
echo "REACT_APP_KEY=$FOSS_PROJECT_KEY" >> .env
echo "REACT_APP_INGEST=$FOSS_INGEST" >> .env
- name: Cache testing UI node modules
uses: actions/cache@v3
with:
path: tracker/tracker-testing-playground/node_modules
key: ${{ runner.OS }}-build-${{ hashFiles('**/yarn.lock') }}
restore-keys: |
${{ runner.OS }}-build-
${{ runner.OS }}-
- name: Setup Testing packages
run: |
cd tracker/tracker-testing-playground
yarn
- name: Cache node modules
uses: actions/cache@v3
with:
path: frontend/node_modules
key: ${{ runner.OS }}-build-${{ hashFiles('frontend/yarn.lock') }}
restore-keys: |
${{ runner.OS }}-build-
${{ runner.OS }}-
- name: Setup env
run: |
cd frontend
echo "NODE_ENV=development" >> .env
echo "SOURCEMAP=true" >> .env
echo "ORIGIN=$API" >> .env
echo "ASSETS_HOST=$ASSETS" >> .env
echo "API_EDP=$APIEDP" >> .env
echo "SENTRY_ENABLED = false" >> .env
echo "SENTRY_URL = ''" >> .env
echo "CAPTCHA_ENABLED = false" >> .env
echo "CAPTCHA_SITE_KEY = 'asdad'" >> .env
echo "MINIO_ENDPOINT = ''" >> .env
echo "MINIO_PORT = ''" >> .env
echo "MINIO_USE_SSL = ''" >> .env
echo "MINIO_ACCESS_KEY = ''" >> .env
echo "MINIO_SECRET_KEY = ''" >> .env
echo "VERSION = '1.15.0'" >> .env
echo "TRACKER_VERSION = '10.0.0'" >> .env
echo "COMMIT_HASH = 'dev'" >> .env
echo "{ \"account\": \"$CY_ACC\", \"password\": \"$CY_PASS\" }" >> cypress.env.json
- name: Setup packages
run: |
cd frontend
yarn
- name: Run unit tests
run: |
cd frontend
yarn test:ci
- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v3
with:
token: ${{ secrets.CODECOV_TOKEN }}
flags: ui
name: ui
- name: Run testing frontend
run: |
cd tracker/tracker-testing-playground
yarn start &> testing.log &
echo "Started"
npm i -g wait-on
echo "Got wait on"
sleep 30
cat testing.log
npx wait-on http://localhost:3000
echo "Done"
timeout-minutes: 4
- name: Run Frontend
run: |
cd frontend
bun start &> frontend.log &
echo "Started"
sleep 30
cat frontend.log
npx wait-on http://0.0.0.0:3333
echo "Done"
timeout-minutes: 4
- name: (Chrome) Run visual tests
run: |
cd frontend
yarn cy:test
# firefox have different viewport somehow
# - name: (Firefox) Run visual tests
# run: yarn cy:test-firefox
# - name: (Edge) Run visual tests
# run: yarn cy:test-edge
timeout-minutes: 5
- name: Upload Debug
if: ${{ failure() }}
uses: actions/upload-artifact@v3
with:
name: 'Snapshots'
path: |
frontend/cypress/videos
frontend/cypress/snapshots/replayer.cy.ts
frontend/cypress/screenshots
frontend/cypress/snapshots/generalStability.cy.ts

View file

@ -1,42 +0,0 @@
on:
pull_request:
types: [closed]
branches:
- main
name: Release tag update --force
jobs:
deploy:
name: Build Patch from main
runs-on: ubuntu-latest
if: ${{ (github.event_name == 'pull_request' && github.event.pull_request.merged == true) || github.event.inputs.services == 'true' }}
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Get latest release tag using GitHub API
id: get-latest-tag
run: |
LATEST_TAG=$(curl -s -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
"https://api.github.com/repos/${{ github.repository }}/releases/latest" \
| jq -r .tag_name)
# Fallback to git command if API doesn't return a tag
if [ "$LATEST_TAG" == "null" ] || [ -z "$LATEST_TAG" ]; then
echo "Not found latest tag"
exit 100
fi
echo "LATEST_TAG=$LATEST_TAG" >> $GITHUB_ENV
echo "Latest tag: $LATEST_TAG"
- name: Set Remote with GITHUB_TOKEN
run: |
git config --unset http.https://github.com/.extraheader
git remote set-url origin https://x-access-token:${{ secrets.ACTIONS_COMMMIT_TOKEN }}@github.com/${{ github.repository }}
- name: Push main branch to tag
run: |
git checkout main
echo "Updating tag ${{ env.LATEST_TAG }} to point to latest commit on main"
git push origin HEAD:refs/tags/${{ env.LATEST_TAG }} --force

.github/workflows/utilities.yaml vendored Normal file
View file

@ -0,0 +1,64 @@
# This action will push the utilities changes to aws
on:
push:
branches:
- dev
paths:
- utilities/**
name: Build and Deploy Utilities
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- name: Docker login
run: |
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Building and Pushing utilities image
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.sha }}
ENVIRONMENT: staging
run: |
cd utilities
PUSH_IMAGE=1 bash build.sh
- name: Deploy to kubernetes
run: |
cd scripts/helm/
sed -i "s#minio_access_key.*#minio_access_key: \"${{ secrets.OSS_MINIO_ACCESS_KEY }}\" #g" vars.yaml
sed -i "s#minio_secret_key.*#minio_secret_key: \"${{ secrets.OSS_MINIO_SECRET_KEY }}\" #g" vars.yaml
sed -i "s#domain_name.*#domain_name: \"foss.openreplay.com\" #g" vars.yaml
sed -i "s#kubeconfig.*#kubeconfig_path: ${KUBECONFIG}#g" vars.yaml
sed -i "s/image_tag:.*/image_tag: \"$IMAGE_TAG\"/g" vars.yaml
bash kube-install.sh --app utilities
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.sha }}
ENVIRONMENT: staging
# - name: Debug Job
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}
# ENVIRONMENT: staging
#

View file

@ -1,22 +1,11 @@
# Ref: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions
on:
workflow_dispatch:
inputs:
build_service:
description: 'Name of a single service to build (in lowercase). "all" to build everything'
required: false
default: "false"
skip_security_checks:
description: "Skip Security checks if there is a unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
- dev
paths:
- ee/backend/**
- backend/**
name: Build and deploy workers EE
@ -26,168 +15,72 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
# ref: staging
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
# ref: staging
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.EE_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.EE_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.EE_LICENSE_KEY }}
minio_access_key: ${{ secrets.EE_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.EE_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.EE_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
- name: Docker login
run: |
docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Downloading yq
run: |
VERSION="v4.42.1"
sudo wget https://github.com/mikefarah/yq/releases/download/${VERSION}/yq_linux_amd64 -O /usr/bin/yq
sudo chmod +x /usr/bin/yq
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
# # Caching docker images
# - uses: satackey/action-docker-layer-caching@v0.0.11
# # Ignore the failure of a step and avoid terminating the job.
# continue-on-error: true
- name: Build, tag
id: build-image
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}-ee
ENVIRONMENT: staging
run: |
#
# TODO: Check the container tags are same, then skip the build and deployment.
#
# Build a docker container and push it to Docker Registry so that it can be deployed to Kubernetes cluster.
#
# Getting the images to build
#
set -x
touch /tmp/images_to_build.txt
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
tmp_param=${{ github.event.inputs.build_service }}
build_param=${tmp_param:-'false'}
case ${build_param} in
false)
{
git diff --name-only HEAD HEAD~1 | grep -E "backend/pkg|backend/internal" | grep -vE ^ee/ | cut -d '/' -f3 | uniq | while read -r pkg_name ; do
grep -rl "pkg/$pkg_name" backend/services backend/cmd | cut -d '/' -f3
done
} | awk '!seen[$0]++' > /tmp/images_to_build.txt
;;
all)
ls backend/cmd > /tmp/images_to_build.txt
;;
*)
echo ${{github.event.inputs.build_service }} > /tmp/images_to_build.txt
;;
esac
if [[ $(cat /tmp/images_to_build.txt) == "" ]]; then
echo "Nothing to build here"
touch /tmp/nothing-to-build-here
exit 0
fi
#
# Pushing image to registry
#
cd backend
cat /tmp/images_to_build.txt
for image in $(cat /tmp/images_to_build.txt);
do
echo "Bulding $image"
PUSH_IMAGE=0 bash -x ./build.sh ee $image
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
docker push $DOCKER_REPO/$image:$IMAGE_TAG
echo "::set-output name=image::$DOCKER_REPO/$image:$IMAGE_TAG"
done
    - name: Deploying to kubernetes
env:
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
run: |
#
# Deploying image to environment.
#
set -x
[[ -f /tmp/nothing-to-build-here ]] && exit 0
cd scripts/helmcharts/
set -x
echo > /tmp/image_override.yaml
mkdir /tmp/helmcharts
mv openreplay/charts/ingress-nginx /tmp/helmcharts/
mv openreplay/charts/quickwit /tmp/helmcharts/
mv openreplay/charts/connector /tmp/helmcharts/
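# Park the charts we want to keep in /tmp/helmcharts, wipe the rest, and move the
# parked charts back so helm only deploys the rebuilt services (plus these three).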
## Update images
for image in $(cat /tmp/images_to_build.txt);
do
mv openreplay/charts/$image /tmp/helmcharts/
cat <<EOF>>/tmp/image_override.yaml
${image}:
image:
    # We have to strip off the -ee suffix, as helm will append it.
tag: ${IMAGE_TAG}
EOF
done
ls /tmp/helmcharts
rm -rf openreplay/charts/*
ls openreplay/charts
mv /tmp/helmcharts/* openreplay/charts/
ls openreplay/charts
- name: Build, tag, and Deploy to k8s
id: build-image
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
IMAGE_TAG: ee-${{ github.sha }}
ENVIRONMENT: staging
run: |
#
# TODO: Check the container tags are same, then skip the build and deployment.
#
# Build a docker container and push it to Docker Registry so that it can be deployed to Kubernetes cluster.
#
# Getting the images to build
#
git diff --name-only HEAD HEAD~1 | grep backend/services | grep -vE ^ee/ | cut -d '/' -f3 | uniq > backend/images_to_build.txt
[[ $(cat backend/images_to_build.txt) != "" ]] || (echo "Nothing to build here"; exit 0)
#
# Pushing image to registry
#
cd backend
for image in $(cat images_to_build.txt);
do
echo "Bulding $image"
PUSH_IMAGE=1 bash -x ./build.sh ee $image
echo "::set-output name=image::$DOCKER_REPO/$image:$IMAGE_TAG"
done
#
# Deploying image to environment.
#
cd ../scripts/helm/
sed -i "s#minio_access_key.*#minio_access_key: \"${{ secrets.EE_MINIO_ACCESS_KEY }}\" #g" vars.yaml
sed -i "s#minio_secret_key.*#minio_secret_key: \"${{ secrets.EE_MINIO_SECRET_KEY }}\" #g" vars.yaml
sed -i "s#jwt_secret_key.*#jwt_secret_key: \"${{ secrets.EE_JWT_SECRET }}\" #g" vars.yaml
sed -i "s#domain_name.*#domain_name: \"foss.openreplay.com\" #g" vars.yaml
sed -i "s#kubeconfig.*#kubeconfig_path: ${KUBECONFIG}#g" vars.yaml
for image in $(cat ../../backend/images_to_build.txt);
do
sed -i "s/image_tag:.*/image_tag: \"$IMAGE_TAG\"/g" vars.yaml
# Deploy command
bash openreplay-cli --install $image
done
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true | kubectl apply -f -
    - name: Alert slack
      if: ${{ failure() }}
      uses: rtCamp/action-slack-notify@v2
      env:
        SLACK_CHANNEL: ee
        SLACK_TITLE: "Failed ${{ github.workflow }}"
        SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
        SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
        SLACK_USERNAME: "OR Bot"
        SLACK_MESSAGE: "Build failed :bomb:"
# - name: Debug Job
# # if: ${{ failure() }}
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# IMAGE_TAG: ${{ github.sha }}
# ENVIRONMENT: staging
# with:
    # limit-access-to-actor: true
#

View file

@ -1,19 +1,9 @@
# Ref: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions
on:
workflow_dispatch:
inputs:
build_service:
description: 'Name of a single service to build(in small letters). "all" to build everything'
required: false
default: "false"
skip_security_checks:
description: "Skip Security checks if there is a unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
      - dev
paths:
- backend/**
@ -25,162 +15,79 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
# ref: staging
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.OSS_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.OSS_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.OSS_LICENSE_KEY }}
minio_access_key: ${{ secrets.OSS_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.OSS_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.OSS_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Build, tag, and Deploy to k8s
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.sha }}
ENVIRONMENT: staging
run: |
#
# TODO: Check the container tags are same, then skip the build and deployment.
#
# Build a docker container and push it to Docker Registry so that it can be deployed to Kubernetes cluster.
#
# Getting the images to build
#
# Caching docker images
# - uses: satackey/action-docker-layer-caching@v0.0.11
# # Ignore the failure of a step and avoid terminating the job.
# continue-on-error: true
- name: Build, tag
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
run: |
#
# TODO: Check the container tags are same, then skip the build and deployment.
#
# Build a docker container and push it to Docker Registry so that it can be deployed to Kubernetes cluster.
#
# Getting the images to build
#
set -xe
touch /tmp/images_to_build.txt
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
tmp_param=${{ github.event.inputs.build_service }}
build_param=${tmp_param:-'false'}
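# build_service input: "false" (default) builds only the services affected by the
# last commit, "all" builds everything under backend/cmd, and any other value is
# treated as the name of a single service to build.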
case ${build_param} in
false)
{
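# Map each changed backend package to the services/cmds that reference it.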
git diff --name-only HEAD HEAD~1 | grep -E "backend/pkg|backend/internal" | grep -vE ^ee/ | cut -d '/' -f3 | uniq | while read -r pkg_name ; do
grep -rl "pkg/$pkg_name" backend/services backend/cmd | cut -d '/' -f3
done
} | awk '!seen[$0]++' > /tmp/images_to_build.txt
;;
all)
ls backend/cmd > /tmp/images_to_build.txt
;;
*)
echo ${{github.event.inputs.build_service }} > /tmp/images_to_build.txt
;;
esac
if [[ $(cat /tmp/images_to_build.txt) == "" ]]; then
echo "Nothing to build here"
touch /tmp/nothing-to-build-here
exit 0
fi
#
# Pushing image to registry
#
cd backend
cat /tmp/images_to_build.txt
for image in $(cat /tmp/images_to_build.txt);
do
echo "Bulding $image"
PUSH_IMAGE=0 bash -x ./build.sh skip $image
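# Unless skip_security_checks is "true", scan the freshly built image with Trivy
# and fail the build on unfixed HIGH/CRITICAL vulnerabilities.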
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
docker push $DOCKER_REPO/$image:$IMAGE_TAG
echo "::set-output name=image::$DOCKER_REPO/$image:$IMAGE_TAG"
{
git diff --name-only HEAD HEAD~1 | grep backend/services | grep -vE ^ee/ | cut -d '/' -f3
git diff --name-only HEAD HEAD~1 | grep backend/pkg | grep -vE ^ee/ | cut -d '/' -f3 | uniq | while read -r pkg_name ; do
grep -rl "pkg/$pkg_name" backend/services | cut -d '/' -f3
done
} | uniq > backend/images_to_build.txt
    - name: Deploying to kubernetes
env:
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
run: |
#
# Deploying image to environment.
#
set -x
[[ -f /tmp/nothing-to-build-here ]] && exit 0
cd scripts/helmcharts/
set -x
echo > /tmp/image_override.yaml
mkdir /tmp/helmcharts
mv openreplay/charts/ingress-nginx /tmp/helmcharts/
mv openreplay/charts/quickwit /tmp/helmcharts/
mv openreplay/charts/connector /tmp/helmcharts/
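# Park the charts we want to keep in /tmp/helmcharts, wipe the rest, and move the
# parked charts back so helm only deploys the rebuilt services (plus these three).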
## Update images
for image in $(cat /tmp/images_to_build.txt);
do
mv openreplay/charts/$image /tmp/helmcharts/
cat <<EOF>>/tmp/image_override.yaml
${image}:
image:
    # We have to strip off the -ee suffix, as helm will append it.
tag: ${IMAGE_TAG}
EOF
done
ls /tmp/helmcharts
rm -rf openreplay/charts/*
ls openreplay/charts
mv /tmp/helmcharts/* openreplay/charts/
ls openreplay/charts
[[ $(cat backend/images_to_build.txt) != "" ]] || (echo "Nothing to build here"; exit 0)
#
# Pushing image to registry
#
cd backend
for image in $(cat images_to_build.txt);
do
echo "Bulding $image"
PUSH_IMAGE=1 bash -x ./build.sh skip $image
echo "::set-output name=image::$DOCKER_REPO/$image:$IMAGE_TAG"
done
#
# Deploying image to environment.
#
cd ../scripts/helm/
sed -i "s#minio_access_key.*#minio_access_key: \"${{ secrets.OSS_MINIO_ACCESS_KEY }}\" #g" vars.yaml
sed -i "s#minio_secret_key.*#minio_secret_key: \"${{ secrets.OSS_MINIO_SECRET_KEY }}\" #g" vars.yaml
sed -i "s#domain_name.*#domain_name: \"foss.openreplay.com\" #g" vars.yaml
sed -i "s#kubeconfig.*#kubeconfig_path: ${KUBECONFIG}#g" vars.yaml
for image in $(cat ../../backend/images_to_build.txt);
do
sed -i "s/image_tag:.*/image_tag: \"$IMAGE_TAG\"/g" vars.yaml
# Deploy command
bash kube-install.sh --app $image
done
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true | kubectl apply -f -
- name: Alert slack
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: foss
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
SLACK_USERNAME: "OR Bot"
SLACK_MESSAGE: "Build failed :bomb:"
# - name: Debug Job
# # if: ${{ failure() }}
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}
# ENVIRONMENT: staging
# with:
    # limit-access-to-actor: true
#
#

5
.gitignore vendored
View file

@ -3,7 +3,4 @@ public
node_modules
*DS_Store
*.env
*.log
**/*.envrc
.idea
*.mob*
.idea

View file

@ -1,7 +0,0 @@
repos:
- repo: https://github.com/gitguardian/ggshield
rev: v1.14.5
hooks:
- id: ggshield
language_version: python3
stages: [commit]

678
LICENSE
View file

@ -1,15 +1,11 @@
Copyright (c) 2021-2025 Asayer, Inc dba OpenReplay
OpenReplay monorepo uses multiple licenses. Portions of this software are licensed as follows:
- All content that resides under the "ee/" directory of this repository, is licensed under the license defined in "ee/LICENSE".
- All third party components incorporated into the OpenReplay Software are licensed under the original license provided by the owner of the applicable component.
- Some directories are licensed under the "MIT" license, as defined below.
- Content outside of the above mentioned directories or restrictions defaults to the "GNU Affero General Public License Version 3 (AGPL v3)" license, as defined below.
Reach out (license@openreplay.com) if you have any questions regarding licenses.
MIT License
Copyright (c) 2022 Asayer SAS.
Portions of this software are licensed as follows:
- All content that resides under the "ee/" directory of this repository, if that directory exists, is licensed under the license defined in "ee/LICENSE".
- Content outside of the above mentioned directories or restrictions above is available under the "MIT Expat" license as defined below.
--------------------------------------------------------------------------------
MIT LICENSE
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@ -28,667 +24,3 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.

View file

@ -1,53 +1,40 @@
<p align="center">
<a href="/README_FR.md">Français</a>
&nbsp;|&nbsp;
<a href="/README_ESP.md">Español</a>
&nbsp;|&nbsp;
<a href="/README_RU.md">Русский</a>
&nbsp;|&nbsp;
<a href="/README_AR.md">العربية</a>
</p>
<p align="center">
<a href="https://openreplay.com/#gh-light-mode-only">
<img src="static/openreplay-git-banner-light.png" width="100%">
</a>
<a href="https://openreplay.com/#gh-dark-mode-only">
<img src="static/openreplay-git-banner-dark.png" width="100%">
<a href="https://openreplay.com">
<img src="static/logo.svg" height="70">
</a>
</p>
<h3 align="center">Session replay for developers</h3>
<p align="center">The most advanced session replay for building delightful web apps.</p>
<p align="center">The most advanced open-source session replay to build delightful web apps.</p>
<p align="center">
<a href="https://docs.openreplay.com/deployment/deploy-aws">
<img src="static/btn-deploy-aws.svg" height="40"/>
<img src="static/deploy-aws.png" height="35"/>
</a>
<a href="https://docs.openreplay.com/deployment/deploy-gcp">
<img src="static/btn-deploy-google-cloud.svg" height="40" />
<img src="static/deploy-gcp.png" height="35" />
</a>
<a href="https://docs.openreplay.com/deployment/deploy-azure">
<img src="static/btn-deploy-azure.svg" height="40" />
<img src="static/deploy-azure.png" height="35" />
</a>
<a href="https://docs.openreplay.com/deployment/deploy-digitalocean">
<img src="static/btn-deploy-digital-ocean.svg" height="40" />
<img src="static/deploy-do.png" height="35" />
</a>
</p>
<p align="center">
<a href="https://github.com/openreplay/openreplay">
<img src="static/openreplay-git-hero.svg">
<img src="static/overview.png">
</a>
</p>
OpenReplay is an open-source session replay suite you can host yourself. It lets you see what users do on your web app, helping you troubleshoot issues faster.
OpenReplay is a session replay stack that lets you see what users do on your web app, helping you troubleshoot issues faster. It's the only open-source alternative to products such as FullStory and LogRocket.
- **Session replay.** OpenReplay replays what users do, and more: it also shows you what went on under the hood and how your website or app behaves, by capturing network activity, console logs, JS errors, store actions/state, page speed metrics, CPU/memory usage and much more.
- **Low footprint**. With a ~26KB (.br) tracker that asynchronously sends minimal data for a very limited impact on performance.
- **Low footprint**. With a ~18KB (.gz) tracker that asynchronously sends minimal data for a very limited impact on performance.
- **Self-hosted**. No more security compliance checks or 3rd parties processing user data. Everything OpenReplay captures stays in your cloud, for complete control over your data.
- **Privacy controls**. Fine-grained security features for sanitizing user data.
- **Easy deploy**. With support of major public cloud providers (AWS, GCP, Azure, DigitalOcean).
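
Capturing all of this starts with loading the tracker in your web app. Below is a minimal sketch of that setup; the `@openreplay/tracker` package name and the `projectKey`/`ingestPoint` options follow the public docs, but treat the exact option names as assumptions to verify against your OpenReplay version.

```ts
// Minimal tracker bootstrap (sketch). Package name and options are assumed
// from the public OpenReplay docs; adjust to your own project key and domain.
import OpenReplay from '@openreplay/tracker';

const tracker = new OpenReplay({
  projectKey: 'YOUR_PROJECT_KEY',                        // issued by your OpenReplay instance
  ingestPoint: 'https://openreplay.example.com/ingest',  // self-hosted ingest endpoint (assumed)
});

// start() is asynchronous and batches data in the background,
// which is what keeps the runtime footprint low.
tracker.start();
```

Everything the tracker captures is sent to your own instance, which is what the self-hosted and privacy points above refer to.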
@ -55,13 +42,12 @@ OpenReplay is an open-source session replay suite you can host yourself, that le
## Features
- **Session replay:** Lets you relive your users' experience, see where they struggle and how it affects their behavior. Each session replay is automatically analyzed based on heuristics, for easy triage.
- **Spot:** A Chrome extension that lets you record bugs directly from your browser; each recording includes all the technical details developers need to fix them.
- **DevTools:** It's like debugging in your own browser. OpenReplay provides you with the full context (network activity, JS errors, store actions/state and 40+ metrics) so you can instantly reproduce bugs and understand performance issues.
- **Assist:** Helps you support your users by seeing their live screen and instantly hopping on a call (WebRTC) with them, without requiring any 3rd-party screen sharing software.
- **Omni-search:** Search and filter by almost any user action/criteria, session attribute or technical event, so you can answer any question. No instrumentation required.
- **Analytics:** For surfacing the most impactful issues causing conversion and revenue loss.
- **Funnels:** For surfacing the most impactful issues causing conversion and revenue loss.
- **Fine-grained privacy controls:** Choose what to capture, what to obscure or what to ignore so user data doesn't even reach your servers.
- **Plugins oriented:** Get to the root cause even faster by tracking application state (Redux, VueX, MobX, NgRx, Pinia and Zustand) and logging GraphQL queries (Apollo, Relay) and Fetch/Axios requests.
- **Plugins oriented:** Get to the root cause even faster by tracking application state (Redux, VueX, MobX, NgRx) and logging GraphQL queries (Apollo, Relay) and Fetch requests.
- **Integrations:** Sync your backend logs with your session replays and see what happened front-to-back. OpenReplay supports Sentry, Datadog, CloudWatch, Stackdriver, Elastic and more.
## Deployment Options
@ -73,7 +59,6 @@ OpenReplay can be deployed anywhere. Follow our step-by-step guides for deployin
- [Azure](https://docs.openreplay.com/deployment/deploy-azure)
- [Digital Ocean](https://docs.openreplay.com/deployment/deploy-digitalocean)
- [Scaleway](https://docs.openreplay.com/deployment/deploy-scaleway)
- [OVHcloud](https://docs.openreplay.com/deployment/deploy-ovhcloud)
- [Kubernetes](https://docs.openreplay.com/deployment/deploy-kubernetes)
## OpenReplay Cloud
@ -87,7 +72,6 @@ Please refer to the [official OpenReplay documentation](https://docs.openreplay.
- [Slack](https://slack.openreplay.com) (Connect with our engineers and community)
- [GitHub](https://github.com/openreplay/openreplay/issues) (Bug and issue reports)
- [Twitter](https://twitter.com/OpenReplayHQ) (Product updates, Great content)
- [YouTube](https://www.youtube.com/channel/UCcnWlW-5wEuuPAwjTR1Ydxw) (How-to tutorials, past Community Calls)
- [Website chat](https://openreplay.com) (Talk to us)
## Contributing
@ -98,6 +82,10 @@ See our [Contributing Guide](CONTRIBUTING.md) for more details.
Also, feel free to join our [Slack](https://slack.openreplay.com) to ask questions, discuss ideas or connect with our contributors.
## Roadmap
Check out our [roadmap](https://www.notion.so/openreplay/Roadmap-889d2c3d968b4786ab9b281ab2394a94) and keep an eye on what's coming next. You're free to [submit](https://github.com/openreplay/openreplay/issues/new) new ideas and vote on features.
## License
This monorepo uses several licenses. See [LICENSE](/LICENSE) for more details.
This repo is entirely MIT licensed, with the exception of the `ee` directory.

View file

@ -1,106 +0,0 @@
<p align="center">
<a href="/README_FR.md">Français</a>
&nbsp;|&nbsp;
<a href="/README_ESP.md">Español</a>
&nbsp;|&nbsp;
<a href="/README_RU.md">Русский</a>
&nbsp;|&nbsp;
<a href="/README.md">English</a>
</p>
<p align="center">
<a href="https://openreplay.com/#gh-light-mode-only">
<img src="static/openreplay-git-banner-light.png" width="100%">
</a>
<a href="https://openreplay.com/#gh-dark-mode-only">
<img src="static/openreplay-git-banner-dark.png" width="100%">
</a>
</p>
<h3 align="center">إعادة تشغيل الجلسة للمطورين</h3>
<p align="center">إعادة تشغيل الجلسة الأكثر تقدمًا لإنشاء تطبيقات ويب رائعة</p>
<p align="center">
<a href="https://docs.openreplay.com/deployment/deploy-aws">
<img src="static/btn-deploy-aws.svg" height="40"/>
</a>
<a href="https://docs.openreplay.com/deployment/deploy-gcp">
<img src="static/btn-deploy-google-cloud.svg" height="40" />
</a>
<a href="https://docs.openreplay.com/deployment/deploy-azure">
<img src="static/btn-deploy-azure.svg" height="40" />
</a>
<a href="https://docs.openreplay.com/deployment/deploy-digitalocean">
<img src="static/btn-deploy-digital-ocean.svg" height="40" />
</a>
</p>
<p align="center">
<a href="https://github.com/openreplay/openreplay">
<img src="static/openreplay-git-hero.svg">
</a>
</p>
OpenReplay is a session replay suite you can host yourself, letting you see what users do on your web app and helping you troubleshoot issues faster.
- **Session replay.** OpenReplay replays what users do and how your website or app behaves, by capturing network activity, console logs, JavaScript errors, store actions/state, page speed metrics, CPU/memory usage and much more.
- **Low footprint**. A ~26KB (.br) tracker that asynchronously sends minimal data, for a very limited impact on performance.
- **Self-hosted.** No more security compliance checks or 3rd parties processing user data. Everything OpenReplay captures stays in your cloud, for complete control over your data.
- **Privacy controls.** Fine-grained security features for sanitizing user data.
- **Easy deploy.** With support for major public cloud providers (AWS, GCP, Azure, DigitalOcean).
## Features
- **Session replay:** Lets you relive your users' experience, see where they struggle and how it affects their behavior. Each session replay is automatically analyzed based on heuristics, for easy triage.
- **DevTools:** It's like debugging in your own browser. OpenReplay provides you with the full context (network activity, JS errors, store actions/state and 40+ metrics) so you can instantly reproduce bugs and understand performance issues.
- **Assist:** Helps you support your users by seeing their live screen and instantly hopping on a call (WebRTC) with them, without requiring any 3rd-party screen sharing software.
- **Omni-search:** Search and filter by almost any user action/criteria, session attribute or technical event, so you can answer any question. No instrumentation required.
- **Funnels:** For surfacing the most impactful issues causing conversion and revenue loss.
- **Fine-grained privacy controls:** Choose what to capture, what to obscure or what to ignore so user data doesn't even reach your servers.
- **Plugins oriented:** Get to the root cause even faster by tracking application state (Redux, VueX, MobX, NgRx, Pinia and Zustand) and logging GraphQL queries (Apollo, Relay) and Fetch/Axios requests.
- **Integrations:** Sync your backend logs with your session replays and see what happened front-to-back. OpenReplay supports Sentry, Datadog, CloudWatch, Stackdriver, Elastic and more.
## Deployment Options
OpenReplay can be deployed anywhere. Follow our step-by-step guides for deploying it on the major public clouds:
- [AWS](https://docs.openreplay.com/deployment/deploy-aws)
- [Google Cloud](https://docs.openreplay.com/deployment/deploy-gcp)
- [Azure](https://docs.openreplay.com/deployment/deploy-azure)
- [Digital Ocean](https://docs.openreplay.com/deployment/deploy-digitalocean)
- [Scaleway](https://docs.openreplay.com/deployment/deploy-scaleway)
- [OVHcloud](https://docs.openreplay.com/deployment/deploy-ovhcloud)
- [Kubernetes](https://docs.openreplay.com/deployment/deploy-kubernetes)
## سحابة OpenReplay
لأولئك الذين يرغبون في استخدام OpenReplay كخدمة، [قم بالتسجيل](https://app.openreplay.com/signup) للحصول على حساب مجاني على عرض السحابة لدينا.
## دعم المجتمع
يرجى الرجوع إلى [الوثائق الرسمية لـ OpenReplay](https://docs.openreplay.com/). سيساعدك ذلك في حل المشكلات الشائعة. للحصول على مساعدة إضافية، يمكنك الاتصال بنا عبر أحد هذه القنوات:
- [Slack](https://slack.openreplay.com) (الاتصال مع مهندسينا والمجتمع)
- [GitHub](https://github.com/openreplay/openreplay/issues) (تقارير الأخطاء والمشكلات)
- [Twitter](https://twitter.com/OpenReplayHQ) (تحديثات المنتج، محتوى رائع)
- [YouTube](https://www.youtube.com/channel/UCcnWlW-5wEuuPAwjTR1Ydxw) (دروس حول كيفية الاستخدام، مكالمات مجتمع سابقة)
- [دردشة الموقع الإلكتروني](https://openreplay.com) (تحدث معنا)
## المساهمة
نحن دائمًا في انتظار المساهمات في OpenReplay، ونحن سعداء بأنك تفكر في ذلك! غير متأكد من أين تبدأ؟ ابحث عن المشاكل المفتوحة، وخاصة تلك المُميزة بأنها مناسبة للمبتدئين.
انظر دليل المساهمة لدينا [دليل المساهمة](CONTRIBUTING.md) لمزيد من التفاصيل.
كما توجد حرية الانضمام إلى Slack لدينا [Slack](https://slack.openreplay.com) لطرح الأسئلة، مناقشة الأفكار أو التواصل مع مساهمينا.
## الخارطة الزمنية
تحقق من [الخارطة الزمنية لدينا](https://www.notion.so/openreplay/Roadmap-889d2c3d968b4786ab9b281ab2394a94) وابق على اطلاع على ما سيأتي لاحقًا. لديك حرية [تقديم أفكار جديدة](https://github.com/openreplay/openreplay/issues/new) والتصويت على الميزات.
## الترخيص
يستخدم هذا المستودع المتعدد التراخيص. انظر إلى [LICENSE](/LICENSE) لمزيد من التفاصيل.

View file

@ -1,106 +0,0 @@
<p align="center">
<a href="/README_FR.md">Français</a>
&nbsp;|&nbsp;
<a href="/README.md">English</a>
&nbsp;|&nbsp;
<a href="/README_RU.md">Русский</a>
&nbsp;|&nbsp;
<a href="/README_RU.md">العربية</a>
</p>
<p align="center">
<a href="https://openreplay.com/#gh-light-mode-only">
<img src="static/openreplay-git-banner-light.png" width="100%">
</a>
<a href="https://openreplay.com/#gh-dark-mode-only">
<img src="static/openreplay-git-banner-dark.png" width="100%">
</a>
</p>
<h3 align="center">Reproducción de sesiones para desarrolladores</h3>
<p align="center">La reproducción de sesiones más avanzada para crear aplicaciones web encantadoras.</p>
<p align="center">
<a href="https://docs.openreplay.com/deployment/deploy-aws">
<img src="static/btn-deploy-aws.svg" height="40"/>
</a>
<a href="https://docs.openreplay.com/deployment/deploy-gcp">
<img src="static/btn-deploy-google-cloud.svg" height="40" />
</a>
<a href="https://docs.openreplay.com/deployment/deploy-azure">
<img src="static/btn-deploy-azure.svg" height="40" />
</a>
<a href="https://docs.openreplay.com/deployment/deploy-digitalocean">
<img src="static/btn-deploy-digital-ocean.svg" height="40" />
</a>
</p>
<p align="center">
<a href="https://github.com/openreplay/openreplay">
<img src="static/openreplay-git-hero.svg">
</a>
</p>
OpenReplay es una suite de retransmisión de sesiones que puedes alojar tú mismo, lo que te permite ver lo que hacen los usuarios en tu aplicación web y ayudarte a solucionar problemas más rápido.
- **Reproducción de sesiones.** OpenReplay reproduce lo que hacen los usuarios, pero no solo eso. También te muestra lo que ocurre bajo el capó, cómo se comporta tu sitio web o aplicación al capturar la actividad de la red, registros de la consola, errores de JavaScript, acciones/estado del almacén, métricas de velocidad de la página, uso de CPU/memoria y mucho más.
- **Huella reducida.** Con un rastreador de aproximadamente 26 KB (.br) que envía datos mínimos de forma asíncrona, lo que tiene un impacto muy limitado en el rendimiento.
- **Auto-alojado.** No más verificaciones de cumplimiento de seguridad ni procesamiento de datos de usuario por terceros. Todo lo que OpenReplay captura se queda en tu nube para un control completo sobre tus datos.
- **Controles de privacidad.** Funciones de seguridad detalladas para desinfectar los datos de usuario.
- **Despliegue sencillo.** Con el soporte de los principales proveedores de nube pública (AWS, GCP, Azure, DigitalOcean).
## Características
- **Reproducción de sesiones:** Te permite revivir la experiencia de tus usuarios, ver dónde encuentran dificultades y cómo afecta su comportamiento. Cada reproducción de sesión se analiza automáticamente en función de heurísticas, para un triaje sencillo.
- **Herramientas de desarrollo (DevTools):** Es como depurar en tu propio navegador. OpenReplay te proporciona el contexto completo (actividad de red, errores de JavaScript, acciones/estado del almacén y más de 40 métricas) para que puedas reproducir instantáneamente errores y entender problemas de rendimiento.
- **Asistencia (Assist):** Te ayuda a brindar soporte a tus usuarios al ver su pantalla en tiempo real y unirte instantáneamente a una llamada (WebRTC) con ellos, sin necesidad de software de uso compartido de pantalla de terceros.
- **Búsqueda universal (Omni-search):** Busca y filtra por casi cualquier acción/criterio de usuario, atributo de sesión o evento técnico, para que puedas responder a cualquier pregunta. No se requiere instrumentación.
- **Embudos (Funnels):** Para resaltar los problemas de mayor impacto que causan pérdida de conversiones e ingresos.
- **Controles de privacidad detallados:** Elige qué capturar, qué ocultar o qué ignorar para que los datos de usuario ni siquiera lleguen a tus servidores.
- **Orientado a complementos (Plugins oriented):** Llega más rápido a la causa raíz siguiendo el estado de la aplicación (Redux, VueX, MobX, NgRx, Pinia y Zustand) y registrando consultas GraphQL (Apollo, Relay) y solicitudes Fetch/Axios.
- **Integraciones:** Sincroniza tus registros del servidor con tus repeticiones de sesiones y observa lo que sucedió de principio a fin. OpenReplay es compatible con Sentry, Datadog, CloudWatch, Stackdriver, Elastic y más.
## Opciones de implementación
OpenReplay se puede implementar en cualquier lugar. Sigue nuestras guías paso a paso para implementarlo en los principales servicios de nube pública:
- [AWS](https://docs.openreplay.com/deployment/deploy-aws)
- [Google Cloud](https://docs.openreplay.com/deployment/deploy-gcp)
- [Azure](https://docs.openreplay.com/deployment/deploy-azure)
- [Digital Ocean](https://docs.openreplay.com/deployment/deploy-digitalocean)
- [Scaleway](https://docs.openreplay.com/deployment/deploy-scaleway)
- [OVHcloud](https://docs.openreplay.com/deployment/deploy-ovhcloud)
- [Kubernetes](https://docs.openreplay.com/deployment/deploy-kubernetes)
## OpenReplay Cloud
Para aquellos que desean usar OpenReplay como un servicio, [regístrate](https://app.openreplay.com/signup) para obtener una cuenta gratuita en nuestra oferta en la nube.
## Soporte de la comunidad
Consulta la [documentación oficial de OpenReplay](https://docs.openreplay.com/). Eso debería ayudarte a solucionar problemas comunes. Para obtener ayuda adicional, puedes contactarnos a través de uno de estos canales:
- [Slack](https://slack.openreplay.com) (Conéctate con nuestros ingenieros y la comunidad)
- [GitHub](https://github.com/openreplay/openreplay/issues) (Informes de errores y problemas)
- [Twitter](https://twitter.com/OpenReplayHQ) (Actualizaciones del producto, contenido excelente)
- [YouTube](https://www.youtube.com/channel/UCcnWlW-5wEuuPAwjTR1Ydxw) (Tutoriales, reuniones comunitarias anteriores)
- [Chat en el sitio web](https://openreplay.com) (Háblanos)
## Contribución
Siempre estamos buscando contribuciones para OpenReplay, ¡y nos alegra que lo estés considerando! ¿No estás seguro por dónde empezar? Busca problemas abiertos, preferiblemente aquellos marcados como "buenas primeras contribuciones".
Consulta nuestra [Guía de Contribución](CONTRIBUTING.md) para obtener más detalles.
Además, no dudes en unirte a nuestro [Slack](https://slack.openreplay.com) para hacer preguntas, discutir ideas o conectarte con nuestros colaboradores.
## Hoja de ruta
Consulta nuestra [hoja de ruta](https://www.notion.so/openreplay/Roadmap-889d2c3d968b4786ab9b281ab2394a94) y mantente atento a lo que viene a continuación. Eres libre de [enviar](https://github.com/openreplay/openreplay/issues/new) nuevas ideas y votar por funciones.
## Licencia
Este monorepo utiliza varias licencias. Consulta [LICENSE](/LICENSE) para obtener más detalles.

View file

@ -1,106 +0,0 @@
<p align="center">
<a href="/README.md">English</a>
&nbsp;|&nbsp;
<a href="/README_ESP.md">Español</a>
&nbsp;|&nbsp;
<a href="/README_RU.md">Русский</a>
&nbsp;|&nbsp;
<a href="/README_RU.md">العربية</a>
</p>
<p align="center">
<a href="https://openreplay.com/#gh-light-mode-only">
<img src="static/openreplay-git-banner-light.png" width="100%">
</a>
<a href="https://openreplay.com/#gh-dark-mode-only">
<img src="static/openreplay-git-banner-dark.png" width="100%">
</a>
</p>
<h3 align="center">Relecture de session pour développeurs</h3>
<p align="center">La relecture de session la plus avancée sur le marché pour des applications perfectionnées.</p>
<p align="center">
<a href="https://docs.openreplay.com/deployment/deploy-aws">
<img src="static/btn-deploy-aws.svg" height="40"/>
</a>
<a href="https://docs.openreplay.com/deployment/deploy-gcp">
<img src="static/btn-deploy-google-cloud.svg" height="40" />
</a>
<a href="https://docs.openreplay.com/deployment/deploy-azure">
<img src="static/btn-deploy-azure.svg" height="40" />
</a>
<a href="https://docs.openreplay.com/deployment/deploy-digitalocean">
<img src="static/btn-deploy-digital-ocean.svg" height="40" />
</a>
</p>
<p align="center">
<a href="https://github.com/openreplay/openreplay">
<img src="static/openreplay-git-hero.svg">
</a>
</p>
OpenReplay est une suite d'outils de relecture (appelée aussi "replay") de sessions que vous pouvez héberger vous-même, vous permettant de voir ce que les utilisateurs font sur une application web, vous aidant ainsi à résoudre différents types de problèmes plus rapidement.
- **Relecture de session.** OpenReplay rejoue ce que les utilisateurs font, mais pas seulement. Il vous montre également ce qui se passe en coulisse, comment votre site web ou votre application se comporte en capturant l'activité réseau, les journaux de console, les erreurs JS, les actions/états du store, les métriques de chargement des pages, l'utilisation du CPU/mémoire, et bien plus encore.
- **Faible empreinte**. Avec un traqueur d'environ 26 Ko (.br) qui envoie de manière asynchrone des données minimales, ce qui a un impact très limité sur les performances.
- **Auto-hébergé**. Plus de vérifications de conformité en matière de sécurité, plus de traitement des données des utilisateurs par des tiers. Tout ce qu'OpenReplay capture reste dans votre cloud pour un contrôle complet sur vos données.
- **Contrôles de confidentialité**. Fonctionnalités de sécurité détaillées pour la désinfection des données utilisateur.
- **Déploiement facile**. Avec le support des principaux fournisseurs de cloud public (AWS, GCP, Azure, DigitalOcean).
## Fonctionnalités
- **Relecture de session :** Vous permet de revivre l'expérience de vos utilisateurs, de voir où ils rencontrent des problèmes et comment cela affecte leur comportement. Chaque relecture de session est automatiquement analysée en se basant sur des heuristiques, pour un triage plus facile des problèmes en fonction de l'impact.
- **Outils de développement (DevTools) :** C'est comme déboguer dans votre propre navigateur. OpenReplay vous fournit le contexte complet (activité réseau, erreurs JS, actions/états du store et plus de 40 métriques) pour que vous puissiez instantanément reproduire les bugs et comprendre les problèmes de performance.
- **Assistance (Assist) :** Vous aide à soutenir vos utilisateurs en voyant leur écran en direct et en vous connectant instantanément avec eux via appel/vidéo (WebRTC), sans nécessiter de logiciel tiers de partage d'écran.
- **Recherche universelle (Omni-search) :** Recherchez et filtrez presque n'importe quelle action/critère utilisateur, attribut de session ou événement technique, afin de pouvoir répondre à n'importe quelle question. Aucune instrumentation requise.
- **Entonnoirs (Funnels) :** Pour mettre en évidence les problèmes les plus impactants entraînant une perte de conversion et de revenus.
- **Contrôles de confidentialité détaillés :** Choisissez ce que vous voulez capturer, ce que vous voulez obscurcir ou ignorer, de sorte que les données utilisateur n'atteignent même pas vos serveurs.
- **Orienté vers les plugins :** Corrigez plus rapidement les bogues en suivant l'état de l'application (Redux, VueX, MobX, NgRx, Pinia et Zustand) et en enregistrant les requêtes GraphQL (Apollo, Relay) ainsi que les requêtes Fetch/Axios.
- **Intégrations :** Synchronisez vos journaux backend avec vos relectures de sessions et voyez ce qui s'est passé du début à la fin. OpenReplay prend en charge Sentry, Datadog, CloudWatch, Stackdriver, Elastic et bien d'autres.
## Options de déploiement
OpenReplay peut être déployé n'importe où. Suivez nos guides détaillés pour le déployer sur les principaux clouds publics :
- [AWS](https://docs.openreplay.com/deployment/deploy-aws)
- [Google Cloud](https://docs.openreplay.com/deployment/deploy-gcp)
- [Azure](https://docs.openreplay.com/deployment/deploy-azure)
- [Digital Ocean](https://docs.openreplay.com/deployment/deploy-digitalocean)
- [Scaleway](https://docs.openreplay.com/deployment/deploy-scaleway)
- [OVHcloud](https://docs.openreplay.com/deployment/deploy-ovhcloud)
- [Kubernetes](https://docs.openreplay.com/deployment/deploy-kubernetes)
## OpenReplay Cloud
Pour ceux qui veulent simplement utiliser OpenReplay en tant que service, [inscrivez-vous](https://app.openreplay.com/signup) pour un compte gratuit sur notre offre cloud.
## Support de la communauté
Veuillez vous référer à la [documentation officielle d'OpenReplay](https://docs.openreplay.com/). Cela devrait vous aider à résoudre les problèmes courants. Pour toute aide ou question supplémentaire, vous pouvez nous contacter sur l'un des canaux suivants :
- [Slack](https://slack.openreplay.com) (Connectez-vous avec nos ingénieurs et notre communauté)
- [GitHub](https://github.com/openreplay/openreplay/issues) (Rapports de bogues et problèmes)
- [Twitter](https://twitter.com/OpenReplayHQ) (Mises à jour du produit, articles techniques et autres annonces)
- [YouTube](https://www.youtube.com/channel/UCcnWlW-5wEuuPAwjTR1Ydxw) (Tutoriels)
- [Chat sur le site Web](https://openreplay.com) (Nous contacter)
## Contribution
Nous sommes toujours à la recherche de contributions pour rendre OpenReplay meilleur. Vous ne savez pas par où commencer ? Recherchez dans notre "GitHub Issues" pour trouver des tickets ouverts, de préférence ceux marqués comme "bonnes premières contributions".
Consultez notre [Guide de contribution](CONTRIBUTING.md) pour plus de détails.
N'hésitez pas à rejoindre notre [Slack](https://slack.openreplay.com) pour poser des questions, discuter vos idées ou simplement pour vous connecter avec nos contributeurs.
## Feuille de route
Consultez notre [feuille de route](https://www.notion.so/openreplay/Roadmap-889d2c3d968b4786ab9b281ab2394a94) et gardez un œil sur ce qui arrive prochainement. Vous êtes libre de [proposer](https://github.com/openreplay/openreplay/issues/new) de nouvelles idées et de voter pour des fonctionnalités.
## Licence
Ce monorepo utilise plusieurs licences. Consultez [LICENSE](/LICENSE) pour plus de détails.

View file

@ -1,107 +0,0 @@
<p align="center">
<a href="/README_FR.md">Français</a>
&nbsp;|&nbsp;
<a href="/README_ESP.md">Español</a>
&nbsp;|&nbsp;
<a href="/README.md">English</a>
&nbsp;|&nbsp;
<a href="/README_RU.md">العربية</a>
</p>
<p align="center">
<a href="https://openreplay.com/#gh-light-mode-only">
<img src="static/openreplay-git-banner-light.png" width="100%">
</a>
<a href="https://openreplay.com/#gh-dark-mode-only">
<img src="static/openreplay-git-banner-dark.png" width="100%">
</a>
</p>
<h3 align="center">Реплей сессий для разработчиков</h3>
<p align="center">Самое продвинутое решение для воспроизведения сессий с открытым исходным кодом для создания восхитительных веб-приложений.</p>
<p align="center">
<a href="https://docs.openreplay.com/deployment/deploy-aws">
<img src="static/btn-deploy-aws.svg" height="40"/>
</a>
<a href="https://docs.openreplay.com/deployment/deploy-gcp">
<img src="static/btn-deploy-google-cloud.svg" height="40" />
</a>
<a href="https://docs.openreplay.com/deployment/deploy-azure">
<img src="static/btn-deploy-azure.svg" height="40" />
</a>
<a href="https://docs.openreplay.com/deployment/deploy-digitalocean">
<img src="static/btn-deploy-digital-ocean.svg" height="40" />
</a>
</p>
<p align="center">
<a href="https://github.com/openreplay/openreplay">
<img src="static/openreplay-git-hero.svg">
</a>
</p>
OpenReplay - это набор инструментов для воспроизведения пользовательских сессий, позволяющий увидеть действия пользователей в вашем веб-приложении; разместить его можно в своем облаке или на собственных серверах, что помогает быстрее решать проблемы.
- **Воспроизведение сессий.** OpenReplay не только воспроизводит действия пользователей, но и показывает, что происходит под капотом сессии, как ведет себя ваш сайт или приложение, фиксируя сетевую активность, логи консоли, JS-ошибки, действия/состояние стейт менеджеров, показатели скорости страницы, использование процессора/памяти и многое другое.
- **Компактность**. Размером всего в ~26 КБ (.br), трекер асинхронно отправляет минимальное количество данных, оказывая очень незначительное влияние на производительность вашего приложения.
- **Self-hosted**. Больше никаких проверок на соответствие требованиям безопасности или обработки данных ваших пользователей третьими сторонами. Все, что фиксирует OpenReplay, остается в вашем облаке, что обеспечивает полный контроль над вашими данными.
- **Контроль над приватностью**. Тонкие настройки приватности позволяют записывать только действительно необходимые данные.
- **Легкая установка**. Мы поддерживаем всех крупных поставщиков облачных услуг (AWS, GCP, Azure, DigitalOcean).
## Особенности
- **Session Replay:** Позволяет повторить опыт пользователей, увидеть, где они испытывают трудности и как это влияет на конверсию. Каждый реплей автоматически анализируется на наличие ошибок и аномалий, что значительно облегчает сортировку и поиск проблемных сессий.
- **DevTools:** Прямо как отладка в вашем собственном браузере. OpenReplay предоставляет вам полный контекст (сетевая активность, JS ошибки, действия/состояние стейт менеджеров и более 40 метрик), чтобы вы могли мгновенно воспроизвести ошибки и найти проблемы с производительностью.
- **Assist:** Позволяет вам помочь вашим пользователям, наблюдая их экран в настоящем времени и мгновенно переходя на звонок (WebRTC) с ними, не требуя стороннего программного обеспечения для совместного просмотра экрана.
- **Omni-search:** Поиск и фильтрация практически любого действия пользователя/критерия, атрибута сессии или технического события, чтобы вы могли ответить на любой вопрос.
- **Воронки:** Для выявления проблем, сильнее всего влияющих на конверсию и доход.
- **Тонкая настройка приватности:** Выбирайте, что записывать, а что игнорировать, чтобы данные пользователя даже не отправлялись на ваши сервера.
- **Ориентирован на плагины:** С помощью плагинов можно отслеживать состояние приложения (Redux, VueX, MobX, NgRx, Pinia, и Zustand), регистрировать запросы GraphQL (Apollo, Relay) и многое другое.
- **Интеграции:** OpenReplay поддерживает интеграции с Sentry, Datadog, CloudWatch, Stackdriver, Elastic и другими провайдерами, позволяя получать еще больше информации о пользовательской сессии.
## Варианты развертывания
OpenReplay можно развернуть где угодно. Следуйте нашим пошаговым руководствам по развертыванию на основных публичных облаках:
- [AWS](https://docs.openreplay.com/deployment/deploy-aws)
- [Google Cloud](https://docs.openreplay.com/deployment/deploy-gcp)
- [Azure](https://docs.openreplay.com/deployment/deploy-azure)
- [Digital Ocean](https://docs.openreplay.com/deployment/deploy-digitalocean)
- [Scaleway](https://docs.openreplay.com/deployment/deploy-scaleway)
- [OVHcloud](https://docs.openreplay.com/deployment/deploy-ovhcloud)
- [Kubernetes](https://docs.openreplay.com/deployment/deploy-kubernetes)
## OpenReplay Cloud
Для тех, кто просто хочет использовать OpenReplay как сервис, [зарегистрируйте](https://app.openreplay.com/signup) бесплатную учетную запись в нашем приложении.
## Поддержка сообщества
В случае возникновения проблем, вы можете обратиться к [официальной документации OpenReplay](https://docs.openreplay.com/). Это поможет вам решить наиболее распространенные проблемы.
Для дополнительной помощи, вы можете связаться с нами через один из этих каналов:
- [Slack](https://slack.openreplay.com) (Свяжитесь с нашими инженерами и сообществом)
- [GitHub](https://github.com/openreplay/openreplay/issues) (Отчеты о багах и проблемах)
- [Twitter](https://twitter.com/OpenReplayHQ) (Обновления продукта)
- [YouTube](https://www.youtube.com/channel/UCcnWlW-5wEuuPAwjTR1Ydxw) (Учебные пособия, прошлые комьюнити-звонки)
- [Чат на веб-сайте](https://openreplay.com) (Общайтесь с нами)
## Содействие
Мы всегда рады любой помощи в создании OpenReplay, и готовы услышать ваши идеи. Не уверены, с чего начать? Ищите открытые задачи, особенно те, которые отмечены как "good first issue".
Смотрите наше [руководство по содействию](CONTRIBUTING.md) для более подробной информации.
Также не стесняйтесь присоединиться к нашему [Slack](https://slack.openreplay.com), чтобы задавать вопросы, обсуждать идеи или связываться с нашими участниками.
## План развития
Ознакомьтесь с нашим [планом развития](https://www.notion.so/openreplay/Roadmap-889d2c3d968b4786ab9b281ab2394a94) и следите за тем, что будет далее. Вы можете свободно [предложить](https://github.com/openreplay/openreplay/issues/new) новые идеи и голосовать за функции.
## Лицензия
В этом монорепозитории используются разные лицензии. См. [LICENSE](/LICENSE) для получения более подробной информации.

47
api/.env.default Normal file
View file

@ -0,0 +1,47 @@
EMAIL_FROM=OpenReplay<do-not-reply@openreplay.com>
EMAIL_HOST=
EMAIL_PASSWORD=
EMAIL_PORT=587
EMAIL_SSL_CERT=
EMAIL_SSL_KEY=
EMAIL_USER=
EMAIL_USE_SSL=false
EMAIL_USE_TLS=true
S3_HOST=
S3_KEY=
S3_SECRET=
SITE_URL=
alert_ntf=http://127.0.0.1:8000/async/alerts/notifications/%s
announcement_url=
assign_link=http://127.0.0.1:8000/async/email_assignment
async_Token=
captcha_key=
captcha_server=
change_password_link=/reset-password?invitation=%s&&pass=%s
email_basic=http://127.0.0.1:8000/async/basic/%s
email_signup=http://127.0.0.1:8000/async/email_signup/%s
invitation_link=/api/users/invitation?token=%s
isEE=false
isFOS=true
js_cache_bucket=sessions-assets
jwt_algorithm=HS512
jwt_exp_delta_seconds=2592000
jwt_issuer=openreplay-default-foss
jwt_secret="SET A RANDOM STRING HERE"
peersList=http://utilities-openreplay.app.svc.cluster.local:9001/assist/%s/sockets-list
peers=http://utilities-openreplay.app.svc.cluster.local:9001/assist/%s/sockets-live
pg_dbname=postgres
pg_host=postgresql.db.svc.cluster.local
pg_password=asayerPostgres
pg_port=5432
pg_user=postgres
pg_timeout=30
pg_minconn=45
put_S3_TTL=20
sentryURL=
sessions_bucket=mobs
sessions_region=us-east-1
sourcemaps_bucket=sourcemaps
sourcemaps_reader=http://utilities-openreplay.app.svc.cluster.local:9000/sourcemaps
stage=default-foss
version_number=1.4.0
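For illustration only, here is a minimal sketch of how settings like the ones above are typically read with python-decouple (the library used by the API code later in this diff); the keys match the file, while the defaults are placeholders rather than the project's real fallbacks:

```python
# Illustrative sketch: read a few of the keys defined in api/.env.default
# with python-decouple. Defaults below are placeholders, not project values.
from decouple import config

PG_HOST = config("pg_host", default="localhost")
PG_PORT = config("pg_port", cast=int, default=5432)
PG_MINCONN = config("pg_minconn", cast=int, default=1)
JWT_ALGORITHM = config("jwt_algorithm", default="HS512")
SESSIONS_BUCKET = config("sessions_bucket", default="mobs")

print(PG_HOST, PG_PORT, PG_MINCONN, JWT_ALGORITHM, SESSIONS_BUCKET)
```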

8
api/.gitignore vendored
View file

@ -83,7 +83,7 @@ wheels/
.installed.cfg
*.egg
MANIFEST
Pipfile.lock
Pipfile
# PyInstaller
# Usually these files are written by a python script from a template
@ -143,7 +143,7 @@ celerybeat-schedule
# Environments
.env
.venv/*
.venv
env/
venv/
ENV/
@ -174,6 +174,4 @@ logs*.txt
SUBNETS.json
./chalicelib/.configs
README/*
.local
/.dev/
README/*

View file

@ -1 +0,0 @@
.venv

View file

@ -1,3 +0,0 @@
# Accept the risk until the recent python setuptools fix
# becomes available in distros.
CVE-2023-5363 exp:2023-12-31

View file

@ -1,31 +1,17 @@
FROM python:3.12-alpine AS builder
LABEL maintainer="Rajesh Rajendran<rjshrjndrn@gmail.com>"
LABEL maintainer="KRAIEM Taha Yassine<tahayk2@gmail.com>"
RUN apk add --no-cache build-base
WORKDIR /work
COPY requirements.txt ./requirements.txt
RUN pip install --no-cache-dir --upgrade uv && \
export UV_SYSTEM_PYTHON=true && \
uv pip install --no-cache-dir --upgrade pip setuptools wheel && \
uv pip install --no-cache-dir --upgrade -r requirements.txt
FROM python:3.12-alpine
ARG GIT_SHA
ARG envarg
# Add Tini
# Startup daemon
ENV SOURCE_MAP_VERSION=0.7.4 \
APP_NAME=chalice \
LISTEN_PORT=8000 \
PRIVATE_ENDPOINTS=false \
ENTERPRISE_BUILD=${envarg} \
GIT_SHA=$GIT_SHA
COPY --from=builder /usr/local/lib/python3.12/site-packages /usr/local/lib/python3.12/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin
FROM python:3.9.7-slim
LABEL Maintainer="Rajesh Rajendran<rjshrjndrn@gmail.com>"
LABEL Maintainer="KRAIEM Taha Yassine<tahayk2@gmail.com>"
WORKDIR /work
COPY . .
RUN apk add --no-cache tini && mv env.default .env
RUN pip install -r requirements.txt
RUN mv .env.default .env
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["./entrypoint.sh"]
# Add Tini
# Startup daemon
ENV TINI_VERSION v0.19.0
ARG envarg
ENV ENTERPRISE_BUILD ${envarg}
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]
CMD ./entrypoint.sh

18
api/Dockerfile.alerts Normal file
View file

@ -0,0 +1,18 @@
FROM python:3.9.7-slim
LABEL Maintainer="Rajesh Rajendran<rjshrjndrn@gmail.com>"
LABEL Maintainer="KRAIEM Taha Yassine<tahayk2@gmail.com>"
WORKDIR /work
COPY . .
RUN pip install -r requirements.txt
RUN mv .env.default .env && mv app_alerts.py app.py
ENV pg_minconn 2
# Add Tini
# Startup daemon
ENV TINI_VERSION v0.19.0
ARG envarg
ENV ENTERPRISE_BUILD ${envarg}
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]
CMD ./entrypoint.sh

27
api/Dockerfile.bundle Normal file
View file

@ -0,0 +1,27 @@
FROM python:3.6-slim
LABEL Maintainer="Rajesh Rajendran<rjshrjndrn@gmail.com>"
WORKDIR /work
COPY . .
COPY ../utilities ./utilities
RUN rm entrypoint.sh && rm .chalice/config.json
RUN mv entrypoint.bundle.sh entrypoint.sh && mv .chalice/config.bundle.json .chalice/config.json
RUN pip install -r requirements.txt -t ./vendor --upgrade
RUN pip install chalice==1.22.2
# Installing Nodejs
RUN apt update && apt install -y curl && \
curl -fsSL https://deb.nodesource.com/setup_12.x | bash - && \
apt install -y nodejs && \
apt remove --purge -y curl && \
rm -rf /var/lib/apt/lists/* && \
cd utilities && \
npm install
# Add Tini
# Startup daemon
ENV TINI_VERSION v0.19.0
ARG envarg
ENV ENTERPRISE_BUILD ${envarg}
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]
CMD ./entrypoint.sh

View file

@ -1,11 +0,0 @@
# ignore .git and .cache folders
.git
.cache
**/build.sh
**/build_*.sh
**/*deploy.sh
Dockerfile*
app_alerts.py
requirements-alerts.txt
entrypoint_alerts.sh

View file

@ -1,29 +0,0 @@
FROM python:3.12-alpine
LABEL Maintainer="Rajesh Rajendran<rjshrjndrn@gmail.com>"
LABEL Maintainer="KRAIEM Taha Yassine<tahayk2@gmail.com>"
ARG GIT_SHA
LABEL GIT_SHA=$GIT_SHA
RUN apk add --no-cache build-base tini
ARG envarg
ENV APP_NAME=alerts \
PG_MINCONN=1 \
PG_MAXCONN=10 \
LISTEN_PORT=8000 \
PRIVATE_ENDPOINTS=true \
GIT_SHA=$GIT_SHA \
ENTERPRISE_BUILD=${envarg}
WORKDIR /work
COPY requirements-alerts.txt ./requirements.txt
RUN pip install --no-cache-dir --upgrade uv
RUN uv pip install --no-cache-dir --upgrade pip setuptools wheel --system
RUN uv pip install --no-cache-dir --upgrade -r requirements.txt --system
COPY . .
RUN mv env.default .env && mv app_alerts.py app.py && mv entrypoint_alerts.sh entrypoint.sh
RUN adduser -u 1001 openreplay -D
USER 1001
ENTRYPOINT ["/sbin/tini", "--"]
CMD ./entrypoint.sh

View file

@ -1,11 +0,0 @@
# ignore .git and .cache folders
.git
.cache
**/build.sh
**/build_*.sh
**/*deploy.sh
Dockerfile*
app.py
entrypoint.sh
requirements.txt

View file

@ -1,43 +0,0 @@
#### autogenerated api frontend
The API can autogenerate a frontend that documents its interface and lets you experiment with it in a limited way. Make sure the current `.env` contains the following variables:
```
docs_url=/docs
root_path=''
```
If the `.env` in use is based on `env.default`, this is already the case. Start or restart the HTTP server, then go to
`https://127.0.0.1:8000/docs`. The documentation there is autogenerated
from the pydantic schemas, the fastapi routes, and the docstrings :wink:.
Happy experimenting, and happy documenting!
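For illustration, here is a minimal sketch (assumed wiring, not necessarily the project's exact code) of how `docs_url` and `root_path` typically reach FastAPI through python-decouple; the defaults shown are assumptions:

```python
# Minimal sketch: the autogenerated docs are exposed only when docs_url is set.
# Variable names mirror the .env keys above; defaults are illustrative.
from decouple import config
from fastapi import FastAPI

app = FastAPI(
    root_path=config("root_path", default=""),   # e.g. "" locally, "/api" behind a proxy
    docs_url=config("docs_url", default=None),   # e.g. "/docs" to enable the Swagger UI
)


@app.get("/health")
async def health():
    return {"status": "ok"}
```

Run it with any ASGI server (for example `uvicorn module:app`) and open the path configured in `docs_url`.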
#### psycopg3 API
I keep mixing up the psycopg v2 and v3 APIs.
For the record, psycopg3's expected async API looks like the
following pseudo code:
```python
async with app.state.postgresql.connection() as cnx:
async with cnx.transaction():
row = await cnx.execute("SELECT EXISTS(SELECT 1 FROM public.tenants)")
row = await row.fetchone()
return row["exists"]
```
Mind the following (a fuller sketch follows this list):
- `app.state.postgresql` is the postgresql connection pool.
- Wrap explicit transactions with `async with cnx.transaction():
  foobar()`.
- Most of the time the transaction object itself is not used.
- Execute the await operations against `cnx`.
- `await cnx.execute` returns a cursor object.
- Make the `await cursor.fetchqux...` calls against the object returned by
  the call to `execute`.
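A slightly fuller sketch of the same pattern, assuming psycopg 3 with psycopg_pool; the pool sizes and DSN values below are made up, and the `public.tenants` query comes from the pseudo code above:

```python
# Sketch: open an async pool, borrow a connection, run one query inside an
# explicit transaction, and fetch the result as a dict row.
import asyncio

import psycopg_pool
from psycopg.rows import dict_row


async def main():
    pool = psycopg_pool.AsyncConnectionPool(
        "dbname=orpy user=orpy password=orpy host=localhost port=5432",
        min_size=1,
        max_size=5,
        kwargs={"row_factory": dict_row},  # rows come back as dicts
        open=False,
    )
    async with pool:  # opens the pool, closes it on exit
        async with pool.connection() as cnx:
            async with cnx.transaction():
                cursor = await cnx.execute(
                    "SELECT EXISTS(SELECT 1 FROM public.tenants) AS has_tenants"
                )
                row = await cursor.fetchone()
                print(row["has_tenants"])


if __name__ == "__main__":
    asyncio.run(main())
```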

View file

@ -1,29 +0,0 @@
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
urllib3 = "==2.3.0"
requests = "==2.32.3"
boto3 = "==1.36.12"
pyjwt = "==2.10.1"
psycopg2-binary = "==2.9.10"
psycopg = {extras = ["pool", "binary"], version = "==3.2.4"}
clickhouse-driver = {extras = ["lz4"], version = "==0.2.9"}
clickhouse-connect = "==0.8.15"
elasticsearch = "==8.17.1"
jira = "==3.8.0"
cachetools = "==5.5.1"
fastapi = "==0.115.8"
uvicorn = {extras = ["standard"], version = "==0.34.0"}
python-decouple = "==3.8"
pydantic = {extras = ["email"], version = "==2.10.6"}
apscheduler = "==3.11.0"
redis = "==5.2.1"
[dev-packages]
[requires]
python_version = "3.12"
python_full_version = "3.12.8"

View file

@ -1,99 +1,38 @@
import logging
import time
from contextlib import asynccontextmanager
import psycopg_pool
from apscheduler.schedulers.asyncio import AsyncIOScheduler
from decouple import config
from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.middleware.gzip import GZipMiddleware
from psycopg import AsyncConnection
from psycopg.rows import dict_row
from starlette.responses import StreamingResponse
from chalicelib.utils import helper
from chalicelib.utils import pg_client, ch_client
from crons import core_crons, core_dynamic_crons
from chalicelib.utils import pg_client
from routers import core, core_dynamic
from routers.subs import insights, metrics, v1_api, health, usability_tests, spot, product_anaytics
from routers.app import v1_api
from routers.crons import core_crons
from routers.crons import core_dynamic_crons
from routers.subs import dashboard
loglevel = config("LOGLEVEL", default=logging.WARNING)
print(f">Loglevel set to: {loglevel}")
logging.basicConfig(level=loglevel)
class ORPYAsyncConnection(AsyncConnection):
def __init__(self, *args, **kwargs):
super().__init__(*args, row_factory=dict_row, **kwargs)
@asynccontextmanager
async def lifespan(app: FastAPI):
# Startup
logging.info(">>>>> starting up <<<<<")
ap_logger = logging.getLogger('apscheduler')
ap_logger.setLevel(loglevel)
app.schedule = AsyncIOScheduler()
await pg_client.init()
await ch_client.init()
app.schedule.start()
for job in core_crons.cron_jobs + core_dynamic_crons.cron_jobs:
app.schedule.add_job(id=job["func"].__name__, **job)
ap_logger.info(">Scheduled jobs:")
for job in app.schedule.get_jobs():
ap_logger.info({"Name": str(job.id), "Run Frequency": str(job.trigger), "Next Run": str(job.next_run_time)})
database = {
"host": config("pg_host", default="localhost"),
"dbname": config("pg_dbname", default="orpy"),
"user": config("pg_user", default="orpy"),
"password": config("pg_password", default="orpy"),
"port": config("pg_port", cast=int, default=5432),
"application_name": "AIO" + config("APP_NAME", default="PY"),
}
database = psycopg_pool.AsyncConnectionPool(kwargs=database, connection_class=ORPYAsyncConnection,
min_size=config("PG_AIO_MINCONN", cast=int, default=1),
max_size=config("PG_AIO_MAXCONN", cast=int, default=5), )
app.state.postgresql = database
# App listening
yield
# Shutdown
await database.close()
logging.info(">>>>> shutting down <<<<<")
app.schedule.shutdown(wait=False)
await pg_client.terminate()
app = FastAPI(root_path=config("root_path", default="/api"), docs_url=config("docs_url", default=""),
redoc_url=config("redoc_url", default=""), lifespan=lifespan)
app.add_middleware(GZipMiddleware, minimum_size=1000)
app = FastAPI()
@app.middleware('http')
async def or_middleware(request: Request, call_next):
if helper.TRACK_TIME:
now = time.time()
global OR_SESSION_TOKEN
OR_SESSION_TOKEN = request.headers.get('vnd.openreplay.com.sid', request.headers.get('vnd.asayer.io.sid'))
try:
if helper.TRACK_TIME:
import time
now = int(time.time() * 1000)
response: StreamingResponse = await call_next(request)
except:
logging.error(f"{request.method}: {request.url.path} FAILED!")
raise
if response.status_code // 100 != 2:
logging.warning(f"{request.method}:{request.url.path} {response.status_code}!")
if helper.TRACK_TIME:
now = time.time() - now
if now > 2:
now = round(now, 2)
logging.warning(f"Execution time: {now} s for {request.method}: {request.url.path}")
response.headers["x-robots-tag"] = 'noindex, nofollow'
if helper.TRACK_TIME:
print(f"Execution time: {int(time.time() * 1000) - now} ms")
except Exception as e:
pg_client.close()
raise e
pg_client.close()
return response
@ -114,21 +53,18 @@ app.include_router(core.app_apikey)
app.include_router(core_dynamic.public_app)
app.include_router(core_dynamic.app)
app.include_router(core_dynamic.app_apikey)
app.include_router(metrics.app)
app.include_router(insights.app)
app.include_router(dashboard.app)
# app.include_router(insights.app)
app.include_router(v1_api.app_apikey)
app.include_router(health.public_app)
app.include_router(health.app)
app.include_router(health.app_apikey)
app.include_router(usability_tests.public_app)
app.include_router(usability_tests.app)
app.include_router(usability_tests.app_apikey)
Schedule = AsyncIOScheduler()
Schedule.start()
app.include_router(spot.public_app)
app.include_router(spot.app)
app.include_router(spot.app_apikey)
for job in core_crons.cron_jobs + core_dynamic_crons.cron_jobs:
Schedule.add_job(id=job["func"].__name__, **job)
app.include_router(product_anaytics.public_app)
app.include_router(product_anaytics.app)
app.include_router(product_anaytics.app_apikey)
for job in Schedule.get_jobs():
print({"Name": str(job.id), "Run Frequency": str(job.trigger), "Next Run": str(job.next_run_time)})
logging.basicConfig(level=config("LOGLEVEL", default=logging.INFO))
logging.getLogger('apscheduler').setLevel(config("LOGLEVEL", default=logging.INFO))

View file

@ -1,48 +1,13 @@
import logging
from contextlib import asynccontextmanager
from apscheduler.schedulers.asyncio import AsyncIOScheduler
from decouple import config
from fastapi import FastAPI
from chalicelib.core.alerts import alerts_processor
from chalicelib.utils import pg_client
from chalicelib.core import alerts_processor
@asynccontextmanager
async def lifespan(app: FastAPI):
# Startup
ap_logger.info(">>>>> starting up <<<<<")
await pg_client.init()
app.schedule.start()
app.schedule.add_job(id="alerts_processor", **{"func": alerts_processor.process, "trigger": "interval",
"minutes": config("ALERTS_INTERVAL", cast=int, default=5),
"misfire_grace_time": 20})
ap_logger.info(">Scheduled jobs:")
for job in app.schedule.get_jobs():
ap_logger.info({"Name": str(job.id), "Run Frequency": str(job.trigger), "Next Run": str(job.next_run_time)})
# App listening
yield
# Shutdown
ap_logger.info(">>>>> shutting down <<<<<")
app.schedule.shutdown(wait=False)
await pg_client.terminate()
loglevel = config("LOGLEVEL", default=logging.INFO)
print(f">Loglevel set to: {loglevel}")
logging.basicConfig(level=loglevel)
ap_logger = logging.getLogger('apscheduler')
ap_logger.setLevel(loglevel)
app = FastAPI(root_path=config("root_path", default="/alerts"), docs_url=config("docs_url", default=""),
redoc_url=config("redoc_url", default=""), lifespan=lifespan)
app.schedule = AsyncIOScheduler()
ap_logger.info("============= ALERTS =============")
app = FastAPI()
print("============= ALERTS =============")
@app.get("/")
@ -50,16 +15,13 @@ async def root():
return {"status": "Running"}
@app.get("/health")
async def get_health_status():
return {"data": {
"health": True,
"details": {"version": config("version_number", default="unknown")}
}}
app.schedule = AsyncIOScheduler()
app.schedule.start()
app.schedule.add_job(id="alerts_processor", **{"func": alerts_processor.process, "trigger": "interval",
"minutes": config("ALERTS_INTERVAL", cast=int, default=5),
"misfire_grace_time": 20})
for job in app.schedule.get_jobs():
print({"Name": str(job.id), "Run Frequency": str(job.trigger), "Next Run": str(job.next_run_time)})
if config("LOCAL_DEV", default=False, cast=bool):
@app.get('/trigger', tags=["private"])
async def trigger_main_cron():
ap_logger.info("Triggering main cron")
alerts_processor.process()
logging.basicConfig(level=config("LOGLEVEL", default=logging.INFO))
logging.getLogger('apscheduler').setLevel(config("LOGLEVEL", default=logging.INFO))

View file

@ -1,4 +1,3 @@
import logging
from typing import Optional
from fastapi import Request
@ -9,8 +8,6 @@ from starlette.exceptions import HTTPException
from chalicelib.core import authorizers
from schemas import CurrentAPIContext
logger = logging.getLogger(__name__)
class APIKeyAuth(APIKeyHeader):
def __init__(self, auto_error: bool = True):
@ -25,7 +22,7 @@ class APIKeyAuth(APIKeyHeader):
detail="Invalid API Key",
)
r["authorizer_identity"] = "api_key"
logger.debug(r)
print(r)
request.state.authorizer_identity = "api_key"
request.state.currentContext = CurrentAPIContext(tenantId=r["tenantId"])
request.state.currentContext = CurrentAPIContext(tenant_id=r["tenantId"])
return request.state.currentContext

View file

@ -1,153 +1,39 @@
import datetime
import logging
from typing import Optional
from decouple import config
from fastapi import Request
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
from starlette import status
from starlette.exceptions import HTTPException
import schemas
from chalicelib.core import authorizers, users, spot
logger = logging.getLogger(__name__)
def _get_current_auth_context(request: Request, jwt_payload: dict) -> schemas.CurrentContext:
user = users.get(user_id=jwt_payload.get("userId", -1), tenant_id=jwt_payload.get("tenantId", -1))
if user is None:
logger.warning("User not found.")
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="User not found.")
request.state.authorizer_identity = "jwt"
request.state.currentContext = schemas.CurrentContext(tenantId=jwt_payload.get("tenantId", -1),
userId=jwt_payload.get("userId", -1),
email=user["email"],
role=user["role"])
return request.state.currentContext
from chalicelib.core import authorizers, users
from schemas import CurrentContext
class JWTAuth(HTTPBearer):
def __init__(self, auto_error: bool = True):
super(JWTAuth, self).__init__(auto_error=auto_error)
async def __call__(self, request: Request) -> Optional[schemas.CurrentContext]:
if request.url.path in ["/refresh", "/api/refresh"]:
return await self.__process_refresh_call(request)
elif request.url.path in ["/spot/refresh", "/api/spot/refresh"]:
return await self.__process_spot_refresh_call(request)
else:
credentials: HTTPAuthorizationCredentials = await super(JWTAuth, self).__call__(request)
if credentials:
if not credentials.scheme == "Bearer":
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
detail="Invalid authentication scheme.")
jwt_payload = authorizers.jwt_authorizer(scheme=credentials.scheme, token=credentials.credentials)
auth_exists = jwt_payload is not None and users.auth_exists(user_id=jwt_payload.get("userId", -1),
jwt_iat=jwt_payload.get("iat", 100))
if jwt_payload is None \
or jwt_payload.get("iat") is None or jwt_payload.get("aud") is None \
or not auth_exists:
if jwt_payload is not None:
logger.debug(jwt_payload)
if jwt_payload.get("iat") is None:
logger.debug("iat is None")
if jwt_payload.get("aud") is None:
logger.debug("aud is None")
if not auth_exists:
logger.warning("not users.auth_exists")
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="Invalid token or expired token.")
if jwt_payload.get("aud", "").startswith("spot") and not request.url.path.startswith("/spot"):
# Allow access to spot endpoints only
raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED,
detail="Unauthorized access (spot).")
elif jwt_payload.get("aud", "").startswith("front") and request.url.path.startswith("/spot"):
raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED,
detail="Unauthorized access endpoint reserved for Spot only.")
return _get_current_auth_context(request=request, jwt_payload=jwt_payload)
logger.warning("Invalid authorization code.")
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Invalid authorization code.")
async def __process_refresh_call(self, request: Request) -> schemas.CurrentContext:
if "refreshToken" not in request.cookies:
logger.warning("Missing refreshToken cookie.")
jwt_payload = None
else:
jwt_payload = authorizers.jwt_refresh_authorizer(scheme="Bearer", token=request.cookies["refreshToken"])
if jwt_payload is None or jwt_payload.get("jti") is None:
logger.warning("Null refreshToken's payload, or null JTI.")
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN,
detail="Invalid refresh-token or expired refresh-token.")
auth_exists = users.refresh_auth_exists(user_id=jwt_payload.get("userId", -1),
jwt_jti=jwt_payload["jti"])
if not auth_exists:
logger.warning("refreshToken's user not found.")
logger.warning(jwt_payload)
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN,
detail="Invalid refresh-token or expired refresh-token.")
async def __call__(self, request: Request) -> Optional[CurrentContext]:
credentials: HTTPAuthorizationCredentials = await super(JWTAuth, self).__call__(request)
if credentials:
if not credentials.scheme == "Bearer":
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
detail="Invalid authentication scheme.")
old_jwt_payload = authorizers.jwt_authorizer(scheme=credentials.scheme, token=credentials.credentials,
leeway=datetime.timedelta(
days=config("JWT_LEEWAY_DAYS", cast=int, default=3)
))
if old_jwt_payload is None \
or old_jwt_payload.get("userId") is None \
or old_jwt_payload.get("userId") != jwt_payload.get("userId"):
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Invalid authentication scheme.")
jwt_payload = authorizers.jwt_authorizer(credentials.scheme + " " + credentials.credentials)
if jwt_payload is None \
or jwt_payload.get("iat") is None or jwt_payload.get("aud") is None \
or not users.auth_exists(user_id=jwt_payload["userId"], tenant_id=jwt_payload["tenantId"],
jwt_iat=jwt_payload["iat"], jwt_aud=jwt_payload["aud"]):
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="Invalid token or expired token.")
user = users.get(user_id=jwt_payload["userId"], tenant_id=jwt_payload["tenantId"])
if user is None:
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="User not found.")
jwt_payload["authorizer_identity"] = "jwt"
print(jwt_payload)
request.state.authorizer_identity = "jwt"
request.state.currentContext = CurrentContext(tenant_id=jwt_payload["tenantId"],
user_id=jwt_payload["userId"],
email=user["email"])
return request.state.currentContext
return _get_current_auth_context(request=request, jwt_payload=jwt_payload)
logger.warning("Invalid authorization code (refresh logic).")
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Invalid authorization code for refresh.")
async def __process_spot_refresh_call(self, request: Request) -> schemas.CurrentContext:
if "spotRefreshToken" not in request.cookies:
logger.warning("Missing soptRefreshToken cookie.")
jwt_payload = None
else:
jwt_payload = authorizers.jwt_refresh_authorizer(scheme="Bearer", token=request.cookies["spotRefreshToken"])
if jwt_payload is None or jwt_payload.get("jti") is None:
logger.warning("Null spotRefreshToken's payload, or null JTI.")
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN,
detail="Invalid spotRefreshToken or expired refresh-token.")
auth_exists = spot.refresh_auth_exists(user_id=jwt_payload.get("userId", -1),
jwt_jti=jwt_payload["jti"])
if not auth_exists:
logger.warning("spotRefreshToken's user not found.")
logger.warning(jwt_payload)
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN,
detail="Invalid spotRefreshToken or expired refresh-token.")
credentials: HTTPAuthorizationCredentials = await super(JWTAuth, self).__call__(request)
if credentials:
if not credentials.scheme == "Bearer":
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
detail="Invalid spot-authentication scheme.")
old_jwt_payload = authorizers.jwt_authorizer(scheme=credentials.scheme, token=credentials.credentials,
leeway=datetime.timedelta(
days=config("JWT_LEEWAY_DAYS", cast=int, default=3)
))
if old_jwt_payload is None \
or old_jwt_payload.get("userId") is None \
or old_jwt_payload.get("userId") != jwt_payload.get("userId"):
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN,
detail="Invalid spot-token or expired token.")
return _get_current_auth_context(request=request, jwt_payload=jwt_payload)
logger.warning("Invalid authorization code (spot-refresh logic).")
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
detail="Invalid authorization code for spot-refresh.")
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Invalid authorization code.")

View file

@ -1,38 +0,0 @@
import logging
from fastapi import Request
from starlette import status
from starlette.exceptions import HTTPException
import schemas
from chalicelib.core import projects
from or_dependencies import OR_context
logger = logging.getLogger(__name__)
class ProjectAuthorizer:
def __init__(self, project_identifier):
self.project_identifier: str = project_identifier
async def __call__(self, request: Request) -> None:
if len(request.path_params.keys()) == 0 or request.path_params.get(self.project_identifier) is None:
return
current_user: schemas.CurrentContext = await OR_context(request)
value = request.path_params[self.project_identifier]
current_project = None
if self.project_identifier == "projectId" \
and (isinstance(value, int) or isinstance(value, str) and value.isnumeric()):
current_project = projects.get_project(project_id=value, tenant_id=current_user.tenant_id)
elif self.project_identifier == "projectKey":
current_project = projects.get_by_project_key(project_key=value)
if current_project is None:
logger.debug(f"unauthorized project {self.project_identifier}:{value}")
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="project not found.")
else:
current_project = schemas.ProjectContext(projectId=current_project["projectId"],
projectKey=current_project["projectKey"],
platform=current_project["platform"],
name=current_project["name"])
request.state.currentContext.project = current_project

View file

@ -7,60 +7,17 @@
# Usage: IMAGE_TAG=latest DOCKER_REPO=myDockerHubID bash build.sh <ee>
# Helper function
exit_err() {
err_code=$1
if [[ $err_code != 0 ]]; then
exit $err_code
fi
}
source ../scripts/lib/_docker.sh
ARCH=${ARCH:-'amd64'}
environment=$1
git_sha=$(git rev-parse --short HEAD)
image_tag=${IMAGE_TAG:-git_sha}
git_sha1=${IMAGE_TAG:-$(git rev-parse HEAD)}
envarg="default-foss"
chart="chalice"
check_prereq() {
which docker || {
echo "Docker not installed, please install docker."
exit 1
exit=1
}
return
[[ exit -eq 1 ]] && exit 1
}
[[ $1 == ee ]] && ee=true
[[ $PATCH -eq 1 ]] && {
image_tag="$(grep -ER ^.ppVersion ../scripts/helmcharts/openreplay/charts/$chart | xargs | awk '{print $2}' | awk -F. -v OFS=. '{$NF += 1 ; print}')"
[[ $ee == "true" ]] && {
image_tag="${image_tag}-ee"
}
}
update_helm_release() {
[[ $ee == "true" ]] && return
HELM_TAG="$(grep -iER ^version ../scripts/helmcharts/openreplay/charts/$chart | awk '{print $2}' | awk -F. -v OFS=. '{$NF += 1 ; print}')"
# Update the chart version
sed -i "s#^version.*#version: $HELM_TAG# g" ../scripts/helmcharts/openreplay/charts/$chart/Chart.yaml
# Update image tags
sed -i "s#ppVersion.*#ppVersion: \"$image_tag\"#g" ../scripts/helmcharts/openreplay/charts/$chart/Chart.yaml
# Commit the changes
git add ../scripts/helmcharts/openreplay/charts/$chart/Chart.yaml
git commit -m "chore(helm): Updating $chart image release"
}
function build_api() {
destination="_api"
[[ $1 == "ee" ]] && {
destination="_api_ee"
}
[[ -d ../${destination} ]] && {
echo "Removing previous build cache"
rm -rf ../${destination}
}
cp -R ../api ../${destination}
cd ../${destination} || exit_err 100
function build_api(){
tag=""
# Copy enterprise code
[[ $1 == "ee" ]] && {
@ -68,24 +25,14 @@ function build_api() {
envarg="default-ee"
tag="ee-"
}
mv Dockerfile.dockerignore .dockerignore
docker build -f ./Dockerfile --platform linux/${ARCH} --build-arg envarg=$envarg --build-arg GIT_SHA=$git_sha -t ${DOCKER_REPO:-'local'}/${IMAGE_NAME:-'chalice'}:${image_tag} .
cd ../api || exit_err 100
rm -rf ../${destination}
docker build -f ./Dockerfile --build-arg envarg=$envarg -t ${DOCKER_REPO:-'local'}/chalice:${git_sha1} .
[[ $PUSH_IMAGE -eq 1 ]] && {
docker push ${DOCKER_REPO:-'local'}/${IMAGE_NAME:-'chalice'}:${image_tag}
docker tag ${DOCKER_REPO:-'local'}/${IMAGE_NAME:-'chalice'}:${image_tag} ${DOCKER_REPO:-'local'}/chalice:${tag}latest
docker push ${DOCKER_REPO:-'local'}/${IMAGE_NAME:-'chalice'}:${tag}latest
}
[[ $SIGN_IMAGE -eq 1 ]] && {
cosign sign --key $SIGN_KEY ${DOCKER_REPO:-'local'}/${IMAGE_NAME:-'chalice'}:${image_tag}
}
echo "api docker build completed"
docker push ${DOCKER_REPO:-'local'}/chalice:${git_sha1}
docker tag ${DOCKER_REPO:-'local'}/chalice:${git_sha1} ${DOCKER_REPO:-'local'}/chalice:${tag}latest
docker push ${DOCKER_REPO:-'local'}/chalice:${tag}latest
}
}
check_prereq
build_api $environment
echo build_complete
if [[ $PATCH -eq 1 ]]; then
update_helm_release
fi
build_api $1
IMAGE_TAG=$IMAGE_TAG PUSH_IMAGE=$PUSH_IMAGE DOCKER_REPO=$DOCKER_REPO bash build_alerts.sh $1

View file

@ -7,43 +7,46 @@
# Usage: IMAGE_TAG=latest DOCKER_REPO=myDockerHubID bash build.sh <ee>
git_sha=$(git rev-parse --short HEAD)
image_tag=${IMAGE_TAG:-git_sha}
function make_submodule() {
[[ $1 != "ee" ]] && {
# -- this part was generated by modules_lister.py --
mkdir alerts
cp -R ./{app_alerts,schemas}.py ./alerts/
mkdir -p ./alerts/chalicelib/
cp -R ./chalicelib/__init__.py ./alerts/chalicelib/
mkdir -p ./alerts/chalicelib/core/
cp -R ./chalicelib/core/{__init__,alerts_processor,alerts_listener,sessions,events,issues,sessions_metas,metadata,projects,users,authorizers,tenants,assist,events_ios,sessions_mobs,errors,sourcemaps,sourcemaps_parser,resources,performance_event,alerts,notifications,slack,collaboration_slack,webhook}.py ./alerts/chalicelib/core/
mkdir -p ./alerts/chalicelib/utils/
cp -R ./chalicelib/utils/{__init__,TimeUTC,pg_client,helper,event_filter_definition,dev,email_helper,email_handler,smtp,s3,metrics_helper}.py ./alerts/chalicelib/utils/
# -- end of generated part
}
[[ $1 == "ee" ]] && {
# -- this part was generated by modules_lister.py --
mkdir alerts
cp -R ./{app_alerts,schemas,schemas_ee}.py ./alerts/
mkdir -p ./alerts/chalicelib/
cp -R ./chalicelib/__init__.py ./alerts/chalicelib/
mkdir -p ./alerts/chalicelib/core/
cp -R ./chalicelib/core/{__init__,alerts_processor,alerts_listener,sessions,events,issues,sessions_metas,metadata,projects,users,authorizers,tenants,roles,assist,events_ios,sessions_mobs,errors,dashboard,sourcemaps,sourcemaps_parser,resources,performance_event,alerts,notifications,slack,collaboration_slack,webhook}.py ./alerts/chalicelib/core/
mkdir -p ./alerts/chalicelib/utils/
cp -R ./chalicelib/utils/{__init__,TimeUTC,pg_client,helper,event_filter_definition,dev,SAML2_helper,email_helper,email_handler,smtp,s3,args_transformer,ch_client,metrics_helper}.py ./alerts/chalicelib/utils/
# -- end of generated part
}
cp -R ./{Dockerfile.alerts,requirements.txt,.env.default,entrypoint.sh} ./alerts/
cp -R ./chalicelib/utils/html ./alerts/chalicelib/utils/html
}
git_sha1=${IMAGE_TAG:-$(git rev-parse HEAD)}
envarg="default-foss"
source ../scripts/lib/_docker.sh
check_prereq() {
which docker || {
echo "Docker not installed, please install docker."
exit 1
exit=1
}
[[ exit -eq 1 ]] && exit 1
}
[[ $1 == ee ]] && ee=true
[[ $PATCH -eq 1 ]] && {
image_tag="$(grep -ER ^.ppVersion ../scripts/helmcharts/openreplay/charts/$chart | xargs | awk '{print $2}' | awk -F. -v OFS=. '{$NF += 1 ; print}')"
[[ $ee == "true" ]] && {
image_tag="${image_tag}-ee"
}
}
update_helm_release() {
chart=$1
HELM_TAG="$(grep -iER ^version ../scripts/helmcharts/openreplay/charts/$chart | awk '{print $2}' | awk -F. -v OFS=. '{$NF += 1 ; print}')"
# Update the chart version
sed -i "s#^version.*#version: $HELM_TAG# g" ../scripts/helmcharts/openreplay/charts/$chart/Chart.yaml
# Update image tags
sed -i "s#ppVersion.*#ppVersion: \"$image_tag\"#g" ../scripts/helmcharts/openreplay/charts/$chart/Chart.yaml
# Commit the changes
git add ../scripts/helmcharts/openreplay/charts/$chart/Chart.yaml
git commit -m "chore(helm): Updating $chart image release"
}
function build_alerts() {
destination="_alerts"
[[ $1 == "ee" ]] && {
destination="_alerts_ee"
}
cp -R ../api ../${destination}
cd ../${destination}
function build_api(){
tag=""
# Copy enterprise code
[[ $1 == "ee" ]] && {
@ -51,23 +54,17 @@ function build_alerts() {
envarg="default-ee"
tag="ee-"
}
mv Dockerfile_alerts.dockerignore .dockerignore
docker build -f ./Dockerfile_alerts --platform linux/${ARCH:-"amd64"} --build-arg envarg=$envarg --build-arg GIT_SHA=$git_sha -t ${DOCKER_REPO:-'local'}/alerts:${image_tag} .
cd ../api
rm -rf ../${destination}
make_submodule $1
cd alerts
docker build -f ./Dockerfile.alerts --build-arg envarg=$envarg -t ${DOCKER_REPO:-'local'}/alerts:${git_sha1} .
cd ..
rm -rf alerts
[[ $PUSH_IMAGE -eq 1 ]] && {
docker push ${DOCKER_REPO:-'local'}/alerts:${image_tag}
docker tag ${DOCKER_REPO:-'local'}/alerts:${image_tag} ${DOCKER_REPO:-'local'}/alerts:${tag}latest
docker push ${DOCKER_REPO:-'local'}/alerts:${git_sha1}
docker tag ${DOCKER_REPO:-'local'}/alerts:${git_sha1} ${DOCKER_REPO:-'local'}/alerts:${tag}latest
docker push ${DOCKER_REPO:-'local'}/alerts:${tag}latest
}
[[ $SIGN_IMAGE -eq 1 ]] && {
cosign sign --key $SIGN_KEY ${DOCKER_REPO:-'local'}/alerts:${image_tag}
}
echo "completed alerts build"
}
check_prereq
build_alerts $1
if [[ $PATCH -eq 1 ]]; then
update_helm_release alerts
fi
build_api $1

View file

@ -1,52 +0,0 @@
#!/bin/bash
# Script to build crons module
# flags to accept:
# envarg: build for enterprise edition.
# Default will be OSS build.
# Usage: IMAGE_TAG=latest DOCKER_REPO=myDockerHubID bash build.sh <ee>
git_sha1=${IMAGE_TAG:-$(git rev-parse HEAD)}
envarg="default-foss"
source ../scripts/lib/_docker.sh
check_prereq() {
which docker || {
echo "Docker not installed, please install docker."
exit=1
}
[[ exit -eq 1 ]] && exit 1
}
function build_crons() {
destination="_crons_ee"
cp -R ../api ../${destination}
cd ../${destination}
tag=""
# Copy enterprise code
cp -rf ../ee/api/* ./
envarg="default-ee"
tag="ee-"
mv Dockerfile_crons.dockerignore .dockerignore
docker build -f ./Dockerfile_crons --platform=linux/${ARCH:-'amd64'} --build-arg envarg=$envarg -t ${DOCKER_REPO:-'local'}/crons:${git_sha1} .
cd ../api
rm -rf ../${destination}
[[ $PUSH_IMAGE -eq 1 ]] && {
docker push ${DOCKER_REPO:-'local'}/crons:${git_sha1}
docker tag ${DOCKER_REPO:-'local'}/crons:${git_sha1} ${DOCKER_REPO:-'local'}/crons:${tag}latest
docker push ${DOCKER_REPO:-'local'}/crons:${tag}latest
}
[[ $SIGN_IMAGE -eq 1 ]] && {
cosign sign --key $SIGN_KEY ${DOCKER_REPO:-'local'}/crons:${git_sha1}
}
echo "completed crons build"
}
check_prereq
[[ $1 == "ee" ]] && {
build_crons $1
} || {
echo -e "Crons is only for ee. Rerun the script using \n bash $0 ee"
exit 100
}

View file

@ -0,0 +1,170 @@
import json
import logging
import time
import schemas
from chalicelib.core import notifications, slack, webhook
from chalicelib.utils import pg_client, helper, email_helper
from chalicelib.utils.TimeUTC import TimeUTC
def get(id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
SELECT *
FROM public.alerts
WHERE alert_id =%(id)s;""",
{"id": id})
)
a = helper.dict_to_camel_case(cur.fetchone())
return helper.custom_alert_to_front(__process_circular(a))
def get_all(project_id):
with pg_client.PostgresClient() as cur:
query = cur.mogrify("""\
SELECT *
FROM public.alerts
WHERE project_id =%(project_id)s AND deleted_at ISNULL
ORDER BY created_at;""",
{"project_id": project_id})
cur.execute(query=query)
all = helper.list_to_camel_case(cur.fetchall())
for i in range(len(all)):
all[i] = helper.custom_alert_to_front(__process_circular(all[i]))
return all
def __process_circular(alert):
if alert is None:
return None
alert.pop("deletedAt")
alert["createdAt"] = TimeUTC.datetime_to_timestamp(alert["createdAt"])
return alert
def create(project_id, data: schemas.AlertSchema):
data = data.dict()
data["query"] = json.dumps(data["query"])
data["options"] = json.dumps(data["options"])
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
INSERT INTO public.alerts(project_id, name, description, detection_method, query, options, series_id)
VALUES (%(project_id)s, %(name)s, %(description)s, %(detection_method)s, %(query)s, %(options)s::jsonb, %(series_id)s)
RETURNING *;""",
{"project_id": project_id, **data})
)
a = helper.dict_to_camel_case(cur.fetchone())
return {"data": helper.custom_alert_to_front(helper.dict_to_camel_case(__process_circular(a)))}
def update(id, data: schemas.AlertSchema):
data = data.dict()
data["query"] = json.dumps(data["query"])
data["options"] = json.dumps(data["options"])
with pg_client.PostgresClient() as cur:
query = cur.mogrify("""\
UPDATE public.alerts
SET name = %(name)s,
description = %(description)s,
active = TRUE,
detection_method = %(detection_method)s,
query = %(query)s,
options = %(options)s,
series_id = %(series_id)s
WHERE alert_id =%(id)s AND deleted_at ISNULL
RETURNING *;""",
{"id": id, **data})
cur.execute(query=query)
a = helper.dict_to_camel_case(cur.fetchone())
return {"data": helper.custom_alert_to_front(__process_circular(a))}
def process_notifications(data):
full = {}
for n in data:
if "message" in n["options"]:
webhook_data = {}
if "data" in n["options"]:
webhook_data = n["options"].pop("data")
for c in n["options"].pop("message"):
if c["type"] not in full:
full[c["type"]] = []
if c["type"] in ["slack", "email"]:
full[c["type"]].append({
"notification": n,
"destination": c["value"]
})
elif c["type"] in ["webhook"]:
full[c["type"]].append({"data": webhook_data, "destination": c["value"]})
notifications.create(data)
BATCH_SIZE = 200
for t in full.keys():
for i in range(0, len(full[t]), BATCH_SIZE):
notifications_list = full[t][i:i + BATCH_SIZE]
if t == "slack":
try:
slack.send_batch(notifications_list=notifications_list)
except Exception as e:
logging.error("!!!Error while sending slack notifications batch")
logging.error(str(e))
elif t == "email":
try:
send_by_email_batch(notifications_list=notifications_list)
except Exception as e:
logging.error("!!!Error while sending email notifications batch")
logging.error(str(e))
elif t == "webhook":
try:
webhook.trigger_batch(data_list=notifications_list)
except Exception as e:
logging.error("!!!Error while sending webhook notifications batch")
logging.error(str(e))
def send_by_email(notification, destination):
if notification is None:
return
email_helper.alert_email(recipients=destination,
subject=f'"{notification["title"]}" has been triggered',
data={
"message": f'"{notification["title"]}" {notification["description"]}',
"project_id": notification["options"]["projectId"]})
def send_by_email_batch(notifications_list):
if notifications_list is None or len(notifications_list) == 0:
return
for n in notifications_list:
send_by_email(notification=n.get("notification"), destination=n.get("destination"))
time.sleep(1)
def delete(project_id, alert_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
UPDATE public.alerts
SET
deleted_at = timezone('utc'::text, now()),
active = FALSE
WHERE
alert_id = %(alert_id)s AND project_id=%(project_id)s;""",
{"alert_id": alert_id, "project_id": project_id})
)
return {"data": {"state": "success"}}
def get_predefined_values():
values = [e.value for e in schemas.AlertColumn]
values = [{"name": v, "value": v,
"unit": "count" if v.endswith(".count") else "ms",
"predefined": True,
"metricId": None,
"seriesId": None} for v in values if v != schemas.AlertColumn.custom]
return values
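
The fan-out in process_notifications above groups each alert's options["message"] entries by channel type (slack, email, webhook) and then dispatches each group in slices of BATCH_SIZE. Below is a minimal, standalone sketch of that grouping and slicing, run on a hypothetical payload; it is an illustration only, without the notifications.create call or the real senders.

# Standalone sketch of the channel fan-out used by process_notifications above.
# The sample payload shape is an assumption for illustration.
BATCH_SIZE = 200

def fan_out(data):
    full = {}
    for n in data:
        webhook_data = n["options"].get("data", {})
        for c in n["options"].get("message", []):
            full.setdefault(c["type"], [])
            if c["type"] in ("slack", "email"):
                full[c["type"]].append({"notification": n, "destination": c["value"]})
            elif c["type"] == "webhook":
                full[c["type"]].append({"data": webhook_data, "destination": c["value"]})
    # Yield per-channel slices, mirroring how the real function batches its sends
    for channel, items in full.items():
        for i in range(0, len(items), BATCH_SIZE):
            yield channel, items[i:i + BATCH_SIZE]

sample = [{"title": "High error rate",
           "options": {"message": [{"type": "email", "value": "ops@example.com"},
                                   {"type": "webhook", "value": 42}],
                       "data": {"limitValue": 10}}}]
for channel, batch in fan_out(sample):
    print(channel, len(batch))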

View file

@ -1,10 +0,0 @@
import logging
from decouple import config
logger = logging.getLogger(__name__)
if config("EXP_ALERTS", cast=bool, default=False):
logging.info(">>> Using experimental alerts")
from . import alerts_processor_ch as alerts_processor
else:
from . import alerts_processor as alerts_processor

View file

@ -1,235 +0,0 @@
import json
import logging
import time
from datetime import datetime
from decouple import config
import schemas
from chalicelib.core import notifications, webhook
from chalicelib.core.collaborations.collaboration_msteams import MSTeams
from chalicelib.core.collaborations.collaboration_slack import Slack
from chalicelib.utils import pg_client, helper, email_helper, smtp
from chalicelib.utils.TimeUTC import TimeUTC
logger = logging.getLogger(__name__)
def get(id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
SELECT *
FROM public.alerts
WHERE alert_id =%(id)s;""",
{"id": id})
)
a = helper.dict_to_camel_case(cur.fetchone())
return helper.custom_alert_to_front(__process_circular(a))
def get_all(project_id):
with pg_client.PostgresClient() as cur:
query = cur.mogrify("""\
SELECT alerts.*,
COALESCE(metrics.name || '.' || (COALESCE(metric_series.name, 'series ' || index)) || '.count',
query ->> 'left') AS series_name
FROM public.alerts
LEFT JOIN metric_series USING (series_id)
LEFT JOIN metrics USING (metric_id)
WHERE alerts.project_id =%(project_id)s
AND alerts.deleted_at ISNULL
ORDER BY alerts.created_at;""",
{"project_id": project_id})
cur.execute(query=query)
all = helper.list_to_camel_case(cur.fetchall())
for i in range(len(all)):
all[i] = helper.custom_alert_to_front(__process_circular(all[i]))
return all
def __process_circular(alert):
if alert is None:
return None
alert.pop("deletedAt")
alert["createdAt"] = TimeUTC.datetime_to_timestamp(alert["createdAt"])
return alert
def create(project_id, data: schemas.AlertSchema):
data = data.model_dump()
data["query"] = json.dumps(data["query"])
data["options"] = json.dumps(data["options"])
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
INSERT INTO public.alerts(project_id, name, description, detection_method, query, options, series_id, change)
VALUES (%(project_id)s, %(name)s, %(description)s, %(detection_method)s, %(query)s, %(options)s::jsonb, %(series_id)s, %(change)s)
RETURNING *;""",
{"project_id": project_id, **data})
)
a = helper.dict_to_camel_case(cur.fetchone())
return {"data": helper.custom_alert_to_front(helper.dict_to_camel_case(__process_circular(a)))}
def update(id, data: schemas.AlertSchema):
data = data.model_dump()
data["query"] = json.dumps(data["query"])
data["options"] = json.dumps(data["options"])
with pg_client.PostgresClient() as cur:
query = cur.mogrify("""\
UPDATE public.alerts
SET name = %(name)s,
description = %(description)s,
active = TRUE,
detection_method = %(detection_method)s,
query = %(query)s,
options = %(options)s,
series_id = %(series_id)s,
change = %(change)s
WHERE alert_id =%(id)s AND deleted_at ISNULL
RETURNING *;""",
{"id": id, **data})
cur.execute(query=query)
a = helper.dict_to_camel_case(cur.fetchone())
return {"data": helper.custom_alert_to_front(__process_circular(a))}
def process_notifications(data):
full = {}
for n in data:
if "message" in n["options"]:
webhook_data = {}
if "data" in n["options"]:
webhook_data = n["options"].pop("data")
for c in n["options"].pop("message"):
if c["type"] not in full:
full[c["type"]] = []
if c["type"] in ["slack", "msteams", "email"]:
full[c["type"]].append({
"notification": n,
"destination": c["value"]
})
elif c["type"] in ["webhook"]:
full[c["type"]].append({"data": webhook_data, "destination": c["value"]})
notifications.create(data)
BATCH_SIZE = 200
for t in full.keys():
for i in range(0, len(full[t]), BATCH_SIZE):
notifications_list = full[t][i:min(i + BATCH_SIZE, len(full[t]))]
if notifications_list is None or len(notifications_list) == 0:
break
if t == "slack":
try:
send_to_slack_batch(notifications_list=notifications_list)
except Exception as e:
logger.error("!!!Error while sending slack notifications batch")
logger.error(str(e))
elif t == "msteams":
try:
send_to_msteams_batch(notifications_list=notifications_list)
except Exception as e:
logger.error("!!!Error while sending msteams notifications batch")
logger.error(str(e))
elif t == "email":
try:
send_by_email_batch(notifications_list=notifications_list)
except Exception as e:
logger.error("!!!Error while sending email notifications batch")
logger.error(str(e))
elif t == "webhook":
try:
webhook.trigger_batch(data_list=notifications_list)
except Exception as e:
logger.error("!!!Error while sending webhook notifications batch")
logger.error(str(e))
def send_by_email(notification, destination):
if notification is None:
return
email_helper.alert_email(recipients=destination,
subject=f'"{notification["title"]}" has been triggered',
data={
"message": f'"{notification["title"]}" {notification["description"]}',
"project_id": notification["options"]["projectId"]})
def send_by_email_batch(notifications_list):
if not smtp.has_smtp():
logger.info("no SMTP configuration for email notifications")
if notifications_list is None or len(notifications_list) == 0:
logger.info("no email notifications")
return
for n in notifications_list:
send_by_email(notification=n.get("notification"), destination=n.get("destination"))
time.sleep(1)
def send_to_slack_batch(notifications_list):
webhookId_map = {}
for n in notifications_list:
if n.get("destination") not in webhookId_map:
webhookId_map[n.get("destination")] = {"tenantId": n["notification"]["tenantId"], "batch": []}
webhookId_map[n.get("destination")]["batch"].append({"text": n["notification"]["description"] \
+ f"\n<{config('SITE_URL')}{n['notification']['buttonUrl']}|{n['notification']['buttonText']}>",
"title": n["notification"]["title"],
"title_link": n["notification"]["buttonUrl"],
"ts": datetime.now().timestamp()})
for batch in webhookId_map.keys():
Slack.send_batch(tenant_id=webhookId_map[batch]["tenantId"], webhook_id=batch,
attachments=webhookId_map[batch]["batch"])
def send_to_msteams_batch(notifications_list):
webhookId_map = {}
for n in notifications_list:
if n.get("destination") not in webhookId_map:
webhookId_map[n.get("destination")] = {"tenantId": n["notification"]["tenantId"], "batch": []}
link = f"{config('SITE_URL')}{n['notification']['buttonUrl']}"
# for MSTeams, the batch is the list of `sections`
webhookId_map[n.get("destination")]["batch"].append(
{
"activityTitle": n["notification"]["title"],
"activitySubtitle": f"On Project *{n['notification']['projectName']}*",
"facts": [
{
"name": "Target:",
"value": link
},
{
"name": "Description:",
"value": n["notification"]["description"]
}],
"markdown": True
}
)
for batch in webhookId_map.keys():
MSTeams.send_batch(tenant_id=webhookId_map[batch]["tenantId"], webhook_id=batch,
attachments=webhookId_map[batch]["batch"])
def delete(project_id, alert_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(""" UPDATE public.alerts
SET deleted_at = timezone('utc'::text, now()),
active = FALSE
WHERE alert_id = %(alert_id)s AND project_id=%(project_id)s;""",
{"alert_id": alert_id, "project_id": project_id})
)
return {"data": {"state": "success"}}
def get_predefined_values():
values = [e.value for e in schemas.AlertColumn]
values = [{"name": v, "value": v,
"unit": "count" if v.endswith(".count") else "ms",
"predefined": True,
"metricId": None,
"seriesId": None} for v in values if v != schemas.AlertColumn.CUSTOM]
return values
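
send_to_slack_batch and send_to_msteams_batch above both regroup the per-channel notifications by destination webhook before issuing one batched send per webhook. A standalone sketch of that regrouping for the Slack case follows; the field names come from the code above, while SITE_URL and the sample values are hypothetical stand-ins.

from datetime import datetime

SITE_URL = "https://openreplay.example.com"  # assumed stand-in for config('SITE_URL')

def group_by_webhook(notifications_list):
    by_webhook = {}
    for n in notifications_list:
        entry = by_webhook.setdefault(n["destination"],
                                      {"tenantId": n["notification"]["tenantId"], "batch": []})
        entry["batch"].append({
            "title": n["notification"]["title"],
            "title_link": n["notification"]["buttonUrl"],
            "text": n["notification"]["description"]
                    + f"\n<{SITE_URL}{n['notification']['buttonUrl']}|{n['notification']['buttonText']}>",
            "ts": datetime.now().timestamp(),
        })
    return by_webhook

sample = [{"destination": 7,
           "notification": {"tenantId": 1, "title": "High error rate",
                            "description": "errors.javascript.count = 25 (> 10).",
                            "buttonUrl": "/1/metrics", "buttonText": "Check metrics for more details"}}]
print(group_by_webhook(sample))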

View file

@ -1,33 +0,0 @@
from chalicelib.core.alerts.modules import TENANT_ID
from chalicelib.utils import pg_client, helper
def get_all_alerts():
with pg_client.PostgresClient(long_query=True) as cur:
query = f"""SELECT {TENANT_ID} AS tenant_id,
alert_id,
projects.project_id,
projects.name AS project_name,
detection_method,
query,
options,
(EXTRACT(EPOCH FROM alerts.created_at) * 1000)::BIGINT AS created_at,
alerts.name,
alerts.series_id,
filter,
change,
COALESCE(metrics.name || '.' || (COALESCE(metric_series.name, 'series ' || index)) || '.count',
query ->> 'left') AS series_name
FROM public.alerts
INNER JOIN projects USING (project_id)
LEFT JOIN metric_series USING (series_id)
LEFT JOIN metrics USING (metric_id)
WHERE alerts.deleted_at ISNULL
AND alerts.active
AND projects.active
AND projects.deleted_at ISNULL
AND (alerts.series_id ISNULL OR metric_series.deleted_at ISNULL)
ORDER BY alerts.created_at;"""
cur.execute(query=query)
all_alerts = helper.list_to_camel_case(cur.fetchall())
return all_alerts

View file

@ -1,169 +0,0 @@
import logging
from pydantic_core._pydantic_core import ValidationError
import schemas
from chalicelib.core.alerts import alerts, alerts_listener
from chalicelib.core.alerts.modules import alert_helpers
from chalicelib.core.sessions import sessions_pg as sessions
from chalicelib.utils import pg_client
from chalicelib.utils.TimeUTC import TimeUTC
logger = logging.getLogger(__name__)
LeftToDb = {
schemas.AlertColumn.PERFORMANCE__DOM_CONTENT_LOADED__AVERAGE: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "COALESCE(AVG(NULLIF(dom_content_loaded_time ,0)),0)"},
schemas.AlertColumn.PERFORMANCE__FIRST_MEANINGFUL_PAINT__AVERAGE: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "COALESCE(AVG(NULLIF(first_contentful_paint_time,0)),0)"},
schemas.AlertColumn.PERFORMANCE__PAGE_LOAD_TIME__AVERAGE: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)", "formula": "AVG(NULLIF(load_time ,0))"},
schemas.AlertColumn.PERFORMANCE__DOM_BUILD_TIME__AVERAGE: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "AVG(NULLIF(dom_building_time,0))"},
schemas.AlertColumn.PERFORMANCE__SPEED_INDEX__AVERAGE: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)", "formula": "AVG(NULLIF(speed_index,0))"},
schemas.AlertColumn.PERFORMANCE__PAGE_RESPONSE_TIME__AVERAGE: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "AVG(NULLIF(response_time,0))"},
schemas.AlertColumn.PERFORMANCE__TTFB__AVERAGE: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "AVG(NULLIF(first_paint_time,0))"},
schemas.AlertColumn.PERFORMANCE__TIME_TO_RENDER__AVERAGE: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "AVG(NULLIF(visually_complete,0))"},
schemas.AlertColumn.PERFORMANCE__CRASHES__COUNT: {
"table": "public.sessions",
"formula": "COUNT(DISTINCT session_id)",
"condition": "errors_count > 0 AND duration>0"},
schemas.AlertColumn.ERRORS__JAVASCRIPT__COUNT: {
"table": "events.errors INNER JOIN public.errors AS m_errors USING (error_id)",
"formula": "COUNT(DISTINCT session_id)", "condition": "source='js_exception'", "joinSessions": False},
schemas.AlertColumn.ERRORS__BACKEND__COUNT: {
"table": "events.errors INNER JOIN public.errors AS m_errors USING (error_id)",
"formula": "COUNT(DISTINCT session_id)", "condition": "source!='js_exception'", "joinSessions": False},
}
def Build(a):
now = TimeUTC.now()
params = {"project_id": a["projectId"], "now": now}
full_args = {}
j_s = True
main_table = ""
if a["seriesId"] is not None:
a["filter"]["sort"] = "session_id"
a["filter"]["order"] = schemas.SortOrderType.DESC
a["filter"]["startDate"] = 0
a["filter"]["endDate"] = TimeUTC.now()
try:
data = schemas.SessionsSearchPayloadSchema.model_validate(a["filter"])
except ValidationError:
logger.warning("Validation error for:")
logger.warning(a["filter"])
raise
full_args, query_part = sessions.search_query_parts(data=data, error_status=None, errors_only=False,
issue=None, project_id=a["projectId"], user_id=None,
favorite_only=False)
subQ = f"""SELECT COUNT(session_id) AS value
{query_part}"""
else:
colDef = LeftToDb[a["query"]["left"]]
subQ = f"""SELECT {colDef["formula"]} AS value
FROM {colDef["table"]}
WHERE project_id = %(project_id)s
{"AND " + colDef["condition"] if colDef.get("condition") else ""}"""
j_s = colDef.get("joinSessions", True)
main_table = colDef["table"]
is_ss = main_table == "public.sessions"
q = f"""SELECT coalesce(value,0) AS value, coalesce(value,0) {a["query"]["operator"]} {a["query"]["right"]} AS valid"""
if a["detectionMethod"] == schemas.AlertDetectionMethod.THRESHOLD:
if a["seriesId"] is not None:
q += f""" FROM ({subQ}) AS stat"""
else:
q += f""" FROM ({subQ} {"AND timestamp >= %(startDate)s AND timestamp <= %(now)s" if not is_ss else ""}
{"AND start_ts >= %(startDate)s AND start_ts <= %(now)s" if j_s else ""}) AS stat"""
params = {**params, **full_args, "startDate": TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000}
else:
if a["change"] == schemas.AlertDetectionType.CHANGE:
if a["seriesId"] is not None:
sub2 = subQ.replace("%(startDate)s", "%(timestamp_sub2)s").replace("%(endDate)s", "%(startDate)s")
sub1 = f"SELECT (({subQ})-({sub2})) AS value"
q += f" FROM ( {sub1} ) AS stat"
params = {**params, **full_args,
"startDate": TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000,
"timestamp_sub2": TimeUTC.now() - 2 * a["options"]["currentPeriod"] * 60 * 1000}
else:
sub1 = f"""{subQ} {"AND timestamp >= %(startDate)s AND timestamp <= %(now)s" if not is_ss else ""}
{"AND start_ts >= %(startDate)s AND start_ts <= %(now)s" if j_s else ""}"""
params["startDate"] = TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000
sub2 = f"""{subQ} {"AND timestamp < %(startDate)s AND timestamp >= %(timestamp_sub2)s" if not is_ss else ""}
{"AND start_ts < %(startDate)s AND start_ts >= %(timestamp_sub2)s" if j_s else ""}"""
params["timestamp_sub2"] = TimeUTC.now() - 2 * a["options"]["currentPeriod"] * 60 * 1000
sub1 = f"SELECT (( {sub1} )-( {sub2} )) AS value"
q += f" FROM ( {sub1} ) AS stat"
else:
if a["seriesId"] is not None:
sub2 = subQ.replace("%(startDate)s", "%(timestamp_sub2)s").replace("%(endDate)s", "%(startDate)s")
sub1 = f"SELECT (({subQ})/NULLIF(({sub2}),0)-1)*100 AS value"
q += f" FROM ({sub1}) AS stat"
params = {**params, **full_args,
"startDate": TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000,
"timestamp_sub2": TimeUTC.now() \
- (a["options"]["currentPeriod"] + a["options"]["currentPeriod"]) \
* 60 * 1000}
else:
sub1 = f"""{subQ} {"AND timestamp >= %(startDate)s AND timestamp <= %(now)s" if not is_ss else ""}
{"AND start_ts >= %(startDate)s AND start_ts <= %(now)s" if j_s else ""}"""
params["startDate"] = TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000
sub2 = f"""{subQ} {"AND timestamp < %(startDate)s AND timestamp >= %(timestamp_sub2)s" if not is_ss else ""}
{"AND start_ts < %(startDate)s AND start_ts >= %(timestamp_sub2)s" if j_s else ""}"""
params["timestamp_sub2"] = TimeUTC.now() \
- (a["options"]["currentPeriod"] + a["options"]["currentPeriod"]) * 60 * 1000
sub1 = f"SELECT (({sub1})/NULLIF(({sub2}),0)-1)*100 AS value"
q += f" FROM ({sub1}) AS stat"
return q, params
def process():
logger.info("> processing alerts on PG")
notifications = []
all_alerts = alerts_listener.get_all_alerts()
with pg_client.PostgresClient() as cur:
for alert in all_alerts:
if alert_helpers.can_check(alert):
query, params = Build(alert)
try:
query = cur.mogrify(query, params)
except Exception as e:
logger.error(
f"!!!Error while building alert query for alertId:{alert['alertId']} name: {alert['name']}")
logger.error(e)
continue
logger.debug(alert)
logger.debug(query)
try:
cur.execute(query)
result = cur.fetchone()
if result["valid"]:
logger.info(f"Valid alert, notifying users, alertId:{alert['alertId']} name: {alert['name']}")
notifications.append(alert_helpers.generate_notification(alert, result))
except Exception as e:
logger.error(
f"!!!Error while running alert query for alertId:{alert['alertId']} name: {alert['name']}")
logger.error(query)
logger.error(e)
cur = cur.recreate(rollback=True)
if len(notifications) > 0:
cur.execute(
cur.mogrify(f"""UPDATE public.alerts
SET options = options||'{{"lastNotification":{TimeUTC.now()}}}'::jsonb
WHERE alert_id IN %(ids)s;""", {"ids": tuple([n["alertId"] for n in notifications])}))
if len(notifications) > 0:
alerts.process_notifications(notifications)
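
For change and percentage detection methods, the query built above compares the current window with the previous one: an absolute difference, or ((current)/NULLIF(previous, 0) - 1) * 100 for the relative case. A tiny standalone sketch of that percentage-change arithmetic, with hypothetical numbers:

def percent_change(current, previous):
    # Mirrors the SQL expression ((current)/NULLIF(previous, 0) - 1) * 100:
    # a zero previous value yields None instead of a division error.
    if previous == 0:
        return None
    return (current / previous - 1) * 100

# Example: 150 sessions in the current period vs 120 in the previous one -> 25.0 (% increase)
print(percent_change(150, 120))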

View file

@ -1,195 +0,0 @@
import logging
from pydantic_core._pydantic_core import ValidationError
import schemas
from chalicelib.utils import pg_client, ch_client, exp_ch_helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.core.alerts import alerts, alerts_listener
from chalicelib.core.alerts.modules import alert_helpers
from chalicelib.core.sessions import sessions_ch as sessions
logger = logging.getLogger(__name__)
LeftToDb = {
schemas.AlertColumn.PERFORMANCE__DOM_CONTENT_LOADED__AVERAGE: {
"table": lambda timestamp: f"{exp_ch_helper.get_main_events_table(timestamp)} AS pages",
"formula": "COALESCE(AVG(NULLIF(dom_content_loaded_event_time ,0)),0)",
"eventType": "LOCATION"
},
schemas.AlertColumn.PERFORMANCE__FIRST_MEANINGFUL_PAINT__AVERAGE: {
"table": lambda timestamp: f"{exp_ch_helper.get_main_events_table(timestamp)} AS pages",
"formula": "COALESCE(AVG(NULLIF(first_contentful_paint_time,0)),0)",
"eventType": "LOCATION"
},
schemas.AlertColumn.PERFORMANCE__PAGE_LOAD_TIME__AVERAGE: {
"table": lambda timestamp: f"{exp_ch_helper.get_main_events_table(timestamp)} AS pages",
"formula": "AVG(NULLIF(load_event_time ,0))",
"eventType": "LOCATION"
},
schemas.AlertColumn.PERFORMANCE__DOM_BUILD_TIME__AVERAGE: {
"table": lambda timestamp: f"{exp_ch_helper.get_main_events_table(timestamp)} AS pages",
"formula": "AVG(NULLIF(dom_building_time,0))",
"eventType": "LOCATION"
},
schemas.AlertColumn.PERFORMANCE__SPEED_INDEX__AVERAGE: {
"table": lambda timestamp: f"{exp_ch_helper.get_main_events_table(timestamp)} AS pages",
"formula": "AVG(NULLIF(speed_index,0))",
"eventType": "LOCATION"
},
schemas.AlertColumn.PERFORMANCE__PAGE_RESPONSE_TIME__AVERAGE: {
"table": lambda timestamp: f"{exp_ch_helper.get_main_events_table(timestamp)} AS pages",
"formula": "AVG(NULLIF(response_time,0))",
"eventType": "LOCATION"
},
schemas.AlertColumn.PERFORMANCE__TTFB__AVERAGE: {
"table": lambda timestamp: f"{exp_ch_helper.get_main_events_table(timestamp)} AS pages",
"formula": "AVG(NULLIF(first_contentful_paint_time,0))",
"eventType": "LOCATION"
},
schemas.AlertColumn.PERFORMANCE__TIME_TO_RENDER__AVERAGE: {
"table": lambda timestamp: f"{exp_ch_helper.get_main_events_table(timestamp)} AS pages",
"formula": "AVG(NULLIF(visually_complete,0))",
"eventType": "LOCATION"
},
schemas.AlertColumn.PERFORMANCE__CRASHES__COUNT: {
"table": lambda timestamp: f"{exp_ch_helper.get_main_sessions_table(timestamp)} AS sessions",
"formula": "COUNT(DISTINCT session_id)",
"condition": "duration>0 AND errors_count>0"
},
schemas.AlertColumn.ERRORS__JAVASCRIPT__COUNT: {
"table": lambda timestamp: f"{exp_ch_helper.get_main_events_table(timestamp)} AS errors",
"eventType": "ERROR",
"formula": "COUNT(DISTINCT session_id)",
"condition": "source='js_exception'"
},
schemas.AlertColumn.ERRORS__BACKEND__COUNT: {
"table": lambda timestamp: f"{exp_ch_helper.get_main_events_table(timestamp)} AS errors",
"eventType": "ERROR",
"formula": "COUNT(DISTINCT session_id)",
"condition": "source!='js_exception'"
},
}
def Build(a):
now = TimeUTC.now()
params = {"project_id": a["projectId"], "now": now}
full_args = {}
if a["seriesId"] is not None:
a["filter"]["sort"] = "session_id"
a["filter"]["order"] = schemas.SortOrderType.DESC
a["filter"]["startDate"] = 0
a["filter"]["endDate"] = TimeUTC.now()
try:
data = schemas.SessionsSearchPayloadSchema.model_validate(a["filter"])
except ValidationError:
logger.warning("Validation error for:")
logger.warning(a["filter"])
raise
full_args, query_part = sessions.search_query_parts_ch(data=data, error_status=None, errors_only=False,
issue=None, project_id=a["projectId"], user_id=None,
favorite_only=False)
subQ = f"""SELECT COUNT(session_id) AS value
{query_part}"""
else:
colDef = LeftToDb[a["query"]["left"]]
params["event_type"] = LeftToDb[a["query"]["left"]].get("eventType")
subQ = f"""SELECT {colDef["formula"]} AS value
FROM {colDef["table"](now)}
WHERE project_id = %(project_id)s
{"AND event_type=%(event_type)s" if params["event_type"] else ""}
{"AND " + colDef["condition"] if colDef.get("condition") else ""}"""
q = f"""SELECT coalesce(value,0) AS value, coalesce(value,0) {a["query"]["operator"]} {a["query"]["right"]} AS valid"""
if a["detectionMethod"] == schemas.AlertDetectionMethod.THRESHOLD:
if a["seriesId"] is not None:
q += f""" FROM ({subQ}) AS stat"""
else:
q += f""" FROM ({subQ}
AND datetime>=toDateTime(%(startDate)s/1000)
AND datetime<=toDateTime(%(now)s/1000) ) AS stat"""
params = {**params, **full_args, "startDate": TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000}
else:
if a["change"] == schemas.AlertDetectionType.CHANGE:
if a["seriesId"] is not None:
sub2 = subQ.replace("%(startDate)s", "%(timestamp_sub2)s").replace("%(endDate)s", "%(startDate)s")
sub1 = f"SELECT (({subQ})-({sub2})) AS value"
q += f" FROM ( {sub1} ) AS stat"
params = {**params, **full_args,
"startDate": TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000,
"timestamp_sub2": TimeUTC.now() - 2 * a["options"]["currentPeriod"] * 60 * 1000}
else:
sub1 = f"""{subQ} AND datetime>=toDateTime(%(startDate)s/1000)
AND datetime<=toDateTime(%(now)s/1000)"""
params["startDate"] = TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000
sub2 = f"""{subQ} AND datetime<toDateTime(%(startDate)s/1000)
AND datetime>=toDateTime(%(timestamp_sub2)s/1000)"""
params["timestamp_sub2"] = TimeUTC.now() - 2 * a["options"]["currentPeriod"] * 60 * 1000
sub1 = f"SELECT (( {sub1} )-( {sub2} )) AS value"
q += f" FROM ( {sub1} ) AS stat"
else:
if a["seriesId"] is not None:
sub2 = subQ.replace("%(startDate)s", "%(timestamp_sub2)s").replace("%(endDate)s", "%(startDate)s")
sub1 = f"SELECT (({subQ})/NULLIF(({sub2}),0)-1)*100 AS value"
q += f" FROM ({sub1}) AS stat"
params = {**params, **full_args,
"startDate": TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000,
"timestamp_sub2": TimeUTC.now() \
- (a["options"]["currentPeriod"] + a["options"]["currentPeriod"]) \
* 60 * 1000}
else:
sub1 = f"""{subQ} AND datetime>=toDateTime(%(startDate)s/1000)
AND datetime<=toDateTime(%(now)s/1000)"""
params["startDate"] = TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000
sub2 = f"""{subQ} AND datetime<toDateTime(%(startDate)s/1000)
AND datetime>=toDateTime(%(timestamp_sub2)s/1000)"""
params["timestamp_sub2"] = TimeUTC.now() \
- (a["options"]["currentPeriod"] + a["options"]["currentPeriod"]) * 60 * 1000
sub1 = f"SELECT (({sub1})/NULLIF(({sub2}),0)-1)*100 AS value"
q += f" FROM ({sub1}) AS stat"
return q, params
def process():
logger.info("> processing alerts on CH")
notifications = []
all_alerts = alerts_listener.get_all_alerts()
with pg_client.PostgresClient() as cur, ch_client.ClickHouseClient() as ch_cur:
for alert in all_alerts:
if alert["query"]["left"] != "CUSTOM":
continue
if alert_helpers.can_check(alert):
query, params = Build(alert)
try:
query = ch_cur.format(query=query, parameters=params)
except Exception as e:
logger.error(
f"!!!Error while building alert query for alertId:{alert['alertId']} name: {alert['name']}")
logger.error(e)
continue
logger.debug(alert)
logger.debug(query)
try:
result = ch_cur.execute(query=query)
if len(result) > 0:
result = result[0]
if result["valid"]:
logger.info("Valid alert, notifying users")
notifications.append(alert_helpers.generate_notification(alert, result))
except Exception as e:
logger.error(f"!!!Error while running alert query for alertId:{alert['alertId']}")
logger.error(str(e))
logger.error(query)
if len(notifications) > 0:
cur.execute(
cur.mogrify(f"""UPDATE public.alerts
SET options = options||'{{"lastNotification":{TimeUTC.now()}}}'::jsonb
WHERE alert_id IN %(ids)s;""", {"ids": tuple([n["alertId"] for n in notifications])}))
if len(notifications) > 0:
alerts.process_notifications(notifications)

View file

@ -1,3 +0,0 @@
TENANT_ID = "-1"
from . import helpers as alert_helpers

View file

@ -1,74 +0,0 @@
import decimal
import logging
import schemas
from chalicelib.utils.TimeUTC import TimeUTC
logger = logging.getLogger(__name__)
# This is the frequency of execution for each threshold
TimeInterval = {
15: 3,
30: 5,
60: 10,
120: 20,
240: 30,
1440: 60,
}
def __format_value(x):
if x % 1 == 0:
x = int(x)
else:
x = round(x, 2)
return f"{x:,}"
def can_check(a) -> bool:
now = TimeUTC.now()
repetitionBase = a["options"]["currentPeriod"] \
if a["detectionMethod"] == schemas.AlertDetectionMethod.CHANGE \
and a["options"]["currentPeriod"] > a["options"]["previousPeriod"] \
else a["options"]["previousPeriod"]
if TimeInterval.get(repetitionBase) is None:
logger.error(f"repetitionBase: {repetitionBase} NOT FOUND")
return False
return (a["options"]["renotifyInterval"] <= 0 or
a["options"].get("lastNotification") is None or
a["options"]["lastNotification"] <= 0 or
((now - a["options"]["lastNotification"]) > a["options"]["renotifyInterval"] * 60 * 1000)) \
and ((now - a["createdAt"]) % (TimeInterval[repetitionBase] * 60 * 1000)) < 60 * 1000
def generate_notification(alert, result):
left = __format_value(result['value'])
right = __format_value(alert['query']['right'])
return {
"alertId": alert["alertId"],
"tenantId": alert["tenantId"],
"title": alert["name"],
"description": f"{alert['seriesName']} = {left} ({alert['query']['operator']} {right}).",
"buttonText": "Check metrics for more details",
"buttonUrl": f"/{alert['projectId']}/metrics",
"imageUrl": None,
"projectId": alert["projectId"],
"projectName": alert["projectName"],
"options": {"source": "ALERT", "sourceId": alert["alertId"],
"sourceMeta": alert["detectionMethod"],
"message": alert["options"]["message"], "projectId": alert["projectId"],
"data": {"title": alert["name"],
"limitValue": alert["query"]["right"],
"actualValue": float(result["value"]) \
if isinstance(result["value"], decimal.Decimal) \
else result["value"],
"operator": alert["query"]["operator"],
"trigger": alert["query"]["left"],
"alertId": alert["alertId"],
"detectionMethod": alert["detectionMethod"],
"currentPeriod": alert["options"]["currentPeriod"],
"previousPeriod": alert["options"]["previousPeriod"],
"createdAt": TimeUTC.now()}},
}
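
can_check above combines two gates: the alert is eligible again according to renotifyInterval, and the current minute lands on the execution frequency that TimeInterval maps to the alert's period. A standalone sketch of that gating arithmetic with made-up millisecond timestamps (the real code uses TimeUTC.now()):

TIME_INTERVAL = {15: 3, 30: 5, 60: 10, 120: 20, 240: 30, 1440: 60}  # period (min) -> run every N minutes

def is_due(now_ms, created_at_ms, period_min, last_notification_ms, renotify_min):
    interval_min = TIME_INTERVAL[period_min]
    renotify_ok = (renotify_min <= 0 or last_notification_ms is None or last_notification_ms <= 0
                   or (now_ms - last_notification_ms) > renotify_min * 60 * 1000)
    # Same modulo check as can_check: we are within the first minute of a scheduled slot.
    on_schedule = ((now_ms - created_at_ms) % (interval_min * 60 * 1000)) < 60 * 1000
    return renotify_ok and on_schedule

now = 1_700_000_000_000                    # hypothetical "now" in ms
created = now - 3 * 60 * 1000              # alert created exactly 3 minutes ago
print(is_due(now, created, 15, None, 30))  # True: 15-minute alerts run every 3 minutes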

View file

@ -0,0 +1,27 @@
from chalicelib.utils import pg_client, helper
def get_all_alerts():
with pg_client.PostgresClient(long_query=True) as cur:
query = """SELECT -1 AS tenant_id,
alert_id,
project_id,
detection_method,
query,
options,
(EXTRACT(EPOCH FROM alerts.created_at) * 1000)::BIGINT AS created_at,
alerts.name,
alerts.series_id,
filter
FROM public.alerts
LEFT JOIN metric_series USING (series_id)
INNER JOIN projects USING (project_id)
WHERE alerts.deleted_at ISNULL
AND alerts.active
AND projects.active
AND projects.deleted_at ISNULL
AND (alerts.series_id ISNULL OR metric_series.deleted_at ISNULL)
ORDER BY alerts.created_at;"""
cur.execute(query=query)
all_alerts = helper.list_to_camel_case(cur.fetchall())
return all_alerts

View file

@ -0,0 +1,250 @@
import decimal
import logging
import schemas
from chalicelib.core import alerts_listener
from chalicelib.core import sessions, alerts
from chalicelib.utils import pg_client
from chalicelib.utils.TimeUTC import TimeUTC
LeftToDb = {
schemas.AlertColumn.performance__dom_content_loaded__average: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "COALESCE(AVG(NULLIF(dom_content_loaded_time ,0)),0)"},
schemas.AlertColumn.performance__first_meaningful_paint__average: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "COALESCE(AVG(NULLIF(first_contentful_paint_time,0)),0)"},
schemas.AlertColumn.performance__page_load_time__average: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)", "formula": "AVG(NULLIF(load_time ,0))"},
schemas.AlertColumn.performance__dom_build_time__average: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "AVG(NULLIF(dom_building_time,0))"},
schemas.AlertColumn.performance__speed_index__average: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)", "formula": "AVG(NULLIF(speed_index,0))"},
schemas.AlertColumn.performance__page_response_time__average: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "AVG(NULLIF(response_time,0))"},
schemas.AlertColumn.performance__ttfb__average: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "AVG(NULLIF(first_paint_time,0))"},
schemas.AlertColumn.performance__time_to_render__average: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "AVG(NULLIF(visually_complete,0))"},
schemas.AlertColumn.performance__image_load_time__average: {
"table": "events.resources INNER JOIN public.sessions USING(session_id)",
"formula": "AVG(NULLIF(resources.duration,0))", "condition": "type='img'"},
schemas.AlertColumn.performance__request_load_time__average: {
"table": "events.resources INNER JOIN public.sessions USING(session_id)",
"formula": "AVG(NULLIF(resources.duration,0))", "condition": "type='fetch'"},
schemas.AlertColumn.resources__load_time__average: {
"table": "events.resources INNER JOIN public.sessions USING(session_id)",
"formula": "AVG(NULLIF(resources.duration,0))"},
schemas.AlertColumn.resources__missing__count: {
"table": "events.resources INNER JOIN public.sessions USING(session_id)",
"formula": "COUNT(DISTINCT url_hostpath)", "condition": "success= FALSE"},
schemas.AlertColumn.errors__4xx_5xx__count: {
"table": "events.resources INNER JOIN public.sessions USING(session_id)", "formula": "COUNT(session_id)",
"condition": "status/100!=2"},
schemas.AlertColumn.errors__4xx__count: {"table": "events.resources INNER JOIN public.sessions USING(session_id)",
"formula": "COUNT(session_id)", "condition": "status/100=4"},
schemas.AlertColumn.errors__5xx__count: {"table": "events.resources INNER JOIN public.sessions USING(session_id)",
"formula": "COUNT(session_id)", "condition": "status/100=5"},
schemas.AlertColumn.errors__javascript__impacted_sessions__count: {
"table": "events.resources INNER JOIN public.sessions USING(session_id)",
"formula": "COUNT(DISTINCT session_id)", "condition": "success= FALSE AND type='script'"},
schemas.AlertColumn.performance__crashes__count: {
"table": "(SELECT *, start_ts AS timestamp FROM public.sessions WHERE errors_count > 0) AS sessions",
"formula": "COUNT(DISTINCT session_id)", "condition": "errors_count > 0"},
schemas.AlertColumn.errors__javascript__count: {
"table": "events.errors INNER JOIN public.errors AS m_errors USING (error_id)",
"formula": "COUNT(DISTINCT session_id)", "condition": "source='js_exception'", "joinSessions": False},
schemas.AlertColumn.errors__backend__count: {
"table": "events.errors INNER JOIN public.errors AS m_errors USING (error_id)",
"formula": "COUNT(DISTINCT session_id)", "condition": "source!='js_exception'", "joinSessions": False},
}
# This is the frequency of execution for each threshold
TimeInterval = {
15: 3,
30: 5,
60: 10,
120: 20,
240: 30,
1440: 60,
}
def can_check(a) -> bool:
now = TimeUTC.now()
repetitionBase = a["options"]["currentPeriod"] \
if a["detectionMethod"] == schemas.AlertDetectionMethod.change \
and a["options"]["currentPeriod"] > a["options"]["previousPeriod"] \
else a["options"]["previousPeriod"]
if TimeInterval.get(repetitionBase) is None:
logging.error(f"repetitionBase: {repetitionBase} NOT FOUND")
return False
return (a["options"]["renotifyInterval"] <= 0 or
a["options"].get("lastNotification") is None or
a["options"]["lastNotification"] <= 0 or
((now - a["options"]["lastNotification"]) > a["options"]["renotifyInterval"] * 60 * 1000)) \
and ((now - a["createdAt"]) % (TimeInterval[repetitionBase] * 60 * 1000)) < 60 * 1000
def Build(a):
params = {"project_id": a["projectId"]}
full_args = {}
j_s = True
if a["seriesId"] is not None:
a["filter"]["sort"] = "session_id"
a["filter"]["order"] = "DESC"
a["filter"]["startDate"] = -1
a["filter"]["endDate"] = TimeUTC.now()
full_args, query_part, sort = sessions.search_query_parts(
data=schemas.SessionsSearchPayloadSchema.parse_obj(a["filter"]),
error_status=None, errors_only=False,
favorite_only=False, issue=None, project_id=a["projectId"],
user_id=None)
subQ = f"""SELECT COUNT(session_id) AS value
{query_part}"""
else:
colDef = LeftToDb[a["query"]["left"]]
subQ = f"""SELECT {colDef["formula"]} AS value
FROM {colDef["table"]}
WHERE project_id = %(project_id)s
{"AND " + colDef["condition"] if colDef.get("condition") is not None else ""}"""
j_s = colDef.get("joinSessions", True)
q = f"""SELECT coalesce(value,0) AS value, coalesce(value,0) {a["query"]["operator"]} {a["query"]["right"]} AS valid"""
# if len(colDef.group) > 0 {
# subQ = subQ.Column(colDef.group + " AS group_value")
# subQ = subQ.GroupBy(colDef.group)
# q = q.Column("group_value")
# }
if a["detectionMethod"] == schemas.AlertDetectionMethod.threshold:
if a["seriesId"] is not None:
q += f""" FROM ({subQ}) AS stat"""
else:
q += f""" FROM ({subQ} AND timestamp>=%(startDate)s
{"AND sessions.start_ts >= %(startDate)s" if j_s else ""}) AS stat"""
params = {**params, **full_args, "startDate": TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000}
else:
if a["options"]["change"] == schemas.AlertDetectionChangeType.change:
# if len(colDef.group) > 0:
# subq1 := subQ.Where(sq.Expr("timestamp>=$2 ", time.Now().Unix()-a.Options.CurrentPeriod * 60))
# sub2, args2, _ := subQ.Where(
# sq.And{
# sq.Expr("timestamp<$3 ", time.Now().Unix()-a.Options.CurrentPeriod * 60),
# sq.Expr("timestamp>=$4 ", time.Now().Unix()-2 * a.Options.CurrentPeriod * 60),
# }).ToSql()
# sub1 := sq.Select("group_value", "(stat1.value-stat2.value) AS value").FromSelect(subq1, "stat1").JoinClause("INNER JOIN ("+sub2+") AS stat2 USING(group_value)", args2...)
# q = q.FromSelect(sub1, "stat")
# else:
if a["seriesId"] is not None:
sub2 = subQ.replace("%(startDate)s", "%(timestamp_sub2)s").replace("%(endDate)s", "%(startDate)s")
sub1 = f"SELECT (({subQ})-({sub2})) AS value"
q += f" FROM ( {sub1} ) AS stat"
params = {**params, **full_args,
"startDate": TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000,
"timestamp_sub2": TimeUTC.now() - 2 * a["options"]["currentPeriod"] * 60 * 1000}
else:
sub1 = f"""{subQ} AND timestamp>=%(startDate)s
{"AND sessions.start_ts >= %(startDate)s" if j_s else ""}"""
params["startDate"] = TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000
sub2 = f"""{subQ} AND timestamp<%(startDate)s
AND timestamp>=%(timestamp_sub2)s
{"AND sessions.start_ts < %(startDate)s AND sessions.start_ts >= %(timestamp_sub2)s" if j_s else ""}"""
params["timestamp_sub2"] = TimeUTC.now() - 2 * a["options"]["currentPeriod"] * 60 * 1000
sub1 = f"SELECT (( {sub1} )-( {sub2} )) AS value"
q += f" FROM ( {sub1} ) AS stat"
else:
# if len(colDef.group) >0 {
# subq1 := subQ.Where(sq.Expr("timestamp>=$2 ", time.Now().Unix()-a.Options.CurrentPeriod * 60))
# sub2, args2, _ := subQ.Where(
# sq.And{
# sq.Expr("timestamp<$3 ", time.Now().Unix()-a.Options.CurrentPeriod * 60),
# sq.Expr("timestamp>=$4 ", time.Now().Unix()-a.Options.PreviousPeriod * 60-a.Options.CurrentPeriod * 60),
# }).ToSql()
# sub1 := sq.Select("group_value", "(stat1.value/stat2.value-1)*100 AS value").FromSelect(subq1, "stat1").JoinClause("INNER JOIN ("+sub2+") AS stat2 USING(group_value)", args2...)
# q = q.FromSelect(sub1, "stat")
# } else {
if a["seriesId"] is not None:
sub2 = subQ.replace("%(startDate)s", "%(timestamp_sub2)s").replace("%(endDate)s", "%(startDate)s")
sub1 = f"SELECT (({subQ})/NULLIF(({sub2}),0)-1)*100 AS value"
q += f" FROM ({sub1}) AS stat"
params = {**params, **full_args,
"startDate": TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000,
"timestamp_sub2": TimeUTC.now() \
- (a["options"]["currentPeriod"] + a["options"]["currentPeriod"]) \
* 60 * 1000}
else:
sub1 = f"""{subQ} AND timestamp>=%(startDate)s
{"AND sessions.start_ts >= %(startDate)s" if j_s else ""}"""
params["startDate"] = TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000
sub2 = f"""{subQ} AND timestamp<%(startDate)s
AND timestamp>=%(timestamp_sub2)s
{"AND sessions.start_ts < %(startDate)s AND sessions.start_ts >= %(timestamp_sub2)s" if j_s else ""}"""
params["timestamp_sub2"] = TimeUTC.now() \
- (a["options"]["currentPeriod"] + a["options"]["currentPeriod"]) * 60 * 1000
sub1 = f"SELECT (({sub1})/NULLIF(({sub2}),0)-1)*100 AS value"
q += f" FROM ({sub1}) AS stat"
return q, params
def process():
notifications = []
all_alerts = alerts_listener.get_all_alerts()
with pg_client.PostgresClient() as cur:
for alert in all_alerts:
if can_check(alert):
logging.info(f"Querying alertId:{alert['alertId']} name: {alert['name']}")
query, params = Build(alert)
query = cur.mogrify(query, params)
logging.debug(alert)
logging.debug(query)
try:
cur.execute(query)
result = cur.fetchone()
if result["valid"]:
logging.info("Valid alert, notifying users")
notifications.append({
"alertId": alert["alertId"],
"tenantId": alert["tenantId"],
"title": alert["name"],
"description": f"has been triggered, {alert['query']['left']} = {round(result['value'], 2)} ({alert['query']['operator']} {alert['query']['right']}).",
"buttonText": "Check metrics for more details",
"buttonUrl": f"/{alert['projectId']}/metrics",
"imageUrl": None,
"options": {"source": "ALERT", "sourceId": alert["alertId"],
"sourceMeta": alert["detectionMethod"],
"message": alert["options"]["message"], "projectId": alert["projectId"],
"data": {"title": alert["name"],
"limitValue": alert["query"]["right"],
"actualValue": float(result["value"]) \
if isinstance(result["value"], decimal.Decimal) \
else result["value"],
"operator": alert["query"]["operator"],
"trigger": alert["query"]["left"],
"alertId": alert["alertId"],
"detectionMethod": alert["detectionMethod"],
"currentPeriod": alert["options"]["currentPeriod"],
"previousPeriod": alert["options"]["previousPeriod"],
"createdAt": TimeUTC.now()}},
})
except Exception as e:
logging.error(f"!!!Error while running alert query for alertId:{alert['alertId']}")
logging.error(str(e))
logging.error(query)
if len(notifications) > 0:
cur.execute(
cur.mogrify(f"""UPDATE public.Alerts
SET options = options||'{{"lastNotification":{TimeUTC.now()}}}'::jsonb
WHERE alert_id IN %(ids)s;""", {"ids": tuple([n["alertId"] for n in notifications])}))
if len(notifications) > 0:
alerts.process_notifications(notifications)

View file

@ -1,287 +1,104 @@
import logging
from os import access, R_OK
from os.path import exists as path_exists, getsize
import jwt
import requests
from decouple import config
from fastapi import HTTPException, status
import schemas
from chalicelib.core import projects
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.core import projects, sessions
from chalicelib.utils import pg_client, helper
logger = logging.getLogger(__name__)
ASSIST_KEY = config("ASSIST_KEY")
ASSIST_URL = config("ASSIST_URL") % ASSIST_KEY
SESSION_PROJECTION_COLS = """s.project_id,
s.session_id::text AS session_id,
s.user_uuid,
s.user_id,
s.user_agent,
s.user_os,
s.user_browser,
s.user_device,
s.user_device_type,
s.user_country,
s.start_ts,
s.user_anonymous_id,
s.platform
"""
def get_live_sessions_ws_user_id(project_id, user_id):
data = {
"filter": {"userId": user_id} if user_id else {}
}
return __get_live_sessions_ws(project_id=project_id, data=data)
def get_live_sessions_ws_test_id(project_id, test_id):
data = {
"filter": {
'uxtId': test_id,
'operator': 'is'
}
}
return __get_live_sessions_ws(project_id=project_id, data=data)
def get_live_sessions_ws(project_id, body: schemas.LiveSessionsSearchPayloadSchema):
data = {
"filter": {},
"pagination": {"limit": body.limit, "page": body.page},
"sort": {"key": body.sort, "order": body.order}
}
for f in body.filters:
if f.type == schemas.LiveFilterType.METADATA:
data["filter"][f.source] = {"values": f.value, "operator": f.operator}
else:
data["filter"][f.type] = {"values": f.value, "operator": f.operator}
return __get_live_sessions_ws(project_id=project_id, data=data)
def __get_live_sessions_ws(project_id, data):
def get_live_sessions(project_id, filters=None):
project_key = projects.get_project_key(project_id)
try:
results = requests.post(ASSIST_URL + config("assist") + f"/{project_key}",
json=data, timeout=config("assistTimeout", cast=int, default=5))
if results.status_code != 200:
logger.error(f"!! issue with the peer-server code:{results.status_code} for __get_live_sessions_ws")
logger.error(results.text)
return {"total": 0, "sessions": []}
live_peers = results.json().get("data", [])
except requests.exceptions.Timeout:
logger.error("!! Timeout getting Assist response")
live_peers = {"total": 0, "sessions": []}
except Exception as e:
logger.error("!! Issue getting Live-Assist response")
logger.exception(e)
logger.error("expected JSON, received:")
try:
logger.error(results.text)
except:
logger.error("couldn't get response")
live_peers = {"total": 0, "sessions": []}
_live_peers = live_peers
if "sessions" in live_peers:
_live_peers = live_peers["sessions"]
for s in _live_peers:
connected_peers = requests.get(config("peers") % config("S3_KEY") + f"/{project_key}")
if connected_peers.status_code != 200:
print("!! issue with the peer-server")
print(connected_peers.text)
return []
connected_peers = connected_peers.json().get("data", [])
if len(connected_peers) == 0:
return []
connected_peers = tuple(connected_peers)
extra_constraints = ["project_id = %(project_id)s", "session_id IN %(connected_peers)s"]
extra_params = {}
if filters is not None:
for i, f in enumerate(filters):
if not isinstance(f.get("value"), list):
f["value"] = [f.get("value")]
if len(f["value"]) == 0 or f["value"][0] is None:
continue
filter_type = f["type"].upper()
f["value"] = sessions.__get_sql_value_multiple(f["value"])
if filter_type == schemas.FilterType.user_id:
op = sessions.__get_sql_operator(f["operator"])
extra_constraints.append(f"user_id {op} %(value_{i})s")
extra_params[f"value_{i}"] = helper.string_to_sql_like_with_op(f["value"][0], op)
with pg_client.PostgresClient() as cur:
query = cur.mogrify(f"""\
SELECT {SESSION_PROJECTION_COLS}, %(project_key)s||'-'|| session_id AS peer_id
FROM public.sessions AS s
WHERE {" AND ".join(extra_constraints)}
ORDER BY start_ts DESC
LIMIT 500;""",
{"project_id": project_id,
"connected_peers": connected_peers,
"project_key": project_key,
**extra_params})
cur.execute(query)
results = cur.fetchall()
return helper.list_to_camel_case(results)
def get_live_sessions_ws(project_id):
project_key = projects.get_project_key(project_id)
connected_peers = requests.get(config("peers") % config("S3_KEY") + f"/{project_key}")
if connected_peers.status_code != 200:
print("!! issue with the peer-server")
print(connected_peers.text)
return []
live_peers = connected_peers.json().get("data", [])
for s in live_peers:
s["live"] = True
s["projectId"] = project_id
if "projectID" in s:
s.pop("projectID")
live_peers = sorted(live_peers, key=lambda l: l.get("timestamp", 0), reverse=True)
return live_peers
def __get_agent_token(project_id, project_key, session_id):
iat = TimeUTC.now()
return jwt.encode(
payload={
"projectKey": project_key,
"projectId": project_id,
"sessionId": session_id,
"iat": iat // 1000,
"exp": iat // 1000 + config("ASSIST_JWT_EXPIRATION", cast=int) + TimeUTC.get_utc_offset() // 1000,
"iss": config("JWT_ISSUER"),
"aud": f"openreplay:agent"
},
key=config("ASSIST_JWT_SECRET"),
algorithm=config("JWT_ALGORITHM")
)
def get_live_session_by_id(project_id, session_id):
project_key = projects.get_project_key(project_id)
try:
results = requests.get(ASSIST_URL + config("assist") + f"/{project_key}/{session_id}",
timeout=config("assistTimeout", cast=int, default=5))
if results.status_code != 200:
logger.error(f"!! issue with the peer-server code:{results.status_code} for get_live_session_by_id")
logger.error(results.text)
return None
results = results.json().get("data")
if results is None:
return None
results["live"] = True
results["agentToken"] = __get_agent_token(project_id=project_id, project_key=project_key, session_id=session_id)
except requests.exceptions.Timeout:
logger.error("!! Timeout getting Assist response")
return None
except Exception as e:
logger.error("!! Issue getting Assist response")
logger.exception(e)
logger.error("expected JSON, received:")
try:
logger.error(results.text)
except:
logger.error("couldn't get response")
return None
return results
all_live = get_live_sessions_ws(project_id)
for l in all_live:
if str(l.get("sessionID")) == str(session_id):
return l
return None
def is_live(project_id, session_id, project_key=None):
if project_key is None:
project_key = projects.get_project_key(project_id)
try:
results = requests.get(ASSIST_URL + config("assistList") + f"/{project_key}/{session_id}",
timeout=config("assistTimeout", cast=int, default=5))
if results.status_code != 200:
logger.error(f"!! issue with the peer-server code:{results.status_code} for is_live")
logger.error(results.text)
return False
results = results.json().get("data")
except requests.exceptions.Timeout:
logger.error("!! Timeout getting Assist response")
connected_peers = requests.get(config("peersList") % config("S3_KEY") + f"/{project_key}")
if connected_peers.status_code != 200:
print("!! issue with the peer-server")
print(connected_peers.text)
return False
except Exception as e:
logger.error("!! Issue getting Assist response")
logger.exception(e)
logger.error("expected JSON, received:")
try:
logger.error(results.text)
except:
logger.error("couldn't get response")
return False
return str(session_id) == results
connected_peers = connected_peers.json().get("data", [])
return str(session_id) in connected_peers
def autocomplete(project_id, q: str, key: str = None):
project_key = projects.get_project_key(project_id)
params = {"q": q}
if key:
params["key"] = key
try:
results = requests.get(
ASSIST_URL + config("assistList") + f"/{project_key}/autocomplete",
params=params, timeout=config("assistTimeout", cast=int, default=5))
if results.status_code != 200:
logger.error(f"!! issue with the peer-server code:{results.status_code} for autocomplete")
logger.error(results.text)
return {"errors": [f"Something went wrong wile calling assist:{results.text}"]}
results = results.json().get("data", [])
except requests.exceptions.Timeout:
logger.error("!! Timeout getting Assist response")
return {"errors": ["Assist request timeout"]}
except Exception as e:
logger.error("!! Issue getting Assist response")
logger.exception(e)
logger.error("expected JSON, received:")
try:
logger.error(results.text)
except:
logger.error("couldn't get response")
return {"errors": ["Something went wrong wile calling assist"]}
for r in results:
r["type"] = __change_keys(r["type"])
return {"data": results}
def __get_efs_path():
efs_path = config("FS_DIR")
if not path_exists(efs_path):
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=f"EFS not found in path: {efs_path}")
if not access(efs_path, R_OK):
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
detail=f"EFS found under: {efs_path}; but it is not readable, please check permissions")
return efs_path
def __get_mob_path(project_id, session_id):
params = {"projectId": project_id, "sessionId": session_id}
return config("EFS_SESSION_MOB_PATTERN", default="%(sessionId)s") % params
def get_raw_mob_by_id(project_id, session_id):
efs_path = __get_efs_path()
path_to_file = efs_path + "/" + __get_mob_path(project_id=project_id, session_id=session_id)
if path_exists(path_to_file):
if not access(path_to_file, R_OK):
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
detail=f"Replay file found under: {efs_path};" +
" but it is not readable, please check permissions")
# getsize returns the size in bytes, UNPROCESSED_MAX_SIZE is in kB
if (getsize(path_to_file) / 1000) >= config("UNPROCESSED_MAX_SIZE", cast=int, default=200 * 1000):
raise HTTPException(status_code=status.HTTP_413_REQUEST_ENTITY_TOO_LARGE, detail="Replay file too large")
return path_to_file
return None
def __get_devtools_path(project_id, session_id):
params = {"projectId": project_id, "sessionId": session_id}
return config("EFS_DEVTOOLS_MOB_PATTERN", default="%(sessionId)s") % params
def get_raw_devtools_by_id(project_id, session_id):
efs_path = __get_efs_path()
path_to_file = efs_path + "/" + __get_devtools_path(project_id=project_id, session_id=session_id)
if path_exists(path_to_file):
if not access(path_to_file, R_OK):
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
detail=f"Devtools file found under: {efs_path};"
" but it is not readable, please check permissions")
return path_to_file
return None
def session_exists(project_id, session_id):
project_key = projects.get_project_key(project_id)
try:
results = requests.get(ASSIST_URL + config("assist") + f"/{project_key}/{session_id}",
timeout=config("assistTimeout", cast=int, default=5))
if results.status_code != 200:
logger.error(f"!! issue with the peer-server code:{results.status_code} for session_exists")
logger.error(results.text)
return None
results = results.json().get("data")
if results is None:
return False
return True
except requests.exceptions.Timeout:
logger.error("!! Timeout getting Assist response")
return False
except Exception as e:
logger.error("!! Issue getting Assist response")
logger.exception(e)
logger.error("expected JSON, received:")
try:
logger.error(results.text)
except:
logger.error("couldn't get response")
return False
def __change_keys(key):
return {
"PAGETITLE": schemas.LiveFilterType.PAGE_TITLE.value,
"ACTIVE": "active",
"LIVE": "live",
"SESSIONID": schemas.LiveFilterType.SESSION_ID.value,
"METADATA": schemas.LiveFilterType.METADATA.value,
"USERID": schemas.LiveFilterType.USER_ID.value,
"USERUUID": schemas.LiveFilterType.USER_UUID.value,
"PROJECTKEY": "projectKey",
"REVID": schemas.LiveFilterType.REV_ID.value,
"TIMESTAMP": "timestamp",
"TRACKERVERSION": schemas.LiveFilterType.TRACKER_VERSION.value,
"ISSNIPPET": "isSnippet",
"USEROS": schemas.LiveFilterType.USER_OS.value,
"USERBROWSER": schemas.LiveFilterType.USER_BROWSER.value,
"USERBROWSERVERSION": schemas.LiveFilterType.USER_BROWSER_VERSION.value,
"USERDEVICE": schemas.LiveFilterType.USER_DEVICE.value,
"USERDEVICETYPE": schemas.LiveFilterType.USER_DEVICE_TYPE.value,
"USERCOUNTRY": schemas.LiveFilterType.USER_COUNTRY.value,
"PROJECTID": "projectId"
}.get(key.upper(), key)
def get_ice_servers():
return config("iceServers") if config("iceServers", default=None) is not None \
and len(config("iceServers")) > 0 else None

View file

@ -1,96 +1,54 @@
import logging
import jwt
from decouple import config
from chalicelib.core import tenants
from chalicelib.core import users, spot
from chalicelib.utils import helper
from chalicelib.utils.TimeUTC import TimeUTC
logger = logging.getLogger(__name__)
from decouple import config
from chalicelib.core import tenants
from chalicelib.core import users
def get_supported_audience():
return [users.AUDIENCE, spot.AUDIENCE]
def is_spot_token(token: str) -> bool:
try:
decoded_token = jwt.decode(token, options={"verify_signature": False, "verify_exp": False})
audience = decoded_token.get("aud")
return audience == spot.AUDIENCE
except jwt.InvalidTokenError:
logger.error(f"Invalid token for is_spot_token: {token}")
raise
def jwt_authorizer(scheme: str, token: str, leeway=0) -> dict | None:
if scheme.lower() != "bearer":
def jwt_authorizer(token):
token = token.split(" ")
if len(token) != 2 or token[0].lower() != "bearer":
return None
try:
payload = jwt.decode(jwt=token,
key=config("JWT_SECRET") if not is_spot_token(token) else config("JWT_SPOT_SECRET"),
algorithms=config("JWT_ALGORITHM"),
audience=get_supported_audience(),
leeway=leeway)
payload = jwt.decode(
token[1],
config("jwt_secret"),
algorithms=config("jwt_algorithm"),
audience=[f"plugin:{helper.get_stage_name()}", f"front:{helper.get_stage_name()}"]
)
except jwt.ExpiredSignatureError:
logger.debug("! JWT Expired signature")
print("! JWT Expired signature")
return None
except BaseException as e:
logger.warning("! JWT Base Exception", exc_info=e)
print("! JWT Base Exception")
return None
return payload
def jwt_refresh_authorizer(scheme: str, token: str):
if scheme.lower() != "bearer":
def jwt_context(context):
user = users.get(user_id=context["userId"], tenant_id=context["tenantId"])
if user is None:
return None
try:
payload = jwt.decode(jwt=token,
key=config("JWT_REFRESH_SECRET") if not is_spot_token(token) \
else config("JWT_SPOT_REFRESH_SECRET"),
algorithms=config("JWT_ALGORITHM"),
audience=get_supported_audience())
except jwt.ExpiredSignatureError:
logger.debug("! JWT-refresh Expired signature")
return None
except BaseException as e:
logger.error("! JWT-refresh Base Exception", exc_info=e)
return None
return payload
return {
"tenantId": context["tenantId"],
"userId": context["userId"],
**user
}
def generate_jwt(user_id, tenant_id, iat, aud, for_spot=False):
def generate_jwt(id, tenant_id, iat, aud):
token = jwt.encode(
payload={
"userId": user_id,
"userId": id,
"tenantId": tenant_id,
"exp": iat + (config("JWT_EXPIRATION", cast=int) if not for_spot
else config("JWT_SPOT_EXPIRATION", cast=int)),
"iss": config("JWT_ISSUER"),
"iat": iat,
"exp": iat // 1000 + config("jwt_exp_delta_seconds",cast=int) + TimeUTC.get_utc_offset() // 1000,
"iss": config("jwt_issuer"),
"iat": iat // 1000,
"aud": aud
},
key=config("JWT_SECRET") if not for_spot else config("JWT_SPOT_SECRET"),
algorithm=config("JWT_ALGORITHM")
)
return token
def generate_jwt_refresh(user_id, tenant_id, iat, aud, jwt_jti, for_spot=False):
token = jwt.encode(
payload={
"userId": user_id,
"tenantId": tenant_id,
"exp": iat + (config("JWT_REFRESH_EXPIRATION", cast=int) if not for_spot
else config("JWT_SPOT_REFRESH_EXPIRATION", cast=int)),
"iss": config("JWT_ISSUER"),
"iat": iat,
"aud": aud,
"jti": jwt_jti
},
key=config("JWT_REFRESH_SECRET") if not for_spot else config("JWT_SPOT_REFRESH_SECRET"),
algorithm=config("JWT_ALGORITHM")
key=config("jwt_secret"),
algorithm=config("jwt_algorithm")
)
return token
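The helpers above assume JWT_SECRET, JWT_ALGORITHM, JWT_ISSUER and the expiration settings are provided via decouple config. As a rough, self-contained sketch of the encode/decode round trip they perform, here is the same flow with PyJWT and made-up values (these are not OpenReplay defaults):
# Minimal PyJWT round trip mirroring generate_jwt()/jwt_authorizer() above.
# Secret, audience and lifetime are invented for the example.
import time
import jwt  # PyJWT
SECRET = "change-me"
ALGORITHM = "HS256"
AUDIENCE = "front:openreplay"
now = int(time.time())
token = jwt.encode(
    payload={"userId": 1, "tenantId": 1, "iat": now, "exp": now + 3600,
             "iss": "openreplay", "aud": AUDIENCE},
    key=SECRET,
    algorithm=ALGORITHM)
# Once "exp" has passed, jwt.decode raises jwt.ExpiredSignatureError,
# which jwt_authorizer() above turns into a None payload.
payload = jwt.decode(jwt=token, key=SECRET, algorithms=[ALGORITHM], audience=[AUDIENCE])
assert payload["userId"] == 1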

View file

@ -1,439 +0,0 @@
import logging
import schemas
from chalicelib.core import countries, events, metadata
from chalicelib.utils import helper
from chalicelib.utils import pg_client
from chalicelib.utils.event_filter_definition import Event
from chalicelib.utils.or_cache import CachedResponse
logger = logging.getLogger(__name__)
TABLE = "public.autocomplete"
def __get_autocomplete_table(value, project_id):
autocomplete_events = [schemas.FilterType.REV_ID,
schemas.EventType.CLICK,
schemas.FilterType.USER_DEVICE,
schemas.FilterType.USER_ID,
schemas.FilterType.USER_BROWSER,
schemas.FilterType.USER_OS,
schemas.EventType.CUSTOM,
schemas.FilterType.USER_COUNTRY,
schemas.FilterType.USER_CITY,
schemas.FilterType.USER_STATE,
schemas.EventType.LOCATION,
schemas.EventType.INPUT]
autocomplete_events.sort()
sub_queries = []
c_list = []
for e in autocomplete_events:
if e == schemas.FilterType.USER_COUNTRY:
c_list = countries.get_country_code_autocomplete(value)
if len(c_list) > 0:
sub_queries.append(f"""(SELECT DISTINCT ON(value) '{e.value}' AS _type, value
FROM {TABLE}
WHERE project_id = %(project_id)s
AND type= '{e.value.upper()}'
AND value IN %(c_list)s)""")
continue
sub_queries.append(f"""(SELECT '{e.value}' AS _type, value
FROM {TABLE}
WHERE project_id = %(project_id)s
AND type= '{e.value.upper()}'
AND value ILIKE %(svalue)s
ORDER BY value
LIMIT 5)""")
if len(value) > 2:
sub_queries.append(f"""(SELECT '{e.value}' AS _type, value
FROM {TABLE}
WHERE project_id = %(project_id)s
AND type= '{e.value.upper()}'
AND value ILIKE %(value)s
ORDER BY value
LIMIT 5)""")
with pg_client.PostgresClient() as cur:
query = cur.mogrify(" UNION DISTINCT ".join(sub_queries) + ";",
{"project_id": project_id,
"value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value),
"c_list": tuple(c_list)
})
try:
cur.execute(query)
except Exception as err:
logger.exception("--------- AUTOCOMPLETE SEARCH QUERY EXCEPTION -----------")
logger.exception(query.decode('UTF-8'))
logger.exception("--------- VALUE -----------")
logger.exception(value)
logger.exception("--------------------")
raise err
results = cur.fetchall()
for r in results:
r["type"] = r.pop("_type")
results = helper.list_to_camel_case(results)
return results
def __generic_query(typename, value_length=None):
if typename == schemas.FilterType.USER_COUNTRY:
return f"""SELECT DISTINCT value, type
FROM {TABLE}
WHERE
project_id = %(project_id)s
AND type='{typename.upper()}'
AND value IN %(value)s
ORDER BY value"""
if value_length is None or value_length > 2:
return f"""SELECT DISTINCT ON(value,type) value, type
((SELECT DISTINCT value, type
FROM {TABLE}
WHERE
project_id = %(project_id)s
AND type='{typename.upper()}'
AND value ILIKE %(svalue)s
ORDER BY value
LIMIT 5)
UNION DISTINCT
(SELECT DISTINCT value, type
FROM {TABLE}
WHERE
project_id = %(project_id)s
AND type='{typename.upper()}'
AND value ILIKE %(value)s
ORDER BY value
LIMIT 5)) AS raw;"""
return f"""SELECT DISTINCT value, type
FROM {TABLE}
WHERE
project_id = %(project_id)s
AND type='{typename.upper()}'
AND value ILIKE %(svalue)s
ORDER BY value
LIMIT 10;"""
def __generic_autocomplete(event: Event):
def f(project_id, value, key=None, source=None):
with pg_client.PostgresClient() as cur:
query = __generic_query(event.ui_type, value_length=len(value))
params = {"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value)}
cur.execute(cur.mogrify(query, params))
return helper.list_to_camel_case(cur.fetchall())
return f
def generic_autocomplete_metas(typename):
def f(project_id, text):
with pg_client.PostgresClient() as cur:
params = {"project_id": project_id, "value": helper.string_to_sql_like(text),
"svalue": helper.string_to_sql_like("^" + text)}
if typename == schemas.FilterType.USER_COUNTRY:
params["value"] = tuple(countries.get_country_code_autocomplete(text))
if len(params["value"]) == 0:
return []
query = cur.mogrify(__generic_query(typename, value_length=len(text)), params)
cur.execute(query)
rows = cur.fetchall()
return rows
return f
def __errors_query(source=None, value_length=None):
if value_length is None or value_length > 2:
return f"""((SELECT DISTINCT ON(lg.message)
lg.message AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
FROM {events.EventType.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.message ILIKE %(svalue)s
AND lg.project_id = %(project_id)s
{"AND source = %(source)s" if source is not None else ""}
LIMIT 5)
UNION DISTINCT
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
FROM {events.EventType.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.name ILIKE %(svalue)s
AND lg.project_id = %(project_id)s
{"AND source = %(source)s" if source is not None else ""}
LIMIT 5)
UNION DISTINCT
(SELECT DISTINCT ON(lg.message)
lg.message AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
FROM {events.EventType.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.message ILIKE %(value)s
AND lg.project_id = %(project_id)s
{"AND source = %(source)s" if source is not None else ""}
LIMIT 5)
UNION DISTINCT
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
FROM {events.EventType.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.name ILIKE %(value)s
AND lg.project_id = %(project_id)s
{"AND source = %(source)s" if source is not None else ""}
LIMIT 5));"""
return f"""((SELECT DISTINCT ON(lg.message)
lg.message AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
FROM {events.EventType.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.message ILIKE %(svalue)s
AND lg.project_id = %(project_id)s
{"AND source = %(source)s" if source is not None else ""}
LIMIT 5)
UNION DISTINCT
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
FROM {events.EventType.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.name ILIKE %(svalue)s
AND lg.project_id = %(project_id)s
{"AND source = %(source)s" if source is not None else ""}
LIMIT 5));"""
def __search_errors(project_id, value, key=None, source=None):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(__errors_query(source,
value_length=len(value)),
{"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value),
"source": source}))
results = helper.list_to_camel_case(cur.fetchall())
return results
def __search_errors_mobile(project_id, value, key=None, source=None):
if len(value) > 2:
query = f"""(SELECT DISTINCT ON(lg.reason)
lg.reason AS value,
'{events.EventType.CRASH_MOBILE.ui_type}' AS type
FROM {events.EventType.CRASH_MOBILE.table} INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
AND lg.reason ILIKE %(svalue)s
LIMIT 5)
UNION ALL
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
'{events.EventType.CRASH_MOBILE.ui_type}' AS type
FROM {events.EventType.CRASH_MOBILE.table} INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
AND lg.name ILIKE %(svalue)s
LIMIT 5)
UNION ALL
(SELECT DISTINCT ON(lg.reason)
lg.reason AS value,
'{events.EventType.CRASH_MOBILE.ui_type}' AS type
FROM {events.EventType.CRASH_MOBILE.table} INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
AND lg.reason ILIKE %(value)s
LIMIT 5)
UNION ALL
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
'{events.EventType.CRASH_MOBILE.ui_type}' AS type
FROM {events.EventType.CRASH_MOBILE.table} INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
AND lg.name ILIKE %(value)s
LIMIT 5);"""
else:
query = f"""(SELECT DISTINCT ON(lg.reason)
lg.reason AS value,
'{events.EventType.CRASH_MOBILE.ui_type}' AS type
FROM {events.EventType.CRASH_MOBILE.table} INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
AND lg.reason ILIKE %(svalue)s
LIMIT 5)
UNION ALL
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
'{events.EventType.CRASH_MOBILE.ui_type}' AS type
FROM {events.EventType.CRASH_MOBILE.table} INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
AND lg.name ILIKE %(svalue)s
LIMIT 5);"""
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(query, {"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value)}))
results = helper.list_to_camel_case(cur.fetchall())
return results
def __search_metadata(project_id, value, key=None, source=None):
meta_keys = metadata.get(project_id=project_id)
meta_keys = {m["key"]: m["index"] for m in meta_keys}
if len(meta_keys) == 0 or key is not None and key not in meta_keys.keys():
return []
sub_from = []
if key is not None:
meta_keys = {key: meta_keys[key]}
for k in meta_keys.keys():
colname = metadata.index_to_colname(meta_keys[k])
if len(value) > 2:
sub_from.append(f"""((SELECT DISTINCT ON ({colname}) {colname} AS value, '{k}' AS key
FROM public.sessions
WHERE project_id = %(project_id)s
AND {colname} ILIKE %(svalue)s LIMIT 5)
UNION
(SELECT DISTINCT ON ({colname}) {colname} AS value, '{k}' AS key
FROM public.sessions
WHERE project_id = %(project_id)s
AND {colname} ILIKE %(value)s LIMIT 5))
""")
else:
sub_from.append(f"""(SELECT DISTINCT ON ({colname}) {colname} AS value, '{k}' AS key
FROM public.sessions
WHERE project_id = %(project_id)s
AND {colname} ILIKE %(svalue)s LIMIT 5)""")
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""\
SELECT DISTINCT ON(key, value) key, value, 'METADATA' AS TYPE
FROM({" UNION ALL ".join(sub_from)}) AS all_metas
LIMIT 5;""", {"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value)}))
results = helper.list_to_camel_case(cur.fetchall())
return results
TYPE_TO_COLUMN = {
schemas.EventType.CLICK: "label",
schemas.EventType.INPUT: "label",
schemas.EventType.LOCATION: "path",
schemas.EventType.CUSTOM: "name",
schemas.FetchFilterType.FETCH_URL: "path",
schemas.GraphqlFilterType.GRAPHQL_NAME: "name",
schemas.EventType.STATE_ACTION: "name",
# For ERROR, sessions search is happening over name OR message,
# for simplicity top 10 is using name only
schemas.EventType.ERROR: "name",
schemas.FilterType.USER_COUNTRY: "user_country",
schemas.FilterType.USER_CITY: "user_city",
schemas.FilterType.USER_STATE: "user_state",
schemas.FilterType.USER_ID: "user_id",
schemas.FilterType.USER_ANONYMOUS_ID: "user_anonymous_id",
schemas.FilterType.USER_OS: "user_os",
schemas.FilterType.USER_BROWSER: "user_browser",
schemas.FilterType.USER_DEVICE: "user_device",
schemas.FilterType.PLATFORM: "platform",
schemas.FilterType.REV_ID: "rev_id",
schemas.FilterType.REFERRER: "referrer",
schemas.FilterType.UTM_SOURCE: "utm_source",
schemas.FilterType.UTM_MEDIUM: "utm_medium",
schemas.FilterType.UTM_CAMPAIGN: "utm_campaign",
}
TYPE_TO_TABLE = {
schemas.EventType.CLICK: "events.clicks",
schemas.EventType.INPUT: "events.inputs",
schemas.EventType.LOCATION: "events.pages",
schemas.EventType.CUSTOM: "events_common.customs",
schemas.FetchFilterType.FETCH_URL: "events_common.requests",
schemas.GraphqlFilterType.GRAPHQL_NAME: "events.graphql",
schemas.EventType.STATE_ACTION: "events.state_actions",
}
def is_top_supported(event_type):
return TYPE_TO_COLUMN.get(event_type, False)
@CachedResponse(table="or_cache.autocomplete_top_values", ttl=5 * 60)
def get_top_values(project_id, event_type, event_key=None):
with pg_client.PostgresClient() as cur:
if schemas.FilterType.has_value(event_type):
if event_type == schemas.FilterType.METADATA \
and (event_key is None \
or (colname := metadata.get_colname_by_key(project_id=project_id, key=event_key)) is None) \
or event_type != schemas.FilterType.METADATA \
and (colname := TYPE_TO_COLUMN.get(event_type)) is None:
return []
query = f"""WITH raw AS (SELECT DISTINCT {colname} AS c_value,
COUNT(1) OVER (PARTITION BY {colname}) AS row_count,
COUNT(1) OVER () AS total_count
FROM public.sessions
WHERE project_id = %(project_id)s
AND {colname} IS NOT NULL
AND sessions.duration IS NOT NULL
AND sessions.duration > 0
ORDER BY row_count DESC
LIMIT 10)
SELECT c_value AS value, row_count, trunc(row_count * 100 / total_count, 2) AS row_percentage
FROM raw;"""
elif event_type == schemas.EventType.ERROR:
colname = TYPE_TO_COLUMN.get(event_type)
query = f"""WITH raw AS (SELECT DISTINCT {colname} AS c_value,
COUNT(1) OVER (PARTITION BY {colname}) AS row_count,
COUNT(1) OVER () AS total_count
FROM public.errors
WHERE project_id = %(project_id)s
AND {colname} IS NOT NULL
AND {colname} != ''
ORDER BY row_count DESC
LIMIT 10)
SELECT c_value AS value, row_count, trunc(row_count * 100 / total_count,2) AS row_percentage
FROM raw;"""
else:
colname = TYPE_TO_COLUMN.get(event_type)
table = TYPE_TO_TABLE.get(event_type)
query = f"""WITH raw AS (SELECT DISTINCT {colname} AS c_value,
COUNT(1) OVER (PARTITION BY {colname}) AS row_count,
COUNT(1) OVER () AS total_count
FROM {table} INNER JOIN public.sessions USING(session_id)
WHERE project_id = %(project_id)s
AND {colname} IS NOT NULL
AND {colname} != ''
AND sessions.duration IS NOT NULL
AND sessions.duration > 0
ORDER BY row_count DESC
LIMIT 10)
SELECT c_value AS value, row_count, trunc(row_count * 100 / total_count,2) AS row_percentage
FROM raw;"""
params = {"project_id": project_id}
query = cur.mogrify(query, params)
logger.debug("--------------------")
logger.debug(query)
logger.debug("--------------------")
cur.execute(query=query)
results = cur.fetchall()
return helper.list_to_camel_case(results)
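get_top_values leans on the COUNT(1) OVER (PARTITION BY ...) / COUNT(1) OVER () pair to get, in one pass, each value's frequency and its share of the total. A tiny standalone illustration of that pattern (run against SQLite with an invented table, purely to show the shape of the result):
# Standalone sketch of the windowed-count pattern used in get_top_values().
# Table and rows are invented; requires SQLite >= 3.25 for window functions.
import sqlite3
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sessions (user_browser TEXT)")
con.executemany("INSERT INTO sessions VALUES (?)",
                [("Chrome",), ("Chrome",), ("Firefox",), ("Safari",)])
rows = con.execute("""
    WITH raw AS (SELECT DISTINCT user_browser                              AS c_value,
                                 COUNT(1) OVER (PARTITION BY user_browser) AS row_count,
                                 COUNT(1) OVER ()                          AS total_count
                 FROM sessions)
    SELECT c_value AS value, row_count, ROUND(row_count * 100.0 / total_count, 2) AS row_percentage
    FROM raw
    ORDER BY row_count DESC""").fetchall()
print(rows)  # e.g. [('Chrome', 2, 50.0), ('Firefox', 1, 25.0), ('Safari', 1, 25.0)]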

View file

@ -1,143 +1,116 @@
from chalicelib.core import projects
from chalicelib.core import users
from chalicelib.core.log_tools import datadog, stackdriver, sentry
from chalicelib.core.modules import TENANT_CONDITION
from chalicelib.utils import pg_client
from chalicelib.core import projects, log_tool_datadog, log_tool_stackdriver, log_tool_sentry
from chalicelib.core import users
def get_state(tenant_id):
pids = projects.get_projects_ids(tenant_id=tenant_id)
my_projects = projects.get_projects(tenant_id=tenant_id, recording_state=False)
pids = [s["projectId"] for s in my_projects]
with pg_client.PostgresClient() as cur:
recorded = False
meta = False
if len(pids) > 0:
cur.execute(
cur.mogrify(
"""SELECT EXISTS(( SELECT 1
FROM public.sessions AS s
WHERE s.project_id IN %(ids)s)) AS exists;""",
{"ids": tuple(pids)},
)
cur.mogrify("""\
SELECT
COUNT(*)
FROM public.sessions AS s
where s.project_id IN %(ids)s
LIMIT 1;""",
{"ids": tuple(pids)})
)
recorded = cur.fetchone()["exists"]
recorded = cur.fetchone()["count"] > 0
meta = False
if recorded:
query = cur.mogrify(
f"""SELECT EXISTS((SELECT 1
FROM public.projects AS p
LEFT JOIN LATERAL ( SELECT 1
FROM public.sessions
WHERE sessions.project_id = p.project_id
AND sessions.user_id IS NOT NULL
LIMIT 1) AS sessions(user_id) ON (TRUE)
WHERE {TENANT_CONDITION} AND p.deleted_at ISNULL
AND ( sessions.user_id IS NOT NULL OR p.metadata_1 IS NOT NULL
OR p.metadata_2 IS NOT NULL OR p.metadata_3 IS NOT NULL
OR p.metadata_4 IS NOT NULL OR p.metadata_5 IS NOT NULL
OR p.metadata_6 IS NOT NULL OR p.metadata_7 IS NOT NULL
OR p.metadata_8 IS NOT NULL OR p.metadata_9 IS NOT NULL
OR p.metadata_10 IS NOT NULL )
)) AS exists;""",
{"tenant_id": tenant_id},
)
cur.execute(query)
cur.execute("""SELECT SUM((SELECT COUNT(t.meta)
FROM (VALUES (p.metadata_1), (p.metadata_2), (p.metadata_3), (p.metadata_4), (p.metadata_5),
(p.metadata_6), (p.metadata_7), (p.metadata_8), (p.metadata_9), (p.metadata_10),
(sessions.user_id)) AS t(meta)
WHERE t.meta NOTNULL))
FROM public.projects AS p
LEFT JOIN LATERAL ( SELECT 'defined'
FROM public.sessions
WHERE sessions.project_id=p.project_id AND sessions.user_id IS NOT NULL
LIMIT 1) AS sessions(user_id) ON(TRUE)
WHERE p.deleted_at ISNULL;"""
)
meta = cur.fetchone()["exists"]
meta = cur.fetchone()["sum"] > 0
return [
{
"task": "Install OpenReplay",
"done": recorded,
"URL": "https://docs.openreplay.com/getting-started/quick-start",
},
{
"task": "Identify Users",
"done": meta,
"URL": "https://docs.openreplay.com/data-privacy-security/metadata",
},
{
"task": "Invite Team Members",
"done": len(users.get_members(tenant_id=tenant_id)) > 1,
"URL": "https://app.openreplay.com/client/manage-users",
},
{
"task": "Integrations",
"done": len(datadog.get_all(tenant_id=tenant_id)) > 0
or len(sentry.get_all(tenant_id=tenant_id)) > 0
or len(stackdriver.get_all(tenant_id=tenant_id)) > 0,
"URL": "https://docs.openreplay.com/integrations",
},
{"task": "Install OpenReplay",
"done": recorded,
"URL": "https://docs.openreplay.com/getting-started/quick-start"},
{"task": "Identify Users",
"done": meta,
"URL": "https://docs.openreplay.com/data-privacy-security/metadata"},
{"task": "Invite Team Members",
"done": len(users.get_members(tenant_id=tenant_id)) > 1,
"URL": "https://app.openreplay.com/client/manage-users"},
{"task": "Integrations",
"done": len(log_tool_datadog.get_all(tenant_id=tenant_id)) > 0 \
or len(log_tool_sentry.get_all(tenant_id=tenant_id)) > 0 \
or len(log_tool_stackdriver.get_all(tenant_id=tenant_id)) > 0,
"URL": "https://docs.openreplay.com/integrations"}
]
def get_state_installing(tenant_id):
pids = projects.get_projects_ids(tenant_id=tenant_id)
my_projects = projects.get_projects(tenant_id=tenant_id, recording_state=False)
pids = [s["projectId"] for s in my_projects]
with pg_client.PostgresClient() as cur:
recorded = False
if len(pids) > 0:
cur.execute(
cur.mogrify(
"""SELECT EXISTS(( SELECT 1
FROM public.sessions AS s
WHERE s.project_id IN %(ids)s)) AS exists;""",
{"ids": tuple(pids)},
)
cur.mogrify("""\
SELECT
COUNT(*)
FROM public.sessions AS s
where s.project_id IN %(ids)s
LIMIT 1;""",
{"ids": tuple(pids)})
)
recorded = cur.fetchone()["exists"]
recorded = cur.fetchone()["count"] > 0
return {
"task": "Install OpenReplay",
"done": recorded,
"URL": "https://docs.openreplay.com/getting-started/quick-start",
}
return {"task": "Install OpenReplay",
"done": recorded,
"URL": "https://docs.openreplay.com/getting-started/quick-start"}
def get_state_identify_users(tenant_id):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
f"""SELECT EXISTS((SELECT 1
FROM public.projects AS p
LEFT JOIN LATERAL ( SELECT 1
FROM public.sessions
WHERE sessions.project_id = p.project_id
AND sessions.user_id IS NOT NULL
LIMIT 1) AS sessions(user_id) ON (TRUE)
WHERE {TENANT_CONDITION} AND p.deleted_at ISNULL
AND ( sessions.user_id IS NOT NULL OR p.metadata_1 IS NOT NULL
OR p.metadata_2 IS NOT NULL OR p.metadata_3 IS NOT NULL
OR p.metadata_4 IS NOT NULL OR p.metadata_5 IS NOT NULL
OR p.metadata_6 IS NOT NULL OR p.metadata_7 IS NOT NULL
OR p.metadata_8 IS NOT NULL OR p.metadata_9 IS NOT NULL
OR p.metadata_10 IS NOT NULL )
)) AS exists;""",
{"tenant_id": tenant_id},
)
cur.execute(query)
cur.execute(
"""SELECT SUM((SELECT COUNT(t.meta)
FROM (VALUES (p.metadata_1), (p.metadata_2), (p.metadata_3), (p.metadata_4), (p.metadata_5),
(p.metadata_6), (p.metadata_7), (p.metadata_8), (p.metadata_9), (p.metadata_10),
(sessions.user_id)) AS t(meta)
WHERE t.meta NOTNULL))
FROM public.projects AS p
LEFT JOIN LATERAL ( SELECT 'defined'
FROM public.sessions
WHERE sessions.project_id=p.project_id AND sessions.user_id IS NOT NULL
LIMIT 1) AS sessions(user_id) ON(TRUE)
WHERE p.deleted_at ISNULL;""")
meta = cur.fetchone()["exists"]
meta = cur.fetchone()["sum"] > 0
return {
"task": "Identify Users",
"done": meta,
"URL": "https://docs.openreplay.com/data-privacy-security/metadata",
}
return {"task": "Identify Users",
"done": meta,
"URL": "https://docs.openreplay.com/data-privacy-security/metadata"}
def get_state_manage_users(tenant_id):
return {
"task": "Invite Team Members",
"done": len(users.get_members(tenant_id=tenant_id)) > 1,
"URL": "https://app.openreplay.com/client/manage-users",
}
return {"task": "Invite Team Members",
"done": len(users.get_members(tenant_id=tenant_id)) > 1,
"URL": "https://app.openreplay.com/client/manage-users"}
def get_state_integrations(tenant_id):
return {
"task": "Integrations",
"done": len(datadog.get_all(tenant_id=tenant_id)) > 0
or len(sentry.get_all(tenant_id=tenant_id)) > 0
or len(stackdriver.get_all(tenant_id=tenant_id)) > 0,
"URL": "https://docs.openreplay.com/integrations",
}
return {"task": "Integrations",
"done": len(log_tool_datadog.get_all(tenant_id=tenant_id)) > 0 \
or len(log_tool_sentry.get_all(tenant_id=tenant_id)) > 0 \
or len(log_tool_stackdriver.get_all(tenant_id=tenant_id)) > 0,
"URL": "https://docs.openreplay.com/integrations"}

View file

@ -1,35 +0,0 @@
from chalicelib.utils import pg_client
from chalicelib.utils.storage import StorageClient
from decouple import config
def get_canvas_presigned_urls(session_id, project_id):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify("""\
SELECT *
FROM events.canvas_recordings
WHERE session_id = %(session_id)s
ORDER BY timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows = cur.fetchall()
urls = []
for i in range(len(rows)):
params = {
"sessionId": session_id,
"projectId": project_id,
"recordingId": rows[i]["recording_id"]
}
oldKey = "%(sessionId)s/%(recordingId)s.mp4" % params
key = config("CANVAS_PATTERN", default="%(sessionId)s/%(recordingId)s.tar.zst") % params
urls.append(StorageClient.get_presigned_url_for_sharing(
bucket=config("CANVAS_BUCKET", default=config("sessions_bucket")),
expires_in=config("PRESIGNED_URL_EXPIRATION", cast=int, default=900),
key=key
))
urls.append(StorageClient.get_presigned_url_for_sharing(
bucket=config("CANVAS_BUCKET", default=config("sessions_bucket")),
expires_in=config("PRESIGNED_URL_EXPIRATION", cast=int, default=900),
key=oldKey
))
return urls
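The object keys above are produced with old-style %-formatting against a params dict, so the default CANVAS_PATTERN expands as follows (the ids are invented for the example):
# How the default CANVAS_PATTERN expands; ids are invented.
params = {"sessionId": 123, "projectId": 7, "recordingId": 456}
print("%(sessionId)s/%(recordingId)s.mp4" % params)      # 123/456.mp4  (legacy key)
print("%(sessionId)s/%(recordingId)s.tar.zst" % params)  # 123/456.tar.zst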

View file

@ -0,0 +1,125 @@
import requests
from decouple import config
from datetime import datetime
from chalicelib.core import webhook
class Slack:
@classmethod
def add_channel(cls, tenant_id, **args):
url = args["url"]
name = args["name"]
if cls.say_hello(url):
return webhook.add(tenant_id=tenant_id,
endpoint=url,
webhook_type="slack",
name=name)
return None
@classmethod
def say_hello(cls, url):
r = requests.post(
url=url,
json={
"attachments": [
{
"text": "Welcome to OpenReplay",
"ts": datetime.now().timestamp(),
}
]
})
if r.status_code != 200:
print("slack integration failed")
print(r.text)
return False
return True
@classmethod
def send_text(cls, tenant_id, webhook_id, text, **args):
integration = cls.__get(tenant_id=tenant_id, integration_id=webhook_id)
if integration is None:
return {"errors": ["slack integration not found"]}
print("====> sending slack notification")
r = requests.post(
url=integration["endpoint"],
json={
"attachments": [
{
"text": text,
"ts": datetime.now().timestamp(),
**args
}
]
})
print(r)
print(r.text)
return {"data": r.text}
@classmethod
def send_batch(cls, tenant_id, webhook_id, attachments):
integration = cls.__get(tenant_id=tenant_id, integration_id=webhook_id)
if integration is None:
return {"errors": ["slack integration not found"]}
print(f"====> sending slack batch notification: {len(attachments)}")
for i in range(0, len(attachments), 100):
r = requests.post(
url=integration["endpoint"],
json={"attachments": attachments[i:i + 100]})
if r.status_code != 200:
print("!!!! something went wrong")
print(r)
print(r.text)
@classmethod
def __share_to_slack(cls, tenant_id, integration_id, fallback, pretext, title, title_link, text):
integration = cls.__get(tenant_id=tenant_id, integration_id=integration_id)
if integration is None:
return {"errors": ["slack integration not found"]}
r = requests.post(
url=integration["endpoint"],
json={
"attachments": [
{
"fallback": fallback,
"pretext": pretext,
"title": title,
"title_link": title_link,
"text": text,
"ts": datetime.now().timestamp()
}
]
})
return r.text
@classmethod
def share_session(cls, tenant_id, project_id, session_id, user, comment, integration_id=None):
args = {"fallback": f"{user} has shared the below session!",
"pretext": f"{user} has shared the below session!",
"title": f"{config('SITE_URL')}/{project_id}/session/{session_id}",
"title_link": f"{config('SITE_URL')}/{project_id}/session/{session_id}",
"text": comment}
return {"data": cls.__share_to_slack(tenant_id, integration_id, **args)}
@classmethod
def share_error(cls, tenant_id, project_id, error_id, user, comment, integration_id=None):
args = {"fallback": f"{user} has shared the below error!",
"pretext": f"{user} has shared the below error!",
"title": f"{config('SITE_URL')}/{project_id}/errors/{error_id}",
"title_link": f"{config('SITE_URL')}/{project_id}/errors/{error_id}",
"text": comment}
return {"data": cls.__share_to_slack(tenant_id, integration_id, **args)}
@classmethod
def has_slack(cls, tenant_id):
integration = cls.__get(tenant_id=tenant_id)
return not (integration is None or len(integration) == 0)
@classmethod
def __get(cls, tenant_id, integration_id=None):
if integration_id is not None:
return webhook.get(tenant_id=tenant_id, webhook_id=integration_id)
integrations = webhook.get_by_type(tenant_id=tenant_id, webhook_type="slack")
if integrations is None or len(integrations) == 0:
return None
return integrations[0]

View file

@ -1 +0,0 @@
from . import collaboration_base as _

View file

@ -1,45 +0,0 @@
from abc import ABC, abstractmethod
import schemas
class BaseCollaboration(ABC):
@classmethod
@abstractmethod
def add(cls, tenant_id, data: schemas.AddCollaborationSchema):
pass
@classmethod
@abstractmethod
def say_hello(cls, url):
pass
@classmethod
@abstractmethod
def send_raw(cls, tenant_id, webhook_id, body):
pass
@classmethod
@abstractmethod
def send_batch(cls, tenant_id, webhook_id, attachments):
pass
@classmethod
@abstractmethod
def __share(cls, tenant_id, integration_id, attachments, extra=None):
pass
@classmethod
@abstractmethod
def share_session(cls, tenant_id, project_id, session_id, user, comment, project_name=None, integration_id=None):
pass
@classmethod
@abstractmethod
def share_error(cls, tenant_id, project_id, error_id, user, comment, project_name=None, integration_id=None):
pass
@classmethod
@abstractmethod
def get_integration(cls, tenant_id, integration_id=None):
pass

View file

@ -1,171 +0,0 @@
import logging
import requests
from decouple import config
from fastapi import HTTPException, status
import schemas
from chalicelib.core import webhook
from chalicelib.core.collaborations.collaboration_base import BaseCollaboration
logger = logging.getLogger(__name__)
class MSTeams(BaseCollaboration):
@classmethod
def add(cls, tenant_id, data: schemas.AddCollaborationSchema):
if webhook.exists_by_name(tenant_id=tenant_id, name=data.name, exclude_id=None,
webhook_type=schemas.WebhookType.MSTEAMS):
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=f"name already exists.")
if cls.say_hello(data.url):
return webhook.add(tenant_id=tenant_id,
endpoint=data.url.unicode_string(),
webhook_type=schemas.WebhookType.MSTEAMS,
name=data.name)
return None
@classmethod
def say_hello(cls, url):
try:
r = requests.post(
url=url,
json={
"@type": "MessageCard",
"@context": "https://schema.org/extensions",
"summary": "Welcome to OpenReplay",
"title": "Welcome to OpenReplay"
},
timeout=3)
if r.status_code != 200:
logger.warning("MSTeams integration failed")
logger.warning(r.text)
return False
except Exception as e:
logger.warning("!!! MSTeams integration failed")
logger.exception(e)
return False
return True
@classmethod
def send_raw(cls, tenant_id, webhook_id, body):
integration = cls.get_integration(tenant_id=tenant_id, integration_id=webhook_id)
if integration is None:
return {"errors": ["msteams integration not found"]}
try:
r = requests.post(
url=integration["endpoint"],
json=body,
timeout=5)
if r.status_code != 200:
logger.warning(f"!! issue sending msteams raw; webhookId:{webhook_id} code:{r.status_code}")
logger.warning(r.text)
return None
except requests.exceptions.Timeout:
logger.warning(f"!! Timeout sending msteams raw webhookId:{webhook_id}")
return None
except Exception as e:
logger.warning(f"!! Issue sending msteams raw webhookId:{webhook_id}")
logger.warning(e)
return None
return {"data": r.text}
@classmethod
def send_batch(cls, tenant_id, webhook_id, attachments):
integration = cls.get_integration(tenant_id=tenant_id, integration_id=webhook_id)
if integration is None:
return {"errors": ["msteams integration not found"]}
logger.debug(f"====> sending msteams batch notification: {len(attachments)}")
for i in range(0, len(attachments), 50):
part = attachments[i:i + 50]
for j in range(1, len(part), 2):
part.insert(j, {"text": "***"})
r = requests.post(url=integration["endpoint"],
json={
"@type": "MessageCard",
"@context": "http://schema.org/extensions",
"summary": part[0]["activityTitle"],
"sections": part
})
if r.status_code != 200:
logger.warning("!!!! something went wrong")
logger.warning(r.text)
@classmethod
def __share(cls, tenant_id, integration_id, attachement, extra=None):
if extra is None:
extra = {}
integration = cls.get_integration(tenant_id=tenant_id, integration_id=integration_id)
if integration is None:
return {"errors": ["Microsoft Teams integration not found"]}
r = requests.post(
url=integration["endpoint"],
json={
"@type": "MessageCard",
"@context": "http://schema.org/extensions",
"sections": [attachement],
**extra
})
return r.text
@classmethod
def share_session(cls, tenant_id, project_id, session_id, user, comment, project_name=None, integration_id=None):
title = f"*{user}* has shared the below session!"
link = f"{config('SITE_URL')}/{project_id}/session/{session_id}"
args = {
"activityTitle": title,
"facts": [
{
"name": "Session:",
"value": link
}],
"markdown": True
}
if project_name and len(project_name) > 0:
args["activitySubtitle"] = f"On Project *{project_name}*"
if comment and len(comment) > 0:
args["facts"].append({
"name": "Comment:",
"value": comment
})
data = cls.__share(tenant_id, integration_id, attachement=args, extra={"summary": title})
if "errors" in data:
return data
return {"data": data}
@classmethod
def share_error(cls, tenant_id, project_id, error_id, user, comment, project_name=None, integration_id=None):
title = f"*{user}* has shared the below error!"
link = f"{config('SITE_URL')}/{project_id}/errors/{error_id}"
args = {
"activityTitle": title,
"facts": [
{
"name": "Session:",
"value": link
}],
"markdown": True
}
if project_name and len(project_name) > 0:
args["activitySubtitle"] = f"On Project *{project_name}*"
if comment and len(comment) > 0:
args["facts"].append({
"name": "Comment:",
"value": comment
})
data = cls.__share(tenant_id, integration_id, attachement=args, extra={"summary": title})
if "errors" in data:
return data
return {"data": data}
@classmethod
def get_integration(cls, tenant_id, integration_id=None):
if integration_id is not None:
return webhook.get_webhook(tenant_id=tenant_id, webhook_id=integration_id,
webhook_type=schemas.WebhookType.MSTEAMS)
integrations = webhook.get_by_type(tenant_id=tenant_id, webhook_type=schemas.WebhookType.MSTEAMS)
if integrations is None or len(integrations) == 0:
return None
return integrations[0]

View file

@ -1,126 +0,0 @@
from datetime import datetime
import requests
from decouple import config
from fastapi import HTTPException, status
import schemas
from chalicelib.core import webhook
from chalicelib.core.collaborations.collaboration_base import BaseCollaboration
class Slack(BaseCollaboration):
@classmethod
def add(cls, tenant_id, data: schemas.AddCollaborationSchema):
if webhook.exists_by_name(tenant_id=tenant_id, name=data.name, exclude_id=None,
webhook_type=schemas.WebhookType.SLACK):
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=f"name already exists.")
if cls.say_hello(data.url):
return webhook.add(tenant_id=tenant_id,
endpoint=data.url.unicode_string(),
webhook_type=schemas.WebhookType.SLACK,
name=data.name)
return None
@classmethod
def say_hello(cls, url):
r = requests.post(
url=url,
json={
"attachments": [
{
"text": "Welcome to OpenReplay",
"ts": datetime.now().timestamp(),
}
]
})
if r.status_code != 200:
print("slack integration failed")
print(r.text)
return False
return True
@classmethod
def send_raw(cls, tenant_id, webhook_id, body):
integration = cls.get_integration(tenant_id=tenant_id, integration_id=webhook_id)
if integration is None:
return {"errors": ["slack integration not found"]}
try:
r = requests.post(
url=integration["endpoint"],
json=body,
timeout=5)
if r.status_code != 200:
print(f"!! issue sending slack raw; webhookId:{webhook_id} code:{r.status_code}")
print(r.text)
return None
except requests.exceptions.Timeout:
print(f"!! Timeout sending slack raw webhookId:{webhook_id}")
return None
except Exception as e:
print(f"!! Issue sending slack raw webhookId:{webhook_id}")
print(str(e))
return None
return {"data": r.text}
@classmethod
def send_batch(cls, tenant_id, webhook_id, attachments):
integration = cls.get_integration(tenant_id=tenant_id, integration_id=webhook_id)
if integration is None:
return {"errors": ["slack integration not found"]}
print(f"====> sending slack batch notification: {len(attachments)}")
for i in range(0, len(attachments), 100):
r = requests.post(
url=integration["endpoint"],
json={"attachments": attachments[i:i + 100]})
if r.status_code != 200:
print("!!!! something went wrong while sending to:")
print(integration)
print(r)
print(r.text)
@classmethod
def __share(cls, tenant_id, integration_id, attachement, extra=None):
if extra is None:
extra = {}
integration = cls.get_integration(tenant_id=tenant_id, integration_id=integration_id)
if integration is None:
return {"errors": ["slack integration not found"]}
attachement["ts"] = datetime.now().timestamp()
r = requests.post(url=integration["endpoint"], json={"attachments": [attachement], **extra})
return r.text
@classmethod
def share_session(cls, tenant_id, project_id, session_id, user, comment, project_name=None, integration_id=None):
args = {"fallback": f"{user} has shared the below session!",
"pretext": f"{user} has shared the below session!",
"title": f"{config('SITE_URL')}/{project_id}/session/{session_id}",
"title_link": f"{config('SITE_URL')}/{project_id}/session/{session_id}",
"text": comment}
data = cls.__share(tenant_id, integration_id, attachement=args)
if "errors" in data:
return data
return {"data": data}
@classmethod
def share_error(cls, tenant_id, project_id, error_id, user, comment, project_name=None, integration_id=None):
args = {"fallback": f"{user} has shared the below error!",
"pretext": f"{user} has shared the below error!",
"title": f"{config('SITE_URL')}/{project_id}/errors/{error_id}",
"title_link": f"{config('SITE_URL')}/{project_id}/errors/{error_id}",
"text": comment}
data = cls.__share(tenant_id, integration_id, attachement=args)
if "errors" in data:
return data
return {"data": data}
@classmethod
def get_integration(cls, tenant_id, integration_id=None):
if integration_id is not None:
return webhook.get_webhook(tenant_id=tenant_id, webhook_id=integration_id,
webhook_type=schemas.WebhookType.SLACK)
integrations = webhook.get_by_type(tenant_id=tenant_id, webhook_type=schemas.WebhookType.SLACK)
if integrations is None or len(integrations) == 0:
return None
return integrations[0]

View file

@ -1,296 +0,0 @@
COUNTRIES = {
"AC": "Ascension Island",
"AD": "Andorra",
"AE": "United Arab Emirates",
"AF": "Afghanistan",
"AG": "Antigua And Barbuda",
"AI": "Anguilla",
"AL": "Albania",
"AM": "Armenia",
"AN": "Netherlands Antilles",
"AO": "Angola",
"AQ": "Antarctica",
"AR": "Argentina",
"AS": "American Samoa",
"AT": "Austria",
"AU": "Australia",
"AW": "Aruba",
"AX": "Åland Islands",
"AZ": "Azerbaijan",
"BA": "Bosnia & Herzegovina",
"BB": "Barbados",
"BD": "Bangladesh",
"BE": "Belgium",
"BF": "Burkina Faso",
"BG": "Bulgaria",
"BH": "Bahrain",
"BI": "Burundi",
"BJ": "Benin",
"BL": "Saint Barthélemy",
"BM": "Bermuda",
"BN": "Brunei Darussalam",
"BO": "Bolivia",
"BQ": "Bonaire, Saint Eustatius And Saba",
"BR": "Brazil",
"BS": "Bahamas",
"BT": "Bhutan",
"BU": "Burma",
"BV": "Bouvet Island",
"BW": "Botswana",
"BY": "Belarus",
"BZ": "Belize",
"CA": "Canada",
"CC": "Cocos Islands",
"CD": "Congo",
"CF": "Central African Republic",
"CG": "Congo",
"CH": "Switzerland",
"CI": "Côte d'Ivoire",
"CK": "Cook Islands",
"CL": "Chile",
"CM": "Cameroon",
"CN": "China",
"CO": "Colombia",
"CP": "Clipperton Island",
"CR": "Costa Rica",
"CS": "Serbia and Montenegro",
"CT": "Canton and Enderbury Islands",
"CU": "Cuba",
"CV": "Cabo Verde",
"CW": "Curacao",
"CX": "Christmas Island",
"CY": "Cyprus",
"CZ": "Czech Republic",
"DD": "Germany",
"DE": "Germany",
"DG": "Diego Garcia",
"DJ": "Djibouti",
"DK": "Denmark",
"DM": "Dominica",
"DO": "Dominican Republic",
"DY": "Dahomey",
"DZ": "Algeria",
"EA": "Ceuta, Mulilla",
"EC": "Ecuador",
"EE": "Estonia",
"EG": "Egypt",
"EH": "Western Sahara",
"ER": "Eritrea",
"ES": "Spain",
"ET": "Ethiopia",
"FI": "Finland",
"FJ": "Fiji",
"FK": "Falkland Islands",
"FM": "Micronesia",
"FO": "Faroe Islands",
"FQ": "French Southern and Antarctic Territories",
"FR": "France",
"FX": "France, Metropolitan",
"GA": "Gabon",
"GB": "United Kingdom",
"GD": "Grenada",
"GE": "Georgia",
"GF": "French Guiana",
"GG": "Guernsey",
"GH": "Ghana",
"GI": "Gibraltar",
"GL": "Greenland",
"GM": "Gambia",
"GN": "Guinea",
"GP": "Guadeloupe",
"GQ": "Equatorial Guinea",
"GR": "Greece",
"GS": "South Georgia And The South Sandwich Islands",
"GT": "Guatemala",
"GU": "Guam",
"GW": "Guinea-bissau",
"GY": "Guyana",
"HK": "Hong Kong",
"HM": "Heard Island And McDonald Islands",
"HN": "Honduras",
"HR": "Croatia",
"HT": "Haiti",
"HU": "Hungary",
"HV": "Upper Volta",
"IC": "Canary Islands",
"ID": "Indonesia",
"IE": "Ireland",
"IL": "Israel",
"IM": "Isle Of Man",
"IN": "India",
"IO": "British Indian Ocean Territory",
"IQ": "Iraq",
"IR": "Iran",
"IS": "Iceland",
"IT": "Italy",
"JE": "Jersey",
"JM": "Jamaica",
"JO": "Jordan",
"JP": "Japan",
"JT": "Johnston Island",
"KE": "Kenya",
"KG": "Kyrgyzstan",
"KH": "Cambodia",
"KI": "Kiribati",
"KM": "Comoros",
"KN": "Saint Kitts And Nevis",
"KP": "Korea",
"KR": "Korea",
"KW": "Kuwait",
"KY": "Cayman Islands",
"KZ": "Kazakhstan",
"LA": "Laos",
"LB": "Lebanon",
"LC": "Saint Lucia",
"LI": "Liechtenstein",
"LK": "Sri Lanka",
"LR": "Liberia",
"LS": "Lesotho",
"LT": "Lithuania",
"LU": "Luxembourg",
"LV": "Latvia",
"LY": "Libya",
"MA": "Morocco",
"MC": "Monaco",
"MD": "Moldova",
"ME": "Montenegro",
"MF": "Saint Martin",
"MG": "Madagascar",
"MH": "Marshall Islands",
"MI": "Midway Islands",
"MK": "Macedonia",
"ML": "Mali",
"MM": "Myanmar",
"MN": "Mongolia",
"MO": "Macao",
"MP": "Northern Mariana Islands",
"MQ": "Martinique",
"MR": "Mauritania",
"MS": "Montserrat",
"MT": "Malta",
"MU": "Mauritius",
"MV": "Maldives",
"MW": "Malawi",
"MX": "Mexico",
"MY": "Malaysia",
"MZ": "Mozambique",
"NA": "Namibia",
"NC": "New Caledonia",
"NE": "Niger",
"NF": "Norfolk Island",
"NG": "Nigeria",
"NH": "New Hebrides",
"NI": "Nicaragua",
"NL": "Netherlands",
"NO": "Norway",
"NP": "Nepal",
"NQ": "Dronning Maud Land",
"NR": "Nauru",
"NT": "Neutral Zone",
"NU": "Niue",
"NZ": "New Zealand",
"OM": "Oman",
"PA": "Panama",
"PC": "Pacific Islands",
"PE": "Peru",
"PF": "French Polynesia",
"PG": "Papua New Guinea",
"PH": "Philippines",
"PK": "Pakistan",
"PL": "Poland",
"PM": "Saint Pierre And Miquelon",
"PN": "Pitcairn",
"PR": "Puerto Rico",
"PS": "Palestine",
"PT": "Portugal",
"PU": "U.S. Miscellaneous Pacific Islands",
"PW": "Palau",
"PY": "Paraguay",
"PZ": "Panama Canal Zone",
"QA": "Qatar",
"RE": "Reunion",
"RH": "Southern Rhodesia",
"RO": "Romania",
"RS": "Serbia",
"RU": "Russian Federation",
"RW": "Rwanda",
"SA": "Saudi Arabia",
"SB": "Solomon Islands",
"SC": "Seychelles",
"SD": "Sudan",
"SE": "Sweden",
"SG": "Singapore",
"SH": "Saint Helena, Ascension And Tristan Da Cunha",
"SI": "Slovenia",
"SJ": "Svalbard And Jan Mayen",
"SK": "Slovakia",
"SL": "Sierra Leone",
"SM": "San Marino",
"SN": "Senegal",
"SO": "Somalia",
"SR": "Suriname",
"SS": "South Sudan",
"ST": "Sao Tome and Principe",
"SU": "USSR",
"SV": "El Salvador",
"SX": "Sint Maarten",
"SY": "Syrian Arab Republic",
"SZ": "Swaziland",
"TA": "Tristan de Cunha",
"TC": "Turks And Caicos Islands",
"TD": "Chad",
"TF": "French Southern Territories",
"TG": "Togo",
"TH": "Thailand",
"TJ": "Tajikistan",
"TK": "Tokelau",
"TL": "Timor-Leste",
"TM": "Turkmenistan",
"TN": "Tunisia",
"TO": "Tonga",
"TP": "East Timor",
"TR": "Turkey",
"TT": "Trinidad And Tobago",
"TV": "Tuvalu",
"TW": "Taiwan",
"TZ": "Tanzania",
"UA": "Ukraine",
"UG": "Uganda",
"UM": "United States Minor Outlying Islands",
"UN": "United Nations",
"US": "United States",
"UY": "Uruguay",
"UZ": "Uzbekistan",
"VA": "Vatican City State",
"VC": "Saint Vincent And The Grenadines",
"VD": "VietNam",
"VE": "Venezuela",
"VG": "Virgin Islands (British)",
"VI": "Virgin Islands (US)",
"VN": "VietNam",
"VU": "Vanuatu",
"WF": "Wallis And Futuna",
"WK": "Wake Island",
"WS": "Samoa",
"XK": "Kosovo",
"YD": "Yemen",
"YE": "Yemen",
"YT": "Mayotte",
"YU": "Yugoslavia",
"ZA": "South Africa",
"ZM": "Zambia",
"ZR": "Zaire",
"ZW": "Zimbabwe",
}
def get_country_code_autocomplete(text):
if text is None or len(text) == 0:
return []
results = []
for code in COUNTRIES:
if text.lower() in code.lower() \
or text.lower() in COUNTRIES[code].lower():
results.append(code)
return results
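get_country_code_autocomplete matches the text against both the two-letter code and the country name, case-insensitively, so a short fragment can fan out to several codes. For example:
# Matching is case-insensitive on both the ISO code and the country name.
print(get_country_code_autocomplete("ger"))
# -> ['DD', 'DE', 'DZ', 'NE', 'NG']  (Germany x2, Algeria, Niger, Nigeria)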

View file

@ -0,0 +1,283 @@
import json
import schemas
from chalicelib.core import sessions
from chalicelib.utils import helper, pg_client
from chalicelib.utils.TimeUTC import TimeUTC
def try_live(project_id, data: schemas.TryCustomMetricsSchema):
results = []
for i, s in enumerate(data.series):
s.filter.startDate = data.startDate
s.filter.endDate = data.endDate
results.append(sessions.search2_series(data=s.filter, project_id=project_id, density=data.density,
view_type=data.viewType))
if data.viewType == schemas.MetricViewType.progress:
r = {"count": results[-1]}
diff = s.filter.endDate - s.filter.startDate
s.filter.startDate = data.endDate
s.filter.endDate = data.endDate - diff
r["previousCount"] = sessions.search2_series(data=s.filter, project_id=project_id, density=data.density,
view_type=data.viewType)
r["countProgress"] = helper.__progress(old_val=r["previousCount"], new_val=r["count"])
r["seriesName"] = s.name if s.name else i + 1
r["seriesId"] = s.series_id if s.series_id else None
results[-1] = r
return results
def merged_live(project_id, data: schemas.TryCustomMetricsSchema):
series_charts = try_live(project_id=project_id, data=data)
if data.viewType == schemas.MetricViewType.progress:
return series_charts
results = [{}] * len(series_charts[0])
for i in range(len(results)):
for j, series_chart in enumerate(series_charts):
results[i] = {**results[i], "timestamp": series_chart[i]["timestamp"],
data.series[j].name if data.series[j].name else j + 1: series_chart[i]["count"]}
return results
def make_chart(project_id, user_id, metric_id, data: schemas.CustomMetricChartPayloadSchema):
metric = get(metric_id=metric_id, project_id=project_id, user_id=user_id, flatten=False)
if metric is None:
return None
metric: schemas.TryCustomMetricsSchema = schemas.TryCustomMetricsSchema.parse_obj({**data.dict(), **metric})
series_charts = try_live(project_id=project_id, data=metric)
if data.viewType == schemas.MetricViewType.progress:
return series_charts
results = [{}] * len(series_charts[0])
for i in range(len(results)):
for j, series_chart in enumerate(series_charts):
results[i] = {**results[i], "timestamp": series_chart[i]["timestamp"],
metric.series[j].name: series_chart[i]["count"]}
return results
def get_sessions(project_id, user_id, metric_id, data: schemas.CustomMetricRawPayloadSchema):
metric = get(metric_id=metric_id, project_id=project_id, user_id=user_id, flatten=False)
if metric is None:
return None
metric: schemas.TryCustomMetricsSchema = schemas.TryCustomMetricsSchema.parse_obj({**data.dict(), **metric})
results = []
for s in metric.series:
s.filter.startDate = data.startDate
s.filter.endDate = data.endDate
results.append({"seriesId": s.series_id, "seriesName": s.name,
**sessions.search2_pg(data=s.filter, project_id=project_id, user_id=user_id)})
return results
def create(project_id, user_id, data: schemas.CreateCustomMetricsSchema):
with pg_client.PostgresClient() as cur:
_data = {}
for i, s in enumerate(data.series):
for k in s.dict().keys():
_data[f"{k}_{i}"] = s.__getattribute__(k)
_data[f"index_{i}"] = i
_data[f"filter_{i}"] = s.filter.json()
series_len = len(data.series)
data.series = None
params = {"user_id": user_id, "project_id": project_id, **data.dict(), **_data}
query = cur.mogrify(f"""\
WITH m AS (INSERT INTO metrics (project_id, user_id, name)
VALUES (%(project_id)s, %(user_id)s, %(name)s)
RETURNING *)
INSERT
INTO metric_series(metric_id, index, name, filter)
VALUES {",".join([f"((SELECT metric_id FROM m), %(index_{i})s, %(name_{i})s, %(filter_{i})s::jsonb)"
for i in range(series_len)])}
RETURNING metric_id;""", params)
cur.execute(
query
)
r = cur.fetchone()
return {"data": get(metric_id=r["metric_id"], project_id=project_id, user_id=user_id)}
def __get_series_id(metric_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(
"""SELECT series_id
FROM metric_series
WHERE metric_series.metric_id = %(metric_id)s
AND metric_series.deleted_at ISNULL;""",
{"metric_id": metric_id}
)
)
rows = cur.fetchall()
return [r["series_id"] for r in rows]
def update(metric_id, user_id, project_id, data: schemas.UpdateCustomMetricsSchema):
series_ids = __get_series_id(metric_id)
n_series = []
d_series_ids = []
u_series = []
u_series_ids = []
params = {"metric_id": metric_id, "is_public": data.is_public, "name": data.name,
"user_id": user_id, "project_id": project_id}
for i, s in enumerate(data.series):
prefix = "u_"
if s.series_id is None:
n_series.append({"i": i, "s": s})
prefix = "n_"
s.index = i
else:
u_series.append({"i": i, "s": s})
u_series_ids.append(s.series_id)
ns = s.dict()
for k in ns.keys():
if k == "filter":
ns[k] = json.dumps(ns[k])
params[f"{prefix}{k}_{i}"] = ns[k]
for i in series_ids:
if i not in u_series_ids:
d_series_ids.append(i)
params["d_series_ids"] = tuple(d_series_ids)
with pg_client.PostgresClient() as cur:
sub_queries = []
if len(n_series) > 0:
sub_queries.append(f"""\
n AS (INSERT INTO metric_series (metric_id, index, name, filter)
VALUES {",".join([f"(%(metric_id)s, %(n_index_{s['i']})s, %(n_name_{s['i']})s, %(n_filter_{s['i']})s::jsonb)"
for s in n_series])}
RETURNING 1)""")
if len(u_series) > 0:
sub_queries.append(f"""\
u AS (UPDATE metric_series
SET name=series.name,
filter=series.filter,
index=series.index
FROM (VALUES {",".join([f"(%(u_series_id_{s['i']})s,%(u_index_{s['i']})s,%(u_name_{s['i']})s,%(u_filter_{s['i']})s::jsonb)"
for s in u_series])}) AS series(series_id, index, name, filter)
WHERE metric_series.metric_id =%(metric_id)s AND metric_series.series_id=series.series_id
RETURNING 1)""")
if len(d_series_ids) > 0:
sub_queries.append("""\
d AS (DELETE FROM metric_series WHERE metric_id =%(metric_id)s AND series_id IN %(d_series_ids)s
RETURNING 1)""")
query = cur.mogrify(f"""\
{"WITH " if len(sub_queries) > 0 else ""}{",".join(sub_queries)}
UPDATE metrics
SET name = %(name)s, is_public= %(is_public)s
WHERE metric_id = %(metric_id)s
AND project_id = %(project_id)s
AND (user_id = %(user_id)s OR is_public)
RETURNING metric_id;""", params)
cur.execute(
query
)
return get(metric_id=metric_id, project_id=project_id, user_id=user_id)
def get_all(project_id, user_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(
"""SELECT *
FROM metrics
LEFT JOIN LATERAL (SELECT jsonb_agg(metric_series.* ORDER BY index) AS series
FROM metric_series
WHERE metric_series.metric_id = metrics.metric_id
AND metric_series.deleted_at ISNULL
) AS metric_series ON (TRUE)
WHERE metrics.project_id = %(project_id)s
AND metrics.deleted_at ISNULL
AND (user_id = %(user_id)s OR is_public)
ORDER BY created_at;""",
{"project_id": project_id, "user_id": user_id}
)
)
rows = cur.fetchall()
for r in rows:
r["created_at"] = TimeUTC.datetime_to_timestamp(r["created_at"])
for s in r["series"]:
s["filter"] = helper.old_search_payload_to_flat(s["filter"])
rows = helper.list_to_camel_case(rows)
return rows
def delete(project_id, metric_id, user_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
UPDATE public.metrics
SET deleted_at = timezone('utc'::text, now())
WHERE project_id = %(project_id)s
AND metric_id = %(metric_id)s
AND (user_id = %(user_id)s OR is_public);""",
{"metric_id": metric_id, "project_id": project_id, "user_id": user_id})
)
return {"state": "success"}
def get(metric_id, project_id, user_id, flatten=True):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(
"""SELECT *
FROM metrics
LEFT JOIN LATERAL (SELECT jsonb_agg(metric_series.* ORDER BY index) AS series
FROM metric_series
WHERE metric_series.metric_id = metrics.metric_id
AND metric_series.deleted_at ISNULL
) AS metric_series ON (TRUE)
WHERE metrics.project_id = %(project_id)s
AND metrics.deleted_at ISNULL
AND (metrics.user_id = %(user_id)s OR metrics.is_public)
AND metrics.metric_id = %(metric_id)s
ORDER BY created_at;""",
{"metric_id": metric_id, "project_id": project_id, "user_id": user_id}
)
)
row = cur.fetchone()
if row is None:
return None
row["created_at"] = TimeUTC.datetime_to_timestamp(row["created_at"])
if flatten:
for s in row["series"]:
s["filter"] = helper.old_search_payload_to_flat(s["filter"])
return helper.dict_to_camel_case(row)
def get_series_for_alert(project_id, user_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(
"""SELECT series_id AS value,
metrics.name || '.' || (COALESCE(metric_series.name, 'series ' || index)) || '.count' AS name,
'count' AS unit,
FALSE AS predefined,
metric_id,
series_id
FROM metric_series
INNER JOIN metrics USING (metric_id)
WHERE metrics.deleted_at ISNULL
AND metrics.project_id = %(project_id)s
AND (user_id = %(user_id)s OR is_public)
ORDER BY name;""",
{"project_id": project_id, "user_id": user_id}
)
)
rows = cur.fetchall()
return helper.list_to_camel_case(rows)
def change_state(project_id, metric_id, user_id, status):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
UPDATE public.metrics
SET active = %(status)s
WHERE metric_id = %(metric_id)s
AND (user_id = %(user_id)s OR is_public);""",
{"metric_id": metric_id, "status": status, "user_id": user_id})
)
return get(metric_id=metric_id, project_id=project_id, user_id=user_id)
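merged_live (and make_chart) fold the per-series charts into one row per timestamp, keyed by each series name. A standalone sketch of that merge with invented chart data:
# Invented per-series data shaped like sessions.search2_series output,
# only to show how the charts above get merged per timestamp.
series_charts = [
    [{"timestamp": 0, "count": 3}, {"timestamp": 60, "count": 5}],  # series "errors"
    [{"timestamp": 0, "count": 7}, {"timestamp": 60, "count": 2}],  # series "clicks"
]
names = ["errors", "clicks"]
merged = [{} for _ in series_charts[0]]
for i in range(len(merged)):
    for j, chart in enumerate(series_charts):
        merged[i] = {**merged[i], "timestamp": chart[i]["timestamp"], names[j]: chart[i]["count"]}
print(merged)
# [{'timestamp': 0, 'errors': 3, 'clicks': 7}, {'timestamp': 60, 'errors': 5, 'clicks': 2}]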

File diff suppressed because it is too large

View file

@ -1,205 +0,0 @@
import logging
from chalicelib.utils import pg_client
logger = logging.getLogger(__name__)
class DatabaseRequestHandler:
def __init__(self, table_name):
self.table_name = table_name
self.constraints = []
self.params = {}
self.order_clause = ""
self.sort_clause = ""
self.select_columns = []
self.sub_queries = []
self.joins = []
self.group_by_clause = ""
self.client = pg_client
self.logger = logging.getLogger(__name__)
self.pagination = {}
def add_constraint(self, constraint, param=None):
self.constraints.append(constraint)
if param:
self.params.update(param)
def add_subquery(self, subquery, alias, param=None):
self.sub_queries.append((subquery, alias))
if param:
self.params.update(param)
def add_join(self, join_clause):
self.joins.append(join_clause)
def add_param(self, key, value):
self.params[key] = value
def set_order_by(self, order_by):
self.order_clause = order_by
def set_sort_by(self, sort_by):
self.sort_clause = sort_by
def set_select_columns(self, columns):
self.select_columns = columns
def set_group_by(self, group_by_clause):
self.group_by_clause = group_by_clause
def set_pagination(self, page, page_size):
"""
Set pagination parameters for the query.
:param page: The page number (1-indexed)
:param page_size: Number of items per page
"""
self.pagination = {
'offset': (page - 1) * page_size,
'limit': page_size
}
def build_query(self, action="select", additional_clauses=None, data=None):
if action == "select":
query = f"SELECT {', '.join(self.select_columns)} FROM {self.table_name}"
elif action == "insert":
columns = ', '.join(data.keys())
placeholders = ', '.join(f'%({k})s' for k in data.keys())
query = f"INSERT INTO {self.table_name} ({columns}) VALUES ({placeholders})"
elif action == "update":
set_clause = ', '.join(f"{k} = %({k})s" for k in data.keys())
query = f"UPDATE {self.table_name} SET {set_clause}"
elif action == "delete":
query = f"DELETE FROM {self.table_name}"
for join in self.joins:
query += f" {join}"
for subquery, alias in self.sub_queries:
query += f", ({subquery}) AS {alias}"
if self.constraints:
query += " WHERE " + " AND ".join(self.constraints)
if action == "select":
if self.group_by_clause:
query += " GROUP BY " + self.group_by_clause
if self.sort_clause:
query += " ORDER BY " + self.sort_clause
if self.order_clause:
query += " " + self.order_clause
if hasattr(self, 'pagination') and self.pagination:
query += " LIMIT %(limit)s OFFSET %(offset)s"
self.params.update(self.pagination)
if additional_clauses:
query += " " + additional_clauses
logger.debug(f"Query: {query}")
return query
def execute_query(self, query, data=None):
try:
with self.client.PostgresClient() as cur:
mogrified_query = cur.mogrify(query, {**data, **self.params} if data else self.params)
cur.execute(mogrified_query)
return cur.fetchall() if cur.description else None
except Exception as e:
self.logger.error(f"Database operation failed: {e}")
raise
def fetchall(self):
query = self.build_query()
return self.execute_query(query)
def fetchone(self):
query = self.build_query()
result = self.execute_query(query)
return result[0] if result else None
def insert(self, data):
query = self.build_query(action="insert", data=data)
query += " RETURNING *;"
result = self.execute_query(query, data)
return result[0] if result else None
def update(self, data):
query = self.build_query(action="update", data=data)
query += " RETURNING *;"
result = self.execute_query(query, data)
return result[0] if result else None
def delete(self):
query = self.build_query(action="delete")
return self.execute_query(query)
def batch_insert(self, items):
if not items:
return None
columns = ', '.join(items[0].keys())
# Building a values string with unique parameter names for each item
all_values_query = ', '.join(
'(' + ', '.join([f"%({key}_{i})s" for key in item]) + ')'
for i, item in enumerate(items)
)
query = f"INSERT INTO {self.table_name} ({columns}) VALUES {all_values_query} RETURNING *;"
try:
with self.client.PostgresClient() as cur:
# Flatten items into a single dictionary with unique keys
combined_params = {f"{k}_{i}": v for i, item in enumerate(items) for k, v in item.items()}
mogrified_query = cur.mogrify(query, combined_params)
cur.execute(mogrified_query)
return cur.fetchall()
except Exception as e:
self.logger.error(f"Database batch insert operation failed: {e}")
raise
def raw_query(self, query, params=None):
try:
with self.client.PostgresClient() as cur:
mogrified_query = cur.mogrify(query, params)
cur.execute(mogrified_query)
return cur.fetchall() if cur.description else None
except Exception as e:
self.logger.error(f"Database operation failed: {e}")
raise
def batch_update(self, items):
if not items:
return None
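        # the first key of each item is assumed to be the identifier column matched in the WHERE clause below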
id_column = list(items[0])[0]
# Building the set clause for the update statement
update_columns = list(items[0].keys())
update_columns.remove(id_column)
set_clause = ', '.join([f"{col} = v.{col}" for col in update_columns])
# Building the values part for the 'VALUES' section
values_rows = []
for item in items:
values = ', '.join([f"%({key})s" for key in item.keys()])
values_rows.append(f"({values})")
values_query = ', '.join(values_rows)
# Constructing the full update query
query = f"""
UPDATE {self.table_name} AS t
SET {set_clause}
FROM (VALUES {values_query}) AS v ({', '.join(items[0].keys())})
WHERE t.{id_column} = v.{id_column};
"""
try:
with self.client.PostgresClient() as cur:
# Flatten items into a single dictionary for mogrify
combined_params = {k: v for item in items for k, v in item.items()}
mogrified_query = cur.mogrify(query, combined_params)
cur.execute(mogrified_query)
except Exception as e:
self.logger.error(f"Database batch update operation failed: {e}")
raise
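The builder above is shown without its constructor (defined earlier in the file), so the following is only a minimal usage sketch: the class name `QueryBuilder` and its constructor signature are assumptions; the method calls map directly onto the methods listed above.
# Hypothetical usage sketch; class name and constructor signature are assumed.
builder = QueryBuilder("projects")
builder.set_select_columns(["project_id", "name"])
builder.add_constraint("deleted_at IS NULL")
builder.add_constraint("tenant_id = %(tenant_id)s", {"tenant_id": 42})
builder.set_sort_by("created_at")
builder.set_order_by("DESC")
builder.set_pagination(page=2, page_size=25)  # adds LIMIT %(limit)s OFFSET %(offset)s with limit=25, offset=25
rows = builder.fetchall()
# build_query() would produce roughly:
#   SELECT project_id, name FROM projects
#   WHERE deleted_at IS NULL AND tenant_id = %(tenant_id)s
#   ORDER BY created_at DESC LIMIT %(limit)s OFFSET %(offset)s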

View file

@ -0,0 +1,780 @@
import json
from chalicelib.core import sourcemaps, sessions
from chalicelib.utils import pg_client, helper, dev
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.metrics_helper import __get_step_size
def get(error_id, family=False):
if family:
return get_batch([error_id])
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"SELECT * FROM events.errors AS e INNER JOIN public.errors AS re USING(error_id) WHERE error_id = %(error_id)s;",
{"error_id": error_id})
cur.execute(query=query)
result = cur.fetchone()
if result is not None:
result["stacktrace_parsed_at"] = TimeUTC.datetime_to_timestamp(result["stacktrace_parsed_at"])
return helper.dict_to_camel_case(result)
def get_batch(error_ids):
if len(error_ids) == 0:
return []
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""
WITH RECURSIVE error_family AS (
SELECT *
FROM public.errors
WHERE error_id IN %(error_ids)s
UNION
SELECT child_errors.*
FROM public.errors AS child_errors
INNER JOIN error_family ON error_family.error_id = child_errors.parent_error_id OR error_family.parent_error_id = child_errors.error_id
)
SELECT *
FROM error_family;""",
{"error_ids": tuple(error_ids)})
cur.execute(query=query)
errors = cur.fetchall()
for e in errors:
e["stacktrace_parsed_at"] = TimeUTC.datetime_to_timestamp(e["stacktrace_parsed_at"])
return helper.list_to_camel_case(errors)
def __flatten_sort_key_count_version(data, merge_nested=False):
if data is None:
return []
return sorted(
[
{
"name": f'{o["name"]}@{v["version"]}',
"count": v["count"]
} for o in data for v in o["partition"]
],
key=lambda o: o["count"], reverse=True) if merge_nested else \
[
{
"name": o["name"],
"count": o["count"],
} for o in data
]
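To make the two return shapes concrete, here is a small illustration with made-up partition data (the same shape as the *_partition JSONB aggregates queried below):
# Illustrative input only; real data comes from the jsonb_agg(...) partitions built in get_details.
sample = [{"name": "Chrome", "count": 10,
           "partition": [{"version": "121", "count": 7}, {"version": "120", "count": 3}]}]
__flatten_sort_key_count_version(sample)
# -> [{'name': 'Chrome', 'count': 10}]
__flatten_sort_key_count_version(sample, merge_nested=True)
# -> [{'name': 'Chrome@121', 'count': 7}, {'name': 'Chrome@120', 'count': 3}]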
def __process_tags(row):
return [
{"name": "browser", "partitions": __flatten_sort_key_count_version(data=row.get("browsers_partition"))},
{"name": "browser.ver",
"partitions": __flatten_sort_key_count_version(data=row.pop("browsers_partition"), merge_nested=True)},
{"name": "OS", "partitions": __flatten_sort_key_count_version(data=row.get("os_partition"))},
{"name": "OS.ver",
"partitions": __flatten_sort_key_count_version(data=row.pop("os_partition"), merge_nested=True)},
{"name": "device.family", "partitions": __flatten_sort_key_count_version(data=row.get("device_partition"))},
{"name": "device",
"partitions": __flatten_sort_key_count_version(data=row.pop("device_partition"), merge_nested=True)},
{"name": "country", "partitions": row.pop("country_partition")}
]
def get_details(project_id, error_id, user_id, **data):
pg_sub_query24 = __get_basic_constraints(time_constraint=False, chart=True, step_size_name="step_size24")
pg_sub_query24.append("error_id = %(error_id)s")
pg_sub_query30 = __get_basic_constraints(time_constraint=False, chart=True, step_size_name="step_size30")
pg_sub_query30.append("error_id = %(error_id)s")
pg_basic_query = __get_basic_constraints(time_constraint=False)
pg_basic_query.append("error_id = %(error_id)s")
with pg_client.PostgresClient() as cur:
data["startDate24"] = TimeUTC.now(-1)
data["endDate24"] = TimeUTC.now()
data["startDate30"] = TimeUTC.now(-30)
data["endDate30"] = TimeUTC.now()
density24 = int(data.get("density24", 24))
step_size24 = __get_step_size(data["startDate24"], data["endDate24"], density24, factor=1)
density30 = int(data.get("density30", 30))
step_size30 = __get_step_size(data["startDate30"], data["endDate30"], density30, factor=1)
params = {
"startDate24": data['startDate24'],
"endDate24": data['endDate24'],
"startDate30": data['startDate30'],
"endDate30": data['endDate30'],
"project_id": project_id,
"userId": user_id,
"step_size24": step_size24,
"step_size30": step_size30,
"error_id": error_id}
main_pg_query = f"""\
SELECT error_id,
name,
message,
users,
sessions,
last_occurrence,
first_occurrence,
last_session_id,
browsers_partition,
os_partition,
device_partition,
country_partition,
chart24,
chart30
FROM (SELECT error_id,
name,
message,
COUNT(DISTINCT user_uuid) AS users,
COUNT(DISTINCT session_id) AS sessions
FROM public.errors
INNER JOIN events.errors AS s_errors USING (error_id)
INNER JOIN public.sessions USING (session_id)
WHERE error_id = %(error_id)s
GROUP BY error_id, name, message) AS details
INNER JOIN (SELECT error_id,
MAX(timestamp) AS last_occurrence,
MIN(timestamp) AS first_occurrence
FROM events.errors
WHERE error_id = %(error_id)s
GROUP BY error_id) AS time_details USING (error_id)
INNER JOIN (SELECT error_id,
session_id AS last_session_id,
user_os,
user_os_version,
user_browser,
user_browser_version,
user_device,
user_device_type,
user_uuid
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE error_id = %(error_id)s
ORDER BY errors.timestamp DESC
LIMIT 1) AS last_session_details USING (error_id)
INNER JOIN (SELECT jsonb_agg(browser_details) AS browsers_partition
FROM (SELECT *
FROM (SELECT user_browser AS name,
COUNT(session_id) AS count
FROM events.errors
INNER JOIN sessions USING (session_id)
WHERE {" AND ".join(pg_basic_query)}
GROUP BY user_browser
ORDER BY count DESC) AS count_per_browser_query
INNER JOIN LATERAL (SELECT JSONB_AGG(version_details) AS partition
FROM (SELECT user_browser_version AS version,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_basic_query)}
AND sessions.user_browser = count_per_browser_query.name
GROUP BY user_browser_version
ORDER BY count DESC) AS version_details
) AS browser_version_details ON (TRUE)) AS browser_details) AS browser_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(os_details) AS os_partition
FROM (SELECT *
FROM (SELECT user_os AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_basic_query)}
GROUP BY user_os
ORDER BY count DESC) AS count_per_os_details
INNER JOIN LATERAL (SELECT jsonb_agg(count_per_version_details) AS partition
FROM (SELECT COALESCE(user_os_version,'unknown') AS version, COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_basic_query)}
AND sessions.user_os = count_per_os_details.name
GROUP BY user_os_version
ORDER BY count DESC) AS count_per_version_details
GROUP BY count_per_os_details.name ) AS os_version_details
ON (TRUE)) AS os_details) AS os_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(device_details) AS device_partition
FROM (SELECT *
FROM (SELECT user_device_type AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_basic_query)}
GROUP BY user_device_type
ORDER BY count DESC) AS count_per_device_details
INNER JOIN LATERAL (SELECT jsonb_agg(count_per_device_v_details) AS partition
FROM (SELECT CASE
WHEN user_device = '' OR user_device ISNULL
THEN 'unknown'
ELSE user_device END AS version,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_basic_query)}
AND sessions.user_device_type = count_per_device_details.name
GROUP BY user_device
ORDER BY count DESC) AS count_per_device_v_details
GROUP BY count_per_device_details.name ) AS device_version_details
ON (TRUE)) AS device_details) AS device_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(count_per_country_details) AS country_partition
FROM (SELECT user_country AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_basic_query)}
GROUP BY user_country
ORDER BY count DESC) AS count_per_country_details) AS country_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(chart_details) AS chart24
FROM (SELECT generated_timestamp AS timestamp,
COUNT(session_id) AS count
FROM generate_series(%(startDate24)s, %(endDate24)s, %(step_size24)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT DISTINCT session_id
FROM events.errors
INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query24)}
) AS chart_details ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp) AS chart_details) AS chart_details24 ON (TRUE)
INNER JOIN (SELECT jsonb_agg(chart_details) AS chart30
FROM (SELECT generated_timestamp AS timestamp,
COUNT(session_id) AS count
FROM generate_series(%(startDate30)s, %(endDate30)s, %(step_size30)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT DISTINCT session_id
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30)}) AS chart_details
ON (TRUE)
GROUP BY timestamp
ORDER BY timestamp) AS chart_details) AS chart_details30 ON (TRUE);
"""
# print("--------------------")
# print(cur.mogrify(main_pg_query, params))
# print("--------------------")
cur.execute(cur.mogrify(main_pg_query, params))
row = cur.fetchone()
if row is None:
return {"errors": ["error not found"]}
row["tags"] = __process_tags(row)
query = cur.mogrify(
f"""SELECT error_id, status, session_id, start_ts,
parent_error_id,session_id, user_anonymous_id,
user_id, user_uuid, user_browser, user_browser_version,
user_os, user_os_version, user_device, payload,
COALESCE((SELECT TRUE
FROM public.user_favorite_errors AS fe
WHERE pe.error_id = fe.error_id
AND fe.user_id = %(user_id)s), FALSE) AS favorite,
True AS viewed
FROM public.errors AS pe
INNER JOIN events.errors AS ee USING (error_id)
INNER JOIN public.sessions USING (session_id)
WHERE pe.project_id = %(project_id)s
AND error_id = %(error_id)s
ORDER BY start_ts DESC
LIMIT 1;""",
{"project_id": project_id, "error_id": error_id, "user_id": user_id})
cur.execute(query=query)
status = cur.fetchone()
if status is not None:
row["stack"] = format_first_stack_frame(status).pop("stack")
row["status"] = status.pop("status")
row["parent_error_id"] = status.pop("parent_error_id")
row["favorite"] = status.pop("favorite")
row["viewed"] = status.pop("viewed")
row["last_hydrated_session"] = status
else:
row["stack"] = []
row["last_hydrated_session"] = None
row["status"] = "untracked"
row["parent_error_id"] = None
row["favorite"] = False
row["viewed"] = False
return {"data": helper.dict_to_camel_case(row)}
def get_details_chart(project_id, error_id, user_id, **data):
pg_sub_query = __get_basic_constraints()
pg_sub_query.append("error_id = %(error_id)s")
pg_sub_query_chart = __get_basic_constraints(time_constraint=False, chart=True)
pg_sub_query_chart.append("error_id = %(error_id)s")
with pg_client.PostgresClient() as cur:
if data.get("startDate") is None:
data["startDate"] = TimeUTC.now(-7)
else:
data["startDate"] = int(data["startDate"])
if data.get("endDate") is None:
data["endDate"] = TimeUTC.now()
else:
data["endDate"] = int(data["endDate"])
density = int(data.get("density", 7))
step_size = __get_step_size(data["startDate"], data["endDate"], density, factor=1)
params = {
"startDate": data['startDate'],
"endDate": data['endDate'],
"project_id": project_id,
"userId": user_id,
"step_size": step_size,
"error_id": error_id}
main_pg_query = f"""\
SELECT %(error_id)s AS error_id,
browsers_partition,
os_partition,
device_partition,
country_partition,
chart
FROM (SELECT jsonb_agg(browser_details) AS browsers_partition
FROM (SELECT *
FROM (SELECT user_browser AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
GROUP BY user_browser
ORDER BY count DESC) AS count_per_browser_query
INNER JOIN LATERAL (SELECT jsonb_agg(count_per_version_details) AS partition
FROM (SELECT user_browser_version AS version,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
AND user_browser = count_per_browser_query.name
GROUP BY user_browser_version
                                                                       ORDER BY count DESC) AS count_per_version_details) AS browser_version_details
ON (TRUE)) AS browser_details) AS browser_details
INNER JOIN (SELECT jsonb_agg(os_details) AS os_partition
FROM (SELECT *
FROM (SELECT user_os AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
GROUP BY user_os
ORDER BY count DESC) AS count_per_os_details
INNER JOIN LATERAL (SELECT jsonb_agg(count_per_version_query) AS partition
FROM (SELECT COALESCE(user_os_version, 'unknown') AS version,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
AND user_os = count_per_os_details.name
GROUP BY user_os_version
ORDER BY count DESC) AS count_per_version_query
) AS os_version_query ON (TRUE)) AS os_details) AS os_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(device_details) AS device_partition
FROM (SELECT *
FROM (SELECT user_device_type AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
GROUP BY user_device_type
ORDER BY count DESC) AS count_per_device_details
INNER JOIN LATERAL (SELECT jsonb_agg(count_per_device_details) AS partition
FROM (SELECT CASE
WHEN user_device = '' OR user_device ISNULL
THEN 'unknown'
ELSE user_device END AS version,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
AND user_device_type = count_per_device_details.name
GROUP BY user_device_type, user_device
ORDER BY count DESC) AS count_per_device_details
) AS device_version_details ON (TRUE)) AS device_details) AS device_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(count_per_country_details) AS country_partition
FROM (SELECT user_country AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
GROUP BY user_country
ORDER BY count DESC) AS count_per_country_details) AS country_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(chart_details) AS chart
FROM (SELECT generated_timestamp AS timestamp,
COUNT(session_id) AS count
FROM generate_series(%(startDate)s, %(endDate)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT DISTINCT session_id
FROM events.errors
INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query_chart)}
) AS chart_details ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp) AS chart_details) AS chart_details ON (TRUE);"""
cur.execute(cur.mogrify(main_pg_query, params))
row = cur.fetchone()
if row is None:
return {"errors": ["error not found"]}
row["tags"] = __process_tags(row)
return {"data": helper.dict_to_camel_case(row)}
def __get_basic_constraints(platform=None, time_constraint=True, startTime_arg_name="startDate",
endTime_arg_name="endDate", chart=False, step_size_name="step_size",
project_key="project_id"):
ch_sub_query = [f"{project_key} =%(project_id)s"]
if time_constraint:
ch_sub_query += [f"timestamp >= %({startTime_arg_name})s",
f"timestamp < %({endTime_arg_name})s"]
if chart:
ch_sub_query += [f"timestamp >= generated_timestamp",
f"timestamp < generated_timestamp + %({step_size_name})s"]
if platform == 'mobile':
ch_sub_query.append("user_device_type = 'mobile'")
elif platform == 'desktop':
ch_sub_query.append("user_device_type = 'desktop'")
return ch_sub_query
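The helper only assembles string fragments that callers later join with AND; two illustrative calls (outputs shown as comments, no database access involved):
# Illustrative outputs of the helper above.
__get_basic_constraints(time_constraint=False, chart=True, step_size_name="step_size24")
# -> ['project_id =%(project_id)s',
#     'timestamp >= generated_timestamp',
#     'timestamp < generated_timestamp + %(step_size24)s']
__get_basic_constraints(platform='mobile')
# -> ['project_id =%(project_id)s',
#     'timestamp >= %(startDate)s',
#     'timestamp < %(endDate)s',
#     "user_device_type = 'mobile'"]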
def __get_sort_key(key):
return {
"datetime": "max_datetime",
"lastOccurrence": "max_datetime",
"firstOccurrence": "min_datetime"
}.get(key, 'max_datetime')
@dev.timed
def search(data, project_id, user_id, flows=False, status="ALL", favorite_only=False):
status = status.upper()
if status.lower() not in ['all', 'unresolved', 'resolved', 'ignored']:
return {"errors": ["invalid error status"]}
pg_sub_query = __get_basic_constraints(data.get('platform'), project_key="sessions.project_id")
pg_sub_query += ["sessions.start_ts>=%(startDate)s", "sessions.start_ts<%(endDate)s", "source ='js_exception'",
"pe.project_id=%(project_id)s"]
pg_sub_query_chart = __get_basic_constraints(data.get('platform'), time_constraint=False, chart=True)
pg_sub_query_chart.append("source ='js_exception'")
pg_sub_query_chart.append("errors.error_id =details.error_id")
statuses = []
error_ids = None
if data.get("startDate") is None:
data["startDate"] = TimeUTC.now(-30)
if data.get("endDate") is None:
data["endDate"] = TimeUTC.now(1)
if len(data.get("events", [])) > 0 or len(data.get("filters", [])) > 0 or status != "ALL" or favorite_only:
statuses = sessions.search2_pg(data=data, project_id=project_id, user_id=user_id, errors_only=True,
error_status=status, favorite_only=favorite_only)
if len(statuses) == 0:
return {"data": {
'total': 0,
'errors': []
}}
error_ids = [e["error_id"] for e in statuses]
with pg_client.PostgresClient() as cur:
if data.get("startDate") is None:
data["startDate"] = TimeUTC.now(-7)
if data.get("endDate") is None:
data["endDate"] = TimeUTC.now()
density = data.get("density", 7)
step_size = __get_step_size(data["startDate"], data["endDate"], density, factor=1)
sort = __get_sort_key('datetime')
if data.get("sort") is not None:
sort = __get_sort_key(data["sort"])
order = "DESC"
if data.get("order") is not None:
order = data["order"]
params = {
"startDate": data['startDate'],
"endDate": data['endDate'],
"project_id": project_id,
"userId": user_id,
"step_size": step_size}
if error_ids is not None:
params["error_ids"] = tuple(error_ids)
pg_sub_query.append("error_id IN %(error_ids)s")
main_pg_query = f"""\
SELECT error_id,
name,
message,
users,
sessions,
last_occurrence,
first_occurrence,
chart
FROM (SELECT error_id,
name,
message,
COUNT(DISTINCT user_uuid) AS users,
COUNT(DISTINCT session_id) AS sessions,
MAX(timestamp) AS max_datetime,
MIN(timestamp) AS min_datetime
FROM events.errors
INNER JOIN public.errors AS pe USING (error_id)
INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
GROUP BY error_id, name, message
ORDER BY {sort} {order}) AS details
INNER JOIN LATERAL (SELECT MAX(timestamp) AS last_occurrence,
MIN(timestamp) AS first_occurrence
FROM events.errors
WHERE errors.error_id = details.error_id) AS time_details ON (TRUE)
INNER JOIN LATERAL (SELECT jsonb_agg(chart_details) AS chart
FROM (SELECT generated_timestamp AS timestamp,
COUNT(session_id) AS count
FROM generate_series(%(startDate)s, %(endDate)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT DISTINCT session_id
FROM events.errors INNER JOIN public.errors AS m_errors USING (error_id)
WHERE {" AND ".join(pg_sub_query_chart)}
) AS sessions ON (TRUE)
GROUP BY timestamp
ORDER BY timestamp) AS chart_details) AS chart_details ON (TRUE);"""
# print("--------------------")
# print(cur.mogrify(main_pg_query, params))
cur.execute(cur.mogrify(main_pg_query, params))
total = cur.rowcount
if flows:
return {"data": {"count": total}}
row = cur.fetchone()
rows = []
limit = 200
while row is not None and len(rows) < limit:
rows.append(row)
row = cur.fetchone()
if total == 0:
rows = []
else:
if len(statuses) == 0:
query = cur.mogrify(
"""SELECT error_id, status, parent_error_id, payload,
COALESCE((SELECT TRUE
FROM public.user_favorite_errors AS fe
WHERE errors.error_id = fe.error_id
AND fe.user_id = %(user_id)s LIMIT 1), FALSE) AS favorite,
COALESCE((SELECT TRUE
FROM public.user_viewed_errors AS ve
WHERE errors.error_id = ve.error_id
AND ve.user_id = %(user_id)s LIMIT 1), FALSE) AS viewed
FROM public.errors
WHERE project_id = %(project_id)s AND error_id IN %(error_ids)s;""",
{"project_id": project_id, "error_ids": tuple([r["error_id"] for r in rows]),
"user_id": user_id})
cur.execute(query=query)
statuses = cur.fetchall()
statuses = {
s["error_id"]: s for s in statuses
}
for r in rows:
if r["error_id"] in statuses:
r["status"] = statuses[r["error_id"]]["status"]
r["parent_error_id"] = statuses[r["error_id"]]["parent_error_id"]
r["favorite"] = statuses[r["error_id"]]["favorite"]
r["viewed"] = statuses[r["error_id"]]["viewed"]
r["stack"] = format_first_stack_frame(statuses[r["error_id"]])["stack"]
else:
r["status"] = "untracked"
r["parent_error_id"] = None
r["favorite"] = False
r["viewed"] = False
r["stack"] = None
offset = len(rows)
rows = [r for r in rows if r["stack"] is None
or (len(r["stack"]) == 0 or len(r["stack"]) > 1
or len(r["stack"]) > 0
and (r["message"].lower() != "script error." or len(r["stack"][0]["absPath"]) > 0))]
offset -= len(rows)
return {
"data": {
'total': total - offset,
'errors': helper.list_to_camel_case(rows)
}
}
def __save_stacktrace(error_id, data):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""UPDATE public.errors
SET stacktrace=%(data)s::jsonb, stacktrace_parsed_at=timezone('utc'::text, now())
WHERE error_id = %(error_id)s;""",
{"error_id": error_id, "data": json.dumps(data)})
cur.execute(query=query)
def get_trace(project_id, error_id):
error = get(error_id=error_id)
if error is None:
return {"errors": ["error not found"]}
if error.get("source", "") != "js_exception":
return {"errors": ["this source of errors doesn't have a sourcemap"]}
if error.get("payload") is None:
return {"errors": ["null payload"]}
if error.get("stacktrace") is not None:
return {"sourcemapUploaded": True,
"trace": error.get("stacktrace"),
"preparsed": True}
trace, all_exists = sourcemaps.get_traces_group(project_id=project_id, payload=error["payload"])
if all_exists:
__save_stacktrace(error_id=error_id, data=trace)
return {"sourcemapUploaded": all_exists,
"trace": trace,
"preparsed": False}
def get_sessions(start_date, end_date, project_id, user_id, error_id):
extra_constraints = ["s.project_id = %(project_id)s",
"s.start_ts >= %(startDate)s",
"s.start_ts <= %(endDate)s",
"e.error_id = %(error_id)s"]
if start_date is None:
start_date = TimeUTC.now(-7)
if end_date is None:
end_date = TimeUTC.now()
params = {
"startDate": start_date,
"endDate": end_date,
"project_id": project_id,
"userId": user_id,
"error_id": error_id}
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
f"""SELECT s.project_id,
s.session_id::text AS session_id,
s.user_uuid,
s.user_id,
s.user_agent,
s.user_os,
s.user_browser,
s.user_device,
s.user_country,
s.start_ts,
s.duration,
s.events_count,
s.pages_count,
s.errors_count,
s.issue_types,
COALESCE((SELECT TRUE
FROM public.user_favorite_sessions AS fs
WHERE s.session_id = fs.session_id
AND fs.user_id = %(userId)s LIMIT 1), FALSE) AS favorite,
COALESCE((SELECT TRUE
FROM public.user_viewed_sessions AS fs
WHERE s.session_id = fs.session_id
AND fs.user_id = %(userId)s LIMIT 1), FALSE) AS viewed
FROM public.sessions AS s INNER JOIN events.errors AS e USING (session_id)
WHERE {" AND ".join(extra_constraints)}
ORDER BY s.start_ts DESC;""",
params)
cur.execute(query=query)
sessions_list = []
total = cur.rowcount
row = cur.fetchone()
while row is not None and len(sessions_list) < 100:
sessions_list.append(row)
row = cur.fetchone()
return {
'total': total,
'sessions': helper.list_to_camel_case(sessions_list)
}
ACTION_STATE = {
"unsolve": 'unresolved',
"solve": 'resolved',
"ignore": 'ignored'
}
def change_state(project_id, user_id, error_id, action):
errors = get(error_id, family=True)
print(len(errors))
status = ACTION_STATE.get(action)
if errors is None or len(errors) == 0:
return {"errors": ["error not found"]}
if errors[0]["status"] == status:
return {"errors": [f"error is already {status}"]}
if errors[0]["status"] == ACTION_STATE["solve"] and status == ACTION_STATE["ignore"]:
return {"errors": [f"state transition not permitted {errors[0]['status']} -> {status}"]}
params = {
"userId": user_id,
"error_ids": tuple([e["errorId"] for e in errors]),
"status": status}
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""UPDATE public.errors
SET status = %(status)s
WHERE error_id IN %(error_ids)s
RETURNING status""",
params)
cur.execute(query=query)
row = cur.fetchone()
if row is not None:
for e in errors:
e["status"] = row["status"]
return {"data": errors}
MAX_RANK = 2
def __status_rank(status):
return {
'unresolved': MAX_RANK - 2,
'ignored': MAX_RANK - 1,
'resolved': MAX_RANK
}.get(status)
def merge(error_ids):
error_ids = list(set(error_ids))
errors = get_batch(error_ids)
if len(error_ids) <= 1 or len(error_ids) > len(errors):
return {"errors": ["invalid list of ids"]}
error_ids = [e["errorId"] for e in errors]
parent_error_id = error_ids[0]
status = "unresolved"
for e in errors:
if __status_rank(status) < __status_rank(e["status"]):
status = e["status"]
if __status_rank(status) == MAX_RANK:
break
params = {
"error_ids": tuple(error_ids),
"parent_error_id": parent_error_id,
"status": status
}
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""UPDATE public.errors
SET parent_error_id = %(parent_error_id)s, status = %(status)s
WHERE error_id IN %(error_ids)s OR parent_error_id IN %(error_ids)s;""",
params)
cur.execute(query=query)
# row = cur.fetchone()
return {"data": "success"}
def format_first_stack_frame(error):
error["stack"] = sourcemaps.format_payload(error.pop("payload"), truncate_to_first=True)
for s in error["stack"]:
for c in s.get("context", []):
for sci, sc in enumerate(c):
if isinstance(sc, str) and len(sc) > 1000:
c[sci] = sc[:1000]
# convert bytes to string:
if isinstance(s["filename"], bytes):
s["filename"] = s["filename"].decode("utf-8")
return error
def stats(project_id, user_id, startTimestamp=TimeUTC.now(delta_days=-7), endTimestamp=TimeUTC.now()):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""
SELECT COUNT(errors.*) AS unresolved_and_unviewed
FROM public.errors
INNER JOIN (SELECT root_error.error_id
FROM events.errors
INNER JOIN public.errors AS root_error USING (error_id)
WHERE project_id = %(project_id)s
AND timestamp >= %(startTimestamp)s
AND timestamp <= %(endTimestamp)s
AND source = 'js_exception') AS timed_errors USING (error_id)
LEFT JOIN (SELECT error_id FROM public.user_viewed_errors WHERE user_id = %(user_id)s) AS user_viewed
USING (error_id)
WHERE user_viewed.error_id ISNULL
AND errors.project_id = %(project_id)s
AND errors.status = 'unresolved'
AND errors.source = 'js_exception';""",
{"project_id": project_id, "user_id": user_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp})
cur.execute(query=query)
row = cur.fetchone()
return {
"data": helper.dict_to_camel_case(row)
}

View file

@ -1,13 +0,0 @@
import logging
from decouple import config
logger = logging.getLogger(__name__)
from . import errors_pg as errors_legacy
if config("EXP_ERRORS_SEARCH", cast=bool, default=False):
logger.info(">>> Using experimental error search")
from . import errors_ch as errors
else:
from . import errors_pg as errors

View file

@ -1,409 +0,0 @@
import schemas
from chalicelib.core import metadata
from chalicelib.core.errors import errors_legacy
from chalicelib.core.errors.modules import errors_helper
from chalicelib.core.errors.modules import sessions
from chalicelib.utils import ch_client, exp_ch_helper
from chalicelib.utils import helper, metrics_helper
from chalicelib.utils.TimeUTC import TimeUTC
def _multiple_values(values, value_key="value"):
query_values = {}
if values is not None and isinstance(values, list):
for i in range(len(values)):
k = f"{value_key}_{i}"
query_values[k] = values[i]
return query_values
def __get_sql_operator(op: schemas.SearchEventOperator):
return {
schemas.SearchEventOperator.IS: "=",
schemas.SearchEventOperator.IS_ANY: "IN",
schemas.SearchEventOperator.ON: "=",
schemas.SearchEventOperator.ON_ANY: "IN",
schemas.SearchEventOperator.IS_NOT: "!=",
schemas.SearchEventOperator.NOT_ON: "!=",
schemas.SearchEventOperator.CONTAINS: "ILIKE",
schemas.SearchEventOperator.NOT_CONTAINS: "NOT ILIKE",
schemas.SearchEventOperator.STARTS_WITH: "ILIKE",
schemas.SearchEventOperator.ENDS_WITH: "ILIKE",
}.get(op, "=")
def _isAny_operator(op: schemas.SearchEventOperator):
return op in [schemas.SearchEventOperator.ON_ANY, schemas.SearchEventOperator.IS_ANY]
def _isUndefined_operator(op: schemas.SearchEventOperator):
return op in [schemas.SearchEventOperator.IS_UNDEFINED]
def __is_negation_operator(op: schemas.SearchEventOperator):
return op in [schemas.SearchEventOperator.IS_NOT,
schemas.SearchEventOperator.NOT_ON,
schemas.SearchEventOperator.NOT_CONTAINS]
def _multiple_conditions(condition, values, value_key="value", is_not=False):
query = []
for i in range(len(values)):
k = f"{value_key}_{i}"
query.append(condition.replace(value_key, k))
return "(" + (" AND " if is_not else " OR ").join(query) + ")"
def get(error_id, family=False):
return errors_legacy.get(error_id=error_id, family=family)
def get_batch(error_ids):
return errors_legacy.get_batch(error_ids=error_ids)
def __get_basic_constraints_events(platform=None, time_constraint=True, startTime_arg_name="startDate",
endTime_arg_name="endDate", type_condition=True, project_key="project_id",
table_name=None):
ch_sub_query = [f"{project_key} =toUInt16(%(project_id)s)"]
if table_name is not None:
table_name = table_name + "."
else:
table_name = ""
if type_condition:
ch_sub_query.append(f"{table_name}`$event_name`='ERROR'")
if time_constraint:
ch_sub_query += [f"{table_name}created_at >= toDateTime(%({startTime_arg_name})s/1000)",
f"{table_name}created_at < toDateTime(%({endTime_arg_name})s/1000)"]
# if platform == schemas.PlatformType.MOBILE:
# ch_sub_query.append("user_device_type = 'mobile'")
# elif platform == schemas.PlatformType.DESKTOP:
# ch_sub_query.append("user_device_type = 'desktop'")
return ch_sub_query
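Like its PostgreSQL counterpart earlier in this changeset, the helper just accumulates constraint strings; an illustrative call:
# Illustrative output of the helper above.
__get_basic_constraints_events(type_condition=True, table_name="events")
# -> ['project_id =toUInt16(%(project_id)s)',
#     "events.`$event_name`='ERROR'",
#     'events.created_at >= toDateTime(%(startDate)s/1000)',
#     'events.created_at < toDateTime(%(endDate)s/1000)']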
def __get_sort_key(key):
return {
schemas.ErrorSort.OCCURRENCE: "max_datetime",
schemas.ErrorSort.USERS_COUNT: "users",
schemas.ErrorSort.SESSIONS_COUNT: "sessions"
}.get(key, 'max_datetime')
def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, user_id):
MAIN_EVENTS_TABLE = exp_ch_helper.get_main_events_table(data.startTimestamp)
MAIN_SESSIONS_TABLE = exp_ch_helper.get_main_sessions_table(data.startTimestamp)
platform = None
for f in data.filters:
if f.type == schemas.FilterType.PLATFORM and len(f.value) > 0:
platform = f.value[0]
ch_sessions_sub_query = errors_helper.__get_basic_constraints_ch(platform, type_condition=False)
# ignore platform for errors table
ch_sub_query = __get_basic_constraints_events(None, type_condition=True)
ch_sub_query.append("JSONExtractString(toString(`$properties`), 'source') = 'js_exception'")
# To ignore Script error
ch_sub_query.append("JSONExtractString(toString(`$properties`), 'message') != 'Script error.'")
error_ids = None
if data.startTimestamp is None:
data.startTimestamp = TimeUTC.now(-7)
if data.endTimestamp is None:
data.endTimestamp = TimeUTC.now(1)
subquery_part = ""
params = {}
if len(data.events) > 0:
errors_condition_count = 0
for i, e in enumerate(data.events):
if e.type == schemas.EventType.ERROR:
errors_condition_count += 1
                is_any = _isAny_operator(e.operator)
op = __get_sql_operator(e.operator)
e_k = f"e_value{i}"
params = {**params, **_multiple_values(e.value, value_key=e_k)}
                if not is_any and len(e.value) > 0 and e.value[0] not in [None, "*", ""]:
ch_sub_query.append(
_multiple_conditions(f"(message {op} %({e_k})s OR name {op} %({e_k})s)",
e.value, value_key=e_k))
if len(data.events) > errors_condition_count:
subquery_part_args, subquery_part = sessions.search_query_parts_ch(data=data, error_status=data.status,
errors_only=True,
project_id=project.project_id,
user_id=user_id,
issue=None,
favorite_only=False)
subquery_part = f"INNER JOIN {subquery_part} USING(session_id)"
params = {**params, **subquery_part_args}
if len(data.filters) > 0:
meta_keys = None
        # include a sub-query of sessions inside the events query in order to reduce the selected data
for i, f in enumerate(data.filters):
if not isinstance(f.value, list):
f.value = [f.value]
filter_type = f.type
f.value = helper.values_for_operator(value=f.value, op=f.operator)
f_k = f"f_value{i}"
params = {**params, f_k: f.value, **_multiple_values(f.value, value_key=f_k)}
op = __get_sql_operator(f.operator) \
if filter_type not in [schemas.FilterType.EVENTS_COUNT] else f.operator
            is_any = _isAny_operator(f.operator)
is_undefined = _isUndefined_operator(f.operator)
if not is_any and not is_undefined and len(f.value) == 0:
continue
is_not = False
if __is_negation_operator(f.operator):
is_not = True
if filter_type == schemas.FilterType.USER_BROWSER:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.user_browser)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f's.user_browser {op} %({f_k})s', f.value, is_not=is_not,
value_key=f_k))
elif filter_type in [schemas.FilterType.USER_OS, schemas.FilterType.USER_OS_MOBILE]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.user_os)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f's.user_os {op} %({f_k})s', f.value, is_not=is_not, value_key=f_k))
elif filter_type in [schemas.FilterType.USER_DEVICE, schemas.FilterType.USER_DEVICE_MOBILE]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.user_device)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f's.user_device {op} %({f_k})s', f.value, is_not=is_not,
value_key=f_k))
elif filter_type in [schemas.FilterType.USER_COUNTRY, schemas.FilterType.USER_COUNTRY_MOBILE]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.user_country)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f's.user_country {op} %({f_k})s', f.value, is_not=is_not,
value_key=f_k))
elif filter_type in [schemas.FilterType.UTM_SOURCE]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.utm_source)')
elif is_undefined:
ch_sessions_sub_query.append('isNull(s.utm_source)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f's.utm_source {op} toString(%({f_k})s)', f.value, is_not=is_not,
value_key=f_k))
elif filter_type in [schemas.FilterType.UTM_MEDIUM]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.utm_medium)')
elif is_undefined:
ch_sessions_sub_query.append('isNull(s.utm_medium)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f's.utm_medium {op} toString(%({f_k})s)', f.value, is_not=is_not,
value_key=f_k))
elif filter_type in [schemas.FilterType.UTM_CAMPAIGN]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.utm_campaign)')
elif is_undefined:
ch_sessions_sub_query.append('isNull(s.utm_campaign)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f's.utm_campaign {op} toString(%({f_k})s)', f.value, is_not=is_not,
value_key=f_k))
elif filter_type == schemas.FilterType.DURATION:
if len(f.value) > 0 and f.value[0] is not None:
ch_sessions_sub_query.append("s.duration >= %(minDuration)s")
params["minDuration"] = f.value[0]
if len(f.value) > 1 and f.value[1] is not None and int(f.value[1]) > 0:
ch_sessions_sub_query.append("s.duration <= %(maxDuration)s")
params["maxDuration"] = f.value[1]
elif filter_type == schemas.FilterType.REFERRER:
# extra_from += f"INNER JOIN {events.EventType.LOCATION.table} AS p USING(session_id)"
if is_any:
referrer_constraint = 'isNotNull(s.base_referrer)'
else:
referrer_constraint = _multiple_conditions(f"s.base_referrer {op} %({f_k})s", f.value,
is_not=is_not, value_key=f_k)
elif filter_type == schemas.FilterType.METADATA:
# get metadata list only if you need it
if meta_keys is None:
meta_keys = metadata.get(project_id=project.project_id)
meta_keys = {m["key"]: m["index"] for m in meta_keys}
if f.source in meta_keys.keys():
if is_any:
ch_sessions_sub_query.append(f"isNotNull(s.{metadata.index_to_colname(meta_keys[f.source])})")
elif is_undefined:
ch_sessions_sub_query.append(f"isNull(s.{metadata.index_to_colname(meta_keys[f.source])})")
else:
ch_sessions_sub_query.append(
_multiple_conditions(
f"s.{metadata.index_to_colname(meta_keys[f.source])} {op} toString(%({f_k})s)",
f.value, is_not=is_not, value_key=f_k))
elif filter_type in [schemas.FilterType.USER_ID, schemas.FilterType.USER_ID_MOBILE]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.user_id)')
elif is_undefined:
ch_sessions_sub_query.append('isNull(s.user_id)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f"s.user_id {op} toString(%({f_k})s)", f.value, is_not=is_not,
value_key=f_k))
elif filter_type in [schemas.FilterType.USER_ANONYMOUS_ID,
schemas.FilterType.USER_ANONYMOUS_ID_MOBILE]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.user_anonymous_id)')
elif is_undefined:
ch_sessions_sub_query.append('isNull(s.user_anonymous_id)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f"s.user_anonymous_id {op} toString(%({f_k})s)", f.value,
is_not=is_not,
value_key=f_k))
elif filter_type in [schemas.FilterType.REV_ID, schemas.FilterType.REV_ID_MOBILE]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.rev_id)')
elif is_undefined:
ch_sessions_sub_query.append('isNull(s.rev_id)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f"s.rev_id {op} toString(%({f_k})s)", f.value, is_not=is_not,
value_key=f_k))
elif filter_type == schemas.FilterType.PLATFORM:
# op = __get_sql_operator(f.operator)
ch_sessions_sub_query.append(
_multiple_conditions(f"s.user_device_type {op} %({f_k})s", f.value, is_not=is_not,
value_key=f_k))
# elif filter_type == schemas.FilterType.issue:
# if is_any:
# ch_sessions_sub_query.append("notEmpty(s.issue_types)")
# else:
# ch_sessions_sub_query.append(f"hasAny(s.issue_types,%({f_k})s)")
# # _multiple_conditions(f"%({f_k})s {op} ANY (s.issue_types)", f.value, is_not=is_not,
# # value_key=f_k))
#
# if is_not:
# extra_constraints[-1] = f"not({extra_constraints[-1]})"
# ss_constraints[-1] = f"not({ss_constraints[-1]})"
elif filter_type == schemas.FilterType.EVENTS_COUNT:
ch_sessions_sub_query.append(
_multiple_conditions(f"s.events_count {op} %({f_k})s", f.value, is_not=is_not,
value_key=f_k))
with ch_client.ClickHouseClient() as ch:
step_size = metrics_helper.get_step_size(data.startTimestamp, data.endTimestamp, data.density)
sort = __get_sort_key('datetime')
if data.sort is not None:
sort = __get_sort_key(data.sort)
order = "DESC"
if data.order is not None:
order = data.order
params = {
**params,
"startDate": data.startTimestamp,
"endDate": data.endTimestamp,
"project_id": project.project_id,
"userId": user_id,
"step_size": step_size}
if data.limit is not None and data.page is not None:
params["errors_offset"] = (data.page - 1) * data.limit
params["errors_limit"] = data.limit
else:
params["errors_offset"] = 0
params["errors_limit"] = 200
# if data.bookmarked:
# cur.execute(cur.mogrify(f"""SELECT error_id
# FROM public.user_favorite_errors
# WHERE user_id = %(userId)s
# {"" if error_ids is None else "AND error_id IN %(error_ids)s"}""",
# {"userId": user_id, "error_ids": tuple(error_ids or [])}))
# error_ids = cur.fetchall()
# if len(error_ids) == 0:
# return empty_response
# error_ids = [e["error_id"] for e in error_ids]
if error_ids is not None:
params["error_ids"] = tuple(error_ids)
ch_sub_query.append("error_id IN %(error_ids)s")
main_ch_query = f"""\
SELECT details.error_id as error_id,
name, message, users, total,
sessions, last_occurrence, first_occurrence, chart
FROM (SELECT error_id,
JSONExtractString(toString(`$properties`), 'name') AS name,
JSONExtractString(toString(`$properties`), 'message') AS message,
COUNT(DISTINCT user_id) AS users,
COUNT(DISTINCT events.session_id) AS sessions,
MAX(created_at) AS max_datetime,
MIN(created_at) AS min_datetime,
COUNT(DISTINCT error_id)
OVER() AS total
FROM {MAIN_EVENTS_TABLE} AS events
INNER JOIN (SELECT session_id, coalesce(user_id,toString(user_uuid)) AS user_id
FROM {MAIN_SESSIONS_TABLE} AS s
{subquery_part}
WHERE {" AND ".join(ch_sessions_sub_query)}) AS sessions
ON (events.session_id = sessions.session_id)
WHERE {" AND ".join(ch_sub_query)}
GROUP BY error_id, name, message
ORDER BY {sort} {order}
LIMIT %(errors_limit)s OFFSET %(errors_offset)s) AS details
INNER JOIN (SELECT error_id,
toUnixTimestamp(MAX(created_at))*1000 AS last_occurrence,
toUnixTimestamp(MIN(created_at))*1000 AS first_occurrence
FROM {MAIN_EVENTS_TABLE}
WHERE project_id=%(project_id)s
AND `$event_name`='ERROR'
GROUP BY error_id) AS time_details
ON details.error_id=time_details.error_id
INNER JOIN (SELECT error_id, groupArray([timestamp, count]) AS chart
FROM (SELECT error_id,
gs.generate_series AS timestamp,
COUNT(DISTINCT session_id) AS count
FROM generate_series(%(startDate)s, %(endDate)s, %(step_size)s) AS gs
LEFT JOIN {MAIN_EVENTS_TABLE} ON(TRUE)
WHERE {" AND ".join(ch_sub_query)}
AND created_at >= toDateTime(timestamp / 1000)
AND created_at < toDateTime((timestamp + %(step_size)s) / 1000)
GROUP BY error_id, timestamp
ORDER BY timestamp) AS sub_table
GROUP BY error_id) AS chart_details ON details.error_id=chart_details.error_id;"""
# print("------------")
# print(ch.format(main_ch_query, params))
# print("------------")
query = ch.format(query=main_ch_query, parameters=params)
rows = ch.execute(query=query)
total = rows[0]["total"] if len(rows) > 0 else 0
for r in rows:
r["chart"] = list(r["chart"])
for i in range(len(r["chart"])):
r["chart"][i] = {"timestamp": r["chart"][i][0], "count": r["chart"][i][1]}
return {
'total': total,
'errors': helper.list_to_camel_case(rows)
}
def get_trace(project_id, error_id):
return errors_legacy.get_trace(project_id=project_id, error_id=error_id)
def get_sessions(start_date, end_date, project_id, user_id, error_id):
return errors_legacy.get_sessions(start_date=start_date,
end_date=end_date,
project_id=project_id,
user_id=user_id,
error_id=error_id)

View file

@ -1,248 +0,0 @@
from chalicelib.core.errors.modules import errors_helper
from chalicelib.utils import pg_client, helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.metrics_helper import get_step_size
def __flatten_sort_key_count_version(data, merge_nested=False):
if data is None:
return []
return sorted(
[
{
"name": f'{o["name"]}@{v["version"]}',
"count": v["count"]
} for o in data for v in o["partition"]
],
key=lambda o: o["count"], reverse=True) if merge_nested else \
[
{
"name": o["name"],
"count": o["count"],
} for o in data
]
def __process_tags(row):
return [
{"name": "browser", "partitions": __flatten_sort_key_count_version(data=row.get("browsers_partition"))},
{"name": "browser.ver",
"partitions": __flatten_sort_key_count_version(data=row.pop("browsers_partition"), merge_nested=True)},
{"name": "OS", "partitions": __flatten_sort_key_count_version(data=row.get("os_partition"))},
{"name": "OS.ver",
"partitions": __flatten_sort_key_count_version(data=row.pop("os_partition"), merge_nested=True)},
{"name": "device.family", "partitions": __flatten_sort_key_count_version(data=row.get("device_partition"))},
{"name": "device",
"partitions": __flatten_sort_key_count_version(data=row.pop("device_partition"), merge_nested=True)},
{"name": "country", "partitions": row.pop("country_partition")}
]
def get_details(project_id, error_id, user_id, **data):
pg_sub_query24 = errors_helper.__get_basic_constraints(time_constraint=False, chart=True,
step_size_name="step_size24")
pg_sub_query24.append("error_id = %(error_id)s")
pg_sub_query30_session = errors_helper.__get_basic_constraints(time_constraint=True, chart=False,
startTime_arg_name="startDate30",
endTime_arg_name="endDate30",
project_key="sessions.project_id")
pg_sub_query30_session.append("sessions.start_ts >= %(startDate30)s")
pg_sub_query30_session.append("sessions.start_ts <= %(endDate30)s")
pg_sub_query30_session.append("error_id = %(error_id)s")
pg_sub_query30_err = errors_helper.__get_basic_constraints(time_constraint=True, chart=False,
startTime_arg_name="startDate30",
endTime_arg_name="endDate30",
project_key="errors.project_id")
pg_sub_query30_err.append("sessions.project_id = %(project_id)s")
pg_sub_query30_err.append("sessions.start_ts >= %(startDate30)s")
pg_sub_query30_err.append("sessions.start_ts <= %(endDate30)s")
pg_sub_query30_err.append("error_id = %(error_id)s")
pg_sub_query30_err.append("source ='js_exception'")
pg_sub_query30 = errors_helper.__get_basic_constraints(time_constraint=False, chart=True,
step_size_name="step_size30")
pg_sub_query30.append("error_id = %(error_id)s")
pg_basic_query = errors_helper.__get_basic_constraints(time_constraint=False)
pg_basic_query.append("error_id = %(error_id)s")
with pg_client.PostgresClient() as cur:
data["startDate24"] = TimeUTC.now(-1)
data["endDate24"] = TimeUTC.now()
data["startDate30"] = TimeUTC.now(-30)
data["endDate30"] = TimeUTC.now()
density24 = int(data.get("density24", 24))
step_size24 = get_step_size(data["startDate24"], data["endDate24"], density24, factor=1)
density30 = int(data.get("density30", 30))
step_size30 = get_step_size(data["startDate30"], data["endDate30"], density30, factor=1)
params = {
"startDate24": data['startDate24'],
"endDate24": data['endDate24'],
"startDate30": data['startDate30'],
"endDate30": data['endDate30'],
"project_id": project_id,
"userId": user_id,
"step_size24": step_size24,
"step_size30": step_size30,
"error_id": error_id}
main_pg_query = f"""\
SELECT error_id,
name,
message,
users,
sessions,
last_occurrence,
first_occurrence,
last_session_id,
browsers_partition,
os_partition,
device_partition,
country_partition,
chart24,
chart30
FROM (SELECT error_id,
name,
message,
COUNT(DISTINCT user_id) AS users,
COUNT(DISTINCT session_id) AS sessions
FROM public.errors
INNER JOIN events.errors AS s_errors USING (error_id)
INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_err)}
GROUP BY error_id, name, message) AS details
INNER JOIN (SELECT MAX(timestamp) AS last_occurrence,
MIN(timestamp) AS first_occurrence
FROM events.errors
WHERE error_id = %(error_id)s) AS time_details ON (TRUE)
INNER JOIN (SELECT session_id AS last_session_id
FROM events.errors
WHERE error_id = %(error_id)s
ORDER BY errors.timestamp DESC
LIMIT 1) AS last_session_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(browser_details) AS browsers_partition
FROM (SELECT *
FROM (SELECT user_browser AS name,
COUNT(session_id) AS count
FROM events.errors
INNER JOIN sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
GROUP BY user_browser
ORDER BY count DESC) AS count_per_browser_query
INNER JOIN LATERAL (SELECT JSONB_AGG(version_details) AS partition
FROM (SELECT user_browser_version AS version,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
AND sessions.user_browser = count_per_browser_query.name
GROUP BY user_browser_version
ORDER BY count DESC) AS version_details
) AS browser_version_details ON (TRUE)) AS browser_details) AS browser_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(os_details) AS os_partition
FROM (SELECT *
FROM (SELECT user_os AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
GROUP BY user_os
ORDER BY count DESC) AS count_per_os_details
INNER JOIN LATERAL (SELECT jsonb_agg(count_per_version_details) AS partition
FROM (SELECT COALESCE(user_os_version,'unknown') AS version, COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
AND sessions.user_os = count_per_os_details.name
GROUP BY user_os_version
ORDER BY count DESC) AS count_per_version_details
GROUP BY count_per_os_details.name ) AS os_version_details
ON (TRUE)) AS os_details) AS os_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(device_details) AS device_partition
FROM (SELECT *
FROM (SELECT user_device_type AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
GROUP BY user_device_type
ORDER BY count DESC) AS count_per_device_details
INNER JOIN LATERAL (SELECT jsonb_agg(count_per_device_v_details) AS partition
FROM (SELECT CASE
WHEN user_device = '' OR user_device ISNULL
THEN 'unknown'
ELSE user_device END AS version,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
AND sessions.user_device_type = count_per_device_details.name
GROUP BY user_device
ORDER BY count DESC) AS count_per_device_v_details
GROUP BY count_per_device_details.name ) AS device_version_details
ON (TRUE)) AS device_details) AS device_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(count_per_country_details) AS country_partition
FROM (SELECT user_country AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
GROUP BY user_country
ORDER BY count DESC) AS count_per_country_details) AS country_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(chart_details) AS chart24
FROM (SELECT generated_timestamp AS timestamp,
COUNT(session_id) AS count
FROM generate_series(%(startDate24)s, %(endDate24)s, %(step_size24)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT DISTINCT session_id
FROM events.errors
INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query24)}
) AS chart_details ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp) AS chart_details) AS chart_details24 ON (TRUE)
INNER JOIN (SELECT jsonb_agg(chart_details) AS chart30
FROM (SELECT generated_timestamp AS timestamp,
COUNT(session_id) AS count
FROM generate_series(%(startDate30)s, %(endDate30)s, %(step_size30)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT DISTINCT session_id
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30)}) AS chart_details
ON (TRUE)
GROUP BY timestamp
ORDER BY timestamp) AS chart_details) AS chart_details30 ON (TRUE);
"""
# print("--------------------")
# print(cur.mogrify(main_pg_query, params))
# print("--------------------")
cur.execute(cur.mogrify(main_pg_query, params))
row = cur.fetchone()
if row is None:
return {"errors": ["error not found"]}
row["tags"] = __process_tags(row)
query = cur.mogrify(
f"""SELECT error_id, status, session_id, start_ts,
parent_error_id,session_id, user_anonymous_id,
user_id, user_uuid, user_browser, user_browser_version,
user_os, user_os_version, user_device, payload,
FALSE AS favorite,
True AS viewed
FROM public.errors AS pe
INNER JOIN events.errors AS ee USING (error_id)
INNER JOIN public.sessions USING (session_id)
WHERE pe.project_id = %(project_id)s
AND error_id = %(error_id)s
ORDER BY start_ts DESC
LIMIT 1;""",
{"project_id": project_id, "error_id": error_id, "user_id": user_id})
cur.execute(query=query)
status = cur.fetchone()
if status is not None:
row["stack"] = errors_helper.format_first_stack_frame(status).pop("stack")
row["status"] = status.pop("status")
row["parent_error_id"] = status.pop("parent_error_id")
row["favorite"] = status.pop("favorite")
row["viewed"] = status.pop("viewed")
row["last_hydrated_session"] = status
else:
row["stack"] = []
row["last_hydrated_session"] = None
row["status"] = "untracked"
row["parent_error_id"] = None
row["favorite"] = False
row["viewed"] = False
return {"data": helper.dict_to_camel_case(row)}

View file

@ -1,294 +0,0 @@
import json
from typing import List
import schemas
from chalicelib.core.errors.modules import errors_helper
from chalicelib.core.sessions import sessions_search
from chalicelib.core.sourcemaps import sourcemaps
from chalicelib.utils import pg_client, helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.metrics_helper import get_step_size
def get(error_id, family=False) -> dict | List[dict]:
if family:
return get_batch([error_id])
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""SELECT *
FROM public.errors
WHERE error_id = %(error_id)s
LIMIT 1;""",
{"error_id": error_id})
cur.execute(query=query)
result = cur.fetchone()
if result is not None:
result["stacktrace_parsed_at"] = TimeUTC.datetime_to_timestamp(result["stacktrace_parsed_at"])
return helper.dict_to_camel_case(result)
def get_batch(error_ids):
if len(error_ids) == 0:
return []
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""
WITH RECURSIVE error_family AS (
SELECT *
FROM public.errors
WHERE error_id IN %(error_ids)s
UNION
SELECT child_errors.*
FROM public.errors AS child_errors
INNER JOIN error_family ON error_family.error_id = child_errors.parent_error_id OR error_family.parent_error_id = child_errors.error_id
)
SELECT *
FROM error_family;""",
{"error_ids": tuple(error_ids)})
cur.execute(query=query)
errors = cur.fetchall()
for e in errors:
e["stacktrace_parsed_at"] = TimeUTC.datetime_to_timestamp(e["stacktrace_parsed_at"])
return helper.list_to_camel_case(errors)
def __get_sort_key(key):
return {
schemas.ErrorSort.OCCURRENCE: "max_datetime",
schemas.ErrorSort.USERS_COUNT: "users",
schemas.ErrorSort.SESSIONS_COUNT: "sessions"
}.get(key, 'max_datetime')
def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, user_id):
empty_response = {
'total': 0,
'errors': []
}
platform = None
for f in data.filters:
if f.type == schemas.FilterType.PLATFORM and len(f.value) > 0:
platform = f.value[0]
pg_sub_query = errors_helper.__get_basic_constraints(platform, project_key="sessions.project_id")
pg_sub_query += ["sessions.start_ts>=%(startDate)s", "sessions.start_ts<%(endDate)s", "source ='js_exception'",
"pe.project_id=%(project_id)s"]
# To ignore Script error
pg_sub_query.append("pe.message!='Script error.'")
pg_sub_query_chart = errors_helper.__get_basic_constraints(platform, time_constraint=False, chart=True,
project_key=None)
if platform:
pg_sub_query_chart += ["start_ts>=%(startDate)s", "start_ts<%(endDate)s", "project_id=%(project_id)s"]
pg_sub_query_chart.append("errors.error_id =details.error_id")
statuses = []
error_ids = None
if data.startTimestamp is None:
data.startTimestamp = TimeUTC.now(-30)
if data.endTimestamp is None:
data.endTimestamp = TimeUTC.now(1)
if len(data.events) > 0 or len(data.filters) > 0:
print("-- searching for sessions before errors")
statuses = sessions_search.search_sessions(data=data, project=project, user_id=user_id, errors_only=True,
error_status=data.status)
if len(statuses) == 0:
return empty_response
error_ids = [e["errorId"] for e in statuses]
with pg_client.PostgresClient() as cur:
step_size = get_step_size(data.startTimestamp, data.endTimestamp, data.density, factor=1)
sort = __get_sort_key('datetime')
if data.sort is not None:
sort = __get_sort_key(data.sort)
order = schemas.SortOrderType.DESC
if data.order is not None:
order = data.order
extra_join = ""
params = {
"startDate": data.startTimestamp,
"endDate": data.endTimestamp,
"project_id": project.project_id,
"userId": user_id,
"step_size": step_size}
if data.status != schemas.ErrorStatus.ALL:
pg_sub_query.append("status = %(error_status)s")
params["error_status"] = data.status
if data.limit is not None and data.page is not None:
params["errors_offset"] = (data.page - 1) * data.limit
params["errors_limit"] = data.limit
else:
params["errors_offset"] = 0
params["errors_limit"] = 200
if error_ids is not None:
params["error_ids"] = tuple(error_ids)
pg_sub_query.append("error_id IN %(error_ids)s")
# if data.bookmarked:
# pg_sub_query.append("ufe.user_id = %(userId)s")
# extra_join += " INNER JOIN public.user_favorite_errors AS ufe USING (error_id)"
if data.query is not None and len(data.query) > 0:
pg_sub_query.append("(pe.name ILIKE %(error_query)s OR pe.message ILIKE %(error_query)s)")
params["error_query"] = helper.values_for_operator(value=data.query,
op=schemas.SearchEventOperator.CONTAINS)
main_pg_query = f"""SELECT full_count,
error_id,
name,
message,
users,
sessions,
last_occurrence,
first_occurrence,
chart
FROM (SELECT COUNT(details) OVER () AS full_count, details.*
FROM (SELECT error_id,
name,
message,
COUNT(DISTINCT COALESCE(user_id,user_uuid::text)) AS users,
COUNT(DISTINCT session_id) AS sessions,
MAX(timestamp) AS max_datetime,
MIN(timestamp) AS min_datetime
FROM events.errors
INNER JOIN public.errors AS pe USING (error_id)
INNER JOIN public.sessions USING (session_id)
{extra_join}
WHERE {" AND ".join(pg_sub_query)}
GROUP BY error_id, name, message
ORDER BY {sort} {order}) AS details
LIMIT %(errors_limit)s OFFSET %(errors_offset)s
) AS details
INNER JOIN LATERAL (SELECT MAX(timestamp) AS last_occurrence,
MIN(timestamp) AS first_occurrence
FROM events.errors
WHERE errors.error_id = details.error_id) AS time_details ON (TRUE)
INNER JOIN LATERAL (SELECT jsonb_agg(chart_details) AS chart
FROM (SELECT generated_timestamp AS timestamp,
COUNT(session_id) AS count
FROM generate_series(%(startDate)s, %(endDate)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT DISTINCT session_id
FROM events.errors
{"INNER JOIN public.sessions USING(session_id)" if platform else ""}
WHERE {" AND ".join(pg_sub_query_chart)}
) AS sessions ON (TRUE)
GROUP BY timestamp
ORDER BY timestamp) AS chart_details) AS chart_details ON (TRUE);"""
# print("--------------------")
# print(cur.mogrify(main_pg_query, params))
# print("--------------------")
cur.execute(cur.mogrify(main_pg_query, params))
rows = cur.fetchall()
total = 0 if len(rows) == 0 else rows[0]["full_count"]
if total == 0:
rows = []
else:
if len(statuses) == 0:
query = cur.mogrify(
"""SELECT error_id
FROM public.errors
WHERE project_id = %(project_id)s AND error_id IN %(error_ids)s;""",
{"project_id": project.project_id, "error_ids": tuple([r["error_id"] for r in rows]),
"user_id": user_id})
cur.execute(query=query)
statuses = helper.list_to_camel_case(cur.fetchall())
statuses = {
s["errorId"]: s for s in statuses
}
for r in rows:
r.pop("full_count")
return {
'total': total,
'errors': helper.list_to_camel_case(rows)
}
def __save_stacktrace(error_id, data):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""UPDATE public.errors
SET stacktrace=%(data)s::jsonb, stacktrace_parsed_at=timezone('utc'::text, now())
WHERE error_id = %(error_id)s;""",
{"error_id": error_id, "data": json.dumps(data)})
cur.execute(query=query)
def get_trace(project_id, error_id):
error = get(error_id=error_id, family=False)
if error is None:
return {"errors": ["error not found"]}
if error.get("source", "") != "js_exception":
return {"errors": ["this source of errors doesn't have a sourcemap"]}
if error.get("payload") is None:
return {"errors": ["null payload"]}
if error.get("stacktrace") is not None:
return {"sourcemapUploaded": True,
"trace": error.get("stacktrace"),
"preparsed": True}
trace, all_exists = sourcemaps.get_traces_group(project_id=project_id, payload=error["payload"])
if all_exists:
__save_stacktrace(error_id=error_id, data=trace)
return {"sourcemapUploaded": all_exists,
"trace": trace,
"preparsed": False}
def get_sessions(start_date, end_date, project_id, user_id, error_id):
extra_constraints = ["s.project_id = %(project_id)s",
"s.start_ts >= %(startDate)s",
"s.start_ts <= %(endDate)s",
"e.error_id = %(error_id)s"]
if start_date is None:
start_date = TimeUTC.now(-7)
if end_date is None:
end_date = TimeUTC.now()
params = {
"startDate": start_date,
"endDate": end_date,
"project_id": project_id,
"userId": user_id,
"error_id": error_id}
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
f"""SELECT s.project_id,
s.session_id::text AS session_id,
s.user_uuid,
s.user_id,
s.user_agent,
s.user_os,
s.user_browser,
s.user_device,
s.user_country,
s.start_ts,
s.duration,
s.events_count,
s.pages_count,
s.errors_count,
s.issue_types,
COALESCE((SELECT TRUE
FROM public.user_favorite_sessions AS fs
WHERE s.session_id = fs.session_id
AND fs.user_id = %(userId)s LIMIT 1), FALSE) AS favorite,
COALESCE((SELECT TRUE
FROM public.user_viewed_sessions AS fs
WHERE s.session_id = fs.session_id
AND fs.user_id = %(userId)s LIMIT 1), FALSE) AS viewed
FROM public.sessions AS s INNER JOIN events.errors AS e USING (session_id)
WHERE {" AND ".join(extra_constraints)}
ORDER BY s.start_ts DESC;""",
params)
cur.execute(query=query)
sessions_list = []
total = cur.rowcount
row = cur.fetchone()
while row is not None and len(sessions_list) < 100:
sessions_list.append(row)
row = cur.fetchone()
return {
'total': total,
'sessions': helper.list_to_camel_case(sessions_list)
}

View file

@ -1,11 +0,0 @@
import logging
from decouple import config
logger = logging.getLogger(__name__)
from . import helper as errors_helper
if config("EXP_ERRORS_SEARCH", cast=bool, default=False):
import chalicelib.core.sessions.sessions_ch as sessions
else:
import chalicelib.core.sessions.sessions_pg as sessions

View file

@ -1,58 +0,0 @@
from typing import Optional
import schemas
from chalicelib.core.sourcemaps import sourcemaps
def __get_basic_constraints(platform: Optional[schemas.PlatformType] = None, time_constraint: bool = True,
startTime_arg_name: str = "startDate", endTime_arg_name: str = "endDate",
chart: bool = False, step_size_name: str = "step_size",
project_key: Optional[str] = "project_id"):
if project_key is None:
ch_sub_query = []
else:
ch_sub_query = [f"{project_key} =%(project_id)s"]
if time_constraint:
ch_sub_query += [f"timestamp >= %({startTime_arg_name})s",
f"timestamp < %({endTime_arg_name})s"]
if chart:
ch_sub_query += [f"timestamp >= generated_timestamp",
f"timestamp < generated_timestamp + %({step_size_name})s"]
if platform == schemas.PlatformType.MOBILE:
ch_sub_query.append("user_device_type = 'mobile'")
elif platform == schemas.PlatformType.DESKTOP:
ch_sub_query.append("user_device_type = 'desktop'")
return ch_sub_query
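# Illustration only (not part of the original module): a standalone sketch of the
# constraint list __get_basic_constraints builds with its default argument names.
# The platform strings mirror schemas.PlatformType values; everything else is hypothetical.
def _sketch_basic_constraints(platform=None, time_constraint=True, project_key="project_id"):
    constraints = [] if project_key is None else [f"{project_key} =%(project_id)s"]
    if time_constraint:
        constraints += ["timestamp >= %(startDate)s", "timestamp < %(endDate)s"]
    if platform == "mobile":
        constraints.append("user_device_type = 'mobile'")
    elif platform == "desktop":
        constraints.append("user_device_type = 'desktop'")
    return constraints
# _sketch_basic_constraints(platform="mobile") ->
#   ["project_id =%(project_id)s", "timestamp >= %(startDate)s",
#    "timestamp < %(endDate)s", "user_device_type = 'mobile'"]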
def __get_basic_constraints_ch(platform=None, time_constraint=True, startTime_arg_name="startDate",
endTime_arg_name="endDate", type_condition=True, project_key="project_id",
table_name=None):
ch_sub_query = [f"{project_key} =toUInt16(%(project_id)s)"]
if table_name is not None:
table_name = table_name + "."
else:
table_name = ""
if type_condition:
ch_sub_query.append(f"{table_name}`$event_name`='ERROR'")
if time_constraint:
ch_sub_query += [f"{table_name}datetime >= toDateTime(%({startTime_arg_name})s/1000)",
f"{table_name}datetime < toDateTime(%({endTime_arg_name})s/1000)"]
if platform == schemas.PlatformType.MOBILE:
ch_sub_query.append("user_device_type = 'mobile'")
elif platform == schemas.PlatformType.DESKTOP:
ch_sub_query.append("user_device_type = 'desktop'")
return ch_sub_query
def format_first_stack_frame(error):
error["stack"] = sourcemaps.format_payload(error.pop("payload"), truncate_to_first=True)
for s in error["stack"]:
for c in s.get("context", []):
for sci, sc in enumerate(c):
if isinstance(sc, str) and len(sc) > 1000:
c[sci] = sc[:1000]
# convert bytes to string:
if isinstance(s["filename"], bytes):
s["filename"] = s["filename"].decode("utf-8")
return error

View file

@ -0,0 +1,91 @@
from chalicelib.utils import pg_client
def add_favorite_error(project_id, user_id, error_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(f"""\
INSERT INTO public.user_favorite_errors
(user_id, error_id)
VALUES
(%(userId)s,%(error_id)s);""",
{"userId": user_id, "error_id": error_id})
)
return {"errorId": error_id, "favorite": True}
def remove_favorite_error(project_id, user_id, error_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(f"""\
DELETE FROM public.user_favorite_errors
WHERE
user_id = %(userId)s
AND error_id = %(error_id)s;""",
{"userId": user_id, "error_id": error_id})
)
return {"errorId": error_id, "favorite": False}
def favorite_error(project_id, user_id, error_id):
exists, favorite = error_exists_and_favorite(user_id=user_id, error_id=error_id)
if not exists:
return {"errors": ["cannot bookmark non-rehydrated errors"]}
if favorite:
return remove_favorite_error(project_id=project_id, user_id=user_id, error_id=error_id)
return add_favorite_error(project_id=project_id, user_id=user_id, error_id=error_id)
def error_exists_and_favorite(user_id, error_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(
"""SELECT errors.error_id AS exists, ufe.error_id AS favorite
FROM public.errors
LEFT JOIN (SELECT error_id FROM public.user_favorite_errors WHERE user_id = %(userId)s) AS ufe USING (error_id)
WHERE error_id = %(error_id)s;""",
{"userId": user_id, "error_id": error_id})
)
r = cur.fetchone()
if r is None:
return False, False
return True, r.get("favorite") is not None
def add_viewed_error(project_id, user_id, error_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
INSERT INTO public.user_viewed_errors
(user_id, error_id)
VALUES
(%(userId)s,%(error_id)s);""",
{"userId": user_id, "error_id": error_id})
)
def viewed_error_exists(user_id, error_id):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""SELECT
errors.error_id AS hydrated,
COALESCE((SELECT TRUE
FROM public.user_viewed_errors AS ve
WHERE ve.error_id = %(error_id)s
AND ve.user_id = %(userId)s LIMIT 1), FALSE) AS viewed
FROM public.errors
WHERE error_id = %(error_id)s""",
{"userId": user_id, "error_id": error_id})
cur.execute(
query=query
)
r = cur.fetchone()
if r:
return r.get("viewed")
return True
def viewed_error(project_id, user_id, error_id):
if viewed_error_exists(user_id=user_id, error_id=error_id):
return None
return add_viewed_error(project_id=project_id, user_id=user_id, error_id=error_id)

View file

@ -1,16 +1,12 @@
from functools import cache
from typing import Optional
import schemas
from chalicelib.core import issues
from chalicelib.core.autocomplete import autocomplete
from chalicelib.core.sessions import sessions_metas
from chalicelib.core import sessions_metas, metadata
from chalicelib.utils import pg_client, helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.event_filter_definition import SupportedFilter, Event
def get_customs_by_session_id(session_id, project_id):
def get_customs_by_sessionId2_pg(session_id, project_id):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify("""\
SELECT
@ -32,15 +28,15 @@ def __merge_cells(rows, start, count, replacement):
return rows
def __get_grouped_clickrage(rows, session_id, project_id):
click_rage_issues = issues.get_by_session_id(session_id=session_id, issue_type="click_rage", project_id=project_id)
def __get_grouped_clickrage(rows, session_id):
click_rage_issues = issues.get_by_session_id(session_id=session_id, issue_type="click_rage")
if len(click_rage_issues) == 0:
return rows
for c in click_rage_issues:
merge_count = c.get("payload")
if merge_count is not None:
merge_count = merge_count.get("Count", 3)
merge_count = merge_count.get("count", 3)
else:
merge_count = 3
for i in range(len(rows)):
@ -53,174 +49,375 @@ def __get_grouped_clickrage(rows, session_id, project_id):
return rows
def get_by_session_id(session_id, project_id, group_clickrage=False, event_type: Optional[schemas.EventType] = None):
def get_by_sessionId2_pg(session_id, project_id, group_clickrage=False):
with pg_client.PostgresClient() as cur:
rows = []
if event_type is None or event_type == schemas.EventType.CLICK:
cur.execute(cur.mogrify("""\
SELECT
c.*,
'CLICK' AS type
FROM events.clicks AS c
WHERE
c.session_id = %(session_id)s
ORDER BY c.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows += cur.fetchall()
if group_clickrage:
rows = __get_grouped_clickrage(rows=rows, session_id=session_id, project_id=project_id)
if event_type is None or event_type == schemas.EventType.INPUT:
cur.execute(cur.mogrify("""
SELECT
i.*,
'INPUT' AS type
FROM events.inputs AS i
WHERE
i.session_id = %(session_id)s
ORDER BY i.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows += cur.fetchall()
if event_type is None or event_type == schemas.EventType.LOCATION:
cur.execute(cur.mogrify("""\
SELECT
l.*,
l.path AS value,
l.path AS url,
'LOCATION' AS type
FROM events.pages AS l
WHERE
l.session_id = %(session_id)s
ORDER BY l.timestamp;""", {"project_id": project_id, "session_id": session_id}))
rows += cur.fetchall()
cur.execute(cur.mogrify("""\
SELECT
c.*,
'CLICK' AS type
FROM events.clicks AS c
WHERE
c.session_id = %(session_id)s
ORDER BY c.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows = cur.fetchall()
if group_clickrage:
rows = __get_grouped_clickrage(rows=rows, session_id=session_id)
cur.execute(cur.mogrify("""
SELECT
i.*,
'INPUT' AS type
FROM events.inputs AS i
WHERE
i.session_id = %(session_id)s
ORDER BY i.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows += cur.fetchall()
cur.execute(cur.mogrify("""\
SELECT
l.*,
l.path AS value,
l.path AS url,
'LOCATION' AS type
FROM events.pages AS l
WHERE
l.session_id = %(session_id)s
ORDER BY l.timestamp;""", {"project_id": project_id, "session_id": session_id}))
rows += cur.fetchall()
rows = helper.list_to_camel_case(rows)
rows = sorted(rows, key=lambda k: (k["timestamp"], k["messageId"]))
rows = sorted(rows, key=lambda k: k["messageId"])
return rows
def _search_tags(project_id, value, key=None, source=None):
def __get_data_for_extend(data):
if "errors" not in data:
return data["data"]
def __pg_errors_query(source=None):
return f"""((SELECT DISTINCT ON(lg.message)
lg.message AS value,
source,
'{event_type.ERROR.ui_type}' AS type
FROM {event_type.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.message ILIKE %(svalue)s
{"AND source = %(source)s" if source is not None else ""}
LIMIT 5)
UNION ALL
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
source,
'{event_type.ERROR.ui_type}' AS type
FROM {event_type.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.name ILIKE %(svalue)s
{"AND source = %(source)s" if source is not None else ""}
LIMIT 5)
UNION
(SELECT DISTINCT ON(lg.message)
lg.message AS value,
source,
'{event_type.ERROR.ui_type}' AS type
FROM {event_type.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.message ILIKE %(value)s
{"AND source = %(source)s" if source is not None else ""}
LIMIT 5)
UNION ALL
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
source,
'{event_type.ERROR.ui_type}' AS type
FROM {event_type.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.name ILIKE %(value)s
{"AND source = %(source)s" if source is not None else ""}
LIMIT 5));"""
def __search_pg_errors(project_id, value, key=None, source=None):
now = TimeUTC.now()
with pg_client.PostgresClient() as cur:
query = """
SELECT public.tags.name AS value,
'TAG' AS type
FROM public.tags
WHERE public.tags.project_id = %(project_id)s
ORDER BY SIMILARITY(public.tags.name, %(value)s) DESC
LIMIT 10
"""
query = cur.mogrify(query, {'project_id': project_id, 'value': value})
cur.execute(query)
cur.execute(
cur.mogrify(__pg_errors_query(source), {"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value),
"source": source}))
results = helper.list_to_camel_case(cur.fetchall())
print(f"{TimeUTC.now() - now} : errors")
return results
def __search_pg_errors_ios(project_id, value, key=None, source=None):
now = TimeUTC.now()
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(f"""(SELECT DISTINCT ON(lg.reason)
lg.reason AS value,
'{event_type.ERROR_IOS.ui_type}' AS type
FROM {event_type.ERROR_IOS.table} INNER JOIN public.crashes_ios AS lg USING (crash_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.reason ILIKE %(value)s
LIMIT 5)
UNION ALL
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
'{event_type.ERROR_IOS.ui_type}' AS type
FROM {event_type.ERROR_IOS.table} INNER JOIN public.crashes_ios AS lg USING (crash_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.name ILIKE %(value)s
LIMIT 5);""",
{"project_id": project_id, "value": helper.string_to_sql_like(value)}))
results = helper.list_to_camel_case(cur.fetchall())
print(f"{TimeUTC.now() - now} : errors")
return results
def __search_pg_metadata(project_id, value, key=None, source=None):
meta_keys = metadata.get(project_id=project_id)
meta_keys = {m["key"]: m["index"] for m in meta_keys}
if len(meta_keys) == 0 or key is not None and key not in meta_keys.keys():
return []
sub_from = []
if key is not None:
meta_keys = {key: meta_keys[key]}
for k in meta_keys.keys():
colname = metadata.index_to_colname(meta_keys[k])
sub_from.append(
f"(SELECT DISTINCT ON ({colname}) {colname} AS value, '{k}' AS key FROM public.sessions WHERE project_id = %(project_id)s AND {colname} ILIKE %(value)s LIMIT 5)")
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""\
SELECT key, value, 'METADATA' AS TYPE
FROM({" UNION ALL ".join(sub_from)}) AS all_metas
LIMIT 5;""", {"project_id": project_id, "value": helper.string_to_sql_like(value)}))
results = helper.list_to_camel_case(cur.fetchall())
return results
class EventType:
CLICK = Event(ui_type=schemas.EventType.CLICK, table="events.clicks", column="label")
INPUT = Event(ui_type=schemas.EventType.INPUT, table="events.inputs", column="label")
LOCATION = Event(ui_type=schemas.EventType.LOCATION, table="events.pages", column="path")
CUSTOM = Event(ui_type=schemas.EventType.CUSTOM, table="events_common.customs", column="name")
REQUEST = Event(ui_type=schemas.EventType.REQUEST, table="events_common.requests", column="path")
GRAPHQL = Event(ui_type=schemas.EventType.GRAPHQL, table="events.graphql", column="name")
STATEACTION = Event(ui_type=schemas.EventType.STATE_ACTION, table="events.state_actions", column="name")
TAG = Event(ui_type=schemas.EventType.TAG, table="events.tags", column="tag_id")
ERROR = Event(ui_type=schemas.EventType.ERROR, table="events.errors",
def __generic_query(typename):
return f"""\
(SELECT value, type
FROM public.autocomplete
WHERE
project_id = %(project_id)s
AND type='{typename}'
AND value ILIKE %(svalue)s
LIMIT 5)
UNION
(SELECT value, type
FROM public.autocomplete
WHERE
project_id = %(project_id)s
AND type='{typename}'
AND value ILIKE %(value)s
LIMIT 5)"""
def __generic_autocomplete(event: Event):
def f(project_id, value, key=None, source=None):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(__generic_query(event.ui_type),
{"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value)}))
return helper.list_to_camel_case(cur.fetchall())
return f
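# Illustration only: the two ILIKE parameters bound into __generic_query. The exact
# output of helper.string_to_sql_like is an assumption here; it is presumed to yield a
# contains-style pattern, with the "^" prefix turned into a starts-with pattern.
def _sketch_like_params(value):
    return {
        "value": f"%{value}%",   # assumed result of string_to_sql_like(value)
        "svalue": f"{value}%",   # assumed result of string_to_sql_like("^" + value)
    }
# _sketch_like_params("sign") -> {"value": "%sign%", "svalue": "sign%"}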
class event_type:
CLICK = Event(ui_type=schemas.EventType.click, table="events.clicks", column="label")
INPUT = Event(ui_type=schemas.EventType.input, table="events.inputs", column="label")
LOCATION = Event(ui_type=schemas.EventType.location, table="events.pages", column="base_path")
CUSTOM = Event(ui_type=schemas.EventType.custom, table="events_common.customs", column="name")
REQUEST = Event(ui_type=schemas.EventType.request, table="events_common.requests", column="url")
GRAPHQL = Event(ui_type=schemas.EventType.graphql, table="events.graphql", column="name")
STATEACTION = Event(ui_type=schemas.EventType.state_action, table="events.state_actions", column="name")
ERROR = Event(ui_type=schemas.EventType.error, table="events.errors",
column=None) # column=None because errors are searched by name or message
METADATA = Event(ui_type=schemas.FilterType.METADATA, table="public.sessions", column=None)
# MOBILE
CLICK_MOBILE = Event(ui_type=schemas.EventType.CLICK_MOBILE, table="events_ios.taps", column="label")
INPUT_MOBILE = Event(ui_type=schemas.EventType.INPUT_MOBILE, table="events_ios.inputs", column="label")
VIEW_MOBILE = Event(ui_type=schemas.EventType.VIEW_MOBILE, table="events_ios.views", column="name")
SWIPE_MOBILE = Event(ui_type=schemas.EventType.SWIPE_MOBILE, table="events_ios.swipes", column="label")
CUSTOM_MOBILE = Event(ui_type=schemas.EventType.CUSTOM_MOBILE, table="events_common.customs", column="name")
REQUEST_MOBILE = Event(ui_type=schemas.EventType.REQUEST_MOBILE, table="events_common.requests", column="path")
CRASH_MOBILE = Event(ui_type=schemas.EventType.ERROR_MOBILE, table="events_common.crashes",
column=None) # column=None because errors are searched by name or message
METADATA = Event(ui_type=schemas.FilterType.metadata, table="public.sessions", column=None)
# IOS
CLICK_IOS = Event(ui_type=schemas.EventType.click_ios, table="events_ios.clicks", column="label")
INPUT_IOS = Event(ui_type=schemas.EventType.input_ios, table="events_ios.inputs", column="label")
VIEW_IOS = Event(ui_type=schemas.EventType.view_ios, table="events_ios.views", column="name")
CUSTOM_IOS = Event(ui_type=schemas.EventType.custom_ios, table="events_common.customs", column="name")
REQUEST_IOS = Event(ui_type=schemas.EventType.request_ios, table="events_common.requests", column="url")
ERROR_IOS = Event(ui_type=schemas.EventType.error_ios, table="events_ios.crashes",
column=None) # column=None because errors are searched by name or message
@cache
def supported_types():
return {
EventType.CLICK.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CLICK),
query=autocomplete.__generic_query(typename=EventType.CLICK.ui_type)),
EventType.INPUT.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.INPUT),
query=autocomplete.__generic_query(typename=EventType.INPUT.ui_type)),
EventType.LOCATION.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.LOCATION),
query=autocomplete.__generic_query(
typename=EventType.LOCATION.ui_type)),
EventType.CUSTOM.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CUSTOM),
query=autocomplete.__generic_query(
typename=EventType.CUSTOM.ui_type)),
EventType.REQUEST.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.REQUEST),
query=autocomplete.__generic_query(
typename=EventType.REQUEST.ui_type)),
EventType.GRAPHQL.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.GRAPHQL),
query=autocomplete.__generic_query(
typename=EventType.GRAPHQL.ui_type)),
EventType.STATEACTION.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.STATEACTION),
query=autocomplete.__generic_query(
typename=EventType.STATEACTION.ui_type)),
EventType.TAG.ui_type: SupportedFilter(get=_search_tags, query=None),
EventType.ERROR.ui_type: SupportedFilter(get=autocomplete.__search_errors,
query=None),
EventType.METADATA.ui_type: SupportedFilter(get=autocomplete.__search_metadata,
query=None),
# MOBILE
EventType.CLICK_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CLICK_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.CLICK_MOBILE.ui_type)),
EventType.SWIPE_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.SWIPE_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.SWIPE_MOBILE.ui_type)),
EventType.INPUT_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.INPUT_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.INPUT_MOBILE.ui_type)),
EventType.VIEW_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.VIEW_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.VIEW_MOBILE.ui_type)),
EventType.CUSTOM_MOBILE.ui_type: SupportedFilter(
get=autocomplete.__generic_autocomplete(EventType.CUSTOM_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.CUSTOM_MOBILE.ui_type)),
EventType.REQUEST_MOBILE.ui_type: SupportedFilter(
get=autocomplete.__generic_autocomplete(EventType.REQUEST_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.REQUEST_MOBILE.ui_type)),
EventType.CRASH_MOBILE.ui_type: SupportedFilter(get=autocomplete.__search_errors_mobile,
query=None),
}
SUPPORTED_TYPES = {
event_type.CLICK.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.CLICK),
query=__generic_query(typename=event_type.CLICK.ui_type),
value_limit=3,
starts_with="",
starts_limit=3,
ignore_if_starts_with=["/"]),
event_type.INPUT.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.INPUT),
query=__generic_query(typename=event_type.INPUT.ui_type),
value_limit=3,
starts_with="",
starts_limit=3,
ignore_if_starts_with=["/"]),
event_type.LOCATION.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.LOCATION),
query=__generic_query(typename=event_type.LOCATION.ui_type),
value_limit=3,
starts_with="/",
starts_limit=3,
ignore_if_starts_with=[]),
event_type.CUSTOM.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.CUSTOM),
query=__generic_query(typename=event_type.CUSTOM.ui_type),
value_limit=3,
starts_with="",
starts_limit=3,
ignore_if_starts_with=[""]),
event_type.REQUEST.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.REQUEST),
query=__generic_query(typename=event_type.REQUEST.ui_type),
value_limit=3,
starts_with="/",
starts_limit=3,
ignore_if_starts_with=[""]),
event_type.GRAPHQL.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.GRAPHQL),
query=__generic_query(typename=event_type.GRAPHQL.ui_type),
value_limit=3,
starts_with="/",
starts_limit=4,
ignore_if_starts_with=[]),
event_type.STATEACTION.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.STATEACTION),
query=__generic_query(typename=event_type.STATEACTION.ui_type),
value_limit=3,
starts_with="",
starts_limit=3,
ignore_if_starts_with=[]),
event_type.ERROR.ui_type: SupportedFilter(get=__search_pg_errors,
query=None,
value_limit=4,
starts_with="",
starts_limit=4,
ignore_if_starts_with=["/"]),
event_type.METADATA.ui_type: SupportedFilter(get=__search_pg_metadata,
query=None,
value_limit=3,
starts_with="",
starts_limit=3,
ignore_if_starts_with=["/"]),
# IOS
event_type.CLICK_IOS.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.CLICK_IOS),
query=__generic_query(typename=event_type.CLICK_IOS.ui_type),
value_limit=3,
starts_with="",
starts_limit=3,
ignore_if_starts_with=["/"]),
event_type.INPUT_IOS.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.INPUT_IOS),
query=__generic_query(typename=event_type.INPUT_IOS.ui_type),
value_limit=3,
starts_with="",
starts_limit=3,
ignore_if_starts_with=["/"]),
event_type.VIEW_IOS.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.VIEW_IOS),
query=__generic_query(typename=event_type.VIEW_IOS.ui_type),
value_limit=3,
starts_with="/",
starts_limit=3,
ignore_if_starts_with=[]),
event_type.CUSTOM_IOS.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.CUSTOM_IOS),
query=__generic_query(typename=event_type.CUSTOM_IOS.ui_type),
value_limit=3,
starts_with="",
starts_limit=3,
ignore_if_starts_with=[""]),
event_type.REQUEST_IOS.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.REQUEST_IOS),
query=__generic_query(typename=event_type.REQUEST_IOS.ui_type),
value_limit=3,
starts_with="/",
starts_limit=3,
ignore_if_starts_with=[""]),
event_type.ERROR_IOS.ui_type: SupportedFilter(get=__search_pg_errors,
query=None,
value_limit=4,
starts_with="",
starts_limit=4,
ignore_if_starts_with=["/"]),
}
def get_errors_by_session_id(session_id, project_id):
def __get_merged_queries(queries, value, project_id):
if len(queries) == 0:
return []
now = TimeUTC.now()
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""\
SELECT er.*,ur.*, er.timestamp - s.start_ts AS time
FROM {EventType.ERROR.table} AS er INNER JOIN public.errors AS ur USING (error_id) INNER JOIN public.sessions AS s USING (session_id)
WHERE er.session_id = %(session_id)s AND s.project_id=%(project_id)s
ORDER BY timestamp;""", {"session_id": session_id, "project_id": project_id}))
errors = cur.fetchall()
for e in errors:
e["stacktrace_parsed_at"] = TimeUTC.datetime_to_timestamp(e["stacktrace_parsed_at"])
return helper.list_to_camel_case(errors)
cur.execute(cur.mogrify("(" + ")UNION ALL(".join(queries) + ")",
{"project_id": project_id, "value": helper.string_to_sql_like(value)}))
results = helper.list_to_camel_case(cur.fetchall())
print(f"{TimeUTC.now() - now} : merged-queries for len: {len(queries)}")
return results
def search(text, event_type, project_id, source, key):
def __get_autocomplete_table(value, project_id):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify("""SELECT DISTINCT ON(value,type) project_id, value, type
FROM (SELECT project_id, type, value
FROM (SELECT *,
ROW_NUMBER() OVER (PARTITION BY type ORDER BY value) AS Row_ID
FROM public.autocomplete
WHERE project_id = %(project_id)s
AND value ILIKE %(svalue)s
UNION
SELECT *,
ROW_NUMBER() OVER (PARTITION BY type ORDER BY value) AS Row_ID
FROM public.autocomplete
WHERE project_id = %(project_id)s
AND value ILIKE %(value)s) AS u
WHERE Row_ID <= 5) AS sfa
ORDER BY sfa.type;""",
{"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value)}))
results = helper.list_to_camel_case(cur.fetchall())
return results
def search_pg2(text, event_type, project_id, source, key):
if not event_type:
return {"data": autocomplete.__get_autocomplete_table(text, project_id)}
return {"data": __get_autocomplete_table(text, project_id)}
if event_type in supported_types().keys():
rows = supported_types()[event_type].get(project_id=project_id, value=text, key=key, source=source)
elif event_type + "_MOBILE" in supported_types().keys():
rows = supported_types()[event_type + "_MOBILE"].get(project_id=project_id, value=text, key=key, source=source)
elif event_type in sessions_metas.supported_types().keys():
if event_type in SUPPORTED_TYPES.keys():
rows = SUPPORTED_TYPES[event_type].get(project_id=project_id, value=text, key=key, source=source)
if event_type + "_IOS" in SUPPORTED_TYPES.keys():
rows += SUPPORTED_TYPES[event_type + "_IOS"].get(project_id=project_id, value=text, key=key,
source=source)
elif event_type + "_IOS" in SUPPORTED_TYPES.keys():
rows = SUPPORTED_TYPES[event_type + "_IOS"].get(project_id=project_id, value=text, key=key,
source=source)
elif event_type in sessions_metas.SUPPORTED_TYPES.keys():
return sessions_metas.search(text, event_type, project_id)
elif event_type.endswith("_IOS") \
and event_type[:-len("_IOS")] in sessions_metas.supported_types().keys():
return sessions_metas.search(text, event_type, project_id)
elif event_type.endswith("_MOBILE") \
and event_type[:-len("_MOBILE")] in sessions_metas.supported_types().keys():
and event_type[:-len("_IOS")] in sessions_metas.SUPPORTED_TYPES.keys():
return sessions_metas.search(text, event_type, project_id)
else:
return {"errors": ["unsupported event"]}
return {"data": rows}
def get_errors_by_session_id(session_id):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""\
SELECT er.*,ur.*, er.timestamp - s.start_ts AS time
FROM {event_type.ERROR.table} AS er INNER JOIN public.errors AS ur USING (error_id) INNER JOIN public.sessions AS s USING (session_id)
WHERE
er.session_id = %(session_id)s
ORDER BY timestamp;""", {"session_id": session_id}))
errors = cur.fetchall()
for e in errors:
e["stacktrace_parsed_at"] = TimeUTC.datetime_to_timestamp(e["stacktrace_parsed_at"])
return helper.list_to_camel_case(errors)

View file

@ -0,0 +1,69 @@
from chalicelib.utils import pg_client, helper
from chalicelib.core import events
def get_customs_by_sessionId(session_id, project_id):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""\
SELECT
c.*,
'{events.event_type.CUSTOM_IOS.ui_type}' AS type
FROM {events.event_type.CUSTOM_IOS.table} AS c
WHERE
c.session_id = %(session_id)s
ORDER BY c.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows = cur.fetchall()
return helper.dict_to_camel_case(rows)
def get_by_sessionId(session_id, project_id):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""
SELECT
c.*,
'{events.event_type.CLICK_IOS.ui_type}' AS type
FROM {events.event_type.CLICK_IOS.table} AS c
WHERE
c.session_id = %(session_id)s
ORDER BY c.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows = cur.fetchall()
cur.execute(cur.mogrify(f"""
SELECT
i.*,
'{events.event_type.INPUT_IOS.ui_type}' AS type
FROM {events.event_type.INPUT_IOS.table} AS i
WHERE
i.session_id = %(session_id)s
ORDER BY i.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows += cur.fetchall()
cur.execute(cur.mogrify(f"""
SELECT
v.*,
'{events.event_type.VIEW_IOS.ui_type}' AS type
FROM {events.event_type.VIEW_IOS.table} AS v
WHERE
v.session_id = %(session_id)s
ORDER BY v.timestamp;""", {"project_id": project_id, "session_id": session_id}))
rows += cur.fetchall()
rows = helper.list_to_camel_case(rows)
rows = sorted(rows, key=lambda k: k["timestamp"])
return rows
def get_crashes_by_session_id(session_id):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""
SELECT cr.*,uc.*, cr.timestamp - s.start_ts AS time
FROM {events.event_type.ERROR_IOS.table} AS cr INNER JOIN public.crashes_ios AS uc USING (crash_id) INNER JOIN public.sessions AS s USING (session_id)
WHERE
cr.session_id = %(session_id)s
ORDER BY timestamp;""", {"session_id": session_id}))
errors = cur.fetchall()
return helper.list_to_camel_case(errors)

View file

@ -1,68 +0,0 @@
from chalicelib.utils import pg_client, helper
from chalicelib.core import events
def get_customs_by_session_id(session_id, project_id):
return events.get_customs_by_session_id(session_id=session_id, project_id=project_id)
def get_by_sessionId(session_id, project_id):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""
SELECT
c.*,
'TAP' AS type
FROM events_ios.taps AS c
WHERE
c.session_id = %(session_id)s
ORDER BY c.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows = cur.fetchall()
cur.execute(cur.mogrify(f"""
SELECT
i.*,
'INPUT' AS type
FROM events_ios.inputs AS i
WHERE
i.session_id = %(session_id)s
ORDER BY i.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows += cur.fetchall()
cur.execute(cur.mogrify(f"""
SELECT
v.*,
'VIEW' AS type
FROM events_ios.views AS v
WHERE
v.session_id = %(session_id)s
ORDER BY v.timestamp;""", {"project_id": project_id, "session_id": session_id}))
rows += cur.fetchall()
cur.execute(cur.mogrify(f"""
SELECT
s.*,
'SWIPE' AS type
FROM events_ios.swipes AS s
WHERE
s.session_id = %(session_id)s
ORDER BY s.timestamp;""", {"project_id": project_id, "session_id": session_id}))
rows += cur.fetchall()
rows = helper.list_to_camel_case(rows)
rows = sorted(rows, key=lambda k: k["timestamp"])
return rows
def get_crashes_by_session_id(session_id):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""
SELECT cr.*,uc.*, cr.timestamp - s.start_ts AS time
FROM {events.EventType.CRASH_MOBILE.table} AS cr
INNER JOIN public.crashes_ios AS uc USING (crash_ios_id)
INNER JOIN public.sessions AS s USING (session_id)
WHERE
cr.session_id = %(session_id)s
ORDER BY timestamp;""", {"session_id": session_id}))
errors = cur.fetchall()
return helper.list_to_camel_case(errors)

View file

@ -1,598 +0,0 @@
import json
import logging
from typing import Any, List, Dict, Optional
import schemas
from chalicelib.utils import helper
from chalicelib.utils import pg_client
from chalicelib.utils.TimeUTC import TimeUTC
from fastapi import HTTPException, status
logger = logging.getLogger(__name__)
feature_flag_columns = (
"feature_flag_id",
"payload",
"flag_key",
"description",
"flag_type",
"is_persist",
"is_active",
"created_at",
"updated_at",
"created_by",
"updated_by",
)
def exists_by_name(flag_key: str, project_id: int, exclude_id: Optional[int]) -> bool:
with pg_client.PostgresClient() as cur:
query = cur.mogrify(f"""SELECT EXISTS(SELECT 1
FROM public.feature_flags
WHERE deleted_at IS NULL
AND flag_key ILIKE %(flag_key)s AND project_id=%(project_id)s
{"AND feature_flag_id!=%(exclude_id)s" if exclude_id else ""}) AS exists;""",
{"flag_key": flag_key, "exclude_id": exclude_id, "project_id": project_id})
cur.execute(query=query)
row = cur.fetchone()
return row["exists"]
def update_feature_flag_status(project_id: int, feature_flag_id: int, is_active: bool) -> Dict[str, Any]:
try:
with pg_client.PostgresClient() as cur:
query = cur.mogrify(f"""UPDATE feature_flags
SET is_active = %(is_active)s, updated_at=NOW()
WHERE feature_flag_id=%(feature_flag_id)s AND project_id=%(project_id)s
RETURNING is_active;""",
{"feature_flag_id": feature_flag_id, "is_active": is_active, "project_id": project_id})
cur.execute(query=query)
return {"is_active": cur.fetchone()["is_active"]}
except Exception as e:
logger.error(f"Failed to update feature flag status: {e}")
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
detail="Failed to update feature flag status")
def search_feature_flags(project_id: int, user_id: int, data: schemas.SearchFlagsSchema) -> Dict[str, Any]:
"""
Get all feature flags and their total count.
"""
constraints, params = prepare_constraints_params_to_search(data, project_id, user_id)
sql = f"""
SELECT COUNT(1) OVER () AS count, {", ".join(feature_flag_columns)}
FROM feature_flags
WHERE {" AND ".join(constraints)}
ORDER BY updated_at {data.order}
LIMIT %(limit)s OFFSET %(offset)s;
"""
with pg_client.PostgresClient() as cur:
query = cur.mogrify(sql, params)
cur.execute(query)
rows = cur.fetchall()
if len(rows) == 0:
return {"data": {"total": 0, "list": []}}
results = {"total": rows[0]["count"]}
rows = helper.list_to_camel_case(rows)
for row in rows:
row.pop("count")
row["createdAt"] = TimeUTC.datetime_to_timestamp(row["createdAt"])
row["updatedAt"] = TimeUTC.datetime_to_timestamp(row["updatedAt"])
results["list"] = rows
return {"data": results}
def prepare_constraints_params_to_search(data, project_id, user_id):
constraints = [
"feature_flags.project_id = %(project_id)s",
"feature_flags.deleted_at IS NULL",
]
params = {
"project_id": project_id,
"user_id": user_id,
"limit": data.limit,
"offset": (data.page - 1) * data.limit,
}
if data.is_active is not None:
constraints.append("feature_flags.is_active=%(is_active)s")
params["is_active"] = data.is_active
if data.user_id is not None:
constraints.append("feature_flags.created_by=%(user_id)s")
if data.query is not None and len(data.query) > 0:
constraints.append("flag_key ILIKE %(query)s")
params["query"] = helper.values_for_operator(value=data.query,
op=schemas.SearchEventOperator.CONTAINS)
return constraints, params
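# Illustration only (hypothetical inputs): the dynamic WHERE/params pattern used above.
# Each optional filter appends a SQL fragment plus a matching %(name)s parameter, and
# pagination maps page/limit to OFFSET = (page - 1) * limit.
def _sketch_search_constraints(project_id, is_active=None, query=None, page=1, limit=10):
    constraints = ["feature_flags.project_id = %(project_id)s",
                   "feature_flags.deleted_at IS NULL"]
    params = {"project_id": project_id, "limit": limit, "offset": (page - 1) * limit}
    if is_active is not None:
        constraints.append("feature_flags.is_active=%(is_active)s")
        params["is_active"] = is_active
    if query:
        constraints.append("flag_key ILIKE %(query)s")
        params["query"] = f"%{query}%"  # assumed behaviour of values_for_operator(CONTAINS)
    return constraints, params
# _sketch_search_constraints(1, is_active=True, query="checkout", page=2)
#   -> constraints end with "flag_key ILIKE %(query)s" and params["offset"] == 10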
def create_feature_flag(project_id: int, user_id: int, feature_flag_data: schemas.FeatureFlagSchema) -> Optional[int]:
if feature_flag_data.flag_type == schemas.FeatureFlagType.MULTI_VARIANT and len(feature_flag_data.variants) == 0:
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
detail="Variants are required for multi variant flag")
validate_unique_flag_key(feature_flag_data, project_id)
validate_multi_variant_flag(feature_flag_data)
insert_columns = (
'project_id',
'flag_key',
'description',
'flag_type',
'payload',
'is_persist',
'is_active',
'created_by'
)
params = prepare_params_to_create_flag(feature_flag_data, project_id, user_id)
conditions_len = len(feature_flag_data.conditions)
variants_len = len(feature_flag_data.variants)
flag_sql = f"""
INSERT INTO feature_flags ({", ".join(insert_columns)})
VALUES ({", ".join(["%(" + col + ")s" for col in insert_columns])})
RETURNING feature_flag_id
"""
conditions_query = ""
variants_query = ""
if conditions_len > 0:
conditions_query = f"""
inserted_conditions AS (
INSERT INTO feature_flags_conditions(feature_flag_id, name, rollout_percentage, filters)
VALUES {",".join([f"(("
f"SELECT feature_flag_id FROM inserted_flag),"
f"%(name_{i})s,"
f"%(rollout_percentage_{i})s,"
f"%(filters_{i})s::jsonb)"
for i in range(conditions_len)])}
RETURNING feature_flag_id
)
"""
if variants_len > 0:
variants_query = f""",
inserted_variants AS (
INSERT INTO feature_flags_variants(feature_flag_id, value, description, rollout_percentage, payload)
VALUES {",".join([f"((SELECT feature_flag_id FROM inserted_flag),"
f"%(v_value_{i})s,"
f"%(v_description_{i})s,"
f"%(v_rollout_percentage_{i})s,"
f"%(v_payload_{i})s::jsonb)"
for i in range(variants_len)])}
RETURNING feature_flag_id
)
"""
query = f"""
WITH inserted_flag AS ({flag_sql}),
{conditions_query}
{variants_query}
SELECT feature_flag_id FROM inserted_flag;
"""
with pg_client.PostgresClient() as cur:
query = cur.mogrify(query, params)
cur.execute(query)
row = cur.fetchone()
if row is None:
return None
return get_feature_flag(project_id=project_id, feature_flag_id=row["feature_flag_id"])
def validate_unique_flag_key(feature_flag_data, project_id, exclude_id=None):
if exists_by_name(project_id=project_id, flag_key=feature_flag_data.flag_key, exclude_id=exclude_id):
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="A feature flag with this key already exists.")
def validate_multi_variant_flag(feature_flag_data):
if feature_flag_data.flag_type == schemas.FeatureFlagType.MULTI_VARIANT:
if sum([v.rollout_percentage for v in feature_flag_data.variants]) > 100:
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
detail="Sum of rollout percentages for variants cannot be greater than 100.")
def prepare_params_to_create_flag(feature_flag_data, project_id, user_id):
conditions_data = prepare_conditions_values(feature_flag_data)
variants_data = prepare_variants_values(feature_flag_data)
params = {
"project_id": project_id,
"created_by": user_id,
**feature_flag_data.model_dump(),
**conditions_data,
**variants_data,
"payload": json.dumps(feature_flag_data.payload)
}
return params
def prepare_variants_values(feature_flag_data):
variants_data = {}
for i, v in enumerate(feature_flag_data.variants):
for k in v.model_dump().keys():
variants_data[f"v_{k}_{i}"] = v.__getattribute__(k)
variants_data[f"v_value_{i}"] = v.value
variants_data[f"v_description_{i}"] = v.description
variants_data[f"v_payload_{i}"] = json.dumps(v.payload)
variants_data[f"v_rollout_percentage_{i}"] = v.rollout_percentage
return variants_data
def prepare_conditions_values(feature_flag_data):
conditions_data = {}
for i, s in enumerate(feature_flag_data.conditions):
for k in s.model_dump().keys():
conditions_data[f"{k}_{i}"] = s.__getattribute__(k)
conditions_data[f"name_{i}"] = s.name
conditions_data[f"rollout_percentage_{i}"] = s.rollout_percentage
conditions_data[f"filters_{i}"] = json.dumps([filter_.model_dump() for filter_ in s.filters])
return conditions_data
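# Illustration only (hypothetical data): both helpers above flatten list items into
# indexed keyword parameters so a single mogrify() call can bind every VALUES row.
# Relies on the module-level `import json` at the top of this file.
def _sketch_indexed_condition_params(conditions):
    params = {}
    for i, c in enumerate(conditions):
        params[f"name_{i}"] = c["name"]
        params[f"rollout_percentage_{i}"] = c["rollout_percentage"]
        params[f"filters_{i}"] = json.dumps(c["filters"])
    return params
# _sketch_indexed_condition_params([{"name": "EU users", "rollout_percentage": 50, "filters": []}])
#   -> {"name_0": "EU users", "rollout_percentage_0": 50, "filters_0": "[]"}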
def get_feature_flag(project_id: int, feature_flag_id: int) -> Optional[Dict[str, Any]]:
conditions_query = """
SELECT COALESCE(jsonb_agg(ffc ORDER BY condition_id), '[]'::jsonb) AS conditions
FROM feature_flags_conditions AS ffc
WHERE ffc.feature_flag_id = %(feature_flag_id)s
"""
variants_query = """
SELECT COALESCE(jsonb_agg(ffv ORDER BY variant_id), '[]'::jsonb) AS variants
FROM feature_flags_variants AS ffv
WHERE ffv.feature_flag_id = %(feature_flag_id)s
"""
sql = f"""
SELECT {", ".join(["ff." + col for col in feature_flag_columns])},
({conditions_query}) AS conditions,
({variants_query}) AS variants
FROM feature_flags AS ff
WHERE ff.feature_flag_id = %(feature_flag_id)s
AND ff.project_id = %(project_id)s
AND ff.deleted_at IS NULL;
"""
with pg_client.PostgresClient() as cur:
query = cur.mogrify(sql, {"feature_flag_id": feature_flag_id, "project_id": project_id})
cur.execute(query)
row = cur.fetchone()
if row is None:
return {"errors": ["Feature flag not found"]}
row["created_at"] = TimeUTC.datetime_to_timestamp(row["created_at"])
row["updated_at"] = TimeUTC.datetime_to_timestamp(row["updated_at"])
return {"data": helper.dict_to_camel_case(row)}
def create_conditions(feature_flag_id: int, conditions: List[schemas.FeatureFlagCondition]) -> List[Dict[str, Any]]:
"""
Create new feature flag conditions and return their data.
"""
rows = []
# Insert all condition rows with a single SQL query
if len(conditions) > 0:
columns = (
"feature_flag_id",
"name",
"rollout_percentage",
"filters",
)
sql = f"""
INSERT INTO feature_flags_conditions
(feature_flag_id, name, rollout_percentage, filters)
VALUES {", ".join(["%s"] * len(conditions))}
RETURNING condition_id, {", ".join(columns)}
"""
with pg_client.PostgresClient() as cur:
params = [
(feature_flag_id, c.name, c.rollout_percentage,
json.dumps([filter_.model_dump() for filter_ in c.filters]))
for c in conditions]
query = cur.mogrify(sql, params)
cur.execute(query)
rows = cur.fetchall()
return rows
def update_feature_flag(project_id: int, feature_flag_id: int,
feature_flag: schemas.FeatureFlagSchema, user_id: int):
"""
Update an existing feature flag and return its updated data.
"""
validate_unique_flag_key(feature_flag_data=feature_flag, project_id=project_id, exclude_id=feature_flag_id)
validate_multi_variant_flag(feature_flag_data=feature_flag)
columns = (
"flag_key",
"description",
"flag_type",
"is_persist",
"is_active",
"payload",
"updated_by",
)
params = {
"updated_by": user_id,
"feature_flag_id": feature_flag_id,
"project_id": project_id,
**feature_flag.model_dump(),
"payload": json.dumps(feature_flag.payload),
}
sql = f"""
UPDATE feature_flags
SET {", ".join(f"{column} = %({column})s" for column in columns)},
updated_at = timezone('utc'::text, now())
WHERE feature_flag_id = %(feature_flag_id)s AND project_id = %(project_id)s
RETURNING feature_flag_id, {", ".join(columns)}, created_at, updated_at
"""
with pg_client.PostgresClient() as cur:
query = cur.mogrify(sql, params)
cur.execute(query)
row = cur.fetchone()
if row is None:
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Feature flag not found")
row["created_at"] = TimeUTC.datetime_to_timestamp(row["created_at"])
row["updated_at"] = TimeUTC.datetime_to_timestamp(row["updated_at"])
row['conditions'] = check_conditions(feature_flag_id, feature_flag.conditions)
row['variants'] = check_variants(feature_flag_id, feature_flag.variants)
return {"data": helper.dict_to_camel_case(row)}
def get_conditions(feature_flag_id: int):
"""
Get all conditions for a feature flag.
"""
sql = """
SELECT
condition_id,
feature_flag_id,
name,
rollout_percentage,
filters
FROM feature_flags_conditions
WHERE feature_flag_id = %(feature_flag_id)s
ORDER BY condition_id;
"""
with pg_client.PostgresClient() as cur:
query = cur.mogrify(sql, {"feature_flag_id": feature_flag_id})
cur.execute(query)
rows = cur.fetchall()
return rows
def check_variants(feature_flag_id: int, variants: List[schemas.FeatureFlagVariant]) -> Any:
existing_ids = [ev.get("variant_id") for ev in get_variants(feature_flag_id)]
to_be_deleted = []
to_be_updated = []
to_be_created = []
for vid in existing_ids:
if vid not in [v.variant_id for v in variants]:
to_be_deleted.append(vid)
for variant in variants:
if variant.variant_id is None:
to_be_created.append(variant)
else:
to_be_updated.append(variant)
if len(to_be_created) > 0:
create_variants(feature_flag_id=feature_flag_id, variants=to_be_created)
if len(to_be_updated) > 0:
update_variants(feature_flag_id=feature_flag_id, variants=to_be_updated)
if len(to_be_deleted) > 0:
delete_variants(feature_flag_id=feature_flag_id, ids=to_be_deleted)
return get_variants(feature_flag_id)
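# Illustration only: the reconcile pattern shared by check_variants and check_conditions.
# Incoming rows without an id are created, rows with an id are updated, and existing ids
# missing from the payload are deleted (plain dicts here instead of pydantic schemas).
def _sketch_partition_for_sync(existing_ids, incoming):
    to_create = [item for item in incoming if item.get("id") is None]
    to_update = [item for item in incoming if item.get("id") is not None]
    incoming_ids = {item["id"] for item in to_update}
    to_delete = [eid for eid in existing_ids if eid not in incoming_ids]
    return to_create, to_update, to_delete
# _sketch_partition_for_sync([1, 2], [{"id": 1}, {"id": None}])
#   -> ([{"id": None}], [{"id": 1}], [2])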
def get_variants(feature_flag_id: int):
sql = """
SELECT
variant_id,
feature_flag_id,
value,
payload,
rollout_percentage
FROM feature_flags_variants
WHERE feature_flag_id = %(feature_flag_id)s
ORDER BY variant_id;
"""
with pg_client.PostgresClient() as cur:
query = cur.mogrify(sql, {"feature_flag_id": feature_flag_id})
cur.execute(query)
rows = cur.fetchall()
return rows
def create_variants(feature_flag_id: int, variants: List[schemas.FeatureFlagVariant]) -> List[Dict[str, Any]]:
"""
Create new feature flag variants and return their data.
"""
rows = []
# Insert all variant rows with a single SQL query
if len(variants) > 0:
columns = (
"feature_flag_id",
"value",
"description",
"payload",
"rollout_percentage",
)
sql = f"""
INSERT INTO feature_flags_variants
(feature_flag_id, value, description, payload, rollout_percentage)
VALUES {", ".join(["%s"] * len(variants))}
RETURNING variant_id, {", ".join(columns)}
"""
with pg_client.PostgresClient() as cur:
params = [(feature_flag_id, v.value, v.description, json.dumps(v.payload), v.rollout_percentage) for v in
variants]
query = cur.mogrify(sql, params)
cur.execute(query)
rows = cur.fetchall()
return rows
def update_variants(feature_flag_id: int, variants: List[schemas.FeatureFlagVariant]) -> Any:
"""
Update existing feature flag variants in a single bulk statement.
"""
values = []
params = {
"feature_flag_id": feature_flag_id,
}
for i in range(len(variants)):
values.append(f"(%(variant_id_{i})s, %(value_{i})s, %(rollout_percentage_{i})s, %(payload_{i})s::jsonb)")
params[f"variant_id_{i}"] = variants[i].variant_id
params[f"value_{i}"] = variants[i].value
params[f"rollout_percentage_{i}"] = variants[i].rollout_percentage
params[f"payload_{i}"] = json.dumps(variants[i].payload)
sql = f"""
UPDATE feature_flags_variants
SET value = c.value, rollout_percentage = c.rollout_percentage, payload = c.payload
FROM (VALUES {','.join(values)}) AS c(variant_id, value, rollout_percentage, payload)
WHERE c.variant_id = feature_flags_variants.variant_id AND feature_flag_id = %(feature_flag_id)s;
"""
with pg_client.PostgresClient() as cur:
query = cur.mogrify(sql, params)
cur.execute(query)
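# Illustration only: for two variants, the f-string above renders to roughly
#   UPDATE feature_flags_variants
#   SET value = c.value, rollout_percentage = c.rollout_percentage, payload = c.payload
#   FROM (VALUES (%(variant_id_0)s, %(value_0)s, %(rollout_percentage_0)s, %(payload_0)s::jsonb),
#                (%(variant_id_1)s, %(value_1)s, %(rollout_percentage_1)s, %(payload_1)s::jsonb))
#        AS c(variant_id, value, rollout_percentage, payload)
#   WHERE c.variant_id = feature_flags_variants.variant_id AND feature_flag_id = %(feature_flag_id)s;
# i.e. one UPDATE ... FROM (VALUES ...) statement instead of one UPDATE per row.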
def delete_variants(feature_flag_id: int, ids: List[int]) -> None:
"""
Delete the given feature flag variants.
"""
sql = """
DELETE FROM feature_flags_variants
WHERE variant_id IN %(ids)s
AND feature_flag_id= %(feature_flag_id)s;
"""
with pg_client.PostgresClient() as cur:
query = cur.mogrify(sql, {"feature_flag_id": feature_flag_id, "ids": tuple(ids)})
cur.execute(query)
def check_conditions(feature_flag_id: int, conditions: List[schemas.FeatureFlagCondition]) -> Any:
existing_ids = [ec.get("condition_id") for ec in get_conditions(feature_flag_id)]
to_be_deleted = []
to_be_updated = []
to_be_created = []
for cid in existing_ids:
if cid not in [c.condition_id for c in conditions]:
to_be_deleted.append(cid)
for condition in conditions:
if condition.condition_id is None:
to_be_created.append(condition)
else:
to_be_updated.append(condition)
if len(to_be_created) > 0:
create_conditions(feature_flag_id=feature_flag_id, conditions=to_be_created)
if len(to_be_updated) > 0:
update_conditions(feature_flag_id=feature_flag_id, conditions=to_be_updated)
if len(to_be_deleted) > 0:
delete_conditions(feature_flag_id=feature_flag_id, ids=to_be_deleted)
return get_conditions(feature_flag_id)
def update_conditions(feature_flag_id: int, conditions: List[schemas.FeatureFlagCondition]) -> Any:
"""
Update existing feature flag conditions in a single bulk statement.
"""
values = []
params = {
"feature_flag_id": feature_flag_id,
}
for i in range(len(conditions)):
values.append(f"(%(condition_id_{i})s, %(name_{i})s, %(rollout_percentage_{i})s, %(filters_{i})s::jsonb)")
params[f"condition_id_{i}"] = conditions[i].condition_id
params[f"name_{i}"] = conditions[i].name
params[f"rollout_percentage_{i}"] = conditions[i].rollout_percentage
params[f"filters_{i}"] = json.dumps([filter_.model_dump() for filter_ in conditions[i].filters])  # serialize filter schemas, as in prepare_conditions_values
sql = f"""
UPDATE feature_flags_conditions
SET name = c.name, rollout_percentage = c.rollout_percentage, filters = c.filters
FROM (VALUES {','.join(values)}) AS c(condition_id, name, rollout_percentage, filters)
WHERE c.condition_id = feature_flags_conditions.condition_id AND feature_flag_id = %(feature_flag_id)s;
"""
with pg_client.PostgresClient() as cur:
query = cur.mogrify(sql, params)
cur.execute(query)
def delete_conditions(feature_flag_id: int, ids: List[int]) -> None:
"""
Delete feature flag conditions.
"""
sql = """
DELETE FROM feature_flags_conditions
WHERE condition_id IN %(ids)s
AND feature_flag_id= %(feature_flag_id)s;
"""
with pg_client.PostgresClient() as cur:
query = cur.mogrify(sql, {"feature_flag_id": feature_flag_id, "ids": tuple(ids)})
cur.execute(query)
def delete_feature_flag(project_id: int, feature_flag_id: int):
"""
Delete a feature flag.
"""
conditions = [
"project_id=%(project_id)s",
"feature_flags.feature_flag_id=%(feature_flag_id)s"
]
params = {"project_id": project_id, "feature_flag_id": feature_flag_id}
with pg_client.PostgresClient() as cur:
query = cur.mogrify(f"""UPDATE feature_flags
SET deleted_at= (now() at time zone 'utc'), is_active=false
WHERE {" AND ".join(conditions)};""", params)
cur.execute(query)
return {"state": "success"}

View file

@ -0,0 +1,284 @@
import json
import chalicelib.utils.helper
import schemas
from chalicelib.core import significance, sessions
from chalicelib.utils import dev
from chalicelib.utils import helper, pg_client
from chalicelib.utils.TimeUTC import TimeUTC
REMOVE_KEYS = ["key", "_key", "startDate", "endDate"]
ALLOW_UPDATE_FOR = ["name", "filter"]
# def filter_stages(stages):
# ALLOW_TYPES = [events.event_type.CLICK.ui_type, events.event_type.INPUT.ui_type,
# events.event_type.LOCATION.ui_type, events.event_type.CUSTOM.ui_type,
# events.event_type.CLICK_IOS.ui_type, events.event_type.INPUT_IOS.ui_type,
# events.event_type.VIEW_IOS.ui_type, events.event_type.CUSTOM_IOS.ui_type, ]
# return [s for s in stages if s["type"] in ALLOW_TYPES and s.get("value") is not None]
def create(project_id, user_id, name, filter: schemas.FunnelSearchPayloadSchema, is_public):
helper.delete_keys_from_dict(filter, REMOVE_KEYS)
# filter.events = filter_stages(stages=filter.events)
with pg_client.PostgresClient() as cur:
query = cur.mogrify("""\
INSERT INTO public.funnels (project_id, user_id, name, filter,is_public)
VALUES (%(project_id)s, %(user_id)s, %(name)s, %(filter)s::jsonb,%(is_public)s)
RETURNING *;""",
{"user_id": user_id, "project_id": project_id, "name": name,
"filter": json.dumps(filter.dict()),
"is_public": is_public})
cur.execute(
query
)
r = cur.fetchone()
r["created_at"] = TimeUTC.datetime_to_timestamp(r["created_at"])
r = helper.dict_to_camel_case(r)
r["filter"]["startDate"], r["filter"]["endDate"] = TimeUTC.get_start_end_from_range(r["filter"]["rangeValue"])
return {"data": r}
def update(funnel_id, user_id, project_id, name=None, filter=None, is_public=None):
s_query = []
if filter is not None:
helper.delete_keys_from_dict(filter, REMOVE_KEYS)
s_query.append("filter = %(filter)s::jsonb")
if name is not None and len(name) > 0:
s_query.append("name = %(name)s")
if is_public is not None:
s_query.append("is_public = %(is_public)s")
if len(s_query) == 0:
return {"errors": ["Nothing to update"]}
with pg_client.PostgresClient() as cur:
query = cur.mogrify(f"""\
UPDATE public.funnels
SET {" , ".join(s_query)}
WHERE funnel_id=%(funnel_id)s
AND project_id = %(project_id)s
AND (user_id = %(user_id)s OR is_public)
RETURNING *;""", {"user_id": user_id, "funnel_id": funnel_id, "name": name,
"filter": json.dumps(filter) if filter is not None else None, "is_public": is_public,
"project_id": project_id})
# print("--------------------")
# print(query)
# print("--------------------")
cur.execute(
query
)
r = cur.fetchone()
r["created_at"] = TimeUTC.datetime_to_timestamp(r["created_at"])
r = helper.dict_to_camel_case(r)
r["filter"]["startDate"], r["filter"]["endDate"] = TimeUTC.get_start_end_from_range(r["filter"]["rangeValue"])
return {"data": r}
def get_by_user(project_id, user_id, range_value=None, start_date=None, end_date=None, details=False):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(
f"""\
SELECT funnel_id, project_id, user_id, name, created_at, deleted_at, is_public
{",filter" if details else ""}
FROM public.funnels
WHERE project_id = %(project_id)s
AND funnels.deleted_at IS NULL
AND (funnels.user_id = %(user_id)s OR funnels.is_public);""",
{"project_id": project_id, "user_id": user_id}
)
)
rows = cur.fetchall()
rows = helper.list_to_camel_case(rows)
for row in rows:
row["createdAt"] = TimeUTC.datetime_to_timestamp(row["createdAt"])
if details:
# row["filter"]["events"] = filter_stages(row["filter"]["events"])
get_start_end_time(filter_d=row["filter"], range_value=range_value, start_date=start_date,
end_date=end_date)
counts = sessions.search2_pg(data=schemas.SessionsSearchPayloadSchema.parse_obj(row["filter"]),
project_id=project_id, user_id=None, count_only=True)
row["sessionsCount"] = counts["countSessions"]
row["usersCount"] = counts["countUsers"]
filter_clone = dict(row["filter"])
overview = significance.get_overview(filter_d=row["filter"], project_id=project_id)
row["stages"] = overview["stages"]
row.pop("filter")
row["stagesCount"] = len(row["stages"])
# TODO: ask david to count it alone
row["criticalIssuesCount"] = overview["criticalIssuesCount"]
row["missedConversions"] = 0 if len(row["stages"]) < 2 \
else row["stages"][0]["sessionsCount"] - row["stages"][-1]["sessionsCount"]
row["filter"] = helper.old_search_payload_to_flat(filter_clone)
return rows
def get_possible_issue_types(project_id):
return [{"type": t, "title": chalicelib.utils.helper.get_issue_title(t)} for t in
['click_rage', 'dead_click', 'excessive_scrolling',
'bad_request', 'missing_resource', 'memory', 'cpu',
'slow_resource', 'slow_page_load', 'crash', 'custom_event_error',
'js_error']]
def get_start_end_time(filter_d, range_value, start_date, end_date):
if start_date is not None and end_date is not None:
filter_d["startDate"], filter_d["endDate"] = start_date, end_date
elif range_value is not None and len(range_value) > 0:
filter_d["rangeValue"] = range_value
filter_d["startDate"], filter_d["endDate"] = TimeUTC.get_start_end_from_range(range_value)
else:
filter_d["startDate"], filter_d["endDate"] = TimeUTC.get_start_end_from_range(filter_d["rangeValue"])
def delete(project_id, funnel_id, user_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
UPDATE public.funnels
SET deleted_at = timezone('utc'::text, now())
WHERE project_id = %(project_id)s
AND funnel_id = %(funnel_id)s
AND (user_id = %(user_id)s OR is_public);""",
{"funnel_id": funnel_id, "project_id": project_id, "user_id": user_id})
)
return {"data": {"state": "success"}}
def get_sessions(project_id, funnel_id, user_id, range_value=None, start_date=None, end_date=None):
f = get(funnel_id=funnel_id, project_id=project_id, user_id=user_id, flatten=False)
if f is None:
return {"errors": ["funnel not found"]}
get_start_end_time(filter_d=f["filter"], range_value=range_value, start_date=start_date, end_date=end_date)
return sessions.search2_pg(data=schemas.SessionsSearchPayloadSchema.parse_obj(f["filter"]), project_id=project_id,
user_id=user_id)
def get_sessions_on_the_fly(funnel_id, project_id, user_id, data: schemas.FunnelSearchPayloadSchema):
# data.events = filter_stages(data.events)
if len(data.events) == 0:
f = get(funnel_id=funnel_id, project_id=project_id, user_id=user_id)
if f is None:
return {"errors": ["funnel not found"]}
get_start_end_time(filter_d=f["filter"], range_value=data.range_value,
start_date=data.startDate, end_date=data.endDate)
data = schemas.FunnelSearchPayloadSchema.parse_obj(f["filter"])
return sessions.search2_pg(data=data, project_id=project_id,
user_id=user_id)
def get_top_insights(project_id, user_id, funnel_id, range_value=None, start_date=None, end_date=None):
f = get(funnel_id=funnel_id, project_id=project_id, user_id=user_id, flatten=False)
if f is None:
return {"errors": ["funnel not found"]}
get_start_end_time(filter_d=f["filter"], range_value=range_value, start_date=start_date, end_date=end_date)
insights, total_drop_due_to_issues = significance.get_top_insights(filter_d=f["filter"], project_id=project_id)
if len(insights) > 0:
insights[-1]["dropDueToIssues"] = total_drop_due_to_issues
return {"data": {"stages": helper.list_to_camel_case(insights),
"totalDropDueToIssues": total_drop_due_to_issues}}
def get_top_insights_on_the_fly(funnel_id, user_id, project_id, data):
# data["events"] = filter_stages(data.get("events", []))
if len(data["events"]) == 0:
f = get(funnel_id=funnel_id, project_id=project_id, user_id=user_id)
if f is None:
return {"errors": ["funnel not found"]}
get_start_end_time(filter_d=f["filter"], range_value=data.get("rangeValue", None),
start_date=data.get('startDate', None),
end_date=data.get('endDate', None))
data = f["filter"]
insights, total_drop_due_to_issues = significance.get_top_insights(filter_d=data, project_id=project_id)
if len(insights) > 0:
insights[-1]["dropDueToIssues"] = total_drop_due_to_issues
return {"data": {"stages": helper.list_to_camel_case(insights),
"totalDropDueToIssues": total_drop_due_to_issues}}
def get_issues(project_id, user_id, funnel_id, range_value=None, start_date=None, end_date=None):
f = get(funnel_id=funnel_id, project_id=project_id, user_id=user_id, flatten=False)
if f is None:
return {"errors": ["funnel not found"]}
get_start_end_time(filter_d=f["filter"], range_value=range_value, start_date=start_date, end_date=end_date)
return {"data": {
"issues": helper.dict_to_camel_case(significance.get_issues_list(filter_d=f["filter"], project_id=project_id))
}}
@dev.timed
def get_issues_on_the_fly(funnel_id, user_id, project_id, data):
first_stage = data.get("firstStage")
last_stage = data.get("lastStage")
# data["events"] = filter_stages(data.get("events", []))
if len(data["events"]) == 0:
f = get(funnel_id=funnel_id, project_id=project_id, user_id=user_id)
if f is None:
return {"errors": ["funnel not found"]}
get_start_end_time(filter_d=f["filter"], range_value=data.get("rangeValue", None),
start_date=data.get('startDate', None),
end_date=data.get('endDate', None))
data = f["filter"]
return {
"issues": helper.dict_to_camel_case(
significance.get_issues_list(filter_d=data, project_id=project_id, first_stage=first_stage,
last_stage=last_stage))}
def get(funnel_id, project_id, user_id, flatten=True):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(
"""\
SELECT
*
FROM public.funnels
WHERE project_id = %(project_id)s
AND deleted_at IS NULL
AND funnel_id = %(funnel_id)s
AND (user_id = %(user_id)s OR is_public);""",
{"funnel_id": funnel_id, "project_id": project_id, "user_id": user_id}
)
)
f = helper.dict_to_camel_case(cur.fetchone())
if f is None:
return None
f["createdAt"] = TimeUTC.datetime_to_timestamp(f["createdAt"])
# f["filter"]["events"] = filter_stages(stages=f["filter"]["events"])
if flatten:
f["filter"] = helper.old_search_payload_to_flat(f["filter"])
return f
@dev.timed
def search_by_issue(user_id, project_id, funnel_id, issue_id, data: schemas.FunnelSearchPayloadSchema, range_value=None,
start_date=None, end_date=None):
if len(data.events) == 0:
f = get(funnel_id=funnel_id, project_id=project_id, user_id=user_id)
if f is None:
return {"errors": ["funnel not found"]}
data.startDate = data.startDate if data.startDate is not None else start_date
data.endDate = data.endDate if data.endDate is not None else end_date
get_start_end_time(filter_d=f["filter"], range_value=range_value, start_date=data.startDate,
end_date=data.endDate)
data = schemas.FunnelSearchPayloadSchema.parse_obj(f["filter"])
issues = get_issues_on_the_fly(funnel_id=funnel_id, user_id=user_id, project_id=project_id, data=data.dict()) \
.get("issues", {})
issues = issues.get("significant", []) + issues.get("insignificant", [])
issue = None
for i in issues:
if i.get("issueId", "") == issue_id:
issue = i
break
return {"sessions": sessions.search2_pg(user_id=user_id, project_id=project_id, issue=issue,
data=data) if issue is not None else {"total": 0, "sessions": []},
# "stages": helper.list_to_camel_case(insights),
# "totalDropDueToIssues": total_drop_due_to_issues,
"issue": issue}

View file

@ -1,330 +0,0 @@
import logging
import redis
import requests
from decouple import config
from chalicelib.utils import pg_client
from chalicelib.utils.TimeUTC import TimeUTC
logger = logging.getLogger(__name__)
def app_connection_string(name, port, path):
namespace = config("POD_NAMESPACE", default="app")
conn_string = config("CLUSTER_URL", default="svc.cluster.local")
return f"http://{name}.{namespace}.{conn_string}:{port}/{path}"
HEALTH_ENDPOINTS = {
"alerts": app_connection_string("alerts-openreplay", 8888, "health"),
"assets": app_connection_string("assets-openreplay", 8888, "metrics"),
"assist": app_connection_string("assist-openreplay", 8888, "health"),
"chalice": app_connection_string("chalice-openreplay", 8888, "metrics"),
"db": app_connection_string("db-openreplay", 8888, "metrics"),
"ender": app_connection_string("ender-openreplay", 8888, "metrics"),
"heuristics": app_connection_string("heuristics-openreplay", 8888, "metrics"),
"http": app_connection_string("http-openreplay", 8888, "metrics"),
"ingress-nginx": app_connection_string("ingress-nginx-openreplay", 80, "healthz"),
"integrations": app_connection_string("integrations-openreplay", 8888, "metrics"),
"sink": app_connection_string("sink-openreplay", 8888, "metrics"),
"sourcemapreader": app_connection_string(
"sourcemapreader-openreplay", 8888, "health"
),
"storage": app_connection_string("storage-openreplay", 8888, "metrics"),
}
def __check_database_pg(*_):
fail_response = {
"health": False,
"details": {"errors": ["Postgres health-check failed"]},
}
with pg_client.PostgresClient() as cur:
try:
cur.execute("SHOW server_version;")
# server_version = cur.fetchone()
except Exception as e:
logger.error("!! health failed: postgres not responding")
logger.exception(e)
return fail_response
try:
cur.execute("SELECT openreplay_version() AS version;")
# schema_version = cur.fetchone()
except Exception as e:
logger.error("!! health failed: openreplay_version not defined")
logger.exception(e)
return fail_response
return {
"health": True,
"details": {
# "version": server_version["server_version"],
# "schema": schema_version["version"]
},
}
def __always_healthy(*_):
return {"health": True, "details": {}}
def __check_be_service(service_name):
def fn(*_):
fail_response = {
"health": False,
"details": {"errors": ["server health-check failed"]},
}
try:
results = requests.get(HEALTH_ENDPOINTS.get(service_name), timeout=2)
if results.status_code != 200:
logger.error(
f"!! issue with the {service_name}-health code:{results.status_code}"
)
logger.error(results.text)
# fail_response["details"]["errors"].append(results.text)
return fail_response
except requests.exceptions.Timeout:
logger.error(f"!! Timeout getting {service_name}-health")
# fail_response["details"]["errors"].append("timeout")
return fail_response
except Exception as e:
logger.error(f"!! Issue getting {service_name}-health response")
logger.exception(e)
try:
logger.error(results.text)
# fail_response["details"]["errors"].append(results.text)
except Exception:
logger.error("couldn't get response")
# fail_response["details"]["errors"].append(str(e))
return fail_response
return {"health": True, "details": {}}
return fn
def __check_redis(*_):
fail_response = {
"health": False,
"details": {"errors": ["server health-check failed"]},
}
if config("REDIS_STRING", default=None) is None:
# fail_response["details"]["errors"].append("REDIS_STRING not defined in env-vars")
return fail_response
try:
r = redis.from_url(config("REDIS_STRING"), socket_timeout=2)
r.ping()
except Exception as e:
logger.error("!! Issue getting redis-health response")
logger.exception(e)
# fail_response["details"]["errors"].append(str(e))
return fail_response
return {
"health": True,
"details": {
# "version": r.execute_command('INFO')['redis_version']
},
}
def __check_SSL(*_):
fail_response = {
"health": False,
"details": {"errors": ["SSL Certificate health-check failed"]},
}
try:
requests.get(config("SITE_URL"), verify=True, allow_redirects=True)
except Exception as e:
logger.error("!! health failed: SSL Certificate")
logger.exception(e)
return fail_response
return {"health": True, "details": {}}
def __get_sessions_stats(*_):
with pg_client.PostgresClient() as cur:
constraints = ["projects.deleted_at IS NULL"]
query = cur.mogrify(
f"""SELECT COALESCE(SUM(sessions_count),0) AS s_c,
COALESCE(SUM(events_count),0) AS e_c
FROM public.projects_stats
INNER JOIN public.projects USING(project_id)
WHERE {" AND ".join(constraints)};"""
)
cur.execute(query)
row = cur.fetchone()
return {"numberOfSessionsCaptured": row["s_c"], "numberOfEventCaptured": row["e_c"]}
def get_health(tenant_id=None):
health_map = {
"databases": {"postgres": __check_database_pg},
"ingestionPipeline": {"redis": __check_redis},
"backendServices": {
"alerts": __check_be_service("alerts"),
"assets": __check_be_service("assets"),
"assist": __check_be_service("assist"),
"chalice": __always_healthy,
"db": __check_be_service("db"),
"ender": __check_be_service("ender"),
"frontend": __always_healthy,
"heuristics": __check_be_service("heuristics"),
"http": __check_be_service("http"),
"ingress-nginx": __always_healthy,
"integrations": __check_be_service("integrations"),
"sink": __check_be_service("sink"),
"sourcemapreader": __check_be_service("sourcemapreader"),
"storage": __check_be_service("storage"),
},
"details": __get_sessions_stats,
"ssl": __check_SSL,
}
return __process_health(health_map=health_map)
def __process_health(health_map):
response = dict(health_map)
for parent_key in health_map.keys():
if config(f"SKIP_H_{parent_key.upper()}", cast=bool, default=False):
response.pop(parent_key)
elif isinstance(health_map[parent_key], dict):
for element_key in health_map[parent_key]:
if config(
f"SKIP_H_{parent_key.upper()}_{element_key.upper()}",
cast=bool,
default=False,
):
response[parent_key].pop(element_key)
else:
response[parent_key][element_key] = health_map[parent_key][
element_key
]()
else:
response[parent_key] = health_map[parent_key]()
return response
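# Worked examples of the SKIP_H_* switches read above (env values are illustrative):
#   SKIP_H_SSL=true                    -> the "ssl" check is dropped from the response
#   SKIP_H_BACKENDSERVICES_ASSIST=true -> only backendServices.assist is dropped,
#                                         the remaining backend checks still run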
def cron():
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""SELECT projects.project_id,
projects.created_at,
projects.sessions_last_check_at,
projects.first_recorded_session_at,
projects_stats.last_update_at
FROM public.projects
LEFT JOIN public.projects_stats USING (project_id)
WHERE projects.deleted_at IS NULL
ORDER BY project_id;"""
)
cur.execute(query)
rows = cur.fetchall()
for r in rows:
insert = False
if r["last_update_at"] is None:
# never counted before, must insert
insert = True
if r["first_recorded_session_at"] is None:
if r["sessions_last_check_at"] is None:
count_start_from = r["created_at"]
else:
count_start_from = r["sessions_last_check_at"]
else:
count_start_from = r["first_recorded_session_at"]
else:
# counted before, must update
count_start_from = r["last_update_at"]
count_start_from = TimeUTC.datetime_to_timestamp(count_start_from)
params = {
"project_id": r["project_id"],
"start_ts": count_start_from,
"end_ts": TimeUTC.now(),
"sessions_count": 0,
"events_count": 0,
}
query = cur.mogrify(
"""SELECT COUNT(1) AS sessions_count,
COALESCE(SUM(events_count),0) AS events_count
FROM public.sessions
WHERE project_id=%(project_id)s
AND start_ts>=%(start_ts)s
AND start_ts<=%(end_ts)s
AND duration IS NOT NULL;""",
params,
)
cur.execute(query)
row = cur.fetchone()
if row is not None:
params["sessions_count"] = row["sessions_count"]
params["events_count"] = row["events_count"]
if insert:
query = cur.mogrify(
"""INSERT INTO public.projects_stats(project_id, sessions_count, events_count, last_update_at)
VALUES (%(project_id)s, %(sessions_count)s, %(events_count)s, (now() AT TIME ZONE 'utc'::text));""",
params,
)
else:
query = cur.mogrify(
"""UPDATE public.projects_stats
SET sessions_count=sessions_count+%(sessions_count)s,
events_count=events_count+%(events_count)s,
last_update_at=(now() AT TIME ZONE 'utc'::text)
WHERE project_id=%(project_id)s;""",
params,
)
cur.execute(query)
# this cron job corrects the sessions & events counts every week
def weekly_cron():
with pg_client.PostgresClient(long_query=True) as cur:
query = cur.mogrify(
"""SELECT project_id,
projects_stats.last_update_at
FROM public.projects
LEFT JOIN public.projects_stats USING (project_id)
WHERE projects.deleted_at IS NULL
ORDER BY project_id;"""
)
cur.execute(query)
rows = cur.fetchall()
for r in rows:
if r["last_update_at"] is None:
continue
params = {
"project_id": r["project_id"],
"end_ts": TimeUTC.now(),
"sessions_count": 0,
"events_count": 0,
}
query = cur.mogrify(
"""SELECT COUNT(1) AS sessions_count,
COALESCE(SUM(events_count),0) AS events_count
FROM public.sessions
WHERE project_id=%(project_id)s
AND start_ts<=%(end_ts)s
AND duration IS NOT NULL;""",
params,
)
cur.execute(query)
row = cur.fetchone()
if row is not None:
params["sessions_count"] = row["sessions_count"]
params["events_count"] = row["events_count"]
query = cur.mogrify(
"""UPDATE public.projects_stats
SET sessions_count=%(sessions_count)s,
events_count=%(events_count)s,
last_update_at=(now() AT TIME ZONE 'utc'::text)
WHERE project_id=%(project_id)s;""",
params,
)
cur.execute(query)

View file

@ -0,0 +1,30 @@
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils import helper, pg_client
from chalicelib.utils import dev
@dev.timed
def get_by_url(project_id, data):
args = {"startDate": data.get('startDate', TimeUTC.now(delta_days=-30)),
"endDate": data.get('endDate', TimeUTC.now()),
"project_id": project_id, "url": data["url"]}
with pg_client.PostgresClient() as cur:
query = cur.mogrify("""SELECT selector, count(1) AS count
FROM events.clicks
INNER JOIN sessions USING (session_id)
WHERE project_id = %(project_id)s
AND url = %(url)s
AND timestamp >= %(startDate)s
AND timestamp <= %(endDate)s
AND start_ts >= %(startDate)s
AND start_ts <= %(endDate)s
AND duration IS NOT NULL
GROUP BY selector;""",
args)
cur.execute(
query
)
rows = cur.fetchall()
return helper.dict_to_camel_case(rows)
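# Worked example (illustrative input, not from the original code):
#   get_by_url(project_id=1, data={"url": "/pricing"})
#   -> click counts grouped by the recorded selector for that URL, using the
#      default last-30-days window above when startDate/endDate are omitted.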

View file

@ -0,0 +1,933 @@
import schemas
from chalicelib.core import sessions_metas
from chalicelib.utils import helper, dev
from chalicelib.utils import pg_client
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.metrics_helper import __get_step_size
import math
from chalicelib.core.dashboard import __get_constraints, __get_constraint_values
def __transform_journey(rows):
nodes = []
links = []
for r in rows:
source = r["source_event"][r["source_event"].index("_") + 1:]
target = r["target_event"][r["target_event"].index("_") + 1:]
if source not in nodes:
nodes.append(source)
if target not in nodes:
nodes.append(target)
links.append({"source": nodes.index(source), "target": nodes.index(target), "value": r["value"]})
return {"nodes": nodes, "links": sorted(links, key=lambda x: x["value"], reverse=True)}
JOURNEY_DEPTH = 5
JOURNEY_TYPES = {
"PAGES": {"table": "events.pages", "column": "base_path", "table_id": "message_id"},
"CLICK": {"table": "events.clicks", "column": "label", "table_id": "message_id"},
# "VIEW": {"table": "events_ios.views", "column": "name", "table_id": "seq_index"}, TODO: enable this for SAAS only
"EVENT": {"table": "events_common.customs", "column": "name", "table_id": "seq_index"}
}
@dev.timed
def journey(project_id, startTimestamp=TimeUTC.now(delta_days=-1), endTimestamp=TimeUTC.now(), filters=[], **args):
pg_sub_query_subset = __get_constraints(project_id=project_id, data=args, duration=True, main_table="sessions",
time_constraint=True)
event_start = None
event_table = JOURNEY_TYPES["PAGES"]["table"]
event_column = JOURNEY_TYPES["PAGES"]["column"]
event_table_id = JOURNEY_TYPES["PAGES"]["table_id"]
extra_values = {}
for f in filters:
if f["type"] == "START_POINT":
event_start = f["value"]
elif f["type"] == "EVENT_TYPE" and JOURNEY_TYPES.get(f["value"]):
event_table = JOURNEY_TYPES[f["value"]]["table"]
event_column = JOURNEY_TYPES[f["value"]]["column"]
elif f["type"] in [schemas.FilterType.user_id, schemas.FilterType.user_id_ios]:
pg_sub_query_subset.append(f"sessions.user_id = %(user_id)s")
extra_values["user_id"] = f["value"]
with pg_client.PostgresClient() as cur:
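# Sketch of what the nested query below does: (1) LAG on timestamp marks the
# first event of each session (new_session) and a running SUM turns that into a
# session_rank, (2) events are numbered per session by timestamp, (3) when a
# START_POINT filter is given, everything before its first occurrence in the
# session is dropped, (4) only the first JOURNEY_DEPTH events are kept and the
# source -> target transitions between consecutive events are counted.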
pg_query = f"""SELECT source_event,
target_event,
count(*) AS value
FROM (SELECT event_number || '_' || value as target_event,
LAG(event_number || '_' || value, 1) OVER ( PARTITION BY session_rank ) AS source_event
FROM (SELECT value,
session_rank,
message_id,
ROW_NUMBER() OVER ( PARTITION BY session_rank ORDER BY timestamp ) AS event_number
{f"FROM (SELECT * FROM (SELECT *, MIN(mark) OVER ( PARTITION BY session_id , session_rank ORDER BY timestamp ) AS max FROM (SELECT *, CASE WHEN value = %(event_start)s THEN timestamp ELSE NULL END as mark"
if event_start else ""}
FROM (SELECT session_id,
message_id,
timestamp,
value,
SUM(new_session) OVER (ORDER BY session_id, timestamp) AS session_rank
FROM (SELECT *,
CASE
WHEN source_timestamp IS NULL THEN 1
ELSE 0 END AS new_session
FROM (SELECT session_id,
{event_table_id} AS message_id,
timestamp,
{event_column} AS value,
LAG(timestamp)
OVER (PARTITION BY session_id ORDER BY timestamp) AS source_timestamp
FROM {event_table} INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query_subset)}
) AS related_events) AS ranked_events) AS processed
{") AS marked) AS maxed WHERE timestamp >= max) AS filtered" if event_start else ""}
) AS sorted_events
WHERE event_number <= %(JOURNEY_DEPTH)s) AS final
WHERE source_event IS NOT NULL
and target_event IS NOT NULL
GROUP BY source_event, target_event
ORDER BY value DESC
LIMIT 20;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, "event_start": event_start, "JOURNEY_DEPTH": JOURNEY_DEPTH,
**__get_constraint_values(args), **extra_values}
# print(cur.mogrify(pg_query, params))
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
return __transform_journey(rows)
def __compute_weekly_percentage(rows):
if rows is None or len(rows) == 0:
return rows
t = -1
for r in rows:
if r["week"] == 0:
t = r["usersCount"]
r["percentage"] = r["usersCount"] / t
return rows
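# Worked example (illustrative input, not from the original code):
#   rows = [{"week": 0, "usersCount": 200}, {"week": 1, "usersCount": 50}]
#   __compute_weekly_percentage(rows)
#   -> week 0 gets percentage 1.0 (200/200) and week 1 gets 0.25 (50/200);
#      the week-0 usersCount is the reference denominator for every week.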
def __complete_retention(rows, start_date, end_date=None):
if rows is None:
return []
max_week = 10
for i in range(max_week):
if end_date is not None and start_date + i * TimeUTC.MS_WEEK >= end_date:
break
neutral = {
"firstConnexionWeek": start_date,
"week": i,
"usersCount": 0,
"connectedUsers": [],
"percentage": 0
}
if i < len(rows) \
and i != rows[i]["week"]:
rows.insert(i, neutral)
elif i >= len(rows):
rows.append(neutral)
return rows
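# Worked example (illustrative input, not from the original code): with rows
# covering only weeks 0 and 2, __complete_retention inserts a zeroed entry for
# week 1 and appends zeroed entries for the remaining weeks, up to 10 in total
# unless end_date cuts the range short.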
def __complete_acquisition(rows, start_date, end_date=None):
if rows is None:
return []
max_week = 10
week = 0
delta_date = 0
while max_week > 0:
start_date += TimeUTC.MS_WEEK
if end_date is not None and start_date >= end_date:
break
delta = 0
if delta_date + week >= len(rows) \
or delta_date + week < len(rows) and rows[delta_date + week]["firstConnexionWeek"] > start_date:
for i in range(max_week):
if end_date is not None and start_date + i * TimeUTC.MS_WEEK >= end_date:
break
neutral = {
"firstConnexionWeek": start_date,
"week": i,
"usersCount": 0,
"connectedUsers": [],
"percentage": 0
}
rows.insert(delta_date + week + i, neutral)
delta = i
else:
for i in range(max_week):
if end_date is not None and start_date + i * TimeUTC.MS_WEEK >= end_date:
break
neutral = {
"firstConnexionWeek": start_date,
"week": i,
"usersCount": 0,
"connectedUsers": [],
"percentage": 0
}
if delta_date + week + i < len(rows) \
and i != rows[delta_date + week + i]["week"]:
rows.insert(delta_date + week + i, neutral)
elif delta_date + week + i >= len(rows):
rows.append(neutral)
delta = i
week += delta
max_week -= 1
delta_date += 1
return rows
@dev.timed
def users_retention(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(), filters=[],
**args):
startTimestamp = TimeUTC.trunc_week(startTimestamp)
endTimestamp = startTimestamp + 10 * TimeUTC.MS_WEEK
pg_sub_query = __get_constraints(project_id=project_id, data=args, duration=True, main_table="sessions",
time_constraint=True)
pg_sub_query.append("user_id IS NOT NULL")
pg_sub_query.append("DATE_TRUNC('week', to_timestamp(start_ts / 1000)) = to_timestamp(%(startTimestamp)s / 1000)")
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT FLOOR(DATE_PART('day', connexion_week - DATE_TRUNC('week', to_timestamp(%(startTimestamp)s / 1000)::timestamp)) / 7)::integer AS week,
COUNT(DISTINCT connexions_list.user_id) AS users_count,
ARRAY_AGG(DISTINCT connexions_list.user_id) AS connected_users
FROM (SELECT DISTINCT user_id
FROM sessions
WHERE {" AND ".join(pg_sub_query)}
AND DATE_PART('week', to_timestamp((sessions.start_ts - %(startTimestamp)s)/1000)) = 1
AND NOT EXISTS((SELECT 1
FROM sessions AS bsess
WHERE bsess.start_ts < %(startTimestamp)s
AND project_id = %(project_id)s
AND bsess.user_id = sessions.user_id
LIMIT 1))
) AS users_list
LEFT JOIN LATERAL (SELECT DATE_TRUNC('week', to_timestamp(start_ts / 1000)::timestamp) AS connexion_week,
user_id
FROM sessions
WHERE users_list.user_id = sessions.user_id
AND %(startTimestamp)s <=sessions.start_ts
AND sessions.project_id = %(project_id)s
AND sessions.start_ts < (%(endTimestamp)s - 1)
GROUP BY connexion_week, user_id
) AS connexions_list ON (TRUE)
GROUP BY week
ORDER BY week;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args)}
print(cur.mogrify(pg_query, params))
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
rows = __compute_weekly_percentage(helper.list_to_camel_case(rows))
return {
"startTimestamp": startTimestamp,
"chart": __complete_retention(rows=rows, start_date=startTimestamp, end_date=TimeUTC.now())
}
@dev.timed
def users_acquisition(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(),
filters=[],
**args):
startTimestamp = TimeUTC.trunc_week(startTimestamp)
endTimestamp = startTimestamp + 10 * TimeUTC.MS_WEEK
pg_sub_query = __get_constraints(project_id=project_id, data=args, duration=True, main_table="sessions",
time_constraint=True)
pg_sub_query.append("user_id IS NOT NULL")
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT EXTRACT(EPOCH FROM first_connexion_week::date)::bigint*1000 AS first_connexion_week,
FLOOR(DATE_PART('day', connexion_week - first_connexion_week) / 7)::integer AS week,
COUNT(DISTINCT connexions_list.user_id) AS users_count,
ARRAY_AGG(DISTINCT connexions_list.user_id) AS connected_users
FROM (SELECT user_id, MIN(DATE_TRUNC('week', to_timestamp(start_ts / 1000))) AS first_connexion_week
FROM sessions
WHERE {" AND ".join(pg_sub_query)}
AND NOT EXISTS((SELECT 1
FROM sessions AS bsess
WHERE bsess.start_ts<%(startTimestamp)s
AND project_id = %(project_id)s
AND bsess.user_id = sessions.user_id
LIMIT 1))
GROUP BY user_id) AS users_list
LEFT JOIN LATERAL (SELECT DATE_TRUNC('week', to_timestamp(start_ts / 1000)::timestamp) AS connexion_week,
user_id
FROM sessions
WHERE users_list.user_id = sessions.user_id
AND first_connexion_week <=
DATE_TRUNC('week', to_timestamp(sessions.start_ts / 1000)::timestamp)
AND sessions.project_id = %(project_id)s
AND sessions.start_ts < (%(endTimestamp)s - 1)
GROUP BY connexion_week, user_id) AS connexions_list ON (TRUE)
GROUP BY first_connexion_week, week
ORDER BY first_connexion_week, week;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args)}
print(cur.mogrify(pg_query, params))
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
rows = __compute_weekly_percentage(helper.list_to_camel_case(rows))
return {
"startTimestamp": startTimestamp,
"chart": __complete_acquisition(rows=rows, start_date=startTimestamp, end_date=TimeUTC.now())
}
@dev.timed
def feature_retention(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(),
filters=[],
**args):
startTimestamp = TimeUTC.trunc_week(startTimestamp)
endTimestamp = startTimestamp + 10 * TimeUTC.MS_WEEK
pg_sub_query = __get_constraints(project_id=project_id, data=args, duration=True, main_table="sessions",
time_constraint=True)
pg_sub_query.append("user_id IS NOT NULL")
pg_sub_query.append("feature.timestamp >= %(startTimestamp)s")
pg_sub_query.append("feature.timestamp < %(endTimestamp)s")
event_type = "PAGES"
event_value = "/"
extra_values = {}
default = True
for f in filters:
if f["type"] == "EVENT_TYPE" and JOURNEY_TYPES.get(f["value"]):
event_type = f["value"]
elif f["type"] == "EVENT_VALUE":
event_value = f["value"]
default = False
elif f["type"] in [schemas.FilterType.user_id, schemas.FilterType.user_id_ios]:
pg_sub_query.append(f"sessions.user_id = %(user_id)s")
extra_values["user_id"] = f["value"]
event_table = JOURNEY_TYPES[event_type]["table"]
event_column = JOURNEY_TYPES[event_type]["column"]
pg_sub_query.append(f"feature.{event_column} = %(value)s")
with pg_client.PostgresClient() as cur:
if default:
# get most used value
pg_query = f"""SELECT {event_column} AS value, COUNT(*) AS count
FROM {event_table} AS feature INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query[:-1])}
AND length({event_column}) > 2
GROUP BY value
ORDER BY count DESC
LIMIT 1;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
cur.execute(cur.mogrify(pg_query, params))
row = cur.fetchone()
if row is not None:
event_value = row["value"]
extra_values["value"] = event_value
if len(event_value) > 2:
pg_sub_query.append(f"length({event_column})>2")
pg_query = f"""SELECT FLOOR(DATE_PART('day', connexion_week - to_timestamp(%(startTimestamp)s/1000)) / 7)::integer AS week,
COUNT(DISTINCT connexions_list.user_id) AS users_count,
ARRAY_AGG(DISTINCT connexions_list.user_id) AS connected_users
FROM (SELECT DISTINCT user_id
FROM sessions INNER JOIN {event_table} AS feature USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
AND DATE_PART('week', to_timestamp((sessions.start_ts - %(startTimestamp)s)/1000)) = 1
AND NOT EXISTS((SELECT 1
FROM sessions AS bsess INNER JOIN {event_table} AS bfeature USING (session_id)
WHERE bsess.start_ts<%(startTimestamp)s
AND project_id = %(project_id)s
AND bsess.user_id = sessions.user_id
AND bfeature.timestamp<%(startTimestamp)s
AND bfeature.{event_column}=%(value)s
LIMIT 1))
GROUP BY user_id) AS users_list
LEFT JOIN LATERAL (SELECT DATE_TRUNC('week', to_timestamp(start_ts / 1000)::timestamp) AS connexion_week,
user_id
FROM sessions INNER JOIN {event_table} AS feature USING (session_id)
WHERE users_list.user_id = sessions.user_id
AND %(startTimestamp)s <= sessions.start_ts
AND sessions.project_id = %(project_id)s
AND sessions.start_ts < (%(endTimestamp)s - 1)
AND feature.timestamp >= %(startTimestamp)s
AND feature.timestamp < %(endTimestamp)s
AND feature.{event_column} = %(value)s
GROUP BY connexion_week, user_id) AS connexions_list ON (TRUE)
GROUP BY week
ORDER BY week;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
print(cur.mogrify(pg_query, params))
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
rows = __compute_weekly_percentage(helper.list_to_camel_case(rows))
return {
"startTimestamp": startTimestamp,
"filters": [{"type": "EVENT_TYPE", "value": event_type}, {"type": "EVENT_VALUE", "value": event_value}],
"chart": __complete_retention(rows=rows, start_date=startTimestamp, end_date=TimeUTC.now())
}
@dev.timed
def feature_acquisition(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(),
filters=[],
**args):
startTimestamp = TimeUTC.trunc_week(startTimestamp)
endTimestamp = startTimestamp + 10 * TimeUTC.MS_WEEK
pg_sub_query = __get_constraints(project_id=project_id, data=args, duration=True, main_table="sessions",
time_constraint=True)
pg_sub_query.append("user_id IS NOT NULL")
pg_sub_query.append("feature.timestamp >= %(startTimestamp)s")
pg_sub_query.append("feature.timestamp < %(endTimestamp)s")
event_type = "PAGES"
event_value = "/"
extra_values = {}
default = True
for f in filters:
if f["type"] == "EVENT_TYPE" and JOURNEY_TYPES.get(f["value"]):
event_type = f["value"]
elif f["type"] == "EVENT_VALUE":
event_value = f["value"]
default = False
elif f["type"] in [schemas.FilterType.user_id, schemas.FilterType.user_id_ios]:
pg_sub_query.append(f"sessions.user_id = %(user_id)s")
extra_values["user_id"] = f["value"]
event_table = JOURNEY_TYPES[event_type]["table"]
event_column = JOURNEY_TYPES[event_type]["column"]
pg_sub_query.append(f"feature.{event_column} = %(value)s")
with pg_client.PostgresClient() as cur:
if default:
# get most used value
pg_query = f"""SELECT {event_column} AS value, COUNT(*) AS count
FROM {event_table} AS feature INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query[:-1])}
AND length({event_column}) > 2
GROUP BY value
ORDER BY count DESC
LIMIT 1;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
cur.execute(cur.mogrify(pg_query, params))
row = cur.fetchone()
if row is not None:
event_value = row["value"]
extra_values["value"] = event_value
if len(event_value) > 2:
pg_sub_query.append(f"length({event_column})>2")
pg_query = f"""SELECT EXTRACT(EPOCH FROM first_connexion_week::date)::bigint*1000 AS first_connexion_week,
FLOOR(DATE_PART('day', connexion_week - first_connexion_week) / 7)::integer AS week,
COUNT(DISTINCT connexions_list.user_id) AS users_count,
ARRAY_AGG(DISTINCT connexions_list.user_id) AS connected_users
FROM (SELECT user_id, DATE_TRUNC('week', to_timestamp(first_connexion_week / 1000)) AS first_connexion_week
FROM(SELECT DISTINCT user_id, MIN(start_ts) AS first_connexion_week
FROM sessions INNER JOIN {event_table} AS feature USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
AND NOT EXISTS((SELECT 1
FROM sessions AS bsess INNER JOIN {event_table} AS bfeature USING (session_id)
WHERE bsess.start_ts<%(startTimestamp)s
AND project_id = %(project_id)s
AND bsess.user_id = sessions.user_id
AND bfeature.timestamp<%(startTimestamp)s
AND bfeature.{event_column}=%(value)s
LIMIT 1))
GROUP BY user_id) AS raw_users_list) AS users_list
LEFT JOIN LATERAL (SELECT DATE_TRUNC('week', to_timestamp(start_ts / 1000)::timestamp) AS connexion_week,
user_id
FROM sessions INNER JOIN {event_table} AS feature USING(session_id)
WHERE users_list.user_id = sessions.user_id
AND first_connexion_week <=
DATE_TRUNC('week', to_timestamp(sessions.start_ts / 1000)::timestamp)
AND sessions.project_id = %(project_id)s
AND sessions.start_ts < (%(endTimestamp)s - 1)
AND feature.timestamp >= %(startTimestamp)s
AND feature.timestamp < %(endTimestamp)s
AND feature.{event_column} = %(value)s
GROUP BY connexion_week, user_id) AS connexions_list ON (TRUE)
GROUP BY first_connexion_week, week
ORDER BY first_connexion_week, week;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
print(cur.mogrify(pg_query, params))
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
rows = __compute_weekly_percentage(helper.list_to_camel_case(rows))
return {
"startTimestamp": startTimestamp,
"filters": [{"type": "EVENT_TYPE", "value": event_type}, {"type": "EVENT_VALUE", "value": event_value}],
"chart": __complete_acquisition(rows=rows, start_date=startTimestamp, end_date=TimeUTC.now())
}
@dev.timed
def feature_popularity_frequency(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(),
filters=[],
**args):
startTimestamp = TimeUTC.trunc_week(startTimestamp)
endTimestamp = startTimestamp + 10 * TimeUTC.MS_WEEK
pg_sub_query = __get_constraints(project_id=project_id, data=args, duration=True, main_table="sessions",
time_constraint=True)
event_table = JOURNEY_TYPES["CLICK"]["table"]
event_column = JOURNEY_TYPES["CLICK"]["column"]
extra_values = {}
for f in filters:
if f["type"] == "EVENT_TYPE" and JOURNEY_TYPES.get(f["value"]):
event_table = JOURNEY_TYPES[f["value"]]["table"]
event_column = JOURNEY_TYPES[f["value"]]["column"]
elif f["type"] in [schemas.FilterType.user_id, schemas.FilterType.user_id_ios]:
pg_sub_query.append(f"sessions.user_id = %(user_id)s")
extra_values["user_id"] = f["value"]
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT COUNT(DISTINCT user_id) AS count
FROM sessions
WHERE {" AND ".join(pg_sub_query)}
AND user_id IS NOT NULL;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
# print(cur.mogrify(pg_query, params))
# print("---------------------")
cur.execute(cur.mogrify(pg_query, params))
all_user_count = cur.fetchone()["count"]
if all_user_count == 0:
return []
pg_sub_query.append("feature.timestamp >= %(startTimestamp)s")
pg_sub_query.append("feature.timestamp < %(endTimestamp)s")
pg_sub_query.append(f"length({event_column})>2")
pg_query = f"""SELECT {event_column} AS value, COUNT(DISTINCT user_id) AS count
FROM {event_table} AS feature INNER JOIN sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
AND user_id IS NOT NULL
GROUP BY value
ORDER BY count DESC
LIMIT 7;"""
# TODO: solve full scan
print(cur.mogrify(pg_query, params))
print("---------------------")
cur.execute(cur.mogrify(pg_query, params))
popularity = cur.fetchall()
pg_query = f"""SELECT {event_column} AS value, COUNT(session_id) AS count
FROM {event_table} AS feature INNER JOIN sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
GROUP BY value;"""
# TODO: solve full scan
print(cur.mogrify(pg_query, params))
print("---------------------")
cur.execute(cur.mogrify(pg_query, params))
frequencies = cur.fetchall()
total_usage = sum([f["count"] for f in frequencies])
frequencies = {f["value"]: f["count"] for f in frequencies}
for p in popularity:
p["popularity"] = p.pop("count") / all_user_count
p["frequency"] = frequencies[p["value"]] / total_usage
return popularity
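# For each of the 7 most popular values returned above:
#   popularity = distinct identified users who triggered it / all identified users
#   frequency  = its occurrences / total occurrences across all values of this event type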
@dev.timed
def feature_adoption(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(),
filters=[],
**args):
pg_sub_query = __get_constraints(project_id=project_id, data=args, duration=True, main_table="sessions",
time_constraint=True)
event_type = "CLICK"
event_value = '/'
extra_values = {}
default = True
for f in filters:
if f["type"] == "EVENT_TYPE" and JOURNEY_TYPES.get(f["value"]):
event_type = f["value"]
elif f["type"] == "EVENT_VALUE":
event_value = f["value"]
default = False
elif f["type"] in [schemas.FilterType.user_id, schemas.FilterType.user_id_ios]:
pg_sub_query.append(f"sessions.user_id = %(user_id)s")
extra_values["user_id"] = f["value"]
event_table = JOURNEY_TYPES[event_type]["table"]
event_column = JOURNEY_TYPES[event_type]["column"]
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT COUNT(DISTINCT user_id) AS count
FROM sessions
WHERE {" AND ".join(pg_sub_query)}
AND user_id IS NOT NULL;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
# print(cur.mogrify(pg_query, params))
# print("---------------------")
cur.execute(cur.mogrify(pg_query, params))
all_user_count = cur.fetchone()["count"]
if all_user_count == 0:
return {"adoption": 0, "target": 0, "filters": [{"type": "EVENT_TYPE", "value": event_type},
{"type": "EVENT_VALUE", "value": event_value}], }
pg_sub_query.append("feature.timestamp >= %(startTimestamp)s")
pg_sub_query.append("feature.timestamp < %(endTimestamp)s")
if default:
# get most used value
pg_query = f"""SELECT {event_column} AS value, COUNT(*) AS count
FROM {event_table} AS feature INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query[:-1])}
AND length({event_column}) > 2
GROUP BY value
ORDER BY count DESC
LIMIT 1;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
cur.execute(cur.mogrify(pg_query, params))
row = cur.fetchone()
if row is not None:
event_value = row["value"]
extra_values["value"] = event_value
if len(event_value) > 2:
pg_sub_query.append(f"length({event_column})>2")
pg_sub_query.append(f"feature.{event_column} = %(value)s")
pg_query = f"""SELECT COUNT(DISTINCT user_id) AS count
FROM {event_table} AS feature INNER JOIN sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
AND user_id IS NOT NULL;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
# print(cur.mogrify(pg_query, params))
# print("---------------------")
cur.execute(cur.mogrify(pg_query, params))
adoption = cur.fetchone()["count"] / all_user_count
return {"target": all_user_count, "adoption": adoption,
"filters": [{"type": "EVENT_TYPE", "value": event_type}, {"type": "EVENT_VALUE", "value": event_value}]}
@dev.timed
def feature_adoption_top_users(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(),
filters=[], **args):
pg_sub_query = __get_constraints(project_id=project_id, data=args, duration=True, main_table="sessions",
time_constraint=True)
pg_sub_query.append("user_id IS NOT NULL")
event_type = "CLICK"
event_value = '/'
extra_values = {}
default = True
for f in filters:
if f["type"] == "EVENT_TYPE" and JOURNEY_TYPES.get(f["value"]):
event_type = f["value"]
elif f["type"] == "EVENT_VALUE":
event_value = f["value"]
default = False
elif f["type"] in [schemas.FilterType.user_id, schemas.FilterType.user_id_ios]:
pg_sub_query.append(f"sessions.user_id = %(user_id)s")
extra_values["user_id"] = f["value"]
event_table = JOURNEY_TYPES[event_type]["table"]
event_column = JOURNEY_TYPES[event_type]["column"]
with pg_client.PostgresClient() as cur:
pg_sub_query.append("feature.timestamp >= %(startTimestamp)s")
pg_sub_query.append("feature.timestamp < %(endTimestamp)s")
if default:
# get most used value
pg_query = f"""SELECT {event_column} AS value, COUNT(*) AS count
FROM {event_table} AS feature INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query[:-1])}
AND length({event_column}) > 2
GROUP BY value
ORDER BY count DESC
LIMIT 1;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
cur.execute(cur.mogrify(pg_query, params))
row = cur.fetchone()
if row is not None:
event_value = row["value"]
extra_values["value"] = event_value
if len(event_value) > 2:
pg_sub_query.append(f"length({event_column})>2")
pg_sub_query.append(f"feature.{event_column} = %(value)s")
pg_query = f"""SELECT user_id, COUNT(DISTINCT session_id) AS count
FROM {event_table} AS feature
INNER JOIN sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
GROUP BY 1
ORDER BY 2 DESC
LIMIT 10;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
# print(cur.mogrify(pg_query, params))
# print("---------------------")
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
return {"users": helper.list_to_camel_case(rows),
"filters": [{"type": "EVENT_TYPE", "value": event_type}, {"type": "EVENT_VALUE", "value": event_value}]}
@dev.timed
def feature_adoption_daily_usage(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(),
filters=[], **args):
pg_sub_query = __get_constraints(project_id=project_id, data=args, duration=True, main_table="sessions",
time_constraint=True)
pg_sub_query_chart = __get_constraints(project_id=project_id, time_constraint=True,
chart=True, data=args)
event_type = "CLICK"
event_value = '/'
extra_values = {}
default = True
for f in filters:
if f["type"] == "EVENT_TYPE" and JOURNEY_TYPES.get(f["value"]):
event_type = f["value"]
elif f["type"] == "EVENT_VALUE":
event_value = f["value"]
default = False
elif f["type"] in [schemas.FilterType.user_id, schemas.FilterType.user_id_ios]:
pg_sub_query_chart.append(f"sessions.user_id = %(user_id)s")
extra_values["user_id"] = f["value"]
event_table = JOURNEY_TYPES[event_type]["table"]
event_column = JOURNEY_TYPES[event_type]["column"]
with pg_client.PostgresClient() as cur:
pg_sub_query_chart.append("feature.timestamp >= %(startTimestamp)s")
pg_sub_query_chart.append("feature.timestamp < %(endTimestamp)s")
pg_sub_query.append("feature.timestamp >= %(startTimestamp)s")
pg_sub_query.append("feature.timestamp < %(endTimestamp)s")
if default:
# get most used value
pg_query = f"""SELECT {event_column} AS value, COUNT(*) AS count
FROM {event_table} AS feature INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
AND length({event_column})>2
GROUP BY value
ORDER BY count DESC
LIMIT 1;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
cur.execute(cur.mogrify(pg_query, params))
row = cur.fetchone()
if row is not None:
event_value = row["value"]
extra_values["value"] = event_value
if len(event_value) > 2:
pg_sub_query.append(f"length({event_column})>2")
pg_sub_query_chart.append(f"feature.{event_column} = %(value)s")
pg_query = f"""SELECT generated_timestamp AS timestamp,
COALESCE(COUNT(session_id), 0) AS count
FROM generate_series(%(startTimestamp)s, %(endTimestamp)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL ( SELECT DISTINCT session_id
FROM {event_table} AS feature INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query_chart)}
) AS users ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp;"""
params = {"step_size": TimeUTC.MS_DAY, "project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
print(cur.mogrify(pg_query, params))
print("---------------------")
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
return {"chart": helper.list_to_camel_case(rows),
"filters": [{"type": "EVENT_TYPE", "value": event_type}, {"type": "EVENT_VALUE", "value": event_value}]}
@dev.timed
def feature_intensity(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(),
filters=[],
**args):
pg_sub_query = __get_constraints(project_id=project_id, data=args, duration=True, main_table="sessions",
time_constraint=True)
pg_sub_query.append("feature.timestamp >= %(startTimestamp)s")
pg_sub_query.append("feature.timestamp < %(endTimestamp)s")
event_table = JOURNEY_TYPES["CLICK"]["table"]
event_column = JOURNEY_TYPES["CLICK"]["column"]
extra_values = {}
for f in filters:
if f["type"] == "EVENT_TYPE" and JOURNEY_TYPES.get(f["value"]):
event_table = JOURNEY_TYPES[f["value"]]["table"]
event_column = JOURNEY_TYPES[f["value"]]["column"]
elif f["type"] in [schemas.FilterType.user_id, schemas.FilterType.user_id_ios]:
pg_sub_query.append(f"sessions.user_id = %(user_id)s")
extra_values["user_id"] = f["value"]
pg_sub_query.append(f"length({event_column})>2")
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT {event_column} AS value, AVG(DISTINCT session_id) AS avg
FROM {event_table} AS feature INNER JOIN sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
GROUP BY value
ORDER BY avg DESC
LIMIT 7;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
# TODO: solve full scan issue
print(cur.mogrify(pg_query, params))
print("---------------------")
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
return rows
@dev.timed
def users_active(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(),
filters=[],
**args):
pg_sub_query_chart = __get_constraints(project_id=project_id, time_constraint=True,
chart=True, data=args)
pg_sub_query_chart.append("user_id IS NOT NULL")
period = "DAY"
extra_values = {}
for f in filters:
if f["type"] == "PERIOD" and f["value"] in ["DAY", "WEEK"]:
period = f["value"]
elif f["type"] in [schemas.FilterType.user_id, schemas.FilterType.user_id_ios]:
pg_sub_query_chart.append(f"sessions.user_id = %(user_id)s")
extra_values["user_id"] = f["value"]
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT AVG(count) AS avg, JSONB_AGG(chart) AS chart
FROM (SELECT generated_timestamp AS timestamp,
COALESCE(COUNT(users), 0) AS count
FROM generate_series(%(startTimestamp)s, %(endTimestamp)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL ( SELECT DISTINCT user_id
FROM public.sessions
WHERE {" AND ".join(pg_sub_query_chart)}
) AS users ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp) AS chart;"""
params = {"step_size": TimeUTC.MS_DAY if period == "DAY" else TimeUTC.MS_WEEK,
"project_id": project_id,
"startTimestamp": TimeUTC.trunc_day(startTimestamp) if period == "DAY" else TimeUTC.trunc_week(
startTimestamp),
"endTimestamp": endTimestamp, **__get_constraint_values(args),
**extra_values}
# print(cur.mogrify(pg_query, params))
# print("---------------------")
cur.execute(cur.mogrify(pg_query, params))
row_users = cur.fetchone()
return row_users
@dev.timed
def users_power(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(),
filters=[], **args):
pg_sub_query = __get_constraints(project_id=project_id, time_constraint=True, chart=False, data=args)
pg_sub_query.append("user_id IS NOT NULL")
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT AVG(count) AS avg, JSONB_AGG(day_users_partition) AS partition
FROM (SELECT number_of_days, COUNT(user_id) AS count
FROM (SELECT user_id, COUNT(DISTINCT DATE_TRUNC('day', to_timestamp(start_ts / 1000))) AS number_of_days
FROM sessions
WHERE {" AND ".join(pg_sub_query)}
GROUP BY 1) AS users_connexions
GROUP BY number_of_days
ORDER BY number_of_days) AS day_users_partition;"""
params = {"project_id": project_id,
"startTimestamp": startTimestamp, "endTimestamp": endTimestamp, **__get_constraint_values(args)}
# print(cur.mogrify(pg_query, params))
# print("---------------------")
cur.execute(cur.mogrify(pg_query, params))
row_users = cur.fetchone()
return helper.dict_to_camel_case(row_users)
@dev.timed
def users_slipping(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(),
filters=[], **args):
pg_sub_query = __get_constraints(project_id=project_id, data=args, duration=True, main_table="sessions",
time_constraint=True)
pg_sub_query.append("user_id IS NOT NULL")
pg_sub_query.append("feature.timestamp >= %(startTimestamp)s")
pg_sub_query.append("feature.timestamp < %(endTimestamp)s")
event_type = "PAGES"
event_value = "/"
extra_values = {}
default = True
for f in filters:
if f["type"] == "EVENT_TYPE" and JOURNEY_TYPES.get(f["value"]):
event_type = f["value"]
elif f["type"] == "EVENT_VALUE":
event_value = f["value"]
default = False
elif f["type"] in [schemas.FilterType.user_id, schemas.FilterType.user_id_ios]:
pg_sub_query.append(f"sessions.user_id = %(user_id)s")
extra_values["user_id"] = f["value"]
event_table = JOURNEY_TYPES[event_type]["table"]
event_column = JOURNEY_TYPES[event_type]["column"]
pg_sub_query.append(f"feature.{event_column} = %(value)s")
with pg_client.PostgresClient() as cur:
if default:
# get most used value
pg_query = f"""SELECT {event_column} AS value, COUNT(*) AS count
FROM {event_table} AS feature INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query[:-1])}
AND length({event_column}) > 2
GROUP BY value
ORDER BY count DESC
LIMIT 1;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
cur.execute(cur.mogrify(pg_query, params))
row = cur.fetchone()
if row is not None:
event_value = row["value"]
extra_values["value"] = event_value
if len(event_value) > 2:
pg_sub_query.append(f"length({event_column})>2")
pg_query = f"""SELECT user_id, last_time, interactions_count, MIN(start_ts) AS first_seen, MAX(start_ts) AS last_seen
FROM (SELECT user_id, MAX(timestamp) AS last_time, COUNT(DISTINCT session_id) AS interactions_count
FROM {event_table} AS feature INNER JOIN sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
GROUP BY user_id) AS user_last_usage
INNER JOIN sessions USING (user_id)
WHERE EXTRACT(EPOCH FROM now()) * 1000 - last_time > 7 * 24 * 60 * 60 * 1000
GROUP BY user_id, last_time,interactions_count;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
# print(cur.mogrify(pg_query, params))
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
return {
"startTimestamp": startTimestamp,
"filters": [{"type": "EVENT_TYPE", "value": event_type}, {"type": "EVENT_VALUE", "value": event_value}],
"list": helper.list_to_camel_case(rows)
}
@dev.timed
def search(text, feature_type, project_id, platform=None):
if not feature_type:
resource_type = "ALL"
data = search(text=text, feature_type=resource_type, project_id=project_id, platform=platform)
return data
pg_sub_query = __get_constraints(project_id=project_id, time_constraint=True, duration=True,
data={} if platform is None else {"platform": platform})
params = {"startTimestamp": TimeUTC.now() - 2 * TimeUTC.MS_MONTH,
"endTimestamp": TimeUTC.now(),
"project_id": project_id,
"value": helper.string_to_sql_like(text.lower()),
"platform_0": platform}
if feature_type == "ALL":
with pg_client.PostgresClient() as cur:
sub_queries = []
for e in JOURNEY_TYPES:
sub_queries.append(f"""(SELECT DISTINCT {JOURNEY_TYPES[e]["column"]} AS value, '{e}' AS "type"
FROM {JOURNEY_TYPES[e]["table"]} INNER JOIN public.sessions USING(session_id)
WHERE {" AND ".join(pg_sub_query)} AND {JOURNEY_TYPES[e]["column"]} ILIKE %(value)s
LIMIT 10)""")
pg_query = "UNION ALL".join(sub_queries)
# print(cur.mogrify(pg_query, params))
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
elif JOURNEY_TYPES.get(feature_type) is not None:
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT DISTINCT {JOURNEY_TYPES[feature_type]["column"]} AS value, '{feature_type}' AS "type"
FROM {JOURNEY_TYPES[feature_type]["table"]} INNER JOIN public.sessions USING(session_id)
WHERE {" AND ".join(pg_sub_query)} AND {JOURNEY_TYPES[feature_type]["column"]} ILIKE %(value)s
LIMIT 10;"""
# print(cur.mogrify(pg_query, params))
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
else:
return []
return [helper.dict_to_camel_case(row) for row in rows]
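# Worked example (illustrative values): search("checkout", "CLICK", project_id=1)
# matches events.clicks.label against the text (ILIKE) over the last two months
# and returns up to 10 rows shaped like {"value": "...", "type": "CLICK"}.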

View file

@ -0,0 +1,53 @@
from abc import ABC, abstractmethod
from chalicelib.utils import pg_client, helper
class BaseIntegration(ABC):
def __init__(self, user_id, ISSUE_CLASS):
self._user_id = user_id
self.issue_handler = ISSUE_CLASS(self.integration_token)
@property
@abstractmethod
def provider(self):
pass
@property
def integration_token(self):
integration = self.get()
if integration is None:
print("no token configured yet")
return None
return integration["token"]
def get(self):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(
"""SELECT *
FROM public.oauth_authentication
WHERE user_id=%(user_id)s AND provider=%(provider)s;""",
{"user_id": self._user_id, "provider": self.provider.lower()})
)
return helper.dict_to_camel_case(cur.fetchone())
@abstractmethod
def get_obfuscated(self):
pass
@abstractmethod
def update(self, changes):
pass
@abstractmethod
def _add(self, data):
pass
@abstractmethod
def delete(self):
pass
@abstractmethod
def add_edit(self, data):
pass
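# Minimal sketch (not part of the original file) of how a provider could subclass
# BaseIntegration. "DummyIssueHandler", the provider name and the obfuscation rule
# are assumptions for illustration only; real providers ship in their own modules.
class DummyIssueHandler:
    def __init__(self, token):
        # the handler receives the stored OAuth token via BaseIntegration.__init__
        self.token = token


class DummyIntegration(BaseIntegration):
    def __init__(self, user_id):
        super().__init__(user_id, DummyIssueHandler)

    @property
    def provider(self):
        return "DUMMY"

    def get_obfuscated(self):
        integration = self.get()
        if integration is None:
            return None
        # expose only the last 4 characters of the stored token
        integration["token"] = "*" * 8 + integration["token"][-4:]
        return integration

    def update(self, changes):
        raise NotImplementedError

    def _add(self, data):
        raise NotImplementedError

    def delete(self):
        raise NotImplementedError

    def add_edit(self, data):
        raise NotImplementedError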

Some files were not shown because too many files have changed in this diff