Compare commits


124 commits

Author SHA1 Message Date
nick-delirium
cd5f95a428
spot: more fixes for network, reinit stage for content script 2025-01-24 09:57:52 +01:00
nick-delirium
4160c2c946
spot: add err ctx, add iterator for values 2025-01-15 12:09:17 +01:00
nick-delirium
7178b89eef
spot: more fixes for debugger approach, check settings before enabling network 2025-01-15 11:20:18 +01:00
nick-delirium
825026d78b
spot: small refactoring + testing debugger for network capture 2025-01-14 10:28:00 +01:00
nick-delirium
542c3848a7
spot: .14
Signed-off-by: nick-delirium <nikita@openreplay.com>
2025-01-09 15:13:42 +01:00
nick-delirium
57e451b7ef
spot: mix network requests with webRequest data for better tracking 2025-01-09 14:17:49 +01:00
nick-delirium
f09232218e
ui: disable button if no saved 2025-01-08 14:42:44 +01:00
nick-delirium
1126461d68
ui: update savedsearch drop 2025-01-08 14:36:42 +01:00
nick-delirium
d8490bd96b
ui: enable error logging, remove immutable reference 2025-01-08 12:57:02 +01:00
nick-delirium
b039209944
ui: fixing saved search button 2025-01-08 12:50:23 +01:00
nick-delirium
ba95a6ccb8
ui: fix funnel type selection 2025-01-08 12:29:26 +01:00
nick-delirium
a0ab5c7131
ui: fix savesearch button, fix comparison period tracking 2025-01-08 12:16:00 +01:00
nick-delirium
53e88abe12
Merge branch 'dev' into live-se-red 2025-01-08 11:58:52 +01:00
nick-delirium
ba12dfcdf8
ui: small fixes for granularity and comparisons 2025-01-08 11:57:33 +01:00
nick-delirium
be51fba227
ui: keep state in storage 2025-01-08 11:20:55 +01:00
nick-delirium
f7f28de014
ui: rm unused 2025-01-07 17:23:10 +01:00
nick-delirium
52401fc1a7
ui: fixing horizontal bar tooltip 2025-01-07 17:13:43 +01:00
nick-delirium
1c200452b5
ui: fixing bars under comparison 2025-01-07 16:20:15 +01:00
nick-delirium
8e1f50e4a3
ui: move barchart to echarts 2025-01-07 14:47:16 +01:00
nick-delirium
81d99bd985
Merge branch 'dev' into live-se-red 2025-01-07 11:47:14 +01:00
nick-delirium
e4e77b4cfd
ui: resize chart on window 2025-01-07 10:43:46 +01:00
nick-delirium
05d5aadbd2
ui: series update observe 2025-01-07 10:39:29 +01:00
Sudheer Salavadi
c229125055
Streamlined icons and improved echarts trends (#2920)
* Various improvements Cards, OmniSearch and Cards  Listing

* Improved cards listing page

* Various improvements in product analytics

* Charts UI improvements

* ui crash

* Chart improvements and layout toggling

* Various improvements

* Tooltips

* Improved icons in cards listing page

* Update WidgetFormNew.tsx

* Sankey improvements

* Icon and text updates

Text alignment and color changes in x-ray
Icon Mapping with appropriate names and shapes

* Colors and Trend Chart Interaction updates

* ui

---------

Co-authored-by: nick-delirium <nikita@openreplay.com>
2025-01-07 10:34:20 +01:00
nick-delirium
3dd50df715
ui: fix comparison for 24hr 2025-01-06 18:02:07 +01:00
nick-delirium
3cb6c8824f
ui: rename var for readability 2025-01-06 17:33:13 +01:00
nick-delirium
9b836ce88e
ui: add timestamp for comp tooltip row 2025-01-06 17:28:45 +01:00
nick-delirium
1f3bb2b33c
ui: use unique id for window values 2025-01-06 17:13:20 +01:00
nick-delirium
db88d039ff
Merge branch 'dev' into live-se-red 2025-01-06 16:47:07 +01:00
nick-delirium
251cd5a6c4
ui: move timeseries to apache echarts 2025-01-06 16:42:09 +01:00
nick-delirium
d3acbdcc1b
Merge branch 'dev' into live-se-red 2024-12-30 12:48:37 +01:00
nick-delirium
e98f4245f2
ui: save comp range in widget details 2024-12-30 11:29:20 +01:00
nick-delirium
e3f3ccfe63
ui: change new series default expand state 2024-12-27 14:24:59 +01:00
nick-delirium
3a48e061ef
ui: fix some proptype warnings 2024-12-27 14:18:55 +01:00
nick-delirium
16d75a6a1b
ui: fix card creator visibility in grid, fix table exporter visiblility in grid 2024-12-27 14:12:41 +01:00
nick-delirium
592ae5ac4d
ui: use default filter for sessions, move around saved search actions, remove tags modal 2024-12-27 10:53:22 +01:00
nick-delirium
15fe4e5994
Merge branch 'dev' into live-se-red 2024-12-27 10:15:03 +01:00
nick-delirium
3c82b135cb
ui: filterMinorPaths -> return input data if nodes arr. is empty 2024-12-26 09:21:45 +01:00
nick-delirium
e719a1bdc0
ui: lower default density to 35, fix table card display 2024-12-24 15:17:59 +01:00
nick-delirium
1397cace53
ui: fix weekday mapper for x axis on >7d range 2024-12-24 15:13:30 +01:00
nick-delirium
6078872f0e
Merge branch 'dev' into live-se-red 2024-12-24 15:05:44 +01:00
Sudheer Salavadi
488dfcd849
Various improvements in graphs, and analytics pages. (#2908)
* Various improvements Cards, OmniSearch and Cards  Listing

* Improved cards listing page

* Various improvements in product analytics

* Charts UI improvements

* ui crash

* Chart improvements and layout toggling

* Various improvements

* Tooltips

---------

Co-authored-by: nick-delirium <nikita@openreplay.com>
2024-12-24 10:44:24 +01:00
nick-delirium
48038b4fc6
Merge branch 'dev' into live-se-red 2024-12-23 17:50:06 +01:00
nick-delirium
88fd95f9bb
ui: hide btn control for table view 2024-12-23 17:43:32 +01:00
nick-delirium
e60945c400
ui: some strings changed 2024-12-23 17:32:36 +01:00
nick-delirium
78731fac37
ui: assign icon for event types in sankey nodes 2024-12-23 16:28:12 +01:00
nick-delirium
eb24893b2d
ui: handle minor paths on frontend for path/sankey 2024-12-23 16:19:59 +01:00
nick-delirium
bfb3557508
ui: fix custom comparison period 2024-12-23 14:59:56 +01:00
nick-delirium
b3f642e772
ui: fix custom comparison period 2024-12-23 14:49:03 +01:00
nick-delirium
2c189f983a
ui: fix lucide version 2024-12-23 11:15:31 +01:00
nick-delirium
d24fc53124
Merge branch 'dev' into live-se-red 2024-12-23 10:27:26 +01:00
Delirium
8e856e9040
Live se red s2 (#2902)
* Various improvements Cards, OmniSearch and Cards  Listing

* Improved cards listing page

* Various improvements in product analytics

* Charts UI improvements

* ui crash

---------

Co-authored-by: Sudheer Salavadi <connect.uxmaster@gmail.com>
2024-12-23 10:00:15 +01:00
Sudheer Salavadi
fe4bbcda6d
Product Analytics UI Improvements. (#2896)
* Various improvements Cards, OmniSearch and Cards  Listing

* Improved cards listing page

* Various improvements in product analytics

* Charts UI improvements

---------

Co-authored-by: nick-delirium <nikita@openreplay.com>
2024-12-23 09:56:45 +01:00
nick-delirium
415e24b9a0
ui: longer node width for journey 2024-12-20 16:30:47 +01:00
nick-delirium
3b7d86d8c6
ui: some improvements for cards list view, funnels and general filter display 2024-12-20 14:28:35 +01:00
nick-delirium
fbf7d716a6
Merge branch 'dev' into live-se-red 2024-12-20 10:56:58 +01:00
Sudheer Salavadi
592923b6d1
Various improvements Cards, OmniSearch and Cards Listing (#2881) 2024-12-17 14:32:22 +01:00
nick-delirium
c851bf0ad1
ui: fix for dlt button in widgets 2024-12-16 17:08:42 +01:00
nick-delirium
ab0a14fcb2
ui: fix icons in list view 2024-12-16 13:20:38 +01:00
nick-delirium
cddb933e0a
ui: fix series amount for path analysis, rm grid/list selector 2024-12-16 12:18:42 +01:00
nick-delirium
6f845f5e91
ui: fix exclusion component for path 2024-12-16 12:03:22 +01:00
nick-delirium
2c82ed81cc
Merge branch 'dev' into live-se-red 2024-12-16 11:32:21 +01:00
nick-delirium
4d7ceab026
ui: fix time mapping for charts 2024-12-13 16:47:42 +01:00
nick-delirium
680c918b03
ui: hide addevent in heatmaps 2024-12-13 16:25:25 +01:00
nick-delirium
07c7cc9f57
ui: fix dash list menu propagation 2024-12-13 16:15:59 +01:00
nick-delirium
7f908fc18b
ui: fix dash options menu, fix cr/up button 2024-12-13 14:48:12 +01:00
nick-delirium
5dd498f592
ui: filter category for clickmap filt 2024-12-13 14:08:36 +01:00
nick-delirium
34a51adfc2
ui: fix crashes related to metric type changes 2024-12-13 12:04:00 +01:00
nick-delirium
2136f87a27
ui: fix crash with table metrics 2024-12-12 16:53:32 +01:00
nick-delirium
0743a4fd17
ui: check chart/widget components for crashes 2024-12-12 15:57:11 +01:00
nick-delirium
8a7570549c
ui: change card type selector, add automapping 2024-12-12 15:09:39 +01:00
nick-delirium
197504caa7
ui: metric type selector for head 2024-12-12 12:54:22 +01:00
nick-delirium
e9ec4a7c9b
ui: some minor dashb improvements 2024-12-12 11:29:57 +01:00
Sudheer Salavadi
f3561e55fb
UI improvements in New Cards (#2864) 2024-12-12 10:54:16 +01:00
nick-delirium
c1283c7f20
Merge branch 'dev' into live-se-red 2024-12-12 10:47:52 +01:00
nick-delirium
6c25dd7cad
ui: filterout empty options 2024-12-11 17:25:41 +01:00
nick-delirium
e13804009a
ui: refactor local autocomplete input 2024-12-11 17:17:00 +01:00
nick-delirium
1385ba40a1
Merge branch 'dev' into live-se-red 2024-12-11 15:12:46 +01:00
nick-delirium
8947ee386a
ui: add card floater 2024-12-10 12:04:16 +01:00
nick-delirium
117d88efb0
ui: add card floater 2024-12-10 12:02:24 +01:00
nick-delirium
7744677d7e
rm husky 2024-12-10 11:35:46 +01:00
nick-delirium
4189a7a79e
Merge branch 'dev' into live-se-red 2024-12-10 11:32:39 +01:00
nick-delirium
e1bb2e8bcb
Merge branch 'dev' into live-se-red 2024-12-09 17:22:37 +01:00
nick-delirium
3190a1d3a7
ui: fix filter mapper 2024-12-09 10:59:56 +01:00
nick-delirium
8ef01a2bbf
ui: fix bad merge (git hallo?) 2024-12-06 17:45:02 +01:00
Delirium
71844f395d
Omni-search improvements. (#2823)
Co-authored-by: Sudheer Salavadi <connect.uxmaster@gmail.com>
2024-12-06 17:36:02 +01:00
nick-delirium
16ab955da7
ui: table, bignum and comp for funnel, add csv export 2024-12-06 17:17:41 +01:00
nick-delirium
a0738bdd11
ui: show health status fetch error 2024-12-06 13:48:41 +01:00
nick-delirium
39e05faab4
Merge branch 'dev' into live-se-red 2024-12-06 13:40:24 +01:00
Sudheer Salavadi
4a2a859d01
Revert "Style improvements in omnisearch headers"
This reverts commit 89e51b0531.
2024-12-05 16:42:15 -05:00
Sudheer Salavadi
89e51b0531
Style improvements in omnisearch headers 2024-12-05 16:28:39 -05:00
nick-delirium
130e00748b
ui: comparing for funnels, alternate column view, some refactoring to prepare for customizations... 2024-12-05 16:37:42 +01:00
nick-delirium
7b0a41b743
ui: some dashboard issues with card selection modal and empty states 2024-12-04 13:35:26 +01:00
nick-delirium
486ffc7764
ui: some fixes to issue filters default value, display and placeholder consistency 2024-12-04 10:58:38 +01:00
nick-delirium
f854c3d43c
Merge branch 'dev' into live-se-red 2024-12-03 14:20:40 +01:00
nick-delirium
27d2627a66
ui: autoclose autocomplete modal 2024-12-03 12:14:21 +01:00
nick-delirium
24419712b0
ui: fix timeseries table format 2024-12-03 12:03:30 +01:00
nick-delirium
63aeb103e6
ui: restrict filterselection 2024-12-03 11:45:37 +01:00
nick-delirium
4e274cb30e
ui: change new series btn 2024-12-03 11:19:11 +01:00
nick-delirium
e4db5524e6
ui: fix crashes, add widget library table 2024-12-03 11:15:44 +01:00
nick-delirium
c41deb1af2
ui: check for metric type 2024-12-03 11:03:58 +01:00
nick-delirium
48a573399e
ui: add metricOf selector 2024-12-02 15:08:37 +01:00
nick-delirium
45fd50149f
Merge branch 'dev' into live-se-red 2024-12-02 14:35:24 +01:00
nick-delirium
c5441a45c8
ui: moveing and renaming filters 2024-12-02 14:34:45 +01:00
nick-delirium
ce345542a6
ui: more refactoring; silence warnings for list renderers 2024-12-02 14:03:35 +01:00
nick-delirium
0932b4de13
ui: some refactoring and type coverage... 2024-12-02 12:09:50 +01:00
nick-delirium
8cbe0caf66
ui: fix defualt import, fix sessheader crash, fix condition set ui 2024-12-02 10:29:49 +01:00
nick-delirium
63085d6ff1
ui: cleanup logs 2024-11-29 17:36:28 +01:00
nick-delirium
817f039ddc
ui: add comparison to more charts, add "metric" chart (BigNumChart.tsx) 2024-11-29 17:34:34 +01:00
nick-delirium
08be943ae0
ui: better granularity support, comparison view for bar chart 2024-11-28 18:08:42 +01:00
nick-delirium
92412685b0
Merge branch 'dev' into live-se-red 2024-11-28 12:02:22 +01:00
nick-delirium
29fec35046
ui: comparison designs 2024-11-28 11:51:13 +01:00
nick-delirium
19b5addc95
ui: more chart types, add table with filtering out series, start "compare to" thing 2024-11-26 17:39:15 +01:00
nick-delirium
170e85b505
ui: some changes for card creation flow, add series table to CustomMetricLineChart.tsx 2024-11-25 17:44:49 +01:00
nick-delirium
e5267497a6
ui: mimic ant card 2024-11-25 09:57:10 +01:00
nick-delirium
8f9cad4715
Merge branch 'dashboard-red-12123' into live-se-red 2024-11-22 16:27:54 +01:00
nick-delirium
ca6a5e71e9
ui: split up search component (1.22+ tbd?), restrict filter type to own modals 2024-11-22 16:27:04 +01:00
nick-delirium
6510e5e24f
ui: some "new dashboard" view improvs, fix icons fill inheritance, add ai button colors 2024-11-22 13:48:14 +01:00
nick-delirium
c1dd43f975
refining new card section 2024-11-22 10:57:53 +01:00
nick-delirium
b36875492e
ui: start new dashboard redesign 2024-11-22 10:57:40 +01:00
nick-delirium
614c0655c2
ui: finish with omnisearch thing 2024-11-22 10:55:21 +01:00
nick-delirium
9909511e94
ui: filter modal wip 2024-11-20 10:50:56 +01:00
nick-delirium
36feeb5ba9
ui: filter modal wip 2024-11-20 10:50:56 +01:00
nick-delirium
da632304a1
ui: remove search field, show filters picker by default for assist 2024-11-20 10:50:55 +01:00
nick-delirium
b941ea2cd8
ui: start redesign for live search/list 2024-11-20 10:50:54 +01:00
2670 changed files with 55674 additions and 98371 deletions


@ -10,6 +10,8 @@ on:
branches:
- dev
- api-*
- v1.11.0-patch
- actions_test
paths:
- "ee/api/**"
- "api/**"


@ -10,6 +10,7 @@ on:
branches:
- dev
- api-*
- v1.11.0-patch
paths:
- "api/**"
- "!api/.gitignore"


@ -9,6 +9,7 @@ on:
push:
branches:
- dev
- api-*
paths:
- "ee/assist/**"
- "assist/**"


@ -15,7 +15,7 @@ on:
- "!assist-stats/*-dev.sh"
- "!assist-stats/requirements-*.txt"
name: Build and Deploy Assist Stats ee
name: Build and Deploy Assist Stats
jobs:
deploy:
@ -123,9 +123,8 @@ jobs:
tag: ${IMAGE_TAG}
EOF
export IMAGE_TAG=${IMAGE_TAG}
# Update changed image tag
yq '.utilities.apiCrons.assiststats.image.tag = strenv(IMAGE_TAG)' -i /tmp/image_override.yaml
sed -i "/assist-stats/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command


@ -9,6 +9,7 @@ on:
push:
branches:
- dev
- api-*
paths:
- "assist/**"
- "!assist/.gitignore"


@ -10,6 +10,7 @@ on:
branches:
- dev
- api-*
- v1.11.0-patch
paths:
- "ee/api/**"
- "api/**"
@ -100,32 +101,33 @@ jobs:
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
env:
# We're not passing -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
run: |
cd scripts/helmcharts/
cat <<EOF>/tmp/image_override.yaml
image: &image
tag: "${IMAGE_TAG}"
utilities:
apiCrons:
assiststats:
image: *image
report:
image: *image
sessionsCleaner:
image: *image
projectsStats:
image: *image
fixProjectsStats:
image: *image
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
# We've to strip off the -ee, as helm will append it.
tag: `echo ${image_array[1]} | cut -d '-' -f 1`
EOF
done
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
# Update changed image tag
sed -i "/crons/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
@ -135,6 +137,8 @@ jobs:
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks --kube-version=$k_version | kubectl apply -f -
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# We're not passing -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
- name: Alert slack


@ -1,189 +0,0 @@
# Ref: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions
on:
workflow_dispatch:
inputs:
services:
description: 'Comma separated names of services to build(in small letters).'
required: true
default: 'chalice,frontend'
tag:
description: 'Tag to update.'
required: true
type: string
branch:
description: 'Branch to build patches from. Make sure the branch is uptodate with tag. Else itll cause missing commits.'
required: true
type: string
name: Build patches from tag, rewrite commit HEAD to older timestamp, and Push the tag
jobs:
deploy:
name: Build Patch from old tag
runs-on: ubuntu-latest
env:
DEPOT_TOKEN: ${{ secrets.DEPOT_TOKEN }}
DEPOT_PROJECT_ID: ${{ secrets.DEPOT_PROJECT_ID }}
steps:
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 4
ref: ${{ github.event.inputs.tag }}
- name: Set Remote with GITHUB_TOKEN
run: |
git config --unset http.https://github.com/.extraheader
git remote set-url origin https://x-access-token:${{ secrets.ACTIONS_COMMMIT_TOKEN }}@github.com/${{ github.repository }}.git
- name: Create backup tag with timestamp
run: |
set -e # Exit immediately if a command exits with a non-zero status
TIMESTAMP=$(date +%Y%m%d%H%M%S)
BACKUP_TAG="${{ github.event.inputs.tag }}-backup-${TIMESTAMP}"
echo "BACKUP_TAG=${BACKUP_TAG}" >> $GITHUB_ENV
echo "INPUT_TAG=${{ github.event.inputs.tag }}" >> $GITHUB_ENV
git tag $BACKUP_TAG || { echo "Failed to create backup tag"; exit 1; }
git push origin $BACKUP_TAG || { echo "Failed to push backup tag"; exit 1; }
echo "Created backup tag: $BACKUP_TAG"
# Get the oldest commit date from the last 3 commits in raw format
OLDEST_COMMIT_TIMESTAMP=$(git log -3 --pretty=format:"%at" | tail -1)
echo "Oldest commit timestamp: $OLDEST_COMMIT_TIMESTAMP"
# Add 1 second to the timestamp
NEW_TIMESTAMP=$((OLDEST_COMMIT_TIMESTAMP + 1))
echo "NEW_TIMESTAMP=$NEW_TIMESTAMP" >> $GITHUB_ENV
- name: Setup yq
uses: mikefarah/yq@master
# Configure AWS credentials for the first registry
- name: Configure AWS credentials for RELEASE_ARM_REGISTRY
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_DEPOT_ACCESS_KEY }}
aws-secret-access-key: ${{ secrets.AWS_DEPOT_SECRET_KEY }}
aws-region: ${{ secrets.AWS_DEPOT_DEFAULT_REGION }}
- name: Login to Amazon ECR for RELEASE_ARM_REGISTRY
id: login-ecr-arm
run: |
aws ecr get-login-password --region ${{ secrets.AWS_DEPOT_DEFAULT_REGION }} | docker login --username AWS --password-stdin ${{ secrets.RELEASE_ARM_REGISTRY }}
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin ${{ secrets.RELEASE_OSS_REGISTRY }}
- uses: depot/setup-action@v1
- name: Get HEAD Commit ID
run: echo "HEAD_COMMIT_ID=$(git rev-parse HEAD)" >> $GITHUB_ENV
- name: Define Branch Name
run: echo "BRANCH_NAME=${{inputs.branch}}" >> $GITHUB_ENV
- name: Build
id: build-image
env:
DOCKER_REPO_ARM: ${{ secrets.RELEASE_ARM_REGISTRY }}
DOCKER_REPO_OSS: ${{ secrets.RELEASE_OSS_REGISTRY }}
MSAAS_REPO_CLONE_TOKEN: ${{ secrets.MSAAS_REPO_CLONE_TOKEN }}
MSAAS_REPO_URL: ${{ secrets.MSAAS_REPO_URL }}
MSAAS_REPO_FOLDER: /tmp/msaas
run: |
set -exo pipefail
git config --local user.email "action@github.com"
git config --local user.name "GitHub Action"
git checkout -b $BRANCH_NAME
working_dir=$(pwd)
function image_version(){
local service=$1
chart_path="$working_dir/scripts/helmcharts/openreplay/charts/$service/Chart.yaml"
current_version=$(yq eval '.AppVersion' $chart_path)
new_version=$(echo $current_version | awk -F. '{$NF += 1 ; print $1"."$2"."$3}')
echo $new_version
# yq eval ".AppVersion = \"$new_version\"" -i $chart_path
}
function clone_msaas() {
[ -d $MSAAS_REPO_FOLDER ] || {
git clone -b $INPUT_TAG --recursive https://x-access-token:$MSAAS_REPO_CLONE_TOKEN@$MSAAS_REPO_URL $MSAAS_REPO_FOLDER
cd $MSAAS_REPO_FOLDER
cd openreplay && git fetch origin && git checkout $INPUT_TAG
git log -1
cd $MSAAS_REPO_FOLDER
bash git-init.sh
git checkout
}
}
function build_managed() {
local service=$1
local version=$2
echo building managed
clone_msaas
if [[ $service == 'chalice' ]]; then
cd $MSAAS_REPO_FOLDER/openreplay/api
else
cd $MSAAS_REPO_FOLDER/openreplay/$service
fi
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash build.sh >> /tmp/arm.txt
}
# Checking for backend images
ls backend/cmd >> /tmp/backend.txt
echo Services: "${{ github.event.inputs.services }}"
IFS=',' read -ra SERVICES <<< "${{ github.event.inputs.services }}"
BUILD_SCRIPT_NAME="build.sh"
# Build FOSS
for SERVICE in "${SERVICES[@]}"; do
# Check if service is backend
if grep -q $SERVICE /tmp/backend.txt; then
cd backend
foss_build_args="nil $SERVICE"
ee_build_args="ee $SERVICE"
else
[[ $SERVICE == 'chalice' || $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && cd $working_dir/api || cd $SERVICE
[[ $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && BUILD_SCRIPT_NAME="build_${SERVICE}.sh"
ee_build_args="ee"
fi
version=$(image_version $SERVICE)
echo IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
echo IMAGE_TAG=$version-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
IMAGE_TAG=$version-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
if [[ "$SERVICE" != "chalice" && "$SERVICE" != "frontend" ]]; then
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
echo IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
else
build_managed $SERVICE $version
fi
cd $working_dir
chart_path="$working_dir/scripts/helmcharts/openreplay/charts/$SERVICE/Chart.yaml"
yq eval ".AppVersion = \"$version\"" -i $chart_path
git add $chart_path
git commit -m "Increment $SERVICE chart version"
done
- name: Change commit timestamp
run: |
# Convert the timestamp to a date format git can understand
NEW_DATE=$(perl -le 'print scalar gmtime($ARGV[0])." +0000"' $NEW_TIMESTAMP)
echo "Setting commit date to: $NEW_DATE"
# Amend the commit with the new date
GIT_COMMITTER_DATE="$NEW_DATE" git commit --amend --no-edit --date="$NEW_DATE"
# Verify the change
git log -1 --pretty=format:"Commit now dated: %cD"
# git tag and push
git tag $INPUT_TAG -f
git push origin $INPUT_TAG -f
# - name: Debug Job
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO_ARM: ${{ secrets.RELEASE_ARM_REGISTRY }}
# DOCKER_REPO_OSS: ${{ secrets.RELEASE_OSS_REGISTRY }}
# MSAAS_REPO_CLONE_TOKEN: ${{ secrets.MSAAS_REPO_CLONE_TOKEN }}
# MSAAS_REPO_URL: ${{ secrets.MSAAS_REPO_URL }}
# MSAAS_REPO_FOLDER: /tmp/msaas
# with:
# limit-access-to-actor: true


@ -2,6 +2,7 @@
on:
workflow_dispatch:
description: 'This workflow will build for patches for latest tag, and will Always use commit from main branch.'
inputs:
services:
description: 'Comma separated names of services to build(in small letters).'
@ -19,20 +20,12 @@ jobs:
DEPOT_PROJECT_ID: ${{ secrets.DEPOT_PROJECT_ID }}
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v2
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
fetch-depth: 1
- name: Rebase with main branch, to make sure the code has latest main changes
if: github.ref != 'refs/heads/main'
run: |
git remote -v
git config --global user.email "action@github.com"
git config --global user.name "GitHub Action"
git config --global rebase.autoStash true
git fetch origin main:main
git rebase main
git log -3
git pull --rebase origin main
- name: Downloading yq
run: |
@ -55,8 +48,6 @@ jobs:
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin ${{ secrets.RELEASE_OSS_REGISTRY }}
- uses: depot/setup-action@v1
env:
DEPOT_TOKEN: ${{ secrets.DEPOT_TOKEN }}
- name: Get HEAD Commit ID
run: echo "HEAD_COMMIT_ID=$(git rev-parse HEAD)" >> $GITHUB_ENV
- name: Define Branch Name
@ -74,168 +65,78 @@ jobs:
MSAAS_REPO_CLONE_TOKEN: ${{ secrets.MSAAS_REPO_CLONE_TOKEN }}
MSAAS_REPO_URL: ${{ secrets.MSAAS_REPO_URL }}
MSAAS_REPO_FOLDER: /tmp/msaas
SERVICES_INPUT: ${{ github.event.inputs.services }}
run: |
#!/bin/bash
set -euo pipefail
# Configuration
readonly WORKING_DIR=$(pwd)
readonly BUILD_SCRIPT_NAME="build.sh"
readonly BACKEND_SERVICES_FILE="/tmp/backend.txt"
# Initialize git configuration
setup_git() {
git config --local user.email "action@github.com"
git config --local user.name "GitHub Action"
git checkout -b "$BRANCH_NAME"
set -exo pipefail
git config --local user.email "action@github.com"
git config --local user.name "GitHub Action"
git checkout -b $BRANCH_NAME
working_dir=$(pwd)
function image_version(){
local service=$1
chart_path="$working_dir/scripts/helmcharts/openreplay/charts/$service/Chart.yaml"
current_version=$(yq eval '.AppVersion' $chart_path)
new_version=$(echo $current_version | awk -F. '{$NF += 1 ; print $1"."$2"."$3}')
echo $new_version
# yq eval ".AppVersion = \"$new_version\"" -i $chart_path
}
# Get and increment image version
image_version() {
local service=$1
local chart_path="$WORKING_DIR/scripts/helmcharts/openreplay/charts/$service/Chart.yaml"
local current_version new_version
current_version=$(yq eval '.AppVersion' "$chart_path")
new_version=$(echo "$current_version" | awk -F. '{$NF += 1; print $1"."$2"."$3}')
echo "$new_version"
function clone_msaas() {
[ -d $MSAAS_REPO_FOLDER ] || {
git clone -b dev --recursive https://x-access-token:$MSAAS_REPO_CLONE_TOKEN@$MSAAS_REPO_URL $MSAAS_REPO_FOLDER
cd $MSAAS_REPO_FOLDER
cd openreplay && git fetch origin && git checkout main # This have to be changed to specific tag
git log -1
cd $MSAAS_REPO_FOLDER
bash git-init.sh
git checkout
}
}
# Clone MSAAS repository if not exists
clone_msaas() {
if [[ ! -d "$MSAAS_REPO_FOLDER" ]]; then
git clone -b dev --recursive "https://x-access-token:${MSAAS_REPO_CLONE_TOKEN}@${MSAAS_REPO_URL}" "$MSAAS_REPO_FOLDER"
cd "$MSAAS_REPO_FOLDER"
cd openreplay && git fetch origin && git checkout main
git log -1
cd "$MSAAS_REPO_FOLDER"
bash git-init.sh
git checkout
fi
function build_managed() {
local service=$1
local version=$2
echo building managed
clone_msaas
if [[ $service == 'chalice' ]]; then
cd $MSAAS_REPO_FOLDER/openreplay/api
else
cd $MSAAS_REPO_FOLDER/openreplay/$service
fi
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash build.sh >> /tmp/arm.txt
}
# Build managed services
build_managed() {
local service=$1
local version=$2
echo "Building managed service: $service"
clone_msaas
if [[ $service == 'chalice' ]]; then
cd "$MSAAS_REPO_FOLDER/openreplay/api"
else
cd "$MSAAS_REPO_FOLDER/openreplay/$service"
fi
local build_cmd="IMAGE_TAG=$version DOCKER_RUNTIME=depot DOCKER_BUILD_ARGS=--push ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash build.sh"
echo "Executing: $build_cmd"
if ! eval "$build_cmd" 2>&1; then
echo "Build failed for $service"
exit 1
fi
}
# Build service with given arguments
build_service() {
local service=$1
local version=$2
local build_args=$3
local build_script=${4:-$BUILD_SCRIPT_NAME}
local command="IMAGE_TAG=$version DOCKER_RUNTIME=depot DOCKER_BUILD_ARGS=--push ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash $build_script $build_args"
echo "Executing: $command"
eval "$command"
}
# Update chart version and commit changes
update_chart_version() {
local service=$1
local version=$2
local chart_path="$WORKING_DIR/scripts/helmcharts/openreplay/charts/$service/Chart.yaml"
# Ensure we're in the original working directory/repository
cd "$WORKING_DIR"
yq eval ".AppVersion = \"$version\"" -i "$chart_path"
git add "$chart_path"
git commit -m "Increment $service chart version to $version"
git push --set-upstream origin "$BRANCH_NAME"
cd -
}
# Main execution
main() {
setup_git
# Get backend services list
ls backend/cmd >"$BACKEND_SERVICES_FILE"
# Parse services input (fix for GitHub Actions syntax)
echo "Services: ${SERVICES_INPUT:-$1}"
IFS=',' read -ra services <<<"${SERVICES_INPUT:-$1}"
# Process each service
for service in "${services[@]}"; do
echo "Processing service: $service"
cd "$WORKING_DIR"
local foss_build_args="" ee_build_args="" build_script="$BUILD_SCRIPT_NAME"
# Determine build configuration based on service type
if grep -q "$service" "$BACKEND_SERVICES_FILE"; then
# Backend service
cd backend
foss_build_args="nil $service"
ee_build_args="ee $service"
else
# Non-backend service
case "$service" in
chalice | alerts | crons)
cd "$WORKING_DIR/api"
;;
*)
cd "$service"
;;
esac
# Special build scripts for alerts/crons
if [[ $service == 'alerts' || $service == 'crons' ]]; then
build_script="build_${service}.sh"
fi
ee_build_args="ee"
fi
# Get version and build
local version
version=$(image_version "$service")
# Build FOSS and EE versions
build_service "$service" "$version" "$foss_build_args"
build_service "$service" "${version}-ee" "$ee_build_args"
# Build managed version for specific services
if [[ "$service" != "chalice" && "$service" != "frontend" ]]; then
echo "Nothing to build in managed for service $service"
else
build_managed "$service" "$version"
fi
# Update chart and commit
update_chart_version "$service" "$version"
done
cd "$WORKING_DIR"
# Cleanup
rm -f "$BACKEND_SERVICES_FILE"
}
echo "Working directory: $WORKING_DIR"
# Run main function with all arguments
main "$SERVICES_INPUT"
# Checking for backend images
ls backend/cmd >> /tmp/backend.txt
echo Services: "${{ github.event.inputs.services }}"
IFS=',' read -ra SERVICES <<< "${{ github.event.inputs.services }}"
BUILD_SCRIPT_NAME="build.sh"
# Build FOSS
for SERVICE in "${SERVICES[@]}"; do
# Check if service is backend
if grep -q $SERVICE /tmp/backend.txt; then
cd backend
foss_build_args="nil $SERVICE"
ee_build_args="ee $SERVICE"
else
[[ $SERVICE == 'chalice' || $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && cd $working_dir/api || cd $SERVICE
[[ $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && BUILD_SCRIPT_NAME="build_${SERVICE}.sh"
ee_build_args="ee"
fi
version=$(image_version $SERVICE)
echo IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
echo IMAGE_TAG=$version-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
IMAGE_TAG=$version-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
if [[ "$SERVICE" != "chalice" && "$SERVICE" != "frontend" ]]; then
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
echo IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
else
build_managed $SERVICE $version
fi
cd $working_dir
chart_path="$working_dir/scripts/helmcharts/openreplay/charts/$SERVICE/Chart.yaml"
yq eval ".AppVersion = \"$version\"" -i $chart_path
git add $chart_path
git commit -m "Increment $SERVICE chart version"
git push --set-upstream origin $BRANCH_NAME
done
- name: Create Pull Request
uses: repo-sync/pull-request@v2
@ -246,7 +147,8 @@ jobs:
pr_title: "Updated patch build from main ${{ env.HEAD_COMMIT_ID }}"
pr_body: |
This PR updates the Helm chart version after building the patch from $HEAD_COMMIT_ID.
Once this PR is merged, tag update job will run automatically.
Once this PR is merged, To update the latest tag, run the following workflow.
https://github.com/openreplay/openreplay/actions/workflows/update-tag.yaml
# - name: Debug Job
# if: ${{ failure() }}


@ -1,4 +1,4 @@
# This action will push the assist changes to aws
# This action will push the peers changes to aws
on:
workflow_dispatch:
inputs:
@ -9,10 +9,14 @@ on:
push:
branches:
- dev
- api-*
paths:
- "ee/assist-server/**"
- "ee/peers/**"
- "peers/**"
- "!peers/.gitignore"
- "!peers/*-dev.sh"
name: Build and Deploy Assist-Server EE
name: Build and Deploy Peers EE
jobs:
deploy:
@ -53,7 +57,12 @@ jobs:
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Building and Pushing Assist-Server image
# Caching docker images
- uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
- name: Building and Pushing peers image
id: build-image
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
@ -61,11 +70,11 @@ jobs:
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd assist-server
cd peers
PUSH_IMAGE=0 bash -x ./build.sh ee
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("assist-server")
images=("peers")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
@ -76,7 +85,7 @@ jobs:
} && {
echo "Skipping Security Checks"
}
images=("assist-server")
images=("peers")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
@ -100,23 +109,43 @@ jobs:
tag: `echo ${image_array[1]} | cut -d '-' -f 1`
EOF
done
- name: Deploy to kubernetes
run: |
pwd
cd scripts/helmcharts/
# Update changed image tag
sed -i "/assist-server/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
sed -i "/peers/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,assist-server,quickwit,connector} /tmp/charts/
mv openreplay/charts/{ingress-nginx,peers,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks --kube-version=$k_version | kubectl apply -f -
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# We're not passing -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
- name: Alert slack
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: ee
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
SLACK_USERNAME: "OR Bot"
SLACK_MESSAGE: "Build failed :bomb:"
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# iimit-access-to-actor: true

.github/workflows/peers.yaml (new file, 149 lines)

@ -0,0 +1,149 @@
# This action will push the peers changes to aws
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip Security checks if there is a unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
- api-*
paths:
- "peers/**"
- "!peers/.gitignore"
- "!peers/*-dev.sh"
name: Build and Deploy Peers
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.OSS_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.OSS_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.OSS_LICENSE_KEY }}
minio_access_key: ${{ secrets.OSS_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.OSS_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.OSS_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
# Caching docker images
- uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
- name: Building and Pushing peers image
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd peers
PUSH_IMAGE=0 bash -x ./build.sh
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("peers")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("peers")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
tag: ${image_array[1]}
EOF
done
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
# Update changed image tag
sed -i "/peers/{n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,peers,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks | kubectl apply -n app -f -
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
- name: Alert slack
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: foss
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
SLACK_USERNAME: "OR Bot"
SLACK_MESSAGE: "Build failed :bomb:"
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true


@ -1,103 +0,0 @@
name: Release Deployment
on:
workflow_dispatch:
inputs:
services:
description: 'Comma-separated list of services to deploy. eg: frontend,api,sink'
required: true
branch:
description: 'Branch to deploy (defaults to dev)'
required: false
default: 'dev'
env:
IMAGE_REGISTRY_URL: ${{ secrets.OSS_REGISTRY_URL }}
DEPOT_PROJECT_ID: ${{ secrets.DEPOT_PROJECT_ID }}
DEPOT_TOKEN: ${{ secrets.DEPOT_TOKEN }}
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
with:
ref: ${{ github.event.inputs.branch }}
- name: Docker login
run: |
docker login $IMAGE_REGISTRY_URL -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- name: Set image tag with branch info
run: |
SHORT_SHA=$(git rev-parse --short HEAD)
echo "IMAGE_TAG=${{ github.event.inputs.branch }}-${SHORT_SHA}" >> $GITHUB_ENV
echo "Using image tag: $IMAGE_TAG"
- uses: depot/setup-action@v1
- name: Build and push Docker images
run: |
# Parse the comma-separated services list into an array
IFS=',' read -ra SERVICES <<< "${{ github.event.inputs.services }}"
working_dir=$(pwd)
# Define backend services (consider moving this to workflow inputs or repo config)
ls backend/cmd >> /tmp/backend.txt
BUILD_SCRIPT_NAME="build.sh"
for SERVICE in "${SERVICES[@]}"; do
# Check if service is backend
if grep -q $SERVICE /tmp/backend.txt; then
cd $working_dir/backend
foss_build_args="nil $SERVICE"
ee_build_args="ee $SERVICE"
else
cd $working_dir
[[ $SERVICE == 'chalice' || $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && cd $working_dir/api || cd $SERVICE
[[ $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && BUILD_SCRIPT_NAME="build_${SERVICE}.sh"
ee_build_args="ee"
fi
{
echo IMAGE_TAG=$IMAGE_TAG DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$IMAGE_REGISTRY_URL PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
IMAGE_TAG=$IMAGE_TAG DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$IMAGE_REGISTRY_URL PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
}&
{
echo IMAGE_TAG=${IMAGE_TAG}-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$IMAGE_REGISTRY_URL PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
IMAGE_TAG=${IMAGE_TAG}-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$IMAGE_REGISTRY_URL PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
}&
done
wait
- uses: azure/k8s-set-context@v1
name: Using ee release cluster
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_RELEASE_KUBECONFIG }}
- name: Deploy to ee release Kubernetes
run: |
echo "Deploying services to EE cluster: ${{ github.event.inputs.services }}"
IFS=',' read -ra SERVICES <<< "${{ github.event.inputs.services }}"
for SERVICE in "${SERVICES[@]}"; do
SERVICE=$(echo $SERVICE | xargs) # Trim whitespace
echo "Deploying $SERVICE to EE cluster with image tag: ${IMAGE_TAG}"
kubectl set image deployment/$SERVICE-openreplay -n app $SERVICE=${IMAGE_REGISTRY_URL}/$SERVICE:${IMAGE_TAG}-ee
done
- uses: azure/k8s-set-context@v1
name: Using foss release cluster
with:
method: kubeconfig
kubeconfig: ${{ secrets.FOSS_RELEASE_KUBECONFIG }}
- name: Deploy to FOSS release Kubernetes
run: |
echo "Deploying services to FOSS cluster: ${{ github.event.inputs.services }}"
IFS=',' read -ra SERVICES <<< "${{ github.event.inputs.services }}"
for SERVICE in "${SERVICES[@]}"; do
SERVICE=$(echo $SERVICE | xargs) # Trim whitespace
echo "Deploying $SERVICE to FOSS cluster with image tag: ${IMAGE_TAG}"
echo "Deploying $SERVICE to FOSS cluster with image tag: ${IMAGE_TAG}"
kubectl set image deployment/$SERVICE-openreplay -n app $SERVICE=${IMAGE_REGISTRY_URL}/$SERVICE:${IMAGE_TAG}
done


@ -1,4 +1,4 @@
# This action will push the sourcemapreader changes to ee
# This action will push the sourcemapreader changes to aws
on:
workflow_dispatch:
inputs:
@ -9,13 +9,13 @@ on:
push:
branches:
- dev
- api-*
paths:
- "ee/sourcemap-reader/**"
- "sourcemap-reader/**"
- "!sourcemap-reader/.gitignore"
- "!sourcemap-reader/*-dev.sh"
name: Build and Deploy sourcemap-reader EE
name: Build and Deploy sourcemap-reader
jobs:
deploy:
@ -64,7 +64,7 @@ jobs:
- name: Building and Pushing sourcemaps-reader image
id: build-image
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}-ee
ENVIRONMENT: staging
run: |
@ -132,7 +132,7 @@ jobs:
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: ee
SLACK_CHANNEL: foss
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}


@ -9,6 +9,7 @@ on:
push:
branches:
- dev
- api-*
paths:
- "sourcemap-reader/**"
- "!sourcemap-reader/.gitignore"


@ -1,42 +1,35 @@
on:
pull_request:
types: [closed]
branches:
- main
name: Release tag update --force
workflow_dispatch:
description: "This workflow will build for patches for latest tag, and will Always use commit from main branch."
inputs:
services:
description: "This action will update the latest tag with current main branch HEAD. Should I proceed ? true/false"
required: true
default: "false"
name: Force Push tag with main branch HEAD
jobs:
deploy:
name: Build Patch from main
runs-on: ubuntu-latest
if: ${{ (github.event_name == 'pull_request' && github.event.pull_request.merged == true) || github.event.inputs.services == 'true' }}
env:
DEPOT_TOKEN: ${{ secrets.DEPOT_TOKEN }}
DEPOT_PROJECT_ID: ${{ secrets.DEPOT_PROJECT_ID }}
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Get latest release tag using GitHub API
id: get-latest-tag
run: |
LATEST_TAG=$(curl -s -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
"https://api.github.com/repos/${{ github.repository }}/releases/latest" \
| jq -r .tag_name)
# Fallback to git command if API doesn't return a tag
if [ "$LATEST_TAG" == "null" ] || [ -z "$LATEST_TAG" ]; then
echo "Not found latest tag"
exit 100
fi
echo "LATEST_TAG=$LATEST_TAG" >> $GITHUB_ENV
echo "Latest tag: $LATEST_TAG"
- name: Set Remote with GITHUB_TOKEN
run: |
git config --unset http.https://github.com/.extraheader
git remote set-url origin https://x-access-token:${{ secrets.ACTIONS_COMMMIT_TOKEN }}@github.com/${{ github.repository }}
git remote set-url origin https://x-access-token:${{ secrets.ACTIONS_COMMMIT_TOKEN }}@github.com/${{ github.repository }}.git
- name: Push main branch to tag
run: |
git fetch --tags
git checkout main
echo "Updating tag ${{ env.LATEST_TAG }} to point to latest commit on main"
git push origin HEAD:refs/tags/${{ env.LATEST_TAG }} --force
git push origin HEAD:refs/tags/$(git tag --list 'v[0-9]*' --sort=-v:refname | head -n 1) --force
# - name: Debug Job
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# with:
# limit-access-to-actor: true


@ -1,4 +1,4 @@
Copyright (c) 2021-2025 Asayer, Inc dba OpenReplay
Copyright (c) 2021-2024 Asayer, Inc dba OpenReplay
OpenReplay monorepo uses multiple licenses. Portions of this software are licensed as follows:
- All content that resides under the "ee/" directory of this repository, is licensed under the license defined in "ee/LICENSE".


@ -1,17 +1,10 @@
FROM python:3.12-alpine AS builder
LABEL maintainer="Rajesh Rajendran<rjshrjndrn@gmail.com>"
LABEL maintainer="KRAIEM Taha Yassine<tahayk2@gmail.com>"
RUN apk add --no-cache build-base
WORKDIR /work
COPY requirements.txt ./requirements.txt
RUN pip install --no-cache-dir --upgrade uv && \
export UV_SYSTEM_PYTHON=true && \
uv pip install --no-cache-dir --upgrade pip setuptools wheel && \
uv pip install --no-cache-dir --upgrade -r requirements.txt
FROM python:3.12-alpine
FROM python:3.11-alpine
LABEL Maintainer="Rajesh Rajendran<rjshrjndrn@gmail.com>"
LABEL Maintainer="KRAIEM Taha Yassine<tahayk2@gmail.com>"
ARG GIT_SHA
LABEL GIT_SHA=$GIT_SHA
RUN apk add --no-cache build-base tini
ARG envarg
# Add Tini
# Startup daemon
@ -21,11 +14,19 @@ ENV SOURCE_MAP_VERSION=0.7.4 \
PRIVATE_ENDPOINTS=false \
ENTERPRISE_BUILD=${envarg} \
GIT_SHA=$GIT_SHA
COPY --from=builder /usr/local/lib/python3.12/site-packages /usr/local/lib/python3.12/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin
WORKDIR /work
COPY requirements.txt ./requirements.txt
RUN pip install --no-cache-dir --upgrade uv
RUN uv pip install --no-cache-dir --upgrade pip setuptools wheel --system
RUN uv pip install --no-cache-dir --upgrade -r requirements.txt --system
COPY . .
RUN apk add --no-cache tini && mv env.default .env
RUN mv env.default .env
RUN adduser -u 1001 openreplay -D
USER 1001
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["./entrypoint.sh"]
CMD ./entrypoint.sh


@ -1,4 +1,4 @@
FROM python:3.12-alpine
FROM python:3.11-alpine
LABEL Maintainer="Rajesh Rajendran<rjshrjndrn@gmail.com>"
LABEL Maintainer="KRAIEM Taha Yassine<tahayk2@gmail.com>"
ARG GIT_SHA


@ -4,21 +4,21 @@ verify_ssl = true
name = "pypi"
[packages]
urllib3 = "==2.3.0"
urllib3 = "==2.2.3"
requests = "==2.32.3"
boto3 = "==1.36.12"
boto3 = "==1.35.86"
pyjwt = "==2.10.1"
psycopg2-binary = "==2.9.10"
psycopg = {extras = ["pool", "binary"], version = "==3.2.4"}
psycopg = {extras = ["pool", "binary"], version = "==3.2.3"}
clickhouse-driver = {extras = ["lz4"], version = "==0.2.9"}
clickhouse-connect = "==0.8.15"
elasticsearch = "==8.17.1"
clickhouse-connect = "==0.8.11"
elasticsearch = "==8.17.0"
jira = "==3.8.0"
cachetools = "==5.5.1"
fastapi = "==0.115.8"
cachetools = "==5.5.0"
fastapi = "==0.115.6"
uvicorn = {extras = ["standard"], version = "==0.34.0"}
python-decouple = "==3.8"
pydantic = {extras = ["email"], version = "==2.10.6"}
pydantic = {extras = ["email"], version = "==2.10.4"}
apscheduler = "==3.11.0"
redis = "==5.2.1"
@ -26,4 +26,3 @@ redis = "==5.2.1"
[requires]
python_version = "3.12"
python_full_version = "3.12.8"


@ -12,7 +12,7 @@ from chalicelib.utils import pg_client
@asynccontextmanager
async def lifespan(app: FastAPI):
# Startup
ap_logger.info(">>>>> starting up <<<<<")
logging.info(">>>>> starting up <<<<<")
await pg_client.init()
app.schedule.start()
app.schedule.add_job(id="alerts_processor", **{"func": alerts_processor.process, "trigger": "interval",
@ -27,22 +27,14 @@ async def lifespan(app: FastAPI):
yield
# Shutdown
ap_logger.info(">>>>> shutting down <<<<<")
logging.info(">>>>> shutting down <<<<<")
app.schedule.shutdown(wait=False)
await pg_client.terminate()
loglevel = config("LOGLEVEL", default=logging.INFO)
print(f">Loglevel set to: {loglevel}")
logging.basicConfig(level=loglevel)
ap_logger = logging.getLogger('apscheduler')
ap_logger.setLevel(loglevel)
app = FastAPI(root_path=config("root_path", default="/alerts"), docs_url=config("docs_url", default=""),
redoc_url=config("redoc_url", default=""), lifespan=lifespan)
app.schedule = AsyncIOScheduler()
ap_logger.info("============= ALERTS =============")
logging.info("============= ALERTS =============")
@app.get("/")
@ -58,8 +50,17 @@ async def get_health_status():
}}
app.schedule = AsyncIOScheduler()
loglevel = config("LOGLEVEL", default=logging.INFO)
print(f">Loglevel set to: {loglevel}")
logging.basicConfig(level=loglevel)
ap_logger = logging.getLogger('apscheduler')
ap_logger.setLevel(loglevel)
app.schedule = AsyncIOScheduler()
if config("LOCAL_DEV", default=False, cast=bool):
@app.get('/trigger', tags=["private"])
async def trigger_main_cron():
ap_logger.info("Triggering main cron")
logging.info("Triggering main cron")
alerts_processor.process()


@ -4,8 +4,7 @@ from pydantic_core._pydantic_core import ValidationError
import schemas
from chalicelib.core.alerts import alerts, alerts_listener
from chalicelib.core.alerts.modules import alert_helpers
from chalicelib.core.sessions import sessions_pg as sessions
from chalicelib.core.alerts.modules import sessions, alert_helpers
from chalicelib.utils import pg_client
from chalicelib.utils.TimeUTC import TimeUTC
@ -132,7 +131,6 @@ def Build(a):
def process():
logger.info("> processing alerts on PG")
notifications = []
all_alerts = alerts_listener.get_all_alerts()
with pg_client.PostgresClient() as cur:


@ -3,11 +3,10 @@ import logging
from pydantic_core._pydantic_core import ValidationError
import schemas
from chalicelib.core.alerts import alerts, alerts_listener
from chalicelib.core.alerts.modules import sessions, alert_helpers
from chalicelib.utils import pg_client, ch_client, exp_ch_helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.core.alerts import alerts, alerts_listener
from chalicelib.core.alerts.modules import alert_helpers
from chalicelib.core.sessions import sessions_ch as sessions
logger = logging.getLogger(__name__)
@ -156,7 +155,6 @@ def Build(a):
def process():
logger.info("> processing alerts on CH")
notifications = []
all_alerts = alerts_listener.get_all_alerts()
with pg_client.PostgresClient() as cur, ch_client.ClickHouseClient() as ch_cur:
@ -166,7 +164,7 @@ def process():
if alert_helpers.can_check(alert):
query, params = Build(alert)
try:
query = ch_cur.format(query=query, parameters=params)
query = ch_cur.format(query, params)
except Exception as e:
logger.error(
f"!!!Error while building alert query for alertId:{alert['alertId']} name: {alert['name']}")
@ -175,7 +173,7 @@ def process():
logger.debug(alert)
logger.debug(query)
try:
result = ch_cur.execute(query=query)
result = ch_cur.execute(query)
if len(result) > 0:
result = result[0]


@ -1,3 +1,9 @@
from decouple import config
TENANT_ID = "-1"
if config("EXP_ALERTS", cast=bool, default=False):
from chalicelib.core.sessions import sessions_ch as sessions
else:
from chalicelib.core.sessions import sessions
from . import helpers as alert_helpers


@ -37,7 +37,8 @@ def jwt_authorizer(scheme: str, token: str, leeway=0) -> dict | None:
logger.debug("! JWT Expired signature")
return None
except BaseException as e:
logger.warning("! JWT Base Exception", exc_info=e)
logger.warning("! JWT Base Exception")
logger.debug(e)
return None
return payload
@ -55,7 +56,8 @@ def jwt_refresh_authorizer(scheme: str, token: str):
logger.debug("! JWT-refresh Expired signature")
return None
except BaseException as e:
logger.error("! JWT-refresh Base Exception", exc_info=e)
logger.warning("! JWT-refresh Base Exception")
logger.debug(e)
return None
return payload


@ -0,0 +1,11 @@
import logging
from decouple import config
logging.basicConfig(level=config("LOGLEVEL", default=logging.INFO))
if config("EXP_AUTOCOMPLETE", cast=bool, default=False):
logging.info(">>> Using experimental autocomplete")
from . import autocomplete_ch as autocomplete
else:
from . import autocomplete


@ -85,8 +85,7 @@ def __generic_query(typename, value_length=None):
ORDER BY value"""
if value_length is None or value_length > 2:
return f"""SELECT DISTINCT ON(value,type) value, type
((SELECT DISTINCT value, type
return f"""(SELECT DISTINCT value, type
FROM {TABLE}
WHERE
project_id = %(project_id)s
@ -102,7 +101,7 @@ def __generic_query(typename, value_length=None):
AND type='{typename.upper()}'
AND value ILIKE %(value)s
ORDER BY value
LIMIT 5)) AS raw;"""
LIMIT 5);"""
return f"""SELECT DISTINCT value, type
FROM {TABLE}
WHERE
@ -125,7 +124,7 @@ def __generic_autocomplete(event: Event):
return f
def generic_autocomplete_metas(typename):
def __generic_autocomplete_metas(typename):
def f(project_id, text):
with pg_client.PostgresClient() as cur:
params = {"project_id": project_id, "value": helper.string_to_sql_like(text),
@ -327,7 +326,7 @@ def __search_metadata(project_id, value, key=None, source=None):
AND {colname} ILIKE %(svalue)s LIMIT 5)""")
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""\
SELECT DISTINCT ON(key, value) key, value, 'METADATA' AS TYPE
SELECT key, value, 'METADATA' AS TYPE
FROM({" UNION ALL ".join(sub_from)}) AS all_metas
LIMIT 5;""", {"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value)}))


@ -86,8 +86,7 @@ def __generic_query(typename, value_length=None):
ORDER BY value"""
if value_length is None or value_length > 2:
return f"""SELECT DISTINCT ON(value, type) value, type
FROM ((SELECT DISTINCT value, type
return f"""(SELECT DISTINCT value, type
FROM {TABLE}
WHERE
project_id = %(project_id)s
@ -103,7 +102,7 @@ def __generic_query(typename, value_length=None):
AND type='{typename.upper()}'
AND value ILIKE %(value)s
ORDER BY value
LIMIT 5)) AS raw;"""
LIMIT 5);"""
return f"""SELECT DISTINCT value, type
FROM {TABLE}
WHERE
@ -126,7 +125,7 @@ def __generic_autocomplete(event: Event):
return f
def generic_autocomplete_metas(typename):
def __generic_autocomplete_metas(typename):
def f(project_id, text):
with ch_client.ClickHouseClient() as cur:
params = {"project_id": project_id, "value": helper.string_to_sql_like(text),
@ -258,25 +257,25 @@ def __search_metadata(project_id, value, key=None, source=None):
WHERE project_id = %(project_id)s
AND {colname} ILIKE %(svalue)s LIMIT 5)""")
with ch_client.ClickHouseClient() as cur:
query = cur.format(query=f"""SELECT DISTINCT ON(key, value) key, value, 'METADATA' AS TYPE
query = cur.format(f"""SELECT key, value, 'METADATA' AS TYPE
FROM({" UNION ALL ".join(sub_from)}) AS all_metas
LIMIT 5;""", parameters={"project_id": project_id, "value": helper.string_to_sql_like(value),
LIMIT 5;""", {"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value)})
results = cur.execute(query)
return helper.list_to_camel_case(results)
TYPE_TO_COLUMN = {
schemas.EventType.CLICK: "`$properties`.label",
schemas.EventType.INPUT: "`$properties`.label",
schemas.EventType.LOCATION: "`$properties`.url_path",
schemas.EventType.CUSTOM: "`$properties`.name",
schemas.FetchFilterType.FETCH_URL: "`$properties`.url_path",
schemas.GraphqlFilterType.GRAPHQL_NAME: "`$properties`.name",
schemas.EventType.STATE_ACTION: "`$properties`.name",
schemas.EventType.CLICK: "label",
schemas.EventType.INPUT: "label",
schemas.EventType.LOCATION: "url_path",
schemas.EventType.CUSTOM: "name",
schemas.FetchFilterType.FETCH_URL: "url_path",
schemas.GraphqlFilterType.GRAPHQL_NAME: "name",
schemas.EventType.STATE_ACTION: "name",
# For ERROR, sessions search is happening over name OR message,
# for simplicity top 10 is using name only
schemas.EventType.ERROR: "`$properties`.name",
schemas.EventType.ERROR: "name",
schemas.FilterType.USER_COUNTRY: "user_country",
schemas.FilterType.USER_CITY: "user_city",
schemas.FilterType.USER_STATE: "user_state",
@ -326,9 +325,9 @@ def get_top_values(project_id, event_type, event_key=None):
query = f"""WITH raw AS (SELECT DISTINCT {colname} AS c_value,
COUNT(1) OVER (PARTITION BY c_value) AS row_count,
COUNT(1) OVER () AS total_count
FROM product_analytics.events
FROM experimental.events
WHERE project_id = %(project_id)s
AND `$event_name` = '{event_type}'
AND event_type = '{event_type}'
AND isNotNull(c_value)
AND notEmpty(c_value)
ORDER BY row_count DESC
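
The rewritten top-values query computes each value's count and the grand total in one pass with window functions, so a per-value share can be derived without a second query. A small sketch of how the returned rows could be folded into value/share pairs (column names follow the SELECT aliases above; the rest is assumed):

# Sketch: fold windowed rows (c_value, row_count, total_count) into value/count/share dicts.
def to_top_values(rows):
    out = []
    for r in rows:
        total = r["total_count"] or 1
        out.append({"value": r["c_value"],
                    "count": r["row_count"],
                    "share": round(100 * r["row_count"] / total, 2)})
    return out

print(to_top_values([{"c_value": "/checkout", "row_count": 30, "total_count": 120}]))
# -> [{'value': '/checkout', 'count': 30, 'share': 25.0}]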

View file

@ -13,18 +13,15 @@ def get_state(tenant_id):
if len(pids) > 0:
cur.execute(
cur.mogrify(
"""SELECT EXISTS(( SELECT 1
cur.mogrify("""SELECT EXISTS(( SELECT 1
FROM public.sessions AS s
WHERE s.project_id IN %(ids)s)) AS exists;""",
{"ids": tuple(pids)},
)
{"ids": tuple(pids)})
)
recorded = cur.fetchone()["exists"]
meta = False
if recorded:
query = cur.mogrify(
f"""SELECT EXISTS((SELECT 1
query = cur.mogrify("""SELECT EXISTS((SELECT 1
FROM public.projects AS p
LEFT JOIN LATERAL ( SELECT 1
FROM public.sessions
@ -39,35 +36,26 @@ def get_state(tenant_id):
OR p.metadata_8 IS NOT NULL OR p.metadata_9 IS NOT NULL
OR p.metadata_10 IS NOT NULL )
)) AS exists;""",
{"tenant_id": tenant_id},
)
{"tenant_id": tenant_id})
cur.execute(query)
meta = cur.fetchone()["exists"]
return [
{
"task": "Install OpenReplay",
"done": recorded,
"URL": "https://docs.openreplay.com/getting-started/quick-start",
},
{
"task": "Identify Users",
"done": meta,
"URL": "https://docs.openreplay.com/data-privacy-security/metadata",
},
{
"task": "Invite Team Members",
"done": len(users.get_members(tenant_id=tenant_id)) > 1,
"URL": "https://app.openreplay.com/client/manage-users",
},
{
"task": "Integrations",
"done": len(datadog.get_all(tenant_id=tenant_id)) > 0
or len(sentry.get_all(tenant_id=tenant_id)) > 0
or len(stackdriver.get_all(tenant_id=tenant_id)) > 0,
"URL": "https://docs.openreplay.com/integrations",
},
{"task": "Install OpenReplay",
"done": recorded,
"URL": "https://docs.openreplay.com/getting-started/quick-start"},
{"task": "Identify Users",
"done": meta,
"URL": "https://docs.openreplay.com/data-privacy-security/metadata"},
{"task": "Invite Team Members",
"done": len(users.get_members(tenant_id=tenant_id)) > 1,
"URL": "https://app.openreplay.com/client/manage-users"},
{"task": "Integrations",
"done": len(datadog.get_all(tenant_id=tenant_id)) > 0 \
or len(sentry.get_all(tenant_id=tenant_id)) > 0 \
or len(stackdriver.get_all(tenant_id=tenant_id)) > 0,
"URL": "https://docs.openreplay.com/integrations"}
]
@ -78,26 +66,21 @@ def get_state_installing(tenant_id):
if len(pids) > 0:
cur.execute(
cur.mogrify(
"""SELECT EXISTS(( SELECT 1
cur.mogrify("""SELECT EXISTS(( SELECT 1
FROM public.sessions AS s
WHERE s.project_id IN %(ids)s)) AS exists;""",
{"ids": tuple(pids)},
)
{"ids": tuple(pids)})
)
recorded = cur.fetchone()["exists"]
return {
"task": "Install OpenReplay",
"done": recorded,
"URL": "https://docs.openreplay.com/getting-started/quick-start",
}
return {"task": "Install OpenReplay",
"done": recorded,
"URL": "https://docs.openreplay.com/getting-started/quick-start"}
def get_state_identify_users(tenant_id):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
f"""SELECT EXISTS((SELECT 1
query = cur.mogrify(f"""SELECT EXISTS((SELECT 1
FROM public.projects AS p
LEFT JOIN LATERAL ( SELECT 1
FROM public.sessions
@ -112,32 +95,25 @@ def get_state_identify_users(tenant_id):
OR p.metadata_8 IS NOT NULL OR p.metadata_9 IS NOT NULL
OR p.metadata_10 IS NOT NULL )
)) AS exists;""",
{"tenant_id": tenant_id},
)
{"tenant_id": tenant_id})
cur.execute(query)
meta = cur.fetchone()["exists"]
return {
"task": "Identify Users",
"done": meta,
"URL": "https://docs.openreplay.com/data-privacy-security/metadata",
}
return {"task": "Identify Users",
"done": meta,
"URL": "https://docs.openreplay.com/data-privacy-security/metadata"}
def get_state_manage_users(tenant_id):
return {
"task": "Invite Team Members",
"done": len(users.get_members(tenant_id=tenant_id)) > 1,
"URL": "https://app.openreplay.com/client/manage-users",
}
return {"task": "Invite Team Members",
"done": len(users.get_members(tenant_id=tenant_id)) > 1,
"URL": "https://app.openreplay.com/client/manage-users"}
def get_state_integrations(tenant_id):
return {
"task": "Integrations",
"done": len(datadog.get_all(tenant_id=tenant_id)) > 0
or len(sentry.get_all(tenant_id=tenant_id)) > 0
or len(stackdriver.get_all(tenant_id=tenant_id)) > 0,
"URL": "https://docs.openreplay.com/integrations",
}
return {"task": "Integrations",
"done": len(datadog.get_all(tenant_id=tenant_id)) > 0 \
or len(sentry.get_all(tenant_id=tenant_id)) > 0 \
or len(stackdriver.get_all(tenant_id=tenant_id)) > 0,
"URL": "https://docs.openreplay.com/integrations"}

View file

@ -4,10 +4,10 @@ from decouple import config
logger = logging.getLogger(__name__)
from . import errors_pg as errors_legacy
from . import errors as errors_legacy
if config("EXP_ERRORS_SEARCH", cast=bool, default=False):
logger.info(">>> Using experimental error search")
from . import errors_ch as errors
else:
from . import errors_pg as errors
from . import errors

View file

@ -1,8 +1,7 @@
import json
from typing import List
from typing import Optional, List
import schemas
from chalicelib.core.errors.modules import errors_helper
from chalicelib.core.sessions import sessions_search
from chalicelib.core.sourcemaps import sourcemaps
from chalicelib.utils import pg_client, helper
@ -52,6 +51,27 @@ def get_batch(error_ids):
return helper.list_to_camel_case(errors)
def __get_basic_constraints(platform: Optional[schemas.PlatformType] = None, time_constraint: bool = True,
startTime_arg_name: str = "startDate", endTime_arg_name: str = "endDate",
chart: bool = False, step_size_name: str = "step_size",
project_key: Optional[str] = "project_id"):
if project_key is None:
ch_sub_query = []
else:
ch_sub_query = [f"{project_key} =%(project_id)s"]
if time_constraint:
ch_sub_query += [f"timestamp >= %({startTime_arg_name})s",
f"timestamp < %({endTime_arg_name})s"]
if chart:
ch_sub_query += [f"timestamp >= generated_timestamp",
f"timestamp < generated_timestamp + %({step_size_name})s"]
if platform == schemas.PlatformType.MOBILE:
ch_sub_query.append("user_device_type = 'mobile'")
elif platform == schemas.PlatformType.DESKTOP:
ch_sub_query.append("user_device_type = 'desktop'")
return ch_sub_query
def __get_sort_key(key):
return {
schemas.ErrorSort.OCCURRENCE: "max_datetime",
@ -60,7 +80,7 @@ def __get_sort_key(key):
}.get(key, 'max_datetime')
def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, user_id):
def search(data: schemas.SearchErrorsSchema, project_id, user_id):
empty_response = {
'total': 0,
'errors': []
@ -70,13 +90,12 @@ def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, us
for f in data.filters:
if f.type == schemas.FilterType.PLATFORM and len(f.value) > 0:
platform = f.value[0]
pg_sub_query = errors_helper.__get_basic_constraints(platform, project_key="sessions.project_id")
pg_sub_query = __get_basic_constraints(platform, project_key="sessions.project_id")
pg_sub_query += ["sessions.start_ts>=%(startDate)s", "sessions.start_ts<%(endDate)s", "source ='js_exception'",
"pe.project_id=%(project_id)s"]
# To ignore Script error
pg_sub_query.append("pe.message!='Script error.'")
pg_sub_query_chart = errors_helper.__get_basic_constraints(platform, time_constraint=False, chart=True,
project_key=None)
pg_sub_query_chart = __get_basic_constraints(platform, time_constraint=False, chart=True, project_key=None)
if platform:
pg_sub_query_chart += ["start_ts>=%(startDate)s", "start_ts<%(endDate)s", "project_id=%(project_id)s"]
pg_sub_query_chart.append("errors.error_id =details.error_id")
@ -88,7 +107,7 @@ def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, us
data.endTimestamp = TimeUTC.now(1)
if len(data.events) > 0 or len(data.filters) > 0:
print("-- searching for sessions before errors")
statuses = sessions_search.search_sessions(data=data, project=project, user_id=user_id, errors_only=True,
statuses = sessions_search.search_sessions(data=data, project_id=project_id, user_id=user_id, errors_only=True,
error_status=data.status)
if len(statuses) == 0:
return empty_response
@ -106,7 +125,7 @@ def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, us
params = {
"startDate": data.startTimestamp,
"endDate": data.endTimestamp,
"project_id": project.project_id,
"project_id": project_id,
"userId": user_id,
"step_size": step_size}
if data.status != schemas.ErrorStatus.ALL:
@ -188,7 +207,7 @@ def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, us
"""SELECT error_id
FROM public.errors
WHERE project_id = %(project_id)s AND error_id IN %(error_ids)s;""",
{"project_id": project.project_id, "error_ids": tuple([r["error_id"] for r in rows]),
{"project_id": project_id, "error_ids": tuple([r["error_id"] for r in rows]),
"user_id": user_id})
cur.execute(query=query)
statuses = helper.list_to_camel_case(cur.fetchall())
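
The __get_basic_constraints helper shown in this hunk builds the WHERE clause as a list of fragments with named placeholders; callers append more conditions and finally join the list with AND. A self-contained sketch of that pattern (table and values are made up):

# Sketch of the constraint-list pattern used above.
from typing import Optional

def basic_constraints(platform: Optional[str] = None, project_key: Optional[str] = "project_id"):
    conds = [] if project_key is None else [f"{project_key} = %(project_id)s"]
    conds += ["timestamp >= %(startDate)s", "timestamp < %(endDate)s"]
    if platform == "mobile":
        conds.append("user_device_type = 'mobile'")
    elif platform == "desktop":
        conds.append("user_device_type = 'desktop'")
    return conds

conds = basic_constraints(platform="mobile")
conds.append("source = 'js_exception'")  # callers extend the list, as search() does above
sql = "SELECT error_id FROM public.errors WHERE " + " AND ".join(conds) + ";"
print(sql)  # the %(...)s placeholders are filled later by the driver from a params dict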

View file

@ -1,11 +1,11 @@
import schemas
from chalicelib.core import metadata
from chalicelib.core.errors import errors_legacy
from chalicelib.core.errors.modules import errors_helper
from chalicelib.core.errors.modules import sessions
from chalicelib.core import sessions
from chalicelib.core.metrics import metrics
from chalicelib.utils import ch_client, exp_ch_helper
from chalicelib.utils import helper, metrics_helper
from chalicelib.utils.TimeUTC import TimeUTC
from . import errors as errors_legacy
def _multiple_values(values, value_key="value"):
@ -62,23 +62,22 @@ def get_batch(error_ids):
return errors_legacy.get_batch(error_ids=error_ids)
def __get_basic_constraints_events(platform=None, time_constraint=True, startTime_arg_name="startDate",
endTime_arg_name="endDate", type_condition=True, project_key="project_id",
table_name=None):
def __get_basic_constraints(platform=None, time_constraint=True, startTime_arg_name="startDate",
endTime_arg_name="endDate", type_condition=True, project_key="project_id", table_name=None):
ch_sub_query = [f"{project_key} =toUInt16(%(project_id)s)"]
if table_name is not None:
table_name = table_name + "."
else:
table_name = ""
if type_condition:
ch_sub_query.append(f"{table_name}`$event_name`='ERROR'")
ch_sub_query.append(f"{table_name}event_type='ERROR'")
if time_constraint:
ch_sub_query += [f"{table_name}created_at >= toDateTime(%({startTime_arg_name})s/1000)",
f"{table_name}created_at < toDateTime(%({endTime_arg_name})s/1000)"]
# if platform == schemas.PlatformType.MOBILE:
# ch_sub_query.append("user_device_type = 'mobile'")
# elif platform == schemas.PlatformType.DESKTOP:
# ch_sub_query.append("user_device_type = 'desktop'")
ch_sub_query += [f"{table_name}datetime >= toDateTime(%({startTime_arg_name})s/1000)",
f"{table_name}datetime < toDateTime(%({endTime_arg_name})s/1000)"]
if platform == schemas.PlatformType.MOBILE:
ch_sub_query.append("user_device_type = 'mobile'")
elif platform == schemas.PlatformType.DESKTOP:
ch_sub_query.append("user_device_type = 'desktop'")
return ch_sub_query
@ -90,7 +89,7 @@ def __get_sort_key(key):
}.get(key, 'max_datetime')
def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, user_id):
def search(data: schemas.SearchErrorsSchema, project_id, user_id):
MAIN_EVENTS_TABLE = exp_ch_helper.get_main_events_table(data.startTimestamp)
MAIN_SESSIONS_TABLE = exp_ch_helper.get_main_sessions_table(data.startTimestamp)
@ -98,13 +97,12 @@ def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, us
for f in data.filters:
if f.type == schemas.FilterType.PLATFORM and len(f.value) > 0:
platform = f.value[0]
ch_sessions_sub_query = errors_helper.__get_basic_constraints_ch(platform, type_condition=False)
ch_sessions_sub_query = __get_basic_constraints(platform, type_condition=False)
# ignore platform for errors table
ch_sub_query = __get_basic_constraints_events(None, type_condition=True)
ch_sub_query.append("JSONExtractString(toString(`$properties`), 'source') = 'js_exception'")
ch_sub_query = __get_basic_constraints(None, type_condition=True)
ch_sub_query.append("source ='js_exception'")
# To ignore Script error
ch_sub_query.append("JSONExtractString(toString(`$properties`), 'message') != 'Script error.'")
ch_sub_query.append("message!='Script error.'")
error_ids = None
if data.startTimestamp is None:
@ -130,8 +128,7 @@ def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, us
if len(data.events) > errors_condition_count:
subquery_part_args, subquery_part = sessions.search_query_parts_ch(data=data, error_status=data.status,
errors_only=True,
project_id=project.project_id,
user_id=user_id,
project_id=project_id, user_id=user_id,
issue=None,
favorite_only=False)
subquery_part = f"INNER JOIN {subquery_part} USING(session_id)"
@ -186,6 +183,7 @@ def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, us
_multiple_conditions(f's.user_country {op} %({f_k})s', f.value, is_not=is_not,
value_key=f_k))
elif filter_type in [schemas.FilterType.UTM_SOURCE]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.utm_source)')
@ -233,7 +231,7 @@ def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, us
elif filter_type == schemas.FilterType.METADATA:
# get metadata list only if you need it
if meta_keys is None:
meta_keys = metadata.get(project_id=project.project_id)
meta_keys = metadata.get(project_id=project_id)
meta_keys = {m["key"]: m["index"] for m in meta_keys}
if f.source in meta_keys.keys():
if is_any:
@ -310,7 +308,7 @@ def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, us
**params,
"startDate": data.startTimestamp,
"endDate": data.endTimestamp,
"project_id": project.project_id,
"project_id": project_id,
"userId": user_id,
"step_size": step_size}
if data.limit is not None and data.page is not None:
@ -335,18 +333,17 @@ def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, us
ch_sub_query.append("error_id IN %(error_ids)s")
main_ch_query = f"""\
SELECT details.error_id as error_id,
SELECT details.error_id AS error_id,
name, message, users, total,
sessions, last_occurrence, first_occurrence, chart
FROM (SELECT error_id,
JSONExtractString(toString(`$properties`), 'name') AS name,
JSONExtractString(toString(`$properties`), 'message') AS message,
name,
message,
COUNT(DISTINCT user_id) AS users,
COUNT(DISTINCT events.session_id) AS sessions,
MAX(created_at) AS max_datetime,
MIN(created_at) AS min_datetime,
COUNT(DISTINCT error_id)
OVER() AS total
MAX(datetime) AS max_datetime,
MIN(datetime) AS min_datetime,
COUNT(DISTINCT events.error_id) OVER() AS total
FROM {MAIN_EVENTS_TABLE} AS events
INNER JOIN (SELECT session_id, coalesce(user_id,toString(user_uuid)) AS user_id
FROM {MAIN_SESSIONS_TABLE} AS s
@ -357,23 +354,19 @@ def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, us
GROUP BY error_id, name, message
ORDER BY {sort} {order}
LIMIT %(errors_limit)s OFFSET %(errors_offset)s) AS details
INNER JOIN (SELECT error_id,
toUnixTimestamp(MAX(created_at))*1000 AS last_occurrence,
toUnixTimestamp(MIN(created_at))*1000 AS first_occurrence
INNER JOIN (SELECT error_id AS error_id,
toUnixTimestamp(MAX(datetime))*1000 AS last_occurrence,
toUnixTimestamp(MIN(datetime))*1000 AS first_occurrence
FROM {MAIN_EVENTS_TABLE}
WHERE project_id=%(project_id)s
AND `$event_name`='ERROR'
AND event_type='ERROR'
GROUP BY error_id) AS time_details
ON details.error_id=time_details.error_id
INNER JOIN (SELECT error_id, groupArray([timestamp, count]) AS chart
FROM (SELECT error_id,
gs.generate_series AS timestamp,
COUNT(DISTINCT session_id) AS count
FROM generate_series(%(startDate)s, %(endDate)s, %(step_size)s) AS gs
LEFT JOIN {MAIN_EVENTS_TABLE} ON(TRUE)
FROM (SELECT error_id, toUnixTimestamp(toStartOfInterval(datetime, INTERVAL %(step_size)s second)) * 1000 AS timestamp,
COUNT(DISTINCT session_id) AS count
FROM {MAIN_EVENTS_TABLE}
WHERE {" AND ".join(ch_sub_query)}
AND created_at >= toDateTime(timestamp / 1000)
AND created_at < toDateTime((timestamp + %(step_size)s) / 1000)
GROUP BY error_id, timestamp
ORDER BY timestamp) AS sub_table
GROUP BY error_id) AS chart_details ON details.error_id=chart_details.error_id;"""
@ -381,16 +374,17 @@ def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, us
# print("------------")
# print(ch.format(main_ch_query, params))
# print("------------")
query = ch.format(query=main_ch_query, parameters=params)
rows = ch.execute(query=query)
rows = ch.execute(query=main_ch_query, parameters=params)
total = rows[0]["total"] if len(rows) > 0 else 0
for r in rows:
r["chart"] = list(r["chart"])
for i in range(len(r["chart"])):
r["chart"][i] = {"timestamp": r["chart"][i][0], "count": r["chart"][i][1]}
r["chart"] = metrics.__complete_missing_steps(rows=r["chart"], start_time=data.startTimestamp,
end_time=data.endTimestamp,
density=data.density, neutral={"count": 0})
return {
'total': total,
'errors': helper.list_to_camel_case(rows)
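
After the grouped query returns one row per time bucket, the chart is densified so empty buckets still appear; one path in the hunk above delegates that to metrics.__complete_missing_steps with a neutral {"count": 0} row. The real helper works from a density rather than a raw step and may align buckets differently, so the following is only an illustrative stand-in:

# Illustrative stand-in for "complete missing steps": fill every expected bucket,
# inserting a neutral row where the query returned nothing.
def complete_missing_steps(rows, start_time, end_time, step, neutral):
    by_ts = {r["timestamp"]: r for r in rows}
    filled = []
    ts = start_time
    while ts < end_time:
        filled.append(by_ts.get(ts, {"timestamp": ts, **neutral}))
        ts += step
    return filled

rows = [{"timestamp": 2000, "count": 3}]
print(complete_missing_steps(rows, 1000, 4000, 1000, {"count": 0}))
# -> buckets at 1000, 2000, 3000, with zero-count rows where data was missing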

View file

@ -1,5 +1,5 @@
from chalicelib.core.errors.modules import errors_helper
from chalicelib.core.errors import errors_legacy as errors
from chalicelib.utils import errors_helper
from chalicelib.utils import pg_client, helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.metrics_helper import get_step_size
@ -40,29 +40,26 @@ def __process_tags(row):
def get_details(project_id, error_id, user_id, **data):
pg_sub_query24 = errors_helper.__get_basic_constraints(time_constraint=False, chart=True,
step_size_name="step_size24")
pg_sub_query24 = errors.__get_basic_constraints(time_constraint=False, chart=True, step_size_name="step_size24")
pg_sub_query24.append("error_id = %(error_id)s")
pg_sub_query30_session = errors_helper.__get_basic_constraints(time_constraint=True, chart=False,
startTime_arg_name="startDate30",
endTime_arg_name="endDate30",
project_key="sessions.project_id")
pg_sub_query30_session = errors.__get_basic_constraints(time_constraint=True, chart=False,
startTime_arg_name="startDate30",
endTime_arg_name="endDate30",
project_key="sessions.project_id")
pg_sub_query30_session.append("sessions.start_ts >= %(startDate30)s")
pg_sub_query30_session.append("sessions.start_ts <= %(endDate30)s")
pg_sub_query30_session.append("error_id = %(error_id)s")
pg_sub_query30_err = errors_helper.__get_basic_constraints(time_constraint=True, chart=False,
startTime_arg_name="startDate30",
endTime_arg_name="endDate30",
project_key="errors.project_id")
pg_sub_query30_err = errors.__get_basic_constraints(time_constraint=True, chart=False,
startTime_arg_name="startDate30",
endTime_arg_name="endDate30", project_key="errors.project_id")
pg_sub_query30_err.append("sessions.project_id = %(project_id)s")
pg_sub_query30_err.append("sessions.start_ts >= %(startDate30)s")
pg_sub_query30_err.append("sessions.start_ts <= %(endDate30)s")
pg_sub_query30_err.append("error_id = %(error_id)s")
pg_sub_query30_err.append("source ='js_exception'")
pg_sub_query30 = errors_helper.__get_basic_constraints(time_constraint=False, chart=True,
step_size_name="step_size30")
pg_sub_query30 = errors.__get_basic_constraints(time_constraint=False, chart=True, step_size_name="step_size30")
pg_sub_query30.append("error_id = %(error_id)s")
pg_basic_query = errors_helper.__get_basic_constraints(time_constraint=False)
pg_basic_query = errors.__get_basic_constraints(time_constraint=False)
pg_basic_query.append("error_id = %(error_id)s")
with pg_client.PostgresClient() as cur:
data["startDate24"] = TimeUTC.now(-1)
@ -98,7 +95,8 @@ def get_details(project_id, error_id, user_id, **data):
device_partition,
country_partition,
chart24,
chart30
chart30,
custom_tags
FROM (SELECT error_id,
name,
message,
@ -113,8 +111,15 @@ def get_details(project_id, error_id, user_id, **data):
MIN(timestamp) AS first_occurrence
FROM events.errors
WHERE error_id = %(error_id)s) AS time_details ON (TRUE)
INNER JOIN (SELECT session_id AS last_session_id
INNER JOIN (SELECT session_id AS last_session_id,
coalesce(custom_tags, '[]')::jsonb AS custom_tags
FROM events.errors
LEFT JOIN LATERAL (
SELECT jsonb_agg(jsonb_build_object(errors_tags.key, errors_tags.value)) AS custom_tags
FROM errors_tags
WHERE errors_tags.error_id = %(error_id)s
AND errors_tags.session_id = errors.session_id
AND errors_tags.message_id = errors.message_id) AS errors_tags ON (TRUE)
WHERE error_id = %(error_id)s
ORDER BY errors.timestamp DESC
LIMIT 1) AS last_session_details ON (TRUE)
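
The added LEFT JOIN LATERAL aggregates matching errors_tags rows into a jsonb array (coalesced to '[]' when none exist), so the details payload carries the error's custom tags alongside its last session. An illustrative look at the resulting column on the Python side (keys and values are invented for the example):

# Sketch only: shape of the custom_tags column produced by the jsonb_agg lateral join above,
# as it typically arrives through the PG driver's jsonb decoding.
row = {
    "error_id": "e-123",
    "last_session_id": 987,
    "custom_tags": [{"env": "staging"}, {"release": "v1.22.0"}],  # '[]' when no tags exist
}
tags = {k: v for d in row["custom_tags"] for k, v in d.items()}
print(tags)  # -> {'env': 'staging', 'release': 'v1.22.0'}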

View file

@ -3,9 +3,8 @@ import logging
from decouple import config
logger = logging.getLogger(__name__)
from . import helper as errors_helper
if config("EXP_ERRORS_SEARCH", cast=bool, default=False):
import chalicelib.core.sessions.sessions_ch as sessions
from chalicelib.core.sessions import sessions_ch as sessions
else:
import chalicelib.core.sessions.sessions_pg as sessions
from chalicelib.core.sessions import sessions

View file

@ -1,58 +0,0 @@
from typing import Optional
import schemas
from chalicelib.core.sourcemaps import sourcemaps
def __get_basic_constraints(platform: Optional[schemas.PlatformType] = None, time_constraint: bool = True,
startTime_arg_name: str = "startDate", endTime_arg_name: str = "endDate",
chart: bool = False, step_size_name: str = "step_size",
project_key: Optional[str] = "project_id"):
if project_key is None:
ch_sub_query = []
else:
ch_sub_query = [f"{project_key} =%(project_id)s"]
if time_constraint:
ch_sub_query += [f"timestamp >= %({startTime_arg_name})s",
f"timestamp < %({endTime_arg_name})s"]
if chart:
ch_sub_query += [f"timestamp >= generated_timestamp",
f"timestamp < generated_timestamp + %({step_size_name})s"]
if platform == schemas.PlatformType.MOBILE:
ch_sub_query.append("user_device_type = 'mobile'")
elif platform == schemas.PlatformType.DESKTOP:
ch_sub_query.append("user_device_type = 'desktop'")
return ch_sub_query
def __get_basic_constraints_ch(platform=None, time_constraint=True, startTime_arg_name="startDate",
endTime_arg_name="endDate", type_condition=True, project_key="project_id",
table_name=None):
ch_sub_query = [f"{project_key} =toUInt16(%(project_id)s)"]
if table_name is not None:
table_name = table_name + "."
else:
table_name = ""
if type_condition:
ch_sub_query.append(f"{table_name}`$event_name`='ERROR'")
if time_constraint:
ch_sub_query += [f"{table_name}datetime >= toDateTime(%({startTime_arg_name})s/1000)",
f"{table_name}datetime < toDateTime(%({endTime_arg_name})s/1000)"]
if platform == schemas.PlatformType.MOBILE:
ch_sub_query.append("user_device_type = 'mobile'")
elif platform == schemas.PlatformType.DESKTOP:
ch_sub_query.append("user_device_type = 'desktop'")
return ch_sub_query
def format_first_stack_frame(error):
error["stack"] = sourcemaps.format_payload(error.pop("payload"), truncate_to_first=True)
for s in error["stack"]:
for c in s.get("context", []):
for sci, sc in enumerate(c):
if isinstance(sc, str) and len(sc) > 1000:
c[sci] = sc[:1000]
# convert bytes to string:
if isinstance(s["filename"], bytes):
s["filename"] = s["filename"].decode("utf-8")
return error

View file

@ -1,9 +1,8 @@
from functools import cache
from typing import Optional
import schemas
from chalicelib.core import issues
from chalicelib.core.autocomplete import autocomplete
from chalicelib.core import issues
from chalicelib.core.sessions import sessions_metas
from chalicelib.utils import pg_client, helper
from chalicelib.utils.TimeUTC import TimeUTC
@ -138,57 +137,52 @@ class EventType:
column=None) # column=None because errors are searched by name or message
@cache
def supported_types():
return {
EventType.CLICK.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CLICK),
query=autocomplete.__generic_query(typename=EventType.CLICK.ui_type)),
EventType.INPUT.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.INPUT),
query=autocomplete.__generic_query(typename=EventType.INPUT.ui_type)),
EventType.LOCATION.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.LOCATION),
SUPPORTED_TYPES = {
EventType.CLICK.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CLICK),
query=autocomplete.__generic_query(typename=EventType.CLICK.ui_type)),
EventType.INPUT.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.INPUT),
query=autocomplete.__generic_query(typename=EventType.INPUT.ui_type)),
EventType.LOCATION.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.LOCATION),
query=autocomplete.__generic_query(
typename=EventType.LOCATION.ui_type)),
EventType.CUSTOM.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CUSTOM),
query=autocomplete.__generic_query(typename=EventType.CUSTOM.ui_type)),
EventType.REQUEST.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.REQUEST),
query=autocomplete.__generic_query(
typename=EventType.REQUEST.ui_type)),
EventType.GRAPHQL.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.GRAPHQL),
query=autocomplete.__generic_query(
typename=EventType.GRAPHQL.ui_type)),
EventType.STATEACTION.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.STATEACTION),
query=autocomplete.__generic_query(
typename=EventType.STATEACTION.ui_type)),
EventType.TAG.ui_type: SupportedFilter(get=_search_tags, query=None),
EventType.ERROR.ui_type: SupportedFilter(get=autocomplete.__search_errors,
query=None),
EventType.METADATA.ui_type: SupportedFilter(get=autocomplete.__search_metadata,
query=None),
# MOBILE
EventType.CLICK_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CLICK_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.LOCATION.ui_type)),
EventType.CUSTOM.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CUSTOM),
query=autocomplete.__generic_query(
typename=EventType.CUSTOM.ui_type)),
EventType.REQUEST.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.REQUEST),
typename=EventType.CLICK_MOBILE.ui_type)),
EventType.SWIPE_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.SWIPE_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.SWIPE_MOBILE.ui_type)),
EventType.INPUT_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.INPUT_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.INPUT_MOBILE.ui_type)),
EventType.VIEW_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.VIEW_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.REQUEST.ui_type)),
EventType.GRAPHQL.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.GRAPHQL),
query=autocomplete.__generic_query(
typename=EventType.GRAPHQL.ui_type)),
EventType.STATEACTION.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.STATEACTION),
query=autocomplete.__generic_query(
typename=EventType.STATEACTION.ui_type)),
EventType.TAG.ui_type: SupportedFilter(get=_search_tags, query=None),
EventType.ERROR.ui_type: SupportedFilter(get=autocomplete.__search_errors,
query=None),
EventType.METADATA.ui_type: SupportedFilter(get=autocomplete.__search_metadata,
typename=EventType.VIEW_MOBILE.ui_type)),
EventType.CUSTOM_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CUSTOM_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.CUSTOM_MOBILE.ui_type)),
EventType.REQUEST_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.REQUEST_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.REQUEST_MOBILE.ui_type)),
EventType.CRASH_MOBILE.ui_type: SupportedFilter(get=autocomplete.__search_errors_mobile,
query=None),
# MOBILE
EventType.CLICK_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CLICK_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.CLICK_MOBILE.ui_type)),
EventType.SWIPE_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.SWIPE_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.SWIPE_MOBILE.ui_type)),
EventType.INPUT_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.INPUT_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.INPUT_MOBILE.ui_type)),
EventType.VIEW_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.VIEW_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.VIEW_MOBILE.ui_type)),
EventType.CUSTOM_MOBILE.ui_type: SupportedFilter(
get=autocomplete.__generic_autocomplete(EventType.CUSTOM_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.CUSTOM_MOBILE.ui_type)),
EventType.REQUEST_MOBILE.ui_type: SupportedFilter(
get=autocomplete.__generic_autocomplete(EventType.REQUEST_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.REQUEST_MOBILE.ui_type)),
EventType.CRASH_MOBILE.ui_type: SupportedFilter(get=autocomplete.__search_errors_mobile,
query=None),
}
}
def get_errors_by_session_id(session_id, project_id):
@ -208,17 +202,20 @@ def search(text, event_type, project_id, source, key):
if not event_type:
return {"data": autocomplete.__get_autocomplete_table(text, project_id)}
if event_type in supported_types().keys():
rows = supported_types()[event_type].get(project_id=project_id, value=text, key=key, source=source)
elif event_type + "_MOBILE" in supported_types().keys():
rows = supported_types()[event_type + "_MOBILE"].get(project_id=project_id, value=text, key=key, source=source)
elif event_type in sessions_metas.supported_types().keys():
if event_type in SUPPORTED_TYPES.keys():
rows = SUPPORTED_TYPES[event_type].get(project_id=project_id, value=text, key=key, source=source)
# for IOS events autocomplete
# if event_type + "_IOS" in SUPPORTED_TYPES.keys():
# rows += SUPPORTED_TYPES[event_type + "_IOS"].get(project_id=project_id, value=text, key=key,source=source)
elif event_type + "_MOBILE" in SUPPORTED_TYPES.keys():
rows = SUPPORTED_TYPES[event_type + "_MOBILE"].get(project_id=project_id, value=text, key=key, source=source)
elif event_type in sessions_metas.SUPPORTED_TYPES.keys():
return sessions_metas.search(text, event_type, project_id)
elif event_type.endswith("_IOS") \
and event_type[:-len("_IOS")] in sessions_metas.supported_types().keys():
and event_type[:-len("_IOS")] in sessions_metas.SUPPORTED_TYPES.keys():
return sessions_metas.search(text, event_type, project_id)
elif event_type.endswith("_MOBILE") \
and event_type[:-len("_MOBILE")] in sessions_metas.supported_types().keys():
and event_type[:-len("_MOBILE")] in sessions_metas.SUPPORTED_TYPES.keys():
return sessions_metas.search(text, event_type, project_id)
else:
return {"errors": ["unsupported event"]}
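
One side of this hunk keeps the filter registry as a module-level SUPPORTED_TYPES dict, while the other builds it inside a supported_types() function under functools.cache, so the mapping is constructed lazily on first use and memoized afterwards. A minimal self-contained illustration of that idiom (Python 3.9+ for functools.cache; names are invented):

from functools import cache

def build_click_filter():
    print("building CLICK filter")  # side effect to show the body runs only once
    return {"query": "SELECT ... WHERE type = 'CLICK'"}

@cache
def supported_types():
    # built on first call, after all imports are settled, then reused
    return {"CLICK": build_click_filter()}

supported_types()  # prints "building CLICK filter"
supported_types()  # cached; nothing is rebuilt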

View file

@ -27,6 +27,7 @@ HEALTH_ENDPOINTS = {
"http": app_connection_string("http-openreplay", 8888, "metrics"),
"ingress-nginx": app_connection_string("ingress-nginx-openreplay", 80, "healthz"),
"integrations": app_connection_string("integrations-openreplay", 8888, "metrics"),
"peers": app_connection_string("peers-openreplay", 8888, "health"),
"sink": app_connection_string("sink-openreplay", 8888, "metrics"),
"sourcemapreader": app_connection_string(
"sourcemapreader-openreplay", 8888, "health"
@ -38,7 +39,9 @@ HEALTH_ENDPOINTS = {
def __check_database_pg(*_):
fail_response = {
"health": False,
"details": {"errors": ["Postgres health-check failed"]},
"details": {
"errors": ["Postgres health-check failed"]
}
}
with pg_client.PostgresClient() as cur:
try:
@ -60,26 +63,29 @@ def __check_database_pg(*_):
"details": {
# "version": server_version["server_version"],
# "schema": schema_version["version"]
},
}
}
def __always_healthy(*_):
return {"health": True, "details": {}}
return {
"health": True,
"details": {}
}
def __check_be_service(service_name):
def fn(*_):
fail_response = {
"health": False,
"details": {"errors": ["server health-check failed"]},
"details": {
"errors": ["server health-check failed"]
}
}
try:
results = requests.get(HEALTH_ENDPOINTS.get(service_name), timeout=2)
if results.status_code != 200:
logger.error(
f"!! issue with the {service_name}-health code:{results.status_code}"
)
logger.error(f"!! issue with the {service_name}-health code:{results.status_code}")
logger.error(results.text)
# fail_response["details"]["errors"].append(results.text)
return fail_response
@ -97,7 +103,10 @@ def __check_be_service(service_name):
logger.error("couldn't get response")
# fail_response["details"]["errors"].append(str(e))
return fail_response
return {"health": True, "details": {}}
return {
"health": True,
"details": {}
}
return fn
@ -105,7 +114,7 @@ def __check_be_service(service_name):
def __check_redis(*_):
fail_response = {
"health": False,
"details": {"errors": ["server health-check failed"]},
"details": {"errors": ["server health-check failed"]}
}
if config("REDIS_STRING", default=None) is None:
# fail_response["details"]["errors"].append("REDIS_STRING not defined in env-vars")
@ -124,14 +133,16 @@ def __check_redis(*_):
"health": True,
"details": {
# "version": r.execute_command('INFO')['redis_version']
},
}
}
def __check_SSL(*_):
fail_response = {
"health": False,
"details": {"errors": ["SSL Certificate health-check failed"]},
"details": {
"errors": ["SSL Certificate health-check failed"]
}
}
try:
requests.get(config("SITE_URL"), verify=True, allow_redirects=True)
@ -139,28 +150,36 @@ def __check_SSL(*_):
logger.error("!! health failed: SSL Certificate")
logger.exception(e)
return fail_response
return {"health": True, "details": {}}
return {
"health": True,
"details": {}
}
def __get_sessions_stats(*_):
with pg_client.PostgresClient() as cur:
constraints = ["projects.deleted_at IS NULL"]
query = cur.mogrify(
f"""SELECT COALESCE(SUM(sessions_count),0) AS s_c,
query = cur.mogrify(f"""SELECT COALESCE(SUM(sessions_count),0) AS s_c,
COALESCE(SUM(events_count),0) AS e_c
FROM public.projects_stats
INNER JOIN public.projects USING(project_id)
WHERE {" AND ".join(constraints)};"""
)
WHERE {" AND ".join(constraints)};""")
cur.execute(query)
row = cur.fetchone()
return {"numberOfSessionsCaptured": row["s_c"], "numberOfEventCaptured": row["e_c"]}
return {
"numberOfSessionsCaptured": row["s_c"],
"numberOfEventCaptured": row["e_c"]
}
def get_health(tenant_id=None):
health_map = {
"databases": {"postgres": __check_database_pg},
"ingestionPipeline": {"redis": __check_redis},
"databases": {
"postgres": __check_database_pg
},
"ingestionPipeline": {
"redis": __check_redis
},
"backendServices": {
"alerts": __check_be_service("alerts"),
"assets": __check_be_service("assets"),
@ -173,12 +192,13 @@ def get_health(tenant_id=None):
"http": __check_be_service("http"),
"ingress-nginx": __always_healthy,
"integrations": __check_be_service("integrations"),
"peers": __check_be_service("peers"),
"sink": __check_be_service("sink"),
"sourcemapreader": __check_be_service("sourcemapreader"),
"storage": __check_be_service("storage"),
"storage": __check_be_service("storage")
},
"details": __get_sessions_stats,
"ssl": __check_SSL,
"ssl": __check_SSL
}
return __process_health(health_map=health_map)
@ -190,16 +210,10 @@ def __process_health(health_map):
response.pop(parent_key)
elif isinstance(health_map[parent_key], dict):
for element_key in health_map[parent_key]:
if config(
f"SKIP_H_{parent_key.upper()}_{element_key.upper()}",
cast=bool,
default=False,
):
if config(f"SKIP_H_{parent_key.upper()}_{element_key.upper()}", cast=bool, default=False):
response[parent_key].pop(element_key)
else:
response[parent_key][element_key] = health_map[parent_key][
element_key
]()
response[parent_key][element_key] = health_map[parent_key][element_key]()
else:
response[parent_key] = health_map[parent_key]()
return response
@ -207,8 +221,7 @@ def __process_health(health_map):
def cron():
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""SELECT projects.project_id,
query = cur.mogrify("""SELECT projects.project_id,
projects.created_at,
projects.sessions_last_check_at,
projects.first_recorded_session_at,
@ -216,8 +229,7 @@ def cron():
FROM public.projects
LEFT JOIN public.projects_stats USING (project_id)
WHERE projects.deleted_at IS NULL
ORDER BY project_id;"""
)
ORDER BY project_id;""")
cur.execute(query)
rows = cur.fetchall()
for r in rows:
@ -238,24 +250,20 @@ def cron():
count_start_from = r["last_update_at"]
count_start_from = TimeUTC.datetime_to_timestamp(count_start_from)
params = {
"project_id": r["project_id"],
"start_ts": count_start_from,
"end_ts": TimeUTC.now(),
"sessions_count": 0,
"events_count": 0,
}
params = {"project_id": r["project_id"],
"start_ts": count_start_from,
"end_ts": TimeUTC.now(),
"sessions_count": 0,
"events_count": 0}
query = cur.mogrify(
"""SELECT COUNT(1) AS sessions_count,
query = cur.mogrify("""SELECT COUNT(1) AS sessions_count,
COALESCE(SUM(events_count),0) AS events_count
FROM public.sessions
WHERE project_id=%(project_id)s
AND start_ts>=%(start_ts)s
AND start_ts<=%(end_ts)s
AND duration IS NOT NULL;""",
params,
)
params)
cur.execute(query)
row = cur.fetchone()
if row is not None:
@ -263,68 +271,56 @@ def cron():
params["events_count"] = row["events_count"]
if insert:
query = cur.mogrify(
"""INSERT INTO public.projects_stats(project_id, sessions_count, events_count, last_update_at)
query = cur.mogrify("""INSERT INTO public.projects_stats(project_id, sessions_count, events_count, last_update_at)
VALUES (%(project_id)s, %(sessions_count)s, %(events_count)s, (now() AT TIME ZONE 'utc'::text));""",
params,
)
params)
else:
query = cur.mogrify(
"""UPDATE public.projects_stats
query = cur.mogrify("""UPDATE public.projects_stats
SET sessions_count=sessions_count+%(sessions_count)s,
events_count=events_count+%(events_count)s,
last_update_at=(now() AT TIME ZONE 'utc'::text)
WHERE project_id=%(project_id)s;""",
params,
)
params)
cur.execute(query)
# this cron is used to correct the sessions&events count every week
def weekly_cron():
with pg_client.PostgresClient(long_query=True) as cur:
query = cur.mogrify(
"""SELECT project_id,
query = cur.mogrify("""SELECT project_id,
projects_stats.last_update_at
FROM public.projects
LEFT JOIN public.projects_stats USING (project_id)
WHERE projects.deleted_at IS NULL
ORDER BY project_id;"""
)
ORDER BY project_id;""")
cur.execute(query)
rows = cur.fetchall()
for r in rows:
if r["last_update_at"] is None:
continue
params = {
"project_id": r["project_id"],
"end_ts": TimeUTC.now(),
"sessions_count": 0,
"events_count": 0,
}
params = {"project_id": r["project_id"],
"end_ts": TimeUTC.now(),
"sessions_count": 0,
"events_count": 0}
query = cur.mogrify(
"""SELECT COUNT(1) AS sessions_count,
query = cur.mogrify("""SELECT COUNT(1) AS sessions_count,
COALESCE(SUM(events_count),0) AS events_count
FROM public.sessions
WHERE project_id=%(project_id)s
AND start_ts<=%(end_ts)s
AND duration IS NOT NULL;""",
params,
)
params)
cur.execute(query)
row = cur.fetchone()
if row is not None:
params["sessions_count"] = row["sessions_count"]
params["events_count"] = row["events_count"]
query = cur.mogrify(
"""UPDATE public.projects_stats
query = cur.mogrify("""UPDATE public.projects_stats
SET sessions_count=%(sessions_count)s,
events_count=%(events_count)s,
last_update_at=(now() AT TIME ZONE 'utc'::text)
WHERE project_id=%(project_id)s;""",
params,
)
params)
cur.execute(query)
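
__check_be_service above is a closure factory: the outer call bakes the service name into a zero-argument check function, and __process_health later just calls whatever callables the health map holds. A compact self-contained sketch of the same shape (service names and the failure condition are invented):

def make_check(service_name: str):
    def fn(*_):
        # a real check would hit the service's health endpoint with a short timeout
        ok = service_name != "broken-service"
        return {"health": ok, "details": {} if ok else {"errors": [f"{service_name} failed"]}}
    return fn

health_map = {"backendServices": {"sink": make_check("sink"), "storage": make_check("storage")}}
report = {group: {name: check() for name, check in checks.items()}
          for group, checks in health_map.items()}
print(report)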

View file

@ -50,8 +50,8 @@ class JIRAIntegration(base.BaseIntegration):
cur.execute(
cur.mogrify(
"""SELECT username, token, url
FROM public.jira_cloud
WHERE user_id = %(user_id)s;""",
FROM public.jira_cloud
WHERE user_id=%(user_id)s;""",
{"user_id": self._user_id})
)
data = helper.dict_to_camel_case(cur.fetchone())
@ -95,9 +95,10 @@ class JIRAIntegration(base.BaseIntegration):
def add(self, username, token, url, obfuscate=False):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(""" \
INSERT INTO public.jira_cloud(username, token, user_id, url)
VALUES (%(username)s, %(token)s, %(user_id)s, %(url)s) RETURNING username, token, url;""",
cur.mogrify("""\
INSERT INTO public.jira_cloud(username, token, user_id,url)
VALUES (%(username)s, %(token)s, %(user_id)s,%(url)s)
RETURNING username, token, url;""",
{"user_id": self._user_id, "username": username,
"token": token, "url": url})
)
@ -111,10 +112,9 @@ class JIRAIntegration(base.BaseIntegration):
def delete(self):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(""" \
DELETE
FROM public.jira_cloud
WHERE user_id = %(user_id)s;""",
cur.mogrify("""\
DELETE FROM public.jira_cloud
WHERE user_id=%(user_id)s;""",
{"user_id": self._user_id})
)
return {"state": "success"}
@ -125,7 +125,7 @@ class JIRAIntegration(base.BaseIntegration):
changes={
"username": data.username,
"token": data.token if len(data.token) > 0 and data.token.find("***") == -1 \
else self.integration["token"],
else self.integration.token,
"url": str(data.url)
},
obfuscate=True

View file

@ -1,5 +1,6 @@
from chalicelib.core import log_tools
import requests
from chalicelib.core.log_tools import log_tools
from schemas import schemas
IN_TY = "bugsnag"

View file

@ -1,5 +1,5 @@
import boto3
from chalicelib.core.log_tools import log_tools
from chalicelib.core import log_tools
from schemas import schemas
IN_TY = "cloudwatch"

View file

@ -1,4 +1,4 @@
from chalicelib.core.log_tools import log_tools
from chalicelib.core import log_tools
from schemas import schemas
IN_TY = "datadog"

View file

@ -1,7 +1,8 @@
import logging
from chalicelib.core.log_tools import log_tools
from elasticsearch import Elasticsearch
from chalicelib.core import log_tools
from schemas import schemas
logger = logging.getLogger(__name__)

View file

@ -1,7 +1,6 @@
import json
from chalicelib.core.modules import TENANT_CONDITION
from chalicelib.utils import pg_client, helper
import json
from chalicelib.core.modules import TENANT_CONDITION
EXCEPT = ["jira_server", "jira_cloud"]

View file

@ -1,4 +1,4 @@
from chalicelib.core.log_tools import log_tools
from chalicelib.core import log_tools
from schemas import schemas
IN_TY = "newrelic"

View file

@ -1,4 +1,4 @@
from chalicelib.core.log_tools import log_tools
from chalicelib.core import log_tools
from schemas import schemas
IN_TY = "rollbar"

View file

@ -1,5 +1,5 @@
import requests
from chalicelib.core.log_tools import log_tools
from chalicelib.core import log_tools
from schemas import schemas
IN_TY = "sentry"

View file

@ -1,4 +1,4 @@
from chalicelib.core.log_tools import log_tools
from chalicelib.core import log_tools
from schemas import schemas
IN_TY = "stackdriver"

View file

@ -1,4 +1,4 @@
from chalicelib.core.log_tools import log_tools
from chalicelib.core import log_tools
from schemas import schemas
IN_TY = "sumologic"

View file

@ -98,23 +98,17 @@ def __edit(project_id, col_index, colname, new_name):
if col_index not in list(old_metas.keys()):
return {"errors": ["custom field not found"]}
if old_metas[col_index]["key"] != new_name:
with pg_client.PostgresClient() as cur:
with pg_client.PostgresClient() as cur:
if old_metas[col_index]["key"] != new_name:
query = cur.mogrify(f"""UPDATE public.projects
SET {colname} = %(value)s
WHERE project_id = %(project_id)s
AND deleted_at ISNULL
RETURNING {colname},
(SELECT {colname} FROM projects WHERE project_id = %(project_id)s) AS old_{colname};""",
RETURNING {colname};""",
{"project_id": project_id, "value": new_name})
cur.execute(query=query)
row = cur.fetchone()
new_name = row[colname]
old_name = row['old_' + colname]
new_name = cur.fetchone()[colname]
old_metas[col_index]["key"] = new_name
projects.rename_metadata_condition(project_id=project_id,
old_metadata_key=old_name,
new_metadata_key=new_name)
return {"data": old_metas[col_index]}
@ -127,8 +121,8 @@ def edit(tenant_id, project_id, index: int, new_name: str):
def delete(tenant_id, project_id, index: int):
index = int(index)
old_segments = get(project_id)
old_indexes = [k["index"] for k in old_segments]
if index not in old_indexes:
old_segments = [k["index"] for k in old_segments]
if index not in old_segments:
return {"errors": ["custom field not found"]}
with pg_client.PostgresClient() as cur:
@ -138,8 +132,7 @@ def delete(tenant_id, project_id, index: int):
WHERE project_id = %(project_id)s AND deleted_at ISNULL;""",
{"project_id": project_id})
cur.execute(query=query)
projects.delete_metadata_condition(project_id=project_id,
metadata_key=old_segments[old_indexes.index(index)]["key"])
return {"data": get(project_id)}
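
One version of __edit above returns both the new and the previous column value from a single statement by placing a scalar subquery inside the UPDATE's RETURNING clause; in PostgreSQL that subquery is evaluated against the table as it stood at the start of the statement, so it yields the pre-update name. A sketch with assumed table and column names:

# Assumed names; shows the UPDATE ... RETURNING old-and-new trick in isolation.
UPDATE_SQL = """
UPDATE public.projects
   SET metadata_1 = %(value)s
 WHERE project_id = %(project_id)s
RETURNING metadata_1,
          (SELECT metadata_1 FROM public.projects
            WHERE project_id = %(project_id)s) AS old_metadata_1;
"""

params = {"project_id": 7, "value": "plan_tier"}
# with psycopg2: cur.execute(UPDATE_SQL, params); row = cur.fetchone()
# row["metadata_1"] -> "plan_tier"; row["old_metadata_1"] -> the previous key name

Getting both values in one round trip is what lets the calling code rename dependent metadata conditions without a separate SELECT beforehand.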

View file

@ -6,5 +6,10 @@ logger = logging.getLogger(__name__)
if config("EXP_METRICS", cast=bool, default=False):
logger.info(">>> Using experimental metrics")
from chalicelib.core.metrics import heatmaps_ch as heatmaps
from chalicelib.core.metrics import metrics_ch as metrics
from chalicelib.core.metrics import product_analytics_ch as product_analytics
else:
pass
from chalicelib.core.metrics import heatmaps
from chalicelib.core.metrics import metrics
from chalicelib.core.metrics import product_analytics

View file

@ -6,7 +6,7 @@ from fastapi import HTTPException, status
import schemas
from chalicelib.core import issues
from chalicelib.core.errors import errors
from chalicelib.core.metrics import heatmaps, product_analytics, funnels
from chalicelib.core.metrics import heatmaps, product_analytics, funnels, custom_metrics_predefined
from chalicelib.core.sessions import sessions, sessions_search
from chalicelib.utils import helper, pg_client
from chalicelib.utils.TimeUTC import TimeUTC
@ -42,7 +42,7 @@ def __get_errors_list(project: schemas.ProjectContext, user_id, data: schemas.Ca
"total": 0,
"errors": []
}
return errors.search(data.series[0].filter, project=project, user_id=user_id)
return errors.search(data.series[0].filter, project_id=project.project_id, user_id=user_id)
def __get_sessions_list(project: schemas.ProjectContext, user_id, data: schemas.CardSchema):
@ -52,11 +52,11 @@ def __get_sessions_list(project: schemas.ProjectContext, user_id, data: schemas.
"total": 0,
"sessions": []
}
return sessions_search.search_sessions(data=data.series[0].filter, project=project, user_id=user_id)
return sessions_search.search_sessions(data=data.series[0].filter, project_id=project.project_id, user_id=user_id)
def get_heat_map_chart(project: schemas.ProjectContext, user_id, data: schemas.CardHeatMap,
include_mobs: bool = True):
def __get_heat_map_chart(project: schemas.ProjectContext, user_id, data: schemas.CardHeatMap,
include_mobs: bool = True):
if len(data.series) == 0:
return None
data.series[0].filter.filters += data.series[0].filter.events
@ -153,28 +153,33 @@ def __get_table_chart(project: schemas.ProjectContext, data: schemas.CardTable,
def get_chart(project: schemas.ProjectContext, data: schemas.CardSchema, user_id: int):
if data.is_predefined:
return custom_metrics_predefined.get_metric(key=data.metric_of,
project_id=project.project_id,
data=data.model_dump())
supported = {
schemas.MetricType.TIMESERIES: __get_timeseries_chart,
schemas.MetricType.TABLE: __get_table_chart,
schemas.MetricType.HEAT_MAP: get_heat_map_chart,
schemas.MetricType.HEAT_MAP: __get_heat_map_chart,
schemas.MetricType.FUNNEL: __get_funnel_chart,
schemas.MetricType.PATH_ANALYSIS: __get_path_analysis_chart
}
return supported.get(data.metric_type, not_supported)(project=project, data=data, user_id=user_id)
def get_sessions_by_card_id(project: schemas.ProjectContext, user_id, metric_id, data: schemas.CardSessionsSchema):
if not card_exists(metric_id=metric_id, project_id=project.project_id, user_id=user_id):
def get_sessions_by_card_id(project_id, user_id, metric_id, data: schemas.CardSessionsSchema):
if not card_exists(metric_id=metric_id, project_id=project_id, user_id=user_id):
return None
results = []
for s in data.series:
results.append({"seriesId": s.series_id, "seriesName": s.name,
**sessions_search.search_sessions(data=s.filter, project=project, user_id=user_id)})
**sessions_search.search_sessions(data=s.filter, project_id=project_id, user_id=user_id)})
return results
def get_sessions(project: schemas.ProjectContext, user_id, data: schemas.CardSessionsSchema):
def get_sessions(project_id, user_id, data: schemas.CardSessionsSchema):
results = []
if len(data.series) == 0:
return results
@ -184,12 +189,14 @@ def get_sessions(project: schemas.ProjectContext, user_id, data: schemas.CardSes
s.filter = schemas.SessionsSearchPayloadSchema(**s.filter.model_dump(by_alias=True))
results.append({"seriesId": None, "seriesName": s.name,
**sessions_search.search_sessions(data=s.filter, project=project, user_id=user_id)})
**sessions_search.search_sessions(data=s.filter, project_id=project_id, user_id=user_id)})
return results
def get_issues(project: schemas.ProjectContext, user_id: int, data: schemas.CardSchema):
if data.is_predefined:
return not_supported()
if data.metric_of == schemas.MetricOfTable.ISSUES:
return __get_table_of_issues(project=project, user_id=user_id, data=data)
supported = {
@ -201,16 +208,15 @@ def get_issues(project: schemas.ProjectContext, user_id: int, data: schemas.Card
return supported.get(data.metric_type, not_supported)()
def get_global_card_info(data: schemas.CardSchema):
r = {"hideExcess": data.hide_excess, "compareTo": data.compare_to, "rows": data.rows}
def __get_global_card_info(data: schemas.CardSchema):
r = {"hideExcess": data.hide_excess, "compareTo": data.compare_to}
return r
def get_path_analysis_card_info(data: schemas.CardPathAnalysis):
def __get_path_analysis_card_info(data: schemas.CardPathAnalysis):
r = {"start_point": [s.model_dump() for s in data.start_point],
"start_type": data.start_type,
"excludes": [e.model_dump() for e in data.excludes],
"rows": data.rows}
"excludes": [e.model_dump() for e in data.excludes]}
return r
@ -221,8 +227,8 @@ def create_card(project: schemas.ProjectContext, user_id, data: schemas.CardSche
if data.session_id is not None:
session_data = {"sessionId": data.session_id}
else:
session_data = get_heat_map_chart(project=project, user_id=user_id,
data=data, include_mobs=False)
session_data = __get_heat_map_chart(project=project, user_id=user_id,
data=data, include_mobs=False)
if session_data is not None:
session_data = {"sessionId": session_data["sessionId"]}
@ -235,9 +241,9 @@ def create_card(project: schemas.ProjectContext, user_id, data: schemas.CardSche
series_len = len(data.series)
params = {"user_id": user_id, "project_id": project.project_id, **data.model_dump(), **_data,
"default_config": json.dumps(data.default_config.model_dump()), "card_info": None}
params["card_info"] = get_global_card_info(data=data)
params["card_info"] = __get_global_card_info(data=data)
if data.metric_type == schemas.MetricType.PATH_ANALYSIS:
params["card_info"] = {**params["card_info"], **get_path_analysis_card_info(data=data)}
params["card_info"] = {**params["card_info"], **__get_path_analysis_card_info(data=data)}
params["card_info"] = json.dumps(params["card_info"])
query = """INSERT INTO metrics (project_id, user_id, name, is_public,
@ -299,9 +305,9 @@ def update_card(metric_id, user_id, project_id, data: schemas.CardSchema):
d_series_ids.append(i)
params["d_series_ids"] = tuple(d_series_ids)
params["session_data"] = json.dumps(metric["data"])
params["card_info"] = get_global_card_info(data=data)
params["card_info"] = __get_global_card_info(data=data)
if data.metric_type == schemas.MetricType.PATH_ANALYSIS:
params["card_info"] = {**params["card_info"], **get_path_analysis_card_info(data=data)}
params["card_info"] = {**params["card_info"], **__get_path_analysis_card_info(data=data)}
elif data.metric_type == schemas.MetricType.HEAT_MAP:
if data.session_id is not None:
params["session_data"] = json.dumps({"sessionId": data.session_id})
@ -352,100 +358,6 @@ def update_card(metric_id, user_id, project_id, data: schemas.CardSchema):
return get_card(metric_id=metric_id, project_id=project_id, user_id=user_id)
def search_metrics(project_id, user_id, data: schemas.MetricSearchSchema, include_series=False):
constraints = ["metrics.project_id = %(project_id)s", "metrics.deleted_at ISNULL"]
params = {
"project_id": project_id,
"user_id": user_id,
"offset": (data.page - 1) * data.limit,
"limit": data.limit,
}
if data.mine_only:
constraints.append("user_id = %(user_id)s")
else:
constraints.append("(user_id = %(user_id)s OR metrics.is_public)")
if data.shared_only:
constraints.append("is_public")
if data.filter is not None:
if data.filter.type:
constraints.append("metrics.metric_type = %(filter_type)s")
params["filter_type"] = data.filter.type
if data.filter.query and len(data.filter.query) > 0:
constraints.append("(metrics.name ILIKE %(filter_query)s OR owner.owner_name ILIKE %(filter_query)s)")
params["filter_query"] = helper.values_for_operator(
value=data.filter.query, op=schemas.SearchEventOperator.CONTAINS
)
with pg_client.PostgresClient() as cur:
sub_join = ""
if include_series:
sub_join = """LEFT JOIN LATERAL (
SELECT COALESCE(jsonb_agg(metric_series.* ORDER BY index),'[]'::jsonb) AS series
FROM metric_series
WHERE metric_series.metric_id = metrics.metric_id
AND metric_series.deleted_at ISNULL
) AS metric_series ON (TRUE)"""
sort_column = data.sort.field if data.sort.field is not None and len(data.sort.field) > 0 \
else "created_at"
# change ascend to asc and descend to desc
sort_order = data.sort.order.value if hasattr(data.sort.order, "value") else data.sort.order
if sort_order == "ascend":
sort_order = "asc"
elif sort_order == "descend":
sort_order = "desc"
query = cur.mogrify(
f"""SELECT count(1) OVER () AS total,metric_id, project_id, user_id, name, is_public, created_at, edited_at,
metric_type, metric_of, metric_format, metric_value, view_type, is_pinned,
dashboards, owner_email, owner_name, default_config AS config, thumbnail
FROM metrics
{sub_join}
LEFT JOIN LATERAL (
SELECT COALESCE(jsonb_agg(connected_dashboards.* ORDER BY is_public, name),'[]'::jsonb) AS dashboards
FROM (
SELECT DISTINCT dashboard_id, name, is_public
FROM dashboards
INNER JOIN dashboard_widgets USING (dashboard_id)
WHERE deleted_at ISNULL
AND dashboard_widgets.metric_id = metrics.metric_id
AND project_id = %(project_id)s
AND ((dashboards.user_id = %(user_id)s OR is_public))
) AS connected_dashboards
) AS connected_dashboards ON (TRUE)
LEFT JOIN LATERAL (
SELECT email AS owner_email, name AS owner_name
FROM users
WHERE deleted_at ISNULL
AND users.user_id = metrics.user_id
) AS owner ON (TRUE)
WHERE {" AND ".join(constraints)}
ORDER BY {sort_column} {sort_order}
LIMIT %(limit)s OFFSET %(offset)s;""",
params
)
cur.execute(query)
rows = cur.fetchall()
if len(rows) > 0:
total = rows[0]["total"]
if include_series:
for r in rows:
r.pop("total")
for s in r.get("series", []):
s["filter"] = helper.old_search_payload_to_flat(s["filter"])
else:
for r in rows:
r.pop("total")
r["created_at"] = TimeUTC.datetime_to_timestamp(r["created_at"])
r["edited_at"] = TimeUTC.datetime_to_timestamp(r["edited_at"])
rows = helper.list_to_camel_case(rows)
else:
total = 0
return {"total": total, "list": rows}
def search_all(project_id, user_id, data: schemas.SearchCardsSchema, include_series=False):
constraints = ["metrics.project_id = %(project_id)s",
"metrics.deleted_at ISNULL"]
@ -542,7 +454,7 @@ def __get_global_attributes(row):
if row is None or row.get("cardInfo") is None:
return row
card_info = row.get("cardInfo", {})
row["compareTo"] = card_info["compareTo"] if card_info.get("compareTo") is not None else []
row["compareTo"] = card_info.get("compareTo", [])
return row
@ -685,7 +597,11 @@ def make_chart_from_card(project: schemas.ProjectContext, user_id, metric_id, da
raw_metric["density"] = data.density
metric: schemas.CardSchema = schemas.CardSchema(**raw_metric)
if metric.metric_type == schemas.MetricType.HEAT_MAP:
if metric.is_predefined:
return custom_metrics_predefined.get_metric(key=metric.metric_of,
project_id=project.project_id,
data=data.model_dump())
elif metric.metric_type == schemas.MetricType.HEAT_MAP:
if raw_metric["data"] and raw_metric["data"].get("sessionId"):
return heatmaps.get_selected_session(project_id=project.project_id,
session_id=raw_metric["data"]["sessionId"])


@ -0,0 +1,25 @@
import logging
from typing import Union
import schemas
from chalicelib.core.metrics import metrics
logger = logging.getLogger(__name__)
def get_metric(key: Union[schemas.MetricOfWebVitals, schemas.MetricOfErrors], project_id: int, data: dict):
supported = {
schemas.MetricOfWebVitals.COUNT_SESSIONS: metrics.get_processed_sessions,
schemas.MetricOfWebVitals.AVG_VISITED_PAGES: metrics.get_user_activity_avg_visited_pages,
schemas.MetricOfWebVitals.COUNT_REQUESTS: metrics.get_top_metrics_count_requests,
schemas.MetricOfErrors.IMPACTED_SESSIONS_BY_JS_ERRORS: metrics.get_impacted_sessions_by_js_errors,
schemas.MetricOfErrors.DOMAINS_ERRORS_4XX: metrics.get_domains_errors_4xx,
schemas.MetricOfErrors.DOMAINS_ERRORS_5XX: metrics.get_domains_errors_5xx,
schemas.MetricOfErrors.ERRORS_PER_DOMAINS: metrics.get_errors_per_domains,
schemas.MetricOfErrors.ERRORS_PER_TYPE: metrics.get_errors_per_type,
schemas.MetricOfErrors.RESOURCES_BY_PARTY: metrics.get_resources_by_party,
schemas.MetricOfWebVitals.COUNT_USERS: metrics.get_unique_users,
schemas.MetricOfWebVitals.SPEED_LOCATION: metrics.get_speed_index_location,
}
# the fallback must accept the keyword call below, hence *args, **kwargs
return supported.get(key, lambda *args, **kwargs: None)(project_id=project_id, **data)
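A hedged usage sketch for the dispatcher above: a key that is not in the table falls through to the no-op default and yields None. The payload keys are assumed to match the keyword arguments the metrics functions accept (startTimestamp, endTimestamp, density), and project_id=1 is only illustrative.

import schemas
from chalicelib.utils.TimeUTC import TimeUTC

data = {"startTimestamp": TimeUTC.now(delta_days=-1),
        "endTimestamp": TimeUTC.now(),
        "density": 7}
# delegates to metrics.get_processed_sessions
chart = get_metric(key=schemas.MetricOfWebVitals.COUNT_SESSIONS, project_id=1, data=data)
# unknown keys hit the no-op default and return None
missing = get_metric(key="not-a-metric", project_id=1, data=data)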


@ -1,11 +0,0 @@
import logging
from decouple import config
logger = logging.getLogger(__name__)
if config("EXP_METRICS", cast=bool, default=False):
logger.info(">>> Using experimental heatmaps")
from .heatmaps_ch import *
else:
from .heatmaps import *


@ -17,18 +17,15 @@ def get_by_url(project_id, data: schemas.GetHeatMapPayloadSchema):
return []
args = {"startDate": data.startTimestamp, "endDate": data.endTimestamp,
"project_id": project_id, "url": data.url}
constraints = [
"main_events.project_id = toUInt16(%(project_id)s)",
"main_events.created_at >= toDateTime(%(startDate)s / 1000)",
"main_events.created_at <= toDateTime(%(endDate)s / 1000)",
"main_events.`$event_name` = 'CLICK'",
"isNotNull(JSON_VALUE(CAST(main_events.`$properties` AS String), '$.normalized_x'))"
]
constraints = ["main_events.project_id = toUInt16(%(project_id)s)",
"main_events.datetime >= toDateTime(%(startDate)s/1000)",
"main_events.datetime <= toDateTime(%(endDate)s/1000)",
"main_events.event_type='CLICK'",
"isNotNull(main_events.normalized_x)"]
if data.operator == schemas.SearchEventOperator.IS:
constraints.append("JSON_VALUE(CAST(main_events.`$properties` AS String), '$.url_path') = %(url)s")
constraints.append("url_path= %(url)s")
else:
constraints.append("JSON_VALUE(CAST(main_events.`$properties` AS String), '$.url_path') ILIKE %(url)s")
constraints.append("url_path ILIKE %(url)s")
args["url"] = helper.values_for_operator(data.url, data.operator)
query_from = f"{exp_ch_helper.get_main_events_table(data.startTimestamp)} AS main_events"
@ -73,18 +70,17 @@ def get_by_url(project_id, data: schemas.GetHeatMapPayloadSchema):
# query_from += """ LEFT JOIN experimental.events AS issues_t ON (main_events.session_id=issues_t.session_id)
# LEFT JOIN experimental.issues AS mis ON (issues_t.issue_id=mis.issue_id)"""
with ch_client.ClickHouseClient() as cur:
query = cur.format(query=f"""SELECT
JSON_VALUE(CAST(`$properties` AS String), '$.normalized_x') AS normalized_x,
JSON_VALUE(CAST(`$properties` AS String), '$.normalized_y') AS normalized_y
FROM {query_from}
WHERE {" AND ".join(constraints)}
LIMIT 500;""",
query = cur.format(query=f"""SELECT main_events.normalized_x AS normalized_x,
main_events.normalized_y AS normalized_y
FROM {query_from}
WHERE {" AND ".join(constraints)}
LIMIT 500;""",
parameters=args)
logger.debug("---------")
logger.debug(query)
logger.debug("---------")
try:
rows = cur.execute(query=query)
rows = cur.execute(query)
except Exception as err:
logger.warning("--------- HEATMAP 2 SEARCH QUERY EXCEPTION CH -----------")
logger.warning(query)
@ -98,16 +94,14 @@ def get_by_url(project_id, data: schemas.GetHeatMapPayloadSchema):
def get_x_y_by_url_and_session_id(project_id, session_id, data: schemas.GetHeatMapPayloadSchema):
args = {"project_id": project_id, "session_id": session_id, "url": data.url}
constraints = [
"main_events.project_id = toUInt16(%(project_id)s)",
"main_events.session_id = %(session_id)s",
"main_events.`$event_name`='CLICK'",
"isNotNull(JSON_VALUE(CAST(main_events.`$properties` AS String), '$.normalized_x'))"
]
constraints = ["main_events.project_id = toUInt16(%(project_id)s)",
"main_events.session_id = %(session_id)s",
"main_events.event_type='CLICK'",
"isNotNull(main_events.normalized_x)"]
if data.operator == schemas.SearchEventOperator.IS:
constraints.append("JSON_VALUE(CAST(main_events.`$properties` AS String), '$.url_path') = %(url)s")
constraints.append("main_events.url_path = %(url)s")
else:
constraints.append("JSON_VALUE(CAST(main_events.`$properties` AS String), '$.url_path') ILIKE %(url)s")
constraints.append("main_events.url_path ILIKE %(url)s")
args["url"] = helper.values_for_operator(data.url, data.operator)
query_from = f"{exp_ch_helper.get_main_events_table(0)} AS main_events"
@ -122,7 +116,7 @@ def get_x_y_by_url_and_session_id(project_id, session_id, data: schemas.GetHeatM
logger.debug(query)
logger.debug("---------")
try:
rows = cur.execute(query=query)
rows = cur.execute(query)
except Exception as err:
logger.warning("--------- HEATMAP-session_id SEARCH QUERY EXCEPTION CH -----------")
logger.warning(query)
@ -138,18 +132,17 @@ def get_selectors_by_url_and_session_id(project_id, session_id, data: schemas.Ge
args = {"project_id": project_id, "session_id": session_id, "url": data.url}
constraints = ["main_events.project_id = toUInt16(%(project_id)s)",
"main_events.session_id = %(session_id)s",
"main_events.`$event_name`='CLICK'"]
"main_events.event_type='CLICK'"]
if data.operator == schemas.SearchEventOperator.IS:
constraints.append("JSON_VALUE(CAST(main_events.`$properties` AS String), '$.url_path') = %(url)s")
constraints.append("main_events.url_path = %(url)s")
else:
constraints.append("JSON_VALUE(CAST(main_events.`$properties` AS String), '$.url_path') ILIKE %(url)s")
constraints.append("main_events.url_path ILIKE %(url)s")
args["url"] = helper.values_for_operator(data.url, data.operator)
query_from = f"{exp_ch_helper.get_main_events_table(0)} AS main_events"
with ch_client.ClickHouseClient() as cur:
query = cur.format(query=f"""SELECT CAST(`$properties`.selector AS String) AS selector,
query = cur.format(query=f"""SELECT main_events.selector AS selector,
COUNT(1) AS count
FROM {query_from}
WHERE {" AND ".join(constraints)}
@ -160,7 +153,7 @@ def get_selectors_by_url_and_session_id(project_id, session_id, data: schemas.Ge
logger.debug(query)
logger.debug("---------")
try:
rows = cur.execute(query=query)
rows = cur.execute(query)
except Exception as err:
logger.warning("--------- HEATMAP-session_id SEARCH QUERY EXCEPTION CH -----------")
logger.warning(query)
@ -188,7 +181,7 @@ def __get_1_url(location_condition: schemas.SessionSearchEventSchema2 | None, se
"start_time": start_time,
"end_time": end_time,
}
sub_condition = ["session_id = %(sessionId)s", "`$event_name` = 'CLICK'", "project_id = %(projectId)s"]
sub_condition = ["session_id = %(sessionId)s", "event_type = 'CLICK'", "project_id = %(projectId)s"]
if location_condition and len(location_condition.value) > 0:
f_k = "LOC"
op = sh.get_sql_operator(location_condition.operator)
@ -197,31 +190,25 @@ def __get_1_url(location_condition: schemas.SessionSearchEventSchema2 | None, se
sh.multi_conditions(f'path {op} %({f_k})s', location_condition.value, is_not=False,
value_key=f_k))
with ch_client.ClickHouseClient() as cur:
main_query = cur.format(query=f"""WITH paths AS (
SELECT DISTINCT
JSON_VALUE(CAST(`$properties` AS String), '$.url_path') AS url_path
FROM product_analytics.events
WHERE {" AND ".join(sub_condition)}
)
SELECT
paths.url_path,
COUNT(*) AS count
FROM product_analytics.events
INNER JOIN paths
ON JSON_VALUE(CAST(product_analytics.events.$properties AS String), '$.url_path') = paths.url_path
WHERE `$event_name` = 'CLICK'
AND project_id = %(projectId)s
AND created_at >= toDateTime(%(start_time)s / 1000)
AND created_at <= toDateTime(%(end_time)s / 1000)
GROUP BY paths.url_path
ORDER BY count DESC
LIMIT 1;""",
main_query = cur.format(query=f"""WITH paths AS (SELECT DISTINCT url_path
FROM experimental.events
WHERE {" AND ".join(sub_condition)})
SELECT url_path, COUNT(1) AS count
FROM experimental.events
INNER JOIN paths USING (url_path)
WHERE event_type = 'CLICK'
AND project_id = %(projectId)s
AND datetime >= toDateTime(%(start_time)s / 1000)
AND datetime <= toDateTime(%(end_time)s / 1000)
GROUP BY url_path
ORDER BY count DESC
LIMIT 1;""",
parameters=full_args)
logger.debug("--------------------")
logger.debug(main_query)
logger.debug("--------------------")
try:
url = cur.execute(query=main_query)
url = cur.execute(main_query)
except Exception as err:
logger.warning("--------- CLICK MAP BEST URL SEARCH QUERY EXCEPTION CH-----------")
logger.warning(main_query.decode('UTF-8'))
@ -295,7 +282,7 @@ def search_short_session(data: schemas.HeatMapSessionsSearch, project_id, user_i
logger.debug(main_query)
logger.debug("--------------------")
try:
session = cur.execute(query=main_query)
session = cur.execute(main_query)
except Exception as err:
logger.warning("--------- CLICK MAP SHORT SESSION SEARCH QUERY EXCEPTION CH -----------")
logger.warning(main_query)
@ -342,7 +329,7 @@ def get_selected_session(project_id, session_id):
logger.debug(main_query)
logger.debug("--------------------")
try:
session = cur.execute(query=main_query)
session = cur.execute(main_query)
except Exception as err:
logger.warning("--------- CLICK MAP GET SELECTED SESSION QUERY EXCEPTION -----------")
logger.warning(main_query.decode('UTF-8'))
@ -365,21 +352,19 @@ def get_selected_session(project_id, session_id):
def get_page_events(session_id, project_id):
with ch_client.ClickHouseClient() as cur:
query = cur.format(query=f"""SELECT
event_id as message_id,
toUnixTimestamp(created_at)*1000 AS timestamp,
JSON_VALUE(CAST(`$properties` AS String), '$.url_host') AS host,
JSON_VALUE(CAST(`$properties` AS String), '$.url_path') AS path,
JSON_VALUE(CAST(`$properties` AS String), '$.url_path') AS value,
JSON_VALUE(CAST(`$properties` AS String), '$.url_path') AS url,
'LOCATION' AS type
FROM product_analytics.events
WHERE session_id = %(session_id)s
AND `$event_name`='LOCATION'
AND project_id= %(project_id)s
ORDER BY created_at,message_id;""",
parameters={"session_id": session_id, "project_id": project_id})
rows = cur.execute(query=query)
rows = cur.execute("""\
SELECT
message_id,
toUnixTimestamp(datetime)*1000 AS timestamp,
url_host AS host,
url_path AS path,
url_path AS value,
url_path AS url,
'LOCATION' AS type
FROM experimental.events
WHERE session_id = %(session_id)s
AND event_type='LOCATION'
AND project_id= %(project_id)s
ORDER BY datetime,message_id;""", {"session_id": session_id, "project_id": project_id})
rows = helper.list_to_camel_case(rows)
return rows


@ -0,0 +1,624 @@
import logging
import schemas
from chalicelib.core import metadata
from chalicelib.utils import helper
from chalicelib.utils import pg_client
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.metrics_helper import get_step_size
logger = logging.getLogger(__name__)
def __get_constraints(project_id, time_constraint=True, chart=False, duration=True, project=True,
project_identifier="project_id",
main_table="sessions", time_column="start_ts", data={}):
pg_sub_query = []
main_table = main_table + "." if main_table is not None and len(main_table) > 0 else ""
if project:
pg_sub_query.append(f"{main_table}{project_identifier} =%({project_identifier})s")
if duration:
pg_sub_query.append(f"{main_table}duration>0")
if time_constraint:
pg_sub_query.append(f"{main_table}{time_column} >= %(startTimestamp)s")
pg_sub_query.append(f"{main_table}{time_column} < %(endTimestamp)s")
if chart:
pg_sub_query.append(f"{main_table}{time_column} >= generated_timestamp")
pg_sub_query.append(f"{main_table}{time_column} < generated_timestamp + %(step_size)s")
return pg_sub_query + __get_meta_constraint(project_id=project_id, data=data)
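As a sketch (not from the source), the fragments this helper returns for a plain call are the session-level conditions that the query builders below join with AND:

# __get_constraints(project_id=1, data={}) returns roughly:
#   sessions.project_id =%(project_id)s
#   sessions.duration>0
#   sessions.start_ts >= %(startTimestamp)s
#   sessions.start_ts < %(endTimestamp)s
# With chart=True it also appends generated_timestamp bounds so the same
# fragments can be reused inside the LATERAL chart subqueries; metadata
# filters from __get_meta_constraint are concatenated at the end.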
def __merge_charts(list1, list2, time_key="timestamp"):
if len(list1) != len(list2):
raise Exception("cannot merge unequal lists")
result = []
for i in range(len(list1)):
timestamp = min(list1[i][time_key], list2[i][time_key])
result.append({**list1[i], **list2[i], time_key: timestamp})
return result
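A small illustration of the merge helper above (values invented): lists are zipped positionally and each merged point keeps the earlier of the two timestamps.

sessions_chart = [{"timestamp": 1000, "sessions_count": 3},
                  {"timestamp": 2000, "sessions_count": 5}]
errors_chart = [{"timestamp": 1000, "errors_count": 1},
                {"timestamp": 2100, "errors_count": 2}]
merged = __merge_charts(sessions_chart, errors_chart)
# merged == [{"timestamp": 1000, "sessions_count": 3, "errors_count": 1},
#            {"timestamp": 2000, "sessions_count": 5, "errors_count": 2}]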
def __get_constraint_values(data):
params = {}
for i, f in enumerate(data.get("filters", [])):
params[f"{f['key']}_{i}"] = f["value"]
return params
def __get_meta_constraint(project_id, data):
if len(data.get("filters", [])) == 0:
return []
constraints = []
meta_keys = metadata.get(project_id=project_id)
meta_keys = {m["key"]: m["index"] for m in meta_keys}
for i, f in enumerate(data.get("filters", [])):
if f["key"] in meta_keys.keys():
key = f"sessions.metadata_{meta_keys[f['key']]})"
if f["value"] in ["*", ""]:
constraints.append(f"{key} IS NOT NULL")
else:
constraints.append(f"{key} = %({f['key']}_{i})s")
else:
filter_type = f["key"].upper()
filter_type = [filter_type, "USER" + filter_type, filter_type[4:]]
if any(item in [schemas.FilterType.USER_BROWSER] \
for item in filter_type):
constraints.append(f"sessions.user_browser = %({f['key']}_{i})s")
elif any(item in [schemas.FilterType.USER_OS, schemas.FilterType.USER_OS_MOBILE] \
for item in filter_type):
constraints.append(f"sessions.user_os = %({f['key']}_{i})s")
elif any(item in [schemas.FilterType.USER_DEVICE, schemas.FilterType.USER_DEVICE_MOBILE] \
for item in filter_type):
constraints.append(f"sessions.user_device = %({f['key']}_{i})s")
elif any(item in [schemas.FilterType.USER_COUNTRY, schemas.FilterType.USER_COUNTRY_MOBILE] \
for item in filter_type):
constraints.append(f"sessions.user_country = %({f['key']}_{i})s")
elif any(item in [schemas.FilterType.USER_ID, schemas.FilterType.USER_ID_MOBILE] \
for item in filter_type):
constraints.append(f"sessions.user_id = %({f['key']}_{i})s")
elif any(item in [schemas.FilterType.USER_ANONYMOUS_ID, schemas.FilterType.USER_ANONYMOUS_ID_MOBILE] \
for item in filter_type):
constraints.append(f"sessions.user_anonymous_id = %({f['key']}_{i})s")
elif any(item in [schemas.FilterType.REV_ID, schemas.FilterType.REV_ID_MOBILE] \
for item in filter_type):
constraints.append(f"sessions.rev_id = %({f['key']}_{i})s")
return constraints
def get_processed_sessions(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(),
density=7, **args):
step_size = get_step_size(startTimestamp, endTimestamp, density, factor=1)
pg_sub_query = __get_constraints(project_id=project_id, data=args)
pg_sub_query_chart = __get_constraints(project_id=project_id, time_constraint=True,
chart=True, data=args)
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT generated_timestamp AS timestamp,
COALESCE(COUNT(sessions), 0) AS value
FROM generate_series(%(startTimestamp)s, %(endTimestamp)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL ( SELECT 1
FROM public.sessions
WHERE {" AND ".join(pg_sub_query_chart)}
) AS sessions ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp;"""
params = {"step_size": step_size, "project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args)}
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
results = {
"value": sum([r["value"] for r in rows]),
"chart": rows
}
diff = endTimestamp - startTimestamp
endTimestamp = startTimestamp
startTimestamp = endTimestamp - diff
pg_query = f"""SELECT COUNT(sessions.session_id) AS count
FROM public.sessions
WHERE {" AND ".join(pg_sub_query)};"""
params = {"project_id": project_id, "startTimestamp": startTimestamp, "endTimestamp": endTimestamp,
**__get_constraint_values(args)}
cur.execute(cur.mogrify(pg_query, params))
count = cur.fetchone()["count"]
results["progress"] = helper.__progress(old_val=count, new_val=results["value"])
results["unit"] = schemas.TemplatePredefinedUnits.COUNT
return results
def __get_neutral(rows, add_All_if_empty=True):
neutral = {l: 0 for l in [i for k in [list(v.keys()) for v in rows] for i in k]}
if add_All_if_empty and len(neutral.keys()) <= 1:
neutral = {"All": 0}
return neutral
def __merge_rows_with_neutral(rows, neutral):
for i in range(len(rows)):
rows[i] = {**neutral, **rows[i]}
return rows
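A short, made-up illustration of the two helpers above: the neutral dict collects every key seen in any row, so after merging each row carries the full key set with absent hosts defaulting to 0.

rows = [{"a.com": 4}, {"b.com": 2}]
neutral = __get_neutral(rows)              # {"a.com": 0, "b.com": 0}
rows = __merge_rows_with_neutral(rows, neutral)
# rows == [{"a.com": 4, "b.com": 0}, {"a.com": 0, "b.com": 2}]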
def __get_domains_errors_4xx_and_5xx(status, project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), density=6, **args):
step_size = get_step_size(startTimestamp, endTimestamp, density, factor=1)
pg_sub_query_subset = __get_constraints(project_id=project_id, time_constraint=True, chart=False, data=args)
pg_sub_query_chart = __get_constraints(project_id=project_id, time_constraint=False, chart=True,
data=args, main_table="requests", time_column="timestamp", project=False,
duration=False)
pg_sub_query_subset.append("requests.status_code/100 = %(status_code)s")
with pg_client.PostgresClient() as cur:
pg_query = f"""WITH requests AS (SELECT host, timestamp
FROM events_common.requests INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query_subset)}
)
SELECT generated_timestamp AS timestamp,
COALESCE(JSONB_AGG(requests) FILTER ( WHERE requests IS NOT NULL ), '[]'::JSONB) AS keys
FROM generate_series(%(startTimestamp)s, %(endTimestamp)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL ( SELECT requests.host, COUNT(*) AS count
FROM requests
WHERE {" AND ".join(pg_sub_query_chart)}
GROUP BY host
ORDER BY count DESC
LIMIT 5
) AS requests ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp;"""
params = {"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp,
"step_size": step_size,
"status_code": status, **__get_constraint_values(args)}
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
rows = __nested_array_to_dict_array(rows, key="host")
neutral = __get_neutral(rows)
rows = __merge_rows_with_neutral(rows, neutral)
return rows
def get_domains_errors_4xx(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), density=6, **args):
return __get_domains_errors_4xx_and_5xx(status=4, project_id=project_id, startTimestamp=startTimestamp,
endTimestamp=endTimestamp, density=density, **args)
def get_domains_errors_5xx(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), density=6, **args):
return __get_domains_errors_4xx_and_5xx(status=5, project_id=project_id, startTimestamp=startTimestamp,
endTimestamp=endTimestamp, density=density, **args)
def __nested_array_to_dict_array(rows, key="url_host", value="count"):
for r in rows:
for i in range(len(r["keys"])):
r[r["keys"][i][key]] = r["keys"][i][value]
r.pop("keys")
return rows
def get_errors_per_domains(project_id, limit, page, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), **args):
pg_sub_query = __get_constraints(project_id=project_id, data=args)
pg_sub_query.append("requests.success = FALSE")
params = {"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp,
"limit_s": (page - 1) * limit,
"limit_e": page * limit,
**__get_constraint_values(args)}
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT COALESCE(SUM(errors_count),0)::INT AS count,
COUNT(raw.domain) AS total,
jsonb_agg(raw) FILTER ( WHERE rn > %(limit_s)s
AND rn <= %(limit_e)s ) AS values
FROM (SELECT requests.host AS domain,
COUNT(requests.session_id) AS errors_count,
row_number() over (ORDER BY COUNT(requests.session_id) DESC ) AS rn
FROM events_common.requests
INNER JOIN sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
GROUP BY requests.host
ORDER BY errors_count DESC) AS raw;"""
pg_query = cur.mogrify(pg_query, params)
logger.debug("-----------")
logger.debug(pg_query)
logger.debug("-----------")
cur.execute(pg_query)
row = cur.fetchone()
if row:
row["values"] = row["values"] or []
for r in row["values"]:
r.pop("rn")
return helper.dict_to_camel_case(row)
def get_errors_per_type(project_id, startTimestamp=TimeUTC.now(delta_days=-1), endTimestamp=TimeUTC.now(),
platform=None, density=7, **args):
step_size = get_step_size(startTimestamp, endTimestamp, density, factor=1)
pg_sub_query_subset = __get_constraints(project_id=project_id, data=args)
pg_sub_query_subset.append("requests.timestamp>=%(startTimestamp)s")
pg_sub_query_subset.append("requests.timestamp<%(endTimestamp)s")
pg_sub_query_subset.append("requests.status_code > 200")
pg_sub_query_subset_e = __get_constraints(project_id=project_id, data=args, duration=False, main_table="m_errors",
time_constraint=False)
pg_sub_query_chart = __get_constraints(project_id=project_id, time_constraint=False,
chart=True, data=args, main_table="", time_column="timestamp",
project=False, duration=False)
pg_sub_query_subset_e.append("timestamp>=%(startTimestamp)s")
pg_sub_query_subset_e.append("timestamp<%(endTimestamp)s")
with pg_client.PostgresClient() as cur:
pg_query = f"""WITH requests AS (SELECT status_code AS status, timestamp
FROM events_common.requests
INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query_subset)}
),
errors_integ AS (SELECT timestamp
FROM events.errors
INNER JOIN public.errors AS m_errors USING (error_id)
WHERE {" AND ".join(pg_sub_query_subset_e)}
AND source != 'js_exception'
),
errors_js AS (SELECT timestamp
FROM events.errors
INNER JOIN public.errors AS m_errors USING (error_id)
WHERE {" AND ".join(pg_sub_query_subset_e)}
AND source = 'js_exception'
)
SELECT generated_timestamp AS timestamp,
COALESCE(SUM(CASE WHEN status / 100 = 4 THEN 1 ELSE 0 END), 0) AS _4xx,
COALESCE(SUM(CASE WHEN status / 100 = 5 THEN 1 ELSE 0 END), 0) AS _5xx,
COALESCE((SELECT COUNT(*)
FROM errors_js
WHERE {" AND ".join(pg_sub_query_chart)}
), 0) AS js,
COALESCE((SELECT COUNT(*)
FROM errors_integ
WHERE {" AND ".join(pg_sub_query_chart)}
), 0) AS integrations
FROM generate_series(%(startTimestamp)s, %(endTimestamp)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT status
FROM requests
WHERE {" AND ".join(pg_sub_query_chart)}
) AS errors_partition ON (TRUE)
GROUP BY timestamp
ORDER BY timestamp;"""
params = {"step_size": step_size,
"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args)}
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
rows = helper.list_to_camel_case(rows)
return rows
def get_impacted_sessions_by_js_errors(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), density=7, **args):
step_size = get_step_size(startTimestamp, endTimestamp, density, factor=1)
pg_sub_query = __get_constraints(project_id=project_id, data=args)
pg_sub_query_chart = __get_constraints(project_id=project_id, time_constraint=True,
chart=True, data=args)
pg_sub_query.append("m_errors.source = 'js_exception'")
pg_sub_query.append("m_errors.project_id = %(project_id)s")
pg_sub_query.append("errors.timestamp >= %(startTimestamp)s")
pg_sub_query.append("errors.timestamp < %(endTimestamp)s")
pg_sub_query_chart.append("m_errors.source = 'js_exception'")
pg_sub_query_chart.append("m_errors.project_id = %(project_id)s")
pg_sub_query_chart.append("errors.timestamp >= generated_timestamp")
pg_sub_query_chart.append("errors.timestamp < generated_timestamp+ %(step_size)s")
pg_sub_query_subset = __get_constraints(project_id=project_id, data=args, duration=False, main_table="m_errors",
time_constraint=False)
pg_sub_query_chart = __get_constraints(project_id=project_id, time_constraint=False,
chart=True, data=args, main_table="errors", time_column="timestamp",
project=False, duration=False)
pg_sub_query_subset.append("m_errors.source = 'js_exception'")
pg_sub_query_subset.append("errors.timestamp>=%(startTimestamp)s")
pg_sub_query_subset.append("errors.timestamp<%(endTimestamp)s")
with pg_client.PostgresClient() as cur:
pg_query = f"""WITH errors AS (SELECT DISTINCT ON (session_id,timestamp) session_id, timestamp
FROM events.errors
INNER JOIN public.errors AS m_errors USING (error_id)
WHERE {" AND ".join(pg_sub_query_subset)}
)
SELECT *
FROM (SELECT COUNT(DISTINCT session_id) AS sessions_count
FROM errors) AS counts
LEFT JOIN
(SELECT jsonb_agg(chart) AS chart
FROM (SELECT generated_timestamp AS timestamp,
COALESCE(COUNT(session_id), 0) AS sessions_count
FROM generate_series(%(startTimestamp)s, %(endTimestamp)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL ( SELECT DISTINCT session_id
FROM errors
WHERE {" AND ".join(pg_sub_query_chart)}
) AS sessions ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp) AS chart) AS chart ON (TRUE);"""
cur.execute(cur.mogrify(pg_query, {"step_size": step_size,
"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp,
**__get_constraint_values(args)}))
row_sessions = cur.fetchone()
pg_query = f"""WITH errors AS ( SELECT DISTINCT ON(errors.error_id,timestamp) errors.error_id,timestamp
FROM events.errors
INNER JOIN public.errors AS m_errors USING (error_id)
WHERE {" AND ".join(pg_sub_query_subset)}
)
SELECT *
FROM (SELECT COUNT(DISTINCT errors.error_id) AS errors_count
FROM errors) AS counts
LEFT JOIN
(SELECT jsonb_agg(chart) AS chart
FROM (SELECT generated_timestamp AS timestamp,
COALESCE(COUNT(error_id), 0) AS errors_count
FROM generate_series(%(startTimestamp)s, %(endTimestamp)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL ( SELECT DISTINCT errors.error_id
FROM errors
WHERE {" AND ".join(pg_sub_query_chart)}
) AS errors ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp) AS chart) AS chart ON (TRUE);"""
cur.execute(cur.mogrify(pg_query, {"step_size": step_size,
"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp,
**__get_constraint_values(args)}))
row_errors = cur.fetchone()
chart = __merge_charts(row_sessions.pop("chart"), row_errors.pop("chart"))
row_sessions = helper.dict_to_camel_case(row_sessions)
row_errors = helper.dict_to_camel_case(row_errors)
return {**row_sessions, **row_errors, "chart": chart}
def get_resources_by_party(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), density=7, **args):
step_size = get_step_size(startTimestamp, endTimestamp, density, factor=1)
pg_sub_query_subset = __get_constraints(project_id=project_id, time_constraint=True,
chart=False, data=args)
pg_sub_query_chart = __get_constraints(project_id=project_id, time_constraint=False, project=False,
chart=True, data=args, main_table="requests", time_column="timestamp",
duration=False)
pg_sub_query_subset.append("requests.timestamp >= %(startTimestamp)s")
pg_sub_query_subset.append("requests.timestamp < %(endTimestamp)s")
# pg_sub_query_subset.append("resources.type IN ('fetch', 'script')")
pg_sub_query_subset.append("requests.success = FALSE")
with pg_client.PostgresClient() as cur:
pg_query = f"""WITH requests AS (
SELECT requests.host, timestamp
FROM events_common.requests
INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query_subset)}
)
SELECT generated_timestamp AS timestamp,
SUM(CASE WHEN first.host = sub_requests.host THEN 1 ELSE 0 END) AS first_party,
SUM(CASE WHEN first.host != sub_requests.host THEN 1 ELSE 0 END) AS third_party
FROM generate_series(%(startTimestamp)s, %(endTimestamp)s, %(step_size)s) AS generated_timestamp
LEFT JOIN (
SELECT requests.host,
COUNT(requests.session_id) AS count
FROM events_common.requests
INNER JOIN public.sessions USING (session_id)
WHERE sessions.project_id = %(project_id)s
AND sessions.start_ts > (EXTRACT(EPOCH FROM now() - INTERVAL '31 days') * 1000)::BIGINT
AND sessions.start_ts < (EXTRACT(EPOCH FROM now()) * 1000)::BIGINT
AND requests.timestamp > (EXTRACT(EPOCH FROM now() - INTERVAL '31 days') * 1000)::BIGINT
AND requests.timestamp < (EXTRACT(EPOCH FROM now()) * 1000)::BIGINT
AND sessions.duration>0
GROUP BY requests.host
ORDER BY count DESC
LIMIT 1
) AS first ON (TRUE)
LEFT JOIN LATERAL (
SELECT requests.host
FROM requests
WHERE {" AND ".join(pg_sub_query_chart)}
) AS sub_requests ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp;"""
cur.execute(cur.mogrify(pg_query, {"step_size": step_size,
"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args)}))
rows = cur.fetchall()
return rows
def get_user_activity_avg_visited_pages(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), **args):
with pg_client.PostgresClient() as cur:
row = __get_user_activity_avg_visited_pages(cur, project_id, startTimestamp, endTimestamp, **args)
results = helper.dict_to_camel_case(row)
results["chart"] = __get_user_activity_avg_visited_pages_chart(cur, project_id, startTimestamp,
endTimestamp, **args)
diff = endTimestamp - startTimestamp
endTimestamp = startTimestamp
startTimestamp = endTimestamp - diff
row = __get_user_activity_avg_visited_pages(cur, project_id, startTimestamp, endTimestamp, **args)
previous = helper.dict_to_camel_case(row)
results["progress"] = helper.__progress(old_val=previous["value"], new_val=results["value"])
results["unit"] = schemas.TemplatePredefinedUnits.COUNT
return results
def __get_user_activity_avg_visited_pages(cur, project_id, startTimestamp, endTimestamp, **args):
pg_sub_query = __get_constraints(project_id=project_id, data=args)
pg_sub_query.append("sessions.pages_count>0")
pg_query = f"""SELECT COALESCE(CEIL(AVG(sessions.pages_count)),0) AS value
FROM public.sessions
WHERE {" AND ".join(pg_sub_query)};"""
params = {"project_id": project_id, "startTimestamp": startTimestamp, "endTimestamp": endTimestamp,
**__get_constraint_values(args)}
cur.execute(cur.mogrify(pg_query, params))
row = cur.fetchone()
return row
def __get_user_activity_avg_visited_pages_chart(cur, project_id, startTimestamp, endTimestamp, density=20, **args):
step_size = get_step_size(endTimestamp=endTimestamp, startTimestamp=startTimestamp, density=density, factor=1)
params = {"step_size": step_size, "project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp}
pg_sub_query_subset = __get_constraints(project_id=project_id, time_constraint=True,
chart=False, data=args)
pg_sub_query_chart = __get_constraints(project_id=project_id, time_constraint=False, project=False,
chart=True, data=args, main_table="sessions", time_column="start_ts",
duration=False)
pg_sub_query_subset.append("sessions.duration IS NOT NULL")
pg_query = f"""WITH sessions AS(SELECT sessions.pages_count, sessions.start_ts
FROM public.sessions
WHERE {" AND ".join(pg_sub_query_subset)}
)
SELECT generated_timestamp AS timestamp,
COALESCE(AVG(sessions.pages_count),0) AS value
FROM generate_series(%(startTimestamp)s, %(endTimestamp)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL (
SELECT sessions.pages_count
FROM sessions
WHERE {" AND ".join(pg_sub_query_chart)}
) AS sessions ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp;"""
cur.execute(cur.mogrify(pg_query, {**params, **__get_constraint_values(args)}))
rows = cur.fetchall()
return rows
def get_top_metrics_count_requests(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), value=None, density=20, **args):
step_size = get_step_size(endTimestamp=endTimestamp, startTimestamp=startTimestamp, density=density, factor=1)
params = {"step_size": step_size, "project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp}
pg_sub_query = __get_constraints(project_id=project_id, data=args)
pg_sub_query_chart = __get_constraints(project_id=project_id, time_constraint=False, project=False,
chart=True, data=args, main_table="pages", time_column="timestamp",
duration=False)
if value is not None:
pg_sub_query.append("pages.path = %(value)s")
pg_sub_query_chart.append("pages.path = %(value)s")
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT COUNT(pages.session_id) AS value
FROM events.pages INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)};"""
cur.execute(cur.mogrify(pg_query, {"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp,
"value": value, **__get_constraint_values(args)}))
row = cur.fetchone()
pg_query = f"""WITH pages AS(SELECT pages.timestamp
FROM events.pages INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
)
SELECT generated_timestamp AS timestamp,
COUNT(pages.*) AS value
FROM generate_series(%(startTimestamp)s, %(endTimestamp)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL (
SELECT 1
FROM pages
WHERE {" AND ".join(pg_sub_query_chart)}
) AS pages ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp;"""
cur.execute(cur.mogrify(pg_query, {**params, **__get_constraint_values(args)}))
rows = cur.fetchall()
row["chart"] = rows
row["unit"] = schemas.TemplatePredefinedUnits.COUNT
return helper.dict_to_camel_case(row)
def get_unique_users(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(),
density=7, **args):
step_size = get_step_size(startTimestamp, endTimestamp, density, factor=1)
pg_sub_query = __get_constraints(project_id=project_id, data=args)
pg_sub_query_chart = __get_constraints(project_id=project_id, time_constraint=True,
chart=True, data=args)
pg_sub_query.append("user_id IS NOT NULL")
pg_sub_query.append("user_id != ''")
pg_sub_query_chart.append("user_id IS NOT NULL")
pg_sub_query_chart.append("user_id != ''")
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT generated_timestamp AS timestamp,
COALESCE(COUNT(sessions), 0) AS value
FROM generate_series(%(startTimestamp)s, %(endTimestamp)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL ( SELECT DISTINCT user_id
FROM public.sessions
WHERE {" AND ".join(pg_sub_query_chart)}
) AS sessions ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp;"""
params = {"step_size": step_size, "project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args)}
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
results = {
"value": sum([r["value"] for r in rows]),
"chart": rows
}
diff = endTimestamp - startTimestamp
endTimestamp = startTimestamp
startTimestamp = endTimestamp - diff
pg_query = f"""SELECT COUNT(DISTINCT sessions.user_id) AS count
FROM public.sessions
WHERE {" AND ".join(pg_sub_query)};"""
params = {"project_id": project_id, "startTimestamp": startTimestamp, "endTimestamp": endTimestamp,
**__get_constraint_values(args)}
cur.execute(cur.mogrify(pg_query, params))
count = cur.fetchone()["count"]
results["progress"] = helper.__progress(old_val=count, new_val=results["value"])
results["unit"] = schemas.TemplatePredefinedUnits.COUNT
return results
def get_speed_index_location(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), **args):
pg_sub_query = __get_constraints(project_id=project_id, data=args)
pg_sub_query.append("pages.speed_index IS NOT NULL")
pg_sub_query.append("pages.speed_index>0")
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT sessions.user_country, AVG(pages.speed_index) AS value
FROM events.pages INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
GROUP BY sessions.user_country
ORDER BY value, sessions.user_country;"""
params = {"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args)}
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
if len(rows) > 0:
pg_query = f"""SELECT AVG(pages.speed_index) AS avg
FROM events.pages INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)};"""
cur.execute(cur.mogrify(pg_query, params))
avg = cur.fetchone()["avg"]
else:
avg = 0
return {"value": avg, "chart": helper.list_to_camel_case(rows), "unit": schemas.TemplatePredefinedUnits.MILLISECOND}


@ -0,0 +1,613 @@
import logging
from math import isnan
import schemas
from chalicelib.utils import ch_client
from chalicelib.utils import exp_ch_helper
from chalicelib.utils import helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.metrics_helper import get_step_size
logger = logging.getLogger(__name__)
def __get_basic_constraints(table_name=None, time_constraint=True, round_start=False, data={}, identifier="project_id"):
if table_name:
table_name += "."
else:
table_name = ""
ch_sub_query = [f"{table_name}{identifier} =toUInt16(%({identifier})s)"]
if time_constraint:
if round_start:
ch_sub_query.append(
f"toStartOfInterval({table_name}datetime, INTERVAL %(step_size)s second) >= toDateTime(%(startTimestamp)s/1000)")
else:
ch_sub_query.append(f"{table_name}datetime >= toDateTime(%(startTimestamp)s/1000)")
ch_sub_query.append(f"{table_name}datetime < toDateTime(%(endTimestamp)s/1000)")
return ch_sub_query + __get_generic_constraint(data=data, table_name=table_name)
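For comparison with the PostgreSQL helper earlier in this change, a sketch (not from the source) of what the ClickHouse variant produces:

# __get_basic_constraints(table_name="sessions", data={}) returns roughly:
#   sessions.project_id =toUInt16(%(project_id)s)
#   sessions.datetime >= toDateTime(%(startTimestamp)s/1000)
#   sessions.datetime < toDateTime(%(endTimestamp)s/1000)
# plus any generic session filters resolved by __get_generic_constraint;
# with round_start=True the lower bound is aligned to the %(step_size)s
# interval so chart buckets start on step boundaries.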
def __frange(start, stop, step):
result = []
i = start
while i < stop:
result.append(i)
i += step
return result
def __add_missing_keys(original, complete):
for missing in [key for key in complete.keys() if key not in original.keys()]:
original[missing] = complete[missing]
return original
def __complete_missing_steps(start_time, end_time, density, neutral, rows, time_key="timestamp", time_coefficient=1000):
if len(rows) == density:
return rows
step = get_step_size(start_time, end_time, density, decimal=True)
optimal = [(int(i * time_coefficient), int((i + step) * time_coefficient)) for i in
__frange(start_time // time_coefficient, end_time // time_coefficient, step)]
result = []
r = 0
o = 0
for i in range(density):
neutral_clone = dict(neutral)
for k in neutral_clone.keys():
if callable(neutral_clone[k]):
neutral_clone[k] = neutral_clone[k]()
if r < len(rows) and len(result) + len(rows) - r == density:
result += rows[r:]
break
if r < len(rows) and o < len(optimal) and rows[r][time_key] < optimal[o][0]:
# complete missing keys in original object
rows[r] = __add_missing_keys(original=rows[r], complete=neutral_clone)
result.append(rows[r])
r += 1
elif r < len(rows) and o < len(optimal) and optimal[o][0] <= rows[r][time_key] < optimal[o][1]:
# complete missing keys in original object
rows[r] = __add_missing_keys(original=rows[r], complete=neutral_clone)
result.append(rows[r])
r += 1
o += 1
else:
neutral_clone[time_key] = optimal[o][0]
result.append(neutral_clone)
o += 1
# elif r < len(rows) and rows[r][time_key] >= optimal[o][1]:
# neutral_clone[time_key] = optimal[o][0]
# result.append(neutral_clone)
# o += 1
# else:
# neutral_clone[time_key] = optimal[o][0]
# result.append(neutral_clone)
# o += 1
return result
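A worked illustration of the gap-filling helper above, with invented numbers; it assumes get_step_size splits 0..4000 ms into four even one-second buckets (the exact rounding lives in chalicelib.utils.metrics_helper).

rows = [{"timestamp": 0, "value": 5},
        {"timestamp": 2000, "value": 7}]
filled = __complete_missing_steps(start_time=0, end_time=4000, density=4,
                                  neutral={"value": 0}, rows=rows)
# under that assumption the two missing buckets are filled with neutral copies:
# [{"timestamp": 0, "value": 5}, {"timestamp": 1000, "value": 0},
#  {"timestamp": 2000, "value": 7}, {"timestamp": 3000, "value": 0}]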
def __get_constraint(data, fields, table_name):
constraints = []
# for k in fields.keys():
for i, f in enumerate(data.get("filters", [])):
if f["key"] in fields.keys():
if f["value"] in ["*", ""]:
constraints.append(f"isNotNull({table_name}{fields[f['key']]})")
else:
constraints.append(f"{table_name}{fields[f['key']]} = %({f['key']}_{i})s")
# TODO: remove this in next release
offset = len(data.get("filters", []))
for i, f in enumerate(data.keys()):
if f in fields.keys():
if data[f] in ["*", ""]:
constraints.append(f"isNotNull({table_name}{fields[f]})")
else:
constraints.append(f"{table_name}{fields[f]} = %({f}_{i + offset})s")
return constraints
def __get_constraint_values(data):
params = {}
for i, f in enumerate(data.get("filters", [])):
params[f"{f['key']}_{i}"] = f["value"]
# TODO: remove this in next release
offset = len(data.get("filters", []))
for i, f in enumerate(data.keys()):
params[f"{f}_{i + offset}"] = data[f]
return params
METADATA_FIELDS = {"userId": "user_id",
"userAnonymousId": "user_anonymous_id",
"metadata1": "metadata_1",
"metadata2": "metadata_2",
"metadata3": "metadata_3",
"metadata4": "metadata_4",
"metadata5": "metadata_5",
"metadata6": "metadata_6",
"metadata7": "metadata_7",
"metadata8": "metadata_8",
"metadata9": "metadata_9",
"metadata10": "metadata_10"}
def __get_meta_constraint(data):
return __get_constraint(data=data, fields=METADATA_FIELDS, table_name="sessions_metadata.")
SESSIONS_META_FIELDS = {"revId": "rev_id",
"country": "user_country",
"os": "user_os",
"platform": "user_device_type",
"device": "user_device",
"browser": "user_browser"}
def __get_generic_constraint(data, table_name):
return __get_constraint(data=data, fields=SESSIONS_META_FIELDS, table_name=table_name)
def get_processed_sessions(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(),
density=7, **args):
step_size = get_step_size(startTimestamp, endTimestamp, density)
ch_sub_query = __get_basic_constraints(table_name="sessions", data=args)
ch_sub_query_chart = __get_basic_constraints(table_name="sessions", round_start=True, data=args)
meta_condition = __get_meta_constraint(args)
ch_sub_query += meta_condition
ch_sub_query_chart += meta_condition
with ch_client.ClickHouseClient() as ch:
ch_query = f"""\
SELECT toUnixTimestamp(toStartOfInterval(sessions.datetime, INTERVAL %(step_size)s second)) * 1000 AS timestamp,
COUNT(DISTINCT sessions.session_id) AS value
FROM {exp_ch_helper.get_main_sessions_table(startTimestamp)} AS sessions
WHERE {" AND ".join(ch_sub_query_chart)}
GROUP BY timestamp
ORDER BY timestamp;\
"""
params = {"step_size": step_size, "project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args)}
rows = ch.execute(query=ch_query, parameters=params)
results = {
"value": sum([r["value"] for r in rows]),
"chart": __complete_missing_steps(rows=rows, start_time=startTimestamp, end_time=endTimestamp,
density=density,
neutral={"value": 0})
}
diff = endTimestamp - startTimestamp
endTimestamp = startTimestamp
startTimestamp = endTimestamp - diff
ch_query = f""" SELECT COUNT(1) AS count
FROM {exp_ch_helper.get_main_sessions_table(startTimestamp)} AS sessions
WHERE {" AND ".join(ch_sub_query)};"""
params = {"project_id": project_id, "startTimestamp": startTimestamp, "endTimestamp": endTimestamp,
**__get_constraint_values(args)}
count = ch.execute(query=ch_query, parameters=params)
count = count[0]["count"]
results["progress"] = helper.__progress(old_val=count, new_val=results["value"])
results["unit"] = schemas.TemplatePredefinedUnits.COUNT
return results
def __get_domains_errors_neutral(rows):
neutral = {l: 0 for l in [i for k in [list(v.keys()) for v in rows] for i in k]}
if len(neutral.keys()) == 0:
neutral = {"All": 0}
return neutral
def __merge_rows_with_neutral(rows, neutral):
for i in range(len(rows)):
rows[i] = {**neutral, **rows[i]}
return rows
def __get_domains_errors_4xx_and_5xx(status, project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), density=6, **args):
step_size = get_step_size(startTimestamp, endTimestamp, density)
ch_sub_query = __get_basic_constraints(table_name="requests", round_start=True, data=args)
ch_sub_query.append("requests.event_type='REQUEST'")
ch_sub_query.append("intDiv(requests.status, 100) == %(status_code)s")
meta_condition = __get_meta_constraint(args)
ch_sub_query += meta_condition
with ch_client.ClickHouseClient() as ch:
ch_query = f"""SELECT timestamp,
groupArray([domain, toString(count)]) AS keys
FROM (SELECT toUnixTimestamp(toStartOfInterval(requests.datetime, INTERVAL %(step_size)s second)) * 1000 AS timestamp,
requests.url_host AS domain, COUNT(1) AS count
FROM {exp_ch_helper.get_main_events_table(startTimestamp)} AS requests
WHERE {" AND ".join(ch_sub_query)}
GROUP BY timestamp,requests.url_host
ORDER BY timestamp, count DESC
LIMIT 5 BY timestamp) AS domain_stats
GROUP BY timestamp;"""
params = {"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp,
"step_size": step_size,
"status_code": status, **__get_constraint_values(args)}
rows = ch.execute(query=ch_query, parameters=params)
rows = __nested_array_to_dict_array(rows)
neutral = __get_domains_errors_neutral(rows)
rows = __merge_rows_with_neutral(rows, neutral)
return __complete_missing_steps(rows=rows, start_time=startTimestamp,
end_time=endTimestamp,
density=density, neutral=neutral)
def get_domains_errors_4xx(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), density=6, **args):
return __get_domains_errors_4xx_and_5xx(status=4, project_id=project_id, startTimestamp=startTimestamp,
endTimestamp=endTimestamp, density=density, **args)
def get_domains_errors_5xx(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), density=6, **args):
return __get_domains_errors_4xx_and_5xx(status=5, project_id=project_id, startTimestamp=startTimestamp,
endTimestamp=endTimestamp, density=density, **args)
def __nested_array_to_dict_array(rows):
for r in rows:
for i in range(len(r["keys"])):
r[r["keys"][i][0]] = int(r["keys"][i][1])
r.pop("keys")
return rows
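A small illustration (values invented) of the reshaping helper above: ClickHouse returns per-domain counts as an array of [domain, count-as-string] pairs under keys, which the helper flattens into top-level columns.

row = {"timestamp": 1000, "keys": [["a.com", "4"], ["b.com", "2"]]}
rows = __nested_array_to_dict_array([row])
# rows == [{"timestamp": 1000, "a.com": 4, "b.com": 2}]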
def get_errors_per_domains(project_id, limit, page, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), **args):
ch_sub_query = __get_basic_constraints(table_name="requests", data=args)
ch_sub_query.append("requests.event_type = 'REQUEST'")
ch_sub_query.append("requests.success = 0")
meta_condition = __get_meta_constraint(args)
ch_sub_query += meta_condition
params = {"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp,
**__get_constraint_values(args),
"limit_s": (page - 1) * limit,
"limit": limit}
with ch_client.ClickHouseClient() as ch:
ch_query = f"""SELECT
requests.url_host AS domain,
COUNT(1) AS errors_count,
COUNT(1) OVER () AS total,
SUM(errors_count) OVER () AS count
FROM {exp_ch_helper.get_main_events_table(startTimestamp)} AS requests
WHERE {" AND ".join(ch_sub_query)}
GROUP BY requests.url_host
ORDER BY errors_count DESC
LIMIT %(limit)s OFFSET %(limit_s)s;"""
logger.debug("-----------")
logger.debug(ch.format(query=ch_query, parameters=params))
logger.debug("-----------")
rows = ch.execute(query=ch_query, parameters=params)
response = {"count": 0, "total": 0, "values": []}
if len(rows) > 0:
response["count"] = rows[0]["count"]
response["total"] = rows[0]["total"]
rows = helper.list_to_camel_case(rows)
for r in rows:
r.pop("count")
r.pop("total")
return response
def get_errors_per_type(project_id, startTimestamp=TimeUTC.now(delta_days=-1), endTimestamp=TimeUTC.now(),
platform=None, density=7, **args):
step_size = get_step_size(startTimestamp, endTimestamp, density)
ch_sub_query_chart = __get_basic_constraints(table_name="events", round_start=True,
data=args)
ch_sub_query_chart.append("(events.event_type = 'REQUEST' OR events.event_type = 'ERROR')")
ch_sub_query_chart.append("(events.status>200 OR events.event_type = 'ERROR')")
meta_condition = __get_meta_constraint(args)
ch_sub_query_chart += meta_condition
with ch_client.ClickHouseClient() as ch:
ch_query = f"""SELECT toUnixTimestamp(toStartOfInterval(datetime, INTERVAL %(step_size)s second)) * 1000 AS timestamp,
SUM(events.event_type = 'REQUEST' AND intDiv(events.status, 100) == 4) AS _4xx,
SUM(events.event_type = 'REQUEST' AND intDiv(events.status, 100) == 5) AS _5xx,
SUM(events.event_type = 'ERROR' AND events.source == 'js_exception') AS js,
SUM(events.event_type = 'ERROR' AND events.source != 'js_exception') AS integrations
FROM {exp_ch_helper.get_main_events_table(startTimestamp)} AS events
WHERE {" AND ".join(ch_sub_query_chart)}
GROUP BY timestamp
ORDER BY timestamp;"""
params = {"step_size": step_size,
"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args)}
rows = ch.execute(query=ch_query, parameters=params)
rows = helper.list_to_camel_case(rows)
return __complete_missing_steps(rows=rows, start_time=startTimestamp,
end_time=endTimestamp,
density=density,
neutral={"4xx": 0, "5xx": 0, "js": 0, "integrations": 0})
def get_impacted_sessions_by_js_errors(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), density=7, **args):
step_size = get_step_size(startTimestamp, endTimestamp, density)
ch_sub_query_chart = __get_basic_constraints(table_name="errors", round_start=True, data=args)
ch_sub_query_chart.append("errors.event_type='ERROR'")
ch_sub_query_chart.append("errors.source == 'js_exception'")
meta_condition = __get_meta_constraint(args)
ch_sub_query_chart += meta_condition
with ch_client.ClickHouseClient() as ch:
ch_query = f"""SELECT toUnixTimestamp(toStartOfInterval(errors.datetime, INTERVAL %(step_size)s second)) * 1000 AS timestamp,
COUNT(DISTINCT errors.session_id) AS sessions_count,
COUNT(DISTINCT errors.error_id) AS errors_count
FROM {exp_ch_helper.get_main_events_table(startTimestamp)} AS errors
WHERE {" AND ".join(ch_sub_query_chart)}
GROUP BY timestamp
ORDER BY timestamp;"""
rows = ch.execute(query=ch_query,
params={"step_size": step_size,
"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args)})
ch_query = f"""SELECT COUNT(DISTINCT errors.session_id) AS sessions_count,
COUNT(DISTINCT errors.error_id) AS errors_count
FROM {exp_ch_helper.get_main_events_table(startTimestamp)} AS errors
WHERE {" AND ".join(ch_sub_query_chart)};"""
counts = ch.execute(query=ch_query,
params={"step_size": step_size,
"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args)})
return {"sessionsCount": counts[0]["sessions_count"],
"errorsCount": counts[0]["errors_count"],
"chart": helper.list_to_camel_case(__complete_missing_steps(rows=rows, start_time=startTimestamp,
end_time=endTimestamp,
density=density,
neutral={"sessions_count": 0,
"errors_count": 0}))}
def get_resources_by_party(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), density=7, **args):
step_size = get_step_size(startTimestamp, endTimestamp, density)
ch_sub_query = __get_basic_constraints(table_name="requests", round_start=True, data=args)
ch_sub_query.append("requests.event_type='REQUEST'")
ch_sub_query.append("requests.success = 0")
sch_sub_query = ["rs.project_id =toUInt16(%(project_id)s)", "rs.event_type='REQUEST'"]
meta_condition = __get_meta_constraint(args)
ch_sub_query += meta_condition
# sch_sub_query += meta_condition
with ch_client.ClickHouseClient() as ch:
ch_query = f"""SELECT toUnixTimestamp(toStartOfInterval(sub_requests.datetime, INTERVAL %(step_size)s second)) * 1000 AS timestamp,
SUM(first.url_host = sub_requests.url_host) AS first_party,
SUM(first.url_host != sub_requests.url_host) AS third_party
FROM
(
SELECT requests.datetime, requests.url_host
FROM {exp_ch_helper.get_main_events_table(startTimestamp)} AS requests
WHERE {" AND ".join(ch_sub_query)}
) AS sub_requests
CROSS JOIN
(
SELECT
rs.url_host,
COUNT(1) AS count
FROM {exp_ch_helper.get_main_events_table(startTimestamp)} AS rs
WHERE {" AND ".join(sch_sub_query)}
GROUP BY rs.url_host
ORDER BY count DESC
LIMIT 1
) AS first
GROUP BY timestamp
ORDER BY timestamp;"""
params = {"step_size": step_size,
"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args)}
rows = ch.execute(query=ch_query, parameters=params)
return helper.list_to_camel_case(__complete_missing_steps(rows=rows, start_time=startTimestamp,
end_time=endTimestamp,
density=density,
neutral={"first_party": 0,
"third_party": 0}))
def get_user_activity_avg_visited_pages(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), **args):
results = {}
with ch_client.ClickHouseClient() as ch:
rows = __get_user_activity_avg_visited_pages(ch, project_id, startTimestamp, endTimestamp, **args)
if len(rows) > 0:
results = helper.dict_to_camel_case(rows[0])
for key in results:
if isnan(results[key]):
results[key] = 0
results["chart"] = __get_user_activity_avg_visited_pages_chart(ch, project_id, startTimestamp,
endTimestamp, **args)
diff = endTimestamp - startTimestamp
endTimestamp = startTimestamp
startTimestamp = endTimestamp - diff
rows = __get_user_activity_avg_visited_pages(ch, project_id, startTimestamp, endTimestamp, **args)
if len(rows) > 0:
previous = helper.dict_to_camel_case(rows[0])
results["progress"] = helper.__progress(old_val=previous["value"], new_val=results["value"])
results["unit"] = schemas.TemplatePredefinedUnits.COUNT
return results
def __get_user_activity_avg_visited_pages(ch, project_id, startTimestamp, endTimestamp, **args):
ch_sub_query = __get_basic_constraints(table_name="pages", data=args)
ch_sub_query.append("pages.event_type='LOCATION'")
meta_condition = __get_meta_constraint(args)
ch_sub_query += meta_condition
ch_query = f"""SELECT COALESCE(CEIL(avgOrNull(count)),0) AS value
FROM (SELECT COUNT(1) AS count
FROM {exp_ch_helper.get_main_events_table(startTimestamp)} AS pages
WHERE {" AND ".join(ch_sub_query)}
GROUP BY session_id) AS groupped_data
WHERE count>0;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp, "endTimestamp": endTimestamp,
**__get_constraint_values(args)}
rows = ch.execute(query=ch_query, parameters=params)
return rows
def __get_user_activity_avg_visited_pages_chart(ch, project_id, startTimestamp, endTimestamp, density=20, **args):
step_size = get_step_size(endTimestamp=endTimestamp, startTimestamp=startTimestamp, density=density)
ch_sub_query_chart = __get_basic_constraints(table_name="pages", round_start=True, data=args)
ch_sub_query_chart.append("pages.event_type='LOCATION'")
meta_condition = __get_meta_constraint(args)
ch_sub_query_chart += meta_condition
params = {"step_size": step_size, "project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args)}
ch_query = f"""SELECT timestamp, COALESCE(avgOrNull(count), 0) AS value
FROM (SELECT toUnixTimestamp(toStartOfInterval(pages.datetime, INTERVAL %(step_size)s second ))*1000 AS timestamp,
session_id, COUNT(1) AS count
FROM {exp_ch_helper.get_main_events_table(startTimestamp)} AS pages
WHERE {" AND ".join(ch_sub_query_chart)}
GROUP BY timestamp,session_id
ORDER BY timestamp) AS groupped_data
WHERE count>0
GROUP BY timestamp
ORDER BY timestamp;"""
rows = ch.execute(query=ch_query, parameters=params)
rows = __complete_missing_steps(rows=rows, start_time=startTimestamp,
end_time=endTimestamp,
density=density, neutral={"value": 0})
return rows
def get_top_metrics_count_requests(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), value=None, density=20, **args):
step_size = get_step_size(endTimestamp=endTimestamp, startTimestamp=startTimestamp, density=density)
ch_sub_query_chart = __get_basic_constraints(table_name="pages", round_start=True, data=args)
ch_sub_query_chart.append("pages.event_type='LOCATION'")
meta_condition = __get_meta_constraint(args)
ch_sub_query_chart += meta_condition
ch_sub_query = __get_basic_constraints(table_name="pages", data=args)
ch_sub_query.append("pages.event_type='LOCATION'")
ch_sub_query += meta_condition
if value is not None:
ch_sub_query.append("pages.url_path = %(value)s")
ch_sub_query_chart.append("pages.url_path = %(value)s")
with ch_client.ClickHouseClient() as ch:
ch_query = f"""SELECT COUNT(1) AS value
FROM {exp_ch_helper.get_main_events_table(startTimestamp)} AS pages
WHERE {" AND ".join(ch_sub_query)};"""
params = {"step_size": step_size, "project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp,
"value": value, **__get_constraint_values(args)}
rows = ch.execute(query=ch_query, parameters=params)
result = rows[0]
ch_query = f"""SELECT toUnixTimestamp(toStartOfInterval(pages.datetime, INTERVAL %(step_size)s second ))*1000 AS timestamp,
COUNT(1) AS value
FROM {exp_ch_helper.get_main_events_table(startTimestamp)} AS pages
WHERE {" AND ".join(ch_sub_query_chart)}
GROUP BY timestamp
ORDER BY timestamp;"""
params = {**params, **__get_constraint_values(args)}
rows = ch.execute(query=ch_query, parameters=params)
rows = __complete_missing_steps(rows=rows, start_time=startTimestamp,
end_time=endTimestamp,
density=density, neutral={"value": 0})
result["chart"] = rows
result["unit"] = schemas.TemplatePredefinedUnits.COUNT
return helper.dict_to_camel_case(result)
def get_unique_users(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(),
density=7, **args):
step_size = get_step_size(startTimestamp, endTimestamp, density)
ch_sub_query = __get_basic_constraints(table_name="sessions", data=args)
ch_sub_query_chart = __get_basic_constraints(table_name="sessions", round_start=True, data=args)
meta_condition = __get_meta_constraint(args)
ch_sub_query += meta_condition
ch_sub_query_chart += meta_condition
ch_sub_query_chart.append("isNotNull(sessions.user_id)")
ch_sub_query_chart.append("sessions.user_id!=''")
with ch_client.ClickHouseClient() as ch:
ch_query = f"""\
SELECT toUnixTimestamp(toStartOfInterval(sessions.datetime, INTERVAL %(step_size)s second)) * 1000 AS timestamp,
COUNT(DISTINCT sessions.user_id) AS value
FROM {exp_ch_helper.get_main_sessions_table(startTimestamp)} AS sessions
WHERE {" AND ".join(ch_sub_query_chart)}
GROUP BY timestamp
ORDER BY timestamp;\
"""
params = {"step_size": step_size, "project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args)}
rows = ch.execute(query=ch_query, parameters=params)
results = {
"value": sum([r["value"] for r in rows]),
"chart": __complete_missing_steps(rows=rows, start_time=startTimestamp, end_time=endTimestamp,
density=density,
neutral={"value": 0})
}
diff = endTimestamp - startTimestamp
endTimestamp = startTimestamp
startTimestamp = endTimestamp - diff
ch_query = f""" SELECT COUNT(DISTINCT user_id) AS count
FROM {exp_ch_helper.get_main_sessions_table(startTimestamp)} AS sessions
WHERE {" AND ".join(ch_sub_query)};"""
params = {"project_id": project_id, "startTimestamp": startTimestamp, "endTimestamp": endTimestamp,
**__get_constraint_values(args)}
count = ch.execute(query=ch_query, parameters=params)
count = count[0]["count"]
results["progress"] = helper.__progress(old_val=count, new_val=results["value"])
results["unit"] = schemas.TemplatePredefinedUnits.COUNT
return results
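# The progress value above compares the current window with the window of the same
# length immediately before it; a minimal, self-contained sketch of that arithmetic
# (the helper name is hypothetical, not part of the codebase):
def previous_period(start_ms: int, end_ms: int) -> tuple:
    span = end_ms - start_ms
    # The comparison window ends where the current one starts and has the same length.
    return start_ms - span, start_ms

assert previous_period(1_000_000, 1_600_000) == (400_000, 1_000_000)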

def get_speed_index_location(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), **args):
ch_sub_query = __get_basic_constraints(table_name="pages", data=args)
ch_sub_query.append("pages.event_type='LOCATION'")
ch_sub_query.append("isNotNull(pages.speed_index)")
ch_sub_query.append("pages.speed_index>0")
meta_condition = __get_meta_constraint(args)
ch_sub_query += meta_condition
with ch_client.ClickHouseClient() as ch:
ch_query = f"""SELECT sessions.user_country, COALESCE(avgOrNull(pages.speed_index),0) AS value
FROM {exp_ch_helper.get_main_events_table(startTimestamp)} AS pages
INNER JOIN {exp_ch_helper.get_main_sessions_table(startTimestamp)} AS sessions USING (session_id)
WHERE {" AND ".join(ch_sub_query)}
GROUP BY sessions.user_country
ORDER BY value, sessions.user_country;"""
params = {"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args)}
rows = ch.execute(query=ch_query, parameters=params)
ch_query = f"""SELECT COALESCE(avgOrNull(pages.speed_index),0) AS avg
FROM {exp_ch_helper.get_main_events_table(startTimestamp)} AS pages
WHERE {" AND ".join(ch_sub_query)};"""
avg = ch.execute(query=ch_query, parameters=params)[0]["avg"] if len(rows) > 0 else 0
return {"value": avg, "chart": helper.list_to_camel_case(rows), "unit": schemas.TemplatePredefinedUnits.MILLISECOND}


@@ -5,8 +5,8 @@ from decouple import config
logger = logging.getLogger(__name__)
if config("EXP_METRICS", cast=bool, default=False):
import chalicelib.core.sessions.sessions_ch as sessions
from chalicelib.core.sessions import sessions_ch as sessions
else:
import chalicelib.core.sessions.sessions_pg as sessions
from chalicelib.core.sessions import sessions
from chalicelib.core.sessions import sessions_mobs


@@ -19,9 +19,9 @@ def get_simple_funnel(filter_d: schemas.CardSeriesFilterSchema, project: schemas
filters: List[schemas.SessionSearchFilterSchema] = filter_d.filters
platform = project.platform
constraints = ["e.project_id = %(project_id)s",
"e.created_at >= toDateTime(%(startTimestamp)s/1000)",
"e.created_at <= toDateTime(%(endTimestamp)s/1000)",
"e.`$event_name` IN %(eventTypes)s"]
"e.datetime >= toDateTime(%(startTimestamp)s/1000)",
"e.datetime <= toDateTime(%(endTimestamp)s/1000)",
"e.event_type IN %(eventTypes)s"]
full_args = {"project_id": project.project_id, "startTimestamp": filter_d.startTimestamp,
"endTimestamp": filter_d.endTimestamp}
@@ -157,25 +157,18 @@ def get_simple_funnel(filter_d: schemas.CardSeriesFilterSchema, project: schemas
if next_event_type not in event_types:
event_types.append(next_event_type)
full_args[f"event_type_{i}"] = next_event_type
n_stages_query.append(f"`$event_name`=%(event_type_{i})s")
n_stages_query.append(f"event_type=%(event_type_{i})s")
if is_not:
n_stages_query_not.append(n_stages_query[-1] + " AND " +
(sh.multi_conditions(
f"JSON_VALUE(CAST(`$properties` AS String), '$.{next_col_name}') {op} %({e_k})s",
s.value,
is_not=is_not,
value_key=e_k
) if not specific_condition else specific_condition))
(sh.multi_conditions(f' {next_col_name} {op} %({e_k})s', s.value,
is_not=is_not, value_key=e_k)
if not specific_condition else specific_condition))
elif not is_any:
n_stages_query[-1] += " AND " + (
sh.multi_conditions(
f"JSON_VALUE(CAST(`$properties` AS String), '$.{next_col_name}') {op} %({e_k})s",
s.value,
is_not=is_not,
value_key=e_k
) if not specific_condition else specific_condition)
n_stages_query[-1] += " AND " + (sh.multi_conditions(f' {next_col_name} {op} %({e_k})s', s.value,
is_not=is_not, value_key=e_k)
if not specific_condition else specific_condition)
full_args = {"eventTypes": event_types, **full_args, **values}
full_args = {"eventTypes": tuple(event_types), **full_args, **values}
n_stages = len(n_stages_query)
if n_stages == 0:
return []
@@ -195,8 +188,8 @@ def get_simple_funnel(filter_d: schemas.CardSeriesFilterSchema, project: schemas
if len(n_stages_query_not) > 0:
value_conditions_not_base = ["project_id = %(project_id)s",
"created_at >= toDateTime(%(startTimestamp)s/1000)",
"created_at <= toDateTime(%(endTimestamp)s/1000)"]
"datetime >= toDateTime(%(startTimestamp)s/1000)",
"datetime <= toDateTime(%(endTimestamp)s/1000)"]
_value_conditions_not = []
value_conditions_not = []
for c in n_stages_query_not:
@@ -228,7 +221,7 @@ def get_simple_funnel(filter_d: schemas.CardSeriesFilterSchema, project: schemas
pattern += f"(?{j + 1})"
conditions.append(n_stages_query[j])
j += 1
sequences.append(f"sequenceMatch('{pattern}')(toDateTime(e.created_at), {','.join(conditions)}) AS T{i + 1}")
sequences.append(f"sequenceMatch('{pattern}')(e.datetime, {','.join(conditions)}) AS T{i + 1}")
n_stages_query = f"""
SELECT {",".join(projections)}
@@ -243,7 +236,7 @@ def get_simple_funnel(filter_d: schemas.CardSeriesFilterSchema, project: schemas
logger.debug(query)
logger.debug("---------------------------------------------------")
try:
row = cur.execute(query=query)
row = cur.execute(query)
except Exception as err:
logger.warning("--------- SIMPLE FUNNEL SEARCH QUERY EXCEPTION CH-----------")
logger.warning(query)
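# For orientation, a hedged sketch of how the sequenceMatch projections in this funnel
# diff are assembled for a hypothetical 3-stage funnel (stage conditions are placeholders):
stage_conditions = ["event_type = 'LOCATION'", "event_type = 'CLICK'", "event_type = 'CUSTOM'"]
sequences = []
for i in range(len(stage_conditions)):
    pattern = "".join(f"(?{j + 1})" for j in range(i + 1))
    sequences.append(f"sequenceMatch('{pattern}')(e.datetime, {','.join(stage_conditions[:i + 1])}) AS T{i + 1}")
print(sequences[-1])
# sequenceMatch('(?1)(?2)(?3)')(e.datetime, event_type = 'LOCATION',event_type = 'CLICK',event_type = 'CUSTOM') AS T3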


@@ -12,75 +12,42 @@ logger = logging.getLogger(__name__)
def __transform_journey(rows, reverse_path=False):
total_100p = 0
number_of_step1 = 0
for r in rows:
if r["event_number_in_session"] > 1:
break
number_of_step1 += 1
total_100p += r["sessions_count"]
# for i in range(number_of_step1):
# rows[i]["value"] = 100 / number_of_step1
# for i in range(number_of_step1, len(rows)):
for i in range(len(rows)):
rows[i]["value"] = rows[i]["sessions_count"] * 100 / total_100p
nodes = []
nodes_values = []
links = []
drops = []
max_depth = 0
for r in rows:
r["value"] = r["sessions_count"] * 100 / total_100p
source = f"{r['event_number_in_session'] - 1}_{r['event_type']}_{r['e_value']}"
source = f"{r['event_number_in_session']}_{r['event_type']}_{r['e_value']}"
if source not in nodes:
nodes.append(source)
nodes_values.append({"depth": r['event_number_in_session'] - 1,
"name": r['e_value'],
"eventType": r['event_type'],
"id": len(nodes_values)})
target = f"{r['event_number_in_session']}_{r['next_type']}_{r['next_value']}"
if target not in nodes:
nodes.append(target)
nodes_values.append({"depth": r['event_number_in_session'],
"name": r['next_value'],
"eventType": r['next_type'],
"id": len(nodes_values)})
sr_idx = nodes.index(source)
tg_idx = nodes.index(target)
link = {"eventType": r['event_type'], "sessionsCount": r["sessions_count"], "value": r["value"]}
if not reverse_path:
link["source"] = sr_idx
link["target"] = tg_idx
else:
link["source"] = tg_idx
link["target"] = sr_idx
links.append(link)
max_depth = r['event_number_in_session']
if r["next_type"] == "DROP":
for d in drops:
if d["depth"] == r['event_number_in_session']:
d["sessions_count"] += r["sessions_count"]
break
else:
drops.append({"depth": r['event_number_in_session'], "sessions_count": r["sessions_count"]})
for i in range(len(drops)):
if drops[i]["depth"] < max_depth:
source = f"{drops[i]['depth']}_DROP_None"
target = f"{drops[i]['depth'] + 1}_DROP_None"
sr_idx = nodes.index(source)
if i < len(drops) - 1 and drops[i]["depth"] + 1 == drops[i + 1]["depth"]:
tg_idx = nodes.index(target)
else:
nodes_values.append({"name": r['e_value'], "eventType": r['event_type'],
"avgTimeFromPrevious": 0, "sessionsCount": 0})
if r['next_value']:
target = f"{r['event_number_in_session'] + 1}_{r['next_type']}_{r['next_value']}"
if target not in nodes:
nodes.append(target)
nodes_values.append({"depth": drops[i]["depth"] + 1,
"name": None,
"eventType": "DROP",
"id": len(nodes_values)})
tg_idx = len(nodes) - 1
nodes_values.append({"name": r['next_value'], "eventType": r['next_type'],
"avgTimeFromPrevious": 0, "sessionsCount": 0})
link = {"eventType": "DROP",
"sessionsCount": drops[i]["sessions_count"],
"value": drops[i]["sessions_count"] * 100 / total_100p}
sr_idx = nodes.index(source)
tg_idx = nodes.index(target)
if r["avg_time_from_previous"] is not None:
nodes_values[tg_idx]["avgTimeFromPrevious"] += r["avg_time_from_previous"] * r["sessions_count"]
nodes_values[tg_idx]["sessionsCount"] += r["sessions_count"]
link = {"eventType": r['event_type'], "sessionsCount": r["sessions_count"],
"value": r["value"], "avgTimeFromPrevious": r["avg_time_from_previous"]}
if not reverse_path:
link["source"] = sr_idx
link["target"] = tg_idx
@@ -88,10 +55,13 @@ def __transform_journey(rows, reverse_path=False):
link["source"] = tg_idx
link["target"] = sr_idx
links.append(link)
for n in nodes_values:
if n["sessionsCount"] > 0:
n["avgTimeFromPrevious"] = n["avgTimeFromPrevious"] / n["sessionsCount"]
else:
n["avgTimeFromPrevious"] = None
n.pop("sessionsCount")
if reverse_path:
for n in nodes_values:
n["depth"] = max_depth - n["depth"]
return {"nodes": nodes_values,
"links": sorted(links, key=lambda x: (x["source"], x["target"]), reverse=False)}


@@ -1,10 +0,0 @@
import logging
from decouple import config
logger = logging.getLogger(__name__)
if config("EXP_METRICS", cast=bool, default=False):
logger.info(">>> Using experimental product-analytics")
from .product_analytics_ch import *
else:
from .product_analytics import *


@@ -1,135 +1,54 @@
import logging
from time import time
from typing import List
import schemas
from chalicelib.core import metadata
from .product_analytics import __transform_journey
from chalicelib.core.metrics.metrics_ch import __get_basic_constraints, __get_meta_constraint
from chalicelib.core.metrics.metrics_ch import __get_constraint_values, __complete_missing_steps
from chalicelib.utils import ch_client, exp_ch_helper
from chalicelib.utils import helper
from chalicelib.utils import sql_helper as sh
from chalicelib.utils import helper, dev
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.metrics_helper import get_step_size
from chalicelib.utils import sql_helper as sh
from chalicelib.core import metadata
from time import time
import logging
from chalicelib.core.metrics.product_analytics import __transform_journey
logger = logging.getLogger(__name__)
JOURNEY_TYPES = {
schemas.ProductAnalyticsSelectedEventType.LOCATION: {"eventType": "LOCATION", "column": "`$properties`.url_path"},
schemas.ProductAnalyticsSelectedEventType.CLICK: {"eventType": "CLICK", "column": "`$properties`.label"},
schemas.ProductAnalyticsSelectedEventType.INPUT: {"eventType": "INPUT", "column": "`$properties`.label"},
schemas.ProductAnalyticsSelectedEventType.CUSTOM_EVENT: {"eventType": "CUSTOM", "column": "`$properties`.name"}
schemas.ProductAnalyticsSelectedEventType.LOCATION: {"eventType": "LOCATION", "column": "url_path"},
schemas.ProductAnalyticsSelectedEventType.CLICK: {"eventType": "CLICK", "column": "label"},
schemas.ProductAnalyticsSelectedEventType.INPUT: {"eventType": "INPUT", "column": "label"},
schemas.ProductAnalyticsSelectedEventType.CUSTOM_EVENT: {"eventType": "CUSTOM", "column": "name"}
}
def __get_basic_constraints_events(table_name=None, identifier="project_id"):
if table_name:
table_name += "."
else:
table_name = ""
ch_sub_query = [f"{table_name}{identifier} =toUInt16(%({identifier})s)"]
ch_sub_query.append(f"{table_name}created_at >= toDateTime(%(startTimestamp)s/1000)")
ch_sub_query.append(f"{table_name}created_at < toDateTime(%(endTimestamp)s/1000)")
return ch_sub_query
def __frange(start, stop, step):
result = []
i = start
while i < stop:
result.append(i)
i += step
return result
def __add_missing_keys(original, complete):
for missing in [key for key in complete.keys() if key not in original.keys()]:
original[missing] = complete[missing]
return original
def __complete_missing_steps(start_time, end_time, density, neutral, rows, time_key="timestamp", time_coefficient=1000):
if len(rows) == density:
return rows
step = get_step_size(start_time, end_time, density, decimal=True)
optimal = [(int(i * time_coefficient), int((i + step) * time_coefficient)) for i in
__frange(start_time // time_coefficient, end_time // time_coefficient, step)]
result = []
r = 0
o = 0
for i in range(density):
neutral_clone = dict(neutral)
for k in neutral_clone.keys():
if callable(neutral_clone[k]):
neutral_clone[k] = neutral_clone[k]()
if r < len(rows) and len(result) + len(rows) - r == density:
result += rows[r:]
break
if r < len(rows) and o < len(optimal) and rows[r][time_key] < optimal[o][0]:
# complete missing keys in original object
rows[r] = __add_missing_keys(original=rows[r], complete=neutral_clone)
result.append(rows[r])
r += 1
elif r < len(rows) and o < len(optimal) and optimal[o][0] <= rows[r][time_key] < optimal[o][1]:
# complete missing keys in original object
rows[r] = __add_missing_keys(original=rows[r], complete=neutral_clone)
result.append(rows[r])
r += 1
o += 1
else:
neutral_clone[time_key] = optimal[o][0]
result.append(neutral_clone)
o += 1
return result
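# A simplified, self-contained illustration of what __complete_missing_steps is for:
# charts should return one point per expected bucket, so buckets the query skipped are
# padded with the neutral row. This sketch matches buckets by exact timestamp, while the
# helper above matches them by interval; the function name is illustrative only.
def fill_missing_buckets(rows, start_ms, end_ms, step_ms, neutral):
    by_ts = {r["timestamp"]: r for r in rows}
    return [by_ts.get(ts, {"timestamp": ts, **neutral}) for ts in range(start_ms, end_ms, step_ms)]

assert fill_missing_buckets([{"timestamp": 2000, "value": 5}], 0, 4000, 1000, {"value": 0}) == [
    {"timestamp": 0, "value": 0},
    {"timestamp": 1000, "value": 0},
    {"timestamp": 2000, "value": 5},
    {"timestamp": 3000, "value": 0},
]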
# Q6: use events as a sub_query to support filter of materialized columns when doing a join
# query: Q5, the result is correct,
# startPoints are computed before ranked_events to reduce the number of window functions over rows
# compute avg_time_from_previous at the same level as sessions_count (this was removed in v1.22)
# replaced time_to_target by time_from_previous
# compute avg_time_from_previous at the same level as sessions_count
# sort by top 5 according to sessions_count at the CTE level
# final part project data without grouping
# if start-point is selected, the selected event is ranked n°1
def path_analysis(project_id: int, data: schemas.CardPathAnalysis):
if not data.hide_excess:
data.hide_excess = True
data.rows = 50
sub_events = []
start_points_conditions = []
step_0_conditions = []
step_1_post_conditions = ["event_number_in_session <= %(density)s"]
q2_extra_col = None
q2_extra_condition = None
if len(data.metric_value) == 0:
data.metric_value.append(schemas.ProductAnalyticsSelectedEventType.LOCATION)
sub_events.append({"column": JOURNEY_TYPES[schemas.ProductAnalyticsSelectedEventType.LOCATION]["column"],
"eventType": schemas.ProductAnalyticsSelectedEventType.LOCATION.value})
else:
if len(data.start_point) > 0:
extra_metric_values = []
for s in data.start_point:
if s.type not in data.metric_value:
sub_events.append({"column": JOURNEY_TYPES[s.type]["column"],
"eventType": JOURNEY_TYPES[s.type]["eventType"]})
step_1_post_conditions.append(
f"(`$event_name`='{JOURNEY_TYPES[s.type]['eventType']}' AND event_number_in_session = 1 \
OR `$event_name`!='{JOURNEY_TYPES[s.type]['eventType']}' AND event_number_in_session > 1)")
extra_metric_values.append(s.type)
if not q2_extra_col:
# This is used in case the start event has a different type from the visible events:
# intermediary events of non-visible types get removed, so you would otherwise see a
# jump from step-0 to step-3 because step-2 is not a visible event
q2_extra_col = """,leadInFrame(toNullable(event_number_in_session))
OVER (PARTITION BY session_id ORDER BY created_at %s
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS next_event_number_in_session"""
q2_extra_condition = """WHERE event_number_in_session + 1 = next_event_number_in_session
OR isNull(next_event_number_in_session);"""
data.metric_value += extra_metric_values
for v in data.metric_value:
if JOURNEY_TYPES.get(v):
sub_events.append({"column": JOURNEY_TYPES[v]["column"],
"eventType": JOURNEY_TYPES[v]["eventType"]})
if len(sub_events) == 1:
main_column = sub_events[0]['column']
else:
main_column = f"multiIf(%s,%s)" % (
','.join([f"`$event_name`='{s['eventType']}',{s['column']}" for s in sub_events[:-1]]),
','.join([f"event_type='{s['eventType']}',{s['column']}" for s in sub_events[:-1]]),
sub_events[-1]["column"])
extra_values = {}
reverse = data.start_type == "end"
@@ -142,19 +61,19 @@ def path_analysis(project_id: int, data: schemas.CardPathAnalysis):
event_type = JOURNEY_TYPES[sf.type]['eventType']
extra_values = {**extra_values, **sh.multi_values(sf.value, value_key=f_k),
f"start_event_type_{i}": event_type}
start_points_conditions.append(f"(`$event_name`=%(start_event_type_{i})s AND " +
start_points_conditions.append(f"(event_type=%(start_event_type_{i})s AND " +
sh.multi_conditions(f'{event_column} {op} %({f_k})s', sf.value, is_not=is_not,
value_key=f_k)
+ ")")
step_0_conditions.append(f"(`$event_name`=%(start_event_type_{i})s AND " +
step_0_conditions.append(f"(event_type=%(start_event_type_{i})s AND " +
sh.multi_conditions(f'e_value {op} %({f_k})s', sf.value, is_not=is_not,
value_key=f_k)
+ ")")
if len(start_points_conditions) > 0:
start_points_conditions = ["(" + " OR ".join(start_points_conditions) + ")",
"events.project_id = toUInt16(%(project_id)s)",
"events.created_at >= toDateTime(%(startTimestamp)s / 1000)",
"events.created_at < toDateTime(%(endTimestamp)s / 1000)"]
"events.datetime >= toDateTime(%(startTimestamp)s / 1000)",
"events.datetime < toDateTime(%(endTimestamp)s / 1000)"]
step_0_conditions = ["(" + " OR ".join(step_0_conditions) + ")",
"pre_ranked_events.event_number_in_session = 1"]
@@ -343,11 +262,10 @@ def path_analysis(project_id: int, data: schemas.CardPathAnalysis):
else:
path_direction = ""
# ch_sub_query = __get_basic_constraints(table_name="events")
ch_sub_query = __get_basic_constraints_events(table_name="events")
ch_sub_query = __get_basic_constraints(table_name="events")
selected_event_type_sub_query = []
for s in data.metric_value:
selected_event_type_sub_query.append(f"events.`$event_name` = '{JOURNEY_TYPES[s]['eventType']}'")
selected_event_type_sub_query.append(f"events.event_type = '{JOURNEY_TYPES[s]['eventType']}'")
if s in exclusions:
selected_event_type_sub_query[-1] += " AND (" + " AND ".join(exclusions[s]) + ")"
selected_event_type_sub_query = " OR ".join(selected_event_type_sub_query)
@@ -370,14 +288,14 @@ def path_analysis(project_id: int, data: schemas.CardPathAnalysis):
if len(start_points_conditions) == 0:
step_0_subquery = """SELECT DISTINCT session_id
FROM (SELECT `$event_name`, e_value
FROM (SELECT event_type, e_value
FROM pre_ranked_events
WHERE event_number_in_session = 1
GROUP BY `$event_name`, e_value
GROUP BY event_type, e_value
ORDER BY count(1) DESC
LIMIT 1) AS top_start_events
INNER JOIN pre_ranked_events
ON (top_start_events.`$event_name` = pre_ranked_events.`$event_name` AND
ON (top_start_events.event_type = pre_ranked_events.event_type AND
top_start_events.e_value = pre_ranked_events.e_value)
WHERE pre_ranked_events.event_number_in_session = 1"""
initial_event_cte = ""
@@ -386,85 +304,67 @@ def path_analysis(project_id: int, data: schemas.CardPathAnalysis):
FROM pre_ranked_events
WHERE {" AND ".join(step_0_conditions)}"""
initial_event_cte = f"""\
initial_event AS (SELECT events.session_id, MIN(created_at) AS start_event_timestamp
initial_event AS (SELECT events.session_id, MIN(datetime) AS start_event_timestamp
FROM {main_events_table} {"INNER JOIN sub_sessions USING (session_id)" if len(sessions_conditions) > 0 else ""}
WHERE {" AND ".join(start_points_conditions)}
GROUP BY 1),"""
ch_sub_query.append(f"events.created_at{'<=' if reverse else '>='}initial_event.start_event_timestamp")
ch_sub_query.append("events.datetime>=initial_event.start_event_timestamp")
main_events_table += " INNER JOIN initial_event ON (events.session_id = initial_event.session_id)"
sessions_conditions = []
steps_query = []
# This is used if data.hideExcess is True
projection_query = []
drop_query = []
top_query = []
top_with_next_query = []
other_query = []
for i in range(1, data.density + (1 if data.hide_excess else 0)):
steps_query.append(f"""n{i} AS (SELECT event_number_in_session,
`$event_name`,
e_value,
next_type,
next_value,
COUNT(1) AS sessions_count
FROM ranked_events
WHERE event_number_in_session = {i}
GROUP BY event_number_in_session, `$event_name`, e_value, next_type, next_value
ORDER BY sessions_count DESC)""")
if not data.hide_excess:
projection_query.append(f"""\
SELECT event_number_in_session,
`$event_name`,
steps_query = ["""n1 AS (SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
AVG(time_from_previous) AS avg_time_from_previous,
COUNT(1) AS sessions_count
FROM ranked_events
WHERE event_number_in_session = 1
AND isNotNull(next_value)
GROUP BY event_number_in_session, event_type, e_value, next_type, next_value
ORDER BY sessions_count DESC
LIMIT %(eventThresholdNumberInGroup)s)"""]
projection_query = ["""SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
sessions_count,
avg_time_from_previous
FROM n1"""]
for i in range(2, data.density + 1):
steps_query.append(f"""n{i} AS (SELECT *
FROM (SELECT re.event_number_in_session AS event_number_in_session,
re.event_type AS event_type,
re.e_value AS e_value,
re.next_type AS next_type,
re.next_value AS next_value,
AVG(re.time_from_previous) AS avg_time_from_previous,
COUNT(1) AS sessions_count
FROM n{i - 1} INNER JOIN ranked_events AS re
ON (n{i - 1}.next_value = re.e_value AND n{i - 1}.next_type = re.event_type)
WHERE re.event_number_in_session = {i}
GROUP BY re.event_number_in_session, re.event_type, re.e_value, re.next_type, re.next_value) AS sub_level
ORDER BY sessions_count DESC
LIMIT %(eventThresholdNumberInGroup)s)""")
projection_query.append(f"""SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
sessions_count
FROM n{i}
WHERE isNotNull(next_type)""")
else:
top_query.append(f"""\
SELECT event_number_in_session,
`$event_name`,
e_value,
SUM(n{i}.sessions_count) AS sessions_count
FROM n{i}
GROUP BY event_number_in_session, `$event_name`, e_value
ORDER BY sessions_count DESC
LIMIT %(visibleRows)s""")
if i < data.density:
drop_query.append(f"""SELECT event_number_in_session,
`$event_name`,
e_value,
'DROP' AS next_type,
NULL AS next_value,
sessions_count
FROM n{i}
WHERE isNull(n{i}.next_type)""")
if data.hide_excess:
top_with_next_query.append(f"""\
SELECT n{i}.*
FROM n{i}
INNER JOIN top_n
ON (n{i}.event_number_in_session = top_n.event_number_in_session
AND n{i}.`$event_name` = top_n.`$event_name`
AND n{i}.e_value = top_n.e_value)""")
if i > 1 and data.hide_excess:
other_query.append(f"""SELECT n{i}.*
FROM n{i}
WHERE (event_number_in_session, `$event_name`, e_value) NOT IN
(SELECT event_number_in_session, `$event_name`, e_value
FROM top_n
WHERE top_n.event_number_in_session = {i})""")
sessions_count,
avg_time_from_previous
FROM n{i}""")
with ch_client.ClickHouseClient(database="experimental") as ch:
time_key = TimeUTC.now()
_now = time()
params = {"project_id": project_id, "startTimestamp": data.startTimestamp,
"endTimestamp": data.endTimestamp, "density": data.density,
"visibleRows": data.rows,
# This is ignored because UI will take care of it
# "eventThresholdNumberInGroup": 4 if data.hide_excess else 8,
"eventThresholdNumberInGroup": 8,
**extra_values}
ch_query1 = f"""\
@@ -473,24 +373,23 @@ WITH {initial_sessions_cte}
{initial_event_cte}
pre_ranked_events AS (SELECT *
FROM (SELECT session_id,
`$event_name`,
created_at,
toString({main_column}) AS e_value,
event_type,
datetime,
{main_column} AS e_value,
row_number() OVER (PARTITION BY session_id
ORDER BY created_at {path_direction},
event_id {path_direction} ) AS event_number_in_session
ORDER BY datetime {path_direction},
message_id {path_direction} ) AS event_number_in_session
FROM {main_events_table} {"INNER JOIN sub_sessions ON (sub_sessions.session_id = events.session_id)" if len(sessions_conditions) > 0 else ""}
WHERE {" AND ".join(ch_sub_query)}
) AS full_ranked_events
WHERE {" AND ".join(step_1_post_conditions)})
WHERE event_number_in_session <= %(density)s)
SELECT *
FROM pre_ranked_events;"""
logger.debug("---------Q1-----------")
ch_query1 = ch.format(query=ch_query1, parameters=params)
ch.execute(query=ch_query1)
ch.execute(query=ch_query1, parameters=params)
if time() - _now > 2:
logger.warning(f">>>>>>>>>PathAnalysis long query EE ({int(time() - _now)}s)<<<<<<<<<")
logger.warning(str.encode(ch_query1))
logger.warning(ch.format(ch_query1, params))
logger.warning("----------------------")
_now = time()
@@ -501,136 +400,38 @@ WITH pre_ranked_events AS (SELECT *
start_points AS ({step_0_subquery}),
ranked_events AS (SELECT pre_ranked_events.*,
leadInFrame(e_value)
OVER (PARTITION BY session_id ORDER BY created_at {path_direction}
OVER (PARTITION BY session_id ORDER BY datetime {path_direction}
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS next_value,
leadInFrame(toNullable(`$event_name`))
OVER (PARTITION BY session_id ORDER BY created_at {path_direction}
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS next_type
{q2_extra_col % path_direction if q2_extra_col else ""}
leadInFrame(toNullable(event_type))
OVER (PARTITION BY session_id ORDER BY datetime {path_direction}
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS next_type,
abs(lagInFrame(toNullable(datetime))
OVER (PARTITION BY session_id ORDER BY datetime {path_direction}
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
- pre_ranked_events.datetime) AS time_from_previous
FROM start_points INNER JOIN pre_ranked_events USING (session_id))
SELECT *
FROM ranked_events
{q2_extra_condition if q2_extra_condition else ""};"""
FROM ranked_events;"""
logger.debug("---------Q2-----------")
ch_query2 = ch.format(query=ch_query2, parameters=params)
ch.execute(query=ch_query2)
ch.execute(query=ch_query2, parameters=params)
if time() - _now > 2:
logger.warning(f">>>>>>>>>PathAnalysis long query EE ({int(time() - _now)}s)<<<<<<<<<")
logger.warning(str.encode(ch_query2))
logger.warning(ch.format(ch_query2, params))
logger.warning("----------------------")
_now = time()
sub_cte = ""
if data.hide_excess:
sub_cte = f""",
top_n AS ({" UNION ALL ".join(top_query)}),
top_n_with_next AS ({" UNION ALL ".join(top_with_next_query)}),
others_n AS ({" UNION ALL ".join(other_query)})"""
projection_query = """\
-- Top to Top: valid
SELECT top_n_with_next.*
FROM top_n_with_next
INNER JOIN top_n
ON (top_n_with_next.event_number_in_session + 1 = top_n.event_number_in_session
AND top_n_with_next.next_type = top_n.`$event_name`
AND top_n_with_next.next_value = top_n.e_value)
UNION ALL
-- Top to Others: valid
SELECT top_n_with_next.event_number_in_session,
top_n_with_next.`$event_name`,
top_n_with_next.e_value,
'OTHER' AS next_type,
NULL AS next_value,
SUM(top_n_with_next.sessions_count) AS sessions_count
FROM top_n_with_next
WHERE (top_n_with_next.event_number_in_session + 1, top_n_with_next.next_type, top_n_with_next.next_value) IN
(SELECT others_n.event_number_in_session, others_n.`$event_name`, others_n.e_value FROM others_n)
GROUP BY top_n_with_next.event_number_in_session, top_n_with_next.`$event_name`, top_n_with_next.e_value
UNION ALL
-- Top go to Drop: valid
SELECT drop_n.event_number_in_session,
drop_n.`$event_name`,
drop_n.e_value,
drop_n.next_type,
drop_n.next_value,
drop_n.sessions_count
FROM drop_n
INNER JOIN top_n ON (drop_n.event_number_in_session = top_n.event_number_in_session
AND drop_n.`$event_name` = top_n.`$event_name`
AND drop_n.e_value = top_n.e_value)
ORDER BY drop_n.event_number_in_session
UNION ALL
-- Others got to Drop: valid
SELECT others_n.event_number_in_session,
'OTHER' AS `$event_name`,
NULL AS e_value,
'DROP' AS next_type,
NULL AS next_value,
SUM(others_n.sessions_count) AS sessions_count
FROM others_n
WHERE isNull(others_n.next_type)
AND others_n.event_number_in_session < 3
GROUP BY others_n.event_number_in_session, next_type, next_value
UNION ALL
-- Others got to Top:valid
SELECT others_n.event_number_in_session,
'OTHER' AS `$event_name`,
NULL AS e_value,
others_n.next_type,
others_n.next_value,
SUM(others_n.sessions_count) AS sessions_count
FROM others_n
WHERE isNotNull(others_n.next_type)
AND (others_n.event_number_in_session + 1, others_n.next_type, others_n.next_value) IN
(SELECT top_n.event_number_in_session, top_n.`$event_name`, top_n.e_value FROM top_n)
GROUP BY others_n.event_number_in_session, others_n.next_type, others_n.next_value
UNION ALL
-- Others got to Others
SELECT others_n.event_number_in_session,
'OTHER' AS `$event_name`,
NULL AS e_value,
'OTHER' AS next_type,
NULL AS next_value,
SUM(others_n.sessions_count) AS sessions_count
FROM others_n
WHERE isNotNull(others_n.next_type)
AND others_n.event_number_in_session < %(density)s
AND (others_n.event_number_in_session + 1, others_n.next_type, others_n.next_value) NOT IN
(SELECT event_number_in_session, `$event_name`, e_value FROM top_n)
GROUP BY others_n.event_number_in_session"""
else:
projection_query.append("""\
SELECT event_number_in_session,
`$event_name`,
e_value,
next_type,
next_value,
sessions_count
FROM drop_n""")
projection_query = " UNION ALL ".join(projection_query)
ch_query3 = f"""\
WITH ranked_events AS (SELECT *
FROM ranked_events_{time_key}),
{", ".join(steps_query)},
drop_n AS ({" UNION ALL ".join(drop_query)})
{sub_cte}
SELECT event_number_in_session,
`$event_name` AS event_type,
e_value,
next_type,
next_value,
sessions_count
FROM (
{projection_query}
) AS chart_steps
ORDER BY event_number_in_session, sessions_count DESC;"""
WITH ranked_events AS (SELECT *
FROM ranked_events_{time_key}),
{",".join(steps_query)}
SELECT *
FROM ({" UNION ALL ".join(projection_query)}) AS chart_steps
ORDER BY event_number_in_session;"""
logger.debug("---------Q3-----------")
ch_query3 = ch.format(query=ch_query3, parameters=params)
rows = ch.execute(query=ch_query3)
rows = ch.execute(query=ch_query3, parameters=params)
if time() - _now > 2:
logger.warning(f">>>>>>>>>PathAnalysis long query EE ({int(time() - _now)}s)<<<<<<<<<")
logger.warning(str.encode(ch_query3))
logger.warning(ch.format(ch_query3, params))
logger.warning("----------------------")
return __transform_journey(rows=rows, reverse_path=reverse)
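# The pre_ranked_events CTE above numbers each session's events with a window function;
# an in-memory sketch of the same idea (events below are hypothetical), which is what
# event_number_in_session represents:
from collections import defaultdict

def rank_events(events):
    # events: iterable of (session_id, timestamp_ms, value) tuples
    per_session = defaultdict(list)
    for session_id, ts, value in events:
        per_session[session_id].append((ts, value))
    ranked = []
    for session_id, items in per_session.items():
        for n, (ts, value) in enumerate(sorted(items), start=1):
            ranked.append({"session_id": session_id, "timestamp": ts,
                           "e_value": value, "event_number_in_session": n})
    return ranked

print(rank_events([(1, 20, "/pricing"), (1, 10, "/home"), (2, 5, "/home")]))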


@@ -1,6 +1,2 @@
TENANT_CONDITION = "TRUE"
MOB_KEY = ""
def get_file_key(project_id, session_id):
return {}
MOB_KEY=""


@@ -413,6 +413,7 @@ def update_project_conditions(project_id, conditions):
create_project_conditions(project_id, to_be_created)
if to_be_updated:
logger.debug(to_be_updated)
update_project_condition(project_id, to_be_updated)
return get_conditions(project_id)
@@ -427,45 +428,3 @@ def get_projects_ids(tenant_id):
cur.execute(query=query)
rows = cur.fetchall()
return [r["project_id"] for r in rows]
def delete_metadata_condition(project_id, metadata_key):
sql = """\
UPDATE public.projects_conditions
SET filters=(SELECT COALESCE(jsonb_agg(elem), '[]'::jsonb)
FROM jsonb_array_elements(filters) AS elem
WHERE NOT (elem ->> 'type' = 'metadata'
AND elem ->> 'source' = %(metadata_key)s))
WHERE project_id = %(project_id)s
AND jsonb_typeof(filters) = 'array'
AND EXISTS (SELECT 1
FROM jsonb_array_elements(filters) AS elem
WHERE elem ->> 'type' = 'metadata'
AND elem ->> 'source' = %(metadata_key)s);"""
with pg_client.PostgresClient() as cur:
query = cur.mogrify(sql, {"project_id": project_id, "metadata_key": metadata_key})
cur.execute(query)
def rename_metadata_condition(project_id, old_metadata_key, new_metadata_key):
sql = """\
UPDATE public.projects_conditions
SET filters = (SELECT jsonb_agg(CASE
WHEN elem ->> 'type' = 'metadata' AND elem ->> 'source' = %(old_metadata_key)s
THEN elem || ('{"source": "'||%(new_metadata_key)s||'"}')::jsonb
ELSE elem END)
FROM jsonb_array_elements(filters) AS elem)
WHERE project_id = %(project_id)s
AND jsonb_typeof(filters) = 'array'
AND EXISTS (SELECT 1
FROM jsonb_array_elements(filters) AS elem
WHERE elem ->> 'type' = 'metadata'
AND elem ->> 'source' = %(old_metadata_key)s);"""
with pg_client.PostgresClient() as cur:
query = cur.mogrify(sql, {"project_id": project_id, "old_metadata_key": old_metadata_key,
"new_metadata_key": new_metadata_key})
cur.execute(query)
# TODO: make project conditions use metadata-column-name instead of metadata-key


@@ -14,7 +14,7 @@ def reset(data: schemas.ForgetPasswordPayloadSchema, background_tasks: Backgroun
if helper.allow_captcha() and not captcha.is_valid(data.g_recaptcha_response):
return {"errors": ["Invalid captcha."]}
if not smtp.has_smtp():
return {"errors": ["Email delivery failed due to invalid SMTP configuration. Please contact your admin."]}
return {"errors": ["no SMTP configuration found, you can ask your admin to reset your password"]}
a_user = users.get_by_email_only(data.email)
if a_user:
invitation_link = users.generate_new_invitation(user_id=a_user["userId"])


@@ -3,11 +3,9 @@ import logging
from decouple import config
logger = logging.getLogger(__name__)
from . import sessions_pg
from . import sessions_pg as sessions_legacy
from . import sessions_ch
from . import sessions as sessions_legacy
if config("EXP_METRICS", cast=bool, default=False):
from . import sessions_ch as sessions
else:
from . import sessions_pg as sessions
from . import sessions


@@ -2,8 +2,8 @@ import logging
from typing import List, Union
import schemas
from chalicelib.core import events, metadata
from . import performance_event
from chalicelib.core import events, metadata, projects
from chalicelib.core.sessions import sessions_favorite, performance_event
from chalicelib.utils import pg_client, helper, metrics_helper
from chalicelib.utils import sql_helper as sh
@@ -36,24 +36,24 @@ def search2_series(data: schemas.SessionsSearchPayloadSchema, project_id: int, d
SELECT generated_timestamp AS timestamp,
COUNT(s) AS count
FROM generate_series(%(startDate)s, %(endDate)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT 1 AS s
FROM full_sessions
WHERE start_ts >= generated_timestamp
AND start_ts <= generated_timestamp + %(step_size)s) AS sessions ON (TRUE)
LEFT JOIN LATERAL ( SELECT 1 AS s
FROM full_sessions
WHERE start_ts >= generated_timestamp
AND start_ts <= generated_timestamp + %(step_size)s) AS sessions ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp;""", full_args)
elif metric_of == schemas.MetricOfTimeseries.USER_COUNT:
main_query = cur.mogrify(f"""WITH full_sessions AS (SELECT s.user_id, s.start_ts
{query_part}
AND s.user_id IS NOT NULL
AND s.user_id != '')
{query_part}
AND s.user_id IS NOT NULL
AND s.user_id != '')
SELECT generated_timestamp AS timestamp,
COUNT(s) AS count
FROM generate_series(%(startDate)s, %(endDate)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT DISTINCT user_id AS s
FROM full_sessions
WHERE start_ts >= generated_timestamp
AND start_ts <= generated_timestamp + %(step_size)s) AS sessions ON (TRUE)
LEFT JOIN LATERAL ( SELECT DISTINCT user_id AS s
FROM full_sessions
WHERE start_ts >= generated_timestamp
AND start_ts <= generated_timestamp + %(step_size)s) AS sessions ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp;""", full_args)
else:
@@ -148,7 +148,7 @@ def search2_table(data: schemas.SessionsSearchPayloadSchema, project_id: int, de
"isEvent": True,
"value": [],
"operator": e.operator,
"filters": e.filters
"filters": []
})
for v in e.value:
if v not in extra_conditions[e.operator].value:
@@ -165,7 +165,7 @@ def search2_table(data: schemas.SessionsSearchPayloadSchema, project_id: int, de
"isEvent": True,
"value": [],
"operator": e.operator,
"filters": e.filters
"filters": []
})
for v in e.value:
if v not in extra_conditions[e.operator].value:
@@ -989,7 +989,7 @@ def search_query_parts(data: schemas.SessionsSearchPayloadSchema, error_status,
sh.multi_conditions(f"ev.{events.EventType.LOCATION.column} {op} %({e_k})s",
c.value, value_key=e_k))
else:
logger.warning(f"unsupported extra_event type: {c.type}")
logger.warning(f"unsupported extra_event type:${c.type}")
if len(_extra_or_condition) > 0:
extra_constraints.append("(" + " OR ".join(_extra_or_condition) + ")")
query_part = f"""\
@@ -1002,6 +1002,7 @@ def search_query_parts(data: schemas.SessionsSearchPayloadSchema, error_status,
return full_args, query_part
def get_user_sessions(project_id, user_id, start_date, end_date):
with pg_client.PostgresClient() as cur:
constraints = ["s.project_id = %(projectId)s", "s.user_id = %(userId)s"]
@@ -1111,3 +1112,4 @@ def check_recording_status(project_id: int) -> dict:
"recordingStatus": row["recording_status"],
"sessionsCount": row["sessions_count"]
}


@@ -1,11 +1,9 @@
import json
from decouple import config
from chalicelib.core.issue_tracking import integrations_manager, base_issue
from chalicelib.utils import helper
from chalicelib.utils import pg_client
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils import pg_client
from chalicelib.core.issue_tracking import integrations_manager, base_issue
import json
def __get_saved_data(project_id, session_id, issue_id, tool):
@@ -41,8 +39,8 @@ def create_new_assignment(tenant_id, project_id, session_id, creator_id, assigne
issue = integration.issue_handler.create_new_assignment(title=title, assignee=assignee, description=description,
issue_type=issue_type,
integration_project_id=integration_project_id)
except base_issue.RequestException as e:
return base_issue.proxy_issues_handler(e)
except integration_base_issue.RequestException as e:
return integration_base_issue.proxy_issues_handler(e)
if issue is None or "id" not in issue:
return {"errors": ["something went wrong while creating the issue"]}
with pg_client.PostgresClient() as cur:

File diff suppressed because it is too large


@@ -4,7 +4,7 @@ import schemas
from chalicelib.utils.storage import StorageClient
def get_devtools_keys(project_id, session_id):
def __get_devtools_keys(project_id, session_id):
params = {
"sessionId": session_id,
"projectId": project_id
@@ -16,7 +16,7 @@ def get_devtools_keys(project_id, session_id):
def get_urls(session_id, project_id, context: schemas.CurrentContext, check_existence: bool = True):
results = []
for k in get_devtools_keys(project_id=project_id, session_id=session_id):
for k in __get_devtools_keys(project_id=project_id, session_id=session_id):
if check_existence and not StorageClient.exists(bucket=config("sessions_bucket"), key=k):
continue
results.append(StorageClient.get_presigned_url_for_sharing(
@@ -29,5 +29,5 @@ def get_urls(session_id, project_id, context: schemas.CurrentContext, check_exis
def delete_mobs(project_id, session_ids):
for session_id in session_ids:
for k in get_devtools_keys(project_id=project_id, session_id=session_id):
for k in __get_devtools_keys(project_id=project_id, session_id=session_id):
StorageClient.tag_for_deletion(bucket=config("sessions_bucket"), key=k)


@@ -1 +0,0 @@
from .sessions_devtool import *


@@ -1 +0,0 @@
from .sessions_favorite import *


@@ -1,81 +1,76 @@
from functools import cache
import schemas
from chalicelib.core.autocomplete import autocomplete
from chalicelib.utils.event_filter_definition import SupportedFilter
SUPPORTED_TYPES = {
schemas.FilterType.USER_OS: SupportedFilter(
get=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_OS),
query=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_OS)),
schemas.FilterType.USER_BROWSER: SupportedFilter(
get=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_BROWSER),
query=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_BROWSER)),
schemas.FilterType.USER_DEVICE: SupportedFilter(
get=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_DEVICE),
query=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_DEVICE)),
schemas.FilterType.USER_COUNTRY: SupportedFilter(
get=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_COUNTRY),
query=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_COUNTRY)),
schemas.FilterType.USER_CITY: SupportedFilter(
get=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_CITY),
query=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_CITY)),
schemas.FilterType.USER_STATE: SupportedFilter(
get=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_STATE),
query=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_STATE)),
schemas.FilterType.USER_ID: SupportedFilter(
get=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_ID),
query=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_ID)),
schemas.FilterType.USER_ANONYMOUS_ID: SupportedFilter(
get=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_ANONYMOUS_ID),
query=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_ANONYMOUS_ID)),
schemas.FilterType.REV_ID: SupportedFilter(
get=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.REV_ID),
query=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.REV_ID)),
schemas.FilterType.REFERRER: SupportedFilter(
get=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.REFERRER),
query=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.REFERRER)),
schemas.FilterType.UTM_CAMPAIGN: SupportedFilter(
get=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.UTM_CAMPAIGN),
query=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.UTM_CAMPAIGN)),
schemas.FilterType.UTM_MEDIUM: SupportedFilter(
get=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.UTM_MEDIUM),
query=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.UTM_MEDIUM)),
schemas.FilterType.UTM_SOURCE: SupportedFilter(
get=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.UTM_SOURCE),
query=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.UTM_SOURCE)),
# Mobile
schemas.FilterType.USER_OS_MOBILE: SupportedFilter(
get=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_OS_MOBILE),
query=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_OS_MOBILE)),
schemas.FilterType.USER_DEVICE_MOBILE: SupportedFilter(
get=autocomplete.__generic_autocomplete_metas(
typename=schemas.FilterType.USER_DEVICE_MOBILE),
query=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_DEVICE_MOBILE)),
schemas.FilterType.USER_COUNTRY_MOBILE: SupportedFilter(
get=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_COUNTRY_MOBILE),
query=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_COUNTRY_MOBILE)),
schemas.FilterType.USER_ID_MOBILE: SupportedFilter(
get=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_ID_MOBILE),
query=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_ID_MOBILE)),
schemas.FilterType.USER_ANONYMOUS_ID_MOBILE: SupportedFilter(
get=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_ANONYMOUS_ID_MOBILE),
query=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.USER_ANONYMOUS_ID_MOBILE)),
schemas.FilterType.REV_ID_MOBILE: SupportedFilter(
get=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.REV_ID_MOBILE),
query=autocomplete.__generic_autocomplete_metas(typename=schemas.FilterType.REV_ID_MOBILE)),
@cache
def supported_types():
return {
schemas.FilterType.USER_OS: SupportedFilter(
get=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_OS),
query=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_OS)),
schemas.FilterType.USER_BROWSER: SupportedFilter(
get=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_BROWSER),
query=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_BROWSER)),
schemas.FilterType.USER_DEVICE: SupportedFilter(
get=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_DEVICE),
query=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_DEVICE)),
schemas.FilterType.USER_COUNTRY: SupportedFilter(
get=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_COUNTRY),
query=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_COUNTRY)),
schemas.FilterType.USER_CITY: SupportedFilter(
get=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_CITY),
query=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_CITY)),
schemas.FilterType.USER_STATE: SupportedFilter(
get=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_STATE),
query=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_STATE)),
schemas.FilterType.USER_ID: SupportedFilter(
get=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_ID),
query=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_ID)),
schemas.FilterType.USER_ANONYMOUS_ID: SupportedFilter(
get=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_ANONYMOUS_ID),
query=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_ANONYMOUS_ID)),
schemas.FilterType.REV_ID: SupportedFilter(
get=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.REV_ID),
query=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.REV_ID)),
schemas.FilterType.REFERRER: SupportedFilter(
get=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.REFERRER),
query=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.REFERRER)),
schemas.FilterType.UTM_CAMPAIGN: SupportedFilter(
get=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.UTM_CAMPAIGN),
query=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.UTM_CAMPAIGN)),
schemas.FilterType.UTM_MEDIUM: SupportedFilter(
get=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.UTM_MEDIUM),
query=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.UTM_MEDIUM)),
schemas.FilterType.UTM_SOURCE: SupportedFilter(
get=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.UTM_SOURCE),
query=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.UTM_SOURCE)),
# Mobile
schemas.FilterType.USER_OS_MOBILE: SupportedFilter(
get=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_OS_MOBILE),
query=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_OS_MOBILE)),
schemas.FilterType.USER_DEVICE_MOBILE: SupportedFilter(
get=autocomplete.generic_autocomplete_metas(
typename=schemas.FilterType.USER_DEVICE_MOBILE),
query=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_DEVICE_MOBILE)),
schemas.FilterType.USER_COUNTRY_MOBILE: SupportedFilter(
get=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_COUNTRY_MOBILE),
query=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_COUNTRY_MOBILE)),
schemas.FilterType.USER_ID_MOBILE: SupportedFilter(
get=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_ID_MOBILE),
query=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_ID_MOBILE)),
schemas.FilterType.USER_ANONYMOUS_ID_MOBILE: SupportedFilter(
get=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_ANONYMOUS_ID_MOBILE),
query=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.USER_ANONYMOUS_ID_MOBILE)),
schemas.FilterType.REV_ID_MOBILE: SupportedFilter(
get=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.REV_ID_MOBILE),
query=autocomplete.generic_autocomplete_metas(typename=schemas.FilterType.REV_ID_MOBILE)),
}
}
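# The new version above builds the filter registry lazily and memoizes it with
# functools.cache; a minimal sketch of that pattern with placeholder names and handlers:
from functools import cache

@cache
def _registry() -> dict:
    # Constructed on first call instead of at import time, then reused.
    return {
        "user_os": lambda project_id, text: [],       # placeholder handler
        "user_browser": lambda project_id, text: [],  # placeholder handler
    }

assert _registry() is _registry()  # the same cached dict is returned on every call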
def search(text: str, meta_type: schemas.FilterType, project_id: int):
rows = []
if meta_type not in list(supported_types().keys()):
if meta_type not in list(SUPPORTED_TYPES.keys()):
return {"errors": ["unsupported type"]}
rows += supported_types()[meta_type].get(project_id=project_id, text=text)
rows += SUPPORTED_TYPES[meta_type].get(project_id=project_id, text=text)
# for IOS events autocomplete
# if meta_type + "_IOS" in list(SUPPORTED_TYPES.keys()):
# rows += SUPPORTED_TYPES[meta_type + "_IOS"].get(project_id=project_id, text=text)


@@ -30,7 +30,6 @@ def get_note(tenant_id, project_id, user_id, note_id, share=None):
row = helper.dict_to_camel_case(row)
if row:
row["createdAt"] = TimeUTC.datetime_to_timestamp(row["createdAt"])
row["updatedAt"] = TimeUTC.datetime_to_timestamp(row["updatedAt"])
return row
@@ -57,74 +56,41 @@ def get_session_notes(tenant_id, project_id, session_id, user_id):
def get_all_notes_by_project_id(tenant_id, project_id, user_id, data: schemas.SearchNoteSchema):
with pg_client.PostgresClient() as cur:
# base conditions
conditions = [
"sessions_notes.project_id = %(project_id)s",
"sessions_notes.deleted_at IS NULL"
]
params = {"project_id": project_id, "user_id": user_id, "tenant_id": tenant_id}
# tag conditions
if data.tags:
tag_key = "tag_value"
conditions = ["sessions_notes.project_id = %(project_id)s", "sessions_notes.deleted_at IS NULL"]
extra_params = {}
if data.tags and len(data.tags) > 0:
k = "tag_value"
conditions.append(
sh.multi_conditions(f"%({tag_key})s = sessions_notes.tag", data.tags, value_key=tag_key)
)
params.update(sh.multi_values(data.tags, value_key=tag_key))
# filter by ownership or shared status
sh.multi_conditions(f"%({k})s = sessions_notes.tag", data.tags, value_key=k))
extra_params = sh.multi_values(data.tags, value_key=k)
if data.shared_only:
conditions.append("sessions_notes.is_public IS TRUE")
conditions.append("sessions_notes.is_public")
elif data.mine_only:
conditions.append("sessions_notes.user_id = %(user_id)s")
else:
conditions.append("(sessions_notes.user_id = %(user_id)s OR sessions_notes.is_public)")
# search condition
if data.search:
conditions.append("sessions_notes.message ILIKE %(search)s")
params["search"] = f"%{data.search}%"
query = f"""
SELECT
COUNT(1) OVER () AS full_count,
sessions_notes.*,
users.name AS user_name
FROM
sessions_notes
INNER JOIN
users USING (user_id)
WHERE
{" AND ".join(conditions)}
ORDER BY
created_at {data.order}
LIMIT
%(limit)s OFFSET %(offset)s;
"""
params.update({
"limit": data.limit,
"offset": data.limit * (data.page - 1)
})
query = cur.mogrify(query, params)
query = cur.mogrify(f"""SELECT COUNT(1) OVER () AS full_count, sessions_notes.*, users.name AS user_name
FROM sessions_notes INNER JOIN users USING (user_id)
WHERE {" AND ".join(conditions)}
ORDER BY created_at {data.order}
LIMIT {data.limit} OFFSET {data.limit * (data.page - 1)};""",
{"project_id": project_id, "user_id": user_id, "tenant_id": tenant_id, **extra_params})
logger.debug(query)
cur.execute(query)
cur.execute(query=query)
rows = cur.fetchall()
result = {"count": 0, "notes": helper.list_to_camel_case(rows)}
if rows:
if len(rows) > 0:
result["count"] = rows[0]["fullCount"]
for row in rows:
row["createdAt"] = TimeUTC.datetime_to_timestamp(row["createdAt"])
row.pop("fullCount")
for row in rows:
row["createdAt"] = TimeUTC.datetime_to_timestamp(row["createdAt"])
row.pop("fullCount")
return result
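# Pagination in the listing query above: OFFSET skips the rows of earlier pages while
# COUNT(1) OVER () returns the total alongside each row; a small sketch of the offset
# arithmetic (helper name is illustrative):
def page_window(limit, page):
    offset = limit * (page - 1)
    return limit, offset

assert page_window(limit=10, page=3) == (10, 20)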
def create(tenant_id, user_id, project_id, session_id, data: schemas.SessionNoteSchema):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(f"""INSERT INTO public.sessions_notes (message, user_id, tag, session_id, project_id, timestamp, is_public, thumbnail, start_at, end_at)
VALUES (%(message)s, %(user_id)s, %(tag)s, %(session_id)s, %(project_id)s, %(timestamp)s, %(is_public)s, %(thumbnail)s, %(start_at)s, %(end_at)s)
query = cur.mogrify(f"""INSERT INTO public.sessions_notes (message, user_id, tag, session_id, project_id, timestamp, is_public)
VALUES (%(message)s, %(user_id)s, %(tag)s, %(session_id)s, %(project_id)s, %(timestamp)s, %(is_public)s)
RETURNING *,(SELECT name FROM users WHERE users.user_id=%(user_id)s) AS user_name;""",
{"user_id": user_id, "project_id": project_id, "session_id": session_id,
**data.model_dump()})
@@ -145,8 +111,6 @@ def edit(tenant_id, user_id, project_id, note_id, data: schemas.SessionUpdateNot
sub_query.append("is_public = %(is_public)s")
if data.timestamp is not None:
sub_query.append("timestamp = %(timestamp)s")
sub_query.append("updated_at = timezone('utc'::text, now())")
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(f"""UPDATE public.sessions_notes


@@ -1,10 +1,10 @@
import schemas
from chalicelib.core import events, metadata, events_mobile, \
issues, assist, canvas, user_testing
from . import sessions_mobs, sessions_devtool
from chalicelib.core.errors.modules import errors_helper
from chalicelib.core.sessions import sessions_mobs, sessions_devtool
from chalicelib.utils import errors_helper
from chalicelib.utils import pg_client, helper
from chalicelib.core.modules import MOB_KEY, get_file_key
from chalicelib.core.modules import MOB_KEY
def __is_mobile_session(platform):
@@ -22,7 +22,6 @@ def __group_metadata(session, project_metadata):
def get_pre_replay(project_id, session_id):
return {
**get_file_key(project_id=project_id, session_id=session_id),
'domURL': [sessions_mobs.get_first_url(project_id=project_id, session_id=session_id, check_existence=False)]}


@@ -1,9 +1,11 @@
import logging
from typing import List, Union
import schemas
from chalicelib.core import metadata, projects
from . import sessions_favorite, sessions_legacy
from chalicelib.utils import pg_client, helper
from chalicelib.core import events, metadata, projects
from chalicelib.core.sessions import sessions_favorite, performance_event, sessions_legacy
from chalicelib.utils import pg_client, helper, metrics_helper
from chalicelib.utils import sql_helper as sh
logger = logging.getLogger(__name__)
@@ -38,22 +40,16 @@ COALESCE((SELECT TRUE
# This function executes the query and return result
def search_sessions(data: schemas.SessionsSearchPayloadSchema, project: schemas.ProjectContext,
user_id, errors_only=False, error_status=schemas.ErrorStatus.ALL,
count_only=False, issue=None, ids_only=False, platform="web"):
def search_sessions(data: schemas.SessionsSearchPayloadSchema, project_id, user_id, errors_only=False,
error_status=schemas.ErrorStatus.ALL, count_only=False, issue=None, ids_only=False,
platform="web"):
if data.bookmarked:
data.startTimestamp, data.endTimestamp = sessions_favorite.get_start_end_timestamp(project.project_id, user_id)
if data.startTimestamp is None:
logger.debug(f"No vault sessions found for project:{project.project_id}")
return {
'total': 0,
'sessions': [],
'src': 1
}
data.startTimestamp, data.endTimestamp = sessions_favorite.get_start_end_timestamp(project_id, user_id)
full_args, query_part = sessions_legacy.search_query_parts(data=data, error_status=error_status,
errors_only=errors_only,
favorite_only=data.bookmarked, issue=issue,
project_id=project.project_id,
project_id=project_id,
user_id=user_id, platform=platform)
if data.limit is not None and data.page is not None:
full_args["sessions_limit"] = data.limit
@@ -90,7 +86,7 @@ def search_sessions(data: schemas.SessionsSearchPayloadSchema, project: schemas.
else:
sort = 'start_ts'
meta_keys = metadata.get(project_id=project.project_id)
meta_keys = metadata.get(project_id=project_id)
main_query = cur.mogrify(f"""SELECT COUNT(*) AS count,
COALESCE(JSONB_AGG(users_sessions)
FILTER (WHERE rn>%(sessions_limit_s)s AND rn<=%(sessions_limit_e)s), '[]'::JSONB) AS sessions
@@ -122,12 +118,9 @@ def search_sessions(data: schemas.SessionsSearchPayloadSchema, project: schemas.
sort = 'session_id'
if data.sort is not None and data.sort != "session_id":
# sort += " " + data.order + "," + helper.key_to_snake_case(data.sort)
if data.sort == 'datetime':
sort = 'start_ts'
else:
sort = helper.key_to_snake_case(data.sort)
sort = helper.key_to_snake_case(data.sort)
meta_keys = metadata.get(project_id=project.project_id)
meta_keys = metadata.get(project_id=project_id)
main_query = cur.mogrify(f"""SELECT COUNT(full_sessions) AS count,
COALESCE(JSONB_AGG(full_sessions)
FILTER (WHERE rn>%(sessions_limit_s)s AND rn<=%(sessions_limit_e)s), '[]'::JSONB) AS sessions
@ -175,8 +168,7 @@ def search_sessions(data: schemas.SessionsSearchPayloadSchema, project: schemas.
# reverse=data.order.upper() == "DESC")
return {
'total': total,
'sessions': helper.list_to_camel_case(sessions),
'src': 1
'sessions': helper.list_to_camel_case(sessions)
}

View file
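Note: one side of the search_sessions hunk above short-circuits bookmarked searches when the user has no favourite ("vault") sessions, returning an empty result instead of running the main query. A minimal sketch of that guard, with get_start_end_timestamp and run_query standing in for the project's helpers:

    def search_bookmarked(data, project_id, user_id, get_start_end_timestamp, run_query):
        """Return an empty result early when the user has no favourite sessions."""
        if data.bookmarked:
            data.startTimestamp, data.endTimestamp = get_start_end_timestamp(project_id, user_id)
            if data.startTimestamp is None:
                # Nothing bookmarked: skip the expensive search entirely.
                return {"total": 0, "sessions": []}
        return run_query(data)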

@ -1 +0,0 @@
from .sessions_viewed import *

View file

@ -1,7 +1,6 @@
import logging
from chalicelib.core import assist
from . import sessions
from chalicelib.core import sessions, assist
logger = logging.getLogger(__name__)

View file

@ -18,7 +18,7 @@ def refresh_spot_jwt_iat_jti(user_id):
{"user_id": user_id})
cur.execute(query)
row = cur.fetchone()
return users.RefreshSpotJWTs(**row)
return row.get("spot_jwt_iat"), row.get("spot_jwt_refresh_jti"), row.get("spot_jwt_refresh_iat")
def logout(user_id: int):
@ -26,13 +26,13 @@ def logout(user_id: int):
def refresh(user_id: int, tenant_id: int = -1) -> dict:
j = refresh_spot_jwt_iat_jti(user_id=user_id)
spot_jwt_iat, spot_jwt_r_jti, spot_jwt_r_iat = refresh_spot_jwt_iat_jti(user_id=user_id)
return {
"jwt": authorizers.generate_jwt(user_id=user_id, tenant_id=tenant_id, iat=j.spot_jwt_iat,
"jwt": authorizers.generate_jwt(user_id=user_id, tenant_id=tenant_id, iat=spot_jwt_iat,
aud=AUDIENCE, for_spot=True),
"refreshToken": authorizers.generate_jwt_refresh(user_id=user_id, tenant_id=tenant_id, iat=j.spot_jwt_refresh_iat,
aud=AUDIENCE, jwt_jti=j.spot_jwt_refresh_jti, for_spot=True),
"refreshTokenMaxAge": config("JWT_SPOT_REFRESH_EXPIRATION", cast=int) - (j.spot_jwt_iat - j.spot_jwt_refresh_iat)
"refreshToken": authorizers.generate_jwt_refresh(user_id=user_id, tenant_id=tenant_id, iat=spot_jwt_r_iat,
aud=AUDIENCE, jwt_jti=spot_jwt_r_jti, for_spot=True),
"refreshTokenMaxAge": config("JWT_SPOT_REFRESH_EXPIRATION", cast=int) - (spot_jwt_iat - spot_jwt_r_iat)
}

View file
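Note: refreshTokenMaxAge above is the refresh token's remaining lifetime: the configured expiration minus the time already elapsed between the refresh token's iat and the newly issued access token's iat. A worked example with illustrative numbers:

    JWT_SPOT_REFRESH_EXPIRATION = 7 * 24 * 3600   # e.g. 7 days, in seconds
    spot_jwt_refresh_iat = 1_736_000_000          # when the refresh token was issued
    spot_jwt_iat = 1_736_003_600                  # now, when the new access token is issued

    # Remaining cookie lifetime = total lifetime minus the hour already consumed.
    refresh_token_max_age = JWT_SPOT_REFRESH_EXPIRATION - (spot_jwt_iat - spot_jwt_refresh_iat)
    assert refresh_token_max_age == 7 * 24 * 3600 - 3600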

@ -1,14 +1,13 @@
import json
import secrets
from typing import Optional
from decouple import config
from fastapi import BackgroundTasks
from pydantic import BaseModel, model_validator
from pydantic import BaseModel
import schemas
from chalicelib.core import authorizers
from chalicelib.core import tenants, spot
from chalicelib.core import authorizers, metadata
from chalicelib.core import tenants, spot, scope
from chalicelib.utils import email_helper
from chalicelib.utils import helper
from chalicelib.utils import pg_client
@ -84,6 +83,7 @@ def restore_member(user_id, email, invitation_token, admin, name, owner=False):
"name": name, "invitation_token": invitation_token})
cur.execute(query)
result = cur.fetchone()
cur.execute(query)
result["created_at"] = TimeUTC.datetime_to_timestamp(result["created_at"])
return helper.dict_to_camel_case(result)
@ -284,7 +284,7 @@ def edit_member(user_id_to_update, tenant_id, changes: schemas.EditMemberSchema,
if editor_id != user_id_to_update:
admin = get_user_role(tenant_id=tenant_id, user_id=editor_id)
if not admin["superAdmin"] and not admin["admin"]:
return {"errors": ["unauthorized, you must have admin privileges"]}
return {"errors": ["unauthorized"]}
if admin["admin"] and user["superAdmin"]:
return {"errors": ["only the owner can edit his own details"]}
else:
@ -552,35 +552,14 @@ def refresh_auth_exists(user_id, jwt_jti=None):
return r is not None
class FullLoginJWTs(BaseModel):
class ChangeJwt(BaseModel):
jwt_iat: int
jwt_refresh_jti: str
jwt_refresh_jti: int
jwt_refresh_iat: int
spot_jwt_iat: int
spot_jwt_refresh_jti: str
spot_jwt_refresh_jti: int
spot_jwt_refresh_iat: int
@model_validator(mode="before")
@classmethod
def _transform_data(cls, values):
if values.get("jwt_refresh_jti") is not None:
values["jwt_refresh_jti"] = str(values["jwt_refresh_jti"])
if values.get("spot_jwt_refresh_jti") is not None:
values["spot_jwt_refresh_jti"] = str(values["spot_jwt_refresh_jti"])
return values
class RefreshLoginJWTs(FullLoginJWTs):
spot_jwt_iat: Optional[int] = None
spot_jwt_refresh_jti: Optional[str] = None
spot_jwt_refresh_iat: Optional[int] = None
class RefreshSpotJWTs(FullLoginJWTs):
jwt_iat: Optional[int] = None
jwt_refresh_jti: Optional[str] = None
jwt_refresh_iat: Optional[int] = None
def change_jwt_iat_jti(user_id):
with pg_client.PostgresClient() as cur:
@ -601,7 +580,7 @@ def change_jwt_iat_jti(user_id):
{"user_id": user_id})
cur.execute(query)
row = cur.fetchone()
return FullLoginJWTs(**row)
return ChangeJwt(**row)
def refresh_jwt_iat_jti(user_id):
@ -616,7 +595,7 @@ def refresh_jwt_iat_jti(user_id):
{"user_id": user_id})
cur.execute(query)
row = cur.fetchone()
return RefreshLoginJWTs(**row)
return row.get("jwt_iat"), row.get("jwt_refresh_jti"), row.get("jwt_refresh_iat")
def authenticate(email, password, for_change_password=False) -> dict | bool | None:
@ -648,12 +627,9 @@ def authenticate(email, password, for_change_password=False) -> dict | bool | No
response = {
"jwt": authorizers.generate_jwt(user_id=r['userId'], tenant_id=r['tenantId'], iat=j_r.jwt_iat,
aud=AUDIENCE),
"refreshToken": authorizers.generate_jwt_refresh(user_id=r['userId'],
tenant_id=r['tenantId'],
iat=j_r.jwt_refresh_iat,
aud=AUDIENCE,
jwt_jti=j_r.jwt_refresh_jti,
for_spot=False),
"refreshToken": authorizers.generate_jwt_refresh(user_id=r['userId'], tenant_id=r['tenantId'],
iat=j_r.jwt_refresh_iat, aud=AUDIENCE,
jwt_jti=j_r.jwt_refresh_jti),
"refreshTokenMaxAge": config("JWT_REFRESH_EXPIRATION", cast=int),
"email": email,
"spotJwt": authorizers.generate_jwt(user_id=r['userId'], tenant_id=r['tenantId'],
@ -684,13 +660,13 @@ def logout(user_id: int):
def refresh(user_id: int, tenant_id: int = -1) -> dict:
j = refresh_jwt_iat_jti(user_id=user_id)
jwt_iat, jwt_r_jti, jwt_r_iat = refresh_jwt_iat_jti(user_id=user_id)
return {
"jwt": authorizers.generate_jwt(user_id=user_id, tenant_id=tenant_id, iat=j.jwt_iat,
"jwt": authorizers.generate_jwt(user_id=user_id, tenant_id=tenant_id, iat=jwt_iat,
aud=AUDIENCE),
"refreshToken": authorizers.generate_jwt_refresh(user_id=user_id, tenant_id=tenant_id, iat=j.jwt_refresh_iat,
aud=AUDIENCE, jwt_jti=j.jwt_refresh_jti),
"refreshTokenMaxAge": config("JWT_REFRESH_EXPIRATION", cast=int) - (j.jwt_iat - j.jwt_refresh_iat),
"refreshToken": authorizers.generate_jwt_refresh(user_id=user_id, tenant_id=tenant_id, iat=jwt_r_iat,
aud=AUDIENCE, jwt_jti=jwt_r_jti),
"refreshTokenMaxAge": config("JWT_REFRESH_EXPIRATION", cast=int) - (jwt_iat - jwt_r_iat)
}

View file
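Note: one side of the hunk above wraps the fetched row in a FullLoginJWTs model whose mode="before" validator coerces the integer jti columns coming from PostgreSQL into strings before field validation. A minimal standalone sketch of the same pattern, with shortened field names for illustration:

    from pydantic import BaseModel, model_validator

    class LoginJWTs(BaseModel):
        jwt_iat: int
        jwt_refresh_jti: str  # stored as an integer in the DB, used as a string in the JWT

        @model_validator(mode="before")
        @classmethod
        def _coerce_jti(cls, values):
            # Convert the numeric jti to str so it validates against the str field.
            if values.get("jwt_refresh_jti") is not None:
                values["jwt_refresh_jti"] = str(values["jwt_refresh_jti"])
            return values

    print(LoginJWTs(jwt_iat=1736000000, jwt_refresh_jti=42).jwt_refresh_jti)  # "42"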

@ -41,7 +41,8 @@ class ClickHouseClient:
keys = tuple(x for x, y in results[1])
return [dict(zip(keys, i)) for i in results[0]]
except Exception as err:
logger.error("--------- CH EXCEPTION -----------", exc_info=err)
logger.error("--------- CH EXCEPTION -----------")
logger.error(err)
logger.error("--------- CH QUERY EXCEPTION -----------")
logger.error(self.format(query=query, parameters=parameters)
.replace('\n', '\\n')

View file
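Note: the two logging styles in the hunk above are not equivalent. logger.error(msg, exc_info=err) attaches the exception and its traceback to the record, while passing the exception as a positional argument only works as a %-formatting argument, i.e. when the message contains a placeholder. A small illustration:

    import logging

    logging.basicConfig(level=logging.ERROR)
    logger = logging.getLogger(__name__)

    try:
        1 / 0
    except Exception as err:
        logger.error("--------- CH EXCEPTION -----------", exc_info=err)  # message + full traceback
        logger.error("query failed: %s", err)                             # message with the error text only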

@ -6,6 +6,7 @@ from queue import Queue, Empty
import clickhouse_connect
from clickhouse_connect.driver.query import QueryContext
from clickhouse_connect.driver.exceptions import DatabaseError
from decouple import config
logger = logging.getLogger(__name__)
@ -31,13 +32,9 @@ if config("CH_COMPRESSION", cast=bool, default=True):
extra_args["compression"] = "lz4"
def transform_result(self, original_function):
def transform_result(original_function):
@wraps(original_function)
def wrapper(*args, **kwargs):
if kwargs.get("parameters"):
logger.debug(str.encode(self.format(query=kwargs.get("query", ""), parameters=kwargs.get("parameters"))))
elif len(args) > 0:
logger.debug(str.encode(args[0]))
result = original_function(*args, **kwargs)
if isinstance(result, clickhouse_connect.driver.query.QueryResult):
column_names = result.column_names
@ -111,14 +108,14 @@ def make_pool():
try:
CH_pool.close_all()
except Exception as error:
logger.error("Error while closing all connexions to CH", exc_info=error)
logger.error("Error while closing all connexions to CH", error)
try:
CH_pool = ClickHouseConnectionPool(min_size=config("CH_MINCONN", cast=int, default=4),
max_size=config("CH_MAXCONN", cast=int, default=8))
if CH_pool is not None:
logger.info("Connection pool created successfully for CH")
except ConnectionError as error:
logger.error("Error while connecting to CH", exc_info=error)
logger.error("Error while connecting to CH", error)
if RETRY < RETRY_MAX:
RETRY += 1
logger.info(f"waiting for {RETRY_INTERVAL}s before retry n°{RETRY}")
@ -143,17 +140,19 @@ class ClickHouseClient:
else:
self.__client = CH_pool.get_connection()
self.__client.execute = transform_result(self, self.__client.query)
self.__client.execute = transform_result(self.__client.query)
self.__client.format = self.format
def __enter__(self):
return self.__client
def format(self, query, parameters=None):
if parameters:
ctx = QueryContext(query=query, parameters=parameters)
return ctx.final_query
return query
def format(self, query, *, parameters=None):
if parameters is None:
return query
return query % {
key: f"'{value}'" if isinstance(value, str) else value
for key, value in parameters.items()
}
def __exit__(self, *args):
if config('CH_POOL', cast=bool, default=True):
@ -175,4 +174,4 @@ async def terminate():
CH_pool.close_all()
logger.info("Closed all connexions to CH")
except Exception as error:
logger.error("Error while closing all connexions to CH", exc_info=error)
logger.error("Error while closing all connexions to CH", error)

View file
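Note: of the two format() variants above, the QueryContext-based one delegates parameter substitution to clickhouse-connect itself, while the dict-comprehension one only quotes string values. A usage sketch of the QueryContext path, intended for debug logging rather than execution (the query and parameter are illustrative):

    from clickhouse_connect.driver.query import QueryContext

    def render(query: str, parameters: dict | None = None) -> str:
        # Let clickhouse-connect substitute %(name)s-style parameters into the text.
        if not parameters:
            return query
        return QueryContext(query=query, parameters=parameters).final_query

    print(render("SELECT * FROM sessions WHERE project_id = %(pid)s", {"pid": 7}))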

@ -1,36 +0,0 @@
import schemas
from fastapi import HTTPException, Depends
from or_dependencies import OR_context
def validate_contextual_payload(
item: schemas.SessionsSearchPayloadSchema,
context: schemas.CurrentContext = Depends(OR_context)
) -> schemas.SessionsSearchPayloadSchema:
if context.project.platform == "web":
for e in item.events:
if e.type in [schemas.EventType.CLICK_MOBILE,
schemas.EventType.INPUT_MOBILE,
schemas.EventType.VIEW_MOBILE,
schemas.EventType.CUSTOM_MOBILE,
schemas.EventType.REQUEST_MOBILE,
schemas.EventType.ERROR_MOBILE,
schemas.EventType.SWIPE_MOBILE]:
raise HTTPException(status_code=422,
detail=f"Mobile event '{e.type}' not supported for web project")
else:
for e in item.events:
if e.type in [schemas.EventType.CLICK,
schemas.EventType.INPUT,
schemas.EventType.LOCATION,
schemas.EventType.CUSTOM,
schemas.EventType.REQUEST,
schemas.EventType.REQUEST_DETAILS,
schemas.EventType.GRAPHQL,
schemas.EventType.STATE_ACTION,
schemas.EventType.ERROR,
schemas.EventType.TAG]:
raise HTTPException(status_code=422,
detail=f"Web event '{e.type}' not supported for mobile project")
return item

View file
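Note: the validator above (present on only one side of this compare) is designed to plug into FastAPI as a dependency that replaces a plain Body(...) payload, so mismatched mobile/web event types are rejected with a 422 before the search runs. A self-contained, simplified sketch of the wiring; the schema, check, and route here are deliberately reduced and hypothetical:

    from fastapi import Depends, FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI()

    class SearchPayload(BaseModel):
        events: list[dict] = []

    def validate_payload(item: SearchPayload) -> SearchPayload:
        # Reject mobile-only event types on a (hypothetical) web project.
        for e in item.events:
            if e.get("type", "").endswith("Mobile"):
                raise HTTPException(status_code=422,
                                    detail=f"Mobile event '{e['type']}' not supported")
        return item

    @app.post("/sessions/search")
    def search_sessions(data: SearchPayload = Depends(validate_payload)):
        # `data` has already passed the contextual check here.
        return {"data": data.model_dump()}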

@ -0,0 +1,14 @@
from chalicelib.core.sourcemaps import sourcemaps
def format_first_stack_frame(error):
error["stack"] = sourcemaps.format_payload(error.pop("payload"), truncate_to_first=True)
for s in error["stack"]:
for c in s.get("context", []):
for sci, sc in enumerate(c):
if isinstance(sc, str) and len(sc) > 1000:
c[sci] = sc[:1000]
# convert bytes to string:
if isinstance(s["filename"], bytes):
s["filename"] = s["filename"].decode("utf-8")
return error

View file

@ -8,7 +8,7 @@ logger = logging.getLogger(__name__)
def get_main_events_table(timestamp=0, platform="web"):
if platform == "web":
return "product_analytics.events"
return "experimental.events"
else:
return "experimental.ios_events"
@ -16,7 +16,6 @@ def get_main_events_table(timestamp=0, platform="web"):
def get_main_sessions_table(timestamp=0):
return "experimental.sessions"
def get_user_favorite_sessions_table(timestamp=0):
return "experimental.user_favorite_sessions"

View file

@ -334,5 +334,3 @@ def cast_session_id_to_string(data):
for key in keys:
data[key] = cast_session_id_to_string(data[key])
return data

View file

@ -1,26 +1,7 @@
from typing import List
def get_step_size(startTimestamp, endTimestamp, density, decimal=False, factor=1000):
if endTimestamp == 0:
raise Exception("endTimestamp cannot be 0 in order to get step size")
step_size = (endTimestamp // factor - startTimestamp // factor)
if density <= 1:
return step_size
if decimal:
return step_size / density
return step_size // density
def complete_missing_steps(rows: List[dict], start_timestamp: int, end_timestamp: int, step: int, neutral: dict,
time_key: str = "timestamp") -> List[dict]:
result = []
i = 0
for t in range(start_timestamp, end_timestamp, step):
if i >= len(rows) or rows[i][time_key] > t:
neutral[time_key] = t
result.append(neutral.copy())
elif i < len(rows) and rows[i][time_key] == t:
result.append(rows[i])
i += 1
return result
return step_size // (density - 1)

View file
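Note: to make the get_step_size arithmetic above concrete: with millisecond timestamps and factor=1000, the span is first reduced to seconds, then split into buckets; the two variants differ in dividing by density versus density - 1, and complete_missing_steps pads buckets that have no row with a neutral one. A worked example with illustrative values:

    start_ms, end_ms, density, factor = 0, 70_000, 7, 1000

    span_seconds = end_ms // factor - start_ms // factor      # 70
    step_by_density = span_seconds // density                  # 10 (one variant)
    step_by_density_minus_1 = span_seconds // (density - 1)    # 11 (the other variant)
    print(span_seconds, step_by_density, step_by_density_minus_1)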

@ -19,16 +19,6 @@ PG_CONFIG = dict(_PG_CONFIG)
if config("PG_TIMEOUT", cast=int, default=0) > 0:
PG_CONFIG["options"] = f"-c statement_timeout={config('PG_TIMEOUT', cast=int) * 1000}"
if config('PG_POOL', cast=bool, default=True):
PG_CONFIG = {
**PG_CONFIG,
# Keepalive settings
"keepalives": 1, # Enable keepalives
"keepalives_idle": 300, # Seconds before sending keepalive
"keepalives_interval": 10, # Seconds between keepalives
"keepalives_count": 3 # Number of keepalives before giving up
}
class ORThreadedConnectionPool(psycopg2.pool.ThreadedConnectionPool):
def __init__(self, minconn, maxconn, *args, **kwargs):
@ -65,7 +55,6 @@ RETRY = 0
def make_pool():
if not config('PG_POOL', cast=bool, default=True):
logger.info("PG_POOL is disabled, not creating a new one")
return
global postgreSQL_pool
global RETRY
@ -73,7 +62,7 @@ def make_pool():
try:
postgreSQL_pool.closeall()
except (Exception, psycopg2.DatabaseError) as error:
logger.error("Error while closing all connexions to PostgreSQL", exc_info=error)
logger.error("Error while closing all connexions to PostgreSQL", error)
try:
postgreSQL_pool = ORThreadedConnectionPool(config("PG_MINCONN", cast=int, default=4),
config("PG_MAXCONN", cast=int, default=8),
@ -81,10 +70,10 @@ def make_pool():
if postgreSQL_pool is not None:
logger.info("Connection pool created successfully")
except (Exception, psycopg2.DatabaseError) as error:
logger.error("Error while connecting to PostgreSQL", exc_info=error)
logger.error("Error while connecting to PostgreSQL", error)
if RETRY < RETRY_MAX:
RETRY += 1
logger.info(f"Waiting for {RETRY_INTERVAL}s before retry n°{RETRY}")
logger.info(f"waiting for {RETRY_INTERVAL}s before retry n°{RETRY}")
time.sleep(RETRY_INTERVAL)
make_pool()
else:
@ -108,17 +97,13 @@ class PostgresClient:
elif long_query:
long_config = dict(_PG_CONFIG)
long_config["application_name"] += "-LONG"
if config('PG_TIMEOUT_LONG', cast=int, default=1) > 0:
long_config["options"] = f"-c statement_timeout=" \
f"{config('PG_TIMEOUT_LONG', cast=int, default=5 * 60) * 1000}"
else:
logger.info("Disabled timeout for long query")
long_config["options"] = f"-c statement_timeout=" \
f"{config('pg_long_timeout', cast=int, default=5 * 60) * 1000}"
self.connection = psycopg2.connect(**long_config)
elif not use_pool or not config('PG_POOL', cast=bool, default=True):
single_config = dict(_PG_CONFIG)
single_config["application_name"] += "-NOPOOL"
if config('PG_TIMEOUT', cast=int, default=1) > 0:
single_config["options"] = f"-c statement_timeout={config('PG_TIMEOUT', cast=int, default=30) * 1000}"
single_config["options"] = f"-c statement_timeout={config('PG_TIMEOUT', cast=int, default=30) * 1000}"
self.connection = psycopg2.connect(**single_config)
else:
self.connection = postgreSQL_pool.getconn()
@ -138,7 +123,7 @@ class PostgresClient:
if not self.use_pool or self.long_query or self.unlimited_query:
self.connection.close()
except Exception as error:
logger.error("Error while committing/closing PG-connection", exc_info=error)
logger.error("Error while committing/closing PG-connection", error)
if str(error) == "connection already closed" \
and self.use_pool \
and not self.long_query \
@ -165,7 +150,7 @@ class PostgresClient:
try:
self.connection.rollback()
except psycopg2.InterfaceError as e:
logger.error("!!! Error while rollbacking connection", exc_info=e)
logger.error("!!! Error while rollbacking connection", e)
logger.error("!!! Trying to recreate the cursor")
self.recreate_cursor()
raise error
@ -176,18 +161,19 @@ class PostgresClient:
try:
self.connection.rollback()
except Exception as error:
logger.error("Error while rollbacking connection for recreation", exc_info=error)
logger.error("Error while rollbacking connection for recreation", error)
try:
self.cursor.close()
except Exception as error:
logger.error("Error while closing cursor for recreation", exc_info=error)
logger.error("Error while closing cursor for recreation", error)
self.cursor = None
return self.__enter__()
async def init():
logger.info(f">use PG_POOL:{config('PG_POOL', default=True)}")
make_pool()
if config('PG_POOL', cast=bool, default=True):
make_pool()
async def terminate():
@ -197,4 +183,4 @@ async def terminate():
postgreSQL_pool.closeall()
logger.info("Closed all connexions to PostgreSQL")
except (Exception, psycopg2.DatabaseError) as error:
logger.error("Error while closing all connexions to PostgreSQL", exc_info=error)
logger.error("Error while closing all connexions to PostgreSQL", error)

View file
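Note: as used throughout this diff, PostgresClient is a context manager that yields a psycopg2 cursor; queries are rendered with cur.mogrify and then executed. A minimal usage sketch with an illustrative table and parameter:

    from chalicelib.utils import pg_client

    with pg_client.PostgresClient() as cur:
        query = cur.mogrify(
            "SELECT note_id, message FROM public.sessions_notes WHERE project_id = %(project_id)s;",
            {"project_id": 1},
        )
        cur.execute(query)
        rows = cur.fetchall()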

@ -3,42 +3,33 @@ from enum import Enum
import schemas
def get_sql_operator(op: Union[schemas.SearchEventOperator, schemas.ClickEventExtraOperator, schemas.MathOperator]):
if isinstance(op, Enum):
op = op.value
def get_sql_operator(op: Union[schemas.SearchEventOperator, schemas.ClickEventExtraOperator]):
return {
schemas.SearchEventOperator.IS.value: "=",
schemas.SearchEventOperator.ON.value: "=",
schemas.SearchEventOperator.ON_ANY.value: "IN",
schemas.SearchEventOperator.IS_NOT.value: "!=",
schemas.SearchEventOperator.NOT_ON.value: "!=",
schemas.SearchEventOperator.CONTAINS.value: "ILIKE",
schemas.SearchEventOperator.NOT_CONTAINS.value: "NOT ILIKE",
schemas.SearchEventOperator.STARTS_WITH.value: "ILIKE",
schemas.SearchEventOperator.ENDS_WITH.value: "ILIKE",
schemas.SearchEventOperator.IS: "=",
schemas.SearchEventOperator.ON: "=",
schemas.SearchEventOperator.ON_ANY: "IN",
schemas.SearchEventOperator.IS_NOT: "!=",
schemas.SearchEventOperator.NOT_ON: "!=",
schemas.SearchEventOperator.CONTAINS: "ILIKE",
schemas.SearchEventOperator.NOT_CONTAINS: "NOT ILIKE",
schemas.SearchEventOperator.STARTS_WITH: "ILIKE",
schemas.SearchEventOperator.ENDS_WITH: "ILIKE",
# Selector operators:
schemas.ClickEventExtraOperator.IS.value: "=",
schemas.ClickEventExtraOperator.IS_NOT.value: "!=",
schemas.ClickEventExtraOperator.CONTAINS.value: "ILIKE",
schemas.ClickEventExtraOperator.NOT_CONTAINS.value: "NOT ILIKE",
schemas.ClickEventExtraOperator.STARTS_WITH.value: "ILIKE",
schemas.ClickEventExtraOperator.ENDS_WITH.value: "ILIKE",
schemas.MathOperator.GREATER.value: ">",
schemas.MathOperator.GREATER_EQ.value: ">=",
schemas.MathOperator.LESS.value: "<",
schemas.MathOperator.LESS_EQ.value: "<=",
schemas.ClickEventExtraOperator.IS: "=",
schemas.ClickEventExtraOperator.IS_NOT: "!=",
schemas.ClickEventExtraOperator.CONTAINS: "ILIKE",
schemas.ClickEventExtraOperator.NOT_CONTAINS: "NOT ILIKE",
schemas.ClickEventExtraOperator.STARTS_WITH: "ILIKE",
schemas.ClickEventExtraOperator.ENDS_WITH: "ILIKE",
}.get(op, "=")
def is_negation_operator(op: schemas.SearchEventOperator):
if isinstance(op, Enum):
op = op.value
return op in [schemas.SearchEventOperator.IS_NOT.value,
schemas.SearchEventOperator.NOT_ON.value,
schemas.SearchEventOperator.NOT_CONTAINS.value,
schemas.ClickEventExtraOperator.IS_NOT.value,
schemas.ClickEventExtraOperator.NOT_CONTAINS.value]
return op in [schemas.SearchEventOperator.IS_NOT,
schemas.SearchEventOperator.NOT_ON,
schemas.SearchEventOperator.NOT_CONTAINS,
schemas.ClickEventExtraOperator.IS_NOT,
schemas.ClickEventExtraOperator.NOT_CONTAINS]
def reverse_sql_operator(op):
@ -68,12 +59,3 @@ def isAny_opreator(op: schemas.SearchEventOperator):
def isUndefined_operator(op: schemas.SearchEventOperator):
return op in [schemas.SearchEventOperator.IS_UNDEFINED]
def single_value(values):
if values is not None and isinstance(values, list):
for i, v in enumerate(values):
if isinstance(v, Enum):
values[i] = v.value
return values

View file
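Note: a quick illustration of how the operator helpers above are used when composing WHERE clauses; the column name is hypothetical:

    import schemas
    from chalicelib.utils import sql_helper as sh

    op = schemas.SearchEventOperator.CONTAINS
    sql_op = sh.get_sql_operator(op)        # "ILIKE"
    negated = sh.is_negation_operator(op)   # False

    condition = f"url_path {sql_op} %(value)s"  # -> "url_path ILIKE %(value)s"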

@ -1,5 +1,4 @@
#!/bin/sh
export TZ=UTC
export ASSIST_KEY=ignore
export CH_POOL=false
uvicorn app:app --host 0.0.0.0 --port 8888 --log-level ${S_LOGLEVEL:-warning}

View file

@ -63,9 +63,4 @@ sessions_region=us-east-1
SITE_URL=http://127.0.0.1:3333
sourcemaps_bucket=
sourcemaps_reader=http://127.0.0.1:3000/sourcemaps
TZ=UTC
EXP_CH_DRIVER=true
EXP_AUTOCOMPLETE=true
EXP_ALERTS=true
EXP_ERRORS_SEARCH=true
EXP_METRICS=true
TZ=UTC

View file

@ -1,17 +1,19 @@
urllib3==2.3.0
urllib3==2.2.3
requests==2.32.3
boto3==1.36.12
boto3==1.35.86
pyjwt==2.10.1
psycopg2-binary==2.9.10
psycopg[pool,binary]==3.2.4
psycopg[pool,binary]==3.2.3
clickhouse-driver[lz4]==0.2.9
clickhouse-connect==0.8.15
elasticsearch==8.17.1
clickhouse-connect==0.8.11
elasticsearch==8.17.0
jira==3.8.0
cachetools==5.5.1
cachetools==5.5.0
fastapi==0.115.8
fastapi==0.115.6
uvicorn[standard]==0.34.0
python-decouple==3.8
pydantic[email]==2.10.6
pydantic[email]==2.10.4
apscheduler==3.11.0

View file

@ -1,19 +1,21 @@
urllib3==2.3.0
urllib3==2.2.3
requests==2.32.3
boto3==1.36.12
boto3==1.35.86
pyjwt==2.10.1
psycopg2-binary==2.9.10
psycopg[pool,binary]==3.2.4
psycopg[pool,binary]==3.2.3
clickhouse-driver[lz4]==0.2.9
clickhouse-connect==0.8.15
elasticsearch==8.17.1
clickhouse-connect==0.8.11
elasticsearch==8.17.0
jira==3.8.0
cachetools==5.5.1
cachetools==5.5.0
fastapi==0.115.8
fastapi==0.115.6
uvicorn[standard]==0.34.0
python-decouple==3.8
pydantic[email]==2.10.6
pydantic[email]==2.10.4
apscheduler==3.11.0
redis==5.2.1

View file

@ -4,9 +4,8 @@ from decouple import config
from fastapi import Depends, Body, BackgroundTasks
import schemas
from chalicelib.core import events, projects, issues, metadata, reset_password, log_tools, \
from chalicelib.core import sourcemaps, events, projects, issues, metadata, reset_password, log_tools, \
announcements, weekly_report, assist, mobile, tenants, boarding, notifications, webhook, users, saved_search, tags
from chalicelib.core.sourcemaps import sourcemaps
from chalicelib.core.metrics import custom_metrics
from chalicelib.core.alerts import alerts
from chalicelib.core.autocomplete import autocomplete
@ -33,7 +32,8 @@ def events_search(projectId: int, q: Optional[str] = None,
context: schemas.CurrentContext = Depends(OR_context)):
if type and (not q or len(q) == 0) \
and (autocomplete.is_top_supported(type)):
return autocomplete.get_top_values(project_id=projectId, event_type=type, event_key=key)
# return autocomplete.get_top_values(project_id=projectId, event_type=type, event_key=key)
return autocomplete.get_top_values(projectId, type, event_key=key)
elif not q or len(q) == 0:
return {"data": []}
@ -79,7 +79,7 @@ def notify(projectId: int, integration: str, webhookId: int, source: str, source
args = {"tenant_id": context.tenant_id,
"user": context.email, "comment": comment, "project_id": projectId,
"integration_id": webhookId,
"id": webhookId,
"project_name": context.project.name}
if integration == schemas.WebhookType.SLACK:
if source == "sessions":

View file

@ -7,17 +7,16 @@ from fastapi import HTTPException, status
from starlette.responses import RedirectResponse, FileResponse, JSONResponse, Response
import schemas
from chalicelib.core import assist, signup, feature_flags
from chalicelib.core import scope
from chalicelib.core import assist, signup, feature_flags
from chalicelib.core.metrics import heatmaps
from chalicelib.core.errors import errors, errors_details
from chalicelib.core.sessions import sessions, sessions_notes, sessions_replay, sessions_favorite, sessions_viewed, \
sessions_assignments, unprocessed_sessions, sessions_search
from chalicelib.core import tenants, users, projects, license
from chalicelib.core import webhook
from chalicelib.core.collaborations.collaboration_slack import Slack
from chalicelib.core.errors import errors, errors_details
from chalicelib.core.metrics import heatmaps
from chalicelib.core.sessions import sessions, sessions_notes, sessions_replay, sessions_favorite, sessions_viewed, \
sessions_assignments, unprocessed_sessions, sessions_search
from chalicelib.utils import captcha, smtp
from chalicelib.utils import contextual_validators
from chalicelib.utils import helper
from chalicelib.utils.TimeUTC import TimeUTC
from or_dependencies import OR_context, OR_role
@ -27,10 +26,7 @@ from routers.subs import spot
logger = logging.getLogger(__name__)
public_app, app, app_apikey = get_routers()
if config("LOCAL_DEV", cast=bool, default=False):
COOKIE_PATH = "/refresh"
else:
COOKIE_PATH = "/api/refresh"
COOKIE_PATH = "/api/refresh"
@public_app.get('/signup', tags=['signup'])
@ -256,19 +252,17 @@ def get_projects(context: schemas.CurrentContext = Depends(OR_context)):
@app.post('/{projectId}/sessions/search', tags=["sessions"])
def search_sessions(projectId: int, data: schemas.SessionsSearchPayloadSchema = \
Depends(contextual_validators.validate_contextual_payload),
def search_sessions(projectId: int, data: schemas.SessionsSearchPayloadSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):
data = sessions_search.search_sessions(data=data, project=context.project, user_id=context.user_id,
data = sessions_search.search_sessions(data=data, project_id=projectId, user_id=context.user_id,
platform=context.project.platform)
return {'data': data}
@app.post('/{projectId}/sessions/search/ids', tags=["sessions"])
def session_ids_search(projectId: int, data: schemas.SessionsSearchPayloadSchema = \
Depends(contextual_validators.validate_contextual_payload),
def session_ids_search(projectId: int, data: schemas.SessionsSearchPayloadSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):
data = sessions_search.search_sessions(data=data, project=context.project, user_id=context.user_id, ids_only=True,
data = sessions_search.search_sessions(data=data, project_id=projectId, user_id=context.user_id, ids_only=True,
platform=context.project.platform)
return {'data': data}
@ -473,17 +467,6 @@ def comment_assignment(projectId: int, sessionId: int, issueId: str,
}
@app.get('/{projectId}/notes/{noteId}', tags=["sessions", "notes"])
def get_note_by_id(projectId: int, noteId: int, context: schemas.CurrentContext = Depends(OR_context)):
data = sessions_notes.get_note(tenant_id=context.tenant_id, project_id=projectId, note_id=noteId,
user_id=context.user_id)
if "errors" in data:
return data
return {
'data': data
}
@app.post('/{projectId}/sessions/{sessionId}/notes', tags=["sessions", "notes"])
def create_note(projectId: int, sessionId: int, data: schemas.SessionNoteSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):

View file

@ -9,244 +9,172 @@ from routers.base import get_routers
public_app, app, app_apikey = get_routers()
@app.post("/{projectId}/dashboards", tags=["dashboard"])
@app.post('/{projectId}/dashboards', tags=["dashboard"])
def create_dashboards(projectId: int, data: schemas.CreateDashboardSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):
return dashboards.create_dashboard(
project_id=projectId, user_id=context.user_id, data=data
)
return dashboards.create_dashboard(project_id=projectId, user_id=context.user_id, data=data)
@app.get("/{projectId}/dashboards", tags=["dashboard"])
@app.get('/{projectId}/dashboards', tags=["dashboard"])
def get_dashboards(projectId: int, context: schemas.CurrentContext = Depends(OR_context)):
return {
"data": dashboards.get_dashboards(project_id=projectId, user_id=context.user_id)
}
return {"data": dashboards.get_dashboards(project_id=projectId, user_id=context.user_id)}
@app.get("/{projectId}/dashboards/{dashboardId}", tags=["dashboard"])
@app.get('/{projectId}/dashboards/{dashboardId}', tags=["dashboard"])
def get_dashboard(projectId: int, dashboardId: int, context: schemas.CurrentContext = Depends(OR_context)):
data = dashboards.get_dashboard(
project_id=projectId, user_id=context.user_id, dashboard_id=dashboardId
)
data = dashboards.get_dashboard(project_id=projectId, user_id=context.user_id, dashboard_id=dashboardId)
if data is None:
return {"errors": ["dashboard not found"]}
return {"data": data}
@app.put("/{projectId}/dashboards/{dashboardId}", tags=["dashboard"])
@app.put('/{projectId}/dashboards/{dashboardId}', tags=["dashboard"])
def update_dashboard(projectId: int, dashboardId: int, data: schemas.EditDashboardSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):
return {
"data": dashboards.update_dashboard(
project_id=projectId,
user_id=context.user_id,
dashboard_id=dashboardId,
data=data,
)
}
return {"data": dashboards.update_dashboard(project_id=projectId, user_id=context.user_id,
dashboard_id=dashboardId, data=data)}
@app.delete("/{projectId}/dashboards/{dashboardId}", tags=["dashboard"])
@app.delete('/{projectId}/dashboards/{dashboardId}', tags=["dashboard"])
def delete_dashboard(projectId: int, dashboardId: int, _=Body(None),
context: schemas.CurrentContext = Depends(OR_context)):
return dashboards.delete_dashboard(
project_id=projectId, user_id=context.user_id, dashboard_id=dashboardId
)
return dashboards.delete_dashboard(project_id=projectId, user_id=context.user_id, dashboard_id=dashboardId)
@app.get("/{projectId}/dashboards/{dashboardId}/pin", tags=["dashboard"])
@app.get('/{projectId}/dashboards/{dashboardId}/pin', tags=["dashboard"])
def pin_dashboard(projectId: int, dashboardId: int, context: schemas.CurrentContext = Depends(OR_context)):
return {
"data": dashboards.pin_dashboard(
project_id=projectId, user_id=context.user_id, dashboard_id=dashboardId
)
}
return {"data": dashboards.pin_dashboard(project_id=projectId, user_id=context.user_id, dashboard_id=dashboardId)}
@app.post("/{projectId}/dashboards/{dashboardId}/cards", tags=["cards"])
def add_card_to_dashboard(projectId: int, dashboardId: int, data: schemas.AddWidgetToDashboardPayloadSchema = Body(...),
@app.post('/{projectId}/dashboards/{dashboardId}/cards', tags=["cards"])
def add_card_to_dashboard(projectId: int, dashboardId: int,
data: schemas.AddWidgetToDashboardPayloadSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):
return {
"data": dashboards.add_widget(
project_id=projectId,
user_id=context.user_id,
dashboard_id=dashboardId,
data=data,
)
}
return {"data": dashboards.add_widget(project_id=projectId, user_id=context.user_id, dashboard_id=dashboardId,
data=data)}
@app.post("/{projectId}/dashboards/{dashboardId}/metrics", tags=["dashboard"])
@app.post('/{projectId}/dashboards/{dashboardId}/metrics', tags=["dashboard"])
# @app.put('/{projectId}/dashboards/{dashboardId}/metrics', tags=["dashboard"])
def create_metric_and_add_to_dashboard(projectId: int, dashboardId: int, data: schemas.CardSchema = Body(...),
def create_metric_and_add_to_dashboard(projectId: int, dashboardId: int,
data: schemas.CardSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):
return {
"data": dashboards.create_metric_add_widget(
project=context.project,
user_id=context.user_id,
dashboard_id=dashboardId,
data=data,
)
}
return {"data": dashboards.create_metric_add_widget(project=context.project, user_id=context.user_id,
dashboard_id=dashboardId, data=data)}
@app.put("/{projectId}/dashboards/{dashboardId}/widgets/{widgetId}", tags=["dashboard"])
@app.put('/{projectId}/dashboards/{dashboardId}/widgets/{widgetId}', tags=["dashboard"])
def update_widget_in_dashboard(projectId: int, dashboardId: int, widgetId: int,
data: schemas.UpdateWidgetPayloadSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):
return dashboards.update_widget(
project_id=projectId,
user_id=context.user_id,
dashboard_id=dashboardId,
widget_id=widgetId,
data=data,
)
return dashboards.update_widget(project_id=projectId, user_id=context.user_id, dashboard_id=dashboardId,
widget_id=widgetId, data=data)
@app.delete("/{projectId}/dashboards/{dashboardId}/widgets/{widgetId}", tags=["dashboard"])
@app.delete('/{projectId}/dashboards/{dashboardId}/widgets/{widgetId}', tags=["dashboard"])
def remove_widget_from_dashboard(projectId: int, dashboardId: int, widgetId: int, _=Body(None),
context: schemas.CurrentContext = Depends(OR_context)):
return dashboards.remove_widget(
project_id=projectId,
user_id=context.user_id,
dashboard_id=dashboardId,
widget_id=widgetId,
)
return dashboards.remove_widget(project_id=projectId, user_id=context.user_id, dashboard_id=dashboardId,
widget_id=widgetId)
@app.post("/{projectId}/cards/try", tags=["cards"])
@app.post('/{projectId}/cards/try', tags=["cards"])
def try_card(projectId: int, data: schemas.CardSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):
return {
"data": custom_metrics.get_chart(
project=context.project, data=data, user_id=context.user_id
)
}
return {"data": custom_metrics.get_chart(project=context.project, data=data, user_id=context.user_id)}
@app.post("/{projectId}/cards/try/sessions", tags=["cards"])
@app.post('/{projectId}/cards/try/sessions', tags=["cards"])
def try_card_sessions(projectId: int, data: schemas.CardSessionsSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):
data = custom_metrics.get_sessions(
project=context.project, user_id=context.user_id, data=data
)
data = custom_metrics.get_sessions(project_id=projectId, user_id=context.user_id, data=data)
return {"data": data}
@app.post("/{projectId}/cards/try/issues", tags=["cards"])
@app.post('/{projectId}/cards/try/issues', tags=["cards"])
def try_card_issues(projectId: int, data: schemas.CardSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):
return {
"data": custom_metrics.get_issues(
project=context.project, user_id=context.user_id, data=data
)
}
return {"data": custom_metrics.get_issues(project=context.project, user_id=context.user_id, data=data)}
@app.get("/{projectId}/cards", tags=["cards"])
@app.get('/{projectId}/cards', tags=["cards"])
def get_cards(projectId: int, context: schemas.CurrentContext = Depends(OR_context)):
return {
"data": custom_metrics.get_all(project_id=projectId, user_id=context.user_id)
}
return {"data": custom_metrics.get_all(project_id=projectId, user_id=context.user_id)}
@app.post("/{projectId}/cards", tags=["cards"])
@app.post('/{projectId}/cards', tags=["cards"])
def create_card(projectId: int, data: schemas.CardSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):
return custom_metrics.create_card(
project=context.project, user_id=context.user_id, data=data
)
return custom_metrics.create_card(project=context.project, user_id=context.user_id, data=data)
@app.post("/{projectId}/cards/search", tags=["cards"])
def search_cards(projectId: int, data: schemas.MetricSearchSchema = Body(...),
@app.post('/{projectId}/cards/search', tags=["cards"])
def search_cards(projectId: int, data: schemas.SearchCardsSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):
return {
"data": custom_metrics.search_metrics(
project_id=projectId, user_id=context.user_id, data=data
)
}
return {"data": custom_metrics.search_all(project_id=projectId, user_id=context.user_id, data=data)}
@app.get("/{projectId}/cards/{metric_id}", tags=["cards"])
@app.get('/{projectId}/cards/{metric_id}', tags=["cards"])
def get_card(projectId: int, metric_id: Union[int, str], context: schemas.CurrentContext = Depends(OR_context)):
if metric_id.isnumeric():
metric_id = int(metric_id)
else:
return {"errors": ["invalid card_id"]}
data = custom_metrics.get_card(
project_id=projectId, user_id=context.user_id, metric_id=metric_id
)
data = custom_metrics.get_card(project_id=projectId, user_id=context.user_id, metric_id=metric_id)
if data is None:
return {"errors": ["card not found"]}
return {"data": data}
@app.post("/{projectId}/cards/{metric_id}/sessions", tags=["cards"])
def get_card_sessions(projectId: int, metric_id: int, data: schemas.CardSessionsSchema = Body(...),
@app.post('/{projectId}/cards/{metric_id}/sessions', tags=["cards"])
def get_card_sessions(projectId: int, metric_id: int,
data: schemas.CardSessionsSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):
data = custom_metrics.get_sessions_by_card_id(
project=context.project, user_id=context.user_id, metric_id=metric_id, data=data
)
data = custom_metrics.get_sessions_by_card_id(project_id=projectId, user_id=context.user_id, metric_id=metric_id,
data=data)
if data is None:
return {"errors": ["custom metric not found"]}
return {"data": data}
@app.post("/{projectId}/cards/{metric_id}/issues/{issueId}/sessions", tags=["dashboard"])
@app.post('/{projectId}/cards/{metric_id}/issues/{issueId}/sessions', tags=["dashboard"])
def get_metric_funnel_issue_sessions(projectId: int, metric_id: int, issueId: str,
data: schemas.CardSessionsSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):
data = custom_metrics.get_funnel_sessions_by_issue(
project_id=projectId,
user_id=context.user_id,
metric_id=metric_id,
issue_id=issueId,
data=data,
)
data = custom_metrics.get_funnel_sessions_by_issue(project_id=projectId, user_id=context.user_id,
metric_id=metric_id, issue_id=issueId, data=data)
if data is None:
return {"errors": ["custom metric not found"]}
return {"data": data}
@app.post("/{projectId}/cards/{metric_id}/chart", tags=["card"])
@app.post('/{projectId}/cards/{metric_id}/chart', tags=["card"])
def get_card_chart(projectId: int, metric_id: int, data: schemas.CardSessionsSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):
data = custom_metrics.make_chart_from_card(
project=context.project, user_id=context.user_id, metric_id=metric_id, data=data
)
data = custom_metrics.make_chart_from_card(project=context.project, user_id=context.user_id, metric_id=metric_id,
data=data)
return {"data": data}
@app.post("/{projectId}/cards/{metric_id}", tags=["dashboard"])
@app.post('/{projectId}/cards/{metric_id}', tags=["dashboard"])
def update_card(projectId: int, metric_id: int, data: schemas.CardSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):
data = custom_metrics.update_card(
project_id=projectId, user_id=context.user_id, metric_id=metric_id, data=data
)
data = custom_metrics.update_card(project_id=projectId, user_id=context.user_id, metric_id=metric_id, data=data)
if data is None:
return {"errors": ["custom metric not found"]}
return {"data": data}
@app.post("/{projectId}/cards/{metric_id}/status", tags=["dashboard"])
def update_card_state(projectId: int, metric_id: int, data: schemas.UpdateCardStatusSchema = Body(...),
@app.post('/{projectId}/cards/{metric_id}/status', tags=["dashboard"])
def update_card_state(projectId: int, metric_id: int,
data: schemas.UpdateCardStatusSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):
return {
"data": custom_metrics.change_state(
project_id=projectId,
user_id=context.user_id,
metric_id=metric_id,
status=data.active,
)
}
"data": custom_metrics.change_state(project_id=projectId, user_id=context.user_id, metric_id=metric_id,
status=data.active)}
@app.delete("/{projectId}/cards/{metric_id}", tags=["dashboard"])
def delete_card(projectId: int, metric_id: int, _=Body(None), context: schemas.CurrentContext = Depends(OR_context)):
return {
"data": custom_metrics.delete_card(
project_id=projectId, user_id=context.user_id, metric_id=metric_id
)
}
@app.delete('/{projectId}/cards/{metric_id}', tags=["dashboard"])
def delete_card(projectId: int, metric_id: int, _=Body(None),
context: schemas.CurrentContext = Depends(OR_context)):
return {"data": custom_metrics.delete_card(project_id=projectId, user_id=context.user_id, metric_id=metric_id)}

View file

@ -1,4 +1,3 @@
from decouple import config
from fastapi import Depends
from starlette.responses import JSONResponse, Response
@ -9,10 +8,7 @@ from routers.base import get_routers
public_app, app, app_apikey = get_routers(prefix="/spot", tags=["spot"])
if config("LOCAL_DEV", cast=bool, default=False):
COOKIE_PATH = "/spot/refresh"
else:
COOKIE_PATH = "/api/spot/refresh"
COOKIE_PATH = "/api/spot/refresh"
@app.get('/logout')

View file

@ -1,8 +1,7 @@
from fastapi import Depends, Body
import schemas
from chalicelib.core import events, jobs, projects
from chalicelib.core.sessions import sessions
from chalicelib.core import sessions, events, jobs, projects
from or_dependencies import OR_context
from routers.base import get_routers

View file

@ -133,7 +133,7 @@ class _TimedSchema(BaseModel):
class NotificationsViewSchema(_TimedSchema):
ids: List[int] = Field(default_factory=list)
ids: List[int] = Field(default=[])
startTimestamp: Optional[int] = Field(default=None)
endTimestamp: Optional[int] = Field(default=None)
@ -537,13 +537,6 @@ class RequestGraphqlFilterSchema(BaseModel):
value: List[Union[int, str]] = Field(...)
operator: Union[SearchEventOperator, MathOperator] = Field(...)
@model_validator(mode="before")
@classmethod
def _transform_data(cls, values):
if values.get("type") in [FetchFilterType.FETCH_DURATION, FetchFilterType.FETCH_STATUS_CODE]:
values["value"] = [int(v) for v in values["value"] if v is not None and str(v).isnumeric()]
return values
class SessionSearchEventSchema2(BaseModel):
is_event: Literal[True] = True
@ -552,7 +545,7 @@ class SessionSearchEventSchema2(BaseModel):
operator: Union[SearchEventOperator, ClickEventExtraOperator] = Field(...)
source: Optional[List[Union[ErrorSource, int, str]]] = Field(default=None)
sourceOperator: Optional[MathOperator] = Field(default=None)
filters: Optional[List[RequestGraphqlFilterSchema]] = Field(default_factory=list)
filters: Optional[List[RequestGraphqlFilterSchema]] = Field(default=[])
_remove_duplicate_values = field_validator('value', mode='before')(remove_duplicate_values)
_single_to_list_values = field_validator('value', mode='before')(single_to_list)
@ -586,7 +579,7 @@ class SessionSearchEventSchema2(BaseModel):
class SessionSearchFilterSchema(BaseModel):
is_event: Literal[False] = False
value: List[Union[IssueType, PlatformType, int, str]] = Field(default_factory=list)
value: List[Union[IssueType, PlatformType, int, str]] = Field(default=[])
type: FilterType = Field(...)
operator: Union[SearchEventOperator, MathOperator] = Field(...)
source: Optional[Union[ErrorSource, str]] = Field(default=None)
@ -665,8 +658,8 @@ Field(discriminator='is_event'), BeforeValidator(add_missing_is_event)]
class SessionsSearchPayloadSchema(_TimedSchema, _PaginatedSchema):
events: List[SessionSearchEventSchema2] = Field(default_factory=list, doc_hidden=True)
filters: List[GroupedFilterType] = Field(default_factory=list)
events: List[SessionSearchEventSchema2] = Field(default=[], doc_hidden=True)
filters: List[GroupedFilterType] = Field(default=[])
sort: str = Field(default="startTs")
order: SortOrderType = Field(default=SortOrderType.DESC)
events_order: Optional[SearchEventOrder] = Field(default=SearchEventOrder.THEN)
@ -823,7 +816,7 @@ Field(discriminator='is_event')]
class PathAnalysisSchema(_TimedSchema, _PaginatedSchema):
density: int = Field(default=7)
filters: List[ProductAnalyticsFilter] = Field(default_factory=list)
filters: List[ProductAnalyticsFilter] = Field(default=[])
type: Optional[str] = Field(default=None)
_transform_filters = field_validator('filters', mode='before') \
@ -851,21 +844,18 @@ class MetricTimeseriesViewType(str, Enum):
LINE_CHART = "lineChart"
AREA_CHART = "areaChart"
BAR_CHART = "barChart"
PROGRESS_CHART = "progressChart"
PIE_CHART = "pieChart"
METRIC_CHART = "metric"
PROGRESS_CHART = "progressChart"
TABLE_CHART = "table"
METRIC_CHART = "metric"
class MetricTableViewType(str, Enum):
TABLE_CHART = "table"
TABLE = "table"
class MetricOtherViewType(str, Enum):
OTHER_CHART = "chart"
COLUMN_CHART = "columnChart"
METRIC_CHART = "metric"
TABLE_CHART = "table"
LIST_CHART = "list"
@ -873,13 +863,37 @@ class MetricType(str, Enum):
TIMESERIES = "timeseries"
TABLE = "table"
FUNNEL = "funnel"
ERRORS = "errors"
PERFORMANCE = "performance"
RESOURCES = "resources"
WEB_VITAL = "webVitals"
PATH_ANALYSIS = "pathAnalysis"
RETENTION = "retention"
STICKINESS = "stickiness"
HEAT_MAP = "heatMap"
class MetricOfErrors(str, Enum):
DOMAINS_ERRORS_4XX = "domainsErrors4xx"
DOMAINS_ERRORS_5XX = "domainsErrors5xx"
ERRORS_PER_DOMAINS = "errorsPerDomains"
ERRORS_PER_TYPE = "errorsPerType"
IMPACTED_SESSIONS_BY_JS_ERRORS = "impactedSessionsByJsErrors"
RESOURCES_BY_PARTY = "resourcesByParty"
class MetricOfWebVitals(str, Enum):
AVG_SESSION_DURATION = "avgSessionDuration"
AVG_USED_JS_HEAP_SIZE = "avgUsedJsHeapSize"
AVG_VISITED_PAGES = "avgVisitedPages"
COUNT_REQUESTS = "countRequests"
COUNT_SESSIONS = "countSessions"
COUNT_USERS = "userCount"
SPEED_LOCATION = "speedLocation"
class MetricOfTable(str, Enum):
USER_OS = FilterType.USER_OS.value
USER_BROWSER = FilterType.USER_BROWSER.value
USER_DEVICE = FilterType.USER_DEVICE.value
USER_COUNTRY = FilterType.USER_COUNTRY.value
@ -914,10 +928,10 @@ class CardSessionsSchema(_TimedSchema, _PaginatedSchema):
startTimestamp: int = Field(default=TimeUTC.now(-7))
endTimestamp: int = Field(default=TimeUTC.now())
density: int = Field(default=7, ge=1, le=200)
series: List[CardSeriesSchema] = Field(default_factory=list)
series: List[CardSeriesSchema] = Field(default=[])
# events: List[SessionSearchEventSchema2] = Field(default_factory=list, doc_hidden=True)
filters: List[GroupedFilterType] = Field(default_factory=list)
# events: List[SessionSearchEventSchema2] = Field(default=[], doc_hidden=True)
filters: List[GroupedFilterType] = Field(default=[])
compare_to: Optional[List[str]] = Field(default=None)
@ -960,6 +974,36 @@ class CardSessionsSchema(_TimedSchema, _PaginatedSchema):
return self
# We don't need this as the UI is expecting filters to override the full series' filters
# @model_validator(mode="after")
# def __merge_out_filters_with_series(self):
# for f in self.filters:
# for s in self.series:
# found = False
#
# if f.is_event:
# sub = s.filter.events
# else:
# sub = s.filter.filters
#
# for e in sub:
# if f.type == e.type and f.operator == e.operator:
# found = True
# if f.is_event:
# # If extra event: append value
# for v in f.value:
# if v not in e.value:
# e.value.append(v)
# else:
# # If extra filter: override value
# e.value = f.value
# if not found:
# sub.append(f)
#
# self.filters = []
#
# return self
# UI is expecting filters to override the full series' filters
@model_validator(mode="after")
def __override_series_filters_with_outer_filters(self):
@ -993,11 +1037,15 @@ class __CardSchema(CardSessionsSchema):
view_type: Any
metric_type: MetricType = Field(...)
metric_of: Any
metric_value: List[IssueType] = Field(default_factory=list)
metric_value: List[IssueType] = Field(default=[])
# This is used to save the selected session for heatmaps
session_id: Optional[int] = Field(default=None)
# This is used to specify the number of top values for PathAnalysis
rows: int = Field(default=3, ge=1, le=10)
@computed_field
@property
def is_predefined(self) -> bool:
return self.metric_type in [MetricType.ERRORS, MetricType.PERFORMANCE,
MetricType.RESOURCES, MetricType.WEB_VITAL]
class CardTimeSeries(__CardSchema):
@ -1030,16 +1078,6 @@ class CardTable(__CardSchema):
values["metricValue"] = []
return values
@model_validator(mode="after")
def __enforce_AND_operator(self):
self.metric_of = MetricOfTable(self.metric_of)
if self.metric_of in (MetricOfTable.VISITED_URL, MetricOfTable.FETCH, \
MetricOfTable.VISITED_URL.value, MetricOfTable.FETCH.value):
for s in self.series:
if s.filter is not None:
s.filter.events_order = SearchEventOrder.AND
return self
@model_validator(mode="after")
def __transform(self):
self.metric_of = MetricOfTable(self.metric_of)
@ -1067,7 +1105,7 @@ class CardFunnel(__CardSchema):
def __enforce_default(cls, values):
if values.get("metricOf") and not MetricOfFunnels.has_value(values["metricOf"]):
values["metricOf"] = MetricOfFunnels.SESSION_COUNT
# values["viewType"] = MetricOtherViewType.OTHER_CHART
values["viewType"] = MetricOtherViewType.OTHER_CHART
if values.get("series") is not None and len(values["series"]) > 0:
values["series"] = [values["series"][0]]
return values
@ -1078,6 +1116,40 @@ class CardFunnel(__CardSchema):
return self
class CardErrors(__CardSchema):
metric_type: Literal[MetricType.ERRORS]
metric_of: MetricOfErrors = Field(default=MetricOfErrors.IMPACTED_SESSIONS_BY_JS_ERRORS)
view_type: MetricOtherViewType = Field(...)
@model_validator(mode="before")
@classmethod
def __enforce_default(cls, values):
values["series"] = []
return values
@model_validator(mode="after")
def __transform(self):
self.metric_of = MetricOfErrors(self.metric_of)
return self
class CardWebVital(__CardSchema):
metric_type: Literal[MetricType.WEB_VITAL]
metric_of: MetricOfWebVitals = Field(default=MetricOfWebVitals.AVG_VISITED_PAGES)
view_type: MetricOtherViewType = Field(...)
@model_validator(mode="before")
@classmethod
def __enforce_default(cls, values):
values["series"] = []
return values
@model_validator(mode="after")
def __transform(self):
self.metric_of = MetricOfWebVitals(self.metric_of)
return self
class CardHeatMap(__CardSchema):
metric_type: Literal[MetricType.HEAT_MAP]
metric_of: MetricOfHeatMap = Field(default=MetricOfHeatMap.HEAT_MAP_URL)
@ -1113,15 +1185,14 @@ class CardPathAnalysis(__CardSchema):
metric_type: Literal[MetricType.PATH_ANALYSIS]
metric_of: MetricOfPathAnalysis = Field(default=MetricOfPathAnalysis.session_count)
view_type: MetricOtherViewType = Field(...)
metric_value: List[ProductAnalyticsSelectedEventType] = Field(default_factory=list)
metric_value: List[ProductAnalyticsSelectedEventType] = Field(default=[])
density: int = Field(default=4, ge=2, le=10)
rows: int = Field(default=5, ge=1, le=10)
start_type: Literal["start", "end"] = Field(default="start")
start_point: List[PathAnalysisSubFilterSchema] = Field(default_factory=list)
excludes: List[PathAnalysisSubFilterSchema] = Field(default_factory=list)
start_point: List[PathAnalysisSubFilterSchema] = Field(default=[])
excludes: List[PathAnalysisSubFilterSchema] = Field(default=[])
series: List[CardPathAnalysisSeriesSchema] = Field(default_factory=list)
series: List[CardPathAnalysisSeriesSchema] = Field(default=[])
@model_validator(mode="before")
@classmethod
@ -1138,10 +1209,10 @@ class CardPathAnalysis(__CardSchema):
if len(s.value) == 0:
continue
start_point.append(s)
# self.metric_value.append(s.type)
self.metric_value.append(s.type)
self.start_point = start_point
# self.metric_value = remove_duplicate_values(self.metric_value)
self.metric_value = remove_duplicate_values(self.metric_value)
return self
@ -1167,7 +1238,9 @@ class CardPathAnalysis(__CardSchema):
# Union of cards-schemas that doesn't change between FOSS and EE
__cards_union_base = Union[
CardTimeSeries, CardTable, CardFunnel, CardHeatMap, CardPathAnalysis]
CardTimeSeries, CardTable, CardFunnel,
CardErrors, CardWebVital, CardHeatMap,
CardPathAnalysis]
CardSchema = ORUnion(__cards_union_base, discriminator='metric_type')
@ -1185,13 +1258,13 @@ class ProjectConditions(BaseModel):
condition_id: Optional[int] = Field(default=None)
name: str = Field(...)
capture_rate: int = Field(..., ge=0, le=100)
filters: List[GroupedFilterType] = Field(default_factory=list)
filters: List[GroupedFilterType] = Field(default=[])
class ProjectSettings(BaseModel):
rate: int = Field(..., ge=0, le=100)
conditional_capture: bool = Field(default=False)
conditions: List[ProjectConditions] = Field(default_factory=list)
conditions: List[ProjectConditions] = Field(default=[])
class CreateDashboardSchema(BaseModel):
@ -1199,7 +1272,7 @@ class CreateDashboardSchema(BaseModel):
description: Optional[str] = Field(default='')
is_public: bool = Field(default=False)
is_pinned: bool = Field(default=False)
metrics: Optional[List[int]] = Field(default_factory=list)
metrics: Optional[List[int]] = Field(default=[])
class EditDashboardSchema(CreateDashboardSchema):
@ -1208,7 +1281,7 @@ class EditDashboardSchema(CreateDashboardSchema):
class UpdateWidgetPayloadSchema(BaseModel):
config: dict = Field(default_factory=dict)
config: dict = Field(default={})
class AddWidgetToDashboardPayloadSchema(UpdateWidgetPayloadSchema):
@ -1306,20 +1379,16 @@ class IntegrationType(str, Enum):
class SearchNoteSchema(_PaginatedSchema):
sort: str = Field(default="createdAt")
order: SortOrderType = Field(default=SortOrderType.DESC)
tags: Optional[List[str]] = Field(default_factory=list)
tags: Optional[List[str]] = Field(default=[])
shared_only: bool = Field(default=False)
mine_only: bool = Field(default=False)
search: Optional[str] = Field(default=None)
class SessionNoteSchema(BaseModel):
message: Optional[str] = Field(None, max_length=250)
message: str = Field(..., min_length=2)
tag: Optional[str] = Field(default=None)
timestamp: int = Field(default=-1)
is_public: bool = Field(default=False)
thumbnail: Optional[str] = Field(default=None)
start_at: int = Field(default=None)
end_at: int = Field(default=None)
class SessionUpdateNoteSchema(SessionNoteSchema):
@ -1348,49 +1417,13 @@ class SearchCardsSchema(_PaginatedSchema):
query: Optional[str] = Field(default=None)
class MetricSortColumnType(str, Enum):
NAME = "name"
METRIC_TYPE = "metric_type"
METRIC_OF = "metric_of"
IS_PUBLIC = "is_public"
CREATED_AT = "created_at"
EDITED_AT = "edited_at"
class MetricFilterColumnType(str, Enum):
NAME = "name"
METRIC_TYPE = "metric_type"
METRIC_OF = "metric_of"
IS_PUBLIC = "is_public"
USER_ID = "user_id"
CREATED_AT = "created_at"
EDITED_AT = "edited_at"
class MetricListSort(BaseModel):
field: Optional[str] = Field(default=None)
order: Optional[str] = Field(default=SortOrderType.DESC)
class MetricFilter(BaseModel):
type: Optional[str] = Field(default=None)
query: Optional[str] = Field(default=None)
class MetricSearchSchema(_PaginatedSchema):
filter: Optional[MetricFilter] = Field(default=None)
sort: Optional[MetricListSort] = Field(default=MetricListSort())
shared_only: bool = Field(default=False)
mine_only: bool = Field(default=False)
class _HeatMapSearchEventRaw(SessionSearchEventSchema2):
type: Literal[EventType.LOCATION] = Field(...)
class HeatMapSessionsSearch(SessionsSearchPayloadSchema):
events: Optional[List[_HeatMapSearchEventRaw]] = Field(default_factory=list)
filters: List[Union[SessionSearchFilterSchema, _HeatMapSearchEventRaw]] = Field(default_factory=list)
events: Optional[List[_HeatMapSearchEventRaw]] = Field(default=[])
filters: List[Union[SessionSearchFilterSchema, _HeatMapSearchEventRaw]] = Field(default=[])
@model_validator(mode="before")
@classmethod
@ -1405,14 +1438,14 @@ class HeatMapSessionsSearch(SessionsSearchPayloadSchema):
class HeatMapFilterSchema(BaseModel):
value: List[Literal[IssueType.CLICK_RAGE, IssueType.DEAD_CLICK]] = Field(default_factory=list)
value: List[Literal[IssueType.CLICK_RAGE, IssueType.DEAD_CLICK]] = Field(default=[])
type: Literal[FilterType.ISSUE] = Field(...)
operator: Literal[SearchEventOperator.IS, MathOperator.EQUAL] = Field(...)
class GetHeatMapPayloadSchema(_TimedSchema):
url: Optional[str] = Field(default=None)
filters: List[HeatMapFilterSchema] = Field(default_factory=list)
filters: List[HeatMapFilterSchema] = Field(default=[])
click_rage: bool = Field(default=False)
operator: Literal[SearchEventOperator.IS, SearchEventOperator.STARTS_WITH,
SearchEventOperator.CONTAINS, SearchEventOperator.ENDS_WITH] = Field(default=SearchEventOperator.STARTS_WITH)
@ -1433,7 +1466,7 @@ class FeatureFlagVariant(BaseModel):
class FeatureFlagConditionFilterSchema(BaseModel):
is_event: Literal[False] = False
type: FilterType = Field(...)
value: List[str] = Field(default_factory=list, min_length=1)
value: List[str] = Field(default=[], min_length=1)
operator: Union[SearchEventOperator, MathOperator] = Field(...)
source: Optional[str] = Field(default=None)
sourceOperator: Optional[Union[SearchEventOperator, MathOperator]] = Field(default=None)
@ -1449,7 +1482,7 @@ class FeatureFlagCondition(BaseModel):
condition_id: Optional[int] = Field(default=None)
name: str = Field(...)
rollout_percentage: Optional[int] = Field(default=0)
filters: List[FeatureFlagConditionFilterSchema] = Field(default_factory=list)
filters: List[FeatureFlagConditionFilterSchema] = Field(default=[])
class SearchFlagsSchema(_PaginatedSchema):
@ -1476,8 +1509,8 @@ class FeatureFlagSchema(BaseModel):
flag_type: FeatureFlagType = Field(default=FeatureFlagType.SINGLE_VARIANT)
is_persist: Optional[bool] = Field(default=False)
is_active: Optional[bool] = Field(default=True)
conditions: List[FeatureFlagCondition] = Field(default_factory=list, min_length=1)
variants: List[FeatureFlagVariant] = Field(default_factory=list)
conditions: List[FeatureFlagCondition] = Field(default=[], min_length=1)
variants: List[FeatureFlagVariant] = Field(default=[])
class ModuleType(str, Enum):

View file
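Note: a recurring change in the schema file above is switching list-typed fields between Field(default=[]) and Field(default_factory=list). Assuming Pydantic v2 (as pinned in the requirements in this diff), plain defaults are copied per model instance, so both spellings give each instance its own list; default_factory additionally covers defaults that must be computed at instantiation time. A small check:

    from pydantic import BaseModel, Field

    class WithDefault(BaseModel):
        tags: list[str] = Field(default=[])

    class WithFactory(BaseModel):
        tags: list[str] = Field(default_factory=list)

    a, b = WithDefault(), WithDefault()
    a.tags.append("x")
    print(b.tags)              # expected: [] -- the default list is not shared between instances
    print(WithFactory().tags)  # expected: []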

@ -38,9 +38,6 @@ def force_is_event(events_enum: list[Type[Enum]]):
def fn(value: list):
if value is not None and isinstance(value, list):
for v in value:
if v.get("type") is None:
v["isEvent"] = False
continue
r = False
for en in events_enum:
if en.has_value(v["type"]) or en.has_value(v["type"].lower()):

Some files were not shown because too many files have changed in this diff.