Compare commits

794 commits

Author SHA1 Message Date
nick-delirium
90510aa33b ui: fix double metric selection in list 2025-06-06 16:19:54 +02:00
GitHub Action
96a70f5d41 Increment frontend chart version to v1.22.42 2025-06-04 11:41:56 +02:00
rjshrjndrn
d4a13edcf0 fix(actions): frontend image with proper tag
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-06-04 11:33:19 +02:00
GitHub Action
51fad91a22 Increment frontend chart version to v1.22.41 2025-06-04 10:48:50 +02:00
nick-delirium
36abcda1e1 ui: fix audioplayer start point 2025-06-04 10:39:08 +02:00
Mehdi Osman
dd5f464f73
Increment frontend chart version to v1.22.40 (#3479)
Co-authored-by: GitHub Action <action@github.com>
2025-06-03 16:22:12 +02:00
Delirium
f9ada41272
ui: recreate period on db visit (#3478) 2025-06-03 16:05:52 +02:00
rjshrjndrn
9e24a3583e feat(nginx): add integrations endpoint with CORS support
Add new /integrations/ location block that proxies requests to
integrations-openreplay:8080 service. Includes proper CORS headers
for cross-origin requests and WebSocket upgrade support.

- Rewrite /integrations/ path to root
- Configure proxy headers for forwarding
- Set connection timeouts for stability
- Add CORS headers for API access

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-06-02 10:55:50 +02:00
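
A minimal sketch of what the location block described above could look like; only the /integrations/ path and the integrations-openreplay:8080 upstream come from the commit, the specific header values and timeouts are assumptions:

```nginx
location /integrations/ {
    # Rewrite /integrations/ path to root before proxying
    rewrite ^/integrations/(.*)$ /$1 break;
    proxy_pass http://integrations-openreplay:8080;

    # Proxy headers for forwarding
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # WebSocket upgrade support
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    # Connection timeouts for stability
    proxy_connect_timeout 60s;
    proxy_read_timeout 300s;

    # CORS headers for cross-origin API access
    add_header Access-Control-Allow-Origin "$http_origin" always;
    add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS" always;
    add_header Access-Control-Allow-Headers "Authorization, Content-Type" always;
}
```
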
Taha Yassine Kraiem
0a3129d3cd fix(chalice): fixed JIRA integration 2025-05-30 15:25:41 +02:00
Mehdi Osman
99d61db9d9
Increment frontend chart version to v1.22.39 (#3460)
Co-authored-by: GitHub Action <action@github.com>
2025-05-30 15:07:29 +02:00
Delirium
133958622e
ui: fix alert create button (#3459) 2025-05-30 14:56:21 +02:00
GitHub Action
fb021f606f Increment frontend chart version to v1.22.38 2025-05-29 12:21:04 +02:00
rjshrjndrn
a2905fa8ed fix: move cd - command after git operations in patch workflow
Move the directory restoration command after the git operations to
ensure all git commands execute in the correct working directory
before returning to the previous directory.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-29 12:16:28 +02:00
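
A before/after sketch of the ordering fix; the surrounding commands are illustrative, not the actual workflow:

```bash
# The git operations must run inside the chart directory;
# `cd -` returns to the previous directory only once they are done.
cd "$CHARTS_DIR"
git add .
git commit -m "Increment chart version"
git push origin "$BRANCH"
cd -   # previously this ran before the git commands, breaking them
```
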
rjshrjndrn
beec2283fd refactor(ci): restructure patch-build workflow script
- Extract inline bash script into structured functions
- Add proper error handling with set -euo pipefail
- Improve variable scoping with readonly and local declarations
- Add descriptive function names and comments
- Fix shell quoting and parameter expansion
- Consolidate build logic into reusable functions
- Add proper cleanup of temporary files
- Improve readability and maintainability of the CI script

The refactored script maintains the same functionality while being
more robust and easier to understand.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-29 12:16:28 +02:00
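
A skeleton of the structure this describes; the function and variable names are assumptions, not the actual workflow script:

```bash
#!/usr/bin/env bash
# Illustrative sketch of the restructured patch-build script.
set -euo pipefail

readonly WORK_DIR="$(pwd)"

# Consolidated, reusable build logic
build_service() {
    local service="$1"
    local tag="$2"
    docker build -t "${REGISTRY:-localhost}/${service}:${tag}" "services/${service}"
}

# Proper cleanup of temporary files, even on failure
cleanup() {
    rm -rf "${WORK_DIR}"/tmp.*
}
trap cleanup EXIT

main() {
    local tag="${1:?usage: $0 <tag> <service>...}"
    shift
    local service
    for service in "$@"; do
        build_service "$service" "$tag"
    done
}

main "$@"
```
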
GitHub Action
6c8b55019e Increment frontend chart version 2025-05-29 10:29:46 +02:00
rjshrjndrn
e3e3e11227 fix(action): proper registry
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-29 10:18:55 +02:00
Shekar Siri
c6f7de04cc Revert "fix(ui): new card data state is not updating"
This reverts commit 2921c17cbf.
2025-05-28 22:16:00 +02:00
Shekar Siri
2921c17cbf fix(ui): new card data state is not updating 2025-05-28 19:49:01 +02:00
Mehdi Osman
7eb3f5c4c8
Increment frontend chart version (#3436)
Co-authored-by: GitHub Action <action@github.com>
2025-05-26 16:10:35 +02:00
Rajesh Rajendran
5a9a8e588a
chore(actions): rebase only if not main (#3435)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-26 16:04:50 +02:00
Rajesh Rajendran
4b14258266
fix(action): clone repo (#3433)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-26 15:50:13 +02:00
Rajesh Rajendran
744d2d4311
actions fix OR-2070 (#3432)
* chore(build): Better error handling

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* fix(build): remove fetch depth, as it might cause issues in rebase

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* fix(build): proper platform

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

---------

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-26 15:45:48 +02:00
Taha Yassine Kraiem
64242a5dc0 refactor(DB): changed supported platforms in CH 2025-05-26 11:51:49 +02:00
Rajesh Rajendran
cae3002697
feat(ci): Support building from branch for old patch (#3419)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-20 15:19:04 +02:00
GitHub Action
3d3c62196b Increment frontend chart version 2025-05-20 11:44:16 +02:00
nick-delirium
e810958a5d ui: fix ant imports 2025-05-20 11:26:20 +02:00
nick-delirium
39fa9787d1 ui: prevent network row modal from changing replayer time 2025-05-20 11:21:50 +02:00
nick-delirium
c9c1ad4dde ui: comments etc 2025-05-20 11:21:50 +02:00
nick-delirium
d9868928be ui: improve network panel row mapping 2025-05-20 11:21:50 +02:00
GitHub Action
a460d8c9a2 Increment frontend chart version 2025-05-15 15:18:19 +02:00
nick-delirium
930417aab4 ui: fix session search on url change 2025-05-15 15:12:30 +02:00
GitHub Action
07bc184f4d Increment chalice chart version 2025-05-14 18:59:43 +02:00
Rajesh Rajendran
71b7cca569
Patch/api v1.22.0 (#3401)
* fix(chalice): fixed duplicate autocomplete values

* ci(actions): possible fix for pull --rebase

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

---------

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
Co-authored-by: Taha Yassine Kraiem <tahayk2@gmail.com>
2025-05-14 18:42:25 +02:00
Mehdi Osman
355d27eaa0
Increment frontend chart version (#3397)
Co-authored-by: GitHub Action <action@github.com>
2025-05-13 13:38:15 +02:00
Mehdi Osman
66b485cccf
Increment db chart version (#3396)
Co-authored-by: GitHub Action <action@github.com>
2025-05-13 10:34:28 +02:00
Alexander
de33a42151
feat(db): custom event's ts (#3395) 2025-05-12 17:52:24 +02:00
Rajesh Rajendran
f12bdebf82
ci(actions): fix push denied (#3392) (#3393) (#3394)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-12 17:19:41 +02:00
Rajesh Rajendran
bbfa20c693
ci(actions): fix push denied (#3392) (#3393)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-12 16:58:19 +02:00
Rajesh Rajendran
f264ba043d
ci(actions): fix push denied (#3392)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-12 16:55:23 +02:00
Rajesh Rajendran
a05dce8125
main (#3391)
* ci(actions): Update pr description

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* ci(actions): run only on pull request merge

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

---------

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-12 16:50:20 +02:00
Mehdi Osman
3a1635d81f
Increment frontend chart version (#3389)
Co-authored-by: GitHub Action <action@github.com>
2025-05-12 16:12:43 +02:00
Delirium
ccb332c636
ui: change <slot> check (#3388) 2025-05-12 16:02:26 +02:00
Rajesh Rajendran
80ffa15959
ci(actions): Auto update tag for patch build (#3387)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-12 15:54:10 +02:00
Rajesh Rajendran
b2e961d621
ci(actions): Auto update tag for patch build (#3386)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-12 15:49:19 +02:00
Mehdi Osman
b4d0598f23
Increment frontend chart version (#3385)
Co-authored-by: GitHub Action <action@github.com>
2025-05-12 15:46:29 +02:00
Delirium
e77f083f10
ui: fixup toggler closing (#3384) 2025-05-12 15:40:30 +02:00
Delirium
58da1d3f64
fix litjs support, fix autocomplete modal options reset, fix dashboard chart density (#3382)
* Litjs fixes2 (#3381)

* ui: fixes for litjs capture

* ui: introduce vmode for lwc light dom

* ui: fixup the mode toggle and remover

* ui: fix filter options reset, fix dashboard chart density
2025-05-12 15:27:44 +02:00
GitHub Action
447fc26a2a Increment frontend chart version 2025-05-12 10:46:33 +02:00
nick-delirium
9bdf6e4f92 ui: fix heatmaps crash 2025-05-12 10:37:48 +02:00
GitHub Action
01f403e12d Increment chalice chart version 2025-05-07 12:28:44 +02:00
Taha Yassine Kraiem
39eb943b86 fix(chalice): fixed get error details 2025-05-07 12:15:33 +02:00
GitHub Action
366b0d38b0 Increment frontend chart version 2025-05-06 16:28:28 +02:00
nick-delirium
f4d5b3c06e ui: fix max meta length, add horizontal layout for player 2025-05-06 16:23:47 +02:00
Mehdi Osman
93ae18133e
Increment frontend chart version (#3366)
Co-authored-by: GitHub Action <action@github.com>
2025-05-06 13:16:57 +02:00
Andrey Babushkin
fbe5d78270
Revert update (#3365)
* Revert "Increment chalice chart version"

This reverts commit 5e0e5730ba.

* revert updates

* changed chalice version
2025-05-06 13:08:08 +02:00
Mehdi Osman
b803eed1d4
Increment frontend chart version (#3362)
Co-authored-by: GitHub Action <action@github.com>
2025-05-05 17:49:39 +02:00
Andrey Babushkin
9ed3cb1b7e
Add searched events (#3361)
* add filtered events to search

* removed consoles

* changed styles to tailwind

* changed styles to tailwind

* fixed errors
2025-05-05 17:40:10 +02:00
GitHub Action
5e0e5730ba Increment chalice chart version 2025-05-05 17:04:29 +02:00
Taha Yassine Kraiem
d78b33dcd2 refactor(DB): remove TTL for CH tables 2025-05-05 16:49:37 +02:00
Taha Yassine Kraiem
4b1ca200b4 fix(chalice): fixed empty error_id for table of errors 2025-05-05 16:49:37 +02:00
rjshrjndrn
08d930f9ff fix(docker-compose): proper volume path #3279
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-28 17:28:40 +02:00
Mehdi Osman
da37809bc8
Increment frontend chart version (#3345)
Co-authored-by: GitHub Action <action@github.com>
2025-04-28 11:38:04 +02:00
Andrey Babushkin
d922fc7ad5
Patch frontend inline css (#3344)
* add inlineCss enum

* updated changelog
2025-04-28 11:29:53 +02:00
GitHub Action
796360fdd2 Increment frontend chart version 2025-04-28 11:01:55 +02:00
nick-delirium
13dbb60d8b ui: fix velement applychanges 2025-04-28 10:40:11 +02:00
Андрей Бабушкин
9e20a49128 add slot tag to custom elements 2025-04-28 10:34:43 +02:00
nick-delirium
91f8cc1399 ui: move debouncecall 2025-04-28 10:34:43 +02:00
Andrey Babushkin
f8ba3f6d89 Css batching (#3326)
* tracker: initial css inlining functionality

* tracker: add tests, adjust sheet id, stagger rule sending

* ui: reroute custom html component fragments

* removed sorting

---------

Co-authored-by: nick-delirium <nikita@openreplay.com>
2025-04-28 10:34:43 +02:00
Delirium
85e30b3692 tracker css batching/inlining (#3334)
* tracker: initial css inlining functionality

* tracker: add tests, adjust sheet id, stagger rule sending

* removed sorting

* upgrade css inliner

* ui: better logging for counter

* tracker: force-fetch mode for cssInliner

* tracker: fix ts warns

* tracker: use debug opts

* tracker: 16.2.0 changelogs, inliner opts

* tracker: remove debug options

---------

Co-authored-by: Андрей Бабушкин <andreybabushkin2000@gmail.com>
2025-04-28 10:34:43 +02:00
nick-delirium
0360e3726e ui: fixup autoplay on inactive tabs 2025-04-28 10:34:43 +02:00
nick-delirium
77bbb5af36 tracker: update css inject 2025-04-28 10:34:43 +02:00
Andrey Babushkin
ab0d4cfb62 Css inliner tuning (#3337)
* tracker: don't send double sheets

* tracker: don't send double sheets

* tracker: slot checker

* add slot tag to custom elements

---------

Co-authored-by: nick-delirium <nikita@openreplay.com>
2025-04-28 10:34:43 +02:00
Andrey Babushkin
3fd506a812 Css batching (#3326)
* tracker: initial css inlining functionality

* tracker: add tests, adjust sheet id, stagger rule sending

* ui: reroute custom html component fragments

* removed sorting

---------

Co-authored-by: nick-delirium <nikita@openreplay.com>
2025-04-28 10:34:43 +02:00
Shekar Siri
e8432e2dec change(ui): force the table cards events order to use 'and' instead of the default 'then' 2025-04-24 10:09:19 +02:00
GitHub Action
5c76a8524c Increment frontend chart version 2025-04-23 18:41:46 +02:00
rjshrjndrn
3ba40a4811 feat(cli): Add support for image versions
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-23 17:52:50 +02:00
rjshrjndrn
f9a3f24590 fix(docker-compose): clickhouse migration
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-23 17:52:50 +02:00
rjshrjndrn
85d6d0abac fix(docker-compose): remove shell interpolation
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-23 17:52:50 +02:00
Rajesh Rajendran
b3594136ce OR-1940 upstream docker release with the existing installation (#3316)
* chore(docker): Adding dynamic env generator
* ci(make): Create deployment yamls
* ci(make): Generating docker envs
* change env name structure
* proper env names
* chore(docker): clickhouse
* chore(docker-compose): generate env file format
* chore(docker-compose): Adding docker-compose
* chore(docker-compose): format make
* chore(docker-compose): Update version
* chore(docker-compose): adding new secrets
* ci(make): default target
* ci(Makefile): Update common protocol
* chore(docker-compose): refactor folder structure
* ci(make): rename to docker-envs
* feat(docker): add clickhouse volume definition
Add clickhouse persistent volume to the docker-compose configuration
to ensure data is preserved between container restarts.
* refactor: move env files to docker-envs directory
Updates all environment file references in docker-compose.yaml to use a
consistent directory structure, placing them under the docker-envs/
directory for better organization.
* fix(docker): rename imagestorage to images
 The `imagestorage` service and related environment file
 have been renamed to `images` for clarity and consistency.
 This change reflects the service's purpose of handling
 images.
* feat(docker): introduce docker-compose template
 Adds a new docker-compose template to generate
 docker-compose files from a list of services.
 The template uses helm syntax.
* fix: Properly set FILES variable in Makefile
 The FILES variable was not being set correctly in the
 Makefile due to subshell issues. This commit fixes the
 variable assignment and ensures that the variable is
 accessible in subsequent commands.
* feat: Refactor docker-compose template for local development
 This commit introduces a complete overhaul of the
 docker-compose template, switching from a helm-based
 template to a native docker-compose.yml file. This
 change simplifies local development and makes it easier
 to manage the OpenReplay stack.
 The new template includes services for:
 - PostgreSQL
 - ClickHouse
 - Redis
 - MinIO
 - Nginx
 - Caddy
 It also includes migration jobs for setting up the
 database and MinIO.
* fix(docker-compose): Add fallback empty environment
 Add an empty environment to the docker-compose template to prevent
 errors when the env_file is missing. This ensures that the
 container can start even if the environment file is not present.
* feat(docker): Add domainname and aliases to services
 This change adds the `domainname` and `aliases` attributes to each
 service in the docker-compose.yaml file. This is to ensure that
 the services can communicate with each other using their fully
 qualified domain names. Also adds shared volume and empty
 environment variables.
* update version
* chore(docker): don't pull parallel
* chore(docker-compose): proper pull
* chore(docker-compose): Update db service urls
* fix(docker-compose): clickhouse url
* chore(clickhouse): Adding clickhouse db migration
* chore(docker-compose): Adding clickhouse
* fix(tpl): variable injection
* chore(fix): compose tpl variable rendering
* chore(docker-compose): Allow override pg variable
* chore(helm): remove assist-server
* chore(helm): pg integrations
* chore(nginx): removed services
* chore(docker-compose): Multiple aliases
* chore(docker-compose): Adding more env vars
* feat(install): Dynamically generate passwords
 Implements dynamic password generation by
 identifying `change_me_*` entries in `common.env` and
 replacing them with random passwords. This enhances
 security and simplifies initial setup.
 The changes include:
 - Replacing hardcoded password replacements with a loop
   that iterates through all `change_me_*` entries.
 - Using `grep` to find all `change_me_*` tokens.
 - Generating a random password for each token.
 - Updating the `common.env` file with the generated
   passwords (a sketch of this loop follows this entry).
* chore(docker-compose): disable clickhouse password
* fix(docker-compose): clickhouse-migration
* compose: chalice env
* chore(docker-compose): overlay vars
* chore(docker): Adding ch port
* chore(docker-compose): disable clickhouse password
* fix(docker-compose): migration name
* feat(docker): skip specific values
* chore(docker-compose): define namespace
---------

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-23 17:52:50 +02:00
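
The change_me_* replacement loop described in the entry above could look like this sketch; common.env and the placeholder naming come from the commit message, while the exact grep/sed/openssl invocations are assumptions:

```bash
# Illustrative sketch, not the actual install script.
for token in $(grep -o 'change_me_[A-Za-z0-9_]*' common.env | sort -u); do
    # Random hex avoids characters that would need escaping in sed
    password="$(openssl rand -hex 16)"
    # Replace every occurrence of the placeholder with the generated password
    sed -i "s/${token}/${password}/g" common.env
done
```
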
GitHub Action
8f67edde8d Increment chalice chart version 2025-04-23 12:26:20 +02:00
Taha Yassine Kraiem
74ed29915b fix(chalice): enforce AND operator for table of requests and table of pages 2025-04-23 11:51:38 +02:00
GitHub Action
3ca71ec211 Increment chalice chart version 2025-04-22 19:23:11 +02:00
Taha Yassine Kraiem
0e469fd056 fix(chalice): fixes for table of requests 2025-04-22 19:03:35 +02:00
KRAIEM Taha Yassine
a8cb0e1643 fix(chalice): fixes for table of requests 2025-04-22 19:03:35 +02:00
GitHub Action
e171f0d8d5 Increment frontend chart version 2025-04-22 17:56:00 +02:00
nick-delirium
68ea291444 ui: fix timepicker and timezone interactions 2025-04-22 17:42:56 +02:00
GitHub Action
05cbb831c7 Increment frontend chart version 2025-04-22 10:32:00 +02:00
nick-delirium
5070ded1f4 ui: fix empty sankey sessions fetch 2025-04-22 10:27:16 +02:00
GitHub Action
77610a4924 Increment frontend chart version 2025-04-16 17:45:25 +02:00
nick-delirium
7c34e4a0f6 ui: virtualizer for filter options list 2025-04-16 17:36:34 +02:00
GitHub Action
330e21183f Increment frontend chart version 2025-04-15 18:25:49 +02:00
Shekar Siri
30ce37896c feat(widget-sessions): improve session filtering logic
- Refactored session filtering logic to handle nested filters properly.
- Enhanced `fetchSessions` with null checks to avoid errors.
- Updated `loadData` to handle `USER_PATH` and `HEATMAP` metric types.
- Improved UI consistency by adjusting spacing and formatting.
- Replaced redundant code with cleaner, more maintainable patterns.

This change improves the reliability and readability of the session
filtering and loading logic in the WidgetSessions component.
2025-04-15 18:15:03 +02:00
Andrey Babushkin
80a7817e7d
removed sorting by id (#3305) 2025-04-15 13:32:53 +02:00
Jorgen Evens
1b9c568cb1 fix(helm): fix broken volumeMounts indentation 2025-04-14 15:51:41 +02:00
GitHub Action
3759771ae9 Increment frontend chart version 2025-04-14 12:06:09 +02:00
Shekar Siri
f6ae5aba88 feat(SessionsBy): add specific filter for FETCH metric
Added a conditional check to handle the FETCH metric in the SessionsBy
component. When the metric is FETCH, a specific filter with key
FETCH_URL, operator is, and value derived from data.name is applied.
This ensures proper filtering behavior for FETCH-related metrics.
2025-04-14 12:01:51 +02:00
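
A sketch of the conditional this describes; FETCH_URL, the `is` operator, and `data.name` come from the commit message, while the surrounding types are assumptions:

```typescript
interface AppliedFilter {
  key: string;
  operator: string;
  value: string[];
}

// Build the drill-down filter for a SessionsBy card
function buildFilter(metric: string, data: { name: string }): AppliedFilter {
  if (metric === 'FETCH') {
    // FETCH metrics filter sessions by the request URL in data.name
    return { key: 'FETCH_URL', operator: 'is', value: [data.name] };
  }
  // Default behaviour for other metrics (illustrative)
  return { key: metric, operator: 'is', value: [data.name] };
}
```
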
Mehdi Osman
5190dc512a
Increment frontend chart version (#3297)
Co-authored-by: GitHub Action <action@github.com>
2025-04-14 11:54:25 +02:00
Andrey Babushkin
3fcccb51e8
Patch assist (#3296)
* add global method support

* fix errors

* remove wrong updates

* remove wrong updates

* add onDrag as option

* fix wrong updates
2025-04-14 11:33:06 +02:00
GitHub Action
26077d5689 Increment frontend chart version 2025-04-11 14:56:11 +02:00
Shekar Siri
00c57348fd feat(search): enhance filter value handling
- Added `checkFilterValue` function to validate and update filter values
  in `SearchStoreLive`.
- Updated `FilterItem` to handle undefined `value` gracefully by providing
  a default empty array.

These changes improve robustness in filter value processing.
2025-04-11 14:36:25 +02:00
Shekar Siri
1f9bc5520a feat(search): add rounding to next minutes for date ranges
- Introduced `roundToNextMinutes` utility function to round timestamps
  to the next specified minute interval.
- Updated `Search` class to use the rounding function for non-custom
  date ranges.
- Modified `getRange` in `period.js` to align LAST_24_HOURS with
  15-minute intervals.
- Added `roundToNextMinutes` implementation in `utils/index.ts`.
2025-04-11 12:01:15 +02:00
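
A plausible implementation of the utility described above; the actual code in utils/index.ts may differ:

```typescript
// Round a millisecond timestamp up to the next N-minute boundary.
export function roundToNextMinutes(timestamp: number, minutes: number): number {
  const interval = minutes * 60 * 1000;
  return Math.ceil(timestamp / interval) * interval;
}

// e.g. aligning a LAST_24_HOURS range end with 15-minute intervals:
const end = roundToNextMinutes(Date.now(), 15);
const start = end - 24 * 60 * 60 * 1000;
```
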
Shekar Siri
aef94618f6 Revert "Increment frontend chart version"
This reverts commit 2a330318c7.
2025-04-11 11:03:01 +02:00
GitHub Action
2a330318c7 Increment frontend chart version 2025-04-11 11:01:53 +02:00
Shekar Siri
6777d5ce2a feat(dashboard): set initial drill down period
Change default drill down period from LAST_7_DAYS to LAST_24_HOURS
and preserve current period when drilling down on chart click
2025-04-11 10:49:17 +02:00
rjshrjndrn
8a6f8fe91f chore(action): cloning specific tag
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-10 15:45:50 +02:00
Mehdi Osman
7b078fed4c
Increment frontend chart version (#3278)
Co-authored-by: GitHub Action <action@github.com>
2025-04-07 15:24:32 +02:00
Andrey Babushkin
894d4c84b3
Patch assist canvas (#3277)
* resolved conflict

* removed comments
2025-04-07 15:13:36 +02:00
Alexander
46390a3ba9
feat(assist-server): added the github action (#3275) 2025-04-07 10:43:48 +02:00
rjshrjndrn
621667f5ce ci(action): Build and patch github tags
feat(workflow): update commit timestamp for patching

Add a step to set the commit timestamp of the HEAD commit to be 1
second newer than the oldest of the last 3 commits. This ensures
proper chronological order while preserving the commit content.

- Fetch deeper history to access commit history
- Get oldest timestamp from recent commits
- Set new commit date with BSD-compatible date command
- Verify timestamp change with git log

The workflow was previously checking out 'main' branch with a
comment indicating it needed to be fixed. This change makes it
properly checkout the tag specified by the workflow input.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-04 16:09:05 +02:00
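
In outline, the timestamp step could look like this; these are standard git commands, but the actual workflow step may differ:

```bash
# Fetch deeper history so the recent commits are available
git fetch --deepen=10

# Oldest committer timestamp (unix epoch) among the last 3 commits
oldest=$(git log -3 --format=%ct | sort -n | head -1)
newdate=$((oldest + 1))   # 1 second newer

# Rewrite HEAD's author and committer dates without changing its content
GIT_COMMITTER_DATE="${newdate} +0000" \
  git commit --amend --no-edit --date="${newdate} +0000"

# Verify the timestamp change
git log -1 --format='author: %ad  committer: %cd'
```
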
rjshrjndrn
a72f476f1c chore(ci): tag patching
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-04 13:15:56 +02:00
Mehdi Osman
623946ce4e
Increment assist chart version (#3267)
Co-authored-by: GitHub Action <action@github.com>
2025-04-03 13:29:02 -04:00
Mehdi Osman
2d099214fc
Increment frontend chart version (#3266)
Co-authored-by: GitHub Action <action@github.com>
2025-04-03 18:27:05 +02:00
Andrey Babushkin
b0e7054f89
Assist patch canvas (#3265)
* add agent info to assist and tracker

* removed AGENTS_CONNECTED event
2025-04-03 18:22:08 +02:00
Mehdi Osman
a9097270af
Increment chalice chart version (#3260)
Co-authored-by: GitHub Action <action@github.com>
2025-04-02 16:43:46 +02:00
Alexander
5d514ddaf2
feat(chalice): added for_spot=True for authenticate_sso (#3259) 2025-04-02 16:35:19 +02:00
Mehdi Osman
43688bb03b
Increment assist chart version (#3256)
Co-authored-by: GitHub Action <action@github.com>
2025-04-01 16:04:41 +02:00
Mehdi Osman
e050cee7bb
Increment frontend chart version (#3255)
Co-authored-by: GitHub Action <action@github.com>
2025-03-31 18:19:52 +02:00
Andrey Babushkin
6b35df7125
pulled updates (#3254) 2025-03-31 18:13:51 +02:00
GitHub Action
8e099b6dc3 Increment frontend chart version 2025-03-31 17:25:58 +02:00
nick-delirium
c0a4734054 ui: fix double fetches for sessions 2025-03-31 17:19:33 +02:00
GitHub Action
7de1efb5fe Increment frontend chart version 2025-03-31 12:08:45 +02:00
nick-delirium
d4ff28ddbe ui: fix modules label 2025-03-31 11:54:13 +02:00
nick-delirium
b2256f72d0 ui: fix modules mapper 2025-03-31 11:48:14 +02:00
GitHub Action
a63bda1c79 Increment frontend chart version 2025-03-31 11:17:34 +02:00
nick-delirium
3a0176789e ui: filter keys 2025-03-31 10:34:02 +02:00
nick-delirium
f2b7271fca ui: add old devtool filters 2025-03-31 10:31:06 +02:00
GitHub Action
d50f89662b Increment frontend chart version 2025-03-28 21:37:59 +01:00
GitHub Action
35051d201c Increment assist chart version 2025-03-28 21:37:59 +01:00
rjshrjndrn
214be95ecc fix(init): remove duplicate clone
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-28 21:25:24 +01:00
Delirium
dbc142c114
UI patches (28.03) (#3231)
* ui: force getting url for location in tabmanagers

* Assist add turn servers (#3229)

* fixed conflicts

* add offers

* add config to socket query

* add config to socket query

* add config init

* removed console logs

* removed wrong updates

* fixed conflicts

* add offers

* add config to socket query

* add config to socket query

* add config init

* removed console logs

* removed wrong updates

* ui: fix chat draggable, fix default params

---------

Co-authored-by: nick-delirium <nikita@openreplay.com>

* ui: fix spritemap generation for assist sessions

* ui: fix yarnlock

* fix errors

* updated widget link

* resolved conflicts

* updated widget url

---------

Co-authored-by: Andrey Babushkin <55714097+reyand43@users.noreply.github.com>
Co-authored-by: Андрей Бабушкин <andreybabushkin2000@gmail.com>
2025-03-28 17:32:12 +01:00
GitHub Action
443f5e8f08 Increment frontend chart version 2025-03-27 12:36:54 +01:00
Shekar Siri
9f693f220d refactor(auth): separate SSO support from enterprise edition
Add dedicated isSSOSupported property to correctly identify when SSO
authentication is available, properly handling the 'msaas' edition
case separately from enterprise edition checks. This fixes SSO
visibility in the login interface.
2025-03-27 12:28:10 +01:00
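
A sketch of the separation this describes, assuming a plain edition string on the auth details:

```typescript
// Hypothetical shape; the real store derives these from authDetails.
class UserStore {
  constructor(private edition: string) {}

  get isEnterprise(): boolean {
    return this.edition === 'ee';
  }

  // 'msaas' offers SSO without being the enterprise edition
  get isSSOSupported(): boolean {
    return this.edition === 'ee' || this.edition === 'msaas';
  }
}
```
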
GitHub Action
5ab30380b0 Increment chalice chart version 2025-03-26 17:48:08 +01:00
Taha Yassine Kraiem
fc86555644 refactor(chalice): changed user-journey 2025-03-26 17:18:17 +01:00
GitHub Action
2a3c611a27 Increment frontend chart version 2025-03-26 16:48:29 +01:00
Delirium
1d6fb0ae9e ui: shrink icons when no space, adjust player area for events export … (#3217)
* ui: shrink icons when no space, adjust player area for events export panel, fix panel size

* ui: rm log
2025-03-26 16:38:48 +01:00
GitHub Action
bef91a6136 Increment frontend chart version 2025-03-25 18:15:34 +01:00
Shekar Siri
1e2bd19d32 fix(dashboard): update filter condition in MetricsList
Change the filter type comparison from checking against 'all' to
checking against an empty string. This ensures proper filtering
behavior when filtering metrics in the dashboard component.
2025-03-25 18:10:13 +01:00
rjshrjndrn
3b58cb347e chore(http): remove default token_string
scripts/helmcharts/openreplay/charts/http/scripts/entrypoint.sh

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-24 19:31:01 +01:00
GitHub Action
ca4590501a Increment frontend chart version 2025-03-24 17:45:24 +01:00
Andrey Babushkin
fd12cc7585
fix(GraphQL): remove unused useTranslation hook (#3200) (#3206)
Co-authored-by: PiRDub <pirddeveloppeur@gmail.com>
2025-03-24 17:38:45 +01:00
rjshrjndrn
6abded53e0 feat(helm): add TOKEN_SECRET environment variable
Add TOKEN_SECRET environment variable to HTTP service deployment and
generate a random value for it in vars.yaml.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-24 16:55:35 +01:00
GitHub Action
82c5e5e59d Increment frontend chart version 2025-03-24 14:34:51 +01:00
nick-delirium
c77b0cc4de ui: fixes for onboarding ui 2025-03-24 14:30:22 +01:00
nick-delirium
de344e62ef ui: onboarding fixes 2025-03-24 14:30:22 +01:00
Mehdi Osman
deb78a62c0
Increment frontend chart version (#3189)
Co-authored-by: GitHub Action <action@github.com>
2025-03-21 11:00:14 +01:00
Shekar Siri
0724cf05f0
fix(auth): remove unnecessary captcha token validation (#3188)
The token validation checks were redundant as the validation is already
handled by the captcha wrapper component. This change simplifies the
password reset flow while maintaining security.
2025-03-21 10:55:39 +01:00
GitHub Action
cc704f1bc3 Increment frontend chart version 2025-03-20 16:18:42 +01:00
nick-delirium
4c159b2d26 ui: fix table column export 2025-03-20 16:08:58 +01:00
Mehdi Osman
42df33bc01
Increment assist chart version (#3181)
Co-authored-by: GitHub Action <action@github.com>
2025-03-19 14:58:26 +01:00
Alexander
ae95b48760
feat(assist): improved caching mechanism for cluster mode (#3180) 2025-03-19 14:53:58 +01:00
Mehdi Osman
4be3050e61
Increment frontend chart version (#3179)
Co-authored-by: GitHub Action <action@github.com>
2025-03-19 14:47:37 +01:00
Shekar Siri
8eec6e983b
feat(auth): implement withCaptcha HOC for consistent reCAPTCHA (#3177)
* feat(auth): implement withCaptcha HOC for consistent reCAPTCHA

This commit refactors the reCAPTCHA implementation across the application
by introducing a Higher Order Component (withCaptcha) that encapsulates
captcha verification logic. The changes:

- Create a reusable withCaptcha HOC in withRecaptcha.tsx
- Refactor Login, ResetPasswordRequest, and CreatePassword components
- Extract SSOLogin into a separate component
- Improve error handling and user feedback
- Standardize loading and verification states across forms
- Make captcha implementation more maintainable and consistent

* feat(auth): support msaas edition for enterprise features

Add msaas to the isEnterprise check alongside ee edition to properly
display enterprise features. Use userStore.isEnterprise in SSOLogin
component instead of directly checking authDetails.edition for
consistent enterprise status detection.
2025-03-19 14:36:56 +01:00
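
A minimal sketch of such a HOC, assuming react-google-recaptcha; the prop names and wiring are illustrative:

```tsx
import React, { useCallback, useRef } from 'react';
import ReCAPTCHA from 'react-google-recaptcha';

// Wrapped forms receive one helper that resolves to a captcha token.
interface CaptchaProps {
  verifyCaptcha: () => Promise<string | null>;
}

function withCaptcha<P extends CaptchaProps>(Wrapped: React.ComponentType<P>) {
  return function CaptchaWrapper(props: Omit<P, keyof CaptchaProps>) {
    const ref = useRef<ReCAPTCHA>(null);

    // Verification logic shared by Login, ResetPasswordRequest, CreatePassword
    const verifyCaptcha = useCallback(
      async () => (await ref.current?.executeAsync()) ?? null,
      [],
    );

    return (
      <>
        <ReCAPTCHA ref={ref} size="invisible" sitekey={process.env.CAPTCHA_KEY ?? ''} />
        <Wrapped {...(props as unknown as P)} verifyCaptcha={verifyCaptcha} />
      </>
    );
  };
}
```
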
Taha Yassine Kraiem
5fec615044 refactor(chalice): cleaned code
fix(chalice): fixed session-search-pg sortKey issue
fix(chalice): fixed CH-query-formatter to handle special chars
fix(chalice): fixed /ids response
2025-03-18 13:51:10 +01:00
Mehdi Osman
f77568a01c
Increment frontend chart version (#3167)
Co-authored-by: GitHub Action <action@github.com>
2025-03-18 13:45:09 +01:00
Shekar Siri
618e4dc59f
refactor(searchStore): reformat filterMap function parameters (#3166)
- Reformat the parameters of the filterMap function for better readability.
- Comment out the fetchSessions call in clearSearch method to avoid unnecessary session fetch.
2025-03-15 11:42:14 +01:00
nick-delirium
b94fcb11e5
ui: fix pageselect for insights 2025-03-14 17:35:54 +01:00
nick-delirium
f93ee6fb8f
ui: fix filekey on prefetched sessions 2025-03-14 17:30:00 +01:00
Alexander
23820b7ea5
feat(ender): grab all sessions per tick (#3163) 2025-03-14 17:16:56 +01:00
nick-delirium
e92bfe3cfe
ui: fix efs file replay 2025-03-14 17:06:36 +01:00
Gabriele Angrisani
102f0c7b06 fix redis volume reference folder (#2805) 2025-03-14 15:34:11 +01:00
Laurenz Glück
8d57cc55a5 fix: updates docker-compose setup to be compatible with v1.21.0 2025-03-14 15:34:11 +01:00
nick-delirium
24b36efc9d
ui: update env sample 2025-03-14 15:06:11 +01:00
Alexander
fe91cad4af feat(db): moved out the error_id from json 2025-03-14 14:51:19 +01:00
Taha Yassine Kraiem
033ffcb7b9 refactor(DB): changed product_analytics.events to expose error_id
fix(chalice): fixed search events by error
2025-03-14 14:32:17 +01:00
Taha Yassine Kraiem
499048e46c refactor(chalice): changed pg_client to send keep-alive signals 2025-03-14 14:32:17 +01:00
nick-delirium
5b6c653862
ui: fix rm unused assets 2025-03-14 13:31:09 +01:00
nick-delirium
4169ab87c6
ui: fix comparison reset 2025-03-14 13:19:00 +01:00
Taha Yassine Kraiem
80229a0214 refactor(chalice): use new errors columns 2025-03-14 10:53:09 +01:00
Taha Yassine Kraiem
fb48ba8300 refactor(chalice): refactored errors helper
refactor(chalice): removed errors-tags
2025-03-14 10:53:09 +01:00
Taha Yassine Kraiem
b0f3c50c0f refactor(DB): DB changes 2025-03-14 10:53:09 +01:00
Alexander
5806362ce0 feat(db): added missing columns for events 2025-03-14 10:31:15 +01:00
rjshrjndrn
2458af460b feat(docker): switch to Chainguard nginx image
Replace nginx:alpine with cgr.dev/chainguard/nginx base image and
remove unnecessary permission changes since the Chainguard image
handles permissions differently and runs with proper security defaults.
2025-03-13 17:45:38 +01:00
Alexander
6c891cb131 feat(db): removed js exception tags in CH 2025-03-13 17:25:53 +01:00
rjshrjndrn
8e41c3ce91 fix(chalice): default envs
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 17:08:50 +01:00
rjshrjndrn
14d0a77a73 feat(chalice): add JWT expiration configuration
Add JWT_EXPIRATION environment variable to the chalice helm chart with
default value set to 86400 s (24 hours).
2025-03-13 17:00:23 +01:00
rjshrjndrn
0333c56d52 feat(clickhouse): Upgrade version
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 16:39:03 +01:00
rjshrjndrn
52d4abb61c fix(cli): download clickhouse
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 16:36:20 +01:00
rjshrjndrn
b0e7d3aa79 ci(make): download-cli
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 16:19:13 +01:00
rjshrjndrn
e9eea78283 ci(make): Upgrade installation
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 16:04:25 +01:00
rjshrjndrn
0f4c509582 ci(make): get version for chart
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 16:02:17 +01:00
rjshrjndrn
820bca6308 build(api): implement multi-stage Dockerfile
- Use multi-stage build to reduce final image size
- Move build dependencies to builder stage
- Copy only necessary files to final stage
- Use UV for faster Python package installation
- Clean up duplicate operations
2025-03-13 14:51:59 +01:00
rjshrjndrn
51e71a4d52 ci(make): pull the latest images
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 14:37:13 +01:00
rjshrjndrn
2c9e9576c5 ci(Makefile): clean installation
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 14:12:53 +01:00
Taha Yassine Kraiem
9e7f751df6 fix(chalice): fixed EE error-details undefined status 2025-03-13 13:59:46 +01:00
Taha Yassine Kraiem
b6d0e71544 fix(chalice): fixed EE imports for errors 2025-03-13 13:53:10 +01:00
rjshrjndrn
93a9e03026 fix(api): improve Dockerfile with best practices
- Use lowercase labels in accordance with Docker conventions
- Pin package versions for better build reproducibility
- Consolidate RUN commands to reduce image layers
- Use JSON array notation for CMD instruction
- Restore GIT_SHA label for proper image versioning
- Maintain consistent code formatting and ordering

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 13:48:53 +01:00
rjshrjndrn
a62f6f6bb0 ci(build): remove peers
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 13:48:53 +01:00
Taha Yassine Kraiem
cd80aa85ea refactor(chalice): refactored error-details code 2025-03-13 13:46:03 +01:00
Taha Yassine Kraiem
961c685310 refactor(chalice): refactored error-details code
refactor(chalice): moved error-details to use new product-analytics DB structure
2025-03-13 13:46:03 +01:00
Alexander
160b5ac2c8 feat(metrics): moved back the metrics endpoint to support the undocumented functionality 2025-03-13 13:34:23 +01:00
nick-delirium
1cca40d4c5
ui: fix calendar self-close 2025-03-13 13:08:44 +01:00
rjshrjndrn
bd2a59266d feat(api): migrate to uv package manager
- Add uv as dependency manager for faster installations
- Remove pinned versions from apk packages for better maintenance
- Fix lxml installation and dependency issues in requirements.txt
- Add environment setup steps in Dockerfile

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 13:01:02 +01:00
Alexander
8acee7d357 feat(connector): fixed several release bugs 2025-03-13 12:28:23 +01:00
rjshrjndrn
fb49c715cb refactor(chalice): remove peers from health checks and fix formatting
Updated health.py to remove the peers-openreplay service from health
checks and applied consistent formatting throughout the file. This
includes proper line breaks, trailing commas for multi-line data
structures, and consistent indentation patterns.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 11:51:04 +01:00
nick-delirium
221bee70f5
ui: add hash to css filenames 2025-03-13 11:45:22 +01:00
rjshrjndrn
8eb431f70c fix(docker): pin pip packages in API Dockerfile
Add exact version pinning for all packages installed via pip to improve
build reproducibility and security. Also consolidates package install
steps and improves the docker image build process with proper cleanup
of build dependencies.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 11:38:57 +01:00
rjshrjndrn
820b0954e7 ci(makefile): install test
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 11:13:38 +01:00
Andrey Babushkin
19b350761c
add few locales (#3151) 2025-03-13 10:08:08 +01:00
Alexander
3b3e95a413
Observability upgrade (#3146)
* feat(metrics): grand update

* feat(metrics): fixed missing part in ee tracer

* feat(assets): added missing arg

* feat(metrics): fixed naming problems
2025-03-13 08:09:29 +01:00
Taha Yassine Kraiem
fe1130397c fix(alerts): fixed crash while processing over CH 2025-03-12 17:15:41 +01:00
Taha Yassine Kraiem
fd4b71d854 fix(chalice): fixed EE docker image 2025-03-12 16:47:04 +01:00
nick-delirium
404ffd5b2d
ui: add hash to css filenames 2025-03-12 16:46:07 +01:00
Taha Yassine Kraiem
5af63eb9f1 fix(chalice): fixed legacy sessions search for EE 2025-03-12 15:14:54 +01:00
Shekar Siri
038bfee383 change(ui): table loader 2025-03-12 14:41:33 +01:00
Taha Yassine Kraiem
bd09160a4a fix(chalice): fixed search usability-test's sessions 2025-03-12 13:43:01 +01:00
nick-delirium
136a5b2bfb
ui: wrap title with i18n 2025-03-12 13:21:51 +01:00
Taha Yassine Kraiem
33deaef0ce fix(chalice): changed sessions_search importer 2025-03-12 12:28:16 +01:00
Taha Yassine Kraiem
3f541e5d59 refactor(chalice): changed .gitignore 2025-03-12 12:23:03 +01:00
Kraiem Taha Yassine
ae463db150
fix(chalice): fixed empty bookmark/vault projects null-timestamp issue (#3142) 2025-03-12 12:11:34 +01:00
Taha Yassine Kraiem
9eb19fedf1 refactor(chalice): changed import/flags logic 2025-03-12 11:54:04 +01:00
Andrey Babushkin
5df934c9ce
fixed sessions layout (#3138) 2025-03-12 09:21:16 +01:00
Taha Yassine Kraiem
e027a2d016 fix(chalice): fixed circular imports 2025-03-11 17:02:01 +01:00
nick-delirium
c7f3c78740
ui: fix heatmap scaling (use true document height) 2025-03-11 17:00:46 +01:00
Taha Yassine Kraiem
3245579b7c fix(chalice): remove duplicate sessions when using MV 2025-03-11 16:37:00 +01:00
nick-delirium
0107c9c523
ui: fix in-session clickmap refresh 2025-03-11 16:24:26 +01:00
nick-delirium
05f4054b31
ui: fix sankey tooltip spacing 2025-03-11 16:20:15 +01:00
Taha Yassine Kraiem
ce844296ed refactor(chalice): include sessions src 2025-03-11 15:46:05 +01:00
Taha Yassine Kraiem
0a5856afe1 refactor(chalice): optimized search-product-analytics-cards
fix(chalice): fixed search-product-analytics-cards
2025-03-11 14:06:48 +01:00
Taha Yassine Kraiem
45b8bdef8a fix(DB): fixed old-product-analytics wrong view_type 2025-03-11 13:13:39 +01:00
Taha Yassine Kraiem
264f28ed39 refactor(chalice): optimized autocomplete lazy initialization 2025-03-11 13:13:39 +01:00
Taha Yassine Kraiem
59d3253737 refactor(chalice): events-autocomplete lazy initialization 2025-03-11 12:09:49 +01:00
Taha Yassine Kraiem
1c8c231d13 refactor(chalice): metafilters-autocomplete lazy initialization 2025-03-11 11:58:57 +01:00
nick-delirium
77208b95e8
ui: fix locale json endings 2025-03-11 11:38:23 +01:00
nick-delirium
cdbbb482ce ui: translate more lines 2025-03-11 10:35:08 +01:00
nick-delirium
ccd8d76e98 ui: improve metadata display 2025-03-11 10:35:08 +01:00
Andrey Babushkin
17a5089c24
updated locales (#3129) 2025-03-10 23:19:54 +01:00
Shekar Siri
384866621c change(api): router to search with pagination schema 2025-03-10 19:15:26 +01:00
nick-delirium
743625f66b
ui: fixes for metadata list in sessions 2025-03-10 18:00:05 +01:00
Andrey Babushkin
ffd134c204
Fix localisation (#3128)
* fix localised errors

* fix locales

* fix locales

* fix highlight badges

* fix errors
2025-03-10 17:46:36 +01:00
Taha Yassine Kraiem
8da099ba98 fix(chalice): fix import issue 2025-03-10 17:11:39 +01:00
Andrey Babushkin
75ca0267ae
Fix localisation (#3126)
* fix localised errors

* fix locales

* fix locales

* fix highlight badges
2025-03-10 16:50:10 +01:00
Andrey Babushkin
6ab3c80985
Fix localisation (#3125)
* fix localised errors

* fix locales

* fix locales
2025-03-10 16:43:53 +01:00
Andrey Babushkin
eab2d3a2cf
Fix localisation (#3123)
* fix localised errors

* fix locales
2025-03-10 15:51:21 +01:00
Shekar Siri
c6cbc4eba8 fix(ui): align session date range text properly
Add text-start class to the date range container to ensure proper
left alignment of text in the SessionDateRange component.
2025-03-10 15:24:17 +01:00
Alexander
fdd26c567c feat(auth): added missing prefix support to other services 2025-03-10 15:18:00 +01:00
Alexander
4b9be69719 feat(spot): added missing prefix support to the auth middleware 2025-03-10 15:12:19 +01:00
Shekar Siri
b8511b6be1 change(api): schema for card search with filter and sort 2025-03-10 15:06:54 +01:00
Shekar Siri
5cc9945f16 change(api): keep the original formatting 2025-03-10 15:05:31 +01:00
Shekar Siri
cef251db6a feat(metrics): add metrics search functionality
Implement new search_metrics function in custom_metrics.py to allow
filtering and sorting of metrics. Add corresponding endpoint in the
metrics router and supporting schema classes in schemas.py. The new
implementation provides pagination, filtering, and sorting capabilities
for metrics.
2025-03-10 14:58:30 +01:00
Shekar Siri
687ab05f22 feat(metrics): implement server-side pagination and sorting
Refactors metrics list view to use server-side pagination and sorting
instead of client-side implementation. This improves performance for
large datasets by reducing client workload and network payload size.

Key changes:
- Add pagination API endpoint in MetricService
- Update MetricStore to handle server pagination
- Refactor ListView component to use server-side sorting
- Remove client-side sorting and pagination logic
2025-03-10 14:58:30 +01:00
Alexander
4b09213448 feat(images): added a proper observability 2025-03-10 14:14:43 +01:00
Taha Yassine Kraiem
af4a344c85 fix(chalice): fix multi-refresh token
fix(chalice): fix spot multi-refresh token
2025-03-10 13:14:39 +01:00
Taha Yassine Kraiem
c40e32d624 refactor(chalice): refactored dynamic routes 2025-03-10 12:02:03 +01:00
Taha Yassine Kraiem
afbf5fee7a fix(chalice): fix refresh token 2025-03-10 12:02:03 +01:00
rjshrjndrn
28b580499f feat(helm): add configurable assets origin
Add a helper template to allow customizing the assets origin URL.
This gives users the ability to override the default S3 endpoint
construction when needed, while maintaining backward compatibility.
This can be used when proxying the bucket through something like
CloudFront or a custom domain.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-10 11:54:18 +01:00
nick-delirium
9d7c54554e
ui: fix live tag 2025-03-10 09:49:33 +01:00
Sudheer Salavadi
adf302bc34
Improved project tags message. (#3115) 2025-03-07 19:09:51 +01:00
Andrey Babushkin
6852d63cdb
fix localised errors (#3117) 2025-03-07 17:58:00 +01:00
Shekar Siri
41178ba841 change(ui): webpack config to use env vars 2025-03-07 17:50:32 +01:00
nick-delirium
90bc6bc83e
ui: restrict meta list to 1 2025-03-07 17:12:45 +01:00
Taha Yassine Kraiem
b8d365de3d fix(chalice): debug refresh token 2025-03-07 17:06:49 +01:00
Taha Yassine Kraiem
87e7acecde fix(chalice): debug refresh token 2025-03-07 16:49:34 +01:00
Taha Yassine Kraiem
e53301d18e fix(chalice): debug refresh token 2025-03-07 16:40:11 +01:00
Taha Yassine Kraiem
ff04276623 fix(chalice): debug refresh token 2025-03-07 16:21:48 +01:00
nick-delirium
b0e0321224
ui: fix ui crash 2025-03-07 16:09:51 +01:00
Taha Yassine Kraiem
e95417c1ed fix(chalice): debug refresh token 2025-03-07 15:49:51 +01:00
Alexander
5f3b3bb2ef feat(canvases): added a proper canvas observability 2025-03-07 15:46:27 +01:00
Taha Yassine Kraiem
06937b305a fix(chalice): debug refresh token 2025-03-07 15:38:23 +01:00
Andrey Babushkin
a693a36a6c
Add localisation (#3107)
* applied eslint

* add locales and lint the project

* removed error boundary

* updated locales

* fix min files

* fix locales

* fix errors

* fix errors

* fix errors

* fix error

* add locales

* fix locales
2025-03-07 11:59:37 +01:00
Andrey Babushkin
c8ff481725
Add localisation (#3106)
* applied eslint

* add locales and lint the project

* removed error boundary

* updated locales

* fix min files

* fix locales

* fix errors

* fix errors

* fix errors

* fix error

* add locales
2025-03-07 11:48:34 +01:00
Alexander
ef897538d1 feat(images): name fix in Dockerfile 2025-03-07 11:30:52 +01:00
Alexander
07ffb06db1 feat(images): renamed + small improvements 2025-03-07 11:22:55 +01:00
Andrey Babushkin
ad9883ceb2
Add localisation (#3105)
* applied eslint

* add locales and lint the project

* removed error boundary

* updated locales

* fix min files

* fix locales

* fix errors

* fix errors

* fix errors

* fix error
2025-03-07 11:18:12 +01:00
Andrey Babushkin
5c9a29570c
Add localisation (#3104)
* applied eslint

* add locales and lint the project

* removed error boundary

* updated locales

* fix min files

* fix locales

* fix errors

* fix errors
2025-03-07 10:43:08 +01:00
Andrey Babushkin
9f9990d737
Add localisation (#3103)
* applied eslint

* add locales and lint the project

* removed error boundary

* updated locales

* fix min files

* fix locales

* fix errors
2025-03-06 18:17:38 +01:00
Andrey Babushkin
fd5c0c9747
Add localisation (#3092)
* applied eslint

* add locales and lint the project

* removed error boundary

* updated locales

* fix min files

* fix locales
2025-03-06 17:43:15 +01:00
Taha Yassine Kraiem
b8091b69c2 refactor(chalice): fixed product analytics 2025-03-06 17:16:21 +01:00
Taha Yassine Kraiem
502303aee7 refactor(chalice): refactored product analytics 2025-03-06 17:12:19 +01:00
Taha Yassine Kraiem
632bc1cbb9 refactor(chalice): refactored product analytics 2025-03-06 17:12:19 +01:00
rjshrjndrn
bcc7d35b7f docs: improve services input description
Add example values (frontend,api,sink) to the services input
description in the release deployment workflow to make it clearer
for users what format is expected.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-06 16:24:25 +01:00
rjshrjndrn
45656ec6d7 fix(ci): maintain correct working directory in workflow
Adds working_dir variable to track the initial directory and ensures
proper directory navigation when processing services in the release
deployment workflow.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-06 16:22:55 +01:00
rjshrjndrn
15829d865e fix(workflow): move wait outside build services loop
The wait command was placed inside the service loop,
causing the workflow to wait after each individual service build.
Moving it outside ensures all service builds run in parallel before
proceeding to the next step.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-06 16:22:55 +01:00
rjshrjndrn
029376c3e4 fix(ci): add missing loop closures in deploy workflow
Add the required 'done' keywords at the end of for loops in the
Kubernetes deployment steps for both EE and FOSS clusters to ensure
proper script execution.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-06 16:22:55 +01:00
rjshrjndrn
3ca6f78bed perf(ci): parallelize FOSS and EE build steps
Improves build performance by running EE and FOSS builds in parallel
using depot's parallel build capabilities. Each service's builds now
run concurrently with proper process management via Bash background
jobs, reducing overall build time.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-06 16:22:55 +01:00
rjshrjndrn
12a729fafe refactor(ci): simplify image tag format
The image tag generation in the release deployment workflow was simplified
by removing the redundant IMAGE_TAG variable prepending. Now the tag is
directly composed of the branch name and short SHA, resulting in cleaner
and more readable image tags.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-06 16:22:55 +01:00
rjshrjndrn
0a17460c5a feat: add enterprise edition image build and deployment
Add parallel Enterprise Edition (-ee suffix) image building and update
K8s deployments to use the EE images in the EE cluster. This change
enables maintaining both community and enterprise edition deployments
from the same workflow.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-06 16:22:55 +01:00
rjshrjndrn
faadfa497f fix(actions): set default build script name before service loop
The workflow was missing a default value for the BUILD_SCRIPT_NAME
variable, which could cause failures when processing services. This
commit adds "build.sh" as the default value.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-06 16:22:55 +01:00
rjshrjndrn
bbeb508738 fix(actions): standardize registry URL in workflows
Replace all instances of RELEASE_OSS_REGISTRY secret with the already
defined IMAGE_REGISTRY_URL environment variable for consistency across
deployment steps. This eliminates duplicate references to the same
registry URL and simplifies future maintenance.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-06 16:22:55 +01:00
rjshrjndrn
333fd642be feat(ci): add DEPOT_TOKEN to release workflow env
Add the DEPOT_TOKEN secret to the environment variables section of the
release-deployment workflow to enable proper authentication with the
Depot service for Docker builds.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-06 16:22:55 +01:00
rjshrjndrn
5e93178876 refactor(ci): revamp release deployment workflow
Completely redesign the release deployment workflow to:
- Simplify image building and deployment process
- Add branch-based tagging with commit SHA
- Replace AWS ECR login with direct Docker registry auth
- Improve service deployment with explicit image setting
- Update naming and descriptions for better clarity

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-06 16:22:55 +01:00
Taha Yassine Kraiem
da433e1666 refactor(chalice): refactored sessions code 2025-03-06 14:57:37 +01:00
nick-delirium
a87f6c658c
ui: drop old deprecated message types 2025-03-06 14:49:45 +01:00
Alexander
4ebbfd3501 feat(canvases): improved performance 2025-03-06 14:12:49 +01:00
Alexander
6dc3dcfd4e feat(proto): removed a part of deprecated messages (min supported tracker version is 6.0.0) 2025-03-06 13:49:18 +01:00
Taha Yassine Kraiem
74146eecf1 refactor(chalice): customizable long-query args 2025-03-06 12:51:36 +01:00
Taha Yassine Kraiem
2e69a6e4df refactor(chalice): refactored sessions package 2025-03-06 12:34:34 +01:00
Taha Yassine Kraiem
afacbc1460 refactor(alerts): refactored code 2025-03-06 12:20:46 +01:00
Taha Yassine Kraiem
6e1316c05f refactor(chalice): refactored code 2025-03-06 11:16:10 +01:00
Taha Yassine Kraiem
d3851cedec refactor(chalice): refactored code 2025-03-06 10:14:52 +01:00
nick-delirium
a1989eb574
tracker: 16.0.1 changelog 2025-03-06 09:38:58 +01:00
nick-delirium
95455f761b
ui: checkbox spacing 2025-03-05 17:10:21 +01:00
nick-delirium
69d1d88600
tracker: 16.0.1-beta 2025-03-05 16:51:27 +01:00
nick-delirium
ceb40992cc
tracker: export tracker App type from entry.ts 2025-03-05 15:56:53 +01:00
nick-delirium
1ab7d0ad7f
tracker: introduce singleton approach for tracker 2025-03-05 15:56:53 +01:00
nick-delirium
2ee535f213
tracker: move domparser location inside observer 2025-03-05 15:56:53 +01:00
nick-delirium
0ba1382c16
tracker: fix spritemap parser, add svgdoc cache 2025-03-05 15:56:53 +01:00
Shekar Siri
c025b2f1a5 fix(ui): disable table sorter tooltips and fix indentation
Removes the default tooltips that appear when hovering over sortable
column headers by setting showSorterTooltip={false} on Table components.
Also fixes indentation in text components and function parameters for
better code readability.

Signed-off-by: Shekar Siri <sshekarsiri@gmail.com>
2025-03-05 14:53:13 +01:00
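
For reference, the antd prop in question; the column definition is illustrative:

```tsx
import React from 'react';
import { Table } from 'antd';

const rows = [{ key: '1', name: 'example' }];

// showSorterTooltip={false} removes the default hover tooltip
// on sortable column headers.
export const ListTable = () => (
  <Table
    showSorterTooltip={false}
    dataSource={rows}
    columns={[{ title: 'Name', dataIndex: 'name', sorter: true }]}
  />
);
```
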
Shekar Siri
918d9de4c9 change(ui): ignore logs 2025-03-05 13:57:53 +01:00
nick-delirium
047a5f52e7
ui: fix preset comparison check 2025-03-05 13:47:37 +01:00
Shekar Siri
7a88acfa9f change(ui): .gitignore 2025-03-05 13:29:14 +01:00
Taha Yassine Kraiem
366d2e1017 refactor(chalice): throw an error when endTimestamp is 0 for product analytics 2025-03-05 13:27:09 +01:00
Shekar Siri
46e6f1a503 feat(deploy): add production deployment target
Add new prod-deploy make target that validates environment
configuration before deploying to production. The target checks
for .env.production file and extracts NODE_ENV and API_EDP values
before executing the deployment process.

Also updates help documentation to include the new target.

Signed-off-by: Shekar Siri <sshekarsiri@gmail.com>
2025-03-05 13:21:18 +01:00
Shekar Siri
ce2a65f276 fix(filter-types): standardize SLOW_PAGE_LOAD enum value
Change SLOW_PAGE_LOAD enum string value from camelCase 'slow_pageLoad'
to snake_case 'slow_page_load' to maintain consistent naming convention
across all enum values.

Signed-off-by: Shekar Siri <sshekarsiri@gmail.com>
2025-03-05 12:46:39 +01:00
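
The rename, sketched (the enum name is an assumption):

```typescript
enum FilterType {
  // was: SLOW_PAGE_LOAD = 'slow_pageLoad'  (mixed camelCase)
  SLOW_PAGE_LOAD = 'slow_page_load', // consistent snake_case
}
```
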
Shekar Siri
f168f90f10 feat(build): add Makefile for frontend application management
This commit introduces a Makefile for the frontend application that:
- Provides commands to start the app in background or foreground
- Includes utilities to check status and stop the application
- Reads environment variables from .env file with fallback values
- Sets up logging infrastructure with timestamped log files
- Includes help documentation for all available commands

Signed-off-by: Shekar Siri <sshekarsiri@gmail.com>
2025-03-05 11:58:25 +01:00
Taha Yassine Kraiem
b6cca71053 refactor(chalice): refactored vault 2025-03-05 11:38:40 +01:00
Taha Yassine Kraiem
2841740afb fix(chalice): fixed vault 2025-03-05 11:28:40 +01:00
Shekar Siri
927f96cb79 fix(ui): update icon in health modal refresh button
Replace the string-based icon reference 'arrow-repeat' with the React
component <RefreshCcw size={18} /> from lucide-react. Also fix indentation
in the category mapping section for better code organization.

Signed-off-by: Shekar Siri <sshekarsiri@gmail.com>
2025-03-05 11:26:40 +01:00
Shekar Siri
e174a11466 fix(ui): update spots list header title for consistency
Change header title from "Spot List" to "Spots" to improve UI consistency
and make the heading more concise.

Signed-off-by: Shekar Siri <sshekarsiri@gmail.com>
2025-03-05 11:22:26 +01:00
Shekar Siri
ee4c5cf45d feat(ui): improve session search and count functionality
- Replace latestList with latestSessionCount to better track new sessions
- Move debouncing logic to PrivateRoutes for improved consistency
- Fix session count checking with proper API integration
- Clean up code formatting and remove unnecessary function calls

Signed-off-by: Shekar Siri <sshekarsiri@gmail.com>
2025-03-04 19:37:20 +01:00
Taha Yassine Kraiem
78ddbb9233 fix(chalice): fixed error-exp 2025-03-04 17:04:31 +01:00
Shekar Siri
66edf44f8b fix(ui): resolve tooltip conditional rendering
- Replace separate delay and disabled props with conditional title
- Ensure tooltip only shows content when not disabled
- Maintain consistent tooltip behavior across the application
- Prevent potential rendering errors on disabled tooltips
- Improve code maintainability for tooltip components

Signed-off-by: Shekar Siri <sshekarsiri@gmail.com>
2025-03-04 16:39:25 +01:00
Shekar Siri
0af941e543 refactor(SessionList): optimize component performance
- Fix TypeScript error with SessionItem JSX component
- Convert SessionItem to use modern React hooks and patterns
- Implement useCallback and useMemo for better rendering performance
- Properly handle optional chaining for conditional properties
- Remove console.log statements from search store
- Fix useEffect dependencies to prevent unnecessary rerenders
- Cleanup unused imports and commented code

Signed-off-by: Shekar Siri <sshekarsiri@gmail.com>
2025-03-04 16:33:01 +01:00
Shekar Siri
fd64d721c6 fix(ui): resolve SessionItem JSX rendering error
- Fix TypeScript error with SessionItem component by providing proper props
- Remove unused isSessionsRoute variable
- Remove commented out code for better clarity
- Fix formatting and indentation throughout the file
- Update setTimeout formatting to match project style

Signed-off-by: Shekar Siri <sshekarsiri@gmail.com>
2025-03-04 16:07:35 +01:00
Shekar Siri
f965c69a26 refactor(Copyright): modernize component and add dynamic year
- Convert from function to React.memo for better performance
- Replace hardcoded year with dynamic current year calculation
- Update styling classes to use Tailwind format
- Add hover states to improve UX
- Add rel="noopener noreferrer" for security best practices
- Change container from div to semantic footer element

Signed-off-by: Shekar Siri <sshekarsiri@gmail.com>
2025-03-04 15:47:01 +01:00
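
A sketch of the modernized component as described; the link target and classes are illustrative:

```tsx
import React from 'react';

const Copyright = React.memo(() => (
  // Semantic footer element instead of a plain div
  <footer className="text-sm text-gray-500">
    © {new Date().getFullYear()}{' '}
    <a
      href="https://openreplay.com"
      target="_blank"
      rel="noopener noreferrer" // security best practice for target="_blank"
      className="hover:underline"
    >
      OpenReplay
    </a>
  </footer>
));

export default Copyright;
```
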
Kraiem Taha Yassine
9bb93d5daa
fix(chalice): fixed get error details (#3084) 2025-03-04 15:20:51 +01:00
Shekar Siri
4d19586eb9 fix(ui): tailwind preflight causing antd button-with-icon alignment issue 2025-03-04 15:05:53 +01:00
nick-delirium
5e10e168c6
ui: fix timetable crash on empty record 2025-03-04 13:41:52 +01:00
Kraiem Taha Yassine
aa2c14b7c1
refactor(chalice): refactored collaboration code (#3082) 2025-03-04 10:53:54 +01:00
Shekar Siri
4ef61f6fb5 change(ui): sessions list header y padding 2025-03-03 18:39:58 +01:00
nick-delirium
95a5037abf
tracker: fix up formatting, changelog 2025-03-03 17:07:46 +01:00
Aspyryan
23514d4b3f
Running buffer slicing when browser is idle (#3050)
* Fixed tracker uploadOfflineRecording

* Make FlushBuffer perform slicing when browser is idle

* Use map function to cast away proxy objects in flushBuffer
2025-03-03 17:06:41 +01:00
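A rough sketch of the idle-time slicing this PR describes, with a setTimeout fallback where requestIdleCallback is unavailable; the buffer shape, chunk size, and spread-copy are assumptions drawn from the bullet about casting away proxy objects:

```ts
type Message = Record<string, unknown>;

function flushBufferWhenIdle(
  buffer: Message[],
  onChunk: (chunk: Message[]) => void,
  chunkSize = 1000,
) {
  const schedule = (cb: () => void) =>
    typeof requestIdleCallback === 'function'
      ? requestIdleCallback(cb)
      : setTimeout(cb, 0);

  const step = () => {
    if (buffer.length === 0) return;
    // map() with a spread copies each element, casting away any reactive
    // proxy wrappers before the chunk leaves the tracker.
    const chunk = buffer.splice(0, chunkSize).map((m) => ({ ...m }));
    onChunk(chunk);
    schedule(step); // yield back to the browser between slices
  };
  schedule(step);
}
```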
Delirium
ee46413b13
Events for E2E testing (#3081)
* ui: change export event ui, add rightblock panel

* ui: add timeline select checkbox

* ui: keep selected framework in localstorage

* ui: on timeline => on the timeline
2025-03-03 16:36:42 +01:00
nick-delirium
9f57271af2
ui: update loglevel for observed node warning, drop digit computing from attributeSender 2025-03-03 16:30:24 +01:00
Kraiem Taha Yassine
84771542a6
dev (#3080)
* fix(chalice): fixed unprocessed_sessions.py

* refactor(chalice): refactored favorite sessions

* fix(chalice): fixed errors handling issue
2025-03-03 13:43:59 +01:00
nick-delirium
83f8b67f74
ui: fix sankey crash, fix journey startpoint size 2025-03-03 12:36:31 +01:00
Shekar Siri
6af9f719c8 fix(ui): ui request credentials for refresh token 2025-03-03 12:24:26 +01:00
Shekar Siri
789427dd57 change(ui): copyright 2025-03-03 12:24:04 +01:00
Alexander
59bbc6a903 feat(canvas): fixed an issue with already existing archive 2025-03-03 08:20:17 +01:00
Alexander
0529ee3afd feat(db): reduced the error log length 2025-03-03 08:15:57 +01:00
Shekar Siri
307b0c1cd8 change(api): tenant_id usage 2025-02-28 21:52:54 +01:00
nick-delirium
11a2ea48bc
ui: fix caching for autocomplete values 2025-02-28 17:39:21 +01:00
Shekar Siri
1146900dc0 fix(ui): search call behaviour 2025-02-28 17:26:36 +01:00
Ghaida Bouchaala
0a999247e4
update docs links (#3076) 2025-02-28 16:54:04 +01:00
Alexander
f13ad8a882 feat(http): config changes 2025-02-28 15:41:45 +01:00
Alexander
0d12fdddc9 feat(canvas): moved logs to debug 2025-02-28 15:36:41 +01:00
Shekar Siri
c0a5415eb9 change(api): ee related changes for notes 2025-02-28 15:29:03 +01:00
Alexander
b8a70367ed feat(sessions): added the specific log 2025-02-28 15:22:56 +01:00
Shekar Siri
1efe5c87e8 change(db): ee related init schema updated for notes 2025-02-28 15:21:53 +01:00
Alexander
2dcbfe2ef9 feat(integrations): removed 'req' from jwt_spot_secret 2025-02-28 15:01:03 +01:00
Alexander
fedc48bd0e feat(backend): small changes from saas repo 2025-02-28 14:39:54 +01:00
nick-delirium
de72e79fc6
ui: rm random log 2025-02-28 14:37:09 +01:00
nick-delirium
d43bc3a2e9
tracker: fix tests, release 16.0 + 11.0 (assist) 2025-02-28 10:34:53 +01:00
nick-delirium
8ba6a17055
ui: fix error table pagination 2025-02-28 10:34:39 +01:00
nick-delirium
e5809a5eff
ui: fix long loader ui 2025-02-28 09:26:32 +01:00
Mehdi Osman
171fd5aa59
Update date 2025-02-27 19:42:44 -05:00
Kraiem Taha Yassine
533fb71cb7
Dev (#3074)
* fix(chalice): fixed public api

* fix(chalice): changed user-journey response

* fix(chalice): fixed viewed sessions
2025-02-27 23:42:29 +01:00
Kraiem Taha Yassine
90964e8f50
Dev (#3073)
* fix(chalice): fixed public api

* fix(chalice): changed user-journey response
2025-02-27 21:28:32 +01:00
Kraiem Taha Yassine
7d5ac6a8c9
fix(chalice): fixed public api (#3072) 2025-02-27 21:11:51 +01:00
Sudheer Salavadi
32b281f689
Improvements in Sessions list & Cobrowsing (#3071) 2025-02-27 20:26:03 +01:00
Kraiem Taha Yassine
b175c836a3
fix(chalice): fixed sessions favorite (#3070) 2025-02-27 19:01:01 +01:00
Kraiem Taha Yassine
4e54bced9c
fix(chalice): added get note by id (#3069) 2025-02-27 18:30:44 +01:00
nick-delirium
e2fa3c91e2
ui: fix tabclose events distribution 2025-02-27 17:48:47 +01:00
nick-delirium
19c8fba445
ui: fix button icons 2025-02-27 16:37:05 +01:00
nick-delirium
94e8e0319d
ui: fix cobrowse buttons 2025-02-27 16:17:41 +01:00
nick-delirium
ec8f9a349d
ui: hide training videos from saas 2025-02-27 16:03:18 +01:00
Alexander
992cb2feca feat(peers): removed peers actions 2025-02-27 13:29:57 +01:00
Alexander
844f79a989 feat(peers): removed the service itself 2025-02-27 13:28:43 +01:00
rjshrjndrn
1ec06d360e chore(helm): remove peers service 2025-02-27 10:38:15 +01:00
Andrey Babushkin
fd76f7c302
Migrate to webrtc (#3051)
* resolved conflicts

* resolved conflicts

* translated comments

* changed console.log message lang

* changed console to logs

* implementing conference call

* add isAgent flag

* add webrtc handlers

* add conference call

* removed conference calls

* fix lint error

---------

Co-authored-by: Andrey Babushkin <a.babushkin@lemon-ai.com>
2025-02-27 10:12:27 +01:00
Andrey Babushkin
c793d9d177
add handler (#3062)
* add handler

* fix socket id handling
2025-02-27 10:12:06 +01:00
nick-delirium
1c1a41bb55
ui: pick series by name if no id exist 2025-02-27 09:41:49 +01:00
nick-delirium
6873f1c56b
ui: fix funnel wording 2025-02-26 17:33:13 +01:00
Kraiem Taha Yassine
d79665cbea
fix(chalice): fixed delete/update metadata used in conditional recording (#3068) 2025-02-26 15:59:10 +01:00
Shekar Siri
256c065153 fix(ui): metadata reload on project config 2025-02-26 14:52:44 +01:00
Kraiem Taha Yassine
114bd4080b
fix(chalice): fixed EE autocomplete top values (#3067)
fix(chalice): fixed funnels param
2025-02-25 18:22:51 +01:00
nick-delirium
d4965f2137
ui: fetch sessions from journey start point 2025-02-25 17:40:33 +01:00
nick-delirium
8ed97b353b
ui: fix empty funnel behavior 2025-02-25 17:12:11 +01:00
Delirium
ac232ef599
Str dict global (#3064)
* testing global string dictionary

* ui: v bump

* tracker: save last prefix

* tracker: subtract years from dateid

* tracker: fix digit shaving
2025-02-25 15:19:31 +01:00
nick-delirium
264f35cc9e
ui: add pwright 2025-02-25 15:10:24 +01:00
nick-delirium
d85f63c72e
ui: add icon to e2e button 2025-02-25 14:55:29 +01:00
nick-delirium
4b16e50e5f
ui: export events for e2e 2025-02-25 14:54:02 +01:00
nick-delirium
735b86d778
ui: fix pathname reset 2025-02-25 10:15:49 +01:00
nick-delirium
78bb1c3c6b
ui: remove unused libraries 2025-02-24 16:22:48 +01:00
Delirium
968a3eefde
ui: migrating old components -> ant (#3060)
* ui: migrating old components -> ant

* ui: moving input, tooltip, toggler, checkbox... -> Toggler\s*(.)? from 'UI

* ui: more components moved

* ui: move popover to ant
2025-02-24 16:11:44 +01:00
nick-delirium
1122ced4c3
ui: remove utm and tagged element from mobile filter 2025-02-24 16:00:31 +01:00
nick-delirium
b406893d00
ui: fix funnel table 2025-02-24 15:57:42 +01:00
nick-delirium
8b2cf031ca
ui: chart drilldown -- fix datatable filtering, fix series filtering 2025-02-24 13:59:52 +01:00
nick-delirium
fe06f43dd5
ui: date picker and db name improvements 2025-02-24 10:37:08 +01:00
Kraiem Taha Yassine
64e08916f9
fix(chalice): fixed session's clickmap (#3056) 2025-02-21 18:21:49 +01:00
nick-delirium
99d6545720
ui: fix table crash 2025-02-21 17:48:38 +01:00
nick-delirium
7e4782ae71
ui: hide selection in tablemode 2025-02-21 17:36:28 +01:00
Kraiem Taha Yassine
ed3020dc7e
fix(chalice): fixed enumeration based session's filters (including custom events) (#3055) 2025-02-21 17:31:56 +01:00
nick-delirium
74f6c2cd66
ui: fix autoopen state 2025-02-21 17:15:37 +01:00
Kraiem Taha Yassine
0533624c25
fix(chalice): fixed EE create heatmaps card (#3054) 2025-02-21 16:25:12 +01:00
nick-delirium
c271e01dfc
ui: reload tags on project change 2025-02-21 15:42:59 +01:00
Kraiem Taha Yassine
c07ad14ffc
Dev (#3053)
* fix(chalice): support wrong payload for user-journey drill-down

* fix(chalice): ignore event's MV for v1.22
refactor(DB): ignore event's MV for v1.22

* fix(chalice): fixed EE metrics
2025-02-21 15:21:22 +01:00
Shekar Siri
b91d979c98 change(ui): recordings admin only access 2025-02-21 15:06:13 +01:00
Shekar Siri
d63877de1c change(ui): condition check for recordings module 2025-02-21 15:05:30 +01:00
Kraiem Taha Yassine
3a331d266c
Dev (#3052)
* fix(chalice): support wrong payload for user-journey drill-down

* fix(chalice): ignore event's MV for v1.22
refactor(DB): ignore event's MV for v1.22
2025-02-21 12:56:05 +01:00
nick-delirium
e8835d3058
ui: fix button styling for assist 2025-02-21 12:40:22 +01:00
nick-delirium
3c32e8eec1
ui: fix broken imports 2025-02-21 11:13:40 +01:00
nick-delirium
b1d51c19ea
ui: fix broken import 2025-02-21 11:04:52 +01:00
nick-delirium
f6015f31f5
ui: fix hlid opener 2025-02-21 10:58:45 +01:00
Shekar Siri
06113f7534 change(ui): notes to highlights modules and menu 2025-02-21 10:45:54 +01:00
nick-delirium
8500c1c11e
ui: disable hl edit for non creators 2025-02-21 10:38:15 +01:00
nick-delirium
fc542cd7d2
ui: fix chart label alignments 2025-02-21 10:33:10 +01:00
nick-delirium
44a1d96d2d
ui: journey fixes 2025-02-21 09:40:27 +01:00
Mehdi Osman
bb8e097759
Update .env.sample 2025-02-20 18:24:52 -05:00
Mehdi Osman
7e7387001f
Update .env.sample 2025-02-20 18:17:59 -05:00
nick-delirium
5dd1256cd3
ui: fix card modal from staying open 2025-02-20 17:58:25 +01:00
nick-delirium
bf56cc53a7
tracker: release 15.0.5-beta.1 2025-02-20 17:58:25 +01:00
Aspyryan
da9b926b25
Fixed tracker uploadOfflineRecording (#3048)
Co-authored-by: Jasper Baetsle <jasper.baetsle@orbid.be>
2025-02-20 17:58:25 +01:00
Kraiem Taha Yassine
ef3ed8b690
fix(chalice): fixed int value check (#3049) 2025-02-20 17:39:13 +01:00
nick-delirium
b86e6fdadc
ui: add sankey link sourceValue to chart tooltip 2025-02-20 17:07:48 +01:00
Kraiem Taha Yassine
1293cbde7d
fix(chalice): fixed cards different viewTypes (#3047) 2025-02-20 16:21:08 +01:00
nick-delirium
3e1f073e07
ui: fix subcat for heatmap 2025-02-20 15:39:43 +01:00
Shekar Siri
7cfe29adf3 change(ui): remove segments for mobile since it has only one category 2025-02-20 15:31:19 +01:00
nick-delirium
aa07d41bb5
ui: merge startpoint with widgetsessions mapper 2025-02-20 14:47:58 +01:00
nick-delirium
305c7ae064
ui: fix hasChanged flag, fix auto height for sankey 2025-02-20 14:21:56 +01:00
Kraiem Taha Yassine
724d5a2897
Dev (#3046)
* refactor(chalice): upgraded dependencies

* refactor(chalice): changed logging

* fix(chalice): fixed CH pagination
2025-02-20 13:30:36 +01:00
nick-delirium
f8a40fd875
ui: disallow taint for clickmap thumbnails 2025-02-20 12:18:03 +01:00
nick-delirium
6c7880efbc
ui: disallow taint for clickmap thumbnails 2025-02-20 12:05:29 +01:00
nick-delirium
2465029a6c
ui: remove broken sank tooltip row 2025-02-20 12:01:06 +01:00
nick-delirium
11824d2993
ui: set empty defaults for clickmap 2025-02-20 11:40:35 +01:00
nick-delirium
0a4379be6b
ui: fix heatmap expand state 2025-02-20 11:30:47 +01:00
Kraiem Taha Yassine
659aa7495f
fix(chalice): fixed assist record (#3045) 2025-02-19 13:48:44 +01:00
rjshrjndrn
af5d730028 fix(backend): go sum 2025-02-19 11:37:06 +01:00
rjshrjndrn
346fd76ea8 chore(db): Update min CH version 2025-02-19 10:57:03 +01:00
nick-delirium
963c8354c6
ui: fix playlink hover state 2025-02-19 09:27:05 +01:00
Kraiem Taha Yassine
4970bc365b
fix(chalice): fixed product analytics query issue for old CH version (#3044) 2025-02-18 18:28:30 +01:00
rjshrjndrn
def33daa6c fix(backend): go sum 2025-02-18 18:18:34 +01:00
Kraiem Taha Yassine
f752876675
refactor(chalice): product analytics log refactoring (#3043) 2025-02-18 18:00:49 +01:00
Alexander
e046bcbe0a feat(go.mod): upgraded CH library version 2025-02-18 17:32:12 +01:00
Kraiem Taha Yassine
3ed8b3c27d
Dev (#3041)
* refactor(chalice): refactored code

* refactor(DB): changed delta

* refactor(chalice): product analytics log refactoring
2025-02-18 16:37:37 +01:00
nick-delirium
e996600dc8
ui: fix data parser for heatmap 2025-02-18 15:19:41 +01:00
nick-delirium
00a834b143
ui: preset default chart view for few tablelike charts 2025-02-18 14:38:48 +01:00
nick-delirium
b4497edb05
ui: preset default chart view for heatmap 2025-02-18 14:36:22 +01:00
rjshrjndrn
231a3ac330 docs(vars): keep the endpoint empty for iam auth.
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-02-18 13:21:58 +01:00
rjshrjndrn
b70effa904 chore(helm): remove clickhouse resource requests
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-02-18 13:21:58 +01:00
nick-delirium
63b6b39d75
ui: fix weird dateRange interaction with time picker 2025-02-18 13:05:48 +01:00
nick-delirium
cd5d6e861d
ui: fix undef component in AssistSessionsModal 2025-02-18 10:02:09 +01:00
Shekar Siri
2144a90ea7 change(ui): spot menu item handle collapse 2025-02-17 17:59:23 +01:00
Shekar Siri
ecdb98b057 change(ui): spot menu item handle collapse 2025-02-17 17:56:24 +01:00
Kraiem Taha Yassine
8d0c9d5a1f
Dev (#3037)
* refactor(chalice): refactored code

* fix(frontend): changed LAST_7_DAYS/LAST_30_DAYS/PREV_7_DAYS/PREV_30_DAYS to return rounded boundaries
2025-02-17 17:55:32 +01:00
Kraiem Taha Yassine
f8a1c9447b
fix(frontend): fixed LAST_7_DAYS/LAST_30_DAYS/PREV_7_DAYS/PREV_30_DAYS time periods (#3036) 2025-02-17 16:39:17 +01:00
Alexander
f1614b6626 feat(canvas): removed unnecessary log 2025-02-17 15:21:37 +01:00
nick-delirium
adb359b3bf
ui: reset autocomplete values with project change 2025-02-17 14:47:56 +01:00
nick-delirium
9949928335
ui: support auto opening for AutocompleteModal 2025-02-14 16:14:17 +01:00
nick-delirium
6360b9a580
ui: re-download comparison data on metricOf change 2025-02-14 10:00:58 +01:00
nick-delirium
132de0af0d
ui: reduce limit for displayed session tabs 2025-02-14 09:54:55 +01:00
Kraiem Taha Yassine
b70a641af5
Dev (#3034)
* fix(chalice): changed trend - group by users

* fix(chalice): fixed partial right table issue
2025-02-13 18:12:21 +01:00
nick-delirium
ea142b9596
ui: special check for selected values 2025-02-13 17:27:29 +01:00
Sudheer Salavadi
08340eb0f4
Omni-Search filters modal updates (#3030) 2025-02-13 15:14:53 +01:00
Kraiem Taha Yassine
7b61d06454
Dev (#3031)
* fix(chalice): fixed share to slack

* fix(chalice): fixed empty right table limitation
2025-02-13 14:09:15 +01:00
nick-delirium
d031210365
ui: conditional rec fixes 2025-02-13 12:33:43 +01:00
Shekar Siri
4b21194ec5 fix(ui): webhooks ui fixes and improvements 2025-02-13 12:27:40 +01:00
nick-delirium
6bd5b60b1e
ui: check period start/end to prevent useless calculations 2025-02-13 10:59:47 +01:00
nick-delirium
c55b1971c4
ui: remove filtering function from sank 2025-02-13 10:39:53 +01:00
nick-delirium
e34e4fad6c
ui: fallback for hl 2025-02-12 16:03:55 +01:00
nick-delirium
57041140cb
ui: fix custom comparison period generation 2025-02-12 15:53:26 +01:00
Kraiem Taha Yassine
d70ecab1d9
fix(chalice): working on a fix for reversed user-journey (reversed depth) (#3029) 2025-02-12 15:52:17 +01:00
Kraiem Taha Yassine
c4c5fcc2b2
fix(chalice): working on a fix for reversed user-journey (reversed links) (#3028) 2025-02-12 15:32:18 +01:00
Kraiem Taha Yassine
118412d4ab
Dev (#3027)
* fix(chalice): fixed EE code after refactoring

* fix(chalice): fixed sourcemaps presign URL

* fix(chalice): working on a fix for reversed user-journey
2025-02-12 14:48:38 +01:00
nick-delirium
38653d200f
ui: mobile hl player 2025-02-12 14:00:36 +01:00
nick-delirium
3be8e8092d
ui: reload hls on site change 2025-02-12 10:50:26 +01:00
nick-delirium
2654273f97
ui: fix sank sizes in db/in builder 2025-02-12 10:48:59 +01:00
nick-delirium
7dc70c0ce5
ui: rm meta cta from event list modal 2025-02-12 09:24:04 +01:00
rjshrjndrn
7d31197c78 fix(helm): regression #3026
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-02-11 19:12:17 +01:00
Sudheer Salavadi
f4b659e508
Improvements in Saved Search and Reset Password Modules (#3025) 2025-02-11 17:52:43 +01:00
rjshrjndrn
31290d7a89 feat(chalice): if iam role is used then the host variable should be empty
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-02-11 17:26:31 +01:00
nick-delirium
198c5e3a92
ui: fix autocomplete double fetch 2025-02-11 17:09:19 +01:00
nick-delirium
7da11341cf
ui: same filter keys for exclusion in sankey, fix meta cta 2025-02-11 16:58:42 +01:00
Shekar Siri
9492234ccc change(ui): close the share modal only on success 2025-02-11 16:46:53 +01:00
Shekar Siri
ed528e7b5e change(api): reset password error message 2025-02-11 16:37:25 +01:00
nick-delirium
d7a85d0920
ui: sankey styles fixes 2025-02-11 16:28:52 +01:00
rjshrjndrn
f7339c8954 fix(helm): handle empty s3 endpoint url
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-02-11 16:12:54 +01:00
Shekar Siri
febe784322 change(api): reset password error message 2025-02-11 16:05:06 +01:00
Kraiem Taha Yassine
f6cd20712d
fix(frontend): for Past 7 Days range, return an exact 7.00 days timeperiod instead of 7.99 days timeperiod (#3024) 2025-02-11 15:35:32 +01:00
Shekar Siri
c2b84d18b5 fix(ui): project create or delete handling 2025-02-11 15:23:46 +01:00
Kraiem Taha Yassine
6579b6842b
Dev (#3023)
* fix(chalice): removed support for all webVitals/errors/performance/resources predefined card types because the UI is not showing them anymore

* refactor(chalice): changed all charts CH queries
2025-02-11 15:11:55 +01:00
Shekar Siri
ba55b359fb change(ui): auth with api error messages and antd components 2025-02-11 14:44:24 +01:00
nick-delirium
0d9c265452
ui: table card creation fix, notif item swap, empty metadata 2025-02-11 14:35:58 +01:00
Kraiem Taha Yassine
b09becdcb7
Dev (#3022)
* refactor(chalice): refactored code

* refactor(chalice): removed support for domainsErrors4xx & domainsErrors5xx predefined cards because UI is not showing them anymore
refactor(chalice): removed support of processed_sessions & count_requests predefined cards because UI is not showing them anymore

* fix(chalice): fixed table of errors CH

* fix(chalice): removed support for errorsPerDomains & errorsPerType predefined cards because UI is not showing them anymore

* fix(chalice): removed support for speedLocation predefined card because UI is not showing it anymore
2025-02-11 13:54:47 +01:00
nick-delirium
4819907635
ui: fix sankey node titles, fix option saving 2025-02-11 13:07:55 +01:00
nick-delirium
6e7ced6959
ui: show note start point instead of date 2025-02-11 11:49:25 +01:00
nick-delirium
3a2e822bea
ui: support empty hls 2025-02-11 11:39:15 +01:00
Shekar Siri
4245dd49e8 fix(api): notes message validation 2025-02-11 11:17:06 +01:00
nick-delirium
b93e953fd9
ui: rm hlid from page location 2025-02-11 11:07:21 +01:00
nick-delirium
a67ca7b870
ui: rm note icon 2025-02-11 10:57:18 +01:00
rjshrjndrn
d457332461 fix(actions): assist-stats-ee
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-02-10 18:02:01 +01:00
nick-delirium
36a0fad5b4
ui: funnel view type 2025-02-10 17:23:23 +01:00
nick-delirium
e6c7c43246
ui: chart alignments 2025-02-10 17:16:41 +01:00
nick-delirium
b04bcb935e
ui: fix drilldown reset 2025-02-10 17:06:04 +01:00
nick-delirium
9e17673a4a
ui: squeeze time ranges for charts 2025-02-10 15:55:09 +01:00
Rajesh Rajendran
1799f9d4a2
fix: crons don't have proper commit (#3020)
* fix(action): probable image not correct tag issue

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* fix(ci): possible fix for cron image update

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

---------

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-02-10 15:51:29 +01:00
nick-delirium
eed357a79b
ui: rm log 2025-02-10 15:41:36 +01:00
nick-delirium
ff061567d8
ui: html converter test 2025-02-10 15:19:40 +01:00
nick-delirium
ebee06b37a
ui: test new snapshotter for thumbnail 2025-02-10 15:02:43 +01:00
Shekar Siri
442611cb26 fix(ui): mobile highlights modal 2025-02-10 14:16:32 +01:00
Kraiem Taha Yassine
ad022f9cea
fix(alerts): fixed alerts (#3019) 2025-02-10 13:11:50 +01:00
Taha Yassine Kraiem
b9ac2d2238 chore(actions): changed github actions 2025-02-10 12:58:21 +01:00
Shekar Siri
3191843829 change(ui): highlights messages to be nullable 2025-02-10 12:21:31 +01:00
nick-delirium
5267a1c830
ui: revert lib 2025-02-10 12:00:36 +01:00
nick-delirium
0b7b857d65
ui: swap html converter, fix tainted images map, forbid cors objects for highlight 2025-02-10 11:46:52 +01:00
nick-delirium
4ba16bada1
ui: fix api client pathing for spot, integrations 2025-02-10 11:35:33 +01:00
nick-delirium
c7523a1526
tracker: option to disable network 2025-02-10 10:03:27 +01:00
nick-delirium
3e722ea5ba
ui: fix sankey session filtering 2025-02-10 09:54:15 +01:00
nick-delirium
06bad31a7d
ui: prevent overflow in filter modals 2025-02-10 09:37:29 +01:00
Kraiem Taha Yassine
4f2b8d43b7
fix(alerts): fixed alerts (#3014) 2025-02-07 17:58:52 +01:00
Kraiem Taha Yassine
51ba151794
fix(alerts): fixed alerts (#3013) 2025-02-07 17:45:29 +01:00
Kraiem Taha Yassine
fda53bc4ad
fix(alerts): fixed alerts (#3012) 2025-02-07 17:24:34 +01:00
Shekar Siri
d45347da2b fix(ui): session share modal fetch list and modal component 2025-02-07 16:47:21 +01:00
Shekar Siri
9c1be9b22a change(react-native): version jump that fixes kotlin syntax issues 2025-02-07 15:57:22 +01:00
Shekar Siri
dd549b4c1f fix(ui): filters padding 2025-02-07 14:26:53 +01:00
Shekar Siri
5dc5f085b9 fix(ui): clear filters is disabled for events 2025-02-07 14:05:09 +01:00
Shekar Siri
e325eee47e change(ui): debounce the highlights search 2025-02-07 14:00:52 +01:00
rjshrjndrn
0d68fcc428 fix(helm): check pg version
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-02-07 11:29:28 +01:00
Shekar Siri
e73d633518 change(ui): notes route cleanup 2025-02-07 11:21:34 +01:00
Shekar Siri
22b95f308c change(ui): removed unsupported cards for mobile 2025-02-07 10:57:30 +01:00
Shekar Siri
f87c3e7a5e change(ui): text change for trend card 2025-02-07 10:32:31 +01:00
Shekar Siri
603df2d559 fix(ui): default filter for mobile is wrong 2025-02-07 10:13:08 +01:00
Shekar Siri
98405db9ff fix(ui): activeTab check crashing 2025-02-07 10:12:49 +01:00
Shekar Siri
d4092ebc69 fix(ui): filter item check for subcategory first 2025-02-06 15:51:44 +01:00
Shekar Siri
5d49a91dde fix(ui): drag events 2025-02-06 15:04:36 +01:00
Shekar Siri
8162236139 fix(ui): maintain the card type on reload when creating a card from the list or from the dashboard 2025-02-06 13:11:25 +01:00
Sudheer Salavadi
3dc933daf3
Product analytics refinements (#3011)
* Various UX, UI and Functional Improvements in Dashboards & Cards

- Depth filter of Sankey chart data in frontend
- Dashboard & Cards empty state view updates
- Disabled save image feature on cards

* Fixed empty views and headers

* Various improvements across dashboards and cards.

* Dashboard and Sankey refinements.

* More improvements in Sankey and Dashboard

* Autocomplete with checklist -- improvements
2025-02-06 09:43:10 +01:00
Kraiem Taha Yassine
afb08cfe6d
fix(chalice): fixed EE sessions search for mobile projects (#3010)
refactor(chalice): enhanced sessions search payload validation
2025-02-05 18:50:14 +01:00
Shekar Siri
500d70aa67 fix(ui): project form to use the same component that shows errors 2025-02-05 16:59:13 +01:00
Kraiem Taha Yassine
c697c99fec
fix(chalice): fixed autocomplete (#3009) 2025-02-05 16:39:17 +01:00
rjshrjndrn
600eba27a1 fix(helm): variable value
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-02-05 14:51:36 +01:00
Alexander
c50515e799 feat(http): added missing web prefix to tags endpoint 2025-02-05 14:46:08 +01:00
Shekar Siri
d29c7f20a4 fix(ui): table csv export 2025-02-05 14:32:03 +01:00
Shekar Siri
a4b65c618f fix(ui): user journey is not sending the metricValue 2025-02-05 13:49:09 +01:00
rjshrjndrn
7a8be69c85 chore(init): Update kubernetes to version 1.31
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-02-05 12:36:44 +01:00
rjshrjndrn
92c142ec33 chore(databases): Update postgresql to 17.2
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-02-05 12:36:44 +01:00
rjshrjndrn
8d878a3445 feat(helm): Database version bounds
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-02-05 12:36:44 +01:00
rjshrjndrn
e1b05dbd33 chore(helmcharts): Update database versions
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-02-05 12:36:44 +01:00
Shekar Siri
30dd123f29 fix(ui): sessions listings settings filter 2025-02-05 12:21:41 +01:00
Sudheer Salavadi
f88ff53e15
Product analytics refinements (#3006)
* Various UX, UI and Functional Improvements in Dashboards & Cards

- Depth filter of Sankey chart data in frontend
- Dashboard & Cards empty state view updates
- Disabled save image feature on cards

* Fixed empty views and headers

* Various improvements across dashboards and cards.

* Dashboard and Sankey refinements.
2025-02-05 10:43:16 +01:00
Kraiem Taha Yassine
cb8d87e367
Dev (#3003)
* refactor(chalice): upgraded dependencies
refactor(crons): upgraded dependencies
refactor(alerts): upgraded dependencies

* fix(chalice): fixed boarding

* fix(chalice): fixed assign session

* refactor(assist-stats): upgraded dependencies

* fix(assist-stats): fixed import issue

* fix(chalice): changed env vars

* fix(chalice): fixed search sessions for EE
2025-02-04 19:06:10 +01:00
rjshrjndrn
0e5fe14dc2 chore(helm): make s3 external endpoint
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-02-04 18:45:20 +01:00
rjshrjndrn
1feb4bdc64 chore(helm): Adding secret with db secrets
Use this secret for all the db jobs.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-02-04 18:45:20 +01:00
Shekar Siri
de0c10de56 fix(ui): metric update in the list 2025-02-04 18:23:27 +01:00
Shekar Siri
c59dbbc79d fix(ui): table charts checking the total and list 2025-02-04 17:26:18 +01:00
Shekar Siri
49c408f44e fix(ui): update created card id 2025-02-04 16:52:47 +01:00
Shekar Siri
d374137e42 fix(ui): recording status check 2025-02-04 16:46:06 +01:00
Shekar Siri
7caa386d2d change(ui): session_replay permission check for sessions list, highlights and bookmarks/vault 2025-02-04 13:22:55 +01:00
Shekar Siri
82e170ff1c change(ui): do not set the active project on creation 2025-02-04 13:02:29 +01:00
Shekar Siri
047a4d0108 fix(ui): error handling 2025-02-04 12:55:27 +01:00
Shekar Siri
7485016f92 fix(ui): error handling 2025-02-04 12:55:27 +01:00
Alexander
ff6342298e feat(github): fixed some typos 2025-02-04 11:51:49 +01:00
Alexander
8d8e6176be feat(node.js): upgraded express module and node-alpine for sourcemap-reader and peers 2025-02-04 11:41:38 +01:00
Shekar Siri
82621012de fix(ui): issue form 2025-02-04 11:32:53 +01:00
Sudheer Salavadi
1b3a3dfc21
Product analytics refinements (#3002)
* Various UX, UI and Functional Improvements in Dashboards & Cards

- Depth filter of Sankey chart data in frontend
- Dashboard & Cards empty state view updates
- Disabled save image feature on cards

* Fixed empty views and headers

* Various improvements across dashboards and cards.
2025-02-04 09:49:49 +01:00
Shekar Siri
da923f13b9 fix(rn-android): syntax issue 2025-02-03 15:44:33 +01:00
Sudheer Salavadi
2a52de073d
Interaction and UI updates in Sankey Chart (#2997) 2025-02-03 14:40:15 +01:00
Alexander
ea8729dd93 feat(assist): upgraded assist version 2025-02-03 13:45:29 +01:00
Alexander
84f9c02802 feat(assist): upgraded uws library (ee) 2025-02-03 13:27:00 +01:00
Alexander
6ec7fe64a7 feat(assist): upgraded node version for docker 2025-02-03 10:16:50 +01:00
Alexander
68c5d986fe feat(assist): upgraded the express module version (vuln cause) 2025-02-03 09:56:46 +01:00
Kraiem Taha Yassine
cb977d54e1
refactor(chalice): changed default env vars (#2996) 2025-01-31 18:47:42 +01:00
nick-delirium
392088be22
ui: some tweaks for visual adjust jump 2025-01-31 17:32:33 +01:00
Kraiem Taha Yassine
88c1f18c48
refactor(chalice): upgraded dependencies (#2995)
refactor(crons): upgraded dependencies
refactor(alerts): upgraded dependencies
2025-01-31 17:23:15 +01:00
Shekar Siri
8597f9ef84 fix(ui): sql query for status with number 2025-01-31 15:08:56 +01:00
Sudheer Salavadi
12f4d9a10c
Highlight on timeline 2025-01-31 11:23:55 +01:00
Kraiem Taha Yassine
ab7e9e505d
fix(chalice): user-journey reversed hide minor paths (#2992) 2025-01-30 18:50:26 +01:00
Delirium
0484c0ccdd
ui: tracked user profile and list (#2991)
* ui: tracked user profile and list

* ui: turnoff unsupported node cb

* ui: excess toggle
2025-01-30 18:06:12 +01:00
rjshrjndrn
e72d492e66 fix(clickhouse): minimal cpu/mem for clickhouse
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-01-30 16:55:20 +01:00
rjshrjndrn
12472cf84c fix(cli): string interpolation
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-01-30 16:46:19 +01:00
rjshrjndrn
2fc4f552d5 chore(cli): proper formatting
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-01-30 16:42:29 +01:00
rjshrjndrn
dbcb651f40 fix(cli): string interpolation
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-01-30 16:39:05 +01:00
rjshrjndrn
cd868f736b docs(cli): todo version constraint
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-01-30 16:22:51 +01:00
rjshrjndrn
d4b3791b19 chore(release): Adding clickhouse foss manifest 2025-01-30 16:20:38 +01:00
rjshrjndrn
a8f167b5af feat(cli): add version specific checks
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-01-30 16:15:38 +01:00
Kraiem Taha Yassine
5c1e5078b5
fix(chalice): fixed string fetchDuration value support (#2989) 2025-01-30 15:07:21 +01:00
Kraiem Taha Yassine
c2ce9b8466
Dev (#2988)
* fix(chalice): changed new user-journey to return identical response to the old user-journey

* fix(chalice): fixed boarding

* refactor(chalice): upgraded dependencies
refactor(crons): upgraded dependencies
refactor(alerts): upgraded dependencies

* feat(DB): product analytics schema for CH
2025-01-30 13:39:18 +01:00
Kraiem Taha Yassine
3da965959b
fix(chalice): fixed get first mob (#2985) 2025-01-29 18:56:41 +01:00
Kraiem Taha Yassine
44108bd57e
Dev (#2982)
* refactor(chalice): code cleaning

* refactor(chalice): user journey use new DB structure

* refactor(chalice): fixed user journey when start event is different from the visible type
2025-01-28 17:29:07 +01:00
Alexander
30c0e5abe9 feat(api): fixed a crashloop in chalice-ee 2025-01-28 17:26:42 +01:00
rjshrjndrn
3dd56cbf13 fix: helm chart migration
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-01-28 16:27:38 +01:00
Taha Yassine Kraiem
084749b6f9 refactor(chalice): return file_key with first mob 2025-01-28 16:21:55 +01:00
Taha Yassine Kraiem
693634fb14 refactor(chalice): return file_key with first mob 2025-01-28 16:09:42 +01:00
Alexander
d50ad9e579 feat(go.mod): upgraded imports 2025-01-28 14:48:11 +01:00
Alexander
c83dec7774 feat(connector): fix the s3 upload method's signature 2025-01-28 14:39:10 +01:00
Alexander
2b05bb59af feat(http): added missing responser to the conditions module 2025-01-28 14:35:09 +01:00
nick-delirium
4c6f23e31f
ui: better index check 2025-01-28 14:18:36 +01:00
Kraiem Taha Yassine
312db29d23
Dev (#2979)
* refactor(chalice): code cleaning

* refactor(chalice): user journey use new DB structure
2025-01-28 13:39:11 +01:00
Shekar Siri
3038fe58d0 fix(ui): default settings values for existing users 2025-01-28 12:51:16 +01:00
Shekar Siri
defcc65848 fix(ui): co-browser (assist) list sorting - duration 2025-01-28 12:43:38 +01:00
Alexander
14d64256a9 feat(azure): added the missing func argument 2025-01-28 11:47:59 +01:00
rjshrjndrn
bced0611ea chore(devops): enable clickhouse for foss
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-01-28 11:46:51 +01:00
rjshrjndrn
1dbf29a595 chore(release): update version 2025-01-28 11:36:49 +01:00
Shekar Siri
93db47901d fix(ui): co-browser (assist) list sorting 2025-01-28 11:13:02 +01:00
nick-delirium
260ed8ac19
ui: better naming for lstorage key 2025-01-28 10:59:14 +01:00
nick-delirium
2585107bd7
ui: simplify saving for debug 2025-01-28 10:56:10 +01:00
nick-delirium
bd80b7fccd
ui: add debug toggler 2025-01-28 10:54:13 +01:00
nick-delirium
ab84a872db
ui: fix assist filter list 2025-01-27 16:47:42 +01:00
Alexander
b4d2e685de feat(azure): added the content encoding support 2025-01-27 15:41:25 +01:00
Alexander
16182031e1 feat(s3): added the content encoding support 2025-01-27 15:37:04 +01:00
nick-delirium
6cbe17c8e6
ui: fix note edit dropdown 2025-01-27 15:11:48 +01:00
nick-delirium
fbfd0a9854
tracker: fix singletab initialization 2025-01-27 15:00:35 +01:00
nick-delirium
778112c751
ui: fix network panel re-render 2025-01-27 12:08:46 +01:00
nick-delirium
0f744ec1a0
ui: fix tainted images for highlight? 2025-01-24 16:36:46 +01:00
nick-delirium
ab454894f8
ui: rename string 2025-01-24 15:18:44 +01:00
Delirium
6882c62a32
Better network sanitizer (#2969)
* tracker: improve network sanitization

* ui: fix hl image gen

* tracker: rm sanitizer thing
2025-01-24 14:06:34 +01:00
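A hedged sketch of the kind of header scrubbing a network sanitizer like the one in #2969 performs; the header list and redaction marker are assumptions, not the tracker's actual API:

```ts
const SENSITIVE_HEADERS = new Set(['authorization', 'cookie', 'set-cookie']);

// Copies the headers, redacting anything sensitive before it is recorded.
function sanitizeHeaders(headers: Record<string, string>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [name, value] of Object.entries(headers)) {
    out[name] = SENSITIVE_HEADERS.has(name.toLowerCase()) ? '***' : value;
  }
  return out;
}
```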
nick-delirium
c2878bacd4
ui: beautify 2025-01-24 11:45:01 +01:00
Alexander
1a70e61de8 feat(http): removed un-started handler 2025-01-24 11:30:20 +01:00
nick-delirium
f4c94aa2d1
tracker: remove unstarted call 2025-01-24 11:27:32 +01:00
nick-delirium
6ccf2e2887
ui: separate card types for device types 2025-01-24 10:42:13 +01:00
Delirium
2cd96b0df0
Highlight UI (#2951)
* ui: start highlight ui

* ui: tag items

* ui: connecting highlights to notes api...

* Highlight feature refinements (#2948)

* ui: move clips player to foss, connect notes api to hl

* ui: tune note/hl editing, prevent zoom slider body from jumping around

* ui: safe check for tag

* ui: fix thumbnail gen

* ui: fix thumbnail gen

* ui: make player modal wider, add shadow

* ui: custom warn badge for clips

* ui: swap icon for note event wrapper

* ui: rm other, fix cancel

* ui: moving around creation modal

* ui: bg tint

* ui: rm disabled for text btn

* ui: fix ownership sorting

* ui: close player on bg click

* ui: fix query, fix min distance for default range

* ui: move hl list header out of list comp

* ui: spot list header segmented size

* Various improvements in highlights (#2955)

* ui: update hl in hlPanel comp

* ui: rm debug

* ui: fix icons file

---------

Co-authored-by: Sudheer Salavadi <connect.uxmaster@gmail.com>
2025-01-24 09:59:54 +01:00
Delirium
622d0a7dfa
ui: omnisearch, timeseries charts redesign (#2791)
* ui: start redesign for live search/list

* ui: remove search field, show filters picker by default for assist

* ui: filter modal wip

* ui: filter modal wip

* ui: finish with omnisearch thing

* ui: start new dashboard redesign

* refining new card section

* ui: some "new dashboard" view improvs, fix icons fill inheritance, add ai button colors

* ui: split up search component (1.22+ tbd?), restrict filter type to own modals

* ui: mimic ant card

* ui: some changes for card creation flow, add series table to CustomMetricLineChart.tsx

* ui: more chart types, add table with filtering out series, start "compare to" thing

* ui: comparison designs

* ui: better granularity support, comparison view for bar chart

* ui: add comparison to more charts, add "metric" chart (BigNumChart.tsx)

* ui: cleanup logs

* ui: fix default import, fix sessheader crash, fix condition set ui

* ui: some refactoring and type coverage...

* ui: more refactoring; silence warnings for list renderers

* ui: moving and renaming filters

* ui: add metricOf selector

* ui: check for metric type

* ui: fix crashes, add widget library table

* ui: change new series btn

* ui: restrict filterselection

* ui: fix timeseries table format

* ui: autoclose autocomplete modal

* ui: some fixes to issue filters default value, display and placeholder consistency

* ui: some dashboard issues with card selection modal and empty states

* ui: comparing for funnels, alternate column view, some refactoring to prepare for customizations...

* Style improvements in omnisearch headers

* Revert "Style improvements in omnisearch headers"

This reverts commit 89e51b0531.

* ui: show health status fetch error

* ui: table, bignum and comp for funnel, add csv export

* Omni-search improvements. (#2823)

Co-authored-by: Sudheer Salavadi <connect.uxmaster@gmail.com>

* ui: fix bad merge (git hallo?)

* ui: fix filter mapper

* rm husky

* ui: add card floater

* ui: add card floater

* ui: refactor local autocomplete input

* ui: filterout empty options

* UI improvements in New Cards (#2864)

* ui: some minor dashb improvements

* ui: metric type selector for head

* ui: change card type selector, add automapping

* ui: check chart/widget components for crashes

* ui: fix crash with table metrics

* ui: fix crashes related to metric type changes

* ui: filter category for clickmap filt

* ui: fix dash options menu, fix cr/up button

* ui: fix dash list menu propagation

* ui: hide addevent in heatmaps

* ui: fix time mapping for charts

* ui: fix exclusion component for path

* ui: fix series amount for path analysis, rm grid/list selector

* ui: fix icons in list view

* ui: fix for dlt button in widgets

* Various improvements in Cards, OmniSearch and Cards Listing (#2881)

* ui: some improvements for cards list view, funnels and general filter display

* ui: longer node width for journey

* Product Analytics UI Improvements. (#2896)

* Various improvements in Cards, OmniSearch and Cards Listing

* Improved cards listing page

* Various improvements in product analytics

* Charts UI improvements

---------

Co-authored-by: nick-delirium <nikita@openreplay.com>

* Live se red s2 (#2902)

* Various improvements in Cards, OmniSearch and Cards Listing

* Improved cards listing page

* Various improvements in product analytics

* Charts UI improvements

* ui crash

---------

Co-authored-by: Sudheer Salavadi <connect.uxmaster@gmail.com>

* ui: fix lucide version

* ui: fix custom comparison period

* ui: fix custom comparison period

* ui: handle minor paths on frontend for path/sankey

* ui: assign icon for event types in sankey nodes

* ui: some strings changed

* ui: hide btn control for table view

* Various improvements in graphs, and analytics pages. (#2908)

* Various improvements in Cards, OmniSearch and Cards Listing

* Improved cards listing page

* Various improvements in product analytics

* Charts UI improvements

* ui crash

* Chart improvements and layout toggling

* Various improvements

* Tooltips

---------

Co-authored-by: nick-delirium <nikita@openreplay.com>

* ui: fix weekday mapper for x axis on >7d range

* ui: lower default density to 35, fix table card display

* ui: filterMinorPaths -> return input data if nodes arr. is empty

* ui: use default filter for sessions, move around saved search actions, remove tags modal

* ui: fix card creator visibility in grid, fix table exporter visibility in grid

* ui: fix some proptype warnings

* ui: change new series default expand state

* ui: save comp range in widget details

* ui: move timeseries to apache echarts

* ui: use unique id for window values

* ui: add timestamp for comp tooltip row

* ui: rename var for readability

* ui: fix comparison for 24hr

* Streamlined icons and improved echarts trends (#2920)

* Various improvements in Cards, OmniSearch and Cards Listing

* Improved cards listing page

* Various improvements in product analytics

* Charts UI improvements

* ui crash

* Chart improvements and layout toggling

* Various improvements

* Tooltips

* Improved icons in cards listing page

* Update WidgetFormNew.tsx

* Sankey improvements

* Icon and text updates

Text alignment and color changes in x-ray
Icon Mapping with appropriate names and shapes

* Colors and Trend Chart Interaction updates

* ui

---------

Co-authored-by: nick-delirium <nikita@openreplay.com>

* ui: series update observe

* ui: resize chart on window

* ui: move barchart to echarts

* ui: fixing bars under comparison

* ui: fixing horizontal bar tooltip

* ui: rm unused

* ui: keep state in storage

* ui: small fixes for granularity and comparisons

* ui: fix savesearch button, fix comparison period tracking

* ui: fix funnel type selection

* ui: fixing saved search button

* ui: enable error logging, remove immutable reference

* ui: update savedsearch drop

* ui: disable button if no saved

* ui: small ui fixes

* ui: add drill to summary charts, add more options to card category picker

* ui: filter compSeries with table

* ui: swap tag_el operator and value

* ui: fix top countries

* ui: further changes for search/cards

* ui: move focus to session list on line click

* ui: fix issue filter mapper

* ui: fix alert pre-init function, fix metric list options, fix legend placement

* ui: fixes for card library

* ui: work on new sankey chart

* ui: fix metadata prefetch

* ui: moving sankey to echarts

* ui: fix funnel comparison focus

* ui: stale loader

---------

Co-authored-by: Sudheer Salavadi <connect.uxmaster@gmail.com>
2025-01-24 09:58:35 +01:00
Shekar Siri
954e811be0
change(api): follow the new structure for cards (#2952)
* change(api): follow the new structure for cards

* change(api): update query to handle location and performance events

* change(api): ch query updates - monitors - sessions with 4xx ..

* change(api): ch query updates - monitors - table of errors

* change(api): ch query updates - use created_at

* change(api): ch query updates - fix the column name for errorId

* change(api): ch query updates - heatmaps

* change(api): ch query updates - funnels

* change(api): ch query updates - user journey / path finder

* change(api): ch query updates - user journey / path finder

* change(api): ch query updates - heatmaps fix

* refactor(chalice): changes

* refactor(chalice): changes

* refactor(chalice): changes

---------

Co-authored-by: Taha Yassine Kraiem <tahayk2@gmail.com>
2025-01-23 12:21:23 +01:00
Kraiem Taha Yassine
f535870811
Dev (#2967)
feat(chalice): new user journey: add ids to nodes
2025-01-21 16:19:30 +01:00
Kraiem Taha Yassine
6559fe27ee
Dev (#2962)
* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* feat(chalice): new user journey: optimized post-processing + fixed DROP value
2025-01-21 15:41:18 +01:00
Kraiem Taha Yassine
c26c235f2f
Dev (#2961)
* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* feat(chalice): new user journey 1/2

* feat(chalice): new user journey hideExcess support

* feat(chalice): new user journey create drop to drop nodes&links
2025-01-21 15:16:14 +01:00
Shekar Siri
57e604794c fix(ui): conditional capture metadata 2025-01-21 11:24:36 +01:00
Alexander
eb11f4bf58 feat(ender): optimized logs 2025-01-20 15:38:53 +01:00
Alexander
12a9448a8d feat(go): updated all golang imports 2025-01-20 14:47:56 +01:00
Alexander
9370a7a50e
Adapt CH client for a new PA events table (#2960)
* feat(db): use a new CH events schema

* feat(db): added missing columns to issue events

* feat(db): correct order of the issue's arguments

* feat(db): crop for url related strings

* feat(db): added missing values

* feat(db): moved materialized columns to json

* feat(db): use the latest ch library with JSON support

* feat(db): added missing duration for requests event
2025-01-20 14:21:57 +01:00
Kraiem Taha Yassine
3f13eeef75
Dev (#2959)
* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* feat(chalice): changed user journey
2025-01-20 12:35:25 +01:00
nick-delirium
e8169fdf2a
ui: update core yarn v 2025-01-20 10:47:52 +01:00
nick-delirium
325937dc4e
ui: update core yarn v 2025-01-20 10:42:16 +01:00
Kraiem Taha Yassine
74637f3042
Dev (#2954)
* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* feat(chalice): hardcoded user journey for testing (+ support of drop&other events)

* refactor(chalice): upgraded dependencies
2025-01-17 18:15:26 +01:00
Kraiem Taha Yassine
eb0fd35688
Dev (#2953)
* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* feat(chalice): new user journey query

* feat(chalice): hardcoded user journey for testing
2025-01-17 17:31:52 +01:00
rjshrjndrn
c692ff26b5 chore(cli): install k3s with cloudflare dns
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-01-17 16:46:35 +01:00
Shekar Siri
48bf849f4b feat(api): session highlights get a single record 2025-01-16 14:07:52 +01:00
Sudheer Salavadi
e71036b2f8
Projects Refinements (#2949) 2025-01-16 09:44:34 +01:00
nick-delirium
0911c6528c
ui: wrap error logs 2025-01-16 09:16:17 +01:00
Shekar Siri
d42905d394
feat(api): session highlights (#2947) 2025-01-14 15:21:19 +01:00
Sudheer Salavadi
016011dd23
Preferences > Project - UI Improvements (#2941) 2025-01-14 12:48:11 +01:00
Shekar Siri
9ee853365c fix(ui): user invitation 2025-01-13 13:48:13 +01:00
Kraiem Taha Yassine
a58bff9d11 Patch api v1 21 0 (#2932)
* fix(chalice): fixes for SSO

* fix(chalice): changed base image

* fix(chalice): changed requirements

(cherry picked from commit c8775f3c15)
2025-01-13 11:42:40 +01:00
Shekar Siri
830fb70ee0 fix(ui): project delete 2025-01-10 16:01:17 +01:00
Shekar Siri
637a265c24 change(ui): projects revamp - project edit 2025-01-09 15:43:42 +01:00
Shekar Siri
c28f677d10 change(ui): projects revamp - project edit 2025-01-09 15:41:12 +01:00
Shekar Siri
7c5af16493 change(ui): projects revamp - project edit 2025-01-09 15:21:17 +01:00
Shekar Siri
b5bf70a8e0 change(ui): projects revamp - install docs change 2025-01-09 15:21:17 +01:00
Kraiem Taha Yassine
a9bbf31f73
Dev (#2931)
* fix(chalice): fixed errors package
2025-01-09 13:19:10 +01:00
Kraiem Taha Yassine
c74d1671a5
Dev (#2930)
* fix(chalice): changed ee dockerfile to use pip instead of uv
2025-01-09 13:05:25 +01:00
Kraiem Taha Yassine
4c81b195a1
Dev (#2929)
* Revert "fix(chalice): changed ee dockerfile"

This reverts commit c6ba000c49.
2025-01-09 12:48:48 +01:00
Kraiem Taha Yassine
8c66fd412d
Dev (#2928)
* fix(chalice): changed ee dockerfile
2025-01-09 12:42:42 +01:00
Kraiem Taha Yassine
bd9f95851c
Dev (#2927)
* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* refactor(chalice): upgraded docker base image
refactor(crons): upgraded docker base image
refactor(alerts): upgraded docker base image
2025-01-08 16:28:07 +01:00
Kraiem Taha Yassine
2291980a89
Dev (#2926)
* fix(chalice): fixed path-finder first step-type issue

* refactor(chalice): removed time between steps in path-finder
2025-01-08 16:09:53 +01:00
Shekar Siri
80462e4534
change(ui): projects settings (#2924)
* change(ui): projects revamp (wip)

* change(ui): projects revamp (wip)

* change(ui): projects revamp - project form

* change(ui): projects revamp - capture rate tab

* change(ui): projects revamp - gdpr

* change(ui): projects revamp - reset state

* change(ui): projects revamp - progress avatar of sample rate, scroll, etc.

* change(ui): projects revamp - sync projects in list

* change(ui): projects revamp - project menu improvements
2025-01-08 11:50:22 +01:00
Shekar Siri
adf27d4cb7 fix(ui): total/count change as per the api response 2025-01-07 10:54:01 +01:00
Shekar Siri
22d04436c0 fix(ui): bookmarks separation and other page titles 2025-01-02 15:07:43 +01:00
Shekar Siri
e7e821daee fix(ui): session tags issue type to be lowercase 2025-01-02 11:24:15 +01:00
Alexander
700870f957 feat(assist): added a missing package to ee version 2024-12-31 14:01:15 +01:00
Alexander
0dfe1da0af feat(assist): use .cork() to avoid warnings for uws 2024-12-31 13:50:58 +01:00
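For context, uWebSockets.js expects multiple sends on one socket to be wrapped in cork() so they are batched into a single write; a loose sketch (the socket is typed structurally because the library's generics vary across versions, and the messages are illustrative):

```ts
function sendBatch(
  ws: { cork(cb: () => void): void; send(msg: string): void },
  messages: string[],
) {
  // Corking batches the sends into one network write and avoids the
  // library's "outside of cork" warnings.
  ws.cork(() => {
    for (const m of messages) ws.send(m);
  });
}
```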
Alexander
dcad0798e8 feat(assist): removed an old logs approach (exception reason) 2024-12-31 12:29:59 +01:00
Shekar Siri
776069fca1 change(react-native): android version jump to use v1.1.7 which has support of android graphql 2024-12-30 16:09:43 +01:00
Shekar Siri
463ffc8cae feat(android): sendMessage with support of graphql 2024-12-30 16:07:51 +01:00
Alexander
3003934374 feat(integrations): added support for both issues and events for sentry integration 2024-12-30 15:22:55 +01:00
nick-delirium
69e1e60c70
ui: bump finder version for css in js support 2024-12-30 13:45:13 +01:00
Shekar Siri
e8dbc40a5c change(react-native): android version jump to use v1.1.6 2024-12-30 12:46:39 +01:00
Shekar Siri
297f633906 fix(tracker): data type 2024-12-27 14:49:34 +01:00
Kraiem Taha Yassine
70bae502d3
Dev (#2914)
* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* fix(chalice): fixed get recording status
2024-12-27 14:11:39 +01:00
Kraiem Taha Yassine
c9d63d912f
Dev (#2913)
* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* fix(chalice): fixed get null card info
2024-12-26 16:06:25 +01:00
Kraiem Taha Yassine
b92d5c8706
Dev (#2912)
* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* feature(chalice): store card's compare-to
2024-12-26 15:32:24 +01:00
nick-delirium
b18871c632
ui: fix multi assist view metadata mapper 2024-12-26 13:54:03 +01:00
Kraiem Taha Yassine
aad8542d97
Dev (#2911)
* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* fix(chalice): support low-density for cards
2024-12-26 13:23:19 +01:00
Alexander
fe7e200dba feat(analytics): use validator as a singleton 2024-12-24 15:14:31 +01:00
Alexander
471558fec5 feat(analytics): added helm chart 2024-12-24 12:55:24 +01:00
Kraiem Taha Yassine
7b7856184e
Dev (#2907)
* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* fix(chalice): fixed view error code
2024-12-23 18:46:56 +01:00
Kraiem Taha Yassine
82ab91bc25
Dev (#2906)
* refactor(chalice): removed errors status
refactor(chalice): removed errors viewed
refactor(chalice): removed errors favorite
refactor(DB): removed errors viewed
refactor(DB): removed errors favorite

* refactor(chalice): ignore hide excess for Path Finder as it will be done in UI
2024-12-23 18:34:57 +01:00
Kraiem Taha Yassine
4c4e1b6580
Dev (#2905)
* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* refactor(chalice): pg client helper handles closed connection

* refactor(chalice): upgraded dependencies
refactor(chalice): restricted get error's details
2024-12-23 17:19:43 +01:00
Alexander
763aed14a1 feat(analytics): moved charts data to the separate module 2024-12-23 15:32:30 +01:00
Alexander
230924c4b8 feat(analytics): removed unnecessary comments 2024-12-23 14:50:58 +01:00
Alexander
880f4f1a94 feat(analytics): better naming for cards and dashboards modules 2024-12-23 14:43:12 +01:00
Alexander
f05e84777b feat(analytics): small refactoring of the service's architecture 2024-12-23 14:24:06 +01:00
Kraiem Taha Yassine
5fe204020f
Dev (#2903)
* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* fix(chalice): fixed restricted sessions search
2024-12-23 11:54:28 +01:00
nick-delirium
4395c9ee46
tracker: react native 0.6.11 (background, ios crash, screen size) 2024-12-23 10:54:06 +01:00
nick-delirium
0658e3b3d9
tracker: update ios pkg ver 2024-12-23 10:49:13 +01:00
Shekar Siri
31ba4176aa change(react-native): android version jump to use v1.1.4 2024-12-23 10:48:18 +01:00
nick-delirium
2a75785181
ui: fix perfwarnings container margin 2024-12-23 10:26:56 +01:00
Kraiem Taha Yassine
ddfaaeb6c5
Dev (#2898)
* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* refactor(chalice): restricted sessions search
2024-12-20 18:23:48 +01:00
Kraiem Taha Yassine
7c4ff2ed3f
Dev (#2897)
* fix(alerts): fixed alerts

* fix(chalice): fixed product analytics

* fix(chalice): fixed product analytics
2024-12-20 17:37:01 +01:00
Kraiem Taha Yassine
21992ceadb
Dev (#2895)
* fix(chalice): fixed alerts
fix(DB): fixed missing table
2024-12-20 11:46:24 +01:00
Alexander
d35d201a10 feat(go imports): total imports upgrade 2024-12-20 11:05:28 +01:00
nick-delirium
9d4120e7d6
ui: show bg badge for mobile 2024-12-20 10:43:27 +01:00
Alexander
93d51acfc4 feat(pa): removed unnecessary s3 import 2024-12-20 10:40:33 +01:00
Alexander
fdae00c602 feat(go mod): vuln import update 2024-12-20 10:31:42 +01:00
Shekar Siri
9d82c2935a
feat(analytics): dashboard manage cards (#2893) 2024-12-20 10:27:58 +01:00
nick-delirium
99ddcd9708
ui: return throw for log parser (for consistency) 2024-12-20 10:22:35 +01:00
Kraiem Taha Yassine
f7ddf82591
Dev (#2894)
* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* fix(chalice): multiple migration fixes
refactor(chalice): refactored ch-sessions code
2024-12-19 18:21:30 +01:00
vagelim
c004bc8932
feat: Add support for self-hosted Sentry instances with configurable URL (#2887) 2024-12-19 08:02:24 +01:00
nick-delirium
694de75052
tracker: changes to resource fails tracking 2024-12-18 15:10:20 +01:00
nick-delirium
3b68bebf40
tracker: changes to resource fails tracking 2024-12-18 14:48:54 +01:00
nick-delirium
b6080b2492
tracker: use simple string for sprites 2024-12-18 14:31:07 +01:00
nick-delirium
f791d06ecd
tracker: add use el / sprite map support, change graphql relay plugin 2024-12-18 14:04:29 +01:00
Alexander
d7f810809e feat(analytics): removed unnecessary keys import 2024-12-18 11:10:38 +01:00
Shekar Siri
21895677c3
feat(analytics): cards to use db (#2886) 2024-12-18 11:02:44 +01:00
Alexander
129ab734f3
feat(frontend): added a support for the self-hosted sentry (#2890) 2024-12-18 10:49:33 +01:00
Alexander
8882a18c0d feat(go mod): go modules updating 2024-12-18 10:23:18 +01:00
nick-delirium
4f8dd444ff
ui: fix sentry log check 2024-12-18 10:19:11 +01:00
Alexander
c42391c3da feat(integrations): added missing env configuration for docker-compose and helm-chart 2024-12-17 15:06:05 +01:00
Alexander
e27d2394d1 feat(go.mod): upgraded outdated imports 2024-12-17 11:39:13 +01:00
Alexander
77981feb2b feat(integrations): fixed a tags search in Sentry provider 2024-12-17 11:11:28 +01:00
Shekar Siri
e38b729edd feat(analytics): resolve conflicts 2024-12-16 15:37:05 +01:00
Shekar Siri
8ca332e2f0 Merge branch 'product-analytics-go' into dev 2024-12-16 15:36:38 +01:00
Shekar Siri
af761693aa feat(analytics): dashboard check existence and paginated methods 2024-12-16 13:10:24 +01:00
Shekar Siri
64d9029554 feat(analytics): dashboard create validation 2024-12-16 11:54:15 +01:00
Shekar Siri
0e00ca19ad feat(analytics): dashboard create validation 2024-12-16 11:51:19 +01:00
Shekar Siri
a88002852d feat(analytics): dashboard update and delete 2024-12-16 11:43:40 +01:00
nick-delirium
77673b15f8
ui: bundle prism locally 2024-12-16 11:17:08 +01:00
nick-delirium
efa0a2878b
ui: fix default language 2024-12-16 11:05:01 +01:00
nick-delirium
bc2259aef3
ui: fix prismjs loading 2024-12-16 10:52:06 +01:00
nick-delirium
92a6379e2c
ui: fix prismjs loading 2024-12-16 10:39:48 +01:00
Shekar Siri
0a49df3996 feat(analytics): dashboard pgconn 2024-12-16 10:36:25 +01:00
Shekar Siri
69ef083abe
feat(pa): cards endpoints (#2871)
* feat(analytics): dashboards

* feat(analytics): cards api endpoints

* feat(analytics): validator dependency
2024-12-13 14:08:03 +01:00
Shekar Siri
00b7f65e31 feat(analytics): validator dependency 2024-12-13 13:28:44 +01:00
Shekar Siri
0c0cac8fbe feat(analytics): cards api endpoints 2024-12-13 11:53:46 +01:00
Kraiem Taha Yassine
48483be8f9
Dev (#2870)
* fix(chalice): fixed clickmap
2024-12-12 18:31:54 +01:00
Kraiem Taha Yassine
0eae03f29d
Dev (#2869)
* fix(chalice): fixed user's journey
2024-12-12 18:11:28 +01:00
Kraiem Taha Yassine
383bbee2dc
Dev (#2868)
* refactor(chalice): refactored product analytics
2024-12-12 17:56:41 +01:00
Kraiem Taha Yassine
77d4c890cf
Dev (#2867)
* fix(chalice): fixed CH funnels query for new driver
* fix(chalice): fixed CH funnels support for nonexistent sequence
2024-12-12 17:12:29 +01:00
Kraiem Taha Yassine
a654e30df2
Dev (#2866)
* refactor(chalice): refactored errors

* refactor(chalice): refactored metrics/cards/dashboards
refactor(chalice): refactored sessions
refactor(chalice): refactored sourcemaps
2024-12-12 12:37:39 +01:00
nick-delirium
e03bce3ba5
spot: 1.0.13 2024-12-12 10:47:37 +01:00
nick-delirium
92f3e8a0b5
spot: fix ingest resetting 2024-12-12 10:45:31 +01:00
nick-delirium
8d2b998f9a
spot: upgrade wxt, fix missing network timestamps
Signed-off-by: nick-delirium <nikita@openreplay.com>
2024-12-11 18:04:11 +01:00
Kraiem Taha Yassine
83c979ade0
Dev (#2860)
* refactor(chalice): refactored sessions
2024-12-11 17:20:14 +01:00
Shekar Siri
5640913e68 Merge branch 'dev' into product-analytics-go 2024-12-11 16:49:15 +01:00
Kraiem Taha Yassine
c93df14f93
Dev (#2858)
* refactor(chalice): refactored issue-tracking-tools*
2024-12-11 16:24:56 +01:00
Kraiem Taha Yassine
c151d55c67 fix(chalice): use card's global filters instead of series' filters (#2857)
(cherry picked from commit 94b541c758)
2024-12-11 16:18:34 +01:00
Kraiem Taha Yassine
d6e0865b8a
Dev (#2856)
refactor(chalice): refactored alerts
refactor(chalice): refactored autocomplete
refactor(chalice): refactored sessions*
refactor(chalice): refactored collaboration-tools
refactor(chalice): refactored log-tools*
refactor(chalice): refactored issue-tracking-tools*
2024-12-11 15:53:34 +01:00
nick-delirium
d30d1570bd
ui: fix mobile crash? 2024-12-11 14:59:34 +01:00
Shekar Siri
8e7cfebdba fix(ui): funnel - filter sessions by step 2024-12-11 14:36:20 +01:00
nick-delirium
ca035d699e
ui: fix spot tab lookup, improve js build speed 2024-12-11 13:16:02 +01:00
Shekar Siri
7cb6bc7d38 change(ui): tracker version warning message spacing 2024-12-11 11:33:49 +01:00
Shekar Siri
566d6c2fdb
feat(analytics): dashboards (#2788) 2024-12-11 09:55:07 +01:00
Shekar Siri
f1e43b12be change(react-native): android version jump 2024-12-11 09:42:45 +01:00
Shekar Siri
9a37ba0739
feat(react-native): expo support (#2850)
* change(react-native): android version jump

* change(react-native): updates to support expo

* change(react-native): swipe event fix

* change(react-native): version jump

* change(react-native): include plugin file and version jump
2024-12-11 09:41:16 +01:00
Kraiem Taha Yassine
e74effe24d
Dev (#2847)
* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* refactor(chalice): refactored alerts

* refactor(alerts): refactored alerts
refactor(alerts): moved CH
2024-12-10 18:19:12 +01:00
Alexander
dab822e772 feat(spot): added missing imports 2024-12-10 17:55:38 +01:00
Alexander
ec53099eb0 feat(spot): removed old code 2024-12-10 17:48:44 +01:00
nick-delirium
5cde5aefce
ui: more fixes... 2024-12-10 16:26:36 +01:00
nick-delirium
4eea15b053
ui: fix log panel crashing 2024-12-10 13:31:18 +01:00
Kraiem Taha Yassine
37f00f4d73
Dev (#2839)
* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* refactor(DB): CH int and rollback scripts

* refactor(chalice): removed unused funnels code
2024-12-10 13:11:46 +01:00
Alexander
9b75e4502f
ClickHouse support (#2830)
* feat(db): added CH support to db service

* feat(db): removed license check for CH client

* feat(db): removed fts integration

* feat(clickhouse): added config instead of direct env parsing

* feat(clickhouse): removed prev extraHandlers

* feat(clickhouse): a unified approach for data insertion to dbs

* feat(clickhouse): removed unused imports
2024-12-10 12:41:52 +01:00
nick-delirium
122416d311
ui: improve log list filtering 2024-12-10 12:09:49 +01:00
nick-delirium
890630dfa0
ui: fix tab name lookup 2024-12-10 12:05:26 +01:00
nick-delirium
42dd341e6c
ui: fixup 2024-12-10 11:08:12 +01:00
nick-delirium
922ccede98
ui: trim logs 2024-12-10 10:55:08 +01:00
Delirium
f6cf1cfb4a
Player ux improvements (#2834)
* Player UX improvements.

DevTools (Including multi-tab)
Actions panel (User events, Click maps, Tag Elements)

* ui: remove unused imports, remove str templ classnames

---------

Co-authored-by: Sudheer Salavadi <connect.uxmaster@gmail.com>
2024-12-10 10:21:08 +01:00
Kraiem Taha Yassine
0ac4ed1fa2
Dev (#2833)
* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* refactor(chalice): refactored metrics

* refactor(chalice): refactored autocomplete
2024-12-09 17:47:41 +01:00
nick-delirium
013d866455
ui: move xray warn 2024-12-09 17:25:41 +01:00
nick-delirium
a010ef9d0f
ui: fix performance bottlenecks, split data sources in devtools panes 2024-12-09 17:20:08 +01:00
Kraiem Taha Yassine
71b96c1728
Dev (#2832)
* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* refactor(chalice): refactored authorize

* refactor(chalice): upgraded dependencies
refactor(alerts): upgraded dependencies
refactor(crons): upgraded dependencies

* refactor(chalice): refactored custom_metrics

* refactor(chalice): upgraded dependency
2024-12-09 16:34:53 +01:00
Kraiem Taha Yassine
d35837416b
Dev (#2831)
* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* refactor(chalice): refactored authorize

* refactor(chalice): upgraded dependencies
refactor(alerts): upgraded dependencies
refactor(crons): upgraded dependencies

* refactor(chalice): refactored custom_metrics
2024-12-09 16:05:54 +01:00
Alexander
d0ef617e40 feat(integrations): removed all unnecessary app exits 2024-12-09 14:20:04 +01:00
Kraiem Taha Yassine
aa8cebca7e
Dev (#2829)
* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* refactor(chalice): refactored db-drivers
refactor(scripts): defined ch-dataPort
2024-12-09 13:59:46 +01:00
Kraiem Taha Yassine
f360961500
Dev (#2828)
* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* refactor(chalice): upgraded dependencies
refactor(chalice): new ch-driver logs
2024-12-09 13:11:17 +01:00
Kraiem Taha Yassine
962385651f
Dev (#2822)
* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* fix(chalice): CH connection logs
2024-12-06 15:16:14 +01:00
Kraiem Taha Yassine
ac47e339cf
Dev (#2821)
* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* fix(chalice): defined CH ports
2024-12-06 14:34:24 +01:00
Kraiem Taha Yassine
d99187e14a
Dev (#2820)
* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* fix(chalice): fixed new ch-driver
fix(chalice): fixed redundant code
2024-12-06 11:18:50 +01:00
nick-delirium
e2417ef2be
tracker: fixing failuresOnly option in network (15.0.3 beta) 2024-12-06 10:16:58 +01:00
Kraiem Taha Yassine
2f693cd490
Dev (#2819)
* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* refactor(chalice): removed sessions insights
refactor(DB): removed sessions insights

* refactor(chalice): upgraded dependencies
refactor(crons): upgraded dependencies
refactor(alerts): upgraded dependencies
feat(chalice): moved CH to FOSS
feat(chalice): use clickhouse-connect
feat(chalice): use CH connection pool
feat(scripts): defined ch-data-port
2024-12-05 17:43:52 +01:00
nick-delirium
c13a220b52
tracker: 15.0.2 2024-12-05 17:22:38 +01:00
PiRDub
844b9d80c3
fix(tracker): prevent raising security error accessing window.top (#2818)
* fix(tracker): prevent raising security error accessing window.top

* fix(canAccessTop): check window document access
2024-12-05 17:21:24 +01:00
nick-delirium
27e94fed35
ui: fixing resource success when mapped from timing messages 2024-12-05 17:20:45 +01:00
Kraiem Taha Yassine
5180ad8717
Dev (#2816)
* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* fix(chalice): fixed Math-operators validation
refactor(chalice): search for sessions that have events for heatmaps

* refactor(chalice): search for sessions that have at least 1 location event for heatmaps

* feat(chalice): autocomplete return top 10 with stats

* fix(chalice): fixed autocomplete top 10 meta-filters

* refactor(DB): preparing for v1.22.0
refactor(chalice): upgraded dependencies
refactor(alerts): upgraded dependencies
refactor(crons): upgraded dependencies

* refactor(chalice): removed sessions insights
2024-12-05 09:40:00 +01:00
Shekar Siri
8434044611 change(react-native): android version jump 2024-12-04 16:13:21 +01:00
Kraiem Taha Yassine
99bdb5dba7 fix(chalice): support webhook default ports (#2814)
* fix(chalice): support webhook default ports

* fix(chalice): support webhook default ports EE

(cherry picked from commit 04db322e54)
2024-12-04 13:10:53 +01:00
nick-delirium
e51455b8ea
ui: fix mobile player not scaling automatically 2024-12-03 17:29:20 +01:00
Shekar Siri
30c9f6184e change(ui): webhooks url allow port 2024-12-03 17:07:30 +01:00
Kraiem Taha Yassine
ca20ae3d10 fix(chalice): fixed edit user's role (#2810)
(cherry picked from commit e9a1a8c4eb)
2024-12-03 16:28:40 +01:00
Alexander
eeb1d616bc feat(integrations): removed user's creds from error message 2024-12-03 15:23:18 +01:00
Shekar Siri
594d57905f fix(ui): do not allow protected roles 2024-12-03 14:14:49 +01:00
nick-delirium
db92ae3bf0
ui: update autoretry policy for integrations 2024-12-03 13:48:16 +01:00
Shekar Siri
cadb2b456d fix(ui): 403 clear the token 2024-12-03 13:27:22 +01:00
Shekar Siri
8e8064abd5 fix(ui): trigger session search on change 2024-12-03 13:27:22 +01:00
Kraiem Taha Yassine
9d740c7bb2 fix(chalice): support session's search null duration (#2806)
(cherry picked from commit e7ad4c8bd0)
2024-12-03 13:25:12 +01:00
nick-delirium
4ec2dbfeca
tracker: update changelogs 2024-12-03 09:56:53 +01:00
Kraiem Taha Yassine
f1ce859e8b fix(chalice): fixed accept invitation response (#2803)
(cherry picked from commit 2e5517509b)
2024-12-02 23:07:30 +01:00
Delirium
eff22eb554
tracker (rn): sessionID method for react native connector
* tracker: rm env var

tracker: fix some ios react native issues, add sessionid method

* change(react-native): android native method to get sessionId

* change(react-native): android version jump

* change(react-native): android use promise

* tracker: clearing logs

---------

Co-authored-by: Shekar Siri <sshekarsiri@gmail.com>
2024-12-02 17:57:28 +01:00
Shekar Siri
076e664ced change(react-native): android use promise 2024-12-02 17:15:19 +01:00
Shekar Siri
f95c1c9c94 change(react-native): android version jump 2024-12-02 16:57:00 +01:00
Shekar Siri
67a3494804 change(react-native): android native method to get sessionId 2024-12-02 16:21:45 +01:00
Shekar Siri
77273b0df9 change(ui): remove pagination while checking for latest sessions 2024-12-02 14:14:06 +01:00
Shekar Siri
72d4867e42 fix(assist): pagination 2024-11-29 15:41:58 +01:00
Shekar Siri
ef9a077655 change(react-native): version jump 0.6.5 2024-11-28 12:32:47 +01:00
Shekar Siri
5831da93e1 change(react-native): added stop 2024-11-28 12:28:55 +01:00
Shekar Siri
197d6bbc0a change(react-native): version jump 0.6.4 2024-11-28 12:27:26 +01:00
Shekar Siri
0b72007006 change(react-native): android version jump 2024-11-28 12:06:50 +01:00
nick-delirium
54b07c6110
ui: porting fix from saas 2024-11-28 11:52:12 +01:00
Shekar Siri
99085a95a1 feat(analytics): dashboards 2024-11-27 16:13:26 +01:00
Shekar Siri
f5df3fb5b5 fix(ui): sessions, bookmarks, notes navigation and search filters and timestamp issues 2024-11-26 12:57:57 +01:00
Shekar Siri
253feefe53 change(ui): search query params improvements 2024-11-26 12:57:57 +01:00
Shekar Siri
b84c05cbad fix(ui): latest sessions check clear the list 2024-11-26 12:57:57 +01:00
Kraiem Taha Yassine
043d6a9f53 fix(chalice): support user-city for assist (#2782)
(cherry picked from commit b00a90484e)
2024-11-26 10:43:50 +01:00
rjshrjndrn
5dbe313a68 Squashed commit of the following:
fix(helm): password
    remove: debug
chore(helm): change helm hook to post-upgrade, since pre-upgrade triggered
    before install
    fix(helm): remove default ns
    fix(helm): template number
    chore(helm): change trigger preference
    fix(helm): variable
    revert: disabling clickhouse pwd rotation, as CH not used
    chore(helm): trigger password update only if passwords are rotated
    chore(helm): Adding snippet for postgres/clickhouse secret rotation

    Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2024-11-25 18:37:06 +01:00
rjshrjndrn
73db2c44d0 fix(helm): version change check
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2024-11-25 18:28:34 +01:00
rjshrjndrn
4f269ce4a0 chore(helm): Adding openreplay config map for
installation-agnostic version access. This is useful for db migration,
especially when we install using Argo or other means; it takes
precedence over the autogenerated previous version.
Set migration to true if it's an Argo deployment.
Fix the forceMigration override.
2024-11-25 18:28:34 +01:00
rjshrjndrn
7c8912933f chore(cli): proper cleanup
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2024-11-25 18:28:34 +01:00
rjshrjndrn
f6f2a14a18 chore(helm): check github availability before clone
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2024-11-25 18:28:34 +01:00
Shekar Siri
aa213e036c fix(ui): sessions list persist page, show latest sessions 2024-11-25 12:09:30 +01:00
nick-delirium
ab331d57a4
tracker: fix bundling process, export types separately 2024-11-25 10:11:56 +01:00
nick-delirium
423d9cb671
tracker: bump assist minor v 2024-11-22 14:17:27 +01:00
PiRDub
d409b41ddb
chore(peerDeps): fix @openreplay/tracker min version (#2772) 2024-11-22 14:15:58 +01:00
nick-delirium
38367777b7
ui: fix ws panel crash 2024-11-22 14:05:51 +01:00
Alexander
6830c8879f
web module refactoring (#2725)
* feat(server): moved an http server object into a pkg subdir to be reusable for http, spots, and integrations

* feat(web): isolated web module (server, router, middleware, utils) used in spots and new integrations

* feat(web): removed possible panic

* feat(web): split all handlers from http service into different packages for better management.

* feat(web): changed router's method signature

* feat(web): added missing handlers interface

* feat(web): added health middleware to remove unnecessary checks

* feat(web): customizable middleware set for web servers

* feat(web): simplified the handler's structure

* feat(web): created a unified server.Run method for all web services (http, spot, integrations)

* feat(web): fixed a json size limit issue

* feat(web): removed Keys and PG connection from router

* feat(web): simplified integration's main file

* feat(web): simplified spot's main file

* feat(web): simplified http's main file (builder)

* feat(web): refactored audit trail functionality

* feat(web): added ee version of audit trail

* feat(web): added ee version of conditions module

* feat(web): moved ee version of some web session structs

* feat(web): new format of web metrics

* feat(web): added new web metrics to all handlers

* feat(web): added justExpired feature to web ingest handler

* feat(web): added small integrations improvements
2024-11-21 17:48:04 +01:00
Kraiem Taha Yassine
d95738bb0d fix(chalice): support top graphql autocomplete (#2767)
refactor(chalice): enforce UTC TZ
refactor(crons): enforce UTC TZ
refactor(alerts): enforce UTC TZ

(cherry picked from commit 884f3499ef)
2024-11-20 12:47:20 +01:00
nick-delirium
73ade8da81
sourcemapuploader: fix globe version 2024-11-20 09:54:35 +01:00
2885 changed files with 117769 additions and 73227 deletions

View file

@ -10,8 +10,6 @@ on:
branches:
- dev
- api-*
- v1.11.0-patch
- actions_test
paths:
- "ee/api/**"
- "api/**"

View file

@ -10,7 +10,6 @@ on:
branches:
- dev
- api-*
- v1.11.0-patch
paths:
- "api/**"
- "!api/.gitignore"

View file

@ -9,7 +9,6 @@ on:
push:
branches:
- dev
- api-*
paths:
- "ee/assist/**"
- "assist/**"

View file

@ -1,4 +1,4 @@
# This action will push the peers changes to aws
# This action will push the assist changes to aws
on:
workflow_dispatch:
inputs:
@ -9,14 +9,10 @@ on:
push:
branches:
- dev
- api-*
paths:
- "ee/peers/**"
- "peers/**"
- "!peers/.gitignore"
- "!peers/*-dev.sh"
- "ee/assist-server/**"
name: Build and Deploy Peers EE
name: Build and Deploy Assist-Server EE
jobs:
deploy:
@ -57,12 +53,7 @@ jobs:
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
# Caching docker images
- uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
- name: Building and Pushing peers image
- name: Building and Pushing Assist-Server image
id: build-image
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
@ -70,11 +61,11 @@ jobs:
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd peers
cd assist-server
PUSH_IMAGE=0 bash -x ./build.sh ee
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("peers")
images=("assist-server")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
@ -85,7 +76,7 @@ jobs:
} && {
echo "Skipping Security Checks"
}
images=("peers")
images=("assist-server")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
@ -109,43 +100,23 @@ jobs:
tag: `echo ${image_array[1]} | cut -d '-' -f 1`
EOF
done
- name: Deploy to kubernetes
run: |
pwd
cd scripts/helmcharts/
# Update changed image tag
sed -i "/peers/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
sed -i "/assist-server/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,peers,quickwit,connector} /tmp/charts/
mv openreplay/charts/{ingress-nginx,assist-server,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks --kube-version=$k_version | kubectl apply -f -
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# We're not passing -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
- name: Alert slack
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: ee
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
SLACK_USERNAME: "OR Bot"
SLACK_MESSAGE: "Build failed :bomb:"
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true
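The deploy step above renders the chart locally and pipes the manifests to kubectl instead of running helm upgrade: helm template never touches release state, and --no-hooks plus skipMigration=true keeps a staging redeploy from re-running hooks and migrations. A condensed sketch of the pattern, reusing the workflow's own flags:

    # Render manifests offline, then apply them; no Helm release bookkeeping.
    helm template openreplay -n app openreplay \
      -f vars.yaml -f /tmp/image_override.yaml \
      --set ingress-nginx.enabled=false \
      --set skipMigration=true \
      --no-hooks --kube-version="$k_version" | kubectl apply -f -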

View file

@ -15,7 +15,7 @@ on:
- "!assist-stats/*-dev.sh"
- "!assist-stats/requirements-*.txt"
name: Build and Deploy Assist Stats
name: Build and Deploy Assist Stats ee
jobs:
deploy:
@ -123,8 +123,9 @@ jobs:
tag: ${IMAGE_TAG}
EOF
export IMAGE_TAG=${IMAGE_TAG}
# Update changed image tag
sed -i "/assist-stats/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
yq '.utilities.apiCrons.assiststats.image.tag = strenv(IMAGE_TAG)' -i /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
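This file's deploy step swaps a sed line-offset edit for a path-based yq update: the sed form jumps a fixed number of lines past a pattern match and rewrites whatever is there, so any layout drift in the override file corrupts the wrong line, while yq addresses the value by key. A minimal sketch over a throwaway file shaped like the generated override (assumes mikefarah's yq v4):

    # Throwaway file with the same shape as the generated /tmp/image_override.yaml.
    cat <<'EOF' > /tmp/demo_override.yaml
    utilities:
      apiCrons:
        assiststats:
          image:
            tag: old
    EOF
    export IMAGE_TAG="v1.22.0"
    # Path-based update; immune to line-layout changes, unlike the old
    # sed "/assist-stats/{n;n;n;s/.../}" offset trick.
    yq '.utilities.apiCrons.assiststats.image.tag = strenv(IMAGE_TAG)' -i /tmp/demo_override.yaml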

View file

@ -9,7 +9,6 @@ on:
push:
branches:
- dev
- api-*
paths:
- "assist/**"
- "!assist/.gitignore"

View file

@ -10,7 +10,6 @@ on:
branches:
- dev
- api-*
- v1.11.0-patch
paths:
- "ee/api/**"
- "api/**"
@ -101,33 +100,32 @@ jobs:
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
env:
# We're not passing -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
# We've to strip off the -ee, as helm will append it.
tag: `echo ${image_array[1]} | cut -d '-' -f 1`
cd scripts/helmcharts/
cat <<EOF>/tmp/image_override.yaml
image: &image
tag: "${IMAGE_TAG}"
utilities:
apiCrons:
assiststats:
image: *image
report:
image: *image
sessionsCleaner:
image: *image
projectsStats:
image: *image
fixProjectsStats:
image: *image
EOF
done
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
# Update changed image tag
sed -i "/crons/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
@ -137,8 +135,6 @@ jobs:
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks --kube-version=$k_version | kubectl apply -f -
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# We're not passing -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
- name: Alert slack
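The rewritten "Creating old image input" step above writes the tag once and fans it out with a YAML anchor (&image) and aliases (*image), so every cron image tracks a single value. A small sketch of how such a file resolves, assuming mikefarah's yq:

    cat <<'EOF' > /tmp/anchor_demo.yaml
    image: &image
      tag: "main_abc123"
    utilities:
      apiCrons:
        assiststats:
          image: *image
        report:
          image: *image
    EOF
    # explode(.) expands anchors and aliases; both crons resolve to the same tag.
    yq 'explode(.) | .utilities.apiCrons.report.image.tag' /tmp/anchor_demo.yaml
    # prints: main_abc123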

.github/workflows/patch-build-old.yaml vendored Normal file (189 lines added)
View file

@ -0,0 +1,189 @@
# Ref: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions
on:
workflow_dispatch:
inputs:
services:
description: 'Comma separated names of services to build(in small letters).'
required: true
default: 'chalice,frontend'
tag:
description: 'Tag to update.'
required: true
type: string
branch:
description: 'Branch to build patches from. Make sure the branch is up to date with the tag. Else it will cause missing commits.'
required: true
type: string
name: Build patches from tag, rewrite commit HEAD to older timestamp, and Push the tag
jobs:
deploy:
name: Build Patch from old tag
runs-on: ubuntu-latest
env:
DEPOT_TOKEN: ${{ secrets.DEPOT_TOKEN }}
DEPOT_PROJECT_ID: ${{ secrets.DEPOT_PROJECT_ID }}
steps:
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 4
ref: ${{ github.event.inputs.tag }}
- name: Set Remote with GITHUB_TOKEN
run: |
git config --unset http.https://github.com/.extraheader
git remote set-url origin https://x-access-token:${{ secrets.ACTIONS_COMMMIT_TOKEN }}@github.com/${{ github.repository }}.git
- name: Create backup tag with timestamp
run: |
set -e # Exit immediately if a command exits with a non-zero status
TIMESTAMP=$(date +%Y%m%d%H%M%S)
BACKUP_TAG="${{ github.event.inputs.tag }}-backup-${TIMESTAMP}"
echo "BACKUP_TAG=${BACKUP_TAG}" >> $GITHUB_ENV
echo "INPUT_TAG=${{ github.event.inputs.tag }}" >> $GITHUB_ENV
git tag $BACKUP_TAG || { echo "Failed to create backup tag"; exit 1; }
git push origin $BACKUP_TAG || { echo "Failed to push backup tag"; exit 1; }
echo "Created backup tag: $BACKUP_TAG"
# Get the oldest commit date from the last 3 commits in raw format
OLDEST_COMMIT_TIMESTAMP=$(git log -3 --pretty=format:"%at" | tail -1)
echo "Oldest commit timestamp: $OLDEST_COMMIT_TIMESTAMP"
# Add 1 second to the timestamp
NEW_TIMESTAMP=$((OLDEST_COMMIT_TIMESTAMP + 1))
echo "NEW_TIMESTAMP=$NEW_TIMESTAMP" >> $GITHUB_ENV
- name: Setup yq
uses: mikefarah/yq@master
# Configure AWS credentials for the first registry
- name: Configure AWS credentials for RELEASE_ARM_REGISTRY
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_DEPOT_ACCESS_KEY }}
aws-secret-access-key: ${{ secrets.AWS_DEPOT_SECRET_KEY }}
aws-region: ${{ secrets.AWS_DEPOT_DEFAULT_REGION }}
- name: Login to Amazon ECR for RELEASE_ARM_REGISTRY
id: login-ecr-arm
run: |
aws ecr get-login-password --region ${{ secrets.AWS_DEPOT_DEFAULT_REGION }} | docker login --username AWS --password-stdin ${{ secrets.RELEASE_ARM_REGISTRY }}
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin ${{ secrets.RELEASE_OSS_REGISTRY }}
- uses: depot/setup-action@v1
- name: Get HEAD Commit ID
run: echo "HEAD_COMMIT_ID=$(git rev-parse HEAD)" >> $GITHUB_ENV
- name: Define Branch Name
run: echo "BRANCH_NAME=${{inputs.branch}}" >> $GITHUB_ENV
- name: Build
id: build-image
env:
DOCKER_REPO_ARM: ${{ secrets.RELEASE_ARM_REGISTRY }}
DOCKER_REPO_OSS: ${{ secrets.RELEASE_OSS_REGISTRY }}
MSAAS_REPO_CLONE_TOKEN: ${{ secrets.MSAAS_REPO_CLONE_TOKEN }}
MSAAS_REPO_URL: ${{ secrets.MSAAS_REPO_URL }}
MSAAS_REPO_FOLDER: /tmp/msaas
run: |
set -exo pipefail
git config --local user.email "action@github.com"
git config --local user.name "GitHub Action"
git checkout -b $BRANCH_NAME
working_dir=$(pwd)
function image_version(){
local service=$1
chart_path="$working_dir/scripts/helmcharts/openreplay/charts/$service/Chart.yaml"
current_version=$(yq eval '.AppVersion' $chart_path)
new_version=$(echo $current_version | awk -F. '{$NF += 1 ; print $1"."$2"."$3}')
echo $new_version
# yq eval ".AppVersion = \"$new_version\"" -i $chart_path
}
function clone_msaas() {
[ -d $MSAAS_REPO_FOLDER ] || {
git clone -b $INPUT_TAG --recursive https://x-access-token:$MSAAS_REPO_CLONE_TOKEN@$MSAAS_REPO_URL $MSAAS_REPO_FOLDER
cd $MSAAS_REPO_FOLDER
cd openreplay && git fetch origin && git checkout $INPUT_TAG
git log -1
cd $MSAAS_REPO_FOLDER
bash git-init.sh
git checkout
}
}
function build_managed() {
local service=$1
local version=$2
echo building managed
clone_msaas
if [[ $service == 'chalice' ]]; then
cd $MSAAS_REPO_FOLDER/openreplay/api
else
cd $MSAAS_REPO_FOLDER/openreplay/$service
fi
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash build.sh >> /tmp/arm.txt
}
# Checking for backend images
ls backend/cmd >> /tmp/backend.txt
echo Services: "${{ github.event.inputs.services }}"
IFS=',' read -ra SERVICES <<< "${{ github.event.inputs.services }}"
BUILD_SCRIPT_NAME="build.sh"
# Build FOSS
for SERVICE in "${SERVICES[@]}"; do
# Check if service is backend
if grep -q $SERVICE /tmp/backend.txt; then
cd backend
foss_build_args="nil $SERVICE"
ee_build_args="ee $SERVICE"
else
[[ $SERVICE == 'chalice' || $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && cd $working_dir/api || cd $SERVICE
[[ $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && BUILD_SCRIPT_NAME="build_${SERVICE}.sh"
ee_build_args="ee"
fi
version=$(image_version $SERVICE)
echo IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
echo IMAGE_TAG=$version-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
IMAGE_TAG=$version-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
if [[ "$SERVICE" != "chalice" && "$SERVICE" != "frontend" ]]; then
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
echo IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
else
build_managed $SERVICE $version
fi
cd $working_dir
chart_path="$working_dir/scripts/helmcharts/openreplay/charts/$SERVICE/Chart.yaml"
yq eval ".AppVersion = \"$version\"" -i $chart_path
git add $chart_path
git commit -m "Increment $SERVICE chart version"
done
- name: Change commit timestamp
run: |
# Convert the timestamp to a date format git can understand
NEW_DATE=$(perl -le 'print scalar gmtime($ARGV[0])." +0000"' $NEW_TIMESTAMP)
echo "Setting commit date to: $NEW_DATE"
# Amend the commit with the new date
GIT_COMMITTER_DATE="$NEW_DATE" git commit --amend --no-edit --date="$NEW_DATE"
# Verify the change
git log -1 --pretty=format:"Commit now dated: %cD"
# git tag and push
git tag $INPUT_TAG -f
git push origin $INPUT_TAG -f
# - name: Debug Job
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO_ARM: ${{ secrets.RELEASE_ARM_REGISTRY }}
# DOCKER_REPO_OSS: ${{ secrets.RELEASE_OSS_REGISTRY }}
# MSAAS_REPO_CLONE_TOKEN: ${{ secrets.MSAAS_REPO_CLONE_TOKEN }}
# MSAAS_REPO_URL: ${{ secrets.MSAAS_REPO_URL }}
# MSAAS_REPO_FOLDER: /tmp/msaas
# with:
# limit-access-to-actor: true
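The timestamp handling in this vendored workflow amends the version-bump commit to sit one second after the oldest of the last three commits before force-moving the tag, presumably so the rebuilt tag's commit does not appear newer than the history around it. The same steps in isolation (assumes the tag's branch is already checked out and $INPUT_TAG is set):

    # Oldest of the last three commit timestamps, plus one second.
    OLDEST_TS=$(git log -3 --pretty=format:"%at" | tail -1)
    NEW_DATE=$(perl -le 'print scalar gmtime($ARGV[0])." +0000"' $((OLDEST_TS + 1)))

    # Amend HEAD so the author and committer dates both match the new date.
    GIT_COMMITTER_DATE="$NEW_DATE" git commit --amend --no-edit --date="$NEW_DATE"

    # Re-point the tag at the amended commit and push it over the old one.
    git tag "$INPUT_TAG" -f
    git push origin "$INPUT_TAG" -f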

View file

@ -2,7 +2,6 @@
on:
workflow_dispatch:
description: 'This workflow will build for patches for latest tag, and will Always use commit from main branch.'
inputs:
services:
description: 'Comma separated names of services to build(in small letters).'
@ -20,12 +19,20 @@ jobs:
DEPOT_PROJECT_ID: ${{ secrets.DEPOT_PROJECT_ID }}
steps:
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
fetch-depth: 1
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Rebase with main branch, to make sure the code has latest main changes
if: github.ref != 'refs/heads/main'
run: |
git pull --rebase origin main
git remote -v
git config --global user.email "action@github.com"
git config --global user.name "GitHub Action"
git config --global rebase.autoStash true
git fetch origin main:main
git rebase main
git log -3
- name: Downloading yq
run: |
@ -48,6 +55,8 @@ jobs:
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin ${{ secrets.RELEASE_OSS_REGISTRY }}
- uses: depot/setup-action@v1
env:
DEPOT_TOKEN: ${{ secrets.DEPOT_TOKEN }}
- name: Get HEAD Commit ID
run: echo "HEAD_COMMIT_ID=$(git rev-parse HEAD)" >> $GITHUB_ENV
- name: Define Branch Name
@ -65,78 +74,168 @@ jobs:
MSAAS_REPO_CLONE_TOKEN: ${{ secrets.MSAAS_REPO_CLONE_TOKEN }}
MSAAS_REPO_URL: ${{ secrets.MSAAS_REPO_URL }}
MSAAS_REPO_FOLDER: /tmp/msaas
SERVICES_INPUT: ${{ github.event.inputs.services }}
run: |
set -exo pipefail
git config --local user.email "action@github.com"
git config --local user.name "GitHub Action"
git checkout -b $BRANCH_NAME
working_dir=$(pwd)
function image_version(){
local service=$1
chart_path="$working_dir/scripts/helmcharts/openreplay/charts/$service/Chart.yaml"
current_version=$(yq eval '.AppVersion' $chart_path)
new_version=$(echo $current_version | awk -F. '{$NF += 1 ; print $1"."$2"."$3}')
echo $new_version
# yq eval ".AppVersion = \"$new_version\"" -i $chart_path
#!/bin/bash
set -euo pipefail
# Configuration
readonly WORKING_DIR=$(pwd)
readonly BUILD_SCRIPT_NAME="build.sh"
readonly BACKEND_SERVICES_FILE="/tmp/backend.txt"
# Initialize git configuration
setup_git() {
git config --local user.email "action@github.com"
git config --local user.name "GitHub Action"
git checkout -b "$BRANCH_NAME"
}
function clone_msaas() {
[ -d $MSAAS_REPO_FOLDER ] || {
git clone -b dev --recursive https://x-access-token:$MSAAS_REPO_CLONE_TOKEN@$MSAAS_REPO_URL $MSAAS_REPO_FOLDER
cd $MSAAS_REPO_FOLDER
cd openreplay && git fetch origin && git checkout main # This have to be changed to specific tag
git log -1
cd $MSAAS_REPO_FOLDER
bash git-init.sh
git checkout
}
# Get and increment image version
image_version() {
local service=$1
local chart_path="$WORKING_DIR/scripts/helmcharts/openreplay/charts/$service/Chart.yaml"
local current_version new_version
current_version=$(yq eval '.AppVersion' "$chart_path")
new_version=$(echo "$current_version" | awk -F. '{$NF += 1; print $1"."$2"."$3}')
echo "$new_version"
}
function build_managed() {
local service=$1
local version=$2
echo building managed
clone_msaas
if [[ $service == 'chalice' ]]; then
cd $MSAAS_REPO_FOLDER/openreplay/api
else
cd $MSAAS_REPO_FOLDER/openreplay/$service
fi
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash build.sh >> /tmp/arm.txt
# Clone MSAAS repository if not exists
clone_msaas() {
if [[ ! -d "$MSAAS_REPO_FOLDER" ]]; then
git clone -b dev --recursive "https://x-access-token:${MSAAS_REPO_CLONE_TOKEN}@${MSAAS_REPO_URL}" "$MSAAS_REPO_FOLDER"
cd "$MSAAS_REPO_FOLDER"
cd openreplay && git fetch origin && git checkout main
git log -1
cd "$MSAAS_REPO_FOLDER"
bash git-init.sh
git checkout
fi
}
# Checking for backend images
ls backend/cmd >> /tmp/backend.txt
echo Services: "${{ github.event.inputs.services }}"
IFS=',' read -ra SERVICES <<< "${{ github.event.inputs.services }}"
BUILD_SCRIPT_NAME="build.sh"
# Build FOSS
for SERVICE in "${SERVICES[@]}"; do
# Check if service is backend
if grep -q $SERVICE /tmp/backend.txt; then
cd backend
foss_build_args="nil $SERVICE"
ee_build_args="ee $SERVICE"
else
[[ $SERVICE == 'chalice' || $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && cd $working_dir/api || cd $SERVICE
[[ $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && BUILD_SCRIPT_NAME="build_${SERVICE}.sh"
ee_build_args="ee"
fi
version=$(image_version $SERVICE)
echo IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
echo IMAGE_TAG=$version-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
IMAGE_TAG=$version-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
if [[ "$SERVICE" != "chalice" && "$SERVICE" != "frontend" ]]; then
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
echo IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
else
build_managed $SERVICE $version
fi
cd $working_dir
chart_path="$working_dir/scripts/helmcharts/openreplay/charts/$SERVICE/Chart.yaml"
yq eval ".AppVersion = \"$version\"" -i $chart_path
git add $chart_path
git commit -m "Increment $SERVICE chart version"
git push --set-upstream origin $BRANCH_NAME
done
# Build managed services
build_managed() {
local service=$1
local version=$2
echo "Building managed service: $service"
clone_msaas
if [[ $service == 'chalice' ]]; then
cd "$MSAAS_REPO_FOLDER/openreplay/api"
else
cd "$MSAAS_REPO_FOLDER/openreplay/$service"
fi
local build_cmd="IMAGE_TAG=$version DOCKER_RUNTIME=depot DOCKER_BUILD_ARGS=--push ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash build.sh"
echo "Executing: $build_cmd"
if ! eval "$build_cmd" 2>&1; then
echo "Build failed for $service"
exit 1
fi
}
# Build service with given arguments
build_service() {
local service=$1
local version=$2
local build_args=$3
local build_script=${4:-$BUILD_SCRIPT_NAME}
local command="IMAGE_TAG=$version DOCKER_RUNTIME=depot DOCKER_BUILD_ARGS=--push ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash $build_script $build_args"
echo "Executing: $command"
eval "$command"
}
# Update chart version and commit changes
update_chart_version() {
local service=$1
local version=$2
local chart_path="$WORKING_DIR/scripts/helmcharts/openreplay/charts/$service/Chart.yaml"
# Ensure we're in the original working directory/repository
cd "$WORKING_DIR"
yq eval ".AppVersion = \"$version\"" -i "$chart_path"
git add "$chart_path"
git commit -m "Increment $service chart version to $version"
git push --set-upstream origin "$BRANCH_NAME"
cd -
}
# Main execution
main() {
setup_git
# Get backend services list
ls backend/cmd >"$BACKEND_SERVICES_FILE"
# Parse services input (fix for GitHub Actions syntax)
echo "Services: ${SERVICES_INPUT:-$1}"
IFS=',' read -ra services <<<"${SERVICES_INPUT:-$1}"
# Process each service
for service in "${services[@]}"; do
echo "Processing service: $service"
cd "$WORKING_DIR"
local foss_build_args="" ee_build_args="" build_script="$BUILD_SCRIPT_NAME"
# Determine build configuration based on service type
if grep -q "$service" "$BACKEND_SERVICES_FILE"; then
# Backend service
cd backend
foss_build_args="nil $service"
ee_build_args="ee $service"
else
# Non-backend service
case "$service" in
chalice | alerts | crons)
cd "$WORKING_DIR/api"
;;
*)
cd "$service"
;;
esac
# Special build scripts for alerts/crons
if [[ $service == 'alerts' || $service == 'crons' ]]; then
build_script="build_${service}.sh"
fi
ee_build_args="ee"
fi
# Get version and build
local version
version=$(image_version "$service")
# Build FOSS and EE versions
build_service "$service" "$version" "$foss_build_args"
build_service "$service" "${version}-ee" "$ee_build_args"
# Build managed version for specific services
if [[ "$service" != "chalice" && "$service" != "frontend" ]]; then
echo "Nothing to build in managed for service $service"
else
build_managed "$service" "$version"
fi
# Update chart and commit
update_chart_version "$service" "$version"
done
cd "$WORKING_DIR"
# Cleanup
rm -f "$BACKEND_SERVICES_FILE"
}
echo "Working directory: $WORKING_DIR"
# Run main function with all arguments
main "$SERVICES_INPUT"
- name: Create Pull Request
uses: repo-sync/pull-request@v2
@ -147,8 +246,7 @@ jobs:
pr_title: "Updated patch build from main ${{ env.HEAD_COMMIT_ID }}"
pr_body: |
This PR updates the Helm chart version after building the patch from $HEAD_COMMIT_ID.
Once this PR is merged, To update the latest tag, run the following workflow.
https://github.com/openreplay/openreplay/actions/workflows/update-tag.yaml
Once this PR is merged, tag update job will run automatically.
# - name: Debug Job
# if: ${{ failure() }}
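Both the old and refactored image_version helpers above bump the chart's patch number with a single awk expression. Isolated, with a made-up version string:

    # Split "1.22.41" on dots, add 1 to the last field, re-join: prints 1.22.42.
    echo "1.22.41" | awk -F. '{$NF += 1; print $1"."$2"."$3}'

    # In the workflow the input comes from the chart itself:
    # current_version=$(yq eval '.AppVersion' "$chart_path")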

View file

@ -1,149 +0,0 @@
# This action will push the peers changes to aws
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip Security checks if there is a unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
- api-*
paths:
- "peers/**"
- "!peers/.gitignore"
- "!peers/*-dev.sh"
name: Build and Deploy Peers
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.OSS_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.OSS_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.OSS_LICENSE_KEY }}
minio_access_key: ${{ secrets.OSS_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.OSS_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.OSS_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
# Caching docker images
- uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
- name: Building and Pushing peers image
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd peers
PUSH_IMAGE=0 bash -x ./build.sh
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("peers")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("peers")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
tag: ${image_array[1]}
EOF
done
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
# Update changed image tag
sed -i "/peers/{n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,peers,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks | kubectl apply -n app -f -
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
- name: Alert slack
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: foss
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
SLACK_USERNAME: "OR Bot"
SLACK_MESSAGE: "Build failed :bomb:"
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true

View file

@ -0,0 +1,103 @@
name: Release Deployment
on:
workflow_dispatch:
inputs:
services:
description: 'Comma-separated list of services to deploy. eg: frontend,api,sink'
required: true
branch:
description: 'Branch to deploy (defaults to dev)'
required: false
default: 'dev'
env:
IMAGE_REGISTRY_URL: ${{ secrets.OSS_REGISTRY_URL }}
DEPOT_PROJECT_ID: ${{ secrets.DEPOT_PROJECT_ID }}
DEPOT_TOKEN: ${{ secrets.DEPOT_TOKEN }}
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
with:
ref: ${{ github.event.inputs.branch }}
- name: Docker login
run: |
docker login $IMAGE_REGISTRY_URL -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- name: Set image tag with branch info
run: |
SHORT_SHA=$(git rev-parse --short HEAD)
echo "IMAGE_TAG=${{ github.event.inputs.branch }}-${SHORT_SHA}" >> $GITHUB_ENV
echo "Using image tag: $IMAGE_TAG"
- uses: depot/setup-action@v1
- name: Build and push Docker images
run: |
# Parse the comma-separated services list into an array
IFS=',' read -ra SERVICES <<< "${{ github.event.inputs.services }}"
working_dir=$(pwd)
# Define backend services (consider moving this to workflow inputs or repo config)
ls backend/cmd >> /tmp/backend.txt
BUILD_SCRIPT_NAME="build.sh"
for SERVICE in "${SERVICES[@]}"; do
# Check if service is backend
if grep -q $SERVICE /tmp/backend.txt; then
cd $working_dir/backend
foss_build_args="nil $SERVICE"
ee_build_args="ee $SERVICE"
else
cd $working_dir
[[ $SERVICE == 'chalice' || $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && cd $working_dir/api || cd $SERVICE
[[ $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && BUILD_SCRIPT_NAME="build_${SERVICE}.sh"
ee_build_args="ee"
fi
{
echo IMAGE_TAG=$IMAGE_TAG DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$IMAGE_REGISTRY_URL PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
IMAGE_TAG=$IMAGE_TAG DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$IMAGE_REGISTRY_URL PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
}&
{
echo IMAGE_TAG=${IMAGE_TAG}-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$IMAGE_REGISTRY_URL PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
IMAGE_TAG=${IMAGE_TAG}-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$IMAGE_REGISTRY_URL PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
}&
done
wait
- uses: azure/k8s-set-context@v1
name: Using ee release cluster
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_RELEASE_KUBECONFIG }}
- name: Deploy to ee release Kubernetes
run: |
echo "Deploying services to EE cluster: ${{ github.event.inputs.services }}"
IFS=',' read -ra SERVICES <<< "${{ github.event.inputs.services }}"
for SERVICE in "${SERVICES[@]}"; do
SERVICE=$(echo $SERVICE | xargs) # Trim whitespace
echo "Deploying $SERVICE to EE cluster with image tag: ${IMAGE_TAG}"
kubectl set image deployment/$SERVICE-openreplay -n app $SERVICE=${IMAGE_REGISTRY_URL}/$SERVICE:${IMAGE_TAG}-ee
done
- uses: azure/k8s-set-context@v1
name: Using foss release cluster
with:
method: kubeconfig
kubeconfig: ${{ secrets.FOSS_RELEASE_KUBECONFIG }}
- name: Deploy to FOSS release Kubernetes
run: |
echo "Deploying services to FOSS cluster: ${{ github.event.inputs.services }}"
IFS=',' read -ra SERVICES <<< "${{ github.event.inputs.services }}"
for SERVICE in "${SERVICES[@]}"; do
SERVICE=$(echo $SERVICE | xargs) # Trim whitespace
echo "Deploying $SERVICE to FOSS cluster with image tag: ${IMAGE_TAG}"
echo "Deploying $SERVICE to FOSS cluster with image tag: ${IMAGE_TAG}"
kubectl set image deployment/$SERVICE-openreplay -n app $SERVICE=${IMAGE_REGISTRY_URL}/$SERVICE:${IMAGE_TAG}
done
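The build step in this new release workflow backgrounds the FOSS and EE builds of each service with { ... } & and joins them with a single wait. A stripped-down sketch of the pattern with placeholder commands; note that a bare wait exits 0 even when a background job failed, so stricter pipelines would use wait -n or collect and check PIDs:

    # Launch two builds concurrently.
    { echo "building foss image"; sleep 1; } &
    { echo "building ee image"; sleep 1; } &
    # Block until both background jobs finish.
    wait
    echo "all builds finished"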

View file

@ -1,4 +1,4 @@
# This action will push the sourcemapreader changes to aws
# This action will push the sourcemapreader changes to ee
on:
workflow_dispatch:
inputs:
@ -9,13 +9,13 @@ on:
push:
branches:
- dev
- api-*
paths:
- "ee/sourcemap-reader/**"
- "sourcemap-reader/**"
- "!sourcemap-reader/.gitignore"
- "!sourcemap-reader/*-dev.sh"
name: Build and Deploy sourcemap-reader
name: Build and Deploy sourcemap-reader EE
jobs:
deploy:
@ -64,7 +64,7 @@ jobs:
- name: Building and Pushing sourcemaps-reader image
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}-ee
ENVIRONMENT: staging
run: |
@ -132,7 +132,7 @@ jobs:
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: foss
SLACK_CHANNEL: ee
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}

View file

@ -9,7 +9,6 @@ on:
push:
branches:
- dev
- api-*
paths:
- "sourcemap-reader/**"
- "!sourcemap-reader/.gitignore"

View file

@ -1,35 +1,42 @@
on:
workflow_dispatch:
description: "This workflow will build for patches for latest tag, and will Always use commit from main branch."
inputs:
services:
description: "This action will update the latest tag with current main branch HEAD. Should I proceed ? true/false"
required: true
default: "false"
name: Force Push tag with main branch HEAD
pull_request:
types: [closed]
branches:
- main
name: Release tag update --force
jobs:
deploy:
name: Build Patch from main
runs-on: ubuntu-latest
env:
DEPOT_TOKEN: ${{ secrets.DEPOT_TOKEN }}
DEPOT_PROJECT_ID: ${{ secrets.DEPOT_PROJECT_ID }}
if: ${{ (github.event_name == 'pull_request' && github.event.pull_request.merged == true) || github.event.inputs.services == 'true' }}
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Get latest release tag using GitHub API
id: get-latest-tag
run: |
LATEST_TAG=$(curl -s -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
"https://api.github.com/repos/${{ github.repository }}/releases/latest" \
| jq -r .tag_name)
# Fallback to git command if API doesn't return a tag
if [ "$LATEST_TAG" == "null" ] || [ -z "$LATEST_TAG" ]; then
echo "Not found latest tag"
exit 100
fi
echo "LATEST_TAG=$LATEST_TAG" >> $GITHUB_ENV
echo "Latest tag: $LATEST_TAG"
- name: Set Remote with GITHUB_TOKEN
run: |
git config --unset http.https://github.com/.extraheader
git remote set-url origin https://x-access-token:${{ secrets.ACTIONS_COMMMIT_TOKEN }}@github.com/${{ github.repository }}.git
git remote set-url origin https://x-access-token:${{ secrets.ACTIONS_COMMMIT_TOKEN }}@github.com/${{ github.repository }}
- name: Push main branch to tag
run: |
git fetch --tags
git checkout main
git push origin HEAD:refs/tags/$(git tag --list 'v[0-9]*' --sort=-v:refname | head -n 1) --force
# - name: Debug Job
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# with:
# limit-access-to-actor: true
echo "Updating tag ${{ env.LATEST_TAG }} to point to latest commit on main"
git push origin HEAD:refs/tags/${{ env.LATEST_TAG }} --force
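The update-tag job now resolves the tag to move from the GitHub releases API instead of from whatever version-shaped tags the local clone happens to have. The two lookups side by side (unauthenticated API calls are rate-limited, hence the token the workflow passes):

    # Old: newest version-shaped tag known to the local clone.
    git tag --list 'v[0-9]*' --sort=-v:refname | head -n 1
    # New: tag of the latest published release, straight from the API.
    curl -s "https://api.github.com/repos/openreplay/openreplay/releases/latest" | jq -r .tag_name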

View file

@ -1,4 +1,4 @@
Copyright (c) 2021-2024 Asayer, Inc dba OpenReplay
Copyright (c) 2021-2025 Asayer, Inc dba OpenReplay
OpenReplay monorepo uses multiple licenses. Portions of this software are licensed as follows:
- All content that resides under the "ee/" directory of this repository, is licensed under the license defined in "ee/LICENSE".

View file

@ -1,10 +1,17 @@
FROM python:3.11-alpine
LABEL Maintainer="Rajesh Rajendran<rjshrjndrn@gmail.com>"
LABEL Maintainer="KRAIEM Taha Yassine<tahayk2@gmail.com>"
ARG GIT_SHA
LABEL GIT_SHA=$GIT_SHA
FROM python:3.12-alpine AS builder
LABEL maintainer="Rajesh Rajendran<rjshrjndrn@gmail.com>"
LABEL maintainer="KRAIEM Taha Yassine<tahayk2@gmail.com>"
RUN apk add --no-cache build-base tini
RUN apk add --no-cache build-base
WORKDIR /work
COPY requirements.txt ./requirements.txt
RUN pip install --no-cache-dir --upgrade uv && \
export UV_SYSTEM_PYTHON=true && \
uv pip install --no-cache-dir --upgrade pip setuptools wheel && \
uv pip install --no-cache-dir --upgrade -r requirements.txt
FROM python:3.12-alpine
ARG GIT_SHA
ARG envarg
# Add Tini
# Startup daemon
@ -14,19 +21,11 @@ ENV SOURCE_MAP_VERSION=0.7.4 \
PRIVATE_ENDPOINTS=false \
ENTERPRISE_BUILD=${envarg} \
GIT_SHA=$GIT_SHA
COPY --from=builder /usr/local/lib/python3.12/site-packages /usr/local/lib/python3.12/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin
WORKDIR /work
COPY requirements.txt ./requirements.txt
RUN pip install --no-cache-dir --upgrade uv
RUN uv pip install --no-cache-dir --upgrade pip setuptools wheel --system
RUN uv pip install --no-cache-dir --upgrade -r requirements.txt --system
COPY . .
RUN mv env.default .env
RUN adduser -u 1001 openreplay -D
USER 1001
RUN apk add --no-cache tini && mv env.default .env
ENTRYPOINT ["/sbin/tini", "--"]
CMD ./entrypoint.sh
CMD ["./entrypoint.sh"]
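The Dockerfile rewrite above installs dependencies in a separate builder stage and copies only the resulting site-packages into the runtime image, so build-base never reaches the final layer. A hypothetical local build honoring the declared args (image tag, context path, and the envarg value are illustrative, not taken from the repo's build scripts):

    # GIT_SHA ends up in the image ENV; envarg feeds ENTERPRISE_BUILD.
    docker build \
      --build-arg GIT_SHA="$(git rev-parse HEAD)" \
      --build-arg envarg="default-foss" \
      -t openreplay/service:local .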

View file

@ -1,4 +1,4 @@
FROM python:3.11-alpine
FROM python:3.12-alpine
LABEL Maintainer="Rajesh Rajendran<rjshrjndrn@gmail.com>"
LABEL Maintainer="KRAIEM Taha Yassine<tahayk2@gmail.com>"
ARG GIT_SHA

View file

@ -4,23 +4,26 @@ verify_ssl = true
name = "pypi"
[packages]
urllib3 = "==1.26.16"
urllib3 = "==2.3.0"
requests = "==2.32.3"
boto3 = "==1.35.60"
pyjwt = "==2.9.0"
boto3 = "==1.36.12"
pyjwt = "==2.10.1"
psycopg2-binary = "==2.9.10"
psycopg = {extras = ["pool", "binary"], version = "==3.2.3"}
elasticsearch = "==8.16.0"
psycopg = {extras = ["pool", "binary"], version = "==3.2.4"}
clickhouse-driver = {extras = ["lz4"], version = "==0.2.9"}
clickhouse-connect = "==0.8.15"
elasticsearch = "==8.17.1"
jira = "==3.8.0"
cachetools = "==5.5.0"
fastapi = "==0.115.5"
uvicorn = {extras = ["standard"], version = "==0.32.0"}
cachetools = "==5.5.1"
fastapi = "==0.115.8"
uvicorn = {extras = ["standard"], version = "==0.34.0"}
python-decouple = "==3.8"
pydantic = {extras = ["email"], version = "==2.9.2"}
apscheduler = "==3.10.4"
redis = "==5.2.0"
pydantic = {extras = ["email"], version = "==2.10.6"}
apscheduler = "==3.11.0"
redis = "==5.2.1"
[dev-packages]
[requires]
python_version = "3.12"
python_full_version = "3.12.8"

View file

@ -13,17 +13,16 @@ from psycopg.rows import dict_row
from starlette.responses import StreamingResponse
from chalicelib.utils import helper
from chalicelib.utils import pg_client
from chalicelib.utils import pg_client, ch_client
from crons import core_crons, core_dynamic_crons
from routers import core, core_dynamic
from routers.subs import insights, metrics, v1_api, health, usability_tests, spot
from routers.subs import insights, metrics, v1_api, health, usability_tests, spot, product_anaytics
loglevel = config("LOGLEVEL", default=logging.WARNING)
print(f">Loglevel set to: {loglevel}")
logging.basicConfig(level=loglevel)
class ORPYAsyncConnection(AsyncConnection):
def __init__(self, *args, **kwargs):
@ -39,6 +38,7 @@ async def lifespan(app: FastAPI):
app.schedule = AsyncIOScheduler()
await pg_client.init()
await ch_client.init()
app.schedule.start()
for job in core_crons.cron_jobs + core_dynamic_crons.cron_jobs:
@ -128,3 +128,7 @@ app.include_router(usability_tests.app_apikey)
app.include_router(spot.public_app)
app.include_router(spot.app)
app.include_router(spot.app_apikey)
app.include_router(product_anaytics.public_app)
app.include_router(product_anaytics.app)
app.include_router(product_anaytics.app_apikey)

View file

@ -5,14 +5,14 @@ from apscheduler.schedulers.asyncio import AsyncIOScheduler
from decouple import config
from fastapi import FastAPI
from chalicelib.core import alerts_processor
from chalicelib.core.alerts import alerts_processor
from chalicelib.utils import pg_client
@asynccontextmanager
async def lifespan(app: FastAPI):
# Startup
logging.info(">>>>> starting up <<<<<")
ap_logger.info(">>>>> starting up <<<<<")
await pg_client.init()
app.schedule.start()
app.schedule.add_job(id="alerts_processor", **{"func": alerts_processor.process, "trigger": "interval",
@@ -27,14 +27,22 @@ async def lifespan(app: FastAPI):
yield
# Shutdown
logging.info(">>>>> shutting down <<<<<")
ap_logger.info(">>>>> shutting down <<<<<")
app.schedule.shutdown(wait=False)
await pg_client.terminate()
loglevel = config("LOGLEVEL", default=logging.INFO)
print(f">Loglevel set to: {loglevel}")
logging.basicConfig(level=loglevel)
ap_logger = logging.getLogger('apscheduler')
ap_logger.setLevel(loglevel)
app = FastAPI(root_path=config("root_path", default="/alerts"), docs_url=config("docs_url", default=""),
redoc_url=config("redoc_url", default=""), lifespan=lifespan)
logging.info("============= ALERTS =============")
app.schedule = AsyncIOScheduler()
ap_logger.info("============= ALERTS =============")
@app.get("/")
@@ -50,17 +58,8 @@ async def get_health_status():
}}
app.schedule = AsyncIOScheduler()
loglevel = config("LOGLEVEL", default=logging.INFO)
print(f">Loglevel set to: {loglevel}")
logging.basicConfig(level=loglevel)
ap_logger = logging.getLogger('apscheduler')
ap_logger.setLevel(loglevel)
app.schedule = AsyncIOScheduler()
if config("LOCAL_DEV", default=False, cast=bool):
@app.get('/trigger', tags=["private"])
async def trigger_main_cron():
logging.info("Triggering main cron")
ap_logger.info("Triggering main cron")
alerts_processor.process()

View file
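The refactor routes the alerts service's lifecycle messages through a dedicated 'apscheduler' logger whose level is driven by the same LOGLEVEL setting as the root logger, and moves that setup above the FastAPI app so the first startup line is already formatted. A minimal sketch of the wiring, assuming only the stdlib and python-decouple:

import logging
from decouple import config

loglevel = config("LOGLEVEL", default=logging.INFO)
logging.basicConfig(level=loglevel)

# A named logger lets scheduler chatter be tuned independently
# of everything else that logs through the root logger.
ap_logger = logging.getLogger("apscheduler")
ap_logger.setLevel(loglevel)
ap_logger.info("============= ALERTS =============")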

@@ -45,8 +45,6 @@ class JWTAuth(HTTPBearer):
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
detail="Invalid authentication scheme.")
jwt_payload = authorizers.jwt_authorizer(scheme=credentials.scheme, token=credentials.credentials)
logger.info("------ jwt_payload ------")
logger.info(jwt_payload)
auth_exists = jwt_payload is not None and users.auth_exists(user_id=jwt_payload.get("userId", -1),
jwt_iat=jwt_payload.get("iat", 100))
if jwt_payload is None \
@@ -120,8 +118,7 @@ class JWTAuth(HTTPBearer):
jwt_payload = None
else:
jwt_payload = authorizers.jwt_refresh_authorizer(scheme="Bearer", token=request.cookies["spotRefreshToken"])
logger.info("__process_spot_refresh_call")
logger.info(jwt_payload)
if jwt_payload is None or jwt_payload.get("jti") is None:
logger.warning("Null spotRefreshToken's payload, or null JTI.")
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN,

View file

@@ -0,0 +1,10 @@
import logging
from decouple import config
logger = logging.getLogger(__name__)
if config("EXP_ALERTS", cast=bool, default=False):
logging.info(">>> Using experimental alerts")
from . import alerts_processor_ch as alerts_processor
else:
from . import alerts_processor as alerts_processor

View file
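With this toggle, callers never pick a backend themselves: they import the package-level name and get whichever implementation EXP_ALERTS selected at import time, which is how the crons module above consumes it. A sketch of the call site:

from chalicelib.core.alerts import alerts_processor

# Resolves to alerts_processor_ch when EXP_ALERTS=true,
# otherwise to the PostgreSQL implementation.
alerts_processor.process()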

@@ -7,8 +7,8 @@ from decouple import config
import schemas
from chalicelib.core import notifications, webhook
from chalicelib.core.collaboration_msteams import MSTeams
from chalicelib.core.collaboration_slack import Slack
from chalicelib.core.collaborations.collaboration_msteams import MSTeams
from chalicelib.core.collaborations.collaboration_slack import Slack
from chalicelib.utils import pg_client, helper, email_helper, smtp
from chalicelib.utils.TimeUTC import TimeUTC

View file

@@ -1,9 +1,10 @@
from chalicelib.core.alerts.modules import TENANT_ID
from chalicelib.utils import pg_client, helper
def get_all_alerts():
with pg_client.PostgresClient(long_query=True) as cur:
query = """SELECT tenant_id,
query = f"""SELECT {TENANT_ID} AS tenant_id,
alert_id,
projects.project_id,
projects.name AS project_name,

View file
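TENANT_ID here is a plain module constant (defined as "-1" in the modules package further down), so the f-string renders to the same SQL the old hard-coded query produced; the point of hoisting it, presumably, is that an enterprise build can swap in a real tenant column without touching the query body. A quick sketch of the substitution:

TENANT_ID = "-1"  # value from chalicelib.core.alerts.modules

query = f"SELECT {TENANT_ID} AS tenant_id, alert_id"
print(query)  # SELECT -1 AS tenant_id, alert_id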

@@ -1,16 +1,16 @@
import decimal
import logging
from pydantic_core._pydantic_core import ValidationError
import schemas
from chalicelib.core import alerts
from chalicelib.core import alerts_listener
from chalicelib.core import sessions
from chalicelib.core.alerts import alerts, alerts_listener
from chalicelib.core.alerts.modules import alert_helpers
from chalicelib.core.sessions import sessions_pg as sessions
from chalicelib.utils import pg_client
from chalicelib.utils.TimeUTC import TimeUTC
logger = logging.getLogger(__name__)
LeftToDb = {
schemas.AlertColumn.PERFORMANCE__DOM_CONTENT_LOADED__AVERAGE: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
@@ -46,35 +46,6 @@ LeftToDb = {
"formula": "COUNT(DISTINCT session_id)", "condition": "source!='js_exception'", "joinSessions": False},
}
# This is the frequency of execution for each threshold
TimeInterval = {
15: 3,
30: 5,
60: 10,
120: 20,
240: 30,
1440: 60,
}
def can_check(a) -> bool:
now = TimeUTC.now()
repetitionBase = a["options"]["currentPeriod"] \
if a["detectionMethod"] == schemas.AlertDetectionMethod.CHANGE \
and a["options"]["currentPeriod"] > a["options"]["previousPeriod"] \
else a["options"]["previousPeriod"]
if TimeInterval.get(repetitionBase) is None:
logger.error(f"repetitionBase: {repetitionBase} NOT FOUND")
return False
return (a["options"]["renotifyInterval"] <= 0 or
a["options"].get("lastNotification") is None or
a["options"]["lastNotification"] <= 0 or
((now - a["options"]["lastNotification"]) > a["options"]["renotifyInterval"] * 60 * 1000)) \
and ((now - a["createdAt"]) % (TimeInterval[repetitionBase] * 60 * 1000)) < 60 * 1000
def Build(a):
now = TimeUTC.now()
@@ -161,11 +132,12 @@ def Build(a):
def process():
logger.info("> processing alerts on PG")
notifications = []
all_alerts = alerts_listener.get_all_alerts()
with pg_client.PostgresClient() as cur:
for alert in all_alerts:
if can_check(alert):
if alert_helpers.can_check(alert):
query, params = Build(alert)
try:
query = cur.mogrify(query, params)
@@ -181,7 +153,7 @@ def process():
result = cur.fetchone()
if result["valid"]:
logger.info(f"Valid alert, notifying users, alertId:{alert['alertId']} name: {alert['name']}")
notifications.append(generate_notification(alert, result))
notifications.append(alert_helpers.generate_notification(alert, result))
except Exception as e:
logger.error(
f"!!!Error while running alert query for alertId:{alert['alertId']} name: {alert['name']}")
@@ -195,42 +167,3 @@ def process():
WHERE alert_id IN %(ids)s;""", {"ids": tuple([n["alertId"] for n in notifications])}))
if len(notifications) > 0:
alerts.process_notifications(notifications)
def __format_value(x):
if x % 1 == 0:
x = int(x)
else:
x = round(x, 2)
return f"{x:,}"
def generate_notification(alert, result):
left = __format_value(result['value'])
right = __format_value(alert['query']['right'])
return {
"alertId": alert["alertId"],
"tenantId": alert["tenantId"],
"title": alert["name"],
"description": f"{alert['seriesName']} = {left} ({alert['query']['operator']} {right}).",
"buttonText": "Check metrics for more details",
"buttonUrl": f"/{alert['projectId']}/metrics",
"imageUrl": None,
"projectId": alert["projectId"],
"projectName": alert["projectName"],
"options": {"source": "ALERT", "sourceId": alert["alertId"],
"sourceMeta": alert["detectionMethod"],
"message": alert["options"]["message"], "projectId": alert["projectId"],
"data": {"title": alert["name"],
"limitValue": alert["query"]["right"],
"actualValue": float(result["value"]) \
if isinstance(result["value"], decimal.Decimal) \
else result["value"],
"operator": alert["query"]["operator"],
"trigger": alert["query"]["left"],
"alertId": alert["alertId"],
"detectionMethod": alert["detectionMethod"],
"currentPeriod": alert["options"]["currentPeriod"],
"previousPeriod": alert["options"]["previousPeriod"],
"createdAt": TimeUTC.now()}},
}

View file

@@ -3,11 +3,11 @@ import logging
from pydantic_core._pydantic_core import ValidationError
import schemas
from chalicelib.core import alerts
from chalicelib.core import alerts_listener, alerts_processor
from chalicelib.core import sessions_exp as sessions
from chalicelib.utils import pg_client, ch_client, exp_ch_helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.core.alerts import alerts, alerts_listener
from chalicelib.core.alerts.modules import alert_helpers
from chalicelib.core.sessions import sessions_ch as sessions
logger = logging.getLogger(__name__)
@@ -156,16 +156,17 @@ def Build(a):
def process():
logger.info("> processing alerts on CH")
notifications = []
all_alerts = alerts_listener.get_all_alerts()
with pg_client.PostgresClient() as cur, ch_client.ClickHouseClient() as ch_cur:
for alert in all_alerts:
if alert["query"]["left"] != "CUSTOM":
continue
if alerts_processor.can_check(alert):
if alert_helpers.can_check(alert):
query, params = Build(alert)
try:
query = ch_cur.format(query, params)
query = ch_cur.format(query=query, parameters=params)
except Exception as e:
logger.error(
f"!!!Error while building alert query for alertId:{alert['alertId']} name: {alert['name']}")
@@ -174,13 +175,13 @@ def process():
logger.debug(alert)
logger.debug(query)
try:
result = ch_cur.execute(query)
result = ch_cur.execute(query=query)
if len(result) > 0:
result = result[0]
if result["valid"]:
logger.info("Valid alert, notifying users")
notifications.append(alerts_processor.generate_notification(alert, result))
notifications.append(alert_helpers.generate_notification(alert, result))
except Exception as e:
logger.error(f"!!!Error while running alert query for alertId:{alert['alertId']}")
logger.error(str(e))

View file

@@ -0,0 +1,3 @@
TENANT_ID = "-1"
from . import helpers as alert_helpers

View file

@@ -0,0 +1,74 @@
import decimal
import logging
import schemas
from chalicelib.utils.TimeUTC import TimeUTC
logger = logging.getLogger(__name__)
# This is the frequency of execution for each threshold
TimeInterval = {
15: 3,
30: 5,
60: 10,
120: 20,
240: 30,
1440: 60,
}
def __format_value(x):
if x % 1 == 0:
x = int(x)
else:
x = round(x, 2)
return f"{x:,}"
def can_check(a) -> bool:
now = TimeUTC.now()
repetitionBase = a["options"]["currentPeriod"] \
if a["detectionMethod"] == schemas.AlertDetectionMethod.CHANGE \
and a["options"]["currentPeriod"] > a["options"]["previousPeriod"] \
else a["options"]["previousPeriod"]
if TimeInterval.get(repetitionBase) is None:
logger.error(f"repetitionBase: {repetitionBase} NOT FOUND")
return False
return (a["options"]["renotifyInterval"] <= 0 or
a["options"].get("lastNotification") is None or
a["options"]["lastNotification"] <= 0 or
((now - a["options"]["lastNotification"]) > a["options"]["renotifyInterval"] * 60 * 1000)) \
and ((now - a["createdAt"]) % (TimeInterval[repetitionBase] * 60 * 1000)) < 60 * 1000
def generate_notification(alert, result):
left = __format_value(result['value'])
right = __format_value(alert['query']['right'])
return {
"alertId": alert["alertId"],
"tenantId": alert["tenantId"],
"title": alert["name"],
"description": f"{alert['seriesName']} = {left} ({alert['query']['operator']} {right}).",
"buttonText": "Check metrics for more details",
"buttonUrl": f"/{alert['projectId']}/metrics",
"imageUrl": None,
"projectId": alert["projectId"],
"projectName": alert["projectName"],
"options": {"source": "ALERT", "sourceId": alert["alertId"],
"sourceMeta": alert["detectionMethod"],
"message": alert["options"]["message"], "projectId": alert["projectId"],
"data": {"title": alert["name"],
"limitValue": alert["query"]["right"],
"actualValue": float(result["value"]) \
if isinstance(result["value"], decimal.Decimal) \
else result["value"],
"operator": alert["query"]["operator"],
"trigger": alert["query"]["left"],
"alertId": alert["alertId"],
"detectionMethod": alert["detectionMethod"],
"currentPeriod": alert["options"]["currentPeriod"],
"previousPeriod": alert["options"]["previousPeriod"],
"createdAt": TimeUTC.now()}},
}

View file
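TimeInterval maps an alert's detection window (minutes) to how often it should actually be evaluated, and can_check only fires inside the first minute of each such slot, on top of the renotify cool-down. A self-contained rendition of that timing arithmetic (the 17-minute age is made up for illustration):

import time

TimeInterval = {15: 3, 30: 5, 60: 10, 120: 20, 240: 30, 1440: 60}

now = int(time.time() * 1000)        # TimeUTC.now() is epoch milliseconds
created_at = now - 17 * 60 * 1000    # alert created exactly 17 minutes ago
period = 30                          # previousPeriod in minutes
freq_ms = TimeInterval[period] * 60 * 1000  # 30-minute windows -> check every 5 min

# Due only within the first minute of each 5-minute slot:
due = (now - created_at) % freq_ms < 60 * 1000
print(due)  # 17 % 5 leaves 2 minutes into the slot -> False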

@@ -1,32 +0,0 @@
from chalicelib.utils import pg_client, helper
def get_all_alerts():
with pg_client.PostgresClient(long_query=True) as cur:
query = """SELECT -1 AS tenant_id,
alert_id,
projects.project_id,
projects.name AS project_name,
detection_method,
query,
options,
(EXTRACT(EPOCH FROM alerts.created_at) * 1000)::BIGINT AS created_at,
alerts.name,
alerts.series_id,
filter,
change,
COALESCE(metrics.name || '.' || (COALESCE(metric_series.name, 'series ' || index)) || '.count',
query ->> 'left') AS series_name
FROM public.alerts
INNER JOIN projects USING (project_id)
LEFT JOIN metric_series USING (series_id)
LEFT JOIN metrics USING (metric_id)
WHERE alerts.deleted_at ISNULL
AND alerts.active
AND projects.active
AND projects.deleted_at ISNULL
AND (alerts.series_id ISNULL OR metric_series.deleted_at ISNULL)
ORDER BY alerts.created_at;"""
cur.execute(query=query)
all_alerts = helper.list_to_camel_case(cur.fetchall())
return all_alerts

View file

@@ -1,3 +1,4 @@
import logging
from os import access, R_OK
from os.path import exists as path_exists, getsize
@@ -10,6 +11,8 @@ import schemas
from chalicelib.core import projects
from chalicelib.utils.TimeUTC import TimeUTC
logger = logging.getLogger(__name__)
ASSIST_KEY = config("ASSIST_KEY")
ASSIST_URL = config("ASSIST_URL") % ASSIST_KEY
@@ -52,21 +55,21 @@ def __get_live_sessions_ws(project_id, data):
results = requests.post(ASSIST_URL + config("assist") + f"/{project_key}",
json=data, timeout=config("assistTimeout", cast=int, default=5))
if results.status_code != 200:
print(f"!! issue with the peer-server code:{results.status_code} for __get_live_sessions_ws")
print(results.text)
logger.error(f"!! issue with the peer-server code:{results.status_code} for __get_live_sessions_ws")
logger.error(results.text)
return {"total": 0, "sessions": []}
live_peers = results.json().get("data", [])
except requests.exceptions.Timeout:
print("!! Timeout getting Assist response")
logger.error("!! Timeout getting Assist response")
live_peers = {"total": 0, "sessions": []}
except Exception as e:
print("!! Issue getting Live-Assist response")
print(str(e))
print("expected JSON, received:")
logger.error("!! Issue getting Live-Assist response")
logger.exception(e)
logger.error("expected JSON, received:")
try:
print(results.text)
logger.error(results.text)
except:
print("couldn't get response")
logger.error("couldn't get response")
live_peers = {"total": 0, "sessions": []}
_live_peers = live_peers
if "sessions" in live_peers:
@@ -102,8 +105,8 @@ def get_live_session_by_id(project_id, session_id):
results = requests.get(ASSIST_URL + config("assist") + f"/{project_key}/{session_id}",
timeout=config("assistTimeout", cast=int, default=5))
if results.status_code != 200:
print(f"!! issue with the peer-server code:{results.status_code} for get_live_session_by_id")
print(results.text)
logger.error(f"!! issue with the peer-server code:{results.status_code} for get_live_session_by_id")
logger.error(results.text)
return None
results = results.json().get("data")
if results is None:
@@ -111,16 +114,16 @@
results["live"] = True
results["agentToken"] = __get_agent_token(project_id=project_id, project_key=project_key, session_id=session_id)
except requests.exceptions.Timeout:
print("!! Timeout getting Assist response")
logger.error("!! Timeout getting Assist response")
return None
except Exception as e:
print("!! Issue getting Assist response")
print(str(e))
print("expected JSON, received:")
logger.error("!! Issue getting Assist response")
logger.exception(e)
logger.error("expected JSON, received:")
try:
print(results.text)
logger.error(results.text)
except:
print("couldn't get response")
logger.error("couldn't get response")
return None
return results
@@ -132,21 +135,21 @@ def is_live(project_id, session_id, project_key=None):
results = requests.get(ASSIST_URL + config("assistList") + f"/{project_key}/{session_id}",
timeout=config("assistTimeout", cast=int, default=5))
if results.status_code != 200:
print(f"!! issue with the peer-server code:{results.status_code} for is_live")
print(results.text)
logger.error(f"!! issue with the peer-server code:{results.status_code} for is_live")
logger.error(results.text)
return False
results = results.json().get("data")
except requests.exceptions.Timeout:
print("!! Timeout getting Assist response")
logger.error("!! Timeout getting Assist response")
return False
except Exception as e:
print("!! Issue getting Assist response")
print(str(e))
print("expected JSON, received:")
logger.error("!! Issue getting Assist response")
logger.exception(e)
logger.error("expected JSON, received:")
try:
print(results.text)
logger.error(results.text)
except:
print("couldn't get response")
logger.error("couldn't get response")
return False
return str(session_id) == results
@@ -161,21 +164,21 @@ def autocomplete(project_id, q: str, key: str = None):
ASSIST_URL + config("assistList") + f"/{project_key}/autocomplete",
params=params, timeout=config("assistTimeout", cast=int, default=5))
if results.status_code != 200:
print(f"!! issue with the peer-server code:{results.status_code} for autocomplete")
print(results.text)
logger.error(f"!! issue with the peer-server code:{results.status_code} for autocomplete")
logger.error(results.text)
return {"errors": [f"Something went wrong wile calling assist:{results.text}"]}
results = results.json().get("data", [])
except requests.exceptions.Timeout:
print("!! Timeout getting Assist response")
logger.error("!! Timeout getting Assist response")
return {"errors": ["Assist request timeout"]}
except Exception as e:
print("!! Issue getting Assist response")
print(str(e))
print("expected JSON, received:")
logger.error("!! Issue getting Assist response")
logger.exception(e)
logger.error("expected JSON, received:")
try:
print(results.text)
logger.error(results.text)
except:
print("couldn't get response")
logger.error("couldn't get response")
return {"errors": ["Something went wrong wile calling assist"]}
for r in results:
r["type"] = __change_keys(r["type"])
@@ -239,24 +242,24 @@ def session_exists(project_id, session_id):
results = requests.get(ASSIST_URL + config("assist") + f"/{project_key}/{session_id}",
timeout=config("assistTimeout", cast=int, default=5))
if results.status_code != 200:
print(f"!! issue with the peer-server code:{results.status_code} for session_exists")
print(results.text)
logger.error(f"!! issue with the peer-server code:{results.status_code} for session_exists")
logger.error(results.text)
return None
results = results.json().get("data")
if results is None:
return False
return True
except requests.exceptions.Timeout:
print("!! Timeout getting Assist response")
logger.error("!! Timeout getting Assist response")
return False
except Exception as e:
print("!! Issue getting Assist response")
print(str(e))
print("expected JSON, received:")
logger.error("!! Issue getting Assist response")
logger.exception(e)
logger.error("expected JSON, received:")
try:
print(results.text)
logger.error(results.text)
except:
print("couldn't get response")
logger.error("couldn't get response")
return False

View file
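The point of this file-wide print-to-logger migration is that logger.error/logger.exception records carry a level, a timestamp, and (for exceptions) the traceback, and they follow whatever handlers the service configures, none of which print offers. A minimal sketch of the pattern:

import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

try:
    raise TimeoutError("peer-server did not answer")  # made-up failure
except Exception as e:
    # Unlike print(str(e)), this records the full traceback
    # at ERROR level through the configured handlers.
    logger.exception(e)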

@@ -37,8 +37,7 @@ def jwt_authorizer(scheme: str, token: str, leeway=0) -> dict | None:
logger.debug("! JWT Expired signature")
return None
except BaseException as e:
logger.warning("! JWT Base Exception")
logger.debug(e)
logger.warning("! JWT Base Exception", exc_info=e)
return None
return payload
@@ -56,8 +55,7 @@ def jwt_refresh_authorizer(scheme: str, token: str):
logger.debug("! JWT-refresh Expired signature")
return None
except BaseException as e:
logger.warning("! JWT-refresh Base Exception")
logger.debug(e)
logger.error("! JWT-refresh Base Exception", exc_info=e)
return None
return payload

View file
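exc_info=e folds the old two-call pattern (a WARNING for the message plus a DEBUG for the exception) into a single record that keeps its traceback even when DEBUG output is filtered. A short sketch:

import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("auth")

try:
    raise ValueError("bad token segment")  # made-up failure
except BaseException as e:
    # One record carries both the message and the stack trace.
    logger.warning("! JWT Base Exception", exc_info=e)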

@@ -61,11 +61,11 @@ def __get_autocomplete_table(value, project_id):
try:
cur.execute(query)
except Exception as err:
print("--------- AUTOCOMPLETE SEARCH QUERY EXCEPTION -----------")
print(query.decode('UTF-8'))
print("--------- VALUE -----------")
print(value)
print("--------------------")
logger.exception("--------- AUTOCOMPLETE SEARCH QUERY EXCEPTION -----------")
logger.exception(query.decode('UTF-8'))
logger.exception("--------- VALUE -----------")
logger.exception(value)
logger.exception("--------------------")
raise err
results = cur.fetchall()
for r in results:
@@ -85,7 +85,8 @@ def __generic_query(typename, value_length=None):
ORDER BY value"""
if value_length is None or value_length > 2:
return f"""(SELECT DISTINCT value, type
return f"""SELECT DISTINCT ON(value,type) value, type
((SELECT DISTINCT value, type
FROM {TABLE}
WHERE
project_id = %(project_id)s
@@ -101,7 +102,7 @@
AND type='{typename.upper()}'
AND value ILIKE %(value)s
ORDER BY value
LIMIT 5);"""
LIMIT 5)) AS raw;"""
return f"""SELECT DISTINCT value, type
FROM {TABLE}
WHERE
@@ -124,7 +125,7 @@ def __generic_autocomplete(event: Event):
return f
def __generic_autocomplete_metas(typename):
def generic_autocomplete_metas(typename):
def f(project_id, text):
with pg_client.PostgresClient() as cur:
params = {"project_id": project_id, "value": helper.string_to_sql_like(text),
@@ -326,7 +327,7 @@ def __search_metadata(project_id, value, key=None, source=None):
AND {colname} ILIKE %(svalue)s LIMIT 5)""")
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""\
SELECT key, value, 'METADATA' AS TYPE
SELECT DISTINCT ON(key, value) key, value, 'METADATA' AS TYPE
FROM({" UNION ALL ".join(sub_from)}) AS all_metas
LIMIT 5;""", {"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value)}))

View file
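The reworked query dedupes the UNION of the prefix-match and contains-match subqueries with PostgreSQL's DISTINCT ON. A stripped-down sketch of the shape being built (the table name and the CLICK type are illustrative; the production string interpolates the {TABLE} constant and the event type):

TABLE = "public.autocomplete"  # assumed stand-in for the {TABLE} constant

query = f"""SELECT DISTINCT ON (value, type) value, type
FROM ((SELECT DISTINCT value, type FROM {TABLE}
       WHERE type = 'CLICK' AND value ILIKE %(svalue)s
       ORDER BY value LIMIT 5)
      UNION ALL
      (SELECT DISTINCT value, type FROM {TABLE}
       WHERE type = 'CLICK' AND value ILIKE %(value)s
       ORDER BY value LIMIT 5)) AS raw;"""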

@@ -1,7 +1,8 @@
from chalicelib.utils import pg_client
from chalicelib.core import projects, log_tool_datadog, log_tool_stackdriver, log_tool_sentry
from chalicelib.core import projects
from chalicelib.core import users
from chalicelib.core.log_tools import datadog, stackdriver, sentry
from chalicelib.core.modules import TENANT_CONDITION
from chalicelib.utils import pg_client
def get_state(tenant_id):
@@ -12,47 +13,61 @@ def get_state(tenant_id):
if len(pids) > 0:
cur.execute(
cur.mogrify("""SELECT EXISTS(( SELECT 1
cur.mogrify(
"""SELECT EXISTS(( SELECT 1
FROM public.sessions AS s
WHERE s.project_id IN %(ids)s)) AS exists;""",
{"ids": tuple(pids)})
{"ids": tuple(pids)},
)
)
recorded = cur.fetchone()["exists"]
meta = False
if recorded:
cur.execute("""SELECT EXISTS((SELECT 1
query = cur.mogrify(
f"""SELECT EXISTS((SELECT 1
FROM public.projects AS p
LEFT JOIN LATERAL ( SELECT 1
FROM public.sessions
WHERE sessions.project_id = p.project_id
AND sessions.user_id IS NOT NULL
LIMIT 1) AS sessions(user_id) ON (TRUE)
WHERE p.deleted_at ISNULL
WHERE {TENANT_CONDITION} AND p.deleted_at ISNULL
AND ( sessions.user_id IS NOT NULL OR p.metadata_1 IS NOT NULL
OR p.metadata_2 IS NOT NULL OR p.metadata_3 IS NOT NULL
OR p.metadata_4 IS NOT NULL OR p.metadata_5 IS NOT NULL
OR p.metadata_6 IS NOT NULL OR p.metadata_7 IS NOT NULL
OR p.metadata_8 IS NOT NULL OR p.metadata_9 IS NOT NULL
OR p.metadata_10 IS NOT NULL )
)) AS exists;""")
)) AS exists;""",
{"tenant_id": tenant_id},
)
cur.execute(query)
meta = cur.fetchone()["exists"]
return [
{"task": "Install OpenReplay",
"done": recorded,
"URL": "https://docs.openreplay.com/getting-started/quick-start"},
{"task": "Identify Users",
"done": meta,
"URL": "https://docs.openreplay.com/data-privacy-security/metadata"},
{"task": "Invite Team Members",
"done": len(users.get_members(tenant_id=tenant_id)) > 1,
"URL": "https://app.openreplay.com/client/manage-users"},
{"task": "Integrations",
"done": len(log_tool_datadog.get_all(tenant_id=tenant_id)) > 0 \
or len(log_tool_sentry.get_all(tenant_id=tenant_id)) > 0 \
or len(log_tool_stackdriver.get_all(tenant_id=tenant_id)) > 0,
"URL": "https://docs.openreplay.com/integrations"}
{
"task": "Install OpenReplay",
"done": recorded,
"URL": "https://docs.openreplay.com/getting-started/quick-start",
},
{
"task": "Identify Users",
"done": meta,
"URL": "https://docs.openreplay.com/data-privacy-security/metadata",
},
{
"task": "Invite Team Members",
"done": len(users.get_members(tenant_id=tenant_id)) > 1,
"URL": "https://app.openreplay.com/client/manage-users",
},
{
"task": "Integrations",
"done": len(datadog.get_all(tenant_id=tenant_id)) > 0
or len(sentry.get_all(tenant_id=tenant_id)) > 0
or len(stackdriver.get_all(tenant_id=tenant_id)) > 0,
"URL": "https://docs.openreplay.com/integrations",
},
]
@@ -63,52 +78,66 @@ def get_state_installing(tenant_id):
if len(pids) > 0:
cur.execute(
cur.mogrify("""SELECT EXISTS(( SELECT 1
cur.mogrify(
"""SELECT EXISTS(( SELECT 1
FROM public.sessions AS s
WHERE s.project_id IN %(ids)s)) AS exists;""",
{"ids": tuple(pids)})
{"ids": tuple(pids)},
)
)
recorded = cur.fetchone()["exists"]
return {"task": "Install OpenReplay",
"done": recorded,
"URL": "https://docs.openreplay.com/getting-started/quick-start"}
return {
"task": "Install OpenReplay",
"done": recorded,
"URL": "https://docs.openreplay.com/getting-started/quick-start",
}
def get_state_identify_users(tenant_id):
with pg_client.PostgresClient() as cur:
cur.execute("""SELECT EXISTS((SELECT 1
query = cur.mogrify(
f"""SELECT EXISTS((SELECT 1
FROM public.projects AS p
LEFT JOIN LATERAL ( SELECT 1
FROM public.sessions
WHERE sessions.project_id = p.project_id
AND sessions.user_id IS NOT NULL
LIMIT 1) AS sessions(user_id) ON (TRUE)
WHERE p.deleted_at ISNULL
WHERE {TENANT_CONDITION} AND p.deleted_at ISNULL
AND ( sessions.user_id IS NOT NULL OR p.metadata_1 IS NOT NULL
OR p.metadata_2 IS NOT NULL OR p.metadata_3 IS NOT NULL
OR p.metadata_4 IS NOT NULL OR p.metadata_5 IS NOT NULL
OR p.metadata_6 IS NOT NULL OR p.metadata_7 IS NOT NULL
OR p.metadata_8 IS NOT NULL OR p.metadata_9 IS NOT NULL
OR p.metadata_10 IS NOT NULL )
)) AS exists;""")
)) AS exists;""",
{"tenant_id": tenant_id},
)
cur.execute(query)
meta = cur.fetchone()["exists"]
return {"task": "Identify Users",
"done": meta,
"URL": "https://docs.openreplay.com/data-privacy-security/metadata"}
return {
"task": "Identify Users",
"done": meta,
"URL": "https://docs.openreplay.com/data-privacy-security/metadata",
}
def get_state_manage_users(tenant_id):
return {"task": "Invite Team Members",
"done": len(users.get_members(tenant_id=tenant_id)) > 1,
"URL": "https://app.openreplay.com/client/manage-users"}
return {
"task": "Invite Team Members",
"done": len(users.get_members(tenant_id=tenant_id)) > 1,
"URL": "https://app.openreplay.com/client/manage-users",
}
def get_state_integrations(tenant_id):
return {"task": "Integrations",
"done": len(log_tool_datadog.get_all(tenant_id=tenant_id)) > 0 \
or len(log_tool_sentry.get_all(tenant_id=tenant_id)) > 0 \
or len(log_tool_stackdriver.get_all(tenant_id=tenant_id)) > 0,
"URL": "https://docs.openreplay.com/integrations"}
return {
"task": "Integrations",
"done": len(datadog.get_all(tenant_id=tenant_id)) > 0
or len(sentry.get_all(tenant_id=tenant_id)) > 0
or len(stackdriver.get_all(tenant_id=tenant_id)) > 0,
"URL": "https://docs.openreplay.com/integrations",
}

View file
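Two changes land together here: the WHERE clause gains a TENANT_CONDITION constant (the same constant-per-edition trick as TENANT_ID above), and the statement is now built with cur.mogrify so the %(tenant_id)s placeholder is bound safely. A compressed sketch of the pattern, with a made-up FOSS value for the constant (unused dict keys are ignored by pyformat binding):

TENANT_CONDITION = "1 = 1"  # assumed FOSS value; EE would compare a tenant column

def recorded_exists_query(cur, tenant_id):
    # mogrify binds the dict parameters client-side and returns final SQL.
    return cur.mogrify(
        f"""SELECT EXISTS((SELECT 1
                           FROM public.projects AS p
                           WHERE {TENANT_CONDITION}
                             AND p.deleted_at ISNULL)) AS exists;""",
        {"tenant_id": tenant_id},
    )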

@@ -0,0 +1 @@
from . import collaboration_base as _

View file

@@ -6,7 +6,7 @@ from fastapi import HTTPException, status
import schemas
from chalicelib.core import webhook
from chalicelib.core.collaboration_base import BaseCollaboration
from chalicelib.core.collaborations.collaboration_base import BaseCollaboration
logger = logging.getLogger(__name__)

View file

@@ -6,7 +6,7 @@ from fastapi import HTTPException, status
import schemas
from chalicelib.core import webhook
from chalicelib.core.collaboration_base import BaseCollaboration
from chalicelib.core.collaborations.collaboration_base import BaseCollaboration
class Slack(BaseCollaboration):

View file

@@ -1,653 +0,0 @@
import json
import logging
from fastapi import HTTPException, status
import schemas
from chalicelib.core import sessions, funnels, errors, issues, heatmaps, product_analytics, \
custom_metrics_predefined
from chalicelib.utils import helper, pg_client
from chalicelib.utils.TimeUTC import TimeUTC
logger = logging.getLogger(__name__)
# TODO: refactor this to split
# timeseries /
# table of errors / table of issues / table of browsers / table of devices / table of countries / table of URLs
# remove "table of" calls from this function
def __try_live(project_id, data: schemas.CardSchema):
results = []
for i, s in enumerate(data.series):
results.append(sessions.search2_series(data=s.filter, project_id=project_id, density=data.density,
view_type=data.view_type, metric_type=data.metric_type,
metric_of=data.metric_of, metric_value=data.metric_value))
return results
def __get_table_of_series(project_id, data: schemas.CardSchema):
results = []
for i, s in enumerate(data.series):
results.append(sessions.search2_table(data=s.filter, project_id=project_id, density=data.density,
metric_of=data.metric_of, metric_value=data.metric_value,
metric_format=data.metric_format))
return results
def __get_funnel_chart(project: schemas.ProjectContext, data: schemas.CardFunnel, user_id: int = None):
if len(data.series) == 0:
return {
"stages": [],
"totalDropDueToIssues": 0
}
# return funnels.get_top_insights_on_the_fly_widget(project_id=project_id,
# data=data.series[0].filter,
# metric_format=data.metric_format)
return funnels.get_simple_funnel(project=project,
data=data.series[0].filter,
metric_format=data.metric_format)
def __get_errors_list(project: schemas.ProjectContext, user_id, data: schemas.CardSchema):
if len(data.series) == 0:
return {
"total": 0,
"errors": []
}
return errors.search(data.series[0].filter, project_id=project.project_id, user_id=user_id)
def __get_sessions_list(project: schemas.ProjectContext, user_id, data: schemas.CardSchema):
if len(data.series) == 0:
logger.debug("empty series")
return {
"total": 0,
"sessions": []
}
return sessions.search_sessions(data=data.series[0].filter, project_id=project.project_id, user_id=user_id)
def __get_heat_map_chart(project: schemas.ProjectContext, user_id, data: schemas.CardHeatMap,
include_mobs: bool = True):
if len(data.series) == 0:
return None
data.series[0].filter.filters += data.series[0].filter.events
data.series[0].filter.events = []
return heatmaps.search_short_session(project_id=project.project_id, user_id=user_id,
data=schemas.HeatMapSessionsSearch(
**data.series[0].filter.model_dump()),
include_mobs=include_mobs)
def __get_path_analysis_chart(project: schemas.ProjectContext, user_id: int, data: schemas.CardPathAnalysis):
if len(data.series) == 0:
data.series.append(
schemas.CardPathAnalysisSeriesSchema(startTimestamp=data.startTimestamp, endTimestamp=data.endTimestamp))
elif not isinstance(data.series[0].filter, schemas.PathAnalysisSchema):
data.series[0].filter = schemas.PathAnalysisSchema()
return product_analytics.path_analysis(project_id=project.project_id, data=data)
def __get_timeseries_chart(project: schemas.ProjectContext, data: schemas.CardTimeSeries, user_id: int = None):
series_charts = __try_live(project_id=project.project_id, data=data)
results = [{}] * len(series_charts[0])
for i in range(len(results)):
for j, series_chart in enumerate(series_charts):
results[i] = {**results[i], "timestamp": series_chart[i]["timestamp"],
data.series[j].name if data.series[j].name else j + 1: series_chart[i]["count"]}
return results
def not_supported(**args):
raise Exception("not supported")
def __get_table_of_user_ids(project: schemas.ProjectContext, data: schemas.CardTable, user_id: int = None):
return __get_table_of_series(project_id=project.project_id, data=data)
def __get_table_of_sessions(project: schemas.ProjectContext, data: schemas.CardTable, user_id):
return __get_sessions_list(project=project, user_id=user_id, data=data)
def __get_table_of_errors(project: schemas.ProjectContext, data: schemas.CardTable, user_id: int):
return __get_errors_list(project=project, user_id=user_id, data=data)
def __get_table_of_issues(project: schemas.ProjectContext, data: schemas.CardTable, user_id: int = None):
return __get_table_of_series(project_id=project.project_id, data=data)
def __get_table_of_browsers(project: schemas.ProjectContext, data: schemas.CardTable, user_id: int = None):
return __get_table_of_series(project_id=project.project_id, data=data)
def __get_table_of_devises(project: schemas.ProjectContext, data: schemas.CardTable, user_id: int = None):
return __get_table_of_series(project_id=project.project_id, data=data)
def __get_table_of_countries(project: schemas.ProjectContext, data: schemas.CardTable, user_id: int = None):
return __get_table_of_series(project_id=project.project_id, data=data)
def __get_table_of_urls(project: schemas.ProjectContext, data: schemas.CardTable, user_id: int = None):
return __get_table_of_series(project_id=project.project_id, data=data)
def __get_table_of_referrers(project: schemas.ProjectContext, data: schemas.CardTable, user_id: int = None):
return __get_table_of_series(project_id=project.project_id, data=data)
def __get_table_of_requests(project: schemas.ProjectContext, data: schemas.CardTable, user_id: int = None):
return __get_table_of_series(project_id=project.project_id, data=data)
def __get_table_chart(project: schemas.ProjectContext, data: schemas.CardTable, user_id: int):
supported = {
schemas.MetricOfTable.SESSIONS: __get_table_of_sessions,
schemas.MetricOfTable.ERRORS: __get_table_of_errors,
schemas.MetricOfTable.USER_ID: __get_table_of_user_ids,
schemas.MetricOfTable.ISSUES: __get_table_of_issues,
schemas.MetricOfTable.USER_BROWSER: __get_table_of_browsers,
schemas.MetricOfTable.USER_DEVICE: __get_table_of_devises,
schemas.MetricOfTable.USER_COUNTRY: __get_table_of_countries,
schemas.MetricOfTable.VISITED_URL: __get_table_of_urls,
schemas.MetricOfTable.REFERRER: __get_table_of_referrers,
schemas.MetricOfTable.FETCH: __get_table_of_requests
}
return supported.get(data.metric_of, not_supported)(project=project, data=data, user_id=user_id)
def get_chart(project: schemas.ProjectContext, data: schemas.CardSchema, user_id: int):
if data.is_predefined:
return custom_metrics_predefined.get_metric(key=data.metric_of,
project_id=project.project_id,
data=data.model_dump())
supported = {
schemas.MetricType.TIMESERIES: __get_timeseries_chart,
schemas.MetricType.TABLE: __get_table_chart,
schemas.MetricType.HEAT_MAP: __get_heat_map_chart,
schemas.MetricType.FUNNEL: __get_funnel_chart,
schemas.MetricType.INSIGHTS: not_supported,
schemas.MetricType.PATH_ANALYSIS: __get_path_analysis_chart
}
return supported.get(data.metric_type, not_supported)(project=project, data=data, user_id=user_id)
def get_sessions_by_card_id(project_id, user_id, metric_id, data: schemas.CardSessionsSchema):
# No need for this because UI is sending the full payload
# card: dict = get_card(metric_id=metric_id, project_id=project_id, user_id=user_id, flatten=False)
# if card is None:
# return None
# metric: schemas.CardSchema = schemas.CardSchema(**card)
# metric: schemas.CardSchema = __merge_metric_with_data(metric=metric, data=data)
if not card_exists(metric_id=metric_id, project_id=project_id, user_id=user_id):
return None
results = []
for s in data.series:
results.append({"seriesId": s.series_id, "seriesName": s.name,
**sessions.search_sessions(data=s.filter, project_id=project_id, user_id=user_id)})
return results
def get_sessions(project_id, user_id, data: schemas.CardSessionsSchema):
results = []
if len(data.series) == 0:
return results
for s in data.series:
if len(data.filters) > 0:
s.filter.filters += data.filters
s.filter = schemas.SessionsSearchPayloadSchema(**s.filter.model_dump(by_alias=True))
results.append({"seriesId": None, "seriesName": s.name,
**sessions.search_sessions(data=s.filter, project_id=project_id, user_id=user_id)})
return results
def get_issues(project: schemas.ProjectContext, user_id: int, data: schemas.CardSchema):
if data.is_predefined:
return not_supported()
if data.metric_of == schemas.MetricOfTable.ISSUES:
return __get_table_of_issues(project=project, user_id=user_id, data=data)
supported = {
schemas.MetricType.TIMESERIES: not_supported,
schemas.MetricType.TABLE: not_supported,
schemas.MetricType.HEAT_MAP: not_supported,
schemas.MetricType.INSIGHTS: not_supported,
schemas.MetricType.PATH_ANALYSIS: not_supported,
}
return supported.get(data.metric_type, not_supported)()
def __get_path_analysis_card_info(data: schemas.CardPathAnalysis):
r = {"start_point": [s.model_dump() for s in data.start_point],
"start_type": data.start_type,
"excludes": [e.model_dump() for e in data.excludes],
"hideExcess": data.hide_excess}
return r
def create_card(project: schemas.ProjectContext, user_id, data: schemas.CardSchema, dashboard=False):
with pg_client.PostgresClient() as cur:
session_data = None
if data.metric_type == schemas.MetricType.HEAT_MAP:
if data.session_id is not None:
session_data = {"sessionId": data.session_id}
else:
session_data = __get_heat_map_chart(project=project, user_id=user_id,
data=data, include_mobs=False)
if session_data is not None:
session_data = {"sessionId": session_data["sessionId"]}
_data = {"session_data": json.dumps(session_data) if session_data is not None else None}
for i, s in enumerate(data.series):
for k in s.model_dump().keys():
_data[f"{k}_{i}"] = s.__getattribute__(k)
_data[f"index_{i}"] = i
_data[f"filter_{i}"] = s.filter.json()
series_len = len(data.series)
params = {"user_id": user_id, "project_id": project.project_id, **data.model_dump(), **_data,
"default_config": json.dumps(data.default_config.model_dump()), "card_info": None}
if data.metric_type == schemas.MetricType.PATH_ANALYSIS:
params["card_info"] = json.dumps(__get_path_analysis_card_info(data=data))
query = """INSERT INTO metrics (project_id, user_id, name, is_public,
view_type, metric_type, metric_of, metric_value,
metric_format, default_config, thumbnail, data,
card_info)
VALUES (%(project_id)s, %(user_id)s, %(name)s, %(is_public)s,
%(view_type)s, %(metric_type)s, %(metric_of)s, %(metric_value)s,
%(metric_format)s, %(default_config)s, %(thumbnail)s, %(session_data)s,
%(card_info)s)
RETURNING metric_id"""
if len(data.series) > 0:
query = f"""WITH m AS ({query})
INSERT INTO metric_series(metric_id, index, name, filter)
VALUES {",".join([f"((SELECT metric_id FROM m), %(index_{i})s, %(name_{i})s, %(filter_{i})s::jsonb)"
for i in range(series_len)])}
RETURNING metric_id;"""
query = cur.mogrify(query, params)
cur.execute(query)
r = cur.fetchone()
if dashboard:
return r["metric_id"]
return {"data": get_card(metric_id=r["metric_id"], project_id=project.project_id, user_id=user_id)}
def update_card(metric_id, user_id, project_id, data: schemas.CardSchema):
metric: dict = get_card(metric_id=metric_id, project_id=project_id,
user_id=user_id, flatten=False, include_data=True)
if metric is None:
return None
series_ids = [r["seriesId"] for r in metric["series"]]
n_series = []
d_series_ids = []
u_series = []
u_series_ids = []
params = {"metric_id": metric_id, "is_public": data.is_public, "name": data.name,
"user_id": user_id, "project_id": project_id, "view_type": data.view_type,
"metric_type": data.metric_type, "metric_of": data.metric_of,
"metric_value": data.metric_value, "metric_format": data.metric_format,
"config": json.dumps(data.default_config.model_dump()), "thumbnail": data.thumbnail}
for i, s in enumerate(data.series):
prefix = "u_"
if s.index is None:
s.index = i
if s.series_id is None or s.series_id not in series_ids:
n_series.append({"i": i, "s": s})
prefix = "n_"
else:
u_series.append({"i": i, "s": s})
u_series_ids.append(s.series_id)
ns = s.model_dump()
for k in ns.keys():
if k == "filter":
ns[k] = json.dumps(ns[k])
params[f"{prefix}{k}_{i}"] = ns[k]
for i in series_ids:
if i not in u_series_ids:
d_series_ids.append(i)
params["d_series_ids"] = tuple(d_series_ids)
params["card_info"] = None
params["session_data"] = json.dumps(metric["data"])
if data.metric_type == schemas.MetricType.PATH_ANALYSIS:
params["card_info"] = json.dumps(__get_path_analysis_card_info(data=data))
elif data.metric_type == schemas.MetricType.HEAT_MAP:
if data.session_id is not None:
params["session_data"] = json.dumps({"sessionId": data.session_id})
elif metric.get("data") and metric["data"].get("sessionId"):
params["session_data"] = json.dumps({"sessionId": metric["data"]["sessionId"]})
with pg_client.PostgresClient() as cur:
sub_queries = []
if len(n_series) > 0:
sub_queries.append(f"""\
n AS (INSERT INTO metric_series (metric_id, index, name, filter)
VALUES {",".join([f"(%(metric_id)s, %(n_index_{s['i']})s, %(n_name_{s['i']})s, %(n_filter_{s['i']})s::jsonb)"
for s in n_series])}
RETURNING 1)""")
if len(u_series) > 0:
sub_queries.append(f"""\
u AS (UPDATE metric_series
SET name=series.name,
filter=series.filter,
index=series.index
FROM (VALUES {",".join([f"(%(u_series_id_{s['i']})s,%(u_index_{s['i']})s,%(u_name_{s['i']})s,%(u_filter_{s['i']})s::jsonb)"
for s in u_series])}) AS series(series_id, index, name, filter)
WHERE metric_series.metric_id =%(metric_id)s AND metric_series.series_id=series.series_id
RETURNING 1)""")
if len(d_series_ids) > 0:
sub_queries.append("""\
d AS (DELETE FROM metric_series WHERE metric_id =%(metric_id)s AND series_id IN %(d_series_ids)s
RETURNING 1)""")
query = cur.mogrify(f"""\
{"WITH " if len(sub_queries) > 0 else ""}{",".join(sub_queries)}
UPDATE metrics
SET name = %(name)s, is_public= %(is_public)s,
view_type= %(view_type)s, metric_type= %(metric_type)s,
metric_of= %(metric_of)s, metric_value= %(metric_value)s,
metric_format= %(metric_format)s,
edited_at = timezone('utc'::text, now()),
default_config = %(config)s,
thumbnail = %(thumbnail)s,
card_info = %(card_info)s,
data = %(session_data)s
WHERE metric_id = %(metric_id)s
AND project_id = %(project_id)s
AND (user_id = %(user_id)s OR is_public)
RETURNING metric_id;""", params)
cur.execute(query)
return get_card(metric_id=metric_id, project_id=project_id, user_id=user_id)
def search_all(project_id, user_id, data: schemas.SearchCardsSchema, include_series=False):
constraints = ["metrics.project_id = %(project_id)s",
"metrics.deleted_at ISNULL"]
params = {"project_id": project_id, "user_id": user_id,
"offset": (data.page - 1) * data.limit,
"limit": data.limit, }
if data.mine_only:
constraints.append("user_id = %(user_id)s")
else:
constraints.append("(user_id = %(user_id)s OR metrics.is_public)")
if data.shared_only:
constraints.append("is_public")
if data.query is not None and len(data.query) > 0:
constraints.append("(name ILIKE %(query)s OR owner.owner_email ILIKE %(query)s)")
params["query"] = helper.values_for_operator(value=data.query,
op=schemas.SearchEventOperator.CONTAINS)
with pg_client.PostgresClient() as cur:
sub_join = ""
if include_series:
sub_join = """LEFT JOIN LATERAL (SELECT COALESCE(jsonb_agg(metric_series.* ORDER BY index),'[]'::jsonb) AS series
FROM metric_series
WHERE metric_series.metric_id = metrics.metric_id
AND metric_series.deleted_at ISNULL
) AS metric_series ON (TRUE)"""
query = cur.mogrify(
f"""SELECT metric_id, project_id, user_id, name, is_public, created_at, edited_at,
metric_type, metric_of, metric_format, metric_value, view_type, is_pinned,
dashboards, owner_email, owner_name, default_config AS config, thumbnail
FROM metrics
{sub_join}
LEFT JOIN LATERAL (SELECT COALESCE(jsonb_agg(connected_dashboards.* ORDER BY is_public,name),'[]'::jsonb) AS dashboards
FROM (SELECT DISTINCT dashboard_id, name, is_public
FROM dashboards INNER JOIN dashboard_widgets USING (dashboard_id)
WHERE deleted_at ISNULL
AND dashboard_widgets.metric_id = metrics.metric_id
AND project_id = %(project_id)s
AND ((dashboards.user_id = %(user_id)s OR is_public))) AS connected_dashboards
) AS connected_dashboards ON (TRUE)
LEFT JOIN LATERAL (SELECT email AS owner_email, name AS owner_name
FROM users
WHERE deleted_at ISNULL
AND users.user_id = metrics.user_id
) AS owner ON (TRUE)
WHERE {" AND ".join(constraints)}
ORDER BY created_at {data.order.value}
LIMIT %(limit)s OFFSET %(offset)s;""", params)
logger.debug("---------")
logger.debug(query)
logger.debug("---------")
cur.execute(query)
rows = cur.fetchall()
if include_series:
for r in rows:
for s in r["series"]:
s["filter"] = helper.old_search_payload_to_flat(s["filter"])
else:
for r in rows:
r["created_at"] = TimeUTC.datetime_to_timestamp(r["created_at"])
r["edited_at"] = TimeUTC.datetime_to_timestamp(r["edited_at"])
rows = helper.list_to_camel_case(rows)
return rows
def get_all(project_id, user_id):
default_search = schemas.SearchCardsSchema()
rows = search_all(project_id=project_id, user_id=user_id, data=default_search)
result = rows
while len(rows) == default_search.limit:
default_search.page += 1
rows = search_all(project_id=project_id, user_id=user_id, data=default_search)
result += rows
return result
def delete_card(project_id, metric_id, user_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
UPDATE public.metrics
SET deleted_at = timezone('utc'::text, now()), edited_at = timezone('utc'::text, now())
WHERE project_id = %(project_id)s
AND metric_id = %(metric_id)s
AND (user_id = %(user_id)s OR is_public)
RETURNING data;""",
{"metric_id": metric_id, "project_id": project_id, "user_id": user_id})
)
return {"state": "success"}
def __get_path_analysis_attributes(row):
card_info = row.pop("cardInfo")
row["excludes"] = card_info.get("excludes", [])
row["startPoint"] = card_info.get("startPoint", [])
row["startType"] = card_info.get("startType", "start")
row["hideExcess"] = card_info.get("hideExcess", False)
return row
def get_card(metric_id, project_id, user_id, flatten: bool = True, include_data: bool = False):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
f"""SELECT metric_id, project_id, user_id, name, is_public, created_at, deleted_at, edited_at, metric_type,
view_type, metric_of, metric_value, metric_format, is_pinned, default_config,
default_config AS config,series, dashboards, owner_email, card_info
{',data' if include_data else ''}
FROM metrics
LEFT JOIN LATERAL (SELECT COALESCE(jsonb_agg(metric_series.* ORDER BY index),'[]'::jsonb) AS series
FROM metric_series
WHERE metric_series.metric_id = metrics.metric_id
AND metric_series.deleted_at ISNULL
) AS metric_series ON (TRUE)
LEFT JOIN LATERAL (SELECT COALESCE(jsonb_agg(connected_dashboards.* ORDER BY is_public,name),'[]'::jsonb) AS dashboards
FROM (SELECT dashboard_id, name, is_public
FROM dashboards INNER JOIN dashboard_widgets USING (dashboard_id)
WHERE deleted_at ISNULL
AND project_id = %(project_id)s
AND ((dashboards.user_id = %(user_id)s OR is_public))
AND metric_id = %(metric_id)s) AS connected_dashboards
) AS connected_dashboards ON (TRUE)
LEFT JOIN LATERAL (SELECT email AS owner_email
FROM users
WHERE deleted_at ISNULL
AND users.user_id = metrics.user_id
) AS owner ON (TRUE)
WHERE metrics.project_id = %(project_id)s
AND metrics.deleted_at ISNULL
AND (metrics.user_id = %(user_id)s OR metrics.is_public)
AND metrics.metric_id = %(metric_id)s
ORDER BY created_at;""",
{"metric_id": metric_id, "project_id": project_id, "user_id": user_id}
)
cur.execute(query)
row = cur.fetchone()
if row is None:
return None
row["created_at"] = TimeUTC.datetime_to_timestamp(row["created_at"])
row["edited_at"] = TimeUTC.datetime_to_timestamp(row["edited_at"])
if flatten:
for s in row["series"]:
s["filter"] = helper.old_search_payload_to_flat(s["filter"])
row = helper.dict_to_camel_case(row)
if row["metricType"] == schemas.MetricType.PATH_ANALYSIS:
row = __get_path_analysis_attributes(row=row)
return row
def get_series_for_alert(project_id, user_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(
"""SELECT series_id AS value,
metrics.name || '.' || (COALESCE(metric_series.name, 'series ' || index)) || '.count' AS name,
'count' AS unit,
FALSE AS predefined,
metric_id,
series_id
FROM metric_series
INNER JOIN metrics USING (metric_id)
WHERE metrics.deleted_at ISNULL
AND metrics.project_id = %(project_id)s
AND metrics.metric_type = 'timeseries'
AND (user_id = %(user_id)s OR is_public)
ORDER BY name;""",
{"project_id": project_id, "user_id": user_id}
)
)
rows = cur.fetchall()
return helper.list_to_camel_case(rows)
def change_state(project_id, metric_id, user_id, status):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
UPDATE public.metrics
SET active = %(status)s
WHERE metric_id = %(metric_id)s
AND (user_id = %(user_id)s OR is_public);""",
{"metric_id": metric_id, "status": status, "user_id": user_id})
)
return get_card(metric_id=metric_id, project_id=project_id, user_id=user_id)
def get_funnel_sessions_by_issue(user_id, project_id, metric_id, issue_id,
data: schemas.CardSessionsSchema
# , range_value=None, start_date=None, end_date=None
):
# No need for this because UI is sending the full payload
# card: dict = get_card(metric_id=metric_id, project_id=project_id, user_id=user_id, flatten=False)
# if card is None:
# return None
# metric: schemas.CardSchema = schemas.CardSchema(**card)
# metric: schemas.CardSchema = __merge_metric_with_data(metric=metric, data=data)
# if metric is None:
# return None
if not card_exists(metric_id=metric_id, project_id=project_id, user_id=user_id):
return None
for s in data.series:
s.filter.startTimestamp = data.startTimestamp
s.filter.endTimestamp = data.endTimestamp
s.filter.limit = data.limit
s.filter.page = data.page
issues_list = funnels.get_issues_on_the_fly_widget(project_id=project_id, data=s.filter).get("issues", {})
issues_list = issues_list.get("significant", []) + issues_list.get("insignificant", [])
issue = None
for i in issues_list:
if i.get("issueId", "") == issue_id:
issue = i
break
if issue is None:
issue = issues.get(project_id=project_id, issue_id=issue_id)
if issue is not None:
issue = {**issue,
"affectedSessions": 0,
"affectedUsers": 0,
"conversionImpact": 0,
"lostConversions": 0,
"unaffectedSessions": 0}
return {"seriesId": s.series_id, "seriesName": s.name,
"sessions": sessions.search_sessions(user_id=user_id, project_id=project_id,
issue=issue, data=s.filter)
if issue is not None else {"total": 0, "sessions": []},
"issue": issue}
def make_chart_from_card(project: schemas.ProjectContext, user_id, metric_id, data: schemas.CardSessionsSchema):
raw_metric: dict = get_card(metric_id=metric_id, project_id=project.project_id, user_id=user_id, include_data=True)
if raw_metric is None:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="card not found")
raw_metric["startTimestamp"] = data.startTimestamp
raw_metric["endTimestamp"] = data.endTimestamp
raw_metric["limit"] = data.limit
raw_metric["density"] = data.density
metric: schemas.CardSchema = schemas.CardSchema(**raw_metric)
if metric.is_predefined:
return custom_metrics_predefined.get_metric(key=metric.metric_of,
project_id=project.project_id,
data=data.model_dump())
elif metric.metric_type == schemas.MetricType.HEAT_MAP:
if raw_metric["data"] and raw_metric["data"].get("sessionId"):
return heatmaps.get_selected_session(project_id=project.project_id,
session_id=raw_metric["data"]["sessionId"])
else:
return heatmaps.search_short_session(project_id=project.project_id,
data=schemas.HeatMapSessionsSearch(**metric.model_dump()),
user_id=user_id)
return get_chart(project=project, data=metric, user_id=user_id)
def card_exists(metric_id, project_id, user_id) -> bool:
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
f"""SELECT 1
FROM metrics
LEFT JOIN LATERAL (SELECT COALESCE(jsonb_agg(connected_dashboards.* ORDER BY is_public,name),'[]'::jsonb) AS dashboards
FROM (SELECT dashboard_id, name, is_public
FROM dashboards INNER JOIN dashboard_widgets USING (dashboard_id)
WHERE deleted_at ISNULL
AND project_id = %(project_id)s
AND ((dashboards.user_id = %(user_id)s OR is_public))
AND metric_id = %(metric_id)s) AS connected_dashboards
) AS connected_dashboards ON (TRUE)
LEFT JOIN LATERAL (SELECT email AS owner_email
FROM users
WHERE deleted_at ISNULL
AND users.user_id = metrics.user_id
) AS owner ON (TRUE)
WHERE metrics.project_id = %(project_id)s
AND metrics.deleted_at ISNULL
AND (metrics.user_id = %(user_id)s OR metrics.is_public)
AND metrics.metric_id = %(metric_id)s
ORDER BY created_at;""",
{"metric_id": metric_id, "project_id": project_id, "user_id": user_id}
)
cur.execute(query)
row = cur.fetchone()
return row is not None

View file

@@ -1,25 +0,0 @@
import logging
from typing import Union
import schemas
from chalicelib.core import metrics
logger = logging.getLogger(__name__)
def get_metric(key: Union[schemas.MetricOfWebVitals, schemas.MetricOfErrors], project_id: int, data: dict):
supported = {
schemas.MetricOfWebVitals.COUNT_SESSIONS: metrics.get_processed_sessions,
schemas.MetricOfWebVitals.AVG_VISITED_PAGES: metrics.get_user_activity_avg_visited_pages,
schemas.MetricOfWebVitals.COUNT_REQUESTS: metrics.get_top_metrics_count_requests,
schemas.MetricOfErrors.IMPACTED_SESSIONS_BY_JS_ERRORS: metrics.get_impacted_sessions_by_js_errors,
schemas.MetricOfErrors.DOMAINS_ERRORS_4XX: metrics.get_domains_errors_4xx,
schemas.MetricOfErrors.DOMAINS_ERRORS_5XX: metrics.get_domains_errors_5xx,
schemas.MetricOfErrors.ERRORS_PER_DOMAINS: metrics.get_errors_per_domains,
schemas.MetricOfErrors.ERRORS_PER_TYPE: metrics.get_errors_per_type,
schemas.MetricOfErrors.RESOURCES_BY_PARTY: metrics.get_resources_by_party,
schemas.MetricOfWebVitals.COUNT_USERS: metrics.get_unique_users,
schemas.MetricOfWebVitals.SPEED_LOCATION: metrics.get_speed_index_location,
}
return supported.get(key, lambda *args: None)(project_id=project_id, **data)

View file

@@ -1,602 +0,0 @@
import json
import schemas
from chalicelib.core import sourcemaps, sessions
from chalicelib.utils import errors_helper
from chalicelib.utils import pg_client, helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.metrics_helper import __get_step_size
def get(error_id, family=False):
if family:
return get_batch([error_id])
with pg_client.PostgresClient() as cur:
# trying: return only 1 error, without event details
query = cur.mogrify(
# "SELECT * FROM events.errors AS e INNER JOIN public.errors AS re USING(error_id) WHERE error_id = %(error_id)s;",
"SELECT * FROM public.errors WHERE error_id = %(error_id)s LIMIT 1;",
{"error_id": error_id})
cur.execute(query=query)
result = cur.fetchone()
if result is not None:
result["stacktrace_parsed_at"] = TimeUTC.datetime_to_timestamp(result["stacktrace_parsed_at"])
return helper.dict_to_camel_case(result)
def get_batch(error_ids):
if len(error_ids) == 0:
return []
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""
WITH RECURSIVE error_family AS (
SELECT *
FROM public.errors
WHERE error_id IN %(error_ids)s
UNION
SELECT child_errors.*
FROM public.errors AS child_errors
INNER JOIN error_family ON error_family.error_id = child_errors.parent_error_id OR error_family.parent_error_id = child_errors.error_id
)
SELECT *
FROM error_family;""",
{"error_ids": tuple(error_ids)})
cur.execute(query=query)
errors = cur.fetchall()
for e in errors:
e["stacktrace_parsed_at"] = TimeUTC.datetime_to_timestamp(e["stacktrace_parsed_at"])
return helper.list_to_camel_case(errors)
def __flatten_sort_key_count_version(data, merge_nested=False):
if data is None:
return []
return sorted(
[
{
"name": f'{o["name"]}@{v["version"]}',
"count": v["count"]
} for o in data for v in o["partition"]
],
key=lambda o: o["count"], reverse=True) if merge_nested else \
[
{
"name": o["name"],
"count": o["count"],
} for o in data
]
def __process_tags(row):
return [
{"name": "browser", "partitions": __flatten_sort_key_count_version(data=row.get("browsers_partition"))},
{"name": "browser.ver",
"partitions": __flatten_sort_key_count_version(data=row.pop("browsers_partition"), merge_nested=True)},
{"name": "OS", "partitions": __flatten_sort_key_count_version(data=row.get("os_partition"))},
{"name": "OS.ver",
"partitions": __flatten_sort_key_count_version(data=row.pop("os_partition"), merge_nested=True)},
{"name": "device.family", "partitions": __flatten_sort_key_count_version(data=row.get("device_partition"))},
{"name": "device",
"partitions": __flatten_sort_key_count_version(data=row.pop("device_partition"), merge_nested=True)},
{"name": "country", "partitions": row.pop("country_partition")}
]
def get_details(project_id, error_id, user_id, **data):
pg_sub_query24 = __get_basic_constraints(time_constraint=False, chart=True, step_size_name="step_size24")
pg_sub_query24.append("error_id = %(error_id)s")
pg_sub_query30_session = __get_basic_constraints(time_constraint=True, chart=False,
startTime_arg_name="startDate30",
endTime_arg_name="endDate30", project_key="sessions.project_id")
pg_sub_query30_session.append("sessions.start_ts >= %(startDate30)s")
pg_sub_query30_session.append("sessions.start_ts <= %(endDate30)s")
pg_sub_query30_session.append("error_id = %(error_id)s")
pg_sub_query30_err = __get_basic_constraints(time_constraint=True, chart=False, startTime_arg_name="startDate30",
endTime_arg_name="endDate30", project_key="errors.project_id")
pg_sub_query30_err.append("sessions.project_id = %(project_id)s")
pg_sub_query30_err.append("sessions.start_ts >= %(startDate30)s")
pg_sub_query30_err.append("sessions.start_ts <= %(endDate30)s")
pg_sub_query30_err.append("error_id = %(error_id)s")
pg_sub_query30_err.append("source ='js_exception'")
pg_sub_query30 = __get_basic_constraints(time_constraint=False, chart=True, step_size_name="step_size30")
pg_sub_query30.append("error_id = %(error_id)s")
pg_basic_query = __get_basic_constraints(time_constraint=False)
pg_basic_query.append("error_id = %(error_id)s")
with pg_client.PostgresClient() as cur:
data["startDate24"] = TimeUTC.now(-1)
data["endDate24"] = TimeUTC.now()
data["startDate30"] = TimeUTC.now(-30)
data["endDate30"] = TimeUTC.now()
density24 = int(data.get("density24", 24))
step_size24 = __get_step_size(data["startDate24"], data["endDate24"], density24, factor=1)
density30 = int(data.get("density30", 30))
step_size30 = __get_step_size(data["startDate30"], data["endDate30"], density30, factor=1)
params = {
"startDate24": data['startDate24'],
"endDate24": data['endDate24'],
"startDate30": data['startDate30'],
"endDate30": data['endDate30'],
"project_id": project_id,
"userId": user_id,
"step_size24": step_size24,
"step_size30": step_size30,
"error_id": error_id}
main_pg_query = f"""\
SELECT error_id,
name,
message,
users,
sessions,
last_occurrence,
first_occurrence,
last_session_id,
browsers_partition,
os_partition,
device_partition,
country_partition,
chart24,
chart30,
custom_tags
FROM (SELECT error_id,
name,
message,
COUNT(DISTINCT user_id) AS users,
COUNT(DISTINCT session_id) AS sessions
FROM public.errors
INNER JOIN events.errors AS s_errors USING (error_id)
INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_err)}
GROUP BY error_id, name, message) AS details
INNER JOIN (SELECT MAX(timestamp) AS last_occurrence,
MIN(timestamp) AS first_occurrence
FROM events.errors
WHERE error_id = %(error_id)s) AS time_details ON (TRUE)
INNER JOIN (SELECT session_id AS last_session_id,
coalesce(custom_tags, '[]')::jsonb AS custom_tags
FROM events.errors
LEFT JOIN LATERAL (
SELECT jsonb_agg(jsonb_build_object(errors_tags.key, errors_tags.value)) AS custom_tags
FROM errors_tags
WHERE errors_tags.error_id = %(error_id)s
AND errors_tags.session_id = errors.session_id
AND errors_tags.message_id = errors.message_id) AS errors_tags ON (TRUE)
WHERE error_id = %(error_id)s
ORDER BY errors.timestamp DESC
LIMIT 1) AS last_session_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(browser_details) AS browsers_partition
FROM (SELECT *
FROM (SELECT user_browser AS name,
COUNT(session_id) AS count
FROM events.errors
INNER JOIN sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
GROUP BY user_browser
ORDER BY count DESC) AS count_per_browser_query
INNER JOIN LATERAL (SELECT JSONB_AGG(version_details) AS partition
FROM (SELECT user_browser_version AS version,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
AND sessions.user_browser = count_per_browser_query.name
GROUP BY user_browser_version
ORDER BY count DESC) AS version_details
) AS browser_version_details ON (TRUE)) AS browser_details) AS browser_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(os_details) AS os_partition
FROM (SELECT *
FROM (SELECT user_os AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
GROUP BY user_os
ORDER BY count DESC) AS count_per_os_details
INNER JOIN LATERAL (SELECT jsonb_agg(count_per_version_details) AS partition
FROM (SELECT COALESCE(user_os_version,'unknown') AS version, COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
AND sessions.user_os = count_per_os_details.name
GROUP BY user_os_version
ORDER BY count DESC) AS count_per_version_details
GROUP BY count_per_os_details.name ) AS os_version_details
ON (TRUE)) AS os_details) AS os_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(device_details) AS device_partition
FROM (SELECT *
FROM (SELECT user_device_type AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
GROUP BY user_device_type
ORDER BY count DESC) AS count_per_device_details
INNER JOIN LATERAL (SELECT jsonb_agg(count_per_device_v_details) AS partition
FROM (SELECT CASE
WHEN user_device = '' OR user_device ISNULL
THEN 'unknown'
ELSE user_device END AS version,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
AND sessions.user_device_type = count_per_device_details.name
GROUP BY user_device
ORDER BY count DESC) AS count_per_device_v_details
GROUP BY count_per_device_details.name ) AS device_version_details
ON (TRUE)) AS device_details) AS device_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(count_per_country_details) AS country_partition
FROM (SELECT user_country AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
GROUP BY user_country
ORDER BY count DESC) AS count_per_country_details) AS country_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(chart_details) AS chart24
FROM (SELECT generated_timestamp AS timestamp,
COUNT(session_id) AS count
FROM generate_series(%(startDate24)s, %(endDate24)s, %(step_size24)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT DISTINCT session_id
FROM events.errors
INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query24)}
) AS chart_details ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp) AS chart_details) AS chart_details24 ON (TRUE)
INNER JOIN (SELECT jsonb_agg(chart_details) AS chart30
FROM (SELECT generated_timestamp AS timestamp,
COUNT(session_id) AS count
FROM generate_series(%(startDate30)s, %(endDate30)s, %(step_size30)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT DISTINCT session_id
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30)}) AS chart_details
ON (TRUE)
GROUP BY timestamp
ORDER BY timestamp) AS chart_details) AS chart_details30 ON (TRUE);
"""
# print("--------------------")
# print(cur.mogrify(main_pg_query, params))
# print("--------------------")
cur.execute(cur.mogrify(main_pg_query, params))
row = cur.fetchone()
if row is None:
return {"errors": ["error not found"]}
row["tags"] = __process_tags(row)
query = cur.mogrify(
f"""SELECT error_id, status, session_id, start_ts,
parent_error_id,session_id, user_anonymous_id,
user_id, user_uuid, user_browser, user_browser_version,
user_os, user_os_version, user_device, payload,
FALSE AS favorite,
True AS viewed
FROM public.errors AS pe
INNER JOIN events.errors AS ee USING (error_id)
INNER JOIN public.sessions USING (session_id)
WHERE pe.project_id = %(project_id)s
AND error_id = %(error_id)s
ORDER BY start_ts DESC
LIMIT 1;""",
{"project_id": project_id, "error_id": error_id, "user_id": user_id})
cur.execute(query=query)
status = cur.fetchone()
if status is not None:
row["stack"] = errors_helper.format_first_stack_frame(status).pop("stack")
row["status"] = status.pop("status")
row["parent_error_id"] = status.pop("parent_error_id")
row["favorite"] = status.pop("favorite")
row["viewed"] = status.pop("viewed")
row["last_hydrated_session"] = status
else:
row["stack"] = []
row["last_hydrated_session"] = None
row["status"] = "untracked"
row["parent_error_id"] = None
row["favorite"] = False
row["viewed"] = False
return {"data": helper.dict_to_camel_case(row)}
def __get_basic_constraints(platform=None, time_constraint=True, startTime_arg_name="startDate",
endTime_arg_name="endDate", chart=False, step_size_name="step_size",
project_key="project_id"):
if project_key is None:
ch_sub_query = []
else:
ch_sub_query = [f"{project_key} =%(project_id)s"]
if time_constraint:
ch_sub_query += [f"timestamp >= %({startTime_arg_name})s",
f"timestamp < %({endTime_arg_name})s"]
if chart:
ch_sub_query += [f"timestamp >= generated_timestamp",
f"timestamp < generated_timestamp + %({step_size_name})s"]
if platform == schemas.PlatformType.MOBILE:
ch_sub_query.append("user_device_type = 'mobile'")
elif platform == schemas.PlatformType.DESKTOP:
ch_sub_query.append("user_device_type = 'desktop'")
return ch_sub_query
def __get_sort_key(key):
return {
schemas.ErrorSort.OCCURRENCE: "max_datetime",
schemas.ErrorSort.USERS_COUNT: "users",
schemas.ErrorSort.SESSIONS_COUNT: "sessions"
}.get(key, 'max_datetime')
def search(data: schemas.SearchErrorsSchema, project_id, user_id):
empty_response = {
'total': 0,
'errors': []
}
platform = None
for f in data.filters:
if f.type == schemas.FilterType.PLATFORM and len(f.value) > 0:
platform = f.value[0]
pg_sub_query = __get_basic_constraints(platform, project_key="sessions.project_id")
pg_sub_query += ["sessions.start_ts>=%(startDate)s", "sessions.start_ts<%(endDate)s", "source ='js_exception'",
"pe.project_id=%(project_id)s"]
# To ignore Script error
pg_sub_query.append("pe.message!='Script error.'")
pg_sub_query_chart = __get_basic_constraints(platform, time_constraint=False, chart=True, project_key=None)
if platform:
pg_sub_query_chart += ["start_ts>=%(startDate)s", "start_ts<%(endDate)s", "project_id=%(project_id)s"]
pg_sub_query_chart.append("errors.error_id =details.error_id")
statuses = []
error_ids = None
if data.startTimestamp is None:
data.startTimestamp = TimeUTC.now(-30)
if data.endTimestamp is None:
data.endTimestamp = TimeUTC.now(1)
if len(data.events) > 0 or len(data.filters) > 0:
print("-- searching for sessions before errors")
statuses = sessions.search_sessions(data=data, project_id=project_id, user_id=user_id, errors_only=True,
error_status=data.status)
if len(statuses) == 0:
return empty_response
error_ids = [e["errorId"] for e in statuses]
with pg_client.PostgresClient() as cur:
step_size = __get_step_size(data.startTimestamp, data.endTimestamp, data.density, factor=1)
sort = __get_sort_key('datetime')
if data.sort is not None:
sort = __get_sort_key(data.sort)
order = schemas.SortOrderType.DESC
if data.order is not None:
order = data.order
extra_join = ""
params = {
"startDate": data.startTimestamp,
"endDate": data.endTimestamp,
"project_id": project_id,
"userId": user_id,
"step_size": step_size}
if data.status != schemas.ErrorStatus.ALL:
pg_sub_query.append("status = %(error_status)s")
params["error_status"] = data.status
if data.limit is not None and data.page is not None:
params["errors_offset"] = (data.page - 1) * data.limit
params["errors_limit"] = data.limit
else:
params["errors_offset"] = 0
params["errors_limit"] = 200
if error_ids is not None:
params["error_ids"] = tuple(error_ids)
pg_sub_query.append("error_id IN %(error_ids)s")
# if data.bookmarked:
# pg_sub_query.append("ufe.user_id = %(userId)s")
# extra_join += " INNER JOIN public.user_favorite_errors AS ufe USING (error_id)"
if data.query is not None and len(data.query) > 0:
pg_sub_query.append("(pe.name ILIKE %(error_query)s OR pe.message ILIKE %(error_query)s)")
params["error_query"] = helper.values_for_operator(value=data.query,
op=schemas.SearchEventOperator.CONTAINS)
main_pg_query = f"""SELECT full_count,
error_id,
name,
message,
users,
sessions,
last_occurrence,
first_occurrence,
chart
FROM (SELECT COUNT(details) OVER () AS full_count, details.*
FROM (SELECT error_id,
name,
message,
COUNT(DISTINCT COALESCE(user_id,user_uuid::text)) AS users,
COUNT(DISTINCT session_id) AS sessions,
MAX(timestamp) AS max_datetime,
MIN(timestamp) AS min_datetime
FROM events.errors
INNER JOIN public.errors AS pe USING (error_id)
INNER JOIN public.sessions USING (session_id)
{extra_join}
WHERE {" AND ".join(pg_sub_query)}
GROUP BY error_id, name, message
ORDER BY {sort} {order}) AS details
LIMIT %(errors_limit)s OFFSET %(errors_offset)s
) AS details
INNER JOIN LATERAL (SELECT MAX(timestamp) AS last_occurrence,
MIN(timestamp) AS first_occurrence
FROM events.errors
WHERE errors.error_id = details.error_id) AS time_details ON (TRUE)
INNER JOIN LATERAL (SELECT jsonb_agg(chart_details) AS chart
FROM (SELECT generated_timestamp AS timestamp,
COUNT(session_id) AS count
FROM generate_series(%(startDate)s, %(endDate)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT DISTINCT session_id
FROM events.errors
{"INNER JOIN public.sessions USING(session_id)" if platform else ""}
WHERE {" AND ".join(pg_sub_query_chart)}
) AS sessions ON (TRUE)
GROUP BY timestamp
ORDER BY timestamp) AS chart_details) AS chart_details ON (TRUE);"""
# print("--------------------")
# print(cur.mogrify(main_pg_query, params))
# print("--------------------")
cur.execute(cur.mogrify(main_pg_query, params))
rows = cur.fetchall()
total = 0 if len(rows) == 0 else rows[0]["full_count"]
if total == 0:
rows = []
else:
if len(statuses) == 0:
query = cur.mogrify(
"""SELECT error_id,
COALESCE((SELECT TRUE
FROM public.user_viewed_errors AS ve
WHERE errors.error_id = ve.error_id
AND ve.user_id = %(user_id)s LIMIT 1), FALSE) AS viewed
FROM public.errors
WHERE project_id = %(project_id)s AND error_id IN %(error_ids)s;""",
{"project_id": project_id, "error_ids": tuple([r["error_id"] for r in rows]),
"user_id": user_id})
cur.execute(query=query)
statuses = helper.list_to_camel_case(cur.fetchall())
statuses = {
s["errorId"]: s for s in statuses
}
for r in rows:
r.pop("full_count")
if r["error_id"] in statuses:
r["viewed"] = statuses[r["error_id"]]["viewed"]
else:
r["viewed"] = False
return {
'total': total,
'errors': helper.list_to_camel_case(rows)
}
def __save_stacktrace(error_id, data):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""UPDATE public.errors
SET stacktrace=%(data)s::jsonb, stacktrace_parsed_at=timezone('utc'::text, now())
WHERE error_id = %(error_id)s;""",
{"error_id": error_id, "data": json.dumps(data)})
cur.execute(query=query)
def get_trace(project_id, error_id):
error = get(error_id=error_id, family=False)
if error is None:
return {"errors": ["error not found"]}
if error.get("source", "") != "js_exception":
return {"errors": ["this source of errors doesn't have a sourcemap"]}
if error.get("payload") is None:
return {"errors": ["null payload"]}
if error.get("stacktrace") is not None:
return {"sourcemapUploaded": True,
"trace": error.get("stacktrace"),
"preparsed": True}
trace, all_exists = sourcemaps.get_traces_group(project_id=project_id, payload=error["payload"])
if all_exists:
__save_stacktrace(error_id=error_id, data=trace)
return {"sourcemapUploaded": all_exists,
"trace": trace,
"preparsed": False}
def get_sessions(start_date, end_date, project_id, user_id, error_id):
extra_constraints = ["s.project_id = %(project_id)s",
"s.start_ts >= %(startDate)s",
"s.start_ts <= %(endDate)s",
"e.error_id = %(error_id)s"]
if start_date is None:
start_date = TimeUTC.now(-7)
if end_date is None:
end_date = TimeUTC.now()
params = {
"startDate": start_date,
"endDate": end_date,
"project_id": project_id,
"userId": user_id,
"error_id": error_id}
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
f"""SELECT s.project_id,
s.session_id::text AS session_id,
s.user_uuid,
s.user_id,
s.user_agent,
s.user_os,
s.user_browser,
s.user_device,
s.user_country,
s.start_ts,
s.duration,
s.events_count,
s.pages_count,
s.errors_count,
s.issue_types,
COALESCE((SELECT TRUE
FROM public.user_favorite_sessions AS fs
WHERE s.session_id = fs.session_id
AND fs.user_id = %(userId)s LIMIT 1), FALSE) AS favorite,
COALESCE((SELECT TRUE
FROM public.user_viewed_sessions AS fs
WHERE s.session_id = fs.session_id
AND fs.user_id = %(userId)s LIMIT 1), FALSE) AS viewed
FROM public.sessions AS s INNER JOIN events.errors AS e USING (session_id)
WHERE {" AND ".join(extra_constraints)}
ORDER BY s.start_ts DESC;""",
params)
cur.execute(query=query)
sessions_list = []
total = cur.rowcount
row = cur.fetchone()
while row is not None and len(sessions_list) < 100:
sessions_list.append(row)
row = cur.fetchone()
return {
'total': total,
'sessions': helper.list_to_camel_case(sessions_list)
}
ACTION_STATE = {
"unsolve": 'unresolved',
"solve": 'resolved',
"ignore": 'ignored'
}
def change_state(project_id, user_id, error_id, action):
errors = get(error_id, family=True)
print(len(errors))
status = ACTION_STATE.get(action)
if errors is None or len(errors) == 0:
return {"errors": ["error not found"]}
if errors[0]["status"] == status:
return {"errors": [f"error is already {status}"]}
if errors[0]["status"] == ACTION_STATE["solve"] and status == ACTION_STATE["ignore"]:
return {"errors": [f"state transition not permitted {errors[0]['status']} -> {status}"]}
params = {
"userId": user_id,
"error_ids": tuple([e["errorId"] for e in errors]),
"status": status}
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""UPDATE public.errors
SET status = %(status)s
WHERE error_id IN %(error_ids)s
RETURNING status""",
params)
cur.execute(query=query)
row = cur.fetchone()
if row is not None:
for e in errors:
e["status"] = row["status"]
return {"data": errors}


@@ -0,0 +1,13 @@
import logging
from decouple import config
logger = logging.getLogger(__name__)
from . import errors_pg as errors_legacy
if config("EXP_ERRORS_SEARCH", cast=bool, default=False):
logger.info(">>> Using experimental error search")
from . import errors_ch as errors
else:
from . import errors_pg as errors
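# Illustrative usage (illustration only, not part of the module): with a
# standard python-decouple setup, the experimental ClickHouse-backed search is
# enabled per deployment by exporting EXP_ERRORS_SEARCH=true; callers do
# `from chalicelib.core.errors import errors` and transparently get either
# errors_ch or errors_pg.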


@@ -0,0 +1,409 @@
import schemas
from chalicelib.core import metadata
from chalicelib.core.errors import errors_legacy
from chalicelib.core.errors.modules import errors_helper
from chalicelib.core.errors.modules import sessions
from chalicelib.utils import ch_client, exp_ch_helper
from chalicelib.utils import helper, metrics_helper
from chalicelib.utils.TimeUTC import TimeUTC
def _multiple_values(values, value_key="value"):
query_values = {}
if values is not None and isinstance(values, list):
for i in range(len(values)):
k = f"{value_key}_{i}"
query_values[k] = values[i]
return query_values
def __get_sql_operator(op: schemas.SearchEventOperator):
return {
schemas.SearchEventOperator.IS: "=",
schemas.SearchEventOperator.IS_ANY: "IN",
schemas.SearchEventOperator.ON: "=",
schemas.SearchEventOperator.ON_ANY: "IN",
schemas.SearchEventOperator.IS_NOT: "!=",
schemas.SearchEventOperator.NOT_ON: "!=",
schemas.SearchEventOperator.CONTAINS: "ILIKE",
schemas.SearchEventOperator.NOT_CONTAINS: "NOT ILIKE",
schemas.SearchEventOperator.STARTS_WITH: "ILIKE",
schemas.SearchEventOperator.ENDS_WITH: "ILIKE",
}.get(op, "=")
def _isAny_operator(op: schemas.SearchEventOperator):
return op in [schemas.SearchEventOperator.ON_ANY, schemas.SearchEventOperator.IS_ANY]
def _isUndefined_operator(op: schemas.SearchEventOperator):
return op in [schemas.SearchEventOperator.IS_UNDEFINED]
def __is_negation_operator(op: schemas.SearchEventOperator):
return op in [schemas.SearchEventOperator.IS_NOT,
schemas.SearchEventOperator.NOT_ON,
schemas.SearchEventOperator.NOT_CONTAINS]
def _multiple_conditions(condition, values, value_key="value", is_not=False):
query = []
for i in range(len(values)):
k = f"{value_key}_{i}"
query.append(condition.replace(value_key, k))
return "(" + (" AND " if is_not else " OR ").join(query) + ")"
def get(error_id, family=False):
return errors_legacy.get(error_id=error_id, family=family)
def get_batch(error_ids):
return errors_legacy.get_batch(error_ids=error_ids)
def __get_basic_constraints_events(platform=None, time_constraint=True, startTime_arg_name="startDate",
endTime_arg_name="endDate", type_condition=True, project_key="project_id",
table_name=None):
ch_sub_query = [f"{project_key} =toUInt16(%(project_id)s)"]
if table_name is not None:
table_name = table_name + "."
else:
table_name = ""
if type_condition:
ch_sub_query.append(f"{table_name}`$event_name`='ERROR'")
if time_constraint:
ch_sub_query += [f"{table_name}created_at >= toDateTime(%({startTime_arg_name})s/1000)",
f"{table_name}created_at < toDateTime(%({endTime_arg_name})s/1000)"]
# if platform == schemas.PlatformType.MOBILE:
# ch_sub_query.append("user_device_type = 'mobile'")
# elif platform == schemas.PlatformType.DESKTOP:
# ch_sub_query.append("user_device_type = 'desktop'")
return ch_sub_query
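# Illustrative example (not part of the module): with the defaults this helper
# returns a list of fragments such as
#   ["project_id =toUInt16(%(project_id)s)",
#    "`$event_name`='ERROR'",
#    "created_at >= toDateTime(%(startDate)s/1000)",
#    "created_at < toDateTime(%(endDate)s/1000)"]
# which callers join with " AND " to form the WHERE clause.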
def __get_sort_key(key):
return {
schemas.ErrorSort.OCCURRENCE: "max_datetime",
schemas.ErrorSort.USERS_COUNT: "users",
schemas.ErrorSort.SESSIONS_COUNT: "sessions"
}.get(key, 'max_datetime')
def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, user_id):
MAIN_EVENTS_TABLE = exp_ch_helper.get_main_events_table(data.startTimestamp)
MAIN_SESSIONS_TABLE = exp_ch_helper.get_main_sessions_table(data.startTimestamp)
platform = None
for f in data.filters:
if f.type == schemas.FilterType.PLATFORM and len(f.value) > 0:
platform = f.value[0]
ch_sessions_sub_query = errors_helper.__get_basic_constraints_ch(platform, type_condition=False)
# ignore platform for errors table
ch_sub_query = __get_basic_constraints_events(None, type_condition=True)
ch_sub_query.append("JSONExtractString(toString(`$properties`), 'source') = 'js_exception'")
# To ignore Script error
ch_sub_query.append("JSONExtractString(toString(`$properties`), 'message') != 'Script error.'")
error_ids = None
if data.startTimestamp is None:
data.startTimestamp = TimeUTC.now(-7)
if data.endTimestamp is None:
data.endTimestamp = TimeUTC.now(1)
subquery_part = ""
params = {}
if len(data.events) > 0:
errors_condition_count = 0
for i, e in enumerate(data.events):
if e.type == schemas.EventType.ERROR:
errors_condition_count += 1
is_any = _isAny_operator(e.operator)
op = __get_sql_operator(e.operator)
e_k = f"e_value{i}"
params = {**params, **_multiple_values(e.value, value_key=e_k)}
if not is_any and len(e.value) > 0 and e.value[0] not in [None, "*", ""]:
ch_sub_query.append(
_multiple_conditions(f"(message {op} %({e_k})s OR name {op} %({e_k})s)",
e.value, value_key=e_k))
if len(data.events) > errors_condition_count:
subquery_part_args, subquery_part = sessions.search_query_parts_ch(data=data, error_status=data.status,
errors_only=True,
project_id=project.project_id,
user_id=user_id,
issue=None,
favorite_only=False)
subquery_part = f"INNER JOIN {subquery_part} USING(session_id)"
params = {**params, **subquery_part_args}
if len(data.filters) > 0:
meta_keys = None
# include a sub-query of sessions inside the events query in order to reduce the selected data
for i, f in enumerate(data.filters):
if not isinstance(f.value, list):
f.value = [f.value]
filter_type = f.type
f.value = helper.values_for_operator(value=f.value, op=f.operator)
f_k = f"f_value{i}"
params = {**params, f_k: f.value, **_multiple_values(f.value, value_key=f_k)}
op = __get_sql_operator(f.operator) \
if filter_type not in [schemas.FilterType.EVENTS_COUNT] else f.operator
is_any = _isAny_operator(f.operator)
is_undefined = _isUndefined_operator(f.operator)
if not is_any and not is_undefined and len(f.value) == 0:
continue
is_not = False
if __is_negation_operator(f.operator):
is_not = True
if filter_type == schemas.FilterType.USER_BROWSER:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.user_browser)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f's.user_browser {op} %({f_k})s', f.value, is_not=is_not,
value_key=f_k))
elif filter_type in [schemas.FilterType.USER_OS, schemas.FilterType.USER_OS_MOBILE]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.user_os)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f's.user_os {op} %({f_k})s', f.value, is_not=is_not, value_key=f_k))
elif filter_type in [schemas.FilterType.USER_DEVICE, schemas.FilterType.USER_DEVICE_MOBILE]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.user_device)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f's.user_device {op} %({f_k})s', f.value, is_not=is_not,
value_key=f_k))
elif filter_type in [schemas.FilterType.USER_COUNTRY, schemas.FilterType.USER_COUNTRY_MOBILE]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.user_country)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f's.user_country {op} %({f_k})s', f.value, is_not=is_not,
value_key=f_k))
elif filter_type in [schemas.FilterType.UTM_SOURCE]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.utm_source)')
elif is_undefined:
ch_sessions_sub_query.append('isNull(s.utm_source)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f's.utm_source {op} toString(%({f_k})s)', f.value, is_not=is_not,
value_key=f_k))
elif filter_type in [schemas.FilterType.UTM_MEDIUM]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.utm_medium)')
elif is_undefined:
ch_sessions_sub_query.append('isNull(s.utm_medium)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f's.utm_medium {op} toString(%({f_k})s)', f.value, is_not=is_not,
value_key=f_k))
elif filter_type in [schemas.FilterType.UTM_CAMPAIGN]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.utm_campaign)')
elif is_undefined:
ch_sessions_sub_query.append('isNull(s.utm_campaign)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f's.utm_campaign {op} toString(%({f_k})s)', f.value, is_not=is_not,
value_key=f_k))
elif filter_type == schemas.FilterType.DURATION:
if len(f.value) > 0 and f.value[0] is not None:
ch_sessions_sub_query.append("s.duration >= %(minDuration)s")
params["minDuration"] = f.value[0]
if len(f.value) > 1 and f.value[1] is not None and int(f.value[1]) > 0:
ch_sessions_sub_query.append("s.duration <= %(maxDuration)s")
params["maxDuration"] = f.value[1]
elif filter_type == schemas.FilterType.REFERRER:
# extra_from += f"INNER JOIN {events.EventType.LOCATION.table} AS p USING(session_id)"
if is_any:
referrer_constraint = 'isNotNull(s.base_referrer)'
else:
referrer_constraint = _multiple_conditions(f"s.base_referrer {op} %({f_k})s", f.value,
is_not=is_not, value_key=f_k)
ch_sessions_sub_query.append(referrer_constraint)
elif filter_type == schemas.FilterType.METADATA:
# get metadata list only if you need it
if meta_keys is None:
meta_keys = metadata.get(project_id=project.project_id)
meta_keys = {m["key"]: m["index"] for m in meta_keys}
if f.source in meta_keys.keys():
if is_any:
ch_sessions_sub_query.append(f"isNotNull(s.{metadata.index_to_colname(meta_keys[f.source])})")
elif is_undefined:
ch_sessions_sub_query.append(f"isNull(s.{metadata.index_to_colname(meta_keys[f.source])})")
else:
ch_sessions_sub_query.append(
_multiple_conditions(
f"s.{metadata.index_to_colname(meta_keys[f.source])} {op} toString(%({f_k})s)",
f.value, is_not=is_not, value_key=f_k))
elif filter_type in [schemas.FilterType.USER_ID, schemas.FilterType.USER_ID_MOBILE]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.user_id)')
elif is_undefined:
ch_sessions_sub_query.append('isNull(s.user_id)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f"s.user_id {op} toString(%({f_k})s)", f.value, is_not=is_not,
value_key=f_k))
elif filter_type in [schemas.FilterType.USER_ANONYMOUS_ID,
schemas.FilterType.USER_ANONYMOUS_ID_MOBILE]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.user_anonymous_id)')
elif is_undefined:
ch_sessions_sub_query.append('isNull(s.user_anonymous_id)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f"s.user_anonymous_id {op} toString(%({f_k})s)", f.value,
is_not=is_not,
value_key=f_k))
elif filter_type in [schemas.FilterType.REV_ID, schemas.FilterType.REV_ID_MOBILE]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.rev_id)')
elif is_undefined:
ch_sessions_sub_query.append('isNull(s.rev_id)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f"s.rev_id {op} toString(%({f_k})s)", f.value, is_not=is_not,
value_key=f_k))
elif filter_type == schemas.FilterType.PLATFORM:
# op = __get_sql_operator(f.operator)
ch_sessions_sub_query.append(
_multiple_conditions(f"s.user_device_type {op} %({f_k})s", f.value, is_not=is_not,
value_key=f_k))
# elif filter_type == schemas.FilterType.issue:
# if is_any:
# ch_sessions_sub_query.append("notEmpty(s.issue_types)")
# else:
# ch_sessions_sub_query.append(f"hasAny(s.issue_types,%({f_k})s)")
# # _multiple_conditions(f"%({f_k})s {op} ANY (s.issue_types)", f.value, is_not=is_not,
# # value_key=f_k))
#
# if is_not:
# extra_constraints[-1] = f"not({extra_constraints[-1]})"
# ss_constraints[-1] = f"not({ss_constraints[-1]})"
elif filter_type == schemas.FilterType.EVENTS_COUNT:
ch_sessions_sub_query.append(
_multiple_conditions(f"s.events_count {op} %({f_k})s", f.value, is_not=is_not,
value_key=f_k))
with ch_client.ClickHouseClient() as ch:
step_size = metrics_helper.get_step_size(data.startTimestamp, data.endTimestamp, data.density)
sort = __get_sort_key('datetime')
if data.sort is not None:
sort = __get_sort_key(data.sort)
order = "DESC"
if data.order is not None:
order = data.order
params = {
**params,
"startDate": data.startTimestamp,
"endDate": data.endTimestamp,
"project_id": project.project_id,
"userId": user_id,
"step_size": step_size}
if data.limit is not None and data.page is not None:
params["errors_offset"] = (data.page - 1) * data.limit
params["errors_limit"] = data.limit
else:
params["errors_offset"] = 0
params["errors_limit"] = 200
# if data.bookmarked:
# cur.execute(cur.mogrify(f"""SELECT error_id
# FROM public.user_favorite_errors
# WHERE user_id = %(userId)s
# {"" if error_ids is None else "AND error_id IN %(error_ids)s"}""",
# {"userId": user_id, "error_ids": tuple(error_ids or [])}))
# error_ids = cur.fetchall()
# if len(error_ids) == 0:
# return empty_response
# error_ids = [e["error_id"] for e in error_ids]
if error_ids is not None:
params["error_ids"] = tuple(error_ids)
ch_sub_query.append("error_id IN %(error_ids)s")
main_ch_query = f"""\
SELECT details.error_id as error_id,
name, message, users, total,
sessions, last_occurrence, first_occurrence, chart
FROM (SELECT error_id,
JSONExtractString(toString(`$properties`), 'name') AS name,
JSONExtractString(toString(`$properties`), 'message') AS message,
COUNT(DISTINCT user_id) AS users,
COUNT(DISTINCT events.session_id) AS sessions,
MAX(created_at) AS max_datetime,
MIN(created_at) AS min_datetime,
COUNT(DISTINCT error_id)
OVER() AS total
FROM {MAIN_EVENTS_TABLE} AS events
INNER JOIN (SELECT session_id, coalesce(user_id,toString(user_uuid)) AS user_id
FROM {MAIN_SESSIONS_TABLE} AS s
{subquery_part}
WHERE {" AND ".join(ch_sessions_sub_query)}) AS sessions
ON (events.session_id = sessions.session_id)
WHERE {" AND ".join(ch_sub_query)}
GROUP BY error_id, name, message
ORDER BY {sort} {order}
LIMIT %(errors_limit)s OFFSET %(errors_offset)s) AS details
INNER JOIN (SELECT error_id,
toUnixTimestamp(MAX(created_at))*1000 AS last_occurrence,
toUnixTimestamp(MIN(created_at))*1000 AS first_occurrence
FROM {MAIN_EVENTS_TABLE}
WHERE project_id=%(project_id)s
AND `$event_name`='ERROR'
GROUP BY error_id) AS time_details
ON details.error_id=time_details.error_id
INNER JOIN (SELECT error_id, groupArray([timestamp, count]) AS chart
FROM (SELECT error_id,
gs.generate_series AS timestamp,
COUNT(DISTINCT session_id) AS count
FROM generate_series(%(startDate)s, %(endDate)s, %(step_size)s) AS gs
LEFT JOIN {MAIN_EVENTS_TABLE} ON(TRUE)
WHERE {" AND ".join(ch_sub_query)}
AND created_at >= toDateTime(timestamp / 1000)
AND created_at < toDateTime((timestamp + %(step_size)s) / 1000)
GROUP BY error_id, timestamp
ORDER BY timestamp) AS sub_table
GROUP BY error_id) AS chart_details ON details.error_id=chart_details.error_id;"""
# print("------------")
# print(ch.format(main_ch_query, params))
# print("------------")
query = ch.format(query=main_ch_query, parameters=params)
rows = ch.execute(query=query)
total = rows[0]["total"] if len(rows) > 0 else 0
for r in rows:
r["chart"] = list(r["chart"])
for i in range(len(r["chart"])):
r["chart"][i] = {"timestamp": r["chart"][i][0], "count": r["chart"][i][1]}
return {
'total': total,
'errors': helper.list_to_camel_case(rows)
}
def get_trace(project_id, error_id):
return errors_legacy.get_trace(project_id=project_id, error_id=error_id)
def get_sessions(start_date, end_date, project_id, user_id, error_id):
return errors_legacy.get_sessions(start_date=start_date,
end_date=end_date,
project_id=project_id,
user_id=user_id,
error_id=error_id)


@@ -0,0 +1,248 @@
from chalicelib.core.errors.modules import errors_helper
from chalicelib.utils import pg_client, helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.metrics_helper import get_step_size
def __flatten_sort_key_count_version(data, merge_nested=False):
if data is None:
return []
return sorted(
[
{
"name": f'{o["name"]}@{v["version"]}',
"count": v["count"]
} for o in data for v in o["partition"]
],
key=lambda o: o["count"], reverse=True) if merge_nested else \
[
{
"name": o["name"],
"count": o["count"],
} for o in data
]
def __process_tags(row):
return [
{"name": "browser", "partitions": __flatten_sort_key_count_version(data=row.get("browsers_partition"))},
{"name": "browser.ver",
"partitions": __flatten_sort_key_count_version(data=row.pop("browsers_partition"), merge_nested=True)},
{"name": "OS", "partitions": __flatten_sort_key_count_version(data=row.get("os_partition"))},
{"name": "OS.ver",
"partitions": __flatten_sort_key_count_version(data=row.pop("os_partition"), merge_nested=True)},
{"name": "device.family", "partitions": __flatten_sort_key_count_version(data=row.get("device_partition"))},
{"name": "device",
"partitions": __flatten_sort_key_count_version(data=row.pop("device_partition"), merge_nested=True)},
{"name": "country", "partitions": row.pop("country_partition")}
]
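# Note: each plain entry ("browser", "OS", "device.family") reads its partition
# with row.get(), while the matching versioned entry consumes it with row.pop(),
# so the ordering above matters and the *_partition keys are removed from the
# row once the tags are built.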
def get_details(project_id, error_id, user_id, **data):
pg_sub_query24 = errors_helper.__get_basic_constraints(time_constraint=False, chart=True,
step_size_name="step_size24")
pg_sub_query24.append("error_id = %(error_id)s")
pg_sub_query30_session = errors_helper.__get_basic_constraints(time_constraint=True, chart=False,
startTime_arg_name="startDate30",
endTime_arg_name="endDate30",
project_key="sessions.project_id")
pg_sub_query30_session.append("sessions.start_ts >= %(startDate30)s")
pg_sub_query30_session.append("sessions.start_ts <= %(endDate30)s")
pg_sub_query30_session.append("error_id = %(error_id)s")
pg_sub_query30_err = errors_helper.__get_basic_constraints(time_constraint=True, chart=False,
startTime_arg_name="startDate30",
endTime_arg_name="endDate30",
project_key="errors.project_id")
pg_sub_query30_err.append("sessions.project_id = %(project_id)s")
pg_sub_query30_err.append("sessions.start_ts >= %(startDate30)s")
pg_sub_query30_err.append("sessions.start_ts <= %(endDate30)s")
pg_sub_query30_err.append("error_id = %(error_id)s")
pg_sub_query30_err.append("source ='js_exception'")
pg_sub_query30 = errors_helper.__get_basic_constraints(time_constraint=False, chart=True,
step_size_name="step_size30")
pg_sub_query30.append("error_id = %(error_id)s")
pg_basic_query = errors_helper.__get_basic_constraints(time_constraint=False)
pg_basic_query.append("error_id = %(error_id)s")
with pg_client.PostgresClient() as cur:
data["startDate24"] = TimeUTC.now(-1)
data["endDate24"] = TimeUTC.now()
data["startDate30"] = TimeUTC.now(-30)
data["endDate30"] = TimeUTC.now()
density24 = int(data.get("density24", 24))
step_size24 = get_step_size(data["startDate24"], data["endDate24"], density24, factor=1)
density30 = int(data.get("density30", 30))
step_size30 = get_step_size(data["startDate30"], data["endDate30"], density30, factor=1)
params = {
"startDate24": data['startDate24'],
"endDate24": data['endDate24'],
"startDate30": data['startDate30'],
"endDate30": data['endDate30'],
"project_id": project_id,
"userId": user_id,
"step_size24": step_size24,
"step_size30": step_size30,
"error_id": error_id}
main_pg_query = f"""\
SELECT error_id,
name,
message,
users,
sessions,
last_occurrence,
first_occurrence,
last_session_id,
browsers_partition,
os_partition,
device_partition,
country_partition,
chart24,
chart30
FROM (SELECT error_id,
name,
message,
COUNT(DISTINCT user_id) AS users,
COUNT(DISTINCT session_id) AS sessions
FROM public.errors
INNER JOIN events.errors AS s_errors USING (error_id)
INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_err)}
GROUP BY error_id, name, message) AS details
INNER JOIN (SELECT MAX(timestamp) AS last_occurrence,
MIN(timestamp) AS first_occurrence
FROM events.errors
WHERE error_id = %(error_id)s) AS time_details ON (TRUE)
INNER JOIN (SELECT session_id AS last_session_id
FROM events.errors
WHERE error_id = %(error_id)s
ORDER BY errors.timestamp DESC
LIMIT 1) AS last_session_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(browser_details) AS browsers_partition
FROM (SELECT *
FROM (SELECT user_browser AS name,
COUNT(session_id) AS count
FROM events.errors
INNER JOIN sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
GROUP BY user_browser
ORDER BY count DESC) AS count_per_browser_query
INNER JOIN LATERAL (SELECT JSONB_AGG(version_details) AS partition
FROM (SELECT user_browser_version AS version,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
AND sessions.user_browser = count_per_browser_query.name
GROUP BY user_browser_version
ORDER BY count DESC) AS version_details
) AS browser_version_details ON (TRUE)) AS browser_details) AS browser_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(os_details) AS os_partition
FROM (SELECT *
FROM (SELECT user_os AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
GROUP BY user_os
ORDER BY count DESC) AS count_per_os_details
INNER JOIN LATERAL (SELECT jsonb_agg(count_per_version_details) AS partition
FROM (SELECT COALESCE(user_os_version,'unknown') AS version, COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
AND sessions.user_os = count_per_os_details.name
GROUP BY user_os_version
ORDER BY count DESC) AS count_per_version_details
GROUP BY count_per_os_details.name ) AS os_version_details
ON (TRUE)) AS os_details) AS os_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(device_details) AS device_partition
FROM (SELECT *
FROM (SELECT user_device_type AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
GROUP BY user_device_type
ORDER BY count DESC) AS count_per_device_details
INNER JOIN LATERAL (SELECT jsonb_agg(count_per_device_v_details) AS partition
FROM (SELECT CASE
WHEN user_device = '' OR user_device ISNULL
THEN 'unknown'
ELSE user_device END AS version,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
AND sessions.user_device_type = count_per_device_details.name
GROUP BY user_device
ORDER BY count DESC) AS count_per_device_v_details
GROUP BY count_per_device_details.name ) AS device_version_details
ON (TRUE)) AS device_details) AS device_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(count_per_country_details) AS country_partition
FROM (SELECT user_country AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
GROUP BY user_country
ORDER BY count DESC) AS count_per_country_details) AS country_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(chart_details) AS chart24
FROM (SELECT generated_timestamp AS timestamp,
COUNT(session_id) AS count
FROM generate_series(%(startDate24)s, %(endDate24)s, %(step_size24)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT DISTINCT session_id
FROM events.errors
INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query24)}
) AS chart_details ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp) AS chart_details) AS chart_details24 ON (TRUE)
INNER JOIN (SELECT jsonb_agg(chart_details) AS chart30
FROM (SELECT generated_timestamp AS timestamp,
COUNT(session_id) AS count
FROM generate_series(%(startDate30)s, %(endDate30)s, %(step_size30)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT DISTINCT session_id
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30)}) AS chart_details
ON (TRUE)
GROUP BY timestamp
ORDER BY timestamp) AS chart_details) AS chart_details30 ON (TRUE);
"""
# print("--------------------")
# print(cur.mogrify(main_pg_query, params))
# print("--------------------")
cur.execute(cur.mogrify(main_pg_query, params))
row = cur.fetchone()
if row is None:
return {"errors": ["error not found"]}
row["tags"] = __process_tags(row)
query = cur.mogrify(
f"""SELECT error_id, status, session_id, start_ts,
parent_error_id,session_id, user_anonymous_id,
user_id, user_uuid, user_browser, user_browser_version,
user_os, user_os_version, user_device, payload,
FALSE AS favorite,
True AS viewed
FROM public.errors AS pe
INNER JOIN events.errors AS ee USING (error_id)
INNER JOIN public.sessions USING (session_id)
WHERE pe.project_id = %(project_id)s
AND error_id = %(error_id)s
ORDER BY start_ts DESC
LIMIT 1;""",
{"project_id": project_id, "error_id": error_id, "user_id": user_id})
cur.execute(query=query)
status = cur.fetchone()
if status is not None:
row["stack"] = errors_helper.format_first_stack_frame(status).pop("stack")
row["status"] = status.pop("status")
row["parent_error_id"] = status.pop("parent_error_id")
row["favorite"] = status.pop("favorite")
row["viewed"] = status.pop("viewed")
row["last_hydrated_session"] = status
else:
row["stack"] = []
row["last_hydrated_session"] = None
row["status"] = "untracked"
row["parent_error_id"] = None
row["favorite"] = False
row["viewed"] = False
return {"data": helper.dict_to_camel_case(row)}


@@ -0,0 +1,294 @@
import json
from typing import List
import schemas
from chalicelib.core.errors.modules import errors_helper
from chalicelib.core.sessions import sessions_search
from chalicelib.core.sourcemaps import sourcemaps
from chalicelib.utils import pg_client, helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.metrics_helper import get_step_size
def get(error_id, family=False) -> dict | List[dict]:
if family:
return get_batch([error_id])
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""SELECT *
FROM public.errors
WHERE error_id = %(error_id)s
LIMIT 1;""",
{"error_id": error_id})
cur.execute(query=query)
result = cur.fetchone()
if result is not None:
result["stacktrace_parsed_at"] = TimeUTC.datetime_to_timestamp(result["stacktrace_parsed_at"])
return helper.dict_to_camel_case(result)
def get_batch(error_ids):
if len(error_ids) == 0:
return []
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""
WITH RECURSIVE error_family AS (
SELECT *
FROM public.errors
WHERE error_id IN %(error_ids)s
UNION
SELECT child_errors.*
FROM public.errors AS child_errors
INNER JOIN error_family ON error_family.error_id = child_errors.parent_error_id OR error_family.parent_error_id = child_errors.error_id
)
SELECT *
FROM error_family;""",
{"error_ids": tuple(error_ids)})
cur.execute(query=query)
errors = cur.fetchall()
for e in errors:
e["stacktrace_parsed_at"] = TimeUTC.datetime_to_timestamp(e["stacktrace_parsed_at"])
return helper.list_to_camel_case(errors)
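# Illustrative note: the recursive CTE follows parent/child links in both
# directions, so for a family where error A is the parent of B and C,
# get_batch(["B"]) returns A, B and C (hypothetical IDs, for illustration only).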
def __get_sort_key(key):
return {
schemas.ErrorSort.OCCURRENCE: "max_datetime",
schemas.ErrorSort.USERS_COUNT: "users",
schemas.ErrorSort.SESSIONS_COUNT: "sessions"
}.get(key, 'max_datetime')
def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, user_id):
empty_response = {
'total': 0,
'errors': []
}
platform = None
for f in data.filters:
if f.type == schemas.FilterType.PLATFORM and len(f.value) > 0:
platform = f.value[0]
pg_sub_query = errors_helper.__get_basic_constraints(platform, project_key="sessions.project_id")
pg_sub_query += ["sessions.start_ts>=%(startDate)s", "sessions.start_ts<%(endDate)s", "source ='js_exception'",
"pe.project_id=%(project_id)s"]
# To ignore Script error
pg_sub_query.append("pe.message!='Script error.'")
pg_sub_query_chart = errors_helper.__get_basic_constraints(platform, time_constraint=False, chart=True,
project_key=None)
if platform:
pg_sub_query_chart += ["start_ts>=%(startDate)s", "start_ts<%(endDate)s", "project_id=%(project_id)s"]
pg_sub_query_chart.append("errors.error_id =details.error_id")
statuses = []
error_ids = None
if data.startTimestamp is None:
data.startTimestamp = TimeUTC.now(-30)
if data.endTimestamp is None:
data.endTimestamp = TimeUTC.now(1)
if len(data.events) > 0 or len(data.filters) > 0:
print("-- searching for sessions before errors")
statuses = sessions_search.search_sessions(data=data, project=project, user_id=user_id, errors_only=True,
error_status=data.status)
if len(statuses) == 0:
return empty_response
error_ids = [e["errorId"] for e in statuses]
with pg_client.PostgresClient() as cur:
step_size = get_step_size(data.startTimestamp, data.endTimestamp, data.density, factor=1)
sort = __get_sort_key('datetime')
if data.sort is not None:
sort = __get_sort_key(data.sort)
order = schemas.SortOrderType.DESC
if data.order is not None:
order = data.order
extra_join = ""
params = {
"startDate": data.startTimestamp,
"endDate": data.endTimestamp,
"project_id": project.project_id,
"userId": user_id,
"step_size": step_size}
if data.status != schemas.ErrorStatus.ALL:
pg_sub_query.append("status = %(error_status)s")
params["error_status"] = data.status
if data.limit is not None and data.page is not None:
params["errors_offset"] = (data.page - 1) * data.limit
params["errors_limit"] = data.limit
else:
params["errors_offset"] = 0
params["errors_limit"] = 200
if error_ids is not None:
params["error_ids"] = tuple(error_ids)
pg_sub_query.append("error_id IN %(error_ids)s")
# if data.bookmarked:
# pg_sub_query.append("ufe.user_id = %(userId)s")
# extra_join += " INNER JOIN public.user_favorite_errors AS ufe USING (error_id)"
if data.query is not None and len(data.query) > 0:
pg_sub_query.append("(pe.name ILIKE %(error_query)s OR pe.message ILIKE %(error_query)s)")
params["error_query"] = helper.values_for_operator(value=data.query,
op=schemas.SearchEventOperator.CONTAINS)
main_pg_query = f"""SELECT full_count,
error_id,
name,
message,
users,
sessions,
last_occurrence,
first_occurrence,
chart
FROM (SELECT COUNT(details) OVER () AS full_count, details.*
FROM (SELECT error_id,
name,
message,
COUNT(DISTINCT COALESCE(user_id,user_uuid::text)) AS users,
COUNT(DISTINCT session_id) AS sessions,
MAX(timestamp) AS max_datetime,
MIN(timestamp) AS min_datetime
FROM events.errors
INNER JOIN public.errors AS pe USING (error_id)
INNER JOIN public.sessions USING (session_id)
{extra_join}
WHERE {" AND ".join(pg_sub_query)}
GROUP BY error_id, name, message
ORDER BY {sort} {order}) AS details
LIMIT %(errors_limit)s OFFSET %(errors_offset)s
) AS details
INNER JOIN LATERAL (SELECT MAX(timestamp) AS last_occurrence,
MIN(timestamp) AS first_occurrence
FROM events.errors
WHERE errors.error_id = details.error_id) AS time_details ON (TRUE)
INNER JOIN LATERAL (SELECT jsonb_agg(chart_details) AS chart
FROM (SELECT generated_timestamp AS timestamp,
COUNT(session_id) AS count
FROM generate_series(%(startDate)s, %(endDate)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT DISTINCT session_id
FROM events.errors
{"INNER JOIN public.sessions USING(session_id)" if platform else ""}
WHERE {" AND ".join(pg_sub_query_chart)}
) AS sessions ON (TRUE)
GROUP BY timestamp
ORDER BY timestamp) AS chart_details) AS chart_details ON (TRUE);"""
# print("--------------------")
# print(cur.mogrify(main_pg_query, params))
# print("--------------------")
cur.execute(cur.mogrify(main_pg_query, params))
rows = cur.fetchall()
total = 0 if len(rows) == 0 else rows[0]["full_count"]
if total == 0:
rows = []
else:
if len(statuses) == 0:
query = cur.mogrify(
"""SELECT error_id
FROM public.errors
WHERE project_id = %(project_id)s AND error_id IN %(error_ids)s;""",
{"project_id": project.project_id, "error_ids": tuple([r["error_id"] for r in rows]),
"user_id": user_id})
cur.execute(query=query)
statuses = helper.list_to_camel_case(cur.fetchall())
statuses = {
s["errorId"]: s for s in statuses
}
for r in rows:
r.pop("full_count")
return {
'total': total,
'errors': helper.list_to_camel_case(rows)
}
def __save_stacktrace(error_id, data):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""UPDATE public.errors
SET stacktrace=%(data)s::jsonb, stacktrace_parsed_at=timezone('utc'::text, now())
WHERE error_id = %(error_id)s;""",
{"error_id": error_id, "data": json.dumps(data)})
cur.execute(query=query)
def get_trace(project_id, error_id):
error = get(error_id=error_id, family=False)
if error is None:
return {"errors": ["error not found"]}
if error.get("source", "") != "js_exception":
return {"errors": ["this source of errors doesn't have a sourcemap"]}
if error.get("payload") is None:
return {"errors": ["null payload"]}
if error.get("stacktrace") is not None:
return {"sourcemapUploaded": True,
"trace": error.get("stacktrace"),
"preparsed": True}
trace, all_exists = sourcemaps.get_traces_group(project_id=project_id, payload=error["payload"])
if all_exists:
__save_stacktrace(error_id=error_id, data=trace)
return {"sourcemapUploaded": all_exists,
"trace": trace,
"preparsed": False}
def get_sessions(start_date, end_date, project_id, user_id, error_id):
extra_constraints = ["s.project_id = %(project_id)s",
"s.start_ts >= %(startDate)s",
"s.start_ts <= %(endDate)s",
"e.error_id = %(error_id)s"]
if start_date is None:
start_date = TimeUTC.now(-7)
if end_date is None:
end_date = TimeUTC.now()
params = {
"startDate": start_date,
"endDate": end_date,
"project_id": project_id,
"userId": user_id,
"error_id": error_id}
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
f"""SELECT s.project_id,
s.session_id::text AS session_id,
s.user_uuid,
s.user_id,
s.user_agent,
s.user_os,
s.user_browser,
s.user_device,
s.user_country,
s.start_ts,
s.duration,
s.events_count,
s.pages_count,
s.errors_count,
s.issue_types,
COALESCE((SELECT TRUE
FROM public.user_favorite_sessions AS fs
WHERE s.session_id = fs.session_id
AND fs.user_id = %(userId)s LIMIT 1), FALSE) AS favorite,
COALESCE((SELECT TRUE
FROM public.user_viewed_sessions AS fs
WHERE s.session_id = fs.session_id
AND fs.user_id = %(userId)s LIMIT 1), FALSE) AS viewed
FROM public.sessions AS s INNER JOIN events.errors AS e USING (session_id)
WHERE {" AND ".join(extra_constraints)}
ORDER BY s.start_ts DESC;""",
params)
cur.execute(query=query)
sessions_list = []
total = cur.rowcount
row = cur.fetchone()
while row is not None and len(sessions_list) < 100:
sessions_list.append(row)
row = cur.fetchone()
return {
'total': total,
'sessions': helper.list_to_camel_case(sessions_list)
}
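# Note: the hydrated list is capped at 100 rows while "total" reflects
# cur.rowcount, so the response may report more sessions than it returns.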


@@ -0,0 +1,11 @@
import logging
from decouple import config
logger = logging.getLogger(__name__)
from . import helper as errors_helper
if config("EXP_ERRORS_SEARCH", cast=bool, default=False):
import chalicelib.core.sessions.sessions_ch as sessions
else:
import chalicelib.core.sessions.sessions_pg as sessions


@@ -0,0 +1,58 @@
from typing import Optional
import schemas
from chalicelib.core.sourcemaps import sourcemaps
def __get_basic_constraints(platform: Optional[schemas.PlatformType] = None, time_constraint: bool = True,
startTime_arg_name: str = "startDate", endTime_arg_name: str = "endDate",
chart: bool = False, step_size_name: str = "step_size",
project_key: Optional[str] = "project_id"):
if project_key is None:
ch_sub_query = []
else:
ch_sub_query = [f"{project_key} =%(project_id)s"]
if time_constraint:
ch_sub_query += [f"timestamp >= %({startTime_arg_name})s",
f"timestamp < %({endTime_arg_name})s"]
if chart:
ch_sub_query += [f"timestamp >= generated_timestamp",
f"timestamp < generated_timestamp + %({step_size_name})s"]
if platform == schemas.PlatformType.MOBILE:
ch_sub_query.append("user_device_type = 'mobile'")
elif platform == schemas.PlatformType.DESKTOP:
ch_sub_query.append("user_device_type = 'desktop'")
return ch_sub_query
def __get_basic_constraints_ch(platform=None, time_constraint=True, startTime_arg_name="startDate",
endTime_arg_name="endDate", type_condition=True, project_key="project_id",
table_name=None):
ch_sub_query = [f"{project_key} =toUInt16(%(project_id)s)"]
if table_name is not None:
table_name = table_name + "."
else:
table_name = ""
if type_condition:
ch_sub_query.append(f"{table_name}`$event_name`='ERROR'")
if time_constraint:
ch_sub_query += [f"{table_name}datetime >= toDateTime(%({startTime_arg_name})s/1000)",
f"{table_name}datetime < toDateTime(%({endTime_arg_name})s/1000)"]
if platform == schemas.PlatformType.MOBILE:
ch_sub_query.append("user_device_type = 'mobile'")
elif platform == schemas.PlatformType.DESKTOP:
ch_sub_query.append("user_device_type = 'desktop'")
return ch_sub_query
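# Illustrative example (not part of the module): the PG variant yields fragments
# such as ["project_id =%(project_id)s", "timestamp >= %(startDate)s",
# "timestamp < %(endDate)s"], while the CH variant yields
# ["project_id =toUInt16(%(project_id)s)", "`$event_name`='ERROR'",
# "datetime >= toDateTime(%(startDate)s/1000)", ...]; callers join either list
# with " AND ".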
def format_first_stack_frame(error):
error["stack"] = sourcemaps.format_payload(error.pop("payload"), truncate_to_first=True)
for s in error["stack"]:
for c in s.get("context", []):
for sci, sc in enumerate(c):
if isinstance(sc, str) and len(sc) > 1000:
c[sci] = sc[:1000]
# convert bytes to string:
if isinstance(s["filename"], bytes):
s["filename"] = s["filename"].decode("utf-8")
return error
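# Note: every context string is truncated to 1000 characters and a bytes
# filename is decoded to UTF-8, so the returned first frame stays bounded in
# size and JSON-serializable.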


@@ -1,48 +0,0 @@
from chalicelib.utils import pg_client
def add_favorite_error(project_id, user_id, error_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(f"""INSERT INTO public.user_favorite_errors(user_id, error_id)
VALUES (%(userId)s,%(error_id)s);""",
{"userId": user_id, "error_id": error_id})
)
return {"errorId": error_id, "favorite": True}
def remove_favorite_error(project_id, user_id, error_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(f"""DELETE FROM public.user_favorite_errors
WHERE
user_id = %(userId)s
AND error_id = %(error_id)s;""",
{"userId": user_id, "error_id": error_id})
)
return {"errorId": error_id, "favorite": False}
def favorite_error(project_id, user_id, error_id):
exists, favorite = error_exists_and_favorite(user_id=user_id, error_id=error_id)
if not exists:
return {"errors": ["cannot bookmark non-rehydrated errors"]}
if favorite:
return remove_favorite_error(project_id=project_id, user_id=user_id, error_id=error_id)
return add_favorite_error(project_id=project_id, user_id=user_id, error_id=error_id)
def error_exists_and_favorite(user_id, error_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(
"""SELECT errors.error_id AS exists, ufe.error_id AS favorite
FROM public.errors
LEFT JOIN (SELECT error_id FROM public.user_favorite_errors WHERE user_id = %(userId)s) AS ufe USING (error_id)
WHERE error_id = %(error_id)s;""",
{"userId": user_id, "error_id": error_id})
)
r = cur.fetchone()
if r is None:
return False, False
return True, r.get("favorite") is not None


@@ -1,37 +0,0 @@
from chalicelib.utils import pg_client
def add_viewed_error(project_id, user_id, error_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""INSERT INTO public.user_viewed_errors(user_id, error_id)
VALUES (%(userId)s,%(error_id)s);""",
{"userId": user_id, "error_id": error_id})
)
def viewed_error_exists(user_id, error_id):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""SELECT
errors.error_id AS hydrated,
COALESCE((SELECT TRUE
FROM public.user_viewed_errors AS ve
WHERE ve.error_id = %(error_id)s
AND ve.user_id = %(userId)s LIMIT 1), FALSE) AS viewed
FROM public.errors
WHERE error_id = %(error_id)s""",
{"userId": user_id, "error_id": error_id})
cur.execute(
query=query
)
r = cur.fetchone()
if r:
return r.get("viewed")
return True
def viewed_error(project_id, user_id, error_id):
if viewed_error_exists(user_id=user_id, error_id=error_id):
return None
return add_viewed_error(project_id=project_id, user_id=user_id, error_id=error_id)


@@ -1,9 +1,10 @@
from functools import cache
from typing import Optional
import schemas
from chalicelib.core import issues
from chalicelib.core.autocomplete import autocomplete
from chalicelib.core.sessions import sessions_metas
from chalicelib.utils import pg_client, helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.event_filter_definition import SupportedFilter, Event
@ -137,52 +138,57 @@ class EventType:
column=None) # column=None because errors are searched by name or message
SUPPORTED_TYPES = {
    EventType.CLICK.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CLICK),
                                             query=autocomplete.__generic_query(typename=EventType.CLICK.ui_type)),
    EventType.INPUT.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.INPUT),
                                             query=autocomplete.__generic_query(typename=EventType.INPUT.ui_type)),
    EventType.LOCATION.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.LOCATION),
                                                query=autocomplete.__generic_query(typename=EventType.LOCATION.ui_type)),
    EventType.CUSTOM.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CUSTOM),
                                              query=autocomplete.__generic_query(typename=EventType.CUSTOM.ui_type)),
    EventType.REQUEST.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.REQUEST),
                                               query=autocomplete.__generic_query(typename=EventType.REQUEST.ui_type)),
    EventType.GRAPHQL.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.GRAPHQL),
                                               query=autocomplete.__generic_query(typename=EventType.GRAPHQL.ui_type)),
    EventType.STATEACTION.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.STATEACTION),
                                                   query=autocomplete.__generic_query(typename=EventType.STATEACTION.ui_type)),
    EventType.TAG.ui_type: SupportedFilter(get=_search_tags, query=None),
    EventType.ERROR.ui_type: SupportedFilter(get=autocomplete.__search_errors, query=None),
    EventType.METADATA.ui_type: SupportedFilter(get=autocomplete.__search_metadata, query=None),
    # MOBILE
    EventType.CLICK_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CLICK_MOBILE),
                                                    query=autocomplete.__generic_query(typename=EventType.CLICK_MOBILE.ui_type)),
    EventType.SWIPE_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.SWIPE_MOBILE),
                                                    query=autocomplete.__generic_query(typename=EventType.SWIPE_MOBILE.ui_type)),
    EventType.INPUT_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.INPUT_MOBILE),
                                                    query=autocomplete.__generic_query(typename=EventType.INPUT_MOBILE.ui_type)),
    EventType.VIEW_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.VIEW_MOBILE),
                                                   query=autocomplete.__generic_query(typename=EventType.VIEW_MOBILE.ui_type)),
    EventType.CUSTOM_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CUSTOM_MOBILE),
                                                     query=autocomplete.__generic_query(typename=EventType.CUSTOM_MOBILE.ui_type)),
    EventType.REQUEST_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.REQUEST_MOBILE),
                                                      query=autocomplete.__generic_query(typename=EventType.REQUEST_MOBILE.ui_type)),
    EventType.CRASH_MOBILE.ui_type: SupportedFilter(get=autocomplete.__search_errors_mobile,
                                                    query=None),
}
@cache
def supported_types():
    return {
        EventType.CLICK.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CLICK),
                                                 query=autocomplete.__generic_query(typename=EventType.CLICK.ui_type)),
        EventType.INPUT.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.INPUT),
                                                 query=autocomplete.__generic_query(typename=EventType.INPUT.ui_type)),
        EventType.LOCATION.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.LOCATION),
                                                    query=autocomplete.__generic_query(typename=EventType.LOCATION.ui_type)),
        EventType.CUSTOM.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CUSTOM),
                                                  query=autocomplete.__generic_query(typename=EventType.CUSTOM.ui_type)),
        EventType.REQUEST.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.REQUEST),
                                                   query=autocomplete.__generic_query(typename=EventType.REQUEST.ui_type)),
        EventType.GRAPHQL.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.GRAPHQL),
                                                   query=autocomplete.__generic_query(typename=EventType.GRAPHQL.ui_type)),
        EventType.STATEACTION.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.STATEACTION),
                                                       query=autocomplete.__generic_query(typename=EventType.STATEACTION.ui_type)),
        EventType.TAG.ui_type: SupportedFilter(get=_search_tags, query=None),
        EventType.ERROR.ui_type: SupportedFilter(get=autocomplete.__search_errors, query=None),
        EventType.METADATA.ui_type: SupportedFilter(get=autocomplete.__search_metadata, query=None),
        # MOBILE
        EventType.CLICK_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CLICK_MOBILE),
                                                        query=autocomplete.__generic_query(typename=EventType.CLICK_MOBILE.ui_type)),
        EventType.SWIPE_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.SWIPE_MOBILE),
                                                        query=autocomplete.__generic_query(typename=EventType.SWIPE_MOBILE.ui_type)),
        EventType.INPUT_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.INPUT_MOBILE),
                                                        query=autocomplete.__generic_query(typename=EventType.INPUT_MOBILE.ui_type)),
        EventType.VIEW_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.VIEW_MOBILE),
                                                       query=autocomplete.__generic_query(typename=EventType.VIEW_MOBILE.ui_type)),
        EventType.CUSTOM_MOBILE.ui_type: SupportedFilter(
            get=autocomplete.__generic_autocomplete(EventType.CUSTOM_MOBILE),
            query=autocomplete.__generic_query(typename=EventType.CUSTOM_MOBILE.ui_type)),
        EventType.REQUEST_MOBILE.ui_type: SupportedFilter(
            get=autocomplete.__generic_autocomplete(EventType.REQUEST_MOBILE),
            query=autocomplete.__generic_query(typename=EventType.REQUEST_MOBILE.ui_type)),
        EventType.CRASH_MOBILE.ui_type: SupportedFilter(get=autocomplete.__search_errors_mobile,
                                                        query=None),
    }
def get_errors_by_session_id(session_id, project_id):
@ -202,20 +208,17 @@ def search(text, event_type, project_id, source, key):
if not event_type:
return {"data": autocomplete.__get_autocomplete_table(text, project_id)}
if event_type in SUPPORTED_TYPES.keys():
rows = SUPPORTED_TYPES[event_type].get(project_id=project_id, value=text, key=key, source=source)
# for IOS events autocomplete
# if event_type + "_IOS" in SUPPORTED_TYPES.keys():
# rows += SUPPORTED_TYPES[event_type + "_IOS"].get(project_id=project_id, value=text, key=key,source=source)
elif event_type + "_MOBILE" in SUPPORTED_TYPES.keys():
rows = SUPPORTED_TYPES[event_type + "_MOBILE"].get(project_id=project_id, value=text, key=key, source=source)
elif event_type in sessions_metas.SUPPORTED_TYPES.keys():
if event_type in supported_types().keys():
rows = supported_types()[event_type].get(project_id=project_id, value=text, key=key, source=source)
elif event_type + "_MOBILE" in supported_types().keys():
rows = supported_types()[event_type + "_MOBILE"].get(project_id=project_id, value=text, key=key, source=source)
elif event_type in sessions_metas.supported_types().keys():
return sessions_metas.search(text, event_type, project_id)
elif event_type.endswith("_IOS") \
and event_type[:-len("_IOS")] in sessions_metas.SUPPORTED_TYPES.keys():
and event_type[:-len("_IOS")] in sessions_metas.supported_types().keys():
return sessions_metas.search(text, event_type, project_id)
elif event_type.endswith("_MOBILE") \
and event_type[:-len("_MOBILE")] in sessions_metas.SUPPORTED_TYPES.keys():
and event_type[:-len("_MOBILE")] in sessions_metas.supported_types().keys():
return sessions_metas.search(text, event_type, project_id)
else:
return {"errors": ["unsupported event"]}

View file

@ -27,7 +27,6 @@ HEALTH_ENDPOINTS = {
"http": app_connection_string("http-openreplay", 8888, "metrics"),
"ingress-nginx": app_connection_string("ingress-nginx-openreplay", 80, "healthz"),
"integrations": app_connection_string("integrations-openreplay", 8888, "metrics"),
"peers": app_connection_string("peers-openreplay", 8888, "health"),
"sink": app_connection_string("sink-openreplay", 8888, "metrics"),
"sourcemapreader": app_connection_string(
"sourcemapreader-openreplay", 8888, "health"
@ -39,9 +38,7 @@ HEALTH_ENDPOINTS = {
def __check_database_pg(*_):
fail_response = {
"health": False,
"details": {
"errors": ["Postgres health-check failed"]
}
"details": {"errors": ["Postgres health-check failed"]},
}
with pg_client.PostgresClient() as cur:
try:
@ -63,29 +60,26 @@ def __check_database_pg(*_):
"details": {
# "version": server_version["server_version"],
# "schema": schema_version["version"]
}
},
}
def __always_healthy(*_):
return {
"health": True,
"details": {}
}
return {"health": True, "details": {}}
def __check_be_service(service_name):
def fn(*_):
fail_response = {
"health": False,
"details": {
"errors": ["server health-check failed"]
}
"details": {"errors": ["server health-check failed"]},
}
try:
results = requests.get(HEALTH_ENDPOINTS.get(service_name), timeout=2)
if results.status_code != 200:
logger.error(f"!! issue with the {service_name}-health code:{results.status_code}")
logger.error(
f"!! issue with the {service_name}-health code:{results.status_code}"
)
logger.error(results.text)
# fail_response["details"]["errors"].append(results.text)
return fail_response
@ -103,10 +97,7 @@ def __check_be_service(service_name):
logger.error("couldn't get response")
# fail_response["details"]["errors"].append(str(e))
return fail_response
return {
"health": True,
"details": {}
}
return {"health": True, "details": {}}
return fn
@ -114,7 +105,7 @@ def __check_be_service(service_name):
def __check_redis(*_):
fail_response = {
"health": False,
"details": {"errors": ["server health-check failed"]}
"details": {"errors": ["server health-check failed"]},
}
if config("REDIS_STRING", default=None) is None:
# fail_response["details"]["errors"].append("REDIS_STRING not defined in env-vars")
@ -133,16 +124,14 @@ def __check_redis(*_):
"health": True,
"details": {
# "version": r.execute_command('INFO')['redis_version']
}
},
}
def __check_SSL(*_):
fail_response = {
"health": False,
"details": {
"errors": ["SSL Certificate health-check failed"]
}
"details": {"errors": ["SSL Certificate health-check failed"]},
}
try:
requests.get(config("SITE_URL"), verify=True, allow_redirects=True)
@ -150,36 +139,28 @@ def __check_SSL(*_):
logger.error("!! health failed: SSL Certificate")
logger.exception(e)
return fail_response
return {
"health": True,
"details": {}
}
return {"health": True, "details": {}}
def __get_sessions_stats(*_):
with pg_client.PostgresClient() as cur:
constraints = ["projects.deleted_at IS NULL"]
query = cur.mogrify(f"""SELECT COALESCE(SUM(sessions_count),0) AS s_c,
query = cur.mogrify(
f"""SELECT COALESCE(SUM(sessions_count),0) AS s_c,
COALESCE(SUM(events_count),0) AS e_c
FROM public.projects_stats
INNER JOIN public.projects USING(project_id)
WHERE {" AND ".join(constraints)};""")
WHERE {" AND ".join(constraints)};"""
)
cur.execute(query)
row = cur.fetchone()
return {
"numberOfSessionsCaptured": row["s_c"],
"numberOfEventCaptured": row["e_c"]
}
return {"numberOfSessionsCaptured": row["s_c"], "numberOfEventCaptured": row["e_c"]}
def get_health(tenant_id=None):
health_map = {
"databases": {
"postgres": __check_database_pg
},
"ingestionPipeline": {
"redis": __check_redis
},
"databases": {"postgres": __check_database_pg},
"ingestionPipeline": {"redis": __check_redis},
"backendServices": {
"alerts": __check_be_service("alerts"),
"assets": __check_be_service("assets"),
@ -192,13 +173,12 @@ def get_health(tenant_id=None):
"http": __check_be_service("http"),
"ingress-nginx": __always_healthy,
"integrations": __check_be_service("integrations"),
"peers": __check_be_service("peers"),
"sink": __check_be_service("sink"),
"sourcemapreader": __check_be_service("sourcemapreader"),
"storage": __check_be_service("storage")
"storage": __check_be_service("storage"),
},
"details": __get_sessions_stats,
"ssl": __check_SSL
"ssl": __check_SSL,
}
return __process_health(health_map=health_map)
@ -210,10 +190,16 @@ def __process_health(health_map):
response.pop(parent_key)
elif isinstance(health_map[parent_key], dict):
for element_key in health_map[parent_key]:
if config(f"SKIP_H_{parent_key.upper()}_{element_key.upper()}", cast=bool, default=False):
if config(
f"SKIP_H_{parent_key.upper()}_{element_key.upper()}",
cast=bool,
default=False,
):
response[parent_key].pop(element_key)
else:
response[parent_key][element_key] = health_map[parent_key][element_key]()
response[parent_key][element_key] = health_map[parent_key][
element_key
]()
else:
response[parent_key] = health_map[parent_key]()
return response
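In __process_health above, every leaf check can be disabled through an environment variable named SKIP_H_<PARENT>_<CHILD>, without touching the map itself. A standalone sketch of the same pattern (toy check map, assuming python-decouple's config as used above):
from decouple import config

def process(checks: dict) -> dict:
    response = {}
    for parent, children in checks.items():
        response[parent] = {}
        for name, fn in children.items():
            # e.g. SKIP_H_DATABASES_POSTGRES=true drops the probe entirely
            if config(f"SKIP_H_{parent.upper()}_{name.upper()}", cast=bool, default=False):
                continue
            response[parent][name] = fn()
    return response

process({"databases": {"postgres": lambda: {"health": True, "details": {}}}})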
@ -221,7 +207,8 @@ def __process_health(health_map):
def cron():
with pg_client.PostgresClient() as cur:
query = cur.mogrify("""SELECT projects.project_id,
query = cur.mogrify(
"""SELECT projects.project_id,
projects.created_at,
projects.sessions_last_check_at,
projects.first_recorded_session_at,
@ -229,7 +216,8 @@ def cron():
FROM public.projects
LEFT JOIN public.projects_stats USING (project_id)
WHERE projects.deleted_at IS NULL
ORDER BY project_id;""")
ORDER BY project_id;"""
)
cur.execute(query)
rows = cur.fetchall()
for r in rows:
@ -250,20 +238,24 @@ def cron():
count_start_from = r["last_update_at"]
count_start_from = TimeUTC.datetime_to_timestamp(count_start_from)
params = {"project_id": r["project_id"],
"start_ts": count_start_from,
"end_ts": TimeUTC.now(),
"sessions_count": 0,
"events_count": 0}
params = {
"project_id": r["project_id"],
"start_ts": count_start_from,
"end_ts": TimeUTC.now(),
"sessions_count": 0,
"events_count": 0,
}
query = cur.mogrify("""SELECT COUNT(1) AS sessions_count,
query = cur.mogrify(
"""SELECT COUNT(1) AS sessions_count,
COALESCE(SUM(events_count),0) AS events_count
FROM public.sessions
WHERE project_id=%(project_id)s
AND start_ts>=%(start_ts)s
AND start_ts<=%(end_ts)s
AND duration IS NOT NULL;""",
params)
params,
)
cur.execute(query)
row = cur.fetchone()
if row is not None:
@ -271,56 +263,68 @@ def cron():
params["events_count"] = row["events_count"]
if insert:
query = cur.mogrify("""INSERT INTO public.projects_stats(project_id, sessions_count, events_count, last_update_at)
query = cur.mogrify(
"""INSERT INTO public.projects_stats(project_id, sessions_count, events_count, last_update_at)
VALUES (%(project_id)s, %(sessions_count)s, %(events_count)s, (now() AT TIME ZONE 'utc'::text));""",
params)
params,
)
else:
query = cur.mogrify("""UPDATE public.projects_stats
query = cur.mogrify(
"""UPDATE public.projects_stats
SET sessions_count=sessions_count+%(sessions_count)s,
events_count=events_count+%(events_count)s,
last_update_at=(now() AT TIME ZONE 'utc'::text)
WHERE project_id=%(project_id)s;""",
params)
params,
)
cur.execute(query)
# this cron is used to correct the sessions & events counts every week
def weekly_cron():
with pg_client.PostgresClient(long_query=True) as cur:
query = cur.mogrify("""SELECT project_id,
query = cur.mogrify(
"""SELECT project_id,
projects_stats.last_update_at
FROM public.projects
LEFT JOIN public.projects_stats USING (project_id)
WHERE projects.deleted_at IS NULL
ORDER BY project_id;""")
ORDER BY project_id;"""
)
cur.execute(query)
rows = cur.fetchall()
for r in rows:
if r["last_update_at"] is None:
continue
params = {"project_id": r["project_id"],
"end_ts": TimeUTC.now(),
"sessions_count": 0,
"events_count": 0}
params = {
"project_id": r["project_id"],
"end_ts": TimeUTC.now(),
"sessions_count": 0,
"events_count": 0,
}
query = cur.mogrify("""SELECT COUNT(1) AS sessions_count,
query = cur.mogrify(
"""SELECT COUNT(1) AS sessions_count,
COALESCE(SUM(events_count),0) AS events_count
FROM public.sessions
WHERE project_id=%(project_id)s
AND start_ts<=%(end_ts)s
AND duration IS NOT NULL;""",
params)
params,
)
cur.execute(query)
row = cur.fetchone()
if row is not None:
params["sessions_count"] = row["sessions_count"]
params["events_count"] = row["events_count"]
query = cur.mogrify("""UPDATE public.projects_stats
query = cur.mogrify(
"""UPDATE public.projects_stats
SET sessions_count=%(sessions_count)s,
events_count=%(events_count)s,
last_update_at=(now() AT TIME ZONE 'utc'::text)
WHERE project_id=%(project_id)s;""",
params)
params,
)
cur.execute(query)
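Net effect of the two jobs above: cron() runs frequently and adds only the delta counted since last_update_at, while weekly_cron() recounts everything up to end_ts and overwrites the stored totals, correcting any drift the incremental path accumulated. In condensed form (illustrative, mirroring the two UPDATE statements above):
# cron():        incremental -> sessions_count = sessions_count + %(sessions_count)s
# weekly_cron(): absolute    -> sessions_count = %(sessions_count)s  (full recount)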

View file

@ -1,12 +1,12 @@
import schemas
from chalicelib.core import integration_base
from chalicelib.core.integration_github_issue import GithubIntegrationIssue
from chalicelib.core.issue_tracking import base
from chalicelib.core.issue_tracking.github_issue import GithubIntegrationIssue
from chalicelib.utils import pg_client, helper
PROVIDER = schemas.IntegrationType.GITHUB
class GitHubIntegration(integration_base.BaseIntegration):
class GitHubIntegration(base.BaseIntegration):
def __init__(self, tenant_id, user_id):
self.__tenant_id = tenant_id

View file

@ -1,12 +1,12 @@
from chalicelib.core.integration_base_issue import BaseIntegrationIssue
from chalicelib.core.issue_tracking.base_issue import BaseIntegrationIssue
from chalicelib.utils import github_client_v3
from chalicelib.utils.github_client_v3 import github_formatters as formatter
class GithubIntegrationIssue(BaseIntegrationIssue):
def __init__(self, integration_token):
self.__client = github_client_v3.githubV3Request(integration_token)
super(GithubIntegrationIssue, self).__init__("GITHUB", integration_token)
def __init__(self, token):
self.__client = github_client_v3.githubV3Request(token)
super(GithubIntegrationIssue, self).__init__("GITHUB", token)
def get_current_user(self):
return formatter.user(self.__client.get("/user"))
@ -28,9 +28,9 @@ class GithubIntegrationIssue(BaseIntegrationIssue):
return meta
def create_new_assignment(self, integration_project_id, title, description, assignee,
def create_new_assignment(self, project_id, title, description, assignee,
issue_type):
repoId = integration_project_id
repoId = project_id
assignees = [assignee]
labels = [str(issue_type)]
@ -59,11 +59,11 @@ class GithubIntegrationIssue(BaseIntegrationIssue):
def get_by_ids(self, saved_issues):
results = []
for i in saved_issues:
results.append(self.get(integration_project_id=i["integrationProjectId"], assignment_id=i["id"]))
results.append(self.get(project_id=i["integrationProjectId"], assignment_id=i["id"]))
return {"issues": results}
def get(self, integration_project_id, assignment_id):
repoId = integration_project_id
def get(self, project_id, assignment_id):
repoId = project_id
issueNumber = assignment_id
issue = self.__client.get(f"/repositories/{repoId}/issues/{issueNumber}")
issue = formatter.issue(issue)
@ -72,17 +72,17 @@ class GithubIntegrationIssue(BaseIntegrationIssue):
self.__client.get(f"/repositories/{repoId}/issues/{issueNumber}/comments")]
return issue
def comment(self, integration_project_id, assignment_id, comment):
repoId = integration_project_id
def comment(self, project_id, assignment_id, comment):
repoId = project_id
issueNumber = assignment_id
commentCreated = self.__client.post(f"/repositories/{repoId}/issues/{issueNumber}/comments",
body={"body": comment})
return formatter.comment(commentCreated)
def get_metas(self, integration_project_id):
def get_metas(self, project_id):
current_user = self.get_current_user()
try:
users = self.__client.get(f"/repositories/{integration_project_id}/collaborators")
users = self.__client.get(f"/repositories/{project_id}/collaborators")
except Exception as e:
users = []
users = [formatter.user(u) for u in users]
@ -92,7 +92,7 @@ class GithubIntegrationIssue(BaseIntegrationIssue):
return {"provider": self.provider.lower(),
'users': users,
'issueTypes': [formatter.label(l) for l in
self.__client.get(f"/repositories/{integration_project_id}/labels")]
self.__client.get(f"/repositories/{project_id}/labels")]
}
def get_projects(self):

View file

@ -1,4 +1,5 @@
import schemas
from chalicelib.core.modules import TENANT_CONDITION
from chalicelib.utils import pg_client
@ -51,10 +52,10 @@ def get_global_integrations_status(tenant_id, user_id, project_id):
AND provider='elasticsearch')) AS {schemas.IntegrationType.ELASTICSEARCH.value},
EXISTS((SELECT 1
FROM public.webhooks
WHERE type='slack' AND deleted_at ISNULL)) AS {schemas.IntegrationType.SLACK.value},
WHERE type='slack' AND deleted_at ISNULL AND {TENANT_CONDITION})) AS {schemas.IntegrationType.SLACK.value},
EXISTS((SELECT 1
FROM public.webhooks
WHERE type='msteams' AND deleted_at ISNULL)) AS {schemas.IntegrationType.MS_TEAMS.value},
WHERE type='msteams' AND deleted_at ISNULL AND {TENANT_CONDITION})) AS {schemas.IntegrationType.MS_TEAMS.value},
EXISTS((SELECT 1
FROM public.integrations
WHERE project_id=%(project_id)s AND provider='dynatrace')) AS {schemas.IntegrationType.DYNATRACE.value};""",

View file

@ -1,7 +1,7 @@
from chalicelib.core import integration_github, integration_jira_cloud
from chalicelib.core.issue_tracking import github, jira_cloud
from chalicelib.utils import pg_client
SUPPORTED_TOOLS = [integration_github.PROVIDER, integration_jira_cloud.PROVIDER]
SUPPORTED_TOOLS = [github.PROVIDER, jira_cloud.PROVIDER]
def get_available_integrations(user_id):
@ -23,7 +23,7 @@ def get_available_integrations(user_id):
def __get_default_integration(user_id):
current_integrations = get_available_integrations(user_id)
return integration_github.PROVIDER if current_integrations["github"] else integration_jira_cloud.PROVIDER if \
return github.PROVIDER if current_integrations["github"] else jira_cloud.PROVIDER if \
current_integrations["jira"] else None
@ -35,11 +35,11 @@ def get_integration(tenant_id, user_id, tool=None, for_delete=False):
tool = tool.upper()
if tool not in SUPPORTED_TOOLS:
return {"errors": [f"issue tracking tool not supported yet, available: {SUPPORTED_TOOLS}"]}, None
if tool == integration_jira_cloud.PROVIDER:
integration = integration_jira_cloud.JIRAIntegration(tenant_id=tenant_id, user_id=user_id)
if tool == jira_cloud.PROVIDER:
integration = jira_cloud.JIRAIntegration(tenant_id=tenant_id, user_id=user_id)
if not for_delete and integration.integration is not None and not integration.integration.get("valid", True):
return {"errors": ["JIRA: connexion issue/unauthorized"]}, integration
return None, integration
elif tool == integration_github.PROVIDER:
return None, integration_github.GitHubIntegration(tenant_id=tenant_id, user_id=user_id)
elif tool == github.PROVIDER:
return None, github.GitHubIntegration(tenant_id=tenant_id, user_id=user_id)
return {"errors": ["lost integration"]}, None

View file

@ -1,6 +1,6 @@
import schemas
from chalicelib.core import integration_base
from chalicelib.core.integration_jira_cloud_issue import JIRACloudIntegrationIssue
from chalicelib.core.issue_tracking import base
from chalicelib.core.issue_tracking.jira_cloud_issue import JIRACloudIntegrationIssue
from chalicelib.utils import pg_client, helper
PROVIDER = schemas.IntegrationType.JIRA
@ -10,7 +10,7 @@ def obfuscate_string(string):
return "*" * (len(string) - 4) + string[-4:]
class JIRAIntegration(integration_base.BaseIntegration):
class JIRAIntegration(base.BaseIntegration):
def __init__(self, tenant_id, user_id):
self.__tenant_id = tenant_id
# TODO: enable super-constructor when OAuth is done
@ -50,8 +50,8 @@ class JIRAIntegration(integration_base.BaseIntegration):
cur.execute(
cur.mogrify(
"""SELECT username, token, url
FROM public.jira_cloud
WHERE user_id=%(user_id)s;""",
FROM public.jira_cloud
WHERE user_id = %(user_id)s;""",
{"user_id": self._user_id})
)
data = helper.dict_to_camel_case(cur.fetchone())
@ -95,10 +95,9 @@ class JIRAIntegration(integration_base.BaseIntegration):
def add(self, username, token, url, obfuscate=False):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
INSERT INTO public.jira_cloud(username, token, user_id,url)
VALUES (%(username)s, %(token)s, %(user_id)s,%(url)s)
RETURNING username, token, url;""",
cur.mogrify(""" \
INSERT INTO public.jira_cloud(username, token, user_id, url)
VALUES (%(username)s, %(token)s, %(user_id)s, %(url)s) RETURNING username, token, url;""",
{"user_id": self._user_id, "username": username,
"token": token, "url": url})
)
@ -112,9 +111,10 @@ class JIRAIntegration(integration_base.BaseIntegration):
def delete(self):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
DELETE FROM public.jira_cloud
WHERE user_id=%(user_id)s;""",
cur.mogrify(""" \
DELETE
FROM public.jira_cloud
WHERE user_id = %(user_id)s;""",
{"user_id": self._user_id})
)
return {"state": "success"}
@ -125,7 +125,7 @@ class JIRAIntegration(integration_base.BaseIntegration):
changes={
"username": data.username,
"token": data.token if len(data.token) > 0 and data.token.find("***") == -1 \
else self.integration.token,
else self.integration["token"],
"url": str(data.url)
},
obfuscate=True
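The change above reads the stored token from the integration dict rather than via attribute access; the surrounding logic keeps the old token whenever the submitted value still contains the "***" mask produced by obfuscate_string. A minimal sketch of that round-trip (toy values):
def obfuscate_string(string):
    return "*" * (len(string) - 4) + string[-4:]

stored = "secret-token-abcd"
shown = obfuscate_string(stored)   # thirteen '*' followed by 'abcd'
submitted = shown                  # user saved the form without retyping the token
token = submitted if len(submitted) > 0 and submitted.find("***") == -1 else stored
assert token == stored             # a masked value keeps the stored token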

View file

@ -1,5 +1,5 @@
from chalicelib.utils import jira_client
from chalicelib.core.integration_base_issue import BaseIntegrationIssue
from chalicelib.core.issue_tracking.base_issue import BaseIntegrationIssue
class JIRACloudIntegrationIssue(BaseIntegrationIssue):
@ -9,8 +9,8 @@ class JIRACloudIntegrationIssue(BaseIntegrationIssue):
self._client = jira_client.JiraManager(self.url, self.username, token, None)
super(JIRACloudIntegrationIssue, self).__init__("JIRA", token)
def create_new_assignment(self, integration_project_id, title, description, assignee, issue_type):
self._client.set_jira_project_id(integration_project_id)
def create_new_assignment(self, project_id, title, description, assignee, issue_type):
self._client.set_jira_project_id(project_id)
data = {
'summary': title,
'description': description,
@ -28,26 +28,26 @@ class JIRACloudIntegrationIssue(BaseIntegrationIssue):
projects_map[i["integrationProjectId"]].append(i["id"])
results = []
for integration_project_id in projects_map:
self._client.set_jira_project_id(integration_project_id)
for project_id in projects_map:
self._client.set_jira_project_id(project_id)
jql = 'labels = OpenReplay'
if len(projects_map[integration_project_id]) > 0:
jql += f" AND ID IN ({','.join(projects_map[integration_project_id])})"
if len(projects_map[project_id]) > 0:
jql += f" AND ID IN ({','.join(projects_map[project_id])})"
issues = self._client.get_issues(jql, offset=0)
results += issues
return {"issues": results}
def get(self, integration_project_id, assignment_id):
self._client.set_jira_project_id(integration_project_id)
def get(self, project_id, assignment_id):
self._client.set_jira_project_id(project_id)
return self._client.get_issue_v3(assignment_id)
def comment(self, integration_project_id, assignment_id, comment):
self._client.set_jira_project_id(integration_project_id)
def comment(self, project_id, assignment_id, comment):
self._client.set_jira_project_id(project_id)
return self._client.add_comment_v3(assignment_id, comment)
def get_metas(self, integration_project_id):
def get_metas(self, project_id):
meta = {}
self._client.set_jira_project_id(integration_project_id)
self._client.set_jira_project_id(project_id)
meta['issueTypes'] = self._client.get_issue_types()
meta['users'] = self._client.get_assignable_users()
return {"provider": self.provider.lower(), **meta}

View file

@ -1,6 +1,10 @@
import logging
from chalicelib.core.sessions import sessions_mobs, sessions_devtool
from chalicelib.utils import pg_client, helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.core import sessions_mobs, sessions_devtool
logger = logging.getLogger(__name__)
class Actions:
@ -150,23 +154,23 @@ def get_scheduled_jobs():
def execute_jobs():
jobs = get_scheduled_jobs()
for job in jobs:
print(f"Executing jobId:{job['jobId']}")
logger.info(f"Executing jobId:{job['jobId']}")
try:
if job["action"] == Actions.DELETE_USER_DATA:
session_ids = __get_session_ids_by_user_ids(project_id=job["projectId"],
user_ids=[job["referenceId"]])
if len(session_ids) > 0:
print(f"Deleting {len(session_ids)} sessions")
logger.info(f"Deleting {len(session_ids)} sessions")
__delete_sessions_by_session_ids(session_ids=session_ids)
__delete_session_mobs_by_session_ids(session_ids=session_ids, project_id=job["projectId"])
else:
raise Exception(f"The action '{job['action']}' not supported.")
job["status"] = JobStatus.COMPLETED
print(f"Job completed {job['jobId']}")
logger.info(f"Job completed {job['jobId']}")
except Exception as e:
job["status"] = JobStatus.FAILED
job["errors"] = str(e)
print(f"Job failed {job['jobId']}")
logger.error(f"Job failed {job['jobId']}")
update(job["jobId"], job)
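The jobs module now logs through logging.getLogger(__name__) instead of print, so job progress and failures flow through whatever handlers the service configures, with failures standing out at ERROR level. A minimal root-logger setup sketch (an assumption — the real handler configuration lives in the service entrypoint):
import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(name)s %(levelname)s %(message)s")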

View file

@ -1,6 +1,5 @@
from chalicelib.core import log_tools
import requests
from chalicelib.core.log_tools import log_tools
from schemas import schemas
IN_TY = "bugsnag"

View file

@ -1,5 +1,5 @@
import boto3
from chalicelib.core import log_tools
from chalicelib.core.log_tools import log_tools
from schemas import schemas
IN_TY = "cloudwatch"

View file

@ -1,4 +1,4 @@
from chalicelib.core import log_tools
from chalicelib.core.log_tools import log_tools
from schemas import schemas
IN_TY = "datadog"

View file

@ -1,8 +1,7 @@
import logging
from chalicelib.core.log_tools import log_tools
from elasticsearch import Elasticsearch
from chalicelib.core import log_tools
from schemas import schemas
logger = logging.getLogger(__name__)

View file

@ -1,6 +1,8 @@
from chalicelib.utils import pg_client, helper
import json
from chalicelib.core.modules import TENANT_CONDITION
from chalicelib.utils import pg_client, helper
EXCEPT = ["jira_server", "jira_cloud"]
@ -94,11 +96,11 @@ def get_all_by_tenant(tenant_id, integration):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(
"""SELECT integrations.*
f"""SELECT integrations.*
FROM public.integrations INNER JOIN public.projects USING(project_id)
WHERE provider = %(provider)s
WHERE provider = %(provider)s AND {TENANT_CONDITION}
AND projects.deleted_at ISNULL;""",
{"provider": integration})
{"tenant_id": tenant_id, "provider": integration})
)
r = cur.fetchall()
return helper.list_to_camel_case(r, flatten=True)
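The rewrite above mixes an f-string with bound parameters: the trusted, constant TENANT_CONDITION fragment is interpolated into the SQL text, while tenant_id and provider still travel through mogrify placeholders. A sketch of that split, assuming the fragment's shape (the real constant lives in chalicelib.core.modules):
TENANT_CONDITION = "projects.tenant_id = %(tenant_id)s"  # assumed shape, for illustration

query = cur.mogrify(
    f"""SELECT integrations.*
        FROM public.integrations INNER JOIN public.projects USING (project_id)
        WHERE provider = %(provider)s AND {TENANT_CONDITION}
          AND projects.deleted_at ISNULL;""",
    {"tenant_id": tenant_id, "provider": integration})
# only the trusted constant is interpolated; all values go through placeholders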

View file

@ -1,4 +1,4 @@
from chalicelib.core import log_tools
from chalicelib.core.log_tools import log_tools
from schemas import schemas
IN_TY = "newrelic"

View file

@ -1,4 +1,4 @@
from chalicelib.core import log_tools
from chalicelib.core.log_tools import log_tools
from schemas import schemas
IN_TY = "rollbar"

View file

@ -1,5 +1,5 @@
import requests
from chalicelib.core import log_tools
from chalicelib.core.log_tools import log_tools
from schemas import schemas
IN_TY = "sentry"

View file

@ -1,4 +1,4 @@
from chalicelib.core import log_tools
from chalicelib.core.log_tools import log_tools
from schemas import schemas
IN_TY = "stackdriver"

View file

@ -1,4 +1,4 @@
from chalicelib.core import log_tools
from chalicelib.core.log_tools import log_tools
from schemas import schemas
IN_TY = "sumologic"

View file

@ -98,17 +98,23 @@ def __edit(project_id, col_index, colname, new_name):
if col_index not in list(old_metas.keys()):
return {"errors": ["custom field not found"]}
with pg_client.PostgresClient() as cur:
if old_metas[col_index]["key"] != new_name:
if old_metas[col_index]["key"] != new_name:
with pg_client.PostgresClient() as cur:
query = cur.mogrify(f"""UPDATE public.projects
SET {colname} = %(value)s
WHERE project_id = %(project_id)s
AND deleted_at ISNULL
RETURNING {colname};""",
RETURNING {colname},
(SELECT {colname} FROM projects WHERE project_id = %(project_id)s) AS old_{colname};""",
{"project_id": project_id, "value": new_name})
cur.execute(query=query)
new_name = cur.fetchone()[colname]
row = cur.fetchone()
new_name = row[colname]
old_name = row['old_' + colname]
old_metas[col_index]["key"] = new_name
projects.rename_metadata_condition(project_id=project_id,
old_metadata_key=old_name,
new_metadata_key=new_name)
return {"data": old_metas[col_index]}
@ -121,8 +127,8 @@ def edit(tenant_id, project_id, index: int, new_name: str):
def delete(tenant_id, project_id, index: int):
index = int(index)
old_segments = get(project_id)
old_segments = [k["index"] for k in old_segments]
if index not in old_segments:
old_indexes = [k["index"] for k in old_segments]
if index not in old_indexes:
return {"errors": ["custom field not found"]}
with pg_client.PostgresClient() as cur:
@ -132,7 +138,8 @@ def delete(tenant_id, project_id, index: int):
WHERE project_id = %(project_id)s AND deleted_at ISNULL;""",
{"project_id": project_id})
cur.execute(query=query)
projects.delete_metadata_condition(project_id=project_id,
metadata_key=old_segments[old_indexes.index(index)]["key"])
return {"data": get(project_id)}

View file

@ -1,624 +0,0 @@
import logging
import schemas
from chalicelib.core import metadata
from chalicelib.utils import helper
from chalicelib.utils import pg_client
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.metrics_helper import __get_step_size
logger = logging.getLogger(__name__)
def __get_constraints(project_id, time_constraint=True, chart=False, duration=True, project=True,
project_identifier="project_id",
main_table="sessions", time_column="start_ts", data={}):
pg_sub_query = []
main_table = main_table + "." if main_table is not None and len(main_table) > 0 else ""
if project:
pg_sub_query.append(f"{main_table}{project_identifier} =%({project_identifier})s")
if duration:
pg_sub_query.append(f"{main_table}duration>0")
if time_constraint:
pg_sub_query.append(f"{main_table}{time_column} >= %(startTimestamp)s")
pg_sub_query.append(f"{main_table}{time_column} < %(endTimestamp)s")
if chart:
pg_sub_query.append(f"{main_table}{time_column} >= generated_timestamp")
pg_sub_query.append(f"{main_table}{time_column} < generated_timestamp + %(step_size)s")
return pg_sub_query + __get_meta_constraint(project_id=project_id, data=data)
def __merge_charts(list1, list2, time_key="timestamp"):
if len(list1) != len(list2):
raise Exception("cannot merge unequal lists")
result = []
for i in range(len(list1)):
timestamp = min(list1[i][time_key], list2[i][time_key])
result.append({**list1[i], **list2[i], time_key: timestamp})
return result
def __get_constraint_values(data):
params = {}
for i, f in enumerate(data.get("filters", [])):
params[f"{f['key']}_{i}"] = f["value"]
return params
def __get_meta_constraint(project_id, data):
if len(data.get("filters", [])) == 0:
return []
constraints = []
meta_keys = metadata.get(project_id=project_id)
meta_keys = {m["key"]: m["index"] for m in meta_keys}
for i, f in enumerate(data.get("filters", [])):
if f["key"] in meta_keys.keys():
key = f"sessions.metadata_{meta_keys[f['key']]})"
if f["value"] in ["*", ""]:
constraints.append(f"{key} IS NOT NULL")
else:
constraints.append(f"{key} = %({f['key']}_{i})s")
else:
filter_type = f["key"].upper()
filter_type = [filter_type, "USER" + filter_type, filter_type[4:]]
if any(item in [schemas.FilterType.USER_BROWSER] \
for item in filter_type):
constraints.append(f"sessions.user_browser = %({f['key']}_{i})s")
elif any(item in [schemas.FilterType.USER_OS, schemas.FilterType.USER_OS_MOBILE] \
for item in filter_type):
constraints.append(f"sessions.user_os = %({f['key']}_{i})s")
elif any(item in [schemas.FilterType.USER_DEVICE, schemas.FilterType.USER_DEVICE_MOBILE] \
for item in filter_type):
constraints.append(f"sessions.user_device = %({f['key']}_{i})s")
elif any(item in [schemas.FilterType.USER_COUNTRY, schemas.FilterType.USER_COUNTRY_MOBILE] \
for item in filter_type):
constraints.append(f"sessions.user_country = %({f['key']}_{i})s")
elif any(item in [schemas.FilterType.USER_ID, schemas.FilterType.USER_ID_MOBILE] \
for item in filter_type):
constraints.append(f"sessions.user_id = %({f['key']}_{i})s")
elif any(item in [schemas.FilterType.USER_ANONYMOUS_ID, schemas.FilterType.USER_ANONYMOUS_ID_MOBILE] \
for item in filter_type):
constraints.append(f"sessions.user_anonymous_id = %({f['key']}_{i})s")
elif any(item in [schemas.FilterType.REV_ID, schemas.FilterType.REV_ID_MOBILE] \
for item in filter_type):
constraints.append(f"sessions.rev_id = %({f['key']}_{i})s")
return constraints
def get_processed_sessions(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(),
density=7, **args):
step_size = __get_step_size(startTimestamp, endTimestamp, density, factor=1)
pg_sub_query = __get_constraints(project_id=project_id, data=args)
pg_sub_query_chart = __get_constraints(project_id=project_id, time_constraint=True,
chart=True, data=args)
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT generated_timestamp AS timestamp,
COALESCE(COUNT(sessions), 0) AS value
FROM generate_series(%(startTimestamp)s, %(endTimestamp)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL ( SELECT 1
FROM public.sessions
WHERE {" AND ".join(pg_sub_query_chart)}
) AS sessions ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp;"""
params = {"step_size": step_size, "project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args)}
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
results = {
"value": sum([r["value"] for r in rows]),
"chart": rows
}
diff = endTimestamp - startTimestamp
endTimestamp = startTimestamp
startTimestamp = endTimestamp - diff
pg_query = f"""SELECT COUNT(sessions.session_id) AS count
FROM public.sessions
WHERE {" AND ".join(pg_sub_query)};"""
params = {"project_id": project_id, "startTimestamp": startTimestamp, "endTimestamp": endTimestamp,
**__get_constraint_values(args)}
cur.execute(cur.mogrify(pg_query, params))
count = cur.fetchone()["count"]
results["progress"] = helper.__progress(old_val=count, new_val=results["value"])
results["unit"] = schemas.TemplatePredefinedUnits.COUNT
return results
def __get_neutral(rows, add_All_if_empty=True):
neutral = {l: 0 for l in [i for k in [list(v.keys()) for v in rows] for i in k]}
if add_All_if_empty and len(neutral.keys()) <= 1:
neutral = {"All": 0}
return neutral
def __merge_rows_with_neutral(rows, neutral):
for i in range(len(rows)):
rows[i] = {**neutral, **rows[i]}
return rows
def __get_domains_errors_4xx_and_5xx(status, project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), density=6, **args):
step_size = __get_step_size(startTimestamp, endTimestamp, density, factor=1)
pg_sub_query_subset = __get_constraints(project_id=project_id, time_constraint=True, chart=False, data=args)
pg_sub_query_chart = __get_constraints(project_id=project_id, time_constraint=False, chart=True,
data=args, main_table="requests", time_column="timestamp", project=False,
duration=False)
pg_sub_query_subset.append("requests.status_code/100 = %(status_code)s")
with pg_client.PostgresClient() as cur:
pg_query = f"""WITH requests AS (SELECT host, timestamp
FROM events_common.requests INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query_subset)}
)
SELECT generated_timestamp AS timestamp,
COALESCE(JSONB_AGG(requests) FILTER ( WHERE requests IS NOT NULL ), '[]'::JSONB) AS keys
FROM generate_series(%(startTimestamp)s, %(endTimestamp)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL ( SELECT requests.host, COUNT(*) AS count
FROM requests
WHERE {" AND ".join(pg_sub_query_chart)}
GROUP BY host
ORDER BY count DESC
LIMIT 5
) AS requests ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp;"""
params = {"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp,
"step_size": step_size,
"status_code": status, **__get_constraint_values(args)}
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
rows = __nested_array_to_dict_array(rows, key="host")
neutral = __get_neutral(rows)
rows = __merge_rows_with_neutral(rows, neutral)
return rows
def get_domains_errors_4xx(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), density=6, **args):
return __get_domains_errors_4xx_and_5xx(status=4, project_id=project_id, startTimestamp=startTimestamp,
endTimestamp=endTimestamp, density=density, **args)
def get_domains_errors_5xx(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), density=6, **args):
return __get_domains_errors_4xx_and_5xx(status=5, project_id=project_id, startTimestamp=startTimestamp,
endTimestamp=endTimestamp, density=density, **args)
def __nested_array_to_dict_array(rows, key="url_host", value="count"):
for r in rows:
for i in range(len(r["keys"])):
r[r["keys"][i][key]] = r["keys"][i][value]
r.pop("keys")
return rows
def get_errors_per_domains(project_id, limit, page, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), **args):
pg_sub_query = __get_constraints(project_id=project_id, data=args)
pg_sub_query.append("requests.success = FALSE")
params = {"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp,
"limit_s": (page - 1) * limit,
"limit_e": page * limit,
**__get_constraint_values(args)}
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT COALESCE(SUM(errors_count),0)::INT AS count,
COUNT(raw.domain) AS total,
jsonb_agg(raw) FILTER ( WHERE rn > %(limit_s)s
AND rn <= %(limit_e)s ) AS values
FROM (SELECT requests.host AS domain,
COUNT(requests.session_id) AS errors_count,
row_number() over (ORDER BY COUNT(requests.session_id) DESC ) AS rn
FROM events_common.requests
INNER JOIN sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
GROUP BY requests.host
ORDER BY errors_count DESC) AS raw;"""
pg_query = cur.mogrify(pg_query, params)
logger.debug("-----------")
logger.debug(pg_query)
logger.debug("-----------")
cur.execute(pg_query)
row = cur.fetchone()
if row:
row["values"] = row["values"] or []
for r in row["values"]:
r.pop("rn")
return helper.dict_to_camel_case(row)
def get_errors_per_type(project_id, startTimestamp=TimeUTC.now(delta_days=-1), endTimestamp=TimeUTC.now(),
platform=None, density=7, **args):
step_size = __get_step_size(startTimestamp, endTimestamp, density, factor=1)
pg_sub_query_subset = __get_constraints(project_id=project_id, data=args)
pg_sub_query_subset.append("requests.timestamp>=%(startTimestamp)s")
pg_sub_query_subset.append("requests.timestamp<%(endTimestamp)s")
pg_sub_query_subset.append("requests.status_code > 200")
pg_sub_query_subset_e = __get_constraints(project_id=project_id, data=args, duration=False, main_table="m_errors",
time_constraint=False)
pg_sub_query_chart = __get_constraints(project_id=project_id, time_constraint=False,
chart=True, data=args, main_table="", time_column="timestamp",
project=False, duration=False)
pg_sub_query_subset_e.append("timestamp>=%(startTimestamp)s")
pg_sub_query_subset_e.append("timestamp<%(endTimestamp)s")
with pg_client.PostgresClient() as cur:
pg_query = f"""WITH requests AS (SELECT status_code AS status, timestamp
FROM events_common.requests
INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query_subset)}
),
errors_integ AS (SELECT timestamp
FROM events.errors
INNER JOIN public.errors AS m_errors USING (error_id)
WHERE {" AND ".join(pg_sub_query_subset_e)}
AND source != 'js_exception'
),
errors_js AS (SELECT timestamp
FROM events.errors
INNER JOIN public.errors AS m_errors USING (error_id)
WHERE {" AND ".join(pg_sub_query_subset_e)}
AND source = 'js_exception'
)
SELECT generated_timestamp AS timestamp,
COALESCE(SUM(CASE WHEN status / 100 = 4 THEN 1 ELSE 0 END), 0) AS _4xx,
COALESCE(SUM(CASE WHEN status / 100 = 5 THEN 1 ELSE 0 END), 0) AS _5xx,
COALESCE((SELECT COUNT(*)
FROM errors_js
WHERE {" AND ".join(pg_sub_query_chart)}
), 0) AS js,
COALESCE((SELECT COUNT(*)
FROM errors_integ
WHERE {" AND ".join(pg_sub_query_chart)}
), 0) AS integrations
FROM generate_series(%(startTimestamp)s, %(endTimestamp)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT status
FROM requests
WHERE {" AND ".join(pg_sub_query_chart)}
) AS errors_partition ON (TRUE)
GROUP BY timestamp
ORDER BY timestamp;"""
params = {"step_size": step_size,
"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args)}
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
rows = helper.list_to_camel_case(rows)
return rows
def get_impacted_sessions_by_js_errors(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), density=7, **args):
step_size = __get_step_size(startTimestamp, endTimestamp, density, factor=1)
pg_sub_query = __get_constraints(project_id=project_id, data=args)
pg_sub_query_chart = __get_constraints(project_id=project_id, time_constraint=True,
chart=True, data=args)
pg_sub_query.append("m_errors.source = 'js_exception'")
pg_sub_query.append("m_errors.project_id = %(project_id)s")
pg_sub_query.append("errors.timestamp >= %(startTimestamp)s")
pg_sub_query.append("errors.timestamp < %(endTimestamp)s")
pg_sub_query_chart.append("m_errors.source = 'js_exception'")
pg_sub_query_chart.append("m_errors.project_id = %(project_id)s")
pg_sub_query_chart.append("errors.timestamp >= generated_timestamp")
pg_sub_query_chart.append("errors.timestamp < generated_timestamp+ %(step_size)s")
pg_sub_query_subset = __get_constraints(project_id=project_id, data=args, duration=False, main_table="m_errors",
time_constraint=False)
pg_sub_query_chart = __get_constraints(project_id=project_id, time_constraint=False,
chart=True, data=args, main_table="errors", time_column="timestamp",
project=False, duration=False)
pg_sub_query_subset.append("m_errors.source = 'js_exception'")
pg_sub_query_subset.append("errors.timestamp>=%(startTimestamp)s")
pg_sub_query_subset.append("errors.timestamp<%(endTimestamp)s")
with pg_client.PostgresClient() as cur:
pg_query = f"""WITH errors AS (SELECT DISTINCT ON (session_id,timestamp) session_id, timestamp
FROM events.errors
INNER JOIN public.errors AS m_errors USING (error_id)
WHERE {" AND ".join(pg_sub_query_subset)}
)
SELECT *
FROM (SELECT COUNT(DISTINCT session_id) AS sessions_count
FROM errors) AS counts
LEFT JOIN
(SELECT jsonb_agg(chart) AS chart
FROM (SELECT generated_timestamp AS timestamp,
COALESCE(COUNT(session_id), 0) AS sessions_count
FROM generate_series(%(startTimestamp)s, %(endTimestamp)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL ( SELECT DISTINCT session_id
FROM errors
WHERE {" AND ".join(pg_sub_query_chart)}
) AS sessions ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp) AS chart) AS chart ON (TRUE);"""
cur.execute(cur.mogrify(pg_query, {"step_size": step_size,
"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp,
**__get_constraint_values(args)}))
row_sessions = cur.fetchone()
pg_query = f"""WITH errors AS ( SELECT DISTINCT ON(errors.error_id,timestamp) errors.error_id,timestamp
FROM events.errors
INNER JOIN public.errors AS m_errors USING (error_id)
WHERE {" AND ".join(pg_sub_query_subset)}
)
SELECT *
FROM (SELECT COUNT(DISTINCT errors.error_id) AS errors_count
FROM errors) AS counts
LEFT JOIN
(SELECT jsonb_agg(chart) AS chart
FROM (SELECT generated_timestamp AS timestamp,
COALESCE(COUNT(error_id), 0) AS errors_count
FROM generate_series(%(startTimestamp)s, %(endTimestamp)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL ( SELECT DISTINCT errors.error_id
FROM errors
WHERE {" AND ".join(pg_sub_query_chart)}
) AS errors ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp) AS chart) AS chart ON (TRUE);"""
cur.execute(cur.mogrify(pg_query, {"step_size": step_size,
"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp,
**__get_constraint_values(args)}))
row_errors = cur.fetchone()
chart = __merge_charts(row_sessions.pop("chart"), row_errors.pop("chart"))
row_sessions = helper.dict_to_camel_case(row_sessions)
row_errors = helper.dict_to_camel_case(row_errors)
return {**row_sessions, **row_errors, "chart": chart}
def get_resources_by_party(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), density=7, **args):
step_size = __get_step_size(startTimestamp, endTimestamp, density, factor=1)
pg_sub_query_subset = __get_constraints(project_id=project_id, time_constraint=True,
chart=False, data=args)
pg_sub_query_chart = __get_constraints(project_id=project_id, time_constraint=False, project=False,
chart=True, data=args, main_table="requests", time_column="timestamp",
duration=False)
pg_sub_query_subset.append("requests.timestamp >= %(startTimestamp)s")
pg_sub_query_subset.append("requests.timestamp < %(endTimestamp)s")
# pg_sub_query_subset.append("resources.type IN ('fetch', 'script')")
pg_sub_query_subset.append("requests.success = FALSE")
with pg_client.PostgresClient() as cur:
pg_query = f"""WITH requests AS (
SELECT requests.host, timestamp
FROM events_common.requests
INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query_subset)}
)
SELECT generated_timestamp AS timestamp,
SUM(CASE WHEN first.host = sub_requests.host THEN 1 ELSE 0 END) AS first_party,
SUM(CASE WHEN first.host != sub_requests.host THEN 1 ELSE 0 END) AS third_party
FROM generate_series(%(startTimestamp)s, %(endTimestamp)s, %(step_size)s) AS generated_timestamp
LEFT JOIN (
SELECT requests.host,
COUNT(requests.session_id) AS count
FROM events_common.requests
INNER JOIN public.sessions USING (session_id)
WHERE sessions.project_id = '1'
AND sessions.start_ts > (EXTRACT(EPOCH FROM now() - INTERVAL '31 days') * 1000)::BIGINT
AND sessions.start_ts < (EXTRACT(EPOCH FROM now()) * 1000)::BIGINT
AND requests.timestamp > (EXTRACT(EPOCH FROM now() - INTERVAL '31 days') * 1000)::BIGINT
AND requests.timestamp < (EXTRACT(EPOCH FROM now()) * 1000)::BIGINT
AND sessions.duration>0
GROUP BY requests.host
ORDER BY count DESC
LIMIT 1
) AS first ON (TRUE)
LEFT JOIN LATERAL (
SELECT requests.host
FROM requests
WHERE {" AND ".join(pg_sub_query_chart)}
) AS sub_requests ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp;"""
cur.execute(cur.mogrify(pg_query, {"step_size": step_size,
"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args)}))
rows = cur.fetchall()
return rows
def get_user_activity_avg_visited_pages(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), **args):
with pg_client.PostgresClient() as cur:
row = __get_user_activity_avg_visited_pages(cur, project_id, startTimestamp, endTimestamp, **args)
results = helper.dict_to_camel_case(row)
results["chart"] = __get_user_activity_avg_visited_pages_chart(cur, project_id, startTimestamp,
endTimestamp, **args)
diff = endTimestamp - startTimestamp
endTimestamp = startTimestamp
startTimestamp = endTimestamp - diff
row = __get_user_activity_avg_visited_pages(cur, project_id, startTimestamp, endTimestamp, **args)
previous = helper.dict_to_camel_case(row)
results["progress"] = helper.__progress(old_val=previous["value"], new_val=results["value"])
results["unit"] = schemas.TemplatePredefinedUnits.COUNT
return results
def __get_user_activity_avg_visited_pages(cur, project_id, startTimestamp, endTimestamp, **args):
pg_sub_query = __get_constraints(project_id=project_id, data=args)
pg_sub_query.append("sessions.pages_count>0")
pg_query = f"""SELECT COALESCE(CEIL(AVG(sessions.pages_count)),0) AS value
FROM public.sessions
WHERE {" AND ".join(pg_sub_query)};"""
params = {"project_id": project_id, "startTimestamp": startTimestamp, "endTimestamp": endTimestamp,
**__get_constraint_values(args)}
cur.execute(cur.mogrify(pg_query, params))
row = cur.fetchone()
return row
def __get_user_activity_avg_visited_pages_chart(cur, project_id, startTimestamp, endTimestamp, density=20, **args):
step_size = __get_step_size(endTimestamp=endTimestamp, startTimestamp=startTimestamp, density=density, factor=1)
params = {"step_size": step_size, "project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp}
pg_sub_query_subset = __get_constraints(project_id=project_id, time_constraint=True,
chart=False, data=args)
pg_sub_query_chart = __get_constraints(project_id=project_id, time_constraint=False, project=False,
chart=True, data=args, main_table="sessions", time_column="start_ts",
duration=False)
pg_sub_query_subset.append("sessions.duration IS NOT NULL")
pg_query = f"""WITH sessions AS(SELECT sessions.pages_count, sessions.start_ts
FROM public.sessions
WHERE {" AND ".join(pg_sub_query_subset)}
)
SELECT generated_timestamp AS timestamp,
COALESCE(AVG(sessions.pages_count),0) AS value
FROM generate_series(%(startTimestamp)s, %(endTimestamp)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL (
SELECT sessions.pages_count
FROM sessions
WHERE {" AND ".join(pg_sub_query_chart)}
) AS sessions ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp;"""
cur.execute(cur.mogrify(pg_query, {**params, **__get_constraint_values(args)}))
rows = cur.fetchall()
return rows
def get_top_metrics_count_requests(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), value=None, density=20, **args):
step_size = __get_step_size(endTimestamp=endTimestamp, startTimestamp=startTimestamp, density=density, factor=1)
params = {"step_size": step_size, "project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp}
pg_sub_query = __get_constraints(project_id=project_id, data=args)
pg_sub_query_chart = __get_constraints(project_id=project_id, time_constraint=False, project=False,
chart=True, data=args, main_table="pages", time_column="timestamp",
duration=False)
if value is not None:
pg_sub_query.append("pages.path = %(value)s")
pg_sub_query_chart.append("pages.path = %(value)s")
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT COUNT(pages.session_id) AS value
FROM events.pages INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)};"""
cur.execute(cur.mogrify(pg_query, {"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp,
"value": value, **__get_constraint_values(args)}))
row = cur.fetchone()
pg_query = f"""WITH pages AS(SELECT pages.timestamp
FROM events.pages INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
)
SELECT generated_timestamp AS timestamp,
COUNT(pages.*) AS value
FROM generate_series(%(startTimestamp)s, %(endTimestamp)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL (
SELECT 1
FROM pages
WHERE {" AND ".join(pg_sub_query_chart)}
) AS pages ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp;"""
cur.execute(cur.mogrify(pg_query, {**params, **__get_constraint_values(args)}))
rows = cur.fetchall()
row["chart"] = rows
row["unit"] = schemas.TemplatePredefinedUnits.COUNT
return helper.dict_to_camel_case(row)
def get_unique_users(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(),
density=7, **args):
step_size = __get_step_size(startTimestamp, endTimestamp, density, factor=1)
pg_sub_query = __get_constraints(project_id=project_id, data=args)
pg_sub_query_chart = __get_constraints(project_id=project_id, time_constraint=True,
chart=True, data=args)
pg_sub_query.append("user_id IS NOT NULL")
pg_sub_query.append("user_id != ''")
pg_sub_query_chart.append("user_id IS NOT NULL")
pg_sub_query_chart.append("user_id != ''")
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT generated_timestamp AS timestamp,
COALESCE(COUNT(sessions), 0) AS value
FROM generate_series(%(startTimestamp)s, %(endTimestamp)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL ( SELECT DISTINCT user_id
FROM public.sessions
WHERE {" AND ".join(pg_sub_query_chart)}
) AS sessions ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp;"""
params = {"step_size": step_size, "project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args)}
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
results = {
"value": sum([r["value"] for r in rows]),
"chart": rows
}
diff = endTimestamp - startTimestamp
endTimestamp = startTimestamp
startTimestamp = endTimestamp - diff
pg_query = f"""SELECT COUNT(DISTINCT sessions.user_id) AS count
FROM public.sessions
WHERE {" AND ".join(pg_sub_query)};"""
params = {"project_id": project_id, "startTimestamp": startTimestamp, "endTimestamp": endTimestamp,
**__get_constraint_values(args)}
cur.execute(cur.mogrify(pg_query, params))
count = cur.fetchone()["count"]
results["progress"] = helper.__progress(old_val=count, new_val=results["value"])
results["unit"] = schemas.TemplatePredefinedUnits.COUNT
return results
def get_speed_index_location(project_id, startTimestamp=TimeUTC.now(delta_days=-1),
endTimestamp=TimeUTC.now(), **args):
pg_sub_query = __get_constraints(project_id=project_id, data=args)
pg_sub_query.append("pages.speed_index IS NOT NULL")
pg_sub_query.append("pages.speed_index>0")
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT sessions.user_country, AVG(pages.speed_index) AS value
FROM events.pages INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
GROUP BY sessions.user_country
ORDER BY value, sessions.user_country;"""
params = {"project_id": project_id,
"startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args)}
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
if len(rows) > 0:
pg_query = f"""SELECT AVG(pages.speed_index) AS avg
FROM events.pages INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)};"""
cur.execute(cur.mogrify(pg_query, params))
avg = cur.fetchone()["avg"]
else:
avg = 0
return {"value": avg, "chart": helper.list_to_camel_case(rows), "unit": schemas.TemplatePredefinedUnits.MILLISECOND}

View file

@ -0,0 +1,10 @@
import logging
from decouple import config
logger = logging.getLogger(__name__)
if config("EXP_METRICS", cast=bool, default=False):
logger.info(">>> Using experimental metrics")
else:
pass
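The new module is a pure feature flag: python-decouple reads EXP_METRICS from the environment (or a .env file), and cast=bool maps strings like "true"/"1"/"yes" to True. An illustrative check:
import os
from decouple import config

os.environ["EXP_METRICS"] = "true"  # illustrative; normally set in the deployment environment
assert config("EXP_METRICS", cast=bool, default=False) is True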

View file

@ -1,44 +1,19 @@
import json
import logging
from decouple import config
from fastapi import HTTPException, status
import schemas
from chalicelib.core import funnels, issues, heatmaps, sessions_insights, sessions_mobs, sessions_favorite, \
product_analytics, custom_metrics_predefined
from chalicelib.core import issues
from chalicelib.core.errors import errors
from chalicelib.core.metrics import heatmaps, product_analytics, funnels
from chalicelib.core.sessions import sessions, sessions_search
from chalicelib.utils import helper, pg_client
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.storage import extra
if config("EXP_ERRORS_SEARCH", cast=bool, default=False):
logging.info(">>> Using experimental error search")
from . import errors_exp as errors
else:
from . import errors as errors
if config("EXP_SESSIONS_SEARCH_METRIC", cast=bool, default=False):
from chalicelib.core import sessions
else:
from chalicelib.core import sessions_legacy as sessions
logger = logging.getLogger(__name__)
# TODO: refactor this to split
# timeseries /
# table of errors / table of issues / table of browsers / table of devices / table of countries / table of URLs
# remove "table of" calls from this function
def __try_live(project_id, data: schemas.CardSchema):
results = []
for i, s in enumerate(data.series):
results.append(sessions.search2_series(data=s.filter, project_id=project_id, density=data.density,
view_type=data.view_type, metric_type=data.metric_type,
metric_of=data.metric_of, metric_value=data.metric_value))
return results
def __get_table_of_series(project_id, data: schemas.CardSchema):
results = []
for i, s in enumerate(data.series):
@@ -56,9 +31,6 @@ def __get_funnel_chart(project: schemas.ProjectContext, data: schemas.CardFunnel
"totalDropDueToIssues": 0
}
# return funnels.get_top_insights_on_the_fly_widget(project_id=project_id,
# data=data.series[0].filter,
# metric_format=data.metric_format)
return funnels.get_simple_funnel(project=project,
data=data.series[0].filter,
metric_format=data.metric_format)
@@ -70,7 +42,7 @@ def __get_errors_list(project: schemas.ProjectContext, user_id, data: schemas.Ca
"total": 0,
"errors": []
}
return errors.search(data.series[0].filter, project_id=project.project_id, user_id=user_id)
return errors.search(data.series[0].filter, project=project, user_id=user_id)
def __get_sessions_list(project: schemas.ProjectContext, user_id, data: schemas.CardSchema):
@@ -80,11 +52,11 @@ def __get_sessions_list(project: schemas.ProjectContext, user_id, data: schemas.
"total": 0,
"sessions": []
}
return sessions.search_sessions(data=data.series[0].filter, project_id=project.project_id, user_id=user_id)
return sessions_search.search_sessions(data=data.series[0].filter, project=project, user_id=user_id)
def __get_heat_map_chart(project: schemas.ProjectContext, user_id, data: schemas.CardHeatMap,
include_mobs: bool = True):
def get_heat_map_chart(project: schemas.ProjectContext, user_id, data: schemas.CardHeatMap,
include_mobs: bool = True):
if len(data.series) == 0:
return None
data.series[0].filter.filters += data.series[0].filter.events
@@ -95,15 +67,6 @@ def __get_heat_map_chart(project: schemas.ProjectContext, user_id, data: schemas
include_mobs=include_mobs)
# EE only
def __get_insights_chart(project: schemas.ProjectContext, data: schemas.CardInsights, user_id: int = None):
return sessions_insights.fetch_selected(project_id=project.project_id,
data=schemas.GetInsightsSchema(startTimestamp=data.startTimestamp,
endTimestamp=data.endTimestamp,
metricValue=data.metric_value,
series=data.series))
def __get_path_analysis_chart(project: schemas.ProjectContext, user_id: int, data: schemas.CardPathAnalysis):
if len(data.series) == 0:
data.series.append(
@@ -115,7 +78,12 @@ def __get_path_analysis_chart(project: schemas.ProjectContext, user_id: int, dat
def __get_timeseries_chart(project: schemas.ProjectContext, data: schemas.CardTimeSeries, user_id: int = None):
series_charts = __try_live(project_id=project.project_id, data=data)
series_charts = []
for i, s in enumerate(data.series):
series_charts.append(sessions.search2_series(data=s.filter, project_id=project.project_id, density=data.density,
metric_type=data.metric_type, metric_of=data.metric_of,
metric_value=data.metric_value))
results = [{}] * len(series_charts[0])
for i in range(len(results)):
for j, series_chart in enumerate(series_charts):
@@ -185,40 +153,28 @@ def __get_table_chart(project: schemas.ProjectContext, data: schemas.CardTable,
def get_chart(project: schemas.ProjectContext, data: schemas.CardSchema, user_id: int):
if data.is_predefined:
return custom_metrics_predefined.get_metric(key=data.metric_of,
project_id=project.project_id,
data=data.model_dump())
supported = {
schemas.MetricType.TIMESERIES: __get_timeseries_chart,
schemas.MetricType.TABLE: __get_table_chart,
schemas.MetricType.HEAT_MAP: __get_heat_map_chart,
schemas.MetricType.HEAT_MAP: get_heat_map_chart,
schemas.MetricType.FUNNEL: __get_funnel_chart,
schemas.MetricType.INSIGHTS: __get_insights_chart,
schemas.MetricType.PATH_ANALYSIS: __get_path_analysis_chart
}
return supported.get(data.metric_type, not_supported)(project=project, data=data, user_id=user_id)
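The handlers above are resolved through a plain dict dispatch with not_supported as the fallback. A standalone sketch of the same pattern (handler bodies hypothetical):

def not_supported(**kwargs):
    raise ValueError("metric type not supported")

def timeseries_chart(**kwargs):
    return {"chart": []}

supported = {"timeseries": timeseries_chart}
handler = supported.get("table", not_supported)  # unknown type falls through to not_supported
try:
    handler()
except ValueError as e:
    print(e)  # metric type not supported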
def get_sessions_by_card_id(project_id, user_id, metric_id, data: schemas.CardSessionsSchema):
# No need for this because the UI sends the full payload
# card: dict = get_card(metric_id=metric_id, project_id=project_id, user_id=user_id, flatten=False)
# if card is None:
# return None
# metric: schemas.CardSchema = schemas.CardSchema(**card)
# metric: schemas.CardSchema = __merge_metric_with_data(metric=metric, data=data)
if not card_exists(metric_id=metric_id, project_id=project_id, user_id=user_id):
def get_sessions_by_card_id(project: schemas.ProjectContext, user_id, metric_id, data: schemas.CardSessionsSchema):
if not card_exists(metric_id=metric_id, project_id=project.project_id, user_id=user_id):
return None
results = []
for s in data.series:
results.append({"seriesId": s.series_id, "seriesName": s.name,
**sessions.search_sessions(data=s.filter, project_id=project_id, user_id=user_id)})
**sessions_search.search_sessions(data=s.filter, project=project, user_id=user_id)})
return results
def get_sessions(project_id, user_id, data: schemas.CardSessionsSchema):
def get_sessions(project: schemas.ProjectContext, user_id, data: schemas.CardSessionsSchema):
results = []
if len(data.series) == 0:
return results
@@ -228,31 +184,33 @@ def get_sessions(project_id, user_id, data: schemas.CardSessionsSchema):
s.filter = schemas.SessionsSearchPayloadSchema(**s.filter.model_dump(by_alias=True))
results.append({"seriesId": None, "seriesName": s.name,
**sessions.search_sessions(data=s.filter, project_id=project_id, user_id=user_id)})
**sessions_search.search_sessions(data=s.filter, project=project, user_id=user_id)})
return results
def get_issues(project: schemas.ProjectContext, user_id: int, data: schemas.CardSchema):
if data.is_predefined:
return not_supported()
if data.metric_of == schemas.MetricOfTable.ISSUES:
return __get_table_of_issues(project=project, user_id=user_id, data=data)
supported = {
schemas.MetricType.TIMESERIES: not_supported,
schemas.MetricType.TABLE: not_supported,
schemas.MetricType.HEAT_MAP: not_supported,
schemas.MetricType.INSIGHTS: not_supported,
schemas.MetricType.PATH_ANALYSIS: not_supported,
}
return supported.get(data.metric_type, not_supported)()
def __get_path_analysis_card_info(data: schemas.CardPathAnalysis):
def get_global_card_info(data: schemas.CardSchema):
r = {"hideExcess": data.hide_excess, "compareTo": data.compare_to, "rows": data.rows}
return r
def get_path_analysis_card_info(data: schemas.CardPathAnalysis):
r = {"start_point": [s.model_dump() for s in data.start_point],
"start_type": data.start_type,
"excludes": [e.model_dump() for e in data.excludes],
"hideExcess": data.hide_excess}
"rows": data.rows}
return r
@@ -263,25 +221,11 @@ def create_card(project: schemas.ProjectContext, user_id, data: schemas.CardSche
if data.session_id is not None:
session_data = {"sessionId": data.session_id}
else:
session_data = __get_heat_map_chart(project=project, user_id=user_id,
data=data, include_mobs=False)
session_data = get_heat_map_chart(project=project, user_id=user_id,
data=data, include_mobs=False)
if session_data is not None:
session_data = {"sessionId": session_data["sessionId"]}
if session_data is not None:
# for EE only
keys = sessions_mobs. \
__get_mob_keys(project_id=project.project_id, session_id=session_data["sessionId"])
keys += sessions_mobs. \
__get_mob_keys_deprecated(session_id=session_data["sessionId"]) # To support old sessions
tag = config('RETENTION_L_VALUE', default='vault')
for k in keys:
try:
extra.tag_session(file_key=k, tag_value=tag)
except Exception as e:
logger.warning(f"!!!Error while tagging: {k} to {tag} for heatMap")
logger.error(str(e))
_data = {"session_data": json.dumps(session_data) if session_data is not None else None}
for i, s in enumerate(data.series):
for k in s.model_dump().keys():
@@ -291,8 +235,10 @@ def create_card(project: schemas.ProjectContext, user_id, data: schemas.CardSche
series_len = len(data.series)
params = {"user_id": user_id, "project_id": project.project_id, **data.model_dump(), **_data,
"default_config": json.dumps(data.default_config.model_dump()), "card_info": None}
params["card_info"] = get_global_card_info(data=data)
if data.metric_type == schemas.MetricType.PATH_ANALYSIS:
params["card_info"] = json.dumps(__get_path_analysis_card_info(data=data))
params["card_info"] = {**params["card_info"], **get_path_analysis_card_info(data=data)}
params["card_info"] = json.dumps(params["card_info"])
query = """INSERT INTO metrics (project_id, user_id, name, is_public,
view_type, metric_type, metric_of, metric_value,
@@ -352,16 +298,18 @@ def update_card(metric_id, user_id, project_id, data: schemas.CardSchema):
if i not in u_series_ids:
d_series_ids.append(i)
params["d_series_ids"] = tuple(d_series_ids)
params["card_info"] = None
params["session_data"] = json.dumps(metric["data"])
params["card_info"] = get_global_card_info(data=data)
if data.metric_type == schemas.MetricType.PATH_ANALYSIS:
params["card_info"] = json.dumps(__get_path_analysis_card_info(data=data))
params["card_info"] = {**params["card_info"], **get_path_analysis_card_info(data=data)}
elif data.metric_type == schemas.MetricType.HEAT_MAP:
if data.session_id is not None:
params["session_data"] = json.dumps({"sessionId": data.session_id})
elif metric.get("data") and metric["data"].get("sessionId"):
params["session_data"] = json.dumps({"sessionId": metric["data"]["sessionId"]})
params["card_info"] = json.dumps(params["card_info"])
with pg_client.PostgresClient() as cur:
sub_queries = []
if len(n_series) > 0:
@@ -404,6 +352,100 @@ def update_card(metric_id, user_id, project_id, data: schemas.CardSchema):
return get_card(metric_id=metric_id, project_id=project_id, user_id=user_id)
def search_metrics(project_id, user_id, data: schemas.MetricSearchSchema, include_series=False):
constraints = ["metrics.project_id = %(project_id)s", "metrics.deleted_at ISNULL"]
params = {
"project_id": project_id,
"user_id": user_id,
"offset": (data.page - 1) * data.limit,
"limit": data.limit,
}
if data.mine_only:
constraints.append("user_id = %(user_id)s")
else:
constraints.append("(user_id = %(user_id)s OR metrics.is_public)")
if data.shared_only:
constraints.append("is_public")
if data.filter is not None:
if data.filter.type:
constraints.append("metrics.metric_type = %(filter_type)s")
params["filter_type"] = data.filter.type
if data.filter.query and len(data.filter.query) > 0:
constraints.append("(metrics.name ILIKE %(filter_query)s OR owner.owner_name ILIKE %(filter_query)s)")
params["filter_query"] = helper.values_for_operator(
value=data.filter.query, op=schemas.SearchEventOperator.CONTAINS
)
with pg_client.PostgresClient() as cur:
sub_join = ""
if include_series:
sub_join = """LEFT JOIN LATERAL (
SELECT COALESCE(jsonb_agg(metric_series.* ORDER BY index),'[]'::jsonb) AS series
FROM metric_series
WHERE metric_series.metric_id = metrics.metric_id
AND metric_series.deleted_at ISNULL
) AS metric_series ON (TRUE)"""
sort_column = data.sort.field if data.sort.field is not None and len(data.sort.field) > 0 \
else "created_at"
# change ascend to asc and descend to desc
sort_order = data.sort.order.value if hasattr(data.sort.order, "value") else data.sort.order
if sort_order == "ascend":
sort_order = "asc"
elif sort_order == "descend":
sort_order = "desc"
query = cur.mogrify(
f"""SELECT count(1) OVER () AS total,metric_id, project_id, user_id, name, is_public, created_at, edited_at,
metric_type, metric_of, metric_format, metric_value, view_type, is_pinned,
dashboards, owner_email, owner_name, default_config AS config, thumbnail
FROM metrics
{sub_join}
LEFT JOIN LATERAL (
SELECT COALESCE(jsonb_agg(connected_dashboards.* ORDER BY is_public, name),'[]'::jsonb) AS dashboards
FROM (
SELECT DISTINCT dashboard_id, name, is_public
FROM dashboards
INNER JOIN dashboard_widgets USING (dashboard_id)
WHERE deleted_at ISNULL
AND dashboard_widgets.metric_id = metrics.metric_id
AND project_id = %(project_id)s
AND ((dashboards.user_id = %(user_id)s OR is_public))
) AS connected_dashboards
) AS connected_dashboards ON (TRUE)
LEFT JOIN LATERAL (
SELECT email AS owner_email, name AS owner_name
FROM users
WHERE deleted_at ISNULL
AND users.user_id = metrics.user_id
) AS owner ON (TRUE)
WHERE {" AND ".join(constraints)}
ORDER BY {sort_column} {sort_order}
LIMIT %(limit)s OFFSET %(offset)s;""",
params
)
cur.execute(query)
rows = cur.fetchall()
if len(rows) > 0:
total = rows[0]["total"]
if include_series:
for r in rows:
r.pop("total")
for s in r.get("series", []):
s["filter"] = helper.old_search_payload_to_flat(s["filter"])
else:
for r in rows:
r.pop("total")
r["created_at"] = TimeUTC.datetime_to_timestamp(r["created_at"])
r["edited_at"] = TimeUTC.datetime_to_timestamp(r["edited_at"])
rows = helper.list_to_camel_case(rows)
else:
total = 0
return {"total": total, "list": rows}
def search_all(project_id, user_id, data: schemas.SearchCardsSchema, include_series=False):
constraints = ["metrics.project_id = %(project_id)s",
"metrics.deleted_at ISNULL"]
@@ -492,26 +534,20 @@ def delete_card(project_id, metric_id, user_id):
RETURNING data;""",
{"metric_id": metric_id, "project_id": project_id, "user_id": user_id})
)
# for EE only
row = cur.fetchone()
if row:
if row["data"] and not sessions_favorite.favorite_session_exists(session_id=row["data"]["sessionId"]):
keys = sessions_mobs. \
__get_mob_keys(project_id=project_id, session_id=row["data"]["sessionId"])
keys += sessions_mobs. \
__get_mob_keys_deprecated(session_id=row["data"]["sessionId"]) # To support old sessions
tag = config('RETENTION_D_VALUE', default='default')
for k in keys:
try:
extra.tag_session(file_key=k, tag_value=tag)
except Exception as e:
logger.warning(f"!!!Error while tagging: {k} to {tag} for heatMap")
logger.error(str(e))
return {"state": "success"}
def __get_global_attributes(row):
if row is None or row.get("cardInfo") is None:
return row
card_info = row.get("cardInfo", {})
row["compareTo"] = card_info["compareTo"] if card_info.get("compareTo") is not None else []
return row
def __get_path_analysis_attributes(row):
card_info = row.pop("cardInfo")
card_info = row.get("cardInfo", {})
row["excludes"] = card_info.get("excludes", [])
row["startPoint"] = card_info.get("startPoint", [])
row["startType"] = card_info.get("startType", "start")
@@ -564,6 +600,8 @@ def get_card(metric_id, project_id, user_id, flatten: bool = True, include_data:
row = helper.dict_to_camel_case(row)
if row["metricType"] == schemas.MetricType.PATH_ANALYSIS:
row = __get_path_analysis_attributes(row=row)
row = __get_global_attributes(row=row)
row.pop("cardInfo")
return row
@@ -605,17 +643,7 @@ def change_state(project_id, metric_id, user_id, status):
def get_funnel_sessions_by_issue(user_id, project_id, metric_id, issue_id,
data: schemas.CardSessionsSchema
# , range_value=None, start_date=None, end_date=None
):
# No need for this because the UI sends the full payload
# card: dict = get_card(metric_id=metric_id, project_id=project_id, user_id=user_id, flatten=False)
# if card is None:
# return None
# metric: schemas.CardSchema = schemas.CardSchema(**card)
# metric: schemas.CardSchema = __merge_metric_with_data(metric=metric, data=data)
# if metric is None:
# return None
data: schemas.CardSessionsSchema):
if not card_exists(metric_id=metric_id, project_id=project_id, user_id=user_id):
return None
for s in data.series:
@@ -657,11 +685,7 @@ def make_chart_from_card(project: schemas.ProjectContext, user_id, metric_id, da
raw_metric["density"] = data.density
metric: schemas.CardSchema = schemas.CardSchema(**raw_metric)
if metric.is_predefined:
return custom_metrics_predefined.get_metric(key=metric.metric_of,
project_id=project.project_id,
data=data.model_dump())
elif metric.metric_type == schemas.MetricType.HEAT_MAP:
if metric.metric_type == schemas.MetricType.HEAT_MAP:
if raw_metric["data"] and raw_metric["data"].get("sessionId"):
return heatmaps.get_selected_session(project_id=project.project_id,
session_id=raw_metric["data"]["sessionId"])


@@ -1,7 +1,7 @@
import json
import schemas
from chalicelib.core import custom_metrics
from chalicelib.core.metrics import custom_metrics
from chalicelib.utils import helper
from chalicelib.utils import pg_client
from chalicelib.utils.TimeUTC import TimeUTC


@@ -1,7 +1,7 @@
from typing import List
import schemas
from chalicelib.core import significance
from chalicelib.core.metrics.modules import significance
from chalicelib.utils import helper
from chalicelib.utils import sql_helper as sh


@@ -0,0 +1,11 @@
import logging
from decouple import config
logger = logging.getLogger(__name__)
if config("EXP_METRICS", cast=bool, default=False):
logger.info(">>> Using experimental heatmaps")
from .heatmaps_ch import *
else:
from .heatmaps import *


@@ -1,7 +1,8 @@
import logging
import schemas
from chalicelib.core import sessions_mobs, sessions
from chalicelib.core import sessions
from chalicelib.core.sessions import sessions_mobs
from chalicelib.utils import pg_client, helper
from chalicelib.utils import sql_helper as sh


@@ -0,0 +1,385 @@
import logging
from decouple import config
import schemas
from chalicelib.core import events
from chalicelib.core.metrics.modules import sessions, sessions_mobs
from chalicelib.utils import sql_helper as sh
from chalicelib.utils import pg_client, helper, ch_client, exp_ch_helper
logger = logging.getLogger(__name__)
def get_by_url(project_id, data: schemas.GetHeatMapPayloadSchema):
if data.url is None or data.url == "":
return []
args = {"startDate": data.startTimestamp, "endDate": data.endTimestamp,
"project_id": project_id, "url": data.url}
constraints = [
"main_events.project_id = toUInt16(%(project_id)s)",
"main_events.created_at >= toDateTime(%(startDate)s / 1000)",
"main_events.created_at <= toDateTime(%(endDate)s / 1000)",
"main_events.`$event_name` = 'CLICK'",
"isNotNull(JSON_VALUE(CAST(main_events.`$properties` AS String), '$.normalized_x'))"
]
if data.operator == schemas.SearchEventOperator.IS:
constraints.append("JSON_VALUE(CAST(main_events.`$properties` AS String), '$.url_path') = %(url)s")
else:
constraints.append("JSON_VALUE(CAST(main_events.`$properties` AS String), '$.url_path') ILIKE %(url)s")
args["url"] = helper.values_for_operator(data.url, data.operator)
query_from = f"{exp_ch_helper.get_main_events_table(data.startTimestamp)} AS main_events"
# TODO: is this used?
# has_click_rage_filter = False
# if len(data.filters) > 0:
# for i, f in enumerate(data.filters):
# if f.type == schemas.FilterType.issue and len(f.value) > 0:
# has_click_rage_filter = True
# query_from += """INNER JOIN events_common.issues USING (timestamp, session_id)
# INNER JOIN issues AS mis USING (issue_id)
# INNER JOIN LATERAL (
# SELECT COUNT(1) AS real_count
# FROM events.clicks AS sc
# INNER JOIN sessions as ss USING (session_id)
# WHERE ss.project_id = 2
# AND (sc.url = %(url)s OR sc.path = %(url)s)
# AND sc.timestamp >= %(startDate)s
# AND sc.timestamp <= %(endDate)s
# AND ss.start_ts >= %(startDate)s
# AND ss.start_ts <= %(endDate)s
# AND sc.selector = clicks.selector) AS r_clicks ON (TRUE)"""
# constraints += ["mis.project_id = %(project_id)s",
# "issues.timestamp >= %(startDate)s",
# "issues.timestamp <= %(endDate)s"]
# f_k = f"issue_value{i}"
# args = {**args, **sh.multi_values(f.value, value_key=f_k)}
# constraints.append(sh.multi_conditions(f"%({f_k})s = ANY (issue_types)",
# f.value, value_key=f_k))
# constraints.append(sh.multi_conditions(f"mis.type = %({f_k})s",
# f.value, value_key=f_k))
# TODO: change this once click-rage is fixed
# if data.click_rage and not has_click_rage_filter:
# constraints.append("""(issues_t.session_id IS NULL
# OR (issues_t.datetime >= toDateTime(%(startDate)s/1000)
# AND issues_t.datetime <= toDateTime(%(endDate)s/1000)
# AND issues_t.project_id = toUInt16(%(project_id)s)
# AND issues_t.event_type = 'ISSUE'
# AND issues_t.project_id = toUInt16(%(project_id)s)
# AND mis.project_id = toUInt16(%(project_id)s)
# AND mis.type='click_rage'))""")
# query_from += """ LEFT JOIN experimental.events AS issues_t ON (main_events.session_id=issues_t.session_id)
# LEFT JOIN experimental.issues AS mis ON (issues_t.issue_id=mis.issue_id)"""
with ch_client.ClickHouseClient() as cur:
query = cur.format(query=f"""SELECT
JSON_VALUE(CAST(`$properties` AS String), '$.normalized_x') AS normalized_x,
JSON_VALUE(CAST(`$properties` AS String), '$.normalized_y') AS normalized_y
FROM {query_from}
WHERE {" AND ".join(constraints)}
LIMIT 500;""",
parameters=args)
logger.debug("---------")
logger.debug(query)
logger.debug("---------")
try:
rows = cur.execute(query=query)
except Exception as err:
logger.warning("--------- HEATMAP 2 SEARCH QUERY EXCEPTION CH -----------")
logger.warning(query)
logger.warning("--------- PAYLOAD -----------")
logger.warning(data)
logger.warning("--------------------")
raise err
return helper.list_to_camel_case(rows)
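The URL filter above toggles between an exact match and ILIKE on the JSON-extracted path. Roughly, the assembled WHERE fragment looks like this (sketch; the wrapped value is what helper.values_for_operator would typically produce for a contains-style operator):

constraints = [
    "main_events.project_id = toUInt16(%(project_id)s)",
    "main_events.`$event_name` = 'CLICK'",
    "JSON_VALUE(CAST(main_events.`$properties` AS String), '$.url_path') ILIKE %(url)s",
]
print(" AND ".join(constraints))
# with args["url"] bound to something like "%/checkout%"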
def get_x_y_by_url_and_session_id(project_id, session_id, data: schemas.GetHeatMapPayloadSchema):
args = {"project_id": project_id, "session_id": session_id, "url": data.url}
constraints = [
"main_events.project_id = toUInt16(%(project_id)s)",
"main_events.session_id = %(session_id)s",
"main_events.`$event_name`='CLICK'",
"isNotNull(JSON_VALUE(CAST(main_events.`$properties` AS String), '$.normalized_x'))"
]
if data.operator == schemas.SearchEventOperator.IS:
constraints.append("JSON_VALUE(CAST(main_events.`$properties` AS String), '$.url_path') = %(url)s")
else:
constraints.append("JSON_VALUE(CAST(main_events.`$properties` AS String), '$.url_path') ILIKE %(url)s")
args["url"] = helper.values_for_operator(data.url, data.operator)
query_from = f"{exp_ch_helper.get_main_events_table(0)} AS main_events"
with ch_client.ClickHouseClient() as cur:
query = cur.format(query=f"""SELECT main_events.normalized_x AS normalized_x,
main_events.normalized_y AS normalized_y
FROM {query_from}
WHERE {" AND ".join(constraints)};""",
parameters=args)
logger.debug("---------")
logger.debug(query)
logger.debug("---------")
try:
rows = cur.execute(query=query)
except Exception as err:
logger.warning("--------- HEATMAP-session_id SEARCH QUERY EXCEPTION CH -----------")
logger.warning(query)
logger.warning("--------- PAYLOAD -----------")
logger.warning(data)
logger.warning("--------------------")
raise err
return helper.list_to_camel_case(rows)
def get_selectors_by_url_and_session_id(project_id, session_id, data: schemas.GetHeatMapPayloadSchema):
args = {"project_id": project_id, "session_id": session_id, "url": data.url}
constraints = ["main_events.project_id = toUInt16(%(project_id)s)",
"main_events.session_id = %(session_id)s",
"main_events.`$event_name`='CLICK'"]
if data.operator == schemas.SearchEventOperator.IS:
constraints.append("JSON_VALUE(CAST(main_events.`$properties` AS String), '$.url_path') = %(url)s")
else:
constraints.append("JSON_VALUE(CAST(main_events.`$properties` AS String), '$.url_path') ILIKE %(url)s")
args["url"] = helper.values_for_operator(data.url, data.operator)
query_from = f"{exp_ch_helper.get_main_events_table(0)} AS main_events"
with ch_client.ClickHouseClient() as cur:
query = cur.format(query=f"""SELECT CAST(`$properties`.selector AS String) AS selector,
COUNT(1) AS count
FROM {query_from}
WHERE {" AND ".join(constraints)}
GROUP BY 1
ORDER BY count DESC;""",
parameters=args)
logger.debug("---------")
logger.debug(query)
logger.debug("---------")
try:
rows = cur.execute(query=query)
except Exception as err:
logger.warning("--------- HEATMAP-session_id SEARCH QUERY EXCEPTION CH -----------")
logger.warning(query)
logger.warning("--------- PAYLOAD -----------")
logger.warning(data)
logger.warning("--------------------")
raise err
return helper.list_to_camel_case(rows)
# use CH
SESSION_PROJECTION_COLS = """s.project_id,
s.session_id AS session_id,
toUnixTimestamp(s.datetime)*1000 AS start_ts,
s.duration AS duration"""
def __get_1_url(location_condition: schemas.SessionSearchEventSchema2 | None, session_id: str, project_id: int,
start_time: int,
end_time: int) -> str | None:
full_args = {
"sessionId": session_id,
"projectId": project_id,
"start_time": start_time,
"end_time": end_time,
}
sub_condition = ["session_id = %(sessionId)s", "`$event_name` = 'CLICK'", "project_id = %(projectId)s"]
if location_condition and len(location_condition.value) > 0:
f_k = "LOC"
op = sh.get_sql_operator(location_condition.operator)
full_args = {**full_args, **sh.multi_values(location_condition.value, value_key=f_k)}
sub_condition.append(
sh.multi_conditions(f'path {op} %({f_k})s', location_condition.value, is_not=False,
value_key=f_k))
with ch_client.ClickHouseClient() as cur:
main_query = cur.format(query=f"""WITH paths AS (
SELECT DISTINCT
JSON_VALUE(CAST(`$properties` AS String), '$.url_path') AS url_path
FROM product_analytics.events
WHERE {" AND ".join(sub_condition)}
)
SELECT
paths.url_path,
COUNT(*) AS count
FROM product_analytics.events
INNER JOIN paths
ON JSON_VALUE(CAST(product_analytics.events.$properties AS String), '$.url_path') = paths.url_path
WHERE `$event_name` = 'CLICK'
AND project_id = %(projectId)s
AND created_at >= toDateTime(%(start_time)s / 1000)
AND created_at <= toDateTime(%(end_time)s / 1000)
GROUP BY paths.url_path
ORDER BY count DESC
LIMIT 1;""",
parameters=full_args)
logger.debug("--------------------")
logger.debug(main_query)
logger.debug("--------------------")
try:
url = cur.execute(query=main_query)
except Exception as err:
logger.warning("--------- CLICK MAP BEST URL SEARCH QUERY EXCEPTION CH-----------")
logger.warning(main_query.decode('UTF-8'))
logger.warning("--------- PAYLOAD -----------")
logger.warning(full_args)
logger.warning("--------------------")
raise err
if url is None or len(url) == 0:
return None
return url[0]["url_path"]
def search_short_session(data: schemas.HeatMapSessionsSearch, project_id, user_id,
include_mobs: bool = True, exclude_sessions: list[str] = [],
_depth: int = 3):
no_platform = True
location_condition = None
no_click = True
for f in data.filters:
if f.type == schemas.FilterType.PLATFORM:
no_platform = False
break
for f in data.events:
if f.type == schemas.EventType.LOCATION:
if len(f.value) == 0:
f.operator = schemas.SearchEventOperator.IS_ANY
location_condition = f.model_copy()
elif f.type == schemas.EventType.CLICK:
no_click = False
if len(f.value) == 0:
f.operator = schemas.SearchEventOperator.IS_ANY
if location_condition and not no_click:
break
if no_platform:
data.filters.append(schemas.SessionSearchFilterSchema(type=schemas.FilterType.PLATFORM,
value=[schemas.PlatformType.DESKTOP],
operator=schemas.SearchEventOperator.IS))
if not location_condition:
data.events.append(schemas.SessionSearchEventSchema2(type=schemas.EventType.LOCATION,
value=[],
operator=schemas.SearchEventOperator.IS_ANY))
if no_click:
data.events.append(schemas.SessionSearchEventSchema2(type=schemas.EventType.CLICK,
value=[],
operator=schemas.SearchEventOperator.IS_ANY))
data.filters.append(schemas.SessionSearchFilterSchema(type=schemas.FilterType.EVENTS_COUNT,
value=[0],
operator=schemas.MathOperator.GREATER))
full_args, query_part = sessions.search_query_parts_ch(data=data, error_status=None, errors_only=False,
favorite_only=data.bookmarked, issue=None,
project_id=project_id, user_id=user_id)
full_args["exclude_sessions"] = tuple(exclude_sessions)
if len(exclude_sessions) > 0:
query_part += "\n AND session_id NOT IN (%(exclude_sessions)s)"
with ch_client.ClickHouseClient() as cur:
data.order = schemas.SortOrderType.DESC
data.sort = 'duration'
main_query = cur.format(query=f"""SELECT *
FROM (SELECT {SESSION_PROJECTION_COLS}
{query_part}
-- ORDER BY {data.sort} {data.order.value}
LIMIT 20) AS raw
ORDER BY rand()
LIMIT 1;""",
parameters=full_args)
logger.debug("--------------------")
logger.debug(main_query)
logger.debug("--------------------")
try:
session = cur.execute(query=main_query)
except Exception as err:
logger.warning("--------- CLICK MAP SHORT SESSION SEARCH QUERY EXCEPTION CH -----------")
logger.warning(main_query)
logger.warning("--------- PAYLOAD -----------")
logger.warning(data.model_dump_json())
logger.warning("--------------------")
raise err
if len(session) > 0:
session = session[0]
if not location_condition or location_condition.operator == schemas.SearchEventOperator.IS_ANY:
session["path"] = __get_1_url(project_id=project_id, session_id=session["session_id"],
location_condition=location_condition,
start_time=data.startTimestamp, end_time=data.endTimestamp)
else:
session["path"] = location_condition.value[0]
if include_mobs:
session['domURL'] = sessions_mobs.get_urls(session_id=session["session_id"], project_id=project_id)
session['mobsUrl'] = sessions_mobs.get_urls_depercated(session_id=session["session_id"])
if _depth > 0 and len(session['domURL']) == 0 and len(session['mobsUrl']) == 0:
return search_short_session(data=data, project_id=project_id, user_id=user_id,
include_mobs=include_mobs,
exclude_sessions=exclude_sessions + [session["session_id"]],
_depth=_depth - 1)
elif _depth == 0 and len(session['domURL']) == 0 and len(session['mobsUrl']) == 0:
logger.info("couldn't find an existing replay after 3 iterations for heatmap")
session['events'] = events.get_by_session_id(project_id=project_id, session_id=session["session_id"],
event_type=schemas.EventType.LOCATION)
else:
return None
return helper.dict_to_camel_case(session)
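The retry loop in search_short_session excludes sessions whose replay files are missing and decrements _depth until it gives up. The shape of that recursion, isolated (names hypothetical):

def pick_with_retry(candidates, has_replay, exclude=(), depth=3):
    pool = [c for c in candidates if c not in exclude]
    if not pool:
        return None
    choice = pool[0]
    if has_replay(choice) or depth == 0:
        return choice  # at depth 0 the candidate is returned even without a replay
    return pick_with_retry(candidates, has_replay, exclude=(*exclude, choice), depth=depth - 1)

print(pick_with_retry([1, 2, 3], has_replay=lambda s: s == 3))  # 3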
def get_selected_session(project_id, session_id):
with ch_client.ClickHouseClient() as cur:
main_query = cur.format(query=f"""SELECT {SESSION_PROJECTION_COLS}
FROM experimental.sessions AS s
WHERE session_id=%(session_id)s;""",
parameters={"session_id": session_id})
logger.debug("--------------------")
logger.debug(main_query)
logger.debug("--------------------")
try:
session = cur.execute(query=main_query)
except Exception as err:
logger.warning("--------- CLICK MAP GET SELECTED SESSION QUERY EXCEPTION -----------")
logger.warning(main_query.decode('UTF-8'))
raise err
if len(session) > 0:
session = session[0]
else:
session = None
if session:
session['domURL'] = sessions_mobs.get_urls(session_id=session["session_id"], project_id=project_id)
session['mobsUrl'] = sessions_mobs.get_urls_depercated(session_id=session["session_id"])
if len(session['domURL']) == 0 and len(session['mobsUrl']) == 0:
session["_issue"] = "mob file not found"
logger.info("can't find selected mob file for heatmap")
session['events'] = get_page_events(session_id=session["session_id"], project_id=project_id)
return helper.dict_to_camel_case(session)
def get_page_events(session_id, project_id):
with ch_client.ClickHouseClient() as cur:
query = cur.format(query=f"""SELECT
event_id as message_id,
toUnixTimestamp(created_at)*1000 AS timestamp,
JSON_VALUE(CAST(`$properties` AS String), '$.url_host') AS host,
JSON_VALUE(CAST(`$properties` AS String), '$.url_path') AS path,
JSON_VALUE(CAST(`$properties` AS String), '$.url_path') AS value,
JSON_VALUE(CAST(`$properties` AS String), '$.url_path') AS url,
'LOCATION' AS type
FROM product_analytics.events
WHERE session_id = %(session_id)s
AND `$event_name`='LOCATION'
AND project_id= %(project_id)s
ORDER BY created_at,message_id;""",
parameters={"session_id": session_id, "project_id": project_id})
rows = cur.execute(query=query)
rows = helper.list_to_camel_case(rows)
return rows


@@ -0,0 +1,12 @@
import logging
from decouple import config
logger = logging.getLogger(__name__)
if config("EXP_METRICS", cast=bool, default=False):
import chalicelib.core.sessions.sessions_ch as sessions
else:
import chalicelib.core.sessions.sessions_pg as sessions
from chalicelib.core.sessions import sessions_mobs


@@ -0,0 +1,10 @@
import logging
from decouple import config
logger = logging.getLogger(__name__)
from .significance import *
if config("EXP_METRICS", cast=bool, default=False):
from .significance_ch import *


@@ -1,20 +1,15 @@
import logging
import schemas
from chalicelib.core import events, metadata
from chalicelib.utils import sql_helper as sh
"""
todo: remove LIMIT from the query
"""
from typing import List
import math
import warnings
from collections import defaultdict
from typing import List
from psycopg2.extras import RealDictRow
import schemas
from chalicelib.core import events, metadata
from chalicelib.utils import pg_client, helper
from chalicelib.utils import sql_helper as sh
logger = logging.getLogger(__name__)
SIGNIFICANCE_THRSH = 0.4
@@ -765,30 +760,6 @@ def get_issues(stages, rows, first_stage=None, last_stage=None, drop_only=False)
return n_critical_issues, issues_dict, total_drop_due_to_issues
def get_top_insights(filter_d: schemas.CardSeriesFilterSchema, project_id,
metric_format: schemas.MetricExtendedFormatType):
output = []
stages = filter_d.events
if len(stages) == 0:
logger.debug("no stages found")
return output, 0
# The result of the multi-stage query
rows = get_stages_and_events(filter_d=filter_d, project_id=project_id)
# Obtain the first part of the output
stages_list = get_stages(stages, rows, metric_format=metric_format)
if len(rows) == 0:
return stages_list, 0
# Obtain the second part of the output
total_drop_due_to_issues = get_issues(stages, rows,
first_stage=1,
last_stage=len(filter_d.events),
drop_only=True)
return stages_list, total_drop_due_to_issues
def get_issues_list(filter_d: schemas.CardSeriesFilterSchema, project_id, first_stage=None, last_stage=None):
output = dict({"total_drop_due_to_issues": 0, "critical_issues_count": 0, "significant": [], "insignificant": []})
stages = filter_d.events


@@ -1,6 +1,14 @@
import logging
from typing import List
from psycopg2.extras import RealDictRow
import schemas
from chalicelib.utils import ch_client
from chalicelib.utils import exp_ch_helper
from .significance import *
from chalicelib.utils import helper
from chalicelib.utils import sql_helper as sh
from chalicelib.core import events
logger = logging.getLogger(__name__)
@@ -11,9 +19,9 @@ def get_simple_funnel(filter_d: schemas.CardSeriesFilterSchema, project: schemas
filters: List[schemas.SessionSearchFilterSchema] = filter_d.filters
platform = project.platform
constraints = ["e.project_id = %(project_id)s",
"e.datetime >= toDateTime(%(startTimestamp)s/1000)",
"e.datetime <= toDateTime(%(endTimestamp)s/1000)",
"e.event_type IN %(eventTypes)s"]
"e.created_at >= toDateTime(%(startTimestamp)s/1000)",
"e.created_at <= toDateTime(%(endTimestamp)s/1000)",
"e.`$event_name` IN %(eventTypes)s"]
full_args = {"project_id": project.project_id, "startTimestamp": filter_d.startTimestamp,
"endTimestamp": filter_d.endTimestamp}
@@ -149,18 +157,25 @@ def get_simple_funnel(filter_d: schemas.CardSeriesFilterSchema, project: schemas
if next_event_type not in event_types:
event_types.append(next_event_type)
full_args[f"event_type_{i}"] = next_event_type
n_stages_query.append(f"event_type=%(event_type_{i})s")
n_stages_query.append(f"`$event_name`=%(event_type_{i})s")
if is_not:
n_stages_query_not.append(n_stages_query[-1] + " AND " +
(sh.multi_conditions(f' {next_col_name} {op} %({e_k})s', s.value,
is_not=is_not, value_key=e_k)
if not specific_condition else specific_condition))
(sh.multi_conditions(
f"JSON_VALUE(CAST(`$properties` AS String), '$.{next_col_name}') {op} %({e_k})s",
s.value,
is_not=is_not,
value_key=e_k
) if not specific_condition else specific_condition))
elif not is_any:
n_stages_query[-1] += " AND " + (sh.multi_conditions(f' {next_col_name} {op} %({e_k})s', s.value,
is_not=is_not, value_key=e_k)
if not specific_condition else specific_condition)
n_stages_query[-1] += " AND " + (
sh.multi_conditions(
f"JSON_VALUE(CAST(`$properties` AS String), '$.{next_col_name}') {op} %({e_k})s",
s.value,
is_not=is_not,
value_key=e_k
) if not specific_condition else specific_condition)
full_args = {"eventTypes": tuple(event_types), **full_args, **values}
full_args = {"eventTypes": event_types, **full_args, **values}
n_stages = len(n_stages_query)
if n_stages == 0:
return []
@@ -180,8 +195,8 @@ def get_simple_funnel(filter_d: schemas.CardSeriesFilterSchema, project: schemas
if len(n_stages_query_not) > 0:
value_conditions_not_base = ["project_id = %(project_id)s",
"datetime >= toDateTime(%(startTimestamp)s/1000)",
"datetime <= toDateTime(%(endTimestamp)s/1000)"]
"created_at >= toDateTime(%(startTimestamp)s/1000)",
"created_at <= toDateTime(%(endTimestamp)s/1000)"]
_value_conditions_not = []
value_conditions_not = []
for c in n_stages_query_not:
@@ -202,7 +217,7 @@ def get_simple_funnel(filter_d: schemas.CardSeriesFilterSchema, project: schemas
sequences = []
projections = []
for i, s in enumerate(n_stages_query):
projections.append(f"SUM(T{i + 1}) AS stage{i + 1}")
projections.append(f"coalesce(SUM(T{i + 1}),0) AS stage{i + 1}")
if i == 0:
sequences.append(f"anyIf(1,{s}) AS T1")
else:
@@ -213,23 +228,22 @@ def get_simple_funnel(filter_d: schemas.CardSeriesFilterSchema, project: schemas
pattern += f"(?{j + 1})"
conditions.append(n_stages_query[j])
j += 1
sequences.append(f"sequenceMatch('{pattern}')(e.datetime, {','.join(conditions)}) AS T{i + 1}")
sequences.append(f"sequenceMatch('{pattern}')(toDateTime(e.created_at), {','.join(conditions)}) AS T{i + 1}")
n_stages_query = f"""
SELECT {",".join(projections)}
FROM (SELECT {",".join(sequences)}
FROM {MAIN_EVENTS_TABLE} AS e {extra_from}
WHERE {" AND ".join(constraints)}
GROUP BY {group_by}) AS raw;
"""
GROUP BY {group_by}) AS raw;"""
with ch_client.ClickHouseClient() as cur:
query = cur.format(n_stages_query, full_args)
query = cur.format(query=n_stages_query, parameters=full_args)
logger.debug("---------------------------------------------------")
logger.debug(query)
logger.debug("---------------------------------------------------")
try:
row = cur.execute(query)
row = cur.execute(query=query)
except Exception as err:
logger.warning("--------- SIMPLE FUNNEL SEARCH QUERY EXCEPTION CH-----------")
logger.warning(query)
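For intuition, the funnel is evaluated with ClickHouse sequenceMatch: stage N counts only if stages 1..N occur in order. A reconstructed sketch of what the loop above generates for three stages:

stages = [f"`$event_name`=%(event_type_{i})s" for i in range(3)]
sequences = []
for i, s in enumerate(stages):
    if i == 0:
        sequences.append(f"anyIf(1,{s}) AS T1")
    else:
        pattern = "".join(f"(?{j + 1})" for j in range(i + 1))
        sequences.append(f"sequenceMatch('{pattern}')(toDateTime(e.created_at), "
                         f"{','.join(stages[:i + 1])}) AS T{i + 1}")
print(sequences[-1])  # sequenceMatch('(?1)(?2)(?3)')(toDateTime(e.created_at), ...) AS T3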


@@ -0,0 +1,10 @@
import logging
from decouple import config
logger = logging.getLogger(__name__)
if config("EXP_METRICS", cast=bool, default=False):
logger.info(">>> Using experimental product-analytics")
from .product_analytics_ch import *
else:
from .product_analytics import *


@@ -12,42 +12,75 @@ logger = logging.getLogger(__name__)
def __transform_journey(rows, reverse_path=False):
total_100p = 0
number_of_step1 = 0
for r in rows:
if r["event_number_in_session"] > 1:
break
number_of_step1 += 1
total_100p += r["sessions_count"]
# for i in range(number_of_step1):
# rows[i]["value"] = 100 / number_of_step1
# for i in range(number_of_step1, len(rows)):
for i in range(len(rows)):
rows[i]["value"] = rows[i]["sessions_count"] * 100 / total_100p
nodes = []
nodes_values = []
links = []
drops = []
max_depth = 0
for r in rows:
source = f"{r['event_number_in_session']}_{r['event_type']}_{r['e_value']}"
r["value"] = r["sessions_count"] * 100 / total_100p
source = f"{r['event_number_in_session'] - 1}_{r['event_type']}_{r['e_value']}"
if source not in nodes:
nodes.append(source)
nodes_values.append({"name": r['e_value'], "eventType": r['event_type'],
"avgTimeFromPrevious": 0, "sessionsCount": 0})
if r['next_value']:
target = f"{r['event_number_in_session'] + 1}_{r['next_type']}_{r['next_value']}"
if target not in nodes:
nodes.append(target)
nodes_values.append({"name": r['next_value'], "eventType": r['next_type'],
"avgTimeFromPrevious": 0, "sessionsCount": 0})
nodes_values.append({"depth": r['event_number_in_session'] - 1,
"name": r['e_value'],
"eventType": r['event_type'],
"id": len(nodes_values)})
target = f"{r['event_number_in_session']}_{r['next_type']}_{r['next_value']}"
if target not in nodes:
nodes.append(target)
nodes_values.append({"depth": r['event_number_in_session'],
"name": r['next_value'],
"eventType": r['next_type'],
"id": len(nodes_values)})
sr_idx = nodes.index(source)
tg_idx = nodes.index(target)
link = {"eventType": r['event_type'], "sessionsCount": r["sessions_count"], "value": r["value"]}
if not reverse_path:
link["source"] = sr_idx
link["target"] = tg_idx
else:
link["source"] = tg_idx
link["target"] = sr_idx
links.append(link)
max_depth = r['event_number_in_session']
if r["next_type"] == "DROP":
for d in drops:
if d["depth"] == r['event_number_in_session']:
d["sessions_count"] += r["sessions_count"]
break
else:
drops.append({"depth": r['event_number_in_session'], "sessions_count": r["sessions_count"]})
for i in range(len(drops)):
if drops[i]["depth"] < max_depth:
source = f"{drops[i]['depth']}_DROP_None"
target = f"{drops[i]['depth'] + 1}_DROP_None"
sr_idx = nodes.index(source)
tg_idx = nodes.index(target)
if r["avg_time_from_previous"] is not None:
nodes_values[tg_idx]["avgTimeFromPrevious"] += r["avg_time_from_previous"] * r["sessions_count"]
nodes_values[tg_idx]["sessionsCount"] += r["sessions_count"]
link = {"eventType": r['event_type'], "sessionsCount": r["sessions_count"],
"value": r["value"], "avgTimeFromPrevious": r["avg_time_from_previous"]}
if i < len(drops) - 1 and drops[i]["depth"] + 1 == drops[i + 1]["depth"]:
tg_idx = nodes.index(target)
else:
nodes.append(target)
nodes_values.append({"depth": drops[i]["depth"] + 1,
"name": None,
"eventType": "DROP",
"id": len(nodes_values)})
tg_idx = len(nodes) - 1
link = {"eventType": "DROP",
"sessionsCount": drops[i]["sessions_count"],
"value": drops[i]["sessions_count"] * 100 / total_100p}
if not reverse_path:
link["source"] = sr_idx
link["target"] = tg_idx
@@ -55,13 +88,10 @@ def __transform_journey(rows, reverse_path=False):
link["source"] = tg_idx
link["target"] = sr_idx
links.append(link)
for n in nodes_values:
if n["sessionsCount"] > 0:
n["avgTimeFromPrevious"] = n["avgTimeFromPrevious"] / n["sessionsCount"]
else:
n["avgTimeFromPrevious"] = None
n.pop("sessionsCount")
if reverse_path:
for n in nodes_values:
n["depth"] = max_depth - n["depth"]
return {"nodes": nodes_values,
"links": sorted(links, key=lambda x: (x["source"], x["target"]), reverse=False)}
@@ -403,7 +433,9 @@ WITH sub_sessions AS (SELECT session_id {sub_sessions_extra_projection}
{"UNION ALL".join(projection_query)};"""
params = {"project_id": project_id, "startTimestamp": data.startTimestamp,
"endTimestamp": data.endTimestamp, "density": data.density,
"eventThresholdNumberInGroup": 4 if data.hide_excess else 8,
# This is ignored because the UI will take care of it
# "eventThresholdNumberInGroup": 4 if data.hide_excess else 8,
"eventThresholdNumberInGroup": 8,
**extra_values}
query = cur.mogrify(pg_query, params)
_now = time()


@@ -1,110 +1,135 @@
from typing import List
import schemas
from chalicelib.core.metrics import __get_basic_constraints, __get_meta_constraint
from chalicelib.core.metrics import __get_constraint_values, __complete_missing_steps
from chalicelib.utils import ch_client, exp_ch_helper
from chalicelib.utils import helper, dev
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils import sql_helper as sh
from chalicelib.core import metadata
import logging
from time import time
import logging
import schemas
from chalicelib.core import metadata
from .product_analytics import __transform_journey
from chalicelib.utils import ch_client, exp_ch_helper
from chalicelib.utils import helper
from chalicelib.utils import sql_helper as sh
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.metrics_helper import get_step_size
logger = logging.getLogger(__name__)
def __transform_journey(rows, reverse_path=False):
total_100p = 0
number_of_step1 = 0
for r in rows:
if r["event_number_in_session"] > 1:
break
number_of_step1 += 1
total_100p += r["sessions_count"]
# for i in range(number_of_step1):
# rows[i]["value"] = 100 / number_of_step1
# for i in range(number_of_step1, len(rows)):
for i in range(len(rows)):
rows[i]["value"] = rows[i]["sessions_count"] * 100 / total_100p
nodes = []
nodes_values = []
links = []
for r in rows:
source = f"{r['event_number_in_session']}_{r['event_type']}_{r['e_value']}"
if source not in nodes:
nodes.append(source)
nodes_values.append({"name": r['e_value'], "eventType": r['event_type'],
"avgTimeFromPrevious": 0, "sessionsCount": 0})
if r['next_value']:
target = f"{r['event_number_in_session'] + 1}_{r['next_type']}_{r['next_value']}"
if target not in nodes:
nodes.append(target)
nodes_values.append({"name": r['next_value'], "eventType": r['next_type'],
"avgTimeFromPrevious": 0, "sessionsCount": 0})
sr_idx = nodes.index(source)
tg_idx = nodes.index(target)
if r["avg_time_from_previous"] is not None:
nodes_values[tg_idx]["avgTimeFromPrevious"] += r["avg_time_from_previous"] * r["sessions_count"]
nodes_values[tg_idx]["sessionsCount"] += r["sessions_count"]
link = {"eventType": r['event_type'], "sessionsCount": r["sessions_count"],
"value": r["value"], "avgTimeFromPrevious": r["avg_time_from_previous"]}
if not reverse_path:
link["source"] = sr_idx
link["target"] = tg_idx
else:
link["source"] = tg_idx
link["target"] = sr_idx
links.append(link)
for n in nodes_values:
if n["sessionsCount"] > 0:
n["avgTimeFromPrevious"] = n["avgTimeFromPrevious"] / n["sessionsCount"]
else:
n["avgTimeFromPrevious"] = None
n.pop("sessionsCount")
return {"nodes": nodes_values,
"links": sorted(links, key=lambda x: (x["source"], x["target"]), reverse=False)}
JOURNEY_TYPES = {
schemas.ProductAnalyticsSelectedEventType.LOCATION: {"eventType": "LOCATION", "column": "url_path"},
schemas.ProductAnalyticsSelectedEventType.CLICK: {"eventType": "CLICK", "column": "label"},
schemas.ProductAnalyticsSelectedEventType.INPUT: {"eventType": "INPUT", "column": "label"},
schemas.ProductAnalyticsSelectedEventType.CUSTOM_EVENT: {"eventType": "CUSTOM", "column": "name"}
schemas.ProductAnalyticsSelectedEventType.LOCATION: {"eventType": "LOCATION", "column": "`$properties`.url_path"},
schemas.ProductAnalyticsSelectedEventType.CLICK: {"eventType": "CLICK", "column": "`$properties`.label"},
schemas.ProductAnalyticsSelectedEventType.INPUT: {"eventType": "INPUT", "column": "`$properties`.label"},
schemas.ProductAnalyticsSelectedEventType.CUSTOM_EVENT: {"eventType": "CUSTOM", "column": "`$properties`.name"}
}
# Q6: use events as a sub_query to support filtering on materialized columns when doing a join
# query: Q5, the result is correct
def __get_basic_constraints_events(table_name=None, identifier="project_id"):
if table_name:
table_name += "."
else:
table_name = ""
ch_sub_query = [f"{table_name}{identifier} =toUInt16(%({identifier})s)"]
ch_sub_query.append(f"{table_name}created_at >= toDateTime(%(startTimestamp)s/1000)")
ch_sub_query.append(f"{table_name}created_at < toDateTime(%(endTimestamp)s/1000)")
return ch_sub_query
def __frange(start, stop, step):
result = []
i = start
while i < stop:
result.append(i)
i += step
return result
def __add_missing_keys(original, complete):
for missing in [key for key in complete.keys() if key not in original.keys()]:
original[missing] = complete[missing]
return original
def __complete_missing_steps(start_time, end_time, density, neutral, rows, time_key="timestamp", time_coefficient=1000):
if len(rows) == density:
return rows
step = get_step_size(start_time, end_time, density, decimal=True)
optimal = [(int(i * time_coefficient), int((i + step) * time_coefficient)) for i in
__frange(start_time // time_coefficient, end_time // time_coefficient, step)]
result = []
r = 0
o = 0
for i in range(density):
neutral_clone = dict(neutral)
for k in neutral_clone.keys():
if callable(neutral_clone[k]):
neutral_clone[k] = neutral_clone[k]()
if r < len(rows) and len(result) + len(rows) - r == density:
result += rows[r:]
break
if r < len(rows) and o < len(optimal) and rows[r][time_key] < optimal[o][0]:
# complete missing keys in original object
rows[r] = __add_missing_keys(original=rows[r], complete=neutral_clone)
result.append(rows[r])
r += 1
elif r < len(rows) and o < len(optimal) and optimal[o][0] <= rows[r][time_key] < optimal[o][1]:
# complete missing keys in original object
rows[r] = __add_missing_keys(original=rows[r], complete=neutral_clone)
result.append(rows[r])
r += 1
o += 1
else:
neutral_clone[time_key] = optimal[o][0]
result.append(neutral_clone)
o += 1
return result
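A quick illustration of the gap filling (toy numbers): with four one-second buckets and a single real row, missing buckets are emitted as copies of the neutral template stamped with the bucket start.

optimal = [(0, 1000), (1000, 2000), (2000, 3000), (3000, 4000)]
rows = [{"timestamp": 2000, "count": 7}]
neutral = {"count": 0}
result, r = [], 0
for start, _end in optimal:
    if r < len(rows) and start <= rows[r]["timestamp"] < _end:
        result.append(rows[r]); r += 1
    else:
        result.append({**neutral, "timestamp": start})
print([x["count"] for x in result])  # [0, 0, 7, 0]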
# startPoints are computed before ranked_events to reduce the number of window functions over rows
# replaced time_to_target by time_from_previous
# compute avg_time_from_previous at the same level as sessions_count
# sort by top 5 according to sessions_count at the CTE level
# final part project data without grouping
# compute avg_time_from_previous at the same level as sessions_count (this was removed in v1.22)
# if start-point is selected, the selected event is ranked n°1
def path_analysis(project_id: int, data: schemas.CardPathAnalysis):
if not data.hide_excess:
data.hide_excess = True
data.rows = 50
sub_events = []
start_points_conditions = []
step_0_conditions = []
step_1_post_conditions = ["event_number_in_session <= %(density)s"]
q2_extra_col = None
q2_extra_condition = None
if len(data.metric_value) == 0:
data.metric_value.append(schemas.ProductAnalyticsSelectedEventType.LOCATION)
sub_events.append({"column": JOURNEY_TYPES[schemas.ProductAnalyticsSelectedEventType.LOCATION]["column"],
"eventType": schemas.ProductAnalyticsSelectedEventType.LOCATION.value})
else:
if len(data.start_point) > 0:
extra_metric_values = []
for s in data.start_point:
if s.type not in data.metric_value:
sub_events.append({"column": JOURNEY_TYPES[s.type]["column"],
"eventType": JOURNEY_TYPES[s.type]["eventType"]})
step_1_post_conditions.append(
f"(`$event_name`='{JOURNEY_TYPES[s.type]['eventType']}' AND event_number_in_session = 1 \
OR `$event_name`!='{JOURNEY_TYPES[s.type]['eventType']}' AND event_number_in_session > 1)")
extra_metric_values.append(s.type)
if not q2_extra_col:
# This is used when the start event's type differs from the visible event types:
# intermediary events of non-visible types get removed, so without this you would
# see a jump from step-0 to step-3 because step-2 is not a visible event
q2_extra_col = """,leadInFrame(toNullable(event_number_in_session))
OVER (PARTITION BY session_id ORDER BY created_at %s
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS next_event_number_in_session"""
q2_extra_condition = """WHERE event_number_in_session + 1 = next_event_number_in_session
OR isNull(next_event_number_in_session);"""
data.metric_value += extra_metric_values
for v in data.metric_value:
if JOURNEY_TYPES.get(v):
sub_events.append({"column": JOURNEY_TYPES[v]["column"],
"eventType": JOURNEY_TYPES[v]["eventType"]})
if len(sub_events) == 1:
main_column = sub_events[0]['column']
else:
main_column = f"multiIf(%s,%s)" % (
','.join([f"event_type='{s['eventType']}',{s['column']}" for s in sub_events[:-1]]),
','.join([f"`$event_name`='{s['eventType']}',{s['column']}" for s in sub_events[:-1]]),
sub_events[-1]["column"])
extra_values = {}
reverse = data.start_type == "end"
@@ -117,19 +142,19 @@ def path_analysis(project_id: int, data: schemas.CardPathAnalysis):
event_type = JOURNEY_TYPES[sf.type]['eventType']
extra_values = {**extra_values, **sh.multi_values(sf.value, value_key=f_k),
f"start_event_type_{i}": event_type}
start_points_conditions.append(f"(event_type=%(start_event_type_{i})s AND " +
start_points_conditions.append(f"(`$event_name`=%(start_event_type_{i})s AND " +
sh.multi_conditions(f'{event_column} {op} %({f_k})s', sf.value, is_not=is_not,
value_key=f_k)
+ ")")
step_0_conditions.append(f"(event_type=%(start_event_type_{i})s AND " +
step_0_conditions.append(f"(`$event_name`=%(start_event_type_{i})s AND " +
sh.multi_conditions(f'e_value {op} %({f_k})s', sf.value, is_not=is_not,
value_key=f_k)
+ ")")
if len(start_points_conditions) > 0:
start_points_conditions = ["(" + " OR ".join(start_points_conditions) + ")",
"events.project_id = toUInt16(%(project_id)s)",
"events.datetime >= toDateTime(%(startTimestamp)s / 1000)",
"events.datetime < toDateTime(%(endTimestamp)s / 1000)"]
"events.created_at >= toDateTime(%(startTimestamp)s / 1000)",
"events.created_at < toDateTime(%(endTimestamp)s / 1000)"]
step_0_conditions = ["(" + " OR ".join(step_0_conditions) + ")",
"pre_ranked_events.event_number_in_session = 1"]
@@ -318,10 +343,11 @@ def path_analysis(project_id: int, data: schemas.CardPathAnalysis):
else:
path_direction = ""
ch_sub_query = __get_basic_constraints(table_name="events")
# ch_sub_query = __get_basic_constraints(table_name="events")
ch_sub_query = __get_basic_constraints_events(table_name="events")
selected_event_type_sub_query = []
for s in data.metric_value:
selected_event_type_sub_query.append(f"events.event_type = '{JOURNEY_TYPES[s]['eventType']}'")
selected_event_type_sub_query.append(f"events.`$event_name` = '{JOURNEY_TYPES[s]['eventType']}'")
if s in exclusions:
selected_event_type_sub_query[-1] += " AND (" + " AND ".join(exclusions[s]) + ")"
selected_event_type_sub_query = " OR ".join(selected_event_type_sub_query)
@@ -344,14 +370,14 @@ def path_analysis(project_id: int, data: schemas.CardPathAnalysis):
if len(start_points_conditions) == 0:
step_0_subquery = """SELECT DISTINCT session_id
FROM (SELECT event_type, e_value
FROM (SELECT `$event_name`, e_value
FROM pre_ranked_events
WHERE event_number_in_session = 1
GROUP BY event_type, e_value
GROUP BY `$event_name`, e_value
ORDER BY count(1) DESC
LIMIT 1) AS top_start_events
INNER JOIN pre_ranked_events
ON (top_start_events.event_type = pre_ranked_events.event_type AND
ON (top_start_events.`$event_name` = pre_ranked_events.`$event_name` AND
top_start_events.e_value = pre_ranked_events.e_value)
WHERE pre_ranked_events.event_number_in_session = 1"""
initial_event_cte = ""
@@ -360,65 +386,85 @@ def path_analysis(project_id: int, data: schemas.CardPathAnalysis):
FROM pre_ranked_events
WHERE {" AND ".join(step_0_conditions)}"""
initial_event_cte = f"""\
initial_event AS (SELECT events.session_id, MIN(datetime) AS start_event_timestamp
initial_event AS (SELECT events.session_id, MIN(created_at) AS start_event_timestamp
FROM {main_events_table} {"INNER JOIN sub_sessions USING (session_id)" if len(sessions_conditions) > 0 else ""}
WHERE {" AND ".join(start_points_conditions)}
GROUP BY 1),"""
ch_sub_query.append("events.datetime>=initial_event.start_event_timestamp")
ch_sub_query.append(f"events.created_at{'<=' if reverse else '>='}initial_event.start_event_timestamp")
main_events_table += " INNER JOIN initial_event ON (events.session_id = initial_event.session_id)"
sessions_conditions = []
steps_query = ["""n1 AS (SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
AVG(time_from_previous) AS avg_time_from_previous,
COUNT(1) AS sessions_count
FROM ranked_events
WHERE event_number_in_session = 1
AND isNotNull(next_value)
GROUP BY event_number_in_session, event_type, e_value, next_type, next_value
ORDER BY sessions_count DESC
LIMIT %(eventThresholdNumberInGroup)s)"""]
projection_query = ["""SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
sessions_count,
avg_time_from_previous
FROM n1"""]
for i in range(2, data.density + 1):
steps_query.append(f"""n{i} AS (SELECT *
FROM (SELECT re.event_number_in_session AS event_number_in_session,
re.event_type AS event_type,
re.e_value AS e_value,
re.next_type AS next_type,
re.next_value AS next_value,
AVG(re.time_from_previous) AS avg_time_from_previous,
COUNT(1) AS sessions_count
FROM n{i - 1} INNER JOIN ranked_events AS re
ON (n{i - 1}.next_value = re.e_value AND n{i - 1}.next_type = re.event_type)
WHERE re.event_number_in_session = {i}
GROUP BY re.event_number_in_session, re.event_type, re.e_value, re.next_type, re.next_value) AS sub_level
ORDER BY sessions_count DESC
LIMIT %(eventThresholdNumberInGroup)s)""")
projection_query.append(f"""SELECT event_number_in_session,
event_type,
steps_query = []
# This is used if data.hide_excess is True
projection_query = []
drop_query = []
top_query = []
top_with_next_query = []
other_query = []
for i in range(1, data.density + (1 if data.hide_excess else 0)):
steps_query.append(f"""n{i} AS (SELECT event_number_in_session,
`$event_name`,
e_value,
next_type,
next_value,
COUNT(1) AS sessions_count
FROM ranked_events
WHERE event_number_in_session = {i}
GROUP BY event_number_in_session, `$event_name`, e_value, next_type, next_value
ORDER BY sessions_count DESC)""")
if not data.hide_excess:
projection_query.append(f"""\
SELECT event_number_in_session,
`$event_name`,
e_value,
next_type,
next_value,
sessions_count,
avg_time_from_previous
FROM n{i}""")
sessions_count
FROM n{i}
WHERE isNotNull(next_type)""")
else:
top_query.append(f"""\
SELECT event_number_in_session,
`$event_name`,
e_value,
SUM(n{i}.sessions_count) AS sessions_count
FROM n{i}
GROUP BY event_number_in_session, `$event_name`, e_value
ORDER BY sessions_count DESC
LIMIT %(visibleRows)s""")
if i < data.density:
drop_query.append(f"""SELECT event_number_in_session,
`$event_name`,
e_value,
'DROP' AS next_type,
NULL AS next_value,
sessions_count
FROM n{i}
WHERE isNull(n{i}.next_type)""")
if data.hide_excess:
top_with_next_query.append(f"""\
SELECT n{i}.*
FROM n{i}
INNER JOIN top_n
ON (n{i}.event_number_in_session = top_n.event_number_in_session
AND n{i}.`$event_name` = top_n.`$event_name`
AND n{i}.e_value = top_n.e_value)""")
if i > 1 and data.hide_excess:
other_query.append(f"""SELECT n{i}.*
FROM n{i}
WHERE (event_number_in_session, `$event_name`, e_value) NOT IN
(SELECT event_number_in_session, `$event_name`, e_value
FROM top_n
WHERE top_n.event_number_in_session = {i})""")
with ch_client.ClickHouseClient(database="experimental") as ch:
time_key = TimeUTC.now()
_now = time()
params = {"project_id": project_id, "startTimestamp": data.startTimestamp,
"endTimestamp": data.endTimestamp, "density": data.density,
"eventThresholdNumberInGroup": 4 if data.hide_excess else 8,
"visibleRows": data.rows,
**extra_values}
ch_query1 = f"""\
@@ -427,23 +473,24 @@ WITH {initial_sessions_cte}
{initial_event_cte}
pre_ranked_events AS (SELECT *
FROM (SELECT session_id,
event_type,
datetime,
{main_column} AS e_value,
`$event_name`,
created_at,
toString({main_column}) AS e_value,
row_number() OVER (PARTITION BY session_id
ORDER BY datetime {path_direction},
message_id {path_direction} ) AS event_number_in_session
ORDER BY created_at {path_direction},
event_id {path_direction} ) AS event_number_in_session
FROM {main_events_table} {"INNER JOIN sub_sessions ON (sub_sessions.session_id = events.session_id)" if len(sessions_conditions) > 0 else ""}
WHERE {" AND ".join(ch_sub_query)}
) AS full_ranked_events
WHERE event_number_in_session <= %(density)s)
WHERE {" AND ".join(step_1_post_conditions)})
SELECT *
FROM pre_ranked_events;"""
logger.debug("---------Q1-----------")
ch.execute(query=ch_query1, params=params)
ch_query1 = ch.format(query=ch_query1, parameters=params)
ch.execute(query=ch_query1)
if time() - _now > 2:
logger.warning(f">>>>>>>>>PathAnalysis long query EE ({int(time() - _now)}s)<<<<<<<<<")
logger.warning(ch.format(ch_query1, params))
logger.warning(str.encode(ch_query1))
logger.warning("----------------------")
_now = time()
@@ -454,38 +501,136 @@ WITH pre_ranked_events AS (SELECT *
start_points AS ({step_0_subquery}),
ranked_events AS (SELECT pre_ranked_events.*,
leadInFrame(e_value)
OVER (PARTITION BY session_id ORDER BY datetime {path_direction}
OVER (PARTITION BY session_id ORDER BY created_at {path_direction}
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS next_value,
leadInFrame(toNullable(event_type))
OVER (PARTITION BY session_id ORDER BY datetime {path_direction}
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS next_type,
abs(lagInFrame(toNullable(datetime))
OVER (PARTITION BY session_id ORDER BY datetime {path_direction}
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
- pre_ranked_events.datetime) AS time_from_previous
leadInFrame(toNullable(`$event_name`))
OVER (PARTITION BY session_id ORDER BY created_at {path_direction}
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS next_type
{q2_extra_col % path_direction if q2_extra_col else ""}
FROM start_points INNER JOIN pre_ranked_events USING (session_id))
SELECT *
FROM ranked_events;"""
FROM ranked_events
{q2_extra_condition if q2_extra_condition else ""};"""
logger.debug("---------Q2-----------")
ch.execute(query=ch_query2, params=params)
ch_query2 = ch.format(query=ch_query2, parameters=params)
ch.execute(query=ch_query2)
if time() - _now > 2:
logger.warning(f">>>>>>>>>PathAnalysis long query EE ({int(time() - _now)}s)<<<<<<<<<")
logger.warning(ch.format(ch_query2, params))
logger.warning(str.encode(ch_query2))
logger.warning("----------------------")
_now = time()
sub_cte = ""
if data.hide_excess:
sub_cte = f""",
top_n AS ({" UNION ALL ".join(top_query)}),
top_n_with_next AS ({" UNION ALL ".join(top_with_next_query)}),
others_n AS ({" UNION ALL ".join(other_query)})"""
projection_query = """\
-- Top to Top: valid
SELECT top_n_with_next.*
FROM top_n_with_next
INNER JOIN top_n
ON (top_n_with_next.event_number_in_session + 1 = top_n.event_number_in_session
AND top_n_with_next.next_type = top_n.`$event_name`
AND top_n_with_next.next_value = top_n.e_value)
UNION ALL
-- Top to Others: valid
SELECT top_n_with_next.event_number_in_session,
top_n_with_next.`$event_name`,
top_n_with_next.e_value,
'OTHER' AS next_type,
NULL AS next_value,
SUM(top_n_with_next.sessions_count) AS sessions_count
FROM top_n_with_next
WHERE (top_n_with_next.event_number_in_session + 1, top_n_with_next.next_type, top_n_with_next.next_value) IN
(SELECT others_n.event_number_in_session, others_n.`$event_name`, others_n.e_value FROM others_n)
GROUP BY top_n_with_next.event_number_in_session, top_n_with_next.`$event_name`, top_n_with_next.e_value
UNION ALL
-- Top to Drop: valid
SELECT drop_n.event_number_in_session,
drop_n.`$event_name`,
drop_n.e_value,
drop_n.next_type,
drop_n.next_value,
drop_n.sessions_count
FROM drop_n
INNER JOIN top_n ON (drop_n.event_number_in_session = top_n.event_number_in_session
AND drop_n.`$event_name` = top_n.`$event_name`
AND drop_n.e_value = top_n.e_value)
ORDER BY drop_n.event_number_in_session
UNION ALL
-- Others to Drop: valid
SELECT others_n.event_number_in_session,
'OTHER' AS `$event_name`,
NULL AS e_value,
'DROP' AS next_type,
NULL AS next_value,
SUM(others_n.sessions_count) AS sessions_count
FROM others_n
WHERE isNull(others_n.next_type)
AND others_n.event_number_in_session < 3
GROUP BY others_n.event_number_in_session, next_type, next_value
UNION ALL
-- Others to Top: valid
SELECT others_n.event_number_in_session,
'OTHER' AS `$event_name`,
NULL AS e_value,
others_n.next_type,
others_n.next_value,
SUM(others_n.sessions_count) AS sessions_count
FROM others_n
WHERE isNotNull(others_n.next_type)
AND (others_n.event_number_in_session + 1, others_n.next_type, others_n.next_value) IN
(SELECT top_n.event_number_in_session, top_n.`$event_name`, top_n.e_value FROM top_n)
GROUP BY others_n.event_number_in_session, others_n.next_type, others_n.next_value
UNION ALL
-- Others to Others
SELECT others_n.event_number_in_session,
'OTHER' AS `$event_name`,
NULL AS e_value,
'OTHER' AS next_type,
NULL AS next_value,
SUM(others_n.sessions_count) AS sessions_count
FROM others_n
WHERE isNotNull(others_n.next_type)
AND others_n.event_number_in_session < %(density)s
AND (others_n.event_number_in_session + 1, others_n.next_type, others_n.next_value) NOT IN
(SELECT event_number_in_session, `$event_name`, e_value FROM top_n)
GROUP BY others_n.event_number_in_session"""
else:
projection_query.append("""\
SELECT event_number_in_session,
`$event_name`,
e_value,
next_type,
next_value,
sessions_count
FROM drop_n""")
projection_query = " UNION ALL ".join(projection_query)
ch_query3 = f"""\
WITH ranked_events AS (SELECT *
FROM ranked_events_{time_key}),
{",".join(steps_query)}
SELECT *
FROM ({" UNION ALL ".join(projection_query)}) AS chart_steps
ORDER BY event_number_in_session;"""
WITH ranked_events AS (SELECT *
FROM ranked_events_{time_key}),
{", ".join(steps_query)},
drop_n AS ({" UNION ALL ".join(drop_query)})
{sub_cte}
SELECT event_number_in_session,
`$event_name` AS event_type,
e_value,
next_type,
next_value,
sessions_count
FROM (
{projection_query}
) AS chart_steps
ORDER BY event_number_in_session, sessions_count DESC;"""
logger.debug("---------Q3-----------")
rows = ch.execute(query=ch_query3, params=params)
ch_query3 = ch.format(query=ch_query3, parameters=params)
rows = ch.execute(query=ch_query3)
if time() - _now > 2:
logger.warning(f">>>>>>>>>PathAnalysis long query EE ({int(time() - _now)}s)<<<<<<<<<")
logger.warning(ch.format(ch_query3, params))
logger.warning(str.encode(ch_query3))
logger.warning("----------------------")
return __transform_journey(rows=rows, reverse_path=reverse)
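
For readers untangling the new `hide_excess` branch, here is a minimal pure-Python sketch of the same bucketing idea (hypothetical data shapes and helper names, not the shipped query): at each step only the top `visibleRows` nodes keep their identity, the rest collapse into an OTHER node, and a session with no next event contributes a DROP edge.

from collections import Counter

def bucket_journeys(journeys, visible_rows=3):
    """Sketch of the top-N / OTHER / DROP aggregation performed by ch_query3."""
    # Count sessions per (step, node) so each step can keep its most common nodes.
    node_counts = Counter()
    for steps in journeys:
        for i, node in enumerate(steps, start=1):
            node_counts[(i, node)] += 1
    per_step = {}
    for (i, node), count in node_counts.items():
        per_step.setdefault(i, []).append((count, node))
    top_nodes = {i: {node for _, node in sorted(pairs, reverse=True)[:visible_rows]}
                 for i, pairs in per_step.items()}

    def label(i, node):
        return node if node in top_nodes.get(i, set()) else "OTHER"

    edges = Counter()
    for steps in journeys:
        for i, node in enumerate(steps, start=1):
            # A session that ends at this step becomes a DROP edge.
            nxt = label(i + 1, steps[i]) if i < len(steps) else "DROP"
            edges[(i, label(i, node), nxt)] += 1
    return edges

# e.g. bucket_journeys([["home", "search", "cart"], ["home", "pricing"]])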

@@ -0,0 +1,14 @@
from chalicelib.utils.ch_client import ClickHouseClient

def search_events(project_id: int, data: dict):
    # NOTE: `data` is accepted but not used yet, and `taha.events` looks like a
    # development table name rather than a final one.
    with ClickHouseClient() as ch_client:
        query = ch_client.format(
            """SELECT *
               FROM taha.events
               WHERE project_id=%(project_id)s
               ORDER BY created_at;""",
            parameters={"project_id": project_id})  # keyword aligned with other call sites
        return ch_client.execute(query)
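
The format-then-execute pattern used here, and throughout the path-analysis changes above, is what lets the slow-query branches log the exact SQL that ran. A hedged sketch of that pattern as a reusable helper; the `format` and `execute` signatures come from this diff, everything else is an assumption:

import logging
from time import time

logger = logging.getLogger(__name__)

def run_logged(ch, query: str, params: dict, threshold_s: float = 2.0):
    # Format first so the fully substituted SQL is available for logging.
    sql = ch.format(query=query, parameters=params)
    started = time()
    rows = ch.execute(query=sql)
    if time() - started > threshold_s:
        logger.warning("slow query (%ds): %s", int(time() - started), sql)
    return rows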

@@ -0,0 +1,6 @@
TENANT_CONDITION = "TRUE"
MOB_KEY = ""


def get_file_key(project_id, session_id):
    return {}

@@ -1,6 +1,7 @@
import json
from typing import Optional, List
import logging
from collections import Counter
from typing import Optional, List
from fastapi import HTTPException, status
@@ -9,6 +10,8 @@ from chalicelib.core import users
from chalicelib.utils import pg_client, helper
from chalicelib.utils.TimeUTC import TimeUTC
logger = logging.getLogger(__name__)
def __exists_by_name(name: str, exclude_id: Optional[int]) -> bool:
with pg_client.PostgresClient() as cur:
@@ -410,7 +413,6 @@ def update_project_conditions(project_id, conditions):
create_project_conditions(project_id, to_be_created)
if to_be_updated:
print(to_be_updated)
update_project_condition(project_id, to_be_updated)
return get_conditions(project_id)
@@ -425,3 +427,45 @@ def get_projects_ids(tenant_id):
cur.execute(query=query)
rows = cur.fetchall()
return [r["project_id"] for r in rows]
def delete_metadata_condition(project_id, metadata_key):
    sql = """\
    UPDATE public.projects_conditions
    SET filters=(SELECT COALESCE(jsonb_agg(elem), '[]'::jsonb)
                 FROM jsonb_array_elements(filters) AS elem
                 WHERE NOT (elem ->> 'type' = 'metadata'
                            AND elem ->> 'source' = %(metadata_key)s))
    WHERE project_id = %(project_id)s
      AND jsonb_typeof(filters) = 'array'
      AND EXISTS (SELECT 1
                  FROM jsonb_array_elements(filters) AS elem
                  WHERE elem ->> 'type' = 'metadata'
                    AND elem ->> 'source' = %(metadata_key)s);"""
    with pg_client.PostgresClient() as cur:
        query = cur.mogrify(sql, {"project_id": project_id, "metadata_key": metadata_key})
        cur.execute(query)
def rename_metadata_condition(project_id, old_metadata_key, new_metadata_key):
    sql = """\
    UPDATE public.projects_conditions
    SET filters = (SELECT jsonb_agg(CASE
                                        WHEN elem ->> 'type' = 'metadata' AND elem ->> 'source' = %(old_metadata_key)s
                                            THEN elem || ('{"source": "'||%(new_metadata_key)s||'"}')::jsonb
                                        ELSE elem END)
                   FROM jsonb_array_elements(filters) AS elem)
    WHERE project_id = %(project_id)s
      AND jsonb_typeof(filters) = 'array'
      AND EXISTS (SELECT 1
                  FROM jsonb_array_elements(filters) AS elem
                  WHERE elem ->> 'type' = 'metadata'
                    AND elem ->> 'source' = %(old_metadata_key)s);"""
    with pg_client.PostgresClient() as cur:
        query = cur.mogrify(sql, {"project_id": project_id, "old_metadata_key": old_metadata_key,
                                  "new_metadata_key": new_metadata_key})
        cur.execute(query)
# TODO: make project conditions use metadata-column-name instead of metadata-key
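
To make the jsonb surgery in these two functions concrete, the equivalent transformation over a sample `filters` value looks like this (illustrative data only; the real rewrite happens inside Postgres):

filters = [
    {"type": "metadata", "source": "plan", "value": ["pro"]},
    {"type": "duration", "value": [5000]},
]

# delete_metadata_condition(project_id, "plan") keeps only the second element:
remaining = [f for f in filters
             if not (f.get("type") == "metadata" and f.get("source") == "plan")]

# rename_metadata_condition(project_id, "plan", "tier") rewrites the source key:
renamed = [{**f, "source": "tier"}
           if f.get("type") == "metadata" and f.get("source") == "plan" else f
           for f in filters]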

@@ -14,7 +14,7 @@ def reset(data: schemas.ForgetPasswordPayloadSchema, background_tasks: Backgroun
if helper.allow_captcha() and not captcha.is_valid(data.g_recaptcha_response):
return {"errors": ["Invalid captcha."]}
if not smtp.has_smtp():
return {"errors": ["no SMTP configuration found, you can ask your admin to reset your password"]}
return {"errors": ["Email delivery failed due to invalid SMTP configuration. Please contact your admin."]}
a_user = users.get_by_email_only(data.email)
if a_user:
invitation_link = users.generate_new_invitation(user_id=a_user["userId"])

@@ -0,0 +1,13 @@
import logging
from decouple import config

logger = logging.getLogger(__name__)

from . import sessions_pg
from . import sessions_pg as sessions_legacy
from . import sessions_ch

if config("EXP_METRICS", cast=bool, default=False):
    from . import sessions_ch as sessions
else:
    from . import sessions_pg as sessions
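
Since `EXP_METRICS` is resolved at import time, callers are expected to import the `sessions` alias rather than a concrete backend, and flipping the flag requires a restart. A hypothetical consumer (the package path and the shared `search` function are assumptions, not part of this diff):

from chalicelib.core import sessions as sessions_pkg  # assumed package path

def recent_session_count(project_id: int) -> int:
    # Resolves to sessions_ch when EXP_METRICS is set, sessions_pg otherwise;
    # sessions_legacy always points at the Postgres implementation.
    return len(sessions_pkg.sessions.search(project_id=project_id))  # assumed API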

@@ -1,10 +1,12 @@
from decouple import config
from chalicelib.utils import helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils import pg_client
from chalicelib.core import integrations_manager, integration_base_issue
import json
from decouple import config
from chalicelib.core.issue_tracking import integrations_manager, base_issue
from chalicelib.utils import helper
from chalicelib.utils import pg_client
from chalicelib.utils.TimeUTC import TimeUTC
def __get_saved_data(project_id, session_id, issue_id, tool):
with pg_client.PostgresClient() as cur:
@@ -39,8 +41,8 @@ def create_new_assignment(tenant_id, project_id, session_id, creator_id, assigne
issue = integration.issue_handler.create_new_assignment(title=title, assignee=assignee, description=description,
issue_type=issue_type,
integration_project_id=integration_project_id)
except integration_base_issue.RequestException as e:
return integration_base_issue.proxy_issues_handler(e)
except base_issue.RequestException as e:
return base_issue.proxy_issues_handler(e)
if issue is None or "id" not in issue:
return {"errors": ["something went wrong while creating the issue"]}
with pg_client.PostgresClient() as cur:

File diff suppressed because it is too large

@@ -0,0 +1 @@
from .sessions_devtool import *

@@ -1,9 +1,10 @@
from decouple import config
import schemas
from chalicelib.utils.storage import StorageClient
def __get_devtools_keys(project_id, session_id):
def get_devtools_keys(project_id, session_id):
params = {
"sessionId": session_id,
"projectId": project_id
@@ -13,9 +14,9 @@ def __get_devtools_keys(project_id, session_id):
]
def get_urls(session_id, project_id, check_existence: bool = True):
def get_urls(session_id, project_id, context: schemas.CurrentContext, check_existence: bool = True):
results = []
for k in __get_devtools_keys(project_id=project_id, session_id=session_id):
for k in get_devtools_keys(project_id=project_id, session_id=session_id):
if check_existence and not StorageClient.exists(bucket=config("sessions_bucket"), key=k):
continue
results.append(StorageClient.get_presigned_url_for_sharing(
@@ -28,5 +29,5 @@ def get_urls(session_id, project_id, check_existence: bool = True):
def delete_mobs(project_id, session_ids):
for session_id in session_ids:
for k in __get_devtools_keys(project_id=project_id, session_id=session_id):
for k in get_devtools_keys(project_id=project_id, session_id=session_id):
StorageClient.tag_for_deletion(bucket=config("sessions_bucket"), key=k)
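
The rename from `__get_devtools_keys` to `get_devtools_keys` makes the key layout reachable from other modules. A minimal sketch of the flow `get_urls` implements, skipping objects that do not exist and presigning the rest (the `StorageClient` calls mirror those above; the keyword arguments to the presign call are an assumption):

from decouple import config
from chalicelib.utils.storage import StorageClient

def presign_existing_devtools(project_id: int, session_id: int):
    # Mirrors get_urls(check_existence=True): missing objects are skipped,
    # everything else gets a time-limited sharing URL.
    urls = []
    bucket = config("sessions_bucket")
    for key in get_devtools_keys(project_id=project_id, session_id=session_id):
        if not StorageClient.exists(bucket=bucket, key=key):
            continue
        urls.append(StorageClient.get_presigned_url_for_sharing(bucket=bucket, key=key))
    return urls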

@@ -0,0 +1 @@
from .sessions_favorite import *

Some files were not shown because too many files have changed in this diff