Compare commits

217 commits

Author SHA1 Message Date
nick-delirium
90510aa33b ui: fix double metric selection in list 2025-06-06 16:19:54 +02:00
GitHub Action
96a70f5d41 Increment frontend chart version to v1.22.42 2025-06-04 11:41:56 +02:00
rjshrjndrn
d4a13edcf0 fix(actions): frontend image with proper tag
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-06-04 11:33:19 +02:00
GitHub Action
51fad91a22 Increment frontend chart version to v1.22.41 2025-06-04 10:48:50 +02:00
nick-delirium
36abcda1e1 ui: fix audioplayer start point 2025-06-04 10:39:08 +02:00
Mehdi Osman
dd5f464f73
Increment frontend chart version to v1.22.40 (#3479)
Co-authored-by: GitHub Action <action@github.com>
2025-06-03 16:22:12 +02:00
Delirium
f9ada41272
ui: recreate period on db visit (#3478) 2025-06-03 16:05:52 +02:00
rjshrjndrn
9e24a3583e feat(nginx): add integrations endpoint with CORS support
Add new /integrations/ location block that proxies requests to
integrations-openreplay:8080 service. Includes proper CORS headers
for cross-origin requests and WebSocket upgrade support.

- Rewrite /integrations/ path to root
- Configure proxy headers for forwarding
- Set connection timeouts for stability
- Add CORS headers for API access

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-06-02 10:55:50 +02:00
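A quick way to sanity-check the behavior this commit describes (hostnames and path below are assumed, not taken from the repo): a cross-origin request through the proxy should come back with the CORS headers set.

    # Hypothetical smoke test for the new /integrations/ location:
    # the response should carry Access-Control-Allow-Origin if both the
    # proxy to integrations-openreplay:8080 and the CORS headers work.
    curl -is 'https://openreplay.example.com/integrations/health' \
      -H 'Origin: https://app.example.com' | grep -i 'access-control-allow-origin'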
Taha Yassine Kraiem
0a3129d3cd fix(chalice): fixed JIRA integration 2025-05-30 15:25:41 +02:00
Mehdi Osman
99d61db9d9
Increment frontend chart version to v1.22.39 (#3460)
Co-authored-by: GitHub Action <action@github.com>
2025-05-30 15:07:29 +02:00
Delirium
133958622e
ui: fix alert create button (#3459) 2025-05-30 14:56:21 +02:00
GitHub Action
fb021f606f Increment frontend chart version to v1.22.38 2025-05-29 12:21:04 +02:00
rjshrjndrn
a2905fa8ed fix: move cd - command after git operations in patch workflow
Move the directory restoration command after the git operations to
ensure all git commands execute in the correct working directory
before returning to the previous directory.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-29 12:16:28 +02:00
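The ordering this fix enforces, as a sketch (paths assumed): run every git command inside the working directory first, and only then return with `cd -`.

    cd scripts/helmcharts                       # enter the git working directory
    git add openreplay/charts/frontend/Chart.yaml
    git commit -m "Increment frontend chart version"
    git push
    cd -                                        # restore the previous directory last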
rjshrjndrn
beec2283fd refactor(ci): restructure patch-build workflow script
- Extract inline bash script into structured functions
- Add proper error handling with set -euo pipefail
- Improve variable scoping with readonly and local declarations
- Add descriptive function names and comments
- Fix shell quoting and parameter expansion
- Consolidate build logic into reusable functions
- Add proper cleanup of temporary files
- Improve readability and maintainability of the CI script

The refactored script maintains the same functionality while being
more robust and easier to understand.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-29 12:16:28 +02:00
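A minimal sketch of the conventions this refactor applies (illustrative only; the full refactored script appears in the patch-build diff further down this page):

    set -euo pipefail               # fail on errors, unset variables, pipe failures
    readonly WORKING_DIR=$(pwd)     # constants declared readonly

    image_version() {
      local service=$1              # function-local variable scoping
      local chart="$WORKING_DIR/scripts/helmcharts/openreplay/charts/$service/Chart.yaml"
      # bump the patch component of AppVersion, e.g. 1.22.41 -> 1.22.42
      yq eval '.AppVersion' "$chart" | awk -F. '{$NF += 1; print $1"."$2"."$3}'
    }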
GitHub Action
6c8b55019e Increment frontend chart version 2025-05-29 10:29:46 +02:00
rjshrjndrn
e3e3e11227 fix(action): proper registry
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-29 10:18:55 +02:00
Shekar Siri
c6f7de04cc Revert "fix(ui): new card data state is not updating"
This reverts commit 2921c17cbf.
2025-05-28 22:16:00 +02:00
Shekar Siri
2921c17cbf fix(ui): new card data state is not updating 2025-05-28 19:49:01 +02:00
Mehdi Osman
7eb3f5c4c8
Increment frontend chart version (#3436)
Co-authored-by: GitHub Action <action@github.com>
2025-05-26 16:10:35 +02:00
Rajesh Rajendran
5a9a8e588a
chore(actions): rebase only if not main (#3435)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-26 16:04:50 +02:00
Rajesh Rajendran
4b14258266
fix(action): clone repo (#3433)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-26 15:50:13 +02:00
Rajesh Rajendran
744d2d4311
actions fix or 2070 (#3432)
* chore(build): Better error handling

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* fix(build): remove fetch depth, as it might cause issue in rebase

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* fix(build): proper platform

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

---------

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-26 15:45:48 +02:00
Taha Yassine Kraiem
64242a5dc0 refactor(DB): changed supported platforms in CH 2025-05-26 11:51:49 +02:00
Rajesh Rajendran
cae3002697
feat(ci): Support building from branch for old patch (#3419)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-20 15:19:04 +02:00
GitHub Action
3d3c62196b Increment frontend chart version 2025-05-20 11:44:16 +02:00
nick-delirium
e810958a5d ui: fix ant imports 2025-05-20 11:26:20 +02:00
nick-delirium
39fa9787d1 ui: prevent network row modal from changing replayer time 2025-05-20 11:21:50 +02:00
nick-delirium
c9c1ad4dde ui: comments etc 2025-05-20 11:21:50 +02:00
nick-delirium
d9868928be ui: improve network panel row mapping 2025-05-20 11:21:50 +02:00
GitHub Action
a460d8c9a2 Increment frontend chart version 2025-05-15 15:18:19 +02:00
nick-delirium
930417aab4 ui: fix session search on url change 2025-05-15 15:12:30 +02:00
GitHub Action
07bc184f4d Increment chalice chart version 2025-05-14 18:59:43 +02:00
Rajesh Rajendran
71b7cca569
Patch/api v1.22.0 (#3401)
* fix(chalice): fixed duplicate autocomplete values

* ci(actions): possible fix for pull --rebase

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

---------

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
Co-authored-by: Taha Yassine Kraiem <tahayk2@gmail.com>
2025-05-14 18:42:25 +02:00
Mehdi Osman
355d27eaa0
Increment frontend chart version (#3397)
Co-authored-by: GitHub Action <action@github.com>
2025-05-13 13:38:15 +02:00
Mehdi Osman
66b485cccf
Increment db chart version (#3396)
Co-authored-by: GitHub Action <action@github.com>
2025-05-13 10:34:28 +02:00
Alexander
de33a42151
feat(db): custom event's ts (#3395) 2025-05-12 17:52:24 +02:00
Rajesh Rajendran
f12bdebf82
ci(actions): fix push denied (#3392) (#3393) (#3394)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-12 17:19:41 +02:00
Rajesh Rajendran
bbfa20c693
ci(actions): fix push denied (#3392) (#3393)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-12 16:58:19 +02:00
Rajesh Rajendran
f264ba043d
ci(actions): fix push denied (#3392)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-12 16:55:23 +02:00
Rajesh Rajendran
a05dce8125
main (#3391)
* ci(actions): Update pr description

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* ci(actions): run only on pull request merge

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

---------

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-12 16:50:20 +02:00
Mehdi Osman
3a1635d81f
Increment frontend chart version (#3389)
Co-authored-by: GitHub Action <action@github.com>
2025-05-12 16:12:43 +02:00
Delirium
ccb332c636
ui: change <slot> check (#3388) 2025-05-12 16:02:26 +02:00
Rajesh Rajendran
80ffa15959
ci(actions): Auto update tag for patch build (#3387)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-12 15:54:10 +02:00
Rajesh Rajendran
b2e961d621
ci(actions): Auto update tag for patch build (#3386)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-12 15:49:19 +02:00
Mehdi Osman
b4d0598f23
Increment frontend chart version (#3385)
Co-authored-by: GitHub Action <action@github.com>
2025-05-12 15:46:29 +02:00
Delirium
e77f083f10
ui: fixup toggler closing (#3384) 2025-05-12 15:40:30 +02:00
Delirium
58da1d3f64
fix litjs support, fix autocomplete modal options reset, fix dashboard chart density (#3382)
* Litjs fixes2 (#3381)

* ui: fixes for litjs capture

* ui: introduce vmode for lwc light dom

* ui: fixup the mode toggle and remover

* ui: fix filter options reset, fix dashboard chart density
2025-05-12 15:27:44 +02:00
GitHub Action
447fc26a2a Increment frontend chart version 2025-05-12 10:46:33 +02:00
nick-delirium
9bdf6e4f92 ui: fix heatmaps crash 2025-05-12 10:37:48 +02:00
GitHub Action
01f403e12d Increment chalice chart version 2025-05-07 12:28:44 +02:00
Taha Yassine Kraiem
39eb943b86 fix(chalice): fixed get error's details 2025-05-07 12:15:33 +02:00
GitHub Action
366b0d38b0 Increment frontend chart version 2025-05-06 16:28:28 +02:00
nick-delirium
f4d5b3c06e ui: fix max meta length, add horizontal layout for player 2025-05-06 16:23:47 +02:00
Mehdi Osman
93ae18133e
Increment frontend chart version (#3366)
Co-authored-by: GitHub Action <action@github.com>
2025-05-06 13:16:57 +02:00
Andrey Babushkin
fbe5d78270
Revert update (#3365)
* Revert "Increment chalice chart version"

This reverts commit 5e0e5730ba.

* revert updates

* changed chalice version
2025-05-06 13:08:08 +02:00
Mehdi Osman
b803eed1d4
Increment frontend chart version (#3362)
Co-authored-by: GitHub Action <action@github.com>
2025-05-05 17:49:39 +02:00
Andrey Babushkin
9ed3cb1b7e
Add searched events (#3361)
* add filtered events to search

* removed consoles

* changed styles to tailwind

* changed styles to tailwind

* fixed errors
2025-05-05 17:40:10 +02:00
GitHub Action
5e0e5730ba Increment chalice chart version 2025-05-05 17:04:29 +02:00
Taha Yassine Kraiem
d78b33dcd2 refactor(DB): remove TTL for CH tables 2025-05-05 16:49:37 +02:00
Taha Yassine Kraiem
4b1ca200b4 fix(chalice): fixed empty error_id for table of errors 2025-05-05 16:49:37 +02:00
rjshrjndrn
08d930f9ff fix(docker-compose): proper volume path #3279
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-28 17:28:40 +02:00
Mehdi Osman
da37809bc8
Increment frontend chart version (#3345)
Co-authored-by: GitHub Action <action@github.com>
2025-04-28 11:38:04 +02:00
Andrey Babushkin
d922fc7ad5
Patch frontend inline css (#3344)
* add inlineCss enum

* updated changelog
2025-04-28 11:29:53 +02:00
GitHub Action
796360fdd2 Increment frontend chart version 2025-04-28 11:01:55 +02:00
nick-delirium
13dbb60d8b ui: fix velement applychanges 2025-04-28 10:40:11 +02:00
Андрей Бабушкин
9e20a49128 add slot tag to custom elements 2025-04-28 10:34:43 +02:00
nick-delirium
91f8cc1399 ui: move debouncecall 2025-04-28 10:34:43 +02:00
Andrey Babushkin
f8ba3f6d89 Css batching (#3326)
* tracker: initial css inlining functionality

* tracker: add tests, adjust sheet id, stagger rule sending

* ui: reroute custom html component fragments

* removed sorting

---------

Co-authored-by: nick-delirium <nikita@openreplay.com>
2025-04-28 10:34:43 +02:00
Delirium
85e30b3692 tracker css batching/inlining (#3334)
* tracker: initial css inlining functionality

* tracker: add tests, adjust sheet id, stagger rule sending

* removed sorting

* upgrade css inliner

* ui: better logging for counter

* tracker: force-fetch mode for cssInliner

* tracker: fix ts warns

* tracker: use debug opts

* tracker: 16.2.0 changelogs, inliner opts

* tracker: remove debug options

---------

Co-authored-by: Андрей Бабушкин <andreybabushkin2000@gmail.com>
2025-04-28 10:34:43 +02:00
nick-delirium
0360e3726e ui: fixup autoplay on inactive tabs 2025-04-28 10:34:43 +02:00
nick-delirium
77bbb5af36 tracker: update css inject 2025-04-28 10:34:43 +02:00
Andrey Babushkin
ab0d4cfb62 Css inliner tuning (#3337)
* tracker: don't send double sheets

* tracker: don't send double sheets

* tracker: slot checker

* add slot tag to custom elements

---------

Co-authored-by: nick-delirium <nikita@openreplay.com>
2025-04-28 10:34:43 +02:00
Andrey Babushkin
3fd506a812 Css batching (#3326)
* tracker: initial css inlining functionality

* tracker: add tests, adjust sheet id, stagger rule sending

* ui: reroute custom html component fragments

* removed sorting

---------

Co-authored-by: nick-delirium <nikita@openreplay.com>
2025-04-28 10:34:43 +02:00
Shekar Siri
e8432e2dec change(ui): force the table cards events order to use AND instead of the default THEN 2025-04-24 10:09:19 +02:00
GitHub Action
5c76a8524c Increment frontend chart version 2025-04-23 18:41:46 +02:00
rjshrjndrn
3ba40a4811 feat(cli): Add support for image versions
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-23 17:52:50 +02:00
rjshrjndrn
f9a3f24590 fix(docker-compose): clickhouse migration
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-23 17:52:50 +02:00
rjshrjndrn
85d6d0abac fix(docker-compose): remove shell interpolation
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-23 17:52:50 +02:00
Rajesh Rajendran
b3594136ce or 1940 upstream docker release with the existing installation (#3316)
* chore(docker): Adding dynamic env generator
* ci(make): Create deployment yamls
* ci(make): Generating docker envs
* change env name structure
* proper env names
* chore(docker): clickhouse
* chore(docker-compose): generate env file format
* chore(docker-compose): Adding docker-compose
* chore(docker-compose): format make
* chore(docker-compose): Update version
* chore(docker-compose): adding new secrets
* ci(make): default target
* ci(Makefile): Update common protocol
* chore(docker-compose): refactor folder structure
* ci(make): rename to docker-envs
* feat(docker): add clickhouse volume definition
Add clickhouse persistent volume to the docker-compose configuration
to ensure data is preserved between container restarts.
* refactor: move env files to docker-envs directory
Updates all environment file references in docker-compose.yaml to use a
consistent directory structure, placing them under the docker-envs/
directory for better organization.
* fix(docker): rename imagestorage to images
 The `imagestorage` service and related environment file
 have been renamed to `images` for clarity and consistency.
 This change reflects the service's purpose of handling
 images.
* feat(docker): introduce docker-compose template
 A new docker-compose template
 to generate docker-compose files from a list of services.
 The template uses helm syntax.
* fix: Properly set FILES variable in Makefile
 The FILES variable was not being set correctly in the
 Makefile due to subshell issues. This commit fixes the
 variable assignment and ensures that the variable is
 accessible in subsequent commands.
* feat: Refactor docker-compose template for local development
 This commit introduces a complete overhaul of the
 docker-compose template, switching from a helm-based
 template to a native docker-compose.yml file. This
 change simplifies local development and makes it easier
 to manage the OpenReplay stack.
 The new template includes services for:
 - PostgreSQL
 - ClickHouse
 - Redis
 - MinIO
 - Nginx
 - Caddy
 It also includes migration jobs for setting up the
 database and MinIO.
* fix(docker-compose): Add fallback empty environment
 Add an empty environment to the docker-compose template to prevent
 errors when the env_file is missing. This ensures that the
 container can start even if the environment file is not present.
* feat(docker): Add domainname and aliases to services
 This change adds the `domainname` and `aliases` attributes to each
 service in the docker-compose.yaml file. This is to ensure that
 the services can communicate with each other using their fully
 qualified domain names. Also adds shared volume and empty
 environment variables.
* update version
* chore(docker): don't pull parallel
* chore(docker-compose): proper pull
* chore(docker-compose): Update db service urls
* fix(docker-compose): clickhouse url
* chore(clickhouse): Adding clickhouse db migration
* chore(docker-compose): Adding clickhouse
* fix(tpl): variable injection
* chore(fix): compose tpl variable rendering
* chore(docker-compose): Allow override pg variable
* chore(helm): remove assist-server
* chore(helm): pg integrations
* chore(nginx): removed services
* chore(docker-compose): Multiple aliases
* chore(docker-compose): Adding more env vars
* feat(install): Dynamically generate passwords
 Adds dynamic password generation by identifying `change_me_*` entries
 in `common.env` and replacing them with random passwords (see the
 sketch after this commit entry). This enhances security and
 simplifies initial setup.
 The changes include:
 - Replacing hardcoded password replacements with a loop
   that iterates through all `change_me_*` entries.
 - Using `grep` to find all `change_me_*` tokens.
 - Generating a random password for each token.
 - Updating the `common.env` file with the generated
   passwords.
* chore(docker-compose): disable clickhouse password
* fix(docker-compose): clickhouse-migration
* compose: chalice env
* chore(docker-compose): overlay vars
* chore(docker): Adding ch port
* chore(docker-compose): disable clickhouse password
* fix(docker-compose): migration name
* feat(docker): skip specific values
* chore(docker-compose): define namespace
---------

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-23 17:52:50 +02:00
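A hedged sketch of the `change_me_*` replacement loop described in the install changes above (the token convention and file name come from the commit message; the exact script is not shown here):

    # Replace every change_me_* placeholder in common.env with a random value.
    for token in $(grep -oE 'change_me_[A-Za-z0-9_]*' common.env | sort -u); do
      password=$(openssl rand -hex 16)               # one random password per token
      sed -i "s/${token}/${password}/g" common.env   # update the env file in place
    done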
GitHub Action
8f67edde8d Increment chalice chart version 2025-04-23 12:26:20 +02:00
Taha Yassine Kraiem
74ed29915b fix(chalice): enforce AND operator for table of requests and table of pages 2025-04-23 11:51:38 +02:00
GitHub Action
3ca71ec211 Increment chalice chart version 2025-04-22 19:23:11 +02:00
Taha Yassine Kraiem
0e469fd056 fix(chalice): fixes for table of requests 2025-04-22 19:03:35 +02:00
KRAIEM Taha Yassine
a8cb0e1643 fix(chalice): fixes for table of requests 2025-04-22 19:03:35 +02:00
GitHub Action
e171f0d8d5 Increment frontend chart version 2025-04-22 17:56:00 +02:00
nick-delirium
68ea291444 ui: fix timepicker and timezone interactions 2025-04-22 17:42:56 +02:00
GitHub Action
05cbb831c7 Increment frontend chart version 2025-04-22 10:32:00 +02:00
nick-delirium
5070ded1f4 ui: fix empty sank sessions fetch 2025-04-22 10:27:16 +02:00
GitHub Action
77610a4924 Increment frontend chart version 2025-04-16 17:45:25 +02:00
nick-delirium
7c34e4a0f6 ui: virtualizer for filter options list 2025-04-16 17:36:34 +02:00
GitHub Action
330e21183f Increment frontend chart version 2025-04-15 18:25:49 +02:00
Shekar Siri
30ce37896c feat(widget-sessions): improve session filtering logic
- Refactored session filtering logic to handle nested filters properly.
- Enhanced `fetchSessions` to ensure null checks and avoid errors.
- Updated `loadData` to handle `USER_PATH` and `HEATMAP` metric types.
- Improved UI consistency by adjusting spacing and formatting.
- Replaced redundant code with cleaner, more maintainable patterns.

This change improves the reliability and readability of the session
filtering and loading logic in the WidgetSessions component.
2025-04-15 18:15:03 +02:00
Andrey Babushkin
80a7817e7d
removed sorting by id (#3305) 2025-04-15 13:32:53 +02:00
Jorgen Evens
1b9c568cb1 fix(helm): fix broken volumeMounts indentation 2025-04-14 15:51:41 +02:00
GitHub Action
3759771ae9 Increment frontend chart version 2025-04-14 12:06:09 +02:00
Shekar Siri
f6ae5aba88 feat(SessionsBy): add specific filter for FETCH metric
Added a conditional check to handle the FETCH metric in the SessionsBy
component. When the metric is FETCH, a specific filter with key
FETCH_URL, operator is, and value derived from data.name is applied.
This ensures proper filtering behavior for FETCH-related metrics.
2025-04-14 12:01:51 +02:00
Mehdi Osman
5190dc512a
Increment frontend chart version (#3297)
Co-authored-by: GitHub Action <action@github.com>
2025-04-14 11:54:25 +02:00
Andrey Babushkin
3fcccb51e8
Patch assist (#3296)
* add global method support

* fix errors

* remove wrong updates

* remove wrong updates

* add onDrag as option

* fix wrong updates
2025-04-14 11:33:06 +02:00
GitHub Action
26077d5689 Increment frontend chart version 2025-04-11 14:56:11 +02:00
Shekar Siri
00c57348fd feat(search): enhance filter value handling
- Added `checkFilterValue` function to validate and update filter values
  in `SearchStoreLive`.
- Updated `FilterItem` to handle undefined `value` gracefully by providing
  a default empty array.

These changes improve robustness in filter value processing.
2025-04-11 14:36:25 +02:00
Shekar Siri
1f9bc5520a feat(search): add rounding to next minutes for date ranges
- Introduced `roundToNextMinutes` utility function to round timestamps
  to the next specified minute interval.
- Updated `Search` class to use the rounding function for non-custom
  date ranges.
- Modified `getRange` in `period.js` to align LAST_24_HOURS with
  15-minute intervals.
- Added `roundToNextMinutes` implementation in `utils/index.ts`.
2025-04-11 12:01:15 +02:00
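The rounding described here is plain ceiling arithmetic; a shell sketch (the real `roundToNextMinutes` lives in `utils/index.ts`, so these names are illustrative):

    # Round a Unix timestamp (seconds) up to the next N-minute boundary.
    round_to_next_minutes() {
      local ts=$1 minutes=$2
      local step=$(( minutes * 60 ))
      echo $(( (ts + step - 1) / step * step ))
    }
    round_to_next_minutes 1744380123 15   # -> 1744380900, the next 15-minute mark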
Shekar Siri
aef94618f6 Revert "Increment frontend chart version"
This reverts commit 2a330318c7.
2025-04-11 11:03:01 +02:00
GitHub Action
2a330318c7 Increment frontend chart version 2025-04-11 11:01:53 +02:00
Shekar Siri
6777d5ce2a feat(dashboard): set initial drill down period
Change default drill down period from LAST_7_DAYS to LAST_24_HOURS
and preserve current period when drilling down on chart click
2025-04-11 10:49:17 +02:00
rjshrjndrn
8a6f8fe91f chore(action): cloning specific tag
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-10 15:45:50 +02:00
Mehdi Osman
7b078fed4c
Increment frontend chart version (#3278)
Co-authored-by: GitHub Action <action@github.com>
2025-04-07 15:24:32 +02:00
Andrey Babushkin
894d4c84b3
Patch assist canvas (#3277)
* resolved conflict

* removed comments
2025-04-07 15:13:36 +02:00
Alexander
46390a3ba9
feat(assist-server): added the github action (#3275) 2025-04-07 10:43:48 +02:00
rjshrjndrn
621667f5ce ci(action): Build and patch github tags
feat(workflow): update commit timestamp for patching

Add a step to set the commit timestamp of the HEAD commit to be 1
second newer than the oldest of the last 3 commits. This ensures
proper chronological order while preserving the commit content.

- Fetch deeper history to access commit history
- Get oldest timestamp from recent commits
- Set new commit date with BSD-compatible date command
- Verify timestamp change with git log

The workflow was previously checking out 'main' branch with a
comment indicating it needed to be fixed. This change makes it
properly checkout the tag specified by the workflow input.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-04 16:09:05 +02:00
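The timestamp trick, condensed from the workflow steps that appear later on this page:

    # Take the oldest of the last 3 commit timestamps, add one second, and
    # re-date HEAD so it sorts just after that commit.
    OLDEST=$(git log -3 --pretty=format:'%at' | tail -1)
    NEW_DATE=$(perl -le 'print scalar gmtime($ARGV[0])." +0000"' $((OLDEST + 1)))
    GIT_COMMITTER_DATE="$NEW_DATE" git commit --amend --no-edit --date="$NEW_DATE"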
rjshrjndrn
a72f476f1c chore(ci): tag patching
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-04 13:15:56 +02:00
Mehdi Osman
623946ce4e
Increment assist chart version (#3267)
Co-authored-by: GitHub Action <action@github.com>
2025-04-03 13:29:02 -04:00
Mehdi Osman
2d099214fc
Increment frontend chart version (#3266)
Co-authored-by: GitHub Action <action@github.com>
2025-04-03 18:27:05 +02:00
Andrey Babushkin
b0e7054f89
Assist patch canvas (#3265)
* add agent info to assist and tracker

* removed AGENTS_CONNECTED event
2025-04-03 18:22:08 +02:00
Mehdi Osman
a9097270af
Increment chalice chart version (#3260)
Co-authored-by: GitHub Action <action@github.com>
2025-04-02 16:43:46 +02:00
Alexander
5d514ddaf2
feat(chalice): added for_spot=True for authenticate_sso (#3259) 2025-04-02 16:35:19 +02:00
Mehdi Osman
43688bb03b
Increment assist chart version (#3256)
Co-authored-by: GitHub Action <action@github.com>
2025-04-01 16:04:41 +02:00
Mehdi Osman
e050cee7bb
Increment frontend chart version (#3255)
Co-authored-by: GitHub Action <action@github.com>
2025-03-31 18:19:52 +02:00
Andrey Babushkin
6b35df7125
pulled updates (#3254) 2025-03-31 18:13:51 +02:00
GitHub Action
8e099b6dc3 Increment frontend chart version 2025-03-31 17:25:58 +02:00
nick-delirium
c0a4734054 ui: fix double fetches for sessions 2025-03-31 17:19:33 +02:00
GitHub Action
7de1efb5fe Increment frontend chart version 2025-03-31 12:08:45 +02:00
nick-delirium
d4ff28ddbe ui: fix modules label 2025-03-31 11:54:13 +02:00
nick-delirium
b2256f72d0 ui: fix modules mapper 2025-03-31 11:48:14 +02:00
GitHub Action
a63bda1c79 Increment frontend chart version 2025-03-31 11:17:34 +02:00
nick-delirium
3a0176789e ui: filter keys 2025-03-31 10:34:02 +02:00
nick-delirium
f2b7271fca ui: add old devtool filters 2025-03-31 10:31:06 +02:00
GitHub Action
d50f89662b Increment frontend chart version 2025-03-28 21:37:59 +01:00
GitHub Action
35051d201c Increment assist chart version 2025-03-28 21:37:59 +01:00
rjshrjndrn
214be95ecc fix(init): remove duplicate clone
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-28 21:25:24 +01:00
Delirium
dbc142c114
UI patches (28.03) (#3231)
* ui: force getting url for location in tabmanagers

* Assist add turn servers (#3229)

* fixed conflicts

* add offers

* add config to socket query

* add config to socket query

* add config init

* removed console logs

* removed wrong updates

* fixed conflicts

* add offers

* add config to socket query

* add config to socket query

* add config init

* removed console logs

* removed wrong updates

* ui: fix chat draggable, fix default params

---------

Co-authored-by: nick-delirium <nikita@openreplay.com>

* ui: fix spritemap generation for assist sessions

* ui: fix yarnlock

* fix errors

* updated widget link

* resolved conflicts

* updated widget url

---------

Co-authored-by: Andrey Babushkin <55714097+reyand43@users.noreply.github.com>
Co-authored-by: Андрей Бабушкин <andreybabushkin2000@gmail.com>
2025-03-28 17:32:12 +01:00
GitHub Action
443f5e8f08 Increment frontend chart version 2025-03-27 12:36:54 +01:00
Shekar Siri
9f693f220d refactor(auth): separate SSO support from enterprise edition
Add dedicated isSSOSupported property to correctly identify when SSO
authentication is available, properly handling the 'msaas' edition
case separately from enterprise edition checks. This fixes SSO
visibility in the login interface.
2025-03-27 12:28:10 +01:00
GitHub Action
5ab30380b0 Increment chalice chart version 2025-03-26 17:48:08 +01:00
Taha Yassine Kraiem
fc86555644 refactor(chalice): changed user-journey 2025-03-26 17:18:17 +01:00
GitHub Action
2a3c611a27 Increment frontend chart version 2025-03-26 16:48:29 +01:00
Delirium
1d6fb0ae9e ui: shrink icons when no space, adjust player area for events export … (#3217)
* ui: shrink icons when no space, adjust player area for events export panel, fix panel size

* ui: rm log
2025-03-26 16:38:48 +01:00
GitHub Action
bef91a6136 Increment frontend chart version 2025-03-25 18:15:34 +01:00
Shekar Siri
1e2bd19d32 fix(dashboard): update filter condition in MetricsList
Change the filter type comparison from checking against 'all' to
checking against an empty string. This ensures proper filtering
behavior when filtering metrics in the dashboard component.
2025-03-25 18:10:13 +01:00
rjshrjndrn
3b58cb347e chore(http): remove default token_string
scripts/helmcharts/openreplay/charts/http/scripts/entrypoint.sh

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-24 19:31:01 +01:00
GitHub Action
ca4590501a Increment frontend chart version 2025-03-24 17:45:24 +01:00
Andrey Babushkin
fd12cc7585
fix(GraphQL): remove unused useTranslation hook (#3200) (#3206)
Co-authored-by: PiRDub <pirddeveloppeur@gmail.com>
2025-03-24 17:38:45 +01:00
rjshrjndrn
6abded53e0 feat(helm): add TOKEN_SECRET environment variable
Add TOKEN_SECRET environment variable to HTTP service deployment and
generate a random value for it in vars.yaml.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-24 16:55:35 +01:00
GitHub Action
82c5e5e59d Increment frontend chart version 2025-03-24 14:34:51 +01:00
nick-delirium
c77b0cc4de ui: fixes for onboarding ui 2025-03-24 14:30:22 +01:00
nick-delirium
de344e62ef ui: onboarding fixes 2025-03-24 14:30:22 +01:00
Mehdi Osman
deb78a62c0
Increment frontend chart version (#3189)
Co-authored-by: GitHub Action <action@github.com>
2025-03-21 11:00:14 +01:00
Shekar Siri
0724cf05f0
fix(auth): remove unnecessary captcha token validation (#3188)
The token validation checks were redundant as the validation is already
handled by the captcha wrapper component. This change simplifies the
password reset flow while maintaining security.
2025-03-21 10:55:39 +01:00
GitHub Action
cc704f1bc3 Increment frontend chart version 2025-03-20 16:18:42 +01:00
nick-delirium
4c159b2d26 ui: fix table column export 2025-03-20 16:08:58 +01:00
Mehdi Osman
42df33bc01
Increment assist chart version (#3181)
Co-authored-by: GitHub Action <action@github.com>
2025-03-19 14:58:26 +01:00
Alexander
ae95b48760
feat(assist): improved caching mechanism for cluster mode (#3180) 2025-03-19 14:53:58 +01:00
Mehdi Osman
4be3050e61
Increment frontend chart version (#3179)
Co-authored-by: GitHub Action <action@github.com>
2025-03-19 14:47:37 +01:00
Shekar Siri
8eec6e983b
feat(auth): implement withCaptcha HOC for consistent reCAPTCHA (#3177)
* feat(auth): implement withCaptcha HOC for consistent reCAPTCHA

This commit refactors the reCAPTCHA implementation across the application
by introducing a Higher Order Component (withCaptcha) that encapsulates
captcha verification logic. The changes:

- Create a reusable withCaptcha HOC in withRecaptcha.tsx
- Refactor Login, ResetPasswordRequest, and CreatePassword components
- Extract SSOLogin into a separate component
- Improve error handling and user feedback
- Standardize loading and verification states across forms
- Make captcha implementation more maintainable and consistent

* feat(auth): support msaas edition for enterprise features

Add msaas to the isEnterprise check alongside ee edition to properly
display enterprise features. Use userStore.isEnterprise in SSOLogin
component instead of directly checking authDetails.edition for
consistent
enterprise status detection.
2025-03-19 14:36:56 +01:00
Taha Yassine Kraiem
5fec615044 refactor(chalice): cleaned code
fix(chalice): fixed session-search-pg sortKey issue
fix(chalice): fixed CH-query-formatter to handle special chars
fix(chalice): fixed /ids response
2025-03-18 13:51:10 +01:00
Mehdi Osman
f77568a01c
Increment frontend chart version (#3167)
Co-authored-by: GitHub Action <action@github.com>
2025-03-18 13:45:09 +01:00
Shekar Siri
618e4dc59f
refactor(searchStore): reformat filterMap function parameters (#3166)
- Reformat the parameters of the filterMap function for better readability.
- Comment out the fetchSessions call in clearSearch method to avoid unnecessary session fetch.
2025-03-15 11:42:14 +01:00
nick-delirium
b94fcb11e5
ui: fix pageselect for insights 2025-03-14 17:35:54 +01:00
nick-delirium
f93ee6fb8f
ui: fix filekey on prefetched sessions 2025-03-14 17:30:00 +01:00
Alexander
23820b7ea5
feat(ender): grab all sessions per tick (#3163) 2025-03-14 17:16:56 +01:00
nick-delirium
e92bfe3cfe
ui: fix efs file replay 2025-03-14 17:06:36 +01:00
Gabriele Angrisani
102f0c7b06 fix redis volume reference folder (#2805) 2025-03-14 15:34:11 +01:00
Laurenz Glück
8d57cc55a5 fix: updates docker-compose setup to be compatible with v1.21.0 2025-03-14 15:34:11 +01:00
nick-delirium
24b36efc9d
ui: update env sample 2025-03-14 15:06:11 +01:00
Alexander
fe91cad4af feat(db): moved out the error_id from json 2025-03-14 14:51:19 +01:00
Taha Yassine Kraiem
033ffcb7b9 refactor(DB): changed product_analytics.events to expose error_id
fix(chalice): fixed search events by error
2025-03-14 14:32:17 +01:00
Taha Yassine Kraiem
499048e46c refactor(chalice): changed pg_client to send keep-alive signals 2025-03-14 14:32:17 +01:00
nick-delirium
5b6c653862
ui: fix rm unused assets 2025-03-14 13:31:09 +01:00
nick-delirium
4169ab87c6
ui: fix comparison reset 2025-03-14 13:19:00 +01:00
Taha Yassine Kraiem
80229a0214 refactor(chalice): use new errors columns 2025-03-14 10:53:09 +01:00
Taha Yassine Kraiem
fb48ba8300 refactor(chalice): refactored errors helper
refactor(chalice): removed errors-tags
2025-03-14 10:53:09 +01:00
Taha Yassine Kraiem
b0f3c50c0f refactor(DB): DB changes 2025-03-14 10:53:09 +01:00
Alexander
5806362ce0 feat(db): added missing columns for events 2025-03-14 10:31:15 +01:00
rjshrjndrn
2458af460b feat(docker): switch to Chainguard nginx image
Replace nginx:alpine with cgr.dev/chainguard/nginx base image and
remove unnecessary permission changes since the Chainguard image
handles permissions differently and runs with proper security defaults.
2025-03-13 17:45:38 +01:00
Alexander
6c891cb131 feat(db): removed js exception tags in CH 2025-03-13 17:25:53 +01:00
rjshrjndrn
8e41c3ce91 fix(chalice): default envs
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 17:08:50 +01:00
rjshrjndrn
14d0a77a73 feat(chalice): add JWT expiration configuration
Add JWT_EXPIRATION environment variable to the chalice helm chart with
default value set to 86400 s (24 hours).
2025-03-13 17:00:23 +01:00
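A hypothetical override at deploy time (the values path is assumed, not taken from the chart):

    # Shorten the JWT lifetime to 12 hours; 86400 s (24 h) is the chart default.
    helm upgrade openreplay openreplay -n app -f vars.yaml \
      --set chalice.env.JWT_EXPIRATION=43200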
rjshrjndrn
0333c56d52 feat(clickhouse): Upgrade version
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 16:39:03 +01:00
rjshrjndrn
52d4abb61c fix(cli): download clickhouse
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 16:36:20 +01:00
rjshrjndrn
b0e7d3aa79 ci(make): download-cli
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 16:19:13 +01:00
rjshrjndrn
e9eea78283 ci(make): Upgrade installation
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 16:04:25 +01:00
rjshrjndrn
0f4c509582 ci(make): get version for chart
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 16:02:17 +01:00
rjshrjndrn
820bca6308 build(api): implement multi-stage Dockerfile
- Use multi-stage build to reduce final image size
- Move build dependencies to builder stage
- Copy only necessary files to final stage
- Use UV for faster Python package installation
- Clean up duplicate operations
2025-03-13 14:51:59 +01:00
rjshrjndrn
51e71a4d52 ci(make): pull the latest images
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 14:37:13 +01:00
rjshrjndrn
2c9e9576c5 ci(Makefile): clean installation
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 14:12:53 +01:00
Taha Yassine Kraiem
9e7f751df6 fix(chalice): fixed EE error-details undefined status 2025-03-13 13:59:46 +01:00
Taha Yassine Kraiem
b6d0e71544 fix(chalice): fixed EE imports for errors 2025-03-13 13:53:10 +01:00
rjshrjndrn
93a9e03026 fix(api): improve Dockerfile with best practices
- Use lowercase labels in accordance with Docker conventions
- Pin package versions for better build reproducibility
- Consolidate RUN commands to reduce image layers
- Use JSON array notation for CMD instruction
- Restore GIT_SHA label for proper image versioning
- Maintain consistent code formatting and ordering

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 13:48:53 +01:00
rjshrjndrn
a62f6f6bb0 ci(build): remove peers
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 13:48:53 +01:00
Taha Yassine Kraiem
cd80aa85ea refactor(chalice): refactored error-details code 2025-03-13 13:46:03 +01:00
Taha Yassine Kraiem
961c685310 refactor(chalice): refactored error-details code
refactor(chalice): moved error-details to use new product-analytics DB structure
2025-03-13 13:46:03 +01:00
Alexander
160b5ac2c8 feat(metrics): moved back the metrics endpoint to support the undocumented functionality 2025-03-13 13:34:23 +01:00
nick-delirium
1cca40d4c5
ui: fix calendar self-close 2025-03-13 13:08:44 +01:00
rjshrjndrn
bd2a59266d feat(api): migrate to uv package manager
- Add uv as dependency manager for faster installations
- Remove pinned versions from apk packages for better maintenance
- Fix lxml installation and dependency issues in requirements.txt
- Add environment setup steps in Dockerfile

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 13:01:02 +01:00
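The uv flow this migration describes, as a sketch (the actual Dockerfile steps are assumed):

    # Bootstrap uv, then let it resolve and install the API requirements;
    # --system targets the interpreter directly, as is common in containers.
    pip install uv
    uv pip install --system -r requirements.txt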
Alexander
8acee7d357 feat(connector): fixed several release bugs 2025-03-13 12:28:23 +01:00
rjshrjndrn
fb49c715cb refactor(chalice): remove peers from health checks and fix formatting
Updated health.py to remove the peers-openreplay service from health
checks and applied consistent formatting throughout the file. This
includes proper line breaks, trailing commas for multi-line data
structures, and consistent indentation patterns.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 11:51:04 +01:00
nick-delirium
221bee70f5
ui: add hash to css filenames 2025-03-13 11:45:22 +01:00
rjshrjndrn
8eb431f70c fix(docker): pin pip packages in API Dockerfile
Add exact version pinning for all packages installed via pip to improve
build reproducibility and security. Also consolidates package install
steps and improves the docker image build process with proper cleanup
of build dependencies.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 11:38:57 +01:00
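What exact pinning looks like in practice (these package versions are placeholders, not the ones in the image):

    # Reproducible installs: every package carries an exact version.
    pip install --no-cache-dir lxml==5.2.1 requests==2.32.3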
rjshrjndrn
820b0954e7 ci(makefile): install test
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-13 11:13:38 +01:00
Andrey Babushkin
19b350761c
add few locales (#3151) 2025-03-13 10:08:08 +01:00
Alexander
3b3e95a413
Observability upgrade (#3146)
* feat(metrics): grand update

* feat(metrics): fixed missing part in ee tracer

* feat(assets): added missing arg

* feat(metrics): fixed naming problems
2025-03-13 08:09:29 +01:00
Taha Yassine Kraiem
fe1130397c fix(alerts): fixed crash while processing over CH 2025-03-12 17:15:41 +01:00
Taha Yassine Kraiem
fd4b71d854 fix(chalice): fixed EE docker image 2025-03-12 16:47:04 +01:00
nick-delirium
404ffd5b2d
ui: add hash to css filenames 2025-03-12 16:46:07 +01:00
Taha Yassine Kraiem
5af63eb9f1 fix(chalice): fixed legacy sessions search for EE 2025-03-12 15:14:54 +01:00
Shekar Siri
038bfee383 change(ui): table loader 2025-03-12 14:41:33 +01:00
Taha Yassine Kraiem
bd09160a4a fix(chalice): fixed search usability-test's sessions 2025-03-12 13:43:01 +01:00
nick-delirium
136a5b2bfb
ui: wrap title with i18n 2025-03-12 13:21:51 +01:00
Taha Yassine Kraiem
33deaef0ce fix(chalice): changes sessions_search importer 2025-03-12 12:28:16 +01:00
Taha Yassine Kraiem
3f541e5d59 refactor(chalice): changed .gitignore 2025-03-12 12:23:03 +01:00
Kraiem Taha Yassine
ae463db150
fix(chalice): fixed empty bookmark/vault projects null-timestamp issue (#3142) 2025-03-12 12:11:34 +01:00
Taha Yassine Kraiem
9eb19fedf1 refactor(chalice): changed import/flags logic 2025-03-12 11:54:04 +01:00
Andrey Babushkin
5df934c9ce
fixed sessions layout (#3138) 2025-03-12 09:21:16 +01:00
Taha Yassine Kraiem
e027a2d016 fix(chalice): fixed circular imports 2025-03-11 17:02:01 +01:00
nick-delirium
c7f3c78740
ui: fix heatmap scaling (use true document height) 2025-03-11 17:00:46 +01:00
Taha Yassine Kraiem
3245579b7c fix(chalice): remove duplicate sessions when using MV 2025-03-11 16:37:00 +01:00
nick-delirium
0107c9c523
ui: fix in-session clickmap refresh 2025-03-11 16:24:26 +01:00
nick-delirium
05f4054b31
ui: fix sank tooltip spacing 2025-03-11 16:20:15 +01:00
398 changed files with 9087 additions and 5416 deletions

.github/workflows/assist-server-ee.yaml (new file, 122 lines)

@@ -0,0 +1,122 @@
# This action will push the assist changes to aws
on:
  workflow_dispatch:
    inputs:
      skip_security_checks:
        description: "Skip Security checks if there is a unfixable vuln or error. Value: true/false"
        required: false
        default: "false"
  push:
    branches:
      - dev
    paths:
      - "ee/assist-server/**"
name: Build and Deploy Assist-Server EE
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          # We need to diff with old commit
          # to see which workers got changed.
          fetch-depth: 2
      - uses: ./.github/composite-actions/update-keys
        with:
          assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
          assist_key: ${{ secrets.ASSIST_KEY }}
          domain_name: ${{ secrets.EE_DOMAIN_NAME }}
          jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
          jwt_secret: ${{ secrets.EE_JWT_SECRET }}
          jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
          jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
          license_key: ${{ secrets.EE_LICENSE_KEY }}
          minio_access_key: ${{ secrets.EE_MINIO_ACCESS_KEY }}
          minio_secret_key: ${{ secrets.EE_MINIO_SECRET_KEY }}
          pg_password: ${{ secrets.EE_PG_PASSWORD }}
          registry_url: ${{ secrets.OSS_REGISTRY_URL }}
        name: Update Keys
      - name: Docker login
        run: |
          docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
      - uses: azure/k8s-set-context@v1
        with:
          method: kubeconfig
          kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
        id: setcontext
      - name: Building and Pushing Assist-Server image
        id: build-image
        env:
          DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
          IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}-ee
          ENVIRONMENT: staging
        run: |
          skip_security_checks=${{ github.event.inputs.skip_security_checks }}
          cd assist-server
          PUSH_IMAGE=0 bash -x ./build.sh ee
          [[ "x$skip_security_checks" == "xtrue" ]] || {
            curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
            images=("assist-server")
            for image in ${images[*]};do
              ./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
            done
            err_code=$?
            [[ $err_code -ne 0 ]] && {
              exit $err_code
            }
          } && {
            echo "Skipping Security Checks"
          }
          images=("assist-server")
          for image in ${images[*]};do
            docker push $DOCKER_REPO/$image:$IMAGE_TAG
          done
      - name: Creating old image input
        run: |
          #
          # Create yaml with existing image tags
          #
          kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
            tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
          echo > /tmp/image_override.yaml
          for line in `cat /tmp/image_tag.txt`;
          do
          image_array=($(echo "$line" | tr ':' '\n'))
          cat <<EOF >> /tmp/image_override.yaml
          ${image_array[0]}:
            image:
              # We've to strip off the -ee, as helm will append it.
              tag: `echo ${image_array[1]} | cut -d '-' -f 1`
          EOF
          done
      - name: Deploy to kubernetes
        run: |
          pwd
          cd scripts/helmcharts/
          # Update changed image tag
          sed -i "/assist-server/{n;n;n;s/.*/    tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
          cat /tmp/image_override.yaml
          # Deploy command
          mkdir -p /tmp/charts
          mv openreplay/charts/{ingress-nginx,assist-server,quickwit,connector} /tmp/charts/
          rm -rf openreplay/charts/*
          mv /tmp/charts/* openreplay/charts/
          helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks --kube-version=$k_version | kubectl apply -f -
        env:
          DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
          # We're not passing -ee flag, because helm will add that.
          IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
          ENVIRONMENT: staging

.github/workflows/patch-build-old.yaml (new file, 189 lines)

@@ -0,0 +1,189 @@
# Ref: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions
on:
  workflow_dispatch:
    inputs:
      services:
        description: 'Comma separated names of services to build(in small letters).'
        required: true
        default: 'chalice,frontend'
      tag:
        description: 'Tag to update.'
        required: true
        type: string
      branch:
        description: 'Branch to build patches from. Make sure the branch is uptodate with tag. Else itll cause missing commits.'
        required: true
        type: string
name: Build patches from tag, rewrite commit HEAD to older timestamp, and Push the tag
jobs:
  deploy:
    name: Build Patch from old tag
    runs-on: ubuntu-latest
    env:
      DEPOT_TOKEN: ${{ secrets.DEPOT_TOKEN }}
      DEPOT_PROJECT_ID: ${{ secrets.DEPOT_PROJECT_ID }}
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          fetch-depth: 4
          ref: ${{ github.event.inputs.tag }}
      - name: Set Remote with GITHUB_TOKEN
        run: |
          git config --unset http.https://github.com/.extraheader
          git remote set-url origin https://x-access-token:${{ secrets.ACTIONS_COMMMIT_TOKEN }}@github.com/${{ github.repository }}.git
      - name: Create backup tag with timestamp
        run: |
          set -e # Exit immediately if a command exits with a non-zero status
          TIMESTAMP=$(date +%Y%m%d%H%M%S)
          BACKUP_TAG="${{ github.event.inputs.tag }}-backup-${TIMESTAMP}"
          echo "BACKUP_TAG=${BACKUP_TAG}" >> $GITHUB_ENV
          echo "INPUT_TAG=${{ github.event.inputs.tag }}" >> $GITHUB_ENV
          git tag $BACKUP_TAG || { echo "Failed to create backup tag"; exit 1; }
          git push origin $BACKUP_TAG || { echo "Failed to push backup tag"; exit 1; }
          echo "Created backup tag: $BACKUP_TAG"
          # Get the oldest commit date from the last 3 commits in raw format
          OLDEST_COMMIT_TIMESTAMP=$(git log -3 --pretty=format:"%at" | tail -1)
          echo "Oldest commit timestamp: $OLDEST_COMMIT_TIMESTAMP"
          # Add 1 second to the timestamp
          NEW_TIMESTAMP=$((OLDEST_COMMIT_TIMESTAMP + 1))
          echo "NEW_TIMESTAMP=$NEW_TIMESTAMP" >> $GITHUB_ENV
      - name: Setup yq
        uses: mikefarah/yq@master
      # Configure AWS credentials for the first registry
      - name: Configure AWS credentials for RELEASE_ARM_REGISTRY
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_DEPOT_ACCESS_KEY }}
          aws-secret-access-key: ${{ secrets.AWS_DEPOT_SECRET_KEY }}
          aws-region: ${{ secrets.AWS_DEPOT_DEFAULT_REGION }}
      - name: Login to Amazon ECR for RELEASE_ARM_REGISTRY
        id: login-ecr-arm
        run: |
          aws ecr get-login-password --region ${{ secrets.AWS_DEPOT_DEFAULT_REGION }} | docker login --username AWS --password-stdin ${{ secrets.RELEASE_ARM_REGISTRY }}
          aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin ${{ secrets.RELEASE_OSS_REGISTRY }}
      - uses: depot/setup-action@v1
      - name: Get HEAD Commit ID
        run: echo "HEAD_COMMIT_ID=$(git rev-parse HEAD)" >> $GITHUB_ENV
      - name: Define Branch Name
        run: echo "BRANCH_NAME=${{inputs.branch}}" >> $GITHUB_ENV
      - name: Build
        id: build-image
        env:
          DOCKER_REPO_ARM: ${{ secrets.RELEASE_ARM_REGISTRY }}
          DOCKER_REPO_OSS: ${{ secrets.RELEASE_OSS_REGISTRY }}
          MSAAS_REPO_CLONE_TOKEN: ${{ secrets.MSAAS_REPO_CLONE_TOKEN }}
          MSAAS_REPO_URL: ${{ secrets.MSAAS_REPO_URL }}
          MSAAS_REPO_FOLDER: /tmp/msaas
        run: |
          set -exo pipefail
          git config --local user.email "action@github.com"
          git config --local user.name "GitHub Action"
          git checkout -b $BRANCH_NAME
          working_dir=$(pwd)
          function image_version(){
            local service=$1
            chart_path="$working_dir/scripts/helmcharts/openreplay/charts/$service/Chart.yaml"
            current_version=$(yq eval '.AppVersion' $chart_path)
            new_version=$(echo $current_version | awk -F. '{$NF += 1 ; print $1"."$2"."$3}')
            echo $new_version
            # yq eval ".AppVersion = \"$new_version\"" -i $chart_path
          }
          function clone_msaas() {
            [ -d $MSAAS_REPO_FOLDER ] || {
              git clone -b $INPUT_TAG --recursive https://x-access-token:$MSAAS_REPO_CLONE_TOKEN@$MSAAS_REPO_URL $MSAAS_REPO_FOLDER
              cd $MSAAS_REPO_FOLDER
              cd openreplay && git fetch origin && git checkout $INPUT_TAG
              git log -1
              cd $MSAAS_REPO_FOLDER
              bash git-init.sh
              git checkout
            }
          }
          function build_managed() {
            local service=$1
            local version=$2
            echo building managed
            clone_msaas
            if [[ $service == 'chalice' ]]; then
              cd $MSAAS_REPO_FOLDER/openreplay/api
            else
              cd $MSAAS_REPO_FOLDER/openreplay/$service
            fi
            IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash build.sh >> /tmp/arm.txt
          }
          # Checking for backend images
          ls backend/cmd >> /tmp/backend.txt
          echo Services: "${{ github.event.inputs.services }}"
          IFS=',' read -ra SERVICES <<< "${{ github.event.inputs.services }}"
          BUILD_SCRIPT_NAME="build.sh"
          # Build FOSS
          for SERVICE in "${SERVICES[@]}"; do
            # Check if service is backend
            if grep -q $SERVICE /tmp/backend.txt; then
              cd backend
              foss_build_args="nil $SERVICE"
              ee_build_args="ee $SERVICE"
            else
              [[ $SERVICE == 'chalice' || $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && cd $working_dir/api || cd $SERVICE
              [[ $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && BUILD_SCRIPT_NAME="build_${SERVICE}.sh"
              ee_build_args="ee"
            fi
            version=$(image_version $SERVICE)
            echo IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
            IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
            echo IMAGE_TAG=$version-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
            IMAGE_TAG=$version-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
            if [[ "$SERVICE" != "chalice" && "$SERVICE" != "frontend" ]]; then
              IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
              echo IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
            else
              build_managed $SERVICE $version
            fi
            cd $working_dir
            chart_path="$working_dir/scripts/helmcharts/openreplay/charts/$SERVICE/Chart.yaml"
            yq eval ".AppVersion = \"$version\"" -i $chart_path
            git add $chart_path
            git commit -m "Increment $SERVICE chart version"
          done
      - name: Change commit timestamp
        run: |
          # Convert the timestamp to a date format git can understand
          NEW_DATE=$(perl -le 'print scalar gmtime($ARGV[0])." +0000"' $NEW_TIMESTAMP)
          echo "Setting commit date to: $NEW_DATE"
          # Amend the commit with the new date
          GIT_COMMITTER_DATE="$NEW_DATE" git commit --amend --no-edit --date="$NEW_DATE"
          # Verify the change
          git log -1 --pretty=format:"Commit now dated: %cD"
          # git tag and push
          git tag $INPUT_TAG -f
          git push origin $INPUT_TAG -f
      # - name: Debug Job
      #   if: ${{ failure() }}
      #   uses: mxschmitt/action-tmate@v3
      #   env:
      #     DOCKER_REPO_ARM: ${{ secrets.RELEASE_ARM_REGISTRY }}
      #     DOCKER_REPO_OSS: ${{ secrets.RELEASE_OSS_REGISTRY }}
      #     MSAAS_REPO_CLONE_TOKEN: ${{ secrets.MSAAS_REPO_CLONE_TOKEN }}
      #     MSAAS_REPO_URL: ${{ secrets.MSAAS_REPO_URL }}
      #     MSAAS_REPO_FOLDER: /tmp/msaas
      #   with:
      #     limit-access-to-actor: true

@@ -2,7 +2,6 @@
 on:
   workflow_dispatch:
-    description: 'This workflow will build for patches for latest tag, and will Always use commit from main branch.'
     inputs:
       services:
         description: 'Comma separated names of services to build(in small letters).'
@@ -20,12 +19,20 @@ jobs:
         DEPOT_PROJECT_ID: ${{ secrets.DEPOT_PROJECT_ID }}
     steps:
       - name: Checkout
-        uses: actions/checkout@v2
+        uses: actions/checkout@v4
         with:
-          fetch-depth: 1
+          fetch-depth: 0
+          token: ${{ secrets.GITHUB_TOKEN }}
       - name: Rebase with main branch, to make sure the code has latest main changes
         if: github.ref != 'refs/heads/main'
         run: |
           git pull --rebase origin main
+          git remote -v
+          git config --global user.email "action@github.com"
+          git config --global user.name "GitHub Action"
+          git config --global rebase.autoStash true
+          git fetch origin main:main
+          git rebase main
+          git log -3
       - name: Downloading yq
         run: |
@@ -48,6 +55,8 @@ jobs:
           aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin ${{ secrets.RELEASE_OSS_REGISTRY }}
       - uses: depot/setup-action@v1
+        env:
+          DEPOT_TOKEN: ${{ secrets.DEPOT_TOKEN }}
       - name: Get HEAD Commit ID
         run: echo "HEAD_COMMIT_ID=$(git rev-parse HEAD)" >> $GITHUB_ENV
       - name: Define Branch Name
@ -65,78 +74,168 @@ jobs:
MSAAS_REPO_CLONE_TOKEN: ${{ secrets.MSAAS_REPO_CLONE_TOKEN }}
MSAAS_REPO_URL: ${{ secrets.MSAAS_REPO_URL }}
MSAAS_REPO_FOLDER: /tmp/msaas
SERVICES_INPUT: ${{ github.event.inputs.services }}
run: |
set -exo pipefail
git config --local user.email "action@github.com"
git config --local user.name "GitHub Action"
git checkout -b $BRANCH_NAME
working_dir=$(pwd)
function image_version(){
local service=$1
chart_path="$working_dir/scripts/helmcharts/openreplay/charts/$service/Chart.yaml"
current_version=$(yq eval '.AppVersion' $chart_path)
new_version=$(echo $current_version | awk -F. '{$NF += 1 ; print $1"."$2"."$3}')
echo $new_version
# yq eval ".AppVersion = \"$new_version\"" -i $chart_path
#!/bin/bash
set -euo pipefail
# Configuration
readonly WORKING_DIR=$(pwd)
readonly BUILD_SCRIPT_NAME="build.sh"
readonly BACKEND_SERVICES_FILE="/tmp/backend.txt"
# Initialize git configuration
setup_git() {
git config --local user.email "action@github.com"
git config --local user.name "GitHub Action"
git checkout -b "$BRANCH_NAME"
}
function clone_msaas() {
[ -d $MSAAS_REPO_FOLDER ] || {
git clone -b dev --recursive https://x-access-token:$MSAAS_REPO_CLONE_TOKEN@$MSAAS_REPO_URL $MSAAS_REPO_FOLDER
cd $MSAAS_REPO_FOLDER
cd openreplay && git fetch origin && git checkout main # This have to be changed to specific tag
git log -1
cd $MSAAS_REPO_FOLDER
bash git-init.sh
git checkout
}
# Get and increment image version
image_version() {
local service=$1
local chart_path="$WORKING_DIR/scripts/helmcharts/openreplay/charts/$service/Chart.yaml"
local current_version new_version
current_version=$(yq eval '.AppVersion' "$chart_path")
new_version=$(echo "$current_version" | awk -F. '{$NF += 1; print $1"."$2"."$3}')
echo "$new_version"
}
function build_managed() {
local service=$1
local version=$2
echo building managed
clone_msaas
if [[ $service == 'chalice' ]]; then
cd $MSAAS_REPO_FOLDER/openreplay/api
else
cd $MSAAS_REPO_FOLDER/openreplay/$service
fi
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash build.sh >> /tmp/arm.txt
# Clone MSAAS repository if not exists
clone_msaas() {
if [[ ! -d "$MSAAS_REPO_FOLDER" ]]; then
git clone -b dev --recursive "https://x-access-token:${MSAAS_REPO_CLONE_TOKEN}@${MSAAS_REPO_URL}" "$MSAAS_REPO_FOLDER"
cd "$MSAAS_REPO_FOLDER"
cd openreplay && git fetch origin && git checkout main
git log -1
cd "$MSAAS_REPO_FOLDER"
bash git-init.sh
git checkout
fi
}
# Checking for backend images
ls backend/cmd >> /tmp/backend.txt
echo Services: "${{ github.event.inputs.services }}"
IFS=',' read -ra SERVICES <<< "${{ github.event.inputs.services }}"
BUILD_SCRIPT_NAME="build.sh"
# Build FOSS
for SERVICE in "${SERVICES[@]}"; do
# Check if service is backend
if grep -q $SERVICE /tmp/backend.txt; then
cd backend
foss_build_args="nil $SERVICE"
ee_build_args="ee $SERVICE"
else
[[ $SERVICE == 'chalice' || $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && cd $working_dir/api || cd $SERVICE
[[ $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && BUILD_SCRIPT_NAME="build_${SERVICE}.sh"
ee_build_args="ee"
fi
version=$(image_version $SERVICE)
echo IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
echo IMAGE_TAG=$version-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
IMAGE_TAG=$version-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
if [[ "$SERVICE" != "chalice" && "$SERVICE" != "frontend" ]]; then
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
echo IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
else
build_managed $SERVICE $version
fi
cd $working_dir
chart_path="$working_dir/scripts/helmcharts/openreplay/charts/$SERVICE/Chart.yaml"
yq eval ".AppVersion = \"$version\"" -i $chart_path
git add $chart_path
git commit -m "Increment $SERVICE chart version"
git push --set-upstream origin $BRANCH_NAME
done
# Build managed services
build_managed() {
local service=$1
local version=$2
echo "Building managed service: $service"
clone_msaas
if [[ $service == 'chalice' ]]; then
cd "$MSAAS_REPO_FOLDER/openreplay/api"
else
cd "$MSAAS_REPO_FOLDER/openreplay/$service"
fi
local build_cmd="IMAGE_TAG=$version DOCKER_RUNTIME=depot DOCKER_BUILD_ARGS=--push ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash build.sh"
echo "Executing: $build_cmd"
if ! eval "$build_cmd" 2>&1; then
echo "Build failed for $service"
exit 1
fi
}
# Build service with given arguments
build_service() {
local service=$1
local version=$2
local build_args=$3
local build_script=${4:-$BUILD_SCRIPT_NAME}
local command="IMAGE_TAG=$version DOCKER_RUNTIME=depot DOCKER_BUILD_ARGS=--push ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash $build_script $build_args"
echo "Executing: $command"
eval "$command"
}
# Update chart version and commit changes
update_chart_version() {
local service=$1
local version=$2
local chart_path="$WORKING_DIR/scripts/helmcharts/openreplay/charts/$service/Chart.yaml"
# Ensure we're in the original working directory/repository
cd "$WORKING_DIR"
yq eval ".AppVersion = \"$version\"" -i "$chart_path"
git add "$chart_path"
git commit -m "Increment $service chart version to $version"
git push --set-upstream origin "$BRANCH_NAME"
cd -
}
# Main execution
main() {
setup_git
# Get backend services list
ls backend/cmd >"$BACKEND_SERVICES_FILE"
# Parse services input (fix for GitHub Actions syntax)
echo "Services: ${SERVICES_INPUT:-$1}"
IFS=',' read -ra services <<<"${SERVICES_INPUT:-$1}"
# Process each service
for service in "${services[@]}"; do
echo "Processing service: $service"
cd "$WORKING_DIR"
local foss_build_args="" ee_build_args="" build_script="$BUILD_SCRIPT_NAME"
# Determine build configuration based on service type
if grep -q "$service" "$BACKEND_SERVICES_FILE"; then
# Backend service
cd backend
foss_build_args="nil $service"
ee_build_args="ee $service"
else
# Non-backend service
case "$service" in
chalice | alerts | crons)
cd "$WORKING_DIR/api"
;;
*)
cd "$service"
;;
esac
# Special build scripts for alerts/crons
if [[ $service == 'alerts' || $service == 'crons' ]]; then
build_script="build_${service}.sh"
fi
ee_build_args="ee"
fi
# Get version and build
local version
version=$(image_version "$service")
# Build FOSS and EE versions
build_service "$service" "$version" "$foss_build_args"
build_service "$service" "${version}-ee" "$ee_build_args"
# Build managed version for specific services
if [[ "$service" != "chalice" && "$service" != "frontend" ]]; then
echo "Nothing to build in managed for service $service"
else
build_managed "$service" "$version"
fi
# Update chart and commit
update_chart_version "$service" "$version"
done
cd "$WORKING_DIR"
# Cleanup
rm -f "$BACKEND_SERVICES_FILE"
}
echo "Working directory: $WORKING_DIR"
# Run main function with all arguments
main "$SERVICES_INPUT"
- name: Create Pull Request
uses: repo-sync/pull-request@v2
@ -147,8 +246,7 @@ jobs:
pr_title: "Updated patch build from main ${{ env.HEAD_COMMIT_ID }}"
pr_body: |
This PR updates the Helm chart version after building the patch from $HEAD_COMMIT_ID.
Once this PR is merged, to update the latest tag, run the following workflow.
https://github.com/openreplay/openreplay/actions/workflows/update-tag.yaml
Once this PR is merged, the tag update job will run automatically.
# - name: Debug Job
# if: ${{ failure() }}

View file

@ -1,35 +1,42 @@
on:
workflow_dispatch:
description: "This workflow will build for patches for latest tag, and will Always use commit from main branch."
inputs:
services:
description: "This action will update the latest tag with current main branch HEAD. Should I proceed ? true/false"
required: true
default: "false"
name: Force Push tag with main branch HEAD
pull_request:
types: [closed]
branches:
- main
name: Release tag update --force
jobs:
deploy:
name: Build Patch from main
runs-on: ubuntu-latest
env:
DEPOT_TOKEN: ${{ secrets.DEPOT_TOKEN }}
DEPOT_PROJECT_ID: ${{ secrets.DEPOT_PROJECT_ID }}
if: ${{ (github.event_name == 'pull_request' && github.event.pull_request.merged == true) || github.event.inputs.services == 'true' }}
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Get latest release tag using GitHub API
id: get-latest-tag
run: |
LATEST_TAG=$(curl -s -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
"https://api.github.com/repos/${{ github.repository }}/releases/latest" \
| jq -r .tag_name)
# Fail if the API doesn't return a tag
if [ "$LATEST_TAG" == "null" ] || [ -z "$LATEST_TAG" ]; then
echo "Latest tag not found"
exit 100
fi
echo "LATEST_TAG=$LATEST_TAG" >> $GITHUB_ENV
echo "Latest tag: $LATEST_TAG"
- name: Set Remote with GITHUB_TOKEN
run: |
git config --unset http.https://github.com/.extraheader
git remote set-url origin https://x-access-token:${{ secrets.ACTIONS_COMMMIT_TOKEN }}@github.com/${{ github.repository }}.git
git remote set-url origin https://x-access-token:${{ secrets.ACTIONS_COMMMIT_TOKEN }}@github.com/${{ github.repository }}
- name: Push main branch to tag
run: |
git fetch --tags
git checkout main
git push origin HEAD:refs/tags/$(git tag --list 'v[0-9]*' --sort=-v:refname | head -n 1) --force
# - name: Debug Job
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# with:
# limit-access-to-actor: true
echo "Updating tag ${{ env.LATEST_TAG }} to point to latest commit on main"
git push origin HEAD:refs/tags/${{ env.LATEST_TAG }} --force
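# Note: "HEAD:refs/tags/<tag> --force" re-points the existing release tag at main's
# current commit; existing clones must run `git fetch --tags --force` to see the tag move.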

View file

@ -1,10 +1,17 @@
FROM python:3.12-alpine
LABEL Maintainer="Rajesh Rajendran<rjshrjndrn@gmail.com>"
LABEL Maintainer="KRAIEM Taha Yassine<tahayk2@gmail.com>"
ARG GIT_SHA
LABEL GIT_SHA=$GIT_SHA
FROM python:3.12-alpine AS builder
LABEL maintainer="Rajesh Rajendran<rjshrjndrn@gmail.com>"
LABEL maintainer="KRAIEM Taha Yassine<tahayk2@gmail.com>"
RUN apk add --no-cache build-base tini
RUN apk add --no-cache build-base
WORKDIR /work
COPY requirements.txt ./requirements.txt
RUN pip install --no-cache-dir --upgrade uv && \
export UV_SYSTEM_PYTHON=true && \
uv pip install --no-cache-dir --upgrade pip setuptools wheel && \
uv pip install --no-cache-dir --upgrade -r requirements.txt
FROM python:3.12-alpine
ARG GIT_SHA
ARG envarg
# Add Tini
# Startup daemon
@ -14,19 +21,11 @@ ENV SOURCE_MAP_VERSION=0.7.4 \
PRIVATE_ENDPOINTS=false \
ENTERPRISE_BUILD=${envarg} \
GIT_SHA=$GIT_SHA
COPY --from=builder /usr/local/lib/python3.12/site-packages /usr/local/lib/python3.12/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin
WORKDIR /work
COPY requirements.txt ./requirements.txt
RUN pip install --no-cache-dir --upgrade uv
RUN uv pip install --no-cache-dir --upgrade pip setuptools wheel --system
RUN uv pip install --no-cache-dir --upgrade -r requirements.txt --system
COPY . .
RUN mv env.default .env
RUN adduser -u 1001 openreplay -D
USER 1001
RUN apk add --no-cache tini && mv env.default .env
ENTRYPOINT ["/sbin/tini", "--"]
CMD ./entrypoint.sh
CMD ["./entrypoint.sh"]

View file

@ -4,7 +4,8 @@ from pydantic_core._pydantic_core import ValidationError
import schemas
from chalicelib.core.alerts import alerts, alerts_listener
from chalicelib.core.alerts.modules import sessions, alert_helpers
from chalicelib.core.alerts.modules import alert_helpers
from chalicelib.core.sessions import sessions_pg as sessions
from chalicelib.utils import pg_client
from chalicelib.utils.TimeUTC import TimeUTC
@ -131,6 +132,7 @@ def Build(a):
def process():
logger.info("> processing alerts on PG")
notifications = []
all_alerts = alerts_listener.get_all_alerts()
with pg_client.PostgresClient() as cur:

View file

@ -5,8 +5,9 @@ from pydantic_core._pydantic_core import ValidationError
import schemas
from chalicelib.utils import pg_client, ch_client, exp_ch_helper
from chalicelib.utils.TimeUTC import TimeUTC
from . import alerts, alerts_listener
from .modules import sessions, alert_helpers
from chalicelib.core.alerts import alerts, alerts_listener
from chalicelib.core.alerts.modules import alert_helpers
from chalicelib.core.sessions import sessions_ch as sessions
logger = logging.getLogger(__name__)
@ -155,6 +156,7 @@ def Build(a):
def process():
logger.info("> processing alerts on CH")
notifications = []
all_alerts = alerts_listener.get_all_alerts()
with pg_client.PostgresClient() as cur, ch_client.ClickHouseClient() as ch_cur:

View file

@ -1,9 +1,3 @@
from decouple import config
TENANT_ID = "-1"
if config("EXP_ALERTS", cast=bool, default=False):
from chalicelib.core.sessions import sessions_ch as sessions
else:
from chalicelib.core.sessions import sessions
from . import helpers as alert_helpers

View file

@ -85,7 +85,8 @@ def __generic_query(typename, value_length=None):
ORDER BY value"""
if value_length is None or value_length > 2:
return f"""(SELECT DISTINCT value, type
return f"""SELECT DISTINCT ON(value,type) value, type
((SELECT DISTINCT value, type
FROM {TABLE}
WHERE
project_id = %(project_id)s
@ -101,7 +102,7 @@ def __generic_query(typename, value_length=None):
AND type='{typename.upper()}'
AND value ILIKE %(value)s
ORDER BY value
LIMIT 5);"""
LIMIT 5)) AS raw;"""
return f"""SELECT DISTINCT value, type
FROM {TABLE}
WHERE
@ -326,7 +327,7 @@ def __search_metadata(project_id, value, key=None, source=None):
AND {colname} ILIKE %(svalue)s LIMIT 5)""")
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""\
SELECT key, value, 'METADATA' AS TYPE
SELECT DISTINCT ON(key, value) key, value, 'METADATA' AS TYPE
FROM({" UNION ALL ".join(sub_from)}) AS all_metas
LIMIT 5;""", {"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value)}))

View file

@ -13,15 +13,18 @@ def get_state(tenant_id):
if len(pids) > 0:
cur.execute(
cur.mogrify("""SELECT EXISTS(( SELECT 1
cur.mogrify(
"""SELECT EXISTS(( SELECT 1
FROM public.sessions AS s
WHERE s.project_id IN %(ids)s)) AS exists;""",
{"ids": tuple(pids)})
{"ids": tuple(pids)},
)
)
recorded = cur.fetchone()["exists"]
meta = False
if recorded:
query = cur.mogrify(f"""SELECT EXISTS((SELECT 1
query = cur.mogrify(
f"""SELECT EXISTS((SELECT 1
FROM public.projects AS p
LEFT JOIN LATERAL ( SELECT 1
FROM public.sessions
@ -36,26 +39,35 @@ def get_state(tenant_id):
OR p.metadata_8 IS NOT NULL OR p.metadata_9 IS NOT NULL
OR p.metadata_10 IS NOT NULL )
)) AS exists;""",
{"tenant_id": tenant_id})
{"tenant_id": tenant_id},
)
cur.execute(query)
meta = cur.fetchone()["exists"]
return [
{"task": "Install OpenReplay",
"done": recorded,
"URL": "https://docs.openreplay.com/getting-started/quick-start"},
{"task": "Identify Users",
"done": meta,
"URL": "https://docs.openreplay.com/data-privacy-security/metadata"},
{"task": "Invite Team Members",
"done": len(users.get_members(tenant_id=tenant_id)) > 1,
"URL": "https://app.openreplay.com/client/manage-users"},
{"task": "Integrations",
"done": len(datadog.get_all(tenant_id=tenant_id)) > 0 \
or len(sentry.get_all(tenant_id=tenant_id)) > 0 \
or len(stackdriver.get_all(tenant_id=tenant_id)) > 0,
"URL": "https://docs.openreplay.com/integrations"}
{
"task": "Install OpenReplay",
"done": recorded,
"URL": "https://docs.openreplay.com/getting-started/quick-start",
},
{
"task": "Identify Users",
"done": meta,
"URL": "https://docs.openreplay.com/data-privacy-security/metadata",
},
{
"task": "Invite Team Members",
"done": len(users.get_members(tenant_id=tenant_id)) > 1,
"URL": "https://app.openreplay.com/client/manage-users",
},
{
"task": "Integrations",
"done": len(datadog.get_all(tenant_id=tenant_id)) > 0
or len(sentry.get_all(tenant_id=tenant_id)) > 0
or len(stackdriver.get_all(tenant_id=tenant_id)) > 0,
"URL": "https://docs.openreplay.com/integrations",
},
]
@ -66,21 +78,26 @@ def get_state_installing(tenant_id):
if len(pids) > 0:
cur.execute(
cur.mogrify("""SELECT EXISTS(( SELECT 1
cur.mogrify(
"""SELECT EXISTS(( SELECT 1
FROM public.sessions AS s
WHERE s.project_id IN %(ids)s)) AS exists;""",
{"ids": tuple(pids)})
{"ids": tuple(pids)},
)
)
recorded = cur.fetchone()["exists"]
return {"task": "Install OpenReplay",
"done": recorded,
"URL": "https://docs.openreplay.com/getting-started/quick-start"}
return {
"task": "Install OpenReplay",
"done": recorded,
"URL": "https://docs.openreplay.com/getting-started/quick-start",
}
def get_state_identify_users(tenant_id):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(f"""SELECT EXISTS((SELECT 1
query = cur.mogrify(
f"""SELECT EXISTS((SELECT 1
FROM public.projects AS p
LEFT JOIN LATERAL ( SELECT 1
FROM public.sessions
@ -95,25 +112,32 @@ def get_state_identify_users(tenant_id):
OR p.metadata_8 IS NOT NULL OR p.metadata_9 IS NOT NULL
OR p.metadata_10 IS NOT NULL )
)) AS exists;""",
{"tenant_id": tenant_id})
{"tenant_id": tenant_id},
)
cur.execute(query)
meta = cur.fetchone()["exists"]
return {"task": "Identify Users",
"done": meta,
"URL": "https://docs.openreplay.com/data-privacy-security/metadata"}
return {
"task": "Identify Users",
"done": meta,
"URL": "https://docs.openreplay.com/data-privacy-security/metadata",
}
def get_state_manage_users(tenant_id):
return {"task": "Invite Team Members",
"done": len(users.get_members(tenant_id=tenant_id)) > 1,
"URL": "https://app.openreplay.com/client/manage-users"}
return {
"task": "Invite Team Members",
"done": len(users.get_members(tenant_id=tenant_id)) > 1,
"URL": "https://app.openreplay.com/client/manage-users",
}
def get_state_integrations(tenant_id):
return {"task": "Integrations",
"done": len(datadog.get_all(tenant_id=tenant_id)) > 0 \
or len(sentry.get_all(tenant_id=tenant_id)) > 0 \
or len(stackdriver.get_all(tenant_id=tenant_id)) > 0,
"URL": "https://docs.openreplay.com/integrations"}
return {
"task": "Integrations",
"done": len(datadog.get_all(tenant_id=tenant_id)) > 0
or len(sentry.get_all(tenant_id=tenant_id)) > 0
or len(stackdriver.get_all(tenant_id=tenant_id)) > 0,
"URL": "https://docs.openreplay.com/integrations",
}

View file

@ -4,10 +4,10 @@ from decouple import config
logger = logging.getLogger(__name__)
from . import errors as errors_legacy
from . import errors_pg as errors_legacy
if config("EXP_ERRORS_SEARCH", cast=bool, default=False):
logger.info(">>> Using experimental error search")
from . import errors_ch as errors
else:
from . import errors
from . import errors_pg as errors

View file

@ -1,10 +1,11 @@
import schemas
from chalicelib.core import metadata
from chalicelib.core.errors import errors_legacy
from chalicelib.core.errors.modules import errors_helper
from chalicelib.core.errors.modules import sessions
from chalicelib.utils import ch_client, exp_ch_helper
from chalicelib.utils import helper, metrics_helper
from chalicelib.utils.TimeUTC import TimeUTC
from . import errors as errors_legacy
def _multiple_values(values, value_key="value"):
@ -61,25 +62,6 @@ def get_batch(error_ids):
return errors_legacy.get_batch(error_ids=error_ids)
def __get_basic_constraints(platform=None, time_constraint=True, startTime_arg_name="startDate",
endTime_arg_name="endDate", type_condition=True, project_key="project_id", table_name=None):
ch_sub_query = [f"{project_key} =toUInt16(%(project_id)s)"]
if table_name is not None:
table_name = table_name + "."
else:
table_name = ""
if type_condition:
ch_sub_query.append(f"{table_name}`$event_name`='ERROR'")
if time_constraint:
ch_sub_query += [f"{table_name}datetime >= toDateTime(%({startTime_arg_name})s/1000)",
f"{table_name}datetime < toDateTime(%({endTime_arg_name})s/1000)"]
if platform == schemas.PlatformType.MOBILE:
ch_sub_query.append("user_device_type = 'mobile'")
elif platform == schemas.PlatformType.DESKTOP:
ch_sub_query.append("user_device_type = 'desktop'")
return ch_sub_query
def __get_basic_constraints_events(platform=None, time_constraint=True, startTime_arg_name="startDate",
endTime_arg_name="endDate", type_condition=True, project_key="project_id",
table_name=None):
@ -116,7 +98,7 @@ def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, us
for f in data.filters:
if f.type == schemas.FilterType.PLATFORM and len(f.value) > 0:
platform = f.value[0]
ch_sessions_sub_query = __get_basic_constraints(platform, type_condition=False)
ch_sessions_sub_query = errors_helper.__get_basic_constraints_ch(platform, type_condition=False)
# ignore platform for errors table
ch_sub_query = __get_basic_constraints_events(None, type_condition=True)
ch_sub_query.append("JSONExtractString(toString(`$properties`), 'source') = 'js_exception'")
@ -148,7 +130,8 @@ def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, us
if len(data.events) > errors_condition_count:
subquery_part_args, subquery_part = sessions.search_query_parts_ch(data=data, error_status=data.status,
errors_only=True,
project_id=project.project_id, user_id=user_id,
project_id=project.project_id,
user_id=user_id,
issue=None,
favorite_only=False)
subquery_part = f"INNER JOIN {subquery_part} USING(session_id)"
@ -355,14 +338,14 @@ def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, us
SELECT details.error_id as error_id,
name, message, users, total,
sessions, last_occurrence, first_occurrence, chart
FROM (SELECT JSONExtractString(toString(`$properties`), 'error_id') AS error_id,
FROM (SELECT error_id,
JSONExtractString(toString(`$properties`), 'name') AS name,
JSONExtractString(toString(`$properties`), 'message') AS message,
COUNT(DISTINCT user_id) AS users,
COUNT(DISTINCT events.session_id) AS sessions,
MAX(created_at) AS max_datetime,
MIN(created_at) AS min_datetime,
COUNT(DISTINCT JSONExtractString(toString(`$properties`), 'error_id'))
COUNT(DISTINCT error_id)
OVER() AS total
FROM {MAIN_EVENTS_TABLE} AS events
INNER JOIN (SELECT session_id, coalesce(user_id,toString(user_uuid)) AS user_id
@ -374,7 +357,7 @@ def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, us
GROUP BY error_id, name, message
ORDER BY {sort} {order}
LIMIT %(errors_limit)s OFFSET %(errors_offset)s) AS details
INNER JOIN (SELECT JSONExtractString(toString(`$properties`), 'error_id') AS error_id,
INNER JOIN (SELECT error_id,
toUnixTimestamp(MAX(created_at))*1000 AS last_occurrence,
toUnixTimestamp(MIN(created_at))*1000 AS first_occurrence
FROM {MAIN_EVENTS_TABLE}
@ -383,7 +366,7 @@ def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, us
GROUP BY error_id) AS time_details
ON details.error_id=time_details.error_id
INNER JOIN (SELECT error_id, groupArray([timestamp, count]) AS chart
FROM (SELECT JSONExtractString(toString(`$properties`), 'error_id') AS error_id,
FROM (SELECT error_id,
gs.generate_series AS timestamp,
COUNT(DISTINCT session_id) AS count
FROM generate_series(%(startDate)s, %(endDate)s, %(step_size)s) AS gs

View file

@ -1,5 +1,5 @@
from chalicelib.core.errors import errors_legacy as errors
from chalicelib.utils import errors_helper
from chalicelib.core.errors.modules import errors_helper
from chalicelib.utils import pg_client, helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.metrics_helper import get_step_size
@ -40,26 +40,29 @@ def __process_tags(row):
def get_details(project_id, error_id, user_id, **data):
pg_sub_query24 = errors.__get_basic_constraints(time_constraint=False, chart=True, step_size_name="step_size24")
pg_sub_query24 = errors_helper.__get_basic_constraints(time_constraint=False, chart=True,
step_size_name="step_size24")
pg_sub_query24.append("error_id = %(error_id)s")
pg_sub_query30_session = errors.__get_basic_constraints(time_constraint=True, chart=False,
startTime_arg_name="startDate30",
endTime_arg_name="endDate30",
project_key="sessions.project_id")
pg_sub_query30_session = errors_helper.__get_basic_constraints(time_constraint=True, chart=False,
startTime_arg_name="startDate30",
endTime_arg_name="endDate30",
project_key="sessions.project_id")
pg_sub_query30_session.append("sessions.start_ts >= %(startDate30)s")
pg_sub_query30_session.append("sessions.start_ts <= %(endDate30)s")
pg_sub_query30_session.append("error_id = %(error_id)s")
pg_sub_query30_err = errors.__get_basic_constraints(time_constraint=True, chart=False,
startTime_arg_name="startDate30",
endTime_arg_name="endDate30", project_key="errors.project_id")
pg_sub_query30_err = errors_helper.__get_basic_constraints(time_constraint=True, chart=False,
startTime_arg_name="startDate30",
endTime_arg_name="endDate30",
project_key="errors.project_id")
pg_sub_query30_err.append("sessions.project_id = %(project_id)s")
pg_sub_query30_err.append("sessions.start_ts >= %(startDate30)s")
pg_sub_query30_err.append("sessions.start_ts <= %(endDate30)s")
pg_sub_query30_err.append("error_id = %(error_id)s")
pg_sub_query30_err.append("source ='js_exception'")
pg_sub_query30 = errors.__get_basic_constraints(time_constraint=False, chart=True, step_size_name="step_size30")
pg_sub_query30 = errors_helper.__get_basic_constraints(time_constraint=False, chart=True,
step_size_name="step_size30")
pg_sub_query30.append("error_id = %(error_id)s")
pg_basic_query = errors.__get_basic_constraints(time_constraint=False)
pg_basic_query = errors_helper.__get_basic_constraints(time_constraint=False)
pg_basic_query.append("error_id = %(error_id)s")
with pg_client.PostgresClient() as cur:
data["startDate24"] = TimeUTC.now(-1)
@ -95,8 +98,7 @@ def get_details(project_id, error_id, user_id, **data):
device_partition,
country_partition,
chart24,
chart30,
custom_tags
chart30
FROM (SELECT error_id,
name,
message,
@ -111,15 +113,8 @@ def get_details(project_id, error_id, user_id, **data):
MIN(timestamp) AS first_occurrence
FROM events.errors
WHERE error_id = %(error_id)s) AS time_details ON (TRUE)
INNER JOIN (SELECT session_id AS last_session_id,
coalesce(custom_tags, '[]')::jsonb AS custom_tags
INNER JOIN (SELECT session_id AS last_session_id
FROM events.errors
LEFT JOIN LATERAL (
SELECT jsonb_agg(jsonb_build_object(errors_tags.key, errors_tags.value)) AS custom_tags
FROM errors_tags
WHERE errors_tags.error_id = %(error_id)s
AND errors_tags.session_id = errors.session_id
AND errors_tags.message_id = errors.message_id) AS errors_tags ON (TRUE)
WHERE error_id = %(error_id)s
ORDER BY errors.timestamp DESC
LIMIT 1) AS last_session_details ON (TRUE)

View file

@ -1,7 +1,8 @@
import json
from typing import Optional, List
from typing import List
import schemas
from chalicelib.core.errors.modules import errors_helper
from chalicelib.core.sessions import sessions_search
from chalicelib.core.sourcemaps import sourcemaps
from chalicelib.utils import pg_client, helper
@ -51,27 +52,6 @@ def get_batch(error_ids):
return helper.list_to_camel_case(errors)
def __get_basic_constraints(platform: Optional[schemas.PlatformType] = None, time_constraint: bool = True,
startTime_arg_name: str = "startDate", endTime_arg_name: str = "endDate",
chart: bool = False, step_size_name: str = "step_size",
project_key: Optional[str] = "project_id"):
if project_key is None:
ch_sub_query = []
else:
ch_sub_query = [f"{project_key} =%(project_id)s"]
if time_constraint:
ch_sub_query += [f"timestamp >= %({startTime_arg_name})s",
f"timestamp < %({endTime_arg_name})s"]
if chart:
ch_sub_query += [f"timestamp >= generated_timestamp",
f"timestamp < generated_timestamp + %({step_size_name})s"]
if platform == schemas.PlatformType.MOBILE:
ch_sub_query.append("user_device_type = 'mobile'")
elif platform == schemas.PlatformType.DESKTOP:
ch_sub_query.append("user_device_type = 'desktop'")
return ch_sub_query
def __get_sort_key(key):
return {
schemas.ErrorSort.OCCURRENCE: "max_datetime",
@ -90,12 +70,13 @@ def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, us
for f in data.filters:
if f.type == schemas.FilterType.PLATFORM and len(f.value) > 0:
platform = f.value[0]
pg_sub_query = __get_basic_constraints(platform, project_key="sessions.project_id")
pg_sub_query = errors_helper.__get_basic_constraints(platform, project_key="sessions.project_id")
pg_sub_query += ["sessions.start_ts>=%(startDate)s", "sessions.start_ts<%(endDate)s", "source ='js_exception'",
"pe.project_id=%(project_id)s"]
# To ignore Script error
pg_sub_query.append("pe.message!='Script error.'")
pg_sub_query_chart = __get_basic_constraints(platform, time_constraint=False, chart=True, project_key=None)
pg_sub_query_chart = errors_helper.__get_basic_constraints(platform, time_constraint=False, chart=True,
project_key=None)
if platform:
pg_sub_query_chart += ["start_ts>=%(startDate)s", "start_ts<%(endDate)s", "project_id=%(project_id)s"]
pg_sub_query_chart.append("errors.error_id =details.error_id")

View file

@ -3,8 +3,9 @@ import logging
from decouple import config
logger = logging.getLogger(__name__)
from . import helper as errors_helper
if config("EXP_ERRORS_SEARCH", cast=bool, default=False):
from chalicelib.core.sessions import sessions_ch as sessions
import chalicelib.core.sessions.sessions_ch as sessions
else:
from chalicelib.core.sessions import sessions
import chalicelib.core.sessions.sessions_pg as sessions

View file

@ -0,0 +1,58 @@
from typing import Optional
import schemas
from chalicelib.core.sourcemaps import sourcemaps
def __get_basic_constraints(platform: Optional[schemas.PlatformType] = None, time_constraint: bool = True,
startTime_arg_name: str = "startDate", endTime_arg_name: str = "endDate",
chart: bool = False, step_size_name: str = "step_size",
project_key: Optional[str] = "project_id"):
if project_key is None:
ch_sub_query = []
else:
ch_sub_query = [f"{project_key} =%(project_id)s"]
if time_constraint:
ch_sub_query += [f"timestamp >= %({startTime_arg_name})s",
f"timestamp < %({endTime_arg_name})s"]
if chart:
ch_sub_query += [f"timestamp >= generated_timestamp",
f"timestamp < generated_timestamp + %({step_size_name})s"]
if platform == schemas.PlatformType.MOBILE:
ch_sub_query.append("user_device_type = 'mobile'")
elif platform == schemas.PlatformType.DESKTOP:
ch_sub_query.append("user_device_type = 'desktop'")
return ch_sub_query
def __get_basic_constraints_ch(platform=None, time_constraint=True, startTime_arg_name="startDate",
endTime_arg_name="endDate", type_condition=True, project_key="project_id",
table_name=None):
ch_sub_query = [f"{project_key} =toUInt16(%(project_id)s)"]
if table_name is not None:
table_name = table_name + "."
else:
table_name = ""
if type_condition:
ch_sub_query.append(f"{table_name}`$event_name`='ERROR'")
if time_constraint:
ch_sub_query += [f"{table_name}datetime >= toDateTime(%({startTime_arg_name})s/1000)",
f"{table_name}datetime < toDateTime(%({endTime_arg_name})s/1000)"]
if platform == schemas.PlatformType.MOBILE:
ch_sub_query.append("user_device_type = 'mobile'")
elif platform == schemas.PlatformType.DESKTOP:
ch_sub_query.append("user_device_type = 'desktop'")
return ch_sub_query
def format_first_stack_frame(error):
error["stack"] = sourcemaps.format_payload(error.pop("payload"), truncate_to_first=True)
for s in error["stack"]:
for c in s.get("context", []):
for sci, sc in enumerate(c):
if isinstance(sc, str) and len(sc) > 1000:
c[sci] = sc[:1000]
# convert bytes to string:
if isinstance(s["filename"], bytes):
s["filename"] = s["filename"].decode("utf-8")
return error
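# For orientation, the constraint list the first helper returns, ready to be joined
# with " AND " (values taken from the defaults above):
#   __get_basic_constraints(platform=schemas.PlatformType.MOBILE)
#   -> ["project_id =%(project_id)s",
#       "timestamp >= %(startDate)s",
#       "timestamp < %(endDate)s",
#       "user_device_type = 'mobile'"]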

View file

@ -27,7 +27,6 @@ HEALTH_ENDPOINTS = {
"http": app_connection_string("http-openreplay", 8888, "metrics"),
"ingress-nginx": app_connection_string("ingress-nginx-openreplay", 80, "healthz"),
"integrations": app_connection_string("integrations-openreplay", 8888, "metrics"),
"peers": app_connection_string("peers-openreplay", 8888, "health"),
"sink": app_connection_string("sink-openreplay", 8888, "metrics"),
"sourcemapreader": app_connection_string(
"sourcemapreader-openreplay", 8888, "health"
@ -39,9 +38,7 @@ HEALTH_ENDPOINTS = {
def __check_database_pg(*_):
fail_response = {
"health": False,
"details": {
"errors": ["Postgres health-check failed"]
}
"details": {"errors": ["Postgres health-check failed"]},
}
with pg_client.PostgresClient() as cur:
try:
@ -63,29 +60,26 @@ def __check_database_pg(*_):
"details": {
# "version": server_version["server_version"],
# "schema": schema_version["version"]
}
},
}
def __always_healthy(*_):
return {
"health": True,
"details": {}
}
return {"health": True, "details": {}}
def __check_be_service(service_name):
def fn(*_):
fail_response = {
"health": False,
"details": {
"errors": ["server health-check failed"]
}
"details": {"errors": ["server health-check failed"]},
}
try:
results = requests.get(HEALTH_ENDPOINTS.get(service_name), timeout=2)
if results.status_code != 200:
logger.error(f"!! issue with the {service_name}-health code:{results.status_code}")
logger.error(
f"!! issue with the {service_name}-health code:{results.status_code}"
)
logger.error(results.text)
# fail_response["details"]["errors"].append(results.text)
return fail_response
@ -103,10 +97,7 @@ def __check_be_service(service_name):
logger.error("couldn't get response")
# fail_response["details"]["errors"].append(str(e))
return fail_response
return {
"health": True,
"details": {}
}
return {"health": True, "details": {}}
return fn
@ -114,7 +105,7 @@ def __check_be_service(service_name):
def __check_redis(*_):
fail_response = {
"health": False,
"details": {"errors": ["server health-check failed"]}
"details": {"errors": ["server health-check failed"]},
}
if config("REDIS_STRING", default=None) is None:
# fail_response["details"]["errors"].append("REDIS_STRING not defined in env-vars")
@ -133,16 +124,14 @@ def __check_redis(*_):
"health": True,
"details": {
# "version": r.execute_command('INFO')['redis_version']
}
},
}
def __check_SSL(*_):
fail_response = {
"health": False,
"details": {
"errors": ["SSL Certificate health-check failed"]
}
"details": {"errors": ["SSL Certificate health-check failed"]},
}
try:
requests.get(config("SITE_URL"), verify=True, allow_redirects=True)
@ -150,36 +139,28 @@ def __check_SSL(*_):
logger.error("!! health failed: SSL Certificate")
logger.exception(e)
return fail_response
return {
"health": True,
"details": {}
}
return {"health": True, "details": {}}
def __get_sessions_stats(*_):
with pg_client.PostgresClient() as cur:
constraints = ["projects.deleted_at IS NULL"]
query = cur.mogrify(f"""SELECT COALESCE(SUM(sessions_count),0) AS s_c,
query = cur.mogrify(
f"""SELECT COALESCE(SUM(sessions_count),0) AS s_c,
COALESCE(SUM(events_count),0) AS e_c
FROM public.projects_stats
INNER JOIN public.projects USING(project_id)
WHERE {" AND ".join(constraints)};""")
WHERE {" AND ".join(constraints)};"""
)
cur.execute(query)
row = cur.fetchone()
return {
"numberOfSessionsCaptured": row["s_c"],
"numberOfEventCaptured": row["e_c"]
}
return {"numberOfSessionsCaptured": row["s_c"], "numberOfEventCaptured": row["e_c"]}
def get_health(tenant_id=None):
health_map = {
"databases": {
"postgres": __check_database_pg
},
"ingestionPipeline": {
"redis": __check_redis
},
"databases": {"postgres": __check_database_pg},
"ingestionPipeline": {"redis": __check_redis},
"backendServices": {
"alerts": __check_be_service("alerts"),
"assets": __check_be_service("assets"),
@ -192,13 +173,12 @@ def get_health(tenant_id=None):
"http": __check_be_service("http"),
"ingress-nginx": __always_healthy,
"integrations": __check_be_service("integrations"),
"peers": __check_be_service("peers"),
"sink": __check_be_service("sink"),
"sourcemapreader": __check_be_service("sourcemapreader"),
"storage": __check_be_service("storage")
"storage": __check_be_service("storage"),
},
"details": __get_sessions_stats,
"ssl": __check_SSL
"ssl": __check_SSL,
}
return __process_health(health_map=health_map)
@ -210,10 +190,16 @@ def __process_health(health_map):
response.pop(parent_key)
elif isinstance(health_map[parent_key], dict):
for element_key in health_map[parent_key]:
if config(f"SKIP_H_{parent_key.upper()}_{element_key.upper()}", cast=bool, default=False):
if config(
f"SKIP_H_{parent_key.upper()}_{element_key.upper()}",
cast=bool,
default=False,
):
response[parent_key].pop(element_key)
else:
response[parent_key][element_key] = health_map[parent_key][element_key]()
response[parent_key][element_key] = health_map[parent_key][
element_key
]()
else:
response[parent_key] = health_map[parent_key]()
return response
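# A sketch (hypothetical keys) of how the per-check skip flags are derived above:
#   parent_key, element_key = "backendServices", "sink"
#   f"SKIP_H_{parent_key.upper()}_{element_key.upper()}"  # -> SKIP_H_BACKENDSERVICES_SINK
# Setting that env var to true drops the single check from the health response.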
@ -221,7 +207,8 @@ def __process_health(health_map):
def cron():
with pg_client.PostgresClient() as cur:
query = cur.mogrify("""SELECT projects.project_id,
query = cur.mogrify(
"""SELECT projects.project_id,
projects.created_at,
projects.sessions_last_check_at,
projects.first_recorded_session_at,
@ -229,7 +216,8 @@ def cron():
FROM public.projects
LEFT JOIN public.projects_stats USING (project_id)
WHERE projects.deleted_at IS NULL
ORDER BY project_id;""")
ORDER BY project_id;"""
)
cur.execute(query)
rows = cur.fetchall()
for r in rows:
@ -250,20 +238,24 @@ def cron():
count_start_from = r["last_update_at"]
count_start_from = TimeUTC.datetime_to_timestamp(count_start_from)
params = {"project_id": r["project_id"],
"start_ts": count_start_from,
"end_ts": TimeUTC.now(),
"sessions_count": 0,
"events_count": 0}
params = {
"project_id": r["project_id"],
"start_ts": count_start_from,
"end_ts": TimeUTC.now(),
"sessions_count": 0,
"events_count": 0,
}
query = cur.mogrify("""SELECT COUNT(1) AS sessions_count,
query = cur.mogrify(
"""SELECT COUNT(1) AS sessions_count,
COALESCE(SUM(events_count),0) AS events_count
FROM public.sessions
WHERE project_id=%(project_id)s
AND start_ts>=%(start_ts)s
AND start_ts<=%(end_ts)s
AND duration IS NOT NULL;""",
params)
params,
)
cur.execute(query)
row = cur.fetchone()
if row is not None:
@ -271,56 +263,68 @@ def cron():
params["events_count"] = row["events_count"]
if insert:
query = cur.mogrify("""INSERT INTO public.projects_stats(project_id, sessions_count, events_count, last_update_at)
query = cur.mogrify(
"""INSERT INTO public.projects_stats(project_id, sessions_count, events_count, last_update_at)
VALUES (%(project_id)s, %(sessions_count)s, %(events_count)s, (now() AT TIME ZONE 'utc'::text));""",
params)
params,
)
else:
query = cur.mogrify("""UPDATE public.projects_stats
query = cur.mogrify(
"""UPDATE public.projects_stats
SET sessions_count=sessions_count+%(sessions_count)s,
events_count=events_count+%(events_count)s,
last_update_at=(now() AT TIME ZONE 'utc'::text)
WHERE project_id=%(project_id)s;""",
params)
params,
)
cur.execute(query)
# this cron corrects the sessions & events counts every week
def weekly_cron():
with pg_client.PostgresClient(long_query=True) as cur:
query = cur.mogrify("""SELECT project_id,
query = cur.mogrify(
"""SELECT project_id,
projects_stats.last_update_at
FROM public.projects
LEFT JOIN public.projects_stats USING (project_id)
WHERE projects.deleted_at IS NULL
ORDER BY project_id;""")
ORDER BY project_id;"""
)
cur.execute(query)
rows = cur.fetchall()
for r in rows:
if r["last_update_at"] is None:
continue
params = {"project_id": r["project_id"],
"end_ts": TimeUTC.now(),
"sessions_count": 0,
"events_count": 0}
params = {
"project_id": r["project_id"],
"end_ts": TimeUTC.now(),
"sessions_count": 0,
"events_count": 0,
}
query = cur.mogrify("""SELECT COUNT(1) AS sessions_count,
query = cur.mogrify(
"""SELECT COUNT(1) AS sessions_count,
COALESCE(SUM(events_count),0) AS events_count
FROM public.sessions
WHERE project_id=%(project_id)s
AND start_ts<=%(end_ts)s
AND duration IS NOT NULL;""",
params)
params,
)
cur.execute(query)
row = cur.fetchone()
if row is not None:
params["sessions_count"] = row["sessions_count"]
params["events_count"] = row["events_count"]
query = cur.mogrify("""UPDATE public.projects_stats
query = cur.mogrify(
"""UPDATE public.projects_stats
SET sessions_count=%(sessions_count)s,
events_count=%(events_count)s,
last_update_at=(now() AT TIME ZONE 'utc'::text)
WHERE project_id=%(project_id)s;""",
params)
params,
)
cur.execute(query)

View file

@ -50,8 +50,8 @@ class JIRAIntegration(base.BaseIntegration):
cur.execute(
cur.mogrify(
"""SELECT username, token, url
FROM public.jira_cloud
WHERE user_id=%(user_id)s;""",
FROM public.jira_cloud
WHERE user_id = %(user_id)s;""",
{"user_id": self._user_id})
)
data = helper.dict_to_camel_case(cur.fetchone())
@ -95,10 +95,9 @@ class JIRAIntegration(base.BaseIntegration):
def add(self, username, token, url, obfuscate=False):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
INSERT INTO public.jira_cloud(username, token, user_id,url)
VALUES (%(username)s, %(token)s, %(user_id)s,%(url)s)
RETURNING username, token, url;""",
cur.mogrify(""" \
INSERT INTO public.jira_cloud(username, token, user_id, url)
VALUES (%(username)s, %(token)s, %(user_id)s, %(url)s) RETURNING username, token, url;""",
{"user_id": self._user_id, "username": username,
"token": token, "url": url})
)
@ -112,9 +111,10 @@ class JIRAIntegration(base.BaseIntegration):
def delete(self):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
DELETE FROM public.jira_cloud
WHERE user_id=%(user_id)s;""",
cur.mogrify(""" \
DELETE
FROM public.jira_cloud
WHERE user_id = %(user_id)s;""",
{"user_id": self._user_id})
)
return {"state": "success"}
@ -125,7 +125,7 @@ class JIRAIntegration(base.BaseIntegration):
changes={
"username": data.username,
"token": data.token if len(data.token) > 0 and data.token.find("***") == -1 \
else self.integration.token,
else self.integration["token"],
"url": str(data.url)
},
obfuscate=True

View file

@ -5,8 +5,8 @@ from decouple import config
logger = logging.getLogger(__name__)
if config("EXP_METRICS", cast=bool, default=False):
from chalicelib.core.sessions import sessions_ch as sessions
import chalicelib.core.sessions.sessions_ch as sessions
else:
from chalicelib.core.sessions import sessions
import chalicelib.core.sessions.sessions_pg as sessions
from chalicelib.core.sessions import sessions_mobs

View file

@ -85,6 +85,9 @@ def __complete_missing_steps(start_time, end_time, density, neutral, rows, time_
# compute avg_time_from_previous at the same level as sessions_count (this was removed in v1.22)
# if start-point is selected, the selected event is ranked n°1
def path_analysis(project_id: int, data: schemas.CardPathAnalysis):
if not data.hide_excess:
data.hide_excess = True
data.rows = 50
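# net effect: hide_excess is always forced on, and rows is capped at 50 for
# requests that did not themselves ask to hide excess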
sub_events = []
start_points_conditions = []
step_0_conditions = []

View file

@ -3,9 +3,11 @@ import logging
from decouple import config
logger = logging.getLogger(__name__)
from . import sessions as sessions_legacy
from . import sessions_pg
from . import sessions_pg as sessions_legacy
from . import sessions_ch
if config("EXP_METRICS", cast=bool, default=False):
from . import sessions_ch as sessions
else:
from . import sessions
from . import sessions_pg as sessions

View file

@ -3,7 +3,7 @@ from typing import List, Union
import schemas
from chalicelib.core import events, metadata
from . import performance_event, sessions as sessions_legacy
from . import performance_event, sessions_legacy
from chalicelib.utils import pg_client, helper, metrics_helper, ch_client, exp_ch_helper
from chalicelib.utils import sql_helper as sh
@ -153,7 +153,7 @@ def search2_table(data: schemas.SessionsSearchPayloadSchema, project_id: int, de
"isEvent": True,
"value": [],
"operator": e.operator,
"filters": []
"filters": e.filters
})
for v in e.value:
if v not in extra_conditions[e.operator].value:
@ -178,7 +178,7 @@ def search2_table(data: schemas.SessionsSearchPayloadSchema, project_id: int, de
"isEvent": True,
"value": [],
"operator": e.operator,
"filters": []
"filters": e.filters
})
for v in e.value:
if v not in extra_conditions[e.operator].value:
@ -870,12 +870,12 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
events_conditions[-1]["condition"] = []
if not is_any and event.value not in [None, "*", ""]:
event_where.append(
sh.multi_conditions(f"(main1.message {op} %({e_k})s OR main1.name {op} %({e_k})s)",
sh.multi_conditions(f"(toString(main1.`$properties`.message) {op} %({e_k})s OR toString(main1.`$properties`.name) {op} %({e_k})s)",
event.value, value_key=e_k))
events_conditions[-1]["condition"].append(event_where[-1])
events_extra_join += f" AND {event_where[-1]}"
if len(event.source) > 0 and event.source[0] not in [None, "*", ""]:
event_where.append(sh.multi_conditions(f"main1.source = %({s_k})s", event.source, value_key=s_k))
event_where.append(sh.multi_conditions(f"toString(main1.`$properties`.source) = %({s_k})s", event.source, value_key=s_k))
events_conditions[-1]["condition"].append(event_where[-1])
events_extra_join += f" AND {event_where[-1]}"
@ -1108,8 +1108,12 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
is_any = sh.isAny_opreator(f.operator)
if is_any or len(f.value) == 0:
continue
is_negative_operator = sh.is_negation_operator(f.operator)
f.value = helper.values_for_operator(value=f.value, op=f.operator)
op = sh.get_sql_operator(f.operator)
r_op = ""
if is_negative_operator:
r_op = sh.reverse_sql_operator(op)
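# r_op holds the inverted operator (e.g. ILIKE -> NOT ILIKE), used below by the
# exclusion conditions collected in events_conditions_not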
e_k_f = e_k + f"_fetch{j}"
full_args = {**full_args, **sh.multi_values(f.value, value_key=e_k_f)}
if f.type == schemas.FetchFilterType.FETCH_URL:
@ -1118,6 +1122,12 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
))
events_conditions[-1]["condition"].append(event_where[-1])
apply = True
if is_negative_operator:
events_conditions_not.append(
{
"type": f"sub.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'"})
events_conditions_not[-1]["condition"] = sh.multi_conditions(
f"sub.`$properties`.url_path {r_op} %({e_k_f})s", f.value, value_key=e_k_f)
elif f.type == schemas.FetchFilterType.FETCH_STATUS_CODE:
event_where.append(json_condition(
"main", "$properties", 'status', op, f.value, e_k_f, True, True
@ -1130,6 +1140,13 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
))
events_conditions[-1]["condition"].append(event_where[-1])
apply = True
if is_negative_operator:
events_conditions_not.append(
{
"type": f"sub.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'"})
events_conditions_not[-1]["condition"] = sh.multi_conditions(
f"sub.`$properties`.method {r_op} %({e_k_f})s", f.value,
value_key=e_k_f)
elif f.type == schemas.FetchFilterType.FETCH_DURATION:
event_where.append(
sh.multi_conditions(f"main.`$duration_s` {f.operator} %({e_k_f})s/1000", f.value,
@ -1142,12 +1159,26 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
))
events_conditions[-1]["condition"].append(event_where[-1])
apply = True
if is_negative_operator:
events_conditions_not.append(
{
"type": f"sub.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'"})
events_conditions_not[-1]["condition"] = sh.multi_conditions(
f"sub.`$properties`.request_body {r_op} %({e_k_f})s", f.value,
value_key=e_k_f)
elif f.type == schemas.FetchFilterType.FETCH_RESPONSE_BODY:
event_where.append(json_condition(
"main", "$properties", 'response_body', op, f.value, e_k_f
))
events_conditions[-1]["condition"].append(event_where[-1])
apply = True
if is_negative_operator:
events_conditions_not.append(
{
"type": f"sub.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'"})
events_conditions_not[-1]["condition"] = sh.multi_conditions(
f"sub.`$properties`.response_body {r_op} %({e_k_f})s", f.value,
value_key=e_k_f)
else:
logging.warning(f"undefined FETCH filter: {f.type}")
if not apply:
@ -1395,17 +1426,30 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
if extra_conditions and len(extra_conditions) > 0:
_extra_or_condition = []
for i, c in enumerate(extra_conditions):
if sh.isAny_opreator(c.operator):
if sh.isAny_opreator(c.operator) and c.type != schemas.EventType.REQUEST_DETAILS.value:
continue
e_k = f"ec_value{i}"
op = sh.get_sql_operator(c.operator)
c.value = helper.values_for_operator(value=c.value, op=c.operator)
full_args = {**full_args,
**sh.multi_values(c.value, value_key=e_k)}
if c.type == events.EventType.LOCATION.ui_type:
if c.type in (schemas.EventType.LOCATION.value, schemas.EventType.REQUEST.value):
_extra_or_condition.append(
sh.multi_conditions(f"extra_event.url_path {op} %({e_k})s",
c.value, value_key=e_k))
elif c.type == schemas.EventType.REQUEST_DETAILS.value:
for j, c_f in enumerate(c.filters):
if sh.isAny_opreator(c_f.operator) or len(c_f.value) == 0:
continue
e_k += f"_{j}"
op = sh.get_sql_operator(c_f.operator)
c_f.value = helper.values_for_operator(value=c_f.value, op=c_f.operator)
full_args = {**full_args,
**sh.multi_values(c_f.value, value_key=e_k)}
if c_f.type == schemas.FetchFilterType.FETCH_URL.value:
_extra_or_condition.append(
sh.multi_conditions(f"extra_event.url_path {op} %({e_k})s",
c_f.value, value_key=e_k))
else:
logging.warning(f"unsupported extra_event type:${c.type}")
if len(_extra_or_condition) > 0:
@ -1416,9 +1460,10 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
query_part = f"""{f"({events_query_part}) AS f" if len(events_query_part) > 0 else ""}"""
else:
if len(events_query_part) > 0:
extra_join += f"""INNER JOIN (SELECT *
extra_join += f"""INNER JOIN (SELECT DISTINCT ON (session_id) *
FROM {MAIN_SESSIONS_TABLE} AS s {extra_event}
WHERE {" AND ".join(extra_constraints)}) AS s ON(s.session_id=f.session_id)"""
WHERE {" AND ".join(extra_constraints)}
ORDER BY _timestamp DESC) AS s ON(s.session_id=f.session_id)"""
else:
deduplication_keys = ["session_id"] + extra_deduplication
extra_join = f"""(SELECT *

View file

@ -148,7 +148,7 @@ def search2_table(data: schemas.SessionsSearchPayloadSchema, project_id: int, de
"isEvent": True,
"value": [],
"operator": e.operator,
"filters": []
"filters": e.filters
})
for v in e.value:
if v not in extra_conditions[e.operator].value:
@ -165,7 +165,7 @@ def search2_table(data: schemas.SessionsSearchPayloadSchema, project_id: int, de
"isEvent": True,
"value": [],
"operator": e.operator,
"filters": []
"filters": e.filters
})
for v in e.value:
if v not in extra_conditions[e.operator].value:
@ -989,7 +989,7 @@ def search_query_parts(data: schemas.SessionsSearchPayloadSchema, error_status,
sh.multi_conditions(f"ev.{events.EventType.LOCATION.column} {op} %({e_k})s",
c.value, value_key=e_k))
else:
logger.warning(f"unsupported extra_event type:${c.type}")
logger.warning(f"unsupported extra_event type: {c.type}")
if len(_extra_or_condition) > 0:
extra_constraints.append("(" + " OR ".join(_extra_or_condition) + ")")
query_part = f"""\

View file

@ -2,7 +2,7 @@ import schemas
from chalicelib.core import events, metadata, events_mobile, \
issues, assist, canvas, user_testing
from . import sessions_mobs, sessions_devtool
from chalicelib.utils import errors_helper
from chalicelib.core.errors.modules import errors_helper
from chalicelib.utils import pg_client, helper
from chalicelib.core.modules import MOB_KEY, get_file_key

View file

@ -2,7 +2,7 @@ import logging
import schemas
from chalicelib.core import metadata, projects
from chalicelib.core.sessions import sessions_favorite, sessions_legacy
from . import sessions_favorite, sessions_legacy
from chalicelib.utils import pg_client, helper
logger = logging.getLogger(__name__)
@ -43,7 +43,13 @@ def search_sessions(data: schemas.SessionsSearchPayloadSchema, project: schemas.
count_only=False, issue=None, ids_only=False, platform="web"):
if data.bookmarked:
data.startTimestamp, data.endTimestamp = sessions_favorite.get_start_end_timestamp(project.project_id, user_id)
if data.startTimestamp is None:
logger.debug(f"No vault sessions found for project:{project.project_id}")
return {
'total': 0,
'sessions': [],
'src': 1
}
full_args, query_part = sessions_legacy.search_query_parts(data=data, error_status=error_status,
errors_only=errors_only,
favorite_only=data.bookmarked, issue=issue,
@ -116,7 +122,10 @@ def search_sessions(data: schemas.SessionsSearchPayloadSchema, project: schemas.
sort = 'session_id'
if data.sort is not None and data.sort != "session_id":
# sort += " " + data.order + "," + helper.key_to_snake_case(data.sort)
sort = helper.key_to_snake_case(data.sort)
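# "datetime" has no matching snake_case column; map it to start_ts instead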
if data.sort == 'datetime':
sort = 'start_ts'
else:
sort = helper.key_to_snake_case(data.sort)
meta_keys = metadata.get(project_id=project.project_id)
main_query = cur.mogrify(f"""SELECT COUNT(full_sessions) AS count,

View file

@ -1,7 +1,7 @@
import logging
from chalicelib.core import assist
from chalicelib.core.sessions import sessions
from . import sessions
logger = logging.getLogger(__name__)

View file

@ -34,7 +34,10 @@ if config("CH_COMPRESSION", cast=bool, default=True):
def transform_result(self, original_function):
@wraps(original_function)
def wrapper(*args, **kwargs):
logger.debug(str.encode(self.format(query=kwargs.get("query", ""), parameters=kwargs.get("parameters"))))
if kwargs.get("parameters"):
logger.debug(str.encode(self.format(query=kwargs.get("query", ""), parameters=kwargs.get("parameters"))))
elif len(args) > 0:
logger.debug(str.encode(args[0]))
result = original_function(*args, **kwargs)
if isinstance(result, clickhouse_connect.driver.query.QueryResult):
column_names = result.column_names
@ -146,13 +149,11 @@ class ClickHouseClient:
def __enter__(self):
return self.__client
def format(self, query, *, parameters=None):
if parameters is None:
return query
return query % {
key: f"'{value}'" if isinstance(value, str) else value
for key, value in parameters.items()
}
def format(self, query, parameters=None):
if parameters:
ctx = QueryContext(query=query, parameters=parameters)
return ctx.final_query
return query
def __exit__(self, *args):
if config('CH_POOL', cast=bool, default=True):
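The new format() delegates parameter rendering to clickhouse_connect instead of naive
%-interpolation. A minimal usage sketch with a hypothetical query (the import is
assumed; QueryContext and final_query are the names used in the change above):

from clickhouse_connect.driver.query import QueryContext

ctx = QueryContext(query="SELECT 1 FROM sessions WHERE project_id = %(pid)s",
                   parameters={"pid": 42})
print(ctx.final_query)  # parameters rendered by the driver, with proper quoting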

View file

@ -1,14 +0,0 @@
from chalicelib.core.sourcemaps import sourcemaps
def format_first_stack_frame(error):
error["stack"] = sourcemaps.format_payload(error.pop("payload"), truncate_to_first=True)
for s in error["stack"]:
for c in s.get("context", []):
for sci, sc in enumerate(c):
if isinstance(sc, str) and len(sc) > 1000:
c[sci] = sc[:1000]
# convert bytes to string:
if isinstance(s["filename"], bytes):
s["filename"] = s["filename"].decode("utf-8")
return error

View file

@ -19,6 +19,16 @@ PG_CONFIG = dict(_PG_CONFIG)
if config("PG_TIMEOUT", cast=int, default=0) > 0:
PG_CONFIG["options"] = f"-c statement_timeout={config('PG_TIMEOUT', cast=int) * 1000}"
if config('PG_POOL', cast=bool, default=True):
PG_CONFIG = {
**PG_CONFIG,
# Keepalive settings
"keepalives": 1, # Enable keepalives
"keepalives_idle": 300, # Seconds before sending keepalive
"keepalives_interval": 10, # Seconds between keepalives
"keepalives_count": 3 # Number of keepalives before giving up
}
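# These keepalive keys are standard libpq connection parameters. A minimal sketch
# (hypothetical credentials) of what they mean once the dict reaches psycopg2.connect:
#   conn = psycopg2.connect(
#       host="localhost", dbname="openreplay", user="or", password="secret",
#       keepalives=1,            # turn TCP keepalives on
#       keepalives_idle=300,     # idle seconds before the first probe
#       keepalives_interval=10,  # seconds between probes
#       keepalives_count=3,      # failed probes before the socket is declared dead
#   )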
class ORThreadedConnectionPool(psycopg2.pool.ThreadedConnectionPool):
def __init__(self, minconn, maxconn, *args, **kwargs):
@ -55,6 +65,7 @@ RETRY = 0
def make_pool():
if not config('PG_POOL', cast=bool, default=True):
logger.info("PG_POOL is disabled, not creating a new one")
return
global postgreSQL_pool
global RETRY
@ -176,8 +187,7 @@ class PostgresClient:
async def init():
logger.info(f">use PG_POOL:{config('PG_POOL', default=True)}")
if config('PG_POOL', cast=bool, default=True):
make_pool()
make_pool()
async def terminate():

View file

@ -4,37 +4,41 @@ import schemas
def get_sql_operator(op: Union[schemas.SearchEventOperator, schemas.ClickEventExtraOperator, schemas.MathOperator]):
if isinstance(op, Enum):
op = op.value
return {
schemas.SearchEventOperator.IS: "=",
schemas.SearchEventOperator.ON: "=",
schemas.SearchEventOperator.ON_ANY: "IN",
schemas.SearchEventOperator.IS_NOT: "!=",
schemas.SearchEventOperator.NOT_ON: "!=",
schemas.SearchEventOperator.CONTAINS: "ILIKE",
schemas.SearchEventOperator.NOT_CONTAINS: "NOT ILIKE",
schemas.SearchEventOperator.STARTS_WITH: "ILIKE",
schemas.SearchEventOperator.ENDS_WITH: "ILIKE",
schemas.SearchEventOperator.IS.value: "=",
schemas.SearchEventOperator.ON.value: "=",
schemas.SearchEventOperator.ON_ANY.value: "IN",
schemas.SearchEventOperator.IS_NOT.value: "!=",
schemas.SearchEventOperator.NOT_ON.value: "!=",
schemas.SearchEventOperator.CONTAINS.value: "ILIKE",
schemas.SearchEventOperator.NOT_CONTAINS.value: "NOT ILIKE",
schemas.SearchEventOperator.STARTS_WITH.value: "ILIKE",
schemas.SearchEventOperator.ENDS_WITH.value: "ILIKE",
# Selector operators:
schemas.ClickEventExtraOperator.IS: "=",
schemas.ClickEventExtraOperator.IS_NOT: "!=",
schemas.ClickEventExtraOperator.CONTAINS: "ILIKE",
schemas.ClickEventExtraOperator.NOT_CONTAINS: "NOT ILIKE",
schemas.ClickEventExtraOperator.STARTS_WITH: "ILIKE",
schemas.ClickEventExtraOperator.ENDS_WITH: "ILIKE",
schemas.ClickEventExtraOperator.IS.value: "=",
schemas.ClickEventExtraOperator.IS_NOT.value: "!=",
schemas.ClickEventExtraOperator.CONTAINS.value: "ILIKE",
schemas.ClickEventExtraOperator.NOT_CONTAINS.value: "NOT ILIKE",
schemas.ClickEventExtraOperator.STARTS_WITH.value: "ILIKE",
schemas.ClickEventExtraOperator.ENDS_WITH.value: "ILIKE",
schemas.MathOperator.GREATER: ">",
schemas.MathOperator.GREATER_EQ: ">=",
schemas.MathOperator.LESS: "<",
schemas.MathOperator.LESS_EQ: "<=",
schemas.MathOperator.GREATER.value: ">",
schemas.MathOperator.GREATER_EQ.value: ">=",
schemas.MathOperator.LESS.value: "<",
schemas.MathOperator.LESS_EQ.value: "<=",
}.get(op, "=")
def is_negation_operator(op: schemas.SearchEventOperator):
return op in [schemas.SearchEventOperator.IS_NOT,
schemas.SearchEventOperator.NOT_ON,
schemas.SearchEventOperator.NOT_CONTAINS,
schemas.ClickEventExtraOperator.IS_NOT,
schemas.ClickEventExtraOperator.NOT_CONTAINS]
if isinstance(op, Enum):
op = op.value
return op in [schemas.SearchEventOperator.IS_NOT.value,
schemas.SearchEventOperator.NOT_ON.value,
schemas.SearchEventOperator.NOT_CONTAINS.value,
schemas.ClickEventExtraOperator.IS_NOT.value,
schemas.ClickEventExtraOperator.NOT_CONTAINS.value]
def reverse_sql_operator(op):
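Keying the operator map by .value matters because op is normalized to a plain string
first; with a plain (non-str) Enum, member-keyed lookups would always miss and fall
back to "=". A minimal sketch with a hypothetical stand-in enum:

from enum import Enum

class Op(Enum):
    IS_NOT = "isNot"

op = Op.IS_NOT
if isinstance(op, Enum):
    op = op.value                                # now the plain str "isNot"
assert {Op.IS_NOT: "!="}.get(op) is None         # member-keyed map misses
assert {Op.IS_NOT.value: "!="}.get(op) == "!="   # value-keyed map hits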

View file

@ -1,591 +0,0 @@
-- -- Original Q3
-- WITH ranked_events AS (SELECT *
-- FROM ranked_events_1736344377403),
-- n1 AS (SELECT event_number_in_session,
-- event_type,
-- e_value,
-- next_type,
-- next_value,
-- COUNT(1) AS sessions_count
-- FROM ranked_events
-- WHERE event_number_in_session = 1
-- AND isNotNull(next_value)
-- GROUP BY event_number_in_session, event_type, e_value, next_type, next_value
-- ORDER BY sessions_count DESC
-- LIMIT 8),
-- n2 AS (SELECT *
-- FROM (SELECT re.event_number_in_session AS event_number_in_session,
-- re.event_type AS event_type,
-- re.e_value AS e_value,
-- re.next_type AS next_type,
-- re.next_value AS next_value,
-- COUNT(1) AS sessions_count
-- FROM n1
-- INNER JOIN ranked_events AS re
-- ON (n1.next_value = re.e_value AND n1.next_type = re.event_type)
-- WHERE re.event_number_in_session = 2
-- GROUP BY re.event_number_in_session, re.event_type, re.e_value, re.next_type,
-- re.next_value) AS sub_level
-- ORDER BY sessions_count DESC
-- LIMIT 8),
-- n3 AS (SELECT *
-- FROM (SELECT re.event_number_in_session AS event_number_in_session,
-- re.event_type AS event_type,
-- re.e_value AS e_value,
-- re.next_type AS next_type,
-- re.next_value AS next_value,
-- COUNT(1) AS sessions_count
-- FROM n2
-- INNER JOIN ranked_events AS re
-- ON (n2.next_value = re.e_value AND n2.next_type = re.event_type)
-- WHERE re.event_number_in_session = 3
-- GROUP BY re.event_number_in_session, re.event_type, re.e_value, re.next_type,
-- re.next_value) AS sub_level
-- ORDER BY sessions_count DESC
-- LIMIT 8),
-- n4 AS (SELECT *
-- FROM (SELECT re.event_number_in_session AS event_number_in_session,
-- re.event_type AS event_type,
-- re.e_value AS e_value,
-- re.next_type AS next_type,
-- re.next_value AS next_value,
-- COUNT(1) AS sessions_count
-- FROM n3
-- INNER JOIN ranked_events AS re
-- ON (n3.next_value = re.e_value AND n3.next_type = re.event_type)
-- WHERE re.event_number_in_session = 4
-- GROUP BY re.event_number_in_session, re.event_type, re.e_value, re.next_type,
-- re.next_value) AS sub_level
-- ORDER BY sessions_count DESC
-- LIMIT 8),
-- n5 AS (SELECT *
-- FROM (SELECT re.event_number_in_session AS event_number_in_session,
-- re.event_type AS event_type,
-- re.e_value AS e_value,
-- re.next_type AS next_type,
-- re.next_value AS next_value,
-- COUNT(1) AS sessions_count
-- FROM n4
-- INNER JOIN ranked_events AS re
-- ON (n4.next_value = re.e_value AND n4.next_type = re.event_type)
-- WHERE re.event_number_in_session = 5
-- GROUP BY re.event_number_in_session, re.event_type, re.e_value, re.next_type,
-- re.next_value) AS sub_level
-- ORDER BY sessions_count DESC
-- LIMIT 8)
-- SELECT *
-- FROM (SELECT event_number_in_session,
-- event_type,
-- e_value,
-- next_type,
-- next_value,
-- sessions_count
-- FROM n1
-- UNION ALL
-- SELECT event_number_in_session,
-- event_type,
-- e_value,
-- next_type,
-- next_value,
-- sessions_count
-- FROM n2
-- UNION ALL
-- SELECT event_number_in_session,
-- event_type,
-- e_value,
-- next_type,
-- next_value,
-- sessions_count
-- FROM n3
-- UNION ALL
-- SELECT event_number_in_session,
-- event_type,
-- e_value,
-- next_type,
-- next_value,
-- sessions_count
-- FROM n4
-- UNION ALL
-- SELECT event_number_in_session,
-- event_type,
-- e_value,
-- next_type,
-- next_value,
-- sessions_count
-- FROM n5) AS chart_steps
-- ORDER BY event_number_in_session;
-- Q1
-- CREATE TEMPORARY TABLE pre_ranked_events_1736344377403 AS
CREATE TABLE pre_ranked_events_1736344377403 ENGINE = Memory AS
(WITH initial_event AS (SELECT events.session_id, MIN(datetime) AS start_event_timestamp
FROM experimental.events AS events
WHERE ((event_type = 'LOCATION' AND (url_path = '/en/deployment/')))
AND events.project_id = toUInt16(65)
AND events.datetime >= toDateTime(1735599600000 / 1000)
AND events.datetime < toDateTime(1736290799999 / 1000)
GROUP BY 1),
pre_ranked_events AS (SELECT *
FROM (SELECT session_id,
event_type,
datetime,
url_path AS e_value,
row_number() OVER (PARTITION BY session_id
ORDER BY datetime ,
message_id ) AS event_number_in_session
FROM experimental.events AS events
INNER JOIN initial_event ON (events.session_id = initial_event.session_id)
WHERE events.project_id = toUInt16(65)
AND events.datetime >= toDateTime(1735599600000 / 1000)
AND events.datetime < toDateTime(1736290799999 / 1000)
AND (events.event_type = 'LOCATION')
AND events.datetime >= initial_event.start_event_timestamp
) AS full_ranked_events
WHERE event_number_in_session <= 5)
SELECT *
FROM pre_ranked_events);
;
SELECT *
FROM pre_ranked_events_1736344377403
WHERE event_number_in_session < 3;
-- ---------Q2-----------
-- CREATE TEMPORARY TABLE ranked_events_1736344377403 AS
DROP TABLE ranked_events_1736344377403;
CREATE TABLE ranked_events_1736344377403 ENGINE = Memory AS
(WITH pre_ranked_events AS (SELECT *
FROM pre_ranked_events_1736344377403),
start_points AS (SELECT DISTINCT session_id
FROM pre_ranked_events
WHERE ((event_type = 'LOCATION' AND (e_value = '/en/deployment/')))
AND pre_ranked_events.event_number_in_session = 1),
ranked_events AS (SELECT pre_ranked_events.*,
leadInFrame(e_value)
OVER (PARTITION BY session_id ORDER BY datetime
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS next_value,
leadInFrame(toNullable(event_type))
OVER (PARTITION BY session_id ORDER BY datetime
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS next_type
FROM start_points
INNER JOIN pre_ranked_events USING (session_id))
SELECT *
FROM ranked_events);
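-- Note: leadInFrame over the per-session window attaches each event's successor
-- as (next_type, next_value); a session's final event has no successor, which
-- the queries below detect via isNull(next_type)/isNull(next_value) and treat
-- as a drop.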
-- ranked events
SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
COUNT(1) AS sessions_count
FROM ranked_events_1736344377403
WHERE event_number_in_session = 2
-- AND e_value='/en/deployment/deploy-docker/'
-- AND next_value NOT IN ('/en/deployment/','/en/plugins/','/en/using-or/')
-- AND e_value NOT IN ('/en/deployment/deploy-docker/','/en/getting-started/','/en/deployment/deploy-ubuntu/')
AND isNotNull(next_value)
GROUP BY event_number_in_session, event_type, e_value, next_type, next_value
ORDER BY event_number_in_session, sessions_count DESC;
SELECT event_number_in_session,
event_type,
e_value,
COUNT(1) AS sessions_count
FROM ranked_events_1736344377403
WHERE event_number_in_session = 1
GROUP BY event_number_in_session, event_type, e_value
ORDER BY event_number_in_session, sessions_count DESC;
SELECT COUNT(1) AS sessions_count
FROM ranked_events_1736344377403
WHERE event_number_in_session = 2
AND isNull(next_value)
;
-- ---------Q3 MORE -----------
WITH ranked_events AS (SELECT *
FROM ranked_events_1736344377403),
n1 AS (SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
COUNT(1) AS sessions_count
FROM ranked_events
WHERE event_number_in_session = 1
GROUP BY event_number_in_session, event_type, e_value, next_type, next_value
ORDER BY sessions_count DESC),
n2 AS (SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
COUNT(1) AS sessions_count
FROM ranked_events
WHERE event_number_in_session = 2
GROUP BY event_number_in_session, event_type, e_value, next_type, next_value
ORDER BY sessions_count DESC),
n3 AS (SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
COUNT(1) AS sessions_count
FROM ranked_events
WHERE event_number_in_session = 3
GROUP BY event_number_in_session, event_type, e_value, next_type, next_value
ORDER BY sessions_count DESC),
drop_n AS (-- STEP 1
SELECT event_number_in_session,
event_type,
e_value,
'DROP' AS next_type,
NULL AS next_value,
sessions_count
FROM n1
WHERE isNull(n1.next_type)
UNION ALL
-- STEP 2
SELECT event_number_in_session,
event_type,
e_value,
'DROP' AS next_type,
NULL AS next_value,
sessions_count
FROM n2
WHERE isNull(n2.next_type)),
-- TODO: turn this into top_steps, where every step routes to the next one as top/others
top_n1 AS (-- STEP 1
SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
sessions_count
FROM n1
WHERE isNotNull(next_type)
ORDER BY sessions_count DESC
LIMIT 3),
top_n2 AS (-- STEP 2
SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
sessions_count
FROM n2
WHERE (event_type, e_value) IN (SELECT event_type,
e_value
FROM n2
WHERE isNotNull(next_type)
GROUP BY event_type, e_value
ORDER BY SUM(sessions_count) DESC
LIMIT 3)
ORDER BY sessions_count DESC),
top_n AS (SELECT *
FROM top_n1
UNION ALL
SELECT *
FROM top_n2),
u_top_n AS (SELECT DISTINCT event_number_in_session,
event_type,
e_value
FROM top_n),
others_n AS (
-- STEP 1
SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
sessions_count
FROM n1
WHERE isNotNull(next_type)
ORDER BY sessions_count DESC
LIMIT 1000000 OFFSET 3
UNION ALL
-- STEP 2
SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
sessions_count
FROM n2
WHERE isNotNull(next_type)
-- GROUP BY event_number_in_session, event_type, e_value
ORDER BY sessions_count DESC
LIMIT 1000000 OFFSET 3)
SELECT *
FROM (
-- Top
SELECT *
FROM top_n
-- UNION ALL
-- -- Others
-- SELECT event_number_in_session,
-- event_type,
-- e_value,
-- 'OTHER' AS next_type,
-- NULL AS next_value,
-- SUM(sessions_count)
-- FROM others_n
-- GROUP BY event_number_in_session, event_type, e_value
-- UNION ALL
-- -- Top go to Drop
-- SELECT drop_n.event_number_in_session,
-- drop_n.event_type,
-- drop_n.e_value,
-- drop_n.next_type,
-- drop_n.next_value,
-- drop_n.sessions_count
-- FROM drop_n
-- INNER JOIN u_top_n ON (drop_n.event_number_in_session = u_top_n.event_number_in_session
-- AND drop_n.event_type = u_top_n.event_type
-- AND drop_n.e_value = u_top_n.e_value)
-- ORDER BY drop_n.event_number_in_session
-- -- -- UNION ALL
-- -- -- Top go to Others
-- SELECT top_n.event_number_in_session,
-- top_n.event_type,
-- top_n.e_value,
-- 'OTHER' AS next_type,
-- NULL AS next_value,
-- SUM(top_n.sessions_count) AS sessions_count
-- FROM top_n
-- LEFT JOIN others_n ON (others_n.event_number_in_session = (top_n.event_number_in_session + 1)
-- AND top_n.next_type = others_n.event_type
-- AND top_n.next_value = others_n.e_value)
-- WHERE others_n.event_number_in_session IS NULL
-- AND top_n.next_type IS NOT NULL
-- GROUP BY event_number_in_session, event_type, e_value
-- UNION ALL
-- -- Others go to Top
-- SELECT others_n.event_number_in_session,
-- 'OTHER' AS event_type,
-- NULL AS e_value,
-- others_n.s_next_type AS next_type,
-- others_n.s_next_value AS next_value,
-- SUM(sessions_count) AS sessions_count
-- FROM others_n
-- INNER JOIN top_n ON (others_n.event_number_in_session = top_n.event_number_in_session + 1 AND
-- others_n.s_next_type = top_n.event_type AND
-- others_n.s_next_value = top_n.event_type)
-- GROUP BY others_n.event_number_in_session, next_type, next_value
-- UNION ALL
-- -- TODO: find if this works or not
-- -- Others go to Others
-- SELECT others_n.event_number_in_session,
-- 'OTHER' AS event_type,
-- NULL AS e_value,
-- 'OTHERS' AS next_type,
-- NULL AS next_value,
-- SUM(sessions_count) AS sessions_count
-- FROM others_n
-- LEFT JOIN u_top_n ON ((others_n.event_number_in_session + 1) = u_top_n.event_number_in_session
-- AND others_n.s_next_type = u_top_n.event_type
-- AND others_n.s_next_value = u_top_n.e_value)
-- WHERE u_top_n.event_number_in_session IS NULL
-- GROUP BY others_n.event_number_in_session
)
ORDER BY event_number_in_session;
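-- Note: the query above aggregates each step (n1..n3), keeps the 3 most frequent
-- transitions per step as top_n1/top_n2, routes everything beyond the top 3 to
-- others_n (LIMIT ... OFFSET 3), and tags rows with a NULL next_type as DROP;
-- the commented-out UNION branches sketch how the top/other/drop links would be
-- stitched into the final chart.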
-- ---------Q3 TOP ON VALUE ONLY -----------
WITH ranked_events AS (SELECT *
FROM ranked_events_1736344377403),
n1 AS (SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
COUNT(1) AS sessions_count
FROM ranked_events
WHERE event_number_in_session = 1
GROUP BY event_number_in_session, event_type, e_value, next_type, next_value
ORDER BY sessions_count DESC),
n2 AS (SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
COUNT(1) AS sessions_count
FROM ranked_events
WHERE event_number_in_session = 2
GROUP BY event_number_in_session, event_type, e_value, next_type, next_value
ORDER BY sessions_count DESC),
n3 AS (SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
COUNT(1) AS sessions_count
FROM ranked_events
WHERE event_number_in_session = 3
GROUP BY event_number_in_session, event_type, e_value, next_type, next_value
ORDER BY sessions_count DESC),
drop_n AS (-- STEP 1
SELECT event_number_in_session,
event_type,
e_value,
'DROP' AS next_type,
NULL AS next_value,
sessions_count
FROM n1
WHERE isNull(n1.next_type)
UNION ALL
-- STEP 2
SELECT event_number_in_session,
event_type,
e_value,
'DROP' AS next_type,
NULL AS next_value,
sessions_count
FROM n2
WHERE isNull(n2.next_type)),
top_n AS (-- STEP 1
SELECT event_number_in_session,
event_type,
e_value,
SUM(sessions_count) AS sessions_count
FROM n1
GROUP BY event_number_in_session, event_type, e_value
LIMIT 1
UNION ALL
-- STEP 2
SELECT event_number_in_session,
event_type,
e_value,
SUM(sessions_count) AS sessions_count
FROM n2
GROUP BY event_number_in_session, event_type, e_value
ORDER BY sessions_count DESC
LIMIT 3
UNION ALL
-- STEP 3
SELECT event_number_in_session,
event_type,
e_value,
SUM(sessions_count) AS sessions_count
FROM n3
GROUP BY event_number_in_session, event_type, e_value
ORDER BY sessions_count DESC
LIMIT 3),
top_n_with_next AS (SELECT n1.*
FROM n1
UNION ALL
SELECT n2.*
FROM n2
INNER JOIN top_n ON (n2.event_number_in_session = top_n.event_number_in_session
AND n2.event_type = top_n.event_type
AND n2.e_value = top_n.e_value)),
others_n AS (
-- STEP 2
SELECT n2.*
FROM n2
WHERE (n2.event_number_in_session, n2.event_type, n2.e_value) NOT IN
(SELECT event_number_in_session, event_type, e_value
FROM top_n
WHERE top_n.event_number_in_session = 2)
UNION ALL
-- STEP 3
SELECT n3.*
FROM n3
WHERE (n3.event_number_in_session, n3.event_type, n3.e_value) NOT IN
(SELECT event_number_in_session, event_type, e_value
FROM top_n
WHERE top_n.event_number_in_session = 3))
SELECT *
FROM (
-- SELECT sum(top_n_with_next.sessions_count)
-- FROM top_n_with_next
-- WHERE event_number_in_session = 1
-- -- AND isNotNull(next_value)
-- AND (next_type, next_value) IN
-- (SELECT others_n.event_type, others_n.e_value FROM others_n WHERE others_n.event_number_in_session = 2)
-- -- SELECT * FROM others_n
-- -- SELECT * FROM n2
-- SELECT *
-- FROM top_n
-- );
-- Top to Top: valid
SELECT top_n_with_next.*
FROM top_n_with_next
INNER JOIN top_n
ON (top_n_with_next.event_number_in_session + 1 = top_n.event_number_in_session
AND top_n_with_next.next_type = top_n.event_type
AND top_n_with_next.next_value = top_n.e_value)
UNION ALL
-- Top to Others: valid
SELECT top_n_with_next.event_number_in_session,
top_n_with_next.event_type,
top_n_with_next.e_value,
'OTHER' AS next_type,
NULL AS next_value,
SUM(top_n_with_next.sessions_count) AS sessions_count
FROM top_n_with_next
WHERE (top_n_with_next.event_number_in_session + 1, top_n_with_next.next_type, top_n_with_next.next_value) IN
(SELECT others_n.event_number_in_session, others_n.event_type, others_n.e_value FROM others_n)
GROUP BY top_n_with_next.event_number_in_session, top_n_with_next.event_type, top_n_with_next.e_value
UNION ALL
-- Top to Drop: valid
SELECT drop_n.event_number_in_session,
drop_n.event_type,
drop_n.e_value,
drop_n.next_type,
drop_n.next_value,
drop_n.sessions_count
FROM drop_n
INNER JOIN top_n ON (drop_n.event_number_in_session = top_n.event_number_in_session
AND drop_n.event_type = top_n.event_type
AND drop_n.e_value = top_n.e_value)
ORDER BY drop_n.event_number_in_session
UNION ALL
-- Others to Drop: valid
SELECT others_n.event_number_in_session,
'OTHER' AS event_type,
NULL AS e_value,
'DROP' AS next_type,
NULL AS next_value,
SUM(others_n.sessions_count) AS sessions_count
FROM others_n
WHERE isNull(others_n.next_type)
AND others_n.event_number_in_session < 3
GROUP BY others_n.event_number_in_session, next_type, next_value
UNION ALL
-- Others to Top: valid
SELECT others_n.event_number_in_session,
'OTHER' AS event_type,
NULL AS e_value,
others_n.next_type,
others_n.next_value,
SUM(others_n.sessions_count) AS sessions_count
FROM others_n
WHERE isNotNull(others_n.next_type)
AND (others_n.event_number_in_session + 1, others_n.next_type, others_n.next_value) IN
(SELECT top_n.event_number_in_session, top_n.event_type, top_n.e_value FROM top_n)
GROUP BY others_n.event_number_in_session, others_n.next_type, others_n.next_value
UNION ALL
-- Others to Others
SELECT others_n.event_number_in_session,
'OTHER' AS event_type,
NULL AS e_value,
'OTHERS' AS next_type,
NULL AS next_value,
SUM(sessions_count) AS sessions_count
FROM others_n
WHERE isNotNull(others_n.next_type)
AND others_n.event_number_in_session < 3
AND (others_n.event_number_in_session + 1, others_n.next_type, others_n.next_value) NOT IN
(SELECT event_number_in_session, event_type, e_value FROM top_n)
GROUP BY others_n.event_number_in_session)
ORDER BY event_number_in_session, sessions_count DESC;
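-- Note: this variant ranks top steps by (event_type, e_value) totals alone and
-- then emits one branch per link kind (Top to Top, Top to Others, Top to Drop,
-- Others to Drop, Others to Top, Others to Others), so each session counted at
-- a step is routed to a single outgoing link.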

View file

@ -960,36 +960,6 @@ class CardSessionsSchema(_TimedSchema, _PaginatedSchema):
return self
# We don't need this as the UI is expecting filters to override the full series' filters
# @model_validator(mode="after")
# def __merge_out_filters_with_series(self):
# for f in self.filters:
# for s in self.series:
# found = False
#
# if f.is_event:
# sub = s.filter.events
# else:
# sub = s.filter.filters
#
# for e in sub:
# if f.type == e.type and f.operator == e.operator:
# found = True
# if f.is_event:
# # If extra event: append value
# for v in f.value:
# if v not in e.value:
# e.value.append(v)
# else:
# # If extra filter: override value
# e.value = f.value
# if not found:
# sub.append(f)
#
# self.filters = []
#
# return self
# UI is expecting filters to override the full series' filters
@model_validator(mode="after")
def __override_series_filters_with_outer_filters(self):
@ -1060,6 +1030,16 @@ class CardTable(__CardSchema):
values["metricValue"] = []
return values
@model_validator(mode="after")
def __enforce_AND_operator(self):
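# Force AND events order for visited-URL and fetch table cards.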
self.metric_of = MetricOfTable(self.metric_of)
if self.metric_of in (MetricOfTable.VISITED_URL, MetricOfTable.FETCH,
MetricOfTable.VISITED_URL.value, MetricOfTable.FETCH.value):
for s in self.series:
if s.filter is not None:
s.filter.events_order = SearchEventOrder.AND
return self
@model_validator(mode="after")
def __transform(self):
self.metric_of = MetricOfTable(self.metric_of)
@ -1135,7 +1115,7 @@ class CardPathAnalysis(__CardSchema):
view_type: MetricOtherViewType = Field(...)
metric_value: List[ProductAnalyticsSelectedEventType] = Field(default_factory=list)
density: int = Field(default=4, ge=2, le=10)
rows: int = Field(default=3, ge=1, le=10)
rows: int = Field(default=5, ge=1, le=10)
start_type: Literal["start", "end"] = Field(default="start")
start_point: List[PathAnalysisSubFilterSchema] = Field(default_factory=list)

View file

@ -19,14 +19,16 @@ const EVENTS_DEFINITION = {
}
};
EVENTS_DEFINITION.emit = {
NEW_AGENT: "NEW_AGENT",
NO_AGENTS: "NO_AGENT",
AGENT_DISCONNECT: "AGENT_DISCONNECTED",
AGENTS_CONNECTED: "AGENTS_CONNECTED",
NO_SESSIONS: "SESSION_DISCONNECTED",
SESSION_ALREADY_CONNECTED: "SESSION_ALREADY_CONNECTED",
SESSION_RECONNECTED: "SESSION_RECONNECTED",
UPDATE_EVENT: EVENTS_DEFINITION.listen.UPDATE_EVENT
NEW_AGENT: "NEW_AGENT",
NO_AGENTS: "NO_AGENT",
AGENT_DISCONNECT: "AGENT_DISCONNECTED",
AGENTS_CONNECTED: "AGENTS_CONNECTED",
AGENTS_INFO_CONNECTED: "AGENTS_INFO_CONNECTED",
NO_SESSIONS: "SESSION_DISCONNECTED",
SESSION_ALREADY_CONNECTED: "SESSION_ALREADY_CONNECTED",
SESSION_RECONNECTED: "SESSION_RECONNECTED",
UPDATE_EVENT: EVENTS_DEFINITION.listen.UPDATE_EVENT,
WEBRTC_CONFIG: "WEBRTC_CONFIG",
};
const BASE_sessionInfo = {

View file

@ -27,9 +27,14 @@ const respond = function (req, res, data) {
res.setHeader('Content-Type', 'application/json');
res.end(JSON.stringify(result));
} else {
res.cork(() => {
res.writeStatus('200 OK').writeHeader('Content-Type', 'application/json').end(JSON.stringify(result));
});
if (!res.aborted) {
res.cork(() => {
res.writeStatus('200 OK').writeHeader('Content-Type', 'application/json').end(JSON.stringify(result));
});
} else {
logger.debug("response aborted");
return;
}
}
const duration = performance.now() - req.startTs;
IncreaseTotalRequests();

View file

@ -42,7 +42,7 @@ const findSessionSocketId = async (io, roomId, tabId) => {
};
async function getRoomData(io, roomID) {
let tabsCount = 0, agentsCount = 0, tabIDs = [], agentIDs = [];
let tabsCount = 0, agentsCount = 0, tabIDs = [], agentIDs = [], config = null, agentInfos = [];
const connected_sockets = await io.in(roomID).fetchSockets();
if (connected_sockets.length > 0) {
for (let socket of connected_sockets) {
@ -52,13 +52,19 @@ async function getRoomData(io, roomID) {
} else {
agentsCount++;
agentIDs.push(socket.id);
agentInfos.push({ ...socket.handshake.query.agentInfo, socketId: socket.id });
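// If several agents supply a config, the last socket iterated wins.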
if (socket.handshake.query.config !== undefined) {
config = socket.handshake.query.config;
}
}
}
} else {
tabsCount = -1;
agentsCount = -1;
agentInfos = [];
agentIDs = [];
}
return {tabsCount, agentsCount, tabIDs, agentIDs};
return {tabsCount, agentsCount, tabIDs, agentIDs, config, agentInfos};
}
function processNewSocket(socket) {
@ -78,7 +84,7 @@ async function onConnect(socket) {
IncreaseOnlineConnections(socket.handshake.query.identity);
const io = getServer();
const {tabsCount, agentsCount, tabIDs, agentIDs} = await getRoomData(io, socket.handshake.query.roomId);
const {tabsCount, agentsCount, tabIDs, agentInfos, agentIDs, config} = await getRoomData(io, socket.handshake.query.roomId);
if (socket.handshake.query.identity === IDENTITIES.session) {
// Check if a session with the same tabID is already connected; if so, refuse the new connection
@ -100,7 +106,9 @@ async function onConnect(socket) {
// Inform all connected agents about reconnected session
if (agentsCount > 0) {
logger.debug(`notifying new session about agent-existence`);
io.to(socket.id).emit(EVENTS_DEFINITION.emit.WEBRTC_CONFIG, config);
io.to(socket.id).emit(EVENTS_DEFINITION.emit.AGENTS_CONNECTED, agentIDs);
io.to(socket.id).emit(EVENTS_DEFINITION.emit.AGENTS_INFO_CONNECTED, agentInfos);
socket.to(socket.handshake.query.roomId).emit(EVENTS_DEFINITION.emit.SESSION_RECONNECTED, socket.id);
}
} else if (tabsCount <= 0) {
@ -118,7 +126,8 @@ async function onConnect(socket) {
// Stats
startAssist(socket, socket.handshake.query.agentID);
}
socket.to(socket.handshake.query.roomId).emit(EVENTS_DEFINITION.emit.NEW_AGENT, socket.id, socket.handshake.query.agentInfo);
io.to(socket.handshake.query.roomId).emit(EVENTS_DEFINITION.emit.WEBRTC_CONFIG, socket.handshake.query.config);
socket.to(socket.handshake.query.roomId).emit(EVENTS_DEFINITION.emit.NEW_AGENT, socket.id, { ...socket.handshake.query.agentInfo });
}
// Set disconnect handler

View file

@ -8,8 +8,7 @@ import (
"openreplay/backend/pkg/db/postgres/pool"
"openreplay/backend/pkg/logger"
"openreplay/backend/pkg/metrics"
analyticsMetrics "openreplay/backend/pkg/metrics/analytics"
databaseMetrics "openreplay/backend/pkg/metrics/database"
"openreplay/backend/pkg/metrics/database"
"openreplay/backend/pkg/metrics/web"
"openreplay/backend/pkg/server"
"openreplay/backend/pkg/server/api"
@ -19,16 +18,18 @@ func main() {
ctx := context.Background()
log := logger.New()
cfg := analyticsConfig.New(log)
// Observability
webMetrics := web.New("analytics")
metrics.New(log, append(webMetrics.List(), append(analyticsMetrics.List(), databaseMetrics.List()...)...))
dbMetrics := database.New("analytics")
metrics.New(log, append(webMetrics.List(), dbMetrics.List()...))
pgConn, err := pool.New(cfg.Postgres.String())
pgConn, err := pool.New(dbMetrics, cfg.Postgres.String())
if err != nil {
log.Fatal(ctx, "can't init postgres connection: %s", err)
}
defer pgConn.Close()
builder, err := analytics.NewServiceBuilder(log, cfg, webMetrics, pgConn)
builder, err := analytics.NewServiceBuilder(log, cfg, webMetrics, dbMetrics, pgConn)
if err != nil {
log.Fatal(ctx, "can't init services: %s", err)
}

View file

@ -22,13 +22,15 @@ func main() {
ctx := context.Background()
log := logger.New()
cfg := config.New(log)
metrics.New(log, assetsMetrics.List())
// Observability
assetMetrics := assetsMetrics.New("assets")
metrics.New(log, assetMetrics.List())
objStore, err := store.NewStore(&cfg.ObjectsConfig)
if err != nil {
log.Fatal(ctx, "can't init object storage: %s", err)
}
cacher, err := cacher.NewCacher(cfg, objStore)
cacher, err := cacher.NewCacher(cfg, objStore, assetMetrics)
if err != nil {
log.Fatal(ctx, "can't init cacher: %s", err)
}
@ -37,7 +39,7 @@ func main() {
switch m := msg.(type) {
case *messages.AssetCache:
cacher.CacheURL(m.SessionID(), m.URL)
assetsMetrics.IncreaseProcessesSessions()
assetMetrics.IncreaseProcessesSessions()
case *messages.JSException:
sourceList, err := assets.ExtractJSExceptionSources(&m.Payload)
if err != nil {

View file

@ -22,6 +22,7 @@ func main() {
ctx := context.Background()
log := logger.New()
cfg := config.New(log)
// Observability
canvasMetrics := canvasesMetrics.New("canvases")
metrics.New(log, canvasMetrics.List())

View file

@ -14,7 +14,7 @@ import (
"openreplay/backend/pkg/memory"
"openreplay/backend/pkg/messages"
"openreplay/backend/pkg/metrics"
databaseMetrics "openreplay/backend/pkg/metrics/database"
"openreplay/backend/pkg/metrics/database"
"openreplay/backend/pkg/projects"
"openreplay/backend/pkg/queue"
"openreplay/backend/pkg/sessions"
@ -26,22 +26,24 @@ func main() {
ctx := context.Background()
log := logger.New()
cfg := config.New(log)
metrics.New(log, databaseMetrics.List())
// Observability
dbMetric := database.New("db")
metrics.New(log, dbMetric.List())
pgConn, err := pool.New(cfg.Postgres.String())
pgConn, err := pool.New(dbMetric, cfg.Postgres.String())
if err != nil {
log.Fatal(ctx, "can't init postgres connection: %s", err)
}
defer pgConn.Close()
chConn := clickhouse.NewConnector(cfg.Clickhouse)
chConn := clickhouse.NewConnector(cfg.Clickhouse, dbMetric)
if err := chConn.Prepare(); err != nil {
log.Fatal(ctx, "can't prepare clickhouse: %s", err)
}
defer chConn.Stop()
// Init db proxy module (postgres + clickhouse + batches)
dbProxy := postgres.NewConn(log, pgConn, chConn)
dbProxy := postgres.NewConn(log, pgConn, chConn, dbMetric)
defer dbProxy.Close()
// Init redis connection
@ -51,8 +53,8 @@ func main() {
}
defer redisClient.Close()
projManager := projects.New(log, pgConn, redisClient)
sessManager := sessions.New(log, pgConn, projManager, redisClient)
projManager := projects.New(log, pgConn, redisClient, dbMetric)
sessManager := sessions.New(log, pgConn, projManager, redisClient, dbMetric)
tagsManager := tags.New(log, pgConn)
// Init data saver

View file

@ -19,7 +19,7 @@ import (
"openreplay/backend/pkg/memory"
"openreplay/backend/pkg/messages"
"openreplay/backend/pkg/metrics"
databaseMetrics "openreplay/backend/pkg/metrics/database"
"openreplay/backend/pkg/metrics/database"
enderMetrics "openreplay/backend/pkg/metrics/ender"
"openreplay/backend/pkg/projects"
"openreplay/backend/pkg/queue"
@ -31,9 +31,12 @@ func main() {
ctx := context.Background()
log := logger.New()
cfg := ender.New(log)
metrics.New(log, append(enderMetrics.List(), databaseMetrics.List()...))
// Observability
dbMetric := database.New("ender")
enderMetric := enderMetrics.New("ender")
metrics.New(log, append(enderMetric.List(), dbMetric.List()...))
pgConn, err := pool.New(cfg.Postgres.String())
pgConn, err := pool.New(dbMetric, cfg.Postgres.String())
if err != nil {
log.Fatal(ctx, "can't init postgres connection: %s", err)
}
@ -45,10 +48,10 @@ func main() {
}
defer redisClient.Close()
projManager := projects.New(log, pgConn, redisClient)
sessManager := sessions.New(log, pgConn, projManager, redisClient)
projManager := projects.New(log, pgConn, redisClient, dbMetric)
sessManager := sessions.New(log, pgConn, projManager, redisClient, dbMetric)
sessionEndGenerator, err := sessionender.New(intervals.EVENTS_SESSION_END_TIMEOUT, cfg.PartitionsNumber)
sessionEndGenerator, err := sessionender.New(enderMetric, intervals.EVENTS_SESSION_END_TIMEOUT, cfg.PartitionsNumber)
if err != nil {
log.Fatal(ctx, "can't init ender service: %s", err)
}

View file

@ -23,7 +23,9 @@ func main() {
ctx := context.Background()
log := logger.New()
cfg := config.New(log)
metrics.New(log, heuristicsMetrics.List())
// Observability
heuristicsMetric := heuristicsMetrics.New("heuristics")
metrics.New(log, heuristicsMetric.List())
// HandlersFabric returns the list of message handlers we want to be applied to each incoming message.
handlersFabric := func() []handlers.MessageProcessor {
@ -62,7 +64,7 @@ func main() {
}
// Run service and wait for TERM signal
service := heuristics.New(log, cfg, producer, consumer, eventBuilder, memoryManager)
service := heuristics.New(log, cfg, producer, consumer, eventBuilder, memoryManager, heuristicsMetric)
log.Info(ctx, "Heuristics service started")
terminator.Wait(log, service)
}

View file

@ -9,7 +9,7 @@ import (
"openreplay/backend/pkg/db/redis"
"openreplay/backend/pkg/logger"
"openreplay/backend/pkg/metrics"
databaseMetrics "openreplay/backend/pkg/metrics/database"
"openreplay/backend/pkg/metrics/database"
"openreplay/backend/pkg/metrics/web"
"openreplay/backend/pkg/queue"
"openreplay/backend/pkg/server"
@ -20,13 +20,15 @@ func main() {
ctx := context.Background()
log := logger.New()
cfg := http.New(log)
// Observability
webMetrics := web.New("http")
metrics.New(log, append(webMetrics.List(), databaseMetrics.List()...))
dbMetric := database.New("http")
metrics.New(log, append(webMetrics.List(), dbMetric.List()...))
producer := queue.NewProducer(cfg.MessageSizeLimit, true)
defer producer.Close(15000)
pgConn, err := pool.New(cfg.Postgres.String())
pgConn, err := pool.New(dbMetric, cfg.Postgres.String())
if err != nil {
log.Fatal(ctx, "can't init postgres connection: %s", err)
}
@ -38,7 +40,7 @@ func main() {
}
defer redisClient.Close()
builder, err := services.New(log, cfg, webMetrics, producer, pgConn, redisClient)
builder, err := services.New(log, cfg, webMetrics, dbMetric, producer, pgConn, redisClient)
if err != nil {
log.Fatal(ctx, "failed while creating services: %s", err)
}

View file

@ -23,6 +23,7 @@ func main() {
ctx := context.Background()
log := logger.New()
cfg := config.New(log)
// Observability
imageMetrics := imagesMetrics.New("images")
metrics.New(log, imageMetrics.List())

View file

@ -18,16 +18,18 @@ func main() {
ctx := context.Background()
log := logger.New()
cfg := config.New(log)
// Observability
webMetrics := web.New("integrations")
metrics.New(log, append(webMetrics.List(), database.List()...))
dbMetric := database.New("integrations")
metrics.New(log, append(webMetrics.List(), dbMetric.List()...))
pgConn, err := pool.New(cfg.Postgres.String())
pgConn, err := pool.New(dbMetric, cfg.Postgres.String())
if err != nil {
log.Fatal(ctx, "can't init postgres connection: %s", err)
}
defer pgConn.Close()
builder, err := integrations.NewServiceBuilder(log, cfg, webMetrics, pgConn)
builder, err := integrations.NewServiceBuilder(log, cfg, webMetrics, dbMetric, pgConn)
if err != nil {
log.Fatal(ctx, "can't init services: %s", err)
}

View file

@ -9,14 +9,14 @@ import (
"syscall"
"time"
"openreplay/backend/internal/config/sink"
config "openreplay/backend/internal/config/sink"
"openreplay/backend/internal/sink/assetscache"
"openreplay/backend/internal/sink/sessionwriter"
"openreplay/backend/internal/storage"
"openreplay/backend/pkg/logger"
"openreplay/backend/pkg/messages"
"openreplay/backend/pkg/metrics"
sinkMetrics "openreplay/backend/pkg/metrics/sink"
"openreplay/backend/pkg/metrics/sink"
"openreplay/backend/pkg/queue"
"openreplay/backend/pkg/url/assets"
)
@ -24,7 +24,9 @@ import (
func main() {
ctx := context.Background()
log := logger.New()
cfg := sink.New(log)
cfg := config.New(log)
// Observability
sinkMetrics := sink.New("sink")
metrics.New(log, sinkMetrics.List())
if _, err := os.Stat(cfg.FsDir); os.IsNotExist(err) {
@ -39,7 +41,7 @@ func main() {
if err != nil {
log.Fatal(ctx, "can't init rewriter: %s", err)
}
assetMessageHandler := assetscache.New(log, cfg, rewriter, producer)
assetMessageHandler := assetscache.New(log, cfg, rewriter, producer, sinkMetrics)
counter := storage.NewLogCounter()
var (
@ -191,7 +193,7 @@ func main() {
cfg.TopicRawWeb,
cfg.TopicRawMobile,
},
messages.NewSinkMessageIterator(log, msgHandler, nil, false),
messages.NewSinkMessageIterator(log, msgHandler, nil, false, sinkMetrics),
false,
cfg.MessageSizeLimit,
)

View file

@ -19,17 +19,20 @@ func main() {
ctx := context.Background()
log := logger.New()
cfg := spotConfig.New(log)
// Observability
webMetrics := web.New("spot")
metrics.New(log, append(webMetrics.List(), append(spotMetrics.List(), databaseMetrics.List()...)...))
spotMetric := spotMetrics.New("spot")
dbMetric := databaseMetrics.New("spot")
metrics.New(log, append(webMetrics.List(), append(spotMetric.List(), dbMetric.List()...)...))
pgConn, err := pool.New(cfg.Postgres.String())
pgConn, err := pool.New(dbMetric, cfg.Postgres.String())
if err != nil {
log.Fatal(ctx, "can't init postgres connection: %s", err)
}
defer pgConn.Close()
prefix := api.NoPrefix
builder, err := spot.NewServiceBuilder(log, cfg, webMetrics, pgConn, prefix)
builder, err := spot.NewServiceBuilder(log, cfg, webMetrics, spotMetric, dbMetric, pgConn, prefix)
if err != nil {
log.Fatal(ctx, "can't init services: %s", err)
}

View file

@ -23,13 +23,15 @@ func main() {
ctx := context.Background()
log := logger.New()
cfg := config.New(log)
metrics.New(log, storageMetrics.List())
// Observability
storageMetric := storageMetrics.New("storage")
metrics.New(log, storageMetric.List())
objStore, err := store.NewStore(&cfg.ObjectsConfig)
if err != nil {
log.Fatal(ctx, "can't init object storage: %s", err)
}
srv, err := storage.New(cfg, log, objStore)
srv, err := storage.New(cfg, log, objStore, storageMetric)
if err != nil {
log.Fatal(ctx, "can't init storage service: %s", err)
}

View file

@ -27,6 +27,7 @@ type cacher struct {
objStorage objectstorage.ObjectStorage // AWS Docs: "These clients are safe to use concurrently."
httpClient *http.Client // Docs: "Clients are safe for concurrent use by multiple goroutines."
rewriter *assets.Rewriter // Read only
metrics metrics.Assets
Errors chan error
sizeLimit int
requestHeaders map[string]string
@ -37,7 +38,7 @@ func (c *cacher) CanCache() bool {
return c.workers.CanAddTask()
}
func NewCacher(cfg *config.Config, store objectstorage.ObjectStorage) (*cacher, error) {
func NewCacher(cfg *config.Config, store objectstorage.ObjectStorage, metrics metrics.Assets) (*cacher, error) {
switch {
case cfg == nil:
return nil, errors.New("config is nil")
@ -93,6 +94,7 @@ func NewCacher(cfg *config.Config, store objectstorage.ObjectStorage) (*cacher,
Errors: make(chan error),
sizeLimit: cfg.AssetsSizeLimit,
requestHeaders: cfg.AssetsRequestHeaders,
metrics: metrics,
}
c.workers = NewPool(64, c.CacheFile)
return c, nil
@ -115,7 +117,7 @@ func (c *cacher) cacheURL(t *Task) {
c.Errors <- errors.Wrap(err, t.urlContext)
return
}
metrics.RecordDownloadDuration(float64(time.Now().Sub(start).Milliseconds()), res.StatusCode)
c.metrics.RecordDownloadDuration(float64(time.Now().Sub(start).Milliseconds()), res.StatusCode)
defer res.Body.Close()
if res.StatusCode >= 400 {
printErr := true
@ -162,12 +164,12 @@ func (c *cacher) cacheURL(t *Task) {
start = time.Now()
err = c.objStorage.Upload(strings.NewReader(strData), t.cachePath, contentType, contentEncoding, objectstorage.NoCompression)
if err != nil {
metrics.RecordUploadDuration(float64(time.Now().Sub(start).Milliseconds()), true)
c.metrics.RecordUploadDuration(float64(time.Now().Sub(start).Milliseconds()), true)
c.Errors <- errors.Wrap(err, t.urlContext)
return
}
metrics.RecordUploadDuration(float64(time.Now().Sub(start).Milliseconds()), false)
metrics.IncreaseSavedSessions()
c.metrics.RecordUploadDuration(float64(time.Now().Sub(start).Milliseconds()), false)
c.metrics.IncreaseSavedSessions()
if isCSS {
if t.depth > 0 {

View file

@ -2,11 +2,12 @@ package datasaver
import (
"context"
"encoding/json"
"openreplay/backend/pkg/db/types"
"openreplay/backend/internal/config/db"
"openreplay/backend/pkg/db/clickhouse"
"openreplay/backend/pkg/db/postgres"
"openreplay/backend/pkg/db/types"
"openreplay/backend/pkg/logger"
. "openreplay/backend/pkg/messages"
queue "openreplay/backend/pkg/queue/types"
@ -50,10 +51,6 @@ func New(log logger.Logger, cfg *db.Config, pg *postgres.Conn, ch clickhouse.Con
}
func (s *saverImpl) Handle(msg Message) {
if msg.TypeID() == MsgCustomEvent {
defer s.Handle(types.WrapCustomEvent(msg.(*CustomEvent)))
}
var (
sessCtx = context.WithValue(context.Background(), "sessionID", msg.SessionID())
session *sessions.Session
@ -69,6 +66,23 @@ func (s *saverImpl) Handle(msg Message) {
return
}
if msg.TypeID() == MsgCustomEvent {
m := msg.(*CustomEvent)
// Try to parse the custom event payload as JSON and extract the or_timestamp field
type CustomEventPayload struct {
CustomTimestamp uint64 `json:"or_timestamp"`
}
customPayload := &CustomEventPayload{}
if err := json.Unmarshal([]byte(m.Payload), customPayload); err == nil {
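// Apply the custom timestamp only when it is not earlier than the session start.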
if customPayload.CustomTimestamp >= session.Timestamp {
s.log.Info(sessCtx, "custom event timestamp received: %v", m.Timestamp)
msg.Meta().Timestamp = customPayload.CustomTimestamp
s.log.Info(sessCtx, "custom event timestamp updated: %v", m.Timestamp)
}
}
defer s.Handle(types.WrapCustomEvent(m))
}
if IsMobileType(msg.TypeID()) {
if err := s.handleMobileMessage(sessCtx, session, msg); err != nil {
if !postgres.IsPkeyViolation(err) {

View file

@ -11,7 +11,7 @@ import (
"openreplay/backend/pkg/logger"
"openreplay/backend/pkg/memory"
"openreplay/backend/pkg/messages"
metrics "openreplay/backend/pkg/metrics/heuristics"
heuristicMetrics "openreplay/backend/pkg/metrics/heuristics"
"openreplay/backend/pkg/queue/types"
)
@ -23,11 +23,12 @@ type heuristicsImpl struct {
consumer types.Consumer
events builders.EventBuilder
mm memory.Manager
metrics heuristicMetrics.Heuristics
done chan struct{}
finished chan struct{}
}
func New(log logger.Logger, cfg *heuristics.Config, p types.Producer, c types.Consumer, e builders.EventBuilder, mm memory.Manager) service.Interface {
func New(log logger.Logger, cfg *heuristics.Config, p types.Producer, c types.Consumer, e builders.EventBuilder, mm memory.Manager, metrics heuristicMetrics.Heuristics) service.Interface {
s := &heuristicsImpl{
log: log,
ctx: context.Background(),
@ -36,6 +37,7 @@ func New(log logger.Logger, cfg *heuristics.Config, p types.Producer, c types.Co
consumer: c,
events: e,
mm: mm,
metrics: metrics,
done: make(chan struct{}),
finished: make(chan struct{}),
}
@ -51,7 +53,7 @@ func (h *heuristicsImpl) run() {
if err := h.producer.Produce(h.cfg.TopicAnalytics, evt.SessionID(), evt.Encode()); err != nil {
h.log.Error(h.ctx, "can't send new event to queue: %s", err)
} else {
metrics.IncreaseTotalEvents(messageTypeName(evt))
h.metrics.IncreaseTotalEvents(messageTypeName(evt))
}
case <-tick:
h.producer.Flush(h.cfg.ProducerTimeout)

View file

@ -12,6 +12,7 @@ import (
featureflagsAPI "openreplay/backend/pkg/featureflags/api"
"openreplay/backend/pkg/flakeid"
"openreplay/backend/pkg/logger"
"openreplay/backend/pkg/metrics/database"
"openreplay/backend/pkg/metrics/web"
"openreplay/backend/pkg/objectstorage/store"
"openreplay/backend/pkg/projects"
@ -36,8 +37,8 @@ type ServicesBuilder struct {
UxTestsAPI api.Handlers
}
func New(log logger.Logger, cfg *http.Config, metrics web.Web, producer types.Producer, pgconn pool.Pool, redis *redis.Client) (*ServicesBuilder, error) {
projs := projects.New(log, pgconn, redis)
func New(log logger.Logger, cfg *http.Config, webMetrics web.Web, dbMetrics database.Database, producer types.Producer, pgconn pool.Pool, redis *redis.Client) (*ServicesBuilder, error) {
projs := projects.New(log, pgconn, redis, dbMetrics)
objStore, err := store.NewStore(&cfg.ObjectsConfig)
if err != nil {
return nil, err
@ -53,11 +54,11 @@ func New(log logger.Logger, cfg *http.Config, metrics web.Web, producer types.Pr
tokenizer := token.NewTokenizer(cfg.TokenSecret)
conditions := conditions.New(pgconn)
flaker := flakeid.NewFlaker(cfg.WorkerID)
sessions := sessions.New(log, pgconn, projs, redis)
sessions := sessions.New(log, pgconn, projs, redis, dbMetrics)
featureFlags := featureflags.New(pgconn)
tags := tags.New(log, pgconn)
uxTesting := uxtesting.New(pgconn)
responser := api.NewResponser(metrics)
responser := api.NewResponser(webMetrics)
builder := &ServicesBuilder{}
if builder.WebAPI, err = websessions.NewHandlers(cfg, log, responser, producer, projs, sessions, uaModule, geoModule, tokenizer, conditions, flaker); err != nil {
return nil, err

View file

@ -21,6 +21,7 @@ type session struct {
// SessionEnder updates timestamp of last message for each session
type SessionEnder struct {
metrics ender.Ender
timeout int64
sessions map[uint64]*session // map[sessionID]session
timeCtrl *timeController
@ -28,8 +29,9 @@ type SessionEnder struct {
enabled bool
}
func New(timeout int64, parts int) (*SessionEnder, error) {
func New(metrics ender.Ender, timeout int64, parts int) (*SessionEnder, error) {
return &SessionEnder{
metrics: metrics,
timeout: timeout,
sessions: make(map[uint64]*session),
timeCtrl: NewTimeController(parts),
@ -56,7 +58,7 @@ func (se *SessionEnder) ActivePartitions(parts []uint64) {
for sessID, _ := range se.sessions {
if !activeParts[sessID%se.parts] {
delete(se.sessions, sessID)
ender.DecreaseActiveSessions()
se.metrics.DecreaseActiveSessions()
removedSessions++
} else {
activeSessions++
@ -89,8 +91,8 @@ func (se *SessionEnder) UpdateSession(msg messages.Message) {
isEnded: false,
isMobile: messages.IsMobileType(msg.TypeID()),
}
ender.IncreaseActiveSessions()
ender.IncreaseTotalSessions()
se.metrics.IncreaseActiveSessions()
se.metrics.IncreaseTotalSessions()
return
}
// Keep the highest user's timestamp for correct session duration value
@ -139,8 +141,8 @@ func (se *SessionEnder) HandleEndedSessions(handler EndedSessionHandler) {
sess.isEnded = true
if res, _ := handler(sessID, sess.lastUserTime); res {
delete(se.sessions, sessID)
ender.DecreaseActiveSessions()
ender.IncreaseClosedSessions()
se.metrics.DecreaseActiveSessions()
se.metrics.IncreaseClosedSessions()
removedSessions++
if endCase == 2 {
brokerTime[1]++

View file

@ -12,7 +12,7 @@ import (
"openreplay/backend/internal/config/sink"
"openreplay/backend/pkg/logger"
"openreplay/backend/pkg/messages"
metrics "openreplay/backend/pkg/metrics/sink"
sinkMetrics "openreplay/backend/pkg/metrics/sink"
"openreplay/backend/pkg/queue/types"
"openreplay/backend/pkg/url/assets"
)
@ -30,9 +30,10 @@ type AssetsCache struct {
producer types.Producer
cache map[string]*CachedAsset
blackList []string // use "example.com" to filter all domains or ".example.com" to filter only third-level domain
metrics sinkMetrics.Sink
}
func New(log logger.Logger, cfg *sink.Config, rewriter *assets.Rewriter, producer types.Producer) *AssetsCache {
func New(log logger.Logger, cfg *sink.Config, rewriter *assets.Rewriter, producer types.Producer, metrics sinkMetrics.Sink) *AssetsCache {
assetsCache := &AssetsCache{
log: log,
cfg: cfg,
@ -40,6 +41,7 @@ func New(log logger.Logger, cfg *sink.Config, rewriter *assets.Rewriter, produce
producer: producer,
cache: make(map[string]*CachedAsset, 64),
blackList: make([]string, 0),
metrics: metrics,
}
// Parse black list for cache layer
if len(cfg.CacheBlackList) > 0 {
@ -76,7 +78,7 @@ func (e *AssetsCache) clearCache() {
if int64(now.Sub(cache.ts).Minutes()) > e.cfg.CacheExpiration {
deleted++
delete(e.cache, id)
metrics.DecreaseCachedAssets()
e.metrics.DecreaseCachedAssets()
}
}
e.log.Info(context.Background(), "cache cleaner: deleted %d/%d assets", deleted, cacheSize)
@ -194,7 +196,7 @@ func parseHost(baseURL string) (string, error) {
}
func (e *AssetsCache) handleCSS(sessionID uint64, baseURL string, css string) string {
metrics.IncreaseTotalAssets()
e.metrics.IncreaseTotalAssets()
// Try to find asset in cache
h := md5.New()
// Cut first part of url (scheme + host)
@ -217,7 +219,7 @@ func (e *AssetsCache) handleCSS(sessionID uint64, baseURL string, css string) st
e.mutex.RUnlock()
if ok {
if int64(time.Now().Sub(cachedAsset.ts).Minutes()) < e.cfg.CacheExpiration {
metrics.IncreaseSkippedAssets()
e.metrics.IncreaseSkippedAssets()
return cachedAsset.msg
}
}
@ -229,8 +231,8 @@ func (e *AssetsCache) handleCSS(sessionID uint64, baseURL string, css string) st
start := time.Now()
res := e.getRewrittenCSS(sessionID, baseURL, css)
duration := time.Now().Sub(start).Milliseconds()
metrics.RecordAssetSize(float64(len(res)))
metrics.RecordProcessAssetDuration(float64(duration))
e.metrics.RecordAssetSize(float64(len(res)))
e.metrics.RecordProcessAssetDuration(float64(duration))
// Save asset to cache if we spent more than threshold
if duration > e.cfg.CacheThreshold {
e.mutex.Lock()
@ -239,7 +241,7 @@ func (e *AssetsCache) handleCSS(sessionID uint64, baseURL string, css string) st
ts: time.Now(),
}
e.mutex.Unlock()
metrics.IncreaseCachedAssets()
e.metrics.IncreaseCachedAssets()
}
// Return rewritten asset
return res

View file

@ -18,7 +18,7 @@ import (
config "openreplay/backend/internal/config/storage"
"openreplay/backend/pkg/logger"
"openreplay/backend/pkg/messages"
metrics "openreplay/backend/pkg/metrics/storage"
storageMetrics "openreplay/backend/pkg/metrics/storage"
"openreplay/backend/pkg/objectstorage"
"openreplay/backend/pkg/pool"
)
@ -77,9 +77,10 @@ type Storage struct {
splitTime uint64
processorPool pool.WorkerPool
uploaderPool pool.WorkerPool
metrics storageMetrics.Storage
}
func New(cfg *config.Config, log logger.Logger, objStorage objectstorage.ObjectStorage) (*Storage, error) {
func New(cfg *config.Config, log logger.Logger, objStorage objectstorage.ObjectStorage, metrics storageMetrics.Storage) (*Storage, error) {
switch {
case cfg == nil:
return nil, fmt.Errorf("config is empty")
@ -92,6 +93,7 @@ func New(cfg *config.Config, log logger.Logger, objStorage objectstorage.ObjectS
objStorage: objStorage,
startBytes: make([]byte, cfg.FileSplitSize),
splitTime: parseSplitTime(cfg.FileSplitTime),
metrics: metrics,
}
s.processorPool = pool.NewPool(1, 1, s.doCompression)
s.uploaderPool = pool.NewPool(1, 1, s.uploadSession)
@ -141,7 +143,7 @@ func (s *Storage) Process(ctx context.Context, msg *messages.SessionEnd) (err er
if err != nil {
if strings.Contains(err.Error(), "big file") {
s.log.Warn(ctx, "can't process session: %s", err)
metrics.IncreaseStorageTotalSkippedSessions()
s.metrics.IncreaseStorageTotalSkippedSessions()
return nil
}
return err
@ -159,8 +161,8 @@ func (s *Storage) prepareSession(path string, tp FileType, task *Task) error {
return err
}
metrics.RecordSessionReadDuration(float64(time.Now().Sub(startRead).Milliseconds()), tp.String())
metrics.RecordSessionSize(float64(len(mob)), tp.String())
s.metrics.RecordSessionReadDuration(float64(time.Now().Sub(startRead).Milliseconds()), tp.String())
s.metrics.RecordSessionSize(float64(len(mob)), tp.String())
// Put opened session file into task struct
task.SetMob(mob, index, tp)
@ -174,7 +176,7 @@ func (s *Storage) openSession(ctx context.Context, filePath string, tp FileType)
// Check file size before download into memory
info, err := os.Stat(filePath)
if err == nil && info.Size() > s.cfg.MaxFileSize {
metrics.RecordSkippedSessionSize(float64(info.Size()), tp.String())
s.metrics.RecordSkippedSessionSize(float64(info.Size()), tp.String())
return nil, -1, fmt.Errorf("big file, size: %d", info.Size())
}
// Read file into memory
@ -190,7 +192,7 @@ func (s *Storage) openSession(ctx context.Context, filePath string, tp FileType)
if err != nil {
return nil, -1, fmt.Errorf("can't sort session, err: %s", err)
}
metrics.RecordSessionSortDuration(float64(time.Now().Sub(start).Milliseconds()), tp.String())
s.metrics.RecordSessionSortDuration(float64(time.Now().Sub(start).Milliseconds()), tp.String())
return mob, index, nil
}
@ -234,12 +236,12 @@ func (s *Storage) packSession(task *Task, tp FileType) {
// Compression
start := time.Now()
data := s.compress(task.ctx, mob, task.compression)
metrics.RecordSessionCompressDuration(float64(time.Now().Sub(start).Milliseconds()), tp.String())
s.metrics.RecordSessionCompressDuration(float64(time.Now().Sub(start).Milliseconds()), tp.String())
// Encryption
start = time.Now()
result := s.encryptSession(task.ctx, data.Bytes(), task.key)
metrics.RecordSessionEncryptionDuration(float64(time.Now().Sub(start).Milliseconds()), tp.String())
s.metrics.RecordSessionEncryptionDuration(float64(time.Now().Sub(start).Milliseconds()), tp.String())
if tp == DOM {
task.doms = bytes.NewBuffer(result)
@ -296,8 +298,8 @@ func (s *Storage) packSession(task *Task, tp FileType) {
wg.Wait()
// Record metrics
metrics.RecordSessionEncryptionDuration(float64(firstEncrypt+secondEncrypt), tp.String())
metrics.RecordSessionCompressDuration(float64(firstPart+secondPart), tp.String())
s.metrics.RecordSessionEncryptionDuration(float64(firstEncrypt+secondEncrypt), tp.String())
s.metrics.RecordSessionCompressDuration(float64(firstPart+secondPart), tp.String())
}
func (s *Storage) encryptSession(ctx context.Context, data []byte, encryptionKey string) []byte {
@ -382,7 +384,7 @@ func (s *Storage) uploadSession(payload interface{}) {
go func() {
if task.doms != nil {
// Record compression ratio
metrics.RecordSessionCompressionRatio(task.domsRawSize/float64(task.doms.Len()), DOM.String())
s.metrics.RecordSessionCompressionRatio(task.domsRawSize/float64(task.doms.Len()), DOM.String())
// Upload session to s3
start := time.Now()
if err := s.objStorage.Upload(task.doms, task.id+string(DOM)+"s", "application/octet-stream", objectstorage.NoContentEncoding, task.compression); err != nil {
@ -395,7 +397,7 @@ func (s *Storage) uploadSession(payload interface{}) {
go func() {
if task.dome != nil {
// Record compression ratio
metrics.RecordSessionCompressionRatio(task.domeRawSize/float64(task.dome.Len()), DOM.String())
s.metrics.RecordSessionCompressionRatio(task.domeRawSize/float64(task.dome.Len()), DOM.String())
// Upload session to s3
start := time.Now()
if err := s.objStorage.Upload(task.dome, task.id+string(DOM)+"e", "application/octet-stream", objectstorage.NoContentEncoding, task.compression); err != nil {
@ -408,7 +410,7 @@ func (s *Storage) uploadSession(payload interface{}) {
go func() {
if task.dev != nil {
// Record compression ratio
metrics.RecordSessionCompressionRatio(task.devRawSize/float64(task.dev.Len()), DEV.String())
s.metrics.RecordSessionCompressionRatio(task.devRawSize/float64(task.dev.Len()), DEV.String())
// Upload session to s3
start := time.Now()
if err := s.objStorage.Upload(task.dev, task.id+string(DEV), "application/octet-stream", objectstorage.NoContentEncoding, task.compression); err != nil {
@ -419,9 +421,9 @@ func (s *Storage) uploadSession(payload interface{}) {
wg.Done()
}()
wg.Wait()
metrics.RecordSessionUploadDuration(float64(uploadDoms+uploadDome), DOM.String())
metrics.RecordSessionUploadDuration(float64(uploadDev), DEV.String())
metrics.IncreaseStorageTotalSessions()
s.metrics.RecordSessionUploadDuration(float64(uploadDoms+uploadDome), DOM.String())
s.metrics.RecordSessionUploadDuration(float64(uploadDev), DEV.String())
s.metrics.IncreaseStorageTotalSessions()
}
func (s *Storage) doCompression(payload interface{}) {

View file

@ -3,6 +3,7 @@ package analytics
import (
"github.com/go-playground/validator/v10"
"openreplay/backend/pkg/analytics/charts"
"openreplay/backend/pkg/metrics/database"
"time"
"openreplay/backend/internal/config/analytics"
@ -26,9 +27,9 @@ type ServicesBuilder struct {
ChartsAPI api.Handlers
}
func NewServiceBuilder(log logger.Logger, cfg *analytics.Config, webMetrics web.Web, pgconn pool.Pool) (*ServicesBuilder, error) {
func NewServiceBuilder(log logger.Logger, cfg *analytics.Config, webMetrics web.Web, dbMetrics database.Database, pgconn pool.Pool) (*ServicesBuilder, error) {
responser := api.NewResponser(webMetrics)
audiTrail, err := tracer.NewTracer(log, pgconn)
audiTrail, err := tracer.NewTracer(log, pgconn, dbMetrics)
if err != nil {
return nil, err
}

View file

@ -18,13 +18,14 @@ type Bulk interface {
}
type bulkImpl struct {
conn driver.Conn
table string
query string
values [][]interface{}
conn driver.Conn
metrics database.Database
table string
query string
values [][]interface{}
}
func NewBulk(conn driver.Conn, table, query string) (Bulk, error) {
func NewBulk(conn driver.Conn, metrics database.Database, table, query string) (Bulk, error) {
switch {
case conn == nil:
return nil, errors.New("clickhouse connection is empty")
@ -34,10 +35,11 @@ func NewBulk(conn driver.Conn, table, query string) (Bulk, error) {
return nil, errors.New("query is empty")
}
return &bulkImpl{
conn: conn,
table: table,
query: query,
values: make([][]interface{}, 0),
conn: conn,
metrics: metrics,
table: table,
query: query,
values: make([][]interface{}, 0),
}, nil
}
@ -60,8 +62,8 @@ func (b *bulkImpl) Send() error {
}
err = batch.Send()
// Save bulk metrics
database.RecordBulkElements(float64(len(b.values)), "ch", b.table)
database.RecordBulkInsertDuration(float64(time.Now().Sub(start).Milliseconds()), "ch", b.table)
b.metrics.RecordBulkElements(float64(len(b.values)), "ch", b.table)
b.metrics.RecordBulkInsertDuration(float64(time.Now().Sub(start).Milliseconds()), "ch", b.table)
// Prepare values slice for a new data
b.values = make([][]interface{}, 0)
return err

View file

@ -18,6 +18,7 @@ import (
"openreplay/backend/pkg/db/types"
"openreplay/backend/pkg/hashid"
"openreplay/backend/pkg/messages"
"openreplay/backend/pkg/metrics/database"
"openreplay/backend/pkg/sessions"
"openreplay/backend/pkg/url"
)
@ -57,13 +58,14 @@ func NewTask() *task {
type connectorImpl struct {
conn driver.Conn
metrics database.Database
batches map[string]Bulk //driver.Batch
workerTask chan *task
done chan struct{}
finished chan struct{}
}
func NewConnector(cfg common.Clickhouse) Connector {
func NewConnector(cfg common.Clickhouse, metrics database.Database) Connector {
conn, err := clickhouse.Open(&clickhouse.Options{
Addr: []string{cfg.GetTrimmedURL()},
Auth: clickhouse.Auth{
@ -84,6 +86,7 @@ func NewConnector(cfg common.Clickhouse) Connector {
c := &connectorImpl{
conn: conn,
metrics: metrics,
batches: make(map[string]Bulk, 20),
workerTask: make(chan *task, 1),
done: make(chan struct{}),
@ -94,7 +97,7 @@ func NewConnector(cfg common.Clickhouse) Connector {
}
func (c *connectorImpl) newBatch(name, query string) error {
batch, err := NewBulk(c.conn, name, query)
batch, err := NewBulk(c.conn, c.metrics, name, query)
if err != nil {
return fmt.Errorf("can't create new batch: %s", err)
}
@ -103,25 +106,25 @@ func (c *connectorImpl) newBatch(name, query string) error {
}
var batches = map[string]string{
"sessions": "INSERT INTO experimental.sessions (session_id, project_id, user_id, user_uuid, user_os, user_os_version, user_device, user_device_type, user_country, user_state, user_city, datetime, duration, pages_count, events_count, errors_count, issue_score, referrer, issue_types, tracker_version, user_browser, user_browser_version, metadata_1, metadata_2, metadata_3, metadata_4, metadata_5, metadata_6, metadata_7, metadata_8, metadata_9, metadata_10, timezone, utm_source, utm_medium, utm_campaign) VALUES (?, ?, SUBSTR(?, 1, 8000), ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, SUBSTR(?, 1, 8000), ?, ?, ?, ?, SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), ?, ?, ?, ?)",
"sessions": "INSERT INTO experimental.sessions (session_id, project_id, user_id, user_uuid, user_os, user_os_version, user_device, user_device_type, user_country, user_state, user_city, datetime, duration, pages_count, events_count, errors_count, issue_score, referrer, issue_types, tracker_version, user_browser, user_browser_version, metadata_1, metadata_2, metadata_3, metadata_4, metadata_5, metadata_6, metadata_7, metadata_8, metadata_9, metadata_10, platform, timezone, utm_source, utm_medium, utm_campaign) VALUES (?, ?, SUBSTR(?, 1, 8000), ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, SUBSTR(?, 1, 8000), ?, ?, ?, ?, SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), ?, ?, ?, ?, ?)",
"autocompletes": "INSERT INTO experimental.autocomplete (project_id, type, value) VALUES (?, ?, SUBSTR(?, 1, 8000))",
"pages": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$current_url", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"clicks": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$current_url", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"inputs": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$duration_s", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"errors": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"performance": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"requests": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$duration_s", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"custom": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"graphql": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"issuesEvents": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", issue_type, issue_id, "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"pages": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$device", "$os_version", "$current_url", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"clicks": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$device", "$os_version", "$current_url", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"inputs": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$device", "$os_version", "$duration_s", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"errors": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$device", "$os_version", error_id, "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"performance": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$device", "$os_version", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"requests": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$device", "$os_version", "$duration_s", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"custom": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$device", "$os_version", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"graphql": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$device", "$os_version", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"issuesEvents": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$device", "$os_version", issue_type, issue_id, "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"issues": "INSERT INTO experimental.issues (project_id, issue_id, type, context_string) VALUES (?, ?, ?, ?)",
"mobile_sessions": "INSERT INTO experimental.sessions (session_id, project_id, user_id, user_uuid, user_os, user_os_version, user_device, user_device_type, user_country, user_state, user_city, datetime, duration, pages_count, events_count, errors_count, issue_score, referrer, issue_types, tracker_version, user_browser, user_browser_version, metadata_1, metadata_2, metadata_3, metadata_4, metadata_5, metadata_6, metadata_7, metadata_8, metadata_9, metadata_10, platform, timezone) VALUES (?, ?, SUBSTR(?, 1, 8000), ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, SUBSTR(?, 1, 8000), ?, ?, ?, ?, SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), SUBSTR(?, 1, 8000), ?, ?)",
"mobile_custom": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"mobile_clicks": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"mobile_swipes": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"mobile_inputs": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"mobile_requests": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"mobile_crashes": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"mobile_custom": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$device", "$os_version", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"mobile_clicks": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$device", "$os_version", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"mobile_swipes": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$device", "$os_version", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"mobile_inputs": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$device", "$os_version", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"mobile_requests": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$device", "$os_version", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
"mobile_crashes": `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$device", "$os_version", "$properties") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
}
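For reference, here is a minimal, self-contained sketch of how one row could be appended against a statement of this shape through the clickhouse-go v2 driver. The address, column types, and sample values are illustrative assumptions, not taken from this PR.

package main

import (
    "context"
    "encoding/json"
    "time"

    "github.com/ClickHouse/clickhouse-go/v2"
)

func main() {
    conn, err := clickhouse.Open(&clickhouse.Options{Addr: []string{"localhost:9000"}})
    if err != nil {
        panic(err)
    }
    ctx := context.Background()
    // Same column list as the "inputs" statement above; in a batch insert the
    // driver derives the placeholders from the column list.
    batch, err := conn.PrepareBatch(ctx, `INSERT INTO product_analytics.events (session_id, project_id, event_id, "$event_name", created_at, "$time", distinct_id, "$auto_captured", "$device", "$os_version", "$duration_s", "$properties")`)
    if err != nil {
        panic(err)
    }
    props, _ := json.Marshal(map[string]interface{}{"label": "email"})
    now := time.Now()
    // Argument types are assumptions about the table schema.
    if err := batch.Append(
        uint64(1), uint16(42), "evt-1", "INPUT",
        now, now.Unix(), "user-uuid", true,
        "desktop", "14.5", uint16(120), string(props),
    ); err != nil {
        panic(err)
    }
    if err := batch.Send(); err != nil {
        panic(err)
    }
}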
func (c *connectorImpl) Prepare() error {
@ -212,6 +215,7 @@ func (c *connectorImpl) InsertWebSession(session *sessions.Session) error {
session.Metadata8,
session.Metadata9,
session.Metadata10,
"web",
session.Timezone,
session.UtmSource,
session.UtmMedium,
@ -243,8 +247,10 @@ func (c *connectorImpl) InsertWebInputDuration(session *sessions.Session, msg *m
return nil
}
jsonString, err := json.Marshal(map[string]interface{}{
"label": msg.Label,
"hesitation_time": nullableUint32(uint32(msg.HesitationTime)),
"label": msg.Label,
"hesitation_time": nullableUint32(uint32(msg.HesitationTime)),
"user_device": session.UserDevice,
"user_device_type": session.UserDeviceType,
})
if err != nil {
return fmt.Errorf("can't marshal input event: %s", err)
@ -259,6 +265,8 @@ func (c *connectorImpl) InsertWebInputDuration(session *sessions.Session, msg *m
eventTime.Unix(),
session.UserUUID,
true,
session.Platform,
session.UserOSVersion,
nullableUint16(uint16(msg.InputDuration)),
jsonString,
); err != nil {
@ -275,12 +283,14 @@ func (c *connectorImpl) InsertMouseThrashing(session *sessions.Session, msg *mes
return fmt.Errorf("can't extract url parts: %s", err)
}
jsonString, err := json.Marshal(map[string]interface{}{
"issue_id": issueID,
"issue_type": "mouse_thrashing",
"url": cropString(msg.Url),
"url_host": host,
"url_path": path,
"url_hostpath": hostpath,
"issue_id": issueID,
"issue_type": "mouse_thrashing",
"url": cropString(msg.Url),
"url_host": host,
"url_path": path,
"url_hostpath": hostpath,
"user_device": session.UserDevice,
"user_device_type": session.UserDeviceType,
})
if err != nil {
return fmt.Errorf("can't marshal issue event: %s", err)
@ -295,6 +305,8 @@ func (c *connectorImpl) InsertMouseThrashing(session *sessions.Session, msg *mes
eventTime.Unix(),
session.UserUUID,
true,
session.Platform,
session.UserOSVersion,
"mouse_thrashing",
issueID,
jsonString,
@ -327,12 +339,14 @@ func (c *connectorImpl) InsertIssue(session *sessions.Session, msg *messages.Iss
return fmt.Errorf("can't extract url parts: %s", err)
}
jsonString, err := json.Marshal(map[string]interface{}{
"issue_id": issueID,
"issue_type": msg.Type,
"url": cropString(msg.Url),
"url_host": host,
"url_path": path,
"url_hostpath": hostpath,
"issue_id": issueID,
"issue_type": msg.Type,
"url": cropString(msg.Url),
"url_host": host,
"url_path": path,
"url_hostpath": hostpath,
"user_device": session.UserDevice,
"user_device_type": session.UserDeviceType,
})
if err != nil {
return fmt.Errorf("can't marshal issue event: %s", err)
@ -347,6 +361,8 @@ func (c *connectorImpl) InsertIssue(session *sessions.Session, msg *messages.Iss
eventTime.Unix(),
session.UserUUID,
true,
session.Platform,
session.UserOSVersion,
msg.Type,
issueID,
jsonString,
@ -418,6 +434,8 @@ func (c *connectorImpl) InsertWebPageEvent(session *sessions.Session, msg *messa
"dom_building_time": domBuildingTime,
"dom_content_loaded_event_time": domContentLoadedEventTime,
"load_event_time": loadEventTime,
"user_device": session.UserDevice,
"user_device_type": session.UserDeviceType,
})
if err != nil {
return fmt.Errorf("can't marshal page event: %s", err)
@ -432,6 +450,8 @@ func (c *connectorImpl) InsertWebPageEvent(session *sessions.Session, msg *messa
eventTime.Unix(),
session.UserUUID,
true,
session.Platform,
session.UserOSVersion,
cropString(msg.URL),
jsonString,
); err != nil {
@ -465,15 +485,17 @@ func (c *connectorImpl) InsertWebClickEvent(session *sessions.Session, msg *mess
return fmt.Errorf("can't extract url parts: %s", err)
}
jsonString, err := json.Marshal(map[string]interface{}{
"label": msg.Label,
"hesitation_time": nullableUint32(uint32(msg.HesitationTime)),
"selector": msg.Selector,
"normalized_x": nX,
"normalized_y": nY,
"url": cropString(msg.Url),
"url_host": host,
"url_path": path,
"url_hostpath": hostpath,
"label": msg.Label,
"hesitation_time": nullableUint32(uint32(msg.HesitationTime)),
"selector": msg.Selector,
"normalized_x": nX,
"normalized_y": nY,
"url": cropString(msg.Url),
"url_host": host,
"url_path": path,
"url_hostpath": hostpath,
"user_device": session.UserDevice,
"user_device_type": session.UserDeviceType,
})
if err != nil {
return fmt.Errorf("can't marshal click event: %s", err)
@ -488,6 +510,8 @@ func (c *connectorImpl) InsertWebClickEvent(session *sessions.Session, msg *mess
eventTime.Unix(),
session.UserUUID,
true,
session.Platform,
session.UserOSVersion,
cropString(msg.Url),
jsonString,
); err != nil {
@ -498,11 +522,6 @@ func (c *connectorImpl) InsertWebClickEvent(session *sessions.Session, msg *mess
}
func (c *connectorImpl) InsertWebErrorEvent(session *sessions.Session, msg *types.ErrorEvent) error {
// Check error source before insert to avoid panic from clickhouse lib
switch msg.Source {
case "js_exception", "bugsnag", "cloudwatch", "datadog", "elasticsearch", "newrelic", "rollbar", "sentry", "stackdriver", "sumologic":
@ -511,12 +530,11 @@ func (c *connectorImpl) InsertWebErrorEvent(session *sessions.Session, msg *type
}
msgID, _ := msg.ID(session.ProjectID)
jsonString, err := json.Marshal(map[string]interface{}{
"source": msg.Source,
"name": nullableString(msg.Name),
"message": msg.Message,
"error_id": msgID,
"error_tags_keys": keys,
"error_tags_values": values,
"source": msg.Source,
"name": nullableString(msg.Name),
"message": msg.Message,
"user_device": session.UserDevice,
"user_device_type": session.UserDeviceType,
})
if err != nil {
return fmt.Errorf("can't marshal error event: %s", err)
@ -531,6 +549,9 @@ func (c *connectorImpl) InsertWebErrorEvent(session *sessions.Session, msg *type
eventTime.Unix(),
session.UserUUID,
true,
session.Platform,
session.UserOSVersion,
msgID,
jsonString,
); err != nil {
c.checkError("errors", err)
@ -562,6 +583,8 @@ func (c *connectorImpl) InsertWebPerformanceTrackAggr(session *sessions.Session,
"min_used_js_heap_size": msg.MinUsedJSHeapSize,
"avg_used_js_heap_size": msg.AvgUsedJSHeapSize,
"max_used_js_heap_size": msg.MaxUsedJSHeapSize,
"user_device": session.UserDevice,
"user_device_type": session.UserDeviceType,
})
if err != nil {
return fmt.Errorf("can't marshal performance event: %s", err)
@ -576,6 +599,8 @@ func (c *connectorImpl) InsertWebPerformanceTrackAggr(session *sessions.Session,
eventTime.Unix(),
session.UserUUID,
true,
session.Platform,
session.UserOSVersion,
jsonString,
); err != nil {
c.checkError("performance", err)
@ -599,16 +624,18 @@ func (c *connectorImpl) InsertRequest(session *sessions.Session, msg *messages.N
return fmt.Errorf("can't extract url parts: %s", err)
}
jsonString, err := json.Marshal(map[string]interface{}{
"request_body": request,
"response_body": response,
"status": uint16(msg.Status),
"method": url.EnsureMethod(msg.Method),
"success": msg.Status < 400,
"transfer_size": uint32(msg.TransferredBodySize),
"url": cropString(msg.URL),
"url_host": host,
"url_path": path,
"url_hostpath": hostpath,
"request_body": request,
"response_body": response,
"status": uint16(msg.Status),
"method": url.EnsureMethod(msg.Method),
"success": msg.Status < 400,
"transfer_size": uint32(msg.TransferredBodySize),
"url": cropString(msg.URL),
"url_host": host,
"url_path": path,
"url_hostpath": hostpath,
"user_device": session.UserDevice,
"user_device_type": session.UserDeviceType,
})
if err != nil {
return fmt.Errorf("can't marshal request event: %s", err)
@ -623,6 +650,8 @@ func (c *connectorImpl) InsertRequest(session *sessions.Session, msg *messages.N
eventTime.Unix(),
session.UserUUID,
true,
session.Platform,
session.UserOSVersion,
nullableUint16(uint16(msg.Duration)),
jsonString,
); err != nil {
@ -634,8 +663,10 @@ func (c *connectorImpl) InsertRequest(session *sessions.Session, msg *messages.N
func (c *connectorImpl) InsertCustom(session *sessions.Session, msg *messages.CustomEvent) error {
jsonString, err := json.Marshal(map[string]interface{}{
"name": msg.Name,
"payload": msg.Payload,
"name": msg.Name,
"payload": msg.Payload,
"user_device": session.UserDevice,
"user_device_type": session.UserDeviceType,
})
if err != nil {
return fmt.Errorf("can't marshal custom event: %s", err)
@ -650,6 +681,8 @@ func (c *connectorImpl) InsertCustom(session *sessions.Session, msg *messages.Cu
eventTime.Unix(),
session.UserUUID,
true,
session.Platform,
session.UserOSVersion,
jsonString,
); err != nil {
c.checkError("custom", err)
@ -660,9 +693,11 @@ func (c *connectorImpl) InsertCustom(session *sessions.Session, msg *messages.Cu
func (c *connectorImpl) InsertGraphQL(session *sessions.Session, msg *messages.GraphQL) error {
jsonString, err := json.Marshal(map[string]interface{}{
"name": msg.OperationName,
"request_body": nullableString(msg.Variables),
"response_body": nullableString(msg.Response),
"name": msg.OperationName,
"request_body": nullableString(msg.Variables),
"response_body": nullableString(msg.Response),
"user_device": session.UserDevice,
"user_device_type": session.UserDeviceType,
})
if err != nil {
return fmt.Errorf("can't marshal graphql event: %s", err)
@ -677,6 +712,8 @@ func (c *connectorImpl) InsertGraphQL(session *sessions.Session, msg *messages.G
eventTime.Unix(),
session.UserUUID,
true,
session.Platform,
session.UserOSVersion,
jsonString,
); err != nil {
c.checkError("graphql", err)
@ -724,7 +761,7 @@ func (c *connectorImpl) InsertMobileSession(session *sessions.Session) error {
session.Metadata8,
session.Metadata9,
session.Metadata10,
"ios",
"mobile",
session.Timezone,
); err != nil {
c.checkError("mobile_sessions", err)
@ -735,8 +772,10 @@ func (c *connectorImpl) InsertMobileSession(session *sessions.Session) error {
func (c *connectorImpl) InsertMobileCustom(session *sessions.Session, msg *messages.MobileEvent) error {
jsonString, err := json.Marshal(map[string]interface{}{
"name": msg.Name,
"payload": msg.Payload,
"name": msg.Name,
"payload": msg.Payload,
"user_device": session.UserDevice,
"user_device_type": session.UserDeviceType,
})
if err != nil {
return fmt.Errorf("can't marshal mobile custom event: %s", err)
@ -751,6 +790,8 @@ func (c *connectorImpl) InsertMobileCustom(session *sessions.Session, msg *messa
eventTime.Unix(),
session.UserUUID,
true,
session.Platform,
session.UserOSVersion,
jsonString,
); err != nil {
c.checkError("mobile_custom", err)
@ -764,7 +805,9 @@ func (c *connectorImpl) InsertMobileClick(session *sessions.Session, msg *messag
return nil
}
jsonString, err := json.Marshal(map[string]interface{}{
"label": msg.Label,
"label": msg.Label,
"user_device": session.UserDevice,
"user_device_type": session.UserDeviceType,
})
if err != nil {
return fmt.Errorf("can't marshal mobile clicks event: %s", err)
@ -779,6 +822,8 @@ func (c *connectorImpl) InsertMobileClick(session *sessions.Session, msg *messag
eventTime.Unix(),
session.UserUUID,
true,
session.Platform,
session.UserOSVersion,
jsonString,
); err != nil {
c.checkError("mobile_clicks", err)
@ -792,8 +837,10 @@ func (c *connectorImpl) InsertMobileSwipe(session *sessions.Session, msg *messag
return nil
}
jsonString, err := json.Marshal(map[string]interface{}{
"label": msg.Label,
"direction": nullableString(msg.Direction),
"label": msg.Label,
"direction": nullableString(msg.Direction),
"user_device": session.UserDevice,
"user_device_type": session.UserDeviceType,
})
if err != nil {
return fmt.Errorf("can't marshal mobile swipe event: %s", err)
@ -808,6 +855,8 @@ func (c *connectorImpl) InsertMobileSwipe(session *sessions.Session, msg *messag
eventTime.Unix(),
session.UserUUID,
true,
session.Platform,
session.UserOSVersion,
jsonString,
); err != nil {
c.checkError("mobile_swipes", err)
@ -821,7 +870,9 @@ func (c *connectorImpl) InsertMobileInput(session *sessions.Session, msg *messag
return nil
}
jsonString, err := json.Marshal(map[string]interface{}{
"label": msg.Label,
"label": msg.Label,
"user_device": session.UserDevice,
"user_device_type": session.UserDeviceType,
})
if err != nil {
return fmt.Errorf("can't marshal mobile input event: %s", err)
@ -836,6 +887,8 @@ func (c *connectorImpl) InsertMobileInput(session *sessions.Session, msg *messag
eventTime.Unix(),
session.UserUUID,
true,
session.Platform,
session.UserOSVersion,
jsonString,
); err != nil {
c.checkError("mobile_inputs", err)
@ -855,13 +908,15 @@ func (c *connectorImpl) InsertMobileRequest(session *sessions.Session, msg *mess
response = &msg.Response
}
jsonString, err := json.Marshal(map[string]interface{}{
"url": cropString(msg.URL),
"request_body": request,
"response_body": response,
"status": uint16(msg.Status),
"method": url.EnsureMethod(msg.Method),
"duration": uint16(msg.Duration),
"success": msg.Status < 400,
"url": cropString(msg.URL),
"request_body": request,
"response_body": response,
"status": uint16(msg.Status),
"method": url.EnsureMethod(msg.Method),
"duration": uint16(msg.Duration),
"success": msg.Status < 400,
"user_device": session.UserDevice,
"user_device_type": session.UserDeviceType,
})
if err != nil {
return fmt.Errorf("can't marshal mobile request event: %s", err)
@ -876,6 +931,8 @@ func (c *connectorImpl) InsertMobileRequest(session *sessions.Session, msg *mess
eventTime.Unix(),
session.UserUUID,
true,
session.Platform,
session.UserOSVersion,
jsonString,
); err != nil {
c.checkError("mobile_requests", err)
@ -886,9 +943,11 @@ func (c *connectorImpl) InsertMobileRequest(session *sessions.Session, msg *mess
func (c *connectorImpl) InsertMobileCrash(session *sessions.Session, msg *messages.MobileCrash) error {
jsonString, err := json.Marshal(map[string]interface{}{
"name": msg.Name,
"reason": msg.Reason,
"stacktrace": msg.Stacktrace,
"name": msg.Name,
"reason": msg.Reason,
"stacktrace": msg.Stacktrace,
"user_device": session.UserDevice,
"user_device_type": session.UserDeviceType,
})
if err != nil {
return fmt.Errorf("can't marshal mobile crash event: %s", err)
@ -903,6 +962,8 @@ func (c *connectorImpl) InsertMobileCrash(session *sessions.Session, msg *messag
eventTime.Unix(),
session.UserUUID,
true,
session.Platform,
session.UserOSVersion,
jsonString,
); err != nil {
c.checkError("mobile_crashes", err)

View file

@ -52,6 +52,7 @@ func NewBatchesTask(size int) *batchesTask {
type BatchSet struct {
log logger.Logger
c pool.Pool
metrics database.Database
ctx context.Context
batches map[uint64]*SessionBatch
workerTask chan *batchesTask
@ -59,10 +60,11 @@ type BatchSet struct {
finished chan struct{}
}
func NewBatchSet(log logger.Logger, c pool.Pool, metrics database.Database) *BatchSet {
bs := &BatchSet{
log: log,
c: c,
metrics: metrics,
ctx: context.Background(),
batches: make(map[uint64]*SessionBatch),
workerTask: make(chan *batchesTask, 1),
@ -104,7 +106,7 @@ func (conn *BatchSet) Stop() {
func (conn *BatchSet) sendBatches(t *batchesTask) {
for _, batch := range t.batches {
// Record batch size
conn.metrics.RecordBatchElements(float64(batch.Len()))
start := time.Now()
@ -120,7 +122,7 @@ func (conn *BatchSet) sendBatches(t *batchesTask) {
}
}
br.Close() // returns err
conn.metrics.RecordBatchInsertDuration(float64(time.Now().Sub(start).Milliseconds()))
}
}
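The change in this file is the injection pattern: package-level metric functions become methods on an injected database.Database value. Reduced to its core (all names below are illustrative, not the repo's API), the payoff is that call sites stay identical while the metrics sink becomes swappable:

package main

import "fmt"

type Database interface {
    RecordBatchElements(n float64)
    RecordBatchInsertDuration(ms float64)
}

type noopDB struct{}

func (noopDB) RecordBatchElements(float64)       {}
func (noopDB) RecordBatchInsertDuration(float64) {}

type stdoutDB struct{}

func (stdoutDB) RecordBatchElements(n float64)        { fmt.Println("batch elements:", n) }
func (stdoutDB) RecordBatchInsertDuration(ms float64) { fmt.Println("insert ms:", ms) }

type BatchSet struct{ metrics Database }

func NewBatchSet(metrics Database) *BatchSet { return &BatchSet{metrics: metrics} }

func (b *BatchSet) send(size int) {
    b.metrics.RecordBatchElements(float64(size)) // same call site, swappable sink
}

func main() {
    NewBatchSet(stdoutDB{}).send(128)
    NewBatchSet(noopDB{}).send(128) // silent with the no-op implementation
}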

View file

@ -24,6 +24,7 @@ type Bulk interface {
type bulkImpl struct {
conn pool.Pool
metrics database.Database
table string
columns string
template string
@ -75,12 +76,12 @@ func (b *bulkImpl) send() error {
return fmt.Errorf("send bulk err: %s", err)
}
// Save bulk metrics
b.metrics.RecordBulkElements(float64(size), "pg", b.table)
b.metrics.RecordBulkInsertDuration(float64(time.Now().Sub(start).Milliseconds()), "pg", b.table)
return nil
}
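The "($%d, $%d, $%d)" strings handed to NewBulk below are row templates: each appended row stamps consecutive parameter indices into the $%d slots before the rows are joined into one multi-row INSERT. The diff does not show bulkImpl's expansion code, so the standalone sketch below is an assumption about the mechanism, not a copy of it.

package main

import (
    "fmt"
    "strings"
)

func buildQuery(table, columns, template string, rows, argsPerRow int) string {
    values := make([]string, 0, rows)
    idx := 1
    for r := 0; r < rows; r++ {
        // Fill the $%d slots with consecutive parameter numbers: $1, $2, ...
        nums := make([]interface{}, argsPerRow)
        for i := range nums {
            nums[i] = idx
            idx++
        }
        values = append(values, fmt.Sprintf(template, nums...))
    }
    return fmt.Sprintf("INSERT INTO %s %s VALUES %s", table, columns, strings.Join(values, ", "))
}

func main() {
    fmt.Println(buildQuery("autocomplete", "(value, type, project_id)", "($%d, $%d, $%d)", 2, 3))
    // INSERT INTO autocomplete (value, type, project_id) VALUES ($1, $2, $3), ($4, $5, $6)
}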
func NewBulk(conn pool.Pool, metrics database.Database, table, columns, template string, setSize, sizeLimit int) (Bulk, error) {
switch {
case conn == nil:
return nil, errors.New("db conn is empty")
@ -97,6 +98,7 @@ func NewBulk(conn pool.Pool, table, columns, template string, setSize, sizeLimit
}
return &bulkImpl{
conn: conn,
metrics: metrics,
table: table,
columns: columns,
template: template,

View file

@ -2,6 +2,7 @@ package postgres
import (
"context"
"openreplay/backend/pkg/metrics/database"
"openreplay/backend/pkg/db/postgres/pool"
"openreplay/backend/pkg/logger"
@ -21,6 +22,7 @@ type BulkSet struct {
log logger.Logger
c pool.Pool
ctx context.Context
metrics database.Database
autocompletes Bulk
requests Bulk
customEvents Bulk
@ -43,10 +45,11 @@ type BulkSet struct {
finished chan struct{}
}
func NewBulkSet(log logger.Logger, c pool.Pool, metrics database.Database) *BulkSet {
bs := &BulkSet{
log: log,
c: c,
metrics: metrics,
ctx: context.Background(),
workerTask: make(chan *bulksTask, 1),
done: make(chan struct{}),
@ -100,7 +103,7 @@ func (conn *BulkSet) Get(name string) Bulk {
func (conn *BulkSet) initBulks() {
var err error
conn.autocompletes, err = NewBulk(conn.c, conn.metrics,
"autocomplete",
"(value, type, project_id)",
"($%d, $%d, $%d)",
@ -108,7 +111,7 @@ func (conn *BulkSet) initBulks() {
if err != nil {
conn.log.Fatal(conn.ctx, "can't create autocomplete bulk: %s", err)
}
conn.requests, err = NewBulk(conn.c, conn.metrics,
"events_common.requests",
"(session_id, timestamp, seq_index, url, duration, success)",
"($%d, $%d, $%d, LEFT($%d, 8000), $%d, $%d)",
@ -116,7 +119,7 @@ func (conn *BulkSet) initBulks() {
if err != nil {
conn.log.Fatal(conn.ctx, "can't create requests bulk: %s", err)
}
conn.customEvents, err = NewBulk(conn.c, conn.metrics,
"events_common.customs",
"(session_id, timestamp, seq_index, name, payload)",
"($%d, $%d, $%d, LEFT($%d, 2000), $%d)",
@ -124,7 +127,7 @@ func (conn *BulkSet) initBulks() {
if err != nil {
conn.log.Fatal(conn.ctx, "can't create customEvents bulk: %s", err)
}
conn.webPageEvents, err = NewBulk(conn.c, conn.metrics,
"events.pages",
"(session_id, message_id, timestamp, referrer, base_referrer, host, path, query, dom_content_loaded_time, "+
"load_time, response_end, first_paint_time, first_contentful_paint_time, speed_index, visually_complete, "+
@ -136,7 +139,7 @@ func (conn *BulkSet) initBulks() {
if err != nil {
conn.log.Fatal(conn.ctx, "can't create webPageEvents bulk: %s", err)
}
conn.webInputDurations, err = NewBulk(conn.c, conn.metrics,
"events.inputs",
"(session_id, message_id, timestamp, label, hesitation, duration)",
"($%d, $%d, $%d, NULLIF(LEFT($%d, 2000),''), $%d, $%d)",
@ -144,7 +147,7 @@ func (conn *BulkSet) initBulks() {
if err != nil {
conn.log.Fatal(conn.ctx, "can't create webInputDurations bulk: %s", err)
}
conn.webGraphQL, err = NewBulk(conn.c, conn.metrics,
"events.graphql",
"(session_id, timestamp, message_id, name, request_body, response_body)",
"($%d, $%d, $%d, LEFT($%d, 2000), $%d, $%d)",
@ -152,7 +155,7 @@ func (conn *BulkSet) initBulks() {
if err != nil {
conn.log.Fatal(conn.ctx, "can't create webGraphQL bulk: %s", err)
}
conn.webErrors, err = NewBulk(conn.c, conn.metrics,
"errors",
"(error_id, project_id, source, name, message, payload)",
"($%d, $%d, $%d, $%d, $%d, $%d::jsonb)",
@ -160,7 +163,7 @@ func (conn *BulkSet) initBulks() {
if err != nil {
conn.log.Fatal(conn.ctx, "can't create webErrors bulk: %s", err)
}
conn.webErrorEvents, err = NewBulk(conn.c, conn.metrics,
"events.errors",
"(session_id, message_id, timestamp, error_id)",
"($%d, $%d, $%d, $%d)",
@ -168,7 +171,7 @@ func (conn *BulkSet) initBulks() {
if err != nil {
conn.log.Fatal(conn.ctx, "can't create webErrorEvents bulk: %s", err)
}
conn.webErrorTags, err = NewBulk(conn.c, conn.metrics,
"public.errors_tags",
"(session_id, message_id, error_id, key, value)",
"($%d, $%d, $%d, $%d, $%d)",
@ -176,7 +179,7 @@ func (conn *BulkSet) initBulks() {
if err != nil {
conn.log.Fatal(conn.ctx, "can't create webErrorTags bulk: %s", err)
}
conn.webIssues, err = NewBulk(conn.c, conn.metrics,
"issues",
"(project_id, issue_id, type, context_string)",
"($%d, $%d, $%d, $%d)",
@ -184,7 +187,7 @@ func (conn *BulkSet) initBulks() {
if err != nil {
conn.log.Fatal(conn.ctx, "can't create webIssues bulk: %s", err)
}
conn.webIssueEvents, err = NewBulk(conn.c, conn.metrics,
"events_common.issues",
"(session_id, issue_id, timestamp, seq_index, payload)",
"($%d, $%d, $%d, $%d, CAST($%d AS jsonb))",
@ -192,7 +195,7 @@ func (conn *BulkSet) initBulks() {
if err != nil {
conn.log.Fatal(conn.ctx, "can't create webIssueEvents bulk: %s", err)
}
conn.webCustomEvents, err = NewBulk(conn.c, conn.metrics,
"events_common.customs",
"(session_id, seq_index, timestamp, name, payload, level)",
"($%d, $%d, $%d, LEFT($%d, 2000), $%d, $%d)",
@ -200,7 +203,7 @@ func (conn *BulkSet) initBulks() {
if err != nil {
conn.log.Fatal(conn.ctx, "can't create webCustomEvents bulk: %s", err)
}
conn.webClickEvents, err = NewBulk(conn.c, conn.metrics,
"events.clicks",
"(session_id, message_id, timestamp, label, selector, url, path, hesitation)",
"($%d, $%d, $%d, NULLIF(LEFT($%d, 2000), ''), LEFT($%d, 8000), LEFT($%d, 2000), LEFT($%d, 2000), $%d)",
@ -208,7 +211,7 @@ func (conn *BulkSet) initBulks() {
if err != nil {
conn.log.Fatal(conn.ctx, "can't create webClickEvents bulk: %s", err)
}
conn.webClickXYEvents, err = NewBulk(conn.c, conn.metrics,
"events.clicks",
"(session_id, message_id, timestamp, label, selector, url, path, hesitation, normalized_x, normalized_y)",
"($%d, $%d, $%d, NULLIF(LEFT($%d, 2000), ''), LEFT($%d, 8000), LEFT($%d, 2000), LEFT($%d, 2000), $%d, $%d, $%d)",
@ -216,7 +219,7 @@ func (conn *BulkSet) initBulks() {
if err != nil {
conn.log.Fatal(conn.ctx, "can't create webClickEvents bulk: %s", err)
}
conn.webNetworkRequest, err = NewBulk(conn.c, conn.metrics,
"events_common.requests",
"(session_id, timestamp, seq_index, url, host, path, query, request_body, response_body, status_code, method, duration, success, transfer_size)",
"($%d, $%d, $%d, LEFT($%d, 8000), LEFT($%d, 300), LEFT($%d, 2000), LEFT($%d, 8000), $%d, $%d, $%d::smallint, NULLIF($%d, '')::http_method, $%d, $%d, $%d)",
@ -224,7 +227,7 @@ func (conn *BulkSet) initBulks() {
if err != nil {
conn.log.Fatal(conn.ctx, "can't create webNetworkRequest bulk: %s", err)
}
conn.webCanvasNodes, err = NewBulk(conn.c, conn.metrics,
"events.canvas_recordings",
"(session_id, recording_id, timestamp)",
"($%d, $%d, $%d)",
@ -232,7 +235,7 @@ func (conn *BulkSet) initBulks() {
if err != nil {
conn.log.Fatal(conn.ctx, "can't create webCanvasNodes bulk: %s", err)
}
conn.webTagTriggers, err = NewBulk(conn.c, conn.metrics,
"events.tags",
"(session_id, timestamp, seq_index, tag_id)",
"($%d, $%d, $%d, $%d)",

View file

@ -2,6 +2,7 @@ package postgres
import (
"context"
"openreplay/backend/pkg/metrics/database"
"openreplay/backend/pkg/db/postgres/batch"
"openreplay/backend/pkg/db/postgres/pool"
@ -22,7 +23,7 @@ type Conn struct {
chConn CH
}
func NewConn(log logger.Logger, pool pool.Pool, ch CH, metrics database.Database) *Conn {
if pool == nil {
log.Fatal(context.Background(), "pg pool is empty")
}
@ -30,8 +31,8 @@ func NewConn(log logger.Logger, pool pool.Pool, ch CH) *Conn {
log: log,
Pool: pool,
chConn: ch,
bulks: NewBulkSet(log, pool, metrics),
batches: batch.NewBatchSet(log, pool, metrics),
}
}

View file

@ -181,11 +181,6 @@ func (conn *Conn) InsertWebErrorEvent(sess *sessions.Session, e *types.ErrorEven
if err := conn.bulks.Get("webErrorEvents").Append(sess.SessionID, truncSqIdx(e.MessageID), e.Timestamp, errorID); err != nil {
conn.log.Error(sessCtx, "insert web error event err: %s", err)
}
for key, value := range e.Tags {
if err := conn.bulks.Get("webErrorTags").Append(sess.SessionID, truncSqIdx(e.MessageID), errorID, key, value); err != nil {
conn.log.Error(sessCtx, "insert web error token err: %s", err)
}
}
return nil
}

View file

@ -23,58 +23,12 @@ type Pool interface {
}
type poolImpl struct {
url string
conn *pgxpool.Pool
metrics database.Database
}
func New(metrics database.Database, url string) (Pool, error) {
if url == "" {
return nil, errors.New("pg connection url is empty")
}
@ -83,24 +37,73 @@ func New(url string) (Pool, error) {
return nil, fmt.Errorf("pgxpool.Connect error: %v", err)
}
res := &poolImpl{
url: url,
conn: conn,
metrics: metrics,
}
return res, nil
}
func (p *poolImpl) Query(sql string, args ...interface{}) (pgx.Rows, error) {
start := time.Now()
res, err := p.conn.Query(getTimeoutContext(), sql, args...)
method, table := methodName(sql)
p.metrics.RecordRequestDuration(float64(time.Now().Sub(start).Milliseconds()), method, table)
p.metrics.IncreaseTotalRequests(method, table)
return res, err
}
func (p *poolImpl) QueryRow(sql string, args ...interface{}) pgx.Row {
start := time.Now()
res := p.conn.QueryRow(getTimeoutContext(), sql, args...)
method, table := methodName(sql)
p.metrics.RecordRequestDuration(float64(time.Now().Sub(start).Milliseconds()), method, table)
p.metrics.IncreaseTotalRequests(method, table)
return res
}
func (p *poolImpl) Exec(sql string, arguments ...interface{}) error {
start := time.Now()
_, err := p.conn.Exec(getTimeoutContext(), sql, arguments...)
method, table := methodName(sql)
p.metrics.RecordRequestDuration(float64(time.Now().Sub(start).Milliseconds()), method, table)
p.metrics.IncreaseTotalRequests(method, table)
return err
}
func (p *poolImpl) SendBatch(b *pgx.Batch) pgx.BatchResults {
start := time.Now()
res := p.conn.SendBatch(getTimeoutContext(), b)
p.metrics.RecordRequestDuration(float64(time.Now().Sub(start).Milliseconds()), "sendBatch", "")
p.metrics.IncreaseTotalRequests("sendBatch", "")
return res
}
func (p *poolImpl) Begin() (*Tx, error) {
start := time.Now()
tx, err := p.conn.Begin(context.Background())
p.metrics.RecordRequestDuration(float64(time.Now().Sub(start).Milliseconds()), "begin", "")
p.metrics.IncreaseTotalRequests("begin", "")
return &Tx{tx, p.metrics}, err
}
func (p *poolImpl) Close() {
p.conn.Close()
}
// TX - start
type Tx struct {
pgx.Tx
metrics database.Database
}
func (tx *Tx) TxExec(sql string, args ...interface{}) error {
start := time.Now()
_, err := tx.Exec(context.Background(), sql, args...)
method, table := methodName(sql)
tx.metrics.RecordRequestDuration(float64(time.Now().Sub(start).Milliseconds()), method, table)
tx.metrics.IncreaseTotalRequests(method, table)
return err
}
@ -108,24 +111,24 @@ func (tx *Tx) TxQueryRow(sql string, args ...interface{}) pgx.Row {
start := time.Now()
res := tx.QueryRow(context.Background(), sql, args...)
method, table := methodName(sql)
tx.metrics.RecordRequestDuration(float64(time.Now().Sub(start).Milliseconds()), method, table)
tx.metrics.IncreaseTotalRequests(method, table)
return res
}
func (tx *Tx) TxRollback() error {
start := time.Now()
err := tx.Rollback(context.Background())
tx.metrics.RecordRequestDuration(float64(time.Now().Sub(start).Milliseconds()), "rollback", "")
tx.metrics.IncreaseTotalRequests("rollback", "")
return err
}
func (tx *Tx) TxCommit() error {
start := time.Now()
err := tx.Commit(context.Background())
tx.metrics.RecordRequestDuration(float64(time.Now().Sub(start).Milliseconds()), "commit", "")
tx.metrics.IncreaseTotalRequests("commit", "")
return err
}
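Every pool and Tx method above repeats the same measure-then-record sequence around the underlying pgx call. A generic helper like the one below (hypothetical, not part of this PR) shows the pattern in isolation:

package main

import (
    "fmt"
    "time"
)

type Database interface {
    RecordRequestDuration(ms float64, method, table string)
    IncreaseTotalRequests(method, table string)
}

type logMetrics struct{}

func (logMetrics) RecordRequestDuration(ms float64, method, table string) {
    fmt.Printf("%s %s took %.1fms\n", method, table, ms)
}
func (logMetrics) IncreaseTotalRequests(method, table string) {}

// timed wraps any database call so each call site stays one line.
func timed(m Database, method, table string, fn func() error) error {
    start := time.Now()
    err := fn()
    m.RecordRequestDuration(float64(time.Since(start).Milliseconds()), method, table)
    m.IncreaseTotalRequests(method, table)
    return err
}

func main() {
    _ = timed(logMetrics{}, "insert", "sessions", func() error {
        time.Sleep(5 * time.Millisecond) // stand-in for p.conn.Exec(...)
        return nil
    })
}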

View file

@ -61,7 +61,6 @@ func parseTags(tagsJSON string) (tags map[string]*string, err error) {
}
func WrapJSException(m *JSException) (*ErrorEvent, error) {
meta, err := parseTags(m.Metadata)
return &ErrorEvent{
MessageID: m.Meta().Index,
Timestamp: m.Meta().Timestamp,
@ -69,9 +68,8 @@ func WrapJSException(m *JSException) (*ErrorEvent, error) {
Name: m.Name,
Message: m.Message,
Payload: m.Payload,
OriginType: m.TypeID(),
}, nil
}
func WrapIntegrationEvent(m *IntegrationEvent) *ErrorEvent {

View file

@ -2,6 +2,7 @@ package integrations
import (
"openreplay/backend/pkg/integrations/service"
"openreplay/backend/pkg/metrics/database"
"openreplay/backend/pkg/metrics/web"
"openreplay/backend/pkg/server/tracer"
"time"
@ -23,7 +24,7 @@ type ServiceBuilder struct {
IntegrationsAPI api.Handlers
}
func NewServiceBuilder(log logger.Logger, cfg *integrations.Config, webMetrics web.Web, dbMetrics database.Database, pgconn pool.Pool) (*ServiceBuilder, error) {
objStore, err := store.NewStore(&cfg.ObjectsConfig)
if err != nil {
return nil, err
@ -37,7 +38,7 @@ func NewServiceBuilder(log logger.Logger, cfg *integrations.Config, webMetrics w
if err != nil {
return nil, err
}
auditrail, err := tracer.NewTracer(log, pgconn, dbMetrics)
if err != nil {
return nil, err
}

View file

@ -8,11 +8,13 @@ import (
type sinkIteratorImpl struct {
coreIterator MessageIterator
handler MessageHandler
metrics sink.Sink
}
func NewSinkMessageIterator(log logger.Logger, messageHandler MessageHandler, messageFilter []int, autoDecode bool, metrics sink.Sink) MessageIterator {
iter := &sinkIteratorImpl{
handler: messageHandler,
metrics: metrics,
}
iter.coreIterator = NewMessageIterator(log, iter.handle, messageFilter, autoDecode)
return iter
@ -23,8 +25,8 @@ func (i *sinkIteratorImpl) handle(message Message) {
}
func (i *sinkIteratorImpl) Iterate(batchData []byte, batchInfo *BatchInfo) {
i.metrics.RecordBatchSize(float64(len(batchData)))
i.metrics.IncreaseTotalBatches()
// Call core iterator
i.coreIterator.Iterate(batchData, batchInfo)
// Send batch end signal

View file

@ -1,22 +0,0 @@
package analytics
import (
"github.com/prometheus/client_golang/prometheus"
"openreplay/backend/pkg/metrics/common"
)
var cardCreated = prometheus.NewHistogram(
prometheus.HistogramOpts{
Namespace: "card",
Name: "created",
Help: "Histogram for tracking card creation",
Buckets: common.DefaultBuckets,
},
)
func List() []prometheus.Collector {
return []prometheus.Collector{
cardCreated,
}
}

View file

@ -2,71 +2,22 @@ package assets
import (
"github.com/prometheus/client_golang/prometheus"
"openreplay/backend/pkg/metrics/common"
"strconv"
)
type Assets interface {
IncreaseProcessesSessions()
IncreaseSavedSessions()
RecordDownloadDuration(durMillis float64, code int)
RecordUploadDuration(durMillis float64, isFailed bool)
List() []prometheus.Collector
}
type assetsImpl struct{}
func New(serviceName string) Assets { return &assetsImpl{} }
func (a *assetsImpl) List() []prometheus.Collector { return []prometheus.Collector{} }
func (a *assetsImpl) IncreaseProcessesSessions() {}
func (a *assetsImpl) IncreaseSavedSessions() {}
func (a *assetsImpl) RecordDownloadDuration(durMillis float64, code int) {}
func (a *assetsImpl) RecordUploadDuration(durMillis float64, isFailed bool) {}
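The no-op implementation keeps the interface cheap to satisfy, and nothing stops a Prometheus-backed variant from being swapped in later without touching call sites. A hypothetical sketch, reassembling roughly the collectors this file defined before the refactor (Help texts and buckets omitted):

package main

import (
    "strconv"

    "github.com/prometheus/client_golang/prometheus"
)

type Assets interface {
    IncreaseProcessesSessions()
    IncreaseSavedSessions()
    RecordDownloadDuration(durMillis float64, code int)
    RecordUploadDuration(durMillis float64, isFailed bool)
    List() []prometheus.Collector
}

type promAssets struct {
    processed prometheus.Counter
    saved     prometheus.Counter
    download  *prometheus.HistogramVec
    upload    *prometheus.HistogramVec
}

func NewPromAssets() Assets {
    return &promAssets{
        processed: prometheus.NewCounter(prometheus.CounterOpts{Namespace: "assets", Name: "processed_total"}),
        saved:     prometheus.NewCounter(prometheus.CounterOpts{Namespace: "assets", Name: "saved_total"}),
        download: prometheus.NewHistogramVec(prometheus.HistogramOpts{
            Namespace: "assets", Name: "download_duration_seconds",
        }, []string{"response_code"}),
        upload: prometheus.NewHistogramVec(prometheus.HistogramOpts{
            Namespace: "assets", Name: "upload_s3_duration_seconds",
        }, []string{"failed"}),
    }
}

func (a *promAssets) IncreaseProcessesSessions() { a.processed.Inc() }
func (a *promAssets) IncreaseSavedSessions()     { a.saved.Inc() }
func (a *promAssets) RecordDownloadDuration(durMillis float64, code int) {
    a.download.WithLabelValues(strconv.Itoa(code)).Observe(durMillis / 1000.0)
}
func (a *promAssets) RecordUploadDuration(durMillis float64, isFailed bool) {
    a.upload.WithLabelValues(strconv.FormatBool(isFailed)).Observe(durMillis / 1000.0)
}
func (a *promAssets) List() []prometheus.Collector {
    return []prometheus.Collector{a.processed, a.saved, a.download, a.upload}
}

func main() {
    prometheus.MustRegister(NewPromAssets().List()...)
}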

View file

@ -2,7 +2,6 @@ package canvas
import (
"github.com/prometheus/client_golang/prometheus"
"openreplay/backend/pkg/metrics/common"
)
type Canvas interface {
@ -18,175 +17,17 @@ type Canvas interface {
List() []prometheus.Collector
}
type canvasImpl struct{}
func New(serviceName string) Canvas { return &canvasImpl{} }
func (c *canvasImpl) List() []prometheus.Collector { return []prometheus.Collector{} }
func (c *canvasImpl) RecordCanvasImageSize(size float64) {}
func (c *canvasImpl) IncreaseTotalSavedImages() {}
func (c *canvasImpl) RecordImagesPerCanvas(number float64) {}
func (c *canvasImpl) RecordCanvasesPerSession(number float64) {}
func (c *canvasImpl) RecordPreparingDuration(duration float64) {}
func (c *canvasImpl) IncreaseTotalCreatedArchives() {}
func (c *canvasImpl) RecordArchivingDuration(duration float64) {}
func (c *canvasImpl) RecordArchiveSize(size float64) {}
func (c *canvasImpl) RecordUploadingDuration(duration float64) {}

View file

@ -2,141 +2,32 @@ package database
import (
"github.com/prometheus/client_golang/prometheus"
"openreplay/backend/pkg/metrics/common"
)
type Database interface {
RecordBatchElements(number float64)
RecordBatchInsertDuration(durMillis float64)
RecordBulkSize(size float64, db, table string)
RecordBulkElements(size float64, db, table string)
RecordBulkInsertDuration(durMillis float64, db, table string)
RecordRequestDuration(durMillis float64, method, table string)
IncreaseTotalRequests(method, table string)
IncreaseRedisRequests(method, table string)
RecordRedisRequestDuration(durMillis float64, method, table string)
List() []prometheus.Collector
}
type databaseImpl struct{}
func New(serviceName string) Database { return &databaseImpl{} }
func (d *databaseImpl) List() []prometheus.Collector { return []prometheus.Collector{} }
func (d *databaseImpl) RecordBatchElements(number float64) {}
func (d *databaseImpl) RecordBatchInsertDuration(durMillis float64) {}
func (d *databaseImpl) RecordBulkSize(size float64, db, table string) {}
func (d *databaseImpl) RecordBulkElements(size float64, db, table string) {}
func (d *databaseImpl) RecordBulkInsertDuration(durMillis float64, db, table string) {}
func (d *databaseImpl) RecordRequestDuration(durMillis float64, method, table string) {}
func (d *databaseImpl) IncreaseTotalRequests(method, table string) {}
func (d *databaseImpl) IncreaseRedisRequests(method, table string) {}
func (d *databaseImpl) RecordRedisRequestDuration(durMillis float64, method, table string) {}
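With the no-op implementation, List() hands back an empty slice, so a registration loop at startup becomes harmless. A minimal hypothetical wiring sketch:

package main

import "github.com/prometheus/client_golang/prometheus"

type Database interface {
    List() []prometheus.Collector
}

type databaseImpl struct{}

func (databaseImpl) List() []prometheus.Collector { return []prometheus.Collector{} }

func main() {
    var m Database = databaseImpl{}
    // Registering an empty collector list does nothing; a Prometheus-backed
    // implementation would return real collectors here instead.
    prometheus.MustRegister(m.List()...)
}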

View file

@ -2,50 +2,20 @@ package ender
import "github.com/prometheus/client_golang/prometheus"
type Ender interface {
IncreaseActiveSessions()
DecreaseActiveSessions()
IncreaseClosedSessions()
IncreaseTotalSessions()
List() []prometheus.Collector
}
type enderImpl struct{}
func New(serviceName string) Ender { return &enderImpl{} }
func (e *enderImpl) List() []prometheus.Collector { return []prometheus.Collector{} }
func (e *enderImpl) IncreaseActiveSessions() {}
func (e *enderImpl) DecreaseActiveSessions() {}
func (e *enderImpl) IncreaseClosedSessions() {}
func (e *enderImpl) IncreaseTotalSessions() {}

View file

@ -2,65 +2,16 @@ package heuristics
import (
"github.com/prometheus/client_golang/prometheus"
"openreplay/backend/pkg/metrics/common"
"strconv"
)
type Heuristics interface {
IncreaseTotalEvents(eventType string)
List() []prometheus.Collector
}
type heuristicsImpl struct{}
func New(serviceName string) Heuristics { return &heuristicsImpl{} }
func (h *heuristicsImpl) List() []prometheus.Collector { return []prometheus.Collector{} }
func (h *heuristicsImpl) IncreaseTotalEvents(eventType string) {}

View file

@ -2,7 +2,6 @@ package images
import (
"github.com/prometheus/client_golang/prometheus"
"openreplay/backend/pkg/metrics/common"
)
type Images interface {
@ -18,174 +17,17 @@ type Images interface {
List() []prometheus.Collector
}
type imagesImpl struct {
originalArchiveSize prometheus.Histogram
originalArchiveExtractionDuration prometheus.Histogram
totalSavedArchives prometheus.Counter
savingImageDuration prometheus.Histogram
totalSavedImages prometheus.Counter
totalCreatedArchives prometheus.Counter
archivingDuration prometheus.Histogram
archiveSize prometheus.Histogram
uploadingDuration prometheus.Histogram
}
type imagesImpl struct{}
func New(serviceName string) Images {
return &imagesImpl{
originalArchiveSize: newOriginalArchiveSize(serviceName),
originalArchiveExtractionDuration: newOriginalArchiveExtractionDuration(serviceName),
totalSavedArchives: newTotalSavedArchives(serviceName),
savingImageDuration: newSavingImageDuration(serviceName),
totalSavedImages: newTotalSavedImages(serviceName),
totalCreatedArchives: newTotalCreatedArchives(serviceName),
archivingDuration: newArchivingDuration(serviceName),
archiveSize: newArchiveSize(serviceName),
uploadingDuration: newUploadingDuration(serviceName),
}
}
func New(serviceName string) Images { return &imagesImpl{} }
func (i *imagesImpl) List() []prometheus.Collector {
return []prometheus.Collector{
i.originalArchiveSize,
i.originalArchiveExtractionDuration,
i.totalSavedArchives,
i.savingImageDuration,
i.totalSavedImages,
i.totalCreatedArchives,
i.archivingDuration,
i.archiveSize,
i.uploadingDuration,
}
}
func newOriginalArchiveSize(serviceName string) prometheus.Histogram {
return prometheus.NewHistogram(
prometheus.HistogramOpts{
Namespace: serviceName,
Name: "original_archive_size_bytes",
Help: "A histogram displaying the original archive size in bytes.",
Buckets: common.DefaultSizeBuckets,
},
)
}
func (i *imagesImpl) RecordOriginalArchiveSize(size float64) {
i.archiveSize.Observe(size)
}
func newOriginalArchiveExtractionDuration(serviceName string) prometheus.Histogram {
return prometheus.NewHistogram(
prometheus.HistogramOpts{
Namespace: serviceName,
Name: "original_archive_extraction_duration_seconds",
Help: "A histogram displaying the duration of extracting the original archive.",
Buckets: common.DefaultDurationBuckets,
},
)
}
func (i *imagesImpl) RecordOriginalArchiveExtractionDuration(duration float64) {
i.originalArchiveExtractionDuration.Observe(duration)
}
func newTotalSavedArchives(serviceName string) prometheus.Counter {
return prometheus.NewCounter(
prometheus.CounterOpts{
Namespace: serviceName,
Name: "total_saved_archives",
Help: "A counter displaying the total number of saved original archives.",
},
)
}
func (i *imagesImpl) IncreaseTotalSavedArchives() {
i.totalSavedArchives.Inc()
}
func newSavingImageDuration(serviceName string) prometheus.Histogram {
return prometheus.NewHistogram(
prometheus.HistogramOpts{
Namespace: serviceName,
Name: "saving_image_duration_seconds",
Help: "A histogram displaying the duration of saving each image in seconds.",
Buckets: common.DefaultDurationBuckets,
},
)
}
func (i *imagesImpl) RecordSavingImageDuration(duration float64) {
i.savingImageDuration.Observe(duration)
}
func newTotalSavedImages(serviceName string) prometheus.Counter {
return prometheus.NewCounter(
prometheus.CounterOpts{
Namespace: serviceName,
Name: "total_saved_images",
Help: "A counter displaying the total number of saved images.",
},
)
}
func (i *imagesImpl) IncreaseTotalSavedImages() {
i.totalSavedImages.Inc()
}
func newTotalCreatedArchives(serviceName string) prometheus.Counter {
return prometheus.NewCounter(
prometheus.CounterOpts{
Namespace: serviceName,
Name: "total_created_archives",
Help: "A counter displaying the total number of created archives.",
},
)
}
func (i *imagesImpl) IncreaseTotalCreatedArchives() {
i.totalCreatedArchives.Inc()
}
func newArchivingDuration(serviceName string) prometheus.Histogram {
return prometheus.NewHistogram(
prometheus.HistogramOpts{
Namespace: serviceName,
Name: "archiving_duration_seconds",
Help: "A histogram displaying the duration of archiving each session in seconds.",
Buckets: common.DefaultDurationBuckets,
},
)
}
func (i *imagesImpl) RecordArchivingDuration(duration float64) {
i.archivingDuration.Observe(duration)
}
func newArchiveSize(serviceName string) prometheus.Histogram {
return prometheus.NewHistogram(
prometheus.HistogramOpts{
Namespace: serviceName,
Name: "archive_size_bytes",
Help: "A histogram displaying the session's archive size in bytes.",
Buckets: common.DefaultSizeBuckets,
},
)
}
func (i *imagesImpl) RecordArchiveSize(size float64) {
i.archiveSize.Observe(size)
}
func newUploadingDuration(serviceName string) prometheus.Histogram {
return prometheus.NewHistogram(
prometheus.HistogramOpts{
Namespace: serviceName,
Name: "uploading_duration_seconds",
Help: "A histogram displaying the duration of uploading each session's archive to S3 in seconds.",
Buckets: common.DefaultDurationBuckets,
},
)
}
func (i *imagesImpl) RecordUploadingDuration(duration float64) {
i.uploadingDuration.Observe(duration)
}
func (i *imagesImpl) List() []prometheus.Collector { return []prometheus.Collector{} }
func (i *imagesImpl) RecordOriginalArchiveSize(size float64) {}
func (i *imagesImpl) RecordOriginalArchiveExtractionDuration(duration float64) {}
func (i *imagesImpl) IncreaseTotalSavedArchives() {}
func (i *imagesImpl) RecordSavingImageDuration(duration float64) {}
func (i *imagesImpl) IncreaseTotalSavedImages() {}
func (i *imagesImpl) IncreaseTotalCreatedArchives() {}
func (i *imagesImpl) RecordArchivingDuration(duration float64) {}
func (i *imagesImpl) RecordArchiveSize(size float64) {}
func (i *imagesImpl) RecordUploadingDuration(duration float64) {}
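Taken together, the hunks above replace the images service's package-level Prometheus globals with per-instance histograms and counters, plus a no-op stub. A minimal sketch of the consuming side, assuming the package exposes a New(serviceName string) Images constructor analogous to the sink.New and spot.New constructors shown further down:

package main

import (
	"time"

	"openreplay/backend/pkg/metrics/images"
)

func main() {
	// images.New and the Images interface name are assumptions inferred
	// from the sibling sink/spot/storage packages in this diff.
	m := images.New("images") // serviceName becomes the metric namespace

	start := time.Now()
	// ... extract the original archive here ...
	m.RecordOriginalArchiveExtractionDuration(time.Since(start).Seconds())
	m.IncreaseTotalSavedArchives()
}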


@ -2,184 +2,40 @@ package sink
import (
"github.com/prometheus/client_golang/prometheus"
"openreplay/backend/pkg/metrics/common"
)
var sinkMessageSize = prometheus.NewHistogram(
prometheus.HistogramOpts{
Namespace: "sink",
Name: "message_size_bytes",
Help: "A histogram displaying the size of each message in bytes.",
Buckets: common.DefaultSizeBuckets,
},
)
func RecordMessageSize(size float64) {
sinkMessageSize.Observe(size)
type Sink interface {
RecordMessageSize(size float64)
IncreaseWrittenMessages()
IncreaseTotalMessages()
RecordBatchSize(size float64)
IncreaseTotalBatches()
RecordWrittenBytes(size float64, fileType string)
IncreaseTotalWrittenBytes(size float64, fileType string)
IncreaseCachedAssets()
DecreaseCachedAssets()
IncreaseSkippedAssets()
IncreaseTotalAssets()
RecordAssetSize(size float64)
RecordProcessAssetDuration(durMillis float64)
List() []prometheus.Collector
}
var sinkWrittenMessages = prometheus.NewCounter(
prometheus.CounterOpts{
Namespace: "sink",
Name: "messages_written",
Help: "A counter displaying the total number of all written messages.",
},
)
type sinkImpl struct{}
func IncreaseWrittenMessages() {
sinkWrittenMessages.Inc()
}
func New(serviceName string) Sink { return &sinkImpl{} }
var sinkTotalMessages = prometheus.NewCounter(
prometheus.CounterOpts{
Namespace: "sink",
Name: "messages_total",
Help: "A counter displaying the total number of all processed messages.",
},
)
func IncreaseTotalMessages() {
sinkTotalMessages.Inc()
}
var sinkBatchSize = prometheus.NewHistogram(
prometheus.HistogramOpts{
Namespace: "sink",
Name: "batch_size_bytes",
Help: "A histogram displaying the size of each batch in bytes.",
Buckets: common.DefaultSizeBuckets,
},
)
func RecordBatchSize(size float64) {
sinkBatchSize.Observe(size)
}
var sinkTotalBatches = prometheus.NewCounter(
prometheus.CounterOpts{
Namespace: "sink",
Name: "batches_total",
Help: "A counter displaying the total number of all written batches.",
},
)
func IncreaseTotalBatches() {
sinkTotalBatches.Inc()
}
var sinkWrittenBytes = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Namespace: "sink",
Name: "written_bytes",
Help: "A histogram displaying the size of buffer in bytes written to session file.",
Buckets: common.DefaultSizeBuckets,
},
[]string{"file_type"},
)
func RecordWrittenBytes(size float64, fileType string) {
if size == 0 {
return
}
sinkWrittenBytes.WithLabelValues(fileType).Observe(size)
IncreaseTotalWrittenBytes(size, fileType)
}
var sinkTotalWrittenBytes = prometheus.NewCounterVec(
prometheus.CounterOpts{
Namespace: "sink",
Name: "written_bytes_total",
Help: "A counter displaying the total number of bytes written to all session files.",
},
[]string{"file_type"},
)
func IncreaseTotalWrittenBytes(size float64, fileType string) {
if size == 0 {
return
}
sinkTotalWrittenBytes.WithLabelValues(fileType).Add(size)
}
var sinkCachedAssets = prometheus.NewGauge(
prometheus.GaugeOpts{
Namespace: "sink",
Name: "assets_cached",
Help: "A gauge displaying the current number of cached assets.",
},
)
func IncreaseCachedAssets() {
sinkCachedAssets.Inc()
}
func DecreaseCachedAssets() {
sinkCachedAssets.Dec()
}
var sinkSkippedAssets = prometheus.NewCounter(
prometheus.CounterOpts{
Namespace: "sink",
Name: "assets_skipped",
Help: "A counter displaying the total number of all skipped assets.",
},
)
func IncreaseSkippedAssets() {
sinkSkippedAssets.Inc()
}
var sinkTotalAssets = prometheus.NewCounter(
prometheus.CounterOpts{
Namespace: "sink",
Name: "assets_total",
Help: "A counter displaying the total number of all processed assets.",
},
)
func IncreaseTotalAssets() {
sinkTotalAssets.Inc()
}
var sinkAssetSize = prometheus.NewHistogram(
prometheus.HistogramOpts{
Namespace: "sink",
Name: "asset_size_bytes",
Help: "A histogram displaying the size of each asset in bytes.",
Buckets: common.DefaultSizeBuckets,
},
)
func RecordAssetSize(size float64) {
sinkAssetSize.Observe(size)
}
var sinkProcessAssetDuration = prometheus.NewHistogram(
prometheus.HistogramOpts{
Namespace: "sink",
Name: "asset_process_duration_seconds",
Help: "A histogram displaying the duration of processing for each asset in seconds.",
Buckets: common.DefaultDurationBuckets,
},
)
func RecordProcessAssetDuration(durMillis float64) {
sinkProcessAssetDuration.Observe(durMillis / 1000.0)
}
func List() []prometheus.Collector {
return []prometheus.Collector{
sinkMessageSize,
sinkWrittenMessages,
sinkTotalMessages,
sinkBatchSize,
sinkTotalBatches,
sinkWrittenBytes,
sinkTotalWrittenBytes,
sinkCachedAssets,
sinkSkippedAssets,
sinkTotalAssets,
sinkAssetSize,
sinkProcessAssetDuration,
}
}
func (s *sinkImpl) List() []prometheus.Collector { return []prometheus.Collector{} }
func (s *sinkImpl) RecordMessageSize(size float64) {}
func (s *sinkImpl) IncreaseWrittenMessages() {}
func (s *sinkImpl) IncreaseTotalMessages() {}
func (s *sinkImpl) RecordBatchSize(size float64) {}
func (s *sinkImpl) IncreaseTotalBatches() {}
func (s *sinkImpl) RecordWrittenBytes(size float64, fileType string) {}
func (s *sinkImpl) IncreaseTotalWrittenBytes(size float64, fileType string) {}
func (s *sinkImpl) IncreaseCachedAssets() {}
func (s *sinkImpl) DecreaseCachedAssets() {}
func (s *sinkImpl) IncreaseSkippedAssets() {}
func (s *sinkImpl) IncreaseTotalAssets() {}
func (s *sinkImpl) RecordAssetSize(size float64) {}
func (s *sinkImpl) RecordProcessAssetDuration(durMillis float64) {}
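Every implementation now exposes its collectors through List(), so registry wiring can live in one place. A minimal sketch against the default Prometheus registry; with the no-op stub above, List() returns an empty slice and registration does nothing:

package main

import (
	"github.com/prometheus/client_golang/prometheus"

	"openreplay/backend/pkg/metrics/sink"
)

func main() {
	m := sink.New("sink")
	// MustRegister is variadic, so expanding an empty slice is valid
	// and simply registers nothing.
	prometheus.MustRegister(m.List()...)
	m.RecordMessageSize(1024) // no-op in this stub implementation
}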


@ -2,148 +2,34 @@ package spot
import (
"github.com/prometheus/client_golang/prometheus"
"openreplay/backend/pkg/metrics/common"
)
var spotOriginalVideoSize = prometheus.NewHistogram(
prometheus.HistogramOpts{
Namespace: "spot",
Name: "original_video_size_bytes",
Help: "A histogram displaying the size of each original video in bytes.",
Buckets: common.VideoSizeBuckets,
},
)
func RecordOriginalVideoSize(size float64) {
spotOriginalVideoSize.Observe(size)
type Spot interface {
RecordOriginalVideoSize(size float64)
RecordCroppedVideoSize(size float64)
IncreaseVideosTotal()
IncreaseVideosCropped()
IncreaseVideosTranscoded()
RecordOriginalVideoDownloadDuration(durMillis float64)
RecordCroppingDuration(durMillis float64)
RecordCroppedVideoUploadDuration(durMillis float64)
RecordTranscodingDuration(durMillis float64)
RecordTranscodedVideoUploadDuration(durMillis float64)
List() []prometheus.Collector
}
var spotCroppedVideoSize = prometheus.NewHistogram(
prometheus.HistogramOpts{
Namespace: "spot",
Name: "cropped_video_size_bytes",
Help: "A histogram displaying the size of each cropped video in bytes.",
Buckets: common.VideoSizeBuckets,
},
)
type spotImpl struct{}
func RecordCroppedVideoSize(size float64) {
spotCroppedVideoSize.Observe(size)
}
func New(serviceName string) Spot { return &spotImpl{} }
var spotVideosTotal = prometheus.NewCounter(
prometheus.CounterOpts{
Namespace: "spot",
Name: "videos_total",
Help: "A counter displaying the total number of all processed videos.",
},
)
func IncreaseVideosTotal() {
spotVideosTotal.Inc()
}
var spotVideosCropped = prometheus.NewCounter(
prometheus.CounterOpts{
Namespace: "spot",
Name: "videos_cropped_total",
Help: "A counter displaying the total number of all cropped videos.",
},
)
func IncreaseVideosCropped() {
spotVideosCropped.Inc()
}
var spotVideosTranscoded = prometheus.NewCounter(
prometheus.CounterOpts{
Namespace: "spot",
Name: "videos_transcoded_total",
Help: "A counter displaying the total number of all transcoded videos.",
},
)
func IncreaseVideosTranscoded() {
spotVideosTranscoded.Inc()
}
var spotOriginalVideoDownloadDuration = prometheus.NewHistogram(
prometheus.HistogramOpts{
Namespace: "spot",
Name: "original_video_download_duration_seconds",
Help: "A histogram displaying the duration of downloading each original video in seconds.",
Buckets: common.DefaultDurationBuckets,
},
)
func RecordOriginalVideoDownloadDuration(durMillis float64) {
spotOriginalVideoDownloadDuration.Observe(durMillis / 1000.0)
}
var spotCroppingDuration = prometheus.NewHistogram(
prometheus.HistogramOpts{
Namespace: "spot",
Name: "cropping_duration_seconds",
Help: "A histogram displaying the duration of cropping each video in seconds.",
Buckets: common.DefaultDurationBuckets,
},
)
func RecordCroppingDuration(durMillis float64) {
spotCroppingDuration.Observe(durMillis / 1000.0)
}
var spotCroppedVideoUploadDuration = prometheus.NewHistogram(
prometheus.HistogramOpts{
Namespace: "spot",
Name: "cropped_video_upload_duration_seconds",
Help: "A histogram displaying the duration of uploading each cropped video in seconds.",
Buckets: common.DefaultDurationBuckets,
},
)
func RecordCroppedVideoUploadDuration(durMillis float64) {
spotCroppedVideoUploadDuration.Observe(durMillis / 1000.0)
}
var spotTranscodingDuration = prometheus.NewHistogram(
prometheus.HistogramOpts{
Namespace: "spot",
Name: "transcoding_duration_seconds",
Help: "A histogram displaying the duration of transcoding each video in seconds.",
Buckets: common.DefaultDurationBuckets,
},
)
func RecordTranscodingDuration(durMillis float64) {
spotTranscodingDuration.Observe(durMillis / 1000.0)
}
var spotTranscodedVideoUploadDuration = prometheus.NewHistogram(
prometheus.HistogramOpts{
Namespace: "spot",
Name: "transcoded_video_upload_duration_seconds",
Help: "A histogram displaying the duration of uploading each transcoded video in seconds.",
Buckets: common.DefaultDurationBuckets,
},
)
func RecordTranscodedVideoUploadDuration(durMillis float64) {
spotTranscodedVideoUploadDuration.Observe(durMillis / 1000.0)
}
func List() []prometheus.Collector {
return []prometheus.Collector{
spotOriginalVideoSize,
spotCroppedVideoSize,
spotVideosTotal,
spotVideosCropped,
spotVideosTranscoded,
spotOriginalVideoDownloadDuration,
spotCroppingDuration,
spotCroppedVideoUploadDuration,
spotTranscodingDuration,
spotTranscodedVideoUploadDuration,
}
}
func (s *spotImpl) List() []prometheus.Collector { return []prometheus.Collector{} }
func (s *spotImpl) RecordOriginalVideoSize(size float64) {}
func (s *spotImpl) RecordCroppedVideoSize(size float64) {}
func (s *spotImpl) IncreaseVideosTotal() {}
func (s *spotImpl) IncreaseVideosCropped() {}
func (s *spotImpl) IncreaseVideosTranscoded() {}
func (s *spotImpl) RecordOriginalVideoDownloadDuration(durMillis float64) {}
func (s *spotImpl) RecordCroppingDuration(durMillis float64) {}
func (s *spotImpl) RecordCroppedVideoUploadDuration(durMillis float64) {}
func (s *spotImpl) RecordTranscodingDuration(durMillis float64) {}
func (s *spotImpl) RecordTranscodedVideoUploadDuration(durMillis float64) {}
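A side benefit of putting spot metrics behind an interface is testability: the transcoder (further down in this diff) can be exercised with a recording fake instead of global Prometheus state. A hypothetical test double, which would have to live in the same package because spotImpl is unexported:

// countingSpot is illustrative test scaffolding, not part of the diff.
// Embedding the no-op spotImpl satisfies the full Spot interface while
// overriding only the method under inspection.
type countingSpot struct {
	spotImpl
	videosTotal int
}

func (c *countingSpot) IncreaseVideosTotal() { c.videosTotal++ }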


@ -2,154 +2,34 @@ package storage
import (
"github.com/prometheus/client_golang/prometheus"
"openreplay/backend/pkg/metrics/common"
)
var storageSessionSize = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Namespace: "storage",
Name: "session_size_bytes",
Help: "A histogram displaying the size of each session file in bytes prior to any manipulation.",
Buckets: common.DefaultSizeBuckets,
},
[]string{"file_type"},
)
func RecordSessionSize(fileSize float64, fileType string) {
storageSessionSize.WithLabelValues(fileType).Observe(fileSize)
type Storage interface {
RecordSessionSize(fileSize float64, fileType string)
IncreaseStorageTotalSessions()
RecordSkippedSessionSize(fileSize float64, fileType string)
IncreaseStorageTotalSkippedSessions()
RecordSessionReadDuration(durMillis float64, fileType string)
RecordSessionSortDuration(durMillis float64, fileType string)
RecordSessionEncryptionDuration(durMillis float64, fileType string)
RecordSessionCompressDuration(durMillis float64, fileType string)
RecordSessionUploadDuration(durMillis float64, fileType string)
RecordSessionCompressionRatio(ratio float64, fileType string)
List() []prometheus.Collector
}
var storageTotalSessions = prometheus.NewCounter(
prometheus.CounterOpts{
Namespace: "storage",
Name: "sessions_total",
Help: "A counter displaying the total number of all processed sessions.",
},
)
type storageImpl struct{}
func IncreaseStorageTotalSessions() {
storageTotalSessions.Inc()
}
func New(serviceName string) Storage { return &storageImpl{} }
var storageSkippedSessionSize = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Namespace: "storage",
Name: "session_size_bytes",
Help: "A histogram displaying the size of each skipped session file in bytes.",
Buckets: common.DefaultSizeBuckets,
},
[]string{"file_type"},
)
func RecordSkippedSessionSize(fileSize float64, fileType string) {
storageSkippedSessionSize.WithLabelValues(fileType).Observe(fileSize)
}
var storageTotalSkippedSessions = prometheus.NewCounter(
prometheus.CounterOpts{
Namespace: "storage",
Name: "sessions_skipped_total",
Help: "A counter displaying the total number of all skipped sessions because of the size limits.",
},
)
func IncreaseStorageTotalSkippedSessions() {
storageTotalSkippedSessions.Inc()
}
var storageSessionReadDuration = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Namespace: "storage",
Name: "read_duration_seconds",
Help: "A histogram displaying the duration of reading for each session in seconds.",
Buckets: common.DefaultDurationBuckets,
},
[]string{"file_type"},
)
func RecordSessionReadDuration(durMillis float64, fileType string) {
storageSessionReadDuration.WithLabelValues(fileType).Observe(durMillis / 1000.0)
}
var storageSessionSortDuration = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Namespace: "storage",
Name: "sort_duration_seconds",
Help: "A histogram displaying the duration of sorting for each session in seconds.",
Buckets: common.DefaultDurationBuckets,
},
[]string{"file_type"},
)
func RecordSessionSortDuration(durMillis float64, fileType string) {
storageSessionSortDuration.WithLabelValues(fileType).Observe(durMillis / 1000.0)
}
var storageSessionEncryptionDuration = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Namespace: "storage",
Name: "encryption_duration_seconds",
Help: "A histogram displaying the duration of encoding for each session in seconds.",
Buckets: common.DefaultDurationBuckets,
},
[]string{"file_type"},
)
func RecordSessionEncryptionDuration(durMillis float64, fileType string) {
storageSessionEncryptionDuration.WithLabelValues(fileType).Observe(durMillis / 1000.0)
}
var storageSessionCompressDuration = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Namespace: "storage",
Name: "compress_duration_seconds",
Help: "A histogram displaying the duration of compressing for each session in seconds.",
Buckets: common.DefaultDurationBuckets,
},
[]string{"file_type"},
)
func RecordSessionCompressDuration(durMillis float64, fileType string) {
storageSessionCompressDuration.WithLabelValues(fileType).Observe(durMillis / 1000.0)
}
var storageSessionUploadDuration = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Namespace: "storage",
Name: "upload_duration_seconds",
Help: "A histogram displaying the duration of uploading to s3 for each session in seconds.",
Buckets: common.DefaultDurationBuckets,
},
[]string{"file_type"},
)
func RecordSessionUploadDuration(durMillis float64, fileType string) {
storageSessionUploadDuration.WithLabelValues(fileType).Observe(durMillis / 1000.0)
}
var storageSessionCompressionRatio = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Namespace: "storage",
Name: "compression_ratio",
Help: "A histogram displaying the compression ratio of mob files for each session.",
Buckets: common.DefaultDurationBuckets,
},
[]string{"file_type"},
)
func RecordSessionCompressionRatio(ratio float64, fileType string) {
storageSessionCompressionRatio.WithLabelValues(fileType).Observe(ratio)
}
func List() []prometheus.Collector {
return []prometheus.Collector{
storageSessionSize,
storageTotalSessions,
storageSessionReadDuration,
storageSessionSortDuration,
storageSessionEncryptionDuration,
storageSessionCompressDuration,
storageSessionUploadDuration,
storageSessionCompressionRatio,
}
}
func (s *storageImpl) List() []prometheus.Collector { return []prometheus.Collector{} }
func (s *storageImpl) RecordSessionSize(fileSize float64, fileType string) {}
func (s *storageImpl) IncreaseStorageTotalSessions() {}
func (s *storageImpl) RecordSkippedSessionSize(fileSize float64, fileType string) {}
func (s *storageImpl) IncreaseStorageTotalSkippedSessions() {}
func (s *storageImpl) RecordSessionReadDuration(durMillis float64, fileType string) {}
func (s *storageImpl) RecordSessionSortDuration(durMillis float64, fileType string) {}
func (s *storageImpl) RecordSessionEncryptionDuration(durMillis float64, fileType string) {}
func (s *storageImpl) RecordSessionCompressDuration(durMillis float64, fileType string) {}
func (s *storageImpl) RecordSessionUploadDuration(durMillis float64, fileType string) {}
func (s *storageImpl) RecordSessionCompressionRatio(ratio float64, fileType string) {}
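The Storage interface keeps the millisecond convention of the old helpers, each of which observed durMillis / 1000.0 into a *_seconds histogram. A sketch of timing one pipeline stage per file type under that assumption:

package main

import (
	"time"

	"openreplay/backend/pkg/metrics/storage"
)

// compressSession times a single stage and reports it in milliseconds,
// matching the durMillis parameters of the Storage interface.
func compressSession(m storage.Storage, fileType string) {
	start := time.Now()
	defer func() {
		m.RecordSessionCompressDuration(float64(time.Since(start).Milliseconds()), fileType)
	}()
	// ... compress the session file here ...
}

func main() {
	compressSession(storage.New("storage"), "dom") // "dom" is an illustrative file type
}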


@ -1,155 +0,0 @@
package videostorage
import (
"github.com/prometheus/client_golang/prometheus"
"openreplay/backend/pkg/metrics/common"
)
var storageSessionSize = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Namespace: "storage",
Name: "session_size_bytes",
Help: "A histogram displaying the size of each session file in bytes prior to any manipulation.",
Buckets: common.DefaultSizeBuckets,
},
[]string{"file_type"},
)
func RecordSessionSize(fileSize float64, fileType string) {
storageSessionSize.WithLabelValues(fileType).Observe(fileSize)
}
var storageTotalSessions = prometheus.NewCounter(
prometheus.CounterOpts{
Namespace: "storage",
Name: "sessions_total",
Help: "A counter displaying the total number of all processed sessions.",
},
)
func IncreaseStorageTotalSessions() {
storageTotalSessions.Inc()
}
var storageSkippedSessionSize = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Namespace: "storage",
Name: "session_size_bytes",
Help: "A histogram displaying the size of each skipped session file in bytes.",
Buckets: common.DefaultSizeBuckets,
},
[]string{"file_type"},
)
func RecordSkippedSessionSize(fileSize float64, fileType string) {
storageSkippedSessionSize.WithLabelValues(fileType).Observe(fileSize)
}
var storageTotalSkippedSessions = prometheus.NewCounter(
prometheus.CounterOpts{
Namespace: "storage",
Name: "sessions_skipped_total",
Help: "A counter displaying the total number of all skipped sessions because of the size limits.",
},
)
func IncreaseStorageTotalSkippedSessions() {
storageTotalSkippedSessions.Inc()
}
var storageSessionReadDuration = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Namespace: "storage",
Name: "read_duration_seconds",
Help: "A histogram displaying the duration of reading for each session in seconds.",
Buckets: common.DefaultDurationBuckets,
},
[]string{"file_type"},
)
func RecordSessionReadDuration(durMillis float64, fileType string) {
storageSessionReadDuration.WithLabelValues(fileType).Observe(durMillis / 1000.0)
}
var storageSessionSortDuration = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Namespace: "storage",
Name: "sort_duration_seconds",
Help: "A histogram displaying the duration of sorting for each session in seconds.",
Buckets: common.DefaultDurationBuckets,
},
[]string{"file_type"},
)
func RecordSessionSortDuration(durMillis float64, fileType string) {
storageSessionSortDuration.WithLabelValues(fileType).Observe(durMillis / 1000.0)
}
var storageSessionEncryptionDuration = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Namespace: "storage",
Name: "encryption_duration_seconds",
Help: "A histogram displaying the duration of encoding for each session in seconds.",
Buckets: common.DefaultDurationBuckets,
},
[]string{"file_type"},
)
func RecordSessionEncryptionDuration(durMillis float64, fileType string) {
storageSessionEncryptionDuration.WithLabelValues(fileType).Observe(durMillis / 1000.0)
}
var storageSessionCompressDuration = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Namespace: "storage",
Name: "compress_duration_seconds",
Help: "A histogram displaying the duration of compressing for each session in seconds.",
Buckets: common.DefaultDurationBuckets,
},
[]string{"file_type"},
)
func RecordSessionCompressDuration(durMillis float64, fileType string) {
storageSessionCompressDuration.WithLabelValues(fileType).Observe(durMillis / 1000.0)
}
var storageSessionUploadDuration = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Namespace: "storage",
Name: "upload_duration_seconds",
Help: "A histogram displaying the duration of uploading to s3 for each session in seconds.",
Buckets: common.DefaultDurationBuckets,
},
[]string{"file_type"},
)
func RecordSessionUploadDuration(durMillis float64, fileType string) {
storageSessionUploadDuration.WithLabelValues(fileType).Observe(durMillis / 1000.0)
}
var storageSessionCompressionRatio = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Namespace: "storage",
Name: "compression_ratio",
Help: "A histogram displaying the compression ratio of mob files for each session.",
Buckets: common.DefaultDurationBuckets,
},
[]string{"file_type"},
)
func RecordSessionCompressionRatio(ratio float64, fileType string) {
storageSessionCompressionRatio.WithLabelValues(fileType).Observe(ratio)
}
func List() []prometheus.Collector {
return []prometheus.Collector{
storageSessionSize,
storageTotalSessions,
storageSessionReadDuration,
storageSessionSortDuration,
storageSessionEncryptionDuration,
storageSessionCompressDuration,
storageSessionUploadDuration,
storageSessionCompressionRatio,
}
}


@ -1,11 +1,7 @@
package web
import (
"strconv"
"github.com/prometheus/client_golang/prometheus"
"openreplay/backend/pkg/metrics/common"
)
type Web interface {
@ -15,70 +11,11 @@ type Web interface {
List() []prometheus.Collector
}
type webImpl struct {
httpRequestSize *prometheus.HistogramVec
httpRequestDuration *prometheus.HistogramVec
httpTotalRequests prometheus.Counter
}
type webImpl struct{}
func New(serviceName string) Web {
return &webImpl{
httpRequestSize: newRequestSizeMetric(serviceName),
httpRequestDuration: newRequestDurationMetric(serviceName),
httpTotalRequests: newTotalRequestsMetric(serviceName),
}
}
func New(serviceName string) Web { return &webImpl{} }
func (w *webImpl) List() []prometheus.Collector {
return []prometheus.Collector{
w.httpRequestSize,
w.httpRequestDuration,
w.httpTotalRequests,
}
}
func newRequestSizeMetric(serviceName string) *prometheus.HistogramVec {
return prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Namespace: serviceName,
Name: "request_size_bytes",
Help: "A histogram displaying the size of each HTTP request in bytes.",
Buckets: common.DefaultSizeBuckets,
},
[]string{"url", "response_code"},
)
}
func (w *webImpl) RecordRequestSize(size float64, url string, code int) {
w.httpRequestSize.WithLabelValues(url, strconv.Itoa(code)).Observe(size)
}
func newRequestDurationMetric(serviceName string) *prometheus.HistogramVec {
return prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Namespace: serviceName,
Name: "request_duration_seconds",
Help: "A histogram displaying the duration of each HTTP request in seconds.",
Buckets: common.DefaultDurationBuckets,
},
[]string{"url", "response_code"},
)
}
func (w *webImpl) RecordRequestDuration(durMillis float64, url string, code int) {
w.httpRequestDuration.WithLabelValues(url, strconv.Itoa(code)).Observe(durMillis / 1000.0)
}
func newTotalRequestsMetric(serviceName string) prometheus.Counter {
return prometheus.NewCounter(
prometheus.CounterOpts{
Namespace: serviceName,
Name: "requests_total",
Help: "A counter displaying the number all HTTP requests.",
},
)
}
func (w *webImpl) IncreaseTotalRequests() {
w.httpTotalRequests.Inc()
}
func (w *webImpl) List() []prometheus.Collector { return []prometheus.Collector{} }
func (w *webImpl) RecordRequestSize(size float64, url string, code int) {}
func (w *webImpl) RecordRequestDuration(durMillis float64, url string, code int) {}
func (w *webImpl) IncreaseTotalRequests() {}
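The Web interface maps naturally onto HTTP middleware. A minimal sketch with net/http; the middleware is illustrative and assumes a 200 response for brevity (a real version would wrap the ResponseWriter to capture the status code):

package main

import (
	"log"
	"net/http"
	"time"

	"openreplay/backend/pkg/metrics/web"
)

func withMetrics(m web.Web, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		// RecordRequestDuration takes milliseconds, per the durMillis
		// convention of the old implementation shown above.
		m.RecordRequestDuration(float64(time.Since(start).Milliseconds()), r.URL.Path, http.StatusOK)
		m.RecordRequestSize(float64(r.ContentLength), r.URL.Path, http.StatusOK)
		m.IncreaseTotalRequests()
	})
}

func main() {
	m := web.New("http")
	log.Fatal(http.ListenAndServe(":8888", withMetrics(m, http.DefaultServeMux)))
}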


@ -11,6 +11,8 @@ import (
"openreplay/backend/pkg/metrics/database"
)
var ErrDisabledCache = errors.New("cache is disabled")
type Cache interface {
Set(project *Project) error
GetByID(projectID uint32) (*Project, error)
@ -18,10 +20,16 @@ type Cache interface {
}
type cacheImpl struct {
db *redis.Client
db *redis.Client
metrics database.Database
}
var ErrDisabledCache = errors.New("cache is disabled")
func NewCache(db *redis.Client, metrics database.Database) Cache {
return &cacheImpl{
db: db,
metrics: metrics,
}
}
func (c *cacheImpl) Set(project *Project) error {
if c.db == nil {
@ -38,8 +46,8 @@ func (c *cacheImpl) Set(project *Project) error {
if _, err = c.db.Redis.Set(fmt.Sprintf("project:key:%s", project.ProjectKey), projectBytes, time.Minute*10).Result(); err != nil {
return err
}
database.RecordRedisRequestDuration(float64(time.Now().Sub(start).Milliseconds()), "set", "project")
database.IncreaseRedisRequests("set", "project")
c.metrics.RecordRedisRequestDuration(float64(time.Now().Sub(start).Milliseconds()), "set", "project")
c.metrics.IncreaseRedisRequests("set", "project")
return nil
}
@ -52,8 +60,8 @@ func (c *cacheImpl) GetByID(projectID uint32) (*Project, error) {
if err != nil {
return nil, err
}
database.RecordRedisRequestDuration(float64(time.Now().Sub(start).Milliseconds()), "get", "project")
database.IncreaseRedisRequests("get", "project")
c.metrics.RecordRedisRequestDuration(float64(time.Now().Sub(start).Milliseconds()), "get", "project")
c.metrics.IncreaseRedisRequests("get", "project")
project := &Project{}
if err = json.Unmarshal([]byte(result), project); err != nil {
return nil, err
@ -70,15 +78,11 @@ func (c *cacheImpl) GetByKey(projectKey string) (*Project, error) {
if err != nil {
return nil, err
}
database.RecordRedisRequestDuration(float64(time.Now().Sub(start).Milliseconds()), "get", "project")
database.IncreaseRedisRequests("get", "project")
c.metrics.RecordRedisRequestDuration(float64(time.Now().Sub(start).Milliseconds()), "get", "project")
c.metrics.IncreaseRedisRequests("get", "project")
project := &Project{}
if err = json.Unmarshal([]byte(result), project); err != nil {
return nil, err
}
return project, nil
}
func NewCache(db *redis.Client) Cache {
return &cacheImpl{db: db}
}


@ -9,6 +9,7 @@ import (
"openreplay/backend/pkg/db/postgres/pool"
"openreplay/backend/pkg/db/redis"
"openreplay/backend/pkg/logger"
"openreplay/backend/pkg/metrics/database"
)
type Projects interface {
@ -24,8 +25,8 @@ type projectsImpl struct {
projectsByKeys cache.Cache
}
func New(log logger.Logger, db pool.Pool, redis *redis.Client) Projects {
cl := NewCache(redis)
func New(log logger.Logger, db pool.Pool, redis *redis.Client, metrics database.Database) Projects {
cl := NewCache(redis, metrics)
return &projectsImpl{
log: log,
db: db,


@ -2,6 +2,7 @@ package tracer
import (
"net/http"
"openreplay/backend/pkg/metrics/database"
db "openreplay/backend/pkg/db/postgres/pool"
"openreplay/backend/pkg/logger"
@ -14,7 +15,7 @@ type Tracer interface {
type tracerImpl struct{}
func NewTracer(log logger.Logger, conn db.Pool) (Tracer, error) {
func NewTracer(log logger.Logger, conn db.Pool, metrics database.Database) (Tracer, error) {
return &tracerImpl{}, nil
}


@ -3,6 +3,7 @@ package sessions
import (
"errors"
"openreplay/backend/pkg/db/redis"
"openreplay/backend/pkg/metrics/database"
)
type cacheImpl struct{}
@ -25,6 +26,6 @@ func (c *cacheImpl) Get(sessionID uint64) (*Session, error) {
var ErrDisabledCache = errors.New("cache is disabled")
func NewCache(db *redis.Client) Cache {
func NewCache(db *redis.Client, metrics database.Database) Cache {
return &cacheImpl{}
}


@ -3,6 +3,7 @@ package sessions
import (
"context"
"fmt"
"openreplay/backend/pkg/metrics/database"
"openreplay/backend/pkg/db/postgres/pool"
"openreplay/backend/pkg/db/redis"
@ -38,12 +39,12 @@ type sessionsImpl struct {
projects projects.Projects
}
func New(log logger.Logger, db pool.Pool, proj projects.Projects, redis *redis.Client) Sessions {
func New(log logger.Logger, db pool.Pool, proj projects.Projects, redis *redis.Client, metrics database.Database) Sessions {
return &sessionsImpl{
log: log,
cache: NewInMemoryCache(log, NewCache(redis)),
cache: NewInMemoryCache(log, NewCache(redis, metrics)),
storage: NewStorage(db),
updates: NewSessionUpdates(log, db),
updates: NewSessionUpdates(log, db, metrics),
projects: proj,
}
}


@ -27,13 +27,15 @@ type updatesImpl struct {
log logger.Logger
db pool.Pool
updates map[uint64]*sessionUpdate
metrics database.Database
}
func NewSessionUpdates(log logger.Logger, db pool.Pool) Updates {
func NewSessionUpdates(log logger.Logger, db pool.Pool, metrics database.Database) Updates {
return &updatesImpl{
log: log,
db: db,
updates: make(map[uint64]*sessionUpdate),
metrics: metrics,
}
}
@ -94,7 +96,7 @@ func (u *updatesImpl) Commit() {
}
}
// Record batch size
database.RecordBatchElements(float64(b.Len()))
u.metrics.RecordBatchElements(float64(b.Len()))
start := time.Now()
@ -121,7 +123,7 @@ func (u *updatesImpl) Commit() {
}
}
}
database.RecordBatchInsertDuration(float64(time.Now().Sub(start).Milliseconds()))
u.metrics.RecordBatchInsertDuration(float64(time.Now().Sub(start).Milliseconds()))
u.updates = make(map[uint64]*sessionUpdate)
}


@ -1,12 +1,14 @@
package spot
import (
"openreplay/backend/pkg/metrics/database"
"time"
"openreplay/backend/internal/config/spot"
"openreplay/backend/pkg/db/postgres/pool"
"openreplay/backend/pkg/flakeid"
"openreplay/backend/pkg/logger"
spotMetrics "openreplay/backend/pkg/metrics/spot"
"openreplay/backend/pkg/metrics/web"
"openreplay/backend/pkg/objectstorage/store"
"openreplay/backend/pkg/server/api"
@ -26,16 +28,16 @@ type ServicesBuilder struct {
SpotsAPI api.Handlers
}
func NewServiceBuilder(log logger.Logger, cfg *spot.Config, webMetrics web.Web, pgconn pool.Pool, prefix string) (*ServicesBuilder, error) {
func NewServiceBuilder(log logger.Logger, cfg *spot.Config, webMetrics web.Web, spotMetrics spotMetrics.Spot, dbMetrics database.Database, pgconn pool.Pool, prefix string) (*ServicesBuilder, error) {
objStore, err := store.NewStore(&cfg.ObjectsConfig)
if err != nil {
return nil, err
}
flaker := flakeid.NewFlaker(cfg.WorkerID)
spots := service.NewSpots(log, pgconn, flaker)
transcoder := transcoder.NewTranscoder(cfg, log, objStore, pgconn, spots)
transcoder := transcoder.NewTranscoder(cfg, log, objStore, pgconn, spots, spotMetrics)
keys := keys.NewKeys(log, pgconn)
auditrail, err := tracer.NewTracer(log, pgconn)
auditrail, err := tracer.NewTracer(log, pgconn, dbMetrics)
if err != nil {
return nil, err
}


@ -16,7 +16,7 @@ import (
"openreplay/backend/internal/config/spot"
"openreplay/backend/pkg/db/postgres/pool"
"openreplay/backend/pkg/logger"
metrics "openreplay/backend/pkg/metrics/spot"
spotMetrics "openreplay/backend/pkg/metrics/spot"
"openreplay/backend/pkg/objectstorage"
workers "openreplay/backend/pkg/pool"
"openreplay/backend/pkg/spot/service"
@ -39,9 +39,10 @@ type transcoderImpl struct {
spots service.Spots
prepareWorkers workers.WorkerPool
transcodeWorkers workers.WorkerPool
metrics spotMetrics.Spot
}
func NewTranscoder(cfg *spot.Config, log logger.Logger, objStorage objectstorage.ObjectStorage, conn pool.Pool, spots service.Spots) Transcoder {
func NewTranscoder(cfg *spot.Config, log logger.Logger, objStorage objectstorage.ObjectStorage, conn pool.Pool, spots service.Spots, metrics spotMetrics.Spot) Transcoder {
tnsc := &transcoderImpl{
cfg: cfg,
log: log,
@ -114,7 +115,7 @@ func (t *transcoderImpl) doneTask(task *Task) {
}
func (t *transcoderImpl) process(task *Task) {
metrics.IncreaseVideosTotal()
t.metrics.IncreaseVideosTotal()
//spotID := task.SpotID
t.log.Info(context.Background(), "Processing spot %s", task.SpotID)
@ -200,11 +201,11 @@ func (t *transcoderImpl) downloadSpotVideo(spotID uint64, path string) error {
if fileInfo, err := originVideo.Stat(); err != nil {
t.log.Error(context.Background(), "Failed to get file info: %v", err)
} else {
metrics.RecordOriginalVideoSize(float64(fileInfo.Size()))
t.metrics.RecordOriginalVideoSize(float64(fileInfo.Size()))
}
originVideo.Close()
metrics.RecordOriginalVideoDownloadDuration(time.Since(start).Seconds())
t.metrics.RecordOriginalVideoDownloadDuration(time.Since(start).Seconds())
t.log.Info(context.Background(), "Saved origin video to disk, spot: %d in %v sec", spotID, time.Since(start).Seconds())
return nil
@ -227,8 +228,8 @@ func (t *transcoderImpl) cropSpotVideo(spotID uint64, crop []int, path string) e
if err != nil {
return fmt.Errorf("failed to execute command: %v, stderr: %v", err, stderr.String())
}
metrics.IncreaseVideosCropped()
metrics.RecordCroppingDuration(time.Since(start).Seconds())
t.metrics.IncreaseVideosCropped()
t.metrics.RecordCroppingDuration(time.Since(start).Seconds())
t.log.Info(context.Background(), "Cropped spot %d in %v", spotID, time.Since(start).Seconds())
@ -246,7 +247,7 @@ func (t *transcoderImpl) cropSpotVideo(spotID uint64, crop []int, path string) e
if fileInfo, err := video.Stat(); err != nil {
t.log.Error(context.Background(), "Failed to get file info: %v", err)
} else {
metrics.RecordCroppedVideoSize(float64(fileInfo.Size()))
t.metrics.RecordCroppedVideoSize(float64(fileInfo.Size()))
}
err = t.objStorage.Upload(video, fmt.Sprintf("%d/video.webm", spotID), "video/webm", objectstorage.NoContentEncoding, objectstorage.NoCompression)
@ -254,7 +255,7 @@ func (t *transcoderImpl) cropSpotVideo(spotID uint64, crop []int, path string) e
return fmt.Errorf("failed to upload cropped video: %v", err)
}
metrics.RecordCroppedVideoUploadDuration(time.Since(start).Seconds())
t.metrics.RecordCroppedVideoUploadDuration(time.Since(start).Seconds())
t.log.Info(context.Background(), "Uploaded cropped spot %d in %v", spotID, time.Since(start).Seconds())
return nil
@ -279,8 +280,8 @@ func (t *transcoderImpl) transcodeSpotVideo(spotID uint64, path string) (string,
t.log.Error(context.Background(), "Failed to execute command: %v, stderr: %v", err, stderr.String())
return "", err
}
metrics.IncreaseVideosTranscoded()
metrics.RecordTranscodingDuration(time.Since(start).Seconds())
t.metrics.IncreaseVideosTranscoded()
t.metrics.RecordTranscodingDuration(time.Since(start).Seconds())
t.log.Info(context.Background(), "Transcoded spot %d in %v", spotID, time.Since(start).Seconds())
start = time.Now()
@ -327,7 +328,7 @@ func (t *transcoderImpl) transcodeSpotVideo(spotID uint64, path string) (string,
return "", err
}
}
metrics.RecordTranscodedVideoUploadDuration(time.Since(start).Seconds())
t.metrics.RecordTranscodedVideoUploadDuration(time.Since(start).Seconds())
t.log.Info(context.Background(), "Uploaded chunks for spot %d in %v", spotID, time.Since(start).Seconds())
return strings.Join(lines, "\n"), nil
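One unit subtlety in the hunks above: the Spot interface names its duration parameters durMillis, while these call sites pass time.Since(start).Seconds(). With the no-op stub it makes no difference, but a small hypothetical wrapper would pin the unit in one place for any real implementation:

// recordMillisSince is illustrative, not part of the diff. It converts
// an elapsed time to milliseconds before handing it to a Record* method.
func recordMillisSince(start time.Time, record func(durMillis float64)) {
	record(float64(time.Since(start).Milliseconds()))
}

// Usage at a call site such as cropSpotVideo:
//   start := time.Now()
//   ... crop the video ...
//   recordMillisSince(start, t.metrics.RecordCroppingDuration)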

ee/api/.gitignore (vendored, 8 changed lines)

@ -211,7 +211,7 @@ Pipfile.lock
/chalicelib/core/metadata.py
/chalicelib/core/mobile.py
/chalicelib/core/saved_search.py
/chalicelib/core/sessions/sessions.py
/chalicelib/core/sessions/sessions_pg.py
/chalicelib/core/sessions/sessions_ch.py
/chalicelib/core/sessions/sessions_devtool/sessions_devtool.py
/chalicelib/core/sessions/sessions_favorite/sessions_favorite.py
@ -225,8 +225,7 @@ Pipfile.lock
/chalicelib/core/sessions/unprocessed_sessions.py
/chalicelib/core/metrics/modules
/chalicelib/core/socket_ios.py
/chalicelib/core/sourcemaps.py
/chalicelib/core/sourcemaps_parser.py
/chalicelib/core/sourcemaps
/chalicelib/core/tags.py
/chalicelib/saml
/chalicelib/utils/__init__.py
@ -235,7 +234,6 @@ Pipfile.lock
/chalicelib/utils/dev.py
/chalicelib/utils/email_handler.py
/chalicelib/utils/email_helper.py
/chalicelib/utils/errors_helper.py
/chalicelib/utils/event_filter_definition.py
/chalicelib/utils/github_client_v3.py
/chalicelib/utils/helper.py
@ -287,7 +285,7 @@ Pipfile.lock
/chalicelib/core/alerts/alerts_listener.py
/chalicelib/core/alerts/modules/helpers.py
/chalicelib/core/errors/modules/*
/chalicelib/core/errors/errors.py
/chalicelib/core/errors/errors_pg.py
/chalicelib/core/errors/errors_ch.py
/chalicelib/core/errors/errors_details.py
/chalicelib/utils/contextual_validators.py


@ -1,8 +1,31 @@
FROM python:3.12-alpine
LABEL Maintainer="KRAIEM Taha Yassine<tahayk2@gmail.com>"
RUN apk add --no-cache build-base libressl libffi-dev libressl-dev libxslt-dev libxml2-dev xmlsec-dev xmlsec tini
LABEL maintainer="KRAIEM Taha Yassine<tahayk2@gmail.com>"
WORKDIR /app
COPY . .
RUN mv env.default .env
RUN apk add --no-cache tini xmlsec && \
export UV_SYSTEM_PYTHON=true && \
pip install uv && \
apk add --no-cache --virtual .build-deps \
build-base \
libressl \
libffi-dev \
libressl-dev \
libxslt-dev \
libxml2-dev \
xmlsec-dev && \
uv pip install --no-cache-dir --upgrade -r requirements.txt && \
# Solve the libxml2 library version mismatch by reinstalling lxml with matching libxml2
uv pip uninstall lxml && \
uv pip install --no-cache-dir --no-binary lxml lxml --force-reinstall && \
# Create non-root user
adduser -u 1001 openreplay -D && \
# Cleanup build dependencies
apk del .build-deps
ARG envarg
ARG GIT_SHA
ENV SOURCE_MAP_VERSION=0.7.4 \
APP_NAME=chalice \
LISTEN_PORT=8000 \
@ -10,17 +33,12 @@ ENV SOURCE_MAP_VERSION=0.7.4 \
ENTERPRISE_BUILD=${envarg} \
GIT_SHA=$GIT_SHA
WORKDIR /work
COPY requirements.txt ./requirements.txt
RUN pip install --no-cache-dir --upgrade -r requirements.txt
# This code is used to solve 'lxml & xmlsec libxml2 library version mismatch' error
RUN pip uninstall -y lxml && pip install lxml
WORKDIR /app
COPY . .
RUN mv env.default .env
RUN adduser -u 1001 openreplay -D
USER 1001
ENTRYPOINT ["/sbin/tini", "--"]
CMD ./entrypoint.sh
CMD ["./entrypoint.sh"]


@ -21,7 +21,6 @@ gunicorn = "==23.0.0"
python-decouple = "==3.8"
pydantic = {extras = ["email"], version = "==2.10.6"}
apscheduler = "==3.11.0"
lxml = "!=4.7.0,<=5.2.1,>=4.6.5"
python3-saml = "==1.16.0"
python-multipart = "==0.0.20"
redis = "==5.2.1"


@ -1,16 +1,3 @@
from decouple import config
TENANT_ID = "tenant_id"
if config("EXP_ALERTS", cast=bool, default=False):
if config("EXP_SESSIONS_SEARCH", cast=bool, default=False):
from chalicelib.core.sessions import sessions
else:
from chalicelib.core.sessions import sessions_ch as sessions
else:
if config("EXP_SESSIONS_SEARCH", cast=bool, default=False):
from chalicelib.core.sessions import sessions_ch as sessions
else:
from chalicelib.core.sessions import sessions
from . import helpers as alert_helpers


@ -86,7 +86,8 @@ def __generic_query(typename, value_length=None):
ORDER BY value"""
if value_length is None or value_length > 2:
return f"""(SELECT DISTINCT value, type
return f"""SELECT DISTINCT ON(value, type) value, type
FROM ((SELECT DISTINCT value, type
FROM {TABLE}
WHERE
project_id = %(project_id)s
@ -102,7 +103,7 @@ def __generic_query(typename, value_length=None):
AND type='{typename.upper()}'
AND value ILIKE %(value)s
ORDER BY value
LIMIT 5);"""
LIMIT 5)) AS raw;"""
return f"""SELECT DISTINCT value, type
FROM {TABLE}
WHERE
@ -257,7 +258,7 @@ def __search_metadata(project_id, value, key=None, source=None):
WHERE project_id = %(project_id)s
AND {colname} ILIKE %(svalue)s LIMIT 5)""")
with ch_client.ClickHouseClient() as cur:
query = cur.format(query=f"""SELECT key, value, 'METADATA' AS TYPE
query = cur.format(query=f"""SELECT DISTINCT ON(key, value) key, value, 'METADATA' AS TYPE
FROM({" UNION ALL ".join(sub_from)}) AS all_metas
LIMIT 5;""", parameters={"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value)})


@ -4,11 +4,11 @@ from decouple import config
logger = logging.getLogger(__name__)
from . import errors as errors_legacy
from . import errors_pg as errors_legacy
if config("EXP_ERRORS_SEARCH", cast=bool, default=False):
logger.info(">>> Using experimental error search")
from . import errors_ch as errors
from . import errors_details_exp as errors_details
else:
from . import errors
from . import errors_pg as errors


@ -1,31 +1,12 @@
from decouple import config
import logging
import schemas
from . import errors
from chalicelib.core import metrics, metadata
from chalicelib.core import sessions
from chalicelib.core.errors.modules import errors_helper
from chalicelib.utils import ch_client, exp_ch_helper
from chalicelib.utils import pg_client, helper
from chalicelib.utils import helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.metrics_helper import get_step_size
def __flatten_sort_key_count_version(data, merge_nested=False):
if data is None:
return []
return sorted(
[
{
"name": f"{o[0][0][0]}@{v[0]}",
"count": v[1]
} for o in data for v in o[2]
],
key=lambda o: o["count"], reverse=True) if merge_nested else \
[
{
"name": o[0][0][0],
"count": o[1][0][0],
} for o in data
]
logger = logging.getLogger(__name__)
def __transform_map_to_tag(data, key1, key2, requested_key):
@ -89,13 +70,7 @@ def get_details(project_id, error_id, user_id, **data):
MAIN_ERR_SESS_TABLE = exp_ch_helper.get_main_js_errors_sessions_table(0)
MAIN_EVENTS_TABLE = exp_ch_helper.get_main_events_table(0)
ch_sub_query24 = errors.__get_basic_constraints(startTime_arg_name="startDate24", endTime_arg_name="endDate24")
ch_sub_query24.append("error_id = %(error_id)s")
ch_sub_query30 = errors.__get_basic_constraints(startTime_arg_name="startDate30", endTime_arg_name="endDate30",
project_key="errors.project_id")
ch_sub_query30.append("error_id = %(error_id)s")
ch_basic_query = errors.__get_basic_constraints(time_constraint=False)
ch_basic_query = errors_helper.__get_basic_constraints_ch(time_constraint=False)
ch_basic_query.append("error_id = %(error_id)s")
with ch_client.ClickHouseClient() as ch:
@ -105,9 +80,9 @@ def get_details(project_id, error_id, user_id, **data):
data["endDate30"] = TimeUTC.now()
density24 = int(data.get("density24", 24))
step_size24 = errors.get_step_size(data["startDate24"], data["endDate24"], density24)
step_size24 = get_step_size(data["startDate24"], data["endDate24"], density24)
density30 = int(data.get("density30", 30))
step_size30 = errors.get_step_size(data["startDate30"], data["endDate30"], density30)
step_size30 = get_step_size(data["startDate30"], data["endDate30"], density30)
params = {
"startDate24": data['startDate24'],
"endDate24": data['endDate24'],
@ -121,27 +96,25 @@ def get_details(project_id, error_id, user_id, **data):
main_ch_query = f"""\
WITH pre_processed AS (SELECT error_id,
name,
message,
toString(`$properties`.name) AS name,
toString(`$properties`.message) AS message,
session_id,
datetime,
user_id,
user_browser,
user_browser_version,
user_os,
user_os_version,
user_device_type,
user_device,
user_country,
error_tags_keys,
error_tags_values
created_at AS datetime,
`$user_id` AS user_id,
`$browser` AS user_browser,
`$browser_version` AS user_browser_version,
`$os` AS user_os,
`$os_version` AS user_os_version,
toString(`$properties`.user_device_type) AS user_device_type,
toString(`$properties`.user_device) AS user_device,
`$country` AS user_country
FROM {MAIN_ERR_SESS_TABLE} AS errors
WHERE {" AND ".join(ch_basic_query)}
)
SELECT %(error_id)s AS error_id, name, message,users,
first_occurrence,last_occurrence,last_session_id,
sessions,browsers_partition,os_partition,device_partition,
country_partition,chart24,chart30,custom_tags
country_partition,chart24,chart30
FROM (SELECT error_id,
name,
message
@ -156,8 +129,7 @@ def get_details(project_id, error_id, user_id, **data):
INNER JOIN (SELECT toUnixTimestamp(max(datetime)) * 1000 AS last_occurrence,
toUnixTimestamp(min(datetime)) * 1000 AS first_occurrence
FROM pre_processed) AS time_details ON TRUE
INNER JOIN (SELECT session_id AS last_session_id,
arrayMap((key, value)->(map(key, value)), error_tags_keys, error_tags_values) AS custom_tags
INNER JOIN (SELECT session_id AS last_session_id
FROM pre_processed
ORDER BY datetime DESC
LIMIT 1) AS last_session_details ON TRUE
@ -203,27 +175,35 @@ def get_details(project_id, error_id, user_id, **data):
ORDER BY count DESC) AS count_per_country_details
) AS mapped_country_details ON TRUE
INNER JOIN (SELECT groupArray(map('timestamp', timestamp, 'count', count)) AS chart24
FROM (SELECT toUnixTimestamp(toStartOfInterval(datetime, INTERVAL 3756 second)) *
1000 AS timestamp,
FROM (SELECT gs.generate_series AS timestamp,
COUNT(DISTINCT session_id) AS count
FROM {MAIN_EVENTS_TABLE} AS errors
WHERE {" AND ".join(ch_sub_query24)}
FROM generate_series(%(startDate24)s, %(endDate24)s, %(step_size24)s) AS gs
LEFT JOIN {MAIN_EVENTS_TABLE} AS errors ON(TRUE)
WHERE project_id = toUInt16(%(project_id)s)
AND `$event_name` = 'ERROR'
AND events.created_at >= toDateTime(timestamp / 1000)
AND events.created_at < toDateTime((timestamp + %(step_size24)s) / 1000)
AND error_id = %(error_id)s
GROUP BY timestamp
ORDER BY timestamp) AS chart_details
) AS chart_details24 ON TRUE
INNER JOIN (SELECT groupArray(map('timestamp', timestamp, 'count', count)) AS chart30
FROM (SELECT toUnixTimestamp(toStartOfInterval(datetime, INTERVAL 3724 second)) *
1000 AS timestamp,
FROM (SELECT gs.generate_series AS timestamp,
COUNT(DISTINCT session_id) AS count
FROM {MAIN_EVENTS_TABLE} AS errors
WHERE {" AND ".join(ch_sub_query30)}
FROM generate_series(%(startDate30)s, %(endDate30)s, %(step_size30)s) AS gs
LEFT JOIN {MAIN_EVENTS_TABLE} AS errors ON(TRUE)
WHERE project_id = toUInt16(%(project_id)s)
AND `$event_name` = 'ERROR'
AND events.created_at >= toDateTime(timestamp / 1000)
AND events.created_at < toDateTime((timestamp + %(step_size30)s) / 1000)
AND error_id = %(error_id)s
GROUP BY timestamp
ORDER BY timestamp) AS chart_details
) AS chart_details30 ON TRUE;"""
# print("--------------------")
# print(ch.format(main_ch_query, params))
# print("--------------------")
logger.debug("--------------------")
logger.debug(ch.format(query=main_ch_query, parameters=params))
logger.debug("--------------------")
row = ch.execute(query=main_ch_query, parameters=params)
if len(row) == 0:
return {"errors": ["error not found"]}
@ -240,12 +220,12 @@ def get_details(project_id, error_id, user_id, **data):
ORDER BY datetime DESC
LIMIT 1;"""
params = {"project_id": project_id, "session_id": row["last_session_id"], "userId": user_id}
# print("--------------------")
# print(ch.format(query, params))
# print("--------------------")
logger.debug("--------------------")
logger.debug(ch.format(query=query, parameters=params))
logger.debug("--------------------")
status = ch.execute(query=query, parameters=params)
if status is not None:
if status is not None and len(status) > 0:
status = status[0]
row["favorite"] = status.pop("favorite")
row["viewed"] = status.pop("viewed")
@ -254,8 +234,4 @@ def get_details(project_id, error_id, user_id, **data):
row["last_hydrated_session"] = None
row["favorite"] = False
row["viewed"] = False
row["chart24"] = metrics.__complete_missing_steps(start_time=data["startDate24"], end_time=data["endDate24"],
density=density24, rows=row["chart24"], neutral={"count": 0})
row["chart30"] = metrics.__complete_missing_steps(start_time=data["startDate30"], end_time=data["endDate30"],
density=density30, rows=row["chart30"], neutral={"count": 0})
return {"data": helper.dict_to_camel_case(row)}


@ -2,6 +2,7 @@ import logging
import redis
import requests
# from confluent_kafka.admin import AdminClient
from decouple import config
@ -28,7 +29,6 @@ HEALTH_ENDPOINTS = {
"http": app_connection_string("http-openreplay", 8888, "metrics"),
"ingress-nginx": app_connection_string("ingress-nginx-openreplay", 80, "healthz"),
"integrations": app_connection_string("integrations-openreplay", 8888, "metrics"),
"peers": app_connection_string("peers-openreplay", 8888, "health"),
"sink": app_connection_string("sink-openreplay", 8888, "metrics"),
"sourcemapreader": app_connection_string(
"sourcemapreader-openreplay", 8888, "health"
@ -40,9 +40,7 @@ HEALTH_ENDPOINTS = {
def __check_database_pg(*_):
fail_response = {
"health": False,
"details": {
"errors": ["Postgres health-check failed"]
}
"details": {"errors": ["Postgres health-check failed"]},
}
with pg_client.PostgresClient() as cur:
try:
@ -64,29 +62,26 @@ def __check_database_pg(*_):
"details": {
# "version": server_version["server_version"],
# "schema": schema_version["version"]
}
},
}
def __always_healthy(*_):
return {
"health": True,
"details": {}
}
return {"health": True, "details": {}}
def __check_be_service(service_name):
def fn(*_):
fail_response = {
"health": False,
"details": {
"errors": ["server health-check failed"]
}
"details": {"errors": ["server health-check failed"]},
}
try:
results = requests.get(HEALTH_ENDPOINTS.get(service_name), timeout=2)
if results.status_code != 200:
logger.error(f"!! issue with the {service_name}-health code:{results.status_code}")
logger.error(
f"!! issue with the {service_name}-health code:{results.status_code}"
)
logger.error(results.text)
# fail_response["details"]["errors"].append(results.text)
return fail_response
@ -104,10 +99,7 @@ def __check_be_service(service_name):
logger.error("couldn't get response")
# fail_response["details"]["errors"].append(str(e))
return fail_response
return {
"health": True,
"details": {}
}
return {"health": True, "details": {}}
return fn
@ -115,7 +107,7 @@ def __check_be_service(service_name):
def __check_redis(*_):
fail_response = {
"health": False,
"details": {"errors": ["server health-check failed"]}
"details": {"errors": ["server health-check failed"]},
}
if config("REDIS_STRING", default=None) is None:
# fail_response["details"]["errors"].append("REDIS_STRING not defined in env-vars")
@ -134,16 +126,14 @@ def __check_redis(*_):
"health": True,
"details": {
# "version": r.execute_command('INFO')['redis_version']
}
},
}
def __check_SSL(*_):
fail_response = {
"health": False,
"details": {
"errors": ["SSL Certificate health-check failed"]
}
"details": {"errors": ["SSL Certificate health-check failed"]},
}
try:
requests.get(config("SITE_URL"), verify=True, allow_redirects=True)
@ -151,10 +141,7 @@ def __check_SSL(*_):
logger.error("!! health failed: SSL Certificate")
logger.exception(e)
return fail_response
return {
"health": True,
"details": {}
}
return {"health": True, "details": {}}
def __get_sessions_stats(tenant_id, *_):
@ -162,31 +149,34 @@ def __get_sessions_stats(tenant_id, *_):
constraints = ["projects.deleted_at IS NULL"]
if tenant_id:
constraints.append("tenant_id=%(tenant_id)s")
query = cur.mogrify(f"""SELECT COALESCE(SUM(sessions_count),0) AS s_c,
query = cur.mogrify(
f"""SELECT COALESCE(SUM(sessions_count),0) AS s_c,
COALESCE(SUM(events_count),0) AS e_c
FROM public.projects_stats
INNER JOIN public.projects USING(project_id)
WHERE {" AND ".join(constraints)};""",
{"tenant_id": tenant_id})
{"tenant_id": tenant_id},
)
cur.execute(query)
row = cur.fetchone()
return {
"numberOfSessionsCaptured": row["s_c"],
"numberOfEventCaptured": row["e_c"]
}
return {"numberOfSessionsCaptured": row["s_c"], "numberOfEventCaptured": row["e_c"]}
def get_health(tenant_id=None):
health_map = {
"databases": {
"postgres": __check_database_pg,
"clickhouse": __check_database_ch
"clickhouse": __check_database_ch,
},
"ingestionPipeline": {
**({"redis": __check_redis} if config("REDIS_STRING", default=None)
and len(config("REDIS_STRING")) > 0 else {}),
**(
{"redis": __check_redis}
if config("REDIS_STRING", default=None)
and len(config("REDIS_STRING")) > 0
else {}
),
# "kafka": __check_kafka
"kafka": __always_healthy
"kafka": __always_healthy,
},
"backendServices": {
"alerts": __check_be_service("alerts"),
@ -200,14 +190,13 @@ def get_health(tenant_id=None):
"http": __check_be_service("http"),
"ingress-nginx": __always_healthy,
"integrations": __check_be_service("integrations"),
"peers": __check_be_service("peers"),
# "quickwit": __check_be_service("quickwit"),
"sink": __check_be_service("sink"),
"sourcemapreader": __check_be_service("sourcemapreader"),
"storage": __check_be_service("storage")
"storage": __check_be_service("storage"),
},
"details": __get_sessions_stats,
"ssl": __check_SSL
"ssl": __check_SSL,
}
return __process_health(tenant_id=tenant_id, health_map=health_map)
@ -219,10 +208,16 @@ def __process_health(tenant_id, health_map):
response.pop(parent_key)
elif isinstance(health_map[parent_key], dict):
for element_key in health_map[parent_key]:
if config(f"SKIP_H_{parent_key.upper()}_{element_key.upper()}", cast=bool, default=False):
if config(
f"SKIP_H_{parent_key.upper()}_{element_key.upper()}",
cast=bool,
default=False,
):
response[parent_key].pop(element_key)
else:
response[parent_key][element_key] = health_map[parent_key][element_key](tenant_id)
response[parent_key][element_key] = health_map[parent_key][
element_key
](tenant_id)
else:
response[parent_key] = health_map[parent_key](tenant_id)
return response
@ -230,7 +225,8 @@ def __process_health(tenant_id, health_map):
def cron():
with pg_client.PostgresClient() as cur:
query = cur.mogrify("""SELECT projects.project_id,
query = cur.mogrify(
"""SELECT projects.project_id,
projects.created_at,
projects.sessions_last_check_at,
projects.first_recorded_session_at,
@ -238,7 +234,8 @@ def cron():
FROM public.projects
LEFT JOIN public.projects_stats USING (project_id)
WHERE projects.deleted_at IS NULL
ORDER BY project_id;""")
ORDER BY project_id;"""
)
cur.execute(query)
rows = cur.fetchall()
for r in rows:
@ -259,20 +256,24 @@ def cron():
count_start_from = r["last_update_at"]
count_start_from = TimeUTC.datetime_to_timestamp(count_start_from)
params = {"project_id": r["project_id"],
"start_ts": count_start_from,
"end_ts": TimeUTC.now(),
"sessions_count": 0,
"events_count": 0}
params = {
"project_id": r["project_id"],
"start_ts": count_start_from,
"end_ts": TimeUTC.now(),
"sessions_count": 0,
"events_count": 0,
}
query = cur.mogrify("""SELECT COUNT(1) AS sessions_count,
query = cur.mogrify(
"""SELECT COUNT(1) AS sessions_count,
COALESCE(SUM(events_count),0) AS events_count
FROM public.sessions
WHERE project_id=%(project_id)s
AND start_ts>=%(start_ts)s
AND start_ts<=%(end_ts)s
AND duration IS NOT NULL;""",
params)
params,
)
cur.execute(query)
row = cur.fetchone()
if row is not None:
@ -280,65 +281,77 @@ def cron():
params["events_count"] = row["events_count"]
if insert:
query = cur.mogrify("""INSERT INTO public.projects_stats(project_id, sessions_count, events_count, last_update_at)
query = cur.mogrify(
"""INSERT INTO public.projects_stats(project_id, sessions_count, events_count, last_update_at)
VALUES (%(project_id)s, %(sessions_count)s, %(events_count)s, (now() AT TIME ZONE 'utc'::text));""",
params)
params,
)
else:
query = cur.mogrify("""UPDATE public.projects_stats
query = cur.mogrify(
"""UPDATE public.projects_stats
SET sessions_count=sessions_count+%(sessions_count)s,
events_count=events_count+%(events_count)s,
last_update_at=(now() AT TIME ZONE 'utc'::text)
WHERE project_id=%(project_id)s;""",
params)
params,
)
cur.execute(query)
# this cron is used to correct the sessions&events count every week
def weekly_cron():
with pg_client.PostgresClient(long_query=True) as cur:
query = cur.mogrify("""SELECT project_id,
query = cur.mogrify(
"""SELECT project_id,
projects_stats.last_update_at
FROM public.projects
LEFT JOIN public.projects_stats USING (project_id)
WHERE projects.deleted_at IS NULL
ORDER BY project_id;""")
ORDER BY project_id;"""
)
cur.execute(query)
rows = cur.fetchall()
for r in rows:
if r["last_update_at"] is None:
continue
params = {"project_id": r["project_id"],
"end_ts": TimeUTC.now(),
"sessions_count": 0,
"events_count": 0}
params = {
"project_id": r["project_id"],
"end_ts": TimeUTC.now(),
"sessions_count": 0,
"events_count": 0,
}
query = cur.mogrify("""SELECT COUNT(1) AS sessions_count,
query = cur.mogrify(
"""SELECT COUNT(1) AS sessions_count,
COALESCE(SUM(events_count),0) AS events_count
FROM public.sessions
WHERE project_id=%(project_id)s
AND start_ts<=%(end_ts)s
AND duration IS NOT NULL;""",
params)
params,
)
cur.execute(query)
row = cur.fetchone()
if row is not None:
params["sessions_count"] = row["sessions_count"]
params["events_count"] = row["events_count"]
query = cur.mogrify("""UPDATE public.projects_stats
query = cur.mogrify(
"""UPDATE public.projects_stats
SET sessions_count=%(sessions_count)s,
events_count=%(events_count)s,
last_update_at=(now() AT TIME ZONE 'utc'::text)
WHERE project_id=%(project_id)s;""",
params)
params,
)
cur.execute(query)
def __check_database_ch(*_):
fail_response = {
"health": False,
"details": {"errors": ["server health-check failed"]}
"details": {"errors": ["server health-check failed"]},
}
with ch_client.ClickHouseClient() as ch:
try:
@@ -348,9 +361,11 @@ def __check_database_ch(*_):
logger.exception(e)
return fail_response
schema_version = ch.execute("""SELECT 1
schema_version = ch.execute(
"""SELECT 1
FROM system.functions
WHERE name = 'openreplay_version';""")
WHERE name = 'openreplay_version';"""
)
if len(schema_version) > 0:
schema_version = ch.execute("SELECT openreplay_version() AS version;")
schema_version = schema_version[0]["version"]
@@ -365,9 +380,10 @@ def __check_database_ch(*_):
# "version": server_version[0]["server_version"],
# "schema": schema_version,
# **errors
- }
+ },
}
# def __check_kafka(*_):
# fail_response = {
# "health": False,

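For context on the pattern above: cron() tops up public.projects_stats with whatever was recorded since each project's last_update_at (inserting the row if it is missing), while weekly_cron() recomputes the totals from scratch and overwrites them, correcting any drift the incremental path accumulates; __check_database_ch() probes system.functions for the openreplay_version() UDF before calling it, so a missing schema degrades into a clean failure response instead of an exception. A minimal sketch of the two-cron split — fetch_counts and stats are hypothetical stand-ins for the SQL above, not OpenReplay's API:

    # Incremental top-up: count only what happened since the checkpoint.
    def incremental_update(stats: dict, fetch_counts) -> dict:
        delta = fetch_counts(start_ts=stats["last_update_at"])
        stats["sessions_count"] += delta["sessions_count"]
        stats["events_count"] += delta["events_count"]
        return stats

    # Weekly correction: recount everything and overwrite, fixing drift
    # from late-arriving sessions or interrupted runs.
    def weekly_recount(stats: dict, fetch_counts) -> dict:
        total = fetch_counts(start_ts=0)
        stats["sessions_count"] = total["sessions_count"]
        stats["events_count"] = total["events_count"]
        return stats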
View file

@@ -3,12 +3,15 @@ import logging
from decouple import config
logger = logging.getLogger(__name__)
- from . import sessions as sessions_legacy
from . import sessions_pg
+ from . import sessions_pg as sessions_legacy
+ from . import sessions_ch
+ from . import sessions_search as sessions_search_legacy
if config("EXP_SESSIONS_SEARCH", cast=bool, default=False):
logger.info(">>> Using experimental sessions search")
from . import sessions_ch as sessions
from . import sessions_search_exp as sessions_search
else:
- from . import sessions
from . import sessions_search_exp
+ from . import sessions_pg as sessions
+ from . import sessions_search as sessions_search

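The module above is a feature-flag switch: it imports both backends, then re-exports one of them under the canonical names (sessions, sessions_search) based on the EXP_SESSIONS_SEARCH environment variable, so callers never know which implementation they got. A stripped-down sketch of the same idiom — the impl_* module names are placeholders:

    # package/__init__.py (sketch)
    from decouple import config  # reads environment variables / .env

    if config("EXP_SESSIONS_SEARCH", cast=bool, default=False):
        from . import impl_clickhouse as sessions  # experimental backend
    else:
        from . import impl_postgres as sessions    # stable default

Callers simply do `from package import sessions`; flipping the environment variable swaps the whole backend without touching call sites.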
View file

@@ -3,7 +3,7 @@ import logging
import schemas
from chalicelib.core import metadata, projects
- from . import sessions_favorite, sessions as sessions_legacy, sessions_ch as sessions, sessions_legacy_mobil
+ from . import sessions_favorite, sessions_search_legacy, sessions_ch as sessions, sessions_legacy_mobil
from chalicelib.utils import pg_client, helper, ch_client, exp_ch_helper
logger = logging.getLogger(__name__)
@@ -64,6 +64,13 @@ def search_sessions(data: schemas.SessionsSearchPayloadSchema, project: schemas.
platform="web"):
if data.bookmarked:
data.startTimestamp, data.endTimestamp = sessions_favorite.get_start_end_timestamp(project.project_id, user_id)
+ if data.startTimestamp is None:
+ logger.debug(f"No vault sessions found for project:{project.project_id}")
+ return {
+ 'total': 0,
+ 'sessions': [],
+ 'src': 2
+ }
if project.platform == "web":
full_args, query_part = sessions.search_query_parts_ch(data=data, error_status=error_status,
errors_only=errors_only,
@@ -134,7 +141,7 @@ def search_sessions(data: schemas.SessionsSearchPayloadSchema, project: schemas.
) AS users_sessions;""",
full_args)
elif ids_only:
main_query = cur.format(query=f"""SELECT DISTINCT ON(s.session_id) s.session_id
main_query = cur.format(query=f"""SELECT DISTINCT ON(s.session_id) s.session_id AS session_id
{query_part}
ORDER BY s.session_id desc
LIMIT %(sessions_limit)s OFFSET %(sessions_limit_s)s;""",
@@ -272,4 +279,4 @@ def search_by_metadata(tenant_id, user_id, m_key, m_value, project_id=None):
# TODO: rewrite this function to use ClickHouse
def search_sessions_by_ids(project_id: int, session_ids: list, sort_by: str = 'session_id',
ascending: bool = False) -> dict:
- return sessions_legacy.search_sessions_by_ids(project_id, session_ids, sort_by, ascending)
+ return sessions_search_legacy.search_sessions_by_ids(project_id, session_ids, sort_by, ascending)

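Two changes above are worth noting: the bookmarked branch now short-circuits with an empty result before any ClickHouse query runs when the user has no favorite sessions, and the ids-only query gains an explicit AS session_id alias so result rows are keyed session_id regardless of how the driver names the s.session_id expression. The guard-clause shape, sketched with hypothetical names:

    def search_bookmarked(project_id, user_id, get_window, run_query):
        # get_window stands in for sessions_favorite.get_start_end_timestamp.
        start_ts, end_ts = get_window(project_id, user_id)
        if start_ts is None:
            # No bookmarked sessions: empty page, no ClickHouse round-trip.
            return {"total": 0, "sessions": [], "src": 2}
        return run_query(start_ts, end_ts)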
View file

@@ -927,12 +927,12 @@ def authenticate_sso(email: str, internal_id: str):
aud=AUDIENCE, jwt_jti=j_r.jwt_refresh_jti),
"refreshTokenMaxAge": config("JWT_REFRESH_EXPIRATION", cast=int),
"spotJwt": authorizers.generate_jwt(user_id=r['userId'], tenant_id=r['tenantId'],
- iat=j_r.spot_jwt_iat, aud=spot.AUDIENCE),
+ iat=j_r.spot_jwt_iat, aud=spot.AUDIENCE, for_spot=True),
"spotRefreshToken": authorizers.generate_jwt_refresh(user_id=r['userId'],
tenant_id=r['tenantId'],
iat=j_r.spot_jwt_refresh_iat,
aud=spot.AUDIENCE,
- jwt_jti=j_r.spot_jwt_refresh_jti),
+ jwt_jti=j_r.spot_jwt_refresh_jti, for_spot=True),
"spotRefreshTokenMaxAge": config("JWT_SPOT_REFRESH_EXPIRATION", cast=int)
}
return response

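The SSO fix threads for_spot=True through both Spot token calls, so Spot access and refresh tokens are minted differently from the regular session tokens; the diff does not show what generate_jwt does with the flag, but a typical approach is to stamp a distinguishing claim. A hypothetical PyJWT sketch — the payload fields, secret handling, and for_spot behavior here are assumptions, not OpenReplay's implementation:

    import jwt  # PyJWT

    SECRET = "change-me"  # placeholder signing key

    def generate_jwt(user_id, tenant_id, iat, aud, for_spot=False):
        payload = {"userId": user_id, "tenantId": tenant_id,
                   "iat": iat, "exp": iat + 3600, "aud": aud}
        if for_spot:
            # Mark the token so Spot endpoints can tell it apart from
            # (and reject) ordinary session tokens.
            payload["iss"] = "spot"
        return jwt.encode(payload, SECRET, algorithm="HS256")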
View file

@@ -32,7 +32,7 @@ rm -rf ./chalicelib/core/log_tools
rm -rf ./chalicelib/core/metadata.py
rm -rf ./chalicelib/core/mobile.py
rm -rf ./chalicelib/core/saved_search.py
- rm -rf ./chalicelib/core/sessions/sessions.py
+ rm -rf ./chalicelib/core/sessions/sessions_pg.py
rm -rf ./chalicelib/core/sessions/sessions_ch.py
rm -rf ./chalicelib/core/sessions/sessions_devtool/sessions_devtool.py
rm -rf ./chalicelib/core/sessions/sessions_favorite/sessions_favorite.py
@@ -46,8 +46,7 @@ rm -rf ./chalicelib/core/sessions/sessions_viewed/sessions_viewed.py
rm -rf ./chalicelib/core/sessions/unprocessed_sessions.py
rm -rf ./chalicelib/core/metrics/modules
rm -rf ./chalicelib/core/socket_ios.py
- rm -rf ./chalicelib/core/sourcemaps.py
- rm -rf ./chalicelib/core/sourcemaps_parser.py
+ rm -rf ./chalicelib/core/sourcemaps
rm -rf ./chalicelib/core/user_testing.py
rm -rf ./chalicelib/core/tags.py
rm -rf ./chalicelib/saml
@@ -59,7 +58,6 @@ rm -rf ./chalicelib/utils/captcha.py
rm -rf ./chalicelib/utils/dev.py
rm -rf ./chalicelib/utils/email_handler.py
rm -rf ./chalicelib/utils/email_helper.py
- rm -rf ./chalicelib/utils/errors_helper.py
rm -rf ./chalicelib/utils/event_filter_definition.py
rm -rf ./chalicelib/utils/github_client_v3.py
rm -rf ./chalicelib/utils/helper.py
@@ -107,7 +105,7 @@ rm -rf ./chalicelib/core/alerts/alerts_processor_ch.py
rm -rf ./chalicelib/core/alerts/alerts_listener.py
rm -rf ./chalicelib/core/alerts/modules/helpers.py
rm -rf ./chalicelib/core/errors/modules
- rm -rf ./chalicelib/core/errors/errors.py
+ rm -rf ./chalicelib/core/errors/errors_pg.py
rm -rf ./chalicelib/core/errors/errors_ch.py
rm -rf ./chalicelib/core/errors/errors_details.py
rm -rf ./chalicelib/utils/contextual_validators.py

View file

@@ -19,8 +19,8 @@ apscheduler==3.11.0
# TODO: enable after xmlsec fix https://github.com/xmlsec/python-xmlsec/issues/252
#--no-binary is used to avoid libxml2 library version incompatibilities between xmlsec and lxml
lxml >= 4.6.5, !=4.7.0, <=5.2.1
- python3-saml==1.16.0 --no-binary=lxml
+ python3-saml==1.16.0
+ --no-binary=lxml
python-multipart==0.0.20
redis==5.2.1

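The requirements.txt change is a pip syntax fix: --no-binary is a global option, and in a requirements file global options must appear on their own line. Appended to a requirement specifier (python3-saml==1.16.0 --no-binary=lxml) it is not one of the supported per-requirement options, and recent pip versions reject it. On its own line it applies to the whole install, which matches the intent stated in the comment above: force lxml to build from source so its libxml2 is compatible with xmlsec.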
View file

@@ -83,9 +83,11 @@ if (process.env.uws !== "true") {
const uWrapper = function (fn) {
return (res, req) => {
res.id = 1;
+ res.aborted = false;
req.startTs = performance.now(); // track request's start timestamp
req.method = req.getMethod();
res.onAborted(() => {
+ res.aborted = true;
onAbortedOrFinishedResponse(res);
});
return fn(req, res);

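The uWS wrapper change initializes res.aborted before the handler runs and flips it inside onAborted. uWebSockets.js invalidates the response object once the client disconnects, and writing to it afterwards is an error, so downstream code is expected to check such a flag before calling res.end() or similar; registering the onAborted callback (which here also triggers the cleanup in onAbortedOrFinishedResponse) is required for any response that outlives the handler's synchronous part.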
Some files were not shown because too many files have changed in this diff.