Compare commits


359 commits

Author SHA1 Message Date
Shekar Siri
c750de6946 feat(product_analytics): timeseries filter fixes 2025-05-23 17:02:47 +02:00
Shekar Siri
9df909d112 feat(product_analytics): timeseries improvements 2025-05-23 17:02:47 +02:00
Shekar Siri
a486af5749 feat(product_analytics): general query fix 2025-05-23 17:02:47 +02:00
Shekar Siri
3e5f018b58 feat(product_analytics): table of cards testing and improvements 2025-05-23 17:02:47 +02:00
Shekar Siri
db0084f7a9 feat(product_analytics): table of cards testing and improvements 2025-05-23 17:02:47 +02:00
Shekar Siri
d2b455dfdb feat(product_analytics): table of cards testing filters 2025-05-23 17:02:47 +02:00
Shekar Siri
96b5e2e0cc feat(product_analytics): table of cards fixes 2025-05-23 17:02:47 +02:00
Shekar Siri
34c2ca281f feat(product_analytics): errors table 2025-05-23 17:02:47 +02:00
Shekar Siri
65ee3bcbb6 feat(product_analytics): funnel query and response fixes 2025-05-23 17:02:47 +02:00
Shekar Siri
adb88fd9fc feat(product_analytics): user journey - handle duration filter 2025-05-23 17:02:46 +02:00
Shekar Siri
bf62be2a4a feat(product_analytics): user journey - wip 2025-05-23 17:02:46 +02:00
Shekar Siri
f789ee1bda feat(product_analytics): user journey - wip 2025-05-23 17:02:46 +02:00
Shekar Siri
1d30b4d4cb feat(product_analytics): user journey - wip 2025-05-23 17:02:46 +02:00
Shekar Siri
10ecfde97e feat(product_analytics): heatmaps and other query improvements 2025-05-23 17:02:46 +02:00
Shekar Siri
5d6d94ed4d feat(product_analytics): heatmaps wip 2025-05-23 17:02:46 +02:00
Shekar Siri
c6076c5e7e feat(product_analytics): funnels card handle duration 2025-05-23 17:02:46 +02:00
Shekar Siri
f6485005c6 feat(product_analytics): funnels card handle duration 2025-05-23 17:02:46 +02:00
Shekar Siri
4204b41dbd feat(product_analytics): funnels card 2025-05-23 17:02:46 +02:00
Shekar Siri
6e57d2105d feat(product_analytics): handle filters dynamically 2025-05-23 17:02:46 +02:00
Shekar Siri
942dcbbd8d feat(product_analytics): timeseries error message 2025-05-23 17:02:46 +02:00
Shekar Siri
3c5844e4ad feat(product_analytics): table of cards 2025-05-23 17:02:45 +02:00
Shekar Siri
c077841b4e feat(api): dev rebase 2025-05-23 17:02:45 +02:00
Shekar Siri
8711648ac7 feat(analytics): table charts wip 2025-05-23 17:02:45 +02:00
Shekar Siri
6ad249bf6e feat(analytics): multi series results 2025-05-23 17:02:45 +02:00
Shekar Siri
e1cd230633 feat(analytics): filter operators 2025-05-23 17:02:45 +02:00
Shekar Siri
5c0139b66c feat(analytics): timeseries queries with filters and events 2025-05-23 17:02:45 +02:00
Shekar Siri
b0bf357be1 feat(analytics): base structure 2025-05-23 17:02:45 +02:00
Shekar Siri
4709918254 feat(analytics): filters 2025-05-23 17:02:45 +02:00
Alexander
4eae2ef439 feat(analytics): updated user trends method 2025-05-23 17:02:44 +02:00
Alexander
25841f26a1 feat(analytics): session/user trends 2025-05-23 17:02:44 +02:00
nick-delirium
98c82aa126
ui: kai charting 2025-05-23 16:02:05 +02:00
nick-delirium
8cd0a0ba07
ui: kai fixes 2025-05-23 11:46:46 +02:00
nick-delirium
04a63e3f84
spot: more checks for bg 2025-05-23 11:25:12 +02:00
Taha Yassine Kraiem
fa9c4d3398 fix(chalice): fixed search sessions for EE 2025-05-23 11:20:44 +02:00
nick-delirium
52a208024d
spot: upgrade wxt, properly reset offscreen page 2025-05-23 11:08:51 +02:00
nick-delirium
3410aec605
spot: close offscreen doc before reinit 2025-05-23 11:08:51 +02:00
Andrey Babushkin
58111d2323
remove util import (#3426) 2025-05-23 10:40:45 +02:00
Andrey Babushkin
d3d1a40909
add unit tests for session events parser (#3423)
* add unit tests for session events parser

* fixed tests and added test check to deploy

* updated frontend workflow

* updated frontend workflow

* updated frontend workflow

* updated frontend workflow

* updated frontend workflow

* updated frontend workflow

* fix test
2025-05-22 15:18:03 +02:00
Taha Yassine Kraiem
dedeb4cb2c fix(chalice): fixed view-session in FOSS 2025-05-22 13:03:01 +02:00
Taha Yassine Kraiem
92feaa3641 fix(chalice): fixed notes API 2025-05-22 13:03:01 +02:00
nick-delirium
58314ff2f3
spot: fixing constraints and version in bg.ts 2025-05-22 11:08:25 +02:00
nick-delirium
99b6238fc7
spot: fix event isolation from overlapping input events 2025-05-22 10:42:52 +02:00
nick-delirium
e365b7b14f
ui: changing style spaces 2025-05-21 16:53:40 +02:00
Taha Yassine Kraiem
5b6e9ab7e0 feature(chalice): search sessions by x/y coordinates
feature(chalice): search heatmaps by x/y coordinates
2025-05-21 16:46:54 +02:00
Taha Yassine Kraiem
f1e1d37d8e refactor(chalice): changed payload's name-pattern 2025-05-21 16:46:54 +02:00
nick-delirium
07cc0f0939
ui: disable input on limit 2025-05-21 16:41:03 +02:00
nick-delirium
eab0f60734
ui: hide empty dates from kai chats list 2025-05-21 16:29:21 +02:00
Delirium
f3f7992c0a
Kai charting (#3420)
* ui: chart btn

* ui: chats list modal fixes, signal editing visually, map charting response

* ui: support readable errors for kai

* ui: add support for response limiting
2025-05-21 16:19:39 +02:00
nick-delirium
24a220bc51
ui: hide kai chat msgs from self tracking 2025-05-20 17:58:15 +02:00
nick-delirium
83ebd01526
ui: darkmode fixes for spot and highlights 2025-05-20 17:55:49 +02:00
nick-delirium
c5555d7343
ui: fix for close icon in warnbadge darkmode 2025-05-20 17:41:54 +02:00
Rajesh Rajendran
bd97e15e9a feat(ci): Support building from branch for old patch (#3419)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-20 15:19:37 +02:00
nick-delirium
3431c97712
ui: dark mode fixes 2025-05-20 13:50:14 +02:00
Taha Yassine Kraiem
2d58cf2da4 feat(chalice): filter sessions by incidents 2025-05-20 11:58:59 +02:00
Taha Yassine Kraiem
c84aa417e1 refactor(chalice): changed incident-events response 2025-05-20 11:58:59 +02:00
Taha Yassine Kraiem
6f0deb57da refactor(chalice): changed default scope state 2025-05-20 11:58:59 +02:00
nick-delirium
3597523a04
ui: fixes for kai msg editing 2025-05-20 11:09:56 +02:00
nick-delirium
5de6d5de98
ui: visualization prep for kai 2025-05-19 15:34:00 +02:00
nick-delirium
3d02e7bbe3 ui: rename file 2025-05-19 14:01:02 +02:00
nick-delirium
1098f877e6 ui: rm console log 2025-05-19 14:01:02 +02:00
nick-delirium
8e0b30ece4 ui: delete deprecated components, fix widgetchart props, fix dashboard page reload check 2025-05-19 14:01:02 +02:00
nick-delirium
b8f97ad15b test fix for charts 2025-05-19 14:01:02 +02:00
nick-delirium
b7028ff131
ui: remove scope setup, fix spot capitalization 2025-05-19 11:01:35 +02:00
Taha Yassine Kraiem
d42c4a46f9 refactor(chalice): support incidents for replay 2025-05-16 17:52:57 +02:00
Taha Yassine Kraiem
1bb8f3a7b3 refactor(chalice): refactored issues
refactor(chalice): support issues in CH
2025-05-16 17:52:57 +02:00
Taha Yassine Kraiem
06ff696141 refactor(chalice): use CH to get replay events 2025-05-16 17:52:57 +02:00
Taha Yassine Kraiem
f8c9275127 refactor(chalice): refactored events 2025-05-16 17:52:57 +02:00
nick-delirium
a06f035b5a
ui: fix warns badge in replayer 2025-05-16 15:42:08 +02:00
Andrey Babushkin
0d3a2015b2
Tracker events throttling (#3399)
* add throttling

* fix throttling

* fix throttling
2025-05-16 14:13:37 +02:00
rjshrjndrn
0139e0f1d5 fix(api): requirements compatible with uv
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-16 12:00:44 +02:00
Taha Yassine Kraiem
cde2c53476 refactor(chalice): upgraded dependencies 2025-05-15 17:45:02 +02:00
Taha Yassine Kraiem
af8996b54a refactor(chalice): removed top-values cache
refactor(DB): removed or_cache
2025-05-15 17:45:02 +02:00
nick-delirium
edf6b2060e
spot: change v to 20 2025-05-15 15:01:27 +02:00
nick-delirium
e998543f3a
ui: reconnect kai if timed, add timezone to connection request 2025-05-15 13:00:51 +02:00
nick-delirium
2bb3300a95
ui: fresh lock file 2025-05-15 09:51:50 +02:00
Taha Yassine Kraiem
7e9d6b4761 fix(chalice): fix EE 2025-05-14 19:15:58 +02:00
Taha Yassine Kraiem
bf79a4c893 fix(chalice): fixed duplicate autocomplete values 2025-05-14 18:41:53 +02:00
Taha Yassine Kraiem
dcf6fdb1c9 refactor(chalice): hide minor paths for Path Analysis in dashboard page 2025-05-14 17:44:47 +02:00
nick-delirium
a4f6a93c59
tracker, backend: fixup msg gen 2025-05-14 11:50:57 +02:00
Taha Yassine Kraiem
c4ad390b3f feat(chalice): support data-type for sessions search 2025-05-13 17:36:56 +02:00
Taha Yassine Kraiem
d378b00bf7 refactor(chalice): validate regex expression 2025-05-13 17:36:56 +02:00
Taha Yassine Kraiem
bb6e2cbbdc feat(chalice): support data type for events search 2025-05-13 17:36:56 +02:00
Taha Yassine Kraiem
6b30e261a5 refactor(chalice): refactored errors 2025-05-13 17:36:56 +02:00
Taha Yassine Kraiem
f29e729acb refactor(chalice): moved CH autocomplete to FOSS 2025-05-13 17:36:56 +02:00
Taha Yassine Kraiem
56b6c6c7e6 refactor(chalice): refactored autocomplete 2025-05-13 17:36:56 +02:00
nick-delirium
0a5d4413ca
ui: fixes for kai and network tooltips 2025-05-13 17:17:18 +02:00
nick-delirium
ddb47631b6
ui: add logo to spot pdf export 2025-05-13 16:21:52 +02:00
nick-delirium
04beacde61
ui: reset kai thread on site change 2025-05-13 15:35:59 +02:00
nick-delirium
9fec22319b
ui: adjust styling and rm logs 2025-05-13 14:59:12 +02:00
Delirium
dca5e54811
Kai UI (#3336)
* ui: kai ui thing

ui: fixes for picking existing chat, feedback and retry buttons

ui: connect finding, creating threads

ui: more ui tuning for chat window, socket manager

ui: get/delete chats logic, create testing socket

ui: testing

ui: use on click query

ui: minor fixes for chat display, rebase

ui: start kai thing

* ui: add logs, add threadid

* ui: feedback methods and ui

* ui: store, replacing messages and giving feedback

* ui: move retry btn to right corner

* ui: move kai service out for ease of code splitting

* ui: add thread id to socket connection

* ui: support state messages

* ui: cancel response generation method

* ui: fix toast str

* ui: add gfm plugin

* ui: ensure md table has max sizes to prevent overflow

* ui: revert tailwind styles on markdown block layer

* ui: export as pdf, copy text contents of a message

* ui: try to save text with formatting in secure contexts

* ui: fix types

* ui: fixup dark mode colors

* ui: add duration for msgs

* ui: take out custom jwt

* ui: removing hardcode...

* ui: change endpoints to prod

* ui: swap socket path

* ui: flip vis toggle

* ui: lock file regenerate
2025-05-13 14:00:09 +02:00
nick-delirium
4e331b70a4
ui: fixes for security warns, bump minor versions to latest 2025-05-13 12:54:33 +02:00
nick-delirium
5cccaaa782 tracker: changelog and v17 2025-05-13 12:15:57 +02:00
nick-delirium
58d1f7c145 ui: fix warns types 2025-05-13 12:15:57 +02:00
Sudheer Salavadi
cbb930379d Request Timings UI update
The notification banner has been updated (position and style) so it doesn't overlap the Network Panel drawer.
2025-05-13 12:15:57 +02:00
nick-delirium
b9e6bd6e72 tracker: adjust queueing calc, rm total timeline from view 2025-05-13 12:15:57 +02:00
nick-delirium
6f3058f9f9 ui: adjust string 2025-05-13 12:15:57 +02:00
nick-delirium
c71db6c441 ui, tracker: add stall, add ui implementation 2025-05-13 12:15:57 +02:00
nick-delirium
3200107d71 tracker: add reminder about header for fetch/xhr 2025-05-13 12:15:57 +02:00
nick-delirium
526dfd7e21 tracker: fix timing status check 2025-05-13 12:15:57 +02:00
nick-delirium
e92ba42d82 tracker, ui, backend: checking support for network timings 2025-05-13 12:15:57 +02:00
Delirium
b1b21937ed
ui, tracker, backend: support long animation metrics (#3262)
* ui, tracker, backend: support long animation metrics

* ui: fix LAT mapping

* ui: change jump button display, longTask time definition

* ui: border for rows

* ui: refine LAT design

* tracker: regenerate messages
2025-05-13 12:04:14 +02:00
nick-delirium
55d435be87
ui: fix widget setData -> create new ref 2025-05-13 11:20:45 +02:00
nick-delirium
4972cfad94
spot: v to .19 2025-05-13 10:38:22 +02:00
Rajesh Rajendran
5c24594d5b main (#3391)
* ci(actions): Update pr description

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* ci(actions): run only on pull request merge

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

---------

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-12 16:51:45 +02:00
Alexander
27e20c4ef1 feat(db): use custom event name instead of 'CUSTOM' in CH 2025-05-12 16:41:42 +02:00
Alexander
3edea4acb4
feat(db): custom ts for custom events (#3390) 2025-05-12 16:14:34 +02:00
nick-delirium
5304dbf8c1
ui: change <slot> check 2025-05-12 16:01:43 +02:00
Rajesh Rajendran
0c64003b09 ci(actions): Auto update tag for patch build (#3387)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-12 15:55:46 +02:00
nick-delirium
f38b3a830d
ui: fixup toggler closing 2025-05-12 15:38:31 +02:00
Delirium
500fb44856
Litjs fixes2 (#3381)
* ui: fixes for litjs capture

* ui: introduce vmode for lwc light dom

* ui: fixup the mode toggle and remover
2025-05-12 15:19:56 +02:00
Alexander
2a483b62f0 feat(http): removed tracker version validation 2025-05-12 15:08:18 +02:00
Alexander
c9b29c5c3d feat(custom): parse custom event's payload and use the predefined timestamp as a correct ts for our messages 2025-05-12 14:57:22 +02:00
nick-delirium
889fde91a9
ui: fix heatmaps crash 2025-05-12 10:35:45 +02:00
Alexander
eb7f3fb7a0 Revert "feat(proto): removed a part of deprecated messages (min supported tracker version is 6.0.0)"
This reverts commit 6dc3dcfd
2025-05-12 10:33:03 +02:00
nick-delirium
d58031caf6 tracker: changelogs 2025-05-09 13:59:47 +02:00
nick-delirium
83b6b6a3dd tracker: rm sheet from tracking once it's rate limited 2025-05-09 13:59:47 +02:00
nick-delirium
db5fc57e43 tracker: add options for emotionjs tracker 2025-05-09 13:59:47 +02:00
nick-delirium
b4fd3def10 tracker: united check for sheet and trackerid 2025-05-09 13:59:47 +02:00
nick-delirium
d477862edf tracker: potential perf improvement 2025-05-09 13:59:47 +02:00
nick-delirium
821db5c0d5 tracker: code style 2025-05-09 13:59:47 +02:00
nick-delirium
4a9a082896 tracker: handle emotion-js style population 2025-05-09 13:59:47 +02:00
Alexander
b109dd559a feat(db): [OR-2012] insert page title to all auto-captured events 2025-05-09 11:56:38 +02:00
nick-delirium
de04e23c51 ui: comb through components, add additional classes for tailwind basic
styles, fix antd defaults
2025-05-09 11:29:12 +02:00
nick-delirium
c6b0649613 ui: starting dark theme 2025-05-09 11:29:12 +02:00
nick-delirium
d2d886b322
ui: fix filter options reset, fix dashboard chart density 2025-05-09 11:20:52 +02:00
nick-delirium
a009ff928c
spot: refactor popup code, split audio logic from ui code 2025-05-09 09:59:12 +02:00
Taha Yassine Kraiem
a13f427816 refactor(chalice): autocomplete for event-names
refactor(chalice): autocomplete for properties-names
refactor(chalice): autocomplete for properties-values
2025-05-08 18:52:23 +02:00
Taha Yassine Kraiem
8a69316b82 refactor(chalice): upgraded dependencies
refactor(alerts): upgraded dependencies
refactor(crons): upgraded dependencies
2025-05-08 18:52:23 +02:00
Taha Yassine Kraiem
1576208e25 refactor(chalice): return all events & properties 2025-05-08 18:52:23 +02:00
Taha Yassine Kraiem
3ac5c30c5f refactor(DB): remove TTL for CH tables 2025-05-08 18:52:23 +02:00
Taha Yassine Kraiem
39d3d8db4c refactor(chalice): changed predefined properties types handling
refactor(DB): changed predefined properties types
2025-05-08 18:52:23 +02:00
Taha Yassine Kraiem
812983f97c refactor(chalice): changed properties response 2025-05-08 18:52:23 +02:00
rjshrjndrn
1df4a92901 chore(cli): pin dns
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-05-07 16:52:28 +02:00
Taha Yassine Kraiem
d9fe534223 fix(chalice): fixed get error's details
(cherry picked from commit 39eb943b86)
2025-05-07 12:21:11 +02:00
nick-delirium
95d4df7a1b
ui: loading badges for spot videos 2025-05-07 10:48:16 +02:00
nick-delirium
b3cb8df65b
ui: fix sankey start calculation 2025-05-06 16:47:53 +02:00
nick-delirium
18f8ee9d15
ui: fix max meta length, add horizontal layout for player 2025-05-06 16:21:02 +02:00
Andrey Babushkin
2eb4ab4b84
Fix events highlight (#3368)
* add filtered events to search

* removed consoles

* changed styles to tailwind

* changed styles to tailwind

* fixed error
2025-05-06 13:32:56 +02:00
nick-delirium
a3fdad3de1
ui: missing } for eventsblock 2025-05-06 11:29:55 +02:00
nick-delirium
5ee2b125c8 ui: clips, set playercontent size for clip player 2025-05-06 11:09:05 +02:00
nick-delirium
360d6ca382 ui: fix endpointing sankey main node calculation 2025-05-06 11:09:05 +02:00
Andrey Babushkin
b477f9637d
Highlight searched events (#3360)
* add filtered events to search

* removed consoles

* changed styles to tailwind

* changed styles to tailwind

* fixed errors
2025-05-05 17:29:16 +02:00
Taha Yassine Kraiem
626c453a80 refactor(DB): remove TTL for CH tables
(cherry picked from commit d78b33dcd2)

# Conflicts:
#	ee/scripts/schema/db/init_dbs/clickhouse/create/init_schema.sql
2025-05-05 17:19:22 +02:00
Taha Yassine Kraiem
859100213d fix(chalice): fixed empty error_id for table of errors
(cherry picked from commit 4b1ca200b4)
2025-05-05 17:13:55 +02:00
nick-delirium
ec6076c0b3
ui: fix metadata check 2025-05-02 09:48:51 +02:00
nick-delirium
878c10f723
ui: prevent network row modal from changing replayer time 2025-04-30 10:19:25 +02:00
nick-delirium
2c0a75b11f
ui: change card endpoint 2025-04-30 09:22:00 +02:00
Taha Yassine Kraiem
718b3f3223 refactor(chalice): get properties defined types 2025-04-28 18:54:40 +02:00
Taha Yassine Kraiem
9f6dc788c4 refactor(DB): structured JSON attributes 2025-04-28 18:54:40 +02:00
Taha Yassine Kraiem
9b9248f1f6 refactor(DB): structured JSON attributes (draft) 2025-04-28 18:54:40 +02:00
rjshrjndrn
f591e886d7 fix(docker-compose): proper volume path #3279
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-28 17:27:17 +02:00
nick-delirium
cb0ea165f9 tracker: clearup old deprecated changes 2025-04-28 17:19:50 +02:00
nick-delirium
db404c0648 tracker: clearup old deprecated changes 2025-04-28 17:19:50 +02:00
nick-delirium
e4d75467ef tracker: 16.2.1, rename inliner options for clarity 2025-04-28 17:15:58 +02:00
nick-delirium
69b8e2e774
ui: fix velement applychanges 2025-04-28 10:38:43 +02:00
Андрей Бабушкин
160370f45e add inlineCss enum 2025-04-28 10:38:13 +02:00
Andrey Babushkin
53f3623481
Css inliner tuning (#3337)
* tracker: don't send double sheets

* tracker: don't send double sheets

* tracker: slot checker

* add slot tag to custom elements

---------

Co-authored-by: nick-delirium <nikita@openreplay.com>
2025-04-25 17:45:21 +02:00
nick-delirium
324299170e
spot: isolate input events in saving screen 2025-04-25 14:30:47 +02:00
nick-delirium
f5f47103c3
tracker: update css inject 2025-04-25 10:17:44 +02:00
Taha Yassine Kraiem
70390920cd refactor(chalice): fixes and cleaning 2025-04-24 14:51:23 +02:00
Taha Yassine Kraiem
447a2490ef refactor(chalice): refactored sessions-search for CH-PG
fix(chalice): fixed usability-tests' sessions
2025-04-24 14:51:23 +02:00
Taha Yassine Kraiem
ad3f72a10b refactor(chalice): removed favorite attribute from sessions search response as it is not used by UI
refactor(chalice): use json.loads instead of ast.literal_eval for faster metadata parsing
2025-04-24 14:51:23 +02:00
Taha Yassine Kraiem
70d2bbd9b9 refactor(chalice): refactored notes 2025-04-24 14:51:23 +02:00
Taha Yassine Kraiem
bbdde7be81 refactor(chalice): refactored sessions search
refactor(chalice): refactored notes
fix(chalice): fixed imports
2025-04-24 14:51:23 +02:00
nick-delirium
22d71ceb14
ui: fixup autoplay on inactive tabs 2025-04-24 13:00:17 +02:00
Delirium
53797500bf
tracker css batching/inlining (#3334)
* tracker: initial css inlining functionality

* tracker: add tests, adjust sheet id, stagger rule sending

* removed sorting

* upgrade css inliner

* ui: better logging for counter

* tracker: force-fetch mode for cssInliner

* tracker: fix ts warns

* tracker: use debug opts

* tracker: 16.2.0 changelogs, inliner opts

* tracker: remove debug options

---------

Co-authored-by: Андрей Бабушкин <andreybabushkin2000@gmail.com>
2025-04-24 12:16:51 +02:00
rjshrjndrn
7217959992 feat(git): Adding pre-commit hook 2025-04-24 10:49:17 +02:00
Shekar Siri
effdfaef2c change(ui): force the table cards events order to use AND instead of the default THEN 2025-04-24 10:08:21 +02:00
rjshrjndrn
0c3bac0fe0 chore(ci): Update actions
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-23 19:33:10 +02:00
rjshrjndrn
1f55f8241c feat(cli): Add support for image versions
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-23 16:58:54 +02:00
rjshrjndrn
8b75dfa149 fix(docker-compose): clickhouse migration
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-23 16:51:39 +02:00
Taha Yassine Kraiem
94642d2871 fix(chalice): enforce AND operator for table of requests and table of pages 2025-04-23 11:58:17 +01:00
rjshrjndrn
33f571acc4 fix(docker-compose): remove shell interpolation
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-23 11:13:06 +02:00
Rajesh Rajendran
b886e9b242
or 1940 upstream docker release with the existing installation (#3316)
* chore(docker): Adding dynamic env generator
* ci(make): Create deployment yamls
* ci(make): Generating docker envs
* change env name structure
* proper env names
* chore(docker): clickhouse
* chore(docker-compose): generate env file format
* chore(docker-compose): Adding docker-compose
* chore(docker-compose): format make
* chore(docker-compose): Update version
* chore(docker-compose): adding new secrets
* ci(make): default target
* ci(Makefile): Update common protocol
* chore(docker-compose): refactor folder structure
* ci(make): rename to docker-envs
* feat(docker): add clickhouse volume definition
Add clickhouse persistent volume to the docker-compose configuration
to ensure data is preserved between container restarts.
* refactor: move env files to docker-envs directory
Updates all environment file references in docker-compose.yaml to use a
consistent directory structure, placing them under the docker-envs/
directory for better organization.
* fix(docker): rename imagestorage to images
 The `imagestorage` service and related environment file
 have been renamed to `images` for clarity and consistency.
 This change reflects the service's purpose of handling
 images.
* feat(docker): introduce docker-compose template
 A new docker-compose template
 to generate docker-compose files from a list of services.
 The template uses helm syntax.
* fix: Properly set FILES variable in Makefile
 The FILES variable was not being set correctly in the
 Makefile due to subshell issues. This commit fixes the
 variable assignment and ensures that the variable is
 accessible in subsequent commands.
* feat: Refactor docker-compose template for local development
 This commit introduces a complete overhaul of the
 docker-compose template, switching from a helm-based
 template to a native docker-compose.yml file. This
 change simplifies local development and makes it easier
 to manage the OpenReplay stack.
 The new template includes services for:
 - PostgreSQL
 - ClickHouse
 - Redis
 - MinIO
 - Nginx
 - Caddy
 It also includes migration jobs for setting up the
 database and MinIO.
* fix(docker-compose): Add fallback empty environment
 Add an empty environment to the docker-compose template to prevent
 errors when the env_file is missing. This ensures that the
 container can start even if the environment file is not present.
* feat(docker): Add domainname and aliases to services
 This change adds the `domainname` and `aliases` attributes to each
 service in the docker-compose.yaml file. This is to ensure that
 the services can communicate with each other using their fully
 qualified domain names. Also adds shared volume and empty
 environment variables.
* update version
* chore(docker): don't pull parallel
* chore(docker-compose): proper pull
* chore(docker-compose): Update db service urls
* fix(docker-compose): clickhouse url
* chore(clickhouse): Adding clickhouse db migration
* chore(docker-compose): Adding clickhouse
* fix(tpl): variable injection
* chore(fix): compose tpl variable rendering
* chore(docker-compose): Allow override pg variable
* chore(helm): remove assist-server
* chore(helm): pg integrations
* chore(nginx): removed services
* chore(docker-compose): Multiple aliases
* chore(docker-compose): Adding more env vars
* feat(install): Dynamically generate passwords
 dynamic password generation by
 identifying `change_me_*` entries in `common.env` and
 replacing them with random passwords. This enhances
 security and simplifies initial setup.
 The changes include:
 - Replacing hardcoded password replacements with a loop
   that iterates through all `change_me_*` entries.
 - Using `grep` to find all `change_me_*` tokens.
 - Generating a random password for each token.
 - Updating the `common.env` file with the generated
   passwords.
* chore(docker-compose): disable clickhouse password
* fix(docker-compose): clickhouse-migration
* compose: chalice env
* chore(docker-compose): overlay vars
* chore(docker): Adding ch port
* chore(docker-compose): disable clickhouse password
* fix(docker-compose): migration name
* feat(docker): skip specific values
* chore(docker-compose): define namespace
---------

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-23 10:57:19 +02:00
Taha Yassine Kraiem
f963ff394d fix(chalice): fixes for table of requests
(cherry picked from commit 0e469fd056)
2025-04-22 18:04:46 +01:00
Andrey Babushkin
a26411f2a6
Css batching (#3326)
* tracker: initial css inlining functionality

* tracker: add tests, adjust sheet id, stagger rule sending

* ui: reroute custom html component fragments

* removed sorting

---------

Co-authored-by: nick-delirium <nikita@openreplay.com>
2025-04-22 17:59:25 +02:00
nick-delirium
ee71625499
ui: fix timepicker and timezone interactions 2025-04-22 17:37:04 +02:00
nick-delirium
7d6f838d25
ui: fix empty sank sessions fetch 2025-04-22 10:26:20 +02:00
Alexander
089539ef7e feat(go.mod): upgraded imports 2025-04-18 16:14:25 +02:00
Alexander
373d71e4f3 feat(ch-connector): added current url for all events 2025-04-18 15:31:55 +02:00
nick-delirium
cde427ae4c
tracker: bump proxy version to .3, prevent crash on calling obscure fn on objects 2025-04-17 17:35:27 +02:00
nick-delirium
7cfef90cc8
ui: virtualizer for filter options list 2025-04-16 15:22:37 +02:00
nick-delirium
04db655776
ui: fix auto import paths 2025-04-16 15:07:37 +02:00
nick-delirium
b91f5df89f
ui: fix imports for eventsblock 2025-04-16 12:22:16 +02:00
nick-delirium
7fd741348c
ui: fix session search on url change 2025-04-16 11:55:47 +02:00
nick-delirium
2aaafa5b22
ui: fixing security warnings 2025-04-16 11:43:45 +02:00
nick-delirium
11f9b865cf
tracker: 16.1.3 with network proxy fix 2025-04-16 11:39:17 +02:00
rjshrjndrn
60a691bbaf chore(make): Adding make file
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-16 10:07:36 +02:00
Shekar Siri
3f1f6c03f2 feat(widget-sessions): improve session filtering logic
- Refactored session filtering logic to handle nested filters properly.
- Enhanced `fetchSessions` to ensure null checks and avoid errors.
- Updated `loadData` to handle `USER_PATH` and `HEATMAP` metric types.
- Improved UI consistency by adjusting spacing and formatting.
- Replaced redundant code with cleaner, more maintainable patterns.

This change improves the reliability and readability of the session
filtering and loading logic in the WidgetSessions component.
2025-04-15 18:15:23 +02:00
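
A rough idea of the null-safe nested-filter handling this commit describes, as a minimal TypeScript sketch. The `Filter` shape, the `flattenFilters` helper, and the endpoint are illustrative assumptions, not the actual WidgetSessions code:

```ts
interface Filter {
  key: string;
  value: string[];
  filters?: Filter[]; // nested sub-filters
}

// Walk nested filters recursively, guarding against undefined children.
function flattenFilters(filters: Filter[] = []): Filter[] {
  return filters.flatMap((f) => [f, ...flattenFilters(f.filters ?? [])]);
}

// Null-checked fetch: empty or missing filters no longer throw.
async function fetchSessions(filters?: Filter[]): Promise<unknown> {
  const safeFilters = flattenFilters(filters ?? []).filter(
    (f) => f.value.length > 0,
  );
  const res = await fetch('/api/sessions/search', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ filters: safeFilters }),
  });
  return res.json();
}
```
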
nick-delirium
dcd19e3c83
player: add debug methods (get node, get node messages) 2025-04-15 15:57:01 +02:00
nick-delirium
ced855568f
tracker: drop mentions of lint-staged 2025-04-15 14:42:55 +02:00
Andrey Babushkin
c8483df795
removed sorting by id (#3304) 2025-04-15 13:31:35 +02:00
Jorgen Evens
d544da0665 fix(helm): fix broken volumeMounts indentation 2025-04-14 15:51:55 +02:00
rjshrjndrn
408c3122d3 fix(clickhouse): update user config mount paths
Properly mount clickhouse user configuration files to the users.d
directory with correct paths for each file. Also adds several
performance-related settings to the default user profile including
query cache and JSON type support.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-14 15:37:55 +02:00
nick-delirium
c196736c3c
tracker: 16.1.2 networkProxy bump 2025-04-14 13:30:37 +02:00
Shekar Siri
d47542830f feat(SessionsBy): add specific filter for FETCH metric
Added a conditional check to handle the FETCH metric in the SessionsBy
component. When the metric is FETCH, a specific filter with key
FETCH_URL, operator is, and value derived from data.name is applied.
This ensures proper filtering behavior for FETCH-related metrics.
2025-04-14 12:00:02 +02:00
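
The conditional the message describes amounts to something like the following sketch; the `FETCH`/`FETCH_URL` constants and the filter shape are assumed from the wording, not taken from the component:

```ts
const FETCH = 'fetch';
const FETCH_URL = 'fetchUrl';

interface SessionFilter {
  key: string;
  operator: 'is';
  value: string[];
}

// When drilling into a FETCH metric, filter sessions by the request URL.
function buildDrilldownFilter(
  metric: string,
  data: { name: string },
): SessionFilter | null {
  if (metric === FETCH) {
    return { key: FETCH_URL, operator: 'is', value: [data.name] };
  }
  return null;
}
```
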
Andrey Babushkin
055ff8f64a
Assist remote canvas control (#3287)
* refactor(searchStore): reformat filterMap function parameters (#3166)

- Reformat the parameters of the filterMap function for better readability.
- Comment out the fetchSessions call in clearSearch method to avoid unnecessary session fetch.

* Increment frontend chart version (#3167)

Co-authored-by: GitHub Action <action@github.com>

* refactor(chalice): cleaned code
fix(chalice): fixed session-search-pg sortKey issue
fix(chalice): fixed CH-query-formatter to handle special chars
fix(chalice): fixed /ids response

* feat(auth): implement withCaptcha HOC for consistent reCAPTCHA (#3177)

* feat(auth): implement withCaptcha HOC for consistent reCAPTCHA

This commit refactors the reCAPTCHA implementation across the application
by introducing a Higher Order Component (withCaptcha) that encapsulates
captcha verification logic. The changes:

- Create a reusable withCaptcha HOC in withRecaptcha.tsx
- Refactor Login, ResetPasswordRequest, and CreatePassword components
- Extract SSOLogin into a separate component
- Improve error handling and user feedback
- Standardize loading and verification states across forms
- Make captcha implementation more maintainable and consistent

* feat(auth): support msaas edition for enterprise features

Add msaas to the isEnterprise check alongside ee edition to properly
display enterprise features. Use userStore.isEnterprise in SSOLogin
component instead of directly checking authDetails.edition for
consistent
enterprise status detection.

* Increment frontend chart version (#3179)

Co-authored-by: GitHub Action <action@github.com>

* feat(assist): improved caching mechanism for cluster mode (#3180)

* Increment assist chart version (#3181)

Co-authored-by: GitHub Action <action@github.com>

* ui: fix table column export

* Increment frontend chart version

* fix(auth): remove unnecessary captcha token validation (#3188)

The token validation checks were redundant as the validation is already
handled by the captcha wrapper component. This change simplifies the
password reset flow while maintaining security.

* Increment frontend chart version (#3189)

Co-authored-by: GitHub Action <action@github.com>

* ui: onboarding fixes

* ui: fixes for onboarding ui

* Increment frontend chart version

* feat(helm): add TOKEN_SECRET environment variable

Add TOKEN_SECRET environment variable to HTTP service deployment and
generate a random value for it in vars.yaml.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* fix(GraphQL): remove unused useTranslation hook (#3200) (#3206)

Co-authored-by: PiRDub <pirddeveloppeur@gmail.com>

* Increment frontend chart version

* chore(http): remove default token_string

scripts/helmcharts/openreplay/charts/http/scripts/entrypoint.sh

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* fix(dashboard): update filter condition in MetricsList

Change the filter type comparison from checking against 'all' to
checking against an empty string. This ensures proper filtering
behavior when filtering metrics in the dashboard component.

* Increment frontend chart version

* ui: shrink icons when no space, adjust player area for events export … (#3217)

* ui: shrink icons when no space, adjust player area for events export panel, fix panel size

* ui: rm log

* Increment frontend chart version

* refactor(chalice): changed user-journey

* Increment chalice chart version

* refactor(auth): separate SSO support from enterprise edition

Add dedicated isSSOSupported property to correctly identify when SSO
authentication is available, properly handling the 'msaas' edition
case separately from enterprise edition checks. This fixes SSO
visibility in the login interface.

* Increment frontend chart version

* UI patches (28.03) (#3231)

* ui: force getting url for location in tabmanagers

* Assist add turn servers (#3229)

* fixed conflicts

* add offers

* add config to socket query

* add config to socket query

* add config init

* removed console logs

* removed wrong updates

* fixed conflicts

* add offers

* add config to socket query

* add config to socket query

* add config init

* removed console logs

* removed wrong updates

* ui: fix chat draggable, fix default params

---------

Co-authored-by: nick-delirium <nikita@openreplay.com>

* ui: fix spritemap generation for assist sessions

* ui: fix yarnlock

* fix errors

* updated widget link

* resolved conflicts

* updated widget url

---------

Co-authored-by: Andrey Babushkin <55714097+reyand43@users.noreply.github.com>
Co-authored-by: Андрей Бабушкин <andreybabushkin2000@gmail.com>

* fix(init): remove duplicate clone

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* Increment assist chart version

* Increment frontend chart version

* ui: add old devtool filters

* ui: filter keys

* Increment frontend chart version

* ui: fix modules mapper

* ui: fix modules label

* Increment frontend chart version

* ui: fix double fetches for sessions

* Increment frontend chart version

* pulled updates (#3254)

* Increment frontend chart version (#3255)

Co-authored-by: GitHub Action <action@github.com>

* Increment assist chart version (#3256)

Co-authored-by: GitHub Action <action@github.com>

* feat(chalice): added for_spot=True for authenticate_sso (#3259)

* Increment chalice chart version (#3260)

Co-authored-by: GitHub Action <action@github.com>

* Assist patch canvas (#3265)

* add agent info to assist and tracker

* removed AGENTS_CONNECTED event

* Increment frontend chart version (#3266)

Co-authored-by: GitHub Action <action@github.com>

* Increment assist chart version (#3267)

Co-authored-by: GitHub Action <action@github.com>

* resolved conflict

* removed comments

* add global method support

* fix errors

* remove wrong updates

* remove wrong updates

* add onDrag as option

---------

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
Co-authored-by: Shekar Siri <sshekarsiri@gmail.com>
Co-authored-by: Mehdi Osman <estradino@users.noreply.github.com>
Co-authored-by: GitHub Action <action@github.com>
Co-authored-by: Taha Yassine Kraiem <tahayk2@gmail.com>
Co-authored-by: Alexander <zavorotynskiy@pm.me>
Co-authored-by: nick-delirium <nikita@openreplay.com>
Co-authored-by: rjshrjndrn <rjshrjndrn@gmail.com>
Co-authored-by: PiRDub <pirddeveloppeur@gmail.com>
2025-04-14 11:25:17 +02:00
nick-delirium
2bf92f40f7
ui: metrics filtering checks 2025-04-14 10:53:12 +02:00
nick-delirium
f0f78341e7
networkProxy: improve sanitizer, fix bodyreader class 2025-04-14 10:53:12 +02:00
nick-delirium
dbb805189f ui: keep spot log 2025-04-14 09:41:11 +02:00
nick-delirium
e32dbe2ee2 ui: check if spot ext exists on login comp 2025-04-14 09:41:11 +02:00
rjshrjndrn
3272f5b9fd refactor(clickhouse): split server and user config
Split the ClickHouse configuration into separate ConfigMaps for server
and user configurations. This allows more granular management of the
different configuration types and proper mounting to their respective
paths.

- Created separate serverConfig and userConfig under configOverride
- Added user-default.xml under userConfig
- Updated StatefulSet to mount each ConfigMap separately

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-11 17:20:26 +02:00
Shekar Siri
ea4e2ab198 feat(search): enhance filter value handling
- Added `checkFilterValue` function to validate and update filter values
  in `SearchStoreLive`.
- Updated `FilterItem` to handle undefined `value` gracefully by providing
  a default empty array.

These changes improve robustness in filter value processing.
2025-04-11 14:35:19 +02:00
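
A minimal sketch of the validation described, assuming `checkFilterValue` simply normalizes a possibly-undefined value to an array; names mirror the commit message, and the real SearchStoreLive code may differ:

```ts
interface FilterItem {
  key: string;
  value?: string[];
}

// Treat undefined/null as "no value selected" rather than crashing.
function checkFilterValue(value?: string[]): string[] {
  return Array.isArray(value) ? value : [];
}

function normalizeFilter(item: FilterItem): Required<FilterItem> {
  return { key: item.key, value: checkFilterValue(item.value) };
}
```
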
Shekar Siri
990e1fa1c4 feat(search): add rounding to next minutes for date ranges
- Introduced `roundToNextMinutes` utility function to round timestamps
  to the next specified minute interval.
- Updated `Search` class to use the rounding function for non-custom
  date ranges.
- Modified `getRange` in `period.js` to align LAST_24_HOURS with
  15-minute intervals.
- Added `roundToNextMinutes` implementation in `utils/index.ts`.
2025-04-11 11:59:04 +02:00
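
A plausible implementation of the utility, rounding a millisecond timestamp up to the next N-minute boundary; the actual `utils/index.ts` version may differ:

```ts
function roundToNextMinutes(timestamp: number, minutes: number): number {
  const interval = minutes * 60 * 1000; // interval in ms
  return Math.ceil(timestamp / interval) * interval;
}

// Example: align the end of a LAST_24_HOURS range to a 15-minute boundary.
const end = roundToNextMinutes(Date.now(), 15);
const start = end - 24 * 60 * 60 * 1000;
```
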
Shekar Siri
5ca97ceedd feat(dashboard): set initial drill down period
Change default drill down period from LAST_7_DAYS to LAST_24_HOURS
and preserve current period when drilling down on chart click
2025-04-11 10:47:32 +02:00
rjshrjndrn
d3b8c35058 chore(action): cloning specific tag
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-10 15:41:56 +02:00
rjshrjndrn
1b851a8b72 feat(clickhouse): add config override capability
Adds support for overriding ClickHouse server configurations by:
- Creating a new ConfigMap to store custom XML configurations
- Mounting the ConfigMap to ClickHouse pods under /etc/clickhouse-server/config.d
- Adding configOverride field to values.yaml with examples

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-09 16:36:04 +02:00
Andrey Babushkin
553e3f6045
Assist fix canvas clearing (#3276)
* add stop canvas socket event

* change tracker version

* removed comments
2025-04-07 14:10:31 +02:00
rjshrjndrn
3f73bae22f fix(helm): proper aws endpoint detection
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-04 23:44:08 +02:00
Alexander
9160b42113 feat(assist-server): fixed an issue with sessionIDs collector 2025-04-04 17:53:19 +02:00
Alexander
36e1a2fca2 feat(assist-server): removed unnecessary prefix for ws connections 2025-04-04 16:34:45 +02:00
Alexander
cbbd480cca feat(assist-server): changed default port 2025-04-04 16:23:16 +02:00
Alexander
77ae0cac0e Revert "feat(assist): temporarily changed the default assist path"
This reverts commit 5771323800.
2025-04-04 16:18:19 +02:00
Alexander
5771323800 feat(assist): temporarily changed the default assist path 2025-04-04 16:13:03 +02:00
Alexander
aab8691cf5 Merge remote-tracking branch 'origin/dev' into dev 2025-04-04 16:08:24 +02:00
Alexander
d9ff3f4691 feat(assist-server): use the default prefix url 2025-04-04 16:08:09 +02:00
rjshrjndrn
09c2ce0976 ci(action): Build and patch github tags
feat(workflow): update commit timestamp for patching

Add a step to set the commit timestamp of the HEAD commit to be 1
second newer than the oldest of the last 3 commits. This ensures
proper chronological order while preserving the commit content.

- Fetch deeper history to access commit history
- Get oldest timestamp from recent commits
- Set new commit date with BSD-compatible date command
- Verify timestamp change with git log

The workflow was previously checking out 'main' branch with a
comment indicating it needed to be fixed. This change makes it
properly checkout the tag specified by the workflow input.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-04 15:57:50 +02:00
Alexander
0141a42911 feat(assist-server): fixed the helm chart 2025-04-04 15:48:31 +02:00
Alexander
b55e44d450 feat(assist-server): moved the build.sh script to the root 2025-04-04 15:44:19 +02:00
Alexander
f70cce7e23 feat(assist-server): removed unnecessary comments 2025-04-04 15:13:45 +02:00
Alexander
8b3be469b6 feat(assist-server): added host configuration 2025-04-04 15:09:37 +02:00
Alexander
dc975bc19a feat(actions): small fix in assist-server action 2025-04-04 12:11:48 +02:00
Alexander
c1d51b98a2
feat(assist-server): added a first part of the assist v2 (#3269) 2025-04-04 12:05:36 +02:00
nick-delirium
5a51bfb984
update codecov yml 2025-04-04 10:46:13 +02:00
Andrey Babushkin
b55b9e5515
Assist fix canvas stream (#3263)
* add agent info to assist and tracker

* removed AGENTS_CONNECTED event
2025-04-03 18:06:09 +02:00
Andrey Babushkin
af7b46516f
Assist fix canvas stream (#3261)
* add agent info to assist and tracker

* removed AGENTS_CONNECTED event
2025-04-03 16:14:46 +02:00
rjshrjndrn
05e0306823 fix(actions): add dynamic token secret
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-04-03 16:10:19 +02:00
Alexander
77a8371543 feat(analytics): added mock (because it's impossible to build at the moment) 2025-04-03 15:33:48 +02:00
Mehdi Osman
e4406ad26b
Update .env.sample 2025-04-03 09:06:31 -04:00
Alexander
a8971d842b feat(chalice): added for_spot=True for authenticate_sso 2025-04-02 16:38:08 +02:00
nick-delirium
c003057cf0
ui: fix events filtering, net panel scroll and default tab 2025-04-02 14:40:13 +02:00
nick-delirium
586472c7dd
ui: bump tab tooltip delay 2025-04-01 17:16:25 +02:00
nick-delirium
ecb192f16e
tracker: hoist deps to root level 2025-04-01 11:49:39 +02:00
nick-delirium
6dc585417f
tracker: fix tests (use workflow) 2025-04-01 11:40:06 +02:00
nick-delirium
264444c92a
tracker: setup bun workspaces for tracker/assist 2025-04-01 11:35:42 +02:00
nick-delirium
b2fcd7094b
tracker: patch for potential empty call_end msg #3249 2025-04-01 11:05:42 +02:00
Andrey Babushkin
f3b98dad8a
updated version (#3253) 2025-03-31 18:09:27 +02:00
Andrey Babushkin
c27213c65d
add test turn (#3236)
* add test turn

* removed stun

* add ice candidates buffer and removed config to another socket event

* removed config from NEW_AGENTS

* changed WEBRTC_CONFIG event receiver

* fixed error

* fixed errors

* add buffer cleaning
2025-03-31 18:00:27 +02:00
nick-delirium
f61c5e99b5
ui: fix double fetches for sessions 2025-03-31 17:14:23 +02:00
nick-delirium
6412f14b08
ui: fix modules label 2025-03-31 11:52:23 +02:00
nick-delirium
0a620c6ba3
ui: fix modules mapper 2025-03-31 11:47:10 +02:00
nick-delirium
685741f039
tracker: yarn -> bun 2025-03-31 11:15:38 +02:00
nick-delirium
4ee78e1a5c
ui: filter keys 2025-03-31 10:33:51 +02:00
nick-delirium
77735d9d72
ui: use metadata as filter on click 2025-03-31 10:29:27 +02:00
nick-delirium
e3065e0530 ui: add old devtool filters 2025-03-31 10:11:34 +02:00
rjshrjndrn
d9d4221ad3 fix(init): remove duplicate clone
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-28 21:40:58 +01:00
nick-delirium
0bbde3e75a
tracker: assist 11.0.4; pass peer creds 2025-03-28 17:55:26 +01:00
nick-delirium
7dec8bb943
ui: fix toast auto close 2025-03-28 17:26:50 +01:00
Taha Yassine Kraiem
c6a5ed6c3b fix(chalice): fixed redundant event-names 2025-03-28 17:19:36 +01:00
Taha Yassine Kraiem
99d62fa549 feat(chalice): support regex operator for heatmaps 2025-03-28 16:53:49 +01:00
Taha Yassine Kraiem
c0bb05bc0f feat(chalice): support regex operator for sessions search 2025-03-28 16:53:49 +01:00
Taha Yassine Kraiem
70258e5c1d refactor(chalice): simplified supportedTypes for product analytics 2025-03-28 16:53:49 +01:00
Taha Yassine Kraiem
6ec146b24b feat(chalice): support regex for events search 2025-03-28 16:53:49 +01:00
Taha Yassine Kraiem
9f464e3b41 refactor(chalice): refactored code 2025-03-28 16:53:49 +01:00
nick-delirium
e95bdab478 ui: fix spritemap generation for assist sessions 2025-03-28 16:42:16 +01:00
Andrey Babushkin
421b3d1dc5
Assist add turn servers (#3229)
* fixed conflicts

* add offers

* add config to socket query

* add config to socket query

* add config init

* removed console logs

* removed wrong updates

* fixed conflicts

* add offers

* add config to socket query

* add config to socket query

* add config init

* removed console logs

* removed wrong updates

* ui: fix chat draggable, fix default params

---------

Co-authored-by: nick-delirium <nikita@openreplay.com>
2025-03-28 16:27:01 +01:00
nick-delirium
437a25fb97
networkProxy: update dev deps 2025-03-28 11:20:15 +01:00
nick-delirium
cb55a17227
ui: force getting url for location in tabmanagers 2025-03-28 10:57:39 +01:00
Taha Yassine Kraiem
9d160abda5 refactor(chalice): optimized search sessions using user-events 2025-03-27 16:34:49 +01:00
Taha Yassine Kraiem
3758cf6565 refactor(chalice): search sessions using user-events 2025-03-27 14:18:08 +01:00
Taha Yassine Kraiem
9db5e2a8f7 refactor(chalice): refactored code 2025-03-27 14:18:08 +01:00
Taha Yassine Kraiem
e0dba41065 refactor(chalice): upgraded dependencies 2025-03-27 14:18:08 +01:00
Taha Yassine Kraiem
8fbaf25799 feat(DB): use incremental&refreshable materialized views to fill extra tables 2025-03-27 14:18:08 +01:00
Shekar Siri
65072f607f refactor(auth): separate SSO support from enterprise edition
Add dedicated isSSOSupported property to correctly identify when SSO
authentication is available, properly handling the 'msaas' edition
case separately from enterprise edition checks. This fixes SSO
visibility in the login interface.
2025-03-27 12:28:37 +01:00
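
The split might look roughly like this; the exact edition values and which checks include 'msaas' are assumptions based on this and the earlier msaas commit:

```ts
type Edition = 'foss' | 'ee' | 'msaas';

class UserStore {
  constructor(private edition: Edition) {}

  // Full enterprise feature set.
  get isEnterprise(): boolean {
    return this.edition === 'ee';
  }

  // SSO is tracked separately so the login UI can offer SSO on 'msaas'
  // without treating it as full enterprise everywhere else.
  get isSSOSupported(): boolean {
    return this.edition === 'ee' || this.edition === 'msaas';
  }
}
```
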
nick-delirium
cb4bf932c4
ui: fix fresh sessions lookup 2025-03-27 10:57:37 +01:00
nick-delirium
20b938365c
ui: minor session list fixes 2025-03-27 10:43:30 +01:00
Taha Yassine Kraiem
8e68ebd52b refactor(chalice): changed user-journey
(cherry picked from commit fc86555644)
2025-03-27 10:25:47 +01:00
nick-delirium
293382ea85
tracker: 16.1.1 2025-03-27 09:34:12 +01:00
nick-delirium
ac35bf5179
tracker: assist 11.0.3 clicks fix 2025-03-26 17:47:14 +01:00
nick-delirium
eb610d1c21 tracker: fix remote control clicks on svg 2025-03-26 17:42:27 +01:00
Delirium
ac0ccb2169
ui: shrink icons when no space, adjust player area for events export … (#3217)
* ui: shrink icons when no space, adjust player area for events export panel, fix panel size

* ui: rm log
2025-03-26 16:37:45 +01:00
Taha Yassine Kraiem
20a57d7ca1 feat(chalice): initial lexicon for events & properties 2025-03-26 13:27:42 +01:00
Taha Yassine Kraiem
856e716507 refactor(chalice): changed product analytics to return full filters with possible types 2025-03-26 13:27:42 +01:00
Taha Yassine Kraiem
bb17f672fe feat(DB): use incremental&refreshable materialized views to fill extra tables 2025-03-26 13:27:42 +01:00
nick-delirium
d087736df0
ui: fix default series state, fix metricType in comparison 2025-03-26 10:05:03 +01:00
Shekar Siri
ce546bcfa3 fix(ui): adjust CirclePlay icon spacing in player controls
Add marginLeft style property to eliminate unwanted spacing between
the text and icon in the "Play Full Session" button, improving the
visual alignment and consistency of the player controls.
2025-03-25 18:36:27 +01:00
Shekar Siri
9f681aca45 fix(dashboard): update filter condition in MetricsList
Change the filter type comparison from checking against 'all' to
checking against an empty string. This ensures proper filtering
behavior when filtering metrics in the dashboard component.
2025-03-25 18:08:29 +01:00
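
The comparison change reduces to a one-line difference; a sketch with an illustrative metric shape and variable names:

```ts
interface Metric {
  type: string;
}

function filterMetrics(metrics: Metric[], typeFilter: string): Metric[] {
  // '' now means "no filter"; previously the sentinel was 'all'.
  return typeFilter === ''
    ? metrics
    : metrics.filter((m) => m.type === typeFilter);
}
```
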
Taha Yassine Kraiem
0500f30d14 feat(DB): use incremental materialized views to fill extra tables
refactor(chalice): changed product analytics
2025-03-25 17:44:31 +01:00
Taha Yassine Kraiem
ec2c42c688 refactor(DB): changed product analytics DB structure 2025-03-25 17:44:31 +01:00
Taha Yassine Kraiem
7f0bc100f5 refactor(chalice): changed product analytics search payload 2025-03-25 17:44:31 +01:00
Taha Yassine Kraiem
522a985ef3 refactor(chalice): refactored pagination-query-string 2025-03-25 17:44:31 +01:00
nick-delirium
634d0e8a0f ui: rm speed index card 2025-03-25 17:39:14 +01:00
nick-delirium
28b4fc7598 ui: upgrade react to 19.0.0 2025-03-25 17:39:14 +01:00
Alexander
0d4c256ca8 feat(tasks): removed unnecessary wrapper 2025-03-25 17:16:57 +01:00
Alexander
35f63a8fb1 feat(dbpool): fixed an issue in metrics call 2025-03-25 17:02:06 +01:00
nick-delirium
a4e96822ed
spot: skip saas check for ingest 2025-03-25 16:52:48 +01:00
Alexander
96f984a76a feat(spot): fixed an issue in metrics call 2025-03-25 16:46:21 +01:00
nick-delirium
5f15dfafe7 ui: auto detect ingest for spot (if not cloud) 2025-03-25 16:05:36 +01:00
nick-delirium
b9cca6b388
spot: restore currtime after thumbnail 2025-03-25 15:44:07 +01:00
nick-delirium
712f07988e spot: fix deps 2025-03-25 15:39:02 +01:00
nick-delirium
08bddb3165 switch meta tag to mp4 2025-03-25 15:39:02 +01:00
nick-delirium
3efb879cdf spot: up audio bitrate a bit 2025-03-25 15:39:02 +01:00
nick-delirium
ccf44fda70 spot: try mp4 support with avc1 2025-03-25 15:39:02 +01:00
nick-delirium
ce525a4ccf spot: more fixes for network, reinit stage for content script 2025-03-25 15:39:02 +01:00
nick-delirium
c6299c4592 spot: add err ctx, add iterator for values 2025-03-25 15:39:02 +01:00
nick-delirium
a371c79151 spot: more fixes for debugger approach, check settings before enabling network 2025-03-25 15:39:02 +01:00
nick-delirium
f59a8c24f4 spot: small refactoring + testing debugger for network capture 2025-03-25 15:39:02 +01:00
nick-delirium
8be6f63711 spot: .14
Signed-off-by: nick-delirium <nikita@openreplay.com>
2025-03-25 15:39:02 +01:00
nick-delirium
8ba35b1324 spot: mix network requests with webRequest data for better tracking 2025-03-25 15:39:02 +01:00
nick-delirium
28dea3b225
tracker: release 16.1.0 2025-03-25 15:17:43 +01:00
Andrey Babushkin
666643a6ae
Improve tracker performance (#3208)
* fix(helm): add CORS config to Assist ingress

Configure CORS headers and debug session information for the Assist
service's ingress to ensure proper cross-origin requests handling and
improved debugging capabilities.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* increase performance ticker and remove empty batches

* add commit

* updated Changelog

---------

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
Co-authored-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-25 15:08:49 +01:00
nick-delirium
4cf688f15c spot: update network proxy for auto sanitizer 2025-03-25 14:52:43 +01:00
nick-delirium
1e57c90449 networkProxy: auto sanitize sensitive tokens 2025-03-25 14:52:43 +01:00
Alexander
c0678bab15 feat(db): insert current_url for errors and issues 2025-03-25 14:09:37 +01:00
rjshrjndrn
187a69a61a fix(assist): ingress session id
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-25 11:26:40 +01:00
rjshrjndrn
2e96a072e9 chore(http): remove default token_string
scripts/helmcharts/openreplay/charts/http/scripts/entrypoint.sh

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-24 19:33:49 +01:00
Andrey Babushkin
5a410e63b3
Update batch writer (#3205)
* fix(helm): add CORS config to Assist ingress

Configure CORS headers and debug session information for the Assist
service's ingress to ensure proper cross-origin requests handling and
improved debugging capabilities.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* add timestamp to prepare method

* update test

---------

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
Co-authored-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-24 17:25:09 +01:00
Shekar Siri
300a857a5c fix(userStore): simplify error handling on save
Replace complex error parsing with direct error message display.
This improves code maintainability while still providing useful
feedback to users when saving their account data fails.
2025-03-24 17:14:14 +01:00
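
A sketch of "direct error message display" replacing nested error parsing; the endpoint and toast helper are assumptions:

```ts
async function saveAccount(
  data: Record<string, unknown>,
  toast: (msg: string) => void,
): Promise<void> {
  try {
    const res = await fetch('/api/account', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(data),
    });
    if (!res.ok) throw new Error(`Save failed with status ${res.status}`);
  } catch (err) {
    // Before: walk a nested error payload. After: show the message as-is.
    toast(err instanceof Error ? err.message : 'Failed to save account data');
  }
}
```
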
nick-delirium
eba22e0efa
ui: always show sessiontags 2025-03-24 17:12:18 +01:00
rjshrjndrn
664f6b9014 feat(helm): add TOKEN_SECRET environment variable
Add TOKEN_SECRET environment variable to HTTP service deployment and
generate a random value for it in vars.yaml.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-24 16:53:06 +01:00
nick-delirium
5bbd7cff10 tracker: change nodebound warn level 2025-03-24 15:09:03 +01:00
nick-delirium
6f172d4f01 tracker: keep spaces, remove data from page location msg 2025-03-24 15:09:03 +01:00
nick-delirium
829e1c8bde tracker: fix jest config, update test cases 2025-03-24 15:09:03 +01:00
nick-delirium
e7d309dadf tracker: "secure by default" mode; 16.1.0 2025-03-24 15:09:03 +01:00
nick-delirium
4bac12308a tracker: secure mode for sanitizer settings 2025-03-24 15:09:03 +01:00
nick-delirium
2aba1d9a52 ui: comments etc 2025-03-24 15:06:00 +01:00
nick-delirium
1f4e32e4f2 ui: improve network panel row mapping 2025-03-24 15:06:00 +01:00
nick-delirium
49f98967d6 ui: fixes for onboarding ui 2025-03-24 14:27:37 +01:00
PiRDub
356fa02094
fix(GraphQL): remove unused useTranslation hook (#3200) 2025-03-24 13:30:19 +01:00
Andrey Babushkin
a8e47e59ad
Update batch writer (#3198)
* fix(helm): add CORS config to Assist ingress

Configure CORS headers and debug session information for the Assist
service's ingress to ensure proper cross-origin requests handling and
improved debugging capabilities.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* add timestamp to prepare method

---------

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
Co-authored-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-24 12:42:05 +01:00
nick-delirium
c760d29fb4
ui: update icon in langbanner 2025-03-24 11:10:43 +01:00
nick-delirium
d77a518cf0 ui: change language selection ui 2025-03-24 11:09:22 +01:00
Alexander
e04c2aa251 feat(ender): handle the negative duration sessions 2025-03-24 10:02:42 +01:00
rjshrjndrn
e6eb41536d fix(helm): improve session routing and CORS handling
- Add http-snippet with map function to extract sessionID from peerId
- Update ingress annotations for Assist chart
- Add proper CORS headers to support cross-origin requests
- Remove debugging headers that were previously enabled

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-22 20:10:52 +01:00
Mehdi Osman
4b3ad60565
Revert transport to websocket only 2025-03-22 13:55:45 -04:00
Mehdi Osman
90669b0604
Revert to websocket 2025-03-22 13:53:47 -04:00
Taha Yassine Kraiem
f4bf1b8960 fix(chalice): fixed wrong import 2025-03-21 16:58:34 +01:00
nick-delirium
70423c6d8e
experimental: only polling for assist 2025-03-21 16:38:43 +01:00
Taha Yassine Kraiem
ae313c17d4 feat(chalice): search product analytics 2025-03-21 16:22:49 +01:00
rjshrjndrn
0e45fa53ad fix(helm): add CORS config to Assist ingress
Configure CORS headers and debug session information for the Assist
service's ingress to ensure proper cross-origin requests handling and
improved debugging capabilities.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-21 16:06:41 +01:00
nick-delirium
fe20f83130
ui: add inuse error for signup 2025-03-21 15:50:25 +01:00
rjshrjndrn
d04e6686ca fix(helm): add CORS config to Assist ingress
Configure CORS headers and debug session information for the Assist
service's ingress to ensure proper cross-origin requests handling and
improved debugging capabilities.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-21 15:50:15 +01:00
Shekar Siri
6adb45e15f fix(auth): remove unnecessary captcha token validation
The token validation checks were redundant as the validation is already
handled by the captcha wrapper component. This change simplifies the
password reset flow while maintaining security.
2025-03-21 15:42:28 +01:00
Andrey Babushkin
a1337faeee
combine in 1 line (#3191) 2025-03-21 15:19:32 +01:00
nick-delirium
7e065ab02f
tracker: 16.0.3, fix local spritemap parsing 2025-03-21 15:10:00 +01:00
nick-delirium
1e2dde09b4
ui: onboarding fixes 2025-03-21 10:43:51 +01:00
nick-delirium
3cdfe76134
ui: add sessionId header for AssistManager.ts 2025-03-21 10:18:33 +01:00
nick-delirium
39855651d5
ui: use polling for first request 2025-03-21 09:52:00 +01:00
Taha Yassine Kraiem
dd469d2349 refactor(chalice): initial product analytics 2025-03-20 17:13:17 +01:00
Taha Yassine Kraiem
3d448320bf refactor(DB): changed DB structure for product analytics 2025-03-20 17:13:17 +01:00
Taha Yassine Kraiem
7b0771a581 refactor(chalice): upgraded dependencies 2025-03-20 17:13:17 +01:00
Taha Yassine Kraiem
988b396223 refactor(chalice): moved CH sessions-search to FOSS
refactor(DB): changed DB structures for CH sessions-search in FOSS
refactor(DB): preparing for v1.23.0
2025-03-20 17:13:17 +01:00
nick-delirium
fa3b585785
ui: fix table column export 2025-03-20 16:06:48 +01:00
Alexander
91e0ebeb56 feat(assist): improved caching mechanism for cluster mode 2025-03-20 13:52:14 +01:00
rjshrjndrn
8e68eb9a20 feat(assist): enhance WebSocket session persistence
Add session extraction from peerId parameter for better WebSocket
connection stability. This improves assist session routing by:

- Extracting sessionID from peerId parameter using regex
- Setting upstream hash-by to use the extracted session ID
- Adding debug headers to monitor session routing

TODO: Convert this to map

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2025-03-20 12:38:36 +01:00
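
The routing change itself is nginx configuration, but the extraction it relies on can be sketched in TypeScript; the peerId format is an assumption inferred from the message:

```ts
// Assume peerId looks like "<projectKey>-<sessionId>" with a numeric tail.
function sessionIdFromPeerId(peerId: string): string | null {
  const match = peerId.match(/(\d+)$/);
  return match ? match[1] : null;
}

// Example: the extracted ID would be used as the upstream hash key.
const hashKey = sessionIdFromPeerId('project-abc-1742391876') ?? 'fallback';
```
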
nick-delirium
13bd3d9121
tracker: add sessId header for assist polling 2025-03-20 12:13:40 +01:00
nick-delirium
048ae0913c
ui: refetch live session list on proj change 2025-03-19 17:36:33 +01:00
Shekar Siri
73fff8b817 feat(auth): support msaas edition for enterprise features
Add msaas to the isEnterprise check alongside ee edition to properly
display enterprise features. Use userStore.isEnterprise in SSOLogin
component instead of directly checking authDetails.edition for consistent
enterprise status detection.
2025-03-19 14:40:05 +01:00
Shekar Siri
605fa96a34
feat(auth): implement withCaptcha HOC for consistent reCAPTCHA (#3175)
* refactor(searchStore): reformat filterMap function parameters (#3166)

- Reformat the parameters of the filterMap function for better readability.
- Comment out the fetchSessions call in clearSearch method to avoid unnecessary session fetch.

* Increment frontend chart version (#3167)

Co-authored-by: GitHub Action <action@github.com>

* refactor(chalice): cleaned code
fix(chalice): fixed session-search-pg sortKey issue
fix(chalice): fixed CH-query-formatter to handle special chars
fix(chalice): fixed /ids response

* feat(auth): implement withCaptcha HOC for consistent reCAPTCHA

This commit refactors the reCAPTCHA implementation across the application
by introducing a Higher Order Component (withCaptcha) that encapsulates
captcha verification logic. The changes:

- Create a reusable withCaptcha HOC in withRecaptcha.tsx
- Refactor Login, ResetPasswordRequest, and CreatePassword components
- Extract SSOLogin into a separate component
- Improve error handling and user feedback
- Standardize loading and verification states across forms
- Make captcha implementation more maintainable and consistent

---------

Co-authored-by: Mehdi Osman <estradino@users.noreply.github.com>
Co-authored-by: GitHub Action <action@github.com>
Co-authored-by: Taha Yassine Kraiem <tahayk2@gmail.com>
2025-03-19 11:37:50 +01:00
Andrey Babushkin
2cb33d7894
change sort events logic (#3174) 2025-03-18 18:27:48 +01:00
nick-delirium
15d427418d
tracker: fix autogen version 2025-03-18 16:37:09 +01:00
nick-delirium
ed3e553726
tracker: assist 11.0.1 changelog 2025-03-18 16:36:10 +01:00
nick-delirium
7eace68de6
ui: add loading state for LiveSessionReloadButton.tsx 2025-03-18 15:30:24 +01:00
Taha Yassine Kraiem
8009882cef refactor(chalice): cleaned code
fix(chalice): fixed session-search-pg sortKey issue
fix(chalice): fixed CH-query-formatter to handle special chars
fix(chalice): fixed /ids response

(cherry picked from commit b505645782)
2025-03-18 13:52:56 +01:00
Andrey Babushkin
7365d8639c
updated widget link (#3158)
* updated widget link

* fix calls

* updated widget url
2025-03-18 11:07:09 +01:00
nick-delirium
4c967d4bc1
ui: update tracker import examples 2025-03-17 13:42:34 +01:00
Alexander
3fdf799bd7 feat(http): unsupported tracker error with projectID in logs 2025-03-17 13:32:00 +01:00
nick-delirium
9aca716e6b
tracker: 16.0.2 fix str dictionary keys 2025-03-17 11:25:54 +01:00
Shekar Siri
cf9ecdc9a4 refactor(searchStore): reformat filterMap function parameters
- Reformat the parameters of the filterMap function for better readability.
- Comment out the fetchSessions call in clearSearch method to avoid unnecessary session fetch.
2025-03-14 19:47:42 +01:00
625 changed files with 35495 additions and 12242 deletions


@ -47,6 +47,7 @@ runs:
"JWT_SECRET:.global.jwtSecret"
"JWT_SPOT_REFRESH_SECRET:.chalice.env.JWT_SPOT_REFRESH_SECRET"
"JWT_SPOT_SECRET:.global.jwtSpotSecret"
"JWT_SECRET:.global.tokenSecret"
"LICENSE_KEY:.global.enterpriseEditionLicense"
"MINIO_ACCESS_KEY:.global.s3.accessKey"
"MINIO_SECRET_KEY:.global.s3.secretKey"

.github/workflows/assist-server-ee.yaml (new file, 122 lines)

@ -0,0 +1,122 @@
# This action will push the assist changes to aws
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip Security checks if there is a unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
paths:
- "ee/assist-server/**"
name: Build and Deploy Assist-Server EE
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.EE_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.EE_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.EE_LICENSE_KEY }}
minio_access_key: ${{ secrets.EE_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.EE_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.EE_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Building and Pushing Assist-Server image
id: build-image
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}-ee
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd assist-server
PUSH_IMAGE=0 bash -x ./build.sh ee
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("assist-server")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("assist-server")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
# We have to strip off the -ee, as helm will append it.
tag: `echo ${image_array[1]} | cut -d '-' -f 1`
EOF
done
- name: Deploy to kubernetes
run: |
pwd
cd scripts/helmcharts/
# Update changed image tag
sed -i "/assist-server/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,assist-server,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks --kube-version=$k_version | kubectl apply -f -
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# We're not passing -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging

.github/workflows/frontend-tests.yaml (new file, 33 lines)

@ -0,0 +1,33 @@
name: Frontend tests
on:
pull_request:
paths:
- 'frontend/**'
- '.github/workflows/frontend-tests.yaml'
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Set up Node.js
uses: actions/setup-node@v3
with:
node-version: 20
- name: Install dependencies
working-directory: frontend
run: yarn
- name: Run tests
working-directory: frontend
run: yarn test:ci
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v3
with:
directory: frontend/coverage/

.github/workflows/patch-build-old.yaml (new file, 189 lines)

@ -0,0 +1,189 @@
# Ref: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions
on:
workflow_dispatch:
inputs:
services:
description: 'Comma separated names of services to build(in small letters).'
required: true
default: 'chalice,frontend'
tag:
description: 'Tag to update.'
required: true
type: string
branch:
description: 'Branch to build patches from. Make sure the branch is up to date with the tag, otherwise it will cause missing commits.'
required: true
type: string
name: Build patches from tag, rewrite commit HEAD to older timestamp, and Push the tag
jobs:
deploy:
name: Build Patch from old tag
runs-on: ubuntu-latest
env:
DEPOT_TOKEN: ${{ secrets.DEPOT_TOKEN }}
DEPOT_PROJECT_ID: ${{ secrets.DEPOT_PROJECT_ID }}
steps:
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 4
ref: ${{ github.event.inputs.tag }}
- name: Set Remote with GITHUB_TOKEN
run: |
git config --unset http.https://github.com/.extraheader
git remote set-url origin https://x-access-token:${{ secrets.ACTIONS_COMMMIT_TOKEN }}@github.com/${{ github.repository }}.git
- name: Create backup tag with timestamp
run: |
set -e # Exit immediately if a command exits with a non-zero status
TIMESTAMP=$(date +%Y%m%d%H%M%S)
BACKUP_TAG="${{ github.event.inputs.tag }}-backup-${TIMESTAMP}"
echo "BACKUP_TAG=${BACKUP_TAG}" >> $GITHUB_ENV
echo "INPUT_TAG=${{ github.event.inputs.tag }}" >> $GITHUB_ENV
git tag $BACKUP_TAG || { echo "Failed to create backup tag"; exit 1; }
git push origin $BACKUP_TAG || { echo "Failed to push backup tag"; exit 1; }
echo "Created backup tag: $BACKUP_TAG"
# Get the oldest commit date from the last 3 commits in raw format
OLDEST_COMMIT_TIMESTAMP=$(git log -3 --pretty=format:"%at" | tail -1)
echo "Oldest commit timestamp: $OLDEST_COMMIT_TIMESTAMP"
# Add 1 second to the timestamp
NEW_TIMESTAMP=$((OLDEST_COMMIT_TIMESTAMP + 1))
echo "NEW_TIMESTAMP=$NEW_TIMESTAMP" >> $GITHUB_ENV
- name: Setup yq
uses: mikefarah/yq@master
# Configure AWS credentials for the first registry
- name: Configure AWS credentials for RELEASE_ARM_REGISTRY
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_DEPOT_ACCESS_KEY }}
aws-secret-access-key: ${{ secrets.AWS_DEPOT_SECRET_KEY }}
aws-region: ${{ secrets.AWS_DEPOT_DEFAULT_REGION }}
- name: Login to Amazon ECR for RELEASE_ARM_REGISTRY
id: login-ecr-arm
run: |
aws ecr get-login-password --region ${{ secrets.AWS_DEPOT_DEFAULT_REGION }} | docker login --username AWS --password-stdin ${{ secrets.RELEASE_ARM_REGISTRY }}
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin ${{ secrets.RELEASE_OSS_REGISTRY }}
- uses: depot/setup-action@v1
- name: Get HEAD Commit ID
run: echo "HEAD_COMMIT_ID=$(git rev-parse HEAD)" >> $GITHUB_ENV
- name: Define Branch Name
run: echo "BRANCH_NAME=${{inputs.branch}}" >> $GITHUB_ENV
- name: Build
id: build-image
env:
DOCKER_REPO_ARM: ${{ secrets.RELEASE_ARM_REGISTRY }}
DOCKER_REPO_OSS: ${{ secrets.RELEASE_OSS_REGISTRY }}
MSAAS_REPO_CLONE_TOKEN: ${{ secrets.MSAAS_REPO_CLONE_TOKEN }}
MSAAS_REPO_URL: ${{ secrets.MSAAS_REPO_URL }}
MSAAS_REPO_FOLDER: /tmp/msaas
run: |
set -exo pipefail
git config --local user.email "action@github.com"
git config --local user.name "GitHub Action"
git checkout -b $BRANCH_NAME
working_dir=$(pwd)
function image_version(){
local service=$1
chart_path="$working_dir/scripts/helmcharts/openreplay/charts/$service/Chart.yaml"
current_version=$(yq eval '.AppVersion' $chart_path)
new_version=$(echo $current_version | awk -F. '{$NF += 1 ; print $1"."$2"."$3}')
echo $new_version
# yq eval ".AppVersion = \"$new_version\"" -i $chart_path
}
function clone_msaas() {
[ -d $MSAAS_REPO_FOLDER ] || {
git clone -b $INPUT_TAG --recursive https://x-access-token:$MSAAS_REPO_CLONE_TOKEN@$MSAAS_REPO_URL $MSAAS_REPO_FOLDER
cd $MSAAS_REPO_FOLDER
cd openreplay && git fetch origin && git checkout $INPUT_TAG
git log -1
cd $MSAAS_REPO_FOLDER
bash git-init.sh
git checkout
}
}
function build_managed() {
local service=$1
local version=$2
echo building managed
clone_msaas
if [[ $service == 'chalice' ]]; then
cd $MSAAS_REPO_FOLDER/openreplay/api
else
cd $MSAAS_REPO_FOLDER/openreplay/$service
fi
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash build.sh >> /tmp/arm.txt
}
# Checking for backend images
ls backend/cmd >> /tmp/backend.txt
echo Services: "${{ github.event.inputs.services }}"
IFS=',' read -ra SERVICES <<< "${{ github.event.inputs.services }}"
BUILD_SCRIPT_NAME="build.sh"
# Build FOSS
for SERVICE in "${SERVICES[@]}"; do
# Check if service is backend
if grep -q $SERVICE /tmp/backend.txt; then
cd backend
foss_build_args="nil $SERVICE"
ee_build_args="ee $SERVICE"
else
[[ $SERVICE == 'chalice' || $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && cd $working_dir/api || cd $SERVICE
[[ $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && BUILD_SCRIPT_NAME="build_${SERVICE}.sh"
ee_build_args="ee"
fi
version=$(image_version $SERVICE)
echo IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
echo IMAGE_TAG=$version-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
IMAGE_TAG=$version-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
if [[ "$SERVICE" != "chalice" && "$SERVICE" != "frontend" ]]; then
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
echo IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
else
build_managed $SERVICE $version
fi
cd $working_dir
chart_path="$working_dir/scripts/helmcharts/openreplay/charts/$SERVICE/Chart.yaml"
yq eval ".AppVersion = \"$version\"" -i $chart_path
git add $chart_path
git commit -m "Increment $SERVICE chart version"
done
- name: Change commit timestamp
run: |
# Convert the timestamp to a date format git can understand
NEW_DATE=$(perl -le 'print scalar gmtime($ARGV[0])." +0000"' $NEW_TIMESTAMP)
echo "Setting commit date to: $NEW_DATE"
# Amend the commit with the new date
GIT_COMMITTER_DATE="$NEW_DATE" git commit --amend --no-edit --date="$NEW_DATE"
# Verify the change
git log -1 --pretty=format:"Commit now dated: %cD"
# git tag and push
git tag $INPUT_TAG -f
git push origin $INPUT_TAG -f
# - name: Debug Job
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO_ARM: ${{ secrets.RELEASE_ARM_REGISTRY }}
# DOCKER_REPO_OSS: ${{ secrets.RELEASE_OSS_REGISTRY }}
# MSAAS_REPO_CLONE_TOKEN: ${{ secrets.MSAAS_REPO_CLONE_TOKEN }}
# MSAAS_REPO_URL: ${{ secrets.MSAAS_REPO_URL }}
# MSAAS_REPO_FOLDER: /tmp/msaas
# with:
# limit-access-to-actor: true


@ -147,8 +147,7 @@ jobs:
pr_title: "Updated patch build from main ${{ env.HEAD_COMMIT_ID }}"
pr_body: |
This PR updates the Helm chart version after building the patch from $HEAD_COMMIT_ID.
Once this PR is merged, To update the latest tag, run the following workflow.
https://github.com/openreplay/openreplay/actions/workflows/update-tag.yaml
Once this PR is merged, the tag update job will run automatically.
# - name: Debug Job
# if: ${{ failure() }}


@ -22,22 +22,14 @@ jobs:
- name: Cache tracker modules
uses: actions/cache@v3
with:
path: tracker/tracker/node_modules
key: ${{ runner.OS }}-test_tracker_build-${{ hashFiles('**/bun.lockb') }}
restore-keys: |
test_tracker_build{{ runner.OS }}-build-
test_tracker_build{{ runner.OS }}-
- name: Cache tracker-assist modules
uses: actions/cache@v3
with:
path: tracker/tracker-assist/node_modules
key: ${{ runner.OS }}-test_tracker_build-${{ hashFiles('**/bun.lockb') }}
path: tracker/node_modules
key: ${{ runner.OS }}-test_tracker_build-${{ hashFiles('**/bun.lock') }}
restore-keys: |
test_tracker_build{{ runner.OS }}-build-
test_tracker_build{{ runner.OS }}-
- name: Setup Testing packages
run: |
cd tracker/tracker
cd tracker
bun install
- name: Jest tests
run: |
@ -47,10 +39,6 @@ jobs:
run: |
cd tracker/tracker
bun run build
- name: (TA) Setup Testing packages
run: |
cd tracker/tracker-assist
bun install
- name: (TA) Jest tests
run: |
cd tracker/tracker-assist


@ -1,35 +1,43 @@
on:
workflow_dispatch:
description: "This workflow will build for patches for latest tag, and will Always use commit from main branch."
inputs:
services:
description: "This action will update the latest tag with current main branch HEAD. Should I proceed ? true/false"
required: true
default: "false"
name: Force Push tag with main branch HEAD
pull_request:
types: [closed]
branches:
- main
name: Release tag update --force
jobs:
deploy:
name: Build Patch from main
runs-on: ubuntu-latest
env:
DEPOT_TOKEN: ${{ secrets.DEPOT_TOKEN }}
DEPOT_PROJECT_ID: ${{ secrets.DEPOT_PROJECT_ID }}
if: ${{ (github.event_name == 'pull_request' && github.event.pull_request.merged == true) || github.event.inputs.services == 'true' }}
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Get latest release tag using GitHub API
id: get-latest-tag
run: |
LATEST_TAG=$(curl -s -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
"https://api.github.com/repos/${{ github.repository }}/releases/latest" \
| jq -r .tag_name)
# Fail if the API doesn't return a tag
if [ "$LATEST_TAG" == "null" ] || [ -z "$LATEST_TAG" ]; then
echo "Latest tag not found"
exit 100
fi
echo "LATEST_TAG=$LATEST_TAG" >> $GITHUB_ENV
echo "Latest tag: $LATEST_TAG"
- name: Set Remote with GITHUB_TOKEN
run: |
git config --unset http.https://github.com/.extraheader
git remote set-url origin https://x-access-token:${{ secrets.ACTIONS_COMMMIT_TOKEN }}@github.com/${{ github.repository }}.git
- name: Push main branch to tag
run: |
git fetch --tags
git checkout main
git push origin HEAD:refs/tags/$(git tag --list 'v[0-9]*' --sort=-v:refname | head -n 1) --force
# - name: Debug Job
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# with:
# limit-access-to-actor: true
echo "Updating tag ${{ env.LATEST_TAG }} to point to latest commit on main"
git push origin HEAD:refs/tags/${{ env.LATEST_TAG }} --force

.gitignore (1 line added)

@ -7,3 +7,4 @@ node_modules
**/*.envrc
.idea
*.mob*
install-state.gz


@ -1,7 +1,7 @@
repos:
- repo: https://github.com/gitguardian/ggshield
rev: v1.14.5
rev: v1.38.0
hooks:
- id: ggshield
language_version: python3
stages: [commit]
stages: [pre-commit]


@ -4,26 +4,24 @@ verify_ssl = true
name = "pypi"
[packages]
urllib3 = "==2.3.0"
urllib3 = "==2.4.0"
requests = "==2.32.3"
boto3 = "==1.36.12"
boto3 = "==1.38.16"
pyjwt = "==2.10.1"
psycopg2-binary = "==2.9.10"
psycopg = {extras = ["pool", "binary"], version = "==3.2.4"}
clickhouse-driver = {extras = ["lz4"], version = "==0.2.9"}
clickhouse-connect = "==0.8.15"
elasticsearch = "==8.17.1"
psycopg = {extras = ["binary", "pool"], version = "==3.2.9"}
clickhouse-connect = "==0.8.17"
elasticsearch = "==9.0.1"
jira = "==3.8.0"
cachetools = "==5.5.1"
fastapi = "==0.115.8"
uvicorn = {extras = ["standard"], version = "==0.34.0"}
cachetools = "==5.5.2"
fastapi = "==0.115.12"
uvicorn = {extras = ["standard"], version = "==0.34.2"}
python-decouple = "==3.8"
pydantic = {extras = ["email"], version = "==2.10.6"}
pydantic = {extras = ["email"], version = "==2.11.4"}
apscheduler = "==3.11.0"
redis = "==5.2.1"
redis = "==6.1.0"
[dev-packages]
[requires]
python_version = "3.12"
python_full_version = "3.12.8"


@ -16,7 +16,7 @@ from chalicelib.utils import helper
from chalicelib.utils import pg_client, ch_client
from crons import core_crons, core_dynamic_crons
from routers import core, core_dynamic
from routers.subs import insights, metrics, v1_api, health, usability_tests, spot, product_anaytics
from routers.subs import insights, metrics, v1_api, health, usability_tests, spot, product_analytics
loglevel = config("LOGLEVEL", default=logging.WARNING)
print(f">Loglevel set to: {loglevel}")
@ -129,6 +129,6 @@ app.include_router(spot.public_app)
app.include_router(spot.app)
app.include_router(spot.app_apikey)
app.include_router(product_anaytics.public_app)
app.include_router(product_anaytics.app)
app.include_router(product_anaytics.app_apikey)
app.include_router(product_analytics.public_app, prefix="/pa")
app.include_router(product_analytics.app, prefix="/pa")
app.include_router(product_analytics.app_apikey, prefix="/pa")


@ -0,0 +1,11 @@
import logging
from decouple import config
logging.basicConfig(level=config("LOGLEVEL", default=logging.INFO))
if config("EXP_AUTOCOMPLETE", cast=bool, default=False):
logging.info(">>> Using experimental autocomplete")
from . import autocomplete_ch as autocomplete
else:
from . import autocomplete


@ -1,10 +1,9 @@
import logging
import schemas
from chalicelib.core import countries, events, metadata
from chalicelib.core import countries, metadata
from chalicelib.utils import helper
from chalicelib.utils import pg_client
from chalicelib.utils.event_filter_definition import Event
from chalicelib.utils.or_cache import CachedResponse
logger = logging.getLogger(__name__)
TABLE = "public.autocomplete"
@ -85,7 +84,8 @@ def __generic_query(typename, value_length=None):
ORDER BY value"""
if value_length is None or value_length > 2:
return f"""(SELECT DISTINCT value, type
return f"""SELECT DISTINCT ON(value,type) value, type
((SELECT DISTINCT value, type
FROM {TABLE}
WHERE
project_id = %(project_id)s
@ -101,7 +101,7 @@ def __generic_query(typename, value_length=None):
AND type='{typename.upper()}'
AND value ILIKE %(value)s
ORDER BY value
LIMIT 5);"""
LIMIT 5)) AS raw;"""
return f"""SELECT DISTINCT value, type
FROM {TABLE}
WHERE
@ -112,10 +112,10 @@ def __generic_query(typename, value_length=None):
LIMIT 10;"""
def __generic_autocomplete(event: Event):
def __generic_autocomplete(event: str):
def f(project_id, value, key=None, source=None):
with pg_client.PostgresClient() as cur:
query = __generic_query(event.ui_type, value_length=len(value))
query = __generic_query(event, value_length=len(value))
params = {"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value)}
cur.execute(cur.mogrify(query, params))
@ -148,8 +148,8 @@ def __errors_query(source=None, value_length=None):
return f"""((SELECT DISTINCT ON(lg.message)
lg.message AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
FROM {events.EventType.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
'{schemas.EventType.ERROR}' AS type
FROM events.errors INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.message ILIKE %(svalue)s
@ -160,8 +160,8 @@ def __errors_query(source=None, value_length=None):
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
FROM {events.EventType.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
'{schemas.EventType.ERROR}' AS type
FROM events.errors INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.name ILIKE %(svalue)s
@ -172,8 +172,8 @@ def __errors_query(source=None, value_length=None):
(SELECT DISTINCT ON(lg.message)
lg.message AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
FROM {events.EventType.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
'{schemas.EventType.ERROR}' AS type
FROM events.errors INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.message ILIKE %(value)s
@ -184,8 +184,8 @@ def __errors_query(source=None, value_length=None):
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
FROM {events.EventType.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
'{schemas.EventType.ERROR}' AS type
FROM events.errors INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.name ILIKE %(value)s
@ -195,8 +195,8 @@ def __errors_query(source=None, value_length=None):
return f"""((SELECT DISTINCT ON(lg.message)
lg.message AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
FROM {events.EventType.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
'{schemas.EventType.ERROR}' AS type
FROM events.errors INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.message ILIKE %(svalue)s
@ -207,8 +207,8 @@ def __errors_query(source=None, value_length=None):
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
FROM {events.EventType.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
'{schemas.EventType.ERROR}' AS type
FROM events.errors INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.name ILIKE %(svalue)s
@ -233,8 +233,8 @@ def __search_errors_mobile(project_id, value, key=None, source=None):
if len(value) > 2:
query = f"""(SELECT DISTINCT ON(lg.reason)
lg.reason AS value,
'{events.EventType.CRASH_MOBILE.ui_type}' AS type
FROM {events.EventType.CRASH_MOBILE.table} INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
'{schemas.EventType.ERROR_MOBILE}' AS type
FROM events_common.crashes INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
@ -243,8 +243,8 @@ def __search_errors_mobile(project_id, value, key=None, source=None):
UNION ALL
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
'{events.EventType.CRASH_MOBILE.ui_type}' AS type
FROM {events.EventType.CRASH_MOBILE.table} INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
'{schemas.EventType.ERROR_MOBILE}' AS type
FROM events_common.crashes INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
@ -253,8 +253,8 @@ def __search_errors_mobile(project_id, value, key=None, source=None):
UNION ALL
(SELECT DISTINCT ON(lg.reason)
lg.reason AS value,
'{events.EventType.CRASH_MOBILE.ui_type}' AS type
FROM {events.EventType.CRASH_MOBILE.table} INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
'{schemas.EventType.ERROR_MOBILE}' AS type
FROM events_common.crashes INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
@ -263,8 +263,8 @@ def __search_errors_mobile(project_id, value, key=None, source=None):
UNION ALL
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
'{events.EventType.CRASH_MOBILE.ui_type}' AS type
FROM {events.EventType.CRASH_MOBILE.table} INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
'{schemas.EventType.ERROR_MOBILE}' AS type
FROM events_common.crashes INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
@ -273,8 +273,8 @@ def __search_errors_mobile(project_id, value, key=None, source=None):
else:
query = f"""(SELECT DISTINCT ON(lg.reason)
lg.reason AS value,
'{events.EventType.CRASH_MOBILE.ui_type}' AS type
FROM {events.EventType.CRASH_MOBILE.table} INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
'{schemas.EventType.ERROR_MOBILE}' AS type
FROM events_common.crashes INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
@ -283,8 +283,8 @@ def __search_errors_mobile(project_id, value, key=None, source=None):
UNION ALL
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
'{events.EventType.CRASH_MOBILE.ui_type}' AS type
FROM {events.EventType.CRASH_MOBILE.table} INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
'{schemas.EventType.ERROR_MOBILE}' AS type
FROM events_common.crashes INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
@ -326,7 +326,7 @@ def __search_metadata(project_id, value, key=None, source=None):
AND {colname} ILIKE %(svalue)s LIMIT 5)""")
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""\
SELECT key, value, 'METADATA' AS TYPE
SELECT DISTINCT ON(key, value) key, value, 'METADATA' AS TYPE
FROM({" UNION ALL ".join(sub_from)}) AS all_metas
LIMIT 5;""", {"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value)}))
@ -376,7 +376,6 @@ def is_top_supported(event_type):
return TYPE_TO_COLUMN.get(event_type, False)
@CachedResponse(table="or_cache.autocomplete_top_values", ttl=5 * 60)
def get_top_values(project_id, event_type, event_key=None):
with pg_client.PostgresClient() as cur:
if schemas.FilterType.has_value(event_type):


@ -1,10 +1,9 @@
import logging
import schemas
from chalicelib.core import countries, events, metadata
from chalicelib.core import countries, metadata
from chalicelib.utils import ch_client
from chalicelib.utils import helper, exp_ch_helper
from chalicelib.utils.event_filter_definition import Event
from chalicelib.utils.or_cache import CachedResponse
logger = logging.getLogger(__name__)
TABLE = "experimental.autocomplete"
@ -86,7 +85,8 @@ def __generic_query(typename, value_length=None):
ORDER BY value"""
if value_length is None or value_length > 2:
return f"""(SELECT DISTINCT value, type
return f"""SELECT DISTINCT ON(value, type) value, type
FROM ((SELECT DISTINCT value, type
FROM {TABLE}
WHERE
project_id = %(project_id)s
@ -102,7 +102,7 @@ def __generic_query(typename, value_length=None):
AND type='{typename.upper()}'
AND value ILIKE %(value)s
ORDER BY value
LIMIT 5);"""
LIMIT 5)) AS raw;"""
return f"""SELECT DISTINCT value, type
FROM {TABLE}
WHERE
@ -113,7 +113,7 @@ def __generic_query(typename, value_length=None):
LIMIT 10;"""
def __generic_autocomplete(event: Event):
def __generic_autocomplete(event: str):
def f(project_id, value, key=None, source=None):
with ch_client.ClickHouseClient() as cur:
query = __generic_query(event.ui_type, value_length=len(value))
@ -149,7 +149,7 @@ def __pg_errors_query(source=None, value_length=None):
return f"""((SELECT DISTINCT ON(message)
message AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
'{schemas.EventType.ERROR}' AS type
FROM {MAIN_TABLE}
WHERE
project_id = %(project_id)s
@ -161,7 +161,7 @@ def __pg_errors_query(source=None, value_length=None):
(SELECT DISTINCT ON(name)
name AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
'{schemas.EventType.ERROR}' AS type
FROM {MAIN_TABLE}
WHERE
project_id = %(project_id)s
@ -172,7 +172,7 @@ def __pg_errors_query(source=None, value_length=None):
(SELECT DISTINCT ON(message)
message AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
'{schemas.EventType.ERROR}' AS type
FROM {MAIN_TABLE}
WHERE
project_id = %(project_id)s
@ -183,7 +183,7 @@ def __pg_errors_query(source=None, value_length=None):
(SELECT DISTINCT ON(name)
name AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
'{schemas.EventType.ERROR}' AS type
FROM {MAIN_TABLE}
WHERE
project_id = %(project_id)s
@ -193,7 +193,7 @@ def __pg_errors_query(source=None, value_length=None):
return f"""((SELECT DISTINCT ON(message)
message AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
'{schemas.EventType.ERROR}' AS type
FROM {MAIN_TABLE}
WHERE
project_id = %(project_id)s
@ -204,7 +204,7 @@ def __pg_errors_query(source=None, value_length=None):
(SELECT DISTINCT ON(name)
name AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
'{schemas.EventType.ERROR}' AS type
FROM {MAIN_TABLE}
WHERE
project_id = %(project_id)s
@ -257,10 +257,11 @@ def __search_metadata(project_id, value, key=None, source=None):
WHERE project_id = %(project_id)s
AND {colname} ILIKE %(svalue)s LIMIT 5)""")
with ch_client.ClickHouseClient() as cur:
query = cur.format(query=f"""SELECT key, value, 'METADATA' AS TYPE
query = cur.format(query=f"""SELECT DISTINCT ON(key, value) key, value, 'METADATA' AS TYPE
FROM({" UNION ALL ".join(sub_from)}) AS all_metas
LIMIT 5;""", parameters={"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value)})
LIMIT 5;""",
parameters={"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value)})
results = cur.execute(query)
return helper.list_to_camel_case(results)
@ -297,7 +298,6 @@ def is_top_supported(event_type):
return TYPE_TO_COLUMN.get(event_type, False)
@CachedResponse(table="or_cache.autocomplete_top_values", ttl=5 * 60)
def get_top_values(project_id, event_type, event_key=None):
with ch_client.ClickHouseClient() as cur:
if schemas.FilterType.has_value(event_type):


@ -1,3 +1,5 @@
import logging
import schemas
from chalicelib.core import metadata
from chalicelib.core.errors import errors_legacy
@ -7,6 +9,8 @@ from chalicelib.utils import ch_client, exp_ch_helper
from chalicelib.utils import helper, metrics_helper
from chalicelib.utils.TimeUTC import TimeUTC
logger = logging.getLogger(__name__)
def _multiple_values(values, value_key="value"):
query_values = {}
@ -338,14 +342,14 @@ def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, us
SELECT details.error_id as error_id,
name, message, users, total,
sessions, last_occurrence, first_occurrence, chart
FROM (SELECT JSONExtractString(toString(`$properties`), 'error_id') AS error_id,
FROM (SELECT error_id,
JSONExtractString(toString(`$properties`), 'name') AS name,
JSONExtractString(toString(`$properties`), 'message') AS message,
COUNT(DISTINCT user_id) AS users,
COUNT(DISTINCT events.session_id) AS sessions,
MAX(created_at) AS max_datetime,
MIN(created_at) AS min_datetime,
COUNT(DISTINCT JSONExtractString(toString(`$properties`), 'error_id'))
COUNT(DISTINCT error_id)
OVER() AS total
FROM {MAIN_EVENTS_TABLE} AS events
INNER JOIN (SELECT session_id, coalesce(user_id,toString(user_uuid)) AS user_id
@ -357,7 +361,7 @@ def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, us
GROUP BY error_id, name, message
ORDER BY {sort} {order}
LIMIT %(errors_limit)s OFFSET %(errors_offset)s) AS details
INNER JOIN (SELECT JSONExtractString(toString(`$properties`), 'error_id') AS error_id,
INNER JOIN (SELECT error_id,
toUnixTimestamp(MAX(created_at))*1000 AS last_occurrence,
toUnixTimestamp(MIN(created_at))*1000 AS first_occurrence
FROM {MAIN_EVENTS_TABLE}
@ -366,7 +370,7 @@ def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, us
GROUP BY error_id) AS time_details
ON details.error_id=time_details.error_id
INNER JOIN (SELECT error_id, groupArray([timestamp, count]) AS chart
FROM (SELECT JSONExtractString(toString(`$properties`), 'error_id') AS error_id,
FROM (SELECT error_id,
gs.generate_series AS timestamp,
COUNT(DISTINCT session_id) AS count
FROM generate_series(%(startDate)s, %(endDate)s, %(step_size)s) AS gs
@ -378,9 +382,9 @@ def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, us
ORDER BY timestamp) AS sub_table
GROUP BY error_id) AS chart_details ON details.error_id=chart_details.error_id;"""
# print("------------")
# print(ch.format(main_ch_query, params))
# print("------------")
logger.debug("------------")
logger.debug(ch.format(main_ch_query, params))
logger.debug("------------")
query = ch.format(query=main_ch_query, parameters=params)
rows = ch.execute(query=query)


@ -1,226 +0,0 @@
from functools import cache
from typing import Optional
import schemas
from chalicelib.core import issues
from chalicelib.core.autocomplete import autocomplete
from chalicelib.core.sessions import sessions_metas
from chalicelib.utils import pg_client, helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.event_filter_definition import SupportedFilter, Event
def get_customs_by_session_id(session_id, project_id):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify("""\
SELECT
c.*,
'CUSTOM' AS type
FROM events_common.customs AS c
WHERE
c.session_id = %(session_id)s
ORDER BY c.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows = cur.fetchall()
return helper.dict_to_camel_case(rows)
def __merge_cells(rows, start, count, replacement):
rows[start] = replacement
rows = rows[:start + 1] + rows[start + count:]
return rows
def __get_grouped_clickrage(rows, session_id, project_id):
click_rage_issues = issues.get_by_session_id(session_id=session_id, issue_type="click_rage", project_id=project_id)
if len(click_rage_issues) == 0:
return rows
for c in click_rage_issues:
merge_count = c.get("payload")
if merge_count is not None:
merge_count = merge_count.get("Count", 3)
else:
merge_count = 3
for i in range(len(rows)):
if rows[i]["timestamp"] == c["timestamp"]:
rows = __merge_cells(rows=rows,
start=i,
count=merge_count,
replacement={**rows[i], "type": "CLICKRAGE", "count": merge_count})
break
return rows
def get_by_session_id(session_id, project_id, group_clickrage=False, event_type: Optional[schemas.EventType] = None):
with pg_client.PostgresClient() as cur:
rows = []
if event_type is None or event_type == schemas.EventType.CLICK:
cur.execute(cur.mogrify("""\
SELECT
c.*,
'CLICK' AS type
FROM events.clicks AS c
WHERE
c.session_id = %(session_id)s
ORDER BY c.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows += cur.fetchall()
if group_clickrage:
rows = __get_grouped_clickrage(rows=rows, session_id=session_id, project_id=project_id)
if event_type is None or event_type == schemas.EventType.INPUT:
cur.execute(cur.mogrify("""
SELECT
i.*,
'INPUT' AS type
FROM events.inputs AS i
WHERE
i.session_id = %(session_id)s
ORDER BY i.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows += cur.fetchall()
if event_type is None or event_type == schemas.EventType.LOCATION:
cur.execute(cur.mogrify("""\
SELECT
l.*,
l.path AS value,
l.path AS url,
'LOCATION' AS type
FROM events.pages AS l
WHERE
l.session_id = %(session_id)s
ORDER BY l.timestamp;""", {"project_id": project_id, "session_id": session_id}))
rows += cur.fetchall()
rows = helper.list_to_camel_case(rows)
rows = sorted(rows, key=lambda k: (k["timestamp"], k["messageId"]))
return rows
def _search_tags(project_id, value, key=None, source=None):
with pg_client.PostgresClient() as cur:
query = f"""
SELECT public.tags.name
'TAG' AS type
FROM public.tags
WHERE public.tags.project_id = %(project_id)s
ORDER BY SIMILARITY(public.tags.name, %(value)s) DESC
LIMIT 10
"""
query = cur.mogrify(query, {'project_id': project_id, 'value': value})
cur.execute(query)
results = helper.list_to_camel_case(cur.fetchall())
return results
class EventType:
CLICK = Event(ui_type=schemas.EventType.CLICK, table="events.clicks", column="label")
INPUT = Event(ui_type=schemas.EventType.INPUT, table="events.inputs", column="label")
LOCATION = Event(ui_type=schemas.EventType.LOCATION, table="events.pages", column="path")
CUSTOM = Event(ui_type=schemas.EventType.CUSTOM, table="events_common.customs", column="name")
REQUEST = Event(ui_type=schemas.EventType.REQUEST, table="events_common.requests", column="path")
GRAPHQL = Event(ui_type=schemas.EventType.GRAPHQL, table="events.graphql", column="name")
STATEACTION = Event(ui_type=schemas.EventType.STATE_ACTION, table="events.state_actions", column="name")
TAG = Event(ui_type=schemas.EventType.TAG, table="events.tags", column="tag_id")
ERROR = Event(ui_type=schemas.EventType.ERROR, table="events.errors",
column=None) # column=None because errors are searched by name or message
METADATA = Event(ui_type=schemas.FilterType.METADATA, table="public.sessions", column=None)
# MOBILE
CLICK_MOBILE = Event(ui_type=schemas.EventType.CLICK_MOBILE, table="events_ios.taps", column="label")
INPUT_MOBILE = Event(ui_type=schemas.EventType.INPUT_MOBILE, table="events_ios.inputs", column="label")
VIEW_MOBILE = Event(ui_type=schemas.EventType.VIEW_MOBILE, table="events_ios.views", column="name")
SWIPE_MOBILE = Event(ui_type=schemas.EventType.SWIPE_MOBILE, table="events_ios.swipes", column="label")
CUSTOM_MOBILE = Event(ui_type=schemas.EventType.CUSTOM_MOBILE, table="events_common.customs", column="name")
REQUEST_MOBILE = Event(ui_type=schemas.EventType.REQUEST_MOBILE, table="events_common.requests", column="path")
CRASH_MOBILE = Event(ui_type=schemas.EventType.ERROR_MOBILE, table="events_common.crashes",
column=None) # column=None because errors are searched by name or message
@cache
def supported_types():
return {
EventType.CLICK.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CLICK),
query=autocomplete.__generic_query(typename=EventType.CLICK.ui_type)),
EventType.INPUT.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.INPUT),
query=autocomplete.__generic_query(typename=EventType.INPUT.ui_type)),
EventType.LOCATION.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.LOCATION),
query=autocomplete.__generic_query(
typename=EventType.LOCATION.ui_type)),
EventType.CUSTOM.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CUSTOM),
query=autocomplete.__generic_query(
typename=EventType.CUSTOM.ui_type)),
EventType.REQUEST.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.REQUEST),
query=autocomplete.__generic_query(
typename=EventType.REQUEST.ui_type)),
EventType.GRAPHQL.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.GRAPHQL),
query=autocomplete.__generic_query(
typename=EventType.GRAPHQL.ui_type)),
EventType.STATEACTION.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.STATEACTION),
query=autocomplete.__generic_query(
typename=EventType.STATEACTION.ui_type)),
EventType.TAG.ui_type: SupportedFilter(get=_search_tags, query=None),
EventType.ERROR.ui_type: SupportedFilter(get=autocomplete.__search_errors,
query=None),
EventType.METADATA.ui_type: SupportedFilter(get=autocomplete.__search_metadata,
query=None),
# MOBILE
EventType.CLICK_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CLICK_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.CLICK_MOBILE.ui_type)),
EventType.SWIPE_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.SWIPE_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.SWIPE_MOBILE.ui_type)),
EventType.INPUT_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.INPUT_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.INPUT_MOBILE.ui_type)),
EventType.VIEW_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.VIEW_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.VIEW_MOBILE.ui_type)),
EventType.CUSTOM_MOBILE.ui_type: SupportedFilter(
get=autocomplete.__generic_autocomplete(EventType.CUSTOM_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.CUSTOM_MOBILE.ui_type)),
EventType.REQUEST_MOBILE.ui_type: SupportedFilter(
get=autocomplete.__generic_autocomplete(EventType.REQUEST_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.REQUEST_MOBILE.ui_type)),
EventType.CRASH_MOBILE.ui_type: SupportedFilter(get=autocomplete.__search_errors_mobile,
query=None),
}
def get_errors_by_session_id(session_id, project_id):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""\
SELECT er.*,ur.*, er.timestamp - s.start_ts AS time
FROM {EventType.ERROR.table} AS er INNER JOIN public.errors AS ur USING (error_id) INNER JOIN public.sessions AS s USING (session_id)
WHERE er.session_id = %(session_id)s AND s.project_id=%(project_id)s
ORDER BY timestamp;""", {"session_id": session_id, "project_id": project_id}))
errors = cur.fetchall()
for e in errors:
e["stacktrace_parsed_at"] = TimeUTC.datetime_to_timestamp(e["stacktrace_parsed_at"])
return helper.list_to_camel_case(errors)
def search(text, event_type, project_id, source, key):
if not event_type:
return {"data": autocomplete.__get_autocomplete_table(text, project_id)}
if event_type in supported_types().keys():
rows = supported_types()[event_type].get(project_id=project_id, value=text, key=key, source=source)
elif event_type + "_MOBILE" in supported_types().keys():
rows = supported_types()[event_type + "_MOBILE"].get(project_id=project_id, value=text, key=key, source=source)
elif event_type in sessions_metas.supported_types().keys():
return sessions_metas.search(text, event_type, project_id)
elif event_type.endswith("_IOS") \
and event_type[:-len("_IOS")] in sessions_metas.supported_types().keys():
return sessions_metas.search(text, event_type, project_id)
elif event_type.endswith("_MOBILE") \
and event_type[:-len("_MOBILE")] in sessions_metas.supported_types().keys():
return sessions_metas.search(text, event_type, project_id)
else:
return {"errors": ["unsupported event"]}
return {"data": rows}


@ -0,0 +1,11 @@
import logging
from decouple import config
logger = logging.getLogger(__name__)
if config("EXP_EVENTS", cast=bool, default=False):
logger.info(">>> Using experimental events replay")
from . import events_ch as events
else:
from . import events_pg as events


@ -0,0 +1,97 @@
from chalicelib.utils import ch_client
from .events_pg import *
def __explode_properties(rows):
for i in range(len(rows)):
rows[i] = {**rows[i], **rows[i]["$properties"]}
rows[i].pop("$properties")
return rows
def get_customs_by_session_id(session_id, project_id):
with ch_client.ClickHouseClient() as cur:
rows = cur.execute(""" \
SELECT `$properties`,
created_at,
'CUSTOM' AS type
FROM product_analytics.events
WHERE session_id = %(session_id)s
AND NOT `$auto_captured`
AND `$event_name`!='INCIDENT'
ORDER BY created_at;""",
{"project_id": project_id, "session_id": session_id})
rows = __explode_properties(rows)
return helper.list_to_camel_case(rows)
def __merge_cells(rows, start, count, replacement):
rows[start] = replacement
rows = rows[:start + 1] + rows[start + count:]
return rows
def __get_grouped_clickrage(rows, session_id, project_id):
click_rage_issues = issues.get_by_session_id(session_id=session_id, issue_type="click_rage", project_id=project_id)
if len(click_rage_issues) == 0:
return rows
for c in click_rage_issues:
merge_count = c.get("payload")
if merge_count is not None:
merge_count = merge_count.get("Count", 3)
else:
merge_count = 3
for i in range(len(rows)):
if rows[i]["created_at"] == c["createdAt"]:
rows = __merge_cells(rows=rows,
start=i,
count=merge_count,
replacement={**rows[i], "type": "CLICKRAGE", "count": merge_count})
break
return rows
def get_by_session_id(session_id, project_id, group_clickrage=False, event_type: Optional[schemas.EventType] = None):
with ch_client.ClickHouseClient() as cur:
select_events = ('CLICK', 'INPUT', 'LOCATION')
if event_type is not None:
select_events = (event_type,)
query = cur.format(query=""" \
SELECT created_at,
`$properties`,
`$event_name` AS type
FROM product_analytics.events
WHERE session_id = %(session_id)s
AND `$event_name` IN %(select_events)s
AND `$auto_captured`
ORDER BY created_at;""",
parameters={"project_id": project_id, "session_id": session_id,
"select_events": select_events})
rows = cur.execute(query)
rows = __explode_properties(rows)
if group_clickrage and 'CLICK' in select_events:
rows = __get_grouped_clickrage(rows=rows, session_id=session_id, project_id=project_id)
rows = helper.list_to_camel_case(rows)
rows = sorted(rows, key=lambda k: k["createdAt"])
return rows
def get_incidents_by_session_id(session_id, project_id):
with ch_client.ClickHouseClient() as cur:
query = cur.format(query=""" \
SELECT created_at,
`$properties`,
`$event_name` AS type
FROM product_analytics.events
WHERE session_id = %(session_id)s
AND `$event_name` = 'INCIDENT'
AND `$auto_captured`
ORDER BY created_at;""",
parameters={"project_id": project_id, "session_id": session_id})
rows = cur.execute(query)
rows = __explode_properties(rows)
rows = helper.list_to_camel_case(rows)
rows = sorted(rows, key=lambda k: k["createdAt"])
return rows


@ -1,5 +1,5 @@
from chalicelib.utils import pg_client, helper
from chalicelib.core import events
from . import events
def get_customs_by_session_id(session_id, project_id):
@ -58,7 +58,7 @@ def get_crashes_by_session_id(session_id):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""
SELECT cr.*,uc.*, cr.timestamp - s.start_ts AS time
FROM {events.EventType.CRASH_MOBILE.table} AS cr
FROM events_common.crashes AS cr
INNER JOIN public.crashes_ios AS uc USING (crash_ios_id)
INNER JOIN public.sessions AS s USING (session_id)
WHERE


@ -0,0 +1,209 @@
import logging
from functools import cache
from typing import Optional
import schemas
from chalicelib.core.autocomplete import autocomplete
from chalicelib.core.issues import issues
from chalicelib.core.sessions import sessions_metas
from chalicelib.utils import pg_client, helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.event_filter_definition import SupportedFilter
logger = logging.getLogger(__name__)
def get_customs_by_session_id(session_id, project_id):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(""" \
SELECT c.*,
'CUSTOM' AS type
FROM events_common.customs AS c
WHERE c.session_id = %(session_id)s
ORDER BY c.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows = cur.fetchall()
return helper.list_to_camel_case(rows)
def __merge_cells(rows, start, count, replacement):
rows[start] = replacement
rows = rows[:start + 1] + rows[start + count:]
return rows
def __get_grouped_clickrage(rows, session_id, project_id):
click_rage_issues = issues.get_by_session_id(session_id=session_id, issue_type="click_rage", project_id=project_id)
if len(click_rage_issues) == 0:
return rows
for c in click_rage_issues:
merge_count = c.get("payload")
if merge_count is not None:
merge_count = merge_count.get("Count", 3)
else:
merge_count = 3
for i in range(len(rows)):
if rows[i]["timestamp"] == c["timestamp"]:
rows = __merge_cells(rows=rows,
start=i,
count=merge_count,
replacement={**rows[i], "type": "CLICKRAGE", "count": merge_count})
break
return rows
def get_by_session_id(session_id, project_id, group_clickrage=False, event_type: Optional[schemas.EventType] = None):
with pg_client.PostgresClient() as cur:
rows = []
if event_type is None or event_type == schemas.EventType.CLICK:
cur.execute(cur.mogrify(""" \
SELECT c.*,
'CLICK' AS type
FROM events.clicks AS c
WHERE c.session_id = %(session_id)s
ORDER BY c.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows += cur.fetchall()
if group_clickrage:
rows = __get_grouped_clickrage(rows=rows, session_id=session_id, project_id=project_id)
if event_type is None or event_type == schemas.EventType.INPUT:
cur.execute(cur.mogrify("""
SELECT i.*,
'INPUT' AS type
FROM events.inputs AS i
WHERE i.session_id = %(session_id)s
ORDER BY i.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows += cur.fetchall()
if event_type is None or event_type == schemas.EventType.LOCATION:
cur.execute(cur.mogrify(""" \
SELECT l.*,
l.path AS value,
l.path AS url,
'LOCATION' AS type
FROM events.pages AS l
WHERE
l.session_id = %(session_id)s
ORDER BY l.timestamp;""", {"project_id": project_id, "session_id": session_id}))
rows += cur.fetchall()
rows = helper.list_to_camel_case(rows)
rows = sorted(rows, key=lambda k: (k["timestamp"], k["messageId"]))
return rows
def _search_tags(project_id, value, key=None, source=None):
with pg_client.PostgresClient() as cur:
query = f"""
SELECT public.tags.name
'TAG' AS type
FROM public.tags
WHERE public.tags.project_id = %(project_id)s
ORDER BY SIMILARITY(public.tags.name, %(value)s) DESC
LIMIT 10
"""
query = cur.mogrify(query, {'project_id': project_id, 'value': value})
cur.execute(query)
results = helper.list_to_camel_case(cur.fetchall())
return results
@cache
def supported_types():
return {
schemas.EventType.CLICK: SupportedFilter(get=autocomplete.__generic_autocomplete(schemas.EventType.CLICK),
query=autocomplete.__generic_query(typename=schemas.EventType.CLICK)),
schemas.EventType.INPUT: SupportedFilter(get=autocomplete.__generic_autocomplete(schemas.EventType.INPUT),
query=autocomplete.__generic_query(typename=schemas.EventType.INPUT)),
schemas.EventType.LOCATION: SupportedFilter(get=autocomplete.__generic_autocomplete(schemas.EventType.LOCATION),
query=autocomplete.__generic_query(
typename=schemas.EventType.LOCATION)),
schemas.EventType.CUSTOM: SupportedFilter(get=autocomplete.__generic_autocomplete(schemas.EventType.CUSTOM),
query=autocomplete.__generic_query(
typename=schemas.EventType.CUSTOM)),
schemas.EventType.REQUEST: SupportedFilter(get=autocomplete.__generic_autocomplete(schemas.EventType.REQUEST),
query=autocomplete.__generic_query(
typename=schemas.EventType.REQUEST)),
schemas.EventType.GRAPHQL: SupportedFilter(get=autocomplete.__generic_autocomplete(schemas.EventType.GRAPHQL),
query=autocomplete.__generic_query(
typename=schemas.EventType.GRAPHQL)),
schemas.EventType.STATE_ACTION: SupportedFilter(
get=autocomplete.__generic_autocomplete(schemas.EventType.STATEACTION),
query=autocomplete.__generic_query(
typename=schemas.EventType.STATE_ACTION)),
schemas.EventType.TAG: SupportedFilter(get=_search_tags, query=None),
schemas.EventType.ERROR: SupportedFilter(get=autocomplete.__search_errors,
query=None),
schemas.FilterType.METADATA: SupportedFilter(get=autocomplete.__search_metadata,
query=None),
# MOBILE
schemas.EventType.CLICK_MOBILE: SupportedFilter(
get=autocomplete.__generic_autocomplete(schemas.EventType.CLICK_MOBILE),
query=autocomplete.__generic_query(
typename=schemas.EventType.CLICK_MOBILE)),
schemas.EventType.SWIPE_MOBILE: SupportedFilter(
get=autocomplete.__generic_autocomplete(schemas.EventType.SWIPE_MOBILE),
query=autocomplete.__generic_query(
typename=schemas.EventType.SWIPE_MOBILE)),
schemas.EventType.INPUT_MOBILE: SupportedFilter(
get=autocomplete.__generic_autocomplete(schemas.EventType.INPUT_MOBILE),
query=autocomplete.__generic_query(
typename=schemas.EventType.INPUT_MOBILE)),
schemas.EventType.VIEW_MOBILE: SupportedFilter(
get=autocomplete.__generic_autocomplete(schemas.EventType.VIEW_MOBILE),
query=autocomplete.__generic_query(
typename=schemas.EventType.VIEW_MOBILE)),
schemas.EventType.CUSTOM_MOBILE: SupportedFilter(
get=autocomplete.__generic_autocomplete(schemas.EventType.CUSTOM_MOBILE),
query=autocomplete.__generic_query(
typename=schemas.EventType.CUSTOM_MOBILE)),
schemas.EventType.REQUEST_MOBILE: SupportedFilter(
get=autocomplete.__generic_autocomplete(schemas.EventType.REQUEST_MOBILE),
query=autocomplete.__generic_query(
typename=schemas.EventType.REQUEST_MOBILE)),
schemas.EventType.ERROR_MOBILE: SupportedFilter(get=autocomplete.__search_errors_mobile,
query=None),
}
def get_errors_by_session_id(session_id, project_id):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""\
SELECT er.*,ur.*, er.timestamp - s.start_ts AS time
FROM events.errors AS er INNER JOIN public.errors AS ur USING (error_id) INNER JOIN public.sessions AS s USING (session_id)
WHERE er.session_id = %(session_id)s AND s.project_id=%(project_id)s
ORDER BY timestamp;""", {"session_id": session_id, "project_id": project_id}))
errors = cur.fetchall()
for e in errors:
e["stacktrace_parsed_at"] = TimeUTC.datetime_to_timestamp(e["stacktrace_parsed_at"])
return helper.list_to_camel_case(errors)
def get_incidents_by_session_id(session_id, project_id):
logger.warning("INCIDENTS not supported in PG")
return []
def search(text, event_type, project_id, source, key):
if not event_type:
return {"data": autocomplete.__get_autocomplete_table(text, project_id)}
if event_type in supported_types().keys():
rows = supported_types()[event_type].get(project_id=project_id, value=text, key=key, source=source)
elif event_type + "_MOBILE" in supported_types().keys():
rows = supported_types()[event_type + "_MOBILE"].get(project_id=project_id, value=text, key=key, source=source)
elif event_type in sessions_metas.supported_types().keys():
return sessions_metas.search(text, event_type, project_id)
elif event_type.endswith("_IOS") \
and event_type[:-len("_IOS")] in sessions_metas.supported_types().keys():
return sessions_metas.search(text, event_type, project_id)
elif event_type.endswith("_MOBILE") \
and event_type[:-len("_MOBILE")] in sessions_metas.supported_types().keys():
return sessions_metas.search(text, event_type, project_id)
else:
return {"errors": ["unsupported event"]}
return {"data": rows}

View file

@ -0,0 +1,11 @@
import logging
from decouple import config
logger = logging.getLogger(__name__)
if config("EXP_EVENTS", cast=bool, default=False):
logger.info(">>> Using experimental issues")
from . import issues_ch as issues
else:
from . import issues_pg as issues
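# Editor's note: callers import this facade rather than a specific backend, so
# the EXP_EVENTS flag swaps PG for ClickHouse transparently. A minimal sketch
# (hypothetical IDs):
#
#   from chalicelib.core.issues import issues
#   issues.get_by_session_id(session_id=123, project_id=1)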

View file

@ -0,0 +1,56 @@
from chalicelib.utils import ch_client, helper
import datetime
from .issues_pg import get_all_types
def get(project_id, issue_id):
with ch_client.ClickHouseClient() as cur:
query = cur.format(query=""" \
SELECT *
FROM product_analytics.events
WHERE project_id = %(project_id)s
AND issue_id = %(issue_id)s;""",
parameters={"project_id": project_id, "issue_id": issue_id})
data = cur.execute(query=query)
if data is not None and len(data) > 0:
data = data[0]
data["title"] = helper.get_issue_title(data["type"])
return helper.dict_to_camel_case(data)
def get_by_session_id(session_id, project_id, issue_type=None):
with ch_client.ClickHouseClient() as cur:
query = cur.format(query=f"""\
SELECT *
FROM product_analytics.events
WHERE session_id = %(session_id)s
AND project_id= %(project_id)s
AND `$event_name`='ISSUE'
{"AND issue_type = %(type)s" if issue_type is not None else ""}
ORDER BY created_at;""",
parameters={"session_id": session_id, "project_id": project_id, "type": issue_type})
data = cur.execute(query)
return helper.list_to_camel_case(data)
# To reduce the number of issues in the replay;
# will be removed once we agree on how to show issues
def reduce_issues(issues_list):
if issues_list is None:
return None
i = 0
# remove same-type issues if the time between them is <2s
while i < len(issues_list) - 1:
for j in range(i + 1, len(issues_list)):
if issues_list[i]["issueType"] == issues_list[j]["issueType"]:
break
else:
i += 1
break
if issues_list[j]["createdAt"] - issues_list[i]["createdAt"] < datetime.timedelta(seconds=2):
issues_list.pop(j)
else:
i += 1
return issues_list
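# Editor's worked example (hypothetical data): with the <2s rule above, two
# same-type issues 1s apart collapse into one, while a different type is kept.
#
#   a = {"issueType": "click_rage", "createdAt": datetime.datetime(2025, 5, 23, 12, 0, 0)}
#   b = {"issueType": "click_rage", "createdAt": datetime.datetime(2025, 5, 23, 12, 0, 1)}
#   c = {"issueType": "dead_click", "createdAt": datetime.datetime(2025, 5, 23, 12, 0, 2)}
#   reduce_issues([a, b, c])  # -> [a, c]; b is dropped as a near-duplicate of a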

View file

@ -4,12 +4,11 @@ from chalicelib.utils import pg_client, helper
def get(project_id, issue_id):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""\
SELECT
*
""" \
SELECT *
FROM public.issues
WHERE project_id = %(project_id)s
AND issue_id = %(issue_id)s;""",
AND issue_id = %(issue_id)s;""",
{"project_id": project_id, "issue_id": issue_id}
)
cur.execute(query=query)
@ -35,6 +34,29 @@ def get_by_session_id(session_id, project_id, issue_type=None):
return helper.list_to_camel_case(cur.fetchall())
# To reduce the number of issues in the replay;
# will be removed once we agree on how to show issues
def reduce_issues(issues_list):
if issues_list is None:
return None
i = 0
# remove same-type issues if the time between them is <2s
while i < len(issues_list) - 1:
for j in range(i + 1, len(issues_list)):
if issues_list[i]["type"] == issues_list[j]["type"]:
break
else:
i += 1
break
if issues_list[j]["timestamp"] - issues_list[i]["timestamp"] < 2000:
issues_list.pop(j)
else:
i += 1
return issues_list
def get_all_types():
return [
{

View file

@ -241,3 +241,25 @@ def get_colname_by_key(project_id, key):
return None
return index_to_colname(meta_keys[key])
def get_for_filters(project_id):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(f"""SELECT {",".join(column_names())}
FROM public.projects
WHERE project_id = %(project_id)s
AND deleted_at ISNULL
LIMIT 1;""", {"project_id": project_id})
cur.execute(query=query)
metas = cur.fetchone()
results = []
if metas is not None:
for i, k in enumerate(metas.keys()):
if metas[k] is not None:
results.append({"id": f"meta_{i}",
"name": k,
"displayName": metas[k],
"possibleTypes": ["String"],
"autoCaptured": False,
"icon": None})
return {"total": len(results), "list": results}

View file

@ -4,7 +4,7 @@ import logging
from fastapi import HTTPException, status
import schemas
from chalicelib.core import issues
from chalicelib.core.issues import issues
from chalicelib.core.errors import errors
from chalicelib.core.metrics import heatmaps, product_analytics, funnels
from chalicelib.core.sessions import sessions, sessions_search
@ -61,6 +61,9 @@ def get_heat_map_chart(project: schemas.ProjectContext, user_id, data: schemas.C
return None
data.series[0].filter.filters += data.series[0].filter.events
data.series[0].filter.events = []
print(">>>>>>>>>>>>>>>>>>>>>>>>><")
print(data.series[0].filter.model_dump())
print(">>>>>>>>>>>>>>>>>>>>>>>>><")
return heatmaps.search_short_session(project_id=project.project_id, user_id=user_id,
data=schemas.HeatMapSessionsSearch(
**data.series[0].filter.model_dump()),
@ -241,14 +244,13 @@ def create_card(project: schemas.ProjectContext, user_id, data: schemas.CardSche
params["card_info"] = json.dumps(params["card_info"])
query = """INSERT INTO metrics (project_id, user_id, name, is_public,
view_type, metric_type, metric_of, metric_value,
metric_format, default_config, thumbnail, data,
card_info)
VALUES (%(project_id)s, %(user_id)s, %(name)s, %(is_public)s,
%(view_type)s, %(metric_type)s, %(metric_of)s, %(metric_value)s,
%(metric_format)s, %(default_config)s, %(thumbnail)s, %(session_data)s,
%(card_info)s)
RETURNING metric_id"""
view_type, metric_type, metric_of, metric_value,
metric_format, default_config, thumbnail, data,
card_info)
VALUES (%(project_id)s, %(user_id)s, %(name)s, %(is_public)s,
%(view_type)s, %(metric_type)s, %(metric_of)s, %(metric_value)s,
%(metric_format)s, %(default_config)s, %(thumbnail)s, %(session_data)s,
%(card_info)s) RETURNING metric_id"""
if len(data.series) > 0:
query = f"""WITH m AS ({query})
INSERT INTO metric_series(metric_id, index, name, filter)
@ -525,13 +527,13 @@ def get_all(project_id, user_id):
def delete_card(project_id, metric_id, user_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
UPDATE public.metrics
SET deleted_at = timezone('utc'::text, now()), edited_at = timezone('utc'::text, now())
WHERE project_id = %(project_id)s
AND metric_id = %(metric_id)s
AND (user_id = %(user_id)s OR is_public)
RETURNING data;""",
cur.mogrify(""" \
UPDATE public.metrics
SET deleted_at = timezone('utc'::text, now()),
edited_at = timezone('utc'::text, now())
WHERE project_id = %(project_id)s
AND metric_id = %(metric_id)s
AND (user_id = %(user_id)s OR is_public) RETURNING data;""",
{"metric_id": metric_id, "project_id": project_id, "user_id": user_id})
)
@ -615,13 +617,14 @@ def get_series_for_alert(project_id, user_id):
FALSE AS predefined,
metric_id,
series_id
FROM metric_series
INNER JOIN metrics USING (metric_id)
WHERE metrics.deleted_at ISNULL
AND metrics.project_id = %(project_id)s
AND metrics.metric_type = 'timeseries'
AND (user_id = %(user_id)s OR is_public)
ORDER BY name;""",
FROM metric_series
INNER JOIN metrics USING (metric_id)
WHERE metrics.deleted_at ISNULL
AND metrics.project_id = %(project_id)s
AND metrics.metric_type = 'timeseries'
AND (user_id = %(user_id)s
OR is_public)
ORDER BY name;""",
{"project_id": project_id, "user_id": user_id}
)
)
@ -632,11 +635,11 @@ def get_series_for_alert(project_id, user_id):
def change_state(project_id, metric_id, user_id, status):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
UPDATE public.metrics
SET active = %(status)s
WHERE metric_id = %(metric_id)s
AND (user_id = %(user_id)s OR is_public);""",
cur.mogrify(""" \
UPDATE public.metrics
SET active = %(status)s
WHERE metric_id = %(metric_id)s
AND (user_id = %(user_id)s OR is_public);""",
{"metric_id": metric_id, "status": status, "user_id": user_id})
)
return get_card(metric_id=metric_id, project_id=project_id, user_id=user_id)
@ -674,7 +677,8 @@ def get_funnel_sessions_by_issue(user_id, project_id, metric_id, issue_id,
"issue": issue}
def make_chart_from_card(project: schemas.ProjectContext, user_id, metric_id, data: schemas.CardSessionsSchema):
def make_chart_from_card(project: schemas.ProjectContext, user_id, metric_id,
data: schemas.CardSessionsSchema, for_dashboard: bool = False):
raw_metric: dict = get_card(metric_id=metric_id, project_id=project.project_id, user_id=user_id, include_data=True)
if raw_metric is None:
@ -693,7 +697,8 @@ def make_chart_from_card(project: schemas.ProjectContext, user_id, metric_id, da
return heatmaps.search_short_session(project_id=project.project_id,
data=schemas.HeatMapSessionsSearch(**metric.model_dump()),
user_id=user_id)
elif metric.metric_type == schemas.MetricType.PATH_ANALYSIS and for_dashboard:
metric.hide_excess = True
return get_chart(project=project, data=metric, user_id=user_id)

View file

@ -6,7 +6,7 @@ from chalicelib.utils import helper
from chalicelib.utils import sql_helper as sh
def filter_stages(stages: List[schemas.SessionSearchEventSchema2]):
def filter_stages(stages: List[schemas.SessionSearchEventSchema]):
ALLOW_TYPES = [schemas.EventType.CLICK, schemas.EventType.INPUT,
schemas.EventType.LOCATION, schemas.EventType.CUSTOM,
schemas.EventType.CLICK_MOBILE, schemas.EventType.INPUT_MOBILE,
@ -15,10 +15,10 @@ def filter_stages(stages: List[schemas.SessionSearchEventSchema2]):
def __parse_events(f_events: List[dict]):
return [schemas.SessionSearchEventSchema2.parse_obj(e) for e in f_events]
return [schemas.SessionSearchEventSchema.parse_obj(e) for e in f_events]
def __fix_stages(f_events: List[schemas.SessionSearchEventSchema2]):
def __fix_stages(f_events: List[schemas.SessionSearchEventSchema]):
if f_events is None:
return
events = []

View file

@ -160,7 +160,7 @@ s.start_ts,
s.duration"""
def __get_1_url(location_condition: schemas.SessionSearchEventSchema2 | None, session_id: str, project_id: int,
def __get_1_url(location_condition: schemas.SessionSearchEventSchema | None, session_id: str, project_id: int,
start_time: int,
end_time: int) -> str | None:
full_args = {
@ -240,13 +240,13 @@ def search_short_session(data: schemas.HeatMapSessionsSearch, project_id, user_i
value=[schemas.PlatformType.DESKTOP],
operator=schemas.SearchEventOperator.IS))
if not location_condition:
data.events.append(schemas.SessionSearchEventSchema2(type=schemas.EventType.LOCATION,
value=[],
operator=schemas.SearchEventOperator.IS_ANY))
data.events.append(schemas.SessionSearchEventSchema(type=schemas.EventType.LOCATION,
value=[],
operator=schemas.SearchEventOperator.IS_ANY))
if no_click:
data.events.append(schemas.SessionSearchEventSchema2(type=schemas.EventType.CLICK,
value=[],
operator=schemas.SearchEventOperator.IS_ANY))
data.events.append(schemas.SessionSearchEventSchema(type=schemas.EventType.CLICK,
value=[],
operator=schemas.SearchEventOperator.IS_ANY))
data.filters.append(schemas.SessionSearchFilterSchema(type=schemas.FilterType.EVENTS_COUNT,
value=[0],

View file

@ -3,7 +3,7 @@ import logging
from decouple import config
import schemas
from chalicelib.core import events
from chalicelib.core.events import events
from chalicelib.core.metrics.modules import sessions, sessions_mobs
from chalicelib.utils import sql_helper as sh
@ -24,8 +24,9 @@ def get_by_url(project_id, data: schemas.GetHeatMapPayloadSchema):
"main_events.`$event_name` = 'CLICK'",
"isNotNull(JSON_VALUE(CAST(main_events.`$properties` AS String), '$.normalized_x'))"
]
if data.operator == schemas.SearchEventOperator.IS:
if data.operator == schemas.SearchEventOperator.PATTERN:
constraints.append("match(main_events.`$properties`.url_path'.:String,%(url)s)")
elif data.operator == schemas.SearchEventOperator.IS:
constraints.append("JSON_VALUE(CAST(main_events.`$properties` AS String), '$.url_path') = %(url)s")
else:
constraints.append("JSON_VALUE(CAST(main_events.`$properties` AS String), '$.url_path') ILIKE %(url)s")
@ -179,7 +180,7 @@ toUnixTimestamp(s.datetime)*1000 AS start_ts,
s.duration AS duration"""
def __get_1_url(location_condition: schemas.SessionSearchEventSchema2 | None, session_id: str, project_id: int,
def __get_1_url(location_condition: schemas.SessionSearchEventSchema | None, session_id: str, project_id: int,
start_time: int,
end_time: int) -> str | None:
full_args = {
@ -262,13 +263,13 @@ def search_short_session(data: schemas.HeatMapSessionsSearch, project_id, user_i
value=[schemas.PlatformType.DESKTOP],
operator=schemas.SearchEventOperator.IS))
if not location_condition:
data.events.append(schemas.SessionSearchEventSchema2(type=schemas.EventType.LOCATION,
value=[],
operator=schemas.SearchEventOperator.IS_ANY))
data.events.append(schemas.SessionSearchEventSchema(type=schemas.EventType.LOCATION,
value=[],
operator=schemas.SearchEventOperator.IS_ANY))
if no_click:
data.events.append(schemas.SessionSearchEventSchema2(type=schemas.EventType.CLICK,
value=[],
operator=schemas.SearchEventOperator.IS_ANY))
data.events.append(schemas.SessionSearchEventSchema(type=schemas.EventType.CLICK,
value=[],
operator=schemas.SearchEventOperator.IS_ANY))
data.filters.append(schemas.SessionSearchFilterSchema(type=schemas.FilterType.EVENTS_COUNT,
value=[0],

View file

@ -7,7 +7,8 @@ from typing import List
from psycopg2.extras import RealDictRow
import schemas
from chalicelib.core import events, metadata
from chalicelib.core import metadata
from chalicelib.core.events import events
from chalicelib.utils import pg_client, helper
from chalicelib.utils import sql_helper as sh
@ -76,10 +77,10 @@ def get_stages_and_events(filter_d: schemas.CardSeriesFilterSchema, project_id)
values["maxDuration"] = f.value[1]
elif filter_type == schemas.FilterType.REFERRER:
# events_query_part = events_query_part + f"INNER JOIN events.pages AS p USING(session_id)"
filter_extra_from = [f"INNER JOIN {events.EventType.LOCATION.table} AS p USING(session_id)"]
filter_extra_from = [f"INNER JOIN {"events.pages"} AS p USING(session_id)"]
first_stage_extra_constraints.append(
sh.multi_conditions(f"p.base_referrer {op} %({f_k})s", f.value, is_not=is_not, value_key=f_k))
elif filter_type == events.EventType.METADATA.ui_type:
elif filter_type == schemas.FilterType.METADATA:
if meta_keys is None:
meta_keys = metadata.get(project_id=project_id)
meta_keys = {m["key"]: m["index"] for m in meta_keys}
@ -121,31 +122,31 @@ def get_stages_and_events(filter_d: schemas.CardSeriesFilterSchema, project_id)
op = sh.get_sql_operator(s.operator)
# event_type = s["type"].upper()
event_type = s.type
if event_type == events.EventType.CLICK.ui_type:
next_table = events.EventType.CLICK.table
next_col_name = events.EventType.CLICK.column
elif event_type == events.EventType.INPUT.ui_type:
next_table = events.EventType.INPUT.table
next_col_name = events.EventType.INPUT.column
elif event_type == events.EventType.LOCATION.ui_type:
next_table = events.EventType.LOCATION.table
next_col_name = events.EventType.LOCATION.column
elif event_type == events.EventType.CUSTOM.ui_type:
next_table = events.EventType.CUSTOM.table
next_col_name = events.EventType.CUSTOM.column
if event_type == schemas.EventType.CLICK:
next_table = "events.clicks"
next_col_name = "label"
elif event_type == schemas.EventType.INPUT:
next_table = "events.inputs"
next_col_name = "label"
elif event_type == schemas.EventType.LOCATION:
next_table = "events.pages"
next_col_name = "path"
elif event_type == schemas.EventType.CUSTOM:
next_table = "events_common.customs"
next_col_name = "name"
# IOS --------------
elif event_type == events.EventType.CLICK_MOBILE.ui_type:
next_table = events.EventType.CLICK_MOBILE.table
next_col_name = events.EventType.CLICK_MOBILE.column
elif event_type == events.EventType.INPUT_MOBILE.ui_type:
next_table = events.EventType.INPUT_MOBILE.table
next_col_name = events.EventType.INPUT_MOBILE.column
elif event_type == events.EventType.VIEW_MOBILE.ui_type:
next_table = events.EventType.VIEW_MOBILE.table
next_col_name = events.EventType.VIEW_MOBILE.column
elif event_type == events.EventType.CUSTOM_MOBILE.ui_type:
next_table = events.EventType.CUSTOM_MOBILE.table
next_col_name = events.EventType.CUSTOM_MOBILE.column
elif event_type == schemas.EventType.CLICK_MOBILE:
next_table = "events_ios.taps"
next_col_name = "label"
elif event_type == schemas.EventType.INPUT_MOBILE:
next_table = "events_ios.inputs"
next_col_name = "label"
elif event_type == schemas.EventType.VIEW_MOBILE:
next_table = "events_ios.views"
next_col_name = "name"
elif event_type == schemas.EventType.CUSTOM_MOBILE:
next_table = "events_common.customs"
next_col_name = "name"
else:
logger.warning(f"=================UNDEFINED:{event_type}")
continue
@ -241,7 +242,7 @@ def get_simple_funnel(filter_d: schemas.CardSeriesFilterSchema, project: schemas
:return:
"""
stages: List[schemas.SessionSearchEventSchema2] = filter_d.events
stages: List[schemas.SessionSearchEventSchema] = filter_d.events
filters: List[schemas.SessionSearchFilterSchema] = filter_d.filters
stage_constraints = ["main.timestamp <= %(endTimestamp)s"]
@ -297,10 +298,10 @@ def get_simple_funnel(filter_d: schemas.CardSeriesFilterSchema, project: schemas
values["maxDuration"] = f.value[1]
elif filter_type == schemas.FilterType.REFERRER:
# events_query_part = events_query_part + f"INNER JOIN events.pages AS p USING(session_id)"
filter_extra_from = [f"INNER JOIN {events.EventType.LOCATION.table} AS p USING(session_id)"]
filter_extra_from = [f"INNER JOIN {"events.pages"} AS p USING(session_id)"]
first_stage_extra_constraints.append(
sh.multi_conditions(f"p.base_referrer {op} %({f_k})s", f.value, is_not=is_not, value_key=f_k))
elif filter_type == events.EventType.METADATA.ui_type:
elif filter_type == schemas.FilterType.METADATA:
if meta_keys is None:
meta_keys = metadata.get(project_id=project.project_id)
meta_keys = {m["key"]: m["index"] for m in meta_keys}
@ -342,31 +343,31 @@ def get_simple_funnel(filter_d: schemas.CardSeriesFilterSchema, project: schemas
op = sh.get_sql_operator(s.operator)
# event_type = s["type"].upper()
event_type = s.type
if event_type == events.EventType.CLICK.ui_type:
next_table = events.EventType.CLICK.table
next_col_name = events.EventType.CLICK.column
elif event_type == events.EventType.INPUT.ui_type:
next_table = events.EventType.INPUT.table
next_col_name = events.EventType.INPUT.column
elif event_type == events.EventType.LOCATION.ui_type:
next_table = events.EventType.LOCATION.table
next_col_name = events.EventType.LOCATION.column
elif event_type == events.EventType.CUSTOM.ui_type:
next_table = events.EventType.CUSTOM.table
next_col_name = events.EventType.CUSTOM.column
if event_type == schemas.EventType.CLICK:
next_table = "events.clicks"
next_col_name = "label"
elif event_type == schemas.EventType.INPUT:
next_table = "events.inputs"
next_col_name = "label"
elif event_type == schemas.EventType.LOCATION:
next_table = "events.pages"
next_col_name = "path"
elif event_type == schemas.EventType.CUSTOM:
next_table = "events_common.customs"
next_col_name = "name"
# IOS --------------
elif event_type == events.EventType.CLICK_MOBILE.ui_type:
next_table = events.EventType.CLICK_MOBILE.table
next_col_name = events.EventType.CLICK_MOBILE.column
elif event_type == events.EventType.INPUT_MOBILE.ui_type:
next_table = events.EventType.INPUT_MOBILE.table
next_col_name = events.EventType.INPUT_MOBILE.column
elif event_type == events.EventType.VIEW_MOBILE.ui_type:
next_table = events.EventType.VIEW_MOBILE.table
next_col_name = events.EventType.VIEW_MOBILE.column
elif event_type == events.EventType.CUSTOM_MOBILE.ui_type:
next_table = events.EventType.CUSTOM_MOBILE.table
next_col_name = events.EventType.CUSTOM_MOBILE.column
elif event_type == schemas.EventType.CLICK_MOBILE:
next_table = "events_ios.taps"
next_col_name = "label"
elif event_type == schemas.EventType.INPUT_MOBILE:
next_table = "events_ios.inputs"
next_col_name = "label"
elif event_type == schemas.EventType.VIEW_MOBILE:
next_table = "events_ios.views"
next_col_name = "name"
elif event_type == schemas.EventType.CUSTOM_MOBILE:
next_table = "events_common.customs"
next_col_name = "name"
else:
logger.warning(f"=================UNDEFINED:{event_type}")
continue

View file

@ -8,14 +8,14 @@ from chalicelib.utils import ch_client
from chalicelib.utils import exp_ch_helper
from chalicelib.utils import helper
from chalicelib.utils import sql_helper as sh
from chalicelib.core import events
from chalicelib.core.events import events
logger = logging.getLogger(__name__)
def get_simple_funnel(filter_d: schemas.CardSeriesFilterSchema, project: schemas.ProjectContext,
metric_format: schemas.MetricExtendedFormatType) -> List[RealDictRow]:
stages: List[schemas.SessionSearchEventSchema2] = filter_d.events
stages: List[schemas.SessionSearchEventSchema] = filter_d.events
filters: List[schemas.SessionSearchFilterSchema] = filter_d.filters
platform = project.platform
constraints = ["e.project_id = %(project_id)s",
@ -82,7 +82,7 @@ def get_simple_funnel(filter_d: schemas.CardSeriesFilterSchema, project: schemas
elif filter_type == schemas.FilterType.REFERRER:
constraints.append(
sh.multi_conditions(f"s.base_referrer {op} %({f_k})s", f.value, is_not=is_not, value_key=f_k))
elif filter_type == events.EventType.METADATA.ui_type:
elif filter_type == schemas.FilterType.METADATA:
if meta_keys is None:
meta_keys = metadata.get(project_id=project.project_id)
meta_keys = {m["key"]: m["index"] for m in meta_keys}
@ -125,29 +125,29 @@ def get_simple_funnel(filter_d: schemas.CardSeriesFilterSchema, project: schemas
e_k = f"e_value{i}"
event_type = s.type
next_event_type = exp_ch_helper.get_event_type(event_type, platform=platform)
if event_type == events.EventType.CLICK.ui_type:
if event_type == schemas.EventType.CLICK:
if platform == "web":
next_col_name = events.EventType.CLICK.column
next_col_name = "label"
if not is_any:
if schemas.ClickEventExtraOperator.has_value(s.operator):
specific_condition = sh.multi_conditions(f"selector {op} %({e_k})s", s.value, value_key=e_k)
else:
next_col_name = events.EventType.CLICK_MOBILE.column
elif event_type == events.EventType.INPUT.ui_type:
next_col_name = events.EventType.INPUT.column
elif event_type == events.EventType.LOCATION.ui_type:
next_col_name = "label"
elif event_type == schemas.EventType.INPUT:
next_col_name = "label"
elif event_type == schemas.EventType.LOCATION:
next_col_name = 'url_path'
elif event_type == events.EventType.CUSTOM.ui_type:
next_col_name = events.EventType.CUSTOM.column
elif event_type == schemas.EventType.CUSTOM:
next_col_name = "name"
# IOS --------------
elif event_type == events.EventType.CLICK_MOBILE.ui_type:
next_col_name = events.EventType.CLICK_MOBILE.column
elif event_type == events.EventType.INPUT_MOBILE.ui_type:
next_col_name = events.EventType.INPUT_MOBILE.column
elif event_type == events.EventType.VIEW_MOBILE.ui_type:
next_col_name = events.EventType.VIEW_MOBILE.column
elif event_type == events.EventType.CUSTOM_MOBILE.ui_type:
next_col_name = events.EventType.CUSTOM_MOBILE.column
elif event_type == schemas.EventType.CLICK_MOBILE:
next_col_name = "label"
elif event_type == schemas.EventType.INPUT_MOBILE:
next_col_name = "label"
elif event_type == schemas.EventType.VIEW_MOBILE:
next_col_name = "name"
elif event_type == schemas.EventType.CUSTOM_MOBILE:
next_col_name = "name"
else:
logger.warning(f"=================UNDEFINED:{event_type}")
continue

View file

@ -85,6 +85,9 @@ def __complete_missing_steps(start_time, end_time, density, neutral, rows, time_
# compute avg_time_from_previous at the same level as sessions_count (this was removed in v1.22)
# if start-point is selected, the selected event is ranked n°1
def path_analysis(project_id: int, data: schemas.CardPathAnalysis):
if not data.hide_excess:
data.hide_excess = True
data.rows = 50
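# Editor's note: the guard above forces hide_excess on and caps the chart at
# 50 rows whenever the caller did not opt in explicitly (assumption inferred
# from this change).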
sub_events = []
start_points_conditions = []
step_0_conditions = []

View file

@ -1,14 +0,0 @@
from chalicelib.utils.ch_client import ClickHouseClient
def search_events(project_id: int, data: dict):
with ClickHouseClient() as ch_client:
r = ch_client.format(
"""SELECT *
FROM taha.events
WHERE project_id=%(project_id)s
ORDER BY created_at;""",
params={"project_id": project_id})
x = ch_client.execute(r)
return x

View file

@ -6,6 +6,7 @@ from decouple import config
import schemas
from chalicelib.core.collaborations.collaboration_msteams import MSTeams
from chalicelib.core.collaborations.collaboration_slack import Slack
from chalicelib.core.modules import TENANT_CONDITION
from chalicelib.utils import pg_client, helper
from chalicelib.utils import sql_helper as sh
from chalicelib.utils.TimeUTC import TimeUTC
@ -16,13 +17,13 @@ logger = logging.getLogger(__name__)
def get_note(tenant_id, project_id, user_id, note_id, share=None):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(f"""SELECT sessions_notes.*, users.name AS user_name
{",(SELECT name FROM users WHERE tenant_id=%(tenant_id)s AND user_id=%(share)s AND deleted_at ISNULL) AS share_name" if share else ""}
{f",(SELECT name FROM users WHERE {TENANT_CONDITION} AND user_id=%(share)s AND deleted_at ISNULL) AS share_name" if share else ""}
FROM sessions_notes INNER JOIN users USING (user_id)
WHERE sessions_notes.project_id = %(project_id)s
AND sessions_notes.note_id = %(note_id)s
AND sessions_notes.deleted_at IS NULL
AND (sessions_notes.user_id = %(user_id)s
OR sessions_notes.is_public AND users.tenant_id = %(tenant_id)s);""",
OR sessions_notes.is_public AND {TENANT_CONDITION});""",
{"project_id": project_id, "user_id": user_id, "tenant_id": tenant_id,
"note_id": note_id, "share": share})
@ -43,7 +44,7 @@ def get_session_notes(tenant_id, project_id, session_id, user_id):
AND sessions_notes.deleted_at IS NULL
AND sessions_notes.session_id = %(session_id)s
AND (sessions_notes.user_id = %(user_id)s
OR sessions_notes.is_public AND users.tenant_id = %(tenant_id)s)
OR sessions_notes.is_public AND {TENANT_CONDITION})
ORDER BY created_at DESC;""",
{"project_id": project_id, "user_id": user_id,
"tenant_id": tenant_id, "session_id": session_id})
@ -62,7 +63,7 @@ def get_all_notes_by_project_id(tenant_id, project_id, user_id, data: schemas.Se
conditions = [
"sessions_notes.project_id = %(project_id)s",
"sessions_notes.deleted_at IS NULL",
"users.tenant_id = %(tenant_id)s"
TENANT_CONDITION
]
params = {"project_id": project_id, "user_id": user_id, "tenant_id": tenant_id}
@ -127,7 +128,7 @@ def create(tenant_id, user_id, project_id, session_id, data: schemas.SessionNote
with pg_client.PostgresClient() as cur:
query = cur.mogrify(f"""INSERT INTO public.sessions_notes (message, user_id, tag, session_id, project_id, timestamp, is_public, thumbnail, start_at, end_at)
VALUES (%(message)s, %(user_id)s, %(tag)s, %(session_id)s, %(project_id)s, %(timestamp)s, %(is_public)s, %(thumbnail)s, %(start_at)s, %(end_at)s)
RETURNING *,(SELECT name FROM users WHERE users.user_id=%(user_id)s AND users.tenant_id=%(tenant_id)s) AS user_name;""",
RETURNING *,(SELECT name FROM users WHERE users.user_id=%(user_id)s AND {TENANT_CONDITION}) AS user_name;""",
{"user_id": user_id, "project_id": project_id, "session_id": session_id,
**data.model_dump(),
"tenant_id": tenant_id})
@ -161,7 +162,7 @@ def edit(tenant_id, user_id, project_id, note_id, data: schemas.SessionUpdateNot
AND user_id = %(user_id)s
AND note_id = %(note_id)s
AND deleted_at ISNULL
RETURNING *,(SELECT name FROM users WHERE users.user_id=%(user_id)s AND users.tenant_id=%(tenant_id)s) AS user_name;""",
RETURNING *,(SELECT name FROM users WHERE users.user_id=%(user_id)s AND {TENANT_CONDITION}) AS user_name;""",
{"project_id": project_id, "user_id": user_id, "note_id": note_id, **data.model_dump(),
"tenant_id": tenant_id})
)

View file

@ -0,0 +1,59 @@
from typing import Optional
from chalicelib.utils import helper
from chalicelib.utils.ch_client import ClickHouseClient
def search_events(project_id: int, q: Optional[str] = None):
with ClickHouseClient() as ch_client:
full_args = {"project_id": project_id, "limit": 20}
constraints = ["project_id = %(project_id)s",
"_timestamp >= now()-INTERVAL 1 MONTH"]
if q:
constraints += ["value ILIKE %(q)s"]
full_args["q"] = helper.string_to_sql_like(q)
query = ch_client.format(
f"""SELECT value,data_count
FROM product_analytics.autocomplete_events_grouped
WHERE {" AND ".join(constraints)}
ORDER BY data_count DESC
LIMIT %(limit)s;""",
parameters=full_args)
rows = ch_client.execute(query)
return {"values": helper.list_to_camel_case(rows), "_src": 2}
def search_properties(project_id: int, property_name: Optional[str] = None, event_name: Optional[str] = None,
q: Optional[str] = None):
with ClickHouseClient() as ch_client:
select = "value"
full_args = {"project_id": project_id, "limit": 20,
"event_name": event_name, "property_name": property_name, "q": q,
"property_name_l": helper.string_to_sql_like(property_name),
"q_l": helper.string_to_sql_like(q)}
constraints = ["project_id = %(project_id)s",
"_timestamp >= now()-INTERVAL 1 MONTH"]
if event_name:
constraints += ["event_name = %(event_name)s"]
if property_name and q:
constraints += ["property_name = %(property_name)s"]
elif property_name:
select = "DISTINCT ON(property_name) property_name AS value"
constraints += ["property_name ILIKE %(property_name_l)s"]
if q:
constraints += ["value ILIKE %(q_l)s"]
query = ch_client.format(
f"""SELECT {select},data_count
FROM product_analytics.autocomplete_event_properties_grouped
WHERE {" AND ".join(constraints)}
ORDER BY data_count DESC
LIMIT %(limit)s;""",
parameters=full_args)
rows = ch_client.execute(query)
return {"values": helper.list_to_camel_case(rows), "_src": 2}

View file

@ -0,0 +1,182 @@
import logging
import schemas
from chalicelib.utils import helper
from chalicelib.utils import sql_helper as sh
from chalicelib.utils.ch_client import ClickHouseClient
from chalicelib.utils.exp_ch_helper import get_sub_condition, get_col_cast
logger = logging.getLogger(__name__)
PREDEFINED_EVENTS = {
"CLICK": "String",
"INPUT": "String",
"LOCATION": "String",
"ERROR": "String",
"PERFORMANCE": "String",
"REQUEST": "String"
}
def get_events(project_id: int, page: schemas.PaginatedSchema):
with ClickHouseClient() as ch_client:
r = ch_client.format(
"""SELECT DISTINCT
ON(event_name,auto_captured)
COUNT (1) OVER () AS total,
event_name AS name, display_name, description,
auto_captured
FROM product_analytics.all_events
WHERE project_id=%(project_id)s
ORDER BY auto_captured, display_name
LIMIT %(limit)s
OFFSET %(offset)s;""",
parameters={"project_id": project_id, "limit": page.limit, "offset": (page.page - 1) * page.limit})
rows = ch_client.execute(r)
if len(rows) == 0:
return {"total": len(PREDEFINED_EVENTS), "list": [{
"name": e,
"displayName": "",
"description": "",
"autoCaptured": True,
"id": "event_0",
"dataType": "string",
"possibleTypes": [
"string"
],
"_foundInPredefinedList": False
} for i, e in enumerate(PREDEFINED_EVENTS)]}
total = rows[0]["total"]
rows = helper.list_to_camel_case(rows)
for i, row in enumerate(rows):
row["id"] = f"event_{i}"
row["dataType"] = "string"
row["possibleTypes"] = ["string"]
row["_foundInPredefinedList"] = True
row.pop("total")
keys = [r["name"] for r in rows]
for e in PREDEFINED_EVENTS:
if e not in keys:
total += 1
rows.append({
"name": e,
"displayName": "",
"description": "",
"autoCaptured": True,
"id": "event_0",
"dataType": "string",
"possibleTypes": [
"string"
],
"_foundInPredefinedList": False
})
return {"total": total, "list": rows}
def search_events(project_id: int, data: schemas.EventsSearchPayloadSchema):
with ClickHouseClient() as ch_client:
full_args = {"project_id": project_id, "startDate": data.startTimestamp, "endDate": data.endTimestamp,
"projectId": project_id, "limit": data.limit, "offset": (data.page - 1) * data.limit}
constraints = ["project_id = %(projectId)s",
"created_at >= toDateTime(%(startDate)s/1000)",
"created_at <= toDateTime(%(endDate)s/1000)"]
ev_constraints = []
for i, f in enumerate(data.filters):
if not f.is_event:
f.value = helper.values_for_operator(value=f.value, op=f.operator)
f_k = f"f_value{i}"
full_args = {**full_args, f_k: sh.single_value(f.value), **sh.multi_values(f.value, value_key=f_k)}
is_any = sh.isAny_opreator(f.operator)
is_undefined = sh.isUndefined_operator(f.operator)
if f.is_predefined:
column = f.name
else:
column = f"properties.{f.name}"
if is_any:
condition = f"notEmpty{column})"
elif is_undefined:
condition = f"empty({column})"
else:
condition = sh.multi_conditions(
get_sub_condition(col_name=column, val_name=f_k, operator=f.operator),
values=f.value, value_key=f_k)
constraints.append(condition)
else:
e_k = f"e_value{i}"
full_args = {**full_args, e_k: f.name}
condition = f"`$event_name` = %({e_k})s"
sub_conditions = []
for j, ef in enumerate(f.properties.filters):
p_k = f"e_{i}_p_{j}"
full_args = {**full_args, **sh.multi_values(ef.value, value_key=p_k, data_type=ef.data_type)}
cast = get_col_cast(data_type=ef.data_type, value=ef.value)
if ef.is_predefined:
sub_condition = get_sub_condition(col_name=f"accurateCastOrNull(`{ef.name}`,'{cast}')",
val_name=p_k, operator=ef.operator)
else:
sub_condition = get_sub_condition(col_name=f"accurateCastOrNull(properties.`{ef.name}`,{cast})",
val_name=p_k, operator=ef.operator)
sub_conditions.append(sh.multi_conditions(sub_condition, ef.value, value_key=p_k))
if len(sub_conditions) > 0:
condition += " AND (" + (" " + f.properties.operator + " ").join(sub_conditions) + ")"
ev_constraints.append(condition)
constraints.append("(" + " OR ".join(ev_constraints) + ")")
query = ch_client.format(
f"""SELECT COUNT(1) OVER () AS total,
event_id,
`$event_name`,
created_at,
`distinct_id`,
`$browser`,
`$import`,
`$os`,
`$country`,
`$state`,
`$city`,
`$screen_height`,
`$screen_width`,
`$source`,
`$user_id`,
`$device`
FROM product_analytics.events
WHERE {" AND ".join(constraints)}
ORDER BY created_at
LIMIT %(limit)s OFFSET %(offset)s;""",
parameters=full_args)
rows = ch_client.execute(query)
if len(rows) == 0:
return {"total": 0, "rows": [], "_src": 2}
total = rows[0]["total"]
for r in rows:
r.pop("total")
return {"total": total, "rows": rows, "_src": 2}
def get_lexicon(project_id: int, page: schemas.PaginatedSchema):
with ClickHouseClient() as ch_client:
r = ch_client.format(
"""SELECT COUNT(1) OVER () AS total, all_events.event_name AS name,
*
FROM product_analytics.all_events
WHERE project_id = %(project_id)s
ORDER BY display_name
LIMIT %(limit)s
OFFSET %(offset)s;""",
parameters={"project_id": project_id, "limit": page.limit, "offset": (page.page - 1) * page.limit})
rows = ch_client.execute(r)
if len(rows) == 0:
return {"total": 0, "list": []}
total = rows[0]["total"]
rows = helper.list_to_camel_case(rows)
for i, row in enumerate(rows):
row["id"] = f"event_{i}"
row["dataType"] = "string"
row["possibleTypes"] = ["string"]
row["_foundInPredefinedList"] = True
row.pop("total")
return {"total": total, "list": rows}

View file

@ -0,0 +1,167 @@
import schemas
from chalicelib.utils import helper, exp_ch_helper
from chalicelib.utils.ch_client import ClickHouseClient
PREDEFINED_PROPERTIES = {
"label": "String",
"hesitation_time": "UInt32",
"name": "String",
"payload": "String",
"level": "Enum8",
"source": "Enum8",
"message": "String",
"error_id": "String",
"duration": "UInt16",
"context": "Enum8",
"url_host": "String",
"url_path": "String",
"url_hostpath": "String",
"request_start": "UInt16",
"response_start": "UInt16",
"response_end": "UInt16",
"dom_content_loaded_event_start": "UInt16",
"dom_content_loaded_event_end": "UInt16",
"load_event_start": "UInt16",
"load_event_end": "UInt16",
"first_paint": "UInt16",
"first_contentful_paint_time": "UInt16",
"speed_index": "UInt16",
"visually_complete": "UInt16",
"time_to_interactive": "UInt16",
"ttfb": "UInt16",
"ttlb": "UInt16",
"response_time": "UInt16",
"dom_building_time": "UInt16",
"dom_content_loaded_event_time": "UInt16",
"load_event_time": "UInt16",
"min_fps": "UInt8",
"avg_fps": "UInt8",
"max_fps": "UInt8",
"min_cpu": "UInt8",
"avg_cpu": "UInt8",
"max_cpu": "UInt8",
"min_total_js_heap_size": "UInt64",
"avg_total_js_heap_size": "UInt64",
"max_total_js_heap_size": "UInt64",
"min_used_js_heap_size": "UInt64",
"avg_used_js_heap_size": "UInt64",
"max_used_js_heap_size": "UInt64",
"method": "Enum8",
"status": "UInt16",
"success": "UInt8",
"request_body": "String",
"response_body": "String",
"transfer_size": "UInt32",
"selector": "String",
"normalized_x": "Float32",
"normalized_y": "Float32",
"message_id": "UInt64"
}
def get_all_properties(project_id: int, page: schemas.PaginatedSchema):
with ClickHouseClient() as ch_client:
r = ch_client.format(
"""SELECT COUNT(1) OVER () AS total, property_name AS name,
display_name,
array_agg(DISTINCT event_properties.value_type) AS possible_types
FROM product_analytics.all_properties
LEFT JOIN product_analytics.event_properties USING (project_id, property_name)
WHERE all_properties.project_id = %(project_id)s
GROUP BY property_name, display_name
ORDER BY display_name
LIMIT %(limit)s
OFFSET %(offset)s;""",
parameters={"project_id": project_id,
"limit": page.limit,
"offset": (page.page - 1) * page.limit})
properties = ch_client.execute(r)
if len(properties) == 0:
return {"total": 0, "list": []}
total = properties[0]["total"]
properties = helper.list_to_camel_case(properties)
for i, p in enumerate(properties):
p["id"] = f"prop_{i}"
p["_foundInPredefinedList"] = False
if p["name"] in PREDEFINED_PROPERTIES:
p["dataType"] = exp_ch_helper.simplify_clickhouse_type(PREDEFINED_PROPERTIES[p["name"]])
p["_foundInPredefinedList"] = True
p["possibleTypes"] = list(set(exp_ch_helper.simplify_clickhouse_types(p["possibleTypes"])))
p.pop("total")
keys = [p["name"] for p in properties]
for p in PREDEFINED_PROPERTIES:
if p not in keys:
total += 1
properties.append({
"name": p,
"displayName": "",
"possibleTypes": [
],
"id": f"prop_{len(properties) + 1}",
"_foundInPredefinedList": False,
"dataType": PREDEFINED_PROPERTIES[p]
})
return {"total": total, "list": properties}
def get_event_properties(project_id: int, event_name):
with ClickHouseClient() as ch_client:
r = ch_client.format(
"""SELECT all_properties.property_name AS name,
all_properties.display_name,
array_agg(DISTINCT event_properties.value_type) AS possible_types
FROM product_analytics.event_properties
INNER JOIN product_analytics.all_properties USING (property_name)
WHERE event_properties.project_id = %(project_id)s
AND all_properties.project_id = %(project_id)s
AND event_properties.event_name = %(event_name)s
GROUP BY ALL
ORDER BY 1;""",
parameters={"project_id": project_id, "event_name": event_name})
properties = ch_client.execute(r)
properties = helper.list_to_camel_case(properties)
for i, p in enumerate(properties):
p["id"] = f"prop_{i}"
p["_foundInPredefinedList"] = False
if p["name"] in PREDEFINED_PROPERTIES:
p["dataType"] = exp_ch_helper.simplify_clickhouse_type(PREDEFINED_PROPERTIES[p["name"]])
p["_foundInPredefinedList"] = True
p["possibleTypes"] = list(set(exp_ch_helper.simplify_clickhouse_types(p["possibleTypes"])))
return properties
def get_lexicon(project_id: int, page: schemas.PaginatedSchema):
with ClickHouseClient() as ch_client:
r = ch_client.format(
"""SELECT COUNT(1) OVER () AS total, all_properties.property_name AS name,
all_properties.*,
possible_types.values AS possible_types,
possible_values.values AS sample_values
FROM product_analytics.all_properties
LEFT JOIN (SELECT project_id, property_name, array_agg(DISTINCT value_type) AS
values
FROM product_analytics.event_properties
WHERE project_id=%(project_id)s
GROUP BY 1, 2) AS possible_types
USING (project_id, property_name)
LEFT JOIN (SELECT project_id, property_name, array_agg(DISTINCT value) AS
values
FROM product_analytics.property_values_samples
WHERE project_id=%(project_id)s
GROUP BY 1, 2) AS possible_values USING (project_id, property_name)
WHERE project_id = %(project_id)s
ORDER BY display_name
LIMIT %(limit)s
OFFSET %(offset)s;""",
parameters={"project_id": project_id,
"limit": page.limit,
"offset": (page.page - 1) * page.limit})
properties = ch_client.execute(r)
if len(properties) == 0:
return {"total": 0, "list": []}
total = properties[0]["total"]
for i, p in enumerate(properties):
p["id"] = f"prop_{i}"
p.pop("total")
return {"total": total, "list": helper.list_to_camel_case(properties)}

View file

@ -6,8 +6,18 @@ logger = logging.getLogger(__name__)
from . import sessions_pg
from . import sessions_pg as sessions_legacy
from . import sessions_ch
from . import sessions_search_pg
from . import sessions_search_pg as sessions_search_legacy
if config("EXP_METRICS", cast=bool, default=False):
if config("EXP_SESSIONS_SEARCH", cast=bool, default=False):
logger.info(">>> Using experimental sessions search")
from . import sessions_ch as sessions
from . import sessions_search_ch as sessions_search
else:
from . import sessions_pg as sessions
from . import sessions_search_pg as sessions_search
# if config("EXP_METRICS", cast=bool, default=False):
# from . import sessions_ch as sessions
# else:
# from . import sessions_pg as sessions

View file

@ -2,10 +2,12 @@ import logging
from typing import List, Union
import schemas
from chalicelib.core import events, metadata
from chalicelib.core import metadata
from chalicelib.core.events import events
from . import performance_event, sessions_legacy
from chalicelib.utils import pg_client, helper, metrics_helper, ch_client, exp_ch_helper
from chalicelib.utils import sql_helper as sh
from chalicelib.utils.exp_ch_helper import get_sub_condition, get_col_cast
logger = logging.getLogger(__name__)
@ -48,8 +50,8 @@ def search2_series(data: schemas.SessionsSearchPayloadSchema, project_id: int, d
query = f"""SELECT gs.generate_series AS timestamp,
COALESCE(COUNT(DISTINCT processed_sessions.user_id),0) AS count
FROM generate_series(%(startDate)s, %(endDate)s, %(step_size)s) AS gs
LEFT JOIN (SELECT multiIf(s.user_id IS NOT NULL AND s.user_id != '', s.user_id,
s.user_anonymous_id IS NOT NULL AND s.user_anonymous_id != '',
LEFT JOIN (SELECT multiIf(isNotNull(s.user_id) AND notEmpty(s.user_id), s.user_id,
isNotNull(s.user_anonymous_id) AND notEmpty(s.user_anonymous_id),
s.user_anonymous_id, toString(s.user_uuid)) AS user_id,
s.datetime AS datetime
{query_part}) AS processed_sessions ON(TRUE)
@ -148,12 +150,12 @@ def search2_table(data: schemas.SessionsSearchPayloadSchema, project_id: int, de
for e in data.events:
if e.type == schemas.EventType.LOCATION:
if e.operator not in extra_conditions:
extra_conditions[e.operator] = schemas.SessionSearchEventSchema2.model_validate({
extra_conditions[e.operator] = schemas.SessionSearchEventSchema.model_validate({
"type": e.type,
"isEvent": True,
"value": [],
"operator": e.operator,
"filters": []
"filters": e.filters
})
for v in e.value:
if v not in extra_conditions[e.operator].value:
@ -173,12 +175,12 @@ def search2_table(data: schemas.SessionsSearchPayloadSchema, project_id: int, de
for e in data.events:
if e.type == schemas.EventType.REQUEST_DETAILS:
if e.operator not in extra_conditions:
extra_conditions[e.operator] = schemas.SessionSearchEventSchema2.model_validate({
extra_conditions[e.operator] = schemas.SessionSearchEventSchema.model_validate({
"type": e.type,
"isEvent": True,
"value": [],
"operator": e.operator,
"filters": []
"filters": e.filters
})
for v in e.value:
if v not in extra_conditions[e.operator].value:
@ -253,7 +255,7 @@ def search2_table(data: schemas.SessionsSearchPayloadSchema, project_id: int, de
FROM (SELECT s.user_id AS user_id {extra_col}
{query_part}
WHERE isNotNull(user_id)
AND user_id != '') AS filtred_sessions
AND notEmpty(user_id)) AS filtred_sessions
{extra_where}
GROUP BY {main_col}
ORDER BY total DESC
@ -277,7 +279,7 @@ def search2_table(data: schemas.SessionsSearchPayloadSchema, project_id: int, de
return sessions
def __is_valid_event(is_any: bool, event: schemas.SessionSearchEventSchema2):
def __is_valid_event(is_any: bool, event: schemas.SessionSearchEventSchema):
return not (not is_any and len(event.value) == 0 and event.type not in [schemas.EventType.REQUEST_DETAILS,
schemas.EventType.GRAPHQL] \
or event.type in [schemas.PerformanceEventType.LOCATION_DOM_COMPLETE,
@ -330,7 +332,11 @@ def json_condition(table_alias, json_column, json_key, op, values, value_key, ch
extract_func = "JSONExtractFloat" if numeric_type == "float" else "JSONExtractInt"
condition = f"{extract_func}(toString({table_alias}.`{json_column}`), '{json_key}') {op} %({value_key})s"
else:
condition = f"JSONExtractString(toString({table_alias}.`{json_column}`), '{json_key}') {op} %({value_key})s"
# condition = f"JSONExtractString(toString({table_alias}.`{json_column}`), '{json_key}') {op} %({value_key})s"
condition = get_sub_condition(
col_name=f"JSONExtractString(toString({table_alias}.`{json_column}`), '{json_key}')",
val_name=value_key, operator=op
)
conditions.append(sh.multi_conditions(condition, values, value_key=value_key))
@ -373,6 +379,34 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
events_conditions_where = ["main.project_id = %(projectId)s",
"main.created_at >= toDateTime(%(startDate)s/1000)",
"main.created_at <= toDateTime(%(endDate)s/1000)"]
any_incident = False
kept_events = []
for e in data.events:
    if e.type == schemas.EventType.INCIDENT and e.operator == schemas.SearchEventOperator.IS_ANY:
        # don't stop here because we could have multiple filters looking for any incident
        any_incident = True
    else:
        kept_events.append(e)
data.events = kept_events
if any_incident:
any_incident = False
for f in data.filters:
if f.type == schemas.FilterType.ISSUE:
any_incident = True
if schemas.IssueType.INCIDENT not in f.value:
f.value.append(schemas.IssueType.INCIDENT)
if f.operator == schemas.SearchEventOperator.IS_ANY:
f.operator = schemas.SearchEventOperator.IS
break
if not any_incident:
data.filters.append(schemas.SessionSearchFilterSchema(**{
"type": "issue",
"isEvent": False,
"value": [
"incident"
],
"operator": "is"
}))
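# Editor's worked example (hypothetical payload): an INCIDENT/isAny event is
# removed from data.events and folded into an ISSUE filter instead, e.g.
#   events:  [{"type": "incident", "operator": "isAny"}]  ->  []
#   filters: []  ->  [{"type": "issue", "value": ["incident"], "operator": "is"}]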
if len(data.filters) > 0:
meta_keys = None
# include a sub-query of sessions inside the events query, in order to reduce the selected data
@ -516,7 +550,7 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
ss_constraints.append(
sh.multi_conditions(f"ms.base_referrer {op} toString(%({f_k})s)", f.value, is_not=is_not,
value_key=f_k))
elif filter_type == events.EventType.METADATA.ui_type:
elif filter_type == schemas.FilterType.METADATA:
# get metadata list only if you need it
if meta_keys is None:
meta_keys = metadata.get(project_id=project_id)
@ -660,39 +694,60 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
event.value = helper.values_for_operator(value=event.value, op=event.operator)
full_args = {**full_args,
**sh.multi_values(event.value, value_key=e_k),
**sh.multi_values(event.source, value_key=s_k)}
**sh.multi_values(event.source, value_key=s_k),
e_k: event.value[0] if len(event.value) > 0 else event.value}
if event_type == events.EventType.CLICK.ui_type:
if event_type == schemas.EventType.CLICK:
event_from = event_from % f"{MAIN_EVENTS_TABLE} AS main "
if platform == "web":
_column = events.EventType.CLICK.column
_column = "label"
event_where.append(
f"main.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
if not is_any:
if schemas.ClickEventExtraOperator.has_value(event.operator):
event_where.append(json_condition(
"main",
"$properties",
"selector", op, event.value, e_k)
# event_where.append(json_condition(
# "main",
# "$properties",
# "selector", op, event.value, e_k)
# )
event_where.append(
sh.multi_conditions(
get_sub_condition(col_name=f"main.`$properties`.selector",
val_name=e_k, operator=event.operator),
event.value, value_key=e_k)
)
events_conditions[-1]["condition"] = event_where[-1]
else:
if is_not:
event_where.append(json_condition(
"sub", "$properties", _column, op, event.value, e_k
))
# event_where.append(json_condition(
# "sub", "$properties", _column, op, event.value, e_k
# ))
event_where.append(
sh.multi_conditions(
get_sub_condition(col_name=f"sub.`$properties`.{_column}",
val_name=e_k, operator=event.operator),
event.value, value_key=e_k)
)
events_conditions_not.append(
{
"type": f"sub.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'"})
"type": f"sub.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'"
}
)
events_conditions_not[-1]["condition"] = event_where[-1]
else:
# event_where.append(
# json_condition("main", "$properties", _column, op, event.value, e_k)
# )
event_where.append(
json_condition("main", "$properties", _column, op, event.value, e_k)
sh.multi_conditions(
get_sub_condition(col_name=f"main.`$properties`.{_column}",
val_name=e_k, operator=event.operator),
event.value, value_key=e_k)
)
events_conditions[-1]["condition"] = event_where[-1]
else:
_column = events.EventType.CLICK_MOBILE.column
_column = "label"
event_where.append(
f"main.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
@ -711,10 +766,10 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
)
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == events.EventType.INPUT.ui_type:
elif event_type == schemas.EventType.INPUT:
event_from = event_from % f"{MAIN_EVENTS_TABLE} AS main "
if platform == "web":
_column = events.EventType.INPUT.column
_column = "label"
event_where.append(
f"main.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
@ -739,7 +794,7 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
full_args = {**full_args, **sh.multi_values(event.source, value_key=f"custom{i}")}
else:
_column = events.EventType.INPUT_MOBILE.column
_column = "label"
event_where.append(
f"main.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
@ -759,7 +814,7 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == events.EventType.LOCATION.ui_type:
elif event_type == schemas.EventType.LOCATION:
event_from = event_from % f"{MAIN_EVENTS_TABLE} AS main "
if platform == "web":
_column = 'url_path'
@ -781,7 +836,7 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
)
events_conditions[-1]["condition"] = event_where[-1]
else:
_column = events.EventType.VIEW_MOBILE.column
_column = "name"
event_where.append(
f"main.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
@ -798,9 +853,9 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
event_where.append(sh.multi_conditions(f"main.{_column} {op} %({e_k})s",
event.value, value_key=e_k))
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == events.EventType.CUSTOM.ui_type:
elif event_type == schemas.EventType.CUSTOM:
event_from = event_from % f"{MAIN_EVENTS_TABLE} AS main "
_column = events.EventType.CUSTOM.column
_column = "name"
event_where.append(
f"main.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
@ -818,7 +873,7 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
"main", "$properties", _column, op, event.value, e_k
))
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == events.EventType.REQUEST.ui_type:
elif event_type == schemas.EventType.REQUEST:
event_from = event_from % f"{MAIN_EVENTS_TABLE} AS main "
_column = 'url_path'
event_where.append(
@ -839,9 +894,9 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
))
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == events.EventType.STATEACTION.ui_type:
elif event_type == schemas.EventType.STATE_ACTION:
event_from = event_from % f"{MAIN_EVENTS_TABLE} AS main "
_column = events.EventType.STATEACTION.column
_column = "name"
event_where.append(
f"main.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
@ -860,7 +915,7 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
))
events_conditions[-1]["condition"] = event_where[-1]
# TODO: isNot for ERROR
elif event_type == events.EventType.ERROR.ui_type:
elif event_type == schemas.EventType.ERROR:
event_from = event_from % f"{MAIN_EVENTS_TABLE} AS main"
events_extra_join = f"SELECT * FROM {MAIN_EVENTS_TABLE} AS main1 WHERE main1.project_id=%(project_id)s"
event_where.append(
@ -870,20 +925,23 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
events_conditions[-1]["condition"] = []
if not is_any and event.value not in [None, "*", ""]:
event_where.append(
sh.multi_conditions(f"(toString(main1.`$properties`.message) {op} %({e_k})s OR toString(main1.`$properties`.name) {op} %({e_k})s)",
event.value, value_key=e_k))
sh.multi_conditions(
f"(toString(main1.`$properties`.message) {op} %({e_k})s OR toString(main1.`$properties`.name) {op} %({e_k})s)",
event.value, value_key=e_k))
events_conditions[-1]["condition"].append(event_where[-1])
events_extra_join += f" AND {event_where[-1]}"
if len(event.source) > 0 and event.source[0] not in [None, "*", ""]:
event_where.append(sh.multi_conditions(f"toString(main1.`$properties`.source) = %({s_k})s", event.source, value_key=s_k))
event_where.append(
sh.multi_conditions(f"toString(main1.`$properties`.source) = %({s_k})s", event.source,
value_key=s_k))
events_conditions[-1]["condition"].append(event_where[-1])
events_extra_join += f" AND {event_where[-1]}"
events_conditions[-1]["condition"] = " AND ".join(events_conditions[-1]["condition"])
# ----- Mobile
elif event_type == events.EventType.CLICK_MOBILE.ui_type:
_column = events.EventType.CLICK_MOBILE.column
elif event_type == schemas.EventType.CLICK_MOBILE:
_column = "label"
event_where.append(
f"main.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
@ -901,8 +959,8 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
"main", "$properties", _column, op, event.value, e_k
))
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == events.EventType.INPUT_MOBILE.ui_type:
_column = events.EventType.INPUT_MOBILE.column
elif event_type == schemas.EventType.INPUT_MOBILE:
_column = "label"
event_where.append(
f"main.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
@ -920,8 +978,8 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
"main", "$properties", _column, op, event.value, e_k
))
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == events.EventType.VIEW_MOBILE.ui_type:
_column = events.EventType.VIEW_MOBILE.column
elif event_type == schemas.EventType.VIEW_MOBILE:
_column = "name"
event_where.append(
f"main.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
@ -939,8 +997,8 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
"main", "$properties", _column, op, event.value, e_k
))
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == events.EventType.CUSTOM_MOBILE.ui_type:
_column = events.EventType.CUSTOM_MOBILE.column
elif event_type == schemas.EventType.CUSTOM_MOBILE:
_column = "name"
event_where.append(
f"main.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
@ -959,7 +1017,7 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
))
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == events.EventType.REQUEST_MOBILE.ui_type:
elif event_type == schemas.EventType.REQUEST_MOBILE:
event_from = event_from % f"{MAIN_EVENTS_TABLE} AS main "
_column = 'url_path'
event_where.append(
@ -979,8 +1037,8 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
"main", "$properties", _column, op, event.value, e_k
))
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == events.EventType.CRASH_MOBILE.ui_type:
_column = events.EventType.CRASH_MOBILE.column
elif event_type == schemas.EventType.ERROR_MOBILE:
_column = "name"
event_where.append(
f"main.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
@ -999,8 +1057,8 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
"main", "$properties", _column, op, event.value, e_k
))
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == events.EventType.SWIPE_MOBILE.ui_type and platform != "web":
_column = events.EventType.SWIPE_MOBILE.column
elif event_type == schemas.EventType.SWIPE_MOBILE and platform != "web":
_column = "label"
event_where.append(
f"main.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
@ -1108,8 +1166,12 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
is_any = sh.isAny_opreator(f.operator)
if is_any or len(f.value) == 0:
continue
is_negative_operator = sh.is_negation_operator(f.operator)
f.value = helper.values_for_operator(value=f.value, op=f.operator)
op = sh.get_sql_operator(f.operator)
r_op = ""
if is_negative_operator:
r_op = sh.reverse_sql_operator(op)
e_k_f = e_k + f"_fetch{j}"
full_args = {**full_args, **sh.multi_values(f.value, value_key=e_k_f)}
if f.type == schemas.FetchFilterType.FETCH_URL:
@ -1118,6 +1180,12 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
))
events_conditions[-1]["condition"].append(event_where[-1])
apply = True
if is_negative_operator:
events_conditions_not.append(
{
"type": f"sub.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'"})
events_conditions_not[-1]["condition"] = sh.multi_conditions(
f"sub.`$properties`.url_path {r_op} %({e_k_f})s", f.value, value_key=e_k_f)
elif f.type == schemas.FetchFilterType.FETCH_STATUS_CODE:
event_where.append(json_condition(
"main", "$properties", 'status', op, f.value, e_k_f, True, True
@ -1130,6 +1198,13 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
))
events_conditions[-1]["condition"].append(event_where[-1])
apply = True
if is_negative_operator:
events_conditions_not.append(
{
"type": f"sub.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'"})
events_conditions_not[-1]["condition"] = sh.multi_conditions(
f"sub.`$properties`.method {r_op} %({e_k_f})s", f.value,
value_key=e_k_f)
elif f.type == schemas.FetchFilterType.FETCH_DURATION:
event_where.append(
sh.multi_conditions(f"main.`$duration_s` {f.operator} %({e_k_f})s/1000", f.value,
@ -1142,12 +1217,26 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
))
events_conditions[-1]["condition"].append(event_where[-1])
apply = True
if is_negative_operator:
events_conditions_not.append(
{
"type": f"sub.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'"})
events_conditions_not[-1]["condition"] = sh.multi_conditions(
f"sub.`$properties`.request_body {r_op} %({e_k_f})s", f.value,
value_key=e_k_f)
elif f.type == schemas.FetchFilterType.FETCH_RESPONSE_BODY:
event_where.append(json_condition(
"main", "$properties", 'response_body', op, f.value, e_k_f
))
events_conditions[-1]["condition"].append(event_where[-1])
apply = True
if is_negative_operator:
events_conditions_not.append(
{
"type": f"sub.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'"})
events_conditions_not[-1]["condition"] = sh.multi_conditions(
f"sub.`$properties`.response_body {r_op} %({e_k_f})s", f.value,
value_key=e_k_f)
else:
logging.warning(f"undefined FETCH filter: {f.type}")
if not apply:
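The is_negative_operator branches added in this hunk rewrite an exclusion filter (for example "response body does not contain X") as its positive form applied to a sub event and collect it in events_conditions_not, so sessions with a matching sub-event can be excluded later. A sketch of the assumed operator reversal (hypothetical; the real sh.reverse_sql_operator may cover more operators):
def reverse_sql_operator(op):
    # Map a comparison to its negation so the sub-query matches exactly
    # the events the outer query must NOT contain.
    return {"=": "!=", "!=": "=",
            "ILIKE": "NOT ILIKE", "NOT ILIKE": "ILIKE"}.get(op, op)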
@ -1170,7 +1259,7 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
full_args = {**full_args, **sh.multi_values(f.value, value_key=e_k_f)}
if f.type == schemas.GraphqlFilterType.GRAPHQL_NAME:
event_where.append(json_condition(
"main", "$properties", events.EventType.GRAPHQL.column, op, f.value, e_k_f
"main", "$properties", "name", op, f.value, e_k_f
))
events_conditions[-1]["condition"].append(event_where[-1])
elif f.type == schemas.GraphqlFilterType.GRAPHQL_METHOD:
@ -1191,8 +1280,92 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
else:
logging.warning(f"undefined GRAPHQL filter: {f.type}")
events_conditions[-1]["condition"] = " AND ".join(events_conditions[-1]["condition"])
elif event_type == schemas.EventType.EVENT:
event_from = event_from % f"{MAIN_EVENTS_TABLE} AS main "
_column = "label"
event_where.append(f"main.`$event_name`=%({e_k})s AND main.session_id>0")
events_conditions.append({"type": event_where[-1], "condition": ""})
elif event_type == schemas.EventType.INCIDENT:
event_from = event_from % f"{MAIN_EVENTS_TABLE} AS main "
_column = "label"
event_where.append(
f"main.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
if is_not:
event_where.append(
sh.multi_conditions(
get_sub_condition(col_name=f"sub.`$properties`.{_column}",
val_name=e_k, operator=event.operator),
event.value, value_key=e_k)
)
events_conditions_not.append(
{
"type": f"sub.`$event_name`='{exp_ch_helper.get_event_type(event_type, platform=platform)}'"
}
)
events_conditions_not[-1]["condition"] = event_where[-1]
else:
event_where.append(
sh.multi_conditions(
get_sub_condition(col_name=f"main.`$properties`.{_column}",
val_name=e_k, operator=event.operator),
event.value, value_key=e_k)
)
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == schemas.EventType.CLICK_COORDINATES:
event_from = event_from % f"{MAIN_EVENTS_TABLE} AS main "
event_where.append(
f"main.`$event_name`='{exp_ch_helper.get_event_type(schemas.EventType.CLICK, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
if is_not:
event_where.append(
sh.coordinate_conditions(
condition_x=f"sub.`$properties`.normalized_x",
condition_y=f"sub.`$properties`.normalized_y",
values=event.value, value_key=e_k, is_not=True)
)
events_conditions_not.append(
{
"type": f"sub.`$event_name`='{exp_ch_helper.get_event_type(schemas.EventType.CLICK, platform=platform)}'"
}
)
events_conditions_not[-1]["condition"] = event_where[-1]
else:
event_where.append(
sh.coordinate_conditions(
condition_x=f"main.`$properties`.normalized_x",
condition_y=f"main.`$properties`.normalized_y",
values=event.value, value_key=e_k, is_not=True)
)
events_conditions[-1]["condition"] = event_where[-1]
else:
continue
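sh.coordinate_conditions is assumed to turn normalized click coordinates into a bounding-box predicate over normalized_x / normalized_y. A hypothetical sketch of that shape (the real helper and the layout of values may differ):
def coordinate_conditions(condition_x, condition_y, values, value_key, is_not=False):
    # Bound both axes; the actual parameter values are registered
    # separately in full_args, so only the predicate template is built here.
    cond = (f"({condition_x} BETWEEN %({value_key}_x1)s AND %({value_key}_x2)s "
            f"AND {condition_y} BETWEEN %({value_key}_y1)s AND %({value_key}_y2)s)")
    return f"NOT {cond}" if is_not else cond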
if event.properties is not None and len(event.properties.filters) > 0:
sub_conditions = []
for l, property in enumerate(event.properties.filters):
a_k = f"{e_k}_att_{l}"
full_args = {**full_args,
**sh.multi_values(property.value, value_key=a_k, data_type=property.data_type)}
cast = get_col_cast(data_type=property.data_type, value=property.value)
if property.is_predefined:
condition = get_sub_condition(col_name=f"accurateCastOrNull(main.`{property.name}`,'{cast}')",
val_name=a_k, operator=property.operator)
else:
condition = get_sub_condition(
col_name=f"accurateCastOrNull(main.properties.`{property.name}`,'{cast}')",
val_name=a_k, operator=property.operator)
event_where.append(
sh.multi_conditions(condition, property.value, value_key=a_k)
)
sub_conditions.append(event_where[-1])
if len(sub_conditions) > 0:
sub_conditions = (" " + event.properties.operator + " ").join(sub_conditions)
events_conditions[-1]["condition"] += " AND " if len(events_conditions[-1]["condition"]) > 0 else ""
events_conditions[-1]["condition"] += "(" + sub_conditions + ")"
if event_index == 0 or or_events:
event_where += ss_constraints
if is_not:
@ -1395,17 +1568,30 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
if extra_conditions and len(extra_conditions) > 0:
_extra_or_condition = []
for i, c in enumerate(extra_conditions):
if sh.isAny_opreator(c.operator):
if sh.isAny_opreator(c.operator) and c.type != schemas.EventType.REQUEST_DETAILS.value:
continue
e_k = f"ec_value{i}"
op = sh.get_sql_operator(c.operator)
c.value = helper.values_for_operator(value=c.value, op=c.operator)
full_args = {**full_args,
**sh.multi_values(c.value, value_key=e_k)}
if c.type == events.EventType.LOCATION.ui_type:
if c.type in (schemas.EventType.LOCATION.value, schemas.EventType.REQUEST.value):
_extra_or_condition.append(
sh.multi_conditions(f"extra_event.url_path {op} %({e_k})s",
c.value, value_key=e_k))
elif c.type == schemas.EventType.REQUEST_DETAILS.value:
for j, c_f in enumerate(c.filters):
if sh.isAny_opreator(c_f.operator) or len(c_f.value) == 0:
continue
e_k += f"_{j}"
op = sh.get_sql_operator(c_f.operator)
c_f.value = helper.values_for_operator(value=c_f.value, op=c_f.operator)
full_args = {**full_args,
**sh.multi_values(c_f.value, value_key=e_k)}
if c_f.type == schemas.FetchFilterType.FETCH_URL.value:
_extra_or_condition.append(
sh.multi_conditions(f"extra_event.url_path {op} %({e_k})s",
c_f.value, value_key=e_k))
else:
logging.warning(f"unsupported extra_event type:${c.type}")
if len(_extra_or_condition) > 0:
@ -1477,18 +1663,15 @@ def get_user_sessions(project_id, user_id, start_date, end_date):
def get_session_user(project_id, user_id):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""\
SELECT
user_id,
count(*) as session_count,
max(start_ts) as last_seen,
min(start_ts) as first_seen
FROM
"public".sessions
WHERE
project_id = %(project_id)s
AND user_id = %(userId)s
AND duration is not null
""" \
SELECT user_id,
count(*) as session_count,
max(start_ts) as last_seen,
min(start_ts) as first_seen
FROM "public".sessions
WHERE project_id = %(project_id)s
AND user_id = %(userId)s
AND duration is not null
GROUP BY user_id;
""",
{"project_id": project_id, "userId": user_id}


@ -1,9 +1,9 @@
import ast
import logging
from typing import List, Union
import schemas
from chalicelib.core import events, metadata, projects
from chalicelib.core import metadata, projects
from chalicelib.core.events import events
from chalicelib.core.sessions import performance_event, sessions_favorite, sessions_legacy
from chalicelib.utils import pg_client, helper, ch_client, exp_ch_helper
from chalicelib.utils import sql_helper as sh
@ -219,7 +219,7 @@ def search_sessions(data: schemas.SessionsSearchPayloadSchema, project_id, user_
}
def __is_valid_event(is_any: bool, event: schemas.SessionSearchEventSchema2):
def __is_valid_event(is_any: bool, event: schemas.SessionSearchEventSchema):
return not (not is_any and len(event.value) == 0 and event.type not in [schemas.EventType.REQUEST_DETAILS,
schemas.EventType.GRAPHQL] \
or event.type in [schemas.PerformanceEventType.LOCATION_DOM_COMPLETE,
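Read roughly: an event is valid unless it carries no values while being of a type that requires them (REQUEST_DETAILS and GRAPHQL carry their conditions in nested filters instead), or it is one of the performance event types listed, which are validated separately.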
@ -411,7 +411,7 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
ss_constraints.append(
_multiple_conditions(f"ms.base_referrer {op} toString(%({f_k})s)", f.value, is_not=is_not,
value_key=f_k))
elif filter_type == events.EventType.METADATA.ui_type:
elif filter_type == schemas.FilterType.METADATA:
# get metadata list only if you need it
if meta_keys is None:
meta_keys = metadata.get(project_id=project_id)
@ -557,10 +557,10 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
**_multiple_values(event.value, value_key=e_k),
**_multiple_values(event.source, value_key=s_k)}
if event_type == events.EventType.CLICK.ui_type:
if event_type == schemas.EventType.CLICK:
event_from = event_from % f"{MAIN_EVENTS_TABLE} AS main "
if platform == "web":
_column = events.EventType.CLICK.column
_column = "label"
event_where.append(
f"main.event_type='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
@ -582,7 +582,7 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
value_key=e_k))
events_conditions[-1]["condition"] = event_where[-1]
else:
_column = events.EventType.CLICK_MOBILE.column
_column = "label"
event_where.append(
f"main.event_type='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
@ -599,10 +599,10 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
value_key=e_k))
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == events.EventType.INPUT.ui_type:
elif event_type == schemas.EventType.INPUT:
event_from = event_from % f"{MAIN_EVENTS_TABLE} AS main "
if platform == "web":
_column = events.EventType.INPUT.column
_column = "label"
event_where.append(
f"main.event_type='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
@ -623,7 +623,7 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
value_key=f"custom{i}"))
full_args = {**full_args, **_multiple_values(event.source, value_key=f"custom{i}")}
else:
_column = events.EventType.INPUT_MOBILE.column
_column = "label"
event_where.append(
f"main.event_type='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
@ -640,7 +640,7 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
value_key=e_k))
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == events.EventType.LOCATION.ui_type:
elif event_type == schemas.EventType.LOCATION:
event_from = event_from % f"{MAIN_EVENTS_TABLE} AS main "
if platform == "web":
_column = 'url_path'
@ -660,7 +660,7 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
event.value, value_key=e_k))
events_conditions[-1]["condition"] = event_where[-1]
else:
_column = events.EventType.VIEW_MOBILE.column
_column = "name"
event_where.append(
f"main.event_type='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
@ -676,9 +676,9 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
event_where.append(_multiple_conditions(f"main.{_column} {op} %({e_k})s",
event.value, value_key=e_k))
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == events.EventType.CUSTOM.ui_type:
elif event_type == schemas.EventType.CUSTOM:
event_from = event_from % f"{MAIN_EVENTS_TABLE} AS main "
_column = events.EventType.CUSTOM.column
_column = "name"
event_where.append(f"main.event_type='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
if not is_any:
@ -692,7 +692,7 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
event_where.append(_multiple_conditions(f"main.{_column} {op} %({e_k})s", event.value,
value_key=e_k))
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == events.EventType.REQUEST.ui_type:
elif event_type == schemas.EventType.REQUEST:
event_from = event_from % f"{MAIN_EVENTS_TABLE} AS main "
_column = 'url_path'
event_where.append(f"main.event_type='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
@ -709,9 +709,9 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
value_key=e_k))
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == events.EventType.STATEACTION.ui_type:
elif event_type == schemas.EventType.STATE_ACTION:
event_from = event_from % f"{MAIN_EVENTS_TABLE} AS main "
_column = events.EventType.STATEACTION.column
_column = "name"
event_where.append(f"main.event_type='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
if not is_any:
@ -726,7 +726,7 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
event.value, value_key=e_k))
events_conditions[-1]["condition"] = event_where[-1]
# TODO: isNot for ERROR
elif event_type == events.EventType.ERROR.ui_type:
elif event_type == schemas.EventType.ERROR:
event_from = event_from % f"{MAIN_EVENTS_TABLE} AS main"
events_extra_join = f"SELECT * FROM {MAIN_EVENTS_TABLE} AS main1 WHERE main1.project_id=%(project_id)s"
event_where.append(f"main.event_type='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
@ -747,8 +747,8 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
events_conditions[-1]["condition"] = " AND ".join(events_conditions[-1]["condition"])
# ----- Mobile
elif event_type == events.EventType.CLICK_MOBILE.ui_type:
_column = events.EventType.CLICK_MOBILE.column
elif event_type == schemas.EventType.CLICK_MOBILE:
_column = "label"
event_where.append(f"main.event_type='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
if not is_any:
@ -762,8 +762,8 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
event_where.append(_multiple_conditions(f"main.{_column} {op} %({e_k})s", event.value,
value_key=e_k))
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == events.EventType.INPUT_MOBILE.ui_type:
_column = events.EventType.INPUT_MOBILE.column
elif event_type == schemas.EventType.INPUT_MOBILE:
_column = "label"
event_where.append(f"main.event_type='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
if not is_any:
@ -777,8 +777,8 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
event_where.append(_multiple_conditions(f"main.{_column} {op} %({e_k})s", event.value,
value_key=e_k))
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == events.EventType.VIEW_MOBILE.ui_type:
_column = events.EventType.VIEW_MOBILE.column
elif event_type == schemas.EventType.VIEW_MOBILE:
_column = "name"
event_where.append(f"main.event_type='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
if not is_any:
@ -792,8 +792,8 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
event_where.append(_multiple_conditions(f"main.{_column} {op} %({e_k})s",
event.value, value_key=e_k))
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == events.EventType.CUSTOM_MOBILE.ui_type:
_column = events.EventType.CUSTOM_MOBILE.column
elif event_type == schemas.EventType.CUSTOM_MOBILE:
_column = "name"
event_where.append(f"main.event_type='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
if not is_any:
@ -807,7 +807,7 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
event_where.append(_multiple_conditions(f"main.{_column} {op} %({e_k})s",
event.value, value_key=e_k))
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == events.EventType.REQUEST_MOBILE.ui_type:
elif event_type == schemas.EventType.REQUEST_MOBILE:
event_from = event_from % f"{MAIN_EVENTS_TABLE} AS main "
_column = 'url_path'
event_where.append(f"main.event_type='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
@ -823,8 +823,8 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
event_where.append(_multiple_conditions(f"main.{_column} {op} %({e_k})s", event.value,
value_key=e_k))
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == events.EventType.CRASH_MOBILE.ui_type:
_column = events.EventType.CRASH_MOBILE.column
elif event_type == schemas.EventType.ERROR_MOBILE:
_column = "name"
event_where.append(f"main.event_type='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
if not is_any:
@ -838,8 +838,8 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
event_where.append(_multiple_conditions(f"main.{_column} {op} %({e_k})s",
event.value, value_key=e_k))
events_conditions[-1]["condition"] = event_where[-1]
elif event_type == events.EventType.SWIPE_MOBILE.ui_type and platform != "web":
_column = events.EventType.SWIPE_MOBILE.column
elif event_type == schemas.EventType.SWIPE_MOBILE and platform != "web":
_column = "label"
event_where.append(f"main.event_type='{exp_ch_helper.get_event_type(event_type, platform=platform)}'")
events_conditions.append({"type": event_where[-1]})
if not is_any:
@ -993,7 +993,7 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
full_args = {**full_args, **_multiple_values(f.value, value_key=e_k_f)}
if f.type == schemas.GraphqlFilterType.GRAPHQL_NAME:
event_where.append(
_multiple_conditions(f"main.{events.EventType.GRAPHQL.column} {op} %({e_k_f})s", f.value,
_multiple_conditions(f"main.name {op} %({e_k_f})s", f.value,
value_key=e_k_f))
events_conditions[-1]["condition"].append(event_where[-1])
elif f.type == schemas.GraphqlFilterType.GRAPHQL_METHOD:
@ -1222,7 +1222,7 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
c.value = helper.values_for_operator(value=c.value, op=c.operator)
full_args = {**full_args,
**_multiple_values(c.value, value_key=e_k)}
if c.type == events.EventType.LOCATION.ui_type:
if c.type == schemas.EventType.LOCATION:
_extra_or_condition.append(
_multiple_conditions(f"extra_event.url_path {op} %({e_k})s",
c.value, value_key=e_k))
@ -1359,18 +1359,15 @@ def get_user_sessions(project_id, user_id, start_date, end_date):
def get_session_user(project_id, user_id):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""\
SELECT
user_id,
count(*) as session_count,
max(start_ts) as last_seen,
min(start_ts) as first_seen
FROM
"public".sessions
WHERE
project_id = %(project_id)s
AND user_id = %(userId)s
AND duration is not null
""" \
SELECT user_id,
count(*) as session_count,
max(start_ts) as last_seen,
min(start_ts) as first_seen
FROM "public".sessions
WHERE project_id = %(project_id)s
AND user_id = %(userId)s
AND duration is not null
GROUP BY user_id;
""",
{"project_id": project_id, "userId": user_id}


@ -1,269 +0,0 @@
import logging
from urllib.parse import urljoin
from decouple import config
import schemas
from chalicelib.core.collaborations.collaboration_msteams import MSTeams
from chalicelib.core.collaborations.collaboration_slack import Slack
from chalicelib.utils import pg_client, helper
from chalicelib.utils import sql_helper as sh
from chalicelib.utils.TimeUTC import TimeUTC
logger = logging.getLogger(__name__)
def get_note(tenant_id, project_id, user_id, note_id, share=None):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(f"""SELECT sessions_notes.*, users.name AS user_name
{",(SELECT name FROM users WHERE user_id=%(share)s AND deleted_at ISNULL) AS share_name" if share else ""}
FROM sessions_notes INNER JOIN users USING (user_id)
WHERE sessions_notes.project_id = %(project_id)s
AND sessions_notes.note_id = %(note_id)s
AND sessions_notes.deleted_at IS NULL
AND (sessions_notes.user_id = %(user_id)s OR sessions_notes.is_public);""",
{"project_id": project_id, "user_id": user_id, "tenant_id": tenant_id,
"note_id": note_id, "share": share})
cur.execute(query=query)
row = cur.fetchone()
row = helper.dict_to_camel_case(row)
if row:
row["createdAt"] = TimeUTC.datetime_to_timestamp(row["createdAt"])
row["updatedAt"] = TimeUTC.datetime_to_timestamp(row["updatedAt"])
return row
def get_session_notes(tenant_id, project_id, session_id, user_id):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(f"""SELECT sessions_notes.*, users.name AS user_name
FROM sessions_notes INNER JOIN users USING (user_id)
WHERE sessions_notes.project_id = %(project_id)s
AND sessions_notes.deleted_at IS NULL
AND sessions_notes.session_id = %(session_id)s
AND (sessions_notes.user_id = %(user_id)s
OR sessions_notes.is_public)
ORDER BY created_at DESC;""",
{"project_id": project_id, "user_id": user_id,
"tenant_id": tenant_id, "session_id": session_id})
cur.execute(query=query)
rows = cur.fetchall()
rows = helper.list_to_camel_case(rows)
for row in rows:
row["createdAt"] = TimeUTC.datetime_to_timestamp(row["createdAt"])
return rows
def get_all_notes_by_project_id(tenant_id, project_id, user_id, data: schemas.SearchNoteSchema):
with pg_client.PostgresClient() as cur:
# base conditions
conditions = [
"sessions_notes.project_id = %(project_id)s",
"sessions_notes.deleted_at IS NULL"
]
params = {"project_id": project_id, "user_id": user_id, "tenant_id": tenant_id}
# tag conditions
if data.tags:
tag_key = "tag_value"
conditions.append(
sh.multi_conditions(f"%({tag_key})s = sessions_notes.tag", data.tags, value_key=tag_key)
)
params.update(sh.multi_values(data.tags, value_key=tag_key))
# filter by ownership or shared status
if data.shared_only:
conditions.append("sessions_notes.is_public IS TRUE")
elif data.mine_only:
conditions.append("sessions_notes.user_id = %(user_id)s")
else:
conditions.append("(sessions_notes.user_id = %(user_id)s OR sessions_notes.is_public)")
# search condition
if data.search:
conditions.append("sessions_notes.message ILIKE %(search)s")
params["search"] = f"%{data.search}%"
query = f"""
SELECT
COUNT(1) OVER () AS full_count,
sessions_notes.*,
users.name AS user_name
FROM
sessions_notes
INNER JOIN
users USING (user_id)
WHERE
{" AND ".join(conditions)}
ORDER BY
created_at {data.order}
LIMIT
%(limit)s OFFSET %(offset)s;
"""
params.update({
"limit": data.limit,
"offset": data.limit * (data.page - 1)
})
query = cur.mogrify(query, params)
logger.debug(query)
cur.execute(query)
rows = cur.fetchall()
result = {"count": 0, "notes": helper.list_to_camel_case(rows)}
if rows:
result["count"] = rows[0]["fullCount"]
for row in rows:
row["createdAt"] = TimeUTC.datetime_to_timestamp(row["createdAt"])
row.pop("fullCount")
return result
def create(tenant_id, user_id, project_id, session_id, data: schemas.SessionNoteSchema):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(f"""INSERT INTO public.sessions_notes (message, user_id, tag, session_id, project_id, timestamp, is_public, thumbnail, start_at, end_at)
VALUES (%(message)s, %(user_id)s, %(tag)s, %(session_id)s, %(project_id)s, %(timestamp)s, %(is_public)s, %(thumbnail)s, %(start_at)s, %(end_at)s)
RETURNING *,(SELECT name FROM users WHERE users.user_id=%(user_id)s) AS user_name;""",
{"user_id": user_id, "project_id": project_id, "session_id": session_id,
**data.model_dump()})
cur.execute(query)
result = helper.dict_to_camel_case(cur.fetchone())
if result:
result["createdAt"] = TimeUTC.datetime_to_timestamp(result["createdAt"])
return result
def edit(tenant_id, user_id, project_id, note_id, data: schemas.SessionUpdateNoteSchema):
sub_query = []
if data.message is not None:
sub_query.append("message = %(message)s")
if data.tag is not None and len(data.tag) > 0:
sub_query.append("tag = %(tag)s")
if data.is_public is not None:
sub_query.append("is_public = %(is_public)s")
if data.timestamp is not None:
sub_query.append("timestamp = %(timestamp)s")
sub_query.append("updated_at = timezone('utc'::text, now())")
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(f"""UPDATE public.sessions_notes
SET
{" ,".join(sub_query)}
WHERE
project_id = %(project_id)s
AND user_id = %(user_id)s
AND note_id = %(note_id)s
AND deleted_at ISNULL
RETURNING *,(SELECT name FROM users WHERE users.user_id=%(user_id)s) AS user_name;""",
{"project_id": project_id, "user_id": user_id, "note_id": note_id, **data.model_dump()})
)
row = helper.dict_to_camel_case(cur.fetchone())
if row:
row["createdAt"] = TimeUTC.datetime_to_timestamp(row["createdAt"])
return row
return {"errors": ["Note not found"]}
def delete(project_id, note_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(""" UPDATE public.sessions_notes
SET deleted_at = timezone('utc'::text, now())
WHERE note_id = %(note_id)s
AND project_id = %(project_id)s
AND deleted_at ISNULL;""",
{"project_id": project_id, "note_id": note_id})
)
return {"data": {"state": "success"}}
def share_to_slack(tenant_id, user_id, project_id, note_id, webhook_id):
note = get_note(tenant_id=tenant_id, project_id=project_id, user_id=user_id, note_id=note_id, share=user_id)
if note is None:
return {"errors": ["Note not found"]}
session_url = urljoin(config('SITE_URL'), f"{note['projectId']}/session/{note['sessionId']}?note={note['noteId']}")
if note["timestamp"] > 0:
session_url += f"&jumpto={note['timestamp']}"
title = f"<{session_url}|Note for session {note['sessionId']}>"
blocks = [{"type": "section",
"fields": [{"type": "mrkdwn",
"text": title}]},
{"type": "section",
"fields": [{"type": "plain_text",
"text": note["message"]}]}]
if note["tag"]:
blocks.append({"type": "context",
"elements": [{"type": "plain_text",
"text": f"Tag: *{note['tag']}*"}]})
bottom = f"Created by {note['userName'].capitalize()}"
if user_id != note["userId"]:
bottom += f"\nSent by {note['shareName']}: "
blocks.append({"type": "context",
"elements": [{"type": "plain_text",
"text": bottom}]})
return Slack.send_raw(
tenant_id=tenant_id,
webhook_id=webhook_id,
body={"blocks": blocks}
)
def share_to_msteams(tenant_id, user_id, project_id, note_id, webhook_id):
note = get_note(tenant_id=tenant_id, project_id=project_id, user_id=user_id, note_id=note_id, share=user_id)
if note is None:
return {"errors": ["Note not found"]}
session_url = urljoin(config('SITE_URL'), f"{note['projectId']}/session/{note['sessionId']}?note={note['noteId']}")
if note["timestamp"] > 0:
session_url += f"&jumpto={note['timestamp']}"
title = f"[Note for session {note['sessionId']}]({session_url})"
blocks = [{
"type": "TextBlock",
"text": title,
"style": "heading",
"size": "Large"
},
{
"type": "TextBlock",
"spacing": "Small",
"text": note["message"]
}
]
if note["tag"]:
blocks.append({"type": "TextBlock",
"spacing": "Small",
"text": f"Tag: *{note['tag']}*",
"size": "Small"})
bottom = f"Created by {note['userName'].capitalize()}"
if user_id != note["userId"]:
bottom += f"\nSent by {note['shareName']}: "
blocks.append({"type": "TextBlock",
"spacing": "Default",
"text": bottom,
"size": "Small",
"fontType": "Monospace"})
return MSTeams.send_raw(
tenant_id=tenant_id,
webhook_id=webhook_id,
body={"type": "message",
"attachments": [
{"contentType": "application/vnd.microsoft.card.adaptive",
"contentUrl": None,
"content": {
"$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
"type": "AdaptiveCard",
"version": "1.5",
"body": [{
"type": "ColumnSet",
"style": "emphasis",
"separator": True,
"bleed": True,
"columns": [{"width": "stretch",
"items": blocks,
"type": "Column"}]
}]}}
]})


@ -2,7 +2,8 @@ import logging
from typing import List, Union
import schemas
from chalicelib.core import events, metadata
from chalicelib.core.events import events
from chalicelib.core import metadata
from . import performance_event
from chalicelib.utils import pg_client, helper, metrics_helper
from chalicelib.utils import sql_helper as sh
@ -143,7 +144,7 @@ def search2_table(data: schemas.SessionsSearchPayloadSchema, project_id: int, de
for e in data.events:
if e.type == schemas.EventType.LOCATION:
if e.operator not in extra_conditions:
extra_conditions[e.operator] = schemas.SessionSearchEventSchema2.model_validate({
extra_conditions[e.operator] = schemas.SessionSearchEventSchema.model_validate({
"type": e.type,
"isEvent": True,
"value": [],
@ -160,7 +161,7 @@ def search2_table(data: schemas.SessionsSearchPayloadSchema, project_id: int, de
for e in data.events:
if e.type == schemas.EventType.REQUEST_DETAILS:
if e.operator not in extra_conditions:
extra_conditions[e.operator] = schemas.SessionSearchEventSchema2.model_validate({
extra_conditions[e.operator] = schemas.SessionSearchEventSchema.model_validate({
"type": e.type,
"isEvent": True,
"value": [],
@ -273,7 +274,7 @@ def search2_table(data: schemas.SessionsSearchPayloadSchema, project_id: int, de
return sessions
def __is_valid_event(is_any: bool, event: schemas.SessionSearchEventSchema2):
def __is_valid_event(is_any: bool, event: schemas.SessionSearchEventSchema):
return not (not is_any and len(event.value) == 0 and event.type not in [schemas.EventType.REQUEST_DETAILS,
schemas.EventType.GRAPHQL] \
or event.type in [schemas.PerformanceEventType.LOCATION_DOM_COMPLETE,
@ -439,7 +440,7 @@ def search_query_parts(data: schemas.SessionsSearchPayloadSchema, error_status,
extra_constraints.append(
sh.multi_conditions(f"s.base_referrer {op} %({f_k})s", f.value, is_not=is_not,
value_key=f_k))
elif filter_type == events.EventType.METADATA.ui_type:
elif filter_type == schemas.FilterType.METADATA:
# get metadata list only if you need it
if meta_keys is None:
meta_keys = metadata.get(project_id=project_id)
@ -580,36 +581,36 @@ def search_query_parts(data: schemas.SessionsSearchPayloadSchema, error_status,
**sh.multi_values(event.value, value_key=e_k),
**sh.multi_values(event.source, value_key=s_k)}
if event_type == events.EventType.CLICK.ui_type:
if event_type == schemas.EventType.CLICK:
if platform == "web":
event_from = event_from % f"{events.EventType.CLICK.table} AS main "
event_from = event_from % f"events.clicks AS main "
if not is_any:
if schemas.ClickEventExtraOperator.has_value(event.operator):
event_where.append(
sh.multi_conditions(f"main.selector {op} %({e_k})s", event.value, value_key=e_k))
else:
event_where.append(
sh.multi_conditions(f"main.{events.EventType.CLICK.column} {op} %({e_k})s", event.value,
sh.multi_conditions(f"main.label {op} %({e_k})s", event.value,
value_key=e_k))
else:
event_from = event_from % f"{events.EventType.CLICK_MOBILE.table} AS main "
event_from = event_from % f"events_ios.taps AS main "
if not is_any:
event_where.append(
sh.multi_conditions(f"main.{events.EventType.CLICK_MOBILE.column} {op} %({e_k})s",
sh.multi_conditions(f"main.label {op} %({e_k})s",
event.value,
value_key=e_k))
elif event_type == events.EventType.TAG.ui_type:
event_from = event_from % f"{events.EventType.TAG.table} AS main "
elif event_type == schemas.EventType.TAG:
event_from = event_from % f"events.tags AS main "
if not is_any:
event_where.append(
sh.multi_conditions(f"main.tag_id = %({e_k})s", event.value, value_key=e_k))
elif event_type == events.EventType.INPUT.ui_type:
elif event_type == schemas.EventType.INPUT:
if platform == "web":
event_from = event_from % f"{events.EventType.INPUT.table} AS main "
event_from = event_from % f"events.inputs AS main "
if not is_any:
event_where.append(
sh.multi_conditions(f"main.{events.EventType.INPUT.column} {op} %({e_k})s", event.value,
sh.multi_conditions(f"main.label {op} %({e_k})s", event.value,
value_key=e_k))
if event.source is not None and len(event.source) > 0:
event_where.append(sh.multi_conditions(f"main.value ILIKE %(custom{i})s", event.source,
@ -617,53 +618,53 @@ def search_query_parts(data: schemas.SessionsSearchPayloadSchema, error_status,
full_args = {**full_args, **sh.multi_values(event.source, value_key=f"custom{i}")}
else:
event_from = event_from % f"{events.EventType.INPUT_MOBILE.table} AS main "
event_from = event_from % f"events_ios.inputs AS main "
if not is_any:
event_where.append(
sh.multi_conditions(f"main.{events.EventType.INPUT_MOBILE.column} {op} %({e_k})s",
sh.multi_conditions(f"main.label {op} %({e_k})s",
event.value,
value_key=e_k))
elif event_type == events.EventType.LOCATION.ui_type:
elif event_type == schemas.EventType.LOCATION:
if platform == "web":
event_from = event_from % f"{events.EventType.LOCATION.table} AS main "
event_from = event_from % f"events.pages AS main "
if not is_any:
event_where.append(
sh.multi_conditions(f"main.{events.EventType.LOCATION.column} {op} %({e_k})s",
sh.multi_conditions(f"main.path {op} %({e_k})s",
event.value, value_key=e_k))
else:
event_from = event_from % f"{events.EventType.VIEW_MOBILE.table} AS main "
event_from = event_from % f"events_ios.views AS main "
if not is_any:
event_where.append(
sh.multi_conditions(f"main.{events.EventType.VIEW_MOBILE.column} {op} %({e_k})s",
sh.multi_conditions(f"main.name {op} %({e_k})s",
event.value, value_key=e_k))
elif event_type == events.EventType.CUSTOM.ui_type:
event_from = event_from % f"{events.EventType.CUSTOM.table} AS main "
elif event_type == schemas.EventType.CUSTOM:
event_from = event_from % f"events_common.customs AS main "
if not is_any:
event_where.append(
sh.multi_conditions(f"main.{events.EventType.CUSTOM.column} {op} %({e_k})s", event.value,
sh.multi_conditions(f"main.name {op} %({e_k})s", event.value,
value_key=e_k))
elif event_type == events.EventType.REQUEST.ui_type:
event_from = event_from % f"{events.EventType.REQUEST.table} AS main "
elif event_type == schemas.EventType.REQUEST:
event_from = event_from % f"events_common.requests AS main "
if not is_any:
event_where.append(
sh.multi_conditions(f"main.{events.EventType.REQUEST.column} {op} %({e_k})s", event.value,
sh.multi_conditions(f"main.path {op} %({e_k})s", event.value,
value_key=e_k))
# elif event_type == events.event_type.GRAPHQL.ui_type:
# elif event_type == schemas.event_type.GRAPHQL:
# event_from = event_from % f"{events.event_type.GRAPHQL.table} AS main "
# if not is_any:
# event_where.append(
# _multiple_conditions(f"main.{events.event_type.GRAPHQL.column} {op} %({e_k})s", event.value,
# value_key=e_k))
elif event_type == events.EventType.STATEACTION.ui_type:
event_from = event_from % f"{events.EventType.STATEACTION.table} AS main "
elif event_type == schemas.EventType.STATE_ACTION:
event_from = event_from % f"events.state_actions AS main "
if not is_any:
event_where.append(
sh.multi_conditions(f"main.{events.EventType.STATEACTION.column} {op} %({e_k})s",
sh.multi_conditions(f"main.name {op} %({e_k})s",
event.value, value_key=e_k))
elif event_type == events.EventType.ERROR.ui_type:
event_from = event_from % f"{events.EventType.ERROR.table} AS main INNER JOIN public.errors AS main1 USING(error_id)"
elif event_type == schemas.EventType.ERROR:
event_from = event_from % f"events.errors AS main INNER JOIN public.errors AS main1 USING(error_id)"
event.source = list(set(event.source))
if not is_any and event.value not in [None, "*", ""]:
event_where.append(
@ -674,59 +675,59 @@ def search_query_parts(data: schemas.SessionsSearchPayloadSchema, error_status,
# ----- Mobile
elif event_type == events.EventType.CLICK_MOBILE.ui_type:
event_from = event_from % f"{events.EventType.CLICK_MOBILE.table} AS main "
elif event_type == schemas.EventType.CLICK_MOBILE:
event_from = event_from % f"events_ios.taps AS main "
if not is_any:
event_where.append(
sh.multi_conditions(f"main.{events.EventType.CLICK_MOBILE.column} {op} %({e_k})s",
sh.multi_conditions(f"main.label {op} %({e_k})s",
event.value, value_key=e_k))
elif event_type == events.EventType.INPUT_MOBILE.ui_type:
event_from = event_from % f"{events.EventType.INPUT_MOBILE.table} AS main "
elif event_type == schemas.EventType.INPUT_MOBILE:
event_from = event_from % f"events_ios.inputs AS main "
if not is_any:
event_where.append(
sh.multi_conditions(f"main.{events.EventType.INPUT_MOBILE.column} {op} %({e_k})s",
sh.multi_conditions(f"main.label {op} %({e_k})s",
event.value, value_key=e_k))
if event.source is not None and len(event.source) > 0:
event_where.append(sh.multi_conditions(f"main.value ILIKE %(custom{i})s", event.source,
value_key="custom{i}"))
full_args = {**full_args, **sh.multi_values(event.source, f"custom{i}")}
elif event_type == events.EventType.VIEW_MOBILE.ui_type:
event_from = event_from % f"{events.EventType.VIEW_MOBILE.table} AS main "
elif event_type == schemas.EventType.VIEW_MOBILE:
event_from = event_from % f"events_ios.views AS main "
if not is_any:
event_where.append(
sh.multi_conditions(f"main.{events.EventType.VIEW_MOBILE.column} {op} %({e_k})s",
sh.multi_conditions(f"main.name {op} %({e_k})s",
event.value, value_key=e_k))
elif event_type == events.EventType.CUSTOM_MOBILE.ui_type:
event_from = event_from % f"{events.EventType.CUSTOM_MOBILE.table} AS main "
elif event_type == schemas.EventType.CUSTOM_MOBILE:
event_from = event_from % f"events_common.customs AS main "
if not is_any:
event_where.append(
sh.multi_conditions(f"main.{events.EventType.CUSTOM_MOBILE.column} {op} %({e_k})s",
sh.multi_conditions(f"main.name {op} %({e_k})s",
event.value, value_key=e_k))
elif event_type == events.EventType.REQUEST_MOBILE.ui_type:
event_from = event_from % f"{events.EventType.REQUEST_MOBILE.table} AS main "
elif event_type == schemas.EventType.REQUEST_MOBILE:
event_from = event_from % f"events_common.requests AS main "
if not is_any:
event_where.append(
sh.multi_conditions(f"main.{events.EventType.REQUEST_MOBILE.column} {op} %({e_k})s",
sh.multi_conditions(f"main.path {op} %({e_k})s",
event.value, value_key=e_k))
elif event_type == events.EventType.CRASH_MOBILE.ui_type:
event_from = event_from % f"{events.EventType.CRASH_MOBILE.table} AS main INNER JOIN public.crashes_ios AS main1 USING(crash_ios_id)"
elif event_type == schemas.EventType.ERROR_MOBILE:
event_from = event_from % f"events_common.crashes AS main INNER JOIN public.crashes_ios AS main1 USING(crash_ios_id)"
if not is_any and event.value not in [None, "*", ""]:
event_where.append(
sh.multi_conditions(f"(main1.reason {op} %({e_k})s OR main1.name {op} %({e_k})s)",
event.value, value_key=e_k))
elif event_type == events.EventType.SWIPE_MOBILE.ui_type and platform != "web":
event_from = event_from % f"{events.EventType.SWIPE_MOBILE.table} AS main "
elif event_type == schemas.EventType.SWIPE_MOBILE and platform != "web":
event_from = event_from % f"events_ios.swipes AS main "
if not is_any:
event_where.append(
sh.multi_conditions(f"main.{events.EventType.SWIPE_MOBILE.column} {op} %({e_k})s",
sh.multi_conditions(f"main.label {op} %({e_k})s",
event.value, value_key=e_k))
elif event_type == schemas.PerformanceEventType.FETCH_FAILED:
event_from = event_from % f"{events.EventType.REQUEST.table} AS main "
event_from = event_from % f"events_common.requests AS main "
if not is_any:
event_where.append(
sh.multi_conditions(f"main.{events.EventType.REQUEST.column} {op} %({e_k})s",
sh.multi_conditions(f"main.path {op} %({e_k})s",
event.value, value_key=e_k))
col = performance_event.get_col(event_type)
colname = col["column"]
@ -751,7 +752,7 @@ def search_query_parts(data: schemas.SessionsSearchPayloadSchema, error_status,
schemas.PerformanceEventType.LOCATION_AVG_CPU_LOAD,
schemas.PerformanceEventType.LOCATION_AVG_MEMORY_USAGE
]:
event_from = event_from % f"{events.EventType.LOCATION.table} AS main "
event_from = event_from % f"events.pages AS main "
col = performance_event.get_col(event_type)
colname = col["column"]
tname = "main"
@ -762,7 +763,7 @@ def search_query_parts(data: schemas.SessionsSearchPayloadSchema, error_status,
f"{tname}.timestamp <= %(endDate)s"]
if not is_any:
event_where.append(
sh.multi_conditions(f"main.{events.EventType.LOCATION.column} {op} %({e_k})s",
sh.multi_conditions(f"main.path {op} %({e_k})s",
event.value, value_key=e_k))
e_k += "_custom"
full_args = {**full_args, **sh.multi_values(event.source, value_key=e_k)}
@ -772,7 +773,7 @@ def search_query_parts(data: schemas.SessionsSearchPayloadSchema, error_status,
event.source, value_key=e_k))
elif event_type == schemas.EventType.REQUEST_DETAILS:
event_from = event_from % f"{events.EventType.REQUEST.table} AS main "
event_from = event_from % f"events_common.requests AS main "
apply = False
for j, f in enumerate(event.filters):
is_any = sh.isAny_opreator(f.operator)
@ -784,7 +785,7 @@ def search_query_parts(data: schemas.SessionsSearchPayloadSchema, error_status,
full_args = {**full_args, **sh.multi_values(f.value, value_key=e_k_f)}
if f.type == schemas.FetchFilterType.FETCH_URL:
event_where.append(
sh.multi_conditions(f"main.{events.EventType.REQUEST.column} {op} %({e_k_f})s::text",
sh.multi_conditions(f"main.path {op} %({e_k_f})s::text",
f.value, value_key=e_k_f))
apply = True
elif f.type == schemas.FetchFilterType.FETCH_STATUS_CODE:
@ -816,7 +817,7 @@ def search_query_parts(data: schemas.SessionsSearchPayloadSchema, error_status,
if not apply:
continue
elif event_type == schemas.EventType.GRAPHQL:
event_from = event_from % f"{events.EventType.GRAPHQL.table} AS main "
event_from = event_from % f"events.graphql AS main "
for j, f in enumerate(event.filters):
is_any = sh.isAny_opreator(f.operator)
if is_any or len(f.value) == 0:
@ -827,7 +828,7 @@ def search_query_parts(data: schemas.SessionsSearchPayloadSchema, error_status,
full_args = {**full_args, **sh.multi_values(f.value, value_key=e_k_f)}
if f.type == schemas.GraphqlFilterType.GRAPHQL_NAME:
event_where.append(
sh.multi_conditions(f"main.{events.EventType.GRAPHQL.column} {op} %({e_k_f})s", f.value,
sh.multi_conditions(f"main.name {op} %({e_k_f})s", f.value,
value_key=e_k_f))
elif f.type == schemas.GraphqlFilterType.GRAPHQL_METHOD:
event_where.append(
@ -908,7 +909,7 @@ def search_query_parts(data: schemas.SessionsSearchPayloadSchema, error_status,
# b"s.user_os in ('Chrome OS','Fedora','Firefox OS','Linux','Mac OS X','Ubuntu','Windows')")
if errors_only:
extra_from += f" INNER JOIN {events.EventType.ERROR.table} AS er USING (session_id) INNER JOIN public.errors AS ser USING (error_id)"
extra_from += f" INNER JOIN events.errors AS er USING (session_id) INNER JOIN public.errors AS ser USING (error_id)"
extra_constraints.append("ser.source = 'js_exception'")
extra_constraints.append("ser.project_id = %(project_id)s")
# if error_status != schemas.ErrorStatus.all:
@ -984,9 +985,9 @@ def search_query_parts(data: schemas.SessionsSearchPayloadSchema, error_status,
c.value = helper.values_for_operator(value=c.value, op=c.operator)
full_args = {**full_args,
**sh.multi_values(c.value, value_key=e_k)}
if c.type == events.EventType.LOCATION.ui_type:
if c.type == schemas.EventType.LOCATION:
_extra_or_condition.append(
sh.multi_conditions(f"ev.{events.EventType.LOCATION.column} {op} %({e_k})s",
sh.multi_conditions(f"ev.path {op} %({e_k})s",
c.value, value_key=e_k))
else:
logger.warning(f"unsupported extra_event type:${c.type}")
@ -1044,18 +1045,15 @@ def get_user_sessions(project_id, user_id, start_date, end_date):
def get_session_user(project_id, user_id):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""\
SELECT
user_id,
count(*) as session_count,
max(start_ts) as last_seen,
min(start_ts) as first_seen
FROM
"public".sessions
WHERE
project_id = %(project_id)s
AND user_id = %(userId)s
AND duration is not null
""" \
SELECT user_id,
count(*) as session_count,
max(start_ts) as last_seen,
min(start_ts) as first_seen
FROM "public".sessions
WHERE project_id = %(project_id)s
AND user_id = %(userId)s
AND duration is not null
GROUP BY user_id;
""",
{"project_id": project_id, "userId": user_id}
@ -1074,11 +1072,10 @@ def count_all():
def session_exists(project_id, session_id):
with pg_client.PostgresClient() as cur:
query = cur.mogrify("""SELECT 1
FROM public.sessions
WHERE session_id=%(session_id)s
AND project_id=%(project_id)s
LIMIT 1;""",
query = cur.mogrify("""SELECT 1
FROM public.sessions
WHERE session_id = %(session_id)s
AND project_id = %(project_id)s LIMIT 1;""",
{"project_id": project_id, "session_id": session_id})
cur.execute(query)
row = cur.fetchone()


@ -1,6 +1,7 @@
import schemas
from chalicelib.core import events, metadata, events_mobile, \
issues, assist, canvas, user_testing
from chalicelib.core import metadata, assist, canvas, user_testing
from chalicelib.core.issues import issues
from chalicelib.core.events import events, events_mobile
from . import sessions_mobs, sessions_devtool
from chalicelib.core.errors.modules import errors_helper
from chalicelib.utils import pg_client, helper
@ -128,30 +129,8 @@ def get_events(project_id, session_id):
data['userTesting'] = user_testing.get_test_signals(session_id=session_id, project_id=project_id)
data['issues'] = issues.get_by_session_id(session_id=session_id, project_id=project_id)
data['issues'] = reduce_issues(data['issues'])
data['issues'] = issues.reduce_issues(data['issues'])
data['incidents'] = events.get_incidents_by_session_id(session_id=session_id, project_id=project_id)
return data
else:
return None
# To reduce the number of issues in the replay;
# will be removed once we agree on how to show issues
def reduce_issues(issues_list):
if issues_list is None:
return None
i = 0
# remove same-type issues if the time between them is <2s
while i < len(issues_list) - 1:
for j in range(i + 1, len(issues_list)):
if issues_list[i]["type"] == issues_list[j]["type"]:
break
else:
i += 1
break
if issues_list[i]["timestamp"] - issues_list[j]["timestamp"] < 2000:
issues_list.pop(j)
else:
i += 1
return issues_list


@ -1,10 +1,10 @@
import ast
import json
import logging
import schemas
from chalicelib.core import metadata, projects
from chalicelib.core import metadata
from chalicelib.utils import helper, ch_client, exp_ch_helper
from . import sessions_favorite, sessions_search_legacy, sessions_ch as sessions, sessions_legacy_mobil
from chalicelib.utils import pg_client, helper, ch_client, exp_ch_helper
logger = logging.getLogger(__name__)
@ -57,11 +57,15 @@ SESSION_PROJECTION_COLS_CH_MAP = """\
"""
def __parse_metadata(metadata_map):
return json.loads(metadata_map.replace("'", '"').replace("NULL", 'null'))
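__parse_metadata normalizes the ClickHouse textual rendering of the metadata map into valid JSON before parsing; note the simple quote replacement assumes values contain no quotes of their own. An illustrative round-trip (hypothetical input shape):
# ClickHouse renders the map roughly as {'plan': 'pro', 'region': NULL},
# which json.loads rejects until quotes and the NULL literal are rewritten.
__parse_metadata("{'plan': 'pro', 'region': NULL}")
# -> {'plan': 'pro', 'region': None}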
# This function executes the query and returns the result
def search_sessions(data: schemas.SessionsSearchPayloadSchema, project: schemas.ProjectContext,
user_id, errors_only=False,
error_status=schemas.ErrorStatus.ALL, count_only=False, issue=None, ids_only=False,
platform="web"):
user_id, errors_only=False, error_status=schemas.ErrorStatus.ALL,
count_only=False, issue=None, ids_only=False):
platform = project.platform
if data.bookmarked:
data.startTimestamp, data.endTimestamp = sessions_favorite.get_start_end_timestamp(project.project_id, user_id)
if data.startTimestamp is None:
@ -69,7 +73,7 @@ def search_sessions(data: schemas.SessionsSearchPayloadSchema, project: schemas.
return {
'total': 0,
'sessions': [],
'src': 2
'_src': 2
}
if project.platform == "web":
full_args, query_part = sessions.search_query_parts_ch(data=data, error_status=error_status,
@ -123,7 +127,8 @@ def search_sessions(data: schemas.SessionsSearchPayloadSchema, project: schemas.
meta_keys = metadata.get(project_id=project.project_id)
meta_map = ",map(%s) AS 'metadata'" \
% ','.join([f"'{m['key']}',coalesce(metadata_{m['index']},'None')" for m in meta_keys])
% ','.join(
[f"'{m['key']}',coalesce(metadata_{m['index']},CAST(NULL AS Nullable(String)))" for m in meta_keys])
main_query = cur.mogrify(f"""SELECT COUNT(*) AS count,
COALESCE(JSONB_AGG(users_sessions)
FILTER (WHERE rn>%(sessions_limit_s)s AND rn<=%(sessions_limit_e)s), '[]'::JSONB) AS sessions
@ -141,7 +146,7 @@ def search_sessions(data: schemas.SessionsSearchPayloadSchema, project: schemas.
) AS users_sessions;""",
full_args)
elif ids_only:
main_query = cur.format(query=f"""SELECT DISTINCT ON(s.session_id) s.session_id
main_query = cur.format(query=f"""SELECT DISTINCT ON(s.session_id) s.session_id AS session_id
{query_part}
ORDER BY s.session_id desc
LIMIT %(sessions_limit)s OFFSET %(sessions_limit_s)s;""",
@ -158,7 +163,8 @@ def search_sessions(data: schemas.SessionsSearchPayloadSchema, project: schemas.
meta_keys = metadata.get(project_id=project.project_id)
meta_map = ",'metadata',toString(map(%s))" \
% ','.join([f"'{m['key']}',coalesce(metadata_{m['index']},'None')" for m in meta_keys])
% ','.join(
[f"'{m['key']}',coalesce(metadata_{m['index']},CAST(NULL AS Nullable(String)))" for m in meta_keys])
main_query = cur.format(query=f"""SELECT any(total) AS count,
groupArray(%(sessions_limit)s)(details) AS sessions
FROM (SELECT total, details
@ -175,11 +181,11 @@ def search_sessions(data: schemas.SessionsSearchPayloadSchema, project: schemas.
ORDER BY sort_key {data.order}
LIMIT %(sessions_limit)s OFFSET %(sessions_limit_s)s) AS sorted_sessions;""",
parameters=full_args)
logging.debug("--------------------")
logging.debug(main_query)
logging.debug("--------------------")
try:
logging.debug("--------------------")
sessions_list = cur.execute(main_query)
logging.debug("--------------------")
except Exception as err:
logging.warning("--------- SESSIONS-CH SEARCH QUERY EXCEPTION -----------")
logging.warning(main_query)
@ -200,83 +206,24 @@ def search_sessions(data: schemas.SessionsSearchPayloadSchema, project: schemas.
for i, s in enumerate(sessions_list):
sessions_list[i] = {**s.pop("last_session")[0], **s}
sessions_list[i].pop("rn")
sessions_list[i]["metadata"] = ast.literal_eval(sessions_list[i]["metadata"])
sessions_list[i]["metadata"] = __parse_metadata(sessions_list[i]["metadata"])
else:
import json
for i in range(len(sessions_list)):
sessions_list[i]["metadata"] = ast.literal_eval(sessions_list[i]["metadata"])
sessions_list[i] = schemas.SessionModel.parse_obj(helper.dict_to_camel_case(sessions_list[i]))
sessions_list[i]["metadata"] = __parse_metadata(sessions_list[i]["metadata"])
sessions_list[i] = schemas.SessionModel.model_validate(helper.dict_to_camel_case(sessions_list[i]))
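# Note: model_validate is the Pydantic v2 replacement for the v1
# parse_obj classmethod; for dict input the two behave the same, so this
# is a mechanical part of the v2 migration.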
return {
'total': total,
'sessions': sessions_list,
'src': 2
'_src': 2
}
def search_by_metadata(tenant_id, user_id, m_key, m_value, project_id=None):
if project_id is None:
all_projects = projects.get_projects(tenant_id=tenant_id)
else:
all_projects = [
projects.get_project(tenant_id=tenant_id, project_id=int(project_id), include_last_session=False,
include_gdpr=False)]
all_projects = {int(p["projectId"]): p["name"] for p in all_projects}
project_ids = list(all_projects.keys())
available_keys = metadata.get_keys_by_projects(project_ids)
for i in available_keys:
available_keys[i]["user_id"] = schemas.FilterType.USER_ID
available_keys[i]["user_anonymous_id"] = schemas.FilterType.USER_ANONYMOUS_ID
results = {}
for i in project_ids:
if m_key not in available_keys[i].values():
available_keys.pop(i)
results[i] = {"total": 0, "sessions": [], "missingMetadata": True}
project_ids = list(available_keys.keys())
if len(project_ids) > 0:
with pg_client.PostgresClient() as cur:
sub_queries = []
for i in project_ids:
col_name = list(available_keys[i].keys())[list(available_keys[i].values()).index(m_key)]
sub_queries.append(cur.mogrify(
f"(SELECT COALESCE(COUNT(s.*)) AS count FROM public.sessions AS s WHERE s.project_id = %(id)s AND s.{col_name} = %(value)s) AS \"{i}\"",
{"id": i, "value": m_value}).decode('UTF-8'))
query = f"""SELECT {", ".join(sub_queries)};"""
cur.execute(query=query)
rows = cur.fetchone()
sub_queries = []
for i in rows.keys():
results[i] = {"total": rows[i], "sessions": [], "missingMetadata": False, "name": all_projects[int(i)]}
if rows[i] > 0:
col_name = list(available_keys[int(i)].keys())[list(available_keys[int(i)].values()).index(m_key)]
sub_queries.append(
cur.mogrify(
f"""(
SELECT *
FROM (
SELECT DISTINCT ON(favorite_sessions.session_id, s.session_id) {SESSION_PROJECTION_COLS_CH}
FROM public.sessions AS s LEFT JOIN (SELECT session_id
FROM public.user_favorite_sessions
WHERE user_favorite_sessions.user_id = %(userId)s
) AS favorite_sessions USING (session_id)
WHERE s.project_id = %(id)s AND s.duration IS NOT NULL AND s.{col_name} = %(value)s
) AS full_sessions
ORDER BY favorite DESC, issue_score DESC
LIMIT 10
)""",
{"id": i, "value": m_value, "userId": user_id}).decode('UTF-8'))
if len(sub_queries) > 0:
cur.execute("\nUNION\n".join(sub_queries))
rows = cur.fetchall()
for i in rows:
results[str(i["project_id"])]["sessions"].append(helper.dict_to_camel_case(i))
return results
return sessions_search_legacy.search_by_metadata(tenant_id, user_id, m_key, m_value, project_id)
# TODO: rewrite this function to use ClickHouse
def search_sessions_by_ids(project_id: int, session_ids: list, sort_by: str = 'session_id',
ascending: bool = False) -> dict:
return sessions_search_legacy.search_sessions_by_ids(project_id, session_ids, sort_by, ascending)


@ -40,7 +40,8 @@ COALESCE((SELECT TRUE
# This function executes the query and returns the result
def search_sessions(data: schemas.SessionsSearchPayloadSchema, project: schemas.ProjectContext,
user_id, errors_only=False, error_status=schemas.ErrorStatus.ALL,
count_only=False, issue=None, ids_only=False, platform="web"):
count_only=False, issue=None, ids_only=False):
platform = project.platform
if data.bookmarked:
data.startTimestamp, data.endTimestamp = sessions_favorite.get_start_end_timestamp(project.project_id, user_id)
if data.startTimestamp is None:
@ -48,7 +49,7 @@ def search_sessions(data: schemas.SessionsSearchPayloadSchema, project: schemas.
return {
'total': 0,
'sessions': [],
'src': 1
'_src': 1
}
full_args, query_part = sessions_legacy.search_query_parts(data=data, error_status=error_status,
errors_only=errors_only,
@ -122,7 +123,10 @@ def search_sessions(data: schemas.SessionsSearchPayloadSchema, project: schemas.
sort = 'session_id'
if data.sort is not None and data.sort != "session_id":
# sort += " " + data.order + "," + helper.key_to_snake_case(data.sort)
sort = helper.key_to_snake_case(data.sort)
if data.sort == 'datetime':
sort = 'start_ts'
else:
sort = helper.key_to_snake_case(data.sort)
meta_keys = metadata.get(project_id=project.project_id)
main_query = cur.mogrify(f"""SELECT COUNT(full_sessions) AS count,
@@ -173,7 +177,7 @@ def search_sessions(data: schemas.SessionsSearchPayloadSchema, project: schemas.
return {
'total': total,
'sessions': helper.list_to_camel_case(sessions),
'src': 1
'_src': 1
}
@@ -236,6 +240,7 @@ def search_by_metadata(tenant_id, user_id, m_key, m_value, project_id=None):
cur.execute("\nUNION\n".join(sub_queries))
rows = cur.fetchall()
for i in rows:
i["_src"] = 1
results[str(i["project_id"])]["sessions"].append(helper.dict_to_camel_case(i))
return results
@@ -243,7 +248,7 @@ def search_by_metadata(tenant_id, user_id, m_key, m_value, project_id=None):
def search_sessions_by_ids(project_id: int, session_ids: list, sort_by: str = 'session_id',
ascending: bool = False) -> dict:
if session_ids is None or len(session_ids) == 0:
return {"total": 0, "sessions": []}
return {"total": 0, "sessions": [], "_src": 1}
with pg_client.PostgresClient() as cur:
meta_keys = metadata.get(project_id=project_id)
params = {"project_id": project_id, "session_ids": tuple(session_ids)}
@@ -262,4 +267,4 @@ def search_sessions_by_ids(project_id: int, session_ids: list, sort_by: str = 's
s["metadata"] = {}
for m in meta_keys:
s["metadata"][m["key"]] = s.pop(f'metadata_{m["index"]}')
return {"total": len(rows), "sessions": helper.list_to_camel_case(rows)}
return {"total": len(rows), "sessions": helper.list_to_camel_case(rows), "_src": 1}


@@ -1 +1,2 @@
from .sessions_viewed import *
from .sessions_viewed import *
from .sessions_viewed_ch import *


@@ -87,7 +87,7 @@ async def create_tenant(data: schemas.UserSignupSchema):
"spotRefreshToken": r.pop("spotRefreshToken"),
"spotRefreshTokenMaxAge": r.pop("spotRefreshTokenMaxAge"),
'data': {
"scopeState": 0,
"scopeState": 2,
"user": r
}
}


@@ -11,9 +11,3 @@ if smtp.has_smtp():
logger.info("valid SMTP configuration found")
else:
logger.info("no SMTP configuration found or SMTP validation failed")
if config("EXP_CH_DRIVER", cast=bool, default=True):
logging.info(">>> Using new CH driver")
from . import ch_client_exp as ch_client
else:
from . import ch_client


@@ -1,73 +1,185 @@
import logging
import threading
import time
from functools import wraps
from queue import Queue, Empty
import clickhouse_driver
import clickhouse_connect
from clickhouse_connect.driver.query import QueryContext
from decouple import config
logger = logging.getLogger(__name__)
_CH_CONFIG = {"host": config("ch_host"),
"user": config("ch_user", default="default"),
"password": config("ch_password", default=""),
"port": config("ch_port_http", cast=int),
"client_name": config("APP_NAME", default="PY")}
CH_CONFIG = dict(_CH_CONFIG)
settings = {}
if config('ch_timeout', cast=int, default=-1) > 0:
logger.info(f"CH-max_execution_time set to {config('ch_timeout')}s")
logging.info(f"CH-max_execution_time set to {config('ch_timeout')}s")
settings = {**settings, "max_execution_time": config('ch_timeout', cast=int)}
if config('ch_receive_timeout', cast=int, default=-1) > 0:
logger.info(f"CH-receive_timeout set to {config('ch_receive_timeout')}s")
logging.info(f"CH-receive_timeout set to {config('ch_receive_timeout')}s")
settings = {**settings, "receive_timeout": config('ch_receive_timeout', cast=int)}
extra_args = {}
if config("CH_COMPRESSION", cast=bool, default=True):
extra_args["compression"] = "lz4"
def transform_result(self, original_function):
@wraps(original_function)
def wrapper(*args, **kwargs):
if kwargs.get("parameters"):
if config("LOCAL_DEV", cast=bool, default=False):
logger.debug(self.format(query=kwargs.get("query", ""), parameters=kwargs.get("parameters")))
else:
logger.debug(
str.encode(self.format(query=kwargs.get("query", ""), parameters=kwargs.get("parameters"))))
elif len(args) > 0:
if config("LOCAL_DEV", cast=bool, default=False):
logger.debug(args[0])
else:
logger.debug(str.encode(args[0]))
result = original_function(*args, **kwargs)
if isinstance(result, clickhouse_connect.driver.query.QueryResult):
column_names = result.column_names
result = result.result_rows
result = [dict(zip(column_names, row)) for row in result]
return result
return wrapper
class ClickHouseConnectionPool:
def __init__(self, min_size, max_size):
self.min_size = min_size
self.max_size = max_size
self.pool = Queue()
self.lock = threading.Lock()
self.total_connections = 0
# Initialize the pool with min_size connections
for _ in range(self.min_size):
client = clickhouse_connect.get_client(**CH_CONFIG,
database=config("ch_database", default="default"),
settings=settings,
**extra_args)
self.pool.put(client)
self.total_connections += 1
def get_connection(self):
try:
# Try to get a connection without blocking
client = self.pool.get_nowait()
return client
except Empty:
with self.lock:
if self.total_connections < self.max_size:
client = clickhouse_connect.get_client(**CH_CONFIG,
database=config("ch_database", default="default"),
settings=settings,
**extra_args)
self.total_connections += 1
return client
# If max_size reached, wait until a connection is available
client = self.pool.get()
return client
def release_connection(self, client):
self.pool.put(client)
def close_all(self):
with self.lock:
while not self.pool.empty():
client = self.pool.get()
client.close()
self.total_connections = 0
CH_pool: ClickHouseConnectionPool = None
RETRY_MAX = config("CH_RETRY_MAX", cast=int, default=50)
RETRY_INTERVAL = config("CH_RETRY_INTERVAL", cast=int, default=2)
RETRY = 0
def make_pool():
if not config('CH_POOL', cast=bool, default=True):
return
global CH_pool
global RETRY
if CH_pool is not None:
try:
CH_pool.close_all()
except Exception as error:
logger.error("Error while closing all connexions to CH", exc_info=error)
try:
CH_pool = ClickHouseConnectionPool(min_size=config("CH_MINCONN", cast=int, default=4),
max_size=config("CH_MAXCONN", cast=int, default=8))
if CH_pool is not None:
logger.info("Connection pool created successfully for CH")
except ConnectionError as error:
logger.error("Error while connecting to CH", exc_info=error)
if RETRY < RETRY_MAX:
RETRY += 1
logger.info(f"waiting for {RETRY_INTERVAL}s before retry n°{RETRY}")
time.sleep(RETRY_INTERVAL)
make_pool()
else:
raise error
class ClickHouseClient:
__client = None
def __init__(self, database=None):
extra_args = {}
if config("CH_COMPRESSION", cast=bool, default=True):
extra_args["compression"] = "lz4"
self.__client = clickhouse_driver.Client(host=config("ch_host"),
database=database if database else config("ch_database",
default="default"),
user=config("ch_user", default="default"),
password=config("ch_password", default=""),
port=config("ch_port", cast=int),
settings=settings,
**extra_args) \
if self.__client is None else self.__client
if self.__client is None:
if database is not None or not config('CH_POOL', cast=bool, default=True):
self.__client = clickhouse_connect.get_client(**CH_CONFIG,
database=database if database else config("ch_database",
default="default"),
settings=settings,
**extra_args)
else:
self.__client = CH_pool.get_connection()
self.__client.execute = transform_result(self, self.__client.query)
self.__client.format = self.format
def __enter__(self):
return self
def execute(self, query, parameters=None, **args):
try:
results = self.__client.execute(query=query, params=parameters, with_column_types=True, **args)
keys = tuple(x for x, y in results[1])
return [dict(zip(keys, i)) for i in results[0]]
except Exception as err:
logger.error("--------- CH EXCEPTION -----------", exc_info=err)
logger.error("--------- CH QUERY EXCEPTION -----------")
logger.error(self.format(query=query, parameters=parameters)
.replace('\n', '\\n')
.replace('  ', ' ')
.replace('  ', ' '))
logger.error("--------------------")
raise err
def insert(self, query, params=None, **args):
return self.__client.execute(query=query, params=params, **args)
def client(self):
return self.__client
def format(self, query, parameters):
if parameters is None:
return query
return self.__client.substitute_params(query, parameters, self.__client.connection.context)
def format(self, query, parameters=None):
if parameters:
ctx = QueryContext(query=query, parameters=parameters)
return ctx.final_query
return query
def __exit__(self, *args):
pass
if config('CH_POOL', cast=bool, default=True):
CH_pool.release_connection(self.__client)
else:
self.__client.close()
async def init():
logger.info(f">CH_POOL:not defined")
logger.info(f">use CH_POOL:{config('CH_POOL', default=True)}")
if config('CH_POOL', cast=bool, default=True):
make_pool()
async def terminate():
pass
global CH_pool
if CH_pool is not None:
try:
CH_pool.close_all()
logger.info("Closed all connexions to CH")
except Exception as error:
logger.error("Error while closing all connexions to CH", exc_info=error)


@@ -1,177 +0,0 @@
import logging
import threading
import time
from functools import wraps
from queue import Queue, Empty
import clickhouse_connect
from clickhouse_connect.driver.query import QueryContext
from decouple import config
logger = logging.getLogger(__name__)
_CH_CONFIG = {"host": config("ch_host"),
"user": config("ch_user", default="default"),
"password": config("ch_password", default=""),
"port": config("ch_port_http", cast=int),
"client_name": config("APP_NAME", default="PY")}
CH_CONFIG = dict(_CH_CONFIG)
settings = {}
if config('ch_timeout', cast=int, default=-1) > 0:
logging.info(f"CH-max_execution_time set to {config('ch_timeout')}s")
settings = {**settings, "max_execution_time": config('ch_timeout', cast=int)}
if config('ch_receive_timeout', cast=int, default=-1) > 0:
logging.info(f"CH-receive_timeout set to {config('ch_receive_timeout')}s")
settings = {**settings, "receive_timeout": config('ch_receive_timeout', cast=int)}
extra_args = {}
if config("CH_COMPRESSION", cast=bool, default=True):
extra_args["compression"] = "lz4"
def transform_result(self, original_function):
@wraps(original_function)
def wrapper(*args, **kwargs):
logger.debug(str.encode(self.format(query=kwargs.get("query", ""), parameters=kwargs.get("parameters"))))
result = original_function(*args, **kwargs)
if isinstance(result, clickhouse_connect.driver.query.QueryResult):
column_names = result.column_names
result = result.result_rows
result = [dict(zip(column_names, row)) for row in result]
return result
return wrapper
class ClickHouseConnectionPool:
def __init__(self, min_size, max_size):
self.min_size = min_size
self.max_size = max_size
self.pool = Queue()
self.lock = threading.Lock()
self.total_connections = 0
# Initialize the pool with min_size connections
for _ in range(self.min_size):
client = clickhouse_connect.get_client(**CH_CONFIG,
database=config("ch_database", default="default"),
settings=settings,
**extra_args)
self.pool.put(client)
self.total_connections += 1
def get_connection(self):
try:
# Try to get a connection without blocking
client = self.pool.get_nowait()
return client
except Empty:
with self.lock:
if self.total_connections < self.max_size:
client = clickhouse_connect.get_client(**CH_CONFIG,
database=config("ch_database", default="default"),
settings=settings,
**extra_args)
self.total_connections += 1
return client
# If max_size reached, wait until a connection is available
client = self.pool.get()
return client
def release_connection(self, client):
self.pool.put(client)
def close_all(self):
with self.lock:
while not self.pool.empty():
client = self.pool.get()
client.close()
self.total_connections = 0
CH_pool: ClickHouseConnectionPool = None
RETRY_MAX = config("CH_RETRY_MAX", cast=int, default=50)
RETRY_INTERVAL = config("CH_RETRY_INTERVAL", cast=int, default=2)
RETRY = 0
def make_pool():
if not config('CH_POOL', cast=bool, default=True):
return
global CH_pool
global RETRY
if CH_pool is not None:
try:
CH_pool.close_all()
except Exception as error:
logger.error("Error while closing all connexions to CH", exc_info=error)
try:
CH_pool = ClickHouseConnectionPool(min_size=config("CH_MINCONN", cast=int, default=4),
max_size=config("CH_MAXCONN", cast=int, default=8))
if CH_pool is not None:
logger.info("Connection pool created successfully for CH")
except ConnectionError as error:
logger.error("Error while connecting to CH", exc_info=error)
if RETRY < RETRY_MAX:
RETRY += 1
logger.info(f"waiting for {RETRY_INTERVAL}s before retry n°{RETRY}")
time.sleep(RETRY_INTERVAL)
make_pool()
else:
raise error
class ClickHouseClient:
__client = None
def __init__(self, database=None):
if self.__client is None:
if database is not None or not config('CH_POOL', cast=bool, default=True):
self.__client = clickhouse_connect.get_client(**CH_CONFIG,
database=database if database else config("ch_database",
default="default"),
settings=settings,
**extra_args)
else:
self.__client = CH_pool.get_connection()
self.__client.execute = transform_result(self, self.__client.query)
self.__client.format = self.format
def __enter__(self):
return self.__client
def format(self, query, *, parameters=None):
if parameters is None:
return query
return query % {
key: f"'{value}'" if isinstance(value, str) else value
for key, value in parameters.items()
}
def __exit__(self, *args):
if config('CH_POOL', cast=bool, default=True):
CH_pool.release_connection(self.__client)
else:
self.__client.close()
async def init():
logger.info(f">use CH_POOL:{config('CH_POOL', default=True)}")
if config('CH_POOL', cast=bool, default=True):
make_pool()
async def terminate():
global CH_pool
if CH_pool is not None:
try:
CH_pool.close_all()
logger.info("Closed all connexions to CH")
except Exception as error:
logger.error("Error while closing all connexions to CH", exc_info=error)


@@ -1,7 +1,13 @@
from typing import Union
import logging
import re
from typing import Union, Any
import schemas
import logging
from chalicelib.utils import sql_helper as sh
from schemas import SearchEventOperator
import math
import struct
from decimal import Decimal
logger = logging.getLogger(__name__)
@@ -50,7 +56,8 @@ def get_event_type(event_type: Union[schemas.EventType, schemas.PerformanceEvent
schemas.EventType.ERROR: "ERROR",
schemas.PerformanceEventType.LOCATION_AVG_CPU_LOAD: 'PERFORMANCE',
schemas.PerformanceEventType.LOCATION_AVG_MEMORY_USAGE: 'PERFORMANCE',
schemas.FetchFilterType.FETCH_URL: 'REQUEST'
schemas.FetchFilterType.FETCH_URL: 'REQUEST',
schemas.EventType.INCIDENT: "INCIDENT",
}
defs_mobile = {
schemas.EventType.CLICK_MOBILE: "TAP",
@@ -59,10 +66,170 @@ def get_event_type(event_type: Union[schemas.EventType, schemas.PerformanceEvent
schemas.EventType.REQUEST_MOBILE: "REQUEST",
schemas.EventType.ERROR_MOBILE: "CRASH",
schemas.EventType.VIEW_MOBILE: "VIEW",
schemas.EventType.SWIPE_MOBILE: "SWIPE"
schemas.EventType.SWIPE_MOBILE: "SWIPE",
schemas.EventType.INCIDENT: "INCIDENT"
}
if platform != "web" and event_type in defs_mobile:
return defs_mobile.get(event_type)
if event_type not in defs:
raise Exception(f"unsupported EventType:{event_type}")
return defs.get(event_type)
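
# For example (assuming the signature exposes the platform argument used above):
#   get_event_type(schemas.EventType.CLICK_MOBILE, platform="ios") -> "TAP"
#   get_event_type("bogus", platform="web") raises "unsupported EventType:bogus"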
# AI generated
def simplify_clickhouse_type(ch_type: str) -> str:
"""
Simplify a ClickHouse data type name to a broader category like:
int, float, decimal, datetime, string, uuid, enum, array, tuple, map, nested, etc.
"""
# 1) Strip out common wrappers like Nullable(...) or LowCardinality(...)
# Possibly multiple wrappers: e.g. "LowCardinality(Nullable(Int32))"
pattern_wrappers = re.compile(r'(Nullable|LowCardinality)\((.*)\)')
while True:
match = pattern_wrappers.match(ch_type)
if match:
ch_type = match.group(2)
else:
break
# 2) Normalize (lowercase) for easier checks
normalized_type = ch_type.lower()
# 3) Use pattern matching or direct checks for known categories
# (You can adapt this as you see fit for your environment.)
# Integers: Int8, Int16, Int32, Int64, Int128, Int256, UInt8, UInt16, ...
if re.match(r'^(u?int)(8|16|32|64|128|256)$', normalized_type):
return "int"
# Floats: Float32, Float64
if re.match(r'^float(32|64)|double$', normalized_type):
return "float"
# Decimal: Decimal(P, S)
if normalized_type.startswith("decimal"):
# return "decimal"
return "float"
# Date/DateTime: Date, Date32, DateTime, DateTime64
# (the "date" prefix check already covers the "datetime" variants)
if normalized_type.startswith("date"):
return "datetime"
# Strings: String, FixedString(N)
if normalized_type.startswith("string"):
return "string"
if normalized_type.startswith("fixedstring"):
return "string"
# UUID
if normalized_type.startswith("uuid"):
# return "uuid"
return "string"
# Enums: Enum8(...) or Enum16(...)
if normalized_type.startswith("enum8") or normalized_type.startswith("enum16"):
# return "enum"
return "string"
# Arrays: Array(T)
if normalized_type.startswith("array"):
return "array"
# Tuples: Tuple(T1, T2, ...)
if normalized_type.startswith("tuple"):
return "tuple"
# Map(K, V)
if normalized_type.startswith("map"):
return "map"
# Nested(...)
if normalized_type.startswith("nested"):
return "nested"
# If we didn't match above, just return the original type in lowercase
return normalized_type
def simplify_clickhouse_types(ch_types: list[str]) -> list[str]:
"""
Takes a list of ClickHouse types and returns a list of simplified types
by calling `simplify_clickhouse_type` on each.
"""
return list(set([simplify_clickhouse_type(t) for t in ch_types]))
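
# A few sanity checks derived from the rules above, written as asserts so they
# double as quick tests:
assert simplify_clickhouse_type("LowCardinality(Nullable(Int32))") == "int"
assert simplify_clickhouse_type("Decimal(18, 4)") == "float"
assert simplify_clickhouse_type("FixedString(16)") == "string"
# The plural helper de-duplicates, so three inputs collapse to two categories:
assert sorted(simplify_clickhouse_types(["UInt8", "Int64", "String"])) == ["int", "string"]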
def get_sub_condition(col_name: str, val_name: str,
operator: Union[schemas.SearchEventOperator, schemas.MathOperator]) -> str:
if operator == SearchEventOperator.PATTERN:
return f"match({col_name}, %({val_name})s)"
op = sh.get_sql_operator(operator)
return f"{col_name} {op} %({val_name})s"
def get_col_cast(data_type: schemas.PropertyType, value: Any) -> str:
if value is None or len(value) == 0:
return ""
if isinstance(value, list):
value = value[0]
if data_type in (schemas.PropertyType.INT, schemas.PropertyType.FLOAT):
return best_clickhouse_type(value)
return data_type.capitalize()
# (type_name, minimum, maximum) ordered by increasing size
_INT_RANGES = [
("Int8", -128, 127),
("UInt8", 0, 255),
("Int16", -32_768, 32_767),
("UInt16", 0, 65_535),
("Int32", -2_147_483_648, 2_147_483_647),
("UInt32", 0, 4_294_967_295),
("Int64", -9_223_372_036_854_775_808, 9_223_372_036_854_775_807),
("UInt64", 0, 18_446_744_073_709_551_615),
]
def best_clickhouse_type(value):
"""
Return the most compact ClickHouse numeric type that can store *value* losslessly.
"""
# Treat bool like tiny int
if isinstance(value, bool):
value = int(value)
# --- Integers ---
if isinstance(value, int):
for name, lo, hi in _INT_RANGES:
if lo <= value <= hi:
return name
# Beyond UInt64: ClickHouse offers Int128 / Int256 or Decimal
return "Int128"
# --- Decimal.Decimal (exact) ---
if isinstance(value, Decimal):
# ClickHouse Decimal32/64/128 have 9 / 18 / 38 significant digits.
digits = len(value.as_tuple().digits)
if digits <= 9:
return "Decimal32"
elif digits <= 18:
return "Decimal64"
else:
return "Decimal128"
# --- Floats ---
if isinstance(value, float):
if not math.isfinite(value):
return "Float64" # inf / nan → always Float64
# Check if a round-trip through 32-bit float preserves the bit pattern
packed = struct.pack("f", value)
if struct.unpack("f", packed)[0] == value:
return "Float32"
return "Float64"
raise TypeError(f"Unsupported type: {type(value).__name__}")
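
# Illustrative picks, following the ranges and round-trip checks above:
assert best_clickhouse_type(200) == "UInt8"    # too big for Int8, fits UInt8
assert best_clickhouse_type(-1) == "Int8"
assert best_clickhouse_type(0.5) == "Float32"  # survives a 32-bit round trip
assert best_clickhouse_type(0.1) == "Float64"  # 0.1 does not
assert best_clickhouse_type(Decimal("12345678901234567890")) == "Decimal128"  # 20 digits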


@@ -99,6 +99,8 @@ def allow_captcha():
def string_to_sql_like(value):
if value is None:
return None
value = re.sub(' +', ' ', value)
value = value.replace("*", "%")
if value.startswith("^"):
@@ -334,5 +336,3 @@ def cast_session_id_to_string(data):
for key in keys:
data[key] = cast_session_id_to_string(data[key])
return data


@@ -1 +0,0 @@
from .or_cache import CachedResponse


@@ -1,83 +0,0 @@
import functools
import inspect
import json
import logging
from chalicelib.utils import pg_client
import time
from fastapi.encoders import jsonable_encoder
logger = logging.getLogger(__name__)
class CachedResponse:
def __init__(self, table, ttl):
self.table = table
self.ttl = ttl
def __call__(self, func):
self.param_names = {i: param for i, param in enumerate(inspect.signature(func).parameters)}
@functools.wraps(func)
def wrapper(*args, **kwargs):
values = dict()
for i, param in self.param_names.items():
if i < len(args):
values[param] = args[i]
elif param in kwargs:
values[param] = kwargs[param]
else:
values[param] = None
result = self.__get(values)
if result is None or result["expired"] \
or result["result"] is None or len(result["result"]) == 0:
now = time.time()
result = func(*args, **kwargs)
now = time.time() - now
if result is not None and len(result) > 0:
self.__add(values, result, now)
result[0]["cached"] = False
else:
logger.info(f"using cached response for "
f"{func.__name__}({','.join([f'{key}={val}' for key, val in enumerate(values)])})")
result = result["result"]
result[0]["cached"] = True
return result
return wrapper
def __get(self, values):
with pg_client.PostgresClient() as cur:
sub_constraints = []
for key, value in values.items():
if value is not None:
sub_constraints.append(f"{key}=%({key})s")
else:
sub_constraints.append(f"{key} IS NULL")
query = f"""SELECT result,
(%(ttl)s>0
AND EXTRACT(EPOCH FROM (timezone('utc'::text, now()) - created_at - INTERVAL %(interval)s)) > 0) AS expired
FROM {self.table}
WHERE {" AND ".join(sub_constraints)}"""
query = cur.mogrify(query, {**values, 'ttl': self.ttl, 'interval': f'{self.ttl} seconds'})
logger.debug("------")
logger.debug(query)
logger.debug("------")
cur.execute(query)
result = cur.fetchone()
return result
def __add(self, values, result, execution_time):
with pg_client.PostgresClient() as cur:
query = f"""INSERT INTO {self.table} ({",".join(values.keys())},result,execution_time)
VALUES ({",".join([f"%({param})s" for param in values.keys()])},%(result)s,%(execution_time)s)
ON CONFLICT ({",".join(values.keys())}) DO UPDATE SET result=%(result)s,
execution_time=%(execution_time)s,
created_at=timezone('utc'::text, now());"""
query = cur.mogrify(query, {**values,
"result": json.dumps(jsonable_encoder(result)),
"execution_time": execution_time})
logger.debug("------")
logger.debug(query)
logger.debug("------")
cur.execute(query)


@@ -14,6 +14,9 @@ def get_sql_operator(op: Union[schemas.SearchEventOperator, schemas.ClickEventEx
schemas.SearchEventOperator.NOT_CONTAINS: "NOT ILIKE",
schemas.SearchEventOperator.STARTS_WITH: "ILIKE",
schemas.SearchEventOperator.ENDS_WITH: "ILIKE",
# not used as an actual SQL operator; kept only so PATTERN still maps to a valid value when building conditions
schemas.SearchEventOperator.PATTERN: "regex",
# Selector operators:
schemas.ClickEventExtraOperator.IS: "=",
schemas.ClickEventExtraOperator.IS_NOT: "!=",
@@ -41,7 +44,7 @@ def reverse_sql_operator(op):
return "=" if op == "!=" else "!=" if op == "=" else "ILIKE" if op == "NOT ILIKE" else "NOT ILIKE"
def multi_conditions(condition, values, value_key="value", is_not=False):
def multi_conditions(condition, values, value_key="value", is_not=False) -> str:
query = []
for i in range(len(values)):
k = f"{value_key}_{i}"
@@ -49,12 +52,16 @@ def multi_conditions(condition, values, value_key="value", is_not=False):
return "(" + (" AND " if is_not else " OR ").join(query) + ")"
def multi_values(values, value_key="value"):
def multi_values(values, value_key="value", data_type: schemas.PropertyType | None = None):
query_values = {}
if values is not None and isinstance(values, list):
for i in range(len(values)):
k = f"{value_key}_{i}"
query_values[k] = values[i].value if isinstance(values[i], Enum) else values[i]
if data_type:
if data_type == schemas.PropertyType.STRING:
query_values[k] = str(query_values[k])
return query_values
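
# For instance, multi_values(["a", "b"], value_key="v") yields {"v_0": "a", "v_1": "b"},
# and passing data_type=schemas.PropertyType.STRING coerces each value through str(),
# so numeric inputs still bind cleanly against string columns.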
@@ -73,3 +80,29 @@ def single_value(values):
values[i] = v.value
return values
def coordinate_conditions(condition_x, condition_y, values, value_key="value", is_not=False):
query = []
if len(values) == 2:
# if 2 values are provided, it means x=v[0] and y=v[1]
for i in range(len(values)):
k = f"{value_key}_{i}"
if i == 0:
query.append(f"{condition_x}=%({k})s")
elif i == 1:
query.append(f"{condition_y}=%({k})s")
elif len(values) == 4:
# if 4 values are provided, it means v[0]<=x<=v[1] and v[2]<=y<=v[3]
for i in range(len(values)):
k = f"{value_key}_{i}"
if i == 0:
query.append(f"{condition_x}>=%({k})s")
elif i == 1:
query.append(f"{condition_x}<=%({k})s")
elif i == 2:
query.append(f"{condition_y}>=%({k})s")
elif i == 3:
query.append(f"{condition_y}<=%({k})s")
return "(" + (" AND " if is_not else " OR ").join(query) + ")"


@@ -74,4 +74,6 @@ EXP_CH_DRIVER=true
EXP_AUTOCOMPLETE=true
EXP_ALERTS=true
EXP_ERRORS_SEARCH=true
EXP_METRICS=true
EXP_METRICS=true
EXP_SESSIONS_SEARCH=true
EXP_EVENTS=true


@@ -68,4 +68,5 @@ EXP_CH_DRIVER=true
EXP_AUTOCOMPLETE=true
EXP_ALERTS=true
EXP_ERRORS_SEARCH=true
EXP_METRICS=true
EXP_METRICS=true
EXP_EVENTS=true


@@ -1,591 +0,0 @@
-- -- Original Q3
-- WITH ranked_events AS (SELECT *
-- FROM ranked_events_1736344377403),
-- n1 AS (SELECT event_number_in_session,
-- event_type,
-- e_value,
-- next_type,
-- next_value,
-- COUNT(1) AS sessions_count
-- FROM ranked_events
-- WHERE event_number_in_session = 1
-- AND isNotNull(next_value)
-- GROUP BY event_number_in_session, event_type, e_value, next_type, next_value
-- ORDER BY sessions_count DESC
-- LIMIT 8),
-- n2 AS (SELECT *
-- FROM (SELECT re.event_number_in_session AS event_number_in_session,
-- re.event_type AS event_type,
-- re.e_value AS e_value,
-- re.next_type AS next_type,
-- re.next_value AS next_value,
-- COUNT(1) AS sessions_count
-- FROM n1
-- INNER JOIN ranked_events AS re
-- ON (n1.next_value = re.e_value AND n1.next_type = re.event_type)
-- WHERE re.event_number_in_session = 2
-- GROUP BY re.event_number_in_session, re.event_type, re.e_value, re.next_type,
-- re.next_value) AS sub_level
-- ORDER BY sessions_count DESC
-- LIMIT 8),
-- n3 AS (SELECT *
-- FROM (SELECT re.event_number_in_session AS event_number_in_session,
-- re.event_type AS event_type,
-- re.e_value AS e_value,
-- re.next_type AS next_type,
-- re.next_value AS next_value,
-- COUNT(1) AS sessions_count
-- FROM n2
-- INNER JOIN ranked_events AS re
-- ON (n2.next_value = re.e_value AND n2.next_type = re.event_type)
-- WHERE re.event_number_in_session = 3
-- GROUP BY re.event_number_in_session, re.event_type, re.e_value, re.next_type,
-- re.next_value) AS sub_level
-- ORDER BY sessions_count DESC
-- LIMIT 8),
-- n4 AS (SELECT *
-- FROM (SELECT re.event_number_in_session AS event_number_in_session,
-- re.event_type AS event_type,
-- re.e_value AS e_value,
-- re.next_type AS next_type,
-- re.next_value AS next_value,
-- COUNT(1) AS sessions_count
-- FROM n3
-- INNER JOIN ranked_events AS re
-- ON (n3.next_value = re.e_value AND n3.next_type = re.event_type)
-- WHERE re.event_number_in_session = 4
-- GROUP BY re.event_number_in_session, re.event_type, re.e_value, re.next_type,
-- re.next_value) AS sub_level
-- ORDER BY sessions_count DESC
-- LIMIT 8),
-- n5 AS (SELECT *
-- FROM (SELECT re.event_number_in_session AS event_number_in_session,
-- re.event_type AS event_type,
-- re.e_value AS e_value,
-- re.next_type AS next_type,
-- re.next_value AS next_value,
-- COUNT(1) AS sessions_count
-- FROM n4
-- INNER JOIN ranked_events AS re
-- ON (n4.next_value = re.e_value AND n4.next_type = re.event_type)
-- WHERE re.event_number_in_session = 5
-- GROUP BY re.event_number_in_session, re.event_type, re.e_value, re.next_type,
-- re.next_value) AS sub_level
-- ORDER BY sessions_count DESC
-- LIMIT 8)
-- SELECT *
-- FROM (SELECT event_number_in_session,
-- event_type,
-- e_value,
-- next_type,
-- next_value,
-- sessions_count
-- FROM n1
-- UNION ALL
-- SELECT event_number_in_session,
-- event_type,
-- e_value,
-- next_type,
-- next_value,
-- sessions_count
-- FROM n2
-- UNION ALL
-- SELECT event_number_in_session,
-- event_type,
-- e_value,
-- next_type,
-- next_value,
-- sessions_count
-- FROM n3
-- UNION ALL
-- SELECT event_number_in_session,
-- event_type,
-- e_value,
-- next_type,
-- next_value,
-- sessions_count
-- FROM n4
-- UNION ALL
-- SELECT event_number_in_session,
-- event_type,
-- e_value,
-- next_type,
-- next_value,
-- sessions_count
-- FROM n5) AS chart_steps
-- ORDER BY event_number_in_session;
-- Q1
-- CREATE TEMPORARY TABLE pre_ranked_events_1736344377403 AS
CREATE TABLE pre_ranked_events_1736344377403 ENGINE = Memory AS
(WITH initial_event AS (SELECT events.session_id, MIN(datetime) AS start_event_timestamp
FROM experimental.events AS events
WHERE ((event_type = 'LOCATION' AND (url_path = '/en/deployment/')))
AND events.project_id = toUInt16(65)
AND events.datetime >= toDateTime(1735599600000 / 1000)
AND events.datetime < toDateTime(1736290799999 / 1000)
GROUP BY 1),
pre_ranked_events AS (SELECT *
FROM (SELECT session_id,
event_type,
datetime,
url_path AS e_value,
row_number() OVER (PARTITION BY session_id
ORDER BY datetime ,
message_id ) AS event_number_in_session
FROM experimental.events AS events
INNER JOIN initial_event ON (events.session_id = initial_event.session_id)
WHERE events.project_id = toUInt16(65)
AND events.datetime >= toDateTime(1735599600000 / 1000)
AND events.datetime < toDateTime(1736290799999 / 1000)
AND (events.event_type = 'LOCATION')
AND events.datetime >= initial_event.start_event_timestamp
) AS full_ranked_events
WHERE event_number_in_session <= 5)
SELECT *
FROM pre_ranked_events);
;
SELECT *
FROM pre_ranked_events_1736344377403
WHERE event_number_in_session < 3;
-- ---------Q2-----------
-- CREATE TEMPORARY TABLE ranked_events_1736344377403 AS
DROP TABLE ranked_events_1736344377403;
CREATE TABLE ranked_events_1736344377403 ENGINE = Memory AS
(WITH pre_ranked_events AS (SELECT *
FROM pre_ranked_events_1736344377403),
start_points AS (SELECT DISTINCT session_id
FROM pre_ranked_events
WHERE ((event_type = 'LOCATION' AND (e_value = '/en/deployment/')))
AND pre_ranked_events.event_number_in_session = 1),
ranked_events AS (SELECT pre_ranked_events.*,
leadInFrame(e_value)
OVER (PARTITION BY session_id ORDER BY datetime
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS next_value,
leadInFrame(toNullable(event_type))
OVER (PARTITION BY session_id ORDER BY datetime
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS next_type
FROM start_points
INNER JOIN pre_ranked_events USING (session_id))
SELECT *
FROM ranked_events);
-- ranked events
SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
COUNT(1) AS sessions_count
FROM ranked_events_1736344377403
WHERE event_number_in_session = 2
-- AND e_value='/en/deployment/deploy-docker/'
-- AND next_value NOT IN ('/en/deployment/','/en/plugins/','/en/using-or/')
-- AND e_value NOT IN ('/en/deployment/deploy-docker/','/en/getting-started/','/en/deployment/deploy-ubuntu/')
AND isNotNull(next_value)
GROUP BY event_number_in_session, event_type, e_value, next_type, next_value
ORDER BY event_number_in_session, sessions_count DESC;
SELECT event_number_in_session,
event_type,
e_value,
COUNT(1) AS sessions_count
FROM ranked_events_1736344377403
WHERE event_number_in_session = 1
GROUP BY event_number_in_session, event_type, e_value
ORDER BY event_number_in_session, sessions_count DESC;
SELECT COUNT(1) AS sessions_count
FROM ranked_events_1736344377403
WHERE event_number_in_session = 2
AND isNull(next_value)
;
-- ---------Q3 MORE -----------
WITH ranked_events AS (SELECT *
FROM ranked_events_1736344377403),
n1 AS (SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
COUNT(1) AS sessions_count
FROM ranked_events
WHERE event_number_in_session = 1
GROUP BY event_number_in_session, event_type, e_value, next_type, next_value
ORDER BY sessions_count DESC),
n2 AS (SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
COUNT(1) AS sessions_count
FROM ranked_events
WHERE event_number_in_session = 2
GROUP BY event_number_in_session, event_type, e_value, next_type, next_value
ORDER BY sessions_count DESC),
n3 AS (SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
COUNT(1) AS sessions_count
FROM ranked_events
WHERE event_number_in_session = 3
GROUP BY event_number_in_session, event_type, e_value, next_type, next_value
ORDER BY sessions_count DESC),
drop_n AS (-- STEP 1
SELECT event_number_in_session,
event_type,
e_value,
'DROP' AS next_type,
NULL AS next_value,
sessions_count
FROM n1
WHERE isNull(n1.next_type)
UNION ALL
-- STEP 2
SELECT event_number_in_session,
event_type,
e_value,
'DROP' AS next_type,
NULL AS next_value,
sessions_count
FROM n2
WHERE isNull(n2.next_type)),
-- TODO: make this as top_steps, where every step will go to next as top/others
top_n1 AS (-- STEP 1
SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
sessions_count
FROM n1
WHERE isNotNull(next_type)
ORDER BY sessions_count DESC
LIMIT 3),
top_n2 AS (-- STEP 2
SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
sessions_count
FROM n2
WHERE (event_type, e_value) IN (SELECT event_type,
e_value
FROM n2
WHERE isNotNull(next_type)
GROUP BY event_type, e_value
ORDER BY SUM(sessions_count) DESC
LIMIT 3)
ORDER BY sessions_count DESC),
top_n AS (SELECT *
FROM top_n1
UNION ALL
SELECT *
FROM top_n2),
u_top_n AS (SELECT DISTINCT event_number_in_session,
event_type,
e_value
FROM top_n),
others_n AS (
-- STEP 1
SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
sessions_count
FROM n1
WHERE isNotNull(next_type)
ORDER BY sessions_count DESC
LIMIT 1000000 OFFSET 3
UNION ALL
-- STEP 2
SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
sessions_count
FROM n2
WHERE isNotNull(next_type)
-- GROUP BY event_number_in_session, event_type, e_value
ORDER BY sessions_count DESC
LIMIT 1000000 OFFSET 3)
SELECT *
FROM (
-- Top
SELECT *
FROM top_n
-- UNION ALL
-- -- Others
-- SELECT event_number_in_session,
-- event_type,
-- e_value,
-- 'OTHER' AS next_type,
-- NULL AS next_value,
-- SUM(sessions_count)
-- FROM others_n
-- GROUP BY event_number_in_session, event_type, e_value
-- UNION ALL
-- -- Top go to Drop
-- SELECT drop_n.event_number_in_session,
-- drop_n.event_type,
-- drop_n.e_value,
-- drop_n.next_type,
-- drop_n.next_value,
-- drop_n.sessions_count
-- FROM drop_n
-- INNER JOIN u_top_n ON (drop_n.event_number_in_session = u_top_n.event_number_in_session
-- AND drop_n.event_type = u_top_n.event_type
-- AND drop_n.e_value = u_top_n.e_value)
-- ORDER BY drop_n.event_number_in_session
-- -- -- UNION ALL
-- -- -- Top go to Others
-- SELECT top_n.event_number_in_session,
-- top_n.event_type,
-- top_n.e_value,
-- 'OTHER' AS next_type,
-- NULL AS next_value,
-- SUM(top_n.sessions_count) AS sessions_count
-- FROM top_n
-- LEFT JOIN others_n ON (others_n.event_number_in_session = (top_n.event_number_in_session + 1)
-- AND top_n.next_type = others_n.event_type
-- AND top_n.next_value = others_n.e_value)
-- WHERE others_n.event_number_in_session IS NULL
-- AND top_n.next_type IS NOT NULL
-- GROUP BY event_number_in_session, event_type, e_value
-- UNION ALL
-- -- Others got to Top
-- SELECT others_n.event_number_in_session,
-- 'OTHER' AS event_type,
-- NULL AS e_value,
-- others_n.s_next_type AS next_type,
-- others_n.s_next_value AS next_value,
-- SUM(sessions_count) AS sessions_count
-- FROM others_n
-- INNER JOIN top_n ON (others_n.event_number_in_session = top_n.event_number_in_session + 1 AND
-- others_n.s_next_type = top_n.event_type AND
-- others_n.s_next_value = top_n.event_type)
-- GROUP BY others_n.event_number_in_session, next_type, next_value
-- UNION ALL
-- -- TODO: find if this works or not
-- -- Others got to Others
-- SELECT others_n.event_number_in_session,
-- 'OTHER' AS event_type,
-- NULL AS e_value,
-- 'OTHERS' AS next_type,
-- NULL AS next_value,
-- SUM(sessions_count) AS sessions_count
-- FROM others_n
-- LEFT JOIN u_top_n ON ((others_n.event_number_in_session + 1) = u_top_n.event_number_in_session
-- AND others_n.s_next_type = u_top_n.event_type
-- AND others_n.s_next_value = u_top_n.e_value)
-- WHERE u_top_n.event_number_in_session IS NULL
-- GROUP BY others_n.event_number_in_session
)
ORDER BY event_number_in_session;
-- ---------Q3 TOP ON VALUE ONLY -----------
WITH ranked_events AS (SELECT *
FROM ranked_events_1736344377403),
n1 AS (SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
COUNT(1) AS sessions_count
FROM ranked_events
WHERE event_number_in_session = 1
GROUP BY event_number_in_session, event_type, e_value, next_type, next_value
ORDER BY sessions_count DESC),
n2 AS (SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
COUNT(1) AS sessions_count
FROM ranked_events
WHERE event_number_in_session = 2
GROUP BY event_number_in_session, event_type, e_value, next_type, next_value
ORDER BY sessions_count DESC),
n3 AS (SELECT event_number_in_session,
event_type,
e_value,
next_type,
next_value,
COUNT(1) AS sessions_count
FROM ranked_events
WHERE event_number_in_session = 3
GROUP BY event_number_in_session, event_type, e_value, next_type, next_value
ORDER BY sessions_count DESC),
drop_n AS (-- STEP 1
SELECT event_number_in_session,
event_type,
e_value,
'DROP' AS next_type,
NULL AS next_value,
sessions_count
FROM n1
WHERE isNull(n1.next_type)
UNION ALL
-- STEP 2
SELECT event_number_in_session,
event_type,
e_value,
'DROP' AS next_type,
NULL AS next_value,
sessions_count
FROM n2
WHERE isNull(n2.next_type)),
top_n AS (SELECT event_number_in_session,
event_type,
e_value,
SUM(sessions_count) AS sessions_count
FROM n1
GROUP BY event_number_in_session, event_type, e_value
LIMIT 1
UNION ALL
-- STEP 2
SELECT event_number_in_session,
event_type,
e_value,
SUM(sessions_count) AS sessions_count
FROM n2
GROUP BY event_number_in_session, event_type, e_value
ORDER BY sessions_count DESC
LIMIT 3
UNION ALL
-- STEP 3
SELECT event_number_in_session,
event_type,
e_value,
SUM(sessions_count) AS sessions_count
FROM n3
GROUP BY event_number_in_session, event_type, e_value
ORDER BY sessions_count DESC
LIMIT 3),
top_n_with_next AS (SELECT n1.*
FROM n1
UNION ALL
SELECT n2.*
FROM n2
INNER JOIN top_n ON (n2.event_number_in_session = top_n.event_number_in_session
AND n2.event_type = top_n.event_type
AND n2.e_value = top_n.e_value)),
others_n AS (
-- STEP 2
SELECT n2.*
FROM n2
WHERE (n2.event_number_in_session, n2.event_type, n2.e_value) NOT IN
(SELECT event_number_in_session, event_type, e_value
FROM top_n
WHERE top_n.event_number_in_session = 2)
UNION ALL
-- STEP 3
SELECT n3.*
FROM n3
WHERE (n3.event_number_in_session, n3.event_type, n3.e_value) NOT IN
(SELECT event_number_in_session, event_type, e_value
FROM top_n
WHERE top_n.event_number_in_session = 3))
SELECT *
FROM (
-- SELECT sum(top_n_with_next.sessions_count)
-- FROM top_n_with_next
-- WHERE event_number_in_session = 1
-- -- AND isNotNull(next_value)
-- AND (next_type, next_value) IN
-- (SELECT others_n.event_type, others_n.e_value FROM others_n WHERE others_n.event_number_in_session = 2)
-- -- SELECT * FROM others_n
-- -- SELECT * FROM n2
-- SELECT *
-- FROM top_n
-- );
-- Top to Top: valid
SELECT top_n_with_next.*
FROM top_n_with_next
INNER JOIN top_n
ON (top_n_with_next.event_number_in_session + 1 = top_n.event_number_in_session
AND top_n_with_next.next_type = top_n.event_type
AND top_n_with_next.next_value = top_n.e_value)
UNION ALL
-- Top to Others: valid
SELECT top_n_with_next.event_number_in_session,
top_n_with_next.event_type,
top_n_with_next.e_value,
'OTHER' AS next_type,
NULL AS next_value,
SUM(top_n_with_next.sessions_count) AS sessions_count
FROM top_n_with_next
WHERE (top_n_with_next.event_number_in_session + 1, top_n_with_next.next_type, top_n_with_next.next_value) IN
(SELECT others_n.event_number_in_session, others_n.event_type, others_n.e_value FROM others_n)
GROUP BY top_n_with_next.event_number_in_session, top_n_with_next.event_type, top_n_with_next.e_value
UNION ALL
-- Top go to Drop: valid
SELECT drop_n.event_number_in_session,
drop_n.event_type,
drop_n.e_value,
drop_n.next_type,
drop_n.next_value,
drop_n.sessions_count
FROM drop_n
INNER JOIN top_n ON (drop_n.event_number_in_session = top_n.event_number_in_session
AND drop_n.event_type = top_n.event_type
AND drop_n.e_value = top_n.e_value)
ORDER BY drop_n.event_number_in_session
UNION ALL
-- Others go to Drop: valid
SELECT others_n.event_number_in_session,
'OTHER' AS event_type,
NULL AS e_value,
'DROP' AS next_type,
NULL AS next_value,
SUM(others_n.sessions_count) AS sessions_count
FROM others_n
WHERE isNull(others_n.next_type)
AND others_n.event_number_in_session < 3
GROUP BY others_n.event_number_in_session, next_type, next_value
UNION ALL
-- Others go to Top: valid
SELECT others_n.event_number_in_session,
'OTHER' AS event_type,
NULL AS e_value,
others_n.next_type,
others_n.next_value,
SUM(others_n.sessions_count) AS sessions_count
FROM others_n
WHERE isNotNull(others_n.next_type)
AND (others_n.event_number_in_session + 1, others_n.next_type, others_n.next_value) IN
(SELECT top_n.event_number_in_session, top_n.event_type, top_n.e_value FROM top_n)
GROUP BY others_n.event_number_in_session, others_n.next_type, others_n.next_value
UNION ALL
-- Others go to Others
SELECT others_n.event_number_in_session,
'OTHER' AS event_type,
NULL AS e_value,
'OTHERS' AS next_type,
NULL AS next_value,
SUM(sessions_count) AS sessions_count
FROM others_n
WHERE isNotNull(others_n.next_type)
AND others_n.event_number_in_session < 3
AND (others_n.event_number_in_session + 1, others_n.next_type, others_n.next_value) NOT IN
(SELECT event_number_in_session, event_type, e_value FROM top_n)
GROUP BY others_n.event_number_in_session)
ORDER BY event_number_in_session, sessions_count
DESC;


@@ -1,17 +1,16 @@
urllib3==2.3.0
urllib3==2.4.0
requests==2.32.3
boto3==1.36.12
boto3==1.38.16
pyjwt==2.10.1
psycopg2-binary==2.9.10
psycopg[pool,binary]==3.2.4
clickhouse-driver[lz4]==0.2.9
clickhouse-connect==0.8.15
elasticsearch==8.17.1
psycopg[pool,binary]==3.2.9
clickhouse-connect==0.8.17
elasticsearch==9.0.1
jira==3.8.0
cachetools==5.5.1
cachetools==5.5.2
fastapi==0.115.8
uvicorn[standard]==0.34.0
fastapi==0.115.12
uvicorn[standard]==0.34.2
python-decouple==3.8
pydantic[email]==2.10.6
pydantic[email]==2.11.4
apscheduler==3.11.0


@@ -1,19 +1,18 @@
urllib3==2.3.0
urllib3==2.4.0
requests==2.32.3
boto3==1.36.12
boto3==1.38.16
pyjwt==2.10.1
psycopg2-binary==2.9.10
psycopg[pool,binary]==3.2.4
clickhouse-driver[lz4]==0.2.9
clickhouse-connect==0.8.15
elasticsearch==8.17.1
psycopg[pool,binary]==3.2.9
clickhouse-connect==0.8.17
elasticsearch==9.0.1
jira==3.8.0
cachetools==5.5.1
cachetools==5.5.2
fastapi==0.115.8
uvicorn[standard]==0.34.0
fastapi==0.115.12
uvicorn[standard]==0.34.2
python-decouple==3.8
pydantic[email]==2.10.6
pydantic[email]==2.11.4
apscheduler==3.11.0
redis==5.2.1
redis==6.1.0


@@ -4,8 +4,9 @@ from decouple import config
from fastapi import Depends, Body, BackgroundTasks
import schemas
from chalicelib.core import events, projects, issues, metadata, reset_password, log_tools, \
from chalicelib.core import events, projects, metadata, reset_password, log_tools, \
announcements, weekly_report, assist, mobile, tenants, boarding, notifications, webhook, users, saved_search, tags
from chalicelib.core.issues import issues
from chalicelib.core.sourcemaps import sourcemaps
from chalicelib.core.metrics import custom_metrics
from chalicelib.core.alerts import alerts


@@ -8,13 +8,14 @@ from starlette.responses import RedirectResponse, FileResponse, JSONResponse, Re
import schemas
from chalicelib.core import assist, signup, feature_flags
from chalicelib.core import notes
from chalicelib.core import scope
from chalicelib.core import tenants, users, projects, license
from chalicelib.core import webhook
from chalicelib.core.collaborations.collaboration_slack import Slack
from chalicelib.core.errors import errors, errors_details
from chalicelib.core.metrics import heatmaps
from chalicelib.core.sessions import sessions, sessions_notes, sessions_replay, sessions_favorite, sessions_viewed, \
from chalicelib.core.sessions import sessions, sessions_replay, sessions_favorite, sessions_viewed, \
sessions_assignments, unprocessed_sessions, sessions_search
from chalicelib.utils import captcha, smtp
from chalicelib.utils import contextual_validators
@@ -259,8 +260,7 @@ def get_projects(context: schemas.CurrentContext = Depends(OR_context)):
def search_sessions(projectId: int, data: schemas.SessionsSearchPayloadSchema = \
Depends(contextual_validators.validate_contextual_payload),
context: schemas.CurrentContext = Depends(OR_context)):
data = sessions_search.search_sessions(data=data, project=context.project, user_id=context.user_id,
platform=context.project.platform)
data = sessions_search.search_sessions(data=data, project=context.project, user_id=context.user_id)
return {'data': data}
@@ -268,8 +268,7 @@ def search_sessions(projectId: int, data: schemas.SessionsSearchPayloadSchema =
def session_ids_search(projectId: int, data: schemas.SessionsSearchPayloadSchema = \
Depends(contextual_validators.validate_contextual_payload),
context: schemas.CurrentContext = Depends(OR_context)):
data = sessions_search.search_sessions(data=data, project=context.project, user_id=context.user_id, ids_only=True,
platform=context.project.platform)
data = sessions_search.search_sessions(data=data, project=context.project, user_id=context.user_id, ids_only=True)
return {'data': data}
@@ -475,8 +474,8 @@ def comment_assignment(projectId: int, sessionId: int, issueId: str,
@app.get('/{projectId}/notes/{noteId}', tags=["sessions", "notes"])
def get_note_by_id(projectId: int, noteId: int, context: schemas.CurrentContext = Depends(OR_context)):
data = sessions_notes.get_note(tenant_id=context.tenant_id, project_id=projectId, note_id=noteId,
user_id=context.user_id)
data = notes.get_note(tenant_id=context.tenant_id, project_id=projectId, note_id=noteId,
user_id=context.user_id)
if "errors" in data:
return data
return {
@@ -489,8 +488,8 @@ def create_note(projectId: int, sessionId: int, data: schemas.SessionNoteSchema
context: schemas.CurrentContext = Depends(OR_context)):
if not sessions.session_exists(project_id=projectId, session_id=sessionId):
return {"errors": ["Session not found"]}
data = sessions_notes.create(tenant_id=context.tenant_id, project_id=projectId,
session_id=sessionId, user_id=context.user_id, data=data)
data = notes.create(tenant_id=context.tenant_id, project_id=projectId,
session_id=sessionId, user_id=context.user_id, data=data)
if "errors" in data.keys():
return data
return {
@@ -500,8 +499,8 @@ def create_note(projectId: int, sessionId: int, data: schemas.SessionNoteSchema
@app.get('/{projectId}/sessions/{sessionId}/notes', tags=["sessions", "notes"])
def get_session_notes(projectId: int, sessionId: int, context: schemas.CurrentContext = Depends(OR_context)):
data = sessions_notes.get_session_notes(tenant_id=context.tenant_id, project_id=projectId,
session_id=sessionId, user_id=context.user_id)
data = notes.get_session_notes(tenant_id=context.tenant_id, project_id=projectId,
session_id=sessionId, user_id=context.user_id)
if "errors" in data:
return data
return {
@@ -512,8 +511,8 @@ def get_session_notes(projectId: int, sessionId: int, context: schemas.CurrentCo
@app.post('/{projectId}/notes/{noteId}', tags=["sessions", "notes"])
def edit_note(projectId: int, noteId: int, data: schemas.SessionUpdateNoteSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):
data = sessions_notes.edit(tenant_id=context.tenant_id, project_id=projectId, user_id=context.user_id,
note_id=noteId, data=data)
data = notes.edit(tenant_id=context.tenant_id, project_id=projectId, user_id=context.user_id,
note_id=noteId, data=data)
if "errors" in data.keys():
return data
return {
@@ -523,29 +522,29 @@ def edit_note(projectId: int, noteId: int, data: schemas.SessionUpdateNoteSchema
@app.delete('/{projectId}/notes/{noteId}', tags=["sessions", "notes"])
def delete_note(projectId: int, noteId: int, _=Body(None), context: schemas.CurrentContext = Depends(OR_context)):
data = sessions_notes.delete(project_id=projectId, note_id=noteId)
data = notes.delete(project_id=projectId, note_id=noteId)
return data
@app.get('/{projectId}/notes/{noteId}/slack/{webhookId}', tags=["sessions", "notes"])
def share_note_to_slack(projectId: int, noteId: int, webhookId: int,
context: schemas.CurrentContext = Depends(OR_context)):
return sessions_notes.share_to_slack(tenant_id=context.tenant_id, project_id=projectId, user_id=context.user_id,
note_id=noteId, webhook_id=webhookId)
return notes.share_to_slack(tenant_id=context.tenant_id, project_id=projectId, user_id=context.user_id,
note_id=noteId, webhook_id=webhookId)
@app.get('/{projectId}/notes/{noteId}/msteams/{webhookId}', tags=["sessions", "notes"])
def share_note_to_msteams(projectId: int, noteId: int, webhookId: int,
context: schemas.CurrentContext = Depends(OR_context)):
return sessions_notes.share_to_msteams(tenant_id=context.tenant_id, project_id=projectId, user_id=context.user_id,
note_id=noteId, webhook_id=webhookId)
return notes.share_to_msteams(tenant_id=context.tenant_id, project_id=projectId, user_id=context.user_id,
note_id=noteId, webhook_id=webhookId)
@app.post('/{projectId}/notes', tags=["sessions", "notes"])
def get_all_notes(projectId: int, data: schemas.SearchNoteSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):
data = sessions_notes.get_all_notes_by_project_id(tenant_id=context.tenant_id, project_id=projectId,
user_id=context.user_id, data=data)
data = notes.get_all_notes_by_project_id(tenant_id=context.tenant_id, project_id=projectId,
user_id=context.user_id, data=data)
if "errors" in data:
return data
return {'data': data}


@@ -219,6 +219,17 @@ def get_card_chart(projectId: int, metric_id: int, data: schemas.CardSessionsSch
return {"data": data}
@app.post("/{projectId}/dashboards/{dashboardId}/cards/{metric_id}/chart", tags=["card"])
@app.post("/{projectId}/dashboards/{dashboardId}/cards/{metric_id}", tags=["card"])
def get_card_chart_for_dashboard(projectId: int, dashboardId: int, metric_id: int,
data: schemas.CardSessionsSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):
data = custom_metrics.make_chart_from_card(
project=context.project, user_id=context.user_id, metric_id=metric_id, data=data, for_dashboard=True
)
return {"data": data}
@app.post("/{projectId}/cards/{metric_id}", tags=["dashboard"])
def update_card(projectId: int, metric_id: int, data: schemas.CardSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):


@@ -0,0 +1,77 @@
from typing import Annotated
from fastapi import Body, Depends, Query
import schemas
from chalicelib.core import metadata
from chalicelib.core.product_analytics import events, properties, autocomplete
from or_dependencies import OR_context
from routers.base import get_routers
from typing import Optional
public_app, app, app_apikey = get_routers()
@app.get('/{projectId}/filters', tags=["product_analytics"])
def get_all_filters(projectId: int, filter_query: Annotated[schemas.PaginatedSchema, Query()],
context: schemas.CurrentContext = Depends(OR_context)):
return {
"data": {
"events": events.get_events(project_id=projectId, page=filter_query),
"filters": properties.get_all_properties(project_id=projectId, page=filter_query),
"metadata": metadata.get_for_filters(project_id=projectId)
}
}
@app.get('/{projectId}/events/names', tags=["product_analytics"])
def get_all_events(projectId: int, filter_query: Annotated[schemas.PaginatedSchema, Query()],
context: schemas.CurrentContext = Depends(OR_context)):
return {"data": events.get_events(project_id=projectId, page=filter_query)}
@app.get('/{projectId}/properties/search', tags=["product_analytics"])
def get_event_properties(projectId: int, event_name: str = None,
context: schemas.CurrentContext = Depends(OR_context)):
if not event_name or len(event_name) == 0:
return {"data": []}
return {"data": properties.get_event_properties(project_id=projectId, event_name=event_name)}
@app.post('/{projectId}/events/search', tags=["product_analytics"])
def search_events(projectId: int, data: schemas.EventsSearchPayloadSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):
return {"data": events.search_events(project_id=projectId, data=data)}
@app.get('/{projectId}/lexicon/events', tags=["product_analytics", "lexicon"])
def get_all_lexicon_events(projectId: int, filter_query: Annotated[schemas.PaginatedSchema, Query()],
context: schemas.CurrentContext = Depends(OR_context)):
return {"data": events.get_lexicon(project_id=projectId, page=filter_query)}
@app.get('/{projectId}/lexicon/properties', tags=["product_analytics", "lexicon"])
def get_all_lexicon_properties(projectId: int, filter_query: Annotated[schemas.PaginatedSchema, Query()],
context: schemas.CurrentContext = Depends(OR_context)):
return {"data": properties.get_lexicon(project_id=projectId, page=filter_query)}
@app.get('/{projectId}/events/autocomplete', tags=["autocomplete"])
def autocomplete_events(projectId: int, q: Optional[str] = None,
context: schemas.CurrentContext = Depends(OR_context)):
return {"data": autocomplete.search_events(project_id=projectId, q=None if not q or len(q) == 0 else q)}
@app.get('/{projectId}/properties/autocomplete', tags=["autocomplete"])
def autocomplete_properties(projectId: int, propertyName: Optional[str] = None, eventName: Optional[str] = None,
q: Optional[str] = None, context: schemas.CurrentContext = Depends(OR_context)):
if not propertyName and not eventName and not q:
return {"error": ["Specify eventName to get top properties",
"Specify propertyName to get top values of that property",
"Specify eventName&propertyName to get top values of that property for the selected event"]}
return {"data": autocomplete.search_properties(project_id=projectId,
event_name=None if not eventName \
or len(eventName) == 0 else eventName,
property_name=None if not propertyName \
or len(propertyName) == 0 else propertyName,
q=None if not q or len(q) == 0 else q)}
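
# Illustrative requests against the routes above (project id and names are made up):
#   GET /1/events/autocomplete?q=check                       -> matching event names
#   GET /1/properties/autocomplete?eventName=checkout_click  -> top properties of the event
#   GET /1/properties/autocomplete?eventName=checkout_click&propertyName=$browser
#     -> top values of $browser for that event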


@@ -1,15 +0,0 @@
import schemas
from chalicelib.core.metrics import product_anaytics2
from fastapi import Depends
from or_dependencies import OR_context
from routers.base import get_routers
public_app, app, app_apikey = get_routers()
@app.post('/{projectId}/events/search', tags=["dashboard"])
def search_events(projectId: int,
# data: schemas.CreateDashboardSchema = Body(...),
context: schemas.CurrentContext = Depends(OR_context)):
return product_anaytics2.search_events(project_id=projectId, data={})


@@ -1,10 +1,12 @@
from fastapi import Body, Depends
from typing import Annotated
from fastapi import Body, Depends, Query
import schemas
from chalicelib.core.usability_testing import service
from chalicelib.core.usability_testing.schema import UTTestCreate, UTTestUpdate, UTTestSearch
from or_dependencies import OR_context
from routers.base import get_routers
from schemas import schemas
public_app, app, app_apikey = get_routers()
tags = ["usability-tests"]
@@ -77,9 +79,7 @@
@app.get('/{projectId}/usability-tests/{test_id}/sessions', tags=tags)
async def get_sessions(projectId: int, test_id: int, page: int = 1, limit: int = 10,
live: bool = False,
user_id: str = None):
async def get_sessions(projectId: int, test_id: int, filter_query: Annotated[schemas.UsabilityTestQuery, Query()]):
"""
Get sessions related to a specific UT test.
@@ -87,21 +87,23 @@ async def get_sessions(projectId: int, test_id: int, page: int = 1, limit: int =
- **test_id**: The unique identifier of the UT test.
"""
if live:
return service.ut_tests_sessions_live(projectId, test_id, page, limit)
if filter_query.live:
return service.ut_tests_sessions_live(projectId, test_id, filter_query.page, filter_query.limit)
else:
return service.ut_tests_sessions(projectId, test_id, page, limit, user_id, live)
return service.ut_tests_sessions(projectId, test_id, filter_query.page, filter_query.limit,
filter_query.user_id, filter_query.live)
@app.get('/{projectId}/usability-tests/{test_id}/responses/{task_id}', tags=tags)
async def get_responses(projectId: int, test_id: int, task_id: int, page: int = 1, limit: int = 10, query: str = None):
async def get_responses(projectId: int, test_id: int, task_id: int,
filter_query: Annotated[schemas.PaginatedSchema, Query()], query: str = None):
"""
Get responses related to a specific UT test.
- **project_id**: The unique identifier of the project.
- **test_id**: The unique identifier of the UT test.
"""
return service.get_responses(test_id, task_id, page, limit, query)
return service.get_responses(test_id, task_id, filter_query.page, filter_query.limit, query)
@app.get('/{projectId}/usability-tests/{test_id}/statistics', tags=tags)


@@ -1,2 +1,4 @@
from .schemas import *
from .product_analytics import *
from . import overrides as _overrides
from .schemas import _PaginatedSchema as PaginatedSchema


@@ -0,0 +1,22 @@
from typing import Optional, List, Literal, Union, Annotated
from pydantic import Field
from .overrides import BaseModel
from .schemas import EventPropertiesSchema, SortOrderType, _TimedSchema, \
_PaginatedSchema, PropertyFilterSchema
class EventSearchSchema(BaseModel):
is_event: Literal[True] = True
name: str = Field(...)
properties: Optional[EventPropertiesSchema] = Field(default=None)
ProductAnalyticsGroupedFilter = Annotated[Union[EventSearchSchema, PropertyFilterSchema], \
Field(discriminator='is_event')]
class EventsSearchPayloadSchema(_TimedSchema, _PaginatedSchema):
filters: List[ProductAnalyticsGroupedFilter] = Field(...)
sort: str = Field(default="startTs")
order: SortOrderType = Field(default=SortOrderType.DESC)
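For context, the is_event discriminator used by ProductAnalyticsGroupedFilter above is pydantic's tagged-union pattern; a minimal runnable sketch with simplified stand-in models (not the full schemas from this diff), assuming pydantic v2:
from typing import Annotated, List, Literal, Union
from pydantic import BaseModel, Field

class MiniEvent(BaseModel):
    is_event: Literal[True] = True
    name: str

class MiniProperty(BaseModel):
    is_event: Literal[False] = False
    name: str
    value: List[str]

# The is_event flag routes each dict to the matching model, as in the diff.
MiniFilter = Annotated[Union[MiniEvent, MiniProperty], Field(discriminator="is_event")]

class MiniPayload(BaseModel):
    filters: List[MiniFilter]

payload = MiniPayload(filters=[
    {"is_event": True, "name": "purchase"},
    {"is_event": False, "name": "$browser", "value": ["Chrome"]},
])
assert isinstance(payload.filters[0], MiniEvent)
assert isinstance(payload.filters[1], MiniProperty)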

View file

@ -3,12 +3,13 @@ from typing import Optional, List, Union, Literal
from pydantic import Field, EmailStr, HttpUrl, SecretStr, AnyHttpUrl
from pydantic import field_validator, model_validator, computed_field
from pydantic import AfterValidator
from pydantic.functional_validators import BeforeValidator
from chalicelib.utils.TimeUTC import TimeUTC
from .overrides import BaseModel, Enum, ORUnion
from .transformers_validators import transform_email, remove_whitespace, remove_duplicate_values, single_to_list, \
force_is_event, NAME_PATTERN, int_to_string, check_alphanumeric
force_is_event, NAME_PATTERN, int_to_string, check_alphanumeric, check_regex
class _GRecaptcha(BaseModel):
@ -404,6 +405,9 @@ class EventType(str, Enum):
REQUEST_MOBILE = "requestMobile"
ERROR_MOBILE = "errorMobile"
SWIPE_MOBILE = "swipeMobile"
EVENT = "event"
INCIDENT = "incident"
CLICK_COORDINATES = "clickCoordinates"
class PerformanceEventType(str, Enum):
@ -464,6 +468,7 @@ class SearchEventOperator(str, Enum):
NOT_CONTAINS = "notContains"
STARTS_WITH = "startsWith"
ENDS_WITH = "endsWith"
PATTERN = "regex"
class ClickEventExtraOperator(str, Enum):
@ -503,8 +508,8 @@ class IssueType(str, Enum):
CUSTOM = 'custom'
JS_EXCEPTION = 'js_exception'
MOUSE_THRASHING = 'mouse_thrashing'
# IOS
TAP_RAGE = 'tap_rage'
TAP_RAGE = 'tap_rage' # IOS
INCIDENT = 'incident'
class MetricFormatType(str, Enum):
@ -535,7 +540,7 @@ class GraphqlFilterType(str, Enum):
class RequestGraphqlFilterSchema(BaseModel):
type: Union[FetchFilterType, GraphqlFilterType] = Field(...)
value: List[Union[int, str]] = Field(...)
operator: Union[SearchEventOperator, MathOperator] = Field(...)
operator: Annotated[Union[SearchEventOperator, MathOperator], AfterValidator(check_regex)] = Field(...)
@model_validator(mode="before")
@classmethod
@ -545,7 +550,85 @@ class RequestGraphqlFilterSchema(BaseModel):
return values
class SessionSearchEventSchema2(BaseModel):
class EventPredefinedPropertyType(str, Enum):
TIME = "$time"
SOURCE = "$source"
DURATION_S = "$duration_s"
DESCRIPTION = "description"
AUTO_CAPTURED = "$auto_captured"
SDK_EDITION = "$sdk_edition"
SDK_VERSION = "$sdk_version"
DEVICE_ID = "$device_id"
OS = "$os"
OS_VERSION = "$os_version"
BROWSER = "$browser"
BROWSER_VERSION = "$browser_version"
DEVICE = "$device"
SCREEN_HEIGHT = "$screen_height"
SCREEN_WIDTH = "$screen_width"
CURRENT_URL = "$current_url"
INITIAL_REFERRER = "$initial_referrer"
REFERRING_DOMAIN = "$referring_domain"
REFERRER = "$referrer"
INITIAL_REFERRING_DOMAIN = "$initial_referring_domain"
SEARCH_ENGINE = "$search_engine"
SEARCH_ENGINE_KEYWORD = "$search_engine_keyword"
UTM_SOURCE = "utm_source"
UTM_MEDIUM = "utm_medium"
UTM_CAMPAIGN = "utm_campaign"
COUNTRY = "$country"
STATE = "$state"
CITY = "$city"
ISSUE_TYPE = "issue_type"
TAGS = "$tags"
IMPORT = "$import"
class PropertyType(str, Enum):
INT = "int"
FLOAT = "float"
DATETIME = "datetime"
STRING = "string"
ARRAY = "array"
TUPLE = "tuple"
MAP = "map"
NESTED = "nested"
class PropertyFilterSchema(BaseModel):
is_event: Literal[False] = False
name: Union[EventPredefinedPropertyType, str] = Field(...)
operator: Union[SearchEventOperator, MathOperator] = Field(...)
value: List[Union[int, str]] = Field(...)
data_type: PropertyType = Field(default=PropertyType.STRING.value)
# property_type: Optional[Literal["string", "number", "date"]] = Field(default=None)
@computed_field
@property
def is_predefined(self) -> bool:
return EventPredefinedPropertyType.has_value(self.name)
@model_validator(mode="after")
def transform_name(self):
if isinstance(self.name, Enum):
self.name = self.name.value
return self
@model_validator(mode='after')
def _check_regex_value(self):
if self.operator == SearchEventOperator.PATTERN:
for v in self.value:
check_regex(v)
return self
class EventPropertiesSchema(BaseModel):
operator: Literal["and", "or"] = Field(...)
filters: List[PropertyFilterSchema] = Field(...)
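A sketch of how EventPropertiesSchema groups property filters under a boolean operator (simplified stand-ins; operator strings assumed for illustration):
from typing import List, Literal, Union
from pydantic import BaseModel, Field

class MiniPropertyFilter(BaseModel):  # simplified PropertyFilterSchema
    name: str
    operator: str
    value: List[Union[int, str]]

class MiniEventProperties(BaseModel):  # mirrors EventPropertiesSchema
    operator: Literal["and", "or"] = Field(...)
    filters: List[MiniPropertyFilter] = Field(...)

props = MiniEventProperties(
    operator="and",
    filters=[
        {"name": "$browser", "operator": "is", "value": ["Chrome"]},
        {"name": "$country", "operator": "is", "value": ["DE"]},
    ],
)
assert props.operator == "and" and len(props.filters) == 2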
class SessionSearchEventSchema(BaseModel):
is_event: Literal[True] = True
value: List[Union[str, int]] = Field(...)
type: Union[EventType, PerformanceEventType] = Field(...)
@ -553,6 +636,7 @@ class SessionSearchEventSchema2(BaseModel):
source: Optional[List[Union[ErrorSource, int, str]]] = Field(default=None)
sourceOperator: Optional[MathOperator] = Field(default=None)
filters: Optional[List[RequestGraphqlFilterSchema]] = Field(default_factory=list)
properties: Optional[EventPropertiesSchema] = Field(default=None)
_remove_duplicate_values = field_validator('value', mode='before')(remove_duplicate_values)
_single_to_list_values = field_validator('value', mode='before')(single_to_list)
@ -577,12 +661,23 @@ class SessionSearchEventSchema2(BaseModel):
elif self.type == EventType.GRAPHQL:
assert isinstance(self.filters, List) and len(self.filters) > 0, \
f"filters should be defined for {EventType.GRAPHQL}"
elif self.type == EventType.CLICK_COORDINATES:
assert isinstance(self.value, List) \
and (len(self.value) == 0 or len(self.value) == 2 or len(self.value) == 4), \
f"value should be [x,y] or [x1,x2,y1,y2] for {EventType.CLICK_COORDINATES}"
if isinstance(self.operator, ClickEventExtraOperator):
assert self.type == EventType.CLICK, \
f"operator:{self.operator} is only available for event-type: {EventType.CLICK}"
return self
@model_validator(mode='after')
def _check_regex_value(self):
if self.operator == SearchEventOperator.PATTERN:
for v in self.value:
check_regex(v)
return self
class SessionSearchFilterSchema(BaseModel):
is_event: Literal[False] = False
@ -640,6 +735,13 @@ class SessionSearchFilterSchema(BaseModel):
return self
@model_validator(mode='after')
def _check_regex_value(self):
if self.operator == SearchEventOperator.PATTERN:
for v in self.value:
check_regex(v)
return self
class _PaginatedSchema(BaseModel):
limit: int = Field(default=200, gt=0, le=200)
@ -660,12 +762,12 @@ def add_missing_is_event(values: dict):
# this type is created to allow mixing events&filters and specifying a discriminator
GroupedFilterType = Annotated[Union[SessionSearchFilterSchema, SessionSearchEventSchema2],
GroupedFilterType = Annotated[Union[SessionSearchFilterSchema, SessionSearchEventSchema],
Field(discriminator='is_event'), BeforeValidator(add_missing_is_event)]
class SessionsSearchPayloadSchema(_TimedSchema, _PaginatedSchema):
events: List[SessionSearchEventSchema2] = Field(default_factory=list, doc_hidden=True)
events: List[SessionSearchEventSchema] = Field(default_factory=list, doc_hidden=True)
filters: List[GroupedFilterType] = Field(default_factory=list)
sort: str = Field(default="startTs")
order: SortOrderType = Field(default=SortOrderType.DESC)
@ -690,6 +792,8 @@ class SessionsSearchPayloadSchema(_TimedSchema, _PaginatedSchema):
def add_missing_attributes(cls, values):
# in case isEvent is wrong:
for f in values.get("filters") or []:
if f.get("type") is None:
continue
if EventType.has_value(f["type"]) and not f.get("isEvent"):
f["isEvent"] = True
elif FilterType.has_value(f["type"]) and f.get("isEvent"):
@ -715,6 +819,15 @@ class SessionsSearchPayloadSchema(_TimedSchema, _PaginatedSchema):
f["value"] = vals
return values
@model_validator(mode="after")
def check_pa_event_filter(self):
for v in self.filters + self.events:
if v.type == EventType.EVENT:
assert v.operator in (SearchEventOperator.IS, MathOperator.EQUAL), \
"operator must be {SearchEventOperator.IS} or {MathOperator.EQUAL} for EVENT type"
assert len(v.value) == 1, "value must have 1 single value for EVENT type"
return self
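Illustratively, the EVENT-type constraint above accepts exactly one value with an equality operator; a runnable sketch with simplified stand-in models and assumed enum string values:
from typing import List
from pydantic import BaseModel, ValidationError, model_validator

class MiniFilter(BaseModel):  # simplified stand-in, assumed field names
    type: str
    operator: str
    value: List[str]

class MiniSearchPayload(BaseModel):
    events: List[MiniFilter] = []
    filters: List[MiniFilter] = []

    @model_validator(mode="after")
    def check_pa_event_filter(self):
        for v in self.filters + self.events:
            if v.type == "event":  # EventType.EVENT in the real schema
                assert v.operator in ("is", "="), "operator must be 'is' or '=' for EVENT type"
                assert len(v.value) == 1, "value must have a single value for EVENT type"
        return self

MiniSearchPayload(events=[{"type": "event", "operator": "is", "value": ["purchase"]}])  # passes
try:
    MiniSearchPayload(events=[{"type": "event", "operator": "is", "value": ["a", "b"]}])
except ValidationError:
    pass  # two values for an EVENT filter are rejected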
@model_validator(mode="after")
def split_filters_events(self):
n_filters = []
@ -795,6 +908,13 @@ class PathAnalysisSubFilterSchema(BaseModel):
values["isEvent"] = True
return values
@model_validator(mode='after')
def _check_regex_value(self):
if self.operator == SearchEventOperator.PATTERN:
for v in self.value:
check_regex(v)
return self
class _ProductAnalyticsFilter(BaseModel):
is_event: Literal[False] = False
@ -805,6 +925,13 @@ class _ProductAnalyticsFilter(BaseModel):
_remove_duplicate_values = field_validator('value', mode='before')(remove_duplicate_values)
@model_validator(mode='after')
def _check_regex_value(self):
if self.operator == SearchEventOperator.PATTERN:
for v in self.value:
check_regex(v)
return self
class _ProductAnalyticsEventFilter(BaseModel):
is_event: Literal[True] = True
@ -815,6 +942,13 @@ class _ProductAnalyticsEventFilter(BaseModel):
_remove_duplicate_values = field_validator('value', mode='before')(remove_duplicate_values)
@model_validator(mode='after')
def _check_regex_value(self):
if self.operator == SearchEventOperator.PATTERN:
for v in self.value:
check_regex(v)
return self
# this type is created to allow mixing events&filters and specifying a discriminator for PathAnalysis series filter
ProductAnalyticsFilter = Annotated[Union[_ProductAnalyticsFilter, _ProductAnalyticsEventFilter],
@ -960,36 +1094,6 @@ class CardSessionsSchema(_TimedSchema, _PaginatedSchema):
return self
# We don't need this as the UI is expecting filters to override the full series' filters
# @model_validator(mode="after")
# def __merge_out_filters_with_series(self):
# for f in self.filters:
# for s in self.series:
# found = False
#
# if f.is_event:
# sub = s.filter.events
# else:
# sub = s.filter.filters
#
# for e in sub:
# if f.type == e.type and f.operator == e.operator:
# found = True
# if f.is_event:
# # If extra event: append value
# for v in f.value:
# if v not in e.value:
# e.value.append(v)
# else:
# # If extra filter: override value
# e.value = f.value
# if not found:
# sub.append(f)
#
# self.filters = []
#
# return self
# UI is expecting filters to override the full series' filters
@model_validator(mode="after")
def __override_series_filters_with_outer_filters(self):
@ -1060,6 +1164,16 @@ class CardTable(__CardSchema):
values["metricValue"] = []
return values
@model_validator(mode="after")
def __enforce_AND_operator(self):
self.metric_of = MetricOfTable(self.metric_of)
if self.metric_of in (MetricOfTable.VISITED_URL, MetricOfTable.FETCH, \
MetricOfTable.VISITED_URL.value, MetricOfTable.FETCH.value):
for s in self.series:
if s.filter is not None:
s.filter.events_order = SearchEventOrder.AND
return self
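A sketch of the mutation pattern in __enforce_AND_operator above, an after-validator that rewrites nested series filters (simplified models; the "visitedUrl"/"fetch" strings are assumed stand-ins for the MetricOfTable values):
from typing import List, Optional
from pydantic import BaseModel, model_validator

class MiniSeriesFilter(BaseModel):
    events_order: str = "then"

class MiniSeries(BaseModel):
    filter: Optional[MiniSeriesFilter] = None

class MiniCardTable(BaseModel):
    metric_of: str
    series: List[MiniSeries] = []

    @model_validator(mode="after")
    def enforce_and_operator(self):
        # force AND ordering for URL/fetch tables, as in the diff
        if self.metric_of in ("visitedUrl", "fetch"):
            for s in self.series:
                if s.filter is not None:
                    s.filter.events_order = "and"
        return self

card = MiniCardTable(metric_of="visitedUrl",
                     series=[{"filter": {"events_order": "then"}}])
assert card.series[0].filter.events_order == "and"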
@model_validator(mode="after")
def __transform(self):
self.metric_of = MetricOfTable(self.metric_of)
@ -1135,7 +1249,7 @@ class CardPathAnalysis(__CardSchema):
view_type: MetricOtherViewType = Field(...)
metric_value: List[ProductAnalyticsSelectedEventType] = Field(default_factory=list)
density: int = Field(default=4, ge=2, le=10)
rows: int = Field(default=3, ge=1, le=10)
rows: int = Field(default=5, ge=1, le=10)
start_type: Literal["start", "end"] = Field(default="start")
start_point: List[PathAnalysisSubFilterSchema] = Field(default_factory=list)
@ -1279,6 +1393,13 @@ class LiveSessionSearchFilterSchema(BaseModel):
assert len(self.source) > 0, "source should not be empty for METADATA type"
return self
@model_validator(mode='after')
def _check_regex_value(self):
if self.operator == SearchEventOperator.PATTERN:
for v in self.value:
check_regex(v)
return self
class LiveSessionsSearchPayloadSchema(_PaginatedSchema):
filters: List[LiveSessionSearchFilterSchema] = Field([])
@ -1404,8 +1525,8 @@ class MetricSearchSchema(_PaginatedSchema):
mine_only: bool = Field(default=False)
class _HeatMapSearchEventRaw(SessionSearchEventSchema2):
type: Literal[EventType.LOCATION] = Field(...)
class _HeatMapSearchEventRaw(SessionSearchEventSchema):
type: Literal[EventType.LOCATION, EventType.CLICK_COORDINATES] = Field(...)
class HeatMapSessionsSearch(SessionsSearchPayloadSchema):
@ -1529,3 +1650,34 @@ class TagCreate(TagUpdate):
class ScopeSchema(BaseModel):
scope: int = Field(default=1, ge=1, le=2)
class SessionModel(BaseModel):
duration: int
errorsCount: int
eventsCount: int
issueScore: int
issueTypes: List[IssueType] = Field(default=[])
metadata: dict = Field(default={})
pagesCount: int
platform: str
projectId: int
sessionId: str
startTs: int
timezone: Optional[str]
userAnonymousId: Optional[str]
userBrowser: str
userCity: str
userCountry: str
userDevice: Optional[str]
userDeviceType: str
userId: Optional[str]
userOs: str
userState: str
userUuid: str
viewed: bool = Field(default=False)
class UsabilityTestQuery(_PaginatedSchema):
live: bool = Field(default=False)
user_id: Optional[str] = Field(default=None)
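UsabilityTestQuery is consumed by the Annotated[..., Query()] signatures earlier in this diff; a minimal sketch of that binding (simplified model; assumes a FastAPI version with query-parameter models, 0.115 or later):
from typing import Annotated, Optional
from fastapi import FastAPI, Query
from pydantic import BaseModel, Field

class MiniUsabilityTestQuery(BaseModel):  # simplified UsabilityTestQuery
    page: int = Field(default=1, gt=0)
    limit: int = Field(default=10, gt=0, le=200)
    live: bool = Field(default=False)
    user_id: Optional[str] = Field(default=None)

app = FastAPI()

@app.get("/tests/{test_id}/sessions")
async def get_sessions(test_id: int,
                       filter_query: Annotated[MiniUsabilityTestQuery, Query()]):
    # GET /tests/7/sessions?page=2&live=true binds and validates in one model
    return {"test_id": test_id, **filter_query.model_dump()}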

View file

@ -1,10 +1,11 @@
import re
from typing import Union, Any, Type
from pydantic import ValidationInfo
from .overrides import Enum
NAME_PATTERN = r"^[a-z,A-Z,0-9,\-,é,è,à,ç, ,|,&,\/,\\,_,.,#]*$"
NAME_PATTERN = r"^[a-z,A-Z,0-9,\-,é,è,à,ç, ,|,&,\/,\\,_,.,#,']*$"
def transform_email(email: str) -> str:
@ -57,3 +58,17 @@ def check_alphanumeric(v: str, info: ValidationInfo) -> str:
is_alphanumeric = v.replace(' ', '').isalnum()
assert is_alphanumeric, f'{info.field_name} must be alphanumeric'
return v
def check_regex(v: str) -> str:
assert v is not None, "Regex is null"
assert isinstance(v, str), "Regex value must be a string"
assert len(v) > 0, "Regex is empty"
is_valid = None
try:
re.compile(v)
except re.error as exc:
is_valid = f"Invalid regex: {exc} (at position {exc.pos})"
assert is_valid is None, is_valid
return v
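check_regex is wired into fields via AfterValidator and into the _check_regex_value model validators above; a minimal sketch of the field-level hookup (a simplified copy of the helper, assuming pydantic v2):
import re
from typing import Annotated
from pydantic import AfterValidator, BaseModel, ValidationError

def check_regex_sketch(v: str) -> str:  # same contract as check_regex above
    try:
        re.compile(v)
    except re.error as exc:
        raise AssertionError(f"Invalid regex: {exc} (at position {exc.pos})")
    return v

class Probe(BaseModel):
    pattern: Annotated[str, AfterValidator(check_regex_sketch)]

Probe(pattern=r"^/users/\d+$")  # compiles, accepted
try:
    Probe(pattern="[unclosed")  # re.compile raises re.error
except ValidationError as exc:
    print(exc.errors()[0]["msg"])  # surfaces the "Invalid regex" message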

assist-server/build.sh Normal file (61 lines)
View file

@ -0,0 +1,61 @@
#!/bin/bash
# Usage: IMAGE_TAG=latest DOCKER_REPO=myDockerHubID bash build.sh <ee>
ARCH=${ARCH:-amd64}
git_sha=$(git rev-parse --short HEAD)
image_tag=${IMAGE_TAG:-$git_sha}
check_prereq() {
which docker || {
echo "Docker not installed, please install docker."
exit 1
}
}
source ../scripts/lib/_docker.sh
[[ $PATCH -eq 1 ]] && {
image_tag="$(grep -ER ^.ppVersion ../scripts/helmcharts/openreplay/charts/$chart | xargs | awk '{print $2}' | awk -F. -v OFS=. '{$NF += 1 ; print}')"
image_tag="${image_tag}-ee"
}
update_helm_release() {
chart=$1
HELM_TAG="$(grep -iER ^version ../scripts/helmcharts/openreplay/charts/$chart | awk '{print $2}' | awk -F. -v OFS=. '{$NF += 1 ; print}')"
# Update the chart version
sed -i "s#^version.*#version: $HELM_TAG# g" ../scripts/helmcharts/openreplay/charts/$chart/Chart.yaml
# Update image tags
sed -i "s#ppVersion.*#ppVersion: \"$image_tag\"#g" ../scripts/helmcharts/openreplay/charts/$chart/Chart.yaml
# Commit the changes
git add ../scripts/helmcharts/openreplay/charts/$chart/Chart.yaml
git commit -m "chore(helm): Updating $chart image release"
}
function build_api() {
destination="_assist-server_ee"
[[ -d ../${destination} ]] && {
echo "Removing previous build cache"
rm -rf ../${destination}
}
cp -R ../assist-server ../${destination}
cd ../${destination} || exit 1
cp -rf ../ee/assist-server/* ./
docker build -f ./Dockerfile --build-arg GIT_SHA=$git_sha -t ${DOCKER_REPO:-'local'}/assist-server:${image_tag} .
cd ../assist-server || exit 1
rm -rf ../${destination}
[[ $PUSH_IMAGE -eq 1 ]] && {
docker push ${DOCKER_REPO:-'local'}/assist-server:${image_tag}
docker tag ${DOCKER_REPO:-'local'}/assist-server:${image_tag} ${DOCKER_REPO:-'local'}/assist-server:latest
docker push ${DOCKER_REPO:-'local'}/assist-server:latest
}
[[ $SIGN_IMAGE -eq 1 ]] && {
cosign sign --key $SIGN_KEY ${DOCKER_REPO:-'local'}/assist-server:${image_tag}
}
echo "build completed for assist-server"
}
check_prereq
build_api $1
if [[ $PATCH -eq 1 ]]; then
update_helm_release assist-server
fi

View file

@ -19,14 +19,16 @@ const EVENTS_DEFINITION = {
}
};
EVENTS_DEFINITION.emit = {
NEW_AGENT: "NEW_AGENT",
NO_AGENTS: "NO_AGENT",
AGENT_DISCONNECT: "AGENT_DISCONNECTED",
AGENTS_CONNECTED: "AGENTS_CONNECTED",
NO_SESSIONS: "SESSION_DISCONNECTED",
SESSION_ALREADY_CONNECTED: "SESSION_ALREADY_CONNECTED",
SESSION_RECONNECTED: "SESSION_RECONNECTED",
UPDATE_EVENT: EVENTS_DEFINITION.listen.UPDATE_EVENT
NEW_AGENT: "NEW_AGENT",
NO_AGENTS: "NO_AGENT",
AGENT_DISCONNECT: "AGENT_DISCONNECTED",
AGENTS_CONNECTED: "AGENTS_CONNECTED",
AGENTS_INFO_CONNECTED: "AGENTS_INFO_CONNECTED",
NO_SESSIONS: "SESSION_DISCONNECTED",
SESSION_ALREADY_CONNECTED: "SESSION_ALREADY_CONNECTED",
SESSION_RECONNECTED: "SESSION_RECONNECTED",
UPDATE_EVENT: EVENTS_DEFINITION.listen.UPDATE_EVENT,
WEBRTC_CONFIG: "WEBRTC_CONFIG",
};
const BASE_sessionInfo = {

View file

@ -27,9 +27,14 @@ const respond = function (req, res, data) {
res.setHeader('Content-Type', 'application/json');
res.end(JSON.stringify(result));
} else {
res.cork(() => {
res.writeStatus('200 OK').writeHeader('Content-Type', 'application/json').end(JSON.stringify(result));
});
if (!res.aborted) {
res.cork(() => {
res.writeStatus('200 OK').writeHeader('Content-Type', 'application/json').end(JSON.stringify(result));
});
} else {
logger.debug("response aborted");
return;
}
}
const duration = performance.now() - req.startTs;
IncreaseTotalRequests();

View file

@ -42,7 +42,7 @@ const findSessionSocketId = async (io, roomId, tabId) => {
};
async function getRoomData(io, roomID) {
let tabsCount = 0, agentsCount = 0, tabIDs = [], agentIDs = [];
let tabsCount = 0, agentsCount = 0, tabIDs = [], agentIDs = [], config = null, agentInfos = [];
const connected_sockets = await io.in(roomID).fetchSockets();
if (connected_sockets.length > 0) {
for (let socket of connected_sockets) {
@ -52,13 +52,19 @@ async function getRoomData(io, roomID) {
} else {
agentsCount++;
agentIDs.push(socket.id);
agentInfos.push({ ...socket.handshake.query.agentInfo, socketId: socket.id });
if (socket.handshake.query.config !== undefined) {
config = socket.handshake.query.config;
}
}
}
} else {
tabsCount = -1;
agentsCount = -1;
agentInfos = [];
agentIDs = [];
}
return {tabsCount, agentsCount, tabIDs, agentIDs};
return {tabsCount, agentsCount, tabIDs, agentIDs, config, agentInfos};
}
function processNewSocket(socket) {
@ -78,7 +84,7 @@ async function onConnect(socket) {
IncreaseOnlineConnections(socket.handshake.query.identity);
const io = getServer();
const {tabsCount, agentsCount, tabIDs, agentIDs} = await getRoomData(io, socket.handshake.query.roomId);
const {tabsCount, agentsCount, tabIDs, agentInfos, agentIDs, config} = await getRoomData(io, socket.handshake.query.roomId);
if (socket.handshake.query.identity === IDENTITIES.session) {
// Check if a session with the same tabID is already connected; if so, refuse the new connection
@ -100,7 +106,9 @@ async function onConnect(socket) {
// Inform all connected agents about reconnected session
if (agentsCount > 0) {
logger.debug(`notifying new session about agent-existence`);
io.to(socket.id).emit(EVENTS_DEFINITION.emit.WEBRTC_CONFIG, config);
io.to(socket.id).emit(EVENTS_DEFINITION.emit.AGENTS_CONNECTED, agentIDs);
io.to(socket.id).emit(EVENTS_DEFINITION.emit.AGENTS_INFO_CONNECTED, agentInfos);
socket.to(socket.handshake.query.roomId).emit(EVENTS_DEFINITION.emit.SESSION_RECONNECTED, socket.id);
}
} else if (tabsCount <= 0) {
@ -118,7 +126,8 @@ async function onConnect(socket) {
// Stats
startAssist(socket, socket.handshake.query.agentID);
}
socket.to(socket.handshake.query.roomId).emit(EVENTS_DEFINITION.emit.NEW_AGENT, socket.id, socket.handshake.query.agentInfo);
io.to(socket.handshake.query.roomId).emit(EVENTS_DEFINITION.emit.WEBRTC_CONFIG, socket.handshake.query.config);
socket.to(socket.handshake.query.roomId).emit(EVENTS_DEFINITION.emit.NEW_AGENT, socket.id, { ...socket.handshake.query.agentInfo });
}
// Set disconnect handler

backend/Makefile Normal file (30 lines)
View file

@ -0,0 +1,30 @@
ee ?= "false" # true to build ee
app ?= "" # app name, default all
arch ?= "amd64" # default amd64
docker_runtime ?= "docker" # default docker runtime
.PHONY: help
help: ## Prints help for targets with comments
@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m\n"} /^[a-zA-Z_0-9-]+:.*?##/ { printf " \033[36m%-25s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)
##@ Docker
.PHONY: build
build: ## Build the backend. ee=true for an EE build; app=<name> builds a single app (default: all apps)
ARCH=$(arch) DOCKER_RUNTIME=$(docker_runtime) bash build.sh $(ee) $(app)
##@ Local Dev
.PHONY: scan
scan: ## Scan the backend
@trivy fs -q .
.PHONY: update
update: ## Update the backend dependencies
@echo Updating dependencies
@go get -u -v ./...
@go mod tidy
run: ## Run the backend. app=<app_name> selects the app to run
@if [ "$(app)" = "" ]; then echo "Error: app parameter is required. Usage: make run app=<app_name>"; exit 1; fi
@go run "cmd/$(app)/main.go"

View file

@ -2,12 +2,14 @@ package main
import (
"context"
analyticsConfig "openreplay/backend/internal/config/analytics"
"openreplay/backend/pkg/analytics"
"openreplay/backend/pkg/analytics/db"
"openreplay/backend/pkg/db/postgres/pool"
"openreplay/backend/pkg/logger"
"openreplay/backend/pkg/metrics"
//analyticsMetrics "openreplay/backend/pkg/metrics/analytics"
//databaseMetrics "openreplay/backend/pkg/metrics/database"
"openreplay/backend/pkg/metrics/database"
"openreplay/backend/pkg/metrics/web"
"openreplay/backend/pkg/server"
@ -18,7 +20,6 @@ func main() {
ctx := context.Background()
log := logger.New()
cfg := analyticsConfig.New(log)
// Observability
webMetrics := web.New("analytics")
dbMetrics := database.New("analytics")
metrics.New(log, append(webMetrics.List(), dbMetrics.List()...))
@ -29,7 +30,13 @@ func main() {
}
defer pgConn.Close()
builder, err := analytics.NewServiceBuilder(log, cfg, webMetrics, dbMetrics, pgConn)
chConn, err := db.NewConnector(cfg.Clickhouse)
if err != nil {
log.Fatal(ctx, "can't init clickhouse connection: %s", err)
}
defer chConn.Stop()
builder, err := analytics.NewServiceBuilder(log, cfg, webMetrics, dbMetrics, pgConn, chConn)
if err != nil {
log.Fatal(ctx, "can't init services: %s", err)
}

View file

@ -66,7 +66,7 @@ func main() {
messages.MsgMetadata, messages.MsgIssueEvent, messages.MsgSessionStart, messages.MsgSessionEnd,
messages.MsgUserID, messages.MsgUserAnonymousID, messages.MsgIntegrationEvent, messages.MsgPerformanceTrackAggr,
messages.MsgJSException, messages.MsgResourceTiming, messages.MsgCustomEvent, messages.MsgCustomIssue,
messages.MsgNetworkRequest, messages.MsgGraphQL, messages.MsgStateAction, messages.MsgMouseClick,
messages.MsgFetch, messages.MsgNetworkRequest, messages.MsgGraphQL, messages.MsgStateAction, messages.MsgMouseClick,
messages.MsgMouseClickDeprecated, messages.MsgSetPageLocation, messages.MsgSetPageLocationDeprecated,
messages.MsgPageLoadTiming, messages.MsgPageRenderTiming,
messages.MsgPageEvent, messages.MsgPageEventDeprecated, messages.MsgMouseThrashing, messages.MsgInputChange,

View file

@ -100,6 +100,7 @@ func main() {
// Process assets
if msg.TypeID() == messages.MsgSetNodeAttributeURLBased ||
msg.TypeID() == messages.MsgSetCSSDataURLBased ||
msg.TypeID() == messages.MsgCSSInsertRuleURLBased ||
msg.TypeID() == messages.MsgAdoptedSSReplaceURLBased ||
msg.TypeID() == messages.MsgAdoptedSSInsertRuleURLBased {
m := msg.Decode()

View file

@ -1,52 +1,54 @@
module openreplay/backend
go 1.23
go 1.23.0
toolchain go1.23.1
require (
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.17.0
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.0
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.0
github.com/ClickHouse/clickhouse-go/v2 v2.32.1
github.com/DataDog/datadog-api-client-go/v2 v2.34.0
github.com/ClickHouse/clickhouse-go/v2 v2.34.0
github.com/DataDog/datadog-api-client-go/v2 v2.37.1
github.com/Masterminds/semver v1.5.0
github.com/andybalholm/brotli v1.1.1
github.com/aws/aws-sdk-go v1.55.6
github.com/btcsuite/btcutil v1.0.2
github.com/confluentinc/confluent-kafka-go/v2 v2.8.0
github.com/confluentinc/confluent-kafka-go/v2 v2.10.0
github.com/docker/distribution v2.8.3+incompatible
github.com/elastic/go-elasticsearch/v7 v7.17.10
github.com/elastic/go-elasticsearch/v8 v8.17.0
github.com/getsentry/sentry-go v0.31.1
github.com/go-playground/validator/v10 v10.24.0
github.com/elastic/go-elasticsearch/v8 v8.18.0
github.com/getsentry/sentry-go v0.32.0
github.com/go-playground/validator/v10 v10.26.0
github.com/go-redis/redis v6.15.9+incompatible
github.com/golang-jwt/jwt/v5 v5.2.1
github.com/golang-jwt/jwt/v5 v5.2.2
github.com/google/uuid v1.6.0
github.com/gorilla/mux v1.8.1
github.com/jackc/pgconn v1.14.3
github.com/jackc/pgerrcode v0.0.0-20240316143900-6e2875d9b438
github.com/jackc/pgtype v1.14.4
github.com/jackc/pgx/v4 v4.18.3
github.com/klauspost/compress v1.17.11
github.com/klauspost/compress v1.18.0
github.com/klauspost/pgzip v1.2.6
github.com/lib/pq v1.10.9
github.com/oschwald/maxminddb-golang v1.13.1
github.com/pkg/errors v0.9.1
github.com/prometheus/client_golang v1.20.5
github.com/prometheus/client_golang v1.22.0
github.com/rs/xid v1.6.0
github.com/sethvargo/go-envconfig v1.1.0
github.com/sethvargo/go-envconfig v1.2.0
github.com/tomasen/realip v0.0.0-20180522021738-f0c99a92ddce
github.com/ua-parser/uap-go v0.0.0-20250126222208-a52596c19dff
github.com/ua-parser/uap-go v0.0.0-20250326155420-f7f5a2f9f5bc
go.uber.org/zap v1.27.0
golang.org/x/net v0.35.0
golang.org/x/net v0.39.0
)
require (
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 // indirect
github.com/ClickHouse/ch-go v0.65.0 // indirect
github.com/DataDog/zstd v1.5.6 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1 // indirect
github.com/ClickHouse/ch-go v0.65.1 // indirect
github.com/DataDog/zstd v1.5.7 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/elastic/elastic-transport-go/v8 v8.6.0 // indirect
github.com/gabriel-vasile/mimetype v1.4.8 // indirect
github.com/elastic/elastic-transport-go/v8 v8.7.0 // indirect
github.com/gabriel-vasile/mimetype v1.4.9 // indirect
github.com/go-faster/city v1.0.1 // indirect
github.com/go-faster/errors v0.7.1 // indirect
github.com/go-logr/logr v1.4.2 // indirect
@ -66,23 +68,23 @@ require (
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/paulmach/orb v0.11.1 // indirect
github.com/pierrec/lz4/v4 v4.1.22 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.62.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/common v0.63.0 // indirect
github.com/prometheus/procfs v0.16.0 // indirect
github.com/segmentio/asm v1.2.0 // indirect
github.com/shopspring/decimal v1.4.0 // indirect
github.com/sirupsen/logrus v1.9.3 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/otel v1.34.0 // indirect
go.opentelemetry.io/otel/metric v1.34.0 // indirect
go.opentelemetry.io/otel/trace v1.34.0 // indirect
go.opentelemetry.io/otel v1.35.0 // indirect
go.opentelemetry.io/otel/metric v1.35.0 // indirect
go.opentelemetry.io/otel/trace v1.35.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
golang.org/x/crypto v0.33.0 // indirect
golang.org/x/oauth2 v0.25.0 // indirect
golang.org/x/sys v0.30.0 // indirect
golang.org/x/text v0.22.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250127172529-29210b9bc287 // indirect
google.golang.org/protobuf v1.36.4 // indirect
golang.org/x/crypto v0.37.0 // indirect
golang.org/x/oauth2 v0.29.0 // indirect
golang.org/x/sys v0.32.0 // indirect
golang.org/x/text v0.24.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250414145226-207652e42e2e // indirect
google.golang.org/protobuf v1.36.6 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

View file

@ -6,10 +6,17 @@ github.com/AlecAivazis/survey/v2 v2.3.7 h1:6I/u8FvytdGsgonrYsVn2t8t4QiRnh6QSTqkk
github.com/AlecAivazis/survey/v2 v2.3.7/go.mod h1:xUTIdE4KCOIjsBAE1JYsUPoCqYdZ1reCfTwbto0Fduo=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.17.0 h1:g0EZJwz7xkXQiZAI5xi9f3WWFYBlX1CPTrR+NDToRkQ=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.17.0/go.mod h1:XCW7KnZet0Opnr7HccfUw1PLc4CjHqpcaxW8DHklNkQ=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.0 h1:Gt0j3wceWMwPmiazCa8MzMA0MfhmPIz0Qp0FJ6qcM0U=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.0/go.mod h1:Ot/6aikWnKWi4l9QB7qVSwa8iMphQNqkWALMoNT3rzM=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.8.0 h1:B/dfvscEQtew9dVuoxqxrUKKv8Ih2f55PydknDamU+g=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.8.0/go.mod h1:fiPSssYvltE08HJchL04dOy+RD4hgrjph0cwGGMntdI=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.8.2 h1:F0gBpfdPLGsw+nsgk6aqqkZS1jiixa5WwFe3fk/T3Ys=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 h1:ywEEhmNahHBihViHepv3xPBn1663uRv2t2q/ESv9seY=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0/go.mod h1:iZDifYGJTIgIIkYRNWPENUnqx6bJ2xnSDFI2tjwZNuY=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.0 h1:Bg8m3nq/X1DeePkAbCfb6ml6F3F0IunEhE8TMh+lY48=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.0/go.mod h1:j2chePtV91HrC22tGoRX3sGY42uF13WzmmV80/OdVAA=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1 h1:FPKJS1T+clwv+OLGt13a8UjqeRuh0O4SJ3lUriThc+4=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1/go.mod h1:j2chePtV91HrC22tGoRX3sGY42uF13WzmmV80/OdVAA=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.6.0 h1:PiSrjRPpkQNjrM8H0WwKMnZUdu1RGMtd/LdGKUrOo+c=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.6.0/go.mod h1:oDrbWx4ewMylP7xHivfgixbfGBT6APAwsSoHRKotnIc=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.0 h1:UXT0o77lXQrikd1kgwIPQOUect7EoR/+sbP4wQKdzxM=
@ -18,19 +25,28 @@ github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 h1:UQHMgLO+TxOEl
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/AzureAD/microsoft-authentication-library-for-go v1.3.2 h1:kYRSnvJju5gYVyhkij+RTJ/VR6QIUaCfWeaFm2ycsjQ=
github.com/AzureAD/microsoft-authentication-library-for-go v1.3.2/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2 h1:oygO0locgZJe7PpYPXT5A29ZkwJaPqcva7BVeemZOZs=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/ClickHouse/ch-go v0.63.1 h1:s2JyZvWLTCSAGdtjMBBmAgQQHMco6pawLJMOXi0FODM=
github.com/ClickHouse/ch-go v0.63.1/go.mod h1:I1kJJCL3WJcBMGe1m+HVK0+nREaG+JOYYBWjrDrF3R0=
github.com/ClickHouse/ch-go v0.65.0 h1:vZAXfTQliuNNefqkPDewX3kgRxN6Q4vUENnnY+ynTRY=
github.com/ClickHouse/ch-go v0.65.0/go.mod h1:tCM0XEH5oWngoi9Iu/8+tjPBo04I/FxNIffpdjtwx3k=
github.com/ClickHouse/ch-go v0.65.1 h1:SLuxmLl5Mjj44/XbINsK2HFvzqup0s6rwKLFH347ZhU=
github.com/ClickHouse/ch-go v0.65.1/go.mod h1:bsodgURwmrkvkBe5jw1qnGDgyITsYErfONKAHn05nv4=
github.com/ClickHouse/clickhouse-go/v2 v2.30.1 h1:Dy0n0l+cMbPXs8hFkeeWGaPKrB+MDByUNQBSmRO3W6k=
github.com/ClickHouse/clickhouse-go/v2 v2.30.1/go.mod h1:szk8BMoQV/NgHXZ20ZbwDyvPWmpfhRKjFkc6wzASGxM=
github.com/ClickHouse/clickhouse-go/v2 v2.32.1 h1:RLhkxA6iH/bLTXeDtEj/u4yUx9Q03Y95P+cjHScQK78=
github.com/ClickHouse/clickhouse-go/v2 v2.32.1/go.mod h1:YtaiIFlHCGNPbOpAvFGYobtcVnmgYvD/WmzitixxWYc=
github.com/ClickHouse/clickhouse-go/v2 v2.34.0 h1:Y4rqkdrRHgExvC4o/NTbLdY5LFQ3LHS77/RNFxFX3Co=
github.com/ClickHouse/clickhouse-go/v2 v2.34.0/go.mod h1:yioSINoRLVZkLyDzdMXPLRIqhDvel8iLBlwh6Iefso8=
github.com/DataDog/datadog-api-client-go/v2 v2.34.0 h1:0VVmv8uZg8vdBuEpiF2nBGUezl2QITrxdEsLgh38j8M=
github.com/DataDog/datadog-api-client-go/v2 v2.34.0/go.mod h1:d3tOEgUd2kfsr9uuHQdY+nXrWp4uikgTgVCPdKNK30U=
github.com/DataDog/datadog-api-client-go/v2 v2.37.1 h1:weZhrGMO//sMEoSKWngoSQwMp4zBSlEX4p3/YWy9ltw=
github.com/DataDog/datadog-api-client-go/v2 v2.37.1/go.mod h1:d3tOEgUd2kfsr9uuHQdY+nXrWp4uikgTgVCPdKNK30U=
github.com/DataDog/zstd v1.5.6 h1:LbEglqepa/ipmmQJUDnSsfvA8e8IStVcGaFWDuxvGOY=
github.com/DataDog/zstd v1.5.6/go.mod h1:g4AWEaM3yOg3HYfnJ3YIawPnVdXJh9QME85blwSAmyw=
github.com/DataDog/zstd v1.5.7 h1:ybO8RBeh29qrxIhCA9E8gKY6xfONU9T6G6aP9DTKfLE=
github.com/DataDog/zstd v1.5.7/go.mod h1:g4AWEaM3yOg3HYfnJ3YIawPnVdXJh9QME85blwSAmyw=
github.com/Masterminds/semver v1.5.0 h1:H65muMkzWKEuNDnfl9d70GUjFniHKHRbFPGBuZ3QEww=
github.com/Masterminds/semver v1.5.0/go.mod h1:MB6lktGJrhw8PrUyiEoblNEGEQ+RzHPF078ddwwvV3Y=
github.com/Masterminds/semver/v3 v3.1.1/go.mod h1:VPu/7SZ7ePZ3QOrcuXROw5FAcLl4a0cBrbBpGY/8hQs=
@ -97,6 +113,8 @@ github.com/compose-spec/compose-go/v2 v2.1.3 h1:bD67uqLuL/XgkAK6ir3xZvNLFPxPScEi
github.com/compose-spec/compose-go/v2 v2.1.3/go.mod h1:lFN0DrMxIncJGYAXTfWuajfwj5haBJqrBkarHcnjJKc=
github.com/confluentinc/confluent-kafka-go/v2 v2.8.0 h1:0HlcSNWg4LpLA9nIjzUMIqWHI+w0S68UN7alXAc3TeA=
github.com/confluentinc/confluent-kafka-go/v2 v2.8.0/go.mod h1:hScqtFIGUI1wqHIgM3mjoqEou4VweGGGX7dMpcUKves=
github.com/confluentinc/confluent-kafka-go/v2 v2.10.0 h1:TK5CH5RbIj/aVfmJFEsDUT6vD2izac2zmA5BUfAOxC0=
github.com/confluentinc/confluent-kafka-go/v2 v2.10.0/go.mod h1:hScqtFIGUI1wqHIgM3mjoqEou4VweGGGX7dMpcUKves=
github.com/containerd/console v1.0.4 h1:F2g4+oChYvBTsASRTz8NP6iIAi97J3TtSAsLbIFn4ro=
github.com/containerd/console v1.0.4/go.mod h1:YynlIjWYF8myEu6sdkwKIvGQq+cOckRm6So2avqoYAk=
github.com/containerd/containerd v1.7.18 h1:jqjZTQNfXGoEaZdW1WwPU0RqSn1Bm2Ay/KJPUuO8nao=
@ -148,10 +166,14 @@ github.com/eiannone/keyboard v0.0.0-20220611211555-0d226195f203 h1:XBBHcIb256gUJ
github.com/eiannone/keyboard v0.0.0-20220611211555-0d226195f203/go.mod h1:E1jcSv8FaEny+OP/5k9UxZVw9YFWGj7eI4KR/iOBqCg=
github.com/elastic/elastic-transport-go/v8 v8.6.0 h1:Y2S/FBjx1LlCv5m6pWAF2kDJAHoSjSRSJCApolgfthA=
github.com/elastic/elastic-transport-go/v8 v8.6.0/go.mod h1:YLHer5cj0csTzNFXoNQ8qhtGY1GTvSqPnKWKaqQE3Hk=
github.com/elastic/elastic-transport-go/v8 v8.7.0 h1:OgTneVuXP2uip4BA658Xi6Hfw+PeIOod2rY3GVMGoVE=
github.com/elastic/elastic-transport-go/v8 v8.7.0/go.mod h1:YLHer5cj0csTzNFXoNQ8qhtGY1GTvSqPnKWKaqQE3Hk=
github.com/elastic/go-elasticsearch/v7 v7.17.10 h1:TCQ8i4PmIJuBunvBS6bwT2ybzVFxxUhhltAs3Gyu1yo=
github.com/elastic/go-elasticsearch/v7 v7.17.10/go.mod h1:OJ4wdbtDNk5g503kvlHLyErCgQwwzmDtaFC4XyOxXA4=
github.com/elastic/go-elasticsearch/v8 v8.17.0 h1:e9cWksE/Fr7urDRmGPGp47Nsp4/mvNOrU8As1l2HQQ0=
github.com/elastic/go-elasticsearch/v8 v8.17.0/go.mod h1:lGMlgKIbYoRvay3xWBeKahAiJOgmFDsjZC39nmO3H64=
github.com/elastic/go-elasticsearch/v8 v8.18.0 h1:ANNq1h7DEiPUaALb8+5w3baQzaS08WfHV0DNzp0VG4M=
github.com/elastic/go-elasticsearch/v8 v8.18.0/go.mod h1:WLqwXsJmQoYkoA9JBFeEwPkQhCfAZuUvfpdU/NvSSf0=
github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g=
github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
@ -163,8 +185,12 @@ github.com/fvbommel/sortorder v1.0.2 h1:mV4o8B2hKboCdkJm+a7uX/SIpZob4JzUpc5GGnM4
github.com/fvbommel/sortorder v1.0.2/go.mod h1:uk88iVf1ovNn1iLfgUVU2F9o5eO30ui720w+kxuqRs0=
github.com/gabriel-vasile/mimetype v1.4.8 h1:FfZ3gj38NjllZIeJAmMhr+qKL8Wu+nOoI3GqacKw1NM=
github.com/gabriel-vasile/mimetype v1.4.8/go.mod h1:ByKUIKGjh1ODkGM1asKUbQZOLGrPjydw3hYPU2YU9t8=
github.com/gabriel-vasile/mimetype v1.4.9 h1:5k+WDwEsD9eTLL8Tz3L0VnmVh9QxGjRmjBvAG7U/oYY=
github.com/gabriel-vasile/mimetype v1.4.9/go.mod h1:WnSQhFKJuBlRyLiKohA/2DtIlPFAbguNaG7QCHcyGok=
github.com/getsentry/sentry-go v0.31.1 h1:ELVc0h7gwyhnXHDouXkhqTFSO5oslsRDk0++eyE0KJ4=
github.com/getsentry/sentry-go v0.31.1/go.mod h1:CYNcMMz73YigoHljQRG+qPF+eMq8gG72XcGN/p71BAY=
github.com/getsentry/sentry-go v0.32.0 h1:YKs+//QmwE3DcYtfKRH8/KyOOF/I6Qnx7qYGNHCGmCY=
github.com/getsentry/sentry-go v0.32.0/go.mod h1:CYNcMMz73YigoHljQRG+qPF+eMq8gG72XcGN/p71BAY=
github.com/go-errors/errors v1.4.2 h1:J6MZopCL4uSllY1OfXM374weqZFFItUbrImctkmUxIA=
github.com/go-errors/errors v1.4.2/go.mod h1:sIVyrIiJhuEF+Pj9Ebtd6P/rEYROXFi3BopGUQ5a5Og=
github.com/go-faster/city v1.0.1 h1:4WAxSZ3V2Ws4QRDrscLEDcibJY8uf41H6AhXDrNDcGw=
@ -194,6 +220,8 @@ github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJn
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
github.com/go-playground/validator/v10 v10.24.0 h1:KHQckvo8G6hlWnrPX4NJJ+aBfWNAE/HH+qdL2cBpCmg=
github.com/go-playground/validator/v10 v10.24.0/go.mod h1:GGzBIJMuE98Ic/kJsBXbz1x/7cByt++cQ+YOuDM5wus=
github.com/go-playground/validator/v10 v10.26.0 h1:SP05Nqhjcvz81uJaRfEV0YBSSSGMc/iMaVtFbr3Sw2k=
github.com/go-playground/validator/v10 v10.26.0/go.mod h1:I5QpIEbmr8On7W0TktmJAumgzX4CA1XNl4ZmDuVHKKo=
github.com/go-redis/redis v6.15.9+incompatible h1:K0pv1D7EQUjfyoMql+r/jZqCLizCGKFlFgcHWWmHQjg=
github.com/go-redis/redis v6.15.9+incompatible/go.mod h1:NAIEuMOZ/fxfXJIrKDQDz8wamY7mA7PouImQ2Jvg6kA=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
@ -211,6 +239,8 @@ github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v5 v5.2.1 h1:OuVbFODueb089Lh128TAcimifWaLhJwVflnrgM17wHk=
github.com/golang-jwt/jwt/v5 v5.2.1/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8=
github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
@ -222,6 +252,7 @@ github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
@ -328,6 +359,8 @@ github.com/kkdai/bstream v0.0.0-20161212061736-f391b8402d23/go.mod h1:J+Gs4SYgM6
github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/klauspost/compress v1.17.11 h1:In6xLpyWOi1+C7tXUUWv2ot1QvBjxevKAaI6IXrJmUc=
github.com/klauspost/compress v1.17.11/go.mod h1:pMDklpSncoRMuLFrf1W9Ss9KT+0rH90U12bZKk7uwG0=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/klauspost/pgzip v1.2.6 h1:8RXeL5crjEUFnR2/Sn6GJNWtSQ3Dk8pq4CL3jvdDyjU=
github.com/klauspost/pgzip v1.2.6/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
@ -441,12 +474,20 @@ github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c h1:ncq/mPwQF
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y=
github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prometheus/client_golang v1.22.0 h1:rb93p9lokFEsctTys46VnV1kLCDpVZ0a/Y92Vm0Zc6Q=
github.com/prometheus/client_golang v1.22.0/go.mod h1:R7ljNsLXhuQXYZYtw6GAE9AZg8Y7vEW5scdCXrWRXC0=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.62.0 h1:xasJaQlnWAeyHdUBeGjXmutelfJHWMRr+Fg4QszZ2Io=
github.com/prometheus/common v0.62.0/go.mod h1:vyBcEuLSvWos9B1+CyL7JZ2up+uFzXhkqml0W5zIY1I=
github.com/prometheus/common v0.63.0 h1:YR/EIY1o3mEFP/kZCD7iDMnLPlGyuU2Gb3HIcXnA98k=
github.com/prometheus/common v0.63.0/go.mod h1:VVFF/fBIoToEnWRVkYoXEkq3R3paCoxG9PXP74SnV18=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/prometheus/procfs v0.16.0 h1:xh6oHhKwnOJKMYiYBDWmkHqQPyiY40sny36Cmx2bbsM=
github.com/prometheus/procfs v0.16.0/go.mod h1:8veyXUu3nGP7oaCxhX6yeaM5u4stL2FeMXnCqhDthZg=
github.com/r3labs/sse v0.0.0-20210224172625-26fe804710bc h1:zAsgcP8MhzAbhMnB1QQ2O7ZhWYVGYSR2iVcjzQuPV+o=
github.com/r3labs/sse v0.0.0-20210224172625-26fe804710bc/go.mod h1:S8xSOnV3CgpNrWd0GQ/OoQfMtlg2uPRSuTzcSGrzwK8=
github.com/rivo/uniseg v0.2.0 h1:S1pD9weZBuJdFmowNwbpi7BJ8TNftyUImj/0WQi72jY=
@ -468,6 +509,8 @@ github.com/serialx/hashring v0.0.0-20200727003509-22c0c7ab6b1b h1:h+3JX2VoWTFuyQ
github.com/serialx/hashring v0.0.0-20200727003509-22c0c7ab6b1b/go.mod h1:/yeG0My1xr/u+HZrFQ1tOQQQQrOawfyMUH13ai5brBc=
github.com/sethvargo/go-envconfig v1.1.0 h1:cWZiJxeTm7AlCvzGXrEXaSTCNgip5oJepekh/BOQuog=
github.com/sethvargo/go-envconfig v1.1.0/go.mod h1:JLd0KFWQYzyENqnEPWWZ49i4vzZo/6nRidxI8YvGiHw=
github.com/sethvargo/go-envconfig v1.2.0 h1:q3XkOZWkC+G1sMLCrw9oPGTjYexygLOXDmGUit1ti8Q=
github.com/sethvargo/go-envconfig v1.2.0/go.mod h1:JLd0KFWQYzyENqnEPWWZ49i4vzZo/6nRidxI8YvGiHw=
github.com/shibumi/go-pathspec v1.3.0 h1:QUyMZhFo0Md5B8zV8x2tesohbb5kfbpTi9rBnKh5dkI=
github.com/shibumi/go-pathspec v1.3.0/go.mod h1:Xutfslp817l2I1cZvgcfeMQJG5QnU2lh5tVaaMCl3jE=
github.com/shirou/gopsutil v3.21.11+incompatible h1:+1+c1VGhc88SSonWP6foOcLhvnKlUeu/erjjvaPEYiI=
@ -528,6 +571,8 @@ github.com/tonistiigi/vt100 v0.0.0-20240514184818-90bafcd6abab h1:H6aJ0yKQ0gF49Q
github.com/tonistiigi/vt100 v0.0.0-20240514184818-90bafcd6abab/go.mod h1:ulncasL3N9uLrVann0m+CDlJKWsIAP34MPcOJF6VRvc=
github.com/ua-parser/uap-go v0.0.0-20250126222208-a52596c19dff h1:NwMEGwb7JJ8wPjT8OPKP5hO1Xz6AQ7Z00+GLSJfW21s=
github.com/ua-parser/uap-go v0.0.0-20250126222208-a52596c19dff/go.mod h1:BUbeWZiieNxAuuADTBNb3/aeje6on3DhU3rpWsQSB1E=
github.com/ua-parser/uap-go v0.0.0-20250326155420-f7f5a2f9f5bc h1:reH9QQKGFOq39MYOvU9+SYrB8uzXtWNo51fWK3g0gGc=
github.com/ua-parser/uap-go v0.0.0-20250326155420-f7f5a2f9f5bc/go.mod h1:gwANdYmo9R8LLwGnyDFWK2PMsaXXX2HhAvCnb/UhZsM=
github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
github.com/xdg-go/scram v1.1.1/go.mod h1:RaEWvsqvNKKvBPvcKeFjrG2cJqOkHTiyTpzz23ni57g=
github.com/xdg-go/stringprep v1.0.3/go.mod h1:W3f5j4i+9rC0kuIEJL0ky1VpHXQU3ocBgklLGvcBnW8=
@ -557,6 +602,8 @@ go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 h1:jq9TW8u
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0/go.mod h1:p8pYQP+m5XfbZm9fxtSKAbM6oIllS7s2AfxrChvc7iw=
go.opentelemetry.io/otel v1.34.0 h1:zRLXxLCgL1WyKsPVrgbSdMN4c0FMkDAskSTQP+0hdUY=
go.opentelemetry.io/otel v1.34.0/go.mod h1:OWFPOQ+h4G8xpyjgqo4SxJYdDQ/qmRH+wivy7zzx9oI=
go.opentelemetry.io/otel v1.35.0 h1:xKWKPxrxB6OtMCbmMY021CqC45J+3Onta9MqjhnusiQ=
go.opentelemetry.io/otel v1.35.0/go.mod h1:UEqy8Zp11hpkUrL73gSlELM0DupHoiq72dR+Zqel/+Y=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric v0.42.0 h1:ZtfnDL+tUrs1F0Pzfwbg2d59Gru9NCH3bgSHBM6LDwU=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric v0.42.0/go.mod h1:hG4Fj/y8TR/tlEDREo8tWstl9fO9gcFkn4xrx0Io8xU=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v0.42.0 h1:NmnYCiR0qNufkldjVvyQfZTHSdzeHoZ41zggMsdMcLM=
@ -571,12 +618,16 @@ go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.21.0 h1:digkE
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.21.0/go.mod h1:/OpE/y70qVkndM0TrxT4KBoN3RsFZP0QaofcfYrj76I=
go.opentelemetry.io/otel/metric v1.34.0 h1:+eTR3U0MyfWjRDhmFMxe2SsW64QrZ84AOhvqS7Y+PoQ=
go.opentelemetry.io/otel/metric v1.34.0/go.mod h1:CEDrp0fy2D0MvkXE+dPV7cMi8tWZwX3dmaIhwPOaqHE=
go.opentelemetry.io/otel/metric v1.35.0 h1:0znxYu2SNyuMSQT4Y9WDWej0VpcsxkuklLa4/siN90M=
go.opentelemetry.io/otel/metric v1.35.0/go.mod h1:nKVFgxBZ2fReX6IlyW28MgZojkoAkJGaE8CpgeAU3oE=
go.opentelemetry.io/otel/sdk v1.24.0 h1:YMPPDNymmQN3ZgczicBY3B6sf9n62Dlj9pWD3ucgoDw=
go.opentelemetry.io/otel/sdk v1.24.0/go.mod h1:KVrIYw6tEubO9E96HQpcmpTKDVn9gdv35HoYiQWGDFg=
go.opentelemetry.io/otel/sdk/metric v1.21.0 h1:smhI5oD714d6jHE6Tie36fPx4WDFIg+Y6RfAY4ICcR0=
go.opentelemetry.io/otel/sdk/metric v1.21.0/go.mod h1:FJ8RAsoPGv/wYMgBdUJXOm+6pzFY3YdljnXtv1SBE8Q=
go.opentelemetry.io/otel/trace v1.34.0 h1:+ouXS2V8Rd4hp4580a8q23bg0azF2nI8cqLYnC8mh/k=
go.opentelemetry.io/otel/trace v1.34.0/go.mod h1:Svm7lSjQD7kG7KJ/MUHPVXSDGz2OX4h0M2jHBhmSfRE=
go.opentelemetry.io/otel/trace v1.35.0 h1:dPpEfJu1sDIqruz7BHFG3c7528f6ddfSWfFDVt/xgMs=
go.opentelemetry.io/otel/trace v1.35.0/go.mod h1:WUk7DtFp1Aw2MkvqGdwiXYDZZNvA/1J8o6xRXLrIkyc=
go.opentelemetry.io/proto/otlp v1.0.0 h1:T0TX0tmXU8a3CbNXzEKGeU5mIVOdf0oykP+u2lIVU/I=
go.opentelemetry.io/proto/otlp v1.0.0/go.mod h1:Sy6pihPLfYHkr3NkUbEhGHFhINUSI/v80hjKIs5JXpM=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
@ -617,6 +668,10 @@ golang.org/x/crypto v0.32.0 h1:euUpcYgM8WcP71gNpTqQCn6rC2t6ULUPiOzfWaXVVfc=
golang.org/x/crypto v0.32.0/go.mod h1:ZnnJkOaASj8g0AjIduWNlq2NRxL0PlBrbKVyZ6V/Ugc=
golang.org/x/crypto v0.33.0 h1:IOBPskki6Lysi0lo9qQvbxiQ+FvsCC/YWOecCHAixus=
golang.org/x/crypto v0.33.0/go.mod h1:bVdXmD7IV/4GdElGPozy6U7lWdRXA4qyRVGJV57uQ5M=
golang.org/x/crypto v0.36.0 h1:AnAEvhDddvBdpY+uR+MyHmuZzzNqXSe/GvuDeob5L34=
golang.org/x/crypto v0.36.0/go.mod h1:Y4J0ReaxCR1IMaabaSMugxJES1EpwhBHhv2bDHklZvc=
golang.org/x/crypto v0.37.0 h1:kJNSjF/Xp7kU0iB2Z+9viTPMW4EqqsrywMXLJOOsXSE=
golang.org/x/crypto v0.37.0/go.mod h1:vg+k43peMZ0pUMhYmVAWysMK35e6ioLh3wB8ZCAfbVc=
golang.org/x/exp v0.0.0-20240112132812-db7319d0e0e3 h1:hNQpMuAJe5CtcUqCXaWga3FHu+kQvCqcsoVaQgSV60o=
golang.org/x/exp v0.0.0-20240112132812-db7319d0e0e3/go.mod h1:idGWGoKP1toJGkd5/ig9ZLuPcZBC3ewk7SzmH0uou08=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
@ -643,8 +698,14 @@ golang.org/x/net v0.34.0 h1:Mb7Mrk043xzHgnRM88suvJFwzVrRfHEHJEl5/71CKw0=
golang.org/x/net v0.34.0/go.mod h1:di0qlW3YNM5oh6GqDGQr92MyTozJPmybPK4Ev/Gm31k=
golang.org/x/net v0.35.0 h1:T5GQRQb2y08kTAByq9L4/bz8cipCdA8FbRTXewonqY8=
golang.org/x/net v0.35.0/go.mod h1:EglIi67kWsHKlRzzVMUD93VMSWGFOMSZgxFjparz1Qk=
golang.org/x/net v0.38.0 h1:vRMAPTMaeGqVhG5QyLJHqNDwecKTomGeqbnfZyKlBI8=
golang.org/x/net v0.38.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
golang.org/x/net v0.39.0 h1:ZCu7HMWDxpXpaiKdhzIfaltL9Lp31x/3fCP11bc6/fY=
golang.org/x/net v0.39.0/go.mod h1:X7NRbYVEA+ewNkCNyJ513WmMdQ3BineSwVtN2zD/d+E=
golang.org/x/oauth2 v0.25.0 h1:CY4y7XT9v0cRI9oupztF8AgiIu99L/ksR/Xp/6jrZ70=
golang.org/x/oauth2 v0.25.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/oauth2 v0.29.0 h1:WdYw2tdTK1S8olAzWHdgeqfy+Mtm9XNhv/xJsY65d98=
golang.org/x/oauth2 v0.29.0/go.mod h1:onh5ek6nERTohokkhCD/y2cV4Do3fxFHFuAejCkRWT8=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@ -679,6 +740,10 @@ golang.org/x/sys v0.29.0 h1:TPYlXGxvx1MGTn2GiZDhnjPA9wZzZeGKHHmKhHYvgaU=
golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.30.0 h1:QjkSwP/36a20jFYWkSue1YwXzLmsV5Gfq7Eiy72C1uc=
golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.31.0 h1:ioabZlmFYtWhL+TRYpcnNlLwhyxaM9kWTDEmfnprqik=
golang.org/x/sys v0.31.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/sys v0.32.0 h1:s77OFDvIQeibCmezSnk/q6iAfkdiQaJi4VzroCFrN20=
golang.org/x/sys v0.32.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
@ -700,6 +765,10 @@ golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/text v0.22.0 h1:bofq7m3/HAFvbF51jz3Q9wLg3jkvSPuiZu/pD1XwgtM=
golang.org/x/text v0.22.0/go.mod h1:YRoo4H8PVmsu+E3Ou7cqLVH8oXWIHVoX0jqUWALQhfY=
golang.org/x/text v0.23.0 h1:D71I7dUrlY+VX0gQShAThNGHFxZ13dGLBHQLVl1mJlY=
golang.org/x/text v0.23.0/go.mod h1:/BLNzu4aZCJ1+kcD0DNRotWKage4q2rGVAg4o22unh4=
golang.org/x/text v0.24.0 h1:dd5Bzh4yt5KYA8f9CJHCP4FB4D51c2c6JvN37xJJkJ0=
golang.org/x/text v0.24.0/go.mod h1:L8rBsPeo2pSS+xqN0d5u2ikmjtmoJbDBT1b7nHvFCdU=
golang.org/x/time v0.6.0 h1:eTDhh4ZXt5Qf0augr54TN6suAUudPcawVZeIAPU7D4U=
golang.org/x/time v0.6.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@ -727,12 +796,18 @@ google.golang.org/genproto/googleapis/api v0.0.0-20240318140521-94a12d6c2237 h1:
google.golang.org/genproto/googleapis/api v0.0.0-20240318140521-94a12d6c2237/go.mod h1:Z5Iiy3jtmioajWHDGFk7CeugTyHtPvMHA4UTmUkyalE=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250127172529-29210b9bc287 h1:J1H9f+LEdWAfHcez/4cvaVBox7cOYT+IU6rgqj5x++8=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250127172529-29210b9bc287/go.mod h1:8BS3B93F/U1juMFq9+EDk+qOT5CO1R9IzXxG3PTqiRk=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250414145226-207652e42e2e h1:ztQaXfzEXTmCBvbtWYRhJxW+0iJcz2qXfd38/e9l7bA=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250414145226-207652e42e2e/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A=
google.golang.org/grpc v1.64.1 h1:LKtvyfbX3UGVPFcGqJ9ItpVWW6oN/2XqTxfAnwRRXiA=
google.golang.org/grpc v1.64.1/go.mod h1:hiQF4LFZelK2WKaP6W0L92zGHtiQdZxk8CrSdvyjeP0=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.36.4 h1:6A3ZDJHn/eNqc1i+IdefRzy/9PokBTPvcqMySR7NNIM=
google.golang.org/protobuf v1.36.4/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
google.golang.org/protobuf v1.36.5 h1:tPhr+woSbjfYvY6/GPufUoYizxw1cF/yFoxJ2fmpwlM=
google.golang.org/protobuf v1.36.5/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
gopkg.in/cenkalti/backoff.v1 v1.1.0 h1:Arh75ttbsvlpVA7WtVpH4u9h6Zl46xuptxqLxPiSo4Y=
gopkg.in/cenkalti/backoff.v1 v1.1.0/go.mod h1:J6Vskwqd+OMVJl8C33mmtxTBs2gyzfv7UDAkHu8BrjI=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=

View file

@ -14,6 +14,7 @@ import (
type Config struct {
common.Config
common.Postgres
common.Clickhouse
redis.Redis
objectstorage.ObjectsConfig
common.HTTP

View file

@ -2,7 +2,7 @@ package datasaver
import (
"context"
"encoding/json"
"openreplay/backend/internal/config/db"
"openreplay/backend/pkg/db/clickhouse"
"openreplay/backend/pkg/db/postgres"
@ -50,10 +50,6 @@ func New(log logger.Logger, cfg *db.Config, pg *postgres.Conn, ch clickhouse.Con
}
func (s *saverImpl) Handle(msg Message) {
if msg.TypeID() == MsgCustomEvent {
defer s.Handle(types.WrapCustomEvent(msg.(*CustomEvent)))
}
var (
sessCtx = context.WithValue(context.Background(), "sessionID", msg.SessionID())
session *sessions.Session
@ -69,6 +65,23 @@ func (s *saverImpl) Handle(msg Message) {
return
}
if msg.TypeID() == MsgCustomEvent {
m := msg.(*CustomEvent)
// Try to parse the custom event payload as JSON and extract the or_timestamp field
type CustomEventPayload struct {
CustomTimestamp uint64 `json:"or_timestamp"`
}
customPayload := &CustomEventPayload{}
if err := json.Unmarshal([]byte(m.Payload), customPayload); err == nil {
if customPayload.CustomTimestamp >= session.Timestamp {
s.log.Info(sessCtx, "custom event timestamp received: %v", m.Timestamp)
msg.Meta().Timestamp = customPayload.CustomTimestamp
s.log.Info(sessCtx, "custom event timestamp updated: %v", m.Timestamp)
}
}
defer s.Handle(types.WrapCustomEvent(m))
}
if IsMobileType(msg.TypeID()) {
if err := s.handleMobileMessage(sessCtx, session, msg); err != nil {
if !postgres.IsPkeyViolation(err) {

View file

@ -2,7 +2,6 @@ package datasaver
import (
"context"
"openreplay/backend/pkg/db/postgres"
"openreplay/backend/pkg/db/types"
"openreplay/backend/pkg/messages"

View file

@ -133,6 +133,17 @@ func (e *AssetsCache) ParseAssets(msg messages.Message) messages.Message {
}
newMsg.SetMeta(msg.Meta())
return newMsg
case *messages.CSSInsertRuleURLBased:
if e.shouldSkipAsset(m.BaseURL) {
return msg
}
newMsg := &messages.CSSInsertRule{
ID: m.ID,
Index: m.Index,
Rule: e.handleCSS(m.SessionID(), m.BaseURL, m.Rule),
}
newMsg.SetMeta(msg.Meta())
return newMsg
case *messages.AdoptedSSReplaceURLBased:
if e.shouldSkipAsset(m.BaseURL) {
return msg

View file

@ -3,6 +3,7 @@ package analytics
import (
"github.com/go-playground/validator/v10"
"openreplay/backend/pkg/analytics/charts"
"openreplay/backend/pkg/analytics/db"
"openreplay/backend/pkg/metrics/database"
"time"
@ -27,13 +28,14 @@ type ServicesBuilder struct {
ChartsAPI api.Handlers
}
func NewServiceBuilder(log logger.Logger, cfg *analytics.Config, webMetrics web.Web, dbMetrics database.Database, pgconn pool.Pool) (*ServicesBuilder, error) {
func NewServiceBuilder(log logger.Logger, cfg *analytics.Config, webMetrics web.Web, dbMetrics database.Database, pgconn pool.Pool, chConn db.Connector) (*ServicesBuilder, error) {
responser := api.NewResponser(webMetrics)
audiTrail, err := tracer.NewTracer(log, pgconn, dbMetrics)
if err != nil {
return nil, err
}
reqValidator := validator.New()
cardsService, err := cards.New(log, pgconn)
if err != nil {
return nil, err
@ -42,6 +44,7 @@ func NewServiceBuilder(log logger.Logger, cfg *analytics.Config, webMetrics web.
if err != nil {
return nil, err
}
dashboardsService, err := dashboards.New(log, pgconn)
if err != nil {
return nil, err
@ -50,7 +53,8 @@ func NewServiceBuilder(log logger.Logger, cfg *analytics.Config, webMetrics web.
if err != nil {
return nil, err
}
chartsService, err := charts.New(log, pgconn)
chartsService, err := charts.New(log, pgconn, chConn)
if err != nil {
return nil, err
}
@ -58,6 +62,7 @@ func NewServiceBuilder(log logger.Logger, cfg *analytics.Config, webMetrics web.
if err != nil {
return nil, err
}
return &ServicesBuilder{
Auth: auth.NewAuth(log, cfg.JWTSecret, cfg.JWTSpotSecret, pgconn, nil, api.NoPrefix),
RateLimiter: limiter.NewUserRateLimiter(10, 30, 1*time.Minute, 5*time.Minute),

View file

@ -6,7 +6,6 @@ import (
"fmt"
"strings"
"github.com/jackc/pgx/v4"
"github.com/lib/pq"
"openreplay/backend/pkg/db/postgres/pool"
@ -48,12 +47,12 @@ func (s *cardsImpl) Create(projectId int, userID uint64, req *CardCreateRequest)
ctx := context.Background()
defer func() {
if err != nil {
tx.Rollback(ctx)
err := tx.TxRollback()
if err != nil {
return
}
} else {
err := tx.Commit(ctx)
err := tx.TxCommit()
if err != nil {
return
}
@ -67,8 +66,8 @@ func (s *cardsImpl) Create(projectId int, userID uint64, req *CardCreateRequest)
RETURNING metric_id, project_id, user_id, name, metric_type, view_type, metric_of, metric_value, metric_format, is_public, created_at, edited_at`
card := &CardGetResponse{}
err = tx.QueryRow(
ctx, sql,
err = tx.TxQueryRow(
sql,
projectId, userID, req.Name, req.MetricType, req.ViewType, req.MetricOf, req.MetricValue, req.MetricFormat, req.IsPublic,
).Scan(
&card.CardID,
@ -98,7 +97,7 @@ func (s *cardsImpl) Create(projectId int, userID uint64, req *CardCreateRequest)
return card, nil
}
func (s *cardsImpl) CreateSeries(ctx context.Context, tx pgx.Tx, metricId int64, series []CardSeriesBase) []CardSeries {
func (s *cardsImpl) CreateSeries(ctx context.Context, tx *pool.Tx, metricId int64, series []CardSeriesBase) []CardSeries {
if len(series) == 0 {
return nil // No series to create
}
@ -126,7 +125,7 @@ func (s *cardsImpl) CreateSeries(ctx context.Context, tx pgx.Tx, metricId int64,
query := fmt.Sprintf(sql, strings.Join(values, ","))
s.log.Info(ctx, "Executing query: %s with args: %v", query, args)
rows, err := tx.Query(ctx, query, args...)
rows, err := tx.TxQuery(query, args...)
if err != nil {
s.log.Error(ctx, "failed to execute batch insert for series: %v", err)
return nil
@@ -359,12 +358,12 @@ func (s *cardsImpl) Update(projectId int, cardID int64, userID uint64, req *Card
ctx := context.Background()
defer func() {
if err != nil {
tx.Rollback(ctx)
err := tx.TxRollback()
if err != nil {
return
}
} else {
err := tx.Commit(ctx)
err := tx.TxCommit()
if err != nil {
return
}
@@ -379,7 +378,7 @@ func (s *cardsImpl) Update(projectId int, cardID int64, userID uint64, req *Card
RETURNING metric_id, project_id, user_id, name, metric_type, view_type, metric_of, metric_value, metric_format, is_public, created_at, edited_at`
card := &CardGetResponse{}
err = tx.QueryRow(ctx, sql,
err = tx.TxQueryRow(sql,
req.Name, req.MetricType, req.ViewType, req.MetricOf, req.MetricValue, req.MetricFormat, req.IsPublic, cardID, projectId,
).Scan(
&card.CardID, &card.ProjectID, &card.UserID, &card.Name, &card.MetricType, &card.ViewType, &card.MetricOf,

View file

@@ -46,6 +46,7 @@ func (e *handlersImpl) GetAll() []*api.Description {
{"/v1/analytics/{projectId}/cards/{id}", e.getCard, "GET"},
{"/v1/analytics/{projectId}/cards/{id}", e.updateCard, "PUT"},
{"/v1/analytics/{projectId}/cards/{id}", e.deleteCard, "DELETE"},
{"/v1/analytics/{projectId}/cards/{id}/sessions", e.getCardSessions, "POST"},
}
}
@@ -296,3 +297,8 @@ func (e *handlersImpl) deleteCard(w http.ResponseWriter, r *http.Request) {
e.responser.ResponseWithJSON(e.log, r.Context(), w, nil, startTime, r.URL.Path, bodySize)
}
func (e *handlersImpl) getCardSessions(w http.ResponseWriter, r *http.Request) {
// TODO: implement this
e.responser.ResponseWithError(e.log, r.Context(), w, http.StatusNotImplemented, fmt.Errorf("not implemented"), time.Now(), r.URL.Path, 0)
}

View file

@@ -6,6 +6,24 @@ import (
"time"
)
type MetricType string
type MetricOfTimeseries string
type MetricOfTable string
const (
MetricTypeTimeseries MetricType = "TIMESERIES"
MetricTypeTable MetricType = "TABLE"
MetricOfTimeseriesSessionCount MetricOfTimeseries = "SESSION_COUNT"
MetricOfTimeseriesUserCount MetricOfTimeseries = "USER_COUNT"
MetricOfTableVisitedURL MetricOfTable = "VISITED_URL"
MetricOfTableIssues MetricOfTable = "ISSUES"
MetricOfTableUserCountry MetricOfTable = "USER_COUNTRY"
MetricOfTableUserDevice MetricOfTable = "USER_DEVICE"
MetricOfTableUserBrowser MetricOfTable = "USER_BROWSER"
)
// CardBase Common fields for the Card entity
type CardBase struct {
Name string `json:"name" validate:"required"`
@@ -49,8 +67,8 @@ type CardSeries struct {
}
type SeriesFilter struct {
EventOrder string `json:"eventOrder" validate:"required,oneof=then or and"`
Filters []FilterItem `json:"filters"`
EventsOrder string `json:"eventsOrder" validate:"required,oneof=then or and"`
Filters []FilterItem `json:"filters"`
}
type FilterItem struct {
@@ -192,3 +210,34 @@ func (s *CardListSort) GetSQLField() string {
func (s *CardListSort) GetSQLOrder() string {
return strings.ToUpper(s.Order)
}
// ---
/*
class IssueType(str, Enum):
CLICK_RAGE = 'click_rage'
DEAD_CLICK = 'dead_click'
EXCESSIVE_SCROLLING = 'excessive_scrolling'
BAD_REQUEST = 'bad_request'
MISSING_RESOURCE = 'missing_resource'
MEMORY = 'memory'
CPU = 'cpu'
SLOW_RESOURCE = 'slow_resource'
SLOW_PAGE_LOAD = 'slow_page_load'
CRASH = 'crash'
CUSTOM = 'custom'
JS_EXCEPTION = 'js_exception'
MOUSE_THRASHING = 'mouse_thrashing'
# IOS
TAP_RAGE = 'tap_rage'
*/
type IssueType string
type ChartData struct {
StartTs uint64 `json:"startTs"`
EndTs uint64 `json:"endTs"`
Density uint64 `json:"density"`
Filters []FilterItem `json:"filter"`
MetricOf string `json:"metricOf"`
MetricValue []IssueType `json:"metricValue"`
}

View file

@@ -1,50 +1,51 @@
package charts
import (
"encoding/json"
"fmt"
"openreplay/backend/pkg/analytics/db"
"openreplay/backend/pkg/db/postgres/pool"
"openreplay/backend/pkg/logger"
)
type Charts interface {
GetData(projectId int, userId uint64, req *GetCardChartDataRequest) ([]DataPoint, error)
GetData(projectId int, userId uint64, req *MetricPayload) (interface{}, error)
}
type chartsImpl struct {
log logger.Logger
pgconn pool.Pool
chConn db.Connector
}
func New(log logger.Logger, conn pool.Pool) (Charts, error) {
func New(log logger.Logger, conn pool.Pool, chConn db.Connector) (Charts, error) {
return &chartsImpl{
log: log,
pgconn: conn,
chConn: chConn,
}, nil
}
func (s *chartsImpl) GetData(projectId int, userID uint64, req *GetCardChartDataRequest) ([]DataPoint, error) {
jsonInput := `
{
"data": [
{
"timestamp": 1733934939000,
"Series A": 100,
"Series B": 200
},
{
"timestamp": 1733935939000,
"Series A": 150,
"Series B": 250
}
]
}`
var resp GetCardChartDataResponse
if err := json.Unmarshal([]byte(jsonInput), &resp); err != nil {
return nil, fmt.Errorf("failed to unmarshal response: %w", err)
// GetData is the Go counterpart of the Python get_chart() implementation.
func (s *chartsImpl) GetData(projectId int, userID uint64, req *MetricPayload) (interface{}, error) {
if req == nil {
return nil, fmt.Errorf("request is empty")
}
return resp.Data, nil
payload := Payload{
ProjectId: projectId,
UserId: userID,
MetricPayload: req,
}
qb, err := NewQueryBuilder(payload)
if err != nil {
return nil, fmt.Errorf("error creating query builder: %v", err)
}
resp, err := qb.Execute(payload, s.chConn)
if err != nil {
return nil, fmt.Errorf("error executing query: %v", err)
}
return map[string]interface{}{"data": resp}, nil
}
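// NewQueryBuilder is assumed (from the builders added in this change, which all
// implement Execute(Payload, db.Connector)) to select the concrete builder,
// e.g. timeseries, table, funnel or journey, based on the payload's metric type.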

View file

@@ -0,0 +1,427 @@
package charts
import (
"fmt"
"log"
"strings"
)
type Fields map[string]string
func getSessionMetaFields() Fields {
return Fields{
"revId": "rev_id",
"country": "user_country",
"os": "user_os",
"platform": "user_device_type",
"device": "user_device",
"browser": "user_browser",
}
}
func getMetadataFields() Fields {
return Fields{
"userId": "user_id",
"userAnonymousId": "user_anonymous_id",
"metadata1": "metadata_1",
"metadata2": "metadata_2",
"metadata3": "metadata_3",
"metadata4": "metadata_4",
"metadata5": "metadata_5",
"metadata6": "metadata_6",
"metadata7": "metadata_7",
"metadata8": "metadata_8",
"metadata9": "metadata_9",
"metadata10": "metadata_10",
}
}
func getStepSize(startTimestamp, endTimestamp int64, density int, decimal bool, factor int) float64 {
factorInt64 := int64(factor)
stepSize := (endTimestamp / factorInt64) - (startTimestamp / factorInt64)
if density <= 1 {
return float64(stepSize)
}
if decimal {
return float64(stepSize) / float64(density)
}
return float64(stepSize / int64(density-1))
}
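// Worked example (illustrative numbers): getStepSize(0, 60_000, 4, false, 1000)
// first scales both timestamps by the factor (60 - 0 = 60 s), then divides by
// density-1, giving 20 s per bucket; with decimal=true it divides by density
// itself, giving 15 s.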
//func getStepSize(startTimestamp, endTimestamp, density uint64, decimal bool, factor uint64) float64 {
// stepSize := (endTimestamp / factor) - (startTimestamp / factor) // TODO: should I use float64 here?
// if !decimal {
// density--
// }
// return float64(stepSize) / float64(density)
//}
func getBasicConstraints(tableName string, timeConstraint, roundStart bool, data map[string]interface{}, identifier string) []string {
// If tableName is not empty, append a dot so it can prefix column names.
if tableName != "" {
tableName += "."
}
chSubQuery := []string{fmt.Sprintf("%s%s = toUInt16(:%s)", tableName, identifier, identifier)}
if timeConstraint {
if roundStart {
chSubQuery = append(chSubQuery, fmt.Sprintf("toStartOfInterval(%sdatetime, INTERVAL :step_size second) >= toDateTime(:startTimestamp/1000)", tableName))
} else {
chSubQuery = append(chSubQuery, fmt.Sprintf("%sdatetime >= toDateTime(:startTimestamp/1000)", tableName))
}
chSubQuery = append(chSubQuery, fmt.Sprintf("%sdatetime < toDateTime(:endTimestamp/1000)", tableName))
}
return append(chSubQuery, getGenericConstraint(data, tableName)...)
}
func getGenericConstraint(data map[string]interface{}, tableName string) []string {
return getConstraint(data, getSessionMetaFields(), tableName)
}
func getConstraint(data map[string]interface{}, fields Fields, tableName string) []string {
var constraints []string
filters, ok := data["filters"].([]map[string]interface{}) // a type assertion yields a bool, not an error
if !ok {
log.Println("error getting filters from data")
filters = make([]map[string]interface{}, 0) // empty slice so the loop below is skipped
}
// process filters
for i, f := range filters {
key, _ := f["key"].(string)
value, _ := f["value"].(string)
if field, ok := fields[key]; ok {
if value == "*" || value == "" {
constraints = append(constraints, fmt.Sprintf("isNotNull(%s%s)", tableName, field))
} else {
// constraints.append(f"{table_name}{fields[f['key']]} = %({f['key']}_{i})s")
constraints = append(constraints, fmt.Sprintf("%s%s = %%(%s_%d)s", tableName, field, key, i)) // TODO: where we'll keep the value?
}
}
}
// TODO from Python: remove this in next release
offset := len(filters)
i := 0
for key, v := range data {
// Ranging over a map yields key/value, not index/value: the key is the
// filter name, and the positional index is tracked separately, mirroring
// the counter used in getConstraintValues below.
value, _ := v.(string)
if field, ok := fields[key]; ok {
if value == "*" || value == "" {
constraints = append(constraints, fmt.Sprintf("isNotNull(%s%s)", tableName, field))
} else {
constraints = append(constraints, fmt.Sprintf("%s%s = %%(%s_%d)s", tableName, field, key, i+offset))
}
}
i++
}
return constraints
}
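// Example (illustrative): a filter {key: "country", value: "FR"} at index 0 with
// tableName "sessions." produces "sessions.user_country = %(country_0)s", and
// getConstraintValues below registers the matching value under "country_0".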
func getMetaConstraint(data map[string]interface{}) []string {
return getConstraint(data, getMetadataFields(), "sessions_metadata.")
}
func getConstraintValues(data map[string]interface{}) map[string]interface{} {
params := make(map[string]interface{})
if filters, ok := data["filters"].([]map[string]interface{}); ok {
for i, f := range filters {
key, _ := f["key"].(string)
value := f["value"]
params[fmt.Sprintf("%s_%d", key, i)] = value
}
// TODO from Python: remove this in next release
offset := len(filters)
i := 0
for k, v := range data {
params[fmt.Sprintf("%s_%d", k, i+offset)] = v
i++
}
}
return params
}
/*
def get_main_sessions_table(timestamp=0):
return "experimental.sessions_l7d_mv" \
if config("EXP_7D_MV", cast=bool, default=True) \
and timestamp and timestamp >= TimeUTC.now(delta_days=-7) else "experimental.sessions"
*/
func getMainSessionsTable(timestamp int64) string {
return "experimental.sessions"
}
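// A hedged sketch of the table selection described by the Python docstring above;
// the exp7dMV flag (and the time import it needs) are assumptions, not existing code:
//
// func getMainSessionsTableMV(timestamp int64, exp7dMV bool) string {
// sevenDaysAgoMs := time.Now().AddDate(0, 0, -7).UnixMilli()
// if exp7dMV && timestamp != 0 && timestamp >= sevenDaysAgoMs {
// return "experimental.sessions_l7d_mv"
// }
// return "experimental.sessions"
// }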
// replaceNamedParams inlines :name placeholders with their literal values and also
// returns the values as a slice. Note the values are interpolated into the query
// string itself, so only trusted, server-generated parameters should be passed.
func replaceNamedParams(query string, params map[string]interface{}) (string, []interface{}) {
var args []interface{}
for key, val := range params {
placeholder := ":" + key
strVal := fmt.Sprintf("%v", val)
query = strings.Replace(query, placeholder, strVal, -1)
args = append(args, val)
}
return query, args
}
// Helper function to generate a range of floats
func frange(start, end, step float64) []float64 {
var rangeValues []float64
for i := start; i < end; i += step {
rangeValues = append(rangeValues, i)
}
return rangeValues
}
// Helper function to add missing keys from the "complete" map to the "original" map
func addMissingKeys(original, complete map[string]interface{}) map[string]interface{} {
for k, v := range complete {
if _, exists := original[k]; !exists {
original[k] = v
}
}
return original
}
// CompleteMissingSteps fills in missing steps in the data
func CompleteMissingSteps(
startTime, endTime int64,
density int,
neutral map[string]interface{},
rows []map[string]interface{},
timeKey string,
timeCoefficient int64,
) []map[string]interface{} {
if len(rows) == density {
return rows
}
// Calculate the step size
step := getStepSize(startTime, endTime, density, true, 1000)
optimal := make([][2]uint64, 0)
for _, i := range frange(float64(startTime)/float64(timeCoefficient), float64(endTime)/float64(timeCoefficient), step) {
startInterval := uint64(i * float64(timeCoefficient))
endInterval := uint64((i + step) * float64(timeCoefficient))
optimal = append(optimal, [2]uint64{startInterval, endInterval})
}
var result []map[string]interface{}
r, o := 0, 0
// Iterate over density
for i := 0; i < density; i++ {
// Clone the neutral map
neutralClone := make(map[string]interface{})
for k, v := range neutral {
if fn, ok := v.(func() interface{}); ok {
neutralClone[k] = fn()
} else {
neutralClone[k] = v
}
}
// If we can just add the rest of the rows to result
if r < len(rows) && len(result)+len(rows)-r == density {
result = append(result, rows[r:]...)
break
}
// Determine where the current row fits within the optimal intervals
if r < len(rows) && o < len(optimal) && rows[r][timeKey].(uint64) < optimal[o][0] {
rows[r] = addMissingKeys(rows[r], neutralClone)
result = append(result, rows[r])
r++
} else if r < len(rows) && o < len(optimal) && optimal[o][0] <= rows[r][timeKey].(uint64) && rows[r][timeKey].(uint64) < optimal[o][1] {
rows[r] = addMissingKeys(rows[r], neutralClone)
result = append(result, rows[r])
r++
o++
} else {
neutralClone[timeKey] = optimal[o][0]
result = append(result, neutralClone)
o++
}
}
return result
}
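// Usage sketch (illustrative values): one real row in a 30 s window at density 3;
// the two missing buckets are synthesized from the neutral map.
//
// rows := []map[string]interface{}{{"timestamp": uint64(1_000_000), "value": 5}}
// filled := CompleteMissingSteps(1_000_000, 1_030_000, 3,
// map[string]interface{}{"value": 0}, rows, "timestamp", 1000)
// // len(filled) == 3; the synthetic rows carry timestamps 1_010_000 and
// // 1_020_000 with value 0.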
func progress(oldVal, newVal uint64) float64 {
if newVal > 0 {
// cast before subtracting: uint64 subtraction underflows when newVal > oldVal
return (float64(oldVal) - float64(newVal)) / float64(newVal) * 100
}
if oldVal == 0 {
return 0
}
return 100
}
// parse builds the constraint lists and query parameters shared by the trend queries below.
func parse(projectID int, startTs, endTs int64, density int, args map[string]interface{}) ([]string, []string, map[string]interface{}) {
stepSize := getStepSize(startTs, endTs, density, false, 1000)
chSubQuery := getBasicConstraints("sessions", true, false, args, "project_id")
chSubQueryChart := getBasicConstraints("sessions", true, true, args, "project_id")
metaCondition := getMetaConstraint(args)
chSubQuery = append(chSubQuery, metaCondition...)
chSubQueryChart = append(chSubQueryChart, metaCondition...)
params := map[string]interface{}{
"step_size": stepSize,
"project_id": projectID,
"startTimestamp": startTs,
"endTimestamp": endTs,
}
for k, v := range getConstraintValues(args) {
params[k] = v
}
return chSubQuery, chSubQueryChart, params
}
// Sessions trend
//func (s *chartsImpl) getProcessedSessions(projectID int, startTs, endTs int64, density int, args map[string]interface{}) (interface{}, error) {
// chQuery := `
// SELECT toUnixTimestamp(toStartOfInterval(sessions.datetime, INTERVAL :step_size second)) * 1000 AS timestamp,
// COUNT(DISTINCT sessions.session_id) AS value
// FROM :main_sessions_table AS sessions
// WHERE :sub_query_chart
// GROUP BY timestamp
// ORDER BY timestamp;
// `
// chSubQuery, chSubQueryChart, params := parse(projectID, startTs, endTs, density, args)
//
// chQuery = strings.Replace(chQuery, ":main_sessions_table", getMainSessionsTable(startTs), -1)
// chQuery = strings.Replace(chQuery, ":sub_query_chart", strings.Join(chSubQueryChart, " AND "), -1)
//
// preparedQuery, preparedArgs := replaceNamedParams(chQuery, params)
// rows, err := s.chConn.Query(context.Background(), preparedQuery, preparedArgs)
// if err != nil {
// log.Fatalf("Error executing query: %v", err)
// }
// preparedRows := make([]map[string]interface{}, 0)
// var sum uint64
// for rows.Next() {
// var timestamp, value uint64
// if err := rows.Scan(&timestamp, &value); err != nil {
// log.Fatalf("Error scanning row: %v", err)
// }
// fmt.Printf("Timestamp: %d, Value: %d\n", timestamp, value)
// sum += value
// preparedRows = append(preparedRows, map[string]interface{}{"timestamp": timestamp, "value": value})
// }
//
// results := map[string]interface{}{
// "value": sum,
// "chart": CompleteMissingSteps(startTs, endTs, int(density), map[string]interface{}{"value": 0}, preparedRows, "timestamp", 1000),
// }
//
// diff := endTs - startTs
// endTs = startTs
// startTs = endTs - diff
//
// log.Println(results)
//
// chQuery = fmt.Sprintf(`
// SELECT COUNT(1) AS count
// FROM :main_sessions_table AS sessions
// WHERE :sub_query_chart;
// `)
// chQuery = strings.Replace(chQuery, ":main_sessions_table", getMainSessionsTable(startTs), -1)
// chQuery = strings.Replace(chQuery, ":sub_query_chart", strings.Join(chSubQuery, " AND "), -1)
//
// var count uint64
//
// preparedQuery, preparedArgs = replaceNamedParams(chQuery, params)
// if err := s.chConn.QueryRow(context.Background(), preparedQuery, preparedArgs).Scan(&count); err != nil {
// log.Fatalf("Error executing query: %v", err)
// }
//
// results["progress"] = progress(count, results["value"].(uint64))
//
// // TODO: this should be returned in any case
// results["unit"] = "COUNT"
// fmt.Println(results)
//
// return results, nil
//}
//
//// Users trend
//func (s *chartsImpl) getUniqueUsers(projectID int, startTs, endTs int64, density int, args map[string]interface{}) (interface{}, error) {
// chQuery := `
// SELECT toUnixTimestamp(toStartOfInterval(sessions.datetime, INTERVAL :step_size second)) * 1000 AS timestamp,
// COUNT(DISTINCT sessions.user_id) AS value
// FROM :main_sessions_table AS sessions
// WHERE :sub_query_chart
// GROUP BY timestamp
// ORDER BY timestamp;
// `
// chSubQuery, chSubQueryChart, params := parse(projectID, startTs, endTs, density, args)
// chSubQueryChart = append(chSubQueryChart, []string{"isNotNull(sessions.user_id)", "sessions.user_id!=''"}...)
//
// chQuery = strings.Replace(chQuery, ":main_sessions_table", getMainSessionsTable(startTs), -1)
// chQuery = strings.Replace(chQuery, ":sub_query_chart", strings.Join(chSubQueryChart, " AND "), -1)
//
// preparedQuery, preparedArgs := replaceNamedParams(chQuery, params)
// rows, err := s.chConn.Query(context.Background(), preparedQuery, preparedArgs)
// if err != nil {
// log.Fatalf("Error executing query: %v", err)
// }
// preparedRows := make([]map[string]interface{}, 0)
// var sum uint64
// for rows.Next() {
// var timestamp, value uint64
// if err := rows.Scan(&timestamp, &value); err != nil {
// log.Fatalf("Error scanning row: %v", err)
// }
// fmt.Printf("Timestamp: %d, Value: %d\n", timestamp, value)
// sum += value
// preparedRows = append(preparedRows, map[string]interface{}{"timestamp": timestamp, "value": value})
// }
//
// results := map[string]interface{}{
// "value": sum,
// "chart": CompleteMissingSteps(startTs, endTs, int(density), map[string]interface{}{"value": 0}, preparedRows, "timestamp", 1000),
// }
//
// diff := endTs - startTs
// endTs = startTs
// startTs = endTs - diff
//
// log.Println(results)
//
// chQuery = fmt.Sprintf(`
// SELECT COUNT(DISTINCT user_id) AS count
// FROM :main_sessions_table AS sessions
// WHERE :sub_query_chart;
// `)
// chQuery = strings.Replace(chQuery, ":main_sessions_table", getMainSessionsTable(startTs), -1)
// chQuery = strings.Replace(chQuery, ":sub_query_chart", strings.Join(chSubQuery, " AND "), -1)
//
// var count uint64
//
// preparedQuery, preparedArgs = replaceNamedParams(chQuery, params)
// if err := s.chConn.QueryRow(context.Background(), preparedQuery, preparedArgs).Scan(&count); err != nil {
// log.Fatalf("Error executing query: %v", err)
// }
//
// results["progress"] = progress(count, results["value"].(uint64))
//
// // TODO: this should be returned in any case
// results["unit"] = "COUNT"
// fmt.Println(results)
//
// return results, nil
//}

View file

@@ -41,8 +41,9 @@ type handlersImpl struct {
func (e *handlersImpl) GetAll() []*api.Description {
return []*api.Description{
{"/v1/analytics/{projectId}/cards/{id}/chart", e.getCardChartData, "POST"},
{"/v1/analytics/{projectId}/cards/{id}/chart", e.getCardChartData, "POST"}, // for dashboards
{"/v1/analytics/{projectId}/cards/{id}/try", e.getCardChartData, "POST"},
{"/v1/analytics/{projectId}/cards/try", e.getCardChartData, "POST"}, // for cards itself
}
}
@@ -73,7 +74,7 @@ func (e *handlersImpl) getCardChartData(w http.ResponseWriter, r *http.Request)
}
bodySize = len(bodyBytes)
req := &GetCardChartDataRequest{}
req := &MetricPayload{}
if err := json.Unmarshal(bodyBytes, req); err != nil {
e.responser.ResponseWithError(e.log, r.Context(), w, http.StatusBadRequest, err, startTime, r.URL.Path, bodySize)
return

View file

@@ -0,0 +1,236 @@
package charts
import (
"fmt"
"openreplay/backend/pkg/analytics/db"
"strings"
)
type FunnelStepResult struct {
LevelNumber uint64 `json:"step"`
StepName string `json:"type"`
CountAtLevel uint64 `json:"count"`
Operator string `json:"operator"`
Value []string `json:"value"`
DropPct float64 `json:"dropPct"`
}
type FunnelResponse struct {
Steps []FunnelStepResult `json:"stages"`
}
type FunnelQueryBuilder struct{}
func (f FunnelQueryBuilder) Execute(p Payload, conn db.Connector) (interface{}, error) {
q, err := f.buildQuery(p)
if err != nil {
return nil, err
}
rows, err := conn.Query(q)
if err != nil {
return nil, err
}
defer rows.Close()
// extract step filters
s := p.MetricPayload.Series[0]
var stepFilters []Filter
for _, flt := range s.Filter.Filters {
if flt.IsEvent {
stepFilters = append(stepFilters, flt)
}
}
var steps []FunnelStepResult
for rows.Next() {
var r FunnelStepResult
if err := rows.Scan(&r.LevelNumber, &r.StepName, &r.CountAtLevel); err != nil {
return nil, err
}
idx := int(r.LevelNumber) - 1
if idx >= 0 && idx < len(stepFilters) {
r.Operator = stepFilters[idx].Operator
r.Value = stepFilters[idx].Value
}
steps = append(steps, r)
}
// compute drop percentages
if len(steps) > 0 {
prev := steps[0].CountAtLevel
steps[0].DropPct = 0
for i := 1; i < len(steps); i++ {
curr := steps[i].CountAtLevel
if prev > 0 {
steps[i].DropPct = (float64(prev-curr) / float64(prev)) * 100
} else {
steps[i].DropPct = 0
}
prev = curr
}
}
return FunnelResponse{Steps: steps}, nil
}
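// Drop example (illustrative counts): steps reaching 100 → 60 → 45 entities yield
// DropPct 0, 40 and 25; each drop is measured against the immediately preceding
// step, not against the funnel entry count.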
func (f FunnelQueryBuilder) buildQuery(p Payload) (string, error) {
if len(p.MetricPayload.Series) == 0 {
return "", fmt.Errorf("series empty")
}
s := p.MetricPayload.Series[0]
metricFormat := p.MetricPayload.MetricFormat
var (
globalFilters []Filter
stepFilters []Filter
sessionDurationFilter *Filter
)
for _, flt := range s.Filter.Filters {
if flt.IsEvent {
stepFilters = append(stepFilters, flt)
} else if flt.Type == "duration" {
f := flt // copy before taking the address: &flt would alias the shared loop variable (pre-Go 1.22)
sessionDurationFilter = &f
} else {
globalFilters = append(globalFilters, flt)
}
}
requiredColumns := make(map[string]struct{})
var collectColumns func([]Filter)
collectColumns = func(filters []Filter) {
for _, flt := range filters {
if col, ok := mainColumns[string(flt.Type)]; ok {
requiredColumns[col] = struct{}{}
}
collectColumns(flt.Filters)
}
}
collectColumns(globalFilters)
collectColumns(stepFilters)
selectCols := []string{
`e.created_at`,
`e."$event_name" AS event_name`,
`e."$properties" AS properties`,
}
for col := range requiredColumns {
logical := reverseLookup(mainColumns, col)
selectCols = append(selectCols, fmt.Sprintf(`e."%s" AS %s`, col, logical))
}
selectCols = append(selectCols,
`e.session_id`,
`e.distinct_id`,
`s.user_id AS session_user_id`,
fmt.Sprintf("if('%s' = 'sessionCount', toString(e.session_id), coalesce(nullif(s.user_id,''),e.distinct_id)) AS entity_id", metricFormat),
)
globalConds, _ := buildEventConditions(globalFilters, BuildConditionsOptions{
DefinedColumns: mainColumns,
MainTableAlias: "e",
PropertiesColumnName: "$properties",
})
base := []string{
fmt.Sprintf("e.created_at >= toDateTime(%d/1000)", p.MetricPayload.StartTimestamp),
fmt.Sprintf("e.created_at < toDateTime(%d/1000)", p.MetricPayload.EndTimestamp+86400000),
fmt.Sprintf("e.project_id = %d", p.ProjectId),
}
base = append(base, globalConds...)
if sessionDurationFilter != nil {
vals := sessionDurationFilter.Value
if len(vals) > 0 && vals[0] != "" {
base = append(base, fmt.Sprintf("s.duration >= %s", vals[0]))
}
if len(vals) > 1 && vals[1] != "" {
base = append(base, fmt.Sprintf("s.duration <= %s", vals[1]))
}
}
where := strings.Join(base, " AND ")
var (
stepNames []string
stepExprs []string
clickCount int
)
for i, flt := range stepFilters {
stepNames = append(stepNames, fmt.Sprintf("'%s'", flt.Type))
conds, _ := buildEventConditions([]Filter{flt}, BuildConditionsOptions{
DefinedColumns: cteColumnAliases(),
PropertiesColumnName: "properties",
MainTableAlias: "",
})
var exprParts []string
exprParts = append(exprParts, fmt.Sprintf("event_name = funnel_steps[%d]", i+1))
if flt.Type == "CLICK" {
clickCount++
exprParts = append(exprParts, fmt.Sprintf("click_idx = %d", clickCount))
}
exprParts = append(exprParts, conds...)
stepExprs = append(stepExprs, fmt.Sprintf("(%s)", strings.Join(exprParts, " AND ")))
}
stepsArr := fmt.Sprintf("[%s]", strings.Join(stepNames, ","))
windowArgs := strings.Join(stepExprs, ",\n ")
q := fmt.Sprintf(`
WITH
%s AS funnel_steps,
86400 AS funnel_window_seconds,
events_for_funnel AS (
SELECT
%s
FROM product_analytics.events AS e
JOIN experimental.sessions AS s USING(session_id)
WHERE %s
ORDER BY e.session_id, e.created_at
),
numbered_clicks AS (
SELECT
entity_id,
created_at,
row_number() OVER (PARTITION BY entity_id ORDER BY created_at) AS click_idx
FROM events_for_funnel
WHERE event_name = 'CLICK'
),
funnel_levels_reached AS (
SELECT
ef.entity_id,
windowFunnel(funnel_window_seconds)(
toDateTime(ef.created_at),
%s
) AS max_level
FROM events_for_funnel ef
LEFT JOIN numbered_clicks nc
ON ef.entity_id = nc.entity_id
AND ef.created_at = nc.created_at
GROUP BY ef.entity_id
),
counts_by_level AS (
SELECT
seq.number + 1 AS level_number,
countDistinctIf(entity_id, max_level >= seq.number + 1) AS cnt
FROM funnel_levels_reached
CROSS JOIN numbers(length(funnel_steps)) AS seq
GROUP BY seq.number
),
step_list AS (
SELECT
seq.number + 1 AS level_number,
funnel_steps[seq.number + 1] AS step_name
FROM numbers(length(funnel_steps)) AS seq
)
SELECT
s.level_number,
s.step_name,
ifNull(c.cnt, 0) AS count_at_level
FROM step_list AS s
LEFT JOIN counts_by_level AS c ON s.level_number = c.level_number
ORDER BY s.level_number;`,
stepsArr,
strings.Join(selectCols, ",\n "),
where,
windowArgs,
)
return q, nil
}
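// Note on the ClickHouse aggregate used above: windowFunnel(w)(ts, cond1, ..., condN)
// returns, per entity, the length of the longest prefix of conditions matched in
// chronological order within a w-second window, so counts_by_level is
// non-increasing by construction.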

View file

@@ -0,0 +1,100 @@
package charts
import (
"fmt"
"openreplay/backend/pkg/analytics/db"
"strings"
)
type HeatmapPoint struct {
NormalizedX float64 `json:"normalizedX"`
NormalizedY float64 `json:"normalizedY"`
}
type HeatmapResponse struct {
Points []HeatmapPoint `json:"data"`
}
type HeatmapQueryBuilder struct{}
func (h HeatmapQueryBuilder) Execute(p Payload, conn db.Connector) (interface{}, error) {
q, err := h.buildQuery(p)
if err != nil {
return nil, err
}
rows, err := conn.Query(q)
if err != nil {
return nil, err
}
defer rows.Close()
var pts []HeatmapPoint
for rows.Next() {
var x, y float64
if err := rows.Scan(&x, &y); err != nil {
return nil, err
}
pts = append(pts, HeatmapPoint{x, y})
}
return HeatmapResponse{
Points: pts,
}, nil
}
func (h HeatmapQueryBuilder) buildQuery(p Payload) (string, error) {
if len(p.MetricPayload.Series) == 0 {
return "", fmt.Errorf("series empty")
}
s := p.MetricPayload.Series[0]
var globalFilters, eventFilters []Filter
for _, flt := range s.Filter.Filters {
if flt.IsEvent {
eventFilters = append(eventFilters, flt)
} else {
globalFilters = append(globalFilters, flt)
}
}
globalConds, _ := buildEventConditions(globalFilters, BuildConditionsOptions{
DefinedColumns: mainColumns,
MainTableAlias: "e",
})
eventConds, _ := buildEventConditions(eventFilters, BuildConditionsOptions{
DefinedColumns: mainColumns,
MainTableAlias: "e",
})
base := []string{
fmt.Sprintf("e.created_at >= toDateTime(%d/1000)", p.MetricPayload.StartTimestamp),
fmt.Sprintf("e.created_at < toDateTime(%d/1000)", p.MetricPayload.EndTimestamp),
fmt.Sprintf("e.project_id = %d", p.ProjectId),
"e.session_id IS NOT NULL",
"e.`$event_name` = 'CLICK'",
}
base = append(base, globalConds...)
//if len(globalNames) > 0 {
// base = append(base, "e.`$event_name` IN ("+buildInClause(globalNames)+")")
//}
//if len(eventNames) > 0 {
// base = append(base, "e.`$event_name` IN ("+buildInClause(eventNames)+")")
//}
base = append(base, eventConds...)
where := strings.Join(base, " AND ")
q := fmt.Sprintf(`
SELECT
JSONExtractFloat(toString(e."$properties"), 'normalized_x') AS normalized_x,
JSONExtractFloat(toString(e."$properties"), 'normalized_y') AS normalized_y
FROM product_analytics.events AS e
-- JOIN experimental.sessions AS s USING(session_id)
WHERE %s LIMIT 500;`, where)
return q, nil
}
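// e.g. JSONExtractFloat('{"normalized_x":0.42,"normalized_y":0.9}', 'normalized_x')
// evaluates to 0.42; rows whose $properties lack the key come back as 0, which
// consumers may want to treat as missing rather than as a top-left click.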

View file

@@ -0,0 +1,96 @@
package charts
import (
"fmt"
"openreplay/backend/pkg/analytics/db"
"strings"
)
type HeatmapSessionResponse struct {
SessionID uint64 `json:"session_id"`
StartTs uint64 `json:"start_ts"`
Duration uint32 `json:"duration"`
EventTimestamp uint64 `json:"event_timestamp"`
}
type HeatmapSessionQueryBuilder struct{}
func (h HeatmapSessionQueryBuilder) Execute(p Payload, conn db.Connector) (interface{}, error) {
shortestQ, err := h.buildQuery(p)
if err != nil {
return nil, err
}
var sid uint64
var startTs uint64
var duration uint32
var eventTs uint64
row, err := conn.QueryRow(shortestQ)
if err != nil {
return nil, err
}
if err := row.Scan(&sid, &startTs, &duration, &eventTs); err != nil {
return nil, err
}
// TODO get mob urls
return HeatmapSessionResponse{
SessionID: sid,
StartTs: startTs,
Duration: duration,
EventTimestamp: eventTs,
}, nil
}
func (h HeatmapSessionQueryBuilder) buildQuery(p Payload) (string, error) {
if len(p.MetricPayload.Series) == 0 {
return "", fmt.Errorf("series empty")
}
s := p.MetricPayload.Series[0]
var globalFilters, eventFilters []Filter
for _, flt := range s.Filter.Filters {
if flt.IsEvent {
eventFilters = append(eventFilters, flt)
} else {
globalFilters = append(globalFilters, flt)
}
}
globalConds, _ := buildEventConditions(globalFilters, BuildConditionsOptions{
DefinedColumns: mainColumns,
MainTableAlias: "e",
})
eventConds, _ := buildEventConditions(eventFilters, BuildConditionsOptions{
DefinedColumns: mainColumns,
MainTableAlias: "e",
})
base := []string{
fmt.Sprintf("e.created_at >= toDateTime(%d/1000)", p.MetricPayload.StartTimestamp),
fmt.Sprintf("e.created_at < toDateTime(%d/1000)", p.MetricPayload.EndTimestamp+86400000),
fmt.Sprintf("e.project_id = %d", p.ProjectId),
"s.duration > 500",
"e.`$event_name` = 'LOCATION'",
}
base = append(base, eventConds...)
base = append(base, globalConds...)
where := strings.Join(base, " AND ")
q := fmt.Sprintf(`
SELECT
s.session_id,
toUnixTimestamp(s.datetime) * 1000 as startTs,
s.duration,
toUnixTimestamp(e.created_at) * 1000 as eventTs
FROM product_analytics.events AS e
JOIN experimental.sessions AS s USING(session_id)
WHERE %s
ORDER BY e.created_at ASC, s.duration ASC
LIMIT 1;`, where)
return q, nil
}

View file

@@ -0,0 +1,241 @@
package charts
import (
"fmt"
"log"
"openreplay/backend/pkg/analytics/db"
"strings"
)
var validMetricOfValues = map[MetricOfTable]struct{}{
MetricOfTableBrowser: {},
MetricOfTableDevice: {},
MetricOfTableCountry: {},
MetricOfTableUserId: {},
MetricOfTableLocation: {},
MetricOfTableReferrer: {},
MetricOfTableFetch: {},
}
type TableQueryBuilder struct{}
type TableValue struct {
Name string `json:"name"`
Total uint64 `json:"total"`
}
type TableResponse struct {
Total uint64 `json:"total"`
Count uint64 `json:"count"`
Values []TableValue `json:"values"`
}
const (
MetricFormatSessionCount = "sessionCount"
MetricFormatUserCount = "userCount"
nilUUIDString = "00000000-0000-0000-0000-000000000000"
)
var propertySelectorMap = map[string]string{
string(MetricOfTableLocation): "JSONExtractString(toString(main.$properties), 'url_path') AS metric_value",
//string(MetricOfTableUserId): "if(empty(sessions.user_id), 'Anonymous', sessions.user_id) AS metric_value",
string(MetricOfTableUserId): "if(empty(sessions.user_id) OR sessions.user_id IS NULL, 'Anonymous', sessions.user_id) AS metric_value",
string(MetricOfTableBrowser): "main.$browser AS metric_value",
//string(MetricOfTableDevice): "sessions.user_device AS metric_value",
string(MetricOfTableDevice): "if(empty(sessions.user_device) OR sessions.user_device IS NULL, 'Undefined', sessions.user_device) AS metric_value",
string(MetricOfTableCountry): "toString(sessions.user_country) AS metric_value",
string(MetricOfTableReferrer): "main.$referrer AS metric_value",
string(MetricOfTableFetch): "JSONExtractString(toString(main.$properties), 'url_path') AS metric_value",
}
var mainColumns = map[string]string{
"userBrowser": "$browser",
"userDevice": "sessions.user_device",
"referrer": "$referrer",
"fetchDuration": "$duration_s",
"ISSUE": "issue_type",
}
func (t TableQueryBuilder) Execute(p Payload, conn db.Connector) (interface{}, error) {
if p.MetricOf == "" {
return nil, fmt.Errorf("MetricOf is empty")
}
if _, ok := validMetricOfValues[MetricOfTable(p.MetricOf)]; !ok {
return nil, fmt.Errorf("invalid MetricOf value: %s", p.MetricOf)
}
metricFormat := p.MetricFormat
if metricFormat != MetricFormatSessionCount && metricFormat != MetricFormatUserCount {
metricFormat = MetricFormatSessionCount
}
query, err := t.buildQuery(p, metricFormat)
if err != nil {
return nil, fmt.Errorf("error building query: %w", err)
}
rows, err := conn.Query(query)
if err != nil {
log.Printf("Error executing query: %s\nQuery: %s", err, query)
return nil, fmt.Errorf("error executing query: %w", err)
}
defer rows.Close()
var overallTotalMetricValues uint64
var overallCount uint64
values := make([]TableValue, 0)
firstRow := true
for rows.Next() {
var (
name string
valueSpecificCount uint64
tempOverallTotalMetricValues uint64
tempOverallCount uint64
)
if err := rows.Scan(&tempOverallTotalMetricValues, &name, &valueSpecificCount, &tempOverallCount); err != nil {
return nil, fmt.Errorf("error scanning row: %w", err)
}
if firstRow {
overallTotalMetricValues = tempOverallTotalMetricValues
overallCount = tempOverallCount
firstRow = false
}
values = append(values, TableValue{Name: name, Total: valueSpecificCount})
}
if err := rows.Err(); err != nil {
return nil, fmt.Errorf("error iterating rows: %w", err)
}
return &TableResponse{
Total: overallTotalMetricValues,
Count: overallCount,
Values: values,
}, nil
}
func (t TableQueryBuilder) buildQuery(r Payload, metricFormat string) (string, error) {
if len(r.Series) == 0 {
return "", fmt.Errorf("payload Series cannot be empty")
}
s := r.Series[0]
// sessions_data WHERE conditions
durConds, _ := buildDurationWhere(s.Filter.Filters)
sessFilters, _ := filterOutTypes(s.Filter.Filters, []FilterType{FilterDuration, FilterUserAnonymousId})
sessConds, evtNames := buildEventConditions(sessFilters, BuildConditionsOptions{DefinedColumns: mainColumns, MainTableAlias: "main"})
sessionDataConds := append(durConds, sessConds...)
// date range for sessions_data
sessionDataConds = append(sessionDataConds,
fmt.Sprintf("main.created_at BETWEEN toDateTime(%d/1000) AND toDateTime(%d/1000)", r.StartTimestamp, r.EndTimestamp),
)
// clean empty
var sdClean []string
for _, c := range sessionDataConds {
if strings.TrimSpace(c) != "" {
sdClean = append(sdClean, c)
}
}
sessionDataWhere := ""
if len(sdClean) > 0 {
sessionDataWhere = "WHERE " + strings.Join(sdClean, " AND ")
}
if len(evtNames) > 0 {
sessionDataWhere += fmt.Sprintf(" AND main.$event_name IN ('%s')", strings.Join(evtNames, "','"))
}
// filtered_data WHERE conditions
propSel, ok := propertySelectorMap[r.MetricOf]
if !ok {
propSel = fmt.Sprintf("JSONExtractString(toString(main.$properties), '%s') AS metric_value", r.MetricOf)
}
parts := strings.SplitN(propSel, " AS ", 2)
propertyExpr := parts[0]
tAgg := "main.session_id"
specConds := []string{}
if metricFormat == MetricFormatUserCount {
tAgg = "if(empty(sessions.user_id), toString(sessions.user_uuid), sessions.user_id)"
specConds = append(specConds,
fmt.Sprintf("NOT (empty(sessions.user_id) AND (sessions.user_uuid IS NULL OR sessions.user_uuid = '%s'))", nilUUIDString),
)
}
// metric-specific filter
_, mFilt := filterOutTypes(s.Filter.Filters, []FilterType{FilterType(r.MetricOf)})
metricCond := eventNameCondition("", r.MetricOf)
if len(mFilt) > 0 {
// TODO: apply the metric-specific filter, e.g.:
//conds, _ := buildEventConditions(mFilt, BuildConditionsOptions{DefinedColumns: map[string]string{"userId": "user_id"}, MainTableAlias: "main"})
//metricCond = strings.Join(conds, " AND ")
}
filteredConds := []string{
fmt.Sprintf("main.project_id = %d", r.ProjectId),
metricCond,
fmt.Sprintf("main.created_at BETWEEN toDateTime(%d/1000) AND toDateTime(%d/1000)", r.StartTimestamp, r.EndTimestamp),
}
filteredConds = append(filteredConds, specConds...)
// clean empty
var fClean []string
for _, c := range filteredConds {
if strings.TrimSpace(c) != "" {
fClean = append(fClean, c)
}
}
filteredWhere := ""
if len(fClean) > 0 {
filteredWhere = "WHERE " + strings.Join(fClean, " AND ")
}
limit := r.Limit
if limit <= 0 {
limit = 10
}
page := r.Page
if page <= 0 {
page = 1 // guard against zero or negative pages producing a negative OFFSET
}
offset := (page - 1) * limit
query := fmt.Sprintf(`
WITH sessions_data AS (
SELECT session_id
FROM product_analytics.events AS main
JOIN experimental.sessions AS sessions USING (session_id)
%s
GROUP BY session_id
),
filtered_data AS (
SELECT %s AS name, %s AS session_id
FROM product_analytics.events AS main
JOIN sessions_data USING (session_id)
JOIN experimental.sessions AS sessions USING (session_id)
%s
),
totals AS (
SELECT count() AS overall_total_metric_values,
countDistinct(session_id) AS overall_total_count
FROM filtered_data
),
grouped_values AS (
SELECT name,
countDistinct(session_id) AS value_count
FROM filtered_data
GROUP BY name
)
SELECT t.overall_total_metric_values,
g.name,
g.value_count,
t.overall_total_count
FROM grouped_values AS g
CROSS JOIN totals AS t
ORDER BY g.value_count DESC
LIMIT %d OFFSET %d;`,
sessionDataWhere,
propertyExpr,
tAgg,
filteredWhere,
limit,
offset,
)
return query, nil
}
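// Pagination example (illustrative): limit 10, page 3 yields OFFSET 20, i.e. ranks
// 21-30 of grouped_values ordered by value_count DESC, while totals stays global
// because it is computed before LIMIT/OFFSET via the CROSS JOIN.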

View file

@@ -0,0 +1,188 @@
package charts
import (
"fmt"
"log"
"strings"
"openreplay/backend/pkg/analytics/db"
)
type TableErrorsQueryBuilder struct{}
type ErrorChartPoint struct {
Timestamp int64 `json:"timestamp"`
Count uint64 `json:"count"`
}
type ErrorItem struct {
ErrorID string `json:"errorId"`
Name string `json:"name"`
Message string `json:"message"`
Users uint64 `json:"users"`
Total uint64 `json:"total"`
Sessions uint64 `json:"sessions"`
FirstOccurrence int64 `json:"firstOccurrence"`
LastOccurrence int64 `json:"lastOccurrence"`
Chart []ErrorChartPoint `json:"chart"`
}
type TableErrorsResponse struct {
Total uint64 `json:"total"`
Errors []ErrorItem `json:"errors"`
}
func (t TableErrorsQueryBuilder) Execute(p Payload, conn db.Connector) (interface{}, error) {
query, err := t.buildQuery(p)
if err != nil {
return nil, err
}
rows, err := conn.Query(query)
if err != nil {
log.Printf("Error executing query: %s\nQuery: %s", err, query)
return nil, err
}
defer rows.Close()
var resp TableErrorsResponse
for rows.Next() {
var e ErrorItem
var ts []int64
var cs []uint64
if err := rows.Scan(
&e.ErrorID, &e.Name, &e.Message,
&e.Users, &e.Total, &e.Sessions,
&e.FirstOccurrence, &e.LastOccurrence,
&ts, &cs,
); err != nil {
return nil, err
}
for i := range ts {
e.Chart = append(e.Chart, ErrorChartPoint{Timestamp: ts[i], Count: cs[i]})
}
resp.Errors = append(resp.Errors, e)
}
resp.Total = uint64(len(resp.Errors))
return resp, nil
}
func (t TableErrorsQueryBuilder) buildQuery(p Payload) (string, error) {
if len(p.Series) == 0 {
return "", fmt.Errorf("payload Series cannot be empty")
}
density := p.Density
if density < 2 {
density = 7
}
durMs := p.EndTimestamp - p.StartTimestamp
stepMs := durMs / int64(density-1)
startMs := (p.StartTimestamp / 1000) * 1000
endMs := (p.EndTimestamp / 1000) * 1000
limit := p.Limit
if limit <= 0 {
limit = 10
}
page := p.Page
if page <= 0 {
page = 1
}
offset := (page - 1) * limit
ef, en := buildEventConditions(
p.Series[0].Filter.Filters,
BuildConditionsOptions{DefinedColumns: mainColumns},
)
conds := []string{
"`$event_name` = 'ERROR'",
fmt.Sprintf("project_id = %d", p.ProjectId),
fmt.Sprintf("created_at >= toDateTime(%d/1000)", startMs),
fmt.Sprintf("created_at <= toDateTime(%d/1000)", endMs),
}
if len(ef) > 0 {
conds = append(conds, ef...)
}
if len(en) > 0 {
conds = append(conds, "`$event_name` IN ("+buildInClause(en)+")")
}
whereClause := strings.Join(conds, " AND ")
sql := fmt.Sprintf(`WITH
events AS (
SELECT
error_id,
JSONExtractString(toString("$properties"), 'name') AS name,
JSONExtractString(toString("$properties"), 'message') AS message,
distinct_id,
session_id,
created_at
FROM product_analytics.events
WHERE %s
),
sessions_per_interval AS (
SELECT
error_id,
toUInt64(%d + (toUInt64((toUnixTimestamp64Milli(created_at) - %d) / %d) * %d)) AS bucket_ts,
countDistinct(session_id) AS session_count
FROM events
GROUP BY error_id, bucket_ts
),
buckets AS (
SELECT
toUInt64(generate_series) AS bucket_ts
FROM generate_series(
%d,
%d,
%d
)
),
error_meta AS (
SELECT
error_id,
name,
message,
countDistinct(distinct_id) AS users,
count() AS total,
countDistinct(session_id) AS sessions,
min(created_at) AS first_occurrence,
max(created_at) AS last_occurrence
FROM events
GROUP BY error_id, name, message
),
error_chart AS (
SELECT
e.error_id AS error_id,
groupArray(b.bucket_ts) AS timestamps,
groupArray(coalesce(s.session_count, 0)) AS counts
FROM (SELECT DISTINCT error_id FROM events) AS e
CROSS JOIN buckets AS b
LEFT JOIN sessions_per_interval AS s
ON s.error_id = e.error_id
AND s.bucket_ts = b.bucket_ts
GROUP BY e.error_id
)
SELECT
m.error_id,
m.name,
m.message,
m.users,
m.total,
m.sessions,
toUnixTimestamp64Milli(toDateTime64(m.first_occurrence, 3)) AS first_occurrence,
toUnixTimestamp64Milli(toDateTime64(m.last_occurrence, 3)) AS last_occurrence,
ec.timestamps,
ec.counts
FROM error_meta AS m
LEFT JOIN error_chart AS ec
ON m.error_id = ec.error_id
ORDER BY m.last_occurrence DESC
LIMIT %d OFFSET %d;`,
whereClause,
startMs, startMs, stepMs, stepMs, // New formula parameters
startMs, endMs, stepMs,
limit, offset,
)
return sql, nil
}
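// Bucketing example (illustrative numbers): with startMs = 1_700_000_000_000 and
// stepMs = 60_000, an event at 1_700_000_125_000 lands in bucket
// 1_700_000_000_000 + floor(125_000 / 60_000) * 60_000 = 1_700_000_120_000,
// matching one of the generate_series buckets exactly.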

View file

@@ -0,0 +1,147 @@
package charts
import (
"fmt"
"log"
"openreplay/backend/pkg/analytics/db"
"sort"
"strings"
)
type TimeSeriesQueryBuilder struct{}
func (t TimeSeriesQueryBuilder) Execute(p Payload, conn db.Connector) (interface{}, error) {
data := make(map[uint64]map[string]uint64)
for _, series := range p.Series {
query, err := t.buildQuery(p, series)
if err != nil {
log.Printf("buildQuery %s: %v", series.Name, err)
return nil, fmt.Errorf("series %s: %v", series.Name, err)
}
rows, err := conn.Query(query)
if err != nil {
log.Printf("exec %s: %v", series.Name, err)
return nil, fmt.Errorf("series %s: %v", series.Name, err)
}
var pts []DataPoint
for rows.Next() {
var dp DataPoint
if err := rows.Scan(&dp.Timestamp, &dp.Count); err != nil {
rows.Close()
return nil, err
}
pts = append(pts, dp)
}
rows.Close()
filled := FillMissingDataPoints(p.StartTimestamp, p.EndTimestamp, p.Density, DataPoint{}, pts, 1000)
for _, dp := range filled {
if data[dp.Timestamp] == nil {
data[dp.Timestamp] = map[string]uint64{}
}
data[dp.Timestamp][series.Name] = dp.Count
}
}
var timestamps []uint64
for ts := range data {
timestamps = append(timestamps, ts)
}
sort.Slice(timestamps, func(i, j int) bool { return timestamps[i] < timestamps[j] })
var result []map[string]interface{}
for _, ts := range timestamps {
row := map[string]interface{}{"timestamp": ts}
for _, series := range p.Series {
row[series.Name] = data[ts][series.Name]
}
result = append(result, row)
}
return result, nil
}
func (t TimeSeriesQueryBuilder) buildQuery(p Payload, s Series) (string, error) {
switch p.MetricOf {
case "sessionCount":
return t.buildTimeSeriesQuery(p, s, "sessionCount", "session_id"), nil
case "userCount":
return t.buildTimeSeriesQuery(p, s, "userCount", "user_id"), nil
default:
return "", fmt.Errorf("unsupported metric %q", p.MetricOf)
}
}
func (t TimeSeriesQueryBuilder) buildTimeSeriesQuery(p Payload, s Series, metric, idField string) string {
sub := t.buildSubQuery(p, s, metric)
step := int(getStepSize(p.StartTimestamp, p.EndTimestamp, p.Density, false, 1000)) * 1000
return fmt.Sprintf(
"SELECT gs.generate_series AS timestamp, COALESCE(COUNT(DISTINCT ps.%s),0) AS count "+
"FROM generate_series(%d,%d,%d) AS gs "+
"LEFT JOIN (%s) AS ps ON TRUE "+
"WHERE ps.datetime >= toDateTime(timestamp/1000) AND ps.datetime < toDateTime((timestamp+%d)/1000) "+
"GROUP BY timestamp ORDER BY timestamp;",
idField, p.StartTimestamp, p.EndTimestamp, step, sub, step,
)
}
func (t TimeSeriesQueryBuilder) buildSubQuery(p Payload, s Series, metric string) string {
evConds, evNames := buildEventConditions(s.Filter.Filters, BuildConditionsOptions{
DefinedColumns: mainColumns,
MainTableAlias: "main",
PropertiesColumnName: "$properties",
})
sessConds := buildSessionConditions(s.Filter.Filters)
staticEvt := buildStaticEventWhere(p)
sessWhere, sessJoin := buildStaticSessionWhere(p, sessConds)
if len(evConds) == 0 && len(evNames) == 0 {
if metric == "sessionCount" {
return fmt.Sprintf(
"SELECT s.session_id AS session_id, s.datetime AS datetime "+
"FROM experimental.sessions AS s WHERE %s",
sessJoin,
)
}
return fmt.Sprintf(
"SELECT multiIf(s.user_id!='',s.user_id,s.user_anonymous_id!='',s.user_anonymous_id,toString(s.user_uuid)) AS user_id, s.datetime AS datetime "+
"FROM experimental.sessions AS s WHERE %s",
sessJoin,
)
}
uniq := make([]string, 0, len(evNames))
for _, name := range evNames {
if !contains(uniq, name) {
uniq = append(uniq, name)
}
}
nameClause := ""
if len(uniq) > 0 {
nameClause = fmt.Sprintf("AND main.`$event_name` IN (%s) ", buildInClause(uniq))
}
having := ""
if len(evConds) > 0 {
having = buildHavingClause(evConds)
}
whereEvt := staticEvt
if len(evConds) > 0 {
whereEvt += " AND " + strings.Join(evConds, " AND ")
}
proj := map[string]string{
"sessionCount": "s.session_id AS session_id",
"userCount": "multiIf(s.user_id!='',s.user_id,s.user_anonymous_id!='',s.user_anonymous_id,toString(s.user_uuid)) AS user_id",
}[metric] + ", s.datetime AS datetime"
return fmt.Sprintf(
"SELECT %s FROM (SELECT main.session_id, MIN(main.created_at) AS first_event_ts, MAX(main.created_at) AS last_event_ts "+
"FROM product_analytics.events AS main "+
"WHERE %s AND main.session_id IN (SELECT s.session_id FROM experimental.sessions AS s WHERE %s) %s "+
"GROUP BY main.session_id %s "+
"INNER JOIN (SELECT * FROM experimental.sessions AS s WHERE %s) AS s ON s.session_id=f.session_id",
proj, whereEvt, sessWhere, nameClause, having, sessJoin,
)
}
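// Step example (illustrative): for a 1-hour window at density 7,
// getStepSize(start, end, 7, false, 1000) = 3600/6 = 600 s, so the generate_series
// stride in buildTimeSeriesQuery is 600_000 ms and each bucket left-joins the
// events falling in [t, t+600 s).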

View file

@@ -0,0 +1,764 @@
package charts
import (
"fmt"
"math"
"openreplay/backend/pkg/analytics/db"
"sort"
"strconv"
"strings"
"time"
)
// Node represents a point in the journey diagram.
type Node struct {
Depth int `json:"depth"`
Name string `json:"name"`
EventType string `json:"eventType"`
ID int `json:"id"`
StartingNode bool `json:"startingNode"`
}
// Link represents a transition between nodes.
type Link struct {
EventType string `json:"eventType"`
SessionsCount int `json:"sessionsCount"`
Value float64 `json:"value"`
Source int `json:"source"`
Target int `json:"target"`
}
// JourneyData holds all nodes and links for the response.
type JourneyData struct {
Nodes []Node `json:"nodes"`
Links []Link `json:"links"`
}
// JourneyResponse is the API response structure.
type JourneyResponse struct {
Data JourneyData `json:"data"`
}
// UserJourneyQueryBuilder builds and executes the journey query.
type UserJourneyQueryBuilder struct{}
func (h UserJourneyQueryBuilder) Execute(p Payload, conn db.Connector) (interface{}, error) {
q, err := h.buildQuery(p)
if err != nil {
return nil, err
}
rows, err := conn.Query(q)
if err != nil {
return nil, err
}
defer rows.Close()
type row struct {
Stage int64
CurrentEventName string
CurrentEventProperty string
PrevEventName string
PrevEventProperty string
SessionsCount uint64
}
// Parse all rows into a slice
var rawData []row
for rows.Next() {
var r row
if err := rows.Scan(
&r.Stage,
&r.CurrentEventName,
&r.CurrentEventProperty,
&r.PrevEventName,
&r.PrevEventProperty,
&r.SessionsCount,
); err != nil {
return nil, err
}
if r.SessionsCount == 0 {
continue
}
rawData = append(rawData, r)
}
// Group data by stage
dataByStage := make(map[int64][]row)
var minStage int64 = 0
var maxStage int64 = 0
for _, r := range rawData {
dataByStage[r.Stage] = append(dataByStage[r.Stage], r)
if r.Stage > maxStage {
maxStage = r.Stage
}
if r.Stage < minStage {
minStage = r.Stage
}
}
// Calculate total sessions per stage
stageTotals := make(map[int64]uint64)
for stage, stageRows := range dataByStage {
for _, r := range stageRows {
stageTotals[stage] += r.SessionsCount
}
}
// Determine base count for percentage calculations
// We'll use the starting point (usually stage 1) as our base
var baseSessionsCount uint64
if count, exists := stageTotals[1]; exists {
baseSessionsCount = count
} else {
// If stage 1 doesn't exist, use the first available positive stage
for stage := int64(0); stage <= maxStage; stage++ {
if count, exists := stageTotals[stage]; exists {
baseSessionsCount = count
break
}
}
}
if baseSessionsCount == 0 {
baseSessionsCount = 1 // Prevent division by zero
}
// Number of top nodes to display per stage
topLimit := int(p.Rows)
if topLimit <= 0 {
topLimit = 5 // Default if not specified
}
// Step 1: Determine the top paths at each stage based on destination
type pathKey struct {
eventName string
eventProp string
}
// Map to store top paths for each stage
topPathsByStage := make(map[int64]map[pathKey]bool)
pathCountsByStage := make(map[int64]map[pathKey]uint64)
for stage := minStage; stage <= maxStage; stage++ {
// Skip if this stage has no data
if _, exists := dataByStage[stage]; !exists {
continue
}
// Sort rows within each stage by session count (descending)
sort.Slice(dataByStage[stage], func(i, j int) bool {
return dataByStage[stage][i].SessionsCount > dataByStage[stage][j].SessionsCount
})
// Initialize maps for this stage
topPathsByStage[stage] = make(map[pathKey]bool)
pathCountsByStage[stage] = make(map[pathKey]uint64)
// First, aggregate by path to get total sessions per path
for _, r := range dataByStage[stage] {
key := pathKey{eventName: r.CurrentEventName, eventProp: r.CurrentEventProperty}
pathCountsByStage[stage][key] += r.SessionsCount
}
// Then sort paths by session count
type pathCount struct {
path pathKey
count uint64
}
var paths []pathCount
for path, count := range pathCountsByStage[stage] {
paths = append(paths, pathCount{path: path, count: count})
}
// Sort descending by count
sort.Slice(paths, func(i, j int) bool {
return paths[i].count > paths[j].count
})
// Mark top paths - take exactly topLimit or all if fewer available
for i, pc := range paths {
if i < topLimit {
topPathsByStage[stage][pc.path] = true
}
}
}
// Step 2: Create a normalized sequential depth mapping
// First, gather all stages that have data
var stagesWithData []int64
for stage := range dataByStage {
stagesWithData = append(stagesWithData, stage)
}
// Sort stages
sort.Slice(stagesWithData, func(i, j int) bool {
return stagesWithData[i] < stagesWithData[j]
})
var startingStage int64
for _, s := range stagesWithData {
if s > 0 {
startingStage = s
break
}
}
// Create a mapping from logical stage to display depth (ensuring no gaps)
stageToDepth := make(map[int64]int)
for i, stage := range stagesWithData {
stageToDepth[stage] = i
}
// Determine depth of central node (stage 1 or equivalent)
var centralDepth int
if depth, exists := stageToDepth[1]; exists {
centralDepth = depth
} else {
// If stage 1 doesn't exist, use the first positive stage
for _, stage := range stagesWithData {
if stage > 0 {
centralDepth = stageToDepth[stage]
break
}
}
}
// Step 3: Create nodes with normalized depths
var nodes []Node
var links []Link
nodeID := 0
// Maps to track nodes and sessions
nodeMap := make(map[string]int) // Stage|EventName|EventProp → nodeID
othersNodes := make(map[int64]int) // stage → "Others" nodeID
dropNodes := make(map[int64]int) // stage → "Drop" nodeID
incomingSessions := make(map[int]uint64) // nodeID → incoming sessions
outgoingSessions := make(map[int]uint64) // nodeID → outgoing sessions
// Create all nodes using normalized depths
for _, stage := range stagesWithData {
displayDepth := stageToDepth[stage]
// Create regular nodes for top paths
for path := range topPathsByStage[stage] {
nodeKey := fmt.Sprintf("%d|%s|%s", stage, path.eventName, path.eventProp)
nodeMap[nodeKey] = nodeID
nodes = append(nodes, Node{
ID: nodeID,
Depth: displayDepth,
Name: path.eventProp,
EventType: path.eventName,
StartingNode: stage == startingStage,
})
// For the central stage (usually stage 1) or first stage, set incoming sessions
if (stage == 1) || (stage == minStage && minStage != 1) {
incomingSessions[nodeID] = pathCountsByStage[stage][path]
}
nodeID++
}
// Calculate if we need an "Others" node (when total paths > topLimit)
totalPaths := len(pathCountsByStage[stage])
if totalPaths > topLimit {
// Calculate sessions that will go to Others
othersCount := uint64(0)
for path, count := range pathCountsByStage[stage] {
if !topPathsByStage[stage][path] {
othersCount += count
}
}
// Only create Others if it has sessions
if othersCount > 0 {
othersNodes[stage] = nodeID
nodes = append(nodes, Node{
ID: nodeID,
Depth: displayDepth,
Name: "other",
EventType: "OTHER",
StartingNode: stage == startingStage,
})
// For the central stage or first stage, set incoming sessions for Others
if (stage == 1) || (stage == minStage && minStage != 1) {
incomingSessions[nodeID] = othersCount
}
nodeID++
}
}
}
// Step 4: Create links between adjacent nodes only
// Use a map to deduplicate links
type linkKey struct {
src int
tgt int
}
linkSessions := make(map[linkKey]uint64)
linkTypes := make(map[linkKey]string)
// For each stage (except the first), create links from the previous stage
for i := 1; i < len(stagesWithData); i++ {
currentStage := stagesWithData[i]
prevStage := stagesWithData[i-1]
for _, r := range dataByStage[currentStage] {
// Skip if previous stage doesn't match expected
if r.Stage != currentStage {
continue
}
// Determine source node
prevPathKey := fmt.Sprintf("%d|%s|%s", prevStage, r.PrevEventName, r.PrevEventProperty)
srcID, hasSrc := nodeMap[prevPathKey]
if !hasSrc {
// If source isn't a top node, use Others from previous stage
if othersID, hasOthers := othersNodes[prevStage]; hasOthers {
srcID = othersID
hasSrc = true
} else {
// Skip if we can't find a source
continue
}
}
// Determine target node
curPath := pathKey{eventName: r.CurrentEventName, eventProp: r.CurrentEventProperty}
var tgtID int
var hasTgt bool
// Check if this path is in the top paths for this stage
if topPathsByStage[currentStage][curPath] {
// It's a top node
curPathKey := fmt.Sprintf("%d|%s|%s", currentStage, r.CurrentEventName, r.CurrentEventProperty)
tgtID = nodeMap[curPathKey]
hasTgt = true
} else {
// It's part of Others
if othersID, hasOthers := othersNodes[currentStage]; hasOthers {
tgtID = othersID
hasTgt = true
}
}
if !hasSrc || !hasTgt {
continue
}
// Update session tracking
incomingSessions[tgtID] += r.SessionsCount
outgoingSessions[srcID] += r.SessionsCount
// Record link (deduplicating)
lk := linkKey{src: srcID, tgt: tgtID}
linkSessions[lk] += r.SessionsCount
// Prefer non-OTHER event type
if linkTypes[lk] == "" || linkTypes[lk] == "OTHER" {
linkTypes[lk] = r.CurrentEventName
}
}
}
// Create deduplicated links with proper percentages
for lk, count := range linkSessions {
// Calculate percentage based on baseSessionsCount
percent := math.Round(float64(count)*10000/float64(baseSessionsCount)) / 100
links = append(links, Link{
Source: lk.src,
Target: lk.tgt,
SessionsCount: int(count),
Value: percent,
EventType: linkTypes[lk],
})
}
// Step 5: Calculate drops and create drop nodes (only for stages ≥ 0)
// Process forward drops (positive stages only)
for i := 0; i < len(stagesWithData)-1; i++ {
stage := stagesWithData[i]
// Skip negative stages for drops
if stage < 0 {
continue
}
// Calculate new drops at this stage
stageDrops := uint64(0)
dropsFromNode := make(map[int]uint64) // nodeID -> drop count
for _, node := range nodes {
nodeDepth := node.Depth
// Skip if this node isn't in the current stage
if nodeDepth != stageToDepth[stage] {
continue
}
incoming := incomingSessions[node.ID]
outgoing := outgoingSessions[node.ID]
if incoming > outgoing {
dropCount := incoming - outgoing
dropsFromNode[node.ID] = dropCount
stageDrops += dropCount
}
}
// Skip if no drops
if stageDrops == 0 {
continue
}
// Determine next stage depth for drop node positioning
var dropDepth int
if i+1 < len(stagesWithData) {
dropDepth = stageToDepth[stagesWithData[i+1]]
} else {
dropDepth = stageToDepth[stage] + 1
}
// Create drop node
dropNodes[stage] = nodeID
nodes = append(nodes, Node{
ID: nodeID,
Depth: dropDepth,
Name: "drop",
EventType: "DROP",
})
// Create links from nodes with drops to the drop node
for nid, dropCount := range dropsFromNode {
if dropCount == 0 {
continue
}
// Calculate percentage based on baseSessionsCount
percent := math.Round(float64(dropCount)*10000/float64(baseSessionsCount)) / 100
links = append(links, Link{
Source: nid,
Target: nodeID,
SessionsCount: int(dropCount),
Value: percent,
EventType: "DROP",
})
}
// Link previous drop node to current drop node to show accumulation
if i > 0 {
for j := i - 1; j >= 0; j-- {
prevStage := stagesWithData[j]
if prevDropID, hasPrevDrop := dropNodes[prevStage]; hasPrevDrop {
// Link previous drop to current drop to show accumulation
prevDropCount := uint64(0)
for _, link := range links {
if link.Target == prevDropID && link.EventType == "DROP" {
prevDropCount += uint64(link.SessionsCount)
}
}
percent := math.Round(float64(prevDropCount)*10000/float64(baseSessionsCount)) / 100
links = append(links, Link{
Source: prevDropID,
Target: nodeID,
SessionsCount: int(prevDropCount),
Value: percent,
EventType: "DROP",
})
break
}
}
}
nodeID++
}
// Filter out nodes with no connections
nodeHasConnection := make(map[int]bool)
for _, link := range links {
nodeHasConnection[link.Source] = true
nodeHasConnection[link.Target] = true
}
// Make sure central nodes are included even if they don't have links
for _, node := range nodes {
if node.Depth == centralDepth {
nodeHasConnection[node.ID] = true
}
}
var filteredNodes []Node
for _, node := range nodes {
if nodeHasConnection[node.ID] {
filteredNodes = append(filteredNodes, node)
}
}
// Reassign IDs to be sequential
nodeIDMap := make(map[int]int)
finalNodes := make([]Node, 0, len(filteredNodes))
for newID, node := range filteredNodes {
nodeIDMap[node.ID] = newID
node.ID = newID
finalNodes = append(finalNodes, node)
}
// Update link references
finalLinks := make([]Link, 0, len(links))
for _, link := range links {
srcID, srcExists := nodeIDMap[link.Source]
tgtID, tgtExists := nodeIDMap[link.Target]
if srcExists && tgtExists {
link.Source = srcID
link.Target = tgtID
finalLinks = append(finalLinks, link)
}
}
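// The remap keeps node IDs contiguous; Sankey renderers typically index nodes
// by their position in the array, so gaps left by the filtered-out nodes would
// otherwise mis-wire the links (an assumption about the consuming front end).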
return JourneyData{
Nodes: finalNodes,
Links: finalLinks,
}, nil
}
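// percentOfBase is an illustrative sketch, not part of the original change: it
// captures the rounding convention used throughout this file, a share of the
// base session count rounded to two decimal places.
func percentOfBase(count, base uint64) float64 {
	if base == 0 {
		return 0 // an empty journey has no meaningful percentages
	}
	return math.Round(float64(count)*10000/float64(base)) / 100
}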
func (h UserJourneyQueryBuilder) buildQuery(p Payload) (string, error) {
// prepare event list filter
events := p.MetricValue
if len(events) == 0 {
events = []string{"LOCATION"}
}
vals := make([]string, len(events))
for i, v := range events {
vals[i] = fmt.Sprintf("'%s'", v)
}
laterCond := fmt.Sprintf("e.\"$event_name\" IN (%s)", strings.Join(vals, ","))
// build start and exclude conditions
startConds, _ := buildEventConditions(p.StartPoint, BuildConditionsOptions{DefinedColumns: mainColumns, MainTableAlias: "e"})
excludeConds, _ := buildEventConditions(p.Exclude, BuildConditionsOptions{DefinedColumns: mainColumns, MainTableAlias: "e"})
// quote properties column correctly
fixProps := func(conds []string) []string {
for i, c := range conds {
conds[i] = strings.ReplaceAll(c, "e.$properties", "e.\"$properties\"")
}
return conds
}
startConds = fixProps(startConds)
excludeConds = fixProps(excludeConds)
// extract global filters and duration from the first series
if len(p.MetricPayload.Series) == 0 {
	return "", fmt.Errorf("user journey query requires at least one series")
}
s := p.MetricPayload.Series[0]
var durationMin, durationMax int64
var okMin, okMax bool
var err error
var globalFilters []Filter
for _, flt := range s.Filter.Filters {
if flt.Type == "duration" {
if len(flt.Value) > 0 && flt.Value[0] != "" {
durationMin, err = strconv.ParseInt(flt.Value[0], 10, 64)
if err != nil {
return "", err
}
okMin = true
}
if len(flt.Value) > 1 && flt.Value[1] != "" {
durationMax, err = strconv.ParseInt(flt.Value[1], 10, 64)
if err != nil {
return "", err
}
okMax = true
}
continue
}
if flt.IsEvent {
continue
}
globalFilters = append(globalFilters, flt)
}
globalConds, _ := buildEventConditions(globalFilters, BuildConditionsOptions{DefinedColumns: mainColumns, MainTableAlias: "e"})
globalConds = fixProps(globalConds)
// assemble duration condition
var durCond string
if okMin && okMax {
durCond = fmt.Sprintf("ss.duration BETWEEN %d AND %d", durationMin, durationMax)
} else if okMin {
durCond = fmt.Sprintf("ss.duration >= %d", durationMin)
} else if okMax {
durCond = fmt.Sprintf("ss.duration <= %d", durationMax)
}
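// e.g. a UI range of [1000, 60000] yields "ss.duration BETWEEN 1000 AND 60000";
// the bounds are assumed to be milliseconds, matching the ms epoch timestamps
// used elsewhere in this builder.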
// determine starting event
var startEvent string
if len(p.StartPoint) > 0 {
startEvent = string(p.StartPoint[0].Type)
} else {
startEvent = events[0]
}
// assemble first_hits WHERE clause with optional duration
firstBase := []string{fmt.Sprintf("e.\"$event_name\" = '%s'", startEvent)}
if len(startConds) > 0 {
firstBase = append(firstBase, startConds...)
}
if len(globalConds) > 0 {
firstBase = append(firstBase, globalConds...)
}
firstBase = append(firstBase,
fmt.Sprintf("e.project_id = %d", p.ProjectId),
"e.session_id IS NOT NULL",
fmt.Sprintf("e.created_at BETWEEN toDateTime('%s') AND toDateTime('%s')",
time.Unix(p.StartTimestamp/1000, 0).UTC().Format("2006-01-02 15:04:05"),
time.Unix(p.EndTimestamp/1000, 0).UTC().Format("2006-01-02 15:04:05"),
),
)
if durCond != "" {
firstBase = append(firstBase, durCond)
}
// assemble journey WHERE clause
journeyBase := []string{laterCond}
if len(excludeConds) > 0 {
journeyBase = append(journeyBase, "NOT ("+strings.Join(excludeConds, " AND ")+")")
}
if len(globalConds) > 0 {
journeyBase = append(journeyBase, globalConds...)
}
journeyBase = append(journeyBase,
fmt.Sprintf("e.project_id = %d", p.ProjectId),
)
// format time bounds
startTime := time.Unix(p.StartTimestamp/1000, 0).UTC().Format("2006-01-02 15:04:05")
endTime := time.Unix(p.EndTimestamp/1000, 0).UTC().Format("2006-01-02 15:04:05")
// set column limits
previousColumns := p.PreviousColumns
if previousColumns <= 0 {
previousColumns = 0
}
maxCols := p.Columns
if maxCols > 0 {
maxCols++
}
// build final query
q := fmt.Sprintf(`WITH
first_hits AS (
SELECT e.session_id, MIN(e.created_at) AS start_time
FROM product_analytics.events AS e
JOIN experimental.sessions AS ss USING(session_id)
WHERE %s
GROUP BY e.session_id
),
journey_events_after AS (
SELECT
e.session_id,
e.distinct_id,
e."$event_name" AS event_name,
e.created_at,
CASE
WHEN e."$event_name" = 'LOCATION' THEN JSONExtractString(toString(e."$properties"), 'url_path')
WHEN e."$event_name" = 'CLICK' THEN JSONExtractString(toString(e."$properties"), 'label')
WHEN e."$event_name" = 'INPUT' THEN JSONExtractString(toString(e."$properties"), 'label')
ELSE NULL
END AS event_property
FROM product_analytics.events AS e
JOIN first_hits AS f USING(session_id)
WHERE
e.created_at >= f.start_time
AND e.created_at <= toDateTime('%s')
AND %s
),
journey_events_before AS (
SELECT
e.session_id,
e.distinct_id,
e."$event_name" AS event_name,
e.created_at,
CASE
WHEN e."$event_name" = 'LOCATION' THEN JSONExtractString(toString(e."$properties"), 'url_path')
WHEN e."$event_name" = 'CLICK' THEN JSONExtractString(toString(e."$properties"), 'label')
WHEN e."$event_name" = 'INPUT' THEN JSONExtractString(toString(e."$properties"), 'label')
ELSE NULL
END AS event_property
FROM product_analytics.events AS e
JOIN first_hits AS f USING(session_id)
WHERE
e.created_at < f.start_time
AND e.created_at >= toDateTime('%s')
AND %s
AND %d > 0
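-- the literal above is previousColumns: when it is zero this predicate is
-- false, so the backward CTE yields no rows and only the forward journey is kept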
),
journey_events_combined AS (
SELECT *, 1 AS direction FROM journey_events_after
UNION ALL
SELECT *, -1 AS direction FROM journey_events_before
),
event_with_prev AS (
SELECT
session_id,
distinct_id,
event_name,
event_property,
created_at,
direction,
any(event_name) OVER (PARTITION BY session_id ORDER BY created_at ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING) AS previous_event_name,
any(event_property) OVER (PARTITION BY session_id ORDER BY created_at ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING) AS previous_event_property
FROM journey_events_combined
),
staged AS (
SELECT
*,
CASE
WHEN direction = 1 THEN toInt64(sumIf(1, true) OVER (PARTITION BY session_id, direction ORDER BY created_at ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW))
WHEN direction = -1 THEN -1 * toInt64(sumIf(1, true) OVER (PARTITION BY session_id, direction ORDER BY created_at DESC ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW))
ELSE 0
END AS stage
FROM event_with_prev
)
SELECT
stage AS stage,
event_name AS current_event_name,
event_property AS current_event_property,
COALESCE(previous_event_name, '') AS previous_event_name,
COALESCE(previous_event_property, '') AS previous_event_property,
COUNT(DISTINCT session_id) AS sessions_count
FROM staged
WHERE stage <= %d AND stage >= -%d
GROUP BY
stage,
event_name,
event_property,
previous_event_name,
previous_event_property
ORDER BY stage, COUNT(DISTINCT session_id) DESC;`,
strings.Join(firstBase, " AND "),
endTime,
strings.Join(journeyBase, " AND "),
startTime,
strings.Join(journeyBase, " AND "),
previousColumns,
maxCols,
previousColumns,
)
return q, nil
}
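// Minimal usage sketch (hypothetical wiring; Payload is assumed to embed
// MetricPayload, matching the field accesses in buildQuery above):
//
//	p := Payload{
//	    ProjectId: 42,
//	    MetricPayload: MetricPayload{
//	        StartTimestamp: 1716422400000, // ms epoch
//	        EndTimestamp:   1716508800000,
//	        MetricValue:    []string{"LOCATION"},
//	        Columns:        5,
//	        Series:         []Series{{Name: "all sessions"}},
//	    },
//	}
//	q, err := UserJourneyQueryBuilder{}.buildQuery(p)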


@@ -1,21 +1,184 @@
package charts
import "openreplay/backend/pkg/analytics/cards"
type Table string
type Column string
type MetricType string
type FilterType string
type EventType string
type EventOrder string
const (
TableEvents Table = "product_analytics.events"
TableSessions Table = "experimental.sessions"
)
const (
ColEventTime Column = "main.created_at"
ColEventName Column = "main.`$event_name`"
ColEventProjectID Column = "main.project_id"
ColEventProperties Column = "main.`$properties`"
ColEventSessionID Column = "main.session_id"
ColEventURLPath Column = "main.url_path"
ColEventStatus Column = "main.status"
)
const (
ColSessionID Column = "s.session_id"
ColDuration Column = "s.duration"
ColUserCountry Column = "s.user_country"
ColUserCity Column = "s.user_city"
ColUserState Column = "s.user_state"
ColUserID Column = "s.user_id"
ColUserAnonymousID Column = "s.user_anonymous_id"
ColUserOS Column = "s.user_os"
ColUserBrowser Column = "s.user_browser"
ColUserDevice Column = "s.user_device"
ColUserDeviceType Column = "s.user_device_type"
ColRevID Column = "s.rev_id"
ColBaseReferrer Column = "s.base_referrer"
ColUtmSource Column = "s.utm_source"
ColUtmMedium Column = "s.utm_medium"
ColUtmCampaign Column = "s.utm_campaign"
ColMetadata1 Column = "s.metadata_1"
ColSessionProjectID Column = "s.project_id"
ColSessionIsNotNull Column = "isNotNull(s.duration)"
)
const (
MetricTypeTimeseries MetricType = "timeseries"
MetricTypeTable MetricType = "table"
MetricTypeFunnel MetricType = "funnel"
MetricTypeHeatmap MetricType = "heatMap"
MetricTypeSession MetricType = "heatmaps_session"
MetricUserJourney MetricType = "pathAnalysis"
)
const (
EventOrderThen EventOrder = "then"
EventOrderOr EventOrder = "or"
EventOrderAnd EventOrder = "and"
)
type MetricPayload struct {
StartTimestamp int64 `json:"startTimestamp"`
EndTimestamp int64 `json:"endTimestamp"`
Density int `json:"density"`
MetricOf string `json:"metricOf"`
MetricType MetricType `json:"metricType"`
MetricValue []string `json:"metricValue"`
MetricFormat string `json:"metricFormat"`
ViewType string `json:"viewType"`
Name string `json:"name"`
Series []Series `json:"series"`
Limit int `json:"limit"`
Page int `json:"page"`
StartPoint []Filter `json:"startPoint"`
Exclude []Filter `json:"excludes"`
Rows uint64 `json:"rows"`
Columns uint64 `json:"columns"`
PreviousColumns uint64 `json:"previousColumns"`
}
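// Illustrative request body (values are invented; keys follow the json tags above):
//
//	{
//	  "startTimestamp": 1716422400000,
//	  "endTimestamp": 1716508800000,
//	  "density": 24,
//	  "metricType": "timeseries",
//	  "viewType": "lineChart",
//	  "series": [
//	    {"name": "Chrome users", "filter": {"eventsOrder": "and", "filters": []}}
//	  ]
//	}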
type MetricOfTable string
const (
MetricOfTableLocation MetricOfTable = "LOCATION" // TOP Pages
MetricOfTableBrowser MetricOfTable = "userBrowser"
MetricOfTableReferrer MetricOfTable = "referrer"
MetricOfTableUserId MetricOfTable = "userId"
MetricOfTableCountry MetricOfTable = "userCountry"
MetricOfTableDevice MetricOfTable = "userDevice"
MetricOfTableFetch MetricOfTable = "FETCH"
//MetricOfTableIssues MetricOfTable = "issues"
//MetricOfTableSessions MetricOfTable = "sessions"
//MetricOfTableErrors MetricOfTable = "errors"
)
type FilterGroup struct {
Filters []Filter `json:"filters"`
EventsOrder EventOrder `json:"eventsOrder"`
}
type Series struct {
Name string `json:"name"`
Filter FilterGroup `json:"filter"`
}
type Filter struct {
Type FilterType `json:"type"`
IsEvent bool `json:"isEvent"`
Value []string `json:"value"`
Operator string `json:"operator"`
Source string `json:"source,omitempty"`
Filters []Filter `json:"filters"`
}
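// Two illustrative filters, a session attribute and an event (operator strings
// come from the constants below):
//
//	{"type": "userCountry", "isEvent": false, "operator": "is", "value": ["DE"]}
//	{"type": "CLICK", "isEvent": true, "operator": "on", "value": ["Sign up"]}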
const (
FilterUserId FilterType = "userId"
FilterUserAnonymousId FilterType = "userAnonymousId"
FilterReferrer FilterType = "referrer"
FilterDuration FilterType = "duration"
FilterUtmSource FilterType = "utmSource"
FilterUtmMedium FilterType = "utmMedium"
FilterUtmCampaign FilterType = "utmCampaign"
FilterUserCountry FilterType = "userCountry"
FilterUserCity FilterType = "userCity"
FilterUserState FilterType = "userState"
FilterUserOs FilterType = "userOs"
FilterUserBrowser FilterType = "userBrowser"
FilterUserDevice FilterType = "userDevice"
FilterPlatform FilterType = "platform"
FilterRevId FilterType = "revId"
FilterIssue FilterType = "issue"
FilterMetadata FilterType = "metadata"
)
// Event filters
const (
FilterClick FilterType = "CLICK"
FilterInput FilterType = "INPUT"
FilterLocation FilterType = "LOCATION"
FilterTag FilterType = "tag"
FilterCustom FilterType = "customEvent"
FilterFetch FilterType = "fetch"
FilterFetchStatusCode FilterType = "fetchStatusCode" // Subfilter
FilterGraphQLRequest FilterType = "graphql"
FilterStateAction FilterType = "stateAction"
FilterError FilterType = "error"
FilterAvgCpuLoad FilterType = "avgCpuLoad"
FilterAvgMemoryUsage FilterType = "avgMemoryUsage"
)
// MOBILE FILTERS
const (
FilterUserOsIos FilterType = "userOsIos"
FilterUserDeviceIos FilterType = "userDeviceIos"
FilterUserCountryIos FilterType = "userCountryIos"
FilterUserIdIos FilterType = "userIdIos"
FilterUserAnonymousIdIos FilterType = "userAnonymousIdIos"
FilterRevIdIos FilterType = "revIdIos"
)
const (
OperatorStringIs = "is"
OperatorStringIsAny = "isAny"
OperatorStringOn = "on"
OperatorStringOnAny = "onAny"
OperatorStringIsNot = "isNot"
OperatorStringIsUndefined = "isUndefined"
OperatorStringNotOn = "notOn"
OperatorContains = "contains"
OperatorStringNotContains = "notContains"
OperatorStringStartsWith = "startsWith"
OperatorStringEndsWith = "endsWith"
)
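// Sketch of how these operators typically translate to SQL predicates; the
// real mapping lives in the condition builder, so this is an assumption shown
// for orientation only (and would need a "fmt" import if compiled):
//
//	func operatorToSQL(col, op, val string) string {
//	    switch op {
//	    case OperatorStringIs, OperatorStringOn:
//	        return fmt.Sprintf("%s = '%s'", col, val)
//	    case OperatorStringIsNot, OperatorStringNotOn:
//	        return fmt.Sprintf("%s != '%s'", col, val)
//	    case OperatorContains:
//	        return fmt.Sprintf("%s LIKE '%%%s%%'", col, val)
//	    case OperatorStringStartsWith:
//	        return fmt.Sprintf("%s LIKE '%s%%'", col, val)
//	    case OperatorStringEndsWith:
//	        return fmt.Sprintf("%s LIKE '%%%s'", col, val)
//	    default:
//	        return "" // isAny, isUndefined and friends need dedicated handling
//	    }
//	}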
type DataPoint struct {
	Timestamp uint64 `json:"timestamp"`
	Count     uint64 `json:"count"`
}
type GetCardChartDataRequest struct {
MetricType string `json:"metricType" validate:"required,oneof=timeseries table funnel"`
MetricOf string `json:"metricOf" validate:"required,oneof=session_count user_count"`
ViewType string `json:"viewType" validate:"required,oneof=line_chart table_view"`
MetricFormat string `json:"metricFormat" validate:"required,oneof=default percentage"`
SessionID int64 `json:"sessionId"`
Series []cards.CardSeries `json:"series" validate:"required,dive"`
}
type GetCardChartDataResponse struct {
Data []DataPoint `json:"data"`
}
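// Illustrative response (timestamps in ms, one point per density bucket):
//
//	{"data": [{"timestamp": 1716422400000, "count": 42},
//	          {"timestamp": 1716426000000, "count": 37}]}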
//type TimeseriesResponse struct {
// Data []DataPoint `json:"data"`
//}

Some files were not shown because too many files have changed in this diff.