Compare commits

903 commits

Author SHA1 Message Date
rjshrjndrn
68050f183f chore(dashboards): backend dashboard update
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-07-13 16:38:12 +02:00
rjshrjndrn
3530fbccb8 chore(dashboards): Adding more metrics to backend
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-07-12 14:59:33 +02:00
rjshrjndrn
e5c37cb0f2 docs(observability): How to install
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-07-12 13:16:10 +02:00
rjshrjndrn
0ee52dd72a chore(dashboard): updating components dashboard
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-07-12 13:12:44 +02:00
rjshrjndrn
ff0d473a15 chore(helm): updating backend dashboard
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-07-11 13:50:14 +02:00
rjshrjndrn
e4953d649e chore(monitoring): msk dashboard update
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-07-11 13:33:31 +02:00
rjshrjndrn
6812102061 chore(helm): Adding default storage provider for prometheus
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-07-11 13:05:56 +02:00
rjshrjndrn
fa173eeb4f chore(monitoring): Adding time estimation for msk lag
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-07-08 12:02:10 +02:00
rjshrjndrn
f5944566da chore(dashboard): Adding clickhouse dashboard
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-30 16:26:20 +02:00
rjshrjndrn
7507e28d80 chore(dashboards): Adding msk charts
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-30 14:22:35 +02:00
rjshrjndrn
fd8a770789 chore(helm): dynamic dashboard creation
Ref: https://github.com/helm/helm/issues/4157#issuecomment-490748085

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-30 14:17:45 +02:00
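The commit above points to the Files.Glob technique from the linked helm/helm issue for creating dashboards dynamically. A minimal sketch of that pattern, assuming an illustrative dashboards/*.json chart layout and sidecar label rather than the actual chart contents:

```yaml
# Hypothetical sketch of dynamic dashboard creation via .Files.Glob
# (per helm/helm#4157): one ConfigMap is rendered per JSON file shipped
# with the chart, so adding a dashboard needs no template change.
{{- range $path, $bytes := .Files.Glob "dashboards/*.json" }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: dashboard-{{ base $path | trimSuffix ".json" }}
  labels:
    grafana_dashboard: "1"   # a label a Grafana sidecar can watch for
data:
  {{ base $path }}: |-
{{ $.Files.Get $path | indent 4 }}
{{- end }}
```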
rjshrjndrn
c971c0ff05 chore(dashboards): nginx grafana dashboard
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-30 13:46:06 +02:00
rjshrjndrn
cc3ca56902 chore(helm): overriding fullName
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-30 13:31:04 +02:00
rjshrjndrn
df736cc840 fix(helm): fix namespace
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-30 13:12:12 +02:00
rjshrjndrn
06d568a96e fix(helm): value override
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-30 12:54:11 +02:00
rjshrjndrn
67022f538b chore(helm): Adding observability chart
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-30 12:32:58 +02:00
rjshrjndrn
d7e100e383 chore(helm): Adding grafana plugins
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-30 08:12:26 +02:00
rjshrjndrn
a5bc7a8f87 chore(dashboard): updated openreplay components dashboard
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-30 07:46:44 +02:00
rjshrjndrn
6eb15fa1cb chore(monitoring): updated variable format.
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-30 07:21:09 +02:00
rjshrjndrn
b1171d321b chore(dashboards): openreplay-components
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-30 06:59:05 +02:00
rjshrjndrn
7e6d4b5e2b chore(dashboards): Adding nginx-ingress dashboards.
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-29 19:12:09 +02:00
rjshrjndrn
0100684faa chore(helm): nginx-ingress-controller enabled metrics
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-29 19:04:24 +02:00
rjshrjndrn
4971d5ff25 chore(logging): Adding promtail config 2022-06-29 18:36:22 +02:00
rjshrjndrn
73902a73ef chore(helm): Update configs
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-29 18:28:10 +02:00
rjshrjndrn
d6e03aad52 chore(monitoring): Adding enterprise config
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-29 15:20:34 +02:00
rjshrjndrn
99ee5d5cb1 ci(dbmigrate): Create db migrate when there is change
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-29 12:25:21 +02:00
Alexander
1c1887f657
New configuration module (#558) 2022-06-29 12:20:42 +02:00
Taha Yassine Kraiem
831d90cb94 Merge remote-tracking branch 'origin/api-v1.7.0' into dev 2022-06-28 20:44:08 +02:00
Taha Yassine Kraiem
6b292a3ae7 feat(api): fixed assist error response 2022-06-28 20:43:00 +02:00
Taha Yassine Kraiem
4b0f3e1ffc feat(api): api-v1 handle wrong projectKey 2022-06-28 20:34:47 +02:00
Taha Yassine Kraiem
0bcfbedfd2 feat(api): api-v1 fixed search live sessions 2022-06-28 20:32:23 +02:00
Shekar Siri
b005d4dd31 change(ui) - show a message when mob file not found 2022-06-28 19:42:47 +02:00
Shekar Siri
497fae023b fix(ui) - audit trail date range custom picker alignment 2022-06-28 19:42:47 +02:00
Shekar Siri
ce09f5d54f change(ui) - audit trail count with comma 2022-06-28 19:42:47 +02:00
Shekar Siri
c78fb78927 change(ui) - show role edit on hover 2022-06-28 19:42:47 +02:00
Taha Yassine Kraiem
20644140e2 Merge remote-tracking branch 'origin/api-v1.7.0' into dev 2022-06-28 19:17:13 +02:00
Taha Yassine Kraiem
48fdd9fabc feat(api): api-v1 handle wrong projectKey
feat(api): api-v1 get live sessions
2022-06-28 19:05:40 +02:00
Shekar Siri
6e7a2f2472 change(ui) - show installation btn without mouse hover 2022-06-28 18:37:43 +02:00
Shekar Siri
0cb6341988 fix(ui) - redirect fix 2022-06-28 18:06:02 +02:00
Shekar Siri
a46d842c0b change(ui) - non admin user preference restrictions 2022-06-28 18:06:02 +02:00
sylenien
c8416e2c0c fix(ui): fix share popup styles 2022-06-28 17:55:47 +02:00
Shekar Siri
e9482d1629 change(ui) - redirect to the landing url on SSO login 2022-06-28 17:27:56 +02:00
sylenien
f3052d1ad0 fix(ui): fix typo 2022-06-28 16:44:34 +02:00
sylenien
08f9e3965e fix(ui): fix metric tables height and button placing 2022-06-28 15:25:54 +02:00
rjshrjndrn
198ea005d4 chore(helm): override branch name for db migration
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-28 15:14:05 +02:00
sylenien
4d22156b6a fix(ui): default tz fix 2022-06-28 15:05:26 +02:00
rjshrjndrn
7f9ca9ef18 ci(helm): skipping hooks for ci installation 2022-06-28 14:44:28 +02:00
Shekar Siri
1ed30b35d7 change(ui) - error widget border 2022-06-28 14:40:56 +02:00
sylenien
63f77c0c3e fix(ui): fix capture rate 2022-06-28 14:31:03 +02:00
rjshrjndrn
dd6d8ec566 ci(helm): skip hooks
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-28 14:30:27 +02:00
Shekar Siri
0e31fe43af change(ui) - hide back button based on url query param iframe=true 2022-06-28 14:29:31 +02:00
Shekar Siri
c44bb5da79 fix(ui) - disable body scroll on modal open 2022-06-28 14:09:43 +02:00
sylenien
41e093312a fix(ui): fix bookmarking 2022-06-28 13:29:07 +02:00
sylenien
09cde2e5ec fix(ui): fix for timezone storage and format 2022-06-28 12:40:27 +02:00
Shekar Siri
2d22bae2ff change(ui) - modalprovider 2022-06-28 12:30:47 +02:00
sylenien
773bfc7995 fix(ui): fix breadcrumbs chevron icon 2022-06-28 11:19:12 +02:00
sylenien
e5a73ada4f fix(ui): fix textelipsis comp 2022-06-28 10:51:48 +02:00
sylenien
86a6aa6c07 fix(ui): fix bookmarking/vaulting 2022-06-28 09:24:42 +02:00
sylenien
bf80997c0c fix(tracker): fix peer hack for better build support 2022-06-28 09:20:11 +02:00
sylenien
a03b441f97 fix(tracker): fix assist import in order to prevent fails with next imports 2022-06-28 09:20:11 +02:00
Shekar Siri
7022faa5eb change(ui) - assist list loader
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-27 20:40:56 +02:00
Shekar Siri
110f215e7d fix(ui) - lazyload loader 2022-06-27 20:03:11 +02:00
Taha Yassine Kraiem
a2588df4cc feat(api): changed build logic 2022-06-27 19:55:09 +02:00
Taha Yassine Kraiem
747487cc4c Merge remote-tracking branch 'origin/dev' into api-v1.7.0 2022-06-27 19:50:32 +02:00
rjshrjndrn
b064732c01 ci(fix): resetting vars file
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-27 19:21:17 +02:00
rjshrjndrn
fa815a7cb6 ci(fix): cleaning old assets
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-27 18:58:43 +02:00
Shekar Siri
c26e715b5d fix(ui) - API_EDP 2022-06-27 18:32:25 +02:00
rjshrjndrn
e05ba2df47 ci(helm): updated comment
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-27 18:17:53 +02:00
rjshrjndrn
8ed69347f6 ci(fix): actions pointing to correct cluster
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-27 17:54:57 +02:00
Shekar Siri
6384bf9e9e fix(ui) - permission check updates 2022-06-27 17:40:28 +02:00
rjshrjndrn
082acccac4 ci(actions): skipping migration in actions
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-27 17:24:27 +02:00
rjshrjndrn
5019fba5b2 chore(helm): enable skipMigration Flag
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-27 17:16:23 +02:00
rjshrjndrn
cccd97c07a fix(cicd): proper image tag
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-27 17:16:23 +02:00
Shekar Siri
db6609d908 fix(ui) - duration and no data message for sessions and errors 2022-06-27 16:32:03 +02:00
Shekar Siri
a3e99a6217 fix(ui) - end date fix and other changes 2022-06-27 15:53:11 +02:00
rjshrjndrn
a14dfb4a79 ci(fix): frontend ee deployment
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-27 15:05:46 +02:00
Shekar Siri
16503a2fae change(ui) - metric type icon in metric list 2022-06-27 14:08:18 +02:00
Shekar Siri
661a4364dc change(ui) - error and sessions border 2022-06-27 14:08:18 +02:00
Shekar Siri
941f8c9b11 change(ui) - removed unused from header component 2022-06-27 14:08:18 +02:00
Shekar Siri
7787f19c00 change(ui) - version changes in env.sample 2022-06-27 14:08:18 +02:00
rjshrjndrn
917ab96723 ci(fix): kubeconfig path
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-27 14:01:42 +02:00
rjshrjndrn
5d52d56e12 ci(fix): change kubeconfig auth env
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-27 13:51:09 +02:00
Alexander Zavorotynskiy
3f52992e33 fix(backend): fixed config var name in integrations service 2022-06-27 13:45:11 +02:00
rjshrjndrn
22b3ffdc6d ci(worker): cache disabled as it's consuming space, and actions failing
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-27 13:41:39 +02:00
rjshrjndrn
f1920a28bf ci(frontend): deploying ee along with oss
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-27 13:34:57 +02:00
sylenien
6592f33827 fix(ui): fix add outside click for modal, fix right menu headers 2022-06-27 12:32:54 +02:00
sylenien
8ae15c799e fix(ui): fix jira integration, fix widget name esc handling, minor fixes 2022-06-27 12:08:09 +02:00
Alexander Zavorotynskiy
95c9b6e3f5 feat(backend): minor fixes after prerelease tests 2022-06-27 10:35:05 +02:00
Shekar Siri
179dbd22d5 fix(ui) - roles and permissions 2022-06-24 20:48:06 +02:00
Taha Yassine Kraiem
20aaff933e Merge remote-tracking branch 'origin/dev' into api-v1.7.0 2022-06-24 20:25:01 +02:00
Taha Yassine Kraiem
3089d02e7d feat(api): fixed invite user 2022-06-24 20:23:19 +02:00
Taha Yassine Kraiem
7d0a0c998e feat(api): changed /limits 2022-06-24 20:06:37 +02:00
Taha Yassine Kraiem
855830a9a8 feat(api): changed /notifications/count response 2022-06-24 19:51:08 +02:00
Taha Yassine Kraiem
3264a424d1 feat(DB): changed resources primary keys 2022-06-24 19:29:54 +02:00
Taha Yassine Kraiem
66a1a1c0d1 feat(api): fixed /notifications/count 2022-06-24 19:07:33 +02:00
Shekar Siri
399352dd7f change(ui) - funnels checking for min two events 2022-06-24 18:56:39 +02:00
Shekar Siri
fc01ffb6bf change(ui) - sessions search checking for empty filter values 2022-06-24 18:35:35 +02:00
Shekar Siri
77096976ea fix(ui) - session settings input event 2022-06-24 18:35:35 +02:00
Shekar Siri
783e029ec9 change(ui) - package lock updates 2022-06-24 18:35:35 +02:00
Shekar Siri
c662f26e38 change(ui) - tailwind version 2022-06-24 18:35:35 +02:00
rjshrjndrn
03fa6c5e22 chore(build): fix script return code
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-24 18:22:30 +02:00
rjshrjndrn
2c3841e57e chore(local_deploy): Deploy frontend
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-24 18:17:46 +02:00
sylenien
810e97605b fix(ui): fix for js errors widget styles 2022-06-24 17:20:24 +02:00
Alex Kaminskii
236ac05c92 refactor(backend): use analytics topic for IntegrationEvent 2022-06-24 16:37:51 +02:00
Alex Kaminskii
4e439354c3 style(backend): rename RawErrorEvent to IntegrationEvent 2022-06-24 16:32:46 +02:00
rjshrjndrn
3259c6667a ci(helm): use atomic for deploying
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-24 16:11:35 +02:00
rjshrjndrn
e38ad0a7b2 ci(build): moving env sample
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-24 16:08:37 +02:00
rjshrjndrn
c1fa34d2ff fix(build): correct file name
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-24 16:07:33 +02:00
rjshrjndrn
f6e21ee07e build(frontend): removed unnecessary step
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-24 16:07:18 +02:00
rjshrjndrn
01f6c6b54c fix(frontend): build script comment
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-24 16:07:18 +02:00
rjshrjndrn
1134fc133c fix(build): docker file priority
Last step is the default one in docker build.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-24 16:07:18 +02:00
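The "docker file priority" note above refers to multi-stage behavior: when no --target flag is given, docker build produces the last stage in the Dockerfile, so stage order decides the default image. A minimal sketch with hypothetical stage names and paths:

```dockerfile
# With no --target, `docker build .` builds the LAST stage.
# Stage names and paths here are illustrative only.
FROM node:16 AS builder
WORKDIR /app
COPY . .
RUN yarn install && yarn build

# Last stage => the default build target.
FROM nginx:alpine AS runtime
COPY --from=builder /app/dist /usr/share/nginx/html
```

An earlier stage can still be selected explicitly, e.g. docker build --target builder .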
Shekar Siri
c7741ebad8 change(ui) - docker copy env file 2022-06-24 15:51:59 +02:00
sylenien
bb61a4543f fix(ui): rm unused code 2022-06-24 15:43:13 +02:00
sylenien
8f31341881 fix(ui): tweak webpack config 2022-06-24 15:43:13 +02:00
sylenien
61b2c3e32c fix(ui): tweak webpack config 2022-06-24 15:43:13 +02:00
sylenien
6f9a5e71f1 fix(ui): small design review fixes 2022-06-24 15:43:13 +02:00
rjshrjndrn
16bfdc5c9c ci(fix): inject proper env value.
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-24 15:37:47 +02:00
Shekar Siri
7d19b77e94 fix(ui) - no data msg and padding 2022-06-24 15:24:06 +02:00
rjshrjndrn
08bf88b411 build(frontend): Adding docker buildkit support
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-24 15:06:46 +02:00
rjshrjndrn
08d2375683 ci(actions): enable docker buildkit
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-24 15:03:19 +02:00
rjshrjndrn
abe4f17bbc ci(fix): spelling
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-24 14:59:11 +02:00
rjshrjndrn
f17fd33120 chore(build): Creating separate build for cicd
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-24 14:57:24 +02:00
rjshrjndrn
260d758592 ci(fix): change build
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-24 12:57:56 +02:00
rjshrjndrn
aaf42f6157 ci(frontend): remove docker caching
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-24 12:52:07 +02:00
rjshrjndrn
e975c07482 ci(frontend): optimizing build for caching
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-24 12:49:59 +02:00
rjshrjndrn
bdc3fcf22b ci(frontend): run only the latest build
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-24 12:37:44 +02:00
sylenien
7302490444 fix(ui): fix item menu styles 2022-06-24 12:27:03 +02:00
rjshrjndrn
0374f0934a build(frontend): decoupling yarn and build for better caching.
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-24 12:25:35 +02:00
rjshrjndrn
46b3ec2025 ci(fix): change step name
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-24 12:22:51 +02:00
rjshrjndrn
126a7561d8 ci(frontend): removed npm caching from host
As the build happens inside the container.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-24 12:20:14 +02:00
Alex Kaminskii
015fe57355 refactor(tracker): get rid of instanceof checks in observer (use nodeName and nodeType guards) 2022-06-24 12:17:13 +02:00
rjshrjndrn
febdfd72e3 ci(fix): frontend build deploy
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-24 12:13:50 +02:00
rjshrjndrn
4288da245c ci(frontend): Update deploy to helm chart.
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-24 12:00:09 +02:00
Alexander Zavorotynskiy
f77af991d6 feat(backend): disabled iOS heuristics 2022-06-24 11:47:24 +02:00
Alexander Zavorotynskiy
2c0880a161 feat(backend): add user's timestamp to ender logic, removed some messages from db batches 2022-06-24 10:10:35 +02:00
Shekar Siri
3b6cb3ee0e fix(ui) - alert metric check 2022-06-23 19:27:38 +02:00
Shekar Siri
8fafc878eb fix(ui) - login check 2022-06-23 19:21:58 +02:00
Shekar Siri
a63ff8ae12 fix(ui) - widget change detection on route change 2022-06-23 18:59:28 +02:00
Shekar Siri
b22f1488a6 fix(ui) - icon button 2022-06-23 18:59:28 +02:00
Shekar Siri
0034a22fd1 fix(ui) - alert form segment selection 2022-06-23 18:59:28 +02:00
Taha Yassine Kraiem
bee4abeb63 feat(api): EE env-vars override 2022-06-23 18:19:52 +02:00
sylenien
6114254671 fix(ui): fix dashboard widget scroll position on change 2022-06-23 18:01:26 +02:00
Shekar Siri
68b8cc3586 fix(ui) - last item border 2022-06-23 17:59:19 +02:00
Shekar Siri
889a4313c5 fix(ui) - invitation link button 2022-06-23 17:59:19 +02:00
Shekar Siri
a46240feb4 fix(ui) - signup form submit button type 2022-06-23 17:59:19 +02:00
Rajesh Rajendran
7dcd9c99a6
Move frontend as a separate container (#553)
* chore(frontend): build docker image for frontend

* chore(helm): remove frontend files from minio.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* chore(helm): Adding frontend chart

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* chore(helm): remove grafana ingress from community charts

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* chore(helm): removing minio-frontend ingress

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* fix(helm): ingress rewrite

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* chore(build): give priority to env image tag

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* chore(frontend): adding nginx.conf for frontend container

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* chore(helm): disable minio if not used

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-23 15:53:18 +00:00
Shekar Siri
c548cd7cbe change(ui) - metric list item name casing 2022-06-23 16:29:19 +02:00
Shekar Siri
7b4a189588 change(ui) - sessions and errors count 2022-06-23 16:26:01 +02:00
Shekar Siri
e1c633af99 change(ui) - project limit check 2022-06-23 16:16:24 +02:00
Shekar Siri
952515b293 change(ui) - remote pull resolve conflicts 2022-06-23 16:04:33 +02:00
Alex Kaminskii
71f5f3a797 fix(backend): remove tab chars in url before parse 2022-06-23 15:56:11 +02:00
sylenien
c2f93c6c42 fix(ui): fix react warning 2022-06-23 15:45:33 +02:00
sylenien
98ebef88c3 fix(ui): role button for timeline 2022-06-23 15:40:25 +02:00
sylenien
47af08e0fe fix(ui): minor ui fixes 2022-06-23 15:37:23 +02:00
sylenien
6f3e66ee46 fix(ui): more ui fixes, typing for router 2022-06-23 15:37:23 +02:00
sylenien
2e51918cc7 fix(ui): typings for icons, fix for widget name field 2022-06-23 15:37:23 +02:00
sylenien
b55145e580 fix(ui): minor ui fixes 2022-06-23 15:37:23 +02:00
Taha Yassine Kraiem
fa6e8087e1 Merge remote-tracking branch 'origin/api-v1.7.0' into dev 2022-06-23 15:00:28 +02:00
Taha Yassine Kraiem
ebc2f809a3 feat(api): changed /limits response 2022-06-23 15:00:10 +02:00
Shekar Siri
f7cf8ac269 change(ui) - global limits 2022-06-23 14:56:01 +02:00
Shekar Siri
c622891299 change(ui) - funnels calls 2022-06-23 14:56:01 +02:00
rjshrjndrn
ece5b482e6 docs(frontend): removed unnecessary code
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-23 14:08:40 +02:00
Alex Kaminskii
eb967919f5 style(tracker): img module - use attributeFilter in observer 2022-06-23 13:13:16 +02:00
Shekar Siri
5d9cc9b7ea change(ui) - errors and sessions widget click 2022-06-23 12:35:50 +02:00
Shekar Siri
0fb37ab695 change(ui) - percentage removed floating 2022-06-23 12:21:09 +02:00
Shekar Siri
ceeeb1ef0c change(ui) - breadcrumb first letter cap 2022-06-23 12:21:09 +02:00
sylenien
5e8b663fb6 fix(tracker): add check for sets 2022-06-23 12:05:21 +02:00
sylenien
7499b05431 fix(tracker): fix srcset tracking 2022-06-23 12:05:21 +02:00
sylenien
3607f45f8a fix(tracker): fix srcset tracking 2022-06-23 12:05:21 +02:00
Shekar Siri
af23769d74 change(ui) - capitalize first letter 2022-06-23 11:59:05 +02:00
Shekar Siri
f96100fea9 Merge branch 'sessions-list' into dev 2022-06-23 11:33:33 +02:00
sylenien
ef935e3ee2 fix(ui): fix timeline icons overlap 2022-06-23 10:03:20 +02:00
Alex Kaminskii
9f7b8aec5b fix(tracker): send metadata on start 2022-06-22 19:48:17 +02:00
Shekar Siri
78363606ad change(ui) - sessions list layout 2022-06-22 19:41:29 +02:00
Shekar Siri
b38fbe1a30 fix(ui) - dropdown fixes 2022-06-22 18:33:32 +02:00
Shekar Siri
d7e680247d fix(ui) - dropdown fixes 2022-06-22 18:28:52 +02:00
Shekar Siri
188e504bb7 fix(ui) - dropdown fixes 2022-06-22 17:59:08 +02:00
Alex Kaminskii
c8ec85c98e style(tracker): type fix 2022-06-22 17:32:29 +02:00
Alex Kaminskii
a4f2191757 style(frontend/player):type fixes 2022-06-22 17:30:00 +02:00
Shekar Siri
47774191b1 fix(ui) - player init 2022-06-22 17:16:09 +02:00
rjshrjndrn
5ff8fc6960 chore(helm): kafka topic size update
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-22 17:12:54 +02:00
sylenien
e07404de22 fix(ui): add more ts aliases 2022-06-22 17:07:09 +02:00
sylenien
aadd7d5418 fix(ui): fix icon build script to rewrite clipPath and clipRule 2022-06-22 17:02:08 +02:00
sylenien
fe14907303 fix(ui): fix icon build script to remove warnings 2022-06-22 16:57:16 +02:00
Delirium
ee373cc0b4
fix(ui): small ui fixes and improvements (#550) 2022-06-22 16:51:53 +02:00
Shekar Siri
0ef43a30fd change(ui) - switch to yarn 2022-06-22 16:39:33 +02:00
Shekar Siri
719dab4b8e
Update frontend.yaml 2022-06-22 16:38:34 +02:00
Shekar Siri
3d82678c26 fixed(ui) - review fixes 2022-06-22 16:18:37 +02:00
Taha Yassine Kraiem
84a59710c3 Merge remote-tracking branch 'origin/api-v1.7.0' into dev 2022-06-22 16:16:58 +02:00
Taha Yassine Kraiem
6cb997def7 feat(api): changed get session's live flag 2022-06-22 16:16:38 +02:00
Alex Kaminskii
c2e95f8d98 fix(frontend): init player once 2022-06-22 15:56:27 +02:00
Taha Yassine Kraiem
3c47cebd53 Merge remote-tracking branch 'origin/api-v1.7.0' into dev 2022-06-22 15:50:20 +02:00
Taha Yassine Kraiem
7ed0d28e40 feat(assist): cleaned extra files
feat(assist): autocomplete return capitalized type
2022-06-22 15:49:59 +02:00
Alex Kaminskii
d4c692b2d4 fix(frontend/player): no inplace operations in loadFiles fn 2022-06-22 15:38:48 +02:00
Alex K
33eca54031
Merge pull request #542 from openreplay/tracker-wworker-writer-bug
Worker console fix
* worker activity state introduced
* late worker stop
* BatchWriter refactor
2022-06-22 14:15:56 +02:00
Alex Kaminskii
00572c0f38 fix(frontend/dev): verbose function 2022-06-22 14:11:00 +02:00
Shekar Siri
db4844c4c5 change(ui) - funnel icon 2022-06-22 14:08:06 +02:00
sylenien
13043f6ee7 fix(tracker): rm unused 2022-06-22 14:05:35 +02:00
Alex Kaminskii
fe90b4cc26 feat(frontend/player): smooth cursor 2022-06-22 13:41:53 +02:00
Alex Kaminskii
9b66433348 feat(frontend): store dev options in localStorage 2022-06-22 13:27:39 +02:00
sylenien
794c7f72d4 fix(tracker): remove wworker logs(unused) 2022-06-22 12:49:46 +02:00
sylenien
22d7c4acd0 fix(tracker): change checks for state update 2022-06-22 12:41:07 +02:00
Alex K
e56fee3134
Merge pull request #524 from openreplay/hide-containers-rule
feat(ui): add option to mask entire HTML/SVG containers and their children tree
2022-06-22 12:26:15 +02:00
sylenien
0a66b23613 fix(tracker): move worker stop to the end of stop func 2022-06-22 12:24:07 +02:00
Shekar Siri
7910b9e872 feat(ui) - metrics list icons 2022-06-22 12:10:45 +02:00
rjshrjndrn
04d8148be6 chore(helm): change default clickhouse resource limit
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-22 11:06:49 +02:00
sylenien
308ef872f4 fix(tracker): code review 2022-06-22 09:54:26 +02:00
sylenien
845cf64e44 fix(tracker): rm console 2022-06-22 09:51:19 +02:00
sylenien
f1998eab3c fix(tracker): move activity state under worker start 2022-06-22 09:48:49 +02:00
Shekar Siri
05990478c5 feat(ui) - filters issue - operator dropdown refresh 2022-06-21 18:47:12 +02:00
Shekar Siri
0c45d43bb9 feat(ui) - filters issues live vs offline 2022-06-21 18:33:38 +02:00
Shekar Siri
2b5d85cb35 feat(ui) - dropdown alignments 2022-06-21 17:23:56 +02:00
Shekar Siri
44853973cf feat(ui) - integration icon and other checks 2022-06-21 17:23:56 +02:00
sylenien
692a0505e8 fix(tracker): typo fix 2022-06-21 16:59:37 +02:00
sylenien
5e7e498088 fix(tracker): fix state updating 2022-06-21 16:59:37 +02:00
sylenien
fedd89c119 fix(tracker): wworker build fix 2022-06-21 16:59:37 +02:00
sylenien
8750448841 fix(tracker): typo 2022-06-21 16:59:37 +02:00
sylenien
d6fd7b312a fix(tracker): rm unused 2022-06-21 16:59:37 +02:00
sylenien
8d919e49cc fix(tracker): add optional data in error 2022-06-21 16:59:37 +02:00
sylenien
869a25169f fix(tracker): potential fix for writer busy status 2022-06-21 16:59:37 +02:00
Alexander Zavorotynskiy
caf66b305a fix(backend): fixed bug when ender triggered on sessionEnd message 2022-06-21 16:19:22 +02:00
Rajesh Rajendran
26daf936c5
removing cache for worker build 2022-06-21 10:53:21 +00:00
Shekar Siri
28aa99a668 feat(ui) - metric to session player navigation flow with modal 2022-06-21 12:41:33 +02:00
Shekar Siri
ea7b37441b feat(ui) - metric type check for alert 2022-06-21 11:17:08 +02:00
Shekar Siri
c5cc24fc52 feat(ui) - errors details modal 2022-06-21 11:17:08 +02:00
Alexander Zavorotynskiy
b848b89536 feat(backend): removed unnecessary message type 2022-06-21 11:11:33 +02:00
rjshrjndrn
d7a4005adc refactor(helm): format file.
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-21 09:40:58 +02:00
rjshrjndrn
6862652744 chore(helm): variable for kafka retention.
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-21 09:40:58 +02:00
ShiKhu
4a5093addf refactor(tracker/BatchWriter): explicit logic 2022-06-21 01:05:20 +02:00
Shekar Siri
67b83deddb feat(ui) - errors details modal 2022-06-20 19:01:54 +02:00
Taha Yassine Kraiem
7444c2d999 feat(api): changed funnel's dropDueToIssues 2022-06-20 18:45:03 +02:00
Taha Yassine Kraiem
86bbf49014 feat(api): return issue details if issue not found in funnel 2022-06-20 17:35:08 +02:00
Shekar Siri
e30e5c22ad feat(ui) - errors details modal 2022-06-20 17:25:54 +02:00
Shekar Siri
1bcb0dfc01 feat(ui) - funnels more steps 2022-06-20 17:25:54 +02:00
rjshrjndrn
fb164af465 chore(helm): Adding postgres string in ender
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-20 11:58:06 +02:00
rjshrjndrn
94fc4d693e fix(actions): image override
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-20 11:40:07 +02:00
rjshrjndrn
1a66daa3a2 chore(helm): details of cleaning.
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-20 10:29:05 +02:00
rjshrjndrn
3ca389ff3c fix(helm): nginx lb algorithm
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-20 10:11:52 +02:00
rjshrjndrn
1898f18d6b fix(helm): efs clean cron path
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-20 09:44:39 +02:00
Alexander
623e241afb
feat(backend): moved recording sessionStart to db into http service and sessionEnd into ender service (#545)
Co-authored-by: Alexander Zavorotynskiy <alexander@openreplay.com>
2022-06-20 09:26:05 +02:00
Alexander Zavorotynskiy
3da78cfe62 feat(backend): added metadata insertion retrier (temp solution) 2022-06-17 17:33:52 +02:00
Taha Yassine Kraiem
592cbd5fd5 feat(api): errors search ignore Script error on query level 2022-06-17 17:26:22 +02:00
Taha Yassine Kraiem
e99776778f feat(api): errors search ignore Script error on query level 2022-06-17 16:54:39 +02:00
rjshrjndrn
f34c433a42 chore(backend): clean go mod
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-17 16:27:43 +02:00
Shekar Siri
cbc4c25a8e feat(ui) - sessions - widget - data reload 2022-06-17 15:48:14 +02:00
Taha Yassine Kraiem
98b71b13fe Merge remote-tracking branch 'origin/api-v1.7.0' into dev 2022-06-17 15:41:49 +02:00
Taha Yassine Kraiem
6421d9b9b8 feat(api): elasticsearch fixed typo 2022-06-17 15:40:47 +02:00
Shekar Siri
93455cd746 feat(ui) - sessions - widget - pagination 2022-06-17 15:38:25 +02:00
Shekar Siri
10c064c99c feat(ui) - sessions - widget - pagination 2022-06-17 15:38:25 +02:00
Taha Yassine Kraiem
d007d7da5a feat(api): elasticsearch upgrade fix 2022-06-17 15:24:30 +02:00
Alexander Zavorotynskiy
951ffa0320 fix(backend/db): fixed bug (index row size exceeds maximum) by adding left() func in sql requests 2022-06-17 14:48:06 +02:00
Taha Yassine Kraiem
9e7e35769c Merge remote-tracking branch 'origin/api-v1.7.0' into dev 2022-06-17 12:57:03 +02:00
Taha Yassine Kraiem
c10140b8d1 feat(api): changed empty funnel response 2022-06-17 12:39:21 +02:00
Taha Yassine Kraiem
38b65537c7 feat(api): fixed Elasticsearch upgrade 2022-06-17 11:31:31 +02:00
rjshrjndrn
215d889782 ci(workers): build both ee and oss for deployment changes
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-17 11:01:52 +02:00
Taha Yassine Kraiem
1ee50b62ed feat(api): full dependencies upgrade 2022-06-17 10:53:43 +02:00
rjshrjndrn
a08ac6101a chore(helm): change nginx-ingress default lb to ewma
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-17 10:45:41 +02:00
Taha Yassine Kraiem
778db9af34 Merge remote-tracking branch 'origin/api-v1.7.0' into api-v1.7.0 2022-06-17 10:43:13 +02:00
Taha Yassine Kraiem
4d111d6f4a feat(db): migrate to v1.7.0: fixed cross-database references issue 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
7beb08f398 feat(db): migrate old funnels to new metric-funnels 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
891c7600a7 feat(api): custom metrics errors pagination
feat(api): custom metrics sessions pagination
2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
9fb5e7c4d1 feat(api): fixed typo 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
f76c621350 feat(assist): support null&empty values for search
feat(assist): changed single-session search
feat(api): support null&empty values for live sessions search
feat(api): support key-mapping for different names
feat(api): support platform live-sessions search
2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
6cc7372187 feat(api): support nested-key-sort for live sessions 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
4e22038137 feat(assist): changed pagination response
feat(assist): allow nested-key sort
feat(api): support new live sessions pagination response
2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
2e5acdabc3 feat(assist): full autocomplete
feat(assist): solved endpoints conflicts
feat(api): live sessions full autocomplete
2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
d1ef7ea1c7 feat(assist): full search
feat(api): live sessions full search
2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
47fb100b4f feat(assist): fixed multiple values filter support for search 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
ab02495f63 feat(api): changed assist search payload 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
a59a8c0133 feat(assist): changed debug 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
bd9dbc9393 feat(assist): payload extraction debug 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
4fe3f87d46 feat(api): assist autocomplete 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
c0c1a86209 feat(assist): autocomplete 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
fbe37babbc feat(assist): sessions search handle nested objects 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
ccf951f8e4 feat(api): optimized live session check
feat(assist): optimized live session check
feat(assist): sort
feat(assist): pagination
2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
0aa94bbc3c feat(assist): assist changed search payload 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
ef609aa196 feat(api): search live sessions 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
43184d5c43 feat(assist): assist refactored 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
c6a6a77e71 feat(assist): EE assist search 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
181195ffde feat(assist): assist refactored 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
58aea53101 feat(assist): assist upgrade uWebSockets
feat(assist): assist upgrade SocketIo
2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
03dbf42d11 feat(assist): FOSS assist search 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
7d4d0fadbd feat(api): requirements upgrade 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
5b1185b872 feat(api): metric-funnel changed response 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
c7c6cd2187 feat(api):metrics get sessions related to issue 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
1448cb45e9 feat(api): metrics table of errors 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
b4b3a6c26e feat(api): custom metrics fixed templates response 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
f296b27346 feat(api): optimised get issues for get session-details 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
531b112439 feat(api): fixed custom metrics timestamp issue 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
c68edbc705 feat(api): fixed login 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
7d4596c074 feat(api): get sessions details fix 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
03e0dbf0e4 feat(api): optimised get session details 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
46e7f5b83e feat(api): custom metrics config 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
bafae833d5 feat(api): limited long task DB 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
421a1f1104 feat(api): custom metrics config 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
405d83d4e0 feat(api): optimised weekly report 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
6c377bc4e5 feat(api): fixed login response 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
40d60f7769 feat(api): fixed login response 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
557d855ae5 feat(api): changed login response 2022-06-17 10:42:30 +02:00
Taha Yassine Kraiem
0dd7914375 feat(api): EE changed weekly report
feat(api): changed login response
2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
63d2fce3b5 feat(api): fixed weekly report
feat(api): optimised weekly report
2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
119ecd7743 feat(api): ignore weekly report if SMTP not configured 2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
8aec595495 feat(api): changed connection pool configuration
feat(alerts): changed connection pool configuration
2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
779c85dfda feat(api): changes 2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
0fd7d1d80c feat(api): changes
feat(db): changes
2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
5e85da6533 feat(api): changed pages_response_time_distribution response 2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
26ce0c8e86 feat(api): changed crashes response 2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
3f35b01a5e feat(api): changed speed_location response 2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
597da9fc11 feat(api): changed speed_location response 2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
fa7a57eb3f feat(api): changed slowest_domains response 2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
23a98d83d7 feat(api): table of sessions widget 2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
53fc845f9a feat(api): errors widget chart
feat(api): funnels widget chart
2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
bf60c83f3b feat(api): errors widget 2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
4912841a9e feat(api): funnel widget issues 2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
8d49a588e4 feat(api): funnel widget 2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
b5a646b233 feat(api): EE fixed edition 2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
7d426ee79a feat(api): fixed notifications count query 2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
06a52e505e feat(api): fixed edition
feat(api): fixed expiration date
feat(api): fixed change name
feat(api): fixed change role
feat(api): fixed has password
feat(api): refactored edit user
feat(api): refactored edit member
2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
667fe3dd79 feat(db): removed user's appearance
feat(db): removed generated_password
feat(api): merged account&client
feat(api): cleaned account response
feat(api): removed user's appearance
feat(api): removed generated_password
feat(api): limits endpoint
feat(api): notifications/count endpoint
2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
d86ca3c7ec feat(db): EE CH new structure 2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
e92f14dc17 feat(db): EE CH new structure 2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
81503030e4 feat(db): EE CH new structure 2022-06-17 10:42:29 +02:00
Taha Yassine Kraiem
10f26ab45c feat(api): clean script 2022-06-17 10:42:27 +02:00
Taha Yassine Kraiem
5968b55934 feat(api): refactored user-auth 2022-06-17 10:42:00 +02:00
Taha Yassine Kraiem
c2ea4fb4b6 feat(api): metrics changed web vitals description
feat(db): changed metric's monitoring essentials category to web vitals
2022-06-17 10:42:00 +02:00
Taha Yassine Kraiem
254202ba85 feat(api): fixed changed SearchSession payload schema 2022-06-17 10:42:00 +02:00
Taha Yassine Kraiem
b2732eb9be feat(api): changed SearchSession payload schema 2022-06-17 10:42:00 +02:00
Taha Yassine Kraiem
a3ba925cea feat(api): centralized 'order'
feat(api): transform 'order' casing
2022-06-17 10:42:00 +02:00
Taha Yassine Kraiem
20f7c0fb70 feat(DB): changed metrics category from Overview to Monitoring Essentials 2022-06-17 10:42:00 +02:00
Taha Yassine Kraiem
9c9452c530 feat(api): upgraded python base image
feat(alerts): upgraded python base image
2022-06-17 10:42:00 +02:00
Taha Yassine Kraiem
c12cea6f6b feat(api): fixed CH client format 2022-06-17 10:42:00 +02:00
Taha Yassine Kraiem
6c0aca2f8c feat(DB): changed partition expression 2022-06-17 10:42:00 +02:00
Taha Yassine Kraiem
2ed54261b6 feat(api): fixed sourcemaps reader endpoint 2022-06-17 10:42:00 +02:00
Taha Yassine Kraiem
6bf5d1d65b feat(api): user trail limit changed 2022-06-17 10:41:59 +02:00
Taha Yassine Kraiem
23584b8be8 feat(alerts): changed Dockerfile.alerts 2022-06-17 10:41:59 +02:00
Taha Yassine Kraiem
f7002ab2a0 feat(api): vault support 2022-06-17 10:41:59 +02:00
Taha Yassine Kraiem
2fba643b7c feat(api): changed search user trails by username 2022-06-17 10:41:59 +02:00
Taha Yassine Kraiem
18f0d2fbca feat(api): search user trails by username
feat(db): index to search user trails by username
2022-06-17 10:41:59 +02:00
Taha Yassine Kraiem
9fcba8703e feat(api): EE updated authorizer 2022-06-17 10:41:59 +02:00
Taha Yassine Kraiem
41d7d16d03 feat(api): changed Dockerfile 2022-06-17 10:41:59 +02:00
Taha Yassine Kraiem
9100d27854 feat(api): changed root path 2022-06-17 10:41:59 +02:00
Taha Yassine Kraiem
507462180e feat(api): fixed return createdAt with the list of users 2022-06-17 10:41:58 +02:00
Taha Yassine Kraiem
7f9bc99bcf feat(DB): traces/trails index
feat(api): get all possible traces/trails actions
feat(api): search traces/trails by actions
feat(api): search traces/trails by user
2022-06-17 10:41:58 +02:00
Taha Yassine Kraiem
e95c5b915d feat(api): return createdAt with the list of users 2022-06-17 10:41:58 +02:00
Taha Yassine Kraiem
cf6320d4df feat(DB): traces/trails index
feat(api): get all traces/trails
2022-06-17 10:41:58 +02:00
Taha Yassine Kraiem
d9d2f08fb8 feat(DB): changed sessions_metadata sort expression 2022-06-17 10:41:58 +02:00
Taha Yassine Kraiem
b0d3074ceb feat(api): changed Dockerfile 2022-06-17 10:41:58 +02:00
Taha Yassine Kraiem
9c5d96e35c feat(api): changed Dockerfile 2022-06-17 10:41:58 +02:00
Taha Yassine Kraiem
9af6fc004b feat(api): changed Dockerfile 2022-06-17 10:41:58 +02:00
Taha Yassine Kraiem
1dcad02b9a feat(api): changed replay file URL 2022-06-17 10:41:58 +02:00
Taha Yassine Kraiem
1859fb8a6c feat(api): EE updated dependencies 2022-06-17 10:41:58 +02:00
Taha Yassine Kraiem
90143bcd31 feat(api): updated dependencies 2022-06-17 10:41:58 +02:00
Taha Yassine Kraiem
1224e6054e feat(api): fixed description optional value 2022-06-17 10:41:57 +02:00
Taha Yassine Kraiem
c715a6084e feat(api): fixed description default value 2022-06-17 10:41:57 +02:00
Taha Yassine Kraiem
1c671631e7 feat(api): changed Dockerfile 2022-06-17 10:41:57 +02:00
rjshrjndrn
ea103f9589 chore(vagrant): Adding development readme
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-17 10:41:55 +02:00
Rajesh Rajendran
32fdd80784 Vagrant for local contribution (#434)
* chore(vagrant): initial vagrantfile
* chore(vagrant): adding instructions after installation
* chore(vagrant): Adding vagrant user to docker group
* chore(vagrant): use local docker daemon for k3s
* chore(vagrant): fix comment
* chore(vagrant): adding hostname in /etc/hosts
* chore(vagrant): fix doc
* chore(vagrant): limiting cpu
* chore(frontend): initialize dev env
* chore(docker): adding dockerignore
* chore(dockerfile): using cache for faster build
* chore(dockerignore): update
* chore(docker): build optimizations
* chore(build): all components build option
* chore(build): utilities build fix
* chore(script): remove debug message
* chore(vagrant): provision using stable branch always

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-17 10:40:57 +02:00
Taha Yassine Kraiem
c72120ac64 feat(api): s3 helper detect environment
feat(api): support description for dashboards
2022-06-17 10:39:48 +02:00
Taha Yassine Kraiem
1e6c6fa1a7 feat(db): EE remove pages_count column 2022-06-17 10:39:48 +02:00
Taha Yassine Kraiem
d45fd1634d feat(api): EE fixed No of pages count widget 2022-06-17 10:39:48 +02:00
Taha Yassine Kraiem
9ddc0e5e4a feat(api): merge dev 2022-06-17 10:39:30 +02:00
Taha Yassine Kraiem
e322e9c3d0 feat(api): round time metrics 2022-06-17 10:33:41 +02:00
Alexander Zavorotynskiy
a153547575 feat(backend/db): send metadata directly to db (removed from batches) 2022-06-17 09:34:58 +02:00
Taha Yassine Kraiem
f9695198f2 feat(db): migrate to v1.7.0: fixed cross-database references issue 2022-06-16 19:18:52 +02:00
Taha Yassine Kraiem
621b4aae7c feat(db): migrate old funnels to new metric-funnels 2022-06-16 19:12:06 +02:00
Taha Yassine Kraiem
734320cfe5 feat(api): custom metrics errors pagination
feat(api): custom metrics sessions pagination
2022-06-16 17:49:57 +02:00
Shekar Siri
441f792679 feat(ui) - assist filters with pagination 2022-06-16 16:49:00 +02:00
Shekar Siri
133714a4cb feat(ui) - assist filters with pagination 2022-06-16 16:49:00 +02:00
Taha Yassine Kraiem
33a3890562 feat(api): fixed typo 2022-06-16 16:34:02 +02:00
rjshrjndrn
1a5c50cefa fix(helm): removing unnecessary ingress rules
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-16 15:08:43 +02:00
rjshrjndrn
54b414e199 chore(helm): adding pvc to utilities
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-16 14:17:33 +02:00
Taha Yassine Kraiem
a3aa176e67 feat(assist): support null&empty values for search
feat(assist): changed single-session search
feat(api): support null&empty values for live sessions search
feat(api): support key-mapping for different names
feat(api): support platform live-sessions search
2022-06-16 14:02:20 +02:00
Alexander Zavorotynskiy
d837c14be4 feat(backend): start using analytics topic for heuristics and trigger topic only for sessionEnd between sink and storage 2022-06-16 14:00:50 +02:00
Taha Yassine Kraiem
96bf84b567 feat(api): support nested-key-sort for live sessions 2022-06-16 12:27:51 +02:00
Taha Yassine Kraiem
fe6a50dc2c feat(assist): changed pagination response
feat(assist): allow nested-key sort
feat(api): support new live sessions pagination response
2022-06-16 11:53:49 +02:00
rjshrjndrn
75504409e7 chore(helm): Adding utilities chart
Will contain OpenReplay utilities, such as:
- efs cleaner
- postgres backup trigger, etc.
2022-06-16 11:14:18 +02:00
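Utilities like the efs cleaner listed above are typically shipped from such a chart as Kubernetes CronJobs. A minimal sketch, assuming hypothetical names, schedule, image and mount path rather than the actual chart values:

```yaml
# Hypothetical efs-cleaner CronJob: prunes old session recordings from a
# shared volume on a nightly schedule. All names/paths are illustrative.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: efs-cleaner
spec:
  schedule: "0 2 * * *"              # nightly at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: cleaner
              image: busybox
              # delete files older than 7 days (retention illustrative)
              command: ["sh", "-c", "find /mnt/efs -type f -mtime +7 -delete"]
              volumeMounts:
                - name: efs
                  mountPath: /mnt/efs
          volumes:
            - name: efs
              persistentVolumeClaim:
                claimName: openreplay-efs
```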
Taha Yassine Kraiem
c254aab413 feat(assist): full autocomplete
feat(assist): solved endpoints conflicts
feat(api): live sessions full autocomplete
2022-06-15 22:44:41 +02:00
Taha Yassine Kraiem
c6b719b9fa feat(assist): full search
feat(api): live sessions full search
2022-06-15 21:56:59 +02:00
Taha Yassine Kraiem
2dbdfade10 feat(assist): fixed multiple values filter support for search 2022-06-15 20:24:32 +02:00
Taha Yassine Kraiem
31a53edd5a feat(api): changed assist search payload 2022-06-15 19:25:50 +02:00
Shekar Siri
6ba773fe6d Merge branch 'dev-assist-filters' into dev 2022-06-15 19:08:01 +02:00
Shekar Siri
6144a34d75 Merge branch 'dev-funnels' into dev 2022-06-15 19:07:45 +02:00
Taha Yassine Kraiem
dd2c51e3b6 feat(assist): changed debug 2022-06-15 19:05:07 +02:00
Shekar Siri
9e87909167 feat(ui) - issues and errors widgets 2022-06-15 18:56:16 +02:00
Taha Yassine Kraiem
cf80c46cd9 feat(assist): payload extraction debug 2022-06-15 18:45:31 +02:00
Shekar Siri
c2ca867fdc change(ui) - checking for user login 2022-06-15 18:43:55 +02:00
Taha Yassine Kraiem
c53ecbef00 feat(api): assist autocomplete 2022-06-15 17:22:43 +02:00
Taha Yassine Kraiem
38be085622 feat(assist): autocomplete 2022-06-15 17:15:02 +02:00
Shekar Siri
1e78a851c6 feat(ui) - assist filters wip 2022-06-15 16:46:09 +02:00
Shekar Siri
aa669d6a86 feat(ui) - assist filters wip 2022-06-15 16:20:35 +02:00
Taha Yassine Kraiem
8510949d29 feat(assist): sessions search handle nested objects 2022-06-15 16:03:37 +02:00
Alexander Zavorotynskiy
5ea482d4c2 feat(backend/http): removed second unnecessary request body read 2022-06-15 15:50:55 +02:00
Shekar Siri
2fe2406d0c feat(ui) - assist filters wip 2022-06-15 15:29:29 +02:00
Taha Yassine Kraiem
d6070d1829 feat(api): optimized live session check
feat(assist): optimized live session check
feat(assist): sort
feat(assist): pagination
2022-06-15 15:05:41 +02:00
Shekar Siri
e5963fbeef feat(ui) - assist filters wip 2022-06-15 14:14:48 +02:00
Alexander Zavorotynskiy
56623f9635 feat(backend/db): added batch updates in web-stats methods 2022-06-15 13:20:37 +02:00
Alexander
3c6bd9613c
feat(backend): control batch size and number of SQL requests in db service for more accurate management of data inserts (#540)
Co-authored-by: Alexander Zavorotynskiy <alexander@openreplay.com>
2022-06-15 12:57:09 +02:00
Alexander
6b5d9d3799
feat(backend): added new trigger which sink should send to storage after session end received (#539)
Co-authored-by: Alexander Zavorotynskiy <alexander@openreplay.com>
2022-06-15 11:45:52 +02:00
Alexander
883a6f6909
Improved ender (#537)
* feat(backend/ender): using producer timestamp for session end detection

* feat(backend/ender): added timeControl module

Co-authored-by: Alexander Zavorotynskiy <alexander@openreplay.com>
2022-06-15 10:49:32 +02:00
Taha Yassine Kraiem
b85f2abfd5 feat(assist): assist changed search payload 2022-06-14 20:12:03 +02:00
Taha Yassine Kraiem
a2ec909ace feat(api): search live sessions 2022-06-14 20:09:36 +02:00
Taha Yassine Kraiem
971dbd40a4 feat(assist): assist refactored 2022-06-14 19:42:16 +02:00
Taha Yassine Kraiem
1462f90925 feat(assist): EE assist search 2022-06-14 19:37:04 +02:00
Taha Yassine Kraiem
ded2d980fe feat(assist): assist refactored 2022-06-14 18:01:52 +02:00
Taha Yassine Kraiem
d4d029c525 feat(assist): assist upgrade uWebSockets
feat(assist): assist upgrade SocketIo
2022-06-14 18:01:34 +02:00
Taha Yassine Kraiem
40836092fa feat(assist): FOSS assist search 2022-06-14 17:19:58 +02:00
Mehdi Osman
911736f772
Increased Redis max queue length 2022-06-14 16:21:15 +02:00
Taha Yassine Kraiem
b8eac83662 feat(api): requirements upgrade 2022-06-14 15:07:39 +02:00
Taha Yassine Kraiem
d478436d9b feat(api): metric-funnel changed response 2022-06-14 14:56:46 +02:00
Shekar Siri
af7f751b42 feat(ui) - issues and errors widgets 2022-06-14 14:36:08 +02:00
rjshrjndrn
ec66bc03c6 chore(helm): enable compression for nginx
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-14 13:16:54 +02:00
Shekar Siri
7874dcbe0b feat(ui) - issues and errors widgets 2022-06-14 12:47:43 +02:00
Alexander Zavorotynskiy
3059227bcd feat(backend): turn on kafka delivery reports 2022-06-14 10:19:33 +02:00
Taha Yassine Kraiem
13d71ce388 feat(api):metrics get sessions related to issue 2022-06-13 19:56:27 +02:00
Taha Yassine Kraiem
09711d4521 feat(api): metrics table of errors 2022-06-13 19:26:00 +02:00
Taha Yassine Kraiem
50dce0ee9f feat(api): custom metrics fixed templates response 2022-06-13 19:20:16 +02:00
Taha Yassine Kraiem
2a12ed7337 feat(api): optimised get issues for get session-details 2022-06-13 18:24:03 +02:00
Shekar Siri
06855c41f4 feat(ui) - issues - widget 2022-06-13 17:39:27 +02:00
Shekar Siri
bea34112c9 feat(ui) - funnels - issue details 2022-06-13 17:35:31 +02:00
Shekar Siri
44d735d0a5 Merge branch 'funnels' into deb-funnels 2022-06-13 17:21:46 +02:00
Shekar Siri
88adec9e84 feat(ui) - funnels - issue details 2022-06-13 17:20:09 +02:00
Taha Yassine Kraiem
c856b2168d feat(api): fixed custom metrics timestamp issue 2022-06-13 16:07:56 +02:00
Taha Yassine Kraiem
85c27ff0f5 feat(api): fixed login 2022-06-13 15:59:54 +02:00
Taha Yassine Kraiem
d4c7fdcc5f feat(api): get sessions details fix 2022-06-13 15:24:21 +02:00
Shekar Siri
b26f2e87bf feat(ui) - funnels - issue details 2022-06-13 14:04:16 +02:00
Taha Yassine Kraiem
2b85ad3dfc feat(api): optimised get session details 2022-06-13 13:19:24 +02:00
Shekar Siri
4e2bcf26a4 feat(ui) - funnels - issue details 2022-06-13 12:32:13 +02:00
Shekar Siri
936d1f6f6e feat(ui) - funnels - details 2022-06-13 11:35:23 +02:00
Taha Yassine Kraiem
974f78b84a feat(api): custom metrics config 2022-06-10 17:51:47 +02:00
Taha Yassine Kraiem
36e5ba6389 feat(api): limited long task DB 2022-06-10 17:36:22 +02:00
Taha Yassine Kraiem
41b96321fe feat(api): custom metrics config 2022-06-10 17:19:51 +02:00
sylenien
fee99d3bf1 fix(ui): bugfixes 2022-06-10 17:11:14 +02:00
sylenien
43f52a9dcd fix(ui): fix couple ui bugs 2022-06-10 17:11:14 +02:00
dlrm
0c4b6ab6f0 fix(ui): fix styles 2022-06-10 17:11:14 +02:00
dlrm
c90a8d558a fix(ui): env? 2022-06-10 17:11:14 +02:00
dlrm
2bc44d038e fix(ui): fix env sample 2022-06-10 17:11:14 +02:00
dlrm
f745b9ba51 fix(ui): fix env sample 2022-06-10 17:11:14 +02:00
dlrm
f7eb848706 fix(ui): fixes after webpack update 2022-06-10 17:11:14 +02:00
dlrm
f08d8ca07e fix(ui): webpack 2022-06-10 17:11:14 +02:00
sylenien
aca4ef697e fix(ui): fix icon positioning on a timeline 2022-06-10 17:11:14 +02:00
sylenien
55f58487f5 fix(ui): fix performance tab graph mapper 2022-06-10 17:11:14 +02:00
sylenien
05c8bf4d59 fix(ui): red color changes, menu controls, performance crash 2022-06-10 17:11:14 +02:00
sylenien
997a5421ae fix(ui): small design fixes 2022-06-10 17:11:14 +02:00
sylenien
8a2d777d8c fix(ui): small fixes to share popup, archive inspector 2022-06-10 17:11:14 +02:00
sylenien
9caaabcacc fix(ui): move issues button to the subheader 2022-06-10 17:11:14 +02:00
sylenien
43a1991300 fix(ui): ui fixes 2022-06-10 17:11:14 +02:00
sylenien
c60b060cbe fix(ui): unblock tabs when in inspector mode, turn off inspector on tab change 2022-06-10 17:11:14 +02:00
sylenien
13dff716ea fix(ui): fix ui bugs 2022-06-10 17:11:14 +02:00
sylenien
366314193e fix(ui): design review fixes 2022-06-10 17:11:14 +02:00
sylenien
6e24da549a fix(ui): live session fixes 2022-06-10 17:11:14 +02:00
sylenien
02c87d237d feat(ui): change player control tabs designs 2022-06-10 17:11:14 +02:00
sylenien
b1d903f7f6 fix(ui): design fixes 2022-06-10 17:11:14 +02:00
sylenien
83600ee04d fix(ui): minor changes 2022-06-10 17:11:14 +02:00
sylenien
dbae4fe353 feat(ui): player controls redesign 2022-06-10 17:11:14 +02:00
sylenien
35d258aa8c fix(ui): design review fixes 2022-06-10 17:11:14 +02:00
sylenien
042571193a fix(ui): minor bugs 2022-06-10 17:11:14 +02:00
sylenien
3031569c07 fix(ui): ui fixes after design review 2022-06-10 17:11:14 +02:00
sylenien
9d06a95c7a fix(ui): fix active sessions 2022-06-10 17:11:14 +02:00
sylenien
ce5affddd6 fix(ui): fix styles in player header 2022-06-10 17:11:14 +02:00
sylenien
3444b73ed0 fix(ui): show events search by default 2022-06-10 17:11:14 +02:00
sylenien
2109808d61 fix(ui): fix tooltip for subheader 2022-06-10 17:11:14 +02:00
sylenien
197694be73 fix(ui): rm test code 2022-06-10 17:11:14 +02:00
sylenien
ff73c70bfd fix(ui): fix warnings for few components 2022-06-10 17:11:14 +02:00
sylenien
1e51e3bce8 feat(ui): change eventgroup sidebar 2022-06-10 17:11:14 +02:00
sylenien
5e296703b0 fix(ui): fix typo 2022-06-10 17:11:14 +02:00
sylenien
05ecce9c74 feat(ui): add urlref badge to subheader 2022-06-10 17:11:14 +02:00
sylenien
5f5f47b06b fix(ui): rm unused code 2022-06-10 17:11:14 +02:00
sylenien
e3099bf93d fix(ui): return subheader 2022-06-10 17:11:14 +02:00
sylenien
0ab16ce91c fix(ui): fix for cicd 2022-06-10 17:11:14 +02:00
sylenien
a7d032bb29 fix(ui): rename file 2022-06-10 17:11:14 +02:00
sylenien
6b34630fa1 fix(ui): minor bugfix 2022-06-10 17:11:14 +02:00
sylenien
c584b0f653 feat(ui): change events tab design, move action buttons to subheader 2022-06-10 17:11:14 +02:00
sylenien
aff6f54397 fix(ui): fix sessionlist modal 2022-06-10 17:11:14 +02:00
sylenien
3aac6cf130 feat(ui): redesign player header; move user data to header 2022-06-10 17:11:14 +02:00
Taha Yassine Kraiem
dc02594da8 feat(api): optimised weekly report 2022-06-10 16:31:08 +02:00
Taha Yassine Kraiem
e796e6c795 feat(api): fixed login response 2022-06-10 15:49:24 +02:00
Taha Yassine Kraiem
8d4d61103a feat(api): fixed login response 2022-06-10 15:44:05 +02:00
Taha Yassine Kraiem
3217a55bca feat(api): changed login response 2022-06-10 15:29:54 +02:00
Taha Yassine Kraiem
0886e3856a feat(api): EE changed weekly report
feat(api): changed login response
2022-06-10 12:33:36 +02:00
Taha Yassine Kraiem
5592e13d9b feat(api): fixed weekly report
feat(api): optimised weekly report
2022-06-10 12:31:29 +02:00
Taha Yassine Kraiem
4305e03745 feat(api): ignore weekly report if SMTP not configured 2022-06-10 11:53:47 +02:00
Taha Yassine Kraiem
e1b233bac8 feat(api): changed connection pool configuration
feat(alerts): changed connection pool configuration
2022-06-10 11:35:25 +02:00
sylenien
684f1598bc feat(tracker): add option to hide dom nodes 2022-06-10 09:51:40 +02:00
Alexander Zavorotynskiy
ea658316a2 fix(backend): fixed panic in kafka consumer 2022-06-10 09:45:50 +02:00
Alexander Zavorotynskiy
b646ba2a9e fix(backend): fixed panic in db service 2022-06-10 09:31:54 +02:00
Taha Yassine Kraiem
b16b3e3b87 feat(api): changes 2022-06-09 17:37:49 +02:00
Taha Yassine Kraiem
656e13f6e5 feat(api): changes
feat(db): changes
2022-06-09 17:23:17 +02:00
Taha Yassine Kraiem
6e5bdae7da feat(api): changed pages_response_time_distribution response 2022-06-09 14:12:21 +02:00
Taha Yassine Kraiem
c81ce9bf7d feat(api): changed crashes response 2022-06-09 14:09:13 +02:00
Taha Yassine Kraiem
6e9e5dceb7 feat(api): changed speed_location response 2022-06-09 13:54:25 +02:00
Taha Yassine Kraiem
89b3d84230 feat(api): changed speed_location response 2022-06-09 13:53:55 +02:00
Taha Yassine Kraiem
9411f0f576 feat(api): changed slowest_domains response 2022-06-09 13:42:52 +02:00
dlrm
3b8a2c19ef fix(tracker): code style 2022-06-09 13:36:28 +02:00
dlrm
c913e4e7f6 fix(tracker): code rvw 2022-06-09 13:36:28 +02:00
dlrm
9158fa60c5 fix(tracker): fix tracker date recording, added new obscure dates opt
fix(tracker): rm consolelog

fix(tracker): change compile import

fix(tracker): fix node v and import
2022-06-09 13:36:28 +02:00
Taha Yassine Kraiem
7b1e854c53 feat(api): table of sessions widget 2022-06-09 13:13:05 +02:00
Taha Yassine Kraiem
adb8e2c404 feat(api): errors widget chart
feat(api): funnels widget chart
2022-06-08 19:03:06 +02:00
Taha Yassine Kraiem
6816dedaff feat(api): errors widget 2022-06-08 17:21:13 +02:00
Shekar Siri
a461ad0938 change(ui) - sessions daterange 2022-06-08 16:55:29 +02:00
Shekar Siri
4188b7894d change(ui) - tracking code changes 2022-06-08 16:29:00 +02:00
Shekar Siri
e652ee97ba pulled webpack changes and resolved conflicts 2022-06-08 16:16:41 +02:00
Shekar Siri
f235da44ab pulled webpack changes and resolved conflicts 2022-06-08 16:04:52 +02:00
Shekar Siri
767376a8db change(ui) - notifications count and list with mobx 2022-06-08 15:50:29 +02:00
Shekar Siri
d8911e93c1 change(ui) - notifications count and list 2022-06-08 15:50:29 +02:00
Shekar Siri
8273fc08bc change(ui) - login align 2022-06-08 15:50:29 +02:00
Alexander
e749ed1823
Merge pull request #531 from openreplay/assets_fix
Assets fix
2022-06-08 15:08:25 +02:00
Alexander Zavorotynskiy
2dccb2142b fix(backend/assets): return back cache checks in s3 2022-06-08 15:05:02 +02:00
Alexander Zavorotynskiy
404f6204e1 fix(backend/assets): copy ts and index in assets convert method 2022-06-08 14:44:16 +02:00
Alexander Zavorotynskiy
248d3b2c3d fix(backend/assets): changed comment 2022-06-08 13:17:37 +02:00
rjshrjndrn
9388e03e8c fix(ingress): assets ingress values
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-08 12:47:24 +02:00
Alexander Zavorotynskiy
c4081ce78a feat(backend/assets): disabled cache checks 2022-06-08 12:38:38 +02:00
Taha Yassine Kraiem
b2a778a0d7 feat(api): funnel widget issues 2022-06-07 20:10:40 +02:00
Taha Yassine Kraiem
1445c72737 feat(api): funnel widget 2022-06-07 19:17:55 +02:00
Taha Yassine Kraiem
734d1333a9 feat(api): EE fixed edition 2022-06-07 18:34:52 +02:00
Taha Yassine Kraiem
932c18f65a feat(api): fixed notifications count query 2022-06-07 18:18:22 +02:00
Taha Yassine Kraiem
3a70c8bef6 feat(api): fixed edition
feat(api): fixed expiration date
feat(api): fixed change name
feat(api): fixed change role
feat(api): fixed has password
feat(api): refactored edit user
feat(api): refactored edit member
2022-06-07 18:12:08 +02:00
rjshrjndrn
a996fac4d3 fix(ingress): assets path
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-07 17:31:51 +02:00
rjshrjndrn
8ce66d0ffc fix(build): frontend build command
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-07 16:42:15 +02:00
rjshrjndrn
4986708006 build(frontend): changed env file
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-07 15:46:50 +02:00
Shekar Siri
b1ce794c06 change(ui) - Tenant Key checking for ee edition 2022-06-07 15:41:36 +02:00
Shekar Siri
d8dcfe4b5e change(ui) - removed client and updated account 2022-06-07 15:41:36 +02:00
Shekar Siri
7a3b13ff8a change(ui) - Tenant Key checking for ee edition 2022-06-07 15:41:36 +02:00
Shekar Siri
c3d4470bb1 change(ui) - removed appearance 2022-06-07 15:41:36 +02:00
Alexander Zavorotynskiy
0b0798b0ef feat(backend/assets): added metric (total_assets) 2022-06-07 14:14:18 +02:00
Alexander Zavorotynskiy
9292d315c4 feat(backend/ender): removed debug log 2022-06-07 13:48:10 +02:00
Alexander Zavorotynskiy
7678e9d056 fix(backend/db): fixed loss of sessions 2022-06-07 13:44:20 +02:00
Alexander Zavorotynskiy
4f8c4358f8 fix(backend/storage): fixed panic in storage service 2022-06-07 13:30:48 +02:00
Shekar Siri
329ae62881 change(ui) - input class 2022-06-07 12:08:15 +02:00
Shekar Siri
65331ca016 change(ui) - code snippet 2022-06-07 12:04:43 +02:00
Shekar Siri
cb5809608a change(ui) - code snippet 2022-06-07 11:59:42 +02:00
Alexander Zavorotynskiy
78cf538b6b feat(backend): added metrics to storage and sink services 2022-06-07 10:12:42 +02:00
Taha Yassine Kraiem
cbe78cc58e feat(db): removed user's appearance
feat(db): removed generated_password
feat(api): merged account&client
feat(api): cleaned account response
feat(api): removed user's appearance
feat(api): removed generated_password
feat(api): limits endpoint
feat(api): notifications/count endpoint
2022-06-06 19:33:26 +02:00
Alexander Zavorotynskiy
a6db2cb602 feat(backend): added metrics to http service 2022-06-06 16:46:14 +02:00
Alexander Zavorotynskiy
c963b74cbf feat(backend): cleaned up in internal dir 2022-06-06 14:13:24 +02:00
Taha Yassine Kraiem
a6c75d3cdd Merge remote-tracking branch 'origin/dev' into api-v1.6.1
# Conflicts:
#	api/Dockerfile
#	api/development.md
#	backend/Dockerfile.bundle
#	backend/build.sh
#	backend/development.md
#	backend/internal/assets/jsexception.go
#	backend/internal/handlers/ios/performanceAggregator.go
#	backend/pkg/intervals/intervals.go
#	backend/pkg/log/queue.go
#	backend/pkg/messages/filters.go
#	backend/pkg/messages/legacy-message-transform.go
#	backend/pkg/messages/messages.go
#	backend/pkg/messages/read-message.go
#	backend/services/db/heuristics/anr.go
#	backend/services/db/heuristics/clickrage.go
#	backend/services/db/heuristics/heuristics.go
#	backend/services/db/heuristics/readyMessageStore.go
#	backend/services/db/heuristics/session.go
#	backend/services/db/stats.go
#	backend/services/ender/builder/builderMap.go
#	backend/services/ender/builder/clikRageDetector.go
#	backend/services/ender/builder/cpuIssueFinder.go
#	backend/services/ender/builder/deadClickDetector.go
#	backend/services/ender/builder/domDropDetector.go
#	backend/services/ender/builder/inputEventBuilder.go
#	backend/services/ender/builder/memoryIssueFinder.go
#	backend/services/ender/builder/pageEventBuilder.go
#	backend/services/ender/builder/performanceTrackAggrBuilder.go
#	backend/services/http/assets.go
#	backend/services/http/handlers-depricated.go
#	backend/services/http/ios-device.go
#	backend/services/integrations/clientManager/manager.go
#	backend/services/storage/gzip.go
#	backend/services/storage/main.go
#	ee/api/clean.sh
#	scripts/helmcharts/local_deploy.sh
#	scripts/helmcharts/vars.yaml
2022-06-03 17:06:25 +01:00
Taha Yassine Kraiem
31a577b6cc feat(db): EE CH new structure 2022-06-03 16:56:37 +01:00
Shekar Siri
a7bfbc8ff7 change(ui) - config changes 2022-06-03 17:18:17 +02:00
Shekar Siri
2ed5cac986
Webpack upgrade and dependency cleanup (#523)
* change(ui) - webpack update
* change(ui) - API optimization and other fixes
2022-06-03 16:47:38 +02:00
rjshrjndrn
f5e013329f chore(action): removing unnecessary file 2022-06-03 16:32:06 +02:00
Alexander Zavorotynskiy
d358747caf fix(backend): several fixes in backend services 2022-06-03 16:01:14 +02:00
Alex Kaminskii
d0e651bc29 fix(tracker): uncomment init scroll tracking 2022-06-03 14:19:39 +02:00
Alex Kaminskii
e57d90e5a1 fix(tracker): use node guards instead of instanceof in some cases; import type App 2022-06-03 14:17:53 +02:00
Alex Kaminskii
1495f3bc5d fix(backend/ee/kafka): Partition-wise back-commit 2022-06-03 13:52:31 +02:00
rjshrjndrn
f626636ed7 chore(helm): enable cors for ingest
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-06-03 13:04:11 +02:00
rjshrjndrn
06eeabe494 chore(actions): enable build from branch 2022-06-03 12:48:24 +02:00
Alexander Zavorotynskiy
d68ac74731 feat(backend/http): added OPTIONS method for all paths 2022-06-03 11:13:56 +02:00
Alexander Zavorotynskiy
d4e5fce12a feat(backend/http): added prefix hack 2022-06-03 10:52:12 +02:00
Alex Kaminskii
7395688831 fix(backend/http): check whether declaration order has an influence 2022-06-02 19:04:48 +02:00
Eric Chan
c2695ef31f allow use of localStorage and sessionStorage to be overridden 2022-06-02 17:49:05 +02:00
Alexander Zavorotynskiy
1a8c076b41 fix(backend/http): added preflight headers to root 2022-06-02 17:39:38 +02:00
Taha Yassine Kraiem
e7e0296b6b feat(db): EE CH new structure 2022-06-02 12:37:52 +01:00
Alexander Zavorotynskiy
2fb57962b8 feat(backend/sink): added last session ts in sink logs 2022-06-02 10:50:14 +02:00
Alexander Zavorotynskiy
485865f704 fix(backend/storage): fixed ts of last processed session in logs 2022-06-02 10:27:32 +02:00
Alexander Zavorotynskiy
2cadf12f88 feat(backend/storage): added counter and last session timestamp for storage service 2022-06-02 10:13:18 +02:00
Taha Yassine Kraiem
caaf7793e3 feat(db): EE CH new structure 2022-06-01 19:51:42 +01:00
rjshrjndrn
f330d5031f chore(helm): adding grafana ingress
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-05-31 19:21:45 +02:00
Taha Yassine Kraiem
95088518aa feat(api): clean script 2022-05-31 13:46:13 +01:00
Alexander Zavorotynskiy
3a4d5f6796 feat(backend/sink): added additional log on producer write operation 2022-05-31 14:43:56 +02:00
Taha Yassine Kraiem
b1aae16f60 feat(api): refactored user-auth 2022-05-31 10:14:55 +01:00
Alexander Zavorotynskiy
6e92ba2e79 feat(backend/ender): added additional log for ender service 2022-05-31 10:40:44 +02:00
Alexander Zavorotynskiy
df18e7dd7d feat(backend/storage): additional log and memory improvements in storage service 2022-05-31 10:02:31 +02:00
Alexander Zavorotynskiy
0b7bb2339d fix(backend/datasaver): changed postgres to clickhouse and added missing imports 2022-05-30 17:41:45 +02:00
rjshrjndrn
440efd1b5d chore(helm): increase health check timeout
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-05-30 17:30:35 +02:00
rjshrjndrn
6aaa0b5fb8 chore(helm): chalice health check
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-05-30 17:20:28 +02:00
Alexander Zavorotynskiy
d871558390 fix(backend/storage): fixed bug with large session files 2022-05-30 16:59:41 +02:00
Alexander Zavorotynskiy
24fdb5e18c fix(backend/http): fixed bug with aws health checks 2022-05-30 16:39:05 +02:00
ShiKhu
0f434a21d4 fix(tracker): 3.5.12: resolve Promise returning on start() with success:false instead of rejecting 2022-05-27 21:25:21 +02:00
ShiKhu
3555864580 fix(backend-db): log session-not-found only once 2022-05-27 12:55:15 +02:00
ShiKhu
edddf87e5f fix(frontend): resources status fix 2022-05-27 12:38:05 +02:00
Alexander Zavorotynskiy
0fe1b0c3a8 fix(backend/storage): fixed panic in storage service 2022-05-27 10:22:19 +02:00
Rajesh Rajendran
3a2b54a446
Fixes related to clickhouse and service Port for ingress (#510)
* chore(helm): parameterizing clickhouse shards/replicas

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* chore(clickhouse): adding new template for clickhouse cluster

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* chore(helm): enable passwordless clickhouse

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* chore(install): check clickhouse is up prior to initialization (sketched below)

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* fix(helm): port value for ingress
2022-05-25 16:53:24 +00:00
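For illustration, the "check clickhouse is up" step mentioned above can be sketched as a small retry loop; the host and sleep interval here are assumptions, not the installer's actual values:

    # Wait until ClickHouse answers a trivial query before running any init SQL.
    until clickhouse-client --host clickhouse.db.svc.cluster.local --query 'SELECT 1' >/dev/null 2>&1; do
      echo 'waiting for clickhouse...'
      sleep 5
    done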
Rajesh Rajendran
55a0d3a0e0
chore(helm): enable serviceMonitor only if monitoring stack installed. (#509) 2022-05-25 16:11:09 +00:00
Rajesh Rajendran
2752118e94
fix(helm): clickhouse change port type to integer (#508)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-05-25 16:00:13 +00:00
Rajesh Rajendran
c795e0480d
fix(helm): service port installation issue (#507)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-05-25 15:58:48 +00:00
Alex Kaminskii
d7dc6e0860 fix(player): apply scrolls after styles 2022-05-25 15:24:10 +02:00
Rajesh Rajendran
2a870d6f74
chore(helm): enabling monitoring for services (#503) 2022-05-24 17:49:24 +00:00
Alexander Zavorotynskiy
a32ac65f35 feat(backend): additional logs in messageHandler 2022-05-24 16:27:35 +02:00
Alexander Zavorotynskiy
ca78bca3d1 chore(helmchart): added missing part of yaml file to sink helm chart 2022-05-24 13:39:53 +02:00
Alexander Zavorotynskiy
31c852df2b feat(backend/sink): added error log for consumer.Commit() method 2022-05-24 13:30:25 +02:00
Alexander Zavorotynskiy
204c6f589b feat(backend/sink): small changes 2022-05-24 13:24:00 +02:00
Alexander Zavorotynskiy
8647beb538 chore(helmchart): added ASSETS_ORIGIN to sink helm chart 2022-05-24 13:21:38 +02:00
Alexander
c6f54f18aa
Merge pull request #502 from openreplay/message_timestamp_changes
Message timestamp changes
2022-05-24 13:02:16 +02:00
Alexander Zavorotynskiy
c941cb872a feat(backend/messages): added timestamp for SessionStart and moved RawErrorEvent to db datasaver 2022-05-24 10:33:16 +02:00
Alexander Zavorotynskiy
d685ad4cb3 feat(backend/ender): implemented metrics module and added to ender service 2022-05-23 17:48:24 +02:00
Alexander Zavorotynskiy
d29416fd48 fix(backend): fixed bug with group name in heuristics service 2022-05-23 17:42:28 +02:00
sylenien
07072f74b0 fix(ui): fix text overflow 2022-05-23 11:05:03 +02:00
sylenien
a06fb42e12 fix(ui): fix bugs with metric updating, metric selection hover etc 2022-05-23 11:05:03 +02:00
sylenien
40ab7d1e41 fix(ui): minor fixes for session settings 2022-05-23 11:05:03 +02:00
sylenien
d4fa960fdf fix(ui): make dashboardeditModal closable with esc 2022-05-23 11:05:03 +02:00
sylenien
6a801a2026 fix(ui): make menuitem configurable 2022-05-23 11:05:03 +02:00
sylenien
af45af8bd0 fix(ui): design review - dashboard metric selection 2022-05-23 11:05:03 +02:00
sylenien
a489a8b77e fix(ui): design review - saved search 2022-05-23 11:05:03 +02:00
sylenien
144f596144 fix(ui): remove console.log 2022-05-23 11:05:03 +02:00
sylenien
e47797ee3e fix(ui): minor ui fixes after review 2022-05-23 11:05:03 +02:00
sylenien
020b993280 fix(ui): fix description input focus 2022-05-23 11:05:03 +02:00
sylenien
4efe7a7843 feat(ui): add icon to metric creation box 2022-05-23 11:05:03 +02:00
Alex Kaminskii
30d6f2489c feat(tracker-assist): 3.5.11: RemoteControl: better scroll element detection; maintain React tight-state input value 2022-05-20 22:38:13 +02:00
Alex Kaminskii
62e163fb40 fix(player-assist): ignore tab press during remote control 2022-05-20 22:26:22 +02:00
Alex Kaminskii
d30b663195 fix(player): use append() instead of add(); update lastMessageTime inside distributeMessage 2022-05-20 19:05:32 +02:00
Taha Yassine Kraiem
b5540998d9 feat(api): metrics changed web vitals description
feat(db): changed metric's monitoring essentials category to web vitals
2022-05-20 11:20:25 +02:00
rjshrjndrn
40e0296c8a docs(machine setup): for contribution
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-05-19 19:05:53 +02:00
rjshrjndrn
9526ea68aa chore(helm): clickhouse use kafka zookeeper
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-05-19 18:27:00 +02:00
Alex Kaminskii
18a09cf66b fix(frontend/player): codefix 2022-05-19 17:52:49 +02:00
Alex Kaminskii
cecd57fc50 fix(frontend): maintain string mobsUrl for the smooth version transition 2022-05-19 17:29:15 +02:00
Rajesh Rajendran
97094107fe
GH actions for ee (#488)
* chore(actions): changing installation method

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* fix(actions): inject ee license key and image tag

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* fix(actions): image tag override

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-05-19 15:04:01 +00:00
Rajesh Rajendran
2e332f3447
Openreplay install, without kubernetes and related tools (#487)
* chore(init script): option to skip k8s/tools installation

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* chore(install): init script GNU sed detection (see the sketch after this entry)

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-05-19 13:21:37 +00:00
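The GNU sed detection mentioned above usually comes down to probing sed --version, which BSD sed (macOS) does not support; a minimal sketch, assuming gsed is the expected fallback binary and with an illustrative edit:

    # GNU sed prints a version banner; BSD sed errors out on --version.
    if sed --version >/dev/null 2>&1; then
      SED=sed
    else
      SED=gsed   # assumed fallback on macOS (brew install gnu-sed)
    fi
    "$SED" -i 's/domainName: ""/domainName: "example.com"/' vars.yaml   # illustrative substitution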
sylenien
d08280b709 fix(ui): fix text size 2022-05-19 15:11:17 +02:00
sylenien
c82efbeb6b fix(ui): bug fixes for dashboard 2022-05-19 15:11:17 +02:00
sylenien
580641efe8 fix(ui): fix css files 2022-05-19 15:11:17 +02:00
sylenien
bb4aafa1df fix(ui): code review 2022-05-19 15:11:17 +02:00
sylenien
d9a01b3380 feat(ui): move create metric button to the grid 2022-05-19 15:11:17 +02:00
sylenien
69002865d6 fix(ui): remove unnecessary code 2022-05-19 15:11:17 +02:00
sylenien
cde2a6e2d5 fix(ui): fix metric category max height calculation 2022-05-19 15:11:17 +02:00
sylenien
eaf162c5f8 fix(ui): minor metric hover styles fixes 2022-05-19 15:11:17 +02:00
sylenien
e8f7e2e9be feat(ui): make edit metric title hoverable and clickable, create plain text button for future usage 2022-05-19 15:11:17 +02:00
Taha Yassine Kraiem
6df7bbe7d1 feat(api): fixed changed SearchSession payload schema 2022-05-18 20:02:09 +02:00
Taha Yassine Kraiem
4a55d93f52 feat(api): changed SearchSession payload schema 2022-05-18 19:43:18 +02:00
Taha Yassine Kraiem
2544a3e166 feat(api): centralized 'order'
feat(api): transform 'order' casing
2022-05-18 19:08:08 +02:00
ShiKhu
babe654329 Merge branch 'assist-fixes' into dev 2022-05-18 17:55:25 +02:00
ShiKhu
84b99616bd chore(tracker-assist): fix package number string 2022-05-18 17:43:31 +02:00
ShiKhu
8b0ad960e9 Merge branch 'assist-fixes' of github.com:openreplay/openreplay into assist-fixes 2022-05-18 17:29:26 +02:00
ShiKhu
613bed393a fix(player): take into account first message time 2022-05-18 17:29:17 +02:00
Shekar Siri
dce918972f change(ui) - enable annotation on call or remote 2022-05-18 17:27:11 +02:00
ShiKhu
9294748352 fix(frontend-assist): encapsulate toggleAnnotation + fix inverted booleans 2022-05-18 17:17:11 +02:00
ShiKhu
f8bbc16208 fix(frontend-player): apply set_input_value on blur if focused (for the case of remote control) 2022-05-18 16:49:36 +02:00
ShiKhu
b283b89bd2 feat(tracker-assist): annotation available on RemoteControl as well 2022-05-18 16:01:18 +02:00
Shekar Siri
437341257c change(ui) - enable annotation without call 2022-05-18 15:49:58 +02:00
Alex Kaminskii
1f80cb4e64 Merge branch 'small-player-refactoring' into dev 2022-05-18 15:25:34 +02:00
Alex Kaminskii
bd6dba4781 fix(tracker-assist): ConfirmWindow: override default button style & separate defaults 2022-05-18 14:50:56 +02:00
rjshrjndrn
bda652ccab fix(helm): service name
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-05-18 14:05:08 +02:00
Alex Kaminskii
4c8751944c style(tracker-*): do not store lock files under the npm package dirs 2022-05-18 13:57:38 +02:00
Alexander Zavorotynskiy
a9071b68f2 chore(bash): added heuristics service to local_build.sh 2022-05-18 13:30:21 +02:00
Alexander Zavorotynskiy
8d0d05c2cf fix(backend/heuristics): fixed panic in performanceAggr message encoding 2022-05-18 13:28:00 +02:00
Shekar Siri
ab2a800b7c merged vault (from main) and resolved conflicts 2022-05-18 12:52:26 +02:00
Shekar Siri
9ea1992b34 merged vault (from main) and resolved conflicts 2022-05-18 12:51:26 +02:00
rjshrjndrn
336046a443 chore(helm): common naming convention 2022-05-18 12:39:13 +02:00
Rajesh Rajendran
5041bcb177
GH action with new format (#479)
* chore(actions): update GH Actions to new deployment format

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* fix(actions): yaml indentation

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* fix(actions): image override

helm doesn't support multipart yaml files.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* chore(action): enable docker image cache

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* chore(actions): chalice deployment

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* chore(actions): check previous image prior to deploying

Because we're using an umbrella chart and not storing the image tags
deployed from actions anywhere, a new deployment would reset all
previously deployed image tags. To avoid that, we have to fetch the
existing image tags and feed them into the current deployment (see the
sketch after this entry).

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* fix(actions): static path for the build input

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* chore(actions): adding dev branch to chalice deployment

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-05-17 20:07:02 +00:00
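A minimal sketch of that fetch-and-feed step (the app namespace and the /foss/ registry path are taken from the workflow shown later in this diff; the override format mirrors its helm values):

    # Collect the image tags currently running in the cluster...
    kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" \
      | tr -s '[:space:]' '\n' | sort -u | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
    # ...and turn them into a helm values override so untouched services keep their tags.
    while IFS=: read -r name tag; do
      printf '%s:\n  image:\n    tag: %s\n' "$name" "$tag"
    done < /tmp/image_tag.txt > /tmp/image_override.yaml
    helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml | kubectl apply -f -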
sylenien
631f427f67 fix(ui): fix typo 2022-05-17 18:20:34 +02:00
sylenien
fcd79a6fb7 fix(ui): fix weird scrolling 2022-05-17 18:20:34 +02:00
sylenien
ff02248900 fix(ui): remove additional divider line, fix zindex for menu 2022-05-17 18:20:34 +02:00
sylenien
8e58e68607 fix(ui): fix descr position, fix card click, rm unneeded code 2022-05-17 17:57:03 +02:00
sylenien
07d2c0427d feat(ui): add hovers to metric widgets for dashboard and template comps 2022-05-17 17:57:03 +02:00
sylenien
c1af05fbbe fix(ui): fix metrics table width, fix reload pathing 2022-05-17 17:57:03 +02:00
sylenien
25f792edc2 fix(ui): fix dashboard pinning and state updating; fix menu items naming 2022-05-17 17:57:03 +02:00
sylenien
9960927ca0 fix(ui): fix show more button for metric adding 2022-05-17 17:57:03 +02:00
sylenien
14ef2cba26 fix(ui): fix tooltip behavior on a metric widget 2022-05-17 17:57:03 +02:00
sylenien
30add0fd3c fix(ui): remove console.log 2022-05-17 17:57:03 +02:00
sylenien
749093d9f6 fix(ui): fix routing in dashboards 2022-05-17 17:57:03 +02:00
rjshrjndrn
d7037771ed chore(helmcharts): adding heuristics service
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-05-17 12:15:52 +02:00
sylenien
0617e8b485 fix(ui): fix icons generation script to properly trim svg attrs 2022-05-17 11:19:42 +02:00
sylenien
536bacad64 fix(ui): remove conflicting code 2022-05-17 11:19:42 +02:00
sylenien
a3aecae559 fix(ui): fix text on widget updates, remove back link on metrics page and add breadcrumbs 2022-05-17 11:19:42 +02:00
sylenien
33ff7914be fix(ui): remove state updates on unmounted components 2022-05-17 11:19:42 +02:00
sylenien
cba53fa284 fix(ui): fix comments in icons.js 2022-05-17 11:19:42 +02:00
sylenien
a2c999ccef fix(ui): fix weird wording, bug with svg 2022-05-17 11:19:42 +02:00
sylenien
fec8b9e13c fix(ui): fix clipping bg on hover, fix side menu header 2022-05-17 11:19:42 +02:00
sylenien
8a29f8ecf4 fix(ui): wording, keys warnings 2022-05-17 11:19:42 +02:00
sylenien
bb33ea4714 fix(ui): lettering fixes, move create dashboard to sidebar title 2022-05-17 11:19:42 +02:00
sylenien
5c7f6c1738 fix(ui): fix messages for empty dashboard 2022-05-17 11:19:42 +02:00
rjshrjndrn
f66e780596 chore(ingress): changing proxy body size to 10m
otherwise nginx will reject the request, and AWS will report it as a CORS issue.

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-05-16 21:14:14 +02:00
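The equivalent one-off change can be sketched as an ingress annotation (the ingress name and namespace here are assumptions; the chart wires this through its values):

    # Raise nginx's request-body limit so larger payloads aren't rejected with 413.
    kubectl annotate ingress openreplay -n app \
      nginx.ingress.kubernetes.io/proxy-body-size=10m --overwrite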
Alex Kaminskii
8ff0249814 moved a few files to TS 2022-05-16 20:25:15 +02:00
Alex Kaminskii
d495f1aa97 style(player): few renamings 2022-05-16 20:02:28 +02:00
Alex Kaminskii
7929a8ceca refactor(player): move lists to separate file + renaming 2022-05-16 19:55:45 +02:00
Shekar Siri
82ad650f0c feat(ui) - sessions - widget 2022-05-16 19:11:53 +02:00
Alexander Zavorotynskiy
94c56205b9 fix(backend): added error log in kafka producer 2022-05-16 18:56:43 +02:00
Taha Yassine Kraiem
f054b130bf feat(DB): changed metrics category from Overview to Monitoring Essentials 2022-05-16 18:24:16 +02:00
Shekar Siri
acdd3596bc fix(ui) - assist reload remove click event params 2022-05-16 17:05:23 +02:00
Shekar Siri
f1d94c5378 feat(ui) - errors - widget 2022-05-16 17:04:10 +02:00
Shekar Siri
baa6c916dc feat(ui) - funnels - filter dropdowns to select 2022-05-16 16:26:16 +02:00
Alex Kaminskii
76d9d41ed8 refactor(backend/storage): pass FileSplitSize as env var 2022-05-16 15:31:37 +02:00
Alex Kaminskii
7d7dcc2910 chore (backend): Dockerfile.bundle update 2022-05-16 15:28:56 +02:00
rjshrjndrn
3b704b9430 fix(helm): nginx forward L7 headers from LB
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-05-16 15:02:59 +02:00
Alexander Zavorotynskiy
f681e85e50 fix(backend): removed temp Dockerfile from cmd dir 2022-05-16 15:01:12 +02:00
sylenien
90299d9d6d fix(ui): remove console.log 2022-05-16 14:53:40 +02:00
sylenien
09056c103c feat(ui): moved saved search list to new modal component 2022-05-16 14:53:40 +02:00
sylenien
69b75f5b56 fix(ui): various small ui fixes for buttons 2022-05-16 14:53:40 +02:00
sylenien
e5842939db feat(ui): added success notif for settings updates 2022-05-16 14:53:40 +02:00
sylenien
387e946dfe fix(ui): removed popup from country flag component; added bg to toggler head 2022-05-16 14:53:40 +02:00
sylenien
e1ae8bae20 fix(ui): removed popup from country flag component 2022-05-16 14:53:40 +02:00
sylenien
ac7a70ea62 fix(ui): fixed search bar to properly include sections and filters 2022-05-16 14:53:40 +02:00
Alexander Zavorotynskiy
0028de2d11 fix(backend): removed service dir from Dockerfile 2022-05-16 14:50:32 +02:00
Alex K
22606aca62
Merge pull request #475 from openreplay/integrations_refactoring
Integrations to Go standard file structure
2022-05-16 14:48:00 +02:00
Alex Kaminskii
e26ce2e963 fix(backend-ee/clickhouse): do not insert method & status into resources as they are always unknown 2022-05-16 14:41:44 +02:00
Alexander Zavorotynskiy
3511534cbb feat(backend/integrations): service refactoring 2022-05-16 14:41:12 +02:00
Shekar Siri
97da3f5c1c Merge branch 'dev' of github.com:openreplay/openreplay into funnels 2022-05-16 14:38:00 +02:00
Shekar Siri
ebbc9cc984 fix(ui) - alert form footer bg 2022-05-16 14:18:52 +02:00
Alex K
d996b14ff8
Merge pull request #474 from openreplay/assets_refactoring
* Assets to Go standard file structure
2022-05-16 14:18:04 +02:00
Alexander Zavorotynskiy
3449440de3 feat(backend/assets): service refactoring 2022-05-16 14:12:37 +02:00
Shekar Siri
d36d4862cf fix(ui) - chart y axis numbers 2022-05-16 14:12:16 +02:00
Alexander
356bf32bfc
Merge pull request #473 from openreplay/storage_refactoring
Storage refactoring
2022-05-16 12:56:22 +02:00
Alexander Zavorotynskiy
24f64af95a feat(backend/storage): service refactoring 2022-05-16 12:52:43 +02:00
rjshrjndrn
4175d98be8 chore(helmcharts): adding clickhouse operator helm chart
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-05-16 12:52:43 +02:00
rjshrjndrn
c94f4074bb chore(helm): make ingress-nginx installation not mandatory.
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-05-16 10:48:35 +02:00
Taha Yassine Kraiem
c84d39d38e feat(api): upgraded python base image
feat(alerts): upgraded python base image
2022-05-13 19:15:31 +02:00
Shekar Siri
05bd61b83c feat(ui) - funnels - issues sort 2022-05-13 19:03:01 +02:00
Alexander
a69f3f0e83
Merge pull request #459 from openreplay/ender_refactoring
Ender refactoring
2022-05-13 17:32:14 +02:00
Alexander Zavorotynskiy
44dae11886 feat(backend/db): fixed ee version 2022-05-13 17:00:09 +02:00
Shekar Siri
3baa3ea9a5 Merge branch 'dev' of github.com:openreplay/openreplay into funnels 2022-05-13 16:05:30 +02:00
Shekar Siri
f6bd3dd0dd feat(ui) - funnels - details wip 2022-05-13 16:05:11 +02:00
sylenien
58397e6c6c fix(ui): remove attrs from icons 2022-05-13 15:56:26 +02:00
sylenien
d72f47b296 fix(ui): fix prop types for sessionitem 2022-05-13 15:56:26 +02:00
sylenien
cea1218613 fix(ui): fix typo in comment 2022-05-13 15:56:26 +02:00
sylenien
e7a31dbb8c fix(ui): refactor sessionitem 2022-05-13 15:56:26 +02:00
sylenien
19178807f8 fix(ui): fixed sessionitem types and removed withrouter connection 2022-05-13 15:56:26 +02:00
sylenien
be13ff5f7a fix(ui): fixed sessionitem and timezone dropdown connection to mobx 2022-05-13 15:56:26 +02:00
sylenien
0d00cf0349 more search field fixes 2022-05-13 15:56:26 +02:00
sylenien
64ebd07e57 added toggler disabled colors, visibility default values, no items warning text to search field 2022-05-13 15:56:26 +02:00
sylenien
1529510d25 removed browser autocomplete from filter inputs, removed timezone picker from main page 2022-05-13 15:56:26 +02:00
sylenien
1f0fb80024 fix category and filters naming, add underline to username hover, fix small bugs 2022-05-13 15:56:26 +02:00
sylenien
7005c046b8 fix ui bugs in session tab 2022-05-13 15:56:26 +02:00
Taha Yassine Kraiem
839f4c0927 feat(api): fixed CH client format 2022-05-13 15:49:17 +02:00
Shekar Siri
fd68f7b576 feat(ui) - funnels - path changes 2022-05-13 13:07:35 +02:00
Shekar Siri
87f42b4a79 feat(ui) - funnels - sub details view 2022-05-13 12:35:55 +02:00
Shekar Siri
95f0649ccb Merge branch 'dev' of github.com:openreplay/openreplay into funnels 2022-05-13 11:27:53 +02:00
Shekar Siri
923fce97fb change(ui) - validation based on ee 2022-05-13 11:26:36 +02:00
Shekar Siri
8c7cbbb189 Merge branch 'dev' of github.com:openreplay/openreplay into funnels 2022-05-13 11:22:53 +02:00
Shekar Siri
34947d8ef7 change(ui) - validation based on ee 2022-05-13 11:19:23 +02:00
Shekar Siri
a88763d0eb feat(ui) - funnels - issues list 2022-05-13 11:13:55 +02:00
Alexander
4ac3da241e
Merge branch 'dev' into ender_refactoring 2022-05-12 17:16:45 +02:00
Taha Yassine Kraiem
ac4e32aba3 feat(DB): changed partition expression 2022-05-12 16:24:58 +02:00
Shekar Siri
6a1e72e1d5 feat(ui) - funnels - issues list 2022-05-12 15:15:56 +02:00
Alex K
4f1a686787
Merge pull request #453 from openreplay/sink_refactor
Sink refactor

* structure -> Go standards
* move URLrewrite to sink (frees http from encoding/decoding)
2022-05-12 15:03:32 +02:00
Shekar Siri
8584cf74cb feat(ui) - funnels - tailwind config 2022-05-12 14:32:04 +02:00
Shekar Siri
f40403f4e9 feat(ui) - funnels - issues filters 2022-05-12 14:31:44 +02:00
Shekar Siri
8e1bb95c84 feat(ui) - funnels - issues filters 2022-05-12 12:55:34 +02:00
Alexander Zavorotynskiy
ae6af1449c feat(backend-db/heuristics): fixed errors in main files 2022-05-12 09:59:09 +02:00
ShiKhu
883f7eab8a fix(tracker-assist):3.5.9: enforce peerjs@1.3.2 2022-05-11 23:53:19 +02:00
Alex Kaminskii
88bec7ab60 refactor(): separate ieBuilder, peBuilder & networkIssueDetector from EventMapper 2022-05-11 21:27:18 +02:00
Alex Kaminskii
6d2bfc0e77 fix(backend/internals): builder codefix 2022-05-11 21:25:41 +02:00
Alex Kaminskii
85b87e17df refactor(backend/internals): builder: message order & timestamps check 2022-05-11 21:14:23 +02:00
Alex Kaminskii
a6f8857b89 refactor-fix(backend-heuristics/db): create handlers for each session separately 2022-05-11 19:04:14 +02:00
Alex Kaminskii
e65fa58ab5 refactor(backend-internal): dry builder 2022-05-11 18:51:55 +02:00
Alex Kaminskii
17d477fc43 fix+style(tracker):3.5.11 fix build & files structure 2022-05-11 18:27:18 +02:00
Alex Kaminskii
396f1a16af refactor(backend-sink): producer close timeout value to config 2022-05-11 17:36:35 +02:00
Shekar Siri
a8fbf50a49 feat(ui) - funnels - issues sort 2022-05-11 17:12:33 +02:00
Alexander Zavorotynskiy
c77966a789 feat(backend/handlers): removed unix timestamp from header builders 2022-05-11 16:45:31 +02:00
Alex Kaminskii
ebc0185806 style(backend-http): split core and local imports 2022-05-11 16:37:49 +02:00
Alex Kaminskii
6456520587 style(backend-http): use UnixMilli 2022-05-11 16:36:31 +02:00
Alex Kaminskii
a241830e71 refactor(backend-sink/http): move URLrewriter to sink 2022-05-11 16:32:27 +02:00
Alex Kaminskii
ea2d13dac6 chore(backend-sink): sink in cmd 2022-05-11 16:27:01 +02:00
Shekar Siri
467e99d90d merge dev changes 2022-05-11 16:16:59 +02:00
Shekar Siri
f5d154bfc2 npm updates 2022-05-11 16:13:26 +02:00
Shekar Siri
bec68eb375 feat(ui) - funnels - issues 2022-05-11 16:13:01 +02:00
Shekar Siri
34425b8b02 feat(ui) - funnels - check for table and funnel 2022-05-10 19:25:08 +02:00
Shekar Siri
9ecb4c369e feat(ui) - funnels - step percentage dynamic 2022-05-10 18:03:19 +02:00
Shekar Siri
0174e265e0 feat(ui) - funnels - step percentage 2022-05-10 17:50:50 +02:00
Shekar Siri
d619083a85 feat(ui) - funnels - step toggle 2022-05-10 17:37:27 +02:00
Shekar Siri
3bb5d9fabd feat(ui) - funnels - graph 2022-05-10 17:17:15 +02:00
Taha Yassine Kraiem
efec096ffe feat(api): fixed sourcemaps reader endpoint 2022-05-10 17:13:19 +02:00
Shekar Siri
5f64bc90dc
Merge pull request #452 from openreplay/audit
Audit Trails
2022-05-10 17:08:21 +02:00
Alexander Zavorotynskiy
26e23d594f feat(backend/handlers): refactored web and ios message handlers 2022-05-10 15:40:55 +02:00
Alexander Zavorotynskiy
47007eb9d7 feat(backend/db): prepared db service for refactoring 2022-05-10 14:11:41 +02:00
Shekar Siri
89db14bdbf feat(ui) - funnels - merged dev 2022-05-10 12:10:18 +02:00
Shekar Siri
eae31eac37 feat(ui) - audit - date 2022-05-09 19:34:59 +02:00
Shekar Siri
5b627c17ec feat(ui) - audit - daterange with new component 2022-05-09 19:02:07 +02:00
Alexander Zavorotynskiy
ca9d76624b feat(backend/heuristics): message handlers refactoring 2022-05-09 16:51:10 +02:00
Taha Yassine Kraiem
d3be02fd9d feat(api): user trail limit changed 2022-05-09 15:30:28 +02:00
Alex Kaminskii
ae4c6e5cad refactor(backend-sink): to Go standards 2022-05-07 23:52:48 +02:00
Alex Kaminskii
324ee0890e chore(backend): enforce amd64 build (for builds on ARM Macs) 2022-05-07 23:21:30 +02:00
Alex Kaminskii
71d50e5a44 refactor(backend-messages):predefined TypeID() on message type 2022-05-07 23:19:49 +02:00
Alex Kaminskii
e4d45e88f9 chore(backend): name entrypoint container 2022-05-07 23:00:00 +02:00
Alex Kaminskii
6ab6d342c0 chore(backend-heuristics/db): remove redundant code 2022-05-07 22:16:15 +02:00
Alex Kaminskii
62b36bd70a refactor(backend-heuristics): bring all sub-builders to a common interface 2022-05-07 21:29:40 +02:00
Alex Kaminskii
432c0da4e2 chore(backend-heuristics): Remove redundant lines 2022-05-07 15:10:46 +02:00
Shekar Siri
b97c32ad56 feat(ui) - audit - filters 2022-05-06 18:54:25 +02:00
Taha Yassine Kraiem
7625eb9f8c feat(alerts): changed Dockerfile.alerts 2022-05-06 18:36:46 +02:00
Taha Yassine Kraiem
202bf73456 feat(api): vault support 2022-05-06 18:30:59 +02:00
Taha Yassine Kraiem
516e5b0446 feat(api): changed search user trails by username 2022-05-06 17:43:55 +02:00
Shekar Siri
7feaa376e6 feat(ui) - audit - list and search 2022-05-06 17:31:35 +02:00
Taha Yassine Kraiem
d8078c220d feat(api): search user trails by username
feat(db): index to search user trails by username
2022-05-06 17:27:43 +02:00
Alexander Zavorotynskiy
8c432b8ba3 Removed extra logic from heuristics 2022-05-06 16:39:29 +02:00
Alexander Zavorotynskiy
967034a89c Created the first version of the heuristics service with the same logic as the old ender 2022-05-06 16:12:06 +02:00
Taha Yassine Kraiem
ec445f88c7 feat(api): EE updated authorizer 2022-05-06 15:09:50 +02:00
Alexander Zavorotynskiy
2b3728d8da Finished refactoring for session ender service 2022-05-06 12:21:43 +02:00
Taha Yassine Kraiem
0c84c89b4f feat(api): changed Dockerfile 2022-05-06 12:16:07 +02:00
Taha Yassine Kraiem
50b476316a feat(api): changed root path 2022-05-06 12:11:38 +02:00
Taha Yassine Kraiem
ac9c10393f feat(api): fixed return createdAt with the list of users 2022-05-06 12:07:03 +02:00
Shekar Siri
f12931491a feat(ui) - audit - base views 2022-05-06 12:06:55 +02:00
Taha Yassine Kraiem
ef0edebb3d feat(DB): traces/trails index
feat(api): get all possible traces/trails actions
feat(api): search traces/trails by actions
feat(api): search traces/trails by user
2022-05-06 11:56:03 +02:00
Alex Kaminskii
a99f684b83 feat(frontend-player): sequential (pre)load for multifile sessions 2022-05-06 00:10:08 +02:00
Alex Kaminskii
2d96705930 readme(tracker): build-readme for js packages 2022-05-06 00:07:07 +02:00
Taha Yassine Kraiem
21d8d28a79 feat(api): return createdAt with the list of users 2022-05-05 20:42:08 +02:00
Taha Yassine Kraiem
acaef59590 feat(DB): traces/trails index
feat(api): get all traces/trails
2022-05-05 20:37:37 +02:00
Taha Yassine Kraiem
172508dcf3 feat(DB): changed sessions_metadata sort expression 2022-05-05 18:21:47 +02:00
Alexander Zavorotynskiy
f4212d6eaa Split ender into 2 services (ender and heuristics) 2022-05-05 17:37:05 +02:00
Shekar Siri
bd07d42084 Merge branch 'user-list' into dev 2022-05-05 17:07:36 +02:00
Shekar Siri
b77771ccca change(ui) - user list checking for enterprise 2022-05-05 17:07:16 +02:00
Shekar Siri
17aec98298
Merge pull request #447 from openreplay/user-list
UI Improvements - User, Projects
2022-05-05 16:32:20 +02:00
Shekar Siri
bb1afdc76e fix(ui) - errors viewed state 2022-05-05 16:29:55 +02:00
Alexander Zavorotynskiy
700ef0dcc6 Made standard project layout for ender service 2022-05-05 15:26:10 +02:00
rjshrjndrn
b843aba08a chore(init): create directory if it doesn't exist
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-05-05 15:03:31 +02:00
Shekar Siri
59fe8245dd change(ui) - user list tooltip 2022-05-05 14:41:10 +02:00
Shekar Siri
c4b371507d change(ui) - project delete moved to modal 2022-05-05 14:31:53 +02:00
Shekar Siri
c3bb5aeb07 change(ui) - sites search 2022-05-05 13:27:06 +02:00
Shekar Siri
55b64128f1 change(ui) - sites checking for exists 2022-05-05 13:16:06 +02:00
Shekar Siri
dfce25709a change(ui) - user limit check and other fixes 2022-05-05 13:11:20 +02:00
Alex K
50bbd0fe98
Merge pull request #445 from openreplay/db_refactoring
Db refactoring
2022-05-05 12:50:40 +02:00
Alex Kaminskii
b6d57b45ab chore(github-workflow): backend 2022-05-05 12:49:44 +02:00
Alexander Zavorotynskiy
88306e1a6a fix (backend): removed unused import in storage module 2022-05-05 12:04:23 +02:00
Alexander Zavorotynskiy
74756b2409 Refactoring of the db service 2022-05-05 10:46:48 +02:00
Alexander Zavorotynskiy
c050394116 Moved service configs to config module 2022-05-05 10:23:36 +02:00
Shekar Siri
918f7e9d86 change(ui) - user delete 2022-05-05 10:09:16 +02:00
Alexander Zavorotynskiy
167d1e117e Made correct project layout 2022-05-05 09:45:38 +02:00
Alex Kaminskii
6314fcbbef feat(backend): 2-file backward-compatible format 2022-05-04 20:33:52 +02:00
Shekar Siri
330992736d change(ui) - user form role filter 2022-05-04 19:35:04 +02:00
Shekar Siri
7e655d513c change(ui) - userlist form 2022-05-04 18:53:43 +02:00
Shekar Siri
5ef382c9b8 Merge branch 'dev' of github.com:openreplay/openreplay into user-list 2022-05-04 16:42:45 +02:00
Shekar Siri
c15648eaf7 change(ui) - tailwind justify-self 2022-05-04 16:41:44 +02:00
Shekar Siri
c97fe55cda change(ui) - users list - form 2022-05-04 16:41:29 +02:00
Alexander Zavorotynskiy
5b7c479f4d Refactoring in stats logger 2022-05-04 16:17:57 +02:00
Taha Yassine Kraiem
42f3b6d018 feat(api): changed Dockerfile 2022-05-04 14:50:09 +02:00
Taha Yassine Kraiem
8d5cf84d90 feat(api): changed Dockerfile 2022-05-04 14:36:52 +02:00
Alexander Zavorotynskiy
74672d4321 Removed unused code 2022-05-04 14:36:42 +02:00
Taha Yassine Kraiem
47be240dfb feat(api): changed Dockerfile 2022-05-04 14:32:17 +02:00
Alexander Zavorotynskiy
9cdb1e8ab7 Removed global pg connection 2022-05-04 14:21:15 +02:00
Taha Yassine Kraiem
36b466665c feat(api): changed replay file URL 2022-05-04 13:14:25 +02:00
Shekar Siri
424b071eaf change(ui) - users list - search and pagination 2022-05-04 13:14:20 +02:00
Taha Yassine Kraiem
f90a25c75a feat(api): EE updated dependencies 2022-05-04 13:10:48 +02:00
Taha Yassine Kraiem
144e58adef feat(api): updated dependencies 2022-05-04 13:00:40 +02:00
Shekar Siri
7d08e32d25 change(ui) - users list 2022-05-04 12:27:44 +02:00
Alexander Zavorotynskiy
a4278aec23 [http] removed extra log in main.go
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-05-04 12:17:33 +02:00
rjshrjndrn
767fa31026 chore(actions): include cmd dir for build
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-05-04 12:09:46 +02:00
rjshrjndrn
b72a332cd0 chore(build): returning from function
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-05-04 11:38:23 +02:00
Alex Kaminskii
82084c9717 fix(backend): encapsulate build_service in build.sh 2022-05-04 11:23:38 +02:00
Alexander
15563ca582
Merge pull request #442 from openreplay/http_refactoring
Http service refactoring
2022-05-04 10:10:07 +02:00
rjshrjndrn
42e6a63e44 docs(vagrant): create user account comment
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-05-03 21:43:46 +02:00
Alexander Zavorotynskiy
414fbee962 Fixed build.sh file 2022-05-03 13:55:56 +02:00
Shekar Siri
0bbd27e856
Merge pull request #441 from openreplay/session-settings
Session settings
2022-05-03 12:50:54 +02:00
Shekar Siri
690577407d feat(ui) - session settings - cleanup 2022-05-03 12:34:58 +02:00
Alexander Zavorotynskiy
b2456e9ac6 Removed debug lines from build.sh 2022-05-03 12:33:43 +02:00
Shekar Siri
18e932e5e9 feat(ui) - session settings - capture rate api update 2022-05-03 12:26:42 +02:00
Alexander Zavorotynskiy
18d18164b3 Added temporary hack for http service building 2022-05-03 10:42:24 +02:00
Alexander Zavorotynskiy
d02ecba354 Added missing return statements 2022-05-02 17:38:53 +02:00
Alexander Zavorotynskiy
5ec46ad753 Moved assets cache logic 2022-05-02 17:36:33 +02:00
Shekar Siri
87f76f484d feat(ui) - session settings - changed state 2022-05-02 16:31:19 +02:00
Shekar Siri
d2f168f667 remote pull dev 2022-05-02 16:27:53 +02:00
Shekar Siri
02c39199d2 feat(ui) - session settings - changed state 2022-05-02 16:26:05 +02:00
Shekar Siri
e421511db8 feat(ui) - session settings - libs 2022-05-02 16:07:12 +02:00
Shekar Siri
a1b656dc6a feat(ui) - session settings - ui and state 2022-05-02 16:07:00 +02:00
Alexander Zavorotynskiy
69cabaecfe Moved the rest of the code to separate dirs 2022-05-02 15:28:51 +02:00
Alexander Zavorotynskiy
df722761e5 Moved server to a separate dir 2022-05-02 15:20:10 +02:00
Alexander Zavorotynskiy
c347198fc1 Moved http handlers to a separate dir 2022-05-02 15:05:45 +02:00
Alexander Zavorotynskiy
f01ef3ea03 Made a correct project structure for http service 2022-05-02 14:47:13 +02:00
Alexander Zavorotynskiy
66e190221d Removed global objects (moved service initialization into serviceBuilder) 2022-05-02 14:36:02 +02:00
Taha Yassine Kraiem
b87e601f27 chore(vagrant): Changed development.md
chore(vagrant): Added dev setup-scripts for EE
2022-05-02 11:33:39 +02:00
Rajesh Rajendran
867f92dfc7 Update development.md 2022-04-30 18:07:45 +02:00
Taha Yassine Kraiem
6807dc8ce1 feat(api): EE optimized get error details 2022-04-29 18:52:29 +02:00
Alexander Zavorotynskiy
b0bb5bd922 Moved configuration to a separate file 2022-04-29 17:23:20 +02:00
Alexander Zavorotynskiy
10edeb6e2d Refactoring of http handlers 2022-04-29 16:53:28 +02:00
Shekar Siri
27641279b4
Update dashboard.ts 2022-04-29 16:10:03 +02:00
Taha Yassine Kraiem
423f416015 feat(api): fixed description optional value 2022-04-29 16:08:38 +02:00
Shekar Siri
4f1a476c65
Update dashboard.ts 2022-04-29 16:02:14 +02:00
Shekar Siri
6a855a947c
Merge pull request #435 from openreplay/reporting
Dashboard - Report Generation
2022-04-29 15:36:06 +02:00
Shekar Siri
8986f395b1 feat(ui) - dashboard - new libs 2022-04-29 14:27:23 +02:00
Taha Yassine Kraiem
84a43bcd8b feat(api): fixed description default value 2022-04-29 14:16:36 +02:00
Shekar Siri
7c2539ec93 feat(ui) - dashboard - report 2022-04-29 14:16:29 +02:00
Taha Yassine Kraiem
fff8f75fd0 feat(api): changed Dockerfile 2022-04-29 14:06:06 +02:00
Taha Yassine Kraiem
63e897594f feat(db): EE fixed widget-size for upgrade 2022-04-29 14:06:06 +02:00
rjshrjndrn
31f9e49673 chore(vagrant): Adding development readme
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-04-29 14:06:06 +02:00
ShiKhu
6412c2a862 fix(backend/storage): codefix 2022-04-29 14:06:06 +02:00
ShiKhu
1e5deed0d5 feat(backend/storage):split files into 2 2022-04-29 14:06:06 +02:00
Alexander Zavorotynskiy
0bbf8012f1 fix(backend): added missing return in error case 2022-04-29 14:06:06 +02:00
Alexander Zavorotynskiy
9856e36f44 fix(backend): fixed possible panic in the defer 2022-04-29 14:06:06 +02:00
ShiKhu
d699341676 fix(backend): Dockerfile.bundle fix 2022-04-29 14:06:06 +02:00
ShiKhu
fbb039f0c7 fix(backend):pprof launch addr: use port only 2022-04-29 14:06:06 +02:00
ShiKhu
1b93f8a453 gofmt 2022-04-29 14:06:06 +02:00
rjshrjndrn
bdb6a75d7c fix(nginx): proper X-Forwarded-For proxying
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-04-29 14:06:06 +02:00
Rajesh Rajendran
4f44edeb39 Vagrant for local contribution (#434)
* chore(vagrant): initial vagrantfile
* chore(vagrant): adding instructions after installation
* chore(vagrant): Adding vagrant user to docker group
* chore(vagrant): use local docker daemon for k3s
* chore(vagrant): fix comment
* chore(vagrant): adding hostname in /etc/hosts
* chore(vagrant): fix doc
* chore(vagrant): limiting cpu
* chore(frontend): initialize dev env
* chore(docker): adding dockerignore
* chore(dockerfile): using cache to speed up builds
* chore(dockerignore): update
* chore(docker): build optimizations
* chore(build): all components build option
* chore(build): utilities build fix
* chore(script): remove debug message
* chore(vagrant): provision using stable branch always

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-04-29 14:06:06 +02:00
Taha Yassine Kraiem
8fa4632ee4 feat(alerts): changed build script 2022-04-29 14:06:06 +02:00
Shekar Siri
59f51cde26 feat(ui) - dashboard - report 2022-04-29 13:56:20 +02:00
Taha Yassine Kraiem
35b9d6ebaf feat(api): s3 helper detect environment
feat(api): support description for dashboards
2022-04-29 13:40:57 +02:00
Shekar Siri
a87717ba8c feat(ui) - dashboard - report 2022-04-29 13:37:30 +02:00
Taha Yassine Kraiem
122705b4c7 feat(db): EE fixed widget-size for upgrade 2022-04-29 13:19:11 +02:00
Shekar Siri
878c742c2f feat(ui) - dashboard - report 2022-04-29 12:32:34 +02:00
rjshrjndrn
89ba052d41 chore(vagrant): Adding development readme
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-04-29 12:17:01 +02:00
Alexander Zavorotynskiy
dc69131499 Deleted commented (unused) code 2022-04-29 11:22:00 +02:00
Shekar Siri
b096ac73d1 feat(ui) - dashboard - report 2022-04-29 10:02:56 +02:00
ShiKhu
cb01c3cb28 fix(backend/storage): codefix 2022-04-28 19:21:45 +02:00
ShiKhu
6d4800feea feat(backend/storage):split files into 2 2022-04-28 19:14:23 +02:00
Alexander Zavorotynskiy
de3ba9c7f6 fix(backend): added missing return in error case 2022-04-28 18:02:56 +02:00
Alexander Zavorotynskiy
3132db6205 fix(backend): fixed possible panic in the defer 2022-04-28 17:55:56 +02:00
ShiKhu
c2d1bcdb35 Merge branch 'backend' into dev 2022-04-28 17:03:25 +02:00
ShiKhu
60d0d42d69 fix(backend): Dockerfile.bundle fix 2022-04-28 17:02:53 +02:00
ShiKhu
d64cd12eb6 fix(backend):pprof launch addr: use port only 2022-04-28 17:02:13 +02:00
Taha Yassine Kraiem
1a73b978dc feat(db): EE remove pages_count column 2022-04-28 15:29:45 +02:00
Taha Yassine Kraiem
b8367d87f8 feat(api): EE fixed No of pages count widget 2022-04-28 14:59:22 +02:00
Taha Yassine Kraiem
aef7026034 feat(api): EE fixed No of pages count widget 2022-04-28 14:59:05 +02:00
Taha Yassine Kraiem
51c75657ab feat(api): EE fixed No of pages count widget 2022-04-28 14:08:23 +02:00
Taha Yassine Kraiem
f8f70b1006 feat(api): EE fixed No of pages count widget 2022-04-28 14:07:28 +02:00
rjshrjndrn
94adb69f6b fix(nginx): proper X-Forwarded-For proxying
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-04-27 15:00:54 +02:00
Rajesh Rajendran
f3b6bda163
Vagrant for local contribution (#434)
* chore(vagrant): initial vagrantfile
* chore(vagrant): adding instructions after installation
* chore(vagrant): Adding vagrant user to docker group
* chore(vagrant): use local docker daemon for k3s
* chore(vagrant): fix comment
* chore(vagrant): adding hostname in /etc/hosts
* chore(vagrant): fix doc
* chore(vagrant): limiting cpu
* chore(frontend): initialize dev env
* chore(docker): adding dockerignore
* chore(dockerfile): using cache to speed up builds
* chore(dockerignore): update
* chore(docker): build optimizations
* chore(build): all components build option
* chore(build): utilities build fix
* chore(script): remove debug message
* chore(vagrant): provision using stable branch always

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2022-04-27 12:54:40 +00:00
Taha Yassine Kraiem
72bee8e894 feat(api): round time metrics 2022-04-26 18:10:25 +02:00
Taha Yassine Kraiem
55b504cc22 feat(alerts): changed build script 2022-04-26 16:30:48 +02:00
Taha Yassine Kraiem
f57bf7205c feat(assist): EE fixed geoip-unknown ip 2022-04-26 12:47:18 +02:00
Taha Yassine Kraiem
1832567beb feat(assist): fixed geoip-unknown ip 2022-04-26 12:44:07 +02:00
ShiKhu
43669c082c gofmt 2022-04-25 23:09:52 +02:00
Shekar Siri
53ac4c3321 Merge branch 'dev' of github.com:openreplay/openreplay into funnels 2022-04-25 12:07:19 +02:00
Shekar Siri
fb44ff70fe feat(ui) - funnels wip 2022-04-22 19:07:01 +02:00
Shekar Siri
eeebe11915 Merge branch 'dev' of github.com:openreplay/openreplay into funnels 2022-04-22 16:10:44 +02:00
Shekar Siri
4907c1b26c feat(ui) - funnels listing 2022-04-22 14:47:38 +02:00
Shekar Siri
a287a9ca47 Merge branch 'dev' of github.com:openreplay/openreplay into funnels 2022-04-22 12:47:03 +02:00
Shekar Siri
3882128d4a feat(ui) - funnels - wip 2022-04-21 16:52:01 +02:00
Shekar Siri
45e39c8749 feat(ui) - funnels - wip 2022-04-20 18:05:10 +02:00
7011 changed files with 293031 additions and 433854 deletions


@ -1,33 +0,0 @@
---
name: Bug report
about: Report an issue and help improve OpenReplay
title: ''
labels: bug
assignees: estradino
---
**Describe the issue**
A short description of what the issue is.

**Steps to reproduce the issue**
1. Step 1
2. Step 2
3. You got it :)

**Expected behavior**
What you expected to happen.

**Screenshots**
If possible, that would make our lives easier.

**OpenReplay Environment**
- Frontend stack: [e.g. React/Axios/MobX, Next]
- OpenReplay version: [e.g. 1.6.0]
- Tracker version: [e.g. 3.5.10]
- Plugins used: [e.g. Fetch, Redux]
- Cloud provider: [e.g. AWS, GCP]
- System specs: [e.g. 2vCPU/16Gb with 50Gb of storage]

**Additional context**
Add additional information you think might be relevant for this behavior.


@ -1,11 +0,0 @@
blank_issues_enabled: true
contact_links:
  - name: Documentation Request
    url: https://github.com/openreplay/documentation/issues
    about: Report a mistake or suggest anything we might be missing in the docs
  - name: Discussions
    url: https://github.com/openreplay/openreplay/discussions
    about: Ask and answer various questions on GitHub Discussions
  - name: Join our Slack Community
    url: https://slack.openreplay.com
    about: Take the discussion further by joining our community on Slack


@ -1,10 +0,0 @@
---
name: Feature request
about: Suggest an idea or a feature to improve OpenReplay
title: ''
labels: feature-request
assignees: estradino
---
Briefly describe the feature you would like to see shipped with the upcoming versions of OpenReplay, the use-case (very important to us) and the alternative solutions you've considered so far.


@ -1,74 +0,0 @@
name: 'Update Keys'
description: 'Updates keys'
inputs:
  domain_name:
    required: true
    description: 'Domain Name'
  license_key:
    required: true
    description: 'License Key'
  jwt_secret:
    required: true
    description: 'JWT Secret'
  jwt_spot_secret:
    required: true
    description: 'JWT spot Secret'
  minio_access_key:
    required: true
    description: 'MinIO Access Key'
  minio_secret_key:
    required: true
    description: 'MinIO Secret Key'
  pg_password:
    required: true
    description: 'PostgreSQL Password'
  registry_url:
    required: true
    description: 'Registry URL'
runs:
  using: "composite"
  steps:
    - name: Downloading yq
      run: |
        VERSION="v4.42.1"
        sudo wget https://github.com/mikefarah/yq/releases/download/${VERSION}/yq_linux_amd64 -O /usr/bin/yq
        sudo chmod +x /usr/bin/yq
      shell: bash
    - name: "Updating OSS secrets"
      run: |
        cd scripts/helmcharts/
        vars=(
          "ASSIST_JWT_SECRET:.global.assistJWTSecret"
          "ASSIST_KEY:.global.assistKey"
          "DOMAIN_NAME:.global.domainName"
          "JWT_REFRESH_SECRET:.chalice.env.JWT_REFRESH_SECRET"
          "JWT_SECRET:.global.jwtSecret"
          "JWT_SPOT_REFRESH_SECRET:.chalice.env.JWT_SPOT_REFRESH_SECRET"
          "JWT_SPOT_SECRET:.global.jwtSpotSecret"
          "LICENSE_KEY:.global.enterpriseEditionLicense"
          "MINIO_ACCESS_KEY:.global.s3.accessKey"
          "MINIO_SECRET_KEY:.global.s3.secretKey"
          "PG_PASSWORD:.postgresql.postgresqlPassword"
          "REGISTRY_URL:.global.openReplayContainerRegistry"
        )
        # Write each env var into its yq path in vars.yaml, in place.
        for var in "${vars[@]}"; do
          IFS=":" read -r env_var yq_path <<<"$var"
          yq e -i "${yq_path} = strenv(${env_var})" vars.yaml
        done
      shell: bash
      env:
        ASSIST_JWT_SECRET: ${{ inputs.assist_jwt_secret }}
        ASSIST_KEY: ${{ inputs.assist_key }}
        DOMAIN_NAME: ${{ inputs.domain_name }}
        JWT_REFRESH_SECRET: ${{ inputs.jwt_refresh_secret }}
        JWT_SECRET: ${{ inputs.jwt_secret }}
        JWT_SPOT_REFRESH_SECRET: ${{ inputs.jwt_spot_refresh_secret }}
        JWT_SPOT_SECRET: ${{ inputs.jwt_spot_secret }}
        LICENSE_KEY: ${{ inputs.license_key }}
        MINIO_ACCESS_KEY: ${{ inputs.minio_access_key }}
        MINIO_SECRET_KEY: ${{ inputs.minio_secret_key }}
        PG_PASSWORD: ${{ inputs.pg_password }}
        REGISTRY_URL: ${{ inputs.registry_url }}
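To make the loop above concrete: for the DOMAIN_NAME pair it expands to a single in-place yq edit, e.g. (the value is illustrative):

    # Writes the DOMAIN_NAME env var into .global.domainName of vars.yaml.
    DOMAIN_NAME=openreplay.example.com yq e -i '.global.domainName = strenv(DOMAIN_NAME)' vars.yaml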


@ -1,12 +0,0 @@
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    target-branch: "dev"
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
    target-branch: "dev"


@ -1,162 +0,0 @@
# This action will push the alerts changes to aws
on:
  workflow_dispatch:
    inputs:
      skip_security_checks:
        description: "Skip Security checks if there is an unfixable vuln or error. Value: true/false"
        required: false
        default: "false"
  push:
    branches:
      - dev
      - api-*
    paths:
      - "ee/api/**"
      - "api/**"
      - "!api/.gitignore"
      - "!api/routers"
      - "!api/app.py"
      - "!api/*-dev.sh"
      - "!api/requirements.txt"
      - "!api/requirements-crons.txt"
      - "!ee/api/.gitignore"
      - "!ee/api/routers"
      - "!ee/api/app.py"
      - "!ee/api/*-dev.sh"
      - "!ee/api/requirements.txt"
      - "!ee/api/requirements-crons.txt"
name: Build and Deploy Alerts EE
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          # We need to diff with the old commit
          # to see which workers got changed.
          fetch-depth: 2
      - uses: ./.github/composite-actions/update-keys
        with:
          assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
          assist_key: ${{ secrets.ASSIST_KEY }}
          domain_name: ${{ secrets.EE_DOMAIN_NAME }}
          jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
          jwt_secret: ${{ secrets.EE_JWT_SECRET }}
          jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
          jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
          license_key: ${{ secrets.EE_LICENSE_KEY }}
          minio_access_key: ${{ secrets.EE_MINIO_ACCESS_KEY }}
          minio_secret_key: ${{ secrets.EE_MINIO_SECRET_KEY }}
          pg_password: ${{ secrets.EE_PG_PASSWORD }}
          registry_url: ${{ secrets.OSS_REGISTRY_URL }}
        name: Update Keys
      - name: Docker login
        run: |
          docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
      - uses: azure/k8s-set-context@v1
        with:
          method: kubeconfig
          kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
        id: setcontext
      # Caching docker images
      - uses: satackey/action-docker-layer-caching@v0.0.11
        # Ignore the failure of a step and avoid terminating the job.
        continue-on-error: true
      - name: Building and Pushing api image
        id: build-image
        env:
          DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
          IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}-ee
          ENVIRONMENT: staging
        run: |
          skip_security_checks=${{ github.event.inputs.skip_security_checks }}
          cd api
          PUSH_IMAGE=0 bash -x ./build_alerts.sh ee
          if [[ "$skip_security_checks" == "true" ]]; then
            echo "Skipping Security Checks"
          else
            curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
            images=("alerts")
            for image in ${images[*]}; do
              # Fail on HIGH/CRITICAL vulnerabilities that have an available fix.
              ./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG || exit $?
            done
          fi
          images=("alerts")
          for image in ${images[*]}; do
            docker push $DOCKER_REPO/$image:$IMAGE_TAG
          done
      - name: Creating old image input
        run: |
          # Create yaml with the image tags currently deployed in the cluster.
          kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
            tr -s '[:space:]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
          echo > /tmp/image_override.yaml
          for line in `cat /tmp/image_tag.txt`; do
            image_array=($(echo "$line" | tr ':' '\n'))
            cat <<EOF >> /tmp/image_override.yaml
          ${image_array[0]}:
            image:
              # We have to strip off the -ee suffix, as helm will append it.
              tag: `echo ${image_array[1]} | cut -d '-' -f 1`
          EOF
          done
      - name: Deploy to kubernetes
        run: |
          cd scripts/helmcharts/
          # Update the changed image tag
          sed -i "/alerts/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
          cat /tmp/image_override.yaml
          # Deploy command
          mkdir -p /tmp/charts
          mv openreplay/charts/{ingress-nginx,alerts,quickwit,connector} /tmp/charts/
          rm -rf openreplay/charts/*
          mv /tmp/charts/* openreplay/charts/
          helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks --kube-version=$k_version | kubectl apply -f -
        env:
          DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
          # We're not passing the -ee flag, because helm will add that.
          IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
          ENVIRONMENT: staging
      - name: Alert slack
        if: ${{ failure() }}
        uses: rtCamp/action-slack-notify@v2
        env:
          SLACK_CHANNEL: ee
          SLACK_TITLE: "Failed ${{ github.workflow }}"
          SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
          SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
          SLACK_USERNAME: "OR Bot"
          SLACK_MESSAGE: "Build failed :bomb:"
      # - name: Debug Job
      #   # if: ${{ failure() }}
      #   uses: mxschmitt/action-tmate@v3
      #   env:
      #     DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
      #     IMAGE_TAG: ${{ github.sha }}-ee
      #     ENVIRONMENT: staging
      #   with:
      #     limit-access-to-actor: true


@ -1,160 +0,0 @@
# This action will push the alerts changes to aws
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip Security checks if there is a unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
- api-*
paths:
- "api/**"
- "!api/.gitignore"
- "!api/routers"
- "!api/app.py"
- "!api/*-dev.sh"
- "!api/requirements.txt"
- "!api/requirements-crons.txt"
name: Build and Deploy Alerts
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.OSS_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.OSS_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.OSS_LICENSE_KEY }}
minio_access_key: ${{ secrets.OSS_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.OSS_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.OSS_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
# Caching docker images
- uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
- name: Building and Pushing Alerts image
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd api
PUSH_IMAGE=0 bash -x ./build_alerts.sh
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("alerts")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
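# When skip_security_checks != "true", the block above downloads trivy and
# scans each freshly built image, failing the job on unfixed HIGH/CRITICAL
# findings. (Note the trailing echo also fires after a clean scan, a quirk
# of the (A || B) && C chaining.)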
images=("alerts")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
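# The pipeline above lists every container image running in the app
# namespace, de-duplicates the list, keeps only images from the /foss/
# registry path, and strips the registry prefix, leaving one "name:tag"
# entry per line in /tmp/image_tag.txt.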
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
tag: ${image_array[1]}
EOF
done
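# For illustration, with /tmp/image_tag.txt containing "chalice:v1.10.0"
# and "alerts:v1.9.2", the loop above would append:
#
#   chalice:
#     image:
#       tag: v1.10.0
#   alerts:
#     image:
#       tag: v1.9.2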
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
## Update secrets
sed -i "s#openReplayContainerRegistry.*#openReplayContainerRegistry: \"${{ secrets.OSS_REGISTRY_URL }}\"#g" vars.yaml
sed -i "s/postgresqlPassword: \"changeMePassword\"/postgresqlPassword: \"${{ secrets.OSS_PG_PASSWORD }}\"/g" vars.yaml
sed -i "s/accessKey: \"changeMeMinioAccessKey\"/accessKey: \"${{ secrets.OSS_MINIO_ACCESS_KEY }}\"/g" vars.yaml
sed -i "s/secretKey: \"changeMeMinioPassword\"/secretKey: \"${{ secrets.OSS_MINIO_SECRET_KEY }}\"/g" vars.yaml
sed -i "s/jwt_secret: \"SetARandomStringHere\"/jwt_secret: \"${{ secrets.OSS_JWT_SECRET }}\"/g" vars.yaml
sed -i "s/domainName: \"\"/domainName: \"${{ secrets.OSS_DOMAIN_NAME }}\"/g" vars.yaml
# Update changed image tag
sed -i "/alerts/{n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,alerts,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
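# The shuffle above leaves only the listed subcharts in place: the four are
# parked in /tmp/charts, everything else is deleted, and the four are moved
# back. helm template then renders just those charts locally (ingress-nginx
# stays disabled), and piping into kubectl apply updates the objects without
# touching any helm release state.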
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks | kubectl apply -n app -f -
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
- name: Alert slack
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: foss
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
SLACK_USERNAME: "OR Bot"
SLACK_MESSAGE: "Build failed :bomb:"
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true


@@ -1,27 +1,12 @@
# This action will push the chalice changes to aws
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip security checks if there is an unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
- api-*
paths:
- "ee/api/**"
- "api/**"
- "!api/.gitignore"
- "!api/app_alerts.py"
- "!api/*-dev.sh"
- "!api/requirements-*.txt"
- "!ee/api/.gitignore"
- "!ee/api/app_alerts.py"
- "!ee/api/app_crons.py"
- "!ee/api/*-dev.sh"
- "!ee/api/requirements-*.txt"
- ee/api/**
- api/**
name: Build and Deploy Chalice EE
@@ -31,129 +16,88 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.EE_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.EE_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.EE_LICENSE_KEY }}
minio_access_key: ${{ secrets.EE_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.EE_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.EE_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
- name: Docker login
run: |
docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
# Caching docker images
- uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
# Caching docker images
- uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
- name: Building and Pushing api image
id: build-image
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}-ee
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd api
PUSH_IMAGE=0 bash -x ./build.sh ee
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("chalice")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("chalice")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
- name: Building and Pushing api image
id: build-image
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
IMAGE_TAG: ${{ github.sha }}-ee
ENVIRONMENT: staging
run: |
cd api
PUSH_IMAGE=1 bash build.sh ee
- name: Creating old image input
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
# We have to strip off the -ee, as helm will append it.
tag: `echo ${image_array[1]} | cut -d '-' -f 1`
EOF
done
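# Example: a pod running "chalice:v1.10.0-ee" is recorded here as
# "tag: v1.10.0"; helm re-appends the -ee suffix when rendering the EE
# charts.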
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
# We have to strip off the -ee, as helm will append it.
tag: `echo ${image_array[1]} | cut -d '-' -f 1`
EOF
done
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
# Update changed image tag
sed -i "/chalice/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
## Update secrets
sed -i "s/postgresqlPassword: \"changeMePassword\"/postgresqlPassword: \"${{ secrets.EE_PG_PASSWORD }}\"/g" vars.yaml
sed -i "s/accessKey: \"changeMeMinioAccessKey\"/accessKey: \"${{ secrets.EE_MINIO_ACCESS_KEY }}\"/g" vars.yaml
sed -i "s/secretKey: \"changeMeMinioPassword\"/secretKey: \"${{ secrets.EE_MINIO_SECRET_KEY }}\"/g" vars.yaml
sed -i "s/jwt_secret: \"SetARandomStringHere\"/jwt_secret: \"${{ secrets.EE_JWT_SECRET }}\"/g" vars.yaml
sed -i "s/domainName: \"\"/domainName: \"${{ secrets.EE_DOMAIN_NAME }}\"/g" vars.yaml
sed -i "s/enterpriseEditionLicense: \"\"/enterpriseEditionLicense: \"${{ secrets.EE_LICENSE_KEY }}\"/g" vars.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,chalice,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks --kube-version=$k_version | kubectl apply -f -
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# We're not passing the -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
# Update changed image tag
sed -i "/chalice/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
- name: Alert slack
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: ee
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
SLACK_USERNAME: "OR Bot"
SLACK_MESSAGE: "Build failed :bomb:"
cat /tmp/image_override.yaml
# Deploy command
helm upgrade --install openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set skipMigration=true --no-hooks
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# We're not passing the -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.sha }}
ENVIRONMENT: staging
# - name: Debug Job
# # if: ${{ failure() }}
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true
#


@@ -1,21 +1,12 @@
# This action will push the chalice changes to aws
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip security checks if there is an unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
- api-*
- api-v1.5.5
paths:
- "api/**"
- "!api/.gitignore"
- "!api/app_alerts.py"
- "!api/*-dev.sh"
- "!api/requirements-*.txt"
- api/**
name: Build and Deploy Chalice
@@ -25,126 +16,85 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.OSS_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.OSS_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.OSS_LICENSE_KEY }}
minio_access_key: ${{ secrets.OSS_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.OSS_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.OSS_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- name: Docker login
run: |
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
# Caching docker images
- uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
# Caching docker images
- uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
- name: Building and Pushing api image
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd api
PUSH_IMAGE=0 bash -x ./build.sh
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("chalice")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("chalice")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
- name: Building and Pushing api image
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.sha }}
ENVIRONMENT: staging
run: |
cd api
PUSH_IMAGE=1 bash build.sh
- name: Creating old image input
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
tag: ${image_array[1]}
EOF
done
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
tag: ${image_array[1]}
EOF
done
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
# Update changed image tag
sed -i "/chalice/{n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
## Update secrets
sed -i "s/postgresqlPassword: \"changeMePassword\"/postgresqlPassword: \"${{ secrets.OSS_PG_PASSWORD }}\"/g" vars.yaml
sed -i "s/accessKey: \"changeMeMinioAccessKey\"/accessKey: \"${{ secrets.OSS_MINIO_ACCESS_KEY }}\"/g" vars.yaml
sed -i "s/secretKey: \"changeMeMinioPassword\"/secretKey: \"${{ secrets.OSS_MINIO_SECRET_KEY }}\"/g" vars.yaml
sed -i "s/jwt_secret: \"SetARandomStringHere\"/jwt_secret: \"${{ secrets.OSS_JWT_SECRET }}\"/g" vars.yaml
sed -i "s/domainName: \"\"/domainName: \"${{ secrets.OSS_DOMAIN_NAME }}\"/g" vars.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,chalice,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks | kubectl apply -n app -f -
env:
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
# Update changed image tag
sed -i "/chalice/{n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
- name: Alert slack
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: foss
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
SLACK_USERNAME: "OR Bot"
SLACK_MESSAGE: "Build failed :bomb:"
cat /tmp/image_override.yaml
# Deploy command
helm upgrade --install openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set skipMigration=true --no-hooks
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.sha }}
ENVIRONMENT: staging
# - name: Debug Job
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true
# - name: Debug Job
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}
# ENVIRONMENT: staging
#


@@ -1,134 +0,0 @@
# This action will push the assist changes to aws
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip security checks if there is an unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
paths:
- "ee/assist/**"
- "assist/**"
- "!assist/.gitignore"
- "!assist/*-dev.sh"
name: Build and Deploy Assist EE
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.EE_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.EE_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.EE_LICENSE_KEY }}
minio_access_key: ${{ secrets.EE_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.EE_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.EE_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Building and Pushing Assist image
id: build-image
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}-ee
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd assist
PUSH_IMAGE=0 bash -x ./build.sh ee
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("assist")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("assist")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
# We have to strip off the -ee, as helm will append it.
tag: `echo ${image_array[1]} | cut -d '-' -f 1`
EOF
done
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
# Update changed image tag
sed -i "/assist/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,assist,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks --kube-version=$k_version | kubectl apply -f -
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# We're not passing the -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true


@@ -1,122 +0,0 @@
# This action will push the assist changes to aws
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip security checks if there is an unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
paths:
- "ee/assist-server/**"
name: Build and Deploy Assist-Server EE
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.EE_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.EE_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.EE_LICENSE_KEY }}
minio_access_key: ${{ secrets.EE_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.EE_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.EE_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Building and Pushing Assist-Server image
id: build-image
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}-ee
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd assist-server
PUSH_IMAGE=0 bash -x ./build.sh ee
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("assist-server")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("assist-server")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
# We have to strip off the -ee, as helm will append it.
tag: `echo ${image_array[1]} | cut -d '-' -f 1`
EOF
done
- name: Deploy to kubernetes
run: |
pwd
cd scripts/helmcharts/
# Update changed image tag
sed -i "/assist-server/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,assist-server,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks --kube-version=$k_version | kubectl apply -f -
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# We're not passing the -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging


@@ -1,162 +0,0 @@
# This action will push the assist-stats changes to aws
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip security checks if there is an unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
paths:
- "assist-stats/**"
- "!assist-stats/.gitignore"
- "!assist-stats/*-dev.sh"
- "!assist-stats/requirements-*.txt"
name: Build and Deploy Assist Stats EE
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.OSS_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.OSS_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.OSS_LICENSE_KEY }}
minio_access_key: ${{ secrets.OSS_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.OSS_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.OSS_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
# Caching docker images
- uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
- name: Building and Pushing assist-stats image
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}-ee
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd assist-stats
PUSH_IMAGE=0 bash -x ./build.sh ee
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("assist-stats")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("assist-stats")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
### Enterprise code deployment
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontextee
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.EE_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.EE_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.EE_LICENSE_KEY }}
minio_access_key: ${{ secrets.EE_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.EE_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.EE_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Deploy to kubernetes ee
run: |
cd scripts/helmcharts/
cat <<EOF>/tmp/image_override.yaml
assist-stats:
image:
# We have to strip off the -ee, as helm will append it.
tag: ${IMAGE_TAG}
EOF
export IMAGE_TAG=${IMAGE_TAG}
# Update changed image tag
yq '.utilities.apiCrons.assiststats.image.tag = strenv(IMAGE_TAG)' -i /tmp/image_override.yaml
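# yq edits the override file in place; strenv(IMAGE_TAG) reads the exported
# environment variable, so the nested utilities.apiCrons.assiststats tag is
# set without line-counting sed expressions.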
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,assist-stats,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks | kubectl apply -f -
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# We're not passing the -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
- name: Alert slack
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: foss
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
SLACK_USERNAME: "OR Bot"
SLACK_MESSAGE: "Build failed :bomb:"
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true


@@ -1,133 +0,0 @@
# This action will push the assist changes to aws
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip security checks if there is an unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
paths:
- "assist/**"
- "!assist/.gitignore"
- "!assist/*-dev.sh"
name: Build and Deploy Assist
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.OSS_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.OSS_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.OSS_LICENSE_KEY }}
minio_access_key: ${{ secrets.OSS_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.OSS_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.OSS_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Building and Pushing Assist image
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd assist
PUSH_IMAGE=0 bash -x ./build.sh
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("assist")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("assist")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
# We have to strip off the -ee, as helm will append it.
tag: `echo ${image_array[1]} | cut -d '-' -f 1`
EOF
done
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
# Update changed image tag
sed -i "/assist/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,assist,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks --kube-version=$k_version | kubectl apply -f -
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
# We're not passing the -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true


@@ -1,159 +0,0 @@
# This action will push the crons changes to aws
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip security checks if there is an unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
- api-*
paths:
- "ee/api/**"
- "api/**"
- "!api/.gitignore"
- "!api/app.py"
- "!api/app_alerts.py"
- "!api/*-dev.sh"
- "!api/requirements.txt"
- "!api/requirements-alerts.txt"
- "!ee/api/.gitignore"
- "!ee/api/app.py"
- "!ee/api/app_alerts.py"
- "!ee/api/*-dev.sh"
- "!ee/api/requirements.txt"
- "!ee/api/requirements-crons.txt"
name: Build and Deploy Crons EE
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.EE_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.EE_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.EE_LICENSE_KEY }}
minio_access_key: ${{ secrets.EE_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.EE_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.EE_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
# Caching docker images
- uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
- name: Building and Pushing api image
id: build-image
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}-ee
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd api
PUSH_IMAGE=0 bash -x ./build_crons.sh ee
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("crons")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("crons")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
env:
# We're not passing the -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
run: |
cd scripts/helmcharts/
cat <<EOF>/tmp/image_override.yaml
image: &image
tag: "${IMAGE_TAG}"
utilities:
apiCrons:
assiststats:
image: *image
report:
image: *image
sessionsCleaner:
image: *image
projectsStats:
image: *image
fixProjectsStats:
image: *image
EOF
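# The "&image" anchor is defined once and reused via "*image", so every
# cron entry (assiststats, report, sessionsCleaner, projectsStats,
# fixProjectsStats) picks up the same tag from a single definition.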
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,utilities,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks --kube-version=$k_version | kubectl apply -f -
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
ENVIRONMENT: staging
- name: Alert slack
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: ee
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
SLACK_USERNAME: "OR Bot"
SLACK_MESSAGE: "Build failed :bomb:"
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true


@@ -59,22 +59,16 @@ jobs:
EOF
done
- uses: ./.github/composite-actions/update-keys
with:
domain_name: ${{ secrets.OSS_DOMAIN_NAME }}
license_key: ${{ secrets.OSS_LICENSE_KEY }}
jwt_secret: ${{ secrets.OSS_JWT_SECRET }}
minio_access_key: ${{ secrets.OSS_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.OSS_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.OSS_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Deploy to kubernetes foss
if: ${{ steps.check-migration.outputs.skip_migration_oss != 'true' }}
run: |
cd scripts/helmcharts/
## Update secrets
sed -i "s/postgresqlPassword: \"changeMePassword\"/postgresqlPassword: \"${{ secrets.OSS_PG_PASSWORD }}\"/g" vars.yaml
sed -i "s/accessKey: \"changeMeMinioAccessKey\"/accessKey: \"${{ secrets.OSS_MINIO_ACCESS_KEY }}\"/g" vars.yaml
sed -i "s/secretKey: \"changeMeMinioPassword\"/secretKey: \"${{ secrets.OSS_MINIO_SECRET_KEY }}\"/g" vars.yaml
sed -i "s/jwt_secret: \"SetARandomStringHere\"/jwt_secret: \"${{ secrets.OSS_JWT_SECRET }}\"/g" vars.yaml
sed -i "s/domainName: \"\"/domainName: \"${{ secrets.OSS_DOMAIN_NAME }}\"/g" vars.yaml
cat /tmp/image_override.yaml
@@ -120,21 +114,21 @@ jobs:
EOF
done
- uses: ./.github/composite-actions/update-keys
with:
domain_name: ${{ secrets.EE_DOMAIN_NAME }}
license_key: ${{ secrets.EE_LICENSE_KEY }}
jwt_secret: ${{ secrets.EE_JWT_SECRET }}
minio_access_key: ${{ secrets.EE_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.EE_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.EE_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Resetting vars file
run: |
git checkout -- scripts/helmcharts/vars.yaml
- name: Deploy to kubernetes ee
run: |
cd scripts/helmcharts/
## Update secrets
sed -i "s/postgresqlPassword: \"changeMePassword\"/postgresqlPassword: \"${{ secrets.EE_PG_PASSWORD }}\"/g" vars.yaml
sed -i "s/accessKey: \"changeMeMinioAccessKey\"/accessKey: \"${{ secrets.EE_MINIO_ACCESS_KEY }}\"/g" vars.yaml
sed -i "s/secretKey: \"changeMeMinioPassword\"/secretKey: \"${{ secrets.EE_MINIO_SECRET_KEY }}\"/g" vars.yaml
sed -i "s/jwt_secret: \"SetARandomStringHere\"/jwt_secret: \"${{ secrets.EE_JWT_SECRET }}\"/g" vars.yaml
sed -i "s/domainName: \"\"/domainName: \"${{ secrets.EE_DOMAIN_NAME }}\"/g" vars.yaml
sed -i "s/enterpriseEditionLicense: \"\"/enterpriseEditionLicense: \"${{ secrets.EE_LICENSE_KEY }}\"/g" vars.yaml
cat /tmp/image_override.yaml
# Deploy command
helm upgrade --install openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --atomic --set forceMigration=true --set dbMigrationUpstreamBranch=${IMAGE_TAG}
@@ -144,13 +138,12 @@ jobs:
IMAGE_TAG: ${{ github.sha }}
ENVIRONMENT: staging
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true
# - name: Debug Job
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
# AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
# AWS_REGION: eu-central-1
# AWS_S3_BUCKET_NAME: ${{ secrets.AWS_S3_BUCKET_NAME }}


@@ -1,85 +0,0 @@
name: Frontend Dev Deployment
on: workflow_dispatch
# Disable previous workflows for this action.
concurrency:
group: ${{ github.workflow }} #-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Cache node modules
uses: actions/cache@v1
with:
path: node_modules
key: ${{ runner.OS }}-build-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.OS }}-build-
${{ runner.OS }}-
- uses: ./.github/composite-actions/update-keys
with:
domain_name: ${{ secrets.DEV_DOMAIN_NAME }}
license_key: ${{ secrets.DEV_LICENSE_KEY }}
jwt_secret: ${{ secrets.DEV_JWT_SECRET }}
minio_access_key: ${{ secrets.DEV_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.DEV_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.DEV_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.DEV_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Building and Pushing frontend image
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
run: |
set -x
cd frontend
mv .env.sample .env
docker run --rm -v /etc/passwd:/etc/passwd -u `id -u`:`id -g` -v $(pwd):/home/${USER} -w /home/${USER} --name node_build node:20-slim /bin/bash -c "yarn && yarn build"
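# Mounting /etc/passwd and running as `id -u`:`id -g` makes the container
# build as the host user, so yarn's output in the bind-mounted source tree
# is not owned by root.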
# https://github.com/docker/cli/issues/1134#issuecomment-613516912
DOCKER_BUILDKIT=1 docker build --target=cicd -t $DOCKER_REPO/frontend:${IMAGE_TAG} .
docker tag $DOCKER_REPO/frontend:${IMAGE_TAG} $DOCKER_REPO/frontend:${IMAGE_TAG}-ee
docker push $DOCKER_REPO/frontend:${IMAGE_TAG}
docker push $DOCKER_REPO/frontend:${IMAGE_TAG}-ee
- name: Deploy to kubernetes foss
run: |
cd scripts/helmcharts/
set -x
cat <<EOF>>/tmp/image_override.yaml
frontend:
image:
tag: ${IMAGE_TAG}
EOF
# Update changed image tag
sed -i "/frontend/{n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,frontend,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks | kubectl apply -n app -f -
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}


@@ -1,4 +1,4 @@
name: Frontend Foss Deployment
name: Frontend FOSS Deployment
on:
workflow_dispatch:
push:
@@ -7,7 +7,7 @@ on:
paths:
- frontend/**
# Disable previous workflows for this action.
concurrency:
concurrency:
group: ${{ github.workflow }} #-${{ github.ref }}
cancel-in-progress: true
@@ -15,145 +15,155 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Checkout
uses: actions/checkout@v2
- name: Cache node modules
uses: actions/cache@v4
with:
path: |
/home/runner/work/openreplay/openreplay/frontend/node_modules
/home/runner/work/openreplay/openreplay/frontend/.yarn
key: ${{ runner.OS }}-build-${{ hashFiles('frontend/yarn.lock') }}
restore-keys: |
${{ runner.OS }}-build-
${{ runner.OS }}-
- name: Cache node modules
uses: actions/cache@v1
with:
path: node_modules
key: ${{ runner.OS }}-build-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.OS }}-build-
${{ runner.OS }}-
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.OSS_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.OSS_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.OSS_LICENSE_KEY }}
minio_access_key: ${{ secrets.OSS_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.OSS_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.OSS_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
- name: Docker login
run: |
docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Building and Pushing frontend image
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.sha }}
ENVIRONMENT: staging
run: |
set -x
cd frontend
mv .env.sample .env
docker run --rm -v /etc/passwd:/etc/passwd -u `id -u`:`id -g` -v $(pwd):/home/${USER} -w /home/${USER} --name node_build node:14-stretch-slim /bin/bash -c "yarn && yarn build"
# https://github.com/docker/cli/issues/1134#issuecomment-613516912
DOCKER_BUILDKIT=1 docker build --target=cicd -t $DOCKER_REPO/frontend:${IMAGE_TAG} .
docker tag $DOCKER_REPO/frontend:${IMAGE_TAG} $DOCKER_REPO/frontend:${IMAGE_TAG}-ee
docker push $DOCKER_REPO/frontend:${IMAGE_TAG}
docker push $DOCKER_REPO/frontend:${IMAGE_TAG}-ee
- name: Building and Pushing frontend image
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
run: |
set -x
cd frontend
mv .env.sample .env
docker run --rm -v /etc/passwd:/etc/passwd -u `id -u`:`id -g` -v $(pwd):/home/${USER} -w /home/${USER} --name node_build node:20-slim /bin/bash -c "yarn && yarn build"
# https://github.com/docker/cli/issues/1134#issuecomment-613516912
DOCKER_BUILDKIT=1 docker build --target=cicd -t $DOCKER_REPO/frontend:${IMAGE_TAG} .
docker tag $DOCKER_REPO/frontend:${IMAGE_TAG} $DOCKER_REPO/frontend:${IMAGE_TAG}-ee
docker push $DOCKER_REPO/frontend:${IMAGE_TAG}
docker push $DOCKER_REPO/frontend:${IMAGE_TAG}-ee
- name: Creating old image input
run: |
set -x
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
- name: Deploy to kubernetes foss
run: |
cd scripts/helmcharts/
echo > /tmp/image_override.yaml
set -x
cat <<EOF>>/tmp/image_override.yaml
frontend:
image:
tag: ${IMAGE_TAG}
EOF
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
tag: ${image_array[1]}
EOF
done
# Update changed image tag
sed -i "/frontend/{n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
- name: Deploy to kubernetes foss
run: |
cd scripts/helmcharts/
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,frontend,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks | kubectl apply -n app -f -
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
## Update secrets
sed -i "s/postgresqlPassword: \"changeMePassword\"/postgresqlPassword: \"${{ secrets.OSS_PG_PASSWORD }}\"/g" vars.yaml
sed -i "s/accessKey: \"changeMeMinioAccessKey\"/accessKey: \"${{ secrets.OSS_MINIO_ACCESS_KEY }}\"/g" vars.yaml
sed -i "s/secretKey: \"changeMeMinioPassword\"/secretKey: \"${{ secrets.OSS_MINIO_SECRET_KEY }}\"/g" vars.yaml
sed -i "s/jwt_secret: \"SetARandomStringHere\"/jwt_secret: \"${{ secrets.OSS_JWT_SECRET }}\"/g" vars.yaml
sed -i "s/domainName: \"\"/domainName: \"${{ secrets.OSS_DOMAIN_NAME }}\"/g" vars.yaml
### Enterprise code deployment
# Update changed image tag
sed -i "/frontend/{n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontextee
cat /tmp/image_override.yaml
# Deploy command
helm upgrade --install openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --atomic --set skipMigration=true --no-hooks
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.sha }}
ENVIRONMENT: staging
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.EE_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.EE_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.EE_LICENSE_KEY }}
minio_access_key: ${{ secrets.EE_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.EE_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.EE_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Deploy to kubernetes ee
run: |
cd scripts/helmcharts/
cat <<EOF>/tmp/image_override.yaml
frontend:
image:
# We have to strip off the -ee, as helm will append it.
tag: ${IMAGE_TAG}
EOF
### Enterprise code deployment
# Update changed image tag
sed -i "/frontend/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
- name: cleaning old assets
run: |
rm -rf /tmp/image_*
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontextee
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,frontend,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks | kubectl apply -n app -f -
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# We're not passing the -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
- name: Creating old image input
env:
IMAGE_TAG: ${{ github.sha }}
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
# We have to strip off the -ee, as helm will append it.
tag: `echo ${image_array[1]} | cut -d '-' -f 1`
EOF
done
- name: Resetting vars file
run: |
git checkout -- scripts/helmcharts/vars.yaml
- name: Deploy to kubernetes ee
run: |
cd scripts/helmcharts/
## Update secrets
sed -i "s/postgresqlPassword: \"changeMePassword\"/postgresqlPassword: \"${{ secrets.EE_PG_PASSWORD }}\"/g" vars.yaml
sed -i "s/accessKey: \"changeMeMinioAccessKey\"/accessKey: \"${{ secrets.EE_MINIO_ACCESS_KEY }}\"/g" vars.yaml
sed -i "s/secretKey: \"changeMeMinioPassword\"/secretKey: \"${{ secrets.EE_MINIO_SECRET_KEY }}\"/g" vars.yaml
sed -i "s/jwt_secret: \"SetARandomStringHere\"/jwt_secret: \"${{ secrets.EE_JWT_SECRET }}\"/g" vars.yaml
sed -i "s/domainName: \"\"/domainName: \"${{ secrets.EE_DOMAIN_NAME }}\"/g" vars.yaml
sed -i "s/enterpriseEditionLicense: \"\"/enterpriseEditionLicense: \"${{ secrets.EE_LICENSE_KEY }}\"/g" vars.yaml
# Update changed image tag
sed -i "/frontend/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
helm upgrade --install openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set skipMigration=true --no-hooks
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# We're not passing the -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.sha }}
ENVIRONMENT: staging
# - name: Debug Job
# # if: ${{ failure() }}
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true
# AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
# AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
# AWS_REGION: eu-central-1
# AWS_S3_BUCKET_NAME: ${{ secrets.AWS_S3_BUCKET_NAME }}


@@ -1,189 +0,0 @@
# Ref: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions
on:
workflow_dispatch:
inputs:
services:
description: 'Comma-separated names of services to build (in lowercase).'
required: true
default: 'chalice,frontend'
tag:
description: 'Tag to update.'
required: true
type: string
branch:
description: 'Branch to build patches from. Make sure the branch is up to date with the tag; otherwise it will cause missing commits.'
required: true
type: string
name: Build patches from tag, rewrite commit HEAD to older timestamp, and push the tag
jobs:
deploy:
name: Build Patch from old tag
runs-on: ubuntu-latest
env:
DEPOT_TOKEN: ${{ secrets.DEPOT_TOKEN }}
DEPOT_PROJECT_ID: ${{ secrets.DEPOT_PROJECT_ID }}
steps:
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 4
ref: ${{ github.event.inputs.tag }}
- name: Set Remote with GITHUB_TOKEN
run: |
git config --unset http.https://github.com/.extraheader
git remote set-url origin https://x-access-token:${{ secrets.ACTIONS_COMMMIT_TOKEN }}@github.com/${{ github.repository }}.git
- name: Create backup tag with timestamp
run: |
set -e # Exit immediately if a command exits with a non-zero status
TIMESTAMP=$(date +%Y%m%d%H%M%S)
BACKUP_TAG="${{ github.event.inputs.tag }}-backup-${TIMESTAMP}"
echo "BACKUP_TAG=${BACKUP_TAG}" >> $GITHUB_ENV
echo "INPUT_TAG=${{ github.event.inputs.tag }}" >> $GITHUB_ENV
git tag $BACKUP_TAG || { echo "Failed to create backup tag"; exit 1; }
git push origin $BACKUP_TAG || { echo "Failed to push backup tag"; exit 1; }
echo "Created backup tag: $BACKUP_TAG"
# Get the oldest commit date from the last 3 commits in raw format
OLDEST_COMMIT_TIMESTAMP=$(git log -3 --pretty=format:"%at" | tail -1)
echo "Oldest commit timestamp: $OLDEST_COMMIT_TIMESTAMP"
# Add 1 second to the timestamp
NEW_TIMESTAMP=$((OLDEST_COMMIT_TIMESTAMP + 1))
echo "NEW_TIMESTAMP=$NEW_TIMESTAMP" >> $GITHUB_ENV
- name: Setup yq
uses: mikefarah/yq@master
# Configure AWS credentials for the first registry
- name: Configure AWS credentials for RELEASE_ARM_REGISTRY
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_DEPOT_ACCESS_KEY }}
aws-secret-access-key: ${{ secrets.AWS_DEPOT_SECRET_KEY }}
aws-region: ${{ secrets.AWS_DEPOT_DEFAULT_REGION }}
- name: Login to Amazon ECR for RELEASE_ARM_REGISTRY
id: login-ecr-arm
run: |
aws ecr get-login-password --region ${{ secrets.AWS_DEPOT_DEFAULT_REGION }} | docker login --username AWS --password-stdin ${{ secrets.RELEASE_ARM_REGISTRY }}
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin ${{ secrets.RELEASE_OSS_REGISTRY }}
- uses: depot/setup-action@v1
- name: Get HEAD Commit ID
run: echo "HEAD_COMMIT_ID=$(git rev-parse HEAD)" >> $GITHUB_ENV
- name: Define Branch Name
run: echo "BRANCH_NAME=${{inputs.branch}}" >> $GITHUB_ENV
- name: Build
id: build-image
env:
DOCKER_REPO_ARM: ${{ secrets.RELEASE_ARM_REGISTRY }}
DOCKER_REPO_OSS: ${{ secrets.RELEASE_OSS_REGISTRY }}
MSAAS_REPO_CLONE_TOKEN: ${{ secrets.MSAAS_REPO_CLONE_TOKEN }}
MSAAS_REPO_URL: ${{ secrets.MSAAS_REPO_URL }}
MSAAS_REPO_FOLDER: /tmp/msaas
run: |
set -exo pipefail
git config --local user.email "action@github.com"
git config --local user.name "GitHub Action"
git checkout -b $BRANCH_NAME
working_dir=$(pwd)
function image_version(){
local service=$1
chart_path="$working_dir/scripts/helmcharts/openreplay/charts/$service/Chart.yaml"
current_version=$(yq eval '.AppVersion' $chart_path)
new_version=$(echo $current_version | awk -F. '{$NF += 1 ; print $1"."$2"."$3}')
echo $new_version
# yq eval ".AppVersion = \"$new_version\"" -i $chart_path
}
function clone_msaas() {
[ -d $MSAAS_REPO_FOLDER ] || {
git clone -b $INPUT_TAG --recursive https://x-access-token:$MSAAS_REPO_CLONE_TOKEN@$MSAAS_REPO_URL $MSAAS_REPO_FOLDER
cd $MSAAS_REPO_FOLDER
cd openreplay && git fetch origin && git checkout $INPUT_TAG
git log -1
cd $MSAAS_REPO_FOLDER
bash git-init.sh
git checkout
}
}
function build_managed() {
local service=$1
local version=$2
echo building managed
clone_msaas
if [[ $service == 'chalice' ]]; then
cd $MSAAS_REPO_FOLDER/openreplay/api
else
cd $MSAAS_REPO_FOLDER/openreplay/$service
fi
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash build.sh >> /tmp/arm.txt
}
# Checking for backend images
ls backend/cmd >> /tmp/backend.txt
echo Services: "${{ github.event.inputs.services }}"
IFS=',' read -ra SERVICES <<< "${{ github.event.inputs.services }}"
BUILD_SCRIPT_NAME="build.sh"
# Build FOSS
for SERVICE in "${SERVICES[@]}"; do
# Check if service is backend
if grep -q $SERVICE /tmp/backend.txt; then
cd backend
foss_build_args="nil $SERVICE"
ee_build_args="ee $SERVICE"
else
[[ $SERVICE == 'chalice' || $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && cd $working_dir/api || cd $SERVICE
[[ $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && BUILD_SCRIPT_NAME="build_${SERVICE}.sh"
ee_build_args="ee"
fi
version=$(image_version $SERVICE)
echo IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
echo IMAGE_TAG=$version-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
IMAGE_TAG=$version-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
if [[ "$SERVICE" != "chalice" && "$SERVICE" != "frontend" ]]; then
IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
echo IMAGE_TAG=$version DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
else
build_managed $SERVICE $version
fi
cd $working_dir
chart_path="$working_dir/scripts/helmcharts/openreplay/charts/$SERVICE/Chart.yaml"
yq eval ".AppVersion = \"$version\"" -i $chart_path
git add $chart_path
git commit -m "Increment $SERVICE chart version"
done
- name: Change commit timestamp
run: |
# Convert the timestamp to a date format git can understand
NEW_DATE=$(perl -le 'print scalar gmtime($ARGV[0])." +0000"' $NEW_TIMESTAMP)
echo "Setting commit date to: $NEW_DATE"
# Amend the commit with the new date
GIT_COMMITTER_DATE="$NEW_DATE" git commit --amend --no-edit --date="$NEW_DATE"
# Verify the change
git log -1 --pretty=format:"Commit now dated: %cD"
# git tag and push
git tag $INPUT_TAG -f
git push origin $INPUT_TAG -f
# - name: Debug Job
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO_ARM: ${{ secrets.RELEASE_ARM_REGISTRY }}
# DOCKER_REPO_OSS: ${{ secrets.RELEASE_OSS_REGISTRY }}
# MSAAS_REPO_CLONE_TOKEN: ${{ secrets.MSAAS_REPO_CLONE_TOKEN }}
# MSAAS_REPO_URL: ${{ secrets.MSAAS_REPO_URL }}
# MSAAS_REPO_FOLDER: /tmp/msaas
# with:
# limit-access-to-actor: true
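For reference, the "Change commit timestamp" step above can be reproduced locally; the sketch below uses an illustrative timestamp and a hypothetical tag name, not values taken from the workflow.

# Minimal local sketch (illustrative values; v1.2.3 is a hypothetical tag)
NEW_TIMESTAMP=1718000000
NEW_DATE=$(perl -le 'print scalar gmtime($ARGV[0])." +0000"' "$NEW_TIMESTAMP")
GIT_COMMITTER_DATE="$NEW_DATE" git commit --amend --no-edit --date="$NEW_DATE"
git log -1 --pretty=format:"Commit now dated: %cD"
git tag v1.2.3 -f && git push origin v1.2.3 -f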


@ -1,261 +0,0 @@
# Ref: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions
on:
workflow_dispatch:
inputs:
services:
description: 'Comma-separated names of services to build (in lowercase).'
required: true
default: 'chalice,frontend'
name: Build patches from main branch, Raise PR to Main, and Push to tag
jobs:
deploy:
name: Build Patch from main
runs-on: ubuntu-latest
env:
DEPOT_TOKEN: ${{ secrets.DEPOT_TOKEN }}
DEPOT_PROJECT_ID: ${{ secrets.DEPOT_PROJECT_ID }}
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Rebase with main branch, to make sure the code has latest main changes
if: github.ref != 'refs/heads/main'
run: |
git remote -v
git config --global user.email "action@github.com"
git config --global user.name "GitHub Action"
git config --global rebase.autoStash true
git fetch origin main:main
git rebase main
git log -3
- name: Downloading yq
run: |
VERSION="v4.42.1"
sudo wget https://github.com/mikefarah/yq/releases/download/${VERSION}/yq_linux_amd64 -O /usr/bin/yq
sudo chmod +x /usr/bin/yq
# Configure AWS credentials for the first registry
- name: Configure AWS credentials for RELEASE_ARM_REGISTRY
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_DEPOT_ACCESS_KEY }}
aws-secret-access-key: ${{ secrets.AWS_DEPOT_SECRET_KEY }}
aws-region: ${{ secrets.AWS_DEPOT_DEFAULT_REGION }}
- name: Login to Amazon ECR for RELEASE_ARM_REGISTRY
id: login-ecr-arm
run: |
aws ecr get-login-password --region ${{ secrets.AWS_DEPOT_DEFAULT_REGION }} | docker login --username AWS --password-stdin ${{ secrets.RELEASE_ARM_REGISTRY }}
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin ${{ secrets.RELEASE_OSS_REGISTRY }}
- uses: depot/setup-action@v1
env:
DEPOT_TOKEN: ${{ secrets.DEPOT_TOKEN }}
- name: Get HEAD Commit ID
run: echo "HEAD_COMMIT_ID=$(git rev-parse HEAD)" >> $GITHUB_ENV
- name: Define Branch Name
run: echo "BRANCH_NAME=patch/main/${HEAD_COMMIT_ID}" >> $GITHUB_ENV
- name: Set Remote with GITHUB_TOKEN
run: |
git config --unset http.https://github.com/.extraheader
git remote set-url origin https://x-access-token:${{ secrets.ACTIONS_COMMMIT_TOKEN }}@github.com/${{ github.repository }}.git
- name: Build
id: build-image
env:
DOCKER_REPO_ARM: ${{ secrets.RELEASE_ARM_REGISTRY }}
DOCKER_REPO_OSS: ${{ secrets.RELEASE_OSS_REGISTRY }}
MSAAS_REPO_CLONE_TOKEN: ${{ secrets.MSAAS_REPO_CLONE_TOKEN }}
MSAAS_REPO_URL: ${{ secrets.MSAAS_REPO_URL }}
MSAAS_REPO_FOLDER: /tmp/msaas
SERVICES_INPUT: ${{ github.event.inputs.services }}
run: |
#!/bin/bash
set -euo pipefail
# Configuration
readonly WORKING_DIR=$(pwd)
readonly BUILD_SCRIPT_NAME="build.sh"
readonly BACKEND_SERVICES_FILE="/tmp/backend.txt"
# Initialize git configuration
setup_git() {
git config --local user.email "action@github.com"
git config --local user.name "GitHub Action"
git checkout -b "$BRANCH_NAME"
}
# Get and increment image version
image_version() {
local service=$1
local chart_path="$WORKING_DIR/scripts/helmcharts/openreplay/charts/$service/Chart.yaml"
local current_version new_version
current_version=$(yq eval '.AppVersion' "$chart_path")
new_version=$(echo "$current_version" | awk -F. '{$NF += 1; print $1"."$2"."$3}')
echo "$new_version"
}
# Clone MSAAS repository if not exists
clone_msaas() {
if [[ ! -d "$MSAAS_REPO_FOLDER" ]]; then
git clone -b dev --recursive "https://x-access-token:${MSAAS_REPO_CLONE_TOKEN}@${MSAAS_REPO_URL}" "$MSAAS_REPO_FOLDER"
cd "$MSAAS_REPO_FOLDER"
cd openreplay && git fetch origin && git checkout main
git log -1
cd "$MSAAS_REPO_FOLDER"
bash git-init.sh
git checkout
fi
}
# Build managed services
build_managed() {
local service=$1
local version=$2
echo "Building managed service: $service"
clone_msaas
if [[ $service == 'chalice' ]]; then
cd "$MSAAS_REPO_FOLDER/openreplay/api"
else
cd "$MSAAS_REPO_FOLDER/openreplay/$service"
fi
local build_cmd="IMAGE_TAG=$version DOCKER_RUNTIME=depot DOCKER_BUILD_ARGS=--push ARCH=arm64 DOCKER_REPO=$DOCKER_REPO_ARM PUSH_IMAGE=0 bash build.sh"
echo "Executing: $build_cmd"
if ! eval "$build_cmd" 2>&1; then
echo "Build failed for $service"
exit 1
fi
}
# Build service with given arguments
build_service() {
local service=$1
local version=$2
local build_args=$3
local build_script=${4:-$BUILD_SCRIPT_NAME}
local command="IMAGE_TAG=$version DOCKER_RUNTIME=depot DOCKER_BUILD_ARGS=--push ARCH=amd64 DOCKER_REPO=$DOCKER_REPO_OSS PUSH_IMAGE=0 bash $build_script $build_args"
echo "Executing: $command"
eval "$command"
}
# Update chart version and commit changes
update_chart_version() {
local service=$1
local version=$2
local chart_path="$WORKING_DIR/scripts/helmcharts/openreplay/charts/$service/Chart.yaml"
# Ensure we're in the original working directory/repository
cd "$WORKING_DIR"
yq eval ".AppVersion = \"$version\"" -i "$chart_path"
git add "$chart_path"
git commit -m "Increment $service chart version to $version"
git push --set-upstream origin "$BRANCH_NAME"
cd -
}
# Main execution
main() {
setup_git
# Get backend services list
ls backend/cmd >"$BACKEND_SERVICES_FILE"
# Parse services input (fix for GitHub Actions syntax)
echo "Services: ${SERVICES_INPUT:-$1}"
IFS=',' read -ra services <<<"${SERVICES_INPUT:-$1}"
# Process each service
for service in "${services[@]}"; do
echo "Processing service: $service"
cd "$WORKING_DIR"
local foss_build_args="" ee_build_args="" build_script="$BUILD_SCRIPT_NAME"
# Determine build configuration based on service type
if grep -q "$service" "$BACKEND_SERVICES_FILE"; then
# Backend service
cd backend
foss_build_args="nil $service"
ee_build_args="ee $service"
else
# Non-backend service
case "$service" in
chalice | alerts | crons)
cd "$WORKING_DIR/api"
;;
*)
cd "$service"
;;
esac
# Special build scripts for alerts/crons
if [[ $service == 'alerts' || $service == 'crons' ]]; then
build_script="build_${service}.sh"
fi
ee_build_args="ee"
fi
# Get version and build
local version
version=$(image_version "$service")
# Build FOSS and EE versions
build_service "$service" "$version" "$foss_build_args"
build_service "$service" "${version}-ee" "$ee_build_args"
# Build managed version for specific services
if [[ "$service" != "chalice" && "$service" != "frontend" ]]; then
echo "Nothing to build in managed for service $service"
else
build_managed "$service" "$version"
fi
# Update chart and commit
update_chart_version "$service" "$version"
done
cd "$WORKING_DIR"
# Cleanup
rm -f "$BACKEND_SERVICES_FILE"
}
echo "Working directory: $WORKING_DIR"
# Run main function with all arguments
main "$SERVICES_INPUT"
- name: Create Pull Request
uses: repo-sync/pull-request@v2
with:
github_token: ${{ secrets.ACTIONS_COMMMIT_TOKEN }}
source_branch: ${{ env.BRANCH_NAME }}
destination_branch: "main"
pr_title: "Updated patch build from main ${{ env.HEAD_COMMIT_ID }}"
pr_body: |
This PR updates the Helm chart version after building the patch from $HEAD_COMMIT_ID.
Once this PR is merged, tag update job will run automatically.
# - name: Debug Job
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO_ARM: ${{ secrets.RELEASE_ARM_REGISTRY }}
# DOCKER_REPO_OSS: ${{ secrets.RELEASE_OSS_REGISTRY }}
# MSAAS_REPO_CLONE_TOKEN: ${{ secrets.MSAAS_REPO_CLONE_TOKEN }}
# MSAAS_REPO_URL: ${{ secrets.MSAAS_REPO_URL }}
# MSAAS_REPO_FOLDER: /tmp/msaas
# with:
# limit-access-to-actor: true
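Both patch workflows bump the patch component of a chart's AppVersion with the same awk expression used in image_version; a quick worked example (the version value is illustrative):

echo "1.18.4" | awk -F. '{$NF += 1; print $1"."$2"."$3}'   # prints 1.18.5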


@ -1,86 +0,0 @@
name: PR-Env-Delete
on:
workflow_dispatch:
inputs:
env_origin_url:
description: |
URL of the origin of the PR env to be deleted. Example: https://pr-1717-ee.openreplay.tools
required: true
jobs:
create-vcluster-pr:
runs-on: ubuntu-latest
env:
build_service: ${{ github.event.inputs.build_service }}
env_flavour: ${{ github.event.inputs.env_flavour }}
steps:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.OR_PR_AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.OR_PR_AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.OR_PR_AWS_DEFAULT_REGION}}
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.PR_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Install vCluster CLI
run: |
# Replace with the command to install vCluster CLI
curl -s -L "https://github.com/loft-sh/vcluster/releases/download/v0.16.4/vcluster-linux-amd64" -o /usr/local/bin/vcluster
chmod +x /usr/local/bin/vcluster
- name: Deleting vcluster
run: |
url=${{ github.event.inputs.env_origin_url }}
# Remove the protocol part of the URL
url_no_protocol=${url#*//}
# Extract the subdomain and domain
subdomain=$(echo $url_no_protocol | cut -d"." -f1)
domain=$(echo $url_no_protocol | cut -d"." -f2-)
echo "subdomain=$subdomain" >> $GITHUB_ENV
echo "domain=$domain" >> $GITHUB_ENV
vcluster delete -n $subdomain-vcluster $subdomain-vcluster
echo $subdomain $domain
- name: Get LoadBalancer IP
id: lb-ip
run: |
LB_IP=$(kubectl get svc ingress-ingress-nginx-controller -n default -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "::set-output name=ip::$LB_IP"
- name: Delete dns record
env:
AWS_ACCESS_KEY_ID: ${{ secrets.OR_PR_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.OR_PR_AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: ${{ secrets.OR_PR_AWS_DEFAULT_REGION }}
run: |
DOMAIN_NAME_1=$subdomain.$domain
DOMAIN_NAME_2=$subdomain-vcluster.$domain
cat <<EOF > route53-changes.json
{
"Comment": "Create record set for VCluster",
"Changes": [
{
"Action": "DELETE",
"ResourceRecordSet": {
"Name": "$DOMAIN_NAME_1",
"Type": "CNAME",
"TTL": 300,
"ResourceRecords": [{ "Value": "${{ steps.lb-ip.outputs.ip }}" }]
}
},
{
"Action": "DELETE",
"ResourceRecordSet": {
"Name": "$DOMAIN_NAME_2",
"Type": "CNAME",
"TTL": 300,
"ResourceRecords": [{ "Value": "${{ steps.lb-ip.outputs.ip }}" }]
}
}
]
}
EOF
aws route53 change-resource-record-sets --hosted-zone-id ${{ secrets.OR_PR_HOSTED_ZONE_ID }} --change-batch file://route53-changes.json
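The "Deleting vcluster" step splits the input URL into subdomain and domain with parameter expansion and cut; with the example URL from the input description, the pieces come out as shown in this sketch:

url="https://pr-1717-ee.openreplay.tools"
url_no_protocol=${url#*//}                           # pr-1717-ee.openreplay.tools
subdomain=$(echo $url_no_protocol | cut -d"." -f1)   # pr-1717-ee
domain=$(echo $url_no_protocol | cut -d"." -f2-)     # openreplay.tools
vcluster delete -n "$subdomain-vcluster" "$subdomain-vcluster"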


@ -1,340 +0,0 @@
name: PR-Deployment
on:
workflow_dispatch:
inputs:
build_service:
description: |
Name of a single service to build (in lowercase), e.g. api or frontend; use backend:service-name to build a backend service.
If an image is not built, it'll be deployed from the latest release.
Options: none/all/service-name/backend:{app1/app1,app2,app3/all}
required: false
default: none
env_flavour:
description: 'Which env to build. Values: foss/ee'
required: false
jobs:
create-vcluster-pr:
runs-on: ubuntu-latest
env:
build_service: ${{ github.event.inputs.build_service }}
env_flavour: ${{ github.event.inputs.env_flavour }}
steps:
- name: Checkout Code
uses: actions/checkout@v2
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.OR_PR_AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.OR_PR_AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.OR_PR_AWS_DEFAULT_REGION}}
- name: Setting up env variables
run: |
# Fetching details open/draft PR for current branch
PR_DATA=$(curl -s -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
"https://api.github.com/repos/${{ github.repository }}/pulls" \
| jq -r --arg BRANCH "${{ github.ref_name }}" '.[] | select((.head.ref==$BRANCH) and (.state=="open") and (.draft==true or .draft==false))')
# Extracting PR number
PR_NUMBER=$(echo "$PR_DATA" | jq -r '.number' | head -n 1)
if [ -z $PR_NUMBER ]; then
echo "No PR found for ${{ github.ref_name}}"
exit 100
fi
echo "PR_NUMBER_PRE=$PR_NUMBER" >> $GITHUB_ENV
PR_NUMBER=pr-$PR_NUMBER
if [ $env_flavour == "ee" ]; then
PR_NUMBER=$PR_NUMBER-ee
fi
echo "PR number: $PR_NUMBER"
echo "PR_NUMBER=$PR_NUMBER" >> $GITHUB_ENV
# Extracting PR status (open, closed, merged)
PR_STATUS=$(echo "$PR_DATA" | jq -r '.state' | head -n 1)
echo "PR status: $PR_STATUS"
echo "PR_STATUS=$PR_STATUS" >> $GITHUB_ENV
- name: Install vCluster CLI
run: |
# Replace with the command to install vCluster CLI
curl -s -L "https://github.com/loft-sh/vcluster/releases/download/v0.16.4/vcluster-linux-amd64" -o /usr/local/bin/vcluster
chmod +x /usr/local/bin/vcluster
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.PR_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Check existing vcluster
id: vcluster_exists
continue-on-error: true
run: |
if ! vcluster list | grep -q "$PR_NUMBER"; then
echo "no cluster found for $PR_NUMBER"
echo "::set-output name=failed::true"
exit 100
fi
DOMAIN_NAME=${PR_NUMBER}-vcluster.${{ secrets.OR_PR_DOMAIN_NAME }}
vcluster connect ${PR_NUMBER}-vcluster --update-current=false --server=https://$DOMAIN_NAME
mv kubeconfig.yaml /tmp/kubeconfig.yaml
- name: Get LoadBalancer IP
if: steps.vcluster_exists.outputs.failed == 'true'
id: lb-ip
run: |
# LB_IP=$(kubectl get svc ingress-ingress-nginx-controller -n default -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
LB_IP=$(kubectl get svc ingress-ingress-nginx-controller -n default -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "::set-output name=ip::$LB_IP"
- name: Create vCluster
if: steps.vcluster_exists.outputs.failed == 'true'
run: |
# Replace with the actual command to create a vCluster
pwd
cd scripts/pr-env/
bash create.sh ${PR_NUMBER}.${{ secrets.OR_PR_DOMAIN_NAME }}
cp kubeconfig.yaml /tmp/
- name: Update AWS Route53 Record
if: steps.vcluster_exists.outputs.failed == 'true'
env:
AWS_ACCESS_KEY_ID: ${{ secrets.OR_PR_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.OR_PR_AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: ${{ secrets.OR_PR_AWS_DEFAULT_REGION }}
run: |
DOMAIN_NAME_1=$PR_NUMBER-vcluster.${{ secrets.OR_PR_DOMAIN_NAME }}
DOMAIN_NAME_2=$PR_NUMBER.${{ secrets.OR_PR_DOMAIN_NAME }}
cat <<EOF > route53-changes.json
{
"Comment": "Create record set for VCluster",
"Changes": [
{
"Action": "CREATE",
"ResourceRecordSet": {
"Name": "$DOMAIN_NAME_1",
"Type": "CNAME",
"TTL": 300,
"ResourceRecords": [{ "Value": "${{ steps.lb-ip.outputs.ip }}" }]
}
},
{
"Action": "CREATE",
"ResourceRecordSet": {
"Name": "$DOMAIN_NAME_2",
"Type": "CNAME",
"TTL": 300,
"ResourceRecords": [{ "Value": "${{ steps.lb-ip.outputs.ip }}" }]
}
}
]
}
EOF
#
NEW_IP=${{ steps.lb-ip.outputs.ip }}
# Get the current IP address associated with the domain
CURRENT_IP=$(dig +short $DOMAIN_NAME_1 @1.1.1.1)
echo "current ip: $CURRENT_IP"
# Check if the domain has no IP association or if the IPs are different
if [ -z "$CURRENT_IP" ] || [ "$CURRENT_IP" != "$NEW_IP" ]; then
aws route53 change-resource-record-sets --hosted-zone-id ${{ secrets.OR_PR_HOSTED_ZONE_ID }} --change-batch file://route53-changes.json
fi
- name: Wait for DNS Propagation
if: steps.vcluster_exists.outputs.failed == 'true'
env:
EXPECTED_IP: ${{ steps.lb-ip.outputs.ip }}
run: |
DOMAIN_NAME="$PR_NUMBER-vcluster.${{ secrets.OR_PR_DOMAIN_NAME }}"
MAX_ATTEMPTS=30
attempt=1
until [[ $attempt -gt $MAX_ATTEMPTS ]]
do
# Use dig to query DNS records
DNS_RESULT=$(dig +short $DOMAIN_NAME @1.1.1.1)
# Check if DNS result is empty
if [ -z "$DNS_RESULT" ]; then
echo "No IP or CNAME records found for $DOMAIN_NAME."
else
echo "DNS records found for $DOMAIN_NAME:"
echo "$DNS_RESULT"
break
fi
echo "Waiting for DNS propagation... Attempt $attempt of $MAX_ATTEMPTS"
((attempt++))
sleep 20
done
if [[ $attempt -gt $MAX_ATTEMPTS ]]; then
echo "DNS propagation check failed for $DOMAIN_NAME after $MAX_ATTEMPTS attempts."
exit 1
fi
- name: Install openreplay
if: steps.vcluster_exists.outputs.failed == 'true'
env:
KUBECONFIG: /tmp/kubeconfig.yaml
run: |
DOMAIN_NAME=$PR_NUMBER.${{ secrets.OR_PR_DOMAIN_NAME }}
cd scripts/helmcharts
sed -i "s/domainName: \"\"/domainName: \"${DOMAIN_NAME}\"/g" vars.yaml
# If ee cluster, enable the following
if [ $env_flavour == "ee" ]; then
# Explanation for the sed command:
# /clickhouse:/: Matches lines containing "clickhouse:".
# {:a: Starts a block with label 'a'.
# n;: Reads the next line.
# /enabled:/s/false/true/: If the line contains 'enabled:', replace 'false' with 'true'.
# t done;: If the substitution was made, branch to label 'done'.
# ba;: Go back to label 'a' if no substitution was made.
# :done}: Label 'done', where the script goes after a successful substitution.
sed -i '/clickhouse:/{:a;n;/enabled:/s/false/true/;t done; ba; :done}' vars.yaml
sed -i '/kafka:/{:a;n;/# enabled:/s/# enabled: .*/enabled: true/;t done; ba; :done}' vars.yaml
sed -i '/redis:/{:a;n;/enabled:/s/true/false/;t done; ba; :done}' vars.yaml
sed -i "s/enterpriseEditionLicense: \"\"/enterpriseEditionLicense: \"${{ secrets.EE_LICENSE_KEY }}\"/g" vars.yaml
sed -i "s/domainName: \"\"/domainName: \"${DOMAIN_NAME}\"/g" vars.yaml
fi
helm upgrade -i databases -n db ./databases -f vars.yaml --create-namespace --wait -f ../pr-env/resources.yaml
helm upgrade -i openreplay -n app ./openreplay -f vars.yaml --create-namespace --set ingress-nginx.enabled=false -f ../pr-env/resources.yaml --wait
- name: Build and deploy application
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
env: ${{ github.event.inputs.env_flavour }}
run: |
set -x
app_name=${{github.event.inputs.build_service}}
echo "building and deploying $app_name"
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
export KUBECONFIG=/tmp/kubeconfig.yaml
function build_and_deploy {
apps_to_build=$1
case $apps_to_build in
backend*)
echo "Building backend build"
cd $GITHUB_WORKSPACE/backend
components=()
if [ $apps_to_build == "backend:all" ]; then
# Append all folder names from 'cmd/' directory to the array
for folder in cmd/*/; do
# Use basename to extract the folder name without path
folder_name=$(basename "$folder")
components+=("$folder_name")
done
else
# "${apps_to_build#*:}" :: Strip backend: and output app1,app2,app3 to read -ra
IFS=',' read -ra components <<< "${apps_to_build#*:}"
fi
echo "Building components: " ${components[@]}
for component in "${components[@]}"; do
if docker manifest inspect ${DOCKER_REPO}/$component:${IMAGE_TAG} > /dev/null 2>&1; then
echo Image present upstream. Skipping build: $component
else
echo "Building backend:$component"
PUSH_IMAGE=1 bash -x ./build.sh $env $component
fi
kubectl set image -n app deployment/$component-openreplay $component=${DOCKER_REPO}/$component:${IMAGE_TAG}
done
;;
chalice|api)
echo "Chalice build"
component=chalice
cd $GITHUB_WORKSPACE/api || { echo "Nothing to build: $component"; exit 100; }
if docker manifest inspect ${DOCKER_REPO}/$component:${IMAGE_TAG} > /dev/null 2>&1; then
echo Image present upstream. Skipping build: $component
else
echo "Building backend:$component"
PUSH_IMAGE=1 bash -x ./build.sh $env $component
fi
kubectl set image -n app deployment/$component-openreplay $component=${DOCKER_REPO}/$component:${IMAGE_TAG}
;;
*)
echo "$apps_to_build build"
cd $GITHUB_WORKSPACE/$apps_to_build || { echo "Nothing to build: $apps_to_build"; exit 100; }
component=$apps_to_build
if docker manifest inspect ${DOCKER_REPO}/$component:${IMAGE_TAG} > /dev/null 2>&1; then
echo Image present upstream. Skipping build: $component
else
echo "Building backend:$component"
PUSH_IMAGE=1 bash -x ./build.sh $env $component
fi
kubectl set image -n app deployment/$apps_to_build-openreplay $apps_to_build=${DOCKER_REPO}/$apps_to_build:${IMAGE_TAG}
;;
esac
}
case $app_name in
all)
build_and_deploy "backend:all"
build_and_deploy "frontend"
build_and_deploy "chalice"
build_and_deploy "sourcemapreader"
build_and_deploy "assist-stats"
;;
none)
echo "Nothing to build"
;;
*)
build_and_deploy $app_name
;;
esac
- name: Send results to Slack
if: steps.vcluster_exists.outputs.failed == 'true'
env:
SLACK_BOT_TOKEN: ${{ secrets.OR_PR_SLACK_BOT_TOKEN }}
SLACK_CHANNEL: ${{ secrets.OR_PR_SLACK_CHANNEL }}
run: |
echo hi ${{ steps.vcluster_exists.outputs.failed }}
DOMAIN_NAME=https://$PR_NUMBER.${{ secrets.OR_PR_DOMAIN_NAME }}
# Variables
PR_NUMBER=https://github.com/${{ github.repository }}/pull/${PR_NUMBER_PRE}
BRANCH_NAME=${{ github.ref_name }}
ORIGIN=$DOMAIN_NAME
ASSETS_HOST=$DOMAIN_NAME/assets
API_EDP=$DOMAIN_NAME/api
INGEST_POINT=$DOMAIN_NAME/ingest
# File to be uploaded
FILE_PATH="/tmp/kubeconfig.yaml"
if [ ! -f "$FILE_PATH" ]; then
echo "Kubeconfig file not found: $FILE_PATH"
exit 100
fi
# Form the message payload
PAYLOAD=$(cat <<EOF
{
"channel": "$SLACK_CHANNEL",
"text": "Deployment Information:\n- PR#: $PR_NUMBER\n- PR Status: $PR_STATUS\n- Branch Name: $BRANCH_NAME\n- Origin: $ORIGIN\n- Assets Host: $ASSETS_HOST\n- API Endpoint: $API_EDP\n- Ingest Point: $INGEST_POINT\n- To use the cluster: download the following file and run the following commands, \n export KUBECONFIG=/path/to/kubeconfig.yaml\n k9s"
}
EOF
)
# Send the message to Slack
curl -X POST -H "Authorization: Bearer $SLACK_BOT_TOKEN" -H 'Content-type: application/json' --data "$PAYLOAD" https://slack.com/api/chat.postMessage > /dev/null
# Upload the file to Slack
curl -F file=@"$FILE_PATH" -F channels="$SLACK_CHANNEL" -F token="$SLACK_BOT_TOKEN" https://slack.com/api/files.upload > /dev/null
# - name: Cleanup
# if: always()
# run: |
# # Add any cleanup commands if necessary
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true
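The "Setting up env variables" step selects the open PR for the current branch from the GitHub pulls API with jq; a minimal sketch of that filter, assuming the API response was saved to a hypothetical pulls.json and the branch name is illustrative:

jq -r --arg BRANCH "my-feature-branch" \
  '.[] | select((.head.ref==$BRANCH) and (.state=="open")) | .number' pulls.json | head -n 1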


@ -1,103 +0,0 @@
name: Release Deployment
on:
workflow_dispatch:
inputs:
services:
description: 'Comma-separated list of services to deploy. eg: frontend,api,sink'
required: true
branch:
description: 'Branch to deploy (defaults to dev)'
required: false
default: 'dev'
env:
IMAGE_REGISTRY_URL: ${{ secrets.OSS_REGISTRY_URL }}
DEPOT_PROJECT_ID: ${{ secrets.DEPOT_PROJECT_ID }}
DEPOT_TOKEN: ${{ secrets.DEPOT_TOKEN }}
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
with:
ref: ${{ github.event.inputs.branch }}
- name: Docker login
run: |
docker login $IMAGE_REGISTRY_URL -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- name: Set image tag with branch info
run: |
SHORT_SHA=$(git rev-parse --short HEAD)
echo "IMAGE_TAG=${{ github.event.inputs.branch }}-${SHORT_SHA}" >> $GITHUB_ENV
echo "Using image tag: $IMAGE_TAG"
- uses: depot/setup-action@v1
- name: Build and push Docker images
run: |
# Parse the comma-separated services list into an array
IFS=',' read -ra SERVICES <<< "${{ github.event.inputs.services }}"
working_dir=$(pwd)
# Define backend services (consider moving this to workflow inputs or repo config)
ls backend/cmd >> /tmp/backend.txt
BUILD_SCRIPT_NAME="build.sh"
for SERVICE in "${SERVICES[@]}"; do
# Check if service is backend
if grep -q $SERVICE /tmp/backend.txt; then
cd $working_dir/backend
foss_build_args="nil $SERVICE"
ee_build_args="ee $SERVICE"
else
cd $working_dir
[[ $SERVICE == 'chalice' || $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && cd $working_dir/api || cd $SERVICE
[[ $SERVICE == 'alerts' || $SERVICE == 'crons' ]] && BUILD_SCRIPT_NAME="build_${SERVICE}.sh"
ee_build_args="ee"
fi
{
echo IMAGE_TAG=$IMAGE_TAG DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$IMAGE_REGISTRY_URL PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
IMAGE_TAG=$IMAGE_TAG DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$IMAGE_REGISTRY_URL PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $foss_build_args
}&
{
echo IMAGE_TAG=${IMAGE_TAG}-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$IMAGE_REGISTRY_URL PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
IMAGE_TAG=${IMAGE_TAG}-ee DOCKER_RUNTIME="depot" DOCKER_BUILD_ARGS="--push" ARCH=amd64 DOCKER_REPO=$IMAGE_REGISTRY_URL PUSH_IMAGE=0 bash ${BUILD_SCRIPT_NAME} $ee_build_args
}&
done
wait
- uses: azure/k8s-set-context@v1
name: Using ee release cluster
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_RELEASE_KUBECONFIG }}
- name: Deploy to ee release Kubernetes
run: |
echo "Deploying services to EE cluster: ${{ github.event.inputs.services }}"
IFS=',' read -ra SERVICES <<< "${{ github.event.inputs.services }}"
for SERVICE in "${SERVICES[@]}"; do
SERVICE=$(echo $SERVICE | xargs) # Trim whitespace
echo "Deploying $SERVICE to EE cluster with image tag: ${IMAGE_TAG}"
kubectl set image deployment/$SERVICE-openreplay -n app $SERVICE=${IMAGE_REGISTRY_URL}/$SERVICE:${IMAGE_TAG}-ee
done
- uses: azure/k8s-set-context@v1
name: Using foss release cluster
with:
method: kubeconfig
kubeconfig: ${{ secrets.FOSS_RELEASE_KUBECONFIG }}
- name: Deploy to FOSS release Kubernetes
run: |
echo "Deploying services to FOSS cluster: ${{ github.event.inputs.services }}"
IFS=',' read -ra SERVICES <<< "${{ github.event.inputs.services }}"
for SERVICE in "${SERVICES[@]}"; do
SERVICE=$(echo $SERVICE | xargs) # Trim whitespace
echo "Deploying $SERVICE to FOSS cluster with image tag: ${IMAGE_TAG}"
echo "Deploying $SERVICE to FOSS cluster with image tag: ${IMAGE_TAG}"
kubectl set image deployment/$SERVICE-openreplay -n app $SERVICE=${IMAGE_REGISTRY_URL}/$SERVICE:${IMAGE_TAG}
done
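The build step above runs the FOSS and EE builds for each service as background jobs and ends with a bare wait. Note that wait without arguments returns 0 even if a background job failed; a sketch that would propagate failures (an assumption, not what the workflow does) looks roughly like this:

pids=()
for svc in frontend chalice; do          # service names are illustrative
  ( bash build.sh "$svc" ) & pids+=($!)
done
status=0
for pid in "${pids[@]}"; do
  wait "$pid" || status=1                # collect each job's exit code
done
exit $status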


@ -1,150 +0,0 @@
# This action will push the sourcemapreader changes to ee
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip Security checks if there is a unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
paths:
- "ee/sourcemap-reader/**"
- "sourcemap-reader/**"
- "!sourcemap-reader/.gitignore"
- "!sourcemap-reader/*-dev.sh"
name: Build and Deploy sourcemap-reader EE
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.EE_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.EE_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.EE_LICENSE_KEY }}
minio_access_key: ${{ secrets.EE_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.EE_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.EE_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
# Caching docker images
- uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
- name: Building and Pushing sourcemaps-reader image
id: build-image
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}-ee
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd sourcemap-reader
PUSH_IMAGE=0 bash -x ./build.sh
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("sourcemaps-reader")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("sourcemaps-reader")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
tag: ${image_array[1]}
EOF
done
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
# Update changed image tag
sed -i "/sourcemaps-reader/{n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
sed -i "s/sourcemaps-reader/sourcemapreader/g" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,sourcemapreader,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks | kubectl apply -n app -f -
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
- name: Alert slack
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: ee
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
SLACK_USERNAME: "OR Bot"
SLACK_MESSAGE: "Build failed :bomb:"
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true
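The "Creating old image input" step converts the list of currently deployed images into a Helm values override; the generated /tmp/image_override.yaml ends up shaped roughly like this (service names and tags are illustrative):

chalice:
  image:
    tag: v1.18.0
frontend:
  image:
    tag: v1.18.0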


@ -1,149 +0,0 @@
# This action will push the sourcemapreader changes to aws
on:
workflow_dispatch:
inputs:
skip_security_checks:
description: "Skip Security checks if there is a unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
paths:
- "sourcemap-reader/**"
- "!sourcemap-reader/.gitignore"
- "!sourcemap-reader/*-dev.sh"
name: Build and Deploy sourcemap-reader
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.OSS_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.OSS_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.OSS_LICENSE_KEY }}
minio_access_key: ${{ secrets.OSS_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.OSS_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.OSS_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
# Caching docker images
- uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
- name: Building and Pushing sourcemaps-reader image
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
run: |
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
cd sourcemap-reader
PUSH_IMAGE=0 bash -x ./build.sh
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
images=("sourcemaps-reader")
for image in ${images[*]};do
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --security-checks vuln --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
done
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
images=("sourcemaps-reader")
for image in ${images[*]};do
docker push $DOCKER_REPO/$image:$IMAGE_TAG
done
- name: Creating old image input
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
echo > /tmp/image_override.yaml
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
tag: ${image_array[1]}
EOF
done
- name: Deploy to kubernetes
run: |
cd scripts/helmcharts/
# Update changed image tag
sed -i "/sourcemaps-reader/{n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
sed -i "s/sourcemaps-reader/sourcemapreader/g" /tmp/image_override.yaml
cat /tmp/image_override.yaml
# Deploy command
mkdir -p /tmp/charts
mv openreplay/charts/{ingress-nginx,sourcemapreader,quickwit,connector} /tmp/charts/
rm -rf openreplay/charts/*
mv /tmp/charts/* openreplay/charts/
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true --no-hooks | kubectl apply -n app -f -
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
- name: Alert slack
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: foss
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
SLACK_USERNAME: "OR Bot"
SLACK_MESSAGE: "Build failed :bomb:"
# - name: Debug Job
# # if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true
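The security gate in both sourcemap-reader workflows is a plain trivy image scan that fails the step on HIGH/CRITICAL findings; a minimal equivalent invocation (repository and tag are placeholders):

trivy image --exit-code 1 --vuln-type os,library --severity "HIGH,CRITICAL" \
  --ignore-unfixed "$DOCKER_REPO/sourcemaps-reader:$IMAGE_TAG"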


@ -1,67 +0,0 @@
# Checking unit tests for tracker and assist
name: Tracker tests
on:
workflow_dispatch:
push:
branches: [ "main", "dev" ]
paths:
- tracker/**
pull_request:
branches: [ "dev", "main" ]
paths:
- tracker/**
jobs:
build-and-test:
runs-on: macos-latest
name: Build and test Tracker
steps:
- uses: oven-sh/setup-bun@v2
with:
bun-version: latest
- uses: actions/checkout@v3
- name: Cache tracker modules
uses: actions/cache@v3
with:
path: tracker/tracker/node_modules
key: ${{ runner.OS }}-test_tracker_build-${{ hashFiles('**/bun.lockb') }}
restore-keys: |
${{ runner.OS }}-test_tracker_build-
${{ runner.OS }}-
- name: Cache tracker-assist modules
uses: actions/cache@v3
with:
path: tracker/tracker-assist/node_modules
key: ${{ runner.OS }}-test_tracker_build-${{ hashFiles('**/bun.lockb') }}
restore-keys: |
${{ runner.OS }}-test_tracker_build-
${{ runner.OS }}-
- name: Setup Testing packages
run: |
cd tracker/tracker
bun install
- name: Jest tests
run: |
cd tracker/tracker
bun run test:ci
- name: Building test
run: |
cd tracker/tracker
bun run build
- name: (TA) Setup Testing packages
run: |
cd tracker/tracker-assist
bun install
- name: (TA) Jest tests
run: |
cd tracker/tracker-assist
bun run test:ci
- name: (TA) Building test
run: |
cd tracker/tracker-assist
bun run build
- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v3
with:
token: ${{ secrets.CODECOV_TOKEN }}
flags: tracker
name: tracker


@ -1,157 +0,0 @@
# Checking unit and visual tests locally on every merge request to dev and main
name: Frontend tests
on:
workflow_dispatch:
push:
branches: [ "main" ]
paths:
- frontend/**
- tracker/**
pull_request:
branches: [ "dev", "main" ]
paths:
- frontend/**
- tracker/**
env:
API: ${{ secrets.E2E_API_ORIGIN }}
ASSETS: ${{ secrets.E2E_ASSETS_ORIGIN }}
APIEDP: ${{ secrets.E2E_EDP_ORIGIN }}
CY_ACC: ${{ secrets.CYPRESS_ACCOUNT }}
CY_PASS: ${{ secrets.CYPRESS_PASSWORD }}
FOSS_PROJECT_KEY: ${{ secrets.FOSS_PROJECT_KEY }}
FOSS_INGEST: ${{ secrets.FOSS_INGEST }}
jobs:
build-and-test:
runs-on: macos-latest
name: Build and test Tracker plus Replayer
strategy:
matrix:
node-version: [ 20.x ]
steps:
- uses: oven-sh/setup-bun@v2
with:
bun-version: latest
- uses: actions/checkout@v3
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v3
with:
node-version: ${{ matrix.node-version }}
- name: Cache tracker modules
uses: actions/cache@v3
with:
path: tracker/tracker/node_modules
key: ${{ runner.OS }}-test_tracker_build-${{ hashFiles('tracker/tracker/bun.lockb') }}
restore-keys: |
${{ runner.OS }}-test_tracker_build-
${{ runner.OS }}-
- name: Setup Testing packages
run: |
cd tracker/tracker
bun install
- name: Build tracker inst
run: |
cd tracker/tracker
bun run build
- name: Setup Testing UI Env
run: |
cd tracker/tracker-testing-playground
echo "REACT_APP_KEY=$FOSS_PROJECT_KEY" >> .env
echo "REACT_APP_INGEST=$FOSS_INGEST" >> .env
- name: Cache testing UI node modules
uses: actions/cache@v3
with:
path: tracker/tracker-testing-playground/node_modules
key: ${{ runner.OS }}-build-${{ hashFiles('**/yarn.lock') }}
restore-keys: |
${{ runner.OS }}-build-
${{ runner.OS }}-
- name: Setup Testing packages
run: |
cd tracker/tracker-testing-playground
yarn
- name: Cache node modules
uses: actions/cache@v3
with:
path: frontend/node_modules
key: ${{ runner.OS }}-build-${{ hashFiles('frontend/yarn.lock') }}
restore-keys: |
${{ runner.OS }}-build-
${{ runner.OS }}-
- name: Setup env
run: |
cd frontend
echo "NODE_ENV=development" >> .env
echo "SOURCEMAP=true" >> .env
echo "ORIGIN=$API" >> .env
echo "ASSETS_HOST=$ASSETS" >> .env
echo "API_EDP=$APIEDP" >> .env
echo "SENTRY_ENABLED = false" >> .env
echo "SENTRY_URL = ''" >> .env
echo "CAPTCHA_ENABLED = false" >> .env
echo "CAPTCHA_SITE_KEY = 'asdad'" >> .env
echo "MINIO_ENDPOINT = ''" >> .env
echo "MINIO_PORT = ''" >> .env
echo "MINIO_USE_SSL = ''" >> .env
echo "MINIO_ACCESS_KEY = ''" >> .env
echo "MINIO_SECRET_KEY = ''" >> .env
echo "VERSION = '1.15.0'" >> .env
echo "TRACKER_VERSION = '10.0.0'" >> .env
echo "COMMIT_HASH = 'dev'" >> .env
echo "{ \"account\": \"$CY_ACC\", \"password\": \"$CY_PASS\" }" >> cypress.env.json
- name: Setup packages
run: |
cd frontend
yarn
- name: Run unit tests
run: |
cd frontend
yarn test:ci
- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v3
with:
token: ${{ secrets.CODECOV_TOKEN }}
flags: ui
name: ui
- name: Run testing frontend
run: |
cd tracker/tracker-testing-playground
yarn start &> testing.log &
echo "Started"
npm i -g wait-on
echo "Got wait on"
sleep 30
cat testing.log
npx wait-on http://localhost:3000
echo "Done"
timeout-minutes: 4
- name: Run Frontend
run: |
cd frontend
bun start &> frontend.log &
echo "Started"
sleep 30
cat frontend.log
npx wait-on http://0.0.0.0:3333
echo "Done"
timeout-minutes: 4
- name: (Chrome) Run visual tests
run: |
cd frontend
yarn cy:test
# firefox have different viewport somehow
# - name: (Firefox) Run visual tests
# run: yarn cy:test-firefox
# - name: (Edge) Run visual tests
# run: yarn cy:test-edge
timeout-minutes: 5
- name: Upload Debug
if: ${{ failure() }}
uses: actions/upload-artifact@v3
with:
name: 'Snapshots'
path: |
frontend/cypress/videos
frontend/cypress/snapshots/replayer.cy.ts
frontend/cypress/screenshots
frontend/cypress/snapshots/generalStability.cy.ts


@ -1,42 +0,0 @@
on:
pull_request:
types: [closed]
branches:
- main
name: Release tag update --force
jobs:
deploy:
name: Build Patch from main
runs-on: ubuntu-latest
if: ${{ (github.event_name == 'pull_request' && github.event.pull_request.merged == true) || github.event.inputs.services == 'true' }}
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Get latest release tag using GitHub API
id: get-latest-tag
run: |
LATEST_TAG=$(curl -s -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
"https://api.github.com/repos/${{ github.repository }}/releases/latest" \
| jq -r .tag_name)
# Exit if the API doesn't return a tag
if [ "$LATEST_TAG" == "null" ] || [ -z "$LATEST_TAG" ]; then
echo "Not found latest tag"
exit 100
fi
echo "LATEST_TAG=$LATEST_TAG" >> $GITHUB_ENV
echo "Latest tag: $LATEST_TAG"
- name: Set Remote with GITHUB_TOKEN
run: |
git config --unset http.https://github.com/.extraheader
git remote set-url origin https://x-access-token:${{ secrets.ACTIONS_COMMMIT_TOKEN }}@github.com/${{ github.repository }}
- name: Push main branch to tag
run: |
git checkout main
echo "Updating tag ${{ env.LATEST_TAG }} to point to latest commit on main"
git push origin HEAD:refs/tags/${{ env.LATEST_TAG }} --force
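After the force-push above, the release tag on the remote points at the current main HEAD; one way to verify, with a hypothetical tag name:

git ls-remote origin refs/tags/v1.18.0   # shows the commit the tag now points to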

.github/workflows/utilities.yaml

@ -0,0 +1,65 @@
# This action will push the utilities changes to aws
on:
workflow_dispatch:
push:
branches:
- dev
paths:
- utilities/**
name: Build and Deploy Utilities
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
- name: Docker login
run: |
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Building and Pushing utilities image
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.sha }}
ENVIRONMENT: staging
run: |
cd utilities
PUSH_IMAGE=1 bash build.sh
- name: Deploy to kubernetes
run: |
cd scripts/helm/
sed -i "s#minio_access_key.*#minio_access_key: \"${{ secrets.OSS_MINIO_ACCESS_KEY }}\" #g" vars.yaml
sed -i "s#minio_secret_key.*#minio_secret_key: \"${{ secrets.OSS_MINIO_SECRET_KEY }}\" #g" vars.yaml
sed -i "s#domain_name.*#domain_name: \"foss.openreplay.com\" #g" vars.yaml
sed -i "s#kubeconfig.*#kubeconfig_path: ${KUBECONFIG}#g" vars.yaml
sed -i "s/image_tag:.*/image_tag: \"$IMAGE_TAG\"/g" vars.yaml
bash kube-install.sh --app utilities
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.sha }}
ENVIRONMENT: staging
# - name: Debug Job
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}
# ENVIRONMENT: staging
#
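The deploy step rewrites a handful of keys in scripts/helm/vars.yaml with sed before running kube-install.sh; after the substitutions the touched keys look roughly like this (secret values are placeholders):

minio_access_key: "<OSS_MINIO_ACCESS_KEY>"
minio_secret_key: "<OSS_MINIO_SECRET_KEY>"
domain_name: "foss.openreplay.com"
kubeconfig_path: /path/to/kubeconfig
image_tag: "<git sha>"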


@ -2,18 +2,9 @@
on:
workflow_dispatch:
inputs:
build_service:
description: 'Name of a single service to build (in lowercase). "all" to build everything'
required: false
default: "false"
skip_security_checks:
description: "Skip Security checks if there is a unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
- dev
paths:
- ee/backend/**
- backend/**
@ -26,168 +17,121 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
# ref: staging
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
# ref: staging
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.EE_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.EE_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.EE_LICENSE_KEY }}
minio_access_key: ${{ secrets.EE_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.EE_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.EE_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
- name: Docker login
run: |
docker login ${{ secrets.EE_REGISTRY_URL }} -u ${{ secrets.EE_DOCKER_USERNAME }} -p "${{ secrets.EE_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
- name: Downloading yq
run: |
VERSION="v4.42.1"
sudo wget https://github.com/mikefarah/yq/releases/download/${VERSION}/yq_linux_amd64 -O /usr/bin/yq
sudo chmod +x /usr/bin/yq
# # Caching docker images
# - uses: satackey/action-docker-layer-caching@v0.0.11
# # Ignore the failure of a step and avoid terminating the job.
# continue-on-error: true
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.EE_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
# # Caching docker images
# - uses: satackey/action-docker-layer-caching@v0.0.11
# # Ignore the failure of a step and avoid terminating the job.
# continue-on-error: true
- name: Build, tag
id: build-image
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}-ee
ENVIRONMENT: staging
run: |
#
# TODO: Check the container tags are same, then skip the build and deployment.
#
# Build a docker container and push it to Docker Registry so that it can be deployed to Kubernetes cluster.
#
# Getting the images to build
#
set -x
touch /tmp/images_to_build.txt
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
tmp_param=${{ github.event.inputs.build_service }}
build_param=${tmp_param:-'false'}
case ${build_param} in
false)
{
git diff --name-only HEAD HEAD~1 | grep -E "backend/pkg|backend/internal" | grep -vE ^ee/ | cut -d '/' -f3 | uniq | while read -r pkg_name ; do
grep -rl "pkg/$pkg_name" backend/services backend/cmd | cut -d '/' -f3
done
} | awk '!seen[$0]++' > /tmp/images_to_build.txt
;;
all)
ls backend/cmd > /tmp/images_to_build.txt
;;
*)
echo ${{github.event.inputs.build_service }} > /tmp/images_to_build.txt
;;
esac
if [[ $(cat /tmp/images_to_build.txt) == "" ]]; then
echo "Nothing to build here"
touch /tmp/nothing-to-build-here
exit 0
fi
#
# Pushing image to registry
#
cd backend
cat /tmp/images_to_build.txt
for image in $(cat /tmp/images_to_build.txt);
do
echo "Bulding $image"
PUSH_IMAGE=0 bash -x ./build.sh ee $image
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
docker push $DOCKER_REPO/$image:$IMAGE_TAG
echo "::set-output name=image::$DOCKER_REPO/$image:$IMAGE_TAG"
- name: Build, tag
id: build-image
env:
DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
IMAGE_TAG: ${{ github.sha }}-ee
ENVIRONMENT: staging
run: |
#
# TODO: Check the container tags are same, then skip the build and deployment.
#
# Build a docker container and push it to Docker Registry so that it can be deployed to Kubernetes cluster.
#
# Getting the images to build
#
set -x
{
git diff --name-only HEAD HEAD~1 | grep -E "backend/cmd|backend/services" | grep -vE ^ee/ | cut -d '/' -f3
git diff --name-only HEAD HEAD~1 | grep -E "backend/pkg|backend/internal" | grep -vE ^ee/ | cut -d '/' -f3 | uniq | while read -r pkg_name ; do
grep -rl "pkg/$pkg_name" backend/services backend/cmd | cut -d '/' -f3
done
} | uniq > /tmp/images_to_build.txt
- name: Deploying to kubernetes
env:
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
run: |
#
# Deploying image to environment.
#
set -x
[[ -f /tmp/nothing-to-build-here ]] && exit 0
cd scripts/helmcharts/
[[ $(cat /tmp/images_to_build.txt) != "" ]] || (echo "Nothing to build here"; exit 0)
#
# Pushing image to registry
#
cd backend
for image in $(cat /tmp/images_to_build.txt);
do
echo "Bulding $image"
PUSH_IMAGE=1 bash -x ./build.sh ee $image
echo "::set-output name=image::$DOCKER_REPO/$image:$IMAGE_TAG"
done
set -x
echo > /tmp/image_override.yaml
mkdir /tmp/helmcharts
mv openreplay/charts/ingress-nginx /tmp/helmcharts/
mv openreplay/charts/quickwit /tmp/helmcharts/
mv openreplay/charts/connector /tmp/helmcharts/
## Update images
for image in $(cat /tmp/images_to_build.txt);
do
mv openreplay/charts/$image /tmp/helmcharts/
cat <<EOF>>/tmp/image_override.yaml
${image}:
image:
# We have to strip off the -ee, as helm will append it.
tag: ${IMAGE_TAG}
EOF
done
ls /tmp/helmcharts
rm -rf openreplay/charts/*
ls openreplay/charts
mv /tmp/helmcharts/* openreplay/charts/
ls openreplay/charts
- name: Creating old image input
env:
IMAGE_TAG: ${{ github.sha }}
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
# Deploy command
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true | kubectl apply -f -
echo > /tmp/image_override.yaml
- name: Alert slack
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: ee
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
SLACK_USERNAME: "OR Bot"
SLACK_MESSAGE: "Build failed :bomb:"
for line in `cat /tmp/image_tag.txt`;
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
image:
# We have to strip off the -ee, as helm will append it.
tag: `echo ${image_array[1]} | cut -d '-' -f 1`
EOF
done
- name: Deploying to kubernetes
env:
# We're not passing the -ee flag, because helm will add that.
IMAGE_TAG: ${{ github.sha }}
run: |
#
# Deploying image to environment.
#
cd scripts/helmcharts/
## Update secrets
sed -i "s/postgresqlPassword: \"changeMePassword\"/postgresqlPassword: \"${{ secrets.EE_PG_PASSWORD }}\"/g" vars.yaml
sed -i "s/accessKey: \"changeMeMinioAccessKey\"/accessKey: \"${{ secrets.EE_MINIO_ACCESS_KEY }}\"/g" vars.yaml
sed -i "s/secretKey: \"changeMeMinioPassword\"/secretKey: \"${{ secrets.EE_MINIO_SECRET_KEY }}\"/g" vars.yaml
sed -i "s/jwt_secret: \"SetARandomStringHere\"/jwt_secret: \"${{ secrets.EE_JWT_SECRET }}\"/g" vars.yaml
sed -i "s/domainName: \"\"/domainName: \"${{ secrets.EE_DOMAIN_NAME }}\"/g" vars.yaml
sed -i "s/enterpriseEditionLicense: \"\"/enterpriseEditionLicense: \"${{ secrets.EE_LICENSE_KEY }}\"/g" vars.yaml
## Update images
for image in $(cat /tmp/images_to_build.txt);
do
sed -i "/${image}/{n;n;n;s/.*/ tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
done
cat /tmp/image_override.yaml
# Deploy command
helm upgrade --install openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set skipMigration=true --no-hooks
# - name: Debug Job
# # if: ${{ failure() }}
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# IMAGE_TAG: ${{ github.sha }}
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true
#
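The change-detection pipeline in the build step deduplicates the derived service list with awk '!seen[$0]++', which keeps the first occurrence of each line while preserving order; a quick worked example:

printf 'db\nsink\ndb\nhttp\n' | awk '!seen[$0]++'   # prints db, sink, http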


@ -2,18 +2,9 @@
on:
workflow_dispatch:
inputs:
build_service:
description: 'Name of a single service to build (in lowercase). "all" to build everything'
required: false
default: "false"
skip_security_checks:
description: "Skip Security checks if there is a unfixable vuln or error. Value: true/false"
required: false
default: "false"
push:
branches:
- dev
- dev
paths:
- backend/**
@ -25,162 +16,116 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
# ref: staging
- name: Checkout
uses: actions/checkout@v2
with:
# We need to diff with old commit
# to see which workers got changed.
fetch-depth: 2
# ref: staging
- uses: ./.github/composite-actions/update-keys
with:
assist_jwt_secret: ${{ secrets.ASSIST_JWT_SECRET }}
assist_key: ${{ secrets.ASSIST_KEY }}
domain_name: ${{ secrets.OSS_DOMAIN_NAME }}
jwt_refresh_secret: ${{ secrets.JWT_REFRESH_SECRET }}
jwt_secret: ${{ secrets.OSS_JWT_SECRET }}
jwt_spot_refresh_secret: ${{ secrets.JWT_SPOT_REFRESH_SECRET }}
jwt_spot_secret: ${{ secrets.JWT_SPOT_SECRET }}
license_key: ${{ secrets.OSS_LICENSE_KEY }}
minio_access_key: ${{ secrets.OSS_MINIO_ACCESS_KEY }}
minio_secret_key: ${{ secrets.OSS_MINIO_SECRET_KEY }}
pg_password: ${{ secrets.OSS_PG_PASSWORD }}
registry_url: ${{ secrets.OSS_REGISTRY_URL }}
name: Update Keys
- name: Docker login
run: |
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- name: Docker login
run: |
docker login ${{ secrets.OSS_REGISTRY_URL }} -u ${{ secrets.OSS_DOCKER_USERNAME }} -p "${{ secrets.OSS_REGISTRY_TOKEN }}"
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
# Caching docker images
# - uses: satackey/action-docker-layer-caching@v0.0.11
# # Ignore the failure of a step and avoid terminating the job.
# continue-on-error: true
- uses: azure/k8s-set-context@v1
with:
method: kubeconfig
kubeconfig: ${{ secrets.OSS_KUBECONFIG }} # Use content of kubeconfig in secret.
id: setcontext
# Caching docker images
# - uses: satackey/action-docker-layer-caching@v0.0.11
# # Ignore the failure of a step and avoid terminating the job.
# continue-on-error: true
- name: Build, tag
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
ENVIRONMENT: staging
run: |
#
# TODO: If the container tags are the same, skip the build and deployment.
#
# Build a docker container and push it to the Docker registry so that it can be deployed to the Kubernetes cluster.
#
# Getting the images to build
#
set -xe
touch /tmp/images_to_build.txt
skip_security_checks=${{ github.event.inputs.skip_security_checks }}
tmp_param=${{ github.event.inputs.build_service }}
build_param=${tmp_param:-'false'}
case ${build_param} in
false)
{
git diff --name-only HEAD HEAD~1 | grep -E "backend/pkg|backend/internal" | grep -vE ^ee/ | cut -d '/' -f3 | uniq | while read -r pkg_name ; do
grep -rl "pkg/$pkg_name" backend/services backend/cmd | cut -d '/' -f3
done
} | awk '!seen[$0]++' > /tmp/images_to_build.txt
;;
all)
ls backend/cmd > /tmp/images_to_build.txt
;;
*)
echo ${{github.event.inputs.build_service }} > /tmp/images_to_build.txt
;;
esac
if [[ $(cat /tmp/images_to_build.txt) == "" ]]; then
echo "Nothing to build here"
touch /tmp/nothing-to-build-here
exit 0
fi
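The awk '!seen[$0]++' filter used above is a common idiom for de-duplicating lines while preserving first-seen order, which uniq alone only does for adjacent duplicates. A quick illustration (service names are made up):

printf 'http\nsink\nhttp\n' | awk '!seen[$0]++'
# prints:
# http
# sink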
#
# Pushing image to registry
#
cd backend
cat /tmp/images_to_build.txt
for image in $(cat /tmp/images_to_build.txt);
do
echo "Bulding $image"
PUSH_IMAGE=0 bash -x ./build.sh skip $image
[[ "x$skip_security_checks" == "xtrue" ]] || {
curl -L https://github.com/aquasecurity/trivy/releases/download/v0.56.2/trivy_0.56.2_Linux-64bit.tar.gz | tar -xzf - -C ./
./trivy image --db-repository ghcr.io/aquasecurity/trivy-db:2 --db-repository public.ecr.aws/aquasecurity/trivy-db:2 --exit-code 1 --vuln-type os,library --severity "HIGH,CRITICAL" --ignore-unfixed $DOCKER_REPO/$image:$IMAGE_TAG
err_code=$?
[[ $err_code -ne 0 ]] && {
exit $err_code
}
} && {
echo "Skipping Security Checks"
}
docker push $DOCKER_REPO/$image:$IMAGE_TAG
echo "::set-output name=image::$DOCKER_REPO/$image:$IMAGE_TAG"
done
- name: Build, tag
id: build-image
env:
DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
IMAGE_TAG: ${{ github.sha }}
ENVIRONMENT: staging
run: |
#
# TODO: If the container tags are the same, skip the build and deployment.
#
# Build a docker container and push it to the Docker registry so that it can be deployed to the Kubernetes cluster.
#
# Getting the images to build
#
set -x
{
git diff --name-only HEAD HEAD~1 | grep -E "backend/cmd|backend/services" | grep -vE ^ee/ | cut -d '/' -f3
git diff --name-only HEAD HEAD~1 | grep -E "backend/pkg|backend/internal" | grep -vE ^ee/ | cut -d '/' -f3 | uniq | while read -r pkg_name ; do
grep -rl "pkg/$pkg_name" backend/services backend/cmd | cut -d '/' -f3
done
} | uniq > /tmp/images_to_build.txt
- name: Deploying to kubernetes
env:
IMAGE_TAG: ${{ github.ref_name }}_${{ github.sha }}
run: |
#
# Deploying image to environment.
#
set -x
[[ -f /tmp/nothing-to-build-here ]] && exit 0
cd scripts/helmcharts/
[[ $(cat /tmp/images_to_build.txt) != "" ]] || { echo "Nothing to build here"; exit 0; }
#
# Pushing image to registry
#
cd backend
for image in $(cat /tmp/images_to_build.txt);
do
echo "Bulding $image"
PUSH_IMAGE=1 bash -x ./build.sh skip $image
echo "::set-output name=image::$DOCKER_REPO/$image:$IMAGE_TAG"
done
set -x
echo > /tmp/image_override.yaml
mkdir /tmp/helmcharts
mv openreplay/charts/ingress-nginx /tmp/helmcharts/
mv openreplay/charts/quickwit /tmp/helmcharts/
mv openreplay/charts/connector /tmp/helmcharts/
## Update images
for image in $(cat /tmp/images_to_build.txt);
do
mv openreplay/charts/$image /tmp/helmcharts/
cat <<EOF >> /tmp/image_override.yaml
${image}:
  image:
    # We have to strip off the -ee, as helm will append it.
    tag: ${IMAGE_TAG}
EOF
done
ls /tmp/helmcharts
rm -rf openreplay/charts/*
ls openreplay/charts
mv /tmp/helmcharts/* openreplay/charts/
ls openreplay/charts
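This shuffle prunes openreplay/charts down to the services being redeployed, plus ingress-nginx, quickwit and connector, which are parked in /tmp/helmcharts and restored, so the helm template run below only renders manifests for what changed. A quick sanity check (an assumption, not part of the workflow):

ls openreplay/charts   # expect connector, ingress-nginx, quickwit, plus every service in /tmp/images_to_build.txt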
- name: Creating old image input
env:
IMAGE_TAG: ${{ github.sha }}
run: |
#
# Create yaml with existing image tags
#
kubectl get pods -n app -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep '/foss/' | cut -d '/' -f3 > /tmp/image_tag.txt
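The jsonpath/cut pipeline above records the tag each service is currently running, so it can be re-applied after the template run. Assuming a hypothetical pod image registry.example.com/foss/chalice:v1.9.0, the pipeline yields:

# kubectl output token:   registry.example.com/foss/chalice:v1.9.0
# after cut -d '/' -f3:   chalice:v1.9.0   (one line in /tmp/image_tag.txt)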
# Deploy command
helm template openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set ingress-nginx.enabled=false --set skipMigration=true | kubectl apply -f -
echo > /tmp/image_override.yaml
for line in $(cat /tmp/image_tag.txt);
do
image_array=($(echo "$line" | tr ':' '\n'))
cat <<EOF >> /tmp/image_override.yaml
${image_array[0]}:
  image:
    tag: ${image_array[1]}
EOF
done
- name: Deploying to kubernetes
env:
IMAGE_TAG: ${{ github.sha }}
run: |
#
# Deploying image to environment.
#
cd scripts/helmcharts/
## Update secrets
sed -i "s/postgresqlPassword: \"changeMePassword\"/postgresqlPassword: \"${{ secrets.OSS_PG_PASSWORD }}\"/g" vars.yaml
sed -i "s/accessKey: \"changeMeMinioAccessKey\"/accessKey: \"${{ secrets.OSS_MINIO_ACCESS_KEY }}\"/g" vars.yaml
sed -i "s/secretKey: \"changeMeMinioPassword\"/secretKey: \"${{ secrets.OSS_MINIO_SECRET_KEY }}\"/g" vars.yaml
sed -i "s/jwt_secret: \"SetARandomStringHere\"/jwt_secret: \"${{ secrets.OSS_JWT_SECRET }}\"/g" vars.yaml
sed -i "s/domainName: \"\"/domainName: \"${{ secrets.OSS_DOMAIN_NAME }}\"/g" vars.yaml
## Update images
for image in $(cat /tmp/images_to_build.txt);
do
sed -i "/${image}/{n;n;s/.*/    tag: ${IMAGE_TAG}/}" /tmp/image_override.yaml
done
# Deploy command
helm upgrade --install openreplay -n app openreplay -f vars.yaml -f /tmp/image_override.yaml --set skipMigration=true --no-hooks
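Since --no-hooks skips the chart's hook Jobs and skipMigration=true is set (presumably to avoid re-running DB migrations on every CI deploy), this upgrade only rolls the service workloads. One way to verify the rollout afterwards (an assumption, not part of the workflow; the deployment name is illustrative):

helm status openreplay -n app
kubectl -n app rollout status deployment/chalice --timeout=120s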
- name: Alert slack
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@v2
env:
SLACK_CHANNEL: foss
SLACK_TITLE: "Failed ${{ github.workflow }}"
SLACK_COLOR: ${{ job.status }} # or a specific color like 'good' or '#ff00ff'
SLACK_WEBHOOK: ${{ secrets.SLACK_WEB_HOOK }}
SLACK_USERNAME: "OR Bot"
SLACK_MESSAGE: "Build failed :bomb:"
# - name: Debug Job
# # if: ${{ failure() }}
# if: ${{ failure() }}
# uses: mxschmitt/action-tmate@v3
# env:
# DOCKER_REPO: ${{ secrets.EE_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}-ee
# DOCKER_REPO: ${{ secrets.OSS_REGISTRY_URL }}
# IMAGE_TAG: ${{ github.sha }}
# ENVIRONMENT: staging
# with:
# limit-access-to-actor: true
#
#

.gitignore (vendored): 5 changes

@@ -3,7 +3,4 @@ public
node_modules
*DS_Store
*.env
*.log
**/*.envrc
.idea
*.mob*
.idea


@@ -1,7 +0,0 @@
repos:
- repo: https://github.com/gitguardian/ggshield
rev: v1.14.5
hooks:
- id: ggshield
language_version: python3
stages: [commit]

LICENSE: 701 changes

@@ -1,694 +1,55 @@
Copyright (c) 2021-2025 Asayer, Inc dba OpenReplay
Copyright (c) 2022 Asayer, Inc.
OpenReplay monorepo uses multiple licenses. Portions of this software are licensed as follows:
- All content that resides under the "ee/" directory of this repository, is licensed under the license defined in "ee/LICENSE".
- All third party components incorporated into the OpenReplay Software are licensed under the original license provided by the owner of the applicable component.
- Some directories are licensed under the "MIT" license, as defined below.
- Content outside of the above mentioned directories or restrictions defaults to the "GNU Affero General Public License Version 3 (AGPL v3)" license, as defined below.
- Content outside of the above mentioned directories or restrictions above is available under the "Elastic License 2.0 (ELv2)" license as defined below.
Reach out (license@openreplay.com) if you have any questions regarding licenses.
--------------------------------------------------------------------------------
MIT LICENSE
------------------------------------------------------------------------------------
Elastic License 2.0 (ELv2)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
**Acceptance**
By using the software, you agree to all of the terms and conditions below.
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
**Copyright License**
The licensor grants you a non-exclusive, royalty-free, worldwide, non-sublicensable, non-transferable license to use, copy, distribute, make available, and prepare derivative works of the software, in each case subject to the limitations and conditions below.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
**Limitations**
You may not provide the software to third parties as a hosted or managed service, where the service provides users with access to any substantial set of the features or functionality of the software.
--------------------------------------------------------------------------------
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
You may not move, change, disable, or circumvent the license key functionality in the software, and you may not remove or obscure any functionality in the software that is protected by the license key.
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
You may not alter, remove, or obscure any licensing, copyright, or other notices of the licensor in the software. Any use of the licensor's trademarks is subject to applicable law.
Preamble
**Patents**
The licensor grants you a license, under any patent claims the licensor can license, or becomes able to license, to make, have made, use, sell, offer for sale, import and have imported the software, in each case subject to the limitations and conditions in this license. This license does not cover any patent claims that you cause to be infringed by modifications or additions to the software. If you or your company make any written claim that the software infringes or contributes to infringement of any patent, your patent license for the software granted under these terms ends immediately. If your company makes such a claim, your patent license ends immediately for work on behalf of your company.
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
**Notices**
You must ensure that anyone who gets a copy of any part of the software from you also gets a copy of these terms.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
If you modify the software, you must include in any modified copies of the software prominent notices stating that you have modified the software.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
**No Other Rights**
These terms do not imply any licenses other than those expressly granted in these terms.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
**Termination**
If you use the software in violation of these terms, such use is not licensed, and your licenses will automatically terminate. If the licensor provides you with a notice of your violation, and you cease all violation of this license no later than 30 days after you receive that notice, your licenses will be reinstated retroactively. However, if you violate these terms after such reinstatement, any additional violation of these terms will cause your licenses to terminate automatically and permanently.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
**No Liability**
As far as the law allows, the software comes as is, without any warranty or condition, and the licensor will not be liable to you for any damages arising out of these terms or the use or nature of the software, under any kind of legal claim.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
**Definitions**
The *licensor* is the entity offering these terms, and the *software* is the software the licensor makes available under these terms, including any portion of it.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
*you* refers to the individual or entity agreeing to these terms.
The precise terms and conditions for copying, distribution and
modification follow.
*your company* is any legal entity, sole proprietorship, or other kind of organization that you work for, plus all organizations that have control over, are under the control of, or are under common control with that organization. *control* means ownership of substantially all the assets of an entity, or the power to direct its management and policies by vote, contract, or otherwise. Control can be direct or indirect.
TERMS AND CONDITIONS
*your licenses* are all the licenses granted to you for the software under these terms.
0. Definitions.
*use* means anything you do with the software requiring one of your licenses.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.
*trademark* means trademarks, service marks, and similar rights.


@@ -1,53 +1,40 @@
<p align="center">
<a href="/README_FR.md">Français</a>
&nbsp;|&nbsp;
<a href="/README_ESP.md">Español</a>
&nbsp;|&nbsp;
<a href="/README_RU.md">Русский</a>
&nbsp;|&nbsp;
<a href="/README_AR.md">العربية</a>
</p>
<p align="center">
<a href="https://openreplay.com/#gh-light-mode-only">
<img src="static/openreplay-git-banner-light.png" width="100%">
</a>
<a href="https://openreplay.com/#gh-dark-mode-only">
<img src="static/openreplay-git-banner-dark.png" width="100%">
<a href="https://openreplay.com">
<img src="static/logo.svg" height="70">
</a>
</p>
<h3 align="center">Session replay for developers</h3>
<p align="center">The most advanced session replay for building delightful web apps.</p>
<p align="center">The most advanced open-source session replay to build delightful web apps.</p>
<p align="center">
<a href="https://docs.openreplay.com/deployment/deploy-aws">
<img src="static/btn-deploy-aws.svg" height="40"/>
<img src="static/deploy-aws.png" height="35"/>
</a>
<a href="https://docs.openreplay.com/deployment/deploy-gcp">
<img src="static/btn-deploy-google-cloud.svg" height="40" />
<img src="static/deploy-gcp.png" height="35" />
</a>
<a href="https://docs.openreplay.com/deployment/deploy-azure">
<img src="static/btn-deploy-azure.svg" height="40" />
<img src="static/deploy-azure.png" height="35" />
</a>
<a href="https://docs.openreplay.com/deployment/deploy-digitalocean">
<img src="static/btn-deploy-digital-ocean.svg" height="40" />
<img src="static/deploy-do.png" height="35" />
</a>
</p>
<p align="center">
<a href="https://github.com/openreplay/openreplay">
<img src="static/openreplay-git-hero.svg">
<img src="static/overview.png">
</a>
</p>
OpenReplay is an open-source session replay suite you can host yourself, that lets you see what users do on your web app, helping you troubleshoot issues faster.
OpenReplay is a session replay stack that lets you see what users do on your web app, helping you troubleshoot issues faster. It's the only open-source alternative to products such as FullStory and LogRocket.
- **Session replay.** OpenReplay replays what users do, but not only. It also shows you what went under the hood, how your website or app behaves by capturing network activity, console logs, JS errors, store actions/state, page speed metrics, cpu/memory usage and much more.
- **Low footprint**. With a ~26KB (.br) tracker that asynchronously sends minimal data for a very limited impact on performance.
- **Low footprint**. With a ~18KB (.gz) tracker that asynchronously sends minimal data for a very limited impact on performance.
- **Self-hosted**. No more security compliance checks, 3rd-parties processing user data. Everything OpenReplay captures stays in your cloud for a complete control over your data.
- **Privacy controls**. Fine-grained security features for sanitizing user data.
- **Easy deploy**. With support of major public cloud providers (AWS, GCP, Azure, DigitalOcean).
@@ -55,13 +42,12 @@ OpenReplay is an open-source session replay suite you can host yourself, that le
## Features
- **Session replay:** Lets you relive your users' experience, see where they struggle and how it affects their behavior. Each session replay is automatically analyzed based on heuristics, for easy triage.
- **Spot:** A Chrome extension that lets you record bugs directly from your browser — each recording includes all the technical details developers need to fix them.
- **DevTools:** It's like debugging in your own browser. OpenReplay provides you with the full context (network activity, JS errors, store actions/state and 40+ metrics) so you can instantly reproduce bugs and understand performance issues.
- **Assist:** Helps you support your users by seeing their live screen and instantly hopping on call (WebRTC) with them without requiring any 3rd-party screen sharing software.
- **Omni-search:** Search and filter by almost any user action/criteria, session attribute or technical event, so you can answer any question. No instrumentation required.
- **Analytics:** For surfacing the most impactful issues causing conversion and revenue loss.
- **Funnels:** For surfacing the most impactful issues causing conversion and revenue loss.
- **Fine-grained privacy controls:** Choose what to capture, what to obscure or what to ignore so user data doesn't even reach your servers.
- **Plugins oriented:** Get to the root cause even faster by tracking application state (Redux, VueX, MobX, NgRx, Pinia and Zustand) and logging GraphQL queries (Apollo, Relay) and Fetch/Axios requests.
- **Plugins oriented:** Get to the root cause even faster by tracking application state (Redux, VueX, MobX, NgRx) and logging GraphQL queries (Apollo, Relay) and Fetch requests.
- **Integrations:** Sync your backend logs with your session replays and see what happened front-to-back. OpenReplay supports Sentry, Datadog, CloudWatch, Stackdriver, Elastic and more.
## Deployment Options
@@ -73,7 +59,6 @@ OpenReplay can be deployed anywhere. Follow our step-by-step guides for deployin
- [Azure](https://docs.openreplay.com/deployment/deploy-azure)
- [Digital Ocean](https://docs.openreplay.com/deployment/deploy-digitalocean)
- [Scaleway](https://docs.openreplay.com/deployment/deploy-scaleway)
- [OVHcloud](https://docs.openreplay.com/deployment/deploy-ovhcloud)
- [Kubernetes](https://docs.openreplay.com/deployment/deploy-kubernetes)
## OpenReplay Cloud
@@ -84,10 +69,9 @@ For those who want to simply use OpenReplay as a service, [sign up](https://app.
Please refer to the [official OpenReplay documentation](https://docs.openreplay.com/). That should help you troubleshoot common issues. For additional help, you can reach out to us on one of these channels:
- [Slack](https://slack.openreplay.com) (Connect with our engineers and community)
- [Discord](https://discord.openreplay.com) (Connect with our engineers and community)
- [GitHub](https://github.com/openreplay/openreplay/issues) (Bug and issue reports)
- [Twitter](https://twitter.com/OpenReplayHQ) (Product updates, Great content)
- [YouTube](https://www.youtube.com/channel/UCcnWlW-5wEuuPAwjTR1Ydxw) (How-to tutorials, past Community Calls)
- [Website chat](https://openreplay.com) (Talk to us)
## Contributing
@@ -96,8 +80,12 @@ We're always on the lookout for contributions to OpenReplay, and we're glad you'
See our [Contributing Guide](CONTRIBUTING.md) for more details.
Also, feel free to join our [Slack](https://slack.openreplay.com) to ask questions, discuss ideas or connect with our contributors.
Also, feel free to join our [Discord](https://discord.openreplay.com) to ask questions, discuss ideas or connect with our contributors.
## Roadmap
Check out our [roadmap](https://www.notion.so/openreplay/Roadmap-889d2c3d968b4786ab9b281ab2394a94) and keep an eye on what's coming next. You're free to [submit](https://github.com/openreplay/openreplay/issues/new) new ideas and vote on features.
## License
This monorepo uses several licenses. See [LICENSE](/LICENSE) for more details.
This repo is under the Elastic License 2.0 (ELv2), with the exception of the `ee` directory.


@@ -1,106 +0,0 @@
<p align="center">
<a href="/README_FR.md">Français</a>
&nbsp;|&nbsp;
<a href="/README_ESP.md">Español</a>
&nbsp;|&nbsp;
<a href="/README_RU.md">Русский</a>
&nbsp;|&nbsp;
<a href="/README.md">English</a>
</p>
<p align="center">
<a href="https://openreplay.com/#gh-light-mode-only">
<img src="static/openreplay-git-banner-light.png" width="100%">
</a>
<a href="https://openreplay.com/#gh-dark-mode-only">
<img src="static/openreplay-git-banner-dark.png" width="100%">
</a>
</p>
<h3 align="center">إعادة تشغيل الجلسة للمطورين</h3>
<p align="center">إعادة تشغيل الجلسة الأكثر تقدمًا لإنشاء تطبيقات ويب رائعة</p>
<p align="center">
<a href="https://docs.openreplay.com/deployment/deploy-aws">
<img src="static/btn-deploy-aws.svg" height="40"/>
</a>
<a href="https://docs.openreplay.com/deployment/deploy-gcp">
<img src="static/btn-deploy-google-cloud.svg" height="40" />
</a>
<a href="https://docs.openreplay.com/deployment/deploy-azure">
<img src="static/btn-deploy-azure.svg" height="40" />
</a>
<a href="https://docs.openreplay.com/deployment/deploy-digitalocean">
<img src="static/btn-deploy-digital-ocean.svg" height="40" />
</a>
</p>
<p align="center">
<a href="https://github.com/openreplay/openreplay">
<img src="static/openreplay-git-hero.svg">
</a>
</p>
OpenReplay is a session replay suite you can host yourself, which lets you see what users do on your web app, helping you troubleshoot issues faster.
- **Session replay.** OpenReplay replays what users do and shows how your website or app behaves by capturing network activity, console logs, JavaScript errors, store actions/state, page speed metrics, CPU/memory usage, and much more.
- **Low footprint.** With a ~26KB (.br) tracker that asynchronously sends minimal data, for a very limited impact on performance.
- **Self-hosted.** No more security compliance checks or third parties processing user data. Everything OpenReplay captures stays in your cloud, for complete control over your data.
- **Privacy controls.** Fine-grained security features for sanitizing user data.
- **Easy deploy.** With support for major public cloud providers (AWS, GCP, Azure, DigitalOcean).
## Features
- **Session replay:** Lets you relive your users' experience, see where they struggle and how it affects their behavior. Each session replay is automatically analyzed based on heuristics, for easy triage.
- **DevTools:** It's like debugging in your own browser. OpenReplay provides you with the full context (network activity, JavaScript errors, store actions/state and 40+ metrics) so you can instantly reproduce bugs and understand performance issues.
- **Assist:** Helps you support your users by seeing their live screen and instantly hopping on a call (WebRTC) with them, without requiring any third-party screen sharing software.
- **Omni-search:** Search and filter by almost any user action/criteria, session attribute or technical event, so you can answer any question. No instrumentation required.
- **Funnels:** For surfacing the most impactful issues causing conversion and revenue loss.
- **Fine-grained privacy controls:** Choose what to capture, what to obscure or what to ignore, so user data doesn't even reach your servers.
- **Plugins oriented:** Get to the root cause even faster by tracking application state (Redux, VueX, MobX, NgRx, Pinia and Zustand) and logging GraphQL queries (Apollo, Relay) and Fetch/Axios requests.
- **Integrations:** Sync your backend logs with your session replays and see what happened front-to-back. OpenReplay supports Sentry, Datadog, CloudWatch, Stackdriver, Elastic and more.
## Deployment Options
OpenReplay can be deployed anywhere. Follow our step-by-step guides for deploying it on major public clouds:
- [AWS](https://docs.openreplay.com/deployment/deploy-aws)
- [Google Cloud](https://docs.openreplay.com/deployment/deploy-gcp)
- [Azure](https://docs.openreplay.com/deployment/deploy-azure)
- [Digital Ocean](https://docs.openreplay.com/deployment/deploy-digitalocean)
- [Scaleway](https://docs.openreplay.com/deployment/deploy-scaleway)
- [OVHcloud](https://docs.openreplay.com/deployment/deploy-ovhcloud)
- [Kubernetes](https://docs.openreplay.com/deployment/deploy-kubernetes)
## OpenReplay Cloud
For those who want to use OpenReplay as a service, [sign up](https://app.openreplay.com/signup) for a free account on our cloud offering.
## Community Support
Please refer to the [official OpenReplay documentation](https://docs.openreplay.com/). That should help you troubleshoot common issues. For additional help, you can reach out to us on one of these channels:
- [Slack](https://slack.openreplay.com) (Connect with our engineers and community)
- [GitHub](https://github.com/openreplay/openreplay/issues) (Bug and issue reports)
- [Twitter](https://twitter.com/OpenReplayHQ) (Product updates, great content)
- [YouTube](https://www.youtube.com/channel/UCcnWlW-5wEuuPAwjTR1Ydxw) (How-to tutorials, past community calls)
- [Website chat](https://openreplay.com) (Talk to us)
## Contributing
We're always on the lookout for contributions to OpenReplay, and we're glad you're considering it! Not sure where to start? Look for open issues, especially those marked as suitable for first-timers.
See our [Contributing Guide](CONTRIBUTING.md) for more details.
Also, feel free to join our [Slack](https://slack.openreplay.com) to ask questions, discuss ideas or connect with our contributors.
## Roadmap
Check out our [roadmap](https://www.notion.so/openreplay/Roadmap-889d2c3d968b4786ab9b281ab2394a94) and keep an eye on what's coming next. You're free to [submit](https://github.com/openreplay/openreplay/issues/new) new ideas and vote on features.
## License
This monorepo uses several licenses. See [LICENSE](/LICENSE) for more details.

View file

@ -1,106 +0,0 @@
<p align="center">
<a href="/README_FR.md">Français</a>
&nbsp;|&nbsp;
<a href="/README.md">English</a>
&nbsp;|&nbsp;
<a href="/README_RU.md">Русский</a>
&nbsp;|&nbsp;
<a href="/README_RU.md">العربية</a>
</p>
<p align="center">
<a href="https://openreplay.com/#gh-light-mode-only">
<img src="static/openreplay-git-banner-light.png" width="100%">
</a>
<a href="https://openreplay.com/#gh-dark-mode-only">
<img src="static/openreplay-git-banner-dark.png" width="100%">
</a>
</p>
<h3 align="center">Session replay for developers</h3>
<p align="center">The most advanced session replay for building delightful web apps.</p>
<p align="center">
<a href="https://docs.openreplay.com/deployment/deploy-aws">
<img src="static/btn-deploy-aws.svg" height="40"/>
</a>
<a href="https://docs.openreplay.com/deployment/deploy-gcp">
<img src="static/btn-deploy-google-cloud.svg" height="40" />
</a>
<a href="https://docs.openreplay.com/deployment/deploy-azure">
<img src="static/btn-deploy-azure.svg" height="40" />
</a>
<a href="https://docs.openreplay.com/deployment/deploy-digitalocean">
<img src="static/btn-deploy-digital-ocean.svg" height="40" />
</a>
</p>
<p align="center">
<a href="https://github.com/openreplay/openreplay">
<img src="static/openreplay-git-hero.svg">
</a>
</p>
OpenReplay is a session replay suite you can host yourself, letting you see what users do on your web app and helping you troubleshoot issues faster.
- **Session replay.** OpenReplay replays what users do, but not just that. It also shows you what goes on under the hood, how your website or app behaves by capturing network activity, console logs, JavaScript errors, store actions/state, page speed metrics, CPU/memory usage and much more.
- **Low footprint.** With a tracker of roughly 26KB (.br) that asynchronously sends minimal data, for a very limited impact on performance.
- **Self-hosted.** No more security compliance checks or third parties processing user data. Everything OpenReplay captures stays in your cloud for complete control over your data.
- **Privacy controls.** Fine-grained security features for sanitizing user data.
- **Easy deploy.** With support for major public cloud providers (AWS, GCP, Azure, DigitalOcean).
## Features
- **Session replay:** Lets you relive your users' experience, see where they struggle and how it affects their behavior. Each session replay is automatically analyzed based on heuristics, for easy triage.
- **DevTools:** It's like debugging in your own browser. OpenReplay provides you with the full context (network activity, JavaScript errors, store actions/state and 40+ metrics) so you can instantly reproduce bugs and understand performance issues.
- **Assist:** Helps you support your users by seeing their screen in real time and instantly joining a call (WebRTC) with them, without requiring third-party screen-sharing software.
- **Omni-search:** Search and filter by almost any user action/criteria, session attribute or technical event, so you can answer any question. No instrumentation required.
- **Funnels:** For surfacing the most impactful issues causing conversion and revenue loss.
- **Fine-grained privacy controls:** Choose what to capture, what to obscure or what to ignore so user data doesn't even reach your servers.
- **Plugins oriented:** Get to the root cause faster by tracking application state (Redux, VueX, MobX, NgRx, Pinia and Zustand) and logging GraphQL queries (Apollo, Relay) and Fetch/Axios requests.
- **Integrations:** Sync your backend logs with your session replays and see what happened front-to-back. OpenReplay supports Sentry, Datadog, CloudWatch, Stackdriver, Elastic and more.
## Deployment Options
OpenReplay can be deployed anywhere. Follow our step-by-step guides for deploying it on major public clouds:
- [AWS](https://docs.openreplay.com/deployment/deploy-aws)
- [Google Cloud](https://docs.openreplay.com/deployment/deploy-gcp)
- [Azure](https://docs.openreplay.com/deployment/deploy-azure)
- [Digital Ocean](https://docs.openreplay.com/deployment/deploy-digitalocean)
- [Scaleway](https://docs.openreplay.com/deployment/deploy-scaleway)
- [OVHcloud](https://docs.openreplay.com/deployment/deploy-ovhcloud)
- [Kubernetes](https://docs.openreplay.com/deployment/deploy-kubernetes)
## OpenReplay Cloud
For those who want to use OpenReplay as a service, [sign up](https://app.openreplay.com/signup) for a free account on our cloud offering.
## Community Support
Check the [official OpenReplay documentation](https://docs.openreplay.com/). That should help you troubleshoot common issues. For additional help, you can reach out to us on one of these channels:
- [Slack](https://slack.openreplay.com) (Connect with our engineers and community)
- [GitHub](https://github.com/openreplay/openreplay/issues) (Bug and issue reports)
- [Twitter](https://twitter.com/OpenReplayHQ) (Product updates, great content)
- [YouTube](https://www.youtube.com/channel/UCcnWlW-5wEuuPAwjTR1Ydxw) (Tutorials, past community calls)
- [Website chat](https://openreplay.com) (Talk to us)
## Contributing
We're always looking for contributions to OpenReplay, and we're glad you're considering it! Not sure where to start? Look for open issues, preferably those marked as good first contributions.
See our [Contributing Guide](CONTRIBUTING.md) for more details.
Also, feel free to join our [Slack](https://slack.openreplay.com) to ask questions, discuss ideas or connect with our contributors.
## Roadmap
Check out our [roadmap](https://www.notion.so/openreplay/Roadmap-889d2c3d968b4786ab9b281ab2394a94) and keep an eye on what's coming next. You're free to [submit](https://github.com/openreplay/openreplay/issues/new) new ideas and vote on features.
## License
This monorepo uses several licenses. See [LICENSE](/LICENSE) for more details.

View file

@ -1,106 +0,0 @@
<p align="center">
<a href="/README.md">English</a>
&nbsp;|&nbsp;
<a href="/README_ESP.md">Español</a>
&nbsp;|&nbsp;
<a href="/README_RU.md">Русский</a>
&nbsp;|&nbsp;
<a href="/README_RU.md">العربية</a>
</p>
<p align="center">
<a href="https://openreplay.com/#gh-light-mode-only">
<img src="static/openreplay-git-banner-light.png" width="100%">
</a>
<a href="https://openreplay.com/#gh-dark-mode-only">
<img src="static/openreplay-git-banner-dark.png" width="100%">
</a>
</p>
<h3 align="center">Session replay for developers</h3>
<p align="center">The most advanced session replay on the market, for building polished apps.</p>
<p align="center">
<a href="https://docs.openreplay.com/deployment/deploy-aws">
<img src="static/btn-deploy-aws.svg" height="40"/>
</a>
<a href="https://docs.openreplay.com/deployment/deploy-gcp">
<img src="static/btn-deploy-google-cloud.svg" height="40" />
</a>
<a href="https://docs.openreplay.com/deployment/deploy-azure">
<img src="static/btn-deploy-azure.svg" height="40" />
</a>
<a href="https://docs.openreplay.com/deployment/deploy-digitalocean">
<img src="static/btn-deploy-digital-ocean.svg" height="40" />
</a>
</p>
<p align="center">
<a href="https://github.com/openreplay/openreplay">
<img src="static/openreplay-git-hero.svg">
</a>
</p>
OpenReplay is a session replay suite you can host yourself, letting you see what users do on a web app and helping you resolve all kinds of issues faster.
- **Session replay.** OpenReplay replays what users do, but not just that. It also shows you what goes on under the hood, how your website or app behaves by capturing network activity, console logs, JS errors, store actions/state, page load metrics, CPU/memory usage, and much more.
- **Low footprint.** With a tracker of about 26KB (.br) that asynchronously sends minimal data, for a very limited impact on performance.
- **Self-hosted.** No more security compliance checks, no more third parties processing user data. Everything OpenReplay captures stays in your cloud for complete control over your data.
- **Privacy controls.** Fine-grained security features for sanitizing user data.
- **Easy deploy.** With support for major public cloud providers (AWS, GCP, Azure, DigitalOcean).
## Features
- **Session replay:** Lets you relive your users' experience, see where they run into problems and how it affects their behavior. Each session replay is automatically analyzed based on heuristics, for easier triage of issues by impact.
- **DevTools:** It's like debugging in your own browser. OpenReplay provides you with the full context (network activity, JS errors, store actions/state and 40+ metrics) so you can instantly reproduce bugs and understand performance issues.
- **Assist:** Helps you support your users by seeing their screen live and instantly connecting with them via call/video (WebRTC), without requiring third-party screen-sharing software.
- **Omni-search:** Search and filter by almost any user action/criteria, session attribute or technical event, so you can answer any question. No instrumentation required.
- **Funnels:** For surfacing the most impactful issues causing conversion and revenue loss.
- **Fine-grained privacy controls:** Choose what to capture, what to obscure or what to ignore, so user data doesn't even reach your servers.
- **Plugins oriented:** Fix bugs faster by tracking application state (Redux, VueX, MobX, NgRx, Pinia and Zustand) and logging GraphQL queries (Apollo, Relay) and Fetch/Axios requests.
- **Integrations:** Sync your backend logs with your session replays and see what happened front-to-back. OpenReplay supports Sentry, Datadog, CloudWatch, Stackdriver, Elastic and many more.
## Deployment Options
OpenReplay can be deployed anywhere. Follow our step-by-step guides to deploy it on major public clouds:
- [AWS](https://docs.openreplay.com/deployment/deploy-aws)
- [Google Cloud](https://docs.openreplay.com/deployment/deploy-gcp)
- [Azure](https://docs.openreplay.com/deployment/deploy-azure)
- [Digital Ocean](https://docs.openreplay.com/deployment/deploy-digitalocean)
- [Scaleway](https://docs.openreplay.com/deployment/deploy-scaleway)
- [OVHcloud](https://docs.openreplay.com/deployment/deploy-ovhcloud)
- [Kubernetes](https://docs.openreplay.com/deployment/deploy-kubernetes)
## OpenReplay Cloud
For those who just want to use OpenReplay as a service, [sign up](https://app.openreplay.com/signup) for a free account on our cloud offering.
## Community Support
Please refer to the [official OpenReplay documentation](https://docs.openreplay.com/). That should help you troubleshoot common issues. For any additional help or question, you can reach out to us on one of these channels:
- [Slack](https://slack.openreplay.com) (Connect with our engineers and community)
- [GitHub](https://github.com/openreplay/openreplay/issues) (Bug and issue reports)
- [Twitter](https://twitter.com/OpenReplayHQ) (Product updates, technical articles and other announcements)
- [YouTube](https://www.youtube.com/channel/UCcnWlW-5wEuuPAwjTR1Ydxw) (Tutorials)
- [Website chat](https://openreplay.com) (Talk to us)
## Contributing
We're always looking for contributions to make OpenReplay better. Not sure where to start? Search our GitHub Issues for open tickets, preferably those marked as good first contributions.
See our [Contributing Guide](CONTRIBUTING.md) for more details.
Feel free to join our [Slack](https://slack.openreplay.com) to ask questions, discuss your ideas or simply connect with our contributors.
## Roadmap
Check out our [roadmap](https://www.notion.so/openreplay/Roadmap-889d2c3d968b4786ab9b281ab2394a94) and keep an eye on what's coming next. You're free to [submit](https://github.com/openreplay/openreplay/issues/new) new ideas and vote on features.
## License
This monorepo uses several licenses. See [LICENSE](/LICENSE) for more details.

View file

@ -1,107 +0,0 @@
<p align="center">
<a href="/README_FR.md">Français</a>
&nbsp;|&nbsp;
<a href="/README_ESP.md">Español</a>
&nbsp;|&nbsp;
<a href="/README.md">English</a>
&nbsp;|&nbsp;
<a href="/README_RU.md">العربية</a>
</p>
<p align="center">
<a href="https://openreplay.com/#gh-light-mode-only">
<img src="static/openreplay-git-banner-light.png" width="100%">
</a>
<a href="https://openreplay.com/#gh-dark-mode-only">
<img src="static/openreplay-git-banner-dark.png" width="100%">
</a>
</p>
<h3 align="center">Session replay for developers</h3>
<p align="center">The most advanced open-source session replay solution for building delightful web apps.</p>
<p align="center">
<a href="https://docs.openreplay.com/deployment/deploy-aws">
<img src="static/btn-deploy-aws.svg" height="40"/>
</a>
<a href="https://docs.openreplay.com/deployment/deploy-gcp">
<img src="static/btn-deploy-google-cloud.svg" height="40" />
</a>
<a href="https://docs.openreplay.com/deployment/deploy-azure">
<img src="static/btn-deploy-azure.svg" height="40" />
</a>
<a href="https://docs.openreplay.com/deployment/deploy-digitalocean">
<img src="static/btn-deploy-digital-ocean.svg" height="40" />
</a>
</p>
<p align="center">
<a href="https://github.com/openreplay/openreplay">
<img src="static/openreplay-git-hero.svg">
</a>
</p>
OpenReplay is a session replay toolkit that lets you see what users do in your web app, and that you can host in your own cloud or on your own servers.
- **Session replay.** OpenReplay not only replays user actions, but also shows what happens under the hood of a session and how your site or app behaves, capturing network activity, console logs, JS errors, state-manager actions/state, page speed metrics, CPU/memory usage and much more.
- **Low footprint.** At only ~26KB (.br), the tracker asynchronously sends a minimal amount of data, having a very modest impact on your application's performance.
- **Self-hosted.** No more security compliance checks or third parties processing your users' data. Everything OpenReplay captures stays in your cloud, giving you complete control over your data.
- **Privacy controls.** Fine-grained privacy settings let you record only the data you really need.
- **Easy deploy.** We support all major cloud providers (AWS, GCP, Azure, DigitalOcean).
## Features
- **Session Replay:** Lets you relive your users' experience, see where they struggle and how it affects conversion. Each replay is automatically analyzed for errors and anomalies, making it much easier to triage and find problematic sessions.
- **DevTools:** Just like debugging in your own browser. OpenReplay gives you the full context (network activity, JS errors, state-manager actions/state and 40+ metrics) so you can instantly reproduce bugs and find performance issues.
- **Assist:** Lets you help your users by watching their screen in real time and instantly hopping on a call (WebRTC) with them, without requiring any third-party screen-sharing software.
- **Omni-search:** Search and filter by almost any user action/criteria, session attribute or technical event, so you can answer any question.
- **Funnels:** For surfacing the places that most affect conversion.
- **Fine-grained privacy controls:** Choose what to record and what to ignore, so user data doesn't even get sent to your servers.
- **Plugins oriented:** Plugins let you track application state (Redux, VueX, MobX, NgRx, Pinia and Zustand), log GraphQL queries (Apollo, Relay) and much more.
- **Integrations:** OpenReplay supports integrations with Sentry, Datadog, CloudWatch, Stackdriver, Elastic and other providers, letting you get even more information about a user session.
## Deployment Options
OpenReplay can be deployed anywhere. Follow our step-by-step guides for deploying it on major public clouds:
- [AWS](https://docs.openreplay.com/deployment/deploy-aws)
- [Google Cloud](https://docs.openreplay.com/deployment/deploy-gcp)
- [Azure](https://docs.openreplay.com/deployment/deploy-azure)
- [Digital Ocean](https://docs.openreplay.com/deployment/deploy-digitalocean)
- [Scaleway](https://docs.openreplay.com/deployment/deploy-scaleway)
- [OVHcloud](https://docs.openreplay.com/deployment/deploy-ovhcloud)
- [Kubernetes](https://docs.openreplay.com/deployment/deploy-kubernetes)
## OpenReplay Cloud
For those who just want to use OpenReplay as a service, [sign up](https://app.openreplay.com/signup) for a free account on our cloud offering.
## Community Support
If you run into problems, please refer to the [official OpenReplay documentation](https://docs.openreplay.com/). It should help you resolve the most common issues.
For additional help, you can reach out to us on one of these channels:
- [Slack](https://slack.openreplay.com) (Connect with our engineers and community)
- [GitHub](https://github.com/openreplay/openreplay/issues) (Bug and issue reports)
- [Twitter](https://twitter.com/OpenReplayHQ) (Product updates)
- [YouTube](https://www.youtube.com/channel/UCcnWlW-5wEuuPAwjTR1Ydxw) (Tutorials, past community calls)
- [Website chat](https://openreplay.com) (Talk to us)
## Contributing
We're always happy to get help building OpenReplay, and we'd love to hear your ideas. Not sure where to start? Look for open issues, especially those marked "good first issue".
See our [Contributing Guide](CONTRIBUTING.md) for more details.
Also, feel free to join our [Slack](https://slack.openreplay.com) to ask questions, discuss ideas or connect with our contributors.
## Roadmap
Check out our [roadmap](https://www.notion.so/openreplay/Roadmap-889d2c3d968b4786ab9b281ab2394a94) and keep an eye on what's coming next. You're free to [submit](https://github.com/openreplay/openreplay/issues/new) new ideas and vote on features.
## License
This monorepo uses several licenses. See [LICENSE](/LICENSE) for more details.

6
api/.dockerignore Normal file
View file

@ -0,0 +1,6 @@
# ignore .git and .cache folders
.git
.cache
**/build.sh
**/build_*.sh
**/*deploy.sh

8
api/.gitignore vendored
View file

@ -83,7 +83,7 @@ wheels/
.installed.cfg
*.egg
MANIFEST
Pipfile.lock
Pipfile
# PyInstaller
# Usually these files are written by a python script from a template
@ -143,7 +143,7 @@ celerybeat-schedule
# Environments
.env
.venv/*
.venv
env/
venv/
ENV/
@ -174,6 +174,4 @@ logs*.txt
SUBNETS.json
./chalicelib/.configs
README/*
.local
/.dev/
README/*

View file

@ -1 +0,0 @@
.venv

View file

@ -1,3 +0,0 @@
# Accept the risk until the recently fixed python setuptools lands.
# The fix is not yet available in distros.
CVE-2023-5363 exp:2023-12-31

View file

@ -1,31 +1,31 @@
FROM python:3.12-alpine AS builder
LABEL maintainer="Rajesh Rajendran<rjshrjndrn@gmail.com>"
LABEL maintainer="KRAIEM Taha Yassine<tahayk2@gmail.com>"
RUN apk add --no-cache build-base
WORKDIR /work
COPY requirements.txt ./requirements.txt
RUN pip install --no-cache-dir --upgrade uv && \
export UV_SYSTEM_PYTHON=true && \
uv pip install --no-cache-dir --upgrade pip setuptools wheel && \
uv pip install --no-cache-dir --upgrade -r requirements.txt
FROM python:3.12-alpine
ARG GIT_SHA
ARG envarg
FROM python:3.9.12-slim
LABEL Maintainer="Rajesh Rajendran<rjshrjndrn@gmail.com>"
LABEL Maintainer="KRAIEM Taha Yassine<tahayk2@gmail.com>"
ENV APP_NAME chalice
# Add Tini
# Startup daemon
ENV SOURCE_MAP_VERSION=0.7.4 \
APP_NAME=chalice \
LISTEN_PORT=8000 \
PRIVATE_ENDPOINTS=false \
ENTERPRISE_BUILD=${envarg} \
GIT_SHA=$GIT_SHA
COPY --from=builder /usr/local/lib/python3.12/site-packages /usr/local/lib/python3.12/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin
ENV TINI_VERSION v0.19.0
ARG envarg
ENV ENTERPRISE_BUILD ${envarg}
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
# Installing Nodejs
RUN apt update && apt install -y curl && \
curl -fsSL https://deb.nodesource.com/setup_12.x | bash - && \
apt install -y nodejs && \
apt remove --purge -y curl && \
rm -rf /var/lib/apt/lists/*
WORKDIR /work_tmp
COPY requirements.txt /work_tmp/requirements.txt
RUN pip install -r /work_tmp/requirements.txt
COPY sourcemap-reader/*.json /work_tmp/
RUN cd /work_tmp && npm install
WORKDIR /work
COPY . .
RUN apk add --no-cache tini && mv env.default .env
RUN mv env.default .env && mv /work_tmp/node_modules sourcemap-reader/.
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["./entrypoint.sh"]
ENTRYPOINT ["/tini", "--"]
CMD ./entrypoint.sh

23
api/Dockerfile.alerts Normal file
View file

@ -0,0 +1,23 @@
FROM python:3.9.12-slim
LABEL Maintainer="Rajesh Rajendran<rjshrjndrn@gmail.com>"
LABEL Maintainer="KRAIEM Taha Yassine<tahayk2@gmail.com>"
ENV APP_NAME alerts
ENV pg_minconn 2
ENV pg_maxconn 10
# Add Tini
# Startup daemon
ENV TINI_VERSION v0.19.0
ARG envarg
ENV ENTERPRISE_BUILD ${envarg}
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
COPY requirements.txt /work_tmp/requirements.txt
RUN pip install -r /work_tmp/requirements.txt
WORKDIR /work
COPY . .
RUN mv .env.default .env && mv app_alerts.py app.py && mv entrypoint_alerts.sh entrypoint.sh
ENTRYPOINT ["/tini", "--"]
CMD ./entrypoint.sh

27
api/Dockerfile.bundle Normal file
View file

@ -0,0 +1,27 @@
FROM python:3.9.12-slim
LABEL Maintainer="Rajesh Rajendran<rjshrjndrn@gmail.com>"
WORKDIR /work
COPY . .
COPY ../utilities ./utilities
RUN rm entrypoint.sh && rm .chalice/config.json
RUN mv entrypoint.bundle.sh entrypoint.sh && mv .chalice/config.bundle.json .chalice/config.json
RUN pip install -r requirements.txt -t ./vendor --upgrade
RUN pip install chalice==1.22.2
# Installing Nodejs
RUN apt update && apt install -y curl && \
curl -fsSL https://deb.nodesource.com/setup_12.x | bash - && \
apt install -y nodejs && \
apt remove --purge -y curl && \
rm -rf /var/lib/apt/lists/* && \
cd utilities && \
npm install
# Add Tini
# Startup daemon
ENV TINI_VERSION v0.19.0
ARG envarg
ENV ENTERPRISE_BUILD ${envarg}
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]
CMD ./entrypoint.sh

View file

@ -1,11 +0,0 @@
# ignore .git and .cache folders
.git
.cache
**/build.sh
**/build_*.sh
**/*deploy.sh
Dockerfile*
app_alerts.py
requirements-alerts.txt
entrypoint_alerts.sh

View file

@ -1,29 +0,0 @@
FROM python:3.12-alpine
LABEL Maintainer="Rajesh Rajendran<rjshrjndrn@gmail.com>"
LABEL Maintainer="KRAIEM Taha Yassine<tahayk2@gmail.com>"
ARG GIT_SHA
LABEL GIT_SHA=$GIT_SHA
RUN apk add --no-cache build-base tini
ARG envarg
ENV APP_NAME=alerts \
PG_MINCONN=1 \
PG_MAXCONN=10 \
LISTEN_PORT=8000 \
PRIVATE_ENDPOINTS=true \
GIT_SHA=$GIT_SHA \
ENTERPRISE_BUILD=${envarg}
WORKDIR /work
COPY requirements-alerts.txt ./requirements.txt
RUN pip install --no-cache-dir --upgrade uv
RUN uv pip install --no-cache-dir --upgrade pip setuptools wheel --system
RUN uv pip install --no-cache-dir --upgrade -r requirements.txt --system
COPY . .
RUN mv env.default .env && mv app_alerts.py app.py && mv entrypoint_alerts.sh entrypoint.sh
RUN adduser -u 1001 openreplay -D
USER 1001
ENTRYPOINT ["/sbin/tini", "--"]
CMD ./entrypoint.sh

View file

@ -1,11 +0,0 @@
# ignore .git and .cache folders
.git
.cache
**/build.sh
**/build_*.sh
**/*deploy.sh
Dockerfile*
app.py
entrypoint.sh
requirements.txt

View file

@ -1,43 +0,0 @@
#### autogenerated api frontend
The API can autogenerate a frontend that documents its interface and lets you play with it in a limited way. Make sure you have the following variables inside the current `.env`:
```
docs_url=/docs
root_path=''
```
If the `.env` in use is based on `env.default`, then that is already the case. Start, or restart, the http server, then go to `https://127.0.0.1:8000/docs`. That is autogenerated documentation based on the pydantic schemas, fastapi routes, and docstrings :wink:.
Happy experimenting, and then documenting!
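For a rough picture of how these two variables are consumed, here is a minimal sketch of the app construction (mirroring the `FastAPI(root_path=..., docs_url=...)` call visible in the `app.py` diff below; `config` is python-decouple, which reads the `.env` file):
```python
from decouple import config  # python-decouple: reads docs_url/root_path from .env
from fastapi import FastAPI

# With docs_url=/docs set in .env, the Swagger UI is served at <root_path>/docs;
# the empty-string default is falsy, so the autogenerated frontend stays disabled.
app = FastAPI(
    root_path=config("root_path", default="/api"),
    docs_url=config("docs_url", default=""),
)
```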
#### psycopg3 API
I keep misremembering the psycopg v2 vs. v3 API. For the record, psycopg3's expected async API looks like the following pseudo code:
```python
async with app.state.postgresql.connection() as cnx:
    async with cnx.transaction():
        cursor = await cnx.execute("SELECT EXISTS(SELECT 1 FROM public.tenants)")
        row = await cursor.fetchone()
        return row["exists"]
```
Mind the following:
- `app.state.postgresql` is the postgresql connection pooler.
- Wrap explicit transactions with `async with cnx.transaction():`;
- Most of the time the transaction object is not used;
- Execute await operations against `cnx`;
- `await cnx.execute` returns a cursor object;
- Do the `await cursor.fetchqux...` calls against the object returned by the call to execute.
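For completeness, the pool behind `app.state.postgresql` is created at application startup roughly like this (a sketch condensed from the `app.py` lifespan code in this diff; the connection parameters shown are the diff's own defaults, and `dict_row` is what makes `row["exists"]` work above):
```python
import psycopg_pool
from psycopg import AsyncConnection
from psycopg.rows import dict_row

class ORPYAsyncConnection(AsyncConnection):
    # Return rows as dicts so callers can do row["exists"]
    def __init__(self, *args, **kwargs):
        super().__init__(*args, row_factory=dict_row, **kwargs)

# Stored on app.state.postgresql during the FastAPI lifespan; in the real
# code the kwargs and pool sizes come from .env via python-decouple.
pool = psycopg_pool.AsyncConnectionPool(
    kwargs={"host": "localhost", "dbname": "orpy", "user": "orpy", "password": "orpy"},
    connection_class=ORPYAsyncConnection,
    min_size=1,
    max_size=5,
)
```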

View file

@ -1,29 +0,0 @@
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
urllib3 = "==2.3.0"
requests = "==2.32.3"
boto3 = "==1.36.12"
pyjwt = "==2.10.1"
psycopg2-binary = "==2.9.10"
psycopg = {extras = ["pool", "binary"], version = "==3.2.4"}
clickhouse-driver = {extras = ["lz4"], version = "==0.2.9"}
clickhouse-connect = "==0.8.15"
elasticsearch = "==8.17.1"
jira = "==3.8.0"
cachetools = "==5.5.1"
fastapi = "==0.115.8"
uvicorn = {extras = ["standard"], version = "==0.34.0"}
python-decouple = "==3.8"
pydantic = {extras = ["email"], version = "==2.10.6"}
apscheduler = "==3.11.0"
redis = "==5.2.1"
[dev-packages]
[requires]
python_version = "3.12"
python_full_version = "3.12.8"

View file

@ -1,99 +1,37 @@
import logging
import time
from contextlib import asynccontextmanager
import psycopg_pool
from apscheduler.schedulers.asyncio import AsyncIOScheduler
from decouple import config
from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.middleware.gzip import GZipMiddleware
from psycopg import AsyncConnection
from psycopg.rows import dict_row
from starlette.responses import StreamingResponse
from chalicelib.utils import helper
from chalicelib.utils import pg_client, ch_client
from crons import core_crons, core_dynamic_crons
from chalicelib.utils import pg_client
from routers import core, core_dynamic
from routers.subs import insights, metrics, v1_api, health, usability_tests, spot, product_anaytics
from routers.crons import core_crons
from routers.crons import core_dynamic_crons
from routers.subs import dashboard, insights, metrics, v1_api
loglevel = config("LOGLEVEL", default=logging.WARNING)
print(f">Loglevel set to: {loglevel}")
logging.basicConfig(level=loglevel)
class ORPYAsyncConnection(AsyncConnection):
def __init__(self, *args, **kwargs):
super().__init__(*args, row_factory=dict_row, **kwargs)
@asynccontextmanager
async def lifespan(app: FastAPI):
# Startup
logging.info(">>>>> starting up <<<<<")
ap_logger = logging.getLogger('apscheduler')
ap_logger.setLevel(loglevel)
app.schedule = AsyncIOScheduler()
await pg_client.init()
await ch_client.init()
app.schedule.start()
for job in core_crons.cron_jobs + core_dynamic_crons.cron_jobs:
app.schedule.add_job(id=job["func"].__name__, **job)
ap_logger.info(">Scheduled jobs:")
for job in app.schedule.get_jobs():
ap_logger.info({"Name": str(job.id), "Run Frequency": str(job.trigger), "Next Run": str(job.next_run_time)})
database = {
"host": config("pg_host", default="localhost"),
"dbname": config("pg_dbname", default="orpy"),
"user": config("pg_user", default="orpy"),
"password": config("pg_password", default="orpy"),
"port": config("pg_port", cast=int, default=5432),
"application_name": "AIO" + config("APP_NAME", default="PY"),
}
database = psycopg_pool.AsyncConnectionPool(kwargs=database, connection_class=ORPYAsyncConnection,
min_size=config("PG_AIO_MINCONN", cast=int, default=1),
max_size=config("PG_AIO_MAXCONN", cast=int, default=5), )
app.state.postgresql = database
# App listening
yield
# Shutdown
await database.close()
logging.info(">>>>> shutting down <<<<<")
app.schedule.shutdown(wait=False)
await pg_client.terminate()
app = FastAPI(root_path=config("root_path", default="/api"), docs_url=config("docs_url", default=""),
redoc_url=config("redoc_url", default=""), lifespan=lifespan)
app.add_middleware(GZipMiddleware, minimum_size=1000)
app = FastAPI(root_path="/api")
@app.middleware('http')
async def or_middleware(request: Request, call_next):
if helper.TRACK_TIME:
now = time.time()
global OR_SESSION_TOKEN
OR_SESSION_TOKEN = request.headers.get('vnd.openreplay.com.sid', request.headers.get('vnd.asayer.io.sid'))
try:
if helper.TRACK_TIME:
import time
now = int(time.time() * 1000)
response: StreamingResponse = await call_next(request)
except:
logging.error(f"{request.method}: {request.url.path} FAILED!")
raise
if response.status_code // 100 != 2:
logging.warning(f"{request.method}:{request.url.path} {response.status_code}!")
if helper.TRACK_TIME:
now = time.time() - now
if now > 2:
now = round(now, 2)
logging.warning(f"Execution time: {now} s for {request.method}: {request.url.path}")
response.headers["x-robots-tag"] = 'noindex, nofollow'
if helper.TRACK_TIME:
print(f"Execution time: {int(time.time() * 1000) - now} ms")
except Exception as e:
pg_client.close()
raise e
pg_client.close()
return response
@ -114,21 +52,19 @@ app.include_router(core.app_apikey)
app.include_router(core_dynamic.public_app)
app.include_router(core_dynamic.app)
app.include_router(core_dynamic.app_apikey)
app.include_router(dashboard.app)
app.include_router(metrics.app)
app.include_router(insights.app)
app.include_router(v1_api.app_apikey)
app.include_router(health.public_app)
app.include_router(health.app)
app.include_router(health.app_apikey)
app.include_router(usability_tests.public_app)
app.include_router(usability_tests.app)
app.include_router(usability_tests.app_apikey)
Schedule = AsyncIOScheduler()
Schedule.start()
app.include_router(spot.public_app)
app.include_router(spot.app)
app.include_router(spot.app_apikey)
for job in core_crons.cron_jobs + core_dynamic_crons.cron_jobs:
Schedule.add_job(id=job["func"].__name__, **job)
app.include_router(product_anaytics.public_app)
app.include_router(product_anaytics.app)
app.include_router(product_anaytics.app_apikey)
for job in Schedule.get_jobs():
print({"Name": str(job.id), "Run Frequency": str(job.trigger), "Next Run": str(job.next_run_time)})
logging.basicConfig(level=config("LOGLEVEL", default=logging.INFO))
logging.getLogger('apscheduler').setLevel(config("LOGLEVEL", default=logging.INFO))

View file

@ -1,48 +1,13 @@
import logging
from contextlib import asynccontextmanager
from apscheduler.schedulers.asyncio import AsyncIOScheduler
from decouple import config
from fastapi import FastAPI
from chalicelib.core.alerts import alerts_processor
from chalicelib.utils import pg_client
from chalicelib.core import alerts_processor
@asynccontextmanager
async def lifespan(app: FastAPI):
# Startup
ap_logger.info(">>>>> starting up <<<<<")
await pg_client.init()
app.schedule.start()
app.schedule.add_job(id="alerts_processor", **{"func": alerts_processor.process, "trigger": "interval",
"minutes": config("ALERTS_INTERVAL", cast=int, default=5),
"misfire_grace_time": 20})
ap_logger.info(">Scheduled jobs:")
for job in app.schedule.get_jobs():
ap_logger.info({"Name": str(job.id), "Run Frequency": str(job.trigger), "Next Run": str(job.next_run_time)})
# App listening
yield
# Shutdown
ap_logger.info(">>>>> shutting down <<<<<")
app.schedule.shutdown(wait=False)
await pg_client.terminate()
loglevel = config("LOGLEVEL", default=logging.INFO)
print(f">Loglevel set to: {loglevel}")
logging.basicConfig(level=loglevel)
ap_logger = logging.getLogger('apscheduler')
ap_logger.setLevel(loglevel)
app = FastAPI(root_path=config("root_path", default="/alerts"), docs_url=config("docs_url", default=""),
redoc_url=config("redoc_url", default=""), lifespan=lifespan)
app.schedule = AsyncIOScheduler()
ap_logger.info("============= ALERTS =============")
app = FastAPI()
print("============= ALERTS =============")
@app.get("/")
@ -50,16 +15,13 @@ async def root():
return {"status": "Running"}
@app.get("/health")
async def get_health_status():
return {"data": {
"health": True,
"details": {"version": config("version_number", default="unknown")}
}}
app.schedule = AsyncIOScheduler()
app.schedule.start()
app.schedule.add_job(id="alerts_processor", **{"func": alerts_processor.process, "trigger": "interval",
"minutes": config("ALERTS_INTERVAL", cast=int, default=5),
"misfire_grace_time": 20})
for job in app.schedule.get_jobs():
print({"Name": str(job.id), "Run Frequency": str(job.trigger), "Next Run": str(job.next_run_time)})
if config("LOCAL_DEV", default=False, cast=bool):
@app.get('/trigger', tags=["private"])
async def trigger_main_cron():
ap_logger.info("Triggering main cron")
alerts_processor.process()
logging.basicConfig(level=config("LOGLEVEL", default=logging.INFO))
logging.getLogger('apscheduler').setLevel(config("LOGLEVEL", default=logging.INFO))

View file

@ -1,4 +1,3 @@
import logging
from typing import Optional
from fastapi import Request
@ -9,8 +8,6 @@ from starlette.exceptions import HTTPException
from chalicelib.core import authorizers
from schemas import CurrentAPIContext
logger = logging.getLogger(__name__)
class APIKeyAuth(APIKeyHeader):
def __init__(self, auto_error: bool = True):
@ -25,7 +22,7 @@ class APIKeyAuth(APIKeyHeader):
detail="Invalid API Key",
)
r["authorizer_identity"] = "api_key"
logger.debug(r)
print(r)
request.state.authorizer_identity = "api_key"
request.state.currentContext = CurrentAPIContext(tenantId=r["tenantId"])
request.state.currentContext = CurrentAPIContext(tenant_id=r["tenantId"])
return request.state.currentContext

View file

@ -1,153 +1,59 @@
import datetime
import logging
from typing import Optional
from decouple import config
from fastapi import Request
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
from starlette import status
from starlette.exceptions import HTTPException
import schemas
from chalicelib.core import authorizers, users, spot
logger = logging.getLogger(__name__)
def _get_current_auth_context(request: Request, jwt_payload: dict) -> schemas.CurrentContext:
user = users.get(user_id=jwt_payload.get("userId", -1), tenant_id=jwt_payload.get("tenantId", -1))
if user is None:
logger.warning("User not found.")
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="User not found.")
request.state.authorizer_identity = "jwt"
request.state.currentContext = schemas.CurrentContext(tenantId=jwt_payload.get("tenantId", -1),
userId=jwt_payload.get("userId", -1),
email=user["email"],
role=user["role"])
return request.state.currentContext
from chalicelib.core import authorizers, users
from schemas import CurrentContext
class JWTAuth(HTTPBearer):
def __init__(self, auto_error: bool = True):
super(JWTAuth, self).__init__(auto_error=auto_error)
async def __call__(self, request: Request) -> Optional[schemas.CurrentContext]:
if request.url.path in ["/refresh", "/api/refresh"]:
return await self.__process_refresh_call(request)
elif request.url.path in ["/spot/refresh", "/api/spot/refresh"]:
return await self.__process_spot_refresh_call(request)
else:
credentials: HTTPAuthorizationCredentials = await super(JWTAuth, self).__call__(request)
if credentials:
if not credentials.scheme == "Bearer":
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
detail="Invalid authentication scheme.")
jwt_payload = authorizers.jwt_authorizer(scheme=credentials.scheme, token=credentials.credentials)
auth_exists = jwt_payload is not None and users.auth_exists(user_id=jwt_payload.get("userId", -1),
jwt_iat=jwt_payload.get("iat", 100))
if jwt_payload is None \
or jwt_payload.get("iat") is None or jwt_payload.get("aud") is None \
or not auth_exists:
if jwt_payload is not None:
logger.debug(jwt_payload)
if jwt_payload.get("iat") is None:
logger.debug("iat is None")
if jwt_payload.get("aud") is None:
logger.debug("aud is None")
if not auth_exists:
logger.warning("not users.auth_exists")
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="Invalid token or expired token.")
if jwt_payload.get("aud", "").startswith("spot") and not request.url.path.startswith("/spot"):
# Allow access to spot endpoints only
raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED,
detail="Unauthorized access (spot).")
elif jwt_payload.get("aud", "").startswith("front") and request.url.path.startswith("/spot"):
raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED,
detail="Unauthorized access endpoint reserved for Spot only.")
return _get_current_auth_context(request=request, jwt_payload=jwt_payload)
logger.warning("Invalid authorization code.")
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Invalid authorization code.")
async def __process_refresh_call(self, request: Request) -> schemas.CurrentContext:
if "refreshToken" not in request.cookies:
logger.warning("Missing refreshToken cookie.")
jwt_payload = None
else:
jwt_payload = authorizers.jwt_refresh_authorizer(scheme="Bearer", token=request.cookies["refreshToken"])
if jwt_payload is None or jwt_payload.get("jti") is None:
logger.warning("Null refreshToken's payload, or null JTI.")
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN,
detail="Invalid refresh-token or expired refresh-token.")
auth_exists = users.refresh_auth_exists(user_id=jwt_payload.get("userId", -1),
jwt_jti=jwt_payload["jti"])
if not auth_exists:
logger.warning("refreshToken's user not found.")
logger.warning(jwt_payload)
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN,
detail="Invalid refresh-token or expired refresh-token.")
async def __call__(self, request: Request) -> Optional[CurrentContext]:
credentials: HTTPAuthorizationCredentials = await super(JWTAuth, self).__call__(request)
if credentials:
if not credentials.scheme == "Bearer":
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
detail="Invalid authentication scheme.")
old_jwt_payload = authorizers.jwt_authorizer(scheme=credentials.scheme, token=credentials.credentials,
leeway=datetime.timedelta(
days=config("JWT_LEEWAY_DAYS", cast=int, default=3)
))
if old_jwt_payload is None \
or old_jwt_payload.get("userId") is None \
or old_jwt_payload.get("userId") != jwt_payload.get("userId"):
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Invalid authentication scheme.")
jwt_payload = authorizers.jwt_authorizer(credentials.scheme + " " + credentials.credentials)
auth_exists = jwt_payload is not None \
and users.auth_exists(user_id=jwt_payload.get("userId", -1),
tenant_id=jwt_payload.get("tenantId", -1),
jwt_iat=jwt_payload.get("iat", 100),
jwt_aud=jwt_payload.get("aud", ""))
if jwt_payload is None \
or jwt_payload.get("iat") is None or jwt_payload.get("aud") is None \
or not auth_exists:
print("JWTAuth: Token issue")
if jwt_payload is not None:
print(jwt_payload)
print(f"JWTAuth: user_id={jwt_payload.get('userId')} tenant_id={jwt_payload.get('tenantId')}")
if jwt_payload is None:
print("JWTAuth: jwt_payload is None")
print(credentials.scheme + " " + credentials.credentials)
if jwt_payload is not None and jwt_payload.get("iat") is None:
print("JWTAuth: iat is None")
if jwt_payload is not None and jwt_payload.get("aud") is None:
print("JWTAuth: aud is None")
if jwt_payload is not None and not auth_exists:
print("JWTAuth: not users.auth_exists")
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="Invalid token or expired token.")
user = users.get(user_id=jwt_payload.get("userId", -1), tenant_id=jwt_payload.get("tenantId", -1))
if user is None:
print("JWTAuth: User not found.")
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="User not found.")
jwt_payload["authorizer_identity"] = "jwt"
print(jwt_payload)
request.state.authorizer_identity = "jwt"
request.state.currentContext = CurrentContext(tenant_id=jwt_payload.get("tenantId", -1),
user_id=jwt_payload.get("userId", -1),
email=user["email"])
return request.state.currentContext
return _get_current_auth_context(request=request, jwt_payload=jwt_payload)
logger.warning("Invalid authorization code (refresh logic).")
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Invalid authorization code for refresh.")
async def __process_spot_refresh_call(self, request: Request) -> schemas.CurrentContext:
if "spotRefreshToken" not in request.cookies:
logger.warning("Missing soptRefreshToken cookie.")
jwt_payload = None
else:
jwt_payload = authorizers.jwt_refresh_authorizer(scheme="Bearer", token=request.cookies["spotRefreshToken"])
if jwt_payload is None or jwt_payload.get("jti") is None:
logger.warning("Null spotRefreshToken's payload, or null JTI.")
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN,
detail="Invalid spotRefreshToken or expired refresh-token.")
auth_exists = spot.refresh_auth_exists(user_id=jwt_payload.get("userId", -1),
jwt_jti=jwt_payload["jti"])
if not auth_exists:
logger.warning("spotRefreshToken's user not found.")
logger.warning(jwt_payload)
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN,
detail="Invalid spotRefreshToken or expired refresh-token.")
credentials: HTTPAuthorizationCredentials = await super(JWTAuth, self).__call__(request)
if credentials:
if not credentials.scheme == "Bearer":
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
detail="Invalid spot-authentication scheme.")
old_jwt_payload = authorizers.jwt_authorizer(scheme=credentials.scheme, token=credentials.credentials,
leeway=datetime.timedelta(
days=config("JWT_LEEWAY_DAYS", cast=int, default=3)
))
if old_jwt_payload is None \
or old_jwt_payload.get("userId") is None \
or old_jwt_payload.get("userId") != jwt_payload.get("userId"):
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN,
detail="Invalid spot-token or expired token.")
return _get_current_auth_context(request=request, jwt_payload=jwt_payload)
logger.warning("Invalid authorization code (spot-refresh logic).")
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
detail="Invalid authorization code for spot-refresh.")
print("JWTAuth: Invalid authorization code.")
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Invalid authorization code.")

View file

@ -1,5 +1,3 @@
import logging
from fastapi import Request
from starlette import status
from starlette.exceptions import HTTPException
@ -8,8 +6,6 @@ import schemas
from chalicelib.core import projects
from or_dependencies import OR_context
logger = logging.getLogger(__name__)
class ProjectAuthorizer:
def __init__(self, project_identifier):
@ -19,20 +15,10 @@ class ProjectAuthorizer:
if len(request.path_params.keys()) == 0 or request.path_params.get(self.project_identifier) is None:
return
current_user: schemas.CurrentContext = await OR_context(request)
value = request.path_params[self.project_identifier]
current_project = None
if self.project_identifier == "projectId" \
and (isinstance(value, int) or isinstance(value, str) and value.isnumeric()):
current_project = projects.get_project(project_id=value, tenant_id=current_user.tenant_id)
elif self.project_identifier == "projectKey":
current_project = projects.get_by_project_key(project_key=value)
if current_project is None:
logger.debug(f"unauthorized project {self.project_identifier}:{value}")
project_identifier = request.path_params[self.project_identifier]
if (self.project_identifier == "projectId" \
and projects.get_project(project_id=project_identifier, tenant_id=current_user.tenant_id) is None) \
or (self.project_identifier.lower() == "projectKey" \
and projects.get_internal_project_id(project_key=project_identifier) is None):
print("project not found")
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="project not found.")
else:
current_project = schemas.ProjectContext(projectId=current_project["projectId"],
projectKey=current_project["projectKey"],
platform=current_project["platform"],
name=current_project["name"])
request.state.currentContext.project = current_project

View file

@ -7,22 +7,8 @@
# Usage: IMAGE_TAG=latest DOCKER_REPO=myDockerHubID bash build.sh <ee>
# Helper function
exit_err() {
err_code=$1
if [[ $err_code != 0 ]]; then
exit $err_code
fi
}
source ../scripts/lib/_docker.sh
ARCH=${ARCH:-'amd64'}
environment=$1
git_sha=$(git rev-parse --short HEAD)
image_tag=${IMAGE_TAG:-$git_sha}
git_sha1=${IMAGE_TAG:-$(git rev-parse HEAD)}
envarg="default-foss"
chart="chalice"
check_prereq() {
which docker || {
echo "Docker not installed, please install docker."
@ -31,36 +17,9 @@ check_prereq() {
return
}
[[ $1 == ee ]] && ee=true
[[ $PATCH -eq 1 ]] && {
image_tag="$(grep -ER ^.ppVersion ../scripts/helmcharts/openreplay/charts/$chart | xargs | awk '{print $2}' | awk -F. -v OFS=. '{$NF += 1 ; print}')"
[[ $ee == "true" ]] && {
image_tag="${image_tag}-ee"
}
}
update_helm_release() {
[[ $ee == "true" ]] && return
HELM_TAG="$(grep -iER ^version ../scripts/helmcharts/openreplay/charts/$chart | awk '{print $2}' | awk -F. -v OFS=. '{$NF += 1 ; print}')"
# Update the chart version
sed -i "s#^version.*#version: $HELM_TAG# g" ../scripts/helmcharts/openreplay/charts/$chart/Chart.yaml
# Update image tags
sed -i "s#ppVersion.*#ppVersion: \"$image_tag\"#g" ../scripts/helmcharts/openreplay/charts/$chart/Chart.yaml
# Commit the changes
git add ../scripts/helmcharts/openreplay/charts/$chart/Chart.yaml
git commit -m "chore(helm): Updating $chart image release"
}
function build_api() {
destination="_api"
[[ $1 == "ee" ]] && {
destination="_api_ee"
}
[[ -d ../${destination} ]] && {
echo "Removing previous build cache"
rm -rf ../${destination}
}
cp -R ../api ../${destination}
cd ../${destination} || exit_err 100
function build_api(){
cp -R ../utilities/utils ../sourcemap-reader/.
cp -R ../sourcemap-reader .
tag=""
# Copy enterprise code
[[ $1 == "ee" ]] && {
@ -68,24 +27,16 @@ function build_api() {
envarg="default-ee"
tag="ee-"
}
mv Dockerfile.dockerignore .dockerignore
docker build -f ./Dockerfile --platform linux/${ARCH} --build-arg envarg=$envarg --build-arg GIT_SHA=$git_sha -t ${DOCKER_REPO:-'local'}/${IMAGE_NAME:-'chalice'}:${image_tag} .
cd ../api || exit_err 100
rm -rf ../${destination}
docker build -f ./Dockerfile --build-arg envarg=$envarg -t ${DOCKER_REPO:-'local'}/chalice:${git_sha1} .
[[ $PUSH_IMAGE -eq 1 ]] && {
docker push ${DOCKER_REPO:-'local'}/${IMAGE_NAME:-'chalice'}:${image_tag}
docker tag ${DOCKER_REPO:-'local'}/${IMAGE_NAME:-'chalice'}:${image_tag} ${DOCKER_REPO:-'local'}/chalice:${tag}latest
docker push ${DOCKER_REPO:-'local'}/${IMAGE_NAME:-'chalice'}:${tag}latest
}
[[ $SIGN_IMAGE -eq 1 ]] && {
cosign sign --key $SIGN_KEY ${DOCKER_REPO:-'local'}/${IMAGE_NAME:-'chalice'}:${image_tag}
docker push ${DOCKER_REPO:-'local'}/chalice:${git_sha1}
docker tag ${DOCKER_REPO:-'local'}/chalice:${git_sha1} ${DOCKER_REPO:-'local'}/chalice:${tag}latest
docker push ${DOCKER_REPO:-'local'}/chalice:${tag}latest
}
echo "api docker build completed"
}
check_prereq
build_api $environment
build_api $1
echo build_complete
if [[ $PATCH -eq 1 ]]; then
update_helm_release
fi
IMAGE_TAG=$IMAGE_TAG PUSH_IMAGE=$PUSH_IMAGE DOCKER_REPO=$DOCKER_REPO bash build_alerts.sh $1

View file

@ -7,43 +7,46 @@
# Usage: IMAGE_TAG=latest DOCKER_REPO=myDockerHubID bash build.sh <ee>
git_sha=$(git rev-parse --short HEAD)
image_tag=${IMAGE_TAG:-$git_sha}
function make_submodule() {
[[ $1 != "ee" ]] && {
# -- this part was generated by modules_lister.py --
mkdir alerts
cp -R ./{app_alerts,schemas}.py ./alerts/
mkdir -p ./alerts/chalicelib/
cp -R ./chalicelib/__init__.py ./alerts/chalicelib/
mkdir -p ./alerts/chalicelib/core/
cp -R ./chalicelib/core/{__init__,alerts_processor,alerts_listener,sessions,events,issues,sessions_metas,metadata,projects,users,authorizers,tenants,assist,events_ios,sessions_mobs,errors,sourcemaps,sourcemaps_parser,resources,performance_event,alerts,notifications,slack,collaboration_slack,webhook}.py ./alerts/chalicelib/core/
mkdir -p ./alerts/chalicelib/utils/
cp -R ./chalicelib/utils/{__init__,TimeUTC,pg_client,helper,event_filter_definition,dev,email_helper,email_handler,smtp,s3,metrics_helper}.py ./alerts/chalicelib/utils/
# -- end of generated part
}
[[ $1 == "ee" ]] && {
# -- this part was generated by modules_lister.py --
mkdir alerts
cp -R ./{app_alerts,schemas,schemas_ee}.py ./alerts/
mkdir -p ./alerts/chalicelib/
cp -R ./chalicelib/__init__.py ./alerts/chalicelib/
mkdir -p ./alerts/chalicelib/core/
cp -R ./chalicelib/core/{__init__,alerts_processor,alerts_listener,sessions,events,issues,sessions_metas,metadata,projects,users,authorizers,tenants,roles,assist,events_ios,sessions_mobs,errors,metrics,sourcemaps,sourcemaps_parser,resources,performance_event,alerts,notifications,slack,collaboration_slack,webhook}.py ./alerts/chalicelib/core/
mkdir -p ./alerts/chalicelib/utils/
cp -R ./chalicelib/utils/{__init__,TimeUTC,pg_client,helper,event_filter_definition,dev,SAML2_helper,email_helper,email_handler,smtp,s3,args_transformer,ch_client,metrics_helper}.py ./alerts/chalicelib/utils/
# -- end of generated part
}
cp -R ./{Dockerfile.alerts,requirements.txt,.env.default,entrypoint_alerts.sh} ./alerts/
cp -R ./chalicelib/utils/html ./alerts/chalicelib/utils/html
}
git_sha1=${IMAGE_TAG:-$(git rev-parse HEAD)}
envarg="default-foss"
source ../scripts/lib/_docker.sh
check_prereq() {
which docker || {
echo "Docker not installed, please install docker."
exit 1
exit=1
}
[[ exit -eq 1 ]] && exit 1
}
[[ $1 == ee ]] && ee=true
[[ $PATCH -eq 1 ]] && {
image_tag="$(grep -ER ^.ppVersion ../scripts/helmcharts/openreplay/charts/$chart | xargs | awk '{print $2}' | awk -F. -v OFS=. '{$NF += 1 ; print}')"
[[ $ee == "true" ]] && {
image_tag="${image_tag}-ee"
}
}
update_helm_release() {
chart=$1
HELM_TAG="$(grep -iER ^version ../scripts/helmcharts/openreplay/charts/$chart | awk '{print $2}' | awk -F. -v OFS=. '{$NF += 1 ; print}')"
# Update the chart version
sed -i "s#^version.*#version: $HELM_TAG# g" ../scripts/helmcharts/openreplay/charts/$chart/Chart.yaml
# Update image tags
sed -i "s#ppVersion.*#ppVersion: \"$image_tag\"#g" ../scripts/helmcharts/openreplay/charts/$chart/Chart.yaml
# Commit the changes
git add ../scripts/helmcharts/openreplay/charts/$chart/Chart.yaml
git commit -m "chore(helm): Updating $chart image release"
}
function build_alerts() {
destination="_alerts"
[[ $1 == "ee" ]] && {
destination="_alerts_ee"
}
cp -R ../api ../${destination}
cd ../${destination}
function build_api(){
tag=""
# Copy enterprise code
[[ $1 == "ee" ]] && {
@ -51,23 +54,18 @@ function build_alerts() {
envarg="default-ee"
tag="ee-"
}
mv Dockerfile_alerts.dockerignore .dockerignore
docker build -f ./Dockerfile_alerts --platform linux/${ARCH:-"amd64"} --build-arg envarg=$envarg --build-arg GIT_SHA=$git_sha -t ${DOCKER_REPO:-'local'}/alerts:${image_tag} .
cd ../api
rm -rf ../${destination}
make_submodule $1
cd alerts
docker build -f ./Dockerfile.alerts --build-arg envarg=$envarg -t ${DOCKER_REPO:-'local'}/alerts:${git_sha1} .
cd ..
rm -rf alerts
[[ $PUSH_IMAGE -eq 1 ]] && {
docker push ${DOCKER_REPO:-'local'}/alerts:${image_tag}
docker tag ${DOCKER_REPO:-'local'}/alerts:${image_tag} ${DOCKER_REPO:-'local'}/alerts:${tag}latest
docker push ${DOCKER_REPO:-'local'}/alerts:${git_sha1}
docker tag ${DOCKER_REPO:-'local'}/alerts:${git_sha1} ${DOCKER_REPO:-'local'}/alerts:${tag}latest
docker push ${DOCKER_REPO:-'local'}/alerts:${tag}latest
}
[[ $SIGN_IMAGE -eq 1 ]] && {
cosign sign --key $SIGN_KEY ${DOCKER_REPO:-'local'}/alerts:${image_tag}
}
echo "completed alerts build"
echo "completed alerts build"
}
check_prereq
build_alerts $1
if [[ $PATCH -eq 1 ]]; then
update_helm_release alerts
fi
build_api $1

View file

@ -1,52 +0,0 @@
#!/bin/bash
# Script to build crons module
# flags to accept:
# envarg: build for enterprise edition.
# Default will be OSS build.
# Usage: IMAGE_TAG=latest DOCKER_REPO=myDockerHubID bash build.sh <ee>
git_sha1=${IMAGE_TAG:-$(git rev-parse HEAD)}
envarg="default-foss"
source ../scripts/lib/_docker.sh
check_prereq() {
which docker || {
echo "Docker not installed, please install docker."
exit=1
}
[[ exit -eq 1 ]] && exit 1
}
function build_crons() {
destination="_crons_ee"
cp -R ../api ../${destination}
cd ../${destination}
tag=""
# Copy enterprise code
cp -rf ../ee/api/* ./
envarg="default-ee"
tag="ee-"
mv Dockerfile_crons.dockerignore .dockerignore
docker build -f ./Dockerfile_crons --platform=linux/${ARCH:-'amd64'} --build-arg envarg=$envarg -t ${DOCKER_REPO:-'local'}/crons:${git_sha1} .
cd ../api
rm -rf ../${destination}
[[ $PUSH_IMAGE -eq 1 ]] && {
docker push ${DOCKER_REPO:-'local'}/crons:${git_sha1}
docker tag ${DOCKER_REPO:-'local'}/crons:${git_sha1} ${DOCKER_REPO:-'local'}/crons:${tag}latest
docker push ${DOCKER_REPO:-'local'}/crons:${tag}latest
}
[[ $SIGN_IMAGE -eq 1 ]] && {
cosign sign --key $SIGN_KEY ${DOCKER_REPO:-'local'}/crons:${git_sha1}
}
echo "completed crons build"
}
check_prereq
[[ $1 == "ee" ]] && {
build_crons $1
} || {
echo -e "Crons is only for ee. Rerun the script using \n bash $0 ee"
exit 100
}
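
Both build scripts share the same tagging scheme: an image is pushed under the commit SHA (or IMAGE_TAG), then re-tagged as latest, with an ee- prefix for enterprise builds. A hedged sketch of that naming convention (the helper name is illustrative; the scripts inline this logic):

def image_refs(repo: str, name: str, git_sha: str, ee: bool) -> list[str]:
    tag = "ee-" if ee else ""
    # the scripts push <repo>/<name>:<sha> first, then <repo>/<name>:[ee-]latest
    return [f"{repo}/{name}:{git_sha}", f"{repo}/{name}:{tag}latest"]

assert image_refs("local", "crons", "abc123", ee=True) == ["local/crons:abc123", "local/crons:ee-latest"]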


@@ -0,0 +1,170 @@
import json
import logging
import time
import schemas
from chalicelib.core import notifications, slack, webhook
from chalicelib.utils import pg_client, helper, email_helper
from chalicelib.utils.TimeUTC import TimeUTC
def get(id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
SELECT *
FROM public.alerts
WHERE alert_id =%(id)s;""",
{"id": id})
)
a = helper.dict_to_camel_case(cur.fetchone())
return helper.custom_alert_to_front(__process_circular(a))
def get_all(project_id):
with pg_client.PostgresClient() as cur:
query = cur.mogrify("""\
SELECT *
FROM public.alerts
WHERE project_id =%(project_id)s AND deleted_at ISNULL
ORDER BY created_at;""",
{"project_id": project_id})
cur.execute(query=query)
all = helper.list_to_camel_case(cur.fetchall())
for i in range(len(all)):
all[i] = helper.custom_alert_to_front(__process_circular(all[i]))
return all
def __process_circular(alert):
if alert is None:
return None
alert.pop("deletedAt")
alert["createdAt"] = TimeUTC.datetime_to_timestamp(alert["createdAt"])
return alert
def create(project_id, data: schemas.AlertSchema):
data = data.dict()
data["query"] = json.dumps(data["query"])
data["options"] = json.dumps(data["options"])
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
INSERT INTO public.alerts(project_id, name, description, detection_method, query, options, series_id)
VALUES (%(project_id)s, %(name)s, %(description)s, %(detection_method)s, %(query)s, %(options)s::jsonb, %(series_id)s)
RETURNING *;""",
{"project_id": project_id, **data})
)
a = helper.dict_to_camel_case(cur.fetchone())
return {"data": helper.custom_alert_to_front(helper.dict_to_camel_case(__process_circular(a)))}
def update(id, data: schemas.AlertSchema):
data = data.dict()
data["query"] = json.dumps(data["query"])
data["options"] = json.dumps(data["options"])
with pg_client.PostgresClient() as cur:
query = cur.mogrify("""\
UPDATE public.alerts
SET name = %(name)s,
description = %(description)s,
active = TRUE,
detection_method = %(detection_method)s,
query = %(query)s,
options = %(options)s,
series_id = %(series_id)s
WHERE alert_id =%(id)s AND deleted_at ISNULL
RETURNING *;""",
{"id": id, **data})
cur.execute(query=query)
a = helper.dict_to_camel_case(cur.fetchone())
return {"data": helper.custom_alert_to_front(__process_circular(a))}
def process_notifications(data):
full = {}
for n in data:
if "message" in n["options"]:
webhook_data = {}
if "data" in n["options"]:
webhook_data = n["options"].pop("data")
for c in n["options"].pop("message"):
if c["type"] not in full:
full[c["type"]] = []
if c["type"] in ["slack", "email"]:
full[c["type"]].append({
"notification": n,
"destination": c["value"]
})
elif c["type"] in ["webhook"]:
full[c["type"]].append({"data": webhook_data, "destination": c["value"]})
notifications.create(data)
BATCH_SIZE = 200
for t in full.keys():
for i in range(0, len(full[t]), BATCH_SIZE):
notifications_list = full[t][i:i + BATCH_SIZE]
if t == "slack":
try:
slack.send_batch(notifications_list=notifications_list)
except Exception as e:
logging.error("!!!Error while sending slack notifications batch")
logging.error(str(e))
elif t == "email":
try:
send_by_email_batch(notifications_list=notifications_list)
except Exception as e:
logging.error("!!!Error while sending email notifications batch")
logging.error(str(e))
elif t == "webhook":
try:
webhook.trigger_batch(data_list=notifications_list)
except Exception as e:
logging.error("!!!Error while sending webhook notifications batch")
logging.error(str(e))
def send_by_email(notification, destination):
if notification is None:
return
email_helper.alert_email(recipients=destination,
subject=f'"{notification["title"]}" has been triggered',
data={
"message": f'"{notification["title"]}" {notification["description"]}',
"project_id": notification["options"]["projectId"]})
def send_by_email_batch(notifications_list):
if notifications_list is None or len(notifications_list) == 0:
return
for n in notifications_list:
send_by_email(notification=n.get("notification"), destination=n.get("destination"))
time.sleep(1)
def delete(project_id, alert_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
UPDATE public.alerts
SET
deleted_at = timezone('utc'::text, now()),
active = FALSE
WHERE
alert_id = %(alert_id)s AND project_id=%(project_id)s;""",
{"alert_id": alert_id, "project_id": project_id})
)
return {"data": {"state": "success"}}
def get_predefined_values():
values = [e.value for e in schemas.AlertColumn]
values = [{"name": v, "value": v,
"unit": "count" if v.endswith(".count") else "ms",
"predefined": True,
"metricId": None,
"seriesId": None} for v in values if v != schemas.AlertColumn.custom]
return values
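
process_notifications above groups each triggered alert by notification channel and flushes the buckets in batches of BATCH_SIZE. A simplified, dependency-free sketch of that fan-out (the sender callables stand in for the slack/email/webhook modules; webhook payload handling is omitted):

BATCH_SIZE = 200

def fan_out(notifications, senders):
    buckets = {}
    for n in notifications:
        for c in n["options"].pop("message", []):
            buckets.setdefault(c["type"], []).append(
                {"notification": n, "destination": c["value"]})
    for channel, items in buckets.items():
        for i in range(0, len(items), BATCH_SIZE):
            # the real code wraps each batch in try/except and logs failures
            senders[channel](items[i:i + BATCH_SIZE])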


@@ -1,10 +0,0 @@
import logging
from decouple import config
logger = logging.getLogger(__name__)
if config("EXP_ALERTS", cast=bool, default=False):
logging.info(">>> Using experimental alerts")
from . import alerts_processor_ch as alerts_processor
else:
from . import alerts_processor as alerts_processor


@@ -1,235 +0,0 @@
import json
import logging
import time
from datetime import datetime
from decouple import config
import schemas
from chalicelib.core import notifications, webhook
from chalicelib.core.collaborations.collaboration_msteams import MSTeams
from chalicelib.core.collaborations.collaboration_slack import Slack
from chalicelib.utils import pg_client, helper, email_helper, smtp
from chalicelib.utils.TimeUTC import TimeUTC
logger = logging.getLogger(__name__)
def get(id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
SELECT *
FROM public.alerts
WHERE alert_id =%(id)s;""",
{"id": id})
)
a = helper.dict_to_camel_case(cur.fetchone())
return helper.custom_alert_to_front(__process_circular(a))
def get_all(project_id):
with pg_client.PostgresClient() as cur:
query = cur.mogrify("""\
SELECT alerts.*,
COALESCE(metrics.name || '.' || (COALESCE(metric_series.name, 'series ' || index)) || '.count',
query ->> 'left') AS series_name
FROM public.alerts
LEFT JOIN metric_series USING (series_id)
LEFT JOIN metrics USING (metric_id)
WHERE alerts.project_id =%(project_id)s
AND alerts.deleted_at ISNULL
ORDER BY alerts.created_at;""",
{"project_id": project_id})
cur.execute(query=query)
all = helper.list_to_camel_case(cur.fetchall())
for i in range(len(all)):
all[i] = helper.custom_alert_to_front(__process_circular(all[i]))
return all
def __process_circular(alert):
if alert is None:
return None
alert.pop("deletedAt")
alert["createdAt"] = TimeUTC.datetime_to_timestamp(alert["createdAt"])
return alert
def create(project_id, data: schemas.AlertSchema):
data = data.model_dump()
data["query"] = json.dumps(data["query"])
data["options"] = json.dumps(data["options"])
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
INSERT INTO public.alerts(project_id, name, description, detection_method, query, options, series_id, change)
VALUES (%(project_id)s, %(name)s, %(description)s, %(detection_method)s, %(query)s, %(options)s::jsonb, %(series_id)s, %(change)s)
RETURNING *;""",
{"project_id": project_id, **data})
)
a = helper.dict_to_camel_case(cur.fetchone())
return {"data": helper.custom_alert_to_front(helper.dict_to_camel_case(__process_circular(a)))}
def update(id, data: schemas.AlertSchema):
data = data.model_dump()
data["query"] = json.dumps(data["query"])
data["options"] = json.dumps(data["options"])
with pg_client.PostgresClient() as cur:
query = cur.mogrify("""\
UPDATE public.alerts
SET name = %(name)s,
description = %(description)s,
active = TRUE,
detection_method = %(detection_method)s,
query = %(query)s,
options = %(options)s,
series_id = %(series_id)s,
change = %(change)s
WHERE alert_id =%(id)s AND deleted_at ISNULL
RETURNING *;""",
{"id": id, **data})
cur.execute(query=query)
a = helper.dict_to_camel_case(cur.fetchone())
return {"data": helper.custom_alert_to_front(__process_circular(a))}
def process_notifications(data):
full = {}
for n in data:
if "message" in n["options"]:
webhook_data = {}
if "data" in n["options"]:
webhook_data = n["options"].pop("data")
for c in n["options"].pop("message"):
if c["type"] not in full:
full[c["type"]] = []
if c["type"] in ["slack", "msteams", "email"]:
full[c["type"]].append({
"notification": n,
"destination": c["value"]
})
elif c["type"] in ["webhook"]:
full[c["type"]].append({"data": webhook_data, "destination": c["value"]})
notifications.create(data)
BATCH_SIZE = 200
for t in full.keys():
for i in range(0, len(full[t]), BATCH_SIZE):
notifications_list = full[t][i:min(i + BATCH_SIZE, len(full[t]))]
if notifications_list is None or len(notifications_list) == 0:
break
if t == "slack":
try:
send_to_slack_batch(notifications_list=notifications_list)
except Exception as e:
logger.error("!!!Error while sending slack notifications batch")
logger.error(str(e))
elif t == "msteams":
try:
send_to_msteams_batch(notifications_list=notifications_list)
except Exception as e:
logger.error("!!!Error while sending msteams notifications batch")
logger.error(str(e))
elif t == "email":
try:
send_by_email_batch(notifications_list=notifications_list)
except Exception as e:
logger.error("!!!Error while sending email notifications batch")
logger.error(str(e))
elif t == "webhook":
try:
webhook.trigger_batch(data_list=notifications_list)
except Exception as e:
logger.error("!!!Error while sending webhook notifications batch")
logger.error(str(e))
def send_by_email(notification, destination):
if notification is None:
return
email_helper.alert_email(recipients=destination,
subject=f'"{notification["title"]}" has been triggered',
data={
"message": f'"{notification["title"]}" {notification["description"]}',
"project_id": notification["options"]["projectId"]})
def send_by_email_batch(notifications_list):
if not smtp.has_smtp():
logger.info("no SMTP configuration for email notifications")
if notifications_list is None or len(notifications_list) == 0:
logger.info("no email notifications")
return
for n in notifications_list:
send_by_email(notification=n.get("notification"), destination=n.get("destination"))
time.sleep(1)
def send_to_slack_batch(notifications_list):
webhookId_map = {}
for n in notifications_list:
if n.get("destination") not in webhookId_map:
webhookId_map[n.get("destination")] = {"tenantId": n["notification"]["tenantId"], "batch": []}
webhookId_map[n.get("destination")]["batch"].append({"text": n["notification"]["description"] \
+ f"\n<{config('SITE_URL')}{n['notification']['buttonUrl']}|{n['notification']['buttonText']}>",
"title": n["notification"]["title"],
"title_link": n["notification"]["buttonUrl"],
"ts": datetime.now().timestamp()})
for batch in webhookId_map.keys():
Slack.send_batch(tenant_id=webhookId_map[batch]["tenantId"], webhook_id=batch,
attachments=webhookId_map[batch]["batch"])
def send_to_msteams_batch(notifications_list):
webhookId_map = {}
for n in notifications_list:
if n.get("destination") not in webhookId_map:
webhookId_map[n.get("destination")] = {"tenantId": n["notification"]["tenantId"], "batch": []}
link = f"{config('SITE_URL')}{n['notification']['buttonUrl']}"
# for MSTeams, the batch is the list of `sections`
webhookId_map[n.get("destination")]["batch"].append(
{
"activityTitle": n["notification"]["title"],
"activitySubtitle": f"On Project *{n['notification']['projectName']}*",
"facts": [
{
"name": "Target:",
"value": link
},
{
"name": "Description:",
"value": n["notification"]["description"]
}],
"markdown": True
}
)
for batch in webhookId_map.keys():
MSTeams.send_batch(tenant_id=webhookId_map[batch]["tenantId"], webhook_id=batch,
attachments=webhookId_map[batch]["batch"])
def delete(project_id, alert_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(""" UPDATE public.alerts
SET deleted_at = timezone('utc'::text, now()),
active = FALSE
WHERE alert_id = %(alert_id)s AND project_id=%(project_id)s;""",
{"alert_id": alert_id, "project_id": project_id})
)
return {"data": {"state": "success"}}
def get_predefined_values():
values = [e.value for e in schemas.AlertColumn]
values = [{"name": v, "value": v,
"unit": "count" if v.endswith(".count") else "ms",
"predefined": True,
"metricId": None,
"seriesId": None} for v in values if v != schemas.AlertColumn.CUSTOM]
return values
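
For MS Teams, the per-destination batch built above becomes the sections array of a MessageCard. A hedged sketch of the final webhook body (assumption: MSTeams.send_batch wraps the sections roughly this way; the actual wrapper lives in collaboration_msteams, and all values here are illustrative):

payload = {
    "@type": "MessageCard",
    "@context": "http://schema.org/extensions",
    "summary": "OpenReplay alert",
    "sections": [{
        "activityTitle": "High JS error count",   # notification title
        "activitySubtitle": "On Project *web*",
        "facts": [
            {"name": "Target:", "value": "https://example.com/1/metrics"},
            {"name": "Description:", "value": "errors.javascript.count = 120 (> 100)"},
        ],
        "markdown": True,
    }],
}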


@@ -1,33 +0,0 @@
from chalicelib.core.alerts.modules import TENANT_ID
from chalicelib.utils import pg_client, helper
def get_all_alerts():
with pg_client.PostgresClient(long_query=True) as cur:
query = f"""SELECT {TENANT_ID} AS tenant_id,
alert_id,
projects.project_id,
projects.name AS project_name,
detection_method,
query,
options,
(EXTRACT(EPOCH FROM alerts.created_at) * 1000)::BIGINT AS created_at,
alerts.name,
alerts.series_id,
filter,
change,
COALESCE(metrics.name || '.' || (COALESCE(metric_series.name, 'series ' || index)) || '.count',
query ->> 'left') AS series_name
FROM public.alerts
INNER JOIN projects USING (project_id)
LEFT JOIN metric_series USING (series_id)
LEFT JOIN metrics USING (metric_id)
WHERE alerts.deleted_at ISNULL
AND alerts.active
AND projects.active
AND projects.deleted_at ISNULL
AND (alerts.series_id ISNULL OR metric_series.deleted_at ISNULL)
ORDER BY alerts.created_at;"""
cur.execute(query=query)
all_alerts = helper.list_to_camel_case(cur.fetchall())
return all_alerts
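
get_all_alerts relies on helper.list_to_camel_case to turn the snake_case column names returned by Postgres into the camelCase keys the processors read (e.g. alert["projectId"]). An illustrative re-implementation of that conversion (the real one lives in chalicelib.utils.helper):

def to_camel(key: str) -> str:
    head, *rest = key.split("_")
    return head + "".join(part.capitalize() for part in rest)

assert to_camel("detection_method") == "detectionMethod"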


@@ -1,169 +0,0 @@
import logging
from pydantic_core._pydantic_core import ValidationError
import schemas
from chalicelib.core.alerts import alerts, alerts_listener
from chalicelib.core.alerts.modules import alert_helpers
from chalicelib.core.sessions import sessions_pg as sessions
from chalicelib.utils import pg_client
from chalicelib.utils.TimeUTC import TimeUTC
logger = logging.getLogger(__name__)
LeftToDb = {
schemas.AlertColumn.PERFORMANCE__DOM_CONTENT_LOADED__AVERAGE: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "COALESCE(AVG(NULLIF(dom_content_loaded_time ,0)),0)"},
schemas.AlertColumn.PERFORMANCE__FIRST_MEANINGFUL_PAINT__AVERAGE: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "COALESCE(AVG(NULLIF(first_contentful_paint_time,0)),0)"},
schemas.AlertColumn.PERFORMANCE__PAGE_LOAD_TIME__AVERAGE: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)", "formula": "AVG(NULLIF(load_time ,0))"},
schemas.AlertColumn.PERFORMANCE__DOM_BUILD_TIME__AVERAGE: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "AVG(NULLIF(dom_building_time,0))"},
schemas.AlertColumn.PERFORMANCE__SPEED_INDEX__AVERAGE: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)", "formula": "AVG(NULLIF(speed_index,0))"},
schemas.AlertColumn.PERFORMANCE__PAGE_RESPONSE_TIME__AVERAGE: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "AVG(NULLIF(response_time,0))"},
schemas.AlertColumn.PERFORMANCE__TTFB__AVERAGE: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "AVG(NULLIF(first_paint_time,0))"},
schemas.AlertColumn.PERFORMANCE__TIME_TO_RENDER__AVERAGE: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "AVG(NULLIF(visually_complete,0))"},
schemas.AlertColumn.PERFORMANCE__CRASHES__COUNT: {
"table": "public.sessions",
"formula": "COUNT(DISTINCT session_id)",
"condition": "errors_count > 0 AND duration>0"},
schemas.AlertColumn.ERRORS__JAVASCRIPT__COUNT: {
"table": "events.errors INNER JOIN public.errors AS m_errors USING (error_id)",
"formula": "COUNT(DISTINCT session_id)", "condition": "source='js_exception'", "joinSessions": False},
schemas.AlertColumn.ERRORS__BACKEND__COUNT: {
"table": "events.errors INNER JOIN public.errors AS m_errors USING (error_id)",
"formula": "COUNT(DISTINCT session_id)", "condition": "source!='js_exception'", "joinSessions": False},
}
def Build(a):
now = TimeUTC.now()
params = {"project_id": a["projectId"], "now": now}
full_args = {}
j_s = True
main_table = ""
if a["seriesId"] is not None:
a["filter"]["sort"] = "session_id"
a["filter"]["order"] = schemas.SortOrderType.DESC
a["filter"]["startDate"] = 0
a["filter"]["endDate"] = TimeUTC.now()
try:
data = schemas.SessionsSearchPayloadSchema.model_validate(a["filter"])
except ValidationError:
logger.warning("Validation error for:")
logger.warning(a["filter"])
raise
full_args, query_part = sessions.search_query_parts(data=data, error_status=None, errors_only=False,
issue=None, project_id=a["projectId"], user_id=None,
favorite_only=False)
subQ = f"""SELECT COUNT(session_id) AS value
{query_part}"""
else:
colDef = LeftToDb[a["query"]["left"]]
subQ = f"""SELECT {colDef["formula"]} AS value
FROM {colDef["table"]}
WHERE project_id = %(project_id)s
{"AND " + colDef["condition"] if colDef.get("condition") else ""}"""
j_s = colDef.get("joinSessions", True)
main_table = colDef["table"]
is_ss = main_table == "public.sessions"
q = f"""SELECT coalesce(value,0) AS value, coalesce(value,0) {a["query"]["operator"]} {a["query"]["right"]} AS valid"""
if a["detectionMethod"] == schemas.AlertDetectionMethod.THRESHOLD:
if a["seriesId"] is not None:
q += f""" FROM ({subQ}) AS stat"""
else:
q += f""" FROM ({subQ} {"AND timestamp >= %(startDate)s AND timestamp <= %(now)s" if not is_ss else ""}
{"AND start_ts >= %(startDate)s AND start_ts <= %(now)s" if j_s else ""}) AS stat"""
params = {**params, **full_args, "startDate": TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000}
else:
if a["change"] == schemas.AlertDetectionType.CHANGE:
if a["seriesId"] is not None:
sub2 = subQ.replace("%(startDate)s", "%(timestamp_sub2)s").replace("%(endDate)s", "%(startDate)s")
sub1 = f"SELECT (({subQ})-({sub2})) AS value"
q += f" FROM ( {sub1} ) AS stat"
params = {**params, **full_args,
"startDate": TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000,
"timestamp_sub2": TimeUTC.now() - 2 * a["options"]["currentPeriod"] * 60 * 1000}
else:
sub1 = f"""{subQ} {"AND timestamp >= %(startDate)s AND timestamp <= %(now)s" if not is_ss else ""}
{"AND start_ts >= %(startDate)s AND start_ts <= %(now)s" if j_s else ""}"""
params["startDate"] = TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000
sub2 = f"""{subQ} {"AND timestamp < %(startDate)s AND timestamp >= %(timestamp_sub2)s" if not is_ss else ""}
{"AND start_ts < %(startDate)s AND start_ts >= %(timestamp_sub2)s" if j_s else ""}"""
params["timestamp_sub2"] = TimeUTC.now() - 2 * a["options"]["currentPeriod"] * 60 * 1000
sub1 = f"SELECT (( {sub1} )-( {sub2} )) AS value"
q += f" FROM ( {sub1} ) AS stat"
else:
if a["seriesId"] is not None:
sub2 = subQ.replace("%(startDate)s", "%(timestamp_sub2)s").replace("%(endDate)s", "%(startDate)s")
sub1 = f"SELECT (({subQ})/NULLIF(({sub2}),0)-1)*100 AS value"
q += f" FROM ({sub1}) AS stat"
params = {**params, **full_args,
"startDate": TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000,
"timestamp_sub2": TimeUTC.now() \
- (a["options"]["currentPeriod"] + a["options"]["currentPeriod"]) \
* 60 * 1000}
else:
sub1 = f"""{subQ} {"AND timestamp >= %(startDate)s AND timestamp <= %(now)s" if not is_ss else ""}
{"AND start_ts >= %(startDate)s AND start_ts <= %(now)s" if j_s else ""}"""
params["startDate"] = TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000
sub2 = f"""{subQ} {"AND timestamp < %(startDate)s AND timestamp >= %(timestamp_sub2)s" if not is_ss else ""}
{"AND start_ts < %(startDate)s AND start_ts >= %(timestamp_sub2)s" if j_s else ""}"""
params["timestamp_sub2"] = TimeUTC.now() \
- (a["options"]["currentPeriod"] + a["options"]["currentPeriod"]) * 60 * 1000
sub1 = f"SELECT (({sub1})/NULLIF(({sub2}),0)-1)*100 AS value"
q += f" FROM ({sub1}) AS stat"
return q, params
def process():
logger.info("> processing alerts on PG")
notifications = []
all_alerts = alerts_listener.get_all_alerts()
with pg_client.PostgresClient() as cur:
for alert in all_alerts:
if alert_helpers.can_check(alert):
query, params = Build(alert)
try:
query = cur.mogrify(query, params)
except Exception as e:
logger.error(
f"!!!Error while building alert query for alertId:{alert['alertId']} name: {alert['name']}")
logger.error(e)
continue
logger.debug(alert)
logger.debug(query)
try:
cur.execute(query)
result = cur.fetchone()
if result["valid"]:
logger.info(f"Valid alert, notifying users, alertId:{alert['alertId']} name: {alert['name']}")
notifications.append(alert_helpers.generate_notification(alert, result))
except Exception as e:
logger.error(
f"!!!Error while running alert query for alertId:{alert['alertId']} name: {alert['name']}")
logger.error(query)
logger.error(e)
cur = cur.recreate(rollback=True)
if len(notifications) > 0:
cur.execute(
cur.mogrify(f"""UPDATE public.alerts
SET options = options||'{{"lastNotification":{TimeUTC.now()}}}'::jsonb
WHERE alert_id IN %(ids)s;""", {"ids": tuple([n["alertId"] for n in notifications])}))
if len(notifications) > 0:
alerts.process_notifications(notifications)
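
Build() above assembles one of three comparisons depending on detectionMethod and change: a plain threshold on the current window, the difference between the current and previous windows, or the percentage change between them. A pure-Python restatement of that arithmetic (illustrative; the real comparison runs inside Postgres/ClickHouse):

import operator

def evaluate(method, change, current, previous, right, op=operator.gt):
    if method == "threshold":
        value = current
    elif change == "change":
        value = current - previous
    else:  # percent change: (current / NULLIF(previous, 0) - 1) * 100
        if previous == 0:
            return False  # NULLIF yields NULL in SQL, so the row is not "valid"
        value = (current / previous - 1) * 100
    return op(value, right)

# a 50% jump over the previous window, checked against a 30% bound:
assert evaluate("change", "percent", current=150, previous=100, right=30)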


@@ -1,195 +0,0 @@
import logging
from pydantic_core._pydantic_core import ValidationError
import schemas
from chalicelib.utils import pg_client, ch_client, exp_ch_helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.core.alerts import alerts, alerts_listener
from chalicelib.core.alerts.modules import alert_helpers
from chalicelib.core.sessions import sessions_ch as sessions
logger = logging.getLogger(__name__)
LeftToDb = {
schemas.AlertColumn.PERFORMANCE__DOM_CONTENT_LOADED__AVERAGE: {
"table": lambda timestamp: f"{exp_ch_helper.get_main_events_table(timestamp)} AS pages",
"formula": "COALESCE(AVG(NULLIF(dom_content_loaded_event_time ,0)),0)",
"eventType": "LOCATION"
},
schemas.AlertColumn.PERFORMANCE__FIRST_MEANINGFUL_PAINT__AVERAGE: {
"table": lambda timestamp: f"{exp_ch_helper.get_main_events_table(timestamp)} AS pages",
"formula": "COALESCE(AVG(NULLIF(first_contentful_paint_time,0)),0)",
"eventType": "LOCATION"
},
schemas.AlertColumn.PERFORMANCE__PAGE_LOAD_TIME__AVERAGE: {
"table": lambda timestamp: f"{exp_ch_helper.get_main_events_table(timestamp)} AS pages",
"formula": "AVG(NULLIF(load_event_time ,0))",
"eventType": "LOCATION"
},
schemas.AlertColumn.PERFORMANCE__DOM_BUILD_TIME__AVERAGE: {
"table": lambda timestamp: f"{exp_ch_helper.get_main_events_table(timestamp)} AS pages",
"formula": "AVG(NULLIF(dom_building_time,0))",
"eventType": "LOCATION"
},
schemas.AlertColumn.PERFORMANCE__SPEED_INDEX__AVERAGE: {
"table": lambda timestamp: f"{exp_ch_helper.get_main_events_table(timestamp)} AS pages",
"formula": "AVG(NULLIF(speed_index,0))",
"eventType": "LOCATION"
},
schemas.AlertColumn.PERFORMANCE__PAGE_RESPONSE_TIME__AVERAGE: {
"table": lambda timestamp: f"{exp_ch_helper.get_main_events_table(timestamp)} AS pages",
"formula": "AVG(NULLIF(response_time,0))",
"eventType": "LOCATION"
},
schemas.AlertColumn.PERFORMANCE__TTFB__AVERAGE: {
"table": lambda timestamp: f"{exp_ch_helper.get_main_events_table(timestamp)} AS pages",
"formula": "AVG(NULLIF(first_contentful_paint_time,0))",
"eventType": "LOCATION"
},
schemas.AlertColumn.PERFORMANCE__TIME_TO_RENDER__AVERAGE: {
"table": lambda timestamp: f"{exp_ch_helper.get_main_events_table(timestamp)} AS pages",
"formula": "AVG(NULLIF(visually_complete,0))",
"eventType": "LOCATION"
},
schemas.AlertColumn.PERFORMANCE__CRASHES__COUNT: {
"table": lambda timestamp: f"{exp_ch_helper.get_main_sessions_table(timestamp)} AS sessions",
"formula": "COUNT(DISTINCT session_id)",
"condition": "duration>0 AND errors_count>0"
},
schemas.AlertColumn.ERRORS__JAVASCRIPT__COUNT: {
"table": lambda timestamp: f"{exp_ch_helper.get_main_events_table(timestamp)} AS errors",
"eventType": "ERROR",
"formula": "COUNT(DISTINCT session_id)",
"condition": "source='js_exception'"
},
schemas.AlertColumn.ERRORS__BACKEND__COUNT: {
"table": lambda timestamp: f"{exp_ch_helper.get_main_events_table(timestamp)} AS errors",
"eventType": "ERROR",
"formula": "COUNT(DISTINCT session_id)",
"condition": "source!='js_exception'"
},
}
def Build(a):
now = TimeUTC.now()
params = {"project_id": a["projectId"], "now": now}
full_args = {}
if a["seriesId"] is not None:
a["filter"]["sort"] = "session_id"
a["filter"]["order"] = schemas.SortOrderType.DESC
a["filter"]["startDate"] = 0
a["filter"]["endDate"] = TimeUTC.now()
try:
data = schemas.SessionsSearchPayloadSchema.model_validate(a["filter"])
except ValidationError:
logger.warning("Validation error for:")
logger.warning(a["filter"])
raise
full_args, query_part = sessions.search_query_parts_ch(data=data, error_status=None, errors_only=False,
issue=None, project_id=a["projectId"], user_id=None,
favorite_only=False)
subQ = f"""SELECT COUNT(session_id) AS value
{query_part}"""
else:
colDef = LeftToDb[a["query"]["left"]]
params["event_type"] = LeftToDb[a["query"]["left"]].get("eventType")
subQ = f"""SELECT {colDef["formula"]} AS value
FROM {colDef["table"](now)}
WHERE project_id = %(project_id)s
{"AND event_type=%(event_type)s" if params["event_type"] else ""}
{"AND " + colDef["condition"] if colDef.get("condition") else ""}"""
q = f"""SELECT coalesce(value,0) AS value, coalesce(value,0) {a["query"]["operator"]} {a["query"]["right"]} AS valid"""
if a["detectionMethod"] == schemas.AlertDetectionMethod.THRESHOLD:
if a["seriesId"] is not None:
q += f""" FROM ({subQ}) AS stat"""
else:
q += f""" FROM ({subQ}
AND datetime>=toDateTime(%(startDate)s/1000)
AND datetime<=toDateTime(%(now)s/1000) ) AS stat"""
params = {**params, **full_args, "startDate": TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000}
else:
if a["change"] == schemas.AlertDetectionType.CHANGE:
if a["seriesId"] is not None:
sub2 = subQ.replace("%(startDate)s", "%(timestamp_sub2)s").replace("%(endDate)s", "%(startDate)s")
sub1 = f"SELECT (({subQ})-({sub2})) AS value"
q += f" FROM ( {sub1} ) AS stat"
params = {**params, **full_args,
"startDate": TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000,
"timestamp_sub2": TimeUTC.now() - 2 * a["options"]["currentPeriod"] * 60 * 1000}
else:
sub1 = f"""{subQ} AND datetime>=toDateTime(%(startDate)s/1000)
AND datetime<=toDateTime(%(now)s/1000)"""
params["startDate"] = TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000
sub2 = f"""{subQ} AND datetime<toDateTime(%(startDate)s/1000)
AND datetime>=toDateTime(%(timestamp_sub2)s/1000)"""
params["timestamp_sub2"] = TimeUTC.now() - 2 * a["options"]["currentPeriod"] * 60 * 1000
sub1 = f"SELECT (( {sub1} )-( {sub2} )) AS value"
q += f" FROM ( {sub1} ) AS stat"
else:
if a["seriesId"] is not None:
sub2 = subQ.replace("%(startDate)s", "%(timestamp_sub2)s").replace("%(endDate)s", "%(startDate)s")
sub1 = f"SELECT (({subQ})/NULLIF(({sub2}),0)-1)*100 AS value"
q += f" FROM ({sub1}) AS stat"
params = {**params, **full_args,
"startDate": TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000,
"timestamp_sub2": TimeUTC.now() \
- (a["options"]["currentPeriod"] + a["options"]["currentPeriod"]) \
* 60 * 1000}
else:
sub1 = f"""{subQ} AND datetime>=toDateTime(%(startDate)s/1000)
AND datetime<=toDateTime(%(now)s/1000)"""
params["startDate"] = TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000
sub2 = f"""{subQ} AND datetime<toDateTime(%(startDate)s/1000)
AND datetime>=toDateTime(%(timestamp_sub2)s/1000)"""
params["timestamp_sub2"] = TimeUTC.now() \
- (a["options"]["currentPeriod"] + a["options"]["currentPeriod"]) * 60 * 1000
sub1 = f"SELECT (({sub1})/NULLIF(({sub2}),0)-1)*100 AS value"
q += f" FROM ({sub1}) AS stat"
return q, params
def process():
logger.info("> processing alerts on CH")
notifications = []
all_alerts = alerts_listener.get_all_alerts()
with pg_client.PostgresClient() as cur, ch_client.ClickHouseClient() as ch_cur:
for alert in all_alerts:
if alert["query"]["left"] != "CUSTOM":
continue
if alert_helpers.can_check(alert):
query, params = Build(alert)
try:
query = ch_cur.format(query=query, parameters=params)
except Exception as e:
logger.error(
f"!!!Error while building alert query for alertId:{alert['alertId']} name: {alert['name']}")
logger.error(e)
continue
logger.debug(alert)
logger.debug(query)
try:
result = ch_cur.execute(query=query)
if len(result) > 0:
result = result[0]
if result["valid"]:
logger.info("Valid alert, notifying users")
notifications.append(alert_helpers.generate_notification(alert, result))
except Exception as e:
logger.error(f"!!!Error while running alert query for alertId:{alert['alertId']}")
logger.error(str(e))
logger.error(query)
if len(notifications) > 0:
cur.execute(
cur.mogrify(f"""UPDATE public.alerts
SET options = options||'{{"lastNotification":{TimeUTC.now()}}}'::jsonb
WHERE alert_id IN %(ids)s;""", {"ids": tuple([n["alertId"] for n in notifications])}))
if len(notifications) > 0:
alerts.process_notifications(notifications)


@@ -1,3 +0,0 @@
TENANT_ID = "-1"
from . import helpers as alert_helpers


@@ -1,74 +0,0 @@
import decimal
import logging
import schemas
from chalicelib.utils.TimeUTC import TimeUTC
logger = logging.getLogger(__name__)
# This is the frequency of execution for each threshold
TimeInterval = {
15: 3,
30: 5,
60: 10,
120: 20,
240: 30,
1440: 60,
}
def __format_value(x):
if x % 1 == 0:
x = int(x)
else:
x = round(x, 2)
return f"{x:,}"
def can_check(a) -> bool:
now = TimeUTC.now()
repetitionBase = a["options"]["currentPeriod"] \
if a["detectionMethod"] == schemas.AlertDetectionMethod.CHANGE \
and a["options"]["currentPeriod"] > a["options"]["previousPeriod"] \
else a["options"]["previousPeriod"]
if TimeInterval.get(repetitionBase) is None:
logger.error(f"repetitionBase: {repetitionBase} NOT FOUND")
return False
return (a["options"]["renotifyInterval"] <= 0 or
a["options"].get("lastNotification") is None or
a["options"]["lastNotification"] <= 0 or
((now - a["options"]["lastNotification"]) > a["options"]["renotifyInterval"] * 60 * 1000)) \
and ((now - a["createdAt"]) % (TimeInterval[repetitionBase] * 60 * 1000)) < 60 * 1000
def generate_notification(alert, result):
left = __format_value(result['value'])
right = __format_value(alert['query']['right'])
return {
"alertId": alert["alertId"],
"tenantId": alert["tenantId"],
"title": alert["name"],
"description": f"{alert['seriesName']} = {left} ({alert['query']['operator']} {right}).",
"buttonText": "Check metrics for more details",
"buttonUrl": f"/{alert['projectId']}/metrics",
"imageUrl": None,
"projectId": alert["projectId"],
"projectName": alert["projectName"],
"options": {"source": "ALERT", "sourceId": alert["alertId"],
"sourceMeta": alert["detectionMethod"],
"message": alert["options"]["message"], "projectId": alert["projectId"],
"data": {"title": alert["name"],
"limitValue": alert["query"]["right"],
"actualValue": float(result["value"]) \
if isinstance(result["value"], decimal.Decimal) \
else result["value"],
"operator": alert["query"]["operator"],
"trigger": alert["query"]["left"],
"alertId": alert["alertId"],
"detectionMethod": alert["detectionMethod"],
"currentPeriod": alert["options"]["currentPeriod"],
"previousPeriod": alert["options"]["previousPeriod"],
"createdAt": TimeUTC.now()}},
}
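
can_check above throttles alert evaluation twice: the renotifyInterval cool-down must have elapsed, and the time since creation must land on the TimeInterval grid for the alert's period, within a one-minute window. An equivalent standalone sketch (all time values in milliseconds unless suffixed _min):

TIME_INTERVAL_MIN = {15: 3, 30: 5, 60: 10, 120: 20, 240: 30, 1440: 60}

def is_due(now_ms, created_at_ms, period_min, last_notification_ms, renotify_min):
    step_ms = TIME_INTERVAL_MIN[period_min] * 60 * 1000
    cooled_down = (renotify_min <= 0 or not last_notification_ms
                   or now_ms - last_notification_ms > renotify_min * 60 * 1000)
    on_grid = (now_ms - created_at_ms) % step_ms < 60 * 1000
    return cooled_down and on_grid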


@@ -0,0 +1,27 @@
from chalicelib.utils import pg_client, helper
def get_all_alerts():
with pg_client.PostgresClient(long_query=True) as cur:
query = """SELECT -1 AS tenant_id,
alert_id,
project_id,
detection_method,
query,
options,
(EXTRACT(EPOCH FROM alerts.created_at) * 1000)::BIGINT AS created_at,
alerts.name,
alerts.series_id,
filter
FROM public.alerts
LEFT JOIN metric_series USING (series_id)
INNER JOIN projects USING (project_id)
WHERE alerts.deleted_at ISNULL
AND alerts.active
AND projects.active
AND projects.deleted_at ISNULL
AND (alerts.series_id ISNULL OR metric_series.deleted_at ISNULL)
ORDER BY alerts.created_at;"""
cur.execute(query=query)
all_alerts = helper.list_to_camel_case(cur.fetchall())
return all_alerts


@@ -0,0 +1,222 @@
import decimal
import logging
import schemas
from chalicelib.core import alerts_listener
from chalicelib.core import sessions, alerts
from chalicelib.utils import pg_client
from chalicelib.utils.TimeUTC import TimeUTC
LeftToDb = {
schemas.AlertColumn.performance__dom_content_loaded__average: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "COALESCE(AVG(NULLIF(dom_content_loaded_time ,0)),0)"},
schemas.AlertColumn.performance__first_meaningful_paint__average: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "COALESCE(AVG(NULLIF(first_contentful_paint_time,0)),0)"},
schemas.AlertColumn.performance__page_load_time__average: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)", "formula": "AVG(NULLIF(load_time ,0))"},
schemas.AlertColumn.performance__dom_build_time__average: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "AVG(NULLIF(dom_building_time,0))"},
schemas.AlertColumn.performance__speed_index__average: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)", "formula": "AVG(NULLIF(speed_index,0))"},
schemas.AlertColumn.performance__page_response_time__average: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "AVG(NULLIF(response_time,0))"},
schemas.AlertColumn.performance__ttfb__average: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "AVG(NULLIF(first_paint_time,0))"},
schemas.AlertColumn.performance__time_to_render__average: {
"table": "events.pages INNER JOIN public.sessions USING(session_id)",
"formula": "AVG(NULLIF(visually_complete,0))"},
schemas.AlertColumn.performance__image_load_time__average: {
"table": "events.resources INNER JOIN public.sessions USING(session_id)",
"formula": "AVG(NULLIF(resources.duration,0))", "condition": "type='img'"},
schemas.AlertColumn.performance__request_load_time__average: {
"table": "events.resources INNER JOIN public.sessions USING(session_id)",
"formula": "AVG(NULLIF(resources.duration,0))", "condition": "type='fetch'"},
schemas.AlertColumn.resources__load_time__average: {
"table": "events.resources INNER JOIN public.sessions USING(session_id)",
"formula": "AVG(NULLIF(resources.duration,0))"},
schemas.AlertColumn.resources__missing__count: {
"table": "events.resources INNER JOIN public.sessions USING(session_id)",
"formula": "COUNT(DISTINCT url_hostpath)", "condition": "success= FALSE"},
schemas.AlertColumn.errors__4xx_5xx__count: {
"table": "events.resources INNER JOIN public.sessions USING(session_id)", "formula": "COUNT(session_id)",
"condition": "status/100!=2"},
schemas.AlertColumn.errors__4xx__count: {"table": "events.resources INNER JOIN public.sessions USING(session_id)",
"formula": "COUNT(session_id)", "condition": "status/100=4"},
schemas.AlertColumn.errors__5xx__count: {"table": "events.resources INNER JOIN public.sessions USING(session_id)",
"formula": "COUNT(session_id)", "condition": "status/100=5"},
schemas.AlertColumn.errors__javascript__impacted_sessions__count: {
"table": "events.resources INNER JOIN public.sessions USING(session_id)",
"formula": "COUNT(DISTINCT session_id)", "condition": "success= FALSE AND type='script'"},
schemas.AlertColumn.performance__crashes__count: {
"table": "(SELECT *, start_ts AS timestamp FROM public.sessions WHERE errors_count > 0) AS sessions",
"formula": "COUNT(DISTINCT session_id)", "condition": "errors_count > 0"},
schemas.AlertColumn.errors__javascript__count: {
"table": "events.errors INNER JOIN public.errors AS m_errors USING (error_id)",
"formula": "COUNT(DISTINCT session_id)", "condition": "source='js_exception'", "joinSessions": False},
schemas.AlertColumn.errors__backend__count: {
"table": "events.errors INNER JOIN public.errors AS m_errors USING (error_id)",
"formula": "COUNT(DISTINCT session_id)", "condition": "source!='js_exception'", "joinSessions": False},
}
# This is the frequency of execution for each threshold
TimeInterval = {
15: 3,
30: 5,
60: 10,
120: 20,
240: 30,
1440: 60,
}
def can_check(a) -> bool:
now = TimeUTC.now()
repetitionBase = a["options"]["currentPeriod"] \
if a["detectionMethod"] == schemas.AlertDetectionMethod.change \
and a["options"]["currentPeriod"] > a["options"]["previousPeriod"] \
else a["options"]["previousPeriod"]
if TimeInterval.get(repetitionBase) is None:
logging.error(f"repetitionBase: {repetitionBase} NOT FOUND")
return False
return (a["options"]["renotifyInterval"] <= 0 or
a["options"].get("lastNotification") is None or
a["options"]["lastNotification"] <= 0 or
((now - a["options"]["lastNotification"]) > a["options"]["renotifyInterval"] * 60 * 1000)) \
and ((now - a["createdAt"]) % (TimeInterval[repetitionBase] * 60 * 1000)) < 60 * 1000
def Build(a):
params = {"project_id": a["projectId"]}
full_args = {}
j_s = True
if a["seriesId"] is not None:
a["filter"]["sort"] = "session_id"
a["filter"]["order"] = schemas.SortOrderType.desc
a["filter"]["startDate"] = -1
a["filter"]["endDate"] = TimeUTC.now()
full_args, query_part = sessions.search_query_parts(
data=schemas.SessionsSearchPayloadSchema.parse_obj(a["filter"]), error_status=None, errors_only=False,
issue=None, project_id=a["projectId"], user_id=None, favorite_only=False)
subQ = f"""SELECT COUNT(session_id) AS value
{query_part}"""
else:
colDef = LeftToDb[a["query"]["left"]]
subQ = f"""SELECT {colDef["formula"]} AS value
FROM {colDef["table"]}
WHERE project_id = %(project_id)s
{"AND " + colDef["condition"] if colDef.get("condition") is not None else ""}"""
j_s = colDef.get("joinSessions", True)
q = f"""SELECT coalesce(value,0) AS value, coalesce(value,0) {a["query"]["operator"]} {a["query"]["right"]} AS valid"""
if a["detectionMethod"] == schemas.AlertDetectionMethod.threshold:
if a["seriesId"] is not None:
q += f""" FROM ({subQ}) AS stat"""
else:
q += f""" FROM ({subQ} AND timestamp>=%(startDate)s
{"AND sessions.start_ts >= %(startDate)s" if j_s else ""}) AS stat"""
params = {**params, **full_args, "startDate": TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000}
else:
if a["options"]["change"] == schemas.AlertDetectionChangeType.change:
if a["seriesId"] is not None:
sub2 = subQ.replace("%(startDate)s", "%(timestamp_sub2)s").replace("%(endDate)s", "%(startDate)s")
sub1 = f"SELECT (({subQ})-({sub2})) AS value"
q += f" FROM ( {sub1} ) AS stat"
params = {**params, **full_args,
"startDate": TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000,
"timestamp_sub2": TimeUTC.now() - 2 * a["options"]["currentPeriod"] * 60 * 1000}
else:
sub1 = f"""{subQ} AND timestamp>=%(startDate)s
{"AND sessions.start_ts >= %(startDate)s" if j_s else ""}"""
params["startDate"] = TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000
sub2 = f"""{subQ} AND timestamp<%(startDate)s
AND timestamp>=%(timestamp_sub2)s
{"AND sessions.start_ts < %(startDate)s AND sessions.start_ts >= %(timestamp_sub2)s" if j_s else ""}"""
params["timestamp_sub2"] = TimeUTC.now() - 2 * a["options"]["currentPeriod"] * 60 * 1000
sub1 = f"SELECT (( {sub1} )-( {sub2} )) AS value"
q += f" FROM ( {sub1} ) AS stat"
else:
if a["seriesId"] is not None:
sub2 = subQ.replace("%(startDate)s", "%(timestamp_sub2)s").replace("%(endDate)s", "%(startDate)s")
sub1 = f"SELECT (({subQ})/NULLIF(({sub2}),0)-1)*100 AS value"
q += f" FROM ({sub1}) AS stat"
params = {**params, **full_args,
"startDate": TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000,
"timestamp_sub2": TimeUTC.now() \
- (a["options"]["currentPeriod"] + a["options"]["currentPeriod"]) \
* 60 * 1000}
else:
sub1 = f"""{subQ} AND timestamp>=%(startDate)s
{"AND sessions.start_ts >= %(startDate)s" if j_s else ""}"""
params["startDate"] = TimeUTC.now() - a["options"]["currentPeriod"] * 60 * 1000
sub2 = f"""{subQ} AND timestamp<%(startDate)s
AND timestamp>=%(timestamp_sub2)s
{"AND sessions.start_ts < %(startDate)s AND sessions.start_ts >= %(timestamp_sub2)s" if j_s else ""}"""
params["timestamp_sub2"] = TimeUTC.now() \
- (a["options"]["currentPeriod"] + a["options"]["currentPeriod"]) * 60 * 1000
sub1 = f"SELECT (({sub1})/NULLIF(({sub2}),0)-1)*100 AS value"
q += f" FROM ({sub1}) AS stat"
return q, params
def process():
notifications = []
all_alerts = alerts_listener.get_all_alerts()
with pg_client.PostgresClient() as cur:
for alert in all_alerts:
if can_check(alert):
logging.info(f"Querying alertId:{alert['alertId']} name: {alert['name']}")
query, params = Build(alert)
query = cur.mogrify(query, params)
logging.debug(alert)
logging.debug(query)
try:
cur.execute(query)
result = cur.fetchone()
if result["valid"]:
logging.info("Valid alert, notifying users")
notifications.append({
"alertId": alert["alertId"],
"tenantId": alert["tenantId"],
"title": alert["name"],
"description": f"has been triggered, {alert['query']['left']} = {round(result['value'], 2)} ({alert['query']['operator']} {alert['query']['right']}).",
"buttonText": "Check metrics for more details",
"buttonUrl": f"/{alert['projectId']}/metrics",
"imageUrl": None,
"options": {"source": "ALERT", "sourceId": alert["alertId"],
"sourceMeta": alert["detectionMethod"],
"message": alert["options"]["message"], "projectId": alert["projectId"],
"data": {"title": alert["name"],
"limitValue": alert["query"]["right"],
"actualValue": float(result["value"]) \
if isinstance(result["value"], decimal.Decimal) \
else result["value"],
"operator": alert["query"]["operator"],
"trigger": alert["query"]["left"],
"alertId": alert["alertId"],
"detectionMethod": alert["detectionMethod"],
"currentPeriod": alert["options"]["currentPeriod"],
"previousPeriod": alert["options"]["previousPeriod"],
"createdAt": TimeUTC.now()}},
})
except Exception as e:
logging.error(f"!!!Error while running alert query for alertId:{alert['alertId']}")
logging.error(str(e))
logging.error(query)
if len(notifications) > 0:
cur.execute(
cur.mogrify(f"""UPDATE public.Alerts
SET options = options||'{{"lastNotification":{TimeUTC.now()}}}'::jsonb
WHERE alert_id IN %(ids)s;""", {"ids": tuple([n["alertId"] for n in notifications])}))
if len(notifications) > 0:
alerts.process_notifications(notifications)


@@ -1,20 +1,23 @@
import logging
from os import access, R_OK
from os.path import exists as path_exists, getsize
import jwt
import requests
from decouple import config
from fastapi import HTTPException, status
import schemas
from chalicelib.core import projects
from chalicelib.utils.TimeUTC import TimeUTC
logger = logging.getLogger(__name__)
ASSIST_KEY = config("ASSIST_KEY")
ASSIST_URL = config("ASSIST_URL") % ASSIST_KEY
SESSION_PROJECTION_COLS = """s.project_id,
s.session_id::text AS session_id,
s.user_uuid,
s.user_id,
s.user_agent,
s.user_os,
s.user_browser,
s.user_device,
s.user_device_type,
s.user_country,
s.start_ts,
s.user_anonymous_id,
s.platform
"""
def get_live_sessions_ws_user_id(project_id, user_id):
@@ -24,16 +27,6 @@ def get_live_sessions_ws_user_id(project_id, user_id):
return __get_live_sessions_ws(project_id=project_id, data=data)
def get_live_sessions_ws_test_id(project_id, test_id):
data = {
"filter": {
'uxtId': test_id,
'operator': 'is'
}
}
return __get_live_sessions_ws(project_id=project_id, data=data)
def get_live_sessions_ws(project_id, body: schemas.LiveSessionsSearchPayloadSchema):
data = {
"filter": {},
@@ -41,35 +34,34 @@ def get_live_sessions_ws(project_id, body: schemas.LiveSessionsSearchPayloadSchema):
"sort": {"key": body.sort, "order": body.order}
}
for f in body.filters:
if f.type == schemas.LiveFilterType.METADATA:
data["filter"][f.source] = {"values": f.value, "operator": f.operator}
if f.type == schemas.LiveFilterType.metadata:
data["filter"][f.source] = f.value
else:
data["filter"][f.type] = {"values": f.value, "operator": f.operator}
data["filter"][f.type.value] = f.value
return __get_live_sessions_ws(project_id=project_id, data=data)
def __get_live_sessions_ws(project_id, data):
project_key = projects.get_project_key(project_id)
try:
results = requests.post(ASSIST_URL + config("assist") + f"/{project_key}",
json=data, timeout=config("assistTimeout", cast=int, default=5))
if results.status_code != 200:
logger.error(f"!! issue with the peer-server code:{results.status_code} for __get_live_sessions_ws")
logger.error(results.text)
connected_peers = requests.post(config("assist") % config("S3_KEY") + f"/{project_key}", json=data,
timeout=config("assistTimeout", cast=int, default=5))
if connected_peers.status_code != 200:
print("!! issue with the peer-server")
print(connected_peers.text)
return {"total": 0, "sessions": []}
live_peers = results.json().get("data", [])
live_peers = connected_peers.json().get("data", [])
except requests.exceptions.Timeout:
logger.error("!! Timeout getting Assist response")
print("Timeout getting Assist response")
live_peers = {"total": 0, "sessions": []}
except Exception as e:
logger.error("!! Issue getting Live-Assist response")
logger.exception(e)
logger.error("expected JSON, received:")
print("issue getting Live-Assist response")
print(str(e))
print("expected JSON, received:")
try:
logger.error(results.text)
print(connected_peers.text)
except:
logger.error("couldn't get response")
print("couldn't get response")
live_peers = {"total": 0, "sessions": []}
_live_peers = live_peers
if "sessions" in live_peers:
@@ -77,81 +69,61 @@ def __get_live_sessions_ws(project_id, data):
for s in _live_peers:
s["live"] = True
s["projectId"] = project_id
if "projectID" in s:
s.pop("projectID")
return live_peers
def __get_agent_token(project_id, project_key, session_id):
iat = TimeUTC.now()
return jwt.encode(
payload={
"projectKey": project_key,
"projectId": project_id,
"sessionId": session_id,
"iat": iat // 1000,
"exp": iat // 1000 + config("ASSIST_JWT_EXPIRATION", cast=int) + TimeUTC.get_utc_offset() // 1000,
"iss": config("JWT_ISSUER"),
"aud": f"openreplay:agent"
},
key=config("ASSIST_JWT_SECRET"),
algorithm=config("JWT_ALGORITHM")
)
def get_live_session_by_id(project_id, session_id):
project_key = projects.get_project_key(project_id)
try:
results = requests.get(ASSIST_URL + config("assist") + f"/{project_key}/{session_id}",
timeout=config("assistTimeout", cast=int, default=5))
if results.status_code != 200:
logger.error(f"!! issue with the peer-server code:{results.status_code} for get_live_session_by_id")
logger.error(results.text)
connected_peers = requests.get(config("assist") % config("S3_KEY") + f"/{project_key}/{session_id}",
timeout=config("assistTimeout", cast=int, default=5))
if connected_peers.status_code != 200:
print("!! issue with the peer-server")
print(connected_peers.text)
return False
connected_peers = connected_peers.json().get("data")
if connected_peers is None:
return None
results = results.json().get("data")
if results is None:
return None
results["live"] = True
results["agentToken"] = __get_agent_token(project_id=project_id, project_key=project_key, session_id=session_id)
connected_peers["live"] = True
except requests.exceptions.Timeout:
logger.error("!! Timeout getting Assist response")
print("Timeout getting Assist response")
return None
except Exception as e:
logger.error("!! Issue getting Assist response")
logger.exception(e)
logger.error("expected JSON, received:")
print("issue getting Assist response")
print(str(e))
print("expected JSON, received:")
try:
logger.error(results.text)
print(connected_peers.text)
except:
logger.error("couldn't get response")
print("couldn't get response")
return None
return results
return connected_peers
def is_live(project_id, session_id, project_key=None):
if project_key is None:
project_key = projects.get_project_key(project_id)
try:
results = requests.get(ASSIST_URL + config("assistList") + f"/{project_key}/{session_id}",
timeout=config("assistTimeout", cast=int, default=5))
if results.status_code != 200:
logger.error(f"!! issue with the peer-server code:{results.status_code} for is_live")
logger.error(results.text)
connected_peers = requests.get(config("assistList") % config("S3_KEY") + f"/{project_key}/{session_id}",
timeout=config("assistTimeout", cast=int, default=5))
if connected_peers.status_code != 200:
print("!! issue with the peer-server")
print(connected_peers.text)
return False
results = results.json().get("data")
connected_peers = connected_peers.json().get("data")
except requests.exceptions.Timeout:
logger.error("!! Timeout getting Assist response")
print("Timeout getting Assist response")
return False
except Exception as e:
logger.error("!! Issue getting Assist response")
logger.exception(e)
logger.error("expected JSON, received:")
print("issue getting Assist response")
print(str(e))
print("expected JSON, received:")
try:
logger.error(results.text)
print(connected_peers.text)
except:
logger.error("couldn't get response")
print("couldn't get response")
return False
return str(session_id) == results
return str(session_id) == connected_peers
def autocomplete(project_id, q: str, key: str = None):
@@ -160,128 +132,28 @@ def autocomplete(project_id, q: str, key: str = None):
if key:
params["key"] = key
try:
results = requests.get(
ASSIST_URL + config("assistList") + f"/{project_key}/autocomplete",
params=params, timeout=config("assistTimeout", cast=int, default=5))
results = requests.get(config("assistList") % config("S3_KEY") + f"/{project_key}/autocomplete",
params=params, timeout=config("assistTimeout", cast=int, default=5))
if results.status_code != 200:
logger.error(f"!! issue with the peer-server code:{results.status_code} for autocomplete")
logger.error(results.text)
print("!! issue with the peer-server")
print(results.text)
return {"errors": [f"Something went wrong wile calling assist:{results.text}"]}
results = results.json().get("data", [])
except requests.exceptions.Timeout:
logger.error("!! Timeout getting Assist response")
print("Timeout getting Assist response")
return {"errors": ["Assist request timeout"]}
except Exception as e:
logger.error("!! Issue getting Assist response")
logger.exception(e)
logger.error("expected JSON, received:")
print("issue getting Assist response")
print(str(e))
print("expected JSON, received:")
try:
logger.error(results.text)
print(results.text)
except:
logger.error("couldn't get response")
print("couldn't get response")
return {"errors": ["Something went wrong wile calling assist"]}
for r in results:
r["type"] = __change_keys(r["type"])
return {"data": results}
def __get_efs_path():
efs_path = config("FS_DIR")
if not path_exists(efs_path):
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=f"EFS not found in path: {efs_path}")
if not access(efs_path, R_OK):
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
detail=f"EFS found under: {efs_path}; but it is not readable, please check permissions")
return efs_path
def __get_mob_path(project_id, session_id):
params = {"projectId": project_id, "sessionId": session_id}
return config("EFS_SESSION_MOB_PATTERN", default="%(sessionId)s") % params
def get_raw_mob_by_id(project_id, session_id):
efs_path = __get_efs_path()
path_to_file = efs_path + "/" + __get_mob_path(project_id=project_id, session_id=session_id)
if path_exists(path_to_file):
if not access(path_to_file, R_OK):
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
detail=f"Replay file found under: {efs_path};" +
" but it is not readable, please check permissions")
# getsize return size in bytes, UNPROCESSED_MAX_SIZE is in Kb
if (getsize(path_to_file) / 1000) >= config("UNPROCESSED_MAX_SIZE", cast=int, default=200 * 1000):
raise HTTPException(status_code=status.HTTP_413_REQUEST_ENTITY_TOO_LARGE, detail="Replay file too large")
return path_to_file
return None
def __get_devtools_path(project_id, session_id):
params = {"projectId": project_id, "sessionId": session_id}
return config("EFS_DEVTOOLS_MOB_PATTERN", default="%(sessionId)s") % params
def get_raw_devtools_by_id(project_id, session_id):
efs_path = __get_efs_path()
path_to_file = efs_path + "/" + __get_devtools_path(project_id=project_id, session_id=session_id)
if path_exists(path_to_file):
if not access(path_to_file, R_OK):
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
detail=f"Devtools file found under: {efs_path};"
" but it is not readable, please check permissions")
return path_to_file
return None
def session_exists(project_id, session_id):
project_key = projects.get_project_key(project_id)
try:
results = requests.get(ASSIST_URL + config("assist") + f"/{project_key}/{session_id}",
timeout=config("assistTimeout", cast=int, default=5))
if results.status_code != 200:
logger.error(f"!! issue with the peer-server code:{results.status_code} for session_exists")
logger.error(results.text)
return None
results = results.json().get("data")
if results is None:
return False
return True
except requests.exceptions.Timeout:
logger.error("!! Timeout getting Assist response")
return False
except Exception as e:
logger.error("!! Issue getting Assist response")
logger.exception(e)
logger.error("expected JSON, received:")
try:
logger.error(results.text)
except:
logger.error("couldn't get response")
return False
def __change_keys(key):
return {
"PAGETITLE": schemas.LiveFilterType.PAGE_TITLE.value,
"ACTIVE": "active",
"LIVE": "live",
"SESSIONID": schemas.LiveFilterType.SESSION_ID.value,
"METADATA": schemas.LiveFilterType.METADATA.value,
"USERID": schemas.LiveFilterType.USER_ID.value,
"USERUUID": schemas.LiveFilterType.USER_UUID.value,
"PROJECTKEY": "projectKey",
"REVID": schemas.LiveFilterType.REV_ID.value,
"TIMESTAMP": "timestamp",
"TRACKERVERSION": schemas.LiveFilterType.TRACKER_VERSION.value,
"ISSNIPPET": "isSnippet",
"USEROS": schemas.LiveFilterType.USER_OS.value,
"USERBROWSER": schemas.LiveFilterType.USER_BROWSER.value,
"USERBROWSERVERSION": schemas.LiveFilterType.USER_BROWSER_VERSION.value,
"USERDEVICE": schemas.LiveFilterType.USER_DEVICE.value,
"USERDEVICETYPE": schemas.LiveFilterType.USER_DEVICE_TYPE.value,
"USERCOUNTRY": schemas.LiveFilterType.USER_COUNTRY.value,
"PROJECTID": "projectId"
}.get(key.upper(), key)
def get_ice_servers():
return config("iceServers") if config("iceServers", default=None) is not None \
and len(config("iceServers")) > 0 else None


@@ -1,96 +1,54 @@
import logging
import jwt
from decouple import config
from chalicelib.core import tenants
from chalicelib.core import users, spot
from chalicelib.utils import helper
from chalicelib.utils.TimeUTC import TimeUTC
logger = logging.getLogger(__name__)
from decouple import config
from chalicelib.core import tenants
from chalicelib.core import users
def get_supported_audience():
return [users.AUDIENCE, spot.AUDIENCE]
def is_spot_token(token: str) -> bool:
try:
decoded_token = jwt.decode(token, options={"verify_signature": False, "verify_exp": False})
audience = decoded_token.get("aud")
return audience == spot.AUDIENCE
except jwt.InvalidTokenError:
logger.error(f"Invalid token for is_spot_token: {token}")
raise
def jwt_authorizer(scheme: str, token: str, leeway=0) -> dict | None:
if scheme.lower() != "bearer":
def jwt_authorizer(token):
token = token.split(" ")
if len(token) != 2 or token[0].lower() != "bearer":
return None
try:
payload = jwt.decode(jwt=token,
key=config("JWT_SECRET") if not is_spot_token(token) else config("JWT_SPOT_SECRET"),
algorithms=config("JWT_ALGORITHM"),
audience=get_supported_audience(),
leeway=leeway)
payload = jwt.decode(
token[1],
config("jwt_secret"),
algorithms=config("jwt_algorithm"),
audience=[f"plugin:{helper.get_stage_name()}", f"front:{helper.get_stage_name()}"]
)
except jwt.ExpiredSignatureError:
logger.debug("! JWT Expired signature")
print("! JWT Expired signature")
return None
except BaseException as e:
logger.warning("! JWT Base Exception", exc_info=e)
print("! JWT Base Exception")
return None
return payload
def jwt_refresh_authorizer(scheme: str, token: str):
if scheme.lower() != "bearer":
def jwt_context(context):
user = users.get(user_id=context["userId"], tenant_id=context["tenantId"])
if user is None:
return None
try:
payload = jwt.decode(jwt=token,
key=config("JWT_REFRESH_SECRET") if not is_spot_token(token) \
else config("JWT_SPOT_REFRESH_SECRET"),
algorithms=config("JWT_ALGORITHM"),
audience=get_supported_audience())
except jwt.ExpiredSignatureError:
logger.debug("! JWT-refresh Expired signature")
return None
except BaseException as e:
logger.error("! JWT-refresh Base Exception", exc_info=e)
return None
return payload
return {
"tenantId": context["tenantId"],
"userId": context["userId"],
**user
}
def generate_jwt(user_id, tenant_id, iat, aud, for_spot=False):
def generate_jwt(id, tenant_id, iat, aud):
token = jwt.encode(
payload={
"userId": user_id,
"userId": id,
"tenantId": tenant_id,
"exp": iat + (config("JWT_EXPIRATION", cast=int) if not for_spot
else config("JWT_SPOT_EXPIRATION", cast=int)),
"iss": config("JWT_ISSUER"),
"iat": iat,
"exp": iat // 1000 + config("jwt_exp_delta_seconds", cast=int) + TimeUTC.get_utc_offset() // 1000,
"iss": config("jwt_issuer"),
"iat": iat // 1000,
"aud": aud
},
key=config("JWT_SECRET") if not for_spot else config("JWT_SPOT_SECRET"),
algorithm=config("JWT_ALGORITHM")
)
return token
def generate_jwt_refresh(user_id, tenant_id, iat, aud, jwt_jti, for_spot=False):
token = jwt.encode(
payload={
"userId": user_id,
"tenantId": tenant_id,
"exp": iat + (config("JWT_REFRESH_EXPIRATION", cast=int) if not for_spot
else config("JWT_SPOT_REFRESH_EXPIRATION", cast=int)),
"iss": config("JWT_ISSUER"),
"iat": iat,
"aud": aud,
"jti": jwt_jti
},
key=config("JWT_REFRESH_SECRET") if not for_spot else config("JWT_SPOT_REFRESH_SECRET"),
algorithm=config("JWT_ALGORITHM")
key=config("jwt_secret"),
algorithm=config("jwt_algorithm")
)
return token
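As a sanity check on the token shape above, here is a minimal, self-contained round trip with PyJWT; the secret, algorithm, and audience are stand-in values, not the deployment's configuration.

import time
import jwt  # PyJWT

SECRET, ALGO, AUD = "dev-secret", "HS256", "front:openreplay"  # assumed values
now = int(time.time())
token = jwt.encode(payload={"userId": 1, "tenantId": 1, "exp": now + 3600,
                            "iss": "openreplay", "iat": now, "aud": AUD},
                   key=SECRET, algorithm=ALGO)
# Decoding the way jwt_authorizer does: a wrong audience or an expired "exp"
# raises, and the authorizer maps that to None.
payload = jwt.decode(jwt=token, key=SECRET, algorithms=[ALGO], audience=[AUD], leeway=0)
assert payload["userId"] == 1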


@@ -1,439 +0,0 @@
import logging
import schemas
from chalicelib.core import countries, events, metadata
from chalicelib.utils import helper
from chalicelib.utils import pg_client
from chalicelib.utils.event_filter_definition import Event
from chalicelib.utils.or_cache import CachedResponse
logger = logging.getLogger(__name__)
TABLE = "public.autocomplete"
def __get_autocomplete_table(value, project_id):
autocomplete_events = [schemas.FilterType.REV_ID,
schemas.EventType.CLICK,
schemas.FilterType.USER_DEVICE,
schemas.FilterType.USER_ID,
schemas.FilterType.USER_BROWSER,
schemas.FilterType.USER_OS,
schemas.EventType.CUSTOM,
schemas.FilterType.USER_COUNTRY,
schemas.FilterType.USER_CITY,
schemas.FilterType.USER_STATE,
schemas.EventType.LOCATION,
schemas.EventType.INPUT]
autocomplete_events.sort()
sub_queries = []
c_list = []
for e in autocomplete_events:
if e == schemas.FilterType.USER_COUNTRY:
c_list = countries.get_country_code_autocomplete(value)
if len(c_list) > 0:
sub_queries.append(f"""(SELECT DISTINCT ON(value) '{e.value}' AS _type, value
FROM {TABLE}
WHERE project_id = %(project_id)s
AND type= '{e.value.upper()}'
AND value IN %(c_list)s)""")
continue
sub_queries.append(f"""(SELECT '{e.value}' AS _type, value
FROM {TABLE}
WHERE project_id = %(project_id)s
AND type= '{e.value.upper()}'
AND value ILIKE %(svalue)s
ORDER BY value
LIMIT 5)""")
if len(value) > 2:
sub_queries.append(f"""(SELECT '{e.value}' AS _type, value
FROM {TABLE}
WHERE project_id = %(project_id)s
AND type= '{e.value.upper()}'
AND value ILIKE %(value)s
ORDER BY value
LIMIT 5)""")
with pg_client.PostgresClient() as cur:
query = cur.mogrify(" UNION DISTINCT ".join(sub_queries) + ";",
{"project_id": project_id,
"value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value),
"c_list": tuple(c_list)
})
try:
cur.execute(query)
except Exception as err:
logger.exception("--------- AUTOCOMPLETE SEARCH QUERY EXCEPTION -----------")
logger.exception(query.decode('UTF-8'))
logger.exception("--------- VALUE -----------")
logger.exception(value)
logger.exception("--------------------")
raise err
results = cur.fetchall()
for r in results:
r["type"] = r.pop("_type")
results = helper.list_to_camel_case(results)
return results
def __generic_query(typename, value_length=None):
if typename == schemas.FilterType.USER_COUNTRY:
return f"""SELECT DISTINCT value, type
FROM {TABLE}
WHERE
project_id = %(project_id)s
AND type='{typename.upper()}'
AND value IN %(value)s
ORDER BY value"""
if value_length is None or value_length > 2:
return f"""SELECT DISTINCT ON(value,type) value, type
FROM ((SELECT DISTINCT value, type
FROM {TABLE}
WHERE
project_id = %(project_id)s
AND type='{typename.upper()}'
AND value ILIKE %(svalue)s
ORDER BY value
LIMIT 5)
UNION DISTINCT
(SELECT DISTINCT value, type
FROM {TABLE}
WHERE
project_id = %(project_id)s
AND type='{typename.upper()}'
AND value ILIKE %(value)s
ORDER BY value
LIMIT 5)) AS raw;"""
return f"""SELECT DISTINCT value, type
FROM {TABLE}
WHERE
project_id = %(project_id)s
AND type='{typename.upper()}'
AND value ILIKE %(svalue)s
ORDER BY value
LIMIT 10;"""
def __generic_autocomplete(event: Event):
def f(project_id, value, key=None, source=None):
with pg_client.PostgresClient() as cur:
query = __generic_query(event.ui_type, value_length=len(value))
params = {"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value)}
cur.execute(cur.mogrify(query, params))
return helper.list_to_camel_case(cur.fetchall())
return f
def generic_autocomplete_metas(typename):
def f(project_id, text):
with pg_client.PostgresClient() as cur:
params = {"project_id": project_id, "value": helper.string_to_sql_like(text),
"svalue": helper.string_to_sql_like("^" + text)}
if typename == schemas.FilterType.USER_COUNTRY:
params["value"] = tuple(countries.get_country_code_autocomplete(text))
if len(params["value"]) == 0:
return []
query = cur.mogrify(__generic_query(typename, value_length=len(text)), params)
cur.execute(query)
rows = cur.fetchall()
return rows
return f
def __errors_query(source=None, value_length=None):
if value_length is None or value_length > 2:
return f"""((SELECT DISTINCT ON(lg.message)
lg.message AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
FROM {events.EventType.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.message ILIKE %(svalue)s
AND lg.project_id = %(project_id)s
{"AND source = %(source)s" if source is not None else ""}
LIMIT 5)
UNION DISTINCT
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
FROM {events.EventType.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.name ILIKE %(svalue)s
AND lg.project_id = %(project_id)s
{"AND source = %(source)s" if source is not None else ""}
LIMIT 5)
UNION DISTINCT
(SELECT DISTINCT ON(lg.message)
lg.message AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
FROM {events.EventType.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.message ILIKE %(value)s
AND lg.project_id = %(project_id)s
{"AND source = %(source)s" if source is not None else ""}
LIMIT 5)
UNION DISTINCT
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
FROM {events.EventType.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.name ILIKE %(value)s
AND lg.project_id = %(project_id)s
{"AND source = %(source)s" if source is not None else ""}
LIMIT 5));"""
return f"""((SELECT DISTINCT ON(lg.message)
lg.message AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
FROM {events.EventType.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.message ILIKE %(svalue)s
AND lg.project_id = %(project_id)s
{"AND source = %(source)s" if source is not None else ""}
LIMIT 5)
UNION DISTINCT
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
source,
'{events.EventType.ERROR.ui_type}' AS type
FROM {events.EventType.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.name ILIKE %(svalue)s
AND lg.project_id = %(project_id)s
{"AND source = %(source)s" if source is not None else ""}
LIMIT 5));"""
def __search_errors(project_id, value, key=None, source=None):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(__errors_query(source,
value_length=len(value)),
{"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value),
"source": source}))
results = helper.list_to_camel_case(cur.fetchall())
return results
def __search_errors_mobile(project_id, value, key=None, source=None):
if len(value) > 2:
query = f"""(SELECT DISTINCT ON(lg.reason)
lg.reason AS value,
'{events.EventType.CRASH_MOBILE.ui_type}' AS type
FROM {events.EventType.CRASH_MOBILE.table} INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
AND lg.reason ILIKE %(svalue)s
LIMIT 5)
UNION ALL
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
'{events.EventType.CRASH_MOBILE.ui_type}' AS type
FROM {events.EventType.CRASH_MOBILE.table} INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
AND lg.name ILIKE %(svalue)s
LIMIT 5)
UNION ALL
(SELECT DISTINCT ON(lg.reason)
lg.reason AS value,
'{events.EventType.CRASH_MOBILE.ui_type}' AS type
FROM {events.EventType.CRASH_MOBILE.table} INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
AND lg.reason ILIKE %(value)s
LIMIT 5)
UNION ALL
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
'{events.EventType.CRASH_MOBILE.ui_type}' AS type
FROM {events.EventType.CRASH_MOBILE.table} INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
AND lg.name ILIKE %(value)s
LIMIT 5);"""
else:
query = f"""(SELECT DISTINCT ON(lg.reason)
lg.reason AS value,
'{events.EventType.CRASH_MOBILE.ui_type}' AS type
FROM {events.EventType.CRASH_MOBILE.table} INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
AND lg.reason ILIKE %(svalue)s
LIMIT 5)
UNION ALL
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
'{events.EventType.CRASH_MOBILE.ui_type}' AS type
FROM {events.EventType.CRASH_MOBILE.table} INNER JOIN public.crashes_ios AS lg USING (crash_ios_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
AND lg.name ILIKE %(svalue)s
LIMIT 5);"""
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(query, {"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value)}))
results = helper.list_to_camel_case(cur.fetchall())
return results
def __search_metadata(project_id, value, key=None, source=None):
meta_keys = metadata.get(project_id=project_id)
meta_keys = {m["key"]: m["index"] for m in meta_keys}
if len(meta_keys) == 0 or key is not None and key not in meta_keys.keys():
return []
sub_from = []
if key is not None:
meta_keys = {key: meta_keys[key]}
for k in meta_keys.keys():
colname = metadata.index_to_colname(meta_keys[k])
if len(value) > 2:
sub_from.append(f"""((SELECT DISTINCT ON ({colname}) {colname} AS value, '{k}' AS key
FROM public.sessions
WHERE project_id = %(project_id)s
AND {colname} ILIKE %(svalue)s LIMIT 5)
UNION
(SELECT DISTINCT ON ({colname}) {colname} AS value, '{k}' AS key
FROM public.sessions
WHERE project_id = %(project_id)s
AND {colname} ILIKE %(value)s LIMIT 5))
""")
else:
sub_from.append(f"""(SELECT DISTINCT ON ({colname}) {colname} AS value, '{k}' AS key
FROM public.sessions
WHERE project_id = %(project_id)s
AND {colname} ILIKE %(svalue)s LIMIT 5)""")
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""\
SELECT DISTINCT ON(key, value) key, value, 'METADATA' AS TYPE
FROM({" UNION ALL ".join(sub_from)}) AS all_metas
LIMIT 5;""", {"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value)}))
results = helper.list_to_camel_case(cur.fetchall())
return results
TYPE_TO_COLUMN = {
schemas.EventType.CLICK: "label",
schemas.EventType.INPUT: "label",
schemas.EventType.LOCATION: "path",
schemas.EventType.CUSTOM: "name",
schemas.FetchFilterType.FETCH_URL: "path",
schemas.GraphqlFilterType.GRAPHQL_NAME: "name",
schemas.EventType.STATE_ACTION: "name",
# For ERROR, session search matches name OR message;
# for simplicity, the top-10 query uses name only.
schemas.EventType.ERROR: "name",
schemas.FilterType.USER_COUNTRY: "user_country",
schemas.FilterType.USER_CITY: "user_city",
schemas.FilterType.USER_STATE: "user_state",
schemas.FilterType.USER_ID: "user_id",
schemas.FilterType.USER_ANONYMOUS_ID: "user_anonymous_id",
schemas.FilterType.USER_OS: "user_os",
schemas.FilterType.USER_BROWSER: "user_browser",
schemas.FilterType.USER_DEVICE: "user_device",
schemas.FilterType.PLATFORM: "platform",
schemas.FilterType.REV_ID: "rev_id",
schemas.FilterType.REFERRER: "referrer",
schemas.FilterType.UTM_SOURCE: "utm_source",
schemas.FilterType.UTM_MEDIUM: "utm_medium",
schemas.FilterType.UTM_CAMPAIGN: "utm_campaign",
}
TYPE_TO_TABLE = {
schemas.EventType.CLICK: "events.clicks",
schemas.EventType.INPUT: "events.inputs",
schemas.EventType.LOCATION: "events.pages",
schemas.EventType.CUSTOM: "events_common.customs",
schemas.FetchFilterType.FETCH_URL: "events_common.requests",
schemas.GraphqlFilterType.GRAPHQL_NAME: "events.graphql",
schemas.EventType.STATE_ACTION: "events.state_actions",
}
def is_top_supported(event_type):
return TYPE_TO_COLUMN.get(event_type, False)
@CachedResponse(table="or_cache.autocomplete_top_values", ttl=5 * 60)
def get_top_values(project_id, event_type, event_key=None):
with pg_client.PostgresClient() as cur:
if schemas.FilterType.has_value(event_type):
if event_type == schemas.FilterType.METADATA \
and (event_key is None \
or (colname := metadata.get_colname_by_key(project_id=project_id, key=event_key)) is None) \
or event_type != schemas.FilterType.METADATA \
and (colname := TYPE_TO_COLUMN.get(event_type)) is None:
return []
query = f"""WITH raw AS (SELECT DISTINCT {colname} AS c_value,
COUNT(1) OVER (PARTITION BY {colname}) AS row_count,
COUNT(1) OVER () AS total_count
FROM public.sessions
WHERE project_id = %(project_id)s
AND {colname} IS NOT NULL
AND sessions.duration IS NOT NULL
AND sessions.duration > 0
ORDER BY row_count DESC
LIMIT 10)
SELECT c_value AS value, row_count, trunc(row_count * 100 / total_count, 2) AS row_percentage
FROM raw;"""
elif event_type == schemas.EventType.ERROR:
colname = TYPE_TO_COLUMN.get(event_type)
query = f"""WITH raw AS (SELECT DISTINCT {colname} AS c_value,
COUNT(1) OVER (PARTITION BY {colname}) AS row_count,
COUNT(1) OVER () AS total_count
FROM public.errors
WHERE project_id = %(project_id)s
AND {colname} IS NOT NULL
AND {colname} != ''
ORDER BY row_count DESC
LIMIT 10)
SELECT c_value AS value, row_count, trunc(row_count * 100 / total_count,2) AS row_percentage
FROM raw;"""
else:
colname = TYPE_TO_COLUMN.get(event_type)
table = TYPE_TO_TABLE.get(event_type)
query = f"""WITH raw AS (SELECT DISTINCT {colname} AS c_value,
COUNT(1) OVER (PARTITION BY {colname}) AS row_count,
COUNT(1) OVER () AS total_count
FROM {table} INNER JOIN public.sessions USING(session_id)
WHERE project_id = %(project_id)s
AND {colname} IS NOT NULL
AND {colname} != ''
AND sessions.duration IS NOT NULL
AND sessions.duration > 0
ORDER BY row_count DESC
LIMIT 10)
SELECT c_value AS value, row_count, trunc(row_count * 100 / total_count,2) AS row_percentage
FROM raw;"""
params = {"project_id": project_id}
query = cur.mogrify(query, params)
logger.debug("--------------------")
logger.debug(query)
logger.debug("--------------------")
cur.execute(query=query)
results = cur.fetchall()
return helper.list_to_camel_case(results)
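The queries above lean on two ILIKE patterns per lookup: a cheap prefix match (svalue, the value prefixed with "^") always, plus a broader contains match once the input is longer than two characters. A hedged sketch of that parameter shape, assuming string_to_sql_like turns "^foo" into "foo%" and "foo" into "%foo%":

def autocomplete_like_params(value: str) -> dict:
    # Prefix match is always attempted; the contains match is added only
    # for inputs longer than 2 characters, mirroring __get_autocomplete_table.
    params = {"svalue": value + "%"}
    if len(value) > 2:
        params["value"] = "%" + value + "%"
    return params

autocomplete_like_params("se")    # -> {"svalue": "se%"}
autocomplete_like_params("sess")  # -> {"svalue": "sess%", "value": "%sess%"}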


@@ -1,8 +1,7 @@
from chalicelib.core import projects
from chalicelib.core import users
from chalicelib.core.log_tools import datadog, stackdriver, sentry
from chalicelib.core.modules import TENANT_CONDITION
from chalicelib.utils import pg_client
from chalicelib.core import projects, log_tool_datadog, log_tool_stackdriver, log_tool_sentry
from chalicelib.core import users
def get_state(tenant_id):
@@ -13,61 +12,47 @@ def get_state(tenant_id):
if len(pids) > 0:
cur.execute(
cur.mogrify(
"""SELECT EXISTS(( SELECT 1
cur.mogrify("""SELECT EXISTS(( SELECT 1
FROM public.sessions AS s
WHERE s.project_id IN %(ids)s)) AS exists;""",
{"ids": tuple(pids)},
)
{"ids": tuple(pids)})
)
recorded = cur.fetchone()["exists"]
meta = False
if recorded:
query = cur.mogrify(
f"""SELECT EXISTS((SELECT 1
cur.execute("""SELECT EXISTS((SELECT 1
FROM public.projects AS p
LEFT JOIN LATERAL ( SELECT 1
FROM public.sessions
WHERE sessions.project_id = p.project_id
AND sessions.user_id IS NOT NULL
LIMIT 1) AS sessions(user_id) ON (TRUE)
WHERE {TENANT_CONDITION} AND p.deleted_at ISNULL
WHERE p.deleted_at ISNULL
AND ( sessions.user_id IS NOT NULL OR p.metadata_1 IS NOT NULL
OR p.metadata_2 IS NOT NULL OR p.metadata_3 IS NOT NULL
OR p.metadata_4 IS NOT NULL OR p.metadata_5 IS NOT NULL
OR p.metadata_6 IS NOT NULL OR p.metadata_7 IS NOT NULL
OR p.metadata_8 IS NOT NULL OR p.metadata_9 IS NOT NULL
OR p.metadata_10 IS NOT NULL )
)) AS exists;""",
{"tenant_id": tenant_id},
)
cur.execute(query)
)) AS exists;""")
meta = cur.fetchone()["exists"]
return [
{
"task": "Install OpenReplay",
"done": recorded,
"URL": "https://docs.openreplay.com/getting-started/quick-start",
},
{
"task": "Identify Users",
"done": meta,
"URL": "https://docs.openreplay.com/data-privacy-security/metadata",
},
{
"task": "Invite Team Members",
"done": len(users.get_members(tenant_id=tenant_id)) > 1,
"URL": "https://app.openreplay.com/client/manage-users",
},
{
"task": "Integrations",
"done": len(datadog.get_all(tenant_id=tenant_id)) > 0
or len(sentry.get_all(tenant_id=tenant_id)) > 0
or len(stackdriver.get_all(tenant_id=tenant_id)) > 0,
"URL": "https://docs.openreplay.com/integrations",
},
{"task": "Install OpenReplay",
"done": recorded,
"URL": "https://docs.openreplay.com/getting-started/quick-start"},
{"task": "Identify Users",
"done": meta,
"URL": "https://docs.openreplay.com/data-privacy-security/metadata"},
{"task": "Invite Team Members",
"done": len(users.get_members(tenant_id=tenant_id)) > 1,
"URL": "https://app.openreplay.com/client/manage-users"},
{"task": "Integrations",
"done": len(log_tool_datadog.get_all(tenant_id=tenant_id)) > 0 \
or len(log_tool_sentry.get_all(tenant_id=tenant_id)) > 0 \
or len(log_tool_stackdriver.get_all(tenant_id=tenant_id)) > 0,
"URL": "https://docs.openreplay.com/integrations"}
]
@@ -78,66 +63,52 @@ def get_state_installing(tenant_id):
if len(pids) > 0:
cur.execute(
cur.mogrify(
"""SELECT EXISTS(( SELECT 1
cur.mogrify("""SELECT EXISTS(( SELECT 1
FROM public.sessions AS s
WHERE s.project_id IN %(ids)s)) AS exists;""",
{"ids": tuple(pids)},
)
{"ids": tuple(pids)})
)
recorded = cur.fetchone()["exists"]
return {
"task": "Install OpenReplay",
"done": recorded,
"URL": "https://docs.openreplay.com/getting-started/quick-start",
}
return {"task": "Install OpenReplay",
"done": recorded,
"URL": "https://docs.openreplay.com/getting-started/quick-start"}
def get_state_identify_users(tenant_id):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
f"""SELECT EXISTS((SELECT 1
cur.execute("""SELECT EXISTS((SELECT 1
FROM public.projects AS p
LEFT JOIN LATERAL ( SELECT 1
FROM public.sessions
WHERE sessions.project_id = p.project_id
AND sessions.user_id IS NOT NULL
LIMIT 1) AS sessions(user_id) ON (TRUE)
WHERE {TENANT_CONDITION} AND p.deleted_at ISNULL
WHERE p.deleted_at ISNULL
AND ( sessions.user_id IS NOT NULL OR p.metadata_1 IS NOT NULL
OR p.metadata_2 IS NOT NULL OR p.metadata_3 IS NOT NULL
OR p.metadata_4 IS NOT NULL OR p.metadata_5 IS NOT NULL
OR p.metadata_6 IS NOT NULL OR p.metadata_7 IS NOT NULL
OR p.metadata_8 IS NOT NULL OR p.metadata_9 IS NOT NULL
OR p.metadata_10 IS NOT NULL )
)) AS exists;""",
{"tenant_id": tenant_id},
)
cur.execute(query)
)) AS exists;""")
meta = cur.fetchone()["exists"]
return {
"task": "Identify Users",
"done": meta,
"URL": "https://docs.openreplay.com/data-privacy-security/metadata",
}
return {"task": "Identify Users",
"done": meta,
"URL": "https://docs.openreplay.com/data-privacy-security/metadata"}
def get_state_manage_users(tenant_id):
return {
"task": "Invite Team Members",
"done": len(users.get_members(tenant_id=tenant_id)) > 1,
"URL": "https://app.openreplay.com/client/manage-users",
}
return {"task": "Invite Team Members",
"done": len(users.get_members(tenant_id=tenant_id)) > 1,
"URL": "https://app.openreplay.com/client/manage-users"}
def get_state_integrations(tenant_id):
return {
"task": "Integrations",
"done": len(datadog.get_all(tenant_id=tenant_id)) > 0
or len(sentry.get_all(tenant_id=tenant_id)) > 0
or len(stackdriver.get_all(tenant_id=tenant_id)) > 0,
"URL": "https://docs.openreplay.com/integrations",
}
return {"task": "Integrations",
"done": len(log_tool_datadog.get_all(tenant_id=tenant_id)) > 0 \
or len(log_tool_sentry.get_all(tenant_id=tenant_id)) > 0 \
or len(log_tool_stackdriver.get_all(tenant_id=tenant_id)) > 0,
"URL": "https://docs.openreplay.com/integrations"}


@@ -1,35 +0,0 @@
from chalicelib.utils import pg_client
from chalicelib.utils.storage import StorageClient
from decouple import config
def get_canvas_presigned_urls(session_id, project_id):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify("""\
SELECT *
FROM events.canvas_recordings
WHERE session_id = %(session_id)s
ORDER BY timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows = cur.fetchall()
urls = []
for i in range(len(rows)):
params = {
"sessionId": session_id,
"projectId": project_id,
"recordingId": rows[i]["recording_id"]
}
oldKey = "%(sessionId)s/%(recordingId)s.mp4" % params
key = config("CANVAS_PATTERN", default="%(sessionId)s/%(recordingId)s.tar.zst") % params
urls.append(StorageClient.get_presigned_url_for_sharing(
bucket=config("CANVAS_BUCKET", default=config("sessions_bucket")),
expires_in=config("PRESIGNED_URL_EXPIRATION", cast=int, default=900),
key=key
))
urls.append(StorageClient.get_presigned_url_for_sharing(
bucket=config("CANVAS_BUCKET", default=config("sessions_bucket")),
expires_in=config("PRESIGNED_URL_EXPIRATION", cast=int, default=900),
key=oldKey
))
return urls
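The recording keys are produced with old-style %-formatting over a params dict; a quick check of the default pattern with made-up IDs:

params = {"sessionId": 123, "projectId": 7, "recordingId": 9}
key = "%(sessionId)s/%(recordingId)s.tar.zst" % params  # -> "123/9.tar.zst"
old_key = "%(sessionId)s/%(recordingId)s.mp4" % params  # legacy key, also presigned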


@@ -0,0 +1,125 @@
import requests
from decouple import config
from datetime import datetime
from chalicelib.core import webhook
class Slack:
@classmethod
def add_channel(cls, tenant_id, **args):
url = args["url"]
name = args["name"]
if cls.say_hello(url):
return webhook.add(tenant_id=tenant_id,
endpoint=url,
webhook_type="slack",
name=name)
return None
@classmethod
def say_hello(cls, url):
r = requests.post(
url=url,
json={
"attachments": [
{
"text": "Welcome to OpenReplay",
"ts": datetime.now().timestamp(),
}
]
})
if r.status_code != 200:
print("slack integration failed")
print(r.text)
return False
return True
@classmethod
def send_text(cls, tenant_id, webhook_id, text, **args):
integration = cls.__get(tenant_id=tenant_id, integration_id=webhook_id)
if integration is None:
return {"errors": ["slack integration not found"]}
print("====> sending slack notification")
r = requests.post(
url=integration["endpoint"],
json={
"attachments": [
{
"text": text,
"ts": datetime.now().timestamp(),
**args
}
]
})
print(r)
print(r.text)
return {"data": r.text}
@classmethod
def send_batch(cls, tenant_id, webhook_id, attachments):
integration = cls.__get(tenant_id=tenant_id, integration_id=webhook_id)
if integration is None:
return {"errors": ["slack integration not found"]}
print(f"====> sending slack batch notification: {len(attachments)}")
for i in range(0, len(attachments), 100):
r = requests.post(
url=integration["endpoint"],
json={"attachments": attachments[i:i + 100]})
if r.status_code != 200:
print("!!!! something went wrong")
print(r)
print(r.text)
@classmethod
def __share_to_slack(cls, tenant_id, integration_id, fallback, pretext, title, title_link, text):
integration = cls.__get(tenant_id=tenant_id, integration_id=integration_id)
if integration is None:
return {"errors": ["slack integration not found"]}
r = requests.post(
url=integration["endpoint"],
json={
"attachments": [
{
"fallback": fallback,
"pretext": pretext,
"title": title,
"title_link": title_link,
"text": text,
"ts": datetime.now().timestamp()
}
]
})
return r.text
@classmethod
def share_session(cls, tenant_id, project_id, session_id, user, comment, integration_id=None):
args = {"fallback": f"{user} has shared the below session!",
"pretext": f"{user} has shared the below session!",
"title": f"{config('SITE_URL')}/{project_id}/session/{session_id}",
"title_link": f"{config('SITE_URL')}/{project_id}/session/{session_id}",
"text": comment}
return {"data": cls.__share_to_slack(tenant_id, integration_id, **args)}
@classmethod
def share_error(cls, tenant_id, project_id, error_id, user, comment, integration_id=None):
args = {"fallback": f"{user} has shared the below error!",
"pretext": f"{user} has shared the below error!",
"title": f"{config('SITE_URL')}/{project_id}/errors/{error_id}",
"title_link": f"{config('SITE_URL')}/{project_id}/errors/{error_id}",
"text": comment}
return {"data": cls.__share_to_slack(tenant_id, integration_id, **args)}
@classmethod
def has_slack(cls, tenant_id):
integration = cls.__get(tenant_id=tenant_id)
return not (integration is None or len(integration) == 0)
@classmethod
def __get(cls, tenant_id, integration_id=None):
if integration_id is not None:
return webhook.get(tenant_id=tenant_id, webhook_id=integration_id)
integrations = webhook.get_by_type(tenant_id=tenant_id, webhook_type="slack")
if integrations is None or len(integrations) == 0:
return None
return integrations[0]
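send_batch above posts attachments in fixed chunks of 100; the slicing in isolation:

attachments = [{"text": f"alert {i}"} for i in range(250)]
chunks = [attachments[i:i + 100] for i in range(0, len(attachments), 100)]
# -> 3 POSTs carrying 100, 100 and 50 attachments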


@@ -1 +0,0 @@
from . import collaboration_base as _


@@ -1,45 +0,0 @@
from abc import ABC, abstractmethod
import schemas
class BaseCollaboration(ABC):
@classmethod
@abstractmethod
def add(cls, tenant_id, data: schemas.AddCollaborationSchema):
pass
@classmethod
@abstractmethod
def say_hello(cls, url):
pass
@classmethod
@abstractmethod
def send_raw(cls, tenant_id, webhook_id, body):
pass
@classmethod
@abstractmethod
def send_batch(cls, tenant_id, webhook_id, attachments):
pass
@classmethod
@abstractmethod
def __share(cls, tenant_id, integration_id, attachments, extra=None):
pass
@classmethod
@abstractmethod
def share_session(cls, tenant_id, project_id, session_id, user, comment, project_name=None, integration_id=None):
pass
@classmethod
@abstractmethod
def share_error(cls, tenant_id, project_id, error_id, user, comment, project_name=None, integration_id=None):
pass
@classmethod
@abstractmethod
def get_integration(cls, tenant_id, integration_id=None):
pass
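Because every method is declared abstract, the base class enforces the contract at instantiation time; a minimal demonstration:

class Broken(BaseCollaboration):
    pass  # implements none of the abstract classmethods

try:
    Broken()
except TypeError as e:
    print(e)  # Can't instantiate abstract class Broken with abstract methods ...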


@@ -1,171 +0,0 @@
import logging
import requests
from decouple import config
from fastapi import HTTPException, status
import schemas
from chalicelib.core import webhook
from chalicelib.core.collaborations.collaboration_base import BaseCollaboration
logger = logging.getLogger(__name__)
class MSTeams(BaseCollaboration):
@classmethod
def add(cls, tenant_id, data: schemas.AddCollaborationSchema):
if webhook.exists_by_name(tenant_id=tenant_id, name=data.name, exclude_id=None,
webhook_type=schemas.WebhookType.MSTEAMS):
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=f"name already exists.")
if cls.say_hello(data.url):
return webhook.add(tenant_id=tenant_id,
endpoint=data.url.unicode_string(),
webhook_type=schemas.WebhookType.MSTEAMS,
name=data.name)
return None
@classmethod
def say_hello(cls, url):
try:
r = requests.post(
url=url,
json={
"@type": "MessageCard",
"@context": "https://schema.org/extensions",
"summary": "Welcome to OpenReplay",
"title": "Welcome to OpenReplay"
},
timeout=3)
if r.status_code != 200:
logger.warning("MSTeams integration failed")
logger.warning(r.text)
return False
except Exception as e:
logger.warning("!!! MSTeams integration failed")
logger.exception(e)
return False
return True
@classmethod
def send_raw(cls, tenant_id, webhook_id, body):
integration = cls.get_integration(tenant_id=tenant_id, integration_id=webhook_id)
if integration is None:
return {"errors": ["msteams integration not found"]}
try:
r = requests.post(
url=integration["endpoint"],
json=body,
timeout=5)
if r.status_code != 200:
logger.warning(f"!! issue sending msteams raw; webhookId:{webhook_id} code:{r.status_code}")
logger.warning(r.text)
return None
except requests.exceptions.Timeout:
logger.warning(f"!! Timeout sending msteams raw webhookId:{webhook_id}")
return None
except Exception as e:
logger.warning(f"!! Issue sending msteams raw webhookId:{webhook_id}")
logger.warning(e)
return None
return {"data": r.text}
@classmethod
def send_batch(cls, tenant_id, webhook_id, attachments):
integration = cls.get_integration(tenant_id=tenant_id, integration_id=webhook_id)
if integration is None:
return {"errors": ["msteams integration not found"]}
logger.debug(f"====> sending msteams batch notification: {len(attachments)}")
for i in range(0, len(attachments), 50):
part = attachments[i:i + 50]
for j in range(1, len(part), 2):
part.insert(j, {"text": "***"})
r = requests.post(url=integration["endpoint"],
json={
"@type": "MessageCard",
"@context": "http://schema.org/extensions",
"summary": part[0]["activityTitle"],
"sections": part
})
if r.status_code != 200:
logger.warning("!!!! something went wrong")
logger.warning(r.text)
@classmethod
def __share(cls, tenant_id, integration_id, attachement, extra=None):
if extra is None:
extra = {}
integration = cls.get_integration(tenant_id=tenant_id, integration_id=integration_id)
if integration is None:
return {"errors": ["Microsoft Teams integration not found"]}
r = requests.post(
url=integration["endpoint"],
json={
"@type": "MessageCard",
"@context": "http://schema.org/extensions",
"sections": [attachement],
**extra
})
return r.text
@classmethod
def share_session(cls, tenant_id, project_id, session_id, user, comment, project_name=None, integration_id=None):
title = f"*{user}* has shared the below session!"
link = f"{config('SITE_URL')}/{project_id}/session/{session_id}"
args = {
"activityTitle": title,
"facts": [
{
"name": "Session:",
"value": link
}],
"markdown": True
}
if project_name and len(project_name) > 0:
args["activitySubtitle"] = f"On Project *{project_name}*"
if comment and len(comment) > 0:
args["facts"].append({
"name": "Comment:",
"value": comment
})
data = cls.__share(tenant_id, integration_id, attachement=args, extra={"summary": title})
if "errors" in data:
return data
return {"data": data}
@classmethod
def share_error(cls, tenant_id, project_id, error_id, user, comment, project_name=None, integration_id=None):
title = f"*{user}* has shared the below error!"
link = f"{config('SITE_URL')}/{project_id}/errors/{error_id}"
args = {
"activityTitle": title,
"facts": [
{
"name": "Session:",
"value": link
}],
"markdown": True
}
if project_name and len(project_name) > 0:
args["activitySubtitle"] = f"On Project *{project_name}*"
if comment and len(comment) > 0:
args["facts"].append({
"name": "Comment:",
"value": comment
})
data = cls.__share(tenant_id, integration_id, attachement=args, extra={"summary": title})
if "errors" in data:
return data
return {"data": data}
@classmethod
def get_integration(cls, tenant_id, integration_id=None):
if integration_id is not None:
return webhook.get_webhook(tenant_id=tenant_id, webhook_id=integration_id,
webhook_type=schemas.WebhookType.MSTEAMS)
integrations = webhook.get_by_type(tenant_id=tenant_id, webhook_type=schemas.WebhookType.MSTEAMS)
if integrations is None or len(integrations) == 0:
return None
return integrations[0]
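One subtlety in send_batch above: the divider loop reads len(part) once, so the trailing pair of a chunk can end up without a "***" section between them. In isolation, with a four-section chunk:

part = [{"text": "a"}, {"text": "b"}, {"text": "c"}, {"text": "d"}]
for j in range(1, len(part), 2):  # len(part) is evaluated once, here 4 -> j in (1, 3)
    part.insert(j, {"text": "***"})
# part == [a, ***, b, ***, c, d]  (c and d share no divider)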


@@ -1,126 +0,0 @@
from datetime import datetime
import requests
from decouple import config
from fastapi import HTTPException, status
import schemas
from chalicelib.core import webhook
from chalicelib.core.collaborations.collaboration_base import BaseCollaboration
class Slack(BaseCollaboration):
@classmethod
def add(cls, tenant_id, data: schemas.AddCollaborationSchema):
if webhook.exists_by_name(tenant_id=tenant_id, name=data.name, exclude_id=None,
webhook_type=schemas.WebhookType.SLACK):
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=f"name already exists.")
if cls.say_hello(data.url):
return webhook.add(tenant_id=tenant_id,
endpoint=data.url.unicode_string(),
webhook_type=schemas.WebhookType.SLACK,
name=data.name)
return None
@classmethod
def say_hello(cls, url):
r = requests.post(
url=url,
json={
"attachments": [
{
"text": "Welcome to OpenReplay",
"ts": datetime.now().timestamp(),
}
]
})
if r.status_code != 200:
print("slack integration failed")
print(r.text)
return False
return True
@classmethod
def send_raw(cls, tenant_id, webhook_id, body):
integration = cls.get_integration(tenant_id=tenant_id, integration_id=webhook_id)
if integration is None:
return {"errors": ["slack integration not found"]}
try:
r = requests.post(
url=integration["endpoint"],
json=body,
timeout=5)
if r.status_code != 200:
print(f"!! issue sending slack raw; webhookId:{webhook_id} code:{r.status_code}")
print(r.text)
return None
except requests.exceptions.Timeout:
print(f"!! Timeout sending slack raw webhookId:{webhook_id}")
return None
except Exception as e:
print(f"!! Issue sending slack raw webhookId:{webhook_id}")
print(str(e))
return None
return {"data": r.text}
@classmethod
def send_batch(cls, tenant_id, webhook_id, attachments):
integration = cls.get_integration(tenant_id=tenant_id, integration_id=webhook_id)
if integration is None:
return {"errors": ["slack integration not found"]}
print(f"====> sending slack batch notification: {len(attachments)}")
for i in range(0, len(attachments), 100):
r = requests.post(
url=integration["endpoint"],
json={"attachments": attachments[i:i + 100]})
if r.status_code != 200:
print("!!!! something went wrong while sending to:")
print(integration)
print(r)
print(r.text)
@classmethod
def __share(cls, tenant_id, integration_id, attachement, extra=None):
if extra is None:
extra = {}
integration = cls.get_integration(tenant_id=tenant_id, integration_id=integration_id)
if integration is None:
return {"errors": ["slack integration not found"]}
attachement["ts"] = datetime.now().timestamp()
r = requests.post(url=integration["endpoint"], json={"attachments": [attachement], **extra})
return r.text
@classmethod
def share_session(cls, tenant_id, project_id, session_id, user, comment, project_name=None, integration_id=None):
args = {"fallback": f"{user} has shared the below session!",
"pretext": f"{user} has shared the below session!",
"title": f"{config('SITE_URL')}/{project_id}/session/{session_id}",
"title_link": f"{config('SITE_URL')}/{project_id}/session/{session_id}",
"text": comment}
data = cls.__share(tenant_id, integration_id, attachement=args)
if "errors" in data:
return data
return {"data": data}
@classmethod
def share_error(cls, tenant_id, project_id, error_id, user, comment, project_name=None, integration_id=None):
args = {"fallback": f"{user} has shared the below error!",
"pretext": f"{user} has shared the below error!",
"title": f"{config('SITE_URL')}/{project_id}/errors/{error_id}",
"title_link": f"{config('SITE_URL')}/{project_id}/errors/{error_id}",
"text": comment}
data = cls.__share(tenant_id, integration_id, attachement=args)
if "errors" in data:
return data
return {"data": data}
@classmethod
def get_integration(cls, tenant_id, integration_id=None):
if integration_id is not None:
return webhook.get_webhook(tenant_id=tenant_id, webhook_id=integration_id,
webhook_type=schemas.WebhookType.SLACK)
integrations = webhook.get_by_type(tenant_id=tenant_id, webhook_type=schemas.WebhookType.SLACK)
if integrations is None or len(integrations) == 0:
return None
return integrations[0]
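A hedged usage sketch (the IDs and comment are invented): with integration_id left as None, get_integration falls back to the tenant's first webhook of type slack.

resp = Slack.share_session(tenant_id=1, project_id=42, session_id=90001,
                           user="jane@acme.io", comment="checkout bug repro")
if "errors" in resp:
    print(resp["errors"])   # e.g. no slack integration configured
else:
    print("shared:", resp["data"])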


@@ -1,296 +0,0 @@
COUNTRIES = {
"AC": "Ascension Island",
"AD": "Andorra",
"AE": "United Arab Emirates",
"AF": "Afghanistan",
"AG": "Antigua And Barbuda",
"AI": "Anguilla",
"AL": "Albania",
"AM": "Armenia",
"AN": "Netherlands Antilles",
"AO": "Angola",
"AQ": "Antarctica",
"AR": "Argentina",
"AS": "American Samoa",
"AT": "Austria",
"AU": "Australia",
"AW": "Aruba",
"AX": "Åland Islands",
"AZ": "Azerbaijan",
"BA": "Bosnia & Herzegovina",
"BB": "Barbados",
"BD": "Bangladesh",
"BE": "Belgium",
"BF": "Burkina Faso",
"BG": "Bulgaria",
"BH": "Bahrain",
"BI": "Burundi",
"BJ": "Benin",
"BL": "Saint Barthélemy",
"BM": "Bermuda",
"BN": "Brunei Darussalam",
"BO": "Bolivia",
"BQ": "Bonaire, Saint Eustatius And Saba",
"BR": "Brazil",
"BS": "Bahamas",
"BT": "Bhutan",
"BU": "Burma",
"BV": "Bouvet Island",
"BW": "Botswana",
"BY": "Belarus",
"BZ": "Belize",
"CA": "Canada",
"CC": "Cocos Islands",
"CD": "Congo",
"CF": "Central African Republic",
"CG": "Congo",
"CH": "Switzerland",
"CI": "Côte d'Ivoire",
"CK": "Cook Islands",
"CL": "Chile",
"CM": "Cameroon",
"CN": "China",
"CO": "Colombia",
"CP": "Clipperton Island",
"CR": "Costa Rica",
"CS": "Serbia and Montenegro",
"CT": "Canton and Enderbury Islands",
"CU": "Cuba",
"CV": "Cabo Verde",
"CW": "Curacao",
"CX": "Christmas Island",
"CY": "Cyprus",
"CZ": "Czech Republic",
"DD": "Germany",
"DE": "Germany",
"DG": "Diego Garcia",
"DJ": "Djibouti",
"DK": "Denmark",
"DM": "Dominica",
"DO": "Dominican Republic",
"DY": "Dahomey",
"DZ": "Algeria",
"EA": "Ceuta, Mulilla",
"EC": "Ecuador",
"EE": "Estonia",
"EG": "Egypt",
"EH": "Western Sahara",
"ER": "Eritrea",
"ES": "Spain",
"ET": "Ethiopia",
"FI": "Finland",
"FJ": "Fiji",
"FK": "Falkland Islands",
"FM": "Micronesia",
"FO": "Faroe Islands",
"FQ": "French Southern and Antarctic Territories",
"FR": "France",
"FX": "France, Metropolitan",
"GA": "Gabon",
"GB": "United Kingdom",
"GD": "Grenada",
"GE": "Georgia",
"GF": "French Guiana",
"GG": "Guernsey",
"GH": "Ghana",
"GI": "Gibraltar",
"GL": "Greenland",
"GM": "Gambia",
"GN": "Guinea",
"GP": "Guadeloupe",
"GQ": "Equatorial Guinea",
"GR": "Greece",
"GS": "South Georgia And The South Sandwich Islands",
"GT": "Guatemala",
"GU": "Guam",
"GW": "Guinea-bissau",
"GY": "Guyana",
"HK": "Hong Kong",
"HM": "Heard Island And McDonald Islands",
"HN": "Honduras",
"HR": "Croatia",
"HT": "Haiti",
"HU": "Hungary",
"HV": "Upper Volta",
"IC": "Canary Islands",
"ID": "Indonesia",
"IE": "Ireland",
"IL": "Israel",
"IM": "Isle Of Man",
"IN": "India",
"IO": "British Indian Ocean Territory",
"IQ": "Iraq",
"IR": "Iran",
"IS": "Iceland",
"IT": "Italy",
"JE": "Jersey",
"JM": "Jamaica",
"JO": "Jordan",
"JP": "Japan",
"JT": "Johnston Island",
"KE": "Kenya",
"KG": "Kyrgyzstan",
"KH": "Cambodia",
"KI": "Kiribati",
"KM": "Comoros",
"KN": "Saint Kitts And Nevis",
"KP": "Korea",
"KR": "Korea",
"KW": "Kuwait",
"KY": "Cayman Islands",
"KZ": "Kazakhstan",
"LA": "Laos",
"LB": "Lebanon",
"LC": "Saint Lucia",
"LI": "Liechtenstein",
"LK": "Sri Lanka",
"LR": "Liberia",
"LS": "Lesotho",
"LT": "Lithuania",
"LU": "Luxembourg",
"LV": "Latvia",
"LY": "Libya",
"MA": "Morocco",
"MC": "Monaco",
"MD": "Moldova",
"ME": "Montenegro",
"MF": "Saint Martin",
"MG": "Madagascar",
"MH": "Marshall Islands",
"MI": "Midway Islands",
"MK": "Macedonia",
"ML": "Mali",
"MM": "Myanmar",
"MN": "Mongolia",
"MO": "Macao",
"MP": "Northern Mariana Islands",
"MQ": "Martinique",
"MR": "Mauritania",
"MS": "Montserrat",
"MT": "Malta",
"MU": "Mauritius",
"MV": "Maldives",
"MW": "Malawi",
"MX": "Mexico",
"MY": "Malaysia",
"MZ": "Mozambique",
"NA": "Namibia",
"NC": "New Caledonia",
"NE": "Niger",
"NF": "Norfolk Island",
"NG": "Nigeria",
"NH": "New Hebrides",
"NI": "Nicaragua",
"NL": "Netherlands",
"NO": "Norway",
"NP": "Nepal",
"NQ": "Dronning Maud Land",
"NR": "Nauru",
"NT": "Neutral Zone",
"NU": "Niue",
"NZ": "New Zealand",
"OM": "Oman",
"PA": "Panama",
"PC": "Pacific Islands",
"PE": "Peru",
"PF": "French Polynesia",
"PG": "Papua New Guinea",
"PH": "Philippines",
"PK": "Pakistan",
"PL": "Poland",
"PM": "Saint Pierre And Miquelon",
"PN": "Pitcairn",
"PR": "Puerto Rico",
"PS": "Palestine",
"PT": "Portugal",
"PU": "U.S. Miscellaneous Pacific Islands",
"PW": "Palau",
"PY": "Paraguay",
"PZ": "Panama Canal Zone",
"QA": "Qatar",
"RE": "Reunion",
"RH": "Southern Rhodesia",
"RO": "Romania",
"RS": "Serbia",
"RU": "Russian Federation",
"RW": "Rwanda",
"SA": "Saudi Arabia",
"SB": "Solomon Islands",
"SC": "Seychelles",
"SD": "Sudan",
"SE": "Sweden",
"SG": "Singapore",
"SH": "Saint Helena, Ascension And Tristan Da Cunha",
"SI": "Slovenia",
"SJ": "Svalbard And Jan Mayen",
"SK": "Slovakia",
"SL": "Sierra Leone",
"SM": "San Marino",
"SN": "Senegal",
"SO": "Somalia",
"SR": "Suriname",
"SS": "South Sudan",
"ST": "Sao Tome and Principe",
"SU": "USSR",
"SV": "El Salvador",
"SX": "Sint Maarten",
"SY": "Syrian Arab Republic",
"SZ": "Swaziland",
"TA": "Tristan de Cunha",
"TC": "Turks And Caicos Islands",
"TD": "Chad",
"TF": "French Southern Territories",
"TG": "Togo",
"TH": "Thailand",
"TJ": "Tajikistan",
"TK": "Tokelau",
"TL": "Timor-Leste",
"TM": "Turkmenistan",
"TN": "Tunisia",
"TO": "Tonga",
"TP": "East Timor",
"TR": "Turkey",
"TT": "Trinidad And Tobago",
"TV": "Tuvalu",
"TW": "Taiwan",
"TZ": "Tanzania",
"UA": "Ukraine",
"UG": "Uganda",
"UM": "United States Minor Outlying Islands",
"UN": "United Nations",
"US": "United States",
"UY": "Uruguay",
"UZ": "Uzbekistan",
"VA": "Vatican City State",
"VC": "Saint Vincent And The Grenadines",
"VD": "VietNam",
"VE": "Venezuela",
"VG": "Virgin Islands (British)",
"VI": "Virgin Islands (US)",
"VN": "VietNam",
"VU": "Vanuatu",
"WF": "Wallis And Futuna",
"WK": "Wake Island",
"WS": "Samoa",
"XK": "Kosovo",
"YD": "Yemen",
"YE": "Yemen",
"YT": "Mayotte",
"YU": "Yugoslavia",
"ZA": "South Africa",
"ZM": "Zambia",
"ZR": "Zaire",
"ZW": "Zimbabwe",
}
def get_country_code_autocomplete(text):
if text is None or len(text) == 0:
return []
results = []
for code in COUNTRIES:
if text.lower() in code.lower() \
or text.lower() in COUNTRIES[code].lower():
results.append(code)
return results
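The lookup is a plain case-insensitive substring match against both the code and the name, so short inputs can fan out; for example:

get_country_code_autocomplete("zam")  # -> ["MZ", "ZM"]  (Mozambique, Zambia)
get_country_code_autocomplete("")     # -> []  (empty input short-circuits)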


@@ -0,0 +1,538 @@
import json
from typing import Union
import schemas
from chalicelib.core import sessions, funnels, errors, issues
from chalicelib.utils import helper, pg_client
from chalicelib.utils.TimeUTC import TimeUTC
PIE_CHART_GROUP = 5
def __try_live(project_id, data: schemas.TryCustomMetricsPayloadSchema):
results = []
for i, s in enumerate(data.series):
s.filter.startDate = data.startTimestamp
s.filter.endDate = data.endTimestamp
results.append(sessions.search2_series(data=s.filter, project_id=project_id, density=data.density,
view_type=data.view_type, metric_type=data.metric_type,
metric_of=data.metric_of, metric_value=data.metric_value))
if data.view_type == schemas.MetricTimeseriesViewType.progress:
r = {"count": results[-1]}
diff = s.filter.endDate - s.filter.startDate
s.filter.endDate = s.filter.startDate
s.filter.startDate = s.filter.endDate - diff
r["previousCount"] = sessions.search2_series(data=s.filter, project_id=project_id, density=data.density,
view_type=data.view_type, metric_type=data.metric_type,
metric_of=data.metric_of, metric_value=data.metric_value)
r["countProgress"] = helper.__progress(old_val=r["previousCount"], new_val=r["count"])
# r["countProgress"] = ((r["count"] - r["previousCount"]) / r["previousCount"]) * 100 \
# if r["previousCount"] > 0 else 0
r["seriesName"] = s.name if s.name else i + 1
r["seriesId"] = s.series_id if s.series_id else None
results[-1] = r
elif data.view_type == schemas.MetricTableViewType.pie_chart:
if len(results[i].get("values", [])) > PIE_CHART_GROUP:
results[i]["values"] = results[i]["values"][:PIE_CHART_GROUP] \
+ [{
"name": "Others", "group": True,
"sessionCount": sum(r["sessionCount"] for r in results[i]["values"][PIE_CHART_GROUP:])
}]
return results
def __is_funnel_chart(data: schemas.TryCustomMetricsPayloadSchema):
return data.metric_type == schemas.MetricType.funnel
def __get_funnel_chart(project_id, data: schemas.TryCustomMetricsPayloadSchema):
if len(data.series) == 0:
return {
"stages": [],
"totalDropDueToIssues": 0
}
data.series[0].filter.startDate = data.startTimestamp
data.series[0].filter.endDate = data.endTimestamp
return funnels.get_top_insights_on_the_fly_widget(project_id=project_id, data=data.series[0].filter)
def __is_errors_list(data):
return data.metric_type == schemas.MetricType.table \
and data.metric_of == schemas.TableMetricOfType.errors
def __get_errors_list(project_id, user_id, data):
if len(data.series) == 0:
return {
"total": 0,
"errors": []
}
data.series[0].filter.startDate = data.startTimestamp
data.series[0].filter.endDate = data.endTimestamp
data.series[0].filter.page = data.page
data.series[0].filter.limit = data.limit
return errors.search(data.series[0].filter, project_id=project_id, user_id=user_id)
def __is_sessions_list(data):
return data.metric_type == schemas.MetricType.table \
and data.metric_of == schemas.TableMetricOfType.sessions
def __get_sessions_list(project_id, user_id, data):
if len(data.series) == 0:
print("empty series")
return {
"total": 0,
"sessions": []
}
data.series[0].filter.startDate = data.startTimestamp
data.series[0].filter.endDate = data.endTimestamp
data.series[0].filter.page = data.page
data.series[0].filter.limit = data.limit
return sessions.search2_pg(data=data.series[0].filter, project_id=project_id, user_id=user_id)
def merged_live(project_id, data: schemas.TryCustomMetricsPayloadSchema, user_id=None):
if __is_funnel_chart(data):
return __get_funnel_chart(project_id=project_id, data=data)
elif __is_errors_list(data):
return __get_errors_list(project_id=project_id, user_id=user_id, data=data)
elif __is_sessions_list(data):
return __get_sessions_list(project_id=project_id, user_id=user_id, data=data)
series_charts = __try_live(project_id=project_id, data=data)
if data.view_type == schemas.MetricTimeseriesViewType.progress or data.metric_type == schemas.MetricType.table:
return series_charts
results = [{}] * len(series_charts[0])
for i in range(len(results)):
for j, series_chart in enumerate(series_charts):
results[i] = {**results[i], "timestamp": series_chart[i]["timestamp"],
data.series[j].name if data.series[j].name else j + 1: series_chart[i]["count"]}
return results
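The final loop in merged_live pivots the per-series charts into one row per timestamp, keyed by the series name (or its 1-based index when unnamed); the shape, with toy data:

series_charts = [
    [{"timestamp": 0, "count": 3}, {"timestamp": 60, "count": 5}],  # named "A"
    [{"timestamp": 0, "count": 1}, {"timestamp": 60, "count": 2}],  # unnamed -> key 2
]
names = ["A", None]
rows = [{}] * len(series_charts[0])
for i in range(len(rows)):
    for j, chart in enumerate(series_charts):
        rows[i] = {**rows[i], "timestamp": chart[i]["timestamp"],
                   (names[j] if names[j] else j + 1): chart[i]["count"]}
# rows == [{"timestamp": 0, "A": 3, 2: 1}, {"timestamp": 60, "A": 5, 2: 2}]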
def __merge_metric_with_data(metric, data: Union[schemas.CustomMetricChartPayloadSchema,
schemas.CustomMetricSessionsPayloadSchema]) \
-> Union[schemas.CreateCustomMetricsSchema, None]:
if data.series is not None and len(data.series) > 0:
metric["series"] = data.series
metric: schemas.CreateCustomMetricsSchema = schemas.CreateCustomMetricsSchema.parse_obj({**data.dict(), **metric})
if len(data.filters) > 0 or len(data.events) > 0:
for s in metric.series:
if len(data.filters) > 0:
s.filter.filters += data.filters
if len(data.events) > 0:
s.filter.events += data.events
return metric
def make_chart(project_id, user_id, metric_id, data: schemas.CustomMetricChartPayloadSchema, metric=None):
if metric is None:
metric = get(metric_id=metric_id, project_id=project_id, user_id=user_id, flatten=False)
if metric is None:
return None
metric: schemas.CreateCustomMetricsSchema = __merge_metric_with_data(metric=metric, data=data)
return merged_live(project_id=project_id, data=metric, user_id=user_id)
# if __is_funnel_chart(metric):
# return __get_funnel_chart(project_id=project_id, data=metric)
# elif __is_errors_list(metric):
# return __get_errors_list(project_id=project_id, user_id=user_id, data=metric)
#
# series_charts = __try_live(project_id=project_id, data=metric)
# if metric.view_type == schemas.MetricTimeseriesViewType.progress or metric.metric_type == schemas.MetricType.table:
# return series_charts
# results = [{}] * len(series_charts[0])
# for i in range(len(results)):
# for j, series_chart in enumerate(series_charts):
# results[i] = {**results[i], "timestamp": series_chart[i]["timestamp"],
# metric.series[j].name: series_chart[i]["count"]}
# return results
def get_sessions(project_id, user_id, metric_id, data: schemas.CustomMetricSessionsPayloadSchema):
metric = get(metric_id=metric_id, project_id=project_id, user_id=user_id, flatten=False)
if metric is None:
return None
metric: schemas.CreateCustomMetricsSchema = __merge_metric_with_data(metric=metric, data=data)
if metric is None:
return None
results = []
for s in metric.series:
s.filter.startDate = data.startTimestamp
s.filter.endDate = data.endTimestamp
s.filter.limit = data.limit
s.filter.page = data.page
results.append({"seriesId": s.series_id, "seriesName": s.name,
**sessions.search2_pg(data=s.filter, project_id=project_id, user_id=user_id)})
return results
def get_funnel_issues(project_id, user_id, metric_id, data: schemas.CustomMetricSessionsPayloadSchema):
metric = get(metric_id=metric_id, project_id=project_id, user_id=user_id, flatten=False)
if metric is None:
return None
metric: schemas.CreateCustomMetricsSchema = __merge_metric_with_data(metric=metric, data=data)
if metric is None:
return None
for s in metric.series:
s.filter.startDate = data.startTimestamp
s.filter.endDate = data.endTimestamp
s.filter.limit = data.limit
s.filter.page = data.page
return {"seriesId": s.series_id, "seriesName": s.name,
**funnels.get_issues_on_the_fly_widget(project_id=project_id, data=s.filter)}
def get_errors_list(project_id, user_id, metric_id, data: schemas.CustomMetricSessionsPayloadSchema):
metric = get(metric_id=metric_id, project_id=project_id, user_id=user_id, flatten=False)
if metric is None:
return None
metric: schemas.CreateCustomMetricsSchema = __merge_metric_with_data(metric=metric, data=data)
if metric is None:
return None
for s in metric.series:
s.filter.startDate = data.startTimestamp
s.filter.endDate = data.endTimestamp
s.filter.limit = data.limit
s.filter.page = data.page
return {"seriesId": s.series_id, "seriesName": s.name,
**errors.search(data=s.filter, project_id=project_id, user_id=user_id)}
def try_sessions(project_id, user_id, data: schemas.CustomMetricSessionsPayloadSchema):
results = []
if data.series is None:
return results
for s in data.series:
s.filter.startDate = data.startTimestamp
s.filter.endDate = data.endTimestamp
s.filter.limit = data.limit
s.filter.page = data.page
results.append({"seriesId": None, "seriesName": s.name,
**sessions.search2_pg(data=s.filter, project_id=project_id, user_id=user_id)})
return results
def create(project_id, user_id, data: schemas.CreateCustomMetricsSchema, dashboard=False):
with pg_client.PostgresClient() as cur:
_data = {}
for i, s in enumerate(data.series):
for k in s.dict().keys():
_data[f"{k}_{i}"] = s.__getattribute__(k)
_data[f"index_{i}"] = i
_data[f"filter_{i}"] = s.filter.json()
series_len = len(data.series)
data.series = None
params = {"user_id": user_id, "project_id": project_id,
"default_config": json.dumps(data.config.dict()),
**data.dict(), **_data}
query = cur.mogrify(f"""\
WITH m AS (INSERT INTO metrics (project_id, user_id, name, is_public,
view_type, metric_type, metric_of, metric_value,
metric_format, default_config)
VALUES (%(project_id)s, %(user_id)s, %(name)s, %(is_public)s,
%(view_type)s, %(metric_type)s, %(metric_of)s, %(metric_value)s,
%(metric_format)s, %(default_config)s)
RETURNING *)
INSERT
INTO metric_series(metric_id, index, name, filter)
VALUES {",".join([f"((SELECT metric_id FROM m), %(index_{i})s, %(name_{i})s, %(filter_{i})s::jsonb)"
for i in range(series_len)])}
RETURNING metric_id;""", params)
cur.execute(
query
)
r = cur.fetchone()
if dashboard:
return r["metric_id"]
return {"data": get(metric_id=r["metric_id"], project_id=project_id, user_id=user_id)}
def update(metric_id, user_id, project_id, data: schemas.UpdateCustomMetricsSchema):
metric = get(metric_id=metric_id, project_id=project_id, user_id=user_id, flatten=False)
if metric is None:
return None
series_ids = [r["seriesId"] for r in metric["series"]]
n_series = []
d_series_ids = []
u_series = []
u_series_ids = []
params = {"metric_id": metric_id, "is_public": data.is_public, "name": data.name,
"user_id": user_id, "project_id": project_id, "view_type": data.view_type,
"metric_type": data.metric_type, "metric_of": data.metric_of,
"metric_value": data.metric_value, "metric_format": data.metric_format}
for i, s in enumerate(data.series):
prefix = "u_"
if s.index is None:
s.index = i
if s.series_id is None or s.series_id not in series_ids:
n_series.append({"i": i, "s": s})
prefix = "n_"
else:
u_series.append({"i": i, "s": s})
u_series_ids.append(s.series_id)
ns = s.dict()
for k in ns.keys():
if k == "filter":
ns[k] = json.dumps(ns[k])
params[f"{prefix}{k}_{i}"] = ns[k]
for i in series_ids:
if i not in u_series_ids:
d_series_ids.append(i)
params["d_series_ids"] = tuple(d_series_ids)
with pg_client.PostgresClient() as cur:
sub_queries = []
if len(n_series) > 0:
sub_queries.append(f"""\
n AS (INSERT INTO metric_series (metric_id, index, name, filter)
VALUES {",".join([f"(%(metric_id)s, %(n_index_{s['i']})s, %(n_name_{s['i']})s, %(n_filter_{s['i']})s::jsonb)"
for s in n_series])}
RETURNING 1)""")
if len(u_series) > 0:
sub_queries.append(f"""\
u AS (UPDATE metric_series
SET name=series.name,
filter=series.filter,
index=series.index
FROM (VALUES {",".join([f"(%(u_series_id_{s['i']})s,%(u_index_{s['i']})s,%(u_name_{s['i']})s,%(u_filter_{s['i']})s::jsonb)"
for s in u_series])}) AS series(series_id, index, name, filter)
WHERE metric_series.metric_id =%(metric_id)s AND metric_series.series_id=series.series_id
RETURNING 1)""")
if len(d_series_ids) > 0:
sub_queries.append("""\
d AS (DELETE FROM metric_series WHERE metric_id =%(metric_id)s AND series_id IN %(d_series_ids)s
RETURNING 1)""")
query = cur.mogrify(f"""\
{"WITH " if len(sub_queries) > 0 else ""}{",".join(sub_queries)}
UPDATE metrics
SET name = %(name)s, is_public= %(is_public)s,
view_type= %(view_type)s, metric_type= %(metric_type)s,
metric_of= %(metric_of)s, metric_value= %(metric_value)s,
metric_format= %(metric_format)s,
edited_at = timezone('utc'::text, now())
WHERE metric_id = %(metric_id)s
AND project_id = %(project_id)s
AND (user_id = %(user_id)s OR is_public)
RETURNING metric_id;""", params)
cur.execute(query)
return get(metric_id=metric_id, project_id=project_id, user_id=user_id)
def get_all(project_id, user_id, include_series=False):
with pg_client.PostgresClient() as cur:
sub_join = ""
if include_series:
sub_join = """LEFT JOIN LATERAL (SELECT COALESCE(jsonb_agg(metric_series.* ORDER BY index),'[]'::jsonb) AS series
FROM metric_series
WHERE metric_series.metric_id = metrics.metric_id
AND metric_series.deleted_at ISNULL
) AS metric_series ON (TRUE)"""
cur.execute(
cur.mogrify(
f"""SELECT *
FROM metrics
{sub_join}
LEFT JOIN LATERAL (SELECT COALESCE(jsonb_agg(connected_dashboards.* ORDER BY is_public,name),'[]'::jsonb) AS dashboards
FROM (SELECT DISTINCT dashboard_id, name, is_public
FROM dashboards INNER JOIN dashboard_widgets USING (dashboard_id)
WHERE deleted_at ISNULL
AND dashboard_widgets.metric_id = metrics.metric_id
AND project_id = %(project_id)s
AND ((dashboards.user_id = %(user_id)s OR is_public))) AS connected_dashboards
) AS connected_dashboards ON (TRUE)
LEFT JOIN LATERAL (SELECT email AS owner_email
FROM users
WHERE deleted_at ISNULL
AND users.user_id = metrics.user_id
) AS owner ON (TRUE)
WHERE metrics.project_id = %(project_id)s
AND metrics.deleted_at ISNULL
AND (user_id = %(user_id)s OR metrics.is_public)
ORDER BY metrics.edited_at DESC, metrics.created_at DESC;""",
{"project_id": project_id, "user_id": user_id}
)
)
rows = cur.fetchall()
if include_series:
for r in rows:
# r["created_at"] = TimeUTC.datetime_to_timestamp(r["created_at"])
for s in r["series"]:
s["filter"] = helper.old_search_payload_to_flat(s["filter"])
else:
for r in rows:
r["created_at"] = TimeUTC.datetime_to_timestamp(r["created_at"])
r["edited_at"] = TimeUTC.datetime_to_timestamp(r["edited_at"])
rows = helper.list_to_camel_case(rows)
return rows
def delete(project_id, metric_id, user_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
UPDATE public.metrics
SET deleted_at = timezone('utc'::text, now()), edited_at = timezone('utc'::text, now())
WHERE project_id = %(project_id)s
AND metric_id = %(metric_id)s
AND (user_id = %(user_id)s OR is_public);""",
{"metric_id": metric_id, "project_id": project_id, "user_id": user_id})
)
return {"state": "success"}
def get(metric_id, project_id, user_id, flatten=True):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(
"""SELECT *
FROM metrics
LEFT JOIN LATERAL (SELECT COALESCE(jsonb_agg(metric_series.* ORDER BY index),'[]'::jsonb) AS series
FROM metric_series
WHERE metric_series.metric_id = metrics.metric_id
AND metric_series.deleted_at ISNULL
) AS metric_series ON (TRUE)
LEFT JOIN LATERAL (SELECT COALESCE(jsonb_agg(connected_dashboards.* ORDER BY is_public,name),'[]'::jsonb) AS dashboards
FROM (SELECT dashboard_id, name, is_public
FROM dashboards
WHERE deleted_at ISNULL
AND project_id = %(project_id)s
AND ((user_id = %(user_id)s OR is_public))) AS connected_dashboards
) AS connected_dashboards ON (TRUE)
LEFT JOIN LATERAL (SELECT email AS owner_email
FROM users
WHERE deleted_at ISNULL
AND users.user_id = metrics.user_id
) AS owner ON (TRUE)
WHERE metrics.project_id = %(project_id)s
AND metrics.deleted_at ISNULL
AND (metrics.user_id = %(user_id)s OR metrics.is_public)
AND metrics.metric_id = %(metric_id)s
ORDER BY created_at;""",
{"metric_id": metric_id, "project_id": project_id, "user_id": user_id}
)
)
row = cur.fetchone()
if row is None:
return None
row["created_at"] = TimeUTC.datetime_to_timestamp(row["created_at"])
row["edited_at"] = TimeUTC.datetime_to_timestamp(row["edited_at"])
if flatten:
for s in row["series"]:
s["filter"] = helper.old_search_payload_to_flat(s["filter"])
return helper.dict_to_camel_case(row)
def get_with_template(metric_id, project_id, user_id, include_dashboard=True):
with pg_client.PostgresClient() as cur:
sub_query = ""
if include_dashboard:
sub_query = """LEFT JOIN LATERAL (SELECT COALESCE(jsonb_agg(connected_dashboards.* ORDER BY is_public,name),'[]'::jsonb) AS dashboards
FROM (SELECT dashboard_id, name, is_public
FROM dashboards
WHERE deleted_at ISNULL
AND project_id = %(project_id)s
AND ((user_id = %(user_id)s OR is_public))) AS connected_dashboards
) AS connected_dashboards ON (TRUE)"""
cur.execute(
cur.mogrify(
f"""SELECT *
FROM metrics
LEFT JOIN LATERAL (SELECT COALESCE(jsonb_agg(metric_series.* ORDER BY index),'[]'::jsonb) AS series
FROM metric_series
WHERE metric_series.metric_id = metrics.metric_id
AND metric_series.deleted_at ISNULL
) AS metric_series ON (TRUE)
{sub_query}
WHERE (metrics.project_id = %(project_id)s OR metrics.project_id ISNULL)
AND metrics.deleted_at ISNULL
AND (metrics.user_id = %(user_id)s OR metrics.is_public)
AND metrics.metric_id = %(metric_id)s
ORDER BY created_at;""",
{"metric_id": metric_id, "project_id": project_id, "user_id": user_id}
)
)
row = cur.fetchone()
return helper.dict_to_camel_case(row)
def get_series_for_alert(project_id, user_id):
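    """List the series of the project's timeseries metrics in the shape the
    alerts UI expects (value, name, unit, predefined flag)."""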
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(
"""SELECT series_id AS value,
metrics.name || '.' || (COALESCE(metric_series.name, 'series ' || index)) || '.count' AS name,
'count' AS unit,
FALSE AS predefined,
metric_id,
series_id
FROM metric_series
INNER JOIN metrics USING (metric_id)
WHERE metrics.deleted_at ISNULL
AND metrics.project_id = %(project_id)s
AND metrics.metric_type = 'timeseries'
AND (user_id = %(user_id)s OR is_public)
ORDER BY name;""",
{"project_id": project_id, "user_id": user_id}
)
)
rows = cur.fetchall()
return helper.list_to_camel_case(rows)
def change_state(project_id, metric_id, user_id, status):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
UPDATE public.metrics
SET active = %(status)s
WHERE metric_id = %(metric_id)s
AND (user_id = %(user_id)s OR is_public);""",
{"metric_id": metric_id, "status": status, "user_id": user_id})
)
return get(metric_id=metric_id, project_id=project_id, user_id=user_id)
def get_funnel_sessions_by_issue(user_id, project_id, metric_id, issue_id,
data: schemas.CustomMetricSessionsPayloadSchema
# , range_value=None, start_date=None, end_date=None
):
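    """Locate the given issue among the on-the-fly funnel issues of the metric's
    first series and return the sessions it affects; zeroed impact counters are
    used when the issue is no longer part of the funnel."""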
metric = get(metric_id=metric_id, project_id=project_id, user_id=user_id, flatten=False)
if metric is None:
return None
metric: schemas.CreateCustomMetricsSchema = __merge_metric_with_data(metric=metric, data=data)
if metric is None:
return None
for s in metric.series:
s.filter.startDate = data.startTimestamp
s.filter.endDate = data.endTimestamp
s.filter.limit = data.limit
s.filter.page = data.page
issues_list = funnels.get_issues_on_the_fly_widget(project_id=project_id, data=s.filter).get("issues", {})
issues_list = issues_list.get("significant", []) + issues_list.get("insignificant", [])
issue = None
for i in issues_list:
if i.get("issueId", "") == issue_id:
issue = i
break
if issue is None:
issue = issues.get(project_id=project_id, issue_id=issue_id)
if issue is not None:
issue = {**issue,
"affectedSessions": 0,
"affectedUsers": 0,
"conversionImpact": 0,
"lostConversions": 0,
"unaffectedSessions": 0}
return {"seriesId": s.series_id, "seriesName": s.name,
"sessions": sessions.search2_pg(user_id=user_id, project_id=project_id,
issue=issue, data=s.filter)
if issue is not None else {"total": 0, "sessions": []},
"issue": issue}


@@ -0,0 +1,324 @@
import json
import schemas
from chalicelib.core import custom_metrics, metrics
from chalicelib.utils import helper
from chalicelib.utils import pg_client
from chalicelib.utils.TimeUTC import TimeUTC
# category name should be lower cased
CATEGORY_DESCRIPTION = {
'web vitals': 'A set of metrics that assess app performance on criteria such as load time, load performance, and stability.',
    'custom': 'Custom metrics previously created by me and my team.',
    'errors': 'Keep a closer eye on errors and track their type, origin and domain.',
    'performance': "Optimize your app's performance by tracking slow domains, page response times, memory consumption, CPU usage and more.",
'resources': 'Find out which resources are missing and those that may be slowing your web app.'
}
def get_templates(project_id, user_id):
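    """Return metric templates grouped by category: predefined templates
    (project_id IS NULL) plus the project's own public or owned metrics."""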
with pg_client.PostgresClient() as cur:
pg_query = cur.mogrify(f"""SELECT category, jsonb_agg(metrics ORDER BY name) AS widgets
FROM (SELECT * , default_config AS config
FROM metrics LEFT JOIN LATERAL (SELECT COALESCE(jsonb_agg(metric_series.* ORDER BY index), '[]'::jsonb) AS series
FROM metric_series
WHERE metric_series.metric_id = metrics.metric_id
AND metric_series.deleted_at ISNULL
) AS metric_series ON (TRUE)
WHERE deleted_at IS NULL
AND (project_id ISNULL OR (project_id = %(project_id)s AND (is_public OR user_id= %(userId)s)))
) AS metrics
GROUP BY category
ORDER BY ARRAY_POSITION(ARRAY ['custom','overview','errors','performance','resources'], category);""",
{"project_id": project_id, "userId": user_id})
cur.execute(pg_query)
rows = cur.fetchall()
for r in rows:
r["description"] = CATEGORY_DESCRIPTION.get(r["category"].lower(), "")
for w in r["widgets"]:
w["created_at"] = TimeUTC.datetime_to_timestamp(w["created_at"])
w["edited_at"] = TimeUTC.datetime_to_timestamp(w["edited_at"])
for s in w["series"]:
s["filter"] = helper.old_search_payload_to_flat(s["filter"])
return helper.list_to_camel_case(rows)
def create_dashboard(project_id, user_id, data: schemas.CreateDashboardSchema):
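    """Create a dashboard; when metric ids are provided, they are attached as
    widgets in the same statement through a CTE over the new dashboard row."""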
with pg_client.PostgresClient() as cur:
pg_query = f"""INSERT INTO dashboards(project_id, user_id, name, is_public, is_pinned, description)
VALUES(%(projectId)s, %(userId)s, %(name)s, %(is_public)s, %(is_pinned)s, %(description)s)
RETURNING *"""
params = {"userId": user_id, "projectId": project_id, **data.dict()}
if data.metrics is not None and len(data.metrics) > 0:
pg_query = f"""WITH dash AS ({pg_query})
INSERT INTO dashboard_widgets(dashboard_id, metric_id, user_id, config)
VALUES {",".join([f"((SELECT dashboard_id FROM dash),%(metric_id_{i})s, %(userId)s, (SELECT default_config FROM metrics WHERE metric_id=%(metric_id_{i})s)||%(config_{i})s)" for i in range(len(data.metrics))])}
RETURNING (SELECT dashboard_id FROM dash)"""
for i, m in enumerate(data.metrics):
params[f"metric_id_{i}"] = m
# params[f"config_{i}"] = schemas.AddWidgetToDashboardPayloadSchema.schema() \
# .get("properties", {}).get("config", {}).get("default", {})
# params[f"config_{i}"]["position"] = i
# params[f"config_{i}"] = json.dumps(params[f"config_{i}"])
params[f"config_{i}"] = json.dumps({"position": i})
cur.execute(cur.mogrify(pg_query, params))
row = cur.fetchone()
if row is None:
return {"errors": ["something went wrong while creating the dashboard"]}
return {"data": get_dashboard(project_id=project_id, user_id=user_id, dashboard_id=row["dashboard_id"])}
def get_dashboards(project_id, user_id):
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT *
FROM dashboards
WHERE deleted_at ISNULL
AND project_id = %(projectId)s
AND (user_id = %(userId)s OR is_public);"""
params = {"userId": user_id, "projectId": project_id}
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
return helper.list_to_camel_case(rows)
def get_dashboard(project_id, user_id, dashboard_id):
with pg_client.PostgresClient() as cur:
pg_query = """SELECT dashboards.*, all_metric_widgets.widgets AS widgets
FROM dashboards
LEFT JOIN LATERAL (SELECT COALESCE(JSONB_AGG(raw_metrics), '[]') AS widgets
FROM (SELECT dashboard_widgets.*, metrics.*, metric_series.series
FROM metrics
INNER JOIN dashboard_widgets USING (metric_id)
LEFT JOIN LATERAL (SELECT COALESCE(JSONB_AGG(metric_series.* ORDER BY index),'[]') AS series
FROM metric_series
WHERE metric_series.metric_id = metrics.metric_id
AND metric_series.deleted_at ISNULL
) AS metric_series ON (TRUE)
WHERE dashboard_widgets.dashboard_id = dashboards.dashboard_id
AND metrics.deleted_at ISNULL
AND (metrics.project_id = %(projectId)s OR metrics.project_id ISNULL)) AS raw_metrics
) AS all_metric_widgets ON (TRUE)
WHERE dashboards.deleted_at ISNULL
AND dashboards.project_id = %(projectId)s
AND dashboard_id = %(dashboard_id)s
AND (dashboards.user_id = %(userId)s OR is_public);"""
params = {"userId": user_id, "projectId": project_id, "dashboard_id": dashboard_id}
cur.execute(cur.mogrify(pg_query, params))
row = cur.fetchone()
if row is not None:
row["created_at"] = TimeUTC.datetime_to_timestamp(row["created_at"])
for w in row["widgets"]:
w["created_at"] = TimeUTC.datetime_to_timestamp(w["created_at"])
w["edited_at"] = TimeUTC.datetime_to_timestamp(w["edited_at"])
for s in w["series"]:
s["created_at"] = TimeUTC.datetime_to_timestamp(s["created_at"])
return helper.dict_to_camel_case(row)
def delete_dashboard(project_id, user_id, dashboard_id):
with pg_client.PostgresClient() as cur:
pg_query = """UPDATE dashboards
SET deleted_at = timezone('utc'::text, now())
WHERE dashboards.project_id = %(projectId)s
AND dashboard_id = %(dashboard_id)s
AND (dashboards.user_id = %(userId)s OR is_public);"""
params = {"userId": user_id, "projectId": project_id, "dashboard_id": dashboard_id}
cur.execute(cur.mogrify(pg_query, params))
return {"data": {"success": True}}
def update_dashboard(project_id, user_id, dashboard_id, data: schemas.EditDashboardSchema):
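    """Update dashboard attributes; newly attached metrics are positioned after
    the existing widgets, hence the widget-count lookup used as an offset."""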
with pg_client.PostgresClient() as cur:
pg_query = """SELECT COALESCE(COUNT(*),0) AS count
FROM dashboard_widgets
WHERE dashboard_id = %(dashboard_id)s;"""
params = {"userId": user_id, "projectId": project_id, "dashboard_id": dashboard_id, **data.dict()}
cur.execute(cur.mogrify(pg_query, params))
row = cur.fetchone()
offset = row["count"]
pg_query = f"""UPDATE dashboards
SET name = %(name)s,
description= %(description)s
{", is_public = %(is_public)s" if data.is_public is not None else ""}
{", is_pinned = %(is_pinned)s" if data.is_pinned is not None else ""}
WHERE dashboards.project_id = %(projectId)s
AND dashboard_id = %(dashboard_id)s
AND (dashboards.user_id = %(userId)s OR is_public)"""
if data.metrics is not None and len(data.metrics) > 0:
pg_query = f"""WITH dash AS ({pg_query})
INSERT INTO dashboard_widgets(dashboard_id, metric_id, user_id, config)
VALUES {",".join([f"(%(dashboard_id)s, %(metric_id_{i})s, %(userId)s, (SELECT default_config FROM metrics WHERE metric_id=%(metric_id_{i})s)||%(config_{i})s)" for i in range(len(data.metrics))])};"""
for i, m in enumerate(data.metrics):
params[f"metric_id_{i}"] = m
# params[f"config_{i}"] = schemas.AddWidgetToDashboardPayloadSchema.schema() \
# .get("properties", {}).get("config", {}).get("default", {})
# params[f"config_{i}"]["position"] = i
# params[f"config_{i}"] = json.dumps(params[f"config_{i}"])
params[f"config_{i}"] = json.dumps({"position": i + offset})
cur.execute(cur.mogrify(pg_query, params))
return get_dashboard(project_id=project_id, user_id=user_id, dashboard_id=dashboard_id)
def get_widget(project_id, user_id, dashboard_id, widget_id):
with pg_client.PostgresClient() as cur:
pg_query = """SELECT metrics.*, metric_series.series
FROM dashboard_widgets
INNER JOIN dashboards USING (dashboard_id)
INNER JOIN metrics USING (metric_id)
LEFT JOIN LATERAL (SELECT COALESCE(jsonb_agg(metric_series.* ORDER BY index), '[]'::jsonb) AS series
FROM metric_series
WHERE metric_series.metric_id = metrics.metric_id
AND metric_series.deleted_at ISNULL
) AS metric_series ON (TRUE)
WHERE dashboard_id = %(dashboard_id)s
AND widget_id = %(widget_id)s
AND (dashboards.is_public OR dashboards.user_id = %(userId)s)
AND dashboards.deleted_at IS NULL
AND metrics.deleted_at ISNULL
AND (metrics.project_id = %(projectId)s OR metrics.project_id ISNULL)
AND (metrics.is_public OR metrics.user_id = %(userId)s);"""
params = {"userId": user_id, "projectId": project_id, "dashboard_id": dashboard_id, "widget_id": widget_id}
cur.execute(cur.mogrify(pg_query, params))
row = cur.fetchone()
return helper.dict_to_camel_case(row)
def add_widget(project_id, user_id, dashboard_id, data: schemas.AddWidgetToDashboardPayloadSchema):
with pg_client.PostgresClient() as cur:
pg_query = """INSERT INTO dashboard_widgets(dashboard_id, metric_id, user_id, config)
SELECT %(dashboard_id)s AS dashboard_id, %(metric_id)s AS metric_id,
%(userId)s AS user_id, (SELECT default_config FROM metrics WHERE metric_id=%(metric_id)s)||%(config)s::jsonb AS config
WHERE EXISTS(SELECT 1 FROM dashboards
WHERE dashboards.deleted_at ISNULL AND dashboards.project_id = %(projectId)s
AND dashboard_id = %(dashboard_id)s
AND (dashboards.user_id = %(userId)s OR is_public))
RETURNING *;"""
params = {"userId": user_id, "projectId": project_id, "dashboard_id": dashboard_id, **data.dict()}
params["config"] = json.dumps(data.config)
cur.execute(cur.mogrify(pg_query, params))
row = cur.fetchone()
return helper.dict_to_camel_case(row)
def update_widget(project_id, user_id, dashboard_id, widget_id, data: schemas.UpdateWidgetPayloadSchema):
with pg_client.PostgresClient() as cur:
pg_query = """UPDATE dashboard_widgets
SET config= %(config)s
WHERE dashboard_id=%(dashboard_id)s AND widget_id=%(widget_id)s
RETURNING *;"""
params = {"userId": user_id, "projectId": project_id, "dashboard_id": dashboard_id,
"widget_id": widget_id, **data.dict()}
params["config"] = json.dumps(data.config)
cur.execute(cur.mogrify(pg_query, params))
row = cur.fetchone()
return helper.dict_to_camel_case(row)
def remove_widget(project_id, user_id, dashboard_id, widget_id):
with pg_client.PostgresClient() as cur:
pg_query = """DELETE FROM dashboard_widgets
WHERE dashboard_id=%(dashboard_id)s AND widget_id=%(widget_id)s;"""
params = {"userId": user_id, "projectId": project_id, "dashboard_id": dashboard_id, "widget_id": widget_id}
cur.execute(cur.mogrify(pg_query, params))
return {"data": {"success": True}}
def pin_dashboard(project_id, user_id, dashboard_id):
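    """Pin a dashboard; a project can only have one pinned dashboard, so every
    other dashboard of the project is unpinned first."""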
with pg_client.PostgresClient() as cur:
pg_query = """UPDATE dashboards
SET is_pinned = FALSE
WHERE project_id=%(project_id)s;
UPDATE dashboards
SET is_pinned = True
WHERE dashboard_id=%(dashboard_id)s AND project_id=%(project_id)s AND deleted_at ISNULL
RETURNING *;"""
params = {"userId": user_id, "project_id": project_id, "dashboard_id": dashboard_id}
cur.execute(cur.mogrify(pg_query, params))
row = cur.fetchone()
return helper.dict_to_camel_case(row)
def create_metric_add_widget(project_id, user_id, dashboard_id, data: schemas.CreateCustomMetricsSchema):
metric_id = custom_metrics.create(project_id=project_id, user_id=user_id, data=data, dashboard=True)
return add_widget(project_id=project_id, user_id=user_id, dashboard_id=dashboard_id,
data=schemas.AddWidgetToDashboardPayloadSchema(metricId=metric_id))
PREDEFINED = {schemas.TemplatePredefinedKeys.count_sessions: metrics.get_processed_sessions,
schemas.TemplatePredefinedKeys.avg_image_load_time: metrics.get_application_activity_avg_image_load_time,
schemas.TemplatePredefinedKeys.avg_page_load_time: metrics.get_application_activity_avg_page_load_time,
schemas.TemplatePredefinedKeys.avg_request_load_time: metrics.get_application_activity_avg_request_load_time,
schemas.TemplatePredefinedKeys.avg_dom_content_load_start: metrics.get_page_metrics_avg_dom_content_load_start,
schemas.TemplatePredefinedKeys.avg_first_contentful_pixel: metrics.get_page_metrics_avg_first_contentful_pixel,
schemas.TemplatePredefinedKeys.avg_visited_pages: metrics.get_user_activity_avg_visited_pages,
schemas.TemplatePredefinedKeys.avg_session_duration: metrics.get_user_activity_avg_session_duration,
schemas.TemplatePredefinedKeys.avg_pages_dom_buildtime: metrics.get_pages_dom_build_time,
schemas.TemplatePredefinedKeys.avg_pages_response_time: metrics.get_pages_response_time,
schemas.TemplatePredefinedKeys.avg_response_time: metrics.get_top_metrics_avg_response_time,
schemas.TemplatePredefinedKeys.avg_first_paint: metrics.get_top_metrics_avg_first_paint,
schemas.TemplatePredefinedKeys.avg_dom_content_loaded: metrics.get_top_metrics_avg_dom_content_loaded,
schemas.TemplatePredefinedKeys.avg_till_first_bit: metrics.get_top_metrics_avg_till_first_bit,
schemas.TemplatePredefinedKeys.avg_time_to_interactive: metrics.get_top_metrics_avg_time_to_interactive,
schemas.TemplatePredefinedKeys.count_requests: metrics.get_top_metrics_count_requests,
schemas.TemplatePredefinedKeys.avg_time_to_render: metrics.get_time_to_render,
schemas.TemplatePredefinedKeys.avg_used_js_heap_size: metrics.get_memory_consumption,
schemas.TemplatePredefinedKeys.avg_cpu: metrics.get_avg_cpu,
schemas.TemplatePredefinedKeys.avg_fps: metrics.get_avg_fps,
schemas.TemplatePredefinedKeys.impacted_sessions_by_js_errors: metrics.get_impacted_sessions_by_js_errors,
schemas.TemplatePredefinedKeys.domains_errors_4xx: metrics.get_domains_errors_4xx,
schemas.TemplatePredefinedKeys.domains_errors_5xx: metrics.get_domains_errors_5xx,
schemas.TemplatePredefinedKeys.errors_per_domains: metrics.get_errors_per_domains,
schemas.TemplatePredefinedKeys.calls_errors: metrics.get_calls_errors,
schemas.TemplatePredefinedKeys.errors_by_type: metrics.get_errors_per_type,
schemas.TemplatePredefinedKeys.errors_by_origin: metrics.get_resources_by_party,
schemas.TemplatePredefinedKeys.speed_index_by_location: metrics.get_speed_index_location,
schemas.TemplatePredefinedKeys.slowest_domains: metrics.get_slowest_domains,
schemas.TemplatePredefinedKeys.sessions_per_browser: metrics.get_sessions_per_browser,
schemas.TemplatePredefinedKeys.time_to_render: metrics.get_time_to_render,
schemas.TemplatePredefinedKeys.impacted_sessions_by_slow_pages: metrics.get_impacted_sessions_by_slow_pages,
schemas.TemplatePredefinedKeys.memory_consumption: metrics.get_memory_consumption,
schemas.TemplatePredefinedKeys.cpu_load: metrics.get_avg_cpu,
schemas.TemplatePredefinedKeys.frame_rate: metrics.get_avg_fps,
schemas.TemplatePredefinedKeys.crashes: metrics.get_crashes,
schemas.TemplatePredefinedKeys.resources_vs_visually_complete: metrics.get_resources_vs_visually_complete,
schemas.TemplatePredefinedKeys.pages_dom_buildtime: metrics.get_pages_dom_build_time,
schemas.TemplatePredefinedKeys.pages_response_time: metrics.get_pages_response_time,
schemas.TemplatePredefinedKeys.pages_response_time_distribution: metrics.get_pages_response_time_distribution,
schemas.TemplatePredefinedKeys.missing_resources: metrics.get_missing_resources_trend,
schemas.TemplatePredefinedKeys.slowest_resources: metrics.get_slowest_resources,
schemas.TemplatePredefinedKeys.resources_fetch_time: metrics.get_resources_loading_time,
schemas.TemplatePredefinedKeys.resource_type_vs_response_end: metrics.resource_type_vs_response_end,
schemas.TemplatePredefinedKeys.resources_count_by_type: metrics.get_resources_count_by_type,
}
def get_predefined_metric(key: schemas.TemplatePredefinedKeys, project_id: int, data: dict):
    # the fallback must accept keyword arguments, since it is called with project_id=...
    return PREDEFINED.get(key, lambda *args, **kwargs: None)(project_id=project_id, **data)
def make_chart_metrics(project_id, user_id, metric_id, data: schemas.CustomMetricChartPayloadSchema):
raw_metric = custom_metrics.get_with_template(metric_id=metric_id, project_id=project_id, user_id=user_id,
include_dashboard=False)
if raw_metric is None:
return None
    metric: schemas.CustomMetricAndTemplate = schemas.CustomMetricAndTemplate.parse_obj(raw_metric)
if metric.is_template:
return get_predefined_metric(key=metric.predefined_key, project_id=project_id, data=data.dict())
else:
return custom_metrics.make_chart(project_id=project_id, user_id=user_id, metric_id=metric_id, data=data,
metric=raw_metric)
def make_chart_widget(dashboard_id, project_id, user_id, widget_id, data: schemas.CustomMetricChartPayloadSchema):
raw_metric = get_widget(widget_id=widget_id, project_id=project_id, user_id=user_id, dashboard_id=dashboard_id)
if raw_metric is None:
return None
    metric: schemas.CustomMetricAndTemplate = schemas.CustomMetricAndTemplate.parse_obj(raw_metric)
if metric.is_template:
return get_predefined_metric(key=metric.predefined_key, project_id=project_id, data=data.dict())
else:
return custom_metrics.make_chart(project_id=project_id, user_id=user_id, metric_id=raw_metric["metricId"],
data=data, metric=raw_metric)


@@ -1,205 +0,0 @@
import logging
from chalicelib.utils import pg_client
logger = logging.getLogger(__name__)
class DatabaseRequestHandler:
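    """Small query builder on top of pg_client for single-table SELECT/INSERT/
    UPDATE/DELETE statements with optional joins, constraints and pagination."""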
def __init__(self, table_name):
self.table_name = table_name
self.constraints = []
self.params = {}
self.order_clause = ""
self.sort_clause = ""
self.select_columns = []
self.sub_queries = []
self.joins = []
self.group_by_clause = ""
self.client = pg_client
self.logger = logging.getLogger(__name__)
self.pagination = {}
def add_constraint(self, constraint, param=None):
self.constraints.append(constraint)
if param:
self.params.update(param)
def add_subquery(self, subquery, alias, param=None):
self.sub_queries.append((subquery, alias))
if param:
self.params.update(param)
def add_join(self, join_clause):
self.joins.append(join_clause)
def add_param(self, key, value):
self.params[key] = value
def set_order_by(self, order_by):
self.order_clause = order_by
def set_sort_by(self, sort_by):
self.sort_clause = sort_by
def set_select_columns(self, columns):
self.select_columns = columns
def set_group_by(self, group_by_clause):
self.group_by_clause = group_by_clause
def set_pagination(self, page, page_size):
"""
Set pagination parameters for the query.
:param page: The page number (1-indexed)
:param page_size: Number of items per page
"""
self.pagination = {
'offset': (page - 1) * page_size,
'limit': page_size
}
def build_query(self, action="select", additional_clauses=None, data=None):
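        """Assemble the SQL string for the given action; GROUP BY, ORDER BY and
        pagination clauses are only appended for SELECT queries."""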
if action == "select":
query = f"SELECT {', '.join(self.select_columns)} FROM {self.table_name}"
elif action == "insert":
columns = ', '.join(data.keys())
placeholders = ', '.join(f'%({k})s' for k in data.keys())
query = f"INSERT INTO {self.table_name} ({columns}) VALUES ({placeholders})"
elif action == "update":
set_clause = ', '.join(f"{k} = %({k})s" for k in data.keys())
query = f"UPDATE {self.table_name} SET {set_clause}"
elif action == "delete":
query = f"DELETE FROM {self.table_name}"
for join in self.joins:
query += f" {join}"
for subquery, alias in self.sub_queries:
query += f", ({subquery}) AS {alias}"
if self.constraints:
query += " WHERE " + " AND ".join(self.constraints)
if action == "select":
if self.group_by_clause:
query += " GROUP BY " + self.group_by_clause
if self.sort_clause:
query += " ORDER BY " + self.sort_clause
if self.order_clause:
query += " " + self.order_clause
            # pagination is always initialised in __init__, so a truthiness check suffices
            if self.pagination:
query += " LIMIT %(limit)s OFFSET %(offset)s"
self.params.update(self.pagination)
if additional_clauses:
query += " " + additional_clauses
logger.debug(f"Query: {query}")
return query
def execute_query(self, query, data=None):
try:
with self.client.PostgresClient() as cur:
mogrified_query = cur.mogrify(query, {**data, **self.params} if data else self.params)
cur.execute(mogrified_query)
return cur.fetchall() if cur.description else None
except Exception as e:
self.logger.error(f"Database operation failed: {e}")
raise
def fetchall(self):
query = self.build_query()
return self.execute_query(query)
def fetchone(self):
query = self.build_query()
result = self.execute_query(query)
return result[0] if result else None
def insert(self, data):
query = self.build_query(action="insert", data=data)
query += " RETURNING *;"
result = self.execute_query(query, data)
return result[0] if result else None
def update(self, data):
query = self.build_query(action="update", data=data)
query += " RETURNING *;"
result = self.execute_query(query, data)
return result[0] if result else None
def delete(self):
query = self.build_query(action="delete")
return self.execute_query(query)
def batch_insert(self, items):
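        """Insert several rows in one statement; each row gets uniquely suffixed
        parameter names (key_0, key_1, ...) so mogrify can bind every value."""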
if not items:
return None
columns = ', '.join(items[0].keys())
# Building a values string with unique parameter names for each item
all_values_query = ', '.join(
'(' + ', '.join([f"%({key}_{i})s" for key in item]) + ')'
for i, item in enumerate(items)
)
query = f"INSERT INTO {self.table_name} ({columns}) VALUES {all_values_query} RETURNING *;"
try:
with self.client.PostgresClient() as cur:
# Flatten items into a single dictionary with unique keys
combined_params = {f"{k}_{i}": v for i, item in enumerate(items) for k, v in item.items()}
mogrified_query = cur.mogrify(query, combined_params)
cur.execute(mogrified_query)
return cur.fetchall()
except Exception as e:
self.logger.error(f"Database batch insert operation failed: {e}")
raise
def raw_query(self, query, params=None):
try:
with self.client.PostgresClient() as cur:
mogrified_query = cur.mogrify(query, params)
cur.execute(mogrified_query)
return cur.fetchall() if cur.description else None
except Exception as e:
self.logger.error(f"Database operation failed: {e}")
raise
def batch_update(self, items):
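        """Update several rows in one UPDATE ... FROM (VALUES ...) statement;
        the first key of each item is treated as the id column to match on."""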
if not items:
return None
id_column = list(items[0])[0]
# Building the set clause for the update statement
update_columns = list(items[0].keys())
update_columns.remove(id_column)
set_clause = ', '.join([f"{col} = v.{col}" for col in update_columns])
        # Building the values part for the 'VALUES' section with unique
        # per-row parameter names (key_0, key_1, ...), mirroring batch_insert;
        # reusing the same placeholder for every row would bind identical values
        values_rows = []
        for i, item in enumerate(items):
            values = ', '.join([f"%({key}_{i})s" for key in item.keys()])
            values_rows.append(f"({values})")
        values_query = ', '.join(values_rows)
# Constructing the full update query
query = f"""
UPDATE {self.table_name} AS t
SET {set_clause}
FROM (VALUES {values_query}) AS v ({', '.join(items[0].keys())})
WHERE t.{id_column} = v.{id_column};
"""
try:
with self.client.PostgresClient() as cur:
                # Flatten items into a single dictionary with unique per-row keys for mogrify
                combined_params = {f"{k}_{i}": v for i, item in enumerate(items) for k, v in item.items()}
mogrified_query = cur.mogrify(query, combined_params)
cur.execute(mogrified_query)
except Exception as e:
self.logger.error(f"Database batch update operation failed: {e}")
raise


@@ -0,0 +1,806 @@
import json
import schemas
from chalicelib.core import sourcemaps, sessions
from chalicelib.utils import pg_client, helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.metrics_helper import __get_step_size
def get(error_id, family=False):
if family:
return get_batch([error_id])
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"SELECT * FROM events.errors AS e INNER JOIN public.errors AS re USING(error_id) WHERE error_id = %(error_id)s;",
{"error_id": error_id})
cur.execute(query=query)
result = cur.fetchone()
if result is not None:
result["stacktrace_parsed_at"] = TimeUTC.datetime_to_timestamp(result["stacktrace_parsed_at"])
return helper.dict_to_camel_case(result)
def get_batch(error_ids):
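    """Fetch the given errors together with their whole family (parents and
    children), resolved through a recursive CTE over parent_error_id."""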
if len(error_ids) == 0:
return []
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""
WITH RECURSIVE error_family AS (
SELECT *
FROM public.errors
WHERE error_id IN %(error_ids)s
UNION
SELECT child_errors.*
FROM public.errors AS child_errors
INNER JOIN error_family ON error_family.error_id = child_errors.parent_error_id OR error_family.parent_error_id = child_errors.error_id
)
SELECT *
FROM error_family;""",
{"error_ids": tuple(error_ids)})
cur.execute(query=query)
errors = cur.fetchall()
for e in errors:
e["stacktrace_parsed_at"] = TimeUTC.datetime_to_timestamp(e["stacktrace_parsed_at"])
return helper.list_to_camel_case(errors)
def __flatten_sort_key_count_version(data, merge_nested=False):
    if data is None:
        return []
    if merge_nested:
        return sorted(
            [
                {
                    "name": f'{o["name"]}@{v["version"]}',
                    "count": v["count"]
                } for o in data for v in o["partition"]
            ],
            key=lambda o: o["count"], reverse=True)
    return [
        {
            "name": o["name"],
            "count": o["count"],
        } for o in data
    ]
def __process_tags(row):
return [
{"name": "browser", "partitions": __flatten_sort_key_count_version(data=row.get("browsers_partition"))},
{"name": "browser.ver",
"partitions": __flatten_sort_key_count_version(data=row.pop("browsers_partition"), merge_nested=True)},
{"name": "OS", "partitions": __flatten_sort_key_count_version(data=row.get("os_partition"))},
{"name": "OS.ver",
"partitions": __flatten_sort_key_count_version(data=row.pop("os_partition"), merge_nested=True)},
{"name": "device.family", "partitions": __flatten_sort_key_count_version(data=row.get("device_partition"))},
{"name": "device",
"partitions": __flatten_sort_key_count_version(data=row.pop("device_partition"), merge_nested=True)},
{"name": "country", "partitions": row.pop("country_partition")}
]
def get_details(project_id, error_id, user_id, **data):
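    """Build the error-details payload: user/session totals, last session info,
    browser/OS/device/country partitions and the 24-hour and 30-day charts."""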
pg_sub_query24 = __get_basic_constraints(time_constraint=False, chart=True, step_size_name="step_size24")
pg_sub_query24.append("error_id = %(error_id)s")
pg_sub_query30 = __get_basic_constraints(time_constraint=False, chart=True, step_size_name="step_size30")
pg_sub_query30.append("error_id = %(error_id)s")
pg_basic_query = __get_basic_constraints(time_constraint=False)
pg_basic_query.append("error_id = %(error_id)s")
with pg_client.PostgresClient() as cur:
data["startDate24"] = TimeUTC.now(-1)
data["endDate24"] = TimeUTC.now()
data["startDate30"] = TimeUTC.now(-30)
data["endDate30"] = TimeUTC.now()
density24 = int(data.get("density24", 24))
step_size24 = __get_step_size(data["startDate24"], data["endDate24"], density24, factor=1)
density30 = int(data.get("density30", 30))
step_size30 = __get_step_size(data["startDate30"], data["endDate30"], density30, factor=1)
params = {
"startDate24": data['startDate24'],
"endDate24": data['endDate24'],
"startDate30": data['startDate30'],
"endDate30": data['endDate30'],
"project_id": project_id,
"userId": user_id,
"step_size24": step_size24,
"step_size30": step_size30,
"error_id": error_id}
main_pg_query = f"""\
SELECT error_id,
name,
message,
users,
sessions,
last_occurrence,
first_occurrence,
last_session_id,
browsers_partition,
os_partition,
device_partition,
country_partition,
chart24,
chart30
FROM (SELECT error_id,
name,
message,
COUNT(DISTINCT user_uuid) AS users,
COUNT(DISTINCT session_id) AS sessions
FROM public.errors
INNER JOIN events.errors AS s_errors USING (error_id)
INNER JOIN public.sessions USING (session_id)
WHERE error_id = %(error_id)s
GROUP BY error_id, name, message) AS details
INNER JOIN (SELECT error_id,
MAX(timestamp) AS last_occurrence,
MIN(timestamp) AS first_occurrence
FROM events.errors
WHERE error_id = %(error_id)s
GROUP BY error_id) AS time_details USING (error_id)
INNER JOIN (SELECT error_id,
session_id AS last_session_id,
user_os,
user_os_version,
user_browser,
user_browser_version,
user_device,
user_device_type,
user_uuid
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE error_id = %(error_id)s
ORDER BY errors.timestamp DESC
LIMIT 1) AS last_session_details USING (error_id)
INNER JOIN (SELECT jsonb_agg(browser_details) AS browsers_partition
FROM (SELECT *
FROM (SELECT user_browser AS name,
COUNT(session_id) AS count
FROM events.errors
INNER JOIN sessions USING (session_id)
WHERE {" AND ".join(pg_basic_query)}
GROUP BY user_browser
ORDER BY count DESC) AS count_per_browser_query
INNER JOIN LATERAL (SELECT JSONB_AGG(version_details) AS partition
FROM (SELECT user_browser_version AS version,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_basic_query)}
AND sessions.user_browser = count_per_browser_query.name
GROUP BY user_browser_version
ORDER BY count DESC) AS version_details
) AS browser_version_details ON (TRUE)) AS browser_details) AS browser_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(os_details) AS os_partition
FROM (SELECT *
FROM (SELECT user_os AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_basic_query)}
GROUP BY user_os
ORDER BY count DESC) AS count_per_os_details
INNER JOIN LATERAL (SELECT jsonb_agg(count_per_version_details) AS partition
FROM (SELECT COALESCE(user_os_version,'unknown') AS version, COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_basic_query)}
AND sessions.user_os = count_per_os_details.name
GROUP BY user_os_version
ORDER BY count DESC) AS count_per_version_details
GROUP BY count_per_os_details.name ) AS os_version_details
ON (TRUE)) AS os_details) AS os_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(device_details) AS device_partition
FROM (SELECT *
FROM (SELECT user_device_type AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_basic_query)}
GROUP BY user_device_type
ORDER BY count DESC) AS count_per_device_details
INNER JOIN LATERAL (SELECT jsonb_agg(count_per_device_v_details) AS partition
FROM (SELECT CASE
WHEN user_device = '' OR user_device ISNULL
THEN 'unknown'
ELSE user_device END AS version,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_basic_query)}
AND sessions.user_device_type = count_per_device_details.name
GROUP BY user_device
ORDER BY count DESC) AS count_per_device_v_details
GROUP BY count_per_device_details.name ) AS device_version_details
ON (TRUE)) AS device_details) AS device_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(count_per_country_details) AS country_partition
FROM (SELECT user_country AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_basic_query)}
GROUP BY user_country
ORDER BY count DESC) AS count_per_country_details) AS country_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(chart_details) AS chart24
FROM (SELECT generated_timestamp AS timestamp,
COUNT(session_id) AS count
FROM generate_series(%(startDate24)s, %(endDate24)s, %(step_size24)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT DISTINCT session_id
FROM events.errors
INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query24)}
) AS chart_details ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp) AS chart_details) AS chart_details24 ON (TRUE)
INNER JOIN (SELECT jsonb_agg(chart_details) AS chart30
FROM (SELECT generated_timestamp AS timestamp,
COUNT(session_id) AS count
FROM generate_series(%(startDate30)s, %(endDate30)s, %(step_size30)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT DISTINCT session_id
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30)}) AS chart_details
ON (TRUE)
GROUP BY timestamp
ORDER BY timestamp) AS chart_details) AS chart_details30 ON (TRUE);
"""
# print("--------------------")
# print(cur.mogrify(main_pg_query, params))
# print("--------------------")
cur.execute(cur.mogrify(main_pg_query, params))
row = cur.fetchone()
if row is None:
return {"errors": ["error not found"]}
row["tags"] = __process_tags(row)
query = cur.mogrify(
f"""SELECT error_id, status, session_id, start_ts,
parent_error_id,session_id, user_anonymous_id,
user_id, user_uuid, user_browser, user_browser_version,
user_os, user_os_version, user_device, payload,
COALESCE((SELECT TRUE
FROM public.user_favorite_errors AS fe
WHERE pe.error_id = fe.error_id
AND fe.user_id = %(user_id)s), FALSE) AS favorite,
True AS viewed
FROM public.errors AS pe
INNER JOIN events.errors AS ee USING (error_id)
INNER JOIN public.sessions USING (session_id)
WHERE pe.project_id = %(project_id)s
AND error_id = %(error_id)s
ORDER BY start_ts DESC
LIMIT 1;""",
{"project_id": project_id, "error_id": error_id, "user_id": user_id})
cur.execute(query=query)
status = cur.fetchone()
if status is not None:
row["stack"] = format_first_stack_frame(status).pop("stack")
row["status"] = status.pop("status")
row["parent_error_id"] = status.pop("parent_error_id")
row["favorite"] = status.pop("favorite")
row["viewed"] = status.pop("viewed")
row["last_hydrated_session"] = status
else:
row["stack"] = []
row["last_hydrated_session"] = None
row["status"] = "untracked"
row["parent_error_id"] = None
row["favorite"] = False
row["viewed"] = False
return {"data": helper.dict_to_camel_case(row)}
def get_details_chart(project_id, error_id, user_id, **data):
pg_sub_query = __get_basic_constraints()
pg_sub_query.append("error_id = %(error_id)s")
pg_sub_query_chart = __get_basic_constraints(time_constraint=False, chart=True)
pg_sub_query_chart.append("error_id = %(error_id)s")
with pg_client.PostgresClient() as cur:
if data.get("startDate") is None:
data["startDate"] = TimeUTC.now(-7)
else:
data["startDate"] = int(data["startDate"])
if data.get("endDate") is None:
data["endDate"] = TimeUTC.now()
else:
data["endDate"] = int(data["endDate"])
density = int(data.get("density", 7))
step_size = __get_step_size(data["startDate"], data["endDate"], density, factor=1)
params = {
"startDate": data['startDate'],
"endDate": data['endDate'],
"project_id": project_id,
"userId": user_id,
"step_size": step_size,
"error_id": error_id}
main_pg_query = f"""\
SELECT %(error_id)s AS error_id,
browsers_partition,
os_partition,
device_partition,
country_partition,
chart
FROM (SELECT jsonb_agg(browser_details) AS browsers_partition
FROM (SELECT *
FROM (SELECT user_browser AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
GROUP BY user_browser
ORDER BY count DESC) AS count_per_browser_query
INNER JOIN LATERAL (SELECT jsonb_agg(count_per_version_details) AS partition
FROM (SELECT user_browser_version AS version,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
AND user_browser = count_per_browser_query.name
GROUP BY user_browser_version
                                 ORDER BY count DESC) AS count_per_version_details) AS browser_version_details
ON (TRUE)) AS browser_details) AS browser_details
INNER JOIN (SELECT jsonb_agg(os_details) AS os_partition
FROM (SELECT *
FROM (SELECT user_os AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
GROUP BY user_os
ORDER BY count DESC) AS count_per_os_details
INNER JOIN LATERAL (SELECT jsonb_agg(count_per_version_query) AS partition
FROM (SELECT COALESCE(user_os_version, 'unknown') AS version,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
AND user_os = count_per_os_details.name
GROUP BY user_os_version
ORDER BY count DESC) AS count_per_version_query
) AS os_version_query ON (TRUE)) AS os_details) AS os_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(device_details) AS device_partition
FROM (SELECT *
FROM (SELECT user_device_type AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
GROUP BY user_device_type
ORDER BY count DESC) AS count_per_device_details
INNER JOIN LATERAL (SELECT jsonb_agg(count_per_device_details) AS partition
FROM (SELECT CASE
WHEN user_device = '' OR user_device ISNULL
THEN 'unknown'
ELSE user_device END AS version,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
AND user_device_type = count_per_device_details.name
GROUP BY user_device_type, user_device
ORDER BY count DESC) AS count_per_device_details
) AS device_version_details ON (TRUE)) AS device_details) AS device_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(count_per_country_details) AS country_partition
FROM (SELECT user_country AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
GROUP BY user_country
ORDER BY count DESC) AS count_per_country_details) AS country_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(chart_details) AS chart
FROM (SELECT generated_timestamp AS timestamp,
COUNT(session_id) AS count
FROM generate_series(%(startDate)s, %(endDate)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT DISTINCT session_id
FROM events.errors
INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query_chart)}
) AS chart_details ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp) AS chart_details) AS chart_details ON (TRUE);"""
cur.execute(cur.mogrify(main_pg_query, params))
row = cur.fetchone()
if row is None:
return {"errors": ["error not found"]}
row["tags"] = __process_tags(row)
return {"data": helper.dict_to_camel_case(row)}
def __get_basic_constraints(platform=None, time_constraint=True, startTime_arg_name="startDate",
endTime_arg_name="endDate", chart=False, step_size_name="step_size",
project_key="project_id"):
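    """Build the WHERE fragments shared by the error queries: project, optional
    time range, chart bucket boundaries and platform constraints."""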
if project_key is None:
ch_sub_query = []
else:
ch_sub_query = [f"{project_key} =%(project_id)s"]
if time_constraint:
ch_sub_query += [f"timestamp >= %({startTime_arg_name})s",
f"timestamp < %({endTime_arg_name})s"]
if chart:
ch_sub_query += [f"timestamp >= generated_timestamp",
f"timestamp < generated_timestamp + %({step_size_name})s"]
if platform == schemas.PlatformType.mobile:
ch_sub_query.append("user_device_type = 'mobile'")
elif platform == schemas.PlatformType.desktop:
ch_sub_query.append("user_device_type = 'desktop'")
return ch_sub_query
def __get_sort_key(key):
return {
schemas.ErrorSort.occurrence: "max_datetime",
schemas.ErrorSort.users_count: "users",
schemas.ErrorSort.sessions_count: "sessions"
}.get(key, 'max_datetime')
def search(data: schemas.SearchErrorsSchema, project_id, user_id, flows=False):
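    """Search error groups, optionally pre-filtering through the sessions search
    when events/filters are given, then sort, paginate and chart the results."""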
empty_response = {'total': 0,
'errors': []
}
platform = None
for f in data.filters:
if f.type == schemas.FilterType.platform and len(f.value) > 0:
platform = f.value[0]
pg_sub_query = __get_basic_constraints(platform, project_key="sessions.project_id")
pg_sub_query += ["sessions.start_ts>=%(startDate)s", "sessions.start_ts<%(endDate)s", "source ='js_exception'",
"pe.project_id=%(project_id)s"]
# To ignore Script error
pg_sub_query.append("pe.message!='Script error.'")
pg_sub_query_chart = __get_basic_constraints(platform, time_constraint=False, chart=True, project_key=None)
# pg_sub_query_chart.append("source ='js_exception'")
pg_sub_query_chart.append("errors.error_id =details.error_id")
statuses = []
error_ids = None
if data.startDate is None:
data.startDate = TimeUTC.now(-30)
if data.endDate is None:
data.endDate = TimeUTC.now(1)
if len(data.events) > 0 or len(data.filters) > 0:
print("-- searching for sessions before errors")
# if favorite_only=True search for sessions associated with favorite_error
statuses = sessions.search2_pg(data=data, project_id=project_id, user_id=user_id, errors_only=True,
error_status=data.status)
if len(statuses) == 0:
return empty_response
error_ids = [e["errorId"] for e in statuses]
with pg_client.PostgresClient() as cur:
if data.startDate is None:
data.startDate = TimeUTC.now(-7)
if data.endDate is None:
data.endDate = TimeUTC.now()
step_size = __get_step_size(data.startDate, data.endDate, data.density, factor=1)
sort = __get_sort_key('datetime')
if data.sort is not None:
sort = __get_sort_key(data.sort)
order = schemas.SortOrderType.desc
if data.order is not None:
order = data.order
extra_join = ""
params = {
"startDate": data.startDate,
"endDate": data.endDate,
"project_id": project_id,
"userId": user_id,
"step_size": step_size}
if data.status != schemas.ErrorStatus.all:
pg_sub_query.append("status = %(error_status)s")
params["error_status"] = data.status
if data.limit is not None and data.page is not None:
params["errors_offset"] = (data.page - 1) * data.limit
params["errors_limit"] = data.limit
else:
params["errors_offset"] = 0
params["errors_limit"] = 200
if error_ids is not None:
params["error_ids"] = tuple(error_ids)
pg_sub_query.append("error_id IN %(error_ids)s")
if data.bookmarked:
pg_sub_query.append("ufe.user_id = %(userId)s")
extra_join += " INNER JOIN public.user_favorite_errors AS ufe USING (error_id)"
if data.query is not None and len(data.query) > 0:
pg_sub_query.append("(pe.name ILIKE %(error_query)s OR pe.message ILIKE %(error_query)s)")
params["error_query"] = helper.values_for_operator(value=data.query,
op=schemas.SearchEventOperator._contains)
main_pg_query = f"""SELECT full_count,
error_id,
name,
message,
users,
sessions,
last_occurrence,
first_occurrence,
chart
FROM (SELECT COUNT(details) OVER () AS full_count, details.*
FROM (SELECT error_id,
name,
message,
COUNT(DISTINCT user_uuid) AS users,
COUNT(DISTINCT session_id) AS sessions,
MAX(timestamp) AS max_datetime,
MIN(timestamp) AS min_datetime
FROM events.errors
INNER JOIN public.errors AS pe USING (error_id)
INNER JOIN public.sessions USING (session_id)
{extra_join}
WHERE {" AND ".join(pg_sub_query)}
GROUP BY error_id, name, message
ORDER BY {sort} {order}) AS details
LIMIT %(errors_limit)s OFFSET %(errors_offset)s
) AS details
INNER JOIN LATERAL (SELECT MAX(timestamp) AS last_occurrence,
MIN(timestamp) AS first_occurrence
FROM events.errors
WHERE errors.error_id = details.error_id) AS time_details ON (TRUE)
INNER JOIN LATERAL (SELECT jsonb_agg(chart_details) AS chart
FROM (SELECT generated_timestamp AS timestamp,
COUNT(session_id) AS count
FROM generate_series(%(startDate)s, %(endDate)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT DISTINCT session_id
FROM events.errors
WHERE {" AND ".join(pg_sub_query_chart)}
) AS sessions ON (TRUE)
GROUP BY timestamp
ORDER BY timestamp) AS chart_details) AS chart_details ON (TRUE);"""
# print("--------------------")
# print(cur.mogrify(main_pg_query, params))
# print("--------------------")
cur.execute(cur.mogrify(main_pg_query, params))
rows = cur.fetchall()
total = 0 if len(rows) == 0 else rows[0]["full_count"]
if flows:
return {"count": total}
if total == 0:
rows = []
else:
if len(statuses) == 0:
query = cur.mogrify(
"""SELECT error_id, status, parent_error_id, payload,
COALESCE((SELECT TRUE
FROM public.user_favorite_errors AS fe
WHERE errors.error_id = fe.error_id
AND fe.user_id = %(user_id)s LIMIT 1), FALSE) AS favorite,
COALESCE((SELECT TRUE
FROM public.user_viewed_errors AS ve
WHERE errors.error_id = ve.error_id
AND ve.user_id = %(user_id)s LIMIT 1), FALSE) AS viewed
FROM public.errors
WHERE project_id = %(project_id)s AND error_id IN %(error_ids)s;""",
{"project_id": project_id, "error_ids": tuple([r["error_id"] for r in rows]),
"user_id": user_id})
cur.execute(query=query)
statuses = helper.list_to_camel_case(cur.fetchall())
statuses = {
s["errorId"]: s for s in statuses
}
for r in rows:
r.pop("full_count")
if r["error_id"] in statuses:
r["status"] = statuses[r["error_id"]]["status"]
r["parent_error_id"] = statuses[r["error_id"]]["parentErrorId"]
r["favorite"] = statuses[r["error_id"]]["favorite"]
r["viewed"] = statuses[r["error_id"]]["viewed"]
r["stack"] = format_first_stack_frame(statuses[r["error_id"]])["stack"]
else:
r["status"] = "untracked"
r["parent_error_id"] = None
r["favorite"] = False
r["viewed"] = False
r["stack"] = None
        # drop rows whose only stack frame is a bare "Script error." with no source path
        offset = len(rows)
        rows = [r for r in rows
                if r["stack"] is None
                or len(r["stack"]) != 1
                or r["message"].lower() != "script error."
                or len(r["stack"][0]["absPath"]) > 0]
        offset -= len(rows)
return {
'total': total - offset,
'errors': helper.list_to_camel_case(rows)
}
def __save_stacktrace(error_id, data):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""UPDATE public.errors
SET stacktrace=%(data)s::jsonb, stacktrace_parsed_at=timezone('utc'::text, now())
WHERE error_id = %(error_id)s;""",
{"error_id": error_id, "data": json.dumps(data)})
cur.execute(query=query)
def get_trace(project_id, error_id):
error = get(error_id=error_id, family=False)
if error is None:
return {"errors": ["error not found"]}
if error.get("source", "") != "js_exception":
return {"errors": ["this source of errors doesn't have a sourcemap"]}
if error.get("payload") is None:
return {"errors": ["null payload"]}
if error.get("stacktrace") is not None:
return {"sourcemapUploaded": True,
"trace": error.get("stacktrace"),
"preparsed": True}
trace, all_exists = sourcemaps.get_traces_group(project_id=project_id, payload=error["payload"])
if all_exists:
__save_stacktrace(error_id=error_id, data=trace)
return {"sourcemapUploaded": all_exists,
"trace": trace,
"preparsed": False}
def get_sessions(start_date, end_date, project_id, user_id, error_id):
extra_constraints = ["s.project_id = %(project_id)s",
"s.start_ts >= %(startDate)s",
"s.start_ts <= %(endDate)s",
"e.error_id = %(error_id)s"]
if start_date is None:
start_date = TimeUTC.now(-7)
if end_date is None:
end_date = TimeUTC.now()
params = {
"startDate": start_date,
"endDate": end_date,
"project_id": project_id,
"userId": user_id,
"error_id": error_id}
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
f"""SELECT s.project_id,
s.session_id::text AS session_id,
s.user_uuid,
s.user_id,
s.user_agent,
s.user_os,
s.user_browser,
s.user_device,
s.user_country,
s.start_ts,
s.duration,
s.events_count,
s.pages_count,
s.errors_count,
s.issue_types,
COALESCE((SELECT TRUE
FROM public.user_favorite_sessions AS fs
WHERE s.session_id = fs.session_id
AND fs.user_id = %(userId)s LIMIT 1), FALSE) AS favorite,
COALESCE((SELECT TRUE
FROM public.user_viewed_sessions AS fs
WHERE s.session_id = fs.session_id
AND fs.user_id = %(userId)s LIMIT 1), FALSE) AS viewed
FROM public.sessions AS s INNER JOIN events.errors AS e USING (session_id)
WHERE {" AND ".join(extra_constraints)}
ORDER BY s.start_ts DESC;""",
params)
cur.execute(query=query)
sessions_list = []
total = cur.rowcount
row = cur.fetchone()
while row is not None and len(sessions_list) < 100:
sessions_list.append(row)
row = cur.fetchone()
return {
'total': total,
'sessions': helper.list_to_camel_case(sessions_list)
}
ACTION_STATE = {
"unsolve": 'unresolved',
"solve": 'resolved',
"ignore": 'ignored'
}
def change_state(project_id, user_id, error_id, action):
errors = get(error_id, family=True)
print(len(errors))
status = ACTION_STATE.get(action)
if errors is None or len(errors) == 0:
return {"errors": ["error not found"]}
if errors[0]["status"] == status:
return {"errors": [f"error is already {status}"]}
if errors[0]["status"] == ACTION_STATE["solve"] and status == ACTION_STATE["ignore"]:
return {"errors": [f"state transition not permitted {errors[0]['status']} -> {status}"]}
params = {
"userId": user_id,
"error_ids": tuple([e["errorId"] for e in errors]),
"status": status}
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""UPDATE public.errors
SET status = %(status)s
WHERE error_id IN %(error_ids)s
RETURNING status""",
params)
cur.execute(query=query)
row = cur.fetchone()
if row is not None:
for e in errors:
e["status"] = row["status"]
return {"data": errors}
MAX_RANK = 2
def __status_rank(status):
return {
'unresolved': MAX_RANK - 2,
'ignored': MAX_RANK - 1,
'resolved': MAX_RANK
}.get(status)
def merge(error_ids):
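    """Merge errors into one family: the first id becomes the parent and the
    highest-ranked status (resolved > ignored > unresolved) is applied to all."""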
error_ids = list(set(error_ids))
errors = get_batch(error_ids)
if len(error_ids) <= 1 or len(error_ids) > len(errors):
return {"errors": ["invalid list of ids"]}
error_ids = [e["errorId"] for e in errors]
parent_error_id = error_ids[0]
status = "unresolved"
for e in errors:
if __status_rank(status) < __status_rank(e["status"]):
status = e["status"]
if __status_rank(status) == MAX_RANK:
break
params = {
"error_ids": tuple(error_ids),
"parent_error_id": parent_error_id,
"status": status
}
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""UPDATE public.errors
SET parent_error_id = %(parent_error_id)s, status = %(status)s
WHERE error_id IN %(error_ids)s OR parent_error_id IN %(error_ids)s;""",
params)
cur.execute(query=query)
# row = cur.fetchone()
return {"data": "success"}
def format_first_stack_frame(error):
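    """Parse the error payload into a stack truncated to its first frame,
    capping context lines at 1000 characters and decoding bytes filenames."""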
error["stack"] = sourcemaps.format_payload(error.pop("payload"), truncate_to_first=True)
for s in error["stack"]:
for c in s.get("context", []):
for sci, sc in enumerate(c):
if isinstance(sc, str) and len(sc) > 1000:
c[sci] = sc[:1000]
# convert bytes to string:
if isinstance(s["filename"], bytes):
s["filename"] = s["filename"].decode("utf-8")
return error
def stats(project_id, user_id, startTimestamp=None, endTimestamp=None):
    # defaults are computed at call time; TimeUTC.now() in the signature would be
    # evaluated once at import time, freezing the time window
    if startTimestamp is None:
        startTimestamp = TimeUTC.now(delta_days=-7)
    if endTimestamp is None:
        endTimestamp = TimeUTC.now()
    with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""WITH user_viewed AS (SELECT error_id FROM public.user_viewed_errors WHERE user_id = %(user_id)s)
SELECT COUNT(timed_errors.*) AS unresolved_and_unviewed
FROM (SELECT root_error.error_id
FROM events.errors
INNER JOIN public.errors AS root_error USING (error_id)
LEFT JOIN user_viewed USING (error_id)
WHERE project_id = %(project_id)s
AND timestamp >= %(startTimestamp)s
AND timestamp <= %(endTimestamp)s
AND source = 'js_exception'
AND root_error.status = 'unresolved'
AND user_viewed.error_id ISNULL
LIMIT 1
) AS timed_errors;""",
{"project_id": project_id, "user_id": user_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp})
cur.execute(query=query)
row = cur.fetchone()
return {
"data": helper.dict_to_camel_case(row)
}


@@ -1,13 +0,0 @@
import logging
from decouple import config
logger = logging.getLogger(__name__)
from . import errors_pg as errors_legacy
if config("EXP_ERRORS_SEARCH", cast=bool, default=False):
logger.info(">>> Using experimental error search")
from . import errors_ch as errors
else:
from . import errors_pg as errors


@@ -1,409 +0,0 @@
import schemas
from chalicelib.core import metadata
from chalicelib.core.errors import errors_legacy
from chalicelib.core.errors.modules import errors_helper
from chalicelib.core.errors.modules import sessions
from chalicelib.utils import ch_client, exp_ch_helper
from chalicelib.utils import helper, metrics_helper
from chalicelib.utils.TimeUTC import TimeUTC
def _multiple_values(values, value_key="value"):
query_values = {}
if values is not None and isinstance(values, list):
for i in range(len(values)):
k = f"{value_key}_{i}"
query_values[k] = values[i]
return query_values
def __get_sql_operator(op: schemas.SearchEventOperator):
return {
schemas.SearchEventOperator.IS: "=",
schemas.SearchEventOperator.IS_ANY: "IN",
schemas.SearchEventOperator.ON: "=",
schemas.SearchEventOperator.ON_ANY: "IN",
schemas.SearchEventOperator.IS_NOT: "!=",
schemas.SearchEventOperator.NOT_ON: "!=",
schemas.SearchEventOperator.CONTAINS: "ILIKE",
schemas.SearchEventOperator.NOT_CONTAINS: "NOT ILIKE",
schemas.SearchEventOperator.STARTS_WITH: "ILIKE",
schemas.SearchEventOperator.ENDS_WITH: "ILIKE",
}.get(op, "=")
def _isAny_opreator(op: schemas.SearchEventOperator):
return op in [schemas.SearchEventOperator.ON_ANY, schemas.SearchEventOperator.IS_ANY]
def _isUndefined_operator(op: schemas.SearchEventOperator):
return op in [schemas.SearchEventOperator.IS_UNDEFINED]
def __is_negation_operator(op: schemas.SearchEventOperator):
return op in [schemas.SearchEventOperator.IS_NOT,
schemas.SearchEventOperator.NOT_ON,
schemas.SearchEventOperator.NOT_CONTAINS]
def _multiple_conditions(condition, values, value_key="value", is_not=False):
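    """Expand a condition template into indexed variants (value_0, value_1, ...)
    joined with OR, or with AND when the operator is a negation."""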
query = []
for i in range(len(values)):
k = f"{value_key}_{i}"
query.append(condition.replace(value_key, k))
return "(" + (" AND " if is_not else " OR ").join(query) + ")"
def get(error_id, family=False):
return errors_legacy.get(error_id=error_id, family=family)
def get_batch(error_ids):
return errors_legacy.get_batch(error_ids=error_ids)
def __get_basic_constraints_events(platform=None, time_constraint=True, startTime_arg_name="startDate",
endTime_arg_name="endDate", type_condition=True, project_key="project_id",
table_name=None):
ch_sub_query = [f"{project_key} =toUInt16(%(project_id)s)"]
if table_name is not None:
table_name = table_name + "."
else:
table_name = ""
if type_condition:
ch_sub_query.append(f"{table_name}`$event_name`='ERROR'")
if time_constraint:
ch_sub_query += [f"{table_name}created_at >= toDateTime(%({startTime_arg_name})s/1000)",
f"{table_name}created_at < toDateTime(%({endTime_arg_name})s/1000)"]
# if platform == schemas.PlatformType.MOBILE:
# ch_sub_query.append("user_device_type = 'mobile'")
# elif platform == schemas.PlatformType.DESKTOP:
# ch_sub_query.append("user_device_type = 'desktop'")
return ch_sub_query
def __get_sort_key(key):
return {
schemas.ErrorSort.OCCURRENCE: "max_datetime",
schemas.ErrorSort.USERS_COUNT: "users",
schemas.ErrorSort.SESSIONS_COUNT: "sessions"
}.get(key, 'max_datetime')
def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, user_id):
MAIN_EVENTS_TABLE = exp_ch_helper.get_main_events_table(data.startTimestamp)
MAIN_SESSIONS_TABLE = exp_ch_helper.get_main_sessions_table(data.startTimestamp)
platform = None
for f in data.filters:
if f.type == schemas.FilterType.PLATFORM and len(f.value) > 0:
platform = f.value[0]
ch_sessions_sub_query = errors_helper.__get_basic_constraints_ch(platform, type_condition=False)
# ignore platform for errors table
ch_sub_query = __get_basic_constraints_events(None, type_condition=True)
ch_sub_query.append("JSONExtractString(toString(`$properties`), 'source') = 'js_exception'")
# To ignore Script error
ch_sub_query.append("JSONExtractString(toString(`$properties`), 'message') != 'Script error.'")
error_ids = None
if data.startTimestamp is None:
data.startTimestamp = TimeUTC.now(-7)
if data.endTimestamp is None:
data.endTimestamp = TimeUTC.now(1)
subquery_part = ""
params = {}
if len(data.events) > 0:
errors_condition_count = 0
for i, e in enumerate(data.events):
if e.type == schemas.EventType.ERROR:
errors_condition_count += 1
is_any = _isAny_opreator(e.operator)
op = __get_sql_operator(e.operator)
e_k = f"e_value{i}"
params = {**params, **_multiple_values(e.value, value_key=e_k)}
if not is_any and len(e.value) > 0 and e.value[0] not in [None, "*", ""]:
ch_sub_query.append(
_multiple_conditions(f"(message {op} %({e_k})s OR name {op} %({e_k})s)",
e.value, value_key=e_k))
if len(data.events) > errors_condition_count:
subquery_part_args, subquery_part = sessions.search_query_parts_ch(data=data, error_status=data.status,
errors_only=True,
project_id=project.project_id,
user_id=user_id,
issue=None,
favorite_only=False)
subquery_part = f"INNER JOIN {subquery_part} USING(session_id)"
params = {**params, **subquery_part_args}
if len(data.filters) > 0:
meta_keys = None
# include a sub-query of sessions inside the events query, in order to reduce the selected data
for i, f in enumerate(data.filters):
if not isinstance(f.value, list):
f.value = [f.value]
filter_type = f.type
f.value = helper.values_for_operator(value=f.value, op=f.operator)
f_k = f"f_value{i}"
params = {**params, f_k: f.value, **_multiple_values(f.value, value_key=f_k)}
op = __get_sql_operator(f.operator) \
if filter_type not in [schemas.FilterType.EVENTS_COUNT] else f.operator
is_any = _isAny_opreator(f.operator)
is_undefined = _isUndefined_operator(f.operator)
if not is_any and not is_undefined and len(f.value) == 0:
continue
is_not = False
if __is_negation_operator(f.operator):
is_not = True
if filter_type == schemas.FilterType.USER_BROWSER:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.user_browser)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f's.user_browser {op} %({f_k})s', f.value, is_not=is_not,
value_key=f_k))
elif filter_type in [schemas.FilterType.USER_OS, schemas.FilterType.USER_OS_MOBILE]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.user_os)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f's.user_os {op} %({f_k})s', f.value, is_not=is_not, value_key=f_k))
elif filter_type in [schemas.FilterType.USER_DEVICE, schemas.FilterType.USER_DEVICE_MOBILE]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.user_device)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f's.user_device {op} %({f_k})s', f.value, is_not=is_not,
value_key=f_k))
elif filter_type in [schemas.FilterType.USER_COUNTRY, schemas.FilterType.USER_COUNTRY_MOBILE]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.user_country)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f's.user_country {op} %({f_k})s', f.value, is_not=is_not,
value_key=f_k))
elif filter_type in [schemas.FilterType.UTM_SOURCE]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.utm_source)')
elif is_undefined:
ch_sessions_sub_query.append('isNull(s.utm_source)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f's.utm_source {op} toString(%({f_k})s)', f.value, is_not=is_not,
value_key=f_k))
elif filter_type in [schemas.FilterType.UTM_MEDIUM]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.utm_medium)')
elif is_undefined:
ch_sessions_sub_query.append('isNull(s.utm_medium)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f's.utm_medium {op} toString(%({f_k})s)', f.value, is_not=is_not,
value_key=f_k))
elif filter_type in [schemas.FilterType.UTM_CAMPAIGN]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.utm_campaign)')
elif is_undefined:
ch_sessions_sub_query.append('isNull(s.utm_campaign)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f's.utm_campaign {op} toString(%({f_k})s)', f.value, is_not=is_not,
value_key=f_k))
elif filter_type == schemas.FilterType.DURATION:
if len(f.value) > 0 and f.value[0] is not None:
ch_sessions_sub_query.append("s.duration >= %(minDuration)s")
params["minDuration"] = f.value[0]
if len(f.value) > 1 and f.value[1] is not None and int(f.value[1]) > 0:
ch_sessions_sub_query.append("s.duration <= %(maxDuration)s")
params["maxDuration"] = f.value[1]
elif filter_type == schemas.FilterType.REFERRER:
# extra_from += f"INNER JOIN {events.EventType.LOCATION.table} AS p USING(session_id)"
if is_any:
referrer_constraint = 'isNotNull(s.base_referrer)'
else:
referrer_constraint = _multiple_conditions(f"s.base_referrer {op} %({f_k})s", f.value,
is_not=is_not, value_key=f_k)
elif filter_type == schemas.FilterType.METADATA:
# fetch the metadata list only when it is actually needed
if meta_keys is None:
meta_keys = metadata.get(project_id=project.project_id)
meta_keys = {m["key"]: m["index"] for m in meta_keys}
if f.source in meta_keys.keys():
if is_any:
ch_sessions_sub_query.append(f"isNotNull(s.{metadata.index_to_colname(meta_keys[f.source])})")
elif is_undefined:
ch_sessions_sub_query.append(f"isNull(s.{metadata.index_to_colname(meta_keys[f.source])})")
else:
ch_sessions_sub_query.append(
_multiple_conditions(
f"s.{metadata.index_to_colname(meta_keys[f.source])} {op} toString(%({f_k})s)",
f.value, is_not=is_not, value_key=f_k))
elif filter_type in [schemas.FilterType.USER_ID, schemas.FilterType.USER_ID_MOBILE]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.user_id)')
elif is_undefined:
ch_sessions_sub_query.append('isNull(s.user_id)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f"s.user_id {op} toString(%({f_k})s)", f.value, is_not=is_not,
value_key=f_k))
elif filter_type in [schemas.FilterType.USER_ANONYMOUS_ID,
schemas.FilterType.USER_ANONYMOUS_ID_MOBILE]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.user_anonymous_id)')
elif is_undefined:
ch_sessions_sub_query.append('isNull(s.user_anonymous_id)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f"s.user_anonymous_id {op} toString(%({f_k})s)", f.value,
is_not=is_not,
value_key=f_k))
elif filter_type in [schemas.FilterType.REV_ID, schemas.FilterType.REV_ID_MOBILE]:
if is_any:
ch_sessions_sub_query.append('isNotNull(s.rev_id)')
elif is_undefined:
ch_sessions_sub_query.append('isNull(s.rev_id)')
else:
ch_sessions_sub_query.append(
_multiple_conditions(f"s.rev_id {op} toString(%({f_k})s)", f.value, is_not=is_not,
value_key=f_k))
elif filter_type == schemas.FilterType.PLATFORM:
# op = __get_sql_operator(f.operator)
ch_sessions_sub_query.append(
_multiple_conditions(f"s.user_device_type {op} %({f_k})s", f.value, is_not=is_not,
value_key=f_k))
# elif filter_type == schemas.FilterType.issue:
# if is_any:
# ch_sessions_sub_query.append("notEmpty(s.issue_types)")
# else:
# ch_sessions_sub_query.append(f"hasAny(s.issue_types,%({f_k})s)")
# # _multiple_conditions(f"%({f_k})s {op} ANY (s.issue_types)", f.value, is_not=is_not,
# # value_key=f_k))
#
# if is_not:
# extra_constraints[-1] = f"not({extra_constraints[-1]})"
# ss_constraints[-1] = f"not({ss_constraints[-1]})"
elif filter_type == schemas.FilterType.EVENTS_COUNT:
ch_sessions_sub_query.append(
_multiple_conditions(f"s.events_count {op} %({f_k})s", f.value, is_not=is_not,
value_key=f_k))
with ch_client.ClickHouseClient() as ch:
step_size = metrics_helper.get_step_size(data.startTimestamp, data.endTimestamp, data.density)
sort = __get_sort_key('datetime')
if data.sort is not None:
sort = __get_sort_key(data.sort)
order = "DESC"
if data.order is not None:
order = data.order
params = {
**params,
"startDate": data.startTimestamp,
"endDate": data.endTimestamp,
"project_id": project.project_id,
"userId": user_id,
"step_size": step_size}
if data.limit is not None and data.page is not None:
params["errors_offset"] = (data.page - 1) * data.limit
params["errors_limit"] = data.limit
else:
params["errors_offset"] = 0
params["errors_limit"] = 200
# if data.bookmarked:
# cur.execute(cur.mogrify(f"""SELECT error_id
# FROM public.user_favorite_errors
# WHERE user_id = %(userId)s
# {"" if error_ids is None else "AND error_id IN %(error_ids)s"}""",
# {"userId": user_id, "error_ids": tuple(error_ids or [])}))
# error_ids = cur.fetchall()
# if len(error_ids) == 0:
# return empty_response
# error_ids = [e["error_id"] for e in error_ids]
if error_ids is not None:
params["error_ids"] = tuple(error_ids)
ch_sub_query.append("error_id IN %(error_ids)s")
main_ch_query = f"""\
SELECT details.error_id as error_id,
name, message, users, total,
sessions, last_occurrence, first_occurrence, chart
FROM (SELECT error_id,
JSONExtractString(toString(`$properties`), 'name') AS name,
JSONExtractString(toString(`$properties`), 'message') AS message,
COUNT(DISTINCT user_id) AS users,
COUNT(DISTINCT events.session_id) AS sessions,
MAX(created_at) AS max_datetime,
MIN(created_at) AS min_datetime,
COUNT(DISTINCT error_id)
OVER() AS total
FROM {MAIN_EVENTS_TABLE} AS events
INNER JOIN (SELECT session_id, coalesce(user_id,toString(user_uuid)) AS user_id
FROM {MAIN_SESSIONS_TABLE} AS s
{subquery_part}
WHERE {" AND ".join(ch_sessions_sub_query)}) AS sessions
ON (events.session_id = sessions.session_id)
WHERE {" AND ".join(ch_sub_query)}
GROUP BY error_id, name, message
ORDER BY {sort} {order}
LIMIT %(errors_limit)s OFFSET %(errors_offset)s) AS details
INNER JOIN (SELECT error_id,
toUnixTimestamp(MAX(created_at))*1000 AS last_occurrence,
toUnixTimestamp(MIN(created_at))*1000 AS first_occurrence
FROM {MAIN_EVENTS_TABLE}
WHERE project_id=%(project_id)s
AND `$event_name`='ERROR'
GROUP BY error_id) AS time_details
ON details.error_id=time_details.error_id
INNER JOIN (SELECT error_id, groupArray([timestamp, count]) AS chart
FROM (SELECT error_id,
gs.generate_series AS timestamp,
COUNT(DISTINCT session_id) AS count
FROM generate_series(%(startDate)s, %(endDate)s, %(step_size)s) AS gs
LEFT JOIN {MAIN_EVENTS_TABLE} ON(TRUE)
WHERE {" AND ".join(ch_sub_query)}
AND created_at >= toDateTime(timestamp / 1000)
AND created_at < toDateTime((timestamp + %(step_size)s) / 1000)
GROUP BY error_id, timestamp
ORDER BY timestamp) AS sub_table
GROUP BY error_id) AS chart_details ON details.error_id=chart_details.error_id;"""
# print("------------")
# print(ch.format(main_ch_query, params))
# print("------------")
query = ch.format(query=main_ch_query, parameters=params)
rows = ch.execute(query=query)
total = rows[0]["total"] if len(rows) > 0 else 0
for r in rows:
r["chart"] = list(r["chart"])
for i in range(len(r["chart"])):
r["chart"][i] = {"timestamp": r["chart"][i][0], "count": r["chart"][i][1]}
return {
'total': total,
'errors': helper.list_to_camel_case(rows)
}
def get_trace(project_id, error_id):
return errors_legacy.get_trace(project_id=project_id, error_id=error_id)
def get_sessions(start_date, end_date, project_id, user_id, error_id):
return errors_legacy.get_sessions(start_date=start_date,
end_date=end_date,
project_id=project_id,
user_id=user_id,
error_id=error_id)
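To make the query-building helpers above easier to follow, here is a self-contained sketch of how _multiple_values and _multiple_conditions expand a multi-value filter into named parameters plus an OR/AND group. The helpers are copied from the definitions above; the filter value is made up.

# Standalone copies of the helpers defined above, for illustration only.
def _multiple_values(values, value_key="value"):
    query_values = {}
    if values is not None and isinstance(values, list):
        for i in range(len(values)):
            query_values[f"{value_key}_{i}"] = values[i]
    return query_values

def _multiple_conditions(condition, values, value_key="value", is_not=False):
    query = []
    for i in range(len(values)):
        k = f"{value_key}_{i}"
        query.append(condition.replace(value_key, k))
    return "(" + (" AND " if is_not else " OR ").join(query) + ")"

# Example: a browser filter with two values (hypothetical input).
values = ["Chrome", "Firefox"]
params = _multiple_values(values, value_key="f_value0")
clause = _multiple_conditions("s.user_browser = %(f_value0)s", values, value_key="f_value0")
print(params)   # {'f_value0_0': 'Chrome', 'f_value0_1': 'Firefox'}
print(clause)   # (s.user_browser = %(f_value0_0)s OR s.user_browser = %(f_value0_1)s)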


@@ -1,248 +0,0 @@
from chalicelib.core.errors.modules import errors_helper
from chalicelib.utils import pg_client, helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.metrics_helper import get_step_size
def __flatten_sort_key_count_version(data, merge_nested=False):
if data is None:
return []
return sorted(
[
{
"name": f'{o["name"]}@{v["version"]}',
"count": v["count"]
} for o in data for v in o["partition"]
],
key=lambda o: o["count"], reverse=True) if merge_nested else \
[
{
"name": o["name"],
"count": o["count"],
} for o in data
]
def __process_tags(row):
return [
{"name": "browser", "partitions": __flatten_sort_key_count_version(data=row.get("browsers_partition"))},
{"name": "browser.ver",
"partitions": __flatten_sort_key_count_version(data=row.pop("browsers_partition"), merge_nested=True)},
{"name": "OS", "partitions": __flatten_sort_key_count_version(data=row.get("os_partition"))},
{"name": "OS.ver",
"partitions": __flatten_sort_key_count_version(data=row.pop("os_partition"), merge_nested=True)},
{"name": "device.family", "partitions": __flatten_sort_key_count_version(data=row.get("device_partition"))},
{"name": "device",
"partitions": __flatten_sort_key_count_version(data=row.pop("device_partition"), merge_nested=True)},
{"name": "country", "partitions": row.pop("country_partition")}
]
def get_details(project_id, error_id, user_id, **data):
pg_sub_query24 = errors_helper.__get_basic_constraints(time_constraint=False, chart=True,
step_size_name="step_size24")
pg_sub_query24.append("error_id = %(error_id)s")
pg_sub_query30_session = errors_helper.__get_basic_constraints(time_constraint=True, chart=False,
startTime_arg_name="startDate30",
endTime_arg_name="endDate30",
project_key="sessions.project_id")
pg_sub_query30_session.append("sessions.start_ts >= %(startDate30)s")
pg_sub_query30_session.append("sessions.start_ts <= %(endDate30)s")
pg_sub_query30_session.append("error_id = %(error_id)s")
pg_sub_query30_err = errors_helper.__get_basic_constraints(time_constraint=True, chart=False,
startTime_arg_name="startDate30",
endTime_arg_name="endDate30",
project_key="errors.project_id")
pg_sub_query30_err.append("sessions.project_id = %(project_id)s")
pg_sub_query30_err.append("sessions.start_ts >= %(startDate30)s")
pg_sub_query30_err.append("sessions.start_ts <= %(endDate30)s")
pg_sub_query30_err.append("error_id = %(error_id)s")
pg_sub_query30_err.append("source ='js_exception'")
pg_sub_query30 = errors_helper.__get_basic_constraints(time_constraint=False, chart=True,
step_size_name="step_size30")
pg_sub_query30.append("error_id = %(error_id)s")
pg_basic_query = errors_helper.__get_basic_constraints(time_constraint=False)
pg_basic_query.append("error_id = %(error_id)s")
with pg_client.PostgresClient() as cur:
data["startDate24"] = TimeUTC.now(-1)
data["endDate24"] = TimeUTC.now()
data["startDate30"] = TimeUTC.now(-30)
data["endDate30"] = TimeUTC.now()
density24 = int(data.get("density24", 24))
step_size24 = get_step_size(data["startDate24"], data["endDate24"], density24, factor=1)
density30 = int(data.get("density30", 30))
step_size30 = get_step_size(data["startDate30"], data["endDate30"], density30, factor=1)
params = {
"startDate24": data['startDate24'],
"endDate24": data['endDate24'],
"startDate30": data['startDate30'],
"endDate30": data['endDate30'],
"project_id": project_id,
"userId": user_id,
"step_size24": step_size24,
"step_size30": step_size30,
"error_id": error_id}
main_pg_query = f"""\
SELECT error_id,
name,
message,
users,
sessions,
last_occurrence,
first_occurrence,
last_session_id,
browsers_partition,
os_partition,
device_partition,
country_partition,
chart24,
chart30
FROM (SELECT error_id,
name,
message,
COUNT(DISTINCT user_id) AS users,
COUNT(DISTINCT session_id) AS sessions
FROM public.errors
INNER JOIN events.errors AS s_errors USING (error_id)
INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_err)}
GROUP BY error_id, name, message) AS details
INNER JOIN (SELECT MAX(timestamp) AS last_occurrence,
MIN(timestamp) AS first_occurrence
FROM events.errors
WHERE error_id = %(error_id)s) AS time_details ON (TRUE)
INNER JOIN (SELECT session_id AS last_session_id
FROM events.errors
WHERE error_id = %(error_id)s
ORDER BY errors.timestamp DESC
LIMIT 1) AS last_session_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(browser_details) AS browsers_partition
FROM (SELECT *
FROM (SELECT user_browser AS name,
COUNT(session_id) AS count
FROM events.errors
INNER JOIN sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
GROUP BY user_browser
ORDER BY count DESC) AS count_per_browser_query
INNER JOIN LATERAL (SELECT JSONB_AGG(version_details) AS partition
FROM (SELECT user_browser_version AS version,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
AND sessions.user_browser = count_per_browser_query.name
GROUP BY user_browser_version
ORDER BY count DESC) AS version_details
) AS browser_version_details ON (TRUE)) AS browser_details) AS browser_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(os_details) AS os_partition
FROM (SELECT *
FROM (SELECT user_os AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
GROUP BY user_os
ORDER BY count DESC) AS count_per_os_details
INNER JOIN LATERAL (SELECT jsonb_agg(count_per_version_details) AS partition
FROM (SELECT COALESCE(user_os_version,'unknown') AS version, COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
AND sessions.user_os = count_per_os_details.name
GROUP BY user_os_version
ORDER BY count DESC) AS count_per_version_details
GROUP BY count_per_os_details.name ) AS os_version_details
ON (TRUE)) AS os_details) AS os_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(device_details) AS device_partition
FROM (SELECT *
FROM (SELECT user_device_type AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
GROUP BY user_device_type
ORDER BY count DESC) AS count_per_device_details
INNER JOIN LATERAL (SELECT jsonb_agg(count_per_device_v_details) AS partition
FROM (SELECT CASE
WHEN user_device = '' OR user_device ISNULL
THEN 'unknown'
ELSE user_device END AS version,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
AND sessions.user_device_type = count_per_device_details.name
GROUP BY user_device
ORDER BY count DESC) AS count_per_device_v_details
GROUP BY count_per_device_details.name ) AS device_version_details
ON (TRUE)) AS device_details) AS device_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(count_per_country_details) AS country_partition
FROM (SELECT user_country AS name,
COUNT(session_id) AS count
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30_session)}
GROUP BY user_country
ORDER BY count DESC) AS count_per_country_details) AS country_details ON (TRUE)
INNER JOIN (SELECT jsonb_agg(chart_details) AS chart24
FROM (SELECT generated_timestamp AS timestamp,
COUNT(session_id) AS count
FROM generate_series(%(startDate24)s, %(endDate24)s, %(step_size24)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT DISTINCT session_id
FROM events.errors
INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query24)}
) AS chart_details ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp) AS chart_details) AS chart_details24 ON (TRUE)
INNER JOIN (SELECT jsonb_agg(chart_details) AS chart30
FROM (SELECT generated_timestamp AS timestamp,
COUNT(session_id) AS count
FROM generate_series(%(startDate30)s, %(endDate30)s, %(step_size30)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT DISTINCT session_id
FROM events.errors INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query30)}) AS chart_details
ON (TRUE)
GROUP BY timestamp
ORDER BY timestamp) AS chart_details) AS chart_details30 ON (TRUE);
"""
# print("--------------------")
# print(cur.mogrify(main_pg_query, params))
# print("--------------------")
cur.execute(cur.mogrify(main_pg_query, params))
row = cur.fetchone()
if row is None:
return {"errors": ["error not found"]}
row["tags"] = __process_tags(row)
query = cur.mogrify(
f"""SELECT error_id, status, session_id, start_ts,
parent_error_id,session_id, user_anonymous_id,
user_id, user_uuid, user_browser, user_browser_version,
user_os, user_os_version, user_device, payload,
FALSE AS favorite,
True AS viewed
FROM public.errors AS pe
INNER JOIN events.errors AS ee USING (error_id)
INNER JOIN public.sessions USING (session_id)
WHERE pe.project_id = %(project_id)s
AND error_id = %(error_id)s
ORDER BY start_ts DESC
LIMIT 1;""",
{"project_id": project_id, "error_id": error_id, "user_id": user_id})
cur.execute(query=query)
status = cur.fetchone()
if status is not None:
row["stack"] = errors_helper.format_first_stack_frame(status).pop("stack")
row["status"] = status.pop("status")
row["parent_error_id"] = status.pop("parent_error_id")
row["favorite"] = status.pop("favorite")
row["viewed"] = status.pop("viewed")
row["last_hydrated_session"] = status
else:
row["stack"] = []
row["last_hydrated_session"] = None
row["status"] = "untracked"
row["parent_error_id"] = None
row["favorite"] = False
row["viewed"] = False
return {"data": helper.dict_to_camel_case(row)}


@@ -1,294 +0,0 @@
import json
from typing import List
import schemas
from chalicelib.core.errors.modules import errors_helper
from chalicelib.core.sessions import sessions_search
from chalicelib.core.sourcemaps import sourcemaps
from chalicelib.utils import pg_client, helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.metrics_helper import get_step_size
def get(error_id, family=False) -> dict | List[dict]:
if family:
return get_batch([error_id])
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""SELECT *
FROM public.errors
WHERE error_id = %(error_id)s
LIMIT 1;""",
{"error_id": error_id})
cur.execute(query=query)
result = cur.fetchone()
if result is not None:
result["stacktrace_parsed_at"] = TimeUTC.datetime_to_timestamp(result["stacktrace_parsed_at"])
return helper.dict_to_camel_case(result)
def get_batch(error_ids):
if len(error_ids) == 0:
return []
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""
WITH RECURSIVE error_family AS (
SELECT *
FROM public.errors
WHERE error_id IN %(error_ids)s
UNION
SELECT child_errors.*
FROM public.errors AS child_errors
INNER JOIN error_family ON error_family.error_id = child_errors.parent_error_id OR error_family.parent_error_id = child_errors.error_id
)
SELECT *
FROM error_family;""",
{"error_ids": tuple(error_ids)})
cur.execute(query=query)
errors = cur.fetchall()
for e in errors:
e["stacktrace_parsed_at"] = TimeUTC.datetime_to_timestamp(e["stacktrace_parsed_at"])
return helper.list_to_camel_case(errors)
def __get_sort_key(key):
return {
schemas.ErrorSort.OCCURRENCE: "max_datetime",
schemas.ErrorSort.USERS_COUNT: "users",
schemas.ErrorSort.SESSIONS_COUNT: "sessions"
}.get(key, 'max_datetime')
def search(data: schemas.SearchErrorsSchema, project: schemas.ProjectContext, user_id):
empty_response = {
'total': 0,
'errors': []
}
platform = None
for f in data.filters:
if f.type == schemas.FilterType.PLATFORM and len(f.value) > 0:
platform = f.value[0]
pg_sub_query = errors_helper.__get_basic_constraints(platform, project_key="sessions.project_id")
pg_sub_query += ["sessions.start_ts>=%(startDate)s", "sessions.start_ts<%(endDate)s", "source ='js_exception'",
"pe.project_id=%(project_id)s"]
# To ignore Script error
pg_sub_query.append("pe.message!='Script error.'")
pg_sub_query_chart = errors_helper.__get_basic_constraints(platform, time_constraint=False, chart=True,
project_key=None)
if platform:
pg_sub_query_chart += ["start_ts>=%(startDate)s", "start_ts<%(endDate)s", "project_id=%(project_id)s"]
pg_sub_query_chart.append("errors.error_id =details.error_id")
statuses = []
error_ids = None
if data.startTimestamp is None:
data.startTimestamp = TimeUTC.now(-30)
if data.endTimestamp is None:
data.endTimestamp = TimeUTC.now(1)
if len(data.events) > 0 or len(data.filters) > 0:
print("-- searching for sessions before errors")
statuses = sessions_search.search_sessions(data=data, project=project, user_id=user_id, errors_only=True,
error_status=data.status)
if len(statuses) == 0:
return empty_response
error_ids = [e["errorId"] for e in statuses]
with pg_client.PostgresClient() as cur:
step_size = get_step_size(data.startTimestamp, data.endTimestamp, data.density, factor=1)
sort = __get_sort_key('datetime')
if data.sort is not None:
sort = __get_sort_key(data.sort)
order = schemas.SortOrderType.DESC
if data.order is not None:
order = data.order
extra_join = ""
params = {
"startDate": data.startTimestamp,
"endDate": data.endTimestamp,
"project_id": project.project_id,
"userId": user_id,
"step_size": step_size}
if data.status != schemas.ErrorStatus.ALL:
pg_sub_query.append("status = %(error_status)s")
params["error_status"] = data.status
if data.limit is not None and data.page is not None:
params["errors_offset"] = (data.page - 1) * data.limit
params["errors_limit"] = data.limit
else:
params["errors_offset"] = 0
params["errors_limit"] = 200
if error_ids is not None:
params["error_ids"] = tuple(error_ids)
pg_sub_query.append("error_id IN %(error_ids)s")
# if data.bookmarked:
# pg_sub_query.append("ufe.user_id = %(userId)s")
# extra_join += " INNER JOIN public.user_favorite_errors AS ufe USING (error_id)"
if data.query is not None and len(data.query) > 0:
pg_sub_query.append("(pe.name ILIKE %(error_query)s OR pe.message ILIKE %(error_query)s)")
params["error_query"] = helper.values_for_operator(value=data.query,
op=schemas.SearchEventOperator.CONTAINS)
main_pg_query = f"""SELECT full_count,
error_id,
name,
message,
users,
sessions,
last_occurrence,
first_occurrence,
chart
FROM (SELECT COUNT(details) OVER () AS full_count, details.*
FROM (SELECT error_id,
name,
message,
COUNT(DISTINCT COALESCE(user_id,user_uuid::text)) AS users,
COUNT(DISTINCT session_id) AS sessions,
MAX(timestamp) AS max_datetime,
MIN(timestamp) AS min_datetime
FROM events.errors
INNER JOIN public.errors AS pe USING (error_id)
INNER JOIN public.sessions USING (session_id)
{extra_join}
WHERE {" AND ".join(pg_sub_query)}
GROUP BY error_id, name, message
ORDER BY {sort} {order}) AS details
LIMIT %(errors_limit)s OFFSET %(errors_offset)s
) AS details
INNER JOIN LATERAL (SELECT MAX(timestamp) AS last_occurrence,
MIN(timestamp) AS first_occurrence
FROM events.errors
WHERE errors.error_id = details.error_id) AS time_details ON (TRUE)
INNER JOIN LATERAL (SELECT jsonb_agg(chart_details) AS chart
FROM (SELECT generated_timestamp AS timestamp,
COUNT(session_id) AS count
FROM generate_series(%(startDate)s, %(endDate)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL (SELECT DISTINCT session_id
FROM events.errors
{"INNER JOIN public.sessions USING(session_id)" if platform else ""}
WHERE {" AND ".join(pg_sub_query_chart)}
) AS sessions ON (TRUE)
GROUP BY timestamp
ORDER BY timestamp) AS chart_details) AS chart_details ON (TRUE);"""
# print("--------------------")
# print(cur.mogrify(main_pg_query, params))
# print("--------------------")
cur.execute(cur.mogrify(main_pg_query, params))
rows = cur.fetchall()
total = 0 if len(rows) == 0 else rows[0]["full_count"]
if total == 0:
rows = []
else:
if len(statuses) == 0:
query = cur.mogrify(
"""SELECT error_id
FROM public.errors
WHERE project_id = %(project_id)s AND error_id IN %(error_ids)s;""",
{"project_id": project.project_id, "error_ids": tuple([r["error_id"] for r in rows]),
"user_id": user_id})
cur.execute(query=query)
statuses = helper.list_to_camel_case(cur.fetchall())
statuses = {
s["errorId"]: s for s in statuses
}
for r in rows:
r.pop("full_count")
return {
'total': total,
'errors': helper.list_to_camel_case(rows)
}
def __save_stacktrace(error_id, data):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""UPDATE public.errors
SET stacktrace=%(data)s::jsonb, stacktrace_parsed_at=timezone('utc'::text, now())
WHERE error_id = %(error_id)s;""",
{"error_id": error_id, "data": json.dumps(data)})
cur.execute(query=query)
def get_trace(project_id, error_id):
error = get(error_id=error_id, family=False)
if error is None:
return {"errors": ["error not found"]}
if error.get("source", "") != "js_exception":
return {"errors": ["this source of errors doesn't have a sourcemap"]}
if error.get("payload") is None:
return {"errors": ["null payload"]}
if error.get("stacktrace") is not None:
return {"sourcemapUploaded": True,
"trace": error.get("stacktrace"),
"preparsed": True}
trace, all_exists = sourcemaps.get_traces_group(project_id=project_id, payload=error["payload"])
if all_exists:
__save_stacktrace(error_id=error_id, data=trace)
return {"sourcemapUploaded": all_exists,
"trace": trace,
"preparsed": False}
def get_sessions(start_date, end_date, project_id, user_id, error_id):
extra_constraints = ["s.project_id = %(project_id)s",
"s.start_ts >= %(startDate)s",
"s.start_ts <= %(endDate)s",
"e.error_id = %(error_id)s"]
if start_date is None:
start_date = TimeUTC.now(-7)
if end_date is None:
end_date = TimeUTC.now()
params = {
"startDate": start_date,
"endDate": end_date,
"project_id": project_id,
"userId": user_id,
"error_id": error_id}
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
f"""SELECT s.project_id,
s.session_id::text AS session_id,
s.user_uuid,
s.user_id,
s.user_agent,
s.user_os,
s.user_browser,
s.user_device,
s.user_country,
s.start_ts,
s.duration,
s.events_count,
s.pages_count,
s.errors_count,
s.issue_types,
COALESCE((SELECT TRUE
FROM public.user_favorite_sessions AS fs
WHERE s.session_id = fs.session_id
AND fs.user_id = %(userId)s LIMIT 1), FALSE) AS favorite,
COALESCE((SELECT TRUE
FROM public.user_viewed_sessions AS fs
WHERE s.session_id = fs.session_id
AND fs.user_id = %(userId)s LIMIT 1), FALSE) AS viewed
FROM public.sessions AS s INNER JOIN events.errors AS e USING (session_id)
WHERE {" AND ".join(extra_constraints)}
ORDER BY s.start_ts DESC;""",
params)
cur.execute(query=query)
sessions_list = []
total = cur.rowcount
row = cur.fetchone()
while row is not None and len(sessions_list) < 100:
sessions_list.append(row)
row = cur.fetchone()
return {
'total': total,
'sessions': helper.list_to_camel_case(sessions_list)
}
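The pagination and chart parameters in search() above reduce to simple arithmetic. The sketch below reproduces the offset/limit defaults exactly as written in the code, and shows a plausible step-size computation; the real get_step_size lives in metrics_helper and may round differently, so treat that formula as an assumption.

# Pagination exactly as in search(): page/limit -> OFFSET/LIMIT, defaulting to the first 200 rows.
def pagination_params(page=None, limit=None):
    if limit is not None and page is not None:
        return {"errors_offset": (page - 1) * limit, "errors_limit": limit}
    return {"errors_offset": 0, "errors_limit": 200}

# Assumed step-size formula: split [start, end) into `density` buckets for generate_series();
# the actual metrics_helper.get_step_size may differ in rounding.
def step_size(start_ts, end_ts, density, factor=1):
    return int((end_ts - start_ts) / (density * factor))

print(pagination_params(page=3, limit=50))      # {'errors_offset': 100, 'errors_limit': 50}
print(step_size(0, 7 * 24 * 3600 * 1000, 7))    # one bucket per day over a 7-day window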


@@ -1,11 +0,0 @@
import logging
from decouple import config
logger = logging.getLogger(__name__)
from . import helper as errors_helper
if config("EXP_ERRORS_SEARCH", cast=bool, default=False):
import chalicelib.core.sessions.sessions_ch as sessions
else:
import chalicelib.core.sessions.sessions_pg as sessions


@@ -1,58 +0,0 @@
from typing import Optional
import schemas
from chalicelib.core.sourcemaps import sourcemaps
def __get_basic_constraints(platform: Optional[schemas.PlatformType] = None, time_constraint: bool = True,
startTime_arg_name: str = "startDate", endTime_arg_name: str = "endDate",
chart: bool = False, step_size_name: str = "step_size",
project_key: Optional[str] = "project_id"):
if project_key is None:
ch_sub_query = []
else:
ch_sub_query = [f"{project_key} =%(project_id)s"]
if time_constraint:
ch_sub_query += [f"timestamp >= %({startTime_arg_name})s",
f"timestamp < %({endTime_arg_name})s"]
if chart:
ch_sub_query += [f"timestamp >= generated_timestamp",
f"timestamp < generated_timestamp + %({step_size_name})s"]
if platform == schemas.PlatformType.MOBILE:
ch_sub_query.append("user_device_type = 'mobile'")
elif platform == schemas.PlatformType.DESKTOP:
ch_sub_query.append("user_device_type = 'desktop'")
return ch_sub_query
def __get_basic_constraints_ch(platform=None, time_constraint=True, startTime_arg_name="startDate",
endTime_arg_name="endDate", type_condition=True, project_key="project_id",
table_name=None):
ch_sub_query = [f"{project_key} =toUInt16(%(project_id)s)"]
if table_name is not None:
table_name = table_name + "."
else:
table_name = ""
if type_condition:
ch_sub_query.append(f"{table_name}`$event_name`='ERROR'")
if time_constraint:
ch_sub_query += [f"{table_name}datetime >= toDateTime(%({startTime_arg_name})s/1000)",
f"{table_name}datetime < toDateTime(%({endTime_arg_name})s/1000)"]
if platform == schemas.PlatformType.MOBILE:
ch_sub_query.append("user_device_type = 'mobile'")
elif platform == schemas.PlatformType.DESKTOP:
ch_sub_query.append("user_device_type = 'desktop'")
return ch_sub_query
def format_first_stack_frame(error):
error["stack"] = sourcemaps.format_payload(error.pop("payload"), truncate_to_first=True)
for s in error["stack"]:
for c in s.get("context", []):
for sci, sc in enumerate(c):
if isinstance(sc, str) and len(sc) > 1000:
c[sci] = sc[:1000]
# convert bytes to string:
if isinstance(s["filename"], bytes):
s["filename"] = s["filename"].decode("utf-8")
return error
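To see what the constraint builder above actually emits, here is a direct copy of __get_basic_constraints run with two common argument sets; PlatformType is mocked so the snippet stays self-contained.

# Self-contained copy of __get_basic_constraints; PlatformType is mocked for the example.
class PlatformType:
    MOBILE = "mobile"
    DESKTOP = "desktop"

def get_basic_constraints(platform=None, time_constraint=True, startTime_arg_name="startDate",
                          endTime_arg_name="endDate", chart=False, step_size_name="step_size",
                          project_key="project_id"):
    sub_query = [] if project_key is None else [f"{project_key} =%(project_id)s"]
    if time_constraint:
        sub_query += [f"timestamp >= %({startTime_arg_name})s",
                      f"timestamp < %({endTime_arg_name})s"]
    if chart:
        sub_query += ["timestamp >= generated_timestamp",
                      f"timestamp < generated_timestamp + %({step_size_name})s"]
    if platform == PlatformType.MOBILE:
        sub_query.append("user_device_type = 'mobile'")
    elif platform == PlatformType.DESKTOP:
        sub_query.append("user_device_type = 'desktop'")
    return sub_query

print(get_basic_constraints())                                   # time-bounded project filter
print(get_basic_constraints(time_constraint=False, chart=True))  # per-bucket filter for the chart join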


@@ -0,0 +1,91 @@
from chalicelib.utils import pg_client
def add_favorite_error(project_id, user_id, error_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(f"""\
INSERT INTO public.user_favorite_errors
(user_id, error_id)
VALUES
(%(userId)s,%(error_id)s);""",
{"userId": user_id, "error_id": error_id})
)
return {"errorId": error_id, "favorite": True}
def remove_favorite_error(project_id, user_id, error_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(f"""\
DELETE FROM public.user_favorite_errors
WHERE
user_id = %(userId)s
AND error_id = %(error_id)s;""",
{"userId": user_id, "error_id": error_id})
)
return {"errorId": error_id, "favorite": False}
def favorite_error(project_id, user_id, error_id):
exists, favorite = error_exists_and_favorite(user_id=user_id, error_id=error_id)
if not exists:
return {"errors": ["cannot bookmark non-rehydrated errors"]}
if favorite:
return remove_favorite_error(project_id=project_id, user_id=user_id, error_id=error_id)
return add_favorite_error(project_id=project_id, user_id=user_id, error_id=error_id)
def error_exists_and_favorite(user_id, error_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(
"""SELECT errors.error_id AS exists, ufe.error_id AS favorite
FROM public.errors
LEFT JOIN (SELECT error_id FROM public.user_favorite_errors WHERE user_id = %(userId)s) AS ufe USING (error_id)
WHERE error_id = %(error_id)s;""",
{"userId": user_id, "error_id": error_id})
)
r = cur.fetchone()
if r is None:
return False, False
return True, r.get("favorite") is not None
def add_viewed_error(project_id, user_id, error_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
INSERT INTO public.user_viewed_errors
(user_id, error_id)
VALUES
(%(userId)s,%(error_id)s);""",
{"userId": user_id, "error_id": error_id})
)
def viewed_error_exists(user_id, error_id):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""SELECT
errors.error_id AS hydrated,
COALESCE((SELECT TRUE
FROM public.user_viewed_errors AS ve
WHERE ve.error_id = %(error_id)s
AND ve.user_id = %(userId)s LIMIT 1), FALSE) AS viewed
FROM public.errors
WHERE error_id = %(error_id)s""",
{"userId": user_id, "error_id": error_id})
cur.execute(
query=query
)
r = cur.fetchone()
if r:
return r.get("viewed")
return True
def viewed_error(project_id, user_id, error_id):
if viewed_error_exists(user_id=user_id, error_id=error_id):
return None
return add_viewed_error(project_id=project_id, user_id=user_id, error_id=error_id)
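The bookmark flow above boils down to a three-state decision on (exists, favorite): non-rehydrated errors are rejected, already-favorited errors are unfavorited, everything else is favorited. A tiny sketch of that truth table, with the SQL helpers stubbed out:

# Decision logic of favorite_error(), with the database calls replaced by strings for illustration.
def favorite_error(exists: bool, favorite: bool) -> str:
    if not exists:
        return "error: cannot bookmark non-rehydrated errors"
    if favorite:
        return "remove_favorite_error -> favorite=False"
    return "add_favorite_error -> favorite=True"

for exists, favorite in [(False, False), (True, False), (True, True)]:
    print((exists, favorite), "->", favorite_error(exists, favorite))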


@@ -1,16 +1,12 @@
from functools import cache
from typing import Optional
import schemas
from chalicelib.core import issues
from chalicelib.core.autocomplete import autocomplete
from chalicelib.core.sessions import sessions_metas
from chalicelib.core import sessions_metas, metadata
from chalicelib.utils import pg_client, helper
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.event_filter_definition import SupportedFilter, Event
def get_customs_by_session_id(session_id, project_id):
def get_customs_by_sessionId2_pg(session_id, project_id):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify("""\
SELECT
@@ -40,7 +36,7 @@ def __get_grouped_clickrage(rows, session_id, project_id):
for c in click_rage_issues:
merge_count = c.get("payload")
if merge_count is not None:
merge_count = merge_count.get("Count", 3)
merge_count = merge_count.get("count", 3)
else:
merge_count = 3
for i in range(len(rows)):
@@ -53,174 +49,437 @@ def __get_grouped_clickrage(rows, session_id, project_id):
return rows
def get_by_session_id(session_id, project_id, group_clickrage=False, event_type: Optional[schemas.EventType] = None):
def get_by_sessionId2_pg(session_id, project_id, group_clickrage=False):
with pg_client.PostgresClient() as cur:
rows = []
if event_type is None or event_type == schemas.EventType.CLICK:
cur.execute(cur.mogrify("""\
SELECT
c.*,
'CLICK' AS type
FROM events.clicks AS c
WHERE
c.session_id = %(session_id)s
ORDER BY c.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows += cur.fetchall()
if group_clickrage:
rows = __get_grouped_clickrage(rows=rows, session_id=session_id, project_id=project_id)
if event_type is None or event_type == schemas.EventType.INPUT:
cur.execute(cur.mogrify("""
SELECT
i.*,
'INPUT' AS type
FROM events.inputs AS i
WHERE
i.session_id = %(session_id)s
ORDER BY i.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows += cur.fetchall()
if event_type is None or event_type == schemas.EventType.LOCATION:
cur.execute(cur.mogrify("""\
SELECT
l.*,
l.path AS value,
l.path AS url,
'LOCATION' AS type
FROM events.pages AS l
WHERE
l.session_id = %(session_id)s
ORDER BY l.timestamp;""", {"project_id": project_id, "session_id": session_id}))
rows += cur.fetchall()
cur.execute(cur.mogrify("""\
SELECT
c.*,
'CLICK' AS type
FROM events.clicks AS c
WHERE
c.session_id = %(session_id)s
ORDER BY c.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows = cur.fetchall()
if group_clickrage:
rows = __get_grouped_clickrage(rows=rows, session_id=session_id, project_id=project_id)
cur.execute(cur.mogrify("""
SELECT
i.*,
'INPUT' AS type
FROM events.inputs AS i
WHERE
i.session_id = %(session_id)s
ORDER BY i.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows += cur.fetchall()
cur.execute(cur.mogrify("""\
SELECT
l.*,
l.path AS value,
l.path AS url,
'LOCATION' AS type
FROM events.pages AS l
WHERE
l.session_id = %(session_id)s
ORDER BY l.timestamp;""", {"project_id": project_id, "session_id": session_id}))
rows += cur.fetchall()
rows = helper.list_to_camel_case(rows)
rows = sorted(rows, key=lambda k: (k["timestamp"], k["messageId"]))
return rows
def _search_tags(project_id, value, key=None, source=None):
def __get_data_for_extend(data):
if "errors" not in data:
return data["data"]
def __pg_errors_query(source=None, value_length=None):
if value_length is None or value_length > 2:
return f"""((SELECT DISTINCT ON(lg.message)
lg.message AS value,
source,
'{event_type.ERROR.ui_type}' AS type
FROM {event_type.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.message ILIKE %(svalue)s
AND lg.project_id = %(project_id)s
{"AND source = %(source)s" if source is not None else ""}
LIMIT 5)
UNION ALL
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
source,
'{event_type.ERROR.ui_type}' AS type
FROM {event_type.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.name ILIKE %(svalue)s
AND lg.project_id = %(project_id)s
{"AND source = %(source)s" if source is not None else ""}
LIMIT 5)
UNION
(SELECT DISTINCT ON(lg.message)
lg.message AS value,
source,
'{event_type.ERROR.ui_type}' AS type
FROM {event_type.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.message ILIKE %(value)s
AND lg.project_id = %(project_id)s
{"AND source = %(source)s" if source is not None else ""}
LIMIT 5)
UNION ALL
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
source,
'{event_type.ERROR.ui_type}' AS type
FROM {event_type.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.name ILIKE %(value)s
AND lg.project_id = %(project_id)s
{"AND source = %(source)s" if source is not None else ""}
LIMIT 5));"""
return f"""((SELECT DISTINCT ON(lg.message)
lg.message AS value,
source,
'{event_type.ERROR.ui_type}' AS type
FROM {event_type.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.message ILIKE %(svalue)s
AND lg.project_id = %(project_id)s
{"AND source = %(source)s" if source is not None else ""}
LIMIT 5)
UNION ALL
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
source,
'{event_type.ERROR.ui_type}' AS type
FROM {event_type.ERROR.table} INNER JOIN public.errors AS lg USING (error_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.name ILIKE %(svalue)s
AND lg.project_id = %(project_id)s
{"AND source = %(source)s" if source is not None else ""}
LIMIT 5));"""
def __search_pg_errors(project_id, value, key=None, source=None):
now = TimeUTC.now()
with pg_client.PostgresClient() as cur:
query = f"""
SELECT public.tags.name AS value,
'TAG' AS type
FROM public.tags
WHERE public.tags.project_id = %(project_id)s
ORDER BY SIMILARITY(public.tags.name, %(value)s) DESC
LIMIT 10
"""
query = cur.mogrify(query, {'project_id': project_id, 'value': value})
cur.execute(query)
cur.execute(
cur.mogrify(__pg_errors_query(source,
value_length=len(value) \
if SUPPORTED_TYPES[event_type.ERROR.ui_type].change_by_length else None),
{"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value),
"source": source}))
results = helper.list_to_camel_case(cur.fetchall())
print(f"{TimeUTC.now() - now} : errors")
return results
def __search_pg_errors_ios(project_id, value, key=None, source=None):
now = TimeUTC.now()
if SUPPORTED_TYPES[event_type.ERROR_IOS.ui_type].change_by_length is False or len(value) > 2:
query = f"""(SELECT DISTINCT ON(lg.reason)
lg.reason AS value,
'{event_type.ERROR_IOS.ui_type}' AS type
FROM {event_type.ERROR_IOS.table} INNER JOIN public.crashes_ios AS lg USING (crash_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
AND lg.reason ILIKE %(svalue)s
LIMIT 5)
UNION ALL
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
'{event_type.ERROR_IOS.ui_type}' AS type
FROM {event_type.ERROR_IOS.table} INNER JOIN public.crashes_ios AS lg USING (crash_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
AND lg.name ILIKE %(svalue)s
LIMIT 5)
UNION ALL
(SELECT DISTINCT ON(lg.reason)
lg.reason AS value,
'{event_type.ERROR_IOS.ui_type}' AS type
FROM {event_type.ERROR_IOS.table} INNER JOIN public.crashes_ios AS lg USING (crash_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
AND lg.reason ILIKE %(value)s
LIMIT 5)
UNION ALL
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
'{event_type.ERROR_IOS.ui_type}' AS type
FROM {event_type.ERROR_IOS.table} INNER JOIN public.crashes_ios AS lg USING (crash_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
AND lg.name ILIKE %(value)s
LIMIT 5);"""
else:
query = f"""(SELECT DISTINCT ON(lg.reason)
lg.reason AS value,
'{event_type.ERROR_IOS.ui_type}' AS type
FROM {event_type.ERROR_IOS.table} INNER JOIN public.crashes_ios AS lg USING (crash_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
AND lg.reason ILIKE %(svalue)s
LIMIT 5)
UNION ALL
(SELECT DISTINCT ON(lg.name)
lg.name AS value,
'{event_type.ERROR_IOS.ui_type}' AS type
FROM {event_type.ERROR_IOS.table} INNER JOIN public.crashes_ios AS lg USING (crash_id) LEFT JOIN public.sessions AS s USING(session_id)
WHERE
s.project_id = %(project_id)s
AND lg.project_id = %(project_id)s
AND lg.name ILIKE %(svalue)s
LIMIT 5);"""
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(query, {"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value)}))
results = helper.list_to_camel_case(cur.fetchall())
print(f"{TimeUTC.now() - now} : errors")
return results
def __search_pg_metadata(project_id, value, key=None, source=None):
meta_keys = metadata.get(project_id=project_id)
meta_keys = {m["key"]: m["index"] for m in meta_keys}
if len(meta_keys) == 0 or key is not None and key not in meta_keys.keys():
return []
sub_from = []
if key is not None:
meta_keys = {key: meta_keys[key]}
for k in meta_keys.keys():
colname = metadata.index_to_colname(meta_keys[k])
if SUPPORTED_TYPES[event_type.METADATA.ui_type].change_by_length is False or len(value) > 2:
sub_from.append(f"""((SELECT DISTINCT ON ({colname}) {colname} AS value, '{k}' AS key
FROM public.sessions
WHERE project_id = %(project_id)s
AND {colname} ILIKE %(svalue)s LIMIT 5)
UNION
(SELECT DISTINCT ON ({colname}) {colname} AS value, '{k}' AS key
FROM public.sessions
WHERE project_id = %(project_id)s
AND {colname} ILIKE %(value)s LIMIT 5))
""")
else:
sub_from.append(f"""(SELECT DISTINCT ON ({colname}) {colname} AS value, '{k}' AS key
FROM public.sessions
WHERE project_id = %(project_id)s
AND {colname} ILIKE %(svalue)s LIMIT 5)""")
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""\
SELECT key, value, 'METADATA' AS TYPE
FROM({" UNION ALL ".join(sub_from)}) AS all_metas
LIMIT 5;""", {"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value)}))
results = helper.list_to_camel_case(cur.fetchall())
return results
class EventType:
CLICK = Event(ui_type=schemas.EventType.CLICK, table="events.clicks", column="label")
INPUT = Event(ui_type=schemas.EventType.INPUT, table="events.inputs", column="label")
LOCATION = Event(ui_type=schemas.EventType.LOCATION, table="events.pages", column="path")
CUSTOM = Event(ui_type=schemas.EventType.CUSTOM, table="events_common.customs", column="name")
REQUEST = Event(ui_type=schemas.EventType.REQUEST, table="events_common.requests", column="path")
GRAPHQL = Event(ui_type=schemas.EventType.GRAPHQL, table="events.graphql", column="name")
STATEACTION = Event(ui_type=schemas.EventType.STATE_ACTION, table="events.state_actions", column="name")
TAG = Event(ui_type=schemas.EventType.TAG, table="events.tags", column="tag_id")
ERROR = Event(ui_type=schemas.EventType.ERROR, table="events.errors",
def __generic_query(typename, value_length=None):
if value_length is None or value_length > 2:
return f"""(SELECT DISTINCT value, type
FROM public.autocomplete
WHERE
project_id = %(project_id)s
AND type='{typename}'
AND value ILIKE %(svalue)s
LIMIT 5)
UNION
(SELECT DISTINCT value, type
FROM public.autocomplete
WHERE
project_id = %(project_id)s
AND type='{typename}'
AND value ILIKE %(value)s
LIMIT 5);"""
return f"""SELECT DISTINCT value, type
FROM public.autocomplete
WHERE
project_id = %(project_id)s
AND type='{typename}'
AND value ILIKE %(svalue)s
LIMIT 10;"""
def __generic_autocomplete(event: Event):
def f(project_id, value, key=None, source=None):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(
__generic_query(event.ui_type,
value_length=len(value) \
if SUPPORTED_TYPES[event.ui_type].change_by_length \
else None),
{"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value)}))
return helper.list_to_camel_case(cur.fetchall())
return f
class event_type:
CLICK = Event(ui_type=schemas.EventType.click, table="events.clicks", column="label")
INPUT = Event(ui_type=schemas.EventType.input, table="events.inputs", column="label")
LOCATION = Event(ui_type=schemas.EventType.location, table="events.pages", column="path")
CUSTOM = Event(ui_type=schemas.EventType.custom, table="events_common.customs", column="name")
REQUEST = Event(ui_type=schemas.EventType.request, table="events_common.requests", column="path")
GRAPHQL = Event(ui_type=schemas.EventType.graphql, table="events.graphql", column="name")
STATEACTION = Event(ui_type=schemas.EventType.state_action, table="events.state_actions", column="name")
ERROR = Event(ui_type=schemas.EventType.error, table="events.errors",
column=None) # column=None because errors are searched by name or message
METADATA = Event(ui_type=schemas.FilterType.METADATA, table="public.sessions", column=None)
# MOBILE
CLICK_MOBILE = Event(ui_type=schemas.EventType.CLICK_MOBILE, table="events_ios.taps", column="label")
INPUT_MOBILE = Event(ui_type=schemas.EventType.INPUT_MOBILE, table="events_ios.inputs", column="label")
VIEW_MOBILE = Event(ui_type=schemas.EventType.VIEW_MOBILE, table="events_ios.views", column="name")
SWIPE_MOBILE = Event(ui_type=schemas.EventType.SWIPE_MOBILE, table="events_ios.swipes", column="label")
CUSTOM_MOBILE = Event(ui_type=schemas.EventType.CUSTOM_MOBILE, table="events_common.customs", column="name")
REQUEST_MOBILE = Event(ui_type=schemas.EventType.REQUEST_MOBILE, table="events_common.requests", column="path")
CRASH_MOBILE = Event(ui_type=schemas.EventType.ERROR_MOBILE, table="events_common.crashes",
column=None) # column=None because errors are searched by name or message
METADATA = Event(ui_type=schemas.FilterType.metadata, table="public.sessions", column=None)
# IOS
CLICK_IOS = Event(ui_type=schemas.EventType.click_ios, table="events_ios.clicks", column="label")
INPUT_IOS = Event(ui_type=schemas.EventType.input_ios, table="events_ios.inputs", column="label")
VIEW_IOS = Event(ui_type=schemas.EventType.view_ios, table="events_ios.views", column="name")
CUSTOM_IOS = Event(ui_type=schemas.EventType.custom_ios, table="events_common.customs", column="name")
REQUEST_IOS = Event(ui_type=schemas.EventType.request_ios, table="events_common.requests", column="url")
ERROR_IOS = Event(ui_type=schemas.EventType.error_ios, table="events_ios.crashes",
column=None) # column=None because errors are searched by name or message
@cache
def supported_types():
return {
EventType.CLICK.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CLICK),
query=autocomplete.__generic_query(typename=EventType.CLICK.ui_type)),
EventType.INPUT.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.INPUT),
query=autocomplete.__generic_query(typename=EventType.INPUT.ui_type)),
EventType.LOCATION.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.LOCATION),
query=autocomplete.__generic_query(
typename=EventType.LOCATION.ui_type)),
EventType.CUSTOM.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CUSTOM),
query=autocomplete.__generic_query(
typename=EventType.CUSTOM.ui_type)),
EventType.REQUEST.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.REQUEST),
query=autocomplete.__generic_query(
typename=EventType.REQUEST.ui_type)),
EventType.GRAPHQL.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.GRAPHQL),
query=autocomplete.__generic_query(
typename=EventType.GRAPHQL.ui_type)),
EventType.STATEACTION.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.STATEACTION),
query=autocomplete.__generic_query(
typename=EventType.STATEACTION.ui_type)),
EventType.TAG.ui_type: SupportedFilter(get=_search_tags, query=None),
EventType.ERROR.ui_type: SupportedFilter(get=autocomplete.__search_errors,
query=None),
EventType.METADATA.ui_type: SupportedFilter(get=autocomplete.__search_metadata,
query=None),
# MOBILE
EventType.CLICK_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.CLICK_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.CLICK_MOBILE.ui_type)),
EventType.SWIPE_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.SWIPE_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.SWIPE_MOBILE.ui_type)),
EventType.INPUT_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.INPUT_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.INPUT_MOBILE.ui_type)),
EventType.VIEW_MOBILE.ui_type: SupportedFilter(get=autocomplete.__generic_autocomplete(EventType.VIEW_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.VIEW_MOBILE.ui_type)),
EventType.CUSTOM_MOBILE.ui_type: SupportedFilter(
get=autocomplete.__generic_autocomplete(EventType.CUSTOM_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.CUSTOM_MOBILE.ui_type)),
EventType.REQUEST_MOBILE.ui_type: SupportedFilter(
get=autocomplete.__generic_autocomplete(EventType.REQUEST_MOBILE),
query=autocomplete.__generic_query(
typename=EventType.REQUEST_MOBILE.ui_type)),
EventType.CRASH_MOBILE.ui_type: SupportedFilter(get=autocomplete.__search_errors_mobile,
query=None),
}
SUPPORTED_TYPES = {
event_type.CLICK.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.CLICK),
query=__generic_query(typename=event_type.CLICK.ui_type),
change_by_length=True),
event_type.INPUT.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.INPUT),
query=__generic_query(typename=event_type.INPUT.ui_type),
change_by_length=True),
event_type.LOCATION.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.LOCATION),
query=__generic_query(typename=event_type.LOCATION.ui_type),
change_by_length=True),
event_type.CUSTOM.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.CUSTOM),
query=__generic_query(typename=event_type.CUSTOM.ui_type),
change_by_length=True),
event_type.REQUEST.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.REQUEST),
query=__generic_query(typename=event_type.REQUEST.ui_type),
change_by_length=True),
event_type.GRAPHQL.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.GRAPHQL),
query=__generic_query(typename=event_type.GRAPHQL.ui_type),
change_by_length=True),
event_type.STATEACTION.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.STATEACTION),
query=__generic_query(typename=event_type.STATEACTION.ui_type),
change_by_length=True),
event_type.ERROR.ui_type: SupportedFilter(get=__search_pg_errors,
query=None, change_by_length=True),
event_type.METADATA.ui_type: SupportedFilter(get=__search_pg_metadata,
query=None, change_by_length=True),
# IOS
event_type.CLICK_IOS.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.CLICK_IOS),
query=__generic_query(typename=event_type.CLICK_IOS.ui_type),
change_by_length=True),
event_type.INPUT_IOS.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.INPUT_IOS),
query=__generic_query(typename=event_type.INPUT_IOS.ui_type),
change_by_length=True),
event_type.VIEW_IOS.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.VIEW_IOS),
query=__generic_query(typename=event_type.VIEW_IOS.ui_type),
change_by_length=True),
event_type.CUSTOM_IOS.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.CUSTOM_IOS),
query=__generic_query(typename=event_type.CUSTOM_IOS.ui_type),
change_by_length=True),
event_type.REQUEST_IOS.ui_type: SupportedFilter(get=__generic_autocomplete(event_type.REQUEST_IOS),
query=__generic_query(typename=event_type.REQUEST_IOS.ui_type),
change_by_length=True),
event_type.ERROR_IOS.ui_type: SupportedFilter(get=__search_pg_errors_ios,
query=None, change_by_length=True),
}
def __get_autocomplete_table(value, project_id):
autocomplete_events = [schemas.FilterType.rev_id,
schemas.EventType.click,
schemas.FilterType.user_device,
schemas.FilterType.user_id,
schemas.FilterType.user_browser,
schemas.FilterType.user_os,
schemas.EventType.custom,
schemas.FilterType.user_country,
schemas.EventType.location,
schemas.EventType.input]
autocomplete_events.sort()
sub_queries = []
for e in autocomplete_events:
sub_queries.append(f"""(SELECT type, value
FROM public.autocomplete
WHERE project_id = %(project_id)s
AND type= '{e}'
AND value ILIKE %(svalue)s
LIMIT 5)""")
if len(value) > 2:
sub_queries.append(f"""(SELECT type, value
FROM public.autocomplete
WHERE project_id = %(project_id)s
AND type= '{e}'
AND value ILIKE %(value)s
LIMIT 5)""")
with pg_client.PostgresClient() as cur:
query = cur.mogrify(" UNION ".join(sub_queries) + ";",
{"project_id": project_id, "value": helper.string_to_sql_like(value),
"svalue": helper.string_to_sql_like("^" + value)})
try:
cur.execute(query)
except Exception as err:
print("--------- AUTOCOMPLETE SEARCH QUERY EXCEPTION -----------")
print(query.decode('UTF-8'))
print("--------- VALUE -----------")
print(value)
print("--------------------")
raise err
results = helper.list_to_camel_case(cur.fetchall())
return results
def search(text, event_type, project_id, source, key):
if not event_type:
return {"data": __get_autocomplete_table(text, project_id)}
if event_type in SUPPORTED_TYPES.keys():
rows = SUPPORTED_TYPES[event_type].get(project_id=project_id, value=text, key=key, source=source)
# for IOS events autocomplete
# if event_type + "_IOS" in SUPPORTED_TYPES.keys():
# rows += SUPPORTED_TYPES[event_type + "_IOS"].get(project_id=project_id, value=text, key=key,
# source=source)
elif event_type + "_IOS" in SUPPORTED_TYPES.keys():
rows = SUPPORTED_TYPES[event_type + "_IOS"].get(project_id=project_id, value=text, key=key,
source=source)
elif event_type in sessions_metas.SUPPORTED_TYPES.keys():
return sessions_metas.search(text, event_type, project_id)
elif event_type.endswith("_IOS") \
and event_type[:-len("_IOS")] in sessions_metas.SUPPORTED_TYPES.keys():
return sessions_metas.search(text, event_type, project_id)
else:
return {"errors": ["unsupported event"]}
return {"data": rows}
def get_errors_by_session_id(session_id, project_id):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""\
SELECT er.*,ur.*, er.timestamp - s.start_ts AS time
FROM {event_type.ERROR.table} AS er INNER JOIN public.errors AS ur USING (error_id) INNER JOIN public.sessions AS s USING (session_id)
WHERE er.session_id = %(session_id)s AND s.project_id=%(project_id)s
ORDER BY timestamp;""", {"session_id": session_id, "project_id": project_id}))
errors = cur.fetchall()
for e in errors:
e["stacktrace_parsed_at"] = TimeUTC.datetime_to_timestamp(e["stacktrace_parsed_at"])
return helper.list_to_camel_case(errors)
def search(text, event_type, project_id, source, key):
if not event_type:
return {"data": autocomplete.__get_autocomplete_table(text, project_id)}
if event_type in supported_types().keys():
rows = supported_types()[event_type].get(project_id=project_id, value=text, key=key, source=source)
elif event_type + "_MOBILE" in supported_types().keys():
rows = supported_types()[event_type + "_MOBILE"].get(project_id=project_id, value=text, key=key, source=source)
elif event_type in sessions_metas.supported_types().keys():
return sessions_metas.search(text, event_type, project_id)
elif event_type.endswith("_IOS") \
and event_type[:-len("_IOS")] in sessions_metas.supported_types().keys():
return sessions_metas.search(text, event_type, project_id)
elif event_type.endswith("_MOBILE") \
and event_type[:-len("_MOBILE")] in sessions_metas.supported_types().keys():
return sessions_metas.search(text, event_type, project_id)
else:
return {"errors": ["unsupported event"]}
return {"data": rows}

View file

@@ -0,0 +1,69 @@
from chalicelib.utils import pg_client, helper
from chalicelib.core import events
def get_customs_by_sessionId(session_id, project_id):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""\
SELECT
c.*,
'{events.event_type.CUSTOM_IOS.ui_type}' AS type
FROM {events.event_type.CUSTOM_IOS.table} AS c
WHERE
c.session_id = %(session_id)s
ORDER BY c.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows = cur.fetchall()
return helper.dict_to_camel_case(rows)
def get_by_sessionId(session_id, project_id):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""
SELECT
c.*,
'{events.event_type.CLICK_IOS.ui_type}' AS type
FROM {events.event_type.CLICK_IOS.table} AS c
WHERE
c.session_id = %(session_id)s
ORDER BY c.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows = cur.fetchall()
cur.execute(cur.mogrify(f"""
SELECT
i.*,
'{events.event_type.INPUT_IOS.ui_type}' AS type
FROM {events.event_type.INPUT_IOS.table} AS i
WHERE
i.session_id = %(session_id)s
ORDER BY i.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows += cur.fetchall()
cur.execute(cur.mogrify(f"""
SELECT
v.*,
'{events.event_type.VIEW_IOS.ui_type}' AS type
FROM {events.event_type.VIEW_IOS.table} AS v
WHERE
v.session_id = %(session_id)s
ORDER BY v.timestamp;""", {"project_id": project_id, "session_id": session_id}))
rows += cur.fetchall()
rows = helper.list_to_camel_case(rows)
rows = sorted(rows, key=lambda k: k["timestamp"])
return rows
def get_crashes_by_session_id(session_id):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""
SELECT cr.*,uc.*, cr.timestamp - s.start_ts AS time
FROM {events.event_type.ERROR_IOS.table} AS cr INNER JOIN public.crashes_ios AS uc USING (crash_id) INNER JOIN public.sessions AS s USING (session_id)
WHERE
cr.session_id = %(session_id)s
ORDER BY timestamp;""", {"session_id": session_id}))
errors = cur.fetchall()
return helper.list_to_camel_case(errors)
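Note the pattern in get_by_sessionId above: one query per events_ios table, with the rows merged and re-sorted in Python rather than via a SQL UNION. A reduced sketch of that merge step, on hypothetical rows:
clicks = [{"timestamp": 20, "type": "CLICK"}]
inputs = [{"timestamp": 10, "type": "INPUT"}]
views = [{"timestamp": 15, "type": "VIEW"}]
merged = sorted(clicks + inputs + views, key=lambda k: k["timestamp"])
# -> INPUT (10), VIEW (15), CLICK (20)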

View file

@@ -1,68 +0,0 @@
from chalicelib.utils import pg_client, helper
from chalicelib.core import events
def get_customs_by_session_id(session_id, project_id):
return events.get_customs_by_session_id(session_id=session_id, project_id=project_id)
def get_by_sessionId(session_id, project_id):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""
SELECT
c.*,
'TAP' AS type
FROM events_ios.taps AS c
WHERE
c.session_id = %(session_id)s
ORDER BY c.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows = cur.fetchall()
cur.execute(cur.mogrify(f"""
SELECT
i.*,
'INPUT' AS type
FROM events_ios.inputs AS i
WHERE
i.session_id = %(session_id)s
ORDER BY i.timestamp;""",
{"project_id": project_id, "session_id": session_id})
)
rows += cur.fetchall()
cur.execute(cur.mogrify(f"""
SELECT
v.*,
'VIEW' AS type
FROM events_ios.views AS v
WHERE
v.session_id = %(session_id)s
ORDER BY v.timestamp;""", {"project_id": project_id, "session_id": session_id}))
rows += cur.fetchall()
cur.execute(cur.mogrify(f"""
SELECT
s.*,
'SWIPE' AS type
FROM events_ios.swipes AS s
WHERE
s.session_id = %(session_id)s
ORDER BY s.timestamp;""", {"project_id": project_id, "session_id": session_id}))
rows += cur.fetchall()
rows = helper.list_to_camel_case(rows)
rows = sorted(rows, key=lambda k: k["timestamp"])
return rows
def get_crashes_by_session_id(session_id):
with pg_client.PostgresClient() as cur:
cur.execute(cur.mogrify(f"""
SELECT cr.*,uc.*, cr.timestamp - s.start_ts AS time
FROM {events.EventType.CRASH_MOBILE.table} AS cr
INNER JOIN public.crashes_ios AS uc USING (crash_ios_id)
INNER JOIN public.sessions AS s USING (session_id)
WHERE
cr.session_id = %(session_id)s
ORDER BY timestamp;""", {"session_id": session_id}))
errors = cur.fetchall()
return helper.list_to_camel_case(errors)

View file

@@ -1,598 +0,0 @@
import json
import logging
from typing import Any, List, Dict, Optional
import schemas
from chalicelib.utils import helper
from chalicelib.utils import pg_client
from chalicelib.utils.TimeUTC import TimeUTC
from fastapi import HTTPException, status
logger = logging.getLogger(__name__)
feature_flag_columns = (
"feature_flag_id",
"payload",
"flag_key",
"description",
"flag_type",
"is_persist",
"is_active",
"created_at",
"updated_at",
"created_by",
"updated_by",
)
def exists_by_name(flag_key: str, project_id: int, exclude_id: Optional[int]) -> bool:
with pg_client.PostgresClient() as cur:
query = cur.mogrify(f"""SELECT EXISTS(SELECT 1
FROM public.feature_flags
WHERE deleted_at IS NULL
AND flag_key ILIKE %(flag_key)s AND project_id=%(project_id)s
{"AND feature_flag_id!=%(exclude_id)s" if exclude_id else ""}) AS exists;""",
{"flag_key": flag_key, "exclude_id": exclude_id, "project_id": project_id})
cur.execute(query=query)
row = cur.fetchone()
return row["exists"]
def update_feature_flag_status(project_id: int, feature_flag_id: int, is_active: bool) -> Dict[str, Any]:
try:
with pg_client.PostgresClient() as cur:
query = cur.mogrify(f"""UPDATE feature_flags
SET is_active = %(is_active)s, updated_at=NOW()
WHERE feature_flag_id=%(feature_flag_id)s AND project_id=%(project_id)s
RETURNING is_active;""",
{"feature_flag_id": feature_flag_id, "is_active": is_active, "project_id": project_id})
cur.execute(query=query)
return {"is_active": cur.fetchone()["is_active"]}
except Exception as e:
logger.error(f"Failed to update feature flag status: {e}")
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
detail="Failed to update feature flag status")
def search_feature_flags(project_id: int, user_id: int, data: schemas.SearchFlagsSchema) -> Dict[str, Any]:
"""
Get all feature flags and their total count.
"""
constraints, params = prepare_constraints_params_to_search(data, project_id, user_id)
sql = f"""
SELECT COUNT(1) OVER () AS count, {", ".join(feature_flag_columns)}
FROM feature_flags
WHERE {" AND ".join(constraints)}
ORDER BY updated_at {data.order}
LIMIT %(limit)s OFFSET %(offset)s;
"""
with pg_client.PostgresClient() as cur:
query = cur.mogrify(sql, params)
cur.execute(query)
rows = cur.fetchall()
if len(rows) == 0:
return {"data": {"total": 0, "list": []}}
results = {"total": rows[0]["count"]}
rows = helper.list_to_camel_case(rows)
for row in rows:
row.pop("count")
row["createdAt"] = TimeUTC.datetime_to_timestamp(row["createdAt"])
row["updatedAt"] = TimeUTC.datetime_to_timestamp(row["updatedAt"])
results["list"] = rows
return {"data": results}
def prepare_constraints_params_to_search(data, project_id, user_id):
constraints = [
"feature_flags.project_id = %(project_id)s",
"feature_flags.deleted_at IS NULL",
]
params = {
"project_id": project_id,
"user_id": user_id,
"limit": data.limit,
"offset": (data.page - 1) * data.limit,
}
if data.is_active is not None:
constraints.append("feature_flags.is_active=%(is_active)s")
params["is_active"] = data.is_active
if data.user_id is not None:
constraints.append("feature_flags.created_by=%(user_id)s")
if data.query is not None and len(data.query) > 0:
constraints.append("flag_key ILIKE %(query)s")
params["query"] = helper.values_for_operator(value=data.query,
op=schemas.SearchEventOperator.CONTAINS)
return constraints, params
def create_feature_flag(project_id: int, user_id: int, feature_flag_data: schemas.FeatureFlagSchema) -> Optional[int]:
if feature_flag_data.flag_type == schemas.FeatureFlagType.MULTI_VARIANT and len(feature_flag_data.variants) == 0:
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
detail="Variants are required for multi variant flag")
validate_unique_flag_key(feature_flag_data, project_id)
validate_multi_variant_flag(feature_flag_data)
insert_columns = (
'project_id',
'flag_key',
'description',
'flag_type',
'payload',
'is_persist',
'is_active',
'created_by'
)
params = prepare_params_to_create_flag(feature_flag_data, project_id, user_id)
conditions_len = len(feature_flag_data.conditions)
variants_len = len(feature_flag_data.variants)
flag_sql = f"""
INSERT INTO feature_flags ({", ".join(insert_columns)})
VALUES ({", ".join(["%(" + col + ")s" for col in insert_columns])})
RETURNING feature_flag_id
"""
conditions_query = ""
variants_query = ""
if conditions_len > 0:
conditions_query = f"""
inserted_conditions AS (
INSERT INTO feature_flags_conditions(feature_flag_id, name, rollout_percentage, filters)
VALUES {",".join([f"(("
f"SELECT feature_flag_id FROM inserted_flag),"
f"%(name_{i})s,"
f"%(rollout_percentage_{i})s,"
f"%(filters_{i})s::jsonb)"
for i in range(conditions_len)])}
RETURNING feature_flag_id
)
"""
if variants_len > 0:
variants_query = f""",
inserted_variants AS (
INSERT INTO feature_flags_variants(feature_flag_id, value, description, rollout_percentage, payload)
VALUES {",".join([f"((SELECT feature_flag_id FROM inserted_flag),"
f"%(v_value_{i})s,"
f"%(v_description_{i})s,"
f"%(v_rollout_percentage_{i})s,"
f"%(v_payload_{i})s::jsonb)"
for i in range(variants_len)])}
RETURNING feature_flag_id
)
"""
query = f"""
WITH inserted_flag AS ({flag_sql}),
{conditions_query}
{variants_query}
SELECT feature_flag_id FROM inserted_flag;
"""
with pg_client.PostgresClient() as cur:
query = cur.mogrify(query, params)
cur.execute(query)
row = cur.fetchone()
if row is None:
return None
return get_feature_flag(project_id=project_id, feature_flag_id=row["feature_flag_id"])
def validate_unique_flag_key(feature_flag_data, project_id, exclude_id=None):
if exists_by_name(project_id=project_id, flag_key=feature_flag_data.flag_key, exclude_id=exclude_id):
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=f"Feature flag with key already exists.")
def validate_multi_variant_flag(feature_flag_data):
if feature_flag_data.flag_type == schemas.FeatureFlagType.MULTI_VARIANT:
if sum([v.rollout_percentage for v in feature_flag_data.variants]) > 100:
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
detail=f"Sum of rollout percentage for variants cannot be greater than 100.")
def prepare_params_to_create_flag(feature_flag_data, project_id, user_id):
conditions_data = prepare_conditions_values(feature_flag_data)
variants_data = prepare_variants_values(feature_flag_data)
params = {
"project_id": project_id,
"created_by": user_id,
**feature_flag_data.model_dump(),
**conditions_data,
**variants_data,
"payload": json.dumps(feature_flag_data.payload)
}
return params
def prepare_variants_values(feature_flag_data):
variants_data = {}
for i, v in enumerate(feature_flag_data.variants):
for k in v.model_dump().keys():
variants_data[f"v_{k}_{i}"] = v.__getattribute__(k)
variants_data[f"v_value_{i}"] = v.value
variants_data[f"v_description_{i}"] = v.description
variants_data[f"v_payload_{i}"] = json.dumps(v.payload)
variants_data[f"v_rollout_percentage_{i}"] = v.rollout_percentage
return variants_data
def prepare_conditions_values(feature_flag_data):
conditions_data = {}
for i, s in enumerate(feature_flag_data.conditions):
for k in s.model_dump().keys():
conditions_data[f"{k}_{i}"] = s.__getattribute__(k)
conditions_data[f"name_{i}"] = s.name
conditions_data[f"rollout_percentage_{i}"] = s.rollout_percentage
conditions_data[f"filters_{i}"] = json.dumps([filter_.model_dump() for filter_ in s.filters])
return conditions_data
def get_feature_flag(project_id: int, feature_flag_id: int) -> Optional[Dict[str, Any]]:
conditions_query = """
SELECT COALESCE(jsonb_agg(ffc ORDER BY condition_id), '[]'::jsonb) AS conditions
FROM feature_flags_conditions AS ffc
WHERE ffc.feature_flag_id = %(feature_flag_id)s
"""
variants_query = """
SELECT COALESCE(jsonb_agg(ffv ORDER BY variant_id), '[]'::jsonb) AS variants
FROM feature_flags_variants AS ffv
WHERE ffv.feature_flag_id = %(feature_flag_id)s
"""
sql = f"""
SELECT {", ".join(["ff." + col for col in feature_flag_columns])},
({conditions_query}) AS conditions,
({variants_query}) AS variants
FROM feature_flags AS ff
WHERE ff.feature_flag_id = %(feature_flag_id)s
AND ff.project_id = %(project_id)s
AND ff.deleted_at IS NULL;
"""
with pg_client.PostgresClient() as cur:
query = cur.mogrify(sql, {"feature_flag_id": feature_flag_id, "project_id": project_id})
cur.execute(query)
row = cur.fetchone()
if row is None:
return {"errors": ["Feature flag not found"]}
row["created_at"] = TimeUTC.datetime_to_timestamp(row["created_at"])
row["updated_at"] = TimeUTC.datetime_to_timestamp(row["updated_at"])
return {"data": helper.dict_to_camel_case(row)}
def create_conditions(feature_flag_id: int, conditions: List[schemas.FeatureFlagCondition]) -> List[Dict[str, Any]]:
"""
Create new feature flag conditions and return their data.
"""
rows = []
# insert all condition rows with a single SQL query
if len(conditions) > 0:
columns = (
"feature_flag_id",
"name",
"rollout_percentage",
"filters",
)
sql = f"""
INSERT INTO feature_flags_conditions
(feature_flag_id, name, rollout_percentage, filters)
VALUES {", ".join(["%s"] * len(conditions))}
RETURNING condition_id, {", ".join(columns)}
"""
with pg_client.PostgresClient() as cur:
params = [
(feature_flag_id, c.name, c.rollout_percentage,
json.dumps([filter_.model_dump() for filter_ in c.filters]))
for c in conditions]
query = cur.mogrify(sql, params)
cur.execute(query)
rows = cur.fetchall()
return rows
def update_feature_flag(project_id: int, feature_flag_id: int,
feature_flag: schemas.FeatureFlagSchema, user_id: int):
"""
Update an existing feature flag and return its updated data.
"""
validate_unique_flag_key(feature_flag_data=feature_flag, project_id=project_id, exclude_id=feature_flag_id)
validate_multi_variant_flag(feature_flag_data=feature_flag)
columns = (
"flag_key",
"description",
"flag_type",
"is_persist",
"is_active",
"payload",
"updated_by",
)
params = {
"updated_by": user_id,
"feature_flag_id": feature_flag_id,
"project_id": project_id,
**feature_flag.model_dump(),
"payload": json.dumps(feature_flag.payload),
}
sql = f"""
UPDATE feature_flags
SET {", ".join(f"{column} = %({column})s" for column in columns)},
updated_at = timezone('utc'::text, now())
WHERE feature_flag_id = %(feature_flag_id)s AND project_id = %(project_id)s
RETURNING feature_flag_id, {", ".join(columns)}, created_at, updated_at
"""
with pg_client.PostgresClient() as cur:
query = cur.mogrify(sql, params)
cur.execute(query)
row = cur.fetchone()
if row is None:
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Feature flag not found")
row["created_at"] = TimeUTC.datetime_to_timestamp(row["created_at"])
row["updated_at"] = TimeUTC.datetime_to_timestamp(row["updated_at"])
row['conditions'] = check_conditions(feature_flag_id, feature_flag.conditions)
row['variants'] = check_variants(feature_flag_id, feature_flag.variants)
return {"data": helper.dict_to_camel_case(row)}
def get_conditions(feature_flag_id: int):
"""
Get all conditions for a feature flag.
"""
sql = """
SELECT
condition_id,
feature_flag_id,
name,
rollout_percentage,
filters
FROM feature_flags_conditions
WHERE feature_flag_id = %(feature_flag_id)s
ORDER BY condition_id;
"""
with pg_client.PostgresClient() as cur:
query = cur.mogrify(sql, {"feature_flag_id": feature_flag_id})
cur.execute(query)
rows = cur.fetchall()
return rows
def check_variants(feature_flag_id: int, variants: List[schemas.FeatureFlagVariant]) -> Any:
existing_ids = [ev.get("variant_id") for ev in get_variants(feature_flag_id)]
to_be_deleted = []
to_be_updated = []
to_be_created = []
for vid in existing_ids:
if vid not in [v.variant_id for v in variants]:
to_be_deleted.append(vid)
for variant in variants:
if variant.variant_id is None:
to_be_created.append(variant)
else:
to_be_updated.append(variant)
if len(to_be_created) > 0:
create_variants(feature_flag_id=feature_flag_id, variants=to_be_created)
if len(to_be_updated) > 0:
update_variants(feature_flag_id=feature_flag_id, variants=to_be_updated)
if len(to_be_deleted) > 0:
delete_variants(feature_flag_id=feature_flag_id, ids=to_be_deleted)
return get_variants(feature_flag_id)
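check_variants (and check_conditions below) reconcile the stored rows against the payload: items without an id are created, items with an id are updated, and stored ids absent from the payload are deleted. The same three-way split, sketched on plain ids (hypothetical values):
existing = {1, 2, 3}                # ids already stored
incoming = {2, 3}                   # payload items carrying an id
to_delete = existing - incoming     # -> {1}
# items with id None -> create; items with an id -> update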
def get_variants(feature_flag_id: int):
sql = """
SELECT
variant_id,
feature_flag_id,
value,
payload,
rollout_percentage
FROM feature_flags_variants
WHERE feature_flag_id = %(feature_flag_id)s
ORDER BY variant_id;
"""
with pg_client.PostgresClient() as cur:
query = cur.mogrify(sql, {"feature_flag_id": feature_flag_id})
cur.execute(query)
rows = cur.fetchall()
return rows
def create_variants(feature_flag_id: int, variants: List[schemas.FeatureFlagVariant]) -> List[Dict[str, Any]]:
"""
Create new feature flag variants and return their data.
"""
rows = []
# insert all variant rows with a single SQL query
if len(variants) > 0:
columns = (
"feature_flag_id",
"value",
"description",
"payload",
"rollout_percentage",
)
sql = f"""
INSERT INTO feature_flags_variants
(feature_flag_id, value, description, payload, rollout_percentage)
VALUES {", ".join(["%s"] * len(variants))}
RETURNING variant_id, {", ".join(columns)}
"""
with pg_client.PostgresClient() as cur:
params = [(feature_flag_id, v.value, v.description, json.dumps(v.payload), v.rollout_percentage) for v in
variants]
query = cur.mogrify(sql, params)
cur.execute(query)
rows = cur.fetchall()
return rows
def update_variants(feature_flag_id: int, variants: List[schemas.FeatureFlagVariant]) -> Any:
"""
Update existing feature flag variants and return their updated data.
"""
values = []
params = {
"feature_flag_id": feature_flag_id,
}
for i in range(len(variants)):
values.append(f"(%(variant_id_{i})s, %(value_{i})s, %(rollout_percentage_{i})s, %(payload_{i})s::jsonb)")
params[f"variant_id_{i}"] = variants[i].variant_id
params[f"value_{i}"] = variants[i].value
params[f"rollout_percentage_{i}"] = variants[i].rollout_percentage
params[f"payload_{i}"] = json.dumps(variants[i].payload)
sql = f"""
UPDATE feature_flags_variants
SET value = c.value, rollout_percentage = c.rollout_percentage, payload = c.payload
FROM (VALUES {','.join(values)}) AS c(variant_id, value, rollout_percentage, payload)
WHERE c.variant_id = feature_flags_variants.variant_id AND feature_flag_id = %(feature_flag_id)s;
"""
with pg_client.PostgresClient() as cur:
query = cur.mogrify(sql, params)
cur.execute(query)
def delete_variants(feature_flag_id: int, ids: List[int]) -> None:
"""
Delete existing feature flag variants and return their data.
"""
sql = """
DELETE FROM feature_flags_variants
WHERE variant_id IN %(ids)s
AND feature_flag_id= %(feature_flag_id)s;
"""
with pg_client.PostgresClient() as cur:
query = cur.mogrify(sql, {"feature_flag_id": feature_flag_id, "ids": tuple(ids)})
cur.execute(query)
def check_conditions(feature_flag_id: int, conditions: List[schemas.FeatureFlagCondition]) -> Any:
existing_ids = [ec.get("condition_id") for ec in get_conditions(feature_flag_id)]
to_be_deleted = []
to_be_updated = []
to_be_created = []
for cid in existing_ids:
if cid not in [c.condition_id for c in conditions]:
to_be_deleted.append(cid)
for condition in conditions:
if condition.condition_id is None:
to_be_created.append(condition)
else:
to_be_updated.append(condition)
if len(to_be_created) > 0:
create_conditions(feature_flag_id=feature_flag_id, conditions=to_be_created)
if len(to_be_updated) > 0:
update_conditions(feature_flag_id=feature_flag_id, conditions=to_be_updated)
if len(to_be_deleted) > 0:
delete_conditions(feature_flag_id=feature_flag_id, ids=to_be_deleted)
return get_conditions(feature_flag_id)
def update_conditions(feature_flag_id: int, conditions: List[schemas.FeatureFlagCondition]) -> Any:
"""
Update existing feature flag conditions and return their updated data.
"""
values = []
params = {
"feature_flag_id": feature_flag_id,
}
for i in range(len(conditions)):
values.append(f"(%(condition_id_{i})s, %(name_{i})s, %(rollout_percentage_{i})s, %(filters_{i})s::jsonb)")
params[f"condition_id_{i}"] = conditions[i].condition_id
params[f"name_{i}"] = conditions[i].name
params[f"rollout_percentage_{i}"] = conditions[i].rollout_percentage
params[f"filters_{i}"] = json.dumps(conditions[i].filters)
sql = f"""
UPDATE feature_flags_conditions
SET name = c.name, rollout_percentage = c.rollout_percentage, filters = c.filters
FROM (VALUES {','.join(values)}) AS c(condition_id, name, rollout_percentage, filters)
WHERE c.condition_id = feature_flags_conditions.condition_id AND feature_flag_id = %(feature_flag_id)s;
"""
with pg_client.PostgresClient() as cur:
query = cur.mogrify(sql, params)
cur.execute(query)
def delete_conditions(feature_flag_id: int, ids: List[int]) -> None:
"""
Delete feature flag conditions.
"""
sql = """
DELETE FROM feature_flags_conditions
WHERE condition_id IN %(ids)s
AND feature_flag_id= %(feature_flag_id)s;
"""
with pg_client.PostgresClient() as cur:
query = cur.mogrify(sql, {"feature_flag_id": feature_flag_id, "ids": tuple(ids)})
cur.execute(query)
def delete_feature_flag(project_id: int, feature_flag_id: int):
"""
Delete a feature flag.
"""
conditions = [
"project_id=%(project_id)s",
"feature_flags.feature_flag_id=%(feature_flag_id)s"
]
params = {"project_id": project_id, "feature_flag_id": feature_flag_id}
with pg_client.PostgresClient() as cur:
query = cur.mogrify(f"""UPDATE feature_flags
SET deleted_at= (now() at time zone 'utc'), is_active=false
WHERE {" AND ".join(conditions)};""", params)
cur.execute(query)
return {"state": "success"}

View file

@@ -0,0 +1,372 @@
import json
from typing import List
import chalicelib.utils.helper
import schemas
from chalicelib.core import significance, sessions
from chalicelib.utils import dev
from chalicelib.utils import helper, pg_client
from chalicelib.utils.TimeUTC import TimeUTC
REMOVE_KEYS = ["key", "_key", "startDate", "endDate"]
ALLOW_UPDATE_FOR = ["name", "filter"]
def filter_stages(stages: List[schemas._SessionSearchEventSchema]):
ALLOW_TYPES = [schemas.EventType.click, schemas.EventType.input,
schemas.EventType.location, schemas.EventType.custom,
schemas.EventType.click_ios, schemas.EventType.input_ios,
schemas.EventType.view_ios, schemas.EventType.custom_ios, ]
return [s for s in stages if s.type in ALLOW_TYPES and s.value is not None]
def __parse_events(f_events: List[dict]):
return [schemas._SessionSearchEventSchema.parse_obj(e) for e in f_events]
def __unparse_events(f_events: List[schemas._SessionSearchEventSchema]):
return [e.dict() for e in f_events]
def __fix_stages(f_events: List[schemas._SessionSearchEventSchema]):
if f_events is None:
return
events = []
for e in f_events:
if e.operator is None:
e.operator = schemas.SearchEventOperator._is
if not isinstance(e.value, list):
e.value = [e.value]
is_any = sessions._isAny_opreator(e.operator)
if not is_any and isinstance(e.value, list) and len(e.value) == 0:
continue
events.append(e)
return events
def __transform_old_funnels(events):
for e in events:
if not isinstance(e.get("value"), list):
e["value"] = [e["value"]]
return events
def create(project_id, user_id, name, filter: schemas.FunnelSearchPayloadSchema, is_public):
helper.delete_keys_from_dict(filter, REMOVE_KEYS)
filter.events = filter_stages(stages=filter.events)
with pg_client.PostgresClient() as cur:
query = cur.mogrify("""\
INSERT INTO public.funnels (project_id, user_id, name, filter,is_public)
VALUES (%(project_id)s, %(user_id)s, %(name)s, %(filter)s::jsonb,%(is_public)s)
RETURNING *;""",
{"user_id": user_id, "project_id": project_id, "name": name,
"filter": json.dumps(filter.dict()),
"is_public": is_public})
cur.execute(
query
)
r = cur.fetchone()
r["created_at"] = TimeUTC.datetime_to_timestamp(r["created_at"])
r = helper.dict_to_camel_case(r)
r["filter"]["startDate"], r["filter"]["endDate"] = TimeUTC.get_start_end_from_range(r["filter"]["rangeValue"])
return {"data": r}
def update(funnel_id, user_id, project_id, name=None, filter=None, is_public=None):
s_query = []
if filter is not None:
helper.delete_keys_from_dict(filter, REMOVE_KEYS)
s_query.append("filter = %(filter)s::jsonb")
if name is not None and len(name) > 0:
s_query.append("name = %(name)s")
if is_public is not None:
s_query.append("is_public = %(is_public)s")
if len(s_query) == 0:
return {"errors": ["Nothing to update"]}
with pg_client.PostgresClient() as cur:
query = cur.mogrify(f"""\
UPDATE public.funnels
SET {" , ".join(s_query)}
WHERE funnel_id=%(funnel_id)s
AND project_id = %(project_id)s
AND (user_id = %(user_id)s OR is_public)
RETURNING *;""", {"user_id": user_id, "funnel_id": funnel_id, "name": name,
"filter": json.dumps(filter) if filter is not None else None, "is_public": is_public,
"project_id": project_id})
# print("--------------------")
# print(query)
# print("--------------------")
cur.execute(
query
)
r = cur.fetchone()
if r is None:
return {"errors": ["funnel not found"]}
r["created_at"] = TimeUTC.datetime_to_timestamp(r["created_at"])
r = helper.dict_to_camel_case(r)
r["filter"]["startDate"], r["filter"]["endDate"] = TimeUTC.get_start_end_from_range(r["filter"]["rangeValue"])
r["filter"] = helper.old_search_payload_to_flat(r["filter"])
return {"data": r}
def get_by_user(project_id, user_id, range_value=None, start_date=None, end_date=None, details=False):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(
f"""\
SELECT funnel_id, project_id, user_id, name, created_at, deleted_at, is_public
{",filter" if details else ""}
FROM public.funnels
WHERE project_id = %(project_id)s
AND funnels.deleted_at IS NULL
AND (funnels.user_id = %(user_id)s OR funnels.is_public);""",
{"project_id": project_id, "user_id": user_id}
)
)
rows = cur.fetchall()
rows = helper.list_to_camel_case(rows)
for row in rows:
row["createdAt"] = TimeUTC.datetime_to_timestamp(row["createdAt"])
if details:
row["filter"]["events"] = filter_stages(__parse_events(row["filter"]["events"]))
if row.get("filter") is not None and row["filter"].get("events") is not None:
row["filter"]["events"] = __transform_old_funnels(__unparse_events(row["filter"]["events"]))
get_start_end_time(filter_d=row["filter"], range_value=range_value, start_date=start_date,
end_date=end_date)
counts = sessions.search2_pg(data=schemas.SessionsSearchPayloadSchema.parse_obj(row["filter"]),
project_id=project_id, user_id=None, count_only=True)
row["sessionsCount"] = counts["countSessions"]
row["usersCount"] = counts["countUsers"]
filter_clone = dict(row["filter"])
overview = significance.get_overview(filter_d=row["filter"], project_id=project_id)
row["stages"] = overview["stages"]
row.pop("filter")
row["stagesCount"] = len(row["stages"])
# TODO: ask david to count it alone
row["criticalIssuesCount"] = overview["criticalIssuesCount"]
row["missedConversions"] = 0 if len(row["stages"]) < 2 \
else row["stages"][0]["sessionsCount"] - row["stages"][-1]["sessionsCount"]
row["filter"] = helper.old_search_payload_to_flat(filter_clone)
return rows
def get_possible_issue_types(project_id):
return [{"type": t, "title": chalicelib.utils.helper.get_issue_title(t)} for t in
['click_rage', 'dead_click', 'excessive_scrolling',
'bad_request', 'missing_resource', 'memory', 'cpu',
'slow_resource', 'slow_page_load', 'crash', 'custom_event_error',
'js_error']]
def get_start_end_time(filter_d, range_value, start_date, end_date):
if start_date is not None and end_date is not None:
filter_d["startDate"], filter_d["endDate"] = start_date, end_date
elif range_value is not None and len(range_value) > 0:
filter_d["rangeValue"] = range_value
filter_d["startDate"], filter_d["endDate"] = TimeUTC.get_start_end_from_range(range_value)
else:
filter_d["startDate"], filter_d["endDate"] = TimeUTC.get_start_end_from_range(filter_d["rangeValue"])
def delete(project_id, funnel_id, user_id):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify("""\
UPDATE public.funnels
SET deleted_at = timezone('utc'::text, now())
WHERE project_id = %(project_id)s
AND funnel_id = %(funnel_id)s
AND (user_id = %(user_id)s OR is_public);""",
{"funnel_id": funnel_id, "project_id": project_id, "user_id": user_id})
)
return {"data": {"state": "success"}}
def get_sessions(project_id, funnel_id, user_id, range_value=None, start_date=None, end_date=None):
f = get(funnel_id=funnel_id, project_id=project_id, user_id=user_id, flatten=False)
if f is None:
return {"errors": ["funnel not found"]}
get_start_end_time(filter_d=f["filter"], range_value=range_value, start_date=start_date, end_date=end_date)
return sessions.search2_pg(data=schemas.SessionsSearchPayloadSchema.parse_obj(f["filter"]), project_id=project_id,
user_id=user_id)
def get_sessions_on_the_fly(funnel_id, project_id, user_id, data: schemas.FunnelSearchPayloadSchema):
data.events = filter_stages(data.events)
data.events = __fix_stages(data.events)
if len(data.events) == 0:
f = get(funnel_id=funnel_id, project_id=project_id, user_id=user_id, flatten=False)
if f is None:
return {"errors": ["funnel not found"]}
get_start_end_time(filter_d=f["filter"], range_value=data.range_value,
start_date=data.startDate, end_date=data.endDate)
data = schemas.FunnelSearchPayloadSchema.parse_obj(f["filter"])
return sessions.search2_pg(data=data, project_id=project_id,
user_id=user_id)
def get_top_insights(project_id, user_id, funnel_id, range_value=None, start_date=None, end_date=None):
f = get(funnel_id=funnel_id, project_id=project_id, user_id=user_id, flatten=False)
if f is None:
return {"errors": ["funnel not found"]}
get_start_end_time(filter_d=f["filter"], range_value=range_value, start_date=start_date, end_date=end_date)
insights, total_drop_due_to_issues = significance.get_top_insights(filter_d=f["filter"], project_id=project_id)
insights = helper.list_to_camel_case(insights)
if len(insights) > 0:
# fix: cap the drop count so it cannot exceed the first stage's session count
if total_drop_due_to_issues > insights[0]["sessionsCount"]:
total_drop_due_to_issues = insights[0]["sessionsCount"]
# end fix
insights[-1]["dropDueToIssues"] = total_drop_due_to_issues
return {"data": {"stages": insights,
"totalDropDueToIssues": total_drop_due_to_issues}}
def get_top_insights_on_the_fly(funnel_id, user_id, project_id, data: schemas.FunnelInsightsPayloadSchema):
data.events = filter_stages(__parse_events(data.events))
if len(data.events) == 0:
f = get(funnel_id=funnel_id, project_id=project_id, user_id=user_id, flatten=False)
if f is None:
return {"errors": ["funnel not found"]}
get_start_end_time(filter_d=f["filter"], range_value=data.rangeValue,
start_date=data.startDate,
end_date=data.endDate)
data = schemas.FunnelInsightsPayloadSchema.parse_obj(f["filter"])
data.events = __fix_stages(data.events)
insights, total_drop_due_to_issues = significance.get_top_insights(filter_d=data.dict(), project_id=project_id)
insights = helper.list_to_camel_case(insights)
if len(insights) > 0:
# fix: cap the drop count so it cannot exceed the first stage's session count
if total_drop_due_to_issues > insights[0]["sessionsCount"]:
total_drop_due_to_issues = insights[0]["sessionsCount"]
# end fix
insights[-1]["dropDueToIssues"] = total_drop_due_to_issues
return {"data": {"stages": insights,
"totalDropDueToIssues": total_drop_due_to_issues}}
# def get_top_insights_on_the_fly_widget(project_id, data: schemas.FunnelInsightsPayloadSchema):
def get_top_insights_on_the_fly_widget(project_id, data: schemas.CustomMetricSeriesFilterSchema):
data.events = filter_stages(__parse_events(data.events))
data.events = __fix_stages(data.events)
if len(data.events) == 0:
return {"stages": [], "totalDropDueToIssues": 0}
insights, total_drop_due_to_issues = significance.get_top_insights(filter_d=data.dict(), project_id=project_id)
insights = helper.list_to_camel_case(insights)
if len(insights) > 0:
# TODO: check if this is correct
if total_drop_due_to_issues > insights[0]["sessionsCount"]:
if len(insights) == 0:
total_drop_due_to_issues = 0
else:
total_drop_due_to_issues = insights[0]["sessionsCount"] - insights[-1]["sessionsCount"]
insights[-1]["dropDueToIssues"] = total_drop_due_to_issues
return {"stages": insights,
"totalDropDueToIssues": total_drop_due_to_issues}
def get_issues(project_id, user_id, funnel_id, range_value=None, start_date=None, end_date=None):
f = get(funnel_id=funnel_id, project_id=project_id, user_id=user_id, flatten=False)
if f is None:
return {"errors": ["funnel not found"]}
get_start_end_time(filter_d=f["filter"], range_value=range_value, start_date=start_date, end_date=end_date)
return {"data": {
"issues": helper.dict_to_camel_case(significance.get_issues_list(filter_d=f["filter"], project_id=project_id))
}}
def get_issues_on_the_fly(funnel_id, user_id, project_id, data: schemas.FunnelSearchPayloadSchema):
data.events = filter_stages(data.events)
data.events = __fix_stages(data.events)
if len(data.events) == 0:
f = get(funnel_id=funnel_id, project_id=project_id, user_id=user_id, flatten=False)
if f is None:
return {"errors": ["funnel not found"]}
get_start_end_time(filter_d=f["filter"], range_value=data.rangeValue,
start_date=data.startDate,
end_date=data.endDate)
data = schemas.FunnelSearchPayloadSchema.parse_obj(f["filter"])
if len(data.events) < 2:
return {"issues": []}
return {
"issues": helper.dict_to_camel_case(
significance.get_issues_list(filter_d=data.dict(), project_id=project_id, first_stage=1,
last_stage=len(data.events)))}
# def get_issues_on_the_fly_widget(project_id, data: schemas.FunnelSearchPayloadSchema):
def get_issues_on_the_fly_widget(project_id, data: schemas.CustomMetricSeriesFilterSchema):
data.events = filter_stages(data.events)
data.events = __fix_stages(data.events)
if len(data.events) < 2:  # a funnel needs at least two stages; mirrors the guard in get_issues_on_the_fly
return {"issues": []}
return {
"issues": helper.dict_to_camel_case(
significance.get_issues_list(filter_d=data.dict(), project_id=project_id, first_stage=1,
last_stage=len(data.events)))}
def get(funnel_id, project_id, user_id, flatten=True, fix_stages=True):
with pg_client.PostgresClient() as cur:
cur.execute(
cur.mogrify(
"""\
SELECT
*
FROM public.funnels
WHERE project_id = %(project_id)s
AND deleted_at IS NULL
AND funnel_id = %(funnel_id)s
AND (user_id = %(user_id)s OR is_public);""",
{"funnel_id": funnel_id, "project_id": project_id, "user_id": user_id}
)
)
f = helper.dict_to_camel_case(cur.fetchone())
if f is None:
return None
if f.get("filter") is not None and f["filter"].get("events") is not None:
f["filter"]["events"] = __transform_old_funnels(f["filter"]["events"])
f["createdAt"] = TimeUTC.datetime_to_timestamp(f["createdAt"])
f["filter"]["events"] = __parse_events(f["filter"]["events"])
f["filter"]["events"] = filter_stages(stages=f["filter"]["events"])
if fix_stages:
f["filter"]["events"] = __fix_stages(f["filter"]["events"])
f["filter"]["events"] = [e.dict() for e in f["filter"]["events"]]
if flatten:
f["filter"] = helper.old_search_payload_to_flat(f["filter"])
return f
def search_by_issue(user_id, project_id, funnel_id, issue_id, data: schemas.FunnelSearchPayloadSchema, range_value=None,
start_date=None, end_date=None):
if len(data.events) == 0:
f = get(funnel_id=funnel_id, project_id=project_id, user_id=user_id, flatten=False)
if f is None:
return {"errors": ["funnel not found"]}
data.startDate = data.startDate if data.startDate is not None else start_date
data.endDate = data.endDate if data.endDate is not None else end_date
get_start_end_time(filter_d=f["filter"], range_value=range_value, start_date=data.startDate,
end_date=data.endDate)
data = schemas.FunnelSearchPayloadSchema.parse_obj(f["filter"])
issues = get_issues_on_the_fly(funnel_id=funnel_id, user_id=user_id, project_id=project_id, data=data) \
.get("issues", {})
issues = issues.get("significant", []) + issues.get("insignificant", [])
issue = None
for i in issues:
if i.get("issueId", "") == issue_id:
issue = i
break
return {"sessions": sessions.search2_pg(user_id=user_id, project_id=project_id, issue=issue,
data=data) if issue is not None else {"total": 0, "sessions": []},
# "stages": helper.list_to_camel_case(insights),
# "totalDropDueToIssues": total_drop_due_to_issues,
"issue": issue}

View file

@@ -1,330 +0,0 @@
import logging
import redis
import requests
from decouple import config
from chalicelib.utils import pg_client
from chalicelib.utils.TimeUTC import TimeUTC
logger = logging.getLogger(__name__)
def app_connection_string(name, port, path):
namespace = config("POD_NAMESPACE", default="app")
conn_string = config("CLUSTER_URL", default="svc.cluster.local")
return f"http://{name}.{namespace}.{conn_string}:{port}/{path}"
HEALTH_ENDPOINTS = {
"alerts": app_connection_string("alerts-openreplay", 8888, "health"),
"assets": app_connection_string("assets-openreplay", 8888, "metrics"),
"assist": app_connection_string("assist-openreplay", 8888, "health"),
"chalice": app_connection_string("chalice-openreplay", 8888, "metrics"),
"db": app_connection_string("db-openreplay", 8888, "metrics"),
"ender": app_connection_string("ender-openreplay", 8888, "metrics"),
"heuristics": app_connection_string("heuristics-openreplay", 8888, "metrics"),
"http": app_connection_string("http-openreplay", 8888, "metrics"),
"ingress-nginx": app_connection_string("ingress-nginx-openreplay", 80, "healthz"),
"integrations": app_connection_string("integrations-openreplay", 8888, "metrics"),
"sink": app_connection_string("sink-openreplay", 8888, "metrics"),
"sourcemapreader": app_connection_string(
"sourcemapreader-openreplay", 8888, "health"
),
"storage": app_connection_string("storage-openreplay", 8888, "metrics"),
}
def __check_database_pg(*_):
fail_response = {
"health": False,
"details": {"errors": ["Postgres health-check failed"]},
}
with pg_client.PostgresClient() as cur:
try:
cur.execute("SHOW server_version;")
# server_version = cur.fetchone()
except Exception as e:
logger.error("!! health failed: postgres not responding")
logger.exception(e)
return fail_response
try:
cur.execute("SELECT openreplay_version() AS version;")
# schema_version = cur.fetchone()
except Exception as e:
logger.error("!! health failed: openreplay_version not defined")
logger.exception(e)
return fail_response
return {
"health": True,
"details": {
# "version": server_version["server_version"],
# "schema": schema_version["version"]
},
}
def __always_healthy(*_):
return {"health": True, "details": {}}
def __check_be_service(service_name):
def fn(*_):
fail_response = {
"health": False,
"details": {"errors": ["server health-check failed"]},
}
try:
results = requests.get(HEALTH_ENDPOINTS.get(service_name), timeout=2)
if results.status_code != 200:
logger.error(
f"!! issue with the {service_name}-health code:{results.status_code}"
)
logger.error(results.text)
# fail_response["details"]["errors"].append(results.text)
return fail_response
except requests.exceptions.Timeout:
logger.error(f"!! Timeout getting {service_name}-health")
# fail_response["details"]["errors"].append("timeout")
return fail_response
except Exception as e:
logger.error(f"!! Issue getting {service_name}-health response")
logger.exception(e)
try:
logger.error(results.text)
# fail_response["details"]["errors"].append(results.text)
except Exception:
logger.error("couldn't get response")
# fail_response["details"]["errors"].append(str(e))
return fail_response
return {"health": True, "details": {}}
return fn
def __check_redis(*_):
fail_response = {
"health": False,
"details": {"errors": ["server health-check failed"]},
}
if config("REDIS_STRING", default=None) is None:
# fail_response["details"]["errors"].append("REDIS_STRING not defined in env-vars")
return fail_response
try:
r = redis.from_url(config("REDIS_STRING"), socket_timeout=2)
r.ping()
except Exception as e:
logger.error("!! Issue getting redis-health response")
logger.exception(e)
# fail_response["details"]["errors"].append(str(e))
return fail_response
return {
"health": True,
"details": {
# "version": r.execute_command('INFO')['redis_version']
},
}
def __check_SSL(*_):
fail_response = {
"health": False,
"details": {"errors": ["SSL Certificate health-check failed"]},
}
try:
requests.get(config("SITE_URL"), verify=True, allow_redirects=True)
except Exception as e:
logger.error("!! health failed: SSL Certificate")
logger.exception(e)
return fail_response
return {"health": True, "details": {}}
def __get_sessions_stats(*_):
with pg_client.PostgresClient() as cur:
constraints = ["projects.deleted_at IS NULL"]
query = cur.mogrify(
f"""SELECT COALESCE(SUM(sessions_count),0) AS s_c,
COALESCE(SUM(events_count),0) AS e_c
FROM public.projects_stats
INNER JOIN public.projects USING(project_id)
WHERE {" AND ".join(constraints)};"""
)
cur.execute(query)
row = cur.fetchone()
return {"numberOfSessionsCaptured": row["s_c"], "numberOfEventCaptured": row["e_c"]}
def get_health(tenant_id=None):
health_map = {
"databases": {"postgres": __check_database_pg},
"ingestionPipeline": {"redis": __check_redis},
"backendServices": {
"alerts": __check_be_service("alerts"),
"assets": __check_be_service("assets"),
"assist": __check_be_service("assist"),
"chalice": __always_healthy,
"db": __check_be_service("db"),
"ender": __check_be_service("ender"),
"frontend": __always_healthy,
"heuristics": __check_be_service("heuristics"),
"http": __check_be_service("http"),
"ingress-nginx": __always_healthy,
"integrations": __check_be_service("integrations"),
"sink": __check_be_service("sink"),
"sourcemapreader": __check_be_service("sourcemapreader"),
"storage": __check_be_service("storage"),
},
"details": __get_sessions_stats,
"ssl": __check_SSL,
}
return __process_health(health_map=health_map)
def __process_health(health_map):
response = dict(health_map)
for parent_key in health_map.keys():
if config(f"SKIP_H_{parent_key.upper()}", cast=bool, default=False):
response.pop(parent_key)
elif isinstance(health_map[parent_key], dict):
for element_key in health_map[parent_key]:
if config(
f"SKIP_H_{parent_key.upper()}_{element_key.upper()}",
cast=bool,
default=False,
):
response[parent_key].pop(element_key)
else:
response[parent_key][element_key] = health_map[parent_key][
element_key
]()
else:
response[parent_key] = health_map[parent_key]()
return response
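__process_health honors SKIP_H_<SECTION> and SKIP_H_<SECTION>_<CHECK> environment flags before invoking any check. For example (assumed environment):
# SKIP_H_SSL=true                     -> the "ssl" key is dropped entirely
# SKIP_H_BACKENDSERVICES_ASSIST=true  -> "assist" is removed from "backendServices"
# every remaining leaf callable is then invoked and replaced by its result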
def cron():
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""SELECT projects.project_id,
projects.created_at,
projects.sessions_last_check_at,
projects.first_recorded_session_at,
projects_stats.last_update_at
FROM public.projects
LEFT JOIN public.projects_stats USING (project_id)
WHERE projects.deleted_at IS NULL
ORDER BY project_id;"""
)
cur.execute(query)
rows = cur.fetchall()
for r in rows:
insert = False
if r["last_update_at"] is None:
# never counted before, must insert
insert = True
if r["first_recorded_session_at"] is None:
if r["sessions_last_check_at"] is None:
count_start_from = r["created_at"]
else:
count_start_from = r["sessions_last_check_at"]
else:
count_start_from = r["first_recorded_session_at"]
else:
# counted before, must update
count_start_from = r["last_update_at"]
count_start_from = TimeUTC.datetime_to_timestamp(count_start_from)
params = {
"project_id": r["project_id"],
"start_ts": count_start_from,
"end_ts": TimeUTC.now(),
"sessions_count": 0,
"events_count": 0,
}
query = cur.mogrify(
"""SELECT COUNT(1) AS sessions_count,
COALESCE(SUM(events_count),0) AS events_count
FROM public.sessions
WHERE project_id=%(project_id)s
AND start_ts>=%(start_ts)s
AND start_ts<=%(end_ts)s
AND duration IS NOT NULL;""",
params,
)
cur.execute(query)
row = cur.fetchone()
if row is not None:
params["sessions_count"] = row["sessions_count"]
params["events_count"] = row["events_count"]
if insert:
query = cur.mogrify(
"""INSERT INTO public.projects_stats(project_id, sessions_count, events_count, last_update_at)
VALUES (%(project_id)s, %(sessions_count)s, %(events_count)s, (now() AT TIME ZONE 'utc'::text));""",
params,
)
else:
query = cur.mogrify(
"""UPDATE public.projects_stats
SET sessions_count=sessions_count+%(sessions_count)s,
events_count=events_count+%(events_count)s,
last_update_at=(now() AT TIME ZONE 'utc'::text)
WHERE project_id=%(project_id)s;""",
params,
)
cur.execute(query)
# this cron corrects the sessions & events counts every week
def weekly_cron():
with pg_client.PostgresClient(long_query=True) as cur:
query = cur.mogrify(
"""SELECT project_id,
projects_stats.last_update_at
FROM public.projects
LEFT JOIN public.projects_stats USING (project_id)
WHERE projects.deleted_at IS NULL
ORDER BY project_id;"""
)
cur.execute(query)
rows = cur.fetchall()
for r in rows:
if r["last_update_at"] is None:
continue
params = {
"project_id": r["project_id"],
"end_ts": TimeUTC.now(),
"sessions_count": 0,
"events_count": 0,
}
query = cur.mogrify(
"""SELECT COUNT(1) AS sessions_count,
COALESCE(SUM(events_count),0) AS events_count
FROM public.sessions
WHERE project_id=%(project_id)s
AND start_ts<=%(end_ts)s
AND duration IS NOT NULL;""",
params,
)
cur.execute(query)
row = cur.fetchone()
if row is not None:
params["sessions_count"] = row["sessions_count"]
params["events_count"] = row["events_count"]
query = cur.mogrify(
"""UPDATE public.projects_stats
SET sessions_count=%(sessions_count)s,
events_count=%(events_count)s,
last_update_at=(now() AT TIME ZONE 'utc'::text)
WHERE project_id=%(project_id)s;""",
params,
)
cur.execute(query)

View file

@@ -0,0 +1,29 @@
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils import helper, pg_client
from chalicelib.utils import dev
def get_by_url(project_id, data):
args = {"startDate": data.get('startDate', TimeUTC.now(delta_days=-30)),
"endDate": data.get('endDate', TimeUTC.now()),
"project_id": project_id, "url": data["url"]}
with pg_client.PostgresClient() as cur:
query = cur.mogrify("""SELECT selector, count(1) AS count
FROM events.clicks
INNER JOIN sessions USING (session_id)
WHERE project_id = %(project_id)s
AND url = %(url)s
AND timestamp >= %(startDate)s
AND timestamp <= %(endDate)s
AND start_ts >= %(startDate)s
AND start_ts <= %(endDate)s
AND duration IS NOT NULL
GROUP BY selector;""",
args)
cur.execute(
query
)
rows = cur.fetchall()
return helper.dict_to_camel_case(rows)
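A usage sketch for the click-map query above (hypothetical payload; startDate/endDate default to the last 30 days when omitted):
# rows come back camelCased, e.g. [{"selector": "#buy-btn", "count": 42}]
rows = get_by_url(project_id=1, data={"url": "/checkout"})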

View file

@@ -0,0 +1,926 @@
import schemas
from chalicelib.core.metrics import __get_constraints, __get_constraint_values
from chalicelib.utils import helper, dev
from chalicelib.utils import pg_client
from chalicelib.utils.TimeUTC import TimeUTC
def __transform_journey(rows):
nodes = []
links = []
for r in rows:
source = r["source_event"][r["source_event"].index("_") + 1:]
target = r["target_event"][r["target_event"].index("_") + 1:]
if source not in nodes:
nodes.append(source)
if target not in nodes:
nodes.append(target)
links.append({"source": nodes.index(source), "target": nodes.index(target), "value": r["value"]})
return {"nodes": nodes, "links": sorted(links, key=lambda x: x["value"], reverse=True)}
JOURNEY_DEPTH = 5
JOURNEY_TYPES = {
"PAGES": {"table": "events.pages", "column": "path", "table_id": "message_id"},
"CLICK": {"table": "events.clicks", "column": "label", "table_id": "message_id"},
# "VIEW": {"table": "events_ios.views", "column": "name", "table_id": "seq_index"}, TODO: enable this for SAAS only
"EVENT": {"table": "events_common.customs", "column": "name", "table_id": "seq_index"}
}
def journey(project_id, startTimestamp=TimeUTC.now(delta_days=-1), endTimestamp=TimeUTC.now(), filters=[], **args):
pg_sub_query_subset = __get_constraints(project_id=project_id, data=args, duration=True, main_table="sessions",
time_constraint=True)
event_start = None
event_table = JOURNEY_TYPES["PAGES"]["table"]
event_column = JOURNEY_TYPES["PAGES"]["column"]
event_table_id = JOURNEY_TYPES["PAGES"]["table_id"]
extra_values = {}
for f in filters:
if f["type"] == "START_POINT":
event_start = f["value"]
elif f["type"] == "EVENT_TYPE" and JOURNEY_TYPES.get(f["value"]):
event_table = JOURNEY_TYPES[f["value"]]["table"]
event_column = JOURNEY_TYPES[f["value"]]["column"]
elif f["type"] in [schemas.FilterType.user_id, schemas.FilterType.user_id_ios]:
pg_sub_query_subset.append(f"sessions.user_id = %(user_id)s")
extra_values["user_id"] = f["value"]
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT source_event,
target_event,
count(*) AS value
FROM (SELECT event_number || '_' || value as target_event,
LAG(event_number || '_' || value, 1) OVER ( PARTITION BY session_rank ) AS source_event
FROM (SELECT value,
session_rank,
message_id,
ROW_NUMBER() OVER ( PARTITION BY session_rank ORDER BY timestamp ) AS event_number
{f"FROM (SELECT * FROM (SELECT *, MIN(mark) OVER ( PARTITION BY session_id , session_rank ORDER BY timestamp ) AS max FROM (SELECT *, CASE WHEN value = %(event_start)s THEN timestamp ELSE NULL END as mark"
if event_start else ""}
FROM (SELECT session_id,
message_id,
timestamp,
value,
SUM(new_session) OVER (ORDER BY session_id, timestamp) AS session_rank
FROM (SELECT *,
CASE
WHEN source_timestamp IS NULL THEN 1
ELSE 0 END AS new_session
FROM (SELECT session_id,
{event_table_id} AS message_id,
timestamp,
{event_column} AS value,
LAG(timestamp)
OVER (PARTITION BY session_id ORDER BY timestamp) AS source_timestamp
FROM {event_table} INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query_subset)}
) AS related_events) AS ranked_events) AS processed
{") AS marked) AS maxed WHERE timestamp >= max) AS filtered" if event_start else ""}
) AS sorted_events
WHERE event_number <= %(JOURNEY_DEPTH)s) AS final
WHERE source_event IS NOT NULL
and target_event IS NOT NULL
GROUP BY source_event, target_event
ORDER BY value DESC
LIMIT 20;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, "event_start": event_start, "JOURNEY_DEPTH": JOURNEY_DEPTH,
**__get_constraint_values(args), **extra_values}
# print(cur.mogrify(pg_query, params))
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
return __transform_journey(rows)
def __compute_weekly_percentage(rows):
if rows is None or len(rows) == 0:
return rows
t = -1
for r in rows:
if r["week"] == 0:
t = r["usersCount"]
r["percentage"] = r["usersCount"] / t
return rows
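Week 0 is the cohort baseline, so every percentage is computed relative to the week-0 user count. Example (hypothetical counts):
rows = [{"week": 0, "usersCount": 200}, {"week": 1, "usersCount": 50}]
__compute_weekly_percentage(rows)
# -> week 0: percentage 1.0, week 1: percentage 0.25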
def __complete_retention(rows, start_date, end_date=None):
if rows is None:
return []
max_week = 10
for i in range(max_week):
if end_date is not None and start_date + i * TimeUTC.MS_WEEK >= end_date:
break
neutral = {
"firstConnexionWeek": start_date,
"week": i,
"usersCount": 0,
"connectedUsers": [],
"percentage": 0
}
if i < len(rows) \
and i != rows[i]["week"]:
rows.insert(i, neutral)
elif i >= len(rows):
rows.append(neutral)
return rows
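__complete_retention pads the 10-week series so missing weeks appear as zero rows rather than gaps. Sketch (hypothetical input with only week 0 observed):
# rows = [{"week": 0, "usersCount": 200, ...}]
# -> weeks 1..9 are appended as zero-count placeholders, stopping early
#    once start_date + i * TimeUTC.MS_WEEK reaches end_date.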
def __complete_acquisition(rows, start_date, end_date=None):
if rows is None:
return []
max_week = 10
week = 0
delta_date = 0
while max_week > 0:
start_date += TimeUTC.MS_WEEK
if end_date is not None and start_date >= end_date:
break
delta = 0
if delta_date + week >= len(rows) \
or delta_date + week < len(rows) and rows[delta_date + week]["firstConnexionWeek"] > start_date:
for i in range(max_week):
if end_date is not None and start_date + i * TimeUTC.MS_WEEK >= end_date:
break
neutral = {
"firstConnexionWeek": start_date,
"week": i,
"usersCount": 0,
"connectedUsers": [],
"percentage": 0
}
rows.insert(delta_date + week + i, neutral)
delta = i
else:
for i in range(max_week):
if end_date is not None and start_date + i * TimeUTC.MS_WEEK >= end_date:
break
neutral = {
"firstConnexionWeek": start_date,
"week": i,
"usersCount": 0,
"connectedUsers": [],
"percentage": 0
}
if delta_date + week + i < len(rows) \
and i != rows[delta_date + week + i]["week"]:
rows.insert(delta_date + week + i, neutral)
elif delta_date + week + i >= len(rows):
rows.append(neutral)
delta = i
week += delta
max_week -= 1
delta_date += 1
return rows
def users_retention(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(), filters=[],
**args):
startTimestamp = TimeUTC.trunc_week(startTimestamp)
endTimestamp = startTimestamp + 10 * TimeUTC.MS_WEEK
pg_sub_query = __get_constraints(project_id=project_id, data=args, duration=True, main_table="sessions",
time_constraint=True)
pg_sub_query.append("user_id IS NOT NULL")
pg_sub_query.append("DATE_TRUNC('week', to_timestamp(start_ts / 1000)) = to_timestamp(%(startTimestamp)s / 1000)")
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT FLOOR(DATE_PART('day', connexion_week - DATE_TRUNC('week', to_timestamp(%(startTimestamp)s / 1000)::timestamp)) / 7)::integer AS week,
COUNT(DISTINCT connexions_list.user_id) AS users_count,
ARRAY_AGG(DISTINCT connexions_list.user_id) AS connected_users
FROM (SELECT DISTINCT user_id
FROM sessions
WHERE {" AND ".join(pg_sub_query)}
AND DATE_PART('week', to_timestamp((sessions.start_ts - %(startTimestamp)s)/1000)) = 1
AND NOT EXISTS((SELECT 1
FROM sessions AS bsess
WHERE bsess.start_ts < %(startTimestamp)s
AND project_id = %(project_id)s
AND bsess.user_id = sessions.user_id
LIMIT 1))
) AS users_list
LEFT JOIN LATERAL (SELECT DATE_TRUNC('week', to_timestamp(start_ts / 1000)::timestamp) AS connexion_week,
user_id
FROM sessions
WHERE users_list.user_id = sessions.user_id
AND %(startTimestamp)s <=sessions.start_ts
AND sessions.project_id = %(project_id)s
AND sessions.start_ts < (%(endTimestamp)s - 1)
GROUP BY connexion_week, user_id
) AS connexions_list ON (TRUE)
GROUP BY week
ORDER BY week;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args)}
print(cur.mogrify(pg_query, params))
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
rows = __compute_weekly_percentage(helper.list_to_camel_case(rows))
return {
"startTimestamp": startTimestamp,
"chart": __complete_retention(rows=rows, start_date=startTimestamp, end_date=TimeUTC.now())
}
def users_acquisition(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(),
filters=[],
**args):
startTimestamp = TimeUTC.trunc_week(startTimestamp)
endTimestamp = startTimestamp + 10 * TimeUTC.MS_WEEK
pg_sub_query = __get_constraints(project_id=project_id, data=args, duration=True, main_table="sessions",
time_constraint=True)
pg_sub_query.append("user_id IS NOT NULL")
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT EXTRACT(EPOCH FROM first_connexion_week::date)::bigint*1000 AS first_connexion_week,
FLOOR(DATE_PART('day', connexion_week - first_connexion_week) / 7)::integer AS week,
COUNT(DISTINCT connexions_list.user_id) AS users_count,
ARRAY_AGG(DISTINCT connexions_list.user_id) AS connected_users
FROM (SELECT user_id, MIN(DATE_TRUNC('week', to_timestamp(start_ts / 1000))) AS first_connexion_week
FROM sessions
WHERE {" AND ".join(pg_sub_query)}
AND NOT EXISTS((SELECT 1
FROM sessions AS bsess
WHERE bsess.start_ts<%(startTimestamp)s
AND project_id = %(project_id)s
AND bsess.user_id = sessions.user_id
LIMIT 1))
GROUP BY user_id) AS users_list
LEFT JOIN LATERAL (SELECT DATE_TRUNC('week', to_timestamp(start_ts / 1000)::timestamp) AS connexion_week,
user_id
FROM sessions
WHERE users_list.user_id = sessions.user_id
AND first_connexion_week <=
DATE_TRUNC('week', to_timestamp(sessions.start_ts / 1000)::timestamp)
AND sessions.project_id = %(project_id)s
AND sessions.start_ts < (%(endTimestamp)s - 1)
GROUP BY connexion_week, user_id) AS connexions_list ON (TRUE)
GROUP BY first_connexion_week, week
ORDER BY first_connexion_week, week;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args)}
        # print(cur.mogrify(pg_query, params))
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
rows = __compute_weekly_percentage(helper.list_to_camel_case(rows))
return {
"startTimestamp": startTimestamp,
"chart": __complete_acquisition(rows=rows, start_date=startTimestamp, end_date=TimeUTC.now())
}
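
# The feature_* functions below resolve event tables and columns through the
# JOURNEY_TYPES registry defined earlier in this module. For reference, a
# hedged sketch of its assumed shape; the table and column names here are
# illustrative guesses, not the module's actual values:
#
# JOURNEY_TYPES = {
#     "PAGES": {"table": "events.pages", "column": "path"},
#     "CLICK": {"table": "events.clicks", "column": "label"},
#     "INPUT": {"table": "events.inputs", "column": "label"},
# }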
def feature_retention(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(),
filters=[],
**args):
startTimestamp = TimeUTC.trunc_week(startTimestamp)
endTimestamp = startTimestamp + 10 * TimeUTC.MS_WEEK
pg_sub_query = __get_constraints(project_id=project_id, data=args, duration=True, main_table="sessions",
time_constraint=True)
pg_sub_query.append("user_id IS NOT NULL")
pg_sub_query.append("feature.timestamp >= %(startTimestamp)s")
pg_sub_query.append("feature.timestamp < %(endTimestamp)s")
event_type = "PAGES"
event_value = "/"
extra_values = {}
default = True
for f in filters:
if f["type"] == "EVENT_TYPE" and JOURNEY_TYPES.get(f["value"]):
event_type = f["value"]
elif f["type"] == "EVENT_VALUE":
event_value = f["value"]
default = False
elif f["type"] in [schemas.FilterType.user_id, schemas.FilterType.user_id_ios]:
pg_sub_query.append(f"sessions.user_id = %(user_id)s")
extra_values["user_id"] = f["value"]
event_table = JOURNEY_TYPES[event_type]["table"]
event_column = JOURNEY_TYPES[event_type]["column"]
pg_sub_query.append(f"feature.{event_column} = %(value)s")
with pg_client.PostgresClient() as cur:
if default:
# get most used value
pg_query = f"""SELECT {event_column} AS value, COUNT(*) AS count
FROM {event_table} AS feature INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query[:-1])}
AND length({event_column}) > 2
GROUP BY value
ORDER BY count DESC
LIMIT 1;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
cur.execute(cur.mogrify(pg_query, params))
row = cur.fetchone()
if row is not None:
event_value = row["value"]
extra_values["value"] = event_value
if len(event_value) > 2:
pg_sub_query.append(f"length({event_column})>2")
pg_query = f"""SELECT FLOOR(DATE_PART('day', connexion_week - to_timestamp(%(startTimestamp)s/1000)) / 7)::integer AS week,
COUNT(DISTINCT connexions_list.user_id) AS users_count,
ARRAY_AGG(DISTINCT connexions_list.user_id) AS connected_users
FROM (SELECT DISTINCT user_id
FROM sessions INNER JOIN {event_table} AS feature USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
AND DATE_PART('week', to_timestamp((sessions.start_ts - %(startTimestamp)s)/1000)) = 1
AND NOT EXISTS((SELECT 1
FROM sessions AS bsess INNER JOIN {event_table} AS bfeature USING (session_id)
WHERE bsess.start_ts<%(startTimestamp)s
AND project_id = %(project_id)s
AND bsess.user_id = sessions.user_id
AND bfeature.timestamp<%(startTimestamp)s
AND bfeature.{event_column}=%(value)s
LIMIT 1))
GROUP BY user_id) AS users_list
LEFT JOIN LATERAL (SELECT DATE_TRUNC('week', to_timestamp(start_ts / 1000)::timestamp) AS connexion_week,
user_id
FROM sessions INNER JOIN {event_table} AS feature USING (session_id)
WHERE users_list.user_id = sessions.user_id
AND %(startTimestamp)s <= sessions.start_ts
AND sessions.project_id = %(project_id)s
AND sessions.start_ts < (%(endTimestamp)s - 1)
AND feature.timestamp >= %(startTimestamp)s
AND feature.timestamp < %(endTimestamp)s
AND feature.{event_column} = %(value)s
GROUP BY connexion_week, user_id) AS connexions_list ON (TRUE)
GROUP BY week
ORDER BY week;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
        # print(cur.mogrify(pg_query, params))
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
rows = __compute_weekly_percentage(helper.list_to_camel_case(rows))
return {
"startTimestamp": startTimestamp,
"filters": [{"type": "EVENT_TYPE", "value": event_type}, {"type": "EVENT_VALUE", "value": event_value}],
"chart": __complete_retention(rows=rows, start_date=startTimestamp, end_date=TimeUTC.now())
}
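
# Example call (assumes the app's pg_client is configured; arguments are
# illustrative). With no EVENT_VALUE filter, the function falls back to the
# most-used value for the chosen event type and echoes the selection back:
#
# result = feature_retention(project_id=1,
#                            filters=[{"type": "EVENT_TYPE", "value": "CLICK"}])
# result["filters"]  # EVENT_TYPE plus the auto-selected EVENT_VALUE
# result["chart"]    # weekly retention buckets from __complete_retention
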
def feature_acquisition(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(),
filters=[],
**args):
startTimestamp = TimeUTC.trunc_week(startTimestamp)
endTimestamp = startTimestamp + 10 * TimeUTC.MS_WEEK
pg_sub_query = __get_constraints(project_id=project_id, data=args, duration=True, main_table="sessions",
time_constraint=True)
pg_sub_query.append("user_id IS NOT NULL")
pg_sub_query.append("feature.timestamp >= %(startTimestamp)s")
pg_sub_query.append("feature.timestamp < %(endTimestamp)s")
event_type = "PAGES"
event_value = "/"
extra_values = {}
default = True
for f in filters:
if f["type"] == "EVENT_TYPE" and JOURNEY_TYPES.get(f["value"]):
event_type = f["value"]
elif f["type"] == "EVENT_VALUE":
event_value = f["value"]
default = False
elif f["type"] in [schemas.FilterType.user_id, schemas.FilterType.user_id_ios]:
pg_sub_query.append(f"sessions.user_id = %(user_id)s")
extra_values["user_id"] = f["value"]
event_table = JOURNEY_TYPES[event_type]["table"]
event_column = JOURNEY_TYPES[event_type]["column"]
pg_sub_query.append(f"feature.{event_column} = %(value)s")
with pg_client.PostgresClient() as cur:
if default:
# get most used value
pg_query = f"""SELECT {event_column} AS value, COUNT(*) AS count
FROM {event_table} AS feature INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query[:-1])}
AND length({event_column}) > 2
GROUP BY value
ORDER BY count DESC
LIMIT 1;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
cur.execute(cur.mogrify(pg_query, params))
row = cur.fetchone()
if row is not None:
event_value = row["value"]
extra_values["value"] = event_value
if len(event_value) > 2:
pg_sub_query.append(f"length({event_column})>2")
pg_query = f"""SELECT EXTRACT(EPOCH FROM first_connexion_week::date)::bigint*1000 AS first_connexion_week,
FLOOR(DATE_PART('day', connexion_week - first_connexion_week) / 7)::integer AS week,
COUNT(DISTINCT connexions_list.user_id) AS users_count,
ARRAY_AGG(DISTINCT connexions_list.user_id) AS connected_users
FROM (SELECT user_id, DATE_TRUNC('week', to_timestamp(first_connexion_week / 1000)) AS first_connexion_week
                                 FROM (SELECT user_id, MIN(start_ts) AS first_connexion_week
FROM sessions INNER JOIN {event_table} AS feature USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
AND NOT EXISTS((SELECT 1
FROM sessions AS bsess INNER JOIN {event_table} AS bfeature USING (session_id)
WHERE bsess.start_ts<%(startTimestamp)s
AND project_id = %(project_id)s
AND bsess.user_id = sessions.user_id
AND bfeature.timestamp<%(startTimestamp)s
AND bfeature.{event_column}=%(value)s
LIMIT 1))
GROUP BY user_id) AS raw_users_list) AS users_list
LEFT JOIN LATERAL (SELECT DATE_TRUNC('week', to_timestamp(start_ts / 1000)::timestamp) AS connexion_week,
user_id
FROM sessions INNER JOIN {event_table} AS feature USING(session_id)
WHERE users_list.user_id = sessions.user_id
AND first_connexion_week <=
DATE_TRUNC('week', to_timestamp(sessions.start_ts / 1000)::timestamp)
AND sessions.project_id = %(project_id)s
AND sessions.start_ts < (%(endTimestamp)s - 1)
AND feature.timestamp >= %(startTimestamp)s
AND feature.timestamp < %(endTimestamp)s
AND feature.{event_column} = %(value)s
GROUP BY connexion_week, user_id) AS connexions_list ON (TRUE)
GROUP BY first_connexion_week, week
ORDER BY first_connexion_week, week;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
        # print(cur.mogrify(pg_query, params))
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
rows = __compute_weekly_percentage(helper.list_to_camel_case(rows))
return {
"startTimestamp": startTimestamp,
"filters": [{"type": "EVENT_TYPE", "value": event_type}, {"type": "EVENT_VALUE", "value": event_value}],
"chart": __complete_acquisition(rows=rows, start_date=startTimestamp, end_date=TimeUTC.now())
}

def feature_popularity_frequency(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(),
filters=[],
**args):
startTimestamp = TimeUTC.trunc_week(startTimestamp)
endTimestamp = startTimestamp + 10 * TimeUTC.MS_WEEK
pg_sub_query = __get_constraints(project_id=project_id, data=args, duration=True, main_table="sessions",
time_constraint=True)
event_table = JOURNEY_TYPES["CLICK"]["table"]
event_column = JOURNEY_TYPES["CLICK"]["column"]
extra_values = {}
for f in filters:
if f["type"] == "EVENT_TYPE" and JOURNEY_TYPES.get(f["value"]):
event_table = JOURNEY_TYPES[f["value"]]["table"]
event_column = JOURNEY_TYPES[f["value"]]["column"]
elif f["type"] in [schemas.FilterType.user_id, schemas.FilterType.user_id_ios]:
pg_sub_query.append(f"sessions.user_id = %(user_id)s")
extra_values["user_id"] = f["value"]
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT COUNT(DISTINCT user_id) AS count
FROM sessions
WHERE {" AND ".join(pg_sub_query)}
AND user_id IS NOT NULL;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
# print(cur.mogrify(pg_query, params))
# print("---------------------")
cur.execute(cur.mogrify(pg_query, params))
all_user_count = cur.fetchone()["count"]
if all_user_count == 0:
return []
pg_sub_query.append("feature.timestamp >= %(startTimestamp)s")
pg_sub_query.append("feature.timestamp < %(endTimestamp)s")
pg_sub_query.append(f"length({event_column})>2")
pg_query = f"""SELECT {event_column} AS value, COUNT(DISTINCT user_id) AS count
FROM {event_table} AS feature INNER JOIN sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
AND user_id IS NOT NULL
GROUP BY value
ORDER BY count DESC
LIMIT 7;"""
# TODO: solve full scan
        # print(cur.mogrify(pg_query, params))
        # print("---------------------")
cur.execute(cur.mogrify(pg_query, params))
popularity = cur.fetchall()
pg_query = f"""SELECT {event_column} AS value, COUNT(session_id) AS count
FROM {event_table} AS feature INNER JOIN sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
GROUP BY value;"""
# TODO: solve full scan
        # print(cur.mogrify(pg_query, params))
        # print("---------------------")
cur.execute(cur.mogrify(pg_query, params))
frequencies = cur.fetchall()
total_usage = sum([f["count"] for f in frequencies])
frequencies = {f["value"]: f["count"] for f in frequencies}
for p in popularity:
p["popularity"] = p.pop("count") / all_user_count
p["frequency"] = frequencies[p["value"]] / total_usage
return popularity
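
# Worked example of the popularity/frequency math above, on made-up counts;
# every number below is illustrative only.
def _sketch_popularity_frequency():
    all_user_count = 200                                 # distinct users in period
    popularity = [{"value": "/checkout", "count": 120}]  # distinct users per value
    frequencies = {"/checkout": 500, "/search": 1500}    # feature-event counts per value
    total_usage = sum(frequencies.values())              # 2000
    for p in popularity:
        p["popularity"] = p.pop("count") / all_user_count       # 120/200 = 0.6
        p["frequency"] = frequencies[p["value"]] / total_usage  # 500/2000 = 0.25
    return popularity  # [{"value": "/checkout", "popularity": 0.6, "frequency": 0.25}]
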
def feature_adoption(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(),
filters=[],
**args):
pg_sub_query = __get_constraints(project_id=project_id, data=args, duration=True, main_table="sessions",
time_constraint=True)
event_type = "CLICK"
event_value = '/'
extra_values = {}
default = True
for f in filters:
if f["type"] == "EVENT_TYPE" and JOURNEY_TYPES.get(f["value"]):
event_type = f["value"]
elif f["type"] == "EVENT_VALUE":
event_value = f["value"]
default = False
elif f["type"] in [schemas.FilterType.user_id, schemas.FilterType.user_id_ios]:
pg_sub_query.append(f"sessions.user_id = %(user_id)s")
extra_values["user_id"] = f["value"]
event_table = JOURNEY_TYPES[event_type]["table"]
event_column = JOURNEY_TYPES[event_type]["column"]
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT COUNT(DISTINCT user_id) AS count
FROM sessions
WHERE {" AND ".join(pg_sub_query)}
AND user_id IS NOT NULL;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
# print(cur.mogrify(pg_query, params))
# print("---------------------")
cur.execute(cur.mogrify(pg_query, params))
all_user_count = cur.fetchone()["count"]
if all_user_count == 0:
return {"adoption": 0, "target": 0, "filters": [{"type": "EVENT_TYPE", "value": event_type},
{"type": "EVENT_VALUE", "value": event_value}], }
pg_sub_query.append("feature.timestamp >= %(startTimestamp)s")
pg_sub_query.append("feature.timestamp < %(endTimestamp)s")
if default:
# get most used value
pg_query = f"""SELECT {event_column} AS value, COUNT(*) AS count
FROM {event_table} AS feature INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query[:-1])}
AND length({event_column}) > 2
GROUP BY value
ORDER BY count DESC
LIMIT 1;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
cur.execute(cur.mogrify(pg_query, params))
row = cur.fetchone()
if row is not None:
event_value = row["value"]
extra_values["value"] = event_value
if len(event_value) > 2:
pg_sub_query.append(f"length({event_column})>2")
pg_sub_query.append(f"feature.{event_column} = %(value)s")
pg_query = f"""SELECT COUNT(DISTINCT user_id) AS count
FROM {event_table} AS feature INNER JOIN sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
AND user_id IS NOT NULL;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
# print(cur.mogrify(pg_query, params))
# print("---------------------")
cur.execute(cur.mogrify(pg_query, params))
adoption = cur.fetchone()["count"] / all_user_count
return {"target": all_user_count, "adoption": adoption,
"filters": [{"type": "EVENT_TYPE", "value": event_type}, {"type": "EVENT_VALUE", "value": event_value}]}
def feature_adoption_top_users(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(),
filters=[], **args):
pg_sub_query = __get_constraints(project_id=project_id, data=args, duration=True, main_table="sessions",
time_constraint=True)
pg_sub_query.append("user_id IS NOT NULL")
event_type = "CLICK"
event_value = '/'
extra_values = {}
default = True
for f in filters:
if f["type"] == "EVENT_TYPE" and JOURNEY_TYPES.get(f["value"]):
event_type = f["value"]
elif f["type"] == "EVENT_VALUE":
event_value = f["value"]
default = False
elif f["type"] in [schemas.FilterType.user_id, schemas.FilterType.user_id_ios]:
pg_sub_query.append(f"sessions.user_id = %(user_id)s")
extra_values["user_id"] = f["value"]
event_table = JOURNEY_TYPES[event_type]["table"]
event_column = JOURNEY_TYPES[event_type]["column"]
with pg_client.PostgresClient() as cur:
pg_sub_query.append("feature.timestamp >= %(startTimestamp)s")
pg_sub_query.append("feature.timestamp < %(endTimestamp)s")
if default:
# get most used value
pg_query = f"""SELECT {event_column} AS value, COUNT(*) AS count
FROM {event_table} AS feature INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query[:-1])}
AND length({event_column}) > 2
GROUP BY value
ORDER BY count DESC
LIMIT 1;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
cur.execute(cur.mogrify(pg_query, params))
row = cur.fetchone()
if row is not None:
event_value = row["value"]
extra_values["value"] = event_value
if len(event_value) > 2:
pg_sub_query.append(f"length({event_column})>2")
pg_sub_query.append(f"feature.{event_column} = %(value)s")
pg_query = f"""SELECT user_id, COUNT(DISTINCT session_id) AS count
FROM {event_table} AS feature
INNER JOIN sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
GROUP BY 1
ORDER BY 2 DESC
LIMIT 10;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
# print(cur.mogrify(pg_query, params))
# print("---------------------")
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
return {"users": helper.list_to_camel_case(rows),
"filters": [{"type": "EVENT_TYPE", "value": event_type}, {"type": "EVENT_VALUE", "value": event_value}]}

def feature_adoption_daily_usage(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(),
filters=[], **args):
pg_sub_query = __get_constraints(project_id=project_id, data=args, duration=True, main_table="sessions",
time_constraint=True)
pg_sub_query_chart = __get_constraints(project_id=project_id, time_constraint=True,
chart=True, data=args)
event_type = "CLICK"
event_value = '/'
extra_values = {}
default = True
for f in filters:
if f["type"] == "EVENT_TYPE" and JOURNEY_TYPES.get(f["value"]):
event_type = f["value"]
elif f["type"] == "EVENT_VALUE":
event_value = f["value"]
default = False
elif f["type"] in [schemas.FilterType.user_id, schemas.FilterType.user_id_ios]:
pg_sub_query_chart.append(f"sessions.user_id = %(user_id)s")
extra_values["user_id"] = f["value"]
event_table = JOURNEY_TYPES[event_type]["table"]
event_column = JOURNEY_TYPES[event_type]["column"]
with pg_client.PostgresClient() as cur:
pg_sub_query_chart.append("feature.timestamp >= %(startTimestamp)s")
pg_sub_query_chart.append("feature.timestamp < %(endTimestamp)s")
pg_sub_query.append("feature.timestamp >= %(startTimestamp)s")
pg_sub_query.append("feature.timestamp < %(endTimestamp)s")
if default:
# get most used value
pg_query = f"""SELECT {event_column} AS value, COUNT(*) AS count
FROM {event_table} AS feature INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
AND length({event_column})>2
GROUP BY value
ORDER BY count DESC
LIMIT 1;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
cur.execute(cur.mogrify(pg_query, params))
row = cur.fetchone()
if row is not None:
event_value = row["value"]
extra_values["value"] = event_value
if len(event_value) > 2:
pg_sub_query.append(f"length({event_column})>2")
pg_sub_query_chart.append(f"feature.{event_column} = %(value)s")
pg_query = f"""SELECT generated_timestamp AS timestamp,
COALESCE(COUNT(session_id), 0) AS count
FROM generate_series(%(startTimestamp)s, %(endTimestamp)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL ( SELECT DISTINCT session_id
FROM {event_table} AS feature INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query_chart)}
) AS users ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp;"""
params = {"step_size": TimeUTC.MS_DAY, "project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
        # print(cur.mogrify(pg_query, params))
        # print("---------------------")
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
return {"chart": helper.list_to_camel_case(rows),
"filters": [{"type": "EVENT_TYPE", "value": event_type}, {"type": "EVENT_VALUE", "value": event_value}]}
def feature_intensity(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(),
filters=[],
**args):
pg_sub_query = __get_constraints(project_id=project_id, data=args, duration=True, main_table="sessions",
time_constraint=True)
pg_sub_query.append("feature.timestamp >= %(startTimestamp)s")
pg_sub_query.append("feature.timestamp < %(endTimestamp)s")
event_table = JOURNEY_TYPES["CLICK"]["table"]
event_column = JOURNEY_TYPES["CLICK"]["column"]
extra_values = {}
for f in filters:
if f["type"] == "EVENT_TYPE" and JOURNEY_TYPES.get(f["value"]):
event_table = JOURNEY_TYPES[f["value"]]["table"]
event_column = JOURNEY_TYPES[f["value"]]["column"]
elif f["type"] in [schemas.FilterType.user_id, schemas.FilterType.user_id_ios]:
pg_sub_query.append(f"sessions.user_id = %(user_id)s")
extra_values["user_id"] = f["value"]
pg_sub_query.append(f"length({event_column})>2")
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT {event_column} AS value, AVG(DISTINCT session_id) AS avg
FROM {event_table} AS feature INNER JOIN sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
GROUP BY value
ORDER BY avg DESC
LIMIT 7;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
# TODO: solve full scan issue
        # print(cur.mogrify(pg_query, params))
        # print("---------------------")
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
return rows

def users_active(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(),
filters=[],
**args):
pg_sub_query_chart = __get_constraints(project_id=project_id, time_constraint=True,
chart=True, data=args)
pg_sub_query_chart.append("user_id IS NOT NULL")
period = "DAY"
extra_values = {}
for f in filters:
if f["type"] == "PERIOD" and f["value"] in ["DAY", "WEEK"]:
period = f["value"]
elif f["type"] in [schemas.FilterType.user_id, schemas.FilterType.user_id_ios]:
pg_sub_query_chart.append(f"sessions.user_id = %(user_id)s")
extra_values["user_id"] = f["value"]
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT AVG(count) AS avg, JSONB_AGG(chart) AS chart
FROM (SELECT generated_timestamp AS timestamp,
COALESCE(COUNT(users), 0) AS count
FROM generate_series(%(startTimestamp)s, %(endTimestamp)s, %(step_size)s) AS generated_timestamp
LEFT JOIN LATERAL ( SELECT DISTINCT user_id
FROM public.sessions
WHERE {" AND ".join(pg_sub_query_chart)}
) AS users ON (TRUE)
GROUP BY generated_timestamp
ORDER BY generated_timestamp) AS chart;"""
params = {"step_size": TimeUTC.MS_DAY if period == "DAY" else TimeUTC.MS_WEEK,
"project_id": project_id,
"startTimestamp": TimeUTC.trunc_day(startTimestamp) if period == "DAY" else TimeUTC.trunc_week(
startTimestamp),
"endTimestamp": endTimestamp, **__get_constraint_values(args),
**extra_values}
# print(cur.mogrify(pg_query, params))
# print("---------------------")
cur.execute(cur.mogrify(pg_query, params))
row_users = cur.fetchone()
return row_users

def users_power(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(),
filters=[], **args):
pg_sub_query = __get_constraints(project_id=project_id, time_constraint=True, chart=False, data=args)
pg_sub_query.append("user_id IS NOT NULL")
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT AVG(count) AS avg, JSONB_AGG(day_users_partition) AS partition
FROM (SELECT number_of_days, COUNT(user_id) AS count
FROM (SELECT user_id, COUNT(DISTINCT DATE_TRUNC('day', to_timestamp(start_ts / 1000))) AS number_of_days
FROM sessions
WHERE {" AND ".join(pg_sub_query)}
GROUP BY 1) AS users_connexions
GROUP BY number_of_days
ORDER BY number_of_days) AS day_users_partition;"""
params = {"project_id": project_id,
"startTimestamp": startTimestamp, "endTimestamp": endTimestamp, **__get_constraint_values(args)}
# print(cur.mogrify(pg_query, params))
# print("---------------------")
cur.execute(cur.mogrify(pg_query, params))
row_users = cur.fetchone()
return helper.dict_to_camel_case(row_users)
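
# Shape of the users_power payload, with illustrative numbers: "partition"
# maps days-active-in-period to how many users hit that count. Inner keys
# keep the SQL aliases, since they are aggregated into JSON inside the query.
#
# {"avg": 1.8,
#  "partition": [{"number_of_days": 1, "count": 120},
#                {"number_of_days": 2, "count": 30}]}
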
def users_slipping(project_id, startTimestamp=TimeUTC.now(delta_days=-70), endTimestamp=TimeUTC.now(),
filters=[], **args):
pg_sub_query = __get_constraints(project_id=project_id, data=args, duration=True, main_table="sessions",
time_constraint=True)
pg_sub_query.append("user_id IS NOT NULL")
pg_sub_query.append("feature.timestamp >= %(startTimestamp)s")
pg_sub_query.append("feature.timestamp < %(endTimestamp)s")
event_type = "PAGES"
event_value = "/"
extra_values = {}
default = True
for f in filters:
if f["type"] == "EVENT_TYPE" and JOURNEY_TYPES.get(f["value"]):
event_type = f["value"]
elif f["type"] == "EVENT_VALUE":
event_value = f["value"]
default = False
elif f["type"] in [schemas.FilterType.user_id, schemas.FilterType.user_id_ios]:
pg_sub_query.append(f"sessions.user_id = %(user_id)s")
extra_values["user_id"] = f["value"]
event_table = JOURNEY_TYPES[event_type]["table"]
event_column = JOURNEY_TYPES[event_type]["column"]
pg_sub_query.append(f"feature.{event_column} = %(value)s")
with pg_client.PostgresClient() as cur:
if default:
# get most used value
pg_query = f"""SELECT {event_column} AS value, COUNT(*) AS count
FROM {event_table} AS feature INNER JOIN public.sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query[:-1])}
AND length({event_column}) > 2
GROUP BY value
ORDER BY count DESC
LIMIT 1;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
cur.execute(cur.mogrify(pg_query, params))
row = cur.fetchone()
if row is not None:
event_value = row["value"]
extra_values["value"] = event_value
if len(event_value) > 2:
pg_sub_query.append(f"length({event_column})>2")
pg_query = f"""SELECT user_id, last_time, interactions_count, MIN(start_ts) AS first_seen, MAX(start_ts) AS last_seen
FROM (SELECT user_id, MAX(timestamp) AS last_time, COUNT(DISTINCT session_id) AS interactions_count
FROM {event_table} AS feature INNER JOIN sessions USING (session_id)
WHERE {" AND ".join(pg_sub_query)}
GROUP BY user_id) AS user_last_usage
INNER JOIN sessions USING (user_id)
WHERE EXTRACT(EPOCH FROM now()) * 1000 - last_time > 7 * 24 * 60 * 60 * 1000
GROUP BY user_id, last_time,interactions_count;"""
params = {"project_id": project_id, "startTimestamp": startTimestamp,
"endTimestamp": endTimestamp, **__get_constraint_values(args), **extra_values}
# print(cur.mogrify(pg_query, params))
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
return {
"startTimestamp": startTimestamp,
"filters": [{"type": "EVENT_TYPE", "value": event_type}, {"type": "EVENT_VALUE", "value": event_value}],
"list": helper.list_to_camel_case(rows)
}
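
# The "slipping away" cutoff above, spelled out: last feature interaction
# older than one week, compared in epoch milliseconds.
_SLIPPING_THRESHOLD_MS = 7 * 24 * 60 * 60 * 1000  # 604800000 ms = 1 week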

def search(text, feature_type, project_id, platform=None):
    if not feature_type:
        feature_type = "ALL"
pg_sub_query = __get_constraints(project_id=project_id, time_constraint=True, duration=True,
data={} if platform is None else {"platform": platform})
params = {"startTimestamp": TimeUTC.now() - 2 * TimeUTC.MS_MONTH,
"endTimestamp": TimeUTC.now(),
"project_id": project_id,
"value": helper.string_to_sql_like(text.lower()),
"platform_0": platform}
if feature_type == "ALL":
with pg_client.PostgresClient() as cur:
sub_queries = []
for e in JOURNEY_TYPES:
sub_queries.append(f"""(SELECT DISTINCT {JOURNEY_TYPES[e]["column"]} AS value, '{e}' AS "type"
FROM {JOURNEY_TYPES[e]["table"]} INNER JOIN public.sessions USING(session_id)
WHERE {" AND ".join(pg_sub_query)} AND {JOURNEY_TYPES[e]["column"]} ILIKE %(value)s
LIMIT 10)""")
pg_query = "UNION ALL".join(sub_queries)
# print(cur.mogrify(pg_query, params))
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
elif JOURNEY_TYPES.get(feature_type) is not None:
with pg_client.PostgresClient() as cur:
pg_query = f"""SELECT DISTINCT {JOURNEY_TYPES[feature_type]["column"]} AS value, '{feature_type}' AS "type"
FROM {JOURNEY_TYPES[feature_type]["table"]} INNER JOIN public.sessions USING(session_id)
WHERE {" AND ".join(pg_sub_query)} AND {JOURNEY_TYPES[feature_type]["column"]} ILIKE %(value)s
LIMIT 10;"""
# print(cur.mogrify(pg_query, params))
cur.execute(cur.mogrify(pg_query, params))
rows = cur.fetchall()
else:
return []
return [helper.dict_to_camel_case(row) for row in rows]
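
# Example calls (assume a configured pg_client; arguments are illustrative):
#
# search("check", feature_type="PAGES", project_id=1)  # one event type
# search("check", feature_type=None, project_id=1)     # falls back to "ALL"
# -> [{"value": "/checkout", "type": "PAGES"}, ...]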


@@ -0,0 +1,54 @@
from abc import ABC, abstractmethod

from chalicelib.utils import pg_client, helper


class BaseIntegration(ABC):
    def __init__(self, user_id, ISSUE_CLASS):
        self._user_id = user_id
        self.issue_handler = ISSUE_CLASS(self.integration_token)

    @property
    @abstractmethod
    def provider(self):
        pass

    @property
    def integration_token(self):
        integration = self.get()
        if integration is None:
            print("no token configured yet")
            return None
        return integration["token"]

    def get(self):
        with pg_client.PostgresClient() as cur:
            cur.execute(
                cur.mogrify(
                    """SELECT *
                       FROM public.oauth_authentication
                       WHERE user_id=%(user_id)s AND provider=%(provider)s;""",
                    {"user_id": self._user_id, "provider": self.provider.lower()})
            )
            return helper.dict_to_camel_case(cur.fetchone())

    @abstractmethod
    def get_obfuscated(self):
        pass

    @abstractmethod
    def update(self, changes, obfuscate=False):
        pass

    @abstractmethod
    def _add(self, data):
        pass

    @abstractmethod
    def delete(self):
        pass

    @abstractmethod
    def add_edit(self, data):
        pass
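
# A hedged sketch of a concrete provider built on BaseIntegration. The
# provider name and issue-handler class are illustrative assumptions, not one
# of the repository's real integrations; instantiation needs a configured
# pg_client, because __init__ resolves integration_token through get().
class _SketchIssueHandler:
    def __init__(self, token):
        self.token = token


class _SketchIntegration(BaseIntegration):
    def __init__(self, user_id):
        super().__init__(user_id, ISSUE_CLASS=_SketchIssueHandler)

    @property
    def provider(self):
        return "SKETCH"

    def get_obfuscated(self):
        integration = self.get()
        if integration is None:
            return None
        integration["token"] = "*" * 8  # never return the raw token
        return integration

    def update(self, changes, obfuscate=False):
        pass  # left out of the sketch

    def _add(self, data):
        pass  # left out of the sketch

    def delete(self):
        pass  # left out of the sketch

    def add_edit(self, data):
        # upsert: update when a row already exists, insert otherwise
        return self.update(data) if self.get() else self._add(data)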

Some files were not shown because too many files have changed in this diff.