* feat(backend): implemented unzipping for http requests with gzip content-type
* fix(tracker): rm unused import
* change(tracker): configure automatic headers, compress anything bigger than 24k, add third party lib to list
* feat(backend): using custom library for unzipping request body
* feat(backend): added extra logs
* feat(backend): more debug logs
* feat(backend): added compression threshold to start request
* change(tracker): support compressionThreshold in tracker
* feat(backend): debug log for body content
* feat(backend): removed debug logs in http methods
* change(tracker): fix priority sending, remove dead code
* feat(backend): removed debug logs in http methods
* Enable session encryption (#1121)
* feat(backend): enable session encryption
* feat(backend): fixed updated method name in failover algo
* feat(backend): disable encryption by default
* change(tracker): fix iframe network handling
* change(ui): add toast for recording error
* Encryption metrics (#1151)
* feat(backend): added metric to measure the duration of session encryption
* feat(backend): enabled encryption
* feat(backend): fixed typo issue in packSession method
* change(ui): change error toast for rec
* change(ui): add tooltip for added live sessions
* chore(helm): disabling redis string if not enabled (#1153) Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* change(player): fix typos; priority for 1st dom file
* fix(player): priority and await for message processing
* change(ui) - player improvements (#1164)
* change(ui) - player - back button spacing
* change(ui) - onboarding - changes
* change(ui) - onboarding - changes
* change(ui) - integrations gap-4
* change(ui) - install script copy button styles
* change(ui) - copy button in account settings
* fix(ui) - error details modal loader position
* change(ui) - share popup styles
* change(ui) - player improvements
* change(ui) - player improvements - playback speed with menu
* change(ui) - player improvements - current timezone
* change(ui) - player improvements - autoplay options
* fix(ui) - user sessions modal - navigation
* feat(player): lazy JS DOM node creation (needs fixes for reaching full potential)
* fix(player): drastically reduce the amount of node getter calls during virtual node insertion
* feat(player/VirtualDOM): OnloadVRoot & OnloadStyleSheet for lazy iframe innerContent initialisation & elimination of the forceInsertion requirement in this case; few renamings
* style(player): few renamings; comments improved
* feat(player/DOMManager): VirtualNodes insertion prioritization (for styles)
* fix(player): cursor svg with light border for better visibility on dark backgrounds
* change(ui) - session bookmarks remove from the list and copy options
* chore(helm): Updating frontend image release (#1166)
* chore(helm): Updating frontend image release
* fix(helm): PG custom port Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com> --------- Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* fix(player): consider stringDict before any CreateDocument (fastfix)
* style(player/DOMManager/safeCSSRules): depend on interfaces
* fixup! fix(player): consider stringDict before any CreateDocument (fastfix)
* fix(player): proper unmount
* fix(helm): Variable override, priority to the user-created one. (#1173)
* fix(ui) - search url to wait for metadata to load
* fix(tracker): optimise node counting
* fix(tracker): changelog
* fix(ui) - sessions reload (#1177)
* fix(tracker): fix iframe network requests tracking
* fix(ui) - check for error status and force logout (#1179)
* fix(ui) - token expire
* fix(ui) - token expire
* change(player): manual decompression for encrypted files
* change(player): detect gzip file after decoding
* change(ui) - show projects in menu for all
* [Storage] different order to compress and encrypt (#1182)
* feat(backend): try to compress and encrypt in a new way
* chore(helm): Update cors headers for http Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* fix(ui): fix assist tooltip
* change(ui): add sleep icon for inactive assist users
* fix(ui): fix player automatic jump and start issues
* Update .env.sample
* Update cli for fetch latest patches and kubeconfig file hierarchy (#1183)
* chore(helm): Kubeconfig file hierarchy Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* chore(cli): openreplay -u fetches update from current version, unless flag set Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com> --------- Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* chore(cli): Updating comment (#1184) Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* chore(cli): Adding option to keep backup directories (#1185) Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* chore(cli): removing log message (#1186) Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* chore(cli): Updating comment (#1188) Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* chore(helm): Annotation inject order Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* fix(player): fix vroot context getter
* feat(ui): display real session time
* change(ui) - clearsearch styling on disable
* change(ui) - session url changes
* refactor(player/DOMManager): notMountedChildren rename
* change(ui) - check if saved search present
* change(ui) - player control icons and other changes
* change(ui) - password validations
* change(ui) - password validations
* chore(helm): Override image pull policy (#1199)
* change(ui) - player user steps improvements (#1201)
* change(ui) - user steps
* change(ui) - user steps
* change(ui) - user steps
* change(ui) - user steps - icon and other styles
* fix(ui) - xray vertical line sync on resize
* change(ui) - projects remove the status check
* fix(cli): Proper git tag propagation (#1202) and logging of clone Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* Adding maintenance page
* Improved session compression (#1200)
* feat(backend): implemented new compression
* chore(crons): Updating dockerfile Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* change(ui) - insights improvements
* fix(ui) - url search params remove [] for keys
* fix(player): fix dict reset
* Remove message index from mob file (#1213)
* feat(backend): removed message index from mob file messages
* feat(backend): remove duplicated messages (by message index)
* feat(backend): added MAX_INDEX at the beginning of session to indicate a new version of mob file
* feat(backend): added comments to code
* change(ui): remove indexes from msgs
* change(player): remove 8 byte skip for index
* change(player): remove indexes
* change(player): bugfix
* change(tracker): update tests
* change(tracker): remove batch writer changes
* change(player): fix comments
* feat(backend): updated go.mod file
* change(player): change time str
* feat(player): added mice trail
* change(player): change trail color
* change(player): change styles for buttons
* chore(build): Don't commit chart change for ee patch (#1216) Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* change(ui) updated recaptcha lib, which was causing an issue with state reading
* change(ui) - no content icon updates from metadata and webhooks
* change(player): make cursor icon bigger
* fix(player): fix virtualization
* fix(player): fix virtualization
* fix(ui) - onboarding project edit
* change(ui) - no content graphic for projects, and svg component changes
* change(ui) - events filter placeholder
* change(ui) - ui feedback on user steps
* change(ui): add more details to health status
* [Storage] timestamp sorting and filtering (#1218)
* feat(backend): combined sorting by index and timestamp
* feat(backend): write only the last timestamp message in a row
* change(ui) - textarea styles
* change(ui) - button text color
* change(ui): add more details to health status
* fix(ui): fix screen rec error handling
* fix(ui): fix screen rec stopping
* fix(tracker): fix q sender token mismatch during assist connection
* change(ui) - assist recordings pagination api
* change(ui) - assist recordings pagination api
* fix(ui) - not popup conflict with timeline tooltip
* Updating version
* change(tracker): 7.0.0; set max amount of restarts for compression error
* fix(ui) - active menu link
* fix redis endpoint and chalice health endpoints (#1138)
* chore(helm): Adding redis string from global config Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* fix(chalice): health check url for alerts and assist Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com> --------- Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* fix(ee): chalice health check (#1142) Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* chore(cli): Adding verbose logging (#1144) Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* chore(helm): Adding option for records bucket (#1146) Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* chore(build): Bump image version of frontend assets while building (#1149) Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* feat(chalice): fixed jobs execution
* feat(chalice): configurable mobs expiration
* feat(chalice): changes
* feat(chalice): refactored Jobs; feat(chalice): added limits on Jobs
* chore(build): test patch branch
* chore(build): testing EE cron-Jobs
* Add files via upload (#1156)
* Add files via upload (#1157)
* chore(helm): Enabling redis string for helm template variable (#1159) fix #1158 Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* Changing default encryption to false (#1161)
* Updated hero
* feat(chalice): return all records if date is not specified
* feat(chalice): refactored records list
* Moving cli to scripts folder (#1196)
* Revert "Moving cli to scripts folder (#1196)" (#1197) This reverts commit c947e48d99.
* feat(chalice): support old FilterType
* fix(ui) - alert form crash
* fix(ui) - alert form crash
* fix(ui) - assist menu status
* Redshift connector (#1170)
* Updated dependencies for redshift connector, changed os module for python-decouple module
* Updated service and images
* Updated message protocol, added exception for BatchMetadata when version is 0 (we apply old read method)
* Fixed load error from s3 to redshift; null values for string columns are now empty strings ("")
* Added file test consumer_async.py: reads kafka raw every 3 minutes and sends a task in the background to upload to cloud
* Added method to skip messages that are not inserted to cloud
* Added logs into consumer_async. Changed urls and issues in sessions table from list to string
* Split between messages for sessions table and for events table
* Updated redshift tables
* Fixed small issue in query redshift_sessions.sql
* Updated Dockerfiles. Cleaned logs of consumer_async. Updated/fixed tables. Transformed NaN as NULL for VARCHAR columns
* Added error handler for sql dropped connection
* chore(docker): Optimize docker builds Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* Variables renamed
* Adding compression libraries
* Set default value of count events to 0 (instead of NULL) when event did not occur
* Added support for specific project tracking. Added PG handler to connect to sessions table
* Added method to update values in db connection for sessions ended and restarted
* Removing intelligent file copying
* chore(connector): Build file Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* Adding connection pool for pg
* Renaming and optimizing
* Fixed issue of missing information of sessions --------- Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com> Co-authored-by: rjshrjndrn <rjshrjndrn@gmail.com>
* fix(build): Parallel build Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* feat(chalice): changed release version; feat(assist): changed release version; feat(peers): changed release version; feat(sourcemaps-reader): changed release version; feat(chalice): enhanced health-check; feat(DB): sessions_count table to keep status
* feat(chalice): changed release version
* feat(chalice): refactored projects code
* feat(chalice): refactored projects code; feat(chalice): sessions-check-flag every hour; feat(chalice): sessions-check-delta set to 4 hours
* feat(chalice): use experimental session search for metrics
* feat(chalice): projects stats for health-check; feat(DB): projects stats for health-check; feat(crons): projects stats for health-check
* feat(chalice): changed projects stats for health-check; feat(crons): changed projects stats for health-check; chore(helm): projectStats cron every 18 min; chore(helm): projectStats-fix cron every Sunday at 5am
* feat(crons): reorganized crons
* feat(chalice): fixed typo
* feat(chalice): changed health-check response
* feat(crons): changed health-check response
* (feat): Chalice - Allow SAML users to log in with non-password methods as well as the usual password method, for example Windows Integrated Authentication
* Move security field to correct area under SAML2 settings
* feat(chalice): format code
* feat(chalice): changed recordings response
* feat(crons): fixed health check cron; feat(crons): refactored main
* feat(chalice): changed recordings response; feat(chalice): updated dependencies; feat(crons): updated dependencies; feat(alerts): updated dependencies
* feat(chalice): fixed recordings response recursion error
* feat(assist): updated dependencies; feat(sourcemaps-reader): upgraded dependencies
* change(ui) - user event text change
* fix(ui): fix events merging
* fix(connector): handle db connection drop (#1223)
* Added compatibility with SaaS, added reboot of connection if connection dropped
* Small fix
* fix(backend): disabled debug log in http handler
* fix(player): fix autopause on tabs
* Updated python template to read messages with BatchMeta with old version (#1225)
* change(ui) - user events text change
* change(ui) - webhooks no content icon size
* chore(backend): upgraded go to 1.19 and ClickHouse to 2.9.1
* fix(player): fix frustrations ingestion
* fix(tracker): fix email detection performance
* fix(tracker): fix email masking length
* fix(player): fix fullview prop passing to children (live pl)
* feat(chalice): reduce issues for replay (#1227)
* change(ui) - bugreport modal title color
* fix(ui) - elastic config validation rules
* change(ui) - issue form and share popup titles
* change(ui) - placeholder text change
* change(ui) - filter user events text change
* feat(chalice): include enforceSSO in signup status (#1228)
* Updating kyverno
* chore(cli): Override GH repo Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* chore(helm): Update kafka chart; enable metrics and increased storage
* change(ui) - enforce sso
* Api v1.12.0 (#1230)
* feat(chalice): include enforceSSO in signup status
* feat(chalice): changed 1-time health-check
* fix(helm): typo Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* change(ui) - support icon border
* chore(helm): enable kafka jmx metrics Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* change(ui) - fetch details modal - no content text size
* change(ui) - playback timemode alignment
* fix(connector): fixed bug of cache dict size error (#1226)
* change(ui) - text change on create issue and share popups
* change(ui) - share popup styles
* change(ui) - user events visit event padding
* feat(crons): include fastapi (#1231)
* New env variable CLOUD (aws by default) (#1232)
* feat(backend): added new env variable CLOUD (aws by default)
* chore(backend): Adding env variable for CLOUD Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com> --------- Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com> Co-authored-by: rjshrjndrn <rjshrjndrn@gmail.com>
* Compression worker (#1233)
* feat(backend): added extra worker for session compression
* feat(backend): debug logs
* feat(backend): added compression ratio metric
* feat(backend): reduced number of duplicate logs
* feat(backend): rewrite workers management
* chore(minio): changed lifecycle rules to support delete-jobs (#1235)
* fix(backend): correct compression ratio value
* fix(backend): reduced ender tick duration
* feat(backend): insert referrer to sessions table (#1237)
* chore(cli): Adding separate query for ee cleanup Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* fix(connector): Added checkpoints and sigterm handler (#1234)
* fix(connector): fixed bug of cache dict size error
* fix(connector): Added method to save state in s3 for redshift if sigterm arises
* fix(connector): Added exit signal handler and checkpoint method
* Added sslmode selection for connection to database, added use_ssl parameter for S3 connection
* fix(cli): Override cli options (#1239) Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* fix(player): fix first 8 byte checker
* fix(player): remove logs
* Update .env.sample
* fix(ui) - search init - wait for filters (#1241)
* fix(player): fix first 8 byte checker
* chore(cron): Adding missing deps Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* fix(player): fix commit conflict
* fix(backend): added Content-Encoding to CORS for http service
* fix(backend): added COMPRESSION_THRESHOLD env variable to Dockerfile
* fix(player): ensure that player is cleaned on unmount
* chore(helm): Updating frontend image release (#1243)
* Update README.md
* feat(chalice): fixed trace payload parsing
* feat(player): player file loader refactoring (#1203)
* change(ui): refactor mob loading
* refactor(player): split message loader into separate file, remove toast dependency out of player lib, fix types, fix inspector and screen context
* refactor(player): simplify file loading, add safe error throws
* refactor(player): move loading status changers to the end of the flow
* change(ui) - assist call to use iceTransportPolicy all
* change(ui) - removed errors route
* chore(helm): enabling pg_stat for metrics Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* fix(tracker): fix time inputs capturing
* change(ui) - antd dependency
* fix(player): clear selection manager on clicks; display frustrations row on xray by default
* fix(player): add option to disable network in iframes
* refactor(cli): In old clusters kyverno upgrade won't work. So we'll have to upgrade OR only. Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* feat(tracker): new axios capturing; tracker 7.0.1
* feat(chalice) - feature flags (#1252)
* feat(api) - feature flags - schema
* feat(api) - feature flags - wip
* feat(api) - feature flags
* feat(api) - feature flags - set back root path
* feat(api) - feature flags
* feat(api) - feature flags
* feat(api) - feature flags - review
* feat(DB): feature flags DB structure
* feat(chalice): feature flags permissions support; feat(chalice): feature flags changed code
* feat(chalice): feature flags add permissions to DB --------- Co-authored-by: Taha Yassine Kraiem <tahayk2@gmail.com>
* [sourcemaps-reader] Azure blob storage support (#1259)
* feat(sourcemaps-reader): implemented azure blob storage support for sourcemaps reader
* feat(sourcemaps-reader): azure blob storage support - cleaned code --------- Co-authored-by: Taha Yassine Kraiem <tahayk2@gmail.com>
* fix(player): fix selection manager styles and reset
* fix(cli): KUBECONFIG PATH override (#1266) Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* chore(cli): Adding info on which kubeconfig is getting used (#1261) Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* feat(ui) - enforce pwd during signup (#1271)
* fix(helm): SITE_URL injection
* fix(player): hide empty index prop
* change(repo): ignore precommit config
* change(repo): precommit config
* feat(chalice): faster projects response
* fix(chalice): ignore SSO for testing
* feat(chalice): added PyLint for dev purposes
* feat(DB): support tab_id for all events
* feat(chalice): removed PyLint
* fix(chalice): include metadata in sessions exp search (#1291) (cherry picked from commit 07dd9da820)
* refactor(chalice): upgraded dependencies; refactor(alerts): upgraded dependencies; refactor(crons): upgraded dependencies
* feat(DB): added tab_id in creation queries; feat(DB): added user_city; feat(DB): added user_state
* feat(DB): added user_city; feat(DB): added user_state
* feat(DB): create index for user_city; feat(DB): create index for user_state
* feat(chalice): search sessions by user_city; feat(chalice): search sessions by user_state
* fix(chalice): install SSO dependencies

---------
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
Co-authored-by: Alexander Zavorotynskiy <zavorotynskiy@pm.me>
Co-authored-by: nick-delirium <nikita@openreplay.com>
Co-authored-by: Rajesh Rajendran <rjshrjndrn@users.noreply.github.com>
Co-authored-by: Shekar Siri <sshekarsiri@gmail.com>
Co-authored-by: Alex Kaminskii <alex@openreplay.com>
Co-authored-by: rjshrjndrn <rjshrjndrn@gmail.com>
Co-authored-by: Mehdi Osman <estradino@users.noreply.github.com>
Co-authored-by: MauricioGarciaS <47052044+MauricioGarciaS@users.noreply.github.com>
Co-authored-by: Dayan Graham <d.graham50@hotmail.co.uk>
import json
|
|
from typing import Union
|
|
|
|
from decouple import config
|
|
from fastapi import HTTPException, status
|
|
|
|
import schemas
|
|
from chalicelib.core import sessions, funnels, errors, issues, metrics, click_maps, sessions_mobs, product_analytics
|
|
from chalicelib.utils import helper, pg_client, s3
|
|
from chalicelib.utils.TimeUTC import TimeUTC
|
|
|
|
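# Maximum number of slices rendered individually in a pie chart; any remaining
# values are collapsed into a single "Others" group (see __try_live below).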
PIE_CHART_GROUP = 5
|
|
|
|
|
|
def __try_live(project_id, data: schemas.CardSchema):
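    """Run sessions.search2_series for every series of the card over the card's
    time range. For the "progress" view type it also queries the preceding period
    of equal length and attaches previousCount/countProgress; for pie-chart tables
    it keeps the top PIE_CHART_GROUP values and sums the rest into an "Others" group."""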
|
|
results = []
|
|
for i, s in enumerate(data.series):
|
|
s.filter.startDate = data.startTimestamp
|
|
s.filter.endDate = data.endTimestamp
|
|
results.append(sessions.search2_series(data=s.filter, project_id=project_id, density=data.density,
|
|
view_type=data.view_type, metric_type=data.metric_type,
|
|
metric_of=data.metric_of, metric_value=data.metric_value))
|
|
if data.view_type == schemas.MetricTimeseriesViewType.progress:
|
|
r = {"count": results[-1]}
|
|
diff = s.filter.endDate - s.filter.startDate
|
|
s.filter.endDate = s.filter.startDate
|
|
s.filter.startDate = s.filter.endDate - diff
|
|
r["previousCount"] = sessions.search2_series(data=s.filter, project_id=project_id, density=data.density,
|
|
view_type=data.view_type, metric_type=data.metric_type,
|
|
metric_of=data.metric_of, metric_value=data.metric_value)
|
|
r["countProgress"] = helper.__progress(old_val=r["previousCount"], new_val=r["count"])
|
|
# r["countProgress"] = ((r["count"] - r["previousCount"]) / r["previousCount"]) * 100 \
|
|
# if r["previousCount"] > 0 else 0
|
|
r["seriesName"] = s.name if s.name else i + 1
|
|
r["seriesId"] = s.series_id if s.series_id else None
|
|
results[-1] = r
|
|
elif data.view_type == schemas.MetricTableViewType.pie_chart:
|
|
if len(results[i].get("values", [])) > PIE_CHART_GROUP:
|
|
results[i]["values"] = results[i]["values"][:PIE_CHART_GROUP] \
|
|
+ [{
|
|
"name": "Others", "group": True,
|
|
"sessionCount": sum(r["sessionCount"] for r in results[i]["values"][PIE_CHART_GROUP:])
|
|
}]
|
|
|
|
return results
|
|
|
|
|
|
def __is_funnel_chart(data: schemas.CardSchema):
|
|
return data.metric_type == schemas.MetricType.funnel
|
|
|
|
|
|
def __get_funnel_chart(project_id, data: schemas.CardSchema):
|
|
if len(data.series) == 0:
|
|
return {
|
|
"stages": [],
|
|
"totalDropDueToIssues": 0
|
|
}
|
|
data.series[0].filter.startDate = data.startTimestamp
|
|
data.series[0].filter.endDate = data.endTimestamp
|
|
return funnels.get_top_insights_on_the_fly_widget(project_id=project_id, data=data.series[0].filter)
|
|
|
|
|
|
def __is_errors_list(data: schemas.CardSchema):
|
|
return data.metric_type == schemas.MetricType.table \
|
|
and data.metric_of == schemas.MetricOfTable.errors
|
|
|
|
|
|
def __get_errors_list(project_id, user_id, data: schemas.CardSchema):
|
|
if len(data.series) == 0:
|
|
return {
|
|
"total": 0,
|
|
"errors": []
|
|
}
|
|
data.series[0].filter.startDate = data.startTimestamp
|
|
data.series[0].filter.endDate = data.endTimestamp
|
|
data.series[0].filter.page = data.page
|
|
data.series[0].filter.limit = data.limit
|
|
return errors.search(data.series[0].filter, project_id=project_id, user_id=user_id)
|
|
|
|
|
|
def __is_sessions_list(data: schemas.CardSchema):
|
|
return data.metric_type == schemas.MetricType.table \
|
|
and data.metric_of == schemas.MetricOfTable.sessions
|
|
|
|
|
|
def __get_sessions_list(project_id, user_id, data: schemas.CardSchema):
|
|
if len(data.series) == 0:
|
|
print("empty series")
|
|
return {
|
|
"total": 0,
|
|
"sessions": []
|
|
}
|
|
data.series[0].filter.startDate = data.startTimestamp
|
|
data.series[0].filter.endDate = data.endTimestamp
|
|
data.series[0].filter.page = data.page
|
|
data.series[0].filter.limit = data.limit
|
|
return sessions.search_sessions(data=data.series[0].filter, project_id=project_id, user_id=user_id)
|
|
|
|
|
|
def __is_predefined(data: schemas.CardSchema):
|
|
return data.is_template
|
|
|
|
|
|
def __is_click_map(data: schemas.CardSchema):
|
|
return data.metric_type == schemas.MetricType.click_map
|
|
|
|
|
|
def __get_click_map_chart(project_id, user_id, data: schemas.CardSchema, include_mobs: bool = True):
|
|
if len(data.series) == 0:
|
|
return None
|
|
data.series[0].filter.startDate = data.startTimestamp
|
|
data.series[0].filter.endDate = data.endTimestamp
|
|
return click_maps.search_short_session(project_id=project_id, user_id=user_id,
|
|
data=schemas.FlatClickMapSessionsSearch(**data.series[0].filter.dict()),
|
|
include_mobs=include_mobs)
|
|
|
|
|
|
def __get_path_analysis_chart(project_id, data: schemas.CardSchema):
|
|
if len(data.series) == 0:
|
|
data.series.append(schemas.CardSeriesSchema())
|
|
elif not isinstance(data.series[0].filter, schemas.PathAnalysisSchema):
|
|
data.series[0].filter = schemas.PathAnalysisSchema()
|
|
data.series[0].filter.startTimestamp = data.startTimestamp
|
|
data.series[0].filter.endTimestamp = data.endTimestamp
|
|
return product_analytics.path_analysis(project_id=project_id,
|
|
data=schemas.PathAnalysisSchema(**data.series[0].filter.dict()))
|
|
|
|
|
|
def __is_path_analysis(data: schemas.CardSchema):
|
|
return data.metric_type == schemas.MetricType.pathAnalysis
|
|
|
|
|
|
def merged_live(project_id, data: schemas.CardSchema, user_id=None):
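    """Dispatch a card to the matching data source: predefined (template) metrics,
    funnel, errors table, sessions table, click map or path analysis. Other cards
    go through __try_live; plain time-series results are then merged into one row
    per timestamp with one column per series name."""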
|
|
if data.is_template:
|
|
return get_predefined_metric(key=data.metric_of, project_id=project_id, data=data.dict())
|
|
elif __is_funnel_chart(data):
|
|
return __get_funnel_chart(project_id=project_id, data=data)
|
|
elif __is_errors_list(data):
|
|
return __get_errors_list(project_id=project_id, user_id=user_id, data=data)
|
|
elif __is_sessions_list(data):
|
|
return __get_sessions_list(project_id=project_id, user_id=user_id, data=data)
|
|
elif __is_click_map(data):
|
|
return __get_click_map_chart(project_id=project_id, user_id=user_id, data=data)
|
|
elif __is_path_analysis(data):
|
|
return __get_path_analysis_chart(project_id=project_id, data=data)
|
|
elif len(data.series) == 0:
|
|
return []
|
|
series_charts = __try_live(project_id=project_id, data=data)
|
|
if data.view_type == schemas.MetricTimeseriesViewType.progress or data.metric_type == schemas.MetricType.table:
|
|
return series_charts
|
|
results = [{}] * len(series_charts[0])
|
|
for i in range(len(results)):
|
|
for j, series_chart in enumerate(series_charts):
|
|
results[i] = {**results[i], "timestamp": series_chart[i]["timestamp"],
|
|
data.series[j].name if data.series[j].name else j + 1: series_chart[i]["count"]}
|
|
return results
|
|
|
|
|
|
def __merge_metric_with_data(metric: schemas.CardSchema,
|
|
data: schemas.CardChartSchema) -> schemas.CardSchema:
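    """Overlay request-time overrides from the chart payload onto the stored card:
    replacement series, extra filters/events appended to every series filter, and
    the limit/page/start/end values of the request."""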
|
|
if data.series is not None and len(data.series) > 0:
|
|
metric.series = data.series
|
|
metric: schemas.CardSchema = schemas.CardSchema(
|
|
**{**data.dict(by_alias=True), **metric.dict(by_alias=True)})
|
|
if len(data.filters) > 0 or len(data.events) > 0:
|
|
for s in metric.series:
|
|
if len(data.filters) > 0:
|
|
s.filter.filters += data.filters
|
|
if len(data.events) > 0:
|
|
s.filter.events += data.events
|
|
metric.limit = data.limit
|
|
metric.page = data.page
|
|
metric.startTimestamp = data.startTimestamp
|
|
metric.endTimestamp = data.endTimestamp
|
|
return metric
|
|
|
|
|
|
def make_chart(project_id, user_id, data: schemas.CardChartSchema, metric: schemas.CardSchema):
|
|
if metric is None:
|
|
return None
|
|
metric: schemas.CardSchema = __merge_metric_with_data(metric=metric, data=data)
|
|
|
|
return merged_live(project_id=project_id, data=metric, user_id=user_id)
|
|
|
|
|
|
def get_sessions(project_id, user_id, metric_id, data: schemas.CardSessionsSchema):
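    """Load the stored card, merge it with the request payload, then run a session
    search for every series; each result row carries seriesId, seriesName and the
    sessions.search_sessions output for that series."""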
|
|
# raw_metric = get_card(metric_id=metric_id, project_id=project_id, user_id=user_id, flatten=False, include_data=True)
|
|
raw_metric: dict = get_card(metric_id=metric_id, project_id=project_id, user_id=user_id, flatten=False)
|
|
if raw_metric is None:
|
|
return None
|
|
metric: schemas.CardSchema = schemas.CardSchema(**raw_metric)
|
|
metric: schemas.CardSchema = __merge_metric_with_data(metric=metric, data=data)
|
|
if metric is None:
|
|
return None
|
|
results = []
|
|
# is_click_map = False
|
|
# if __is_click_map(metric) and raw_metric.get("data") is not None:
|
|
# is_click_map = True
|
|
for s in metric.series:
|
|
s.filter.startDate = data.startTimestamp
|
|
s.filter.endDate = data.endTimestamp
|
|
s.filter.limit = data.limit
|
|
s.filter.page = data.page
|
|
# if is_click_map:
|
|
# results.append(
|
|
# {"seriesId": s.series_id, "seriesName": s.name, "total": 1, "sessions": [raw_metric["data"]]})
|
|
# break
|
|
results.append({"seriesId": s.series_id, "seriesName": s.name,
|
|
**sessions.search_sessions(data=s.filter, project_id=project_id, user_id=user_id)})
|
|
|
|
return results
|
|
|
|
|
|
def get_funnel_issues(project_id, user_id, metric_id, data: schemas.CardSessionsSchema):
|
|
raw_metric: dict = get_card(metric_id=metric_id, project_id=project_id, user_id=user_id, flatten=False)
|
|
if raw_metric is None:
|
|
return None
|
|
metric: schemas.CardSchema = schemas.CardSchema(**raw_metric)
|
|
metric: schemas.CardSchema = __merge_metric_with_data(metric=metric, data=data)
|
|
if metric is None:
|
|
return None
|
|
for s in metric.series:
|
|
s.filter.startDate = data.startTimestamp
|
|
s.filter.endDate = data.endTimestamp
|
|
s.filter.limit = data.limit
|
|
s.filter.page = data.page
|
|
return {"seriesId": s.series_id, "seriesName": s.name,
|
|
**funnels.get_issues_on_the_fly_widget(project_id=project_id, data=s.filter)}
|
|
|
|
|
|
def get_errors_list(project_id, user_id, metric_id, data: schemas.CardSessionsSchema):
|
|
raw_metric: dict = get_card(metric_id=metric_id, project_id=project_id, user_id=user_id, flatten=False)
|
|
if raw_metric is None:
|
|
return None
|
|
metric: schemas.CardSchema = schemas.CardSchema(**raw_metric)
|
|
metric: schemas.CardSchema = __merge_metric_with_data(metric=metric, data=data)
|
|
if metric is None:
|
|
return None
|
|
for s in metric.series:
|
|
s.filter.startDate = data.startTimestamp
|
|
s.filter.endDate = data.endTimestamp
|
|
s.filter.limit = data.limit
|
|
s.filter.page = data.page
|
|
return {"seriesId": s.series_id, "seriesName": s.name,
|
|
**errors.search(data=s.filter, project_id=project_id, user_id=user_id)}
|
|
|
|
|
|
def try_sessions(project_id, user_id, data: schemas.CardSessionsSchema):
|
|
results = []
|
|
if data.series is None:
|
|
return results
|
|
for s in data.series:
|
|
s.filter.startDate = data.startTimestamp
|
|
s.filter.endDate = data.endTimestamp
|
|
s.filter.limit = data.limit
|
|
s.filter.page = data.page
|
|
if len(data.filters) > 0:
|
|
s.filter.filters += data.filters
|
|
if len(data.events) > 0:
|
|
s.filter.events += data.events
|
|
results.append({"seriesId": None, "seriesName": s.name,
|
|
**sessions.search_sessions(data=s.filter, project_id=project_id, user_id=user_id)})
|
|
|
|
return results
|
|
|
|
|
|
def create(project_id, user_id, data: schemas.CardSchema, dashboard=False):
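    """Insert the card into the metrics table (pre-rendering the click-map session
    into session_data when applicable) together with its series in a single CTE
    query. Returns the new metric_id when called from a dashboard, otherwise the
    freshly loaded card."""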
|
|
with pg_client.PostgresClient() as cur:
|
|
session_data = None
|
|
if __is_click_map(data):
|
|
session_data = __get_click_map_chart(project_id=project_id, user_id=user_id,
|
|
data=data, include_mobs=False)
|
|
if session_data is not None:
|
|
session_data = json.dumps(session_data)
|
|
_data = {"session_data": session_data}
|
|
for i, s in enumerate(data.series):
|
|
for k in s.dict().keys():
|
|
_data[f"{k}_{i}"] = s.__getattribute__(k)
|
|
_data[f"index_{i}"] = i
|
|
_data[f"filter_{i}"] = s.filter.json()
|
|
series_len = len(data.series)
|
|
params = {"user_id": user_id, "project_id": project_id, **data.dict(), **_data}
|
|
params["default_config"] = json.dumps(data.default_config.dict())
|
|
query = """INSERT INTO metrics (project_id, user_id, name, is_public,
|
|
view_type, metric_type, metric_of, metric_value,
|
|
metric_format, default_config, thumbnail, data)
|
|
VALUES (%(project_id)s, %(user_id)s, %(name)s, %(is_public)s,
|
|
%(view_type)s, %(metric_type)s, %(metric_of)s, %(metric_value)s,
|
|
%(metric_format)s, %(default_config)s, %(thumbnail)s, %(session_data)s)
|
|
RETURNING metric_id"""
|
|
if len(data.series) > 0:
|
|
query = f"""WITH m AS ({query})
|
|
INSERT INTO metric_series(metric_id, index, name, filter)
|
|
VALUES {",".join([f"((SELECT metric_id FROM m), %(index_{i})s, %(name_{i})s, %(filter_{i})s::jsonb)"
|
|
for i in range(series_len)])}
|
|
RETURNING metric_id;"""
|
|
|
|
query = cur.mogrify(query, params)
|
|
# print("-------")
|
|
# print(query)
|
|
# print("-------")
|
|
cur.execute(query)
|
|
r = cur.fetchone()
|
|
if dashboard:
|
|
return r["metric_id"]
|
|
return {"data": get_card(metric_id=r["metric_id"], project_id=project_id, user_id=user_id)}
|
|
|
|
|
|
def update(metric_id, user_id, project_id, data: schemas.UpdateCardSchema):
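    """Diff the incoming series against the stored ones to build INSERT, UPDATE and
    DELETE sub-queries for metric_series, then update the metrics row itself and
    return the refreshed card."""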
|
|
metric: dict = get_card(metric_id=metric_id, project_id=project_id, user_id=user_id, flatten=False)
|
|
if metric is None:
|
|
return None
|
|
series_ids = [r["seriesId"] for r in metric["series"]]
|
|
n_series = []
|
|
d_series_ids = []
|
|
u_series = []
|
|
u_series_ids = []
|
|
params = {"metric_id": metric_id, "is_public": data.is_public, "name": data.name,
|
|
"user_id": user_id, "project_id": project_id, "view_type": data.view_type,
|
|
"metric_type": data.metric_type, "metric_of": data.metric_of,
|
|
"metric_value": data.metric_value, "metric_format": data.metric_format,
|
|
"config": json.dumps(data.default_config.dict()), "thumbnail": data.thumbnail}
|
|
for i, s in enumerate(data.series):
|
|
prefix = "u_"
|
|
if s.index is None:
|
|
s.index = i
|
|
if s.series_id is None or s.series_id not in series_ids:
|
|
n_series.append({"i": i, "s": s})
|
|
prefix = "n_"
|
|
else:
|
|
u_series.append({"i": i, "s": s})
|
|
u_series_ids.append(s.series_id)
|
|
ns = s.dict()
|
|
for k in ns.keys():
|
|
if k == "filter":
|
|
ns[k] = json.dumps(ns[k])
|
|
params[f"{prefix}{k}_{i}"] = ns[k]
|
|
for i in series_ids:
|
|
if i not in u_series_ids:
|
|
d_series_ids.append(i)
|
|
params["d_series_ids"] = tuple(d_series_ids)
|
|
|
|
with pg_client.PostgresClient() as cur:
|
|
sub_queries = []
|
|
if len(n_series) > 0:
|
|
sub_queries.append(f"""\
|
|
n AS (INSERT INTO metric_series (metric_id, index, name, filter)
|
|
VALUES {",".join([f"(%(metric_id)s, %(n_index_{s['i']})s, %(n_name_{s['i']})s, %(n_filter_{s['i']})s::jsonb)"
|
|
for s in n_series])}
|
|
RETURNING 1)""")
|
|
if len(u_series) > 0:
|
|
sub_queries.append(f"""\
|
|
u AS (UPDATE metric_series
|
|
SET name=series.name,
|
|
filter=series.filter,
|
|
index=series.index
|
|
FROM (VALUES {",".join([f"(%(u_series_id_{s['i']})s,%(u_index_{s['i']})s,%(u_name_{s['i']})s,%(u_filter_{s['i']})s::jsonb)"
|
|
for s in u_series])}) AS series(series_id, index, name, filter)
|
|
WHERE metric_series.metric_id =%(metric_id)s AND metric_series.series_id=series.series_id
|
|
RETURNING 1)""")
|
|
if len(d_series_ids) > 0:
|
|
sub_queries.append("""\
|
|
d AS (DELETE FROM metric_series WHERE metric_id =%(metric_id)s AND series_id IN %(d_series_ids)s
|
|
RETURNING 1)""")
|
|
query = cur.mogrify(f"""\
|
|
{"WITH " if len(sub_queries) > 0 else ""}{",".join(sub_queries)}
|
|
UPDATE metrics
|
|
SET name = %(name)s, is_public= %(is_public)s,
|
|
view_type= %(view_type)s, metric_type= %(metric_type)s,
|
|
metric_of= %(metric_of)s, metric_value= %(metric_value)s,
|
|
metric_format= %(metric_format)s,
|
|
edited_at = timezone('utc'::text, now()),
|
|
default_config = %(config)s,
|
|
thumbnail = %(thumbnail)s
|
|
WHERE metric_id = %(metric_id)s
|
|
AND project_id = %(project_id)s
|
|
AND (user_id = %(user_id)s OR is_public)
|
|
RETURNING metric_id;""", params)
|
|
cur.execute(query)
|
|
return get_card(metric_id=metric_id, project_id=project_id, user_id=user_id)
|
|
|
|
|
|
def search_all(project_id, user_id, data: schemas.SearchCardsSchema, include_series=False):
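    """List the cards visible to the user for a project, with owner email and the
    dashboards they appear on; optionally embeds each card's series. Supports a
    free-text filter on card name or owner email plus limit/offset paging."""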
|
|
constraints = ["metrics.project_id = %(project_id)s",
|
|
"metrics.deleted_at ISNULL"]
|
|
params = {"project_id": project_id, "user_id": user_id,
|
|
"offset": (data.page - 1) * data.limit,
|
|
"limit": data.limit, }
|
|
if data.mine_only:
|
|
constraints.append("user_id = %(user_id)s")
|
|
else:
|
|
constraints.append("(user_id = %(user_id)s OR metrics.is_public)")
|
|
if data.shared_only:
|
|
constraints.append("is_public")
|
|
|
|
if data.query is not None and len(data.query) > 0:
|
|
constraints.append("(name ILIKE %(query)s OR owner.owner_email ILIKE %(query)s)")
|
|
params["query"] = helper.values_for_operator(value=data.query,
|
|
op=schemas.SearchEventOperator._contains)
|
|
with pg_client.PostgresClient() as cur:
|
|
sub_join = ""
|
|
if include_series:
|
|
sub_join = """LEFT JOIN LATERAL (SELECT COALESCE(jsonb_agg(metric_series.* ORDER BY index),'[]'::jsonb) AS series
|
|
FROM metric_series
|
|
WHERE metric_series.metric_id = metrics.metric_id
|
|
AND metric_series.deleted_at ISNULL
|
|
) AS metric_series ON (TRUE)"""
|
|
query = cur.mogrify(
|
|
f"""SELECT metric_id, project_id, user_id, name, is_public, created_at, edited_at,
|
|
metric_type, metric_of, metric_format, metric_value, view_type, is_pinned,
|
|
dashboards, owner_email, default_config AS config, thumbnail
|
|
FROM metrics
|
|
{sub_join}
|
|
LEFT JOIN LATERAL (SELECT COALESCE(jsonb_agg(connected_dashboards.* ORDER BY is_public,name),'[]'::jsonb) AS dashboards
|
|
FROM (SELECT DISTINCT dashboard_id, name, is_public
|
|
FROM dashboards INNER JOIN dashboard_widgets USING (dashboard_id)
|
|
WHERE deleted_at ISNULL
|
|
AND dashboard_widgets.metric_id = metrics.metric_id
|
|
AND project_id = %(project_id)s
|
|
AND ((dashboards.user_id = %(user_id)s OR is_public))) AS connected_dashboards
|
|
) AS connected_dashboards ON (TRUE)
|
|
LEFT JOIN LATERAL (SELECT email AS owner_email
|
|
FROM users
|
|
WHERE deleted_at ISNULL
|
|
AND users.user_id = metrics.user_id
|
|
) AS owner ON (TRUE)
|
|
WHERE {" AND ".join(constraints)}
|
|
ORDER BY created_at {data.order.value}
|
|
LIMIT %(limit)s OFFSET %(offset)s;""", params)
|
|
cur.execute(query)
|
|
rows = cur.fetchall()
|
|
if include_series:
|
|
for r in rows:
|
|
for s in r["series"]:
|
|
s["filter"] = helper.old_search_payload_to_flat(s["filter"])
|
|
else:
|
|
for r in rows:
|
|
r["created_at"] = TimeUTC.datetime_to_timestamp(r["created_at"])
|
|
r["edited_at"] = TimeUTC.datetime_to_timestamp(r["edited_at"])
|
|
rows = helper.list_to_camel_case(rows)
|
|
return rows
|
|
|
|
|
|
def get_all(project_id, user_id):
|
|
default_search = schemas.SearchCardsSchema()
|
|
result = rows = search_all(project_id=project_id, user_id=user_id, data=default_search)
|
|
while len(rows) == default_search.limit:
|
|
default_search.page += 1
|
|
rows = search_all(project_id=project_id, user_id=user_id, data=default_search)
|
|
result += rows
|
|
|
|
return result
|
|
|
|
|
|
def delete(project_id, metric_id, user_id):
|
|
with pg_client.PostgresClient() as cur:
|
|
cur.execute(
|
|
cur.mogrify("""\
|
|
UPDATE public.metrics
|
|
SET deleted_at = timezone('utc'::text, now()), edited_at = timezone('utc'::text, now())
|
|
WHERE project_id = %(project_id)s
|
|
AND metric_id = %(metric_id)s
|
|
AND (user_id = %(user_id)s OR is_public);""",
|
|
{"metric_id": metric_id, "project_id": project_id, "user_id": user_id})
|
|
)
|
|
|
|
return {"state": "success"}
|
|
|
|
|
|
def get_card(metric_id, project_id, user_id, flatten: bool = True, include_data: bool = False):
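    """Fetch a single card with its series, connected dashboards and owner email.
    flatten converts stored series filters to the flat payload format; include_data
    also returns the cached data column (used for click maps)."""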
|
|
with pg_client.PostgresClient() as cur:
|
|
query = cur.mogrify(
|
|
f"""SELECT metric_id, project_id, user_id, name, is_public, created_at, deleted_at, edited_at, metric_type,
|
|
view_type, metric_of, metric_value, metric_format, is_pinned, default_config,
|
|
default_config AS config,series, dashboards, owner_email
|
|
{',data' if include_data else ''}
|
|
FROM metrics
|
|
LEFT JOIN LATERAL (SELECT COALESCE(jsonb_agg(metric_series.* ORDER BY index),'[]'::jsonb) AS series
|
|
FROM metric_series
|
|
WHERE metric_series.metric_id = metrics.metric_id
|
|
AND metric_series.deleted_at ISNULL
|
|
) AS metric_series ON (TRUE)
|
|
LEFT JOIN LATERAL (SELECT COALESCE(jsonb_agg(connected_dashboards.* ORDER BY is_public,name),'[]'::jsonb) AS dashboards
|
|
FROM (SELECT dashboard_id, name, is_public
|
|
FROM dashboards INNER JOIN dashboard_widgets USING (dashboard_id)
|
|
WHERE deleted_at ISNULL
|
|
AND project_id = %(project_id)s
|
|
AND ((dashboards.user_id = %(user_id)s OR is_public))
|
|
AND metric_id = %(metric_id)s) AS connected_dashboards
|
|
) AS connected_dashboards ON (TRUE)
|
|
LEFT JOIN LATERAL (SELECT email AS owner_email
|
|
FROM users
|
|
WHERE deleted_at ISNULL
|
|
AND users.user_id = metrics.user_id
|
|
) AS owner ON (TRUE)
|
|
WHERE metrics.project_id = %(project_id)s
|
|
AND metrics.deleted_at ISNULL
|
|
AND (metrics.user_id = %(user_id)s OR metrics.is_public)
|
|
AND metrics.metric_id = %(metric_id)s
|
|
ORDER BY created_at;""",
|
|
{"metric_id": metric_id, "project_id": project_id, "user_id": user_id}
|
|
)
|
|
cur.execute(query)
|
|
row = cur.fetchone()
|
|
if row is None:
|
|
return None
|
|
row["created_at"] = TimeUTC.datetime_to_timestamp(row["created_at"])
|
|
row["edited_at"] = TimeUTC.datetime_to_timestamp(row["edited_at"])
|
|
if flatten:
|
|
for s in row["series"]:
|
|
s["filter"] = helper.old_search_payload_to_flat(s["filter"])
|
|
return helper.dict_to_camel_case(row)
|
|
|
|
|
|
def get_series_for_alert(project_id, user_id):
|
|
with pg_client.PostgresClient() as cur:
|
|
cur.execute(
|
|
cur.mogrify(
|
|
"""SELECT series_id AS value,
|
|
metrics.name || '.' || (COALESCE(metric_series.name, 'series ' || index)) || '.count' AS name,
|
|
'count' AS unit,
|
|
FALSE AS predefined,
|
|
metric_id,
|
|
series_id
|
|
FROM metric_series
|
|
INNER JOIN metrics USING (metric_id)
|
|
WHERE metrics.deleted_at ISNULL
|
|
AND metrics.project_id = %(project_id)s
|
|
AND metrics.metric_type = 'timeseries'
|
|
AND (user_id = %(user_id)s OR is_public)
|
|
ORDER BY name;""",
|
|
{"project_id": project_id, "user_id": user_id}
|
|
)
|
|
)
|
|
rows = cur.fetchall()
|
|
return helper.list_to_camel_case(rows)
|
|
|
|
|
|
def change_state(project_id, metric_id, user_id, status):
|
|
with pg_client.PostgresClient() as cur:
|
|
cur.execute(
|
|
cur.mogrify("""\
|
|
UPDATE public.metrics
|
|
SET active = %(status)s
|
|
WHERE metric_id = %(metric_id)s
|
|
AND (user_id = %(user_id)s OR is_public);""",
|
|
{"metric_id": metric_id, "status": status, "user_id": user_id})
|
|
)
|
|
return get_card(metric_id=metric_id, project_id=project_id, user_id=user_id)
|
|
|
|
|
|
def get_funnel_sessions_by_issue(user_id, project_id, metric_id, issue_id,
|
|
data: schemas.CardSessionsSchema
|
|
# , range_value=None, start_date=None, end_date=None
|
|
):
|
|
metric: dict = get_card(metric_id=metric_id, project_id=project_id, user_id=user_id, flatten=False)
|
|
if metric is None:
|
|
return None
|
|
metric: schemas.CardSchema = schemas.CardSchema(**metric)
|
|
metric: schemas.CardSchema = __merge_metric_with_data(metric=metric, data=data)
|
|
if metric is None:
|
|
return None
|
|
for s in metric.series:
|
|
s.filter.startDate = data.startTimestamp
|
|
s.filter.endDate = data.endTimestamp
|
|
s.filter.limit = data.limit
|
|
s.filter.page = data.page
|
|
issues_list = funnels.get_issues_on_the_fly_widget(project_id=project_id, data=s.filter).get("issues", {})
|
|
issues_list = issues_list.get("significant", []) + issues_list.get("insignificant", [])
|
|
issue = None
|
|
for i in issues_list:
|
|
if i.get("issueId", "") == issue_id:
|
|
issue = i
|
|
break
|
|
if issue is None:
|
|
issue = issues.get(project_id=project_id, issue_id=issue_id)
|
|
if issue is not None:
|
|
issue = {**issue,
|
|
"affectedSessions": 0,
|
|
"affectedUsers": 0,
|
|
"conversionImpact": 0,
|
|
"lostConversions": 0,
|
|
"unaffectedSessions": 0}
|
|
return {"seriesId": s.series_id, "seriesName": s.name,
|
|
"sessions": sessions.search_sessions(user_id=user_id, project_id=project_id,
|
|
issue=issue, data=s.filter)
|
|
if issue is not None else {"total": 0, "sessions": []},
|
|
"issue": issue}
|
|
|
|
|
|
def make_chart_from_card(project_id, user_id, metric_id, data: schemas.CardChartSchema):
|
|
raw_metric: dict = get_card(metric_id=metric_id, project_id=project_id, user_id=user_id, include_data=True)
|
|
if raw_metric is None:
|
|
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="card not found")
|
|
metric: schemas.CardSchema = schemas.CardSchema(**raw_metric)
|
|
if metric.is_template:
|
|
return get_predefined_metric(key=metric.metric_of, project_id=project_id, data=data.dict())
|
|
elif __is_click_map(metric):
|
|
if raw_metric["data"]:
|
|
keys = sessions_mobs. \
|
|
__get_mob_keys(project_id=project_id, session_id=raw_metric["data"]["sessionId"])
|
|
mob_exists = False
|
|
for k in keys:
|
|
if s3.exists(bucket=config("sessions_bucket"), key=k):
|
|
mob_exists = True
|
|
break
|
|
if mob_exists:
|
|
raw_metric["data"]['domURL'] = sessions_mobs.get_urls(session_id=raw_metric["data"]["sessionId"],
|
|
project_id=project_id)
|
|
raw_metric["data"]['mobsUrl'] = sessions_mobs.get_urls_depercated(
|
|
session_id=raw_metric["data"]["sessionId"])
|
|
return raw_metric["data"]
|
|
|
|
return make_chart(project_id=project_id, user_id=user_id, data=data, metric=metric)
|
|
|
|
|
|
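# Dispatch table for predefined (template) widgets: maps each MetricOf* key to the
# chalicelib.core.metrics helper that computes it. Used by get_predefined_metric below.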
PREDEFINED = {schemas.MetricOfWebVitals.count_sessions: metrics.get_processed_sessions,
|
|
schemas.MetricOfWebVitals.avg_image_load_time: metrics.get_application_activity_avg_image_load_time,
|
|
schemas.MetricOfWebVitals.avg_page_load_time: metrics.get_application_activity_avg_page_load_time,
|
|
schemas.MetricOfWebVitals.avg_request_load_time: metrics.get_application_activity_avg_request_load_time,
|
|
schemas.MetricOfWebVitals.avg_dom_content_load_start: metrics.get_page_metrics_avg_dom_content_load_start,
|
|
schemas.MetricOfWebVitals.avg_first_contentful_pixel: metrics.get_page_metrics_avg_first_contentful_pixel,
|
|
schemas.MetricOfWebVitals.avg_visited_pages: metrics.get_user_activity_avg_visited_pages,
|
|
schemas.MetricOfWebVitals.avg_session_duration: metrics.get_user_activity_avg_session_duration,
|
|
schemas.MetricOfWebVitals.avg_pages_dom_buildtime: metrics.get_pages_dom_build_time,
|
|
schemas.MetricOfWebVitals.avg_pages_response_time: metrics.get_pages_response_time,
|
|
schemas.MetricOfWebVitals.avg_response_time: metrics.get_top_metrics_avg_response_time,
|
|
schemas.MetricOfWebVitals.avg_first_paint: metrics.get_top_metrics_avg_first_paint,
|
|
schemas.MetricOfWebVitals.avg_dom_content_loaded: metrics.get_top_metrics_avg_dom_content_loaded,
|
|
schemas.MetricOfWebVitals.avg_till_first_byte: metrics.get_top_metrics_avg_till_first_bit,
|
|
schemas.MetricOfWebVitals.avg_time_to_interactive: metrics.get_top_metrics_avg_time_to_interactive,
|
|
schemas.MetricOfWebVitals.count_requests: metrics.get_top_metrics_count_requests,
|
|
schemas.MetricOfWebVitals.avg_time_to_render: metrics.get_time_to_render,
|
|
schemas.MetricOfWebVitals.avg_used_js_heap_size: metrics.get_memory_consumption,
|
|
schemas.MetricOfWebVitals.avg_cpu: metrics.get_avg_cpu,
|
|
schemas.MetricOfWebVitals.avg_fps: metrics.get_avg_fps,
|
|
schemas.MetricOfErrors.impacted_sessions_by_js_errors: metrics.get_impacted_sessions_by_js_errors,
|
|
schemas.MetricOfErrors.domains_errors_4xx: metrics.get_domains_errors_4xx,
|
|
schemas.MetricOfErrors.domains_errors_5xx: metrics.get_domains_errors_5xx,
|
|
schemas.MetricOfErrors.errors_per_domains: metrics.get_errors_per_domains,
|
|
schemas.MetricOfErrors.calls_errors: metrics.get_calls_errors,
|
|
schemas.MetricOfErrors.errors_per_type: metrics.get_errors_per_type,
|
|
schemas.MetricOfErrors.resources_by_party: metrics.get_resources_by_party,
|
|
schemas.MetricOfPerformance.speed_location: metrics.get_speed_index_location,
|
|
schemas.MetricOfPerformance.slowest_domains: metrics.get_slowest_domains,
|
|
schemas.MetricOfPerformance.sessions_per_browser: metrics.get_sessions_per_browser,
|
|
schemas.MetricOfPerformance.time_to_render: metrics.get_time_to_render,
|
|
schemas.MetricOfPerformance.impacted_sessions_by_slow_pages: metrics.get_impacted_sessions_by_slow_pages,
|
|
schemas.MetricOfPerformance.memory_consumption: metrics.get_memory_consumption,
|
|
schemas.MetricOfPerformance.cpu: metrics.get_avg_cpu,
|
|
schemas.MetricOfPerformance.fps: metrics.get_avg_fps,
|
|
schemas.MetricOfPerformance.crashes: metrics.get_crashes,
|
|
schemas.MetricOfPerformance.resources_vs_visually_complete: metrics.get_resources_vs_visually_complete,
|
|
schemas.MetricOfPerformance.pages_dom_buildtime: metrics.get_pages_dom_build_time,
|
|
schemas.MetricOfPerformance.pages_response_time: metrics.get_pages_response_time,
|
|
schemas.MetricOfPerformance.pages_response_time_distribution: metrics.get_pages_response_time_distribution,
|
|
schemas.MetricOfResources.missing_resources: metrics.get_missing_resources_trend,
|
|
schemas.MetricOfResources.slowest_resources: metrics.get_slowest_resources,
|
|
schemas.MetricOfResources.resources_loading_time: metrics.get_resources_loading_time,
|
|
schemas.MetricOfResources.resource_type_vs_response_end: metrics.resource_type_vs_response_end,
|
|
schemas.MetricOfResources.resources_count_by_type: metrics.get_resources_count_by_type, }
|
|
|
|
|
|
def get_predefined_metric(key: Union[schemas.MetricOfWebVitals, schemas.MetricOfErrors, \
|
|
schemas.MetricOfPerformance, schemas.MetricOfResources], project_id: int, data: dict):
|
|
    # Fall back to a no-op that accepts keyword arguments so unknown keys return None.
    return PREDEFINED.get(key, lambda *args, **kwargs: None)(project_id=project_id, **data)
|
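
# Usage sketch (illustrative only, not part of the original module): merged_live and
# make_chart_from_card resolve predefined cards through the dispatcher above, roughly as
#     get_predefined_metric(key=card.metric_of, project_id=project_id, data=card.dict())
# where `card` stands for any schemas.CardSchema / CardChartSchema instance; unknown
# keys fall back to the no-op default and yield None.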