openreplay/ee/api/chalicelib/core/funnels.py
Shekar Siri · f562355aed · v1.1.0 (#31) · 2021-06-11 23:31:29 +05:30 · 275 lines · 12 KiB · Python
import json

import chalicelib.utils.helper
from chalicelib.core import events, significance, sessions
from chalicelib.utils import dev, helper, pg_client
from chalicelib.utils.TimeUTC import TimeUTC

REMOVE_KEYS = ["key", "_key", "startDate", "endDate"]
ALLOW_UPDATE_FOR = ["name", "filter"]


def filter_stages(stages):
    ALLOW_TYPES = [events.event_type.CLICK.ui_type, events.event_type.INPUT.ui_type,
                   events.event_type.LOCATION.ui_type, events.event_type.CUSTOM.ui_type,
                   events.event_type.CLICK_IOS.ui_type, events.event_type.INPUT_IOS.ui_type,
                   events.event_type.VIEW_IOS.ui_type, events.event_type.CUSTOM_IOS.ui_type]
    return [s for s in stages if s["type"] in ALLOW_TYPES and s.get("value") is not None]
def create(project_id, user_id, name, filter, is_public):
    helper.delete_keys_from_dict(filter, REMOVE_KEYS)
    filter["events"] = filter_stages(stages=filter.get("events", []))
    with pg_client.PostgresClient() as cur:
        query = cur.mogrify("""\
                INSERT INTO public.funnels (project_id, user_id, name, filter, is_public)
                VALUES (%(project_id)s, %(user_id)s, %(name)s, %(filter)s::jsonb, %(is_public)s)
                RETURNING *;""",
                            {"user_id": user_id, "project_id": project_id, "name": name,
                             "filter": json.dumps(filter), "is_public": is_public})
        cur.execute(query)
        r = cur.fetchone()
    r["created_at"] = TimeUTC.datetime_to_timestamp(r["created_at"])
    r = helper.dict_to_camel_case(r)
    r["filter"]["startDate"], r["filter"]["endDate"] = TimeUTC.get_start_end_from_range(r["filter"]["rangeValue"])
    return {"data": r}
def update(funnel_id, user_id, name=None, filter=None, is_public=None):
    s_query = []
    if filter is not None:
        helper.delete_keys_from_dict(filter, REMOVE_KEYS)
        s_query.append("filter = %(filter)s::jsonb")
    if name is not None and len(name) > 0:
        s_query.append("name = %(name)s")
    if is_public is not None:
        s_query.append("is_public = %(is_public)s")
    if len(s_query) == 0:
        return {"errors": ["Nothing to update"]}
    with pg_client.PostgresClient() as cur:
        query = cur.mogrify(f"""\
                UPDATE public.funnels
                SET {", ".join(s_query)}
                WHERE funnel_id = %(funnel_id)s
                RETURNING *;""",
                            {"user_id": user_id, "funnel_id": funnel_id, "name": name,
                             "filter": json.dumps(filter) if filter is not None else None,
                             "is_public": is_public})
        cur.execute(query)
        r = cur.fetchone()
    r["created_at"] = TimeUTC.datetime_to_timestamp(r["created_at"])
    r = helper.dict_to_camel_case(r)
    r["filter"]["startDate"], r["filter"]["endDate"] = TimeUTC.get_start_end_from_range(r["filter"]["rangeValue"])
    return {"data": r}
def get_by_user(project_id, user_id, range_value=None, start_date=None, end_date=None, details=False):
    with pg_client.PostgresClient() as cur:
        team_query = """INNER JOIN (SELECT collaborators.user_id
                                    FROM public.users AS creator
                                         INNER JOIN public.users AS collaborators USING (tenant_id)
                                    WHERE creator.user_id = %(user_id)s) AS team USING (user_id)"""
        cur.execute(
            cur.mogrify(
                f"""SELECT DISTINCT ON (funnels.funnel_id) funnel_id, project_id, user_id, name, created_at, deleted_at, is_public
                           {", filter" if details else ""}
                    FROM public.funnels {team_query}
                    WHERE project_id = %(project_id)s
                      AND funnels.deleted_at IS NULL
                      AND (funnels.user_id = %(user_id)s OR funnels.is_public);""",
                {"project_id": project_id, "user_id": user_id}))
        rows = cur.fetchall()
    rows = helper.list_to_camel_case(rows)
    for row in rows:
        row["createdAt"] = TimeUTC.datetime_to_timestamp(row["createdAt"])
        if details:
            row["filter"]["events"] = filter_stages(row["filter"]["events"])
            get_start_end_time(filter_d=row["filter"], range_value=range_value, start_date=start_date,
                               end_date=end_date)
            counts = sessions.search2_pg(data=row["filter"], project_id=project_id, user_id=None, count_only=True)
            row["sessionsCount"] = counts["countSessions"]
            row["usersCount"] = counts["countUsers"]
            overview = significance.get_overview(filter_d=row["filter"], project_id=project_id)
            row["stages"] = overview["stages"]
            row.pop("filter")
            row["stagesCount"] = len(row["stages"])
            # TODO: compute criticalIssuesCount with a dedicated query
            row["criticalIssuesCount"] = overview["criticalIssuesCount"]
            row["missedConversions"] = 0 if len(row["stages"]) < 2 \
                else row["stages"][0]["sessionsCount"] - row["stages"][-1]["sessionsCount"]
    return rows
def get_possible_issue_types(project_id):
    return [{"type": t, "title": chalicelib.utils.helper.get_issue_title(t)} for t in
            ['click_rage', 'dead_click', 'excessive_scrolling',
             'bad_request', 'missing_resource', 'memory', 'cpu',
             'slow_resource', 'slow_page_load', 'crash', 'custom_event_error',
             'js_error']]


def get_start_end_time(filter_d, range_value, start_date, end_date):
    if start_date is not None and end_date is not None:
        filter_d["startDate"], filter_d["endDate"] = start_date, end_date
    elif range_value is not None and len(range_value) > 0:
        filter_d["rangeValue"] = range_value
        filter_d["startDate"], filter_d["endDate"] = TimeUTC.get_start_end_from_range(range_value)
    else:
        filter_d["startDate"], filter_d["endDate"] = TimeUTC.get_start_end_from_range(filter_d["rangeValue"])
def delete(project_id, funnel_id, user_id):
    with pg_client.PostgresClient() as cur:
        cur.execute(
            cur.mogrify("""\
                UPDATE public.funnels
                SET deleted_at = timezone('utc'::text, now())
                WHERE project_id = %(project_id)s
                  AND funnel_id = %(funnel_id)s;""",
                        {"funnel_id": funnel_id, "project_id": project_id, "user_id": user_id}))
    return {"data": {"state": "success"}}
def get_sessions(project_id, funnel_id, user_id, range_value=None, start_date=None, end_date=None):
    f = get(funnel_id=funnel_id, project_id=project_id)
    if f is None:
        return {"errors": ["funnel not found"]}
    get_start_end_time(filter_d=f["filter"], range_value=range_value, start_date=start_date, end_date=end_date)
    return sessions.search2_pg(data=f["filter"], project_id=project_id, user_id=user_id)


def get_sessions_on_the_fly(funnel_id, project_id, user_id, data):
    data["events"] = filter_stages(data.get("events", []))
    if len(data["events"]) == 0:
        f = get(funnel_id=funnel_id, project_id=project_id)
        if f is None:
            return {"errors": ["funnel not found"]}
        get_start_end_time(filter_d=f["filter"], range_value=data.get("rangeValue"),
                           start_date=data.get("startDate"),
                           end_date=data.get("endDate"))
        data = f["filter"]
    return sessions.search2_pg(data=data, project_id=project_id, user_id=user_id)
def get_top_insights(project_id, funnel_id, range_value=None, start_date=None, end_date=None):
    f = get(funnel_id=funnel_id, project_id=project_id)
    if f is None:
        return {"errors": ["funnel not found"]}
    get_start_end_time(filter_d=f["filter"], range_value=range_value, start_date=start_date, end_date=end_date)
    insights, total_drop_due_to_issues = significance.get_top_insights(filter_d=f["filter"], project_id=project_id)
    if len(insights) > 0:
        insights[-1]["dropDueToIssues"] = total_drop_due_to_issues
    return {"stages": helper.list_to_camel_case(insights),
            "totalDropDueToIssues": total_drop_due_to_issues}


def get_top_insights_on_the_fly(funnel_id, project_id, data):
    data["events"] = filter_stages(data.get("events", []))
    if len(data["events"]) == 0:
        f = get(funnel_id=funnel_id, project_id=project_id)
        if f is None:
            return {"errors": ["funnel not found"]}
        get_start_end_time(filter_d=f["filter"], range_value=data.get("rangeValue"),
                           start_date=data.get("startDate"),
                           end_date=data.get("endDate"))
        data = f["filter"]
    insights, total_drop_due_to_issues = significance.get_top_insights(filter_d=data, project_id=project_id)
    if len(insights) > 0:
        insights[-1]["dropDueToIssues"] = total_drop_due_to_issues
    return {"stages": helper.list_to_camel_case(insights),
            "totalDropDueToIssues": total_drop_due_to_issues}
def get_issues(project_id, funnel_id, range_value=None, start_date=None, end_date=None):
    f = get(funnel_id=funnel_id, project_id=project_id)
    if f is None:
        return {"errors": ["funnel not found"]}
    get_start_end_time(filter_d=f["filter"], range_value=range_value, start_date=start_date, end_date=end_date)
    return {"data": {
        "issues": helper.dict_to_camel_case(significance.get_issues_list(filter_d=f["filter"], project_id=project_id))
    }}


@dev.timed
def get_issues_on_the_fly(funnel_id, project_id, data):
    first_stage = data.get("firstStage")
    last_stage = data.get("lastStage")
    data["events"] = filter_stages(data.get("events", []))
    if len(data["events"]) == 0:
        f = get(funnel_id=funnel_id, project_id=project_id)
        if f is None:
            return {"errors": ["funnel not found"]}
        get_start_end_time(filter_d=f["filter"], range_value=data.get("rangeValue"),
                           start_date=data.get("startDate"),
                           end_date=data.get("endDate"))
        data = f["filter"]
    return {
        "issues": helper.dict_to_camel_case(
            significance.get_issues_list(filter_d=data, project_id=project_id, first_stage=first_stage,
                                         last_stage=last_stage))}
def get(funnel_id, project_id):
    with pg_client.PostgresClient() as cur:
        cur.execute(
            cur.mogrify("""\
                SELECT *
                FROM public.funnels
                WHERE project_id = %(project_id)s
                  AND deleted_at IS NULL
                  AND funnel_id = %(funnel_id)s;""",
                        {"funnel_id": funnel_id, "project_id": project_id}))
        f = helper.dict_to_camel_case(cur.fetchone())
    if f is None:
        return None
    f["createdAt"] = TimeUTC.datetime_to_timestamp(f["createdAt"])
    f["filter"]["events"] = filter_stages(stages=f["filter"]["events"])
    return f
@dev.timed
def search_by_issue(user_id, project_id, funnel_id, issue_id, data, range_value=None, start_date=None, end_date=None):
    if len(data.get("events", [])) == 0:
        f = get(funnel_id=funnel_id, project_id=project_id)
        if f is None:
            return {"errors": ["funnel not found"]}
        get_start_end_time(filter_d=f["filter"], range_value=range_value,
                           start_date=data.get("startDate", start_date),
                           end_date=data.get("endDate", end_date))
        data = f["filter"]
    issues = get_issues_on_the_fly(funnel_id=funnel_id, project_id=project_id, data=data).get("issues", {})
    issues = issues.get("significant", []) + issues.get("insignificant", [])
    issue = None
    for i in issues:
        if i.get("issueId", "") == issue_id:
            issue = i
            break
    return {"sessions": sessions.search2_pg(user_id=user_id, project_id=project_id, issue=issue,
                                            data=data) if issue is not None else {"total": 0, "sessions": []},
            "issue": issue}