Compare commits

...

34 commits

Author SHA1 Message Date
Rajesh Rajendran
7ba4ea7cd2 Changing default encryption to false 2023-04-13 13:05:40 +02:00
Alexander
6f195a0ff0
feat(backend): enabled ecnryption and added metrics (#1160) 2023-04-13 12:47:44 +02:00
Rajesh Rajendran
0610965130
chore(helm): Enabling redis string for helm template variable (#1159)
fix #1158

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2023-04-13 10:10:20 +02:00
Shekar Siri
45c5dfc1bf
Add files via upload (#1157) 2023-04-12 19:10:50 +02:00
Shekar Siri
d020a8f8d7
Add files via upload (#1156) 2023-04-12 18:47:06 +02:00
Rajesh Rajendran
360d51d637
chore(helm): Updating chalice image release (#1155) 2023-04-12 18:11:19 +02:00
Kraiem Taha Yassine
83ea01762d
Merge pull request #1154 from openreplay/v1.11.0-patch
V1.11.0 patch
2023-04-12 16:50:05 +01:00
Taha Yassine Kraiem
a229f91501 chore(build): testing EE cron-Jobs 2023-04-12 16:36:51 +01:00
Taha Yassine Kraiem
b67300b462 feat(chalice): changed corn-Job execution time 2023-04-12 16:19:33 +01:00
Taha Yassine Kraiem
86017ec2cf feat(chalice): fixing Jobs 2023-04-12 15:59:17 +01:00
Rajesh Rajendran
62f238cdd0
chore(helm): disabling redis string if not enabled (#1153)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2023-04-12 16:25:45 +02:00
Taha Yassine Kraiem
92f4ffa1fb chore(build): test patch branch 2023-04-12 15:24:42 +01:00
Taha Yassine Kraiem
585d893063 feat(chalice): refactored Jobs
feat(chalice): added limits on Jobs
2023-04-12 15:23:50 +01:00
Kraiem Taha Yassine
76bb483505
Merge pull request #1152 from openreplay/v1.11.0-patch
V1.11.0 patch
2023-04-12 15:08:47 +01:00
Taha Yassine Kraiem
0d01afbcb5 feat(chalice): changes 2023-04-12 15:07:23 +01:00
Taha Yassine Kraiem
5c0faea838 feat(chalice): configurable mobs expiration 2023-04-12 14:51:15 +01:00
Taha Yassine Kraiem
e25bfabba0 feat(chalice): fixing jobs execution 2023-04-12 14:36:51 +01:00
Taha Yassine Kraiem
4113ffaa3b feat(chalice): debugging jobs execution 2023-04-12 14:07:47 +01:00
Taha Yassine Kraiem
82e2856d99 feat(chalice): debugging jobs execution 2023-04-12 13:48:52 +01:00
Taha Yassine Kraiem
b82be4c540 feat(chalice): debugging jobs execution 2023-04-12 13:37:16 +01:00
Taha Yassine Kraiem
048a9767ac feat(chalice): fixed jobs execution 2023-04-12 13:06:38 +01:00
Rajesh Rajendran
b14bcbb342
chore(build): Bump image version of frontend assets while building (#1149)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2023-04-11 16:07:36 +02:00
Rajesh Rajendran
dc032bf370
chore(helm): Updating frontend image release (#1147) 2023-04-11 15:06:23 +02:00
Rajesh Rajendran
d855bffb12
chore(helm): Adding option for records bucket (#1146)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2023-04-11 14:24:29 +02:00
Shekar Siri
6993104a02
fix(ui): fix player destruction on id change (#1145)
Co-authored-by: nick-delirium <nikita@openreplay.com>
2023-04-11 12:13:52 +02:00
Rajesh Rajendran
28a1ccf63e
chore(cli): Adding verbose logging (#1144)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2023-04-10 11:38:42 +02:00
Rajesh Rajendran
7784fdcdae
chore(helm): Updating chalice image release (#1143) 2023-04-09 15:47:12 +02:00
Rajesh Rajendran
ece075e2f3
fix(ee): chalice health check (#1142)
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2023-04-09 15:43:38 +02:00
Rajesh Rajendran
99fa7e18c1
fix(helm): clickhouse username (#1141) 2023-04-09 15:09:02 +02:00
Rajesh Rajendran
191ae35311
chore(helm): Updating chalice image release (#1139) 2023-04-08 10:17:38 +02:00
Rajesh Rajendran
8c1e6ec02e
fix redis endpoint and chalice health endpoints (#1138)
* chore(helm): Adding redis string from global config

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

* fix(chalice): health check url for alerts and assist

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>

---------

Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2023-04-08 10:14:10 +02:00
Rajesh Rajendran
0eb4b66b9d
chore(helm): Updating chalice image release (#1136) 2023-04-07 18:08:47 +02:00
Kraiem Taha Yassine
1d0f330118
Merge pull request #1135 from openreplay/v1.11.0-patch
feat(chalice): skip mob existence verification
2023-04-07 16:15:14 +01:00
Taha Yassine Kraiem
16e7be5e99 feat(chalice): skip mob existence verification 2023-04-07 16:14:33 +01:00
35 changed files with 667 additions and 526 deletions

View file

@@ -10,6 +10,7 @@ on:
     branches:
       - dev
       - api-*
+      - v1.11.0-patch
     paths:
       - "ee/api/**"
      - "api/**"

View file

@@ -10,6 +10,7 @@ on:
     branches:
       - dev
       - api-*
+      - v1.11.0-patch
     paths:
       - "api/**"
      - "!api/.gitignore"

View file

@@ -10,6 +10,7 @@ on:
     branches:
       - dev
       - api-*
+      - v1.11.0-patch
     paths:
       - "ee/api/**"
      - "api/**"

View file

@@ -14,9 +14,9 @@ def app_connection_string(name, port, path):
 HEALTH_ENDPOINTS = {
-    "alerts": app_connection_string("alerts-openreplay", 8888, "metrics"),
+    "alerts": app_connection_string("alerts-openreplay", 8888, "health"),
     "assets": app_connection_string("assets-openreplay", 8888, "metrics"),
-    "assist": app_connection_string("assist-openreplay", 8888, "metrics"),
+    "assist": app_connection_string("assist-openreplay", 8888, "health"),
     "chalice": app_connection_string("chalice-openreplay", 8888, "metrics"),
     "db": app_connection_string("db-openreplay", 8888, "metrics"),
     "ender": app_connection_string("ender-openreplay", 8888, "metrics"),
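The hunk above retargets the alerts and assist health checks from `/metrics` to `/health`. A minimal sketch of how such an endpoint map is assembled; the cluster-local URL shape inside `app_connection_string` is an assumption for illustration, not taken from the diff:

```python
def app_connection_string(name, port, path):
    # Hypothetical URL shape for an in-cluster service endpoint.
    return f"http://{name}.app.svc.cluster.local:{port}/{path}"

HEALTH_ENDPOINTS = {
    "alerts": app_connection_string("alerts-openreplay", 8888, "health"),
    "assist": app_connection_string("assist-openreplay", 8888, "health"),
    # Services whose Prometheus endpoint doubles as the health probe:
    "chalice": app_connection_string("chalice-openreplay", 8888, "metrics"),
}
print(HEALTH_ENDPOINTS["alerts"])
```

The point of the change is that some services expose a dedicated `/health` route, while others are probed via their metrics endpoint.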

View file

@ -1,6 +1,6 @@
from chalicelib.utils import pg_client, helper from chalicelib.utils import pg_client, helper
from chalicelib.utils.TimeUTC import TimeUTC from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.core import sessions, sessions_mobs from chalicelib.core import sessions_mobs, sessions_devtool
class Actions: class Actions:
@ -17,9 +17,7 @@ class JobStatus:
def get(job_id): def get(job_id):
with pg_client.PostgresClient() as cur: with pg_client.PostgresClient() as cur:
query = cur.mogrify( query = cur.mogrify(
"""\ """SELECT *
SELECT
*
FROM public.jobs FROM public.jobs
WHERE job_id = %(job_id)s;""", WHERE job_id = %(job_id)s;""",
{"job_id": job_id} {"job_id": job_id}
@ -37,9 +35,7 @@ def get(job_id):
def get_all(project_id): def get_all(project_id):
with pg_client.PostgresClient() as cur: with pg_client.PostgresClient() as cur:
query = cur.mogrify( query = cur.mogrify(
"""\ """SELECT *
SELECT
*
FROM public.jobs FROM public.jobs
WHERE project_id = %(project_id)s;""", WHERE project_id = %(project_id)s;""",
{"project_id": project_id} {"project_id": project_id}
@ -51,23 +47,19 @@ def get_all(project_id):
return helper.list_to_camel_case(data) return helper.list_to_camel_case(data)
def create(project_id, data): def create(project_id, user_id):
with pg_client.PostgresClient() as cur: with pg_client.PostgresClient() as cur:
job = { job = {"status": "scheduled",
"status": "scheduled",
"project_id": project_id, "project_id": project_id,
**data "action": Actions.DELETE_USER_DATA,
} "reference_id": user_id,
"description": f"Delete user sessions of userId = {user_id}",
"start_at": TimeUTC.to_human_readable(TimeUTC.midnight(1))}
query = cur.mogrify("""\ query = cur.mogrify(
INSERT INTO public.jobs( """INSERT INTO public.jobs(project_id, description, status, action,reference_id, start_at)
project_id, description, status, action, VALUES (%(project_id)s, %(description)s, %(status)s, %(action)s,%(reference_id)s, %(start_at)s)
reference_id, start_at RETURNING *;""", job)
)
VALUES (
%(project_id)s, %(description)s, %(status)s, %(action)s,
%(reference_id)s, %(start_at)s
) RETURNING *;""", job)
cur.execute(query=query) cur.execute(query=query)
@ -90,14 +82,13 @@ def update(job_id, job):
**job **job
} }
query = cur.mogrify("""\ query = cur.mogrify(
UPDATE public.jobs """UPDATE public.jobs
SET SET updated_at = timezone('utc'::text, now()),
updated_at = timezone('utc'::text, now()),
status = %(status)s, status = %(status)s,
errors = %(errors)s errors = %(errors)s
WHERE WHERE job_id = %(job_id)s
job_id = %(job_id)s RETURNING *;""", job_data) RETURNING *;""", job_data)
cur.execute(query=query) cur.execute(query=query)
@ -113,61 +104,64 @@ def format_datetime(r):
r["start_at"] = TimeUTC.datetime_to_timestamp(r["start_at"]) r["start_at"] = TimeUTC.datetime_to_timestamp(r["start_at"])
def __get_session_ids_by_user_ids(project_id, user_ids):
with pg_client.PostgresClient() as cur:
query = cur.mogrify(
"""SELECT session_id
FROM public.sessions
WHERE project_id = %(project_id)s
AND user_id IN %(userId)s
LIMIT 1000;""",
{"project_id": project_id, "userId": tuple(user_ids)})
cur.execute(query=query)
ids = cur.fetchall()
return [s["session_id"] for s in ids]
def __delete_sessions_by_session_ids(session_ids):
with pg_client.PostgresClient(unlimited_query=True) as cur:
query = cur.mogrify(
"""DELETE FROM public.sessions
WHERE session_id IN %(session_ids)s""",
{"session_ids": tuple(session_ids)}
)
cur.execute(query=query)
def get_scheduled_jobs(): def get_scheduled_jobs():
with pg_client.PostgresClient() as cur: with pg_client.PostgresClient() as cur:
query = cur.mogrify( query = cur.mogrify(
"""\ """SELECT *
SELECT * FROM public.jobs FROM public.jobs
WHERE status = %(status)s AND start_at <= (now() at time zone 'utc');""", WHERE status = %(status)s
{"status": JobStatus.SCHEDULED} AND start_at <= (now() at time zone 'utc');""",
) {"status": JobStatus.SCHEDULED})
cur.execute(query=query) cur.execute(query=query)
data = cur.fetchall() data = cur.fetchall()
for record in data:
format_datetime(record)
return helper.list_to_camel_case(data) return helper.list_to_camel_case(data)
def execute_jobs(): def execute_jobs():
jobs = get_scheduled_jobs() jobs = get_scheduled_jobs()
if len(jobs) == 0:
# No jobs to execute
return
for job in jobs: for job in jobs:
print(f"job can be executed {job['id']}") print(f"Executing jobId:{job['jobId']}")
try: try:
if job["action"] == Actions.DELETE_USER_DATA: if job["action"] == Actions.DELETE_USER_DATA:
session_ids = sessions.get_session_ids_by_user_ids( session_ids = __get_session_ids_by_user_ids(project_id=job["projectId"],
project_id=job["projectId"], user_ids=job["referenceId"] user_ids=[job["referenceId"]])
) if len(session_ids) > 0:
print(f"Deleting {len(session_ids)} sessions")
sessions.delete_sessions_by_session_ids(session_ids) __delete_sessions_by_session_ids(session_ids)
sessions_mobs.delete_mobs(session_ids=session_ids, project_id=job["projectId"]) sessions_mobs.delete_mobs(session_ids=session_ids, project_id=job["projectId"])
sessions_devtool.delete_mobs(session_ids=session_ids, project_id=job["projectId"])
else: else:
raise Exception(f"The action {job['action']} not supported.") raise Exception(f"The action '{job['action']}' not supported.")
job["status"] = JobStatus.COMPLETED job["status"] = JobStatus.COMPLETED
print(f"job completed {job['id']}") print(f"Job completed {job['jobId']}")
except Exception as e: except Exception as e:
job["status"] = JobStatus.FAILED job["status"] = JobStatus.FAILED
job["error"] = str(e) job["errors"] = str(e)
print(f"job failed {job['id']}") print(f"Job failed {job['jobId']}")
update(job["job_id"], job) update(job["jobId"], job)
def group_user_ids_by_project_id(jobs, now):
project_id_user_ids = {}
for job in jobs:
if job["startAt"] > now:
continue
project_id = job["projectId"]
if project_id not in project_id_user_ids:
project_id_user_ids[project_id] = []
project_id_user_ids[project_id].append(job)
return project_id_user_ids
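The refactored jobs module deletes user data in two steps: first collect the session ids for the referenced user (with a `LIMIT` as a safety bound), then delete by those ids. A runnable sketch of that pattern using the stdlib `sqlite3` as a stand-in for the Postgres client; the table shape and data are invented for the example:

```python
import sqlite3

# In-memory stand-in for public.sessions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (session_id INTEGER PRIMARY KEY, project_id INTEGER, user_id TEXT)")
conn.executemany("INSERT INTO sessions VALUES (?, ?, ?)",
                 [(1, 42, "alice"), (2, 42, "bob"), (3, 42, "alice"), (4, 7, "alice")])

def get_session_ids_by_user_ids(project_id, user_ids, limit=1000):
    # Step 1: bounded SELECT, mirroring __get_session_ids_by_user_ids.
    marks = ",".join("?" * len(user_ids))
    rows = conn.execute(
        f"SELECT session_id FROM sessions WHERE project_id = ? AND user_id IN ({marks}) "
        f"ORDER BY session_id LIMIT ?",  # ORDER BY only for deterministic output here
        (project_id, *user_ids, limit)).fetchall()
    return [r[0] for r in rows]

def delete_sessions_by_session_ids(session_ids):
    # Step 2: delete strictly by the ids gathered in step 1.
    marks = ",".join("?" * len(session_ids))
    conn.execute(f"DELETE FROM sessions WHERE session_id IN ({marks})", session_ids)

ids = get_session_ids_by_user_ids(42, ["alice"])
delete_sessions_by_session_ids(ids)
remaining = [r[0] for r in conn.execute("SELECT session_id FROM sessions ORDER BY session_id")]
print(ids, remaining)
```

Splitting select from delete lets the job also clean up derived artifacts (mob and devtools files) for exactly the same id set, as `execute_jobs` does.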

View file

@@ -1065,47 +1065,6 @@ def get_session_user(project_id, user_id):
     return helper.dict_to_camel_case(data)
 
 
-def get_session_ids_by_user_ids(project_id, user_ids):
-    with pg_client.PostgresClient() as cur:
-        query = cur.mogrify(
-            """\
-            SELECT session_id FROM public.sessions
-            WHERE
-                project_id = %(project_id)s AND user_id IN %(userId)s;""",
-            {"project_id": project_id, "userId": tuple(user_ids)}
-        )
-        ids = cur.execute(query=query)
-    return ids
-
-
-def delete_sessions_by_session_ids(session_ids):
-    with pg_client.PostgresClient(unlimited_query=True) as cur:
-        query = cur.mogrify(
-            """\
-            DELETE FROM public.sessions
-            WHERE
-                session_id IN %(session_ids)s;""",
-            {"session_ids": tuple(session_ids)}
-        )
-        cur.execute(query=query)
-    return True
-
-
-def delete_sessions_by_user_ids(project_id, user_ids):
-    with pg_client.PostgresClient(unlimited_query=True) as cur:
-        query = cur.mogrify(
-            """\
-            DELETE FROM public.sessions
-            WHERE
-                project_id = %(project_id)s AND user_id IN %(userId)s;""",
-            {"project_id": project_id, "userId": tuple(user_ids)}
-        )
-        cur.execute(query=query)
-    return True
-
-
 def count_all():
     with pg_client.PostgresClient(unlimited_query=True) as cur:
         cur.execute(query="SELECT COUNT(session_id) AS count FROM public.sessions")
def count_all(): def count_all():
with pg_client.PostgresClient(unlimited_query=True) as cur: with pg_client.PostgresClient(unlimited_query=True) as cur:
cur.execute(query="SELECT COUNT(session_id) AS count FROM public.sessions") cur.execute(query="SELECT COUNT(session_id) AS count FROM public.sessions")

View file

@@ -24,3 +24,9 @@ def get_urls(session_id, project_id, check_existence: bool = True):
             ExpiresIn=config("PRESIGNED_URL_EXPIRATION", cast=int, default=900)
         ))
     return results
+
+
+def delete_mobs(project_id, session_ids):
+    for session_id in session_ids:
+        for k in __get_devtools_keys(project_id=project_id, session_id=session_id):
+            s3.schedule_for_deletion(config("sessions_bucket"), k)

View file

@@ -57,5 +57,6 @@ def get_ios(session_id):
 def delete_mobs(project_id, session_ids):
     for session_id in session_ids:
-        for k in __get_mob_keys(project_id=project_id, session_id=session_id):
+        for k in __get_mob_keys(project_id=project_id, session_id=session_id) \
+                + __get_mob_keys_deprecated(session_id=session_id):
             s3.schedule_for_deletion(config("sessions_bucket"), k)

View file

@@ -69,9 +69,11 @@ def get_by_id2_pg(project_id, session_id, context: schemas.CurrentContext, full_
                               if e['source'] == "js_exception"][:500]
             data['userEvents'] = events.get_customs_by_session_id(project_id=project_id,
                                                                   session_id=session_id)
-            data['domURL'] = sessions_mobs.get_urls(session_id=session_id, project_id=project_id)
-            data['mobsUrl'] = sessions_mobs.get_urls_depercated(session_id=session_id)
-            data['devtoolsURL'] = sessions_devtool.get_urls(session_id=session_id, project_id=project_id)
+            data['domURL'] = sessions_mobs.get_urls(session_id=session_id, project_id=project_id,
+                                                    check_existence=False)
+            data['mobsUrl'] = sessions_mobs.get_urls_depercated(session_id=session_id, check_existence=False)
+            data['devtoolsURL'] = sessions_devtool.get_urls(session_id=session_id, project_id=project_id,
+                                                            check_existence=False)
             data['resources'] = resources.get_by_session_id(session_id=session_id, project_id=project_id,
                                                             start_ts=data["startTs"], duration=data["duration"])
@@ -126,9 +128,11 @@ def get_replay(project_id, session_id, context: schemas.CurrentContext, full_dat
     if data["platform"] == 'ios':
         data['mobsUrl'] = sessions_mobs.get_ios(session_id=session_id)
     else:
-        data['domURL'] = sessions_mobs.get_urls(session_id=session_id, project_id=project_id)
-        data['mobsUrl'] = sessions_mobs.get_urls_depercated(session_id=session_id)
-        data['devtoolsURL'] = sessions_devtool.get_urls(session_id=session_id, project_id=project_id)
+        data['domURL'] = sessions_mobs.get_urls(session_id=session_id, project_id=project_id,
+                                                check_existence=False)
+        data['mobsUrl'] = sessions_mobs.get_urls_depercated(session_id=session_id, check_existence=False)
+        data['devtoolsURL'] = sessions_devtool.get_urls(session_id=session_id, project_id=project_id,
+                                                        check_existence=False)
     data['metadata'] = __group_metadata(project_metadata=data.pop("projectMetadata"), session=data)
     data['live'] = live and assist.is_live(project_id=project_id, session_id=session_id,

View file

@@ -110,11 +110,14 @@ def rename(source_bucket, source_key, target_bucket, target_key):
 def schedule_for_deletion(bucket, key):
+    if not exists(bucket, key):
+        return False
     s3 = __get_s3_resource()
     s3_object = s3.Object(bucket, key)
     s3_object.copy_from(CopySource={'Bucket': bucket, 'Key': key},
-                        Expires=datetime.now() + timedelta(days=7),
+                        Expires=datetime.utcnow() + timedelta(days=config("SCH_DELETE_DAYS", cast=int, default=30)),
                         MetadataDirective='REPLACE')
+    return True
 
 
 def generate_file_key(project_id, key):
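The hunk above replaces a hard-coded 7-day expiry with a retention window read from `SCH_DELETE_DAYS` (default 30), and guards against objects that no longer exist. A stdlib sketch of just the configurable-window part (the helper name `deletion_expiry` is invented for the example; how expired objects are actually removed is outside this diff):

```python
import os
from datetime import datetime, timedelta, timezone

def deletion_expiry(now=None, default_days=30):
    # The retention window comes from the SCH_DELETE_DAYS environment
    # variable, falling back to 30 days when it is unset.
    days = int(os.environ.get("SCH_DELETE_DAYS", default_days))
    now = now or datetime.now(timezone.utc)
    return now + timedelta(days=days)

base = datetime(2023, 4, 12, tzinfo=timezone.utc)
os.environ.pop("SCH_DELETE_DAYS", None)
default_window = (deletion_expiry(now=base) - base).days   # unset -> default
os.environ["SCH_DELETE_DAYS"] = "7"
custom_window = (deletion_expiry(now=base) - base).days    # override -> 7
print(default_window, custom_window)
```

Using UTC for the base timestamp matches the diff's switch from `datetime.now()` to `datetime.utcnow()`.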

View file

@@ -54,3 +54,4 @@ ASSIST_JWT_EXPIRATION=144000
 ASSIST_JWT_SECRET=
 PYTHONUNBUFFERED=1
 REDIS_STRING=redis://redis-master.db.svc.cluster.local:6379
+SCH_DELETE_DAYS=30

View file

@@ -1,5 +1,4 @@
 from apscheduler.triggers.cron import CronTrigger
-from apscheduler.triggers.interval import IntervalTrigger
 
 from chalicelib.core import telemetry
 from chalicelib.core import weekly_report, jobs
@@ -20,7 +19,7 @@ async def telemetry_cron() -> None:
 cron_jobs = [
     {"func": telemetry_cron, "trigger": CronTrigger(day_of_week="*"),
      "misfire_grace_time": 60 * 60, "max_instances": 1},
-    {"func": run_scheduled_jobs, "trigger": IntervalTrigger(minutes=1),
+    {"func": run_scheduled_jobs, "trigger": CronTrigger(day_of_week="*", hour=0, minute=15),
      "misfire_grace_time": 20, "max_instances": 1},
     {"func": weekly_report2, "trigger": CronTrigger(day_of_week="mon", hour=5),
      "misfire_grace_time": 60 * 60, "max_instances": 1}
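The scheduler change above moves `run_scheduled_jobs` from an every-minute `IntervalTrigger` to a daily `CronTrigger` firing at 00:15. A stdlib sketch of what that daily trigger amounts to, without importing APScheduler (the helper `next_daily_run` is invented for the example):

```python
from datetime import datetime, timedelta

def next_daily_run(now, hour=0, minute=15):
    # Roughly what CronTrigger(day_of_week="*", hour=0, minute=15) computes:
    # the next 00:15 -- today if it has not passed yet, otherwise tomorrow.
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=1)
    return candidate

print(next_daily_run(datetime(2023, 4, 12, 0, 10)))  # same day, 00:15
print(next_daily_run(datetime(2023, 4, 12, 9, 0)))   # next day, 00:15
```

Since jobs are scheduled for a midnight `start_at` and picked up by `start_at <= now`, a single daily pass at 00:15 is sufficient and far cheaper than polling every minute.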

View file

@@ -2,7 +2,6 @@ from fastapi import Depends, Body
 
 import schemas
 from chalicelib.core import sessions, events, jobs, projects
-from chalicelib.utils.TimeUTC import TimeUTC
 from or_dependencies import OR_context
 from routers.base import get_routers
@@ -15,7 +14,7 @@ async def get_user_sessions(projectKey: str, userId: str, start_date: int = None
     if projectId is None:
         return {"errors": ["invalid projectKey"]}
     return {
-        'data': sessions.get_user_sessions(
+        "data": sessions.get_user_sessions(
             project_id=projectId,
             user_id=userId,
             start_date=start_date,
@@ -30,7 +29,7 @@ async def get_session_events(projectKey: str, sessionId: int):
     if projectId is None:
         return {"errors": ["invalid projectKey"]}
     return {
-        'data': events.get_by_session_id(
+        "data": events.get_by_session_id(
             project_id=projectId,
             session_id=sessionId
         )
@@ -43,7 +42,7 @@ async def get_user_details(projectKey: str, userId: str):
     if projectId is None:
         return {"errors": ["invalid projectKey"]}
     return {
-        'data': sessions.get_session_user(
+        "data": sessions.get_session_user(
             project_id=projectId,
             user_id=userId
         )
@@ -55,14 +54,8 @@ async def schedule_to_delete_user_data(projectKey: str, userId: str):
     projectId = projects.get_internal_project_id(projectKey)
     if projectId is None:
         return {"errors": ["invalid projectKey"]}
-    data = {"action": "delete_user_data",
-            "reference_id": userId,
-            "description": f"Delete user sessions of userId = {userId}",
-            "start_at": TimeUTC.to_human_readable(TimeUTC.midnight(1))}
-    record = jobs.create(project_id=projectId, data=data)
-    return {
-        'data': record
-    }
+    record = jobs.create(project_id=projectId, user_id=userId)
+    return {"data": record}
 
 
 @app_apikey.get('/v1/{projectKey}/jobs', tags=["api"])
@@ -70,16 +63,12 @@ async def get_jobs(projectKey: str):
     projectId = projects.get_internal_project_id(projectKey)
     if projectId is None:
         return {"errors": ["invalid projectKey"]}
-    return {
-        'data': jobs.get_all(project_id=projectId)
-    }
+    return {"data": jobs.get_all(project_id=projectId)}
 
 
 @app_apikey.get('/v1/{projectKey}/jobs/{jobId}', tags=["api"])
 async def get_job(projectKey: str, jobId: int):
-    return {
-        'data': jobs.get(job_id=jobId)
-    }
+    return {"data": jobs.get(job_id=jobId)}
 
 
 @app_apikey.delete('/v1/{projectKey}/jobs/{jobId}', tags=["api"])
@@ -93,9 +82,7 @@ async def cancel_job(projectKey: str, jobId: int):
         return {"errors": ["The request job has already been canceled/completed."]}
     job["status"] = "cancelled"
-    return {
-        'data': jobs.update(job_id=jobId, job=job)
-    }
+    return {"data": jobs.update(job_id=jobId, job=job)}
 
 
 @app_apikey.get('/v1/projects', tags=["api"])
@@ -104,15 +91,13 @@ async def get_projects(context: schemas.CurrentContext = Depends(OR_context)):
     for record in records:
         del record['projectId']
-    return {
-        'data': records
-    }
+    return {"data": records}
 
 
 @app_apikey.get('/v1/projects/{projectKey}', tags=["api"])
 async def get_project(projectKey: str, context: schemas.CurrentContext = Depends(OR_context)):
     return {
-        'data': projects.get_project_by_key(tenant_id=context.tenant_id, project_key=projectKey)
+        "data": projects.get_project_by_key(tenant_id=context.tenant_id, project_key=projectKey)
     }
@@ -125,5 +110,5 @@ async def create_project(data: schemas.CreateProjectSchema = Body(...),
         data=data,
         skip_authorization=True
     )
-    del record['data']['projectId']
+    del record["data"]['projectId']
     return record

View file

@@ -84,7 +84,8 @@ ENV TZ=UTC \
     CH_PASSWORD="" \
     CH_DATABASE="default" \
     # Max file size to process, default to 100MB
-    MAX_FILE_SIZE=100000000
+    MAX_FILE_SIZE=100000000 \
+    USE_ENCRYPTION=false
 
 RUN if [ "$SERVICE_NAME" = "http" ]; then \

View file

@@ -34,11 +34,29 @@ func (t FileType) String() string {
 type Task struct {
     id     string
+    key    string
+    domRaw []byte
+    devRaw []byte
     doms   *bytes.Buffer
     dome   *bytes.Buffer
     dev    *bytes.Buffer
 }
 
+func (t *Task) SetMob(mob []byte, tp FileType) {
+    if tp == DOM {
+        t.domRaw = mob
+    } else {
+        t.devRaw = mob
+    }
+}
+
+func (t *Task) Mob(tp FileType) []byte {
+    if tp == DOM {
+        return t.domRaw
+    }
+    return t.devRaw
+}
+
 type Storage struct {
     cfg *config.Config
     s3  *storage.S3
@@ -76,6 +94,7 @@ func (s *Storage) Upload(msg *messages.SessionEnd) (err error) {
     // Prepare sessions
     newTask := &Task{
         id: sessionID,
+        key: msg.EncryptionKey,
     }
     wg := &sync.WaitGroup{}
     wg.Add(2)
@@ -108,6 +127,9 @@ func (s *Storage) Upload(msg *messages.SessionEnd) (err error) {
 }
 
 func (s *Storage) openSession(filePath string, tp FileType) ([]byte, error) {
+    if tp == DEV {
+        filePath += "devtools"
+    }
     // Check file size before download into memory
     info, err := os.Stat(filePath)
     if err == nil && info.Size() > s.cfg.MaxFileSize {
@@ -142,50 +164,86 @@ func (s *Storage) sortSessionMessages(raw []byte) ([]byte, error) {
 }
 
 func (s *Storage) prepareSession(path string, tp FileType, task *Task) error {
-    // Open mob file
-    if tp == DEV {
-        path += "devtools"
-    }
+    // Open session file
     startRead := time.Now()
     mob, err := s.openSession(path, tp)
     if err != nil {
         return err
     }
-    metrics.RecordSessionSize(float64(len(mob)), tp.String())
     metrics.RecordSessionReadDuration(float64(time.Now().Sub(startRead).Milliseconds()), tp.String())
+    metrics.RecordSessionSize(float64(len(mob)), tp.String())
 
-    // Encode and compress session
-    if tp == DEV {
-        start := time.Now()
-        task.dev = s.compressSession(mob)
-        metrics.RecordSessionCompressDuration(float64(time.Now().Sub(start).Milliseconds()), tp.String())
-    } else {
-        if len(mob) <= s.cfg.FileSplitSize {
-            start := time.Now()
-            task.doms = s.compressSession(mob)
-            metrics.RecordSessionCompressDuration(float64(time.Now().Sub(start).Milliseconds()), tp.String())
-            return nil
-        }
+    // Put opened session file into task struct
+    task.SetMob(mob, tp)
+
+    // Encrypt and compress session
+    s.packSession(task, tp)
+    return nil
+}
+
+func (s *Storage) packSession(task *Task, tp FileType) {
+    // Prepare mob file
+    mob := task.Mob(tp)
+
+    if tp == DEV || len(mob) <= s.cfg.FileSplitSize {
+        // Encryption
+        start := time.Now()
+        data := s.encryptSession(mob, task.key)
+        metrics.RecordSessionEncryptionDuration(float64(time.Now().Sub(start).Milliseconds()), tp.String())
+        // Compression
+        start = time.Now()
+        result := s.compressSession(data)
+        metrics.RecordSessionCompressDuration(float64(time.Now().Sub(start).Milliseconds()), tp.String())
+        if tp == DOM {
+            task.doms = result
+        } else {
+            task.dev = result
+        }
+        return
+    }
+
+    // Prepare two workers
     wg := &sync.WaitGroup{}
     wg.Add(2)
-    var firstPart, secondPart int64
+    var firstPart, secondPart, firstEncrypt, secondEncrypt int64
+
+    // DomStart part
     go func() {
+        // Encryption
         start := time.Now()
-        task.doms = s.compressSession(mob[:s.cfg.FileSplitSize])
-        firstPart = time.Now().Sub(start).Milliseconds()
+        data := s.encryptSession(mob[:s.cfg.FileSplitSize], task.key)
+        firstEncrypt = time.Since(start).Milliseconds()
+        // Compression
+        start = time.Now()
+        task.doms = s.compressSession(data)
+        firstPart = time.Since(start).Milliseconds()
+        // Finish task
         wg.Done()
     }()
+    // DomEnd part
     go func() {
+        // Encryption
         start := time.Now()
-        task.dome = s.compressSession(mob[s.cfg.FileSplitSize:])
-        secondPart = time.Now().Sub(start).Milliseconds()
+        data := s.encryptSession(mob[s.cfg.FileSplitSize:], task.key)
+        secondEncrypt = time.Since(start).Milliseconds()
+        // Compression
+        start = time.Now()
+        task.dome = s.compressSession(data)
+        secondPart = time.Since(start).Milliseconds()
+        // Finish task
         wg.Done()
     }()
     wg.Wait()
+
+    // Record metrics
+    metrics.RecordSessionEncryptionDuration(float64(firstEncrypt+secondEncrypt), tp.String())
     metrics.RecordSessionCompressDuration(float64(firstPart+secondPart), tp.String())
-    }
-    return nil
 }
 
 func (s *Storage) encryptSession(data []byte, encryptionKey string) []byte {
     var encryptedData []byte
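The Go change above inserts an encryption step before compression, and for large DOM payloads runs the two halves through encrypt-then-compress in parallel workers. The same pipeline shape, sketched in Python with a thread pool; the XOR "cipher" is a deliberately trivial placeholder for the service's real encryption, and `FILE_SPLIT_SIZE` is an assumed constant (the real value comes from config):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

FILE_SPLIT_SIZE = 4  # assumption for the sketch only

def encrypt(data: bytes, key: bytes) -> bytes:
    # Placeholder XOR cipher; self-inverse, so it also decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def pack_halves(mob: bytes, key: bytes):
    # Mirrors packSession's large-DOM branch: split the payload, then
    # encrypt and compress each half in a separate worker.
    halves = (mob[:FILE_SPLIT_SIZE], mob[FILE_SPLIT_SIZE:])
    with ThreadPoolExecutor(max_workers=2) as pool:
        return list(pool.map(lambda part: zlib.compress(encrypt(part, key)), halves))

doms, dome = pack_halves(b"abcdefgh", b"k")
# Round-trip to show the ordering matters: decompress first, then decrypt.
restored = encrypt(zlib.decompress(doms), b"k") + encrypt(zlib.decompress(dome), b"k")
print(restored)
```

Encrypting before compressing (as the diff does) means the compressor sees high-entropy input, so with a real cipher the compression ratio suffers; the separate encryption and compression duration metrics added in this commit make that trade-off observable.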

View file

@@ -85,18 +85,18 @@ func RecordSessionSortDuration(durMillis float64, fileType string) {
     storageSessionSortDuration.WithLabelValues(fileType).Observe(durMillis / 1000.0)
 }
 
-var storageSessionEncodeDuration = prometheus.NewHistogramVec(
+var storageSessionEncryptionDuration = prometheus.NewHistogramVec(
     prometheus.HistogramOpts{
         Namespace: "storage",
-        Name:      "encode_duration_seconds",
+        Name:      "encryption_duration_seconds",
         Help:      "A histogram displaying the duration of encoding for each session in seconds.",
         Buckets:   common.DefaultDurationBuckets,
     },
     []string{"file_type"},
 )
 
-func RecordSessionEncodeDuration(durMillis float64, fileType string) {
-    storageSessionEncodeDuration.WithLabelValues(fileType).Observe(durMillis / 1000.0)
+func RecordSessionEncryptionDuration(durMillis float64, fileType string) {
+    storageSessionEncryptionDuration.WithLabelValues(fileType).Observe(durMillis / 1000.0)
 }
 
 var storageSessionCompressDuration = prometheus.NewHistogramVec(
@@ -133,7 +133,7 @@ func List() []prometheus.Collector {
     storageTotalSessions,
     storageSessionReadDuration,
     storageSessionSortDuration,
-    storageSessionEncodeDuration,
+    storageSessionEncryptionDuration,
     storageSessionCompressDuration,
     storageSessionUploadDuration,
 }

View file

@@ -15,9 +15,9 @@ def app_connection_string(name, port, path):
 HEALTH_ENDPOINTS = {
-    "alerts": app_connection_string("alerts-openreplay", 8888, "metrics"),
+    "alerts": app_connection_string("alerts-openreplay", 8888, "health"),
     "assets": app_connection_string("assets-openreplay", 8888, "metrics"),
-    "assist": app_connection_string("assist-openreplay", 8888, "metrics"),
+    "assist": app_connection_string("assist-openreplay", 8888, "health"),
     "chalice": app_connection_string("chalice-openreplay", 8888, "metrics"),
     "db": app_connection_string("db-openreplay", 8888, "metrics"),
     "ender": app_connection_string("ender-openreplay", 8888, "metrics"),

View file

@@ -31,3 +31,9 @@ def get_urls(session_id, project_id, context: schemas_ee.CurrentContext, check_e
             ExpiresIn=config("PRESIGNED_URL_EXPIRATION", cast=int, default=900)
         ))
     return results
+
+
+def delete_mobs(project_id, session_ids):
+    for session_id in session_ids:
+        for k in __get_devtools_keys(project_id=project_id, session_id=session_id):
+            s3.schedule_for_deletion(config("sessions_bucket"), k)
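
The new `delete_mobs` schedules each devtools key for deletion rather than deleting inline. A toy sketch of that pattern with an in-memory queue; `DeletionQueue` and the key names are illustrative stand-ins, not OpenReplay's API:

```python
class DeletionQueue:
    """Illustrative stand-in for s3.schedule_for_deletion: objects are
    queued and removed later by a background job, not deleted inline."""
    def __init__(self):
        self.pending = []

    def schedule_for_deletion(self, bucket, key):
        self.pending.append((bucket, key))

def delete_mobs(queue, bucket, keys_by_session):
    # Mirrors the shape of the new helper: one key list per session.
    for session_id, keys in keys_by_session.items():
        for k in keys:
            queue.schedule_for_deletion(bucket, k)

q = DeletionQueue()
delete_mobs(q, "mobs", {"s1": ["s1/dom", "s1/devtools"], "s2": ["s2/dom"]})
```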


@@ -1396,47 +1396,6 @@ def get_session_user(project_id, user_id):
     return helper.dict_to_camel_case(data)
-
-
-def get_session_ids_by_user_ids(project_id, user_ids):
-    with pg_client.PostgresClient() as cur:
-        query = cur.mogrify(
-            """\
-            SELECT session_id FROM public.sessions
-            WHERE
-              project_id = %(project_id)s AND user_id IN %(userId)s;""",
-            {"project_id": project_id, "userId": tuple(user_ids)}
-        )
-        ids = cur.execute(query=query)
-    return ids
-
-
-def delete_sessions_by_session_ids(session_ids):
-    with pg_client.PostgresClient(unlimited_query=True) as cur:
-        query = cur.mogrify(
-            """\
-            DELETE FROM public.sessions
-            WHERE
-              session_id IN %(session_ids)s;""",
-            {"session_ids": tuple(session_ids)}
-        )
-        cur.execute(query=query)
-    return True
-
-
-def delete_sessions_by_user_ids(project_id, user_ids):
-    with pg_client.PostgresClient(unlimited_query=True) as cur:
-        query = cur.mogrify(
-            """\
-            DELETE FROM public.sessions
-            WHERE
-              project_id = %(project_id)s AND user_id IN %(userId)s;""",
-            {"project_id": project_id, "userId": tuple(user_ids)}
-        )
-        cur.execute(query=query)
-    return True
-
-
 def count_all():
     with ch_client.ClickHouseClient() as cur:
         row = cur.execute(query=f"SELECT COUNT(session_id) AS count FROM {exp_ch_helper.get_main_sessions_table()}")
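
The removed helpers built parameterized `IN` clauses against Postgres. The same pattern, sketched with stdlib `sqlite3` (placeholder style is `?` rather than psycopg2's `%(name)s`, but the principle of never string-formatting values into SQL is the same):

```python
import sqlite3

def delete_sessions_by_session_ids(conn, session_ids):
    # Expand one placeholder per id instead of formatting values into SQL.
    marks = ",".join("?" for _ in session_ids)
    conn.execute(f"DELETE FROM sessions WHERE session_id IN ({marks})",
                 tuple(session_ids))
    conn.commit()
    return True

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (session_id INTEGER)")
conn.executemany("INSERT INTO sessions VALUES (?)", [(1,), (2,), (3,)])
delete_sessions_by_session_ids(conn, [1, 3])
remaining = [r[0] for r in conn.execute("SELECT session_id FROM sessions")]
```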


@@ -72,10 +72,11 @@ def get_by_id2_pg(project_id, session_id, context: schemas_ee.CurrentContext, fu
                               if e['source'] == "js_exception"][:500]
             data['userEvents'] = events.get_customs_by_session_id(project_id=project_id,
                                                                   session_id=session_id)
-            data['domURL'] = sessions_mobs.get_urls(session_id=session_id, project_id=project_id)
-            data['mobsUrl'] = sessions_mobs.get_urls_depercated(session_id=session_id)
+            data['domURL'] = sessions_mobs.get_urls(session_id=session_id, project_id=project_id,
+                                                    check_existence=False)
+            data['mobsUrl'] = sessions_mobs.get_urls_depercated(session_id=session_id, check_existence=False)
             data['devtoolsURL'] = sessions_devtool.get_urls(session_id=session_id, project_id=project_id,
-                                                            context=context)
+                                                            context=context, check_existence=False)
             data['resources'] = resources.get_by_session_id(session_id=session_id, project_id=project_id,
                                                             start_ts=data["startTs"], duration=data["duration"])
@@ -132,10 +133,11 @@ def get_replay(project_id, session_id, context: schemas.CurrentContext, full_dat
     if data["platform"] == 'ios':
         data['mobsUrl'] = sessions_mobs.get_ios(session_id=session_id)
     else:
-        data['domURL'] = sessions_mobs.get_urls(session_id=session_id, project_id=project_id)
-        data['mobsUrl'] = sessions_mobs.get_urls_depercated(session_id=session_id)
+        data['domURL'] = sessions_mobs.get_urls(session_id=session_id, project_id=project_id,
+                                                check_existence=False)
+        data['mobsUrl'] = sessions_mobs.get_urls_depercated(session_id=session_id, check_existence=False)
         data['devtoolsURL'] = sessions_devtool.get_urls(session_id=session_id, project_id=project_id,
-                                                        context=context)
+                                                        context=context, check_existence=False)
     data['metadata'] = __group_metadata(project_metadata=data.pop("projectMetadata"), session=data)
     data['live'] = live and assist.is_live(project_id=project_id, session_id=session_id,


@@ -74,3 +74,4 @@ ASSIST_JWT_EXPIRATION=144000
 ASSIST_JWT_SECRET=
 KAFKA_SERVERS=kafka.db.svc.cluster.local:9092
 KAFKA_USE_SSL=false
+SCH_DELETE_DAYS=30
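
The chalice code reads settings through python-decouple's `config()`; a stdlib-only sketch of reading the new variable with the default from the sample above (the `os.environ` lookup is an illustrative substitute for decouple):

```python
import os

def sch_delete_days(env=None):
    # Mirrors config("SCH_DELETE_DAYS", cast=int, default=30);
    # the real code uses python-decouple rather than os.environ.
    env = os.environ if env is None else env
    return int(env.get("SCH_DELETE_DAYS", 30))
```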


@@ -1,5 +1,4 @@
 from apscheduler.triggers.cron import CronTrigger
-from apscheduler.triggers.interval import IntervalTrigger
 from decouple import config

 from chalicelib.core import jobs
@@ -29,9 +28,10 @@ cron_jobs = [
     {"func": unlock_cron, "trigger": CronTrigger(day="*")},
 ]

-SINGLE_CRONS = [{"func": telemetry_cron, "trigger": CronTrigger(day_of_week="*"),
+SINGLE_CRONS = [
+    {"func": telemetry_cron, "trigger": CronTrigger(day_of_week="*"),
      "misfire_grace_time": 60 * 60, "max_instances": 1},
-    {"func": run_scheduled_jobs, "trigger": IntervalTrigger(minutes=60),
+    {"func": run_scheduled_jobs, "trigger": CronTrigger(day_of_week="*", hour=0, minute=15),
      "misfire_grace_time": 20, "max_instances": 1},
     {"func": weekly_report, "trigger": CronTrigger(day_of_week="mon", hour=5),
      "misfire_grace_time": 60 * 60, "max_instances": 1}


@@ -13,6 +13,7 @@ import PlayerContent from './Player/ReplayPlayer/PlayerContent';
 import { IPlayerContext, PlayerContext, defaultContextValue } from './playerContext';
 import { observer } from 'mobx-react-lite';
 import { Note } from "App/services/NotesService";
+import { useParams } from 'react-router-dom'

 const TABS = {
   EVENTS: 'User Steps',
@@ -35,6 +36,7 @@ function WebPlayer(props: any) {
   // @ts-ignore
   const [contextValue, setContextValue] = useState<IPlayerContext>(defaultContextValue);
   let playerInst: IPlayerContext['player'];
+  const params: { sessionId: string } = useParams()

   useEffect(() => {
     if (!session.sessionId || contextValue.player !== undefined) return;
@@ -91,13 +93,14 @@ function WebPlayer(props: any) {
   // LAYOUT (TODO: local layout state - useContext or something..)
   useEffect(
     () => () => {
+      console.debug('cleaning up player after', params.sessionId)
       toggleFullscreen(false);
       closeBottomBlock();
       playerInst?.clean();
       // @ts-ignore
       setContextValue(defaultContextValue);
     },
-    []
+    [params.sessionId]
   );

   const onNoteClose = () => {


@@ -20,7 +20,9 @@ check_prereq() {
 chart=frontend
 [[ $1 == ee ]] && ee=true
 [[ $PATCH -eq 1 ]] && {
-    image_tag="$(grep -ER ^.ppVersion ../scripts/helmcharts/openreplay/charts/$chart | xargs | awk '{print $2}' | awk -F. -v OFS=. '{$NF += 1 ; print}')"
+    __app_version="$(grep -ER ^.ppVersion ../scripts/helmcharts/openreplay/charts/${chart} | xargs | awk '{print $2}' | awk -F. -v OFS=. '{$NF += 1 ; print}' | cut -d 'v' -f2)"
+    sed -i "s/^VERSION = .*/VERSION = $__app_version/g" .env.sample
+    image_tag="v${__app_version}"
     [[ $ee == "true" ]] && {
         image_tag="${image_tag}-ee"
     }
@@ -30,8 +32,9 @@ update_helm_release() {
     # Update the chart version
     sed -i "s#^version.*#version: $HELM_TAG# g" ../scripts/helmcharts/openreplay/charts/$chart/Chart.yaml
     # Update image tags
-    sed -i "s#ppVersion.*#ppVersion: \"$image_tag\"#g" ../scripts/helmcharts/openreplay/charts/$chart/Chart.yaml
+    sed -i "s#ppVersion.*#ppVersion: \"v${__app_version}\"#g" ../scripts/helmcharts/openreplay/charts/$chart/Chart.yaml
     # Commit the changes
+    git add .env.sample
     git add ../scripts/helmcharts/openreplay/charts/$chart/Chart.yaml
     git commit -m "chore(helm): Updating $chart image release"
 }
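
The updated script derives the next patch version with `awk -F. -v OFS=. '{$NF += 1; print}'` and strips the leading `v` with `cut -d 'v' -f2`. The same transformation, mirrored in Python for clarity:

```python
def bump_patch(pp_version):
    # Mirrors: awk -F. -v OFS=. '{$NF += 1; print}' | cut -d 'v' -f2
    parts = pp_version.split(".")
    parts[-1] = str(int(parts[-1]) + 1)       # increment last dotted field
    return ".".join(parts).split("v", 1)[-1]  # drop the leading "v"

app_version = bump_patch("v1.11.7")
image_tag = f"v{app_version}"
```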


@@ -84,4 +84,4 @@ nodeSelector: {}
 tolerations: []
 affinity: {}
-storageSize: 100G
+storageSize: 100Gi
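
The suffix change matters because Kubernetes quantities treat `G` as decimal (10^9 bytes) and `Gi` as binary (2^30 bytes); at 100 units the difference is roughly 6.9 GiB of capacity:

```python
G = 10**9    # decimal gigabyte, Kubernetes suffix "G"
Gi = 2**30   # binary gibibyte, Kubernetes suffix "Gi"

# What a "100G" volume loses relative to "100Gi"
shortfall_bytes = 100 * Gi - 100 * G
shortfall_gib = shortfall_bytes / Gi
```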


@@ -186,6 +186,12 @@ function or_helm_upgrade() {
 function upgrade_old() {
     old_vars_path="$1"
+    [[ -f $old_vars_path ]] || log err "No configuration file ${BWHITE}$old_vars_path${RED}.
+    If you're updating from version older than ${BWHITE}v1.10.0${RED}, for example ${BWHITE}v1.9.0${RED}:
+    ${BWHITE}openreplay --deprecated-upgrade ~/openreplay_v1.9.0/scripts/helmcharts/vars.yaml${RED}.
+    If you're having a custom installation,
+    ${BWHITE}openreplay --deprecated-upgrade /path/to/vars.yaml${RED}.
+    "
     or_version=$(busybox awk '/fromVersion/{print $2}' < "${old_vars_path}")
     sudo cp "${old_vars_path}" ${OR_DIR}/vars.yaml.backup."${or_version//\"}"_"$(date +%Y%m%d-%H%M%S)" || log err "Not able to copy old vars.yaml"
     sudo cp "${old_vars_path}" ${OR_DIR}/vars.yaml || log err "Not able to copy old vars.yaml"
@@ -266,6 +272,12 @@ function upgrade() {
     # 3. How to update package. Because openreplay -u will be done from old update script
     # 4. Update from Version
     exists git || log err "Git not found. Please install"
+    [[ -f ${OR_DIR}/vars.yaml ]] || log err "No configuration file ${BWHITE}${OR_DIR}/vars.yaml${RED}.
+    If you're updating from version older than ${BWHITE}v1.10.0${RED}, for example ${BWHITE}v1.9.0${RED}:
+    ${BWHITE}openreplay --deprecated-upgrade ~/openreplay_v1.9.0/scripts/helmcharts/vars.yaml${RED}.
+    If you're having a custom installation,
+    ${BWHITE}openreplay --deprecated-upgrade /path/to/vars.yaml${RED}.
+    "
     or_version=$(busybox awk '/fromVersion/{print $2}' < "${OR_DIR}/vars.yaml") || {
         log err "${BWHITE}${OR_DIR}/vars.yaml${RED} not found.
         Please do ${BWHITE}openreplay --deprecated-upgrade /path/to/vars.yaml${RED}


@@ -15,10 +15,10 @@ type: application
 # This is the chart version. This version number should be incremented each time you make changes
 # to the chart and its templates, including the app version.
 # Versions are expected to follow Semantic Versioning (https://semver.org/)
-version: 0.1.7
+version: 0.1.11
 # This is the version number of the application being deployed. This version number should be
 # incremented each time you make changes to the application. Versions are not expected to
 # follow Semantic Versioning. They should reflect the version the application is using.
 # It is recommended to use it with quotes.
-AppVersion: "v1.11.7"
+AppVersion: "v1.11.11"


@@ -43,10 +43,9 @@ spec:
         {{- .Values.healthCheck | toYaml | nindent 10}}
         {{- end}}
         env:
+          {{- include "openreplay.env.redis_string" .Values.global.redis | nindent 12 }}
          - name: KAFKA_SERVERS
            value: "{{ .Values.global.kafka.kafkaHost }}"
-         - name: REDIS_STRING
-           value: "{{ .Values.global.redis.redisHost }}"
          - name: ch_username
            value: "{{ .Values.global.clickhouse.username }}"
          - name: ch_password
@@ -114,6 +113,8 @@ spec:
            value: '{{ .Values.global.s3.region }}'
          - name: sessions_region
            value: '{{ .Values.global.s3.region }}'
+         - name: ASSIST_RECORDS_BUCKET
+           value: {{ .Values.global.s3.assistRecordsBucket }}
          - name: sessions_bucket
            value: {{ .Values.global.s3.recordingsBucket }}
          - name: sourcemaps_bucket


@@ -44,7 +44,7 @@ spec:
         {{- end}}
         env:
          - name: CH_USERNAME
-           value: '{{ .Values.global.clickhouse.userame }}'
+           value: '{{ .Values.global.clickhouse.username }}'
          - name: CH_PASSWORD
           value: '{{ .Values.global.clickhouse.password }}'
          - name: CLICKHOUSE_STRING


@@ -15,10 +15,10 @@ type: application
 # This is the chart version. This version number should be incremented each time you make changes
 # to the chart and its templates, including the app version.
 # Versions are expected to follow Semantic Versioning (frontends://semver.org/)
-version: 0.1.7
+version: 0.1.8
 # This is the version number of the application being deployed. This version number should be
 # incremented each time you make changes to the application. Versions are not expected to
 # follow Semantic Versioning. They should reflect the version the application is using.
 # It is recommended to use it with quotes.
-AppVersion: "v1.11.6"
+AppVersion: "v1.11.7"


@@ -51,8 +51,7 @@ spec:
            value: '{{ .Values.global.s3.region }}'
          - name: LICENSE_KEY
            value: '{{ .Values.global.enterpriseEditionLicense }}'
-         - name: REDIS_STRING
-           value: '{{ .Values.global.redis.redisHost }}:{{ .Values.global.redis.redisPort }}'
+          {{- include "openreplay.env.redis_string" .Values.global.redis | nindent 12 }}
          - name: KAFKA_SERVERS
            value: '{{ .Values.global.kafka.kafkaHost }}:{{ .Values.global.kafka.kafkaPort }}'
          - name: KAFKA_USE_SSL


@@ -65,6 +65,7 @@ Create the name of the service account to use
 Create the environment configuration for REDIS_STRING
 */}}
 {{- define "openreplay.env.redis_string" -}}
+{{- if .enabled }}
 {{- $scheme := (eq (.tls | default dict).enabled true) | ternary "rediss" "redis" -}}
 {{- $auth := "" -}}
 {{- if or .existingSecret .redisPassword -}}
@@ -83,6 +84,7 @@ Create the environment configuration for REDIS_STRING
 - name: REDIS_STRING
   value: '{{ $scheme }}://{{ $auth }}{{ .redisHost }}:{{ .redisPort }}'
 {{- end }}
+{{- end }}

 {{/*
 Create the volume mount config for redis TLS certificates
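
The guarded helper renders nothing when Redis is disabled, picks a scheme based on TLS, and prepends auth only when a password is set. The same branching, sketched in Python (the Helm template is the source of truth; secret-based auth is simplified to a plain password here):

```python
def redis_string(enabled, host, port, password=None, tls=False):
    # Mirrors the template: emit nothing when redis is disabled,
    # use rediss:// for TLS, prepend ":password@" only when set.
    if not enabled:
        return None
    scheme = "rediss" if tls else "redis"
    auth = f":{password}@" if password else ""
    return f"{scheme}://{auth}{host}:{port}"

url = redis_string(True, "redis-master.db.svc.cluster.local", 6379)
```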


@@ -50,7 +50,7 @@ kafka: &kafka
   #   value: "3000000"

 redis: &redis
-  # enabled: false
+  enabled: true
   redisHost: "redis-master.db.svc.cluster.local"
   redisPort: "6379"
@@ -117,6 +117,7 @@ global:
     assetsBucket: "sessions-assets"
     recordingsBucket: "mobs"
     sourcemapsBucket: "sourcemaps"
+    assistRecordsBucket: "records"
     vaultBucket: "vault-data"
     # This is only for enterpriseEdition
     quickwitBucket: "quickwit"

(Image file changed; diff suppressed. Before: 570 KiB, after: 569 KiB.)

static/replay-thumbnail.svg — new file, 148 lines (99 KiB); diff suppressed.