Compare commits

...

53 commits

Author SHA1 Message Date
snyk-bot
8d4bdd1bc1
fix: ee/connectors/deploy/requirements_snowflake.txt to reduce vulnerabilities
The following vulnerabilities are fixed by pinning transitive dependencies:
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-6050294
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-6126975
2024-01-26 18:18:03 +00:00
Savinien Barbotaud
61855d15c4
Fix docker compose local network (#1809)
* fix #1502  docker-compose in local network

* fix: docker-compose images versions

* fix CADDY_DOMAIN and chalice env

* add chalice line

* domain name again

* add caddy to common.env

* remove chalice variable is_dns_public to SKIP_H_SSL
2024-01-26 14:29:50 +01:00
rjshrjndrn
983672feb5 fix(backend): gssapi 2024-01-25 19:01:28 +00:00
rjshrjndrn
713aa20821 chore(helm): Updating frontend image release 2024-01-25 18:35:55 +00:00
rjshrjndrn
c8e30bcab7 chore(helm): Updating frontend image release 2024-01-25 18:30:53 +00:00
Delirium
d39c00be09
fix(ui): fix canvas replay walker cb (#1855) 2024-01-25 17:00:05 +01:00
Shekar Siri
a808200526
fix(ui): fix canvas ts comparison (#1851)
Co-authored-by: nick-delirium <nikita@openreplay.com>
2024-01-23 17:54:31 +01:00
Kraiem Taha Yassine
9453d25213
refactor(DB): stop DB init script if the DB already exists (#1850) 2024-01-23 11:57:36 +01:00
rjshrjndrn
58a588ea0d chore(helm): Updating frontend image release 2024-01-19 20:11:02 +00:00
Shekar Siri
dc442f5363
fix(ui): elastic config host validation (#1840) 2024-01-19 15:11:01 +01:00
rjshrjndrn
4b2e8b4407 openreplay-cli: install k9s
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2024-01-17 22:27:02 +01:00
rjshrjndrn
ba3af74437 chore(helm): Updating http image release 2024-01-17 18:35:38 +00:00
rjshrjndrn
81a28ab1e3 chore(helm): Updating ender image release 2024-01-17 18:34:01 +00:00
rjshrjndrn
09e8b25c08 chore(helm): Updating imagestorage image release 2024-01-17 18:28:50 +00:00
rjshrjndrn
e3eb018987 chore(helm): Updating videostorage image release 2024-01-17 18:02:49 +00:00
Alexander
9dac016e12
Patch/canvas perf upgrade (#1837)
* Canvas refactoring (#1836)

* feat(backend): added new topic for canvasToVideo communication, moved some logic

* feat(backend): enabled canvas recording

* feat(backend): fixed canvas service main logic

* feat(backend): fixed sessionEnd detector

* feat(backend): send canvas mix lists instead of just a sessID

* feat(backend): enabled canvas recording

* feat(backend): removed old logs from video-storage

* feat(backend): default low setting for canvas recording

* feat(backend): uncomment mobile logic
2024-01-17 14:55:19 +01:00
Mehdi Osman
b910d1235a
Added mobile support 2024-01-16 19:44:20 -05:00
rjshrjndrn
061179cde7 chore(helm): Updating chalice image release 2024-01-15 10:30:09 +00:00
hawbaker
1e4488ca4d
use smtpilb send_message #1829 (#1830)
fixes blank messages due to encoding problem
2024-01-15 11:00:56 +01:00
Mehdi Osman
557e60b1b7
Updated hero 2024-01-12 10:02:30 -05:00
Alexander
ddd3ce977a
feat(build): added GSSAPI option to dockerfile. (#1827) (#1828) 2024-01-12 14:58:46 +01:00
rjshrjndrn
790e1001b7 chore(helm): Updating chalice image release 2024-01-10 13:51:59 +00:00
Kraiem Taha Yassine
b707725906
fix(chalice): return domURL for mobile sessions (#1824) 2024-01-10 14:46:34 +01:00
rjshrjndrn
f7198e391d fix: install k9s
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
2024-01-08 13:04:04 +01:00
rjshrjndrn
c8fb77ad27 chore(helm): Updating chalice image release 2024-01-04 20:54:49 +00:00
Kraiem Taha Yassine
735af9a008
fix(chalice): fixed canvas default pattern (#1816) 2024-01-04 21:51:22 +01:00
rjshrjndrn
66637147c6 chore(helm): Updating chalice image release 2024-01-03 17:31:29 +00:00
Kraiem Taha Yassine
19d794225d
fix(chalice): fix table of URLs wrong values (#1815) 2024-01-03 18:16:37 +01:00
rjshrjndrn
aa52434780 chore(helm): Updating chalice image release 2023-12-21 11:13:06 +00:00
Philippe Vezina
f52d5f021e
fix: invitation password set (#1795) 2023-12-21 11:42:20 +01:00
rjshrjndrn
7e672e2315 chore(helm): Updating chalice image release 2023-12-19 17:20:40 +00:00
Kraiem Taha Yassine
1fb852590c
fix(chalice): fixed reset&update password at the same time (#1790) 2023-12-19 18:06:57 +01:00
rjshrjndrn
b5375df6e1 chore(helm): Updating chalice image release 2023-12-19 16:53:00 +00:00
rjshrjndrn
495038f5bd chore(helm): Updating frontend image release 2023-12-19 16:53:00 +00:00
Shekar Siri
ec4d1ec9a5
fix(ui): catch 5xx errors (#1788)
* fix(ui): xray check for data load

* fix(ui): api client catch errors
2023-12-19 17:39:43 +01:00
Kraiem Taha Yassine
77281ebd3e
fix(chalice): fixed reset&update password at the same time (#1786) 2023-12-19 17:17:30 +01:00
Shekar Siri
aeea4e50aa
fix(ui): xray check for data load (#1785) 2023-12-19 15:58:52 +01:00
rjshrjndrn
495927a717 chore(helm): Updating frontend image release 2023-12-15 21:58:03 +00:00
rjshrjndrn
7262fd2220 chore(helm): Updating chalice image release 2023-12-15 21:50:21 +00:00
Shekar Siri
0905726474
fix(ui): filter check for key and type (#1782) 2023-12-15 20:07:33 +01:00
rjshrjndrn
3ae4983154 chore(helm): Updating chalice image release 2023-12-15 12:51:31 +00:00
Kraiem Taha Yassine
ece2631c60
fix(chalice): fixed wrong schema transformer (#1780) 2023-12-15 13:37:13 +01:00
rjshrjndrn
48954352fe chore(helm): Updating chalice image release 2023-12-14 17:02:35 +00:00
rjshrjndrn
d3c18f9af6 chore(helm): Updating alerts image release 2023-12-14 16:56:10 +00:00
rjshrjndrn
bd391ca935 chore(helm): Updating alerts image release 2023-12-14 16:56:10 +00:00
rjshrjndrn
362133f110 chore(helm): Updating chalice image release 2023-12-14 16:56:10 +00:00
Kraiem Taha Yassine
dcf6d24abd
fix(chalice): fix experimental sessions search with negative events and performance filters at the same time (#1777) 2023-12-14 17:55:14 +01:00
Kraiem Taha Yassine
b2ac6ba0f8
Crons v1.16.0 (#1776)
* refactor(chalice): moved db_request_handler to utils package

* refactor(chalice): moved db_request_handler to utils package
fix(chalice): supported usability tests in EE

* refactor(crons): changed assist_events_aggregates_cron to have only 1 execution every hour
refactor(crons): optimized assist_events_aggregates_cron to use only 1 DB cursor for successive queries
2023-12-13 18:06:33 +01:00
rjshrjndrn
34729e87ff chore(helm): Updating chalice image release 2023-12-13 14:44:08 +00:00
Kraiem Taha Yassine
74950dbe72
patch Api v1.16.0 (#1774)
* refactor(chalice): moved db_request_handler to utils package

* refactor(chalice): moved db_request_handler to utils package
fix(chalice): supported usability tests in EE
2023-12-13 14:49:51 +01:00
rjshrjndrn
82943ab19b chore(helm): Updating frontend image release 2023-12-13 12:27:36 +00:00
nick-delirium
be1ae8e89e fix(ui): change env.sample 2023-12-13 13:22:46 +01:00
rjshrjndrn
d17a32af30 upgrade: fix scripts 2023-12-13 09:26:01 +01:00
55 changed files with 950 additions and 676 deletions

View file

@ -38,15 +38,11 @@
</a>
</p>
<p align="center">
<a href="https://github.com/openreplay/openreplay">
<img src="static/openreplay-git-hero.svg">
</a>
</p>
https://github.com/openreplay/openreplay/assets/20417222/684133c4-575a-48a7-aa91-d4bf88c5436a
OpenReplay is a session replay suite you can host yourself, that lets you see what users do on your web app, helping you troubleshoot issues faster.
OpenReplay is a session replay suite you can host yourself, that lets you see what users do on your web and mobile apps, helping you troubleshoot issues faster.
- **Session replay.** OpenReplay replays what users do, but not only. It also shows you what went under the hood, how your website or app behaves by capturing network activity, console logs, JS errors, store actions/state, page speed metrics, cpu/memory usage and much more.
- **Session replay.** OpenReplay replays what users do, but not only. It also shows you what went under the hood, how your website or app behaves by capturing network activity, console logs, JS errors, store actions/state, page speed metrics, cpu/memory usage and much more. In addition to web applications, iOS and React Native apps are also supported (Android and Flutter are coming out soon).
- **Low footprint**. With a ~26KB (.br) tracker that asynchronously sends minimal data for a very limited impact on performance.
- **Self-hosted**. No more security compliance checks, 3rd-parties processing user data. Everything OpenReplay captures stays in your cloud for a complete control over your data.
- **Privacy controls**. Fine-grained security features for sanitizing user data.

View file

@ -20,7 +20,7 @@ def get_canvas_presigned_urls(session_id, project_id):
"projectId": project_id,
"recordingId": rows[i]["recording_id"]
}
key = config("CANVAS_PATTERN", default="%(sessionId)/%(recordingId)s.mp4") % params
key = config("CANVAS_PATTERN", default="%(sessionId)s/%(recordingId)s.mp4") % params
rows[i] = StorageClient.get_presigned_url_for_sharing(
bucket=config("CANVAS_BUCKET", default=config("sessions_bucket")),
expires_in=config("PRESIGNED_URL_EXPIRATION", cast=int, default=900),
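Why the one-character change above matters: Python's %-style formatting requires a conversion character after each mapping key, so the old default pattern cannot be rendered at all. A standalone sketch with illustrative values (not repo data):

# "%(sessionId)" without the trailing "s" is not a valid placeholder, so the
# old default raises ValueError instead of building the object key.
params = {"sessionId": 7325001, "recordingId": 42}  # illustrative values

print("%(sessionId)s/%(recordingId)s.mp4" % params)  # patched default -> "7325001/42.mp4"

try:
    "%(sessionId)/%(recordingId)s.mp4" % params  # old default
except ValueError as err:
    print(err)  # unsupported format character '/' (0x2f) ...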

View file

@ -122,12 +122,10 @@ def get_replay(project_id, session_id, context: schemas.CurrentContext, full_dat
data = helper.dict_to_camel_case(data)
if full_data:
if data["platform"] == 'ios':
data['domURL'] = []
data['mobsUrl'] = []
data['videoURL'] = sessions_mobs.get_ios_videos(session_id=session_id, project_id=project_id,
check_existence=False)
else:
data['domURL'] = sessions_mobs.get_urls(session_id=session_id, project_id=project_id,
check_existence=False)
data['mobsUrl'] = sessions_mobs.get_urls_depercated(session_id=session_id, check_existence=False)
data['devtoolsURL'] = sessions_devtool.get_urls(session_id=session_id, project_id=project_id,
check_existence=False)
@ -139,6 +137,8 @@ def get_replay(project_id, session_id, context: schemas.CurrentContext, full_dat
else:
data['utxVideo'] = []
data['domURL'] = sessions_mobs.get_urls(session_id=session_id, project_id=project_id,
check_existence=False)
data['metadata'] = __group_metadata(project_metadata=data.pop("projectMetadata"), session=data)
data['live'] = live and assist.is_live(project_id=project_id, session_id=session_id,
project_key=data["projectKey"])

View file

@ -2,7 +2,7 @@ import logging
from fastapi import HTTPException, status
from chalicelib.core.db_request_handler import DatabaseRequestHandler
from chalicelib.utils.db_request_handler import DatabaseRequestHandler
from chalicelib.core.usability_testing.schema import UTTestCreate, UTTestSearch, UTTestUpdate
from chalicelib.utils.TimeUTC import TimeUTC
from chalicelib.utils.helper import dict_to_camel_case, list_to_camel_case

View file

@ -125,6 +125,10 @@ def update(tenant_id, user_id, changes, output=True):
if key == "password":
sub_query_bauth.append("password = crypt(%(password)s, gen_salt('bf', 12))")
sub_query_bauth.append("changed_at = timezone('utc'::text, now())")
sub_query_bauth.append("change_pwd_expire_at = NULL")
sub_query_bauth.append("change_pwd_token = NULL")
sub_query_bauth.append("invitation_token = NULL")
sub_query_bauth.append("invited_at = NULL")
else:
sub_query_bauth.append(f"{helper.key_to_snake_case(key)} = %({key})s")
else:
@ -445,9 +449,7 @@ def change_password(tenant_id, user_id, email, old_password, new_password):
def set_password_invitation(user_id, new_password):
changes = {"password": new_password,
"invitationToken": None, "invitedAt": None,
"changePwdExpireAt": None, "changePwdToken": None}
changes = {"password": new_password}
user = update(tenant_id=-1, user_id=user_id, changes=changes)
r = authenticate(user['email'], new_password)

View file

@ -69,7 +69,7 @@ def send_html(BODY_HTML, SUBJECT, recipient, bcc=None):
r += [bcc]
try:
logging.info(f"Email sending to: {r}")
s.sendmail(msg['FROM'], r, msg.as_string().encode('ascii'))
s.send_message(msg)
except Exception as e:
logging.error("!!! Email error!")
logging.error(e)
@ -84,7 +84,7 @@ def send_text(recipients, text, subject):
body = MIMEText(text)
msg.attach(body)
try:
s.sendmail(msg['FROM'], recipients, msg.as_string().encode('ascii'))
s.send_message(msg)
except Exception as e:
logging.error("!! Text-email failed: " + subject),
logging.error(e)
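Context for the send_message switch (commit 1e4488ca4d, #1829/#1830): smtplib.SMTP.send_message lets the email package serialize the message and choose transfer encodings, instead of the hand-rolled msg.as_string().encode('ascii') that was tied to the blank-body reports. A hedged, standalone sketch; the addresses and the local test server are assumptions, not repo configuration:

import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart()
msg["From"] = "noreply@example.com"   # illustrative addresses
msg["To"] = "user@example.com"
msg["Subject"] = "Weekly report"
msg.attach(MIMEText("<p>Séance terminée ✔</p>", "html", "utf-8"))  # non-ASCII body

# send_message() serializes the message itself with appropriate encodings,
# rather than forcing an ASCII re-encode of the whole payload.
with smtplib.SMTP("localhost", 1025) as s:  # assumes a local test SMTP server
    s.send_message(msg)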

View file

@ -10,6 +10,8 @@ class EmptySMTP:
def sendmail(self, from_addr, to_addrs, msg, mail_options=(), rcpt_options=()):
logging.error("!! CANNOT SEND EMAIL, NO VALID SMTP CONFIGURATION FOUND")
def send_message(self, msg):
self.sendmail( msg["FROM"], msg["TO"], msg.as_string() )
class SMTPClient:
server = None

View file

@ -10,7 +10,7 @@ def transform_email(email: str) -> str:
def int_to_string(value: int) -> str:
return str(value) if isinstance(value, int) else int
return str(value) if isinstance(value, int) else value
def remove_whitespace(value: str) -> str:

View file

@ -1,8 +1,15 @@
# GSSAPI = true to enable Kerberos auth for Kafka and manually build librdkafka with GSSAPI support
ARG GSSAPI=false
#ARCH can be amd64 or arm64
ARG ARCH=amd64
FROM --platform=linux/$ARCH golang:1.21-alpine3.18 AS build
RUN apk add --no-cache gcc g++ make libc-dev
RUN if [ "$GSSAPI" = "true" ]; then \
apk add --no-cache git openssh openssl-dev pkgconf gcc g++ make libc-dev bash librdkafka-dev cyrus-sasl cyrus-sasl-gssapiv2 krb5; \
else \
apk add --no-cache gcc g++ make libc-dev; \
fi
WORKDIR /root
# Load code dependencies
@ -17,18 +24,29 @@ COPY internal internal
# Build service
ARG SERVICE_NAME
RUN CGO_ENABLED=1 GOOS=linux GOARCH=$ARCH go build -o service -tags musl openreplay/backend/cmd/$SERVICE_NAME
RUN if [ "$GSSAPI" = "true" ]; then \
CGO_ENABLED=1 GOOS=linux GOARCH=$ARCH go build -o service -tags dynamic openreplay/backend/cmd/$SERVICE_NAME; \
else \
CGO_ENABLED=1 GOOS=linux GOARCH=$ARCH go build -o service -tags musl openreplay/backend/cmd/$SERVICE_NAME; \
fi
FROM --platform=linux/$ARCH alpine AS entrypoint
ARG GIT_SHA
ARG GSSAPI=false
LABEL GIT_SHA=$GIT_SHA
LABEL GSSAPI=$GSSAPI
RUN apk add --no-cache ca-certificates cyrus-sasl cyrus-sasl-gssapiv2 krb5
RUN if [ "$GSSAPI" = "true" ]; then \
apk add --no-cache ca-certificates librdkafka-dev cyrus-sasl cyrus-sasl-gssapiv2 krb5; \
else \
apk add --no-cache ca-certificates cyrus-sasl cyrus-sasl-gssapiv2 krb5; \
fi
RUN adduser -u 1001 openreplay -D
ARG SERVICE_NAME
ENV TZ=UTC \
GIT_SHA=$GIT_SHA \
GSSAPI=$GSSAPI \
FS_ULIMIT=10000 \
FS_DIR=/mnt/efs \
MAXMINDDB_FILE=/home/openreplay/geoip.mmdb \
@ -57,6 +75,7 @@ ENV TZ=UTC \
TOPIC_TRIGGER=trigger \
TOPIC_MOBILE_TRIGGER=mobile-trigger \
TOPIC_CANVAS_IMAGES=canvas-images \
TOPIC_CANVAS_TRIGGER=canvas-trigger \
GROUP_SINK=sink \
GROUP_STORAGE=storage \
GROUP_DB=db \
@ -97,7 +116,8 @@ ENV TZ=UTC \
# Use to set compression threshold for tracker requests (20kb by default)
COMPRESSION_THRESHOLD="20000" \
# Set Access-Control-* headers for tracker requests if true
USE_CORS=false
USE_CORS=false \
RECORD_CANVAS=true
RUN if [ "$SERVICE_NAME" = "http" ]; then \

View file

@ -46,7 +46,7 @@ update_helm_release() {
function build_service() {
image="$1"
echo "BUILDING $image"
docker build -t ${DOCKER_REPO:-'local'}/$image:${image_tag} --platform linux/$arch --build-arg ARCH=$arch --build-arg SERVICE_NAME=$image --build-arg GIT_SHA=$git_sha .
docker build -t ${DOCKER_REPO:-'local'}/$image:${image_tag} --platform linux/$arch --build-arg ARCH=$arch --build-arg SERVICE_NAME=$image --build-arg GIT_SHA=$git_sha --build-arg GSSAPI=${GSSAPI:-'false'} .
[[ $PUSH_IMAGE -eq 1 ]] && {
docker push ${DOCKER_REPO:-'local'}/$image:${image_tag}
}

View file

@ -181,9 +181,13 @@ func main() {
}
} else {
if err := producer.Produce(cfg.TopicRawWeb, sessionID, msg.Encode()); err != nil {
log.Printf("can't send sessionEnd to topic: %s; sessID: %d", err, sessionID)
log.Printf("can't send sessionEnd to raw topic: %s; sessID: %d", err, sessionID)
return false, 0
}
// Inform canvas service about session end
if err := producer.Produce(cfg.TopicCanvasImages, sessionID, msg.Encode()); err != nil {
log.Printf("can't send sessionEnd signal to canvas topic: %s; sessID: %d", err, sessionID)
}
}
if currDuration != 0 {

View file

@ -1,6 +1,7 @@
package main
import (
"fmt"
"log"
"os"
"os/signal"
@ -29,6 +30,8 @@ func main() {
return
}
producer := queue.NewProducer(cfg.MessageSizeLimit, true)
consumer := queue.NewConsumer(
cfg.GroupImageStorage,
[]string{
@ -49,8 +52,39 @@ func main() {
cfg.TopicCanvasImages,
},
messages.NewImagesMessageIterator(func(data []byte, sessID uint64) {
if err := srv.ProcessCanvas(sessID, data); err != nil {
log.Printf("can't process canvas image: %s", err)
checkSessionEnd := func(data []byte) (messages.Message, error) {
reader := messages.NewBytesReader(data)
msgType, err := reader.ReadUint()
if err != nil {
return nil, err
}
if msgType != messages.MsgSessionEnd {
return nil, fmt.Errorf("not a session end message")
}
msg, err := messages.ReadMessage(msgType, reader)
if err != nil {
return nil, fmt.Errorf("read message err: %s", err)
}
return msg, nil
}
if msg, err := checkSessionEnd(data); err == nil {
sessEnd := msg.(*messages.SessionEnd)
// Received session end
if list, err := srv.PrepareCanvas(sessID); err != nil {
log.Printf("can't prepare canvas: %s", err)
} else {
for _, name := range list {
sessEnd.EncryptionKey = name
if err := producer.Produce(cfg.TopicCanvasTrigger, sessID, sessEnd.Encode()); err != nil {
log.Printf("can't send session end signal to video service: %s", err)
}
}
}
} else {
if err := srv.ProcessCanvas(sessID, data); err != nil {
log.Printf("can't process canvas image: %s", err)
}
}
}, nil, true),
false,
@ -68,7 +102,9 @@ func main() {
case sig := <-sigchan:
log.Printf("Caught signal %v: terminating\n", sig)
srv.Wait()
// close all consumers
consumer.Close()
canvasConsumer.Close()
os.Exit(0)
case <-counterTick:
srv.Wait()
@ -80,6 +116,8 @@ func main() {
}
case msg := <-consumer.Rebalanced():
log.Println(msg)
case msg := <-canvasConsumer.Rebalanced():
log.Println(msg)
default:
err := consumer.ConsumeNext()
if err != nil {

View file

@ -47,7 +47,7 @@ func main() {
func(msg messages.Message) {
sesEnd := msg.(*messages.IOSSessionEnd)
log.Printf("recieved mobile session end: %d", sesEnd.SessionID())
if err := srv.Process(sesEnd.SessionID(), workDir+"/screenshots/"+strconv.FormatUint(sesEnd.SessionID(), 10)+"/", false); err != nil {
if err := srv.Process(sesEnd.SessionID(), workDir+"/screenshots/"+strconv.FormatUint(sesEnd.SessionID(), 10)+"/", ""); err != nil {
log.Printf("upload session err: %s, sessID: %d", err, msg.SessionID())
}
},
@ -61,12 +61,18 @@ func main() {
canvasConsumer := queue.NewConsumer(
cfg.GroupVideoStorage,
[]string{
cfg.TopicTrigger,
cfg.TopicCanvasTrigger,
},
messages.NewMessageIterator(
func(msg messages.Message) {
sesEnd := msg.(*messages.SessionEnd)
if err := srv.Process(sesEnd.SessionID(), workDir+"/canvas/"+strconv.FormatUint(sesEnd.SessionID(), 10)+"/", true); err != nil {
filePath := workDir + "/canvas/" + strconv.FormatUint(sesEnd.SessionID(), 10) + "/"
canvasMix := sesEnd.EncryptionKey // dirty hack to use encryption key as canvas mix holder (only between canvas handler and canvas maker)
if canvasMix == "" {
log.Printf("no canvas mix for session: %d", sesEnd.SessionID())
return
}
if err := srv.Process(sesEnd.SessionID(), filePath, canvasMix); err != nil {
if !strings.Contains(err.Error(), "no such file or directory") {
log.Printf("upload session err: %s, sessID: %d", err, msg.SessionID())
}
@ -91,6 +97,7 @@ func main() {
log.Printf("Caught signal %v: terminating\n", sig)
srv.Wait()
consumer.Close()
canvasConsumer.Close()
os.Exit(0)
case <-counterTick:
srv.Wait()
@ -102,6 +109,8 @@ func main() {
}
case msg := <-consumer.Rebalanced():
log.Println(msg)
case msg := <-canvasConsumer.Rebalanced():
log.Println(msg)
default:
err = consumer.ConsumeNext()
if err != nil {

View file

@ -16,6 +16,7 @@ type Config struct {
LoggerTimeout int `env:"LOG_QUEUE_STATS_INTERVAL_SEC,required"`
TopicRawWeb string `env:"TOPIC_RAW_WEB,required"`
TopicRawIOS string `env:"TOPIC_RAW_IOS,required"`
TopicCanvasImages string `env:"TOPIC_CANVAS_IMAGES,required"`
ProducerTimeout int `env:"PRODUCER_TIMEOUT,default=2000"`
PartitionsNumber int `env:"PARTITIONS_NUMBER,required"`
UseEncryption bool `env:"USE_ENCRYPTION,default=false"`

View file

@ -32,8 +32,8 @@ type Config struct {
UseAccessControlHeaders bool `env:"USE_CORS,default=false"`
ProjectExpiration time.Duration `env:"PROJECT_EXPIRATION,default=10m"`
RecordCanvas bool `env:"RECORD_CANVAS,default=false"`
CanvasQuality string `env:"CANVAS_QUALITY,default=medium"`
CanvasFps int `env:"CANVAS_FPS,default=2"`
CanvasQuality string `env:"CANVAS_QUALITY,default=low"`
CanvasFps int `env:"CANVAS_FPS,default=1"`
WorkerID uint16
}

View file

@ -7,13 +7,14 @@ import (
type Config struct {
common.Config
FSDir string `env:"FS_DIR,required"`
ScreenshotsDir string `env:"SCREENSHOTS_DIR,default=screenshots"`
CanvasDir string `env:"CANVAS_DIR,default=canvas"`
TopicRawImages string `env:"TOPIC_RAW_IMAGES,required"`
TopicCanvasImages string `env:"TOPIC_CANVAS_IMAGES,required"`
GroupImageStorage string `env:"GROUP_IMAGE_STORAGE,required"`
UseProfiler bool `env:"PROFILER_ENABLED,default=false"`
FSDir string `env:"FS_DIR,required"`
ScreenshotsDir string `env:"SCREENSHOTS_DIR,default=screenshots"`
CanvasDir string `env:"CANVAS_DIR,default=canvas"`
TopicRawImages string `env:"TOPIC_RAW_IMAGES,required"`
TopicCanvasImages string `env:"TOPIC_CANVAS_IMAGES,required"`
TopicCanvasTrigger string `env:"TOPIC_CANVAS_TRIGGER,required"`
GroupImageStorage string `env:"GROUP_IMAGE_STORAGE,required"`
UseProfiler bool `env:"PROFILER_ENABLED,default=false"`
}
func New() *Config {

View file

@ -13,6 +13,7 @@ type Config struct {
GroupVideoStorage string `env:"GROUP_VIDEO_STORAGE,required"`
TopicMobileTrigger string `env:"TOPIC_MOBILE_TRIGGER,required"`
TopicTrigger string `env:"TOPIC_TRIGGER,required"`
TopicCanvasTrigger string `env:"TOPIC_CANVAS_TRIGGER,required"`
VideoReplayFPS int `env:"VIDEO_REPLAY_FPS,default=3"`
UseProfiler bool `env:"PROFILER_ENABLED,default=false"`
}

View file

@ -6,9 +6,12 @@ import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"log"
"os"
"sort"
"strconv"
"strings"
"time"
gzip "github.com/klauspost/pgzip"
@ -74,6 +77,80 @@ type ScreenshotMessage struct {
Data []byte
}
func (v *ImageStorage) PrepareCanvas(sessID uint64) ([]string, error) {
// Build the directory path to session's canvas images
path := v.cfg.FSDir + "/"
if v.cfg.CanvasDir != "" {
path += v.cfg.CanvasDir + "/"
}
path += strconv.FormatUint(sessID, 10) + "/"
// Check that the directory exists
files, err := ioutil.ReadDir(path)
if err != nil {
return nil, err
}
if len(files) == 0 {
return []string{}, nil
}
log.Printf("There are %d canvas images of session %d\n", len(files), sessID)
type canvasData struct {
files map[int]string
times []int
}
images := make(map[string]*canvasData)
// Build the list of canvas images sets
for _, file := range files {
name := strings.Split(file.Name(), ".")
parts := strings.Split(name[0], "_")
if len(name) != 2 || len(parts) != 3 {
log.Printf("unknown file name: %s, skipping", file.Name())
continue
}
canvasID := fmt.Sprintf("%s_%s", parts[0], parts[1])
canvasTS, _ := strconv.Atoi(parts[2])
if _, ok := images[canvasID]; !ok {
images[canvasID] = &canvasData{
files: make(map[int]string),
times: make([]int, 0),
}
}
images[canvasID].files[canvasTS] = file.Name()
images[canvasID].times = append(images[canvasID].times, canvasTS)
}
// Prepare screenshot lists for ffmpeg
namesList := make([]string, 0)
for name, cData := range images {
// Write to file
mixName := fmt.Sprintf("%s-list", name)
mixList := path + mixName
outputFile, err := os.Create(mixList)
if err != nil {
log.Printf("can't create mix list, err: %s", err)
continue
}
sort.Ints(cData.times)
for i := 0; i < len(cData.times)-1; i++ {
dur := float64(cData.times[i+1]-cData.times[i]) / 1000.0
line := fmt.Sprintf("file %s\nduration %.3f\n", cData.files[cData.times[i]], dur)
_, err := outputFile.WriteString(line)
if err != nil {
outputFile.Close()
log.Printf("%s", err)
continue
}
}
outputFile.Close()
log.Printf("made canvas list %s", mixList)
namesList = append(namesList, mixName)
}
return namesList, nil
}
func (v *ImageStorage) ProcessCanvas(sessID uint64, data []byte) error {
var msg = &ScreenshotMessage{}
if err := json.Unmarshal(data, msg); err != nil {
@ -81,7 +158,7 @@ func (v *ImageStorage) ProcessCanvas(sessID uint64, data []byte) error {
}
// Use the same workflow
v.writeToDiskTasks <- &Task{sessionID: sessID, images: map[string]*bytes.Buffer{msg.Name: bytes.NewBuffer(msg.Data)}, imageType: canvas}
log.Printf("new canvas image, sessID: %d, name: %s, size: %d mb", sessID, msg.Name, len(msg.Data)/1024/1024)
log.Printf("new canvas image, sessID: %d, name: %s, size: %3.3f mb", sessID, msg.Name, float64(len(msg.Data))/1024.0/1024.0)
return nil
}
@ -138,6 +215,7 @@ func (v *ImageStorage) writeToDisk(task *Task) {
}
// Write images to disk
saved := 0
for name, img := range task.images {
outFile, err := os.Create(path + name) // or open file in rewrite mode
if err != nil {
@ -147,7 +225,9 @@ func (v *ImageStorage) writeToDisk(task *Task) {
log.Printf("can't copy file: %s", err.Error())
}
outFile.Close()
saved++
}
log.Printf("saved %d images to disk", saved)
return
}

View file

@ -7,9 +7,7 @@ import (
"log"
config "openreplay/backend/internal/config/videostorage"
"openreplay/backend/pkg/objectstorage"
"os"
"os/exec"
"sort"
"strconv"
"strings"
"time"
@ -81,7 +79,7 @@ func (v *VideoStorage) makeVideo(sessID uint64, filesPath string) error {
return nil
}
func (v *VideoStorage) makeCanvasVideo(sessID uint64, filesPath string) error {
func (v *VideoStorage) makeCanvasVideo(sessID uint64, filesPath, canvasMix string) error {
files, err := ioutil.ReadDir(filesPath)
if err != nil {
return err
@ -89,61 +87,23 @@ func (v *VideoStorage) makeCanvasVideo(sessID uint64, filesPath string) error {
if len(files) == 0 {
return nil
}
log.Printf("There are %d canvas images of session %d\n", len(files), 0)
type canvasData struct {
files map[int]string
times []int
}
images := make(map[string]*canvasData)
log.Printf("There are %d mix lists of session %d\n", len(files), sessID)
for _, file := range files {
name := strings.Split(file.Name(), ".")
parts := strings.Split(name[0], "_")
if len(name) != 2 || len(parts) != 3 {
log.Printf("unknown file name: %s, skipping", file.Name())
continue
}
canvasID := fmt.Sprintf("%s_%s", parts[0], parts[1])
canvasTS, _ := strconv.Atoi(parts[2])
log.Printf("%s : %d", canvasID, canvasTS)
if _, ok := images[canvasID]; !ok {
images[canvasID] = &canvasData{
files: make(map[int]string),
times: make([]int, 0),
}
}
images[canvasID].files[canvasTS] = file.Name()
images[canvasID].times = append(images[canvasID].times, canvasTS)
}
for name, cData := range images {
// Write to file
mixList := fmt.Sprintf("%s/%s-list", filesPath, name)
outputFile, err := os.Create(mixList)
if err != nil {
log.Printf("can't create mix list, err: %s", err)
if !strings.HasSuffix(file.Name(), "-list") {
continue
}
sort.Ints(cData.times)
for i := 0; i < len(cData.times)-1; i++ {
dur := float64(cData.times[i+1]-cData.times[i]) / 1000.0
line := fmt.Sprintf("file %s\nduration %.3f\n", cData.files[cData.times[i]], dur)
_, err := outputFile.WriteString(line)
if err != nil {
outputFile.Close()
log.Printf("%s", err)
continue
}
log.Printf(line)
}
outputFile.Close()
name := strings.TrimSuffix(file.Name(), "-list")
mixList := fmt.Sprintf("%s%s-list", filesPath, name)
videoPath := fmt.Sprintf("%s%s.mp4", filesPath, name)
// Run ffmpeg to build video
start := time.Now()
sessionID := strconv.FormatUint(sessID, 10)
videoPath := fmt.Sprintf("%s/%s.mp4", filesPath, name)
cmd := exec.Command("ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", mixList, "-vsync", "vfr",
cmd := exec.Command("ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", mixList, "-vf", "pad=ceil(iw/2)*2:ceil(ih/2)*2", "-vsync", "vfr",
"-pix_fmt", "yuv420p", "-preset", "ultrafast", videoPath)
// ffmpeg -f concat -safe 0 -i input.txt -vsync vfr -pix_fmt yuv420p output.mp4
var stdout, stderr bytes.Buffer
cmd.Stdout = &stdout
@ -180,9 +140,9 @@ func (v *VideoStorage) sendToS3(task *Task) {
return
}
func (v *VideoStorage) Process(sessID uint64, filesPath string, isCanvas bool) error {
if isCanvas {
return v.makeCanvasVideo(sessID, filesPath)
func (v *VideoStorage) Process(sessID uint64, filesPath string, canvasMix string) error {
if canvasMix != "" {
return v.makeCanvasVideo(sessID, filesPath, canvasMix)
}
return v.makeVideo(sessID, filesPath)
}
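The "-list" files consumed above use ffmpeg's concat-demuxer input format, which PrepareCanvas (in imagestorage) now writes: one file/duration pair per captured frame, durations in seconds. A rough Python sketch of the same pipeline, assuming ffmpeg is installed and the listed images exist (names and timestamps are illustrative):

import subprocess

# (canvas screenshot file, capture timestamp in ms) -- illustrative values
frames = [("1_5_1000.jpg", 1000), ("1_5_1500.jpg", 1500), ("1_5_2600.jpg", 2600)]

with open("1_5-list", "w") as out:
    for (name, ts), (_, next_ts) in zip(frames, frames[1:]):
        out.write(f"file {name}\nduration {(next_ts - ts) / 1000.0:.3f}\n")

# The pad filter keeps both dimensions even, which yuv420p requires.
subprocess.run(
    ["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", "1_5-list",
     "-vf", "pad=ceil(iw/2)*2:ceil(ih/2)*2", "-vsync", "vfr",
     "-pix_fmt", "yuv420p", "-preset", "ultrafast", "1_5.mp4"],
    check=True,
)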

ee/api/.gitignore vendored
View file

@ -274,3 +274,4 @@ Pipfile.lock
/orpy.py
/chalicelib/core/usability_testing/
/NOTES.md
/chalicelib/utils/db_request_handler.py

View file

@ -52,7 +52,7 @@ async def lifespan(app: FastAPI):
await events_queue.init()
app.schedule.start()
for job in core_crons.cron_jobs + core_dynamic_crons.cron_jobs + traces.cron_jobs + ee_crons.ee_cron_jobs:
for job in core_crons.cron_jobs + core_dynamic_crons.cron_jobs + traces.cron_jobs + ee_crons.cron_jobs:
app.schedule.add_job(id=job["func"].__name__, **job)
ap_logger.info(">Scheduled jobs:")

View file

@ -1,11 +1,12 @@
import logging
from datetime import datetime
from fastapi import HTTPException
from chalicelib.utils import pg_client, helper
from chalicelib.utils.TimeUTC import TimeUTC
from schemas import AssistStatsSessionsRequest, AssistStatsSessionsResponse, AssistStatsTopMembersResponse
logger = logging.getLogger(__name__)
event_type_mapping = {
"sessionsAssisted": "assist",
"assistDuration": "assist",
@ -17,12 +18,12 @@ event_type_mapping = {
def insert_aggregated_data():
try:
logging.info("Assist Stats: Inserting aggregated data")
end_timestamp = int(datetime.timestamp(datetime.now())) * 1000
end_timestamp = TimeUTC.now()
start_timestamp = __last_run_end_timestamp_from_aggregates()
if start_timestamp is None: # first run
logging.info("Assist Stats: First run, inserting data for last 7 days")
start_timestamp = end_timestamp - (7 * 24 * 60 * 60 * 1000)
start_timestamp = end_timestamp - TimeUTC.MS_WEEK
offset = 0
chunk_size = 1000
@ -103,9 +104,8 @@ def __last_run_end_timestamp_from_aggregates():
result = cur.fetchone()
last_run_time = result['last_run_time'] if result else None
if last_run_time is None: # first run handle all data
sql = "SELECT MIN(timestamp) as last_timestamp FROM assist_events;"
with pg_client.PostgresClient() as cur:
if last_run_time is None: # first run handle all data
sql = "SELECT MIN(timestamp) as last_timestamp FROM assist_events;"
cur.execute(sql)
result = cur.fetchone()
last_run_time = result['last_timestamp'] if result else None
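A note on the timestamp change above: the cron now works entirely in epoch milliseconds via TimeUTC. A hedged sketch of what TimeUTC.now() and TimeUTC.MS_WEEK are assumed to provide (the real helpers live in chalicelib.utils.TimeUTC):

from datetime import datetime, timezone

MS_WEEK = 7 * 24 * 60 * 60 * 1000  # assumed value of TimeUTC.MS_WEEK

def now_ms() -> int:
    # assumed behaviour of TimeUTC.now(): current UTC time in epoch milliseconds
    return int(datetime.now(timezone.utc).timestamp() * 1000)

end_timestamp = now_ms()
start_timestamp = end_timestamp - MS_WEEK  # first run: aggregate the last 7 days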

View file

@ -1,10 +1,10 @@
import ast
import logging
from typing import List, Union
import schemas
from chalicelib.core import events, metadata, projects, performance_event, metrics
from chalicelib.core import events, metadata, projects, performance_event, metrics, sessions_legacy
from chalicelib.utils import pg_client, helper, metrics_helper, ch_client, exp_ch_helper
import logging
logger = logging.getLogger(__name__)
SESSION_PROJECTION_COLS_CH = """\
@ -353,6 +353,7 @@ def search2_table(data: schemas.SessionsSearchPayloadSchema, project_id: int, de
step_size = int(metrics_helper.__get_step_size(endTimestamp=data.endTimestamp, startTimestamp=data.startTimestamp,
density=density))
extra_event = None
extra_deduplication = []
if metric_of == schemas.MetricOfTable.visited_url:
extra_event = f"""SELECT DISTINCT ev.session_id, ev.url_path
FROM {exp_ch_helper.get_main_events_table(data.startTimestamp)} AS ev
@ -360,12 +361,14 @@ def search2_table(data: schemas.SessionsSearchPayloadSchema, project_id: int, de
AND ev.datetime <= toDateTime(%(endDate)s / 1000)
AND ev.project_id = %(project_id)s
AND ev.event_type = 'LOCATION'"""
extra_deduplication.append("url_path")
elif metric_of == schemas.MetricOfTable.issues and len(metric_value) > 0:
data.filters.append(schemas.SessionSearchFilterSchema(value=metric_value, type=schemas.FilterType.issue,
operator=schemas.SearchEventOperator._is))
full_args, query_part = search_query_parts_ch(data=data, error_status=None, errors_only=False,
favorite_only=False, issue=None, project_id=project_id,
user_id=None, extra_event=extra_event)
user_id=None, extra_event=extra_event,
extra_deduplication=extra_deduplication)
full_args["step_size"] = step_size
sessions = []
with ch_client.ClickHouseClient() as cur:
@ -434,7 +437,6 @@ def search_table_of_individual_issues(data: schemas.SessionsSearchPayloadSchema,
full_args["issues_limit"] = data.limit
full_args["issues_limit_s"] = (data.page - 1) * data.limit
full_args["issues_limit_e"] = data.page * data.limit
print(full_args)
main_query = cur.format(f"""SELECT issues.type AS name,
issues.context_string AS value,
COUNT(DISTINCT raw_sessions.session_id) AS session_count,
@ -519,7 +521,7 @@ def __get_event_type(event_type: Union[schemas.EventType, schemas.PerformanceEve
# this function generates the query and return the generated-query with the dict of query arguments
def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_status, errors_only, favorite_only, issue,
project_id, user_id, platform="web", extra_event=None):
project_id, user_id, platform="web", extra_event=None, extra_deduplication=[]):
ss_constraints = []
full_args = {"project_id": project_id, "startDate": data.startTimestamp, "endDate": data.endTimestamp,
"projectId": project_id, "userId": user_id}
@ -1391,15 +1393,15 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
_value_conditions_not.append(_p)
value_conditions_not.append(p)
del _value_conditions_not
sequence_conditions += value_conditions_not
# sequence_conditions += value_conditions_not
events_extra_join += f"""LEFT ANTI JOIN ( SELECT DISTINCT session_id
FROM {MAIN_EVENTS_TABLE} AS main
WHERE {' AND '.join(__events_where_basic)}
AND ({' OR '.join(value_conditions_not)})) AS sub USING(session_id)"""
# if has_values:
# events_conditions = [c for c in list(set(sequence_conditions))]
# events_conditions_where.append(f"({' OR '.join(events_conditions)})")
if has_values and len(sequence_conditions) > 0:
events_conditions = [c for c in list(set(sequence_conditions))]
events_conditions_where.append(f"({' OR '.join(events_conditions)})")
events_query_part = f"""SELECT main.session_id,
MIN(main.datetime) AS first_event_ts,
@ -1487,11 +1489,12 @@ def search_query_parts_ch(data: schemas.SessionsSearchPayloadSchema, error_statu
FROM {MAIN_SESSIONS_TABLE} AS s {extra_event}
WHERE {" AND ".join(extra_constraints)}) AS s ON(s.session_id=f.session_id)"""
else:
deduplication_keys = ["session_id"] + extra_deduplication
extra_join = f"""(SELECT *
FROM {MAIN_SESSIONS_TABLE} AS s {extra_join} {extra_event}
WHERE {" AND ".join(extra_constraints)}
ORDER BY _timestamp DESC
LIMIT 1 BY session_id) AS s"""
LIMIT 1 BY {",".join(deduplication_keys)}) AS s"""
query_part = f"""\
FROM {f"({events_query_part}) AS f" if len(events_query_part) > 0 else ""}
{extra_join}
@ -1665,3 +1668,29 @@ def check_recording_status(project_id: int) -> dict:
"recordingStatus": row["recording_status"],
"sessionsCount": row["sessions_count"]
}
# TODO: rewrite this function to use ClickHouse
def search_sessions_by_ids(project_id: int, session_ids: list, sort_by: str = 'session_id',
ascending: bool = False) -> dict:
if session_ids is None or len(session_ids) == 0:
return {"total": 0, "sessions": []}
with pg_client.PostgresClient() as cur:
meta_keys = metadata.get(project_id=project_id)
params = {"project_id": project_id, "session_ids": tuple(session_ids)}
order_direction = 'ASC' if ascending else 'DESC'
main_query = cur.mogrify(f"""SELECT {sessions_legacy.SESSION_PROJECTION_BASE_COLS}
{"," if len(meta_keys) > 0 else ""}{",".join([f'metadata_{m["index"]}' for m in meta_keys])}
FROM public.sessions AS s
WHERE project_id=%(project_id)s
AND session_id IN %(session_ids)s
ORDER BY {sort_by} {order_direction};""", params)
cur.execute(main_query)
rows = cur.fetchall()
if len(meta_keys) > 0:
for s in rows:
s["metadata"] = {}
for m in meta_keys:
s["metadata"][m["key"]] = s.pop(f'metadata_{m["index"]}')
return {"total": len(rows), "sessions": helper.list_to_camel_case(rows)}

View file

@ -134,8 +134,6 @@ def get_replay(project_id, session_id, context: schemas.CurrentContext, full_dat
data['videoURL'] = sessions_mobs.get_ios_videos(session_id=session_id, project_id=project_id,
check_existence=False)
else:
data['domURL'] = sessions_mobs.get_urls(session_id=session_id, project_id=project_id,
check_existence=False)
data['mobsUrl'] = sessions_mobs.get_urls_depercated(session_id=session_id, check_existence=False)
data['devtoolsURL'] = sessions_devtool.get_urls(session_id=session_id, project_id=project_id,
context=context, check_existence=False)
@ -147,6 +145,8 @@ def get_replay(project_id, session_id, context: schemas.CurrentContext, full_dat
else:
data['utxVideo'] = []
data['domURL'] = sessions_mobs.get_urls(session_id=session_id, project_id=project_id,
check_existence=False)
data['metadata'] = __group_metadata(project_metadata=data.pop("projectMetadata"), session=data)
data['live'] = live and assist.is_live(project_id=project_id, session_id=session_id,
project_key=data["projectKey"])

View file

@ -150,6 +150,10 @@ def update(tenant_id, user_id, changes, output=True):
if key == "password":
sub_query_bauth.append("password = crypt(%(password)s, gen_salt('bf', 12))")
sub_query_bauth.append("changed_at = timezone('utc'::text, now())")
sub_query_bauth.append("change_pwd_expire_at = NULL")
sub_query_bauth.append("change_pwd_token = NULL")
sub_query_bauth.append("invitation_token = NULL")
sub_query_bauth.append("invited_at = NULL")
else:
sub_query_bauth.append(f"{helper.key_to_snake_case(key)} = %({key})s")
else:
@ -524,9 +528,7 @@ def change_password(tenant_id, user_id, email, old_password, new_password):
def set_password_invitation(tenant_id, user_id, new_password):
changes = {"password": new_password,
"invitationToken": None, "invitedAt": None,
"changePwdExpireAt": None, "changePwdToken": None}
changes = {"password": new_password}
user = update(tenant_id=tenant_id, user_id=user_id, changes=changes)
r = authenticate(user['email'], new_password)

View file

@ -55,7 +55,7 @@ rm -rf ./chalicelib/core/user_testing.py
rm -rf ./chalicelib/saml
rm -rf ./chalicelib/utils/__init__.py
rm -rf ./chalicelib/utils/args_transformer.py
rm -rf ./chalicelib/utils/canvas.py
rm -rf ./chalicelib/core/canvas.py
rm -rf ./chalicelib/utils/captcha.py
rm -rf ./chalicelib/utils/dev.py
rm -rf ./chalicelib/utils/email_handler.py
@ -93,4 +93,5 @@ rm -rf ./schemas/overrides.py
rm -rf ./schemas/schemas.py
rm -rf ./schemas/transformers_validators.py
rm -rf ./orpy.py
rm -rf ./chalicelib/core/usability_testing/
rm -rf ./chalicelib/core/usability_testing/
rm -rf ./chalicelib/utils/db_request_handler.py

View file

@ -3,6 +3,8 @@ from apscheduler.triggers.interval import IntervalTrigger
from chalicelib.utils import events_queue
from chalicelib.core import assist_stats
from decouple import config
async def pg_events_queue() -> None:
events_queue.global_queue.force_flush()
@ -12,8 +14,14 @@ async def assist_events_aggregates_cron() -> None:
assist_stats.insert_aggregated_data()
ee_cron_jobs = [
{"func": pg_events_queue, "trigger": IntervalTrigger(minutes=5), "misfire_grace_time": 20, "max_instances": 1},
{"func": assist_events_aggregates_cron,
"trigger": IntervalTrigger(hours=1, start_date="2023-04-01 0:0:0", jitter=10), }
# SINGLE_CRONS are crons that will be run the crons-service, they are a singleton crons
SINGLE_CRONS = [{"func": assist_events_aggregates_cron,
"trigger": IntervalTrigger(hours=1, start_date="2023-04-01 0:0:0", jitter=10)}]
# cron_jobs is the list of crons to run in main API service (so you will have as many runs as the number of instances of the API)
cron_jobs = [
{"func": pg_events_queue, "trigger": IntervalTrigger(minutes=5), "misfire_grace_time": 20, "max_instances": 1}
]
if config("LOCAL_CRONS", default=False, cast=bool):
cron_jobs += SINGLE_CRONS

View file

@ -31,3 +31,4 @@ s3transfer==0.6.1
six==1.16.0
urllib3==1.26.12
cryptography>=42.0.0 # not directly required, pinned by Snyk to avoid a vulnerability

View file

@ -1,3 +1,17 @@
\set or_version 'v1.16.0-ee'
SET client_min_messages TO NOTICE;
\set ON_ERROR_STOP true
SELECT EXISTS (SELECT 1
FROM information_schema.tables
WHERE table_schema = 'public'
AND table_name = 'tenants') AS db_exists;
\gset
\if :db_exists
\echo >DB already exists, stopping script
\echo >If you are trying to upgrade openreplay, please follow the instructions here: https://docs.openreplay.com/en/deployment/upgrade/
\q
\endif
BEGIN;
-- Schemas and functions definitions:
CREATE SCHEMA IF NOT EXISTS events_common;
@ -6,12 +20,14 @@ CREATE SCHEMA IF NOT EXISTS events_ios;
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE EXTENSION IF NOT EXISTS pgcrypto;
SELECT format($fn_def$
CREATE OR REPLACE FUNCTION openreplay_version()
RETURNS text AS
$$
SELECT 'v1.16.0-ee'
SELECT '%1$s'
$$ LANGUAGE sql IMMUTABLE;
$fn_def$, :'or_version')
\gexec
CREATE OR REPLACE FUNCTION generate_api_key(length integer) RETURNS text AS
$$

View file

@ -22,5 +22,5 @@ MINIO_ACCESS_KEY = ''
MINIO_SECRET_KEY = ''
# APP and TRACKER VERSIONS
VERSION = 1.14.0
TRACKER_VERSION = '9.0.0'
VERSION = 1.16.6
TRACKER_VERSION = '11.0.1'

View file

@ -159,6 +159,8 @@ export default class APIClient {
} else {
return Promise.reject({ message: `! ${this.init.method} error on ${path}; ${response.status}`, response });
}
}).catch((error) => {
return Promise.reject({ message: `! ${this.init.method} error on ${path};` });
});
}

View file

@ -78,7 +78,6 @@ function MobileOverviewPanelCont({ issuesList }: { issuesList: Record<string, a
function WebOverviewPanelCont({ issuesList }: { issuesList: Record<string, any>[] }) {
const { store } = React.useContext(PlayerContext);
const [dataLoaded, setDataLoaded] = React.useState(false);
const [selectedFeatures, setSelectedFeatures] = React.useState([
'PERFORMANCE',
'FRUSTRATIONS',
@ -93,7 +92,7 @@ function WebOverviewPanelCont({ issuesList }: { issuesList: Record<string, any>[
} = store.get();
const stackEventList = tabStates[currentTab]?.stackList || []
const eventsList = tabStates[currentTab]?.eventList || []
// const eventsList = tabStates[currentTab]?.eventList || []
const frustrationsList = tabStates[currentTab]?.frustrationsList || []
const exceptionsList = tabStates[currentTab]?.exceptionsList || []
const resourceListUnmap = tabStates[currentTab]?.resourceList || []
@ -116,24 +115,7 @@ function WebOverviewPanelCont({ issuesList }: { issuesList: Record<string, any>[
PERFORMANCE: performanceChartData,
FRUSTRATIONS: frustrationsList,
};
}, [dataLoaded, currentTab]);
useEffect(() => {
if (dataLoaded) {
return;
}
if (
resourceList.length > 0 ||
exceptionsList.length > 0 ||
eventsList.length > 0 ||
stackEventList.length > 0 ||
issuesList.length > 0 ||
performanceChartData.length > 0
) {
setDataLoaded(true);
}
}, [resourceList, issuesList, exceptionsList, eventsList, stackEventList, performanceChartData, currentTab]);
}, [tabStates, currentTab]);
return <PanelComponent resources={resources} endTime={endTime} selectedFeatures={selectedFeatures} fetchPresented={fetchPresented} setSelectedFeatures={setSelectedFeatures} />
}

View file

@ -146,7 +146,9 @@ function FilterAutoComplete(props: Props) {
const loadOptions = (inputValue: string, callback: (options: []) => void) => {
// remove underscore from params
const _params = Object.keys(params).reduce((acc: any, key: string) => {
acc[key] = params[key].replace(/^_/, '');
if (key === 'type' && params[key] === 'metadata') {
acc[key] = params[key].replace(/^_/, '');
}
return acc;
}, {});

View file

@ -317,12 +317,15 @@ export default class TabSessionManager {
if (!!lastScroll && this.screen.window) {
this.screen.window.scrollTo(lastScroll.x, lastScroll.y);
}
const canvasMsg = this.canvasReplayWalker.moveGetLast(t)
if (canvasMsg) {
this.canvasManagers[`${canvasMsg.timestamp}_${canvasMsg.nodeId}`].manager.startVideo();
this.canvasManagers[`${canvasMsg.timestamp}_${canvasMsg.nodeId}`].running = true;
}
const runningManagers = Object.keys(this.canvasManagers).filter((key) => this.canvasManagers[key].running);
this.canvasReplayWalker.moveApply(t, (canvasMsg) => {
if (canvasMsg) {
this.canvasManagers[`${canvasMsg.timestamp}_${canvasMsg.nodeId}`].manager.startVideo();
this.canvasManagers[`${canvasMsg.timestamp}_${canvasMsg.nodeId}`].running = true;
}
})
const runningManagers = Object.keys(this.canvasManagers).filter(
(key) => this.canvasManagers[key].running
);
runningManagers.forEach((key) => {
const manager = this.canvasManagers[key].manager;
manager.move(t);
@ -330,8 +333,11 @@ export default class TabSessionManager {
})
}
/**
* Used to decode state messages, because they can be large we only want to decode whats rendered atm
* */
public decodeMessage(msg: Message) {
return this.decoder.decode(msg)
return this.decoder.decode(msg);
}
public _sortMessagesHack = (msgs: Message[]) => {

View file

@ -39,7 +39,7 @@ export default class CanvasManager {
}
move(t: number) {
if (t - this.lastTs < 100) return;
if (Math.abs(t - this.lastTs) < 100) return;
this.lastTs = t;
const playTime = t - this.delta
if (playTime > 0) {

View file

@ -642,7 +642,8 @@ export const addElementToLiveFiltersMap = (
icon = 'filters/metadata'
) => {
liveFiltersMap[key] = {
key, type, category, label: capitalize(key),
key, type, category,
label: key.replace(/^_/, '').charAt(0).toUpperCase() + key.slice(2),
operator: operator,
operatorOptions,
icon,

View file

@ -1,8 +1,8 @@
import Record from 'Types/Record';
import { validateURL } from 'App/validate'
export const API_KEY_ID_LENGTH = 20;
export const API_KEY_LENGTH = 22;
export const API_KEY_ID_LENGTH = 5;
export const API_KEY_LENGTH = 5;
export default Record({
projectId: undefined,
@ -20,7 +20,7 @@ export default Record({
}),
methods: {
validateKeys() {
return this.apiKeyId.length >= API_KEY_ID_LENGTH && this.apiKey.length >= API_KEY_LENGTH && validateURL(this.host);
return this.apiKeyId.length > API_KEY_ID_LENGTH && this.apiKey.length > API_KEY_LENGTH && validateURL(this.host);
},
validate() {
return this.host !== '' && this.apiKeyId !== '' && this.apiKey !== '' && this.indexes !== '' && !!this.port &&

View file

@ -5,7 +5,9 @@ export function validateIP(value) {
export function validateURL(value) {
if (typeof value !== 'string') return false;
return /^(http|https):\/\/(?:www\.)?[a-zA-Z0-9\-\.]+\.[a-zA-Z]{2,}(\/\S*)?$/i.test(value);
const urlRegex = /^(http|https):\/\/(?:www\.)?[a-zA-Z0-9\-\.]+\.[a-zA-Z]{2,}(\/\S*)?$/i;
const ipRegex = /^(http|https):\/\/(?:localhost|(\d{1,3}\.){3}\d{1,3})(:\d+)?(\/\S*)?$/i;
return urlRegex.test(value) || ipRegex.test(value);
}
function escapeRegexp(s) {

View file

@ -0,0 +1,37 @@
import { validateURL } from './validate';
describe('validateURL', () => {
test('validates standard URLs', () => {
expect(validateURL('http://www.example.com')).toBeTruthy();
expect(validateURL('https://example.com')).toBeTruthy();
expect(validateURL('https://sub.example.com/path')).toBeTruthy();
});
test('validates localhost URLs', () => {
expect(validateURL('http://localhost')).toBeTruthy();
expect(validateURL('https://localhost:8080')).toBeTruthy();
expect(validateURL('http://localhost/path')).toBeTruthy();
});
test('validates IP address URLs', () => {
expect(validateURL('http://127.0.0.1')).toBeTruthy();
expect(validateURL('https://192.168.1.1')).toBeTruthy();
expect(validateURL('http://192.168.1.1:3000/path')).toBeTruthy();
});
test('rejects invalid URLs', () => {
expect(validateURL('justastring')).toBeFalsy();
expect(validateURL('http://')).toBeFalsy();
expect(validateURL('https://.com')).toBeFalsy();
expect(validateURL('256.256.256.256')).toBeFalsy(); // Invalid IP
expect(validateURL('http://example')).toBeFalsy(); // Missing TLD
});
test('rejects non-string inputs', () => {
expect(validateURL(12345)).toBeFalsy();
expect(validateURL({ url: 'http://example.com' })).toBeFalsy();
expect(validateURL(['http://example.com'])).toBeFalsy();
expect(validateURL(null)).toBeFalsy();
expect(validateURL(undefined)).toBeFalsy();
});
});

View file

@ -0,0 +1,6 @@
{$CADDY_DOMAIN} {
reverse_proxy nginx-openreplay:80
tls {
issuer internal
}
}

View file

@ -7,7 +7,7 @@ services:
volumes:
- pgdata:/var/lib/postgresql/data
networks:
- opereplay-net
- openreplay-net
environment:
POSTGRESQL_PASSWORD: ${COMMON_PG_PASSWORD}
@ -17,7 +17,7 @@ services:
volumes:
- redisdata:/var/lib/postgresql/data
networks:
- opereplay-net
- openreplay-net
environment:
ALLOW_EMPTY_PASSWORD: "yes"
@ -27,7 +27,7 @@ services:
volumes:
- miniodata:/bitnami/minio/data
networks:
- opereplay-net
- openreplay-net
ports:
- 9001:9001
environment:
@ -48,6 +48,7 @@ services:
- -c
- |
chown -R 1001:1001 /mnt/{efs,minio,postgres}
restart: on-failure
minio-migration:
image: bitnami/minio:2020.10.9-debian-10-r6
@ -58,7 +59,7 @@ services:
- minio
- fs-permission
networks:
- opereplay-net
- openreplay-net
volumes:
- ../helmcharts/openreplay/files/minio.sh:/tmp/minio.sh
environment:
@ -87,7 +88,7 @@ services:
- postgresql
- minio-migration
networks:
- opereplay-net
- openreplay-net
volumes:
- ../schema/db/init_dbs/postgresql/init_schema.sql:/tmp/init_schema.sql
environment:
@ -108,63 +109,63 @@ services:
psql -v ON_ERROR_STOP=1 -f /tmp/init_schema.sql
frontend-openreplay:
image: public.ecr.aws/p1t3u8a3/frontend:v1.16.0
image: public.ecr.aws/p1t3u8a3/frontend:${COMMON_VERSION}
container_name: frontend
networks:
- opereplay-net
- openreplay-net
restart: unless-stopped
alerts-openreplay:
image: public.ecr.aws/p1t3u8a3/alerts:v1.16.0
image: public.ecr.aws/p1t3u8a3/alerts:${COMMON_VERSION}
container_name: alerts
networks:
- opereplay-net
- openreplay-net
env_file:
- alerts.env
restart: unless-stopped
assets-openreplay:
image: public.ecr.aws/p1t3u8a3/assets:v1.16.0
image: public.ecr.aws/p1t3u8a3/assets:${COMMON_VERSION}
container_name: assets
networks:
- opereplay-net
- openreplay-net
env_file:
- assets.env
restart: unless-stopped
assist-openreplay:
image: public.ecr.aws/p1t3u8a3/assist:v1.16.0
image: public.ecr.aws/p1t3u8a3/assist:${COMMON_VERSION}
container_name: assist
networks:
- opereplay-net
- openreplay-net
env_file:
- assist.env
restart: unless-stopped
db-openreplay:
image: public.ecr.aws/p1t3u8a3/db:v1.16.0
image: public.ecr.aws/p1t3u8a3/db:${COMMON_VERSION}
container_name: db
networks:
- opereplay-net
- openreplay-net
env_file:
- db.env
restart: unless-stopped
ender-openreplay:
image: public.ecr.aws/p1t3u8a3/ender:v1.16.0
image: public.ecr.aws/p1t3u8a3/ender:${COMMON_VERSION}
container_name: ender
networks:
- opereplay-net
- openreplay-net
env_file:
- ender.env
restart: unless-stopped
heuristics-openreplay:
image: public.ecr.aws/p1t3u8a3/heuristics:v1.16.0
image: public.ecr.aws/p1t3u8a3/heuristics:${COMMON_VERSION}
domainname: app.svc.cluster.local
container_name: heuristics
networks:
opereplay-net:
openreplay-net:
aliases:
- heuristics-openreplay.app.svc.cluster.local
env_file:
@ -172,88 +173,88 @@ services:
restart: unless-stopped
imagestorage-openreplay:
image: public.ecr.aws/p1t3u8a3/imagestorage:v1.16.0
image: public.ecr.aws/p1t3u8a3/imagestorage:${COMMON_VERSION}
container_name: imagestorage
env_file:
- imagestorage.env
networks:
- opereplay-net
- openreplay-net
restart: unless-stopped
integrations-openreplay:
image: public.ecr.aws/p1t3u8a3/integrations:v1.16.0
image: public.ecr.aws/p1t3u8a3/integrations:${COMMON_VERSION}
container_name: integrations
networks:
- opereplay-net
- openreplay-net
env_file:
- integrations.env
restart: unless-stopped
peers-openreplay:
image: public.ecr.aws/p1t3u8a3/peers:v1.16.0
image: public.ecr.aws/p1t3u8a3/peers:${COMMON_VERSION}
container_name: peers
networks:
- opereplay-net
- openreplay-net
env_file:
- peers.env
restart: unless-stopped
sourcemapreader-openreplay:
image: public.ecr.aws/p1t3u8a3/sourcemapreader:v1.16.0
image: public.ecr.aws/p1t3u8a3/sourcemapreader:${COMMON_VERSION}
container_name: sourcemapreader
networks:
- opereplay-net
- openreplay-net
env_file:
- sourcemapreader.env
restart: unless-stopped
videostorage-openreplay:
image: public.ecr.aws/p1t3u8a3/videostorage:v1.16.0
image: public.ecr.aws/p1t3u8a3/videostorage:${COMMON_VERSION}
container_name: videostorage
networks:
- opereplay-net
- openreplay-net
env_file:
- videostorage.env
restart: unless-stopped
http-openreplay:
image: public.ecr.aws/p1t3u8a3/http:v1.16.0
image: public.ecr.aws/p1t3u8a3/http:${COMMON_VERSION}
container_name: http
networks:
- opereplay-net
- openreplay-net
env_file:
- http.env
restart: unless-stopped
chalice-openreplay:
image: public.ecr.aws/p1t3u8a3/chalice:v1.16.0
image: public.ecr.aws/p1t3u8a3/chalice:${COMMON_VERSION}
container_name: chalice
volumes:
- shared-volume:/mnt/efs
networks:
- opereplay-net
- openreplay-net
env_file:
- chalice.env
restart: unless-stopped
sink-openreplay:
image: public.ecr.aws/p1t3u8a3/sink:v1.16.0
image: public.ecr.aws/p1t3u8a3/sink:${COMMON_VERSION}
container_name: sink
volumes:
- shared-volume:/mnt/efs
networks:
- opereplay-net
- openreplay-net
env_file:
- sink.env
restart: unless-stopped
storage-openreplay:
image: public.ecr.aws/p1t3u8a3/storage:v1.16.0
image: public.ecr.aws/p1t3u8a3/storage:${COMMON_VERSION}
container_name: storage
volumes:
- shared-volume:/mnt/efs
networks:
- opereplay-net
- openreplay-net
env_file:
- storage.env
restart: unless-stopped
@ -262,7 +263,7 @@ services:
image: nginx:latest
container_name: nginx
networks:
- opereplay-net
- openreplay-net
volumes:
- ./nginx.conf:/etc/nginx/conf.d/default.conf
restart: unless-stopped
@ -279,10 +280,10 @@ services:
- caddy_data:/data
- caddy_config:/config
networks:
- opereplay-net
- openreplay-net
environment:
- ACME_AGREE=true # Agree to Let's Encrypt Subscriber Agreement
- CADDY_DOMAIN=or-foss.rjsh.me
- CADDY_DOMAIN=${CADDY_DOMAIN}
restart: unless-stopped
@ -295,4 +296,4 @@ volumes:
caddy_config:
networks:
opereplay-net:
openreplay-net:

View file

@ -25,7 +25,7 @@ fi
# Clone the repository
if git clone --depth 1 --branch "$REPO_BRANCH" "$REPO_URL" "$CLONE_DIR"; then
info "Repository cloned successfully."
info "Repository cloned successfully."
else
error "Failed to clone the repository."
fi

View file

@ -75,6 +75,21 @@ if [[ -z $DOMAIN_NAME ]]; then
fatal "DOMAIN_NAME variable is empty. Please provide a valid domain name to proceed."
fi
info "Using domain name: $DOMAIN_NAME 🌐"
echo "CADDY_DOMAIN=\"$DOMAIN_NAME\"" >> common.env
read -p "Is the domain on a public DNS? (y/n) " yn
case $yn in
y ) echo "$DOMAIN_NAME is on a public DNS";
;;
n ) echo "$DOMAIN_NAME is on a private DNS";
#add TLS internal to caddyfile
#In local network Caddy can't reach Let's Encrypt servers to get a certificate
mv Caddyfile Caddyfile.public
mv Caddyfile.private Caddyfile
;;
* ) echo invalid response;
exit 1;;
esac
# Create passwords if they don't exist
create_passwords
@ -87,8 +102,21 @@ set +a
# Use the `envsubst` command to substitute the shell environment variables into reference_var.env and output to a combined .env
find ./ -type f \( -iname "*.env" -o -iname "docker-compose.yaml" \) ! -name "common.env" -exec /bin/bash -c 'file="{}"; git checkout -- "$file"; cp "$file" "$file.bak"; envsubst < "$file.bak" > "$file"; rm "$file.bak"' \;
sudo -E docker-compose pull --no-parallel
sudo -E docker compose --profile migration up -d
case $yn in
y ) echo "$DOMAIN_NAME is on a public DNS";
##No changes needed
;;
n ) echo "$DOMAIN_NAME is on a private DNS";
##Add a variable to chalice.env file
echo "SKIP_H_SSL=True" >> chalice.env
;;
* ) echo invalid response;
exit 1;;
esac
sudo -E docker-compose --parallel 1 pull
sudo -E docker-compose --profile migration up --force-recreate --build -d
cp common.env common.env.bak
echo "🎉🎉🎉 Done! 🎉🎉🎉"

View file

@ -5,9 +5,9 @@ original_env_file="$1"
# Check if the original env file exists and is not empty
if [ ! -s "$original_env_file" ]; then
echo "Error: The original env file is empty or does not exist."
echo "Usage: $0 /path/to/original.env"
exit 1
echo "Error: The original env file is empty or does not exist."
echo "Usage: $0 /path/to/original.env"
exit 1
fi
new_env_file="./common.env"
@ -15,99 +15,111 @@ temp_env_file=$(mktemp)
# Function to merge environment variables from original to new env file
function merge_envs() {
while IFS='=' read -r key value; do
# Skip the line if the key is COMMON_VERSION
case "$key" in
COMMON_VERSION)
original_version=$(echo "$value" | xargs)
continue
;;
COMMON_PG_PASSWORD)
pgpassword=$value
;;
POSTGRES_VERSION | REDIS_VERSION | MINIO_VERSION)
# Don't update db versions automatically.
continue
;;
esac
while IFS='=' read -r key value; do
# Skip the line if the key is COMMON_VERSION
case "$key" in
COMMON_VERSION)
original_version=$(echo "$value" | xargs)
continue
;;
COMMON_PG_PASSWORD)
pgpassword=$(echo $value | xargs)
;;
POSTGRES_VERSION | REDIS_VERSION | MINIO_VERSION)
# Don't update db versions automatically.
continue
;;
esac
# Remove any existing entry from the new env file and add the new value
grep -v "^$key=" "$new_env_file" >"$temp_env_file"
mv "$temp_env_file" "$new_env_file"
echo "$key=$value" >>"$new_env_file"
done <"$original_env_file"
# Remove any existing entry from the new env file and add the new value
grep -v "^$key=" "$new_env_file" >"$temp_env_file"
mv "$temp_env_file" "$new_env_file"
echo "$key=$value" >>"$new_env_file"
done <"$original_env_file"
}
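For illustration, a rough sketch of what merge_envs does, using made-up keys and values:
# original.env : COMMON_VERSION=v1.15.0  COMMON_PG_PASSWORD=oldpass  POSTGRES_VERSION=14.5
# common.env   : COMMON_PG_PASSWORD=changeme  NEW_FEATURE_FLAG=true
original_env_file=./original.env
merge_envs
# common.env after the merge:
#   COMMON_PG_PASSWORD=oldpass  NEW_FEATURE_FLAG=true
# COMMON_VERSION is only remembered in $original_version, and the pinned
# POSTGRES_VERSION/REDIS_VERSION/MINIO_VERSION values are never overwritten.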
# Function to normalize version numbers for comparison
function normalise_version {
echo "$1" | awk -F. '{ printf("%03d%03d%03d\n", $1, $2, $3); }'
echo "$1" | awk -F. '{ printf("%03d%03d%03d\n", $1, $2, $3); }'
}
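For illustration, example calls of normalise_version; the zero-padded output lets plain string comparison order versions correctly:
normalise_version "1.16.2"   # prints 001016002
normalise_version "1.9.0"    # prints 001009000
[[ $(normalise_version "1.16.2") > $(normalise_version "1.9.0") ]] && echo "1.16.2 is newer"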
# Function to log messages
function log_message() {
echo "$@" >&2
echo "$@" >&2
}
# Function to create migration versions based on the current and previous application versions
function create_migration_versions() {
cd "${SCHEMA_DIR:-/opt/openreplay/openreplay/scripts/schema}" || {
log_message "not able to cd $SCHEMA_DIR"
exit 100
}
SCHEMA_DIR="../schema/"
cd $SCHEMA_DIR || {
log_message "not able to cd $SCHEMA_DIR"
exit 100
}
db=postgresql
# List all version directories excluding 'create' directory
all_versions=($(find db/init_dbs/$db -maxdepth 1 -type d -exec basename {} \; | grep -v create))
db=postgresql
# List all version directories excluding 'create' directory
all_versions=($(find db/init_dbs/$db -maxdepth 1 -type d -exec basename {} \; | grep -v create))
# Normalize the previous application version for comparison
PREVIOUS_APP_VERSION_NORMALIZED=$(normalise_version "${PREVIOUS_APP_VERSION}")
# Normalize the previous application version for comparison
PREVIOUS_APP_VERSION_NORMALIZED=$(normalise_version "${PREVIOUS_APP_VERSION}")
migration_versions=()
for ver in "${all_versions[@]}"; do
if [[ $(normalise_version "$ver") > "$PREVIOUS_APP_VERSION_NORMALIZED" ]]; then
migration_versions+=("$ver")
fi
done
migration_versions=()
for ver in "${all_versions[@]}"; do
if [[ $(normalise_version "$ver") > "$PREVIOUS_APP_VERSION_NORMALIZED" ]]; then
migration_versions+=("$ver")
fi
done
# Join migration versions into a single string separated by commas
joined_migration_versions=$(
IFS=,
echo "${migration_versions[*]}"
)
# Join migration versions into a single string separated by commas
joined_migration_versions=$(
IFS=,
echo "${migration_versions[*]}"
)
# Return to the previous directory
cd - >/dev/null || {
log_message "not able to cd back"
exit 100
}
# Return to the previous directory
cd - >/dev/null || {
log_message "not able to cd back"
exit 100
}
log_message "output: $joined_migration_versions"
echo "$joined_migration_versions"
log_message "output: $joined_migration_versions"
echo "$joined_migration_versions"
}
export SCHEMA_DIR="$(readlink -f ../schema/)"
echo $SCHEMA_DIR
# Function to perform migration
function migrate() {
# Set schema directory and previous application version
export SCHEMA_DIR="../schema/"
export PREVIOUS_APP_VERSION=${original_version#v}
# Set schema directory and previous application version
export PREVIOUS_APP_VERSION=${original_version#v}
# Create migration versions array
IFS=',' read -ra joined_migration_versions <<<"$(create_migration_versions)"
# Check if there are versions to migrate
[[ ${#joined_migration_versions[@]} -eq 0 ]] && {
echo "Nothing to migrate"
return
}
# Loop through versions and prepare Docker run commands
for ver in "${joined_migration_versions[@]}"; do
echo "$ver"
"docker run --rm --network openreplay-net \
--name pgmigrate -e 'PGHOST=postgres' -e 'PGPORT=5432' \
-e 'PGDATABASE=postgres' -e 'PGUSER=postgres' -e 'PGPASSWORD=$pgpassword' \
-v /opt/data/:$SCHEMA_DIR postgres psql -f /opt/data/schema/db/init_dbs/postgresql/$ver/$ver.sql"
done
# Create migration versions array
IFS=',' read -ra joined_migration_versions <<<"$(create_migration_versions)"
# Check if there are versions to migrate
[[ ${#joined_migration_versions[@]} -eq 0 ]] && {
echo "Nothing to migrate"
return
}
# Loop through versions and prepare Docker run commands
for ver in "${joined_migration_versions[@]}"; do
echo "$ver"
docker run --rm --network docker-compose_opereplay-net \
--name pgmigrate -e PGHOST=postgres -e PGPORT=5432 \
-e PGDATABASE=postgres -e PGUSER=postgres -e PGPASSWORD=$pgpassword \
-v $SCHEMA_DIR:/opt/data/ postgres psql -f /opt/data/db/init_dbs/postgresql/$ver/$ver.sql
done
}
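For illustration, one fully expanded migration command from the loop above; the version, host path and password are example values, assuming SCHEMA_DIR resolves to an absolute path via the earlier readlink -f:
docker run --rm --network docker-compose_opereplay-net \
  --name pgmigrate -e PGHOST=postgres -e PGPORT=5432 \
  -e PGDATABASE=postgres -e PGUSER=postgres -e PGPASSWORD=examplepass \
  -v /opt/openreplay/openreplay/scripts/schema:/opt/data/ \
  postgres psql -f /opt/data/db/init_dbs/postgresql/v1.16.0/v1.16.0.sql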
# Merge environment variables and perform migration
merge_envs
migrate
# Load variables from common.env into the current shell's environment
set -a # automatically export all variables
source common.env
set +a
# Use the `envsubst` command to substitute the shell environment variables into reference_var.env and output to a combined .env
find ./ -type f \( -iname "*.env" -o -iname "docker-compose.yaml" \) ! -name "common.env" -exec /bin/bash -c 'file="{}";cp "$file" "$file.bak"; envsubst < "$file.bak" > "$file"; rm "$file.bak"' \;
sudo -E docker-compose up -d


@@ -8,16 +8,13 @@ YELLOW='\033[0;33m'
BWHITE='\033[1;37m'
NC='\033[0m' # No Color
# --- helper functions for logs ---
info()
{
info() {
echo -e "${GREEN}[INFO] " "$@" "$NC"
}
warn()
{
warn() {
echo -e "${YELLOW}[INFO] " "$@" "$NC"
}
fatal()
{
fatal() {
echo -e "${RED}[INFO] " "$@" "$NC"
exit 1
}
@@ -36,13 +33,13 @@ function install_k8s() {
# Checking whether the app exists or whether we have to upgrade.
function exists() {
install_status=Upgrading
[[ $UPGRADE_TOOLS -eq 1 ]] && {
install_status=Upgrading
return 100
}
which $1 &> /dev/null
return $?
[[ $UPGRADE_TOOLS -eq 1 ]] && {
install_status=Upgrading
return 100
}
which $1 &>/dev/null
return $?
}
# Install the tooling needed for installing/maintaining k8s
@@ -50,7 +47,8 @@ function install_tools() {
## installing kubectl
exists kubectl || {
info "$install_status kubectl"
sudo curl -SsL https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl ; sudo chmod +x /usr/local/bin/kubectl
sudo curl -SsL https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl
sudo chmod +x /usr/local/bin/kubectl
}
## $install_status GH package manager
@@ -73,6 +71,7 @@ function install_tools() {
exists k9s || {
info "$install_status K9s"
sudo /usr/local/bin/eget -q --to /usr/local/bin derailed/k9s
sudo /usr/local/bin/eget -q --upgrade-only --to "$OR_DIR" derailed/k9s --asset=tar.gz --asset=^sbom
}
## installing helm, package manager for K8s
@@ -89,8 +88,8 @@ function install_tools() {
randomPass() {
## Installing openssl
exists openssl || {
sudo apt update &> /dev/null
sudo apt install openssl -y &> /dev/null
sudo apt update &>/dev/null
sudo apt install openssl -y &>/dev/null
}
openssl rand -hex 10
}
@@ -100,93 +99,92 @@ randomPass() {
# macOS doesn't have GNU sed, which will cause compatibility issues.
# This wrapper will help to check the sed, and use the correct version="v1.16.0"
# Ref: https://stackoverflow.com/questions/37639496/how-can-i-check-the-version="v1.16.0"
function is_gnu_sed(){
sed --version >/dev/null 2>&1
function is_gnu_sed() {
sed --version >/dev/null 2>&1
}
function sed_i_wrapper(){
if is_gnu_sed; then
$(which sed) "$@"
else
a=()
for b in "$@"; do
[[ $b == '-i' ]] && a=("${a[@]}" "$b" "") || a=("${a[@]}" "$b")
done
$(which sed) "${a[@]}"
fi
function sed_i_wrapper() {
if is_gnu_sed; then
$(which sed) "$@"
else
a=()
for b in "$@"; do
[[ $b == '-i' ]] && a=("${a[@]}" "$b" "") || a=("${a[@]}" "$b")
done
$(which sed) "${a[@]}"
fi
}
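For illustration, an example call of the wrapper above; it behaves like GNU `sed -i` on Linux and like BSD sed's `-i ''` form on macOS (the domain value is an example):
sed_i_wrapper -i "s/domainName: \"\"/domainName: \"openreplay.example.com\"/g" vars.yaml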
function create_passwords() {
# Error out only if the domain name is empty in vars.yaml
existing_domain_name=$(awk '/domainName/ {print $2}' vars.yaml | xargs)
[[ -z $existing_domain_name ]] && {
[[ -z $DOMAIN_NAME ]] && {
fatal 'DOMAIN_NAME variable is empty. Rerun the script `DOMAIN_NAME=openreplay.mycomp.org bash init.sh `'
# Error out only if the domain name is empty in vars.yaml
existing_domain_name=$(awk '/domainName/ {print $2}' vars.yaml | xargs)
[[ -z $existing_domain_name ]] && {
[[ -z $DOMAIN_NAME ]] && {
fatal 'DOMAIN_NAME variable is empty. Rerun the script `DOMAIN_NAME=openreplay.mycomp.org bash init.sh `'
}
}
}
info "Creating dynamic passwords"
sed_i_wrapper -i "s/postgresqlPassword: \"changeMePassword\"/postgresqlPassword: \"$(randomPass)\"/g" vars.yaml
sed_i_wrapper -i "s/accessKey: \"changeMeMinioAccessKey\"/accessKey: \"$(randomPass)\"/g" vars.yaml
sed_i_wrapper -i "s/secretKey: \"changeMeMinioPassword\"/secretKey: \"$(randomPass)\"/g" vars.yaml
sed_i_wrapper -i "s/jwt_secret: \"SetARandomStringHere\"/jwt_secret: \"$(randomPass)\"/g" vars.yaml
sed_i_wrapper -i "s/assistKey: \"SetARandomStringHere\"/assistKey: \"$(randomPass)\"/g" vars.yaml
sed_i_wrapper -i "s/assistJWTSecret: \"SetARandomStringHere\"/assistJWTSecret: \"$(randomPass)\"/g" vars.yaml
sed_i_wrapper -i "s/domainName: \"\"/domainName: \"${DOMAIN_NAME}\"/g" vars.yaml
info "Creating dynamic passwords"
sed_i_wrapper -i "s/postgresqlPassword: \"changeMePassword\"/postgresqlPassword: \"$(randomPass)\"/g" vars.yaml
sed_i_wrapper -i "s/accessKey: \"changeMeMinioAccessKey\"/accessKey: \"$(randomPass)\"/g" vars.yaml
sed_i_wrapper -i "s/secretKey: \"changeMeMinioPassword\"/secretKey: \"$(randomPass)\"/g" vars.yaml
sed_i_wrapper -i "s/jwt_secret: \"SetARandomStringHere\"/jwt_secret: \"$(randomPass)\"/g" vars.yaml
sed_i_wrapper -i "s/assistKey: \"SetARandomStringHere\"/assistKey: \"$(randomPass)\"/g" vars.yaml
sed_i_wrapper -i "s/assistJWTSecret: \"SetARandomStringHere\"/assistJWTSecret: \"$(randomPass)\"/g" vars.yaml
sed_i_wrapper -i "s/domainName: \"\"/domainName: \"${DOMAIN_NAME}\"/g" vars.yaml
}
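For illustration, the effect of create_passwords, with made-up values:
randomPass   # -> e.g. 9f2c41a7b0e8d3c15a2f (openssl rand -hex 10 gives 20 hex chars)
# vars.yaml before: postgresqlPassword: "changeMePassword"
# vars.yaml after : postgresqlPassword: "9f2c41a7b0e8d3c15a2f"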
function set_permissions() {
info "Setting proper permission for shared folder"
sudo mkdir -p /openreplay/storage/nfs
sudo chown -R 1001:1001 /openreplay/storage/nfs
info "Setting proper permission for shared folder"
sudo mkdir -p /openreplay/storage/nfs
sudo chown -R 1001:1001 /openreplay/storage/nfs
}
## Installing OpenReplay
function install_openreplay() {
info "installing toolings"
helm uninstall tooling -n app || true
helm upgrade --install toolings ./toolings -n app --create-namespace --wait -f ./vars.yaml --atomic --debug ${HELM_OPTIONS}
info "installing databases"
helm upgrade --install databases ./databases -n db --create-namespace --wait -f ./vars.yaml --atomic --debug ${HELM_OPTIONS}
info "installing application"
helm upgrade --install openreplay ./openreplay -n app --create-namespace --wait -f ./vars.yaml --atomic --debug ${HELM_OPTIONS}
info "installing toolings"
helm uninstall tooling -n app || true
helm upgrade --install toolings ./toolings -n app --create-namespace --wait -f ./vars.yaml --atomic --debug ${HELM_OPTIONS}
info "installing databases"
helm upgrade --install databases ./databases -n db --create-namespace --wait -f ./vars.yaml --atomic --debug ${HELM_OPTIONS}
info "installing application"
helm upgrade --install openreplay ./openreplay -n app --create-namespace --wait -f ./vars.yaml --atomic --debug ${HELM_OPTIONS}
}
function main() {
[[ x$SKIP_K8S_INSTALL == "x1" ]] && {
info "Skipping Kuberntes installation"
} || {
install_k8s
}
[[ x$SKIP_K8S_TOOLS == "x1" ]] && {
info "Skipping Kuberntes tools installation"
} || {
install_tools
}
[[ x$SKIP_ROTATE_SECRETS == "x1" ]] && {
info "Skipping random password generation"
} || {
create_passwords
}
[[ x$SKIP_OR_INSTALL == "x1" ]] && {
info "Skipping OpenReplay installation"
} || {
set_permissions
sudo mkdir -p /var/lib/openreplay
sudo cp -f openreplay-cli /bin/openreplay
install_openreplay
# If you install multiple times using init.sh, only keep the latest installation
if [[ -d /var/lib/openreplay/openreplay ]]; then
cd /var/lib/openreplay/openreplay
date +%m-%d-%Y-%H%M%S | sudo tee -a /var/lib/openreplay/or_versions.txt
sudo git log -1 2>&1 | sudo tee -a /var/lib/openreplay/or_versions.txt
sudo rm -rf /var/lib/openreplay/openreplay
cd -
fi
sudo cp -rf $(cd ../.. && pwd) /var/lib/openreplay/openreplay
sudo cp -rf ./vars.yaml /var/lib/openreplay/
}
[[ x$SKIP_K8S_INSTALL == "x1" ]] && {
info "Skipping Kuberntes installation"
} || {
install_k8s
}
[[ x$SKIP_K8S_TOOLS == "x1" ]] && {
info "Skipping Kuberntes tools installation"
} || {
install_tools
}
[[ x$SKIP_ROTATE_SECRETS == "x1" ]] && {
info "Skipping random password generation"
} || {
create_passwords
}
[[ x$SKIP_OR_INSTALL == "x1" ]] && {
info "Skipping OpenReplay installation"
} || {
set_permissions
sudo mkdir -p /var/lib/openreplay
sudo cp -f openreplay-cli /bin/openreplay
install_openreplay
# If you install multiple times using init.sh, only keep the latest installation
if [[ -d /var/lib/openreplay/openreplay ]]; then
cd /var/lib/openreplay/openreplay
date +%m-%d-%Y-%H%M%S | sudo tee -a /var/lib/openreplay/or_versions.txt
sudo git log -1 2>&1 | sudo tee -a /var/lib/openreplay/or_versions.txt
sudo rm -rf /var/lib/openreplay/openreplay
cd -
fi
sudo cp -rf $(cd ../.. && pwd) /var/lib/openreplay/openreplay
sudo cp -rf ./vars.yaml /var/lib/openreplay/
}
}
main


@@ -13,21 +13,20 @@ OR_REPO="${OR_REPO:-'https://github.com/openreplay/openreplay'}"
# UPGRADE_OR_ONLY=1 openreplay -u
[[ -d $OR_DIR ]] || {
sudo mkdir $OR_DIR
sudo mkdir $OR_DIR
}
export PATH=/var/lib/openreplay:$PATH
function xargs() {
/var/lib/openreplay/busybox xargs
/var/lib/openreplay/busybox xargs
}
[[ $(awk '/enterpriseEditionLicense/{print $2}' < "/var/lib/openreplay/vars.yaml") != "" ]] && EE=true
[[ $(awk '/enterpriseEditionLicense/{print $2}' <"/var/lib/openreplay/vars.yaml") != "" ]] && EE=true
tools=(
zyedidia/eget
stern/stern
derailed/k9s
hidetatz/kubecolor
)
zyedidia/eget
stern/stern
hidetatz/kubecolor
)
# Ref: https://stackoverflow.com/questions/5947742/how-to-change-the-output-color-of-echo-in-linux
RED='\033[0;31m'
@@ -38,50 +37,50 @@ NC='\033[0m' # No Color
# Checking whether the app exists or whether we have to upgrade.
function exists() {
which "${1}" &> /dev/null
return $?
which "${1}" &>/dev/null
return $?
}
function err_cd() {
if ! cd "$1" &> /dev/null ; then
log err not able to cd to "$1"
exit 100
fi
if ! cd "$1" &>/dev/null; then
log err not able to cd to "$1"
exit 100
fi
}
function log () {
case "$1" in
function log() {
case "$1" in
info)
shift
echo -e "${GREEN}[INFO]" "$@" "${NC}"
return
;;
shift
echo -e "${GREEN}[INFO]" "$@" "${NC}"
return
;;
warn)
shift
echo -e "${YELLOW}[WARN]" "$@" "${NC}"
return
;;
shift
echo -e "${YELLOW}[WARN]" "$@" "${NC}"
return
;;
debug)
shift
echo -e "${YELLOW}[DEBUG]" "$@" "${NC}"
return
;;
shift
echo -e "${YELLOW}[DEBUG]" "$@" "${NC}"
return
;;
title)
shift
echo -e "\n${BWHITE}-" "$@" "${NC}"
return
;;
shift
echo -e "\n${BWHITE}-" "$@" "${NC}"
return
;;
err)
shift
echo -e "${RED}[ERROR]" "$@" "${NC}"
exit 100
;;
shift
echo -e "${RED}[ERROR]" "$@" "${NC}"
exit 100
;;
*)
echo "Not supported log format"
;;
esac
echo "[Error]" "$@"
exit 100
echo "Not supported log format"
;;
esac
echo "[Error]" "$@"
exit 100
}
# To run kubeconfig run
@@ -96,33 +95,35 @@ tmp_dir=$(mktemp -d)
function install_packages() {
[[ -e "$OR_DIR/eget" ]] || {
cd "$tmp_dir" || log err "Not able to cd to tmp dir $tmp_dir"
curl --version &> /dev/null || log err "curl not found. Please install"
curl -SsL https://zyedidia.github.io/eget.sh | sh - > /dev/null
sudo mv eget $OR_DIR
err_cd -
}
[[ -e "$OR_DIR/eget" ]] || {
cd "$tmp_dir" || log err "Not able to cd to tmp dir $tmp_dir"
curl --version &>/dev/null || log err "curl not found. Please install"
curl -SsL https://zyedidia.github.io/eget.sh | sh - >/dev/null
sudo mv eget $OR_DIR
err_cd -
}
for package in "${tools[@]}"; do
log info Installing "$(awk -F/ '{print $2}' <<< $package)"
sudo /var/lib/openreplay/eget -q --upgrade-only --to "${OR_DIR}" "$package"
done
log info Installing yq
sudo /var/lib/openreplay/eget -q --upgrade-only --to "$OR_DIR" mikefarah/yq --asset=^tar.gz
log info Installing helm
sudo /var/lib/openreplay/eget -q --upgrade-only --to "$OR_DIR" https://get.helm.sh/helm-v3.10.2-linux-amd64.tar.gz -f helm
log info Installing kubectl
sudo /var/lib/openreplay/eget -q --upgrade-only --to "$OR_DIR" https://dl.k8s.io/release/v1.25.0/bin/linux/amd64/kubectl
log info Installing Busybox
sudo /var/lib/openreplay/eget -q --upgrade-only --to "$OR_DIR" https://busybox.net/downloads/binaries/1.35.0-x86_64-linux-musl/busybox
date | sudo tee $OR_DIR/packages.lock &> /dev/null
for package in "${tools[@]}"; do
log info Installing "$(awk -F/ '{print $2}' <<<$package)"
sudo /var/lib/openreplay/eget -q --upgrade-only --to "${OR_DIR}" "$package"
done
log info Installing k9s
sudo /var/lib/openreplay/eget -q --upgrade-only --to "$OR_DIR" derailed/k9s --asset=tar.gz --asset=^sbom
log info Installing yq
sudo /var/lib/openreplay/eget -q --upgrade-only --to "$OR_DIR" mikefarah/yq --asset=^tar.gz
log info Installing helm
sudo /var/lib/openreplay/eget -q --upgrade-only --to "$OR_DIR" https://get.helm.sh/helm-v3.10.2-linux-amd64.tar.gz -f helm
log info Installing kubectl
sudo /var/lib/openreplay/eget -q --upgrade-only --to "$OR_DIR" https://dl.k8s.io/release/v1.25.0/bin/linux/amd64/kubectl
log info Installing Busybox
sudo /var/lib/openreplay/eget -q --upgrade-only --to "$OR_DIR" https://busybox.net/downloads/binaries/1.35.0-x86_64-linux-musl/busybox
date | sudo tee $OR_DIR/packages.lock &>/dev/null
}
function help() {
echo -e ${BWHITE}
cat <<"EOF"
echo -e ${BWHITE}
cat <<"EOF"
___ ____ _
/ _ \ _ __ ___ _ __ | _ \ ___ _ __ | | __ _ _ _
| | | | '_ \ / _ \ '_ \| |_) / _ \ '_ \| |/ _` | | | |
@@ -130,9 +131,9 @@ cat <<"EOF"
\___/| .__/ \___|_| |_|_| \_\___| .__/|_|\__,_|\__, |
|_| |_| |___/
EOF
echo -e ${NC}
echo -e ${NC}
log info "
log info "
Usage: openreplay [ -h | --help ]
[ -s | --status ]
[ -i | --install DOMAIN_NAME ]
@@ -149,335 +150,342 @@ log info "
http integrations nginx-controller
peers sink sourcemapreader storage
"
return
return
}
function status() {
log info OpenReplay Version
# awk '(NR<2)' < "$OR_DIR/vars.yaml"
awk '/fromVersion/{print $2}' < "${OR_DIR}/vars.yaml"
log info Disk
df -h /var
log info Memory
free -mh
log info CPU
uname -a
# Print only the first line.
awk '(NR<2)' < /etc/os-release
echo "CPU Count: $(nproc)"
log info Kubernetes
kubecolor version --short
log info Openreplay Component
kubecolor get po -n "${APP_NS}"
kubecolor get po -n "${DB_NS}"
return
log info OpenReplay Version
# awk '(NR<2)' < "$OR_DIR/vars.yaml"
awk '/fromVersion/{print $2}' <"${OR_DIR}/vars.yaml"
log info Disk
df -h /var
log info Memory
free -mh
log info CPU
uname -a
# Print only the first line.
awk '(NR<2)' </etc/os-release
echo "CPU Count: $(nproc)"
log info Kubernetes
kubecolor version --short
log info Openreplay Component
kubecolor get po -n "${APP_NS}"
kubecolor get po -n "${DB_NS}"
return
}
# Function to upgrade helm openreplay app.
function or_helm_upgrade() {
set -o pipefail
log_file="${tmp_dir}/helm.log"
state=$1
chart_names=(
toolings
openreplay
set -o pipefail
log_file="${tmp_dir}/helm.log"
state=$1
chart_names=(
toolings
openreplay
)
[[ $UPGRADE_OR_ONLY -eq 1 ]] && chart_names=( openreplay )
# Cleaning up toolings
[[ $CLEANUP_TOOLING -eq 1 ]] && {
helm uninstall toolings -n "$APP_NS"
}
if [[ $state == "reload" ]]; then
chart_names=( openreplay )
HELM_OPTIONS="${HELM_OPTIONS} --set skipMigration=true"
fi
for chart in "${chart_names[@]}"; do
[[ -z $OR_VERSION ]] || HELM_OPTIONS="${HELM_OPTIONS} --set dbMigrationUpstreamBranch=${OR_VERSION}"
log info helm upgrade --install "$chart" ./"$chart" -n "$APP_NS" --wait -f ./vars.yaml --atomic --debug $HELM_OPTIONS 2>&1 | tee -a "${log_file}"
if ! helm upgrade --install "$chart" ./"$chart" -n "$APP_NS" --wait -f ./vars.yaml --atomic --debug $HELM_OPTIONS 2>&1 | tee -a "${log_file}"; then
log err "
[[ $UPGRADE_OR_ONLY -eq 1 ]] && chart_names=(openreplay)
# Cleaning up toolings
[[ $CLEANUP_TOOLING -eq 1 ]] && {
helm uninstall toolings -n "$APP_NS"
}
if [[ $state == "reload" ]]; then
chart_names=(openreplay)
HELM_OPTIONS="${HELM_OPTIONS} --set skipMigration=true"
fi
for chart in "${chart_names[@]}"; do
[[ -z $OR_VERSION ]] || HELM_OPTIONS="${HELM_OPTIONS} --set dbMigrationUpstreamBranch=${OR_VERSION}"
log info helm upgrade --install "$chart" ./"$chart" -n "$APP_NS" --wait -f ./vars.yaml --atomic --debug $HELM_OPTIONS 2>&1 | tee -a "${log_file}"
if ! helm upgrade --install "$chart" ./"$chart" -n "$APP_NS" --wait -f ./vars.yaml --atomic --debug $HELM_OPTIONS 2>&1 | tee -a "${log_file}"; then
log err "
Installation failed, run ${BWHITE}cat ${log_file}${RED} for more info
If logs aren't verbose, run ${BWHITE}openreplay --status${RED}
If pods are in failed state, run ${BWHITE}openreplay --logs <pod-name>${RED}
"
fi
done
set +o pipefail
return
fi
done
set +o pipefail
return
}
function upgrade_old() {
old_vars_path="$1"
[[ -f $old_vars_path ]] || log err "No configuration file ${BWHITE}$old_vars_path${RED}.
old_vars_path="$1"
[[ -f $old_vars_path ]] || log err "No configuration file ${BWHITE}$old_vars_path${RED}.
If you're updating from version older than ${BWHITE}v1.10.0${RED}, for example ${BWHITE}v1.9.0${RED}:
${BWHITE}RELEASE_UPGRADE=1 openreplay --deprecated-upgrade ~/openreplay_v1.9.0/scripts/helmcharts/vars.yaml${RED}.
If you're having a custom installation,
${BWHITE}RELEASE_UPGRADE=1 openreplay --deprecated-upgrade /path/to/vars.yaml${RED}.
"
or_version=$(busybox awk '/fromVersion/{print $2}' < "${old_vars_path}")
sudo cp "${old_vars_path}" ${OR_DIR}/vars.yaml.backup."${or_version//\"}"_"$(date +%Y%m%d-%H%M%S)" || log err "Not able to copy old vars.yaml"
sudo cp "${old_vars_path}" ${OR_DIR}/vars.yaml || log err "Not able to copy old vars.yaml"
upgrade
or_version=$(busybox awk '/fromVersion/{print $2}' <"${old_vars_path}")
sudo cp "${old_vars_path}" ${OR_DIR}/vars.yaml.backup."${or_version//\"/}"_"$(date +%Y%m%d-%H%M%S)" || log err "Not able to copy old vars.yaml"
sudo cp "${old_vars_path}" ${OR_DIR}/vars.yaml || log err "Not able to copy old vars.yaml"
upgrade
}
function clone_repo() {
err_cd "$tmp_dir"
log info "Working directory $tmp_dir"
git_options="-b ${OR_VERSION:-main}"
log info "git clone ${OR_REPO} --depth 1 $git_options"
eval git clone "${OR_REPO}" --depth 1 $git_options
return
err_cd "$tmp_dir"
log info "Working directory $tmp_dir"
git_options="-b ${OR_VERSION:-main}"
log info "git clone ${OR_REPO} --depth 1 $git_options"
eval git clone "${OR_REPO}" --depth 1 $git_options
return
}
function install() {
domain_name=$1
# Check existing installation
[[ -f ${OR_DIR}/vars.yaml ]] && {
or_version=$(busybox awk '/fromVersion/{print $2}' < "${OR_DIR}/vars.yaml")
log err "Openreplay installation ${BWHITE}${or_version}${RED} found. If you want to upgrade, run ${BWHITE}openreplay -u${RED}"
}
# Installing OR
log title "Installing OpenReplay"
clone_repo
err_cd "$tmp_dir/openreplay/scripts/helmcharts"
DOMAIN_NAME=$domain_name bash init.sh
return
domain_name=$1
# Check existing installation
[[ -f ${OR_DIR}/vars.yaml ]] && {
or_version=$(busybox awk '/fromVersion/{print $2}' <"${OR_DIR}/vars.yaml")
log err "Openreplay installation ${BWHITE}${or_version}${RED} found. If you want to upgrade, run ${BWHITE}openreplay -u${RED}"
}
# Installing OR
log title "Installing OpenReplay"
clone_repo
err_cd "$tmp_dir/openreplay/scripts/helmcharts"
DOMAIN_NAME=$domain_name bash init.sh
return
}
function cleanup() {
# Confirmation for deletion. Do you want to delete Postgres/Minio(session) data before $date ?
delete_from_number_days=$1
delete_from_date=$(date +%Y-%m-%d -d "$delete_from_number_days day ago")
# Confirmation for deletion. Do you want to delete Postgres/Minio(session) data before $date ?
delete_from_number_days=$1
delete_from_date=$(date +%Y-%m-%d -d "$delete_from_number_days day ago")
# Check if --force flag is present
if [[ $2 == --force ]]; then
log info "Deleting data without confirmation..."
else
log debug "Do you want to delete the data captured on and before ${BWHITE}$delete_from_date${YELLOW}?"
read -p "Are you sure[y/n]? " -n 1 -r
echo # (optional) move to a new line
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
log err "Cancelling data deletion"
return 1 # Exit with an error code to indicate cancellation
# Check if --force flag is present
if [[ $2 == --force ]]; then
log info "Deleting data without confirmation..."
else
log debug "Do you want to delete the data captured on and before ${BWHITE}$delete_from_date${YELLOW}?"
read -p "Are you sure[y/n]? " -n 1 -r
echo # (optional) move to a new line
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
log err "Cancelling data deletion"
return 1 # Exit with an error code to indicate cancellation
fi
fi
fi
# Run pg cleanup
pguser=$(awk '/postgresqlUser/{print $2}' < "${OR_DIR}/vars.yaml" | xargs)
pgpassword=$(awk '/postgresqlPassword/{print $2}' < "${OR_DIR}/vars.yaml" | xargs)
pghost=$(awk '/postgresqlHost/{print $2}' < "${OR_DIR}/vars.yaml" | xargs)
pgport=$(awk '/postgresqlPort/{print $2}' < "${OR_DIR}/vars.yaml" | xargs)
pgdatabase=$(awk '/postgresqlDatabase/{print $2}' < "${OR_DIR}/vars.yaml" | xargs)
cleanup_query="DELETE FROM public.sessions WHERE start_ts < extract(epoch from '${delete_from_date}'::date) * 1000;"
[[ $EE ]] && cleanup_query="DELETE FROM public.sessions WHERE start_ts < extract(epoch from '${delete_from_date}'::date) * 1000 AND session_id NOT IN (SELECT session_id FROM user_favorite_sessions);"
kubectl delete po -n "${APP_NS}" pg-cleanup &> /dev/null || true
kubectl run pg-cleanup -n "${APP_NS}" \
--restart=Never \
--env PGHOST="$pghost"\
--env PGUSER="$pguser"\
--env PGDATABASE="$pgdatabase"\
--env PGPASSWORD="$pgpassword"\
--env PGPORT="$pgport"\
--image bitnami/postgresql -- psql -c "$cleanup_query"
# Run minio cleanup
MINIO_ACCESS_KEY=$(awk '/accessKey/{print $NF}' < "${OR_DIR}/vars.yaml" | tail -n1 | xargs)
MINIO_SECRET_KEY=$(awk '/secretKey/{print $NF}' < "${OR_DIR}/vars.yaml" | tail -n1 | xargs)
MINIO_HOST=$(awk '/endpoint/{print $NF}' < "${OR_DIR}/vars.yaml" | tail -n1 | xargs)
kubectl delete po -n "${APP_NS}" minio-cleanup &> /dev/null || true
kubectl run minio-cleanup -n "${APP_NS}" \
--restart=Never \
--env MINIO_HOST="$pghost" \
--image bitnami/minio:2020.10.9-debian-10-r6 -- /bin/sh -c "
# Run pg cleanup
pguser=$(awk '/postgresqlUser/{print $2}' <"${OR_DIR}/vars.yaml" | xargs)
pgpassword=$(awk '/postgresqlPassword/{print $2}' <"${OR_DIR}/vars.yaml" | xargs)
pghost=$(awk '/postgresqlHost/{print $2}' <"${OR_DIR}/vars.yaml" | xargs)
pgport=$(awk '/postgresqlPort/{print $2}' <"${OR_DIR}/vars.yaml" | xargs)
pgdatabase=$(awk '/postgresqlDatabase/{print $2}' <"${OR_DIR}/vars.yaml" | xargs)
cleanup_query="DELETE FROM public.sessions WHERE start_ts < extract(epoch from '${delete_from_date}'::date) * 1000;"
[[ $EE ]] && cleanup_query="DELETE FROM public.sessions WHERE start_ts < extract(epoch from '${delete_from_date}'::date) * 1000 AND session_id NOT IN (SELECT session_id FROM user_favorite_sessions);"
kubectl delete po -n "${APP_NS}" pg-cleanup &>/dev/null || true
kubectl run pg-cleanup -n "${APP_NS}" \
--restart=Never \
--env PGHOST="$pghost" \
--env PGUSER="$pguser" \
--env PGDATABASE="$pgdatabase" \
--env PGPASSWORD="$pgpassword" \
--env PGPORT="$pgport" \
--image bitnami/postgresql -- psql -c "$cleanup_query"
# Run minio cleanup
MINIO_ACCESS_KEY=$(awk '/accessKey/{print $NF}' <"${OR_DIR}/vars.yaml" | tail -n1 | xargs)
MINIO_SECRET_KEY=$(awk '/secretKey/{print $NF}' <"${OR_DIR}/vars.yaml" | tail -n1 | xargs)
MINIO_HOST=$(awk '/endpoint/{print $NF}' <"${OR_DIR}/vars.yaml" | tail -n1 | xargs)
kubectl delete po -n "${APP_NS}" minio-cleanup &>/dev/null || true
kubectl run minio-cleanup -n "${APP_NS}" \
--restart=Never \
--env MINIO_HOST="$pghost" \
--image bitnami/minio:2020.10.9-debian-10-r6 -- /bin/sh -c "
mc alias set minio $MINIO_HOST $MINIO_ACCESS_KEY $MINIO_SECRET_KEY &&
mc rm --recursive --dangerous --force --older-than ${delete_from_number_days}d minio/mobs
"
log info "Postgres data cleanup process initiated. Postgres will automatically vacuum deleted rows when the database is idle. This may take up a few days to free the disk space."
log info "Minio (where recordings are stored) cleanup process initiated."
log info "Run ${BWHITE}openreplay -s${GREEN} to check the status of the cleanup process and available disk space."
return
log info "Postgres data cleanup process initiated. Postgres will automatically vacuum deleted rows when the database is idle. This may take up a few days to free the disk space."
log info "Minio (where recordings are stored) cleanup process initiated."
log info "Run ${BWHITE}openreplay -s${GREEN} to check the status of the cleanup process and available disk space."
return
}
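For illustration, the cutoff arithmetic behind cleanup_query above; sessions.start_ts is compared against epoch milliseconds, hence the `* 1000`. The date is an example and the real query runs inside the pg-cleanup pod with the connection settings shown above:
date -d "30 day ago" +%Y-%m-%d                                     # e.g. 2023-12-27
psql -c "SELECT extract(epoch from '2023-12-27'::date) * 1000;"    # -> 1703635200000 (UTC)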
function upgrade() {
# TODO:
# 1. store vars.yaml in central place.
# 3. In upgrade you'll have to clone the repo
# 3. How to update package. Because openreplay -u will be done from old update script
# 4. Update from Version
exists git || log err "Git not found. Please install"
[[ -f ${OR_DIR}/vars.yaml ]] || log err "No configuration file ${BWHITE}${OR_DIR}/vars.yaml${RED}.
# TODO:
# 1. store vars.yaml in central place.
# 3. In upgrade you'll have to clone the repo
# 3. How to update package. Because openreplay -u will be done from old update script
# 4. Update from Version
exists git || log err "Git not found. Please install"
[[ -f ${OR_DIR}/vars.yaml ]] || log err "No configuration file ${BWHITE}${OR_DIR}/vars.yaml${RED}.
If you're updating from version older than ${BWHITE}v1.10.0${RED}, for example ${BWHITE}v1.9.0${RED}:
${BWHITE}RELEASE_UPGRADE=1 openreplay --deprecated-upgrade ~/openreplay_v1.9.0/scripts/helmcharts/vars.yaml${RED}.
If you're having a custom installation,
${BWHITE}RELEASE_UPGRADE=1 openreplay --deprecated-upgrade /path/to/vars.yaml${RED}.
"
or_version=$(busybox awk '/fromVersion/{print $2}' < "${OR_DIR}/vars.yaml") || {
log err "${BWHITE}${OR_DIR}/vars.yaml${RED} not found.
or_version=$(busybox awk '/fromVersion/{print $2}' <"${OR_DIR}/vars.yaml") || {
log err "${BWHITE}${OR_DIR}/vars.yaml${RED} not found.
Please do ${BWHITE}openreplay --deprecated-upgrade /path/to/vars.yaml${RED}
"
}
}
# Unless its upgrade release, always checkout same tag.
[[ $RELEASE_UPGRADE -eq 1 ]] || OR_VERSION=${OR_VERSION:-$or_version}
# Unless its upgrade release, always checkout same tag.
[[ $RELEASE_UPGRADE -eq 1 ]] || OR_VERSION=${OR_VERSION:-$or_version}
time_now=$(date +%m-%d-%Y-%I%M%S)
# Creating backup dir of current installation
[[ -d "$OR_DIR/openreplay" ]] && sudo mv "$OR_DIR/openreplay" "$OR_DIR/openreplay_${or_version//\"}_${time_now}"
time_now=$(date +%m-%d-%Y-%I%M%S)
# Creating backup dir of current installation
[[ -d "$OR_DIR/openreplay" ]] && sudo mv "$OR_DIR/openreplay" "$OR_DIR/openreplay_${or_version//\"/}_${time_now}"
clone_repo
err_cd openreplay/scripts/helmcharts
install_packages
[[ -d /openreplay ]] && sudo chown -R 1001:1001 /openreplay
clone_repo
err_cd openreplay/scripts/helmcharts
install_packages
[[ -d /openreplay ]] && sudo chown -R 1001:1001 /openreplay
# Merge prefrerences
cp $OR_DIR/vars.yaml old_vars.yaml
or_new_version=$(awk '/fromVersion/{print $2}' < "vars.yaml")
yq '(load("old_vars.yaml") | .. | select(tag != "!!map" and tag != "!!seq")) as $i ireduce(.; setpath($i | path; $i))' vars.yaml > new_vars.yaml
mv new_vars.yaml vars.yaml
or_helm_upgrade
# Merge prefrerences
cp $OR_DIR/vars.yaml old_vars.yaml
or_new_version=$(awk '/fromVersion/{print $2}' <"vars.yaml")
yq '(load("old_vars.yaml") | .. | select(tag != "!!map" and tag != "!!seq")) as $i ireduce(.; setpath($i | path; $i))' vars.yaml >new_vars.yaml
mv new_vars.yaml vars.yaml
or_helm_upgrade
# Update the version
busybox sed -i "s/fromVersion.*/fromVersion: ${or_new_version}/" vars.yaml
sudo mv ./openreplay-cli /bin/
sudo mv ./vars.yaml "$OR_DIR"
sudo cp -rf ../../../openreplay $OR_DIR/
log info "Configuration file is saved in /var/lib/openreplay/vars.yaml"
log info "Run ${BWHITE}openreplay -h${GREEN} to see the cli information to manage OpenReplay."
# Update the version
busybox sed -i "s/fromVersion.*/fromVersion: ${or_new_version}/" vars.yaml
sudo mv ./openreplay-cli /bin/
sudo mv ./vars.yaml "$OR_DIR"
sudo cp -rf ../../../openreplay $OR_DIR/
log info "Configuration file is saved in /var/lib/openreplay/vars.yaml"
log info "Run ${BWHITE}openreplay -h${GREEN} to see the cli information to manage OpenReplay."
err_cd -
return
err_cd -
return
}
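For illustration, a toy example of the yq merge used in upgrade(); keys and values are made up:
# old_vars.yaml : domainName: "or.example.com"
#                 postgresqlPassword: "oldpass"
# vars.yaml     : domainName: ""
#                 postgresqlPassword: "changeMePassword"
#                 newSetting: "added-in-this-release"
yq '(load("old_vars.yaml") | .. | select(tag != "!!map" and tag != "!!seq")) as $i ireduce(.; setpath($i | path; $i))' vars.yaml
# -> scalars from old_vars.yaml overwrite the new defaults, while keys that only
#    exist in the new vars.yaml (newSetting) are kept.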
function reload() {
err_cd $OR_DIR/openreplay/scripts/helmcharts
sudo cp -f $OR_DIR/vars.yaml .
or_helm_upgrade reload
return
err_cd $OR_DIR/openreplay/scripts/helmcharts
sudo cp -f $OR_DIR/vars.yaml .
or_helm_upgrade reload
return
}
function clean_tmp_dir() {
[[ -z $SKIP_DELETE_TMP_DIR ]] && rm -rf "${tmp_dir}"
[[ -z $SKIP_DELETE_TMP_DIR ]] && rm -rf "${tmp_dir}"
}
[[ -f $OR_DIR/packages.lock ]] || {
log title Installing packages "${NC}"
install_packages
log title Installing packages "${NC}"
install_packages
}
PARSED_ARGUMENTS=$(busybox getopt -a -n openreplay -o Rrevpi:uhsl:U:c: --long reload,edit,restart,verbose,install-packages,install:,upgrade,help,status,logs,deprecated-upgrade:,cleanup:,force -- "$@")
VALID_ARGUMENTS=$?
if [[ "$VALID_ARGUMENTS" != "0" ]]; then
help
exit 100
help
exit 100
fi
eval set -- "$PARSED_ARGUMENTS"
while :
do
case "$1" in
-v | --verbose) VERBOSE=1; echo $VERBOSE; clean_tmp_dir ; shift ;;
while :; do
case "$1" in
-v | --verbose)
VERBOSE=1
echo $VERBOSE
clean_tmp_dir
shift
;;
-h | --help)
help
clean_tmp_dir
exit 0
;;
help
clean_tmp_dir
exit 0
;;
-i | --install)
log title "Installing OpenReplay"
install "$2"
clean_tmp_dir
exit 0
;;
log title "Installing OpenReplay"
install "$2"
clean_tmp_dir
exit 0
;;
-p | --install-packages)
log title "Updating/Installing dependency packages"
install_packages
clean_tmp_dir
exit 0
;;
log title "Updating/Installing dependency packages"
install_packages
clean_tmp_dir
exit 0
;;
-u | --upgrade)
if [[ $RELEASE_UPGRADE -eq 1 ]]; then
log title "Upgrading OpenReplay to Latest Release"
CLEANUP_TOOLING=1
else
log title "Applying Latest OpenReplay Patches"
UPGRADE_OR_ONLY=${UPGRADE_OR_ONLY:-1}
fi
upgrade
clean_tmp_dir
exit 0
;;
if [[ $RELEASE_UPGRADE -eq 1 ]]; then
log title "Upgrading OpenReplay to Latest Release"
CLEANUP_TOOLING=1
else
log title "Applying Latest OpenReplay Patches"
UPGRADE_OR_ONLY=${UPGRADE_OR_ONLY:-1}
fi
upgrade
clean_tmp_dir
exit 0
;;
-U | --deprecated-upgrade)
log title "[Deprected] Upgrading OpenReplay"
upgrade_old "$2"
clean_tmp_dir
exit 0
;;
log title "[Deprected] Upgrading OpenReplay"
upgrade_old "$2"
clean_tmp_dir
exit 0
;;
-c | --cleanup)
log title "Cleaning up data older than $2 days"
cleanup "$2" "$3"
clean_tmp_dir
exit 0
;;
log title "Cleaning up data older than $2 days"
cleanup "$2" "$3"
clean_tmp_dir
exit 0
;;
-r | --restart)
log title "Restarting OpenReplay Components"
kubecolor rollout restart deployment -n "${APP_NS}"
kubecolor rollout status deployment -n "${APP_NS}"
clean_tmp_dir
exit 0
;;
log title "Restarting OpenReplay Components"
kubecolor rollout restart deployment -n "${APP_NS}"
kubecolor rollout status deployment -n "${APP_NS}"
clean_tmp_dir
exit 0
;;
-R | --reload)
log title "Reloading OpenReplay Components"
reload
clean_tmp_dir
exit 0
;;
log title "Reloading OpenReplay Components"
reload
clean_tmp_dir
exit 0
;;
-e | --edit)
log title "Editing OpenReplay"
[[ -f ${OR_DIR}/vars.yaml ]] || {
log err "
log title "Editing OpenReplay"
[[ -f ${OR_DIR}/vars.yaml ]] || {
log err "
Couldn't open ${BWHITE}${OR_DIR}/vars.yaml${RED}. Seems like a custom installation.
Edit the proper ${BWHITE}vars.yaml${RED} and run ${BWHITE}openreplay -R${RED}
Or ${BWHITE}helm upgrade openreplay -n app openreplay/scripts/helmcharts/openreplay -f openreplay/scripts/helmcharts/vars.yaml --debug --atomic"
exit 100
}
/var/lib/openreplay/busybox md5sum /var/lib/openreplay/vars.yaml > "${tmp_dir}/var.yaml.md5"
sudo vim -n ${OR_DIR}/vars.yaml
/var/lib/openreplay/yq 'true' /var/lib/openreplay/vars.yaml &> /dev/null || {
log debug "seems like the edit is not correct. Rerun ${BWHITE}openreplay -e${YELLOW} and fix the issue in config file."
exit 100
}
/var/lib/openreplay/busybox md5sum /var/lib/openreplay/vars.yaml >"${tmp_dir}/var.yaml.md5"
sudo vim -n ${OR_DIR}/vars.yaml
/var/lib/openreplay/yq 'true' /var/lib/openreplay/vars.yaml &>/dev/null || {
log debug "seems like the edit is not correct. Rerun ${BWHITE}openreplay -e${YELLOW} and fix the issue in config file."
clean_tmp_dir
exit 100
}
if /var/lib/openreplay/busybox md5sum -c "${tmp_dir}/var.yaml.md5" &>/dev/null; then
log info "No change detected in ${BWHITE}${OR_DIR}/vars.yaml${GREEN}. Not reloading"
else
reload
fi
clean_tmp_dir
exit 100
}
if /var/lib/openreplay/busybox md5sum -c "${tmp_dir}/var.yaml.md5" &> /dev/null; then
log info "No change detected in ${BWHITE}${OR_DIR}/vars.yaml${GREEN}. Not reloading"
else
reload
fi
clean_tmp_dir
exit 0
;;
exit 0
;;
-s | --status)
log title "Checking OpenReplay Components Status"
status
clean_tmp_dir
exit 0
;;
log title "Checking OpenReplay Components Status"
status
clean_tmp_dir
exit 0
;;
-l | --logs)
# Skipping double quotes because we want globbing. For example
# ./openreplay -l "chalice --tail 10"
stern -A --container-state=running,terminated $2
clean_tmp_dir
exit 0
;;
# Skipping double quotes because we want globbing. For example
# ./openreplay -l "chalice --tail 10"
stern -A --container-state=running,terminated $2
clean_tmp_dir
exit 0
;;
# -- means the end of the arguments; drop this, and break out of the while loop
--) shift; break ;;
--)
shift
break
;;
# If invalid options were passed, then getopt should have reported an error,
# which we checked as VALID_ARGUMENTS when getopt was called...
*)
echo "Unexpected option: $1 - this should not happen."
help
clean_tmp_dir
;;
esac
echo "Unexpected option: $1 - this should not happen."
help
clean_tmp_dir
;;
esac
done
[ $# -eq 0 ] && help


@@ -15,10 +15,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.1
version: 0.1.3
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
AppVersion: "v1.16.0"
AppVersion: "v1.16.1"


@@ -15,10 +15,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.7
version: 0.1.19
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
AppVersion: "v1.16.0"
AppVersion: "v1.16.12"


@@ -15,10 +15,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.1
version: 0.1.2
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
AppVersion: "v1.16.0"
AppVersion: "v1.16.1"


@@ -15,10 +15,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (frontends://semver.org/)
version: 0.1.10
version: 0.1.16
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
AppVersion: "v1.16.0"
AppVersion: "v1.16.6"


@@ -15,10 +15,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.1
version: 0.1.2
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
AppVersion: "v1.16.0"
AppVersion: "v1.16.1"


@@ -15,10 +15,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.1
version: 0.1.2
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
AppVersion: "v1.16.0"
AppVersion: "v1.16.1"


@@ -15,10 +15,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.1
version: 0.1.2
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
AppVersion: "v1.16.0"
AppVersion: "v1.16.1"


@@ -1,3 +1,17 @@
\set or_version 'v1.16.0'
SET client_min_messages TO NOTICE;
\set ON_ERROR_STOP true
SELECT EXISTS (SELECT 1
FROM information_schema.tables
WHERE table_schema = 'public'
AND table_name = 'tenants') AS db_exists;
\gset
\if :db_exists
\echo >DB already exists, stopping script
\echo >If you are trying to upgrade openreplay, please follow the instructions here: https://docs.openreplay.com/en/deployment/upgrade/
\q
\endif
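-- Note (illustration): \gset stores the boolean returned by the SELECT above into the
-- psql variable :db_exists, and \if :db_exists ... \q aborts a re-run against an
-- already-initialized database before any CREATE statement below executes.
-- A second invocation of this file (file name hypothetical), e.g.
--   psql -v ON_ERROR_STOP=1 -f init_schema.sql
-- only prints the two \echo notices and exits.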
BEGIN;
-- Schemas and functions definitions:
CREATE SCHEMA IF NOT EXISTS events_common;
@@ -6,11 +20,14 @@ CREATE SCHEMA IF NOT EXISTS events_ios;
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE EXTENSION IF NOT EXISTS pgcrypto;
SELECT format($fn_def$
CREATE OR REPLACE FUNCTION openreplay_version()
RETURNS text AS
$$
SELECT 'v1.16.0'
SELECT '%1$s'
$$ LANGUAGE sql IMMUTABLE;
$fn_def$, :'or_version')
\gexec
CREATE OR REPLACE FUNCTION generate_api_key(length integer) RETURNS text AS