# openreplay/ee/connectors/db/api.py
from sqlalchemy import create_engine
from sqlalchemy import MetaData
from sqlalchemy.orm import sessionmaker, session
from contextlib import contextmanager
import logging
import os
from pathlib import Path

DATABASE = os.environ['DATABASE_NAME']
if DATABASE == 'redshift':
    import pandas_redshift as pr

base_path = Path(__file__).parent.parent

from db.models import Base

logger = logging.getLogger(__file__)
def get_class_by_tablename(tablename):
    """Return the model class mapped to a table name.

    Raises AttributeError if no class is found.
    :param tablename: String with name of table.
    :return: Class reference.
    """
    for c in Base._decl_class_registry.values():
        if hasattr(c, '__tablename__') and c.__tablename__ == tablename:
            return c
    raise AttributeError(f'No model with tablename "{tablename}"')
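A minimal, self-contained sketch of the lookup above. The `Event` model is hypothetical, and the registry access is written to work on both SQLAlchemy 1.3 (`Base._decl_class_registry`) and 1.4+ (`Base.registry._class_registry`):

```python
from sqlalchemy import Column, Integer

try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base  # SQLAlchemy 1.3

Base = declarative_base()


class Event(Base):
    # Hypothetical model, for illustration only
    __tablename__ = 'events'
    event_id = Column(Integer, primary_key=True)


def get_class_by_tablename(tablename):
    # 1.3 exposes _decl_class_registry directly; 1.4+ moved it under Base.registry
    registry = getattr(Base, '_decl_class_registry', None)
    if registry is None:
        registry = Base.registry._class_registry
    for c in registry.values():
        # The registry also holds non-model markers; filter on __tablename__
        if hasattr(c, '__tablename__') and c.__tablename__ == tablename:
            return c
    raise AttributeError(f'No model with tablename "{tablename}"')
```

The `hasattr` filter matters because the registry also contains internal marker objects, not only mapped classes.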
class DBConnection:
    """
    Initializes a connection to a database.

    To regenerate the models file, use:
        sqlacodegen --outfile models_universal.py mysql+pymysql://{user}:{pwd}@{address}
    """
    _sessions = sessionmaker()

    def __init__(self, config) -> None:
        self.metadata = MetaData()
        self.config = config
        if config == 'redshift':
            self.pdredshift = pr
            self.pdredshift.connect_to_redshift(dbname=os.environ['schema'],
                                                host=os.environ['address'],
                                                port=os.environ['port'],
                                                user=os.environ['user'],
                                                password=os.environ['password'])
            self.pdredshift.connect_to_s3(aws_access_key_id=os.environ['aws_access_key_id'],
                                          aws_secret_access_key=os.environ['aws_secret_access_key'],
                                          bucket=os.environ['bucket'],
                                          subdirectory=os.environ['subdirectory'])
            self.connect_str = os.environ['connect_str'].format(
                user=os.environ['user'],
                password=os.environ['password'],
                address=os.environ['address'],
                port=os.environ['port'],
                schema=os.environ['schema']
            )
            self.engine = create_engine(self.connect_str)
        elif config == 'clickhouse':
            self.connect_str = os.environ['connect_str'].format(
                address=os.environ['address'],
                database=os.environ['database']
            )
            self.engine = create_engine(self.connect_str)
        elif config == 'pg':
            self.connect_str = os.environ['connect_str'].format(
                user=os.environ['user'],
                password=os.environ['password'],
                address=os.environ['address'],
                port=os.environ['port'],
                database=os.environ['database']
            )
            self.engine = create_engine(self.connect_str)
        elif config == 'bigquery':
            pass  # no engine is set up here for BigQuery
        elif config == 'snowflake':
            self.connect_str = os.environ['connect_str'].format(
                user=os.environ['user'],
                password=os.environ['password'],
                account=os.environ['account'],
                database=os.environ['database'],
                schema=os.environ['schema'],
                warehouse=os.environ['warehouse']
            )
            self.engine = create_engine(self.connect_str)
        else:
            raise ValueError(f"Unknown db configuration '{config}'. Add it to the keys file.")
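Each branch above fills a `connect_str` template taken from the environment. The template itself is deployment-specific; as an illustration (the URL shape and credentials below are made up), the `'pg'` branch expects something along these lines:

```python
# Hypothetical example of the kind of template the 'pg' branch formats;
# the real value comes from the connect_str environment variable.
connect_str = 'postgresql://{user}:{password}@{address}:{port}/{database}'

url = connect_str.format(user='openreplay', password='secret',
                         address='localhost', port=5432, database='app')
# url == 'postgresql://openreplay:secret@localhost:5432/app'
```

The same pattern holds for the redshift, clickhouse, and snowflake branches, each with its own set of placeholders.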
    @contextmanager
    def get_test_session(self, **kwargs):
        """
        Test session context: nothing is persisted to the db, even after commits.

        :Keyword Arguments:
            * autoflush (``bool``) -- default: True
            * autocommit (``bool``) -- default: False
            * expire_on_commit (``bool``) -- default: True
        """
        connection = self.engine.connect()
        transaction = connection.begin()
        my_session = type(self)._sessions(bind=connection, **kwargs)
        try:
            yield my_session
        finally:
            # Always clean up, whatever happens: close the session,
            # roll the transaction back, and release the connection
            my_session.close()
            transaction.rollback()
            connection.close()
    @contextmanager
    def get_live_session(self):
        """
        A session that can be committed: changes are persisted to the database.
        """
        connection = self.engine.connect()
        my_session = type(self)._sessions(bind=connection)
        try:
            yield my_session
        finally:
            # Close the session and release the connection even on error
            my_session.close()
            connection.close()
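A hedged usage sketch of the live-session pattern above. `DBConnection` itself needs `DATABASE_NAME` and the connection env vars, so this standalone version substitutes an in-memory SQLite engine for the env-configured one:

```python
from contextlib import contextmanager
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

# In-memory SQLite engine stands in for the env-configured engine
engine = create_engine('sqlite://')
_sessions = sessionmaker()


@contextmanager
def get_live_session():
    # Same shape as DBConnection.get_live_session: bind a session to a
    # connection, yield it, and always clean up afterwards
    connection = engine.connect()
    my_session = _sessions(bind=connection)
    try:
        yield my_session
    finally:
        my_session.close()
        connection.close()


with get_live_session() as s:
    value = s.execute(text('SELECT 1')).scalar()
```

Wrapping the `yield` in `try`/`finally` is what guarantees the session and connection are released even if the caller's block raises.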