openreplay/api/chalicelib/utils/s3.py

from botocore.exceptions import ClientError
from chalicelib.utils.helper import environ
from datetime import datetime, timedelta
import boto3
import botocore
from botocore.client import Config
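
# Thin helpers around boto3 for the S3-compatible object store: existence
# checks, presigned GET/PUT URLs, object download, rename (copy + delete) and
# scheduling objects for deletion. Endpoint, credentials and signature version
# are taken from the environment (S3_HOST, S3_KEY, S3_SECRET).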
client = boto3.client('s3', endpoint_url=environ["S3_HOST"],
                      aws_access_key_id=environ["S3_KEY"],
                      aws_secret_access_key=environ["S3_SECRET"],
                      config=Config(signature_version='s3v4'),
                      region_name='us-east-1')


def exists(bucket, key):
    try:
        boto3.resource('s3', endpoint_url=environ["S3_HOST"],
                       aws_access_key_id=environ["S3_KEY"],
                       aws_secret_access_key=environ["S3_SECRET"],
                       config=Config(signature_version='s3v4'),
                       region_name='us-east-1') \
            .Object(bucket, key).load()
    except botocore.exceptions.ClientError as e:
        if e.response['Error']['Code'] == "404":
            return False
        else:
            # Something else has gone wrong.
            raise
    return True


def get_presigned_url_for_sharing(bucket, expires_in, key, check_exists=False):
    if check_exists and not exists(bucket, key):
        return None
    return client.generate_presigned_url(
        'get_object',
        Params={
            'Bucket': bucket,
            'Key': key
        },
        ExpiresIn=expires_in
    )


def get_presigned_url_for_upload(bucket, expires_in, key):
    return client.generate_presigned_url(
        'put_object',
        Params={
            'Bucket': bucket,
            'Key': key
        },
        ExpiresIn=expires_in
    )
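
# Usage sketch (not part of this module): the bucket and key names below are
# hypothetical. A caller sharing a recording could do something like
#
#     url = get_presigned_url_for_sharing("mobs", 3600, session_key, check_exists=True)
#     if url is None:
#         ...  # object missing: report the recording as unavailable
#
# while an uploader would PUT its payload directly to the URL returned by
# get_presigned_url_for_upload, without routing the file through the API.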


def get_file(source_bucket, source_key):
    print("******************************")
    print(f"looking for: {source_key} in {source_bucket}")
    print("******************************")
    try:
        result = client.get_object(
            Bucket=source_bucket,
            Key=source_key
        )
    except ClientError as ex:
        if ex.response['Error']['Code'] == 'NoSuchKey':
            print(f'======> No object found - returning None for {source_bucket}/{source_key}')
            return None
        else:
            raise ex
    return result["Body"].read().decode()
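
# Note: get_file reads the whole object body into memory and decodes it as
# UTF-8, so it is only suited to reasonably small text objects.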


def rename(source_bucket, source_key, target_bucket, target_key):
    # S3 has no native rename: copy the object to the target key, then delete the source.
    s3 = boto3.resource('s3', endpoint_url=environ["S3_HOST"],
                        aws_access_key_id=environ["S3_KEY"],
                        aws_secret_access_key=environ["S3_SECRET"],
                        config=Config(signature_version='s3v4'),
                        region_name='us-east-1')
    s3.Object(target_bucket, target_key).copy_from(CopySource=f'{source_bucket}/{source_key}')
    s3.Object(source_bucket, source_key).delete()


def schedule_for_deletion(bucket, key):
    s3 = boto3.resource('s3', endpoint_url=environ["S3_HOST"],
                        aws_access_key_id=environ["S3_KEY"],
                        aws_secret_access_key=environ["S3_SECRET"],
                        config=Config(signature_version='s3v4'),
                        region_name='us-east-1')
    s3_object = s3.Object(bucket, key)
    s3_object.copy_from(CopySource={'Bucket': bucket, 'Key': key},
                        Expires=datetime.now() + timedelta(days=7),
                        MetadataDirective='REPLACE')
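

# schedule_for_deletion only rewrites the object's metadata: the copy-onto-itself
# sets an Expires header seven days out, which by itself does not remove anything.
# A minimal sketch (an assumption, not part of this module) of the kind of bucket
# lifecycle rule that would actually purge such objects; the helper name, rule id
# and prefix filter are hypothetical (the DEL_ prefix mirrors the convention used
# for deleted mobs, adjust to the deployment):
def _ensure_expiration_rule(bucket, prefix="DEL_", days=7):
    client.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={
            'Rules': [{
                'ID': 'expire-scheduled-deletions',
                'Filter': {'Prefix': prefix},
                'Status': 'Enabled',
                'Expiration': {'Days': days},
            }]
        })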