* Updated pg connector
* fix(player): fix first 8 byte checker
* fix(player): fix commit conflict
* Added pylint
* Removed pylint for incompatible license
* change(ui): check for sessions records state
* Patch/api v1.12.0 (#1299)
* fix(chalice): include metadata in sessions exp search
* fix(chalice): fixed sessions exp search wrong col name
* fix(chalice): removed cookies
* fix(chalice): changed base image to support SSO/xmlsec
* fix(chalice): changed Dockerfile to support SSO/xmlsec
* fix(chalice): changed Dockerfile to support SSO/xmlsec
(cherry picked from commit 4b8cf9742c)
* fix(ui): project fallback to recorded variable
* Patch/api v1.12.0 (#1301)
* fix(chalice): changed base image to support SSO/xmlsec
* fix(chalice): fixed exp search null metadata
(cherry picked from commit ab000751d2)
* change(ui): assist no content message styles and icons
* change(ui): revert menu disable
* fix(connector): Added method to save state in S3 for Redshift if SIGTERM arrives
* Rewriting Python code in Cython
* Added pyx module for messages
* Auto create pyx files
* Updated and fixed msgcodec.pyx
* Added new module to connector code
* Updated Kafka lib for base image
* Cleaned Dockerfile and updated base image version for pandas
* Cleaned prints
* Added code to fetch data from db and add it into redshift
* Updated consumer reading method. Async multithreading over sessionId
* Added split for country (Country,State,City)
* Fixed decoding issue for uint
* Created service able to fix Redshift data by reading from the DB
* Handle process exit caused by lost connection to PG; country set to country value only
* fix(connector): fixed bug of cache dict size error
* fix(connector): Added method to save state in S3 for Redshift if SIGTERM arrives
* fix(connector): Added exit signal handler and checkpoint method
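The exit-handler-plus-checkpoint pattern described above can be sketched as follows. This is an illustrative sketch, not the connector's actual code: the class name is hypothetical, and the real connector would upload the checkpoint payload to S3 (e.g. via boto3) rather than just serializing it.

```python
import json
import signal

class CheckpointingConsumer:
    """Hypothetical sketch: consume offsets and checkpoint state on SIGTERM."""

    def __init__(self):
        self.state = {"last_offset": 0}
        self.stopped = False
        # Register the handler so a SIGTERM triggers one final checkpoint
        # before the process exits.
        signal.signal(signal.SIGTERM, self._on_sigterm)

    def checkpoint(self):
        # The real connector would upload this payload to S3;
        # serializing to JSON keeps the sketch self-contained.
        return json.dumps(self.state)

    def _on_sigterm(self, signum, frame):
        self.stopped = True
        self.checkpoint()

    def process(self, offset):
        # Stop consuming once shutdown has been requested.
        if not self.stopped:
            self.state["last_offset"] = offset
```

On the next start the saved state would be read back so consumption resumes from the last checkpointed offset instead of losing in-flight progress.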
* Added sslmode selection for connection to database, added use_ssl parameter for S3 connection
* fix(connector): Handle error when session_id is broken
* Updated dependencies for Redshift connector, replaced the os module with python-decouple
* Updated service and images
* Updated message protocol; added an exception for BatchMetadata when version is 0 (the old read method is applied)
* Fixed load error from S3 to Redshift; null values for string columns are now empty strings ("")
* Added test file consumer_async.py: reads raw Kafka every 3 minutes and sends upload-to-cloud tasks in the background
* Added method to skip messages that were not inserted into the cloud
* Added logs to consumer_async. Changed urls and issues in the sessions table from list to string
* Split messages between the sessions table and the events table
* Updated redshift tables
* Fixed small issue in query redshift_sessions.sql
* Updated Dockerfiles. Cleaned consumer_async logs. Updated/fixed tables. Transformed NaN to NULL for VARCHAR columns
* Added error handler for dropped SQL connection
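The NaN-to-NULL transformation for VARCHAR columns might look roughly like this; `nan_to_null` and its arguments are illustrative, not the connector's actual API:

```python
import math

def nan_to_null(row, varchar_cols):
    """Map NaN in VARCHAR-bound columns to None, which the loader
    serializes as a SQL NULL (hypothetical helper, not the real API)."""
    out = dict(row)
    for col in varchar_cols:
        value = out.get(col)
        # pandas-style missing strings typically arrive as float('nan'),
        # which would otherwise be loaded as the literal text "nan".
        if isinstance(value, float) and math.isnan(value):
            out[col] = None
    return out
```

Mapping to None (rather than an empty string) lets the load layer distinguish "value absent" from "value present but empty" in the target table.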
* chore(docker): Optimize docker builds
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* Variables renamed
* Adding compression libraries
* Set default value of count events to 0 (instead of NULL) when event did not occur
* Added support for tracking specific projects. Added PG handler to connect to the sessions table
* Added method to update values in db connection for sessions ended and restarted
* Removing intelligent file copying
* chore(connector): Build file
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
* Adding connection pool for pg
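A connection pool along these lines can be sketched with a bounded queue; the sketch below is purely illustrative (a dummy factory is injected so it runs without a database), whereas real code would typically rely on psycopg2's built-in `psycopg2.pool` classes:

```python
import queue

class SimplePool:
    """Minimal connection-pool sketch (illustrative only; psycopg2 ships
    pool.SimpleConnectionPool / ThreadedConnectionPool for real use)."""

    def __init__(self, factory, size=4):
        self._pool = queue.Queue(maxsize=size)
        # Eagerly open `size` connections via the injected factory.
        for _ in range(size):
            self._pool.put(factory())

    def getconn(self, timeout=None):
        # Blocks until a connection is free, which also bounds
        # the number of concurrent server-side connections.
        return self._pool.get(timeout=timeout)

    def putconn(self, conn):
        # Return the connection for reuse instead of closing it,
        # avoiding a connect/teardown round trip per query.
        self._pool.put(conn)
```

Reusing pooled connections is the usual reason for this change: PG connection setup is comparatively expensive, so a consumer issuing many small writes benefits from keeping a fixed set of connections open.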
* Renaming and optimizing
* Fixed issue of missing information of sessions
---------
Signed-off-by: rjshrjndrn <rjshrjndrn@gmail.com>
Co-authored-by: rjshrjndrn <rjshrjndrn@gmail.com>