* feature(intelligent-search): Added API to connect to Llama.cpp on EC2 and filter the response into OR filters
* Updated SQL-to-filter script and added init.sql for tables
* feature(intelligent-search): Replaced llama.cpp with Llama on GPU, now contained in the API
* Updated Dockerfile to use the GPU and download the LLM from S3
* Added link to facebook/research/llama
* Updated Dockerfile
* Updated requirements and Dockerfile base images
* Fixed minor issues: removed unused variables, updated COPY, and replaced values
* fix(intelligent-search): Fixed WHERE statement filter
* feature(smart-charts): Added method to create charts using Llama
* style(intelligent-search): Changed attribute names to match the frontend format
* fix(intelligent-search): Fixed vulnerability in requirements and other small fixes
* Added some tests before deploying the service
* Added a semaphore to handle concurrency

---------

Co-authored-by: EC2 Default User <ec2-user@ip-10-0-2-226.eu-central-1.compute.internal>
25 lines
342 B
Text
# General utils
pydantic==2.3.0
requests==2.31.0
python-decouple==3.8
certifi==2023.7.22

# AWS utils
awscli==1.29.53

# ML modules
# torch==2.0.1
fairscale==0.4.13
sentencepiece==0.1.99

# Serving modules
fastapi==0.103.1
httpx==0.25.0
apscheduler==3.10.4
uvicorn==0.23.2

# Observability modules
traceloop-sdk==0.0.37

# Test
pytest==7.4.2