openreplay/ee/intelligent_search/test_main.py
MauricioGarciaS 16efb1316c
feat(intelligent-search): intelligent search service (#1545)
* feature(intelligent-search): Added API to connect to Llama.cpp in EC2 and filter the response into OR filters

* updated sql to filter script and added init.sql for tables

* feature(intelligent-search): Replaced llama.cpp with llama on GPU, now contained in the API

* Updated Dockerfile to use GPU and download LLM from S3

* Added link to facebook/research/llama

* Updated Dockerfile

* Updated requirements and Dockerfile base images

* fixed minor issues: unused variables, updated COPY and replaced values

* fix(intelligent-search): Fixed WHERE statement filter

* feature(smart-charts): Added method to create charts using llama. style(intelligent-search): Changed attribute names to match the frontend format. fix(intelligent-search): Fixed vulnerability in requirements and other small fixes

* Added some tests before deploying the service

* Added semaphore to handle concurrency

---------

Co-authored-by: EC2 Default User <ec2-user@ip-10-0-2-226.eu-central-1.compute.internal>
2023-10-25 10:13:58 +02:00

from os import path

from decouple import config
from fastapi.testclient import TestClient

from main import app

client = TestClient(app)


def test_alive():
    response = client.get("/")
    assert response.status_code == 200


def test_correct_download():
    # The model checkpoint and tokenizer should have been downloaded at build time.
    llm_dir = config('CHECKPOINT_DIR')
    tokenizer_path = config('TOKENIZER_PATH')
    assert path.exists(tokenizer_path)
    assert path.exists(llm_dir)


def test_correct_upload():
    # Enter the context manager so startup events run and the model is loaded.
    with TestClient(app) as client_startup:
        response = client_startup.post(
            '/llm/completion',
            headers={'Authorization': 'Bearer ' + config('LLAMA_API_AUTH_KEY', cast=str),
                     'Content-Type': 'application/json'},
            json={"question": "Show me the sessions from Texas", "userId": 0, "projectId": 0},
        )
        assert response.status_code == 200