

# Asynchronous Torch Serving using Celery

This project demonstrates how to serve a PyTorch model to users asynchronously using FastAPI, Celery, Redis, and RabbitMQ.

## Project Overview

- **PyTorch**: a ResNet-50 model with pretrained weights, used for image classification.
- **Celery**: a distributed task queue for Python that processes tasks asynchronously, making it well suited to background jobs such as model inference.
- **Redis**: an in-memory data store, used here to store and retrieve the results produced by the workers.
- **RabbitMQ**: a message broker that provides reliable message passing between the FastAPI application and the Celery workers.
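The components above fit together as a classic task-queue pattern: the API enqueues a job and returns a `task_id`, a worker runs inference, and the result store answers later status queries. A minimal pure-Python sketch of that lifecycle (a dict standing in for Redis, no real broker; all names here are illustrative, not taken from this repository):

```python
import uuid

# In-memory stand-in for the Redis result backend. In the real stack,
# Celery writes task state to Redis and the API reads it back.
results = {}

def submit(payload):
    """Enqueue a job and hand back a task_id, as /api/process does."""
    task_id = str(uuid.uuid4())
    results[task_id] = {"status": "PENDING", "result": None}
    return task_id

def run(task_id):
    """What a Celery worker would do: run inference, store the outcome."""
    results[task_id] = {"status": "SUCCESS", "result": "suit : 35%"}

def status(task_id):
    """Lookup backing /api/status/<task_id> and /api/result/<task_id>."""
    return results[task_id]

task_id = submit(b"image-bytes")
print(status(task_id)["status"])  # PENDING until a worker picks it up
run(task_id)
print(status(task_id)["status"])  # SUCCESS
```

The key property is that `submit` returns immediately; the expensive forward pass happens out of band, which is what keeps the API responsive under load.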

## Installation

1. Build the containers:

   ```shell
   make serve
   ```
2. Check the application health:

   ```shell
   curl -X 'GET' \
     'http://localhost:8000/health' \
     -H 'accept: application/json'
   ```

   Response:

   ```json
   {
     "health": "ok"
   }
   ```
3. Call the process API:

   ```shell
   curl -X 'POST' \
     'http://localhost:8000/api/process' \
     -H 'accept: application/json' \
     -H 'Content-Type: multipart/form-data' \
     -F '<file>'
   ```

   Response:

   ```json
   [
     {
       "task_id": "70471dd9-7cac-49a1-9088-dd95e4f2d2fe",
       "status": "PROCESSING",
       "url_result": "/api/result/70471dd9-7cac-49a1-9088-dd95e4f2d2fe"
     }
   ]
   ```
4. Check the task status in the queue:

   ```shell
   curl -X 'GET' \
     'http://localhost:8000/api/status/<task_id>' \
     -H 'accept: application/json'
   ```

   Response:

   ```json
   {
     "task_id": "70471dd9-7cac-49a1-9088-dd95e4f2d2fe",
     "status": "PENDING"
   }
   ```
5. Check the result:

   ```shell
   curl -X 'GET' \
     'http://localhost:8000/api/result/<task_id>' \
     -H 'accept: application/json'
   ```

   Response:

   ```json
   {
     "task_id": "70471dd9-7cac-49a1-9088-dd95e4f2d2fe",
     "status": "SUCCESS",
     "result": "suit : 35%"
   }
   ```
6. Stop the service:

   ```shell
   make stop
   ```
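The curl steps above can also be driven from a small Python client. This sketch uses only the standard library; the endpoint paths mirror the examples above, while the polling helper and function names are assumptions about how you might consume the API, not code from this repository:

```python
import json
import time
import urllib.request

BASE = "http://localhost:8000"

def parse_process_response(body: str):
    """Pull task_id and result URL out of the /api/process response,
    which (per the example above) is a one-element JSON list."""
    task = json.loads(body)[0]
    return task["task_id"], task["url_result"]

def poll_result(task_id: str, interval: float = 1.0, attempts: int = 30):
    """Poll /api/result/<task_id> until the task leaves PENDING.
    Requires the stack started by `make serve` to be running."""
    for _ in range(attempts):
        with urllib.request.urlopen(f"{BASE}/api/result/{task_id}") as resp:
            payload = json.loads(resp.read())
        if payload["status"] != "PENDING":
            return payload
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} still pending after {attempts} polls")

# Parsing the sample response shown in step 3:
sample = ('[{"task_id": "70471dd9-7cac-49a1-9088-dd95e4f2d2fe", '
          '"status": "PROCESSING", '
          '"url_result": "/api/result/70471dd9-7cac-49a1-9088-dd95e4f2d2fe"}]')
task_id, url = parse_process_response(sample)
print(task_id)  # 70471dd9-7cac-49a1-9088-dd95e4f2d2fe
```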