first commit

This commit is contained in:
sujune 2024-12-05 07:40:39 +09:00
commit bcb7168824
16 changed files with 508 additions and 0 deletions

54
README.md Normal file

@ -0,0 +1,54 @@
# This code template uploads a file to S3 in the SDT Cloud environment and publishes an MQTT message with the result.
# Package installation
- The code uses the sdtclouds3 and sdtcloudpubsub packages. Install them with the command below.
```bash
$ pip install sdtclouds3 sdtcloudpubsub
```
# Environment setup
- Before running the code, you must log in to sdtcloud on the device:
```bash
sudo bwc-cli login
```
# Writing the code
## S3 code
- Write the functionality you want to run inside the runAction function.
- The uploadFile variable is the path of the file to upload to S3; it must include both the directory and the file name.
```python
uploadFile = "filepath/text.txt"
```
- When the upload completes, the URL of the file is returned.
```python
result = sdtcloudClient.uploadData(uploadFile)
print(result)
----------
https://<s3_bucket>/path/text.txt
```
- If the uploaded file is a PNG and you want the image to display directly when the URL is opened, modify the call as follows.
```python
result = sdtcloudClient.uploadData(uploadFile, {"ContentType": "image/png"})
```
- Several other options are available; see the [boto3 reference page](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-uploading-files.html). A short sketch follows below.
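The second argument to `uploadData` appears to be passed through as boto3 `ExtraArgs`; the sketch below works under that assumption, and the client constructor name is likewise an assumption (only `uploadData` itself appears in this template).
```python
import sdtclouds3

# Hypothetical client setup -- the constructor name is an assumption;
# only uploadData() is shown in this template.
sdtcloudClient = sdtclouds3.sdtclouds3()

uploadFile = "filepath/image.png"

# Assumption: the optional dict is forwarded to boto3's ExtraArgs.
# A few common keys from the boto3 upload guide linked above:
extra_args = {
    "ContentType": "image/png",        # serve inline as an image
    "ContentDisposition": "inline",    # hint browsers to display rather than download
    "CacheControl": "max-age=86400",   # cache for one day
}
result = sdtcloudClient.uploadData(uploadFile, extra_args)
print(result)
```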
## MQTT code
- Build the message to publish in the following payload variable (the full upload-and-publish flow is sketched after this section):
```python
msg = {
    "message": "Hello World",
    "uploadFile": result
}
```
- The message that is actually published looks like this:
```python
msg = {
    "data": {
        "message": "Hello World",
        "uploadFile": "https://<s3_bucket>/path/text.txt"
    },
    "timestamp": 12312311...
}
```
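A minimal end-to-end sketch of `runAction`, uploading a file and then publishing its URL over MQTT. The `sdtclouds3.sdtclouds3()` / `sdtcloudpubsub.sdtcloudpubsub()` constructors and the single-argument `pubMessage(msg)` call are assumptions inferred from this template, not verified package documentation; the package is assumed to add the `data` wrapper and `timestamp` shown above.
```python
import sdtclouds3
import sdtcloudpubsub

# Hypothetical setup calls -- the constructor names are assumptions;
# only uploadData() and pubMessage() appear in this template.
sdtcloudClient = sdtclouds3.sdtclouds3()
sdtcloudMqttClient = sdtcloudpubsub.sdtcloudpubsub()

def runAction():
    uploadFile = "filepath/text.txt"                # file to upload (path + name)
    result = sdtcloudClient.uploadData(uploadFile)  # returns the S3 URL

    msg = {
        "message": "Hello World",
        "uploadFile": result,
    }
    # The package is assumed to wrap this payload in "data" and add "timestamp".
    sdtcloudMqttClient.pubMessage(msg)

if __name__ == "__main__":
    runAction()
```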

13
config.json Normal file

@ -0,0 +1,13 @@
{
"model": {
"seed": 0,
"name": "vit_base_patch32_384",
"num_classes": 6,
"device": "cuda:0",
"ckpt_path": "./weights/vit_base_patch32_384_1.pth"},
"minio_bucket": "inference-app-models",
"minio_url": "http://43.200.53.170:31191",
"minio_access_key":"shWhLpEhJA8mlMLcldCT",
"minio_secret_key":"QONntgD3bww2CGVKKDz5Qtg3CWzP1FMqWyatBU5P",
"minio_region_name": "us-east-1"
}

0
data/test.txt Normal file

17
framework.yaml Normal file

@ -0,0 +1,17 @@
version: bwc/v2 # bwc version info.
spec:
  appName: inference-app # Name of the app.
  appType: inference
  runFile: main.py # Entry file of the app.
  env:
    bin: python3 # Binary that runs the app (varies by device, so check before setting).
    virtualEnv: inference-app-env # Name of the virtual environment to use.
    package: requirement.txt # File listing the Python packages to install (default: requirement.txt).
    runtime: python3.11.4
  stackbase:
    tagName: v1.0.6 # Release tag name in Stackbase (gitea).
    repoName: inference-app # Name of the repository stored in Stackbase (gitea).
  inference:
    weightFile: vit_base_patch32_384_type.pth
    bucket: inference-app-models
    path: vit_base_patch32_384

153
logs/log_inference.log Normal file

@ -0,0 +1,153 @@
2024-07-09 14:37:27,415 - grpc._server - ERROR - Exception calling application: sdtcloudpubsub.pubMessage() takes 2 positional arguments but 3 were given
Traceback (most recent call last):
File "/etc/sdt/venv/inference-app-env/lib/python3.11/site-packages/grpc/_server.py", line 494, in _call_behavior
response_or_iterator = behavior(argument, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sdt/Workspace/app_store_mlapp/inference-app/1.0.1/main.py", line 141, in UploadImage
runAction(request.filename, pred)
File "/home/sdt/Workspace/app_store_mlapp/inference-app/1.0.1/main.py", line 82, in runAction
sdtcloudMqttClient.pubMessage(mqttClient, data)
TypeError: sdtcloudpubsub.pubMessage() takes 2 positional arguments but 3 were given
2024-07-09 14:37:28,681 - grpc._server - ERROR - Exception calling application: sdtcloudpubsub.pubMessage() takes 2 positional arguments but 3 were given
Traceback (most recent call last):
File "/etc/sdt/venv/inference-app-env/lib/python3.11/site-packages/grpc/_server.py", line 494, in _call_behavior
response_or_iterator = behavior(argument, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sdt/Workspace/app_store_mlapp/inference-app/1.0.1/main.py", line 141, in UploadImage
runAction(request.filename, pred)
File "/home/sdt/Workspace/app_store_mlapp/inference-app/1.0.1/main.py", line 82, in runAction
sdtcloudMqttClient.pubMessage(mqttClient, data)
TypeError: sdtcloudpubsub.pubMessage() takes 2 positional arguments but 3 were given
2024-07-09 14:37:55,566 - grpc._server - ERROR - Exception calling application: sdtcloudpubsub.pubMessage() takes 2 positional arguments but 3 were given
Traceback (most recent call last):
File "/etc/sdt/venv/inference-app-env/lib/python3.11/site-packages/grpc/_server.py", line 494, in _call_behavior
response_or_iterator = behavior(argument, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sdt/Workspace/app_store_mlapp/inference-app/1.0.1/main.py", line 141, in UploadImage
runAction(request.filename, pred)
File "/home/sdt/Workspace/app_store_mlapp/inference-app/1.0.1/main.py", line 82, in runAction
sdtcloudMqttClient.pubMessage(mqttClient, data)
TypeError: sdtcloudpubsub.pubMessage() takes 2 positional arguments but 3 were given
2024-07-09 14:38:00,446 - grpc._server - ERROR - Exception calling application: sdtcloudpubsub.pubMessage() takes 2 positional arguments but 3 were given
Traceback (most recent call last):
File "/etc/sdt/venv/inference-app-env/lib/python3.11/site-packages/grpc/_server.py", line 494, in _call_behavior
response_or_iterator = behavior(argument, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sdt/Workspace/app_store_mlapp/inference-app/1.0.1/main.py", line 141, in UploadImage
runAction(request.filename, pred)
File "/home/sdt/Workspace/app_store_mlapp/inference-app/1.0.1/main.py", line 82, in runAction
sdtcloudMqttClient.pubMessage(mqttClient, data)
TypeError: sdtcloudpubsub.pubMessage() takes 2 positional arguments but 3 were given
2024-07-09 14:38:01,173 - grpc._server - ERROR - Exception calling application: sdtcloudpubsub.pubMessage() takes 2 positional arguments but 3 were given
Traceback (most recent call last):
File "/etc/sdt/venv/inference-app-env/lib/python3.11/site-packages/grpc/_server.py", line 494, in _call_behavior
response_or_iterator = behavior(argument, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sdt/Workspace/app_store_mlapp/inference-app/1.0.1/main.py", line 141, in UploadImage
runAction(request.filename, pred)
File "/home/sdt/Workspace/app_store_mlapp/inference-app/1.0.1/main.py", line 82, in runAction
sdtcloudMqttClient.pubMessage(mqttClient, data)
TypeError: sdtcloudpubsub.pubMessage() takes 2 positional arguments but 3 were given
2024-07-09 14:38:13,599 - grpc._server - ERROR - Exception calling application: sdtcloudpubsub.pubMessage() takes 2 positional arguments but 3 were given
Traceback (most recent call last):
File "/etc/sdt/venv/inference-app-env/lib/python3.11/site-packages/grpc/_server.py", line 494, in _call_behavior
response_or_iterator = behavior(argument, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sdt/Workspace/app_store_mlapp/inference-app/1.0.1/main.py", line 141, in UploadImage
runAction(request.filename, pred)
File "/home/sdt/Workspace/app_store_mlapp/inference-app/1.0.1/main.py", line 82, in runAction
sdtcloudMqttClient.pubMessage(mqttClient, data)
TypeError: sdtcloudpubsub.pubMessage() takes 2 positional arguments but 3 were given
2024-07-09 14:38:14,483 - grpc._server - ERROR - Exception calling application: sdtcloudpubsub.pubMessage() takes 2 positional arguments but 3 were given
Traceback (most recent call last):
File "/etc/sdt/venv/inference-app-env/lib/python3.11/site-packages/grpc/_server.py", line 494, in _call_behavior
response_or_iterator = behavior(argument, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sdt/Workspace/app_store_mlapp/inference-app/1.0.1/main.py", line 141, in UploadImage
runAction(request.filename, pred)
File "/home/sdt/Workspace/app_store_mlapp/inference-app/1.0.1/main.py", line 82, in runAction
sdtcloudMqttClient.pubMessage(mqttClient, data)
TypeError: sdtcloudpubsub.pubMessage() takes 2 positional arguments but 3 were given
2024-07-09 14:38:14,962 - grpc._server - ERROR - Exception calling application: sdtcloudpubsub.pubMessage() takes 2 positional arguments but 3 were given
Traceback (most recent call last):
File "/etc/sdt/venv/inference-app-env/lib/python3.11/site-packages/grpc/_server.py", line 494, in _call_behavior
response_or_iterator = behavior(argument, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sdt/Workspace/app_store_mlapp/inference-app/1.0.1/main.py", line 141, in UploadImage
runAction(request.filename, pred)
File "/home/sdt/Workspace/app_store_mlapp/inference-app/1.0.1/main.py", line 82, in runAction
sdtcloudMqttClient.pubMessage(mqttClient, data)
TypeError: sdtcloudpubsub.pubMessage() takes 2 positional arguments but 3 were given
2024-07-09 14:39:29,783 - grpc._server - ERROR - Exception calling application: sdtcloudpubsub.pubMessage() takes 2 positional arguments but 3 were given
Traceback (most recent call last):
File "/etc/sdt/venv/inference-app-env/lib/python3.11/site-packages/grpc/_server.py", line 494, in _call_behavior
response_or_iterator = behavior(argument, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sdt/Workspace/app_store_mlapp/inference-app/1.0.1/main.py", line 142, in UploadImage
runAction(request.filename, pred)
File "/home/sdt/Workspace/app_store_mlapp/inference-app/1.0.1/main.py", line 84, in runAction
sdtcloudMqttClient.pubMessage(mqttClient, data)
TypeError: sdtcloudpubsub.pubMessage() takes 2 positional arguments but 3 were given
2024-07-09 14:40:29,903 - grpc._server - ERROR - Exception calling application: runAction() missing 1 required positional argument: 'mqttClient'
Traceback (most recent call last):
File "/etc/sdt/venv/inference-app-env/lib/python3.11/site-packages/grpc/_server.py", line 494, in _call_behavior
response_or_iterator = behavior(argument, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sdt/Workspace/app_store_mlapp/inference-app/1.0.1/main.py", line 142, in UploadImage
runAction(request.filename, pred)
TypeError: runAction() missing 1 required positional argument: 'mqttClient'
2024-07-09 14:40:33,019 - grpc._server - ERROR - Exception calling application: runAction() missing 1 required positional argument: 'mqttClient'
Traceback (most recent call last):
File "/etc/sdt/venv/inference-app-env/lib/python3.11/site-packages/grpc/_server.py", line 494, in _call_behavior
response_or_iterator = behavior(argument, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sdt/Workspace/app_store_mlapp/inference-app/1.0.1/main.py", line 142, in UploadImage
runAction(request.filename, pred)
TypeError: runAction() missing 1 required positional argument: 'mqttClient'
2024-07-09 14:40:35,817 - grpc._server - ERROR - Exception calling application: runAction() missing 1 required positional argument: 'mqttClient'
Traceback (most recent call last):
File "/etc/sdt/venv/inference-app-env/lib/python3.11/site-packages/grpc/_server.py", line 494, in _call_behavior
response_or_iterator = behavior(argument, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sdt/Workspace/app_store_mlapp/inference-app/1.0.1/main.py", line 142, in UploadImage
runAction(request.filename, pred)
TypeError: runAction() missing 1 required positional argument: 'mqttClient'
2024-12-02 20:02:33,749 - root - INFO - DEVICE: cuda:0
2024-12-02 20:03:05,621 - root - INFO - DEVICE: cuda:0
2024-12-02 20:03:09,049 - root - INFO - 2024-12-02 20:03:08: filename = /home/sdt-dev1/Workspace/kimdy/request-app/data/test_1.jpg, predicted class = 자갈
2024-12-02 20:03:22,873 - root - INFO - 2024-12-02 20:03:22: filename = /home/sdt-dev1/Workspace/kimdy/request-app/data/test_1.jpg, predicted class = 자갈
2024-12-02 20:03:52,855 - root - INFO - DEVICE: cuda:0
2024-12-02 20:03:56,205 - root - INFO - 2024-12-02 20:03:56: filename = /home/sdt-dev1/Workspace/kimdy/request-app/data/test_1.jpg, predicted class = 자갈
2024-12-02 20:04:09,480 - root - INFO - DEVICE: cuda:0
2024-12-02 20:04:12,045 - root - INFO - 2024-12-02 20:04:11: filename = /home/sdt-dev1/Workspace/kimdy/request-app/data/test_1.jpg, predicted class = 모래
2024-12-02 20:04:18,297 - root - INFO - 2024-12-02 20:04:18: filename = /home/sdt-dev1/Workspace/kimdy/request-app/data/test_1.jpg, predicted class = 모래
2024-12-02 20:04:18,429 - root - INFO - 2024-12-02 20:04:18: filename = /home/sdt-dev1/Workspace/kimdy/request-app/data/test_1.jpg, predicted class = 모래
2024-12-02 20:04:18,574 - root - INFO - 2024-12-02 20:04:18: filename = /home/sdt-dev1/Workspace/kimdy/request-app/data/test_1.jpg, predicted class = 모래
2024-12-02 20:04:18,729 - root - INFO - 2024-12-02 20:04:18: filename = /home/sdt-dev1/Workspace/kimdy/request-app/data/test_1.jpg, predicted class = 모래
2024-12-02 20:04:18,878 - root - INFO - 2024-12-02 20:04:18: filename = /home/sdt-dev1/Workspace/kimdy/request-app/data/test_1.jpg, predicted class = 모래
2024-12-02 20:04:19,039 - root - INFO - 2024-12-02 20:04:19: filename = /home/sdt-dev1/Workspace/kimdy/request-app/data/test_1.jpg, predicted class = 모래
2024-12-02 20:04:19,155 - root - INFO - 2024-12-02 20:04:19: filename = /home/sdt-dev1/Workspace/kimdy/request-app/data/test_1.jpg, predicted class = 모래
2024-12-02 20:04:19,284 - root - INFO - 2024-12-02 20:04:19: filename = /home/sdt-dev1/Workspace/kimdy/request-app/data/test_1.jpg, predicted class = 모래
2024-12-02 20:04:19,446 - root - INFO - 2024-12-02 20:04:19: filename = /home/sdt-dev1/Workspace/kimdy/request-app/data/test_1.jpg, predicted class = 모래
2024-12-02 20:04:19,579 - root - INFO - 2024-12-02 20:04:19: filename = /home/sdt-dev1/Workspace/kimdy/request-app/data/test_1.jpg, predicted class = 모래
2024-12-02 20:04:19,733 - root - INFO - 2024-12-02 20:04:19: filename = /home/sdt-dev1/Workspace/kimdy/request-app/data/test_1.jpg, predicted class = 모래
2024-12-02 20:04:19,884 - root - INFO - 2024-12-02 20:04:19: filename = /home/sdt-dev1/Workspace/kimdy/request-app/data/test_1.jpg, predicted class = 모래
2024-12-02 20:04:20,031 - root - INFO - 2024-12-02 20:04:19: filename = /home/sdt-dev1/Workspace/kimdy/request-app/data/test_1.jpg, predicted class = 모래
2024-12-02 20:04:20,160 - root - INFO - 2024-12-02 20:04:20: filename = /home/sdt-dev1/Workspace/kimdy/request-app/data/test_1.jpg, predicted class = 모래
2024-12-02 20:04:20,302 - root - INFO - 2024-12-02 20:04:20: filename = /home/sdt-dev1/Workspace/kimdy/request-app/data/test_1.jpg, predicted class = 모래
2024-12-02 20:04:20,447 - root - INFO - 2024-12-02 20:04:20: filename = /home/sdt-dev1/Workspace/kimdy/request-app/data/test_1.jpg, predicted class = 모래
2024-12-03 09:39:36,395 - root - INFO - DEVICE: cuda:0
2024-12-03 10:01:28,013 - root - INFO - DEVICE: cuda:0
2024-12-03 10:01:48,534 - root - INFO - DEVICE: cuda:0
2024-12-03 10:02:40,259 - grpc._server - ERROR - Exception calling application: name 'runAction' is not defined
Traceback (most recent call last):
File "/etc/sdt/venv/inference-app-env/lib/python3.11/site-packages/grpc/_server.py", line 494, in _call_behavior
response_or_iterator = behavior(argument, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sdt-dev1/Workspace/kimdy/inference-app/main.py", line 122, in UploadImage
runAction(request.filename, pred)
^^^^^^^^^
NameError: name 'runAction' is not defined
2024-12-03 10:04:13,292 - root - INFO - DEVICE: cuda:0
2024-12-03 10:05:07,064 - root - INFO - 2024-12-03 10:05:06: filename = ./data/test_1.jpg, predicted class = 자갈
2024-12-03 10:06:08,233 - root - INFO - 2024-12-03 10:06:08: filename = ./data/test_1.jpg, predicted class = 자갈

0
logs/test.txt Normal file

141
main.py Normal file

@ -0,0 +1,141 @@
import grpc
from concurrent import futures
import utils.image_pb2 as pb2
import utils.image_pb2_grpc as pb2_grpc
from PIL import Image
import io
import datetime
from botocore.client import Config
import traceback
import logging
import logging.handlers
import boto3
import json
import timm
import os
import uuid
import random
import torch
from torchvision import models, transforms
###############################################
# Config #
###############################################
with open('./config.json','r') as f:
cfg = json.load(f)
SEED = cfg['model']['seed']
MODEL_NAME = cfg['model']['name']
NUM_CLASSES = cfg['model']['num_classes']
DEVICE_CFG = cfg['model']['device']
DEVICE = DEVICE_CFG if torch.cuda.is_available() else "cpu"
MODEL_CKPT = cfg['model']['ckpt_path']
MODEL_FILE_NAME = MODEL_CKPT.split('/')[-1]
CATEGORIES = {0: '모래',     # sand
              1: '자갈',     # gravel
              2: '덮개',     # cover (tarp)
              3: '빈차',     # empty truck
              4: '레미콘',   # ready-mix concrete truck
              5: '차량없음'}  # no vehicle
# Will no longer be used once model upload/download is available in bwc
MINIO_BUCKET = cfg['minio_bucket']
MINIO_URL = cfg['minio_url']
MINIO_ACC_KEY = cfg['minio_access_key']
MINIO_SCR_KEY = cfg['minio_secret_key']
MINIO_REGION = cfg['minio_region_name']
###############################################
# Logger Setting #
###############################################
logger = logging.getLogger()
logger.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
log_fileHandler = logging.handlers.RotatingFileHandler(
filename="./logs/log_inference.log",
maxBytes=1024000,
backupCount=3,
mode='a')
log_fileHandler.setFormatter(formatter)
logger.addHandler(log_fileHandler)
###############################################
# Model download #
###############################################
#model_storage = boto3.client('s3',
# endpoint_url=MINIO_URL,
# aws_access_key_id=MINIO_ACC_KEY,
# aws_secret_access_key=MINIO_SCR_KEY,
# config=Config(signature_version='s3v4'),
# region_name=MINIO_REGION)
#
## Download the model ckpt file from MinIO
#if not os.path.isfile(MODEL_CKPT):
# model_storage.download_file(MINIO_BUCKET,f'{MODEL_NAME}/{MODEL_FILE_NAME}', MODEL_CKPT)
# print('Model is downloaded')
###############################################
# Model Class #
###############################################
class Model:
def __init__(self, ckpt_path, num_classes, device):
logger.info(f"DEVICE: {device}")
self.model = timm.create_model(MODEL_NAME, pretrained=False, num_classes=num_classes).to(device)
self.model.load_state_dict(torch.load(ckpt_path, map_location=device))
self.device = device
self.transform = transforms.Compose([transforms.Resize((384, 384)),
transforms.ToTensor()])
def inference(self, image):
t_image = self.transform(image).unsqueeze(0)
with torch.no_grad():
self.model.eval()
inputs = t_image.to(self.device)
outputs = self.model(inputs)
preds = torch.argmax(outputs, dim=-1)
return preds.item()
class Inference_Agent(pb2_grpc.ImageServiceServicer):
def __init__(self, model):
self.model = model
def UploadImage(self, request, context):
image = Image.open(io.BytesIO(request.image_data)).convert("RGB")
now = datetime.datetime.now()
formatted_now = now.strftime("%Y-%m-%d %H:%M:%S")
with torch.no_grad():
            pred = self.model.inference(image)
#runAction(request.filename, pred)
logger.info(f'{formatted_now}: filename = {request.filename}, predicted class = {CATEGORIES[pred]}')
print(f'{formatted_now}: filename = {request.filename}, predicted class = {CATEGORIES[pred]}')
result = f"Predicted class = {CATEGORIES[pred]}"
return pb2.ImageResponse(message="Image Result", inference_result = result)
def serve(model):
server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
pb2_grpc.add_ImageServiceServicer_to_server(Inference_Agent(model), server)
server.add_insecure_port('[::]:50051')
server.start()
    print('Waiting for client...')
server.wait_for_termination()
if __name__ == "__main__":
    model = Model(MODEL_CKPT, NUM_CLASSES, DEVICE)
print('Model is loaded')
serve(model)

16
requirement.txt Normal file

@ -0,0 +1,16 @@
# Packages required by this app.
awscrt
awsiotsdk
grpcio==1.56.2
protobuf==4.25.0
Pillow==10.2.0
boto3
botocore
timm
numpy==1.24.4
pandas==2.0.3
opencv-python
torch==2.4.0 --index-url https://download.pytorch.org/whl/cu124
torchvision==0.19.0 --index-url https://download.pytorch.org/whl/cu124
torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu124
pyyaml

0
result/test.txt Normal file

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

18
utils/image.proto Normal file

@ -0,0 +1,18 @@
syntax = "proto3";
package image;
service ImageService {
rpc UploadImage (ImageRequest) returns (ImageResponse);
}
message ImageRequest {
    string filename = 1;
    bytes image_data = 2;
}

message ImageResponse {
    string message = 1;
    string inference_result = 2;
}
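For quick testing, a minimal gRPC client sketch against this service, using the generated stubs under `utils/`; the host/port match the insecure port opened in `main.py`, while the test image path is only an assumed example.
```python
import grpc

import utils.image_pb2 as pb2
import utils.image_pb2_grpc as pb2_grpc

def send_image(path, host="localhost:50051"):
    # Open an insecure channel to the inference server started by main.py.
    with grpc.insecure_channel(host) as channel:
        stub = pb2_grpc.ImageServiceStub(channel)
        with open(path, "rb") as f:
            request = pb2.ImageRequest(filename=path, image_data=f.read())
        response = stub.UploadImage(request)
        print(response.message, "-", response.inference_result)

if __name__ == "__main__":
    # Assumed example path; adjust to an image that exists on your device.
    send_image("./data/test_1.jpg")
```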

30
utils/image_pb2.py Normal file

@ -0,0 +1,30 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: image.proto
# Protobuf Python Version: 4.25.0
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x0bimage.proto\x12\x05image\"4\n\x0cImageRequest\x12\x10\n\x08\x66ilename\x18\x01 \x01(\t\x12\x12\n\nimage_data\x18\x02 \x01(\x0c\":\n\rImageResponse\x12\x0f\n\x07message\x18\x01 \x01(\t\x12\x18\n\x10inference_result\x18\x02 \x01(\t2H\n\x0cImageService\x12\x38\n\x0bUploadImage\x12\x13.image.ImageRequest\x1a\x14.image.ImageResponseb\x06proto3')
_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'image_pb2', _globals)
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
_globals['_IMAGEREQUEST']._serialized_start=22
_globals['_IMAGEREQUEST']._serialized_end=74
_globals['_IMAGERESPONSE']._serialized_start=76
_globals['_IMAGERESPONSE']._serialized_end=134
_globals['_IMAGESERVICE']._serialized_start=136
_globals['_IMAGESERVICE']._serialized_end=208
# @@protoc_insertion_point(module_scope)

66
utils/image_pb2_grpc.py Normal file

@ -0,0 +1,66 @@
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
import utils.image_pb2 as image__pb2
class ImageServiceStub(object):
"""Missing associated documentation comment in .proto file."""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.UploadImage = channel.unary_unary(
'/image.ImageService/UploadImage',
request_serializer=image__pb2.ImageRequest.SerializeToString,
response_deserializer=image__pb2.ImageResponse.FromString,
)
class ImageServiceServicer(object):
"""Missing associated documentation comment in .proto file."""
def UploadImage(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_ImageServiceServicer_to_server(servicer, server):
rpc_method_handlers = {
'UploadImage': grpc.unary_unary_rpc_method_handler(
servicer.UploadImage,
request_deserializer=image__pb2.ImageRequest.FromString,
response_serializer=image__pb2.ImageResponse.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'image.ImageService', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
# This class is part of an EXPERIMENTAL API.
class ImageService(object):
"""Missing associated documentation comment in .proto file."""
@staticmethod
def UploadImage(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/image.ImageService/UploadImage',
image__pb2.ImageRequest.SerializeToString,
image__pb2.ImageResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)