English | 简体中文
# PP-OCR Deployment

## Paddle Deployment Introduction

Paddle provides a variety of deployment schemes to meet the requirements of different scenarios. Please choose the one that fits your actual situation:
## PP-OCR Deployment

PP-OCR supports multiple deployment schemes. Click a link below for the corresponding tutorial; a minimal Python inference sketch follows the list.
- Python Inference
- C++ Inference
- Serving (Python/C++)
- Paddle-Lite (ARM CPU/OpenCL ARM GPU)
- Paddle.js
- Jetson Inference
- Paddle2ONNX
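
For orientation, here is a minimal sketch of the first option, Python inference through the `paddleocr` pip package. It assumes the package is installed (`pip install paddleocr`); the image path is a placeholder, and the exact nesting of the returned result may differ between package versions.

```python
# Minimal PP-OCR Python inference sketch using the `paddleocr` pip package
# (assumes `pip install paddleocr`; pretrained models are downloaded
# automatically on first use).
from paddleocr import PaddleOCR

# Build the full pipeline: text detection + angle classification + recognition.
ocr = PaddleOCR(use_angle_cls=True, lang="en")

# "your_image.jpg" is a placeholder path to a local image.
result = ocr.ocr("your_image.jpg", cls=True)

# Each detected line is [bounding_box, (text, confidence)]; the nesting of
# `result` can vary slightly between package versions.
for line in result[0]:
    box, (text, score) = line
    print(f"{score:.3f}\t{text}")
```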
If you need deployment tutorials for academic algorithm models other than PP-OCR, please go directly to the main page of the corresponding algorithm.