Commit Graph

251 Commits

Author SHA1 Message Date
Anton Khirnov
6d75d44d90 lavfi: drop internal.h
All that remains in it are things that belong in avfilter_internal.h.

Move them there and remove internal.h.
2024-08-19 21:48:04 +02:00
Anton Khirnov
1afe42852b lavfi/internal: move functions used by filters to filters.h
internal.h currently mixes interfaces intended to be used by filters
with those that should be limited to generic filter- or graph-level
code.
2024-08-19 21:45:25 +02:00
Wenbin Chen
7560db937d libavfilter/dnn: enable LibTorch xpu device option support
Add xpu device support to the libtorch backend.
To enable xpu support you need to add
 "-Wl,--no-as-needed -lintel-ext-pt-gpu -Wl,--as-needed" to
"--extra-libs" when configuring FFmpeg.

Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
2024-06-08 19:45:21 +08:00
Zhao Zhili
6de951923b avfilter/dnn: Remove a level of dereference
Code such as 'model->model = ov_model' is confusing. We can
just drop the member variable and use a cast to get the subclass.
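
A minimal self-contained sketch of the pattern (struct and field names are
illustrative, not the actual FFmpeg types):

    #include <stddef.h>

    typedef struct DNNModel {
        int common_state;          /* backend-independent part            */
    } DNNModel;

    typedef struct OVModel {
        DNNModel model;            /* base struct embedded first ...      */
        void *ov_specific_state;   /* ... then backend-specific fields    */
    } OVModel;

    static void backend_func(DNNModel *model)
    {
        /* the backend casts back to its subclass instead of keeping a
         * separate model->model back-pointer */
        OVModel *ov_model = (OVModel *)model;
        ov_model->ov_specific_state = NULL;
    }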

Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Reviewed-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-05-30 18:14:31 +08:00
Zhao Zhili
a1fea7e11b avfilter/dnn_backend_torch: Simplify memory allocation
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Reviewed-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-05-30 18:14:27 +08:00
Zhao Zhili
abfefbb33b avfilter/dnn_backend_tf: Simplify memory allocation
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Reviewed-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-05-30 18:14:21 +08:00
Zhao Zhili
a40df366c4 avfilter/dnn_backend_tf: Fix free context at random place
It will be freed again by ff_dnn_uninit.

Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Reviewed-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-05-30 18:14:17 +08:00
Zhao Zhili
d3db7bbc03 avfilter/dnn_backend_tf: Remove one level of indentation
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Reviewed-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-05-30 18:14:10 +08:00
Zhao Zhili
57a3c2cd40 avfilter/dnn_backend_openvino: simplify memory allocation
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Reviewed-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-05-30 18:14:07 +08:00
Zhao Zhili
ac52cee72e avfilter/dnn_backend_openvino: Fix free context at random place
It will be freed again by ff_dnn_uninit.

Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Reviewed-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-05-30 18:14:00 +08:00
Zhao Zhili
093f5da534 avfilter/dnn: Don't show backends which are not supported by a filter 2024-05-30 18:13:46 +08:00
Zhao Zhili
4f051c746b avfilter/dnn: Use dnn_backend_info_list to search for dnn module
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Reviewed-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-05-30 18:13:29 +08:00
Zhao Zhili
8c21f1e3b7 avfilter/dnn: Refactor DNN parameter configuration system
This patch tries to resolve multiple issues related to parameter
configuration:

First, each DNN filter duplicates DNN_COMMON_OPTIONS, which should be
the common options of the backends.

Second, backend options are hidden behind the scenes. They are exposed
to the user as an AV_OPT_TYPE_STRING backend_configs option and parsed
by each backend, so the help message doesn't tell us what kind of
options each backend supports.

Third, DNN backends duplicate DNN_BACKEND_COMMON_OPTIONS.

Last but not least, passing backend options via AV_OPT_TYPE_STRING
makes it hard, if not impossible, to pass AV_OPT_TYPE_BINARY to a backend.

This patch puts the backend common options and each backend's own options
inside DnnContext to reduce code duplication, make the options user
friendly, and make them easy to extend for future use cases.

For example,

./ffmpeg -h filter=dnn_processing

dnn_processing AVOptions:
   dnn_backend       <int>        ..FV....... DNN backend (from INT_MIN to INT_MAX) (default tensorflow)
     tensorflow      1            ..FV....... tensorflow backend flag
     openvino        2            ..FV....... openvino backend flag
     torch           3            ..FV....... torch backend flag

dnn_base AVOptions:
   model             <string>     ..F........ path to model file
   input             <string>     ..F........ input name of the model
   output            <string>     ..F........ output name of the model
   backend_configs   <string>     ..F.......P backend configs (deprecated)
   options           <string>     ..F.......P backend configs (deprecated)
   nireq             <int>        ..F........ number of request (from 0 to INT_MAX) (default 0)
   async             <boolean>    ..F........ use DNN async inference (default true)
   device            <string>     ..F........ device to run model

dnn_tensorflow AVOptions:
   sess_config       <string>     ..F........ config for SessionOptions

dnn_openvino AVOptions:
   batch_size        <int>        ..F........ batch size per request (from 1 to 1000) (default 1)
   input_resizable   <boolean>    ..F........ can input be resizable or not (default false)
   layout            <int>        ..F........ input layout of model (from 0 to 2) (default none)
     none            0            ..F........ none
     nchw            1            ..F........ nchw
     nhwc            2            ..F........ nhwc
   scale             <float>      ..F........ Add scale preprocess operation. Divide each element of input by specified value. (from INT_MIN to INT_MAX) (default 0)
   mean              <float>      ..F........ Add mean preprocess operation. Subtract specified value from each element of input. (from INT_MIN to INT_MAX) (default 0)

dnn_th AVOptions:
   optimize          <int>        ..F........ turn on graph executor optimization (from 0 to 1) (default 0)
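
With the options flattened like this, a backend parameter can, as a
hypothetical example (the model path is a placeholder), be given directly
on the filter line instead of through backend_configs:

./ffmpeg -i input.png -vf \
 dnn_processing=dnn_backend=openvino:model=model.xml:batch_size=4:device=CPU \
 -y output.png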

Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Reviewed-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-05-18 19:44:50 +08:00
Fei Wang
0534d2ac84 lavfi/dnn_backend_torch: Include mem.h
Fix build failure since 790f793844.

Signed-off-by: Fei Wang <fei.w.wang@intel.com>
2024-04-10 18:18:49 +08:00
Wenbin Chen
478d97f303 libavfilter/dnn_io_proc: Take step into consideration when crop frame
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-04-04 14:26:57 +08:00
Wenbin Chen
8869f5ce86 libavfilter/dnn_backend_openvino: Check bbox's height
Check the bbox's height against the frame's height rather than the frame's width.
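
A minimal sketch of the corrected check (names illustrative, not the exact
FFmpeg code):

    static int bbox_fits(int y, int h, int frame_height)
    {
        /* the box's bottom edge must be compared against the frame
         * height, not the frame width */
        return y + h <= frame_height;
    }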

Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-04-04 14:26:52 +08:00
Andreas Rheinhardt
790f793844 avutil/common: Don't auto-include mem.h
There are lots of files that don't need it: The number of object
files that actually need it went down from 2011 to 884 here.

Keep it for external users in order to not cause breakages.

Also improve the other headers a bit while at it.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-31 00:08:43 +01:00
Wenbin Chen
f4e0664fd1 libavfilter/dnn: add LibTorch as one of the DNN backends
PyTorch is an open source machine learning framework that accelerates
the path from research prototyping to production deployment. Official
website: https://pytorch.org/. The C++ library of PyTorch is called
LibTorch, and that name is used below.

To build FFmpeg with LibTorch, please take the following steps as
reference:
1. download the LibTorch C++ library from
 https://pytorch.org/get-started/locally/,
select C++/Java as the language, and other options as you need.
Please download the cxx11 ABI version:
 (libtorch-cxx11-abi-shared-with-deps-*.zip).
2. unzip the file to your own dir, with command
unzip libtorch-shared-with-deps-latest.zip -d your_dir
3. export libtorch_root/libtorch/include and
libtorch_root/libtorch/include/torch/csrc/api/include to $PATH
export libtorch_root/libtorch/lib/ to $LD_LIBRARY_PATH
4. configure FFmpeg with ../configure --enable-libtorch \
 --extra-cflag=-I/libtorch_root/libtorch/include \
 --extra-cflag=-I/libtorch_root/libtorch/include/torch/csrc/api/include \
 --extra-ldflags=-L/libtorch_root/libtorch/lib/
5. make

To run FFmpeg DNN inference with LibTorch backend:
./ffmpeg -i input.jpg -vf \
dnn_processing=dnn_backend=torch:model=LibTorch_model.pt -y output.jpg

The LibTorch_model.pt can be generated by Python with the torch.jit.script()
API. https://pytorch.org/tutorials/advanced/cpp_export.html is the official
PyTorch guide on how to convert and load a TorchScript model.
Please note that torch.jit.trace() is not recommended, since it does
not support ambiguous input sizes.

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-03-19 14:48:58 +08:00
Anton Khirnov
1e7d2007c3 all: use designated initializers for AVOption.unit
Makes it robust against adding fields before it, which will be useful in
following commits.

The majority of the patch was generated by the following Coccinelle script:

@@
typedef AVOption;
identifier arr_name;
initializer list il;
initializer list[8] il1;
expression tail;
@@
AVOption arr_name[] = { il, { il1,
- tail
+ .unit = tail
}, ...  };

with some manual changes, as the script:
* has trouble with options defined inside macros
* sometimes does not handle options under an #else branch
* sometimes swallows whitespace
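
For illustration, a minimal before/after sketch of one table entry (the
option itself is hypothetical):

Before:
    { "backend", "DNN backend", OFFSET(backend), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 2, FLAGS, "backend" },
After:
    { "backend", "DNN backend", OFFSET(backend), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 2, FLAGS, .unit = "backend" },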
2024-02-14 14:53:41 +01:00
Wenbin Chen
3de38b9da5 libavfilter/dnn_interface: use dims to represent shapes
For detect and classify output, width and height make no sense, so
change width and height to dims to represent the shape of the tensor.
Use layout and dims to get width, height and channel.
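
A simplified sketch of the mapping, assuming 4-dimensional dims
(illustrative helper, not the actual FFmpeg code):

    #include <stdint.h>

    static void get_whc(const int64_t dims[4], int is_nhwc,
                        int64_t *w, int64_t *h, int64_t *c)
    {
        if (is_nhwc) {            /* dims = { N, H, W, C } */
            *h = dims[1]; *w = dims[2]; *c = dims[3];
        } else {                  /* NCHW: dims = { N, C, H, W } */
            *c = dims[1]; *h = dims[2]; *w = dims[3];
        }
    }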

Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-01-28 11:18:06 +08:00
Wenbin Chen
c695de56b5 libavfilter/dnn_backend_openvino: Add automatic input/output detection
Now when using the openvino backend, the user doesn't need to set input/output
names on the command line. Model ports will be automatically detected.

For example:
ffmpeg -i input.png -vf \
dnn_detect=dnn_backend=openvino:model=model.xml:input=image:\
output=detection_out -y output.png

can be simplified to:
ffmpeg -i input.png -vf dnn_detect=dnn_backend=openvino:model=model.xml\
 -y output.png

Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-01-28 11:17:59 +08:00
Wenbin Chen
86435582a6 libavfilter/dnn_backend_openvino: Add dynamic output support
Add dynamic output support. Some models don't have a fixed output size;
it changes according to the result. Now openvino can run these kinds of
models.

Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2023-12-30 12:12:51 +08:00
Wenbin Chen
da02836b9d libavfilter/vf_dnn_detect: Add input pad
Add an input pad to get the model input resolution. Detection models
always have a fixed input size, and the output coordinates are based on
the input resolution, so we need the input size to map coordinates onto
our real output frames.
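
A rough sketch of the mapping this enables (names illustrative):

    static float map_coord(float model_coord, float model_size, float frame_size)
    {
        /* detector output coordinates are relative to the fixed model
         * input resolution and must be rescaled to the real frame */
        return model_coord * frame_size / model_size;
    }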

Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2023-12-16 21:50:37 +08:00
Wenbin Chen
22652b576c libavfilter/dnn_backend_openvino: Add multiple output support
Add multiple output support to the openvino backend. You can use '&' to
separate different outputs when you set the output names on the command line.
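
A hypothetical invocation requesting two outputs (model and output names
are placeholders; quote the argument so the shell doesn't interpret '&'):

./ffmpeg -i input.png -vf \
 "dnn_detect=dnn_backend=openvino:model=model.xml:output=boxes&scores" \
 -y output.png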

Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2023-12-16 21:50:16 +08:00
Wenbin Chen
47b2328076 libavfilter/vf_dnn_detect: Add yolo support
Add yolo support. A yolo model doesn't output the final result. It
outputs candidate boxes, so we need a post-processing step that removes
overlapping boxes to get the final results. Also, the box coordinates
relate to the cell and anchors, so we need this information to
calculate the boxes as well.

For model details please refer to: https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v2-tf
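
A rough sketch of the usual yolo-v2 style box decoding described above
(per the public yolo-v2 description; names illustrative, not the exact
FFmpeg code):

    #include <math.h>

    static float sigmoid(float x) { return 1.0f / (1.0f + expf(-x)); }

    /* Decode one candidate box from raw outputs tx/ty/tw/th of grid cell
     * (col, row) and one anchor; results are relative to the model input
     * and still need mapping to the frame plus non-maximum suppression. */
    static void decode_box(float tx, float ty, float tw, float th,
                           int col, int row, int grid_w, int grid_h,
                           float anchor_w, float anchor_h,
                           float *x, float *y, float *w, float *h)
    {
        *x = (col + sigmoid(tx)) / grid_w;
        *y = (row + sigmoid(ty)) / grid_h;
        *w = anchor_w * expf(tw) / grid_w;
        *h = anchor_h * expf(th) / grid_h;
    }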

Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2023-11-26 20:38:36 +08:00
Wenbin Chen
fa81de4af0 libavfilter/dnn/openvino: Reduce redundant memory allocation
We can get the data pointer directly from the tensor, so that the extra
memory allocation can be removed.
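
A sketch of the pattern (simplified, assuming the OpenVINO C API's
ov_tensor_data(); error handling omitted):

    void *data = NULL;
    if (ov_tensor_data(output_tensor, &data) == OK) {
        /* work on the tensor's memory in place instead of copying it
         * into a separately allocated buffer */
    }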

Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
2023-11-11 09:32:31 +08:00
Wenbin Chen
58b6c0c327 libavfilter/dnn: Initialize DNNData variables
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
2023-09-27 12:58:55 +08:00
Wenbin Chen
c8c925dc29 libavfilter/dnn: Add scale and mean preprocess to openvino backend
DNN models have different data preprocessing requirements. Scale and mean
parameters are added to preprocess the input data.
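
As a hedged example, mean=127.5 together with scale=127.5 would map 8-bit
input values to roughly [-1, 1], assuming the usual (x - mean) / scale
convention.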

Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
2023-09-27 12:58:55 +08:00
Wenbin Chen
74ce1d2d11 libavfilter/dnn: add layout option to openvino backend
DNN models have different input layouts (NCHW or NHWC), so a
"layout" option is added.
Use openvino's API to do layout conversion for the input data. Use swscale
to do layout conversion for the output data, as openvino doesn't have a
similar C API for output.

Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
2023-09-27 12:58:55 +08:00
Zhao Zhili
4f4dc0a1a2 avfilter/dnn_backend_openvino: fix wild pointer on error path
When ov_model_const_input_by_name/ov_model_const_output_by_name
fails, input_port/output_port can be a wild pointer.
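
A general sketch of the fix pattern (simplified; cleanup details omitted):

    /* initialize to NULL so a failed lookup cannot leave a wild pointer */
    ov_output_const_port_t *input_port = NULL;

    if (ov_model_const_input_by_name(ov_model, input_name, &input_port) != OK)
        goto err;   /* input_port is still NULL here, safe for cleanup to check */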

Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-09-15 13:02:15 +08:00
Zhao Zhili
791b88fcb4 avfilter/dnn_backend_openvino: fix input_port/output_port leaks 2023-09-15 13:02:15 +08:00
Zhao Zhili
37123100d2 avfilter/dnn_backend_openvino: fix leak of ov_shape_t 2023-09-15 13:02:15 +08:00
Zhao Zhili
d2c5c3b7ef avfilter/dnn_backend_openvino: fix leak or ov_core_t on error path 2023-09-15 13:02:15 +08:00
Zhao Zhili
e0880ef8cb avfilter/dnn_backend_openvino: fix use uninitialized values
Error handling was broken since neither `ret` nor `task` had been
initialized on the error path.
2023-09-15 13:02:15 +08:00
Zhao Zhili
7cb6329296 avfilter/dnn_backend_openvino: reduce indentation in free_model_ov
No functional changes except ensuring model isn't null.

Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-09-15 13:02:15 +08:00
Zhao Zhili
5369548f2e avfilter/dnn_backend_openvino: fix multiple memleaks
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-09-15 13:02:15 +08:00
Wenbin Chen
e79bd1f1b1 lavfi/dnn: Add OpenVINO API 2.0 support
OpenVINO API 2.0 was released in March 2022 and introduced new
features.
This commit implements the current OpenVINO features with the new 2.0 APIs;
other features in API 2.0 will be added later.
Please add the installation path, which includes openvino.pc, to
PKG_CONFIG_PATH manually for the new OpenVINO libs config.
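
For example (the installation prefix is a placeholder for your local
OpenVINO install):

export PKG_CONFIG_PATH=/opt/intel/openvino/runtime/lib/intel64/pkgconfig:$PKG_CONFIG_PATH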

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
2023-08-26 14:12:10 +08:00
Zhao Zhili
32a749c7a6 avfilter/dnn_backend_openvino: fix log message
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-06-08 10:50:44 +08:00
Zhao Zhili
3a5d95e3fa avfilter/dnn_backend_tf: silence implicit cast warning
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-06-08 10:50:24 +08:00
Zhao Zhili
b0c0fedcda avfilter/dnn_backend_tf: fix use of uninitialized value
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-06-08 10:50:24 +08:00
Zhao Zhili
d9f41a343e avfilter/dnn_backend_tf: check TF_OperationOutputType return value
This also fixed a warning: implicit conversion from enumeration
type 'TF_DataType' (aka 'enum TF_DataType') to different
enumeration type 'DNNDataType'.

Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-06-08 10:50:24 +08:00
Zhao Zhili
f3495ef4f8 avfilter/dnn_backend_tf: remove unused define
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-06-08 10:50:23 +08:00
Zhao Zhili
016f2f61c3 avfilter/dnn: add log context to ff_get_dnn_module
Print the backend type on failure.

Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-06-08 10:50:23 +08:00
Zhao Zhili
505c43bb65 avfilter/dnn: refactor ff_get_dnn_module to remove allocation
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-06-08 10:50:23 +08:00
Zhao Zhili
3f52b7eedc avfilter/dnn: define each backend as a DNNModule
To avoid exporting multiple functions for each backend
implementation.
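
A simplified sketch of the idea (member names and signatures are
illustrative, not necessarily the exact DNNModule layout):

    typedef struct DNNModule {
        /* one table of function pointers per backend instead of a set of
         * separately exported per-backend functions */
        void *(*load_model)(void *filter_ctx, const char *model_filename);
        int   (*execute_model)(void *model, void *exec_params);
        int   (*get_result)(void *model, void *output);
        void  (*free_model)(void **model);
    } DNNModule;

    /* each backend then exposes just one symbol, e.g.: */
    extern const DNNModule ff_dnn_backend_openvino;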

Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-06-08 10:50:23 +08:00
Ting Fu
78f95f1088 lavfi/dnn: Remove DNN native backend
According to the discussion in
https://etherpad.mit.edu/p/FF_dev_meeting_20221202 and the proposal in
http://ffmpeg.org/pipermail/ffmpeg-devel/2022-December/304534.html,
the DNN native backend should be removed as a first step.
All the DNN native backend related code is deleted.

Signed-off-by: Ting Fu <ting.fu@intel.com>
2023-04-28 11:07:41 +08:00
Ting Fu
7ed6f28a7c lavfi/dnn: modify dnn interface for removing native backend
The native backend will be removed in the following commits, so change the
dnn interface and modify the error messages in it first.

Signed-off-by: Ting Fu <ting.fu@intel.com>
2023-04-28 11:07:40 +08:00
Ting Fu
bc589c91f7 lavfi/dnn: add error info for TF backend filling task failure
Signed-off-by: Ting Fu <ting.fu@intel.com>
2023-03-26 09:19:42 +08:00
Ting Fu
af052f9066 lavfi/dnn: fix mem leak in TF backend error handle
Signed-off-by: Ting Fu <ting.fu@intel.com>
2023-03-26 09:19:42 +08:00
Ting Fu
5c216d081d lavfi/dnn: fix corruption when TF backend infer failed
Signed-off-by: Ting Fu <ting.fu@intel.com>
2023-03-26 09:19:42 +08:00