Commit Graph

59 Commits

Author SHA1 Message Date
Andreas Rheinhardt 69f120ead7 avcodec/avcodec: Don't include cpu.h
It is not used here at all; instead, add it where it is used without
including it or any of the arch-specific CPU headers.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2021-07-22 12:59:07 +02:00
Shubhanshu Saxena 0bc7ddc460 lavfi/dnn_backend_ov: Rename RequestItem to OVRequestItem
Rename RequestItem to OVRequestItem in the OpenVINO backend
to avoid confusion.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-07-22 08:13:14 +08:00
Shubhanshu Saxena 429954822c lavfi/dnn_backend_openvino.c: Fix Memory Leak in execute_model_ov
In cases where the execution inside the function execute_model_ov fails,
the OVRequestItem must be pushed back to the request_queue before returning
the error. In case pushing back fails, release the allocated memory.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-07-22 08:13:14 +08:00
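
A minimal sketch of that error path, using simplified stand-in types and helper names rather than the exact lavfi/dnn API:

    /* Sketch only: Queue, OVRequestItem and queue_push_back are simplified
     * stand-ins for the backend's real types and helpers. */
    #include <stdlib.h>

    typedef struct Queue Queue;                        /* opaque request pool     */
    typedef struct OVRequestItem { void *infer_request; } OVRequestItem;

    int queue_push_back(Queue *q, void *item);         /* assumed: <0 on failure  */

    /* On execution failure: return the request to the pool, or free it if
     * the push-back itself fails, so nothing is leaked. */
    static int recover_failed_request(Queue *request_queue, OVRequestItem *request)
    {
        if (queue_push_back(request_queue, request) < 0) {
            free(request->infer_request);
            free(request);
        }
        return -1; /* propagate the original error to the caller */
    }
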
Shubhanshu Saxena 08d8b3b631 lavfi/dnn_backend_tf: Request-based Execution
This commit uses TFRequestItem and the existing sync execution
mechanism to implement request-based execution. It will help in adding
async functionality to the TensorFlow backend later.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-07-11 20:12:27 +08:00
Shubhanshu Saxena f73943d514 lavfi/dnn_backend_openvino.c: Fix Memory Leak in execute_model_ov
In cases where the execution inside the function execute_model_ov fails,
push the RequestItem back to the request_queue before returning the error.
In case pushing back fails, release the allocated memory.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-07-04 18:56:17 +08:00
Guo Yejun 2cf95f2dd9 lavfi/dnn_backend_openvino.c: fix crash when target is not specified 2021-06-19 19:17:56 +08:00
Shubhanshu Saxena 2df963b5fa lavfi/dnn_backend_openvino.c: Fix Memory Leak for RequestItem
Fix memory leak for RequestItem upon error while pushing to the
request_queue in the completion callback.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-06-18 21:26:50 +08:00
Shubhanshu Saxena 5509235818 lavfi/dnn: Fill Task using Common Function
This commit adds a common function for filling the TaskItems
in all three backends.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-06-12 15:18:58 +08:00
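
A sketch of what such a shared helper might look like; the struct layouts and the helper name here are assumptions for illustration, not the exact lavfi/dnn definitions:

    /* Sketch only: field names follow related commits in this log
     * (output_names, nb_output, void *model) but may not match the tree. */
    #include <stdint.h>

    typedef struct TaskItem {
        void *model;                 /* backend-specific model stored as void* */
        struct AVFrame *in_frame;
        struct AVFrame *out_frame;
        const char *input_name;
        const char **output_names;   /* pointer to an array of output names    */
        uint32_t nb_output;
        int async;
        int do_ioproc;
    } TaskItem;

    typedef struct DNNExecBaseParams {
        const char *input_name;
        const char **output_names;
        uint32_t nb_output;
        struct AVFrame *in_frame;
        struct AVFrame *out_frame;
    } DNNExecBaseParams;

    /* One helper shared by the OpenVINO, TensorFlow and native backends. */
    static int fill_task(TaskItem *task, DNNExecBaseParams *p,
                         void *backend_model, int async, int do_ioproc)
    {
        task->model        = backend_model;
        task->in_frame     = p->in_frame;
        task->out_frame    = p->out_frame;
        task->input_name   = p->input_name;
        task->output_names = p->output_names;
        task->nb_output    = p->nb_output;
        task->async        = async;
        task->do_ioproc    = do_ioproc;
        return 0;
    }
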
Shubhanshu Saxena 9675ebbb91 lavfi/dnn: Add nb_output to TaskItem
Add nb_output property to TaskItem for use in TensorFlow backend
and Native backend.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-06-12 15:18:58 +08:00
Shubhanshu Saxena 446b4f77c1 lavfi/dnn: Convert output_name to char** in TaskItem
Convert output_name to char **output_names in TaskItem and use it as
a pointer to an array of output names in the DNN backend.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-06-12 15:18:58 +08:00
Shubhanshu Saxena f5ab8905fd lavfi/dnn: Extract TaskItem and InferenceItem from OpenVino Backend
Extract TaskItem and InferenceItem from the OpenVino backend and convert
the ov_model member to a void pointer in TaskItem.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-06-12 15:18:58 +08:00
Shubhanshu Saxena e41255cddb lavfi/dnn_backend_openvino.c: Correct Pointer Type while Freeing
This commit corrects the pointer type used for the elements popped from
the inference queue in ff_dnn_free_model_ov.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-05-28 08:40:07 +08:00
Guo, Yejun 4c705a2775 lavfi/dnn: refine code to separate processing and detection in backends 2021-05-24 09:09:34 +08:00
Guo, Yejun fc26dca64e lavfi/dnn: add classify support with openvino backend
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2021-05-06 10:50:44 +08:00
Guo, Yejun a3b74651a0 lavfi/dnn: refine dnn interface to add DNNExecBaseParams
Different model function types require different parameters; for
example, object detection detects lots of objects (cat/dog/...) in the
frame, while classification needs to know which object (cat or dog) it
is going to classify.

With the current interface, supporting a new requirement means adding a
new function with more parameters. With this change, we can instead add
a new struct (for example DNNExecClassifyParams) based on
DNNExecBaseParams, and continue to use the current execute_model
interface with only the params changed.
2021-05-06 10:50:44 +08:00
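
For instance, a classification-specific struct can embed the base params so the existing execute_model entry point keeps working. DNNExecClassifyParams is named in the commit message; the extra member shown beyond the base struct is an illustrative assumption:

    /* Sketch only: an extended params struct embedding DNNExecBaseParams,
     * so execute_model(..., DNNExecBaseParams*) can stay unchanged. */
    typedef struct DNNExecBaseParams {
        const char *input_name;
        const char **output_names;
        unsigned nb_output;
        struct AVFrame *in_frame;
        struct AVFrame *out_frame;
    } DNNExecBaseParams;

    typedef struct DNNExecClassifyParams {
        DNNExecBaseParams base;   /* kept first so a pointer cast is valid     */
        const char *target;       /* assumed: which detected label to classify */
    } DNNExecClassifyParams;
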
Guo, Yejun 7eb9accc37 lavfi/dnn_backend_openvino.c: move the logic for batch mode earlier 2021-05-06 10:50:44 +08:00
Guo, Yejun e37cc72387 lavfi/dnn_backend_openvino.c: add InferenceItem between TaskItem and RequestItem
There is one task item per function call from the DNN interface, and
one request item per call to OpenVINO. For classification, one task
might need multiple inferences, one for each detected bounding box, so
add InferenceItem.
2021-05-06 10:50:44 +08:00
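
A sketch of how the three levels might relate, with one task fanning out into multiple inferences that are carried to OpenVINO by requests; the struct names follow the commit message, the field details are assumptions:

    /* Sketch only: one TaskItem (one call through the dnn interface) can own
     * several InferenceItems (e.g. one per bounding box for classification);
     * each RequestItem carries inferences into one OpenVINO call. */
    #include <stdint.h>

    typedef struct TaskItem TaskItem;

    typedef struct InferenceItem {
        TaskItem *task;           /* owning task                              */
        uint32_t bbox_index;      /* which detected box this inference covers */
    } InferenceItem;

    typedef struct RequestItem {
        void *infer_request;      /* OpenVINO infer request handle            */
        InferenceItem **inferences;
        uint32_t inference_count;
    } RequestItem;
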
Guo, Yejun 1b5dc712cd lavfi/dnn_backend_openvino.c: unify code for infer request for sync/async 2021-05-06 10:50:44 +08:00
shubhanshu02 d98884be41 lavfi/dnn_backend_openvino.c: Spelling Correction in OpenVino Backend
Correct Spelling of the word `descibe` to `describe`
in init_model_ov

Signed-off-by: shubhanshu02 <shubhanshu.e01@gmail.com>
2021-04-25 09:02:54 +08:00
Guo, Yejun 13bf797ced lavfi/dnn: add post process for detection 2021-04-08 09:23:02 +08:00
Guo, Yejun 59021d79a2 lavfi/dnn: refine code for frame pre/proc processing 2021-04-08 09:23:02 +08:00
Guo, Yejun d2ccbc966b lavfi/dnn_backend_openvino.c: only allow DFT_PROCESS_FRAME to get output dim 2021-04-08 09:23:02 +08:00
Guo, Yejun da12d600ea lavfi/dnn_backend_openvino.c: fix mem leak for TaskItem upon error 2021-03-18 09:30:09 +08:00
Guo, Yejun df59ae8bb2 lavfi/dnn_backend_openvino.c: fix mem leak for RequestItem upon error 2021-03-18 09:30:09 +08:00
Guo, Yejun 41f4af16fc lavfi/dnn_backend_openvino.c: fix typo upon error 2021-03-18 09:30:09 +08:00
Guo, Yejun bd3ca0859e lavfi/dnn_backend_openvino.c: fix mem leak for input_blob and output_blob upon error 2021-03-18 09:30:09 +08:00
Guo, Yejun 3ce2ee7f54 lavfi/dnn_backend_openvino.c: fix mem leak for AVFrame upon error 2021-03-18 09:30:09 +08:00
Ting Fu b0d75a8de9 dnn_backend_openvino.c: allow out_frame as NULL for analytic case 2021-02-18 09:59:37 +08:00
Guo, Yejun 2da3a5c10f dnn: add color conversion for analytic case
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2021-02-18 09:59:37 +08:00
Guo, Yejun 76fc6879e2 dnn: add function type for model
So the backend knows whether the model is used for frame processing,
detection, classification, etc. Each function type has different
behavior in the backend when handling the model's input/output data.

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2021-02-18 09:59:37 +08:00
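
A sketch of such a function-type enum; DFT_PROCESS_FRAME appears elsewhere in this log, the other member names are assumptions:

    /* Sketch only: an enum telling the backend what the model is used for,
     * so input/output handling can differ per function type. */
    typedef enum DNNFunctionType {
        DFT_NONE,
        DFT_PROCESS_FRAME,        /* frame processing, e.g. super resolution */
        DFT_ANALYTICS_DETECT,     /* object detection                        */
        DFT_ANALYTICS_CLASSIFY,   /* object classification                   */
    } DNNFunctionType;
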
Guo, Yejun 995c33a046 dnn_backend_openvino.c: fix multi-thread issue for async execution
Once we mark the task as done in the function infer_completion_callback,
the task may be released by the function ff_dnn_get_async_result_ov in
another thread just after that, so we need to record the request queue
first, instead of using task->ov_model->request_queue later.

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2021-02-18 09:59:37 +08:00
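
A sketch of the ordering this fix establishes: the request queue pointer is read before the task is marked done, because another thread may free the task right after that point. The types and helpers are simplified stand-ins:

    /* Sketch only: simplified types; the point is the ordering, not the API. */
    typedef struct Queue Queue;
    typedef struct OVModel { Queue *request_queue; } OVModel;
    typedef struct TaskItem { OVModel *ov_model; int done; } TaskItem;
    typedef struct RequestItem RequestItem;

    int queue_push_back(Queue *q, void *item);   /* assumed helper */

    static void completion_callback(TaskItem *task, RequestItem *request)
    {
        /* record the queue BEFORE marking the task done ... */
        Queue *request_queue = task->ov_model->request_queue;

        task->done = 1;  /* ... because after this, another thread may free task */

        /* safe: no further dereference of task */
        queue_push_back(request_queue, request);
    }
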
Guo, Yejun 51c105a62d dnn_backend_openvino.c: fix mismatch between ffmpeg(NHWC) and openvino(NCHW)
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2021-02-18 09:59:37 +08:00
Guo, Yejun eccc7971c2 dnn_backend_openvino.c: remove extra semicolon 2021-01-28 09:45:13 +08:00
Guo, Yejun 06c01f1763 dnn: remove type cast which is not necessary 2021-01-28 09:45:13 +08:00
Guo, Yejun d4f40c1b60 dnn/queue: remove prefix FF for Queue and SafeQueue
We don't need the FF prefix for internal data structs.

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2021-01-22 08:28:13 +08:00
Guo, Yejun c5e30d588d libavfilter/dnn: add prefix ff_ for internal functions
from proc_from_frame_to_dnn to ff_proc_from_frame_to_dnn, and
from proc_from_dnn_to_frame to ff_proc_from_dnn_to_frame.

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2021-01-22 08:28:13 +08:00
Guo, Yejun 2d6af4a501 libavfilter/dnn: use avpriv_report_missing_feature for unsupported features
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2021-01-22 08:28:13 +08:00
Guo, Yejun 0d5fd4999a dnn_backend_openvino.c: add version mismatch reminder
The OpenVINO model file format changes when OpenVINO moves to a new
release; a model does not work if the versions of the model file and
the runtime are mismatched.

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2021-01-22 08:28:13 +08:00
Ting Fu 71b82e4ffd dnn/openvino: support model input resize
The OpenVINO API requires the input size to be specified when running
the model, while some OpenVINO models do accept different input sizes.
To enable this, add an input_resizable option for easier use.
Set the boolean option input_resizable to specify whether the input is
resizable:
input_resizable = 1 means input resize is supported, i.e. different
input sizes are accepted.
input_resizable = 0 (default) means input resize is not supported.
Please make sure the inference model does accept different input sizes
before using this option, otherwise the inference engine may report
error(s).
eg: ./ffmpeg -i video_name.mp4 -vf dnn_processing=dnn_backend=openvino:\
      model=model_name.xml:input=input_name:output=output_name:\
      options=device=CPU\&input_resizable=1 -y output_video_name.mp4

Signed-off-by: Ting Fu <ting.fu@intel.com>
2021-01-18 13:09:22 +08:00
Ting Fu 048d5cc620 dnn/openvino: refine code for better model initialization
Move the OpenVINO model/inference request creation and initialization
steps from ff_dnn_load_model_ov to a new function init_model_ov, in
preparation for later input resize support.

Signed-off-by: Ting Fu <ting.fu@intel.com>
2021-01-18 13:09:22 +08:00
Ting Fu 946fcd4508 dnn/openvino: remove unnecessary code
Signed-off-by: Ting Fu <ting.fu@intel.com>
2021-01-18 13:09:21 +08:00
Guo, Yejun 64ea15f050 libavfilter/dnn: add batch mode for async execution
The default batch_size is 1.

Signed-off-by: Xie, Lin <lin.xie@intel.com>
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2021-01-15 08:59:54 +08:00
Guo, Yejun 6b0cfa8399 dnn/queue: add error check and cleanup
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-12-31 08:31:17 +08:00
Guo, Yejun 8e78d5d394 dnn: fix redefining typedefs and also refine naming with correct prefix
The prefix for symbols not exported from the library and not
local to one translation unit is ff_ (or FF for types).

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-12-31 08:31:17 +08:00
Guo, Yejun 5024286465 dnn_interface: change from 'void *userdata' to 'AVFilterContext *filter_ctx'
'void *' is too flexible; since we can derive the needed info from
AVFilterContext *, we just unify the interface on this data
structure.

Signed-off-by: Xie, Lin <lin.xie@intel.com>
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-12-29 09:31:06 +08:00
Guo, Yejun e67b5d0a24 dnn: add async execution support for openvino backend
Signed-off-by: Xie, Lin <lin.xie@intel.com>
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-12-29 09:31:06 +08:00
Guo, Yejun 38089925fa dnn_backend_openvino.c: refine code for error handle
Signed-off-by: Xie, Lin <lin.xie@intel.com>
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-12-29 09:31:06 +08:00
Guo, Yejun 2b177033bb dnn_backend_openvino.c: separate function execute_model_ov
The functions fill_model_input_ov and infer_completion_callback are
extracted; this will help reuse them for async execution.

Signed-off-by: Xie, Lin <lin.xie@intel.com>
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-12-29 09:31:06 +08:00
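
A sketch of the resulting structure: execute_model_ov reduced to filling the input, running inference, and letting a completion callback handle the output, so the same pieces can be reused for async execution. The function names follow the commit message; the bodies and the start_inference helper are placeholders:

    /* Sketch only: the split described above, with placeholder declarations. */
    typedef struct RequestItem RequestItem;

    int fill_model_input_ov(RequestItem *request);        /* prepare input blob */
    void infer_completion_callback(RequestItem *request); /* read output blob   */
    int start_inference(RequestItem *request);            /* assumed helper     */

    static int execute_model_ov(RequestItem *request)
    {
        int ret = fill_model_input_ov(request);
        if (ret < 0)
            return ret;

        ret = start_inference(request);      /* synchronous here, async later */
        if (ret < 0)
            return ret;

        infer_completion_callback(request);  /* in async mode, called by the runtime */
        return 0;
    }
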
Chris Miceli 6bdfea8d4b libavfilter/dnn/dnn_backend{openvino, tf}: check memory alloc non-NULL
These calls previously did not check that the return value was non-NULL,
meaning the code was susceptible to a SIGSEGV. This checks those values.
2020-10-14 11:08:09 +08:00
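
The pattern being added is the usual check-after-allocate idiom; a minimal sketch with simplified names (the real code uses the av_* allocators):

    /* Sketch only: check the allocation result before use, so an
     * out-of-memory condition is reported instead of causing a SIGSEGV. */
    #include <stdlib.h>
    #include <errno.h>

    typedef struct TaskItem { void *model; } TaskItem;

    static int alloc_task(TaskItem **out)
    {
        TaskItem *task = calloc(1, sizeof(*task));
        if (!task)
            return -ENOMEM;
        *out = task;
        return 0;
    }
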
Guo, Yejun e71d73b096 dnn: add a new interface DNNModel.get_output
For some cases (for example, super resolution), the DNN model changes
the frame size, which impacts the filter behavior, so the filter needs
to know the output frame size at the very beginning.

Currently, the filter reuses DNNModule.execute_model to query the
output frame size; this is not clear from an interface perspective, so
add a new explicit interface DNNModel.get_output for such queries.
2020-09-21 21:26:56 +08:00
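
A sketch of what such a query interface might look like: given the input dimensions, return the output dimensions before any frame is processed. The exact signature and the tensor names used below are assumptions:

    /* Sketch only: the parameter list is illustrative, not the exact
     * lavfi/dnn signature. */
    typedef struct DNNModel {
        void *model;   /* backend-specific handle */
        int (*get_output)(void *model, const char *input_name,
                          int input_width, int input_height,
                          const char *output_name,
                          int *output_width, int *output_height);
    } DNNModel;

    /* Usage: a super-resolution filter can size its output frame up front. */
    static int query_output_size(DNNModel *m, int in_w, int in_h,
                                 int *out_w, int *out_h)
    {
        return m->get_output(m->model, "x", in_w, in_h, "y",
                             out_w, out_h);
    }
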