[TGL][EHL][ADL] OpenVINO validation

Bug #1958866 reported by apoorv
Affects                Status  Importance  Assigned to  Milestone
intel                  New     Undecided   Unassigned
Lookout-canyon-series  New     Undecided   Unassigned

Bug Description

Our proposal for covering as many models as possible in a reasonable time is to download models via the Open Model Zoo (OMZ) and then run the benchmarking application against them. As OMZ showcases models for different use cases (image processing, language processing, etc.), we can select a representative set of models, starting with e.g. ResNet and BERT, which are currently widely used across different HW programs. The model set can later be adjusted to fit our needs and the time available.

Steps to run:
• install OpenVINO via pip <Intel will share the OpenVINO package to be installed>
• download and convert a model
• run the benchmarking application

Running the Python benchmarking application looks like:
cd /tmp
<Intel will share the benchmarking apps to be used>
pip3 install openvino --find-links=/tmp
pip3 install openvino-dev[caffe,kaldi,mxnet,onnx,pytorch,tensorflow2] --find-links=/tmp
omz_downloader --name alexnet # or resnet-50-tf or bert-base-ner or whatever from https://github.com/openvinotoolkit/open_model_zoo/tree/master/models
omz_converter --name alexnet --precision FP32 # models in the Intel scope do not require conversion
benchmark_app -m ./public/alexnet/FP32/alexnet.xml
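The three commands above can be chained into a small driver script. A minimal sketch, assuming `omz_downloader`, `omz_converter` and `benchmark_app` are already on the PATH from the pip installation above; `subprocess.run(check=True)` makes any non-zero exit code abort the run:

```python
import subprocess

def run_step(cmd):
    """Run one pipeline step; raises CalledProcessError on a non-zero exit code."""
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)

def run_pipeline(model="alexnet"):
    """Download, convert and benchmark one Open Model Zoo model."""
    run_step(["omz_downloader", "--name", model])
    run_step(["omz_converter", "--name", model, "--precision", "FP32"])
    # omz_converter places public models under ./public/<name>/<precision>/
    run_step(["benchmark_app", "-m", f"./public/{model}/FP32/{model}.xml"])
```

Calling e.g. `run_pipeline("resnet-50-tf")` would exercise another model from the Open Model Zoo list linked above.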

The C++ benchmarking application can be installed via apt as described at https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_apt.html. The pip installation from above is still needed to download and convert the models:
cd /opt/intel/openvino_<VERSION>/samples/cpp # this is for 2022.1, use openvino_<VERSION>/inference_engine/samples/cpp for 2021.4
./build_samples.sh
cd ~/inference_engine_samples_build
./benchmark_app -m /path/to/converted/model # should be /tmp/public/alexnet/FP32/alexnet.xml here

Success Criteria: Successful execution of benchmark_app, nothing more (rely on a zero exit code, or run with the -report_folder parameter, which generates a statistics file in CSV format, and check that the "throughput" line contains a value).
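For the -report_folder variant of the check, the generated CSV can be scanned for a throughput value. A minimal sketch; the exact report layout is an assumption and may differ between OpenVINO releases, so the parser just looks for a line mentioning "throughput" and extracts the first number from it:

```python
import re

def throughput_from_report(path):
    """Return the throughput value found in a benchmark_app CSV report,
    or None when no line with a throughput number is present."""
    with open(path) as report:
        for line in report:
            if "throughput" in line.lower():
                match = re.search(r"\d+(?:\.\d+)?", line)
                if match:
                    return float(match.group())
    return None
```

A run would then count as passing if benchmark_app exited with code 0 and throughput_from_report() returned a value greater than zero.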

apoorv (sangal)
description: updated
Ana Lasprilla (anamlt)
Changed in intel:
milestone: adl-iotg → none
Revision history for this message
apoorv (sangal) wrote :

For OpenVINO installation please follow the instructions from https://pypi.org/project/openvino/2022.1.0.dev20220131/

Revision history for this message
Sachin Mokashi (sachinmokashi) wrote :

OpenVINO Validation Steps:

Please install OpenVINO on the Lookout Canyon Ubuntu image and execute both the C++ and Python benchmarking applications on that image as follows.

Please see the attached work logs for more details.

• For the Python benchmarking application, install the OpenVINO runtime and development packages from https://pypi.org/project/openvino/2022.1.0.dev20220131/ and https://pypi.org/project/openvino-dev/2022.1.0.dev20220131/.

Below are the steps taken to run the benchmark app:
1. Create a virtual environment to avoid dependency conflicts. You can skip this step only if you want to install all dependencies globally.
-python3 -m venv openvino_env
-source openvino_env/bin/activate

2. Set up and update pip to the latest version
-python3 -m pip install --upgrade pip

3. Install the OpenVINO runtime package
-pip3 install openvino==2022.1.0.dev20220131
-run python -c "from openvino.runtime import Core" to verify that the runtime package is properly installed; if it is, no error message appears.

4. Install the OpenVINO development packages
-pip3 install openvino-dev[caffe,kaldi,mxnet,onnx,pytorch,tensorflow2]==2022.1.0.dev20220131
-run "mo -h" to verify that the development package is properly installed; if it is, the help message for the Model Optimizer appears.

5. Use omz_downloader to download the model files from the online source.
-omz_downloader --name alexnet

6. Use omz_converter to convert models that are not in the Inference Engine IR format into that format.
-omz_converter --name alexnet --precision FP32

7. Run the Python benchmark app
-benchmark_app -m ./public/alexnet/FP32/alexnet.xml

• For C++ benchmarking application, follow the steps provided:

The C++ benchmarking application can be installed via apt as described at https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_apt.html. The pip installation from above is still needed to download and convert the models:
cd /opt/intel/openvino_<VERSION>/samples/cpp # this is for 2022.1, use openvino_<VERSION>/inference_engine/samples/cpp for 2021.4
./build_samples.sh
cd ~/inference_engine_samples_build
./benchmark_app -m /path/to/converted/model # should be /tmp/public/alexnet/FP32/alexnet.xml here
