
Adding a new ML model in VVAS & running it on KV260

This tutorial/article is a detailed version of the steps on “adding a new ML model in VVAS” and running it on the KV260.

Requirements:

  • VVAS 2.0
  • Vitis AI 2.5
  • PetaLinux 2022.2 KV260 starter kit BSP: xilinx-kv260-starterkit-v2022.2-10141622.bsp

If you are following VVAS 3.0, then review this tutorial thoroughly and focus on the points mentioned at “How to add model in VVAS 3.0?”

Overview

To add a new model in VVAS, we need to modify/customize the VVAS source and then build it with the cross-compilation SDK. In this tutorial we start from a PetaLinux project, create the PetaLinux SDK (for cross compilation on the host PC/machine), modify the VVAS source (from github.com/xilinx/vvas/tree/2.0) and then compile the source in the SDK environment.

Creating Petalinux SDK with Vitis AI package

  • Create a PetaLinux project from the downloaded base BSP for the KV260 starter kit

$ petalinux-create -t project -s xilinx-kv260-starterkit-v2022.2-10141622.bsp -n kv260_2022_2_vvas_bringup

  • Get the Vitis AI recipes (recipes-vitis-ai) from the Vitis AI repository and copy the recipes-vitis-ai folder to the /project-spec/meta-user folder
  • Enable the Vitis AI packages in the PetaLinux project by adding the following lines in the /project-spec/meta-user/conf file:

IMAGE_INSTALL:append = " vitis-ai-library "
IMAGE_INSTALL:append = " vitis-ai-library-dev "

  • Build the PetaLinux project and the SDK using the following command:

petalinux-build -s

This will create the sdk.sh file in the /images/linux folder.

Install the SDK on the host machine

  • Run the above sdk.sh on the host machine to install the SDK for cross compiling the VVAS source code

./sdk.sh

Get VVAS source code

  • Get the VVAS source code from the following GitHub repository: https://github.com/Xilinx/VVAS
  • Change the git branch to vvas-rel-v2.0

git checkout vvas-rel-v2.0

Adding new model in VVAS

This section covers details about the various possibilities of executing an ML model using VVAS.

If the model corresponds to one of the classes already supported by the vvas_xinfer plug-in, then no change is needed anywhere except specifying the correct class name in the infer JSON file used by vvas_xinfer, as sketched below.
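As a rough sketch (the field names follow the vvas_xdpuinfer JSON configuration; the model name and path below are placeholders), switching to another already-supported class is just a matter of editing fields like these:

{
    "model-name": "yolov3_voc_tf",
    "model-class": "YOLOV3",
    "model-format": "BGR",
    "model-path": "/usr/share/vitis_ai_library/models/",
    "run_time_model": false,
    "need_preprocess": true
}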

If the class corresponding to the new model is not supported by vvas_xinfer but is supported by the Vitis AI library, then follow the steps mentioned below to add a new model class in vvas_xinfer.

Add the model class type in the file vvas_xdpumodels.hpp: vvas-accel-sw-libs/vvas_xdpuinfer/src/vvas_xdpumodels.hpp

Changing only the above file leads to a build failure, because the following file needs to be updated along with it: VVAS/vvas-gst-plugins/gst-libs/gst/vvas/gstvvasinpinfer.h. A sketch of the change in both headers follows.
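As a sketch, assuming the new model is a YOLOv4 variant, both headers get a new enumerator in the model-class enum (the enum and entry names here are hypothetical; mirror the existing YOLOv3 entries in the actual files):

/* vvas_xdpumodels.hpp and gstvvasinpinfer.h: add the new class type */
typedef enum
{
  VVAS_XCLASS_YOLOV3,
  VVAS_XCLASS_FACEDETECT,
  /* ... other existing classes ... */
  VVAS_XCLASS_YOLOV4,   /* new entry for the new model */
  VVAS_XCLASS_NOTFOUND
} VvasClass;

The string used as "model-class" in the JSON must map to this new enumerator wherever the class name is parsed.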

Update vvas_dpuinfer.cpp with the header file for your model class. You may refer to the implementation of another class to know the changes needed.

Also update the model creation logic (the model variable) based on the new model, for example:
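A minimal sketch of the vvas_dpuinfer.cpp changes, assuming a new vvas_xyolov4 class (the names are hypothetical and modeled on the existing YOLOv3 handling):

#include "vvas_xyolov4.hpp"   /* header of the new model class */

/* ... inside the code that creates the model object based on the
 * configured model class, add a branch for the new class ... */
else if (kpriv->modelclass == VVAS_XCLASS_YOLOV4) {
  kpriv->model = new vvas_xyolov4 (kpriv, kpriv->modelname,
      kpriv->need_preprocess);
}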

Update the meson.build for the new class and the corresponding Vitis AI library name.

Changing only the above throws an “option not found” error, so the corresponding option also needs to be added in VVAS/vvas-accel-sw-libs/meson_options.txt, as sketched below.
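A sketch of the shape these build entries take (the option, source and library names are hypothetical; match them against the existing YOLOv3 entries in the actual files):

# vvas-accel-sw-libs/meson_options.txt: declare the new build option
option('YOLOV4', type : 'feature', value : 'auto')

# vvas_xdpuinfer meson.build: build the new source and link the
# corresponding Vitis AI library when the option is enabled
if get_option('YOLOV4').enabled()
  vvas_xdpuinfer_sources += ['vvas_xyolov4.cpp']
  vvas_xdpuinfer_deps += cc.find_library('vitis_ai_library-yolov4')
endif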

Add new files corresponding to the new class implementation in VVAS/vvas-accel-sw-libs/vvas_xdpuinfer/src. You may refer to another class implementation (the YoloV3 files are walked through below).

Update the meson.build so that the new source files are built.

In case the new model results are not supported by the existing fields in the VVAS Inference Metadata structure, this may also need to be modified to support the new metadata. Modifications are needed in https://github.com/Xilinx/VVAS/blob/vvas-rel-v2.0/vvas-gst-plugins/gst-libs/gst/vvas/gstinferenceprediction.h
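As an illustration only (the fields below are hypothetical), extending the metadata means adding the new field to the prediction structure and updating the helpers that copy, free and stringify predictions:

/* gstinferenceprediction.h (sketch): a hypothetical field for a model
 * that outputs a per-object feature vector */
struct _GstInferencePrediction {
  /* ... existing fields: id, enabled, bbox, classifications,
   * child predictions ... */
  float *embedding;            /* new: feature vector from the model */
  unsigned int embedding_len;  /* new: vector length */
};

Any added field must also be handled in the prediction copy/free/to-string routines so the metadata survives being passed downstream.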

If the model is not supported by any of the classes supported by the Vitis AI library, or if the model is not in a DPU (Deep Learning Processing Unit) deployable format, then it first needs to be converted into a DPU-deployable state. For this, refer to Deploying a NN Model with Vitis AI. For the user guide, refer to the Vitis AI 3.0 documentation.
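As a rough sketch of the final Vitis AI compile step (the quantization step before it is framework specific; the paths and network name below are placeholders):

# compile a quantized xmodel for the KV260 DPU (DPUCZDX8G)
vai_c_xir \
    -x quantized_model.xmodel \
    -a /opt/vitis_ai/compiler/arch/DPUCZDX8G/KV260/arch.json \
    -o ./compiled_model \
    -n my_model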

Once the xmodel is available for the user model, refer to the Rawtensor example to know more about how to use this model with VVAS.

Next, one can check whether the added model class is present in the shared library (libvvas_xdpuinfer.so) by inspecting the library on the edge device (the KV260 board here), for example:
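One possible check, assuming the library is installed under /usr/lib on the target and the new class is named yolov4 (adjust the grep pattern accordingly):

# list the dynamic symbols of the dpuinfer library and look for the new class
nm -D --demangle /usr/lib/libvvas_xdpuinfer.so | grep -i yolov4

# or, if nm is unavailable on the board, search the embedded strings
strings /usr/lib/libvvas_xdpuinfer.so | grep -i yolov4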

VVAS dpuinfer library for the new model

Here, the YoloV3 implementation is taken as an example. The vvas_xdpuinfer library has the following files for each ML model:

vvas_xyolov3.hpp

  • Declaration of the vvas_xyolov3 class

#pragma once
#include "vvas_xdpupriv.hpp"

/* the angle-bracket includes were stripped in the original post;
 * likely the Vitis AI YOLOv3 header and OpenCV: */
#include <vitis/ai/yolov3.hpp>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

class vvas_xyolov3:public vvas_xdpumodel
{

  int log_level = 0;
  std::unique_ptr < vitis::ai::YOLOv3 > model; /* calling Vitis AI YOLOv3 model class */

public:

  vvas_xyolov3 (vvas_xkpriv * kpriv, const std::string & model_name,
      bool need_preprocess);                   /* constructor */

  virtual int run (vvas_xkpriv * kpriv, std::vector<cv::Mat>& images,
      GstInferencePrediction **predictions);   /* model run method */

  virtual int requiredwidth (void);
  virtual int requiredheight (void);
  virtual int supportedbatchsz (void);
  virtual int close (void);

  virtual ~vvas_xyolov3 ();
};

vvas_xyolov3.cpp

Constructor definition:

#include "vvas_xyolov3.hpp"
#include <algorithm>   /* likely header (stripped in the original); needed for std::sort */

vvas_xyolov3::vvas_xyolov3 (vvas_xkpriv * kpriv, const std::string & model_name,
    bool need_preprocess)
{
  log_level = kpriv->log_level;
  kpriv->labelflags = VVAS_XLABEL_REQUIRED;
  LOG_MESSAGE (LOG_LEVEL_DEBUG, kpriv->log_level, "enter");

  if (kpriv->labelptr == NULL) {
    LOG_MESSAGE (LOG_LEVEL_ERROR, kpriv->log_level, "label not found");
    kpriv->labelflags |= VVAS_XLABEL_NOT_FOUND;
  } else
    kpriv->labelflags |= VVAS_XLABEL_FOUND;

  model = vitis::ai::YOLOv3::create (model_name, need_preprocess);
}

Here, the Vitis AI YOLOv3 create method is used to create the model.

Implementation of the vvas_xyolov3::run method. Here the model is run and the result is stored in GstInferencePrediction **predictions:

int
vvas_xyolov3::run (vvas_xkpriv * kpriv, std::vector<cv::Mat>& images,
    GstInferencePrediction **predictions)
{
  LOG_MESSAGE (LOG_LEVEL_DEBUG, kpriv->log_level, "enter batch");
  auto results = model->run (images);

  labels *lptr;
  char *pstr; /* prediction string */

  if (kpriv->labelptr == NULL) {
    LOG_MESSAGE (LOG_LEVEL_ERROR, kpriv->log_level, "label not found");
    return false;
  }

  if (kpriv->objs_detection_max > 0) {
    LOG_MESSAGE (LOG_LEVEL_DEBUG, kpriv->log_level, "sort detected objects based on bbox area");

    /* sort objects based on dimension to pick objects with bigger bbox */
    for (unsigned int i = 0u; i < results.size(); i++) {
      std::sort(results[i].bboxes.begin(), results[i].bboxes.end(), compare_by_area);
    }
  } else {
    LOG_MESSAGE (LOG_LEVEL_WARNING, kpriv->log_level, "max-objects count is zero. So, not doing any metadata processing");
    return true;
  }

  for (auto i = 0u; i < results.size(); i++) {
    GstInferencePrediction *parent_predict = NULL;
    unsigned int cur_objs = 0;

    LOG_MESSAGE (LOG_LEVEL_INFO, kpriv->log_level, "objects detected %lu",
        results[i].bboxes.size());

    if (results[i].bboxes.size()) {
      BoundingBox parent_bbox;
      int cols = images[i].cols;
      int rows = images[i].rows;

      parent_predict = predictions[i];

      for (auto & box : results[i].bboxes) {

        lptr = kpriv->labelptr + box.label;
        if (kpriv->filter_labels.size()) {
          bool found_label = false;

          for (unsigned int n = 0; n < kpriv->filter_labels.size(); n++) {
            const char *filter_label = kpriv->filter_labels[n].c_str();
            const char *current_label = lptr->display_name.c_str();
            if (!strncmp (current_label, filter_label, strlen (filter_label))) {
              LOG_MESSAGE (LOG_LEVEL_DEBUG, kpriv->log_level, "current label %s is in filter_label list", current_label);
              found_label = true;
            }
          }

          if (!found_label)
            continue;
        }

        if (!parent_predict) {
          parent_bbox.x = parent_bbox.y = 0;
          parent_bbox.width = cols;
          parent_bbox.height = rows;
          parent_predict = gst_inference_prediction_new_full (&parent_bbox);
        }
        int label = box.label;
        float xmin = box.x * cols + 1;
        float ymin = box.y * rows + 1;
        float xmax = xmin + box.width * cols;
        float ymax = ymin + box.height * rows;
        if (xmin < 0.)
          xmin = 1.;
        if (ymin < 0.)
          ymin = 1.;
        if (xmax > cols)
          xmax = cols;
        if (ymax > rows)
          ymax = rows;
        float confidence = box.score;

        BoundingBox bbox;
        GstInferencePrediction *predict;
        GstInferenceClassification *c = NULL;

        bbox.x = xmin;
        bbox.y = ymin;
        bbox.width = xmax - xmin;
        bbox.height = ymax - ymin;

        predict = gst_inference_prediction_new_full (&bbox);

        c = gst_inference_classification_new_full (label, confidence,
            lptr->display_name.c_str (), 0, NULL, NULL, NULL);
        gst_inference_prediction_append_classification (predict, c);

        if (parent_predict->predictions == NULL)
          LOG_MESSAGE (LOG_LEVEL_ERROR, kpriv->log_level, "parent_predict->predictions is NULL");
        gst_inference_prediction_append (parent_predict, predict);

        LOG_MESSAGE (LOG_LEVEL_INFO, kpriv->log_level,
            "RESULT: %s(%d) %f %f %f %f (%f)", lptr->display_name.c_str (), label,
            xmin, ymin, xmax, ymax, confidence);

        cur_objs++;
        if (cur_objs == kpriv->objs_detection_max) {
          LOG_MESSAGE (LOG_LEVEL_DEBUG, kpriv->log_level, "reached max limit of objects to add to metadata");
          break;
        }
      }

      if (parent_predict) {
        pstr = gst_inference_prediction_to_string (parent_predict);
        LOG_MESSAGE (LOG_LEVEL_DEBUG, kpriv->log_level, "prediction tree : \n%s",
            pstr);
        free(pstr);
      }
    }
    predictions[i] = parent_predict;
  }

  LOG_MESSAGE (LOG_LEVEL_INFO, kpriv->log_level, " ");

  return true;
}

More information on the data structures for storing prediction results is available at the following links:

  • https://developer.ridgerun.com/wiki/index.php/GstInference/Metadatas/GstInferenceMeta
  • https://xilinx.github.io/VVAS/1.1/build/html/docs/common/A-VVAS-Inference-Metadata.html

Building VVAS with the new model

  • Step 1 : Source the SDK environment (sysroot path) if not done already

source /environment-setup-aarch64-xilinx-linux

  • Step 2 : Build VVAS for the Edge platform

./build_install_vvas.sh Edge

vvas_installer.tar.gz can be found in the install folder

  • Step 3 : Copy the VVAS installer to the embedded board

scp install/vvas_installer.tar.gz <user>@<board-ip>:/

  • Step 4 : Install VVAS on the embedded board

cd /
tar -xvf vvas_installer.tar.gz
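
After extracting, one quick sanity check is that the VVAS GStreamer elements used later in this tutorial are visible:

# on the KV260 board
gst-inspect-1.0 vvas_xfilter
gst-inspect-1.0 vvas_xmultisrc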

Testing the new model

Update the JSON for the model, for example:
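A sketch of the fields that typically change in aiinference.json for the new class (the model name below is a placeholder for the compiled YoloV4 Tiny model):

{
    "model-name": "yolov4_tiny",
    "model-class": "YOLOV4",
    "model-format": "BGR",
    "model-path": "/usr/share/vitis_ai_library/models/"
}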

Test pipeline:

sudo gst-launch-1.0 filesrc location="walking_nv12.yuv" ! rawvideoparse format=nv12 width=1920 height=1080 framerate=30/1
! "video/x-raw, width=1920, height=1080, format=NV12"
! vvas_xmultisrc kconfig="/opt/xilinx/kv260-smartcam/share/vvas/vvas-test1/preprocess.json" ! queue ! vvas_xfilter kernels-config="/opt/xilinx/kv260-smartcam/share/vvas/vvas-test1/aiinference.json" ! perf ! fakesink

Running the pipeline with the updated YoloV4 Tiny model class prints the predictions (the RESULT: lines logged by the run method) on the console.

What Next?

In the above tutorial, the predictions listed on the console are the result of the post-processing steps.

So next we will look into adding post-processing for the newly added model, which stores the model output into the prediction tree data structure.

References:

  • The above tutorial is based on the Xilinx tutorial Adding new model in VVAS, which is for VVAS 3.0 but, with a few changes, can be used for adding a model in VVAS 2.0

How to add model in VVAS 3.0?

  • Compared to VVAS 2.0, in VVAS 3.0 the core libraries have moved to one repository: the vvas-core repository, available at https://github.com/Xilinx/vvas-core/tree/61935434021f7a7ee16e26d22ffa1b96a0e4895c
  • Update vvas-core/common/vvas_core/vvas_dpucommon.h with the typedef for the new VVAS model class, instead of VVAS/vvas-gst-plugins/gst-libs/gst/vvas/gstvvasinpinfer.h
  • Add the new model source files in the corresponding location in the vvas-core repository

***

Meet you in next tutorial!

Kudos to Sanam@LogicTronix for creating this in-depth tutorial!

For any queries please write us at info@logictronix.com!
