AI Performance Benchmark for UP products
Hi makers and UP developers,
this discussion is about the performance of UP products in AI applications, especially running neural networks for inference. The goal is a benchmark comparing some well-known maker boards with the UP products.
Why am I opening this discussion?
Currently I am developing an application whose core component is a neural network that detects objects in images (a pretty standard application). At the moment I'm using the Jetson TX2 and it works well; using its internal GPU with TensorFlow is very intuitive. Now I want to try some other (maybe cheaper) boards for neural network inference.
The biggest problem: converting nets with OpenVINO is a lot of work... Often my nets don't run or can't be converted at all. An example of a problem I had is THIS.
I don't want to invest a lot of hard work getting my models to run on UP products, only to find out that the UP products are very slow...
I discussed this topic with an AAEON representative at the VISION trade fair in Stuttgart, and he agreed with me that it would be very useful to have some kind of benchmark.
What to do?
To get meaningful results, one needs to run the same model on UP products with OpenVINO and on some other boards. It would be very interesting to see a direct comparison between several boards inferring the same neural network.
Interesting categories would be:
- UP Board GPU
- UP Squared GPU
- UP AI Core (MYRIAD)
- UP AI Core X (after release!)
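To make results comparable across boards, the same timing harness could wrap whatever inference call each framework exposes (a TensorFlow session run on the Jetson, an OpenVINO inference request on the UP boards, ...). A minimal sketch in plain Python; the `infer` callable and the dummy frames are placeholders, not tied to any specific framework:

```python
import statistics
import time

def benchmark(infer, frames, warmup=3):
    """Time an inference callable over a list of preprocessed frames.

    `infer` stands in for whatever your framework exposes; the first
    few calls are discarded as warm-up, since initial inferences often
    include lazy initialization. Returns (mean latency in s, FPS).
    """
    for frame in frames[:warmup]:
        infer(frame)

    timings = []
    for frame in frames:
        start = time.perf_counter()
        infer(frame)
        timings.append(time.perf_counter() - start)

    mean_latency = statistics.mean(timings)
    return mean_latency, 1.0 / mean_latency

# Example with a dummy "model" that just sums each frame:
frames = [[float(i)] * 10 for i in range(20)]
latency, fps = benchmark(sum, frames)
print(f"mean latency: {latency * 1000:.3f} ms, throughput: {fps:.1f} FPS")
```

Reporting both mean latency and FPS makes the numbers easy to compare even when people test with videos of different lengths.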
A widespread board is the Jetson TX2, so I think that's an interesting contender for the comparison (just my opinion!).
Has anybody already gathered some experience? It would be nice if you shared it!
My first experiences!
Ok, let me open this discussion with my own experience:
I converted a Mask R-CNN with ResNet101 as the backbone. You can find it here. I ran inference on a Full HD video (1920x1080):
- On the Jetson TX2, inference was done within 1-2 seconds.
- On the UP Squared GPU it took ~40 seconds.
- On the UP AI Core (MYRIAD) I was not able to test it, because the IR model could not be loaded...
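For a rough sense of the gap above: taking the midpoint of the Jetson's 1-2 seconds against the UP Squared's ~40 seconds gives well over a 25x difference on this model. The arithmetic (using only the numbers reported above):

```python
jetson_s = 1.5       # midpoint of the 1-2 s measured on the Jetson TX2
up_squared_s = 40.0  # time measured on the UP Squared GPU

# Ratio of the two runtimes on the same Mask R-CNN / ResNet101 model
speedup = up_squared_s / jetson_s
print(f"Jetson TX2 was roughly {speedup:.0f}x faster on this model")
```

Of course a single model on a single video is not a real benchmark, which is exactly why more data points from other boards would help.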
Thanks for your interest!