I am working on a project that uses OpenCV SVMs. The code is executed on mobile devices (Android, iOS) and x86_64 desktops.
Usually, the SVMs are exported in a YAML or XML file format which is then parsed by OpenCV itself.
Especially on mobile devices, I'd like to avoid these text files, for two reasons:
1. They are huge. Compression would help somewhat, but the files must be decompressed before loading, so device memory can become a bottleneck.
2. They have to be parsed, which is rather slow for larger files.
Hence, I'd like to embed the data such that I basically have a preloaded SVM in memory.
How would I go about this? How can I embed the floats with the correct endianness for each platform?
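One way to sidestep the endianness question entirely is to compile the floats into the binary as C++ array literals: each platform's compiler then stores them in its own native byte order, so no runtime byte swapping is needed. A minimal sketch with hypothetical names and values (a real model would have far larger arrays, emitted by a code generator):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>

// Hypothetical support vectors and decision-function coefficients, as a
// code generator might emit them from the YAML model. Because these are
// compiled literals, each target's compiler writes them in its native
// byte order -- endianness is handled automatically per platform.
static const std::size_t kNumSV = 2;
static const std::size_t kDim = 3;
static const float kSupportVectors[kNumSV][kDim] = {
    {0.5f, -1.25f, 2.0f},
    {1.0f,  0.75f, -0.5f},
};
static const float kAlphas[kNumSV] = {0.8f, -0.3f};
static const float kRho = 0.1f;

// Minimal linear decision function over the embedded data (sketch only,
// not OpenCV's implementation).
float decision(const float* sample) {
    float sum = -kRho;
    for (std::size_t i = 0; i < kNumSV; ++i) {
        float dot = 0.0f;
        for (std::size_t j = 0; j < kDim; ++j)
            dot += kSupportVectors[i][j] * sample[j];
        sum += kAlphas[i] * dot;
    }
    return sum;
}
```

Since the arrays live in the binary's read-only data segment, "loading" the model is free: the data is already in memory once the executable is mapped.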
Unfortunately, OpenCV does not provide setters for the decision function and the support vectors, so I cannot inject my own loader.
After browsing the code, I decided to rip out the specific SVM type implementation I am using. This allowed me to write a Python script that generates C++ float arrays from the YAML files, which my code then operates on directly.
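The generator can be a small Python script. A minimal sketch of the emitting step, with hypothetical names (it assumes the coefficients have already been extracted from the YAML file, e.g. with PyYAML, which is omitted here):

```python
def emit_cpp_array(name, values):
    """Render a flat list of floats as a C++ float array definition."""
    body = ", ".join(f"{v!r}f" for v in values)
    return f"static const float {name}[{len(values)}] = {{{body}}};"

# Example: coefficients extracted from the YAML model (hypothetical values).
support_vectors = [0.5, -1.25, 2.0]
print(emit_cpp_array("kSupportVectors", support_vectors))
# -> static const float kSupportVectors[3] = {0.5f, -1.25f, 2.0f};
```

The generated definitions go into a header or source file that is compiled into the app, so the model data ships inside the binary instead of as a separate YAML file.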
This cut the load time on my phone from around one second to essentially nothing, since only a pointer is handed to my code. Interestingly, the optimizer also made my prediction code about 2x faster than OpenCV's.
I have not tested the slower devices yet, but they have been about 20x slower so far, so I'd expect the load time there to drop from ~20 s to under 1 s.