
Compiling and Running MNN on the ARM Platform

I_am_Damon 2024-08-13 12:01:03

1. Build Preparation

Download the source from GitHub - alibaba/MNN (MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba): on the repository page, click Code -> Download ZIP. A git clone alternative is sketched after the commands below.

unzip MNN-master.zip

cd MNN-master
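
Alternatively, if git is available on the build host, cloning the repository gives the same source tree. This is only a sketch, not part of the original steps; note that the cloned directory is named MNN rather than MNN-master, while the paths later in this article assume MNN-master.

git clone https://github.com/alibaba/MNN.git
cd MNN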

vim CMakeLists.txt

In CMakeLists.txt, change the defaults of the MNN_BUILD_DEMO, MNN_BUILD_TOOLS, MNN_BUILD_QUANTOOLS, and MNN_BUILD_CONVERTER options from OFF to ON.
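
Before editing, it can help to confirm where these options are declared and what their current defaults are; a quick grep sketch (the exact option descriptions vary between MNN versions):

grep -n "option(MNN_BUILD" CMakeLists.txt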

2. Generating the Makefile

Check the target's ARM architecture (e.g. armv7 or armv8); this value goes into CMAKE_SYSTEM_PROCESSOR.
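
If you are unsure of the architecture, run uname on the target board: armv7l indicates a 32-bit ARMv7 system, aarch64 a 64-bit ARMv8 one.

uname -m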

Decide on a directory for the build output; this path goes into CMAKE_INSTALL_PREFIX.

Locate the cross-compilation toolchain; the compiler paths go into CMAKE_C_COMPILER and CMAKE_CXX_COMPILER:

whereis arm-linux-gnueabihf-gcc
arm-linux-gnueabihf-gcc: /root/arm-linux-compiler/gcc-linaro-12.2.1-2023.01-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc
whereis arm-linux-gnueabihf-g++
arm-linux-gnueabihf-g++: /root/arm-linux-compiler/gcc-linaro-12.2.1-2023.01-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-g++
mkdir install && mkdir build && cd build
cmake .. \
-DCMAKE_BUILD_TYPE=Release \
-DMNN_BUILD_DEMO=ON \
-DMNN_BUILD_BENCHMARK=ON \
-DMNN_BUILD_CONVERTER=ON \
-DMNN_BUILD_QUANTOOLS=ON \
-DMNN_USE_INT8_FAST=ON \
-DMNN_BUILD_TEST=ON \
-DMNN_OPENCL=ON \
-DCMAKE_SYSTEM_NAME=Linux \
-DCMAKE_SYSTEM_VERSION=1 \
-DCMAKE_SYSTEM_PROCESSOR=armv7 \
-DCMAKE_INSTALL_PREFIX=/root/mnn/MNN-master/install \
-DCMAKE_C_COMPILER=/root/arm-linux-compiler/gcc-linaro-12.2.1-2023.01-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc \
-DCMAKE_CXX_COMPILER=/root/arm-linux-compiler/gcc-linaro-12.2.1-2023.01-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-g++
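
The same cross-compilation settings can also be collected into a CMake toolchain file, so the long command does not have to be retyped for every reconfigure. This is only a sketch of a standard CMake alternative, not part of the original post, and the file name arm-linux.cmake is hypothetical:

cat > ../arm-linux.cmake <<'EOF'
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_VERSION 1)
set(CMAKE_SYSTEM_PROCESSOR armv7)
set(CMAKE_C_COMPILER /root/arm-linux-compiler/gcc-linaro-12.2.1-2023.01-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER /root/arm-linux-compiler/gcc-linaro-12.2.1-2023.01-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-g++)
EOF
cmake .. -DCMAKE_TOOLCHAIN_FILE=../arm-linux.cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/root/mnn/MNN-master/install   # plus the MNN_* feature flags shown above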

3. Building

Run make.

If you hit an error like this:

/root/mnn/MNN-master/source/math/Vec.hpp:342:71: error: cannot convert ‘int16x8_t’ to ‘int8x16_t’
  342 |         auto m0m1 = vtrnq_s8(reinterpret_cast<int8x16_t>(vec0.value), reinterpret_cast<int16x8_t>(vec1.value));
      |                                                                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      |                                                                       |
      |                                                                       int16x8_t
In file included from /root/mnn/MNN-master/source/math/Vec.hpp:15:
/root/arm-linux-compiler/gcc-linaro-12.2.1-2023.01-x86_64_arm-linux-gnueabihf/lib/gcc/arm-linux-gnueabihf/12.2.1/include/arm_neon.h:9532:36: note:   initializing argument 2 of ‘int8x16x2_t vtrnq_s8(int8x16_t, int8x16_t)’
 9532 | vtrnq_s8 (int8x16_t __a, int8x16_t __b)
      |                          ~~~~~~~~~~^~~
/root/mnn/MNN-master/source/math/Vec.hpp:343:71: error: cannot convert ‘int16x8_t’ to ‘int8x16_t’
  343 |         auto m2m3 = vtrnq_s8(reinterpret_cast<int8x16_t>(vec2.value), reinterpret_cast<int16x8_t>(vec3.value));
      |                                                                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      |                                                                       |
      |                                                                       int16x8_t

then change lines 342 and 343 of /root/mnn/MNN-master/source/math/Vec.hpp from

auto m0m1 = vtrnq_s8(reinterpret_cast<int8x16_t>(vec0.value), reinterpret_cast<int16x8_t>(vec1.value));
auto m2m3 = vtrnq_s8(reinterpret_cast<int8x16_t>(vec2.value), reinterpret_cast<int16x8_t>(vec3.value));

to the following (vtrnq_s8 takes two int8x16_t arguments, so the stray int16x8_t cast on the second operand is the bug):

auto m0m1 = vtrnq_s8(reinterpret_cast<int8x16_t>(vec0.value), reinterpret_cast<int8x16_t>(vec1.value));
auto m2m3 = vtrnq_s8(reinterpret_cast<int8x16_t>(vec2.value), reinterpret_cast<int8x16_t>(vec3.value));

and run make -j4 again.

If the build progress stalls, drop the -j4 flag and let make run single-threaded.

The build artifact is libMNN.so.
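
To confirm that the library was cross-compiled for ARM rather than for the host, inspect it with file from the build directory; for an armv7 hard-float build the output should read something like "ELF 32-bit LSB shared object, ARM, EABI5":

file libMNN.so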

4. Model Preparation

Find a suitable model on the Model Zoo · Yuque page, for example GitHub - Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB, a 1MB lightweight face detection model.
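
Downloaded models typically ship as ONNX, Caffe, or TFLite files and must be converted to the .mnn format with the MNNConvert tool built above before MNN can load them. A sketch for an ONNX export of the face detector; the input file name version-RFB-320.onnx is illustrative:

./MNNConvert -f ONNX --modelFile version-RFB-320.onnx --MNNModel face_detector.mnn --bizCode MNN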
