
Illustrating the effect of the add_pre_post command

The add_pre_post command folds input normalization and quantization, as well as output de-quantization, into the network graph so that they are computed by the PNNA core instead of the host CPU.

Compute allocation for quantized neural network inference

```mermaid
graph TD
    %% Hardware boundaries
    subgraph CPU_Task [CPU software side]
        A[Raw input data] --> B(Normalize)
        B --> C(Quantization)
        G(De-quantization) --> H(Post-processing)
        H --> I[Final output]
    end
    subgraph NPU_Task [PNNA NPU hardware-accelerated side]
        D{Data load}
        D --> E(INT8 fixed-point inference)
        E --> F{Output tensor}
    end
    %% Division of work between CPU and NPU
    C -->|float32 to int8| D
    F -->|int8 to float32| G
    %% Styling
    style CPU_Task fill:#f5f5f5,stroke:#333,stroke-width:2px
    style NPU_Task fill:#e1f5fe,stroke:#01579b,stroke-width:2px
    style E fill:#0288d1,color:#fff,stroke:#01579b
    style B fill:#fff,stroke:#333
    style C fill:#fff,stroke:#333
    style G fill:#fff,stroke:#333
    style H fill:#fff,stroke:#333
```

Diagram of compute allocation for quantized neural network inference
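In the default pipeline above, the host CPU normalizes the raw input, applies affine quantization before handing an int8 tensor to the NPU, and de-quantizes the int8 output afterwards. A minimal sketch of that CPU-side math (the `mean`, `std`, `scale`, and `zero_point` parameter names are illustrative, not the toolchain's actual API):

```python
import numpy as np

def normalize(x, mean, std):
    # CPU-side pre-processing: raw input -> normalized float32
    return (x.astype(np.float32) - mean) / std

def quantize(x, scale, zero_point):
    # float32 -> int8 affine quantization: q = round(x / scale) + zero_point
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point):
    # int8 -> float32: x = (q - zero_point) * scale
    return (q.astype(np.float32) - zero_point) * scale
```

These two conversions are exactly the `float32 to int8` and `int8 to float32` edges in the diagram; add_pre_post moves them off the CPU.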

Inference compute allocation with the add_pre_post command

```mermaid
graph TD
    %% Hardware boundaries
    subgraph CPU_Task [CPU software side - business logic only]
        A[Raw input data]
        H(Business post-processing<br/>Logic/NMS) --> I[Final output]
    end
    subgraph NPU_Task [PNNA NPU hardware-accelerated side]
        direction TB
        D{Data load}
        subgraph Internal_HW [PNNA compute]
            B(Hardware normalize)
            C(Hardware quantization)
            E(INT8 fixed-point inference)
            G(Hardware de-quantization)
            B --> C --> E --> G
        end
        F{Output tensor}
        D --> B
        G --> F
    end
    %% Division of work between CPU and NPU
    A ==>|raw data stream| D
    F ==>|float inference results| H
    %% Styling
    style CPU_Task fill:#f5f5f5,stroke:#333,stroke-width:2px
    style NPU_Task fill:#e1f5fe,stroke:#01579b,stroke-width:2px
    style Internal_HW fill:#ffffff,stroke:#01579b,stroke-dasharray: 5 5
    style E fill:#0288d1,color:#fff,stroke:#01579b
    style B fill:#b3e5fc,stroke:#01579b
    style C fill:#b3e5fc,stroke:#01579b
    style G fill:#b3e5fc,stroke:#01579b
```

Diagram of compute allocation after running add_pre_post
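The practical difference between the two diagrams is what the host code has to do per inference. A sketch of the contrast, under the assumption that `npu_infer` is a hypothetical callable standing in for the PNNA runtime (the real API names will differ):

```python
import numpy as np

def host_pipeline_without_pre_post(raw, npu_infer, mean, std,
                                   in_scale, in_zp, out_scale, out_zp):
    # Without add_pre_post: the CPU normalizes, quantizes, and de-quantizes.
    x = (raw.astype(np.float32) - mean) / std                 # CPU normalize
    q = np.clip(np.round(x / in_scale) + in_zp,
                -128, 127).astype(np.int8)                    # CPU quantize
    out_q = npu_infer(q)                                      # NPU: int8 inference only
    return (out_q.astype(np.float32) - out_zp) * out_scale    # CPU de-quantize

def host_pipeline_with_pre_post(raw, npu_infer):
    # With add_pre_post: normalize/quantize/de-quantize live inside the
    # NPU graph, so the host only moves raw bytes in and float32 out.
    return npu_infer(raw)
```

Besides freeing CPU cycles, this removes two full-tensor passes over host memory per inference.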


Author: @sq suiqiang@hngwg.com