Modern neural networks are growing not only in size and complexity but also in inference time. One of the most effective compression techniques, channel pruning, combats this trend by removing channels from convolutional weights to reduce resource consumption. However, removing channels is non-trivial for multi-branch segments of a model, where it can introduce extra memory copies at inference time. These copies increase latency, so much so that the pruned model can even be slower than the original, unpruned model. As a workaround, existing pruning works constrain certain channels to be…
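To make the copy overhead concrete, below is a minimal PyTorch sketch of a two-branch residual segment whose branches are pruned independently and therefore keep different channel subsets. The shapes, channel indices, and tensor names are illustrative assumptions, not taken from the article. Because the surviving channels no longer align, each branch output must be scattered into a shared channel layout before the elementwise add, and those scatters are exactly the extra memory copies described above.

```python
import torch

# Minimal sketch (assumed setup): an 8-channel activation feeding a
# two-branch residual segment, with each branch pruned independently.
x = torch.randn(1, 8, 16, 16)            # input activation: N x C x H x W

# Hypothetical pruning decisions: the conv branch keeps channels {0, 2, 5},
# the skip branch keeps channels {0, 3, 5}.
kept_conv = torch.tensor([0, 2, 5])
kept_skip = torch.tensor([0, 3, 5])

conv_out = x[:, kept_conv]               # pruned conv-branch output (1 x 3 x 16 x 16)
skip_out = x[:, kept_skip]               # pruned skip-branch output (1 x 3 x 16 x 16)

# Both branches feed the same elementwise add, but their surviving channels
# refer to different original channel indices. To add them correctly, each
# output must first be scattered into a shared channel layout -- these are
# the extra memory copies incurred at inference time.
union = torch.unique(torch.cat([kept_conv, kept_skip]))   # shared layout: {0, 2, 3, 5}

aligned_conv = torch.zeros(1, len(union), 16, 16)
aligned_skip = torch.zeros(1, len(union), 16, 16)
aligned_conv[:, torch.searchsorted(union, kept_conv)] = conv_out   # copy #1
aligned_skip[:, torch.searchsorted(union, kept_skip)] = skip_out   # copy #2

y = aligned_conv + aligned_skip          # the residual add the copies enable
print(y.shape)                           # torch.Size([1, 4, 16, 16])
```

If both branches were constrained to keep the same channel subset, the two scatter copies would be unnecessary, which is why the workaround mentioned above ties channels together across branches, at the cost of a less flexible pruning pattern.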