WebVision 2020 VIRTUAL
Paper Session - Workshop Papers: Paper #3 (10:25 - 10:29)
Paper Title: When Ensembling Smaller Models is More Efficient than Single Large Models
Authors: Dan Kondratyuk, Mingxing Tan, Matthew Brown, Boqing Gong
Email: {dankondratyuk,tanmingxing,mtbr,bgong}@google.com
Short Description: Ensembling is a simple and popular technique for boosting evaluation performance by training multiple models. This approach is commonly reserved for the largest models, as it is widely held that increasing the model size provides a more substantial reduction in error than ensembling smaller models. However, we show with experiments on CIFAR-10 and ImageNet that ensembles can outperform single models, achieving higher accuracy while requiring fewer total FLOPs to compute. This implies that the output diversity gained from ensembling can often be more efficient than training larger models, especially when the models approach the size of what their dataset can foster.
Keywords: ensemble, efficient, NAS, vision.
Talk | Slides | Paper
Abstract: Ensembling is a simple and popular technique for boosting evaluation performance by training multiple models (e.g., with different initializations) and aggregating their predictions. This approach is commonly reserved for the largest models, as it is widely held that increasing the model size provides a more substantial reduction in error than ensembling smaller models. However, we show with experiments on CIFAR-10 and ImageNet that ensembles can outperform single models, achieving higher accuracy while requiring fewer total FLOPs to compute, even when those individual models’ weights and hyperparameters are highly optimized. Furthermore, this gap in improvement widens as models become larger. This presents an interesting observation: output diversity in ensembling can often be more efficient than training larger models, especially when the models approach the size of what their dataset can foster. Instead of following the common practice of tuning a single large model, one can use ensembles as a more flexible trade-off between a model’s inference speed and accuracy. This also potentially eases hardware design, e.g., by making it easier to parallelize the model across multiple workers for real-time or distributed inference.
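The abstract describes ensembling as training several models (e.g., from different initializations) and aggregating their predictions, with the cost comparison made in total FLOPs. The sketch below is only a minimal illustration of the prediction-averaging step and the FLOPs accounting, not the paper's actual models or experimental setup; the random logits, the three-member ensemble, and the FLOP figures are hypothetical placeholders.

```python
# Minimal sketch of prediction-averaging ensembling (NumPy only).
# In practice, each ensemble member would be a separately trained network,
# e.g., the same small architecture trained from different random initializations.
import numpy as np


def softmax(logits, axis=-1):
    """Numerically stable softmax over the given axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)


def ensemble_predict(per_model_logits):
    """Average per-model class probabilities (one common aggregation rule)."""
    probs = softmax(np.stack(per_model_logits), axis=-1)  # [n_models, batch, classes]
    return probs.mean(axis=0)                             # [batch, classes]


# Hypothetical example: a 3-member ensemble of small models on a batch of 4 inputs.
rng = np.random.default_rng(0)
small_model_logits = [rng.normal(size=(4, 10)) for _ in range(3)]
avg_probs = ensemble_predict(small_model_logits)
print("ensemble predictions:", avg_probs.argmax(axis=-1))

# Cost comparison in the spirit of the paper: an ensemble of n smaller models
# costs roughly n * FLOPs(small) per inference, which can be lower than
# FLOPs(large) while reaching equal or better accuracy (numbers below are
# illustrative, not from the paper).
flops_small, flops_large, n_members = 0.4e9, 2.0e9, 3
print("ensemble FLOPs:", n_members * flops_small, "vs. single large:", flops_large)
```

Because each member runs independently at inference time, the averaging step above is also where the abstract's parallelization point applies: the members can be placed on separate workers and only their probability vectors need to be combined.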