PTN-09 Large Scale Distributed Neural Network Training Through Online Distillation

Techniques such as ensembling and distillation promise model quality
improvements when paired with almost any base model. However, due to increased test-time
cost (for ensembles) and increased complexity of the training pipeline (for distillation), these
techniques are challenging to use in industrial settings. In this paper we explore a variant of
distillation which is relatively straightforward to use as it does not require a complicated multistage
setup or many new hyperparameters. Our first claim is that online distillation enables us
to use extra parallelism to fit very large datasets about twice as fast. Crucially, we can still
speed up training even after we have already reached the point at which additional parallelism
provides no benefit for synchronous or asynchronous stochastic gradient descent. Two neural
networks trained on disjoint subsets of the data can share knowledge by encouraging each
model to agree with the predictions the other model would have made. These predictions can
come from a stale version of the other model so they can be safely computed using weights
that only rarely get transmitted. Our second claim is that online distillation is a cost-effective
way to make the exact predictions of a model dramatically more reproducible. We support our
claims using experiments on the Criteo Display Ad Challenge dataset, ImageNet, and the
largest to-date dataset used for neural language modeling, containing 6×10¹¹ tokens and
based on the Common Crawl repository of web data.
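As a rough illustration of the codistillation objective the abstract describes (two models trained on disjoint data shards, each penalized for disagreeing with stale predictions from the other), here is a minimal PyTorch-style sketch. It is not the authors' implementation: the helper names `codistillation_loss` and `train_step` and the `distill_weight` hyperparameter are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def codistillation_loss(logits, labels, stale_peer_logits, distill_weight=0.5):
    """Cross-entropy on the true labels plus a penalty for disagreeing
    with the (possibly stale) predictions of the peer model."""
    ce = F.cross_entropy(logits, labels)
    # KL divergence between this model's predictive distribution and the peer's.
    distill = F.kl_div(
        F.log_softmax(logits, dim=-1),
        F.softmax(stale_peer_logits, dim=-1),
        reduction="batchmean",
    )
    return ce + distill_weight * distill

def train_step(model, stale_peer, batch, optimizer, distill_weight=0.5):
    """One update on this worker's shard; `stale_peer` is a frozen copy of the
    other model whose weights are only rarely refreshed."""
    inputs, labels = batch
    with torch.no_grad():
        peer_logits = stale_peer(inputs)  # stale predictions, no gradient needed
    loss = codistillation_loss(model(inputs), labels, peer_logits, distill_weight)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the peer's predictions only need to come from occasionally transmitted stale weights, each worker can run this step on its own data subset with very little extra communication.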