On suppressing range of adaptive stepsizes of Adam to improve generalisation performance
Zhang, G
Date: 22 August 2024
Conference paper
Publisher
Springer
Publisher DOI
Abstract
A number of recent adaptive optimizers improve the generalisation performance of Adam by essentially reducing the variance of the adaptive stepsizes, thereby bridging the gap with SGD with momentum. Following this motivation, we suppress the range of the adaptive stepsizes of Adam by exploiting layerwise gradient statistics. In particular, at each iteration we propose to perform three consecutive operations on the second momentum v_t before using it to update a DNN model: (1) down-scaling, (2) ϵ-embedding, and (3) down-translating. The resulting algorithm is referred to as SET-Adam, where SET abbreviates the three operations. The down-scaling of v_t is performed layerwise by making use of the angles between the layerwise subvectors of v_t and the corresponding all-one subvectors. Extensive experimental results show that SET-Adam outperforms eight adaptive optimizers when training transformers and LSTMs for NLP, and VGG and ResNet for image classification on CIFAR10 and CIFAR100, while matching the best performance of the eight adaptive methods when training WGAN-GP models for image generation. Furthermore, SET-Adam produces higher validation accuracies than Adam and AdaBelief when training ResNet18 on ImageNet.
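The abstract only names the three operations, so the following is a minimal, assumption-heavy sketch of how they might be applied to one layer's second-momentum subvector: the down-scaling factor is taken to be the cosine of the angle with the all-one vector (as suggested by the abstract), while the exact form of the ϵ-embedding, the translation factor rho, and the function name set_transform are hypothetical and not the paper's specification.

    # Hypothetical sketch of the three layerwise operations on v_t named in the
    # abstract: down-scaling, epsilon-embedding, down-translating. The precise
    # definitions and the parameter rho are assumptions for illustration only.
    import numpy as np

    def set_transform(v_layer: np.ndarray, eps: float = 1e-8, rho: float = 0.5) -> np.ndarray:
        """Return a processed copy of one layer's second-momentum subvector."""
        v = v_layer.astype(np.float64).ravel()
        d = v.size

        # (1) Down-scaling: shrink v by the cosine of the angle between v and
        #     the all-one vector of the same dimension (assumed interpretation).
        cos_theta = v.sum() / (np.linalg.norm(v) * np.sqrt(d) + 1e-12)
        v = cos_theta * v

        # (2) Epsilon-embedding: fold the usual additive eps into v itself
        #     (assumed form: v <- (sqrt(v) + eps)^2).
        v = (np.sqrt(v) + eps) ** 2

        # (3) Down-translating: shift the subvector down by a fraction rho of
        #     its minimum, narrowing the range of the adaptive stepsizes
        #     proportional to 1/sqrt(v) (assumed form; entries stay positive).
        v = v - rho * v.min()

        return v.reshape(v_layer.shape)

    # Example: the layerwise adaptive update would then use lr / sqrt(v_hat).
    v_hat = set_transform(np.abs(np.random.randn(256)) * 1e-3)

Under these assumptions, each step narrows the spread of the layer's adaptive stepsizes: the cosine scaling and the ϵ-embedding dampen small entries of v_t, and the down-translation lifts the resulting stepsizes toward a common value.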
Computer Science
Faculty of Environment, Science and Economy