dc.contributor.author | Zhang, G | |
dc.date.accessioned | 2024-09-02T09:07:51Z | |
dc.date.issued | 2024-08-22 | |
dc.date.updated | 2024-08-29T09:50:33Z | |
dc.description.abstract | A number of recent adaptive optimizers improve the generalisation performance of Adam by reducing the variance of the adaptive stepsizes, thereby bridging the gap with SGD with momentum. Following this motivation, we suppress the range of the adaptive stepsizes of Adam by exploiting the layerwise gradient statistics. In particular, at each iteration we propose to perform three consecutive operations on the second momentum vt before using it to update a DNN model: (1) down-scaling, (2) ϵ-embedding, and (3) down-translating. The resulting algorithm is referred to as SET-Adam, where SET is shorthand for the three operations. The down-scaling operation on vt is performed layerwise by making use of the angles between the layerwise subvectors of vt and the corresponding all-one subvectors. Extensive experimental results show that SET-Adam outperforms eight adaptive optimizers when training transformers and LSTMs for NLP, and VGG and ResNet for image classification over CIFAR10 and CIFAR100, while matching the best performance of the eight adaptive methods when training WGAN-GP models for image generation tasks. Furthermore, SET-Adam produces higher validation accuracies than Adam and AdaBelief when training ResNet18 over ImageNet. | en_GB |
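To make the abstract's three-step pipeline concrete, the following is a minimal Python/NumPy sketch of the SET operations applied to one layerwise sub-vector of the second momentum vt. The abstract does not give the exact formulas, so the scaling rule, the value of ϵ, and the translation rule used here are illustrative assumptions, not the paper's definitions.

import numpy as np

def set_transform(v_layer, eps=1e-8):
    # Hypothetical sketch: the concrete formulas below are assumptions,
    # not the definitions from the SET-Adam paper.
    ones = np.ones_like(v_layer)
    # (1) down-scaling: shrink v_layer by the cosine of the angle between it and
    #     the all-one vector of the same layer (one possible reading of "making
    #     use of the angles between the layerwise subvectors of vt and the
    #     corresponding all-one subvectors").
    cos_angle = (v_layer @ ones) / (np.linalg.norm(v_layer) * np.linalg.norm(ones) + eps)
    v = cos_angle * v_layer
    # (2) epsilon-embedding: fold a small constant into v before the square root,
    #     rather than adding it outside as vanilla Adam does (assumed placement).
    v = v + eps
    # (3) down-translating: shift v towards zero by a fraction of its smallest
    #     entry, narrowing the spread of the resulting adaptive stepsizes
    #     (placeholder rule).
    v = v - 0.5 * v.min()
    return v

# The transformed vector would then replace vt in the per-layer Adam update,
# i.e. the stepsize becomes lr / sqrt(set_transform(v_layer)) elementwise.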
dc.identifier.citation | In: Machine Learning and Knowledge Discovery in Databases. Research Track. European Conference, ECML PKDD 2024, Vilnius, Lithuania, 9-13 September 2024, Proceedings, Part IV, edited by Albert Bifet, Jesse Davis, Tomas Krilavičius, Meelis Kull, Eirini Ntoutsi, and Indrė Žliobaitė, pp. 72-88. Lecture Notes in Computer Science, vol. 14944 | en_GB |
dc.identifier.doi | 10.1007/978-3-031-70359-1_5 | |
dc.identifier.uri | http://hdl.handle.net/10871/137298 | |
dc.language.iso | en | en_GB |
dc.publisher | Springer | en_GB |
dc.rights.embargoreason | Under embargo until 22 August 2025 in compliance with publisher policy | en_GB |
dc.rights | © 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG | en_GB |
dc.subject | Adam | en_GB |
dc.subject | adaptive optimization | en_GB |
dc.subject | Transformer | en_GB |
dc.title | On suppressing range of adaptive stepsizes of Adam to improve generalisation performance | en_GB |
dc.type | Conference paper | en_GB |
dc.date.available | 2024-09-02T09:07:51Z | |
dc.identifier.isbn | 978-3-031-70359-1 | |
dc.description | This is the author accepted manuscript. The final version is available from Springer via the DOI in this record | en_GB |
dc.rights.uri | http://www.rioxx.net/licenses/all-rights-reserved | en_GB |
dcterms.dateSubmitted | 2024-03-15 | |
rioxxterms.version | AM | en_GB |
rioxxterms.licenseref.startdate | 2024-08-22 | |
rioxxterms.type | Conference Paper/Proceeding/Abstract | en_GB |
refterms.dateFCD | 2024-09-02T09:04:28Z | |
refterms.versionFCD | AM | |
refterms.panel | B | en_GB |
exeter.rights-retention-statement | No | |