Show simple item record

dc.contributor.author: Zhang, G
dc.date.accessioned: 2024-09-02T09:07:51Z
dc.date.issued: 2024-08-22
dc.date.updated: 2024-08-29T09:50:33Z
dc.description.abstract: A number of recent adaptive optimizers improve the generalisation performance of Adam by essentially reducing the variance of the adaptive stepsizes, thereby bridging the gap with SGD with momentum. Following this motivation, we suppress the range of the adaptive stepsizes of Adam by exploiting the layerwise gradient statistics. In particular, at each iteration, we propose to perform three consecutive operations on the second momentum vt before using it to update a DNN model: (1) down-scaling, (2) ϵ-embedding, and (3) down-translating. The resulting algorithm is referred to as SET-Adam, where SET abbreviates the three operations. The down-scaling operation on vt is performed layerwise by making use of the angles between the layerwise subvectors of vt and the corresponding all-one subvectors. Extensive experimental results show that SET-Adam outperforms eight adaptive optimizers when training transformers and LSTMs for NLP, and VGG and ResNet for image classification over CIFAR10 and CIFAR100, while matching the best performance of the eight adaptive methods when training WGAN-GP models for image generation tasks. Furthermore, SET-Adam produces higher validation accuracies than Adam and AdaBelief when training ResNet18 over ImageNet. [en_GB]
dc.identifier.citation: In: Machine Learning and Knowledge Discovery in Databases. Research Track. European Conference, ECML PKDD 2024, Vilnius, Lithuania, 9-13 September 2024, Proceedings, Part IV, edited by Albert Bifet, Jesse Davis, Tomas Krilavičius, Meelis Kull, Eirini Ntoutsi, and Indrė Žliobaitė, pp. 72-88. Lecture Notes in Computer Science, volume 14944. [en_GB]
dc.identifier.doi: 10.1007/978-3-031-70359-1_5
dc.identifier.uri: http://hdl.handle.net/10871/137298
dc.language.iso: en [en_GB]
dc.publisher: Springer [en_GB]
dc.rights.embargoreason: Under embargo until 22 August 2025 in compliance with publisher policy [en_GB]
dc.rights: © 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG [en_GB]
dc.subject: Adam [en_GB]
dc.subject: adaptive optimization [en_GB]
dc.subject: Transformer [en_GB]
dc.title: On suppressing range of adaptive stepsizes of Adam to improve generalisation performance [en_GB]
dc.type: Conference paper [en_GB]
dc.date.available: 2024-09-02T09:07:51Z
dc.identifier.isbn: 978-3-031-70359-1
dc.description: This is the author accepted manuscript. The final version is available from Springer via the DOI in this record. [en_GB]
dc.rights.uri: http://www.rioxx.net/licenses/all-rights-reserved [en_GB]
dcterms.dateSubmitted: 2024-03-15
rioxxterms.version: AM [en_GB]
rioxxterms.licenseref.startdate: 2024-08-22
rioxxterms.type: Conference Paper/Proceeding/Abstract [en_GB]
refterms.dateFCD: 2024-09-02T09:04:28Z
refterms.versionFCD: AM
refterms.panel: B [en_GB]
exeter.rights-retention-statement: No
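
Note on the abstract above: it outlines SET-Adam's three consecutive operations on the second-momentum vector vt (down-scaling via layerwise angles with all-one subvectors, ϵ-embedding, down-translating) without giving the exact formulas. The Python sketch below is a purely illustrative reading of that description, not the paper's actual update rule: the cosine-based shrink factor, the down-translation amount, and the helper name set_transform_layer are assumptions made here for illustration; consult the paper via the DOI above for the precise method.

    import numpy as np

    def set_transform_layer(v_layer, eps=1e-8):
        """Illustrative sketch of the three operations on one layer's slice of vt.

        The abstract names the steps (down-scaling, eps-embedding, down-translating)
        but not their formulas; the concrete rules below are assumptions.
        """
        # (1) down-scaling, done layerwise: use the angle between this layer's
        #     subvector of vt and the all-one subvector of the same size.
        #     Here the cosine of that angle is assumed to act as the shrink factor.
        ones = np.ones_like(v_layer)
        cos_angle = (v_layer @ ones) / (np.linalg.norm(v_layer) * np.linalg.norm(ones) + 1e-12)
        v_scaled = cos_angle * v_layer

        # (2) eps-embedding: fold a small constant into the vector itself
        #     (rather than only adding it in the stepsize denominator later).
        v_embedded = v_scaled + eps

        # (3) down-translating: shift the vector downwards to further suppress the
        #     range of the resulting adaptive stepsizes (assumed: remove half the minimum).
        return v_embedded - 0.5 * np.min(v_embedded)

    # Example: the transformed vector would replace vt in an Adam-style update,
    # with adaptive stepsizes proportional to 1 / sqrt(v_transformed).
    v_t_layer = np.array([0.04, 0.01, 0.09, 0.02])
    print(set_transform_layer(v_t_layer))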

