Show simple item record

dc.contributor.author: Ghica, DR
dc.contributor.author: Alyahya, K
dc.date.accessioned: 2018-09-20T12:22:28Z
dc.date.issued: 2017-06
dc.description.abstract: Game semantics is a powerful method of semantic analysis for programming languages. It gives mathematically accurate models ("fully abstract") for a wide variety of programming languages. Game semantic models are combinatorial characterisations of all possible interactions between a term and its syntactic context. Because such interactions can be concretely represented as sets of sequences, it is possible to ask whether they can be learned from examples. Concretely, we are using long short-term memory neural nets (LSTM), a technique which proved effective in learning natural languages for automatic translation and text synthesis, to learn game-semantic models of sequential and concurrent versions of Idealised Algol (IA), which are algorithmically complex yet can be concisely described. We will measure how accurate the learned models are as a function of the degree of the term and the number of free variables involved. Finally, we will show how to use the learned model to perform latent semantic analysis between concurrent and sequential Idealised Algol.
dc.identifier.citation: ICE 2017: 10th Interaction and Concurrency Experience, 21-22 June 2017, Neuchâtel, Switzerland, pp. 57-75
dc.identifier.doi: 10.4204/EPTCS.261.7
dc.identifier.uri: http://hdl.handle.net/10871/34041
dc.language.iso: en
dc.publisher: Interaction and Concurrency Experience
dc.rights: © D. R. Ghica and K. Alyahya. This work is licensed under the Creative Commons Attribution License.
dc.title: On the Learnability of Programming Language Semantics
dc.type: Conference paper
dc.date.available: 2018-09-20T12:22:28Z
dc.description: This is the final version of the article. Available from ICE via the DOI in this record.

