dc.contributor.author | Ghica, DR | |
dc.contributor.author | Alyahya, K | |
dc.date.accessioned | 2018-09-20T12:22:28Z | |
dc.date.issued | 2017-06 | |
dc.description.abstract | Game semantics is a powerful method of semantic analysis for programming languages. It gives mathematically accurate models ("fully abstract") for a wide variety of programming languages. Game semantic models are combinatorial characterisations of all possible interactions between a term and its syntactic context. Because such interactions can be concretely represented as sets of sequences, it is possible to ask whether they can be learned from examples. Specifically, we use long short-term memory neural nets (LSTM), a technique which proved effective in learning natural languages for automatic translation and text synthesis, to learn game-semantic models of sequential and concurrent versions of Idealised Algol (IA), which are algorithmically complex yet can be concisely described. We measure how accurate the learned models are as a function of the degree of the term and the number of free variables involved. Finally, we show how to use the learned model to perform latent semantic analysis between concurrent and sequential Idealised Algol. | en_GB |
dc.identifier.citation | ICE 2017: 10th Interaction and Concurrency Experience, 21 - 22 June 2017, Neuchâtel, Switzerland, pp. 57-75 | en_GB |
dc.identifier.doi | 10.4204/EPTCS.261.7 | |
dc.identifier.uri | http://hdl.handle.net/10871/34041 | |
dc.language.iso | en | en_GB |
dc.publisher | Interaction and Concurrency Experience | en_GB |
dc.rights | © D. R. Ghica and K. Alyahya. This work is licensed under the Creative Commons Attribution License. | en_GB |
dc.title | On the Learnability of Programming Language Semantics | en_GB |
dc.type | Conference paper | en_GB |
dc.date.available | 2018-09-20T12:22:28Z | |
dc.description | This is the final version of the article. Available from ICE via the DOI in this record. | en_GB |