Some three weeks ago we had the pleasure of a visit from Mike Lewis, of the University of Washington, originally a student of Mark Steedman in Edinburgh.
He came to Nuance and talked about his super-efficient A* parsing system, which he had presented at ACL in San Diego. I really wanted him to talk about his older work with Mark, "Combined Distributional and Logical Semantics" (Transactions of the Association for Computational Linguistics, 2013), but if someone is nice enough to come and talk to you, they may choose whatever they want to talk about, at least in my book.
Besides, people in the Lab were very interested in Mike's new work. Mike is a great speaker, one of those who give you the impression that you really understand everything he says. Very impressive indeed, especially considering how little I know about parsing or LSTM (long short-term memory) methods. And the parser has been publicly released; anyone can find it on GitHub.
There is even a recorded talk of the presentation I wanted to hear, Combined Distributional and Logical Semantics, so altogether it was a splendid visit. When discussing related work in their paper, Mike and Mark say this about our Bridge system:
'Others attempted to build computational models of linguistic theories based on formal compositional semantics, such as the CCG-based Boxer (Bos, 2008) and the LFG- based XLE (Bobrow et al., 2007). Such approaches convert parser output into formal semantic representations and have demonstrated some ability to model complex phenomena such as negation. For lexical semantics, they typically compile lexical resources such as VerbNet and WordNet into inference rules—but still achieve only low recall on open-domain tasks, such as RTE, mostly due to the low coverage of such resources.'
I guess I agree that the resources we managed to gather didn't have the coverage we needed. Moreover, other resources like those are still needed: we need bigger, more complete, more encompassing "Unified Lexica" for different phenomena, and for more, many more languages. But I will stop now with a very impressive slide from Mike's presentation.