Friday, May 12, 2017

Universal Dependencies for Textual Inference

In February 2017 I gave a talk at Nuance's AI Lab about a small experiment that I did with Alexandre Rademaker and Fabricio Chalub, from IBM Research in Rio de Janeiro.

We used the SICK corpus created by the COMPOSES project in Italy (devised to downplay the difficulties of language understanding), together with Parsey McParseFace, Google's self-declared most accurate parsing model in the world, to create logical representations of the sentences in SICK.
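As a concrete illustration of the first step of the pipeline, here is a minimal sketch that reads SICK sentence pairs and produces Universal Dependencies parses. The stanza parser stands in for Parsey McParseFace here, purely for illustration, and the tab-separated column names are assumptions based on the distributed SICK.txt file:

    import csv
    import stanza

    # One-time model download; stanza is a stand-in here for
    # Parsey McParseFace, which the experiment actually used.
    stanza.download('en')
    nlp = stanza.Pipeline('en', processors='tokenize,pos,lemma,depparse')

    # Column names assumed from the distributed SICK.txt (tab-separated).
    with open('SICK.txt', newline='') as f:
        for row in csv.DictReader(f, delimiter='\t'):
            doc = nlp(row['sentence_A'])
            for sent in doc.sentences:
                for word in sent.words:
                    # Print one UD dependency triple per word:
                    # relation(head token, dependent token).
                    head = sent.words[word.head - 1].text if word.head > 0 else 'ROOT'
                    print(f"{word.deprel}({head}, {word.text})")

The dependency triples printed this way are what gets translated into logical representations downstream.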

I reasoned that, with all the fanfare about the advances of neural nets in NLP, and considering that the corpus is simplified on purpose, these representations should be accurate enough to allow us to do the inferences required. Unfortunately, between small errors here and there and big errors in the disambiguation, the experiment did not work the way I expected it to.

You can see the slides on SlideShare. Since the disambiguation, using FreeLing's version of personalized PageRank, didn't work at all, for the moment we keep all the possible WordNet senses for each word in the GitHub repository. Now I am thinking about disambiguation, but also about the kinds of inferences that we want and don't want to make. This project was suggested by Danny Bobrow several years ago.
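To make concrete what keeping "all the possible word senses" means, here is a minimal sketch using NLTK's WordNet interface (an assumption for illustration; our pipeline actually extracted the senses via FreeLing):

    import nltk
    from nltk.corpus import wordnet as wn

    nltk.download('wordnet')  # one-time download of the WordNet data

    # Without disambiguation, every WordNet sense of a word is kept as a
    # candidate reading. For the verb 'play' (as in the SICK sentence
    # "A man is playing a guitar") that means all of these:
    for syn in wn.synsets('play', pos=wn.VERB):
        print(syn.name(), '-', syn.definition())

WordNet lists a few dozen verb senses for 'play' alone, which is exactly why carrying all the senses around makes inference hard.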


Tuesday, May 2, 2017

Hurwitz-Radon Transformations

Looking for something else, I discovered that my Master's thesis had been put online by the Maths department of PUC. Yay!!!

Had to ask Noemi to download it for me, as it required a PUC login. (A few months back I had asked FedEx to scan it for me, but they wanted 600 dollars, phew!)

My Master's supervisor, Duane Randall, is on the left, at the back of this very old picture.

Now I am planning to LaTeX and translate it, as quaternions and octonions are still something I enjoy thinking about.
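For readers who haven't met them, here is a quick recap (my summary, not quoted from the thesis) of the objects the title refers to, in the LaTeX I plan to use:

    % A Hurwitz-Radon family on R^n is a set of real n x n matrices
    % A_1, ..., A_k satisfying
    \[
      A_i^2 = -I, \qquad A_i A_j = -A_j A_i \quad (i \neq j).
    \]
    % Radon's theorem: the largest such k is \rho(n) - 1, where, writing
    % n = 2^{4a+b} u with u odd and 0 <= b <= 3,
    \[
      \rho(n) = 8a + 2^b .
    \]
    % \rho(n) = n exactly when n = 1, 2, 4, 8: the dimensions of the
    % reals, complexes, quaternions, and octonions.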