Sometimes my posts are about stuff I've done that I want to remember. Sometimes they are about things that I feel strongly about. But mostly it is about having conversations with myself, pretending that there is someone else there, in the hopes that the pretense will yield clarity and good arguments...
Yep, I think I am one of those people who reason much more clearly when debating. I don't know if it's the adrenaline that brings about new ideas or what, but I am extremely fond of 'mental ping-pong'. I say a bunch of things (some might be provocative) and you reply with a bunch of stuff that hadn't occurred to me, and one way or the other (whether we agree or not) things get a little clarified, a little less fuzzy. And we go on in an infinite approximation of knowledge.
So the post today is about measuring the quality of logics, or of logical systems, if you prefer. The question comes from two very different sides. On one hand, in my real work, I am trying to devise ways of measuring the quality of logical forms. In that case I have a proposal for a system of representations, and what I am after is simply an automated system that will compute both the representations and changes to them. Even with very small sets of sentences (where it is feasible to calculate the ideal, or gold-standard, representations), given that you're trying to improve things, it's very easy to modify one thing and completely mess up something else that you didn't think was related. Tracy H. King and I had a short paper at Coling 2008 about it, Designing Testsuites for Grammar-based Systems in Applications.
Thus you need systems that will keep you on the straight and narrow and that will work as "forcing functions" to move you forward, not simply in circles... The paper above is about 'grammar-based systems' because they bring complications of their own, in terms of storing and comparing representations, which more numerical systems perhaps do not. (For numerical systems maybe you only want to know if your f-score is up or down, or maybe you want precision, recall and f-score, but I think even then I wouldn't be satisfied...)
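Just to make concrete the sort of automated check I keep asking for (this is not the system from the paper, only a toy sketch): assume a small testsuite of sentences paired with gold-standard representations, a placeholder parse_to_logical_form standing in for whatever actually computes them, and representations that decompose into comparable facts. A regression run then just diffs the current output against the stored gold and reports precision, recall and f-score next to a plain changed/unchanged verdict.

```python
# Toy regression check for a representation-producing system.
# parse_to_logical_form is a stand-in for the real system under test;
# the sentences, gold representations, and "facts" are made up.

GOLD = {
    "Every dog barks.": {("forall", "x"), ("dog", "x"), ("bark", "x")},
    "A cat sleeps.":    {("exists", "x"), ("cat", "x"), ("sleep", "x")},
}

def parse_to_logical_form(sentence):
    # Pretend output of the current version of the system.
    canned = {
        "Every dog barks.": {("forall", "x"), ("dog", "x"), ("bark", "x")},
        "A cat sleeps.":    {("exists", "x"), ("cat", "x"), ("nap", "x")},  # a regression
    }
    return canned[sentence]

def prf(gold, got):
    # Precision / recall / f-score over the individual facts of a representation.
    tp = len(gold & got)
    p = tp / len(got) if got else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def regression_run():
    for sentence, gold in GOLD.items():
        got = parse_to_logical_form(sentence)
        status = "OK  " if got == gold else "DIFF"
        p, r, f = prf(gold, got)
        print(f"{status} {sentence!r}  P={p:.2f} R={r:.2f} F={f:.2f}")
        if got != gold:
            print("     missing:", gold - got, " spurious:", got - gold)

if __name__ == "__main__":
    regression_run()
```

Even this tiny sketch shows why a single f-score is not enough for me: the number tells you something moved, but it is the diff of missing and spurious facts, sentence by sentence, that tells you whether your "improvement" quietly broke something unrelated.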
But the need for these measurements is real and palpable in other systems too. I have now spent months, actually years, of my life begging for such systems to be implemented--but that's another story for a rainy day. Anyway, I am adding here another paper of Tracy's on the same subject, as I probably need to read it again: Regression Testing For Grammar-Based Systems.
But on the other hand, the question comes from a student of Joao Marcos: how can we measure the difficulty of derivations or proofs? Of course, if one is thinking about different logical systems, we can perhaps measure how good they are through the number of useful results that one can prove about them. Clearly you want a logic that you can prove sound; completeness is nice, and the more complexity facts you know about it the better. I am not a huge fan of complexity myself, but being able to prove complexity assertions clearly indicates that the system is amenable to mathematical methods, which is good. Then you can try to show something about expressiveness, either in terms of what you could do before or in terms of what you've devised your system to do. And sometimes you can think of notions of coverage and of precision. Do you need long proofs? You must have proofs; systems that are only a model do not count for me. But we're back at the student's question: how do we measure and decide which proofs are good proofs? Bad proofs? Easy proofs? Hard proofs? There are some things one can say, but I sure need to read more and think more about it.
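Just to make the student's question concrete, here is one naive way to put numbers on a proof, assuming we already have it as a tree of inference steps (the Proof structure, the rule names, and the example are all invented for illustration). None of these numbers settles what a "good" or "hard" proof is; they only give us something to compare.

```python
# Naive "difficulty" proxies for a proof represented as a tree of inference steps.
from dataclasses import dataclass, field

@dataclass
class Proof:
    rule: str                                       # name of the inference rule applied
    premises: list = field(default_factory=list)    # sub-proofs of the premises

def size(p: Proof) -> int:
    # Total number of inference steps in the proof.
    return 1 + sum(size(q) for q in p.premises)

def depth(p: Proof) -> int:
    # Longest chain of dependent steps.
    return 1 + max((depth(q) for q in p.premises), default=0)

def rules_used(p: Proof) -> set:
    # Which inference rules the proof relies on.
    return {p.rule}.union(*(rules_used(q) for q in p.premises))

# A tiny made-up proof: modus ponens over an axiom and another modus ponens.
example = Proof("mp", [Proof("axiom"), Proof("mp", [Proof("axiom"), Proof("axiom")])])

print(size(example), depth(example), sorted(rules_used(example)))
# -> 5 3 ['axiom', 'mp']
```

Length, depth, and the repertoire of rules are crude proxies at best; they say nothing about how hard the proof was to find, or how illuminating it is once found, which is really where the interesting part of the question lives.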
But not now. Now is yoga time.