I am Adam Ścibior, a third-year PhD student at the University of Cambridge and the Max Planck Institute for Intelligent Systems. I work on probabilistic programming, which puts me at the intersection of the fields of Machine Learning and Programming Languages. I was therefore delighted to see that Artificial Intelligence was a central topic of both keynotes at ICFP this year.
The first keynote was given by Chris Martens and titled Compositional creativity: some principles for talking to computers. The central theme was using formal logic as a common language understandable by both humans and computers. This is essentially the symbolic approach to AI, which dates back to at least the 1950s, long before I was born. The idea is indeed very attractive, since the behaviour of such logic-based systems can be completely understood by humans, which makes them very reliable. It was great to see applications of these approaches to tasks such as storytelling with agents that reason about each other's behaviour. At the same time, purely logic-based approaches no longer receive much attention from the AI community, since they proved insufficient to realistically mimic human behaviour. While systems that combine logic with probability can be successfully deployed as expert systems, I believe that achieving truly human-like AI behaviour will require going beyond these approaches.
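To illustrate what makes such systems inspectable, here is a minimal sketch of a logic-style knowledge base in Haskell. It is my own toy example, not from the talk: the Knows and Trusts predicates and the single rule are invented for illustration.

```haskell
-- Facts are explicit data, and every derived conclusion can be
-- traced back to the facts and the rule that produced it.
-- (Hypothetical predicates, invented for this sketch.)
data Fact = Knows String String | Trusts String String
  deriving (Eq, Show)

facts :: [Fact]
facts = [Knows "alice" "secret", Trusts "alice" "bob"]

-- One rule: if A knows X and A trusts B, then B comes to know X
-- (e.g. Alice tells Bob the secret).
derive :: [Fact] -> [Fact]
derive fs = [Knows b x | Knows a x <- fs, Trusts a' b <- fs, a == a']

main :: IO ()
main = mapM_ print (facts ++ derive facts)
```

Running it prints the two base facts followed by Knows "bob" "secret", and crucially we can point at exactly which facts and which rule justified that conclusion.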
The second keynote was given by John Launchbury and titled Assuring AI. I thought it was a great introduction, for the ICFP audience, to modern machine learning methods and neural networks in particular. It emphasised a pressing problem in machine learning: how to verify properties of artificial intelligence systems. For example, can we ensure that our face recognition system does not recognise cats as humans, or that our self-driving car does not run over pedestrians? These properties are very different from the ones we usually verify in programs, since they cannot be stated precisely in a formal language that a neural network understands. I have heard this problem identified as very important many times at machine learning conferences, but I have yet to see any feasible solutions. It would be very exciting if any of the techniques currently used for program verification could be adapted to the machine learning setting.
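As a taste of what a checkable approximation of such a property might look like, here is a sketch using QuickCheck. Everything in it is a hypothetical stand-in: the two-output linear "network" and the eps bound are invented, and random testing only samples the local-robustness property rather than verifying it, which is precisely the gap the talk pointed at.

```haskell
import Test.QuickCheck

-- Hypothetical "network": a single linear layer producing two
-- scores, (cat, human). A stand-in for a real classifier.
scores :: (Double, Double) -> (Double, Double)
scores (x, y) = (0.9 * x - 0.2 * y, -0.3 * x + 0.8 * y)

-- Predicted class: 0 for cat, 1 for human.
classify :: (Double, Double) -> Int
classify p = let (c, h) = scores p in if c >= h then 0 else 1

-- Invented perturbation bound.
eps :: Double
eps = 0.001

-- Local robustness around one fixed input: perturbing each
-- coordinate by less than eps must not change the prediction.
prop_locallyRobust :: Property
prop_locallyRobust =
  forAll (choose (-eps, eps)) $ \dx ->
  forAll (choose (-eps, eps)) $ \dy ->
    classify (1 + dx, dy) === classify (1, 0)

main :: IO ()
main = quickCheck prop_locallyRobust
```

Passing a hundred random perturbations is evidence, not a proof: a verifier would have to show the property for every point in the eps-ball, and stating the right property in the first place (what, formally, is "a cat"?) is the harder half of the problem.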