# Belle Lab

The Lab carries out research in **artificial intelligence** by unifying ideas from **machine learning** and **symbolic systems (logics, programs & plans)**, with a recent emphasis on *explainability* and *ethics*.

We are motivated by the need to augment *learning* and *perception* with *high-level structured, commonsensical knowledge*, enabling systems to learn faster and build more accurate models of the world. We are interested in developing computational frameworks that can *explain their decisions*, are *modular* and *re-usable*, and are *robust* to variations in problem description. A non-exhaustive list of topics includes:

- probabilistic and statistical knowledge bases
- ethics and explainability in AI
- exact and approximate probabilistic inference
- statistical relational learning and causality
- unifying deep learning and probabilistic learning methods
- probabilistic programming
- numerical optimization
- automated planning and high-level programming
- reinforcement learning and learning for automated planning
- cognitive robotics
- automated reasoning
- modal logics (knowledge, action, belief)
- multi-agent systems and epistemic planning

For example, our recent work has touched upon:

- morality in machine learning systems
- tractable learning with relational logic
- deep tractable probabilistic generative models
- learning with missing data
- program learning for explainability
- implementing fairness
- model abstraction for explainability
- strategies for interpretable & responsible AI

For more information, please see our papers.

**Faculty:** Vaishak Belle

**Postdoctoral fellows and PhD students:**

- *Rafael Karampatsis* (postdoc), interested in ML interpretability
- *Paulius Dilkas*, interested in logical abstractions
- *Miguel Mendez Lucero*, interested in causality
- *Jonathan Feldstein* (with James Cheney), interested in probabilistic programming
- *Eleanor Platt* (with Amos Storkey), interested in interpretable deep learning
- *Fazl Barez* (with Ekaterina Komendantskaya), interested in explainable AI
- *Giannis Papantonis*, interested in causality
- *Ionela-Georgiana Mocanu*, interested in PAC learning
- *Gary Smith* (with Ron Petrick), interested in epistemic planning
- *Andreas Bueff*, interested in tractable learning and reinforcement learning
- *Sandor Bartha* (with James Cheney), interested in program induction

**Alumni:**

- *Samuel Kolb* (PhD 2019, KU Leuven, with Luc De Raedt), interested in inference for hybrid domains
- *Amélie Levray* (postdoctoral fellow), interested in tractable learning with credal networks
- *Stefanie Speichert* (MSc, 2018), interested in program induction
- *Anton Fuxjaeger* (MSc, 2019), interested in applications of tractable models to deep learning
- *Davide Nitti* (PhD 2016, KU Leuven, with Luc De Raedt), interested in machine learning for hybrid domains
- *Jazon Szabo* (BSc, 2019), interested in modal logics for causality
- *Himan Mookherjee* (MSc, 2018; principal supervisor: James Cheney), interested in machine learning for anomaly detection
- *Michael Varley* (MSc, 2018), interested in algorithmic fairness
- *Lewis Hammond* (MSc, 2018), interested in responsible decision making
- *Laszlo Treszkai* (MSc, 2018), interested in probabilistic planning
- *Amit Parag* (MSc by Research, 2019), interested in machine learning for physics
- *Rose Khan* (MSc, 2017), interested in default reasoning
- *Nazgul Tazhigaliyeva* (MSc, 2017), interested in model counting

**Visitors:**

- *Esra Erdem*, Sabanci University
- *Yoram Moses*, Technion
- *Brendan Juba*, Washington University in St. Louis
- *Loizos Michael* (via the Alan Turing Institute), Open University of Cyprus
- *Till Hoffman*, RWTH Aachen University