July 6, 2021
Talk on XAI
I’ll be giving a talk on explainable AI at the Lancaster University Leipzig Symposium on Intelligent Systems (LEISYS) on July 22.
May 31, 2021
Journal paper on explainable machine learning accepted
Our work (with Giannis) surveying and distilling approaches to explainability in machine learning has been accepted. Preprint here, but the final version will be online and open access soon.
May 24, 2021
A course on explainable machine learning
I will be teaching a course on explainable machine learning, a practical introduction, supported by Andreas Bueff and others. Link here.
May 17, 2021
Talk on logic & learning
Will be giving a talk on logic & learning at LMU Munich, drawing from my SUM-2020 tutorial. Thanks to Felix for the invitation!
May 16, 2021
Talk on explainable machine learning
Gave a talk this Monday in Edinburgh on the principles & practice of explainable machine learning, covering motivations & insights from our survey paper. Key questions raised included how to extract intelligible explanations and how to modify the model to fit changing needs.
May 10, 2021
Two papers on probabilistic inference at UAI-2021
Jonathan’s paper considers a lifted approach to weighted model integration, including circuit construction. Paulius’ paper develops a measure-theoretic perspective on weighted model counting and proposes a way to encode conditional weights on literals analogously to conditional probabilities, which leads to significant performance improvements.
Paper on weighted model counting at SAT-2021
Weighted model counting typically assumes that weights are specified only on literals, which often necessitates introducing auxiliary variables. We consider a new approach based on pseudo-Boolean functions, leading to a more general definition. Empirically, we also obtain state-of-the-art results.
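To make the contrast concrete, here is a hypothetical toy sketch (naive enumeration, not the paper's algorithm; all names and the example CNF are mine): classical weighted model counting multiplies per-literal weights across each satisfying assignment, while a pseudo-Boolean weight function may score the assignment as a whole.

```python
from itertools import product

def wmc_literal(clauses, weights, n_vars):
    """Classical WMC: sum, over satisfying assignments, of the
    product of literal weights.

    clauses: CNF as a list of lists of non-zero ints (DIMACS-style literals).
    weights: dict mapping each literal (e.g. 1, -1, 2, -2) to a weight.
    """
    total = 0.0
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        # Keep only assignments satisfying every clause.
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            w = 1.0
            for v, b in assign.items():
                w *= weights[v if b else -v]
            total += w
    return total

def wmc_pb(clauses, weight_fn, n_vars):
    """Generalisation: the weight is any pseudo-Boolean function of the
    full assignment, not necessarily a product of literal weights."""
    total = 0.0
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            total += weight_fn(assign)
    return total

# (x1 or x2), with weights on literals:
clauses = [[1, 2]]
weights = {1: 0.3, -1: 0.7, 2: 0.6, -2: 0.4}
print(wmc_literal(clauses, weights, 2))  # 1 - 0.7*0.4 = 0.72
```

Any literal-weight instance is recovered by passing `wmc_pb` a weight function that multiplies the literal weights, but `wmc_pb` also admits weights that do not factorise over literals.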
May 5, 2021
Paper on learning linear programming objectives accepted at IJCAI-2021
We have a new paper accepted on learning optimal linear programming objectives. We take an "implicit" hypothesis construction approach that yields nice theoretical bounds. Congrats to Gini and Alex on getting this paper accepted. Preprint here.
Mar 30, 2021
Dagstuhl seminar on Trustworthiness and Responsibility in AI
Sriraam, Shannon, Kush, Joost, Hana and I are excited to be organizing a Dagstuhl seminar on Trustworthiness and Responsibility in AI. Link here.
Mar 25, 2021
Project on trustworthy systems
A consortium project on trustworthy systems and governance was accepted late last year. News link here.
Mar 22, 2021
Journal paper on prior constraints in tractable models
A journal paper has been accepted on prior constraints in tractable probabilistic models, available on the papers tab. Congratulations Giannis!
Mar 21, 2021
Will be speaking at the AIUK event on principles and practice of interpretability in machine learning.
Feb 8, 2021
Will be co-chairing a special session on knowledge representation & machine learning at KR-2021.
Jan 19, 2021
Workshop on deep learning and logic
Together with Efi, Loizos & Phokion, we are co-organising a workshop on deep learning & logic.
Jan 05, 2021
3 journal papers accepted
Three journal papers have been accepted, all on tractable probabilistic models. With Andreas and Stefanie, a continuous variant is proposed. With Michael, modeling fairness is considered. And with Lewis, the learning of moral responsibility is investigated.
Oct 22, 2020
Talk at AI, ethics & society
The talk is entitled "Fairness and Moral Responsibility meets Computational Tractability"; link to the event here.
Oct 15, 2020
Talk at U3A
I’ll be giving a talk on responsible AI at the Edinburgh chapter of U3A. Thanks to Rod & George for the invitation.
Sep 25, 2020
Talk at the SICSA conference
I’ll be giving a talk at the conference on fair and responsible AI in the cyber physical systems session. Thanks to Ram & Christian for the invitation. Link to event.
A recent collaboration with the NatWest Group on explainable machine learning is discussed in The Scotsman. Link to article here. A preprint on the results will be made available shortly.
I'll be giving a lecture at FU Berlin on Interpretable, Fair and Responsible AI, all approached via probabilistic circuits. See the recent work with Varley, Hammond, Bueff, Papantonis, for example. Link to the class here.
I'll be giving a seminar at the LAIV (Lab for AI Verification) at Heriot-Watt on our ECAI-2020 paper with Anton. The paper interprets variational autoencoders using probabilistic circuits. Link to event here.
A journal paper has been accepted that positions a new framework we call semiring programming, which extends probabilistic programming with connectives taken from any semiring. This then allows us to capture a wide range of search and combinatorial problems considered in AI, including inference, SAT, convex programming, weighted model integration etc, in a single unified programming model. Preprint here.
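The unifying idea can be illustrated with a hypothetical toy sketch (my own, not the paper's framework or notation): a single enumeration routine, parameterised by a semiring's addition and multiplication, recovers satisfiability over the Boolean semiring, model counting over the counting semiring, and weighted model counting over the probability semiring.

```python
from itertools import product as cartesian

def semiring_mc(clauses, n_vars, plus, times, zero, lit_label):
    """Combine literal labels with `times` within each satisfying
    assignment, and fold assignments together with `plus`.

    clauses: CNF as a list of lists of non-zero ints (DIMACS-style literals).
    lit_label: maps each literal to an element of the semiring.
    """
    acc = zero
    for bits in cartesian([False, True], repeat=n_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            w = None
            for v, b in assign.items():
                lw = lit_label(v if b else -v)
                w = lw if w is None else times(w, lw)
            acc = plus(acc, w)
    return acc

clauses = [[1, 2]]  # (x1 or x2)

# Counting semiring (+, *): number of models.
n = semiring_mc(clauses, 2, lambda a, b: a + b, lambda a, b: a * b,
                0, lambda l: 1)
print(n)  # 3

# Boolean semiring (or, and): satisfiability.
sat = semiring_mc(clauses, 2, lambda a, b: a or b, lambda a, b: a and b,
                  False, lambda l: True)
print(sat)  # True
```

Swapping in real-valued literal labels turns the same routine into weighted model counting, which is the sense in which one programming model captures several of the problems listed above.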
I will be giving a tutorial on logic and learning with a focus on infinite domains at this year's SUM. Link to event here.
Our paper on synthesizing plans with loops in the presence of probabilistic noise, accepted at the International Journal of Approximate Reasoning, has also been accepted to the ICAPS journal track. Preprint of the full paper here.
Extended abstracts of our NeurIPS paper (on PAC-learning in first-order logic) and the journal paper on abstracting probabilistic models were accepted to KR's Recently Published Research track.
A survey paper has been accepted at SUM, which looks at the semantics for integrating first-order logic, probability & action. In particular, the situation calculus, a dialect of first-order logic, is considered as the underlying representation language. Preprint here.
Paulius' work on algorithmic strategies for randomly generating logic programs and probabilistic logic programs has been accepted to Principles and Practice of Constraint Programming (CP-2020). The work is motivated by the need to test and evaluate inference algorithms. A combinatorial argument for the correctness of the ideas is also given. Preprint here.
Our paper (joint with Amelie Levray) on learning credal sum-product networks has been accepted to AKBC. Such networks, along with other types of probabilistic circuits, are attractive because they guarantee that certain types of probability estimation queries can be computed in time linear in the size of the network. The problem we tackle is how the learning should be defined when there is missing or incomplete data, leading to an account based on imprecise probabilities. Preprint here.
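The tractability claim above can be made concrete with a hypothetical minimal sketch (my own class names and numbers; this shows only the ordinary linear-time circuit evaluation, not the credal or imprecise-probability machinery of the paper): a sum-product network is evaluated bottom-up, visiting each node once, so a marginal query costs time linear in the size of the circuit.

```python
class Leaf:
    """Indicator leaf for a binary variable taking a given value."""
    def __init__(self, var, value):
        self.var, self.value = var, value
    def eval(self, evidence):
        # Variables absent from the evidence are marginalised out: return 1.
        if self.var not in evidence:
            return 1.0
        return 1.0 if evidence[self.var] == self.value else 0.0

class Product:
    def __init__(self, children):
        self.children = children
    def eval(self, evidence):
        r = 1.0
        for c in self.children:
            r *= c.eval(evidence)
        return r

class Sum:
    def __init__(self, weighted_children):  # list of (weight, child)
        self.weighted_children = weighted_children
    def eval(self, evidence):
        return sum(w * c.eval(evidence) for w, c in self.weighted_children)

# P(A, B) as a mixture of two fully factorised components:
a1, a0 = Leaf("A", 1), Leaf("A", 0)
b1, b0 = Leaf("B", 1), Leaf("B", 0)
comp1 = Product([Sum([(0.9, a1), (0.1, a0)]), Sum([(0.2, b1), (0.8, b0)])])
comp2 = Product([Sum([(0.3, a1), (0.7, a0)]), Sum([(0.6, b1), (0.4, b0)])])
root = Sum([(0.5, comp1), (0.5, comp2)])

print(root.eval({"A": 1}))          # marginal P(A=1) = 0.5*0.9 + 0.5*0.3 = 0.6
print(root.eval({"A": 1, "B": 1}))  # joint P(A=1, B=1) = 0.18
```

The learning problem the paper addresses is, roughly, what the weights on the sum nodes should be when the training data are missing or incomplete, which motivates replacing point weights with credal sets.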
A journal paper on abstracting probabilistic models has been accepted. The paper studies the semantic constraints that allow one to abstract a complex, low-level model with a simpler, high-level one. The framework is applicable to a large class of formalisms, including probabilistic relational models. The paper also studies the synthesis problem in that context. Preprint here.
I gave a talk on our recent NeurIPS paper in Glasgow while also covering other approaches at the intersection of logic, learning and tractability. Thanks to Oana for the invitation.
If you are attending AAAI this year, you may be interested in checking out our papers that touch on fairness, abstraction and generalized sum-product problems.
Our work on (goal) regression and progression operators for first-order probabilistic logics will appear in Artificial Intelligence. It studies how representations in these logics behave in a dynamic setting, and introduces operators for reducing a query after actions to an initial state, or updating the representation against those actions.
Our work on symbolically interpreting variational autoencoders, as well as a new learnability result for SMT (satisfiability modulo theories) formulas, got accepted at ECAI. Conference link here.
Our work on synthesizing plans with loops in the presence of noise will appear in the International Journal of Approximate Reasoning. It investigates how the AND-OR controller search of Hu & De Giacomo can be extended for strong goal satisfaction and termination conditions when tackling stochastic nondeterminism.
Last week, I gave a talk on our NeurIPS paper on implicit learnability in FOL at SFU. Thanks to Eugenia and Jim for hosting me. Slides here.
If you are attending NeurIPS this year, you may be interested in checking out our papers that touch on morality, causality, and interpretability. Preprints can be found on the workshop page.
We were thrilled to hear that our recent paper at ILP, entitled "Learning Probabilistic Logic Programs over Continuous Data", received the best student paper award in the long paper track.
We were pleased to learn that our work entitled "Implicitly Learning to Reason in First-Order Logic" was chosen as a best paper at The Fourth International Workshop on Declarative Learning Based Programming, IJCAI, 2019.
In the last week of October, I gave a talk informally discussing explainability and ethical responsibility in artificial intelligence. Thanks to the organizers for the invitation. Link here.
I was very excited to be giving an invited talk at the Samsung AI Forum in Seoul today. Thanks to Samsung for the invitation and the hospitality. Event link here.
The article, to appear in The Biochemist, surveys some of the motivations and approaches for making AI interpretable and responsible.
I'm thrilled to have received the Royal Society University Research Fellowship. Announcement from the Royal Society can be found here.
I gave a talk at the Beyond Symposium 2019, which aims to bring together artists, scientists, economists, among others. The talk was on experiential AI and its challenges.
Brendan and I have our paper on (implicit) PAC learnability for first-order logic accepted at NeurIPS.
Drew, Dave, Larissa and I had the opportunity to discuss the motivations and foundations for instigating the new research theme of Experiential AI in a 90-minute talk.
Thanks to Kristian, Michael, Phokion and Daniel for organizing a great seminar at Dagstuhl. Along with the discussions, I had an opportunity to reflect on the ways logic enhances learning. Slides for a short presentation I gave, entitled "Six Perspectives on Logic & Learning (in Infinite Domains)", can be found here.
Drew, Ruth, Larissa, Dave, Frank and I have an editorial on experiential AI accepted at the Leonardo journal. Paper link here.
I gave a talk at the Skeptics on the Fringe on ethical AI. Thanks to the Edinburgh Skeptics for the invitation. Link here.
The paper tackles unsupervised program induction over mixed discrete-continuous data, and is accepted at ILP.
Last week, I gave a seminar at NII in Tokyo on our recent work on interpretable and responsible AI. Thanks to Katsumi Inoue for organising the talk.
I gave a seminar at the Indian Institute of Science on our recent work on interpretable and ethical AI. Thanks to Partha Talukdar for organising the talk.
I gave a seminar at Sabancı University in Turkey on our recent work on interpretable and ethical AI. Thanks to Esra Erdem for organising the talk. Seminar link here.
Raffaella and I are thrilled to receive an EPSRC IAA grant on "AI for credit risk." Link here.
Last week, I gave a talk at the pint of science on automated systems and their impact, touching on the topics of fairness and blameworthiness.
Larissa, Drew, Dave and I are excited to be giving a talk on experiential AI at the ZKM Center for Art and Media Karlsruhe.
Together with colleagues from Edinburgh and Heriot-Watt, we have put out the call for a new research agenda.
I gave the tutorial at the 16th International Conference on Principles of Knowledge Representation and Reasoning / KR 2018.
I gave a talk at the Cognitive Robotics Workshop at KR-18, entitled Probabilistic Planning by Probabilistic Programming: Semantics, Inference and Learning.
I gave a talk at the London Machine Learning Meetup. Thanks to the organizers for the invitation.
I gave a tutorial on effective inference and learning with probabilistic logical models in continuous domains at ACAI 2018.
We are organising a workshop on integrating learning and reasoning at IJCAI-ECAI in Sweden.
I'm thrilled to become a member of the Royal Society of Edinburgh (RSE) Young Academy of Scotland.
The article introduces a general logical framework for reasoning about discrete and continuous probabilistic models in dynamical domains.
I gave a talk, entitled "Explainability as a Service", at the above event, discussing expectations regarding explainable AI and how it could be enabled in applications.
In the paper, we exploit the XADD data structure to perform probabilistic inference in mixed discrete-continuous spaces efficiently.
Our MLJ (2017) article on planning with hybrid MDPs was accepted for presentation at the journal track.
I gave a seminar on decision-theoretic planning via probabilistic programming, based on our recent MLJ (2017) article.
I'm thrilled to soon get started on an EPSRC first grant on XAI.
Through the Alan Turing Institute, we (Stefanie, Andreas and I) mentored at the Deloitte Datathon, on the theme of financial services for social good.
Last week, I gave an IPAB (Edinburgh) seminar on decision-theoretic planning via probabilistic programming, based on our recent MLJ (2017) article.
I attended the SML workshop in the Black Forest, and talked about the connections between explainable AI and statistical relational learning.
I gave a talk entitled "Perspectives on Explainable AI," at an interdisciplinary workshop focusing on building trust in AI.
An article at the planning and inference workshop at AAAI-18 compares two distinct approaches for probabilistic planning by means of probabilistic programming.
The paper discusses the epistemic formalisation of generalised planning in the presence of noisy acting and sensing.
I gave a talk at the workshop on how the synthesis of logic and machine learning, especially areas such as statistical relational learning, can enable interpretability.
I gave a talk on decision-theoretic planning via probabilistic programming at Oxford.
I gave a talk and a tutorial at the Hybrid reasoning workshop at Aachen, Germany.
Applications are invited for a PhD position in Artificial Intelligence, to be based in the School of Informatics at the University of Edinburgh.
I'm thrilled to be a faculty fellow at the Alan Turing Institute.
I gave a seminar on extending the expressiveness of probabilistic relational models with first-order features, such as universal quantification over infinite domains.
We study planning in relational Markov decision processes involving discrete and continuous states and actions, and an unknown number of objects (via probabilistic programming).
I'm giving a tutorial on First-Order Multi-agent Logics in Action at IJCAI in Melbourne, Australia.
I'm giving a tutorial on Unifying Logic, Dynamics and Probability - Foundations, Algorithms and Challenges at IJCAI in Melbourne, Australia.
The paper discusses how to handle nested functions and quantification in relational probabilistic graphical models.
I discussed advances in open-universe probabilistic models.
I attended a workshop on epistemic planning, where I presented a poster on some results pertaining to programs in unbounded stochastic domains.
Henri, Lluis, Guilin, James, Marcelo and I are organising a workshop on the logical foundations of uncertainty and learning.
Honored to be giving a talk at the IJCAI-17 Early Career Spotlight track.
The first introduces a first-order language for reasoning about probabilities in dynamical domains, and the second considers the automated solving of probability problems specified in natural language.
I went over symbolic approaches to probabilistic inference and optimisation.
Siddharth, Sheila, Ron and I are organizing a workshop on generalized planning to be held at ICAPS.
I gave a talk on the risks of artificial intelligence and research priorities at the International Development Society.
I discussed model counting approaches for mixed discrete-continuous probability spaces.
These introduce (1) the use of symbolic representations in solving logical linear programs, and (2) an extension of weighted model counting for open universes.
Scott, Rodrigo, Kristian, Martin and I are organizing a workshop to explore and promote symbolic approaches to probabilistic inference, numerical optimization and machine learning.
Since October, I have been at the University of Edinburgh.
An abridged version of our UAI-15 paper on approximate inference will be presented in the sister conferences track at IJCAI-16.
We consider the question of how generalized plans (plans with loops) can be deemed correct in unbounded and continuous domains.
These introduce (1) component caching in hybrid domains, and (2) a first-order logic of probability with only knowing.
Our paper "Planning in Discrete and Continuous Markov Decision Processes by Probabilistic Programming" received the best student paper award at ECML-PKDD.
We have 4 papers accepted at IJCAI-15. These cover (1) weighted model counting for hybrid domains, (2) the GOLOG language in hybrid domains, (3) interactions between only knowing and common knowledge, and (4) only knowing defined in classical modal logic.
Our paper "Hashing-Based Approximate Probabilistic Inference in Hybrid Domains" received the best paper award at the 31st Conference on Uncertainty in Artificial Intelligence (UAI), 2015.
Since November, I have been a postdoctoral researcher at KU Leuven.
We introduce (a) PREGO, an action language for cognitive robotics in continuous domains, and (b) computing compact conditional plans in partially observable domains.
We address the progression of basic action theories in (a) multiagent systems and (b) stochastic domains.
My project on cognitive robotics was selected as a finalist for the Kurt Gödel Research Prize Fellowship, receiving a silver medal by the Kurt Gödel Society.
I have been selected to participate in the Heidelberg Laureate Forum, where I will meet with Abel, Fields and Turing Laureates.
We introduce a rich account of robot localization.
We are offering a summer research project in knowledge representation.
This Winter, I will be teaching CSC 2502/486 Knowledge Representation and Reasoning.