DS4DM Coffee Talks

What is the optimal amount of caffeine for making good decisions? At DS4DM we transform coffee, tea, and chats into cutting-edge ideas. Join our seminar series, either in Montréal or via Zoom, with your favorite hot beverage.

Roughly every two weeks, we invite a speaker working in Optimization, Machine Learning, or Game Theory to give a talk in our lab.

The format: we host either a single 50-minute talk by an experienced researcher, or two 30-minute talks by junior researchers. You can join the online stream or, if you are in Montréal, attend in person at Pavillon André-Aisenstadt (Université de Montréal), 5th floor, CIRRELT Conference Room (5441).

Zhaocheng Zhu

University of Montreal, Mila

Knowledge Graph Reasoning with Graph Neural Networks

Knowledge graphs encode real-world facts and are critical in various applications and domains such as natural language understanding, recommender systems, and drug discovery. A fundamental problem on knowledge graphs is to predict the answers to queries by reasoning over existing facts, a.k.a. knowledge graph reasoning. Such a problem is commonly solved by embedding methods. In this talk, we take another approach and solve knowledge graph reasoning with graph neural networks. First, we propose Neural Bellman-Ford Networks (NBFNet), which solve single-hop reasoning with the generalized Bellman-Ford algorithm and learned operators in the path formulation. Then we introduce the Graph Neural Network Query Executor (GNN-QE), which combines GNNs with fuzzy sets to answer multi-hop queries on knowledge graphs. Both models are closely related to traditional symbolic methods, such as personalized PageRank and subgraph matching, but can additionally deal with the missing links in knowledge graphs. Moreover, both models can visualize their intermediate steps, which may help us better understand the reasoning process.
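The abstract relates both models to personalized PageRank. As background, here is a minimal power-iteration sketch of personalized PageRank on a toy graph (illustrative only, not the speaker's implementation; the graph and restart probability are made up):

```python
import numpy as np

# Toy directed graph as an adjacency matrix; column-normalizing it gives a
# transition matrix P, where P[j, i] is the probability of moving from i to j.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=0)

def personalized_pagerank(P, source, alpha=0.15, iters=100):
    """Power iteration with restart probability `alpha` to the source node."""
    n = P.shape[0]
    e = np.zeros(n)
    e[source] = 1.0
    p = e.copy()
    for _ in range(iters):
        p = alpha * e + (1 - alpha) * P @ p
    return p

scores = personalized_pagerank(P, source=0)
print(scores)  # a distribution over nodes; the source keeps extra mass
```

The restarts concentrate probability mass around the source node, which is why the scores can be read as relevance of each node to the source.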
When: 4th of August 2022 - 11am (EST)
Where: Pavillon André-Aisenstadt (Université de Montréal), 5th floor, CIRRELT Conference Room (5441)
More: Zhaocheng's Website

Rosemarie Santa González

University of Quebec in Montréal

A Stochastic Prize Collection Methodology for Mobile Clinic Deployment Planning

Practitioners strive to deploy clinics that can access the populations with the highest needs. However, when planning humanitarian operations, uncertainty arises in travel times, the usability of roads, and access to population points. This study models mobile clinic deployment as a two-stage Stochastic Prize Collection Problem that maximizes the benefit offered and minimizes the costs, while considering several sources of uncertainty. The impact of four recourse policies (simple recourse, select-then-route, re-route per time period, and full network recourse) on the mobile clinic deployment plans is also studied. The recourse policies' performance is evaluated at three severity levels: moderate, medium, and severe. Results and managerial insights are presented for a real-world instance from Kenya.
When: 4th of August 2022 - 11am (EST)
Where: Pavillon André-Aisenstadt (Université de Montréal), 5th floor, CIRRELT Conference Room (5441)

Past Talks

Adrian Vetta


Matching Markets: Incentives under the Deferred Acceptance Algorithm

I will give a short overview of several important applications of algorithmic mechanism design and game theory. Then I will focus on one specific problem: what incentives arise in matching markets when the classical deferred acceptance algorithm is used to obtain a stable matching? This talk is based on work with Ndiame Ndiaye and Sergey Norin.
When: 10th of November 2021 - 2:30pm (EST)
More: Adrian's Website

Can Li

DS4DM & Purdue

Algorithms and Software for Two-stage Stochastic Mixed-integer Nonlinear Programs

Two-stage stochastic programming is one of the main approaches to modeling decision-making under uncertainty. Although there have been algorithmic advances in two-stage stochastic mixed-integer linear programs in the past two decades, algorithms for the nonlinear counterpart are few. In this talk, I will first give an overview of recent advances in solving stochastic MINLP problems. The major focus of the talk is a generalized Benders decomposition-based branch-and-cut algorithm that combines strengthened Benders cuts and Lagrangean cuts for solving two-stage stochastic MINLPs with mixed-binary first- and second-stage variables. The proposed algorithm is applied to a stochastic pooling problem, a crude selection problem, and a storage design problem. Its performance is compared with a Lagrangean decomposition-based branch-and-bound algorithm and with solving the corresponding deterministic equivalent using solvers including BARON, ANTIGONE, and SCIP.

When: 24th of November 2021 - 11am (EST)
More: Can's Website

Guanyi Wang


Solving row-sparse principal component analysis via convex integer programs

For general computationally intensive mixed-integer nonlinear optimization problems, a major challenge is to verify/guarantee the quality of any feasible solution within tractable time/computational limits. A natural idea to tackle this challenge is to construct tight relaxations and design approximation algorithms for the relaxed problems. In this talk, we focus on the (row) sparse principal component analysis (rsPCA) problem, i.e., the problem of finding the top-r leading principal components of a covariance matrix such that all these principal components are linear combinations of a subset of k variables. We propose: (a) a convex integer programming relaxation of rsPCA that gives upper (dual) bounds for rsPCA; and (b) a new local search algorithm for finding primal feasible solutions for rsPCA. We also show that, in the worst case, the dual bounds provided by the convex IP are within an affine function of the globally optimal value. We demonstrate our techniques on large-scale covariance matrices. This is joint work with Santanu S. Dey, Rahul Mazumder, and Marco Molinaro.

When: 24th of November 2021 - 11am (EST)
More: Guanyi's Website

Didier Chételat


Continuous cutting planes algorithms

Standard cutting plane algorithms are iterative: cuts are added in rounds, and the resulting linear program grows at every round. Ineffective cuts are sometimes pruned, but the number of cuts must still fundamentally grow for the algorithm to converge. This is problematic for computational stability, as well as for memory and time. In this talk, we present an alternative class of algorithms that finds good cuts by deformation: it starts with a fixed number of initial cuts and continuously deforms them by gradient ascent to tighten the LP relaxation. The number of cuts never changes throughout the process, and interestingly, this frames the computation of good cuts as a continuous optimization problem. Computational experiments show that this can be made to work in practice, although the method is not yet time-competitive with classical cutting plane algorithms.
When: 22nd of February 2022 - 11am (EST)
More: Didier's Website

Maxime Gasse


Causal Reinforcement Learning using Observational and Interventional Data

Learning a causal model of the environment efficiently is a key challenge of model-based RL. In this work we consider a POMDP scenario where the learning agent can collect online experiences through direct interactions with the environment (interventional data), but also has access to a large collection of offline experiences, obtained by observing another agent interacting with the environment (observational data). A key ingredient that makes this situation non-trivial is that we allow the observed agent to act on hidden information that the learning agent does not observe. We then ask the following questions: can the online and offline experiences be safely combined for learning a causal model? And can we expect the offline experiences to improve the agent's performance? To answer these questions, we import ideas from the well-established causal framework of do-calculus, and we express model-based reinforcement learning as a causal inference problem.
When: 22nd of February 2022 - 11am (EST)
More: Maxime's Website

Alexandre Florio

Polytechnique Montreal

Large-Scale Dynamic Vehicle Routing Problems with Stochastic Requests

Dynamic vehicle routing problems (DVRPs) arise in several applications such as technician routing, meal delivery, and parcel shipping. We consider the DVRP with stochastic customer requests (DVRPSR), in which vehicles must be routed dynamically with the goal of maximizing the number of served requests. We model the DVRPSR as a multi-stage optimization problem, where the first-stage decision defines route plans for serving scheduled requests. Our main contributions are knapsack-based linear models to accurately approximate the expected reward-to-go, measured as the number of accepted requests, at any state of the stochastic system. These approximations are based on representing each vehicle as a knapsack with a capacity given by the remaining service time available along the vehicle's route. We combine these approximations with optimal acceptance and assignment decision rules and derive efficient, high-performing online scheduling policies. We further leverage good predictions of the expected reward-to-go to design initial route plans that facilitate serving dynamic requests. Computational experiments on very large instances based on a real street network demonstrate the effectiveness of the proposed methods in prescribing high-quality offline route plans and online scheduling decisions. This work is available as a preprint at arXiv:2202.12983, and implementations (and visualizations) are available here.
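The abstract represents each vehicle as a knapsack whose capacity is the remaining service time along its route. The actual contributions are linear approximation models, but the underlying subproblem can be sketched with a standard 0/1 knapsack dynamic program (the numbers are made up for illustration):

```python
def knapsack(capacity, items):
    """0/1 knapsack DP. `items` is a list of (cost, reward) pairs;
    returns the maximum total reward within the capacity."""
    best = [0] * (capacity + 1)
    for cost, reward in items:
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, cost - 1, -1):
            best[c] = max(best[c], best[c - cost] + reward)
    return best[capacity]

# A vehicle with 8 units of remaining service time choosing among requests,
# each with a service-time cost and a reward of one accepted request.
requests = [(3, 1), (4, 1), (5, 1), (2, 1)]
print(knapsack(8, requests))  # at most 2 of these requests fit
```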
When: 7th of April 2022 - 11am (EST)

Kilian Fatras

Mila and McGill

Optimal Transport

Optimal transport distances have found many applications in machine learning thanks to their capacity to compare non-parametric probability distributions. Yet their algorithmic complexity generally prevents their direct use on large-scale datasets. Among the possible strategies to alleviate this issue, practitioners can rely on computing estimates of these distances over minibatches of data. While computationally appealing, we highlight in this talk some limits of this strategy, arguing that it can lead to undesirable smoothing effects. As an alternative, we suggest that the same minibatch strategy coupled with unbalanced optimal transport can yield more robust behaviours. We discuss the associated theoretical properties, such as unbiased estimators, existence of gradients, and concentration bounds. Our experimental study shows that in challenging problems associated with domain adaptation, the use of unbalanced optimal transport leads to significantly better results, competing with or surpassing recent baselines.
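To make the minibatch strategy concrete, here is a small numpy sketch (not the speaker's code; the distributions and batch sizes are made up). It uses the closed-form 1D Wasserstein-1 distance between equally weighted samples; since the distance is jointly convex in its arguments, averaging it over random minibatches is biased upward relative to the full-sample distance:

```python
import numpy as np

rng = np.random.default_rng(0)

def w1_1d(x, y):
    """Exact 1D Wasserstein-1 distance between two equally weighted samples
    of the same size: sort both and average the absolute differences."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

# Two Gaussian samples whose means differ by 0.5.
x = rng.normal(0.0, 1.0, size=1024)
y = rng.normal(0.5, 1.0, size=1024)

full = w1_1d(x, y)

# Minibatch estimate: average the OT distance over random minibatch pairs.
m, n_batches = 32, 200
mb = np.mean([w1_1d(rng.choice(x, m), rng.choice(y, m))
              for _ in range(n_batches)])

print(f"full: {full:.3f}  minibatch: {mb:.3f}")  # minibatch estimate is larger
```

The gap between the two numbers illustrates why the estimator's bias, and alternatives such as the unbalanced variant discussed in the talk, matter in practice.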
When: 7th of April 2022 - 11am (EST)
More: Kilian's Website

Alfredo Torrico

Polytechnique Montreal

Preserving diversity when partitioning: A geometric approach

Diversity plays a crucial role in team formation, in the representation of minority groups, and generally when allocating resources fairly. Given a community composed of individuals of different types, we study the problem of dividing this community so that global diversity is preserved as much as possible in each subgroup. We consider the diversity metric introduced by Simpson in his influential work, which, roughly speaking, corresponds to the inverse of the probability that two individuals drawn at random, with replacement, are of the same type. We provide a novel perspective by reinterpreting this quantity in geometric terms. We characterize the instances in which the optimal partition exactly preserves the global diversity in each subgroup. When this is not possible, we provide a polynomial-time algorithm that outputs an optimal partition for the problem with two types. Finally, we discuss further challenges and open questions for the problem with more than two types.

When: 13th of May 2022 - 11am (EST)
More: Alfredo's Website

Dounia Lakhmiri

Polytechnique Montreal

A Stochastic Proximal Method for Non-smooth Regularized Finite Sum Optimization

We consider the problem of training a deep neural network with non-smooth regularization to retrieve a sparse and efficient sub-structure. Our regularizer is only assumed to be lower semi-continuous and prox-bounded. We combine an adaptive quadratic regularization approach with proximal stochastic gradient principles to derive a new solver, called SR2. Our experiments on networks trained on CIFAR-10 and CIFAR-100 with L1 and L0 regularization show that SR2 achieves higher sparsity than other proximal methods, such as ProxGEN and ProxSGD, while maintaining satisfactory accuracy.
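This is not SR2 itself, but the proximal stochastic gradient building block it extends can be sketched in a few lines: a gradient step on the smooth loss followed by the L1 proximal operator (soft-thresholding), which drives small coordinates exactly to zero (the values below are illustrative):

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t * ||.||_1: soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_step(w, grad, lr, lam):
    """One step of proximal (stochastic) gradient for f(w) + lam * ||w||_1."""
    return prox_l1(w - lr * grad, lr * lam)

# With threshold lr * lam = 0.05, the two small coordinates become exactly 0.
w = np.array([0.30, -0.02, 0.50, 0.01])
g = np.zeros_like(w)  # pretend the smooth-loss gradient vanishes here
w_new = proximal_gradient_step(w, g, lr=0.1, lam=0.5)
print(w_new)  # [0.25, 0.0, 0.45, 0.0]
```

This exact zeroing is what produces the sparsity that methods in this family, including SR2, aim for.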

When: 20th of May 2022 - 11am (EST)

Alexandre Forel

Polytechnique Montreal

Robust counterfactual explanations for random forests

Counterfactual explanations describe how to modify a feature vector to flip the outcome of a trained classifier. Several methods have been proposed to generate counterfactual explanations, but their robustness when the classifier is re-trained has not been studied so far. Our goal is to obtain counterfactual explanations for random forests that are robust to algorithmic uncertainty. We study the link between the robustness of ensemble models and that of their base learners and formulate a chance-constrained optimization model. We provide statistical guarantees for random forests of stumps and develop a practical method with good performance. We show that existing naive and plausibility-based methods provide surprisingly low robustness. Our method achieves the best trade-off between robustness and the distance of counterfactual explanations to the initial observation.

When: 20th of May 2022 - 11am (EST)

Gauthier Gidel

Université de Montréal

Extragradient Methods: O(1/K) Last-Iterate Convergence for Monotone Variational Inequalities

The Extragradient [Korpelevich, 1976] and the Past Extragradient [Popov, 1980] methods are among the most popular methods for solving saddle point and variational inequality problems (VIPs). Recently, they have seen a surge of interest with the emergence of variational inequality formulations in machine learning, such as Generative Adversarial Networks. Despite their long history and significant attention in the optimization community, important open problems remain about their convergence in the monotone setting. In this talk, we present how we resolved some of these open questions and derived the first last-iterate O(1/K) convergence rates for monotone and Lipschitz VIPs without any additional assumptions on the operator. The central part of our analysis is based on Performance Estimation Problems and computer-aided proofs. In the presentation, I will pay special attention to this approach and explain the main non-trivial issues we faced in obtaining the final proofs via numerical computations.
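As background (a toy sketch, not the analysis in the talk), the extragradient update can be contrasted with plain gradient descent-ascent on the bilinear saddle point min_x max_y xy, whose associated monotone operator is F(x, y) = (y, -x); the extrapolation step is what makes the iterates converge:

```python
import numpy as np

def F(z):
    """Operator of the bilinear saddle point min_x max_y x*y; solution is 0."""
    x, y = z
    return np.array([y, -x])

def run(step_fn, z0, gamma=0.1, iters=200):
    """Iterate a step rule and return the final distance to the solution."""
    z = np.array(z0, dtype=float)
    for _ in range(iters):
        z = step_fn(z, gamma)
    return np.linalg.norm(z)

def gda_step(z, gamma):
    """Plain gradient descent-ascent: spirals away on this problem."""
    return z - gamma * F(z)

def extragradient_step(z, gamma):
    """Extragradient: extrapolate first, then update from the original point."""
    z_half = z - gamma * F(z)
    return z - gamma * F(z_half)

print(run(gda_step, [1.0, 1.0]))           # norm grows
print(run(extragradient_step, [1.0, 1.0])) # norm shrinks toward 0
```

The extrapolation evaluates the operator at a look-ahead point, which cancels the rotation that makes plain descent-ascent diverge on bilinear problems.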
When: 2nd of June 2022 - 11am (EST)

Emma Frejinger

Université de Montréal

Fast Continuous and Integer L-shaped Heuristics Through Supervised Learning

We propose a methodology at the nexus of operations research and machine learning (ML) that leverages generic approximators available from ML to accelerate the solution of mixed-integer linear two-stage stochastic programs. We aim to solve problems where the second stage is highly demanding. Our core idea is to gain large reductions in online solution time, while incurring small reductions in first-stage solution accuracy, by substituting the exact second-stage solutions with fast, yet accurate, supervised ML predictions. This upfront investment in ML is justified when similar problems are solved repeatedly over time, for example, in transport planning related to fleet management, routing, and container yard management. Our numerical results focus on the problem class seminally addressed with the integer and continuous L-shaped cuts. Our extensive empirical analysis is grounded in standardized families of problems derived from the stochastic server location (SSLP) and stochastic multi-knapsack (SMKP) problems available in the literature. The proposed method can solve the hardest instances of SSLP in less than 9% of the time it takes the state-of-the-art exact method, and for SMKP the same figure is 20%. Average optimality gaps are in most cases below 0.1%.
When: 9th of June 2022 - 11am (EST)

Julien Ferry

LAAS-CNRS (France)

Operational Research for Fairness, Privacy and Interpretability in Machine Learning: Leveraging ILP to Learn Optimal Fair Rule Lists

Fairness, interpretability, and privacy are important fields for the development of responsible AI. While these three topics are often considered separately, their simultaneous application also seems desirable. The objective of my PhD is to study their interactions, leveraging tools from operations research and combinatorial optimization. In the first part of my talk, I will provide an overview of my past and current work at the frontiers of these topics. In the second part, I will present my work on the use of Integer Linear Programming to learn interpretable, fair, and optimal models.
When: 7th of July 2022 - 11am (EST)
Where: Pavillon André-Aisenstadt (Université de Montréal), 5th floor, CERC Conference Room (5496)
More: Julien's Website

Natasja Sluijk

Eindhoven University of Technology

Dynamic Fleet Size and Order Assignment Problem

The dynamic fleet size and order assignment problem (DFSOAP) is a canonical version of a real-world problem faced by a logistics service provider. Each day, the company has to decide on the assignment of newly arrived orders to delivery days, as well as on the number of additional vehicles to rent for each day in the planning horizon. When making these decisions, one has to trade off increasing rental costs against decreasing demand uncertainty. We formalize the problem and formulate a sequential stochastic optimization model for DFSOAP. We propose four algorithms (policies) that, given a state, return a decision for DFSOAP. Preliminary computational results on a set of instances show the benefit of assigning multiple orders simultaneously, rather than one at a time.
When: 21st of July 2022 - 11am (EST)
Where: Pavillon André-Aisenstadt (Université de Montréal), 5th floor, CIRRELT Conference Room (5441)

Sitao Luan


On Addressing the Limitations of Graph Convolutional Networks

Many real-world tasks can be modeled as graphs. Recently, graph convolutional network (GCN) based approaches have achieved significant progress in solving large, complex, graph-structured problems. Despite their high expressive power, GCNs still suffer from several difficulties: e.g., the over-smoothing problem prevents deep GCNs from sufficiently exploiting multi-scale information, and the heterophily problem makes graph-aware models underperform graph-agnostic models. In this presentation, I will summarize these challenges, propose some methods to address them, and put forward several research problems we are currently investigating.
When: 21st of July 2022 - 11am (EST)
Where: Pavillon André-Aisenstadt (Université de Montréal), 5th floor, CIRRELT Conference Room (5441)


We are the Canada Excellence Research Chair in Data Science for Decision Making (DS4DM) at Polytechnique Montréal. We focus on Optimization, Machine Learning, and Game Theory, and often do research at the interface of these areas.


2021-2022: Gabriele Dragotto and Federico Bobbio
2022- : Federico Bobbio, Léa Ricard and Defeng Liu

  • Address

    Pavillon André-Aisenstadt
    2920, Chemin de la Tour,
    Montréal, QC Canada
  • Contacts

    federico (dot) bobbio (at) umontreal (dot) ca
    defeng (dot) liu (at) polymtl (dot) ca