Dragos Margineantu - Current Research

I am the technical lead of Boeing's research in artificial intelligence and machine learning.

Our external research collaborators include Ben Van Roy (Stanford), Brian Williams (MIT), Jure Leskovec (Stanford), David Forsyth (University of Illinois, Urbana-Champaign), Martin Rinard (MIT), Cheng Soon Ong (CSIRO/Data61), Subhasish Mitra (Stanford), Bernard Ghanem (KAUST), Sanjay Chawla (QCRI), and Richard Nock (CSIRO/Data61).

My work and research interests are in machine learning - addressing the fundamental questions that must be answered to make machine learning approaches usable by non-ML/AI experts.

Lately, I have mostly focused on robust AI and machine learning approaches and algorithms for systems that interact with humans and other automated components, form teams with humans, and make decisions that optimize a global system function. The handling of anomalies and previously unmodeled phenomena (on which the system was not trained) by AI systems is central to my research.

I lead the AI team that is developing a fully autonomous airplane for all phases of flight. We do so without requiring any changes or modifications to the environment in which the airplane operates - i.e., we aim to be as good as or better than human pilots. Technologies that result from our research will contribute to safer flying - even in human-piloted airplanes.

I am also the AI lead for Boeing's team on DARPA's Assured Autonomy program. As part of this program, we drive the research on AI methods and approaches for assuring AI systems.

My team and I are very interested in probabilistic programming. The field offers the right means for combining model-based and data-driven approaches. We have started employing these approaches for anomaly detection tasks and for tasks in which it is genuinely expensive to collect data in regions of the operating space that the deployed system needs to visit (e.g., anomalies in flight).
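To illustrate the model-based, data-driven idea in its simplest form - this is a toy sketch, not our actual tooling, and the sensor values are hypothetical - one can fit a probabilistic model to nominal data and flag observations that are improbable under it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nominal sensor readings; fit a simple Gaussian model.
nominal = rng.normal(loc=120.0, scale=5.0, size=1000)
mu, sigma = nominal.mean(), nominal.std()

def log_likelihood(x, mu, sigma):
    """Log-density of x under the fitted Gaussian model."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def is_anomalous(x, threshold=-8.0):
    """Flag readings whose log-likelihood falls below a threshold."""
    return log_likelihood(x, mu, sigma) < threshold

print(is_anomalous(121.0))  # reading near the nominal regime -> False
print(is_anomalous(180.0))  # reading far outside it -> True
```

A full probabilistic program would replace the single Gaussian with a richer generative model and infer its parameters from data, but the scoring-by-likelihood step stays the same.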

Together with internal and external research collaborators (listed above), we have started looking at the novel research questions raised by embedding machine learning into complex systems for deployment. I am particularly interested in human-machine embedded systems for anomaly detection, change detection, human intent recognition, real-time decisions, and sequential decisions.

Inverse reinforcement learning (IRL) techniques offer (1) good means for studying human and computer decision making, along with elegant practical solutions for intent recognition and anomalous action detection, and (2) solutions for predicting the state of the environment in autonomous systems. My Boeing colleagues and I have developed several interactive IRL-based solutions for explainable anomalous action detection and for intent analysis.

About half of my time is dedicated to learning methods for practical machine learning solutions for temporal events and time series data. I also work on machine learning methods for object detection in images, especially from small data. I am mainly focused on methods that learn from and interact with users and systems seamlessly, and that therefore perform feature construction and make decisions based on robustness and safety requirements. I served as the Boeing PI of DARPA's Bootstrapped Learning program, where my colleagues and I focused on learning efficiently from small samples, semi-supervised learning, and inverse planning.
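The anomalous-action-detection idea can be conveyed with a toy sketch. This is not IRL proper (which would recover a reward function explaining the demonstrations); it is a simpler behavior-model baseline, with made-up states and actions, that flags actions rarely or never taken by demonstrators in a given state:

```python
from collections import Counter, defaultdict

# Hypothetical demonstrations: (state, action) pairs from nominal operators.
demos = [("cruise", "hold_alt"), ("cruise", "hold_alt"), ("cruise", "trim"),
         ("descent", "reduce_thrust"), ("descent", "reduce_thrust"),
         ("descent", "extend_flaps")]

# Empirical behavior model: P(action | state) from the demonstrations.
counts = defaultdict(Counter)
for state, action in demos:
    counts[state][action] += 1

def action_prob(state, action):
    """Empirical probability of an action in a state."""
    total = sum(counts[state].values())
    return counts[state][action] / total if total else 0.0

def is_anomalous_action(state, action, eps=0.05):
    """Flag actions with near-zero probability under the behavior model."""
    return action_prob(state, action) < eps

print(is_anomalous_action("cruise", "hold_alt"))      # typical action -> False
print(is_anomalous_action("cruise", "extend_flaps"))  # never demonstrated -> True
```

An IRL-based detector would instead score observed actions against the reward function inferred from the demonstrations, which generalizes to states never seen in the demonstrations.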

We are all aware that most predictions should ultimately lead to decisions and actions, and those decisions and actions have costs and risks associated with them. Therefore, our approaches focus on learning and decision-making techniques that deal with costs, budgets, and risks (typically non-uniform functions). Cost-sensitive learning, active learning, and hierarchical learning are typically required in any practical application of machine learning algorithms.
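The core of cost-sensitive decision making can be shown in a few lines. In this minimal sketch (the cost matrix, class names, and actions are all hypothetical), the system picks the action minimizing expected cost rather than simply predicting the most probable class:

```python
import numpy as np

# Hypothetical cost matrix: rows = true condition (nominal, fault),
# columns = action (continue, inspect). Costs are non-uniform:
# missing a fault is far costlier than an unnecessary inspection.
costs = np.array([[0.0,   10.0],    # true nominal: continuing is free
                  [500.0,  5.0]])   # true fault: continuing is very costly

def best_action(p_fault):
    """Pick the action with minimum expected cost given P(fault)."""
    p = np.array([1.0 - p_fault, p_fault])
    expected = p @ costs          # expected cost of each action
    return ["continue", "inspect"][int(np.argmin(expected))]

print(best_action(0.01))  # low fault probability -> "continue"
print(best_action(0.10))  # higher fault probability -> "inspect"
```

Note that "inspect" wins even at a 10% fault probability: with these costs the decision threshold sits near 2%, far below the 50% threshold an accuracy-maximizing classifier would implicitly use.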

I am also interested in, and working on, statistical tests and validation methods for decision systems, learned models, and learning algorithms.

General categories of machine learning methods that my colleagues and I have implemented and have experience with: ensemble learning, active learning, semi-supervised learning, clustering, deep learning, inverse reinforcement learning, sequential decision making, and reinforcement learning.

In general, I am interested in listening to and exchanging ideas on learning, methods for scaling up and improving the performance of learning techniques, computational and statistical learning theory, unsupervised learning, reinforcement learning, and game theory.


Dragos Margineantu, 2018.