Research Programme

The projects that PhD students complete in each cohort will focus on a small set of research challenges selected at a scoping workshop by a mix of academics and industry partners. To be selected, each challenge must be:

  • at the cutting edge of ML research;
  • of medium to long-term interest to industry partners;
  • within the expertise of the named supervisor group;
  • broad enough to support multiple PhD projects and so form a mini-cohort.

The results of the first scoping workshop are shown below.

The majority of the research challenges fall under the ML Fundamentals pillar, but challenges are also included under the ML in Society, ML Practice, and ML Applications pillars. This breadth of research is one of the distinguishing characteristics of ML-Labs. A selection of these challenges is elaborated below.

ML Fundamentals – ML for Sequential Data: While the performance of existing techniques for learning from sequential data is promising, their efficiency and interpretability remain critical challenges.
Sample Project Titles: “Discovering hidden signals of life: protein function prediction by Deep Learning” (Gianluca Pollastri, UCD), “Continuous/Adaptive Rule Learning for Scalable Stream Reasoning” (Alessandra Mileo, DCU).

ML Fundamentals – Deep Learning for Computer Vision: Despite recent advances in the use of deep convolutional neural networks for computer vision, many open questions remain: for example, learning from small amounts of data, updating models once they have been trained, and managing the computational cost of deploying models “on the edge”. A brief transfer-learning sketch addressing the small-data question is given below.
Sample Project Titles: “Joint cross-modal embeddings for captioning and interpretable image retrieval” (Kevin McGuinness, DCU), “UAV assisted continuous crop analysis using unsupervised machine learning on RGB images” (Bianca Schoen-Phelan, TUD).
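
One widely used response to the small-data challenge is transfer learning: reusing a network pretrained on a large dataset and training only a small task-specific head. The following is a minimal sketch of that idea in PyTorch/torchvision, not a prescription for any of the projects above; the number of classes and the data loader are illustrative placeholders.

    # Minimal transfer-learning sketch (PyTorch / torchvision assumed available).
    # An ImageNet-pretrained backbone is frozen and only a new classification
    # head is trained, one common way to learn from small amounts of labelled data.
    import torch
    import torch.nn as nn
    import torchvision

    num_classes = 5  # hypothetical number of target classes

    # Load a pretrained backbone and freeze its weights.
    model = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.DEFAULT)
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final layer with a small trainable head for the new task.
    model.fc = nn.Linear(model.fc.in_features, num_classes)

    # Only the new head's parameters are optimised.
    optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # Training loop over a (hypothetical) small labelled dataset `small_loader`.
    # for images, labels in small_loader:
    #     optimiser.zero_grad()
    #     loss = criterion(model(images), labels)
    #     loss.backward()
    #     optimiser.step()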

ML Fundamentals – Explainable ML: Users of ML models face a trade-off between accuracy (typically achieved with complex models) and explainability (typically achieved with simple models). Challenges for explainable ML include objectively quantifying and comparing the quality of the explanations produced by different algorithms. A short sketch of this trade-off is given below.
Sample Project Titles: “Explainable Machine Learning Models for Structured Data” (Georgiana Ifrim, UCD), “Combining Machine Learning and Argumentation Theory to support Explainable Artificial Intelligence” (Luca Longo, TUD).
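
To make the trade-off concrete, the following minimal sketch (scikit-learn on a standard benchmark dataset; the particular models and dataset are illustrative assumptions, not part of the programme) compares a directly interpretable model with a more complex one and attaches a post-hoc, model-agnostic explanation to the latter. Measuring how faithful and useful such explanations are is exactly the kind of open question this theme targets.

    # Minimal sketch of the accuracy/explainability trade-off using scikit-learn.
    # A simple, directly interpretable model is compared with a more complex one,
    # and a model-agnostic post-hoc explanation (permutation importance) is
    # computed for the complex model.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Simple model: coefficients can be read directly as an explanation.
    simple = LogisticRegression(max_iter=5000).fit(X_train, y_train)

    # Complex model: usually more accurate, but opaque.
    complex_model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    print("simple accuracy: ", simple.score(X_test, y_test))
    print("complex accuracy:", complex_model.score(X_test, y_test))

    # Post-hoc, model-agnostic explanation for the complex model.
    result = permutation_importance(complex_model, X_test, y_test, n_repeats=10, random_state=0)
    print("largest mean feature importance:", result.importances_mean.max())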

ML Fundamentals – Human-in-the-loop ML: In human-in-the-loop machine learning (HILML), analysts collaborate interactively with machine learning algorithms to guide them towards optimal solutions, yet existing approaches are naive in how they allow humans to provide this guidance. An active-learning sketch of the basic idea is given below.
Sample Project Titles: “Machine Intelligence and Human Agency” (David Coyle, UCD), “The development of deep visualisations for modelling large-scale high dimensional data” (John McAuley, TUD).
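
Active learning is one simple instance of the human-in-the-loop idea: the model decides which examples merit a human's attention, and the human supplies labels or corrections. The sketch below (scikit-learn, with a simulated analyst standing in for a real one; all names and parameters are illustrative assumptions) shows a basic uncertainty-sampling loop of this kind.

    # Minimal human-in-the-loop sketch: pool-based active learning with
    # uncertainty sampling. At each step the model asks the analyst to label the
    # example it is least certain about; here the "analyst" is simulated by an
    # oracle holding the true labels.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)

    # A small initial labelled set containing both classes.
    labelled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
    pool = [i for i in range(500) if i not in labelled]

    model = LogisticRegression(max_iter=1000)
    for step in range(20):
        model.fit(X[labelled], y[labelled])
        # Query the pool example whose predicted probability is closest to 0.5.
        proba = model.predict_proba(X[pool])[:, 1]
        query = pool[int(np.argmin(np.abs(proba - 0.5)))]
        # The (simulated) analyst supplies the label for the queried example.
        labelled.append(query)
        pool.remove(query)

    # Illustrative accuracy over the whole dataset after 20 queries.
    print("accuracy after 20 queries:", model.score(X, y))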

ML Fundamentals – ML in Recommender Systems: While recommender systems based on techniques such as collaborative filtering have been widely deployed on platforms such as Amazon, Digg, and Netflix, these approaches still face many challenges: from the so-called cold-start problem to the natural sparsity of user-item rating data, as well as their lack of transparency and explainability. Machine learning techniques are being adopted in recommender systems research to address these issues, for example to tackle the cold-start problem in music recommendation and to automatically generate textual explanations. A brief collaborative-filtering sketch is given below.
Sample Project Titles: “Applying recommender systems techniques to support physical exercise” (Barry Smyth, UCD), “Intelligent Conversational Recommender System” (Ruihai Dong, UCD).
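
To make the sparsity and cold-start issues concrete, the following minimal sketch (plain NumPy; the tiny rating matrix and hyperparameters are illustrative assumptions) factorises a sparse user-item rating matrix into latent user and item factors, the core idea behind many collaborative-filtering recommenders. A user or item with no observed ratings provides no signal from which to learn a factor, which is precisely the cold-start problem.

    # Minimal collaborative-filtering sketch: factorise a sparse user-item rating
    # matrix into low-dimensional user and item factors by gradient descent.
    # Missing ratings (np.nan) illustrate sparsity.
    import numpy as np

    rng = np.random.default_rng(0)
    ratings = np.array([
        [5.0, 3.0, np.nan, 1.0],
        [4.0, np.nan, np.nan, 1.0],
        [1.0, 1.0, np.nan, 5.0],
        [np.nan, 1.0, 5.0, 4.0],
    ])
    n_users, n_items, k = ratings.shape[0], ratings.shape[1], 2

    P = 0.1 * rng.standard_normal((n_users, k))   # user factors
    Q = 0.1 * rng.standard_normal((n_items, k))   # item factors
    observed = [(u, i) for u in range(n_users) for i in range(n_items)
                if not np.isnan(ratings[u, i])]

    lr, reg = 0.05, 0.02
    for epoch in range(200):
        for u, i in observed:
            err = ratings[u, i] - P[u] @ Q[i]
            p_u = P[u].copy()
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * p_u - reg * Q[i])

    # Predict a missing rating for user 0, item 2.
    print("predicted rating:", P[0] @ Q[2])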

ML Fundamentals – ML in Language: Over the last decade, Machine Learning has dramatically changed the way natural language is analysed and processed, opening the door to huge improvements in tasks ranging from spam detection to emotional speech synthesis. Significant challenges remain, however, including the development of Machine Learning architectures that support deep understanding of natural language in context, the automatic production of structured representations of natural language, and the development of ML architectures that transfer across domains and even languages, to name but a few. This research theme will pursue the fundamental ML research needed to support a new generation of ML-driven Natural Language Technology.
Sample Project Titles: “What lies beneath: what can deep learning tell us about how humans process speech?” (Julie Carson-Berndsen, UCD), “Vector space representations for natural language processing” (Jennifer Foster, DCU).

ML in Society – Bias, Fairness, and Ethics in ML: The emerging field of critical data studies emphasises values in ML, including openness, equity, transparency, and trust. Interrogating algorithms has become more difficult, however, as corporate interests exert an increasing hold over them and the technologies evolve faster than the means to critique them.
Sample Project Titles: “Identifying and eliminating systematic biases in statistical machine-learning techniques” (Fintan Costello, UCD), “Data Ethics in Corporate Environments” (Kalpana Shankar, UCD).

ML Practice – Data Architectures for ML: Novel data architectures are required to handle massive volumes of data (e.g. TensorFlow), as are platforms optimised to minimise memory footprint and power consumption on modest devices (e.g. CoreML). A brief weight-quantisation sketch illustrating the latter is given below.
Sample Project Titles: “Dynamic IT Architecture optimization in data-rich environments” (Marcus Helfert, DCU), “Embedded machine learning for Individualised anomaly detection in wearable IoT devices” (John Deepu, UCD).
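
As a small illustration of the kind of optimisation involved, the sketch below (plain NumPy; the weight matrix stands in for a real trained layer, and frameworks such as CoreML or TensorFlow Lite provide production implementations) applies post-training 8-bit weight quantisation, trading a small amount of precision for a roughly four-fold reduction in memory footprint.

    # Minimal sketch of post-training weight quantisation, one common technique
    # for shrinking a model's memory footprint for deployment on modest devices.
    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.standard_normal((256, 256)).astype(np.float32)  # stand-in layer

    # Symmetric 8-bit quantisation: store int8 values plus one float scale.
    scale = np.abs(weights).max() / 127.0
    quantised = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

    # Dequantise at inference time.
    restored = quantised.astype(np.float32) * scale

    print("memory (float32):", weights.nbytes, "bytes")
    print("memory (int8):   ", quantised.nbytes, "bytes")   # roughly 4x smaller
    print("max abs error:   ", np.abs(weights - restored).max())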

ML Applications: Research projects focused on the application of machine learning to specific domains, driven by the interests of the ML-Labs industry partner group.
Sample Project Titles: “Expediting Digital Forensics through Automated Evidence Analysis” (Mark Scanlon, UCD), “Clustering of tumour proteomic profiling data” (Colm Ryan, UCD).