Dr. Muhammad Umer, PhD
Computer Science & Engineering
Gregory Olsen College of Engineering & Science
Metropolitan Campus
Fairleigh Dickinson University
Project Title
Explainable and Trustworthy Continual Learning
Project Details
Principal Investigator
Dr. Muhammad Umer, PhD
Computer Science & Engineering
Fairleigh Dickinson University
External Partner(s)
Dr. Ravi Prakash Ramachandran, PhD
Electrical and Computer Engineering
Rowan University
Project Period
February 1, 2026, to January 31, 2027
Project Description
This project explores how artificial intelligence (AI) systems can learn continuously over time while remaining transparent, stable, and trustworthy. In real-world settings, such as healthcare, autonomous vehicles, or evolving online environments, data changes rapidly, and AI models must adapt without losing what they previously learned. Current systems often fail at this, forgetting old information when exposed to new tasks, which limits safe and reliable deployment.

Our work focuses on continual learning, a growing research area designed to address this problem. Unlike traditional machine learning, continual learning enables AI models to accumulate knowledge across tasks. However, most existing approaches prioritize accuracy alone, leaving a critical gap: the need for models whose internal reasoning is understandable and whose behavior remains consistent over time.

The Seed Grant will support early-stage development of methods that combine continual learning with explainability tools. These funds will allow us to benchmark existing algorithms, build prototypes that track how model features evolve during learning, and train undergraduate researchers. This foundational effort prepares us for larger external funding opportunities.
Problem Addressed
Artificial intelligence systems deployed in real-world environments must be capable of adapting to new data while retaining earlier knowledge. Yet most machine learning models cannot learn sequentially without overwriting prior information—a problem known as catastrophic forgetting. This limits the long-term reliability of AI systems and poses serious risks for industries that require stable performance.
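Catastrophic forgetting is easy to reproduce even at toy scale. As an illustration (not part of the funded work), the sketch below trains a one-parameter logistic model sequentially on two conflicting tasks; accuracy on the first task collapses once training on the second begins:

```python
import math

def train(w, data, lr=0.5, epochs=50):
    """Plain SGD on log-loss for a one-weight logistic model; returns the updated weight."""
    for _ in range(epochs):
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-w * x))  # sigmoid prediction
            w += lr * (y - p) * x               # gradient step toward the current task
    return w

def accuracy(w, data):
    return sum(((1.0 / (1.0 + math.exp(-w * x)) > 0.5) == (y == 1))
               for x, y in data) / len(data)

task_a = [(1, 1), (2, 1), (-1, 0), (-2, 0)]  # Task A: positive x -> class 1
task_b = [(x, 1 - y) for x, y in task_a]     # Task B: the labels are flipped

w = train(0.0, task_a)
acc_a_before = accuracy(w, task_a)  # the model masters Task A
w = train(w, task_b)                # sequential training on Task B...
acc_a_after = accuracy(w, task_a)   # ...overwrites everything learned on Task A
print(acc_a_before, acc_a_after)
```

Because the single weight can only encode one decision rule at a time, fitting Task B necessarily destroys the Task A solution; the same dynamic, spread across millions of parameters, underlies forgetting in deep networks.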
A second challenge is that many AI models behave like “black boxes,” making it difficult to understand how decisions are formed or how internal features change as the model encounters new tasks. This lack of transparency prevents adoption in safety-critical domains, where accountability and interpretability are essential. Our project tackles these two interconnected gaps: adaptability and explainability.
Who Will Benefit
This project will benefit researchers and engineers developing adaptive AI systems by providing new methodologies that improve knowledge retention and transparency. Industrial sectors that rely on adaptive, safe, and transparent AI technologies—such as healthcare, autonomous systems, cybersecurity, and defense—will benefit from the resulting trustworthy adaptive algorithms.
Fairleigh Dickinson University and its students will also benefit directly. Undergraduates will participate in hands-on machine learning research, learning skills aligned with modern AI careers. The project strengthens FDU’s research profile, deepens collaboration with Rowan University, and helps position the institution for future external grants.
Goals During the Grant Period
During the seed funding period, we aim to design and test initial prototypes of explainable continual learning systems. This includes benchmarking state-of-the-art algorithms, integrating explainability techniques such as saliency maps and SHAP, and analyzing how model representations change over time. By the end of the grant, we expect to establish validated metrics of trustworthiness, generate comparative experimental results, and prepare at least one peer-reviewed publication. We also expect to train undergraduate researchers, produce open-source code, and generate preliminary findings needed for future NSF-level submissions.
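One candidate metric for "how model representations change over time" is linear Centered Kernel Alignment (CKA), which compares feature matrices extracted from the same layer before and after training on a new task. The sketch below is illustrative only (the feature matrices are random stand-ins for real layer activations), not a committed design:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two feature matrices of shape (n_samples, n_features).
    Returns 1.0 for identical representations (up to orthogonal rotation and
    isotropic scaling) and values close to 0 for unrelated ones."""
    X = X - X.mean(axis=0)  # center each feature dimension
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X) ** 2                          # squared Frobenius norm
    den = np.linalg.norm(X.T @ X) * np.linalg.norm(Y.T @ Y)
    return num / den

rng = np.random.default_rng(0)
feats_before = rng.standard_normal((100, 32))   # stand-in: activations before new task
feats_unrelated = rng.standard_normal((100, 32))  # stand-in: completely drifted features

cka_same = linear_cka(feats_before, feats_before)      # identical -> 1.0
cka_diff = linear_cka(feats_before, feats_unrelated)   # unrelated -> much lower
print(cka_same, cka_diff)
```

Tracking such a score across task boundaries would give a simple, quantitative signal of representation drift to pair with per-task accuracy.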
Broader Impact
This project contributes to the national and global effort to build responsible and trustworthy AI systems. As AI becomes embedded in everyday infrastructure—from transportation to medicine—adaptability without transparency poses serious societal concerns. Our work ensures that future AI systems can evolve safely while providing insight into how and why they make decisions.

The project lays a foundation for a long-term research agenda at FDU centered on adaptive, explainable AI. It will strengthen interdisciplinary collaboration, create new research opportunities for students, and support future proposals to the NSF, particularly programs related to responsible AI, trustworthy machine learning, and STEM education. Ultimately, this work promotes AI systems that contribute positively and ethically to society.