
Senior Projects 2024

Advisory BOT: A Smart Academic Assistant Streamlining Advising Services at Qatar University

Project ID = F24SDP 01 CS F

Supervisor: Rehab Duwairi

Fajar AL-Hajri, Maha Al-Bazeli, Olla Deyab, Saja Abdelmalik

Getting proper guidance at university can be frustrating, as traditional advising systems are often slow and ineffective. Many students cannot get answers to their questions within a reasonable time, which delays critical academic choices. To address these challenges, we created an advisory chatbot that makes academic advising quick and easy for students. The chatbot, developed with Python, Groq Cloud, and SQLite, gives students the help they need in a timely and engaging manner. It can respond to inquiries, provide recommendations, and save chat interactions so that students and advisors can refer to them later. This facilitates the management of outstanding issues and ensures effective and timely contact between students and their advisors. For more advanced questions, the chatbot directs students to human consultants. The chatbot design is open, so new and improved updates will be possible. With this initiative, our focus was on enhancing the academic advising service and making it easier for students to use. The chatbot enables students to find information quickly, helps advisors organize their activities, and offers more targeted assistance. Furthermore, the platform allows service providers to reach students who may find their services relevant. Overall, this project transforms advising into a smooth, stress-free experience, making it easier for students to focus on their education.
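As an illustration of how chat interactions could be persisted in SQLite for later review by students and advisors, here is a minimal sketch using Python's built-in sqlite3 module; the table name, schema, and example messages are assumptions for illustration, not the team's actual design:

```python
import sqlite3

def init_db(path=":memory:"):
    # Hypothetical schema: one row per chat turn, tagged by student and role.
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS chat_log (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        student_id TEXT NOT NULL,
        role TEXT NOT NULL,          -- 'student' or 'bot'
        message TEXT NOT NULL,
        ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP)""")
    return conn

def save_turn(conn, student_id, role, message):
    # Parameterized insert: one call per message in the conversation.
    conn.execute("INSERT INTO chat_log (student_id, role, message) VALUES (?, ?, ?)",
                 (student_id, role, message))
    conn.commit()

def history(conn, student_id):
    # Full transcript for one student, in conversation order.
    rows = conn.execute("SELECT role, message FROM chat_log WHERE student_id = ? ORDER BY id",
                        (student_id,))
    return rows.fetchall()

conn = init_db()
save_turn(conn, "qu123", "student", "Which courses can I take next term?")
save_turn(conn, "qu123", "bot", "Based on your record, CMPS 310 and CMPS 350 are open.")
print(history(conn, "qu123"))
```

Storing each turn individually lets an advisor later reconstruct the full exchange when following up on an outstanding issue.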

Cyynara: A Cybersecurity Awareness Game

Project ID = F24SDP 02 CS F

Supervisor: Khaled Khan

Aisha Abdul Quadir, Arwa AL Sayed, Mais Abushahla, Maria Attili

The increase in cyber threats, particularly social engineering targeting individuals, has highlighted the importance of cybersecurity awareness. Social engineering is one of the most effective ways of exploiting human vulnerabilities, using psychological manipulation to obtain sensitive information, and it contributes to 95% of cyberattacks. This indicates an urgent need to focus on the human aspects of cybersecurity. Adolescents use the internet actively but are not well informed about such threats; hence, this project aims to address the gap in cybersecurity education. Our proposed solution is a 2D interactive, education-based game that raises awareness of several social engineering attacks, including phishing and baiting, by replicating real-life scenarios. The game includes interactive tools such as quizzes, decision-making, and feedback. Moreover, the learning materials are designed to fit the target audience's age group and to be culturally appropriate. The game uses Unity as the game engine, Firebase for user authentication and progress tracking, and a component-based architecture to support modularity and scalability. Key achievements include the development of interactive gameplay with educational value, the creation of a pleasant user interface, and the incorporation of real-life scenarios and threats into the gameplay. In conclusion, this project addresses human vulnerabilities in cybersecurity through gamified education. This solution could be adapted into school curricula to help students gain confidence in their ability to identify social engineering attempts, improving their safety both personally and professionally.

BabAIlon: An AI Language-Learning Application with Personalized and Emotionally Intelligent 3D Avatars

Project ID = F24SDP 03 CS F

Supervisor: Osama Halabi

Hanan Rashid, Islam Hamdi, Leina Elsheiri, Rain Alkai, Israa Demdoum

In an increasingly interconnected world, the ability to speak multiple languages has become invaluable for personal, professional, and cultural interactions. However, many language learners struggle to gain conversational fluency despite years of traditional study, largely due to limited opportunities for realistic, immersive practice. This project presents BabAIlon, an innovative language-learning application designed to bridge this gap by offering a highly interactive, AI-driven environment where users can practice language in real time. Utilizing 3D avatars powered by advanced artificial intelligence, BabAIlon engages users in conversations that feel natural, emotionally connected, and contextually relevant, providing a sense of companionship and support that is often missing from traditional language-learning platforms. Aligned with Qatar National Vision 2030, which emphasizes advancements in education and technology, BabAIlon leverages the country's commitment to innovation to create a learning tool that meets the needs of modern learners. The application integrates advanced speech recognition, natural language processing (NLP), and emotional intelligence to provide nuanced feedback on pronunciation, grammar, word choice, and cultural context. By adjusting to the learner's proficiency and responding with tailored suggestions, BabAIlon fosters an environment of continuous growth and confidence-building. Through dynamic, interactive dialogues, authentic conversational practice, and emotional engagement, BabAIlon aims to make language learning both effective and enjoyable, offering a unique platform that meets the diverse needs of modern language learners.

Nusmi3uk: An Arabic Sign Language System

Project ID = F24SDP 04 CS F

Supervisor: Mohammad Saleh

FatemaElzahraa Elrotel, Hams Gelban, Rouaa Naim, Sara Said

"Nusmi3uk" is a transformative web-based platform specifically designed to bridge the communication gap between the Deaf and hearing communities by facilitating the translation of spoken and written Arabic into Arabic Sign Language (ArSL) and vice versa. By leveraging advanced machine learning technologies, including Bidirectional Long Short-Term Memory (BiLSTM) networks, our system provides precise, real-time translation of ArSL, thereby enhancing communication in critical areas such as education, healthcare, and various social sectors. This innovation serves over 20 million people in the Arab world who suffer from disabling hearing loss [1], significantly improving their daily interactions and access to essential services. One of the key technical milestones of "Nusmi3uk" is the development of an advanced real-time ArSL recognition algorithm, which has achieved an accuracy rate of 98%. This was made possible by employing the most extensive dataset currently available, which includes 502 distinct ArSL signs. Each sign was captured by three different signers and later annotated with detailed body landmark data using Google's MediaPipe technology. This rich dataset also enables our 3D avatars, rendered using Three.js, to execute sign language gestures realistically, closely mimicking human movements. The dual translation capability of our system is another significant feature. It is powered by text-glossing algorithms that adapt written Arabic into the syntactical format of ArSL. This adapted text is then brought to life through animations using ReadyPlayerMe avatars, providing users with an engaging and accurate representation of sign language. The user interface of "Nusmi3uk" is intentionally designed to be intuitive and easy to navigate while retaining the complexity and robustness of the underlying technological framework. This ensures a seamless communication experience for all users, regardless of their technical expertise.
Looking ahead, "Nusmi3uk" plans to expand its accessibility by integrating into web extensions, enhancing user engagement. Future enhancements will focus on improving avatar animations to achieve even more lifelike and expressive sign language interpretations, thereby setting new standards in assistive communication technologies. Our ongoing efforts aim to not only support daily communication for individuals with hearing impairments but also to foster their active and full participation in societal activities.
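Before landmark sequences reach a recognition model such as a BiLSTM, they are typically normalized so the classifier sees position- and scale-invariant features. The exact preprocessing used by "Nusmi3uk" is not specified, so the following is a sketch of one plausible normalization step over MediaPipe-style hand landmarks (21 points per frame, 3 coordinates each):

```python
import numpy as np

def normalize_landmarks(frames):
    """frames: (T, 21, 3) array of MediaPipe hand landmarks per video frame.
    Re-centres each frame on the wrist (landmark 0) and scales by the
    hand size, so features are invariant to where the hand sits in view."""
    frames = np.asarray(frames, dtype=float)
    centred = frames - frames[:, :1, :]                     # wrist at origin
    scale = np.linalg.norm(centred, axis=2).max(axis=1, keepdims=True)
    scale = np.where(scale == 0, 1.0, scale)                # guard empty frames
    return centred / scale[:, :, None]

# Toy 2-frame clip of random hand poses, standing in for camera input
rng = np.random.default_rng(0)
clip = rng.normal(size=(2, 21, 3))
feats = normalize_landmarks(clip)
print(feats.shape)   # (2, 21, 3); wrist row is all zeros in every frame
```

The normalized `(T, 21, 3)` sequence would then be flattened per frame and fed to the sequence model.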

JobQuest: Daily Employment Platform

Project ID = F24SDP 05 CS F

Supervisor: Saeed Salem

Aljazi Almarri, Deema Elkahlout, Maryam Alyafei, Salsabil Hamade

Due to the increasing demand for flexible, immediate employment, the JobQuest platform was created to match job seekers with employers. The project addresses existing challenges in the traditional employment market by providing a clear and effective way for individuals to search not only for jobs but also for volunteer positions. Given current labor-market trends, it is essential to establish an open and functional channel through which employers and job applicants can communicate about job details. The JobQuest platform is designed with two primary goals: to help employers find staff for transient, daily jobs and to enable job seekers to search for, showcase themselves for, and apply to such vacancies. Employers can upload daily job openings, specifying qualification requirements, wages, and other details. Job seekers can use JobQuest to find and apply for these positions, making the hiring process faster. Furthermore, candidates can showcase their skills and past work experience on the platform, increasing their appeal to potential employers. JobQuest also includes a feature specifically for volunteer opportunities, giving organizations a channel to publicize volunteering openings. Through a simple sign-up process, users interested in contributing to their communities can create value for non-profit organizations while promoting a culture of volunteering. Several implementation strategies are proposed, including a pilot study conducted over a set timeline to let employers and job seekers interact with the platform; the study will evaluate application rates, employer satisfaction, and user feedback to fine-tune the platform's features. The JobQuest platform integrates paid employment and volunteer work into a single solution that addresses the needs of both parties. Its key features aim to disrupt traditional methods of searching for daily employment and volunteer positions, thereby enhancing digital employment opportunities.

qReserve

Project ID = F24SDP 07 CS F

Supervisor: Khaled Khan

Fatima Almohannadi, Kholoud Al-Shafai, Nouf Alkaabi, Reham Alameen

The sheer number of existing booking platforms forces users to visit multiple websites to book various services, such as accommodation, beauty services, car rentals, event venues, and car services. This project tackles the problem by designing and developing 'qReserve', a centralized one-stop platform that facilitates the reservation and booking of several services in one place. qReserve improves the booking experience by simplifying the overall reservation process and minimizing the time a user spends making bookings across different websites. The architectural solution applies progressive web technologies, in particular a layered design that separates the user interface, the business logic, and the database, which boosts the system's reliability and security. The platform combines several services, including a centralized search and filter system, real-time booking and reservation, reminders, and booking history and user tracking. In addition, a 'Request for Partner' option, which has been integrated into the application, enables a user to invite a partner for an intended booking, enhancing the social and collaborative aspects of services or events that involve several people. The system also integrates with third-party applications for payment processing, service booking data, and geolocation availability. Implementing interactive elements such as account set-up, reservation, and payment was important for developing a functional prototype. The time taken to make reservations is lower than with traditional multi-platform reservation systems. Users can utilize qReserve effectively, service providers can maximize their revenue, and there are prospects for enhancements such as extended features and tailored services.

FitMate – Your Partner in Prime

Project ID = F24SDP 08 CS F

Supervisor: Saeed Salem

Fatima Al-Mohanadi, Sharifa Al-Ansari, Shatha Alhazbi, Shamaim Hamid

FitMate is an AI-powered mobile application that provides personalized workout and nutrition tracking. The app addresses the limitations of current fitness apps through real-time exercise form checking to prevent injuries, simple automated food logging, and customized recommendations. Unlike intimidating fitness apps that swamp users with complex terminology and cluttered interfaces, FitMate creates an approachable environment through friendly animations, simple navigation, and comprehensive guidance. FitMate acknowledges that professional fitness coaching is costly and inaccessible for many individuals; it therefore uses AI models to mimic personal coaching and provide an affordable yet effective alternative. The models include Small Language Models (SLMs) for workout recommendations, computer vision for real-time exercise form analysis, and Multimodal Large Language Models (MLLMs) for food recognition. FitMate is built with Flutter using a layered architecture and is designed to stress user privacy and data security while maintaining seamless functionality. Globally, there is demand for a solution that combines workout and nutrition guidance. Locally, market research conducted in Qatar shows the same trend, with 77.9% of survey respondents preferring a combined workout and nutrition app. This reflects growing interest within the Qatari population in accessible fitness technologies that promote healthy lifestyles. The project demonstrates how AI can democratize access to personalized fitness guidance while maintaining user privacy and promoting healthy lifestyle choices.
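Real-time form checking from computer-vision output often reduces to measuring joint angles between detected keypoints. A minimal sketch of that idea, assuming 2D keypoints from a pose estimator and an illustrative squat-depth threshold (not FitMate's actual model or tuned values):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at keypoint b (in degrees) formed by keypoints a-b-c,
    e.g. hip-knee-ankle for squat depth or shoulder-elbow-wrist for curls."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

def squat_deep_enough(hip, knee, ankle, threshold_deg=100.0):
    """Illustrative rule: a rep counts when the knee angle drops below the threshold."""
    return joint_angle(hip, knee, ankle) < threshold_deg

print(joint_angle((0, 1), (0, 0), (1, 0)))   # 90.0: a right angle at the joint
```

Comparing such angles against per-exercise thresholds on every frame is one simple way an app can flag unsafe form before an injury occurs.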

Development of Doctors’ Calling System

Project ID = F24SDP 14 CS F

Supervisor: Wadha Labda

Dina Ayad, Fatma Sharlawe, Najah Alnounou, Sabah Al-Marri

Effective communication between doctors and supervisors in clinical settings is essential to ensure timely training and quality patient care, especially in teaching hospitals where medical trainees require constant supervision. However, the traditional method of calling a supervisor requires doctors to leave their cubicles, remove protective equipment, and find a supervisor on site, which disrupts workflow, delays procedures, and compromises the sterile environment. These inefficiencies not only impact the time-sensitive nature of medical care but also hinder the overall operational productivity of hospitals. Addressing this issue is critical to creating a smoother and more efficient clinical workflow that benefits both doctors and patients. The solution we propose is a web-based system designed to simplify communication between doctors and supervisors. The system allows doctors to send instant supervision requests directly from their workstations, providing supervisors with basic details such as the doctor's name, booth number, and the nature of the request. Supervisors receive instant notifications through the system, allowing them to respond quickly without the delays caused by manual communication methods. The system also maintains a detailed log of all interactions, promoting transparency, accountability, and data analysis for future improvements. With a user-friendly interface and real-time notifications, the system reduces workflow disruption and improves efficiency. One of the most important achievements of this project is the successful implementation of the notification system, which ensures that supervisors are alerted immediately when a doctor submits a request. This prevents the delays associated with in-person communication. The system also improves clinical efficiency by allowing doctors to remain in their cubicles, maintaining sterile conditions and reducing workflow interruptions.
Furthermore, detailed interaction logs provide valuable insights for hospital management, enabling better oversight and process optimization. The project shows that technology-based solutions can significantly improve internal communication in clinical environments and ultimately support better patient outcomes. Overall, the project addresses serious inefficiencies in clinical operations by introducing practical and effective communication tools. By streamlining doctor-supervisor interactions, the system reduces delays, increases workflow efficiency, and supports better decision-making, making it a significant innovation in today's healthcare environment.

Real-Time User Authentication and Gesture Recognition via EMG and Hand Motion Fusion

Project ID = F24SDP 15 CE F

Supervisor: Sumaya Al-Maadeed

Alreem Al-Tamimi, Hamda Al-marri, Sheyma Al-Jaber, Wejdan Al-Mari

This project introduces a multi-purpose system for person identification and hand gesture recognition, combining surface electromyography (EMG) signals and Leap Motion 3D hand tracking to enhance security and usability in applications such as rehabilitation and user verification. Traditional identification methods, such as fingerprint or iris recognition, are increasingly vulnerable to spoofing and security breaches. To overcome these limitations, we propose a non-invasive, multimodal approach that captures both muscle activity and hand gesture patterns using an 8-channel EMG armband and a Leap Motion controller. To evaluate the system, we collected a dataset from 16 users, each of whom performed three distinct gestures (fist, wave, and thumbs up), repeated 10 times each. This resulted in a total of 480 samples, combining both EMG and Leap Motion data for each gesture. We extracted time-domain features from the EMG signals and 3D positional data from the Leap Motion. Support Vector Machine (SVM) classifiers were trained on both modalities independently, as well as through two multimodal fusion approaches: feature-level concatenation and classifier-level probability averaging. The system was evaluated on two key tasks: user identification and gesture recognition. For person identification, both EMG and Leap Motion modalities achieved high accuracy (above 90%), with classifier fusion consistently outperforming individual modalities. Notably, classifier fusion achieved up to 99.38% accuracy for the fist gesture and above 96% across all gestures. These results indicate that both EMG and Leap Motion capture unique personal traits in how users perform gestures. In gesture recognition, performance varied significantly across modalities. The Leap Motion system achieved near-perfect accuracy (99.22%), while the EMG-based system reached 63.41%, consistent with literature benchmarks for cross-user EMG classification.
This disparity reflects the nature of EMG signals, which are more affected by inter-user variability, sensor placement, and physiological differences. Nevertheless, classifier fusion still achieved a strong 94.08% accuracy for gesture recognition, showing that using EMG and Leap Motion together outperforms either modality alone. In conclusion, this system demonstrates the effectiveness of combining EMG and 3D motion data for real-time, secure gesture-based user identification. It offers a scalable and reliable solution that improves classification accuracy, system robustness, and user-specific personalization, marking a promising step toward next-generation identity recognition technologies.
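Classifier-level probability averaging, as described above, can be sketched as follows; the probability matrices here are toy values standing in for the calibrated outputs of the two per-modality SVMs:

```python
import numpy as np

# Hypothetical per-class probabilities from two independently trained SVMs
# (rows: test samples; columns: gesture classes fist, wave, thumbs-up).
p_emg = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3]])
p_leap = np.array([[0.4, 0.5, 0.1],
                   [0.1, 0.2, 0.7]])

def fuse_average(*prob_mats):
    """Classifier-level fusion: average the probability estimates from each
    modality, then pick the most likely class per sample."""
    fused = np.mean(prob_mats, axis=0)
    return fused, fused.argmax(axis=1)

fused, labels = fuse_average(p_emg, p_leap)
print(labels)   # fused decision per sample
```

Averaging lets a confident modality (here, Leap Motion on the second sample) override a noisier one, which is consistent with fusion outperforming EMG alone in the reported results.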

Mind-Driven Environmental Control: Capturing Brain Waves for Locked-In Patient Assistance

Project ID = F24SDP 16 CE F

Supervisor: Hela Chamkhia

Mariam Awad, Maryem Bahnasy, Noora Al-Hajri, Rahaf Eltayeb

The project focuses on the development of an innovative EEG-based communication system designed for individuals with severe physical disabilities, such as those with Locked-In Syndrome (LIS). The main objective is to create a system that allows users to control a device and communicate by detecting and classifying brainwave patterns using an EEG device. This system aims to overcome the communication barriers experienced by individuals with LIS, offering them a non-invasive and efficient means to interact with their environment. The proposed solution uses the MindRove EEG system to capture brain activity and translate it into commands via a machine learning algorithm. It processes EEG data in real time to classify brainwave patterns into predefined actions, enabling interaction with digital devices through integrated hardware and software components. Several challenges were addressed, including optimizing the system for low latency and reliable classification accuracy (85%) to ensure prompt responses. Compatibility with MindRove EEG hardware and integration with other components were key considerations, along with ensuring affordability and adherence to health and safety standards for non-invasive usage. The project delivers a functional prototype that meets the defined technical, economic, and safety constraints. The final design demonstrates the potential to significantly improve the quality of life for individuals with LIS by enabling effective communication through brain-computer interaction. The novelty of this design lies in its ability to offer a real-time, non-invasive, and cost-efficient solution for individuals with severe disabilities, contributing to the broader field of assistive technology. The system's impact extends beyond individual use, as it has the potential to revolutionize accessibility technologies, making them more inclusive and available to those in need.
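One common way to turn EEG into discrete commands is to compare spectral band powers. The sketch below illustrates that idea with NumPy, assuming a 250 Hz sampling rate and a toy alpha-dominance rule; it is not the project's actual MindRove pipeline or classifier:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean spectral power of an EEG channel in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

fs = 250                                    # assumed sampling rate
t = np.arange(fs) / fs                      # one-second analysis window
alpha_wave = np.sin(2 * np.pi * 10 * t)     # strong 10 Hz (alpha) component
beta_wave = 0.2 * np.sin(2 * np.pi * 20 * t)
eeg = alpha_wave + beta_wave                # synthetic single-channel EEG

# Toy rule: alpha dominance over beta maps to one predefined action
command = "select" if band_power(eeg, fs, 8, 12) > band_power(eeg, fs, 13, 30) else "idle"
print(command)   # select
```

A real system would replace the hand-written rule with the trained classifier, but the feature-extraction step is structurally similar.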

The Guardian Helmet: Integrating Smart Technology to Enhance Safety on Construction and Industrial Sites

Project ID = F24SDP 17 CE F

Supervisor: Muhammed Azeem

Aljory Al-Enazi, Deema Al-Tamimi, Lolwa Al-Abdulla, Maryam Al-Mulla

The Guardian Helmet is designed to enhance workers' safety in hazardous environments. The smart helmet has built-in sensors to assess threats on construction sites and other dangerous workplaces. Workers in these environments are at considerable risk of heat-related health problems, so the helmet includes a temperature and humidity sensor to identify possible occupational hazards. For emergencies, the helmet is equipped with an accelerometer that detects falls and triggers an emergency alert, so that emergencies can be resolved quickly. The Guardian Helmet incorporates both GPS and Ultra-Wideband (UWB) technologies following the successful completion of two development phases. In Phase 1, GPS was implemented to provide outdoor location tracking, allowing supervisors to monitor workers' positions in real time and respond swiftly when necessary. In Phase 2, Ultra-Wideband (UWB) technology was added for high-precision indoor positioning. The system leverages the RTLS Controller software provided by Skylab, which manages the UWB anchors and tags, visualizes tag locations, and tracks worker movement within indoor environments. This incremental approach ensures that safety concerns are addressed immediately. With the help of the Blynk app, the helmet collects data on worker status and environment, including falls and location, and sends it to the cloud. Supervisors receive instant notifications on their mobile devices if an emergency arises, so they can respond immediately. The Guardian Helmet provides instantaneous visibility, surrounding-condition monitoring, cloud connectivity, and alerting for more efficient response and safety. Using the information gathered, supervisors can make quick decisions, meeting the urgent need for secure and reliable safety solutions on site.
By embracing new technologies, the Guardian Helmet protects individuals and helps industries that face daily hazards become more proactive in risk management.
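Accelerometer-based fall detection of the kind used to trigger the helmet's emergency alert is often implemented as a free-fall-then-impact heuristic. A minimal sketch with illustrative thresholds (not the helmet's tuned values):

```python
import math

G = 9.81  # standard gravity, m/s^2

def detect_fall(samples, free_fall_g=0.4, impact_g=2.5):
    """Two-stage fall heuristic over (ax, ay, az) accelerometer samples:
    a near-free-fall dip followed by a hard impact. Thresholds are
    illustrative placeholders, not calibrated values."""
    falling = False
    for ax, ay, az in samples:
        mag = math.sqrt(ax**2 + ay**2 + az**2) / G   # magnitude in g
        if mag < free_fall_g:
            falling = True                 # device briefly in free fall
        elif falling and mag > impact_g:
            return True                    # impact right after free fall
    return False

# Toy trace: at rest (~1 g) -> free fall (~0 g) -> impact (~3 g)
trace = [(0, 0, 9.81), (0, 0, 0.5), (0, 0, 29.4)]
print(detect_fall(trace))   # True
```

On a fall, a routine like this would fire the cloud notification so supervisors see the alert and the worker's GPS/UWB position together.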

Smart Glasses for Real-Time Navigation and Obstacle Avoidance for Visually Impaired Users

Project ID = F24SDP 18 CE F

Supervisor: Loay Ismail

Raghad Alasmi, Shaima Nasser, Shahd Soliman, Weam Shehata

This senior design project focuses on improving the mobility and independence of visually impaired individuals through the development of a unique assistive technology. The project, "Smart Glasses for Real-Time Navigation and Obstacle Avoidance," combines cutting-edge features like object recognition, obstacle avoidance, and real-time navigation to make everyday activities safer and more accessible. The smart glasses integrate powerful hardware and software components, including YOLOv8-based object recognition, ultrasonic sensors for detecting obstacles, orientation tracking with the ICM-20948, and an emergency alert system triggered by a physical button or automatic fall detection. The glasses provide real-time feedback by detecting obstacles and delivering immediate audio alerts through a text-to-speech module, helping users navigate their environment safely. The system is designed for reliability in all conditions, using auditory and haptic feedback to ensure usability in noisy environments. Additionally, a voice command interface makes the glasses easy to control, while the emergency alert system sends precise location updates to caregivers in urgent situations. The prototype is built with lightweight, durable materials to ensure comfort for long-term wear and practicality for daily use. The project brings together ideas from multiple disciplines, including computer vision, embedded systems, and user-centered design, to overcome common challenges in assistive technology, such as sensor integration, real-time data processing, and affordability. By integrating engineering, technology, and human-centered design, this project balances functionality, usability, and cost. These smart glasses represent a significant advancement in assistive technology, empowering visually impaired users with greater safety, independence, and confidence.
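Mapping ultrasonic range readings to spoken alerts can be as simple as banded thresholds feeding the text-to-speech module. A sketch with illustrative distance bands (assumptions for illustration, not the project's calibration):

```python
def obstacle_alert(distance_cm):
    """Map an ultrasonic range reading to an alert level for the
    text-to-speech module. Distance bands are illustrative only."""
    if distance_cm < 50:
        return "stop"        # immediate obstacle, urgent audio + haptic cue
    if distance_cm < 150:
        return "caution"     # obstacle approaching, gentler cue
    return None              # path clear, stay silent

# Example readings at three distances
print(obstacle_alert(40), obstacle_alert(100), obstacle_alert(300))
```

Keeping the mapping this simple helps the alert loop stay fast enough for real-time use on embedded hardware.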

High-Resolution Radar Imaging and AI Diagnostics for Early Breast Cancer Detection

Project ID = F24SDP 19 CE F

Supervisor: Sumaya Al-Maadeed

Anwar Al-kurbi, Dana Al-Hajri, Fatima Alyafei, Nadia Al-Rauili

Breast cancer is one of the leading causes of mortality among women worldwide, and early detection is critical for effective treatment and improved survival rates. This project focuses on developing a mmWave radar-based prototype system for non-invasive breast cancer detection. Unlike traditional imaging methods, our system explores the use of radar to identify abnormalities in breast tissue without ionizing radiation or discomfort. We used two different mmWave radar modules for experimentation: Texas Instruments' IWR1843 and Infineon's BGT60TR13C Development Kit. Our objective was to assess the capability of these radars in detecting tumor presence by analyzing signal behavior under various movement conditions: stable positioning, x-axis, y-axis, and zigzag motions. These movement scenarios were tested using tissue-mimicking phantoms to simulate realistic scanning environments and evaluate radar consistency and accuracy. In addition to hardware experimentation, machine learning played a key role in enhancing detection reliability. We trained classification models on diverse mammogram datasets to support automated tumor detection. The models were optimized for accuracy and robustness, achieving a peak classification accuracy of 99.7% in distinguishing malignant from benign samples. Our prototype consists of a non-contact hardware system using mmWave radar for preliminary breast tissue scanning, and a separate machine learning model trained on mammogram datasets to support automated tumor detection. Its portable design makes it suitable as a pre-screening tool, particularly in resource-constrained regions where access to conventional mammography is limited. By integrating mmWave radar technology with intelligent signal analysis, this project highlights a promising approach to transforming breast cancer detection, offering a fast, accurate, and safe alternative to traditional methods in a user-friendly format.

Health Management for Pediatric Asthma: Monitoring and Tracking in Real-Time

Project ID = F24SDP 20 CE F

Supervisor: Khalid Abualsaud

Aljazi Al-kuwari, Aesha Albuainain, Hala Alshamari, Najlaa Al-Marri

Amongst chronic diseases, asthma is the most prevalent in children and, if poorly managed, may result in serious life-threatening events. This project addresses the urgent need for a real-time, continuous asthma-monitoring system for children aged 6-12 years, improving their safety by promptly alerting caregivers. We demonstrate a wearable wristband device that measures and monitors key health metrics: heart rate (HR), oxygen saturation (SpO2), respiratory rate (RR), and airway pressure. From these, the system classifies the child's condition into three stages of severity: normal, moderate, and severe. When it detects high risk, the device notifies caregivers and emergency services with the recorded metrics and the Global Positioning System (GPS) location of the child. The wristband integrates multiple technologies for continuous monitoring. It contains an ESP32 microcontroller with advanced processing features and real-time alert capabilities, uses the MAX30102 sensor to record HR, SpO2, and RR, and acquires the child's location with the NEO-6M GPS module. Data is presented on an OLED screen, while LEDs light up according to the severity level in a way that is user-friendly and intuitive for a child. The device is also designed for comfort and ease of use, enabling children to wear it during daily activities, whether at home, at school, or outdoors. The wristband has a miniature tube attached to an MPX5050 pressure sensor; when a red alert is reached, a prompt appears on the OLED screen asking the child to blow into the tube so the airway pressure can be measured, giving a secondary indication of respiratory functioning. The key achievements of this design include the integration of real-time monitoring, GPS-enabled alerts, and a user-friendly design requiring minimal interaction from the child.
This system differs from conventional asthma-monitoring devices by offering location-based alerts in emergencies. Early detection of and immediate response to asthma episodes could feasibly improve health outcomes and quality of life for pediatric asthma patients. This innovative integration of wearables and analytics to provide real-time health data presents a novel approach to asthma management, reliably equipping caregivers with a proactive tool while increasing safety for young patients.
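The three-stage severity classification could, in its simplest form, be a rule over the measured vitals. The thresholds below are illustrative placeholders only, not clinically validated values from the project:

```python
def classify_severity(spo2, hr, rr):
    """Toy three-stage rule over SpO2 (%), heart rate (bpm), and
    respiratory rate (breaths/min). Thresholds are illustrative
    placeholders, not clinically validated values."""
    if spo2 < 90 or rr > 40:
        return "severe"      # red alert: notify caregivers + emergency services
    if spo2 < 94 or hr > 140 or rr > 30:
        return "moderate"    # amber LED, prompt closer monitoring
    return "normal"          # green LED

print(classify_severity(spo2=97, hr=95, rr=22))   # normal
```

On the device, the "severe" branch is what would trigger the GPS-tagged alert and the airway-pressure prompt on the OLED screen.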

Prosthetic Arm with Neural Interface

Project ID = F24SDP 21 CE F

Supervisor: Loay Ismail

Vasiliki Gerokosta, Yomna Mohamed, Fathima Nasar, Fathima Abdeen

Millions of people worldwide suffer from upper-limb loss, and the complexity of available prosthetic technologies keeps most of them away from these solutions. The present project aims to address this by designing an easy-to-operate, non-invasive Brain-Computer Interface (BCI) prosthetic arm using an Emotiv Insight headset. The system acquires electroencephalogram (EEG) signals that are filtered, preprocessed, classified, and mapped to the movements of a 3D-printed robotic arm with six degrees of freedom. Intensive testing ensures that the prosthetic gives precise control with minimal latency and high reliability. The modular, custom design also greatly reduces production and maintenance costs, making this solution scalable and accessible to diverse communities. The project contributes to progress in neuroprosthetics and provides a sustainable, inexpensive solution that enables independence for people who have lost limbs. By combining state-of-the-art BCI technology with open-source frameworks, it holds immense promise for reshaping the world of assistive devices and contributes toward international goals concerning health and sustainability. The system intuitively controls the prosthetic arm using EMOTIV EEG signal processing, machine-learning algorithms, and a custom software-hardware interface. In the future, this can be developed further into an advanced neural interface system with sophisticated neural-signal decoding and multimodal feedback mechanisms, which could allow even finer motor control and sensory feedback, coming closer to prosthetics that feel and function like biological limbs. It can serve as a model for low-cost, sustainable, and effective innovation in assistive technology within the global effort to improve accessibility and inclusion in health.

SafeAirSense: Wearable Air Quality Monitoring Smart System

Project ID = F24SDP 22 CE F

Supervisor: Armstrong Nhlabatsi

Elham Bastin Takhti, Yomna Mohamed, Zainab Ghadiri, Fatima Al-Bader

Nowadays, air pollution is one of the most serious threats to human health, and its effects are profound. People with breathing problems, such as asthma patients, worry about going to places where the air is polluted and therefore want to know about air quality before travelling to a location. Our project addresses the limitations of existing solutions with a wearable device that monitors air quality and displays the data by location on a map, keeping patients informed in detail about air pollution at those locations. The wearable device is connected to a mobile app and a cloud platform, and it sends notifications to the mobile app. We hope that this device provides quick access to vital air quality information while raising public health awareness and quality of life. By creating this wearable device, we are taking a step towards a society that pays attention to the special needs of all its members, particularly patients, and prioritizes their health and well-being.
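One way the device could turn raw readings into the per-location advisories described above is to bucket them into categories before pushing them to the map and app. The sketch below does this for PM2.5; the thresholds are loosely based on common AQI breakpoints and are assumptions, not the project's actual configuration.

```python
# Illustrative sketch: bucket PM2.5 readings (ug/m3) into advisory
# categories before sending them to the map/mobile app. Thresholds are
# assumptions loosely based on common AQI breakpoints.

PM25_BANDS = [
    (12.0, "Good"),
    (35.4, "Moderate"),
    (55.4, "Unhealthy for sensitive groups"),
    (150.4, "Unhealthy"),
]

def categorize_pm25(value_ugm3: float) -> str:
    """Return an advisory label for a PM2.5 reading."""
    for upper, label in PM25_BANDS:
        if value_ugm3 <= upper:
            return label
    return "Very unhealthy"

def should_alert(value_ugm3: float, sensitive_user: bool = False) -> bool:
    """Sensitive users (e.g. asthma patients) get earlier warnings."""
    limit = 35.4 if sensitive_user else 55.4
    return value_ugm3 > limit
```

Lowering the alert limit for sensitive users matches the abstract's focus on asthma patients who need earlier warnings than the general population.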

MyVoice: A Real-Time Arabic Sign Language Recognition Device for the Deaf and Mute

Project ID = F24SDP 23 CE F

Supervisor: Elias Yaccoub

Aliaa Al-Ajji, Belqes Alshami, Dania Albatarni, Yomna Nasr

This project tackles the communication challenges experienced by the Arabic-speaking deaf and mute community, particularly when engaging with individuals unfamiliar with sign language. The main goal is to create a real-time Arabic Sign Language (ArSL) recognition system that converts hand gestures into text or Arabic speech, thereby simplifying interactions between individuals who are deaf and nonspeaking and those who can hear. The system uses a Raspberry Pi 5, integrated with a Raspberry Pi Camera Module 3, to capture hand gestures on the fly. The software is developed in Python and leverages the MediaPipe library for efficient hand detection. A pre-trained model interprets both Arabic letters and frequently used words, enabling comprehensive gesture recognition. The system runs on Raspberry Pi OS (Raspbian), with development and testing conducted in Thonny. It relies on MediaPipe to extract hand landmarks and accurately predict the corresponding Arabic sign language gestures. Recognized gestures are displayed as text on a connected monitor and also converted into Arabic speech using a text-to-speech module, enhancing accessibility for hearing individuals. The entire system is designed to let users interact contactlessly, pointing at on-screen buttons instead of physically touching them. Additionally, through Wi-Fi connectivity, staff can access a dedicated HTTP-based chat interface hosted on the Raspberry Pi 5. This interface allows them to engage in real-time conversations with the user, viewing the translated Arabic sign language, responding in their preferred language, and having their messages automatically translated into Arabic. The translated reply is then sent to the shared display monitor, ensuring the deaf and mute user can read the response clearly, enabling seamless two-way multilingual communication.
This project aims to deliver an affordable, efficient solution for the Arabic-speaking deaf and mute community, with applications in areas such as hospitals, educational institutions, and public service environments. By offering real-time Arabic sign language translation, the project aspires to bridge communication gaps, foster inclusivity, and enhance interactions across diverse segments of society.
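MediaPipe reports a fixed set of hand landmarks per frame as coordinate pairs; recognition then reduces to matching the landmark layout against known gestures. The sketch below stands in for the project's pre-trained model with a simple nearest-neighbor matcher over wrist-normalized landmarks; the 3-point "hands" and the gesture templates are made up for brevity (real MediaPipe hands have 21 landmarks).

```python
import math

# Sketch of the recognition step, assuming MediaPipe-style output:
# a list of (x, y) hand landmarks per frame, with landmark 0 being the
# wrist. The project uses a pre-trained model; a nearest-neighbor
# matcher stands in for it here. Templates below are hypothetical.

def normalize(landmarks):
    """Translate so the wrist (landmark 0) is the origin."""
    wx, wy = landmarks[0]
    return [(x - wx, y - wy) for x, y in landmarks]

def distance(a, b):
    return math.sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2
                         for (ax, ay), (bx, by) in zip(a, b)))

def classify(landmarks, templates):
    """Return the label of the closest gesture template."""
    norm = normalize(landmarks)
    return min(templates, key=lambda label: distance(norm, templates[label]))

# Hypothetical 3-point "hands" instead of 21 landmarks, for brevity.
templates = {
    "alif": [(0.0, 0.0), (0.0, -1.0), (0.0, -2.0)],  # extended upward
    "ba":   [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)],    # extended sideways
}
frame = [(5.0, 5.0), (5.1, 4.0), (4.9, 3.1)]  # near-vertical hand pose
```

Normalizing to the wrist makes the match invariant to where the hand appears in the camera frame, which is why landmark-based pipelines like this one are robust to hand position.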

NASM: Design and Implementation of a Wearable Tracking and Health Monitoring System for Alzheimer's Patients

Project ID = F24SDP 25 CE F

Supervisor: Khalid Abualsaud

Almaha Al-mohannadi, Aishah Alsaad, Noora Alyafei, Sara Al-malik

Many diseases around the world affect people's health and well-being. One disease with a huge effect on the human brain, and for which no specific therapy yet stops its progression, is Alzheimer's. Alzheimer's causes people to lose memory and experience difficulties in carrying out their routine daily activities. In this project, a wearable assistive device has been designed to assist people with Alzheimer's as well as their caregivers. This is accomplished by building an electronic circuit with a sensor network that measures signals caregivers can use to monitor the health of Alzheimer's patients. The sensors include a temperature sensor for body temperature, a combined heart rate and oxygen sensor that measures beats per minute and blood oxygen level, and a motion sensor that detects abnormal movement. All sensors are connected to and controlled by an ESP32-C3 controller. In addition, the device includes GPS to determine the patient's location; if the patient leaves the specified monitored range or location, the device sends the caregiver a message with the patient's location through the Wi-Fi module built into the ESP32-C3. A scheduled medication reminder is also programmed into the controller, delivered through messages to the Blynk app and sound notifications from a buzzer. In testing, the device captured vital signs: a body temperature within the normal range (36.1°C to 37.2°C), a heart rate of 107 bpm, and an oxygen saturation of 99%, indicating a stable condition.
In addition, both the GPS and notification systems worked properly: location coordinates were determined accurately, and notifications were delivered on time. This device demonstrates a practical way of enhancing patient care, safety, and communication between patients and their caregivers.
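The geofence check described above (alert the caregiver when the patient leaves the monitored range) can be sketched with a great-circle distance between the home position and the latest GPS fix. The real check runs in the ESP32-C3 firmware; this Python version, with an illustrative 200 m radius, only shows the logic.

```python
import math

# Sketch of the geofence logic: compare the latest GPS fix against a
# configured safe zone and produce a caregiver message when the patient
# is outside it. The 200 m radius and message text are illustrative.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geofence_alert(home, fix, radius_m=200.0):
    """Return a caregiver message when the patient leaves the safe radius."""
    d = haversine_m(home[0], home[1], fix[0], fix[1])
    if d > radius_m:
        return f"Patient outside safe zone ({d:.0f} m away) at {fix[0]:.5f},{fix[1]:.5f}"
    return None
```

On the device, the returned message would be pushed over Wi-Fi (e.g. through the Blynk app mentioned above) rather than returned to a caller.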

AquaGuard: Intelligent Drowning Detection and Rescue System

Project ID = F24SDP 26 CE F

Supervisor: Muhammed Azeem

Alshaima Al-Musleh, Shahd Al-Jaber, Meaad Al-karbi, Nusaibah Abduljabbar, Aisha Al-Kaabi

This project presents AquaGuard, a real-time swimmer monitoring and drowning detection system that integrates wearable sensors, AI-driven computer vision, and IoT communication protocols. Swimmers wear wristbands embedded with pressure and heart rate sensors connected to an ESP32 microcontroller, which transmits vital data over Wi-Fi to an MQTT broker for individualized, real-time monitoring. In parallel, a Raspberry Pi processes live video from a Pi-camera using a YOLO-based object detection model. The system employs a dynamic grid-based localization technique to estimate the swimmer’s position within the pool, enabling more precise incident detection and targeted responses. When either the sensor or vision subsystem detects signs of distress, alerts are immediately dispatched to a mobile application built with Flutter. In critical scenarios, the system also triggers an automated net at the pool’s base, which elevates to surface level, facilitating rapid and non-intrusive rescue. By combining real-time vitals monitoring, spatial video analysis, and automated intervention, AquaGuard provides a scalable and robust solution to enhance swimmer safety in both public and private aquatic environments.
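The wristband-to-broker path above can be sketched as a payload builder plus a simple fusion rule that flags distress only when heart rate and submersion pressure stay abnormal together over a window of samples. The MQTT topic layout, thresholds, and window size are assumptions for illustration, not AquaGuard's actual values.

```python
import json, time

# Wristband-side sketch: package vitals into an MQTT-style JSON payload
# and flag distress when low heart rate coincides with sustained
# submersion pressure. Topic name and thresholds are assumptions.

TOPIC = "aquaguard/swimmer/{id}/vitals"   # hypothetical topic layout

def make_payload(swimmer_id, heart_rate, pressure_kpa, ts=None):
    return json.dumps({
        "id": swimmer_id,
        "hr": heart_rate,
        "pressure": pressure_kpa,
        "ts": ts if ts is not None else time.time(),
    })

def is_distress(samples, hr_low=40, depth_kpa=110.0, window=5):
    """Distress only if the last `window` samples all show a low heart
    rate while pressure indicates sustained submersion."""
    recent = samples[-window:]
    if len(recent) < window:
        return False
    return all(s["hr"] < hr_low and s["pressure"] > depth_kpa for s in recent)
```

Requiring both signals over a window is what keeps a single noisy reading, or a normal dive, from triggering the rescue net.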

Tabib: Intelligent Healthcare Chatbot with Vital Signs Detection Using Contactless Sensors

Project ID = F24SDP 28 CE F

Supervisor: Sumaya Al-Maadeed

Aisha Alyafei, Asmaa Alkuwari, Amna Al-Nasr, Moza Al-Kuwari

The aim of this project is to create an intelligent, user-friendly health monitoring system that enables patients to check their vital indicators, particularly their heart rate, from the convenience of their own homes. By utilizing contactless mmWave radar technology and powerful artificial intelligence, the device helps identify early indicators of health problems, minimizes the need for frequent hospital visits, and ensures that doctors receive precise, real-time health updates. An AI-powered medical chatbot is one of the system's most crucial components. This virtual assistant converses with patients, poses tailored health-related queries, offers guidance based on radar data, and notifies physicians of any anomalies. To realize this concept, we collected heart rate data without physical contact using a SeeedStudio microcontroller coupled to a mmWave radar sensor. The system then processes and stores this data both locally and online. Additionally, we made the chatbot compatible with both text and voice, so users of various ages and technological skill levels can utilize it. We compared the radar data with readings from a wearable sensor to verify its accuracy. We mapped out and constructed the system with the aid of flowcharts, diagrams, and architecture models. The results have been positive. Through the chatbot, the system consistently monitored variations in heart rate and promptly sent out alerts. In comparison to conventional wearable technology, the mmWave radar demonstrated high accuracy during testing. According to surveys, 84.7% of respondents said that tools like this are crucial in their everyday lives, demonstrating how much people, particularly those over 30, value being able to keep an eye on their health at home. Our project offers an effective and useful solution for remote health monitoring, prepared to make a significant impact by fusing AI, radar technology, and an intuitive user interface.
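The validation step mentioned above (comparing radar estimates against a wearable reference) can be sketched as a mean-absolute-error check. The paired readings and the 5 bpm tolerance below are made up for illustration; they are not the project's measured data.

```python
# Sketch of the accuracy check: compare contactless radar heart-rate
# estimates against a wearable reference using mean absolute error.
# The readings and the tolerance are made-up sample values.

def mean_absolute_error(radar, wearable):
    assert len(radar) == len(wearable), "paired readings required"
    return sum(abs(r - w) for r, w in zip(radar, wearable)) / len(radar)

def within_tolerance(radar, wearable, bpm_tolerance=5.0):
    """Accept the radar channel if its average error stays within tolerance."""
    return mean_absolute_error(radar, wearable) <= bpm_tolerance

radar_bpm    = [72, 75, 78, 74, 71]   # hypothetical radar estimates
wearable_bpm = [70, 76, 80, 73, 72]   # hypothetical wearable reference
```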

Raqib: A Lockdown IDE with Real-Time Monitoring for Secure Programming Exams

Project ID = F24SDP 30 CS M

Supervisor: Abdelkarim Erradi

Abdulrazzaq Alsiddiq, Asem Shouman, Ibrahem Alsalimy, Marwan Khankan

The increasing reliance on digital solutions in education and professional development has highlighted significant challenges in conducting secure and fair coding-based assessments. Traditional methods, such as pen-and-paper exams or unsupervised coding tests on personal devices, fail to provide a secure environment for fair evaluation while preserving the efficiency of modern development tools. These limitations hinder both students and educators, making it difficult to assess programming skills accurately and equitably. To address this issue, this project introduces Raqib, a secure and customizable platform designed for coding-based exams. Raqib combines a Lockdown IDE and an Admin Dashboard to create a controlled environment for students while providing educators with real-time monitoring and management capabilities. The Lockdown IDE restricts unauthorized access, ensuring exam integrity without compromising the programming experience. Meanwhile, the Admin Dashboard empowers instructors to enforce policies, track progress, and address potential violations during the exam. Key accomplishments of the project include:
• Development of a user-friendly IDE, utilizing Electron and Monaco Editor, to replicate real-world coding environments.
• Implementation of robust lockdown mechanisms that prevent cheating by blocking application switching and restricting network access.
• Creation of an Admin Dashboard to provide real-time oversight, violation notifications, and exam policy configurations.
• Support for diverse assessment formats, including coding challenges and multiple-choice questions (MCQs).
By bridging the gap between practical coding environments and secure exam conditions, Raqib offers a reliable solution for universities, schools, and certification centers. This platform not only ensures fair evaluations but also supports scalable, flexible deployment for various examination scenarios, setting a new standard for programming assessments.
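The dashboard-side violation tracking described above can be sketched as a small policy: the Lockdown IDE reports events, the dashboard warns on each forbidden one and escalates after repeated offences. The event names and the three-strike threshold are assumptions for illustration, not Raqib's actual policy.

```python
from dataclasses import dataclass, field

# Dashboard-side sketch: count policy violations reported by the
# Lockdown IDE and escalate after repeated offences. Event names and
# the 3-strike threshold are illustrative assumptions.

FORBIDDEN = {"app_switch", "network_access", "clipboard_paste"}

@dataclass
class StudentSession:
    student_id: str
    violations: list = field(default_factory=list)

    def report(self, event: str) -> str:
        """Return the action the dashboard takes for this event."""
        if event not in FORBIDDEN:
            return "ok"
        self.violations.append(event)
        if len(self.violations) >= 3:
            return "flag_for_review"
        return "warn"
```

Keeping the raw violation list, rather than just a counter, lets instructors review what actually happened before acting, matching the abstract's emphasis on instructor control.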

Multi-Modal Large Language Model-based App for Plant Disease Detection

Project ID = F24SDP 31 CS M

Supervisor: Cagatay Catal

Basil Saeed, Mohammad Hassam Khaili, Mahodi Hasan Sabab, Omer Khan

Plant diseases are a significant issue that affects farmers, individuals, and organizations worldwide. These diseases not only reduce crop yields but can also lead to economic crises, food shortages, and health risks if diseased plants are consumed. Farmers, who often depend entirely on agriculture as their primary income source, are particularly vulnerable. To address this problem, we developed a mobile application that uses a multi-modal input approach, accepting both text and images, to detect plant diseases. The app provides detailed information on the detected disease, possible causes, and practical solutions, enabling users to make informed decisions to prevent or reduce the issue. The significance of tackling plant disease issues cannot be overstated. Farmers can use the application to avoid recurring mistakes, improve crop yields, and safeguard their livelihoods. Additionally, the app benefits home gardeners by providing easy-to-understand guidance for nurturing healthy plants. This is especially valuable for individuals who lack expertise in plant care, as it helps them avoid unnecessary expenses on ineffective treatments. By promoting higher confidence and awareness, the app encourages both farmers and individuals to adopt better plant care practices. The application's impact extends beyond individual users. On a societal level, it promotes a greener environment and supports Qatar's efforts to improve its climate and culture. Organizations can save money on plant care, making plant-based businesses more appealing and affordable. Globally, the application facilitates easy disease detection and treatment guidance, making plant care more accessible and affordable. Our mobile application is a comprehensive solution that uses AI and multi-modal input capabilities to resolve the plant disease issue for farmers, individuals, and organizations. Through these efforts, we aim to make plant care easier, more accessible, and more impactful for everyone involved.

Chronoleap: A 2D Physics- and Time-Based Platformer Game

Project ID = F24SDP 32 CS M

Supervisor: Mucahid Kutlu

Abdulla Abdullah, Mohammed Al-Adbi, Mazen Hassan, Youssef Mohamed

This project presents the development of a 2D platformer game that incorporates time-manipulation mechanics, carefully designed to enhance player engagement and gameplay dynamics. Inspired by successful indie titles such as Celeste, our game offers unique time-based abilities, such as a time-freeze that lets players manipulate the environment and strategically navigate complex levels. These mechanics give players greater control over their gameplay, affecting both character movement and environmental interactions, and offer an innovative twist on the traditional platformer genre. The project addresses challenges common to platformer development, such as balancing abilities, managing level-design complexity, and ensuring smooth, uninterrupted gameplay. We developed the game in the Unity engine; our solution involved building custom managers and running multiple feasibility checks, since some mechanics were not guaranteed to work correctly with pre-existing features. These managers handle time-based effects on in-game elements, such as obstacles and destructible terrain, which respond differently depending on which ability is active. By following the Scrum methodology, we tested iteratively and gathered feedback regularly, allowing us to refine the core mechanics and the game in general. Key achievements of the project include the successful planning and implementation of complex time-control mechanisms, dynamic environmental responses, and well-designed visuals. These core elements create an engaging experience that keeps players hooked and brings a breath of fresh air to the platformer genre. The game is now available online to download for Windows and macOS [34].

MANARA: AI-Powered Stock Market Prediction for the Qatar Stock Exchange

Project ID = F24SDP 33 CS M

Supervisor: Saleh Al-Hazbi

Alhasan Mahmood, Marwan Sayed, Muhammed Khan, Youssef Ahmed

The stock market in Qatar plays a vital role in the country's economy, yet investors often face challenges in making informed decisions due to the complexity of analyzing multiple factors influencing stock prices. This project addresses the need for a comprehensive, reliable, and accessible tool to assist investors by providing actionable insights tailored specifically to the Qatar Stock Exchange (QSE). Our proposed solution, Manara, is a web-based financial advisor specialized in the Qatari market. By leveraging advanced artificial intelligence, the system predicts stock price movements for all companies listed on the QSE. These predictions incorporate a wide range of technical and fundamental factors, as well as external influences such as financial and geopolitical news, oil and gold prices, currency exchange rates, and other influential global markets. Rather than simply providing standalone predictions, the system integrates these forecasts with historical market data and relevant news articles to deliver a holistic view. Users are presented with clear and actionable recommendations, enabling them to make confident and well-informed investment decisions. Key achievements include the successful integration of LSTM and TFT models to analyze and predict stock price trends by leveraging both fundamental and technical data. Comprehensive testing verified that the system meets functional and non-functional requirements, including user accessibility, performance, and scalability. This project not only fills a critical gap in the Qatari financial market but also demonstrates the potential of combining AI and financial data to empower decision-making. In conclusion, the project delivers an innovative solution that equips investors with the tools to navigate the complexities of the QSE, setting the stage for future advancements in financial advisory systems.
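Sequence models like the LSTM and TFT mentioned above are trained on fixed-length windows of historical prices paired with the next-step price as the target. The sketch below shows only that data-preparation step; the model itself (typically Keras or PyTorch) is omitted, and the closing prices are made-up values.

```python
# Data-preparation sketch for the sequence models: turn a price series
# into (window, next_price) training pairs. Prices are made-up values.

def make_windows(prices, lookback=3):
    """Return (window, next_price) training pairs for a sequence model."""
    pairs = []
    for i in range(len(prices) - lookback):
        pairs.append((prices[i:i + lookback], prices[i + lookback]))
    return pairs

closes = [10.0, 10.2, 10.1, 10.4, 10.6, 10.5]   # hypothetical closes
pairs = make_windows(closes, lookback=3)
```

In the full system, each window would be extended with the external features the abstract lists (news sentiment, oil and gold prices, exchange rates) before being fed to the model.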

The Coconut Project: AI-Powered Assignment Generation & Evaluation

Project ID = F24SDP 34 CS M

Supervisor: Devrim Unal

Ahmedfaseeh Akram, Hunzalah Bhatti, Haris Khan, Sarim Toqeer

Manually grading coding assignments is one of the most repetitive and time-consuming tasks for computer science educators. Reviewing each submission for correctness, efficiency, and code quality takes significant effort, especially with large classes. This process often leads to delayed feedback, inconsistent evaluations, and limited time for instructors to focus on teaching. As the volume of assignments grows, maintaining fairness, speed, and quality in grading becomes increasingly difficult without automation. This is where Coconut comes in. Coconut is a teacher-only tool that simplifies two major tasks: creating assignments and semi-automatically grading multiple student submissions. For assignment creation, teachers input their idea, topic, difficulty, and language, alongside any additional requirements for the problem, and Coconut uses fine-tuned AI agents to generate a relevant problem, complete with a proper problem description for students, a grading rubric, relevant starter code, and a test-case guide. During grading, the tool analyzes each student's code, supporting multi-file submissions, checks for correctness, efficiency, and coding practices, and provides individualized feedback and suggested fixes. Teachers can review and adjust the results as needed, letting Coconut do the tiring work while they retain full control over the output. Coconut reduces the effort of creating assignments from scratch and speeds up grading, helping educators focus on teaching without compromising quality. Initially, the project aimed to build a broader Learning Management System (LMS) but strategically shifted toward a more focused and novel direction: creating an intelligent, agentic platform that supports educators specifically in programming assignment generation and evaluation.
By narrowing the scope, Coconut introduced unique capabilities — particularly in AI-driven problem creation, static code analysis without remote execution risks, and modular, adjustable feedback workflows — that set it apart from traditional LMS tools. This engineered solution not only fills a critical gap for computer science educators but also demonstrates how targeted, agent-assisted platforms can deliver high-impact improvements in education without the complexity and overhead of full-scale LMS ecosystems.

MedPulse: A Serverless, AI-Powered, and Scalable Patient-Centric Platform for Modern Healthcare Management

Project ID = F24SDP 35 CS M

Supervisor: Moutaz Saleh

Ahmed Al-Ghoul, Ahmed Elkhashen, Ahmed Ezzat, Mohammed Abuhaweeleh

Healthcare systems globally face significant challenges due to reliance on outdated administrative and patient management processes, often resulting in operational inefficiencies, increased costs, and compromised patient care quality. Particularly in regions like the Middle East, existing platforms frequently lack user-centric design, interoperability, and the adaptability needed to meet evolving healthcare demands. This highlights a critical need for modernized, integrated solutions that centralize core healthcare functions while prioritizing usability, robust security, and regulatory compliance. Such systems are vital for streamlining operations, reducing administrative overhead, and ultimately enhancing patient outcomes. This project presents the design, development, and evaluation of an innovative, full-stack Healthcare Platform and Management System (HPMS). Leveraging a modern technology stack featuring a FastAPI backend and a Next.js frontend, the HPMS provides a comprehensive, integrated solution aimed at transforming healthcare delivery and administration. It serves as a centralized platform facilitating seamless interaction among patients, healthcare providers, and administrators through an intuitive web interface. The system encompasses critical features including electronic health record (EHR) data management, interactive appointment scheduling, integrated telemedicine capabilities, prescription handling, and foundational elements for billing automation. Emphasis was placed on creating a user-friendly experience while ensuring robust security through measures like JWT authentication and role-based access control, adhering to standards like HIPAA, GDPR, and relevant Qatari regulations. This report details the system architecture (both frontend and backend), implemented functional modules, development methodologies, testing strategies, and key technical challenges overcome during the creation of this integrated platform. 
By bridging gaps in current healthcare technology, the HPMS aims to support a more efficient, accessible, and patient-centric healthcare ecosystem.

Diraya: A Multi-Agent LLM System for Interactive Learning

Project ID = F24SDP 36 CS M

Supervisor: Tamer Elsayed

Abdulrahman Selmi, Aly Soliman, Omar Elshenhabi, Osama Hardan

In an age of information overload, efficiently processing and extracting valuable insights from large volumes of unstructured data is a significant challenge across industries, including education, healthcare, and law. Traditional methods often struggle to manage and make sense of such data, hindering productivity and decision-making. Our project addresses this issue by developing a multi-agent LLM system designed for interactive learning. The system allows users to engage in real-time, dynamic conversations with multiple AI agents, each contributing expertise to analyze and discuss specific sections of learning material in the form of documents. The proposed solution integrates large language models (LLMs) with a real-time audio interface, enabling users to interact using both text and speech. By leveraging multiple AI agents, the system deepens document exploration and makes it more interactive, enabling users to gain insights from perspectives that may not be evident from the document alone. This project specifically aims to improve how students and learners interact with and extract knowledge from documents, enhancing learning outcomes. We have implemented a working system, and continuous prototype testing proved our concept of using AI agents for interactive learning to be beneficial. Our solution is novel in enhancing the learning process through AI agents with live user interaction and document analysis, a combination lacking in existing solutions.
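The per-section agent design described above can be sketched as a small orchestrator: each agent owns one document section and contributes a turn to the discussion. Real agents would wrap LLM API calls; the stub agents below just echo their section's first sentence, and all names are hypothetical.

```python
# Orchestration sketch for the multi-agent design: one agent per
# document section, each contributing a perspective. The stub agents
# stand in for LLM API calls; section texts are made up.

def make_agent(name, section_text):
    def respond(question):
        first_sentence = section_text.split(".")[0]
        return f"{name}: regarding '{question}', my section says: {first_sentence}."
    return respond

def discuss(question, agents):
    """Collect one answer per agent, giving multiple perspectives."""
    return [agent(question) for agent in agents]

doc_sections = {
    "Intro": "LLMs process text. They are large.",
    "Methods": "We use two agents. Each reads one part.",
}
agents = [make_agent(n, t) for n, t in doc_sections.items()]
```

Swapping the stub body for an LLM call (with the section text in the prompt) turns this into the retrieval-free, section-scoped agent pattern the abstract describes.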

Shaheen: A Highly Available End-to-End Secure File Sharing Solution for Untrusted Cloud Environments

Project ID = F24SDP 37 CS M

Supervisor: Qutaibah Malluhi

Abdullah Nawaz, Mohammed Al-ghazali, Mohamed Ahmed, Osamah Alsumaitti

In today's digital era, the dependence on cloud service providers for file sharing has raised critical concerns about security, privacy, and user trust. Issues such as data breaches, misuse of user content, and the lack of anonymity in existing solutions have exposed users to significant risks. To address these challenges, Shaheen, a secure open-source file-sharing application, provides a robust alternative designed to safeguard files even from untrustworthy cloud environments. Shaheen employs a multi-layered security approach to ensure end-to-end protection for file sharing. Encryption keys are established securely using the Diffie-Hellman key exchange protocol, allowing users to generate a shared secret without the need for direct key sharing or pre-existing key certificates. Once established, these keys are used for AES encryption, which ensures the confidentiality of files by encrypting their content before uploading them to the cloud. To maintain file authenticity and integrity, Shaheen utilizes Message Authentication Codes (MACs) to provide both integrity and authenticity by verifying that the file’s content has not been tampered with and confirming it originated from a trusted source. Additionally, the system utilizes multi-cloud storage to enhance file availability and redundancy, reducing reliance on a single provider. Users can also set customizable file properties such as lifespan and access limits, giving them granular control over how their files are shared and accessed. With its intuitive interface and strong commitment to file protection, Shaheen provides a secure, user-friendly solution that restores trust in file sharing by addressing key security concerns comprehensively.
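The key flow described above (Diffie-Hellman agreement, then symmetric protection of the file, then a MAC for integrity and authenticity) can be sketched with the Python standard library. The tiny prime below is for illustration only; real deployments use standardized large DH groups, and file contents would be AES-encrypted with a proper cryptography library, which the sketch omits.

```python
import hashlib, hmac, secrets

# Sketch of Shaheen's key flow: a Diffie-Hellman exchange yields a
# shared secret, hashed into a symmetric key; an HMAC then protects
# file integrity and authenticity. Toy parameters -- NOT secure.

P, G = 0xFFFFFFFB, 5          # toy prime and generator, illustration only

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(my_priv, their_pub):
    """Both sides derive the same key without ever sending it."""
    secret = pow(their_pub, my_priv, P)
    return hashlib.sha256(secret.to_bytes(8, "big")).digest()

def tag_file(key, data: bytes) -> bytes:
    """MAC over the file content (in Shaheen, over the AES ciphertext)."""
    return hmac.new(key, data, hashlib.sha256).digest()

def verify_file(key, data: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(tag_file(key, data), tag)
```

`hmac.compare_digest` is used instead of `==` to avoid timing side channels when verifying tags, a standard precaution in MAC verification.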

QatarFit: Developing Health and Fitness in Qatar

Project ID = F24SDP 38 CS M

Supervisor: Cagatay Catal

Ahmed Ahmed, Abdelbari Kecita, Mustafa Hussein, Osama Osman

Many people struggle to stay active and healthy due to a lack of easy access to fitness information, outdoor activity locations, and personalized health plans. Qatar Fit was developed to solve this problem by offering a simple and effective mobile app designed for people in Qatar. The app provides users with tools to discover fitness locations, track their health, and receive personalized workout and nutrition plans, making it easier for them to live a healthier lifestyle. Qatar Fit includes several helpful features. The ActiveSpot Finder helps users find parks, walking paths, and other exercise spots, while the Route Analyzer allows users to plan their activities. The app also connects people through the Fitness Community Hub, where users can share their progress and take part in fun challenges. Using AI, the app creates personalized workout and meal plans based on each user's needs. Additionally, the Health Data Integration feature combines data from other fitness apps and devices to give users a complete picture of their health. We used the MVVM (Model-View-ViewModel) design pattern to make the app organized, easy to maintain, and scalable. The app was successfully built with Firebase for secure backend support, and its user-friendly design ensures smooth navigation. During testing, Qatar Fit showed that it can provide accurate recommendations and useful features for different types of users. Qatar Fit is more than just a fitness app; it brings together useful tools, AI-powered recommendations, and a strong community to encourage active lifestyles. This project proves that technology can make staying healthy easier and more enjoyable, providing long-term benefits for individuals and the community in Qatar.

Adversarial Attack and Defense Mechanisms for Image Classification Models in Healthcare

Project ID = F24SDP 39 CS M

Supervisor: Rehab Duwairi

Ahmed Hagana, Faisal Elbadri, Ibrahim Shatah, Rashid Nafwa

The field of artificial intelligence (AI) has witnessed rapid advancements, particularly in image classification, where AI models are now integral to critical domains such as healthcare, autonomous vehicles, and security systems. However, this growing reliance exposes a fundamental vulnerability: adversarial attacks. These attacks intentionally manipulate model inputs to induce incorrect predictions, posing significant risks in sensitive applications like medical diagnosis. Misclassification in such contexts can have severe consequences, highlighting the urgent need to enhance model robustness against adversarial perturbations. This project presents a comprehensive approach to improving the adversarial resilience of image classification models, focusing on black-box attacks, including Boundary Attack and Sign-OPT Attack. At the core of the system is the MobileViT model, specialized for detecting brain tumors in MRI scans using a curated medical imaging dataset. The primary challenge addressed is the model’s susceptibility to adversarial perturbations that could compromise diagnostic integrity. To mitigate this risk, adversarial training techniques were employed, incorporating adversarial examples directly into the training process to improve the model’s ability to correctly classify both clean and perturbed inputs. Beyond standard adversarial training, the system adopts a modular, layered architecture, promoting flexibility, scalability, and easier experimentation. Key components include automated generation of adversarial samples, continuous monitoring of model performance on both clean and adversarial datasets, and a robust reporting framework for evaluating resilience. A dual optimization strategy was implemented: first maximizing clean dataset accuracy, then enhancing adversarial robustness without sacrificing performance. 
Ethical considerations and transparency were central to the methodology, ensuring that adversarial techniques were applied responsibly, especially in healthcare settings where trustworthiness is critical. Robustness evaluation was conducted through simulated adversarial conditions: Exposure Attack, Distortion Attack, and Combined Attack scenarios. These varied perturbations provided a comprehensive understanding of the model’s defensive capabilities. Experimental results demonstrated that the adversarially trained MobileViT model significantly outperformed the baseline model across clean and adversarial datasets. Enhancements in robustness were observed without major degradation in clean accuracy, affirming the effectiveness of the proposed defense strategy. This work establishes a solid foundation for future research in secure AI for healthcare, ensuring that machine learning systems remain reliable, transparent, and ethically aligned in high-risk environments. In conclusion, the project delivers a practical, ethically grounded approach to strengthening adversarial robustness in healthcare-focused AI systems. Through the integration of adversarial training, modular system design, and continuous robustness evaluation, this research advances the development of safer, more trustworthy machine learning solutions.
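The decision-based idea behind the Boundary and Sign-OPT attacks mentioned above can be shown in miniature: the attacker sees only the model's predicted label, starts from an already-misclassified input, and walks it as close as possible to the original while keeping the wrong label. A toy one-dimensional threshold "model" stands in for MobileViT here; the real attacks operate on image tensors with far more sophisticated search steps.

```python
# Miniature of a decision-based (black-box) attack: binary-search
# toward the original input while staying on the adversarial side of
# the decision boundary. The 1-D threshold model is a toy stand-in.

def model(x):                 # black-box oracle: label only, no gradients
    return "tumor" if x > 0.5 else "clean"

def boundary_search(x_orig, x_adv, steps=40):
    """Shrink the distance to x_orig while keeping the wrong label."""
    target = model(x_adv)
    assert target != model(x_orig), "start point must be misclassified"
    lo, hi = x_orig, x_adv    # hi stays adversarial, lo stays benign
    for _ in range(steps):
        mid = (lo + hi) / 2
        if model(mid) == target:
            hi = mid          # still adversarial: move closer to x_orig
        else:
            lo = mid
    return hi
```

Adversarial training, the defense evaluated in this project, works by feeding points like the returned boundary example back into training so the model's boundary moves away from clean inputs.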

NexaNode: A Search Engine for Decentralized Content on IPFS

Project ID = F24SDP 40 CS M

Supervisor: Noora Fetais

Athman Alkhalaf alibrahim, Ali Murad, Nasser Aljayyousi, Zaid Hamdan

The decentralized ecosystem currently faces a significant challenge: the lack of efficient content browsing. This limitation prevents many non-technical users from fully benefiting from decentralized networks. Moreover, content moderation often undermines freedom of speech and intensifies the spread of misinformation, targeting specific groups or communities. Privacy is increasingly compromised by centralized services requiring users to surrender personal information. What the world needs is a decentralized platform that enables transparent information sharing without intermediaries controlling what content is allowed. Such a platform must also be intuitive, catering to both experienced and novice users, to expand its audience and foster community growth. NexaNode addresses this need by introducing a decentralized search engine for the IPFS (InterPlanetary File System) network. Currently, there is no fully functional search engine for IPFS, highlighting the importance of NexaNode’s solution. Our approach relies on contributors, or provider nodes, who upload files via our platform, making them accessible through the search engine. Traditional methods of crawling and indexing are challenging to implement in IPFS, which is why our contributor-based model was chosen. The platform successfully enables users to exchange files while safeguarding privacy and maintaining data integrity. Moving forward, the focus will be on marketing NexaNode to attract more contributors, thereby expanding the database of files available for search. In conclusion, NexaNode meets the growing need for a decentralized, transparent search engine that champions freedom of speech and eliminates the control exerted by large organizations over shared data. In a decentralized environment, no single entity holds power, ensuring true autonomy and user empowerment.
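The contributor-based model described above can be pictured as a simple inverted index: when a provider node uploads a file, its title keywords are mapped to the file's IPFS CID, and a search intersects the keyword sets. This is a stdlib-only sketch with made-up CIDs, not NexaNode's actual implementation.

```python
def index_file(index, cid, title):
    """Contributor-side indexing: map each title keyword to the file's CID."""
    for word in title.lower().split():
        index.setdefault(word, set()).add(cid)

def search(index, query):
    """Return the CIDs that match every keyword in the query."""
    sets = [index.get(w.lower(), set()) for w in query.split()]
    return set.intersection(*sets) if sets else set()

# CIDs below are placeholders, not real IPFS content identifiers.
index = {}
index_file(index, "Qm-demo-1", "Decentralized Web Basics")
index_file(index, "Qm-demo-2", "Decentralized Storage with IPFS")
hits = search(index, "decentralized storage")
```

Because contributors populate the index at upload time, no crawler is needed — which is exactly why this model suits IPFS, where traditional crawling is impractical.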

PolicyGuard: AI-Driven Compliance and Policy Guidance for SMEs

Project ID = F24SDP 41 CS M

Supervisor: Noora Fetais

Ahmed Deef, Mohammed Al-Ashwal, Mohamed Mahmoud, Otba Awouda

Small and medium-sized enterprises (SMEs) in Qatar face critical challenges in developing effective, tailored cybersecurity policies due to limited resources and expertise. These gaps expose SMEs to significant risks, including data breaches, operational disruptions, and regulatory non-compliance, potentially undermining economic growth and digital transformation efforts. Addressing these issues is essential to enhance cybersecurity resilience and ensure SMEs' compliance with national regulations established by the National Cyber Security Agency (NCSA). This project aims to develop an AI-driven chatbot that compares policies, provides users with a score, and offers guidance on regulatory standards. Key achievements include training a fine-tuned AI model on our dataset, which contains the QDB manual, combined with retrieval-augmented generation (RAG) and prompt engineering, to deliver tailored policy recommendations aligned with NCSA and QDB standards. This solution has the potential to bridge the cybersecurity gap for SMEs in Qatar, contributing to a more robust cybersecurity ecosystem and supporting sustainable growth. The approach also offers broader applications for addressing cybersecurity challenges in resource-constrained markets globally.
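The compare-and-score step can be illustrated with a naive keyword-coverage check: the score is the fraction of required controls mentioned in the submitted policy, and the gaps are returned as guidance. The control names below are invented for illustration, and the real system uses a fine-tuned model with RAG and prompt engineering rather than string matching.

```python
def compliance_score(policy_text, required_controls):
    """Naive coverage score: fraction of required controls mentioned
    in the policy, plus the list of missing controls to guide the user."""
    text = policy_text.lower()
    covered = [c for c in required_controls if c.lower() in text]
    missing = [c for c in required_controls if c.lower() not in text]
    return len(covered) / len(required_controls), missing

# Illustrative policy excerpt and control list (not from NCSA or QDB).
policy = "Our policy covers password management and incident response procedures."
controls = ["password", "incident response", "data backup"]
score, gaps = compliance_score(policy, controls)
```

In the deployed system, the retrieval step would pull the relevant regulatory clauses and the model would judge semantic coverage, but the output shape — a score plus actionable gaps — is the same.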

SHAHEEN: Revolutionizing Price Comparison and E-Commerce Recommendations

Project ID = F24SDP 42 CS M

Supervisor: Mucahid Kutlu

Abdulla Al Naemi, Abdulla Jaidah, Hammad Muhammad Imtiaz, Alwaleed Sarieh Daher, Yousef Al-Yazidi

With the development of technology and the growing number of business entities, it is becoming hard for consumers to find products at the right prices. This problem affects not only time management but also financial resources, as people have to search for products across multiple e-commerce websites. Shaheen is a web application developed to solve this problem with the help of web scrapers and machine learning recommendation systems. It collects product information from various e-commerce sites and presents it in one place, where consumers can find the best price for a product. What follows is an explanation of the Shaheen application and how it works, the challenges encountered during development, the technologies used, and the benefits expected from its implementation. Given how frequently online shopping takes place, Shaheen is expected to simplify the purchasing process, benefiting both consumers and business owners. This development should enhance the shopping experience in the digital marketplace by making the shopping process more efficient and accessible.
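The aggregation step at the heart of a price comparator can be sketched as grouping scraped listings by normalized product name and keeping the cheapest offer per product. The field names, stores, and prices below are illustrative, not Shaheen's actual data model.

```python
def best_prices(listings):
    """Group scraped listings by normalized product name and
    keep only the cheapest offer for each product."""
    best = {}
    for item in listings:
        key = item["name"].strip().lower()   # crude normalization for the sketch
        if key not in best or item["price"] < best[key]["price"]:
            best[key] = item
    return best

# Listings as a scraper might emit them (illustrative values).
listings = [
    {"name": "USB-C Cable", "price": 25.0, "store": "StoreA"},
    {"name": "usb-c cable ", "price": 19.5, "store": "StoreB"},
    {"name": "Wireless Mouse", "price": 89.0, "store": "StoreA"},
]
cheapest = best_prices(listings)
```

A production system would need far better entity matching (the same product rarely shares an exact name across stores), which is where the machine learning component earns its keep.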

JerboLab: An Educational Standalone Self-Hosted Home Lab for Hands-On Learning and Development

Project ID = F24SDP 45 CS M

Supervisor: Moutaz Saleh

Abdulla Al-malki, Abdollah Kandrani, Mohamed Salih, Sultan Alsaad

Hands-on learning is a key part of building essential skills for computer science students. However, many students lose interest in coding projects because they often feel disconnected from practical applications. To address this challenge, we developed JerboLab as an Educational Standalone Self-Hosted Home Lab for Hands-On Learning and Development. This project offers an engaging platform where students can host, manage, and test their coding projects in a real-world environment, bridging the gap between academic exercises and practical experience. Our solution makes student submissions and grading seamless for instructors by allowing projects to be run immediately without any setup required. The platform automatically containerizes each project using Docker, ensuring consistent and isolated environments. Students can upload and deploy their work, interact with real system services like firewalls, and manage system tasks, gaining valuable hands-on experience. JerboLab’s user-friendly interface simplifies complex tasks, while its integration with university systems enables smooth project submission and evaluation, making it an efficient tool for both students and instructors. To support code development and review, JerboLab includes a built-in web-based code editor that lets students write, upload, and view their code directly on the platform. This makes it easy to test, debug, and improve their work without switching tools. It also enhances the learning experience by keeping everything in one place and providing feedback within the same environment used by instructors for evaluation. Built on a clean layered architecture, JerboLab is modular, scalable, and secure, making it suitable for both classroom and personal use. It uses advanced design patterns to ensure efficient code maintenance and system reliability. 
JerboLab also supports a self-hosted option, allowing students and institutions to deploy it on their own servers, ensuring data privacy and compliance with organizational policies. JerboLab not only enhances technical skills but also sparks curiosity by encouraging students to explore the inner workings of systems, services, and networks. Through hands-on experimentation with real-world tools, students develop a deeper understanding and passion for technology. By offering tools and experiences that mirror professional development workflows, JerboLab prepares students for future careers in technology, making coding projects more engaging, relevant, and impactful.
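The automatic containerization step can be sketched as assembling a `docker run` command per submission, with resource limits so projects stay isolated from each other. The flags, limits, and naming scheme here are illustrative assumptions, not JerboLab's actual configuration.

```python
def container_cmd(project_id, image, host_port, app_port=8080):
    """Assemble a docker run command that isolates one student project.
    Memory/CPU limits and the jerbolab-<id> naming scheme are illustrative."""
    name = f"jerbolab-{project_id}"
    return ["docker", "run", "-d", "--rm",
            "--name", name,
            "--memory", "256m", "--cpus", "0.5",   # keep one project from starving others
            "-p", f"{host_port}:{app_port}",        # expose the project's web port
            image]

cmd = container_cmd(42, "student-flask-app:latest", 8042)
```

Running each submission under its own name and port is what lets instructors open a project immediately after upload, with no per-project setup.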

Smart Hydroponic System with IoT Integration and Automated Harvesting

Project ID = F24SDP 47 CE M

Supervisor: Ahmed Badawy

Jassim AlAnsari, Khalid Abdallah, Mohamad Aljawad Sandouk, Osama Hemdan

This project proposes a cost-effective and automated hydroponic farming system to support sustainable agriculture. Traditional methods consume large amounts of water and land, while commercial hydroponic solutions are often too expensive for small-scale users. To address this, the system uses the Nutrient Film Technique (NFT) to minimize water usage and space requirements. An ESP32 microcontroller monitors environmental factors such as pH, temperature, humidity, and water level, and controls actuators accordingly. A Raspberry Pi manages a robotic arm equipped with a machine learning model that automates harvesting by identifying when plants are ready. The dashboard enables real-time monitoring through Wi-Fi communication, enhancing system transparency and user interaction. To reduce costs, the NFT channels were constructed using affordable, locally sourced materials, making the system both accessible and scalable for future development.
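The ESP32's monitor-and-actuate loop can be sketched as a threshold check that maps a sensor reading to actuator commands. The limits and command names below are illustrative, not the project's calibrated values.

```python
def control_actions(reading, limits):
    """Map one sensor snapshot to actuator commands.
    All thresholds and command names are illustrative."""
    actions = []
    if reading["ph"] < limits["ph_min"]:
        actions.append("dose_ph_up")
    elif reading["ph"] > limits["ph_max"]:
        actions.append("dose_ph_down")
    if reading["water_level_cm"] < limits["water_min_cm"]:
        actions.append("top_up_reservoir")
    if reading["temp_c"] > limits["temp_max_c"]:
        actions.append("run_cooling_fan")
    return actions

# One illustrative snapshot: acidic solution, low reservoir, hot greenhouse.
reading = {"ph": 5.2, "water_level_cm": 3.0, "temp_c": 31.0}
limits = {"ph_min": 5.5, "ph_max": 6.5, "water_min_cm": 5.0, "temp_max_c": 30.0}
actions = control_actions(reading, limits)
```

On the real system the same decision runs on-device, with the Wi-Fi dashboard mirroring each reading and any commands issued.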

RESCUE: Radar-based Emergency Swarm for Critical Under-rubble Estimation

Project ID = F24SDP 50 CE M

Supervisor: Amr Mohamed

Jeham Al-Kuwari, Mohammed Al-Sada, Turki Alahzam, Sultan Al-Harami

In the aftermath of a disaster, time is everything. The longer it takes to find survivors trapped under rubble, the slimmer become their chances to live. Traditional rescue methods are often too slow and risky in these urgent situations. Imagine if we could harness the power of technology to make these life-saving missions faster and safer. The objective of our project is to develop an efficient disaster response system employing a fleet of drones equipped with specific sensors to quickly and safely identify those trapped beneath debris. The system comprises high-altitude and low-altitude drones working together to scan areas, detect signs of life, and send this information to a Command-and-Control Center (CCC). The high-altitude drone, equipped with cameras, scans vast areas to identify specific sites requiring further inspection. High-altitude drone plays an important role in scanning wide areas and determining the extent of the damage, and guide the overall mission. Upon marking these locations, low-altitude drones equipped with a camera and radar sensor (24 GHz) proceed to conduct a thorough search for any individual trapped under rubble. The radar sensors on low-altitude drones are particularly effective for detecting movement or signals through obstacles, while the cameras offer detailed, close-up views to validate observations. All data gathered by the drones is securely transmitted to the CCC using a 4G/5G network, providing rescue teams with real-time updates for quick, informed decision-making. This project aims to establish a robust communication system, efficient data-sharing techniques, and reliable navigation, all tailored for operation in demanding disaster scenarios. This system employs signal processing and machine learning techniques on high-altitude drones to analyze aerial imagery and localize potential rubble zones. 
Low-altitude drones are then deployed to these areas, utilizing radar-based sensing to detect signs of human presence beneath the rubble. This combination significantly enhances the precision and speed of victim localization, increasing the chances of saving lives in critical situations.

Autonomous Golf Cart

Project ID = F24SDP 51 CE M

Supervisor: Uvais Qidwai

Abdulla Musa, Abdelaziz Shehata, Zabin Al-Dosari, Ahmad Chowdhury

The development of autonomous vehicles has revolutionized the transportation industry, extending beyond personal mobility and logistics to more specialized environments. This project aims to design and implement a Level 4 autonomous golf cart, tailored specifically for controlled environments such as university campuses and golf courses. The primary objective is to create a fully autonomous vehicle capable of navigating without human intervention, providing a safe, efficient, and low-speed transportation solution. By utilizing advanced sensors and powerful computing platforms, this project explores the potential of integrating cost-effective, off-the-shelf components to build a robust, real-world autonomous system. In recent years, Qatar has emerged as a leader in adopting advanced technologies across several sectors, aligning with its vision to become a global hub for innovation. Autonomous vehicles, such as the proposed golf cart, align perfectly with this trajectory, enhancing the nation’s technological evolution. Golf carts have already demonstrated their utility during the FIFA 2022 World Cup, where they played a crucial role in efficiently transporting personnel and materials across large venues [1]. By integrating autonomous golf carts into future mega-events, Qatar could further elevate its reputation for technological excellence. Autonomous golf carts would not only enhance operational efficiency but also provide an eco-friendly solution, aligning with Qatar’s sustainability goals and commitment to reducing carbon emissions. The system architecture of the proposed solution is structured into different layers: at the top, the Jetson Orin NX 16GB RAM AI 100TOPS Development Kit handles high-level tasks such as sensor fusion, object detection, path planning, and obstacle avoidance. 
At the bottom layer, microcontrollers manage key sensors—LiDAR for environmental mapping, stereo cameras for object identification, and ultrasonic sensors for close-range obstacle detection. The system also includes a Speed Controller connected to the torque motor for controlling the cart's acceleration and movement. This layered architecture ensures seamless communication among components, allowing the cart to perceive its environment, make decisions in real-time, and autonomously navigate through complex terrains. Designed for thorough testing in predictable environments such as university campuses, the system’s integration of Python and ROS2 for LiDAR data preprocessing enhances its ability to create accurate 2D maps, while ROS2 ensures efficient communication and sensor fusion for real-time navigation. The novelty of this design lies in its use of cost-effective hardware, delivering advanced functionality typically reserved for more expensive systems. The successful implementation of this autonomous golf cart will contribute to Qatar’s ongoing technological advancements, positioning it as a pioneer in integrating autonomous systems into practical, high-impact applications.

Flood Detection System in Tunnels

Project ID = F24SDP 52 CE M

Supervisor: Elias Yaccoub

Fawaz Al-Soufi, Osama Abdelaziz, Rashid Alyafei, Saqer Almurikhi

Flooding in roadway tunnels is a significant challenge, particularly in Gulf Cooperation Council (GCC) nations such as Qatar, Oman, and the United Arab Emirates (UAE), which have seen unprecedented rainfall in recent years due to climate change and development. This project proposes an innovative way to improve flood detection and response in tunnels. The system utilizes an Arduino UNO R4 Wi-Fi microcontroller, sensors, and a deep learning Convolutional Neural Network (CNN) model to measure water levels and grade flood severity into three classes: minimal, medium, and high. The hardware includes ultrasonic sensors, water level sensors, and a camera, while the software incorporates a real-time graphical user interface (GUI) and the trained CNN model for flood classification. The system offers automated features, including the dispatch of SMS messages to administrators and the activation of drainage pumps, alongside manual control options for operators through the GUI. Testing has verified the system's accuracy and reliability, with 96.36% precision in flood classification using the CNN model. The system provides a cost-effective, scalable, and user-friendly way to mitigate tunnel flooding, enhance public safety, minimize economic damage, and promote sustainable urban infrastructure management. This project brings together multiple engineering disciplines to deliver a robust solution to a substantial environmental challenge.
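The three-way severity grading and the automated responses can be sketched as threshold logic on a measured water level. In the actual system the CNN grades camera frames; the thresholds and response names below are illustrative stand-ins.

```python
def classify_flood(level_cm, thresholds=(5.0, 20.0)):
    """Grade flood severity from a water-level reading.
    The cut-offs are illustrative; the real system classifies camera frames."""
    low, high = thresholds
    if level_cm < low:
        return "minimal"
    if level_cm < high:
        return "medium"
    return "high"

def respond(severity):
    """Illustrative automated responses per severity class."""
    if severity == "minimal":
        return []
    if severity == "medium":
        return ["send_sms"]
    return ["send_sms", "activate_pumps"]
```

The GUI's manual mode would bypass `respond` and let an operator trigger the pumps directly, matching the dual automatic/manual design described above.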

IoT Smart Agriculture System

Project ID = F24SDP 54 CE M

Supervisor: Mahmoud Barhamgi

Ahmed Alhato, Abdullah Nagib, Anas Hammad, Mustafa Elqaq

The increasing global demand for water in agriculture has led to the exploration of innovative solutions to minimize wastage and maximize resource efficiency. In arid regions like Qatar, where natural freshwater resources are scarce and the climate is predominantly desert, efficient water management is critical. Traditional irrigation methods are often inefficient, resulting in excessive water use and suboptimal crop yield. To address these challenges, this project presents a smart agriculture system that integrates advanced sensors and machine learning for both irrigation and pest control. The system monitors soil moisture, temperature, humidity, and light, using real-time environmental data to automate irrigation decisions. Additionally, an ESP32-CAM module is employed for insect detection, identifying pests such as worms, cockroaches, scorpions, and grasshoppers. A locally hosted HTML dashboard provides real-time visualization of sensor data and enables users to switch between manual and automatic irrigation control modes. By optimizing water usage and incorporating pest detection, the system improves crop yield, conserves resources, and contributes to sustainable farming practices in water-scarce environments.

Design and Implementation of a Wearable Monitoring System for Post-Stroke Rehabilitation with Heart Rate and Gait Analysis

Project ID = F24SDP 55 CE M

Supervisor: Khaled Shaban

Anas Alghannam, Abdulla Zain, Hamzah Almomani, Mohammed Al-Salahi

Stroke survivors often face unique challenges in physical rehabilitation, particularly in regaining mobility and stability. Off-the-shelf fitness trackers, while effective for individuals with regular gait patterns, fail to provide accurate step counts, distance measurements, and detailed insights for those with impaired mobility. This lack of precision and clinically relevant data limits their utility for post-stroke recovery monitoring. To address these limitations, we propose a specialized wearable device designed specifically for post-stroke rehabilitation. The system monitors key metrics such as step counts, walking distances, and heart rate, providing healthcare providers with actionable insights into patient recovery progress. A cloud-based platform enables remote access to patient data, allowing doctors to analyze trends, detect anomalies, and make informed adjustments to therapy plans without requiring frequent in-person evaluations. By focusing on the specific needs of stroke survivors, this project aims to bridge the gap between standard fitness trackers and the specialized requirements of rehabilitation patients. The system not only enhances patient outcomes through precise monitoring but also reduces the burden on healthcare providers, offering an innovative and scalable solution for post-stroke care.

Close Proximity Alerting System for Emergency Services

Project ID = F24SDP 56 CE M

Supervisor: Ahmed Badawy

Abdullah Nasser, Faisal Abdulhameed, Khamis Al-Kubaisi, Samir Khalifa

The Close Proximity Alerting System for Emergency Services presents an innovative solution aimed at enhancing the efficiency and safety of emergency responders. It addresses critical challenges encountered during high-pressure situations, particularly the difficulty of signalling proximity to surrounding vehicles in a timely and coordinated manner. By improving motorist awareness, the system facilitates faster emergency response times and effective risk mitigation. The design incorporates passive or semi-passive RF tags attached to civilian vehicles. Directional RF antennas, mounted on emergency vehicles, transmit signals that activate the tags, using energy harvested directly from the radio waves. Upon activation, the system triggers visual and auditory alerts to notify drivers of approaching emergency services without requiring active driver engagement. This ensures seamless, real-time communication in close-range environments, even in the absence of dedicated power sources. The architecture is modular and energy-efficient, combining wireless RF power harvesting with secure data communication protocols. Although the intended design envisions a unified RF-based system, current hardware limitations necessitated a dual-band approach: 915 MHz RF signals are used for wireless power harvesting, while ESP-NOW (2.4 GHz) is employed for secure, low-latency data communication. The system integrates robust encryption and authentication mechanisms to safeguard transmitted data against cyber threats. It is built to perform reliably in dense urban environments and harsh outdoor conditions, aligning with sustainability goals through minimal energy consumption and durable hardware components. Additionally, the modular design allows for flexible deployment across diverse emergency scenarios and scalability toward nationwide implementation. 
By addressing critical gaps in existing traffic management systems, this project contributes a practical, cost-effective, and secure solution that enhances emergency response capabilities, promotes public safety, and supports sustainable urban development.

PIPEGUARD: In-Pipe Inspection AI-Enhanced Robot for Defects Detection Using Computer Vision

Project ID = F24SDP 57 CE M

Supervisor: Junaid Qadir

Aymen Filali, Fisal Elgamal, Mohamed Abbas, Nasser Alyafei

Maintaining the structural integrity of gas pipelines is critical for ensuring operational safety, environmental protection, and economic stability, particularly in energy-driven countries like Qatar. Traditional inspection methods, such as Pipeline Inspection Gauges (PIGs) and Closed-Circuit Television (CCTV) systems, are costly, labor-intensive, and often limited in early defect detection capability. These limitations create an urgent need for a more efficient, accurate, and scalable inspection solution. In response, this project introduces PIPEGUARD, a semi-autonomous pipeline inspection robot equipped with environmental sensors and an AI-enhanced computer vision system. The robot leverages a tracked skid-steering chassis for reliable movement in confined pipeline interiors and integrates a multi-layered hardware architecture featuring a Jetson Orin NX, an ESP32 microcontroller, and an OAK-D Pro camera. Unlike traditional systems, PIPEGUARD performs onboard defect detection directly on the camera’s Myriad X VPU, running an optimized YOLOv8 model. This edge-based inference significantly reduces processing latency and energy consumption, allowing the Jetson Orin NX to focus on motion control, stabilization through sensor fusion, and real-time data logging. Environmental conditions such as temperature, humidity, and methane gas concentration are continuously monitored via a network of sensors managed by the ESP32. These readings, combined with IMU-based stabilization, contribute to a comprehensive inspection framework capable of detecting both surface-level defects and deeper structural anomalies. Sensor data, movement feedback, and vision outputs are logged, processed, and summarized into detailed inspection reports, enabling proactive maintenance decisions. 
Key achievements of this project include the successful deployment of a real-time AI vision model on a resource-constrained edge device, the seamless integration of ROS2-based motion and sensor fusion control, and the development of a lightweight, modular robot platform optimized for in-pipeline operation. Rigorous subsystem testing demonstrated robust environmental sensing, accurate stabilization during movement, and effective defect detection across various pipeline conditions. The novelty of the PIPEGUARD design lies in its unique combination of edge AI inference, modular ROS2 control, and skid-steering locomotion, all within a compact, power-efficient form factor. By shifting inference to the OAK-D Pro’s VPU, the system achieves a balance between performance and energy efficiency that is not present in conventional crawler robots. This innovation enables cost-effective, scalable pipeline monitoring solutions, potentially transforming pipeline maintenance practices by enhancing inspection frequency, reducing operational risks, and supporting the transition towards smarter and more sustainable infrastructure systems.

Real-Time Integrated Driver Drowsiness Detection and Alert Response System (RIDAR)

Project ID = F24SDP 60 CE-CS M

Supervisor: Qutaibah Malluhi

Ahmad Al-Enazi, Farouk Mansour, Mhd Kheir Alhafez, Moustafa Elbeltagy

Driver drowsiness is a significant safety issue that leads to numerous road accidents each year. This project addresses the need for a reliable detection system that monitors signs of driver drowsiness and provides timely alerts to prevent accidents. The objective is to develop a system that combines vehicle behavior sensors, such as gyroscopes and accelerometers, with driver behavior sensors, such as face-tracking cameras, to monitor the driver's alertness in real time. The system processes data from these sensors to detect fatigue indicators, including prolonged eye closure, yawning, and harsh braking or turning. Upon detecting drowsiness, it issues an alert to prompt the driver to take corrective action, such as pulling over to rest. Designed for high accuracy, the system is also cost-effective and integrates easily with vehicle electronics while operating non-intrusively to avoid distracting the driver. This project aims to contribute to road safety by reducing drowsiness-related accidents. It represents a step forward in driver assistance technologies, with potential extensions for applications in autonomous and semi-autonomous vehicles.
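The prolonged-eye-closure indicator can be sketched as a frame counter over an eye-openness signal: an alert fires once the eyes have stayed below an openness threshold for a minimum run of consecutive frames. The metric, thresholds, and frame counts below are illustrative; the real system fuses camera data with the vehicle's gyroscope and accelerometer readings.

```python
def drowsiness_alerts(eye_open_ratios, closed_thresh=0.2, min_closed_frames=15):
    """Return the frame indices at which a prolonged-eye-closure alert fires.
    eye_open_ratios: per-frame eye-openness values (illustrative metric)."""
    alerts, closed = [], 0
    for i, ratio in enumerate(eye_open_ratios):
        closed = closed + 1 if ratio < closed_thresh else 0  # run length of closed frames
        if closed >= min_closed_frames:
            alerts.append(i)
    return alerts

# 10 frames with eyes open, then 20 frames with eyes closed.
signal = [0.3] * 10 + [0.1] * 20
alerts = drowsiness_alerts(signal)
```

Requiring a run of closed frames, rather than alerting on a single frame, is what separates a blink from genuine drowsiness.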

VoteChain

Project ID = F24SDP 61 CE-CS F

Supervisor: Elias Yaccoub

Fatima AlNoaimi, Jori Al-Amoodi, Muneera Al-Naimi, Tamader Al-Dosari

The project aims to design and develop a secure and efficient electronic voting system, utilizing advanced technologies to enhance the integrity, confidentiality, and reliability of the voting process. Our electronic voting system uses a private blockchain model where each vote is encrypted and stored in a linked list of blocks. Each block contains a hash of the previous block, ensuring data immutability, and uses a nonce to prevent hash collisions or replay attacks. Fingerprint authentication is integrated into the voting system using an optical fingerprint sensor, ensuring only authorized users can access the voting system, thereby eliminating the risk of voter impersonation, vote manipulation, or fraud. Our electronic voting system combines biometric authentication and encryption techniques to propose a secure, transparent, and user-friendly approach. The system simplifies the voting process by providing voters with straightforward and reliable means of casting their votes. Moreover, the system reduces dependence on manual processes, enhancing both accuracy and efficiency in vote counting and result compilation. By addressing critical challenges in conventional voting systems, such as inefficiency, data security, and fraud, this project demonstrates how technology can build trust and enhance transparency in democratic systems and encourage people to participate more in electoral processes.
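The chain structure described above — each block carrying an encrypted vote, the previous block's hash, and a nonce — can be sketched in a few lines. Here the nonce is found by a toy proof-of-work search purely for illustration; the project's private-chain design may use the nonce differently (e.g., to prevent replay attacks), and votes are shown as placeholder strings rather than real ciphertexts.

```python
import hashlib
import json

def hash_block(block):
    """Deterministic SHA-256 over the block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_vote(chain, encrypted_vote, difficulty=2):
    """Append a block linking to the previous block's hash; the nonce
    search is a toy proof-of-work used only for illustration."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"vote": encrypted_vote, "prev_hash": prev_hash, "nonce": 0}
    while True:
        digest = hash_block(block)
        if digest.startswith("0" * difficulty):
            block["hash"] = digest
            chain.append(block)
            return block
        block["nonce"] += 1

def verify(chain):
    """Recompute every hash and check each prev_hash link."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if hash_block(body) != block["hash"]:
            return False
        if i and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_vote(chain, "enc(candidate_A)")   # placeholder ciphertexts
add_vote(chain, "enc(candidate_B)")
```

Because every block embeds its predecessor's hash, altering any stored vote invalidates the entire suffix of the chain — the immutability property the system relies on.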

AI-Powered Helmet for Construction Workers with Wireless Mesh Networking and Secure Data Protection

Project ID = F24SDP 62 CE-CS F

Supervisor: Devrim Unal

AlJazi Afifa, Abeer Aljomai, Hanan Bawazir, Maryam Bawazir

About 6,000 workers die daily from work-related accidents and diseases, totaling over 2.2 million deaths annually. This project introduces an AI-powered smart helmet designed to improve construction workers' safety and health. It integrates health monitoring, hazard detection, and real-time communication to reduce accidents and enhance well-being. Equipped with sensors to track heart rate, body temperature, and fatigue, the helmet sends real-time health reports securely via a wireless Mesh network, ensuring seamless communication across large construction sites. In addition to health monitoring, the helmet also features environmental hazard detection capabilities. It can detect the presence of toxic gases, smoke, and physical hazards such as loose wires or fire. If any potential risk is identified, the helmet triggers an alarm, immediately notifying both the wearer and the on-site manager, enabling quick intervention to mitigate the threat. The helmet's data transmission is powered by wireless Mesh networking, ensuring reliable and scalable communication across the site. This high-efficiency network allows multiple helmets to communicate with each other and the central receiver, even in areas with dense construction material or obstacles. The central system evaluates the data using AI algorithms, providing predictive insights and health risk assessments for the construction manager. This enables proactive management of worker health and safety, facilitating timely interventions and reducing the risk of accidents. To ensure the integrity and security of sensitive data, the system leverages blockchain technology for data integrity and secure storage. Each health report and environmental reading is recorded in an immutable blockchain ledger, ensuring all data is tamper-proof and verifiable. This secures workers' personal health information and provides a transparent and auditable record of safety measures taken during the construction process. 
By combining AI, wireless Mesh networking, and blockchain security, this smart helmet offers a comprehensive solution to the growing challenges of worker safety and health management in the construction industry. The system enhances real-time monitoring, facilitates swift decision-making, and provides a proactive approach to safety, improving overall worker protection and reducing the risk of injury or health issues on construction sites.
