Cutting Edge '25

Omnify – Blockchain-Based Smart Contract System for Game Tournament Management

Omnify is a blockchain-based system that improves game tournament management by addressing transparency, security, and fairness issues. Using Ethereum and Solidity smart contracts, it automates prize distribution and result verification on a tamper-proof ledger. Its React/Next.js frontend with Web3 integration offers an intuitive interface, boosting trust and efficiency in eSports and online gaming tournaments.
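
A minimal sketch of how such automation might be invoked off-chain with web3.py; the endpoint, contract address, and the distributePrizes function are hypothetical placeholders, as the abstract does not specify Omnify's contract interface.

```python
from web3 import Web3

# Connect to an Ethereum node (endpoint is a placeholder).
w3 = Web3(Web3.HTTPProvider("https://sepolia.infura.io/v3/<project-id>"))

# Minimal ABI for an assumed distributePrizes(uint256) function.
TOURNAMENT_ABI = [{
    "name": "distributePrizes",
    "type": "function",
    "inputs": [{"name": "tournamentId", "type": "uint256"}],
    "outputs": [],
    "stateMutability": "nonpayable",
}]
TOURNAMENT_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder

contract = w3.eth.contract(address=TOURNAMENT_ADDRESS, abi=TOURNAMENT_ABI)

# Trigger automated prize distribution once results are verified on-chain.
tx_hash = contract.functions.distributePrizes(42).transact(
    {"from": w3.eth.accounts[0]}  # tournament organiser's account
)
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print("Prizes settled in block", receipt.blockNumber)
```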

mitoMatch – A Machine Learning Approach to Identify Human Relatedness Using Mitochondrial DNA Hypervariable Region I and II

This project presents an interdisciplinary approach that combines genomics and computing to identify human relatedness by predicting an individual's ethnicity and geographic region using mitochondrial DNA (mtDNA) Hypervariable Regions I and II. Unlike nuclear DNA, mtDNA is maternally inherited and degrades slowly. The ethnicity model is a Gradient Boosting classifier with 95% accuracy, and the geographic-region model is a Random Forest classifier with 90% accuracy. The GenBank data used to train the models was validated using Analysis of Molecular Variance (AMOVA) to confirm variation between samples, and a web application built with React and a Flask API integrates the machine learning models.
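
A minimal sketch of the two-model pipeline, assuming the HVR-I/II sequences are featurised as 3-mer counts; the actual feature engineering and hyperparameters of mitoMatch are not stated in the abstract.

```python
from itertools import product

from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier

KMERS = ["".join(p) for p in product("ACGT", repeat=3)]  # all 64 possible 3-mers

def kmer_counts(seq: str) -> list:
    """Count overlapping 3-mers in an HVR-I/II sequence."""
    return [sum(seq[i:i + 3] == k for i in range(len(seq) - 2)) for k in KMERS]

# Toy stand-in data; the real training data comes from GenBank.
train_seqs = ["ACGTACGTTAGCCA", "TTAGCCGTAACGGT", "GGCATTACGTACGA"]
y_ethnicity = ["A", "B", "A"]
y_region = ["r1", "r2", "r1"]

X = [kmer_counts(s) for s in train_seqs]

ethnicity_model = GradientBoostingClassifier().fit(X, y_ethnicity)  # ~95% reported
region_model = RandomForestClassifier().fit(X, y_region)            # ~90% reported

print(ethnicity_model.predict([kmer_counts("ACGTTAGCCATTAC")]))
```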

KnowledgeNexus – Adaptive AI-Powered Learning Companion for Students

KnowledgeNexus is an intelligent learning platform that combines AI-powered personalization with collaborative learning features. The system processes various learning materials using Azure AI services and creates customized learning roadmaps through a retrieval-augmented generation (RAG) agent, enabling students to engage in interactive learning sessions. The platform applies machine learning to optimize learning paths and to provide personalized study guides and learning insights.
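
An illustrative sketch of the RAG step under stated assumptions: the embed, materials_store, and llm helpers are hypothetical stand-ins, since the abstract names Azure AI services but not the exact APIs used.

```python
def build_roadmap(student_goal, materials_store, embed, llm):
    """Draft a personalized roadmap grounded in retrieved learning material."""
    # 1. Retrieve the learning-material chunks most relevant to the goal.
    query_vec = embed(student_goal)
    chunks = materials_store.search(query_vec, k=5)

    # 2. Ground generation in the retrieved context (the "RAG" part).
    context = "\n\n".join(chunk.text for chunk in chunks)
    prompt = (
        "Using only the material below, draft a step-by-step learning "
        f"roadmap for this goal: {student_goal}\n\n{context}"
    )
    return llm(prompt)
```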

WristGuard: A Deep Learning Approach for Detection and Classification of Wrist Fractures in Athletes

The WristGuard application employs a hybrid approach, combining deep learning models for feature extraction with a stacking ensemble method for classifying wrist fracture types. The algorithm utilizes four pre-trained CNNs, namely MobileNet, ResNet50, DenseNet121, and InceptionV3, to extract high-level features from grayscale X-ray images, which are then used by XGBoost classifiers for fracture classification.
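
A simplified sketch of the hybrid pipeline, using Keras versions of the four named backbones and a single XGBoost model over concatenated features (the stacking ensemble itself is more elaborate); image size and preprocessing are assumptions.

```python
import numpy as np
import tensorflow as tf
from xgboost import XGBClassifier

IMG = (224, 224, 3)
backbones = [
    tf.keras.applications.MobileNet(include_top=False, pooling="avg", input_shape=IMG),
    tf.keras.applications.ResNet50(include_top=False, pooling="avg", input_shape=IMG),
    tf.keras.applications.DenseNet121(include_top=False, pooling="avg", input_shape=IMG),
    tf.keras.applications.InceptionV3(include_top=False, pooling="avg", input_shape=IMG),
]

def extract_features(gray_batch):
    """(N, 224, 224, 1) grayscale X-rays -> concatenated deep features."""
    rgb = np.repeat(gray_batch, 3, axis=-1)  # pretrained nets expect 3 channels
    return np.concatenate([b.predict(rgb, verbose=0) for b in backbones], axis=1)

# Toy stand-in data; the project uses labelled wrist X-ray images.
X_gray = np.random.rand(8, 224, 224, 1).astype("float32")
y = np.array([0, 1, 2, 3] * 2)  # e.g. four fracture classes

clf = XGBClassifier(n_estimators=200).fit(extract_features(X_gray), y)
```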

Chart-Based Stock Market Price Prediction for the CSE Using Deep Learning Explainability

This project aims to develop a stock market price prediction system for the Colombo Stock Exchange (CSE) using candlestick chart images and deep learning techniques integrated with Explainable AI (XAI). Unlike traditional numerical forecasting models, this research focuses on visual patterns within candlestick charts to capture complex price movement trends. The system utilizes Convolutional Neural Networks (CNNs), specifically EfficientNetB7, to extract meaningful features from candlestick chart images. These extracted features are then fed into a Long Short-Term Memory (LSTM) model to perform sequential time-series forecasting and predict future stock prices, including open, high, low, and close values. Additionally, the system incorporates XAI methods to provide visual explanations for the model’s predictions, enhancing transparency and building investor trust. The ultimate goal is to offer an intelligent and interpretable decision support tool for investors and financial analysts, helping them understand not only the predicted outcomes but also the reasoning behind them. By combining image processing, deep learning, and explainability, this project bridges the gap between predictive accuracy and model interpretability in financial forecasting for the CSE.
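
An illustrative Keras sketch of the described CNN-LSTM architecture; the image size, sequence length, and regression head are assumptions, not the project's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

SEQ_LEN, IMG = 30, (224, 224, 3)  # e.g. 30 daily candlestick-chart images

# EfficientNetB7 encodes each chart image into a feature vector.
cnn = tf.keras.applications.EfficientNetB7(
    include_top=False, pooling="avg", input_shape=IMG
)
cnn.trainable = False  # use as a fixed feature extractor

inputs = layers.Input(shape=(SEQ_LEN, *IMG))
feats = layers.TimeDistributed(cnn)(inputs)  # (batch, SEQ_LEN, 2560)
x = layers.LSTM(128)(feats)                  # sequential time-series modelling
outputs = layers.Dense(4)(x)                 # open, high, low, close

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.summary()
```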

Sustainable Vehicle Parking Utilisation to Minimise Traffic Congestion Using Real-Time Computer Vision and Perspective Transformation

Recent decades have seen rapid growth in cities, their populations, and the number of vehicles, causing an upsurge in parking problems that leads to traffic congestion and waste of the limited supply of parking spaces. This research offers a new approach to the problem by combining real-time computer vision with perspective transformation for parking management. The system uses side-view cameras and geometric transformation to monitor parking spaces from non-conventional camera angles and positions. One significant advance is a comprehensive scheme for managing dispersed private and public parking lots, targeting the fragmentation of parking infrastructure within urban centres. The solution applies detection and tracking algorithms to vehicles in real time, with the input adapted to each vehicle type, including cars, motorcycles, and three-wheelers. Perspective transformation and dynamic space allocation are used to map and manage parking spaces and to provide end users with live information about availability. The results of the study revealed increased parking space utilisation and decreased search times, which in turn reduced traffic congestion. The research addresses critical gaps in existing parking management systems, particularly in handling multiple vehicle types and integrating diverse parking spaces within the urban environment.
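
A minimal OpenCV sketch of the perspective-transformation step, with illustrative corner coordinates: a side view of a parking bay is warped to a top-down view in which occupancy reasoning becomes simple geometry.

```python
import cv2
import numpy as np

# Bay corners in the camera image (clockwise from top-left), in pixels.
src = np.float32([[412, 230], [655, 248], [690, 470], [380, 455]])
# Target rectangle in the "bird's-eye" view.
dst = np.float32([[0, 0], [200, 0], [200, 400], [0, 400]])

M = cv2.getPerspectiveTransform(src, dst)

frame = cv2.imread("parking_frame.jpg")  # one frame from the side-view feed
birds_eye = cv2.warpPerspective(frame, M, (200, 400))

# A detector's vehicle centroid can be mapped into bay coordinates the same way:
pt = cv2.perspectiveTransform(np.float32([[[520, 350]]]), M)
print("vehicle position in bay coordinates:", pt.ravel())
```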

Learning State Machines for Adaptive Authentication

Traditional authentication systems using only the username-password method are increasingly inadequate in addressing modern security threats, as they fail to adapt to dynamic risks. This project develops an adaptive authentication system based on the concept of learning state machines, which adjusts security protocols according to user behaviour, device type, and contextual factors, offering a more secure and adaptable approach to user authentication while using less computational power. The system is built around a learned probabilistic finite state machine that considers user behaviour and contextual factors to analyse the risk associated with a login attempt. Depending on the analysis, the system proceeds with adaptive authentication to ensure a secure and user-friendly authentication process. The state machine was implemented using FlexFringe, a framework for learning automata.
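
A conceptual sketch of how a learned probabilistic state machine can drive adaptive authentication; the states, events, and probabilities below are toy values for illustration, not FlexFringe output.

```python
import math

# transitions[state][event] = (next_state, probability learned from login traces)
transitions = {
    "start": {"known_device": ("s1", 0.9), "new_device": ("s1", 0.1)},
    "s1":    {"usual_hours": ("s2", 0.8), "odd_hours": ("s2", 0.2)},
    "s2":    {"usual_location": ("ok", 0.85), "new_location": ("ok", 0.15)},
}

def risk_score(events):
    """Negative log-likelihood of the event sequence; higher means riskier."""
    state, nll = "start", 0.0
    for e in events:
        state, p = transitions[state][e]
        nll -= math.log(p)
    return nll

# An unlikely sequence under the model triggers step-up authentication.
if risk_score(["new_device", "odd_hours", "new_location"]) > 3.0:
    print("High risk: require a second authentication factor")
else:
    print("Low risk: password alone suffices")
```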

ImmersiveSmilePlus: Personalised Pain Management Model with Tactile Feedback in Virtual Reality Exposure Therapy

This study introduces an advanced interactive virtual reality exposure therapy (VRET) system tailored for paediatric pain management, motivated by initial trials that revealed the limitations of VR alone in adult pain relief during cystoscopy and the promising results when it was combined with physical interaction in paediatric patients. Traditional VRET systems lack real-time adaptive feedback and tactile integration, which this system addresses by incorporating a sensor-enabled stress ball with a force-sensitive resistor (FSR) sensor. The sensor captures tactile feedback such as grip force, squeeze frequency, and squeeze duration, transmitting the data wirelessly to a Virtual Reality Therapy application.

Initial trials conducted at Asiri Central Hospital, Sri Lanka showed that VR alone was insufficient for adult pain management, with patients reporting high pain scores (7–9/10) during cystoscopy procedures. In a subsequent trial with dengue patients admitted to Royal Hospital, Sri Lanka, combining VR with physical interaction significantly reduced pain scores in paediatric patients (1–5/10) and enhanced their engagement. These findings shifted the focus towards developing an interactive VR therapy specifically for children, integrating tactile feedback for a more immersive and responsive experience.
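
An illustrative sketch of deriving the named tactile metrics (grip force, squeeze frequency, squeeze duration) from a stream of FSR readings; the sampling rate, threshold, and normalisation are assumed values, and the wireless transport is omitted.

```python
def squeeze_metrics(samples, hz=50, threshold=0.3):
    """samples: normalised FSR force readings in [0, 1], sampled at `hz`."""
    squeezes, durations, in_squeeze, start = 0, [], False, 0
    for i, f in enumerate(samples):
        if f >= threshold and not in_squeeze:        # squeeze begins
            in_squeeze, start, squeezes = True, i, squeezes + 1
        elif f < threshold and in_squeeze:           # squeeze ends
            in_squeeze = False
            durations.append((i - start) / hz)
    peak = max(samples, default=0.0)
    freq = squeezes / (len(samples) / hz) if samples else 0.0
    return {"peak_force": peak, "squeezes_per_s": freq, "durations_s": durations}

print(squeeze_metrics([0.0, 0.5, 0.8, 0.6, 0.1, 0.0, 0.4, 0.2]))
```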

Harmful Visual Content Detection System for Social Media Platforms

Harmful content on social media platforms poses a significant threat to user safety, especially when such content includes violent, abusive, or inappropriate visuals that bypass traditional moderation filters. Current content moderation systems often fail to detect nuanced harmful visuals, such as subtle gestures or weapons shown in harmless contexts. Moreover, most existing approaches focus on text-based filtering or rely on basic object detection, which struggles to understand the real context behind the images and videos shared online. This research addresses the pressing need for a more intelligent and context-aware detection system that can help minimize the spread of harmful visuals in real time.
To solve this issue, a hybrid AI-based system was developed that integrates YOLOv8 for object detection with a vision-language model that analyzes visual context to determine the harmfulness of content. The system detects harmful categories such as alcohol, blood, cigarettes, guns, knives, and insulting gestures in both images and videos, then classifies the content as harmful or non-harmful based on the scenario. An automatic alert mechanism notifies administrators via email when harmful content is detected. The backend was built with Flask, and a user-friendly interface was provided for seamless interaction and visualization of results.
The system demonstrated strong performance with high detection accuracy across various test cases, including challenging scenarios with small object sizes, low lighting, and multiple object categories. Evaluation results showed high precision and recall values, and experts praised the contextual understanding achieved by combining object detection with language reasoning.
The model successfully flagged harmful content and generated contextual justifications, offering a practical solution for enhancing safety on social media platforms. These results indicate that the proposed system is both effective and scalable for real-time harmful visual content moderation.
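
A minimal sketch of the two-stage pipeline using the ultralytics YOLOv8 API; the ask_vlm helper and the weights file are hypothetical stand-ins, as the abstract does not name the specific vision-language model.

```python
from ultralytics import YOLO

HARMFUL = {"alcohol", "blood", "cigarette", "gun", "knife", "insulting gesture"}

# Stand-in weights; the project trains on its own harmful-object classes.
detector = YOLO("yolov8n.pt")

def moderate(image_path, ask_vlm):
    # Stage 1: object detection flags candidate harmful objects.
    result = detector(image_path)[0]
    labels = {result.names[int(c)] for c in result.boxes.cls}
    candidates = labels & HARMFUL

    if not candidates:
        return {"harmful": False, "reason": "no harmful objects detected"}

    # Stage 2: contextual judgement, e.g. a knife in a cooking scene is benign.
    verdict = ask_vlm(
        image_path,
        f"Objects detected: {sorted(candidates)}. "
        "Is this content harmful in context? Answer yes/no and explain.",
    )
    return {"harmful": verdict.lower().startswith("yes"), "reason": verdict}
```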

DEETECTOR

In the evolving technological landscape, the number of deepfake videos has risen dramatically. Deepfakes are artificially generated videos, created using digital software, machine learning, and face swapping, in which images are combined to depict events and statements that never happened. This highlights the need for effective detection techniques that can tell whether a video is authentic or has been artificially generated using AI.
Typically, deepfake videos spread through mobile applications such as WhatsApp, Facebook, Telegram, and many others. This brings to light the problem of individuals being unable to differentiate between real and deepfake videos, creating a need for detection software that people can use directly on their phones.
The author proposes a multimodal approach in which the video modality uses a distilled Vision Transformer model and the audio modality uses a modified lightweight CNN architecture. The research also investigates which feature extraction methods are optimal for lightweight audio deepfake detection.
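
A conceptual PyTorch sketch of the proposed multimodal fusion; both networks and the fusion weight are hypothetical placeholders for the author's distilled ViT and lightweight audio CNN.

```python
import torch

def detect_deepfake(frames, spectrogram, vit, audio_cnn, w=0.6):
    """frames: (N, 3, H, W) video frames; spectrogram: (1, 1, mels, time) audio.

    vit and audio_cnn are assumed to output a single "fake" logit per input.
    """
    with torch.no_grad():
        p_video = torch.sigmoid(vit(frames)).mean()        # average over frames
        p_audio = torch.sigmoid(audio_cnn(spectrogram)).squeeze()
    p_fused = w * p_video + (1 - w) * p_audio              # weighted late fusion
    return {"fake_probability": float(p_fused), "is_fake": float(p_fused) > 0.5}
```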