Cutting Edge '25

The Influence of Brand Awareness on Purchase Intention: Mediating Effects of Loyalty, Perceived Quality, and Brand Association in the Sri Lankan Smartphone Market

“Abstract

This study explores how Brand Loyalty, Perceived Quality, and Brand Association mediate the relationship between Brand Awareness and Purchase Intention for Samsung smartphones in Sri Lanka. Using a quantitative approach, data were collected from 456 respondents through purposive sampling and analyzed in SPSS using correlation, regression, and Sobel's test for mediation.

Findings show that Brand Awareness significantly impacts all three mediators, with Brand Loyalty and Brand Association emerging as strong direct predictors of Purchase Intention. Perceived Quality, while not a significant direct predictor, demonstrated a meaningful mediating role.

The results suggest that brand-driven purchase behavior is influenced primarily by emotional and cognitive factors rather than by awareness alone.

Keywords
Brand Awareness, Brand Loyalty, Perceived Quality, Brand Association, Purchase Intention, Mediation, SPSS, Sri Lanka, Samsung, Brand Equity Models.”
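
For readers unfamiliar with the mediation test named in the abstract, the sketch below shows the Sobel z-statistic. It assumes the path coefficients and standard errors come from two separate regressions (awareness → mediator, and mediator → purchase intention); the numbers in the example are purely illustrative, not values reported by the study.

```python
import numpy as np
from scipy import stats

def sobel_test(a, sa, b, sb):
    """Sobel z for an indirect effect.

    a, sa : coefficient and standard error of the IV -> mediator path
    b, sb : coefficient and standard error of the mediator -> DV path
    """
    z = (a * b) / np.sqrt(b**2 * sa**2 + a**2 * sb**2)
    p = 2 * (1 - stats.norm.cdf(abs(z)))  # two-tailed p-value
    return z, p

# Purely illustrative coefficients (not the study's reported values):
z, p = sobel_test(a=0.52, sa=0.06, b=0.34, sb=0.05)
print(f"Sobel z = {z:.2f}, p = {p:.4f}")
```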

EduGraphAI: Using LLMs for Automated Construction of Pedagogy-aware Educational Knowledge Graphs

“This project explores the use of Large Language Models (LLMs) to automate the construction of pedagogy-aware Educational Knowledge Graphs (EduKGs), addressing the challenges of manual development. While LLMs support self-guided learning, their effectiveness depends on access to structured, syllabus-aligned educational data. Educational Knowledge Graphs offer this structure, but incorporating pedagogical metadata such as learning outcomes and learner states remains a complex and underexplored task.

To overcome this, the project presents a hybrid framework that combines ontological modeling with LLM-driven automation. The framework enables the generation of EduKGs that capture both educational content and pedagogical context. A data pipeline is established to allow natural language querying through Graph Retrieval Augmented Generation (GraphRAG), where the LLM interacts with the EduKG to retrieve and generate responses. The system also integrates learner-specific data into the EduKG to support personalized learning experiences and provide better visibility into alignment with learning goals.
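
The GraphRAG step described above follows a common retrieve-then-generate pattern. The sketch below illustrates that general pattern only, not EduGraphAI's implementation: the Neo4j connection details, the node labels and relationship types (Concept, HAS_OUTCOME, RELATED_TO), and the OpenAI model name are all assumptions made for the example.

```python
from neo4j import GraphDatabase
from openai import OpenAI

# Hypothetical connection details and graph schema; the project's EduKG ontology is richer.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
llm = OpenAI()

def answer(question: str, topic: str) -> str:
    # Retrieve a syllabus-aligned subgraph (concepts and learning outcomes) for the topic.
    cypher = (
        "MATCH (c:Concept {name: $topic})-[:HAS_OUTCOME|RELATED_TO]->(n) "
        "RETURN c.name AS concept, labels(n) AS kind, n.name AS name"
    )
    with driver.session() as session:
        rows = [dict(r) for r in session.run(cypher, topic=topic)]

    # Ground the LLM's answer in the retrieved graph context (the GraphRAG step).
    context = "\n".join(f"{r['concept']} -> {r['kind']}: {r['name']}" for r in rows)
    reply = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the supplied knowledge-graph context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content
```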

The proposed system, EduGraphAI, was evaluated through both qualitative and quantitative methods. Expert interviews assessed the system’s relevance, design accuracy, and adaptability in educational settings. Quantitative evaluation benchmarked the extraction of Learning Outcomes and Concepts against a gold standard dataset using Precision, Recall, and F1 Score. Learning Outcome extraction achieved an F1 Score of 82.35 percent. Concept extraction reached 80.00 percent Precision, 87.50 percent Recall, and an F1 Score of 83.58 percent. These results demonstrate the system’s effectiveness in automating the creation of educational knowledge graphs while maintaining pedagogical integrity.”
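As a reference for the evaluation metrics quoted above, the snippet below scores extracted items against a gold standard using exact set matching; the project itself may have used a more lenient matching scheme, so this is a sketch of the metric definitions rather than its evaluation code.

```python
def precision_recall_f1(predicted, gold):
    """Exact-match scoring of extracted items (e.g. concepts) against a gold standard."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)                                  # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Consistency check of the reported concept-extraction figures:
# P = 0.80 and R = 0.875 give F1 = 2PR / (P + R) ~= 0.8358, i.e. 83.58 percent.
```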

The Effect of User-Generated Content (UGC) Quality on Brand Engagement on Instagram in the Quick Service Restaurant (QSR) Industry in the Colombo District: The Mediating Role of Perceived Value

“User-Generated Content (UGC) plays a vital role in driving brand-consumer interactions on visual platforms like Instagram. However, limited empirical research has examined how the quality of UGC—defined by content, design, and technical dimensions—impacts brand engagement in the Quick Service Restaurant (QSR) industry, particularly in emerging markets such as Sri Lanka. Moreover, the mediating role of perceived value (emotional and functional) remains underexplored.

This quantitative study employed a deductive approach grounded in positivist philosophy, using judgmental purposive sampling to collect responses from 390 Instagram users in Colombo. Data were analyzed using SPSS v29, applying correlation, regression, and Sobel mediation tests.

Findings reveal that high-quality UGC significantly improves both perceived value and brand engagement. Perceived value was found to significantly mediate this relationship, reinforcing the Stimulus-Organism-Response (S-O-R) framework and Uses and Gratifications Theory. Emotional resonance and informational clarity were key in strengthening user-brand interaction.

While limited to Instagram and cross-sectional data, the study offers key implications for QSR marketers to encourage visually appealing, emotionally engaging, and functionally useful UGC. Future research could explore longitudinal effects, alternative platforms, or other mediators like brand trust and community belonging.”
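
The regression-and-Sobel procedure described in this abstract can be reproduced outside SPSS as well. The sketch below estimates the two mediation paths with ordinary least squares on synthetic data; the variable names, coefficients, and noise levels are invented for illustration and are not the study's measures or results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic illustrative data; in the study these would be averaged Likert-scale scores.
rng = np.random.default_rng(0)
n = 390
ugc_quality = rng.normal(size=n)
perceived_value = 0.5 * ugc_quality + rng.normal(scale=0.8, size=n)
brand_engagement = 0.4 * perceived_value + 0.2 * ugc_quality + rng.normal(scale=0.8, size=n)
df = pd.DataFrame({"ugc_quality": ugc_quality,
                   "perceived_value": perceived_value,
                   "brand_engagement": brand_engagement})

# Path a: UGC quality -> perceived value
m_a = sm.OLS(df["perceived_value"], sm.add_constant(df[["ugc_quality"]])).fit()
# Paths b and c': perceived value (and UGC quality) -> brand engagement
m_b = sm.OLS(df["brand_engagement"],
             sm.add_constant(df[["ugc_quality", "perceived_value"]])).fit()

a, sa = m_a.params["ugc_quality"], m_a.bse["ugc_quality"]
b, sb = m_b.params["perceived_value"], m_b.bse["perceived_value"]
# a, sa, b, sb can then be plugged into the Sobel z statistic sketched after the first abstract.
```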

Enhancing Tip-of-the-Tongue IR Systems

“Tip-of-the-Tongue (ToT) retrieval tackles the challenge of identifying known items when users recall only vague or fragmentary details. Conventional lexical methods struggle to bridge the semantic gap inherent in ambiguous queries. In this study, we present a compact two-stage pipeline built around the all-MiniLM-L6-v2 sentence transformer. First, we fine-tune it on the TREC ToT 2023 Q&A pair dataset, achieving a notable increase in single-stage retrieval performance (nDCG@1000 rising from 0.0322 to 0.1437 and MRR from 0.0005 to 0.0690). Second, we apply lightweight neural re-ranking—employing both a MonoT5 pointwise re-ranker and a MiniLM-based cross-encoder—and fuse their outputs via Reciprocal Rank Fusion (RRF). While individual re-rankers yielded mixed results, RRF consistently enhanced both early- and deep-list metrics (nDCG@1000 up to 0.1498, MRR to 0.0861). Finally, we report a small-scale zero-shot trial of four GPT variants, observing that “mini” models outperform their full-size counterparts in top-3 accuracy. Our findings demonstrate that a resource-efficient, fine-tuned transformer, when coupled with strategic fusion of lightweight re-rankers, can deliver improved performance on ToT known-item retrieval tasks.”
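
Reciprocal Rank Fusion, which the abstract credits with the most consistent gains, is simple to state: each document's fused score is the sum over rankers of 1/(k + rank). A minimal sketch follows; the constant k = 60 is the value commonly used in the RRF literature and is an assumption here, not a parameter reported by the authors.

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document ids.

    rankings : list of lists, each ordered best-first by one re-ranker
    k        : smoothing constant (60 is the conventional default)
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# e.g. fusing a MonoT5-style list with a cross-encoder-style list:
fused = reciprocal_rank_fusion([["d3", "d1", "d7"], ["d1", "d3", "d9"]])
```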

Deep Learning Models for Canine Dermatology: A Comparative Study Focused on Sri Lanka

Canine skin diseases, including canine scabies, fungal infections, and hypersensitivity allergies, are prevalent and require accurate diagnosis for effective treatment. However, traditional veterinary diagnostic methods face challenges such as similar clinical symptoms, limited accessibility to veterinary professionals, and misdiagnosis due to lack of expertise, particularly in rural areas of Sri Lanka. To address these challenges, this study explores deep learning-based classification for automated detection of canine skin diseases. A custom dataset of 2,511 images was collected and augmented to 12,565 images to improve model generalization. A comparative analysis was conducted using five deep learning models: a baseline CNN, a Hybrid CNN + ResNet50, ResNet152V2, EfficientNetB1, and MobileNetV3. The models were trained and evaluated based on their accuracy, loss, and generalization performance. The results indicate that EfficientNetB1 achieved the highest validation accuracy (98.96%) with a low loss (0.1278), followed closely by MobileNetV3 (98.76% accuracy). The Hybrid CNN + ResNet50 model balanced accuracy and efficiency, whereas the baseline CNN exhibited overfitting. Findings suggest EfficientNetB1 and MobileNetV3 as optimal models for real-world deployment due to their accuracy and computational efficiency. Future work will focus on expanding the dataset, improving model interpretability using explainable AI techniques, and optimizing models for real-time deployment in veterinary applications. This research contributes to the advancement of AI-powered diagnostic tools that can assist veterinarians and pet owners in early and accurate detection of canine skin diseases.
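
For context on how a model such as EfficientNetB1 is typically adapted to a task like this, the sketch below sets up a standard Keras transfer-learning classifier. The class count, input resolution, and hyperparameters are placeholders; the study's actual architecture and training configuration may differ.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical settings; the study's own dataset, classes, and hyperparameters are not shown here.
NUM_CLASSES = 3          # e.g. scabies, fungal infection, hypersensitivity allergy
IMG_SIZE = (240, 240)    # EfficientNetB1's default input resolution

base = tf.keras.applications.EfficientNetB1(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False   # freeze the ImageNet backbone for initial training

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```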

QuanNetDetect – Quantum Hybrid Deep Learning Model Framework for Detecting Encrypted TLS Malicious Network Traffic

This research project proposes a novel approach for detecting malicious encrypted TLS network traffic using quantum deep learning. The model performs detection from packet-level metadata alone, without decrypting the traffic. The project involved both binary classification of TLS-based malicious network traffic and multiclass classification of TLS-based encrypted attacks, using quantum computing and quantum deep learning.
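
By way of illustration, one common way to build a hybrid quantum-classical classifier over tabular flow metadata is a variational quantum circuit, as in the PennyLane sketch below. This shows the general pattern only; the number of qubits, the feature encoding, and the circuit ansatz are assumptions for the example and not QuanNetDetect's actual architecture.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4  # hypothetical: one qubit per selected packet-level metadata feature
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def vqc(features, weights):
    # Encode normalized metadata (e.g. packet sizes, inter-arrival times) as rotation angles.
    qml.AngleEmbedding(features, wires=range(n_qubits))
    # Trainable entangling layers form the quantum part of the hybrid model.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

weights = np.random.random(qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits))

def malicious_score(features):
    # Map the [-1, 1] expectation value to a [0, 1] score for binary classification.
    return (vqc(features, weights) + 1) / 2
```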

MADQN-AV: A Multi-Agent Deep Reinforcement Learning Framework for Emergent Cooperation and Conflict Resolution in Autonomous Vehicle Intersection Navigation

“The increasing adoption of autonomous vehicles (AVs) presents significant challenges in managing intersections efficiently while minimizing conflicts. Traditional centralized traffic control systems struggle with scalability and adaptability, whereas decentralized approaches often face coordination and safety issues. Ensuring real-time decision-making and cooperation among AVs in dynamic, multi-agent environments remains a critical challenge.

This research investigates a Multi-Agent Deep Reinforcement Learning (MADRL) framework, MADQN-AV, for Emergent Cooperation and Conflict Resolution in AVs. The approach utilizes decentralized learning, allowing AVs to develop cooperative driving strategies based on observed behaviors and shared environmental interactions, rather than explicit communication. The proposed model is evaluated against a baseline algorithm using key performance metrics, including collision rate, throughput, average delay, and decision accuracy.

Experimental results indicate that the MADQN-AV framework significantly reduces collision rates, enhances traffic efficiency, and maintains real-time decision-making capabilities with minimal computational overhead. Comparative analysis with the baseline demonstrates superior scalability and adaptability, validating the potential of decentralized multi-agent learning in autonomous traffic systems.”
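
To ground the MADQN terminology, the sketch below shows a per-agent Q-network with epsilon-greedy action selection, the core building block of a decentralized multi-agent DQN. Observation and action dimensions, layer sizes, and the action set are assumptions made for illustration; they are not taken from the paper.

```python
import random
import torch
import torch.nn as nn

# Hypothetical dimensions; the paper's observation and action spaces are not specified here.
OBS_DIM = 12      # e.g. ego speed/position plus nearby vehicles' relative states
N_ACTIONS = 3     # e.g. accelerate, keep speed, decelerate

class QNetwork(nn.Module):
    """Per-agent Q-network: each AV learns its own action values (decentralized MADQN)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_ACTIONS),
        )

    def forward(self, obs):
        return self.net(obs)

def select_action(q_net, obs, epsilon):
    """Epsilon-greedy action selection from a single agent's local observation."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.as_tensor(obs, dtype=torch.float32)).argmax())
```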

MIMO – A Gamification-Based Multi-Modal Approach to Enhance Cognitive Stimulation and Emotional Awareness in Children with Down Syndrome

“Children with Down syndrome often face challenges in their emotional and cognitive development, impacting areas like social skills, memory, attention, and spatial awareness. While previous research has explored gamification and AI in clinical settings, there’s a clear need for direct intervention through specific, cohesively designed assistive learning applications that address these unique cognitive and emotional needs.

A Novel Gamification-Based Approach
This research proposes a novel gamification-based multimodal approach leveraging modern deep-learning techniques to bridge this gap. The goal is to create a single, comprehensive assistive application that integrates three crucial capabilities:

Interactive Storytelling Module: This module allows children to engage with various awareness scenarios, receiving dynamic feedback generated by OpenAI’s GPT-4 model. Their progress is systematically documented, providing valuable insights for educators.

Facial Emotion Recognition System: This system uses a Fractal Neural Network enhanced by a secondary HDC classifier and optimized through a Genetic Algorithm. To ensure transparency, Explainable AI (XAI), specifically Grad-CAM, is incorporated to visualize the system’s decision-making process.
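
As a reference for the Grad-CAM step mentioned above, the sketch below computes a class-activation heatmap for a generic Keras convolutional model; the model, layer name, and preprocessing are placeholders, and the study's Fractal Neural Network plus HDC classifier is not reproduced here.

```python
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index):
    """Generic Grad-CAM heatmap; model and layer name are placeholders for any Keras CNN."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])   # add a batch dimension
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)          # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))          # global-average-pool the gradients
    cam = tf.reduce_sum(weights[:, None, None, :] * conv_out, axis=-1)[0]
    cam = tf.nn.relu(cam)                                 # keep only positively contributing regions
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()    # normalize to [0, 1]
```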

Speech Emotion Recognition System: Built on a hybrid feature set, this system combines traditional audio features (such as Chroma, Mel Spectrogram, and Spectroid) with advanced descriptors like Topological Data Analysis, Spectral Flux, Glottal Dynamics, and Teager Energy Operator (TEO) auto-envelope features. The model design utilizes Ordinary Differential Equations (ODE), Bidirectional Gated Recurrent Units (BiGRU), and attention mechanisms to enhance emotional categorization accuracy.
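
To illustrate the kind of hand-crafted audio descriptors listed above, the sketch below extracts a few of them with librosa; the feature subset, sample rate, and mean pooling are simplified assumptions and cover only a small part of the descriptor set the study combines.

```python
import numpy as np
import librosa

def extract_features(wav_path):
    """Compact illustrative feature vector (a small subset of the study's descriptors)."""
    y, sr = librosa.load(wav_path, sr=16000)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)          # pitch-class energy profile
    mel = librosa.feature.melspectrogram(y=y, sr=sr)          # Mel spectrogram
    flux = librosa.onset.onset_strength(y=y, sr=sr)           # frame-to-frame spectral change
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # spectral "centre of mass"
    return np.concatenate([
        chroma.mean(axis=1),
        librosa.power_to_db(mel).mean(axis=1),
        [flux.mean(), centroid.mean()],
    ])
```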

Accuracy and Future Directions
The proposed study demonstrated a multimodal accuracy of 81% for emotion recognition and 51% for speech emotion recognition. The research acknowledges limitations in the generalizability of the datasets used for each modality, which constrained the achieved accuracy levels. This opens a promising avenue for further research to enhance the accuracy and robustness of assistive learning applications for children with Down syndrome.”

HarvestSmart

“Manual assessment of oil palm fresh fruit bunch (FFB) ripeness in plantations is often inconsistent, time-consuming, and heavily reliant on individual experience. These inefficiencies can lead to mistimed harvesting, reduced oil quality, and lower yields. HarvestSmart addresses this challenge by providing a mobile-based, AI-powered solution for real-time ripeness detection and daily harvest reporting.

The system leverages a lightweight deep learning model based on YOLOv10, enhanced with Low-Rank Adaptation (LoRA) to significantly reduce the number of trainable parameters and computational load. The model was optimized and converted to TensorFlow Lite for seamless execution on Android devices, even in offline rural environments. Farmers can use the app to scan oil palm bunches in the field, receive immediate classification results—such as ripe, underripe, overripe, or abnormal—and generate PDF-based harvest summaries.
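
For orientation, a typical train-and-export path for a YOLO detector destined for Android looks like the sketch below, using the ultralytics package. The weights file, dataset YAML, and hyperparameters are placeholders, and the LoRA adaptation and exact tooling used by HarvestSmart are not shown.

```python
from ultralytics import YOLO

# Hypothetical weights file; the project's LoRA-adapted checkpoint is not public here.
model = YOLO("yolov10n.pt")

# Train on a ripeness dataset described by a YOLO-format data.yaml (classes such as
# ripe, underripe, overripe, abnormal); paths and hyperparameters are placeholders.
model.train(data="ffb_ripeness.yaml", epochs=100, imgsz=640)

# Export to TensorFlow Lite for on-device, offline inference on Android.
model.export(format="tflite", int8=True)
```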

In addition to detection, the app features a reporting module that archives daily results, helping farmers and supervisors monitor harvest quality and make data-driven decisions over time. The interface is built with simplicity in mind, ensuring ease of use for non-technical users.

By integrating AI into everyday farming workflows, HarvestSmart improves productivity, minimizes waste, and empowers smallholder farmers with accessible agricultural technology—turning smartphones into powerful harvest tools.”