ZKSafe: Enhancing Crypto Wallet Usability and Security Through Zero-Knowledge Proof-Based Authentication
Security and usability of cryptocurrency wallets remain central obstacles to blockchain adoption. Seed phrase-based physical wallets…
This project leverages Automatic Speech Recognition (ASR) in the medical domain, focusing on refining transcription accuracy through a Large Language Model (LLM)-based approach. The central challenge addressed is manual clinical documentation, which is time-consuming and laborious. ASR can transcribe medical conversations, but it still struggles with the intricacies of patient-doctor consultations. These problems arise mostly from the complexity of medical language, nuanced phrasing, detailed medical terminology, and speakers with diverse accents. General-purpose ASR models deployed in such domain-specific settings typically perform poorly on specialised vocabulary and make frequent transcription errors, and accent variation further disrupts word recognition. This issue is serious because inaccurate transcriptions may affect patients' diagnosis and treatment. An illustrative example is "Cystic fibrosis" being misinterpreted as "65 Roses".

This work analyses the interconnections between context, medical terminology, and accents in ASR output, identifying where current technology falls short and thereby enhancing transcription accuracy. The approach builds a medical ASR system that considers context and adapts to the accents of both patients and healthcare providers. The first ASR component of the developed system recorded a Word Error Rate (WER) of 12%; a Large Language Model (LLM) then corrects the errors made by speech recognition, enabling more complete sentence-level understanding. The system relies on deep learning methods, in particular neural networks and contextual understanding, for speech recognition in the medical domain.
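The abstract reports a Word Error Rate (WER) of 12% for the ASR component. As a minimal sketch of how that metric is computed, the following implements WER with a standard Levenshtein edit distance over word tokens; the function name and the example sentences are illustrative, not taken from the project.

```python
def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit-distance table over word tokens.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# The abstract's example error: "cystic fibrosis" misheard as "65 roses"
# yields 2 substitutions out of 4 reference words.
print(wer("patient has cystic fibrosis", "patient has 65 roses"))  # → 0.5
```

A WER of 0.12 (12%) would mean roughly one word in eight of the reference transcript is substituted, deleted, or inserted.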
The outcome of this project is expected to guide the optimal deployment of ASR in healthcare settings. The research addresses the critical need for a domain-specific ASR system that adapts to diverse accents and is contextually aware of medical terminology. As a result, it contributes to improving overall patient satisfaction and the productivity of medical documentation within clinical settings.
Existing traditional e-commerce systems struggle to interpret the user's query and ultimately deliver relevant product recommendations…
The wristGuard application employs a hybrid approach, combining deep learning models for feature extraction and a stacking ensemble method for classification…
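The teaser above names a stacking ensemble for classification. A minimal sketch of that technique follows, using scikit-learn; the synthetic data, the choice of base learners, and the meta-learner are illustrative stand-ins, not the project's actual pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in for feature vectors (e.g. ones extracted by a deep model).
X, y = make_classification(n_samples=500, n_features=16, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Classic stacking: base learners' out-of-fold predictions become the
# inputs of a logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),
)
stack.fit(X_train, y_train)
print(round(stack.score(X_test, y_test), 2))  # held-out accuracy
```

Stacking is typically chosen over simple voting when the base learners' errors are complementary, letting the meta-learner weight each model where it is strongest.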
The COVID-19 pandemic changed IT industry processes by hastening the introduction of remote work. In virtual workplaces, detecting employee engagement…
VRoxel is an innovative procedural generation system that addresses critical challenges in creating large-scale 3D environments for VR applications. The…
Block-based programming (BBP) has proven effective in teaching programming concepts, offering a visual and more intuitive approach than traditional text-based…
Copyright © 2025 - Informatics Institute of Technology - All Rights Reserved. Concept, Design & Development by IIT Student Union