  1. Date: 5th August 2024

Polyglot GPT Engine: Generative Pre-Trained Transformer Based Multilingual Chatbot

Vinodh Gunasekera, Ph.D.

Abstract: This invention introduces a robust multilingual chatbot system that amalgamates a semantic translation engine with a potent generative pre-trained transformer. This system enables seamless knowledge dissemination across language barriers, thereby empowering users to engage in natural language conversations in their preferred language.
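
The translate-then-generate-then-back-translate flow described above can be sketched as below. The word-lookup "translation" tables and the echo-style generator are hypothetical toy stand-ins for the semantic translation engine and the pre-trained transformer; they only illustrate the pipeline shape, not the patented implementation.

```python
# Toy stand-ins: word-level lookup tables in place of a semantic
# translation engine, and an echo function in place of the GPT model.
ES_TO_EN = {"hola": "hello", "mundo": "world"}
EN_TO_ES = {v: k for k, v in ES_TO_EN.items()}

def translate(text, table):
    # Word-by-word toy translation; a real engine would be semantic.
    return " ".join(table.get(w, w) for w in text.lower().split())

def generate_reply(prompt_en):
    # Stub for the pre-trained transformer: just echoes the prompt.
    return f"you said: {prompt_en}"

def chat(user_text, to_en, from_en):
    prompt_en = translate(user_text, to_en)  # user language -> English
    reply_en = generate_reply(prompt_en)     # generate in English
    return translate(reply_en, from_en)      # English -> user language

print(chat("hola mundo", ES_TO_EN, EN_TO_ES))  # → you said: hola mundo
```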

  2. Date: 19th March 2025

A System and Method for Remote Stethoscope for Telemedicine

Vinodh Gunasekera, Ph.D.

Abstract: The invention, titled “A System and Method for Remote Stethoscope for Telemedicine,” introduces a remotely operable stethoscope system to enhance telemedicine by enabling real-time audio transmission of medical sounds. The system includes a low-frequency audio pick-up device on the patient side and a high-fidelity sound playback device on the doctor’s side, enabling remote consultations with improved diagnostic capabilities. Advanced features include Angle of Arrival (AoA) analysis, stereo 3D sound, AI-assisted diagnostics, and secure recording and playback capabilities.
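
For the Angle of Arrival (AoA) feature, one standard two-microphone approach estimates the arrival angle from the inter-microphone time delay via the far-field relation sin(θ) = c·Δt/d. The sketch below assumes sound travelling in air at 343 m/s and a hypothetical 2 cm microphone spacing; the actual sensor geometry and medium are not specified in the abstract.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air (assumption; body tissue differs)

def angle_of_arrival(delay_s, mic_spacing_m):
    """Estimate AoA (radians from broadside) for a two-microphone pair.

    delay_s: time-difference-of-arrival between the two microphones.
    Uses the far-field relation sin(theta) = c * delay / spacing.
    """
    s = SPEED_OF_SOUND * delay_s / mic_spacing_m
    s = max(-1.0, min(1.0, s))  # clamp numerical overshoot
    return math.asin(s)

# A sound arriving ~29 microseconds earlier at one mic of a 2 cm pair
# corresponds to roughly a 30-degree arrival angle:
theta = angle_of_arrival(29.15e-6, 0.02)
print(round(math.degrees(theta), 1))  # → 30.0
```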

  3. Date: 9th October 2025

Personalized Adaptive Auscultation System and Method for Physician-Specific Hearing Compensation

Vinodh Gunasekera, Ph.D.

Abstract: The invention introduces a personalized auscultation system using an electronic stethoscope that adapts diagnostic sound perception to each physician’s hearing profile. During calibration, reference tones or synthetic body sounds generate a frequency-specific profile. Real-time adaptive filtering, selective amplification, and equalization then process incoming signals based on this profile. The system combines digital signal processing and AI-driven algorithms to enhance signal-to-noise ratio and diagnostic clarity, refining compensation for user behavior such as repeated volume changes. Multiple user profiles are supported for shared environments. A playback module re-renders recorded sounds per physician profile for training and consultation. By optimizing output to individual auditory perception, the invention enhances detection of murmurs, breath sounds, Korotkoff sounds, and other body sounds beyond conventional stethoscopes.
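
The core compensation idea (convert a per-band hearing-loss profile into per-band amplification) can be sketched as below. The three frequency bands and the 6 dB example loss are illustrative assumptions; the patented system derives the profile from calibration tones and applies real-time adaptive filtering, which this static sketch does not attempt.

```python
# Hypothetical band edges for low / mid / high auscultation ranges.
BANDS_HZ = [(20, 200), (200, 1000), (1000, 4000)]

def band_gains(hearing_loss_db):
    """Convert a physician's per-band hearing loss (dB) into linear
    gains that boost each band to compensate: gain = 10^(loss/20)."""
    return [10 ** (loss / 20.0) for loss in hearing_loss_db]

def apply_profile(band_amplitudes, hearing_loss_db):
    # Multiply each band's amplitude by its compensating gain.
    gains = band_gains(hearing_loss_db)
    return [a * g for a, g in zip(band_amplitudes, gains)]

# Physician with a 6 dB loss in the low band (where murmurs live)
# gets that band amplified roughly 2x:
out = apply_profile([1.0, 1.0, 1.0], [6.0, 0.0, 0.0])
print([round(x, 2) for x in out])  # → [2.0, 1.0, 1.0]
```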

  4. Date: 14th October 2025

A System and Method for Localizing Three-Dimensional Acoustic Sounds and Internal Body Sounds with an Audio Cardiograph Using a Multiple-Microphone Electronic Stethoscope

Vinodh Gunasekera, Ph.D.

Abstract: The invention provides a system and method for three-dimensional (3D) localization and visualization of internal body sounds using a multi-sensor electronic stethoscope. A wearable array of high-speed acoustic sensors captures simultaneous auscultation data, which a digital signal-processing (DSP) unit analyzes to compute time-difference-of-arrival (TDoA), amplitude differential, and phase correlation of acoustic signals. These parameters are converted by a mapping engine into a real-time 3D spatial map of sound origins within the body. The system enables clinicians to visualize cardiac, pulmonary, and vascular sound sources with millimeter-scale precision. A tracking module stores spatially localized auscultations to follow changes in disease severity over time. Additionally, the invention enables non-invasive fetal monitoring by detecting and mapping fetal cardiac and vascular sounds to assess growth and development. The invention introduces a novel form of acoustic imaging, integrating signal localization, visualization, machine learning, and telemetric diagnostic intelligence in a wearable diagnostic platform.
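
A minimal sketch of the TDoA step: cross-correlate two sensor channels and take the lag with the highest correlation. This is pure-Python brute force on synthetic pulses for illustration only; a real DSP unit would use efficient correlation (e.g. GCC-PHAT) across many channels, and the lag in samples would be divided by the sampling rate to obtain seconds.

```python
def tdoa_samples(ref, other):
    """Return the integer lag (in samples) at which `other` best
    correlates with `ref`; positive means `other` arrives later."""
    n = len(ref)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-n + 1, n):
        # Correlate ref[i] against other[i + lag] over the overlap.
        score = sum(ref[i] * other[i + lag]
                    for i in range(max(0, -lag), min(n, n - lag)))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

pulse = [0, 0, 1, 2, 1, 0, 0, 0]          # synthetic heart-sound pulse
delayed = [0, 0, 0, 0, 1, 2, 1, 0]        # same pulse, 2 samples later
print(tdoa_samples(pulse, delayed))       # → 2
```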

  5. Date: 25th October 2025

Continuous Remote Acoustic Heart Rate and Rhythm Monitoring System Including Inter-Beat Interval (IBI) Using Smartphone-Connected Electronic Stethoscope with AI Analysis

Vinodh Gunasekera, Ph.D.

Abstract: An acoustic cardiac monitoring system integrates an electronic stethoscope, mobile device, and AI platform for remote detection of abnormal cardiac rhythms and inter-beat intervals (IBI). The stethoscope captures phonocardiography signals transmitted to a smartphone for preprocessing and cloud or local AI analysis. The AI engine determines heart rate, rhythm, IBI variability, and detects arrhythmias such as atrial and ventricular abnormalities. Clinically significant findings trigger alerts on the patient’s device and at a remote monitoring center. Optional embodiments include multi-sensor acoustic arrays, hybrid acoustic-electrocardiographic acquisition, auxiliary physiological data integration, and incorporation of behavioral or biochemical biomarkers. Abnormal IBI patterns may indicate stress syndromes or conditions such as PTSD, depression, anxiety, and autonomic dysfunction.
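
Once S1 beat times are extracted from the phonocardiogram, inter-beat intervals and a standard variability measure follow directly. The sketch below uses RMSSD, a common short-term HRV statistic, as a plausible stand-in for the abstract's "IBI variability"; the beat times are illustrative, not real data.

```python
def inter_beat_intervals(beat_times_s):
    """Successive differences of detected S1 times, in seconds."""
    return [b - a for a, b in zip(beat_times_s, beat_times_s[1:])]

def rmssd_ms(ibis_s):
    """Root mean square of successive IBI differences, in ms —
    a standard short-term heart-rate-variability measure."""
    diffs = [(b - a) * 1000.0 for a, b in zip(ibis_s, ibis_s[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

beats = [0.0, 0.80, 1.62, 2.40, 3.22]  # detected S1 times (seconds)
ibis = inter_beat_intervals(beats)
print([round(x, 2) for x in ibis])     # → [0.8, 0.82, 0.78, 0.82]
print(round(rmssd_ms(ibis), 1))        # → 34.6
```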

  6. Date: 1st December 2025

Wearable Auscultation and Inertial Monitoring System with Adaptive Emergency Telemetry

Vinodh Gunasekera, Ph.D.

Abstract: A wearable auscultation system is disclosed comprising a spatially distributed microphone array, inertial measurement sensors, and an AI processor for continuous health monitoring. The system utilizes sensor fusion to distinguish between accidental falls and medically induced collapses by correlating cardiac acoustics with inertial data. Syncope is detected through physiological signatures, including abrupt bradycardia, asystolic pauses, and irregular inter-beat intervals matched with collapse sequences. Furthermore, the system identifies cardiac emergencies, such as arrhythmias and myocardial infarction, by detecting acoustic abnormalities like muffled S1/S2 tones. Upon identifying a high-risk event, the wearable triggers a paired smartphone to transmit event classification, geolocation, and sensor telemetry to emergency responders. A critical feature includes dynamic alert management: if the system detects physiological recovery following an event—for instance, a patient recovering from syncope—it automatically transmits a status update to de-escalate the severity level for emergency personnel, optimizing response prioritization.
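
The fusion-and-de-escalation logic can be sketched as a toy rule set: an IMU impact alone reads as an accidental fall, while an impact preceded by abrupt bradycardia or an asystolic pause suggests syncope, and a detected recovery downgrades the alert. All thresholds and labels below are illustrative placeholders, not clinical values and not the patented classifier.

```python
IMPACT_G = 3.0    # accel magnitude treated as a collapse/impact (placeholder)
BRADY_BPM = 40.0  # heart rate below this counts as abrupt bradycardia

def classify_event(peak_accel_g, hr_before_bpm, asystole_pause_s):
    """Correlate inertial impact with preceding cardiac acoustics."""
    if peak_accel_g < IMPACT_G:
        return "no-event"
    if hr_before_bpm < BRADY_BPM or asystole_pause_s > 3.0:
        return "syncope-suspected"
    return "accidental-fall"

def alert_level(event, recovered):
    # Dynamic alert management: de-escalate if recovery is detected.
    if event == "no-event":
        return "none"
    return "advisory" if recovered else "critical"

e = classify_event(peak_accel_g=4.2, hr_before_bpm=32.0, asystole_pause_s=0.0)
print(e, alert_level(e, recovered=True))  # → syncope-suspected advisory
```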

  7. Date: 12th November 2025

AI-Integrated Acoustic-Respiratory Inertial Measurement Unit (IMU) System for Continuous Autonomic and Metabolic Monitoring

Vinodh Gunasekera, Ph.D.

Abstract: This invention discloses an AI-integrated wearable system that combines acoustic, respiratory, and inertial sensing for continuous assessment of autonomic and metabolic function. Acoustic transducers capture heart and lung sounds to derive inter-beat intervals and heart-rate variability (HRV), while inertial-measurement-unit (IMU) data quantify motion intensity and posture. An adaptive AI processor fuses these multimodal inputs to compute a Metabolic Load Percentage (ML%), representing real-time energy expenditure relative to baseline, and an Autonomic-Metabolic Index (AMI), reflecting sympathetic-parasympathetic balance normalized for activity level. A neural-Bayesian calibration engine continuously adapts model coefficients to individual physiology and sensor drift. The system operates locally or via cloud inference, transmitting results to mobile or clinical dashboards. Applications include military fatigue detection, athletic overtraining prevention, HRV/PTSD monitoring, and post-COVID autonomic dysfunction tracking, providing a compact, low-cost, and intelligent platform for continuous monitoring of human physiological resilience and metabolic state.
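
The two derived indices can be illustrated with plausible placeholder formulas: a Karvonen-style heart-rate-reserve percentage for ML%, and an HRV-to-baseline ratio normalized by IMU motion intensity for AMI. The abstract does not publish the actual ML% / AMI mathematics or the neural-Bayesian calibration, so the formulas below are assumptions for illustration only.

```python
def metabolic_load_pct(hr_bpm, hr_rest_bpm, hr_max_bpm):
    """Placeholder ML%: heart-rate reserve as a percentage
    (Karvonen-style), standing in for energy expenditure vs baseline."""
    return 100.0 * (hr_bpm - hr_rest_bpm) / (hr_max_bpm - hr_rest_bpm)

def autonomic_metabolic_index(rmssd_ms, rmssd_baseline_ms, motion_intensity):
    """Placeholder AMI: HRV relative to baseline, normalized for
    activity level. motion_intensity: 0 (rest) .. 1 (maximal), from IMU."""
    hrv_ratio = rmssd_ms / rmssd_baseline_ms
    return hrv_ratio / (1.0 + motion_intensity)

# HR 120 bpm between a 60 bpm rest and 180 bpm max -> 50% load:
print(round(metabolic_load_pct(120, 60, 180), 1))          # → 50.0
# HRV at 75% of baseline during moderate motion:
print(round(autonomic_metabolic_index(30.0, 40.0, 0.5), 2))  # → 0.5
```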