Smart technologies for infection control and antimicrobial stewardship in critical care settings: A narrative review
Authors
- Siddhant Saswat, Department of Critical Care Medicine, Institute of Medical Science and SUM Hospital, Siksha ‘O’ Anusandhan (Deemed to be University), Kalinga Nagar, Bhubaneswar, Odisha, India
- Sanghamitra Mishra, Department of Critical Care Medicine, Institute of Medical Science and SUM Hospital, Siksha ‘O’ Anusandhan (Deemed to be University), Kalinga Nagar, Bhubaneswar, Odisha, India
- Sasmita Mohanty, Department of Critical Care Medicine, Institute of Medical Science and SUM Hospital, Siksha ‘O’ Anusandhan (Deemed to be University), Kalinga Nagar, Bhubaneswar, Odisha, India
- Manoranjan Dash, Department of Business and Computer Studies, Siksha ‘O’ Anusandhan (Deemed to be University), Bhubaneswar, Odisha, India
- Bhagyashree Mohanty, Department of Critical Care Medicine, Institute of Medical Science and SUM Hospital, Siksha ‘O’ Anusandhan (Deemed to be University), Kalinga Nagar, Bhubaneswar, Odisha, India
- Anasuya Priyadarshini, Department of Critical Care Medicine, Institute of Medical Science and SUM Hospital, Siksha ‘O’ Anusandhan (Deemed to be University), Kalinga Nagar, Bhubaneswar, Odisha, India
Copyright
Published by Bangabandhu Sheikh Mujib Medical University (currently Bangladesh Medical University).
Background: Infection control and antimicrobial stewardship (AMS) remain major challenges in intensive care units (ICUs), driven by delayed diagnostics, extensive empirical antimicrobial use, and escalating antimicrobial resistance. Conventional microbiology-based approaches are often slow and resource-intensive, highlighting the need for adaptive, data-driven solutions. This narrative review synthesises current evidence on the application of smart technologies, including artificial intelligence (AI), machine learning (ML), the internet of things (IoT), and telemedicine, to enhance infection prevention, early diagnosis, and antimicrobial optimisation in critical care.
Methods: A narrative review of 30 peer-reviewed publications was conducted, including original studies, reviews, case studies, and policy reports published between 2020 and 2025. Literature was retrieved from PubMed, Scopus, and Google Scholar, with additional targeted searches focusing on digital stewardship frameworks and ICU-based technological applications. Studies addressing AI, ML, IoT, or telemedicine-enabled infection control or AMS in critical care settings were included.
Results: AI and ML models showed strong performance in predicting sepsis, ventilator-associated pneumonia, and multidrug-resistant organism risk, with several studies reporting area under the receiver operating characteristic curve values exceeding 0.80 despite methodological heterogeneity and predominantly retrospective designs. IoT-based systems, such as wearable sensors, smart environmental monitoring, and automated surveillance, enabled real-time physiological and environmental data capture, improving early detection and compliance monitoring. Telemedicine platforms expanded access to infectious disease expertise and AMS services, improving antimicrobial prescribing quality. Digital stewardship tools enhanced prescribing appropriateness and workflow efficiency. Key challenges included data interoperability, cybersecurity, limited model explainability, and scarce multi-centre validation.
Conclusion: By facilitating early detection and enhancing antimicrobial decision-making, smart technologies hold great promise for improving AMS in ICUs.
Intensive care units (ICUs) are essential in the global battle against severe illness and the escalating problem of antimicrobial resistance (AMR). A recurring difficulty in these high-risk environments is that standard infection control procedures can be slow and imprecise. Because conventional microbiological testing may take days to yield definitive results, clinicians must make rapid decisions about antimicrobial therapy, often with limited information. Although broad-spectrum empirical therapy is frequently required to save lives in critically ill patients, excessive reliance on such treatment can cause collateral damage to essential commensal microbial flora and lead to unnecessary drug exposure.
This practice increases the risk of treatment failure when the infecting pathogen is resistant to the prescribed agents. Moreover, indiscriminate use of broad-spectrum antimicrobials directly intensifies selection pressure, thereby accelerating the development of antimicrobial resistance [1].
The COVID-19 pandemic further exposed gaps in infection control and antimicrobial stewardship (AMS), with increased empirical antibiotic use and disruption of stewardship programmes. These challenges highlighted the need for adaptable, data-driven infection management strategies [2, 3, 4].
In response, a model of "smart" healthcare that combines telemedicine, the internet of things (IoT), and artificial intelligence (AI) has emerged to mitigate this growing problem. AI and machine learning (ML) can analyse large volumes of complex data from electronic health records (EHRs) to improve antibiotic selection, evaluate infection risk, and accelerate diagnosis, facilitating the shift from reactive to proactive care [5]. IoT devices, including wearable sensors that continuously record vital signs and smart environmental monitors, deliver the real-time, high-granularity data streams required to support these prediction models and enable early intervention [6]. Telemedicine further extends this technological environment by enabling the remote provision of infectious disease and stewardship expertise across geographical boundaries, and it can be particularly valuable where resources are scarce [7]. Together, these technologies promise more effective, personalised, and sustainable infection control. This narrative review evaluates the role of AI, ML, IoT, and telemedicine in strengthening infection control and AMS in critical care, focusing on clinical applications, implementation challenges, and future directions.
Study design
This study was conducted as a narrative review to synthesize current evidence on the application of AI, the IoT, and telemedicine in infection control and AMS within critical care settings.
Data sources and search strategy
A comprehensive literature search was performed using PubMed, Scopus, and Google Scholar. The search covered publications from January 2020 to December 2025. The following search terms and their combinations were used: "artificial intelligence" or "machine learning", "internet of things", "telemedicine" or "telehealth", "infection control", "antimicrobial stewardship", and "critical care" or "intensive care unit". Boolean operators (AND/OR) were applied to refine the search strategy. Reference lists of relevant articles were also manually screened to identify additional pertinent studies.
Focused Google searches were also performed to locate reliable articles, policy papers, and clinical recommendations. Studies were selected on the basis of how directly they addressed the use of smart technologies, including telemedicine, machine learning, AI, and the IoT, in infection control and AMS in critical care settings.
Inclusion and exclusion criteria
Inclusion criteria were: publication between 2020 and 2025; a focus on AI, IoT, or telemedicine applied to infection control or AMS in critical care or ICU settings; and article types including original research articles, systematic and narrative reviews, case studies, pilot studies, and relevant policy or guideline documents. Articles not related to infection control, AMS, or critical care, conference abstracts without full-text availability, and opinion pieces lacking scientific or technical relevance were excluded.
Screening and selection process
The initial database search yielded a broader pool of articles. Titles and abstracts were first screened for relevance. Full-text screening was subsequently performed for potentially eligible publications. After applying the inclusion and exclusion criteria, 30 publications were selected for final inclusion in the narrative synthesis (Figure 1).
Quality assessment and risk of bias
As this study is a narrative review, a formal quantitative risk-of-bias assessment was not conducted. However, the methodological rigor, relevance, clarity of objectives, and applicability of findings were critically appraised qualitatively during data interpretation. Greater emphasis was placed on peer-reviewed studies and authoritative guideline or policy documents.
Data extraction and synthesis
Key information extracted from each included publication comprised study type, setting, technology used (AI, IoT, telemedicine), target outcomes related to infection control or AMS, and reported benefits, limitations, and implementation challenges.
Findings were synthesized descriptively to highlight trends, emerging applications, advantages, limitations, and future opportunities for integrating digital technologies into critical care infection management.
As this study is based exclusively on previously published literature and did not involve human participants, patient data, or experimental interventions, ethical approval was not required.

Figure 1 Flowchart of article identification using PubMed, Scopus and Google Scholar search, 2020–2025


Overview of digital platforms in ICU infection control
Predictive analytics powered by AI and ML might be considered the system's "brain", real-time data collection through the IoT its "spinal cord", and remote expertise and intervention delivery through telemedicine and digital platforms its "hands, legs, eyes, ears, and voice". The available research suggests that these applications of smart technologies can be loosely divided into three functional domains that correspond to the flow of information in clinical settings [5, 8]. The first domain is AI and ML for prediction and surveillance, which evaluates large amounts of diverse data using methods that substantially outperform manual analysis of genomic data, diagnostic results, clinical records, and EHR data [5]. By recognising patterns and correlations that predict the likelihood of an impending infection, such as sepsis or a healthcare-associated infection (HAI), AI can identify the causative pathogen, predict its resistance profile, and recommend the most effective antimicrobial treatment [4]. These approaches help slow the development of AMR by emphasising precise prediction over conjecture, allowing clinicians to treat patients faster, select antibiotics from a narrower range, and ultimately improve patient outcomes [8].
The second domain is real-time data collection, in which ML and AI models rely on the IoT as their primary data layer. Because the precision and dependability of prediction models depend on the timeliness, quality, and accuracy of the input data, the IoT has radically changed patient monitoring in critical care. This category includes wearable sensors that continuously monitor vital signs, including body temperature, heart rate, and respiratory rate, smart medical equipment, and environmental sensors that monitor catheter use and hand hygiene compliance [6]. In contrast to intermittent manual data collection, this continuous flow of high-resolution data enables the identification of subtle physiological changes that may be an early indicator of infection, sometimes before clinical symptoms manifest. The effectiveness of early warning systems and the dynamic inputs needed for adaptive treatment models depend on this kind of real-time situational awareness [9].
Role of AI and ML in ICU infection control
Critical care has greatly benefited from AI and ML, especially in the areas of disease prognostics and diagnostic applications. Two of the most pressing infection control concerns that this analytical capability is being used to address are the early detection of sepsis and the identification of particular drug-resistant bacteria. Numerous studies suggest that, with the correct training and assessment, AI models might be a beneficial tool for strengthening clinical judgment, surpassing manual monitoring methods and standard scoring systems [4].
The ability of these models to simultaneously incorporate several high-dimensional and heterogeneous data sources is a key component of their effectiveness. Table 1 shows which AI systems are suited to which types of tasks. For instance, deep learning models such as Long Short-Term Memory (LSTM) networks can be used to analyse time-series data from EHRs and are very effective at predicting the onset of conditions such as sepsis or ventilator-associated pneumonia (VAP) [4]. In specific ICU settings, supervised machine-learning models such as gradient-boosted trees have demonstrated high predictive performance for early sepsis detection, achieving an area under the curve of up to 0.94 in single-centre retrospective validation studies, although such performance is not uniform across models or clinical contexts [5]. By predicting the emergence of multidrug-resistant organisms (MDROs) and guiding appropriate antibiotic selection, AI has also made a substantial contribution to enhancing AMS and preserving the effectiveness of last-line treatments [12]. Even before definitive culture results are available, AI-guided empirical treatment shows a closer match with the expected pathogen and its susceptibility profile [13].
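To make the modelling approach concrete, the sketch below trains a gradient-boosted classifier on a small set of synthetic ICU variables and reports a held-out area under the curve. It is a minimal illustration only: the feature set, coefficients, and data are hypothetical and are not drawn from the studies cited above, and any real model would require far richer EHR inputs and formal clinical validation before use.

```python
# Minimal sketch: a gradient-boosted classifier flagging impending sepsis from
# routinely collected ICU variables. All data are synthetic and the feature set
# is hypothetical; cited studies use far richer EHR time-series inputs.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000

# Hypothetical hourly snapshot features: heart rate, respiratory rate,
# temperature, white cell count, lactate, mean arterial pressure.
X = np.column_stack([
    rng.normal(90, 15, n),     # heart rate (beats/min)
    rng.normal(20, 5, n),      # respiratory rate (breaths/min)
    rng.normal(37.2, 0.8, n),  # temperature (deg C)
    rng.normal(11, 4, n),      # white cell count (10^9/L)
    rng.normal(1.8, 1.0, n),   # lactate (mmol/L)
    rng.normal(75, 12, n),     # mean arterial pressure (mmHg)
])

# Synthetic label: risk loosely driven by tachycardia, tachypnoea and lactate.
logit = 0.04 * (X[:, 0] - 90) + 0.08 * (X[:, 1] - 20) + 0.9 * (X[:, 4] - 1.8) - 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.05)
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
print(f"Held-out AUROC: {roc_auc_score(y_test, probs):.2f}")
```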
Table 1 Artificial intelligence/ machine learning models for infection detection and prediction in intensive care unit settings
Serial | Types of model developed | Target infection / task | Key performance metric / finding | Clinical settings | References |
1 | Random forest, long short-term memory networks, and various other machine learning models | Prediction of ventilator-associated pneumonia, central line-associated bloodstream infection, and sepsis; differentiating inflammation vs. infection. | Some sepsis prediction models can predict sepsis up to 4 hours before clinical onset, achieving an area under the receiver operating characteristic curve of approximately 0.85 in internal validation and 0.76 in external validation. | Intensive care unit | |
2 | Random forest, neural networks, decision trees | Detection and prediction of surgical site infection, urinary tract infections, and other healthcare-associated infections. | High predictive accuracy, with many models achieving area under the curve scores >0.80. | General hospital and Intensive care unit | |
3 | Machine learning, deep learning, hybrid models | Pathogen identification (methicillin-resistant Staphylococcus aureus, Clostridioides difficile), infection risk assessment. | Accuracy, sensitivity, specificity, precision, and F1-score were the metrics most commonly applied (values varied across included studies). | General bacterial infection control | |
4 | Whole-genome sequencing analysis | Tracking a clonal outbreak of linezolid-resistant Staphylococcus epidermidis. | Whole-genome sequencing confirmed a single clonal outbreak (genomic confirmation of transmission). | Intensive care unit | |
5 | XGBoost machine learning model | Early identification of severe infection and sepsis to guide therapy. | Area under the receiver operating characteristic curve: up to 0.94 (primary performance metric); sensitivity: ~0.87; specificity: ~0.87 | Intensive care unit |
AI/ML models have demonstrated accurate prediction of specific infection-related outcomes, particularly ICU-acquired sepsis, VAP, central line-associated bloodstream infection, surgical site infection, and urinary tract infection, in defined clinical settings, achieving area under the receiver operating characteristic curve values ranging from approximately 0.80 to 0.94 and enabling detection up to 4 to 12 hours before clinical onset in validated ICU and hospital cohorts. This achievement is nonetheless tempered by notable and enduring barriers that hinder widespread clinical use, of which generalisability and validation are the two most crucial. Since many of the high-performing models described in the literature were developed and validated using retrospective data, it is unclear whether they will perform as well in other clinical settings with different patient populations and data infrastructures. A closely related issue is the "black box" nature of many complex models, particularly deep learning networks. Because clinicians may be hesitant to accept a prediction whose basis they cannot trace, explainable artificial intelligence (XAI) methods have been developed to promote model transparency and build user confidence. In conclusion, the effective integration of these technologies entails both technical and nontechnical issues, necessitating significant investment in clinician training, data infrastructure, and a seamless transition into a modern, smart, technology-driven clinical practice.
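As a minimal illustration of why external validation matters, the sketch below trains a model on one simulated "development centre" and then reports discrimination separately on a held-out internal split and on a second, differently distributed "external" cohort. The cohorts, features, and distribution shift are entirely synthetic assumptions; the point is the workflow of reporting internal and external performance side by side, not the specific numbers.

```python
# Sketch of the internal-vs-external validation workflow: train on one
# (synthetic) centre, then evaluate on a second centre with a shifted case mix.
# Features and cohorts are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

def make_cohort(n, hr_mean, lactate_mean, seed):
    """Simulate a cohort with two features and a binary infection label."""
    rng = np.random.default_rng(seed)
    hr = rng.normal(hr_mean, 15, n)
    lactate = rng.normal(lactate_mean, 1.0, n)
    logit = 0.05 * (hr - 90) + 0.8 * (lactate - 2.0) - 1.0
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
    return np.column_stack([hr, lactate]), y

X_dev, y_dev = make_cohort(3000, hr_mean=90, lactate_mean=2.0, seed=1)   # development centre
X_ext, y_ext = make_cohort(1000, hr_mean=100, lactate_mean=2.8, seed=2)  # external centre

model = GradientBoostingClassifier(random_state=0).fit(X_dev[:2000], y_dev[:2000])

auc_internal = roc_auc_score(y_dev[2000:], model.predict_proba(X_dev[2000:])[:, 1])
auc_external = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
print(f"Internal AUROC: {auc_internal:.2f}  External AUROC: {auc_external:.2f}")
```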
Role of IoT in ICU infection control
The IoT is the most efficient means of supplying the constant flow of high-quality, real-time data required by predictive AI models. The use of IoT in critical care represents an important shift from intermittent, manual data collection to continuous, automated monitoring. By collecting and transmitting a vast array of physiological and environmental data, this technology layer serves as the system's peripheral nervous system, offering an unparalleled level of insight into a patient's condition. This continuous flow of data makes it possible to create more dynamic and individualised models of patient care and is essential for the early detection of clinical deterioration [6].
Table 2 illustrates the diverse and continuously expanding range of IoT applications in critical care. Vital signs, such as temperature, oxygen saturation, heart rate, and respiratory rate, are continuously monitored at the patient level using wearable and relatively inexpensive sensors, enabling the detection of even slight alterations that could be the first indication of serious disease [6]. Acute care facilities may therefore benefit in the future from adapting approaches derived from traditional remote patient monitoring, which has historically been implemented in out-of-hospital settings. The IoT encompasses both the patient and the care environment: smart hospital beds, automatic hand hygiene equipment, and intelligent infusion pumps can monitor adherence to infection control protocols, while a secure network architecture ensures that information is transmitted safely [9]. The vast, accurate, real-time data that this network of connected devices delivers is essential for future AI-driven infection control [6].
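The following sketch illustrates, under stated assumptions, the kind of edge-side logic a wearable data stream might feed: simulated readings stand in for a real device driver, and simple threshold checks stand in for a validated early-warning score. The function names, thresholds, and patient identifier are hypothetical.

```python
# Minimal sketch of an IoT-style early-warning loop: a wearable streams vital
# signs and an edge process flags readings that breach simple thresholds.
# Thresholds and the device interface are illustrative only.
import random
import time
from dataclasses import dataclass

@dataclass
class VitalSigns:
    patient_id: str
    heart_rate: float   # beats/min
    resp_rate: float    # breaths/min
    temperature: float  # degrees Celsius
    spo2: float         # %

def read_wearable(patient_id: str) -> VitalSigns:
    """Stand-in for a real device driver: returns simulated readings."""
    return VitalSigns(
        patient_id=patient_id,
        heart_rate=random.gauss(88, 12),
        resp_rate=random.gauss(19, 4),
        temperature=random.gauss(37.1, 0.6),
        spo2=random.gauss(96, 2),
    )

def early_warning(v: VitalSigns) -> list[str]:
    """Return a list of threshold breaches (simplified, hypothetical cut-offs)."""
    flags = []
    if v.heart_rate > 110:
        flags.append("tachycardia")
    if v.resp_rate > 25:
        flags.append("tachypnoea")
    if v.temperature > 38.3:
        flags.append("fever")
    if v.spo2 < 92:
        flags.append("hypoxaemia")
    return flags

if __name__ == "__main__":
    for _ in range(5):                 # in practice this loop runs continuously
        reading = read_wearable("ICU-07")
        breaches = early_warning(reading)
        if breaches:
            print(f"ALERT {reading.patient_id}: {', '.join(breaches)}")
        time.sleep(1)                  # sampling interval (illustrative)
```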
Table 2 Internet of things-based innovations for infection monitoring and antibiotic optimisation in the intensive care unit
Serial | Internet of things technology | Type of data used | Application | Opportunities and challenges | References |
1 | Automated data extraction from intensive care unit electronic medical records | Antimicrobial consumption data (days of therapy, defined daily doses); patient clinical risk factor data; mortality outcome data; automated structured intensive care unit electronic medical record data (clinical and antimicrobial usage data). | Automated surveillance of antimicrobial consumption. | Allows for high-granularity, long-term surveillance, revealing variations in practice between different clinical units. | |
2 | Wearable sensors, cloud/fog/edge computing, blockchain, radio-frequency identification tags | Physiological vital-sign data: heart rate, oxygen saturation, respiratory rate, body temperature, and physical activity/movement data derived from wearable motion sensors (e.g., accelerometers). | Continuous remote patient monitoring for early deterioration detection and personalised care. | Enables continuous, noninvasive monitoring. Challenges include data security, privacy, signal processing, and interoperability. | |
3 | Telehealth platforms, remote monitoring devices and home diagnostic kits | Vital signs, patient-reported symptoms, diagnostic results | Optimising antibiotic use in adult intensive care unit populations, including older patients. | Increases access to specialists but faces challenges in accuracy, patient education, and digital literacy in vulnerable populations. | |
4 | Smartphones, mobile apps, digital imaging | Digital image data (surgical wound photographs) and patient-reported outcome data collected via smartphone/mobile applications. | Post-discharge surveillance of surgical site infections. | Improved surgical site infection detection sensitivity, earlier post-discharge monitoring, and enhanced patient engagement; challenges include digital access inequity, variable image quality, data privacy concerns, and workflow integration barriers. | |
5 | Implantable Medical Devices, Internet of Wearable Devices | Biometrics (heart rhythms, glucose), vital signs | Secure communication for medical devices. | Highlights the critical need for robust security protocols (e.g., two-factor authentication) to prevent attacks on life-sustaining devices. |
IoT integration in critical care enables active, data-rich infection management. The main outcome is the shift from sporadic to continuous surveillance, which enables the detection of patient deterioration well in advance of its clinical manifestation; this constant flow of data is also what fuels predictive AI. The two main new issues brought on by the growing number of connected devices are data security and privacy. Implementing secure protocols and robust authentication is essential to safeguard patient data throughout collection, transmission, and storage, since every sensor is a potential point of attack. The second major challenge is interoperability and data integration: for data from devices made by many manufacturers to be meaningful, they must be standardised and linkable. In addition to technology standards, new data architectures (such as edge computing) are needed to handle the massive data volumes without introducing delays. Finally, human elements such as workflow integration and user acceptance are just as important for effectiveness as the technology itself.
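As one concrete example of "security-by-design" for device telemetry, the sketch below signs each payload with a shared-secret HMAC so a receiving gateway can verify integrity and origin before the data enter the clinical pipeline. This is a simplified, assumption-laden illustration; real deployments would combine it with TLS, managed key storage, and device-level authentication.

```python
# Sketch of payload signing for device telemetry: each reading carries an HMAC
# so the gateway can reject tampered or unauthenticated data. Key handling is
# simplified for illustration; never hard-code keys in practice.
import hashlib
import hmac
import json

SECRET_KEY = b"example-device-key"  # hypothetical shared secret

def sign_payload(payload: dict, key: bytes) -> str:
    message = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify_payload(payload: dict, signature: str, key: bytes) -> bool:
    expected = sign_payload(payload, key)
    return hmac.compare_digest(expected, signature)

reading = {"device_id": "pump-12", "patient_id": "ICU-07", "infusion_rate_ml_h": 42.0}
sig = sign_payload(reading, SECRET_KEY)

print("Accepted:", verify_payload(reading, sig, SECRET_KEY))   # True
tampered = {**reading, "infusion_rate_ml_h": 99.0}
print("Tampered:", verify_payload(tampered, sig, SECRET_KEY))  # False
```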
Telemedicine and digital stewardship platforms
Although AI and the IoT provide analytical insights and real-time data, respectively, it is the network of telemedicine and digital stewardship platforms that turns these capabilities into action in clinical practice. These technologies, which serve as the layer of communication and intervention, ensure that specialised knowledge is available when and where it is needed and that frontline practitioners receive data-driven recommendations. This is especially crucial in AMS, where prompt, informed decisions are required to preserve antibiotic potency and optimise treatment. The literature indicates that these digital platforms are developing from useful instruments into crucial infrastructure for stewardship initiatives that are efficient, scalable, and equitable [7, 14, 15, 16].
The primary ways in which digital platforms are altering AMS are listed in Table 3. First, hospitals and intensive care units without on-site specialists can benefit from telemedicine and e-consultation services, which the published literature describes as an effective way to improve access to AMS and infectious disease (ID) expertise. This approach proves particularly beneficial in resource-limited and rural settings [7, 14, 15, 16]. Local practitioners gain from this democratisation of knowledge, which also encourages adherence to best practices. Second, contemporary stewardship programmes increasingly require automated monitoring. By continuously monitoring EHR and pharmacy data, these systems can automatically track antimicrobial consumption indicators such as days of therapy (DoT) or defined daily doses (DDD), flag patients receiving inappropriate or unnecessarily broad-spectrum therapy, and notify the AMS team of possible intervention opportunities [11]. Automating the laborious case-finding process allows stewardship teams to focus their efforts more efficiently [10]. Finally, these platforms, which frequently embed clinical decision support (CDS) in the EHR, support the core stewardship activity of prospective audit and feedback by sharing recommendations with prescribers; this has repeatedly been shown to improve prescribing appropriateness [3, 10].
Table 3 Impact of digital tools on antimicrobial use and infection control across critical care and acute care settings
Serial | Digital tool / platform | Primary function | Setting | Findings | References |
1 | Antimicrobial stewardship toolkits, various digital platforms | Digital antimicrobial stewardship support through dashboards, electronic prescribing surveillance, and virtual infectious disease consultations to optimize antibiotic use. | Acute care hospitals | During the pandemic, there was an increase in multidisciplinary teamwork and use of digital tools (e.g., virtual meetings, procalcitonin-guided therapy). | |
2 | Clinical decision support, computerised provider order entry, surveillance systems | A review of digital interventions for antimicrobial stewardship. | Hospitals | Digital interventions reduce antimicrobial use and improve prescribing appropriateness; however, effects on clinical outcomes such as infection resolution, length of hospital stay, and mortality vary across studies due to differences in settings, intervention design, and patient populations. |
3 | Automated antimicrobial consumption surveillance service | Automated, real-time monitoring of antimicrobial consumption (days of therapy, defined daily dose). | Intensive care unit in Sweden | The system provided versatile, long-term data linking consumption to patient factors, crucial for refining stewardship. | |
4 | Structured communication tools (checklists, situation-background-assessment-recommendation (SBAR)) | Improving team communication for infection prevention and control / antimicrobial stewardship. | Hospitals (mainly the intensive care unit) | Standardising communication improves teamwork and leads to better patient outcomes, including shorter antibiotic duration. |
5 | (Survey of platforms) International survey on antimicrobial stewardship practices | Antimicrobial stewardship and therapeutic drug monitoring | International intensive care units | Formal antimicrobial stewardship programmes are common (63% of intensive care units), and their presence is strongly associated with the use of therapeutic drug monitoring. |
Digital stewardship solutions have been demonstrated to improve two process indicators: prescribing appropriateness and overall antibiotic use. One significant finding is the importance of workflow integration and of reducing alert fatigue. These technologies must be developed with the end user in mind so that they provide helpful information without compromising professional care; if a CDS system's warnings are too frequent or irrelevant, clinicians may disregard them, which can be harmful. The second important finding is the effectiveness of automated monitoring and feedback. Systems that can automatically collect, analyse, and share data have improved the ability of stewardship teams to evaluate performance, track trends, and target interventions at resistance and consumption patterns. On the other hand, although the findings are promising, it should be emphasised that evidence for a direct and sustained reduction in mortality and other objective clinical outcomes remains conflicting, warranting a more cautious interpretation of the reported benefits. Digital technologies are excellent at streamlining processes, but their effect on patient outcomes will likely depend on a variety of complex factors, including the duration of therapy and the severity of the underlying illness.
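To illustrate the consumption metrics referred to above, the sketch below computes days of therapy (DoT) and defined daily doses (DDD) from a handful of made-up medication administration records. The records, the census denominator, and the DDD reference values shown are illustrative assumptions; a production surveillance service would pull these continuously from the EHR and pharmacy systems and use the official WHO ATC/DDD index.

```python
# Sketch of automated antimicrobial-consumption surveillance: computing DoT and
# DDD from medication administration records. All inputs are illustrative.
from collections import defaultdict
from datetime import date

# (patient_id, drug, administration_date, dose_in_grams) - made-up records
administrations = [
    ("P1", "meropenem",  date(2024, 3, 1), 1.0),
    ("P1", "meropenem",  date(2024, 3, 1), 1.0),
    ("P1", "meropenem",  date(2024, 3, 2), 1.0),
    ("P2", "vancomycin", date(2024, 3, 1), 1.0),
    ("P2", "vancomycin", date(2024, 3, 2), 1.0),
]

# Reference values in grams per day; verify against the official WHO ATC/DDD index.
WHO_DDD_GRAMS = {"meropenem": 3.0, "vancomycin": 2.0}

dot_days = defaultdict(set)       # (patient, drug) -> set of calendar days with a dose
total_grams = defaultdict(float)  # drug -> total grams administered

for patient, drug, day, grams in administrations:
    dot_days[(patient, drug)].add(day)
    total_grams[drug] += grams

dot_per_drug = defaultdict(int)
for (patient, drug), days in dot_days.items():
    dot_per_drug[drug] += len(days)

patient_days = 40  # denominator from the ICU census (illustrative)
for drug in sorted(total_grams):
    dot = dot_per_drug[drug]
    ddd = total_grams[drug] / WHO_DDD_GRAMS[drug]
    print(f"{drug}: {dot} DoT, {ddd:.1f} DDD "
          f"({1000 * dot / patient_days:.0f} DoT per 1000 patient-days)")
```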
Ways and means to make smart technology smarter
The successful integration of AI, the IoT, and digital platforms into critical care offers a plausible solution to the long-standing problems of infection control and antimicrobial stewardship. However, realising this promise requires a systematic, comprehensive strategy that addresses the nontechnical, technological, and ethical concerns raised in the studies reviewed above.
The solution is not simply more technology but an integrated, human-centred system in which technology strengthens clinical knowledge, expedites procedures, and fosters a culture of safety and continuous improvement. Three main pillars must be prioritised to achieve this: fostering trust and guaranteeing dependability through explainability and validation, establishing a safe and appropriate data environment, and prioritising team communication and human factors [5, 17].
Gaining trust in AI, the first pillar, is essential. Decision-making should not be left to a "black box"; rigorous validation is therefore required. Before being used in clinical settings, AI models must undergo comprehensive, multi-centre external validation to ensure their reliability and generalisability across a variety of patient groups [5, 17]. Many promising models are still at the prototype stage and have only been tested on internal data, underscoring the substantial difference between retrospective analysis and prospective, real-world deployment [18]. Trust is necessary for clinical acceptance, and "model fact labels" are a helpful way to communicate this complex information to end users [19]. XAI techniques are not free of trade-offs: they can enhance model interpretability by providing concise explanations, but these explanations are not always clinically intuitive and may require expert contextualisation. Used well, however, they allow clinicians to assess a model's output objectively; for example, Shapley Additive Explanations (SHAP) values can identify the specific clinical factors that triggered a sepsis alarm [4]. By going beyond mere transparency, this increases physician trust and supports the uptake of AI-derived findings [20].
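The sketch below shows, on synthetic data, how SHAP values can be used to list the variables that pushed an individual risk estimate up or down, the kind of per-alert explanation discussed above. The model, features, and data are hypothetical, and the shap package is assumed to be installed separately.

```python
# Sketch of per-patient explanation with SHAP: rank which features drove an
# individual sepsis alert. Data and features are synthetic; requires `shap`
# (pip install shap) in addition to scikit-learn.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["heart_rate", "resp_rate", "lactate", "temperature"]
X = rng.normal([90, 20, 2.0, 37.2], [15, 5, 1.0, 0.8], size=(1500, 4))
logit = 0.05 * (X[:, 0] - 90) + 0.1 * (X[:, 1] - 20) + 0.9 * (X[:, 2] - 2.0) - 1.0
y = (rng.random(1500) < 1 / (1 + np.exp(-logit))).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Explain one flagged patient: which variables pushed the risk estimate up?
explainer = shap.TreeExplainer(model)
patient = X[:1]
contributions = explainer.shap_values(patient)[0]
for name, value in sorted(zip(feature_names, contributions),
                          key=lambda pair: abs(pair[1]), reverse=True):
    print(f"{name:>12}: {value:+.3f}")
```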
The second pillar is building a data ecosystem that is safe and compliant. Since an AI model's performance is determined by the quality of its input data, the saying "garbage in, garbage out" is highly relevant [21]. High-quality, trustworthy data are crucial because of the substantial global impact of HAIs [22]. This necessitates a "security-by-design" approach for IoT devices that includes robust authentication and encryption [9]. A commitment to data standardisation, using technologies such as HL7 FHIR, is also required to ensure that data from different EHRs, laboratories, and devices can be merged into an open, interoperable data stream [23]. Standardisation is necessary for automated surveillance systems, which are essential to modern stewardship, to provide precise, comparable measurements across different units and organisations [24]. In addition to the routine monitoring of device-associated infections, these data systems must be flexible enough to accommodate additional data points when the criteria for HAI monitoring change [25].
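As a small illustration of the standardisation point, the sketch below represents a single bedside temperature measurement as an HL7 FHIR (R4) Observation resource so that readings from different devices share one interoperable schema. The patient reference and timestamp are placeholders, and a production system would use a FHIR server or library with validated terminology bindings rather than a hand-built dictionary.

```python
# Sketch of a bedside measurement expressed as an HL7 FHIR (R4) Observation.
# Identifiers and references are placeholders for illustration only.
import json

observation = {
    "resourceType": "Observation",
    "status": "final",
    "category": [{
        "coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "vital-signs",
        }]
    }],
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "8310-5",          # LOINC code for body temperature
            "display": "Body temperature",
        }]
    },
    "subject": {"reference": "Patient/example-icu-07"},  # placeholder reference
    "effectiveDateTime": "2024-03-01T08:00:00Z",
    "valueQuantity": {
        "value": 38.6,
        "unit": "Cel",
        "system": "http://unitsofmeasure.org",
        "code": "Cel",
    },
}

# Serialised payload ready to send to a FHIR endpoint (e.g., POST /Observation).
print(json.dumps(observation, indent=2))
```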
The third and most crucial pillar is the focus on human factors, collaboration, and the ethical incorporation of technology into clinical practice. Care is given by people; technology only makes it easier. Extensive research has demonstrated that even the most sophisticated digital tools are useless if they are not developed in conjunction with clinicians to fit their workflow [10, 26]. This means minimising alert fatigue and creating interfaces that are easy to use. Importantly, these tools should enhance inter-professional communication rather than replace it. In this context, embedding structured communication protocols such as SBAR within smart digital health technologies enhances real-time information exchange, thereby strengthening patient safety and interdisciplinary coordination [15]. AI-driven prediction of VAP [27, 28] or Clostridioides difficile infection [29, 30, 31, 32] exemplifies how technology can work in concert with established clinical best practices.
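A minimal sketch of the SBAR-embedding idea follows: an automated alert (here, a hypothetical VAP-risk warning) is packaged in the four SBAR fields so that it is handed over in a familiar clinical communication format. All field contents and identifiers are illustrative.

```python
# Sketch of an SBAR-structured digital alert payload. Field contents are
# illustrative and do not come from any validated alerting system.
import json
from dataclasses import dataclass, asdict

@dataclass
class SbarAlert:
    situation: str       # what is happening now
    background: str      # relevant context
    assessment: str      # what the tool or clinician thinks is going on
    recommendation: str  # suggested next step

alert = SbarAlert(
    situation="Rising VAP risk score for patient ICU-07 over the last 6 hours",
    background="Ventilated for 4 days; new purulent secretions documented",
    assessment="Model flags high probability of early ventilator-associated pneumonia",
    recommendation="Review chest imaging, obtain cultures, and discuss empiric therapy with the AMS team",
)

print(json.dumps(asdict(alert), indent=2))
```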
The research indicates that the integration of AI, the IoT, and digital stewardship platforms into infection control and AMS within critical care has transformative potential, despite being at an early stage. Future research should focus on multicentre, prospective validation of AI and ML models to ensure generalisability across different ICU populations and healthcare systems. This will help overcome the current reliance on single-centre or retrospective datasets, which limits model applicability across clinical contexts. The development of XAI techniques is also crucial to building clinician trust, since they help medical professionals understand the reasoning behind predictive warnings and thereby support adoption and integration into routine practice.
Safe and appropriate data environments must also be established to facilitate the seamless integration of EHR systems, decision support platforms, and the IoT. Strong authentication, standardised data formats, and clear rules and regulations will be necessary to protect patient privacy, reduce cybersecurity risks, and enable connectivity across different manufacturers and health systems. To support predictive analytics, these systems must be able to deliver high-quality data in real time without overburdening clinical teams with unnecessary or redundant information.
From a clinical perspective, future approaches should pay particular attention to human-centred design and workflows. Rather than making caregiving more difficult, digital technology should reduce clinician workload and avoid alert fatigue. To align technology with real-world clinical needs, co-designing solutions with AMS teams, infection control experts, and frontline critical care staff will be crucial. The expansion of telemedicine in AMS can provide equitable access to global best practices and infectious disease expertise, particularly for underfunded or remote critical care units. Finally, the next generation of intelligent infection control would benefit from adaptive learning health systems that continuously refine stewardship strategies and prediction models as local and global data accumulate. By combining digital stewardship, IoT-based surveillance, and AI-driven analytics, such a dynamic feedback system could make infection management proactive rather than reactive. In the future, these technologies may support resilient, effective critical care environments that improve patient outcomes and antibiotic use while reducing the likelihood of antibiotic resistance.
Limitations
It is important to acknowledge this review's limitations. First, because it is a narrative review and does not follow the formal protocol of a systematic review, there is a greater chance of selection bias, and some relevant studies may have been missed. The selection of papers was broad, but it may not capture the entire body of research on the topic, because it was based mainly on how directly publications addressed smart technology in infection control and AMS.
Second, this review includes policy papers, observational reports, pilot studies, and conceptual frameworks alongside different study designs, settings, and social contexts. This heterogeneity makes it difficult to compare data directly or draw robust conclusions across the included articles.
Third, the technologies under discussion, such as AI, the IoT, and telemedicine, are evolving rapidly. Many of the models and approaches described in the literature are still at the prototype or pilot stage, and most have not been validated in large-scale or multi-centre research. Their practicality and usability in real-world settings therefore remain uncertain.
Fourth, owing to language and database limitations, our search may have missed significant contributions from other disciplines or languages by focusing on papers indexed in selected databases. Lastly, despite synthesising data from multiple sources, this study lacks quantitative meta-analysis and outcome-level statistical comparison. The findings should therefore be regarded as a broad depiction of the current state of knowledge rather than as definitive evidence of effectiveness.
Combining AI with human expertise, so that it enhances clinical judgment rather than replacing it, can support the development of safer, learning-oriented healthcare systems. This narrative review demonstrates how modern technologies such as AI, IoT-based monitoring, and telemedicine can enhance ICU infection control and AMS by facilitating earlier infection prediction, optimising antimicrobial decisions, supporting real-time surveillance, and closing access gaps through remote expertise. However, effective adoption requires secure infrastructure, reliable and interoperable data systems, explainable and independently validated models, clinician training, and user-friendly integration into routine workflows. Taken together, these developments offer a way forward for critical care practice that is more patient-centred, data-driven, and sustainable. Further multi-centre trials, systematic reviews, and longitudinal studies are needed to ascertain the dependability, security, and long-term impact of these smart technologies in real critical care settings.