Since all the Outcome Indicators are now to be used as KPIs for HWC Sub-Centres, data entry for them has been added beginning with the July monthly report.
Explore our FAQs page, designed to offer comprehensive information and clarity on various aspects of NQAS policies, protocols, and initiatives aimed at promoting and safeguarding quality assurance in public health facilities.
Divide the number of children discharged as “recovered” by the total number of discharged children.
Exclusions: LAMA (left against medical advice) and absconded cases.
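As a minimal sketch, the calculation above could be implemented as follows. The function and parameter names are illustrative, and it assumes (per the stated exclusion) that LAMA and absconded cases are removed from the denominator:

```python
def recovery_rate(recovered: int, discharged: int,
                  lama: int = 0, absconded: int = 0) -> float:
    """Percentage of children discharged as 'recovered'.

    LAMA and absconded cases are excluded from the denominator,
    following the indicator's exclusion criteria.
    """
    denominator = discharged - lama - absconded
    if denominator <= 0:
        raise ValueError("No eligible discharges in the period")
    return 100.0 * recovered / denominator

# Example: 45 recovered out of 60 discharges, with 5 LAMA and 5 absconded
print(recovery_rate(45, 60, lama=5, absconded=5))  # 45 / 50 = 90.0
```
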
For Drug Stock-out, if the entire State DH EML were counted under the department-wise stock-out, the denominator would become far too large and make performance look poor. Therefore, since Standard C5.1 specifies the drugs required by each department, that department-specific list should be used as the denominator.
When I visited one hospital, the lab technician had plotted a control chart using random samples collected from patients on 20 different days. Every sample came from a different individual. Comment.
This is an interesting approach, but it's crucial to understand the limitations and potential misinterpretations when a lab technician plots a control chart using random patient samples from different days and individuals to monitor test accuracy.
Here's a breakdown of what that might mean and the challenges involved:
What the lab technician is trying to do
The lab technician is likely attempting to use Patient-Based Real-Time Quality Control (PBRTQC).
This method leverages the continuous flow of patient data to monitor the analytical system for changes in assay performance.
It is considered a complement to traditional Internal Quality Control (IQC) measures using control samples.
PBRTQC can address some limitations of traditional IQC, such as the limited stability and lack of commutability of QC materials.
Challenges and potential misinterpretations
Patient variability: The main challenge is that patient results inherently vary due to numerous factors, including:
Biological variation: Individual patients have different physiological states, health conditions, medications, and diets, all of which influence their lab results.
Disease progression: If the samples are from patients with a disease, their results can change over time due to disease progression or treatment response.
Patient demographics: Factors like age, gender, and ethnicity can also affect reference ranges and expected values.
Lack of a stable process mean: Control charts rely on the assumption of a relatively stable process mean (the expected average value of the measurement). When using random patient samples from different individuals, especially if those individuals have different health conditions or demographics, establishing a true stable process mean for test accuracy becomes difficult.
Difficulty in establishing control limits: The control limits on a control chart are calculated based on the expected process variation. With high patient-to-patient variability, it's harder to distinguish between normal variation and actual analytical errors.
Misinterpreting "out of control" signals: An "out of control" point or trend on such a chart might not necessarily indicate a problem with the lab test's accuracy. It could simply reflect:
A significant change in the patient population being tested.
A natural fluctuation in the biological values of the patients.
A higher incidence of a particular disease in the patient population at that time.
Difficulty in identifying root causes: If an out-of-control signal does indicate an issue, pinpointing the root cause becomes more complex. Was it an instrument issue, a reagent problem, or simply a reflection of patient variability or a change in the patient population?
Need for sophisticated algorithms: Patient-based quality control approaches are typically more effective with advanced algorithms that can identify shifts or trends in assay performance while accounting for patient variability. Using simple control charts directly on raw patient data may not be sufficient.
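A small simulation can illustrate the core problem described above: when each point on the chart comes from a different individual, the control limits are dominated by biological spread, so a genuine analytical shift may never cross them. All numbers here (a 1.5 g/dL biological SD, a 0.15 g/dL analytical SD, a 0.5 g/dL shift) are illustrative assumptions, not reference values:

```python
import random
import statistics

random.seed(1)

# Hypothetical Hb results (g/dL) from 20 different patients:
# each result = the patient's true Hb (biological spread) + analytical noise.
truths = [random.gauss(13.0, 1.5) for _ in range(20)]      # biology
results = [t + random.gauss(0.0, 0.15) for t in truths]    # + assay noise

mean = statistics.mean(results)
sd = statistics.stdev(results)
print(f"centre line {mean:.2f}, 3-sigma limits "
      f"{mean - 3 * sd:.2f} .. {mean + 3 * sd:.2f}")

# A real analytical shift of 0.5 g/dL would sit well inside these limits,
# because the +/- 3 SD band is dominated by patient-to-patient variation.
shift_detected = 0.5 > 3 * sd
print("0.5 g/dL shift flagged by the 3-sigma rule?", shift_detected)
```

The same 0.5 g/dL shift would be caught easily by limits built from replicate measurements of a stable QC material, where the only variation is analytical.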
In one of the hospitals I visited, a sample from one patient was taken and divided into 10 sub-samples, the Hb test was run 10 times in one day, and a control chart was plotted. Comment.
Repeated testing of a single patient sample for Hb
When a lab technician takes a single patient's blood sample, divides it into 10 aliquots (sub-samples), runs an Hb test on each aliquot, and then plots a control chart, this is a valuable and common practice, but one that specifically evaluates precision (repeatability) rather than accuracy or routine daily quality control.
What the lab technician is doing (and why)
This process is essentially a repeatability study. By analyzing multiple aliquots of the same sample under the same conditions (same operator, same instrument, short time interval), the technician is measuring how consistently the instrument or method produces the same result for the same sample. This is a direct assessment of the test's precision.
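For illustration, the repeatability of such a run is usually summarised as a coefficient of variation (CV%). The ten Hb values below are made-up example data, not results from the hospital described:

```python
import statistics

# Hypothetical 10 replicate Hb results (g/dL) from one patient sample
hb = [12.1, 12.0, 12.2, 12.1, 11.9, 12.0, 12.1, 12.2, 12.0, 12.1]

mean = statistics.mean(hb)
cv_percent = 100 * statistics.stdev(hb) / mean
print(f"mean = {mean:.2f} g/dL, CV = {cv_percent:.2f}%")
```

A low CV (well within the laboratory's own allowable imprecision for Hb) indicates good repeatability; the acceptable threshold comes from the lab's quality specifications, not from this sketch.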
Using a control chart for this purpose
Type of chart: An X-bar and R (or S) chart would be appropriate here, provided the exercise is repeated over time: each set of 10 measurements from one patient's sample forms a single subgroup, and several subgroups are needed before meaningful control limits can be calculated.
X-bar chart: The X-bar chart would track the average of the 10 Hb measurements for each patient sample. While the true Hb value of the patient is unknown, the focus here is on the consistency of the measurements.
R chart: The R chart, which plots the range (difference between the highest and lowest value) within each set of 10 measurements, is crucial for assessing within-sample variability.
Interpretation: A stable R chart (and X-bar chart) indicates good precision. If the charts show out-of-control points or trends, it suggests an issue with the instrument's repeatability or a factor causing inconsistent results, like potential problems with reagent, pipetting, or the instrument itself.
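As a sketch of how the R-chart limits would be derived, assuming the 10-replicate exercise is repeated over several days (the data and the repetition are assumptions; D3 and D4 are the standard SPC control-chart constants for subgroup size n = 10):

```python
# Standard SPC control-chart constants for subgroup size n = 10
D3, D4 = 0.223, 1.777

def r_chart_limits(subgroups):
    """R-chart centre line and control limits from subgroup ranges."""
    ranges = [max(g) - min(g) for g in subgroups]
    r_bar = sum(ranges) / len(ranges)      # centre line (mean range)
    return ranges, r_bar, D3 * r_bar, D4 * r_bar

# Hypothetical data: each row = 10 Hb replicates (g/dL) of one sample per day
replicates = [
    [12.1, 12.0, 12.2, 12.1, 11.9, 12.0, 12.1, 12.2, 12.0, 12.1],
    [13.4, 13.5, 13.3, 13.4, 13.6, 13.4, 13.5, 13.3, 13.4, 13.5],
    [10.8, 10.9, 10.7, 10.8, 11.0, 10.8, 10.9, 10.7, 10.8, 10.9],
]
ranges, r_bar, lcl, ucl = r_chart_limits(replicates)
print(f"R-bar = {r_bar:.2f}, LCL = {lcl:.3f}, UCL = {ucl:.3f}")
```

Note that with a different patient each day, only the R chart is directly interpretable: the subgroup means move with the patients' true Hb values, so an X-bar chart built on these data would mix biological and analytical variation.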
Benefits of this approach
Direct assessment of precision: It provides direct feedback on the analytical variability of the test.
Troubleshooting potential issues: Out-of-control signals on the chart can help identify and troubleshoot sources of variation that might be affecting the test's precision.
Ensuring reliable results: Consistent precision in testing ensures that the results obtained for patients are reliable and can be trusted for clinical decisions.
Limitations
Focus on precision only: This type of study primarily assesses precision (repeatability) and does not directly measure the accuracy of the test (how close the measured value is to the true value). To assess accuracy, the results would need to be compared against a known reference standard or another method of established accuracy.
Limited scope: This experiment only reflects the performance of the analyzer for that specific patient's sample, on that day, and with that technician. It might not be representative of the analyzer's performance across the entire range of patient samples or over a longer period.
What is usually done for monitoring test accuracy
Internal Quality Control (IQC): Running known QC samples (materials with a defined concentration or value) at regular intervals (daily, per shift, etc.) is the standard practice for monitoring the analytical process and detecting shifts in accuracy and precision.
External Quality Assessment (EQA) or Proficiency Testing: Periodically testing samples provided by an external organization to compare a lab's performance against other labs and peer groups.
Patient-Based Real-Time Quality Control (PBRTQC): This is a valid approach, but it involves more sophisticated statistical methods than simply plotting individual patient results. For example, it might involve:
Moving averages or truncated moving averages of patient data to smooth out individual patient variability and reveal trends.
Calculations based on the difference between current and previous patient results for the same patient, if repeat testing is common.
Leveraging large datasets of patient results with advanced algorithms to detect shifts in assay performance while accounting for biological and demographic variability.
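The moving-average idea above can be sketched minimally as follows. The target, truncation limits, window size, and alarm threshold used here are all illustrative assumptions; real PBRTQC implementations derive these parameters from validated historical patient data:

```python
import random
from collections import deque

def moving_average_alarms(results, window=20, target=13.0, limit=0.5,
                          lo=5.0, hi=20.0):
    """Return (index, moving average) pairs where the truncated moving
    average of patient results drifts more than `limit` from `target`.
    All parameter values are illustrative assumptions."""
    buf = deque(maxlen=window)
    alarms = []
    for i, x in enumerate(results):
        if not (lo <= x <= hi):       # truncation: drop extreme results
            continue
        buf.append(x)
        if len(buf) == window:
            ma = sum(buf) / window
            if abs(ma - target) > limit:
                alarms.append((i, round(ma, 2)))
    return alarms

# Simulated stream: 60 in-control Hb results, then a +0.8 g/dL analytical shift
random.seed(0)
stream = [random.gauss(13.0, 1.2) for _ in range(60)]
stream += [random.gauss(13.0, 1.2) + 0.8 for _ in range(60)]
alarms = moving_average_alarms(stream)
print("alarms raised:", len(alarms))
```

Averaging across a window of patients smooths out individual biological variability, which is exactly why this works where plotting single patient results (as in the first question) does not.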