510(k) Data Aggregation

Milvue (177 days)
TechCare Trauma is intended to analyze 2D X-ray radiographs to aid in the detection, localization, and characterization of fractures and/or elbow joint effusion during the review of commonly acquired radiographs of: Ankle, Foot, Knee, Leg (includes Tibia/Fibula), Femur, Wrist, Hand/Finger, Elbow, Forearm, Arm (includes Humerus), Shoulder, Clavicle, Pelvis, Hip, Thorax (includes ribs).
TechCare Trauma can provide results for fracture in neonates and infants (from birth to less than 2 years), children and adolescents (aged 2 to less than 22 years) and adults (aged 22 years and over).
TechCare Trauma can provide results for elbow joint effusions in children and adolescents (aged 2 to less than 22 years) and adults (aged 22 years and over).
The intended users of TechCare Trauma are clinicians with the authority to diagnose fractures and/or elbow joint effusions in various settings including primary care (e.g., family practice, internal medicine), emergency medicine, urgent care, and specialty care (e.g., orthopedics), as well as radiologists who review radiographs across settings.
TechCare Trauma results are not intended to be used on a stand-alone basis for clinical decision-making. Primary diagnostic and patient management decisions are made by the clinical user.
The TechCare Trauma device is Software as a Medical Device (SaMD). More specifically, it is defined as "radiological computer assisted detection and diagnostic software for suspected fractures".
As a CADe/x software, TechCare Trauma is an image processing device intended to aid in the detection and localization of fractures and elbow joint effusions on acquired medical images (2D X-ray radiographs).
TechCare Trauma uses an artificial intelligence algorithm to analyze acquired medical images (2D X-ray radiographs) for features suggestive of fractures and elbow joint effusions.
TechCare Trauma can provide results for fractures in neonates and infants (from birth to less than 2 years), children and adolescents (aged 2 to less than 22 years) and adults (aged 22 years and over) regardless of their condition.
TechCare Trauma can provide results for elbow joint effusions in children and adolescents (aged 2 to less than 22 years) and adults (aged 22 years and over). The device detects and identifies fractures and elbow joint effusions based on a visual model's analysis of images and provides information about the presence and location of these pre-specified findings to the user.
It relies solely on images provided by DICOM sources. Once integrated into existing networks, TechCare Trauma automatically receives and processes these images without any manual intervention. The processed results, which consist of one or more images derived from the original inputs, are then sent to specified DICOM destinations. This ensures that the results can be seamlessly viewed on any compatible DICOM viewer, allowing smooth integration into medical imaging workflows.
TechCare Trauma can be deployed on-premises or in the cloud and can be connected to multiple DICOM sources/destinations (including but not limited to DICOM storage platforms, PACS, VNA, and radiological equipment such as X-ray systems), ensuring easy integration into existing clinical workflows.
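To make this integration model concrete, here is a minimal sketch of such an automated receive-analyze-forward loop using the open-source pynetdicom library. The AE titles, destination address/port, and the analyze_for_fractures() stub are hypothetical placeholders, not Milvue's actual implementation.

```python
# Minimal sketch of an automated DICOM receive -> analyze -> forward loop,
# using pynetdicom. All names below are illustrative assumptions.
from pynetdicom import AE, evt, AllStoragePresentationContexts

DEST_HOST, DEST_PORT = "pacs.example.org", 11112  # hypothetical DICOM destination

def analyze_for_fractures(ds):
    """Placeholder for the AI inference step; returns a derived dataset."""
    # Real device logic (detection, localization, annotation) would go here.
    return ds

def handle_store(event):
    """Called for each incoming C-STORE: process the image, forward the result."""
    ds = event.dataset
    ds.file_meta = event.file_meta  # keep transfer-syntax metadata
    result = analyze_for_fractures(ds)

    sender = AE(ae_title="AI_RESULTS")
    sender.add_requested_context(result.SOPClassUID,
                                 result.file_meta.TransferSyntaxUID)
    assoc = sender.associate(DEST_HOST, DEST_PORT)
    if assoc.is_established:
        assoc.send_c_store(result)  # push the derived image to the destination
        assoc.release()
    return 0x0000  # success status returned to the sending modality/PACS

ae = AE(ae_title="AI_SCP")
ae.supported_contexts = AllStoragePresentationContexts
ae.start_server(("0.0.0.0", 11113), block=True,
                evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```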
Here's a detailed breakdown of the acceptance criteria and study findings for the TechCare Trauma device, based on the provided text:
Acceptance Criteria and Device Performance
The acceptance criteria for the TechCare Trauma device appear to be based on achieving high diagnostic accuracy, specifically measured by the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) curve for both standalone performance and multi-reader multi-case (MRMC) comparative studies. The study demonstrated successful performance against these implied criteria.
Table of Acceptance Criteria and Reported Device Performance
| Metric | Acceptance Criteria (Implied/Study Goal) | Reported Device Performance (Standalone) | Reported Device Performance (MRMC, with AI vs. without AI) |
|---|---|---|---|
| Standalone performance (image-level ROC-AUC) | High accuracy (specific threshold not explicitly stated, but implied by achievement across all categories) | Fracture, Adult: 0.962 [0.957-0.967]; Fracture, Pediatric: 0.962 [0.955-0.969]; Elbow joint effusion (EJE), Adult: 0.965 [0.936-0.986]; EJE, Pediatric: 0.976 [0.963-0.986]. Further detailed by anatomical region, age, gender, image view, and imaging hardware manufacturer, all showing high AUCs. | Not applicable (standalone algorithm only) |
| Reader performance (MRMC ROC-AUC) | Superior to unaided reader performance (statistically significant improvement) | Not applicable (human reader performance) | Adult fracture: improved from 0.865 to 0.955 (Δ 0.090, p …) |
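For readers unfamiliar with the bracketed intervals above, the sketch below shows how an image-level ROC-AUC with a 95% percentile-bootstrap confidence interval is typically computed, using scikit-learn. The toy labels and scores are made up; this is not the submission's actual analysis code, and the submission may have used a different interval method.

```python
# Illustrative only: image-level ROC-AUC with a 95% bootstrap CI.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def auc_with_bootstrap_ci(y_true, y_score, n_boot=2000, alpha=0.05):
    """Point AUC plus a percentile-bootstrap CI over resampled cases."""
    point = roc_auc_score(y_true, y_score)
    aucs = []
    n = len(y_true)
    while len(aucs) < n_boot:
        idx = rng.integers(0, n, n)    # resample cases with replacement
        if len(set(y_true[idx])) < 2:  # skip resamples missing a class
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
    return point, (lo, hi)

# Toy data: 1 = fracture present; scores are made-up model outputs.
y = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])
s = np.array([0.1, 0.3, 0.8, 0.9, 0.2, 0.7, 0.4, 0.95, 0.6, 0.15])
print(auc_with_bootstrap_ci(y, s))
```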
Milvue (274 days)
SmartChest is a radiological computer assisted triage and notification software that analyzes frontal chest X-ray images (Postero-Anterior (PA) or Antero-Posterior (AP)) of transitional adolescents (18-21 yo, but treated like adults) and adults (≥22 yo) for the presence of suspected pleural effusion and/or pneumothorax. SmartChest uses an artificial intelligence algorithm to analyze the images for features suggestive of critical findings and provides case-level output available to a PACS (or other DICOM storage platforms) for worklist prioritization.
As a passive notification for prioritization-only software tool within the standard of care workflow, SmartChest does not send a proactive alert directly to a trained medical specialist.
SmartChest is not intended to direct attention to a specific portion of an image. Its results are not intended to be used on a stand-alone basis for clinical decision-making.
SmartChest is a radiological computer assisted triage and notification software that analyzes frontal chest X-ray images (Postero-Anterior (PA) and/or Antero-Posterior (AP)) of transitional adolescents (18 ≤ age ≤ 21 yo, but treated like adults) and adults (age ≥ 22 yo) for the presence of suspected pleural effusion and/or pneumothorax. The software utilizes AI-based image analysis algorithms to detect the findings.
SmartChest provides case-level output for worklist prioritization by appropriately trained medical specialists qualified to interpret chest radiographs. Images are automatically received from the user's image acquisition or storage systems (e.g., PACS or other DICOM storage platforms) and processed by SmartChest for analysis. After receiving chest X-ray images, the device automatically analyzes them and identifies pre-specified findings (pleural effusion and/or pneumothorax). The analysis results are then passively sent by SmartChest via a notification to the worklist software in use (PACS or other platforms).
The results are made available via a newly generated DICOM series (containing a secondary capture image), whose DICOM tags contain the following information:
- "SUSPECTED FINDING" or "CASE PROCESSED" if the algorithm ran successfully, or "NOT PROCESSED" if the algorithm receives a study containing chest images that are not part of the intended use (for example, lateral views or excluded ages).
- "SUSPECTED PLEURAL EFFUSION" or "SUSPECTED PNEUMOTHORAX" if one pre-specified finding category is identified, or "SUSPECTED PLEURAL EFFUSION, PNEUMOTHORAX" if both pre-specified finding categories are identified.
- The secondary capture image returned to the storage system indicates, at the study level:
  - the number of images received by SmartChest,
  - the number of images processed by SmartChest,
  - the status of the study: "NOT PROCESSED", "SUSPECTED FINDING", or "CASE PROCESSED".
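As an illustration of this output format, the sketch below builds a derived secondary-capture DICOM series carrying the study-level status in its tags, assuming pydicom >= 2.2. The choice of tags (SeriesDescription, ImageComments) is an illustrative assumption; the summary does not specify which tags SmartChest populates, and real output would also carry the summary image's pixel data.

```python
# Hedged sketch: packaging a triage result as a new secondary-capture series.
from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import (ExplicitVRLittleEndian, SecondaryCaptureImageStorage,
                         generate_uid)

def make_result_sc(study_uid, status, n_received, n_processed):
    meta = FileMetaDataset()
    meta.MediaStorageSOPClassUID = SecondaryCaptureImageStorage
    meta.MediaStorageSOPInstanceUID = generate_uid()
    meta.TransferSyntaxUID = ExplicitVRLittleEndian

    ds = Dataset()
    ds.file_meta = meta
    ds.SOPClassUID = SecondaryCaptureImageStorage
    ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
    ds.StudyInstanceUID = study_uid        # attach the result to the original study
    ds.SeriesInstanceUID = generate_uid()  # a newly generated, derived series
    ds.Modality = "OT"                     # "other", typical for secondary captures
    ds.SeriesDescription = status          # e.g. "SUSPECTED FINDING"
    ds.ImageComments = (f"Images received: {n_received}; "
                        f"processed: {n_processed}; status: {status}")
    return ds

sc = make_result_sc(generate_uid(), "SUSPECTED PLEURAL EFFUSION, PNEUMOTHORAX", 2, 2)
```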
The DICOM storage component may be a Picture Archiving and Communication System (PACS) or another local storage platform. This allows appropriately trained medical specialists to group suspicious exams together so that they may potentially benefit from prioritization. Chest radiographs without an identified anomaly are placed in the worklist for routine review, which is the current standard of care.
The device is not intended to be a rule-out device; for cases that have been processed by the device without a notification of pre-specified suspected findings, the output should not be viewed as indicating that those findings are excluded. The SmartChest device does not alter the order of, nor remove, imaging exams from the interpretation queue. Unflagged cases should still be interpreted by medical specialists.
The notification is contextual and does not provide any diagnostic information. The results are not intended to be used on a stand-alone basis for clinical decision-making. The summary image will display the following statement: "The product is not for Diagnostic Use - For Prioritization Only".
The information provided is a 510(k) summary for the Milvue SmartChest device. Here's a breakdown of its acceptance criteria and the study proving it meets those criteria:
1. Table of Acceptance Criteria and Reported Device Performance:
The document doesn't explicitly state "acceptance criteria" as a separate section with specific numerical thresholds for sensitivity, specificity, and AUC that were set a priori. However, it reports the device's performance metrics for two distinct conditions: Pneumothorax and Pleural Effusion. The implication is that these reported performances met an acceptable level for substantial equivalence to the predicate device.
| Performance Metric | Pneumothorax (Reported) | Pleural Effusion (Reported) |
|---|---|---|
| ROC AUC | 0.989 [0.978; 0.997] | 0.975 [0.960; 0.987] |
| Sensitivity | 92.7% [95% CI: 87.4-96.2] | 93.3% [95% CI: 88.1-96.4] |
| Specificity | 97.3% [95% CI: 93.4-99.1] | 90.0% [95% CI: 84.1-94.1] |
| Mean execution time (local) | 2.322 ± 0.267 seconds | 2.288 ± 0.165 seconds |
| Mean execution time (cloud) | 28.542 ± 8.254 seconds | 28.257 ± 7.226 seconds |
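As context for the bracketed intervals, the sketch below shows one common way to compute sensitivity and specificity with exact (Clopper-Pearson) 95% confidence intervals, using statsmodels. The counts are hypothetical, and the submission may have used a different interval method.

```python
# Sketch: sensitivity/specificity with exact 95% binomial CIs.
from statsmodels.stats.proportion import proportion_confint

def rate_with_ci(successes, total):
    """Point estimate with a 95% Clopper-Pearson (exact binomial) interval."""
    lo, hi = proportion_confint(successes, total, alpha=0.05, method="beta")
    return successes / total, (lo, hi)

tp, fn = 139, 11  # hypothetical positive cases: detected vs. missed
tn, fp = 146, 4   # hypothetical negative cases: cleared vs. falsely flagged

print("sensitivity:", rate_with_ci(tp, tp + fn))
print("specificity:", rate_with_ci(tn, tn + fp))
```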
2. Sample Size and Data Provenance for the Test Set:
- Sample Size for Each Study: 300 Chest X-Ray cases
- Total Test Set Size (extrapolated based on two studies): 600 Chest X-Ray cases (300 for pneumothorax, 300 for pleural effusion).
- Data Provenance: The test data was obtained from multiple institutions across the US. It was from sites different from the training data sites, ensuring independence. It included images from rural (49 for pneumothorax, 53 for pleural effusion) and urban (251 for pneumothorax, 247 for pleural effusion) sites from states like New York, North Carolina, Texas, and Washington. The data was retrospective.
3. Number of Experts and Qualifications for Ground Truth - Test Set:
- Number of Experts: Three.
- Qualifications of Experts: All three were ABR-certified (American Board of Radiology-certified) radiologists with a minimum of 5 years of experience.
4. Adjudication Method for the Test Set:
- The ground truth was established by three ABR-certified radiologists. The first two radiologists independently interpreted each case. The third radiologist independently reviewed cases where there was a disagreement between the first two. The final ground truth was determined by majority consensus (2+1 adjudication).
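The 2+1 rule is simple enough to state as code; the sketch below is an illustration of the adjudication logic described above, not Milvue's actual tooling, and the reader inputs are hypothetical booleans (finding present/absent).

```python
# Illustration of 2+1 majority-consensus adjudication: two primary reads,
# with a third reader breaking ties.
from typing import Optional

def adjudicate(read1: bool, read2: bool, read3: Optional[bool] = None) -> bool:
    """Return the ground-truth label under 2+1 adjudication."""
    if read1 == read2:
        return read1  # primary readers agree: no adjudication needed
    if read3 is None:
        raise ValueError("disagreement requires a third, tie-breaking read")
    return read3      # the third reader's call forms the 2-vs-1 majority

assert adjudicate(True, True) is True                 # agreement case
assert adjudicate(True, False, read3=False) is False  # tie broken by reader 3
```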
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- The document does not indicate that an MRMC comparative effectiveness study was done to measure human reader improvement with AI assistance. The study focuses solely on the standalone performance of the AI algorithm.
6. Standalone Performance Study:
- Yes, a standalone performance study was done. Two individual standalone performance assessment studies were conducted to evaluate the effectiveness of SmartChest for pneumothorax and pleural effusion separately.
7. Type of Ground Truth Used:
- The ground truth for the test set was established by expert consensus among three ABR-certified radiologists.
8. Sample Size for the Training Set:
- The training set was composed of 9,560 images.
9. How the Ground Truth for the Training Set Was Established:
- The document states that the training data was collected from an unfiltered stream of exams in four French institutions between October 2018 and December 2021. It lists the distribution of exams per pathology (No findings, Pleural Effusion, Pneumothorax).
- While it explains where and when the data was collected and the distribution of findings, the document does not explicitly describe the detailed process by which the ground truth labels for the training set were established. It only mentions that the data was processed to fit the model's requirements and that the images were used to train the model. This typically implies that the original clinical reports or expert annotations associated with these studies were used to create the ground truth labels for training, but the specific expert qualifications or adjudication methods for the training set ground truth are not provided.