Search Results
Found 2 results
510(k) Data Aggregation
(109 days)
The diagnostic ultrasound system and probes are designed to obtain ultrasound images and analyze body fluids.
The clinical applications include: Fetal/Obstetrics, Abdominal, Gynecology, Intra-operative, Pediatric, Small Organ, Neonatal Cephalic, Adult Cephalic, Trans-vaginal, Muscular-Skeletal (Conventional, Superficial), Urology, Cardiac Adult, Cardiac Pediatric, Thoracic, Peripheral vessel and Dermatology.
It is intended for use by, or by the order of, and under the supervision of, an appropriately trained healthcare professional who is qualified for direct use of medical devices. It can be used in hospitals, private practices, clinics and similar care environments for the clinical diagnosis of patients.
Modes of Operation: 2D mode, Color Doppler mode, Power Doppler (PD) mode, M mode, Pulsed Wave (PW) Doppler mode, Continuous Wave (CW) Doppler mode, Tissue Doppler Imaging (TDI) mode, Tissue Doppler Wave (TDW) mode, ElastoScan mode, Combined modes, Multi-Image mode (Dual, Quad), 3D/4D mode, MV-Flow mode.
The V5, H5, XV5, XH5, V4, H4, XV4, and XH4 Diagnostic Ultrasound Systems are general-purpose, mobile, software-controlled diagnostic ultrasound systems. Their function is to acquire ultrasound data and to display the data in 2D mode, Color Doppler mode, Power Doppler (PD) mode, M mode, Pulsed Wave (PW) Doppler mode, Continuous Wave (CW) Doppler mode, Tissue Doppler Imaging (TDI) mode, Tissue Doppler Wave (TDW) mode, ElastoScan mode, Combined modes, MV-Flow mode, Multi-Image mode (Dual, Quad), and 3D/4D mode. The V5/H5/XV5/XH5 and V4/H4/XV4/XH4 systems also give the operator the ability to measure anatomical structures and offer analysis packages that provide information used by competent healthcare professionals to make a diagnosis. The systems have a real-time acoustic output display with two basic indices, a mechanical index and a thermal index, both of which are displayed automatically.
Here's a breakdown of the acceptance criteria and supporting study details for each AI-powered feature of the V5/H5/XV5/XH5 and V4/H4/XV4/XH4 Diagnostic Ultrasound Systems, as provided in the document:
HeartAssist (Fetal and Adult)
1. Table of Acceptance Criteria and Reported Device Performance:
| Feature | Modality | Acceptance Criteria | Reported Performance |
|---|---|---|---|
| View Recognition | Fetal | Average recognition accuracy ≥ 89% | 93.21% |
| View Recognition | Adult | Average recognition accuracy ≥ 84% | 98.31% |
| Segmentation | Fetal | Average Dice-score ≥ 0.8 | 0.88 |
| Segmentation | Adult | Average Dice-score ≥ 0.9 | 0.93 |
| Size Measurement (Error Rate) | Fetal | Error rate of area measurement ≤ 8% | ≤ 8% |
| Size Measurement (Error Rate) | Fetal | Error rate of angle measurement ≤ 4% | ≤ 4% |
| Size Measurement (Error Rate) | Fetal | Error rate of circumference measurement ≤ 11% | ≤ 11% |
| Size Measurement (Error Rate) | Fetal | Error rate of diameter measurement ≤ 11% | ≤ 11% |
| Size Measurement (Pearson Correlation) | Adult | PCC (B-mode) ≥ 0.8 | Passed (PCC ≥ 0.8) |
| Size Measurement (Pearson Correlation) | Adult | PCC (M-mode) ≥ 0.8 | Passed (PCC ≥ 0.8) |
| Size Measurement (Pearson Correlation) | Adult | PCC (Doppler mode) ≥ 0.8 | Passed (PCC ≥ 0.8) |
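The summary does not spell out how these metrics are computed, but both are standard. Below is a minimal Python sketch, assuming Dice is computed on binary segmentation masks and the PCC on paired device/cardiologist measurements; the function names and the threshold comment are illustrative, not from the document:

```python
import numpy as np

def dice_score(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Dice coefficient between a predicted mask and an expert-drawn mask."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    denom = pred.sum() + gt.sum()
    return float(2.0 * np.logical_and(pred, gt).sum() / denom) if denom else 1.0

def pearson_cc(device_vals, expert_vals) -> float:
    """Pearson correlation coefficient between device and expert measurements."""
    return float(np.corrcoef(device_vals, expert_vals)[0, 1])

# Against the adult criteria above, a feature would pass with
# dice_score(...) >= 0.9 and pearson_cc(...) >= 0.8.
```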
2. Sample Sizes and Data Provenance:
- Test Set (Fetal):
  - Number of individuals: 80
  - Number of static images: 280
  - Provenance: Mix of retrospective and prospective data collected in clinical practice from five hospitals.
  - Country of Origin: Not explicitly stated; the clinical experts come from multiple countries, including the US and Korea, suggesting the data may originate from those regions.
- Test Set (Adult):
  - Number of individuals: 30
  - Number of static images: 540
  - Provenance: Mix of retrospective and prospective data collected in clinical practice from five hospitals.
  - Country of Origin: Not explicitly stated; the clinical experts come from multiple countries, including the US and Korea, suggesting the data may originate from those regions.
3. Number of Experts and Qualifications for Ground Truth:
- Fetal:
  - 3 participating experts:
    - 1 obstetrician with >20 years of experience in fetal cardiology
    - 2 sonographers with >10 years of experience in fetal cardiology
  - Supervised by 1 obstetrician with >25 years of experience.
- Adult:
  - 4 experts:
    - 2 cardiologists with at least 10 years of experience
    - 2 sonographers with at least 10 years of experience
4. Adjudication Method:
- Fetal: All acquired images were first classified into correct views by three participating experts. Afterwards, corresponding anatomy areas were manually drawn. The entire process was supervised by another obstetrician.
- Adult: Experts manually traced the contours of the heart and the signal outline on the images. (Implicitly, this suggests an expert consensus or expert-defined ground truth, but a specific adjudication method like "2+1" is not detailed.)
5. MRMC Comparative Effectiveness Study:
- No MRMC comparative effectiveness study was explicitly mentioned for evaluating human readers with and without AI assistance for HeartAssist. The study focused on standalone performance metrics against ground truth.
6. Standalone Performance Study:
- Yes, a standalone study was performed. The reported performance metrics (recognition accuracy, Dice-score, error rates, PCC values) are direct measurements of the algorithm's performance against expert-defined ground truth.
7. Type of Ground Truth Used:
- Expert consensus / Expert-defined outlines. For fetal images, experts classified views and manually drew anatomy areas. For adult images, cardiologists and sonographers manually traced contours. The PCC for adult size measurement was calculated against "the cardiologist's measurements."
8. Sample Size for Training Set:
- Not explicitly stated ("Data used for training, tuning and validation purpose are completely separated").
9. How Ground Truth for Training Set Was Established:
- Not explicitly stated, but it can be inferred that a similar expert-based process was used for generating ground truth for training data, as noted for the validation data: "All acquired images for training, tuning and validation were first classified into the correct views by three participating experts. Afterwards, corresponding anatomy areas were manually drawn..."
BiometryAssist (Fetal)
1. Table of Acceptance Criteria and Reported Device Performance:
| Feature | Acceptance Criteria | Reported Performance |
|---|---|---|
| Segmentation | Average Dice-score ≥ 0.8 | 0.919 |
| Size Measurement (Circumference) | Error rate ≤ 8% | ≤ 8% |
| Size Measurement (Distance) | Error rate ≤ 4% | ≤ 4% |
| Size Measurement (NT) | Error ≤ 1 mm | ≤ 1 mm |
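The error-rate formula is not given in the summary. A plausible reading, sketched below, is the relative deviation of the device measurement from the expert reference; note that the NT criterion is an absolute error in millimetres rather than a rate. All names and example thresholds are illustrative:

```python
def relative_error_pct(measured: float, reference: float) -> float:
    """Relative error of a device measurement vs. the expert reference, in percent."""
    return abs(measured - reference) / reference * 100.0

def absolute_error_mm(measured_mm: float, reference_mm: float) -> float:
    """Absolute error in millimetres (used for the NT criterion)."""
    return abs(measured_mm - reference_mm)

# A circumference value passes if relative_error_pct(c_dev, c_ref) <= 8.0;
# an NT value passes if absolute_error_mm(nt_dev, nt_ref) <= 1.0.
```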
2. Sample Sizes and Data Provenance:
- Test Set:
  - Number of individuals: 77
  - Number of static images: 320
  - Provenance: Mix of retrospective and prospective data collected in clinical practice from two hospitals.
  - Country of Origin: Subjects were American and Korean, suggesting the data come from the US and Korea.
3. Number of Experts and Qualifications for Ground Truth:
- 3 participating experts:
  - 1 obstetrician with >20 years of experience in fetal cardiology
  - 2 sonographers with >10 years of experience in fetal cardiology
- Supervised by 1 obstetrician with >25 years of experience.
4. Adjudication Method:
- All acquired images were first classified into the correct views by three participating experts. Afterwards, corresponding anatomy areas were manually drawn for each image. The entire process was supervised by another obstetrician.
5. MRMC Comparative Effectiveness Study:
- No MRMC comparative effectiveness study was explicitly mentioned.
6. Standalone Performance Study:
- Yes, a standalone study was performed. The reported performance metrics (Dice-score, error rates) are direct measurements of the algorithm's performance against expert-defined ground truth.
7. Type of Ground Truth Used:
- Expert consensus / Expert-defined outlines. Experts classified views and manually drew anatomy areas.
8. Sample Size for Training Set:
- Not explicitly stated ("Data used for training, tuning and validation purpose are completely separated").
9. How Ground Truth for Training Set Was Established:
- It's inferred that a similar expert-based process was used as for validation data: "All acquired images for training, tuning and validation were first classified into the correct views by three participating experts. Afterwards, corresponding anatomy areas were manually drawn..."
ViewAssist (Fetal)
1. Table of Acceptance Criteria and Reported Device Performance:
| Feature | Acceptance Criteria | Reported Performance |
|---|---|---|
| View Recognition | Average accuracy ≥ 89% | 94.26% |
| Anatomy Annotation (Segmentation) | Average Dice-score ≥ 0.8 | 0.885 |
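The summary reports an "average accuracy" for view recognition without defining the averaging. One common convention, assumed in this sketch, is the macro-average of per-view recall taken from a confusion matrix:

```python
import numpy as np

def average_recognition_accuracy(confusion: np.ndarray) -> float:
    """Mean per-view recall in percent, given a confusion matrix
    with rows = true view and columns = predicted view."""
    per_view_recall = np.diag(confusion) / confusion.sum(axis=1)
    return float(per_view_recall.mean() * 100.0)
```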
2. Sample Sizes and Data Provenance:
- Test Set:
  - Number of individuals: 77
  - Number of static images: 680
  - Provenance: Mix of retrospective and prospective data collected in clinical practice from two hospitals.
  - Country of Origin: Subjects were American and Korean, suggesting the data come from the US and Korea.
3. Number of Experts and Qualifications for Ground Truth:
- 3 participating experts:
  - 1 obstetrician with >20 years of experience in fetal cardiology
  - 2 sonographers with >10 years of experience in fetal cardiology
- Supervised by 1 obstetrician with >25 years of experience.
4. Adjudication Method:
- All acquired images were first classified into the correct views by three participating experts. Afterwards, corresponding anatomy areas were manually drawn for each image. The entire process was supervised by another obstetrician.
5. MRMC Comparative Effectiveness Study:
- No MRMC comparative effectiveness study was explicitly mentioned.
6. Standalone Performance Study:
- Yes, a standalone study was performed. The reported performance metrics (recognition accuracy, Dice-score) are direct measurements of the algorithm's performance against expert-defined ground truth.
7. Type of Ground Truth Used:
- Expert consensus / Expert-defined outlines. Experts classified views and manually drew anatomy areas.
8. Sample Size for Training Set:
- Not explicitly stated ("Data used for training, tuning and validation purpose are completely separated").
9. How Ground Truth for Training Set Was Established:
- It's inferred that a similar expert-based process was used as for validation data: "All acquired images for training, tuning and validation were first classified into the correct views by three participating experts. Afterwards, corresponding anatomy areas were manually drawn..."
UterineAssist
1. Table of Acceptance Criteria and Reported Device Performance:
| Feature | Acceptance Criteria | Reported Performance |
|---|---|---|
| Segmentation (Uterus) | Average Dice-score: no explicit criterion stated | 96% |
| Segmentation (Endometrium) | Average Dice-score: no explicit criterion stated | 92% |
| Feature Points Extraction (Uterus) | Errors ≤ 5.8 mm | ≤ 5.8 mm |
| Feature Points Extraction (Endometrium) | Errors ≤ 4.3 mm | ≤ 4.3 mm |
| Size Measurement | Errors ≤ 2.0 mm | ≤ 2.0 mm |
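The feature-point errors are reported in millimetres without a stated formula. A minimal sketch, assuming the error is the Euclidean distance between the predicted and expert-marked points scaled by the pixel spacing (`mm_per_pixel` is an assumed parameter, not from the document):

```python
import numpy as np

def point_error_mm(pred_xy, gt_xy, mm_per_pixel: float) -> float:
    """Euclidean distance between predicted and expert feature points, in mm."""
    diff = np.asarray(pred_xy, dtype=float) - np.asarray(gt_xy, dtype=float)
    return float(np.linalg.norm(diff) * mm_per_pixel)

# e.g. a uterus feature point would pass if point_error_mm(p, gt, spacing) <= 5.8
```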
2. Sample Sizes and Data Provenance:
- Test Set (Segmentation):
  - Number of static images: 450 sagittal and 150 transverse uterus images (600 total).
  - Number of individuals: 60 individuals contributed to the validation dataset (covering segmentation as well as feature points/size measurement).
  - Provenance: Mix of retrospective and prospective data collected in clinical practice from three hospitals.
  - Country of Origin: All subjects were Korean.
- Test Set (Feature Points Extraction & Size Measurement):
  - Number of static images: 48 sagittal and 44 transverse plane images of the uterus (92 total).
  - Number of individuals: The same 60 individuals as for segmentation.
  - Provenance: Mix of retrospective and prospective data collected in clinical practice from three hospitals.
  - Country of Origin: All subjects were Korean.
3. Number of Experts and Qualifications for Ground Truth:
- 3 participating OB/GYN experts with >10 years' experience.
4. Adjudication Method:
- Segmentation of the ground truth was generated by three participating OB/GYN experts. The image set was divided into three subsets, and each expert drew ground truths for one subset. Ground truths drawn by one expert were cross-checked by the other two experts. Any images not meeting inclusion/exclusion criteria were excluded.
5. MRMC Comparative Effectiveness Study:
- No MRMC comparative effectiveness study was explicitly mentioned.
6. Standalone Performance Study:
- Yes, a standalone study was performed. The reported performance metrics (Dice-score, error rates) are direct measurements of the algorithm's performance against expert-defined ground truth.
7. Type of Ground Truth Used:
- Expert consensus / Expert-drawn contours and measurements.
8. Sample Size for Training Set:
- Not explicitly stated ("Data used for test/training validation purpose are completely separated").
9. How Ground Truth for Training Set Was Established:
- It's inferred that a similar expert-based process was used as for validation data: "Segmentation of the ground truth was generated by three participating OB/GYN experts with more than 10 years' experience." This process for ground truth establishment is also applicable to training data generation.
NerveTrack (Nerve Detection)
1. Table of Acceptance Criteria and Reported Device Performance:
| Feature | Acceptance Criteria | Reported Performance |
|---|---|---|
| Detection | No explicit accuracy criterion stated; a detection is considered correct if its Dice coefficient is ≥ 0.5 | Average accuracy over 10 image sequences: 89.91% (95% CI: 86.51, 93.35) |
| Speed | No explicit criterion stated | Average speed: 3.98 fps (95% CI: 3.98, 3.99) |
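Taking the frame-correctness rule stated here (Dice ≥ 0.5) together with the speed definition given in the companion V8/V7/V6 submission below (FPS = 1000 / average per-frame latency in ms), this is a minimal sketch of how such summary numbers could be derived; the per-frame Dice values and latency are assumed inputs:

```python
def detection_accuracy_pct(per_frame_dice: list[float], threshold: float = 0.5) -> float:
    """Percent of frames whose detection overlaps the GT with Dice >= threshold."""
    return 100.0 * sum(d >= threshold for d in per_frame_dice) / len(per_frame_dice)

def fps_from_latency(avg_latency_ms: float) -> float:
    """Frames per second given the average per-frame latency in milliseconds."""
    return 1000.0 / avg_latency_ms
```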
2. Sample Sizes and Data Provenance:
- Test Set:
  - Number of individuals: 22
  - Number of images: 3,999 (extracted from 2D sequences, with each individual contributing at least ten images per sequence).
  - Provenance: Prospective data collected in clinical practice from eight hospitals.
  - Country of Origin: All subjects were Korean.
3. Number of Experts and Qualifications for Ground Truth:
- 3 participating experts:
  - 1 anesthesiologist with >10 years of experience in pain management (drew the GT)
  - "Other doctors" with >10 years of experience (verified the GT)
- The doctors who performed the ultrasound scans were directly involved in GT construction.
4. Adjudication Method:
- Manual drawing of nerve areas by an anesthesiologist. For verification of GT, "other doctors" checked every frame. If they disagreed, corrections were made to finalize the GT.
5. MRMC Comparative Effectiveness Study:
- No MRMC comparative effectiveness study was explicitly mentioned.
6. Standalone Performance Study:
- Yes, a standalone study was performed. The reported accuracy and speed are direct measurements of the algorithm's performance against expert-defined ground truth.
7. Type of Ground Truth Used:
- Expert consensus / Expert-annotated rectangular regions for nerve locations.
8. Sample Size for Training Set:
- Not explicitly stated ("Data used for training, tuning and validation purpose are completely separated").
9. How Ground Truth for Training Set Was Established:
- It's inferred that a similar expert-based process was used as for validation data: "The GT data were built by three participating experts. Nerve areas in all acquired images for training, tuning and validation were manually drawn by an anesthesiologist... For verification of GT, other doctors with more than 10 years of experience checked every frame... corrections were made to make the final GT."
NerveTrack (Nerve Segmentation)
1. Table of Acceptance Criteria and Reported Device Performance:
| Feature | Acceptance Criteria | Reported Performance |
|---|---|---|
| Segmentation | No explicit accuracy criterion stated; a segmentation is considered correct if its Dice coefficient is ≥ 0.5 | Average accuracy over nine image sequences: 98.30% (95% CI: 95.43, 100) |
| Speed | No explicit criterion stated | Average speed: 3.98 fps (95% CI: 3.98, 3.98) |
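The summary reports 95% confidence intervals for the sequence-level averages without stating the method. A common choice, assumed in this sketch, is a normal-approximation interval over the per-sequence accuracies:

```python
import math

def mean_with_ci95(values: list[float]) -> tuple[float, float, float]:
    """Mean and normal-approximation 95% CI bounds over per-sequence accuracies."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    half = 1.96 * sd / math.sqrt(n)
    return mean, mean - half, mean + half
```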
2. Sample Sizes and Data Provenance:
- Test Set:
  - Number of individuals: 20
  - Number of images: 1,675 (extracted from 2D sequences, with each individual contributing at least ten images per sequence).
  - Provenance: Prospective data collected in clinical practice from ten hospitals.
  - Country of Origin: All subjects were Korean.
3. Number of Experts and Qualifications for Ground Truth:
- 3 participating experts:
  - 1 anesthesiologist with >10 years of experience in pain management (drew the GT)
  - "Other doctors" with >10 years of experience (verified the GT)
- The doctors who performed the ultrasound scans were directly involved in GT construction.
4. Adjudication Method:
- Manual drawing of nerve areas (contours) by an anesthesiologist. For verification of GT, "other doctors" checked every frame. If they disagreed on nerve and other organ contours, corrections were made to finalize the GT.
5. MRMC Comparative Effectiveness Study:
- No MRMC comparative effectiveness study was explicitly mentioned.
6. Standalone Performance Study:
- Yes, a standalone study was performed. The reported accuracy and speed are direct measurements of the algorithm's performance against expert-defined ground truth.
7. Type of Ground Truth Used:
- Expert consensus / Expert-annotated contours for nerve regions.
8. Sample Size for Training Set:
- Not explicitly stated ("Data used for training, tuning and validation purpose are completely separated").
9. How Ground Truth for Training Set Was Established:
- It's inferred that a similar expert-based process was used as for validation data: "The GT data were built by three participating experts. Nerve areas in all acquired images for training, tuning and validation were manually drawn by an anesthesiologist... For verification of GT, other doctors with more than 10 years of experience checked every frame... corrections were made to make the final GT."
(109 days)
The diagnostic ultrasound system and transducers are designed to obtain ultrasound images and analyze body fluids.
The clinical applications include: Fetal/Obstetrics, Abdominal, Gynecology, Intra-operative, Small Organ, Neonatal Cephalic, Adult Cephalic, Trans-vaginal, Muscular-Skeletal (Conventional, Superficial), Urology, Cardiac Adult, Cardiac Pediatric, Thoracic, Trans-esophageal (Cardiac) and Peripheral vessel.
It is intended for use by, or by the order of, and under the supervision of, an appropriately trained healthcare professional who is qualified for direct use of medical devices. It can be used in hospitals, private practices, clinics and similar care environments for the clinical diagnosis of patients.
Modes of Operation: 2D mode, Color Doppler mode, Power Doppler (PD) mode, M mode, Pulsed Wave (PW) Doppler mode, Continuous Wave (CW) Doppler mode, Tissue Doppler Imaging (TDI) mode, Tissue Doppler Wave (TDW) mode, ElastoScan mode, Combined modes, Multi-Image mode (Dual, Quad), 3D/4D mode.
The V8 / V7 / V6 / H8 / H7 / H6 are general-purpose, mobile, software-controlled diagnostic ultrasound systems. Their function is to acquire ultrasound data and to display the data in 2D mode, Color Doppler mode, Power Doppler (PD) mode, M mode, Pulsed Wave (PW) Doppler mode, Continuous Wave (CW) Doppler mode, Tissue Doppler Imaging (TDI) mode, Tissue Doppler Wave (TDW) mode, ElastoScan mode, Combined modes, Multi-Image mode (Dual, Quad), and 3D/4D mode. The V8 / V7 / V6 / H8 / H7 / H6 also give the operator the ability to measure anatomical structures and offer analysis packages that provide information used by competent healthcare professionals to make a diagnosis. The systems have a real-time acoustic output display with two basic indices, a mechanical index and a thermal index, both of which are displayed automatically.
Here's an analysis of the acceptance criteria and the study proving the device meets those criteria, based on the provided document:
Device: Samsung Medison V8/H8, V7/H7, V6/H6 Diagnostic Ultrasound System with NerveTrack AI
1. Table of Acceptance Criteria and Reported Device Performance
| Validation Type | Definition | Acceptance Criteria | Reported Performance (Average) | Standard Deviation | 95% CI |
|---|---|---|---|---|---|
| Nerve Detection | | | | | |
| Accuracy (%) | Correctly detected frames / total frames with nerve × 100 | ≥ 80% | 90.3% | 4.8 | 88.6 to 92.0 |
| Speed (FPS) | 1000 / average latency per frame (msec) | ≥ 2 FPS | 3.61 | 0.25 | 3.43 to 3.78 |
| Nerve Segmentation | | | | | |
| Accuracy (%) | Correctly segmented frames / total frames with nerve × 100 | ≥ 80% | 98.69% | 0.64 | 96.31 to 100 |
| Speed (FPS) | 1000 / average latency per frame (msec) | ≥ 2 FPS | 3.62 | 0.36 | 3.49 to 3.75 |
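As a worked check of this table's thresholds (the pass/fail logic is implied by the stated criteria, not spelled out in the document):

```python
def meets_criteria(accuracy_pct: float, fps: float) -> bool:
    """Acceptance criteria from the table: accuracy >= 80% and speed >= 2 FPS."""
    return accuracy_pct >= 80.0 and fps >= 2.0

assert meets_criteria(90.3, 3.61)   # nerve detection: passes both criteria
assert meets_criteria(98.69, 3.62)  # nerve segmentation: passes both criteria
```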
2. Sample Size Used for the Test Set and Data Provenance
- Nerve Detection Test Set:
  - Number of Subjects: 18 (13 female, 5 male)
  - Number of Images/Frames: 2,146
  - Data Provenance: All subjects were Korean. The document does not state whether the data were retrospective or prospective, but the described collection protocol (sliding the transducer at specific speeds) suggests the dataset was collected intentionally for this study.
- Nerve Segmentation Test Set:
  - Number of Subjects: 11 (8 female, 3 male)
  - Number of Images/Frames: 3,836
  - Data Provenance: All subjects were Korean; as with the detection dataset, the collection method points to an intentionally collected dataset.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
- Number of Experts: 15 experts were involved (10 anesthesiologists and 5 sonographers).
- Qualifications of Experts: All experts had "more than 10 years of experience."
4. Adjudication Method for the Test Set
The ground truth establishment method was as follows:
- One anesthesiologist who scanned the ultrasound directly drew the initial ground truth (GT) of the nerve location.
- "Two or more other anesthesiologists and sonographers reviewed and confirmed that it was correct."
- "If there was any mistake during the review, it was revised again."
This describes a form of consensus-based adjudication with an initial ground truth creator and subsequent confirmation/revision by multiple independent experts. It's not a strict N+M or sequential read, but rather a collaborative review and refinement process.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
No, the document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done to compare human readers with vs. without AI assistance. The study focuses solely on the standalone performance of the AI algorithm (NerveTrack).
6. If a Standalone (algorithm only without human-in-the-loop performance) was done
Yes, the document explicitly states: "The standalone performance of NerveTrack was evaluated..." The "Summary Performance data" tables provided are for the algorithm's performance without a human in the loop.
7. The Type of Ground Truth Used
The ground truth used was expert consensus. It was established by a team of experienced anesthesiologists and sonographers who reviewed and confirmed the actual nerve locations in the ultrasound images.
8. The Sample Size for the Training Set
The document states: "The training data used for the training of the NerveTrack algorithm are independent of the data used to test the NerveTrack algorithm." However, the exact sample size for the training set is not provided in the given text.
9. How the Ground Truth for the Training Set was Established
The document mentions that the training data is independent of the test data. While it does not explicitly detail the ground truth establishment method for the training set, it is highly probable that a similar expert-based annotation process (as described for the test set) was used to establish the ground truth for the training data. This is a common practice in AI development to ensure consistency in data labeling.