510(k) Data Aggregation
(172 days)
SpectoMed (v1.0) provides real-time spatial visualization of medical imaging for the manipulation, processing, review, analysis, communication and media interchange of multi-dimensional images of several modalities as defined by the DICOM standard, such as CT or MRI imaging datasets. Furthermore, SpectoMed (v1.0) aids the clinician in the preparation of a surgical intervention with measurements and import of compatible objects within the realistic spatial visuals of the patient's anatomy, as well as their intraoperative display. SpectoMed (v1.0) is intended as an aid for assessing patient images.
SpectoMed (v1.0) is a software-only device that allows trained medical professionals to review CT and MRI image data in three-dimensional (3D) format and/or in virtual reality (VR) and/or augmented reality (AR) interfaces. The 3D rendered images are accessible through the software desktop application and, if desired, through compatible VR and AR headsets, which are used for preoperative surgical planning and for display during intervention/surgery.
The SpectoMed (v1.0) product is to be used to assist in medical image review. Intended users are trained medical professionals, including imaging technicians, clinicians and surgeons.
The 3D images generated using SpectoMed (v1.0) are intended as an aid for assessing CT or MR images that are used for diagnosis, preoperative planning and/or during intervention/surgery. SpectoMed (v1.0) is intended to be used in environments where the usage of a computer display or an XR system is safe such as:
- Office spaces
- Operating rooms, under these two scenarios:
  - Display of the SpectoMed (v1.0) visual output on existing monitors/screens in the operating room
  - Use of XR displays in a non-sterile situation, e.g. before draping the patient or outside the sterility area, and thus strictly in compliance with sterility protocols
SpectoMed (v1.0) is not meant to:
- be used in a sterile environment,
- register 3D images to patients, or
- be used with the user's hands on the patient during an interventional or surgical procedure.
Here's a breakdown of the acceptance criteria and study information for SpectoMed (v1.0) based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance:
Feature/Function | Tool in Subject Device: SpectoMed (v1.0) | Tool in Reference Device: Osirix MD (K101342) | Acceptance Criteria | Reported Device Performance |
---|---|---|---|---|
Distance Measurements | Distance | Region of Interest (ROI) • 3D coordinates | No statistical difference between distributions of measurements obtained for SpectoMed (v1.0) or the reference device Osirix MD, as evaluated per t-test statistics. Tests performed at varying reference CT phantom scan resolutions. | Met (implied by conclusion) |
Angle Measurements | Angle | Region of Interest (ROI) • 3D coordinates | No statistical difference between distributions of measurements obtained for SpectoMed (v1.0) or the reference device Osirix MD, as evaluated per t-test statistics. Tests performed at varying reference CT phantom scan resolutions. | Met (implied by conclusion) |
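The acceptance criterion in both rows is framed as a two-sample t-test for no statistically significant difference between the two tools' measurement distributions on phantom scans. Purely as an illustration (not the submitter's actual test script), a check of that form could look like the sketch below; the measurement values, the use of Welch's variant, and the 0.05 significance level are all assumptions.

```python
# Illustrative only: compare distance measurements from the subject and reference
# tools on the same phantom objects and test for a statistically significant
# difference. The values, Welch's variant, and alpha = 0.05 are assumptions.
import numpy as np
from scipy import stats

subject_mm = np.array([10.1, 10.3, 9.9, 10.0, 10.2, 9.8])     # hypothetical SpectoMed (v1.0) measurements
reference_mm = np.array([10.0, 10.2, 10.1, 9.9, 10.1, 10.0])  # hypothetical Osirix MD measurements

t_stat, p_value = stats.ttest_ind(subject_mm, reference_mm, equal_var=False)

alpha = 0.05
if p_value >= alpha:
    print(f"p = {p_value:.3f}: no statistically significant difference detected")
else:
    print(f"p = {p_value:.3f}: distributions differ at the {alpha} level")
```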
2. Sample Size Used for the Test Set and Data Provenance:
The document mentions "reference digital phantoms" and "varying reference CT phantom scan resolutions" for performance testing. It does not specify a numerical sample size for the test set or the country of origin of the data directly. The data is implied to be synthetic or standardized (phantoms) rather than from real patient cases. It is also not specified if it was retrospective or prospective.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
The document does not explicitly state the number of experts used to establish the ground truth for the test set or their qualifications. The ground truth for measurement performance was established by comparing to a "cleared device (Osirix MD K101342)."
4. Adjudication Method for the Test Set:
The document does not describe an adjudication method for the test set in the traditional sense, as the ground truth for measurements was established by comparison to a cleared device (Osirix MD K101342) and statistical analysis (t-test). It does not mention human adjudication of results.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done:
No, an MRMC comparative effectiveness study was not done. The performance testing described focuses on the accuracy of measurements compared to a reference device.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done:
Yes, the performance testing described for measurements (distance and angle) appears to be a standalone (algorithm only) performance evaluation. The device's measurements were compared against those of a reference device on digital phantoms, without mentioning human intervention during the measurement process for the test.
7. The Type of Ground Truth Used:
The ground truth used for measurement performance was based on the measurements provided by a previously cleared medical device (Osirix MD K101342), applied to "reference digital phantoms."
8. The Sample Size for the Training Set:
The document does not mention a training set or its sample size. The SpectoMed (v1.0) is described as a software-only device for manipulating, processing, reviewing, and analyzing imaging data and aiding in surgical intervention preparation. This description suggests it might not be an AI/ML-driven device requiring extensive training data in the typical sense for classification or detection tasks, but rather a tool for visualization and measurement.
9. How the Ground Truth for the Training Set Was Established:
As no training set is mentioned, the method for establishing its ground truth is not applicable/provided in the document.
(93 days)
VitruvianScan is indicated for use as a magnetic resonance diagnostic device software application for non-invasive fat and muscle evaluation that enables the generation, display and review of magnetic resonance medical image data.
VitruvianScan produces quantified metrics and composite images from magnetic resonance medical image data which when interpreted by a trained healthcare professional, yield information that may assist in clinical decisions.
VitruvianScan is a standalone, post processing software medical device. VitruvianScan enables the generation, display and review of magnetic resonance (MR) medical image data from a single timepoint (one patient visit).
When a referring healthcare professional requests quantitative analysis using VitruvianScan, relevant images are acquired from patients at MRI scanning clinics and are transferred to the Perspectum portal through established secure gateways. Perspectum trained analysts use the VitruvianScan software medical device to process the MRI images and produce the quantitative metrics and composite images. The device output information is then sent to the healthcare professionals for their clinical use.
The metrics produced by VitruvianScan are intended to provide insight into the composition of muscle and fat of a patient. The device is intended to be used as part of an overall assessment of a patient's health and wellness and should be interpreted whilst considering the device's limitations when reviewing or interpreting images.
Here's an analysis of the acceptance criteria and the study provided in the document for the VitruvianScan (v1.0) device:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document does not explicitly state the specific quantitative acceptance criteria for each performance aspect (e.g., a specific percentage for repeatability, a particular correlation coefficient). Instead, it states that "All aspects of the performance tests met the defined acceptance criteria" and that the device "successfully passed the acceptance criteria with no residual anomalies."
However, based on the described performance tests, we can infer the types of acceptance criteria that would have been defined. The document also lacks specific numerical results for the device performance that directly map to these criteria.
Performance Aspect | Inferred Acceptance Criteria (Example) | Reported Device Performance |
---|---|---|
Repeatability of metrics | Coefficient of Variation (CV) or Intraclass Correlation Coefficient (ICC) for various metrics (Visceral Fat, Subcutaneous Fat, Muscle Area) to be within a pre-defined threshold for the same subject, scanner, field strength, and day. | "met the defined acceptance criteria" |
Reproducibility of metrics | Coefficient of Variation (CV) or Intraclass Correlation Coefficient (ICC) for various metrics (Visceral Fat, Subcutaneous Fat, Muscle Area) to be within a pre-defined threshold for the same subject, scanner (different field strength), and day. | "met the defined acceptance criteria" |
Inter-operator variability | Low variability (e.g., high ICC or low CV) in metric measurements between different trained operators using VitruvianScan. | "Characterization of inter-operator variability" met acceptance criteria |
Intra-operator variability | Low variability (e.g., high ICC or low CV) in metric measurements by the same trained operator over repeated measurements. | "Characterization of intra-operator variability" met acceptance criteria |
Benchmarking against reference device | Established equivalence or non-inferiority in metric measurements when compared to a validated reference regulated device (OSIRIX MD). | "Results...compared with the results from testing using reference regulated device 'OSIRIX MD' for benchmarking performance" met acceptance criteria |
Comparison to gold standard (human experts) | High agreement (e.g., high ICC, low mean absolute error) between device output and the gold standard (mean of 3 radiologists' results). | "Comparative testing between the operators' results and the gold standard (mean of 3 radiologists results)" met acceptance criteria |
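Repeatability and reproducibility criteria of the kind inferred in the table above are commonly expressed as a within-subject coefficient of variation (CV) across repeated scans. The sketch below is illustrative only; the repeated measurements and the 5% pass threshold are assumed, not taken from the submission.

```python
# Illustrative repeatability check: within-subject coefficient of variation (CV)
# across repeated scans of the same subject on the same scanner and day.
# The measurements and the 5% threshold are hypothetical.
import numpy as np

repeats = {
    "subj_01": [4.10, 4.05, 4.12],   # repeated visceral-fat measurements (litres)
    "subj_02": [2.51, 2.48, 2.55],
    "subj_03": [6.02, 5.95, 6.10],
}

cvs = []
for subject, values in repeats.items():
    values = np.asarray(values, dtype=float)
    cv = values.std(ddof=1) / values.mean()
    cvs.append(cv)
    print(f"{subject}: CV = {cv:.2%}")

mean_cv = float(np.mean(cvs))
threshold = 0.05  # assumed 5% acceptance threshold for this sketch
print(f"mean CV = {mean_cv:.2%} -> {'PASS' if mean_cv <= threshold else 'FAIL'}")
```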
2. Sample Size Used for the Test Set and Data Provenance
The document does not specify the sample size used for the test set.
The data provenance (country of origin, retrospective/prospective) is not explicitly stated. However, the context of an FDA submission for a device used in clinical settings suggests the data would likely be from a clinical or research environment, potentially multi-center, but this is an inference.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: 3 radiologists
- Qualifications of Experts: The document states "3 radiologists results" but does not provide specific qualifications (e.g., years of experience, subspecialty).
4. Adjudication Method for the Test Set
The adjudication method used for the test set is implicitly the mean (average) of the 3 radiologists' results for establishing the gold standard. This is a form of consensus in which the average of their interpretations serves as the reference. It is not a typical "2+1" or "3+1" adjudication scheme for resolving disagreements, but rather a central-tendency measure of their assessments.
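To make that concrete, a consensus-by-averaging reference and a simple agreement check against it could be computed as in the sketch below. The data are hypothetical, and the agreement metric shown (mean absolute error and bias against the averaged reference) is one plausible choice rather than necessarily the one used in the submission.

```python
# Illustrative sketch: gold standard as the per-case mean of three radiologists'
# measurements, then agreement of device-assisted operator results against it.
# All numbers are hypothetical.
import numpy as np

radiologists = np.array([
    [152.0, 149.5, 151.0],   # case 1: muscle area (cm^2) from radiologists A, B, C
    [138.2, 140.0, 139.1],   # case 2
    [165.4, 163.8, 164.9],   # case 3
])
operator = np.array([150.6, 139.5, 164.0])  # device-assisted operator results

gold_standard = radiologists.mean(axis=1)   # consensus = per-case mean of the 3 readers
mae = np.abs(operator - gold_standard).mean()
bias = (operator - gold_standard).mean()

print("gold standard per case:", np.round(gold_standard, 1))
print(f"mean absolute error = {mae:.2f} cm^2, mean bias = {bias:+.2f} cm^2")
```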
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
An MRMC comparative effectiveness study was NOT explicitly done in the sense of comparing human readers with AI assistance versus without AI assistance.
The study did involve "Comparative testing between the operators' results and the gold standard (mean of 3 radiologists results)," which evaluates the device's output and how operators use it against a human expert consensus. However, it doesn't describe a scenario where human readers improve with AI assistance in their own diagnostic performance compared to their performance without the AI. The stated use case is that "Perspectum trained analysts use the VitruvianScan software medical device to process the MRI images and produce the quantitative metrics and composite images," and then these are sent to "trained Healthcare Professionals who then utilize these to make clinical decisions." This suggests the device provides quantitative data to healthcare professionals, rather than directly assisting their image interpretation to improve their diagnostic accuracy from images alone.
6. Standalone (Algorithm Only) Performance Study
Yes, a standalone performance assessment was conducted. The "Comparative testing between the operators' results and the gold standard (mean of 3 radiologists results)" and the "benchmarking performance" against OSIRIX MD inherently assess the algorithm's output (via the trained analysts) against a reference, which signifies a standalone evaluation of the device's quantitative capabilities.
7. Type of Ground Truth Used
The primary type of ground truth used for the comparative testing was expert consensus (mean of 3 radiologists results).
8. Sample Size for the Training Set
The document does not provide any information regarding the sample size used for the training set for the VitruvianScan algorithm. This information is typically crucial for understanding the generalizability and robustness of an AI/ML device.
9. How the Ground Truth for the Training Set Was Established
The document does not provide any information on how the ground truth for the training set was established.
(73 days)
The Preview Shoulder software is intended to be used as a tool for orthopedic surgeons to develop pre-operative shoulder plans based on a patient CT imaging study.
The import process allows the user to select a DICOM CT scan series from any location that the user's computer sees as an available file source.
3D digital representations of various implant models are available in the planning software. Preview Shoulder allows the user to digitally perform the surgical planning by showing a representation of the patient's shoulder anatomy as a 3D model and allows the surgeon to place the implant in the patient's anatomy.
The software allows the surgeon to generate a report, detailing the output of the planning activity. Experience in usage and a clinical assessment are necessary for a proper use of the software. It is to be used for adult patients only and should not be used for diagnostic purposes.
The Preview Shoulder, a 3D total shoulder arthroplasty (TSA) surgical planning software, is a standalone software application which assists the surgeon in planning reverse and anatomic shoulder arthroplasty. Preview Shoulder includes 3D digital representations of implants for placement in images used for surgical planning. Preview Shoulder is a secure software application used by qualified or trained surgeons and is accessed by authorized users.
The primary function of Preview Shoulder is to receive and process DICOM CT image(s) of patients. Preview Shoulder can be used to place an implant in the original CT image and place an implant in the 3D model of reconstructed bone. Preview Shoulder allows the user to perform surgical planning and generate an output surgical report. Preview Shoulder does not provide a diagnosis or surgical recommendation. The surgeon is responsible for selecting and placing the implant model for pre-surgical planning purposes.
The provided text focuses on the 510(k) summary for the Preview Shoulder software, outlining its substantial equivalence to a predicate device and general non-clinical testing. However, it does not include detailed information about specific acceptance criteria for performance metrics, nor does it describe a study that explicitly proves the device meets such criteria with reported performance values.
The document states:
- "Software Verification and Validation testing was performed on the Preview Shoulder, and documentation is provided as recommended by FDA's Guidance for Industry and FDA Staff, 'Content of Premarket Submissions for Device Software Functions'."
- "Testing verified that the system performs as intended."
- "The measurement capabilities of the Preview Shoulder were validated to be significantly equivalent to a benchmark tool with CT rendering measurement capabilities, Osirix MD (K101342)."
- "All validation testing was performed on a fully configured system using anonymized patient shoulder CT images to emulate intended use."
- "All user features have been validated by surgeons."
- "Clinical testing was not necessary to demonstrate substantial equivalence of the Preview Shoulder to the predicate device."
Given this, I cannot provide the requested information in full detail. Here's what can be inferred and what is missing:
1. A table of acceptance criteria and the reported device performance
This information is not provided in the document. While it mentions "Testing verified that the system performs as intended" and "measurement capabilities... were validated to be significantly equivalent to a benchmark tool," it does not specify what those performance metrics, acceptance criteria, or reported performance values are.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Not explicitly stated. The document mentions "anonymized patient shoulder CT images" were used for validation testing, but the number of such images (sample size) is not given.
- Data Provenance: "Anonymized patient shoulder CT images" were used. The country of origin and whether the data was retrospective or prospective is not specified.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
- The document states, "All user features have been validated by surgeons." It does not specify the number of surgeons or their qualifications or how they established "ground truth" for quantitative assessments.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- The document mentions "All user features have been validated by surgeons" but does not detail any adjudication method.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- No, an MRMC comparative effectiveness study was not done. The document explicitly states: "Clinical testing was not necessary to demonstrate substantial equivalence of the Preview Shoulder to the predicate device." The software is not an AI for diagnosis or interpretation but a surgical planning tool.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not explicitly detailed as a standalone performance study in the context of typical AI performance evaluation. The document states that "The measurement capabilities of the Preview Shoulder were validated to be significantly equivalent to a benchmark tool with CT rendering measurement capabilities, Osirix MD (K101342)." This suggests a comparison of the software's output with a benchmark, which could be considered a form of standalone validation for its measurement capabilities. However, specific metrics and results are not provided. The device itself is described as a "tool for orthopedic surgeons," implying human-in-the-loop operation.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- The document implies that "benchmark tool with CT rendering measurement capabilities, Osirix MD (K101342)" served as a reference for validating measurement capabilities. For user features, "validation by surgeons" might have implicitly used their clinical judgment as a form of ground truth for usability and functionality, but this is not explicitly defined in terms of a formal ground truth process.
8. The sample size for the training set
- This device is described as a "software application" for surgical planning and emphasizes its validation through comparison of measurement capabilities against a benchmark tool and user feature validation by surgeons. It does not explicitly state that it uses machine learning/AI models that require a "training set" in the conventional sense. If there are AI algorithms as implied by "Post-processing algorithm is added to further refine the 3D mesh quality" and "Algorithm is added to calculate humerus-side features," the training set size is not provided.
9. How the ground truth for the training set was established
- As the existence of a "training set" is not confirmed or described, the method for establishing its ground truth is also not provided.
(317 days)
AVATAR MEDICAL Software V1 is intended as a medical imaging system that allows the processing, review, analysis, communication and media interchange of multi-dimensional digital images acquired from CT or MR imaging devices. It is also intended as software for preoperative surgical planning, and as software for the intraoperative display of the aforementioned multi-dimensional digital images. AVATAR MEDICAL Software V1 is designed for use by health care professionals and is intended to assist the clinician who is responsible for making all final patient management decisions.
The Avatar Medical Software V1 (AMS V1) is a software-only device that allows trained medical professionals to review CT and MR image data in three-dimensional (3D) format and/or in virtual reality (VR) interface. The 3D and VR images are accessible through the software desktop application and, if desired, through compatible VR headsets which are used by users for preoperative surgical planning and for display during intervention/surgery.
The AMS V1 product is to be used to assist in medical image review. Intended users are trained medical professionals, including imaging technicians, clinicians and surgeons.
AMS V1 includes two main software-based user interface components, the Desktop Interface and the VR Interface. The Desktop Interface runs on a compatible off-the-shelf (OTS) workstation provided by the hospital and only accessed by authorized personnel. The Desktop Interface contains a graphical user interface where a user can retrieve DICOM-compatible medical images locally or on a Picture Archiving Communication System (PACS). Retrieved CT and MR images can be viewed in 2D and 3D formats. Users are able to make measurements, annotations, and apply fixed and manual image filters.
The VR Interface is accessible via a compatible OTS headset to allow users to review the medical images in a VR format. VR formats can be viewed only when the user connects a compatible VR headset directly to the workstation being used to view the Desktop Interface. Additionally, AMS V1 enables the intended users to remotely stream the Desktop Interface to another workstation on the same local area network (LAN).
The 3D images generated using AMS V1 are intended to be used in relation to surgical procedures in which CT or MR images are used for preoperative planning and/or during intervention/surgery.
The intraoperative use of the AMS V1 solely corresponds to the two following cases:
- Display of the AMS V1 Desktop Interface on existing monitors/screens in the operating room
- Use in a non-sterile image review room accessible from the operating room during the procedure (AMS V1 operates on VR headsets which are not approved to be used in the sterile environment of the operating room)
Here's a detailed breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Feature/Function | Tool in Subject Device: AMS V1 | Tool in Reference Device: Osirix MD (K101342) | Acceptance Criteria | Reported Device Performance |
---|---|---|---|---|
Linear Measurements (polylines) | Curve | Close Polygon | No statistical difference between distributions of measurements obtained for AMS V1 or the reference device Osirix MD, as evaluated per t-test statistics. Tests performed for a series of objects in reference MR and CT phantom images. | No statistical difference was reported, as implied by the statement "No statistical difference between distributions of measurements obtained for AMS V1 or the reference device Osirix MD, as evaluated per t-test statistics. Tests performed for a series of objects in reference MR and CT phantom images." The device met this criterion. |
Diameter Measurements | Ruler | Length | No statistical difference between distributions of measurements obtained for AMS V1 or the reference device Osirix MD, as evaluated per t-test statistics. Tests performed for a series of objects in reference MR and CT phantom images. | No statistical difference was reported, as implied by the statement "No statistical difference between distributions of measurements obtained for AMS V1 or the reference device Osirix MD, as evaluated per t-test statistics. Tests performed for a series of objects in reference MR and CT phantom images." The device met this criterion. |
Display Quality (Luminance, Contrast) | N/A | N/A (evaluated against guidance) | Successfully evaluated against the AAPM guidance recommendation for visual evaluation of luminance and contrast. | "The quality of the display was successfully evaluated against the AAPM guidance recommendation for visual evaluation of luminance and contrast." The device met this criterion. |
Optical Testing (VR Platforms) | N/A | N/A (evaluated against standard) | Homogeneity of luminance, resolution, and contrast evaluated as acceptable per IEC63145-20-20 in the center of the displays for the specified VR platforms. | "Additional optical testing was conducted on compatible VR platforms as per IEC63145-20-20 and passed as expected. The homogeneity of luminance, resolution, and contrast was evaluated as acceptable per these standards in the center of the displays for the specified VR platforms." The device met this criterion. |
Filter Technology Functionality | Image filters | (Similar to cleared device Osirix MD (K101342)) | Opacity and color of specific voxels in the image demonstrated to be controllable as intended by the filtering principle. | "The functioning of the filter technology was assessed by visual inspection. Using a reference DICOM, the opacity and color of specific voxels in the image was demonstrated to be controllable as intended by the filtering principle, which is similar to the cleared device Osirix MD (K101342)." The device met this criterion. |
VR Experience Fluidity | N/A | N/A | Averaged Frame Per Second (FPS) superior to a specific threshold for the minimal hardware configuration. | "The fluidity of the VR experience was assessed by evaluating the average Frame Per Second rate. The averaged FPS was superior to the specific threshold for the minimal hardware configuration." The device met this criterion. |
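The fluidity criterion in the last row reduces to an averaged frames-per-second threshold. As a rough illustration only, such a check might be implemented as below; the render_frame stand-in, frame count, and 60 FPS threshold are placeholders, not details from the submission.

```python
# Illustrative averaged-FPS check for a render loop; render_frame(), the frame
# count, and the 60 FPS threshold are placeholders for this sketch.
import time

def average_fps(render_frame, n_frames=300):
    start = time.perf_counter()
    for _ in range(n_frames):
        render_frame()
    elapsed = time.perf_counter() - start
    return n_frames / elapsed

def render_frame():
    time.sleep(0.01)  # stand-in for rendering one VR frame (~100 FPS here)

fps = average_fps(render_frame)
threshold = 60.0  # assumed minimal-configuration threshold for this example
print(f"averaged FPS = {fps:.1f} -> {'PASS' if fps >= threshold else 'FAIL'}")
```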
The provided text for points 2-9 below is sparse. The information below is limited to what can be directly inferred or is explicitly stated in the document.
2. Sample Size Used for the Test Set and Data Provenance
The document states that tests for linear and diameter measurements were "performed for a series of objects in reference MR and CT phantom images." This indicates that phantom data was used for these specific tests. The exact sample size (number of phantom images or objects within them) for the test set is not specified.
The data provenance is described as "reference MR and CT phantom images," which implies controlled, synthetic data rather than patient data from a specific country or collected retrospectively/prospectively.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not specify the number of experts used or their qualifications for establishing ground truth on the test set. For the measurement tests, the ground truth was based on "reference MR and CT phantom images," meaning the inherent dimensions of the phantom objects served as the ground truth. For visual evaluations (display quality, filter functionality), it can be inferred that qualified personnel performed these, but no details are provided.
4. Adjudication Method for the Test Set
The document does not specify an adjudication method. For the measurement tests, the ground truth was objective (phantom dimensions), and for visual assessments, it seems a direct assessment against standards or intended functionality was performed, without mention of a multiple-reader adjudication process.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
No MRMC comparative effectiveness study was done or reported in this document. The study focused on standalone performance evaluation as described below.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, a standalone performance evaluation was done. The performance data section describes "Measurement performance testing was conducted by leveraging reference digital phantoms and the comparison with a cleared device (Osirix MD K101342)." This compares the algorithm's measurement capabilities against a reference, which is a standalone assessment. Similarly, the display quality, optical testing, filter technology, and VR fluidity assessments are focused on the device's inherent performance.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
The ground truth used for specific tests was:
- phantom images (for linear and diameter measurements) where the actual dimensions of the objects in the phantoms served as the objective truth.
- AAPM guidance recommendation and IEC63145-20-20 standard (for display quality and optical testing).
- Reference DICOM with intended filtering principles (for filter technology functionality).
8. The Sample Size for the Training Set
The document does not provide any information regarding the sample size for a training set. This submission focuses on verification and validation testing, suggesting that the core algorithms might have been developed prior, and details about their training data are not included in this 510(k) summary.
9. How the Ground Truth for the Training Set Was Established
The document does not provide any information on how the ground truth for a training set was established, as it does not discuss a training set.
(70 days)
OsiriX MD™ (K101342)
3Dicom MD software is intended for use as a diagnostic and analysis tool for diagnostic images for hospitals, imaging centers, radiologists, reading practices and any user who requires and is granted access to patient image, demographic and report information.
3Dicom MD displays and manages diagnostic quality DICOM images.
3Dicom MD is not intended for diagnostic use with mammography images. Usage for mammography is for reference and referral only.
3Dicom MD is not intended for diagnostic use on mobile devices.
Contraindications: 3Dicom MD is not intended for the acquisition of mammographic image data and is meant to be used by qualified medical personnel.
3Dicom MD is a software application developed to focus on core image visualization functions such as 2D multi-planar reconstruction, 3D volumetric rendering, measurements, and markups. 3Dicom MD also supports real-time remote collaboration, sharing the 2D & 3D visualization of the processed patient scan and allowing simultaneous interactive communication modes between multiple users online through textual chat, voice, visual aids, and screen-sharing.
Designed to be used by radiologists and clinicians who are familiar with 2D scan images, 3Dicom MD provides both 2D and 3D image visualization tools for CT, MRI, and PET scans from different makes and models of image acquisition hardware.
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria (Measurement Accuracy) | Reported Device Performance |
---|---|
Length (>10 mm) | 99.3% |
Length (1-10 mm) | 98.8% |
Area | 99.52% |
Angle | 99.46% |
Note: The document states that the tested accuracy for the lowest clinical range (1-10mm) was found to be slightly inferior (98.8%) due to the resolution of the input scan and screen.
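The document reports accuracy as percentages against the known Digital Reference Object values but does not define the formula. One common convention, used here only as an assumption for illustration, is relative accuracy of the measured value against the known value:

```python
# Illustrative only: percent accuracy of measurements against known Digital
# Reference Object (DRO) values, using 100 * (1 - |measured - true| / true).
# The formula and sample values are assumptions, not taken from the 510(k).
import numpy as np

true_mm = np.array([25.0, 12.0, 5.0, 2.0])         # known DRO lengths
measured_mm = np.array([24.9, 11.95, 4.93, 1.97])  # hypothetical device measurements

accuracy_pct = 100.0 * (1.0 - np.abs(measured_mm - true_mm) / true_mm)
for t, m, a in zip(true_mm, measured_mm, accuracy_pct):
    print(f"true {t:5.2f} mm, measured {m:5.2f} mm -> accuracy {a:6.2f}%")
print(f"mean accuracy = {accuracy_pct.mean():.2f}%")
```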
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: 81 Digital Reference Objects (test cases).
- Data Provenance: The Digital Reference Objects were "created...representative of the clinical range typically encountered in radiology practice." The text does not specify a country of origin or whether they were retrospective or prospective data in the clinical sense, as they are synthetically created for testing.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The document does not explicitly state the number of experts used to establish the ground truth for the test set or their specific qualifications. It mentions that "usability testing involving trained healthcare professionals" was performed, but this is distinct from establishing ground truth for the measurement accuracy tests. For the measurement accuracy tests, the ground truth was "known values" from the "Digital Reference Objects."
4. Adjudication Method for the Test Set
Not applicable. The ground truth for measurement accuracy was established using "known values" from Digital Reference Objects, not through expert adjudication.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly mentioned or presented in the provided text. The study described focuses on the device's standalone measurement accuracy.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
Yes, a standalone performance study was done for the measurement accuracy. The reported percentages (e.g., 99.3% accuracy for length > 10mm) represent the device's performance in measuring known values from Digital Reference Objects. There is no indication of human-in-the-loop performance in these specific metrics.
7. Type of Ground Truth Used
The ground truth used for the measurement accuracy tests was known values from Digital Reference Objects. These objects were created to represent the clinical range.
8. Sample Size for the Training Set
The document does not provide information about the sample size for a training set. The descriptions focus on verification and validation activities for the device's performance, not on a machine learning model's training.
9. How the Ground Truth for the Training Set Was Established
As no information about a training set for a machine learning model is provided, there is no description of how its ground truth was established.
(291 days)
The VisionAir Patient-Specific Airway Stent is indicated for the treatment of adults ≥22 years of age with symptomatic stenosis of the airway. The silicone stent is intended for implantation into the airway by a physician using the recommended deployment system or an equivalent rigid bronchoscope and stent placement system that accepts the maximum stent diameter being placed. The stent is intended to be in the patient up to 12 months after initial placement.
The subject device, VisionAir Patient-Specific Airway Stent, is comprised of a cloud-based software suite and the patient-specific airway stent. These two function together as a system to treat symptomatic stenosis of the airway per the indications for use. The implantable patient-specific airway stent is designed by a physician, using a CT scan as a guide, in the cloud-based software suite. The airway is segmented from the CT scan and used by the physician in designing a patient-specific stent. When design is complete, the stent is manufactured via silicone injection into a 3D-printed mold and delivered to the treating physician nonsterile, to be sterilized before use.
The implantable patient-specific airway stent includes the following general features:
- Deployed through a compatible rigid bronchoscope system
- Made of biocompatible, implant-grade silicone
- Steam sterilizable by the end user
- Anti-migration branched design
- Anti-migration studs on anterior surface of main branch
- Single-use
The cloud-based software suite has the following general features:
- Upload of CT scans
- Segmentation of the airway
- Design of a patient specific stent from segmented airway
- Order management of designed stents
The provided text is a 510(k) Summary for the VisionAir Patient-Specific Airway Stent, which focuses on demonstrating substantial equivalence to a predicate device. It primarily discusses the device description, indications for use, technological characteristics, and a list of nonclinical performance and functional tests conducted.
However, the document does not contain the detailed information required to fulfill the request regarding acceptance criteria and the study that proves the device meets those criteria. Specifically, it lacks:
- A table of acceptance criteria and reported device performance: While it lists types of tests, it does not provide specific quantitative acceptance criteria or the actual results from these tests.
- Sample size used for the test set and data provenance: No information is given about the sample size for any clinical or performance test, nor the origin or nature of the data (retrospective/prospective, country).
- Number of experts used to establish ground truth and qualifications: This information is completely absent.
- Adjudication method for the test set: Not mentioned.
- Multi-Reader Multi-Case (MRMC) comparative effectiveness study details: No MRMC study is described; the testing mentioned is primarily non-clinical or related to software validation/verification, not human-AI comparative effectiveness.
- Standalone (algorithm-only) performance: While "Software Verification and Validation Testing" and "Airway Segmentation Process Testing" are mentioned, no specific standalone performance metrics (e.g., accuracy, precision for segmentation) or acceptance criteria are provided.
- Type of ground truth used: The document mentions "Airway Segmentation Process Testing" and refers to a predicate device (Mimics) for "performance reference specification" for dimensional testing of airway segmentation. This implies that the ground truth for segmentation would likely be derived from expert-reviewed segmentations or potentially from known anatomical measurements, but the method is not explicitly detailed.
- Sample size for the training set: There is no mention of a "training set" or any machine learning model that would require one. The software aspect described is for physician-guided design and semi-automated segmentation, not explicitly an AI/ML model that undergoes a training phase in the typical sense for medical image analysis.
- How the ground truth for the training set was established: Not applicable, as no training set is described.
The document states: "Reference devices, Mimics (K073468) and Osirix MD (K101342) were used for reference software performance specifications." and "Dimensional Testing of Airway Segmentation (reference device Mimics K073468 used for performance reference specification)". These statements hint at software validation, especially for the segmentation component, but do not provide the detailed study design, acceptance criteria, or results.
In summary, the provided text does not contain the necessary information to answer the request in detail, as it focuses on demonstrating substantial equivalence through non-clinical performance and functional testing rather than a clinical study with acceptance criteria for device performance based on human reader interaction or AI model performance.
(55 days)
The Preview Shoulder software is intended to be used as a tool for orthopedic surgeons to develop pre-operative shoulder plans based on a patient CT imaging study.
The import process allows the user to select a DICOM CT scan series from any location that the user's computer sees as an available file source.
3D digital representations of various implant models are available in the planning software. Preview Shoulder allows the user to digitally perform the surgical planning by showing a representation of the patient's shoulder anatomy as a 3D model and allows the surgeon to place the implant in the patient's anatomy.
The software allows the surgeon to generate a report, detailing the output of the planning activity.
Experience in usage and a clinical assessment are necessary for a proper use of the software. It is to be used for adult patients only and should not be used for diagnostic purposes.
The Preview Shoulder, a 3D total shoulder arthroplasty (TSA) surgical planning software, is a standalone software application which assists the surgeon in planning reverse and anatomic shoulder arthroplasty. Preview Shoulder includes 3D digital representations of implants for placement in images used for surgical planning. Preview Shoulder is a secure software application used by qualified or trained surgeons and is accessed by authorized users.
The primary function of Preview Shoulder is to receive and process DICOM CT image(s) of patients. Preview Shoulder can be used to place an implant in the original CT image and place an implant in the 3D model of reconstructed bone. Preview Shoulder allows the user to perform surgical planning and generate an output surgical report. Preview Shoulder does not provide a diagnosis or surgical recommendation. The surgeon is responsible for selecting and placing the implant model for pre-surgical planning purposes.
Here's an analysis of the acceptance criteria and the study information for the "Preview Shoulder" device, based on the provided text:
1. Table of Acceptance Criteria & Reported Device Performance
The provided text does not explicitly state acceptance criteria in a quantitative format (e.g., minimum accuracy percentages, specific error margins). Instead, the bench testing section describes the goals of validation as:
- "assess the safety and effectiveness of the device"
- "demonstrate the processing of patient images to produce accurate and repeatable 3D reconstructed bones and surgical coordinates provided to the surgeon"
- "demonstrate the safety and efficacy of the device to meet its intended use and specifications"
- "measurement capabilities of the Preview Shoulder were validated to be significantly equivalent to a benchmark tool with CT rendering measurement capabilities, Osirix MD (K101342)"
- "All validation testing was performed on a fully configured system using anonymized patient shoulder CT images to emulate intended use. All user features have been validated by surgeons."
Given this, the table for reported performance will focus on the qualitative outcomes described:
Acceptance Criterion (Implicit) | Reported Device Performance |
---|---|
Safety and Effectiveness demonstrated | "Software Verification and Validation testing was performed... to assess the safety and effectiveness of the device." "The software has been verified via code reviews and automated and manual testing." "All validation testing was performed on a fully configured system... to emulate intended use." |
Accurate and repeatable 3D reconstructed bones and surgical coordinates | "Testing verified that the system performs as intended." "The measurement capabilities of the Preview Shoulder were validated to be significantly equivalent to a benchmark tool with CT rendering measurement capabilities, Osirix MD (K101342)." |
Device meets intended use and specifications | "demonstrate the safety and efficacy of the device to meet its intended use and specifications." "All user features have been validated by surgeons." |
Measurement capabilities equivalent to benchmark tool (Osirix MD K101342) | "The measurement capabilities of the Preview Shoulder were validated to be significantly equivalent to a benchmark tool with CT rendering measurement capabilities, Osirix MD (K101342)." (No specific metrics or quantitative equivalence reported in this document, only that it was "significantly equivalent" - this likely implies pre-defined bounds were met). |
Performance on anonymized patient shoulder CT images reflecting intended use and adult population | "All validation testing was performed on a fully configured system using anonymized patient shoulder CT images to emulate intended use." "Preview Shoulder has been validated using adult patient images." |
Validation of user features by surgeons | "All user features have been validated by surgeons." |
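The "significantly equivalent" language in the table above suggests an equivalence-style comparison within pre-defined bounds. A two one-sided tests (TOST) procedure is one standard way to formalize such a criterion; the sketch below is illustrative only, with hypothetical measurements and an assumed ±0.5 mm equivalence margin.

```python
# Illustrative two one-sided tests (TOST) for equivalence of paired measurement
# differences within an assumed +/-0.5 mm margin; all values are hypothetical.
import numpy as np
from scipy import stats

subject_mm = np.array([20.1, 15.3, 30.2, 10.0, 25.4])    # hypothetical Preview Shoulder
benchmark_mm = np.array([20.0, 15.5, 30.0, 10.1, 25.2])  # hypothetical Osirix MD
diff = subject_mm - benchmark_mm
margin = 0.5  # assumed equivalence bound in mm

n = diff.size
se = diff.std(ddof=1) / np.sqrt(n)
t_lower = (diff.mean() + margin) / se          # H0: mean difference <= -margin
t_upper = (diff.mean() - margin) / se          # H0: mean difference >= +margin
p_lower = 1 - stats.t.cdf(t_lower, df=n - 1)
p_upper = stats.t.cdf(t_upper, df=n - 1)
p_tost = max(p_lower, p_upper)

verdict = "equivalent within margin" if p_tost < 0.05 else "equivalence not demonstrated"
print(f"TOST p = {p_tost:.4f} -> {verdict}")
```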
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: Not explicitly stated. The text only mentions "anonymized patient shoulder CT images" were used.
- Data Provenance: "anonymized patient shoulder CT images". The country of origin is not specified, nor is whether the data was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: Not explicitly stated for ground truth establishment. However, the document notes that "All user features have been validated by surgeons." This implies at least some surgeons were involved in testing, but doesn't specify if they established the ground truth for measurements or just validated features.
- Qualifications of Experts: The experts who validated user features are identified as "surgeons." No further qualifications (e.g., years of experience, subspecialty) are provided in this document.
4. Adjudication Method for the Test Set
- The document does not describe a formal adjudication method (e.g., 2+1, 3+1, none) for establishing ground truth or resolving discrepancies in the test set.
5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study
- No, a MRMC comparative effectiveness study was not done. The document explicitly states: "Clinical testing was not necessary to demonstrate substantial equivalence of the Preview Shoulder to the predicate device." The validation focused on equivalence to a benchmark tool (Osirix MD) rather than human readers, or comparing human readers with and without AI assistance.
- Effect Size: Not applicable as no such study was performed or reported.
6. Standalone Performance
- Yes, a standalone performance evaluation was done. The device itself is described as a "standalone software application." The "Bench Testing" section details verification and validation tests indicating the algorithm's performance in reconstructing 3D bone models and its measurement capabilities. The validation comparing its measurement capabilities to Osirix MD (K101342) is a form of standalone performance evaluation.
7. Type of Ground Truth Used
- The ground truth for validating "measurement capabilities" was established by comparison to a "benchmark tool with CT rendering measurement capabilities, Osirix MD (K101342)." For the "3D reconstructed bones and surgical coordinates," accuracy and repeatability were implicitly assessed against the CT images themselves or against measurements obtained from the benchmark tool. It is not explicitly pathology or outcomes data.
8. Sample Size for the Training Set
- Not stated. The document mentions the use of an "AI algorithm" for 3D model reconstruction but does not provide any information about the training set size, composition, or how it was used to train the AI.
9. How the Ground Truth for the Training Set Was Established
- Not stated. As the training set size and details are not provided, the method for establishing its ground truth is also not mentioned in this document.
(18 days)
Ambra PACS software is intended for use as a primary diagnostic and analysis tool for diagnostic images for hospitals, imaging centers, radiologists, reading practices and any user who requires and is granted access to patient image, demographic and report information.
Ambra ProViewer, a component of Ambra PACS, displays, modifies and manages diagnostic quality DICOM images, including 3D visualization and reordering functionality.
Lossy compressed mammographic images and digitized film screen images must not be reviewed for primary diagnosis or image interpretation. Mammographic images may only be viewed using cleared monitors intended for mammography display.
Not intended for diagnostic use on mobile devices.
Ambra ProViewer, a component of Ambra PACS, displays, modifies, and manages diagnostic quality DICOM images, including 3D visualization and reordering functionality. It is designed to target standards-compliant, cross-platform web browsers with an underlying architecture built on top of ReactJS and Material-UI, as well as WebGL2 for advanced visualization tools. The Ambra ProViewer is designed to utilize modern web application APIs.
Ambra PACS is considered a 'Continuous Use' device. This device is compliant with HIPAA/HITECH, Safe Harbor, and 21 CFR Part 11 regulations regarding patient privacy (such as restricting access to particular studies, logging access to data), data integrity, patient safety and best software development and validation practices.
The provided document is a 510(k) summary for the Ambra PACS including Ambra ProViewer. It discusses the device's substantial equivalence to predicate devices but does not contain the detailed acceptance criteria and study results in the format requested, especially regarding specific performance metrics, sample sizes for test/training sets, expert qualifications, or MRMC studies.
Therefore, I cannot fully complete the requested table and answer all questions with the information given.
Here's what can be extracted and what is missing:
1. Table of acceptance criteria and the reported device performance
Acceptance Criteria | Reported Device Performance |
---|---|
Not specified in document | Substantially equivalent performance of measurement and visualization tools to predicate devices (Ambra PACS Viewer K152977 and Pixmeo SARL Osirix MD K101342) |
Missing Information: Specific quantitative acceptance criteria (e.g., minimum accuracy, sensitivity, specificity, or performance thresholds for visualization/measurement tools) are not provided. The document primarily focuses on demonstrating "substantial equivalence" rather than reporting specific performance metrics against defined criteria.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
Missing Information: The document states "validation testing of DICOM images" was conducted using "the same reference DICOM images" as the predicate device, but it does not specify the sample size of the test set nor the data provenance (e.g., country of origin, retrospective or prospective nature).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Missing Information: The document does not provide any information on the number of experts, their qualifications, or how ground truth was established for "validation testing of DICOM images."
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Missing Information: The document does not specify any adjudication method for the test set.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
Missing Information: The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study, nor does it refer to AI assistance or its effect size on human reader performance. The device is described as a PACS viewer with advanced visualization, not an AI diagnostic aid.
6. If a standalone (i.e. algorithm only without human-in-the loop performance) was done
Answer: Yes, a standalone evaluation was likely conducted in the form of "validation testing of DICOM images to demonstrate substantially equivalent performance of the measurement and visualization tools." The phrasing implies an assessment of the software's inherent functionality (algorithm only) against the predicate. However, detailed results are not provided.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
Answer (Inferred): Given the context of PACS viewers and "measurement and visualization tools," the ground truth was likely established by reference DICOM images where measurements or visualization characteristics were already known or precisely quantifiable, serving as a "gold standard" for the software's performance, but this is not explicitly stated. It's unlikely to be pathology or outcomes data for a viewer device.
8. The sample size for the training set
Missing Information: The document does not mention a training set. This is a PACS viewer, not typically an AI model that requires a distinct "training set" in the context of deep learning. The "validation testing" refers to evaluating the software's functionality, not training an algorithm.
9. How the ground truth for the training set was established
Missing Information: Since a training set isn't mentioned or applicable in the context of this document, the method for establishing its ground truth is also not applicable/missing.
(206 days)
InstaRISPACS / InstaZFP / InstaMobi is a software device (Enterprise Server, Diagnostic Workstation, Physician Zero-footprint Viewer and a Mobile Viewer) used for viewing and manipulating digital images (including mammography). Digital images and data from various sources (including CT, MR, US, RF units, computed and direct radiographic devices, secondary capture devices, scanners and imaging sources) can be displayed, processed, stored and communicated across computer networks using this software. InstaPACS modules can be integrated with an institution's existing Hospital Information System (HIS) or Radiology Information System (RIS) based on the study of the System, providing seamless access to reports for fully-integrated electronic patient records. Lossy compressed mammographic images and digitized film screen images must not be reviewed for primary image interpretation. Mammographic images may only be interpreted using an FDA cleared monitor that offers at least 5 Megapixel resolution and meets other technical specifications reviewed and accepted by FDA.
InstaRISPACS / InstaZFP / InstaMobi is not intended for diagnostic image review or diagnostic use on mobile devices.
InstaRISPACS / InstaZFP / InstaMobi is a software application used for viewing and manipulating medical images. Digital images and data from various sources (including CT, MR, US, RF units, computed and direct radiographic devices, secondary capture devices, scanners, imaging gateways, or imaging sources) can be displayed, processed, stored and communicated across computer networks using this software. When viewing images, users can perform adjustments of window width and level, annotation and various image manipulations.
- InstaRISPACS is a server-based application for use by Physicians, which comes with the Diagnostic Viewer.
- InstaZFP is a zero-footprint viewer for use by Physicians to display images after fetching them from the InstaRISPACS server.
- InstaMobi is a mobile application (for iOS & Android) for use by Physicians to fetch images from InstaRISPACS and display them on the mobile device.
In addition, InstaRISPACS, can be integrated with an institution's existing Hospital Information System or Radiology Information System (based on the study of the System), providing seamless access to reports for fully-integrated electronic patient records.
The provided text describes the Meddiff InstaRISPACS / InstaZFP / InstaMobi V5.0 system, a software application for viewing and manipulating medical images. However, it does not contain information about a study that specifically proves the device meets quantifiable acceptance criteria related to its performance metrics (e.g., sensitivity, specificity, accuracy) for image interpretation or against a clinical gold standard.
The document does describe non-clinical performance testing for compliance with standards and the device's functional requirements.
Here's an analysis based on the available information:
Acceptance Criteria and Reported Device Performance
The document clearly states the following under "VII. PERFORMANCE DATA":
"The test results in this 510(k), demonstrate that InstaRISPACS/ InstaZFP/InstaMobi:
- complies with the aforementioned international and FDA-recognized consensus standards and FDA guidance document, and
- Meets the acceptance criteria and is adequate for its intended use."
However, the specific "acceptance criteria" for quantifiable device performance (e.g., image quality metrics, processing speed thresholds, diagnostic accuracy) are not explicitly detailed or quantified in the provided text. The "reported device performance" is described as compliance with standards and meeting functional requirements, rather than clinical performance metrics.
Based on the provided text, a table of quantifiable acceptance criteria and device performance cannot be generated. The document focuses on showing substantial equivalence based on technical characteristics and compliance with regulatory standards, rather than a clinical performance study.
Study Details (Based on available information)
1. A table of acceptance criteria and the reported device performance:
- Acceptance Criteria: Not explicitly quantifiable clinical or diagnostic performance metrics described. The acceptance criteria mentioned relate to compliance with regulatory standards and functional requirements.
- Reported Device Performance: "complies with the aforementioned international and FDA-recognized consensus standards and FDA guidance document, and Meets the acceptance criteria and is adequate for its intended use." No specific performance numbers (e.g., accuracy, sensitivity, specificity) are reported.
2. Sample size used for the test set and the data provenance:
- Sample Size: Not specified.
- Data Provenance: Not specified. This was a non-clinical study focused on technical compliance and functional verification, not clinical data evaluation.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable. This was a non-clinical study; no ground truth was established by experts for a test set of clinical images.
4. Adjudication method for the test set:
- Not applicable. This was a non-clinical study; no image interpretation or adjudication occurred.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No. The document explicitly states: "InstaRISPACS/ InstaZFP/InstaMobi does not require clinical studies to demonstrate substantial equivalence to the predicate device." Later, it also states: "Diagnosis is not performed by the software but by Radiologists, Clinicians and or referring Physicians. A physician, providing ample opportunity for competent human intervention interprets images and information being displayed and printed." This indicates the device is an image viewing and manipulation system, not an AI-powered diagnostic aid meant to directly improve human reader performance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- No. The device is an image viewing and manipulation system, not an algorithm performing standalone diagnostic tasks. Its performance is assessed in terms of its functionality and compliance with standards.
7. The type of ground truth used:
- Not applicable. As a non-clinical study focusing on functional compliance and technical characteristics, there was no clinical "ground truth" (like pathology or outcomes data) involved in a diagnostic performance evaluation.
8. The sample size for the training set:
- Not applicable. This device is a PACS/RIS software for image management and viewing, not a machine learning model that requires a training set.
9. How the ground truth for the training set was established:
- Not applicable, as it's not a machine learning model requiring a training set.
Summary of the "Study" (Non-Clinical Performance Testing):
The "study" described in the document is limited to non-clinical performance testing to demonstrate compliance with international and FDA-recognized consensus standards (ISO 14971, IEC 62304, NEMA-PS 3.1 - PS 3.20 DICOM) and FDA guidance documents. It also involved Meddiff's internal verification and validation processes to address intended use, technological characteristics claims, requirement specifications, and risk management results.
The comparison table provided (pages 2-3) primarily highlights technological characteristics and feature parity/differences with predicate and reference devices, rather than a clinical performance study involving patient data. For instance, for "PET-CT fusion viewing" and "Embedded & basic Volume Rendering," it states the subject device was "tested and compared to Reference device #1, (K101342)" and "Data indicated that the subject device was found to be similar to the reference device." This indicates functional equivalence rather than a clinical performance claim.
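For context on what a feature like "PET-CT fusion viewing" typically involves at the display level (general background only, not the subject device's documented implementation), fusion display is commonly an alpha blend of co-registered slices. A minimal sketch, assuming both slices are already resampled to the same grid and normalized to [0, 1]; the colormap and alpha value are illustrative choices:

```python
import numpy as np

def fuse_slices(ct_gray, pet_gray, alpha=0.4):
    """Alpha-blend a co-registered PET slice (hot-style colormap) over a grayscale CT slice."""
    ct_rgb = np.stack([ct_gray] * 3, axis=-1)
    # Simple "hot"-style mapping: red ramps up first, then green, then blue.
    pet_rgb = np.stack([np.clip(3 * pet_gray, 0, 1),
                        np.clip(3 * pet_gray - 1, 0, 1),
                        np.clip(3 * pet_gray - 2, 0, 1)], axis=-1)
    # Weighted blend; a higher alpha emphasizes the PET overlay.
    return (1 - alpha) * ct_rgb + alpha * pet_rgb
```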
Conclusion stated by the submitter: Based on these non-clinical tests, the subject device is considered "substantially equivalent" to the predicate device in terms of safety and effectiveness, and no clinical studies were deemed necessary.
(140 days)
Hardware: The BLUEPRINT™ Glenoid Guides are patient-specific drill guides. They have been specially designed to assist in the intraoperative positioning of glenoid components used with total anatomic or reversed shoulder arthroplasty procedures using anatomic landmarks that are identifiable on patient-specific preoperative CT scans.
Software: BLUEPRINT™ 3D Planning Software is a medical device for surgeons composed of one software component. It is intended to be used as a pre-surgical planner for shoulder orthopedic surgery. BLUEPRINT™ 3D Planning Software runs on standard personal and business computers running Microsoft Windows or Mac OS operating systems. The software supports the DICOM standard to import the CT scan (Computed Tomography) images of the patient. Only the CT modality can be loaded with BLUEPRINT™ 3D Planning Software. BLUEPRINT™ 3D Planning Software allows the surgeon to visualize, measure, reconstruct, annotate and edit anatomic data. It allows the surgeon to design glenoid patient-specific guides based on the pre-surgical plan. The software leads to the generation of a surgery report along with a 3D file of the glenoid patient-specific guide. BLUEPRINT™ 3D Planning Software does not include any system to manufacture the glenoid patient-specific guide. BLUEPRINT™ 3D Planning Software is to be used for adult patients only and should not be used for diagnostic purposes.
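As an illustration of the DICOM CT import step described above (an assumption only; pydicom, NumPy, and the directory layout are illustrative and not the device's actual implementation), a single CT series is typically read, sorted along the scan axis, and stacked into a 3D volume:

```python
from pathlib import Path
import numpy as np
import pydicom

def load_ct_volume(series_dir):
    """Read one CT series, sort slices along the scan axis, and stack them into a 3D array."""
    slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
    # Sort by the z-component of ImagePositionPatient so slices stack in anatomical order.
    slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    volume = np.stack([ds.pixel_array for ds in slices]).astype(np.float32)
    # Convert stored values to Hounsfield units via the rescale tags.
    return volume * float(slices[0].RescaleSlope) + float(slices[0].RescaleIntercept)

# Hypothetical directory containing one exported CT series:
# ct = load_ct_volume("case_001/ct_series")
```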
BLUEPRINT™ Patient Specific Instrumentation is composed of two components: BLUEPRINT™ Glenoid Guides (hardware) and BLUEPRINT™ 3D Planning Software (software).
Hardware: The BLUEPRINT™ Glenoid Guides are patient-specific guides specially designed to facilitate the implantation of WRIGHT-TORNIER glenoid prostheses. The BLUEPRINT™ Glenoid Guides are designed and manufactured based on a pre-operative plan generated by the BLUEPRINT™ 3D Planning Software. All BLUEPRINT™ Glenoid Guides are patient-specific, single use and delivered non-sterile.
Software: BLUEPRINT™ 3D Planning Software is composed of one software component connected to an Online Management System (OMS). The software installed on a computer is intended to be used by orthopedic surgeons as a preoperative planning software for shoulder arthroplasty surgery (anatomic and reversed). It is intended to help plan an operation by allowing surgeons to: position and select the glenoid implant; position and select the humeral implant; display bone density and the reaming surface; simulate the prosthetic range of motion; and design a patient-specific guide for the glenoid component.
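Planning measurements of this kind are commonly built from simple 3D geometry on CT-derived landmarks. As a hedged illustration only (the function, coordinates, and method are hypothetical, not the device's documented algorithm), the angle at an apex landmark formed by two other landmarks can be computed as:

```python
import numpy as np

def angle_at(apex, point_a, point_b):
    """Return the angle in degrees at `apex` formed by the segments to `point_a` and `point_b`."""
    u = np.asarray(point_a, dtype=float) - np.asarray(apex, dtype=float)
    v = np.asarray(point_b, dtype=float) - np.asarray(apex, dtype=float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against floating-point values slightly outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# Hypothetical landmark coordinates in patient space (millimetres):
print(angle_at([0, 0, 0], [10, 0, 0], [0, 10, 5]))  # 90.0 degrees
```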
The provided document is a 510(k) summary for the BLUEPRINT™ Patient Specific Instrumentation (K162800). It describes a medical device system composed of software (BLUEPRINT™ 3D Planning Software) and hardware (BLUEPRINT™ Glenoid Guides) used for pre-surgical planning and intraoperative guidance in shoulder arthroplasty procedures.
However, this document does not contain the specific details required to fully address all points in your request. It refers to previous 510(k) clearances (K143374 and K160555) for validation of the device, but it does not reproduce the detailed acceptance criteria and study results within this summary.
Based on the information available in this document, here's what can be extracted and what cannot:
1. Table of acceptance criteria and the reported device performance:
This document does not explicitly state specific acceptance criteria (e.g., accuracy thresholds for guide placement) or detailed performance results in a table format. It broadly states that "The validation of BLUEPRINT™ Patient Specific Instrumentation (Subject Device System) was already proven via BLUEPRINT™ Patient Specific Instrumentation (K143374) and BLUEPRINT™ Patient Specific Instrumentation (K160555), including polyamide and titanium materials... The performed testing for validation of the design, manufacturing, biocompatibility, sterility, dimensions and the accuracy of the guide are applicable to the subject device."
To obtain this information, one would need to access the full 510(k) submissions for K143374 and K160555.
2. Sample size used for the test set and the data provenance:
This document does not specify the sample size for the test set or the data provenance (e.g., country of origin, retrospective/prospective). It refers to previous validations but does not provide details about those studies.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
This information is not present in the provided document.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
This information is not present in the provided document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
This document does not mention an MRMC comparative effectiveness study or any effect size related to human reader improvement with/without AI assistance. The software is described as a pre-surgical planner to assist surgeons, implying it's a tool, but not explicitly tested in an MRMC setting for diagnostic performance comparison.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
The document describes the software as a "pre-surgical planner" and the guides as assisting in "intraoperative positioning." This implies a human-in-the-loop scenario where the software aids the surgeon. It does not describe a standalone algorithm performance study.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
This document does not specify the type of ground truth used for validation. It refers to "accuracy of the guide" being validated, which suggests comparison to a known standard, but the nature of that standard (e.g., CMM measurements, post-operative CT) is not detailed.
8. The sample size for the training set:
This document does not mention any training set size, as it concerns the validation of an already-proven system. If the 3D Planning Software included a machine-learning component, this information would typically be found in its original clearance or a more detailed technical document.
9. How the ground truth for the training set was established:
As no training set is mentioned in this document, this information is not available.
In summary:
This 510(k) summary focuses on demonstrating substantial equivalence to previously cleared devices rather than providing a detailed technical report of a new primary clinical validation study with all the requested specifics. To get the requested information, one would need to consult the original 510(k) submissions for K143374 and K160555, as these are the documents indicated as containing the previous validation data.