Search Results
Found 24 results
510(k) Data Aggregation
(97 days)
breast tomosynthesis
iCAD PowerLook AMP is a Class I medical device exempt per 21 CFR § 892.2010
PowerLook® Tomo Detection V2 Software is a computer-assisted detection and diagnosis (CAD) software device intended to be used concurrently by interpreting physicians while reading digital breast tomosynthesis (DBT) exams from compatible DBT systems. The system detects soft tissue densities (masses, architectural distortions and asymmetries) and calcifications in the 3D DBT slices. The detections and Certainty of Finding and Case Scores assist interpreting physicians in identifying soft tissue densities and calcifications that may be confirmed or dismissed by the interpreting physician.
PLTD V2 detects malignant soft-tissue densities and calcifications in digital breast tomosynthesis (DBT) images. The PLTD V2 software allows an interpreting physician to quickly identify suspicious soft tissue densities and calcifications by marking the detected areas in the tomosynthesis images. When the PLTD V2 marks are displayed by a user, the marks appear as overlays on the tomosynthesis images. The PLTD V2 marks also serve as a navigation tool for users, because each mark is linked to the tomosynthesis plane where the detection was identified. Users can navigate to the plane associated with each mark by clicking on the detection mark. Each detected region is also assigned a "score" that corresponds to the PLTD V2 algorithm's confidence that the detected region is a cancer (Certainty of Finding Score). Certainty of Finding scores are relative scores assigned to each detected region, and a Case Score is assigned to each case regardless of the number of detected regions. Certainty of Finding and Case Scores are computed by the PLTD V2 algorithm and represent the algorithm's confidence that a specific finding or case is malignant. The scores are represented on a 0% to 100% scale; higher scores represent higher algorithm confidence that a finding or case is malignant, and lower scores represent lower confidence.
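As a rough illustration of the outputs described above, here is a minimal Python sketch of how detection marks, Certainty of Finding scores, a Case Score, and mark-based plane navigation could be modeled. All class and field names are hypothetical; this is not iCAD's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class DetectionMark:
    """One CAD finding, as described for PLTD V2 (illustrative only)."""
    finding_type: str            # e.g. "soft_tissue_density" or "calcification"
    plane_index: int             # DBT slice where the detection was identified
    certainty_of_finding: float  # algorithm confidence on a 0-100 scale

@dataclass
class DbtCase:
    marks: list[DetectionMark] = field(default_factory=list)
    case_score: float = 0.0      # 0-100; assigned regardless of mark count

def navigate_to(mark: DetectionMark) -> int:
    """Clicking a mark jumps the viewer to its linked tomosynthesis plane."""
    return mark.plane_index
```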
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text.
1. Acceptance Criteria and Reported Device Performance
The device is a Computer-Assisted Detection and Diagnosis (CAD) software for digital breast tomosynthesis (DBT) exams. The acceptance criteria are largely demonstrated through the multi-reader multi-case (MRMC) pivotal reader study and standalone performance evaluations.
Table of Acceptance Criteria and Reported Device Performance:
Criteria Category | Metric | Acceptance Criteria (Implied / Stated) | Reported Device Performance (with CAD vs. without CAD)
---|---|---|---
Pivotal Reader Study (Human-in-the-Loop) | | |
Radiologist Performance | Case-level Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) | Non-inferiority to radiologist performance without CAD; superiority is also a desirable outcome. | AUC with CAD: 0.852; AUC without CAD: 0.795; average difference: 0.057 (95% CI: 0.028, 0.087); p …
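For context on the AUC metric above, the sketch below computes an empirical case-level ROC AUC from reader scores using the rank-sum (Mann-Whitney) formulation and takes a with-CAD minus without-CAD difference. The scores and labels are invented; the pivotal study's confidence interval and p-value would come from a proper MRMC analysis, not this naive calculation.

```python
import numpy as np

def empirical_auc(scores, labels):
    """Case-level ROC AUC via the rank-sum (Mann-Whitney) formulation."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # Probability that a random cancer case outranks a random non-cancer
    # case, counting ties as one half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical reader scores for the same six cases read both ways:
labels = [1, 1, 0, 0, 1, 0]
auc_with = empirical_auc([90, 70, 40, 20, 85, 55], labels)
auc_without = empirical_auc([80, 50, 45, 30, 75, 60], labels)
print(auc_with - auc_without)  # analogous in spirit to the reported 0.057
```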
(79 days)
synthetic images and is
iCAD PowerLook AMP is a Class I medical device exempt per 21 CFR § 892.2010
PowerLook Density Assessment is a software application intended for use with 2D synthetic images from digital breast tomosynthesis exams. PowerLook Density Assessment provides an ACR BI-RADS Atlas 5th Edition breast density category to aid health care professionals in the assessment of breast tissue composition. PowerLook Density Assessment produces adjunctive information. It is not a diagnostic aid.
The PowerLook Density Assessment Software analyzes digital breast tomosynthesis 2D synthetic images to calculate the dense tissue area of each breast. The measured dense tissue area is then used to provide a category of 1-4 consistent with ACR BI-RADS 5th edition a-d. The top-level design sub-systems are as follows: Initialization, Breast Segmentation, Breast Thickness Correction, and Breast Density Computation. The assessment results in a final density map, which, in conjunction with its pixel size, is used to compute the area of the dense tissue (square cm). The area of the breast (square cm) is computed by counting the total number of pixels in the valid regions of the breast segmentation mask. The ratio of the dense area to the total breast area gives the percent breast density (PBD) for the given view. The dense areas, breast areas, percent breast densities, and dispersion for the CC and MLO views are averaged in order to report measurements for each breast. The average PBD and the average dispersion are then mapped to a density category from 1 through 4, consistent with ACR BI-RADS 5th edition a-d, for each breast, using a set of calibrated boundaries. The higher category of the two breasts is reported as the overall case score. The PowerLook Density Assessment is designed as a stand-alone executable operating within the larger software framework provided by PowerLook AMP. As such, the PowerLook Density Assessment software is purely focused on processing tomosynthesis 2D synthetic images and is not concerned with system issues such as managing DICOM image inputs or managing system outputs to a printer, PACS or mammography workstation. The PowerLook Density Assessment software is automatically invoked by PowerLook AMP. The results of PowerLook Density Assessment are designed to display on a mammography workstation, high resolution monitor, or in a printed case report. PowerLook Density Assessment is designed to process approximately 60-120 cases per hour.
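The computation just described reduces to a per-view percent breast density, averaged across the CC and MLO views and mapped to a category through calibrated boundaries. Here is a minimal Python sketch under that reading; the boundary values are placeholders, and the real product also factors in a dispersion measure.

```python
def percent_breast_density(dense_area_cm2: float, breast_area_cm2: float) -> float:
    """PBD for one view: dense tissue area as a percentage of breast area."""
    return 100.0 * dense_area_cm2 / breast_area_cm2

def density_category(avg_pbd: float, boundaries=(10.0, 25.0, 50.0)) -> int:
    """Map an average PBD to a category 1-4 (ACR BI-RADS 5th edition a-d).

    The thresholds here are illustrative placeholders, not the product's
    calibrated boundaries.
    """
    for category, upper in enumerate(boundaries, start=1):
        if avg_pbd < upper:
            return category
    return 4

# Per-breast PBD is the mean of the CC and MLO views; the case result is
# the higher category of the two breasts.
left = density_category((percent_breast_density(30, 180) + percent_breast_density(34, 190)) / 2)
right = density_category((percent_breast_density(55, 170) + percent_breast_density(60, 175)) / 2)
case_category = max(left, right)
```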
Here's a summary of the acceptance criteria and the study details for the PowerLook Density Assessment Software, based on the provided FDA 510(k) document:
Acceptance Criteria and Device Performance
The document states that the PowerLook Density Assessment Software performed "substantially equivalent to the predicate device." While specific numerical acceptance criteria (e.g., minimum kappa score, percentage agreement) are not explicitly listed in the provided text as pass/fail thresholds, the performance was assessed based on:
- Kappa score: A statistical measure of inter-rater agreement, commonly used for categorical data.
- Percent correct in each BI-RADS category: Measures the accuracy of the software's classification into each of the four BI-RADS density categories (1, 2, 3, 4).
- Combined A/B and C/D BI-RADS categories: Assesses performance when categories are grouped (e.g., non-dense vs. dense).
The document states: "PowerLook Density Assessment performed substantially equivalent to the predicate device." This implies that the device's performance metrics were within an acceptable range compared to the already cleared predicate.
Table of Acceptance Criteria and Reported Device Performance
Performance Metric | Acceptance Criteria (Implied) | Reported Device Performance |
---|---|---|
Kappa Score | Substantially equivalent to predicate device | Performance substantially equivalent to predicate device |
Percent Correct (Each BI-RADS Category) | Substantially equivalent to predicate device | Performance substantially equivalent to predicate device |
Combined A/B and C/D BI-RADS Categories | Substantially equivalent to predicate device | Performance substantially equivalent to predicate device |
Study Details
-
Sample size used for the test set and the data provenance:
- Test Set Sample Size: Not explicitly stated in the provided text. The document mentions "a set of digital breast tomosynthesis synthesized 2D images."
- Data Provenance: Not explicitly stated (e.g., country of origin, specific clinics). The study used "digital breast tomosynthesis synthesized 2D images from …". It is retrospective, as it refers to images for which BI-RADS scores "were obtained from radiologists."
-
Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Not explicitly stated. The document mentions "BI-RADS scores were obtained from radiologists." It does not specify if this was a single radiologist or multiple.
- Qualifications of Experts: The experts are identified as "radiologists." No specific experience level (e.g., "10 years of experience") is provided.
-
Adjudication method for the test set:
- The document states that BI-RADS scores "were obtained from radiologists," but it does not specify an adjudication method (such as 2+1, 3+1, or none) for determining a consensus ground truth if multiple radiologists were involved.
-
If a multi-reader multi-case (MRMC) comparative effectiveness study was done:
- No, an MRMC comparative effectiveness study was not explicitly mentioned or described. The study primarily focused on the standalone performance of the PowerLook Density Assessment Software against radiologist assessments (ground truth). It did not describe a scenario where human readers' performance with and without AI assistance was compared.
-
If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- Yes, a standalone study was performed. The described validation involved running the "PowerLook Density Assessment Software... followed by a comparison of the results between the predicate results, desired results, and observed performance..." This indicates the algorithm's performance was evaluated independently against the established ground truth.
-
The type of ground truth used:
- Expert Consensus (Radiologist BI-RADS Scores): The ground truth was established by "BI-RADS scores... obtained from radiologists." This implies the radiologists' interpretations served as the reference standard for breast density categorization.
-
The sample size for the training set:
- Not specified. The document does not provide any information regarding the training set's sample size or characteristics.
-
How the ground truth for the training set was established:
- Not specified. Since information about the training set size or its establishment is absent, the method for establishing ground truth for training data is also not provided.
(110 days)
Ophthalmic
Class II, per 21 CFR §886.1120
Device, Storage, Images, Ophthalmic
Class I per 21 CFR §892.2010
The Navilas Laser System 577s is indicated for use:
- In Retinal Photocoagulation for the treatment of Clinically Significant Diabetic Macular Edema (Focal or Grid Laser), Proliferative Diabetic Retinopathy (Panretinal Photocoagulation), Sub-retinal (Choroidal) Neovascularization (Focal Laser), Central and Branch Retinal Vein Occlusion (Scatter Laser Photocoagulation, Focal or Grid Laser), Lattice Degeneration, Retinal Tears and Detachments (Laser Retinopexy).
- For the imaging (capture, display, storage and manipulation) of the retina of the eye, including via color and infrared imaging; and for aiding in the diagnosis and treatment of ocular pathology in the posterior segment of the eye.
- In Laser Trabeculoplasty for Primary Open Angle Glaucoma, as well as Iridotomy and Iridoplasty for Closed Angle Glaucoma.
The Navilas Laser 577s is a laser photocoagulator with an integrated digital fundus camera. The Navilas 577s Laser System combines imaging technologies (fundus live imaging and infra-red imaging) with established laser photocoagulation treatment methods by providing the doctor a system for imaging and treatment planning prior to the photocoagulation.
The Navilas Laser 577s is comprised of:
- A semiconductor laser source that operates at 577nm. The semiconductor laser source for the Navilas 577s is identical to the laser source used with the Navilas 577+ cleared under K141851.
- An integrated delivery system that directs the laser beam through an ophthalmoscope using motorized mirrors.
- A digital camera that provides continuous real-time imaging in color with white light illumination of the fundus, or in monochrome using infrared illumination.
- A software platform intended to be used to capture, display, store and manipulate images captured by the fundus camera.
The Navilas Laser System 577s supports the user during multiple steps of a laser treatment procedure with digital imaging, image storage, planning and laser treatment options including:
Digital imaging as provided by a color image with white light, supporting mydriatic and non-mydriatic image acquisition (with and without dilated pupils), or a monochrome IR image. Images are presented using a digital display. An illumination mode is selected where images are acquired and either stored or discarded after viewing on the touch sensitive digital display.
Image Storage - Captured images can be digitally stored in the Navilas Laser System 577s database along with other patient related data to create a complete patient record for future reference. Images from other devices may also be imported and stored.
Planning - Areas identified on acquired or imported images by the user that are selected for future treatment consideration can be marked through the use of treatment planning tools available. The physician has the ability to highlight areas on acquired images (called Points of Interest). These locations are created and manipulated using the touch sensitive digital display.
Laser Treatment - Treatment options are also unchanged from the predicate device, with Single Pulse Mode, Repeat Mode and Scanned Pattern Mode available on all Navilas laser models. Pre-positioning of the aiming beam onto locations which are selected by the physician during planning is also facilitated. The position of the aiming beam can be monitored on the real-time image that is displayed on the touch sensitive digital display.
Report generation - Information collected in the database includes images obtained before, during and after treatment. This information may be used for the generation of patient reports for documentation purposes.
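As a minimal data-model sketch of the workflow just described (stored images, physician-marked Points of Interest, and material for reports), consider the following; it is illustrative only and in no way represents the Navilas software itself.

```python
from dataclasses import dataclass, field

@dataclass
class PointOfInterest:
    """A location the physician marks on an acquired fundus image."""
    x: float
    y: float
    note: str = ""

@dataclass
class PatientRecord:
    """Images and planning marks kept together for future reference."""
    patient_id: str
    images: list[str] = field(default_factory=list)   # stored/imported image refs
    points: list[PointOfInterest] = field(default_factory=list)

    def add_poi(self, x: float, y: float, note: str = "") -> None:
        self.points.append(PointOfInterest(x, y, note))
```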
The provided text describes a 510(k) premarket notification for the "Navilas® Laser System 577s." This document primarily focuses on demonstrating substantial equivalence to a predicate device (Navilas Laser System 577+), rather than presenting a detailed independent study with specific acceptance criteria and performance data for a new AI/algorithm-driven device.
Therefore, many of the requested elements (e.g., acceptance criteria for a new clinical study, sample size for test set with data provenance, number of experts for ground truth, adjudication methods, MRMC study, standalone performance, training set details) are not present in this document because it is a submission for a device change that is functionally identical to the predicate with some hardware improvements and elimination of a feature. The "Performance Data" section discusses engineering and software testing, not clinical performance against specific metrics for diagnostic accuracy or efficacy.
However, I can extract and present the information that is available:
Summary of Device and Context:
- Device Name: Navilas® Laser System 577s
- Device Type: Laser photocoagulator with an integrated digital fundus camera.
- Purpose of Submission: 510(k) premarket notification to demonstrate substantial equivalence to the Navilas Laser System 577+ (K141851).
- Key Differences from Predicate: Elimination of fluorescein angiography imaging capability; hardware design improvements (new GUI, relocation of scanner controls, conversion to manual base height adjustment, designation of table as optional accessory, inclusion of combination objective element). The core laser and imaging technology are stated as the same.
- Indications for Use: (Identical to predicate, except for the removed angiography feature)
- Retinal Photocoagulation for various conditions (Diabetic Macular Edema, Proliferative Diabetic Retinopathy, Sub-retinal Neovascularization, Retinal Vein Occlusion, Lattice Degeneration, Retinal Tears and Detachments).
- Imaging (capture, display, storage, manipulation) of the retina (color, infrared) for aiding diagnosis and treatment of ocular pathology in the posterior segment.
- Laser Trabeculoplasty for Primary Open Angle Glaucoma, and Iridotomy/Iridoplasty for Closed Angle Glaucoma.
Regarding the Requested Information:
Since this document describes a 510(k) submission for substantial equivalence based on functional identity and engineering testing rather than a new clinical performance study for an AI/algorithm, most of the specific questions about acceptance criteria for clinical performance, ground truth, expert consensus, and reader studies are not applicable or not detailed in this document.
Here's what can be inferred or directly stated from the provided text, with a clear note when information is absent:
-
A table of acceptance criteria and the reported device performance
- Acceptance Criteria: For substantial equivalence, the primary acceptance criterion is that the new device (Navilas 577s) is as safe and effective as the predicate device (Navilas 577+). This is demonstrated through engineering and software verification and validation, showing that the minor changes do not negatively impact performance or safety. No specific quantitative clinical performance acceptance criteria (e.g., accuracy thresholds) are listed, as this isn't a de novo clinical study establishing such criteria. Instead, it's about meeting specifications and requirements.
- Reported Device Performance: The document states: "All criteria for this testing were met and results demonstrate that laser photocoagulation performed with the Navilas Laser 577s meets all performance specifications and requirements." This is a qualitative statement of success in the engineering and software tests.
Criterion Type | Description / Test Performed | Reported Performance / Outcome
---|---|---
Substantial Equivalence | Device maintains functional identity to predicate (Navilas 577+) with minor changes (no fluorescein angiography, hardware updates). | Determined to be substantially equivalent by FDA.
Illumination Safety | ISO 15004-2 Ophthalmic Instruments - Fundamental Requirements and Test Methods - Part 2: Light Hazard Protection | Criteria met.
Software Life Cycle Process | IEC 62304 Medical Device Software - Software Life Cycle Process (Software LOC "Major") | Criteria met. (Implies adherence to a process adequate for software with potential for serious injury/death.)
Human Factors & Usability | IEC 62366 (Usability Engineering), IEC 60601-1-6 (General requirements for basic safety and essential performance - Usability) | Criteria met.
Laser Product Safety | IEC 60601-1 (General safety), IEC 60601-1-2 (EMC), IEC 60601-2-22 (Therapeutic Laser Equipment), IEC 60825-1 (Laser classification) | Criteria met.
Laser Bench Testing | Verify spot and pattern placement accuracy. | Criteria met.
Software Verification & Validation | General V&V for the "Major" LOC software. | Criteria met.
-
Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Not Applicable / Not Provided: This document describes engineering and software testing ("bench testing") rather than a clinical study with a patient "test set". There is no mention of patient data (images or otherwise) used for testing in this summary.
-
Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not Applicable / Not Provided: No clinical test set or ground truth establishment by experts is described in this submission summary.
-
Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not Applicable / Not Provided: No clinical test set or adjudication method is described.
-
If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No: This document does not describe an MRMC study or any study involving human readers with or without AI assistance. The device is a laser system with imaging capability, not an AI diagnostic assistant.
-
If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not Applicable: While the device has an imaging and software component ("aiding in the diagnosis"), the focus of this submission is on the laser safety, performance, and the integrated system's function, not a standalone diagnostic algorithm. No such standalone performance study is mentioned.
-
The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Not Applicable / Not Provided: For the engineering and software tests, "ground truth" would likely refer to established engineering specifications and requirements (e.g., laser power, spot size accuracy, software functionality conforming to design), rather than clinical ground truth like pathology or expert consensus on disease states.
-
The sample size for the training set
- Not Applicable / Not Provided: This is not a submission for a machine learning or AI device that typically requires a distinctive "training set" for an algorithm. The software is described as a platform for image capture, display, storage, and manipulation, and planning tools, not a learning algorithm.
-
How the ground truth for the training set was established
- Not Applicable / Not Provided: As noted above, training sets and their associated ground truth methodology are not discussed in this substantial equivalence submission.
(177 days)
Device, Storage, Images, Ophthalmic
Class I per 21 CFR §892.2010
The Navilas Laser System/Navilas Laser System 532+/ Navilas Laser System 577+ are indicated for use:
· In Retinal Photocoagulation for the treatment of Clinically Significant Diabetic Macular Edema (Focal or Grid Laser), Proliferative Diabetic Retinopathy (Panretinal Photocoagulation), Sub-retinal (Choroidal) Neovascularization (Focal Laser), Central and Branch Retinal Vein Occlusion (Scatter Laser Photocoagulation, Focal or Grid Laser), Lattice Degeneration, Retinal Tears and Detachments (Laser Retinopexy).
· For the imaging (capture, display, storage and manipulation) of the eye, including via color, fluorescein angiography and infrared imaging; and for aiding in the diagnosis and treatment of ocular pathology in the posterior segment of the eye.
· In Laser Trabeculoplasty for Primary Open Angle Glaucoma, as well as Iridotomy and Iridoplasty for Closed Angle Glaucoma.
The NAVILAS Laser System combines imaging technologies (fundus live imaging, infra-red imaging and fluorescein angiography) with established retinal laser photocoagulation treatment methods, providing the doctor with a system for image capture, display, storage and manipulation for treatment planning and documentation.
The primary components of the Navilas Laser System include:
- One of three optional ophthalmic laser sources:
  - a frequency-doubled Nd:YVO4 laser source that operates at 532nm, or
  - an optically-pumped semiconductor laser source that also operates at 532nm, or
  - an optically-pumped semiconductor laser source that operates at 577nm
- An integrated delivery system that directs the laser beam through ophthalmoscope optics using motorized mirrors
- A digital camera and computer hardware that provides continuous real-time imaging using slit illumination that is projected through the ophthalmoscope optics and panned automatically at a rapid rate of 25 Hz across the subject area using the motorized mirrors. Imaging can be in color (using white light illumination) or in monochrome (using infrared illumination or blue light illumination).
Laser photocoagulation with the NAVILAS can be performed using single shot (Single Spot Mode), repeated shots (Repeat Mode), and scanned patterns (Pattern Mode).
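The three treatment modes lend themselves to a simple configuration model; the sketch below is purely illustrative, and the parameter names are assumptions rather than documented Navilas settings.

```python
from dataclasses import dataclass
from enum import Enum

class LaserMode(Enum):
    SINGLE_SPOT = "single shot"
    REPEAT = "repeated shots"
    PATTERN = "scanned pattern"

@dataclass
class ShotSettings:
    """Hypothetical parameter set for one photocoagulation exposure."""
    mode: LaserMode
    wavelength_nm: int   # 532 or 577, per the optional laser sources
    power_mw: float
    duration_ms: float

plan = ShotSettings(mode=LaserMode.PATTERN, wavelength_nm=577,
                    power_mw=200.0, duration_ms=20.0)
```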
The provided text is a 510(k) summary for the Navilas Laser System. It focuses on demonstrating substantial equivalence to predicate devices and adherence to performance standards, rather than detailing a specific clinical study with acceptance criteria for device performance in terms of diagnostic or therapeutic accuracy.
Therefore, the document does not contain the acceptance criteria and study details as requested in the input prompt, particularly regarding a study that proves the device meets specific performance metrics for an AI/algorithm-driven application.
The sections that would contain such information (4.8 SUMMARY OF PERFORMANCE TEST RESULTS and 4.9 CONCLUSIONS) are very general. They state that "Performance verification and validation testing was completed to demonstrate that the device performance complies with specifications and requirements identified for the Navilas Laser System" and "All criteria for this testing were met and results demonstrate that the Navilas Laser System meets all performance specifications and requirements." However, they do not provide the specific acceptance criteria or the study details (sample size, ground truth, expert qualifications, etc.) for any performance evaluation in the context of an AI-driven component.
To directly answer your prompt based only on the provided text, the information requested is largely absent.
Here's a breakdown of what is and isn't available based on your requested structure:
1. A table of acceptance criteria and the reported device performance
- Not available in the provided text. The document states that performance testing was completed and criteria were met, but it does not list the specific acceptance criteria or the quantitative results of the device's performance against those criteria.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Not available in the provided text. There is no mention of a test set sample size or data provenance for any performance evaluation.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not available in the provided text. The document does not describe the establishment of ground truth by experts, as it does not detail a study involving such a process.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not available in the provided text. No adjudication method is described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- Not available in the provided text. The document does not describe an MRMC study or the use of AI assistance for human readers. The Navilas Laser System, as described, is a surgical laser system with imaging, planning, and documentation capabilities, not an AI diagnostic assistant.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not available in the provided text. No standalone algorithm performance study is described.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Not available in the provided text. As no study requiring ground truth is detailed, the type of ground truth is not mentioned.
8. The sample size for the training set
- Not available in the provided text. There is no mention of a training set, indicating that this submission is not about an AI/ML algorithm requiring such data.
9. How the ground truth for the training set was established
- Not available in the provided text. As no training set is mentioned, the method for establishing its ground truth is not provided.
Conclusion: The provided 510(k) summary (K141851) for the Navilas Laser System is focused on establishing substantial equivalence based on indications for use, technological characteristics, and compliance with general safety and performance standards (like IEC and ISO). It does not present data from a clinical or performance study that would typically include acceptance criteria, sample sizes, ground truth establishment, or expert evaluations as requested for an AI/algorithm-based device.
(34 days)
(Class I exempt per 21 CFR § 892.2010 and 21 CFR § 892.2020).
Device Description:
MPE is a software
Mammography Prior Enhancement (MPE) is a software application intended to enhance the appearance of prior non-Hologic digital mammography x-ray images so that they more closely resemble Hologic digital mammography images. MPE processed images are intended for comparison purposes only and cannot be used for primary diagnosis.
MPE runs on a Windows-based computer. Results can be displayed on a workstation capable of displaying mammography x-ray images, such as Hologic's SecurView® DX workstation, Product Code LLZ, 21 CFR 892.2050 (K103385).
MPE is a software application that runs on a Windows server or softcopy display workstation, such as SecurView DX (K103385). MPE processes (manipulates) prior GE digital mammography images so that they will appear similar to Hologic digital mammography images. The image processing consists of various steps to improve visualization of structures in the breast including, logarithmic conversion, skin line correction and contrast enhancement. These are standard methods used to allow optimal display and review of mammography images with minimal window/leveling operation.
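The processing steps named above (logarithmic conversion and contrast enhancement) are standard image operations; a generic NumPy sketch of that kind of pipeline follows. It is not Hologic's algorithm, and skin-line correction is only noted in a comment because it requires a breast segmentation mask.

```python
import numpy as np

def log_convert(raw: np.ndarray) -> np.ndarray:
    """Logarithmic conversion: compress the raw detector dynamic range."""
    return np.log1p(raw.astype(np.float64))

def contrast_enhance(img: np.ndarray) -> np.ndarray:
    """Simple global contrast stretch to a 0-1 display range."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-12)

def mpe_like_pipeline(raw: np.ndarray) -> np.ndarray:
    # Skin-line correction (flattening intensity near the breast boundary)
    # is omitted; it would require a breast segmentation mask.
    return contrast_enhance(log_convert(raw))
```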
The provided text describes a 510(k) submission for the Mammography Prior Enhancement (MPE) device and its substantial equivalence to a predicate device. However, it does not contain the detailed acceptance criteria or a specific study that proves the device meets those criteria in a quantitative manner.
Here's a breakdown of the information available and what is missing based on your request:
1. Table of Acceptance Criteria and Reported Device Performance:
- Acceptance Criteria: Not explicitly stated in the provided text in a quantitative or measurable format. The document focuses on demonstrating substantial equivalence based on intended use, technological characteristics, and safety/effectiveness concerns addressed by adherence to standards and risk management.
- Reported Device Performance: The text states, "The MPE software further processes and displays prior digital mammography images for physicians or trained medical personnel to use as a historical image reference when reviewing current Hologic digital mammography images. The MPE processed images will appear similar to Hologic digital images." This is a qualitative description of its function but lacks specific performance metrics (e.g., image quality scores, similarity metrics).
2. Sample Size Used for the Test Set and Data Provenance:
- Not provided. The document does not mention any specific test set size, data provenance (country of origin), or whether the data was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications:
- Not provided. There is no mention of experts, ground truth establishment, or their qualifications.
4. Adjudication Method for the Test Set:
- Not applicable/Not provided. Since no specific test set or ground truth establishment method is described, there is no mention of an adjudication method.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and the effect size:
- Not performed or reported. The 510(k) summary does not describe any MRMC study comparing human reader performance with and without AI assistance. The focus is on image processing for comparison, not diagnostic performance improvement.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Not performed or reported as a performance study. The device's function is standalone in the sense that it processes images automatically. However, there's no standalone performance study reported with specific metrics. Its output is explicitly stated as "for comparison purposes only and cannot be used for primary diagnosis," which inherently avoids standalone diagnostic performance claims.
7. The Type of Ground Truth Used:
- Not applicable/Not provided. As no performance study with a defined ground truth is described, this information is not present.
8. The Sample Size for the Training Set:
- Not provided. There is no mention of a training set or its size.
9. How the Ground Truth for the Training Set was Established:
- Not applicable/Not provided. Since no training set or ground truth establishment is described, this information is not present.
Summary of available information related to "acceptance criteria" and "study":
The "study" or justification for the device's acceptance presented in this 510(k) is primarily based on:
- Demonstration of Substantial Equivalence: The MPE software shares the same intended use, technological characteristics, and performance standards as its predicate device (DigitalNow HD, K091368).
- Adherence to Standards: The device is designed and manufactured according to ISO 13485, ISO 14971, IEC 62304, and 21 CFR Part 820.
- Risk Management: Potential hazards are controlled via risk management processes and verification and validation testing, ensuring "no risk of data loss" and that "MPE processed images are not intended for diagnosis."
- Intended Use Limitations: The key acceptance criterion implicitly stated is that the processed images are "for comparison purposes only and cannot be used for primary diagnosis."
The document focuses on regulatory compliance and safe operation within its limited intended use, rather than a quantitative clinical performance study with specific acceptance metrics. For a device intended "for comparison purposes only" and not for primary diagnosis, a full clinical performance study as might be expected for an AI diagnostic aid is often not required for 510(k) clearance, as the primary risk is misuse (i.e., using it for diagnosis), which is mitigated by labeling and intended use restrictions.
(159 days)
falls in the classification of Medical Image Storage (Section 892.2010)
The ThinkingNet is a Medical Image Management and Review System, commonly known as PACS. ThinkingNet, made by Thinking Systems Corporation, Florida, USA, is indicated for acceptance, transmission, storage, archival, reading, interpretation, clinical review, analysis, annotation, distribution, printing, editing and processing of digital images and data acquired from DICOM-compatible diagnostic devices by healthcare professionals, including radiologists, cardiologists, physicians, technologists and clinicians.
- With a ThinkingWeb option it can be used to access diagnostic information remotely with all workstation functionality or to collaborate with other users. The client device is cross-platform for all but the thick-client ThinkingNet.Net option.
- With the molecular imaging option it can be used for processing and interpreting nuclear medicine and other molecular imaging studies.
- With image co-registration and fusion option it can be used for processing and interpreting PET-CT, PET-MRI, SPECT-CT and other hybrid imaging studies.
- With the Mammography option it can be used for screening and diagnosis (with MG "For Presentation" images only) from FDA approved modalities in softcopy (using FDA cleared displays for mammography) and printed formats.
- With the cardiology option it can be used for reading, interpreting and reporting cardiac studies, such as nuclear cardiac, PET cardiac, echocardiographic, X-ray angiographic and CTA studies.
- With the Orthopedic option it can be used to perform common orthopedic measurements of the hip, knee, spine, etc.
- With the 3D/MPR option it can be used for volumetric image data visualization: MIP, MPR, VR and triangulation.
- With the Quality Assurance option it can be used by PACS administrators or clinicians to perform quality control activities related to patient and images data.
ThinkingNet is a multi-modality PACS/RIS with applications optimized for each individual imaging modality. The image data and applications can be accessed locally or remotely. ThinkingNet workstation software is designed as diagnostic reading and processing software packages, which may be marketed as software only, as well as packaged with standard off-the-shelf computer hardware.
The base functions include receiving, transmission, storage, archival, display images from all imaging modalities. When enabled, the system allows remote access of image data and applications over a local or wide area network, using a Web browser, thick-client, thin-client or cloud-based remote application deployment method.
Options allow for additional capability, including modality specific applications, quantitative postprocessing, modality specific measurements, multi-planar reformatting and 3D visualization.
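As an illustration of the kind of DICOM handling a PACS performs on receipt, here is a small sketch using the open-source pydicom library to pull the header fields a system might index on; it is generic and not ThinkingNet code.

```python
import pydicom

def describe(path: str) -> dict:
    """Read a DICOM file and extract metadata a PACS typically indexes."""
    ds = pydicom.dcmread(path)
    return {
        "patient_id": ds.get("PatientID", ""),
        "modality": ds.get("Modality", ""),   # e.g. MG, CT, NM, US
        "study_uid": ds.get("StudyInstanceUID", ""),
        "sop_class": ds.get("SOPClassUID", ""),
    }
```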
ThinkingNet Molecular Imaging modules offer the image processing functionality, through MDStation and ThinkingWeb, that have the same indication as the predicate modality workstations. It delivers image processing and review tools for applications used in functional imaging modalities, such as nuclear medicine, PET, PET/CT, SPECT/CT and PET/MRI.
ThinkingNet Mammo module is a diagnostic softcopy breast imaging workstation with diagnostic print capability.
- It displays and prints regionally approved DICOM DR Digital Mammography Images (MG SOP class) with a default or user-defined mammography hanging protocol.
- It displays and prints regionally approved DICOM CR Digital Mammography Images (CR SOP class) with a default or user-defined mammography hanging protocol.
- It displays adjunct breast imaging modality studies (i.e. Breast MR, Breast PET and Breast gamma camera) for comparison.
ThinkingWeb modules offer comprehensive remote image and application access methods to allow clinicians to review and process images remotely. It has the following modules:
- ThinkingWebLite: Clientless image distribution via a simple Web browser (see NuWEB in K010271). It is primarily a referring physician's portal, not intended for primary reading.
- ThinkingNet.Net: A thick-client implementation using an existing image review module (NuFILM) with a proprietary image streaming mechanism.
- ThinkingWeb: Cross-platform thin-client remote application access based on the existing MDStation software and off-the-shelf remote computing technology.
- ThinkingWeb Extreme: A cloud-based remote application deployment implementation based on the existing MDStation software and off-the-shelf cloud computing technologies.
Besides ThinkingNet.Net, all ThinkingWeb products support cross-platform client computer devices; ThinkingNet.Net uses Windows-based client computers.
The provided 510(k) summary for "ThinkingNet Modality Applications and Web Extensions" does not contain specific acceptance criteria or performance study results in the typical format of a clinical or technical validation study.
Instead, the submission primarily focuses on demonstrating substantial equivalence to predicate devices. This means that, for this type of submission, the manufacturer asserts that their device is as safe and effective as existing legally marketed devices, rather than needing to prove new performance metrics against predefined acceptance criteria. The "Performance testing" mentioned refers to internal verification and validation activities to ensure the new features (like Web options and modality-specific modules) still perform as expected and maintain equivalence to the predicate devices.
Therefore, many of the requested details about acceptance criteria, specific performance numbers, sample sizes for test sets, expert qualifications, and ground truth methodologies for a new performance claim are not present in the provided document.
Here's an analysis of the available information:
1. A table of acceptance criteria and the reported device performance
- Acceptance Criteria: Not explicitly stated as quantifiable metrics for a new claim. The "acceptance criteria" here are implicitly linked to demonstrating substantial equivalence with the listed predicate devices. This means the device must function similarly in terms of image management, communication, archiving, and processing capabilities.
- Reported Device Performance: No specific numerical performance metrics (e.g., sensitivity, specificity, accuracy) are reported for the device in the context of a clinical study designed to establish new performance claims. The document states that "Performance testing was conducted to show that ThinkingNet is safe and effective," but does not provide details of these tests or their results.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size: Not specified.
- Data Provenance: Not specified.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Number of Experts: Not specified.
- Qualifications of Experts: Not specified.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Not specified.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- MRMC Study: No MRMC study is mentioned. This device is a PACS/image management system, not an AI-assisted diagnostic tool for which such studies are typically conducted.
- Effect Size: Not applicable as no MRMC study was performed.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Standalone Performance: Not applicable/not specified. The device is an image management and review system, inherently designed for human interaction.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: Not specified. For a PACS, the "ground truth" would generally relate to the accurate display, storage, and retrieval of medical images and data, ensuring data integrity and functionality mirroring predicate devices.
8. The sample size for the training set
- Sample Size for Training Set: Not applicable/not specified. This is a PACS system, not a machine learning model that typically requires a discrete training set.
9. How the ground truth for the training set was established
- Ground Truth for Training Set: Not applicable.
Summary of available information regarding the "study" (internal V&V) and acceptance criteria:
The "study" referenced in the document is the internal verification and validation (V&V) process conducted by Thinking Systems, following their ISO 13485 and FDA 21 CFR Part 820 compliant Quality System.
- Acceptance Criteria (Implicit): The primary acceptance criterion is that the "ThinkingNet with its Web options and modality specific modules to be as safe, as effective, and performance is substantially equivalent to the predicate device(s)." This means the device must meet the functional and performance characteristics established by the legally marketed predicate devices.
- Proof of Meeting Criteria:
- Performance Testing: "Performance testing was conducted to show that ThinkingNet is safe and effective." This would have involved internal tests to ensure the system's various functions (receiving, transmission, storage, display, advanced processing, remote access) operate correctly and reliably. These tests would validate that the device's features, especially the new Web options and modality-specific applications, perform comparably to or within the established safety and effectiveness parameters of the predicate devices.
- Quality Assurance Measures: The document lists several QA measures applied to the development, including Requirements Specification, Design Specification, Hazard and Risk Analysis, Modular Testing, Verification Testing, Validation Testing, and Integration Testing. These processes ensure that the device was designed and built to meet its intended purpose and functions properly.
- Substantial Equivalence Argument: The core of the submission is the detailed argument for substantial equivalence, comparing ThinkingNet's intended use, indications, target populations, and technical characteristics with multiple predicate devices across various modalities (PACS, molecular imaging, mammography, cardiology, RECIST, echocardiography). The differences identified (e.g., more built-in modality-specific applications, server-side processing for Web clients, cross-platform support) are then argued not to affect safety and effectiveness, supported by the internal performance testing.
In essence, for this 510(k) submission, the "acceptance criteria" were met by demonstrating that the device functions comparably to existing cleared devices, and the "study" was the manufacturer's well-documented internal verification and validation process designed to ensure this equivalence and the device's safety and effectiveness. No independent clinical efficacy study with specific numerical performance targets was required or presented for this type of device and submission pathway.
(169 days)
Classification:
21 CFR §886.1120, Class II
Product Code: HKI
Subsequent Classification: 21 CFR 892.2010
The Non-Mydriatic Auto Fundus Camera AFC-330 with Image Filing Software NAVIS-EX is intended to capture, display, store and manipulate images of the retina and the anterior segment of the eye, to aid in diagnosing or monitoring diseases of the eye that may be observed and photographed.
The Non-Mydriatic Auto Fundus Camera AFC-330 with Image Filing Software NAVIS-EX ("AFC-330 with NAVIS-EX") is a conventional non-mydriatic auto fundus camera. The AFC-330 with NAVIS-EX captures fundus images using a built-in colour CCD camera without the use of mydriatic agents. With this single device, registration of patient information, image capture, and viewing of captured images are possible. By connecting a personal computer (PC) to the device via a LAN and installing the NAVIS-EX image filing system software, images captured by this device can be transferred to the PC and viewed and managed on the PC.
The provided text is a 510(k) summary for the Nidek Non-Mydriatic Auto Fundus Camera AFC-330 with Image Filing Software NAVIS-EX. It describes the device, its intended use, and substantial equivalence to predicate devices, but it does not contain information about acceptance criteria or a specific study proving the device meets those criteria, as typically found in clinical performance studies of AI/CADe devices.
The document states:
- Testing in support of substantial equivalence determination: "All necessary bench testing was conducted on the AFC-330 with NAVIS-EX to support a determination of substantial equivalence to the predicate devices. The performance testing included the following tests:
- Electrical and mechanical safety testing
- Electromagnetic compatibility testing
- Light burden testing
- Verification and validation testing"
- Summary: "The collective performance testing results demonstrate that AFC-330 with NA VIS-EX is substantially equivalent to the predicate devices."
This indicates that the submission relied on bench testing to demonstrate performance characteristics related to safety and fundamental functionality, rather than a clinical study evaluating diagnostic accuracy against specific performance metrics and acceptance criteria for an AI or CADe component. The device appears to be a traditional fundus camera with image filing software, not a product that performs automated diagnostic interpretations requiring acceptance criteria like sensitivity, specificity, or AUC based on expert reads.
Therefore, I cannot provide the requested information regarding acceptance criteria and studies proving the device meets them because such information is not present in the provided text. The submission focuses on demonstrating substantial equivalence through standard device testing (safety, EMC, light burden, verification/validation) for a medical imaging acquisition and management system.
(196 days)
Device, Storage, Images, Ophthalmic
Class I per 21 CFR §892.2010
The NAVILAS Laser System is a retinal photocoagulator integrated with a digital fundus camera. The NAVILAS is indicated for use in retinal photocoagulation, as well as for the imaging (capture, display, storage and manipulation) of the retina of the eye, including via color, fluorescein angiography and infra-red imaging; and for aiding in the diagnosis and treatment of ocular pathology in the posterior segments of the eye.
The NAVILAS Laser System is a retinal laser photocoagulator with an integrated digital fundus camera. The NAVILAS Laser System combines imaging technologies (fundus live imaging, infra-red imaging and fluorescein angiography) with established retinal laser photocoagulation treatment methods by providing the doctor a system for imaging and treatment planning prior to the photocoagulation.
The NAVILAS Laser System is comprised of a laser photocoagulation module, digital imaging camera, computer hardware, and a software platform intended to be used to capture, display, store and manipulate images captured by the fundus camera.
Like the predicate devices, laser photocoagulation with the NAVILAS is performed using single shot (Single Spot Mode), repeated shots (Repeat Mode), and scanned patterns (Pattern Mode). All treatment-related information and images are continuously displayed on the monitor to provide the physician an optimal platform for the photocoagulation procedure.
The provided 510(k) summary for the NAVILAS Laser System primarily focuses on demonstrating substantial equivalence to predicate devices and verifying that the device complies with specifications through internal testing. It does not present a clinical study with detailed acceptance criteria and reported device performance metrics typically found in AI/ML device submissions.
Therefore, I cannot provide a table of acceptance criteria and reported device performance from a clinical study for this device, nor can I answer questions about sample sizes, ground truth establishment, expert qualifications, adjudication methods, or MRMC studies, as this information is not present in the document.
The document discusses "Performance Data" in a general sense:
Performance Data Summary (as per the document):
"Performance verification and validation testing was completed to demonstrate that the device performance complies with specifications and requirements identified for the NAVILAS Laser System. This was accomplished by software and hardware verification & validation testing, along with system level bench testing of the NAVILAS Laser System. All criteria for this testing were met and results demonstrate that laser photocoagulation performed with the NAVILAS Laser System meets all performance specifications and requirements."
This statement indicates that internal engineering and bench testing was performed to ensure the device met its design specifications, but it does not detail specific acceptance criteria for a human-interpretable performance metric (like sensitivity, specificity, accuracy) derived from a clinical trial or expert review.
Here's what can be extracted based on the provided text, with many fields noted as "Not Applicable" or "Not Provided" due to the nature of the submission:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Not specified in terms of clinical performance metrics (e.g., sensitivity, specificity, accuracy) from a clinical study. | "All criteria for this testing were met and results demonstrate that laser photocoagulation performed with the NAVILAS Laser System meets all performance specifications and requirements." (This refers to internal verification and validation testing, not clinical performance metrics.) |
2. Sample Size and Data Provenance for Test Set:
- Sample Size: Not provided. The testing described appears to be internal engineering and bench testing, not a clinical study with a patient test set.
- Data Provenance: Not provided.
3. Number of Experts and Qualifications for Ground Truth:
- Number of Experts: Not applicable/Not provided. The submission focuses on device engineering specifications and functionality, not diagnostic accuracy requiring expert ground truth in a clinical context.
- Qualifications of Experts: Not applicable/Not provided.
4. Adjudication Method for Test Set:
- Adjudication Method: Not applicable/Not provided.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- Was an MRMC study done? No.
- Effect size of human reader improvement with AI vs. without AI assistance: Not applicable. (The NAVILAS Laser System is a photocoagulator and imaging system, not an AI-diagnostic assistant that would typically be evaluated in an MRMC study for reader improvement.)
6. Standalone (Algorithm Only) Performance Study:
- Was a standalone study done? Not in the sense of an algorithm's diagnostic performance. The document describes "software and hardware verification & validation testing, along with system level bench testing," which functions as a standalone performance evaluation of the device's operational specifications, but not its diagnostic or clinical efficacy in terms of specific performance metrics against a clinical ground truth.
7. Type of Ground Truth Used:
- Type of Ground Truth: Not applicable for a clinical performance evaluation. The "ground truth" for the verification and validation testing would be the engineering specifications and functional requirements of the device itself (e.g., laser power output accuracy, image resolution, software functionality).
8. Sample Size for Training Set:
- Sample Size: Not applicable/Not provided. This device is not described as involving machine learning or AI that would require a 'training set' in the modern sense.
9. How Ground Truth for Training Set was Established:
- How Ground Truth was Established: Not applicable/Not provided.
(316 days)
Regulation Number: 21 CFR 892.2010 Product Code: NFF, Class I 510(k) exempt
Classification Name: Medical
Static Vessel Analyzer (SVA) with VesselMap2 is intended to capture, display, store, and manipulate images of the eye, especially the retina area, as well as surrounding areas, to aid in diagnosing or monitoring diseases of the eye that may be observed and photographed. Specifically, the VesselMap2 software is intended to be used for semiautomated measurement and calculation of the retinal artery/vein diameter ratio.
The IMEDOS SVA unit is designed as a complete fundus imaging system, or the software can be sold separately as a standalone product. The SVA software consists of two components: VisualIS, an imaging software for capture, display, storage and manipulation of images of the eye, and VesselMap2, an add-on for enhanced analysis of retina images. VisualIS captures the images which are provided by the fundus camera and the connected digital image sensor. Together with an image set, the data of the patient are recorded and stored. The complete examination can be stored and opened for follow-up examination purposes. The add-on VesselMap2 offers a semi-automated measurement of the Arterial-Venous Ratio of retinal vessels. The software allows the user to select veins and arteries. Vessel diameters are estimated by the software and the ratio of the analyzed arteries to veins is calculated. The image is not altered in any way during this calculation.
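A minimal sketch of an arterio-venous ratio calculation follows, under the assumption that the ratio is the mean artery diameter over the mean vein diameter for the user-selected segments; the actual VesselMap2 computation is not specified in the document.

```python
from statistics import mean

def arterio_venous_ratio(artery_diams, vein_diams):
    """Ratio of mean artery width to mean vein width (assumed definition).

    VesselMap2 estimates the diameters from the retina image after the
    user selects vessels; here the diameters are supplied directly.
    """
    return mean(artery_diams) / mean(vein_diams)

print(arterio_venous_ratio([98.0, 105.0, 101.0], [128.0, 135.0, 131.0]))
```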
The provided text describes the IMEDOS GmbH's Static Vessel Analyzer (SVA) and its VesselMap2 software. The information focuses on a method comparison study between the semi-automated VesselMap2 software and a manual method for determining arterial-venous ratios.
Here's a breakdown of the requested information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative "acceptance criteria" with specific thresholds (e.g., "accuracy > 90%"). Instead, it describes performance in terms of agreement, repeatability, and reproducibility, and the absence of certain biases. The reported device performance is presented as conclusions from the study.
Acceptance Criteria (Implied) | Reported Device Performance (VesselMap2) |
---|---|
Agreement with manual method | "Agreement between manual and semi-automatic (VesselMap2) retinal vessel analysis, without any indication of statistically significant differences between methods." |
Repeatability (Intragrader) | "Smaller intragrader variability for the semi-automatic method (VesselMap2) than for the manual method." |
Reproducibility (Intergrader) | "Smaller [...] intergrader variability (reproducibility) for the semi-automatic method (VesselMap2) than for the manual method." |
Absence of reader-to-reader bias | "In the manual method, a systematic bias between the two graders exists, which is not the case in the semi-automatic method. Thus, the comparison study demonstrated that the VesselMap software does not introduce reader-to-reader bias." |
Intravisit Reproducibility | "Appropriate intravisit [...] reproducibility for the semi-automatic method." |
Intervisit Reproducibility | "Appropriate [...] intervisit reproducibility for the semi-automatic method." |
Low device-associated variability | "The visit-to-visit variability (whether for manual or semi-automated analysis) was shown to be much higher than the variability in results associated with the device." |
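The table reports these properties only qualitatively, and the study's actual statistical methods are not described. Purely as an illustration of how intragrader variability, intergrader variability, and grader bias are commonly quantified, here is a minimal sketch; the array layout, the function name, and the use of sample standard deviations are all assumptions:

```python
import numpy as np

def grader_variability(readings: np.ndarray) -> dict:
    """readings: shape (n_images, n_graders, n_repeats), n_repeats >= 2.

    Intragrader (repeatability): spread of one grader's repeat readings
    of the same image. Intergrader (reproducibility): spread of the
    per-grader means for the same image. Bias: mean difference between
    the first two graders, as in the two-grader study described above.
    """
    intra = np.std(readings, axis=2, ddof=1).mean()
    per_grader = readings.mean(axis=2)               # (n_images, n_graders)
    inter = np.std(per_grader, axis=1, ddof=1).mean()
    bias = (per_grader[:, 0] - per_grader[:, 1]).mean()
    return {"intragrader_sd": intra, "intergrader_sd": inter, "bias": bias}
```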
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: The document does not specify the exact number of retinal images or patients used in the method comparison study. It only mentions "representative retina images."
- Data Provenance: Not explicitly stated (e.g., country of origin, retrospective or prospective).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
The document mentions "two graders" in the context of comparing manual methods and intergrader variability. It does not provide their specific qualifications (e.g., "radiologist with 10 years of experience"). It's implied these graders performed the manual measurements against which the semi-automatic method was compared.
4. Adjudication Method for the Test Set
The document does not describe an explicit adjudication method (like 2+1 or 3+1) for establishing ground truth. Instead, it describes a comparison between manual measurements performed by graders and the semi-automatic software.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Its Effect Size
A formal MRMC comparative effectiveness study, comparing human readers with AI assistance versus without AI assistance, is not explicitly described. The study compared a semi-automatic method (VesselMap2) to a purely manual method, highlighting the improved repeatability and reproducibility of the semi-automatic method and the absence of reader-to-reader bias. It does not quantify an "effect size of how much human readers improve with AI vs. without AI assistance" in terms of observer performance but rather demonstrates the device's inherent performance characteristics.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
The VesselMap2 is described as a "semi-automated measurement" tool where "the software allows the user to select veins and arteries." This indicates human-in-the-loop involvement, so a purely standalone (algorithm-only) performance evaluation is not detailed for the final output (arterial-venous ratio). The comparison performed was between this semi-automated approach and a manual approach.
7. The Type of Ground Truth Used
The "ground truth" for the test set was effectively the manual method of determining arterial-venous ratios performed by human graders, against which the "semi-automated calculation" of the VesselMap2 software was compared. The study aimed to show agreement with this established manual method and improvements in consistency.
8. The Sample Size for the Training Set
The document does not provide any information about a dedicated training set or its sample size. The focus is on the performance of the software in a comparison study.
9. How the Ground Truth for the Training Set Was Established
Since no training set is mentioned, there is no information on how its ground truth would have been established.
Ask a specific question about this device
(27 days)
R2 DigitalNow HD is a software application intended to process digitized screen-film mammograms for comparison purposes only. The software processes digitized prior film images to produce lossy-compressed DICOM images that more closely resemble digital mammography images. R2 DigitalNow HD images are intended for comparison purposes only and cannot be used for primary diagnosis.
DigitalNow HD is a software application which runs on the Hologic Cenova server (Class I exempt per 21 CFR § 892.2010 and 21 CFR § 892.2020).
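As an illustration of the kind of processing described (not the proprietary R2 DigitalNow HD algorithm itself), the sketch below applies a stand-in tone curve to a digitized film image and tags the derived object as lossy, comparison-only output. The file names and the percentile/gamma transform are assumptions, and the actual lossy codec step is omitted:

```python
import numpy as np
import pydicom
from pydicom.uid import generate_uid

# Hypothetical input: a digitized screen-film mammogram stored as DICOM.
ds = pydicom.dcmread("digitized_film.dcm")
raw = ds.pixel_array

# Stand-in tone curve (percentile stretch + gamma) in place of the
# proprietary processing; MONOCHROME1 vs. MONOCHROME2 handling omitted.
lo, hi = np.percentile(raw.astype(np.float64), (1.0, 99.0))
stretched = np.clip((raw - lo) / (hi - lo), 0.0, 1.0) ** 0.8

max_val = (1 << ds.BitsStored) - 1
ds.PixelData = (stretched * max_val).astype(raw.dtype).tobytes()

# New UID for the derived image, mirrored into the file meta header.
ds.SOPInstanceUID = generate_uid()
ds.file_meta.MediaStorageSOPInstanceUID = ds.SOPInstanceUID

# Flag the output as lossy-processed, comparison-only imagery
# (the compression step itself is not shown in this sketch).
ds.LossyImageCompression = "01"
ds.ImageComments = "Processed prior film image; for comparison purposes only."
ds.save_as("processed_prior.dcm")
```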
The provided text does not contain detailed information about specific acceptance criteria or the study that rigorously proves the device meets those criteria. The submission is a 510(k) summary, which focuses on demonstrating substantial equivalence to a predicate device rather than comprehensive performance validation against defined acceptance criteria.
However, based on the implied acceptance criteria for a 510(k) submission and the information provided, we can infer some aspects:
The primary "acceptance criterion" for this 510(k) submission appears to be demonstrating substantial equivalence to the predicate device (DexTop Mammography Workstation K080351 for certain software functions) in terms of intended use, technological characteristics, and safety and effectiveness.
Here's an attempt to structure an answer based on the given information, highlighting what is present and what is absent:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criterion (Inferred from 510(k) Goal) | Reported Device Performance (Summary from Document) |
---|---|
Substantial Equivalence to Predicate Device | The FDA reviewed the 510(k) and determined the device is substantially equivalent to legally marketed predicate devices for the stated indications for use. |
Intended Use Equivalence | "R2 DigitalNow HD is a software application intended to process digitized screen-film mammograms for comparison purposes only." This matches the general scope of such software. |
Technological Characteristics Equivalence | DigitalNow HD is described as "a software application intended to process digitized screen-film mammograms." The submission would have detailed its algorithms and processing. |
Safety and Effectiveness | "The 510(k) Pre-Market Notification for DigitalNow HD contains adequate information and data to enable FDA - CDRH to determine substantial equivalence..." "General Safety and Effectiveness Concerns: The device labeling contains instructions for use and any necessary cautions and warnings to provide for safe and effective use of this device. Risk management is ensured via a risk analysis, which is used to identify potential hazards." |
Compliance with Standards | Designed and manufactured in accordance with ISO 13485, ISO 14971, ANSI/AAMI SW68:2001, 21 CFR § 820. |
Note: The document explicitly states, "The submission contains the results of a hazard analysis and the 'Level of Concern' for potential hazards has been classified as 'Moderate'." This indicates risk management was performed and reviewed as part of establishing safety.
2. Sample Size Used for the Test Set and Data Provenance
The provided text does not specify a sample size for a test set (e.g., number of images, number of patients). It also does not mention the data provenance (e.g., country of origin, retrospective or prospective).
For a 510(k) submission, particularly one involving image processing for comparison only and not primary diagnosis, the "test set" might not be a clinical image dataset for performance metrics like sensitivity/specificity. Instead, testing might focus on technical validation, such as image quality assessment (e.g., preservation of detail, compression artifacts, adherence to DICOM standards) and functional testing against specified requirements. The document mentions "The performance of the software is also tested in accordance with Hologic's SOPs and testing procedures to demonstrate adequate performance," which implies internal validation.
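Functional testing of that sort could include, for example, attribute-level checks on the output objects. The sketch below is hypothetical throughout: the file name, the required-attribute list, and the expectation that outputs carry the lossy-compression flag are assumptions, not requirements stated in the summary:

```python
import pydicom

REQUIRED_ATTRIBUTES = [
    "SOPInstanceUID", "StudyInstanceUID", "SeriesInstanceUID",
    "PatientID", "LossyImageCompression",
]

def test_processed_output_attributes():
    # Hypothetical path to one processed output image.
    ds = pydicom.dcmread("processed_prior.dcm")
    for keyword in REQUIRED_ATTRIBUTES:
        assert keyword in ds, f"missing DICOM attribute: {keyword}"
    # Comparison-only output should be flagged as lossy-processed.
    assert ds.LossyImageCompression == "01"
```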
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
This information is not provided in the given text. Since the device is for "comparison purposes only" and "cannot be used for primary diagnosis," a ground truth established by experts for diagnostic performance (e.g., disease presence/absence) would likely not be the primary focus of the performance testing presented in this summary. If image quality was assessed by experts, their qualifications are not mentioned.
4. Adjudication Method for the Test Set
This information is not provided in the given text.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
An MRMC study is not mentioned in the provided text. Given the device's intended use ("for comparison purposes only" and "cannot be used for primary diagnosis"), a comparative effectiveness study showing human readers' improvement with AI assistance would not be required or relevant for this specific 510(k) pathway.
6. Standalone (Algorithm Only) Performance Study
The text does not explicitly describe a standalone performance study in terms of specific clinical metrics like sensitivity or specificity. The "performance" mentioned likely refers to technical performance, such as:
- Ability to process digitized screen-film mammograms.
- Production of lossy-compressed DICOM images.
- Resemblance to digital mammography images (qualitative assessment).
The submission implies internal testing ("The performance of the software is also tested in accordance with Hologic's SOPs and testing procedures to demonstrate adequate performance"), which would include standalone functional and technical validation.
7. Type of Ground Truth Used
The text does not specify the type of ground truth used. For this type of device, ground truth would likely relate to objective measurements of image fidelity to the original film, proper DICOM formatting, and the visual similarity ("more closely resemble digital mammography images") which could be a qualitative assessment by an internal team rather than a clinical ground truth like pathology or outcomes data.
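Objective fidelity measurements of that kind are often expressed with standard image-similarity metrics. As a sketch only (the metric choice and function name are assumptions; the submission does not say how fidelity was assessed), PSNR and SSIM could be computed between the digitized original and the processed output:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def fidelity_report(original: np.ndarray, processed: np.ndarray) -> dict:
    """Objective similarity between a digitized film image and its
    processed/compressed counterpart (2-D grayscale arrays, same shape)."""
    data_range = float(original.max()) - float(original.min())
    return {
        "psnr_db": peak_signal_noise_ratio(original, processed,
                                           data_range=data_range),
        "ssim": structural_similarity(original, processed,
                                      data_range=data_range),
    }
```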
8. Sample Size for the Training Set
The text does not provide any information regarding a training set sample size. Given the stated function ("process digitized screen-film mammograms to produce lossy-compressed DICOM images that more closely resemble digital mammography images"), it's possible the algorithms are based on image processing techniques that don't rely on a "training set" in the machine learning sense (e.g., for classification or detection). Instead, they might use fixed, rule-based, or adaptive algorithms for image manipulation. If machine learning was involved for the "resemblance" aspect, the training data is not discussed.
9. How the Ground Truth for the Training Set Was Established
This information is not provided in the given text, as no training set or its ground truth is mentioned.
Ask a specific question about this device