Search Results
Found 27 results
510(k) Data Aggregation
(140 days)
KONICA MINOLTA MEDICAL & GRAPHIC, INC.
The AeroDR SYSTEM with P-31 is indicated for use in generating radiographic images of human anatomy. It is intended to replace radiographic film/screen systems in general-purpose diagnostic procedures. The AeroDR SYSTEM with P-31 is not indicated for use in mammography, fluoroscopy, tomography and angiography applications.
The AeroDR SYSTEM with P-31 (K130936) is a digital imaging system to be used with diagnostic x-ray systems. It consists of the AeroDR Detector and the Console CS-7 (operator console). Images captured with the flat panel digital detector can be communicated to the operator console via a wired or wireless connection, depending on the user's choice.
The following modifications were made to the AeroDR SYSTEM (K102349, the predicate device) to produce the AeroDR SYSTEM with P-31 (K130936, the proposed device). A 10 x 12 inch panel size (P-31) was added alongside the existing 14 x 17 inch panel. The materials of the proposed panel were also evaluated against the latest ISO 10993-1 and shown to be as safe as the predicate device. Two accessories were added: the AeroDR Interface Unit2, designed to replace and combine the functions of the AeroDR Interface Unit and the AeroDR Generator Interface Unit, and the AeroDR Battery Charger2, designed for the 10 x 12 inch proposed panel (P-31) and functionally identical to the predicate device's AeroDR Battery Charger. Apart from these minor modifications, the AeroDR SYSTEM (K102349) and the AeroDR SYSTEM with P-31 (K130936) perform the same, and the device design, materials used, and physical properties of both devices are substantially equivalent.
Here's an analysis of the provided text regarding the acceptance criteria and study for the AeroDR SYSTEM with P-31 (K130936).
Crucially, the provided document does not contain detailed acceptance criteria or the specifics of a comprehensive study proving the device meets said criteria in the way typically expected for an AI/ML medical device submission. This document is a 510(k) summary for a digital radiography system, likely focusing on demonstrating substantial equivalence to a predicate device rather than a de novo submission with extensive performance studies.
The document primarily focuses on demonstrating that the new model (AeroDR SYSTEM with P-31) is substantially equivalent to a previously cleared device (AeroDR SYSTEM, K102349) despite minor modifications (panel size, specific accessories).
Therefore, many of the requested data points (like sample size for test sets, number of experts for ground truth, MRMC studies, AI performance metrics) are not present in this type of submission. The "study" here is primarily a comparison to the predicate device and testing to ensure basic functionality and safety.
I will populate the table and answer the questions based on the information available and explicitly state when information is not provided.
Acceptance Criteria and Device Performance Study (K130936)
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Functional Equivalence: Performs the same as the predicate device (AeroDR SYSTEM, K102349) in generating radiographic images. | "Perform same" as the predicate device. |
| Safety: Meets relevant safety standards and is as safe as the predicate device. | Materials of the proposed panel were evaluated per ISO 10993-1 and assured to be as safe as the predicate device. Electrical safety (IEC 60601-1) and electromagnetic compatibility (IEC 60601-1-2) assured, as for the predicate device. |
| Effectiveness: No issues or differences in effectiveness compared to the predicate device. | Performance testing (bench testing), including non-clinical and clinical testing per the FDA Guidance for the Submission of 510(k)'s for Solid State X-ray Imaging Devices, was conducted and showed equivalent outcomes: "no impacts in technological characteristics such as design, material chemical composition energy source and other factors of the proposed device were recognized" and "no safety and effectiveness and performance issue or difference as the predicate devices has." |
| Material Compatibility: New panel materials are safe. | Materials of the proposed panel evaluated per ISO 10993-1; safety assured. |
| Software/Hardware V&V: Completed without problems. | Software and hardware verification and validation completed without problem. |
| Risk Management: ISO 14971-based risk management completed. | Risk management based on ISO 14971 completed without problem. |
Interpretation: The acceptance criteria are implicitly met by demonstrating "substantial equivalence" to the predicate device. The primary "performance" reported is that the new device performs "the same" and has "no differences" in safety, effectiveness, or technological characteristics compared to the predicate.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Not provided. The document refers to "Non clinical and clinical testing" but does not specify sample sizes for any test set or data provenance. This 510(k) summary focuses on equivalence, not a detailed performance study with specific test populations.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not provided. Given that this is a digital radiography system aiming for equivalence and not an AI/CAD/image analysis device, the concept of "ground truth" and expert adjudication in this context is likely different or not explicitly detailed in this summary. The "clinical testing" mentioned likely refers to ensuring image quality is comparable, rather than disease detection/diagnosis.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not provided.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No. This is not an AI/CAD device. The AeroDR SYSTEM with P-31 is a digital X-ray imaging system. Therefore, an MRMC study comparing human readers with and without AI assistance is not applicable and was not performed.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- No. This is not an AI/CAD device. Standalone algorithm performance is not relevant for a digital X-ray capture system itself.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Not explicitly stated in terms of diagnostic ground truth. For a digital radiography system, "ground truth" regarding image quality might be based on established phantom testing, expert visual assessment of image fidelity, and comparison against images from the predicate device. Actual disease-specific ground truth (like pathology or outcomes) is not mentioned as this device is a general imaging system, not a diagnostic aid for specific conditions.
8. The sample size for the training set
- Not applicable/Not provided. This device is a hardware imaging system, not a machine learning algorithm that requires a "training set" in the typical sense.
9. How the ground truth for the training set was established
- Not applicable/Not provided. As above, no training set for an AI/ML algorithm is mentioned or relevant to this device type.
Summary of what the document does communicate about the "study":
The "study" or evaluation of the AeroDR SYSTEM with P-31 was primarily focused on demonstrating substantial equivalence to its predicate device (AeroDR SYSTEM, K102349). This involved:
- Engineering and Performance Testing: "Bench testing" and "Non clinical and clinical testing" were performed. These likely involved technical measurements of image quality, dose efficiency, and physical properties, as well as comparison of images to those produced by the predicate device.
- Safety Testing: Evaluation of new materials (ISO 10993-1), electrical safety (IEC 60601-1), and electromagnetic compatibility (IEC 60601-1-2).
- Verification and Validation: For software and hardware.
- Risk Management: Based on ISO 14971.
The conclusion is that, despite minor modifications (a new panel size and updated accessories), the proposed device performs the same as the predicate and has no new safety or effectiveness issues.
(37 days)
KONICA MINOLTA MEDICAL & GRAPHIC, INC.
AeroPilot is a software device that is used in conjunction with REGIUS Unitea to control Konica Minolta Digital Radiography systems. This device is indicated for use in generating radiographic images of human anatomy. It is intended to replace radiographic film/ screen systems in general-purpose diagnostic procedures.
This device is not indicated for use in mammography, fluoroscopy, tomography and angiography applications.
The AeroPilot is a software device that is used in conjunction with our REGIUS Unitea (K071436) to control Konica Minolta Digital Radiography systems. The AeroPilot is software designed to be installed on an off-the-shelf PC (operation console), one of the components of REGIUS Unitea; it works with Konica Minolta Digital Radiography systems as an interface with the X-ray generator, or between REGIUS Unitea and the specified Konica Minolta Digital Radiography systems, and acquires X-ray images just as the specified Konica Minolta Digital Radiography systems do.
The provided text is a 510(k) Summary for the AeroPilot device, which is a software device used to control Konica Minolta Digital Radiography systems. The document focuses on establishing substantial equivalence to predicate devices and does not detail a study with specific acceptance criteria, test set characteristics, expert involvement, or adjudication methods for evaluating algorithm performance. Instead, it relies on demonstrating equivalence in configuration, specifications, and principle of operation, along with performance testing to show equivalent image quality.
Therefore, many of the requested fields cannot be extracted or are not applicable from the provided document.
Here's a summary of what can and cannot be answered based on the provided text:
1. Table of acceptance criteria and the reported device performance:
- The document states that a "Risk Analysis for the AeroPilot (software) was conducted on the basis of ISO14971...the risk associated with all of the identified hazards was reduced to an acceptable level or ALARP." This is a high-level acceptance criterion related to risk management.
- It also states, "the results of performance testing show that the image quality of proposed device is equivalent to the predicate device." This implies an acceptance criterion for image quality, but specific numerical criteria (e.g., sensitivity, specificity, resolution, contrast) are not provided.
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Risk Level | Reduced to an acceptable level or ALARP (as low as reasonably practicable) based on ISO 14971. |
| Image Quality | Equivalent to the predicate device. |
2. Sample size used for the test set and the data provenance: Not specified.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable, as no human-read ground truth study is described for the device's performance. The comparison is against predicate device performance, implying technical specifications and image quality rather than diagnostic accuracy by humans.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance: No MRMC study was mentioned. The device is control software for a digital radiography system, not an AI-powered diagnostic tool.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: The document describes "performance testing" to show "image quality... equivalent to the predicate device." This suggests a standalone evaluation of the device's technical output (image quality), but specific metrics and a detailed study design are not provided.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): The "ground truth" for the performance testing appears to be the image quality and functionality of the predicate device. The new device's performance is compared against the established performance of existing Konica Minolta Digital Radiography systems.
8. The sample size for the training set: Not applicable, as this is a software device for controlling hardware and acquiring images, not a machine learning algorithm that requires a training set in the typical sense.
9. How the ground truth for the training set was established: Not applicable.
(88 days)
KONICA MINOLTA MEDICAL & GRAPHIC, INC.
The AeroDR Stitching System is used with Konica Minolta AeroDR SYSTEM which is indicated for use in generating radiographic images of human anatomy. It is intended to replace radiographic film/screen systems in general-purpose diagnostic procedures. This device is used for examinations of long areas of anatomy such as the leg and spine. This device is not indicated for use in mammography, fluoroscopy, tomography and angiography applications.
The AeroDR Stitching System is used with the 510(k)-cleared Konica Minolta AeroDR SYSTEM (K102349), which is indicated for use in generating radiographic images of human anatomy. The AeroDR Stitching System is an accessory for a stationary X-ray system that extends the capability of the AeroDR SYSTEM to allow the capture of long-length images with an image area up to 1196 mm x 349 mm. It consists of the AeroDR Stitching Unit, the AeroDR Stitching X-ray Auto Barrier Unit, and a Power Supply Unit. The AeroDR Detector (K102349) loaded in the AeroDR Stitching Unit takes up to 3 images and transfers them to the Console CS-7 (K102349). Combining the transferred images in the CS-7 enables diagnosis from long-length images.
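The combining step described above, in which up to three overlapping exposures are merged into one long-length image, can be illustrated with a simplified sketch. This is not Konica Minolta's stitching algorithm; the function name `stitch_vertical`, the fixed overlap size, and the linear feathering are assumptions for illustration only.

```python
import numpy as np

def stitch_vertical(tiles, overlap_px):
    """Combine vertically overlapping image tiles into one long image.

    tiles: list of 2-D arrays, top-to-bottom, each overlapping the next
           by `overlap_px` rows. Overlapping rows are linearly blended
           (feathered) to hide the seam.
    """
    result = tiles[0].astype(np.float64)
    for tile in tiles[1:]:
        tile = tile.astype(np.float64)
        # Linear feathering weights across the overlap region.
        w = np.linspace(0.0, 1.0, overlap_px)[:, None]
        blended = result[-overlap_px:] * (1.0 - w) + tile[:overlap_px] * w
        result = np.vstack([result[:-overlap_px], blended, tile[overlap_px:]])
    return result

# Three 4-row tiles with a 2-row overlap -> 4 + 2 + 2 = 8 rows total.
tiles = [np.full((4, 3), v) for v in (10.0, 20.0, 30.0)]
long_image = stitch_vertical(tiles, overlap_px=2)
print(long_image.shape)  # (8, 3)
```

In a real system, the overlap would be estimated by registering the tiles (for example, by correlating the overlapping rows) rather than assumed constant.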
The provided 510(k) summary for the Konica Minolta AeroDR Stitching System (K120752) focuses primarily on substantial equivalence to a predicate device and adherence to general safety and EMC standards. It does not contain detailed performance study data, acceptance criteria, or ground truth establishment relevant to the device's image stitching capability.
Here's an analysis of the requested information based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document unfortunately does not explicitly state specific acceptance criteria in a quantitative or qualitative manner for the imaging performance of the stitching system, nor does it report detailed device performance metrics against such criteria.
The "Performance-Testing" section states generically that: "Performance testing was conducted to verify the design output met the design input requirements and to validate AeroDR Stitching SYSTEM conformed to the defined user needs and intended uses upon the quality of the device software. Through validation results of sample images and non-clinical testing under simulated use conditions, safe, effectiveness and performances are confirmed the achievement of predefined acceptance criteria..."
However, these predefined acceptance criteria and the corresponding performance results are not detailed in the summary. The focus is on demonstrating substantial equivalence to a predicate device and compliance with safety and EMC standards.
2. Sample Size Used for the Test Set and Data Provenance
The document does not specify the sample size of "sample images" used for performance testing (if any were used beyond engineering testing). It also does not mention the data provenance (e.g., country of origin, retrospective or prospective) for any clinical or non-clinical image data used in testing. The phrase "validation results of sample images and non-clinical testing under simulated use conditions" suggests that testing might have involved a limited set of images, possibly generated internally, rather than a broad clinical dataset.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The 510(k) summary does not provide any information regarding the use of experts to establish ground truth for a test set. This type of detail would typically be found in a clinical performance study report, which is not included here.
4. Adjudication Method for the Test Set
Since no information is provided about expert review or a "test set" in the context of clinical evaluation, there is no mention of an adjudication method (e.g., 2+1, 3+1, none).
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was performed. The focus of the submission is on demonstrating substantial equivalence to a predicate device through technological characteristics and safety standards, not on comparative clinical performance with human readers. Therefore, no effect size of human readers improving with AI vs. without AI assistance is mentioned.
6. Standalone (Algorithm Only) Performance Study
While the device's stitching capability is an algorithm, the document does not present a standalone performance study in terms of specific metrics like stitching accuracy, artifact detection, or image quality assessments related only to the algorithm's output. The "Performance-Testing" section vaguely refers to "validation results of sample images and non-clinical testing under simulated use conditions" to confirm "safe, effectiveness and performances," but concrete results of the stitching algorithm's standalone performance are absent.
7. Type of Ground Truth Used
The document does not specify the type of ground truth used for any performance evaluation. Given the nature of a stitching system for long-length imaging, ground truth might ideally involve physical measurements on phantoms, or expert assessment of stitching lines and image continuity. However, this information is not provided.
8. Sample Size for the Training Set
The document does not provide any information about a training set size. As a 510(k) for a relatively early-stage digital radiography accessory (2012), and based on the summary, it's possible the device relied more on rule-based or deterministic algorithms for stitching rather than extensive machine learning requiring a large training set. Even if machine learning was used, the details are not presented.
9. How the Ground Truth for the Training Set Was Established
Since no information is provided about a training set, the document also does not explain how ground truth for a training set was established.
Summary of Missing Information:
The provided 510(k) summary is typical for a device primarily seeking substantial equivalence based on technological similarity and compliance with recognized standards. It lacks the detailed performance study information, including acceptance criteria, sample sizes, expert involvement, and ground truth methodologies, that would be expected for a more in-depth clinical performance evaluation or an AI-intensive device requiring specific validation of its intelligent features. The "Performance-Testing" section is very general and does not disclose the specific data or methods used to "confirm the achievement of predefined acceptance criteria."
(41 days)
KONICA MINOLTA MEDICAL & GRAPHIC, INC.
AeroDR X70 is a stationary x-ray system intended for obtaining radiographic images of various portions of the human body in a clinical environment. The AeroDR X70 is not intended for mammography.
The AeroDR X70 is a stationary x-ray system with a ceiling mounted tube support, a floor mounted table and wall stand. The image receptor and the image receptor holder is placed in the table or the wall stand. The ceiling stand and the table are motorized for up and down movements, all other movements are manually operated. The standard equipment includes a graphic display showing X-ray tube rotation and film focus or source image distance, a generator control console and an Image system console.
Here's an analysis of the provided text regarding the acceptance criteria and the study that proves the device meets them:
1. Table of Acceptance Criteria and Reported Device Performance
Based on the provided text, the device (AeroDR X70 Stationary X-ray System) is a conventional X-ray system, not an AI-powered device. Therefore, the "acceptance criteria" and "device performance" described primarily relate to safety and equivalency to a predicate device, rather than specific diagnostic performance metrics (like sensitivity, specificity, or AUC) that would be relevant for an AI-enabled diagnostic tool.
The "acceptance criteria" can be inferred from the standards the device meets to demonstrate safety and substantial equivalence.
| Acceptance Criteria (Inferred) | Reported Device Performance and Evidence |
|---|---|
| Safety and Electrical Requirements: Conformance to relevant medical electrical equipment safety standards. | Evidence: The device meets several IEC standards (IEC 60601-1, IEC 60601-1-2, IEC 60601-1-3, IEC 60601-1-4, IEC 60601-2-7, IEC 60601-2-28, IEC 60601-2-32) and NEMA XR 7, covering general safety, electromagnetic compatibility, radiation protection, programmable electrical medical systems, high-voltage generators, the X-ray source assembly, and associated equipment. "The same scope of testing has been preformed by certification body." |
| Equivalent Imaging Capability: Ability to produce radiographic images of various portions of the human body, equivalent to the predicate device. | Evidence: "The provided performance data demonstrate that the imaging system in the AeroDR X70 system is substantially equivalent to the predicate device with regards to the capability of producing radiographic images of various portions of the human body." The "AeroDR X70 digital image system...have the same imaging principle, physical characteristic and Intended use" as the predicate device. |
| Intended Use: Consistent with obtaining radiographic images of various human body portions, excluding mammography. | Evidence: "AeroDR X70 is a stationary x-ray system intended for obtaining radiographic images of various portions of the human body in a clinical environment. The AeroDR X70 is not intended for mammography." This matches the predicate device's intended use. |
| Functional Equivalence: Similar mechanical and operational characteristics compared to the predicate device. | Evidence: "The AeroDR X70 is basically the same product as the predicate device Intuition." Both have a motorized ceiling stand and table, with manual operation for other movements. Wallstand modifications are noted but considered functionally equivalent (e.g., magnetic vs. manual brake release). |
Based on the provided document, the device described is a conventional X-ray system, not an AI-powered device. Therefore, the following sections about AI-specific study details (sample sizes, ground truth establishment, expert adjudication, MRMC studies, standalone performance) are not applicable to this submission. The "study" here refers to the testing and comparison performed to demonstrate substantial equivalence to a predicate conventional X-ray system.
2. Sample Size Used for the Test Set and Data Provenance
- Not applicable for an AI device.
- For this conventional X-ray system, the "test set" would refer to the tests performed against the safety and performance standards listed (IEC, NEMA). The document does not specify a "sample size" in terms of patient data or images used for testing, as it's not a diagnostic AI algorithm. The testing focused on hardware performance and safety.
- Data Provenance: Not specified as it's not a data-driven AI device.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Not applicable for an AI device.
- For this conventional X-ray system, "ground truth" relates to compliance with engineering and safety standards, validated by accredited certification bodies. No medical experts are mentioned as establishing "ground truth" for the device's technical performance or safety tests.
4. Adjudication Method (e.g., 2+1, 3+1, none) for the Test Set
- Not applicable for an AI device.
- For the conventional X-ray system, the "adjudication" would involve technical verification against the standards by a certification body. The specific process (e.g., how disputes or interpretations during testing are resolved) is not detailed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement with AI vs. Without AI Assistance
- No. Not applicable as this is a conventional X-ray system without AI.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done
- No. Not applicable as this is a conventional X-ray system without an AI algorithm.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
- Not applicable for an AI device.
- For the conventional X-ray system, the "ground truth" for demonstrating substantial equivalence is primarily based on:
- Conformance to international safety and performance standards (IEC, NEMA).
- Direct comparison of technical specifications and imaging principles with a legally marketed predicate device.
- Performance data demonstrating the imaging system's capability to produce radiographic images, implicitly checked against expected image quality for general radiography.
8. The Sample Size for the Training Set
- Not applicable as this is a conventional X-ray system, not an AI model. There is no "training set" in the context of machine learning.
9. How the Ground Truth for the Training Set was Established
- Not applicable as this is a conventional X-ray system, not an AI model.
(53 days)
KONICA MINOLTA MEDICAL & GRAPHIC, INC.
The AeroSync for AeroDR SYSTEM is a software device that is used in conjunction with AeroDR SYSTEM. This device is indicated for use in generating radiographic images of human anatomy. It is intended to replace radiographic film/screen systems in general-purpose diagnostic procedures.
This device is not indicated for use in mammography, fluoroscopy, tomography and angiography applications.
The AeroSync for AeroDR SYSTEM is a software device that is used in conjunction with our AeroDR SYSTEM, K102349. It adds a minor change to our cleared AeroDR SYSTEM: it eliminates the need for an electrical connection between the AeroDR SYSTEM and the X-ray generator, allowing X-ray irradiation to be detected without cables connected to the X-ray generator.
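Cable-free exposure detection of this kind is typically implemented by having the detector monitor its own signal and begin capture once the signal rises above a threshold, instead of waiting for a trigger cable from the generator. The sketch below illustrates that general idea only; it is not Konica Minolta's AeroSync algorithm, and the function name `detect_exposure`, the threshold, and the sample values are invented.

```python
def detect_exposure(samples, baseline, threshold, min_consecutive=3):
    """Return the index at which X-ray exposure is judged to have started,
    or None if no exposure is seen.

    samples: detector signal readings over time
    baseline: dark-current level measured before exposure
    threshold: signal rise above baseline that counts as "beam on"
    min_consecutive: consecutive over-threshold samples required,
                     to avoid triggering on a single noise spike
    """
    run = 0
    for i, s in enumerate(samples):
        if s - baseline > threshold:
            run += 1
            if run >= min_consecutive:
                return i - min_consecutive + 1  # first sample of the run
        else:
            run = 0
    return None

# Noise spike at index 2 is ignored; the sustained rise from index 5 triggers.
signal = [100, 101, 180, 100, 102, 175, 178, 181, 183]
print(detect_exposure(signal, baseline=100, threshold=50))  # 5
```

Requiring several consecutive over-threshold samples is one simple way to trade off trigger latency against false triggers from electronic noise.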
The provided text describes the Konica Minolta AeroSync for AeroDR SYSTEM, a software device intended to eliminate the need for an electrical connection between the AeroDR SYSTEM and X-ray generator, thereby detecting X-ray irradiation without cables.
Here's an analysis of the acceptance criteria and study information:
1. Table of Acceptance Criteria and Reported Device Performance
The submission primarily focuses on establishing substantial equivalence to a predicate device (AeroDR SYSTEM, K102349) rather than defining specific new acceptance criteria in terms of numeric thresholds for performance metrics. The core "acceptance criteria" here appear to be demonstrating equivalence to the predicate in key areas.
| Acceptance Criterion (Implicit / Stated Goal) | Reported Device Performance |
|---|---|
| Safety | Complies with IEC 60601-1 Ed. 2 (1988) + A1 (1991) + A2 (1995) and IEC 60601-1-2 Ed. 3 (2007). Risk analysis (ISO 14971) shows all identified hazards reduced to acceptable levels (ALARP). Meets FCC Part 15 Subparts C and E for RF wireless technologies. |
| Electromagnetic Compatibility | Complies with IEC 60601-1-2 Ed. 3 (2007). Meets FCC Part 15 Subparts C and E for RF wireless technologies. |
| Image Quality (Equivalence to Predicate) | "The results of performance testing shows that the image quality of proposed device is equivalent to the predicate device." (No specific metrics or quantitative comparisons are provided in this summary.) |
| Substantial Equivalence to Predicate Device | Demonstrated through comparison of Indications for Use, Configuration, Specifications, Principles of Operation, Risk Analysis, Compliance with Standards, and performance testing. "Comprehensively, we conclude that the AeroSync for AeroDR System has the same technological characteristics as the predicate device." |
2. Sample Size Used for the Test Set and Data Provenance
The provided summary does not specify the sample size used for performance testing (e.g., number of images, patient cases). It also does not mention the data provenance (country of origin, retrospective/prospective).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not provide information on experts used to establish ground truth or their qualifications. Given the focus on "image quality equivalence," it is possible that image quality assessments were performed by qualified personnel, but this is not detailed.
4. Adjudication Method for the Test Set
The document does not specify any adjudication method used for the test set.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
The document does not mention an MRMC comparative effectiveness study or any effect size for human reader improvement with/without AI assistance. This is a software device facilitating X-ray detection, not an AI diagnostic tool.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done
The document describes "performance testing" leading to the conclusion that "the image quality of proposed device is equivalent to the predicate device." This suggests standalone testing of the system's ability to produce images. However, no specific details of this testing (e.g., what was measured, how it was performed) are provided in this summary. The device's primary function is X-ray detection and image generation, not an interpretation independent of a human.
7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, Etc.)
The document does not specify the type of ground truth used for performance testing. Given the context, ground truth for image quality equivalence would likely involve a comparison against images produced by the predicate device under controlled conditions, possibly evaluated by imaging experts.
8. The Sample Size for the Training Set
The document does not mention a training set sample size. This device is described as a "software device" for X-ray detection and image generation, not a machine learning or AI algorithm that typically requires a training set in the conventional sense for feature learning or classification. It adds a "minor change" to an existing system, suggesting an engineering modification rather than a complex algorithm requiring extensive algorithmic training data.
9. How the Ground Truth for the Training Set Was Established
Since no training set is mentioned in the context of machine learning, there is no information on how its ground truth was established.
(24 days)
KONICA MINOLTA MEDICAL & GRAPHIC, INC.
The Direct Digitizer, REGIUS SIGMA2 is an X-ray image reader which uses a stimulable phosphor plate (Plate) as X-ray detector installed in a separate cassette. It reads the image recorded on the Plate and transfers the image data to an externally connected device such as a host computer, an order input device, an image display device, a printer, an image data filing device, and other image reproduction devices. It is designed and intended to be used by trained medical personnel in a clinic, a radiology department in a hospital and in other medical facilities.
This device is not intended for use in mammography.
The Direct Digitizer, REGIUS SIGMA2 is a compact X-ray image reader that uses a stimulable phosphor plate (Plate), installed in a cassette, as the X-ray detector; it reads the image recorded on the Plate when the cassette is inserted into the entrance slot of the device. By means of a laser scan and photoelectric detection, the device reads the X-ray image formed as a latent image on the Plate by an external X-ray generating device and converts it into digital data.
The device comprises a reading unit and a cassette containing the Plate.
Plates and cassette remain unchanged from the predicate device, REGIUS SIGMA.
The image data are transferred to an externally connected device such as a host computer, an order input device, an image display device, a printer, an image data filing device, or another image reproduction device.
The basic operations of the REGIUS SIGMA2 and the predicate device, REGIUS SIGMA, such as start-up, shutdown, patient registration, and setting of various conditions, are performed from the Medical Image Processing Workstation, ImagePilot (operator console), which was cleared under 510(k) K071436.
The REGIUS SIGMA was modified into the REGIUS SIGMA2 to increase processing capacity. To achieve this, the firmware (device mechanical control software) of the reading unit was modified, as outlined below.
The processing capacity is increased by controlling the feed sequence.
(1) Increasing the speed of removing the Plate from the cassette and of the sequence that stores it back into the cassette
(2) Increasing the erasing speed by changing the erasing LED
The portions of the feed sequence that affect image quality (reading speed, open/close timing of the sub-scanning nip) are unchanged.
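The throughput effect of these feed-sequence changes can be illustrated with a simple cycle-time model. The step names and all durations below are hypothetical (the 510(k) summary gives no timing figures); the sketch only shows how shortening the plate-handling and erasing steps raises plates-per-hour while leaving the scan step, which governs image quality, untouched.

```python
STEPS = {          # hypothetical per-cycle durations in seconds (not from the 510(k))
    "remove_plate": 4.0,
    "scan": 20.0,
    "erase": 8.0,
    "store_plate": 4.0,
}

def plates_per_hour(cycle_seconds: float) -> float:
    """Throughput for a strictly sequential feed sequence."""
    return 3600.0 / cycle_seconds

before = sum(STEPS.values())   # 36 s per plate with the original firmware

# Modified firmware: faster plate removal/return and a faster erasing LED;
# the scan duration is deliberately left unchanged, mirroring the summary.
FASTER = dict(STEPS, remove_plate=2.0, erase=4.0, store_plate=2.0)
after = sum(FASTER.values())   # 28 s per plate

print(f"{plates_per_hour(before):.0f} -> {plates_per_hour(after):.0f} plates/hour")
```

With these made-up numbers the cycle drops from 36 s to 28 s, a roughly 29% capacity increase from handling steps alone.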
Here's a breakdown of the acceptance criteria and study information based on the provided text, using the requested structure:
1. Table of Acceptance Criteria and Reported Device Performance
The provided text does not explicitly state specific numerical acceptance criteria for image quality or processing performance. Instead, it describes a comparative approach where the performance of the new device (REGIUS SIGMA2) is evaluated against a legally marketed predicate device (REGIUS SIGMA).
Acceptance Criteria Category | Acceptance Criteria (Stated or Implied) | Reported Device Performance |
---|---|---|
Image Quality | Perform "as well as" the predicate device; no new safety and efficacy issues. | Performance data from non-clinical testing of REGIUS SIGMA2 compared favorably with the predicate device, REGIUS SIGMA, indicating it performed "as well as" the predicate device. |
Processing Capacity | Increased processing capacity compared to the predicate device. | Processing capacity was increased by modifying firmware to control feed sequence, including faster Plate removal/storage and increased erasing speed. |
Safety | Meet specified safety standards. | Met IEC60601-1, IEC60601-1-2, 21 CFR 1040.10, IEC60825-1, and ISO14971 standards. Risk analysis reduced identified risks to an acceptable level. |
Technological Equivalence | Same technological characteristics and principle of operation as the predicate. | Principles of operation and technological characteristics are the same. Plates and cassettes remain unchanged from the predicate device. |
2. Sample Size Used for the Test Set and Data Provenance
The document states: "Performance data from non-clinical testing of the REGIUS SIGMA2 is compared with data from the predicate device, REGIUS SIGMA."
- Sample Size (Test Set): Not specified. The phrase "non-clinical testing" suggests laboratory/bench testing rather than human subject data.
- Data Provenance: Not specified, but likely internal Konica Minolta laboratory data, given it's "non-clinical testing." No indication of country of origin or whether it's retrospective or prospective in the traditional sense of human data studies.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: Not applicable/not specified. The testing described is "non-clinical" and focuses on device performance metrics rather than image interpretation by experts to establish a "ground truth" for clinical accuracy.
- Qualifications of Experts: N/A.
4. Adjudication Method for the Test Set
- Adjudication Method: Not applicable/none. As the testing is non-clinical, there wouldn't be an adjudication method for human interpretation. The comparison is based on technical performance metrics of the device itself.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance
- MRMC Study: No, an MRMC comparative effectiveness study was not done. The device is an X-ray image reader, not an AI-assisted diagnostic tool for interpretation.
- Effect Size: Not applicable.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Standalone Performance: Yes, in a sense. The described "non-clinical testing" is inherently a standalone evaluation of the device's technical specifications and performance characteristics (e.g., image quality, processing speed) compared to the predicate, independent of human interaction for interpretation. It's evaluating the device's output, not its interpretative assistance to humans.
7. The Type of Ground Truth Used
- Type of Ground Truth: For the "non-clinical testing," the "ground truth" implicitly refers to the established technical performance specifications and acceptable output quality of the predicate device (REGIUS SIGMA), as well as adherence to relevant safety standards. The new device's performance metrics (e.g., image quality, speed) were compared directly against those of the predicate device to ensure equivalence or improvement. There's no mention of pathology, outcomes data, or expert consensus related to diagnostic accuracy from images as the ground truth.
8. The Sample Size for the Training Set
- Sample Size (Training Set): Not applicable/not specified. This device is an X-ray digitizer/reader, not a machine learning or AI algorithm that requires a "training set" in the conventional sense. The "firmware (device mechanical control software)" modifications are likely conventional programming changes to optimize mechanical sequences, not an AI model trained on data.
9. How the Ground Truth for the Training Set Was Established
- Ground Truth for Training Set: Not applicable. As noted above, there's no indication of a "training set" for AI/ML in this context. The "ground truth" for developing the firmware modifications would be engineering specifications and desired operational parameters for mechanical control.
(75 days)
KONICA MINOLTA MEDICAL & GRAPHIC, INC.
The AeroDR SYSTEM with P-21 is indicated for use in generating radiographic images of human anatomy. It is intended to replace radiographic film/screen systems in general-purpose diagnostic procedures. The AeroDR SYSTEM with P-21 is not indicated for use in mammography, fluoroscopy, tomography and angiography applications.
The AeroDR SYSTEM, K102349, is a digital imaging system to be used with diagnostic x-ray systems. It consists of the AeroDR Detector (flat panel digital detector), Console CS-7 (operator console), AeroDR Interface Unit, AeroDR Generator Interface Unit, AeroDR Access Point and AeroDR Battery Charger. Images captured with the flat panel digital detector can be communicated to the operator console via a wired or wireless connection, depending on the user's choice. The modification made for the AeroDR SYSTEM with P-21 adds a different panel size: a 17 x 17 inch panel (P-21) is added alongside the 17 x 14 inch panel. The materials of the panel remain unchanged, and no changes were made other than the panel size from 17 x 14 inches to 17 x 17 inches.
I am sorry, but based on the provided text, there is no information about acceptance criteria or a study proving the device meets those criteria. The document describes a 510(k) submission for a medical device (AeroDR SYSTEM with P-21), but it primarily focuses on its substantial equivalence to a predicate device, its indications for use, and regulatory compliance.
Specifically, the "Performance Testing" section states: "Performance data from non-clinical testing of the AeroDR SYSTEM with P-21 is compared with data from the predicate device, AeroDR SYSTEM (P-11). This comparison showed that the AeroDR SYSTEM with P-21 performed as well as the predicate device."
This statement indicates a comparison was made, but it does not provide details on:
- Specific acceptance criteria.
- The results of the performance data in terms of specific metrics.
- The type of study conducted (e.g., sample size, data provenance, ground truth, expert involvement, MRMC, standalone performance).
Therefore, I cannot populate the table or answer the specific questions about the study from the given text.
(283 days)
KONICA MINOLTA MEDICAL & GRAPHIC, INC.
The Konica Minolta Xpress Digital Mammography System is a software device that is used in conjunction with a specified Konica computed radiography system, REGIUS Model 190 or REGIUS Model 210 with REGIUS Cassette Plate CP1M200 and the REGIUS Console CS-3000, and a mammography x-ray unit, to produce full field digital mammography images. The Xpress Digital Mammography software with a specified Konica Minolta computed radiography system is designed to replace screen-film based systems for the production of mammographic images. The device is intended to be used for screening and diagnosis of breast cancer. The mammographic images can be interpreted by a qualified physician using either hard copy film or soft copy display at a workstation.
Konica Minolta's Xpress Digital Mammography is a software device that is used in conjunction with currently marketed Konica computed radiography systems to acquire full field digital mammography images. Digital mammography can be performed with the specified Konica Minolta computed radiography systems, with the Xpress Digital Mammography software activated, using any mammography x-ray unit legally marketed in the U.S. Konica Minolta does not specify a mammography x-ray unit for use with the specified Konica Minolta computed radiography system with the activated Xpress Digital Mammography software.
The components of the specified Konica Minolta computed radiography systems for digital mammography include either REGIUS Models 190 (K052095) or 210 (K092717) Direct Digitizers and the REGIUS Console CS-3000 medical image processing workstation (K051523) with optional bar code reader accessories.
The x-ray images produced by the legally marketed x-ray unit are captured on a REGIUS image plate and digitized using either the REGIUS Model 190 or 210 Direct Digitizer. Digitized images are imported into the REGIUS Console CS-3000 workstation. The REGIUS Console CS-3000 is identical to the one described in K051523, with the Xpress Digital Mammography software activated. The Xpress Digital Mammography software provides a high-resolution reading capability and display options specific to the review of mammographic images.
Soft copy images can be transferred to any legally marketed diagnostic viewing station that accepts DICOM 3 input. Hard copy images can be generated using any printer legally marketed for use in digital mammography that supports the DICOM basic grayscale print management service with a maximum 50 micrometer pixel pitch and a film optical density of at least 3.6.
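The hard-copy constraints above reduce to two numeric limits, which can be sketched as a simple compatibility screen. This is illustrative only; the function name and arguments are invented, and a real deployment would read the printer's specifications from its conformance statement rather than hard-coded values.

```python
# Sketch: screen a hard-copy printer's reported specifications against the
# two limits quoted in the summary (pixel pitch no coarser than 50 micrometers,
# film optical density of at least 3.6). Names are hypothetical.

def printer_acceptable(pixel_pitch_um: float, film_density: float) -> bool:
    """True if both hard-copy mammography limits are met."""
    return pixel_pitch_um <= 50.0 and film_density >= 3.6

print(printer_acceptable(50.0, 3.6))   # boundary values pass
print(printer_acceptable(78.0, 3.6))   # pitch too coarse for mammography film
```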
Here's a breakdown of the acceptance criteria and study information for the Konica Minolta Xpress Digital Mammography System, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document primarily focuses on demonstrating non-inferiority to predicate devices (Fuji Computed Radiography Mammography Suite - FCRMS) and traditional screen-film mammography, rather than specific numerical acceptance thresholds. Therefore, the "acceptance criteria" are implied by the non-inferiority claims, and the "reported device performance" reflects how the device compared to the predicate/screen-film.
Acceptance Criteria (Implied by Non-Inferiority) | Reported Device Performance |
---|---|
Non-Clinical / Physical Performance: | |
Sensitometric Response: Linear relationship between pixel value and radiation exposure maintained. | Linear relationship between pixel value and exposure maintained for all exposure conditions (R-squared = 1.00). |
Spatial Resolution (Presampling MTF): Equivalent or better than FCRMS. | Presampling MTF of Xpress Digital Mammography is equivalent or better than FCRMS. |
Dynamic Range (Average DQE): Equivalent or better than FCRMS. | Average DQE of Xpress Digital Mammography is equivalent or better than FCRMS. |
Image Erasure: No residual signals after erasure. | No residual signals observed after erasure. |
Image Fading: Initial decline in luminescence intensity during 2 hours following exposure acceptable. | Initial decline in luminescence intensity during 2 hours following the exposure was 20% at 30 degrees Celsius. (No explicit acceptance range given, but presented as acceptable.) |
Image Retention: No signal observed after repeated exposures and erasures. | No signal observed after 100 cycles of exposures and erasures. |
Fogging after Exposure to Room Light: No signal fogging or depletion. | No signal fogging or depletion observed after 5 minutes exposure to light of 7000 Ix luminance (more than ten times ordinary room condition). |
Repeated Exposure Test (Lag value): Acceptable lag value. | Resultant lag value was 0.0047. (No explicit acceptance range given, but presented as acceptable.) |
Phantom Testing (CDMAM IQF): Comparable to FCRMS. | IQF results generally comparable across soft-copy, hard-copy, and FCRMS, with hard-copy at 1.47 mGy showing the highest IQF. Implicitly met, as no inferiority claims were made. |
Phantom Testing (ACR Phantom Scores): Equivalent or better than FCRMS and film-screen systems. | ACR phantom scores for fibers, specks, and masses were equivalent or better than FCRMS and a film-screen system (Min-R EV/Min-R EV150) for both hard-copy and soft-copy display. |
Clinical Performance: | |
ROC Curves (AUC): Non-inferior to Screen-Film Mammography. | Observed difference in AUC about 1%, with upper one-sided 95% confidence limits less than 5%, indicating non-inferiority to Screen-Film Mammography. |
Specificity: Non-inferior to Screen-Film Mammography. | Difference in specificities about 2% favoring Xpress system, with upper one-sided confidence limit (GEE analysis) less than 0.05. Supportive ROC curve analysis also demonstrated non-inferiority. |
Sensitivity: Non-inferior to Screen-Film Mammography. | Not non-inferior by GEE analysis. However, ROC curve analysis indicated that at a specificity of 90%, the upper one-sided confidence limit on the sensitivity of Screen-Film minus the Xpress system was less than 0.10. (Acknowledged bias due to recruitment of suspicious Screen-Film cases). |
Features Analysis: No loss of detail compared to Screen-Film. | Xpress system favored for visualization of the skin line; little evidence of a difference for other features. Indicates no loss of detail. No trend favoring hard image to soft image for Xpress system. |
Overall Safety and Effectiveness: Safe and effective for screening and diagnosis of breast cancer. | Demonstrated safe and effective for screening and diagnosis of breast cancer based on ROC analyses, sensitivity, specificity, recall rates, and features analyses, with images not inferior to conventional Screen-Film images. |
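The non-inferiority comparison in the ROC row above can be sketched numerically. The reader scores below are made up for illustration (the study data are not in the document); AUC is computed via its Mann-Whitney interpretation, i.e. the probability that a randomly chosen cancer case is scored higher than a randomly chosen non-cancer case.

```python
# Illustrative sketch of the AUC non-inferiority logic, with made-up scores.

def auc(pos_scores, neg_scores):
    """Mann-Whitney AUC: P(positive case scored above negative case), ties = 0.5."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical probability-of-malignancy scores for the two modalities.
film_pos, film_neg = [0.9, 0.8, 0.7, 0.6], [0.3, 0.65, 0.5, 0.1]
digi_pos, digi_neg = [0.9, 0.8, 0.6, 0.7], [0.4, 0.2, 0.3, 0.1]

delta = auc(film_pos, film_neg) - auc(digi_pos, digi_neg)

# The summary's claim has the form: the upper one-sided 95% confidence limit
# on (film AUC - digital AUC) stays below a 0.05 margin. Only the point
# estimate is shown here; a real analysis would add the confidence limit
# (e.g. via DeLong's method or a bootstrap).
print(delta < 0.05)
```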
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size (Clinical Study): 210 subjects
- 60 pathology-proven cancers
- 130 benign abnormal subjects (benign biopsy findings)
- 20 negative subjects (confirmed by negative one-year follow-up mammography)
- Data Provenance: Prospective, non-randomized, non-blinded clinical trial conducted at 11 clinical investigational centers. The country of origin is not explicitly stated, but the U.S. is implied given the FDA submission. Both standard screen-film mammograms and Xpress Digital mammograms were acquired.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Ground Truth for Clinical Study:
- Pathology: For the 60 cancer cases and 130 benign abnormal cases, ground truth was established by pathology reports.
- Follow-up Mammography: For the 20 negative cases, ground truth was established by negative one-year follow-up mammography.
- Experts for Phantom Testing: 3 observers participated in rating both the CDMAM and ACR phantoms.
- Observer A: Seven years' experience of phantom scoring.
- Observer B: Five years' experience of phantom scoring.
- Observer C: Five years' experience of phantom scoring.
4. Adjudication Method for the Test Set
- Clinical Study: Not explicitly described as an adjudication method for ground truth. However, 11 radiologists interpreted the 210 cases, scoring each with a BIRADS score and probability of malignancy. The text states, "The final study cohort consisted of 210 subjects: 60 pathology proven cancers, 130 benign abnormal subjects (benign biopsy findings), and 20 negative subjects (confirmed by negative one year follow-up mammography)." This implies that the ground truth for individual cases (cancer/benign/negative) was established independently (pathology/follow-up) rather than through radiologist consensus on the test images themselves. The radiologists' interpretations were then compared against this established ground truth.
- Phantom Testing: The average scores of all three observers were calculated for the CDMAM phantom (contrast-detail diagrams and IQF results) and the ACR phantom (fibers, specks, masses scores). This is a form of consensus/averaging rather than adjudication for differing interpretations.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
Yes, a multi-reader, multi-case study was performed as part of the clinical trial.
- Human Readers: 11 radiologists interpreted the 210 screen-film and Xpress Digital images. They were blinded to patient histories, mammography films, and results.
- AI (Device) vs. Human Readers: This study compared the device (Xpress Digital) to traditional screen-film mammography, both interpreted by human readers. It did not directly measure the effect size of how much human readers improve with AI vs. without AI assistance (i.e., AI-assisted reading vs. unassisted reading). Instead, it compared two different imaging modalities (digital vs. film), both interpreted by humans. The device itself is the imaging system, not an AI interpretation aid.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
No. The clinical study involved human radiologists interpreting both the Xpress Digital mammograms and the screen-film mammograms. The device itself is an imaging system, not an AI algorithm for automated interpretation.
7. Type of Ground Truth Used
- Clinical Study:
- Pathology: For cancer cases and benign abnormal subjects (biopsy findings).
- Outcomes Data (Follow-up Mammography): For negative subjects (negative one-year follow-up mammography).
8. Sample Size for the Training Set
The document does not mention a training set or any machine learning components that would require one. The Xpress Digital Mammography System is described as a "software device" for image acquisition and display, and the studies performed relate to its physical and clinical performance as an imaging system, not an AI diagnostic algorithm.
9. How the Ground Truth for the Training Set Was Established
Not applicable, as no training set or machine learning components are described or implied.
(164 days)
KONICA MINOLTA MEDICAL & GRAPHIC, INC.
The Direct Digitizer, REGIUS SIGMA is an X-ray image reader which uses a stimulable phosphor plate (Plate) as X-ray detector installed in a separate cassette. It reads the image recorded on the Plate and transfers the image data to an externally connected device such as a host computer, an order input device, an image display device, a printer, an image data filing device, and other image reproduction devices. It is designed and intended to be used by trained medical personnel in a clinic, a radiology department in a hospital and in other medical facilities.
This device is not intended for use in mammography.
The Direct Digitizer, REGIUS SIGMA is a compact X-ray image reader that uses a stimulable phosphor plate (Plate), installed in a cassette, as the X-ray detector; it reads the image recorded on the Plate when the cassette is inserted into the entrance slot of the device. By means of a laser scan and photoelectric detection, the device reads the X-ray image formed as a latent image on the Plate by an external X-ray generating device and converts it into digital data.
The device comprises a reading unit and a cassette containing the Plate.
The image data are transferred to an externally connected device such as a host computer, an order input device, an image display device, a printer, an image data filing device, or another image reproduction device.
The basic operations of the REGIUS SIGMA, such as start-up, shutdown, patient registration, and setting of various conditions, are performed from the Medical Image Processing Workstation, ImagePilot (operator console), which was cleared under 510(k) K071436.
This 510(k) summary for the Direct Digitizer, REGIUS SIGMA, primarily focuses on demonstrating substantial equivalence to predicate devices through technical comparisons and adherence to safety standards. It does not contain detailed information about acceptance criteria or a specific study proving device performance against such criteria in terms of diagnostic accuracy or clinical effectiveness.
The document states: "The performance test results show that there is no new safety and efficacy issue of the REGIUS SIGMA." However, it does not elaborate on what these performance tests entailed, what specific metrics were measured, or what the acceptance criteria for those metrics were.
Therefore, many of your requested points cannot be answered from the provided text.
Here's an analysis of what can and cannot be answered:
1. A table of acceptance criteria and the reported device performance
- Cannot be provided. The document does not specify performance acceptance criteria related to diagnostic accuracy or clinical effectiveness, nor does it report specific device performance metrics against such criteria. It mentions meeting safety standards (IEC60601-1 Ed.2, IEC60601-1-2 Ed.3, 21 CFR 1040.10, IEC60825-1) and ISO14971 for risk management, which are general safety and quality standards, not specific performance metrics.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Cannot be provided. The document does not describe any specific clinical or performance test set, its size, or its provenance.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Cannot be provided. Since no test set is described for diagnostic performance, ground truth establishment methods or expert qualifications are not mentioned.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Cannot be provided. No test set or adjudication method is described.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- No MRMC study described. This device is an X-ray image reader (hardware) and does not appear to incorporate AI for diagnostic assistance based on the description. Therefore, an MRMC study comparing human readers with/without AI assistance would not be relevant to this specific device submission as described.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not applicable/Cannot be determined from the text. This device is hardware for digitizing X-ray images. It's not an AI algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Cannot be provided. No ground truth for diagnostic performance is mentioned.
8. The sample size for the training set
- Not applicable/Cannot be provided. This is hardware, not a machine learning model, so there is no "training set" in the sense of data used to train an algorithm.
9. How the ground truth for the training set was established
- Not applicable/Cannot be provided. As above, no training set for an algorithm is mentioned.
Summary from the document:
The 510(k) submission for the Direct Digitizer, REGIUS SIGMA, focuses on demonstrating substantial equivalence to predicate devices (Direct Digitizer, REGIUS Model 110, and Point-of-Care CR360) based on:
- Similar principles of operation and technological characteristics.
- Performance test results showing no new safety and efficacy issues. (Specific criteria or results are not detailed in this summary).
- Adherence to recognized safety and EMC standards:
- IEC60601-1 Ed.2 (electrical safety)
- IEC60601-1-2 Ed.3 (electromagnetic compatibility)
- 21 CFR 1040.10, IEC60825-1 (radiation safety, specifically laser safety for the reading mechanism)
- ISO14971 (risk management)
The key takeaway is that this approval is based on demonstrating the new device performs "similarly" to previously cleared devices and meets applicable safety standards, rather than providing a detailed clinical study with specific acceptance criteria for diagnostic performance.
(218 days)
KONICA MINOLTA MEDICAL & GRAPHIC, INC.
The Acies is a software product. It is intended for installation on an off-the-shelf PC meeting or exceeding minimum specifications. The Acies primarily facilitates processing and presentation of medical images on display monitors suitable for the medical task being performed. The Acies can process and display medical images from a variety of different modality systems. It also interfaces to various image storage and printing devices using DICOM or similar interface standards. Lossy compressed mammographic images must not be reviewed for primary image interpretations. Mammographic images may only be interpreted using an FDA approved monitor that offers at least 5 MP resolution and meets other technical specifications reviewed and accepted by FDA.
The Acies is software intended to configure a PACS (Picture Archiving and Communications System) on a standard Windows-based PC. The workstation on which this software is installed can serve as the server-client front-end PC, with the functions of the image server and the viewer for reading images stored on the server. In addition, it can read images from a client PC over the network. The Acies primarily facilitates processing and presentation of medical images on display monitors suitable for the medical task being performed. The Acies can process and display medical images from a variety of different modality systems. It also interfaces to various image storage and printing devices using DICOM or DICOM-based interface standards. Lossy compressed mammographic images must not be reviewed for primary image interpretations. Mammographic images may only be interpreted using an FDA approved monitor that offers at least 5 MP resolution and meets other technical specifications reviewed and accepted by FDA. The system is also capable of displaying the diagnostic image on the display screen by receiving DICOM SR from an FDA approved CAD (Computer-Aided Detection) processor.
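The restriction that lossy compressed mammograms must not be used for primary interpretation maps onto a standard DICOM header field: the Lossy Image Compression attribute (0028,2110) is "01" when an image has ever undergone lossy compression and "00" otherwise. A minimal sketch of such a gate follows; the plain dict stands in for a parsed DICOM header, and this is not how Acies itself is documented to implement the check.

```python
# Sketch: gate primary interpretation on DICOM Lossy Image Compression
# (0028,2110). "01" = the image has undergone lossy compression at some
# point; "00" = it has not. A dict stands in for a real DICOM parser.

def eligible_for_primary_read(header: dict) -> bool:
    return header.get("LossyImageCompression", "00") != "01"

lossless = {"Modality": "MG", "LossyImageCompression": "00"}
lossy = {"Modality": "MG", "LossyImageCompression": "01"}

print(eligible_for_primary_read(lossless))  # True
print(eligible_for_primary_read(lossy))     # False
```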
This document is a 510(k) premarket notification for a Picture Archiving and Communications System (PACS) called "Acies." As such, it describes the device's intended use and claims substantial equivalence to predicate devices, rather than presenting a study with acceptance criteria and performance metrics for a novel AI/CAD device.
Therefore, the requested information regarding acceptance criteria, device performance, sample sizes, ground truth establishment, expert qualifications, adjudication methods, MRMC studies, and standalone performance for an AI/CAD device cannot be extracted from this document.
The document focuses on:
- Device Identification: Company, submitter, trade name, common name, classification, product code.
- Device Description: The Acies is software for configuring a PACS, enabling processing and presentation of medical images from various modalities, interfacing with storage/printing via DICOM. It explicitly states that lossy compressed mammographic images must not be reviewed for primary interpretation and that mammographic images require FDA-approved monitors with specific resolution. It also mentions displaying diagnostic images from FDA-approved CAD processors.
- Indications for Use: Primarily facilitates processing and presentation of medical images on suitable display monitors, interfacing with various modality systems and storage/printing devices using DICOM.
- Substantial Equivalence: Compares the Acies to existing predicate devices (HOLOGIC SecurView DX, K062107 and KONICAMINOLTA REGIUS Unitea / ImagePilot, K071436), claiming similar indications for use, technological characteristics, and stating that verification and validation testing showed no safety/efficacy issues.
- Conformance to Standards: Lists several relevant standards (IEC 62304, ISO 14971, ISO/IEC 10918-1, DICOM).
- Conclusion: Reaffirms substantial equivalence to predicate devices.
- FDA Clearance Letter: Confirms the 510(k) clearance based on substantial equivalence.
In summary, this document does not contain the information required to answer your specific questions related to AI/CAD device performance studies. It is a regulatory submission for a PACS system, not a study report validating an AI algorithm's diagnostic performance.