Search Results
Found 2 results
510(k) Data Aggregation
(120 days)
Bone Suppression Software
Bone Suppression Software is an image processing technology that improves the visibility of soft tissues in the lung area by suppressing the signals of ribs and clavicles in chest X-ray images.
The purpose of this software is to provide Bone Suppression images. The software receives exposed frontal plain chest X-ray images as inputs and processes each image received from the Senciafinder Gateway application. Once started, it extracts the lung field area, applies Bone Suppression processing to attenuate the bone signals, and outputs Bone Suppression images in which the rib and clavicle signals within the extracted lung field area are attenuated.
In conjunction with the Senciafinder Gateway, images are received from diagnostic imaging equipment over the network. Bone Suppression processing is applied to the received images, which are then output over the network to an image display system such as a PACS or workstation.
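As a purely illustrative aid, the sketch below mirrors the processing flow described above (lung-field extraction, bone-signal attenuation within the lung field, companion-image output). Every function name, the use of SciPy's `gaussian_filter` as a stand-in for the bone-signal estimate, and the attenuation factor are assumptions made here for illustration; the actual algorithm is not disclosed in the 510(k) summary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter  # assumes SciPy is available

def extract_lung_field(image: np.ndarray) -> np.ndarray:
    """Hypothetical lung-field segmentation returning a boolean mask.
    A real product would use a trained segmentation model; a simple
    intensity threshold stands in here purely for illustration."""
    return image > image.mean()

def estimate_bone_signal(image: np.ndarray, lung_mask: np.ndarray) -> np.ndarray:
    """Hypothetical estimate of the rib/clavicle signal inside the lung field.
    A heavily smoothed copy of the image stands in for a learned bone model."""
    bone = gaussian_filter(image.astype(np.float64), sigma=8)
    return np.where(lung_mask, bone, 0.0)

def bone_suppress(image: np.ndarray, attenuation: float = 0.8) -> np.ndarray:
    """Produce a companion image with the estimated bone signal attenuated
    only within the extracted lung field (illustrative factor of 0.8)."""
    lung_mask = extract_lung_field(image)
    bone = estimate_bone_signal(image, lung_mask)
    return image - attenuation * bone
```

In the workflow described above, the gateway component would receive the input image over the network, invoke something like `bone_suppress()`, and route the resulting image to a PACS or workstation.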
The provided text describes a 510(k) premarket notification for "Bone Suppression Software" (K240281) by Konica Minolta, Inc. The core information regarding acceptance criteria and performance studies is present in Section VII, "PERFORMANCE DATA."
However, the provided text explicitly states: "No clinical studies were required to support the substantial equivalence." This means that the 510(k) submission relied on non-clinical performance data and comparison to a predicate device, rather than new clinical trials with human subjects.
Therefore, many of the requested points, such as data provenance for a test set, the number of experts for ground truth, adjudication methods, MRMC studies, standalone performance on a clinical test set, and detailed training set information, are not applicable or not available from the provided text. A clinical study (as typically understood in terms of human subjects and diagnostic performance analysis) was not performed or deemed necessary for this 510(k) clearance; the performance data relied on software verification and validation activities.
Here's a breakdown of what can be answered based on the provided text, and what cannot:
1. A table of acceptance criteria and the reported device performance
- Acceptance Criteria: The text states, "All the verification and validation activities required by the specification and the risk analysis for the Bone Suppression Software were performed and the results showed that the predetermined acceptance criteria were met." However, the specific acceptance criteria (e.g., in terms of image quality metrics, bone suppression ratios, or processing speed) are not detailed in the provided document.
- Reported Device Performance: Similarly, the specific quantitative performance metrics are not reported in the document. It only states that the device "performs as specified and functions as intended."
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Not applicable for a clinical test set (as no clinical study was required for substantial equivalence). The performance data referred to "Software Verification and Validation Testing," which typically involves simulated data, historical non-clinical data, or controlled laboratory tests, not a clinical "test set" in the diagnostic performance sense. The text does not provide details on the nature or origin of the data used for V&V activities.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Not applicable. As no clinical study was performed or required for the 510(k) substantial equivalence, no expert-established ground truth for a clinical test set is mentioned.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable. See point 3.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No. The document explicitly states, "No clinical studies were required to support the substantial equivalence." Therefore, no MRMC study evaluating human reader improvement with AI assistance was conducted or reported for this 510(k).
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done
- The document implies that standalone performance was assessed as part of "Software Verification and Validation Testing" to ensure it "performs as specified and functions as intended." However, the metrics of this standalone performance (e.g., objective image quality measurements, quantitative bone suppression metrics) are not provided in the text. It's a statement that V&V was successful, not a report of the results themselves.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not explicitly stated for the V&V activities. For Software Verification and Validation Testing, ground truth is typically established from the specifications and expected outputs of the software (e.g., whether a bone known to lie in a given location is correctly identified and suppressed); a minimal illustrative check of this kind is sketched after this list. This differs from clinical ground truth based on expert reads, pathology, or outcomes. The document does not elaborate on how this ground truth for V&V was established.
8. The sample size for the training set
- Not applicable/Not provided. The document does not mention details about the development (training) of the software. As it's a 510(k) submission, the focus is on verification and validation against a predicate, not detailing the original development process.
9. How the ground truth for the training set was established
- Not applicable/Not provided. See point 8.
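For illustration only, a specification-based V&V check of the kind mentioned under point 7 might look like the sketch below. The function name, the bone-region and lung-field masks, and the thresholds are all assumptions; the actual acceptance criteria are not disclosed in the document.

```python
import numpy as np

def verify_bone_attenuation(original: np.ndarray,
                            suppressed: np.ndarray,
                            bone_mask: np.ndarray,
                            lung_mask: np.ndarray,
                            min_relative_reduction: float = 0.5,
                            outside_tolerance: float = 1e-3) -> bool:
    """Illustrative specification-style check: the mean signal inside a known
    bone region must drop by at least a predetermined fraction, and pixels
    outside the extracted lung field must remain unchanged within tolerance.
    All thresholds are hypothetical."""
    bone_before = float(original[bone_mask].mean())
    bone_after = float(suppressed[bone_mask].mean())
    relative_reduction = (bone_before - bone_after) / bone_before
    outside_change = np.abs(
        original[~lung_mask].astype(np.float64) - suppressed[~lung_mask]
    ).max()
    return (relative_reduction >= min_relative_reduction
            and outside_change <= outside_tolerance)
```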
Summary based on the provided text:
The acceptance criteria for the Bone Suppression Software were met through "Software Verification and Validation Activities." These activities confirmed that the device "performs as specified and functions as intended." However, the specific quantitative acceptance criteria and the detailed performance results are not disclosed in this 510(k) summary. Crucially, no clinical studies involving human subjects were required or conducted to support this substantial equivalence determination, meaning aspects like test set data, expert ground truth, adjudication, and MRMC studies are not part of this submission's evidence as described.
(119 days)
BONE SUPPRESSION SOFTWARE
The software's intended use is to assist diagnosis of chest pathology by minimizing anatomical distractions such as the ribs and clavicle in chest x-ray images.
The Bone Suppression Software is a software component for use on diagnostic x-ray systems utilizing digital radiography (DR) or computed radiography (CR) technology. The software option suppresses bone anatomy in order to enhance visualization of chest pathology in a companion image that is delivered in addition to the original diagnostic image.
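To illustrate the "companion image" delivery pattern described above, here is a minimal sketch under assumed names (`ChestStudyOutput` and `process_study` are hypothetical): the original diagnostic image is passed through unchanged, and a bone-suppressed companion is produced alongside it.

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class ChestStudyOutput:
    original: np.ndarray   # unmodified diagnostic image, always delivered
    companion: np.ndarray  # bone-suppressed companion image

def process_study(original: np.ndarray,
                  suppress: Callable[[np.ndarray], np.ndarray]) -> ChestStudyOutput:
    """Return the original image together with a bone-suppressed companion.
    `suppress` is any callable implementing the suppression step (for example,
    the bone_suppress() sketch shown earlier); the original is never modified."""
    return ChestStudyOutput(original=original, companion=suppress(original))
```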
Here's a breakdown of the acceptance criteria and study information for the Bone Suppression Software, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document does not explicitly list quantitative acceptance criteria for the Bone Suppression Software's performance. It states that "Predefined acceptance criteria were met and demonstrated that the device is as safe, as effective, and performs as well as the predicate device." However, the specific metrics or thresholds for these criteria are not detailed.
Therefore, this section focuses on the qualitative claims made and the reported outcome.
| Acceptance Criteria Category | Reported Device Performance (Qualitative) |
| --- | --- |
| Safety and Effectiveness | "demonstrated that the device is as safe, as effective, and performs as well as the predicate device." |
| Image Acceptability | "Clinical testing was conducted to evaluate the acceptability of the companion images for assisting diagnosis." (Implied: results were acceptable) |
| Design Output Compliance | "Performance testing was conducted to verify the design output met the design input requirements." (Implied: met requirements) |
| User Needs / Intended Uses | "to validate the device conformed to the defined user needs and intended uses." (Implied: conformed) |
| Substantial Equivalence | "demonstrated that the device is as safe, as effective, and performs as well as the predicate device." (Supports a claim of substantial equivalence to the predicates.) |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: Not specified in the provided text. The document mentions "Clinical testing was conducted," but does not give a number of cases.
- Data Provenance: Not specified. It's unclear what country the data came from or whether it was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts
- Number of Experts: Not specified.
- Qualifications of Experts: Not specified. The document only mentions "Clinical testing was conducted to evaluate the acceptability of the companion images for assisting diagnosis." It implies that qualified professionals reviewed the images, but their specific roles or experience levels are not detailed.
4. Adjudication Method for the Test Set
- Not specified. The document does not describe any specific adjudication method (e.g., 2+1, 3+1 consensus) for establishing ground truth or evaluating the clinical images.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- MRMC Study: The document does not explicitly state that an MRMC comparative effectiveness study was done comparing human readers with and without AI assistance (bone suppression).
- Effect Size: Not provided. Since an MRMC study is not confirmed, no effect size is discussed. The study's focus was on the "acceptability of the companion images for assisting diagnosis" and demonstrating equivalence to a predicate.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
- The document describes the Bone Suppression Software as a "software component" that "suppresses bone anatomy in order to enhance visualization of chest pathology in a companion image that is delivered in addition to the original diagnostic image." This strongly implies that the software provides a processed image for human interpretation rather than a standalone diagnostic output. Although the algorithm performs its function independently, the clinical testing described focuses on the acceptability of the companion images for assisting diagnosis; it is an evaluation of output intended for human review, not a standalone diagnostic performance evaluation against ground truth.
7. The Type of Ground Truth Used
- The document implies that the evaluation was based on the "acceptability of the companion images for assisting diagnosis," as judged by clinical review. The specific type of ground truth (e.g., pathology, clinical follow-up, expert consensus on disease presence) against which diagnostic enhancement was measured is not explicitly stated. The ground truth therefore appears to concern the acceptability and utility of the suppressed images for assisting diagnosis rather than independent verification of specific pathologies against a definitive gold standard.
8. The Sample Size for the Training Set
- Not specified. The document does not provide any information about the training data or its sample size.
9. How the Ground Truth for the Training Set Was Established
- Not specified. As no information on the training set or its ground truth is provided, the establishment method is unknown.