510(k) Data Aggregation
(130 days)
The Vita Flex CR System is intended for digital radiography using a phosphor storage screen for standard radiographic diagnostic images. The LLI is indicated for Long Length Imaging examinations of long areas of anatomy such as the leg and spine.
The Vita Flex CR System with LLI is a Computed Radiography (CR) acquisition scanner that includes a mechanical and software interface to the LLI cassette. The device comprises a Man-Machine Interface panel, a CR scanner, and infrastructure that enables connection to external applications, i.e., to import command messages, export images, and provide status messages. The LLI is a CR cassette used for Long Length Imaging X-ray examinations of long areas of anatomy.
The Vita Flex CR system with LLI accepts an X-ray cassette with a screen. An X-ray cassette is a light-resistant container that protects the screen from exposure to daylight and allows the passage of X-rays through the front cover onto the phosphor layer of the screen. When struck by radiation, the intensifying screen fluoresces, emitting light that creates the image.
The Vita Flex CR system takes a cassette as input, extracts the exposed screen, and scans the image off the screen. The image is stored on the computer system attached to the Vita Flex CR system. Once the scan is complete, the screen data is erased and the screen is placed back inside the cassette for reuse by the customer.
When a cassette is properly inserted into the scanner, the scanner locks the cassette in place. Once locked, the cassette door can be opened to allow the scanner to feed the screen into the unit.
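The cassette-handling cycle described above (insert, lock, feed, scan, erase, return) can be sketched as a simple state machine. This is an illustrative model only; the class and state names are invented and are not vendor code.

```python
from enum import Enum, auto

class ScannerState(Enum):
    IDLE = auto()
    CASSETTE_LOCKED = auto()
    SCANNING = auto()
    ERASING = auto()

class CRScanner:
    """Hypothetical model of the scan cycle described above (not vendor code)."""

    def __init__(self):
        self.state = ScannerState.IDLE
        self.stored_images = []   # stands in for the attached computer system
        self._screen = None

    def insert_cassette(self, screen_data):
        # A properly inserted cassette is locked before its door is opened.
        assert self.state is ScannerState.IDLE, "scanner busy"
        self.state = ScannerState.CASSETTE_LOCKED
        self._screen = screen_data

    def scan(self):
        # The screen is fed in, read out, and the image stored externally.
        assert self.state is ScannerState.CASSETTE_LOCKED, "no cassette locked"
        self.state = ScannerState.SCANNING
        self.stored_images.append(self._screen)
        # After scanning, residual screen data is erased so the screen
        # can be returned to the cassette for reuse.
        self.state = ScannerState.ERASING
        self._screen = None
        self.state = ScannerState.IDLE
        return self.stored_images[-1]
```

The state assertions enforce the ordering the text describes: no scan without a locked cassette, and the scanner returns to idle only after erasure.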
The scanning operation for the LLI cassette and screen is performed exactly as in the predicate. Because a long length imaging screen and cassette are large, the operation consists of two scans: scanning one half of the image, then turning the cassette around and scanning the second half.
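The two-scan procedure implies a reconstruction step: because the cassette is turned around between scans, the second half is read out rotated 180° relative to the first and must be flipped back before the halves are joined. A minimal NumPy sketch, with the function name and orientation convention assumed rather than taken from the submission:

```python
import numpy as np

def compose_lli(first_half, second_half_as_scanned):
    """Join two half-scans of an LLI screen into one long image.

    Hypothetical reconstruction: the cassette is turned around between
    scans, so the second half arrives rotated 180 degrees and is flipped
    back before vertical concatenation. Illustrative only.
    """
    second_half = np.rot90(second_half_as_scanned, 2)  # undo the turn-around
    return np.vstack([first_half, second_half])
```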
The document describes the regulatory submission for the Vita Flex CR System with LLI, a digital radiography system. The key argument for its clearance is its substantial equivalence to a previously cleared predicate device. Therefore, the "acceptance criteria" and "study that proves the device meets the acceptance criteria" are framed within the context of demonstrating this substantial equivalence through non-clinical testing, rather than a clinical trial with human subjects.
Here's the breakdown of the information requested:
1. A table of acceptance criteria and the reported device performance
The acceptance criteria are implicitly defined by the demonstration of equivalent or improved performance compared to the predicate device across various features and operational parameters. The reported device performance is presented as a comparison between the modified device (Vita Flex CR System with LLI) and the predicate device (Point of Care including LLI).
Feature / Acceptance Criteria Category | Predicate Device (Performance Baseline) | Vita Flex CR System with LLI (Reported Performance) | Met/Exceeds Criteria (Demonstrates Substantial Equivalence or Improvement) |
---|---|---|---|
Intended Use / Indications for Use | "digital radiography using a phosphor storage screen for standard radiographic diagnostic images. The LLI is indicated for Long Length Imaging examinations of long areas of anatomy such as the leg and spine." | Identical | Met - Unchanged |
Safety Standards | IEC60601-1, IEC60601-1-2, IEC 60825-1 (Class 1 Laser) | IEC60601-1, IEC60601-1-2, IEC 60825-1 (Class 1 Laser) | Met - Conformance verified by an OSHA-approved test lab |
Working Environment | Ambient: +10 to +40°C, RH: 30-70% | Ambient: +5 to +45°C, RH: 25-81%, Atmospheric pressure: 700-1060 hPa | Exceeds/Broader - Improved operational range |
Physical Size | 658mm x 735mm x 358mm, 45KG Weight | 668mm x 675mm x 385mm, 30KG Weight | Different but within acceptable range for function, lighter weight (Improvement) |
Power Input | Multiple profiles (90-250VAC, 50/60Hz) | Unified profile (100-240VAC, 50/60Hz, 1.5A) | Improvement - Streamlined power input |
Power Module | Internal AC/DC converter | External AC/DC converter | Different - No impact on safety or effectiveness |
Cassette Loading | Manual loading | Manual loading | Met - Unchanged |
Screen Access | Autofeed by Driving Roller in Screen Transportation unit | Autofeed by Driving Roller in Screen Transportation Unit | Met - Unchanged |
Imaging Module | Laser Platen Scanning (Vertical & Horizontal Direction) | Laser Platen Scanning (Vertical & Horizontal Direction) | Met - Unchanged fundamental technology |
Laser Beam Wavelength | Red Light: 655 ± 10 nm | Red Light: 660 ± 7 nm | Met - "Negligible difference," "no impact to safety or effectiveness" |
Laser Output Power (mW) | 22~25 | 30 ± 2 | Met - "Slight increase," "no impact to safety or effectiveness" |
Laser Level | Class 3B | Class 3B | Met - Unchanged |
Screen Erase Module | Achromatic Light Eraser, Fluorescent Lamps | Monochromatic Light Eraser, Red LED Light Source | Improvement - "More stable over longer period," "no impact to safety or effectiveness" |
Console Connector | USB 2.0 | USB 2.0 | Met - Unchanged |
Software Development Kit | Ultra Lite SDK | Ultra Lite SDK | Met - Unchanged |
Long Length Imaging Software | CR Long-Length Imaging System (K021829) | DR Long Length Imaging Software (K130567) (FDA cleared, K100094, for Carestream Image Suite Software) | Met - Uses newer, also cleared software, deemed "no impact to safety or effectiveness" |
DICOM | 3.0 | 3.0 | Met - Unchanged |
Image Pixel Depth (Bit) | 12 | 12 | Met - Unchanged |
Phosphor Screen & Cassette Spec. | 14x17", 10x12", 8x10", 24x30cm, 14x14", 14x33" (LLI), 15x30cm (Dental) and some not supported (10x10", 9.5x9.5") | Same supported sizes, plus 10x10" (Dental Vet) newly supported; 9.5x9.5" still not supported. | Exceeds/Improvement - Broader compatibility with some cassette sizes |
Throughput Tolerance ±5% (PPH) | Example values (e.g., 14x17" @ 21; 14x33" @ 2.5) | Example values (e.g., 14x17" @ 30 and higher; 14x33" @ 2.5) | Exceeds/Improvement - Higher PPH for some configurations |
Max Spatial Resolution (LP/mm) | Example values (e.g., 8x10" @ 4.2; 10x12" @ 3.5) | Example values (e.g., 8x10" @ 4.2; 10x12" @ 4.2) | Exceeds/Improvement - Higher resolution for some configurations |
Min Pixel Pitch (µm) | Example values (e.g., 14x33" @ 173; 8x10" @ 100) | Example values (e.g., 14x33" @ 160; 8x10" @ 86) | Exceeds/Improvement - Smaller pixel pitch for some configurations |
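The last two table rows are linked by sampling theory: a detector with pixel pitch p can resolve at most the Nyquist frequency, 1/(2p), in line pairs per millimetre. A back-of-the-envelope check (a calculation of ours, not a figure from the filing):

```python
def nyquist_lp_per_mm(pixel_pitch_um: float) -> float:
    """Limiting spatial resolution implied by a given pixel pitch.

    Nyquist frequency = 1 / (2 * pitch). Input pitch is in micrometres;
    result is in line pairs per millimetre (lp/mm).
    """
    pitch_mm = pixel_pitch_um / 1000.0
    return 1.0 / (2.0 * pitch_mm)

# e.g. the 86 um pitch quoted for the 8x10" screen permits up to
# ~5.8 lp/mm in principle, so the stated 4.2 lp/mm sits below that limit.
```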
2. Sample size used for the test set and the data provenance
The document explicitly states: "Given the differences from the predicate device, clinical testing is not necessary for the subject device. Bench testing alone is sufficient to demonstrate substantial equivalence."
Therefore, there was no "test set" in the sense of a dataset of patient images. The evaluation was based on bench testing of the device's hardware and software components. The "sample size" would refer to the number of devices tested, or the number of tests performed on a device, not a patient image sample size. No specific numbers are provided for the quantity of bench tests or units tested, beyond the general statement that "Bench testing was performed."
Data Provenance: Not applicable as no clinical or image data was used for testing.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable. As no clinical testing or image-based test set was used, there was no need for expert radiologists to establish ground truth.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable. There was no image-based test set for which adjudication would be relevant.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No. An MRMC study was not performed. The device is a digital radiography system, not an AI-powered diagnostic aid meant to assist human readers. The submission focuses on the safety and performance of the imaging equipment itself in comparison to its predicate.
6. If a standalone study (i.e., algorithm-only, without human-in-the-loop performance) was done
No. This describes the performance of the imaging system and its included components, not a standalone algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
Not applicable. The "ground truth" for this submission was the established performance and safety characteristics of the predicate device and relevant industry standards (IEC, etc.). The modified device was evaluated against these benchmarks using non-clinical (bench) testing.
8. The sample size for the training set
Not applicable. This device is a hardware system with integrated software, not a machine learning model that requires a "training set."
9. How the ground truth for the training set was established
Not applicable. No training set was used.
(269 days)
Smart Stitching software is indicated for use to allow post-capture positioning and joining of individual images to produce an additional composite image for two dimensional image review.
Smart Stitching provides a function to read multiple medical images at one time by stitching them into one image based on overlapping areas. It supports DICOM 3.0, the standard medical image format, as well as TIFF and raw images. It provides additional functions such as image retrieval, storage, and transmission. Smart Stitching is designed to facilitate diagnosis by easily stitching multiple medical images (up to 5 images from multiple captures) into one image. Smart Stitching software does not serve as acquisition software, nor does it have a control function for acquisition settings.
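Overlap-based stitching of the kind described can be sketched as two steps: estimate how many rows are duplicated between two vertically adjacent captures, then concatenate while dropping the duplicate region. The functions below are an illustrative simplification; `find_overlap` and `stitch_pair` are invented names, and real stitching software typically uses feature matching or cross-correlation with blending rather than this brute-force search.

```python
import numpy as np

def find_overlap(top_img, bottom_img, max_overlap=64):
    """Estimate how many rows two vertically adjacent images share.

    Illustrative approach: try each candidate overlap and keep the one
    whose overlapping rows match best (smallest mean absolute difference).
    """
    best_rows, best_err = 1, float("inf")
    limit = min(max_overlap, top_img.shape[0], bottom_img.shape[0])
    for rows in range(1, limit + 1):
        err = np.mean(np.abs(top_img[-rows:].astype(float)
                             - bottom_img[:rows].astype(float)))
        if err < best_err:
            best_rows, best_err = rows, err
    return best_rows

def stitch_pair(top_img, bottom_img, max_overlap=64):
    """Join two images vertically, dropping the duplicated overlap rows."""
    rows = find_overlap(top_img, bottom_img, max_overlap)
    return np.vstack([top_img, bottom_img[rows:]])
```

A set of up to 5 captures could then be handled by folding `stitch_pair` over the ordered list of images.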
This FDA 510(k) summary for the OSKO Smart Stitching Software System provides limited details about specific acceptance criteria and the study that rigorously proves the device meets those criteria with statistical significance. However, based on the provided text, here's an attempt to extract and infer the requested information:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria or a detailed "reported device performance" against those criteria. It refers to "predetermined acceptance criteria" and successful completion of "in house testing criteria." The primary function described is "auto stitching," so the implicit criterion would be the accurate and functional joining of images.
Acceptance Criteria (Inferred from documentation) | Reported Device Performance |
---|---|
Software functions as intended for stitching multiple medical images | "the software functions as intended" |
All input functions operate correctly | "passed all in house testing criteria" |
All output functions operate correctly | "passed all in house testing criteria" |
All actions performed by the software are correct | "passed all in house testing criteria" |
Imaging stitching process is documented and correct | "imaging stitching process are documented in the software validation report" |
Risk analysis verified | "verified and validated the risk analysis" |
Individual performance results were within predetermined acceptance criteria | "individual performance results were within the predetermined acceptance criteria" |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: Not explicitly stated. The document mentions "software validation testing," but no specific number of images or cases used in this testing is provided.
- Data Provenance: Not explicitly stated. Given it's "in house testing criteria" and no clinical testing was performed, the data could be internal test data, possibly simulated or representative images. There is no mention of country of origin, retrospective, or prospective data collection.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Experts
- Number of Experts: Not mentioned.
- Qualifications of Experts: Not mentioned.
- Ground Truth Establishment: The document refers to "in house testing criteria" and "predetermined acceptance criteria," suggesting that acceptance was determined by the manufacturer's internal quality and testing procedures, rather than an independent expert panel establishing a separate ground truth for a test set.
4. Adjudication Method for the Test Set
Not applicable/Not mentioned. There is no mention of a formal adjudication process involving multiple reviewers for establishing ground truth, as no clinical testing or reader study is described. The acceptance seems to be based on internal software validation against pre-defined engineering and functional specifications.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
- MRMC Study: No, an MRMC comparative effectiveness study was explicitly not done. The document states: "No clinical testing is performed."
- Effect Size: Not applicable, as no MRMC study was conducted. Thus, there is no effect size reported for human readers improving with AI assistance.
6. If a Standalone Study (algorithm only without human-in-the-loop performance) was Done
Yes, implicitly. The "software validation testing" and "auto stitching function and SW compatibility test" described are standalone tests of the algorithm's functionality and performance against its intended design. These tests were conducted without human-in-the-loop performance evaluation in a clinical setting.
7. Type of Ground Truth Used
The ground truth for the device's functionality appears to be based on:
- Engineering Specifications/Design Intent: The software's ability to "allow post-capture positioning and joining of individual images to produce an additional composite image for two dimensional image review."
- Internal Validation Standards: "all in house testing criteria" and "predetermined acceptance criteria."
- Functional Correctness: Verification that the "software functions as intended" and performs its defined input/output operations.
It is not based on expert consensus, pathology, or outcomes data, as no clinical testing was performed.
8. Sample Size for the Training Set
Not mentioned. The document focuses on validation/testing rather than on the development or training process of any machine learning components. Although the device is referred to as "Smart Stitching Software System, Image Processing," implying some algorithmic intelligence, details of any training are absent.
9. How the Ground Truth for the Training Set Was Established
Not mentioned, as details about a training set are not provided.