510(k) Data Aggregation (525 days)
Predicate Device: Konan Noncon Robo Pachy, K980357
The Noncon Robo Pachy F&A is a non-contact ophthalmic microscope, optical pachymeter, and camera intended for examination of corneal endothelium and for measurement of the thickness of the cornea.
The Noncon Robo Pachy F&A specular microscope and optical pachymeter is a non-contact ophthalmic microscope, optical pachymeter, and camera intended for examination of the corneal endothelium and for measurement of the thickness of the cornea. It is an improvement to the original Konan Noncon Robo Pachy, K980357. The device permits visual inspection and photography of the corneal endothelium and measurement of the corneal thickness without any object contacting the eye. It features focusing by means of infrared techniques, and computer-assisted cell counting and cell analysis capabilities. The computer functions are also used to aid in setting up the various features of the machine and to aid in photography. Photographic images are temporarily stored in the system's memory, and are preserved in video form on magnetic tape or by using a video printer. The memory can store two endothelial cell images and two anterior segment images, which are usually those of the left and right eyes.
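For context, the three analysis outputs discussed throughout this summary (cell density, coefficient of variation of cell area, and percent hexagonality) are conventionally derived from a set of traced endothelial cells. The sketch below illustrates those conventional definitions only; it is not the device's actual algorithm, and the function name and inputs are hypothetical.

```python
from statistics import mean, stdev

def endothelial_metrics(cell_areas_um2, cell_side_counts):
    """Illustrative calculation of standard endothelial analysis outputs.

    cell_areas_um2   -- area of each traced cell, in square micrometers
    cell_side_counts -- number of sides of each traced cell

    Returns (cell density in cells/mm^2, coefficient of variation of cell
    area, percent hexagonality). Conventional textbook formulation, not the
    device's algorithm.
    """
    avg_area = mean(cell_areas_um2)                  # mean cell area, um^2
    cell_density = 1_000_000 / avg_area              # 1 mm^2 = 1,000,000 um^2
    cv = stdev(cell_areas_um2) / avg_area            # polymegethism index
    pct_hex = 100 * sum(1 for s in cell_side_counts if s == 6) / len(cell_side_counts)
    return cell_density, cv, pct_hex
```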
Konan NonCon Robo Pachy F&A: Acceptance Criteria and Study Details
1. Acceptance Criteria and Reported Device Performance
The acceptance criteria for the Konan NonCon Robo Pachy F&A are based on the agreement and variability of its analysis methods for corneal endothelial cell density, coefficient of variation, and percent hexagonality, when compared with the Manual and Center methods. Reported performance is presented as the mean difference and 95% limits of agreement between methods (agreement) and as the within-image standard deviation, expressed as a percent of the mean value (variability).
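For reference, limits of agreement of this form are conventionally computed as the mean of the paired differences plus or minus 1.96 standard deviations of those differences. The sketch below assumes paired per-image results from two analysis methods; the function name and inputs are illustrative, and the submission does not document the exact calculation.

```python
from statistics import mean, stdev

def limits_of_agreement(method_a, method_b):
    """Bland-Altman style agreement between two paired sets of results.

    method_a, method_b -- per-image results from two analysis methods,
    same length and order (e.g. percent values for one parameter).

    Returns (mean difference, lower 95% limit, upper 95% limit).
    """
    diffs = [a - b for a, b in zip(method_a, method_b)]
    d_mean = mean(diffs)
    d_sd = stdev(diffs)               # SD of the paired differences
    return d_mean, d_mean - 1.96 * d_sd, d_mean + 1.96 * d_sd
```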
Acceptance Criteria & Reported Performance Table:
| Parameter | Analysis Method Comparison | Acceptance Criteria (Implied) | Reported Device Performance (Mean Difference & 95% Limits of Agreement) | Reported Device Performance (Within-Image Variability: Standard Deviation as % of Mean Value) |
|---|---|---|---|---|
| Cell Density | Manual vs PC-Assist (with redrawing) | Close agreement, narrow limits of agreement | Mean Diff: 0.06%, LoA: -0.78% to 0.9% | 1.14% (PC-Assist with redrawing) |
| | Manual vs PC-Assist (without redrawing) | Close agreement, narrow limits of agreement | Mean Diff: 0.00%, LoA: -2.3% to 2.3% | 0.78% (PC-Assist without redrawing) |
| | Center vs PC-Assist (with redrawing) | Close agreement, narrow limits of agreement | Mean Diff: -0.11%, LoA: -1.47% to 1.25% | 1.23% (Center Method) |
| | Center vs PC-Assist (without redrawing) | Close agreement, narrow limits of agreement | Mean Diff: -0.17%, LoA: 2.37% to 2.71% | |
| | Manual vs Center Method | Close agreement, narrow limits of agreement | Mean Diff: 0.16%, LoA: -1.1% to 1.42% | |
| Coefficient of Variation | Manual vs PC-Assist (with redrawing) | Close agreement, narrow limits of agreement | Mean Diff: 0.43%, LoA: -2.35% to 3.21% | 4.23% (PC-Assist with redrawing) |
| | Manual vs PC-Assist (without redrawing) | Close agreement, narrow limits of agreement | Mean Diff: 0.24%, LoA: -5.62% to 6.10% | 8.02% (PC-Assist without redrawing) |
| | Center vs PC-Assist (with redrawing) | Close agreement, narrow limits of agreement | Mean Diff: 2.38%, LoA: -4.18% to 8.94% | 6.60% (Center Method) |
| | Center vs PC-Assist (without redrawing) | Close agreement, narrow limits of agreement | Mean Diff: 2.18%, LoA: -4.18% to 8.94% | |
| | Manual vs Center Method | Close agreement, narrow limits of agreement | Mean Diff: -1.96%, LoA: -8.58% to 4.66% | |
| Percent Hexagonality | Manual vs PC-Assist (with redrawing) | Close agreement, narrow limits of agreement | Mean Diff: -0.08%, LoA: -3.32% to 3.16% | 3.87% (PC-Assist with redrawing) |
| | Manual vs PC-Assist (without redrawing) | Close agreement, narrow limits of agreement | Mean Diff: 1.47%, LoA: -3.63% to 6.57% | 12.67% (PC-Assist without redrawing) |
| | Center vs PC-Assist (with redrawing) | Close agreement, narrow limits of agreement | Mean Diff: 2.13%, LoA: -3.03% to 7.29% | 6.42% (Center Method) |
| | Center vs PC-Assist (without redrawing) | Close agreement, narrow limits of agreement | Mean Diff: 3.67%, LoA: -3.13% to 10.47% | |
| | Manual vs Center Method | Close agreement, narrow limits of agreement | Mean Diff: -2.23%, LoA: -7.03% to 2.57% | |
Note on Acceptance Criteria: The document does not explicitly state numerical acceptance criteria thresholds. Instead, it presents the "Agreement Between Methods of Analysis" and "Variability Associated with the Analysis Methods" as performance data to demonstrate substantial equivalence to the predicate device. The implied acceptance is that the device's performance, as measured by these metrics, is comparable to, or an improvement on, the predicate device and clinically acceptable.
Additionally, the study explicitly states: "Agreement and variability of the analysis methods was obtained using a sample that included virtually no eyes with Percent Hexagonality <45, Coefficient of Variation >0.41, or Cell Density <2100. Agreement and variability of the analysis methods is not known for eyes with parameters beyond these values." This indicates a limitation of the device's validated performance to a specific range of corneal health parameters.
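As a practical reading of that limitation, the quoted thresholds amount to a simple range check on an eye's parameters. The check below is an interpretation of the quoted statement, not logic documented for the device.

```python
def within_characterized_range(cell_density, coefficient_of_variation, pct_hexagonality):
    """True if the parameters fall inside the range over which agreement and
    variability were characterized, per the quoted statement. Outside this
    range the methods' agreement is simply unknown, not necessarily invalid."""
    return (cell_density >= 2100
            and coefficient_of_variation <= 0.41
            and pct_hexagonality >= 45)
```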
2. Sample Size and Data Provenance for the Test Set
- Sample Size: 40 images of eyes.
- Data Provenance: Not explicitly stated, but the submission is from Konan Inc., Japan, suggesting the data could be from Japan, though this is not confirmed. The study is described as a "clinical test," implying prospective data collection, but no further details are provided.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: Four "classifiers" were used.
- Qualifications of Experts: Not specified. The document only refers to them as "classifiers."
4. Adjudication Method for the Test Set
The adjudication method is not explicitly stated as typical "2+1" or "3+1." However, the description indicates:
- Each of the four classifiers analyzed each of the 40 images three times.
- "For a given analysis method, the standard deviation of the within-image results was calculated as a measure of variability." This suggests that the multiple readings by each classifier were used to assess within-observer variability, and potentially implicitly contributed to agreement calculations.
- The "Mean Difference" and "Limits of Agreement" are calculated by comparing the results of different analysis methods (Manual, PC-Assist, Center Method).
Therefore, it appears a form of repeated reading by multiple classifiers was used as part of the evaluation, rather than a specific consensus-based adjudication method for a "ground truth" label.
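The within-image variability figure described above can be illustrated with a short sketch. It assumes the repeated readings for each image are pooled per image, that the per-image standard deviation is expressed as a percent of the per-image mean, and that these are averaged across images; the data layout and function name are hypothetical, and the submission does not spell out the exact aggregation.

```python
from statistics import mean, stdev

def within_image_variability(readings_by_image):
    """Within-image standard deviation as a percent of the mean value.

    readings_by_image -- list of lists; each inner list holds the repeated
    readings of one parameter for one image (e.g. three readings per
    classifier, pooled per image).

    Returns the average across images of 100 * SD / mean, one plausible way
    to reach a single 'percent of mean value' figure.
    """
    per_image = [100 * stdev(r) / mean(r) for r in readings_by_image]
    return mean(per_image)
```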
5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study
The study performed does not appear to be a traditional MRMC comparative effectiveness study comparing human readers with AI assistance versus without AI assistance. Instead, it compares different analysis methods (Manual, PC-Assist, Center Method) for a standalone device.
- PC-Assist represents the device's algorithm, with and without redrawing functionality.
- The Manual method represents a human-driven analysis.
- The Center Method is another analysis method, presumably also algorithm-driven as it's compared in a similar manner.
The study aims to show agreement between these methods, suggesting the new PC-Assist algorithm is comparable to existing or manual methods. It does not measure the improvement of human readers with AI assistance, but rather evaluates the performance of the device's algorithms alongside manual analysis.
6. Standalone Performance Study
Yes, a standalone study was done for the device's algorithm.
The "PC-Assist (with redrawing)" and "PC-Assist (without redrawing)" methods represent the algorithm's performance. The "Agreement Between Methods of Analysis" tables directly report the performance of these algorithm-driven methods against "Manual" and "Center" methods, showing its ability to analyze images independently. The "Variability Associated with the Analysis Method" table also presents standalone variability for the PC-Assist methods.
7. Type of Ground Truth Used for the Test Set
The ground truth used is effectively "agreement with other established analysis methods."
The study assesses agreement between the device's PC-Assist methods, the "Manual" analysis, and a "Center Method," which are implied to be reference standards or established practices. It is not based on external pathology or long-term clinical outcomes, but rather on the consistency of measurements obtained through different analytical approaches on the same images.
8. Sample Size for the Training Set
The document does not specify the sample size for the training set. It only mentions that the device is an improvement to an original model and includes an "improved cell counting algorithm." The new software and algorithm were validated, but details about the training data or its size are not provided.
9. How Ground Truth for the Training Set Was Established
The document does not provide information on how the ground truth for the training set was established. It only states that there is an "improved cell counting algorithm" and that "A new software validation test has been done, to validate the new software." This implies that the algorithm was developed and refined, but the process of creating its training data and establishing ground truth for that data is not described in this 510(k) summary.