Search Results
Found 4 results
(95 days)
PeekMed web
PeekMed web is a system designed to help healthcare professionals carry out pre-operative planning for several surgical procedures, based on their imported patients' imaging studies. Experience in usage and a clinical assessment are necessary for the proper use of the system in the revision and approval of the output of the planning. The multi-platform system works with a database of digital representations related to surgical materials supplied by their manufacturers.
This medical device is a decision support tool for qualified healthcare professionals to quickly and efficiently perform pre-operative planning for several surgical procedures using medical imaging, with the additional capability of planning in a 2D or 3D environment. The system is designed for the medical specialties within surgery; no specific use environment is mandatory, though the typical use environment is a room with a computer. The patient target group is adult patients with a previously diagnosed injury or disability. There are no other considerations for the intended patient population.
As PeekMed web can represent medical images in a 2D or 3D environment, perform relevant measurements on those images, and add templates, it can provide a total overview of the surgery. Being software, it does not interact with any part of the body of the user and/or patient.
Here's a breakdown of the acceptance criteria and the study proving the device's performance, based on the provided FDA 510(k) clearance letter:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state the reported device performance against each acceptance criterion. It states only that "comparison of the efficacy results using the testing and external validation datasets against the predefined ground truth met the acceptance criteria for ML model performance," demonstrating substantial equivalence; the "Reported Device Performance" column reflects this qualitative statement.
| ML Model | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Segmentation | DICE no less than 90%; HD-95 no more than 8; STD of DICE within +/- 10%; precision more than 85%; recall more than 90% | Met acceptance criteria |
| Landmarking | MRE no more than 7 mm; STD of MRE within +/- 5 mm | Met acceptance criteria |
| Classification | Accuracy no less than 90%; precision no less than 85%; recall no less than 90%; F1 score no less than 90% | Met acceptance criteria |
| Detection | mAP no less than 90%; precision no less than 85%; recall no less than 90% | Met acceptance criteria |
Note: The document only confirms that the performance met the criteria, not the exact values achieved.
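The submission does not describe how these metrics were computed. Purely as a reference for the segmentation row, here is a minimal sketch of DICE and HD-95 using common definitions and made-up masks; note that HD-95 conventions differ slightly across implementations (this sketch takes the maximum of the two directed 95th-percentile surface distances, as the MedPy library does):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hd95(pred: np.ndarray, gt: np.ndarray) -> float:
    """95th-percentile Hausdorff distance between mask surfaces, in pixels.
    Takes the max of the two directed 95th percentiles (MedPy convention)."""
    pred_border = pred & ~binary_erosion(pred)
    gt_border = gt & ~binary_erosion(gt)
    d_pred_to_gt = distance_transform_edt(~gt_border)[pred_border]
    d_gt_to_pred = distance_transform_edt(~pred_border)[gt_border]
    return max(np.percentile(d_pred_to_gt, 95), np.percentile(d_gt_to_pred, 95))

# Made-up masks: a 30x30 square and the same square shifted down by 2 pixels
gt = np.zeros((64, 64), dtype=bool)
gt[10:40, 10:40] = True
pred = np.roll(gt, 2, axis=0)

print(f"DICE = {dice(pred, gt):.3f}  (criterion: no less than 0.90)")
print(f"HD95 = {hd95(pred, gt):.1f}  (criterion: no more than 8)")
```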
2. Sample Sizes and Data Provenance
- Test Set Sample Sizes (External Validation Data):
- Segmentation ML model: 402 unique datasets
- Landmarking ML model: 367 unique datasets
- Classification ML model: 347 unique datasets
- Detection ML model: 198 unique datasets
- Data Provenance: The document states that ML models were developed with datasets from "multiple sites." It doesn't specify the country of origin but mentions that the development, training, and testing data, as well as the external validation data, were designed to cover the intended use population while ensuring variety and diverse patient characteristics. It implies the data is retrospective as it refers to "datasets" collected for model development and validation.
3. Number of Experts and Qualifications for Ground Truth
The document does not specify the number of experts or their qualifications for establishing the ground truth for the test set. It only mentions that the "External validation...was collected independently of the development data to prevent bias, ensuring the reliability of the results. For the external validation, a fully independent dataset, labeled by a separate team, was employed..." The qualifications of this "separate team" are not detailed.
4. Adjudication Method for the Test Set
The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1). It states that the external validation dataset was "labeled by a separate team." This suggests a single labeling event by that team, rather than an explicit multi-reader adjudication process.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, an MRMC comparative effectiveness study comparing human readers with AI assistance versus without AI assistance was not reported. The study focuses purely on the standalone performance of the ML models against established ground truth.
6. Standalone (Algorithm Only) Performance
Yes, a standalone performance evaluation of the algorithm (ML models) was done. The performance metrics (DICE, HD-95, MRE, Accuracy, Precision, Recall, F1 score, mAP) and acceptance criteria are applied directly to the output of the ML models.
7. Type of Ground Truth Used
The type of ground truth used is referred to as "predefined ground truth" which was established through a "truthing process" and labeled by a "separate team." While it doesn't explicitly state "expert consensus" or "pathology," for image segmentation and landmarking in medical imaging, ground truth is typically established by trained human experts (e.g., radiologists, orthopedic surgeons, or technicians with specific training) through manual annotation or expert review, which often involves some form of consensus. For classification and detection tasks, ground truth similarly relies on definitive labels provided by experts or established from patient records/outcomes, though the document does not elaborate on the specific method for each ML model type.
8. Sample Size for the Training Set
The training set comprised 80% of the total datasets available for ML model development, which included:
- Total X-rays: 2852
- Total CT scans: 1903
- Total MRIs: 151
Therefore, the approximate sample sizes for the training set are (a sketch of such a split follows this list):
- X-rays: 0.80 * 2852 = 2281.6 (approx. 2282)
- CT scans: 0.80 * 1903 = 1522.4 (approx. 1522)
- MRIs: 0.80 * 151 = 120.8 (approx. 121)
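Purely as an illustration of the arithmetic above, an 80/10/10 train/development/test partition over the 4906 total datasets could be implemented as below; the shuffling strategy and seed are assumptions, not details from the submission:

```python
import random

def split_80_10_10(ids, seed=0):
    """Shuffle dataset IDs and partition into train / development / test."""
    ids = list(ids)
    random.Random(seed).shuffle(ids)
    n_train = int(0.80 * len(ids))
    n_dev = int(0.10 * len(ids))
    return ids[:n_train], ids[n_train:n_train + n_dev], ids[n_train + n_dev:]

# 2852 X-rays + 1903 CT scans + 151 MRIs = 4906 datasets in total
train, dev, test = split_80_10_10(range(2852 + 1903 + 151))
print(len(train), len(dev), len(test))  # -> 3924 490 492
```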
9. How the Ground Truth for the Training Set Was Established
The document states, "ML models were developed with datasets from multiple sites... We trained the ML models with 80% of the datasets, developed with 10%, and tested with the remaining 10%." It also mentions that "the validation dataset...has never been used for the algorithm training or for tuning the algorithm, and leakage between development and validation data sets did not occur."
While the process for the training set's ground truth is not explicitly detailed in the same way as the external validation "labeled by a separate team," it is implicitly established through the "development" process. Typically, for ML models of this nature, ground truth for training data would also be established through manual annotation by qualified personnel (e.g., clinicians, trained annotators) following established protocols. The document's emphasis on data independence for external validation suggests that the development/training data was also accurately labeled for its purpose.
(70 days)
PeekMed web
PeekMed web is a system designed to help healthcare professionals carry out pre-operative planning for several surgical procedures, based on their imported patients' imaging studies. Experience in usage and a clinical assessment are necessary for the proper use of the system in the revision and approval of the output of the planning.
The multi-platform system works with a database of digital representations related to surgical materials supplied by their manufacturers.
This medical device is a decision support tool for qualified healthcare professionals to quickly and efficiently perform pre-operative planning for several surgical procedures using medical imaging, with the additional capability of planning in a 2D or 3D environment. The system is designed for the medical specialties within surgery; no specific use environment is mandatory, though the typical use environment is a room with a computer. The patient target group is adult patients with a previously diagnosed injury or disability. There are no other considerations for the intended patient population.
As PeekMed web can represent medical images in a 2D or 3D environment, perform relevant measurements on those images, and add templates, it can provide a total overview of the surgery. Being software, it does not interact with any part of the body of the user and/or patient.
Here's a breakdown of the acceptance criteria and study that proves the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document provides the acceptance criteria but does not directly state the reported device performance metrics from the external validation. It only states that the efficacy results "met the acceptance criteria."
| ML Model | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Segmentation | DICE no less than 90%; HD-95 no more than 8; STD of DICE within +/- 10%; precision more than 85%; recall more than 90% | Met acceptance criteria |
| Landmarking | MRE no more than 7 mm; STD of MRE within +/- 5 mm | Met acceptance criteria |
| Classification | Accuracy no less than 90%; precision no less than 85%; recall no less than 90%; F1 score no less than 90% | Met acceptance criteria |
| Detection | mAP no less than 90%; precision no less than 85%; recall no less than 90% | Met acceptance criteria |
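The classification criteria map directly onto standard metrics. As an illustrative sketch with dummy labels (not data from the submission), scikit-learn's metric functions can evaluate them against the thresholds in the table:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Dummy binary labels, purely for illustration
y_true = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
y_pred = [1, 0, 1, 1, 0, 1, 1, 1, 1, 0]

metrics = {
    "accuracy": accuracy_score(y_true, y_pred),
    "precision": precision_score(y_true, y_pred),
    "recall": recall_score(y_true, y_pred),
    "f1": f1_score(y_true, y_pred),
}
# Acceptance thresholds from the table above
thresholds = {"accuracy": 0.90, "precision": 0.85, "recall": 0.90, "f1": 0.90}

for name, value in metrics.items():
    print(f"{name}: {value:.2f}  (meets criterion: {value >= thresholds[name]})")
```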
2. Sample Sizes Used for the Test Set and Data Provenance
- Test Set (External Validation Dataset) Sample Sizes:
- Segmentation ML model: 375
- Landmarking ML model: 345
- Classification ML model: 347
- Detection ML model: 198
- Data Provenance: The document states "multiple sites." It does not specify the country of origin. The external validation dataset was collected "independently of the development data to prevent bias" and was a "fully independent dataset." It is not explicitly stated whether the data was retrospective or prospective, but the phrasing "collected independently" for external validation often implies existing, retrospectively collected data used for this specific purpose.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document states that the external validation dataset was "labeled by a separate team" to establish ground truth. It does not provide the number of experts or their specific qualifications (e.g., "radiologist with 10 years of experience").
4. Adjudication Method for the Test Set
The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1). It only states that the ground truth was "labeled by a separate team."
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No MRMC comparative effectiveness study involving human readers with and without AI assistance is described in the provided text. The study focuses solely on the standalone performance of the ML models against a predefined ground truth. The device is a "decision support tool" requiring "clinical assessment" and "revision and approval of the output of the planning" by healthcare professionals, implying a human-in-the-loop workflow, but no human performance study is detailed.
6. Standalone (Algorithm Only) Performance
Yes, a standalone performance evaluation of the ML models was done. The acceptance criteria and the evaluation against a "predefined ground truth" for Segmentation, Landmarking, Classification, and Detection ML models indicate standalone algorithm performance. The document states that the "efficacy results... met the acceptance criteria for ML model performance."
7. The Type of Ground Truth Used
The ground truth was "predefined ground truth" established by a "separate team" for the external validation dataset. While not explicitly stated as "expert consensus," this typically implies human expert review and labeling given the context of medical imaging and planning. It is not stated as pathology or outcomes data.
8. The Sample Size for the Training Set
The ML models were developed with datasets totaling 2852 CR datasets and 1903 CT scans.
- Training Set: 80% of these datasets were used for training.
- 0.80 * (2852 + 1903) = 0.80 * 4755 = 3804 datasets
9. How the Ground Truth for the Training Set Was Established
The document states that the ML models were "developed with datasets from multiple sites." While it mentions that "External validation datasets were collected independently of the development data... labeled by a separate team," it does not explicitly describe the methodology for establishing ground truth for the training dataset. However, it's generally inferred in such contexts that training data also requires labeled ground truth, likely established by human annotators or experts, but the specifics are not provided in this document.
(246 days)
PeekMed web
PeekMed web is a system designed to help healthcare professionals carry out pre-operative planning for several surgical procedures, based on their imported patients' imaging studies. Experience in usage and a clinical assessment are necessary for the proper use of the system in the revision and approval of the output of the planning.
The multi-platform system works with a database of digital representations related to surgical materials supplied by their manufacturers.
This medical device is a decision support tool for qualified healthcare professionals to quickly and efficiently perform pre-operative planning for several surgical procedures using medical imaging, with the additional capability of planning in a 2D or 3D environment. The system is designed for the medical specialties within surgery; no specific use environment is mandatory, though the typical use environment is a room with a computer. The patient target group is adult patients with a previously diagnosed injury or disability. There are no other considerations for the intended patient population.
As PeekMed web can represent medical images in a 2D or 3D environment, perform relevant measurements on those images, and add templates, it can provide a total overview of the surgery. Being software, it does not interact with any part of the body of the user and/or patient.
The provided document describes the PeekMed web device and its performance testing to demonstrate substantial equivalence to a predicate device. Here's a breakdown of the acceptance criteria and the study details:
1. Table of Acceptance Criteria and Reported Device Performance
The document explicitly states the acceptance criteria for each ML model but does not provide the reported device performance values against these criteria. It only states that the device met the acceptance criteria.
| ML Model | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Segmentation | DICE no less than 90%; HD-95 no more than 8; STD of DICE within +/- 10%; precision more than 85%; recall more than 90% | Met acceptance criteria |
| Landmarking | MRE no more than 7 mm; STD of MRE within +/- 5 mm | Met acceptance criteria |
| Classification | Accuracy no less than 90%; precision no less than 85%; recall no less than 90%; F1 score no less than 90% | Met acceptance criteria |
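For reference, the landmarking criterion, MRE (mean radial error), is the mean Euclidean distance between predicted and ground-truth landmark positions. A minimal sketch with made-up coordinates in mm (not data from the submission):

```python
import numpy as np

def radial_errors(pred_pts, gt_pts):
    """Per-landmark Euclidean distances between predictions and ground truth."""
    return np.linalg.norm(np.asarray(pred_pts) - np.asarray(gt_pts), axis=1)

# Made-up landmark coordinates in mm, purely for illustration
gt_pts = [(10.0, 20.0), (35.5, 12.0), (50.0, 48.0)]
pred_pts = [(11.0, 21.5), (34.0, 13.0), (52.0, 47.0)]

errs = radial_errors(pred_pts, gt_pts)
print(f"MRE = {errs.mean():.2f} mm  (criterion: no more than 7 mm)")
print(f"STD = {errs.std():.2f} mm  (criterion: within +/- 5 mm)")
```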
2. Sample Sizes and Data Provenance
- Test Set (External Validation Dataset):
  - Segmentation ML model: 367 unique datasets
  - Landmarking ML model: 367 unique datasets
  - Classification ML model: 367 unique datasets
- Training and Development Datasets:
  - Total CR datasets: 2852
  - Total CT scans: 1903
  - Training: 80% of the combined CR/CT datasets
  - Development: 10% of the combined CR/CT datasets
  - Testing (internal): the remaining 10%
- Data Provenance: The document states that ML models were developed with datasets from multiple sites. It does not specify the country of origin of the data. The data used for external validation was collected independently from the development data to prevent bias, implying a prospective collection or at least a carefully curated retrospective collection to ensure independence. It states the external validation dataset was "not a sampling of the development dataset" and that "leakage between development and validation data sets did not occur" (a minimal sketch of such a leakage check follows).
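The leakage claim quoted above is straightforward to enforce in practice. The submission does not describe how independence was actually verified; as a hypothetical illustration, a patient-level disjointness check could look like this:

```python
# Hypothetical patient identifiers; illustrative only, not from the submission
development_patients = {"P001", "P002", "P003"}
validation_patients = {"P101", "P102", "P103"}

overlap = development_patients & validation_patients
assert not overlap, f"patient-level leakage detected: {overlap}"
print("no patient overlap between development and validation sets")
```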
3. Number of Experts and Qualifications
The document states that the external validation dataset was "labeled by a separate team" to establish ground truth. However, it does not specify the number of experts used or their qualifications (e.g., radiologist with 10 years of experience).
4. Adjudication Method
The document mentions that the external validation dataset was "labeled by a separate team," implying that this team established the ground truth. However, it does not describe the specific adjudication method used (e.g., 2+1, 3+1, none).
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done. The study focuses on the standalone performance of the ML models and their equivalency to a predicate device, not on the improvement of human readers with AI assistance.
6. Standalone (Algorithm Only) Performance
Yes, the performance evaluation described focuses on the standalone performance of the ML models (segmentation, landmarking, classification) as incorporated into the PeekMed web system. The results against the predefined acceptance criteria evaluate the algorithm's performance without specific human-in-the-loop metrics, although the overall device is intended to assist human healthcare professionals.
7. Type of Ground Truth Used
The ground truth for the test set (external validation datasets) is described only as a "predefined ground truth." The phrases "predefined ground truth" and "labeled by a separate team" suggest expert consensus or a similar meticulous labeling process, but the document does not specify whether it was pathology, outcomes data, or directly observed clinical outcomes. Given the nature of medical imaging software, expert consensus is highly probable.
8. Sample Size for the Training Set
The training set comprised 80% of a total of 2852 CR datasets and 1903 CT scans.
Total datasets = 2852 (CR) + 1903 (CT) = 4755 datasets.
Training set size = 80% of 4755 = 3804 datasets.
9. How the Ground Truth for the Training Set was Established
The document states that the ML models were developed with datasets from multiple sites. It implies that these datasets, used for training, development, and internal testing, also had established ground truth necessary for supervised learning. However, the document does not explicitly describe the process for establishing the ground truth specifically for the training set. It only states that the external validation data was "labeled by a separate team" for independence, suggesting a similar process might have been used for the training data but this is not confirmed.
(108 days)
PeekMed web (v1)
PeekMed® web is a system designed to help healthcare professionals carry out pre-operative planning for several surgical procedures, based on their imported patients' imaging studies. Experience in usage and a clinical assessment are necessary for the proper use of the system in the revision and approval of the output of the planning.
The multi-platform system works with a database of digital representations related to surgical materials supplied by their manufacturers.
As PeekMed® web can represent medical images in a 2D or 3D environment, perform relevant measurements on those images, and add templates, it can provide a total overview of the surgery. Being software, it does not interact with any part of the body of the user and/or patient.
The provided text describes PeekMed web (v1), a medical image management and processing system for pre-operative planning, and its substantial equivalence to a predicate device, PeekMed. The document focuses on regulatory compliance and the differences between the new device and its predicate.
However, the furnished text does not contain the specific details required to answer all parts of your request, particularly regarding the acceptance criteria, the study design for proving the device meets it, and detailed performance metrics of the ML models. The document states that "ML models incorporated into PeekMed web were also trained, tested and validated for their performance," and that the "measuring function of the software was verified and validated... in order to assure the safety and correct performance of the device compared to the predicate." It also mentions fulfilling "previously defined accuracy and precision specifications." This implies that such studies were performed, but the results themselves and the specific criteria are not provided in this document.
Therefore, the following response is based only on the information explicitly available in the provided text, and I will highlight where the requested information is not present.
Device: PeekMed web (v1)
Acceptance Criteria and Reported Device Performance
The document broadly states that the device was validated to ensure it "fulfills the previously defined accuracy and precision specifications" for its measuring function and that the "ML models... were also trained, tested and validated for their performance." However, specific numerical acceptance criteria or reported performance metrics (e.g., sensitivity, specificity, or error margins for measurements) are NOT provided in the text.
| Acceptance Criteria (e.g., performance thresholds for ML models, measurement accuracy/precision) | Reported Device Performance |
|---|---|
| Not explicitly stated in the provided text. The document indicates "previously defined accuracy and precision specifications" for the measuring function and general validation of ML model performance. | Not explicitly stated in the provided text. The document states that "it was confirmed that it fulfills the previously defined accuracy and precision specifications" for the measuring function and that the ML models were "validated for their performance." |
Study Information
- Sample Size Used for the Test Set and Data Provenance:
  - Sample Size: Not specified in the provided text. The document mentions that "All anatomical areas were tested, as well as other main areas of the software, such as the planning final report, and saved planning, ML models, among others," but the number of cases/images used for testing is not detailed.
  - Data Provenance: Not specified (e.g., country of origin, retrospective/prospective).
- Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications:
  - Number of Experts: Not specified. The text mentions that "It is mandatory that the qualified user validates each individually positioned landmark" and that "qualified users (trained surgeons) can perform activities related to the approval of clinical and critical information." This suggests expert involvement in ground-truth establishment for validation/review, but the specific number of experts and their qualifications for a formal test-set ground truth are not detailed.
  - Qualifications of Experts: The text refers to a "qualified medical specialist (user)" and "trained surgeons." Specific experience levels (e.g., "10 years of experience") are not provided.
- Adjudication Method for the Test Set:
  - Not specified. The document states that "An automatic plan is always reviewed and validated by the qualified medical specialist" and that "It is mandatory that the qualified user validates each individually positioned landmark." This describes the device workflow requiring user validation, not a formal adjudication method (e.g., 2+1, 3+1) for establishing ground truth during the validation studies.
- Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
  - No, an MRMC comparative effectiveness study is not mentioned or detailed in the provided text. The document describes the automatic planning and landmarking features as "designed to improve and accelerate the user planning experience," but no MRMC results showing an effect size for human reader improvement with AI assistance are provided.
- Standalone (Algorithm Only) Performance:
  - The document implies that standalone performance of the ML models was evaluated, as it states: "ML models incorporated into PeekMed web were also trained, tested and validated for their performance." However, the specific metrics, and what "standalone" means in this context, are not detailed. The device's use case heavily emphasizes human validation of the AI's output, suggesting standalone performance serves as a component of the overall system validation rather than a primary use case.
- The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.):
  - The text suggests that for the automatic planning and landmarking features, the ground truth or corrective mechanism involves validation by "qualified medical specialists" or "trained surgeons." For the measuring function, it was verified to "effectively and repeatedly match the real dimensions," implying comparison against a known or established standard/reference (see the sketch below). There is no mention of pathology or outcomes data as ground truth.
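The document does not describe the measurement verification procedure itself. Purely as a hypothetical illustration of checking a 2D measuring function against a known dimension, the sketch below compares a pixel-space measurement, converted via pixel spacing, to a known object size; the spacing, object size, and tolerance values are assumptions, not values from the submission:

```python
import numpy as np

# Assumed calibration values, not taken from the submission
PIXEL_SPACING_MM = (0.143, 0.143)  # row/column spacing, e.g. DICOM tag (0028,0030)
KNOWN_LENGTH_MM = 100.0            # object of known physical size
TOLERANCE_MM = 1.0

def measure_mm(p1, p2, spacing=PIXEL_SPACING_MM):
    """Euclidean distance between two pixel coordinates, in millimetres."""
    dy = (p1[0] - p2[0]) * spacing[0]
    dx = (p1[1] - p2[1]) * spacing[1]
    return float(np.hypot(dy, dx))

measured = measure_mm((50, 40), (50, 739))  # endpoints of the known object
assert abs(measured - KNOWN_LENGTH_MM) <= TOLERANCE_MM, measured
print(f"measured {measured:.2f} mm vs known {KNOWN_LENGTH_MM} mm")
```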
- The Sample Size for the Training Set:
  - Not specified in the provided text. The document mentions that "For the development of these 2 ML models, it was verified that no pediatric images were used," which pertains to exclusion criteria for the training data, not its size.
- How the Ground Truth for the Training Set Was Established:
  - Not explicitly detailed in the provided text. It is implied that the training data for the ML models carried some form of annotated ground truth, likely established by experts, given the nature of pre-operative planning. The specific methodology (e.g., single expert, consensus) is not described.