Quality Match GmbH
www.quality-match.com

Better datasets create better machine learning models, enabling better products.

Quality Match verifies and improves the quality of datasets for computer vision and machine learning. This can be applied to:
- dataset architecture (are my dataset and taxonomy good?)
- measuring the quality of annotation providers (is the quality as advertised?)
- evaluating model performance (when does the model fail?)
- monitoring model drift (is the model still working?)
and many more use cases.

We focus on creating actionable, quantitative metrics on dataset quality, such as:
- representativeness (bias analysis)
- accuracy (labels, detection rates and geometric properties of annotations)
- ambiguity (edge cases due to bad data or bad taxonomies)

We do this by breaking quality questions down into a large decision tree, where each node is either a training-free, unambiguous crowd-sourcing task that is repeated until statistical significance is reached, or a pretrained machine learning model with a known confusion matrix. We call this system our Annotation Quality Engine. To verify your data, we help you verify and integrate your metrics into our Annotation Quality Engine, which you can then query through a simple REST API.
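A minimal sketch of the "repeat a crowd task until statistical significance is reached" idea, assuming a simple yes/no task and using a Wilson confidence interval width as the stopping rule. The stopping criterion, function names, and parameters below are illustrative assumptions, not Quality Match's actual method.

import math
import random

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return 0.0, 1.0
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def repeat_until_significant(ask_annotator, max_repeats: int = 50, width: float = 0.2) -> float:
    """Repeat a yes/no crowd task until the confidence interval is narrow enough."""
    yes = 0
    n = 0
    for n in range(1, max_repeats + 1):
        yes += 1 if ask_annotator() else 0
        low, high = wilson_interval(yes, n)
        if high - low < width:
            break
    return yes / n

# Example: simulate annotators who answer "yes" 80% of the time.
estimate = repeat_until_significant(lambda: random.random() < 0.8)
print(f"estimated agreement: {estimate:.2f}")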
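The Annotation Quality Engine's actual REST endpoints are not documented in this profile, so the following is a hypothetical sketch of what querying dataset quality metrics over such an API could look like. The base URL, paths, authentication scheme, and response fields are all assumptions.

import requests

# Hypothetical sketch only: base URL, endpoint path, auth scheme and response
# fields are placeholders, not the documented Annotation Quality Engine API.
API_BASE = "https://api.example.com/aqe/v1"   # placeholder base URL
API_KEY = "YOUR_API_KEY"                      # placeholder credential

def get_quality_metrics(dataset_id: str) -> dict:
    """Fetch quality metrics (e.g. accuracy, ambiguity) for one dataset."""
    resp = requests.get(
        f"{API_BASE}/datasets/{dataset_id}/metrics",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    metrics = get_quality_metrics("my-dataset")
    print(metrics)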
Country
Germany
City (Headquarters)
Heidelberg
Industry
Employees
11-50
Founded
2019
Social
Employees statistics
Potential Decision Makers
Managing Director and Founder
Email ****** @****.com
Phone (***) ****-****

Co-Founder and Chief Technology Officer
Email ****** @****.com
Phone (***) ****-****

Developer
Email ****** @****.com
Phone (***) ****-****

Head of Product Development
Email ****** @****.com
Phone (***) ****-****
Technologies (24)