Found 93 repositories (showing 30)
openml
OpenML AutoML Benchmarking Framework
sxjscience
Repository for Multimodal AutoML Benchmark
NitroML is a modular, portable, and scalable model-quality benchmarking framework for Machine Learning and Automated Machine Learning (AutoML) pipelines.
Alex-Lekov
A Performance Benchmark of Different AutoML Frameworks
georgian-io-archive
Distributed, large-scale, benchmarking framework for rigorous assessment of automatic machine learning repositories, projects, and libraries.
Sbrussee
PathBench-MIL: A comprehensive, flexible Benchmarking / AutoML framework for multiple instance learning in Histopathology
david-thrower
The Cerebros package is an ultra-precise Neural Architecture Search (NAS) / AutoML framework intended to mimic biological neurons much more closely than conventional neural network architecture strategies.
With recent advances in machine learning, semantic segmentation algorithms are becoming increasingly general-purpose and translatable to unseen tasks. Many key algorithmic advances in the field of medical imaging are commonly validated on a small number of tasks, limiting our understanding of the generalizability of the proposed contributions. A model which works out-of-the-box on many tasks, in the spirit of AutoML (Automated Machine Learning), would have a tremendous impact on healthcare. The field of medical imaging also lacks a fully open-source and comprehensive benchmark for general-purpose algorithmic validation and testing covering a large span of challenges, such as small data, unbalanced labels, large-ranging object scales, multi-class labels, and multimodal imaging. To address these problems, in this project, as part of the MSD challenge, we propose a generic machine learning algorithm which we applied to two organs: liver (and its tumors) and spleen. We propose a generic model implementing the U-net CNN architecture with the Generalized Dice Coefficient as both loss function and metric. The MSD dataset consists of dozens of 3D medical examinations (per organ); we transform the 3-dimensional data into 2D slices as input to our U-net. Experimental results show that our generic model based on U-net and the Generalized Dice Coefficient leads to high segmentation accuracy for each organ (liver and its tumors, spleen) separately, without human interaction, with a relatively short run time compared to traditional segmentation methods.
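The description above trains a U-net with the Generalized Dice Coefficient as the loss function. A minimal NumPy sketch of that loss, assuming the standard formulation with inverse squared-volume class weights (this is an illustration, not this repository's actual code):

```python
import numpy as np

def generalized_dice_loss(y_true, y_pred, eps=1e-7):
    """Generalized Dice loss for multi-class segmentation.

    y_true, y_pred: arrays of shape (num_classes, num_pixels);
    y_true is one-hot ground truth, y_pred holds per-class
    probabilities. Each class is weighted by the inverse of its
    squared label volume, which counteracts class imbalance
    (e.g. small tumors against a large background).
    """
    w = 1.0 / (np.sum(y_true, axis=1) ** 2 + eps)            # per-class weights
    intersect = np.sum(w * np.sum(y_true * y_pred, axis=1))  # weighted overlap
    union = np.sum(w * np.sum(y_true + y_pred, axis=1))      # weighted totals
    return 1.0 - 2.0 * intersect / (union + eps)
```

A perfect prediction drives the loss toward 0, a completely wrong one toward 1, so minimizing it maximizes the weighted Dice overlap across classes.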
Ennosigaeon
A benchmark to evaluate popular CASH and AutoML frameworks
albumentations-team
Benchmarks for AutoAlbument - AutoML for Image Augmentation
zuliani99
Benchmark of several popular automated machine learning frameworks, such as AutoSklearn, MLJAR, H2O, TPOT, and AutoGluon, all visualized via a Dash web application.
MaximilianJohannesObpacher
An AutoML server for benchmarking different AutoML systems, including AutoSklearn, AutoKeras, and TPOT.
Sette
No description available
christian-oleary
Benchmarks of AutoML Frameworks
enaix
ML2B: multi-lingual ML benchmark for AutoML
LittleLittleCloud
A collection of AutoML benchmark tests using MLNet.CLI
h2oai
Wave Dashboard for the OpenML AutoML Benchmark
nikolaevs92
No description available
facebookresearch
This repository contains the code for generating the benchmark results in the following paper: Olson et al., "Ax: A Platform for Adaptive Experimentation", AutoML Conference, 2025. https://openreview.net/forum?id=U1f6wHtG1g
jonathankrauss
Benchmarking of AutoML systems
brainome
ML benchmarks comparing brainome, google, sage maker, and azure engines
hildafab
No description available
Ranakghosh7
No description available
Benchmarking AutoML frameworks.
psushi
Benchmarking popular open source Automatic Machine Learning Frameworks
biagiolicari
Code for the paper "Benchmarking AutoML solutions for Clustering"
No description available
pplonski
MLJAR AutoML benchmark on Kaggle datasets
ersilia-os
This repository contains the benchmark data for Ersilia's AutoML tools
nabenabe0928
The experiment repository for the paper `Fast Benchmarking of Asynchronous Multi-Fidelity Optimization on Zero-Cost Benchmarks` in AutoML Conference 2024.