Found 108 repositories (showing 30)
erikbern
Benchmarks of approximate nearest neighbor libraries in Python
harsha-simhadri
Framework for evaluating ANNS algorithms on billion-scale datasets.
microsoft
A novel embedding training algorithm leveraging ANN search that achieved SOTA retrieval on TREC DL 2019 and OpenQA benchmarks
matchyc
🏆 Winning NeurIPS (NIPS) Competition Track: Big ANN, Practical Vector Search Challenge 2023 (see big-ann-benchmarks https://big-ann-benchmarks.com/neurips23.html). The fastest cross-modal vector retrieval.
maumueller
Benchmarking approximate nearest neighbors. Note: This is an archived version from our SISAP 2017 paper, see below.
Event-AHU
[PokerEvent Benchmark Dataset & SNN-ANN Baseline, IEEE TCDS 2025] Official PyTorch implementation of "SSTFormer: Bridging Spiking Neural Network and Memory Support Transformer for Frame-Event based Recognition"
lmccccc
Benchmark for filtered ANN search applications
MahdisSep
Deep Learning (ANN) model to predict optimal Linux kernel readahead parameters. Utilizes LTTng tracing on kernel Writeback events during RocksDB I/O benchmarks.
hamidrahmanifard
In this study, I predicted tertiary oil recovery with the Gas-Assisted Gravity Drainage (GAGD) method in fractured porous media using shallow and deep neural network (NN) algorithms. I explored tertiary oil recovery prediction versus viscosity, density, surface tension, porosity, permeability, wettability index, connate water saturation, residual water saturation after flooding, production rate, production time, capillary number, dimensionless time, and bond number. Using 263 sets of experimental data from the literature [91,92], I first assessed the relationship between the various parameters and tertiary oil recovery and determined a subset of the most influential parameters. Running a DOE with ANOVA over these variables showed that tertiary oil recovery is a strong function of the wettability index, connate water saturation, residual water saturation after flooding, and production time. Next, I conducted a comparative study of ANN models with one to four hidden layers to find the best NN architecture for predicting tertiary oil recovery, using RMSE, MRE, and R2 as selection benchmarks. Because of the acceptable performance of the Levenberg-Marquardt (LM) algorithm in terms of error and execution time, I used it to train the neural network models. Finally, for the transfer functions, I deployed the tansig function in all layers except the output layer, where the purelin function was used.
maumueller
EDML 19 submission repo for ann-benchmarks.
intellistream
[SIGMOD'2026] CANDOR-Bench: Benchmarking In-Memory Continuous ANNS under Dynamic Open-World Streams
parhamdehghani
This work applies an ANN built with Keras to conduct a parameter scan for the universal case of the BLRSSM SUSY model within beyond-the-Standard-Model phenomenology. The model was trained on 1 million points in a multi-dimensional parameter space and then used to predict the relic density for each set of solutions, giving a much faster approach to the parameter scan than running SPheno and micrOMEGAs sequentially, which drains considerable time and computational resources. The trained model ultimately yielded a benchmark point that could not be found with a direct search using the packages mentioned above.
ivirshup
Benchmarks of AnnData using asv
A benchmark on various ML tasks for evaluating SNNs in comparison to classical ANNs.
maumueller
http://ann-benchmarks.com/sisap19/
Koncopd
No description available
TsekrekosEA
A high-performance Approximate Nearest Neighbor (ANN) search engine built from scratch in C++17. Implements and benchmarks LSH, Hypercube, IVF-Flat, and IVFPQ on the SIFT1M and MNIST datasets.
SecretDev804
Benchmarks of approximate nearest neighbor libraries in Python
alexklibisz
Approximate nearest neighbor benchmarks using Lucene
gtsoukas
Practical nearest neighbor search needs filtering, but filtering introduces counterintuitive performance penalties.
mayya-sharipova
Benchmarking approximate nearest neighbours search in Elasticsearch
maumueller
Source code to reproduce https://www.sciencedirect.com/science/article/pii/S0306437918303685
aerospike-community
Standalone tooling to benchmark Aerospike Vector Search using datasets popularized in ann-benchmark and big-ann
kshesha1
Comparative Analysis of Faiss and Milvus: Performance Evaluation and Benchmarking of Indexing Methods for Approximate Nearest-Neighbor Search (ANNS) in Vector Databases
niledatabase
Improved ANN benchmark with support for different scales and concurrency
kwang2049
Benchmarking Approximate Nearest Neighbor (ANN) algorithms for dense text retrieval.
WittenYeh
Easy-to-use datasets for ANN benchmark. A MAKE command is all you need.
Kwan-Yiu
A high-concurrency benchmark suite for Approximate Nearest Neighbor (ANN) indexes with mixed read/write workloads.
DanialAmini
Some benchmark functions for curve fitting are used to assess soft computing methods (ANN, ANFIS, RBF, etc.)
samipsinghal
NEARLY is a benchmarking framework for Approximate Nearest Neighbor (ANN) methods applied to dense passage retrieval on the MS MARCO dataset.