Found 209 repositories (showing 30)
mixedbread-ai
Baguetter is a flexible, efficient, and hackable search engine library implemented in Python. It's designed for quickly benchmarking, implementing, and testing new search methods. Baguetter supports sparse (traditional), dense (semantic), and hybrid retrieval methods.
JingwenWang95
Neural SLAM Evaluation Benchmark. [CVPR'23] Co-SLAM: Joint Coordinate and Sparse Parametric Encodings for Neural Real-Time SLAM
hpcgarage
Benchmark for measuring the performance of sparse and irregular memory access.
mjiUST
[2020 TPAMI] SurfaceNet+ is a volumetric learning framework for very sparse MVS; the sparse-MVS benchmark is maintained here. Authors: Mengqi Ji#, Jinzhi Zhang#, Qionghai Dai, Lu Fang.
mozanunal
Framework for designing, training and benchmarking sparse-view CT reconstruction algorithms; includes datasets, metrics, baselines and CLI so you can prototype new methods and compare fairly in minutes.
alecjacobson
No description available
tonyzyl
Benchmarking of diffusion models for global field reconstruction from sparse observations
Reinforcement learning (RL) is an effective method to find reasoning pathways in incomplete knowledge graphs (KGs). To overcome the challenges of sparse rewards and the explore-exploit dilemma, a self-supervised pretraining method is proposed to warm up the policy network before the RL training stage. The seeding paths used in the supervised pretraining stage are generated by searching the 3-hop neighborhoods of start entities in a subset of training facts. Our self-supervised RL (SSRL) method with partial labels combines the fast learning speed of RL and wide coverage of SL. We adopt two RL architectures, i.e., MINERVA and MultiHopKG as our baseline RL models and experimentally show that our SSRL model consistently outperforms both baselines on all Hits@k and mean reciprocal rank (MRR) metrics on four large benchmark KG datasets. We also show that our SSRL model (either SS-MINERVA or SS-MultiHopKG) meets or exceeds current state-of-the-art results for all of these KG reasoning tasks.
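The seeding-path generation described above, searching the 3-hop neighborhoods of start entities in a subset of training facts, can be sketched as a breadth-first enumeration over a toy knowledge graph. The triple format and function name here are illustrative, not the repository's actual API:

```python
from collections import deque

def seeding_paths(facts, start, max_hops=3):
    """Enumerate relation paths of up to `max_hops` hops from `start`.

    `facts` is a list of (head, relation, tail) triples; each returned
    path is a tuple of (relation, entity) steps that could serve as a
    supervised warm-up trajectory for the policy network.
    """
    adj = {}
    for h, r, t in facts:
        adj.setdefault(h, []).append((r, t))
    paths = []
    queue = deque([(start, ())])
    while queue:
        node, path = queue.popleft()
        if path:
            paths.append(path)
        if len(path) < max_hops:
            for r, t in adj.get(node, []):
                queue.append((t, path + ((r, t),)))
    return paths
```

Such paths provide partial labels: the policy is first trained to imitate them, then switched to RL fine-tuning.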
ceruleangu
Benchmark for matrix multiplications between dense and block-sparse (BSR) matrices in TVM, blocksparse (Gray et al.), and cuSPARSE.
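As an illustration of the operation such a benchmark measures, here is a minimal pure-NumPy sketch of a BSR-style tile representation and a tile-wise multiply; it is a naive baseline, unrelated to the TVM/blocksparse/cuSPARSE kernels being compared:

```python
import numpy as np

def to_bsr(dense, b):
    """Convert a square dense matrix to a minimal BSR-like form:
    a list of (block_row, block_col, b x b tile) for nonzero tiles."""
    n = dense.shape[0]
    blocks = []
    for i in range(0, n, b):
        for j in range(0, n, b):
            tile = dense[i:i + b, j:j + b]
            if np.any(tile):
                blocks.append((i // b, j // b, tile))
    return blocks

def bsr_matmul(blocks, B, n, b):
    """Multiply the block-sparse matrix by dense B, tile by tile,
    skipping the all-zero tiles entirely."""
    C = np.zeros((n, B.shape[1]))
    for bi, bj, tile in blocks:
        C[bi * b:(bi + 1) * b] += tile @ B[bj * b:(bj + 1) * b]
    return C
```

The payoff of the BSR layout is that each stored tile is a small dense matmul, which maps well onto vectorized and GPU kernels.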
jkminder
Implementation of "SALSA-CLRS: A Sparse and Scalable Benchmark for Algorithmic Reasoning". SALSA-CLRS is an extension of the original clrs package, prioritizing scalability and the use of sparse representations. It provides PyTorch-based PyG datasets and dataloaders.
We propose a novel real-time monocular hybrid visual odometry formulation which combines the high precision of indirect approaches with the fast performance of direct methods. The system initializes inverse-depth estimates, represented as Gaussian probability distributions, for features (lines, edges, and points) extracted in each keyframe, and continuously propagates and updates them with new measurements in the following frames. The key idea is to incorporate the depth-filter distributions into both the initial pose tracking via sparse image alignment and the pose refinement via map localization. We also propose a comprehensive initialization method for these depth filters and classify the map points into categories based on the uncertainty of their depth estimates, which greatly improves tracking performance. Experimental evaluation on benchmark datasets shows that the proposed approach is significantly faster than state-of-the-art algorithms while achieving comparable accuracy. We make our implementation publicly available as open source on GitHub as a reference for the SLAM community.
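The core update of a Gaussian depth filter like the one described above, fusing a Gaussian inverse-depth prior with a new Gaussian measurement, reduces to precision-weighted averaging. A minimal sketch (function and variable names are illustrative, not the system's code):

```python
def fuse(mu_prior, var_prior, mu_meas, var_meas):
    """Bayesian fusion of two Gaussian inverse-depth estimates.

    The product of two Gaussians is (up to normalization) a Gaussian
    whose precision is the sum of precisions and whose mean is the
    precision-weighted average, so each new measurement can only
    shrink the uncertainty of the filter.
    """
    var = 1.0 / (1.0 / var_prior + 1.0 / var_meas)
    mu = var * (mu_prior / var_prior + mu_meas / var_meas)
    return mu, var
```

The per-point variance produced this way is exactly the uncertainty the paper uses to classify map points into categories.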
chrockey
Benchmarking various sparse convolution libraries: MinkowskiEngine, SpConv, TorchSparse, and Open3D.
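The operation these libraries accelerate can be sketched naively with a coordinate hash table: for each active voxel, gather neighbors at the kernel offsets and accumulate weighted features. This is an illustrative baseline only, not the implementation of any of the libraries above:

```python
import numpy as np

def sparse_conv3d(coords, feats, kernel_offsets, weights):
    """Naive coordinate-hash sparse convolution.

    coords:         list of (x, y, z) active voxel coordinates
    feats:          (num_voxels, in_ch) feature array
    kernel_offsets: list of (dx, dy, dz) tap positions
    weights:        (num_taps, in_ch, out_ch) weight array

    Only active voxels are visited; inactive neighbors contribute
    nothing, which is the whole point of sparse convolution.
    """
    table = {tuple(c): i for i, c in enumerate(coords)}
    out = np.zeros((len(coords), weights.shape[-1]))
    for i, c in enumerate(coords):
        for off, W in zip(kernel_offsets, weights):
            j = table.get(tuple(int(x) for x in np.add(c, off)))
            if j is not None:
                out[i] += feats[j] @ W
    return out
```

The libraries under benchmark replace the Python-level hash lookups with precomputed kernel maps and fused GPU gather-GEMM-scatter kernels.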
abdelfattah-lab
Kratos: An FPGA Benchmark for Unrolled Deep Neural Networks with Fine-Grained Sparsity and Mixed Precision
karlrupp
Sparse Matrix-Matrix Multiplication Benchmark on Intel Xeon and Xeon Phi (KNC, KNL) from blog post:
shahdharam7
In the hyperspectral unmixing literature, endmember extraction is addressed mainly by three approaches: statistical, sparse-regression, and geometrical. Most endmember extraction algorithms are built on only one of these. Recently, GSEE (Geo-Stat Endmember Extraction) was proposed to combine geometrical and statistical features. In this paper, we propose a Modified GSEE (MGSEE) algorithm that additionally removes noisy bands: the Minimum Noise Fraction (MNF) transform is used to select high-SNR bands. The strength of the MGSEE framework is evaluated on a synthetic and a real benchmark dataset. We show that preceding GSEE with this noise-removal step greatly decreases the Spectral Angle Error (SAE) and Spectral Information Divergence (SID) error, indicating its importance for extracting pure materials in the unmixing problem.
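The band-selection step described above can be sketched with a crude per-band SNR proxy standing in for the full MNF transform; the function name and noise estimator are illustrative assumptions, not the paper's method:

```python
import numpy as np

def select_high_snr_bands(cube, k):
    """Rank spectral bands by a simple SNR proxy and keep the top k.

    `cube` has shape (pixels, bands). SNR per band is estimated as the
    mean signal divided by the standard deviation of first-order pixel
    differences (a crude noise estimate); a real pipeline would use
    the MNF eigenvalues instead.
    """
    signal = cube.mean(axis=0)
    noise = np.diff(cube, axis=0).std(axis=0) + 1e-12
    snr = signal / noise
    keep = np.argsort(snr)[::-1][:k]
    return np.sort(keep)
```

Endmember extraction then runs only on the retained bands, which is where the reported SAE/SID improvements come from.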
mblondel
Benchmark of different sparse-sparse and sparse-dense dot product implementations.
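Two of the classic implementations such a benchmark might compare, a two-pointer merge over sorted index arrays and a hash-based gather, can be sketched as follows (illustrative, not the repository's code):

```python
def dot_merge(idx_a, val_a, idx_b, val_b):
    """Sparse-sparse dot product via a two-pointer merge over sorted
    index arrays, O(nnz_a + nnz_b) with no auxiliary storage."""
    i = j = 0
    acc = 0.0
    while i < len(idx_a) and j < len(idx_b):
        if idx_a[i] == idx_b[j]:
            acc += val_a[i] * val_b[j]
            i += 1
            j += 1
        elif idx_a[i] < idx_b[j]:
            i += 1
        else:
            j += 1
    return acc

def dot_hash(idx_a, val_a, idx_b, val_b):
    """Scatter one operand into a dict, then gather from the other;
    also O(nnz_a + nnz_b), but with hashing overhead per lookup."""
    lookup = dict(zip(idx_a, val_a))
    return sum(v * lookup.get(k, 0.0) for k, v in zip(idx_b, val_b))
```

Which variant wins depends on the relative sparsity of the operands, which is exactly what a benchmark like this quantifies.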
adkipnis
A principled reduction of six benchmarks from the Open LLM Leaderboard to a single sparse benchmark.
mohamedkhayat
A Python implementation of Reinforcement Learning Trees (Zhu et al., 2015). RLT leverages "look-ahead" reinforcement learning to master high-dimensional, sparse data where standard Random Forests fail. Includes reproduction of synthetic scenarios, UCI benchmarks vs. XGBoost, explainability analysis.
benchopt
Benchmark for Convolutional Sparse Coding
EgorOrachyov
Benchmark for sparse linear algebra libraries for CPU and GPU platforms.
karlrupp
Sparse matrix transposition benchmark. Details: https://www.karlrupp.net/2016/02/sparse-matrix-transposition-datastructure-performance-comparison
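The textbook data-structure-friendly approach such a benchmark exercises, transposing CSR in linear time via a counting pass, can be sketched as follows (illustrative, not the benchmark's code):

```python
def csr_transpose(indptr, indices, data, ncols):
    """Transpose a CSR matrix in O(nnz + ncols):
    count entries per column, prefix-sum the counts into the new
    indptr, then scatter each entry into its row of the transpose."""
    nnz = len(data)
    t_indptr = [0] * (ncols + 1)
    for c in indices:
        t_indptr[c + 1] += 1
    for c in range(ncols):
        t_indptr[c + 1] += t_indptr[c]
    t_indices = [0] * nnz
    t_data = [0] * nnz
    nxt = list(t_indptr[:-1])  # next free slot per output row
    for row in range(len(indptr) - 1):
        for k in range(indptr[row], indptr[row + 1]):
            dest = nxt[indices[k]]
            t_indices[dest] = row
            t_data[dest] = data[k]
            nxt[indices[k]] += 1
    return t_indptr, t_indices, t_data
```

The scatter phase has an irregular memory access pattern, which is why data-structure and cache effects dominate the measurements.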
computablee
A benchmark of SpMV for heterogeneous systems, intended to test the viability of CSR-k as a sparse matrix format.
qdrant
This is a benchmarking tool for Qdrant's sparse vector implementation
jfilter
Sparse Truncated SVD Benchmark (Python)
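A minimal example of the operation being benchmarked, using SciPy's `svds` on a sparse matrix (this assumes SciPy is available; sizes and parameters are illustrative):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import svds

def truncated_svd_error(n=100, rank=5, k=5, seed=0):
    """Build a rank-`rank` matrix, store it sparsely, and check that a
    rank-k truncated SVD reconstructs it; returns the relative
    Frobenius-norm reconstruction error (near zero when k >= rank)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, rank)) @ rng.standard_normal((rank, n))
    A_sp = sparse.csr_matrix(A)
    U, s, Vt = svds(A_sp, k=k)
    return np.linalg.norm(U @ np.diag(s) @ Vt - A) / np.linalg.norm(A)
```

A benchmark would time calls like this across implementations (e.g. `svds` vs. randomized solvers) and matrix shapes.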
RichardWang11
A benchmark covering all vector search types (dense vector, sparse vector, filtered vector search, ...).
In this paper, we propose a novel sparse-coding-based image classification approach with a hierarchical structure. Hierarchical Structured Dictionary Learning (HSDL) exploits the visual correlation between visually similar object categories by learning multiple class-specific dictionaries together with one or more corresponding shared dictionaries. A discriminative term based on the Fisher discrimination criterion is adopted at both the class-specific and shared dictionary levels to enhance the discrimination of the dictionaries. The performance of HSDL has been evaluated on benchmark image databases.
This software is a sparse stereo visual odometry system for navigation of autonomous vehicles. The system estimates the camera's pose from its surrounding environment. In contrast to other visual odometry systems with Bundle Adjustment optimization, the proposed system differs in four main aspects: (1) it utilizes both stereo frames to track features between frames; (2) it does not require a bootstrap step to initialize the algorithm; (3) it performs a local optimization at every incoming frame instead of a windowed optimization; and (4) it considers both stereo images in the optimization instead of just one side of the stereo pair. The system was tested on the Karlsruhe Institute of Technology (KITTI) Vision Benchmark.
Sparse autoencoder benchmarks (CS 229)
Yusen-Peng
[EMNLP-W 2025] CE-Bench: Towards a Reliable Contrastive Evaluation Benchmark of Interpretability of Sparse Autoencoders
ebasaeed
Demo of "Automatic Feature Learning for Spatio-Spectral Image Classification With Sparse SVM" on The Prague Texture Segmentation Datagenerator and Benchmark (ALI dataset).