Found 72 repositories (showing 30)
LAION-AI
CLIP-like model evaluation
cat-state
CLIP retrieval benchmark
JayyShah
Image similarity with a deep dive into CLIP and DINOv2 models. This repository offers comprehensive code and insights for benchmarking these AI giants in image similarity tasks.
Huite
Benchmarking Sutherland-Hodgman clipping in a few languages
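Not taken from the repository above; for context, a minimal pure-Python sketch of the Sutherland-Hodgman algorithm it benchmarks, which clips a subject polygon against each edge of a convex clip polygon in turn:

```python
def clip_polygon(subject, clip):
    """Sutherland-Hodgman: clip `subject` against a convex `clip` polygon.
    Both are lists of (x, y) vertices given in counter-clockwise order."""

    def inside(p, a, b):
        # True if p lies left of (or on) the directed edge a -> b,
        # i.e. inside the half-plane for a CCW clip polygon.
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersect(p1, p2, a, b):
        # Intersection of segment p1 -> p2 with the infinite line through a, b.
        x1, y1 = p1; x2, y2 = p2
        x3, y3 = a;  x4, y4 = b
        denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

    output = list(subject)
    for i in range(len(clip)):
        a, b = clip[i], clip[(i + 1) % len(clip)]
        input_list, output = output, []
        if not input_list:
            break  # subject lies entirely outside this edge
        s = input_list[-1]
        for e in input_list:
            if inside(e, a, b):
                if not inside(s, a, b):
                    output.append(intersect(s, e, a, b))
                output.append(e)
            elif inside(s, a, b):
                output.append(intersect(s, e, a, b))
            s = e
    return output
```

Because each pass clips against a single half-plane, the algorithm is only correct for convex clip polygons, which is also what makes it a popular cross-language benchmark candidate: it is short, branchy, and allocation-heavy.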
ahnjaewoo
🥷🏻 Code for our ACL 2025 Main paper: "Can LLMs Deceive CLIP? Benchmarking Adversarial Compositionality of Pre-trained Multimodal Representation via Text Updates"
Rethinking Few Shot CLIP Benchmarks: A Critical Analysis in the Inductive Setting (ICCV 2025)
This paper presents a new approach for acoustic environment classification based on the discrete Hartley transform. The approach applies a Hidden Markov Model based classifier to test data composed of audio clips in order to determine which environment surrounds them. Features are obtained from the discrete Hartley transform, yielding a feature set that requires only real arithmetic, which can make the technique advantageous in terms of simplicity and/or computational speed. Performance is evaluated on benchmark datasets from the 2013 and 2016 Detection and Classification of Acoustic Scenes and Events (DCASE) challenges. Experiments show that the proposed method is competitive with other recently proposed methods, and that using the discrete Hartley transform improves classification performance.
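The abstract's key point is that the discrete Hartley transform needs only real arithmetic, unlike the complex-valued DFT. A minimal illustration (not the paper's feature pipeline) of the naive O(N²) DHT, H[k] = Σₙ x[n]·(cos(2πkn/N) + sin(2πkn/N)):

```python
import math


def dht(x):
    """Discrete Hartley transform using only real arithmetic.
    cas(t) = cos(t) + sin(t); H[k] = sum_n x[n] * cas(2*pi*k*n/N)."""
    N = len(x)
    return [
        sum(x[n] * (math.cos(2 * math.pi * k * n / N) +
                    math.sin(2 * math.pi * k * n / N))
            for n in range(N))
        for k in range(N)
    ]
```

A handy sanity check: the DHT is its own inverse up to a factor of N, so applying `dht` twice returns the input scaled by N.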
danimelatru
Continual learning benchmark using CLIP + LoRA for CIFAR-10. Includes Catastrophic Forgetting experiment and PEFT mitigation.
harim061
2024 MobileCLIP Benchmark
oliverwehrens
Benchmarking a 10 minute audio clip on different models and different cores
AdamBlm
Benchmarking DINOv2, CLIP, and MAE under Distribution Shifts
pramanik-souvik
Benchmarking Vision-Language Models (CLIP, FLAVA, BLIP, ViLT) for movie genre prediction using contrastive learning and multimodal fusion paradigms.
roboflow
No description available
pranzalkhadka
Benchmarking Fine-Tuning Strategies for CLIPSeg
No description available
LibreCS
VQGAN+CLIP implementation for aarch64 architecture testing and benchmarking with machine learning workloads
orshkuri
A benchmark and analysis of QFormer, Cross Attention, and Concat models for binary Visual Question Answering (VQA) using CLIP and BERT+ViT-CLIP encoders.
Jiun-Tseng
A reproducible workflow to evaluate whole-genome variant calls from DRAGEN soft-clipped and hard-clipped pipelines against the GIAB HG002 truth set. The workflow performs reference and VCF normalization (bcftools/GATK), BED cleaning, Truvari benchmarking, SNP/INDEL visualization, and Bland–Altman statistical comparison across pipelines.
Multimodal-Intelligence-Lab
The CLIP-CC Dataset is a carefully curated collection of 200 YouTube video links with human-written summaries, designed for research and experimentation in multimodal AI tasks. It addresses the growing need for high-quality video comprehension benchmarks that can effectively evaluate narrative understanding capabilities.
rrs-2002
Zero-Shot Industrial Anomaly Detection using WinCLIP — A production-ready system that detects manufacturing defects WITHOUT any training data. Built on OpenAI's CLIP with sliding window analysis, prompt ensembling, and a Flask-based web dashboard. Evaluated on MVTec AD benchmark (15 categories, 89% F1-Score). Runs on standard CPU — no GPU required.
olehsharov
No description available
LecterF
No description available
VISHNU193
No description available
wallacezq
No description available
2D Polygon clipping comparison
HamzaKhan760
Benchmarking OpenAI's CLIP model
jersonjb
No description available
sathnil
No description available
dimmonn
No description available
RenxuLogan
Benchmarking and comparing the performance of CLIP (Contrastive Language-Image Pretraining) and LPIPS (Learned Perceptual Image Patch Similarity) on various tasks, with a focus on evaluating visual similarity and perceptual quality.
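For context on the last entry: CLIP-based visual similarity is conventionally the cosine similarity between two image embeddings (higher means more similar), while LPIPS is a learned distance (lower means more similar). A minimal sketch of the cosine-similarity side, assuming the embedding vectors have already been extracted by some encoder:

```python
import math


def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors.
    Returns 1.0 for identical directions, 0.0 for orthogonal ones."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

To put a CLIP similarity on the same "lower is closer" footing as an LPIPS distance, a common trick is to compare `1 - cosine_similarity(u, v)` against the LPIPS score.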