Found 15 repositories (showing 15)
networknt
Raw benchmarks of throughput, latency, and transfer for a Hello World workload on popular microservices frameworks
icebob
Benchmark of microservices frameworks for NodeJS
Benchmarks for the microservices frameworks kratos, go-zero, and sponge
MitocGroup
A benchmarking microservice built on top of the DEEP Framework and used in the DEEP Marketplace
DescartesResearch
Creo is a framework for generating executable microservice applications for performance benchmarking. The framework also provides built-in support for standardized monitoring, load generation, and deployment.
barista-benchmarks
Barista is an open-source microservice benchmark suite for the JVM platform. It comprises applications written in a variety of popular frameworks and supports both JIT and AOT compilation.
A benchmarking framework for evaluating the performance overheads of modern Trusted Execution Environments (TEEs) such as Intel TDX and AMD SEV-SNP for microservice-based workloads deployed on Azure Confidential Virtual Machines.
Reference framework for distributed, real-time inference at billion-user scale: architecture, microservices, dynamic micro-batching, ANN-style candidate gen, re-ranking, A/B & shadow testing, observability, benchmarks, and Kubernetes deployment examples.
riiali
A green microservices benchmark framework
fvcastellanos
Benchmarks for microservices across different programming languages and frameworks
russok
Benchmarking a simple microservice implemented in various frameworks and programming languages
This repository contains a Locust-based stress testing and performance benchmarking framework for APIs and microservices.
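To illustrate the kind of measurement a stress-testing framework like the one above automates, here is a minimal sketch using only the Python standard library (not the repository's actual Locust code): it starts a trivial local HTTP "microservice", fires concurrent requests at it, and reports success count and latency percentiles. The handler class, request count, and worker count are illustrative choices, not values from the repository.

```python
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# A trivial local "microservice" to exercise (stand-in for a real target API).
class Hello(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging so it doesn't skew timings

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def hit(_):
    # Time a single request and record its status code.
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
        return resp.status, time.perf_counter() - start

# Fire 200 requests from 8 concurrent workers, then summarize.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(hit, range(200)))
server.shutdown()

statuses = [s for s, _ in results]
latencies = sorted(lat for _, lat in results)
print(f"ok={statuses.count(200)}/200 "
      f"p50={latencies[100] * 1000:.2f}ms p99={latencies[198] * 1000:.2f}ms")
```

A real harness like Locust adds user behavior scripting, ramp-up schedules, and a live web UI on top of this basic pattern of concurrent timed requests.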
hemerajs
Simple benchmark for microservice frameworks in Node.js that support NATS as a transport.
AkashCSE-884
A high-performance microservice built with the Java Vert.x framework and PostgreSQL, capable of handling 12300 requests per second. Tested locally using ApacheBench load testing.
Heretyc
Microservices framework that retrofits existing agentic workflows to opportunistically route inference to local compute when your GPU is free, with built-in benchmarking, wake-on-LAN, and automatic cloud fallback. Includes a Windows tray app that monitors GPU load, gates Ollama network access automatically, and notifies the user about running jobs.