Microservices framework that retrofits existing agentic workflows to opportunistically route inference to local compute when your GPU is free, with built-in benchmarking, wake-on-LAN, and automatic cloud fallback. Includes a Windows tray app that monitors GPU load, automatically gates Ollama network access, and notifies the user about running jobs.
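To illustrate the routing idea, here is a minimal sketch (not the project's actual code) of opportunistic inference routing: probe a local Ollama server, prefer it when it looks idle, and fall back to a cloud endpoint otherwise, with a standard wake-on-LAN helper for waking a sleeping GPU box. The endpoint URLs, MAC address handling, and the use of Ollama's `/api/ps` as an idleness proxy are illustrative assumptions.

```python
import json
import socket
import urllib.request

LOCAL_OLLAMA = "http://192.168.1.50:11434"    # assumed local Ollama server
CLOUD_API = "https://api.example.com/v1/chat" # assumed cloud fallback endpoint


def send_wol(mac: str, broadcast: str = "255.255.255.255") -> None:
    """Send a standard wake-on-LAN magic packet: 6 bytes of 0xFF
    followed by the target MAC repeated 16 times, via UDP broadcast."""
    payload = bytes.fromhex("FF" * 6 + mac.replace(":", "") * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, 9))


def local_gpu_is_free() -> bool:
    """Probe Ollama's /api/ps endpoint (lists currently loaded models).
    An unreachable server or one with models running counts as 'busy';
    a real gate would read actual GPU utilization instead."""
    try:
        with urllib.request.urlopen(f"{LOCAL_OLLAMA}/api/ps", timeout=2) as r:
            running = json.load(r).get("models", [])
        return len(running) == 0  # crude proxy for an idle GPU
    except OSError:
        return False


def route_inference() -> str:
    """Return the base URL a request should be sent to: local compute
    when the GPU looks free, otherwise automatic cloud fallback."""
    if local_gpu_is_free():
        return LOCAL_OLLAMA
    return CLOUD_API
```

In a fuller version, the router would first fire `send_wol` and retry the probe for a bounded window before falling back, so a sleeping local machine still gets a chance to serve the job.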
Stars: 0 · Forks: 0 · Watchers: 0 · Open Issues: 0
Overall repository health assessment: no package.json found, so this is likely not a Node.js project.
30 commits
c6e84c2  Add application icon, Start Menu shortcut, and MCP tooling improvements
f7c7513  Move listening address to second tooltip line to prevent truncation
cdb1c2b  Show FreeCycle listening address in tray tooltip instead of separate Ollama/agent lines
fc8ed00  Fix catalog scraper for updated ollama.com HTML layout, reduce verbose log noise
35fb325  Add Model Library submenu to system tray with installed status tracking
8879a05  Unify all component versions to 2.0.1, remove BUILD_MCP_SERVER_PROMPT.md