Run a custom LLM on demand in an OpenStack VM and start a dependent process. The VM powers off as soon as the process exits to save $$.
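The run-then-poweroff lifecycle can be sketched as a small wrapper function. This is a minimal sketch, not code from this repo: `run_then_poweroff` and `POWEROFF_CMD` are hypothetical names, and on a real instance something like this would be invoked from a boot-time service so the VM shuts down the moment the workload finishes.

```shell
# Minimal sketch of the shutdown-on-exit behavior described above.
# run_then_poweroff and POWEROFF_CMD are hypothetical names, not from
# this repo; on the real instance this would run from a boot-time unit.
run_then_poweroff() {
  # Run the dependent process (passed as arguments); capture its exit
  # status without aborting, so we always reach the poweroff step.
  "$@" && status=0 || status=$?
  echo "dependent process exited with status $status; powering off"
  # Override POWEROFF_CMD (e.g. with ':') when testing outside a VM.
  ${POWEROFF_CMD:-systemctl poweroff}
  return $status
}
```

Usage would be something like `run_then_poweroff ./my-job`, where `./my-job` stands in for whatever process depends on the local LLM; the VM powers off whether the job succeeds or fails, so an idle instance never keeps billing.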
Stars: 0 | Forks: 0 | Watchers: 0 | Open Issues: 0
17 commits:
1c8ca2c  Ensuring non-root user and key persistence. It's working after removing cloud-init!
a20d29c  Now mounting a Cinder volume on the instance and able to run ollama models from it!
a4b0dec  Hoping that switching to the stable channel will speed up image builds
df43ff8  Adding various packages building up to llama.cpp support
95575f8  Copied/rewrote OpenStack nix scripts, now unable to ssh into the instance
03b0fac  Attempting to make the default image work, but it's not cooperating as it's full of AWS-specific bloat. Still a good reference though!
58b9584  Added instructions on building and uploading a generic NixOS image to OpenStack
6244990  Added instructions for creating a keypair. Added initial Heat template for the deployment.
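The earliest commits mention creating a keypair, uploading a NixOS image, and an initial Heat template. With the standard python-openstackclient those steps typically look like the following; this is a hedged sketch, not the repo's own instructions, and the key, image, file, and stack names are placeholders:

```shell
# Hypothetical provisioning walkthrough with python-openstackclient.
# llm-key, nixos, nixos.qcow2 and llm-stack are placeholder names.

# Register an SSH keypair from an existing public key.
openstack keypair create --public-key ~/.ssh/id_ed25519.pub llm-key

# Upload the NixOS image built for OpenStack.
openstack image create --disk-format qcow2 --container-format bare \
  --file nixos.qcow2 nixos

# Launch the deployment from the Heat template.
openstack stack create -t heat-template.yaml llm-stack
```

These commands require credentials for a live OpenStack cloud (usually sourced from an RC file or clouds.yaml), so they are shown as an operational fragment rather than something runnable locally.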