Found 126 repositories (showing 30)
coleam00
Run all your local AI together in one package - Ollama, Supabase, n8n, Open WebUI, and more!
Juanhacker051
*FotoSploit* $ git clone https://github.com/Cesar-Hack-Gray/FotoSploit $ cd FotoSploit $ chmod +x * $ bash install sh $ ./FotoSploit $ show options ================================================= *instalar metasploit no termux(Facilmente)* comandos para instalar metasploit no termux 1: apt update && apt upgrade 2: apt install curl 3: curl -LO https://raw.githubusercontent.com/Hax4us/Metasploit_termux/master/metasploit.sh 4: ls 5: chmod 777 metasploit.sh 6: ls 7: ./metasploit.sh (Carregandooooo) 8: msfconsole ================================================= *EchoPwn* Instalação git clone https://github.com/hackerspider1/EchoPwn.git cd EchoPwn chmod +x install.sh EchoPwn.sh ./install.sh EchoPwn.sh ================================================= *DarkFly-Tool* Instalação apt update && apt upgrade apt install git git clone https://github.com/Ranginang67/DarkFly-Tool cd DarkFly-Tool chmod +x * python2 install.py ================================================= *Tool-x* Instalação apt update apt install git git clone https://github.com/rajkumardusad/Tool-X.git cd Tool-X chmod +x install.aex sh install.aex ou ./install.aex ================================================= *Multi_Phish* pkg instalar droplet pkg instalar openssh pkg instalar git pkg instalar curl pkg instalar wget apt instalar git php -y git clone https://github.com/perjayro/MultiPhish.git cd phish chmod 777 phish.sh bash phish.sh ================================================= *Pentest Tools Framework* git clone https://github.com/pikpikcu/Pentest-Tools-Framework.git cd Pentest-Tools-Framework pip install -r require.txt python install.py python ptf.py ================================================= *Destroyer-framework* ⭕️LINUX git clone https://github.com/Cesar-Hack-Gray/Destroyer-framework cd Destroyer-framework ls bash install.sh ./Destroyer ⭕️TERMUX apt upgrede -y && pkg update -y apt install -y apt install -y curl apt install git git clone 
https://github.com/Cesar-Hack-Gray/Destroyer-framework cd Destroyer-framework ls bash install.sh ./Destroyer ================================================= *NIKTO* Instalação git clone https://github.com/sullo/nikto apt-get install openssl libcrypt-ssleay-perl Uso de proxys: perl nikto.pl -h localhost -p 8080 -useproxy proxyIp Atualizando Nikto: perl nikto.pl -update ================================================= *SocialFish* Instalação $ apt update && upgrade $ apt install git $ apt install python2 $ git clone https://github.com/UndeadSec/SocialFish.git $ cd SocialFish $ chmod +x * $ pip2 install -r requirements.txt ================================================= *Opal [ATUALIZADO]* git clone https://github.com/shadowlabscc/ProjectOpal.git cd ProjectOpal python opal.py python Injector.py ================================================= *Kit de ferramentas para bugbounty. #CVEs* https://github.com/Medicean/VulApps https://github.com/qazbnm456/awesome-cve-poc https://github.com/tunz/js-vuln-db https://github.com/cve-search/cve-search https://github.com/nixawk/labs https://github.com/Coalfire-Research/java-deserialization-exploits https://github.com/Metnew/uxss-db https://github.com/TH3xACE/SUDO_KILLER https://github.com/Mr-xn/Penetration_Testing_POC https://github.com/toolswatch/vFeed ================================================= *Para pegar informações* 1️⃣ Phone In Foga https://github.com/sundowndev/PhoneInfoga 2️⃣ In Foga - Email https://github.com/m4ll0k/Infoga 3️⃣ Angry Fuzz3r https://github.com/ihebski/angryFuzzer 4️⃣ Hakku Framework https://github.com/4shadoww/hakkuframework 5️⃣ Knock Mail https://github.com/4w4k3/KnockMail 6️⃣ Santet Online https://github.com/Gameye98/santet-online 7️⃣ The Harvester https://github.com/laramies/theHarvester 8️⃣ Optiva Framework https://github.com/joker25000/Optiva-Framework 9️⃣ Cyber Scan https://github.com/medbenali/CyberScan 🔟 Gloom Framework https://github.com/StreetSec/Gloom-Framework 
================================================= *OXID Tools* git clone https://github.com/oxyda-fox/OXIDTools.git cd OXIDTools chmod +x * . /setup.sh . /run.sh ================================================= *xShock* Instalação git clone https://github.com/capture0x/xShock/ cd xShock pip3 install -r requirements.txt Executar python3 main.py ================================================= *Web Pentest* Instalação apt update && apt upgrade apt install git apt install python2 apt install python git clone https://github.com/cr4shcod3/pureblood cd pureblood chmod +x * pip install -r requirements.txt Uso python2 pureblood.py ================================================= *Quack* Requisitos apt update && apt upgrade -y termux-setup-storage pkg install -y git pkg install -y python pip install --upgrade pip pip install requests Instalação git clone https://github.com/entynetproject/quack cd quack pip install -r requirements.txt chmod +x quack ================================================= *Thoron Framework* git clone https://github.com/entynetproject/thoron.git cd thoron chmod + x install.sh ./install.sh ================================================= *BlackPhish* git clone https://github.com/Ahmedmahmed8a/BlackPhish cd BlackPhish bash installer.sh ================================================= *RapidPayload* git clone https://github.com/AngelSecurityTeam/RapidPayload cd RapidPayload bash install.sh python3 RapidPayload.py ================================================= *Termux_ExtraKeys* apt update && apt upgrade -y apt install git -y git clone https://github.com/Fabrix07Hack/Termux_ExtraKeys.git cd Termux_ExtraKeys chmod 777 * ./extrakeys_Termux ================================================= *PyReconExSploit* apt-get update apt-get upgrade apt-get install exploitdb netcat nmap perl php git clone https://github.com/AkutoSai/PyReconExSploit cd PyReconExSploit/ python3 setup.py install cp -r /home/user/Desktop/PyReconExSploit/pyreconexsploit 
/usr/local/lib/python3.7/dist-packages pyreconexsploit ================================================= *Evil Framework* apt update apt upgrade pip2 install requests git clone https://github.com/LOoLzeC/Evil-create-framework cd Evil-create-framework python2 vcrt.py show android help Escolha um virus create virus"seu virus" SET OUTPUT cd /sdcard SET VIRUS NAME "nome do seu virus" run ================================================= *Wifite* apt update && apt upgrade apt install git apt install python2 git clone https://github.com/derv82/wifite2 ls cd wifite ls python2 wifite.py ================================================= *MALICIOUS* $ termux-setup-storage $ cd /sdcard $ apt install git $ apt install python2 $ apt install ruby $ gem install lolcat $ git clone https://github.com/Hider5/Malicious $ cd Malicious $ pip2 install -r requirements.txt $ python2 malicious.py ================================================= *Hammer* apt update apt-get install python -y apt install git apt install python3 git clone https://github.com/cyweb/hammer ls cd hammer chmod +x hammer.py python3 hammer.py -s (alvo) -p 80 -t 150 ================================================= *VIRUS X* $ apt update && apt upgrade $ apt install git $ apt install python $ git clone https://github.com/TSMaitry/VirusX.git $ cd VirusX $ chmod +x VirusX.py $ python2 VirusX.py ================================================= *INFECT* $ apt-get update -y $ apt-get upgrade -y $ apt install python -y $ apt install python2 -y $ apt install git -y $ pip install lolcat $ git clone https://github.com/noob-hackers/Infect $ ls $ cd infect $ ls $ bash infect.sh ================================================= *F-Society Framework* (Instalação) apt install git apt install python2 (Instalação do pacote) git clone https://github.com/Manisso/fsociety ls cd fsociety ls chmod +x fsociety.py ./install.sh python2 fsociety.py ================================================= *MyServer* Abra o termux e digite os 
seguintes comandos. apt update apt install git git clone https://github.com/rajkumardusad/MyServer cd MyServer chmod +x install ./install ================================================= *AirCrack-ng* apt update apt install root-repo apt install aircrack-ng ================================================= *RouterSploit* apt update && apt upgrade apt install python -y pip2 install apt install git git clone https://github.com/threat9/routersploit ls cd routersploit pip2 install -r requirements -dev.txt pip install future ls python rsf.py ================================================= *Shell Phish* apt update apt upgrade -y termux-setup-storage apt installl git git clone https://github.com/thelinuxchoice/shellphish cd shellphish apt installl php apt install curl git clone https://github.com/PSecurity/ps.ngrok cd ps.ngrok mv ngrok /data/data/com.termux/files/home/shellphish/ cd .. rm -rf ps.ngrok chmod +x ngrok chmod +x shellphish.sh bash shellphish.sh # Ao Iniciar: cd shellphish bash shellphish.sh comandos : pkg install clang git clone https://github.com/XCHADXFAQ77X/XERXES ls cd XERXES ls chmod +x * ls clang xerxes.c -o xerxes ./xerxes exemplo: website.com.br 80 galera lembrando nao bote HTTPS nem www so o nome do site exemplo : website.com.br ``` ALGUNS COMANDOS DO TERMUX BY: BAN``` apt update && apt upgrade termux-setup-storage apt install git apt install net-tools apt install termux-tools apt install neofetch apt install ncurses-utils apt install curl curl -LO https://raw.githubusercontent.com/Hax4us/Metasploit_termux/master/metasploit.sh chmod +x metasploit.sh ./metasploit.sh msfconsole ____________________________________________ https://github.com/PSecurity/ps.ngrok termux-setup-storage apt update && apt upgrade -y pkg update && pkg upgrade -y apt install curl pkg install git git clone https://github.com/PSecurity/ps.ngrok cd ps.ngrok mv ngrok /data/data/com.termux/files/home cd .. 
chmod +x ngrok ./ngrok ( chave de ativação NGROK) ./ngrok http 80 ________________________________________________ apt update Apt install python2 Apt install git Git clone https://github.com/evait-security/weeman ls cd weeman chmod +x * python2 weeman.py Set url (url) set action_url (url) run (→Abra outra página←) cd .. ./ngrok http 8080 ____________________________________________ apt update apt upgrade apt git git clone https://github.com/liorvh/hammer-1 cd hammer-1 chmod +x * python hammer.py python hammer.py -s (site + www) -t 256 -p 80 ___________________________________________________ apt install git Pkg install clang Faça a instalação do Script git clone https://github.com/zanyarjamal/xerxes Entre no diretório cd xerxes Digite o comando clang xerxes.c -o xerxes Agora e só inicia o Ataque ./xerxes website.com 80 ____________________________________________ apt update && apt upgrade apt install php apt install python2 apt install toilet apt install git git clone https://github.com/4L13199/LITESPAM cd LITESPAM ls sh LITESPAM.sh As opções aparecerão como mostrado abaixo, você apenas escolhe qual bomba de spam SMS será executada ____________________________________________ $ pkg update $ pkg upgrade $ pkg install git $ pkg install php $ pkg install toilet $ pkg install python2 $ gem install lolcat $ pip2 install requests $ pip2 install termcolor $ git clone https://github.com/mbest99/MIXINGS.git $ cd MIXINGS $ bash 0ppay.sh Features:- [ 1] PHISING V1 [ 2] PHISING V2 [ 3] PHISING V3 [ 4] PHISING V4 [ 5] PHISING GAME [ 6] Hack fb target [ 7] Hack fb massal [ 8] Hack fb Target+Massal [ 9] Hack FB ans (#root) [10] Hack Instagram (#root) [11] Hack Twitter (#root) [12] Hack Gmail (#root) [13] Fb Info [14] Santet Online [15] Spam IG [16] Spam WA [17] Spam Sms [18] Youtube AutoView (#root) ____________________________________________ No termux, pra adiantar... 
pkg install nodejs Em seguida, crie um aplicativo e guarde o nome dele https://www.heroku.com/ $ pkg install git -y $ termux-setup-storage $ ls $ git clone -b herooku https://github.com/XploitWizer/XploitSPY $ cd XploitSPY $ ls $ pkg install nodejs $ npm install heroku -g $ heroku login -i $ heroku git:remote -a nomedoapp $ heroku buildpacks:add heroku/jvm $ heroku buildpacks:add heroku/nodejs $ git push heroku herooku:master ____________________________________________ PERSONALIZAR TERMUX apt update && apt upgrade -y pkg install nano pkg install vim cd ../usr/etc ls vim bash.bashrc Precione a letra (I) para editar o texto Depois vc apaga a seguinte mensagem que aparece no termux " PS1='\$ ' " E cola isso → PS1="\033[1;32m ╔\033[0m""\033[1;31m[ \033m""\033[1;32m SEU NOME AKI\033[0m""\033[1;31m @\033[0m""\033[1;32m║\033[0m""\033[1;37m ≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡\033[0m""\033[1;32m ╚▶ " Coloca o seu nome onde tá escrito " seu nome aqui" Para sair vc clica no ESC + : + x aí você dê enter. Depois de o comando exit e dê enter, dps é só abrir dnovo ;> ____________________________________________ pkg update && pkg upgrade $pkg install python2 $pip2 install requests $pip2 install mechanize $pkg install git $git clone https://github.com/ARIYA-CYBER/NEW $cd NEW $python2 FbNew.py ____________________________________________ https://github.com/Paxv28/CrusherDDoS apt install git apt install python cd CrusherDDoS chmod +x Setup.sh ./Setup.sh python CSDDoS.py
nath1295
A Python package for developing AI applications with local LLMs.
Centauri2442
Simple AI | A UdonSharp-based local and synced AI package, meant for use in VRChat!
mudigosa
# Image Classifier

Going forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smartphone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications.

In this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice, you'd train this classifier, then export it for use in your application. We'll be using this dataset of 102 flower categories. When you've completed this project, you'll have an application that can be trained on any set of labelled images. Here your network will be learning about flowers and end up as a command-line application. But what you do with your new skills depends on your imagination and effort in building a dataset. This is the final project of the Udacity AI with Python Nanodegree.

### Prerequisites

The code is written in Python 3.6.5. If you don't have Python installed, you can find it here. If you are using a lower version of Python, you can upgrade using the pip package, ensuring you have the latest version of pip. To install pip, run in the command line:

```
python -m ensurepip --default-pip
```

To upgrade it:

```
python -m pip install --upgrade pip setuptools wheel
```

(Note that Python itself cannot be upgraded through pip; to upgrade Python, install a newer release.)

Additional packages that are required are: NumPy, Pandas, Matplotlib, PyTorch, PIL, and json. You can download them using pip

```
pip install numpy pandas matplotlib pillow
```

or conda

```
conda install numpy pandas matplotlib pillow
```

(Pillow is the package that provides PIL.) In order to install PyTorch, head over to the PyTorch site, select your specs, and follow the instructions given.
### Viewing the Jupyter Notebook

In order to better view and work on the Jupyter notebook, I encourage you to use nbviewer. You can simply copy and paste the link into this website and you will be able to view it without any problem. Alternatively, you can clone the repository with

```
git clone https://github.com/fotisk07/Image-Classifier/
```

then, after you have installed Jupyter Notebook, type in the command line

```
jupyter notebook
```

locate the notebook, and run it.

### Command Line Application

Train a new network on a data set with train.py.

Basic usage: `python train.py data_directory`. Prints out the current epoch, training loss, validation loss, and validation accuracy as the network trains.

Options:
- Set directory to save checkpoints: `python train.py data_dir --save_dir save_directory`
- Choose architecture (alexnet, densenet121 or vgg16 available): `python train.py data_dir --arch "vgg16"`
- Set hyperparameters: `python train.py data_dir --learning_rate 0.001 --hidden_layer1 120 --epochs 20`
- Use GPU for training: `python train.py data_dir --gpu`

Predict the flower name from an image with predict.py, along with the probability of that name. That is, you'll pass in a single image /path/to/image and get back the flower name and the class probability.

Basic usage: `python predict.py /path/to/image checkpoint`

Options:
- Return top K most likely classes: `python predict.py input checkpoint --top_k 3`
- Use a mapping of categories to real names: `python predict.py input checkpoint --category_names cat_to_name.json`
- Use GPU for inference: `python predict.py input checkpoint --gpu`

### JSON file

In order for the network to print out the name of the flower, a .json file is required. If you aren't familiar with JSON, you can find information here. By using a .json file, the data can be sorted into folders with numbers, and those numbers will correspond to specific names specified in the .json file.
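The folder-number-to-name lookup described above can be sketched in a few lines of Python. The entries used here are illustrative stand-ins, not values from the project's real JSON file:

```python
# A hedged sketch of the folder-number-to-name mapping described above.
# The entries below are hypothetical; in the project the mapping is
# loaded from the JSON file instead:
#
#   import json
#   with open("cat_to_name.json") as f:
#       cat_to_name = json.load(f)
#
cat_to_name = {"5": "rose", "21": "daffodil"}  # hypothetical entries

# A predicted class (the folder number) resolves to a human-readable name.
predicted_class = "5"
print(cat_to_name[predicted_class])  # rose
```

This is why the data folders are named with numbers: the network predicts a folder number, and the JSON file turns that number into a printable name.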
### Data and the JSON file

The data used for this assignment (a flower database) is not provided in the repository, as it's larger than what GitHub allows. Nevertheless, feel free to create your own databases and train the model on them to use with your own projects.

The structure of your data should be the following: the data needs to comprise 3 folders: test, train, and validate. Generally, the proportions should be 70% training, 10% validation, and 20% test. Inside the train, test, and validate folders there should be folders bearing a specific number, which corresponds to a specific category, clarified in the JSON file. For example, if we have the image a.jpg and it is a rose, it could be in a path like /test/5/a.jpg, and the JSON file would contain {...5:"rose",...}. Make sure to include a lot of photos of your categories (more than 10) with different angles and different lighting conditions, in order for the network to generalize better.

### GPU

As the network makes use of a sophisticated deep convolutional neural network, the training process is impractical on a common laptop. In order to train your model you have three options:

- CUDA: If you have an NVIDIA GPU, you can install CUDA from here. With CUDA you will be able to train your model, though the process will still be time consuming.
- Cloud services: There are many paid cloud services that let you train your models, like AWS or Google Cloud.
- Google Colab: Google Colab gives you free access to a Tesla K80 GPU for 12 hours at a time. Once 12 hours have elapsed, you can just reload and continue! The only limitation is that you have to upload the data to Google Drive, and if the dataset is massive you may run out of space.

However, once a model is trained, a normal CPU can be used for the predict.py file and you will have an answer within seconds.
### Hyperparameters

As you can see, you have a wide selection of hyperparameters available, and you can get even more by making small modifications to the code. Thus it may seem overly complicated to choose the right ones, especially when training needs at least 15 minutes to complete. So here are some hints:

- Increasing the number of epochs makes the accuracy of the network on the training set better and better; however, be careful: if you pick a large number of epochs, the network won't generalize well, that is to say, it will have high accuracy on the training images and low accuracy on the test images. E.g., training for 12 epochs: training accuracy 85%, test accuracy 82%; training for 30 epochs: training accuracy 95%, test accuracy 50%.
- A big learning rate makes the network converge fast to a small error, but it will constantly overshoot.
- A small learning rate lets the network reach greater accuracies, but the learning process will take longer.
- Densenet121 works best for images, but the training process takes significantly longer than with alexnet or vgg16.

My settings were lr=0.001, dropout=0.5, epochs=15, and my test accuracy was 86% with densenet121 as my feature-extraction model.

### Pre-trained network

The checkpoint.pth file contains the information of a network trained to recognize 102 different species of flowers. It has been trained with specific hyperparameters, so if you don't set them right the network will fail. In order to get a prediction for an image located at /path/to/image using my pretrained model, you can simply type

```
python predict.py /path/to/image checkpoint.pth
```

### Contributing

Please read CONTRIBUTING.md for the process for submitting pull requests.

### Authors

- Shanmukha Mudigonda - Initial work
- Udacity - Final Project of the AI with Python Nanodegree
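The top-K selection that predict.py exposes through its --top_k option can be sketched in plain Python. The probability values below are invented for illustration; the real script obtains them from the trained network's output:

```python
# Hedged sketch of top-K class selection, as in predict.py's --top_k option.
# The probabilities are made-up illustration values, keyed by the category
# folder numbers described above; the real script computes them with the
# trained network.
probs = {"5": 0.62, "12": 0.21, "34": 0.09, "77": 0.05, "2": 0.03}

def top_k(probabilities, k=3):
    """Return the k (class, probability) pairs with the highest probability."""
    return sorted(probabilities.items(), key=lambda kv: kv[1], reverse=True)[:k]

print(top_k(probs, 3))  # [('5', 0.62), ('12', 0.21), ('34', 0.09)]
```

Each returned class number would then be mapped to a flower name through the JSON file before being printed.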
kekko7072
Flutter package for local AI inference using native OS APIs - iOS Foundation Models, Android ML Kit GenAI, and Windows AI APIs. Zero model downloads required.
MarcoDotIO
A Swift package SDK for running local OpenClaw-style AI agent workflows inside native Swift applications.
vimalgandhi
# Docker Commands, Help & Tips

### Show commands & management commands
```
$ docker
```

### Docker version info
```
$ docker version
```

### Show info like number of containers, etc
```
$ docker info
```

# WORKING WITH CONTAINERS

### Create and run a container in foreground
```
$ docker container run -it -p 80:80 nginx
```

### Create and run a container in background
```
$ docker container run --detach --publish 80:80 nginx
```

### Shorthand
```
$ docker container run -d -p 80:80 nginx
```

### Naming containers
```
$ docker container run -d -p 80:80 --name nginx-server nginx
```

### TIP: WHAT RUN DID
- Looked for an image called nginx in the image cache
- If not found in the cache, it looks to the default image repo on Dockerhub
- Pulled it down (latest version) and stored it in the image cache
- Started it in a new container
- We specified to take port 80 on the host and forward to port 80 in the container
- We could do "$ docker container run --publish 8000:80 --detach nginx" to use port 8000
- We can specify versions like "nginx:1.11"

### List running containers
```
$ docker container ls
```
OR
```
$ docker ps
```

### List all containers (even if not running)
```
$ docker container ls -a
```

### Stop container
```
$ docker container stop [ID]
```

### Stop all running containers
```
$ docker stop $(docker ps -aq)
```

### Remove container (cannot remove running containers, must stop first)
```
$ docker container rm [ID]
```

### To remove a running container, use force (-f)
```
$ docker container rm -f [ID]
```

### Remove multiple containers
```
$ docker container rm [ID] [ID] [ID]
```

### Remove all containers
```
$ docker rm $(docker ps -aq)
```

### Get logs (use name or ID)
```
$ docker container logs [NAME]
```

### List processes running in container
```
$ docker container top [NAME]
```

#### TIP: ABOUT CONTAINERS
Docker containers are often compared to virtual machines, but they are actually just processes running on your host OS. On Windows/Mac, Docker runs in a mini-VM, so to see the processes you'll need to connect directly to that. On Linux, however, you can run "ps aux" and see the processes directly.

# IMAGE COMMANDS

### List the images we have pulled
```
$ docker image ls
```

### We can also just pull down images
```
$ docker pull [IMAGE]
```

### Remove image
```
$ docker image rm [IMAGE]
```

### Remove all images
```
$ docker rmi $(docker images -a -q)
```

#### TIP: ABOUT IMAGES
- Images are app binaries and dependencies with metadata about the image data and how to run the image
- Images are not a complete OS: no kernel, no kernel modules (drivers)
- The host provides the kernel, a big difference from VMs

### Some sample container creation
NGINX:
```
$ docker container run -d -p 80:80 --name nginx nginx
```
(-p 80:80 is optional as it runs on 80 by default)

APACHE:
```
$ docker container run -d -p 8080:80 --name apache httpd
```

MONGODB:
```
$ docker container run -d -p 27017:27017 --name mongo mongo
```

MYSQL:
```
$ docker container run -d -p 3306:3306 --name mysql --env MYSQL_ROOT_PASSWORD=123456 mysql
```

## CONTAINER INFO

### View info on container
```
$ docker container inspect [NAME]
```

### Specific property (--format)
```
$ docker container inspect --format '{{ .NetworkSettings.IPAddress }}' [NAME]
```

### Performance stats (cpu, mem, network, disk, etc)
```
$ docker container stats [NAME]
```

## ACCESSING CONTAINERS

### Create a new nginx container and bash into it
```
$ docker container run -it --name [NAME] nginx bash
```
- i = interactive: keep STDIN open if not attached
- t = tty: open a prompt

**For Git Bash, use "winpty"**
```
$ winpty docker container run -it --name [NAME] nginx bash
```

### Run/create Ubuntu container
```
$ docker container run -it --name ubuntu ubuntu
```
**(no bash needed because ubuntu runs bash by default)**

### You can also make the container remove itself when you exit by using the --rm flag
```
$ docker container run --rm -it --name [NAME] ubuntu
```

### Access an already created container: start with -ai
```
$ docker container start -ai ubuntu
```

### Use exec to edit config, etc
```
$ docker container exec -it mysql bash
```

### Alpine is a very small Linux distro, good for docker
```
$ docker container run -it alpine sh
```
(use sh because it does not include bash; alpine uses apk for its package manager, so you can install bash if you want)

# NETWORKING

### "bridge" or "docker0" is the default network

### Get port
```
$ docker container port [NAME]
```

### List networks
```
$ docker network ls
```

### Inspect network ("bridge" is the default)
```
$ docker network inspect [NETWORK_NAME]
```

### Create network
```
$ docker network create [NETWORK_NAME]
```

### Create container on network
```
$ docker container run -d --name [NAME] --network [NETWORK_NAME] nginx
```

### Connect existing container to network
```
$ docker network connect [NETWORK_NAME] [CONTAINER_NAME]
```

### Disconnect container from network
```
$ docker network disconnect [NETWORK_NAME] [CONTAINER_NAME]
```

# IMAGE TAGGING & PUSHING TO DOCKERHUB

### Tags are labels that point to an image ID
```
$ docker image ls
```
You'll see that each image has a tag

### Retag existing image
```
$ docker image tag nginx bradtraversy/nginx
```

### Upload to dockerhub
```
$ docker image push bradtraversy/nginx
```

### If denied, do
```
$ docker login
```

### Add tag to new image
```
$ docker image tag bradtraversy/nginx bradtraversy/nginx:testing
```

### DOCKERFILE PARTS
- FROM - The base OS/image used. Common are alpine, debian, ubuntu
- ENV - Environment variables
- RUN - Run commands/shell scripts, etc
- EXPOSE - Ports to expose
- CMD - Final command run when you launch a new container from the image
- WORKDIR - Sets working directory (could also use 'RUN cd /some/path')
- COPY - Copies files from host to container

### Build image from Dockerfile (reponame can be whatever)
### From the same directory as the Dockerfile
```
$ docker image build -t [REPONAME] .
```

#### TIP: CACHE & ORDER
- If you re-run the build, it will be quick because everything is cached
- If you change one line and re-run, that line and everything after it will not be cached
- Keep the things that change the most toward the bottom of the Dockerfile

# EXTENDING DOCKERFILE

### Custom Dockerfile for an html page with nginx
```
FROM nginx:latest
# Extends nginx so everything included in that image is included here
WORKDIR /usr/share/nginx/html
COPY index.html index.html
```

### Build image from Dockerfile
```
$ docker image build -t nginx-website .
```

### Running it
```
$ docker container run -p 80:80 --rm nginx-website
```

### Tag and push to Dockerhub
```
$ docker image tag nginx-website:latest bradtraversy/nginx-website:latest
```
```
$ docker image push bradtraversy/nginx-website
```

# VOLUMES

### Volume - Makes a special location outside of the container UFS. Used for databases
### Bind mount - Links a container path to a host path

### Check volumes
```
$ docker volume ls
```

### Cleanup unused volumes
```
$ docker volume prune
```

### Pull down mysql image to test
```
$ docker pull mysql
```

### Inspect and see volume
```
$ docker image inspect mysql
```

### Run container
```
$ docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True mysql
```

### Inspect and see volume in container
```
$ docker container inspect mysql
```

#### TIP: MOUNTS
- You will also see the volume under Mounts
- The container gets its own unique location on the host to store that data
- Source: xxx is where it lives on the host

### Check volumes
```
$ docker volume ls
```
**There is no way to tell volumes apart (for instance with 2 mysql containers), so we use named volumes**

### Named volumes (add the -v flag; the name here is mysql-db, which could be anything)
```
$ docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql-db:/var/lib/mysql mysql
```

### Inspect new named volume
```
$ docker volume inspect mysql-db
```

# BIND MOUNTS
- Cannot be used in a Dockerfile; specified at run time (uses -v as well)
- ... run -v /Users/brad/stuff:/path/container (mac/linux)
- ... run -v //c/Users/brad/stuff:/path/container (windows)

**TIP: Instead of typing out the local path, for the working directory use $(pwd):/path/container - on Windows this may not work unless you are in your users folder**

### Run and be able to edit the index.html file (local dir should have the Dockerfile and the index.html)
```
$ docker container run -p 80:80 -v $(pwd):/usr/share/nginx/html nginx
```

### Go into the container and check
```
$ docker container exec -it nginx bash
$ cd /usr/share/nginx/html
$ ls -al
```

### You could create a file in the container and it will exist on the host as well
```
$ touch test.txt
```

# DOCKER COMPOSE
- Configures relationships between containers
- Saves our docker container run settings in an easy-to-read file
- 2 parts: YAML file (docker-compose.yml) + CLI tool (docker-compose)

### 1. docker-compose.yml
Describes solutions for:
- containers
- networks
- volumes

### 2. docker-compose CLI
Used for local dev/test automation with YAML files

### Sample compose file (from Bret Fisher's course)
```
version: '2'

# same as
# docker run -p 80:4000 -v $(pwd):/site bretfisher/jekyll-serve

services:
  jekyll:
    image: bretfisher/jekyll-serve
    volumes:
      - .:/site
    ports:
      - '80:4000'
```

### To run
```
$ docker-compose up
```

### You can run in background with
```
$ docker-compose up -d
```

### To cleanup
```
$ docker-compose down
```
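The DOCKERFILE PARTS listed above can be combined into a single minimal file. This is an illustrative sketch only: the app.py, flask, and port 5000 values are made-up examples, not taken from these notes.

```
# Illustrative only: a minimal Dockerfile combining the parts above.
# FROM - the base image/OS
FROM python:3.12-alpine
# ENV - environment variable baked into the image
ENV APP_ENV=production
# WORKDIR - sets the working directory for the following instructions
WORKDIR /app
# COPY - copies a file from the host into the image
COPY app.py app.py
# RUN - build-time command, cached as a layer
RUN pip install flask
# EXPOSE - documents the port the app listens on
EXPOSE 5000
# CMD - final command run when a container starts from this image
CMD ["python", "app.py"]
```

Note how the instructions follow the cache-and-order tip: the parts least likely to change (FROM, ENV) sit at the top, while COPY and RUN, which change most often, sit lower so earlier layers stay cached.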
svetlyi
An MCP server that helps AI assistants work with GoLang third-party packages using your local Go module cache.
LevitateOS
A Linux distribution where you maintain your own packages. Write simple Rhai recipes, build from source, with local AI assistance. Full control, no upstream dependencies.
chriscantey
A companion package for PAI (Personal AI Infrastructure) that adds a web portal, file exchange, clipboard, and optional local text-to-speech. Your AI assistant handles the setup. Designed for local Linux VMs.
wehnsdaefflae
Use your local Claude Code CLI as a Pydantic AI model provider. This package provides a Pydantic AI-compatible model implementation that wraps the local Claude CLI, enabling you to use Claude locally with all Pydantic AI features including structured responses, tool calling, streaming, and multi-turn conversations.
bashful-sh
A terminal extension used as a superset for bash. With bash profile and package management for Linux (Debian & Ubuntu) based systems. Now also includes local AI, ML, LLM integrations for developers.
Nibir1
Helix: AI-powered CLI with RAG intelligence that converts natural language to commands using 450+ system commands. Features local AI, cross-platform package management, Git workflows, syntax highlighting, and multi-layer safety with directory sandboxing.
RBND
Silas Blue is a Discord bot that runs local AI models—no cloud needed. With a retro GUI, multi-model support, and strong permission controls, it brings advanced AI features and privacy to your server in a stylish, nostalgic package.
codedwithlikhon
Super-Gemini is a local-first, agentic AI system designed to run on Termux for Android. It combines powerful terminal emulation with an extensive Linux package collection and acts as a universal developer + productivity agent.
dagemdworku
Whisper_cpp is a Flutter package that provides a local interface to Whisper AI using the whisper.cpp library.
sgardoll
Add on-device AI to your FlutterFlow project via this integration of the 'flutter_gemma' pub.dev package, allowing for the easy addition of Google's Gemma 3 & 3n models into FlutterFlow projects. This opens up local on-device AI capabilities with authenticated model downloads and real-time chat functionality.
dgtlss
A Laravel package that enables semantic search using vector embeddings for better relevance in content-heavy applications like blogs, e-commerce, or knowledge bases. Supports multiple AI providers including OpenAI, Google Gemini, and local Ollama models.
bgeneto
An Espanso package that enables users to quickly send prompts to a local (e.g. Ollama or LM Studio) or remote (OpenAI-standard) LLM API and insert the AI-generated response directly into any text field.
AzizBahloul
A Python package: an AI-powered crash log analyzer with memory and automated setup for local/API LLMs.
devswha
Local-only AI coding agent powered by Gemma 4 via candle. Runs entirely offline — packaged as a ZIP for air-gapped environments.
SorenMaagaard
A Model Context Protocol (MCP) server that provides AI assistants with tools to explore and analyze NuGet packages from your local cache.
patidarganesh
Open-source AI security scanner for AI agents and skill packages. Detect prompt injection, data exfiltration, hardcoded secrets, and malicious logic before running AI tools. Supports OpenAI, Anthropic, Gemini, OpenRouter, and local Ollama models.
alfredobs97
A local Model Context Protocol (MCP) server that provides an AI agent with the ability to fetch documentation context for Dart and Flutter packages from pub.dev.
tuanle277
Package: This project provides a Python client for interacting with Google's Generative AI services using the Gemini model. The client enables users to generate AI responses from text prompts, images, and videos, either from local files or URLs. It includes a command-line interface for easy configuration and use.
stevedwray
No description available
rudra2001-coder
No description available
jeanhackpy
Local AI stack with OpenClaw integration
codingwithalina
Run all your local AI together in one package - Ollama, Supabase, n8n, Open WebUI, and more!