Found 10 repositories (showing 10)
dockersamples
Sample apps to demonstrate the power of docker init
chantxu09231
Environment setup (Ubuntu):

$ sudo apt-get update; sudo apt-get upgrade
$ sudo apt-get install curl; sudo apt-get install git
$ curl -fsSL https://get.docker.com/ | sh
$ sudo vi /etc/default/docker   (add the following line)
  DOCKER_OPTS="$DOCKER_OPTS -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --api-cors-header='*'"
$ sudo service docker restart
$ sudo usermod -aG docker slim
$ sudo login
$ sudo apt-get install python-pip
$ sudo pip install docker-compose==1.14.0

Download the Fabric 1.2.0 samples and binaries:

$ cd /home/slim/
$ curl -sSL http://bit.ly/2ysbOFE | bash -s 1.2.0
$ export PATH=/home/slim/fabric-samples/bin:$PATH
$ ls ~/fabric-samples/bin
configtxgen  configtxlator  cryptogen  discover  fabric-ca-client  get-docker-images.sh  idemixgen  orderer  peer
$ docker images | grep 1.2.0

// One channel, multiple chaincodes
$ cd ~/fabric-samples/chaincode-docker-devmode   (myc.tx and orderer.block are provided beforehand)
$ docker-compose -f docker-compose-simple.yaml up -d
$ docker exec -it chaincode bash -c "stty cols 1024 && bash"   (to run the chaincode)
# cd sacc/go
# go build   (compile the chaincode)
# CORE_PEER_ADDRESS=peer:7052 CORE_CHAINCODE_ID_NAME=mycc:0 ./go &

// Open a second terminal
$ cd ~/fabric-samples/chaincode-docker-devmode
$ docker exec -it cli bash -c "stty cols 1024 && bash"
# peer chaincode install -p chaincodedev/chaincode/sacc/go -n mycc -v 0
// The chaincode receives the transaction proposal args
# peer chaincode instantiate -n mycc -v 0 -c '{"Args":["a","10"]}' -C myc
# peer chaincode invoke -n mycc -c '{"Args":["set","a","20"]}' -C myc
# peer chaincode invoke -n mycc -c '{"Args":["get","a"]}' -C myc
payload:"20"

The shim interface:

$ ls /opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/shim
chaincode.go  ext  handler.go  inprocstream.go  inprocstream_test.go  interfaces.go  mockstub.go  mockstub_test.go  response.go  shim_test.go

// In the second terminal
$ docker exec -it cli bash -c "stty cols 1024 && bash"
# peer chaincode install -p chaincodedev/chaincode/chaincode_example02/go -n mycc1 -v 0
# peer chaincode instantiate -n mycc1 -v 0 -c '{"Args":["init","a","100","b","200"]}' -C myc
# peer chaincode invoke -n mycc1 -c '{"Args":["invoke","a","b","10"]}' -C myc
# peer chaincode invoke -n mycc1 -c '{"Args":["query","a"]}' -C myc
payload:"90"

// Build & start the next chaincode in the first terminal
$ docker exec -it chaincode bash
# cd marbles02/go
# go build
# CORE_PEER_ADDRESS=peer:7052 CORE_CHAINCODE_ID_NAME=mycc2:0 ./go &   (the third chaincode, id mycc2)
# ps aux
./sacc/go  ./chaincode_example02/go  ./marbles02/go

// In the second terminal
$ docker exec -it cli bash -c "stty cols 1024 && bash"
# peer chaincode install -p chaincodedev/chaincode/marbles02/go -n mycc2 -v 0
# peer chaincode instantiate -n mycc2 -v 0 -c '{"Args":["init"]}' -C myc
# peer chaincode invoke -n mycc2 -c '{"Args":["initMarble","marble1","blue","35","tom"]}' -C myc   (create marble1)
# peer chaincode invoke -n mycc2 -c '{"Args":["initMarble","marble2","red","50","tom"]}' -C myc   (create marble2)
# peer chaincode invoke -n mycc2 -c '{"Args":["initMarble","marble3","blue","70","tom"]}' -C myc   (create marble3)
// Transfer ownership of marble2 to jerry
# peer chaincode invoke -n mycc2 -c '{"Args":["transferMarble","marble2","jerry"]}' -C myc
// Verify that the transfer succeeded
# peer chaincode invoke -n mycc2 -c '{"Args":["readMarble","marble2"]}' -C myc
// Transfer all marbles of a given color to a new owner
# peer chaincode invoke -n mycc2 -c '{"Args":["transferMarblesBasedOnColor","blue","jerry"]}' -C myc   (make jerry the owner of the blue marbles)
// Verify that marble3 (blue) is now owned by jerry
# peer chaincode invoke -n mycc2 -c '{"Args":["readMarble","marble3"]}' -C myc
{\"docType\":\"marble\",\"name\":\"marble3\",\"color\":\"blue\",\"size\":70,\"owner\":\"jerry\"}
// Query marble1, marble2, marble3
# peer chaincode invoke -C myc -n mycc2 -c '{"Args":["getMarblesByRange","marble1","marble4"]}'
// Delete marble1
# peer chaincode invoke -n mycc2 -c '{"Args":["delete","marble1"]}' -C myc
// Verify that marble1 was deleted
# peer chaincode invoke -n mycc2 -c '{"Args":["readMarble","marble1"]}' -C myc

// Build & start the next chaincode in the first terminal
$ docker exec -it chaincode bash -c "stty cols 1024 && bash"
# cd fabcar/go
# go build
# CORE_PEER_ADDRESS=peer:7052 CORE_CHAINCODE_ID_NAME=mycc3:0 ./go &   (the fourth chaincode, id mycc3)

// Open the second terminal
$ docker exec -it cli bash -c "stty cols 1024 && bash"
# peer chaincode install -n mycc3 -v 0 -p chaincodedev/chaincode/fabcar/go
# peer chaincode instantiate -n mycc3 -v 0 -c '{"Args":[""]}' -C myc
# peer chaincode invoke -n mycc3 -c '{"function":"initLedger","Args":[""]}' -C myc
# peer chaincode invoke -n mycc3 -c '{"function":"queryAllCars","Args":[""]}' -C myc

// Tear down
$ docker rm -f $(docker ps -a -q)

Notes (back to page 6):

Linux text editors: vi, pico, nano

Kill a process occupying a local port:
$ sudo netstat -pna | grep xxxx   (xxxx: port number)
tcp 0 0 0.0.0.0:xxxx 0.0.0.0:* LISTEN ****/node
$ kill -9 ****

Remove all containers:
$ docker rm -f $(docker ps -a -q)
Remove all images:
$ docker rmi -f $(docker images -a -q)

// Ctrl+Alt+F1 (text console), Ctrl+Alt+F7 (graphical interface)

References:
https://github.com/hyperledger/fabric/blob/master/docs/source/peer-chaincode-devmode.rst
https://media.readthedocs.org/pdf/hyperledger-fabric/latest/hyperledger-fabric.pdf
https://hyperledger-fabric.readthedocs.io/en/release-1.2/configtx.html
https://hyperledger-fabric.readthedocs.io/en/release-1.2/getting_started.html
https://godoc.org/github.com/hyperledger/fabric/core/chaincode/shim
https://github.com/kigichang/golang (the Go language)
http://man.linuxde.net/nano
https://pws.niu.edu.tw/~ttlee/linux.101.1/14.ppt (the nano editor)
http://linux.vbird.org/linux_basic/0310vi/0310vi.php (the vi editor)
http://wuhsiublog.blogspot.com/2017/02/virtualboxwindows-10puttyubuntusshnat.html (SSH connection in NAT mode)
https://ithelp.ithome.com.tw/users/20079210/ironman/721 (Golang for beginners in 30 days)
https://blog.csdn.net/TripleS_X/article/details/80550401 (chaincode development example)
https://openhome.cc/Gossip/Go/Testing.html (the Go testing package)
// Hyperledger Fabric chaincode (Go) development and testing examples
https://blog.csdn.net/TripleS_X/article/details/80550401
https://github.com/mh4u/chaincode_demo
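For reference, the sacc sample invoked above is a minimal key/value chaincode: instantiation stores an initial key/value pair, "set" overwrites it, and "get" reads it back. Below is a rough plain-Go sketch of that logic, modeling the world state as an in-memory map rather than the shim's ChaincodeStubInterface (the Ledger type and the set/get helpers here are illustrative stand-ins, not the sample's actual API, which uses stub.PutState/GetState):

```go
package main

import (
	"errors"
	"fmt"
)

// Ledger stands in for the world state that the real chaincode
// accesses through shim.ChaincodeStubInterface (PutState/GetState).
type Ledger map[string]string

// set stores a value under a key, mirroring sacc's "set" function.
func set(l Ledger, args []string) (string, error) {
	if len(args) != 2 {
		return "", errors.New("incorrect arguments: expecting a key and a value")
	}
	l[args[0]] = args[1]
	return args[1], nil
}

// get returns the value stored under a key, mirroring sacc's "get" function.
func get(l Ledger, args []string) (string, error) {
	if len(args) != 1 {
		return "", errors.New("incorrect arguments: expecting a key")
	}
	v, ok := l[args[0]]
	if !ok {
		return "", fmt.Errorf("asset not found: %s", args[0])
	}
	return v, nil
}

func main() {
	l := Ledger{}
	set(l, []string{"a", "10"})   // instantiate: '{"Args":["a","10"]}'
	set(l, []string{"a", "20"})   // invoke: '{"Args":["set","a","20"]}'
	v, _ := get(l, []string{"a"}) // invoke: '{"Args":["get","a"]}'
	fmt.Printf("payload:%q\n", v) // prints payload:"20"
}
```

Running this prints payload:"20", matching the CLI output in the transcript above.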
7erry
Hazelcast Jet and IMDG with a MySQL source and CDC updates via the Kafka Connect Debezium connector, for a scenario with batch and real-time updates of securities data in MySQL.

In this example we test both real-time and batch feeds into IMDG, the write-through and read-through capabilities of Hazelcast IMDG, and data pipelining via Hazelcast Jet working against an RDBMS system of record (SOR). The RDBMS can be updated by many apps, and we demonstrate CDC via Kafka, using the Debezium MySQL connector, to refresh the IMDG asynchronously, and also synchronously via a client app.

Setup is with docker-compose, using 7 containers:
- 1 Lenses.io container with single-node Kafka, Schema Registry, Kafka Connect, and a Kafka REST server, plus a Kafka management UI
- 2 Hazelcast Jet/IMDG cluster node containers (hazelcast1, hazelcast2)
- 1 Hazelcast Jet/IMDG container used for submitting Jet jobs (hz_jet_submit)
- 1 Hazelcast IMDG Management Center container (mancenter)
- 1 Hazelcast Jet Management Center container (hz_jet_mancenter)
- 1 MySQL source DB container (mysql) - the SOR for the demo

Use docker-compose to fire up all of the above containers:

docker-compose -f hazelcast-jet-ent-docker-compose.yaml up -d

We get real-time and historical stock market data from the Alphavantage Inc. APIs: daily OHLCV (open/high/low/close/volume) for the past 20 years (1999-2019) for the 30 stocks in the Dow Jones Industrial Average, as well as real-time data in 1-minute increments for the same stocks over a 7-day window. The MySQL container has a /scripts folder for data loading, which creates a securities_master database and populates tables with the stock data above as well as reference data on all S&P 500 stocks. To help with editing/debugging of the DB scripts without having to re-spin the containers every time, and to persist data locally on the host, the MySQL scripts, conf, and data folders are mapped to a host volume via docker-compose.
To run the data loading scripts inside the container:

hazelcast-kafka-cdc-test $ docker exec -it mysql bash
root@ee8690b41e43:/# /scripts/init-create-load-databases-tables.sh

The Kafka container has a scripts folder with the configuration for the Kafka Connect Debezium MySQL connector that handles CDC updates from the MySQL source DB. To configure the Debezium connector, open the Lenses UI on the Kafka container at http://localhost:3030 (username: admin, password: admin). Navigate to Connectors -> Create New Connector -> choose "CDC for MySQL", then copy and paste the properties (the uncommented part of the file) from kafka-scripts/debezium-mysql-connector.properties into the UI. The connector should start up and create several topics: one for capturing database-wide DDL events, plus one topic per table for that table's update events. To help with editing/debugging of scripts or config files without having to re-spin the containers every time, and to persist data locally on the host, the script and config files are mapped to a host volume via docker-compose.

The Hazelcast Management Center is at http://localhost:8080/hazelcast-mancenter/login.html - set up a login and verify that the cluster hz-jet-ent-cluster is operational with 3 Jet/IMDG nodes (running on ports 5701, 5702, 5703 of the host). The Hazelcast Jet Management Center is at http://localhost:9090 - use the default credentials (admin/admin) to log in and verify the same cluster state.

There is also a test Hazelcast Jet job provided as a Maven project in the kafka2imap folder of the repo. It reads the topic "sp500_stocks" and writes to a Hazelcast IMap called "securities_master.sp500_stocks". To build the shaded (fat) jar for submitting this job to Jet, run mvn clean package on the provided pom.xml in the kafka2imap project folder.
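The Debezium MySQL connector is configured through connector properties. The repo's actual settings live in kafka-scripts/debezium-mysql-connector.properties; as a hedged illustration only, a minimal config of this shape is typical (all values below, including the connector name and the debezium user, are assumptions for this sketch, not taken from the repo):

```properties
# Illustrative Debezium MySQL source connector config (assumed values;
# the real settings are in kafka-scripts/debezium-mysql-connector.properties)
name=mysql-cdc-securities
connector.class=io.debezium.connector.mysql.MySqlConnector
database.hostname=mysql
database.port=3306
database.user=debezium
database.password=dbz
database.server.id=184054
# server.name becomes the topic prefix: one topic per captured table
database.server.name=securities_master
database.whitelist=securities_master
# Debezium stores the DB schema history in its own Kafka topic
database.history.kafka.bootstrap.servers=localhost:9092
database.history.kafka.topic=schema-changes.securities_master
```

The database.server.name prefix explains the topic layout described above: Debezium emits one DDL-history topic plus one change-event topic per table.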
This creates a jar with all dependencies included. (Note: to use Jet's built-in wrapper script for submitting jobs from the command line to the Jet/IMDG cluster, we need to package all dependencies and submit them together, or add the dependencies to the classpath as part of job submission, since there is no guarantee that the dependencies will be available on all nodes in the grid.) To help with editing/debugging of scripts or config files without having to re-spin the containers every time, and to persist data locally on the host, there is a job-jars folder for job artifacts and a resources folder with all the script and config files, both mapped to a host volume via docker-compose.

To submit jobs to Jet using the submit command-line utility, there is a convenience wrapper script run inside the hz_jet_submit container:

hazelcast-kafka-cdc-test $ docker exec -it hz_jet_submit bash
bash-4.4# pwd
/opt/hazelcast-jet-enterprise
bash-4.4# cd job-jars/
bash-4.4# ls
kafka2imap-1.0-SNAPSHOT.jar  run-kafka2imap-job.sh  run-wordcount-job.sh  test.out
bash-4.4# ./run-kafka2imap-job.sh
Verbose mode is on, setting logging level to INFO
Submitting JAR './job-jars/kafka2imap-1.0-SNAPSHOT.jar' with arguments []
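The write-through and read-through behavior exercised in this demo follows a standard caching pattern: reads that miss the cache load from the system of record, and writes go to the cache and the SOR together. In Hazelcast the pattern is implemented by a Java MapStore attached to an IMap; as a language-neutral illustration, here is a toy Go sketch of the same idea (the Store interface, memStore, and Cache types are all hypothetical names for this sketch, not Hazelcast APIs):

```go
package main

import "fmt"

// Store models the backing RDBMS (the SOR): the cache delegates to it
// on a read miss and on every write.
type Store interface {
	Load(key string) (string, bool)
	Save(key, value string)
}

// memStore is an in-memory stand-in for MySQL in this sketch.
type memStore struct{ rows map[string]string }

func (s *memStore) Load(k string) (string, bool) { v, ok := s.rows[k]; return v, ok }
func (s *memStore) Save(k, v string)             { s.rows[k] = v }

// Cache is a toy read-through/write-through map, loosely modeling what a
// Hazelcast MapStore does for an IMap (the real thing is Java and also
// handles batching, eviction, and write-behind mode).
type Cache struct {
	entries map[string]string
	store   Store
}

func NewCache(s Store) *Cache {
	return &Cache{entries: map[string]string{}, store: s}
}

// Get serves from the cache, loading from the SOR on a miss (read-through).
func (c *Cache) Get(k string) (string, bool) {
	if v, ok := c.entries[k]; ok {
		return v, true
	}
	v, ok := c.store.Load(k)
	if ok {
		c.entries[k] = v // populate the cache for later reads
	}
	return v, ok
}

// Put updates the cache and writes to the SOR in the same call (write-through).
func (c *Cache) Put(k, v string) {
	c.entries[k] = v
	c.store.Save(k, v)
}

func main() {
	db := &memStore{rows: map[string]string{"AAPL": "150.00"}}
	c := NewCache(db)
	v, _ := c.Get("AAPL") // cache miss, loaded from the store
	fmt.Println(v)
	c.Put("MSFT", "95.00") // written to cache and store together
	fmt.Println(db.rows["MSFT"])
}
```

In the demo, CDC closes the remaining gap: updates made to MySQL by other apps bypass the cache entirely, so Debezium change events are pushed through Kafka and the Jet job refreshes the IMap asynchronously.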
nholuongut
This repository provides a collection of sample applications in different programming languages, along with Docker init support for each language. It's a great starting point for developers who want to quickly get familiar with Docker init across various programming languages.
jhonbilly
Sample apps to demonstrate the power of docker init
fazlan-nazeem
No description available
darrylwest
demo of generating docker env using docker init
shirkerohit
Sample CRUD app with pgsql and express (Demo for docker init)
LouisLU9911
A simple demo to show how the init process affects docker run behaviors.
killerjoker73
FastAPI demo showing modern containerization with docker init. Generates Dockerfile, Compose, and .dockerignore automatically; includes clean endpoints (/, /health, /echo, /sum), a tiny pytest suite, and a one-command run via docker compose up. Perfect for screenshots, blogging, and showcasing DevOps best practices.
All 10 repositories loaded