Found 914 repositories (showing 30)
shreyashankar
A list of datasets for applying stats, machine learning, and technology to social good.
ColinIanKing
Powerstat measures the power consumption of a machine using the battery stats or the Intel RAPL interface. The output is like vmstat but also shows power consumption statistics. At the end of a run, powerstat will calculate the average, standard deviation and min/max of the gathered data.
spacesdrive
Every analytics tool wants a subscription. And your data. I built Twiligent instead. YouTube stats. Instagram stats. One dashboard. Runs locally on your machine.
tirthajyoti
Miscellaneous statistics and machine learning code in R
treeform
Cross-platform way to find common system resources like OS, OS version, machine name, CPU stats...
There are many good resources for learning Git. (Here's an excellent online book, and this is my video series introducing Git and GitHub.) But once you've learned the basics, it can be hard to remember which commands to use to execute the most common tasks. I went searching for a Git reference guide that would be useful for beginners like myself, but didn't find anything ideal: Git - the simple guide is useful as a high-level overview of the basic commands, but doesn't provide enough details. Git Cheatsheet uses a nice interactive approach to summarize a ton of git commands on one screen, but it doesn't give you any sense of workflow. Git Reference is close to what I was looking for, and links each entry to the relevant section of Pro Git (awesome!), but is too long for a quick reference. So, I decided to make my own reference guide!

The guide below is organized by task, with an emphasis on basic tasks and common command-line arguments. It begins with the workflow for cloning, updating, and syncing with a remote repo because that's a common way to get started with Git and GitHub. Note that this is only a reference guide, and will not teach you Git. It does not explain the difference between staged and committed, what to do with a .gitignore file, or when to create a branch. But if you are already familiar with those concepts, this guide will hopefully refresh your memory and help you to discover other commands you might need. Please enjoy, and let me know your thoughts or questions in the comments!

Cloning a remote repo (that you created or forked on GitHub)
- git clone <your-repo-URL>: copies your remote repo to your local machine (in a subdirectory with the repo's name), and automatically creates an "origin" handle
- git remote add upstream <forked-repo-URL>: adds an "upstream" handle for the repo you forked
- git remote -v: shows the handles for your remotes
- git remote show <handlename>: inspect a remote in detail

Tracking, committing, and pushing your changes
- git add <name>: if untracked, start tracking a file or directory; if tracked and modified, stage it for committing
- git reset HEAD <name>: unstage a changed file
- git commit -m "message": commits everything that has been staged with a message
  - -a -m "message": automatically stages any modified files, then commits
  - --amend -m "new message": fixes the message from the last commit
- git push origin master: pushes your commits to the master branch of the origin

Syncing your local repo with the upstream repo
- git fetch upstream: fetch the upstream and store its master branch in "upstream/master"
- git merge upstream/master: merge that branch into the working branch

Viewing the status of your files
- git status: check which files have been modified and/or staged since the last commit
- git diff: shows the diff for files that are modified but not staged
  - --staged: shows the diff for files that are staged but not committed

Viewing the commit history
- git log: shows the detailed commit history
  - -1: only shows the last 1 commit
  - -p: shows the line diff for each commit
  - -p --word-diff: shows the word diff for each commit
  - --stat: shows stats instead of diff details
  - --name-status: shows a simpler version of stat
  - --oneline: just shows commit comments
- gitk: open a visual commit browser

Managing branches
- git branch: shows a list of local branches
  - <branchname>: create a new branch with that name
  - -d <branchname>: delete a branch
  - -v: show the last commit on each local branch
  - -a: show local and remote branches
  - -va: show the last commit on each local and remote branch
  - --merged: list which branches are already merged into the working branch (safe to delete)
  - --no-merged: list which branches are not merged into the working branch
- git checkout <branchname>: switch the HEAD pointer to a different branch
  - -b <branchname>: create a new branch and switch to it

Removing, deleting, and reverting files
- git rm <name>: deletes that file from the disk, then stages its deletion
  - --cached <name>: stops tracking a file, then stages its deletion (but does not delete it from the disk)
- git mv <oldname> <newname>: renames the file on disk, then stages the deletion of the old name and addition of the new name
- git checkout -- <name>: revert a modified file on disk back to the last committed version

Other basic commands
- git init: initialize Git in an existing directory
- git config --list: shows your Git configuration
- touch .gitignore: create an empty .gitignore file
vimalgandhi
# Docker Commands, Help & Tips

### Show commands & management commands
``` $ docker ```
### Docker version info
``` $ docker version ```
### Show info like number of containers, etc.
``` $ docker info ```

# WORKING WITH CONTAINERS

### Create and run a container in foreground
``` $ docker container run -it -p 80:80 nginx ```
### Create and run a container in background
``` $ docker container run -d -p 80:80 nginx ```
### Shorthand
``` $ docker run -d -p 80:80 nginx ```
### Naming containers
``` $ docker container run -d -p 80:80 --name nginx-server nginx ```
### TIP: WHAT RUN DID
- Looked for an image called nginx in the image cache
- If not found in the cache, it looks to the default image repo on Dockerhub
- Pulled it down (latest version) and stored it in the image cache
- Started it in a new container
- We specified to take port 80 on the host and forward to port 80 on the container
- We could do "$ docker container run --publish 8000:80 --detach nginx" to use port 8000
- We can specify versions like "nginx:1.09"
### List running containers
``` $ docker container ls ```
OR
``` $ docker ps ```
### List all containers (even if not running)
``` $ docker container ls -a ```
### Stop container
``` $ docker container stop [ID] ```
### Stop all running containers
``` $ docker stop $(docker ps -aq) ```
### Remove container (cannot remove running containers, must stop first)
``` $ docker container rm [ID] ```
### To remove a running container use force (-f)
``` $ docker container rm -f [ID] ```
### Remove multiple containers
``` $ docker container rm [ID] [ID] [ID] ```
### Remove all containers
``` $ docker rm $(docker ps -aq) ```
### Get logs (use name or ID)
``` $ docker container logs [NAME] ```
### List processes running in container
``` $ docker container top [NAME] ```
#### TIP: ABOUT CONTAINERS
Docker containers are often compared to virtual machines, but they are actually just processes running on your host OS. On Windows/Mac, Docker runs in a mini-VM, so to see the processes you'll need to connect directly to that. On Linux, however, you can run "ps aux" and see the processes directly.

# IMAGE COMMANDS

### List the images we have pulled
``` $ docker image ls ```
### We can also just pull down images
``` $ docker pull [IMAGE] ```
### Remove image
``` $ docker image rm [IMAGE] ```
### Remove all images
``` $ docker rmi $(docker images -a -q) ```
#### TIP: ABOUT IMAGES
- Images are app binaries and dependencies with metadata about the image data and how to run the image
- Images are not a complete OS. No kernel, no kernel modules (drivers)
- The host provides the kernel, a big difference from a VM

### Some sample container creation
NGINX:
``` $ docker container run -d -p 80:80 --name nginx nginx ``` (-p 80:80 is optional as it runs on 80 by default)
APACHE:
``` $ docker container run -d -p 8080:80 --name apache httpd ```
MONGODB:
``` $ docker container run -d -p 27017:27017 --name mongo mongo ```
MYSQL:
``` $ docker container run -d -p 3306:3306 --name mysql --env MYSQL_ROOT_PASSWORD=123456 mysql ```

## CONTAINER INFO
### View info on container
``` $ docker container inspect [NAME] ```
### Specific property (--format)
``` $ docker container inspect --format '{{ .NetworkSettings.IPAddress }}' [NAME] ```
### Performance stats (CPU, mem, network, disk, etc.)
``` $ docker container stats [NAME] ```

## ACCESSING CONTAINERS
### Create a new nginx container and bash into it
``` $ docker container run -it --name [NAME] nginx bash ```
- i = interactive: keep STDIN open if not attached
- t = tty: open prompt
**For Git Bash, use "winpty"**
``` $ winpty docker container run -it --name [NAME] nginx bash ```
### Run/create Ubuntu container
``` $ docker container run -it --name ubuntu ubuntu ```
**(no bash because ubuntu uses bash by default)**
### You can also make it so the container does not stay around after you exit by using the --rm flag
``` $ docker container run --rm -it --name [NAME] ubuntu ```
### Access an already created container, start with -ai
``` $ docker container start -ai ubuntu ```
### Use exec to edit config, etc.
``` $ docker container exec -it mysql bash ```
### Alpine is a very small Linux distro good for Docker
``` $ docker container run -it alpine sh ```
(use sh because it does not include bash)
(alpine uses apk for its package manager - you can install bash if you want)

# NETWORKING
### "bridge" or "docker0" is the default network
### Get port
``` $ docker container port [NAME] ```
### List networks
``` $ docker network ls ```
### Inspect network ("bridge" is default)
``` $ docker network inspect [NETWORK_NAME] ```
### Create network
``` $ docker network create [NETWORK_NAME] ```
### Create container on network
``` $ docker container run -d --name [NAME] --network [NETWORK_NAME] nginx ```
### Connect existing container to network
``` $ docker network connect [NETWORK_NAME] [CONTAINER_NAME] ```
### Disconnect container from network
``` $ docker network disconnect [NETWORK_NAME] [CONTAINER_NAME] ```
### Detach network from container
``` $ docker network disconnect ```

# IMAGE TAGGING & PUSHING TO DOCKERHUB
Tags are labels that point to an image ID.
``` $ docker image ls ```
You'll see that each image has a tag.
### Retag existing image
``` $ docker image tag nginx btraversy/nginx ```
### Upload to Dockerhub
``` $ docker image push bradtraversy/nginx ```
### If denied, do
``` $ docker login ```
### Add tag to new image
``` $ docker image tag bradtraversy/nginx bradtraversy/nginx:testing ```
### DOCKERFILE PARTS
- FROM - The OS used. Common: alpine, debian, ubuntu
- ENV - Environment variables
- RUN - Run commands/shell scripts, etc.
- EXPOSE - Ports to expose
- CMD - Final command run when you launch a new container from the image
- WORKDIR - Sets working directory (could also use 'RUN cd /some/path')
- COPY - Copies files from host to container
### Build image from Dockerfile (reponame can be whatever)
### From the same directory as the Dockerfile
``` $ docker image build -t [REPONAME] . ```
#### TIP: CACHE & ORDER
- If you re-run the build, it will be quick because everything is cached
- If you change one line and re-run, that line and everything after it will not be cached
- Keep the things that change the most toward the bottom of the Dockerfile

# EXTENDING DOCKERFILE
### Custom Dockerfile for an HTML page with nginx
```
# Extends nginx, so everything included in that image is included here
FROM nginx:latest
WORKDIR /usr/share/nginx/html
COPY index.html index.html
```
### Build image from Dockerfile
``` $ docker image build -t nginx-website . ```
### Running it
``` $ docker container run -p 80:80 --rm nginx-website ```
### Tag and push to Dockerhub
``` $ docker image tag nginx-website:latest btraversy/nginx-website:latest ```
``` $ docker image push bradtraversy/nginx-website ```

# VOLUMES
### Volume - makes a special location outside of the container UFS. Used for databases
### Bind Mount - links a container path to a host path
### Check volumes
``` $ docker volume ls ```
### Cleanup unused volumes
``` $ docker volume prune ```
### Pull down mysql image to test
``` $ docker pull mysql ```
### Inspect and see volume
``` $ docker image inspect mysql ```
### Run container
``` $ docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True mysql ```
### Inspect and see volume in container
``` $ docker container inspect mysql ```
#### TIP: Mounts
- You will also see the volume under mounts
- The container gets its own unique location on the host to store that data
- Source: xxx is where it lives on the host
### Check volumes
``` $ docker volume ls ```
**There is no way to tell volumes apart (for instance with 2 mysql containers), so we use named volumes**
### Named volumes (add the -v option) (the name here is mysql-db, which could be anything)
``` $ docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql-db:/var/lib/mysql mysql ```
### Inspect new named volume
``` $ docker volume inspect mysql-db ```

# BIND MOUNTS
- Cannot be used in a Dockerfile, specified at run time (also uses -v)
- ... run -v /Users/brad/stuff:/path/container (mac/linux)
- ... run -v //c/Users/brad/stuff:/path/container (windows)
**TIP: Instead of typing out the local path, use $(pwd):/path/container for the working directory - on Windows this may not work unless you are in your users folder**
### Run and be able to edit the index.html file (the local dir should have the Dockerfile and the index.html)
``` $ docker container run -p 80:80 -v $(pwd):/usr/share/nginx/html nginx ```
### Go into the container and check
```
$ docker container exec -it nginx bash
$ cd /usr/share/nginx/html
$ ls -al
```
### You could create a file in the container and it will exist on the host as well
``` $ touch test.txt ```

# DOCKER COMPOSE
- Configure relationships between containers
- Save our docker container run settings in an easy-to-read file
- 2 parts: YAML file (docker-compose.yml) + CLI tool (docker-compose)
### 1. docker-compose.yml
- Describes solutions for containers, networks, and volumes
### 2. docker-compose CLI
- Used for local dev/test automation with YAML files
### Sample compose file (from Bret Fisher's course)
```
version: '2'

# same as
# docker run -p 80:4000 -v $(pwd):/site bretfisher/jekyll-serve

services:
  jekyll:
    image: bretfisher/jekyll-serve
    volumes:
      - .:/site
    ports:
      - '80:4000'
```
### To run
``` docker-compose up ```
### You can run in the background with
``` docker-compose up -d ```
### To clean up
``` docker-compose down ```
Rushikesh8983
Language Translation

In this project, you're going to take a peek into the realm of neural network machine translation. You'll be training a sequence-to-sequence model on a dataset of English and French sentences that can translate new sentences from English to French.

Get the Data

Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.

```
""" DON'T MODIFY ANYTHING IN THIS CELL """
import helper
import problem_unittests as tests

source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
```

Explore the Data

Play around with view_sentence_range to view different parts of the data.

```
view_sentence_range = (0, 10)

""" DON'T MODIFY ANYTHING IN THIS CELL """
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))

sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))

print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
```

Dataset Stats
Roughly the number of unique words: 227
Number of sentences: 137861
Average number of words in a sentence: 13.225277634719028

English sentences 0 to 10:
new jersey is sometimes quiet during autumn , and it is snowy in april .
the united states is usually chilly during july , and it is usually freezing in november .
california is usually quiet during march , and it is usually hot in june .
the united states is sometimes mild during june , and it is cold in september .
your least liked fruit is the grape , but my least liked is the apple .
his favorite fruit is the orange , but my favorite is the grape .
paris is relaxing during december , but it is usually chilly in july .
new jersey is busy during spring , and it is never hot in march .
our least liked fruit is the lemon , but my least liked is the grape .
the united states is sometimes busy during january , and it is sometimes warm in november .

French sentences 0 to 10:
new jersey est parfois calme pendant l' automne , et il est neigeux en avril .
les états-unis est généralement froid en juillet , et il gèle habituellement en novembre .
california est généralement calme en mars , et il est généralement chaud en juin .
les états-unis est parfois légère en juin , et il fait froid en septembre .
votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme .
son fruit préféré est l'orange , mais mon préféré est le raisin .
paris est relaxant en décembre , mais il est généralement froid en juillet .
new jersey est occupé au printemps , et il est jamais chaude en mars .
notre fruit est moins aimé le citron , mais mon moins aimé est le raisin .
les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre .

Implement Preprocessing Function

Text to Word Ids

As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.

You can get the <EOS> word id by doing: target_vocab_to_int['<EOS>']. You can get other word ids using source_vocab_to_int and target_vocab_to_int.

```
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionary to go from the source words to an id
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: A tuple of lists (source_id_text, target_id_text)
    """
    # TODO: Implement Function
    source_id_text = [[source_vocab_to_int[word] for word in sentence.split()] \
                      for sentence in source_text.split('\n')]
    target_id_text = [[target_vocab_to_int[word] for word in sentence.split()] + [target_vocab_to_int['<EOS>']] \
                      for sentence in target_text.split('\n')]
    return source_id_text, target_id_text


""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_text_to_ids(text_to_ids)
```

Tests Passed

Preprocess all the data and save it

Running the code cell below will preprocess all the data and save it to file.

```
""" DON'T MODIFY ANYTHING IN THIS CELL """
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
```

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.

```
import problem_unittests as tests

""" DON'T MODIFY ANYTHING IN THIS CELL """
import numpy as np
import helper

(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
```

Check the Version of TensorFlow and Access to GPU

This will check to make sure you have the correct version of TensorFlow and access to a GPU.

```
""" DON'T MODIFY ANYTHING IN THIS CELL """
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
```

TensorFlow Version: 1.1.0
Default GPU Device: /gpu:0

Build the Neural Network

You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:

- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model

Input

Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:

- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
- Targets placeholder with rank 2.
- Learning rate placeholder with rank 0.
- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
- Target sequence length placeholder named "target_sequence_length" with rank 1.
- Max target sequence length tensor named "max_target_len", getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
- Source sequence length placeholder named "source_sequence_length" with rank 1.

Return the placeholders in the following tuple: (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length).

```
def model_inputs():
    """
    Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
    :return: Tuple (input, targets, learning rate, keep probability, target sequence length,
             max target sequence length, source sequence length)
    """
    # TODO: Implement Function
    inputs = tf.placeholder(tf.int32, [None, None], 'input')
    targets = tf.placeholder(tf.int32, [None, None])
    learning_rate = tf.placeholder(tf.float32, [])
    keep_prob = tf.placeholder(tf.float32, [], 'keep_prob')
    target_sequence_length = tf.placeholder(tf.int32, [None], 'target_sequence_length')
    max_target_len = tf.reduce_max(target_sequence_length)
    source_sequence_length = tf.placeholder(tf.int32, [None], 'source_sequence_length')
    return inputs, targets, learning_rate, keep_prob, target_sequence_length, max_target_len, source_sequence_length


""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_model_inputs(model_inputs)
```

Tests Passed

Process Decoder Input

Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.

```
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for encoding
    :param target_data: Target Placeholder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    """
    # TODO: Implement Function
    go = tf.constant([[target_vocab_to_int['<GO>']]] * batch_size)
    # end = tf.slice(target_data, [0, 0], [-1, batch_size])
    end = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
    return tf.concat([go, end], 1)


""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_process_encoding_input(process_decoder_input)
```

Tests Passed

Encoding

Implement encoding_layer() to create an Encoder RNN layer:

- Embed the encoder input using tf.contrib.layers.embed_sequence
- Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
- Pass cell and embedded input to tf.nn.dynamic_rnn()

```
from imp import reload
reload(tests)

def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
                   source_sequence_length, source_vocab_size,
                   encoding_embedding_size):
    """
    Create encoding layer
    :param rnn_inputs: Inputs for the RNN
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param keep_prob: Dropout keep probability
    :param source_sequence_length: a list of the lengths of each sequence in the batch
    :param source_vocab_size: vocabulary size of source data
    :param encoding_embedding_size: embedding size of source data
    :return: tuple (RNN output, RNN state)
    """
    # TODO: Implement Function
    embed = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)

    def lstm_cell():
        lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
        return tf.contrib.rnn.DropoutWrapper(lstm, keep_prob)

    stacked_lstm = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)])
    # initial_state = stacked_lstm.zero_state(source_sequence_length, tf.float32)
    return tf.nn.dynamic_rnn(stacked_lstm, embed, source_sequence_length, dtype=tf.float32)
    # initial_state=initial_state)


""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_encoding_layer(encoding_layer)
```

Tests Passed

Decoding - Training

Create a training decoding layer:

- Create a tf.contrib.seq2seq.TrainingHelper
- Create a tf.contrib.seq2seq.BasicDecoder
- Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode

```
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
                         target_sequence_length, max_summary_length,
                         output_layer, keep_prob):
    """
    Create a decoding layer for training
    :param encoder_state: Encoder State
    :param dec_cell: Decoder RNN Cell
    :param dec_embed_input: Decoder embedded input
    :param target_sequence_length: The lengths of each sequence in the target batch
    :param max_summary_length: The length of the longest sequence in the batch
    :param output_layer: Function to apply the output layer
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing training logits and sample_id
    """
    # TODO: Implement Function
    helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
    decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer)
    dec_train_logits, _ = tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=max_summary_length)
    # for tensorflow 1.2:
    # dec_train_logits, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=max_summary_length)
    return dec_train_logits  # keep_prob/dropout not used?


""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_decoding_layer_train(decoding_layer_train)
```

Tests Passed

Decoding - Inference

Create an inference decoder:

- Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
- Create a tf.contrib.seq2seq.BasicDecoder
- Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode

```
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
                         end_of_sequence_id, max_target_sequence_length,
                         vocab_size, output_layer, batch_size, keep_prob):
    """
    Create a decoding layer for inference
    :param encoder_state: Encoder state
    :param dec_cell: Decoder RNN Cell
    :param dec_embeddings: Decoder embeddings
    :param start_of_sequence_id: GO ID
    :param end_of_sequence_id: EOS ID
    :param max_target_sequence_length: Maximum length of target sequences
    :param vocab_size: Size of decoder/target vocabulary
    :param decoding_scope: TensorFlow Variable Scope for decoding
    :param output_layer: Function to apply the output layer
    :param batch_size: Batch size
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing inference logits and sample_id
    """
    # TODO: Implement Function
    start_tokens = tf.constant([start_of_sequence_id] * batch_size)
    helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id)
    decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer)
    dec_infer_logits, _ = tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=max_target_sequence_length)
    # for tensorflow 1.2:
    # dec_infer_logits, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=max_target_sequence_length)
    return dec_infer_logits


""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_decoding_layer_infer(decoding_layer_infer)
```
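The excerpt above stops after decoding_layer_infer, although decoding_layer and seq2seq_model are also listed as components to implement. As a rough illustration only (the signature and variable names below are assumptions, not this notebook's code), a decoding_layer sketch might wire the two decoders above together using the same TF 1.1 contrib API:

```
def decoding_layer(dec_input, encoder_state, target_sequence_length,
                   max_target_sequence_length, rnn_size, num_layers,
                   target_vocab_to_int, target_vocab_size, batch_size,
                   keep_prob, decoding_embedding_size):
    """Sketch only: build decoder embeddings and cell, then reuse the two helpers above."""
    # Decoder embeddings (trained from scratch, unlike the encoder's embed_sequence)
    dec_embeddings = tf.Variable(
        tf.random_uniform([target_vocab_size, decoding_embedding_size]))
    dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)

    # Stacked LSTM cell with dropout, mirroring the encoder
    def lstm_cell():
        lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
        return tf.contrib.rnn.DropoutWrapper(lstm, keep_prob)
    dec_cell = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)])

    # Shared projection onto the target vocabulary
    output_layer = Dense(target_vocab_size)

    # Training and inference decoders share weights via the same variable scope
    with tf.variable_scope("decode"):
        train_output = decoding_layer_train(
            encoder_state, dec_cell, dec_embed_input, target_sequence_length,
            max_target_sequence_length, output_layer, keep_prob)
    with tf.variable_scope("decode", reuse=True):
        infer_output = decoding_layer_infer(
            encoder_state, dec_cell, dec_embeddings,
            target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],
            max_target_sequence_length, target_vocab_size, output_layer,
            batch_size, keep_prob)
    return train_output, infer_output
```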
appsignal
Rust library to read out system stats from a machine running Unix
404GamerNotFound
The VServer SSH Stats add-on for Home Assistant allows you to monitor remote Linux servers (vServers, Raspberry Pi, or dedicated machines) without installing any additional agents on the target machines.
a3r0id
A simple bot for periodically checking your website's stats, including request latency, HTTP status code, and more, from any remote machine using the Python 3 requests module. Great for DevOps teams who use Discord.
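As a rough illustration of the kind of check described (not this repo's actual code), a single probe with the requests module might look like the sketch below; the site URL and Discord webhook URL are placeholders.

```
# Illustrative sketch only: measure latency and status code, post to a Discord webhook.
import requests

SITE_URL = "https://example.com"                            # placeholder: site to monitor
WEBHOOK_URL = "https://discord.com/api/webhooks/XXX/YYY"    # placeholder webhook URL

resp = requests.get(SITE_URL, timeout=10)
latency_ms = resp.elapsed.total_seconds() * 1000

message = f"{SITE_URL}: HTTP {resp.status_code}, {latency_ms:.0f} ms"
requests.post(WEBHOOK_URL, json={"content": message})       # standard Discord webhook payload
```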
RasmusRynell
The project explores the idea of using different machine learning techniques to determine different stats in NHL games.
Corosso
Using machine learning to predict which Valorant team is going to win a match, based on previous scores and player stats
Final project for Stats 132 at UC Berkeley; uses machine learning algorithms to classify PLoS articles into subjects (accessing them via the SOLR API)
PacktPublishing
Code Repository for Machine Learning 101 with Scikit-learn and StatsModels, Published by Packt
ChickenTarm
Machine learning predictor for NBA games that takes into account player stats and team stats.
damies13
Python module for reading perfmon stats from local and remote Windows machines
pooleja
Get paid in bitcoin to report stats about your machine.
bangdasun
STAT GR5241 Statistical Machine Learning, taught by Professor Linxi Liu. Here I also include some other sources of machine learning materials/practice, as well as my own implementations of popular machine learning algorithms without using ML packages. FOR COLUMBIA STATS STUDENTS WHO TAKE THIS COURSE: DO NOT COPY SINCE HOMEWORK PROBLEMS MIGHT BE THE SAME.
tidalmigrations
A simple and effective way to gather machine statistics (RAM, Storage, CPU, etc.) from a Windows or Unix server environment.
Unicode01
A lightweight NAT forward manager for virtual machines, with web UI, shared 80/443 proxy, port range mapping, worker orchestration, and traffic stats.
hanskamin
Analyzing MLB teams' stats over the past few decades to create a machine-learning model that predicts a team's wins as well as (if not better than) the Pythagorean Expectation does.
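For context, the Pythagorean Expectation referenced here is the standard runs-based win estimate (exponent 2). A quick illustrative calculation with made-up run totals, not data from this project:

```
# Standard Pythagorean Expectation; the run totals below are made up for illustration.
runs_scored, runs_allowed, games = 800, 700, 162
win_pct = runs_scored**2 / (runs_scored**2 + runs_allowed**2)   # ~0.566
print(round(win_pct * games, 1))                                 # about 91.8 expected wins
```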
dgrubis
Jupyter notebook that outlines the process of creating a machine learning predictive model. Predicts the peak "Win Shares" of the current draft prospects based on numerous features such as college stats, projected draft pick, physical profile, and age. I try out multiple models and pick the best-performing one for the data based on my judgement.
K-4yser
Hack The Box from your terminal - List machines, submit flags, spawn/stop labs, view profile stats. Rich UI, API v4, Kitty image support.
Can we infer important COVID-19 public health risk factors from outdated data? In many countries census and other survey data may be incomplete or out of date. This challenge is to develop a proof-of-concept for how machine learning can help governments more accurately map COVID-19 risk in 2020 using old data, without requiring a new costly, risky, and time-consuming on-the-ground survey. The 2011 census gives us valuable information for determining who might be most vulnerable to COVID-19 in South Africa. However, the data is nearly 10 years old, and we expect that some key indicators will have changed in that time. Building an up-to-date map showing where the most vulnerable are located will be a key step in responding to the disease. A mapping effort like this requires bringing together many different inputs and tools. For this competition, we’re starting small. Can we infer important risk factors from more readily available data? The task is to predict the percentage of households that fall into a particularly vulnerable bracket - large households who must leave their homes to fetch water - using 2011 South African census data. Solving this challenge will show that with machine learning it is possible to use easy-to-measure stats to identify areas most at risk even in years when census data is not collected.
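As a rough sketch of the kind of baseline this challenge calls for (not the competition's starter code), one could regress the target percentage on the available census features; the file name and column names below are hypothetical placeholders, not the competition's actual schema.

```
# Minimal baseline sketch with scikit-learn; file and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("Train.csv")                     # hypothetical training file
y = df["target_pct"]                              # hypothetical target: % of vulnerable households
X = df.drop(columns=["ward_id", "target_pct"])    # hypothetical ID + target columns dropped

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

preds = model.predict(X_val)
print("Validation MAE:", mean_absolute_error(y_val, preds))
```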
6302655433
Stats, machine learning, deep learning, and computer vision documents.
Living-with-machines
Living with Machines GitHub Stats report
gionn
A very simple Java library to gather CPU, memory, network, and load average stats from procfs on Linux machines.
Arc Raiders Companion is your essential offline-ready guide for surviving Calabretta. Track your hideout progress, look up item stats, find loot locations, and stay ahead of the machines. Play smart ;)
Creates rankings for positional groups in fantasy football using a neural network trained on previous-year stats and current-year team positional grades.