Found 361 repositories (showing 30)
No description available
molyswu
using Neural Networks (SSD) on Tensorflow.

This repo documents steps and scripts used to train a hand detector using Tensorflow (Object Detection API). As with any DNN based task, the most expensive (and riskiest) part of the process has to do with finding or creating the right (annotated) dataset. I was interested mainly in detecting hands on a table (egocentric view point). I experimented first with the [Oxford Hands Dataset](http://www.robots.ox.ac.uk/~vgg/data/hands/) (the results were not good). I then tried the [Egohands Dataset](http://vision.soic.indiana.edu/projects/egohands/), which was a much better fit for my requirements.

The goal of this repo/post is to demonstrate how neural networks can be applied to the (hard) problem of tracking hands (egocentric and other views) and, better still, to provide code that can be adapted to other use cases. If you use this tutorial or models in your research or project, please cite [this](#citing-this-tutorial).

Here is the detector in action.

<img src="images/hand1.gif" width="33.3%"><img src="images/hand2.gif" width="33.3%"><img src="images/hand3.gif" width="33.3%">

Realtime detection on a video stream from a webcam.

<img src="images/chess1.gif" width="33.3%"><img src="images/chess2.gif" width="33.3%"><img src="images/chess3.gif" width="33.3%">

Detection on a Youtube video.

Both examples above were run on a Macbook Pro **CPU** (i7, 2.5GHz, 16GB). Some fps numbers are:

| FPS | Image Size | Device | Comments |
| ------------- | ------------- | ------------- | ------------- |
| 21 | 320 * 240 | Macbook Pro (i7, 2.5GHz, 16GB) | Run without visualizing results |
| 16 | 320 * 240 | Macbook Pro (i7, 2.5GHz, 16GB) | Run while visualizing results (image above) |
| 11 | 640 * 480 | Macbook Pro (i7, 2.5GHz, 16GB) | Run while visualizing results (image above) |

> Note: The code in this repo is written and tested with Tensorflow `1.4.0-rc0`. Using a different version may result in [some errors](https://github.com/tensorflow/models/issues/1581). You may need to [generate your own frozen model](https://pythonprogramming.net/testing-custom-object-detector-tensorflow-object-detection-api-tutorial/?completed=/training-custom-objects-tensorflow-object-detection-api-tutorial/) graph using the [model checkpoints](model-checkpoint) in the repo to fit your TF version.

**Content of this document**

- Motivation - Why Track/Detect hands with Neural Networks
- Data preparation and network training in Tensorflow (Dataset, Import, Training)
- Training the hand detection Model
- Using the Detector to Detect/Track hands
- Thoughts on Optimizations

> P.S. If you are using or have used the models provided here, feel free to reach out on twitter ([@vykthur](https://twitter.com/vykthur)) and share your work!

## Motivation - Why Track/Detect hands with Neural Networks?

There are several existing approaches to tracking hands in the computer vision domain. Incidentally, many of these approaches are rule based (e.g. extracting background based on texture and boundary features, distinguishing between hands and background using color histograms and HOG classifiers), making them not very robust.
For example, these algorithms might get confused if the background is unusual, in situations where sharp changes in lighting conditions cause sharp changes in skin color, or where the tracked object becomes occluded (see [this review paper](https://www.cse.unr.edu/~bebis/handposerev.pdf) on hand pose estimation from the HCI perspective).

With sufficiently large datasets, neural networks provide the opportunity to train models that perform well and that address the challenges of existing object tracking/detection algorithms - varied/poor lighting, noisy environments, diverse viewpoints and even occlusion. The main drawbacks to using them for real-time tracking/detection are that they can be complex, are relatively slow compared to tracking-only algorithms, and it can be quite expensive to assemble a good dataset. But things are changing with advances in fast neural networks.

Furthermore, this entire area of work has been made more approachable by deep learning frameworks (such as the tensorflow object detection api) that simplify the process of training a model for custom object detection. More importantly, the advent of fast neural network models like ssd, faster r-cnn, rfcn (see [here](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models-coco-models)) etc. makes neural networks an attractive candidate for real-time detection (and tracking) applications. Hopefully, this repo demonstrates this.

> If you are not interested in the process of training the detector, you can skip straight to applying the [pretrained model I provide in detecting hands](#detecting-hands).

Training a model is a multi-stage process (assembling the dataset, cleaning, splitting into training/test partitions and generating an inference graph). While I lightly touch on the details of these parts, there are a few other tutorials that cover training a custom object detector using the tensorflow object detection api in more detail [see [here](https://pythonprogramming.net/training-custom-objects-tensorflow-object-detection-api-tutorial/) and [here](https://towardsdatascience.com/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9)]. I recommend you walk through those if you are interested in training a custom object detector from scratch.

## Data preparation and network training in Tensorflow (Dataset, Import, Training)

**The Egohands Dataset**

The hand detector model is built using data from the [Egohands Dataset](http://vision.soic.indiana.edu/projects/egohands/). This dataset works well for several reasons. It contains high quality, pixel level annotations (>15000 ground truth labels) of where hands are located across 4800 images. All images are captured from an egocentric view (Google glass) across 48 different environments (indoor, outdoor) and activities (playing cards, chess, jenga, solving puzzles etc).

<img src="images/egohandstrain.jpg" width="100%">

If you will be using the Egohands dataset, you can cite it as follows:

> Bambach, Sven, et al. "Lending a hand: Detecting hands and recognizing activities in complex egocentric interactions." Proceedings of the IEEE International Conference on Computer Vision. 2015.

The Egohands dataset (zip file with labelled data) contains 48 folders of locations where video data was collected (100 images per folder).

```
-- LOCATION_X
  -- frame_1.jpg
  -- frame_2.jpg
  ...
  -- frame_100.jpg
  -- polygons.mat  // contains annotations for all 100 images in current folder
-- LOCATION_Y
  -- frame_1.jpg
  -- frame_2.jpg
  ...
  -- frame_100.jpg
  -- polygons.mat  // contains annotations for all 100 images in current folder
```

**Converting data to Tensorflow Format**

Some initial work needs to be done on the Egohands dataset to transform it into the format (`tfrecord`) which Tensorflow needs to train a model. This repo contains `egohands_dataset_clean.py`, a script that will help you generate these csv files. It:

- Downloads the egohands dataset
- Renames all files to include their directory names, to ensure each filename is unique
- Splits the dataset into train (80%), test (10%) and eval (10%) folders
- Reads in `polygons.mat` for each folder, generates bounding boxes and visualizes them to ensure correctness (see image above)

Once the script is done running, you should have an images folder containing three folders - train, test and eval. Each of these folders should also contain a csv label file - `train_labels.csv`, `test_labels.csv` - that can be used to generate `tfrecords`.

Note: While the egohands dataset provides four separate labels for hands (own left, own right, other left, and other right), for my purpose I am only interested in the general `hand` class, so I label all training data as `hand`. You can modify the data prep script to generate `tfrecords` that support 4 labels.

Next: convert your dataset + csv files to tfrecords. A helpful guide on this can be found [here](https://pythonprogramming.net/creating-tfrecord-files-tensorflow-object-detection-api-tutorial/). For each folder, you should be able to generate the `train.record` and `test.record` files required in the training process.
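To make the conversion step concrete, here is a minimal sketch of building one `tf.train.Example` in the schema the object detection api expects. This is not the exact code used for this repo; in particular, the csv column names (`filename`, `width`, `height`, `xmin`, `ymin`, `xmax`, `ymax`) are assumptions based on the label files described above.

```python
import tensorflow as tf  # TF 1.x, as used in this repo

def create_tf_example(row, encoded_jpg):
    """Build one tf.train.Example from a label csv row (assumed column names)."""
    width, height = int(row['width']), int(row['height'])

    def _bytes(v): return tf.train.Feature(bytes_list=tf.train.BytesList(value=[v]))
    def _int64(v): return tf.train.Feature(int64_list=tf.train.Int64List(value=[v]))
    def _float(v): return tf.train.Feature(float_list=tf.train.FloatList(value=[v]))

    feature = {
        'image/height': _int64(height),
        'image/width': _int64(width),
        'image/filename': _bytes(row['filename'].encode()),
        'image/encoded': _bytes(encoded_jpg),     # raw jpeg bytes
        'image/format': _bytes(b'jpeg'),
        # box coordinates are normalized to [0, 1]
        'image/object/bbox/xmin': _float(float(row['xmin']) / width),
        'image/object/bbox/xmax': _float(float(row['xmax']) / width),
        'image/object/bbox/ymin': _float(float(row['ymin']) / height),
        'image/object/bbox/ymax': _float(float(row['ymax']) / height),
        'image/object/class/text': _bytes(b'hand'),
        'image/object/class/label': _int64(1),    # single 'hand' class
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

# Usage: writer = tf.python_io.TFRecordWriter('train.record')
#        writer.write(create_tf_example(row, jpg_bytes).SerializeToString())
```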
## Training the hand detection Model

Now that the dataset has been assembled (and your tfrecords), the next task is to train a model based on this. With neural networks, it is possible to use a process called [transfer learning](https://www.tensorflow.org/tutorials/image_retraining) to shorten the amount of time needed to train the entire model. This means we can take an existing model (that has been trained well on a related domain, here image classification) and retrain its final layer(s) to detect hands for us. Sweet! Given that neural networks sometimes have thousands or millions of parameters that can take weeks or months to train, transfer learning helps shorten training time to possibly hours. Tensorflow does offer a few models (in the tensorflow [model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models-coco-models)) and I chose to use the `ssd_mobilenet_v1_coco` model as my start point, given that it is currently (one of) the fastest models (read the SSD research [paper here](https://arxiv.org/pdf/1512.02325.pdf)). The training process can be done locally on your CPU machine, which may take a while, or better on a (cloud) GPU machine (which is what I did). For reference, training on my macbook pro (tensorflow compiled from source to take advantage of the mac's cpu architecture), the maximum speed I got was 5 seconds per step, as opposed to the ~0.5 seconds per step I got with a GPU. It would take about 12 days to run 200k steps on my mac (i7, 2.5GHz, 16GB) compared to ~5hrs on a GPU.

> **Training on your own images**: Please use the [guide provided by Harrison from pythonprogramming](https://pythonprogramming.net/training-custom-objects-tensorflow-object-detection-api-tutorial/) on how to generate tfrecords given your label csv files and your images. The guide also covers how to start the training process if training locally [see [here](https://pythonprogramming.net/training-custom-objects-tensorflow-object-detection-api-tutorial/)]. If training in the cloud using a service like GCP, see the [guide here](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_cloud.md).

As the training process progresses, the expectation is that total loss (errors) gets reduced to its possible minimum (a value of about 1 or thereabouts). By observing the tensorboard graphs for total loss (see image below), it should be possible to get an idea of when the training process is complete (total loss does not decrease with further iterations/steps). I ran my training job for 200k steps (which took about 5 hours) and stopped at a total loss (errors) value of 2.575. (In retrospect, I could have stopped the training at about 50k steps and gotten a similar total loss value.) With tensorflow, you can also run an evaluation concurrently that assesses your model to see how well it performs on the test data. A commonly used metric for performance is mean average precision (mAP), which is a single number used to summarize the area under the precision-recall curve. mAP is a measure of how well the model generates a bounding box that has at least a 50% overlap with the ground truth bounding box in our test dataset. For the hand detector trained here, the mAP value was **0.9686@0.5IOU**. mAP values range from 0-1; the higher the better.

<img src="images/accuracy.jpg" width="100%">

Once training is completed, the trained inference graph (`frozen_inference_graph.pb`) is then exported (see the earlier referenced guides for how to do this) and saved in the `hand_inference_graph` folder. Now it's time to do some interesting detection.

## Using the Detector to Detect/Track hands

If you have not done this yet, please follow the guide on installing [Tensorflow and the Tensorflow object detection api](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md). This will walk you through setting up the tensorflow framework and cloning the tensorflow github repo. The main steps in this repo are:

- Load the `frozen_inference_graph.pb` trained on the hands dataset, as well as the corresponding label map. In this repo, this is done in the `utils/detector_utils.py` script by the `load_inference_graph` method.

```python
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')
    sess = tf.Session(graph=detection_graph)
    print("> ====== Hand Inference graph loaded.")
```

- Detect hands. In this repo, this is done in the `utils/detector_utils.py` script by the `detect_objects` method.

```python
(boxes, scores, classes, num) = sess.run(
    [detection_boxes, detection_scores, detection_classes, num_detections],
    feed_dict={image_tensor: image_np_expanded})
```

- Visualize the detected bounding boxes. In this repo, this is done in the `utils/detector_utils.py` script by the `draw_box_on_image` method (a sketch of the idea follows below).
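For illustration, here is a minimal sketch of what the drawing step involves (not the repo's exact `draw_box_on_image` code; the `score_thresh` value and function name here are placeholders). The boxes come back normalized to [0, 1] in (ymin, xmin, ymax, xmax) order, so they must be scaled to pixel coordinates first:

```python
import cv2

def draw_boxes(num_boxes, score_thresh, scores, boxes, im_width, im_height, image_np):
    # boxes are normalized (ymin, xmin, ymax, xmax); scale to pixel coords
    for i in range(num_boxes):
        if scores[i] > score_thresh:
            ymin, xmin, ymax, xmax = boxes[i]
            p1 = (int(xmin * im_width), int(ymin * im_height))
            p2 = (int(xmax * im_width), int(ymax * im_height))
            cv2.rectangle(image_np, p1, p2, (77, 255, 9), 3)
```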
This repo contains two scripts that tie all these steps together.

- `detect_multi_threaded.py`: A threaded implementation that reads camera video input and runs detection. Takes a set of command line flags to set parameters such as `--display` (visualize detections), image parameters `--width` and `--height`, and video `--source` (0 for camera) etc.
- `detect_single_threaded.py`: Same as above, but single threaded. This script also works for video files, by setting the video source parameter `--source` (path to a video file).

```cmd
# load and run detection on video at path "videos/chess.mov"
python detect_single_threaded.py --source videos/chess.mov
```

> Update: If you do have errors loading the frozen inference graph in this repo, feel free to generate a new graph that fits your TF version from the model-checkpoint in this repo. Use the [export_inference_graph.py](https://github.com/tensorflow/models/blob/master/research/object_detection/export_inference_graph.py) script provided in the tensorflow object detection api repo. More guidance on this [here](https://pythonprogramming.net/testing-custom-object-detector-tensorflow-object-detection-api-tutorial/?completed=/training-custom-objects-tensorflow-object-detection-api-tutorial/).

## Thoughts on Optimization

A few things that led to noticeable performance increases:

- Threading: It turns out that reading images from a webcam is a heavy I/O operation and, if run on the main application thread, can slow down the program. I implemented some good ideas from [Adrian Rosebrock](https://www.pyimagesearch.com/2017/02/06/faster-video-file-fps-with-cv2-videocapture-and-opencv/) on parallelizing image capture across multiple worker threads (see the sketch after this list). This mostly led to an FPS increase of about 5 points.
- For those new to Opencv, images from the `cv2.read()` method are returned in [BGR format](https://www.learnopencv.com/why-does-opencv-use-bgr-color-format/). Ensure you convert to RGB before detection (accuracy will be much reduced if you don't).

```python
cv2.cvtColor(image_np, cv2.COLOR_BGR2RGB)
```

- Keeping your input image small will increase fps without any significant accuracy drop (I used about 320 x 240, compared to the 1280 x 720 which my webcam provides).
- Model Quantization. Moving from the current 32 bit to 8 bit can achieve up to a 4x reduction in the memory required to load and store models. One way to further speed up this model is to explore the use of [8-bit fixed point quantization](https://heartbeat.fritz.ai/8-bit-quantization-and-tensorflow-lite-speeding-up-mobile-inference-with-low-precision-a882dfcafbbd).
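Here is a minimal sketch of the threaded-capture idea, modeled on the pyimagesearch approach referenced above; the class name and defaults are illustrative, not the exact code used in this repo:

```python
import threading
import cv2

class WebcamVideoStream:
    """Read frames on a worker thread so the main (detection) thread
    never blocks on camera I/O."""

    def __init__(self, src=0, width=320, height=240):
        self.stream = cv2.VideoCapture(src)
        self.stream.set(cv2.CAP_PROP_FRAME_WIDTH, width)
        self.stream.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
        self.grabbed, self.frame = self.stream.read()
        self.stopped = False

    def start(self):
        threading.Thread(target=self.update, daemon=True).start()
        return self

    def update(self):
        # keep grabbing frames until told to stop
        while not self.stopped:
            self.grabbed, self.frame = self.stream.read()

    def read(self):
        return self.frame  # latest frame grabbed by the worker thread

    def stop(self):
        self.stopped = True
```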
Performance can also be increased by a clever combination of tracking algorithms with the already decent detection; this is something I am still experimenting with. If you have ideas for optimizing this further, please share!

<img src="images/general.jpg" width="100%">

Note: The detector does reflect some limitations associated with the training set. This includes non-egocentric viewpoints, very noisy backgrounds (e.g. in a sea of hands) and sometimes skin tone. There is opportunity to improve these with additional data.

## Integrating Multiple DNNs

One way to make things more interesting is to integrate our new knowledge of where "hands" are with other detectors trained to recognize other objects. Unfortunately, while our hand detector can in fact detect hands, it cannot detect other objects (a factor of how it was trained). To create a detector that classifies multiple different objects would mean a long, involved process of assembling datasets for each class and a lengthy training process.

> Given the above, a potential strategy is to explore structures that allow us to **efficiently** interleave output from multiple pretrained models for various object classes and have them detect multiple objects in a single image.

An example of this is my primary use case, where I am interested in understanding the position of objects on a table with respect to hands on the same table. I am currently doing some work on a threaded application that loads multiple detectors and outputs bounding boxes on a single image. More on this soon.
mandartule
All the programs that were taught in the course, with my handwritten notes.
VighneshPath
A Python program that prints text on a page with fixed dimensions
Handwritten course notes from the Deep Learning Specialization by Andrew Ng on Coursera
anuanurag2410
Handwritten Notes PDF
This repository contains my handwritten notes on Linux - Ubuntu. These notes cover everything about Linux - Ubuntu you need to learn: all the commands, terms, tools, etc., with examples. I am sharing my notes because I believe in learning in public, so I am sharing whatever I am learning with the community. I hope my notes help you; please share them with your people.
Chandrakant817
Entire Machine Learning Handwritten Notes
N30nHaCkZ
Linux kernel release 3.x <http://kernel.org/>

These are the release notes for Linux version 3. Read them carefully, as they tell you what this is all about, explain how to install the kernel, and what to do if something goes wrong.

WHAT IS LINUX?

  Linux is a clone of the operating system Unix, written from scratch by Linus Torvalds with assistance from a loosely-knit team of hackers across the Net. It aims towards POSIX and Single UNIX Specification compliance.

  It has all the features you would expect in a modern fully-fledged Unix, including true multitasking, virtual memory, shared libraries, demand loading, shared copy-on-write executables, proper memory management, and multistack networking including IPv4 and IPv6.

  It is distributed under the GNU General Public License - see the accompanying COPYING file for more details.

ON WHAT HARDWARE DOES IT RUN?

  Although originally developed first for 32-bit x86-based PCs (386 or higher), today Linux also runs on (at least) the Compaq Alpha AXP, Sun SPARC and UltraSPARC, Motorola 68000, PowerPC, PowerPC64, ARM, Hitachi SuperH, Cell, IBM S/390, MIPS, HP PA-RISC, Intel IA-64, DEC VAX, AMD x86-64, AXIS CRIS, Xtensa, Tilera TILE, AVR32 and Renesas M32R architectures.

  Linux is easily portable to most general-purpose 32- or 64-bit architectures as long as they have a paged memory management unit (PMMU) and a port of the GNU C compiler (gcc) (part of The GNU Compiler Collection, GCC). Linux has also been ported to a number of architectures without a PMMU, although functionality is then obviously somewhat limited. Linux has also been ported to itself. You can now run the kernel as a userspace application - this is called UserMode Linux (UML).

DOCUMENTATION:

 - There is a lot of documentation available, both in electronic form on the Internet and in books, both Linux-specific and pertaining to general UNIX questions. I'd recommend looking into the documentation subdirectories on any Linux FTP site for the LDP (Linux Documentation Project) books. This README is not meant to be documentation on the system: there are much better sources available.

 - There are various README files in the Documentation/ subdirectory: these typically contain kernel-specific installation notes for some drivers, for example. See Documentation/00-INDEX for a list of what is contained in each file. Please read the Changes file, as it contains information about problems that may result from upgrading your kernel.

 - The Documentation/DocBook/ subdirectory contains several guides for kernel developers and users. These guides can be rendered in a number of formats: PostScript (.ps), PDF, HTML, & man-pages, among others. After installation, "make psdocs", "make pdfdocs", "make htmldocs", or "make mandocs" will render the documentation in the requested format.

INSTALLING the kernel source:

 - If you install the full sources, put the kernel tarball in a directory where you have permissions (eg. your home directory) and unpack it:

     gzip -cd linux-3.X.tar.gz | tar xvf -

   or

     bzip2 -dc linux-3.X.tar.bz2 | tar xvf -

   Replace "X" with the version number of the latest kernel.

   Do NOT use the /usr/src/linux area! This area has a (usually incomplete) set of kernel headers that are used by the library header files. They should match the library, and not get messed up by whatever the kernel-du-jour happens to be.

 - You can also upgrade between 3.x releases by patching. Patches are distributed in the traditional gzip and the newer bzip2 format.
   To install by patching, get all the newer patch files, enter the top level directory of the kernel source (linux-3.X) and execute:

     gzip -cd ../patch-3.x.gz | patch -p1

   or

     bzip2 -dc ../patch-3.x.bz2 | patch -p1

   Replace "x" for all versions bigger than the version "X" of your current source tree, _in_order_, and you should be ok. You may want to remove the backup files (some-file-name~ or some-file-name.orig), and make sure that there are no failed patches (some-file-name# or some-file-name.rej). If there are, either you or I have made a mistake.

   Unlike patches for the 3.x kernels, patches for the 3.x.y kernels (also known as the -stable kernels) are not incremental but instead apply directly to the base 3.x kernel. For example, if your base kernel is 3.0 and you want to apply the 3.0.3 patch, you must not first apply the 3.0.1 and 3.0.2 patches. Similarly, if you are running kernel version 3.0.2 and want to jump to 3.0.3, you must first reverse the 3.0.2 patch (that is, patch -R) _before_ applying the 3.0.3 patch. You can read more on this in Documentation/applying-patches.txt

   Alternatively, the script patch-kernel can be used to automate this process. It determines the current kernel version and applies any patches found:

     linux/scripts/patch-kernel linux

   The first argument in the command above is the location of the kernel source. Patches are applied from the current directory, but an alternative directory can be specified as the second argument.

 - Make sure you have no stale .o files and dependencies lying around:

     cd linux
     make mrproper

   You should now have the sources correctly installed.

SOFTWARE REQUIREMENTS

   Compiling and running the 3.x kernels requires up-to-date versions of various software packages. Consult Documentation/Changes for the minimum version numbers required and how to get updates for these packages. Beware that using excessively old versions of these packages can cause indirect errors that are very difficult to track down, so don't assume that you can just update packages when obvious problems arise during build or operation.

BUILD directory for the kernel:

   When compiling the kernel, all output files will by default be stored together with the kernel source code. Using the option "make O=output/dir" allows you to specify an alternate place for the output files (including .config). Example:

     kernel source code: /usr/src/linux-3.X
     build directory:    /home/name/build/kernel

   To configure and build the kernel, use:

     cd /usr/src/linux-3.X
     make O=/home/name/build/kernel menuconfig
     make O=/home/name/build/kernel
     sudo make O=/home/name/build/kernel modules_install install

   Please note: If the 'O=output/dir' option is used, then it must be used for all invocations of make.

CONFIGURING the kernel:

   Do not skip this step even if you are only upgrading one minor version. New configuration options are added in each release, and odd problems will turn up if the configuration files are not set up as expected. If you want to carry your existing configuration to a new version with minimal work, use "make oldconfig", which will only ask you for the answers to new questions.

 - Alternative configuration commands are:

     "make config"      Plain text interface.
     "make menuconfig"  Text based color menus, radiolists & dialogs.
     "make nconfig"     Enhanced text based color menus.
     "make xconfig"     X windows (Qt) based configuration tool.
     "make gconfig"     X windows (Gtk) based configuration tool.
"make oldconfig" Default all questions based on the contents of your existing ./.config file and asking about new config symbols. "make silentoldconfig" Like above, but avoids cluttering the screen with questions already answered. Additionally updates the dependencies. "make olddefconfig" Like above, but sets new symbols to their default values without prompting. "make defconfig" Create a ./.config file by using the default symbol values from either arch/$ARCH/defconfig or arch/$ARCH/configs/${PLATFORM}_defconfig, depending on the architecture. "make ${PLATFORM}_defconfig" Create a ./.config file by using the default symbol values from arch/$ARCH/configs/${PLATFORM}_defconfig. Use "make help" to get a list of all available platforms of your architecture. "make allyesconfig" Create a ./.config file by setting symbol values to 'y' as much as possible. "make allmodconfig" Create a ./.config file by setting symbol values to 'm' as much as possible. "make allnoconfig" Create a ./.config file by setting symbol values to 'n' as much as possible. "make randconfig" Create a ./.config file by setting symbol values to random values. "make localmodconfig" Create a config based on current config and loaded modules (lsmod). Disables any module option that is not needed for the loaded modules. To create a localmodconfig for another machine, store the lsmod of that machine into a file and pass it in as a LSMOD parameter. target$ lsmod > /tmp/mylsmod target$ scp /tmp/mylsmod host:/tmp host$ make LSMOD=/tmp/mylsmod localmodconfig The above also works when cross compiling. "make localyesconfig" Similar to localmodconfig, except it will convert all module options to built in (=y) options. You can find more information on using the Linux kernel config tools in Documentation/kbuild/kconfig.txt. - NOTES on "make config": - Having unnecessary drivers will make the kernel bigger, and can under some circumstances lead to problems: probing for a nonexistent controller card may confuse your other controllers - Compiling the kernel with "Processor type" set higher than 386 will result in a kernel that does NOT work on a 386. The kernel will detect this on bootup, and give up. - A kernel with math-emulation compiled in will still use the coprocessor if one is present: the math emulation will just never get used in that case. The kernel will be slightly larger, but will work on different machines regardless of whether they have a math coprocessor or not. - The "kernel hacking" configuration details usually result in a bigger or slower kernel (or both), and can even make the kernel less stable by configuring some routines to actively try to break bad code to find kernel problems (kmalloc()). Thus you should probably answer 'n' to the questions for "development", "experimental", or "debugging" features. COMPILING the kernel: - Make sure you have at least gcc 3.2 available. For more information, refer to Documentation/Changes. Please note that you can still run a.out user programs with this kernel. - Do a "make" to create a compressed kernel image. It is also possible to do "make install" if you have lilo installed to suit the kernel makefiles, but you may want to check your particular lilo setup first. To do the actual install, you have to be root, but none of the normal build should require that. Don't take the name of root in vain. - If you configured any of the parts of the kernel as `modules', you will also have to do "make modules_install". 
 - Verbose kernel compile/build output:

   Normally, the kernel build system runs in a fairly quiet mode (but not totally silent). However, sometimes you or other kernel developers need to see compile, link, or other commands exactly as they are executed. For this, use "verbose" build mode. This is done by inserting "V=1" in the "make" command. E.g.:

     make V=1 all

   To have the build system also tell the reason for the rebuild of each target, use "V=2". The default is "V=0".

 - Keep a backup kernel handy in case something goes wrong. This is especially true for the development releases, since each new release contains new code which has not been debugged. Make sure you keep a backup of the modules corresponding to that kernel, as well. If you are installing a new kernel with the same version number as your working kernel, make a backup of your modules directory before you do a "make modules_install". Alternatively, before compiling, use the kernel config option "LOCALVERSION" to append a unique suffix to the regular kernel version. LOCALVERSION can be set in the "General Setup" menu.

 - In order to boot your new kernel, you'll need to copy the kernel image (e.g. .../linux/arch/i386/boot/bzImage after compilation) to the place where your regular bootable kernel is found.

 - Booting a kernel directly from a floppy without the assistance of a bootloader such as LILO is no longer supported. If you boot Linux from the hard drive, chances are you use LILO, which uses the kernel image as specified in the file /etc/lilo.conf. The kernel image file is usually /vmlinuz, /boot/vmlinuz, /bzImage or /boot/bzImage. To use the new kernel, save a copy of the old image and copy the new image over the old one. Then, you MUST RERUN LILO to update the loading map!! If you don't, you won't be able to boot the new kernel image. Reinstalling LILO is usually a matter of running /sbin/lilo. You may wish to edit /etc/lilo.conf to specify an entry for your old kernel image (say, /vmlinux.old) in case the new one does not work. See the LILO docs for more information. After reinstalling LILO, you should be all set. Shutdown the system, reboot, and enjoy! If you ever need to change the default root device, video mode, ramdisk size, etc. in the kernel image, use the 'rdev' program (or alternatively the LILO boot options when appropriate). No need to recompile the kernel to change these parameters.

 - Reboot with the new kernel and enjoy.
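   Putting the steps above together, a typical from-scratch build and install (building in the source tree, LILO-based boot setup) might look like the following; the version and the choice of menuconfig are illustrative only:

     cd /usr/src/linux-3.X
     make menuconfig
     make
     make modules_install
     make install

   Remember that the two install steps need to be run as root, and that "make install" relies on lilo being set up to suit the kernel makefiles, as noted above.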
IF SOMETHING GOES WRONG:

 - If you have problems that seem to be due to kernel bugs, please check the file MAINTAINERS to see if there is a particular person associated with the part of the kernel that you are having trouble with. If there isn't anyone listed there, then the second best thing is to mail them to me (torvalds@linux-foundation.org), and possibly to any other relevant mailing-list or to the newsgroup.

 - In all bug-reports, *please* tell what kernel you are talking about, how to duplicate the problem, and what your setup is (use your common sense). If the problem is new, tell me so, and if the problem is old, please try to tell me when you first noticed it.

 - If the bug results in a message like

     unable to handle kernel paging request at address C0000010
     Oops: 0002
     EIP: 0010:XXXXXXXX
     eax: xxxxxxxx  ebx: xxxxxxxx  ecx: xxxxxxxx  edx: xxxxxxxx
     esi: xxxxxxxx  edi: xxxxxxxx  ebp: xxxxxxxx
     ds: xxxx  es: xxxx  fs: xxxx  gs: xxxx
     Pid: xx, process nr: xx
     xx xx xx xx xx xx xx xx xx xx

   or similar kernel debugging information on your screen or in your system log, please duplicate it *exactly*. The dump may look incomprehensible to you, but it does contain information that may help debugging the problem. The text above the dump is also important: it tells something about why the kernel dumped code (in the above example, it's due to a bad kernel pointer). More information on making sense of the dump is in Documentation/oops-tracing.txt

 - If you compiled the kernel with CONFIG_KALLSYMS you can send the dump as is, otherwise you will have to use the "ksymoops" program to make sense of the dump (but compiling with CONFIG_KALLSYMS is usually preferred). This utility can be downloaded from ftp://ftp.<country>.kernel.org/pub/linux/utils/kernel/ksymoops/ . Alternatively, you can do the dump lookup by hand:

 - In debugging dumps like the above, it helps enormously if you can look up what the EIP value means. The hex value as such doesn't help me or anybody else very much: it will depend on your particular kernel setup. What you should do is take the hex value from the EIP line (ignore the "0010:"), and look it up in the kernel namelist to see which kernel function contains the offending address.

   To find out the kernel function name, you'll need to find the system binary associated with the kernel that exhibited the symptom. This is the file 'linux/vmlinux'. To extract the namelist and match it against the EIP from the kernel crash, do:

     nm vmlinux | sort | less

   This will give you a list of kernel addresses sorted in ascending order, from which it is simple to find the function that contains the offending address. Note that the address given by the kernel debugging messages will not necessarily match exactly with the function addresses (in fact, that is very unlikely), so you can't just 'grep' the list: the list will, however, give you the starting point of each kernel function, so by looking for the function that has a starting address lower than the one you are searching for, but is followed by a function with a higher address, you will find the one you want. In fact, it may be a good idea to include a bit of "context" in your problem report, giving a few lines around the interesting one.

   If you for some reason cannot do the above (you have a pre-compiled kernel image or similar), telling me as much about your setup as possible will help. Please read the REPORTING-BUGS document for details.

 - Alternatively, you can use gdb on a running kernel (read-only; i.e. you cannot change values or set break points). To do this, first compile the kernel with -g; edit arch/i386/Makefile appropriately, then do a "make clean". You'll also need to enable CONFIG_PROC_FS (via "make config").

   After you've rebooted with the new kernel, do "gdb vmlinux /proc/kcore". You can now use all the usual gdb commands. The command to look up the point where your system crashed is "l *0xXXXXXXXX". (Replace the XXXes with the EIP value.)

   gdb'ing a non-running kernel currently fails because gdb (wrongly) disregards the starting offset for which the kernel is compiled.
This repository contains my handwritten notes on Kubernetes. These notes cover everything about Kubernetes you need to learn: all the commands, important terminologies, etc., with examples. I am sharing my notes because I believe in learning in public, so I am sharing whatever I am learning with the community. I hope my notes help you; please share them with your people.
SOYJUN
Overview

For this assignment you will be developing and implementing:

 - An On-Demand shortest-hop Routing (ODR) protocol for networks of fixed but arbitrary and unknown connectivity, using PF_PACKET sockets. The implementation is based on (a simplified version of) the AODV algorithm.
 - Time client and server applications that send requests and replies to each other across the network using ODR.
 - An API, which you will implement using Unix domain datagram sockets, that enables applications to communicate with the ODR mechanism running locally at their nodes.

I shall be discussing the assignment in class on Wednesday, October 29, and Monday, November 3.

The following should prove useful reference material for the assignment:

 - Sections 15.1, 15.2, 15.4 & 15.6, Chapter 15, on Unix domain datagram sockets.
 - PF_PACKET(7) from the Linux manual pages. You might find these notes made by a past CSE 533 student useful. Also, the following link http://www.pdbuchan.com/rawsock/rawsock.html contains useful code samples that use PF_PACKET sockets (as well as other code samples that use raw IP sockets, which you do not need for this assignment, though you will be using these types of sockets for Assignment 4).
 - Charles E. Perkins & Elizabeth M. Royer. "Ad-hoc On-Demand Distance Vector Routing." Proceedings of the 2nd IEEE Workshop on Mobile Computing Systems and Applications, New Orleans, Louisiana, February 1999, pp. 90-100.

The VMware environment

minix.cs.stonybrook.edu is a Linux box running VMware. A cluster of ten Linux virtual machines, called vm1 through vm10, on which you can gain access as root and run your code, has been created on minix. See VMware Environment Hosts for further details. VMware instructions takes you to a page that explains how to use the system. The ten virtual machines have been configured into a small virtual intranet of Ethernet LANs whose topology is (in principle) unknown to you.

There is a course account cse533 on node minix, with home directory /users/cse533. In there, you will find a subdirectory Stevens/unpv13e, exactly as you are used to having on the cs system. You should develop your source code and makefiles for handing in accordingly. You will be handing in your source code on the minix node.

Note that you do not need to link against the socket library (-lsocket) in Linux. The same is true for -lnsl and -lresolv. For example, take a look at how the LIBS variable is defined for Solaris, in /home/courses/cse533/Stevens/unpv13e_solaris2.10/Make.defines (on compserv1, say):

  LIBS = ../libunp.a -lresolv -lsocket -lnsl -lpthread

But if you take a look at Make.defines on minix (/users/cse533/Stevens/unpv13e/Make.defines) you will find only:

  LIBS = ../libunp.a -lpthread

The nodes vm1, ..., vm10 are all multihomed: each has two (or more) interfaces. The interface 'eth0' should be completely ignored and is not to be used for this assignment (because it shows all ten nodes as if belonging to the same single Ethernet 192.168.1.0/24, rather than to an intranet composed of several Ethernets).

Note that vm1, ..., vm10 are virtual machines, not real ones. One implication of this is that you will not be able to find out what their (virtual) IP addresses are by using nslookup and such. To find out these IP addresses, you need to look at the file /etc/hosts on minix. More to the point, invoking gethostbyname for a given vm will return to you only the (primary) IP address associated with the interface eth0 of that vm (which is the interface you will not be using).
It will not return to you any other IP address for the node. Similarly, gethostbyaddr will return the vm node name only if you give it the (primary) IP address associated with the interface eth0 for the node. It will return nothing if you give it any other IP address for the node, even though the address is perfectly valid. Because of this, and because it will ease your task to be able to use gethostbyname and gethostbyaddr in a straightforward way, we shall adopt the (primary) IP addresses associated with interfaces eth0 as the 'canonical' IP addresses for the nodes (more on this below).

Time client and server

A time server runs on each of the ten vm machines. The client code should also be available on each vm so that it can be invoked at any of them. Normally, time clients/servers exchange request/reply messages using the TCP/UDP socket API which, effectively, enables them to receive service (indirectly, via the transport layer) from the local IP mechanism running at their nodes. You are to implement an API using Unix domain sockets to access the local ODR service directly (somewhat similar, in effect, to the way that raw sockets permit an application to access IP directly). Use Unix domain SOCK_DGRAM, rather than SOCK_STREAM, sockets (see Figures 15.5 & 15.6, pp. 418-419).

API

You need to implement a msg_send function that will be called by clients/servers to send requests/replies. The parameters of the function consist of:

 - int giving the socket descriptor for write
 - char* giving the 'canonical' IP address for the destination node, in presentation format
 - int giving the destination 'port' number
 - char* giving the message to be sent
 - int flag: if set, force a route rediscovery to the destination node even if a non-'stale' route already exists (see below)

msg_send will format these parameters into a single char sequence which is written to the Unix domain socket that a client/server process creates. The sequence will be read by the local ODR from a Unix domain socket that the ODR process creates for itself. Recall that the 'canonical' IP address for a vm node is the (primary) IP address associated with the eth0 interface for the node. It is what will be returned to you by a call to gethostbyname.

Similarly, we need a msg_recv function which will do a (blocking) read on the application domain socket and return with:

 - int giving the socket descriptor for read
 - char* giving the message received
 - char* giving the 'canonical' IP address for the source node of the message, in presentation format
 - int* giving the source 'port' number

This information is written as a single char sequence by the ODR process to the domain socket that it creates for itself. It is read by msg_recv from the domain socket the client/server process creates, decomposed into the three components above, and returned to the caller of msg_recv. Also see the section below entitled ODR and the API.
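To make the framing concrete, here is a minimal sketch of one possible msg_send/msg_recv encoding over a Unix domain datagram socket. The assignment itself is written in C with the unpv13e wrappers; the exact byte layout, header fields and sun_path names are design choices left to you, so everything below (the '!16siii' header, ODR_PATH) is an illustrative assumption, shown in Python only because the same AF_UNIX SOCK_DGRAM primitives are exposed there.

```python
import socket
import struct

ODR_PATH = '/tmp/odr'  # hypothetical 'well-known' sun_path of the local ODR process

# assumed header: 16-byte dotted-decimal 'canonical' IP, 'port', flag, payload length
HDR = struct.Struct('!16siii')

def msg_send(sock, dest_ip, dest_port, msg, flag):
    """Pack the parameters into one datagram and hand it to the local ODR."""
    payload = msg.encode()
    hdr = HDR.pack(dest_ip.encode().ljust(16, b'\0'), dest_port, flag, len(payload))
    sock.sendto(hdr + payload, ODR_PATH)

def msg_recv(sock):
    """Blocking read; returns (message, source 'canonical' IP, source 'port')."""
    data = sock.recvfrom(4096)[0]
    src_ip, src_port, _flag, length = HDR.unpack(data[:HDR.size])
    return (data[HDR.size:HDR.size + length].decode(),
            src_ip.rstrip(b'\0').decode(), src_port)
```

A client would create its socket with `socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)` and bind it to its temporary sun_path before calling these.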
Client

When a client is invoked at a node, it creates a domain datagram socket. The client should bind its socket to a 'temporary' (i.e., not 'well-known') sun_path name obtained from a call to tmpnam() (cf. line 10, Figure 15.6, p. 419) so that multiple clients may run at the same node. Note that tmpnam() is actually highly deprecated. You should use the mkstemp() function instead - look up the online man pages on minix ('man mkstemp') for details.

As you run client code again and again during the development stage, the temporary files created by the calls to tmpnam / mkstemp start to proliferate, since these files are not automatically removed when the client code terminates. You need to explicitly remove the file created by the client invocation by issuing a call to unlink() or to remove() in your client code just before the client code exits. See the online man pages on minix ('man unlink', 'man remove') for details.

The client then enters an infinite loop repeating the steps below.

 1. The client prompts the user to choose one of vm1, ..., vm10 as a server node.
 2. The client msg_sends a 1 or 2 byte message to the server and prints out on stdout the message

      client at node vm i1 sending request to server at vm i2

    (In general, throughout this assignment, "trace" messages such as the one above should give the vm names, and not the IP addresses, of the nodes.)
 3. The client then blocks in msg_recv awaiting a response. This attempt to read from the domain socket should be backed up by a timeout in case no response ever comes. I leave it up to you whether you 'wrap' the call to msg_recv in a timeout, or you implement the timeout inside msg_recv itself. When the client receives a response it prints out on stdout the message

      client at node vm i1 : received from vm i2 <timestamp>

    If, on the other hand, the client times out, it should print out the message

      client at node vm i1 : timeout on response from vm i2

    The client then retransmits the message, setting the flag parameter in msg_send to force a route rediscovery, and prints out an appropriate message on stdout. This is done only once, when a timeout for a given message to the server occurs for the first time.

The client repeats steps 1. - 3.

Server

The server creates a domain datagram socket. The server socket is assumed to have a (node-local) 'well-known' sun_path name which it binds to. This 'well-known' sun_path name is designated by a (network-wide) 'well-known' 'port' value. The time client uses this 'port' value to communicate with the server. The server enters an infinite sequence of calls to msg_recv followed by msg_send, awaiting client requests and responding to them. When it responds to a client request, it prints out on stdout the message

  server at node vm i1 responding to request from vm i2

ODR

The ODR process runs on each of the ten vm machines. It is invoked with a single command line argument which gives a "staleness" time parameter, in seconds. It uses get_hw_addrs (available to you on minix in ~cse533/Asgn3_code) to obtain the index, and associated (unicast) IP and Ethernet addresses, for each of the node's interfaces, except for the eth0 and lo (loopback) interfaces, which should be ignored.

In the subdirectory ~cse533/Asgn3_code (/users/cse533/Asgn3_code) on minix I am providing you with two functions, get_hw_addrs and prhwaddrs. These are analogous to the get_ifi_info_plus and prifinfo_plus of Assignment 2. Like get_ifi_info_plus, get_hw_addrs uses ioctl. get_hw_addrs gets the (primary) IP address, alias IP addresses (if any), HW address, and interface name and index value for each of the node's interfaces (including the loopback interface lo). prhwaddrs prints that information out. You should modify and use these functions as needed. Note that if an interface has no HW address associated with it (this is, typically, the case for the loopback interface lo, for example), then ioctl returns get_hw_addrs a HW address which is the equivalent of 00:00:00:00:00:00.
get_hw_addrs stores this in the appropriate field of its data structures, as it would with any HW address returned by ioctl, but when prhwaddrs comes across such an address, it prints a blank line instead of its usual 'HWaddr = xx:xx:xx:xx:xx:xx'.

The ODR process creates one or more PF_PACKET sockets. You will need to try out PF_PACKET sockets for yourselves and familiarize yourselves with how they behave. If, when you read from the socket and provide a sockaddr_ll structure, the kernel returns to you the index of the interface on which the incoming frame was received, then one socket will be enough. Otherwise, somewhat in the manner of Assignment 2, you shall have to create a PF_PACKET socket for every interface of interest (which are all the interfaces of the node, excluding the interfaces lo and eth0), and bind a socket to each interface. Furthermore, if the kernel also returns to you the source Ethernet address of the frame in the sockaddr_ll structure, then you can make do with SOCK_DGRAM type PF_PACKET sockets; otherwise you shall have to use SOCK_RAW type sockets (although I would prefer you to use SOCK_RAW type sockets anyway, even if it turns out you can make do with SOCK_DGRAM type).

The socket(s) should have a protocol value (no larger than 0xffff so that it fits in two bytes; this value is given as a network-byte-order parameter in the call(s) to the function socket) that identifies your ODR protocol. The <linux/if_ether.h> include file (i.e., the file /usr/include/linux/if_ether.h) contains protocol values defined for the standard protocols typically found on an Ethernet LAN, as well as other values such as ETH_P_ALL. You should set protocol to a value of your choice which is not a <linux/if_ether.h> value, but which is, hopefully, unique to yourself. Remember that you will all be running your code using the same root account on the vm1, ..., vm10 nodes. So if two of you happen to choose the same protocol value and happen to be running on the same vm node at the same time, your applications will receive each other's frames. For that reason, try to choose a protocol value for the socket(s) that is likely to be unique to yourself (something based on your Stony Brook student ID number, for example). This value effectively becomes the protocol value for your implementation of ODR, as opposed to some other cse 533 student's implementation. Because your value of protocol is to be carried in the frame type field of the Ethernet frame header, the value chosen should be not less than 1536 (0x600) so that it is not misinterpreted as the length of an Ethernet 802.3 frame.

Note from the man pages for packet(7) that frames are passed to and from the socket without any processing of the frame content by the device driver on the other side of the socket, except for calculating and tagging on the 4-byte CRC trailer for outgoing frames, and stripping that trailer before delivering incoming frames to the socket. Nevertheless, if you write a frame that is less than 60 bytes, the necessary padding is automatically added by the device driver so that the frame that is actually transmitted out is the minimum Ethernet size of 64 bytes. When reading from the socket, however, any such padding that was introduced into a short frame at the sending node to bring it up to the minimum frame size is not stripped off - it is included in what you receive from the socket (thus, the minimum number of bytes you receive should never be less than 60).

Also, you will have to build the frame header for outgoing frames yourselves (assuming you use SOCK_RAW type sockets). Bear in mind that the field values in that header have to be in network order.
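As an illustration of the mechanics, here is a minimal sketch of opening a SOCK_RAW PF_PACKET socket and building an Ethernet header by hand. The assignment is in C; this Python sketch uses the same AF_PACKET/SOCK_RAW interface the kernel exposes, and the protocol value 0x3533 and interface name 'eth1' are arbitrary placeholders:

```python
import socket
import struct

ODR_PROTO = 0x3533            # placeholder; pick your own unique value >= 0x600
BCAST = b'\xff' * 6           # Ethernet broadcast, used when flooding RREQs

# one PF_PACKET / SOCK_RAW socket, bound to one interface of interest
s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ODR_PROTO))
s.bind(('eth1', 0))           # protocol 0 in bind keeps the socket's own protocol

def send_frame(dst_mac, src_mac, payload):
    # 14-byte Ethernet header: dst addr, src addr, frame type; struct's
    # '!' prefix keeps the type field in network byte order
    header = struct.pack('!6s6sH', dst_mac, src_mac, ODR_PROTO)
    s.send(header + payload)  # driver pads short frames up to the minimum size
```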
The ODR process also creates a domain datagram socket for communication with application processes at the node, and binds the socket to a 'well known' sun_path name for the ODR service.

Because it is dealing with fixed topologies, ODR is, by and large, considerably simpler than AODV. In particular, discovered routes are relatively stable, and there is no need for all the paraphernalia that goes with the possibility of routes changing (such as maintenance of active nodes in the routing tables and timeout mechanisms; timeouts on reverse links; the lifetime field in the RREP messages; etc.). Nor will we be implementing source_sequence_#s (in the RREQ messages) and dest_sequence_# (in RREQ and RREP messages). In reality, we should (though we will not, for the sake of simplicity, be doing so) implement some sort of sequence number mechanism, or some alternative mechanism such as split-horizon for example, if we are to avoid possible scenarios of routing loops in a "count to infinity" context (I shall explain this point in class). However, we want ODR to discover shortest-hop paths, and we want it to do so in a reasonably efficient manner. This necessitates having one or two aspects of its operations work in a different, possibly slightly more complicated, way than AODV does.

ODR has several basic responsibilities:

 - Build and maintain a routing table. For each destination in the table, the routing table structure should include, at a minimum, the next-hop node (in the form of the Ethernet address for that node) and outgoing interface index, the number of hops to the destination, and a timestamp of when the routing table entry was made or last "reconfirmed" / updated. Note that a destination node in the table is to be identified only by its 'canonical' IP address, and not by any other IP addresses the node has.

 - Generate a RREQ in response to a time client calling msg_send for a destination for which ODR has no route (or for which a route exists, but msg_send has the flag parameter set or the route has gone 'stale' - see below), and 'flood' the RREQ out on all the node's interfaces (except for the interface it came in on and, of course, the interfaces eth0 and lo). Flooding is done using an Ethernet broadcast destination address (0xff:ff:ff:ff:ff:ff) in the outgoing frame header. Note that a copy of the broadcast packet is supposed to / might be looped back to the node that sends it (see p. 535 in the Stevens textbook). ODR will have to take care not to treat these copies as new incoming RREQs. Also note that ODR at the client node increments the broadcast_id every time it issues a new RREQ for any destination node.

 - When a RREQ is received, ODR has to generate a RREP if it is at the destination node, or if it is at an intermediate node that happens to have a route (which is not 'stale' - see below) to the destination. Otherwise, it must propagate the RREQ by flooding it out on all the node's interfaces (except the interface the RREQ arrived on). Note that as it processes received RREQs, ODR should enter the 'reverse' route back to the source node into its routing table, or update an existing entry back to the source node if the RREQ received shows a shorter-hop route, or a route with the same number of hops but going through a different neighbour.
   The timestamp associated with the table entry should be updated whenever an existing route is either "reconfirmed" or updated. Obviously, if the node is going to generate a RREP, updating an existing entry back to the source node with a more efficient route, or a same-hops route using a different neighbour, should be done before the RREP is generated.

   Unlike AODV, when an intermediate node receives a RREQ for which it generates a RREP, it should nevertheless continue to flood the RREQ it received if the RREQ pertains to a source node whose existence it has heretofore been unaware of, or if the RREQ gives it a more efficient route than it knew of back to the source node (the reason for continuing to flood the RREQ is so that other nodes in the intranet also become aware of the existence of the source node, or of the potentially more optimal reverse route to it, and update their tables accordingly). However, since an RREP for this RREQ is being sent by our node, we do not want other nodes who receive the RREQ propagated by our node, and who might be in a position to do so, to also send RREPs. So we need to introduce a field in the RREQ message, not present in the AODV specifications, which acts like a "RREP already sent" field. Our node sets this field before further propagating the RREQ, and nodes receiving an RREQ with this field set do not send RREPs in response, even if they are in a position to do so.

   ODR may, of course, receive multiple, distinct instances of the same RREQ (the combination of source_addr and broadcast_id uniquely identifies the RREQ). Such RREQs should not be flooded out unless they have a lower hop count than instances of that RREQ that had previously been received. By the same token, if ODR is in a position to send out a RREP, and has already done so for this, now repeating, RREQ, it should not send out another RREP unless the RREQ shows a more efficient, previously unknown, reverse route back to the source node. In other words, ODR should not generate essentially duplicative RREPs, nor generate RREPs to instances of RREQs that reflect reverse routes to the source that are not more efficient than what we already have.

 - Relay RREPs received back to the source node (this is done using the 'reverse' route entered into the routing table when the corresponding RREQ was processed). At the same time, a 'forward' path to the destination is entered into the routing table. ODR could receive multiple, distinct RREPs for the same RREQ. The 'forward' route entered in the routing table should be updated to reflect the shortest-hop route to the destination, and RREPs reflecting suboptimal routes should not be relayed back to the source. In general, maintaining a route and its associated timestamp in the table in response to RREPs received is done in the same manner described above for RREQs.

 - Forward time client/server messages along the next hop. (The following is important - you will lose points if you do not implement it.) Note that such application payload messages (especially if they are the initial request from the client to the server, rather than the server response back to the client) can act like "free" RREPs, enabling nodes along the path from the source (client) node to the destination (server) node to build a reverse path back to the client node whose existence they were heretofore unaware of (or, possibly, to update an existing route with a more optimal one).
   Before it forwards an application payload message along the next hop, ODR at an intermediate node (and also at the final destination node) should use the message to update its routing table in this way. Thus, calls to msg_send by time servers should never cause ODR at the server node to initiate RREQs, since the receipt of a time client request implies that a route back to the client node should now exist in the routing table. The only exception to this is if the server node has a staleness parameter of zero (see below).

A routing table entry has associated with it a timestamp that gives the time the entry was made into the table. When a client at a node calls msg_send, and an entry for the destination node already exists in the routing table, ODR first checks that the routing information is not 'stale'. A stale routing table entry is one that is older than the value defined by the staleness parameter given as a command line argument to the ODR process when it is executed. ODR deletes stale entries (as well as non-stale entries when the flag parameter in msg_send is set) and initiates a route rediscovery by issuing a RREQ for the destination node. This will force periodic updating of the routing tables to take care of failed nodes along the current path, Ethernet addresses that might have changed, and so on. Similarly, as RREQs propagate through the intranet, existing stale table entries at intermediate nodes are deleted and new route discoveries propagated. As noted above when discussing the processing of RREQs and RREPs, the associated timestamp for an existing table entry is updated in response to having the route either "reconfirmed" or updated (this applies both to reverse routes, by virtue of RREQs received, and to forward routes, by virtue of RREPs).

Finally, note that a staleness parameter of 0 essentially indicates that a discovered route will be used only once, when first discovered, and then discarded. Effectively, an ODR with staleness parameter 0 maintains no real routing table at all; instead, it forces route discoveries at every step of its operation. As a practical matter, ODR should be run with staleness parameter values that are considerably larger than the longest RTT on the intranet, otherwise performance will degrade considerably (and collapse entirely as the parameter values approach 0). Nevertheless, for robustness, we need to implement a mechanism by which an intermediate node that receives a RREP or application payload message for forwarding, and finds that its relevant routing table entry has since gone stale, can initiate a RREQ to rediscover the route it needs.
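A minimal sketch of the staleness bookkeeping just described (shown in Python for brevity; the field names are illustrative, and the real table would live inside your C ODR process):

```python
import time

class RouteEntry:
    """One routing table entry, keyed elsewhere by 'canonical' destination IP."""
    def __init__(self, next_hop_mac, if_index, hops):
        self.next_hop_mac = next_hop_mac  # Ethernet address of the next-hop node
        self.if_index = if_index          # outgoing interface index
        self.hops = hops                  # number of hops to the destination
        self.timestamp = time.time()      # made / last "reconfirmed" or updated

def usable_route(table, dest_ip, staleness, force):
    """Return a live entry, or delete it and return None to trigger a RREQ."""
    entry = table.get(dest_ip)
    if entry is None:
        return None
    if force or time.time() - entry.timestamp > staleness:
        del table[dest_ip]                # stale (or forced rediscovery)
        return None
    return entry
```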
Intermediate nodes that are not the destination node but which do have a route to the destination node should not respond with RREPs to an RREQ which has the forced discovery field set. Instead, they should continue to flood the RREQ so that it eventually reaches the destination node, which will then respond with an RREP. The intermediate nodes relaying such an RREQ must update their ‘reverse’ route back to the source node accordingly, even if the new route is less efficient (i.e., has more hops) than the one they currently have in their routing table. The destination node responds to the RREQ with an RREP in which this field is also set. Intermediate nodes that receive such a forced discovery RREP must update their ‘forward’ route to the destination node accordingly, even if the new route is less efficient (i.e., has more hops) than the one they currently have in their routing table. This behaviour will cause a forced discovery RREQ to be responded to only by the destination node itself and not any other node, and will cause intermediate nodes to update their routing tables to both source and destination nodes in accordance with the latest routing information received, to cover the possibility that older routes are no longer valid because nodes and/or links along their paths have gone down.

A type 2, application payload, message needs to contain the following information:

type = 2
‘canonical’ IP address of source node
‘port’ number of source application process (This, of course, is not a real port number in the TCP/UDP sense, but simply a value that ODR at the source node uses to designate the sun_path name for the source application’s domain socket.)
‘canonical’ IP address of destination node
‘port’ number of destination application process (This is passed to ODR by the application process at the source node when it calls msg_send. It designates the sun_path name for an application’s domain socket at the destination node.)
hop count (This starts at 0 and is incremented by 1 at each hop so that ODR can make use of the message to update its routing table, as discussed above.)
number of bytes in application message

The fields above essentially constitute a ‘header’ for the ODR message. Note that fields which you choose to have carry numeric values (rather than ASCII characters, for example) must be in network byte order. ODR-defined numeric-valued fields in type 0, RREQ, and type 1, RREP, messages must, of course, also be in network byte order. Also note that only the ‘canonical’ IP addresses are used for the source and destination nodes in the ODR header. The same has to be true in the headers for type 0, RREQ, and type 1, RREP, messages. The general rule is that ODR messages only carry ‘canonical’ IP node addresses.

The last field in the type 2 ODR message is essentially the data payload of the message: the application message given in the call to msg_send.

An ODR protocol message is encapsulated as the data payload of an Ethernet frame whose header it fills in as follows:

source address = Ethernet address of the outgoing interface of the current node where ODR is processing the message.
destination address = Ethernet broadcast address for type 0 messages; Ethernet address of the next hop node for type 1 & 2 messages.
protocol field = protocol value for the ODR PF_PACKET socket(s).
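To make this layout concrete, here is one possible C rendering of an ODR message. This is only a minimal sketch: the names (odr_msg, ODR_MAX_PAYLOAD, the individual field names) are illustrative assumptions, not part of the specification, and you might instead choose separate structs per message type.

#include <stdint.h>

#define ODR_RREQ 0            /* route request */
#define ODR_RREP 1            /* route reply */
#define ODR_DATA 2            /* application payload */

#define ODR_MAX_PAYLOAD 1400  /* assumed cap, so the message fits in one Ethernet frame */

typedef struct {
  uint8_t  type;                      /* 0 = RREQ, 1 = RREP, 2 = application payload */
  uint8_t  rrep_sent;                 /* RREQ only: the "RREP already sent" flag */
  uint8_t  forced;                    /* RREQ/RREP: the "forced discovery" flag */
  char     src_ip[16];                /* 'canonical' IP of source node, dotted decimal */
  char     dst_ip[16];                /* 'canonical' IP of destination node */
  uint32_t broadcast_id;              /* RREQ: with src_ip, uniquely identifies the flood */
  uint32_t src_port;                  /* 'port' of source application process */
  uint32_t dst_port;                  /* 'port' of destination application process */
  uint32_t hop_count;                 /* starts at 0, incremented by 1 at each hop */
  uint32_t payload_len;               /* number of bytes in the application message */
  char     payload[ODR_MAX_PAYLOAD];  /* application message given to msg_send */
} odr_msg;

Remember that every multi-byte numeric field (broadcast_id, src_port, dst_port, hop_count, payload_len) must be converted with htonl() before the frame is written out, and back with ntohl() after it is read.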
Last but not least, whenever ODR writes an Ethernet frame out through its socket, it prints out on stdout the message

ODR at node vm_i1 : sending frame hdr src vm_i1 dest addr
ODR msg type n src vm_i2 dest vm_i3

where addr is in presentation format (i.e., hexadecimal xx:xx:xx:xx:xx:xx) and gives the destination Ethernet address in the outgoing frame header. Other nodes in the message should be identified by their vm name. A message should be printed out for each packet sent out on a distinct interface.

ODR and the API

When the ODR process first starts, it must construct a table in which it enters all well-known ‘port’ numbers and their corresponding sun_path names. These will constitute permanent entries in the table. Thereafter, whenever it reads a message off its domain socket, it must obtain the sun_path name for the peer process socket and check whether that name is entered in the table. If not, it must select an ‘ephemeral’ ‘port’ value by which to designate the peer sun_path name and enter the pair < port value , sun_path name > into the table. Such entries cannot be permanent, otherwise the table will grow unboundedly in time, with entries surviving forever, beyond the peer processes’ demise. We must associate a time_to_live field with a non-permanent table entry, and purge the entry if nothing is heard from the peer for that amount of time. Every time a peer process for which a non-permanent table entry exists communicates with ODR, its time_to_live value should be reinitialized. Note that when ODR writes to a peer, it is possible for the write to fail because the peer does not exist: it could be a ‘well-known’ service that is not running, or we could be in the interval between a process with a non-permanent table entry terminating and the expiration of its time_to_live value.

Notes

A proper implementation of ODR would probably require that RREQ and RREP messages be backed up by some kind of timeout and retransmission mechanism, since the network transmission environment is not reliable. This would considerably complicate the implementation (because at any given moment, a node could have multiple RREQs that it has flooded out, but for which it has still not received RREPs; the situation is further complicated by the fact that not all intermediate nodes receiving and relaying RREQs necessarily lie on a path to the destination, and therefore should expect to receive RREPs), and, learning-wise, would not add much to the experience you should have gained from Assignment 2.
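As a deliberately simplified sketch of the port-mapping table just described, one might keep a small linked list of entries like the following; all names here (port_entry, purge_stale, SUN_PATH_MAX) are assumptions for illustration, not mandated by the assignment:

#include <stdlib.h>
#include <time.h>

#define SUN_PATH_MAX 108          /* typical size of sun_path in struct sockaddr_un */

typedef struct port_entry {
  int    port;                    /* well-known or 'ephemeral' 'port' number */
  char   sun_path[SUN_PATH_MAX];  /* the peer's domain socket pathname */
  int    permanent;               /* 1 for well-known servers, 0 for ephemeral entries */
  time_t last_heard;              /* refreshed every time the peer talks to ODR */
  struct port_entry *next;        /* simple singly linked list */
} port_entry;

/* Remove non-permanent entries that have been silent longer than ttl seconds. */
void purge_stale( port_entry **head, time_t ttl )
{
  port_entry **pp = head;

  while ( *pp != NULL ) {
    if ( !(*pp)->permanent && time( NULL ) - (*pp)->last_heard > ttl ) {
      port_entry *dead = *pp;
      *pp = dead->next;           /* unlink and free the expired entry */
      free( dead );
    } else {
      pp = &(*pp)->next;
    }
  }
}

Calling purge_stale() periodically (or lazily, on each table lookup) keeps the table bounded, while resetting last_heard on every message from a peer implements the time_to_live refresh described above.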
Suraj520
A repository showcasing my upskilling in the theoretical and applied aspects of data science during an ongoing 23-month sabbatical (Jan. 2022 - Nov 2023*), along with handwritten notes. It is an "attempt" to pursue a master's in data science using the Internet as an open university.
Aryia-Behroziuan
Quickstart tutorial Prerequisites Before reading this tutorial you should know a bit of Python. If you would like to refresh your memory, take a look at the Python tutorial. If you wish to work through the examples in this tutorial, you must also have some software installed on your computer. Please see https://scipy.org/install.html for instructions. Learner profile This tutorial is intended as a quick overview of algebra and arrays in NumPy, for readers who want to understand how n-dimensional (n>=2) arrays are represented and can be manipulated. In particular, if you don’t know how to apply common functions to n-dimensional arrays (without using for-loops), or if you want to understand axis and shape properties for n-dimensional arrays, this tutorial might be of help. Learning Objectives After this tutorial, you should be able to: Understand the difference between one-, two- and n-dimensional arrays in NumPy; Understand how to apply some linear algebra operations to n-dimensional arrays without using for-loops; Understand axis and shape properties for n-dimensional arrays. The Basics NumPy’s main object is the homogeneous multidimensional array. It is a table of elements (usually numbers), all of the same type, indexed by a tuple of non-negative integers. In NumPy dimensions are called axes. For example, the coordinates of a point in 3D space [1, 2, 1] has one axis. That axis has 3 elements in it, so we say it has a length of 3. In the example pictured below, the array has 2 axes. The first axis has a length of 2, the second axis has a length of 3. [[ 1., 0., 0.], [ 0., 1., 2.]] NumPy’s array class is called ndarray. It is also known by the alias array. Note that numpy.array is not the same as the Standard Python Library class array.array, which only handles one-dimensional arrays and offers less functionality. The more important attributes of an ndarray object are: ndarray.ndim the number of axes (dimensions) of the array. ndarray.shape the dimensions of the array. This is a tuple of integers indicating the size of the array in each dimension. For a matrix with n rows and m columns, shape will be (n,m). The length of the shape tuple is therefore the number of axes, ndim. ndarray.size the total number of elements of the array. This is equal to the product of the elements of shape. ndarray.dtype an object describing the type of the elements in the array. One can create or specify dtype’s using standard Python types. Additionally NumPy provides types of its own. numpy.int32, numpy.int16, and numpy.float64 are some examples. ndarray.itemsize the size in bytes of each element of the array. For example, an array of elements of type float64 has itemsize 8 (=64/8), while one of type complex128 has itemsize 16 (=128/8). It is equivalent to ndarray.dtype.itemsize. ndarray.data the buffer containing the actual elements of the array. Normally, we won’t need to use this attribute because we will access the elements in an array using indexing facilities. An example

>>> import numpy as np
>>> a = np.arange(15).reshape(3, 5)
>>> a
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14]])
>>> a.shape
(3, 5)
>>> a.ndim
2
>>> a.dtype.name
'int64'
>>> a.itemsize
8
>>> a.size
15
>>> type(a)
<class 'numpy.ndarray'>
>>> b = np.array([6, 7, 8])
>>> b
array([6, 7, 8])
>>> type(b)
<class 'numpy.ndarray'>

Array Creation There are several ways to create arrays. For example, you can create an array from a regular Python list or tuple using the array function. The type of the resulting array is deduced from the type of the elements in the sequences.
>>> import numpy as np >>> a = np.array([2,3,4]) >>> a array([2, 3, 4]) >>> a.dtype dtype('int64') >>> b = np.array([1.2, 3.5, 5.1]) >>> b.dtype dtype('float64') A frequent error consists in calling array with multiple arguments, rather than providing a single sequence as an argument. >>> a = np.array(1,2,3,4) # WRONG Traceback (most recent call last): ... TypeError: array() takes from 1 to 2 positional arguments but 4 were given >>> a = np.array([1,2,3,4]) # RIGHT array transforms sequences of sequences into two-dimensional arrays, sequences of sequences of sequences into three-dimensional arrays, and so on. >>> b = np.array([(1.5,2,3), (4,5,6)]) >>> b array([[1.5, 2. , 3. ], [4. , 5. , 6. ]]) The type of the array can also be explicitly specified at creation time: >>> c = np.array( [ [1,2], [3,4] ], dtype=complex ) >>> c array([[1.+0.j, 2.+0.j], [3.+0.j, 4.+0.j]]) Often, the elements of an array are originally unknown, but its size is known. Hence, NumPy offers several functions to create arrays with initial placeholder content. These minimize the necessity of growing arrays, an expensive operation. The function zeros creates an array full of zeros, the function ones creates an array full of ones, and the function empty creates an array whose initial content is random and depends on the state of the memory. By default, the dtype of the created array is float64. >>> np.zeros((3, 4)) array([[0., 0., 0., 0.], [0., 0., 0., 0.], [0., 0., 0., 0.]]) >>> np.ones( (2,3,4), dtype=np.int16 ) # dtype can also be specified array([[[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]], [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]], dtype=int16) >>> np.empty( (2,3) ) # uninitialized array([[ 3.73603959e-262, 6.02658058e-154, 6.55490914e-260], # may vary [ 5.30498948e-313, 3.14673309e-307, 1.00000000e+000]]) To create sequences of numbers, NumPy provides the arange function which is analogous to the Python built-in range, but returns an array. >>> np.arange( 10, 30, 5 ) array([10, 15, 20, 25]) >>> np.arange( 0, 2, 0.3 ) # it accepts float arguments array([0. , 0.3, 0.6, 0.9, 1.2, 1.5, 1.8]) When arange is used with floating point arguments, it is generally not possible to predict the number of elements obtained, due to the finite floating point precision. For this reason, it is usually better to use the function linspace that receives as an argument the number of elements that we want, instead of the step: >>> from numpy import pi >>> np.linspace( 0, 2, 9 ) # 9 numbers from 0 to 2 array([0. , 0.25, 0.5 , 0.75, 1. , 1.25, 1.5 , 1.75, 2. ]) >>> x = np.linspace( 0, 2*pi, 100 ) # useful to evaluate function at lots of points >>> f = np.sin(x) See also array, zeros, zeros_like, ones, ones_like, empty, empty_like, arange, linspace, numpy.random.Generator.random, numpy.random.Generator.standard_normal, fromfunction, fromfile Printing Arrays When you print an array, NumPy displays it in a similar way to nested lists, but with the following layout: the last axis is printed from left to right, the second-to-last is printed from top to bottom, the rest are also printed from top to bottom, with each slice separated from the next by an empty line. One-dimensional arrays are then printed as rows, bidimensionals as matrices and tridimensionals as lists of matrices.
>>> >>> a = np.arange(6) # 1d array >>> print(a) [0 1 2 3 4 5] >>> >>> b = np.arange(12).reshape(4,3) # 2d array >>> print(b) [[ 0 1 2] [ 3 4 5] [ 6 7 8] [ 9 10 11]] >>> >>> c = np.arange(24).reshape(2,3,4) # 3d array >>> print(c) [[[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11]] [[12 13 14 15] [16 17 18 19] [20 21 22 23]]] See below to get more details on reshape. If an array is too large to be printed, NumPy automatically skips the central part of the array and only prints the corners: >>> >>> print(np.arange(10000)) [ 0 1 2 ... 9997 9998 9999] >>> >>> print(np.arange(10000).reshape(100,100)) [[ 0 1 2 ... 97 98 99] [ 100 101 102 ... 197 198 199] [ 200 201 202 ... 297 298 299] ... [9700 9701 9702 ... 9797 9798 9799] [9800 9801 9802 ... 9897 9898 9899] [9900 9901 9902 ... 9997 9998 9999]] To disable this behaviour and force NumPy to print the entire array, you can change the printing options using set_printoptions. >>> >>> np.set_printoptions(threshold=sys.maxsize) # sys module should be imported Basic Operations Arithmetic operators on arrays apply elementwise. A new array is created and filled with the result. >>> >>> a = np.array( [20,30,40,50] ) >>> b = np.arange( 4 ) >>> b array([0, 1, 2, 3]) >>> c = a-b >>> c array([20, 29, 38, 47]) >>> b**2 array([0, 1, 4, 9]) >>> 10*np.sin(a) array([ 9.12945251, -9.88031624, 7.4511316 , -2.62374854]) >>> a<35 array([ True, True, False, False]) Unlike in many matrix languages, the product operator * operates elementwise in NumPy arrays. The matrix product can be performed using the @ operator (in python >=3.5) or the dot function or method: >>> >>> A = np.array( [[1,1], ... [0,1]] ) >>> B = np.array( [[2,0], ... [3,4]] ) >>> A * B # elementwise product array([[2, 0], [0, 4]]) >>> A @ B # matrix product array([[5, 4], [3, 4]]) >>> A.dot(B) # another matrix product array([[5, 4], [3, 4]]) Some operations, such as += and *=, act in place to modify an existing array rather than create a new one. >>> >>> rg = np.random.default_rng(1) # create instance of default random number generator >>> a = np.ones((2,3), dtype=int) >>> b = rg.random((2,3)) >>> a *= 3 >>> a array([[3, 3, 3], [3, 3, 3]]) >>> b += a >>> b array([[3.51182162, 3.9504637 , 3.14415961], [3.94864945, 3.31183145, 3.42332645]]) >>> a += b # b is not automatically converted to integer type Traceback (most recent call last): ... numpy.core._exceptions.UFuncTypeError: Cannot cast ufunc 'add' output from dtype('float64') to dtype('int64') with casting rule 'same_kind' When operating with arrays of different types, the type of the resulting array corresponds to the more general or precise one (a behavior known as upcasting). >>> >>> a = np.ones(3, dtype=np.int32) >>> b = np.linspace(0,pi,3) >>> b.dtype.name 'float64' >>> c = a+b >>> c array([1. , 2.57079633, 4.14159265]) >>> c.dtype.name 'float64' >>> d = np.exp(c*1j) >>> d array([ 0.54030231+0.84147098j, -0.84147098+0.54030231j, -0.54030231-0.84147098j]) >>> d.dtype.name 'complex128' Many unary operations, such as computing the sum of all the elements in the array, are implemented as methods of the ndarray class. >>> >>> a = rg.random((2,3)) >>> a array([[0.82770259, 0.40919914, 0.54959369], [0.02755911, 0.75351311, 0.53814331]]) >>> a.sum() 3.1057109529998157 >>> a.min() 0.027559113243068367 >>> a.max() 0.8277025938204418 By default, these operations apply to the array as though it were a list of numbers, regardless of its shape. 
However, by specifying the axis parameter you can apply an operation along the specified axis of an array: >>> b = np.arange(12).reshape(3,4) >>> b array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> b.sum(axis=0) # sum of each column array([12, 15, 18, 21]) >>> b.min(axis=1) # min of each row array([0, 4, 8]) >>> b.cumsum(axis=1) # cumulative sum along each row array([[ 0, 1, 3, 6], [ 4, 9, 15, 22], [ 8, 17, 27, 38]]) Universal Functions NumPy provides familiar mathematical functions such as sin, cos, and exp. In NumPy, these are called “universal functions” (ufunc). Within NumPy, these functions operate elementwise on an array, producing an array as output. >>> B = np.arange(3) >>> B array([0, 1, 2]) >>> np.exp(B) array([1. , 2.71828183, 7.3890561 ]) >>> np.sqrt(B) array([0. , 1. , 1.41421356]) >>> C = np.array([2., -1., 4.]) >>> np.add(B, C) array([2., 0., 6.]) See also all, any, apply_along_axis, argmax, argmin, argsort, average, bincount, ceil, clip, conj, corrcoef, cov, cross, cumprod, cumsum, diff, dot, floor, inner, invert, lexsort, max, maximum, mean, median, min, minimum, nonzero, outer, prod, re, round, sort, std, sum, trace, transpose, var, vdot, vectorize, where Indexing, Slicing and Iterating One-dimensional arrays can be indexed, sliced and iterated over, much like lists and other Python sequences. >>> a = np.arange(10)**3 >>> a array([ 0, 1, 8, 27, 64, 125, 216, 343, 512, 729]) >>> a[2] 8 >>> a[2:5] array([ 8, 27, 64]) >>> a[:6:2] = 1000 # equivalent to a[0:6:2] = 1000; from start to position 6, exclusive, set every 2nd element to 1000 >>> a array([1000, 1, 1000, 27, 1000, 125, 216, 343, 512, 729]) >>> a[ : :-1] # reversed a array([ 729, 512, 343, 216, 125, 1000, 27, 1000, 1, 1000]) >>> for i in a: ... print(i**(1/3.)) ... 9.999999999999998 1.0 9.999999999999998 3.0 9.999999999999998 4.999999999999999 5.999999999999999 6.999999999999999 7.999999999999999 8.999999999999998 Multidimensional arrays can have one index per axis. These indices are given in a tuple separated by commas: >>> def f(x,y): ... return 10*x+y ... >>> b = np.fromfunction(f,(5,4),dtype=int) >>> b array([[ 0, 1, 2, 3], [10, 11, 12, 13], [20, 21, 22, 23], [30, 31, 32, 33], [40, 41, 42, 43]]) >>> b[2,3] 23 >>> b[0:5, 1] # each row in the second column of b array([ 1, 11, 21, 31, 41]) >>> b[ : ,1] # equivalent to the previous example array([ 1, 11, 21, 31, 41]) >>> b[1:3, : ] # each column in the second and third row of b array([[10, 11, 12, 13], [20, 21, 22, 23]]) When fewer indices are provided than the number of axes, the missing indices are considered complete slices: >>> b[-1] # the last row. Equivalent to b[-1,:] array([40, 41, 42, 43]) The expression within brackets in b[i] is treated as an i followed by as many instances of : as needed to represent the remaining axes. NumPy also allows you to write this using dots as b[i,...]. The dots (...) represent as many colons as needed to produce a complete indexing tuple. For example, if x is an array with 5 axes, then x[1,2,...] is equivalent to x[1,2,:,:,:], x[...,3] to x[:,:,:,:,3] and x[4,...,5,:] to x[4,:,:,5,:]. >>> c = np.array( [[[ 0, 1, 2], # a 3D array (two stacked 2D arrays) ... [ 10, 12, 13]], ... [[100,101,102], ... [110,112,113]]]) >>> c.shape (2, 2, 3) >>> c[1,...] # same as c[1,:,:] or c[1] array([[100, 101, 102], [110, 112, 113]]) >>> c[...,2] # same as c[:,:,2] array([[ 2, 13], [102, 113]]) Iterating over multidimensional arrays is done with respect to the first axis: >>> for row in b: ...
print(row) ... [0 1 2 3] [10 11 12 13] [20 21 22 23] [30 31 32 33] [40 41 42 43] However, if one wants to perform an operation on each element in the array, one can use the flat attribute which is an iterator over all the elements of the array: >>> >>> for element in b.flat: ... print(element) ... 0 1 2 3 10 11 12 13 20 21 22 23 30 31 32 33 40 41 42 43 See also Indexing, Indexing (reference), newaxis, ndenumerate, indices Shape Manipulation Changing the shape of an array An array has a shape given by the number of elements along each axis: >>> >>> a = np.floor(10*rg.random((3,4))) >>> a array([[3., 7., 3., 4.], [1., 4., 2., 2.], [7., 2., 4., 9.]]) >>> a.shape (3, 4) The shape of an array can be changed with various commands. Note that the following three commands all return a modified array, but do not change the original array: >>> >>> a.ravel() # returns the array, flattened array([3., 7., 3., 4., 1., 4., 2., 2., 7., 2., 4., 9.]) >>> a.reshape(6,2) # returns the array with a modified shape array([[3., 7.], [3., 4.], [1., 4.], [2., 2.], [7., 2.], [4., 9.]]) >>> a.T # returns the array, transposed array([[3., 1., 7.], [7., 4., 2.], [3., 2., 4.], [4., 2., 9.]]) >>> a.T.shape (4, 3) >>> a.shape (3, 4) The order of the elements in the array resulting from ravel() is normally “C-style”, that is, the rightmost index “changes the fastest”, so the element after a[0,0] is a[0,1]. If the array is reshaped to some other shape, again the array is treated as “C-style”. NumPy normally creates arrays stored in this order, so ravel() will usually not need to copy its argument, but if the array was made by taking slices of another array or created with unusual options, it may need to be copied. The functions ravel() and reshape() can also be instructed, using an optional argument, to use FORTRAN-style arrays, in which the leftmost index changes the fastest. The reshape function returns its argument with a modified shape, whereas the ndarray.resize method modifies the array itself: >>> >>> a array([[3., 7., 3., 4.], [1., 4., 2., 2.], [7., 2., 4., 9.]]) >>> a.resize((2,6)) >>> a array([[3., 7., 3., 4., 1., 4.], [2., 2., 7., 2., 4., 9.]]) If a dimension is given as -1 in a reshaping operation, the other dimensions are automatically calculated: >>> >>> a.reshape(3,-1) array([[3., 7., 3., 4.], [1., 4., 2., 2.], [7., 2., 4., 9.]]) See also ndarray.shape, reshape, resize, ravel Stacking together different arrays Several arrays can be stacked together along different axes: >>> >>> a = np.floor(10*rg.random((2,2))) >>> a array([[9., 7.], [5., 2.]]) >>> b = np.floor(10*rg.random((2,2))) >>> b array([[1., 9.], [5., 1.]]) >>> np.vstack((a,b)) array([[9., 7.], [5., 2.], [1., 9.], [5., 1.]]) >>> np.hstack((a,b)) array([[9., 7., 1., 9.], [5., 2., 5., 1.]]) The function column_stack stacks 1D arrays as columns into a 2D array. It is equivalent to hstack only for 2D arrays: >>> >>> from numpy import newaxis >>> np.column_stack((a,b)) # with 2D arrays array([[9., 7., 1., 9.], [5., 2., 5., 1.]]) >>> a = np.array([4.,2.]) >>> b = np.array([3.,8.]) >>> np.column_stack((a,b)) # returns a 2D array array([[4., 3.], [2., 8.]]) >>> np.hstack((a,b)) # the result is different array([4., 2., 3., 8.]) >>> a[:,newaxis] # view `a` as a 2D column vector array([[4.], [2.]]) >>> np.column_stack((a[:,newaxis],b[:,newaxis])) array([[4., 3.], [2., 8.]]) >>> np.hstack((a[:,newaxis],b[:,newaxis])) # the result is the same array([[4., 3.], [2., 8.]]) On the other hand, the function row_stack is equivalent to vstack for any input arrays. 
In fact, row_stack is an alias for vstack: >>> np.column_stack is np.hstack False >>> np.row_stack is np.vstack True In general, for arrays with more than two dimensions, hstack stacks along their second axes, vstack stacks along their first axes, and concatenate allows for an optional argument giving the number of the axis along which the concatenation should happen. Note In complex cases, r_ and c_ are useful for creating arrays by stacking numbers along one axis. They allow the use of range literals (“:”). >>> np.r_[1:4,0,4] array([1, 2, 3, 0, 4]) When used with arrays as arguments, r_ and c_ are similar to vstack and hstack in their default behavior, but allow for an optional argument giving the number of the axis along which to concatenate. See also hstack, vstack, column_stack, concatenate, c_, r_ Splitting one array into several smaller ones Using hsplit, you can split an array along its horizontal axis, either by specifying the number of equally shaped arrays to return, or by specifying the columns after which the division should occur: >>> a = np.floor(10*rg.random((2,12))) >>> a array([[6., 7., 6., 9., 0., 5., 4., 0., 6., 8., 5., 2.], [8., 5., 5., 7., 1., 8., 6., 7., 1., 8., 1., 0.]]) # Split a into 3 >>> np.hsplit(a,3) [array([[6., 7., 6., 9.], [8., 5., 5., 7.]]), array([[0., 5., 4., 0.], [1., 8., 6., 7.]]), array([[6., 8., 5., 2.], [1., 8., 1., 0.]])] # Split a after the third and the fourth column >>> np.hsplit(a,(3,4)) [array([[6., 7., 6.], [8., 5., 5.]]), array([[9.], [7.]]), array([[0., 5., 4., 0., 6., 8., 5., 2.], [1., 8., 6., 7., 1., 8., 1., 0.]])] vsplit splits along the vertical axis, and array_split allows one to specify along which axis to split. Copies and Views When operating and manipulating arrays, their data is sometimes copied into a new array and sometimes not. This is often a source of confusion for beginners. There are three cases: No Copy at All Simple assignments make no copy of objects or their data. >>> a = np.array([[ 0, 1, 2, 3], ... [ 4, 5, 6, 7], ... [ 8, 9, 10, 11]]) >>> b = a # no new object is created >>> b is a # a and b are two names for the same ndarray object True Python passes mutable objects as references, so function calls make no copy. >>> def f(x): ... print(id(x)) ... >>> id(a) # id is a unique identifier of an object 148293216 # may vary >>> f(a) 148293216 # may vary View or Shallow Copy Different array objects can share the same data. The view method creates a new array object that looks at the same data. >>> c = a.view() >>> c is a False >>> c.base is a # c is a view of the data owned by a True >>> c.flags.owndata False >>> c = c.reshape((2, 6)) # a's shape doesn't change >>> a.shape (3, 4) >>> c[0, 4] = 1234 # a's data changes >>> a array([[ 0, 1, 2, 3], [1234, 5, 6, 7], [ 8, 9, 10, 11]]) Slicing an array returns a view of it: >>> s = a[ : , 1:3] # spaces added for clarity; could also be written "s = a[:, 1:3]" >>> s[:] = 10 # s[:] is a view of s. Note the difference between s = 10 and s[:] = 10 >>> a array([[ 0, 10, 10, 3], [1234, 10, 10, 7], [ 8, 10, 10, 11]]) Deep Copy The copy method makes a complete copy of the array and its data. >>> d = a.copy() # a new array object with new data is created >>> d is a False >>> d.base is a # d doesn't share anything with a False >>> d[0,0] = 9999 >>> a array([[ 0, 10, 10, 3], [1234, 10, 10, 7], [ 8, 10, 10, 11]]) Sometimes copy should be called after slicing if the original array is not required anymore.
For example, if a is a huge intermediate result and the final result b only contains a small fraction of a, then a deep copy should be made when constructing b with slicing: >>> a = np.arange(int(1e8)) >>> b = a[:100].copy() >>> del a # the memory of ``a`` can be released. If b = a[:100] is used instead, a is referenced by b and will persist in memory even if del a is executed. Functions and Methods Overview Here is a list of some useful NumPy function and method names ordered in categories. See Routines for the full list. Array Creation arange, array, copy, empty, empty_like, eye, fromfile, fromfunction, identity, linspace, logspace, mgrid, ogrid, ones, ones_like, r_, zeros, zeros_like Conversions ndarray.astype, atleast_1d, atleast_2d, atleast_3d, mat Manipulations array_split, column_stack, concatenate, diagonal, dsplit, dstack, hsplit, hstack, ndarray.item, newaxis, ravel, repeat, reshape, resize, squeeze, swapaxes, take, transpose, vsplit, vstack Questions all, any, nonzero, where Ordering argmax, argmin, argsort, max, min, ptp, searchsorted, sort Operations choose, compress, cumprod, cumsum, inner, ndarray.fill, imag, prod, put, putmask, real, sum Basic Statistics cov, mean, std, var Basic Linear Algebra cross, dot, outer, linalg.svd, vdot Less Basic Broadcasting rules Broadcasting allows universal functions to deal in a meaningful way with inputs that do not have exactly the same shape. The first rule of broadcasting is that if all input arrays do not have the same number of dimensions, a “1” will be repeatedly prepended to the shapes of the smaller arrays until all the arrays have the same number of dimensions. The second rule of broadcasting ensures that arrays with a size of 1 along a particular dimension act as if they had the size of the array with the largest shape along that dimension. The value of the array element is assumed to be the same along that dimension for the “broadcast” array. After application of the broadcasting rules, the sizes of all arrays must match. More details can be found in Broadcasting. Advanced indexing and index tricks NumPy offers more indexing facilities than regular Python sequences. In addition to indexing by integers and slices, as we saw before, arrays can be indexed by arrays of integers and arrays of booleans. Indexing with Arrays of Indices >>> a = np.arange(12)**2 # the first 12 square numbers >>> i = np.array([1, 1, 3, 8, 5]) # an array of indices >>> a[i] # the elements of a at the positions i array([ 1, 1, 9, 64, 25]) >>> j = np.array([[3, 4], [9, 7]]) # a bidimensional array of indices >>> a[j] # the same shape as j array([[ 9, 16], [81, 49]]) When the indexed array a is multidimensional, a single array of indices refers to the first dimension of a. The following example shows this behavior by converting an image of labels into a color image using a palette. >>> palette = np.array([[0, 0, 0], # black ... [255, 0, 0], # red ... [0, 255, 0], # green ... [0, 0, 255], # blue ... [255, 255, 255]]) # white >>> image = np.array([[0, 1, 2, 0], # each value corresponds to a color in the palette ... [0, 3, 4, 0]]) >>> palette[image] # the (2, 4, 3) color image array([[[ 0, 0, 0], [255, 0, 0], [ 0, 255, 0], [ 0, 0, 0]], [[ 0, 0, 0], [ 0, 0, 255], [255, 255, 255], [ 0, 0, 0]]]) We can also give indexes for more than one dimension. The arrays of indices for each dimension must have the same shape.
>>> >>> a = np.arange(12).reshape(3,4) >>> a array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> i = np.array([[0, 1], # indices for the first dim of a ... [1, 2]]) >>> j = np.array([[2, 1], # indices for the second dim ... [3, 3]]) >>> >>> a[i, j] # i and j must have equal shape array([[ 2, 5], [ 7, 11]]) >>> >>> a[i, 2] array([[ 2, 6], [ 6, 10]]) >>> >>> a[:, j] # i.e., a[ : , j] array([[[ 2, 1], [ 3, 3]], [[ 6, 5], [ 7, 7]], [[10, 9], [11, 11]]]) In Python, arr[i, j] is exactly the same as arr[(i, j)]—so we can put i and j in a tuple and then do the indexing with that. >>> >>> l = (i, j) # equivalent to a[i, j] >>> a[l] array([[ 2, 5], [ 7, 11]]) However, we can not do this by putting i and j into an array, because this array will be interpreted as indexing the first dimension of a. >>> >>> s = np.array([i, j]) # not what we want >>> a[s] Traceback (most recent call last): File "<stdin>", line 1, in <module> IndexError: index 3 is out of bounds for axis 0 with size 3 # same as a[i, j] >>> a[tuple(s)] array([[ 2, 5], [ 7, 11]]) Another common use of indexing with arrays is the search of the maximum value of time-dependent series: >>> >>> time = np.linspace(20, 145, 5) # time scale >>> data = np.sin(np.arange(20)).reshape(5,4) # 4 time-dependent series >>> time array([ 20. , 51.25, 82.5 , 113.75, 145. ]) >>> data array([[ 0. , 0.84147098, 0.90929743, 0.14112001], [-0.7568025 , -0.95892427, -0.2794155 , 0.6569866 ], [ 0.98935825, 0.41211849, -0.54402111, -0.99999021], [-0.53657292, 0.42016704, 0.99060736, 0.65028784], [-0.28790332, -0.96139749, -0.75098725, 0.14987721]]) # index of the maxima for each series >>> ind = data.argmax(axis=0) >>> ind array([2, 0, 3, 1]) # times corresponding to the maxima >>> time_max = time[ind] >>> >>> data_max = data[ind, range(data.shape[1])] # => data[ind[0],0], data[ind[1],1]... >>> time_max array([ 82.5 , 20. , 113.75, 51.25]) >>> data_max array([0.98935825, 0.84147098, 0.99060736, 0.6569866 ]) >>> np.all(data_max == data.max(axis=0)) True You can also use indexing with arrays as a target to assign to: >>> >>> a = np.arange(5) >>> a array([0, 1, 2, 3, 4]) >>> a[[1,3,4]] = 0 >>> a array([0, 0, 2, 0, 0]) However, when the list of indices contains repetitions, the assignment is done several times, leaving behind the last value: >>> >>> a = np.arange(5) >>> a[[0,0,2]]=[1,2,3] >>> a array([2, 1, 3, 3, 4]) This is reasonable enough, but watch out if you want to use Python’s += construct, as it may not do what you expect: >>> >>> a = np.arange(5) >>> a[[0,0,2]]+=1 >>> a array([1, 1, 3, 3, 4]) Even though 0 occurs twice in the list of indices, the 0th element is only incremented once. This is because Python requires “a+=1” to be equivalent to “a = a + 1”. Indexing with Boolean Arrays When we index arrays with arrays of (integer) indices we are providing the list of indices to pick. With boolean indices the approach is different; we explicitly choose which items in the array we want and which ones we don’t. 
The most natural way one can think of for boolean indexing is to use boolean arrays that have the same shape as the original array: >>> a = np.arange(12).reshape(3,4) >>> b = a > 4 >>> b # b is a boolean with a's shape array([[False, False, False, False], [False, True, True, True], [ True, True, True, True]]) >>> a[b] # 1d array with the selected elements array([ 5, 6, 7, 8, 9, 10, 11]) This property can be very useful in assignments: >>> a[b] = 0 # All elements of 'a' higher than 4 become 0 >>> a array([[0, 1, 2, 3], [4, 0, 0, 0], [0, 0, 0, 0]]) You can look at the following example to see how to use boolean indexing to generate an image of the Mandelbrot set:

>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> def mandelbrot( h, w, maxit=20 ):
...     """Returns an image of the Mandelbrot fractal of size (h,w)."""
...     y, x = np.ogrid[ -1.4:1.4:h*1j, -2:0.8:w*1j ]
...     c = x + y*1j
...     z = c
...     divtime = maxit + np.zeros(z.shape, dtype=int)
...     for i in range(maxit):
...         z = z**2 + c
...         diverge = z*np.conj(z) > 2**2          # who is diverging
...         div_now = diverge & (divtime==maxit)   # who is diverging now
...         divtime[div_now] = i                   # note when
...         z[diverge] = 2                         # avoid diverging too much
...     return divtime
>>> plt.imshow(mandelbrot(400,400))

[figure: the resulting image of the Mandelbrot set] The second way of indexing with booleans is more similar to integer indexing; for each dimension of the array we give a 1D boolean array selecting the slices we want: >>> a = np.arange(12).reshape(3,4) >>> b1 = np.array([False,True,True]) # first dim selection >>> b2 = np.array([True,False,True,False]) # second dim selection >>> a[b1,:] # selecting rows array([[ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> a[b1] # same thing array([[ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> a[:,b2] # selecting columns array([[ 0, 2], [ 4, 6], [ 8, 10]]) >>> a[b1,b2] # a weird thing to do array([ 4, 10]) Note that the length of the 1D boolean array must coincide with the length of the dimension (or axis) you want to slice. In the previous example, b1 has length 3 (the number of rows in a), and b2 (of length 4) is suitable to index the 2nd axis (columns) of a. The ix_() function The ix_ function can be used to combine different vectors so as to obtain the result for each n-uplet. For example, if you want to compute all the a+b*c for all the triplets taken from each of the vectors a, b and c: >>> a = np.array([2,3,4,5]) >>> b = np.array([8,5,4]) >>> c = np.array([5,4,6,8,3]) >>> ax,bx,cx = np.ix_(a,b,c) >>> ax array([[[2]], [[3]], [[4]], [[5]]]) >>> bx array([[[8], [5], [4]]]) >>> cx array([[[5, 4, 6, 8, 3]]]) >>> ax.shape, bx.shape, cx.shape ((4, 1, 1), (1, 3, 1), (1, 1, 5)) >>> result = ax+bx*cx >>> result array([[[42, 34, 50, 66, 26], [27, 22, 32, 42, 17], [22, 18, 26, 34, 14]], [[43, 35, 51, 67, 27], [28, 23, 33, 43, 18], [23, 19, 27, 35, 15]], [[44, 36, 52, 68, 28], [29, 24, 34, 44, 19], [24, 20, 28, 36, 16]], [[45, 37, 53, 69, 29], [30, 25, 35, 45, 20], [25, 21, 29, 37, 17]]]) >>> result[3,2,4] 17 >>> a[3]+b[2]*c[4] 17 You could also implement the reduce as follows: >>> def ufunc_reduce(ufct, *vectors): ... vs = np.ix_(*vectors) ... r = ufct.identity ... for v in vs: ... r = ufct(r,v) ...
return r and then use it as: >>> ufunc_reduce(np.add,a,b,c) array([[[15, 14, 16, 18, 13], [12, 11, 13, 15, 10], [11, 10, 12, 14, 9]], [[16, 15, 17, 19, 14], [13, 12, 14, 16, 11], [12, 11, 13, 15, 10]], [[17, 16, 18, 20, 15], [14, 13, 15, 17, 12], [13, 12, 14, 16, 11]], [[18, 17, 19, 21, 16], [15, 14, 16, 18, 13], [14, 13, 15, 17, 12]]]) The advantage of this version of reduce compared to the normal ufunc.reduce is that it makes use of the Broadcasting Rules in order to avoid creating an argument array the size of the output times the number of vectors. Indexing with strings See Structured arrays. Linear Algebra Work in progress. Basic linear algebra to be included here. Simple Array Operations See linalg.py in the numpy folder for more. >>> import numpy as np >>> a = np.array([[1.0, 2.0], [3.0, 4.0]]) >>> print(a) [[1. 2.] [3. 4.]] >>> a.transpose() array([[1., 3.], [2., 4.]]) >>> np.linalg.inv(a) array([[-2. , 1. ], [ 1.5, -0.5]]) >>> u = np.eye(2) # unit 2x2 matrix; "eye" represents "I" >>> u array([[1., 0.], [0., 1.]]) >>> j = np.array([[0.0, -1.0], [1.0, 0.0]]) >>> j @ j # matrix product array([[-1., 0.], [ 0., -1.]]) >>> np.trace(u) # trace 2.0 >>> y = np.array([[5.], [7.]]) >>> np.linalg.solve(a, y) array([[-3.], [ 4.]]) >>> np.linalg.eig(j) (array([0.+1.j, 0.-1.j]), array([[0.70710678+0.j , 0.70710678-0.j ], [0. -0.70710678j, 0. +0.70710678j]])) (For a square matrix, np.linalg.eig returns the eigenvalues, each repeated according to its multiplicity, together with the normalized (unit "length") eigenvectors, such that the column v[:,i] is the eigenvector corresponding to the eigenvalue w[i].) Tricks and Tips Here we give a list of short and useful tips. “Automatic” Reshaping To change the dimensions of an array, you can omit one of the sizes which will then be deduced automatically: >>> a = np.arange(30) >>> b = a.reshape((2, -1, 3)) # -1 means "whatever is needed" >>> b.shape (2, 5, 3) >>> b array([[[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11], [12, 13, 14]], [[15, 16, 17], [18, 19, 20], [21, 22, 23], [24, 25, 26], [27, 28, 29]]]) Vector Stacking How do we construct a 2D array from a list of equally-sized row vectors? In MATLAB this is quite easy: if x and y are two vectors of the same length you only need do m=[x;y]. In NumPy this works via the functions column_stack, dstack, hstack and vstack, depending on the dimension in which the stacking is to be done. For example: >>> x = np.arange(0,10,2) >>> y = np.arange(5) >>> m = np.vstack([x,y]) >>> m array([[0, 2, 4, 6, 8], [0, 1, 2, 3, 4]]) >>> xy = np.hstack([x,y]) >>> xy array([0, 2, 4, 6, 8, 0, 1, 2, 3, 4]) The logic behind those functions in more than two dimensions can be strange. See also NumPy for Matlab users Histograms The NumPy histogram function applied to an array returns a pair of vectors: the histogram of the array and a vector of the bin edges. Beware: matplotlib also has a function to build histograms (called hist, as in Matlab) that differs from the one in NumPy. The main difference is that pylab.hist plots the histogram automatically, while numpy.histogram only generates the data.
>>> import numpy as np
>>> rg = np.random.default_rng(1)
>>> import matplotlib.pyplot as plt
>>> # Build a vector of 10000 normal deviates with variance 0.5^2 and mean 2
>>> mu, sigma = 2, 0.5
>>> v = rg.normal(mu,sigma,10000)
>>> # Plot a normalized histogram with 50 bins
>>> plt.hist(v, bins=50, density=1)       # matplotlib version (plot)
>>> # Compute the histogram with numpy and then plot it
>>> (n, bins) = np.histogram(v, bins=50, density=True)  # NumPy version (no plot)
>>> plt.plot(.5*(bins[1:]+bins[:-1]), n)

[figure: the resulting histogram plot] Further reading The Python tutorial NumPy Reference SciPy Tutorial SciPy Lecture Notes A matlab, R, IDL, NumPy/SciPy dictionary
anujkumarthakur
Introduction Note: This edition of the book is the same as The Rust Programming Language available in print and ebook format from No Starch Press. Welcome to The Rust Programming Language, an introductory book about Rust. The Rust programming language helps you write faster, more reliable software. High-level ergonomics and low-level control are often at odds in programming language design; Rust challenges that conflict. Through balancing powerful technical capacity and a great developer experience, Rust gives you the option to control low-level details (such as memory usage) without all the hassle traditionally associated with such control. Who Rust Is For Rust is ideal for many people for a variety of reasons. Let’s look at a few of the most important groups. Teams of Developers Rust is proving to be a productive tool for collaborating among large teams of developers with varying levels of systems programming knowledge. Low-level code is prone to a variety of subtle bugs, which in most other languages can be caught only through extensive testing and careful code review by experienced developers. In Rust, the compiler plays a gatekeeper role by refusing to compile code with these elusive bugs, including concurrency bugs. By working alongside the compiler, the team can spend their time focusing on the program’s logic rather than chasing down bugs. Rust also brings contemporary developer tools to the systems programming world: Cargo, the included dependency manager and build tool, makes adding, compiling, and managing dependencies painless and consistent across the Rust ecosystem. Rustfmt ensures a consistent coding style across developers. The Rust Language Server powers Integrated Development Environment (IDE) integration for code completion and inline error messages. By using these and other tools in the Rust ecosystem, developers can be productive while writing systems-level code. Students Rust is for students and those who are interested in learning about systems concepts. Using Rust, many people have learned about topics like operating systems development. The community is very welcoming and happy to answer student questions. Through efforts such as this book, the Rust teams want to make systems concepts more accessible to more people, especially those new to programming. Companies Hundreds of companies, large and small, use Rust in production for a variety of tasks. Those tasks include command line tools, web services, DevOps tooling, embedded devices, audio and video analysis and transcoding, cryptocurrencies, bioinformatics, search engines, Internet of Things applications, machine learning, and even major parts of the Firefox web browser. Open Source Developers Rust is for people who want to build the Rust programming language, community, developer tools, and libraries. We’d love to have you contribute to the Rust language. People Who Value Speed and Stability Rust is for people who crave speed and stability in a language. By speed, we mean the speed of the programs that you can create with Rust and the speed at which Rust lets you write them. The Rust compiler’s checks ensure stability through feature additions and refactoring. This is in contrast to the brittle legacy code in languages without these checks, which developers are often afraid to modify. By striving for zero-cost abstractions, higher-level features that compile to lower-level code as fast as code written manually, Rust endeavors to make safe code be fast code as well. 
The Rust language hopes to support many other users as well; those mentioned here are merely some of the biggest stakeholders. Overall, Rust’s greatest ambition is to eliminate the trade-offs that programmers have accepted for decades by providing safety and productivity, speed and ergonomics. Give Rust a try and see if its choices work for you. Who This Book Is For This book assumes that you’ve written code in another programming language but doesn’t make any assumptions about which one. We’ve tried to make the material broadly accessible to those from a wide variety of programming backgrounds. We don’t spend a lot of time talking about what programming is or how to think about it. If you’re entirely new to programming, you would be better served by reading a book that specifically provides an introduction to programming. How to Use This Book In general, this book assumes that you’re reading it in sequence from front to back. Later chapters build on concepts in earlier chapters, and earlier chapters might not delve into details on a topic; we typically revisit the topic in a later chapter. You’ll find two kinds of chapters in this book: concept chapters and project chapters. In concept chapters, you’ll learn about an aspect of Rust. In project chapters, we’ll build small programs together, applying what you’ve learned so far. Chapters 2, 12, and 20 are project chapters; the rest are concept chapters. Chapter 1 explains how to install Rust, how to write a Hello, world! program, and how to use Cargo, Rust’s package manager and build tool. Chapter 2 is a hands-on introduction to the Rust language. Here we cover concepts at a high level, and later chapters will provide additional detail. If you want to get your hands dirty right away, Chapter 2 is the place for that. At first, you might even want to skip Chapter 3, which covers Rust features similar to those of other programming languages, and head straight to Chapter 4 to learn about Rust’s ownership system. However, if you’re a particularly meticulous learner who prefers to learn every detail before moving on to the next, you might want to skip Chapter 2 and go straight to Chapter 3, returning to Chapter 2 when you’d like to work on a project applying the details you’ve learned. Chapter 5 discusses structs and methods, and Chapter 6 covers enums, match expressions, and the if let control flow construct. You’ll use structs and enums to make custom types in Rust. In Chapter 7, you’ll learn about Rust’s module system and about privacy rules for organizing your code and its public Application Programming Interface (API). Chapter 8 discusses some common collection data structures that the standard library provides, such as vectors, strings, and hash maps. Chapter 9 explores Rust’s error-handling philosophy and techniques. Chapter 10 digs into generics, traits, and lifetimes, which give you the power to define code that applies to multiple types. Chapter 11 is all about testing, which even with Rust’s safety guarantees is necessary to ensure your program’s logic is correct. In Chapter 12, we’ll build our own implementation of a subset of functionality from the grep command line tool that searches for text within files. For this, we’ll use many of the concepts we discussed in the previous chapters. Chapter 13 explores closures and iterators: features of Rust that come from functional programming languages. In Chapter 14, we’ll examine Cargo in more depth and talk about best practices for sharing your libraries with others. 
Chapter 15 discusses smart pointers that the standard library provides and the traits that enable their functionality. In Chapter 16, we’ll walk through different models of concurrent programming and talk about how Rust helps you to program in multiple threads fearlessly. Chapter 17 looks at how Rust idioms compare to object-oriented programming principles you might be familiar with. Chapter 18 is a reference on patterns and pattern matching, which are powerful ways of expressing ideas throughout Rust programs. Chapter 19 contains a smorgasbord of advanced topics of interest, including unsafe Rust, macros, and more about lifetimes, traits, types, functions, and closures. In Chapter 20, we’ll complete a project in which we’ll implement a low-level multithreaded web server! Finally, some appendixes contain useful information about the language in a more reference-like format. Appendix A covers Rust’s keywords, Appendix B covers Rust’s operators and symbols, Appendix C covers derivable traits provided by the standard library, Appendix D covers some useful development tools, and Appendix E explains Rust editions. There is no wrong way to read this book: if you want to skip ahead, go for it! You might have to jump back to earlier chapters if you experience any confusion. But do whatever works for you. An important part of the process of learning Rust is learning how to read the error messages the compiler displays: these will guide you toward working code. As such, we’ll provide many examples that don’t compile along with the error message the compiler will show you in each situation. Know that if you enter and run a random example, it may not compile! Make sure you read the surrounding text to see whether the example you’re trying to run is meant to error. Ferris will also help you distinguish code that isn’t meant to work:
This repository contains my handwritten notes on Docker. These notes cover everything about Docker you need to learn: all the commands, important terminologies, etc., with examples. I am sharing my notes because I believe in learning in public, so I share whatever I am learning with the community. I hope my notes help you; please share them with your people.
Praneshchow
This repository contains materials from my B.Sc. in Computer Science & Engineering courses and topics: slides from some of my completed courses, assignments, handwritten notes, and books.
w32zhong
CSC 541 Assignment 4 B-Trees Introduction The goals of this assignment are two-fold: To introduce you to searching data on disk using B-trees. To investigate how changing the order of a B-tree affects its performance. Index File During this assignment you will create, search, and manage a binary index file of integer key values. The values stored in the file will be specified by the user. You will structure the file as a B-tree. Program Execution Your program will be named assn_4 and it will run from the command line. Two command line arguments will be specified: the name of the index file, and a B-tree order.

assn_4 index-file order

For example, executing your program as follows

assn_4 index.bin 4

would open an index file called index.bin that holds integer keys stored in an order-4 B-tree. You can assume order will always be ≥ 3. For convenience, we refer to the index file as index.bin throughout the remainder of the assignment. Note. If you are asked to open an existing index file, you can assume the B-tree order specified on the command line matches the order that was used when the index file was first created. B-Tree Nodes Your program is allowed to hold individual B-tree nodes in memory—but not the entire tree—at any given time. Your B-tree node should have a structure and usage similar to the following.

#include <stdlib.h>

int order = 4;            /* B-tree order */

typedef struct {          /* B-tree node */
  int   n;                /* Number of keys in node */
  int  *key;              /* Node's keys */
  long *child;            /* Node's child subtree offsets */
} btree_node;

btree_node node;          /* Single B-tree node */

node.n = 0;
node.key = (int *) calloc( order - 1, sizeof( int ) );
node.child = (long *) calloc( order, sizeof( long ) );

Note. Be careful when you're reading and writing data structures with dynamically allocated memory. For example, trying to write node like this

fwrite( &node, sizeof( btree_node ), 1, fp );

will write node's key count, the pointer value for its key array, and the pointer value for its child offset array, but it will not write the contents of the key and child offset arrays. The arrays' contents, not pointers to them, need to be written explicitly instead.

fwrite( &node.n, sizeof( int ), 1, fp );
fwrite( node.key, sizeof( int ), order - 1, fp );
fwrite( node.child, sizeof( long ), order, fp );

Reading node structures from disk would use a similar strategy. Root Node Offset In order to manage any tree, you need to locate its root node. Initially the root node will be stored near the front of index.bin. If the root node splits, however, a new root will be appended to the end of index.bin. The root node's offset will be maintained persistently by storing it at the front of index.bin when the file is closed, and reading it when the file is opened.

#include <stdio.h>

FILE *fp;    /* Input file stream */
long  root;  /* Offset of B-tree root node */

fp = fopen( "index.bin", "r+b" );

/* If file doesn't exist, set root offset to unknown and create
 * file, otherwise read the root offset at the front of the file */

if ( fp == NULL ) {
  root = -1;
  fp = fopen( "index.bin", "w+b" );
  fwrite( &root, sizeof( long ), 1, fp );
} else {
  fread( &root, sizeof( long ), 1, fp );
}

User Interface The user will communicate with your program through a set of commands typed at the keyboard. Your program must support four simple commands: add k Add a new integer key with value k to index.bin. find k Find an entry with a key value of k in index.bin, if it exists. print Print the contents and the structure of the B-tree on-screen.
end Update the root node's offset at the front of index.bin, close the index file, and end the program. Add Use a standard B-tree algorithm to add a new key k to the index file. Search the B-tree for the leaf node L responsible for k. If k is stored in L's key list, print Entry with key=k already exists on-screen and stop, since duplicate keys are not allowed. Create a new key list K that contains L's keys, plus k, sorted in ascending order. If L's key list is not full, replace it with K, update L's child offsets, write L back to disk, and stop. Otherwise, split K about its median key value k_m into left and right key lists K_L = (k_0, ..., k_(m-1)) and K_R = (k_(m+1), ..., k_(n-1)). Use ceiling to calculate m = ⌈(n-1)/2⌉. For example, if n = 3, m = 1. If n = 4, m = 2. Save K_L and its associated child offsets in L, then write L back to disk. Save K_R and its associated child offsets in a new node R, then append R to the end of the index file. Promote k_m, L's offset, and R's offset, and insert them in L's parent node. If the parent's key list is full, recursively split its list and promote the median to its parent. If a promotion is made to a root node with a full key list, split and create a new root node holding k_m and offsets to L and R. Find To find key value k in the index file, search the root node for k. If k is found, the search succeeds. Otherwise, determine the child subtree S that is responsible for k, then recursively search S. If k is found during the recursive search, print Entry with key=k exists on-screen. If at any point in the recursion S does not exist, print Entry with key=k does not exist on-screen. Print This command prints the contents of the B-tree on-screen, level by level. Begin by considering a single B-tree node. To print the contents of the node on-screen, print its key values separated by commas.

int i;            /* Loop counter */
btree_node node;  /* Node to print */
long off;         /* Node's offset */

for( i = 0; i < node.n - 1; i++ ) {
  printf( "%d,", node.key[ i ] );
}
printf( "%d", node.key[ node.n - 1 ] );

To print the entire tree, start by printing the root node. Next, print the root node's children on a new line, separating each child node's output by a space character. Then, print their children on a new line, and so on until all the nodes in the tree are printed. This approach prints the nodes on each level of the B-tree left-to-right on a common line. For example, inserting the integers 1 through 13 inclusive into an order-4 B-tree would produce the following output.

 1: 9
 2: 3,6 12
 3: 1,2 4,5 7,8 10,11 13

To support trees with more than 9 levels, we leave space for two characters to print the level at the beginning of each line, that is, using printf( "%2d: ", lvl ) or something similar. Hint. To process nodes left-to-right level-by-level, do not use recursion. Instead, create a queue containing the root node's offset. Remove the offset at the front of the queue (initially the root's offset) and read the corresponding node from disk. Append the node's non-empty subtree offsets to the end of the queue, then print the node's key values. Continue until the queue is empty. End This command ends the program by writing the root node's offset to the front of index.bin, then closing the index file. Programming Environment All programs must be written in C, and compiled to run on the remote.eos.ncsu.edu Linux server. Any ssh client can be used to access your Unity account and AFS storage space on this machine.
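Following the hint above, here is one way the queue-based, level-by-level print might look. This is only a sketch: it assumes the btree_node declaration and <stdio.h> from earlier, read_node() is a hypothetical helper that fread()s a node's n, key[], and child[] fields from a given offset using the strategy shown above, and the fixed-size queue is a simplification (a real implementation should grow the queue or bound it by the maximum node count).

void print_tree( FILE *fp, long root )
{
  long       queue[ 4096 ];   /* FIFO of node offsets; fixed size for brevity */
  int        head = 0, tail = 0;
  int        this_level = 1;  /* nodes remaining on the current level */
  int        next_level = 0;  /* nodes discovered for the next level */
  int        lvl = 1;
  int        i;
  btree_node node;

  if ( root == -1 ) {         /* empty tree, nothing to print */
    return;
  }

  queue[ tail++ ] = root;
  printf( "%2d: ", lvl );

  while ( head < tail ) {
    read_node( fp, queue[ head++ ], &node );  /* hypothetical fread helper */

    for ( i = 0; i <= node.n; i++ ) {         /* enqueue non-empty subtrees */
      if ( node.child[ i ] != 0 ) {
        queue[ tail++ ] = node.child[ i ];
        next_level++;
      }
    }

    for ( i = 0; i < node.n - 1; i++ ) {      /* keys separated by commas */
      printf( "%d,", node.key[ i ] );
    }
    printf( "%d", node.key[ node.n - 1 ] );

    if ( --this_level == 0 ) {                /* finished a level */
      printf( "\n" );
      this_level = next_level;
      next_level = 0;
      if ( this_level > 0 ) {
        printf( "%2d: ", ++lvl );
      }
    } else {
      printf( " " );                          /* space between nodes on a line */
    }
  }
}

A child offset of 0 can safely mean "no subtree" here because the root offset stored at the front of index.bin occupies the first sizeof( long ) bytes of the file, so no real node ever lives at offset 0.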
## Find

To find key value k in the index file, search the root node for k. If k is found, the search succeeds. Otherwise, determine the child subtree S that is responsible for k, then recursively search S. If k is found during the recursive search, print `Entry with key=k exists` on-screen. If at any point in the recursion S does not exist, print `Entry with key=k does not exist` on-screen.

## Print

This command prints the contents of the B-tree on-screen, level by level. Begin by considering a single B-tree node. To print the contents of the node on-screen, print its key values separated by commas.

```
int        i;      /* Loop counter */
btree_node node;   /* Node to print */
long       off;    /* Node's offset */

for( i = 0; i < node.n - 1; i++ ) {
  printf( "%d,", node.key[ i ] );
}
printf( "%d", node.key[ node.n - 1 ] );
```

To print the entire tree, start by printing the root node. Next, print the root node's children on a new line, separating each child node's output by a space character. Then print their children on a new line, and so on, until all the nodes in the tree are printed. This approach prints the nodes on each level of the B-tree left-to-right on a common line. For example, inserting the integers 1 through 13 inclusive into an order-4 B-tree would produce the following output.

```
 1: 9
 2: 3,6 12
 3: 1,2 4,5 7,8 10,11 13
```

To support trees with more than 9 levels, leave space for two characters to print the level at the beginning of each line, that is, use `printf( "%2d: ", lvl )` or something similar.

Hint. To process nodes left-to-right, level by level, do not use recursion. Instead, create a queue containing the root node's offset. Remove the offset at the front of the queue (initially the root's offset) and read the corresponding node from disk. Append the node's non-empty subtree offsets to the end of the queue, then print the node's key values. Continue until the queue is empty.
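One possible shape for that queue-based traversal is sketched below, building on the includes, the `order` global, and the `btree_read` helper sketched earlier. The fixed queue capacity, the empty-tree check against a root offset of -1, and the use of a child offset of 0 to mean "no subtree" (offset 0 holds the root pointer, so no node can be stored there) are all assumptions made for illustration.

```
#define QUEUE_MAX 1024            /* Assumed queue capacity */

void print_tree( FILE *fp, long root )
{
  long queue[ QUEUE_MAX ];        /* Offsets of nodes awaiting printing */
  int  head = 0, tail = 0;        /* Queue front and back */
  int  lvl = 1;                   /* Level currently being printed */
  int  remain = 1;                /* Nodes left on the current level */
  int  next = 0;                  /* Nodes queued for the next level */
  int  i;
  btree_node node;

  if ( root == -1 ) {             /* Empty tree, nothing to print */
    return;
  }

  node.key = (int *) calloc( order - 1, sizeof( int ) );
  node.child = (long *) calloc( order, sizeof( long ) );

  queue[ tail++ ] = root;
  printf( "%2d: ", lvl );

  while ( head < tail ) {
    btree_read( fp, queue[ head++ ], &node );

    for ( i = 0; i < node.n; i++ ) {        /* Keys, comma-separated */
      printf( "%d", node.key[ i ] );
      if ( i < node.n - 1 ) {
        printf( "," );
      }
    }

    for ( i = 0; i <= node.n; i++ ) {       /* Enqueue non-empty subtrees */
      if ( node.child[ i ] != 0 ) {
        queue[ tail++ ] = node.child[ i ];
        next++;
      }
    }

    if ( --remain > 0 ) {                   /* More nodes on this level */
      printf( " " );
    } else {                                /* Level is finished */
      printf( "\n" );
      if ( head < tail ) {
        printf( "%2d: ", ++lvl );
        remain = next;
        next = 0;
      }
    }
  }

  free( node.key );
  free( node.child );
}
```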
## End

This command ends the program by writing the root node's offset to the front of index.bin, then closing the index file.

## Programming Environment

All programs must be written in C and compiled to run on the remote.eos.ncsu.edu Linux server. Any ssh client can be used to access your Unity account and AFS storage space on this machine.

Your assignment will be run automatically, and the output it produces will be compared to known, correct output using diff. Because of this, your output must conform to the print command's description. If it doesn't, diff will report your output as incorrect, and it will be marked accordingly.

## Supplemental Material

In order to help you test your program, we provide example input and output files:

- input-01.txt, an input file of commands applied to an initially empty index file saved as an order-4 B-tree, and
- input-02.txt, an input file of commands applied to the index file produced by input-01.txt.

The output files show what your program should print after each input file is processed:

- output-01.txt, the output your program should produce after it processes input-01.txt, and
- output-02.txt, the output your program should produce after it processes input-02.txt.

To test your program, you would issue the following commands.

```
% rm index.bin
% assn_4 index.bin 4 < input-01.txt > my-output-01.txt
% assn_4 index.bin 4 < input-02.txt > my-output-02.txt
```

You can use diff to compare output from your program to our output files. If your program is running properly and your output is formatted correctly, your program should produce output identical to what is in these files.

Please remember, the files we're providing here are meant to serve as examples only. Apart from holding valid commands, you cannot make any assumptions about the size or the content of the input files we will use to test your program.

## Test Files

The following files were used to test your program.

- Order 3 test case: input-03.txt, output-03-first.txt
- Order 4 test case: input-04.txt, output-04-first.txt
- Order 10 test case: input-10-01.txt and input-10-02.txt, output-10-01.txt and output-10-02.txt
- Order 20 test case: input-20.txt, output-20-first.txt

Your program was run on all test cases using order-3, order-4, and order-20 B-trees.

```
% rm index.bin
% assn_4 index.bin 3 < input-03.txt > my-output-03.txt
% rm index.bin
% assn_4 index.bin 4 < input-04.txt > my-output-04.txt
% rm index.bin
% assn_4 index.bin 20 < input-20.txt > my-output-20.txt
```

Your program was also run twice using an order-10 B-tree, to test its ability to re-use an existing index file.

```
% rm index.bin
% assn_4 index.bin 10 < input-10-01.txt > my-output-10-01.txt
% assn_4 index.bin 10 < input-10-02.txt > my-output-10-02.txt
```

## Hand-In Requirements

Use Moodle (the online assignment submission software) to submit the following files:

- assn_4, a Linux executable of your finished assignment, and
- all associated source code files (these can be called anything you want).

There are four important requirements that your assignment must satisfy.

1. Your executable file must be named exactly as shown above. The program will be run and marked electronically using a script file, so using a different name means the executable will not be found, and subsequently will not be marked.
2. Your program must be compiled to run on remote.eos.ncsu.edu. If we cannot run your program, we will not be able to mark it, and we will be forced to assign you a grade of 0.
3. Your program must produce output that exactly matches the format described in the print command section of this assignment. If it doesn't, it will not pass our automatic comparison to known, correct output.
4. You must submit your source code with your executable prior to the submission deadline. If you do not submit your source code, we cannot run it through MOSS to check for code similarity. Because of this, any assignment that does not include source code will be assigned a grade of 0.

Updated 20-Dec-14
imran1509
This repository contains my hand-written notes on computer networking. These notes cover everything you need to learn about computer networking. I am sharing my notes because I believe in learning in public, so I share whatever I am learning with the community. I hope my notes help you; please share them with your people.
This repository contains my hand-written notes on Git and GitHub. These notes cover everything you need to learn about Git and GitHub, including all the commands, terms, etc., with examples. I am sharing my notes because I believe in learning in public, so I share whatever I am learning with the community. I hope my notes help you; please share them with your people.
Kelexer1
A collection of all my hand-written and typed university course notes. Feel free to use them for whatever you want!
imran1509
This repository contains my hand-written notes on YAML. These notes cover everything you need to learn about YAML, including all the commands, important terminology, etc., with examples.
reactVishnu
Complete Django hand-written notes by Vishnu Tiwari, based on the Jose Portila Udemy course
zphixon
A hand-written note-taking program
alan-chengzhang
It includes my hand-written notes taken while following the Udacity A/B Testing course, along with the final project report and the Excel file that generated those numeric results.
This repository contains my hand-written notes on Ansible. These notes cover everything you need to learn about Ansible, including all the commands, important terminology, etc., with examples.
ARK-Builders
ARK Memo is one app for all of your notes: it aims to combine plain-text, voice, and hand-written notes.
Chatzichristidis
Hand-written Architecture Notes from the 2022 Class
Pprudhvivardhan
DevOps end-to-end hand-written notes
subbusir
Design and Analysis of Algorithms (hand-written notes)
PALASH-BAJPAI
A collection of hand-written notes and code to learn Java effectively