Found 1,685 repositories (showing 30)
lucidrains
A concise but complete full-attention transformer with a set of promising experimental features from various papers
kimiyoung
No description available
yangyanli
PointCNN: Convolution On X-Transformed Points (NeurIPS 2018)
GaoPeng97
An experiment with Transformer-XL for Chinese text generation (can write novels and classical poetry)
intel
No description available
molyswu
Training a hand detector using Neural Networks (SSD) on Tensorflow.

This repo documents steps and scripts used to train a hand detector using Tensorflow (Object Detection API). As with any DNN based task, the most expensive (and riskiest) part of the process has to do with finding or creating the right (annotated) dataset. I was interested mainly in detecting hands on a table (egocentric view point). I experimented first with the [Oxford Hands Dataset](http://www.robots.ox.ac.uk/~vgg/data/hands/) (the results were not good). I then tried the [Egohands Dataset](http://vision.soic.indiana.edu/projects/egohands/), which was a much better fit for my requirements.

The goal of this repo/post is to demonstrate how neural networks can be applied to the (hard) problem of tracking hands (egocentric and other views), and, better still, to provide code that can be adapted to other use cases. If you use this tutorial or models in your research or project, please cite [this](#citing-this-tutorial).

Here is the detector in action.

<img src="images/hand1.gif" width="33.3%"><img src="images/hand2.gif" width="33.3%"><img src="images/hand3.gif" width="33.3%">

Realtime detection on a video stream from a webcam.

<img src="images/chess1.gif" width="33.3%"><img src="images/chess2.gif" width="33.3%"><img src="images/chess3.gif" width="33.3%">

Detection on a YouTube video.

Both examples above were run on a Macbook Pro **CPU** (i7, 2.5GHz, 16GB). Some fps numbers are:

| FPS | Image Size | Device | Comments |
| ------------- | ------------- | ------------- | ------------- |
| 21 | 320 * 240 | Macbook Pro (i7, 2.5GHz, 16GB) | Run without visualizing results |
| 16 | 320 * 240 | Macbook Pro (i7, 2.5GHz, 16GB) | Run while visualizing results (image above) |
| 11 | 640 * 480 | Macbook Pro (i7, 2.5GHz, 16GB) | Run while visualizing results (image above) |

> Note: The code in this repo is written and tested with Tensorflow `1.4.0-rc0`. Using a different version may result in [some errors](https://github.com/tensorflow/models/issues/1581). You may need to [generate your own frozen model](https://pythonprogramming.net/testing-custom-object-detector-tensorflow-object-detection-api-tutorial/?completed=/training-custom-objects-tensorflow-object-detection-api-tutorial/) graph using the [model checkpoints](model-checkpoint) in the repo to fit your TF version.

**Content of this document**

- Motivation - Why Track/Detect hands with Neural Networks
- Data preparation and network training in Tensorflow (Dataset, Import, Training)
- Training the hand detection Model
- Using the Detector to Detect/Track hands
- Thoughts on Optimizations

> P.S. If you are using or have used the models provided here, feel free to reach out on twitter ([@vykthur](https://twitter.com/vykthur)) and share your work!

## Motivation - Why Track/Detect hands with Neural Networks?

There are several existing approaches to tracking hands in the computer vision domain. Incidentally, many of these approaches are rule based (e.g. extracting background based on texture and boundary features, distinguishing between hands and background using color histograms and HOG classifiers), making them not very robust.
For example, these algorithms might get confused if the background is unusual, or in situations where sharp changes in lighting conditions cause sharp changes in skin color, or when the tracked object becomes occluded (see [this review](https://www.cse.unr.edu/~bebis/handposerev.pdf) of hand pose estimation from the HCI perspective). With sufficiently large datasets, neural networks provide an opportunity to train models that perform well and address the challenges of existing object tracking/detection algorithms - varied/poor lighting, noisy environments, diverse viewpoints and even occlusion. The main drawbacks to their use for real-time tracking/detection are that they can be complex, are relatively slow compared to tracking-only algorithms, and it can be quite expensive to assemble a good dataset. But things are changing with advances in fast neural networks.

Furthermore, this entire area of work has been made more approachable by deep learning frameworks (such as the tensorflow object detection api) that simplify the process of training a model for custom object detection. More importantly, the advent of fast neural network models like ssd, faster r-cnn, rfcn (see [here](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models-coco-models)) etc. makes neural networks an attractive candidate for real-time detection (and tracking) applications. Hopefully, this repo demonstrates this.

> If you are not interested in the process of training the detector, you can skip straight to applying the [pretrained model I provide in detecting hands](#detecting-hands). Training a model is a multi-stage process (assembling a dataset, cleaning, splitting into training/test partitions and generating an inference graph). While I lightly touch on the details of these parts, there are a few other tutorials that cover training a custom object detector using the tensorflow object detection api in more detail [see [here](https://pythonprogramming.net/training-custom-objects-tensorflow-object-detection-api-tutorial/) and [here](https://towardsdatascience.com/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9)]. I recommend you walk through those if you are interested in training a custom object detector from scratch.

## Data preparation and network training in Tensorflow (Dataset, Import, Training)

**The Egohands Dataset**

The hand detector model is built using data from the [Egohands Dataset](http://vision.soic.indiana.edu/projects/egohands/). This dataset works well for several reasons. It contains high quality, pixel level annotations (>15000 ground truth labels) of where hands are located across 4800 images. All images are captured from an egocentric view (Google Glass) across 48 different environments (indoor, outdoor) and activities (playing cards, chess, jenga, solving puzzles etc.).

<img src="images/egohandstrain.jpg" width="100%">

If you will be using the Egohands dataset, you can cite it as follows:

> Bambach, Sven, et al. "Lending a hand: Detecting hands and recognizing activities in complex egocentric interactions." Proceedings of the IEEE International Conference on Computer Vision. 2015.

The Egohands dataset (zip file with labelled data) contains 48 folders of locations where video data was collected (100 images per folder).

```
-- LOCATION_X
  -- frame_1.jpg
  -- frame_2.jpg
  ...
  -- frame_100.jpg
  -- polygons.mat  // contains annotations for all 100 images in current folder
-- LOCATION_Y
  -- frame_1.jpg
  -- frame_2.jpg
  ...
  -- frame_100.jpg
  -- polygons.mat  // contains annotations for all 100 images in current folder
```

**Converting data to Tensorflow Format**

Some initial work needs to be done to transform the Egohands dataset into the format (`tfrecord`) which Tensorflow needs to train a model. This repo contains `egohands_dataset_clean.py`, a script that:

- Downloads the egohands dataset
- Renames all files to include their directory names, to ensure each filename is unique
- Splits the dataset into train (80%), test (10%) and eval (10%) folders
- Reads in `polygons.mat` for each folder, generates bounding boxes and visualizes them to ensure correctness (see image above)

Once the script is done running, you should have an images folder containing three folders - train, test and eval. Each of these folders should also contain a csv label document - `train_labels.csv`, `test_labels.csv` - that can be used to generate `tfrecords`.

Note: While the egohands dataset provides four separate labels for hands (own left, own right, other left, and other right), for my purpose I am only interested in the general `hand` class, so I label all training data as `hand`. You can modify the data prep script to generate `tfrecords` that support 4 labels.

Next: convert your dataset + csv files to tfrecords. A helpful guide on this can be found [here](https://pythonprogramming.net/creating-tfrecord-files-tensorflow-object-detection-api-tutorial/). For each folder, you should be able to generate the `train.record` and `test.record` files required in the training process.
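By way of illustration, here is a minimal sketch of what the csv-to-`tfrecord` step can look like with the TF 1.x API. The column names and the `class_text_to_int` helper are hypothetical (not taken from this repo's scripts), and a real object detection record carries more fields than shown here:

```python
import pandas as pd
import tensorflow as tf  # TF 1.x

def class_text_to_int(label):
    # Hypothetical helper: the single 'hand' class maps to id 1.
    return 1

def create_tf_example(row, encoded_jpg):
    # Build one tf.train.Example from a single csv row
    # (assumed columns: filename, width, height, class, xmin, ymin, xmax, ymax).
    return tf.train.Example(features=tf.train.Features(feature={
        'image/encoded': tf.train.Feature(bytes_list=tf.train.BytesList(value=[encoded_jpg])),
        'image/object/bbox/xmin': tf.train.Feature(float_list=tf.train.FloatList(value=[float(row['xmin']) / float(row['width'])])),
        'image/object/bbox/xmax': tf.train.Feature(float_list=tf.train.FloatList(value=[float(row['xmax']) / float(row['width'])])),
        'image/object/bbox/ymin': tf.train.Feature(float_list=tf.train.FloatList(value=[float(row['ymin']) / float(row['height'])])),
        'image/object/bbox/ymax': tf.train.Feature(float_list=tf.train.FloatList(value=[float(row['ymax']) / float(row['height'])])),
        'image/object/class/label': tf.train.Feature(int64_list=tf.train.Int64List(value=[class_text_to_int(row['class'])])),
    }))

writer = tf.python_io.TFRecordWriter('train.record')
for _, row in pd.read_csv('images/train/train_labels.csv').iterrows():
    with open('images/train/' + row['filename'], 'rb') as f:
        writer.write(create_tf_example(row, f.read()).SerializeToString())
writer.close()
```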
## Training the hand detection Model

Now that the dataset has been assembled (and your tfrecords generated), the next task is to train a model based on it. With neural networks, it is possible to use a process called [transfer learning](https://www.tensorflow.org/tutorials/image_retraining) to shorten the amount of time needed to train the entire model. This means we can take an existing model (that has been trained well on a related domain, here image classification) and retrain its final layer(s) to detect hands for us. Sweet! Given that neural networks sometimes have thousands or millions of parameters that can take weeks or months to train, transfer learning helps shorten training time to possibly hours. Tensorflow does offer a few models (in the tensorflow [model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models-coco-models)) and I chose to use the `ssd_mobilenet_v1_coco` model as my starting point, given it is currently (one of) the fastest models (read the SSD research [paper here](https://arxiv.org/pdf/1512.02325.pdf)). The training process can be done locally on your CPU machine, which may take a while, or better on a (cloud) GPU machine (which is what I did). For reference, training on my macbook pro (tensorflow compiled from source to take advantage of the mac's cpu architecture), the maximum speed I got was 5 seconds per step, as opposed to the ~0.5 seconds per step I got with a GPU. It would take about 12 days to run 200k steps on my mac (i7, 2.5GHz, 16GB) compared to ~5hrs on a GPU.

> **Training on your own images**: Please use the [guide provided by Harrison from pythonprogramming](https://pythonprogramming.net/training-custom-objects-tensorflow-object-detection-api-tutorial/) on how to generate tfrecords given your label csv files and your images. The guide also covers how to start the training process if training locally. If training in the cloud using a service like GCP, see the [guide here](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_cloud.md).

As the training process progresses, the expectation is that total loss (error) gets reduced to its possible minimum (about a value of 1 or thereabouts). By observing the tensorboard graphs for total loss (see image below), it should be possible to get an idea of when the training process is complete (total loss does not decrease with further iterations/steps). I ran my training job for 200k steps (took about 5 hours) and stopped at a total loss value of 2.575. (In retrospect, I could have stopped the training at about 50k steps and gotten a similar total loss value.) With tensorflow, you can also run an evaluation concurrently that assesses your model to see how well it performs on the test data. A commonly used metric for performance is mean average precision (mAP), a single number used to summarize the area under the precision-recall curve. mAP is a measure of how well the model generates a bounding box that has at least a 50% overlap with the ground truth bounding box in our test dataset. For the hand detector trained here, the mAP value was **0.9686@0.5IOU**. mAP values range from 0 to 1; the higher the better.

<img src="images/accuracy.jpg" width="100%">

Once training is completed, the trained inference graph (`frozen_inference_graph.pb`) is exported (see the earlier referenced guides for how to do this) and saved in the `hand_inference_graph` folder. Now it's time to do some interesting detection.

## Using the Detector to Detect/Track hands

If you have not done this yet, please follow the guide on installing [Tensorflow and the Tensorflow object detection api](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md). It walks you through setting up the tensorflow framework and cloning the tensorflow github repo. The detection flow is then:

- Load the `frozen_inference_graph.pb` trained on the hands dataset as well as the corresponding label map. In this repo, this is done in the `utils/detector_utils.py` script by the `load_inference_graph` method.

```python
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')
    sess = tf.Session(graph=detection_graph)
print("> ====== Hand Inference graph loaded.")
```

- Detect hands. In this repo, this is done in the `utils/detector_utils.py` script by the `detect_objects` method.

```python
(boxes, scores, classes, num) = sess.run(
    [detection_boxes, detection_scores, detection_classes, num_detections],
    feed_dict={image_tensor: image_np_expanded})
```

- Visualize the detected bounding boxes. In this repo, this is done in the `utils/detector_utils.py` script by the `draw_box_on_image` method.

This repo contains two scripts that tie all these steps together.
- detect_multi_threaded.py: A threaded implementation for reading camera video input and running detection. Takes a set of command line flags to set parameters such as `--display` (visualize detections), image parameters `--width` and `--height`, and video `--source` (0 for camera).
- detect_single_threaded.py: Same as above, but single threaded. This script also works for video files, by setting the video `--source` parameter to the path of a video file.

```cmd
# load and run detection on video at path "videos/chess.mov"
python detect_single_threaded.py --source videos/chess.mov
```

> Update: If you do have errors loading the frozen inference graph in this repo, feel free to generate a new graph that fits your TF version from the model-checkpoint in this repo. Use the [export_inference_graph.py](https://github.com/tensorflow/models/blob/master/research/object_detection/export_inference_graph.py) script provided in the tensorflow object detection api repo. More guidance on this [here](https://pythonprogramming.net/testing-custom-object-detector-tensorflow-object-detection-api-tutorial/?completed=/training-custom-objects-tensorflow-object-detection-api-tutorial/).

## Thoughts on Optimization

A few things that led to noticeable performance increases:

- Threading: It turns out that reading images from a webcam is a heavy I/O operation which, if run on the main application thread, can slow down the program. I implemented some good ideas from [Adrian Rosebrock](https://www.pyimagesearch.com/2017/02/06/faster-video-file-fps-with-cv2-videocapture-and-opencv/) on parallelizing image capture across worker threads (a sketch follows below). This mostly led to an FPS increase of about 5 points.
- For those new to OpenCV: images from the `cv2.read()` method come in [BGR format](https://www.learnopencv.com/why-does-opencv-use-bgr-color-format/). Ensure you convert to RGB before detection (accuracy will be much reduced if you don't).

```python
cv2.cvtColor(image_np, cv2.COLOR_BGR2RGB)
```

- Keeping your input image small will increase fps without any significant accuracy drop (I used about 320 x 240 compared to the 1280 x 720 which my webcam provides).
- Model quantization: moving from the current 32 bit to 8 bit can achieve up to a 4x reduction in the memory required to load and store models. One way to further speed up this model is to explore [8-bit fixed point quantization](https://heartbeat.fritz.ai/8-bit-quantization-and-tensorflow-lite-speeding-up-mobile-inference-with-low-precision-a882dfcafbbd).

Performance can also be increased by a clever combination of tracking algorithms with the already decent detection, and this is something I am still experimenting with. If you have ideas for further optimization, please share!
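Returning to the threading point above, here is a minimal sketch of moving capture off the main thread. This is not the repo's `detect_multi_threaded.py`; the `WebcamStream` class and its single worker thread are an illustrative simplification:

```python
import threading
import cv2

class WebcamStream:
    """Reads frames from a webcam on a worker thread so the main
    thread can spend its time on detection instead of I/O."""

    def __init__(self, src=0):
        self.cap = cv2.VideoCapture(src)
        self.grabbed, self.frame = self.cap.read()
        self.stopped = False
        self.lock = threading.Lock()

    def start(self):
        threading.Thread(target=self._update, daemon=True).start()
        return self

    def _update(self):
        while not self.stopped:
            grabbed, frame = self.cap.read()
            with self.lock:
                self.grabbed, self.frame = grabbed, frame

    def read(self):
        # Always returns the most recent frame without blocking on I/O.
        with self.lock:
            return self.frame

    def stop(self):
        self.stopped = True
        self.cap.release()

# Usage: grab the latest frame and convert BGR -> RGB before detection.
stream = WebcamStream(0).start()
frame_rgb = cv2.cvtColor(stream.read(), cv2.COLOR_BGR2RGB)
stream.stop()
```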
<img src="images/general.jpg" width="100%">

Note: The detector does reflect some limitations associated with the training set. These include non-egocentric viewpoints, very noisy backgrounds (e.g. in a sea of hands) and sometimes skin tone. There is opportunity to improve these with additional data.

## Integrating Multiple DNNs

One way to make things more interesting is to integrate our new knowledge of where "hands" are with other detectors trained to recognize other objects. Unfortunately, while our hand detector can in fact detect hands, it cannot detect other objects (a factor of how it was trained). Creating a detector that classifies multiple different objects would mean a long, involved process of assembling datasets for each class and a lengthy training process.

> Given the above, a potential strategy is to explore structures that allow us to **efficiently** interleave output from multiple pretrained models for various object classes, having them detect multiple objects in a single image. An example of this is my primary use case, where I am interested in understanding the position of objects on a table with respect to hands on the same table. I am currently doing some work on a threaded application that loads multiple detectors and outputs bounding boxes on a single image. More on this soon.
fjp
Transform Frenet (s,d) to local Cartesian (x,y) coordinates.
rachtibat
Layer-wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024]
No description available
himanshub1007
# AD-Prediction

Convolutional Neural Networks for Alzheimer's Disease Prediction Using Brain MRI Images

## Abstract

Alzheimer's disease (AD) is characterized by severe memory loss and cognitive impairment. It is associated with significant brain structure changes, which can be measured by magnetic resonance imaging (MRI) scans. These observable preclinical structure changes provide an opportunity for early AD detection using image classification tools, like convolutional neural networks (CNN). However, most AD-related studies to date have been limited by sample size, so finding an efficient way to train an image classifier on limited data is critical. In our project, we explored different CNN-based transfer-learning methods for AD prediction from brain structure MRI images. We found that both a pretrained 2D AlexNet with a 2D-representation method and a simple neural network with a pretrained 3D autoencoder improved prediction performance compared to a deep CNN trained from scratch. The pretrained 2D AlexNet performed even better (**86%**) than the 3D CNN with autoencoder (**77%**).

## Method

#### 1. Data

In this project, we used public brain MRI data from the **Alzheimer's Disease Neuroimaging Initiative (ADNI)** study. ADNI is an ongoing, multicenter cohort study, started in 2004, that focuses on understanding the diagnostic and predictive value of Alzheimer's disease specific biomarkers. The ADNI study has three phases: ADNI1, ADNI-GO, and ADNI2. Both ADNI1 and ADNI2 recruited new AD patients and normal controls as research participants. Our data included a total of 686 structure MRI scans from both the ADNI1 and ADNI2 phases, with 310 AD cases and 376 normal controls. We randomly divided the total sample into a training dataset (n = 519), a validation dataset (n = 100), and a testing dataset (n = 67).

#### 2. Image preprocessing

Image preprocessing was conducted using Statistical Parametric Mapping (SPM) software, version 12. The original MRI scans were first skull-stripped and segmented using a segmentation algorithm based on 6-tissue probability mapping, and then normalized to the International Consortium for Brain Mapping template of European brains using affine registration. Other configuration includes bias, noise, and global intensity normalization. The standard preprocessing process outputs 3D image files with a uniform size of 121x145x121. Skull-stripping and normalization ensured comparability between images by transforming the original brain image into a standard image space, so that the same brain substructures can be aligned at the same image coordinates for different participants. Diluted or enhanced intensity was used to compensate for the structure changes. In our project, we used both whole brain (including both grey matter and white matter) and grey matter only.

#### 3. AlexNet and Transfer Learning

Convolutional Neural Networks (CNN) are very similar to ordinary Neural Networks. A CNN consists of an input and an output layer, as well as multiple hidden layers. The hidden layers are either convolutional, pooling or fully connected. ConvNet architectures make the explicit assumption that the inputs are images, which allows us to encode certain properties into the architecture. These then make the forward function more efficient to implement and vastly reduce the number of parameters in the network.

#### 3.1. AlexNet

The net contains eight layers with weights; the first five are convolutional and the remaining three are fully connected. The overall architecture is shown in Figure 1.
The output of the last fully-connected layer is fed to a 1000-way softmax which produces a distribution over the 1000 class labels. AlexNet maximizes the multinomial logistic regression objective, which is equivalent to maximizing the average across training cases of the log-probability of the correct label under the prediction distribution. The kernels of the second, fourth, and fifth convolutional layers are connected only to those kernel maps in the previous layer which reside on the same GPU (as shown in Figure 1). The kernels of the third convolutional layer are connected to all kernel maps in the second layer. The neurons in the fully connected layers are connected to all neurons in the previous layer. Response-normalization layers follow the first and second convolutional layers. Max-pooling layers follow both response-normalization layers as well as the fifth convolutional layer. The ReLU non-linearity is applied to the output of every convolutional and fully-connected layer.

The first convolutional layer filters the 224x224x3 input image with 96 kernels of size 11x11x3 with a stride of 4 pixels (this is the distance between the receptive field centers of neighboring neurons in a kernel map). The second convolutional layer takes as input the (response-normalized and pooled) output of the first convolutional layer and filters it with 256 kernels of size 5x5x48. The third, fourth, and fifth convolutional layers are connected to one another without any intervening pooling or normalization layers. The third convolutional layer has 384 kernels of size 3x3x256 connected to the (normalized, pooled) outputs of the second convolutional layer. The fourth convolutional layer has 384 kernels of size 3x3x192, and the fifth convolutional layer has 256 kernels of size 3x3x192. The fully-connected layers have 4096 neurons each.

#### 3.2. Transfer Learning

Training an entire Convolutional Network from scratch (with random initialization) is impractical [14] because it is relatively rare to have a dataset of sufficient size. An alternative is to pretrain a ConvNet on a very large dataset (e.g. ImageNet), and then use the ConvNet either as an initialization or as a fixed feature extractor for the task of interest. Typically, there are three major transfer learning scenarios:

**ConvNet as fixed feature extractor:** We can take a ConvNet pretrained on ImageNet, remove the last fully-connected layer, and treat the remaining structure as a fixed feature extractor for the target dataset. In AlexNet, this would yield a 4096-D vector. These features are usually called CNN codes. Once we have these features, we can train a linear classifier (e.g. a linear SVM or Softmax classifier) for our target dataset.

**Fine-tuning the ConvNet:** Another idea is to not only replace the last fully-connected layer in the classifier, but to also fine-tune the parameters of the pretrained network. Due to overfitting concerns, we may fine-tune only some higher-level part of the network. This suggestion is motivated by the observation that earlier features in a ConvNet contain more generic features (e.g. edge detectors or color blob detectors) that are useful for many kinds of tasks, while later layers of the network become progressively more specific to the details of the classes contained in the original dataset.

**Pretrained models:** The released pretrained model is usually the final ConvNet checkpoint, so it is common to see people use such a network for fine-tuning.
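As a minimal PyTorch sketch of the fixed-feature-extractor and fine-tuning scenarios (the two-class output for AD vs. normal control follows the project; exactly which layers are unfrozen here is an assumption, not the project's actual configuration):

```python
import torch.nn as nn
from torchvision import models

# Load AlexNet pretrained on ImageNet.
model = models.alexnet(pretrained=True)

# Fixed feature extractor: freeze every pretrained parameter.
for param in model.parameters():
    param.requires_grad = False

# Replace the final 1000-way layer with a 2-way classifier (AD vs. normal).
# The new layer's parameters are trainable by default.
model.classifier[6] = nn.Linear(4096, 2)

# Fine-tuning variant (assumed): additionally unfreeze the higher-level
# fully-connected layers while keeping the convolutional features frozen.
for param in model.classifier.parameters():
    param.requires_grad = True
```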
#### 4. 3D Autoencoder and Convolutional Neural Network

We take a two-stage approach: we first train a 3D sparse autoencoder to learn filters for convolution operations, and then build a convolutional neural network whose first layer uses the filters learned with the autoencoder.

#### 4.1. Sparse Autoencoder

An autoencoder is a 3-layer neural network that is used to extract features from an input such as an image. Sparse representations can provide a simple interpretation of the input data in terms of a small number of "parts" by extracting the structure hidden in the data. The autoencoder has an input layer, a hidden layer and an output layer; the input and output layers have the same number of units, while the hidden layer contains more units, for a sparse and overcomplete representation. The encoder function maps input x to representation h, and the decoder function maps the representation h back to the output x. In our problem, we extract 3D patches from scans as the input to the network. The decoder function aims to reconstruct the input from the hidden representation h.

#### 4.2. 3D Convolutional Neural Network

Training the 3D convolutional neural network (CNN) is the second stage. The CNN we use in this project has one convolutional layer, one pooling layer, two linear layers, and finally a log softmax layer. After training the sparse autoencoder, we take the weights and biases of the encoder from the trained model and use them as the 3D filters of the 3D convolutional layer of this 1-convolutional-layer neural network. Figure 2 shows the architecture of the network.

#### 5. Tools

In this project, we used Nibabel for MRI image processing and PyTorch for the neural network implementation.
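A hedged PyTorch sketch of that second-stage network follows; the filter count, kernel size, pooling factor and hidden width are placeholders, not the project's actual values:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AutoencoderCNN3D(nn.Module):
    """One 3D convolutional layer (filters taken from a trained sparse
    autoencoder's encoder), one pooling layer, two linear layers, and a
    log softmax output, as described above."""

    def __init__(self, enc_weight, enc_bias,
                 input_shape=(1, 121, 145, 121), n_classes=2):
        super(AutoencoderCNN3D, self).__init__()
        n_filters, _, k, _, _ = enc_weight.shape
        self.conv = nn.Conv3d(1, n_filters, kernel_size=k)
        with torch.no_grad():
            self.conv.weight.copy_(enc_weight)  # encoder weights become filters
            self.conv.bias.copy_(enc_bias)
        self.pool = nn.MaxPool3d(4)             # pooling factor is a placeholder
        with torch.no_grad():                   # infer the flattened size once
            n_flat = self.pool(F.relu(self.conv(torch.zeros(1, *input_shape)))).numel()
        self.fc1 = nn.Linear(n_flat, 128)       # hidden width 128 is a placeholder
        self.fc2 = nn.Linear(128, n_classes)

    def forward(self, x):                       # x: (batch, 1, 121, 145, 121)
        x = self.pool(F.relu(self.conv(x)))
        x = x.view(x.size(0), -1)
        x = F.relu(self.fc1(x))
        return F.log_softmax(self.fc2(x), dim=1)

# Stand-ins for the weights/biases learned by the autoencoder's encoder:
enc_w, enc_b = torch.randn(8, 1, 5, 5, 5), torch.randn(8)
net = AutoencoderCNN3D(enc_w, enc_b)
log_probs = net(torch.randn(1, 1, 121, 145, 121))  # -> shape (1, 2)
```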
microsoft
XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale
OctoberChang
X-Transformer: Taming Pretrained Transformers for eXtreme Multi-label Text Classification
Modern .NET tools and library for XDT (Xml Document Transformation)
Reytuag
No description available
javismiles
Infographic about the inner computations of a transformer model, training and inference
zhengzihe
A YOLOX object detection project using Swin Transformer as the backbone network
walaura
A tiny transform library that lets you write CSS using XML
lucidrains
Implementation of a transformer for reinforcement learning using `x-transformers`
microsoft
No description available
The aim of this assignment is to have you do UDP socket client/server programming with a focus on two broad aspects:

1. Setting up the exchange between the client and server in a secure way despite the lack of a formal connection (as in TCP) between the two, so that 'outsider' UDP datagrams (broadcast, multicast, unicast - fortuitously or maliciously) cannot intrude on the communication.

2. Introducing application-layer protocol data-transmission reliability, flow control and congestion control in the client and server using TCP-like ARQ sliding-window mechanisms.

The second item above is much more of a challenge to implement than the first, though neither is particularly trivial. But they are not tightly interdependent; each can be worked on separately at first and then integrated at a later stage.

Apart from the material in Chapters 8, 14 & 22 (especially Sections 22.5 - 22.7), and the experience you gained from the preceding assignment, you will also need to refer to the following:

- The ioctl function (Chapter 17).
- The get_ifi_info function (Section 17.6, Chapter 17). This function will be used by the server code to discover its node's network interfaces so that it can bind all its interface IP addresses (see Section 22.6).
- 'Race' conditions (Section 20.5, Chapter 20).

You also need a thorough understanding of how the TCP protocol implements reliable data transfer, flow control and congestion control. Chapters 17-24 of TCP/IP Illustrated, Volume 1 by W. Richard Stevens give a good overview of TCP. Though somewhat dated in places (it was published in 1994), it remains, overall, a good basic reference.

Overview

This assignment asks you to implement a primitive file transfer protocol for Unix platforms, based on UDP, with TCP-like reliability added to the transfer operation using timeouts and sliding-window mechanisms, and implementing flow and congestion control. The server is a concurrent server which can handle multiple clients simultaneously. A client gives the server the name of a file. The server forks off a child which reads directly from the file and transfers the contents over to the client using UDP datagrams. The client prints out the file contents as they come in, in order, with nothing missing and with no duplication of content, directly onto stdout (via the receiver sliding window, of course, but with no other intermediate buffering). The file to be transferred can be of arbitrary length, but its contents are always straightforward ASCII text.

As an aside, let me mention that assuming the file contents are ASCII is not as restrictive as it sounds. We can always pretend, for example, that binary files are base64 encoded ("ASCII armor"). A real file transfer protocol would, of course, have to worry about transferring files between heterogeneous platforms with different file structure conventions and semantics. The sender would first have to transform the file into a platform-independent, protocol-defined format (using, say, ASN.1, or some such standard), and the receiver would have to transform the received file into its platform's native file format. This kind of thing can be fairly time-consuming, and certainly very tedious, to implement, with little educational value - it is not part of this assignment.

Arguments for the server

You should provide the server with an input file server.in from which it reads the following information, in the order shown, one item per line:

1. Well-known port number for the server.
2. Maximum sending sliding-window size (in datagram units).
You will not be handing in your server.in file. We shall create our own when we come to test your code, so it is important that you stick strictly to the file name and content conventions specified above. The same applies to the client.in input file below.

Arguments for the client

The client is to be provided with an input file client.in from which it reads the following information, in the order shown, one item per line:

1. IP address of the server (not the hostname).
2. Well-known port number of the server.
3. Filename to be transferred.
4. Receiving sliding-window size (in datagram units).
5. Random generator seed value.
6. Probability p of datagram loss. This should be a real number in the range [0.0, 1.0] (value 0.0 means no loss occurs; value 1.0 means all datagrams are lost).
7. The mean µ, in milliseconds, for an exponential distribution controlling the rate at which the client reads received datagram payloads from its receive buffer.

Operation

The server starts up and reads its arguments from file server.in. As we shall see, when a client communicates with the server, the server will want to know what IP address that client is using to identify the server (i.e., the destination IP address in the incoming datagram). Normally, this can be done relatively straightforwardly using the IP_RECVDSTADDR socket option, and picking up the information using the ancillary data ('control information') capability of the recvmsg function. Unfortunately, Solaris 2.10 does not support the IP_RECVDSTADDR option (nor, incidentally, does it support the msg_flags option in msghdr - see p.390). This considerably complicates things.

In the absence of IP_RECVDSTADDR, what the server has to do as part of its initialization phase is to bind each IP address it has (and, simultaneously, its well-known port number, which it has read in from server.in) to a separate UDP socket. The code in Section 22.6, which uses the get_ifi_info function, shows you how to do that. However, there are important differences between that code and the version you want to implement. The code of Section 22.6 binds the IP addresses and forks off a child for each address that is bound to. We do not want to do that. Instead, you should have an array of socket descriptors. For each IP address, create a new socket and bind the address (and well-known port number) to the socket, without forking off child processes. Creating child processes comes later, when clients arrive. The code of Section 22.6 also attempts to bind broadcast addresses. We do not want to do this. It binds a wildcard IP address, which we certainly do not want to do either. We should bind strictly only unicast addresses (including the loopback address). The get_ifi_info function (which the code in Section 22.6 uses) has to be modified so that it also gets the network masks for the IP addresses of the node, and adds these to the information stored in the linked list of ifi_info structures (see Figure 17.5, p.471) it produces.
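To make the one-socket-per-address pattern concrete, here is a small sketch in Python (purely for illustration; the assignment itself is in C using get_ifi_info, and the port value below is hypothetical):

```python
import select
import socket

WELL_KNOWN_PORT = 7777   # hypothetical value read from server.in
# In the real server: one entry per unicast interface address discovered.
unicast_addrs = ["127.0.0.1"]

# Bind one UDP socket per unicast interface address, all on the same port.
socks = []
for addr in unicast_addrs:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind((addr, WELL_KNOWN_PORT))
    socks.append(s)

# Monitor every bound socket; the socket a request arrives on tells the
# server which destination address (IPserver) the client used.
readable, _, _ = select.select(socks, [], [])
for s in readable:
    data, client_addr = s.recvfrom(512)  # client_addr = (IPclient, ephemeral port)
    ip_server = s.getsockname()[0]
```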
As you go binding each IP address to a distinct socket, it will be useful for later processing to build your own array of structures, where a structure element records the following information for each socket:

- sockfd
- IP address bound to the socket
- network mask for the IP address
- subnet address (obtained by doing a bit-wise AND between the IP address and its network mask)

Report, in a ReadMe file which you hand in with your code, on the modifications you had to introduce to ensure that only unicast addresses are bound, and on your implementation of the array of structures described above. You should print out on stdout, with an appropriate message and appropriately formatted in dotted decimal notation, the IP address, network mask, and subnet address for each socket in your array of structures (you do not need to print the sockfd).

The server now uses select to monitor the sockets it has created for incoming datagrams. When it returns from select, it must use recvfrom or recvmsg to read the incoming datagram (see 6. below).

When a client starts, it first reads its arguments from the file client.in. The client checks if the server host is 'local' to its (extended) Ethernet. If so, all its communication to the server is to occur as MSG_DONTROUTE (or with the SO_DONTROUTE socket option). It determines if the server host is 'local' as follows.

The first thing the client should do is to use the modified get_ifi_info function to obtain all of its IP addresses and associated network masks. Print out on stdout, in dotted decimal notation and with an appropriate message, the IP addresses and network masks obtained. In the following, IPserver designates the IP address the client will use to identify the server, and IPclient designates the IP address the client will choose to identify itself.

The client checks whether the server is on the same host. If so, it should use the loopback address 127.0.0.1 for the server (i.e., IPserver = 127.0.0.1). IPclient should also be set to the loopback address. Otherwise it proceeds as follows: IPserver is set to the IP address for the server in the client.in file. Given IPserver and the (unicast) IP addresses and network masks for the client returned by get_ifi_info in the linked list of ifi_info structures, you should be able to figure out whether the server node is 'local' or not. This will be discussed in class, but let me remind you here that you should use 'longest prefix matching' where applicable (see the illustrative sketch below). If there are multiple client addresses and the server host is 'local', the client chooses an IP address for itself, IPclient, which matches up as 'local' according to your examination above. If the server host is not 'local', then IPclient can be chosen arbitrarily. Print out on stdout the results of your examination as to whether the server host is 'local' or not, as well as the IPclient and IPserver addresses selected.

Note that this manner of determining whether the server is local or not is somewhat clumsy and 'over-engineered' and, as such, should be viewed more in the nature of a pedagogical exercise. Ideally, we would like to look up the server IP address(es) in the routing table (see Section 18.3). This requires that a routing socket be created, for which we need superuser privilege. Alternatively, we might want to dump out the routing table, using the sysctl function for example (see Section 18.4), and examine it directly. Unfortunately, Solaris 2.10 does not support sysctl.
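The locality test with longest prefix matching amounts to the following sketch (again Python for illustration only; the function name and example addresses are hypothetical):

```python
import ipaddress

def is_local(ip_server, interfaces):
    """Given (client_ip, network_mask) pairs for the client's interfaces,
    return (server is 'local'?, matching client IP or None), preferring
    the longest prefix (largest mask) among the matching interfaces."""
    best_prefix, best_client_ip = -1, None
    server = ipaddress.ip_address(ip_server)
    for client_ip, mask in interfaces:
        net = ipaddress.ip_network("%s/%s" % (client_ip, mask), strict=False)
        # Server is on this interface's subnet if (IPserver AND mask)
        # equals the interface's subnet address.
        if server in net and net.prefixlen > best_prefix:
            best_prefix, best_client_ip = net.prefixlen, client_ip
    return best_client_ip is not None, best_client_ip

# A /28 match wins over a /24 match (longest prefix matching):
print(is_local("130.245.1.20",
               [("130.245.1.123", "255.255.255.0"),
                ("130.245.1.18", "255.255.255.240")]))
```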
Furthermore, note that there is a slight problem with the address 130.245.1.123/24 assigned to compserv3 (see the rightmost column of the file hosts, and note that this particular compserv3 address "overlaps" with the 130.245.1.x/28 addresses in that same column assigned to compserv1, compserv2 & compserv4). In particular, if the client is running on compserv3 and the server on any of the other three compservs, and if that server node is also being identified to the client by its /28 (rather than its /24) address, then the client will get a "false positive" when it tests whether the server node is local or not. In other words, the client will deem the server node to be local, whereas in fact it should not be considered local. Because of this, it is perhaps best simply not to use compserv3 to run the client (but it is o.k. to use it to run the server).

Finally, using MSG_DONTROUTE where possible would seem to gain us efficiency, inasmuch as the kernel does not need to consult the routing table for every datagram sent. But, in fact, that is not so. Recall that one effect of connect with UDP sockets is that routing information is obtained by the kernel at the time the connect is issued. That information is cached and used for subsequent sends from the connected socket (see p.255).

The client now creates a UDP socket and calls bind on IPclient, with 0 as the port number. This will cause the kernel to bind an ephemeral port to the socket. After the bind, use the getsockname function (Section 4.10) to obtain IPclient and the ephemeral port number that has been assigned to the socket, and print that information out on stdout, with an appropriate message and appropriately formatted.

The client connects its socket to IPserver and the well-known port number of the server. After the connect, use the getpeername function (Section 4.10) to obtain IPserver and the well-known port number of the server, and print that information out on stdout, with an appropriate message and appropriately formatted.
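A sketch of that client-side socket setup (Python for illustration only; the real code is C, and the addresses and port value here are hypothetical):

```python
import socket

IP_CLIENT = "127.0.0.1"  # chosen as described above
IP_SERVER = "127.0.0.1"  # from client.in (or loopback, as described)
WELL_KNOWN_PORT = 7777   # hypothetical value from client.in

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((IP_CLIENT, 0))                   # port 0: kernel assigns an ephemeral port
print("bound to %s:%d" % sock.getsockname())

sock.connect((IP_SERVER, WELL_KNOWN_PORT))  # only this peer's datagrams get through
print("connected to %s:%d" % sock.getpeername())
```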
The client sends a datagram to the server giving the filename for the transfer. This send needs to be backed up by a timeout in case the datagram is lost. Note that the incoming datagram from the client will be delivered to the server at the socket to which the destination IP address that the datagram is carrying has been bound. Thus, the server can obtain that address (it is, of course, IPserver) and thereby achieve what IP_RECVDSTADDR would have given us had it been available. Furthermore, the server process can obtain the IP address (this will, of course, be IPclient) and ephemeral port number of the client through the recvfrom or recvmsg functions.

The server forks off a child process to handle the client. The server parent process goes back to the select to listen for new clients. Hereafter, and unless otherwise stated, whenever we refer to the 'server', we mean the server child process handling the client's file transfer, not the server parent process.

Typically, the first thing the server child would be expected to do is to close all sockets it 'inherits' from its parent. However, this is not the case with us. The server child does indeed close the sockets it inherited, but not the socket on which the client request arrived. It leaves that socket open for now. Call this socket the 'listening' socket.

The server (child) then checks if the client host is local to its (extended) Ethernet. If so, all its communication to the client is to occur as MSG_DONTROUTE (or with the SO_DONTROUTE socket option). If IPserver (obtained in 5. above) is the loopback address, then we are done. Otherwise, the server has to proceed with the following step: use the array of structures you built in 1. above, together with the addresses IPserver and IPclient, to determine if the client is 'local'. Print out on stdout the results of your examination as to whether the client host is 'local' or not.

The server (child) creates a UDP socket to handle the file transfer to the client. Call this socket the 'connection' socket. It binds the socket to IPserver, with port number 0, so that its kernel assigns an ephemeral port. After the bind, use the getsockname function (Section 4.10) to obtain IPserver and the ephemeral port number that has been assigned to the socket, and print that information out on stdout, with an appropriate message and appropriately formatted. The server then connects this 'connection' socket to the client's IPclient and ephemeral port number.

The server now sends the client a datagram in which it passes the ephemeral port number of its 'connection' socket as the data payload. This datagram is sent using the 'listening' socket inherited from its parent, otherwise the client (whose socket is connected to the server's 'listening' socket at the latter's well-known port number) would reject it. This datagram must be backed up by the ARQ mechanism, and retransmitted in the event of loss. Note that if this datagram is indeed lost, the client might well time out and retransmit its original request message (the one carrying the file name). In this event, you must somehow ensure that the parent server does not mistake this retransmitted request for a new client coming in, and spawn off yet another child to handle it. How do you do that? It is potentially more involved than it might seem. I will be discussing this in class, as well as 'race' conditions that could potentially arise, depending on how you code the mechanisms I present.

When the client receives the datagram carrying the ephemeral port number of the server's 'connection' socket, it reconnects its socket to the server's 'connection' socket, using IPserver and the ephemeral port number received in the datagram (see p.254). It now uses this reconnected socket to send the server an acknowledgment. Note that this implies that, in the event of the server timing out, it should retransmit two copies of its 'ephemeral port number' message, one on its 'listening' socket and the other on its 'connection' socket (why?). When the server receives the acknowledgment, it closes the 'listening' socket it inherited from its parent. The server can now commence the file transfer through its 'connection' socket.

The net effect of all these binds and connects at server and client is that no 'outsider' UDP datagram (broadcast, multicast, unicast - fortuitously or maliciously) can now intrude on the communication between server and client.

Starting with the first datagram sent out, the client behaves as follows. Whenever a datagram arrives, or an ACK is about to be sent out (or, indeed, the initial datagram to the server giving the filename for the transfer), the client uses some random number generator function random() (initialized by the client.in argument value seed) to decide, with probability p (another client.in argument value), whether the datagram or ACK should be discarded, by way of simulating transmission loss across the network. (I will briefly discuss in class how you do this.)
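The loss-simulation decision amounts to the following sketch (Python for illustration only; the seed value is hypothetical):

```python
import random

SEED = 7778          # hypothetical seed value read from client.in
random.seed(SEED)

def should_drop(p):
    """Discard an arriving datagram or outgoing ACK with probability p,
    simulating network loss (p = 0.0: no loss; p = 1.0: everything lost)."""
    return random.random() < p

# Sanity check: with p = 0.3, roughly 30% of 10,000 trials drop.
print(sum(should_drop(0.3) for _ in range(10000)))
```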
Adding reliability to UDP

The mechanisms you are to implement are based on TCP Reno. These include:

- Reliable data transmission using ARQ sliding windows, with Fast Retransmit.
- Flow control via receiver window advertisements.
- Congestion control that implements: Slow Start; Congestion Avoidance ('Additive-Increase/Multiplicative-Decrease' - AIMD); and Fast Recovery (but without the window-inflation aspect of Fast Recovery).

Only some, and by no means all, of the details for these are covered below. The rest will be presented in class, especially those concerning flow control and TCP Reno's congestion control mechanisms in general: Slow Start, Congestion Avoidance, Fast Retransmit and Fast Recovery.

1. Implement a timeout mechanism on the sender (server) side. This is available to you from Stevens, Section 22.5. Note, however, that you will need to modify the basic driving mechanism of Figure 22.7 appropriately, since the situation at the sender side is not a repetitive cycle of send-receive, but rather a straightforward progression of send-send-send-send-... Also, modify the RTT and RTO mechanisms of Section 22.5 as specified below. I will be discussing the details of these modifications and the reasons for them in class.

- Modify function rtt_stop (Fig. 22.13) so that it uses integer arithmetic rather than floating point. This will entail also modifying some of the variable and function parameter declarations throughout Section 22.5 from float to int, as appropriate.
- In the unprtt.h header file (Fig. 22.10) set: RTT_RXTMIN to 1000 msec (1 sec instead of the current value of 3 sec); RTT_RXTMAX to 3000 msec (3 sec instead of the current value of 60 sec); and RTT_MAXNREXMT to 12 (instead of the current value of 3).
- In function rtt_timeout (Fig. 22.14), after doubling the RTO in line 86, pass its value through the function rtt_minmax of Fig. 22.11 (somewhat along the lines of what is done in line 77 of rtt_stop, Fig. 22.13).

Finally, note that with the modification to integer calculation of the smoothed RTT and its variation, and given the small RTT values you will experience on the cs/sbpub network, these calculations should probably now be done on a millisecond or even microsecond scale (rather than in seconds, as is the case with Stevens' code). Otherwise, small measured RTTs could show up as 0 on a scale of seconds, yielding a negative result when we subtract the smoothed RTT from the measured RTT (line 72 of rtt_stop, Fig. 22.13). Report the details of your modifications to the code of Section 22.5 in the ReadMe file which you hand in with your code. (An illustrative sketch of the integer-arithmetic RTO calculation follows this item.)
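Purely as an illustration of the integer-arithmetic idea (in Python for brevity; your assignment code will be C, the variable names are not Stevens', and the shift-based smoothing here is a simplified version of the Section 22.5 code):

```python
RTT_RXTMIN = 1000   # floor for RTO, msec (modified value)
RTT_RXTMAX = 3000   # ceiling for RTO, msec (modified value)

srtt, rttvar = 0, 750   # smoothed RTT and mean deviation, in msec

def rtt_minmax(rto):
    # Clamp the RTO into [RTT_RXTMIN, RTT_RXTMAX].
    return max(RTT_RXTMIN, min(rto, RTT_RXTMAX))

def rtt_stop(measured_ms):
    """Update smoothed RTT and deviation with integer arithmetic,
    using shifts instead of the floating-point gains g = 1/8, h = 1/4."""
    global srtt, rttvar
    delta = measured_ms - srtt
    srtt += delta >> 3               # srtt = srtt + delta/8
    if delta < 0:
        delta = -delta
    rttvar += (delta - rttvar) >> 2  # rttvar = rttvar + (|delta| - rttvar)/4
    return rtt_minmax(srtt + 4 * rttvar)

def rtt_timeout(rto):
    # Exponential backoff on timeout, re-clamped through rtt_minmax.
    return rtt_minmax(rto * 2)
```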
2. We need a sender sliding-window mechanism for the retransmission of lost datagrams, and a receiver sliding window in order to ensure correct sequencing of received file contents and some measure of flow control. You should implement something based on TCP Reno's mechanisms, with cumulative acknowledgments, receiver window advertisements, and a congestion control mechanism I will explain in detail in class. For a reference on TCP's mechanisms generally, see W. Richard Stevens, TCP/IP Illustrated, Volume 1, especially Sections 20.2 - 20.4 of Chapter 20 and Sections 21.1 - 21.8 of Chapter 21. Bear in mind that our sequence numbers should count datagrams, not bytes as in TCP. Remember that the sender and receiver window sizes have to be set according to the argument values in client.in and server.in, respectively.

Whenever the sender window becomes full and so 'locks', the server should print out a message to that effect on stdout. Similarly, whenever the receiver window 'locks', the client should print out a message on stdout. Be aware of the potential for deadlock when the receiver window 'locks'. This situation is handled by having the receiver process send a duplicate ACK, which acts as a window update when its window opens again (see Figure 20.3 and the discussion about it in TCP/IP Illustrated). However, this is not enough, because ACKs are not backed up by a timeout mechanism in the event they are lost. So we will also need to implement a persist timer driving window probes in the sender process (see Sections 22.1 & 22.2 in Chapter 22 of TCP/IP Illustrated). Note that you do not have to worry about the Silly Window Syndrome discussed in Section 22.3 of TCP/IP Illustrated, since the receiver process consumes 'full sized' 512-byte messages from the receiver buffer (see 3. below). Report on the details of the ARQ mechanism you implemented in the ReadMe file you hand in. Indeed, you should report on all the TCP mechanisms you implemented in the ReadMe file, both the ones discussed here and the ones I will be discussing in class.

3. Make your datagram payload a fixed 512 bytes, inclusive of the file transfer protocol header (which must, at the very least, carry the sequence number of the datagram, ACKs, and advertised-window notifications). The client reads the file contents in its receive buffer and prints them out on stdout using a separate thread. This thread sits in a repetitive loop till all the file contents have been printed out, doing the following: it samples from an exponential distribution with mean µ milliseconds (read from the client.in file); sleeps for that number of milliseconds; wakes up to read and print all in-order file contents available in the receive buffer at that point; samples again from the exponential distribution; sleeps; and so on. The formula -1 × µ × ln(random()), where ln is the natural logarithm, yields variates from an exponential distribution with mean µ, based on the uniformly-distributed variates over (0, 1) returned by random(). (A small sketch of this sampling loop follows below.) Note that you will need to implement some sort of mutual exclusion/semaphore mechanism on the client side, so that the thread that sleeps and wakes up to consume from the receive buffer is not updating the state variables of the buffer at the same time as the main thread that reads from the socket and deposits into the buffer. Furthermore, we need to ensure that the main thread does not effectively monopolize the semaphore (and thus lock out the sleeping thread for prolonged periods of time when the latter wakes up). See the textbook, Section 26.7, 'Mutexes: Mutual Exclusion', pp.697-701. You might also find Section 26.8, 'Condition Variables', pp.701-705, useful.
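A minimal sketch of that consumer loop's sampling (again Python, purely for illustration; the real client is C and must coordinate with the main thread via a mutex, and the mean value below is hypothetical):

```python
import math
import random
import time

MU_MS = 250  # hypothetical mean µ read from client.in, in milliseconds

def exp_delay_ms(mu_ms):
    # -1 * µ * ln(U), with U uniform over (0, 1), yields an exponentially
    # distributed delay with mean µ. Guard against U == 0.0 in real code.
    return -1.0 * mu_ms * math.log(random.random())

for _ in range(5):  # stand-in for "until all file contents are printed"
    time.sleep(exp_delay_ms(MU_MS) / 1000.0)
    # ... here: lock the buffer mutex, print all in-order contents, unlock ...
```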
4. You will need to devise some way by which the sender can notify the receiver that it has sent the last datagram of the file transfer, without the receiver mistaking that EOF marker for part of the file contents. (Also, note that the last data segment could be a "short" segment of less than 512 bytes - your client needs to be able to handle this correctly somehow.) When the sender receives an ACK for the last datagram of the transfer, the (child) server terminates. The parent server has to take care of cleaning up zombie children. Note that if we want a clean closing, the client process cannot simply terminate when the receiver ACKs the last datagram. This ACK could be lost, which would leave the (child) server process 'hanging', timing out, and retransmitting the last datagram. TCP attempts to deal with this problem by means of the TIME_WAIT state. You should have your receiver process behave similarly, sticking around in something akin to a TIME_WAIT state in case it needs to retransmit the ACK. In the ReadMe file you hand in, report on how you dealt with the issues raised here: the sender notifying the receiver of the last datagram, clean closing, and so on.

Output

Some of the output required from your program has been described in the section Operation above. I expect you to provide further output - clear, well-structured, well-laid-out, concise but sufficient and helpful - in the client and server windows, by means of which we can trace the correct evolution of your TCP's behaviour in all its intricacies: information (e.g., sequence numbers) on datagrams and ACKs sent and dropped, window advertisements, datagram retransmissions (and why: dup ACKs or RTO), entering/exiting Slow Start and Congestion Avoidance, ssthresh and cwnd values, sender and receiver windows locking/unlocking, etc.

The onus is on you to convince us that the TCP mechanisms you implemented are working correctly. Too many students do not put sufficient thought, creative imagination, time or effort into this. It is not the TA's nor my responsibility to sit staring at an essentially blank screen, trying to summon up our paranormal psychology skills to figure out whether your TCP implementation is really working correctly in all its very intricate aspects, simply because the transferred file seems to be printing o.k. in the client window. Nor is it our responsibility to strain our eyes and our patience wading through a mountain of obscure, ill-structured, hyper-messy, debugging-style output because, for example, your effort-conserving concept of what is 'suitable' is to dump your debugging output on us - relevant, irrelevant, and everything in between.
AmeenAli
Official code implementation of the paper: XAI for Transformers: Better Explanations through Conservative Propagation
CyberZHG
Transformer-XL with checkpoint loader
lgesuellip
An application built on the Model Context Protocol (MCP) that transforms any website into highly relevant content based on your queries. The app seamlessly integrates with platforms like X and Slack, among others.
moeru-ai
🤗💬 Transformers.js provider for xsAI. Running Embedding, Whisper, and LLMs right in your browser!
tensorops
Flexible Python library providing building blocks (layers) for reproducible Transformers research (Tensorflow ✅, Pytorch 🔜, and Jax 🔜)
lucidrains
A variant of Transformer-XL where the memory is updated not with a queue, but with attention
xuhaiming1996
A long-text classifier implemented in PyTorch
jonpeterson
Jackson 2.x module to support versioning and transforming of models.
joenali
No description available
augustwester
A lightweight PyTorch implementation of the Transformer-XL architecture proposed by Dai et al. (2019)
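For reference, the core of Transformer-XL's recurrence in Dai et al. (2019) is a segment-level memory updated as a queue: hidden states from the previous segment are cached with gradients stopped and prepended to the current segment's keys and values. A minimal sketch of that update (the function name is illustrative, not this repo's code):

```python
import torch

def update_mems(mems, hidden, mem_len):
    # Queue-style Transformer-XL memory: append the newest hidden states,
    # keep only the last `mem_len` positions, and detach so no gradients
    # flow across segment boundaries.
    cat = hidden if mems is None else torch.cat([mems, hidden], dim=0)
    return cat[-mem_len:].detach()
```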