Found 146 repositories (showing 30)
stomita
Fixes iOS 6 Safari's rendering issue for large images (over a megapixel), which causes unexpected subsampling when drawing them to a canvas.
abusufyanvu
MIT Introduction to Deep Learning (6.S191)
Instructors: Alexander Amini and Ava Soleimany

Contents: Course Information Summary, Prerequisites, Schedule, Lectures, Labs/Final Projects/Grading/Prizes, Software Labs, Gather.Town Lab + Office Hour Sessions, Final Project, Paper Review, Project Proposal Presentation, Project Proposal Grading Rubric, Past Project Proposal Ideas, Awards + Categories, Important Links and Emails.

Course Information Summary
MIT's introductory course on deep learning methods, with applications to computer vision, natural language processing, biology, and more! Students will gain foundational knowledge of deep learning algorithms and get practical experience building neural networks in TensorFlow. The course concludes with a project proposal competition with feedback from staff and a panel of industry sponsors.

Prerequisites
We expect basic knowledge of calculus (e.g., taking derivatives), linear algebra (e.g., matrix multiplication), and probability (e.g., Bayes' theorem) -- we'll try to explain everything else along the way! Experience in Python is helpful but not necessary. This class is taught during MIT's IAP term by current MIT PhD researchers. Listeners are welcome!

Schedule
- Monday Jan 18, 2021 -- Lecture: Introduction to Deep Learning and NNs. Lab: Lab 1A, TensorFlow and building NNs from scratch.
- Tuesday Jan 19, 2021 -- Lecture: Deep Sequence Modelling. Lab: Lab 1B, Music Generation using RNNs.
- Wednesday Jan 20, 2021 -- Lecture: Deep Computer Vision. Lab: Lab 2A, Image classification and detection.
- Thursday Jan 21, 2021 -- Lecture: Deep Generative Modelling. Lab: Lab 2B, Debiasing facial recognition systems.
- Friday Jan 22, 2021 -- Lecture: Deep Reinforcement Learning. Lab: Lab 3, pixel-to-control planning.
- Monday Jan 25, 2021 -- Lecture: Limitations and New Frontiers. Lab: Lab 3 continued.
- Tuesday Jan 26, 2021 -- Lecture (part 1): Evidential Deep Learning. Lecture (part 2): Bias and Fairness. Lab: Work on final assignments. Lab competition entries (Lab 1, Lab 2, and Lab 3) due at 11:59pm ET on Canvas!
- Wednesday Jan 27, 2021 -- Lecture (part 1): Nigel Duffy, Ernst & Young. Lecture (part 2): Kate Saenko, Boston University and MIT-IBM Watson AI Lab. Lab: Work on final assignments. Assignment due: sign up for the Final Project Competition.
- Thursday Jan 28, 2021 -- Lecture (part 1): Sanja Fidler, U. Toronto, Vector Institute, and NVIDIA. Lecture (part 2): Katherine Chou, Google. Lab: Work on final assignments. Assignment due: 1-page paper review (if applicable).
- Friday Jan 29, 2021 -- Lecture: Student project pitch competition. Lab: Awards ceremony and prize giveaway. Assignment due: project proposals (if applicable).

Lectures
Lectures will be held starting at 1:00pm ET from Jan 18 - Jan 29, 2021, Monday through Friday, virtually through Zoom. Current MIT students, faculty, postdocs, researchers, staff, etc. will be able to access the lectures during this two-week period, synchronously or asynchronously, via the MIT Canvas course webpage (MIT internal only). Lecture recordings will be uploaded to Canvas as soon as possible; students are not required to attend any lectures synchronously. Please see Canvas for details on Zoom links. The public edition of the course will only be made available after completion of the MIT course.

Labs, Final Projects, Grading, and Prizes
The course will be graded during MIT IAP for 6 units under P/D/F grading. Receiving a passing grade requires completion of each software lab project (through honor code, with submission required to enter lab competitions), a final project proposal/presentation or written review of a deep learning paper (submission required), and attendance/lecture viewing (through honor code). Submission of a written report or presentation of a project proposal will ensure a passing grade. MIT students will be eligible for prizes and awards as part of the class competitions. There are two parts to the competitions: (1) software labs and (2) final projects. More information is provided below. Winners will be announced on the last day of class, with thousands of dollars of prizes being given away!

Software Labs
There are three TensorFlow software lab exercises for the course, designed as iPython notebooks hosted in Google Colab. The software labs can be found on GitHub: https://github.com/aamini/introtodeeplearning. These are self-paced exercises designed to help you gain practical experience implementing neural networks in TensorFlow. For registered MIT students, submission of lab materials is not necessary to get credit for the course or to pass it. At the end of each software lab there are task-associated materials to submit (along with instructions) for entry into the competitions, open to MIT students and affiliates during the IAP offering. This includes MIT students/affiliates who are taking the class as listeners -- you are eligible! Completing these tasks and submitting your materials to Canvas will enter you into a per-lab competition; prize winners will be awarded their prizes at the end of the course. All competition submissions are due on January 26 at 11:59pm ET to Canvas. Software lab submissions will be judged on the following criteria:
- Strength and quality of final results (lab dependent)
- Soundness of implementation and approach
- Thoroughness and quality of provided descriptions and figures

Gather.Town Lab + Office Hour Sessions
After each day's lecture, there will be open office hours in the class Gather.Town until 3pm ET. An MIT email is required to log in and join. These sessions will not include a walkthrough or dictation of the labs; the labs are designed to be self-paced and worked on in your own time. The sessions are hosted by course staff and are held so you can:
- ask questions about course lectures, labs, logistics, the project, or anything else;
- work on the labs in the presence of classmates/TAs/instructors;
- meet classmates to find groups for the final project;
- have group work time for the final project;
- bring the class community together.

Final Project
To satisfy the final project requirement, students have two options: (1) write a 1-page, single-spaced review of a recent deep learning paper of your choice, or (2) participate and present in the project proposal pitch competition. The paper review option is straightforward -- we propose some papers within this document to help you get started -- and satisfies a passing grade, but it does not make you eligible for the grand prizes. Participation in the project proposal pitch competition equivalently satisfies the course requirements and additionally makes you eligible for the grand prizes. See the sections below for details and requirements for each option.

Paper Review
Students may satisfy the final project requirement by reading and reviewing a recent deep learning paper of their choosing. The written review should provide both: (1) a description of the problem, technical approach, and results of the paper; (2) a critical analysis of the limitations of the work and opportunities for future work. Reviews should be submitted on Canvas by Thursday Jan 28, 2021, 11:59:59pm Eastern Time (ET). A few paper options to consider:
- https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
- https://papers.nips.cc/paper/2018/file/69386f6bb1dfed68692a24c8686939b9-Paper.pdf
- https://papers.nips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf
- https://science.sciencemag.org/content/362/6419/1140
- https://papers.nips.cc/paper/2018/file/0e64a7b00c83e3d22ce6b3acf2c582b6-Paper.pdf
- https://arxiv.org/pdf/1906.11829.pdf
- https://www.nature.com/articles/s42256-020-00237-3
- https://pubmed.ncbi.nlm.nih.gov/32084340/

Project Proposal Presentation
Keyword: proposal. This is a 2-week course, so we do not require results or working implementations! However, to win the top prizes, clear results and implementations that demonstrate the feasibility of your proposal are something we look for.
Logistics -- please read!
- You must sign up to present before 11:59:59pm Eastern Time (ET) on Wednesday Jan 27, 2021.
- Slides must be in a Google Slide deck before 11:59:59pm Eastern Time (ET) on Thursday Jan 28, 2021.
- Project groups can be between 1 and 5 people. Listeners are welcome.
- To be eligible for a prize, you must have at least 1 registered MIT student in your group.
- Each participant may be in only one group and present only one project pitch.
- Synchronous attendance on 1/29/21 is required to make the project pitch!
- 3-minute presentation on your idea (we will be very strict with the time limits).
- Prizes! (see below)
Sign up to present by 11:59pm ET on Wednesday Jan 27. Once you sign up, make your slide in the shared Google Slides and submit it by midnight on Thursday Jan 28. Please specify the project group # on your slides!
Things to consider:
- This doesn't have to be a new deep learning method; it can be an interesting application of an existing deep learning method.
- What problem are you solving? Are there use cases/applications?
- Why do you think deep learning methods might be suited to this task?
- How have people done it before? Is it a new task? If so, what are similar tasks that people have worked on? In what aspects have they succeeded or failed?
- What is your method of solving this problem? What type of model + architecture would you use? Why?
- What is the data for this task? Do you need to make a dataset, or is there one publicly available? What are the characteristics of the data? Is it sparse, messy, imbalanced? How would you deal with that?

Project Proposal Grading Rubric
Project proposals will be evaluated by a panel of judges on the basis of three criteria: (1) novelty and impact; (2) technical soundness, feasibility, and organization, including the quality of any presented results; (3) clarity and presentation. Each judge awards a score from 1 (lowest) to 5 (highest) for each criterion; each judge's average score across the criteria is then averaged with that of the other judges to give the final score. The proposals with the highest final scores are selected for prizes. Guidelines for the criteria:
- Novelty and impact: the potential impact of the project idea and its novelty with respect to existing approaches. Why does the proposed work matter? What problem(s) does it solve? Why are these problems important?
- Technical soundness, feasibility, and organization: all technical aspects of the proposal. Do the proposed methodology and architecture make sense? Is the architecture best suited for the proposed problem? Is deep learning the best approach for the problem? How realistic is it to implement the idea? Was there any implementation of the method? If results and data are presented, we will evaluate their strength.
- Clarity and presentation: the delivery and quality of the presentation itself. Is the talk well organized? Are the slides aesthetically compelling? Is there a clear, well-delivered narrative? Are the problem and proposed method clearly presented?

Past Project Proposal Ideas
- Recipe generation with RNNs
- Can we compress videos with CNN + RNN?
- Music generation with RNNs
- Style transfer applied to X
- GANs on a new modality
- Summarizing text/news articles
- Combining news articles about similar events
- Code or spec generation
- Multimodal speech → handwriting
- Generate handwriting based on keywords (i.e. cursive, slanted, neat)
- Predicting stock market trends
- Show language learners articles or videos at their level
- Transfer of writing style
- Chemical synthesis with recurrent neural networks
- Transfer learning to learn something in a domain where it's hard or risky to gather data or do training
- RNNs to model some type of time series data
- Computer vision to coach sports players
- Computer vision system for safety brakes or warnings
- Use the IBM Watson API to get the sentiment of your Facebook newsfeed
- Deep learning webcam to give wifi-access to friends or improve video chat in some way
- Domain-specific chatbot to help you perform a specific task
- Detect whether a signature is fraudulent

Awards + Categories
Final project awards: 1x NVIDIA RTX 3080, 4x Google Home Max, 3x display monitors.
Software lab awards: Bose headphones (Lab 1), display monitor (Lab 2), Bebop drone (Lab 3).

Important Links and Emails
- Course website: http://introtodeeplearning.com
- Course staff: introtodeeplearning-staff@mit.edu
- Piazza forum (MIT only): https://piazza.com/mit/spring2021/6s191
- Canvas (MIT only): https://canvas.mit.edu/courses/8291
- Software lab repository: https://github.com/aamini/introtodeeplearning
- Lab/office hour sessions (MIT only): https://gather.town/app/56toTnlBrsKCyFgj/MITDeepLearning
vonKristoff
A jQuery plugin that uses canvas to create a 'screen print' offset effect on an image, by altering the pixel data.
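The effect described above amounts to shifting one color channel relative to the others. A minimal sketch of that core step, written against a plain RGBA array (the layout returned by `getImageData().data`) so it runs without a browser; the function name, parameters, and wrap-around behaviour are illustrative assumptions, not the plugin's actual API:

```javascript
// Hypothetical "screen print" offset: shift the red channel `dx` pixels
// to the right (wrapping within each row), leaving G, B, and A in place.
// `data` is a flat RGBA array, 4 bytes per pixel, row-major.
function offsetRedChannel(data, width, height, dx) {
  const out = new Uint8ClampedArray(data); // copies G, B, A unchanged
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const srcX = (x - dx + width) % width;             // wrap within the row
      out[(y * width + x) * 4] = data[(y * width + srcX) * 4]; // red byte only
    }
  }
  return out;
}
```

In a browser you would read the array with `ctx.getImageData`, pass it through, and write it back with `ctx.putImageData`.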
codepo8
A few examples of how to use canvas with pixel images, based on the work necessary for logo-o-matic
sherluok
A pure JavaScript creator of binary bitmap (.bmp) image files: you can input a pixel data array or a canvas-like object and get a Uint8Array of file data as output
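For context on what such an encoder has to produce, here is a hedged sketch of a minimal 24-bit BMP writer over a flat RGBA array. It follows the standard 14-byte file header plus 40-byte BITMAPINFOHEADER layout, but the function name and signature are assumptions and will differ from this library's actual API:

```javascript
// Illustrative minimal 24-bit BMP encoder (not this library's code).
// rgba: flat RGBA array, 4 bytes per pixel, top-to-bottom rows.
function encodeBMP(width, height, rgba) {
  const rowSize = Math.ceil((width * 3) / 4) * 4; // rows padded to 4 bytes
  const dataSize = rowSize * height;
  const fileSize = 14 + 40 + dataSize;
  const buf = new Uint8Array(fileSize);
  const view = new DataView(buf.buffer);
  // 14-byte file header
  buf[0] = 0x42; buf[1] = 0x4d;        // "BM" signature
  view.setUint32(2, fileSize, true);   // total file size (little-endian)
  view.setUint32(10, 54, true);        // offset to pixel data
  // 40-byte BITMAPINFOHEADER
  view.setUint32(14, 40, true);        // header size
  view.setInt32(18, width, true);
  view.setInt32(22, height, true);     // positive height = bottom-up rows
  view.setUint16(26, 1, true);         // color planes
  view.setUint16(28, 24, true);        // bits per pixel
  view.setUint32(34, dataSize, true);  // image data size
  // Pixel data: bottom-up rows, BGR byte order
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const src = (y * width + x) * 4;
      const dst = 54 + (height - 1 - y) * rowSize + x * 3;
      buf[dst] = rgba[src + 2];     // B
      buf[dst + 1] = rgba[src + 1]; // G
      buf[dst + 2] = rgba[src];     // R
    }
  }
  return buf;
}
```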
mattdesl
Gets the RGBA pixel array from an Image/Video/Canvas source
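Whatever the source element, the result of such a call is a flat Uint8ClampedArray with four bytes per pixel. A tiny helper (illustrative only, not part of this package) showing how to index into that layout:

```javascript
// Read the [r, g, b, a] values of pixel (x, y) from a flat RGBA array
// laid out 4 bytes per pixel, rows left-to-right, top-to-bottom --
// the same layout as ctx.getImageData(...).data.
function pixelAt(data, width, x, y) {
  const i = (y * width + x) * 4;
  return [data[i], data[i + 1], data[i + 2], data[i + 3]];
}
```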
ashiishme
Canvas Animation using React Hooks - Image Pixel Manipulation & Particles
axi1000
🖼️ Pixel image (or text) swiper using canvas
RichM1216
A collaborative pixel art platform where users can place pixels on a shared canvas in real time, building community-driven images together with live updates and interactive features.
witmin
Adds a pixel-size mark to an image and saves it as .png. Built with p5.js, electron and canvas.
CursedPrograms
Cursed Pixels contains the source code for a basic canvas drawing application built using HTML and JavaScript. The application allows users to draw on a canvas element, change brush colors and sizes, and load images onto the canvas.
Author: Xu Liu. Date: 02/18/2019. This is a semi-automatic labelling tool for those who work on image labelling.

Input: Click the "Open" button, open the "image" directory, and select the first image to start. After this step, the left canvas displays the original image with a 50%-transparency colored label on it, and the right canvas displays the labelled image with a totally black background and a colorful foreground (one color per instance). The labels for the images in the "image" directory are the outputs (i.e., predictions) of the segmentation deep neural network and are located in the "masks" directory. Most parts of the right image are labelled correctly by the neural network; all we need to do is revise it slightly.

Instructions:
1. Right-click on the right image to pick a color (the color of the pixel under your cursor) that you wish to use for revising the label. If there are instances that have not been labelled, you can right-click on the circular color palette to pick a different color.
2. Move your cursor to the right image and click where you wish to revise the label; holding the left mouse button while moving draws curved lines, which lets you revise a large area.
3. The slider at the top of the two images controls the pen (or brush) size. When you need to paint (revise) a large area, move the slider a bit to the right for a larger pen; moving it left helps with revising small areas.
4. Clicking the "Save" button on the right saves the revised image (the right one) as a 3-channel, 8-bit-unsigned PNG file in the "output" directory, which is in the same directory as the "images" directory.
5. Clicking the "Next" button refreshes both canvases and loads the next image/label pair. Then just repeat the operations above.
ericleong
A javascript library for drawing images to the canvas with reverse pixel mapping using WebGL.
hughsk
Detect whether an image or canvas element contains any transparent pixels.
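Once the pixels have been read back (e.g. via `ctx.getImageData`), the check itself is a scan of the alpha channel. A minimal sketch of that core loop over a flat RGBA array; the function name is illustrative, not this module's export:

```javascript
// Return true if any pixel in the flat RGBA array is not fully opaque.
// The alpha byte of pixel n sits at index n * 4 + 3.
function hasTransparency(rgba) {
  for (let i = 3; i < rgba.length; i += 4) {
    if (rgba[i] < 255) return true;
  }
  return false;
}
```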
Koushikphy
Canvas pixel rain with image
ChrisAkridge
A library to perform various imaging tasks, such as drawing any file as a series of pixels, or combining many images into a single image or zoomable canvas.
Ethanthegrand
A super fast and efficient pixel-based canvas for image processing or rendering of graphics in real-time for Roblox.
hunkim98
I use a pixel canvas for creating an image input that can be sent to a stable diffusion model alongside a text prompt
RehanMerchant
A pixel-art avatar creation tool that allows users to customize character sprites and download them as images. Built using HTML 5 Canvas, TypeScript and React with LPC-based assets.
nextml-code
Fixes the web HTML5 canvas for very large translations (e.g. 100'000'000 pixels away). The vanilla canvas starts breaking down, with misaligned images and incorrect line thicknesses; this fixes that, but with some limitations.
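One plausible shape of such a fix, offered purely as an assumption about the general approach (this repository's actual code may differ): rather than translating the context by an enormous offset, keep a world-space camera origin and convert each world coordinate to a small screen-space coordinate before drawing, so the numbers handed to the canvas never grow large enough to lose float precision.

```javascript
// Convert a world-space coordinate to screen space by subtracting the
// camera origin. Names and the camera shape are illustrative assumptions.
function worldToScreen(worldX, worldY, camera) {
  // camera = { x, y } in world space; the result stays near zero even
  // when worldX is hundreds of millions of pixels from the origin.
  return { x: worldX - camera.x, y: worldY - camera.y };
}
```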
Ethernol
Ethernol is a website which allows fans to support their favourite contributor. Instead of a simple transaction, supporters can contribute something to a community project themselves: by donating some Ether, they receive the opportunity to paint some pixels of a canvas and help create a unique image on the blockchain.
judy-chun
Uses the colors of pixels in an image to allow pixel substitution from other images. Take a picture of a person wearing a red shirt in front of a green canvas, then make the background look like another locale and their T-shirt carry a new design. The pictures that replace the background and shirt can come from media sources or the internet.
classicvalues
Project 2: Space Invader - enhanced features and utilizing design patterns

For this project, students are allowed to do pair programming but are not required to; students can choose to do Project 2 either individually or as a pair-programming team. For details of pair programming, refer to "Pair Programming". The pair-programming team has an additional requirement, specified in "E. Additional Requirement of Pair-Programming Team".

A. Overview of Program Requirements
Your code must be based on Lesson 7. No credit will be given if your code is not based on Lesson 7. The key structures of Lesson 7 students should keep include (but are not limited to):
- The way the Java GUI window is created and initialized. Use of a Java GUI other than Swing (with some AWT) is not allowed; i.e., the use of JavaFX is not allowed.
- The use of the MVC architecture: separation of code among model, view, and controller functions.
- The use of MyCanvas to render graphics: the paintComponent() method in the JPanel class represents a canvas for drawing.
- The use of the game loop: Timer events provide the periodic loop that updates canvas rendering and collision processing.
Add more features to make a better Space Invader game, adding more functionality to the gameplay. Visual changes using graphic images or adding sound effects are not counted for credit (e.g., the use of graphic image files to represent the enemy is not counted); however, you may do this for your own interest. Implement 3 design patterns in the project.

B. Required Features to Add
The enemies:
- The enemy array goes down by 20 pixels (ENEYMY_SIZE) whenever it changes direction (hits the side walls).
- When an enemy reaches the bottom, the game ends (showing the game-over screen).
Game score and game over:
- Score: 10 points every time an enemy is destroyed. Display live score updates.
- "You Won" if all enemies are destroyed.
- "You Lost" if (1) one of the enemies reaches the bottom, or (2) the shooter is hit 4 times by bombs and all 4 squares of the shooter are destroyed.
- When the game ends, the game-over message ("You Won" or "You Lost") with "score: XXX" must be displayed on the canvas.
The shooter:
- When a bomb hits one of the four squares, the corresponding square is destroyed and disappears from the game scene.
- When all four squares are destroyed, the game ends.

C. Add Your Own Ideas
Add your own creative ideas to enhance the functionality of the gameplay. Note: graphic image changes or adding sound effects, including background music, will not be counted for credit. Credit will be given as you add new features to the gameplay, and the amount of credit will be determined by the complexity of the implementation. You may borrow ideas from the original Space Invader game, or you may add your own ideas to make the game more fun.

D. Implement 3 Design Patterns
(1) the Strategy pattern, (2) the Observer pattern, and (3) an additional design pattern (other than Strategy and Observer). The third design pattern must be one of the patterns learned in Lesson 5; if not, it is not counted for credit. I.e., the third design pattern should be one of: State, Decorator, or Builder. You should implement ALL participants in the corresponding design pattern yourself; utilizing Java libraries does not count as meeting the implementation requirement (e.g., using a JButton event listener as the Observer design pattern).

E. Additional Requirement of Pair-Programming Team
Implement one more design pattern (a fourth) from the ones learned in Lesson 5. Two implementations of the same design pattern are not counted; e.g., implementing the Observer pattern in two places is not counted. The same requirements as in item "D" apply.

Video Presentation Requirements
- Video 1: Show a running program demo proving that you've completed "B. Required Features".
- Video 2: Show a running program demo about "C. Add Your Own Ideas".
- Video 3: Show that you've correctly implemented the Strategy pattern, as follows. From the running program, indicate where the Strategy pattern is utilized. Show and explain the UML diagram for your implementation of the Strategy pattern; students should manually draw UML diagrams with a software tool such as the draw.io website, and the use of an automatic UML generator tool is NOT allowed (no credit if violated). Include classes only if they are participants of the Strategy pattern (no credit if other classes are included). In the class UML, show member variables/methods only if they are relevant to the Strategy pattern (no credit if unrelated members are included). Show and explain the source code to prove the Strategy pattern is correctly implemented. Explain how your implementation meets (1) the intent of the design pattern and (2) the responsibility of each participant in the design pattern. You may not get credit if your explanation is incorrect.
- Video 4: The same as Video 3, but for the Observer pattern.
- Video 5: The same as Video 3, but for the additional (third) design pattern.
- Video 6 (pair-programming team only): The same as Video 3, but for the fourth design pattern.
Max video length: each video should be no longer than 4 minutes. Refer to the submission instructions below as to where you should submit the video links.

Submission
- Program code: download from GitHub and submit the zip file to Project2.
- Video presentation: post the URLs of Videos 1 and 2 in the corresponding Student Video Presentation forum (i.e., Videos 1 and 2 are disclosed to all students). Paste the URLs of Videos 3, 4, 5 (and 6 if pair programming) in the "Comments" box of the program submission page (i.e., Videos 3, 4, 5, and 6 are not disclosed to students).
- Note: D2L keeps only the last submission if you submit multiple times. If you submit again, you must submit everything together again; e.g., submitting only the code or only the revised URL is not allowed.

Grading
Program code and ALL videos must be submitted to get credit. If a video is not submitted, zero points will be given for the corresponding part. In Videos 3, 4, 5, and 6, an incorrect explanation of the design pattern implementation will cause a significant penalty.
Points allocation (total of 220 points):
- 60 points for "B. Required Features"
- 70 points for "C. Your Own Ideas"
- 90 points for "D. Design Patterns": 30 points for each design pattern (22.5 points per pattern if pair programming)
mage2tv
A small JavaScript library used as an example in some videos of https://www.mage2.tv/
happyhorseskull
Renders an image from existing image pixels onto a black canvas
Reloadaxe
No description available
anivire
Canvas with pixel-art images
wenqili
Render and process images pixel by pixel using offscreen canvas
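As an example of the kind of per-pixel pass such a pipeline runs, here is a Rec. 601 grayscale conversion over a flat RGBA array; it is a generic illustration that runs without a canvas, not code from this repository:

```javascript
// Convert each pixel of a flat RGBA array to grayscale using the
// Rec. 601 luma weights (0.299 R + 0.587 G + 0.114 B).
function toGrayscale(rgba) {
  const out = new Uint8ClampedArray(rgba);
  for (let i = 0; i < rgba.length; i += 4) {
    const y = Math.round(0.299 * rgba[i] + 0.587 * rgba[i + 1] + 0.114 * rgba[i + 2]);
    out[i] = out[i + 1] = out[i + 2] = y; // alpha at i + 3 is left untouched
  }
  return out;
}
```

In a browser the same function would be applied to `getImageData().data` (on-screen or in an OffscreenCanvas worker) before writing the result back with `putImageData`.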
atharray
Pixel Canvas Composite Image Renderer For Cute And Funny Images
cbillingham
Simple image filters using the WebGL canvas and pixel algorithms