Found 40 repositories (showing 30)
ajaybhatiya1234
Read the technical deep dive: https://www.dessa.com/post/deepfake-detection-that-actually-works

# Visual DeepFake Detection

In our recent [article](https://www.dessa.com/post/deepfake-detection-that-actually-works), we make the following contributions:

* We show that the model proposed in the current state of the art in video manipulation detection (FaceForensics++) does not generalize to real-life videos randomly collected from YouTube.
* We show the need for the detector to be constantly updated with real-world data, and propose an initial solution in hopes of solving deepfake video detection.

Our PyTorch implementation conducts extensive experiments to demonstrate that the datasets produced by Google and detailed in the FaceForensics++ paper are not sufficient for making neural networks generalize to detect real-life face manipulation techniques. It also provides a current solution for this behavior, which relies on adding more data. Our PyTorch model is based on a ResNet18 pre-trained on ImageNet, which we finetune to solve the deepfake detection problem. We also conduct large-scale experiments using Dessa's open source scheduler + experiment manager [Atlas](https://github.com/dessa-research/atlas).

## Setup

## Prerequisites

To run the code, your system should meet the following requirements: RAM >= 32 GB, GPUs >= 1.

## Steps

1. Install [nvidia-docker](https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0))
2. Install [ffmpeg](https://www.ffmpeg.org/download.html) or `sudo apt install ffmpeg`
3. Git clone this repository.
4. If you haven't already, install [Atlas](https://github.com/dessa-research/atlas).
5. Once you've installed Atlas, activate your environment if you haven't already, and navigate to your project folder.

That's it, you're ready to go!

## Datasets

Half of the dataset used in this project is from the [FaceForensics](https://github.com/ondyari/FaceForensics/tree/master/dataset) deepfake detection dataset.
To download this data, please make sure to fill out the [Google form](https://github.com/ondyari/FaceForensics/#access) to request access to the data. The dataset that we collected from YouTube is available on [S3](https://deepfake-detection.s3.amazonaws.com/augment_deepfake.tar.gz) for download.

To automatically download and restructure both datasets, please execute:

```bash
bash restructure_data.sh faceforensics_download.py
```

Note: you need to have received the download script from the FaceForensics++ authors before executing the restructure script.

Note 2: we created `restructure_data.sh` to do a split that replicates our exact experiments available in the UI above; please feel free to change the splits as you wish.

## Walkthrough

Before starting to train/evaluate models, we should first create the docker image that we will be running our experiments with. A dockerfile for this is already prepared inside `custom_docker_image`. To create the docker image, execute the following commands in a terminal:

```bash
cd custom_docker_image
nvidia-docker build . -t atlas_ff
```

Note: if you change the image name, please make sure you also modify line 16 of `job.config.yaml` to match the docker image name.

Inside `job.config.yaml`, please modify the data path on the host from `/media/biggie2/FaceForensics/datasets/` to the absolute path of your `datasets` folder.
The folder containing your datasets should have the following structure:

```
datasets
├── augment_deepfake (2)
│   ├── fake
│   │   └── frames
│   ├── real
│   │   └── frames
│   └── val
│       ├── fake
│       └── real
├── base_deepfake (1)
│   ├── fake
│   │   └── frames
│   ├── real
│   │   └── frames
│   └── val
│       ├── fake
│       └── real
├── both_deepfake (3)
│   ├── fake
│   │   └── frames
│   ├── real
│   │   └── frames
│   └── val
│       ├── fake
│       └── real
├── precomputed (4)
└── T_deepfake (0)
    ├── manipulated_sequences
    │   ├── DeepFakeDetection
    │   ├── Deepfakes
    │   ├── Face2Face
    │   ├── FaceSwap
    │   └── NeuralTextures
    └── original_sequences
        ├── actors
        └── youtube
```

Notes:

* (0) is the dataset downloaded using the FaceForensics repo scripts.
* (1) is a reshaped version of the FaceForensics data, matching the structure expected by the codebase. Subfolders called `frames` contain frames collected using `ffmpeg`.
* (2) is the augmented dataset, collected from YouTube, available on S3.
* (3) is the combination of both the base and augmented datasets.
* (4) `precomputed` will be created automatically during training. It holds cached cropped frames.

Then, to run all the experiments shown in the article, you can launch the script `hparams_search.py` using:

```bash
python hparams_search.py
```

## Results

In the following pictures, the title of each subplot has the form `real_prob, fake_prob | prediction | label`.

#### Model trained on the FaceForensics++ dataset

For models trained on the paper dataset alone, we notice that the model only learns to detect the manipulation techniques mentioned in the paper and misses all the manipulations in real-world data.

#### Model trained on the YouTube dataset

Models trained on the YouTube data alone learn to detect real-world deepfakes, and also learn to detect the easy deepfakes in the paper dataset. These models, however, fail to detect any other type of manipulation (such as NeuralTextures).
#### Model trained on the Paper + YouTube dataset

Finally, models trained on the combination of both datasets learn to detect both real-world manipulation techniques and the other methods mentioned in the FaceForensics++ paper.

For a more in-depth explanation of these results, please refer to the [article](https://www.dessa.com/post/deepfake-detection-that-actually-works) we published. More results can be seen in the [interactive UI](http://deepfake-detection.dessa.com/projects).

## Help improve this technology

Please feel free to fork this work and keep pushing on it. If you also want to help improve the deepfake detection datasets, please share your real/forged samples at foundations@dessa.com.

## LICENSE

© 2020 Square, Inc. ATLAS, DESSA, the Dessa Logo, and others are trademarks of Square, Inc. All third-party names and trademarks are properties of their respective owners and are used for identification purposes only.
hristoast
Shell scripts for converting textures to atlas format (for Morrowind's Project Atlas)
Andrzejandy
This project contains two programs: Atlas, a Blender plugin written in Python used to generate an image atlas from animations in Blender, and 2D_Game_Atlas, a 2D animation showcasing usage of the Atlas-generated images. The program uses two atlas images for walking and standing, Walk.png and Stand.png, located in Data/Textures/Player. Controls for 2D_Game_Atlas: open 2D_Game_Atlas.exe to run the program; WASD moves the character, ESC exits, and the mouse scroll wheel changes scale. Atlas: this project contains the Python Atlas.py plugin and the Blender file atlas.blend with an already set-up example showing the 2D walking animation. The walking animation was taken from: https://github.com/SebLague/Blender-to-Unity-Character-Creation/tree/master/Blend%20files. To generate the atlas image: 1. Open the atlas.blend file. 2. Click the "Run Script" button at the bottom of the text editor. 3. An "Atlas generator" tab will appear on the right, which lets you change the number of rows, columns and the outline for the atlas generator. 4. Once you click "Start atlas", rendering will start, and once it finishes the atlas image will appear in the output directory, which is by default C:/tmp.
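The core idea of such an atlas generator, tiling equally sized animation frames into a rows × columns grid, can be sketched outside Blender. This is a standalone Pillow illustration, not the plugin's actual code (which renders frames via `bpy`):

```python
# Sketch: tile equally sized frames into a single atlas image,
# row-major, as a texture-atlas generator would.
from PIL import Image

def build_atlas(frames: list, rows: int, cols: int) -> Image.Image:
    w, h = frames[0].size
    atlas = Image.new("RGBA", (cols * w, rows * h))
    for i, frame in enumerate(frames[: rows * cols]):
        r, c = divmod(i, cols)          # row-major placement
        atlas.paste(frame, (c * w, r * h))
    return atlas

# Six 8x8 dummy frames tiled into a 2x3 atlas:
frames = [Image.new("RGBA", (8, 8), (i, 0, 0, 255)) for i in range(6)]
atlas = build_atlas(frames, rows=2, cols=3)
```

The plugin also supports an outline around each cell; that would amount to padding each frame before pasting.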
XoXoHarsh
The MERN Template automates MERN stack setup with a Linux script, installing frontend tools like axios, Bootstrap and React, and backend essentials including bcrypt, Express and nodemailer. Users configure email credentials and a MongoDB Atlas URI for seamless integration, enabling swift project initiation; contributions and enhancements are welcome.
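The Atlas URI the template asks for follows the standard `mongodb+srv://` connection-string format. A hedged sketch of assembling one from credentials (the variable names here are illustrative assumptions, not the template's own configuration keys):

```python
# Sketch: build a MongoDB Atlas connection string. The mongodb+srv
# URI scheme is standard; credentials must be percent-encoded.
from urllib.parse import quote_plus

def atlas_uri(user: str, password: str, host: str, db: str) -> str:
    # host is the cluster hostname, e.g. cluster0.abcde.mongodb.net
    return (
        f"mongodb+srv://{quote_plus(user)}:{quote_plus(password)}"
        f"@{host}/{db}?retryWrites=true&w=majority"
    )
```

In practice the assembled URI would be read from an environment file rather than hard-coded.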
mkelly9513
Contains the code and scripts used to process and analyze the data for my analysis of super-enhancer function in ovarian cancer cells (OVCAR3) and in patient RNA-seq and Copy Number data from The Cancer Genome Atlas (TCGA). This project was published by Nat. Communications in July 2022. https://doi.org/10.1038/s41467-022-31919-8
Scripts for Strategic Atlas project
pablo-gar
Custom scripts for use in the Human Cell Atlas project
deevdevil88
Rmarkdown scripts for adult human postmortem SN & Cortex cell atlas project.
MartinCanovas
Terraform scripts that create a project and a MongoDB cluster in an Atlas organization.
lizfischer
Scripts and documents related to the project "Atlas of a Medieval Life" at UT Austin
zanderso13
Set of scripts that will be used to analyze resting state data for the BrainMAPD project. This includes the application of various atlases, graph theoretic approaches, and neuroimmune analyses.
thanksfortheride136
This is a web app created using Google Apps Script. Its purpose is to simplify the submission process for student work related to 3D modeling, 3D printing and lasercutting. The web app takes student submissions, then automatically creates folders for students and organizes their work by class period. Part of the ATLAS project.
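The folder-routing logic described above (file each submission under its class period, then the student) can be sketched in a few lines. This is an illustrative pure-Python version with assumed names; the real app does this with Google Apps Script and the Drive API:

```python
# Sketch: map a submission to its target folder path, grouped by
# class period and student, per category of work.
def submission_folder(class_period: int, student: str, category: str) -> str:
    # category is e.g. "3d-modeling", "3d-printing" or "lasercutting"
    safe_student = student.strip().lower().replace(" ", "-")
    return f"Period-{class_period}/{safe_student}/{category}"
```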
SIlvaMFPedro
Python Scripts for ATLAS Project
bioShaun
store scripts for pig atlas project
ayam196
Project Script Atlas For Every Game
msvdk
Public scripts and data from the atlas project
IGME-RIT
Utilities for making ATLAS projects, including scripts and snippets
BrookeLab1
Test of repository for scripts for 3D atlas project
ma-tech
Simple VTK scripts and executables from the Mouse Atlas project
russbate42
Scripts for personal contributions/modifications to the ATLAS ML pion projects
IsmailAbdennadher
This project includes Python scripts that dump/restore data from/to Mongo Atlas
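A dump/restore round-trip like the one described can be sketched as JSON-lines serialization. In the real scripts the documents would come from a pymongo cursor connected to Atlas; here the interface is any iterable of dicts, so the logic is shown without a live cluster:

```python
# Sketch: dump MongoDB-style documents to a JSON-lines file and load
# them back. default=str covers non-JSON types such as ObjectId/dates.
import json
from typing import Iterable, Iterator

def dump_docs(docs: Iterable[dict], path: str) -> int:
    count = 0
    with open(path, "w", encoding="utf-8") as fh:
        for doc in docs:
            fh.write(json.dumps(doc, default=str) + "\n")
            count += 1
    return count

def load_docs(path: str) -> Iterator[dict]:
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            yield json.loads(line)
```

Restoring to Atlas would then be a matter of passing the loaded docs to a collection's `insert_many`.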
ZhigangHeLab
Code and scripts used in the analysis of the spinal projecting neuron snRNAseq atlas
A collection of Python scripts for managing MongoDB Atlas organizations, projects, clusters, and users.
PaulBrant
Node.js script to detect IP Access lists in Atlas projects
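For reference, a Python counterpart of such a check against the Atlas Admin API (v1.0 `accessList` endpoint, HTTP digest auth with a programmatic API key) might look like this; the field names follow the documented access-list response:

```python
# Sketch: fetch a project's IP access list from the MongoDB Atlas
# Admin API. Requires a programmatic API key (public/private pair).
import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"

def access_list_url(project_id: str) -> str:
    return f"{BASE}/groups/{project_id}/accessList"

def fetch_access_list(project_id: str, public_key: str, private_key: str) -> list:
    resp = requests.get(
        access_list_url(project_id),
        auth=HTTPDigestAuth(public_key, private_key),
    )
    resp.raise_for_status()
    # Each entry has e.g. "cidrBlock" or "ipAddress" plus a comment.
    return resp.json()["results"]
```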
Vanjia-lee
These scripts are used for the analysis of the project for a spatiotemporal transcriptome atlas of the developing mouse lung.
SuoLab-GZLab
These scripts are used for the analysis of the project for a spatiotemporal transcriptome atlas of the developing mouse lung.
This script handles the "code healing" and reproducibility framework we developed for the Atlas project.
Skills to take away from this project (Python scripting, Data Collection, MongoDB, Streamlit, API integration, Data Management using MongoDB (Atlas) and SQL)
Skills to take away from this project: Python scripting, Data Collection, MongoDB, Streamlit, API integration, Data Management using MongoDB (Atlas) and SQL
aundrus
Scripts for a continuous-integration framework that bridges GitLab merge requests with a Jenkins-based DAG build/test pipeline, developed for the ATLAS experiment software projects at CERN.