Found 240 repositories (showing 30)
a2888409
An asynchronous, non-blocking real-time chat (IM) server based on Netty.
datitran
A pix2pix demo that learns from facial landmarks and translates them into a face
Tencent
Face-to-Face Translation is a streaming speech-translation WeChat mini program developed by the WeChat team for face-to-face communication scenarios. It provides speech recognition and text translation via the WeChat simultaneous interpretation plugin.
DariusAf
"MesoNet: a Compact Facial Video Forgery Detection Network" (D. Afchar, V. Nozick) - IEEE WIFS 2018
NetEase-GameAI
The Official PyTorch Implementation for Face2Face^ρ (ECCV2022)
GordonRen
This is a pix2pix demo that learns from pose and translates it into a human. A webcam-enabled application is also provided that translates your pose to the trained pose. Everybody dance now!
ESanchezLozano
GANnotation (PyTorch): Landmark-guided face to face synthesis using GANs (And a triple consistency loss!)
SocAIty
Swap faces in images and videos. Create face embeddings. Enhance face image quality. Deploy as a web api.
taylorlu
Face swap and 3D alignment from a single image based on PRNet
benmaier
Temporal networks in Python. Provides fast tools to analyze temporal contact networks and simulate dynamic processes on them using Gillespie's SSA.
alina1021
Real-time Facial Expression Transfer --> facial expression capture and reenactment via webcam
kimoktm
A Python library to fit 3D morphable models to images of faces and capture facial performance over time with no markers or special mounts
taylorlu
Audio driven video synthesis
karaninder
Software that takes images of two people, extracts the expression of the person in the first photo, and applies the same expression to the second image.
ajaybhatiya1234
# Visual DeepFake Detection

Read the technical deep dive: https://www.dessa.com/post/deepfake-detection-that-actually-works

In our recent [article](https://www.dessa.com/post/deepfake-detection-that-actually-works), we make the following contributions:

* We show that the model proposed in the current state of the art in video manipulation detection (FaceForensics++) does not generalize to real-life videos randomly collected from YouTube.
* We show the need for the detector to be constantly updated with real-world data, and propose an initial solution in hopes of solving deepfake video detection.

Our PyTorch implementation conducts extensive experiments to demonstrate that the datasets produced by Google and detailed in the FaceForensics++ paper are not sufficient for making neural networks generalize to detect real-life face manipulation techniques. It also provides a current solution for this behavior, which relies on adding more data. Our PyTorch model is based on a ResNet18 pre-trained on ImageNet, which we finetune to solve the deepfake detection problem. We also conduct large-scale experiments using Dessa's open source scheduler + experiment manager [Atlas](https://github.com/dessa-research/atlas).

## Setup

### Prerequisites

To run the code, your system should meet the following requirements: RAM >= 32GB, GPUs >= 1.

### Steps

1. Install [nvidia-docker](https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0)).
2. Install [ffmpeg](https://www.ffmpeg.org/download.html) or `sudo apt install ffmpeg`.
3. Git clone this repository.
4. If you haven't already, install [Atlas](https://github.com/dessa-research/atlas).
5. Once you've installed Atlas, activate your environment if you haven't already, and navigate to your project folder.

That's it, you're ready to go!

## Datasets

Half of the dataset used in this project is from the [FaceForensics](https://github.com/ondyari/FaceForensics/tree/master/dataset) deepfake detection dataset. To download this data, please make sure to fill out the [google form](https://github.com/ondyari/FaceForensics/#access) to request access to the data. The dataset we collected from YouTube is accessible on [S3](https://deepfake-detection.s3.amazonaws.com/augment_deepfake.tar.gz) for download.

To automatically download and restructure both datasets, please execute:

```bash
bash restructure_data.sh faceforensics_download.py
```

Note: you need to have received the download script from the FaceForensics++ authors before executing the restructure script.

Note 2: we created `restructure_data.sh` to produce a split that replicates our exact experiments, available in the UI above; please feel free to change the splits as you wish.

## Walkthrough

Before starting to train/evaluate models, we should first create the Docker image that we will be running our experiments with. A Dockerfile for this is provided inside `custom_docker_image`. To create the Docker image, execute the following commands in a terminal:

```
cd custom_docker_image
nvidia-docker build . -t atlas_ff
```

Note: if you change the image name, please make sure you also modify line 16 of `job.config.yaml` to match the Docker image name.

Inside `job.config.yaml`, please modify the data path on the host from `/media/biggie2/FaceForensics/datasets/` to the absolute path of your `datasets` folder.

The folder containing your datasets should have the following structure:

```
datasets
├── augment_deepfake (2)
│   ├── fake
│   │   └── frames
│   ├── real
│   │   └── frames
│   └── val
│       ├── fake
│       └── real
├── base_deepfake (1)
│   ├── fake
│   │   └── frames
│   ├── real
│   │   └── frames
│   └── val
│       ├── fake
│       └── real
├── both_deepfake (3)
│   ├── fake
│   │   └── frames
│   ├── real
│   │   └── frames
│   └── val
│       ├── fake
│       └── real
├── precomputed (4)
└── T_deepfake (0)
    ├── manipulated_sequences
    │   ├── DeepFakeDetection
    │   ├── Deepfakes
    │   ├── Face2Face
    │   ├── FaceSwap
    │   └── NeuralTextures
    └── original_sequences
        ├── actors
        └── youtube
```

Notes:

* (0) is the dataset downloaded using the FaceForensics repo scripts.
* (1) is a reshaped version of the FaceForensics data matching the structure expected by the codebase. Subfolders called `frames` contain frames collected using `ffmpeg`.
* (2) is the augmented dataset, collected from YouTube, available on S3.
* (3) is the combination of the base and augmented datasets.
* (4) `precomputed` will be created automatically during training; it holds cached cropped frames.

Then, to run all the experiments shown in the article, launch the script `hparams_search.py`:

```bash
python hparams_search.py
```

## Results

In the following pictures, the title of each subplot has the form `real_prob, fake_prob | prediction | label`.

#### Model trained on the FaceForensics++ dataset

For models trained on the paper dataset alone, we notice that the model only learns to detect the manipulation techniques mentioned in the paper and misses all the manipulations in real-world data.

#### Model trained on the YouTube dataset

Models trained on the YouTube data alone learn to detect real-world deepfakes, and also learn to detect the easy deepfakes in the paper dataset. These models, however, fail to detect other types of manipulation (such as NeuralTextures).

#### Model trained on the Paper + YouTube dataset

Finally, models trained on the combination of both datasets learn to detect both real-world manipulation techniques and the other methods mentioned in the FaceForensics++ paper.

For a more in-depth explanation of these results, please refer to the [article](https://www.dessa.com/post/deepfake-detection-that-actually-works) we published. More results can be seen in the [interactive UI](http://deepfake-detection.dessa.com/projects).

## Help improve this technology

Please feel free to fork this work and keep pushing on it. If you also want to help improve the deepfake detection datasets, please share your real/forged samples at foundations@dessa.com.

## LICENSE

© 2020 Square, Inc. ATLAS, DESSA, the Dessa Logo, and others are trademarks of Square, Inc. All third-party names and trademarks are properties of their respective owners and are used for identification purposes only.
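The `fake/frames` and `real/frames` split layout documented above can be enumerated with a few lines of standard-library Python. This is a minimal sketch under stated assumptions, not code from the repository: `list_labeled_frames` is a hypothetical helper name, and the actual training pipeline crops and caches frames (`precomputed`) rather than reading them directly.

```python
from pathlib import Path
import tempfile

def list_labeled_frames(split_root):
    """Walk one split folder (e.g. base_deepfake) and return (path, label)
    pairs, labeling frames under fake/frames as 1 and real/frames as 0.
    Hypothetical helper; the repo's own loader may differ."""
    pairs = []
    for label_name, label in (("fake", 1), ("real", 0)):
        frames_dir = Path(split_root) / label_name / "frames"
        if frames_dir.is_dir():
            pairs.extend((p, label) for p in sorted(frames_dir.glob("*.png")))
    return pairs

# Build a tiny throwaway tree matching the documented layout, then enumerate it.
root = Path(tempfile.mkdtemp()) / "base_deepfake"
for name in ("fake", "real"):
    (root / name / "frames").mkdir(parents=True)
    (root / name / "frames" / "0001.png").touch()

pairs = list_labeled_frames(root)
print([(p.name, label) for p, label in pairs])  # [('0001.png', 1), ('0001.png', 0)]
```

A real loader would also cover the `val/fake` and `val/real` subfolders and whatever frame extension `ffmpeg` was configured to emit.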
interactivetech
Final Project for Stanford Deep Generative Modeling Class CS236.
Face forgery techniques such as Generative Adversarial Networks (GANs) have been widely used for image synthesis in movie production, journalism, etc. What backfires is that these generative technologies are widely abused to impersonate credible people and distribute illegal, misleading, and confusing information to the public. The problem with previous fake face detection methods, however, is that they fail to distinguish between different fake generation modalities (various GANs), so none of them generalize to open-world forgery scenarios: they are almost ineffective at identifying fake faces produced by unknown forgery approaches. To address this challenge, this paper first analyzes the weaknesses of GAN-based generators. Our validation experiments on different face generation models, such as Deepfakes, Face2Face, and FaceSwap, found that detectors trained on faces from one model do not generalize to faces generated by other models. Our experiments also revealed that recent GAN-generated fake faces are still not robust, because the generators do not consider enough pixels. Inspired by this finding, we design a novel convolutional neural network that uses frequency texture augmentation and knowledge distillation to enhance its global texture perception, effectively describe textures at different semantic levels in images, and improve robustness. We introduce two core components: the Discrete Cosine Transform (DCT) and knowledge distillation (KDL). DCT serves both for image compression and as a frequency-domain signal for distinguishing fake faces from real faces. KDL is used to extract features from counterfeit and real image targets, making our model generalize to multiple types of fake face generation methods. Experiments on two datasets, Celeb-DF and FaceForensics++, demonstrate that DCT facilitates deepfake detection in some cases, and that knowledge distillation plays a key role in our model. Our model achieves better and more consistent performance in image processing and cross-domain settings, especially when images are subject to Gaussian noise.
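The frequency view behind the DCT component described above can be illustrated with a small hand-rolled example. This is a generic sketch of the 2-D type-II DCT, not the paper's implementation: `dct2` is an illustrative helper, and the model presumably applies the transform inside a learned pipeline rather than to raw 8x8 patches.

```python
import numpy as np

def dct2(block):
    """2-D orthonormal type-II DCT via a separable cosine-basis matrix.
    Illustrative helper, not the paper's code."""
    n = block.shape[0]
    k = np.arange(n)
    # C[k, m] = sqrt(2/n) * cos(pi * (2m + 1) * k / (2n)), with the DC row
    # rescaled to sqrt(1/n) so that C is orthonormal.
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

# A constant (texture-free) patch concentrates all energy in the DC
# coefficient; textured patches spread energy into higher frequencies,
# which is the kind of cue a frequency-based detector can exploit.
flat = np.full((8, 8), 128.0)
coeffs = dct2(flat)
print(round(coeffs[0, 0]))                   # 1024 (mean * n for an n x n block)
print(bool(np.abs(coeffs[1:, 1:]).max() < 1e-9))  # True: no AC energy
```

For a constant block every non-DC coefficient vanishes, so any AC energy measured on a face crop comes from texture; GAN upsampling artifacts tend to leave characteristic traces in exactly those AC bands.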
meiwulang
heygem synthetic-video source code
Face2Face-Py
A Python program that lets you navigate Facebook with hand gestures and react to posts with facial expressions.
penserbjorne
Project for the "Machine Learning" ("Aprendizaje (Máquina)") and "Pattern Recognition" ("Reconocimiento de patrones") courses at FI, UNAM, semester 2020-1
jtorregrosa
This project extracts and aligns faces from an image. The output images can be used as input for any machine learning algorithm that learns to recognize faces.
herma-mora
A mirror of iammitochondrion's "Simple Face to Face Conversation" source code.
BingruLin
Face2FaceTranslator demo
YunfanXu
This is the back-end of the system; it contains a trained Xception model to classify manipulated facial images and videos, covering Deepfakes, Face2Face, and FaceSwap.
wangfpp
face2face shared code
OliverCollins
Web application to determine whether someone is lying 🤔
Baldur10
A project to train a machine learning model to successfully identify images of people that have been modified with DeepFake or Face2Face.
BigJohnn
Please download the MUCT dataset to test the face2face deformation, or use your own pictures (points need to be picked with the annotation_with_opencv tool).
gesiscss
No description available
ice-penguin
Node SDK for Alipay face2face