ABSTRACT
Deep fakes are altered, high-quality, realistic videos/images that have lately gained
popularity. Many incredible uses of this technology are being investigated.
Malicious uses of fake videos, such as fake news, celebrity pornographic videos
and financial scams are currently on the rise in the digital world. As a result,
celebrities, politicians, and other well-known persons are particularly vulnerable, which makes deep fake detection an important challenge. Numerous studies have been undertaken in recent years to understand how deep fakes work, and many deep learning-based algorithms for detecting deep fake videos or images have been presented.
This study comprehensively evaluates deep fake production and detection
technologies based on several deep learning algorithms. In addition, the limits of
current approaches and the availability of databases will be discussed, with the aim of arriving at a deep fake detection system that is both precise and automatic. Given the ease with which deep fake videos and images may be generated and shared, the lack of an effective deep fake detection system creates a serious problem for the world. However, there have been various attempts to address this issue, and deep learning-based solutions outperform traditional approaches. These capabilities are used to train a ResNeXt-based classifier that learns to categorize whether a video has been subjected to manipulation or not, and that is also capable of detecting the temporal inconsistencies between frames introduced by deepfake creation tools.
Index Terms—Deep Fakes, Deep Learning, Fake Generation, Fake Detection,
Machine Learning.
1.INTRODUCTION
1.1 MOTIVATION
The deep fake generation and detection technologies based on several deep
learning algorithms are thoroughly assessed in this paper. Furthermore, the
limitations of existing methodologies and the accessibility of databases across
society will be examined. The goal is an accurate, automated technique for deepfake detection. The absence of an efficient deep fake detection system poses a major threat to the global community, given the simplicity with which deepfake videos and images may be created and distributed. There have been many efforts to solve
this problem, however, and deep learning-related solutions work better than
conventional methods.
1.2 PROBLEM DEFINITION
Due to the huge loss of frame content during video compression, existing deep
learning algorithms for image identification cannot effectively detect bogus videos.
The severe deterioration of the frame data following video compression prevents
the majority of image recognition techniques from being employed for videos.
Additionally, videos pose a problem for techniques intended to identify only still fake images, since their temporal features vary across sets of frames.
1.3 OBJECTIVE OF PROJECT
The objective is a framework in which low-level face manipulation defects are expected to appear as temporal distortions and irregularities between frames. However,
deep learning algorithms frequently employ face photos from the internet that
typically display people with wide eyes; fewer pictures of persons with closed eyes
may be seen online. As a result, deep fake algorithms are unable to generate fake
faces that blink often in the absence of photographs of actual people doing so.
Deep fakes, in other words, have far lower blink rates than regular videos.
1.4 SCOPE OF PROJECT
Detecting deep fake images and videos using deep learning techniques is an important
and evolving area of research and development. The scope of this field is broad, encompassing both technological advancements and the societal implications of deep fake technology.
2.LITERATURE SURVEY
2.1 Deepfake video detection using recurrent neural networks
AUTHORS: D. Guera and E. J. Delp,
ABSTRACT: In recent months a machine learning based free software tool has
made it easy to create believable face swaps in videos that leaves few traces of
manipulation, in what are known as "deepfake" videos. Scenarios where these
realistic fake videos are used to create political distress, blackmail someone or
fake terrorism events are easily envisioned. This paper proposes a temporal-
aware pipeline to automatically detect deepfake videos. Our system uses a
convolutional neural network (CNN) to extract frame-level features. These
features are then used to train a recurrent neural network (RNN) that learns to
classify if a video has been subject to manipulation or not. We evaluate our
method against a large set of deepfake videos collected from multiple video
websites. We show how our system can achieve competitive results in this task
while using a simple architecture.
2.2 Face x-ray for more general face forgery detection
AUTHORS: L. Li, J. Bao, T. Zhang, H. Yang, D. Chen, F. Wen, and B. Guo
ABSTRACT: In this paper we propose a novel image representation called face
X-ray for detecting forgery in face images. The face X-ray of an input face
image is a greyscale image that reveals whether the input image can be
decomposed into the blending of two images from different sources. It does so
by showing the blending boundary for a forged image and the absence of
blending for a real image. We observe that most existing face manipulation
methods share a common step: blending the altered face into an existing
background image. For this reason, face X-ray provides an effective way for
detecting forgery generated by most existing face manipulation algorithms.
Face X-ray is general in the sense that it only assumes the existence of a
blending step and does not rely on any knowledge of the artifacts associated
with a specific face manipulation technique. Indeed, the algorithm for
computing face X-ray can be trained without fake images generated by any of
the state-of-the-art face manipulation methods. Extensive experiments show
that face X-ray remains effective when applied to forgery generated by unseen
face manipulation techniques, while most existing face forgery detection or
deepfake detection algorithms experience a significant performance drop.
2.3 DeepfakeStack: A deep ensemble-based learning technique for deepfake detection
AUTHORS: M. S. Rana and A. H. Sung.
ABSTRACT: Recent advances in technology have made the deep learning
(DL) models available for use in a wide variety of novel applications; for
example, generative adversarial network (GAN) models are capable of
producing hyperrealistic images, speech, and even videos, such as the so-called
“Deepfake” produced by GANs with manipulated audio and/or video clips,
which are so realistic as to be indistinguishable from the real ones in human
perception. Aside from innovative and legitimate applications, there are
numerous nefarious or unlawful ways to use such counterfeit contents in
propaganda, political campaigns, cybercrimes, extortion, etc. To meet the
challenges posed by Deepfake multimedia, we propose a deep ensemble
learning technique called DeepfakeStack for detecting such manipulated videos.
The proposed technique combines a series of DL based state-of-art
classification models and creates an improved composite classifier. Based on
our experiments, it is shown that DeepfakeStack outperforms other classifiers
by achieving an accuracy of 99.65% and AUROC of 1.0 score in detecting
Deepfake. Therefore, our method provides a solid basis for building a Realtime
Deepfake detector.
2.4 Detecting deepfake videos using attribution-based confidence metric
AUTHORS: S. Fernandes, S. Raj, R. Ewetz, J. S. Pannu, S. K. Jha, E. Ortiz, I.
Vintila, and M. Salter,
ABSTRACT: Recent advances in generative adversarial networks have made
detecting fake videos a challenging task. In this paper, we propose the
application of the state-of-the-art attribution-based confidence (ABC) metric for
detecting deepfake videos. The ABC metric does not require access to the
training data or training the calibration model on the validation data. The ABC
metric can be used to draw inferences even when only the trained model is
available. Here, we utilize the ABC metric to characterize whether a video is
original or fake. The deep learning model is trained only on original videos. The
ABC metric uses the trained model to generate confidence values. For original videos, the confidence values are greater than 0.94.
3.SYSTEM ANALYSIS
3.1 EXISTING SYSTEM:
Zhao et al. recently introduced a methodology for deep fake detection utilizing the self-consistency of local source features, which are spatially local, content-independent details of images. A CNN model extracts these source features as down-sampled feature maps using a representation learning approach referred to as pairwise self-consistency learning, which penalizes pairs of feature vectors that correspond to regions of the same image but have low cosine similarity scores. This approach can be at a disadvantage when dealing with fake images created by tools that output the entire image directly, since the source features are then constant at every point within the image.
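For illustration, a minimal sketch of the pairwise cosine-similarity idea is given below; the tensor shapes, the function name and the exact form of the loss are assumptions for illustration only, not the formulation used by Zhao et al.

import torch
import torch.nn.functional as F

def pairwise_consistency_loss(feature_map: torch.Tensor) -> torch.Tensor:
    # feature_map: (C, H, W) down-sampled source-feature map of one image.
    # Penalizes pairs of locations whose feature vectors have low cosine similarity.
    c, h, w = feature_map.shape
    vectors = feature_map.reshape(c, h * w).t()      # one C-dimensional vector per location
    vectors = F.normalize(vectors, dim=1)            # unit length, so dot product = cosine
    similarity = vectors @ vectors.t()               # (H*W, H*W) cosine similarities
    return (1.0 - similarity).mean()                 # small when all locations agree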
In past months, free deep learning-based software tools have made the creation of
credible face exchanges in videos that leave few traces of manipulation, in what
are known as "DeepFake"(DF) videos.
Manipulation of digital video has been possible for many years through the use of visual effects, but recent advances in deep learning have led to a drastic increase in the realism of fake content and in the ease with which it can be created.
3.1.1 DISADVANTAGES OF EXISTING SYSTEM:
 Fake-image-based methods use per-frame error functions for real or fake
detection. Applying such methods to video requires a great deal of
computational power and is therefore time-consuming.
 Some poorly created deep fake videos leave visual artifacts behind, which
can be used for deepfake detection. Methods used for classification can
thus be grouped according to the classifier used, i.e. deep or shallow.
3.2 PROPOSED SYSTEM:
There are many tools available for creating DeepFakes, but hardly any tool is available for DeepFake detection. Our approach for detecting DF will be a great contribution in preventing the spread of DF over the world wide web. We will provide a web-based platform where the user can upload a video and detect whether it is fake or real. This project can be scaled up from a web-based platform to a browser plugin for automatic DF detection. Even big applications like WhatsApp and Facebook can integrate this project with their applications for easy pre-detection of DF before a video is sent to another user. One of the important objectives is to evaluate its performance and acceptability in terms of security, user-friendliness, accuracy and reliability. Our method focuses on detecting all types of DF, such as replacement DF, retrenchment DF and interpersonal DF.
3.2.1 ADVANTAGES OF PROPOSED SYSTEM:
 Deep learning has shown considerable success in the identification of
deep fakes.
 In order to recognize fake videos and photos properly, current deep
learning approaches must be enhanced.
 It primarily covers classic detection methods as well as deep learning-based
methods such as CNN, RNN, and LSTM.
3.3 SYSTEM REQUIREMENTS:
HARDWARE REQUIREMENTS:
• System : Pentium IV 2.4 GHz
• Hard Disk : 40 GB
• Floppy Drive : 1.44 MB
• Monitor : 15" VGA Colour
• Mouse : Logitech
• RAM : 512 MB
SOFTWARE REQUIREMENTS:
• Operating System: Windows
• Coding Language: Python 3.7
3.4 SYSTEM STUDY
FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.
Three key considerations involved in the feasibility analysis are
 ECONOMICAL FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY
ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, and the expenditures must be justified. The developed system is well within the budget, which was achieved because most of the technologies used are freely available; only the customized products had to be purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would in turn place high demands on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.
SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system but must instead accept it as a necessity. The level of acceptance by the users depends on the methods that are employed to educate users about the system and to make them familiar with it. Their confidence must be raised so that they are also able to offer constructive criticism, which is welcomed, as they are the final users of the system.
4.SYSTEM DESIGN
4.1 SYSTEM ARCHITECTURE:
4.2 DATA FLOW DIAGRAM:
1. The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on this data, and the output data generated by the system.
2. The data flow diagram (DFD) is one of the most important modeling tools. It
is used to model the system components. These components are the
system process, the data used by the process, an external entity that
interacts with the system and the information flows in the system.
3. DFD shows how the information moves through the system and how it is
modified by a series of transformations. It is a graphical technique that
depicts information flow and the transformations that are applied as data
moves from input to output.
4. A DFD may be used to represent a system at any level of abstraction, and may be partitioned into levels that represent increasing information flow and functional detail.
[Data flow diagram: the user uploads the real and fake video dataset; after an authorization check (unauthorized users are rejected), the data is preprocessed, features are extracted, LSTM and ResNeXt classifiers are built, a model is generated, and a test video can be uploaded for prediction before the process ends.]
4.3 UML DIAGRAMS
UML stands for Unified Modeling Language. UML is a standardized
general-purpose modeling language in the field of object-oriented software
engineering. The standard is managed, and was created by, the Object
Management Group.
The goal is for UML to become a common language for creating models of object-oriented computer software. In its current form, UML comprises two major components: a meta-model and a notation. In the future, some form of method or process may also be added to, or associated with, UML.
The Unified Modeling Language is a standard language for specifying, visualizing, constructing and documenting the artifacts of software systems, as well as for business modeling and other non-software systems.
The UML represents a collection of best engineering practices that have
proven successful in the modeling of large and complex systems.
The UML is a very important part of developing object-oriented software and the software development process. The UML mostly uses graphical notations to express the design of software projects.
GOALS:
The Primary goals in the design of the UML are as follows:
1. Provide users a ready-to-use, expressive visual modeling Language so that
they can develop and exchange meaningful models.
2. Provide extendibility and specialization mechanisms to extend the core
concepts.
3. Be independent of particular programming languages and development
process.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of OO tools market.
6. Support higher level development concepts such as collaborations,
frameworks, patterns and components.
7. Integrate best practices.
Use case diagram:
A use case diagram in the Unified Modeling Language (UML) is a type of
behavioral diagram defined by and created from a Use-case analysis. Its purpose is
to present a graphical overview of the functionality provided by a system in terms
of actors, their goals (represented as use cases), and any dependencies between
those use cases. The main purpose of a use case diagram is to show what system
functions are performed for which actor. Roles of the actors in the system can be
depicted.
Class diagram:
The class diagram is used to refine the use case diagram and define a detailed design of
the system. The class diagram classifies the actors defined in the use case diagram into a set of
interrelated classes. The relationship or association between the classes can be either an "is-a" or
"has-a" relationship. Each class in the class diagram may be capable of providing certain
functionalities. These functionalities provided by the class are termed "methods" of the class.
Apart from this, each class may have certain "attributes" that uniquely identify the class.
Object diagram:
The object diagram is a special kind of class diagram. An object is an instance of a class.
This essentially means that an object represents the state of a class at a given point of time while
the system is running. The object diagram captures the state of different classes in the system and
their relationships or associations at a given point of time.
State diagram:
A state diagram, as the name suggests, represents the different states that objects in the
system undergo during their life cycle. Objects in the system change states in response to events.
In addition to this, a state diagram also captures the transition of the object's state from an initial
state to a final state in response to events affecting the system.
Activity diagram:
The process flows in the system are captured in the activity diagram. Similar to a state
diagram, an activity diagram also consists of activities, actions, transitions, initial and final
states, and guard conditions.
Sequence diagram:
A sequence diagram represents the interaction between different objects in the system. The
important aspect of a sequence diagram is that it is time-ordered. This means that the exact
sequence of the interactions between the objects is represented step by step. Different objects in
the sequence diagram interact with each other by passing "messages".
Collaboration diagram:
A collaboration diagram groups together the interactions between different objects. The
interactions are listed as numbered interactions that help to trace the sequence of the interactions.
The collaboration diagram helps to identify all the possible interactions that each object has with
other objects.
4.4 IMPLEMENTATION:
MODULES:
Dataset: To build any machine learning or deep learning model we require real-world data. We first collected data from different platforms, namely Kaggle's Deepfake Detection Challenge, Celeb-DF [8], and FaceForensics++. Kaggle's Deepfake Detection Challenge contains 3000 videos, of which 50% are real and 50% are manipulated. Celeb-DF contains videos of some famous celebrities and has a total of 1000 videos, of which 500 are real and 500 are manipulated. The FaceForensics++ dataset contains a total of 2000 videos, of which 1000 are real and the remaining are manipulated. All three datasets are then merged together and passed to data preprocessing.
Data Preprocessing: Preprocessing of the data is a very important step, as it is how we extract the useful information from the data and eliminate what is unnecessary. Dataset preprocessing starts by splitting each video into frames. Face detection is then performed, and each frame with a detected face is cropped to the face region. To preserve consistency in the number of frames, the mean frame count of the video dataset is determined, and a new processed face-cropped dataset containing that many frames per video is constructed. During preprocessing, frames that do not include faces are ignored. Processing a 10-second video at 30 frames per second, or 300 frames in total, would require a significant amount of computing power, so for the sake of experimentation we propose using only the first 100 frames to train the model.
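A minimal preprocessing sketch is shown below. It assumes OpenCV is available and uses a Haar-cascade face detector as a stand-in for whichever detector the project actually uses; the function name, frame size and frame limit are illustrative.

import cv2

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face_frames(video_path, max_frames=100, size=(112, 112)):
    # Split the video into frames, ignore frames without a detected face,
    # and return at most max_frames cropped face images.
    capture = cv2.VideoCapture(video_path)
    faces = []
    while len(faces) < max_frames:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        detections = FACE_CASCADE.detectMultiScale(gray, 1.3, 5)
        if len(detections) == 0:
            continue
        x, y, w, h = detections[0]
        faces.append(cv2.resize(frame[y:y + h, x:x + w], size))
    capture.release()
    return faces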
Model: The model is made up of a resnext50_32x4d backbone and one LSTM layer. The data loader loads the preprocessed face-cropped videos and divides them into two groups: train and test. The frames from the processed videos are then supplied to the model in small batches for training and testing.
ResNeXt CNN for Feature Extraction: We propose using the ResNeXt CNN classifier for extracting features and reliably recognizing frame-level characteristics instead of writing a classifier from scratch. We then fine-tune the network by adding extra layers as needed and choosing a suitable learning rate so that the gradient descent of the model converges properly.
LSTM for Sequence Processing: The sequence of ResNeXt CNN feature vectors of the input frames is fed to the LSTM, and a 2-node output layer produces the probabilities of the sequence belonging to a deep fake video or an untampered video. The main problem that we must solve is the design of a model that can recursively process a sequence in a meaningful way. For this task, we propose using an LSTM with 2048 units and a dropout probability of 0.4, which is capable of achieving our goal. The LSTM analyzes the frames sequentially in order to perform a temporal analysis of the video, comparing the frame at time t with the frames at earlier time steps.
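A hedged PyTorch sketch of such a model follows. The 2048 LSTM units, the 0.4 dropout and the two output classes come from the description above; the class name, input shape and everything else are assumptions rather than the project's exact implementation.

import torch.nn as nn
from torchvision import models

class DeepfakeDetector(nn.Module):
    def __init__(self, hidden_dim=2048, num_classes=2):
        super().__init__()
        backbone = models.resnext50_32x4d(pretrained=True)
        # Keep the 2048-d feature extractor, drop the final classification layer.
        self.feature_extractor = nn.Sequential(*list(backbone.children())[:-1])
        self.lstm = nn.LSTM(input_size=2048, hidden_size=hidden_dim, batch_first=True)
        self.dropout = nn.Dropout(0.4)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, frames):                        # frames: (batch, seq_len, 3, H, W)
        b, t, c, h, w = frames.shape
        feats = self.feature_extractor(frames.view(b * t, c, h, w))
        feats = feats.view(b, t, 2048)                # one 2048-d vector per frame
        out, _ = self.lstm(feats)                     # temporal analysis across frames
        return self.classifier(self.dropout(out[:, -1]))   # real/fake logits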
Predict: The trained model is given a new video for prediction. A fresh video is first preprocessed to match the format expected by the trained model: the video is split into frames and the faces are cropped, and instead of storing the video locally, the cropped frames are passed directly to the trained model for detection.
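A minimal prediction sketch under the same assumptions as the model sketch above; which class index corresponds to real versus fake is also an assumption.

import torch

def predict(model, face_frames, device="cpu"):
    # face_frames: list of preprocessed face-cropped frames as (3, H, W) float tensors.
    model.eval()
    batch = torch.stack(face_frames).unsqueeze(0).to(device)   # (1, seq_len, 3, H, W)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0]
    label = "REAL" if probs[0] > probs[1] else "FAKE"           # assumes index 0 = real
    return label, float(probs.max())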
ALGORITHMS:
Long short-term memory (LSTM):
Long short-term memory is an artificial recurrent neural network (RNN)
architecture used in the field of deep learning. Unlike standard feedforward neural
networks, LSTM has feedback connections. It can not only process single data
points (such as images), but also entire sequences of data (such as speech or
video). For example, LSTM is applicable to tasks such as unsegmented, connected handwriting recognition, speech recognition [3][4], and anomaly detection in network traffic or IDSs (intrusion detection systems).
A common LSTM unit is composed of a cell, an input gate, an output gate and
a forget gate. The cell remembers values over arbitrary time intervals and the
three gates regulate the flow of information into and out of the cell.
LSTM networks are well-suited to classifying, processing and making
predictions based on time series data, since there can be lags of unknown duration
between important events in a time series. LSTMs were developed to deal with
the vanishing gradient problem that can be encountered when training traditional
RNNs. Relative insensitivity to gap length is an advantage of LSTM over RNNs, hidden Markov models and other sequence learning methods in numerous applications.
Training:
An RNN using LSTM units can be trained in a supervised fashion on a set of training sequences, using an optimization algorithm such as gradient descent combined with backpropagation through time to compute the gradients needed during the optimization process, in order to change each weight of the LSTM network in proportion to the derivative of the error (at the output layer of the LSTM network) with respect to the corresponding weight.
A problem with using gradient descent for standard RNNs is that error
gradients vanish exponentially quickly with the size of the time lag between
important events. However, with LSTM units, when error values are back-
propagated from the output layer, the error remains in the LSTM unit's cell. This
"error carousel" continuously feeds error back to each of the LSTM unit's gates,
until they learn to cut off the value.
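In modern frameworks, backpropagation through time is handled automatically by autograd when the loss is backpropagated through the unrolled LSTM. The loop below is a hedged sketch of such supervised training; the data loader, optimizer choice and learning rate are assumptions.

import torch
import torch.nn as nn

def train_one_epoch(model, train_loader, device="cpu", lr=1e-4):
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for frames, labels in train_loader:              # labels: 0 = real, 1 = fake (assumed)
        frames, labels = frames.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(frames), labels)
        loss.backward()                              # backpropagation through time
        optimizer.step()                             # weight update proportional to gradients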
ResNeXt:
ResNeXt is a Convolutional Neural Network (CNN) architecture and a deep learning model. ResNeXt was introduced in 2017 by researchers at UC San Diego and Facebook AI Research in a paper titled “Aggregated Residual Transformations for Deep Neural Networks.”
ResNeXt uses the basic ideas of the ResNet (Residual Network) model, but organizes each block into “groups” of parallel paths (its cardinality). These groups contain multiple parallel paths, and each path learns different features. This allows the network to learn more features more effectively, increasing its representational power.
The main features and advantages of ResNeXt are:
Parallel Paths: ResNeXt is based on the use of multiple parallel paths (or groups)
in the same layer. This allows the network to learn a broader and more diverse set
of features.
Depth and Width: ResNeXt combines two basic approaches, increasing the depth of the network and increasing its width by raising the number of groups in each layer. This allows more parameters to be used to achieve better performance.
State-of-the-Art Performance: ResNeXt has demonstrated state-of-the-art
performance on a variety of tasks. It has achieved successful results especially in
image classification, object recognition and other visual processing tasks.
Transfer Learning: ResNeXt can be effectively used to adapt pre-trained models to
other tasks. This is important for transfer learning applications.
ResNeXt is used in many application areas, particularly in deep learning problems
working with visual and text data, such as image classification, object detection, face
recognition, natural language processing (NLP) and medical image analysis. This
model performs particularly well on large data sets and is also a suitable option for
transfer learning applications.
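As a small illustration of the transfer-learning point, a torchvision ResNeXt-50 (32x4d) pretrained on ImageNet can be adapted to a new task by replacing its final layer; using a two-class real/fake head here is an assumption about this project's setting, not a prescribed recipe.

import torch.nn as nn
from torchvision import models

model = models.resnext50_32x4d(pretrained=True)      # ImageNet-pretrained backbone
for param in model.parameters():
    param.requires_grad = False                       # freeze the pretrained features
model.fc = nn.Linear(model.fc.in_features, 2)         # new trainable head for 2 classes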
5.SOFTWARE ENVIRONMENT
What is Python :-
Below are some facts about Python.
Python is currently the most widely used multi-purpose, high-level programming language.
Python allows programming in Object-Oriented and Procedural paradigms. Python
programs are generally smaller than those written in other programming languages like Java. Programmers have to type relatively less, and the language's indentation requirement keeps the code readable.
Python language is being used by almost all tech-giant companies like – Google,
Amazon, Facebook, Instagram, Dropbox, Uber… etc.
The biggest strength of Python is huge collection of standard library which can be used
for the following –
 Machine Learning
 GUI Applications (like Kivy, Tkinter, PyQt etc. )
 Web frameworks like Django (used by YouTube, Instagram, Dropbox)
 Image processing (like Opencv, Pillow)
 Web scraping (like Scrapy, BeautifulSoup, Selenium)
 Test frameworks
 Multimedia
Advantages of Python :-
Let’s see how Python dominates over other languages.
1. Extensive Libraries
Python comes with an extensive library that contains code for various purposes like regular expressions, documentation generation, unit testing, web browsers, threading, databases, CGI, email, image manipulation, and more. So, we don't have to write the complete code for that manually.
2. Extensible
As we have seen earlier, Python can be extended to other languages. You can write some
of your code in languages like C++ or C. This comes in handy, especially in projects.
3. Embeddable
Complimentary to extensibility, Python is embeddable as well. You can put your Python
code in your source code of a different language, like C++. This lets us add scripting
capabilities to our code in the other language.
4. Improved Productivity
The language's simplicity and extensive libraries render programmers more productive than languages like Java and C++ do. You need to write less and get more things done.
5. IOT Opportunities
Since Python forms the basis of new platforms like Raspberry Pi, it finds the future bright for
the Internet Of Things. This is a way to connect the language with the real world.
6. Simple and Easy
When working with Java, you may have to create a class to print ‘Hello World’. But in
Python, just a print statement will do. It is also quite easy to learn, understand, and code.
This is why when people pick up Python, they have a hard time adjusting to other more
verbose languages like Java.
7. Readable
Because it is not such a verbose language, reading Python is much like reading English.
This is the reason why it is so easy to learn, understand, and code. It also does not need
curly braces to define blocks, and indentation is mandatory. This further aids the
readability of the code.
8. Object-Oriented
This language supports both the procedural and object-oriented programming paradigms.
While functions help us with code reusability, classes and objects let us model the real
world. A class allows the encapsulation of data and functions into one.
9. Free and Open-Source
Like we said earlier, Python is freely available. But not only can you download Python for
free, but you can also download its source code, make changes to it, and even distribute it. It
downloads with an extensive collection of libraries to help you with your tasks.
10. Portable
When you code your project in a language like C++, you may need to make some changes
to it if you want to run it on another platform. But it isn’t the same with Python. Here, you
need to code only once, and you can run it anywhere. This is called Write Once Run
Anywhere (WORA). However, you need to be careful enough not to include any system-
dependent features.
11. Interpreted
Lastly, we will say that it is an interpreted language. Since statements are executed one by
one, debugging is easier than in compiled languages.
Advantages of Python Over Other Languages
1. Less Coding
Almost all tasks require less code in Python than when the same task is done in other languages. Python also has awesome standard library support, so you don't have to search for any third-party libraries to get your job done. This is the reason that many people
suggest learning Python to beginners.
2. Affordable
Python is free therefore individuals, small companies or big organizations can leverage the
free available resources to build applications. Python is popular and widely used so it gives
you better community support.
The 2019 Github annual survey showed us that Python has overtaken Java in the most
popular programming language category.
3. Python is for Everyone
Python code can run on any machine whether it is Linux, Mac or Windows. Programmers
need to learn different languages for different jobs but with Python, you can professionally
build web apps, perform data analysis and machine learning, automate things, do web
scraping and also build games and powerful visualizations. It is an all-rounder programming
language.
Disadvantages of Python
So far, we’ve seen why Python is a great choice for your project. But if you choose it, you
should be aware of its consequences as well. Let’s now see the downsides of choosing
Python over another language.
1. Speed Limitations
We have seen that Python code is executed line by line. But since Python is interpreted, it
often results in slow execution. This, however, isn’t a problem unless speed is a focal point
for the project. In other words, unless high speed is a requirement, the benefits offered by
Python are enough to distract us from its speed limitations.
2. Weak in Mobile Computing and Browsers
While it serves as an excellent server-side language, Python is rarely seen on the client side. Besides that, it is rarely ever used to implement smartphone-based
applications. One such application is called Carbonnelle.
The reason it is not so famous despite the existence of Brython is that it isn’t that secure.
3. Design Restrictions
As you know, Python is dynamically-typed. This means that you don’t need to declare the
type of variable while writing the code. It uses duck-typing. But wait, what’s that? Well, it
just means that if it looks like a duck, it must be a duck. While this is easy on the
programmers during coding, it can raise run-time errors.
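A small example of how duck typing can defer errors to run time:

def total_length(items):
    return sum(len(item) for item in items)

print(total_length(["abc", [1, 2]]))   # 5: both elements support len()
print(total_length(["abc", 42]))       # TypeError at run time: int has no len()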
4. Underdeveloped Database Access Layers
Compared to more widely used technologies like JDBC (Java DataBase
Connectivity) and ODBC (Open DataBase Connectivity), Python’s database access layers
are a bit underdeveloped. Consequently, it is less often applied in huge enterprises.
5. Simple
No, we’re not kidding. Python’s simplicity can indeed be a problem. Take my example. I
don’t do Java, I’m more of a Python person. To me, its syntax is so simple that the verbosity
of Java code seems unnecessary.
This was all about the Advantages and Disadvantages of Python Programming Language.
History of Python : -
What do the alphabet and the programming language Python have in common? Right, both
start with ABC. If we are talking about ABC in the Python context, it's clear that the
programming language ABC is meant. ABC is a general-purpose programming language and
programming environment, which had been developed in the Netherlands, Amsterdam, at the
CWI (Centrum Wiskunde & Informatica). The greatest achievement of ABC was to influence the design of Python. Python was conceptualized in the late 1980s. Guido van Rossum worked at that time on a project at the CWI called Amoeba, a distributed operating system. In an interview with Bill Venners, Guido van Rossum said: "In the early 1980s, I worked as an implementer on a team building a language called ABC at Centrum voor Wiskunde en Informatica (CWI). I don't know how well people know ABC's influence on Python. I try to mention ABC's influence because I'm indebted to everything I learned during that project and to the people who worked on it." Later on in the same interview, Guido van Rossum continued: "I remembered all my experience and some of my frustration with ABC. I
decided to try to design a simple scripting language that possessed some of ABC's better
properties, but without its problems. So I started typing. I created a simple virtual machine, a
simple parser, and a simple runtime. I made my own version of the various ABC parts that I
liked. I created a basic syntax, used indentation for statement grouping instead of curly
braces or begin-end blocks, and developed a small number of powerful data types: a hash
table (or dictionary, as we call it), a list, strings, and numbers."
What is Machine Learning : -
Before we take a look at the details of various machine learning methods, let's start by
looking at what machine learning is, and what it isn't. Machine learning is often categorized
as a subfield of artificial intelligence, but I find that categorization can often be misleading at
first brush. The study of machine learning certainly arose from research in this context, but
in the data science application of machine learning methods, it's more helpful to think of
machine learning as a means of building models of data.
Fundamentally, machine learning involves building mathematical models to help understand
data. "Learning" enters the fray when we give these models tunable parameters that can be
adapted to observed data; in this way the program can be considered to be "learning" from
the data. Once these models have been fit to previously seen data, they can be used to predict
and understand aspects of newly observed data. I'll leave to the reader the more philosophical
digression regarding the extent to which this type of mathematical, model-based "learning" is
similar to the "learning" exhibited by the human brain.Understanding the problem setting in
machine learning is essential to using these tools effectively, and so we will start with some
broad categorizations of the types of approaches we'll discuss here.
Categories Of Machine Leaning :-
At the most fundamental level, machine learning can be categorized into two main types:
supervised learning and unsupervised learning.
Supervised learning involves somehow modeling the relationship between measured features
of data and some label associated with the data; once this model is determined, it can be used
to apply labels to new, unknown data. This is further subdivided into classification tasks
and regression tasks: in classification, the labels are discrete categories, while in regression,
the labels are continuous quantities. We will see examples of both types of supervised
learning in the following section.
Unsupervised learning involves modeling the features of a dataset without reference to any
label, and is often described as "letting the dataset speak for itself." These models include
tasks such as clustering and dimensionality reduction. Clustering algorithms identify distinct
groups of data, while dimensionality reduction algorithms search for more succinct
representations of the data. We will see examples of both types of unsupervised learning in
the following section.
Need for Machine Learning
Human beings are, at this moment, the most intelligent and advanced species on earth because they can think, evaluate and solve complex problems. AI, on the other hand, is still in its initial stage and has not surpassed human intelligence in many aspects. The question, then, is why we need machines to learn. The most suitable reason for doing this is “to make decisions, based on data, with efficiency and scale”.
Lately, organizations are investing heavily in newer technologies like Artificial Intelligence,
Machine Learning and Deep Learning to get the key information from data to perform
several real-world tasks and solve problems. We can call it data-driven decisions taken by
machines, particularly to automate the process. These data-driven decisions can be used,
instead of using programming logic, in problems that cannot be programmed inherently. The fact is that we cannot do without human intelligence, but the other aspect is that we all need
to solve real-world problems with efficiency at a huge scale. That is why the need for
machine learning arises.
Challenges in Machines Learning :-
While machine learning is rapidly evolving, making significant strides in cybersecurity and autonomous cars, this segment of AI as a whole still has a long way to go. The reason is that ML has not yet been able to overcome a number of challenges. The challenges that ML is facing currently are −
Quality of data − Having good-quality data for ML algorithms is one of the biggest
challenges. Use of low-quality data leads to the problems related to data preprocessing and
feature extraction.
Time-Consuming task − Another challenge faced by ML models is the consumption of time
especially for data acquisition, feature extraction and retrieval.
Lack of specialist persons − As ML technology is still in its infancy stage, availability of
expert resources is a tough job.
No clear objective for formulating business problems − Having no clear objective and
well-defined goal for business problems is another key challenge for ML because this
technology is not that mature yet.
Issue of overfitting & underfitting − If the model is overfitting or underfitting, it cannot be
represented well for the problem.
Curse of dimensionality − Another challenge ML model faces is too many features of data
points. This can be a real hindrance.
Difficulty in deployment − Complexity of the ML model makes it quite difficult to be
deployed in real life.
Applications of Machines Learning :-
Machine Learning is the most rapidly growing technology and according to researchers we
are in the golden year of AI and ML. It is used to solve many real-world complex problems
which cannot be solved with traditional approach. Following are some real-world applications
of ML −
 Emotion analysis
 Sentiment analysis
 Error detection and prevention
 Weather forecasting and prediction
 Stock market analysis and forecasting
 Speech synthesis
 Speech recognition
 Customer segmentation
 Object recognition
 Fraud detection
 Fraud prevention
 Recommendation of products to customer in online shopping
How to Start Learning Machine Learning?
Arthur Samuel coined the term “Machine Learning” in 1959 and defined it as a “Field of
study that gives computers the capability to learn without being explicitly
programmed”.
And that was the beginning of Machine Learning! In modern times, Machine Learning is one
of the most popular (if not the most!) career choices. According to Indeed, Machine Learning
Engineer Is The Best Job of 2019 with a 344% growth and an average base salary
of $146,085 per year.
But there is still a lot of doubt about what exactly is Machine Learning and how to start
learning it? So this article deals with the Basics of Machine Learning and also the path you
can follow to eventually become a full-fledged Machine Learning Engineer. Now let’s get
started!!!
How to start learning ML?
This is a rough roadmap you can follow on your way to becoming an insanely talented
Machine Learning Engineer. Of course, you can always modify the steps according to your
needs to reach your desired end-goal!
Step 1 – Understand the Prerequisites
In case you are a genius, you could start ML directly but normally, there are some
prerequisites that you need to know which include Linear Algebra, Multivariate Calculus,
Statistics, and Python. And if you don’t know these, never fear! You don’t need a Ph.D.
degree in these topics to get started but you do need a basic understanding.
(a) Learn Linear Algebra and Multivariate Calculus
Both Linear Algebra and Multivariate Calculus are important in Machine Learning. However,
the extent to which you need them depends on your role as a data scientist. If you are more
focused on application heavy machine learning, then you will not be that heavily focused on
maths as there are many common libraries available. But if you want to focus on R&D in
Machine Learning, then mastery of Linear Algebra and Multivariate Calculus is very
important as you will have to implement many ML algorithms from scratch.
(b) Learn Statistics
Data plays a huge role in Machine Learning. In fact, around 80% of your time as an ML
expert will be spent collecting and cleaning data. And statistics is a field that handles the
collection, analysis, and presentation of data. So it is no surprise that you need to learn it!!!
Some of the key concepts in statistics that are important are Statistical Significance,
Probability Distributions, Hypothesis Testing, Regression, etc. Bayesian thinking is also a very important part of ML, which deals with various concepts like Conditional
Probability, Priors, and Posteriors, Maximum Likelihood, etc.
(c) Learn Python
Some people prefer to skip Linear Algebra, Multivariate Calculus and Statistics and learn
them as they go along with trial and error. But the one thing that you absolutely cannot skip
is Python! While there are other languages you can use for Machine Learning like R, Scala,
etc. Python is currently the most popular language for ML. In fact, there are many Python
libraries that are specifically useful for Artificial Intelligence and Machine Learning such
as Keras, TensorFlow, Scikit-learn, etc.
So if you want to learn ML, it’s best if you learn Python! You can do that using various
online resources and courses such as Fork Python available Free on GeeksforGeeks.
Step 2 – Learn Various ML Concepts
Now that you are done with the prerequisites, you can move on to actually learning ML
(Which is the fun part!!!) It’s best to start with the basics and then move on to the more
complicated stuff. Some of the basic concepts in ML are:
(a) Terminologies of Machine Learning
 Model – A model is a specific representation learned from data by applying some machine
learning algorithm. A model is also called a hypothesis.
 Feature – A feature is an individual measurable property of the data. A set of numeric
features can be conveniently described by a feature vector. Feature vectors are fed as input to
the model. For example, in order to predict a fruit, there may be features like color, smell,
taste, etc.
 Target (Label) – A target variable or label is the value to be predicted by our model. For the
fruit example discussed in the feature section, the label with each set of input would be the
name of the fruit like apple, orange, banana, etc.
 Training – The idea is to give a set of inputs (features) and its expected outputs (labels), so
after training, we will have a model (hypothesis) that will then map new data to one of the
categories trained on.
 Prediction – Once our model is ready, it can be fed a set of inputs to which it will provide a
predicted output(label).
(b) Types of Machine Learning
 Supervised Learning – This involves learning from a training dataset with labeled data using
classification and regression models. This learning process continues until the required level of
performance is achieved.
 Unsupervised Learning – This involves using unlabelled data and then finding the underlying
structure in the data in order to learn more and more about the data itself using factor and
cluster analysis models.
 Semi-supervised Learning – This involves using unlabelled data like Unsupervised Learning
with a small amount of labeled data. Using labeled data vastly increases the learning accuracy
and is also more cost-effective than Supervised Learning.
 Reinforcement Learning – This involves learning optimal actions through trial and error. So
the next action is decided by learning behaviors that are based on the current state and that will
maximize the reward in the future.
Advantages of Machine learning :-
1. Easily identifies trends and patterns -
Machine Learning can review large volumes of data and discover specific trends and patterns
that would not be apparent to humans. For instance, for an e-commerce website like Amazon, it
serves to understand the browsing behaviors and purchase histories of its users to help cater to
the right products, deals, and reminders relevant to them. It uses the results to reveal relevant
advertisements to them.
2. No human intervention needed (automation)
With ML, you don’t need to babysit your project every step of the way. Since it means giving
machines the ability to learn, it lets them make predictions and also improve the algorithms on
their own. A common example of this is anti-virus software, which learns to filter new threats as
they are recognized. ML is also good at recognizing spam.
3. Continuous Improvement
As ML algorithms gain experience, they keep improving in accuracy and efficiency. This lets
them make better decisions. Say you need to make a weather forecast model. As the amount of
data you have keeps growing, your algorithms learn to make more accurate predictions faster.
4. Handling multi-dimensional and multi-variety data
Machine Learning algorithms are good at handling data that are multi-dimensional and multi-
variety, and they can do this in dynamic or uncertain environments.
5. Wide Applications
You could be an e-tailer or a healthcare provider and make ML work for you. Where it does
apply, it holds the capability to help deliver a much more personal experience to customers
while also targeting the right customers.
Disadvantages of Machine Learning :-
1. Data Acquisition
Machine Learning requires massive data sets to train on, and these should be
inclusive/unbiased, and of good quality. There can also be times where they must wait for new
data to be generated.
2. Time and Resources
ML needs enough time to let the algorithms learn and develop enough to fulfill their purpose
with a considerable amount of accuracy and relevancy. It also needs massive resources to
function. This can mean additional requirements of computer power for you.
3. Interpretation of Results
Another major challenge is the ability to accurately interpret results generated by the
algorithms. You must also carefully choose the algorithms for your purpose.
4. High error-susceptibility
Machine Learning is autonomous but highly susceptible to errors. Suppose you train an
algorithm with data sets small enough to not be inclusive. You end up with biased predictions
coming from a biased training set. This leads to irrelevant advertisements being displayed to
customers. In the case of ML, such blunders can set off a chain of errors that can go undetected
for long periods of time. And when they do get noticed, it takes quite some time to recognize
the source of the issue, and even longer to correct it.
Python Development Steps : -
Guido Van Rossum published the first version of Python code (version 0.9.0) at alt.sources in
February 1991. This release already included exception handling, functions, and the core data types list, dict, str and others. It was also object-oriented and had a module system. Python version 1.0 was released in January 1994. The major new features included in this release were the functional programming tools lambda, map, filter and reduce, which Guido van Rossum never liked. Six and a half years later, in October 2000, Python 2.0 was
introduced. This release included list comprehensions, a full garbage collector and support for Unicode. Python flourished for another 8 years in the 2.x versions before the next major release, Python 3.0 (also known as "Python 3000" and "Py3K"), was released. Python 3 is not backwards compatible with Python 2.x. The emphasis in Python 3 was on the removal of duplicate programming constructs and modules, thus fulfilling or coming close to fulfilling the 13th law of the Zen of Python: "There should be one -- and preferably only one -- obvious way to do it." Some changes in Python 3.0:
 Print is now a function
 Views and iterators instead of lists
 The rules for ordering comparisons have been simplified. E.g. a heterogeneous list cannot be
sorted, because all the elements of a list must be comparable to each other.
 There is only one integer type left, i.e. int. long is int as well.
 The division of two integers returns a float instead of an integer; "//" can be used to have the "old" behaviour (see the example after this list).
 Text Vs. Data Instead Of Unicode Vs. 8-bit
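For example, the division and integer changes listed above behave as follows in Python 3:

print(7 / 2)           # 3.5 -> true division now returns a float
print(7 // 2)          # 3   -> "//" keeps the old floor-division behaviour
print(type(10 ** 20))  # <class 'int'> -> a single integer type also covers long values
print("Hello")         # print is now a function, not a statement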
Python
Python is an interpreted high-level programming language for general-purpose
programming. Created by Guido van Rossum and first released in 1991, Python has a design
philosophy that emphasizes code readability, notably using significant whitespace.
Python features a dynamic type system and automatic memory management. It supports
multiple programming paradigms, including object-oriented, imperative, functional and
procedural, and has a large and comprehensive standard library.
 Python is Interpreted − Python is processed at runtime by the interpreter. You do not need to
compile your program before executing it. This is similar to PERL and PHP.
 Python is Interactive − you can actually sit at a Python prompt and interact with the
interpreter directly to write your programs.
Python also acknowledges that speed of development is important. Readable and terse
code is part of this, and so is access to powerful constructs that avoid tedious repetition of
code. Maintainability also ties into this. It may be an all but useless metric, but it does say
something about how much code you have to scan, read and/or understand to
troubleshoot problems or tweak behaviors. This speed of development, the ease with
which a programmer of other languages can pick up basic Python skills and the huge
standard library is key to another area where Python excels. All its tools have been quick to
implement, saved a lot of time, and several of them have later been patched and updated
by people with no Python background - without breaking.
Modules Used in Project :-
Tensorflow
TensorFlow is a free and open-source software library for dataflow and differentiable
programming across a range of tasks. It is a symbolic math library, and is also used
for machine learning applications such as neural networks. It is used for both research and
production at Google.
TensorFlow was developed by the Google Brain team for internal Google use. It was
released under the Apache 2.0 open-source license on November 9, 2015.
Numpy
Numpy is a general-purpose array-processing package. It provides a high-performance
multidimensional array object, and tools for working with these arrays.
It is the fundamental package for scientific computing with Python. It contains various
features including these important ones:
 A powerful N-dimensional array object
 Sophisticated (broadcasting) functions
 Tools for integrating C/C++ and Fortran code
 Useful linear algebra, Fourier transform, and random number capabilities
Besides its obvious scientific uses, Numpy can also be used as an efficient multi-dimensional
container of generic data. Arbitrary data-types can be defined using Numpy which allows
Numpy to seamlessly and speedily integrate with a wide variety of databases.
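A short example of NumPy's N-dimensional arrays, broadcasting and linear-algebra helpers:

import numpy as np

a = np.arange(6).reshape(2, 3)    # 2x3 array
b = np.array([10, 20, 30])
print(a + b)                      # broadcasting adds b to every row of a
print(np.linalg.norm(a))          # linear-algebra routine from numpy.linalg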
Pandas
Pandas is an open-source Python library providing high-performance data manipulation and analysis tools built on its powerful data structures. Python had mostly been used for data munging and preparation, and contributed very little to data analysis itself; Pandas solved this problem. Using Pandas, we can accomplish the five typical steps in the processing and analysis of data, regardless of the origin of the data: load, prepare, manipulate, model, and analyze. Python with Pandas is used in a wide range of academic and commercial domains, including finance, economics, statistics and analytics.
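A short Pandas example of the load/prepare/analyze steps (the file names and columns are illustrative only):

import pandas as pd

df = pd.DataFrame({"video": ["a.mp4", "b.mp4", "c.mp4"],
                   "label": ["real", "fake", "fake"]})
print(df["label"].value_counts())   # quick class-balance check
print(df[df["label"] == "fake"])    # filter the manipulated samples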
Matplotlib
Matplotlib is a Python 2D plotting library which produces publication quality figures in a
variety of hardcopy formats and interactive environments across platforms. Matplotlib can
be used in Python scripts, the Python and IPython shells, the Jupyter Notebook, web
application servers, and four graphical user interface toolkits. Matplotlib tries to make easy
things easy and hard things possible. You can generate plots, histograms, power spectra, bar
charts, error charts, scatter plots, etc., with just a few lines of code. For examples, see
the sample plots and thumbnail gallery.
For simple plotting the pyplot module provides a MATLAB-like interface, particularly when
combined with IPython. For the power user, you have full control of line styles, font
properties, axes properties, etc, via an object oriented interface or via a set of functions
familiar to MATLAB users.
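A few lines of Matplotlib are enough for a simple chart; the counts below are illustrative, not the project's real numbers.

import matplotlib.pyplot as plt

plt.bar(["real", "fake"], [3000, 3000])     # illustrative counts only
plt.ylabel("number of videos")
plt.title("Class balance of the merged dataset")
plt.show()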
Scikit-learn
Scikit-learn provides a range of supervised and unsupervised learning algorithms via a
consistent interface in Python. It is licensed under a permissive simplified BSD license and is
available in many Linux distributions, which encourages academic and commercial use.
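A short sketch of that consistent fit/predict interface, using only synthetic data generated by scikit-learn itself:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Synthetic two-class data for demonstration
    X, y = make_classification(n_samples=200, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)   # supervised estimator
    print(accuracy_score(y_test, clf.predict(X_test)))              # evaluate on held-out data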
Python
Python is an interpreted high-level programming language for general-purpose
programming. Created by Guido van Rossum and first released in 1991, Python has a design
philosophy that emphasizes code readability, notably using significant whitespace.
Python features a dynamic type system and automatic memory management. It supports
multiple programming paradigms, including object-oriented, imperative, functional and
procedural, and has a large and comprehensive standard library.
• Python is Interpreted − Python is processed at runtime by the interpreter. You do not need to
compile your program before executing it. This is similar to Perl and PHP.
• Python is Interactive − you can actually sit at a Python prompt and interact with the
interpreter directly to write your programs (see the short session below).
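For example, a throwaway interactive session might look like this (the values are made up purely for illustration):

    >>> frames_per_second = 30
    >>> frames_per_second * 10        # frames in a 10-second clip
    300
    >>> print("Python is interactive")
    Python is interactive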
Install Python Step-by-Step in Windows and Mac :
Python, a versatile programming language, doesn't come pre-installed on your computer.
Python was first released in the year 1991, and today it remains a very popular high-level
programming language. Its design philosophy emphasizes code readability, notably through its
use of significant whitespace.
The object-oriented approach and language constructs provided by Python enable
programmers to write both clear and logical code for projects. This software does not come
pre-packaged with Windows.
How to Install Python on Windows and Mac :
There have been several updates to Python over the years. The question is: how do you install
Python? It might be confusing for a beginner who wants to start learning Python, but this
tutorial will answer that question. At the time of writing, the latest version of Python is 3.7.4,
i.e. Python 3.
Note: Python 3.7.4 cannot be used on Windows XP or earlier.
Before you start with the installation process, you first need to know your system requirements.
Based on your system type, i.e. operating system and processor, you must download the
matching Python version. The system used here is a Windows 64-bit operating system, so the
steps below install Python 3.7.4 on a Windows device. The steps on how to install Python on
Windows 10, 8 and 7 are divided into 4 parts to help you understand better.
Download the Correct version into the system
Step 1: Go to the official Python site using Google Chrome or any other web browser, or click
on the following link: https://www.python.org
Now, check for the latest and the correct version for your operating system.
Step 2: Click on the Download Tab.
Step 3: You can either select the yellow Download Python 3.7.4 button, or scroll further down
and click on the download link for your particular version. Here, we are downloading the most
recent Python version for Windows, 3.7.4.
Step 4: Scroll down the page until you find the Files option.
Step 5: Here you see a different version of python along with the operating system.
• To download 32-bit Python for Windows, you can select any one of the three options:
Windows x86 embeddable zip file, Windows x86 executable installer or Windows x86
web-based installer.
• To download 64-bit Python for Windows, you can select any one of the three options:
Windows x86-64 embeddable zip file, Windows x86-64 executable installer or Windows
x86-64 web-based installer.
Here we will use the Windows x86-64 web-based installer. This completes the first part,
choosing which version of Python to download. Now we move ahead with the second part of
installing Python, i.e. the installation itself.
Note: To know the changes or updates that are made in the version you can click on the Release
Note Option.
Installation of Python
Step 1: Go to your Downloads folder and open the downloaded Python installer to carry out the
installation process.
Step 2: Before you click on Install Now, make sure to put a tick on Add Python 3.7 to PATH.
Step 3: Click on Install Now. After the installation is successful, click on Close.
With the above three steps, you have successfully and correctly installed Python. Now it is time
to verify the installation.
Note: The installation process might take a couple of minutes.
Verify the Python Installation
Step 1: Click on Start
Step 2: In the Windows Run Command, type “cmd”.
Step 3: Open the Command prompt option.
Step 4: Let us test whether Python is correctly installed. Type python -V and press Enter.
Step 5: You will see the installed version, e.g. Python 3.7.4.
Note: If you have an earlier version of Python already installed, you must first uninstall the
earlier version and then install the new one.
Check how the Python IDLE works
Step 1: Click on Start
Step 2: In the Windows Run command, type “python idle”.
Step 3: Click on IDLE (Python 3.7 64-bit) and launch the program
Step 4: To go ahead with working in IDLE you must first save the file. Click on File > Click
on Save
Step 5: Name the file and set the save-as type to Python files, then click on SAVE. Here the
file is named Hey World.
Step 6: Now, for example, enter print("Hey World") in the file and run it to check that IDLE
works (a minimal sketch follows).
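The saved file (here hypothetically named Hey World.py) could contain a single statement; running it with Run > Run Module prints the text in the IDLE shell:

    # Hey World.py - a one-line script used only to verify that IDLE runs code
    print("Hey World")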
6.SYSTEM TEST
The purpose of testing is to discover errors. Testing is the process of trying to discover every
conceivable fault or weakness in a work product. It provides a way to check the functionality of
components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising
software with the intent of ensuring that the software system meets its requirements and user
expectations and does not fail in an unacceptable manner. There are various types of tests; each
test type addresses a specific testing requirement.
TYPES OF TESTS
Unit testing
Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly, and that program inputs produce valid outputs. All
decision branches and internal code flow should be validated. It is the testing of individual
software units of the application; it is done after the completion of an individual unit and before
integration. This is structural testing that relies on knowledge of the unit's construction and is
invasive. Unit tests perform basic tests at component level and test a specific business process,
application, and/or system configuration. Unit tests ensure that each unique path of a business
process performs accurately to the documented specifications and contains clearly defined inputs
and expected results.
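As an illustration only, a unit test for one small function of such a system might look like the following; extract_frames and its behaviour are hypothetical stand-ins, not this project's actual code:

    import unittest

    def extract_frames(frame_count, limit=100):
        """Hypothetical helper: return the indices of frames kept for training."""
        return list(range(min(frame_count, limit)))

    class ExtractFramesTest(unittest.TestCase):
        def test_short_video_keeps_all_frames(self):
            self.assertEqual(len(extract_frames(40)), 40)

        def test_long_video_is_capped_at_limit(self):
            self.assertEqual(len(extract_frames(300)), 100)

    if __name__ == "__main__":
        unittest.main()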
Integration testing
Integration tests are designed to test integrated software components to
determine whether they actually run as one program. Testing is event driven and is more
concerned with the basic outcome of screens or fields. Integration tests demonstrate that although
the components were individually satisfactory, as shown by successful unit testing, the
combination of components is correct and consistent. Integration testing is specifically aimed at
exposing the problems that arise from the combination of components.
Functional test
Functional tests provide systematic demonstrations that functions tested are
available as specified by the business and technical requirements, system documentation, and
user manuals.
Functional testing is centered on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused on requirements, key
functions, or special test cases. In addition, systematic coverage pertaining to identifying business
process flows, data fields, predefined processes, and successive processes must be considered for
testing. Before functional testing is complete, additional tests are identified and the effective
value of current tests is determined.
System Test
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An example of
system testing is the configuration oriented system integration test. System testing is based on
process descriptions and flows, emphasizing pre-driven process links and integration points.
White Box Testing
White Box Testing is a testing in which the software tester has
knowledge of the inner workings, structure and language of the software, or at least its purpose.
It is used to test areas that cannot be reached from a black box level.
Black Box Testing
Black Box Testing is testing the software without any knowledge of the inner
workings, structure or language of the module being tested. Black box tests, as most other kinds
of tests, must be written from a definitive source document, such as a specification or
requirements document. It is a testing in which the software under test is treated as a black box:
you cannot “see” into it. The test provides inputs and responds to outputs without considering
how the software works.
Unit Testing
Unit testing is usually conducted as part of a combined code and unit test phase
of the software lifecycle, although it is not uncommon for coding and unit testing to be
conducted as two distinct phases.
Test strategy and approach
Field testing will be performed manually and functional tests will be written in
detail.
Test objectives
• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screen, messages and responses must not be delayed.
Features to be tested
• Verify that the entries are of the correct format.
• No duplicate entries should be allowed.
• All links should take the user to the correct page.
Integration Testing
Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused by interface
defects.
The task of the integration test is to check that components or software applications, e.g.
components in a software system or – one step up – software applications at the company level –
interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant participation
by the end user. It also ensures that the system meets the functional requirements.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
Test case 1:
Test case for Login form:
FUNCTION: LOGIN
EXPECTED RESULTS: Should validate the user and check their existence in the database.
ACTUAL RESULTS: Validates the user and checks the user against the database.
LOW PRIORITY: No
HIGH PRIORITY: Yes
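A sketch of how this login test case could be automated; validate_user and the in-memory "database" below are hypothetical placeholders for the real login logic:

    # Hypothetical automation of the login test case
    USERS = {"alice": "secret123"}            # stand-in for the user table in the database

    def validate_user(username, password):
        """Return True only if the user exists and the password matches."""
        return USERS.get(username) == password

    assert validate_user("alice", "secret123") is True    # expected: existing user validated
    assert validate_user("bob", "secret123") is False     # expected: unknown user rejected
    assert validate_user("alice", "wrong") is False       # expected: wrong password rejected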
Test case 2:
Test case for User Registration form:
FUNCTION: USER REGISTRATION
EXPECTED RESULTS: Should check whether all the fields are filled by the user and save the
user to the database.
ACTUAL RESULTS: Checks through validations whether all the fields are filled by the user
and saves the user.
LOW PRIORITY: No
HIGH PRIORITY: Yes
Test case 3:
Test case for Change Password:
When the old password does not match the new password, an error message is displayed:
“OLD PASSWORD DOES NOT MATCH WITH THE NEW PASSWORD”.
FUNCTION: CHANGE PASSWORD
EXPECTED RESULTS: Should check whether the old password and new password fields are
filled by the user and save the user to the database.
ACTUAL RESULTS: Checks through validations whether all the fields are filled by the user
and saves the user.
LOW PRIORITY: No
HIGH PRIORITY: Yes
Test Cases :
Test Case ID: 01
Test Case Name: Start the Application
Description: Host the application and test whether it starts, making sure the required software is available.
Test Step: If it doesn't start, we cannot run the application.
Actual Result: The application hosts successfully.
Test Case Status: High    Test Priority: High

Test Case ID: 02
Test Case Name: Home Page
Description: Check the deployment environment for properly loading the application.
Test Step: If it doesn't load, we cannot access the application.
Actual Result: The application is running successfully.
Test Case Status: High    Test Priority: High

Test Case ID: 03
Test Case Name: User Mode
Description: Verify the working of the application in freestyle mode.
Test Step: If it doesn't respond, we cannot use the freestyle mode.
Actual Result: The application displays the Freestyle page.
Test Case Status: High    Test Priority: High

Test Case ID: 04
Test Case Name: Data Input
Description: Verify whether the application takes input and updates the database.
Test Step: If it fails to take the input or store it in the database, we cannot proceed further.
Actual Result: The application updates the input to the database.
Test Case Status: High    Test Priority: High
7.SCREENSHOTS
8. CONCLUSION AND FUTURE ENHANCEMENT
CONCLUSION:
Various researchers have created a number of deep-learning approaches for detecting deep
fake images and videos. Due to the extensive availability of photographs and videos in social
media material, deep fakes have grown in popularity. This is especially crucial on social
networking sites that make it simple for users to spread and share such fake information.
Numerous deep learning-based approaches have recently been put forward to deal with this
problem and effectively identify fake images and videos. The first section discussed the
existing programs and technologies that are extensively used to make fake photos and videos.
The second section discussed the different types of techniques that are used to detect deep fake
images and videos, and also provided details of available datasets and evaluation metrics used
for deep fake detection. Although deep learning has done well in detecting deep fakes, the
quality of deep fakes keeps increasing; current deep learning approaches must therefore be
enhanced to recognize fake videos and photos reliably.
We presented a neural network-based method to classify a video as deep fake or real, along
with the confidence of the proposed model. Our approach performs frame-level detection using
a ResNext CNN and video-level classification using an LSTM. The proposed approach can
detect whether a video is deep fake or real based on the parameters listed in the paper. We
believe it will offer very high accuracy on real-time data.
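A condensed sketch of the kind of architecture described above, written here with PyTorch/torchvision purely for illustration (torchvision provides a resnext50_32x4d backbone, and the 2048-unit LSTM with 0.4 dropout follows the description in this document); it is not the project's exact implementation:

    import torch
    import torch.nn as nn
    from torchvision import models

    class DeepFakeDetector(nn.Module):
        def __init__(self, hidden_dim=2048, num_classes=2):
            super().__init__()
            backbone = models.resnext50_32x4d()  # untrained here; load pretrained weights in practice
            # Keep everything except the final fully connected layer -> 2048-d features per frame
            self.features = nn.Sequential(*list(backbone.children())[:-1])
            self.lstm = nn.LSTM(2048, hidden_dim, num_layers=1, batch_first=True)
            self.dropout = nn.Dropout(0.4)
            self.classifier = nn.Linear(hidden_dim, num_classes)   # real vs. deep fake

        def forward(self, x):
            # x: (batch, frames, 3, H, W) -> per-frame ResNeXt features -> LSTM -> class scores
            b, f, c, h, w = x.shape
            feats = self.features(x.view(b * f, c, h, w)).view(b, f, 2048)
            out, _ = self.lstm(feats)
            return self.classifier(self.dropout(out[:, -1]))       # use the last time step

    model = DeepFakeDetector()
    scores = model(torch.randn(1, 10, 3, 224, 224))                # one clip of 10 frames
    print(scores.shape)                                            # torch.Size([1, 2])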
FUTURE ENHANCEMENT
Furthermore, given present deep learning approaches, it is unknown how to
identify the number of layers necessary and the appropriate architecture for deep
fake detection. To improve their capacity to cope with the ubiquitous impacts of
deep fakes and mitigate their consequences, social media companies are integrating
deep fake detection tools.
9. REFERENCES
[1] M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv
preprint arXiv:1411.1784, 2014.
[2] Y. Bengio, P. Simard, and P. Frasconi, “Learning long-term dependencies with gradient
descent is difficult,” IEEE Trans. Neural Netw., vol. 5, no. 2, pp. 157–166, 1994.
[3] I. Goodfellow, Y. Bengio, and A. Courville, Deep learning. MIT press, 2016.
[4] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation,
vol. 9, no. 8, pp. 1735–1780, 1997.
[5] M. Schuster and K. Paliwal, “Bidirectional recurrent neural networks,” IEEE Trans.
Signal Process., vol. 45, pp. 2673–2681, 1997.
[6] J. Hopfield, “Neural networks and physical systems with emergent collective
computational abilities,” Proceedings of the National Academy of Sciences, vol. 79, pp.
2554–2558, 1982.
[7] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun,
Y. Cao, Q. Gao, K. Macherey, et al., “Google’s neural machine translation system:
Bridging the gap between human and machine translation,” arXiv preprint
arXiv:1609.08144, 2016.
[8] L. Nataraj, T. M. Mohammed, B. Manjunath, S. Chandrasekaran, A. Flenner, J.
H. Bappy, and A. K. Roy-Chowdhury, “Detecting gan generated fake images using
co-occurrence matrices,” Electronic Imaging, vol. 2019, no. 5, pp. 532–1, 2019.
[9] B. Zi, M. Chang, J. Chen, X. Ma, and Y.-G. Jiang, “Wilddeepfake: A
challenging real-world dataset for deepfake detection,” in Proceedings of the 28th
ACM international conference on multimedia, 2020, pp. 2382– 2390.
[10] H. A. Khalil and S. A. Maged, “Deepfakes creation and detection using deep
learning,” in 2021 International Mobile, Intelligent, and Ubiquitous Computing
Conference (MIUCC). IEEE, 2021, pp. 1–4.
[11] J. Luttrell, Z. Zhou, Y. Zhang, C. Zhang, P. Gong, B. Yang, and R. Li, “A
deep transfer learning approach to fine-tuning facial recognition models,” in 2018
13th IEEE Conference on Industrial Electronics and Applications (ICIEA). IEEE,
2018, pp. 2671–2676.
[12] S. Tariq, S. Lee, H. Kim, Y. Shin, and S. S. Woo, “Detecting both machine
and human created fake face images in the wild,” in Proceedings of the 2nd
international workshop on multimedia privacy and security, 2018, pp. 81–87.
[13] N.-T. Do, I.-S. Na, and S.-H. Kim, “Forensics face detection from gans using
convolutional neural network,” ISITC, vol. 2018, pp. 376–379, 2018.
[14] X. Xuan, B. Peng, W. Wang, and J. Dong, “On the generalization of gan
image forensics,” in Chinese conference on biometric recognition. Springer, 2019,
pp. 134–141.
[15] P. Yang, R. Ni, and Y. Zhao, “Recapture image forensics based on laplacian
convolutional neural networks,” in International Workshop on Digital
Watermarking. Springer, 2016, pp. 119–128.

More Related Content

PPTX
8. Deepfake Mix PPT using the CNN technique.pptx
DOCX
Deep fake video detection using machine learning.docx
PPTX
698642933-DdocfordownloadEEP-FAKE-PPT.pptx
PPTX
Deep Fake Face Detection using LSTM.pptx
PDF
deepfakefacedetectionusinglstm-240620073631-f1d4d568.pdf
PPTX
DeepFake Seminar.pptx
PDF
A Neural Network Approach to Deep-Fake Video Detection
PDF
DEEPFAKE DETECTION TECHNIQUES: A REVIEW
8. Deepfake Mix PPT using the CNN technique.pptx
Deep fake video detection using machine learning.docx
698642933-DdocfordownloadEEP-FAKE-PPT.pptx
Deep Fake Face Detection using LSTM.pptx
deepfakefacedetectionusinglstm-240620073631-f1d4d568.pdf
DeepFake Seminar.pptx
A Neural Network Approach to Deep-Fake Video Detection
DEEPFAKE DETECTION TECHNIQUES: A REVIEW

Similar to DEEP FAKE IMAGES AND VIDEOS DETECTION USING DEEP LEARNING TECHNIQUES.docx (20)

PDF
Understanding Deepfake Technology.pdf
PPTX
Deep Fake Voice Detection And Extraction Using Deep review 1.pptx
PDF
IRJET - Deepfake Video Detection using Image Processing and Hashing Tools
PPTX
MINI PROJECT 2023 deepfake detection.pptx
PPTX
RESEARCH SEMINAR presentation on fake face.pptx
PDF
Copy-of-Defake-Protecting-Reality-in-a-Digital-Age.pdf
PDF
Unmasking deepfakes: A systematic review of deepfake detection and generation...
PPTX
Unmasking-the-Digital-Deception good.pptx
PDF
Recent Advancements in the Field of Deepfake Detection
PDF
Recent Advancements in the Field of Deepfake Detection
PPTX
Deepfake-Detection-Using-Deep-Learning.pptx
PPTX
AuthenticAI-[GAN Image Detector using ML].pptx
PPT
Seminar_ON_deepfake CHETjkjnemnfdkvjHAN.ppt
PDF
A survey of deepfakes in terms of deep learning and multimedia forensics
PDF
DeepFake Detection: Challenges, Progress and Hands-on Demonstration of Techno...
PDF
Deepfakes Manipulating Reality with AI.pdf
PPTX
smart india hackathon newly updated 2024.pptx
PDF
Artificial intelligence for deepfake detection: systematic review and impact ...
PPTX
t.pptx is a ppt for DDS and software applications
PPTX
presentation.pptxpresentation.pptxpresentation.pptx
Understanding Deepfake Technology.pdf
Deep Fake Voice Detection And Extraction Using Deep review 1.pptx
IRJET - Deepfake Video Detection using Image Processing and Hashing Tools
MINI PROJECT 2023 deepfake detection.pptx
RESEARCH SEMINAR presentation on fake face.pptx
Copy-of-Defake-Protecting-Reality-in-a-Digital-Age.pdf
Unmasking deepfakes: A systematic review of deepfake detection and generation...
Unmasking-the-Digital-Deception good.pptx
Recent Advancements in the Field of Deepfake Detection
Recent Advancements in the Field of Deepfake Detection
Deepfake-Detection-Using-Deep-Learning.pptx
AuthenticAI-[GAN Image Detector using ML].pptx
Seminar_ON_deepfake CHETjkjnemnfdkvjHAN.ppt
A survey of deepfakes in terms of deep learning and multimedia forensics
DeepFake Detection: Challenges, Progress and Hands-on Demonstration of Techno...
Deepfakes Manipulating Reality with AI.pdf
smart india hackathon newly updated 2024.pptx
Artificial intelligence for deepfake detection: systematic review and impact ...
t.pptx is a ppt for DDS and software applications
presentation.pptxpresentation.pptxpresentation.pptx
Ad

More from spub1985 (20)

DOCX
FAKE SOCIAL MEDIA ACCOUNT DETECTION DOCUMENTATION[6][1] (1).docx
DOCX
SECURE FILE TRANSFER USING AES & RSA ALGORITHMS.docx
DOCX
RESUME BUILDER projects using machine learning.docx
DOCX
SMS ENCRYPTION SYSTEM SMS ENCRYPTION SYSTEM
DOCX
IDENTIFYING LINK FAILURES IDENTIFYING LINK FAILURES IDENTIFYING LINK FAILURES
DOCX
JOB RECRUITING BOARD JOB RECRUITING BOARD
DOCX
GRAPHICAL PASSWORD SUFFLELING 2222222222
DOCX
AGRICULTURE MANAGEMENT SYSTEM-1[ DDDDDDD
DOCX
E VOTING intro_merged E VOTING intro_merged E VOTING intro_merged
DOCX
EVENT MANAGEMENT SYSTEM.docx EVENT MANAGEMENT SYSTEM.docx EVENT MANAGEMENT SY...
DOCX
Batch--7 Smart meter for liquid flow monitoring and leakage detection system ...
DOCX
Criminal navigation using email tracking system.docx
DOCX
AGRICUdfdfdfdfdfdfdLTURE MANAGEMENT SYSTEM-1[1].docx
DOCX
online shopping for gadet using python project
DOC
graphical password authentical using machine learning document
DOCX
online evening managemendddt using python
DOCX
Multi Bank Transaction system oooooooooooo.docx
DOCX
online shopping python online shopping project
DOCX
Criminsdsdsdsdsal navigation using email tracking system.docx
DOCX
WEED IDENTIFICATION USING DEEP LEARNING.docx
FAKE SOCIAL MEDIA ACCOUNT DETECTION DOCUMENTATION[6][1] (1).docx
SECURE FILE TRANSFER USING AES & RSA ALGORITHMS.docx
RESUME BUILDER projects using machine learning.docx
SMS ENCRYPTION SYSTEM SMS ENCRYPTION SYSTEM
IDENTIFYING LINK FAILURES IDENTIFYING LINK FAILURES IDENTIFYING LINK FAILURES
JOB RECRUITING BOARD JOB RECRUITING BOARD
GRAPHICAL PASSWORD SUFFLELING 2222222222
AGRICULTURE MANAGEMENT SYSTEM-1[ DDDDDDD
E VOTING intro_merged E VOTING intro_merged E VOTING intro_merged
EVENT MANAGEMENT SYSTEM.docx EVENT MANAGEMENT SYSTEM.docx EVENT MANAGEMENT SY...
Batch--7 Smart meter for liquid flow monitoring and leakage detection system ...
Criminal navigation using email tracking system.docx
AGRICUdfdfdfdfdfdfdLTURE MANAGEMENT SYSTEM-1[1].docx
online shopping for gadet using python project
graphical password authentical using machine learning document
online evening managemendddt using python
Multi Bank Transaction system oooooooooooo.docx
online shopping python online shopping project
Criminsdsdsdsdsal navigation using email tracking system.docx
WEED IDENTIFICATION USING DEEP LEARNING.docx
Ad

Recently uploaded (20)

PPTX
Final Presentation General Medicine 03-08-2024.pptx
PDF
Basic Mud Logging Guide for educational purpose
PDF
Supply Chain Operations Speaking Notes -ICLT Program
PDF
TR - Agricultural Crops Production NC III.pdf
PPTX
Lesson notes of climatology university.
PDF
Insiders guide to clinical Medicine.pdf
PPTX
master seminar digital applications in india
PDF
Chapter 2 Heredity, Prenatal Development, and Birth.pdf
PDF
Black Hat USA 2025 - Micro ICS Summit - ICS/OT Threat Landscape
PDF
2.FourierTransform-ShortQuestionswithAnswers.pdf
PDF
STATICS OF THE RIGID BODIES Hibbelers.pdf
PDF
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
PPTX
Introduction_to_Human_Anatomy_and_Physiology_for_B.Pharm.pptx
PPTX
GDM (1) (1).pptx small presentation for students
PDF
The Lost Whites of Pakistan by Jahanzaib Mughal.pdf
PPTX
PPT- ENG7_QUARTER1_LESSON1_WEEK1. IMAGERY -DESCRIPTIONS pptx.pptx
PDF
BÀI TẬP BỔ TRỢ 4 KỸ NĂNG TIẾNG ANH 9 GLOBAL SUCCESS - CẢ NĂM - BÁM SÁT FORM Đ...
PPTX
Institutional Correction lecture only . . .
PPTX
IMMUNITY IMMUNITY refers to protection against infection, and the immune syst...
PDF
RMMM.pdf make it easy to upload and study
Final Presentation General Medicine 03-08-2024.pptx
Basic Mud Logging Guide for educational purpose
Supply Chain Operations Speaking Notes -ICLT Program
TR - Agricultural Crops Production NC III.pdf
Lesson notes of climatology university.
Insiders guide to clinical Medicine.pdf
master seminar digital applications in india
Chapter 2 Heredity, Prenatal Development, and Birth.pdf
Black Hat USA 2025 - Micro ICS Summit - ICS/OT Threat Landscape
2.FourierTransform-ShortQuestionswithAnswers.pdf
STATICS OF THE RIGID BODIES Hibbelers.pdf
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
Introduction_to_Human_Anatomy_and_Physiology_for_B.Pharm.pptx
GDM (1) (1).pptx small presentation for students
The Lost Whites of Pakistan by Jahanzaib Mughal.pdf
PPT- ENG7_QUARTER1_LESSON1_WEEK1. IMAGERY -DESCRIPTIONS pptx.pptx
BÀI TẬP BỔ TRỢ 4 KỸ NĂNG TIẾNG ANH 9 GLOBAL SUCCESS - CẢ NĂM - BÁM SÁT FORM Đ...
Institutional Correction lecture only . . .
IMMUNITY IMMUNITY refers to protection against infection, and the immune syst...
RMMM.pdf make it easy to upload and study

DEEP FAKE IMAGES AND VIDEOS DETECTION USING DEEP LEARNING TECHNIQUES.docx

  • 1. ABSTRACT Deep fakes are altered, high-quality, realistic videos/images that have lately gained popularity. Many incredible uses of this technology are being investigated. Malicious uses of fake videos, such as fake news, celebrity pornographic videos and financial scams are currently on the rise in the digital world. As a result, celebrities, politicians, and other well-known persons are particularly vulnerable to the Deep fake detection challenge. Numerous research has been undertaken in recent years to understand how deep fakes function and many deep learning-based algorithms to detect deep fake videos or pictures have been presented. This study comprehensively evaluates deep fake production and detection technologies based on several deep learning algorithms. In addition, the limits of current approaches and the availability of databases in society will be discussed. A deep fake detection system that is both precise and automatic. Given the ease with which deep fake videos/images may be generated and shared, the lack of an effective deep fake detection system creates a serious problem for the world. However, there have been various attempts to address this issue, and deep learning-related solutions outperform traditional approaches. These capabilities are used to train a ResNext which learns to categorizeif a video has been concern to manipulation or now no longer and is also capable of hit upon the temporal inconsistencies among frames presented by DF introduction tools. Index Terms—-Deep Fakes, Deep Learning, Fake Generation, Fake Detection, Machine Learning.
  • 2. 1.INTRODUCTION 1.1 MOTIVATION The deep fake generation and detection technologies based on several deep learning algorithms are thoroughly assessed in this paper. Furthermore, the limitations of existing methodologies and the accessibility of databases across society will be examined. An automated technique for deepfake detection that is accurate. The absence of an efficient deep fake detection system poses a major threat to the global community, given the simplicity with which deepfake movies and pictures may be created and distributed. There have been many efforts to solve this problem, however, and deep learning-related solutions work better than conventional methods. 1.2 PROBLEM DEFINITION Due to the huge loss of frame content during video compression, existing deep learning algorithms for image identification cannot effectively detect bogus videos. The severe deterioration of the frame data following video compression prevents the majority of image recognition techniques from being employed for videos. Additionally, videos provide a problem for techniques intended to identify only still fake images since their temporal features vary across sets of frames. 1.3 OBJECTIVE OF PROJECT A framework on which low-level face manipulation defects are expected to further appear as temporal distortions with irregularities between the frames. However, deep learning algorithms frequently employ face photos from the internet that typically display people with wide eyes; fewer pictures of persons with closed eyes may be seen online. As a result, deep fake algorithms are unable to generate fake
  • 3. faces that blink often in the absence of photographs of actual people doing so. Deep fakes, in other words, have far lower blink rates than regular videos. 1.4 SCOPE OF PROJECT Detecting deep fake images and videos using deep learning techniques is an important and evolving area of research and development. The scope of this field is broad, encompassing both technological advancements and the societal implications of deep fake technology. Here are some key aspects to consider within the scope of deep fake detection using deep learning techniques:
  • 4. 2.LITERATURE SURVEY 2.1 Deepfake video detection using recurrent neural networks AUTHORS: D. Guera and E. J. Delp, ABSTRACT: In recent months a machine learning based free software tool has made it easy to create believable face swaps in videos that leaves few traces of manipulation, in what are known as "deepfake" videos. Scenarios where these realistic fake videos are used to create political distress, blackmail someone or fake terrorism events are easily envisioned. This paper proposes a temporal- aware pipeline to automatically detect deepfake videos. Our system uses a convolutional neural network (CNN) to extract frame-level features. These features are then used to train a recurrent neural network (RNN) that learns to classify if a video has been subject to manipulation or not. We evaluate our method against a large set of deepfake videos collected from multiple video websites. We show how our system can achieve competitive results in this task while using a simple architecture. 2.2 Face x-ray for more general face forgery detection AUTHORS: L. Li, J. Bao, T. Zhang, H. Yang, D. Chen, F. Wen, and B. Guo ABSTRACT: In this paper we propose a novel image representation called face X-ray for detecting forgery in face images. The face X-ray of an input face image is a greyscale image that reveals whether the input image can be
  • 5. decomposed into the blending of two images from different sources. It does so by showing the blending boundary for a forged image and the absence of blending for a real image. We observe that most existing face manipulation methods share a common step: blending the altered face into an existing background image. For this reason, face X-ray provides an effective way for detecting forgery generated by most existing face manipulation algorithms. Face X-ray is general in the sense that it only assumes the existence of a blending step and does not rely on any knowledge of the artifacts associated with a specific face manipulation technique. Indeed, the algorithm for computing face X-ray can be trained without fake images generated by any of the state-of-the-art face manipulation methods. Extensive experiments show that face X-ray remains effective when applied to forgery generated by unseen face manipulation techniques, while most existing face forgery detection or deepfake detection algorithms experience a significant performance drop. 2.3 Deepfakestack: A deep ensemblebased learning technique for deepfake detection AUTHORS: M. S. Rana and A. H. Sung. ABSTRACT: Recent advances in technology have made the deep learning (DL) models available for use in a wide variety of novel applications; for example, generative adversarial network (GAN) models are capable of producing hyperrealistic images, speech, and even videos, such as the so-called “Deepfake” produced by GANs with manipulated audio and/or video clips, which are so realistic as to be indistinguishable from the real ones in human perception. Aside from innovative and legitimate applications, there are numerous nefarious or unlawful ways to use such counterfeit contents in
  • 6. propaganda, political campaigns, cybercrimes, extortion, etc. To meet the challenges posed by Deepfake multimedia, we propose a deep ensemble learning technique called DeepfakeStack for detecting such manipulated videos. The proposed technique combines a series of DL based state-of-art classification models and creates an improved composite classifier. Based on our experiments, it is shown that DeepfakeStack outperforms other classifiers by achieving an accuracy of 99.65% and AUROC of 1.0 score in detecting Deepfake. Therefore, our method provides a solid basis for building a Realtime Deepfake detector. 2.4 Detecting deepfake videos using attributionbased confidence metric AUTHORS: S. Fernandes, S. Raj, R. Ewetz, J. S. Pannu, S. K. Jha, E. Ortiz, I. Vintila, and M. Salter, ABSTRACT: Recent advances in generative adversarial networks have made detecting fake videos a challenging task. In this paper, we propose the application of the state-of-theart attribution based confidence (ABC) metric for detecting deepfake videos. The ABC metric does not require access to the training data or training the calibration model on the validation data. The ABC metric can be used to draw inferences even when only the trained model is available. Here, we utilize the ABC metric to characterize whether a video is original or fake. The deep learning model is trained only on original videos. The ABC metric uses the trained model to generate confidence values. For, original videos, the confidence values are greater than 0.94. 2.5 Deepfakestack: A deep ensemblebased learning technique for deepfake detection AUTHORS: M. S. Rana and A. H. Sung,
  • 7. ABSTRACT: Recent advances in technology have made the deep learning (DL) models available for use in a wide variety of novel applications; for example, generative adversarial network (GAN) models are capable of producing hyper-realistic images, speech, and even videos, such as the so-called “Deepfake” produced by GANs with manipulated audio and/or video clips, which are so realistic as to be indistinguishable from the real ones in human perception. Aside from innovative and legitimate applications, there are numerous nefarious or unlawful ways to use such counterfeit contents in propaganda, political campaigns, cybercrimes, extortion, etc. To meet the challenges posed by Deepfake multimedia, we propose a deep ensemble learning technique called DeepfakeStack for detecting such manipulated videos. The proposed technique combines a series of DL based state-of-art classification models and creates an improved composite classifier. Based on our experiments, it is shown that DeepfakeStack outperforms other classifiers by achieving an accuracy of 99.65% and AUROC of 1.0 score in detecting Deepfake. Therefore, our method provides a solid basis for building a Realtime Deepfake detector.
  • 8. 3.SYSTEM ANALYSIS 3.1 EXISTING SYSTEM: Zhao et al. recently introduced a methodology for deep fake detection utilizing the self-consistency of local source features, which are spatially-local, content- independent details of pictures. A CNN model employs a unique representation learning approach to extract these source features, which are represented as down- sampled feature maps referred to as pairwise self-consistency learning. This aims to punish feature vector pairings that correspond to areas in the same picture with poor cosine similarity scores. When dealing with false pictures created by technologies that output the entire image directly and whose source features are constant throughout each point inside each image, it could have a disadvantage. In past months, free deep learning-based software tools have made the creation of credible face exchanges in videos that leave few traces of manipulation, in what are known as "DeepFake"(DF) videos. Manipulations of digital videos has been demonstrated for many years through the good use of visual effects, recent advances in deep learning have led to a drastic increase in the making real looking of fake content and the accessibility in which it can be created. 3.1.1 DISADVANTAGES OF EXISTING SYSTEM:
  • 9.  Since, fake image-based methods use error functions for real or fake image detection. For video, it needs lots of computational power and is hence time-consuming by using such methods.  Some poorly created deep fake videos keep some visual artifacts behind, which can be used for deepfake detection. Thus we can group methods used for classification based on classifiers used i.e either deep or shallow. 3.2PROPOSED SYSTEM: There are many tools available for creating the DeepFakes, but for DeepFakes detection there is hardly any tool available. Our approach for detecting the DF will be a great contribution in avoiding the percolation of the DF over the world wide web. We will be providing a web-based platform for the user for uploading the video and detect if its fake or real. This project is often scaled up from developing a webbased platform to a browser plugin for automatic DF detections. Even big applications like WhatsApp, Facebook can integrate this project with their application for easy pre-detection of DF before sending it to another user. One of the important objectives is to evaluate its performance and acceptability in terms of security, user-friendliness, accuracy and reliability. Our method is focusing on detecting all types of DF like replacement DF, retrenchment DF and interpersonal DF. 3.2.1 ADVANTAGES OF PROPOSED SYSTEM:  Deep learning has shown considerable achievement in the identification of deep fakes.
  • 10.  In order to recognize fake videos & photos properly must be enhanced current deep learning approaches.  It primarily covers classic detection methods as well as deep Learning based methods such as CNN, RNN, and LSTM. 3.3 SYSTEM REQUIREMENTS: HARDWARE REQUIREMENTS: • System : Pentium IV 2.4 GHz. • Hard Disk : 40 GB. • Floppy Drive : 1.44 Mb. • Monitor : 15 VGA Colour. • Mouse : Logitech. • Ram : 512 Mb. SOFTWARE REQUIREMENTS: • Operating System: Windows • Coding Language: Python 3.7
  • 11. 3.4 SYSTEM STUDY FEASIBILITY STUDY The feasibility of the project is analyzed in this phase and business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis the feasibility study of the proposed system is to be carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential. Three key considerations involved in the feasibility analysis are  ECONOMICAL FEASIBILITY  TECHNICAL FEASIBILITY  SOCIAL FEASIBILITY ECONOMICAL FEASIBILITY
  • 12. This study is carried out to check the economic impact that the system will have on the organization. The amount of fund that the company can pour into the research and development of the system is limited. The expenditures must be justified. Thus the developed system as well within the budget and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased. TECHNICAL FEASIBILITY This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not have a high demand on the available technical resources. This will lead to high demands on the available technical resources. This will lead to high demands being placed on the client. The developed system must have a modest requirement, as only minimal or null changes are required for implementing this system. SOCIAL FEASIBILITY The aspect of study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, instead must accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.
  • 13. 4.SYSTEM DESIGN 4.1 SYSTEM ARCHITECTURE: 4.2 DATA FLOW DIAGRAM: 1. The DFD is also called as bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of input data to the
  • 14. system, various processing carried out on this data, and the output data is generated by this system. 2. The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components. These components are the system process, the data used by the process, an external entity that interacts with the system and the information flows in the system. 3. DFD shows how the information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output. 4. DFD is also known as bubble chart. A DFD may be used to represent a system at any level of abstraction. DFD may be partitioned into levels that represent increasing information flow and functional detail. USER: Yes NO Check Unauthorized user Upload Real & Fake Video Dataset Data Preprocess User
  • 15. 4.3 UML DIAGRAMS UML stands for Unified Modeling Language. UML is a standardized general-purpose modeling language in the field of object-oriented software Feature Extraction Build LSTM & ResNext Classifiers End process Model Generation Upload Test Video Build LSTM & ResNext Classifiers
  • 16. engineering. The standard is managed, and was created by, the Object Management Group. The goal is for UML to become a common language for creating models of object oriented computer software. In its current form UML is comprised of two major components: a Meta-model and a notation. In the future, some form of method or process may also be added to; or associated with, UML. The Unified Modeling Language is a standard language for specifying, Visualization, Constructing and documenting the artifacts of software system, as well as for business modeling and other non-software systems. The UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems. The UML is a very important part of developing objects oriented software and the software development process. The UML uses mostly graphical notations to express the design of software projects. GOALS: The Primary goals in the design of the UML are as follows: 1. Provide users a ready-to-use, expressive visual modeling Language so that they can develop and exchange meaningful models. 2. Provide extendibility and specialization mechanisms to extend the core concepts. 3. Be independent of particular programming languages and development process. 4. Provide a formal basis for understanding the modeling language. 5. Encourage the growth of OO tools market. 6. Support higher level development concepts such as collaborations, frameworks, patterns and components.
  • 17. 7. Integrate best practices. Use case diagram: A use case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram defined by and created from a Use-case analysis. Its purpose is to present a graphical overview of the functionality provided by a system in terms of actors, their goals (represented as use cases), and any dependencies between those use cases. The main purpose of a use case diagram is to show what system functions are performed for which actor. Roles of the actors in the system can be depicted.
  • 18. Class diagram: The class diagram is used to refine the use case diagram and define a detailed design of the system. The class diagram classifies the actors defined in the use case diagram into a set of interrelated classes. The relationship or association between the classes can be either an "is-a" or "has-a" relationship. Each class in the class diagram may be capable of providing certain functionalities. These functionalities provided by the class are termed "methods" of the class. Apart from this, each class may have certain "attributes" that uniquely identify the class.
  • 19. Object diagram: The object diagram is a special kind of class diagram. An object is an instance of a class. This essentially means that an object represents the state of a class at a given point of time while the system is running. The object diagram captures the state of different classes in the system and their relationships or associations at a given point of time. State diagram: A state diagram, as the name suggests, represents the different states that objects in the system undergo during their life cycle. Objects in the system change states in response to events. In addition to this, a state diagram also captures the transition of the object's state from an initial state to a final state in response to events affecting the system.
  • 20. Activity diagram: The process flows in the system are captured in the activity diagram. Similar to a state diagram, an activity diagram also consists of activities, actions, transitions, initial and final states, and guard conditions.
  • 21. Sequence diagram: A sequence diagram represents the interaction between different objects in the system. The important aspect of a sequence diagram is that it is time-ordered. This means that the exact sequence of the interactions between the objects is represented step by step. Different objects in the sequence diagram interact with each other by passing "messages".
  • 22. Collaboration diagram: A collaboration diagram groups together the interactions between different objects. The interactions are listed as numbered interactions that help to trace the sequence of the interactions. The collaboration diagram helps to identify all the possible interactions that each object has with other objects.
  • 23. 4.3 IMPLEMENTATION: MODULES: Dataset: To built any machine learning and deep learning model we require a real- world data. First we collected data from different platform like Kaggle's Deepfake Detection challenge, Celeb-DF[8], FaceForensic. Kaggle’s DeepFake detection challenge contains 3000 videos in which 50% data is real and 50% is manipulated data. Celeb-DF contains the videos of some famous celebrities and there are a total of 1000 videos in which 500 are real and 500 are manipulated videos.FaceForensic ++ dataset contains a total of 2000 videos of which 1000 are real and the remaining are manipulated. Further this all three datasets are merged together and passed to the preprocessing of data. Data Preprocessing: Preprocessing of data is a very important part as by doing preprocessing we actually try to get some important information from the data. We eliminate unnecessary data from original data. Splitting the movie into frames is part of the dataset preprocessing. Face detection is then performed, and the frame with the detected face is cropped. To preserve consistency in the number of frames, the mean of the video dataset is determined, and a new processed face cropped
  • 24. dataset containing the frames equal to the mean is constructed. During preprocessing, frames that do not include faces are ignored. Processing a 10- second movie at 30 frames per second, or 300 frames in total, will necessitate a significant amount of CPU power. So, for the sake of experimentation, we propose using only the first 100 frames to train the model. Model:The model is made up of resnext50 32x4d and one LSTM layer. The Data Loader loads the preprocessed face cropped films and divides them into two groups: train and test. In addition, the frames from the processed videos are supplied to the model in tiny batches for training and testing. ResNextCNN for Feature Extraction:We propose using the ResNext CNN classifier for extracting features and reliably recognizing frame-level characteristics instead of rewriting the classifier. Following that, we'll fine-tune the network by adding extra layers as needed and setting a correct learning rate to ensure that the gradient descent of the model is properly converged. LSTM for Sequence Processing: Assume a 2-node neural network with the probabilities of the sequence being part of a deep fake video or an untampered video as input and a sequence of ResNext CNN feature vectors of input frames as output. The main problem that we must solve is the design of a model that can recursively process a sequence in a meaningful way. For this task, we propose using a 2048 LSTM unit with a 0.4 likelihood of dropping out, which is capable of achieving our goal. The LSTM is used to analyze the frames sequentially in order to do a temporal analysis of the video by comparing the frame at ‘t' second with the frame at ‘t' second. Predict: The trained model is given a new video to forecast. A fresh video is also preprocessed to incorporate the trained model's format. The video is divided into
  • 25. frames, then face cropped, and instead of keeping the video locally, the cropped frames are sent immediately to the trained model for identification. ALGORITHMS: Long short-term memory (LSTM): Long short-term memory is an artificial recurrent neural network (RNN) architecture used in the field of deep learning. Unlike standard feedforward neural networks, LSTM has feedback connections. It can not only process single data points (such as images), but also entire sequences of data (such as speech or video). For example, LSTM is applicable to tasks such as unsegmented, connected handwriting recognition, speech recognition[3][4] and anomaly detection in network traffic or IDSs (intrusion detection systems). A common LSTM unit is composed of a cell, an input gate, an output gate and a forget gate. The cell remembers values over arbitrary time intervals and the three gates regulate the flow of information into and out of the cell. LSTM networks are well-suited to classifying, processing and making predictions based on time series data, since there can be lags of unknown duration between important events in a time series. LSTMs were developed to deal with the vanishing gradient problem that can be encountered when training traditional RNNs. Relative insensitivity to gap length is an advantage of LSTM over RNNs, hidden Markov models and other sequence learning methods in numerous applications Training:
  • 26. An RNN using LSTM units can be trained in a supervised fashion, on a set of training sequences, using an optimization algorithm, like gradient descent, combined with backpropagation through time to compute the gradients needed during the optimization process, in order to change each weight of the LSTM network in proportion to the derivative of the error (at the output layer of the LSTM network) with respect to corresponding weight. A problem with using gradient descent for standard RNNs is that error gradients vanish exponentially quickly with the size of the time lag between important events. However, with LSTM units, when error values are back- propagated from the output layer, the error remains in the LSTM unit's cell. This "error carousel" continuously feeds error back to each of the LSTM unit's gates, until they learn to cut off the value. ResNeXt: ResNeXt is a Convolutional Neural Network (CNN) architecture, which is a deep learning model. ResNeXt was developed by Microsoft Research and introduced in 2017 in a paper titled “Aggregated Residual Transformations for Deep Neural Networks.”
  • 27. ResNeXt uses the basic ideas of the ResNet (Residual Network) model, but unlike ResNet, it uses “groups” instead of many smaller paths. These groups contain multiple parallel paths, and each path is used to learn different features. This allows the network to learn more features more effectively, increasing its representational power. The main features and advantages of ResNeXt are: Parallel Paths: ResNeXt is based on the use of multiple parallel paths (or groups) in the same layer. This allows the network to learn a broader and more diverse set of features. Depth and Width: ResNeXt combines two basic methods, both increasing the depth of the network and increasing the width of the network by increasing the
  • 28. number of groups in each layer. This allows using more parameters to achieve better performance. State-of-the-Art Performance: ResNeXt has demonstrated state-of-the-art performance on a variety of tasks. It has achieved successful results especially in image classification, object recognition and other visual processing tasks. Transfer Learning: ResNeXt can be effectively used to adapt pre-trained models to other tasks. This is important for transfer learning applications. ResNeXt is used in many application areas, particularly in deep learning problems working with visual and text data, such as image classification, object detection, face recognition, natural language processing (NLP) and medical image analysis. This model performs particularly well on large data sets and is also a suitable option for transfer learning applications.
  • 29. 5.SOFTWARE ENVIRONMENT What is Python :- Below are some facts about Python. Python is currently the most widely used multi-purpose, high-level programming language. Python allows programming in Object-Oriented and Procedural paradigms. Python programs generally are smaller than other programming languages like Java. Programmers have to type relatively less and indentation requirement of the language, makes them readable all the time. Python language is being used by almost all tech-giant companies like – Google, Amazon, Facebook, Instagram, Dropbox, Uber… etc. The biggest strength of Python is huge collection of standard library which can be used for the following –  Machine Learning  GUI Applications (like Kivy, Tkinter, PyQt etc. )  Web frameworks like Django (used by YouTube, Instagram, Dropbox)  Image processing (like Opencv, Pillow)  Web scraping (like Scrapy, BeautifulSoup, Selenium)  Test frameworks  Multimedia Advantages of Python :- Let’s see how Python dominates over other languages. 1. Extensive Libraries Python downloads with an extensive library and it contain code for various purposes like regular expressions, documentation-generation, unit-testing, web browsers, threading,
  • 30. databases, CGI, email, image manipulation, and more. So, we don’t have to write the complete code for that manually. 2. Extensible As we have seen earlier, Python can be extended to other languages. You can write some of your code in languages like C++ or C. This comes in handy, especially in projects. 3. Embeddable Complimentary to extensibility, Python is embeddable as well. You can put your Python code in your source code of a different language, like C++. This lets us add scripting capabilities to our code in the other language. 4. Improved Productivity The language’s simplicity and extensive libraries render programmers more productive than languages like Java and C++ do. Also, the fact that you need to write less and get more things done. 5. IOT Opportunities Since Python forms the basis of new platforms like Raspberry Pi, it finds the future bright for the Internet Of Things. This is a way to connect the language with the real world. 6. Simple and Easy When working with Java, you may have to create a class to print ‘Hello World’. But in Python, just a print statement will do. It is also quite easy to learn, understand, and code. This is why when people pick up Python, they have a hard time adjusting to other more verbose languages like Java. 7. Readable Because it is not such a verbose language, reading Python is much like reading English. This is the reason why it is so easy to learn, understand, and code. It also does not need
  • 31. curly braces to define blocks, and indentation is mandatory. This further aids the readability of the code. 8. Object-Oriented This language supports both the procedural and object-oriented programming paradigms. While functions help us with code reusability, classes and objects let us model the real world. A class allows the encapsulation of data and functions into one. 9. Free and Open-Source Like we said earlier, Python is freely available. But not only can you download Python for free, but you can also download its source code, make changes to it, and even distribute it. It downloads with an extensive collection of libraries to help you with your tasks. 10. Portable When you code your project in a language like C++, you may need to make some changes to it if you want to run it on another platform. But it isn’t the same with Python. Here, you need to code only once, and you can run it anywhere. This is called Write Once Run Anywhere (WORA). However, you need to be careful enough not to include any system- dependent features. 11. Interpreted Lastly, we will say that it is an interpreted language. Since statements are executed one by one, debugging is easier than in compiled languages. Any doubts till now in the advantages of Python? Mention in the comment section. Advantages of Python Over Other Languages 1. Less Coding Almost all of the tasks done in Python requires less coding when the same task is done in other languages. Python also has an awesome standard library support, so you don’t have to
• 32. search for any third-party libraries to get your job done. This is the reason that many people suggest learning Python to beginners. 2. Affordable Python is free, therefore individuals, small companies, or big organizations can leverage the freely available resources to build applications. Python is popular and widely used, so it gives you better community support. The 2019 GitHub annual survey showed us that Python has overtaken Java in the most popular programming language category. 3. Python is for Everyone Python code can run on any machine, whether it is Linux, Mac, or Windows. Programmers need to learn different languages for different jobs, but with Python you can professionally build web apps, perform data analysis and machine learning, automate things, do web scraping, and also build games and powerful visualizations. It is an all-rounder programming language. Disadvantages of Python So far, we’ve seen why Python is a great choice for your project. But if you choose it, you should be aware of its consequences as well. Let’s now see the downsides of choosing Python over another language. 1. Speed Limitations We have seen that Python code is executed line by line. But since Python is interpreted, it often results in slow execution. This, however, isn’t a problem unless speed is a focal point for the project. In other words, unless high speed is a requirement, the benefits offered by Python are enough to distract us from its speed limitations.
• 33. 2. Weak in Mobile Computing and Browsers While it serves as an excellent server-side language, Python is rarely seen on the client side. Besides that, it is rarely ever used to implement smartphone-based applications; one such application is called Carbonnelle. The reason it is not so famous on the browser side, despite the existence of Brython, is that it isn’t that secure. 3. Design Restrictions As you know, Python is dynamically typed. This means that you don’t need to declare the type of a variable while writing the code. It uses duck typing. But wait, what’s that? Well, it just means that if it looks like a duck, it must be a duck. While this is easy on programmers during coding, it can raise run-time errors. 4. Underdeveloped Database Access Layers Compared to more widely used technologies like JDBC (Java DataBase Connectivity) and ODBC (Open DataBase Connectivity), Python’s database access layers are a bit underdeveloped. Consequently, it is less often applied in huge enterprises. 5. Simple No, we’re not kidding. Python’s simplicity can indeed be a problem. For programmers who mainly use Python, its syntax is so simple that the verbosity of Java code seems unnecessary. This was all about the advantages and disadvantages of the Python programming language. History of Python : - What do the alphabet and the programming language Python have in common? Right, both start with ABC. If we are talking about ABC in the Python context, it's clear that the programming language ABC is meant. ABC is a general-purpose programming language and programming environment, which was developed in Amsterdam, the Netherlands, at the
• 34. CWI (Centrum Wiskunde & Informatica). The greatest achievement of ABC was to influence the design of Python. Python was conceptualized in the late 1980s. Guido van Rossum was working at that time on a project at the CWI called Amoeba, a distributed operating system. In an interview with Bill Venners, Guido van Rossum said: "In the early 1980s, I worked as an implementer on a team building a language called ABC at Centrum voor Wiskunde en Informatica (CWI). I don't know how well people know ABC's influence on Python. I try to mention ABC's influence because I'm indebted to everything I learned during that project and to the people who worked on it." Later on in the same interview, Guido van Rossum continued: "I remembered all my experience and some of my frustration with ABC. I decided to try to design a simple scripting language that possessed some of ABC's better properties, but without its problems. So I started typing. I created a simple virtual machine, a simple parser, and a simple runtime. I made my own version of the various ABC parts that I liked. I created a basic syntax, used indentation for statement grouping instead of curly braces or begin-end blocks, and developed a small number of powerful data types: a hash table (or dictionary, as we call it), a list, strings, and numbers." What is Machine Learning : - Before we take a look at the details of various machine learning methods, let's start by looking at what machine learning is, and what it isn't. Machine learning is often categorized as a subfield of artificial intelligence, but I find that categorization can often be misleading at first brush. The study of machine learning certainly arose from research in this context, but in the data science application of machine learning methods, it's more helpful to think of machine learning as a means of building models of data. Fundamentally, machine learning involves building mathematical models to help understand data. "Learning" enters the fray when we give these models tunable parameters that can be adapted to observed data; in this way the program can be considered to be "learning" from the data. Once these models have been fit to previously seen data, they can be used to predict and understand aspects of newly observed data. I'll leave to the reader the more philosophical digression regarding the extent to which this type of mathematical, model-based "learning" is similar to the "learning" exhibited by the human brain. Understanding the problem setting in
• 35. machine learning is essential to using these tools effectively, and so we will start with some broad categorizations of the types of approaches we'll discuss here. Categories Of Machine Learning :- At the most fundamental level, machine learning can be categorized into two main types: supervised learning and unsupervised learning. Supervised learning involves somehow modeling the relationship between measured features of data and some label associated with the data; once this model is determined, it can be used to apply labels to new, unknown data. This is further subdivided into classification tasks and regression tasks: in classification, the labels are discrete categories, while in regression, the labels are continuous quantities. We will see examples of both types of supervised learning in the following section. Unsupervised learning involves modeling the features of a dataset without reference to any label, and is often described as "letting the dataset speak for itself." These models include tasks such as clustering and dimensionality reduction. Clustering algorithms identify distinct groups of data, while dimensionality reduction algorithms search for more succinct representations of the data. We will see examples of both types of unsupervised learning in the following section. Need for Machine Learning Human beings, at this moment, are the most intelligent and advanced species on earth because they can think, evaluate and solve complex problems. On the other side, AI is still in its initial stage and hasn’t surpassed human intelligence in many aspects. The question, then, is: what is the need to make machines learn? The most suitable reason for doing this is, “to make decisions, based on data, with efficiency and scale”. Lately, organizations have been investing heavily in newer technologies like Artificial Intelligence, Machine Learning and Deep Learning to get the key information from data to perform several real-world tasks and solve problems. We can call it data-driven decisions taken by machines, particularly to automate the process. These data-driven decisions can be used,
• 36. instead of using programming logic, in problems that cannot be programmed inherently. The fact is that we can’t do without human intelligence, but the other aspect is that we all need to solve real-world problems with efficiency at a huge scale. That is why the need for machine learning arises. Challenges in Machine Learning :- While Machine Learning is rapidly evolving, making significant strides with cybersecurity and autonomous cars, this segment of AI as a whole still has a long way to go. The reason behind this is that ML has not been able to overcome a number of challenges. The challenges that ML is facing currently are − Quality of data − Having good-quality data for ML algorithms is one of the biggest challenges. Use of low-quality data leads to problems related to data preprocessing and feature extraction. Time-consuming task − Another challenge faced by ML models is the consumption of time, especially for data acquisition, feature extraction and retrieval. Lack of specialist persons − As ML technology is still in its infancy stage, the availability of expert resources is a tough problem. No clear objective for formulating business problems − Having no clear objective and well-defined goal for business problems is another key challenge for ML, because this technology is not that mature yet. Issue of overfitting & underfitting − If the model is overfitting or underfitting, it cannot represent the problem well (a small illustration follows this list). Curse of dimensionality − Another challenge an ML model faces is having too many features in the data points. This can be a real hindrance. Difficulty in deployment − The complexity of the ML model makes it quite difficult to deploy in real life.
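To make the overfitting and underfitting issue above concrete, the short sketch below fits one very simple and one very flexible model on synthetic data and compares their scores on the training set versus a held-out set; a large gap between the two scores is the usual sign of overfitting. This is only an illustration on assumed, made-up data (it is not part of the project's code) and it presumes NumPy and scikit-learn are installed.

    # Illustration of overfitting vs. underfitting on synthetic data (not project data).
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(60, 1))                      # one input feature
    y = np.sin(X).ravel() + rng.normal(0, 0.3, size=60)       # noisy targets
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for degree in (1, 15):                                    # degree 1 underfits, degree 15 tends to overfit
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(X_tr, y_tr)
        print(f"degree={degree:2d}  train R2={model.score(X_tr, y_tr):.2f}  test R2={model.score(X_te, y_te):.2f}")

A model whose training score is high while its held-out score is much lower is overfitting; one that scores poorly on both is underfitting.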
• 37. Applications of Machine Learning :- Machine Learning is the most rapidly growing technology, and according to researchers we are in the golden era of AI and ML. It is used to solve many real-world complex problems which cannot be solved with a traditional approach. Following are some real-world applications of ML − • Emotion analysis • Sentiment analysis • Error detection and prevention • Weather forecasting and prediction • Stock market analysis and forecasting • Speech synthesis • Speech recognition • Customer segmentation • Object recognition • Fraud detection • Fraud prevention • Recommendation of products to customers in online shopping How to Start Learning Machine Learning? Arthur Samuel coined the term “Machine Learning” in 1959 and defined it as a “Field of study that gives computers the capability to learn without being explicitly programmed”. And that was the beginning of Machine Learning! In modern times, Machine Learning is one of the most popular (if not the most!) career choices. According to Indeed, Machine Learning
  • 38. Engineer Is The Best Job of 2019 with a 344% growth and an average base salary of $146,085 per year. But there is still a lot of doubt about what exactly is Machine Learning and how to start learning it? So this article deals with the Basics of Machine Learning and also the path you can follow to eventually become a full-fledged Machine Learning Engineer. Now let’s get started!!! How to start learning ML? This is a rough roadmap you can follow on your way to becoming an insanely talented Machine Learning Engineer. Of course, you can always modify the steps according to your needs to reach your desired end-goal! Step 1 – Understand the Prerequisites In case you are a genius, you could start ML directly but normally, there are some prerequisites that you need to know which include Linear Algebra, Multivariate Calculus, Statistics, and Python. And if you don’t know these, never fear! You don’t need a Ph.D. degree in these topics to get started but you do need a basic understanding. (a) Learn Linear Algebra and Multivariate Calculus Both Linear Algebra and Multivariate Calculus are important in Machine Learning. However, the extent to which you need them depends on your role as a data scientist. If you are more focused on application heavy machine learning, then you will not be that heavily focused on maths as there are many common libraries available. But if you want to focus on R&D in Machine Learning, then mastery of Linear Algebra and Multivariate Calculus is very important as you will have to implement many ML algorithms from scratch.
• 39. (b) Learn Statistics Data plays a huge role in Machine Learning. In fact, around 80% of your time as an ML expert will be spent collecting and cleaning data. And statistics is a field that handles the collection, analysis, and presentation of data. So it is no surprise that you need to learn it! Some of the key concepts in statistics that are important are Statistical Significance, Probability Distributions, Hypothesis Testing, Regression, etc. Bayesian thinking is also a very important part of ML; it deals with various concepts like Conditional Probability, Priors and Posteriors, Maximum Likelihood, etc. (c) Learn Python Some people prefer to skip Linear Algebra, Multivariate Calculus and Statistics and learn them as they go along with trial and error. But the one thing that you absolutely cannot skip is Python! While there are other languages you can use for Machine Learning, like R, Scala, etc., Python is currently the most popular language for ML. In fact, there are many Python libraries that are specifically useful for Artificial Intelligence and Machine Learning, such as Keras, TensorFlow, Scikit-learn, etc. So if you want to learn ML, it’s best if you learn Python! You can do that using various online resources and courses such as Fork Python, available free on GeeksforGeeks. Step 2 – Learn Various ML Concepts Now that you are done with the prerequisites, you can move on to actually learning ML (which is the fun part!). It’s best to start with the basics and then move on to the more complicated stuff. Some of the basic concepts in ML are: (a) Terminologies of Machine Learning • Model – A model is a specific representation learned from data by applying some machine learning algorithm. A model is also called a hypothesis.
• 40. • Feature – A feature is an individual measurable property of the data. A set of numeric features can be conveniently described by a feature vector. Feature vectors are fed as input to the model. For example, in order to predict a fruit, there may be features like color, smell, taste, etc. • Target (Label) – A target variable or label is the value to be predicted by our model. For the fruit example discussed in the feature section, the label with each set of inputs would be the name of the fruit, like apple, orange, banana, etc. • Training – The idea is to give a set of inputs (features) and its expected outputs (labels), so after training, we will have a model (hypothesis) that will then map new data to one of the categories trained on. • Prediction – Once our model is ready, it can be fed a set of inputs to which it will provide a predicted output (label). (b) Types of Machine Learning • Supervised Learning – This involves learning from a training dataset with labeled data using classification and regression models. This learning process continues until the required level of performance is achieved. • Unsupervised Learning – This involves using unlabelled data and then finding the underlying structure in the data in order to learn more and more about the data itself using factor and cluster analysis models. • Semi-supervised Learning – This involves using unlabelled data, as in Unsupervised Learning, together with a small amount of labeled data. Using labeled data vastly increases the learning accuracy and is also more cost-effective than Supervised Learning. • Reinforcement Learning – This involves learning optimal actions through trial and error. So the next action is decided by learning behaviors that are based on the current state and that will maximize the reward in the future.
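As a small, hypothetical illustration of the terminology above (features, labels/targets, training, prediction) and of the supervised versus unsupervised split, the sketch below uses scikit-learn's bundled iris dataset; none of these names or values come from the project itself.

    # Supervised vs. unsupervised learning on the iris dataset (illustration only).
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.cluster import KMeans

    X, y = load_iris(return_X_y=True)                  # X: feature vectors, y: target labels
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    # Supervised learning: train a model (hypothesis) on labelled data, then predict on unseen data
    clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print("classification accuracy:", clf.score(X_test, y_test))

    # Unsupervised learning: group the same feature vectors without ever looking at the labels
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print("first ten cluster assignments:", clusters[:10])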
• 41. Advantages of Machine Learning :- 1. Easily identifies trends and patterns - Machine Learning can review large volumes of data and discover specific trends and patterns that would not be apparent to humans. For instance, for an e-commerce website like Amazon, it serves to understand the browsing behaviors and purchase histories of its users to help it cater the right products, deals, and reminders to them. It uses the results to reveal relevant advertisements to them. 2. No human intervention needed (automation) With ML, you don’t need to babysit your project every step of the way. Since it means giving machines the ability to learn, it lets them make predictions and also improve the algorithms on their own. A common example of this is antivirus software; it learns to filter new threats as they are recognized. ML is also good at recognizing spam. 3. Continuous Improvement As ML algorithms gain experience, they keep improving in accuracy and efficiency. This lets them make better decisions. Say you need to make a weather forecast model. As the amount of data you have keeps growing, your algorithms learn to make more accurate predictions faster. 4. Handling multi-dimensional and multi-variety data Machine Learning algorithms are good at handling data that are multi-dimensional and multi-variety, and they can do this in dynamic or uncertain environments. 5. Wide Applications You could be an e-tailer or a healthcare provider and make ML work for you. Where it does apply, it holds the capability to help deliver a much more personal experience to customers while also targeting the right customers.
• 42. Disadvantages of Machine Learning :- 1. Data Acquisition Machine Learning requires massive data sets to train on, and these should be inclusive/unbiased and of good quality. There can also be times when you must wait for new data to be generated. 2. Time and Resources ML needs enough time to let the algorithms learn and develop enough to fulfill their purpose with a considerable amount of accuracy and relevancy. It also needs massive resources to function. This can mean additional requirements of computing power for you. 3. Interpretation of Results Another major challenge is the ability to accurately interpret results generated by the algorithms. You must also carefully choose the algorithms for your purpose. 4. High error-susceptibility Machine Learning is autonomous but highly susceptible to errors. Suppose you train an algorithm with data sets small enough not to be inclusive. You end up with biased predictions coming from a biased training set. This leads to irrelevant advertisements being displayed to customers. In the case of ML, such blunders can set off a chain of errors that can go undetected for long periods of time. And when they do get noticed, it takes quite some time to recognize the source of the issue, and even longer to correct it. Python Development Steps : - Guido van Rossum published the first version of Python code (version 0.9.0) at alt.sources in February 1991. This release already included exception handling, functions, and the core data types of list, dict, str and others. It was also object oriented and had a module system. Python version 1.0 was released in January 1994. The major new features included in this
• 43. release were the functional programming tools lambda, map, filter and reduce, which Guido van Rossum never liked. Six and a half years later, in October 2000, Python 2.0 was introduced. This release included list comprehensions, a full garbage collector, and support for Unicode. Python flourished for another 8 years in the versions 2.x before the next major release, Python 3.0 (also known as "Python 3000" and "Py3K"), came out. Python 3 is not backwards compatible with Python 2.x. The emphasis in Python 3 had been on the removal of duplicate programming constructs and modules, thus fulfilling or coming close to fulfilling the 13th aphorism of the Zen of Python: "There should be one -- and preferably only one -- obvious way to do it." Some changes in Python 3.0: • Print is now a function • Views and iterators instead of lists • The rules for ordering comparisons have been simplified. E.g. a heterogeneous list cannot be sorted, because all the elements of a list must be comparable to each other. • There is only one integer type left, i.e. int; long is int as well. • The division of two integers returns a float instead of an integer. "//" can be used to have the "old" behaviour. • Text vs. data instead of Unicode vs. 8-bit Purpose :- In this project, Python serves as the implementation language for the entire deep fake detection pipeline, from preprocessing the video frames to training and evaluating the detection model, with the help of the libraries described below. Python Python is an interpreted high-level programming language for general-purpose programming. Created by Guido van Rossum and first released in 1991, Python has a design philosophy that emphasizes code readability, notably using significant whitespace.
• 44. Python features a dynamic type system and automatic memory management. It supports multiple programming paradigms, including object-oriented, imperative, functional and procedural, and has a large and comprehensive standard library. • Python is Interpreted − Python is processed at runtime by the interpreter. You do not need to compile your program before executing it. This is similar to PERL and PHP. • Python is Interactive − You can actually sit at a Python prompt and interact with the interpreter directly to write your programs. Python also acknowledges that speed of development is important. Readable and terse code is part of this, and so is access to powerful constructs that avoid tedious repetition of code. Maintainability also ties into this. It may be an all but useless metric, but it does say something about how much code you have to scan, read and/or understand to troubleshoot problems or tweak behaviors. This speed of development, the ease with which a programmer of other languages can pick up basic Python skills, and the huge standard library are key to another area where Python excels. All its tools have been quick to implement, saved a lot of time, and several of them have later been patched and updated by people with no Python background - without breaking. Modules Used in Project :- TensorFlow TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library, and is also used for machine learning applications such as neural networks. It is used for both research and production at Google. TensorFlow was developed by the Google Brain team for internal Google use. It was released under the Apache 2.0 open-source license on November 9, 2015. NumPy NumPy is a general-purpose array-processing package. It provides a high-performance multidimensional array object, and tools for working with these arrays.
• 45. It is the fundamental package for scientific computing with Python. It contains various features, including these important ones: • A powerful N-dimensional array object • Sophisticated (broadcasting) functions • Tools for integrating C/C++ and Fortran code • Useful linear algebra, Fourier transform, and random number capabilities Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data types can be defined using NumPy, which allows NumPy to seamlessly and speedily integrate with a wide variety of databases. Pandas Pandas is an open-source Python library providing high-performance data manipulation and analysis tools using its powerful data structures. Python was majorly used for data munging and preparation; it had very little contribution towards data analysis. Pandas solved this problem. Using Pandas, we can accomplish five typical steps in the processing and analysis of data, regardless of the origin of the data: load, prepare, manipulate, model, and analyze. Python with Pandas is used in a wide range of academic and commercial domains, including finance, economics, statistics, analytics, etc. Matplotlib Matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms. Matplotlib can be used in Python scripts, the Python and IPython shells, the Jupyter Notebook, web application servers, and several graphical user interface toolkits. Matplotlib tries to make easy things easy and hard things possible. You can generate plots, histograms, power spectra, bar charts, error charts, scatter plots, etc., with just a few lines of code. For examples, see the sample plots and thumbnail gallery. For simple plotting the pyplot module provides a MATLAB-like interface, particularly when combined with IPython. For the power user, you have full control of line styles, font properties, axes properties, etc., via an object-oriented interface or via a set of functions familiar to MATLAB users.
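A brief sketch of how the three libraries just described are commonly used together is given below; the per-frame "scores" are random sample values invented for the illustration, not outputs of the project's model.

    # NumPy array -> Pandas DataFrame -> Matplotlib plot (sample data only).
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    scores = np.random.default_rng(0).uniform(0.5, 1.0, size=50)    # NumPy: array of made-up per-frame scores
    df = pd.DataFrame({"frame": np.arange(50), "score": scores})    # Pandas: tabular manipulation
    print(df.describe())                                            # quick statistical summary

    plt.plot(df["frame"], df["score"])                              # Matplotlib: simple line plot
    plt.xlabel("frame index")
    plt.ylabel("prediction score")
    plt.title("Per-frame scores (sample data)")
    plt.show()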
• 46. Scikit-learn Scikit-learn provides a range of supervised and unsupervised learning algorithms via a consistent interface in Python. It is licensed under a permissive simplified BSD license and is distributed under many Linux distributions, encouraging academic and commercial use. Install Python Step-by-Step in Windows and Mac : Python, a versatile programming language, doesn’t come pre-installed on your computer devices. Python was first released in the year 1991 and until today it is a very popular high-
• 47. level programming language. Its design philosophy emphasizes code readability with its notable use of significant whitespace. The object-oriented approach and language constructs provided by Python enable programmers to write both clear and logical code for projects. This software does not come pre-packaged with Windows. How to Install Python on Windows and Mac : There have been several updates in the Python version over the years. The question is how to install Python? It might be confusing for a beginner who is willing to start learning Python, but this tutorial will solve your query. At the time of writing, the latest version of Python is 3.7.4; in other words, it is Python 3. Note: Python version 3.7.4 cannot be used on Windows XP or earlier devices. Before you start with the installation process of Python, you first need to know about your system requirements. Based on your system type, i.e. operating system and processor, you must download the correct Python version. My system type is a Windows 64-bit operating system, so the steps below are to install Python version 3.7.4 on a Windows 7 device, or in other words, to install Python 3. The steps on how to install Python on Windows 10, 8 and 7 are divided into 4 parts to help understand better. Download the Correct version into the system Step 1: Go to the official site to download and install Python using Google Chrome or any other web browser, or click on the following link: https://www.python.org
  • 48. Now, check for the latest and the correct version for your operating system. Step 2: Click on the Download Tab. Step 3: You can either select the Download Python for windows 3.7.4 button in Yellow Color or you can scroll further down and click on download with respective to their version. Here, we are downloading the most recent python version for windows 3.7.4
  • 49. Step 4: Scroll down the page until you find the Files option. Step 5: Here you see a different version of python along with the operating system. • To download Windows 32-bit python, you can select any one from the three options: Windows x86 embeddable zip file, Windows x86 executable installer or Windows x86 web- based installer. •To download Windows 64-bit python, you can select any one from the three options: Windows x86-64 embeddable zip file, Windows x86-64 executable installer or Windows x86-64 web- based installer.
  • 50. Here we will install Windows x86-64 web-based installer. Here your first part regarding which version of python is to be downloaded is completed. Now we move ahead with the second part in installing python i.e. Installation Note: To know the changes or updates that are made in the version you can click on the Release Note Option. Installation of Python Step 1: Go to Download and Open the downloaded python version to carry out the installation process. Step 2: Before you click on Install Now, Make sure to put a tick on Add Python 3.7 to PATH.
  • 51. Step 3: Click on Install NOW After the installation is successful. Click on Close. With these above three steps on python installation, you have successfully and correctly installed Python. Now is the time to verify the installation.
• 52. Note: The installation process might take a couple of minutes. Verify the Python Installation Step 1: Click on Start Step 2: In the Windows Run Command, type “cmd”. Step 3: Open the Command prompt option. Step 4: Let us test whether Python is correctly installed. Type python -V and press Enter. Step 5: You will get the answer as 3.7.4
• 53. Note: If you have any of the earlier versions of Python already installed, you must first uninstall the earlier version and then install the new one. Check how the Python IDLE works Step 1: Click on Start Step 2: In the Windows Run command, type “python idle”. Step 3: Click on IDLE (Python 3.7 64-bit) and launch the program. Step 4: To go ahead with working in IDLE you must first save the file. Click on File > Click on Save Step 5: Name the file, and the “save as type” should be Python files. Click on SAVE. Here the file is named Hey World. Step 6: Now, for example, enter a print statement such as print("Hey World") and run the module to see the output.
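The report does not show the exact contents of the saved file, but a minimal guess at what the "Hey World" script from Steps 5 and 6 might contain is:

    # Hey World.py - minimal script to check that IDLE and Python work.
    import sys

    print("Hey World")                                    # the print statement from Step 6
    print("Running on Python", sys.version.split()[0])    # e.g. 3.7.4

Running the module (Run > Run Module, or F5) should print the greeting and the installed Python version in the IDLE shell.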
• 54. 6. SYSTEM TEST The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests. Each test type addresses a specific testing requirement. TYPES OF TESTS Unit testing Unit testing involves the design of test cases that validate that the internal program logic is functioning properly, and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application. It is done after the completion of an individual unit before integration. This is structural testing that relies on knowledge of its construction and is invasive. Unit tests perform basic tests at component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results. Integration testing Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
• 55. Functional test Functional tests provide systematic demonstrations that functions tested are available as specified by the business and technical requirements, system documentation, and user manuals. Functional testing is centered on the following items: Valid Input : identified classes of valid input must be accepted. Invalid Input : identified classes of invalid input must be rejected. Functions : identified functions must be exercised. Output : identified classes of application outputs must be exercised. Systems/Procedures : interfacing systems or procedures must be invoked. Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identifying business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined. System Test System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration-oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points. White Box Testing White Box Testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.
• 56. Black Box Testing Black Box Testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box; you cannot “see” into it. The test provides inputs and responds to outputs without considering how the software works. Unit Testing Unit testing is usually conducted as part of a combined code and unit test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases. Test strategy and approach Field testing will be performed manually and functional tests will be written in detail. Test objectives • All field entries must work properly. • Pages must be activated from the identified link. • The entry screen, messages and responses must not be delayed. Features to be tested • Verify that the entries are of the correct format • No duplicate entries should be allowed • All links should take the user to the correct page. Integration Testing Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects.
• 57. The task of the integration test is to check that components or software applications, e.g. components in a software system or – one step up – software applications at the company level – interact without error. Test Results: All the test cases mentioned above passed successfully. No defects encountered. Acceptance Testing User Acceptance Testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements. Test Results: All the test cases mentioned above passed successfully. No defects encountered. Test case 1: Test case for Login form: FUNCTION: LOGIN EXPECTED RESULTS: Should validate the user and check their existence in the database ACTUAL RESULTS: Validates the user and checks the user against the database LOW PRIORITY: No HIGH PRIORITY: Yes Test case 2: Test case for User Registration form:
• 58. FUNCTION: USER REGISTRATION EXPECTED RESULTS: Should check whether all the fields are filled by the user and save the user to the database. ACTUAL RESULTS: Checks whether all the fields are filled by the user or not through validations and saves the user. LOW PRIORITY: No HIGH PRIORITY: Yes Test case 3: Test case for Change Password: When the old password does not match the new password, this results in displaying an error message: “OLD PASSWORD DOES NOT MATCH WITH THE NEW PASSWORD”. FUNCTION: Change Password EXPECTED RESULTS: Should check whether the old password and new password fields are filled by the user and save the user to the database. ACTUAL RESULTS: Checks whether all the fields are filled by the user or not through validations and saves the user. LOW PRIORITY: No HIGH PRIORITY: Yes
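The login test case above could, for example, be automated with Python's built-in unittest module. The sketch below is only an illustration: validate_login and the in-memory user store are hypothetical stand-ins, since the report does not show the application's real authentication code.

    # Hypothetical automated version of the login test case.
    import unittest

    REGISTERED_USERS = {"alice": "secret123"}        # placeholder for the real user database

    def validate_login(username, password):
        """Return True only if the user exists and the password matches."""
        return REGISTERED_USERS.get(username) == password

    class LoginFormTest(unittest.TestCase):
        def test_valid_user_is_accepted(self):
            self.assertTrue(validate_login("alice", "secret123"))

        def test_unknown_user_is_rejected(self):
            self.assertFalse(validate_login("bob", "secret123"))

    if __name__ == "__main__":
        unittest.main()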
• 59. Test Cases :
Test Case ID: 01 | Test Case Name: Start the Application | Description: Host the application and test if it starts, making sure the required software is available | Test Step: If it doesn't start, we cannot run the application | Actual Result: The application hosts successfully | Priority: High | Status: High
Test Case ID: 02 | Test Case Name: Home Page | Description: Check the deployment environment for properly loading the application | Test Step: If it doesn't load, we cannot access the application | Actual Result: The application is running successfully | Priority: High | Status: High
Test Case ID: 03 | Test Case Name: User Mode | Description: Verify the working of the application in freestyle mode | Test Step: If it doesn't respond, we cannot use the freestyle mode | Actual Result: The application displays the Freestyle Page | Priority: High | Status: High
Test Case ID: 04 | Test Case Name: Data Input | Description: Verify if the application takes input and updates | Test Step: If it fails to take the input or store it in the database, we cannot proceed further | Actual Result: The application updates the input to the application | Priority: High | Status: High
  • 61. 8. CONCLUSION AND FUTURE ENHANCEMENT
• 62. CONCLUSION: Various researchers have created a number of deep-learning approaches for deep fake images and videos. Due to the extensive availability of photographs and videos in social media material, deep fakes have grown in popularity. This is especially crucial on social networking sites, which make it simple for users to spread and share such fake information. Numerous deep learning-based approaches have recently been put forward to deal with this problem and effectively identify fake images and videos. The first section discussed the existing programs and technologies that are extensively used to make fake photos and videos. The second section discussed the different types of techniques that are used to detect deep fake images and videos, and also provided details of the available datasets and evaluation metrics that are used for deep fake detection. Although deep learning has done well in detecting deep fakes, the quality of deep fakes keeps increasing, so current deep learning approaches must be enhanced in order to recognize fake videos and photos properly. We presented a neural network-based method to classify a video as deep fake or real, along with the confidence of the proposed model. Our approach performs frame-level detection using a ResNext CNN and video-level classification using an LSTM. The proposed approach is successful in detecting a video as deep fake or real based on the parameters listed in the paper. We believe that it will offer very high accuracy on real-time data.
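To make the described pipeline concrete, the following is a minimal PyTorch-style sketch of a frame-level ResNeXt feature extractor followed by an LSTM video classifier. It is not the project's released code: the torchvision backbone, tensor shapes and hyperparameters here are assumptions chosen purely for illustration.

    # Sketch: ResNeXt frame features + LSTM temporal classifier (illustrative, assumed shapes).
    import torch
    import torch.nn as nn
    from torchvision import models

    class DeepFakeDetector(nn.Module):
        def __init__(self, hidden_dim=2048, lstm_layers=1, num_classes=2):
            super().__init__()
            backbone = models.resnext50_32x4d()                               # load ImageNet weights here if desired
            self.features = nn.Sequential(*list(backbone.children())[:-1])    # drop fc, keep 2048-d pooled output
            self.lstm = nn.LSTM(2048, hidden_dim, lstm_layers, batch_first=True)
            self.classifier = nn.Linear(hidden_dim, num_classes)

        def forward(self, frames):                                   # frames: (batch, num_frames, 3, 224, 224) face crops
            b, t, c, h, w = frames.shape
            feats = self.features(frames.reshape(b * t, c, h, w))    # per-frame CNN features
            feats = feats.reshape(b, t, -1)                          # regroup into per-video sequences
            _, (h_n, _) = self.lstm(feats)                           # temporal modelling across frames
            return self.classifier(h_n[-1])                          # logits: real vs. fake

    # usage sketch: logits = DeepFakeDetector()(torch.randn(1, 16, 3, 224, 224))

The softmax of these logits gives the confidence score mentioned above, and the LSTM is what allows frame-to-frame (temporal) inconsistencies to influence the final decision.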
• 63. FUTURE ENHANCEMENT Furthermore, given present deep learning approaches, it is unknown how to identify the number of layers necessary and the appropriate architecture for deep fake detection. To improve their capacity to cope with the ubiquitous impacts of deep fakes and mitigate their consequences, social media companies are integrating deep fake detection tools. 9. REFERENCES
• 64. [1] M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv preprint arXiv:1411.1784, 2014. [2] Y. Bengio, P. Simard, and P. Frasconi, “Learning long-term dependencies with gradient descent is difficult,” IEEE Trans. Neural Netw., vol. 5, no. 2, pp. 157–166, 1994. [3] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016. [4] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997. [5] M. Schuster and K. K. Paliwal, “Bidirectional recurrent neural networks,” IEEE Trans. Signal Process., vol. 45, no. 11, pp. 2673–2681, 1997. [6] J. J. Hopfield, “Neural networks and physical systems with emergent collective computational abilities,” Proceedings of the National Academy of Sciences, vol. 79, pp. 2554–2558, 1982. [7] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, et al., “Google’s neural machine translation system: Bridging the gap between human and machine translation,” arXiv preprint arXiv:1609.08144, 2016. [8] L. Nataraj, T. M. Mohammed, B. Manjunath, S. Chandrasekaran, A. Flenner, J. H. Bappy, and A. K. Roy-Chowdhury, “Detecting GAN generated fake images using co-occurrence matrices,” Electronic Imaging, vol. 2019, no. 5, pp. 532–1, 2019. [9] B. Zi, M. Chang, J. Chen, X. Ma, and Y.-G. Jiang, “WildDeepfake: A challenging real-world dataset for deepfake detection,” in Proceedings of the 28th ACM International Conference on Multimedia, 2020, pp. 2382–2390.
• 65. [10] H. A. Khalil and S. A. Maged, “Deepfakes creation and detection using deep learning,” in 2021 International Mobile, Intelligent, and Ubiquitous Computing Conference (MIUCC). IEEE, 2021, pp. 1–4. [11] J. Luttrell, Z. Zhou, Y. Zhang, C. Zhang, P. Gong, B. Yang, and R. Li, “A deep transfer learning approach to fine-tuning facial recognition models,” in 2018 13th IEEE Conference on Industrial Electronics and Applications (ICIEA). IEEE, 2018, pp. 2671–2676. [12] S. Tariq, S. Lee, H. Kim, Y. Shin, and S. S. Woo, “Detecting both machine and human created fake face images in the wild,” in Proceedings of the 2nd International Workshop on Multimedia Privacy and Security, 2018, pp. 81–87. [13] N.-T. Do, I.-S. Na, and S.-H. Kim, “Forensics face detection from GANs using convolutional neural network,” ISITC, vol. 2018, pp. 376–379, 2018. [14] X. Xuan, B. Peng, W. Wang, and J. Dong, “On the generalization of GAN image forensics,” in Chinese Conference on Biometric Recognition. Springer, 2019, pp. 134–141. [15] P. Yang, R. Ni, and Y. Zhao, “Recapture image forensics based on Laplacian convolutional neural networks,” in International Workshop on Digital Watermarking. Springer, 2016, pp. 119–128.