IAES International Journal of Artificial Intelligence (IJ-AI)
Vol. 13, No. 4, December 2024, pp. 3786~3792
ISSN: 2252-8938, DOI: 10.11591/ijai.v13.i4.pp3786-3792
Journal homepage: http://ijai.iaescore.com
Artificial intelligence for deepfake detection: systematic review
and impact analysis
Venkateswarlu Sunkari1, Ayyagari Sri Nagesh2
1Department of Computer Science and Engineering, Acharya Nagarjuna University, Guntur, India
2Department of Computer Science and Engineering, RVR & JC College of Engineering, Chowdavaram, India
Article Info

Article history:
Received Dec 29, 2023
Revised Mar 3, 2024
Accepted Mar 21, 2024

Keywords:
Capsule network
Deep learning
Deepfake
Forgery analysis
Swap service

ABSTRACT
Deep learning and artificial intelligence (AI) have enabled deepfakes, prompting concerns about their social impact. Deepfakes have detrimental effects in several industries, despite their apparent benefits. In this study, we explore deepfake detection research and its social implications. We examine capsule networks' ability to detect video deepfakes and their design implications; this strategy reduces parameters while providing excellent accuracy, making it a promising deepfake defense. The social significance of deepfakes is also highlighted, underlining the necessity of understanding them. Despite the extensive use of face-swap services, little is known about deepfakes' social impact. The misuse of deepfakes in image-based sexual assault and in distorting public figures, especially in politics, highlights the necessity for further research on their social impact. Using state-of-the-art deepfake detection methods, such as fake-face and deepfake detectors and broad forgery analysis tools, reduces the damage deepfakes cause. In this work, we review deepfake detection research and its social impacts: we analyse various deepfake methods and the social impact of deepfake misuse, and finally give a clear analysis of existing machine learning models. By combining findings across studies, we aim to illuminate the potential effects of deepfakes on society and suggest solutions.

This is an open access article under the CC BY-SA license.
Corresponding Author:
Venkateswarlu Sunkari
Department of Computer Science and Engineering, Acharya Nagarjuna University
Nagarjuna Nagar, Guntur 522510, Andhra Pradesh, India
Email: sunkarivenkateswarlu@gmail.com
1. INTRODUCTION
Deepfake technology, powered by artificial intelligence (AI) and deep learning, has surfaced as a
ground-breaking instrument that might revolutionize a number of sectors, including customer service and
online education. Research, academia, and industry have all paid close attention to the versatility of deep
learning in making deepfakes, which has resulted in substantial breakthroughs in the generation and detection
of deepfakes. Nevertheless, despite the benefits, worries about deepfakes' detrimental effects on society are
becoming more and more prevalent.
Face-swapping models, also referred to as deepfake technology, have been used maliciously to
propagate false information and fake news, posing major problems for society. The bad events brought about
by the improper use of deepfake technology have highlighted the need for research into face-swapping tasks
and the creation of superior deepfake detection algorithms. Furthermore, face-swapping's beneficial
uses-such as anonymization for privacy protection and the development of new characters for the
entertainment sector-highlight the depth of deepfake technology [1], [2].
Little research has thoroughly investigated the social impact of deepfakes, despite the widespread use of face-swapping platforms; this is a crucial gap in our knowledge of the consequences of deepfakes. With the goal of advancing deepfake research, this study examines the psychological, social, and policy ramifications of a society in which it is simple to create and distribute fake videos, underscoring the urgent need for in-depth analysis and preventative measures. Researchers have made great progress in creating cutting-edge deepfake detection methods and sophisticated forensics platforms in an effort to counteract the negative consequences of deepfakes. The incorporation of these instruments represents a significant step in reducing the detrimental effects of deepfakes [3]. We aim to give a thorough overview of deepfake detection and its social ramifications, analyzing findings from various studies to clarify the possible social effects of deepfake technology and to offer suggestions for resolving these issues. Though detection tools have advanced, much more needs to be understood about how people react to and interpret deepfake content, as well as how it influences their behavior and level of trust in visual media [4].
AI has played a central role in both the creation and detection of deepfakes. The development of hyper-realistic face image generation systems, such as Face2Face and DeepFake, has raised questions about the credibility of visual media because of the ethical problems posed by manipulated photos and videos [5]. The necessity for thorough research on the societal effects of deepfakes has been brought to light by the misuse of deepfake technology, particularly in the dissemination of false information and fake news [6]. By examining the potential of capsule networks for identifying video deepfakes and highlighting the design and sociological ramifications, Stanciu and Ionescu [7] have contributed to this field [8]. Their results highlight the significance of comprehending the ramifications of deepfakes and creating efficient detection techniques. This is in line with the increasing focus on AI and deep learning for the production and identification of deepfakes across research, academia, and industry [9]. Researchers have made great progress in creating cutting-edge deepfake detection methods and sophisticated forensics systems in order to address these issues [10].
‒ Approaches to detect deepfakes using artificial intelligence
One major difficulty that calls for creative solutions utilizing AI and machine learning is the
detection of deepfakes. Scholars and professionals in the field have been investigating diverse approaches to
tackle this problem and alleviate the possible negative effects of deepfakes on society. The 'deepfake
detection challenge', which has brought together tech companies and academia to promote joint efforts in
creating effective detection algorithms, is one of the pioneering projects in this field [11]. The challenge
intends to motivate scholars to address the issue of deepfake proliferation and its detrimental impacts on
society. Participants in this competition have been able to investigate deep neural networks and sophisticated
machine learning models for reliable deepfake detection by utilizing AI technology.
Deep learning algorithms such as capsule networks have shown tremendous potential for video deepfake identification in recent years [12]. This progress has been supported by Stanciu and Ionescu's [7] investigation into capsule networks' capacity to identify deepfake videos. Their study highlights the vital role that cutting-edge AI methods play in mitigating the negative effects of deepfakes, and the necessity of continuing to investigate novel strategies to improve detection accuracy.
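To make the capsule-based idea concrete, the following is a minimal, heavily simplified sketch of a capsule-style binary detector in PyTorch. It is an illustration only, not the architecture of [7] or [8]: all class and parameter names are hypothetical, dynamic routing between capsule layers is omitted, and the real/fake decision is read directly from the lengths (activation strengths) of the primary capsule vectors.

```python
import torch
import torch.nn as nn

def squash(s, dim=-1, eps=1e-8):
    # Capsule squash non-linearity: keeps vector orientation, maps length into [0, 1).
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

class SimpleCapsuleDetector(nn.Module):
    """Hypothetical, simplified capsule-style detector for face crops (real vs. fake)."""
    def __init__(self, num_capsules=8, capsule_dim=16):
        super().__init__()
        self.features = nn.Sequential(                 # small CNN feature extractor
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        # Primary capsules: one conv whose channels are reshaped into capsule vectors.
        self.primary = nn.Conv2d(128, num_capsules * capsule_dim, 3, stride=2, padding=1)
        self.num_capsules, self.capsule_dim = num_capsules, capsule_dim
        self.classifier = nn.Linear(num_capsules, 2)   # logits over {real, fake}

    def forward(self, x):                              # x: (B, 3, H, W) face crops
        h = self.features(x)                           # (B, 128, 8, 8)
        p = self.primary(h)                            # (B, num_capsules * capsule_dim, 4, 4)
        p = p.view(x.size(0), self.num_capsules, self.capsule_dim, -1).mean(dim=-1)
        p = squash(p)                                  # (B, num_capsules, capsule_dim)
        return self.classifier(p.norm(dim=-1))         # capsule lengths -> class logits
```

The appeal of such a design, as noted above, is the comparatively small parameter count: the capsule vectors summarise part-level evidence compactly instead of relying on a very large fully connected head.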
Additionally, a major advancement in reducing the negative effects of deepfakes has been made
with the combination of cutting-edge forensics platforms and AI-powered deepfake detection methods. These
advanced AI-powered techniques have shown excellent results in identifying deepfakes, especially when
applied to popular datasets. Researchers have been able to protect the integrity of visual content in a variety
of societal sectors by using AI to create strong detection models that can recognize modified media.
Moving forward, AI-driven approaches to detect deepfakes will continue to evolve, leveraging the
latest advancements in machine learning and computer vision. As the threat of deepfake misuse persists, it is
imperative for researchers and industry stakeholders to collaborate on developing AI-based solutions that not
only detect deepfakes with high accuracy but also address the broader societal implications of this technology
[13]. By employing AI in the fight against deepfakes, we can pave the way for a more secure and trustworthy
media environment, ensuring that visual content remains reliable and authentic in the digital age.
2. SYSTEMATIC ANALYSIS OF DEEPFAKE DETECTION METHODS
Detecting deepfakes is a complex and evolving challenge that requires a systematic approach to
evaluate the efficacy of various detection methods. Research in this field has been driven by the increasing
prevalence and potential societal impact of deepfakes across diverse contexts. The emergence of algorithmic
techniques and user-focused solutions underscores the multifaceted nature of deepfake detection and the need
for comprehensive analyses of detection methods.
A systematic review of deepfake detection methods reveals the limitations of current algorithms in
achieving successful detection across different deepfake types, content formats, characteristics, and datasets.
Despite notable progress, the robustness of these algorithms remains a concern, prompting the exploration of alternative approaches to enhance detection accuracy and reliability. Table 1 provides a comparative analysis of deepfake detection methods, highlighting the distinct advantages and limitations of each approach.
Table 1. Comparative analysis of deepfake detection methods
Detection method | Advantages | Limitations
Deep learning-based models | High detection performance | Limited robustness across diverse deepfake types and characteristics
Capsule networks | Potential for reduced parameters while maintaining high accuracy | Evaluation across varied datasets needed for comprehensive assessment
Forensic platforms | Robust detection capabilities | Resource-intensive and computationally demanding
Lightweight object detection models | Real-time performance improvement | Reduced accuracy compared to heavier models
Deep learning-based models have demonstrated high detection performance, but their limited
robustness across diverse deepfake types and characteristics necessitates further refinement. Capsule
networks offer the potential for reduced parameters while maintaining high accuracy, yet comprehensive
evaluation across varied datasets is essential for a thorough assessment [6]. Additionally, forensic platforms
exhibit robust detection capabilities but are often resource-intensive and computationally demanding, posing
practical challenges for widespread adoption [14].
As researchers continue to explore and develop novel deepfake detection methods, it is imperative
to systematically evaluate the strengths and limitations of each approach. Through rigorous comparative
analyses and empirical validation, the efficacy of detection methods can be assessed across a comprehensive
range of deepfake scenarios, thereby advancing the development of robust and reliable detection techniques
[7]. In summary, the systematic review of deepfake detection methods underscores the need for continued
research and innovation in this critical domain. By systematically evaluating the advantages and limitations
of existing detection approaches, researchers can inform the development of more effective and resilient
methods to detect and mitigate the societal impact of deepfakes.
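As a minimal sketch of such a comparative evaluation, the snippet below scores a set of detectors on a set of labelled datasets using accuracy and the area under the ROC curve via scikit-learn. The `predict_proba` interface and every name here are assumptions made for illustration, not an API from any cited work.

```python
# Hypothetical cross-detector, cross-dataset evaluation harness.
from sklearn.metrics import accuracy_score, roc_auc_score

def evaluate(detector, videos, labels, threshold=0.5):
    """Score one detector on one labelled dataset (label 1 = fake, 0 = real)."""
    scores = [detector.predict_proba(v) for v in videos]   # per-video P(fake), assumed interface
    preds = [int(s >= threshold) for s in scores]
    return {"accuracy": accuracy_score(labels, preds),
            "auc": roc_auc_score(labels, scores)}

def cross_dataset_report(detectors, datasets):
    """Evaluate every detector on every dataset to expose robustness gaps."""
    return {(det_name, ds_name): evaluate(det, videos, labels)
            for det_name, det in detectors.items()
            for ds_name, (videos, labels) in datasets.items()}
```

Reporting the full detector-by-dataset grid, rather than a single headline accuracy, is what makes cross-type and cross-dataset robustness gaps visible.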
3. ARTIFICIAL INTELLIGENCE EFFICACY IN IDENTIFYING DEEPFAKES
Advancements in AI technology have significantly contributed to the efficacy of identifying
deepfakes, with researchers leveraging innovative techniques to counter the harmful effects of manipulated
media. The utilization of AI-driven deepfake detection methodologies has demonstrated substantial progress
in detecting and mitigating the impact of synthetic media [15]. By taking a temporally-based approach and
analyzing the entire sequence of frames in a video, AI systems have shown promising results in effectively
detecting deepfake content while reducing vulnerability to adversarial attacks [16]. These approaches,
which combine convolutional neural networks and the Jaya optimization algorithm, have exhibited high
accuracy rates and outperformed existing techniques, making them a formidable solution for identifying
deepfake videos in different contexts [17].
Additionally, the detection of AI-generated photos and videos has been significantly enhanced by
the combination of ensemble learning techniques with capsule-forensics architecture. The overall
effectiveness of deepfake detection has also been improved by the use of detection techniques based on
convolutional long short-term memory networks and sequential temporal analysis [18]. It is clear that in
order to determine the success and limitations of deepfake detection techniques, thorough examination and
comparison are necessary. Researchers have made great progress in creating methods to identify
resolution-inconsistent facial aberrations, mesoscopic characteristics, and temporal dynamics within videos by
utilizing AI innovations like convolutional neural networks and processing deepfakes frame-by-frame [19].
These approaches represent the continued development of deepfake detection strategies and demonstrate the
promise of AI technology in addressing the problems presented by deepfakes.
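A minimal PyTorch sketch of the frame-sequence idea discussed above follows: a per-frame CNN backbone feeds an LSTM that aggregates temporal dynamics across the clip before a real/fake head. It is a generic illustration under assumed shapes and names (and assumes torchvision 0.13+), not the specific convolutional LSTM systems of [10] or [18].

```python
import torch
import torch.nn as nn
from torchvision import models

class CnnLstmDetector(nn.Module):
    """Illustrative frame-sequence detector: per-frame CNN features + LSTM aggregation."""
    def __init__(self, hidden=256):
        super().__init__()
        backbone = models.resnet18(weights=None)   # per-frame feature extractor (untrained here)
        backbone.fc = nn.Identity()                # expose the 512-d pooled features
        self.backbone = backbone
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)           # logits over {real, fake}

    def forward(self, clips):                      # clips: (B, T, 3, H, W)
        B, T = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1)) # (B*T, 512) frame features
        feats = feats.view(B, T, -1)
        _, (h_n, _) = self.lstm(feats)             # last hidden state summarises the clip
        return self.head(h_n[-1])                  # (B, 2)
```

Because the LSTM sees the whole frame sequence, inconsistencies that appear only over time (flicker, unstable facial boundaries) can contribute to the decision, which is the motivation for temporal rather than purely frame-level analysis.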
As the threat of deepfake misuse persists, continued exploration and development of novel
AI-driven approaches remain critical in the battle against manipulated media. Collaboration among
researchers and industry stakeholders will be pivotal in advancing AI-based solutions that not only detect
deepfakes with high accuracy but also address the broader societal implications of this technology. The
ongoing evolution of AI-driven deepfake detection methodologies will play a pivotal role in fostering a more
secure and trustworthy media environment in the digital age. As deepfake technology continues to advance
and pose significant risks to various aspects of society, the role of AI in mitigating these threats becomes
increasingly important. Table 2 analyses currently trending methods and their reported accuracy.
Table 2. Analysis of different models and their accuracy in deepfake detection
Model | Architecture | Dataset used | Training accuracy (%) | Validation accuracy (%) | Testing accuracy (%) | Remarks
Model A | Convolutional neural network | DeepFake detection dataset | 98.5 | 96.2 | 95.8 | Achieves high accuracy but may be overfitting on the training set. Regularization techniques could be explored.
Model B | Recurrent neural network | FaceForensics++ dataset | 94.2 | 91.8 | 90.5 | Effective on certain types of deepfakes; struggles with more sophisticated manipulations. Investigate additional pre-processing techniques.
Model C | Generative adversarial network | DFDC dataset | 96.8 | 95.3 | 94.7 | Demonstrates good generalization, but there is a risk of adversarial attacks. Implementing adversarial training may enhance robustness.
Ensemble model | Combination of A, B, and C | Mixed datasets | 99.1 | 97.5 | 97.2 | Superior performance by combining strengths of individual models. Careful attention to diversity in training data sources is crucial.
Real-time processing | EfficientNet | Custom dataset | NA | NA | 92.6 | Focuses on real-time processing with a compromise on accuracy. Ideal for applications requiring quick identification.
Transfer learning approach | Pre-trained ResNet50 | Fine-tuned on DeepFakeForensics dataset | 97.3 | 96.1 | 95.5 | Leverages pre-trained features, reducing the need for extensive training data. Fine-tuning allows adaptation to specific deepfake characteristics.
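To illustrate the transfer-learning row of Table 2, the sketch below fine-tunes an ImageNet-pretrained ResNet50 from torchvision (the `weights` enum requires torchvision 0.13 or newer) by freezing the backbone and training a new two-class head. The fine-tuning dataset and data loading are assumed and not shown; this is a generic recipe, not the exact configuration behind the table.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained weights, freeze the backbone, attach a real/fake head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in model.parameters():
    param.requires_grad = False                    # keep pretrained features fixed initially
model.fc = nn.Linear(model.fc.in_features, 2)      # new head (trainable by default)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(faces, labels):
    """One fine-tuning step on a batch of face crops (data pipeline assumed)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(faces), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Once the head converges, selected backbone layers can be unfrozen at a lower learning rate so the features adapt to deepfake-specific artifacts, which is the usual reading of "fine-tuning allows adaptation" in the table remarks.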
4. SOCIAL IMPACT OF DEEPFAKES
4.1. Analysis of the social impact of deepfakes
Deepfake technology's widespread use has brought about new difficulties for social and digital media. Deepfakes' deceptive and manipulative qualities have the potential to significantly affect many areas of society, such as politics, public discourse, and private life. To comprehend these ramifications and create effective countermeasures, a thorough examination of the societal impact of deepfakes is necessary.
4.2. Impact on public trust and perception
One of the most profound societal impacts of deepfakes is the erosion of public trust and the distortion of perception. With the ability to fabricate convincing videos and images, malicious actors can manipulate depictions of public figures, disseminate false information, and incite social discord. Consequently, the widespread circulation of deepfakes poses a significant threat to the truthfulness of information and the public's ability to discern authentic content from fabricated media [20].
4.3. Political and social manipulation
The use of deepfakes for political misinformation and social manipulation has raised concerns about
the potential destabilization of democratic processes and societal harmony. By creating deceptive content featuring political leaders or influential figures, bad actors can exploit deepfakes to manipulate public opinion, sow discord, and undermine the credibility of institutions [21].
4.4. Privacy violations and personal harm
Individuals and public figures are susceptible to privacy violations and personal harm resulting
from the malicious use of deepfake technology. Unauthorized creation and distribution of fake videos can
lead to reputational damage, harassment, and emotional distress [22]. Moreover, deepfake content that
superimposes individuals' faces onto explicit or compromising scenes can have far-reaching consequences on
their personal and professional lives.
4.5. Economic implications
The proliferation of deepfakes also presents economic implications, particularly in industries reliant
on visual media and advertising. The dissemination of falsified content can undermine the integrity of
advertising campaigns, impact consumer trust, and result in financial repercussions for businesses and
individuals featured in manipulated media [23]. By incorporating additional deepfake techniques and their
corresponding characteristics, the evaluation can offer a more nuanced perspective on the complexities of
detecting manipulated digital media. The expanded analysis in Table 3 provides a more comprehensive framework for understanding the landscape of deepfake technology and the advancements in detection methods.
Table 3. Different deepfake techniques and their social impact
Deepfake technique | Key components | Detection methods | Potential misuses | Social impact
Face2Face | Facial manipulation, expression transfer | Frame-level analysis, facial landmark tracking | Politically motivated misinformation | Erosion of trust in political institutions
Deepfake | Neural network-based image manipulation | Video-level analysis, anomaly detection | Targeted revenge pornography | Impact on individual privacy and well-being
Neural texture synthesis | Texture transfer, image recoloring | Statistical analysis of texture patterns, artifact detection | Creation of false evidence | Legal and judicial complications
Lip-sync deepfake | Audio-visual synchronization, speech synthesis | Audio-visual correlation analysis, voice signature detection | Fabrication of false statements | Legal implications and public deception
Hybrid deepfake models | Combination of multiple techniques, adaptive manipulation | Cross-modal analysis, anomaly detection | Multi-faceted misinformation campaigns | Societal discord and psychological harm
To address the multifaceted social impact of deepfakes, it is imperative to leverage advanced
technologies, including AI, to develop robust detection and mitigation strategies. Future advancements in
deepfake detection are likely to embrace multimodal techniques, integrating various data sources such as
audio, video, and contextual information. By fusing multiple modalities, including linguistic patterns, facial
movements, and audiovisual consistency, detection systems can enhance their resilience against sophisticated
deepfake manipulations [24]. The integration of explainable AI techniques into deepfake detection models
will facilitate the interpretation of detection results and provide insights into the rationale behind
classification decisions [25].
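A minimal sketch of such late multimodal fusion is given below, assuming separate visual and audio encoders already exist and return fixed-size embeddings; their outputs are concatenated and passed to a small classification head. All module names and dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class LateFusionDetector(nn.Module):
    """Illustrative late-fusion head over assumed per-modality encoders."""
    def __init__(self, video_encoder, audio_encoder, video_dim, audio_dim, hidden=128):
        super().__init__()
        self.video_encoder = video_encoder         # e.g. a frame-sequence CNN/LSTM
        self.audio_encoder = audio_encoder         # e.g. a spectrogram CNN
        self.fusion = nn.Sequential(
            nn.Linear(video_dim + audio_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),                  # logits over {real, fake}
        )

    def forward(self, frames, waveform):
        v = self.video_encoder(frames)             # (B, video_dim)
        a = self.audio_encoder(waveform)           # (B, audio_dim)
        return self.fusion(torch.cat([v, a], dim=-1))
```

A fused decision of this kind can flag lip-sync forgeries whose visual stream looks clean but whose audio-visual correlation is inconsistent, something a single-modality detector would miss.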
As deepfake generation techniques continue to advance, the development of specialized detection
systems based on generative adversarial networks is expected to gain prominence. By leveraging the
principles of generative adversarial networks, detection models can adapt to the evolving landscape of
deepfake creation and effectively discern manipulated media from authentic content [26]. Future trends in
deepfake detection will involve intensified collaboration among researchers, industry stakeholders, and
regulatory bodies to establish benchmarking frameworks and standardized evaluation protocols. These efforts
are crucial for validating the effectiveness of detection methods and ensuring their consistent performance
across diverse deepfake scenarios [27].
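One way to read the GAN-based direction is to treat the detector as a discriminator that keeps training against a learned forgery generator, so it adapts as forgeries improve. The sketch below shows one step of such a co-training loop under assumed, simplified modules: a generator `G` mapping noise to face images and a detector `D` returning a single real/fake logit per image. It is a generic adversarial training step, not a published detection system.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def co_training_step(G, D, real_faces, opt_g, opt_d, z_dim=128):
    """One adversarial step: D learns to flag generated forgeries, G learns to evade D."""
    B = real_faces.size(0)
    z = torch.randn(B, z_dim)

    # Update the detector D on real faces (label 1) and current forgeries (label 0).
    fakes = G(z).detach()                          # stop gradients into the generator
    d_loss = bce(D(real_faces), torch.ones(B, 1)) + bce(D(fakes), torch.zeros(B, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Update the generator G, simulating a forger that improves against the detector.
    g_loss = bce(D(G(z)), torch.ones(B, 1))        # G is rewarded for being classified as real
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```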
The social impact of deepfakes extends beyond technological advancements and directly influences
public trust, political integrity, personal privacy, and economic stability. The integration of advanced
AI-driven detection methods is instrumental in mitigating the adverse effects of deepfakes on society. By
understanding the societal implications and implementing effective countermeasures, stakeholders can work
towards fostering a more resilient and trustworthy digital landscape in the face of evolving technological
challenges. In short, the societal impact of deepfakes is significant and far-reaching, affecting many aspects of society, including trust, politics, privacy, and the economy.
5. CONCLUSION
The emergence of explainable AI, multimodal detection approaches, ethical and regulatory
frameworks, federated learning, privacy-preserving techniques, and human-in-the-loop approaches signifies a
collective effort to fortify deepfake detection capabilities. By embracing these emerging trends, stakeholders
can work towards fostering a more resilient and trustworthy digital landscape while addressing the societal
impact of deepfakes. It is imperative to continuously refine benchmarking protocols and evaluation
methodologies for the comprehensive assessment of detection models across diverse deepfake types and
characteristics. The culmination of these future trends in deepfake detection reflects a proactive and adaptive
approach to combatting the multifaceted challenges posed by deepfakes in the digital age. In conclusion, the
societal impact of deepfakes is extensive and has implications for public trust, political honesty, personal
privacy, and economic stability. Addressing this multifaceted challenge requires a nuanced understanding of
the evolving technological landscape and a commitment to implementing effective countermeasures. By
integrating advanced AI-driven detection methods and anticipating future trends in deepfake detection,
stakeholders can strive towards fostering a more robust and trustworthy digital environment. Furthermore,
the ethical considerations, collaborative frameworks, and innovative approaches exemplified in the future
trends of deepfake detection offer a pathway to enhancing the efficacy, transparency, and resilience of
societal defense against deepfakes.
REFERENCES
[1] T. T. Nguyen et al., “Deep learning for deepfakes creation and detection: A survey,” Computer Vision and Image Understanding,
vol. 223, 2022, doi: 10.1016/j.cviu.2022.103525.
[2] F. Juefei-Xu, R. Wang, Y. Huang, Q. Guo, L. Ma, and Y. Liu, “Countering malicious deepfakes: survey, battleground, and horizon,”
International Journal of Computer Vision, vol. 130, no. 7, pp. 1678–1734, 2022, doi: 10.1007/s11263-022-01606-8.
[3] R. Gil, J. Virgili-Gomà, J. M. López-Gil, and R. García, “Deepfakes: Evolution and trends,” Soft Computing, vol. 27, no. 16, pp.
11295–11318, 2023, doi: 10.1007/s00500-023-08605-y.
[4] D. Gamage, P. Ghasiya, V. Bonagiri, M. E. Whiting, and K. Sasahara, “Are deepfakes concerning? Analyzing conversations of
deepfakes on reddit and exploring societal implications,” in CHI Conference on Human Factors in Computing Systems, 2022, pp.
1–19. doi: 10.1145/3491102.3517446.
[5] N. Sontakke, S. Utekar, S. Rastogi, and S. Sonawane, “Comparative analysis of deep-fake algorithms,” International Journal of
Computer Science Trends and Technology, vol. 11, no. 4, pp. 109–115, 2023.
[6] J. Pu et al., “Deepfake videos in the wild: Analysis and detection,” The Web Conference 2021 - Proceedings of the World Wide
Web Conference, WWW 2021. ACM, pp. 981–992, 2021. doi: 10.1145/3442381.3449978.
[7] D.-C. Stanciu and B. Ionescu, “Deepfake video detection with facial features and long-short term memory deep networks,” in
2021 International Symposium on Signals, Circuits and Systems (ISSCS), 2021, pp. 1–4. doi:
10.1109/ISSCS52333.2021.9497385.
[8] H. H. Nguyen, J. Yamagishi, and I. Echizen, “Capsule-forensics networks for deepfake detection,” in Handbook of digital face
manipulation and detection, Cham, Switzerland: Springer International Publishing, 2022, pp. 275–301. doi: 10.1007/978-3-030-
87664-7_13.
[9] N. Diakopoulos and D. Johnson, “Anticipating and addressing the ethical implications of deepfakes in the context of elections,”
New Media & Society, vol. 23, no. 7, pp. 2072–2098, 2021, doi: 10.1177/1461444820925811.
[10] S. Kaur, P. Kumar, and P. Kumaraguru, “Deepfakes: temporal sequential analysis to detect face-swapped video clips using
convolutional long short-term memory,” Journal of Electronic Imaging, vol. 29, no. 3, 2020, doi: 10.1117/1.JEI.29.3.033013.
[11] Y. Mirsky and W. Lee, “The creation and detection of deepfakes,” ACM Computing Surveys, vol. 54, no. 1, pp. 1–41, 2021, doi:
10.1145/3425780.
[12] B. K. Kumar and E. S. Reddy, “RAFT: Congestion control technique for efficient information dissemination in ICN based
VANET,” International Journal of Knowledge-Based and Intelligent Engineering Systems, vol. 25, no. 4, pp. 397–404, 2021, doi:
10.3233/KES-210083.
[13] S. Karnouskos, “Artificial intelligence in digital media: The era of deepfakes,” IEEE Transactions on Technology and Society,
vol. 1, no. 3, pp. 138–147, 2020, doi: 10.1109/tts.2020.3001312.
[14] R. Tolosana, R. Vera-Rodriguez, J. Fierrez, A. Morales, and J. Ortega-Garcia, “Deepfakes and beyond: A survey of face manipulation
and fake detection,” Information Fusion, vol. 64, pp. 131–148, 2020, doi: 10.1016/j.inffus.2020.06.014.
[15] R. Katarya and A. Lal, “A study on combating emerging threat of deepfake weaponization,” in 2020 Fourth International
Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), 2020, pp. 485–490. doi: 10.1109/I-
SMAC49090.2020.9243588.
[16] T. Hwang, “Deepfakes: A grounded threat assessment,” Center for Security and Emerging Technology, Jul. 2020. doi:
10.51593/20190030.
[17] N. N. Thaw, T. July, A. N. Wai, D. H. Goh, and A. Y. K. Chua, “Is it real? A study on detecting deepfake videos,” Proceedings of
the Association for Information Science and Technology, vol. 57, no. 1, 2020, doi: 10.1002/pra2.366.
[18] R. Chesney and D. K. Citron, “Deep fakes: A looming crisis for national security, democracy and privacy?,” Lawfare, 2018.
[Online]. Available: https://scholarship.law.bu.edu/shorter_works/33/
[19] D. Fallis, “The epistemic threat of deepfakes,” Philosophy and Technology, vol. 34, no. 4, pp. 623–643, 2021, doi:
10.1007/s13347-020-00419-2.
[20] T. Dobber, N. Metoui, D. Trilling, N. Helberger, and C. D. Vreese, “Do (microtargeted) deepfakes have real effects on political
attitudes?,” International Journal of Press/Politics, vol. 26, no. 1, pp. 69–91, 2021, doi: 10.1177/1940161220944364.
[21] T. C. Helmus, “Artificial intelligence, deepfakes, and disinformation: A primer,” Center for Security and Emerging Technology,
pp. 1–23, 2022, doi: 10.7249/PEA1043-1.
[22] Y. Zhang, R. Hu, D. Li, and X. Wang, “Fake identity attributes detection based on analysis of natural and human behaviors,”
IEEE Access, vol. 8, pp. 78901–78911, 2020, doi: 10.1109/ACCESS.2020.2987966.
[23] R. Chesney and D. K. Citron, “Deep fakes: A looming challenge for privacy, democracy, and national security,” California Law
Review, vol. 107, no. 6, pp. 1753–1820, 2019, doi: 10.15779/Z38RV0D15J.
[24] M. R. Shoaib, Z. Wang, M. T. Ahvanooey, and J. Zhao, “Deepfakes, misinformation, and disinformation in the era of frontier AI,
generative AI, and large AI models,” in 2023 International Conference on Computer and Applications (ICCA), 2023, pp. 1–7.
doi: 10.1109/ICCA59364.2023.10401723.
[25] I. Solaiman et al., “Evaluating the social impact of generative AI systems in systems and society,” Arxiv-Computer Science, pp.
1–56, 2023, doi: 10.48550/arXiv.2306.05949.
[26] C. R. Leibowicz, S. McGregor, and A. Ovadya, “The deepfake detection dilemma: A multistakeholder exploration of adversarial
dynamics in synthetic media,” in Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 2021, pp. 736–744.
doi: 10.1145/3461702.3462584.
[27] A. M. Almars, “Deepfakes detection techniques using deep learning: A survey,” Journal of Computer and Communications, vol.
9, no. 5, pp. 20–35, 2021, doi: 10.4236/jcc.2021.95003.
BIOGRAPHIES OF AUTHORS
Venkateswarlu Sunkari is a research scholar in the Department of Computer Science and Engineering at Acharya Nagarjuna University, India. Jawaharlal Nehru Technological University, Kakinada, India, awarded him an M.Tech. in computer science and engineering in 2010; the same university awarded him a B.Tech. in computer science and engineering, in Hyderabad, India, in 2004. His research interests include deep learning, machine learning, image processing, data mining, and artificial intelligence. He can be contacted at email: sunkarivenkateswarlu@gmail.com.
Dr. Ayyagari Sri Nagesh obtained Ph.D. from JNTUH, Hyderabad, in Computer
Science and Engineering. In Guntur, India, he teaches computer science at the RVR & JC
College of Engineering. Furthermore, he has authored more than fifty papers for international
journals and ten papers for international conferences. At numerous national and international
conferences, he has presided over sessions. Under his guidance, two scholars have been awarded Ph.D. degrees, and six more are pursuing theirs. He has been granted two patents. His research interests include
natural language processing, deep learning, and image processing. He is an active member of
ACM and a life member of CSI and ISTE. He can be contacted at email:
asrinagesh@gmail.com.
More Related Content

PDF
Unmasking deepfakes: A systematic review of deepfake detection and generation...
PDF
A Privacy-Preserving Deep Learning Framework for CNN-Based Fake Face Detection
PDF
NEURAL NETWORK-BASED DETECTION OF FRAUDULENT PROFILES IN SOCIAL MEDIA PLATFOR...
PDF
A Novel Approach for Enhancing Image Copy Detection with Robust Machine Learn...
PDF
A Novel Approach for Enhancing Image Copy Detection with Robust Machine Learn...
PDF
Broadcasting Forensics Using Machine Learning Approaches
PDF
Understanding Deepfake Technology.pdf
PDF
Review on effectiveness of deep learning approach in digital forensics
Unmasking deepfakes: A systematic review of deepfake detection and generation...
A Privacy-Preserving Deep Learning Framework for CNN-Based Fake Face Detection
NEURAL NETWORK-BASED DETECTION OF FRAUDULENT PROFILES IN SOCIAL MEDIA PLATFOR...
A Novel Approach for Enhancing Image Copy Detection with Robust Machine Learn...
A Novel Approach for Enhancing Image Copy Detection with Robust Machine Learn...
Broadcasting Forensics Using Machine Learning Approaches
Understanding Deepfake Technology.pdf
Review on effectiveness of deep learning approach in digital forensics

Similar to Artificial intelligence for deepfake detection: systematic review and impact analysis (20)

PPTX
Presentation1-deep-fake-identifier-for-security.pptx
PDF
Deepfakes Manipulating Reality with AI.pdf
PPTX
___________________________________________________________
PDF
EXPLORATORY DATA ANALYSIS AND FEATURE SELECTION FOR SOCIAL MEDIA HACKERS PRED...
PDF
Exploratory Data Analysis and Feature Selection for Social Media Hackers Pred...
PDF
How-Deepfakes-Affect-Society-Navigating-the-New-Realities.pptx.pdf
PDF
Face Mask Detection System Using Artificial Intelligence
PPTX
698642933-DdocfordownloadEEP-FAKE-PPT.pptx
PPTX
Unmasking-the-Digital-Deception good.pptx
PDF
Generative AI Ethics: a Comprehensive Safety and Regulation Framework
PPTX
PHD Proposal Presentation for All Universities.pptx
PDF
Generative AI Ethics: a Comprehensive Safety and Regulation Framework
PDF
B13 FIRST REVIEW 2 (1).pdf advanced machine learning
PDF
Deepfake Technology's Emergence: Exploring Its Impact on Cybersecurity
PDF
Generative AI Ethics: a Comprehensive Safety and Regulation Framework
PDF
Fake News Detection Using Machine Learning
PDF
Deepfakes-Cybersecurity-Implications.pptx.pdf
PDF
Face and liveness detection with criminal identification using machine learni...
PDF
A survey of deepfakes in terms of deep learning and multimedia forensics
PPTX
deepfake final ppt final year cse cybersecurity
Presentation1-deep-fake-identifier-for-security.pptx
Deepfakes Manipulating Reality with AI.pdf
___________________________________________________________
EXPLORATORY DATA ANALYSIS AND FEATURE SELECTION FOR SOCIAL MEDIA HACKERS PRED...
Exploratory Data Analysis and Feature Selection for Social Media Hackers Pred...
How-Deepfakes-Affect-Society-Navigating-the-New-Realities.pptx.pdf
Face Mask Detection System Using Artificial Intelligence
698642933-DdocfordownloadEEP-FAKE-PPT.pptx
Unmasking-the-Digital-Deception good.pptx
Generative AI Ethics: a Comprehensive Safety and Regulation Framework
PHD Proposal Presentation for All Universities.pptx
Generative AI Ethics: a Comprehensive Safety and Regulation Framework
B13 FIRST REVIEW 2 (1).pdf advanced machine learning
Deepfake Technology's Emergence: Exploring Its Impact on Cybersecurity
Generative AI Ethics: a Comprehensive Safety and Regulation Framework
Fake News Detection Using Machine Learning
Deepfakes-Cybersecurity-Implications.pptx.pdf
Face and liveness detection with criminal identification using machine learni...
A survey of deepfakes in terms of deep learning and multimedia forensics
deepfake final ppt final year cse cybersecurity
Ad

More from IAESIJAI (20)

PDF
A comparative study of natural language inference in Swahili using monolingua...
PDF
Abstractive summarization using multilingual text-to-text transfer transforme...
PDF
Enhancing emotion recognition model for a student engagement use case through...
PDF
Automatic detection of dress-code surveillance in a university using YOLO alg...
PDF
Hindi spoken digit analysis for native and non-native speakers
PDF
Two-dimensional Klein-Gordon and Sine-Gordon numerical solutions based on dee...
PDF
Improved convolutional neural networks for aircraft type classification in re...
PDF
Primary phase Alzheimer's disease detection using ensemble learning model
PDF
Deep learning-based techniques for video enhancement, compression and restora...
PDF
Hybrid model detection and classification of lung cancer
PDF
Adaptive kernel integration in visual geometry group 16 for enhanced classifi...
PDF
Video forgery: An extensive analysis of inter-and intra-frame manipulation al...
PDF
Enhancing fall detection and classification using Jarratt‐butterfly optimizat...
PDF
Deep ensemble learning with uncertainty aware prediction ranking for cervical...
PDF
Event detection in soccer matches through audio classification using transfer...
PDF
Detecting road damage utilizing retinaNet and mobileNet models on edge devices
PDF
Optimizing deep learning models from multi-objective perspective via Bayesian...
PDF
Squeeze-excitation half U-Net and synthetic minority oversampling technique o...
PDF
A novel scalable deep ensemble learning framework for big data classification...
PDF
Exploring DenseNet architectures with particle swarm optimization: efficient ...
A comparative study of natural language inference in Swahili using monolingua...
Abstractive summarization using multilingual text-to-text transfer transforme...
Enhancing emotion recognition model for a student engagement use case through...
Automatic detection of dress-code surveillance in a university using YOLO alg...
Hindi spoken digit analysis for native and non-native speakers
Two-dimensional Klein-Gordon and Sine-Gordon numerical solutions based on dee...
Improved convolutional neural networks for aircraft type classification in re...
Primary phase Alzheimer's disease detection using ensemble learning model
Deep learning-based techniques for video enhancement, compression and restora...
Hybrid model detection and classification of lung cancer
Adaptive kernel integration in visual geometry group 16 for enhanced classifi...
Video forgery: An extensive analysis of inter-and intra-frame manipulation al...
Enhancing fall detection and classification using Jarratt‐butterfly optimizat...
Deep ensemble learning with uncertainty aware prediction ranking for cervical...
Event detection in soccer matches through audio classification using transfer...
Detecting road damage utilizing retinaNet and mobileNet models on edge devices
Optimizing deep learning models from multi-objective perspective via Bayesian...
Squeeze-excitation half U-Net and synthetic minority oversampling technique o...
A novel scalable deep ensemble learning framework for big data classification...
Exploring DenseNet architectures with particle swarm optimization: efficient ...
Ad

Recently uploaded (20)

PDF
Machine learning based COVID-19 study performance prediction
PDF
Build a system with the filesystem maintained by OSTree @ COSCUP 2025
PDF
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
PDF
The Rise and Fall of 3GPP – Time for a Sabbatical?
PDF
gpt5_lecture_notes_comprehensive_20250812015547.pdf
PPTX
20250228 LYD VKU AI Blended-Learning.pptx
PPTX
ACSFv1EN-58255 AWS Academy Cloud Security Foundations.pptx
PDF
Review of recent advances in non-invasive hemoglobin estimation
PPTX
Big Data Technologies - Introduction.pptx
PPTX
Cloud computing and distributed systems.
PDF
Advanced methodologies resolving dimensionality complications for autism neur...
PPTX
MYSQL Presentation for SQL database connectivity
PDF
Spectral efficient network and resource selection model in 5G networks
PDF
Profit Center Accounting in SAP S/4HANA, S4F28 Col11
PDF
TokAI - TikTok AI Agent : The First AI Application That Analyzes 10,000+ Vira...
PDF
Per capita expenditure prediction using model stacking based on satellite ima...
PPT
Teaching material agriculture food technology
PDF
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
PPTX
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
PDF
Assigned Numbers - 2025 - Bluetooth® Document
Machine learning based COVID-19 study performance prediction
Build a system with the filesystem maintained by OSTree @ COSCUP 2025
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
The Rise and Fall of 3GPP – Time for a Sabbatical?
gpt5_lecture_notes_comprehensive_20250812015547.pdf
20250228 LYD VKU AI Blended-Learning.pptx
ACSFv1EN-58255 AWS Academy Cloud Security Foundations.pptx
Review of recent advances in non-invasive hemoglobin estimation
Big Data Technologies - Introduction.pptx
Cloud computing and distributed systems.
Advanced methodologies resolving dimensionality complications for autism neur...
MYSQL Presentation for SQL database connectivity
Spectral efficient network and resource selection model in 5G networks
Profit Center Accounting in SAP S/4HANA, S4F28 Col11
TokAI - TikTok AI Agent : The First AI Application That Analyzes 10,000+ Vira...
Per capita expenditure prediction using model stacking based on satellite ima...
Teaching material agriculture food technology
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
Assigned Numbers - 2025 - Bluetooth® Document

Artificial intelligence for deepfake detection: systematic review and impact analysis

  • 1. IAES International Journal of Artificial Intelligence (IJ-AI) Vol. 13, No. 4, December 2024, pp. 3786~3792 ISSN: 2252-8938, DOI: 10.11591/ijai.v13.i4.pp3786-3792  3786 Journal homepage: http://ijai.iaescore.com Artificial intelligence for deepfake detection: systematic review and impact analysis Venkateswarlu Sunkari1 , Ayyagari Sri Nagesh2 1 Department of Computer Science and Engineering, Acharya Nagarjuna University, Guntur, India 2 Department of Computer Science and Engineering, RVR & JC College of Engineering, Chowdavaram, India Article Info ABSTRACT Article history: Received Dec 29, 2023 Revised Mar 3, 2024 Accepted Mar 21, 2024 Deep learning and artificial intelligence (AI) have enabled deepfakes, prompting concerns about their social impact. deepfakes have detrimental effects in several businesses, despite their apparent benefits. We explore deepfake detection research and its social implications in this study. We examine capsule networks' ability to detect video deepfakes and their design implications. This strategy reduces parameters and provides excellent accuracy, making it a promising deepfake defense. The social significance of deepfakes is also highlighted, underlining the necessity to understand them. Despite extensive use of face swap services, nothing is known about deepfakes' social impact. The misuse of deepfakes in image-based sexual assault and public figure distortion, especially in politics, highlight the necessity for further research on their social impact. Using state-of-the-art deepfake detection methods like fake face and deepfake detectors and a broad forgery analysis tool reduces the damage deepfakes do. We inquire about to review deepfake detection research and its social impacts in this work. In this paper we analysed various deepfake methods, social impact with misutilization of deepfake technology, and finally giving clear analysis of existing machine learning models. We want to illuminate the potential effects of deepfakes on society and suggest solutions by combining study data. Keywords: Capsule network Deep learning Deepfake Forgery analysis Swap service This is an open access article under the CC BY-SA license. Corresponding Author: Venkateswarlu Sunkari Department of Computer Science and Engineering, Acharya Nagarjuna University Nagarjuna Nagar, Guntur 522510, Andhra Pradesh, India Email: sunkarivenkateswarlu@gmail.com 1. INTRODUCTION Deepfake technology, powered by artificial intelligence (AI) and deep learning, has surfaced as a ground-breaking instrument that might revolutionize a number of sectors, including customer service and online education. Research, academia, and industry have all paid close attention to the versatility of deep learning in making deepfakes, which has resulted in substantial breakthroughs in the generation and detection of deepfakes. Nevertheless, despite the benefits, worries about deepfakes' detrimental effects on society are becoming more and more prevalent. Face-swapping models, also referred to as deepfake technology, have been used maliciously to propagate false information and fake news, posing major problems for society. The bad events brought about by the improper use of deepfake technology have highlighted the need for research into face-swapping tasks and the creation of superior deepfake detection algorithms. 
Furthermore, face-swapping's beneficial uses-such as anonymization for privacy protection and the development of new characters for the entertainment sector-highlight the depth of deepfake technology [1], [2].
  • 2. Int J Artif Intell ISSN: 2252-8938  Artificial intelligence for deepfake detection: systematic review and impact analysis (Venkateswarlu Sunkari) 3787 Few research has thoroughly investigated the social impact of deepfakes, despite the widespread use of face swapping platforms; this is a crucial gap in our knowledge of the consequences of deepfakes. With the goal of advancing deepfake research, this special issue examines the psychological, social, and policy ramifications of a society in which it is simple to create and distribute fake films, underscoring the urgent need for in-depth analysis and preventative measures. Researchers have made great progress in creating cutting-edge deepfake detection methods and sophisticated forensics platforms in an effort to counteract the negative consequences of deepfakes. The incorporation of these instruments represents a significant breakthrough in reducing the detrimental effects of deepfakes [3]. We want to give a thorough overview of deepfake detection and its social ramifications in this study. We do this by analyzing data from various research to clarify the possible social effects of deepfake technology and to offer suggestions for resolving these issues. Though detection tools have advanced, much more needs to be understood about how people react to and interpret deepfake content, as well as how it influences their behavior and level of trust in visual media [4]. Deepfakes are being created and detected, and AI has been important in this process. The development of hyper-realistic face image generating systems, such Face2Face and deepfake, has sparked questions about society's credibility because of possible ethical problems with manipulating photos and videos [5]. The necessity for thorough research on the societal effects of deepfakes has been brought to light by the misuse of deepfake technology, particularly in the dissemination of false information and fake news [6]. By examining the possibilities of capsule networks in identifying video deepfakes and highlighting the design and sociological ramifications, Stanciu and Ionescu [7] have made a contribution to this field [8]. Their results highlight the significance of comprehending the ramifications of deepfakes and creating efficient detection techniques. This is in line with the increasing focus on AI and deep learning for the production and identification of deepfakes from research, academia, and industry [9]. Researchers have made great progress in creating cutting-edge deepfake detection methods and sophisticated forensics systems in order to address these issues [10]. ‒ Approaches to detect deepfakes using artificial intelligence One major difficulty that calls for creative solutions utilizing AI and machine learning is the detection of deepfakes. Scholars and professionals in the field have been investigating diverse approaches to tackle this problem and alleviate the possible negative effects of deepfakes on society. The 'deepfake detection challenge', which has brought together tech companies and academia to promote joint efforts in creating effective detection algorithms, is one of the pioneering projects in this field [11]. The challenge intends to motivate scholars to address the issue of deepfake proliferation and its detrimental impacts on society. Participants in this competition have been able to investigate deep neural networks and sophisticated machine learning models for reliable deepfake detection by utilizing AI technology. 
The video deepfake identification problem has shown tremendous potential in recent years due to the advent of deep learning algorithms like capsule networks [12]. This progress has been made possible by Stanciu and Ionescu [7] investigation into capsule networks' capacity to identify deepfake films. Their study highlights the vital role that cutting-edge AI methods play in mitigating the negative effects of deepfakes and highlights the necessity of continuing to investigate cutting-edge strategies to improve detection accuracy. Additionally, a major advancement in reducing the negative effects of deepfakes has been made with the combination of cutting-edge forensics platforms and AI-powered deepfake detection methods. These advanced AI-powered techniques have shown excellent results in identifying deepfakes, especially when applied to popular datasets. Researchers have been able to protect the integrity of visual content in a variety of societal sectors by using AI to create strong detection models that can recognize modified media. Moving forward, AI-driven approaches to detect deepfakes will continue to evolve, leveraging the latest advancements in machine learning and computer vision. As the threat of deepfake misuse persists, it is imperative for researchers and industry stakeholders to collaborate on developing AI-based solutions that not only detect deepfakes with high accuracy but also address the broader societal implications of this technology [13]. By employing AI in the fight against deepfakes, we can pave the way for a more secure and trustworthy media environment, ensuring that visual content remains reliable and authentic in the digital age. 2. SYSTEMATIC ANALYSIS OF DEEPFAKE DETECTION METHODS Detecting deepfakes is a complex and evolving challenge that requires a systematic approach to evaluate the efficacy of various detection methods. Research in this field has been driven by the increasing prevalence and potential societal impact of deepfakes across diverse contexts. The emergence of algorithmic techniques and user-focused solutions underscores the multifaceted nature of deepfake detection and the need for comprehensive analyses of detection methods. A systematic review of deepfake detection methods reveals the limitations of current algorithms in achieving successful detection across different deepfake types, content formats, characteristics, and datasets.
  • 3.  ISSN: 2252-8938 Int J Artif Intell, Vol. 13, No. 4, December 2024: 3786-3792 3788 Despite notable progress, the robustness of these algorithms remains a concern, prompting the exploration of alternative approaches to enhance detection accuracy and reliability. The Table 1 provides a comparative analysis of deepfake detection methods, highlighting the distinct advantages and limitations of each approach. Table 1. Comparative analysis of deepfake detection methods Detection method Advantages Limitations Deep learning-based models High detection performance Limited robustness across diverse deepfake types and characteristics Capsule networks Potential for reduced parameters while maintaining high accuracy Evaluation across varied datasets neededfor comprehensive assessment Forensic platforms Robust detection capabilities Resource-intensive and computationally demanding Lightweight object detection models Real-time performance improvement Sacrifice in accuracy compared to heavier Deep learning-based models have demonstrated high detection performance, but their limited robustness across diverse deepfake types and characteristics necessitates further refinement. Capsule networks offer the potential for reduced parameters while maintaining high accuracy, yet comprehensive evaluation across varied datasets is essential for a thorough assessment [6]. Additionally, forensic platforms exhibit robust detection capabilities but are often resource-intensive and computationally demanding, posing practical challenges for widespread adoption [14]. As researchers continue to explore and develop novel deepfake detection methods, it is imperative to systematically evaluate the strengths and limitations of each approach. Through rigorous comparative analyses and empirical validation, the efficacy of detection methods can be assessed across a comprehensive range of deepfake scenarios, thereby advancing the development of robust and reliable detection technique [7]. In summary, the systematic review of deepfake detection methods underscores the need for continued research and innovation in this critical domain. By systematically evaluating the advantages and limitations of existing detection approaches, researchers can inform the development of more effective and resilient methods to detect and mitigate the societal impact of deepfakes. 3. ARTIFICIAL INTELLIGENCE EFFICACY IN IDENTIFYING DEEPFAKES Advancements in AI technology have significantly contributed to the efficacy of identifying deepfakes, with researchers leveraging innovative techniques to counter the harmful effects of manipulated media. The utilization of AI-driven deepfake detection methodologies has demonstrated substantial progress in detecting and mitigating the impact of synthetic media [15]. By taking a temporally-based approach and analyzing the entire sequence of frames in a video, AI systems have shown promising results in effectively detecting deepfake content while avoiding vulnerabilities to adversarial attacks [16]. These approaches, which combine convolutional neural networks and the Jaya optimization algorithm, have exhibited high accuracy rates and outperformed existing techniques, making them a formidable solution for identifying deepfake videos in different contexts [17]. Additionally, the detection of AI-generated photos and videos has been significantly enhanced by the combination of ensemble learning techniques with capsule-forensics architecture. 
The overall effectiveness of deepfake detection has also been improved by the use of detection techniques based on convolutional long short-term memory networks and sequential temporal analysis [18]. It is clear that in order to determine the success and limitations of deepfake detection techniques, thorough examination and comparison are necessary. Researchers have made great progress in creating methods to identify resolution-inconsistent facial aberrations, mesoscopic characteristics, and temporal dynamics inside films by utilizing AI innovations like convolutional neural networks and processing deepfakes frame-by-frame [19]. These approaches represent the continued development of deepfake detection strategies and demonstrate the promise of AI technology in addressing the problems presented by deepfakes. As the threat of deepfake misuse persists, continued exploration and development of novel AI-driven approaches remain critical in the battle against manipulated media. Collaboration among researchers and industry stakeholders will be pivotal in advancing AI-based solutions that not only detect deepfakes with high accuracy but also address the broader societal implications of this technology. The ongoing evolution of AI-driven deepfake detection methodologies will play a pivotal role in fostering a more secure and trustworthy media environment in the digital age. As deepfake technology continues to advance and pose significant risks to various aspects of society, the role of AI in mitigating these threats becomes increasingly important. In Table 2 analyses the present trending methods used and their accuracy.
  • 4. Int J Artif Intell ISSN: 2252-8938  Artificial intelligence for deepfake detection: systematic review and impact analysis (Venkateswarlu Sunkari) 3789 Table 2. Analysis of different models and their accuracy in deepfake detection Model Architecture Dataset used Training accuracy (%) Validation accuracy (%) Testing accuracy (%) Remarks Model A Convolutional neural network DeepFake detection dataset 98.5 96.2 95.8 Achieves high accuracy but may be overfitting on the training set. Regularization techniques could be explored. Model B Recurrent neural network FaceForensics++ dataset 94.2 91.8 90.5 Effective on certain types of deepfakes, struggles with more sophisticated manipulations. Investigate additional pre-processing techniques Model C Generative adversarial network DFDC dataset 96.8 95.3 94.7 Demonstrates good generalization, but there is a risk of adversarial attacks. Implementing adversarial training may enhance robustness. Ensemble model Combination of A, B, and C Mixed datasets 99.1 97.5 97.2 Superior performance by combining strengths of individual models. Careful attention to diversity in training data sources is crucial. Real-time processing EfficientNet Custom dataset NA NA 92.6 Focuses on real-time processing with a compromise on accuracy. Ideal for applications requiring quick identification Transfer learning approach Pre-trained ResNet50 Fine-tuned on DeepFakeForensics dataset 97.3 96.1 95.5 Leverages pre-trained features, reducing the need for extensive training data. Fine- tuning allows adaptation to specific deepfake characteristics. 4.1. Analysis of the social impact of deepfakes Deepfake technology's widespread use has brought about new difficulties for social and digital media. Deepfakes' deceptive and manipulative qualities have the potential to have a big impact on a lot of different areas of society, such politics, public discourse, and private life. In order to comprehend the ramifications and create effective countermeasures, a thorough examination of the societal impact of deepfakes is necessary. 4.2. Impact on public trust and perception One of the most profound societal impacts of deepfakes is the wearing down of public faith and the distortion of perception. With the ability to fabricate convincing videos and images, malicious actors can manipulate public figures, disseminate false information, and incite social discord. Consequently, the widespread circulation of deepfakes pose an important threat to the truthfulness of information and the public's ability to discern authentic content from fabricated media [20]. 4.3. Political and social manipulation The use of deepfakes for political misinformation and social manipulation has raised concerns about the potential destabilization of democratic processes and societal harmony. By creating deceptive content featuring political leaders or influential figures. Bad actors can exploit deepfakes to manipulate public opinion, sow discord, and undermine the credibility of institutions [21].
4.1. Analysis of the social impact of deepfakes
The widespread use of deepfake technology has created new difficulties for social and digital media. The deceptive and manipulative qualities of deepfakes can significantly affect many areas of society, such as politics, public discourse, and private life. A thorough examination of the societal impact of deepfakes is therefore necessary to comprehend the ramifications and to design effective countermeasures.

4.2. Impact on public trust and perception
One of the most profound societal impacts of deepfakes is the erosion of public trust and the distortion of perception. With the ability to fabricate convincing videos and images, malicious actors can misrepresent public figures, disseminate false information, and incite social discord. Consequently, the widespread circulation of deepfakes poses a significant threat to the truthfulness of information and to the public's ability to discern authentic content from fabricated media [20].

4.3. Political and social manipulation
The use of deepfakes for political misinformation and social manipulation has raised concerns about the potential destabilization of democratic processes and societal harmony. By creating deceptive content featuring political leaders or other influential figures, bad actors can exploit deepfakes to manipulate public opinion, sow discord, and undermine the credibility of institutions [21].

4.4. Privacy violations and personal harm
Individuals and public figures are susceptible to privacy violations and personal harm resulting from the malicious use of deepfake technology. The unauthorized creation and distribution of fake videos can lead to reputational damage, harassment, and emotional distress [22]. Moreover, deepfake content that superimposes individuals' faces onto explicit or compromising scenes can have far-reaching consequences for their personal and professional lives.

4.5. Economic implications
The proliferation of deepfakes also has economic implications, particularly in industries reliant on visual media and advertising. The dissemination of falsified content can undermine the integrity of advertising campaigns, erode consumer trust, and result in financial repercussions for businesses and individuals featured in manipulated media [23]. By incorporating additional deepfake techniques and their corresponding characteristics, the evaluation can offer a more nuanced perspective on the complexities of detecting manipulated digital media. Table 3 expands the analysis into a more comprehensive framework for understanding the landscape of deepfake technology and the advances in detection methods.

Table 3. Different deepfake techniques and their social impact
Deepfake technique | Key components | Detection methods | Potential misuses | Social impact
Face2Face | Facial manipulation, expression transfer | Frame-level analysis, facial landmark tracking | Politically motivated misinformation | Erosion of trust in political institutions
Deepfake | Neural network-based image manipulation | Video-level analysis, anomaly detection | Targeted revenge pornography | Impact on individual privacy and well-being
Neural texture synthesis | Texture transfer, image recoloring | Statistical analysis of texture patterns, artifact detection | Creation of false evidence | Legal and judicial complications
Lip-sync deepfake | Audio-visual synchronization, speech synthesis | Audio-visual correlation analysis, voice signature detection | Fabrication of false statements | Legal implications and public deception
Hybrid deepfake models | Combination of multiple techniques, adaptive manipulation | Cross-modal analysis, anomaly detection | Multi-faceted misinformation campaigns | Societal discord and psychological harm

To address the multifaceted social impact of deepfakes, it is imperative to leverage advanced technologies, including AI, to develop robust detection and mitigation strategies. Future advancements in deepfake detection are likely to embrace multimodal techniques, integrating data sources such as audio, video, and contextual information. By fusing multiple modalities, including linguistic patterns, facial movements, and audiovisual consistency, detection systems can enhance their resilience against sophisticated deepfake manipulations [24]. The integration of explainable AI techniques into deepfake detection models will facilitate the interpretation of detection results and provide insight into the rationale behind classification decisions [25].
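As an illustration of the multimodal fusion idea, the sketch below, assuming PyTorch, combines a simple video branch and an audio branch by late fusion of their embeddings before a joint real/fake classifier. The branch architectures, input shapes, embedding sizes, and class names are illustrative assumptions, not a published detector.

```python
# Minimal late-fusion multimodal detector sketch (PyTorch assumed).
import torch
import torch.nn as nn

class VideoBranch(nn.Module):
    """Encodes a face crop (or averaged frames) into a small embedding."""
    def __init__(self, out_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, out_dim),
        )
    def forward(self, frames):                 # (B, 3, H, W)
        return self.net(frames)

class AudioBranch(nn.Module):
    """Encodes a raw waveform segment into a small embedding."""
    def __init__(self, out_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, out_dim),
        )
    def forward(self, waveform):               # (B, 1, samples)
        return self.net(waveform)

class MultimodalDetector(nn.Module):
    """Late fusion: concatenate modality embeddings, then classify real vs. fake."""
    def __init__(self):
        super().__init__()
        self.video = VideoBranch()
        self.audio = AudioBranch()
        self.head = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 2))
    def forward(self, frames, waveform):
        fused = torch.cat([self.video(frames), self.audio(waveform)], dim=1)
        return self.head(fused)

if __name__ == "__main__":
    model = MultimodalDetector()
    logits = model(torch.randn(2, 3, 112, 112), torch.randn(2, 1, 16000))
    print(logits.shape)                        # torch.Size([2, 2])
```

Late fusion keeps each modality encoder independent, so the video branch could be swapped for the frame-level CNN-LSTM sketched earlier, or the audio branch for a spectrogram-based encoder, without changing the fusion head.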
As deepfake generation techniques continue to advance, specialized detection systems based on generative adversarial networks are expected to gain prominence. By leveraging the principles of generative adversarial networks, detection models can adapt to the evolving landscape of deepfake creation and more effectively discern manipulated media from authentic content [26]. Future trends in deepfake detection will also involve intensified collaboration among researchers, industry stakeholders, and regulatory bodies to establish benchmarking frameworks and standardized evaluation protocols. These efforts are crucial for validating the effectiveness of detection methods and ensuring their consistent performance across diverse deepfake scenarios [27].
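The following is a minimal sketch of what such a standardized evaluation protocol can look like in code, assuming scikit-learn and NumPy: the same detector's scores are measured with accuracy and ROC-AUC on several benchmark splits so that results remain comparable. The dataset names, threshold, and random scores are placeholders standing in for a real detector's outputs.

```python
# Standardized evaluation sketch (scikit-learn and NumPy assumed).
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

def evaluate(scores: np.ndarray, labels: np.ndarray, threshold: float = 0.5) -> dict:
    """Score one benchmark split: `scores` are fake-probabilities, `labels` are 0/1."""
    preds = (scores >= threshold).astype(int)
    return {
        "accuracy": accuracy_score(labels, preds),
        "roc_auc": roc_auc_score(labels, scores),
    }

# Dummy per-dataset predictions; in practice these come from the detector under test.
rng = np.random.default_rng(0)
benchmarks = {
    "FaceForensics++": (rng.random(200), rng.integers(0, 2, 200)),
    "DFDC": (rng.random(200), rng.integers(0, 2, 200)),
}
for name, (scores, labels) in benchmarks.items():
    print(name, evaluate(scores, labels))
```

Reporting the same metrics with the same thresholds across datasets is what allows the cross-method comparisons that benchmarking initiatives aim to standardize.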
The social impact of deepfakes extends beyond technological advancements and directly influences public trust, political integrity, personal privacy, and economic stability. The integration of advanced AI-driven detection methods is therefore instrumental in mitigating the adverse effects of deepfakes on society. By understanding the societal implications and implementing effective countermeasures, stakeholders can work towards fostering a more resilient and trustworthy digital landscape in the face of evolving technological challenges. In short, the social impact of deepfakes is significant and far-reaching, affecting many aspects of society, including trust, politics, privacy, and the economy.

5. CONCLUSION
The emergence of explainable AI, multimodal detection approaches, ethical and regulatory frameworks, federated learning, privacy-preserving techniques, and human-in-the-loop approaches signifies a collective effort to fortify deepfake detection capabilities. By embracing these emerging trends, stakeholders can work towards fostering a more resilient and trustworthy digital landscape while addressing the societal impact of deepfakes. It is imperative to continuously refine benchmarking protocols and evaluation methodologies for the comprehensive assessment of detection models across diverse deepfake types and characteristics. The culmination of these future trends reflects a proactive and adaptive approach to combating the multifaceted challenges posed by deepfakes in the digital age. In conclusion, the societal impact of deepfakes is extensive and has implications for public trust, political integrity, personal privacy, and economic stability. Addressing this multifaceted challenge requires a nuanced understanding of the evolving technological landscape and a commitment to implementing effective countermeasures. By integrating advanced AI-driven detection methods and anticipating future trends in deepfake detection, stakeholders can strive towards fostering a more robust and trustworthy digital environment. Furthermore, the ethical considerations, collaborative frameworks, and innovative approaches exemplified in these future trends offer a pathway to enhancing the efficacy, transparency, and resilience of societal defenses against deepfakes.

REFERENCES
[1] T. T. Nguyen et al., "Deep learning for deepfakes creation and detection: A survey," Computer Vision and Image Understanding, vol. 223, 2022, doi: 10.1016/j.cviu.2022.103525.
[2] F. Juefei-Xu, R. Wang, Y. Huang, Q. Guo, L. Ma, and Y. Liu, "Countering malicious deepfakes: Survey, battleground, and horizon," International Journal of Computer Vision, vol. 130, no. 7, pp. 1678–1734, 2022, doi: 10.1007/s11263-022-01606-8.
[3] R. Gil, J. Virgili-Gomà, J. M. López-Gil, and R. García, "Deepfakes: Evolution and trends," Soft Computing, vol. 27, no. 16, pp. 11295–11318, 2023, doi: 10.1007/s00500-023-08605-y.
[4] D. Gamage, P. Ghasiya, V. Bonagiri, M. E. Whiting, and K. Sasahara, "Are deepfakes concerning? Analyzing conversations of deepfakes on Reddit and exploring societal implications," in CHI Conference on Human Factors in Computing Systems, 2022, pp. 1–19, doi: 10.1145/3491102.3517446.
[5] N. Sontakke, S. Utekar, S. Rastogi, and S. Sonawane, "Comparative analysis of deep-fake algorithms," International Journal of Computer Science Trends and Technology, vol. 11, no. 4, pp. 109–115, 2023.
[6] J. Pu et al., "Deepfake videos in the wild: Analysis and detection," in Proceedings of the Web Conference 2021 (WWW 2021), 2021, pp. 981–992, doi: 10.1145/3442381.3449978.
[7] D.-C. Stanciu and B. Ionescu, "Deepfake video detection with facial features and long-short term memory deep networks," in 2021 International Symposium on Signals, Circuits and Systems (ISSCS), 2021, pp. 1–4, doi: 10.1109/ISSCS52333.2021.9497385.
[8] H. H. Nguyen, J. Yamagishi, and I. Echizen, "Capsule-forensics networks for deepfake detection," in Handbook of Digital Face Manipulation and Detection, Cham, Switzerland: Springer International Publishing, 2022, pp. 275–301, doi: 10.1007/978-3-030-87664-7_13.
[9] N. Diakopoulos and D. Johnson, "Anticipating and addressing the ethical implications of deepfakes in the context of elections," New Media & Society, vol. 23, no. 7, pp. 2072–2098, 2021, doi: 10.1177/1461444820925811.
[10] S. Kaur, P. Kumar, and P. Kumaraguru, "Deepfakes: Temporal sequential analysis to detect face-swapped video clips using convolutional long short-term memory," Journal of Electronic Imaging, vol. 29, no. 3, 2020, doi: 10.1117/1.JEI.29.3.033013.
[11] Y. Mirsky and W. Lee, "The creation and detection of deepfakes," ACM Computing Surveys, vol. 54, no. 1, pp. 1–41, 2021, doi: 10.1145/3425780.
[12] B. K. Kumar and E. S. Reddy, "RAFT: Congestion control technique for efficient information dissemination in ICN based VANET," International Journal of Knowledge-Based and Intelligent Engineering Systems, vol. 25, no. 4, pp. 397–404, 2021, doi: 10.3233/KES-210083.
[13] S. Karnouskos, "Artificial intelligence in digital media: The era of deepfakes," IEEE Transactions on Technology and Society, vol. 1, no. 3, pp. 138–147, 2020, doi: 10.1109/tts.2020.3001312.
[14] R. Tolosana, R. Vera-Rodriguez, J. Fierrez, A. Morales, and J. Ortega-Garcia, "Deepfakes and beyond: A survey of face manipulation and fake detection," Information Fusion, vol. 64, pp. 131–148, 2020, doi: 10.1016/j.inffus.2020.06.014.
[15] R. Katarya and A. Lal, "A study on combating emerging threat of deepfake weaponization," in 2020 Fourth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), 2020, pp. 485–490, doi: 10.1109/I-SMAC49090.2020.9243588.
[16] T. Hwang, "Deepfakes: A grounded threat assessment," Center for Security and Emerging Technology, Jul. 2020, doi: 10.51593/20190030.
[17] N. N. Thaw, T. July, A. N. Wai, D. H. Goh, and A. Y. K. Chua, "Is it real? A study on detecting deepfake videos," Proceedings of the Association for Information Science and Technology, vol. 57, no. 1, 2020, doi: 10.1002/pra2.366.
[18] R. Chesney and D. K. Citron, "Deep fakes: A looming crisis for national security, democracy and privacy?," Lawfare, 2018. [Online]. Available: https://scholarship.law.bu.edu/shorter_works/33/
[19] D. Fallis, "The epistemic threat of deepfakes," Philosophy and Technology, vol. 34, no. 4, pp. 623–643, 2021, doi: 10.1007/s13347-020-00419-2.
[20] T. Dobber, N. Metoui, D. Trilling, N. Helberger, and C. D. Vreese, "Do (microtargeted) deepfakes have real effects on political attitudes?," International Journal of Press/Politics, vol. 26, no. 1, pp. 69–91, 2021, doi: 10.1177/1940161220944364.
[21] T. C. Helmus, "Artificial intelligence, deepfakes, and disinformation: A primer," Center for Security and Emerging Technology, pp. 1–23, 2022, doi: 10.7249/PEA1043-1.
[22] Y. Zhang, R. Hu, D. Li, and X. Wang, "Fake identity attributes detection based on analysis of natural and human behaviors," IEEE Access, vol. 8, pp. 78901–78911, 2020, doi: 10.1109/ACCESS.2020.2987966.
[23] R. Chesney and D. K. Citron, "Deep fakes: A looming challenge for privacy, democracy, and national security," California Law Review, vol. 107, no. 6, pp. 1753–1820, 2019, doi: 10.15779/Z38RV0D15J.
[24] M. R. Shoaib, Z. Wang, M. T. Ahvanooey, and J. Zhao, "Deepfakes, misinformation, and disinformation in the era of frontier AI, generative AI, and large AI models," in 2023 International Conference on Computer and Applications (ICCA), 2023, pp. 1–7, doi: 10.1109/ICCA59364.2023.10401723.
[25] I. Solaiman et al., "Evaluating the social impact of generative AI systems in systems and society," arXiv preprint, pp. 1–56, 2023, doi: 10.48550/arXiv.2306.05949.
[26] C. R. Leibowicz, S. McGregor, and A. Ovadya, "The deepfake detection dilemma: A multistakeholder exploration of adversarial dynamics in synthetic media," in Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 2021, pp. 736–744, doi: 10.1145/3461702.3462584.
[27] A. M. Almars, "Deepfakes detection techniques using deep learning: A survey," Journal of Computer and Communications, vol. 9, no. 5, pp. 20–35, 2021, doi: 10.4236/jcc.2021.95003.

BIOGRAPHIES OF AUTHORS
Venkateswarlu Sunkari is a research scholar in the Department of Computer Science and Engineering, Acharya Nagarjuna University, India. He received an M.Tech. in computer science and engineering from Jawaharlal Nehru Technological University, Kakinada, India, in 2010, and a B.Tech. in computer science and engineering from the same university in Hyderabad, India, in 2004. His research interests include deep learning, machine learning, image processing, data mining, and artificial intelligence. He can be contacted at email: sunkarivenkateswarlu@gmail.com.
Dr. Ayyagari Sri Nagesh obtained his Ph.D. in Computer Science and Engineering from JNTUH, Hyderabad. He teaches computer science at the RVR & JC College of Engineering, Guntur, India. He has authored more than fifty papers in international journals and ten papers at international conferences, and has chaired sessions at numerous national and international conferences. Two scholars have been awarded Ph.D. degrees under his guidance, and six more are pursuing theirs. He holds two patents. His research interests include natural language processing, deep learning, and image processing. He is an active member of ACM and a life member of CSI and ISTE. He can be contacted at email: asrinagesh@gmail.com.