Leveraging Artificial Intelligence (AI) as a Strategic Defense against Deepfakes and Digital Misinformation
DOI: https://doi.org/10.38124/ijsrmt.v3i11.76
Keywords: Deepfakes, Digital Misinformation, Artificial Intelligence, Media Trust, AI Detection, Ethical Considerations, Media Literacy, Technological Analysis, Case Studies, Collaborative Efforts
Abstract
Amid rapid technological advancement, deepfakes and digital misinformation have emerged as both a powerful tool and a formidable challenge. Deepfakes, realistic yet fabricated media generated through artificial intelligence, threaten media credibility, public perception, and democratic integrity. This study examines the intersection of AI technology with these concerns, highlighting AI's role both as a driver of innovation and as a defense mechanism. Drawing on an in-depth literature review, an analysis of current technologies, and case studies, the research evaluates AI-based strategies for identifying and addressing misinformation. It also considers the ethical and policy implications, calling for greater transparency, accountability, and media literacy. By examining present AI techniques and anticipating future trends, the paper underscores the importance of collaborative efforts among technology companies, government agencies, and the public to uphold truth and integrity in the digital age.
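To make the defensive side of this argument concrete, the sketch below illustrates one common form that AI-based detection takes in practice: a frame-level binary classifier that scores an image as real or synthetic. This is a minimal illustration rather than the method evaluated in this paper; the ResNet-18 backbone, the class ordering, and the placeholder frame are assumptions chosen for brevity, and a deployed detector would be fine-tuned on a labelled deepfake corpus and combined with other signals.

```python
# Illustrative sketch only (not this paper's method): frame-level deepfake
# screening with a binary CNN classifier. Backbone, class ordering, and the
# placeholder input frame are assumptions made for brevity.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet-style preprocessing for a ResNet input.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def build_detector() -> nn.Module:
    """Two-class (real vs. synthetic) classifier; in practice the backbone
    would be fine-tuned on a labelled deepfake dataset."""
    backbone = models.resnet18(weights=None)            # untrained weights here
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # replace the 1000-class head
    return backbone

@torch.no_grad()
def score_frame(model: nn.Module, frame: Image.Image) -> float:
    """Return the model's estimated probability that the frame is synthetic
    (class index 1 is assumed to mean 'fake')."""
    model.eval()
    x = preprocess(frame).unsqueeze(0)    # shape (1, 3, 224, 224)
    probs = torch.softmax(model(x), dim=1)
    return probs[0, 1].item()

if __name__ == "__main__":
    detector = build_detector()
    frame = Image.new("RGB", (256, 256))  # placeholder frame standing in for a video still
    print(f"estimated synthetic probability: {score_frame(detector, frame):.3f}")
```

In realistic pipelines, such per-frame scores are typically aggregated across an entire video and cross-checked against other cues (for example, audio-visual consistency or provenance metadata) before any moderation or labelling decision is made.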
License
Copyright (c) 2024 International Journal of Scientific Research and Modern Technology
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.