AI Failures, Limitations, Incapabilities, and Threats
SUBMITTED BY
Abul Hasan Farrukh, Jayant Kumar Mishra, Atul Chauhan
UNDER THE SUPERVISION OF
DR. VIJAY PRAKASH
MASTER OF COMPUTER APPLICATIONS
SCHOOL OF COMPUTER APPLICATIONS
BABU BANARASI DAS UNIVERSITY
BBD CITY, FAIZABAD ROAD, LUCKNOW (U.P.) - 226028, INDIA
Introduction:
Artificial intelligence (AI) is steadily weaving itself into military, economic, and social life, reshaping the foundations of each in ways that are already visible around us. Given the scale and reach of these changes, it has become crucial to ensure that AI systems are built to be not only technically reliable but also ethically responsible and socially beneficial. One of the main goals of this discussion is to bring together the scattered knowledge about risks connected to AI, sharpen the key ideas so they are clearer, and lower the barrier to engaging with these issues by presenting them in a systematic yet accessible way.
To make sense of AI safety, it helps to place the issue in the wider context in which these systems are created and used. The choices and interactions of people and institutions, whether developers, policymakers, military leaders, or other influential actors, will shape this landscape in decisive ways. Since AI touches so many areas, established frameworks offer useful tools for examining the actors involved, their relationships, and the wide-ranging consequences of AI. These frameworks are intentionally flexible, allowing comparisons across different kinds of intelligence, whether individuals, corporations, governments, or autonomous machines.
Over the past ten years, the growth of AI has been striking. It has steadily moved into everyday use while becoming more accessible in commercial settings, raising the expectations of organizations eager to benefit from it. Unsurprisingly, the pace of adoption has increased sharply. Yet research shows a sobering reality: many AI projects fail to deliver what was promised. For practitioners and researchers alike, this makes it all the more important to understand what separates successful projects from unsuccessful ones so that the real potential of AI can be unlocked.
Although there is already a large body of research on why information systems (IS) projects succeed or fail, particularly in areas such as Enterprise Resource Planning (ERP), AI brings its own challenges that make those older models incomplete. The complexity of AI algorithms, combined with the sweeping organizational changes that usually accompany the introduction of AI, means the factors that predict success need to be rethought and expanded. In other words, traditional IS success models must be adjusted to fit the realities of AI.