AI's Unpredictability Problem
Mark Cuban, the well-known investor, has pointed out a significant flaw in artificial intelligence, particularly in enterprise applications. Contrary to the belief that AI will soon become an all-knowing entity, Cuban argues that its output is only as reliable as the human who verifies it. The core weakness, he says, is inconsistency: unlike traditional software, which follows strict logical rules, Large Language Models (LLMs) generate responses probabilistically. They essentially predict the next word or action, so the same question can yield different answers on different runs. For businesses that require precision in their operations, this inherent unreliability is a substantial risk and liability, and it challenges the notion of AI as a fully autonomous problem-solver.
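The effect Cuban describes can be illustrated with a toy sketch. The distribution below is invented for illustration, a stand-in for the probabilities a real LLM assigns to candidate next tokens; the point is only that sampling from a probability distribution (as temperature-above-zero decoding does) can return different answers to the identical prompt, while a deterministic program cannot.

```python
import random

# Hypothetical next-token probabilities for some prompt.
# These numbers are invented for illustration, not from any real model.
NEXT_TOKEN_PROBS = {
    "Paris": 0.80,
    "France": 0.12,
    "the": 0.05,
    "Lyon": 0.03,
}

def sample_next_token(probs, rng):
    """Draw one token at random, weighted by probability --
    roughly how an LLM decodes with temperature > 0."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

def greedy_next_token(probs):
    """Always pick the most likely token -- deterministic,
    like decoding at temperature 0."""
    return max(probs, key=probs.get)

rng = random.Random()  # unseeded: results can differ run to run
samples = [sample_next_token(NEXT_TOKEN_PROBS, rng) for _ in range(5)]
print("sampled:", samples)  # may vary between runs
print("greedy: ", greedy_next_token(NEXT_TOKEN_PROBS))
```

Even here, roughly one sampled answer in five will not be the most likely token, which is the "variation on identical questions" Cuban flags as a liability for businesses that need repeatable results.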
Reality Check for Doomers
Cuban uses this technical limitation to counter the so-called 'AI Doomers', who worry that AI will rapidly develop consciousness and take over the world. If an AI cannot even grasp that identical queries should receive consistent answers, he contends, it certainly does not comprehend the real-world implications of what it generates; the inconsistency itself betrays a lack of true understanding. Cuban therefore advocates strongly for specialized domain knowledge, arguing that deep, focused expertise in a particular field is becoming increasingly indispensable. As AI evolves, he suggests, the human capacity to judge and critically evaluate AI-generated information is not diminishing in value but growing.
Human Oversight in Coding
The importance of human involvement in AI-driven processes is further illustrated by practices at major tech companies. Sundar Pichai, CEO of Google, and Dara Khosrowshahi, CEO of Uber, have both affirmed the role of human engineers in approving AI-generated code. Pichai shared that approximately 75% of new code at Google is now created by AI, up from 50% the previous autumn, but emphasized that every line of it is reviewed and approved by human engineers. Similarly, Khosrowshahi said that roughly 10% of Uber's code is written by AI and is likewise subject to engineer approval. Even in areas where AI excels at generation, human oversight remains a critical component for ensuring accuracy, safety, and adherence to complex requirements.














