In a courtroom moment that quickly caught attention across the AI world, Tesla chief Elon Musk revealed that his startup xAI has, at least in part, relied on models from the San Francisco-based AI giant OpenAI to improve its own systems. The statement came during testimony in a federal court in California on Thursday, adding another twist to the already tense rivalry between the two sides.

What Musk Said In Court

When asked directly about "model distillation," Musk didn't deny the concept. Instead, he explained it in simple terms, saying it involves using one AI model to train another. But when the questioning turned sharper, specifically about whether xAI had used OpenAI's models, Musk's response was less direct. "Generally all the AI companies do it," he said. When pushed for a clearer answer, he admitted, "Partly." He went on to add, "It is standard practice to use other AIs to validate your AI." That single exchange has sparked considerable debate, especially given how sensitive the topic has become within the industry.

What Is Model Distillation, Really?

For those unfamiliar, model distillation is essentially a shortcut. A large, powerful AI system acts as a teacher, passing its knowledge to a smaller, more efficient student model. The technique is widely used inside companies to optimise performance and reduce costs. But things get complicated when one company's model is used to train another's. That is where the line between innovation and intellectual-property concerns starts to blur.

A Growing Industry Debate

This isn't the first time distillation has stirred controversy. OpenAI, the maker of ChatGPT, and the US-based AI company Anthropic have both raised concerns about competitors, especially some Chinese firms, allegedly using the method to replicate their models' capabilities. Even Google has stepped in, introducing safeguards against what it calls "distillation attacks," describing them as a form of intellectual-property misuse. In its own blog post, Anthropic acknowledged the dual nature of the practice, stating, "Distillation is a widely used and legitimate training method. For example, frontier AI labs routinely distill their own models to create smaller, cheaper versions for their customers. But distillation can also be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently."
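For readers who want the teacher-student idea made concrete, the core of distillation can be sketched in a few lines. This is a toy illustration only, not any lab's actual pipeline: it assumes a classifier that outputs raw scores (logits), softens the teacher's scores into a probability distribution with a temperature, and measures how far a student's distribution strays from it. Minimising that gap is what trains the student to imitate the teacher.

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw logits into probabilities; higher temperature gives a softer distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened outputs and the student's.

    The student is trained to minimise this quantity, so it inherits not just
    the teacher's top answer but how the teacher weighs the alternatives.
    """
    teacher = softmax(teacher_logits, temperature)
    student = softmax(student_logits, temperature)
    return sum(p * math.log(p / q) for p, q in zip(teacher, student) if p > 0)

# A student that matches the teacher exactly incurs (near-)zero loss;
# a student that disagrees incurs a positive loss the training would shrink.
teacher_scores = [5.0, 1.0, 0.5]
aligned_loss = distillation_loss(teacher_scores, [5.0, 1.0, 0.5])
mismatched_loss = distillation_loss(teacher_scores, [0.5, 1.0, 5.0])
```

The controversy in the article turns on where `teacher_logits` come from: distilling your own model is routine engineering, while querying a rival's model to generate those teacher outputs is what OpenAI, Anthropic, and Google object to.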