The Unexpected Journey
Embarking on a career in Artificial Intelligence might seem daunting, especially without a traditional computer science or engineering background. Priyanka Kuvalekar's path shows it can be done: she transitioned from a five-year architectural studies program in India to become a Senior UX Research Lead at Microsoft, focusing on AI experiences within Microsoft Teams Calling. Her professional journey began with an architectural internship during her final year of studies. Upon graduation, she faced a pivotal decision: continue down the architectural path or pivot towards the rapidly evolving digital landscape. That crossroads led her to a focused three-month user experience course, which in turn inspired her to pursue a Master's degree in User Experience and Interaction Design. The move brought her to Philadelphia in January 2018 and marked the start of her immersion in the tech industry, beginning with a UX researcher internship at Korn Ferry. After a year she was offered a full-time position, which she held until 2021, laying the groundwork for her future in the field.
AI Integration Starts Here
Priyanka's direct engagement with AI began during her tenure at Cisco, where she held a UX Research Lead position for over three and a half years before her move to Microsoft. At Cisco, she spearheaded projects involving AI-driven features for Webex, specifically focusing on meeting and messaging functionalities. Recognizing the necessity of a deeper understanding of AI's underlying mechanisms, she proactively sought out certifications and training. This self-driven upskilling involved exploring generative AI, agentic AI design patterns, large language models, and specialized methods for evaluating AI experiences from a research perspective. Furthermore, she expanded her knowledge base by undertaking courses on UI design from prominent platforms like Google Skills, Microsoft Training, and DeepLearning.AI. This comprehensive approach was crucial for grasping how generative AI could be effectively integrated into her project work, bridging the gap between user needs and technological capabilities.
Lesson 1: Continuous AI Evaluation
A fundamental insight Priyanka gained is the critical need for ongoing evaluation of AI systems, rather than treating testing as a one-off event. The nature of AI demands continuous assessment to ensure it consistently delivers dependable and trustworthy user experiences. This involves designing and conducting qualitative studies specifically to understand how AI-driven conversations perform across a diverse range of user demographics. Through such research, subtle yet significant issues like inconsistencies in tone, misinterpretations of user intent, and problematic pacing in AI responses were identified. By meticulously uncovering these real-world friction points, researchers can effectively refine AI systems, making them more reliable, inclusive, and user-friendly. This iterative process of evaluation is paramount for the ethical and effective deployment of AI.
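To make "continuous evaluation" concrete, here is a minimal sketch of how a researcher might track rubric results from qualitative sessions over time, broken out by participant segment so that issues affecting one group do not disappear into a global average. The class, field, and function names below are hypothetical illustrations, not Priyanka's actual tooling or any Microsoft or Cisco process.

```python
# Minimal sketch: aggregate rater judgments of AI conversation transcripts per segment.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Transcript:
    segment: str          # demographic or usage segment the participant belongs to
    tone_ok: bool         # rater judgment: tone stayed consistent and appropriate
    intent_matched: bool  # rater judgment: the AI addressed what the user actually asked
    pacing_ok: bool       # rater judgment: responses arrived at a comfortable pace

RUBRIC = ("tone_ok", "intent_matched", "pacing_ok")

def score_by_segment(transcripts: list[Transcript]) -> dict[str, dict[str, float]]:
    """Compute rubric pass-rates per segment so regressions surface per group."""
    segments: dict[str, list[Transcript]] = {}
    for t in transcripts:
        segments.setdefault(t.segment, []).append(t)
    return {
        seg: {crit: mean(float(getattr(t, crit)) for t in ts) for crit in RUBRIC}
        for seg, ts in segments.items()
    }

if __name__ == "__main__":
    sample = [
        Transcript("screen-reader users", True, False, True),
        Transcript("screen-reader users", True, True, False),
        Transcript("non-native speakers", False, True, True),
    ]
    for segment, scores in score_by_segment(sample).items():
        print(segment, scores)
```

Rerunning the same pass after every model or prompt change is what turns a one-off test into the ongoing evaluation the lesson describes.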
Lesson 2: Accessibility in AI
Another significant takeaway emphasizes the dual role AI can play: either lowering barriers or inadvertently creating new obstacles, particularly through an accessibility lens. AI has the potential to simplify complex tasks and significantly reduce challenges for individuals with disabilities by automating certain processes. However, if not conceived and implemented with accessibility as a core principle, AI can foster new forms of inequity. Priyanka stresses that accessibility and AI are intrinsically linked and cannot be treated as separate concerns. Her approach involves actively including individuals with disabilities in AI research protocols and rigorously assessing how AI interfaces with essential assistive technologies, such as screen readers and keyboard navigation systems, to ensure equitable access for all.
Lesson 3: Fluency Over Depth
Priyanka's experience highlights that breaking into the AI field doesn't necessitate hands-on development of the technology itself. Instead, it requires a robust understanding sufficient to engage meaningfully with technical teams and translate user needs effectively. The key is developing 'fluency' – grasping the core concepts, understanding the limitations of technologies like large language models, and constructing evaluation frameworks that account for these constraints. This level of understanding empowers researchers to pose pertinent questions to engineers, design studies that accurately measure user trust, reliability, and consistency, and collaborate effectively with product managers to define success metrics for AI-powered features. It's about being the crucial bridge between complex technology and human experience.
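One small example of that fluency: knowing that large language models can answer the same question differently on different runs, and designing a study that measures it. The sketch below assumes a hypothetical `ask_model` callable standing in for whatever chat API a team actually uses; it is not a real library call or anyone's production framework.

```python
# Sketch of a consistency probe: ask the same question several times and see how often
# the (normalized) answers agree. `ask_model` is a stand-in, not a real API.
from collections import Counter
from typing import Callable

def consistency_rate(ask_model: Callable[[str], str], prompt: str, runs: int = 5) -> float:
    """Fraction of runs returning the most common answer; 1.0 means identical every time."""
    answers = [ask_model(prompt).strip().lower() for _ in range(runs)]
    most_common_count = Counter(answers).most_common(1)[0][1]
    return most_common_count / runs

if __name__ == "__main__":
    import random
    # Toy stand-in model that flips between two phrasings of the same answer.
    fake_model = lambda p: random.choice(["Mute is under More options.", "Use the Mute button."])
    print(f"Consistency over 5 runs: {consistency_rate(fake_model, 'How do I mute?'):.2f}")
```

A researcher does not need to know how the model produces its answers to run this kind of probe; they only need to know that variability exists and matters for user trust.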
Leveraging Non-Traditional Strengths
A unique perspective, often gained from non-traditional backgrounds, can actually be a significant advantage when entering the AI domain. The advice is to focus on the intersection of AI and people, rather than solely on the technical coding aspects. Prioritize understanding how AI manifests in products and the actual user experience. A practical way to contribute value is by defining and shaping what constitutes 'quality' for an AI feature. This involves collaborating with product managers to address critical questions, such as ensuring the AI stays within its designated scope, handles interruptions gracefully, and is inclusive across various languages and dialects. These nuances are frequently overlooked when AI is viewed purely as a technical system, yet they are vital for building user trust. Framing these considerations in actionable terms can make a non-technical professional indispensable.
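One way to frame those considerations in actionable terms is to write the quality definition down as concrete, checkable questions a product manager and researcher can review together. The dimensions and wording below are illustrative, not a standard or anyone's official rubric.

```python
# Hypothetical quality definition for an AI feature: each dimension maps to checkable questions.
QUALITY_DEFINITION = {
    "scope": [
        "Does the AI decline requests outside the feature's stated purpose?",
        "Does it avoid claiming capabilities it does not have?",
    ],
    "interruption handling": [
        "If the user cuts in mid-response, does the AI stop and re-orient?",
        "Is partially delivered information summarized afterwards?",
    ],
    "language inclusivity": [
        "Are key flows tested with non-native phrasing and regional dialects?",
        "Do refusals and error messages read clearly in every supported language?",
    ],
}

def print_review_checklist(definition: dict[str, list[str]]) -> None:
    """Render the quality definition as a checklist a PM and researcher can walk through."""
    for dimension, questions in definition.items():
        print(f"[{dimension}]")
        for q in questions:
            print(f"  [ ] {q}")

if __name__ == "__main__":
    print_review_checklist(QUALITY_DEFINITION)
```

The value is less in the script than in forcing the team to agree, in writing, on what "good" means before the feature ships.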
Building an 'AI-Plus-People' Portfolio
When applying for roles, hiring managers look beyond a candidate's awareness of AI; they want evidence that the candidate has helped shape AI responsibly and made it more usable. Building a strong portfolio is crucial: showcase evaluation frameworks, assessment rubrics, detailed study results, and concrete examples of how user insights directly influenced decisions. Even without access to large-scale corporate projects, individuals can conduct their own focused studies on publicly available AI tools to demonstrate analytical and strategic thinking. Actively volunteering for projects that integrate AI into existing products is also highly beneficial. It allows practical engagement with questions like: What specific functions should the AI perform? How should it behave in different scenarios? Can it clearly articulate its capabilities and limitations? And, crucially, how does it recover from errors? These are precisely the kinds of questions that researchers and product thinkers with diverse backgrounds are exceptionally well-suited to answer.
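A self-run study like the one suggested above can also yield a tangible portfolio artifact. As a rough sketch, a researcher might log each prompt, the observed behavior, and whether the tool stated its limits or recovered from an error, then share the summarized results. The field names and CSV layout here are my own illustration, not a prescribed method.

```python
# Sketch of a self-run study log for a publicly available AI tool.
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class Observation:
    prompt: str
    behavior_note: str
    stated_its_limits: bool     # did the tool say what it can and cannot do?
    recovered_from_error: bool  # after a failure, did it offer a usable next step?

def save_study_log(observations: list[Observation], path: str) -> None:
    """Write observations to CSV so results can be summarized and shared in a portfolio."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(Observation)])
        writer.writeheader()
        writer.writerows(asdict(o) for o in observations)

if __name__ == "__main__":
    log = [
        Observation("Summarize a 2-hour call", "Refused politely, suggested splitting the file", True, True),
        Observation("Translate a slang-heavy message", "Produced a literal, confusing translation", False, False),
    ]
    save_study_log(log, "ai_tool_study.csv")
```

Even a modest log like this demonstrates exactly the habits hiring managers are looking for: structured observation, clear criteria, and conclusions grounded in evidence.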














