What is the story about?
In the wake of International Women’s Day, as the media went viral with discounts on diamonds and delicacies, I decided to try ChatGPT to look up some job options for both “a retired man” and “a retired woman”.
The comprehensive summaries, followed by a detailed list of possible jobs, were quite an eye-opener. For a retired man, it suggested, “the best jobs are typically those that are flexible, intellectually engaging, socially meaningful, and not physically demanding”.
For a retired woman, it suggested that “the best jobs are those that are flexible, need low physical strain, involve meaningful social interaction or intellectual engagement, involve use of life experience and skills”.
Though similar on the face of it, the response for women subtly underplayed the need for “intellectual engagement”. More differences emerged as I dug into the detailed lists that followed.
“Part-time school tutor” heads the list for women, with options for online tutoring in languages, maths, music and crafts, followed by teaching hobbies such as art, knitting and cooking.
For men, the top suggestion is “Consulting and advisory roles”, followed by “Teaching, mentoring and training”, with options for a part-time college lecturer or a school tutor in Maths, Science and Languages!
Other than the tacit assumption that women cannot teach at the college level, the absence of Science from the subjects taught by women is nothing short of irksome.
The list reeks of obvious stereotyping, as small business, entrepreneurship and writing jobs are not even offered as options for women, while the hospitality industry and creative jobs involving art and craft are possibly not considered manly enough for men!
Intrigued, I probed further. I asked it, independently, to suggest some travel itineraries for an elderly male and a female solo traveller. And I was again taken aback by its assumptions.
It suggested the mountains of Himachal and Kashmir for the men, but for women, the choice was to head towards the relaxing backwaters of Kerala. More stereotyping!
The least one would expect is a few clarifying questions before advice is dispensed. But that’s not how popular conversational systems like ChatGPT or Gemini are designed!
Discerning reader, what bothers me are the staggering usage statistics of Large Language Models (LLMs).
ChatGPT reports getting 2.5 billion queries daily from all around the world. Besides replacing traditional search, ChatGPT and Gemini are now the go-to places for practical guidance on almost everything related to professional and personal lives, including education and tutoring, brainstorming and creativity, and personalized advice on health, relationships, shopping, and more.
Even national advertisements are prodding people to turn to them for guidance on professional and personal matters. The danger is imminent.
Years of work to spread the message of equity and equality for humanity could be undone by mindless stereotyping, and the effects can be catastrophic! Most end users blindly trust digital content and accept it without verifying its validity.
A large-scale study of LLMs conducted at Stanford Graduate School of Business reveals how AI is perpetuating inaccurate gender and age stereotypes, influencing and reinforcing biased hiring practices and workplace perceptions.
The ease of content generation makes manipulation by interested parties easier. AI-generated workplace images often depict women in positions of responsibility several years younger than they actually are.
As a consequence, people exposed to these images begin associating certain jobs with specific genders. The research further reveals that ChatGPT-generated resumes for women present them as less experienced and younger.
In contrast, resumes created for older men with the same initial information tend to receive higher ratings. Systematic explorations into multiple LLMs reveal that all of them showed biased and distorted depictions of older women. Clearly, while the digital world doesn’t augur well for women, it is worse for older women.
Deeply disturbing is the reinforcement cycle created as organisations and individuals unconsciously absorb these biases into their daily actions.
While initially the biases of AI algorithms could be attributed to human-generated data, it is now impossible to pinpoint the source of a specific bias, as more and more AI-generated content floods the web.
The solutions offered mostly focus on applying filters that block material once these are flagged as biased or stereotypical.
However, without a causal basis for defining and detecting bias, which itself manifests in different forms across different regions globally, this cannot be addressed comprehensively.
Until such solutions are found, there need to be mass awareness campaigns about the potential flaws in AI-generated answers and the risk of manipulation. Prudence and vigilance are the keywords to remember while dealing with these technologies.
They promise to make life easy, but in reality, they may be making us stooges at the hands of unknown devils.