“How dangerous AI is depends on its utilisation. It can be perceived as a technology that facilitates regular activities and advanced information processing. From that perspective, it can be an efficient tool for performance. AI can be utilised to complement and enhance human abilities for performing tactical and strategic functions. This can free up time and provide knowledge to HR personnel to improve the functions, take key decisions and increase human touch in employee interactions,” stated Professor Smita Chaudhry, Department of Human Resources at FLAME University.
She further commented: “Raising concerns about AI while working at Google would have given rise to some ethical dilemmas for Mr Geoffrey Hinton. However, leaving Google would close the door on the opportunity to set best practices and globally recognised standards for AI use, and to regularly disseminate information about its dangers from a credible platform.”
As AI becomes increasingly integrated into various aspects of our lives, it is essential to regulate its use to ensure it is harnessed for positive outcomes. Without proper regulation, there is a risk of AI being misused, leading to negative consequences such as job losses, privacy breaches, and even accidents resulting from biased decision-making.
“I do share Geoffrey Hinton’s concerns about the missing focus on ethics in the growth of AI. However, there is a consensus among most experts that the only part of AI witnessing the kind of growth that is triggering these discussions is ‘supervised learning’, more specifically generative AI (the G in GPT). There are far too many domains and problems that fall outside the scope of supervised learning,” said Srinivas Vedantam, Director, OdinSchool.
“While there has been some advancement in the unsupervised learning space, take Cicero for example, such efforts are few and far between, and they don’t attract the same attention as ChatGPT. The recent protests from the WGA are only adding fuel to the argument that AI is evil. However, we are nowhere near being threatened by AI. In fact, if regulated properly, AI can be used to power humanity’s efforts towards fulfillment, sustainability, and seeking a larger purpose for our existence,” he added.
As technology continues to evolve and disrupt various industries, it is essential to recognise that with innovation come grey areas. Mr Daksh Sharma, Director of Iffort, explained: “If you go back to the 1980s and look at the era of calculators, what’s common is that any new piece of disruptive tech always needs collective action from all the stakeholders.”
“Generative AI is evolving, and what we are seeing today is going to transform radically; it will only get better in the months and years to come. Data privacy, hallucinated output, and deepfakes are the biggest concerns, but the potential benefits far outweigh the risks overall, and that is what the whole ecosystem needs to focus on,” he added.
One area where AI has had a profound impact is the job market. While it is true that AI has created new jobs in areas like data science, machine learning, and robotics, it has also displaced many traditional jobs. Director of HR at the Sharda Group, Col. Gaurav Dimri, said, “While AI opens hitherto unknown frontiers and is poised for widespread proliferation across various domains, it does come with proverbial risks. AI-based data algorithms can become manipulative and generate distorted information. There is also a likelihood of job losses, particularly in the data interpretation, IT, and ITES sectors.
“Even sub-domains in finance, marketing, consumer behavior, HR, tertiary workforce in medicine, legal, and edu-tech could face job cuts as AI-enabled systems take over certain roles. However, the greater concern is the risk of AI-based machines acquiring greater control, particularly in the absence of effective regulatory mechanisms. It is likely that these concerns, along with others, have led Dr. Geoffrey Hinton to call for a renewed focus on establishing effective procedures for ensuring the ethical application of AI to harness its optimal potential,” he stated.
Much as we had to take Covid vaccines to protect ourselves, we need to learn and master AI tools to safeguard ourselves from the potential negative consequences of the technology. It is important not to waste time pondering the “what ifs” and instead act quickly to learn and adapt.
“I think an interesting analogy to draw, given the current state of AI and what’s about to come, is to compare it to the outbreak of the Covid pandemic. Once we know something as viral in nature as Covid is out there, it’s foolish to waste time over-analysing it. The way we should look at it is this: the only thing in our control is to vaccinate. With Covid we had to get vaccinated to protect ourselves. With AI we have to keep learning and be our own antidote, using these technologies to upskill,” suggested Azaan Sait, the founder of The Hub Bengaluru, The Hubverse, and The HubCo.
“Also, AI will spread with the same velocity with which Covid took over the world. Whether we are an organisation or an individual, it is time to act, master the tools available, and vaccinate ourselves against the negative consequences that might come from this tech,” he added.
AI's rapid development has sparked growing concerns about its potential negative impacts. One of the most pressing issues is the widespread dissemination of fake news, images, videos, and text, which can cause people to lose their ability to differentiate between what's true and what's not. Yet, with the dawn of the AI age, a realignment between humans and machines is necessary. By delegating routine tasks to machines, people can devote their time and attention to more sophisticated endeavors.
“AI systems could eventually learn unexpected, dangerous behaviour; such systems could eventually power killer robots, and AI could cause harmful disruption to the labour market. However, as we enter the AI era, it is a human-machine realignment that needs to happen, wherein daily, trivial chores and tasks are done by the machine, leaving humans to do more elevated work. That has happened at every stage of previous revolutions: agriculture, manufacturing, and software,” stated President of 3AI, Sameer Dhanrajani.
“Dr Hinton is far from the first artificial intelligence expert to sound the alarm on the dangers of the AI that they have built. In recent months, two major open letters warned about the ‘profound risks to society and humanity’ that it poses, and were signed by many of the people who had helped create it. However, in the future, the potential of solving large, complex, and unresolved problems at scale with AI far exceeds the dangers, which require regulation and governance,” he concluded.