Companies have to figure out the skills they need to reap AI benefits

Getty Images/d3sign

Organizations need to build new skillsets for a workplace that will increasingly tap into artificial intelligence (AI), but they must first figure out how they plan to benefit from the technology. 

As many as 97% of workers believe companies should prioritize AI skills in their employee development journey, according to a survey released by Salesforce.com, which polled 11,035 working adults in February across 11 markets — such as Singapore, India, Australia, France, and the US — on AI digital skills.

Also: How to use ChatGPT: Everything you need to know

All respondents in India said organizations should prioritize AI skills in their employee development plans, while 98% in Singapore and 97% in Australia said likewise. 

Globally, 61% of respondents said they are already aware of how generative AI would impact their work, including 70% in Singapore and 53% in Australia. This figure was as much as 93% in India. 

Also: How to use ChatGPT to write code

However, just one in 10 of all survey respondents currently carry out daily work tasks that involve AI. This proportion reached 15% in Singapore and just 7% in Australia, while about 40% in India believe their current daily work involves AI. 

In Singapore, 57% believe AI is among the most in-demand digital skills today. And while 51% in the Asian city expressed concerns about generative AI replacing jobs, 72% said they are excited about using it. Another 57% cited ethical AI and automation skills as among the fastest-growing and in-demand skills today. 

Some 63% of respondents in Singapore said their organization is considering ways to use generative AI, compared to 46% in Australia and 91% in India. Worldwide, 67% said their company is exploring ways to tap into the technology. 

Figuring out exactly how they plan to use AI should be the first step — and several organizations still need to work this out, according to Terence Chia, cluster director of the digital industry and talent group at Singapore's Infocomm Media Development Authority (IMDA). 

The global pandemic, for instance, drove the need for remote work and telecommuting, compelling companies to adapt, Chia said during a panel discussion at Salesforce’s World Tour Essentials Asia. Now with the move to the cloud, AI capabilities are increasingly baked into applications, whether companies know how to use them or not. 

Also: ChatGPT is the most sought out tech skill in the workforce, says learning platform

Chia said it’s imperative businesses identify the key issues, so they can move quickly and figure out whether they have the technology stack to make progress. Any company would have to build the skillsets and culture to support this progress.

“For our workforce to be AI-ready, we need to…know how to use AI, in a general sense, [which] could require skills like prompt engineering [and] enabling us to ask the right questions of AI,” he said. 

“We [also] need to be able to apply AI to sectoral use cases. This may require industry-specific digital skills for areas like healthcare, financing, and manufacturing.”

Chia continued: “We need to ensure we leverage AI to complement what our people can do. We should focus less on what AI is going to take over from us and more on how it will generate new opportunities for us.” 

Damien Joseph, associate dean of Nanyang Technological University's Nanyang Business School, also noted the impact the rapid emergence of generative AI has already had on the education sector, with students using tools such as ChatGPT without any formal training. 

“From an education perspective, we can either resist AI or we can figure out what are the skills necessary for people to leverage the full potential of it — either as a tool, as a collaborator, or a team member,” Joseph said. 

“For students, we are seeing the need to sensitize the ethical use of generative AI. For professionals, it’s not just technical AI skills that they need, but more importantly the general skills that can help them use the AI technology in their day-to-day work.”

Also: I used ChatGPT to write the same routine in these ten obscure programming languages

Some legal knowledge, for instance, will be important in the use of generative AI to work through potential issues related to copyright or proprietary rights. 

Joseph said that, while it's difficult to predict where emerging and fast-evolving technologies such as AI are headed, there are fundamental principles and skillsets on which to develop an approach.

In its efforts to drive AI adoption and skills, Singapore has stressed the need to build a framework based on trust and transparency. Amid the ongoing AI craze — and with tech vendors electing to cut AI ethics teams as part of company-wide layoffs — ZDNET asked if regulations were necessary to ensure businesses adopted ethical AI practices. 

Chia said there are already some laws in place, such as the mandate for organizations in Singapore to appoint a data protection officer. This individual is tasked with ensuring the organization complies with the country's Personal Data Protection Act.

Also: How does ChatGPT work?

While the regulation pertains to personal information, rather than AI specifically, it remains crucial because data is the bedrock of AI, he said. 

He added that it’s important to continue monitoring market developments, as generative AI could surface new issues and complexities related to the use of data. Such vigilance is necessary to ensure the ecosystem grows “responsibly”, without putting unnecessary crimps on growth and opportunities. 

Chia said Singapore had introduced several initiatives to guide businesses on their use of AI, including a testing framework and toolkit, A.I. Verify, to help companies demonstrate their “objective and verifiable” use of AI.

Sujith Abraham, Salesforce’s Asean senior vice president and general manager, said his company has safeguards in place to ensure the ethical use of AI and data in its product-development processes. Salesforce has a global team dedicated to establishing the necessary safety checks, Abraham said.

Salesforce also provides resources for employees to assess whether a task or service should be carried out based on the company’s guidelines on ethics. Its AI-powered Einstein Vision, for example, cannot be used for facial recognition.

Abraham added that Salesforce has a set of guidelines specific to generative AI, based on its Trusted AI Principles, which focus on the “responsible development and implementation” of generative AI.

“AI technology has been around for a long time, but the missing piece has always been the ability to use it to achieve personalization at scale,” he said. “It is critical this rapid pace of development is complemented with the necessary ethical guardrails and guidance.”

Also: Generative AI can make some workers a lot more productive, according to this study

Salesforce last week unveiled new AI capabilities across its product range, including Einstein GPT, a generative AI CRM technology that enables users to create and tweak automation processes using a conversational interface. 

Its collaborative platform Slack has also been integrated with a new conversational feature, dubbed Slack GPT. It taps generative AI technology to allow users to build workflows with the use of prompts, without the need for coding. 
