Artificial intelligence (AI) is a "minefield" for employees and a "national conversation" is required on how it should be handled in workplaces, according to the director of an HR body.
Director of CIPD Ireland Mary Connaughton told BreakingNews.ie: "It really is a minefield. A lot of the programmes are still in development and how they might operate in a company context isn't clear. There are a number of competing softwares out there, so it's hard for companies to make the right decision about what is the best way for them to use it and what is the best technology for them to use.
"I think it's important at a national level that clear principles are established around its use so that it is used to augment and support what people do rather than replace jobs. There is always the people element in decisions, and to make organisations aware of the risks around using it, so I think it is important that we have that national debate on it. Over time the technology will improve and organisations will be better able to make the right decisions for them."
Ms Connaughton said plans for a new AI Advisory Council and a government AI ambassador were both welcome developments.
She said investment in skills was also important, citing a lack of cloud computing and analytical software skills to deal with the progression of AI.
"At the other end we need to invest in uplifting the general digital knowledge and capability of the workforce, because more and more jobs will have AI embedded in them somewhere, so just understanding it and raising the digital knowledge of the workforce. It also means people would be able to see where we could add value."
While there are fears of job losses, Ms Connaughton pointed out that even prominent large language AI models need human supervision.
"The current model and software is what they call a large language model. It pulls in language from a lot of different places and can write beautifully and pull in different source data. Where it comes to trying to use language in certain ways it can really add expertise to it. At this point in time people have to ensure that there is accuracy in whatever is being produced because often the software has pulled together information from a lot of different sources, but it is not necessarily up to date.
"There is a whole validation piece that is there currently around is it representing a situation correctly, does it know enough to be fully accurate? Over time, it will infiltrate into more knowledge workers' jobs, and it will also infiltrate into more tech jobs as it will be able to do more programming work. At the same time the quality assurance and validation will become increasingly important.
"We have risks around data and data protection, around fairness and bias in data that is used. For example if historical models are used, those models can have gender or race bias built into them, so this is a risk. Also, what happens to data you put into it and where the data has come from often isn't very clear, so there's a transparency question around that. Because of those risks there needs to be guidance on how to best use it."
She said the current focus on questions around AI is a positive; however, she once again called for a national conversation.
"They have appointed an AI ambassador which is good because there is someone to go out and talk about it. They have also joined the OECD partnership on it and are looking at standards. It does need that multi-faceted approach because we are still in the early days of the technology. The reason for having a national approach now is to ensure we lay the right foundations so as the technology goes on we can say 'we have already worked out that is not appropriate use'.
"There are also developments in Europe to bring in an AI act and Ireland needs to influence that as well. There will be legislation for companies to comply with, which will also safeguard them around data management and transparency. They need to be agreed on a national and possibly a European and global level. What does it mean for GDPR, that kind of thing. Those standards need to be defined around having those protections in place.
"People now say if they had looked at the start of social media maybe there would have been principles and boundaries established, and now it's too late. We're always trying to improve protections around use of AI but we're on the back foot. Risks will grow around what it might do, so there needs to be that national conversation along with guidelines and rules."