Managing workplace AI effectively


A recent survey by Fishbowl revealed that 68% of professionals are secretly using AI in their roles. This shouldn’t be the case, according to Iain Simmons from Arbor Law, who instead suggests organisations develop clear AI use policies to guide staff.

CREDIT: This is an edited version of an article that originally appeared on SME Today

AI’s influence on our daily lives and work environments has become a major talking point, especially since the launch of ChatGPT in November 2022. The rapid development of AI tools for both business and personal use has prompted companies to explore these technologies to reduce costs and automate tedious tasks.

According to Deloitte’s State of Ethics and Trust in Technology survey, 74% of companies were testing generative AI technologies last year, with 65% already using them internally. As more AI tools emerge, these figures are likely even higher now, so it’s crucial that employers act swiftly to establish policies and procedures that harness the benefits of AI while mitigating the associated risks.

Shadow AI: A growing concern

Shadow AI, the unauthorised use of AI tools at work, is on the rise. Without proper controls, this can lead to security threats, data breaches, inconsistent work quality, and even regulatory violations. To combat this, companies need to update their rules and procedures to match the pace of AI advancements. Educating employees on permissible AI use is essential, starting with a well-defined AI Use Policy.

Creating an AI Use Policy

An AI Use Policy ensures that any AI technology used within a business is safe, reliable, and appropriate, thereby minimising risks. This policy should guide employees on how to effectively and securely use AI tools. Here are some key elements to include:

Purpose and scope

Begin with an introduction that sets the context, purpose, and scope of the policy. Clearly define which staff and tasks it applies to and reference any related company policies.

Approval process

List any pre-approved AI tools, such as ChatGPT or Google Gemini, and outline the process for approving new tools. Set evaluation criteria like legal compliance, transparency, accountability, and security. Also, consider vendor evaluations, terms and conditions reviews, and risk-benefit analyses.

Rules of use

This is the section most employees will rely on day to day. Provide clear do’s and don’ts for inputs and outputs to ensure compliance with data security, privacy, and ethical standards. For example, advise staff against inputting confidential or proprietary information, or using AI in ways that could perpetuate bias. For outputs, emphasise the importance of fact-checking and human oversight, especially for decisions that affect individuals.

By developing an AI use policy, organisations can mitigate the risks associated with shadow AI. This ensures they can enjoy the benefits of AI while staying within legal and regulatory boundaries.

 
