Artificial Intelligence (AI) Tool Usage Policy

Introduction

Artificial Intelligence (AI) tools are transforming the way we work. They have the potential to automate tasks, improve decision-making, and provide valuable insights into our operations.

However, the use of AI tools also presents new challenges for information security and data protection. This policy guides employees on using AI tools safely and securely, particularly where potentially sensitive company or customer information could be shared.

Purpose

The purpose of this policy is to ensure that all employees use AI tools in a secure, responsible and confidential manner. The policy outlines the requirements that employees must follow when using AI tools, including the evaluation of security risks and the protection of confidential data.

Policy Statement

Blended U.K. Limited (hereafter the “organisation”) recognises that the use of AI tools can pose risks to our operations and customers. Therefore, we are committed to protecting the confidentiality, integrity, and availability of all company and customer data. This policy requires all employees to use AI tools in a manner consistent with our security best practices.

Security Best Practices

All employees are expected to adhere to the following security best practices when using AI tools:

a. Evaluation of AI tools: Employees must evaluate the security of any AI tool before using it. This includes reviewing the tool's security features, terms of service, and privacy policy. Employees must also check the reputation of the tool developer and any third-party services used by the tool.

b. Protection of confidential data: Employees must not upload or share any data that is confidential, proprietary, or protected by regulation without prior approval from the appropriate department. This includes data related to customers, employees, or partners.

c. Access control: Employees must not grant anyone outside the company access to AI tools without prior approval from the appropriate department or manager, and without completing any subsequent processes required to meet security compliance requirements. This includes sharing login credentials or other sensitive information with third parties.

d. Use of reputable AI tools: Employees should use only reputable AI tools and be cautious when using tools developed by individuals or companies without established reputations. Any AI tool used by employees must meet our security and data protection standards.

e. Compliance with security policies: Employees must apply the same security best practices used for all other company and customer data. This includes using strong passwords, keeping software up to date, and following our data retention and disposal policies.

f. Data privacy: Employees must exercise discretion when sharing information publicly. Before uploading or sharing any data with an AI tool, employees must first ask themselves: "Would I be comfortable sharing this information outside of the company? Would we be okay with this information being leaked publicly?" Second, they must follow the approval requirements in (b) above. An illustrative pre-submission check is sketched after this list.
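
To illustrate point (f), the following is a minimal, hypothetical sketch in Python of a pre-submission check that flags text which may contain confidential data before it is shared with an AI tool. The function name flag_sensitive_text, the CONFIDENTIAL_MARKERS list, and the patterns shown are illustrative assumptions only, not an approved or exhaustive screening mechanism; any real check, and any sharing it permits, must still follow the approval process described in (b).

    # Illustrative sketch only: flags text that may contain confidential data
    # before it is pasted into or uploaded to an AI tool. The markers and
    # patterns below are hypothetical examples, not an approved standard.
    import re

    # Hypothetical markers of data that should not leave the organisation.
    CONFIDENTIAL_MARKERS = ["confidential", "internal only", "customer id"]

    # Simple patterns that often indicate personal or account data.
    PATTERNS = {
        "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def flag_sensitive_text(text: str) -> list[str]:
        """Return a list of reasons the text may be unsafe to share."""
        reasons = []
        lowered = text.lower()
        for marker in CONFIDENTIAL_MARKERS:
            if marker in lowered:
                reasons.append(f"contains the marker '{marker}'")
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                reasons.append(f"appears to contain: {label}")
        return reasons

    if __name__ == "__main__":
        sample = "Customer ID 4929 1234 5678 9012, contact jane@example.com"
        for reason in flag_sensitive_text(sample):
            print("Do not share:", reason)

A check of this kind is only a first-line prompt for the employee; an empty result does not mean the data is safe to share, and the requirements in (a) through (e) still apply.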

Review and Revision

This policy will be reviewed and updated on a regular basis to ensure that it remains current and effective. Any revisions to the policy will be communicated to all employees.

Conclusion

Our organisation is committed to ensuring that the use of AI tools is safe and secure for all employees and customers, as well as the organisation itself. We believe that by following the guidelines outlined in this policy, we can maximise the benefits of AI tools while minimising the potential risks associated with their use.