Acceptable Use Policies for Generative AI

One of the most important components of a successful security program for generative AI technology is a widely understood and easily enforced acceptable use policy. Acceptable use policies address not just the tools, but the users of those tools. That makes them critically relevant in protecting against all of the key security threats, including every entry in OWASP's Top 10 for LLMs. For that reason, enterprises from the Fortune 50 to SMBs are developing policies that govern their generative AI usage. A good policy will guide users towards best practices and away from potential risks. This guide is intended for CISOs and security professionals who are considering an acceptable use policy for generative AI tools like ChatGPT, Claude 2, Bard, and others.

Why have an AI Acceptable Use Policy?

Ethical and safe deployment of generative AI models ("Gen AI models") is an essential precondition to realizing the vast gains they offer. As a first step, enterprises must establish clear acceptable use policies to ensure compliance with legal requirements, mitigate potential risks, and promote best practices. In most jurisdictions, some form of legal restriction on the use of AI models applies, and those restrictions must be integrated into the policies governing how the models are used at the enterprise level. Acceptable use policies can provide guidelines on both appropriate and prohibited uses of AI systems.

Well-designed policies help reduce legal liability, reputational damage, and data leaks by guiding users towards safe data sources and compliant use cases. With advanced models like ChatGPT, Claude 2, and others capable of generating very convincing but completely fabricated content, acceptable use guidance can protect legal teams from submitting hallucinated citations, HR teams from making illegal staffing decisions, and IT and software teams from inadvertently introducing security flaws. Counter-intuitively, companies seeking to drive productivity improvements have found that implementing acceptable use policies can actually increase uptake of the technology: left in the dark, users worry that IT or legal may frown on their experimentation. By making the company's policies clear, enterprises liberate employees to engage with these AI models and can safely capture the productivity gains.

What should a good Generative AI Acceptable Use Policy Do?

A good policy for acceptable use of Generative AI should do 4 key things:

  1. Outline the benefits and potential risks of responsible usage of Generative AI
  2. Define the scope and goals of AI systems and data usage at the enterprise and outline prohibited use
  3. Provide clear guidelines on data collection, storage, and access when using Generative AI
  4. Outline oversight, auditing and enforcement

Outline the Benefits and risks of responsible usage of Generative AI

Benefits:

There are a few different ways to think about the benefits of generative AI, but typically they fall into two brackets: increased productivity for employees and improved products or services. Productivity increases because generative AI can automate a large number of otherwise repetitive manual tasks, like drafting emails or articles, and can also enhance work outputs, for example by suggesting ways to make an article more concise or by rewriting a proposed software change to make it more readable for future engineers. Improvements to products or services can be equally significant: LLMs can help companies dramatically reduce the time taken to get a first response to a customer support query, and can be used inside existing software products to make them much easier to use. For example, software with notoriously complex user interfaces (like Salesforce or Palantir Foundry) may find that a natural language interface, where the user simply specifies the action to take ("Update the Walmart account to be Closed Won with an annual account value of $500k"), is far easier for users than figuring out the precise sequence of buttons to press to complete that action.
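As a concrete illustration of that last point, the sketch below shows one way a natural language command could be turned into a structured CRM update. It is a minimal sketch under stated assumptions, not a description of how Salesforce or any specific product does this: the call_llm helper, the prompt format, and the AccountUpdate fields are all hypothetical.

```python
import json
from dataclasses import dataclass

@dataclass
class AccountUpdate:
    account_name: str
    stage: str
    annual_value_usd: int

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to whichever LLM the enterprise
    has sanctioned and return the raw text completion."""
    raise NotImplementedError("wire up to your approved model provider")

def parse_crm_command(user_command: str) -> AccountUpdate:
    # Ask the model to translate free text into a fixed JSON schema, so the
    # application only ever executes structured, validated updates.
    prompt = (
        "Convert the following CRM instruction into JSON with keys "
        "account_name, stage, annual_value_usd. Respond with JSON only.\n\n"
        f"Instruction: {user_command}"
    )
    fields = json.loads(call_llm(prompt))
    return AccountUpdate(
        account_name=fields["account_name"],
        stage=fields["stage"],
        annual_value_usd=int(fields["annual_value_usd"]),
    )

# Example usage (once call_llm is wired up):
# parse_crm_command("Update the Walmart account to be Closed Won "
#                   "with an annual account value of $500k")
```

The key design choice in a sketch like this is that the model only proposes a structured update; the application still validates and executes it, which keeps an auditable boundary between model output and any change to business data.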

In a survey of a subset of our users, we found significant time savings of around a full working day every week, across a wide array of use cases and departments.

One thing that businesses should think carefully about is framing generative AI purely through a cost-saving lens. Of course, increased productivity can lead to reduced costs. But as with any new technology, employees and end users can worry that technologies like generative AI pose a risk to their job security, especially given press reports about companies like Buzzfeed allegedly implementing major reductions in force while replacing what was previously their core business with AI. Therefore, when discussing the benefits of AI in a business's acceptable use policy, legal, privacy, and security teams that want to encourage adoption may want to emphasize the potential for staff to dedicate more of their time to more valuable tasks, rather than the cost-cutting potential, which may deter usage or cause concern among employees.

Of course, an acceptable use policy should follow the basic principles relevant to all other internal company communications: it's important for businesses to state their intentions clearly and honestly to employees who may be impacted. Clear and trusted AI principles can actually increase AI safety and encourage responsible, productive usage.

Risks

Even now, the potential risks of AI are surprisingly poorly understood - hence the need for this AI Security Guide. This guide focuses on Hallucination, Data Leakage, Prompt Injection, Model Misuse, Data Poisoning, Insecure Output Handling, and Cost Controls. But each enterprise may wish to emphasize different risks, depending on the use cases it foresees. Some organizations, such as non-profits, those doing fundamental AI research, or those pursuing particularly high-risk use cases, should also consider encouraging employees to think about AI safety more broadly, including considerations of existential risk. This excellent templated acceptable use policy (designed by the Contrast Security team and licensed under a Creative Commons Attribution license) focuses on risks including:

  • Intellectual Property (IP) risks
  • AI-generated code with security vulnerabilities
  • Risks associated with sensitive data

Many other risks with these LLMs do exist, including legal risks (depending on the regulatory framework in the geography your business operates in). In New York, for example, the use of AI in hiring processes is regulated because of the risk that models exhibit racial or gender biases encoded in their training data: these technologies must undergo rigorous anti-bias and discrimination testing before use. Most large language models have not yet passed such tests.

The use of a large language model providing analysis of a (fictitious) candidate - a practice that would arguably be legally restricted in certain geographies

Also very important to understand are the data privacy risks associated with employees using AI. These models are typically (but not always) hosted in the United States, so employees seeking to comply with the UK or EU GDPR should consider the data privacy implications and seek advice from their counsel.

It is common practice to align your acceptable use policy with these frameworks, often by referencing the relevant laws and regulations directly in the policy.

How to go about writing a Generative AI Acceptable Use Policy

Identify Stakeholders

If you're charged with writing a Generative AI acceptable use policy for your organization and its employees, you should ensure that all key stakeholders are aligned. In particular, employee policies for Generative AI will likely require input or review from at least five key parties:

  • Legal team
  • Data security team
  • AI development team
  • Representative users
  • Leadership

The legal team can advise on the legal frameworks that may apply to your business, and on the relative importance of referencing those frameworks explicitly versus simply aligning the content of the policy with them. They can also often help formulate the AI principles that guide the rest of the policy. Data security teams can set expectations for the security controls that should be applied, either by the AI technologies themselves or by the business using them; common controls include vulnerability scanning, penetration testing, and so on. AI development teams that are actually building tools should be included as well, to ensure that the work being done to build AI tools is consistent with the intended policies, and that any conflicts are ironed out and clarified before the policies go into effect. If there are specific use cases in mind, representative users from each use case should be looped in too, so that benefits and risks can be properly weighed before a new policy is published, and any concerns or doubts about the technology can be addressed. Finally, given the scale of the expected impact of AI adoption, an acceptable use policy will often require at least a quick review from leadership, to ensure that the proposed policy aligns with leadership's vision for the role of AI at the enterprise.

Define the scope and goals of AI systems and data usage

In concert with the relevant stakeholders, the scope and goals of AI systems and data usage should be explained as clearly as possible in the acceptable use policy, and should conform to the AI principles established. Concretely, the policy should explain that AI systems are expected to drive whichever key benefits have been identified (such as productivity improvements and time savings for employees, and improved products and services for customers). If the intent is to procure specific tools, such as a chatbot like ChatGPT or Claude, a coding assistant like Cody or Copilot, or a meeting notes transcriber like Otter or Gong, then the goals of procuring those tools should be identified. Ideally, a list of endorsed tools is maintained and referenced explicitly in the document, grounded in the AI principles about what problems and approaches the enterprise considers relevant for AI. For enterprises curious about good use cases for AI, we wrote a dedicated blog post discussing the most common AI use cases at enterprises today; some of the top use cases are shown below, including Business Research and Knowledge Retrieval, Internal Communications, Engineering, Sentiment Analysis, Meetings, and other tools.

Outline Prohibited Use

AI acceptable use policies should also clearly articulate which uses of AI tools are prohibited. This will differ by organization and regulatory environment, but should broadly instruct users that they may not use AI systems in any manner inconsistent with applicable laws, regulations, or ethical obligations. Organizations may also outline uses that, for whatever reason, they have decided they do not want employees to engage in; for example, they may restrict use to business purposes only.

Provide clear guidelines on data collection, storage, and access

Among the most important aspects of an acceptable use policy for Generative AI (or any AI more broadly) are the guidelines on what data can be used, and by which tools. Lack of clarity here is one of the main reasons that users at companies which either permit, or do not explicitly prohibit, the use of AI choose not to adopt it.

Some of the core questions about data collection, storage, and access that enterprises should align on when using AI (one way to record the answers is sketched after this list):

  1. Should internal data be allowed for use with AI?
  2. If so, which specific tools are considered safe or unsafe and what precautions should employees take when using internal data with those tools?
  3. Should customer data be allowed for use with AI? If not, why not?
  4. If so, which specific tools are considered safe or unsafe, and what precautions should employees take when using customer data with those tools?
  5. Should employees have to go through a specific onboarding or training process in order to be able to use AI?
  6. If so, should this apply only to specific use cases or to all of them?
  7. Should tools be required to host data in specific geographies in order to receive internal or customer data (or even just user prompts)?
  8. What data privacy requirements apply?
  9. Should tools be required to offer SSO, RBAC, etc, in order to receive internal or customer data, or even just user prompts?
  10. Should tools be required to have SOC 2 Type 2, HIPAA, FedRAMP or other certifications in order to receive either internal or customer data (or even just user prompts)?
  11. Should employees conducting AI research be allowed to publish that research publicly or should it be held within the organization?
  12. What internal data can be published in AI research and publications?

Oversight, auditing and enforcement

The AI acceptable use policy should typically describe any ongoing monitoring procedures for AI in place at the company, including any internal audits and/or external oversight from lawyers, where appropriate. Merely notifying employees that usage is monitored can significantly enhance adherence to policy. But for cases where policy is not adhered to, it's especially important that this section includes accountability measures and consequences for violations, since notifying employees of the likely consequences can protect the enterprise in the event of misuse or abuse of its systems.

Although there are a large number of AI tools out there, it's very important that adherence to policy can be monitored automatically, since many of these LLM tools and AI chatbots have very open-ended potential uses. It's extremely easy to use a chatbot that might be approved for a specific purpose, like coding or sales, to assist with a different use case, like CV reviews or contract reviews. But if policies endorse a chatbot-based tool for one use case and prohibit the other, identifying abuse or 'cross-over usage' by hand can be extremely difficult. That's why tools like Credal offer automated monitoring and enforcement of your acceptable use policies.
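Conceptually, the core of such a check is simple: classify each prompt against the use cases the tool is sanctioned for, and flag anything that falls outside them. The sketch below is a minimal, generic illustration of that idea, not a description of Credal's implementation; the classify_use_case helper and the category names are assumptions.

```python
SANCTIONED_USE_CASES = {"coding", "sales"}            # what this chatbot is approved for
PROHIBITED_USE_CASES = {"cv_review", "contract_review"}  # restricted elsewhere in the policy

def classify_use_case(prompt: str) -> str:
    """Hypothetical classifier: could be a keyword heuristic or a call to an LLM
    asked to label the prompt with one of a fixed set of categories."""
    raise NotImplementedError("plug in your own classification step")

def check_prompt(prompt: str) -> dict:
    """Return a verdict that can drive a warning to the user and an audit log entry."""
    category = classify_use_case(prompt)
    allowed = category in SANCTIONED_USE_CASES and category not in PROHIBITED_USE_CASES
    return {
        "category": category,
        "allowed": allowed,
        # Recorded so reviewers can later see what was flagged, when, and why.
        "action": "proceed" if allowed else "warn_user_and_log",
    }
```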

The screenshots below show how, when a policy is uploaded to Credal, it can automatically detect unsanctioned usage and notify the end user (and, if needed, IT admins) of the precise policy the usage may have violated. A simple UI then lets the end user either go back and delete their message or acknowledge and proceed, creating a clear audit trail of exactly what actions were taken, by which users, at what time.

Credal allows a user to upload or define acceptable use cases.
Once acceptable use cases are defined, Credal can automatically flag or identify high-risk or non-compliant usage.
A compliance trail and audit log records what usage occurred, which policies were tripped, and what the user chose to do in response.

Alongside these monitoring processes, we recommend that you educate your employees through the communication channels they are most likely to engage with, like email, Slack, or Microsoft Teams. Similarly, it can be especially valuable to integrate acceptable use policy education into onboarding and training, with periodic refreshers as policies and endorsed tools change over time, and as AI research solves some problems and discovers new ones. It's typical to then require signed acknowledgement and to evaluate compliance. Such education should cover things like the AI principles, AI safety (including an explanation of why users might not need to worry about AI threatening to take over the world just yet), and a basic primer on how AI technologies work and how they can be useful. Tools like Credal can then be used to monitor activity and perform audits, ensuring policies are properly adhered to and that appropriate consequences, like retraining, are applied where they are not.

AI is here to stay: you may as well get visibility and control over its usage

AI is not just here to stay - its usage is exploding, and whether you know it or not, the chances are that your employees are already using some form of AI tool, be it ChatGPT or something else. The potential productivity gains are vast, but in many cases they will go unrealized, because the most responsible employees can be afraid to experiment with tools if they are uncertain about what types of usage are sanctioned. Without a clear policy, enterprises are left in a situation where responsible users shy away from the tools, so the productive potential of AI models is lost, while irresponsible employees pay no heed to the risks, putting company intellectual property, data security, and compliance at risk. For that reason alone, enterprises ought to issue clear, unambiguous policies: to ensure that users can realize the productivity gains while being educated about the risks.

To help organizations get an acceptable use policy in place, this free tool generates an acceptable use policy based on the specific needs of your organization and emails it as a PDF to the contact provided. Feel free to use it if you want a template policy to get started!
