AI Policy & Governance for Organizations

Comprehensive guide to building effective AI policies for your organization. Learn the three key questions every AI policy must answer: What are we protecting? What are we providing? What are we expecting? This session covers data security, privacy considerations, content integrity, vendor management, and practical implementation strategies with real-world examples including Brand USA's AI policy.

40 min
13 chapters
Janette Roush
Chief AI Officer, Brand USA

Key Takeaways

  1. Effective AI policy is built on three foundational questions: what are we protecting (data security, privacy, IP), what are we providing (secure tools, clear guidance), and what are we expecting (transparency, verification, accountability)
  2. Data security requires protecting three layers: model training (use enterprise tools with training opt-out), encryption (SOC 2 compliance), and data retention policies with clear admin controls and sub-processor transparency
  3. Data privacy means never putting Personally Identifiable Information (PII) into AI systems, understanding GDPR and EU AI Act compliance requirements, and always obtaining consent before using employee or customer data
  4. Content integrity requires safeguarding intellectual property rights, understanding the risks of uploading confidential information to AI tools, and reviewing contracts to ensure your organization owns AI-generated work
  5. Successful AI adoption requires both providing employees with secure, approved tools (to prevent "shadow AI") and establishing clear points of contact through an AI committee, IT support, and HR guidance for policy questions

What You'll Learn

After watching this video, you will be able to:

  • Articulate the three core questions that form the foundation of effective AI policy: protection, provision, and expectations
  • Identify critical data security requirements including SOC 2 compliance, encryption standards, model training controls, and sub-processor transparency
  • Explain why PII must never be entered into AI systems and understand GDPR and EU AI Act compliance obligations
  • Evaluate vendor contracts for AI usage rights, data ownership, privacy protections, and incident reporting requirements
  • Design an AI governance structure that balances employee empowerment with organizational risk management
  • Create a practical AI policy framework covering vision, ethical principles, transparency, responsibility, confidentiality, and governance

Full Transcript: AI for Tourism Professionals

Today we are going to talk about creating an AI policy for your tourism organization.

For me, this has been a long process of learning, reading, and taking coursework. Ever since I started talking and writing about AI three years ago, the one question that keeps coming up is: what is okay to put into a large language model? What is safe for me to upload or use as context to help the language models give us better responses?

I didn't want to give a surface answer to that question. I have done a ton of reading and collected a lot of resources around who is actually talking and writing about AI governance. I first dove into this educational journey a year ago, over the summer, with coursework for a certification called the AI Governance Professional, the AIGP certification from IAPP, which is the International Association of Privacy Professionals.

Part of studying for this is a two-hour proctored exam. I did not actually take the test and get the certification, but it is something that people in the legal community are endeavoring to undertake. I also did coursework from Dr. David Privacy, whose website is shown here. He has a course on Coursera with about 12 hours of video content that I watched probably five times through, taking notes, really trying to figure out which pieces of this information apply to tourism, or specifically to a DMO, versus to healthcare or to creating your own language model.

That study and coursework is reflected in what I'll talk through today. There are also resources I follow on LinkedIn or through their Substack newsletters for ongoing education about AI governance. That includes Louisa's newsletter, and Oliver Patel has a great Substack. My very first introduction and deep dive into all of this work was conducted by Kara Franker, now the CEO of Visit Florida Keys, and Roxanne Steinhoff, who was the head of Steinhoff Law. She came to that through her work at Choose Chicago, ended up getting her law degree and starting her own law firm, and just a month ago she announced that she was joining Miles Partnership as their general counsel. Kara and Roxanne have written white papers around the intersection of legal questions, responsibility, and the work of DMOs, and all of their work has been really well researched and is absolutely worth looking up.

AI governance is our plan for using AI ethically, strategically, and safely. This is important because this is something our employees want from us. Employees are anxious about using AI, particularly if there's not a policy in place, because they don't know what they are or are not supposed to do with AI.

As leaders of our organizations, we need to provide those instructions to them, and your AI policy needs to address three questions for your staff: What are we protecting? What are we providing? And what are we expecting? Throughout this webinar, I'm going to cover each of these three points.

Starting with the first one, what are we protecting? That's going to come down to three different things, the first being data security: how are we protecting systems?

The first thing is having the ability to turn model training off. There's fear around the idea that if you upload company information into ChatGPT, that someone else would somehow be able to get that information back out, that a competitor could see your advertising plan if you had shared that with the tool.

That's not really how language models work, because even if they do take your information and use it to train future versions of the model, it's a drop in the bucket of the overall information that is used to train a language model. But that said, we still don't want to do it.

We don't want to give up that information. You will see on this image a screenshot of the settings screen from ChatGPT. Under Data Controls they have hidden a setting called "improve the model for everyone." Because of course, don't we want to be generous, right? And improve the model for everybody who uses it by feeding all of the information that you give it into the next training round for that language model.

And the answer is no. We don't want to improve the model for everyone. So when you are using an AI tool, you want to have the ability to turn model training off. We want to know that data is encrypted. We want to be able to see in the Terms of Service of the AI tools that we're using, that the data we enter is encrypted, both when it is traveling to the cloud and while it is at rest in the cloud.

The third thing we want to look at is data retention policies. How long do these tools have the right to keep your data in the cloud that they own? The fourth piece is admin controls. Ideally, any AI tools that you use at your organization offer an enterprise account or an admin setup, because if somebody leaves your organization, they shouldn't get to keep access to the AI tool and their own login once they have exited your organization.

Just like people don't get to retain access to their emails or file server, they shouldn't retain access to AI tools.

Sub-processor transparency. This applies if you're not looking at a contract with a tool like ChatGPT or Claude, but with a tool that uses those models for processing. I'll use an example.

We have a partnership with Mindtrip on our America the Beautiful website. Mindtrip is not in itself a language model; they have sub-processors that they work with to provide the AI pieces of their tools. We need our vendors to be transparent about who those sub-processors are, so we understand the terms of use with those sub-processors as well.

And then finally, SOC 2 compliance. This is about making sure that your data is secure and safe from being stolen, both at rest in the cloud and in transit to the cloud.

Your policy needs to protect people through data privacy. How do we do that? We do not put PII, personally identifiable information, into a language model.

To be more specific, and this is something that I was interested in when I was taking the AIGP course: it's one thing to say don't do it, but to understand why, it comes down to the EU AI Act. The EU AI Act says that you have to get consent from someone before taking their information and entering it into a language model.

So unless your checkboxes, at the point where somebody is giving their information to you, specifically say that their information is going to be entered into an LLM, you do not have consent to do that with somebody from the EU. If you take somebody's information, their email address, their phone number, and it gets uploaded into a language model where training isn't turned off, the provider is allowed to take that information and use it to train a future version of the model.

When they do that training, they convert all of the words that you have given it into numbers, and then those numbers are entered into an algorithm. When that happens, you can't undo it. There is no way to extract that information again. You have now permanently given away somebody's email address or phone number, and that is a violation of GDPR, which says that people own their data.

As the owner of that data, I can rent it to you, I can allow you to use it, but at any time I am able to get that information back from you. If it goes into a language model and is used as part of the training of a model, you can never extract it again. That is the reason why we shouldn't put PII into language models.
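As a rough illustration of that "words into numbers" step, here is a minimal Python sketch using OpenAI's open-source tiktoken tokenizer; the name, email address, and phone number below are made up. Tokenization itself is reversible, and the point is that once those token IDs have influenced a model's weights during training, the original text cannot be pulled back out.

```python
# Minimal sketch: how text becomes numbers before training.
# Assumes the open-source tiktoken package (pip install tiktoken);
# the contact details below are hypothetical.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

pii = "Contact Jane Smith at jane.smith@example.com or +1 555 0100."
token_ids = enc.encode(pii)

print(token_ids)              # just a list of integers
print(enc.decode(token_ids))  # decoding works at this stage, but once these
                              # numbers have shaped model weights during
                              # training, the text can no longer be extracted
```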

Then the third piece that we're protecting is the integrity of content, so protecting brand and community. This comes down to questions like: what if I want to upload a research report into ChatGPT? Is it safe to take our board minutes and upload those into Copilot or into Gemini? The concerns here come down to three risks of putting external information into a language model.

The first one is permanence. Once those inputs are in the training data, you cannot extract them. This applies to tools that are training on your inputs: if you don't provide paid tools to your staff and people are allowed to use their own personal free tools, most likely that information has been put permanently into a language model, because it's been used to train future models.

The second risk is regurgitation. Language models don't work like Wikipedia; I can't upload my business plan and then someone in Australia can say, give me Brand USA's business plan. It's not a retrieval tool in that way. But a model is predicting text based on all of the words that are part of its training data, and that makes it possible for a language model to reproduce your content in some form in someone else's response. We protect ourselves from that risk by turning off training and using paid tools. And then the third risk is violation of IP rights, if you are sharing information with a language model that you don't have permission to share with a third party.

That's going to come down to looking at the contract for the information that you want to share. Think of licensed content specifically: that could be a research report that you are paying for, a syndicated report that many people are using. It could be different if this is research that was commissioned specifically for your organization, but if it is research where multiple copies are being sold, the contract probably says that you are not allowed to share it with a third party. That doesn't mean you can't do it ever. I would say that's an opportunity to have a conversation with the vendor and ask if you can put an amendment in the contract: I have training turned off, can we make an exception for uploading the research report in that instance?

Other places where you need to be concerned about rights violations involve other people's information, like partner data. This is of huge concern to business event planners. They typically never have the rights to take information about the partners they are working with and upload it into a language model. For them, the solution is to host a language model on premise, which means it's living on your own computer or on your own server, and the data you upload never leaves the confines of your business.
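For teams exploring that on-premise route, here is a minimal sketch of what querying a locally hosted model can look like. It assumes a local runtime such as Ollama is installed and serving a model at its default localhost endpoint; the model name and prompt are placeholders, and the request never leaves your own machine.

```python
# Minimal sketch: querying a locally hosted language model so that
# partner data never leaves your own machine. Assumes Ollama (or a
# similar local runtime) is running at its default local endpoint;
# the model name and prompt are placeholders.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

prompt = (
    "Summarize this partner contact list by market, "
    "without repeating any individual names or email addresses."
)

response = requests.post(
    OLLAMA_URL,
    json={"model": "llama3", "prompt": prompt, "stream": False},
    timeout=120,
)
print(response.json()["response"])
```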

If you're looking at whether it would be okay to share information with a service provider, the service provider being ChatGPT: your own drafts, notes, and ideas are going to be okay.

It's the same thing with public information. If it is already available on the internet, I feel okay uploading it to a language model, because if it is on the internet, it is most likely already in the language model.

Internal strategy documents: look at your organization's cloud policy, but typically those are going to be okay.

Now we get a little deeper. For those licensed research reports, check the license first and ask if you can have a carve-out for using them inside your language model. For confidential information, you want to get people's permission first.

PII. Again, you're going to have to get consent first.

Employee HR data or your internal banking information is never going to be appropriate in a language model. If your HR team is looking at use cases where this is essential, there are opportunities to anonymize the information and then do something with a locally hosted language model. The AIGP coursework I took went into great detail on what that entails.

Probably it is going to be overkill for your DMO.

When you're looking at this question of what is safe or okay to upload, it's a little less about what is safe and it's a little more about why are you doing it.

So this comes down specifically to IP and making sure that you don't get slapped with an IP infringement lawsuit. There are two different areas of risk for that: one is on the output side, one is on the input side. Looking at the output, you want to make sure that you are sticking to lower-risk use cases.

If you have a research report, the reason you got the research was to do internal strategic planning, and AI can be a tool for making that happen. You want to use it for non-commercial, educational purposes and for doing market and policy research. All of these use cases are in the spirit of why you have the IP to begin with; you're not using AI output to devalue it.

If you put in a research report, you can't use AI to do something else with the research and then sell it to people, because that still belongs to the original rights holders. Using it for non-public, non-commercial purposes, assuming that you have training turned off and that you are not giving away this information to train a future model, is going to be okay for internal research use.

From a legal perspective, you want to be confident you would win this proverbial lawsuit, so look at whether your use of the IP would qualify as fair use under US law. Are you making a transformative use of it? Are you using this information internally without distributing it to the public?

That's why there's a distinction between using an AI tool to make images of Mickey Mouse doing things that Disney would not allow versus doing something for purely internal, educational use that does not affect the market value of the product.

If you want to truly reduce your risk in the outputs of these AI models, you want to make sure that you are always providing attribution and transparency. If you are using something as a resource for information that you are creating, you want to say not only that this happened inside of ChatGPT and this is the prompt that I put in, but also that these were the resources I used in order to generate this output. That is the first piece that needs to be covered in your AI policy: what are we protecting?

Now we move on to the second piece, what are we providing: secure tools, clear guidance, and a person at your organization to bring questions to.

To look first at those secure tools: you want to provide secure, paid tools for your staff. That could look like ChatGPT Team, Claude Team, Gemini for Google Workspace, or Microsoft Copilot. Providing those secure tools allows you to remove the risks that come with shadow AI, or BYOAI.

The first area is security risks, because if there's a data breach, your IT team has no knowledge or control over what happened to the data that your employee uploaded to the language model.

There are also privacy risks, like we discussed around personally identifiable information. If somebody leaves your organization, they get to retain everything that they uploaded into their personal AI tool. And there are operational risks when everybody in the company is working from a different set of tools.

There's no way for anybody who's overseeing this work to understand the accuracy of what somebody is producing. These models hallucinate as a matter of course.

If somebody is creating information about your destination, whether visual assets or written copy, and they are keeping their AI use a secret and something is wrong, that can create a violation of visitor trust. If there is a data breach with personally identifiable information uploaded into a free AI tool, those things could have legal fallout.

Regulators, if there is a situation with your organization, are going to have questions and you won't have an AI policy or a suite of paid AI tools to point to as a way that you were securing the organization.

The second component of what you need to provide for your staff is clear guidance on what people can and can't do, and a person to come to with their questions.

And that can be an AI committee, the head of IT, or the head of HR. If you are looking for a person or a team who should own AI in your AI policy, I think that sits squarely in the operations role, because operations is about making sure everybody in your organization has the knowledge and tools to do their best possible work.

And so now we come to the third question that your AI policy needs to answer, and that is: what are we expecting from our staff? That comes down to three things. The first one is transparency. Your AI policy should ask that your team be very transparent about how they are using AI, and that means internally, with external partners, and on your website or in your Instagram posts, wherever you are sharing things that have been made partially or completely with artificial intelligence.

The second piece is verification. These tools are hallucination machines. It is a feature, not a bug. If they didn't hallucinate, then it would be regular machine learning, where there would be one pre-written answer for every question that you give it.

And that is not how language models work. You need to make sure that people understand both how they work, so they understand that hallucinations are going to happen, and that they have an obligation to check their materials, to make sure that they are accurate and reflect the point of view of your organization.

Very early in the days of AI, in May 2023, the City of Boston came out with an AI policy for its city workers. They posted it on their website and emailed it to every single city employee. They said: we encourage AI use. This is going to change the world.

You should become accustomed to how these tools work and how they can benefit your work. But it is just like autocorrect in an email: if autocorrect changes a word in that email and that changes the meaning of the email, it is not autocorrect's fault. It is your fault. That is exactly true with AI. We have the responsibility to verify everything that is part of an output, because that is our work.

It is not AI's work. AI may have assisted with it, but we have to be responsible at the end of the day, and your policy needs to make that clear. And then the third piece is human accountability. You cannot pass the blame to AI; the output belongs to the person.

Now let's look at what AI guidelines can look like for your organization.

This is the structure that we used to build the AI policy here at Brand USA. It comes from the excellent work that Kara Franker and Roxanne Steinhoff did on behalf of the AI Opener for Destinations Program, where I've served as an expert advisor for the last two years.

That's run through Group NAO, who created the program, and the North American cohort is run through Miles Partnership. They have the full suite of these materials available on the AI Opener for Destinations Program website, but I believe you need to be a participant and have a login to see them.

I'm going to just walk you through the outline of what they suggest, and I'm going to show you our exact policy so you can see how we put this into life. First, you want to look at the vision that your organization has for AI because I imagine the vision that Brand USA has is going to be very different than your organization's vision.

We want to be at the forefront of AI use at a DMO. Your DMO may not share that exact same opinion, which is completely fine. If you are just looking to safely explore, that is a perfectly respectable point of view for AI adoption. The second piece you want to look at is your ethical principles, and these may ultimately be the most important conversations that your destination has, because this comes down to transparency.

How are you going to let people know that you used AI? And in what ways will it be okay to use AI at your organization? Can you use AI to make images of your destination and post those on Instagram? Some of those questions feel like they'd be very easy to answer, right? Like, no, of course we wouldn't want artificial images of our destination. But none of these things are necessarily black and white. Think back 25 years to when Photoshop was new: there was a lot of hand-wringing around whether it was okay to use Photoshop to make the sky of the photo on your brochure a little more blue. And now, of course, we can't imagine not making it more blue.

If you did that change with AI instead of with Photoshop, would your organization think that's okay? There are no laws governing this, so it is an organization-by-organization decision. That answer might be fairly simple, but in the future there will be questions like: say there is a child in the background of a video of your destination and you didn't get a video release signed for that child.

Is it okay to use AI to remove that person from the video before you post it? Again, there's no right or wrong, yes or no. It's just what you collectively agree is the responsible choice, and then you put it into the policy and let people know, whether that's posting the policy on your website, putting it in your privacy policy, et cetera, making sure that you are transparent about it.

The third piece is responsibility: spelling out how you keep the human in the loop. Because again, these AI guidelines are for your staff to know what's okay and not okay to do. It's important for them to see in black and white that they have to be involved in the process.

The fourth piece is confidentiality and safety. How is your organization going to protect personally identifiable information, confidential records, information that you are licensing but do not own? What tools are you going to be providing for your staff to use?

Governance and accountability, how will you ensure compliance with your AI policy? Who is the internal lead for oversight on AI use and how are you going to train staff?

Your team really wants education on AI. They really want regular, consistent messaging from leadership on what's okay and not okay to do. Providing that to your organization, along with secure paid tools, is really important. And then provide some practical tips.

What tools can you use? What are examples or ideas of permissible ways to use AI? Let's walk through what this looks like for Brand USA. This was created, oh my goodness, it's about to celebrate its first birthday, and it's good to be walking through all of this right now, because we need to go back and add specific information about image generators into this policy.

This was written initially not to be super specific, looking at every single conceivable use case and providing some kind of verdict on it. It was meant to be much higher level and philosophical about how we approach AI and how the team should approach AI. That starts with our vision, which is: we want to set the global standard for responsible and innovative AI-powered tourism promotion.

We are actively looking for opportunities to use AI to do our work better or to make the United States more bookable and discoverable. We know that education is a central component of that, which is why we do regular AI trainings with the staff. We have a Slack channel dedicated to AI news and tricks.

And we have this monthly webinar series for the industry. The second piece here is transparency, and we want to be very transparent about our use of AI. When I send emails that involve AI use, even internal emails where I did a little research or thought through an idea with AI, I will write in the email: I worked with Claude, I used this prompt, and this is how I came up with the answer I am now giving you. Sometimes I will even put the answer from the AI tool in red and then put my notes beside it in a different color, in part because I want to be transparent, but honestly because I want to show people by example how you can use AI tools as a thought partner, interwoven into your work.

We want to actively engage people in those conversations. In these AI guidelines, which are posted on our website, it does say the phrase: these AI guidelines were generated with support from ChatGPT-4 and edited by me. I spell out that we're embracing a culture of responsible experimentation, which might not be the case at your DMO, and that's okay.

What people want to see is exactly what approach you are comfortable with. Then we want to spell out responsibility: keeping the human in the loop, explaining that generative AI is a tool but we're still responsible for the outcomes, and how we need to fact-check and review all content created by AI, particularly if it is used in public communication.

Also, it's good to let teams know that AI cannot fact-check itself. It will hallucinate and then say everything was correct, so you as the human still have to go through and confirm that everything it said is accurate. In terms of responsibilities on the side of Brand USA, we're responsible for providing the AI tools themselves and ongoing training.

Confidentiality and safety: this is where we spell out that we may not enter personally identifiable information into a prompt, spelling out exactly what that includes and explaining why we have this rule. You may not enter trade secrets, confidential information, or information provided to us that is protected by a license.

Our general counsel and I will have conversations with partners about amending contracts to carve out language that says, in these particular cases, using this type of tool, it is okay, if it is something that we truly want to work with inside of AI. We then spell out BYOAI and why we don't want people to bring their own tools to work, but we will vet tools if there's something that a person would like to use, and then make them available if they meet our standards.

Then governance and accountability. We want to make sure that everyone knows they can reach out to our general counsel, to me, to their manager, or to HR to discuss any questions or problems they have regarding AI or AI governance and accountability within the destination. The policy will be reviewed annually or as needed.

Right now we are at annually, so I will be reaching out to Jake to make sure that this is up to date. Again, philosophically, all of this could apply to words or to images, but there has been a great deal of development in AI image generators over the past 12 months, between when this was written and today, and we want to be very clear so that there's no confusion on our team about how we perceive AI use when it comes to images specifically. That's something we'll definitely be adding here.

To help you in the creation of your own AI policy, I asked Google Gemini to write a prompt that you can access through this QR code as a custom GPT inside of ChatGPT.

You might need a paid ChatGPT account to be able to use this; at a minimum, you have to be logged into ChatGPT. It should be able to coach you through the same questions that I was just walking you through on those three imperatives for your AI policy, and then break down each component separately.

I'm not a lawyer, so this is not legal advice. It is meant to get you out of your own head. I think it can be very difficult to get these policies on paper sometimes, because we let the perfect be the enemy of the good, or we feel like we have to find money for counsel to draft this.

But think of it as a philosophical point of view that your destination has, and that's something you want the team on the ground to write. You don't want to outsource the writing of your philosophical point of view about how you will approach the use of AI.

There are other uses of AI that we need to be concerned with, and those involve our vendors and third parties. When you are working with vendors, make sure that you are reviewing contracts carefully to understand data ownership, access, and privacy in your partner agreements, particularly with any agencies that you work with.

Make sure you understand how that vendor or partner is using visitor data, and understand if they are putting PII into language models. Have conversations about this with your vendors. Ideally, they already have their own AI guidelines in place, but if not, you want to make sure that your AI guidelines are aligned with their use of AI.

You wouldn't want to completely forbid any use of AI in creating destination images, only to find out that your agency doesn't have that same rule in place.

This is an example of a checklist that you could use with vendors to make sure they are identifying to you when they are using AI and how they use AI in their own internal processes. And this isn't to say that they shouldn't be using AI; I certainly hope they all are. I think agencies, and particularly advertising agencies, should be very deeply exploring the capabilities of AI, but we should know, as the client, how and where AI is being used. You need to understand if they are using AI to write content on your behalf, because you may not be able to get a copyright on that content.

I don't think the courts have been very clear yet about whether human beings can own the rights to content that was not created by human beings. That's something you want to be very clear about. The same goes for third-party IP usage and what your vendors are uploading into language models to create your outputs.

Incident reporting: if they have any kind of AI-related problem, are they obligated to let you know about it? Data security and privacy: you want their policies to mirror your own. And for their own AI output, are they committing to human review the same way that you are?

When you are procuring AI products and trying to decide if it's okay to work with this vendor or to subscribe to a product or not, this is advice that I took from an AI Opener event in Europe last year. An AI ethicist said, you need to know what to ask and who to ask when buying risky tools from strangers.

Use AI to help with this. Let's say you're looking at an AI notetaker, find their terms of use and their privacy policy. Upload that to the AI tool of your choice and ask, what should concern my DMO regarding the terms of use in the privacy policy?

Include your own policies, once you have them written, so it knows what your point of view is on these things. Find out, from a security perspective: do they offer single sign-on? Is their data encrypted? Are you able to have an enterprise license for the product? And from a privacy perspective: are you able to turn training off?

Are they using your content to make their service better? Does your DMO own the outputs from using that service? With note-taking tools, many of them are free, and that's the reason they're free: they want to get that data and use it to train their own tools or to sell it back to language model companies.
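If you want to semi-automate that review, a sketch like the one below runs the same kinds of questions against a vendor's terms of use and privacy policy through an API instead of the chat interface. It assumes the OpenAI Python SDK with an API key in the environment; the model name and file names are placeholders, and the output is a starting point for your own review, not legal advice.

```python
# Minimal sketch: asking a model to flag DMO concerns in a vendor's
# terms of use and privacy policy. Assumes the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY in the environment;
# the model name and file names are placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

terms = Path("vendor_terms_of_use.txt").read_text()
privacy = Path("vendor_privacy_policy.txt").read_text()
our_policy = Path("our_ai_guidelines.txt").read_text()

prompt = (
    "You are reviewing an AI vendor for a destination marketing organization.\n"
    f"Our AI guidelines:\n{our_policy}\n\n"
    f"Vendor terms of use:\n{terms}\n\n"
    f"Vendor privacy policy:\n{privacy}\n\n"
    "What should concern my DMO? Cover model training on our data, data "
    "retention, encryption, sub-processors, single sign-on, enterprise "
    "licensing, and ownership of outputs."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```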

So it's very important from a privacy perspective to go through this checklist when you or a team member is looking at an AI tool to use.

This is an AI-generated picture, as were all of the images in the presentation today. They were all made with the new Nano Banana model from Gemini, which is incredible. If you're looking at which AI tool to make available to your staff, typically I have said to go with OpenAI and ChatGPT. There's no wrong answer, because they all leapfrog each other in terms of capabilities, but if you're a Google Workspace user, the new Gemini model and Nano Banana inside of Gemini are phenomenal.

Leadership to-do list. Please empower your staff to use these tools, but make sure they understand where their limits are. This will encourage AI use at your organization because people drive the car faster when they know how to use the brake pedal. You want to approve and communicate guidelines to your team.

You want to oversee risks across both your vendors and your tech stack, and ultimately you want to set the tone. I've read studies that say employees are much more likely to use AI if they know their immediate boss is using AI.

The only way that we're going to encourage AI adoption at our organizations is by everybody from the top down leaning in and moving through that initial discomfort to ask the question, I wonder if it can help me with this. And then really seeing what it can do.

If you are new to using AI, it does take some time to understand what to do to get the best out of these systems. Very few of them work great when you just put in one prompt and then walk away. It requires iteration. It is like working with a brilliant but ignorant intern, right?

You have to have the patience to give it the context it needs in order to really get the best results out of it. It will rise or fall to your expectations. If you assume AI can't do anything, you're not going to put your best effort into prompting it and it won't be able to do anything for you.

Whereas if you assume that it's going to be able to do amazing things, that will encourage you to keep working with it until it does, in fact give you amazing outputs. That's the tone that I hope you can set at your organization.

To wrap up, this is the QR code for today's presentation if you would like to download a copy. Thank you so much for joining the webinar today.

Agents of Change | AI Research & Innovation by Janette Roush