Strategic guidance for tourism leaders on AI adoption, governance, and organizational transformation. This comprehensive leadership guide covers scaling AI by department, creating AI governance frameworks, managing three critical risk areas (Data Security, Data Privacy, Content Integrity), developing organizational AI guidelines, handling vendor partnerships, and implementing both top-down and bottom-up adoption strategies.
Thank you so much for logging in. My name is Janette Roush. I am the SVP of Innovation and Chief AI Officer for Brand USA. If you're not familiar with us, we are the destination marketing organization for the United States, and we focus on driving international inbound visitation to the US.
First of all, I want to assuage any fears that this might be dry or not useful. I like to start my webinars with the funny image you see of a Play-Doh version of Janette getting squished, which you can make yourself for free at pika.art. I share it because I have seen a lot of AI presentations, and many of them are just a list of little tricks and tips without anything useful that you can take back to your office and actually use for work. The fun stuff is fun, but this will be the only funny trick you see today. The rest of the presentation is going to focus on practical use cases: how we are using AI at work and how you can set up your organization to be ready to work with AI.
And as an FYI, my agenda for working with AI in the tourism industry for the United States focuses on three branches. The first is operational excellence: how are we using AI at Brand USA to be more efficient at our jobs or to do those jobs better?
The second piece is industry empowerment: how can Brand USA serve as a catalyst to drive AI adoption across the United States tourism industry? And finally, traveler experience enhancement: what are we doing to make the United States more searchable, more discoverable, and more bookable by our international B2C and B2B audiences? That's what I'm focusing on in my role at Brand USA.
I want to start by talking about how you can scale AI work at your DMO. This is for the DMOs where people have been working with AI for a little while, and you have individuals using AI with different levels of success across your organization.
How do you as a DMO leader move from a bunch of people doing random work using AI to creating AI workflows that everybody in your organization benefits from? It's as simple as going to your org chart, looking at your various departments, and then double-clicking on one of those departments.
In this example, I'm looking at human resources and what are all of the responsibilities that fall to your human resources team? Performance management, talent acquisition, employee engagement, all of these different areas.
All of their work is laddering up to these broad buckets. If you want to move from individuals working with AI to individuals working together as a team with AI, sit down and identify use cases for each area of focus. Take performance management: when it comes time to do employee performance reviews, some employees are using AI to write those reviews, and on the flip side, some managers are now using AI to write their side of the review. If you are at an organization where that is starting to happen, maybe the solution is, instead of ignoring AI and pretending it's not part of the process, to actually incorporate AI into your process. Create some custom prompts that help both the managers and the individual contributors answer the right questions in the most thoughtful way, to get to the goal: employees having clear goals to work towards and managers being aligned with those goals.

That's one example of things you could look at for these different areas of HR. It's a really important process because it allows us to find quick wins for each of those HR pillars. It also helps people across a department all use AI in the same way. Another way to make that happen is to ensure that specific team is meeting regularly on its own to talk about AI adoption. So in this example, say we are creating custom GPTs for the folks in HR to help them write a job description. You want weekly or biweekly meetings just for the people in that department to say, "oh, I'm not quite sure how this worked," or "I got a response I couldn't quite figure out," and use that as a learning opportunity, so that there's not one person responsible for fixing a prompt if it stops working as well as it used to.
That's something that can happen because the underlying models that these prompts are using change, and when they change, you have to update the prompts that you're giving.
Weekly meetings and regular trainings help people stay on top of that.
The final reason we want to do this work is that it helps you collect SOPs, the standard operating procedures for how your company gets things done. Think about what your new-hire SOP looks like: how you requisition a new role, which people need to be notified, who needs to write the job description and post it, and all of the different steps for hiring and onboarding that new employee. Once you have those written out, you can go through the process and ask: which parts of this process can AI assist with?
Which parts of this process could AI one day do without assistance? When agentic AI becomes more accessible to those of us who aren't coders or developers, six months from now, it's going to be very plug-and-play to work with AI agents. But you're going to want to know the steps in a process, because an agent won't be able to help with every step. Writing it out now is how you'll be able to plug agents in down the line.
This will be the bulk of what I wanted to talk about today, and that is AI governance, because we know your staff wants to use AI ethically. This study is a little bit older; it's from 2023. The name of the study ended up being "AI Anxiety in Business," because the central finding was how nervous employees are about experimenting with AI. A lot of that nervousness comes from not knowing what is okay to put into an AI system. It is our job as DMO leaders to make it very clear what is and isn't okay to use AI for, but that's not as easy as it sounds, right? Because we don't always know what's okay and not okay for AI to do.
How do you create this framework? I want us to start by thinking about three common risk areas for AI. The first is data security: how are we protecting our systems? The second is data privacy: how are we protecting the privacy of people? And the third is content integrity: how are we safeguarding the information that we're putting into AI systems?
So first, data security. I want you to provide paid AI tools for your staff, even if there are plenty of free tools available, and even if your staff is already bringing their own paid AI tools to work. So what do these tools look like? ChatGPT Team, Claude Team, Gemini for Google Workspace, Microsoft Copilot.
Those are the predominant safe ways you can use AI tools. If you are an enterprise-sized organization, at least 150 employees, you can be on an Enterprise account. Team is a light version of Enterprise for between two and 149 employees, and there are a lot of benefits to having those secure tools.
We want to use them because they're going to provide SOC 2 compliance. That means that when you put data into a tool, the data, whether it is in transit to the cloud or at rest in the cloud, is less likely to be hacked by bad actors. So that is a safer way for us to use AI.
Also, when we use paid tools, we are able to turn off training on our data.
On a free account or your own personal paid account, that's a toggle switch hidden in settings. The toggle says "help improve the model for everyone." What that means is that OpenAI and these other companies are taking all the data you put in and using that information to train the next version of ChatGPT or Claude or Gemini.
And I don't think you want your data to be used in that way. So get the paid account, turn off training.
Now I want to look at the second risk area for DMOs, and that's data privacy. How are we making sure that we protect people when we are using large language models? The easiest shortcut here is: don't put PII - personally identifiable information - into a large language model.
If we are looking specifically at the EU AI Act, we do not own someone else's information. You can never own the name Janette Roush. I am always going to own my name and my email address. I can lend it to you for the purposes of joining your email database or what have you, but I can always take it back, because I'm the one who owns it, not you. With a large language model, when an email address is put into the model and used to train it, all of that information is turned into numbers called tokens, and those tokens train the model.
Once that happens, you cannot extract the information out of the model again. It's there permanently, and because of that permanent nature, it is against the EU AI Act, and it is a violation of GDPR, to put somebody's information into a language model if they're an EU citizen. For best practices, using the EU as the strictest case, it just makes sense not to put other people's information into a language model, even if you have training turned off, meaning that somebody's email address would never go into the underlying model data. Part of that is that the EU AI Act states that before you put someone's information into a model, they have to explicitly give you permission. The safest way forward: don't put that stuff into a language model.
And then the third piece is content integrity. How are we protecting intellectual property by knowing what we can and can't put into a model?
I've done a lot of reading and studying around this, talking to our own general counsel and doing coursework for a certification called the Artificial Intelligence Governance Professional (AIGP), which is managed by the International Association of Privacy Professionals.
It's a really interesting area of study, and I am not a lawyer and can't give legal advice, but I have approached this very seriously to understand what is acceptable and not acceptable use of intellectual property in a language model, so that I am giving good advice.
The advice that I am seeing is that it's not a yes or no issue, right? It's not black or white. It's not, you either can or cannot put something into a language model. It's more about how are you using the output, or why are you putting something into a large language model?
I am thinking of things like research reports. Many of us at DMOs are working with incredible research from Tourism Economics or any number of other companies that have put together these great research reports we want to make sure we're using.
For me, a great use case for research is to create a Project inside of ChatGPT or Claude, upload the research relevant to what I am working on, and then say, "great, now I have a source of truth in my language model." So if I am putting together audiences who might be interested in traveling for America250, I have all of this research loaded in, with information about heritage tourism and who might be coming to the United States in 2026, so that when I am working on a strategy with AI, I'm not just guessing; it's actually giving me useful responses that are grounded in the research. The trick with this is that not all research can be uploaded into a language model.
First, make sure you are in a paid tool and that you have training turned off. Tourism Economics might not want their data to be part of the underlying data of a future training model, so it's unfair to take their information and upload it to a model where training is turned on.
You really need to have that training turned off. And then the next piece is, are you using proprietary research that your company paid for, that your company has a license to use, or is this a syndicated report that the research company is hoping to sell to many destinations? Because we are already working with research in this way.
We already have to separate the research on our own servers: that which is okay to share with our members or partners, and that which needs to be walled off because it's only for internal use. The wrinkle with ChatGPT is that it is technically considered a third party. And if this is a syndicated piece of research that says it may not be shared with a third party, that's when you can go back to the research company and say, "I want to add an addendum to my contract saying I am allowed to use your research with this third party when I am working with a paid version of one of these AI tools." If it doesn't say anything about your research being tied to your internal use, then the question of what you can do with it in AI comes down to: how are you using it?
If you are putting research in to help you with future strategic planning, that is a lower-risk use case. Honestly, the reason research companies produce research is that they want us to use it to inform our campaigns and our ideas. They don't want us to put it into a large language model and then create a new version of it that we sell ourselves.
That's the whole point of IP law: we don't get to profit off of someone else's work. And there are lots of ways to use IP in AI where you would not be profiting off of someone else's work. This is also a sign that we have got to get AI use out of the shadows. We have got to stop pretending this hasn't happened to our industry. Let's start putting AI mentions into our contracts so that we aren't guessing, "oh, I'm pretty sure the vendor wouldn't want me to do this." Let's find out what the vendors do and do not want us doing with their research and their sales reports.

Let's talk about BYO AI a little bit. It's also called shadow AI: bringing your own AI to work. Why might that be an issue?
It falls again into three tranches. The first one is security. We don't have any way to control what people are putting into a language model if it's not procured through your IT department and run through your company.
It means there could be a data breach at that company, and because your IT director isn't aware that company information is in that tool, you can't do anything to mitigate that data breach. Then there are the privacy risks I outlined earlier concerning GDPR.
I think this is an important one. When we allow employees to use their own AI tools at work, they retain everything they put into those tools once they separate from the company. We don't typically allow employees to keep access to their email accounts when they change jobs. We don't allow them to keep their login to your server.
On the same path, you wouldn't want your employees to retain access to your AI tools once they leave the company. Even if it's the employee's own AI tool, if they are using AI a lot, they're putting a lot of information into those tools, information you may not want them to access once they leave. That's a strong argument for getting a Team account for one of these programs.
There are also operational risks with BYO AI. If somebody is using a tool that's not all that great, you don't have the opportunity to say, "those outputs aren't good because you're using a wonky or knockoff version of AI." Maybe you are using DeepSeek, which is owned by a Chinese company; you wouldn't want your private information uploaded to a company located in a jurisdiction where you have little recourse. Without oversight, if we don't know which AI model a person is using, we can't know whether it's a good AI model or not.
And then there are reputational risks. Your organization needs to be able to stand behind the information you are putting out into the universe and control any reputational risks from bad information, or the legal fallout from a visitor being told something incorrect. You want to have oversight internally of the tools being used by your company. Let this be your motivation to move to a $30 per person per month plan through Microsoft, Google, OpenAI, or Anthropic.
And of course, just banning AI or doing nothing at all also creates risk because there will still be a segment of your staff who will continue to use AI. You just won't know who it is and what they're doing with it, and you're not able to gain the benefits of working with AI because this person is keeping it a secret. They are deciding how they will spend the time they are reclaiming through AI use.
Let's talk about AI guidelines, what they need to include and how you can get started writing them.
First, there are going to be three operational layers for working with AI at your DMO. Internal guidelines will be the most important of those.
Here I give credit to Roxanne Steinhoff and Kara Franker, who's now the head of the Florida Keys DMO, as part of the AI Opener for Destinations program created by Group NAO and Miles Partnership.
Roxanne Steinhoff created a framework for how destinations can think through what should be in their AI guidelines. It is not my IP, so I cannot share it, but you can find Roxanne Steinhoff on LinkedIn or reach out to her through the AI Opener for Destinations program.
It's really about walking through these six elements. You need to think about your vision for AI. Do you have a high risk or low risk approach? Are you "move fast and break things" or very conservative with how you approach AI?
And I'll share the Brand USA AI guidelines on the slides that follow, so you will see how we've made these decisions. But you can't just copy our AI guidelines, because you probably will have a more conservative approach to working with AI than we do. I want us to be very AI-forward, and that could be a different approach from yours.
Ethical principles. You need to determine as a group, as a DMO, how transparent you want to be about your use of AI.
Are you going to include language in your privacy policy? Are you going to include a note by every single piece of content that had an AI component? As an organization, you need to come to an agreement on what you think is right. How are you keeping a human in the loop?
Because AI is not yet at the point where we can trust it to do whatever we want without supervision. Then confidentiality and safety: how is your organization going to protect PII? How are we making sure that we aren't permanently putting licensed material into these large language models?
Governance and accountability: how are you going to monitor compliance with your company's AI policy? Do you have somebody overseeing staff training or AI use? And then finally, offer some practical tips so people know what tools are allowed and, if there are new tools they want to use, how they can go about using them.
Now I'm going to share what the Brand USA guidelines look like. We start with our vision, and our vision is to set the global standard for responsible, innovative AI tourism promotion. Again, that might be a little more aggressive than your organization, so that's a question that you have to answer for yourself.
Then when we get to transparency, we want to share when AI has been used to substantially assist with a task or a piece of content. We want to disclose internally every time we use AI, because for internal use, I see this as part of education. People won't know how to recognize AI ideas or input until you start spelling it out for them. This is an opportunity to shine a light on the cool stuff that AI can do. I like to say lead with wonder: these tools are not dry and boring, they are a lot of fun, so lean into the fun side of this as well. I also like "we embrace a culture of responsible experimentation." We want to show when we are experimenting and using AI in new ways, because that's going to kick off ideas for what other people can do using AI.
And for responsibility, we always want to keep a human in the loop. Nothing we're doing here has automated the human out in any way. This example about autocorrect comes from the City of Boston. In May of 2023, the City of Boston released generative AI guidelines that explained keeping the human in the loop like this: think about writing an email with autocorrect on. If autocorrect changes a word in your email and you hit send, it's still your email. Those are still your thoughts. You have still represented that this is coming from you. It doesn't matter if autocorrect changed a word; you don't get to blame something on "oh, well, that came from AI, so I thought it would be correct." It is your job to be the human in the loop, to fact-check every single thing that's part of that AI output, and to make sure it is representative of your brand and your destination.
You also don't get to tell AI, "would you fact-check this for me?" Because it is incapable of doing that. Large language models have no relationship to truth. They are just putting words in a particular order to try to make you happy. That means if you ask one to fact-check something, it's going to give you whatever answer it thinks will make you happy in that moment.
Responsibilities: at Brand USA, we have a responsibility to keep our staff trained on using AI. Part of our role in providing these tools to the staff is making sure everyone knows how to use them. Then confidentiality and safety; I walked through why that was important already, as well as governance and accountability. You need to have processes in place because, should something go wrong in the future, you need to be able to prove that you have a long history of doing the right thing and checking things.
This could be the case for those of you who have AI chatbots on your website: you may want to set up a regular cadence of red teaming, or trying to trick your chatbot into saying something that it shouldn't say. The reason to do that is that the underlying models change, and we have to stay on top of testing these models. We can't assume it's one and done: "great, that worked last week, so therefore it will work perfectly forever." You have to keep your hands in it all the time, and part of that is going to be constant red teaming, because you don't want to find out through a newspaper article three months from now that your AI model said things it shouldn't.
Part of why you keep a regular red-teaming schedule, have a process for it, write down when you do it, and annotate whether you saw any errors, is that if in three months or three years something does go off the rails, you have a track record. You want to be able to point back at your governance documents and say, "it is unfortunate that this happened; however, we are staying on top of red teaming this AI, and this is the proof that we did." So it's something to think about if you have a chatbot on your website.
So then to go back to the operational layers of working with AI, we had the internal guidelines. Now you need to think about external partnerships and working with vendors. We want to make sure that all of our vendors understand how they are using AI in their work, particularly if we are buttoned up with our own AI policy.
You wouldn't want to find out that your media agency has completely different guidelines and levels of transparency when it comes to working with AI. It's important for DMOs to have those conversations with your agency partners as new contracts come in. Take a moment to see how AI is addressed. It may be time to start inserting your own clauses saying, we want to analyze the results of this campaign using AI, and ensure that is upfront in this contract, so we're not wondering down the line.
These are the kinds of things to include on a checklist. Is your own IP going to be put into a model? Is the training turned on or off? Who owns those outputs? How will third parties your agency works with be using AI?
Is there an incident reporting process if something goes wrong with your AI materials? Is your agency providing human review at the same level that your company would? It's just making sure that your agencies and partners are aligned on AI use.
And then finally, tool vetting. How do we know whether we ought to be using any of these AI tools? Laura Haaber Ihle gave a wonderful presentation on this last summer in an AI Opener for Destinations bootcamp. She put it this way: remember, you are buying risky tools from strangers. So have some kind of procurement process set up in advance, so that you are not just handing away your IP and your data to every company that comes along.
This is my own personal process; you would want to bring your technology team and your legal team into it. I start by getting the terms of use and the privacy policy and uploading them to ChatGPT or your favorite tool, then asking what my DMO should be worried about regarding these terms.
You may want to have SSO as part of your security policy. Does that SaaS tool provide single sign-on? Are they using your content to improve their services? Do we own the output of what these third party tools make?
These are all going to be shades of gray. But you want to understand the answers to those questions before you bring the tool to legal and IT and say, do we think that this makes sense at this price point for our organization?
I want to talk through our AI mantras at Brand USA.
First, I want to remind you as leadership to stay focused on your mission. AI is a tool, not the solution itself. The questions to keep asking are: what problems are we solving for our stakeholders, and can generative AI help us solve those problems better or faster? The mission doesn't change because of AI, but the way we achieve that mission might look a little different.
I encourage us to start small and think big. That means starting with the tiniest little use cases - for many people AI is still "fancy Google," or it's the machine that rewrites my emails for me. Those are completely fine use cases for AI. They get you into the programs and starting to use them. Ideally, you get invested with AI and start to use it for more interesting, meatier use cases. And ultimately, AI can help us create websites and write apps and software tools that will solve bigger problems for us.
AI today can help you write tools and create software that solves problems, but you're not going to get to those bigger use cases unless you start by treating it like fancy Google, unless you start with the teeny, tiny ideas that don't seem like a big deal. So it's fine if that's where you are. That's great, because you've got to go through all of the steps to get to the really exciting use cases. Don't be afraid of starting small to ultimately achieve those bigger ideas.
AI adoption at our organizations requires both a top down and a bottom up approach. This entire webinar is about all of the things that ideally you will put into place to make your staff feel supported in their adoption of AI. We need to provide the paid tools. We need to provide those guidelines.
You need the individual staff members to find ways that AI makes sense for their own jobs and to start to build and stack those use cases on each other.
That turns into meeting with the department and talking through the ideas that everybody should be using.
Lead with wonder because you are not going to get your staff on board by chastising them about AI use. It's going to happen through everyone's natural excitement about how cool this stuff is. It's okay for it to be fun. AI doesn't have to steal from us the parts of our job that we think are fun. I don't use AI really for writing unless the thing I am writing is super corporate boilerplate copy, because I don't like writing that anyway. And that's the type of writing that AI happens to be very good at.
But in terms of ideation and coming up with problem solving opportunities, I'm using AI every single day for those types of use cases.
If you are curious how you can level up your own AI use, just keep it open on your computer screen, on a second monitor if you have one. Every day, take something you are doing, throw it into AI, and see if it can help you or not. I like that approach because the opposite approach is something I think Gen X is being accused of doing, which is, "oh, I love AI. I am totally in support of AI, and I'm gonna use it next week because I have this use case that I read was a really good use case, so I'm saving my AI for that."
Don't do that. Just throw something in there and see how it works. Don't wait for that perfect use case, because it's never going to come. OpenAI does not have a list of perfect use cases for DMOs. They are worrying about inventing artificial superintelligence, right?
They are onto the next thing. So it is on us to share our ideas for how we're using AI because that's what's going to help the entire US tourism industry remain competitive.
My final note is that it is not cheating when we use AI at work, and I say the term cheating very specifically because Harvard Business Review put out a meta study recently of AI adoption across different regions of the world. In over 200 studies, they found that women's use of AI lagged behind the use of men by 20 points, no matter what subset of people they were studying. One reason for that is that there's something in the back of our heads telling us, this is a bad shortcut to take, this is cheating.
And then you read everything about students cheating and what a maelstrom this has created for education. You could be forgiven for thinking it would be cheating to work with AI. But at work, particularly for DMOs, we are stewards of public funding. It's on us to make sure we are spending public funds responsibly.
And I think that means taking seven minutes on the job that used to take you seven hours, because if that's what AI can help us to do, I think it's incumbent upon us to run with it. So that is my encouragement for you.
We have a couple of questions.
How many of these tools should you have at your organization? I would say, honestly, one. The best option would be to get the paid Team account for ChatGPT. That is my number one resource.
That said, I like to have two different language models because I think it shows people they do different things; it shows our staff that AI isn't a monolith. We use Claude and ChatGPT internally, and I like Claude because if you work with OpenAI and ChatGPT a lot, I find that at a certain point all of the writing gives you the same answer over and over and you can't get out of that logjam.
I find that that happens less in Claude. I will get more creative original ideas there and the writing's a little better, but if I were to only use one for me, it would be ChatGPT.
What are my favorite resources, outlets, and thought leaders for keeping up with AI development?
In terms of podcasts, I listen regularly to the Hard Fork Podcast from the New York Times and to the Marketing AI Institute. They have their own standalone podcast with Paul Roetzer and Mike Kaput.
I really like Ethan Mollick. He is a professor at the Wharton School of Business and writes very prolifically about AI and the science of change management and working with AI. He has a lot of interesting insights.
Connor Grennan, who is the Head of AI at NYU also does a lot of work with change management and posts about that online. Both Ethan and Connor have email newsletters that I find really helpful.
Then we have some requests for use cases for ChatGPT. I like to say there is no prompt pack to download, right? I would start by treating it like an intern. These things are the smartest intern you have ever had, but they don't know who you are. They don't know what you do.
Just like how you have to overexplain stuff to an intern, overexplain what you are doing to the AI. That's giving it more context, that will help it give you better answers.
Now we have a question from Beth about where in the organization we have seen ownership of AI usage and policies. Ideally it should be a blend of people.
I could see that living with your COO and I have had conversations with COOs who are putting together their own policies.
Outside of that role, I think it's a partnership between IT and legal, but for the rollout to happen successfully, you want leadership from all across your organization. You want HR involved, you want the operations team involved, and then you want to make sure the leaders of the other teams are champions as well. Because if your CEO is a champion but your CMO isn't, you're not going to have the same flow of AI education and acceptance throughout the organization that you might want.
All right. When using a Team account, do I give each of my vendors a seat, or is this where legal agreements are needed? You want your vendors to have their own Team or paid accounts. A Team account is tied to your email addresses, so your vendors can't be on your Team account unless they have an email through your company.
That's where you will want stricter controls set up, where they make you comfortable that they have their own Team or Enterprise accounts. For the most part, it will be hard for them to be on your account.
Do I have recommendations for the best industry AI training or certification?
I think of this webinar series as an extended course in AI. And I am working with a tool called Descript, which is an AI video editing tool, to edit these webinars into bite-sized content on TikTok.
Will I have an official certification as part of that? Stay tuned and we'll see how it rolls out. And that's the thing, any certification right now is only going to be as official as you decide it is. There is no official governing body of AI handing out official certifications.
I think there are a lot of really useful classes on LinkedIn Learning. Find the people that you think are interesting; most of them are creating educational and video content around the work that they're doing. Allie K. Miller is one of those people with an AI course that ends with a certification.
So is Connor Grennan. So if you want a certification, I think that's possible, but there is no industry-wide cert yet.
If I had more time or more of a legal background, I would go for the AIGP certification from the IAPP, the International Association of Privacy Professionals; I am very interested in it. The certification is called the Artificial Intelligence Governance Professional, and even going through the coursework for it is an education. I have learned a lot about AI in the process of learning about AI governance.
This will be our last question: are there any AI tools for DMO websites specifically, and for website users, that you recommend? If that's a question about chatbots for websites, there are a number of different companies. We just launched our own AI itinerary builder tool: please go to AmericaTheBeautiful.com. We have a partnership with a company called Mindtrip that you will find on that website, with buttons all across the site that allow you to engage with this AI trip planner tool. We are learning all kinds of things about working with chatbots and itinerary planners through that process.
There's a number of other great companies doing that work, and I could see that being a fun topic for a future webinar.