Foundation concepts of AI for tourism professionals. This comprehensive introduction covers how Large Language Models work, practical prompting strategies, campaign strategy development, AI governance and security, and hands-on ChatGPT setup. Perfect for beginners looking to understand AI fundamentals and start using AI tools professionally.
After watching this video, you will be able to:
- Explain how Large Language Models generate text and why they hallucinate
- Apply practical prompting strategies, including breaking a task into steps
- Ground a prompt in real research to support campaign strategy development
- Identify key AI governance and security watch-outs for professional use
- Set up ChatGPT for professional work
Hello everyone and welcome. My name is Janette Roush and I'm the Chief AI Officer for Brand USA. Today we're going to be doing Generative AI 101, which is really the foundation for everything else that we're going to be talking about in this program.
So I want to start by just defining a few terms because there's a lot of jargon in the world of AI and it can get really confusing really quickly. So when we talk about ChatGPT, what we're actually talking about is an interface. ChatGPT is the website, it's the app, it's the thing that you go to. But what's running behind ChatGPT is something called a large language model, or an LLM.
And so when people say LLM, they're talking about the actual AI model that's doing the work. And there are lots of different LLMs. There's GPT-4, which is made by OpenAI. There's Claude, which is made by Anthropic. There's Gemini, which is made by Google. There's a whole bunch of different LLMs, and each of them has different strengths and weaknesses.
But the key thing to understand is that an LLM is essentially a really, really, really sophisticated autocomplete. What it does is it looks at all of the text that you've given it, all of the context, and then it predicts what the next word should be. And then it predicts the next word after that, and the next word after that, and it keeps going until it's finished the response.
And so the way that I like to think about it is, you know when you're typing on your phone and your phone suggests the next word? That's essentially what an LLM is doing, except it's doing it at a much more sophisticated level because it's been trained on an enormous corpus of text from the internet.
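To make that concrete, here is a toy sketch in Python. It is purely illustrative and nothing like a real LLM's scale: it counts which word follows which in a tiny "training" text, then generates new text by repeatedly picking the most likely next word.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" (real LLMs train on trillions of words).
corpus = (
    "winter travel in minnesota means snow . "
    "winter travel means ice fishing . "
    "travel in minnesota means cross country skiing ."
).split()

# Count which word follows which (a simple bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

# Generate text by repeatedly predicting the most likely next word.
word = "winter"
output = [word]
for _ in range(6):
    if word not in next_word_counts:
        break
    word = next_word_counts[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # e.g. "winter travel in minnesota means snow ."
```

A real LLM is doing the same kind of next-token prediction, just with billions of learned parameters and far more context than one preceding word.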
But the key thing to understand—and this is really important—is that it is not pulling facts from a database. It is not going to Wikipedia and checking. It is not verifying information. It is predicting what word is most likely to come next based on patterns that it learned during training.
And that means that it can and will make things up. It will hallucinate. It will give you information that sounds very confident and very convincing but is actually completely wrong. And so you as the human user are responsible for fact-checking, for verifying, for making sure that the information that the AI is giving you is actually correct.
So that's the first really important thing to understand. AI is not a database of facts. It is a prediction machine. And that has implications for how you use it.
Now, one of the things that I hear from a lot of people who are new to AI is they have what I call "fear of the blank prompt." They open up ChatGPT, they see that empty text box, and they don't know what to type. They don't know how to start. And so what I want to do is just give you a few really practical examples of how you can use AI in your day-to-day work.
So one of the simplest use cases is just asking it to help you with data. So for example, let's say you're working in Excel and you need to write a formula but you can't remember the syntax. You can just ask ChatGPT, "How do I write a formula that does X, Y, and Z?" and it will give you the formula. And then you can copy and paste that formula into Excel, and most of the time it just works.
Or let's say you're trying to clean up some data. You've got a spreadsheet with a bunch of messy data and you need to reformat it. You can ask ChatGPT, "How do I do this?" and it will tell you.
So those are really simple, practical use cases that don't require any special skills. You just have to know how to ask a question.
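As a hedged illustration (the column names and data here are hypothetical, and this assumes pandas 2.x), here is the kind of snippet ChatGPT might hand back for a cleanup question like that: it trims stray whitespace, normalizes capitalization, and standardizes mixed date formats. For the Excel case, the answer might be a one-line formula such as =AVERAGEIF(A2:A100,">0") to average only the positive values in a column.

```python
import pandas as pd

# Hypothetical messy visitor-inquiry data, the kind a DMO might export from a form.
df = pd.DataFrame({
    "city": ["  Duluth", "minneapolis ", "ST. PAUL"],
    "inquiry_date": ["01/15/2025", "2025-01-16", "Jan 17, 2025"],
})

# Trim whitespace and normalize capitalization.
df["city"] = df["city"].str.strip().str.title()

# Parse the mixed date formats into one consistent datetime type.
df["inquiry_date"] = pd.to_datetime(df["inquiry_date"], format="mixed")

print(df)
```

Either way, the earlier rule applies: run it on a copy of your data and eyeball the result before trusting it.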
Now, let me show you a slightly more interesting example. I'm going to ask ChatGPT a question and we're going to see what happens.
So I'm going to say, "Tell me about Minnesota's winter weather. I want to know what the average temperature is, what the snowfall is like, and what activities are popular during the winter."
And so ChatGPT is going to take that prompt and it's going to generate a response. And you can see it's giving me information about average temperatures, about snowfall, about winter activities like ice fishing and snowmobiling and cross-country skiing.
Now, here's the thing: I don't know if this information is correct. It sounds plausible. It sounds like it could be right. But I would need to go verify this information before I used it in any kind of official capacity, before I put it in a marketing campaign, before I put it on a website. Because the AI could be making this up. It could be hallucinating.
And so this is where the "human in the loop" comes in. The human has to be the fact-checker. The human has to be the one who verifies the information.
Now, let me talk about a really important concept, which is "how to cheat" using AI. And I put "cheat" in quotes because it's not actually cheating, but a lot of people think of it as cheating.
So let me give you an example. Let's say you're in school and you have to write a six-page paper about the Iliad. And you think, "Oh, I'll just ask ChatGPT to write the paper for me." So you type in, "Write me a six-page paper about the Iliad," and ChatGPT spits out six pages of text. And you turn it in.
Here's the problem: That paper is probably going to be mediocre at best. It's going to be generic. It's not going to have your voice. It's not going to have your insights. And it's probably going to have some factual errors because, remember, AI hallucinates.
So that's not how you should use AI. That's the lazy way to use AI.
The smart way to use AI is to break the task down into steps. So instead of asking for the final product all at once, you ask for help with each component of the task.
So for the Iliad paper, you would start by asking, "What are some potential thesis statements for a paper about the Iliad?" And ChatGPT gives you five or six options. And you look at them and you think, "Oh, that one's interesting. I like that one."
Then you ask, "Can you help me create an outline for a paper with that thesis statement?" And ChatGPT gives you an outline.
Then you say, "Okay, let's start with the introduction. Can you help me draft an introduction?" And ChatGPT drafts an introduction. And then you read it and you edit it and you make it your own.
And you keep going through each section of the paper, working with the AI as a partner, as a collaborator, rather than just asking it to do the whole thing for you.
And the result is that you end up with a much better paper. It has your voice. It has your insights. And you learned something in the process because you were engaged in the work.
And so that's what I mean by "how to cheat." It's not about being lazy. It's about being strategic. It's about using AI as a tool to help you do better work, not to do the work for you.
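To make that step-by-step pattern concrete, here is a minimal sketch in Python, assuming the openai package (v1-style client) and an API key in your environment; the model name and prompts are just placeholders. The point is that each request carries the whole conversation, so every step builds on the last:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment
messages = []

def ask(prompt: str) -> str:
    """Send one step of the task, keeping the whole conversation as context."""
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer

# Step 1: ask for options, not the finished product.
print(ask("Suggest five possible thesis statements for a paper about the Iliad."))

# Step 2: build on the option you chose (here we just pick the first).
print(ask("Let's go with the first one. Draft an outline for that thesis."))

# Step 3: draft one section at a time, then edit it yourself.
print(ask("Help me draft the introduction from that outline."))
```

The design choice that matters is the shared messages list: because each call replays the earlier turns, the third request "remembers" the thesis and the outline, exactly like staying in a single ChatGPT conversation.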
Now, I want to make a really important point, which is that it is not cheating to use AI at work. A lot of people feel guilty about using AI. They feel like they're not doing their job if they're using AI to help them. And I want to push back on that really strongly.
Using AI at work is not cheating. It's evolving. It's adapting to new tools and new technologies. It's the same as when email was invented and people started using email instead of writing letters by hand. It's the same as when calculators were invented and people started using calculators instead of doing math by hand.
AI is a tool. And if you're not using the best tools available to you, then you're putting yourself at a disadvantage. You're making your job harder than it needs to be.
And so my encouragement to all of you is: Use AI. Use it without guilt. Use it strategically. Use it to do better work, to be more productive, to free up your time for the things that really require human judgment and human creativity.
Now, let me show you a more complex example of how you can use AI for something like campaign strategy development.
So let's say you're a destination marketing organization and you want to develop a campaign to promote winter travel to your destination. And you want AI to help you with that.
Here's the problem: If you just ask ChatGPT, "Help me develop a winter travel campaign," it's going to give you generic information. It's going to hallucinate. It's not going to be based on real data about your destination.
And so what you need to do is you need to "prime the prompt" with a source of truth. You need to give the AI actual information, actual data that it can work with.
So here's how you would do that. First, you would go do some research. You would go to Google or to Perplexity.ai, which is an AI-native search engine, and you would gather information about winter travel trends, about your competitors, and about what travelers are looking for in a winter destination.
And then you would take all of that research and you would upload it to a tool like Google NotebookLM. NotebookLM is a Google tool that allows you to upload documents (PDFs, Google Docs, website links) and then ask the AI questions about those documents. It's designed to answer only from the sources you uploaded and to cite them, so it's far less likely to make things up than an open-ended chat, though you should still spot-check what it tells you against the citations.
And so you upload all of your research to NotebookLM, and then you ask it, "Based on these sources, what are the key trends in winter travel? What are travelers looking for? What are my competitors doing?"
And NotebookLM gives you answers based on your research. And then you can take those insights and you can use them to inform your campaign strategy.
And so that's what I mean by "priming the prompt" with a source of truth. You're giving the AI real data to work with, and that results in much better, much more accurate output.
Now, there's still a human in the loop. You still have to read the output, you still have to apply your judgment, you still have to make strategic decisions. But the AI has done a lot of the heavy lifting for you in terms of synthesizing all of that research and pulling out the key insights.
Now, I want to talk about some of the watch-outs and governance issues that you need to be aware of when you're using AI professionally.
The first big issue is data security. If you are using a free version of ChatGPT or any other AI tool, you need to understand that your data may be used to train future models. That means that anything you type into the AI (any proprietary, confidential, or sensitive information) could end up in the training data and could potentially be surfaced to other users in the future.
And so if you're using AI for work, you absolutely must use a paid version and you must turn off the setting that says "Improve the model for everyone." That's the setting that controls whether your data is used for training. And you need to turn that off.
The second big issue is data privacy. If you are putting Personally Identifiable Information—so people's names, email addresses, phone numbers, any kind of PII—into an AI tool, you are potentially violating privacy laws like GDPR. And more importantly, you're potentially putting people at risk.
And so the rule is: Do not put PII into AI tools. Just don't do it. If you need to work with data that includes PII, you need to anonymize it first. You need to strip out the PII before you upload it to an AI tool.
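As a minimal illustration of that anonymization step, here is a hedged Python sketch that masks email addresses and US-style phone numbers with regular expressions before a file ever goes near an AI tool. Real PII scrubbing is much harder than two regexes (names and street addresses especially), so treat this as a starting point, not a guarantee.

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text: str) -> str:
    """Replace emails and US-style phone numbers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

sample = "Contact Jane Doe at jane.doe@example.com or 612-555-0147."
print(redact(sample))
# -> "Contact Jane Doe at [EMAIL] or [PHONE]."
```

Notice that the person's name still comes through, which is exactly why the rule is "don't put PII into AI tools" rather than "trust a script to catch it."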
The third issue is hallucinations. I've mentioned this already, but it's worth repeating. AI will make things up. It will give you information that sounds very confident but is actually wrong. And so you need to fact-check. You need to verify. You cannot just trust the AI blindly.
And let me give you a real example. I asked ChatGPT to write my bio for a conference. And it came back with this very impressive bio that said I had won all these awards and had done all these amazing things. And I read it and I thought, "This sounds great, but I don't think I've actually done all of these things."
And so I went through and I fact-checked every single claim in the bio. And it turned out that about half of it was made up. ChatGPT had hallucinated a bunch of accomplishments that I didn't actually have.
And so if I had just used that bio without checking, I would have been claiming credit for things I didn't do. And that would have been really embarrassing and potentially damaging to my reputation.
And so the lesson is: Always fact-check. Always verify. Never trust the AI blindly.
The fourth issue is bias. AI models are trained on data from the internet. And the internet is full of human bias—racial bias, gender bias, cultural bias, all kinds of bias. And so the AI learns those biases and reproduces them in its output.
And so you need to be aware of that. You need to read the output critically. You need to ask yourself, "Is this reinforcing stereotypes? Is this making assumptions? Is this biased in some way?" And if it is, you need to push back on it. You need to edit it. You need to correct it.
The fifth issue is sustainability. AI models use an enormous amount of energy and water. Every time you use ChatGPT, every time you generate a response, there are data centers running in the background, consuming electricity to run the servers and water to cool them. And the environmental impact of that is significant.
And so you need to be thoughtful about how you use AI. You don't need to stop using it altogether, but you should be aware of the impact and you should use it intentionally, not frivolously.
Now, let me show you how to set up ChatGPT for professional use.
The first thing you want to do is go into the settings and customize your instructions. You can tell ChatGPT things like, "I work in destination marketing. I prefer a professional tone. I want you to ask clarifying questions before you give me an answer." And then every time you start a new chat, ChatGPT will remember those preferences and will tailor its responses accordingly.
The second thing you want to do is go into the data controls and turn off the setting that says "Improve the model for everyone." This is the setting that controls whether your data is used for training. You want to turn that off.
The third thing you want to do is familiarize yourself with the model selector. There are different models available—there's GPT-4o, which is the standard conversational model, and there are reasoning models like o1, which are better for complex problem-solving and coding. And you want to choose the right model for the task that you're doing.
The fourth thing you want to understand is Custom GPTs. Custom GPTs are like apps. They're pre-configured versions of ChatGPT that have specific instructions and specific knowledge built in.
So for example, at Brand USA, we've built a Custom GPT for our travel and expense policy. And so any employee can go to that Custom GPT and ask questions about the travel policy, and the AI will answer based on the official policy document that we uploaded.
And so Custom GPTs are a really powerful way to make AI more useful and more accurate for specific use cases.
The fifth thing you want to understand is Projects. Projects are a way to organize your chats. So if you're working on a big campaign, you can create a project for that campaign and then all of your chats related to that campaign will be organized together.
And you can also upload files to a project, and those files will be available to all of the chats within that project. So it's a way to keep your work organized and to make sure that the AI has the context it needs.
Now, let me talk about next steps.
There are two approaches to adopting AI in an organization. There's the top-down approach and there's the bottom-up approach.
The top-down approach is about creating guidelines and policies. This is where leadership says, "Here are the rules for how we use AI. Here's what you can do and what you can't do. Here's how we protect data and privacy and security."
And that's important. You need to have those guardrails in place.
But the top-down approach alone is not enough. You also need the bottom-up approach, which is about people actually using AI in their day-to-day work.
And the bottom-up approach is messy. It's experimental. It's about trying things and failing and learning and iterating. And it's about building skills and building confidence and building intuition for how to use these tools effectively.
And so my encouragement to all of you is: Don't wait for the perfect use case. Don't wait for someone to tell you exactly how to use AI. Just start using it. Use it for small things. Use it to rephrase an email. Use it to write an Excel formula. Use it to brainstorm ideas.
And as you use it, you'll start to build an intuition for what it's good at and what it's not good at. You'll start to understand how to prompt it effectively. You'll start to see opportunities where AI can help you.
And that's how you build the skills that you need to tackle the bigger, more complex use cases.
And so the bottom-up approach really comes down to this: you have to do it. You have to dive in. You have to experiment. You have to make mistakes and learn from them.
And that's how you become proficient with these tools.
So to wrap up: AI is a prediction machine, not a fact database. It will hallucinate, so you need to fact-check. The best way to use AI is to break tasks down into steps and to prime the prompt with sources of truth. You need to use paid tools and turn off model training for professional use. And you need to just start using it, even imperfectly, to build the skills that you need.
Thank you so much for your time today. I'm really excited to see how all of you are going to use AI in your work. And I'm looking forward to the rest of the sessions in this program. Thank you.