Field Notes

A research log of emerging AI concepts, frameworks, and technical insights for destination marketing. These are living documents—observations and definitions that evolve as the technology advances.

By Janette Roush · Updated regularly

The Leapfrog Thesis: Why Some Teams Should Skip Chatbots Entirely

AI Strategy · Agentic AI · Digital Leapfrogging · Personal OS

In the early 2000s, entire regions of sub-Saharan Africa skipped landline telephone infrastructure and went directly to mobile phones. Banking in Kenya skipped branch networks for M-Pesa. Estonia skipped paper government for digital-first citizen services. The pattern is always the same: when a transitional technology arrives alongside a more advanced alternative, the organizations that skip the middle step gain a structural advantage.

I believe we are watching this pattern play out right now in AI adoption. The transitional technology is the chatbot workflow—ChatGPT, Copilot, Gemini—where a human types a prompt, reads the output, copies it somewhere else, and manually verifies the result. The more advanced alternative is what I use every day: a setup where AI operates directly in my files and systems, completes actual tasks, and submits its work for my review.

I am not a developer. I am a Chief AI Officer. And I do all my work inside a code editor—not because I learned to code, but because that's where the AI lives.

What My Actual Workflow Looks Like

My entire work life runs through a system I call a Personal OS. It's a folder of markdown files on my local machine, organized with a simple numbered structure: 00_Inbox for daily agendas, 10_Projects for active work, 20_Areas for ongoing responsibilities, 30_Resources for reference materials, and 99_System for configuration files that teach Claude how to work with me.
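The folder layout above is simple enough to stand up in a few lines. This is a minimal sketch, not the author's actual tooling: it just creates the five numbered folders in a scratch directory so you can see the skeleton.

```python
from pathlib import Path
import tempfile

# The five top-level folders of the Personal OS described above.
PERSONAL_OS_FOLDERS = [
    "00_Inbox",      # daily agendas
    "10_Projects",   # active work
    "20_Areas",      # ongoing responsibilities
    "30_Resources",  # reference materials
    "99_System",     # configuration files that teach the AI how to work with you
]

def scaffold_personal_os(root: Path) -> list[Path]:
    """Create the numbered folder structure and return the created paths."""
    created = []
    for name in PERSONAL_OS_FOLDERS:
        folder = root / name
        folder.mkdir(parents=True, exist_ok=True)
        created.append(folder)
    return created

root = Path(tempfile.mkdtemp())
folders = scaffold_personal_os(root)
print([f.name for f in folders])
# → ['00_Inbox', '10_Projects', '20_Areas', '30_Resources', '99_System']
```

The numbering is doing real work: it forces a stable sort order in every file browser, so the inbox is always on top and system files are always on the bottom.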

At the root sits a file called CLAUDE.md. Claude reads this automatically every time I start a session. It tells Claude its role ("Chief of Staff for Personal Operating System"), where my files live, and how to behave ("Read local files before asking for context. Concise, actionable responses—no lectures."). Every project folder can have its own CLAUDE.md with project-specific instructions.
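An illustrative CLAUDE.md might look like the sketch below. The role line and behavior rules are quoted from this article; the rest is a hypothetical reconstruction, not the author's actual file.

```markdown
# CLAUDE.md

## Role
Chief of Staff for Personal Operating System.

## Where things live
- 00_Inbox: daily agendas
- 10_Projects: active work (each project may have its own CLAUDE.md)
- 20_Areas: ongoing responsibilities
- 30_Resources: reference materials
- 99_System: configuration

## How to behave
- Read local files before asking for context.
- Concise, actionable responses. No lectures.
- Always confirm before deleting.
```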

Here is what a typical work session looks like: I open VS Code, type /today, and Claude scans all my project folders for tasks with due dates, checks what's overdue, and creates a daily agenda in my Inbox. Then I start working. "Add the new podcast episode to the site." Claude reads my PODCAST_GUIDE.md—a markdown file I wrote that teaches it exactly how to update the three files involved—then makes the changes across the codebase, and presents me with a clean diff to review. I approve. The site deploys automatically.
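The /today step can be made concrete with a sketch. The task syntax below (`- [ ] task (due: YYYY-MM-DD)`) is an assumed convention for illustration, not the author's actual format; the logic is the part that matters: scan every project file, find dated tasks, flag what's overdue.

```python
import re
from datetime import date
from pathlib import Path

# Assumed task convention: "- [ ] Write recap (due: 2026-03-01)"
TASK_RE = re.compile(r"- \[ \] (?P<task>.+?) \(due: (?P<due>\d{4}-\d{2}-\d{2})\)")

def build_agenda(projects_dir: Path, today: date) -> str:
    """Scan project markdown files for open tasks with due dates
    and return a daily agenda, flagging anything overdue."""
    lines = ["# Daily Agenda", ""]
    for md_file in sorted(projects_dir.rglob("*.md")):
        for match in TASK_RE.finditer(md_file.read_text()):
            due = date.fromisoformat(match["due"])
            flag = "OVERDUE: " if due < today else ""
            lines.append(f"- {flag}{match['task']} (due {due}, from {md_file.name})")
    return "\n".join(lines)
```

In practice the AI runs this kind of scan itself when it reads the folder; the sketch just shows that "check what's overdue and build an agenda" is a mechanical operation once tasks live in plain files.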

I didn't write a single line of code. I didn't copy-paste anything. I didn't translate between a chat window and my actual systems. I described the work, reviewed the output, and approved it. The same way I'd review work from any team member.

And it's not just website work. When I'm preparing for a keynote, the same system applies. I tell Claude I'm building a 30-minute talk for a specific audience—say, state tourism directors who've never used AI tools. Claude reads my context files, pulls from my field notes on frameworks I've developed, and drafts a narrative arc with section beats. I reshape it, ask for a tighter opening, swap in a better example. Each round, Claude updates the working draft in my project folder. By the time I'm rehearsing, I have a complete outline, speaker notes, and supporting data points—all built iteratively in my files, not copied from a chat window. The process looks exactly like working with a speechwriter, except the speechwriter read my entire body of work before the first draft.

Why the Chatbot Phase Stalls

The chatbot adoption playbook looks something like this: buy enterprise licenses, train the team on prompting, designate "AI champions," build a prompt library, create an acceptable use policy, and wait for adoption to spread organically.

Here is what actually happens: 20% of the team uses it regularly, 30% tried it and stopped, and 50% never logged in. The AI champions burn out. The prompt library becomes stale. Leadership asks for ROI metrics and gets anecdotes.

This isn't a failure of the people. It's a failure of the paradigm. The chatbot workflow requires every individual on the team to develop a new skill—prompt engineering—before any organizational value is created. That's a literacy barrier disguised as a tool deployment. It's like requiring every employee to learn to code before you can modernize your website.

The deeper problem is that the chatbot workflow puts the human in the wrong role. You're operating the AI—typing prompts, evaluating outputs, manually transferring results into the systems where work actually lives. You're a translator. This is cognitively expensive, interruptive, and the value disappears the moment you close the chat window. Nothing persists. Nothing compounds.

Why Markdown Files Change Everything

The Personal OS approach solves the persistence problem completely. Every project has a process_notes.md file that logs what happened in each session—decisions made, context gathered, what's still pending. When I come back tomorrow, Claude reads those notes and picks up exactly where we left off. No re-explaining. No lost context.

But here's the deeper insight: the markdown files aren't just notes. They're instructions. My PODCAST_GUIDE.md doesn't just document how to add a podcast—it teaches Claude the exact steps, file locations, and data formats. When I write a guide once, Claude can execute that workflow forever. I'm not prompting anymore. I'm building systems.

This is the difference between a chatbot and an operating system. A chatbot answers questions. An operating system runs your life. Every markdown file I create makes Claude more capable for the next session. The value compounds instead of resetting.

The Review-First Model

The key reframe is this: the critical skill for working with AI isn't prompting. It's reviewing.

In my workflow, I describe what I need at a high level. Claude reads the relevant context files, does the work across multiple files and systems, and presents me with changes to approve. My job is to review and accept—the same skill I've used for twenty years reviewing creative briefs, campaign strategies, and vendor proposals.

Reviewing work is something experienced professionals already know how to do. A destination marketing director, a convention sales lead, a communications VP—they all have deep expertise in evaluating whether work product meets the standard. That expertise transfers directly to reviewing AI output. No new skill required.

The organizational implications are significant. Instead of requiring every team member to develop AI literacy before any value is created, this model requires only that decision-makers can evaluate outputs—a skill they were hired for. The barrier to value drops from "everyone learns prompting" to "leadership can review work," which is already true on day one.

Trust Through Transparency

The most common objection is trust: "We're not ready to let AI make changes." This objection conflates autonomy with agency. In my setup, Claude has agency (it can do work) but not autonomy (nothing ships without my approval).

The trust architecture has several layers:

- Version control tracks every change Claude makes. Every edit is diffable and reversible. If something is wrong, I revert it—same as I'd handle any bad edit.
- Branch isolation means Claude works on a separate branch. Changes only reach the live site after I review and merge them.
- Process notes create a transparent log of what Claude did, why it did it, and what decisions were made. This is the audit trail.
- CLAUDE.md guardrails set boundaries upfront: don't touch the archives, don't preload profiles unless asked, always confirm before deleting.

This infrastructure isn't exotic. It's the same system software teams have used for decades. What's new is that a non-developer—a strategist—can use it just as effectively, because Claude handles the technical mechanics while I handle the judgment.

What Gets Leapfrogged (And What Doesn't)

To be clear about scope: the leapfrog applies to operational AI adoption, not all AI use. Chatbot conversations are still valuable for brainstorming, exploratory research, and strategy sessions. Nobody should skip those.

What gets leapfrogged is the institutional investment in chatbot-as-workflow: the prompt libraries, the copy-paste pipelines, the "AI-assisted content creation" processes where a human mediates every interaction between AI and the systems where work actually happens.

The leapfrog is most compelling for teams with digital deliverables: websites, data reporting, content management workflows, analytics dashboards. These are domains where an agentic tool can complete tasks end-to-end, where version control provides a natural safety net, and where the output is objectively verifiable.

The Advantage of Starting Late

There's an underappreciated advantage to being a late adopter right now. Teams that invested early in chatbot workflows have institutional habits built around copy-paste, prompt libraries, and human-mediated processes. These habits create organizational inertia. The team that "knows how to use ChatGPT" may actually resist the shift because it invalidates their hard-won expertise.

Teams starting from zero have no habits to unlearn. They can go directly to the more advanced paradigm—the same way a new company today would choose cloud infrastructure over on-premise servers. The absence of legacy investment is itself an advantage.

This doesn't mean chatbot-experienced teams wasted their time. Their AI literacy—understanding what AI can and can't do, how to evaluate outputs, how to spot hallucination—transfers directly to the review-first model. But the specific workflows and prompt libraries they built? Those are the landline infrastructure that gets leapfrogged.

What This Means for Your Team

If your organization hasn't fully committed to the chatbot workflow yet, you have a strategic choice: invest in the transitional technology everyone else adopted, or leapfrog directly to the model that's replacing it.

The leapfrog path is concrete. Set up a Personal OS—a folder of markdown files with a CLAUDE.md that gives Claude context about your work. Pick one workflow that's currently manual: website updates, event listings, content publishing. Write a simple guide in markdown that teaches Claude how that workflow works. Then let Claude do the work while you review the output.

This is a narrower, more focused adoption path than "give everyone ChatGPT licenses and see what happens." It requires less organizational change, produces measurable results faster, and builds toward the paradigm that's clearly winning.

The chatbot era gave us an important proof of concept: AI can do useful work. The Personal OS era delivers on the actual promise: AI can do the work where it lives, under human supervision, with compounding context that gets smarter every session. For teams that haven't yet committed to the middle step, the question isn't whether to leapfrog—it's whether you can afford not to.

Building Your DMO's AI Strategy: The Two-Priority Framework

AI Strategy · DMO · Framework · Source of Truth · GEO

Every DMO needs an AI strategy. The question is what goes in it. After working with destination organizations across the country, a clear framework has emerged: your AI strategy has two priorities, and both are required.

Priority One: Source of Truth. Your destination's information needs to be accurate wherever someone encounters it — your website, ChatGPT, Google AI Overviews, Perplexity, Gemini, or any third-party platform. Managing the destination narrative has always been the DMO's core job. AI added new channels to the same mandate.

Priority Two: AI-Capable. Every function in your organization uses AI fluently — marketing, research, partner services, operations. Team-wide capability with clear protocols. The value of AI compounds when the whole team uses it.

These two priorities reinforce each other. A strong Source of Truth makes your team's AI work more effective, and a capable team builds better data.

Start with the AI Audit. Before you build anything, you need to know where you stand. Three steps to do today:

1. Open ChatGPT, Perplexity, and Google AI Overviews. Ask each about your destination. Write down what's wrong, outdated, or missing. That list is your work order.
2. Search your top three visitor experiences. Does AI cite your website as the source — or someone else's?
3. Do your key pages have schema markup? Run them through Google's Rich Results Test. The answer will tell you where you are.
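The schema-markup check in the audit can be roughed out in code. This is a minimal regex-based sketch, not a substitute for Google's Rich Results Test: it just answers "does this page contain any JSON-LD at all?"

```python
import json
import re

def extract_json_ld(html: str) -> list[dict]:
    """Pull JSON-LD blocks out of a page's HTML: a rough first pass
    at answering 'do my key pages have schema markup?'"""
    pattern = re.compile(
        r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
        re.DOTALL | re.IGNORECASE,
    )
    blocks = []
    for raw in pattern.findall(html):
        try:
            blocks.append(json.loads(raw))
        except json.JSONDecodeError:
            continue  # malformed markup is itself a finding worth noting
    return blocks

# Hypothetical page fragment for demonstration.
page = """<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "TouristAttraction", "name": "Devils Tower"}
</script>
</head><body>...</body></html>"""

found = extract_json_ld(page)
print(found[0]["@type"])
# → TouristAttraction
```

An empty result on an attraction or event page is the signal: that page is invisible as structured data.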

What Source of Truth Requires:

Schema Markup (JSON-LD) — structured tags on your attractions, events, and accommodations. This is how AI crawlers understand your content as data, not just web text.
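For illustration, here is what generating such a block might look like. The attraction is hypothetical; the properties used (name, url, description) are common schema.org fields, and a real implementation would be validated against the schema.org vocabulary for the types you publish.

```python
import json

def attraction_json_ld(name: str, url: str, description: str) -> str:
    """Render a minimal schema.org TouristAttraction block, ready to embed
    inside a <script type="application/ld+json"> tag."""
    data = {
        "@context": "https://schema.org",
        "@type": "TouristAttraction",
        "name": name,
        "url": url,
        "description": description,
    }
    return json.dumps(data, indent=2)

print(attraction_json_ld(
    "Grand Teton Overlook",            # hypothetical attraction
    "https://example.com/overlook",    # hypothetical URL
    "A roadside viewpoint with panoramic mountain views.",
))
```

The point of the exercise: once attraction data lives in a structured generator like this, every page gets consistent markup and updates happen in one place.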

GEO-Optimized Content — Generative Engine Optimization means writing long-form, authoritative content that directly answers the questions AI tools are trained to respond to.

Structured Data Feeds — machine-readable feeds for your key assets. AI platforms ingest structured data. Unstructured content gets ignored or misquoted.

Authoritative Citations — AI systems favor sources that credible third parties reference. Earned media, strong backlinks, and verified profiles all reinforce your standing.

The Two-Year Roadmap. The content layer has to exist before the integration layer can work. Year One focuses on the content layer: AI visibility audit, schema markup, GEO-optimized content, verified profiles on AI-indexed directories, and a content freshness protocol (AI values recency). Year Two builds the integration layer: real-time partner data feeds, dynamic content, API partnerships with AI travel platforms, a destination AI assistant for owned channels, and analytics tracking AI-sourced discovery traffic.

Becoming AI-Capable Across the Organization. The second priority means AI fluency in every department:

- Marketing: audience analysis and targeting — using AI to identify emerging feeder markets, synthesize visitor research, and sharpen campaign targeting.
- Research: rapid insight synthesis — analyzing visitor surveys, competitive landscapes, and trend reports in hours.
- Partner services: personalized outreach — tailoring communications, co-op proposals, and grant summaries to each recipient.
- Operations: board prep and reporting — summarizing monthly metrics and building briefing documents in a fraction of the time.

Build the Governance. Without structure, AI adoption stalls at the enthusiasts and risk exposure grows quietly. Three things to stand up:

- An AI use policy — set guardrails before tools proliferate; short and actionable beats comprehensive and ignored.
- An AI champion per department — one designated advocate per function who shares wins, surfaces blockers, and keeps their team moving.
- An AI steering committee — cross-functional, meets quarterly, evaluates tools, tracks ROI, and aligns AI investment with strategic priorities.

Your First Three Moves:

1. Run the AI audit. Ask ChatGPT, Perplexity, and Google AI Overviews about your destination today. Document the gaps.
2. Check your schema markup. Use Google's Rich Results Test on your five most important pages.
3. Name one AI champion per department. That's your governance structure, started.

A state DMO, city DMO, and national tourism organization face different constraints. The strategic categories translate — the specifics are yours to define.

From Voice Recording to Live Website: How I Built the Wyoming Keynote Recap

Claude Code · Plaud · Workflow · Case Study

I gave a keynote at the Wyoming Governor's Conference on Tourism in February 2026. The talk covered AI's dual impact on tourism — how it's changing traveler behavior and how it's changing the way we work. I recorded the entire session on my Plaud device, a pocket-sized AI recorder that generates transcripts automatically.

Here's what happened next: I took that transcript and fed it into Claude Code. Within a single working session, Claude Code turned a raw voice recording into a fully designed, responsive website — complete with slide imagery, Brand USA typography, structured sections, and social sharing metadata. The result is live at wyoming-keynote-recap.vercel.app.

The workflow was three steps:

Step 1: Record with Plaud. I clipped the device to my outfit and let it capture the full 60-minute session. Plaud generated a raw transcript — no speaker labels, just a continuous stream of text — which gave me a starting point for everything I said, including the Q&A.

Step 2: Build with Claude Code. I gave Claude Code the transcript, my presentation slides, and Brand USA's design system (colors, fonts, logo specs). Claude Code read through the transcript, matched content to slides, and generated a complete HTML page with embedded slide imagery, responsive layout, and proper Open Graph tags for social sharing. The entire build happened in conversation — I described what I wanted, reviewed iterations, and refined the design through dialogue.

Step 3: Deploy to Vercel. One push to Bitbucket, connected to Vercel, and the site was live. Total time from raw recording to public URL: one working session.
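One concrete detail from Step 2 is worth unpacking: the Open Graph tags that make a recap page unfurl correctly when shared. The sketch below is a generic illustration (the helper function and image URL are hypothetical, not taken from the actual Wyoming build):

```python
def open_graph_tags(title: str, description: str, url: str, image: str) -> str:
    """Render the Open Graph meta tags a page needs so shared links
    show a proper title, description, and preview image."""
    props = {
        "og:title": title,
        "og:description": description,
        "og:url": url,
        "og:image": image,
        "og:type": "article",
    }
    return "\n".join(
        f'<meta property="{prop}" content="{content}" />'
        for prop, content in props.items()
    )

print(open_graph_tags(
    "Wyoming Keynote Recap",
    "AI's dual impact on tourism: traveler behavior and how we work.",
    "https://wyoming-keynote-recap.vercel.app",
    "https://example.com/slide-01.png",  # hypothetical preview image
))
```

These five tags are the difference between a bare URL in a social feed and a branded card with imagery, which is most of why the recap works as a shareable asset.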

Why this matters for tourism professionals:

This workflow eliminates the gap between "event happened" and "content published." Every conference keynote, panel discussion, or stakeholder meeting generates valuable content that typically dies in a notebook or recording app. With a voice recorder and an AI coding tool, that content becomes a shareable, searchable, branded digital asset — the same day.

The Wyoming recap page serves multiple purposes: it's a reference for the 300 attendees who were in the room, a portfolio piece for future speaking engagements, and a discoverable resource for anyone searching for AI applications in tourism. One recording, three outcomes.

Tools used: Plaud (recording and transcript — one-time device cost plus subscription), Claude Code (website generation), Vercel (hosting), Bitbucket (version control).

See the live result: wyoming-keynote-recap.vercel.app

AI Agents Taxonomy: Four Types That Matter for Tourism

AI Agents · Framework · Tourism Strategy

After analyzing hundreds of AI tools and their applications in destination marketing, a clear taxonomy has emerged. There are four distinct types of AI agents, each serving different operational needs:

Operator Agents automate browser-based tasks—the digital equivalent of "using the mouse for you." For DMOs, this means automated lead generation, competitive web scraping, and data extraction. Tools like Browse.ai and Manus.im fall into this category.

Researcher Agents perform deep analysis and synthesis. Unlike simple search tools, these agents can analyze dozens of competitors simultaneously, synthesize market research, and generate comprehensive reports. This is where strategic intelligence happens.

Builder Agents create digital products from natural language prompts. Lovable.ai generates functional websites; Claude Artifacts builds interactive applications. For tourism marketers, this means rapid prototyping without engineering resources.

Automator Agents orchestrate workflows across multiple platforms. N8N, Agent.ai, and Google Gemini Gems connect disparate systems and automate multi-step processes. This is the infrastructure layer for AI-powered operations.

Understanding this taxonomy helps DMOs make strategic technology decisions. Each type solves different problems. Most organizations will eventually need all four.

Model Context Protocol: Solving the Trust Problem

MCP · Technical · Data Accuracy

AI hallucination isn't a bug—it's a feature of how large language models work. They generate plausible text based on patterns, not facts. This is fine for creative writing, catastrophic for travel planning.

The Model Context Protocol (MCP) represents a paradigm shift. Instead of trying to train AI to "know" facts, MCP creates a standardized way for AI to query authoritative data sources in real-time. Think of it as an API layer specifically designed for AI agents.

For tourism, the implications are profound:

- Accessible travel route verification (no more hallucinated ramp locations)
- Real-time venue capacity checks (critical for meeting planners)
- Authoritative attraction operating hours (not "best guess" information)

The technical architecture is elegant: DMOs maintain their "source of truth" databases, MCP provides the query protocol, and AI agents can reliably access verified information. This shifts DMO strategy from "creating content for humans to read" to "maintaining data for AI to query."
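To make the pattern concrete, here is a toy sketch of the kind of tool handler an MCP server might expose. This is not the MCP wire protocol itself (see Anthropic's specification for that); it illustrates the architectural shift the paragraph describes: the AI calls a function that queries a verified source of truth, and a miss comes back as an explicit error rather than a plausible guess. All names and data here are hypothetical.

```python
# Hypothetical DMO "source of truth" store. In production this would be
# a database the DMO maintains, queried through an MCP server.
VERIFIED_HOURS = {
    "state-museum": "Tu-Su 09:00-17:00",
    "botanic-garden": "Mo-Su 08:00-19:00",
}

def get_operating_hours(attraction_id: str) -> dict:
    """Tool handler: return verified hours, or an explicit miss.
    Never a plausible-sounding fabrication."""
    hours = VERIFIED_HOURS.get(attraction_id)
    if hours is None:
        return {"found": False, "error": f"no record for {attraction_id!r}"}
    return {"found": True, "hours": hours}
```

The design choice worth noting is the explicit `found: False` branch: the whole point of the protocol layer is that "I don't know" becomes a structured, honest answer instead of a hallucination.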

This is not future speculation. Anthropic's Claude Desktop already implements MCP. The question is no longer "will this happen?" but "which DMOs will adopt it first?"

Why CRIT Framework Matters: Context Over Commands

CRIT Framework · Prompting · Methodology

Most AI prompt guidance focuses on the task: "Write me an email." "Create a social media post." This approach consistently produces mediocre results for strategic work.

The CRIT Framework (Context, Role, Interview, Task) emerged from observing what separates exceptional AI outputs from generic ones: rich context.

Context: Tourism professionals operate in a specialized domain. AI doesn't inherently understand DMO budget cycles, state tourism office hierarchies, or CVB stakeholder dynamics. Providing this context—often through voice input for natural explanation—transforms output quality.

Role: Assigning AI a specific role ("You are a convention sales director with 15 years of experience") activates relevant training data patterns and adjusts tone appropriately.

Interview: Before jumping to the task, let AI ask clarifying questions. This surfaces assumptions and ensures alignment. The best strategic outputs come after 2-3 rounds of AI-led questioning.

Task: Only after establishing context, role, and conducting an interview should you specify the task. By this point, AI has sufficient context to produce strategic-level work.
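The four stages can be sketched as a simple prompt assembler. This is an illustrative helper, not an official CRIT implementation; the example values are hypothetical.

```python
def crit_prompt(context: str, role: str, interview: bool, task: str) -> str:
    """Assemble a CRIT-structured prompt: Context, Role, Interview, Task."""
    parts = [
        f"Context: {context}",
        f"Role: {role}",
    ]
    if interview:
        parts.append(
            "Before starting, ask me any clarifying questions you need "
            "about audience, constraints, or goals. Wait for my answers."
        )
    parts.append(f"Task: {task}")
    return "\n\n".join(parts)

print(crit_prompt(
    context="Mid-size CVB, fiscal year starts July 1, board reviews co-op spend quarterly.",
    role="You are a convention sales director with 15 years of experience.",
    interview=True,
    task="Draft a co-op marketing proposal for our three largest hotel partners.",
))
```

Note the ordering: the task comes last on purpose, because by the time the model reads it, the context, role, and interview have already shaped how it will respond.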

This framework was developed specifically for tourism professionals because our industry's context is too nuanced for generic prompting advice. The difference in output quality is not incremental—it's transformational.

Subscribe to Field Notes

New research insights published regularly. Subscribe via RSS to stay updated.
