Thought Leadership
Q&A with Rachael Orellana and Skye King on Bringing AI into Flood Risk Management
April 21, 2026

As AI tools spread across the AEC industry, many teams still face the same questions: where do these tools actually add value, and where do they create risk? That question sits at the center of an upcoming Nevada Watershed University presentation, “When AI Meets Reality: Designing for Decisions, Not Just Tools.”
We spoke with Rachael Orellana and Skye King about what they plan to cover, why flood risk management demands a more thoughtful approach to AI, and how organizations can move past the current pressure to adopt AI and toward practical, human-centered use.
Q: For readers who may be considering this session, what is the core idea behind your presentation?
A: Skye King: We want to help people step back from a tools-first mindset. Many organizations feel pressure to adopt AI because leadership, peers, or the broader market expect it. But the better question is: what challenge are you actually trying to solve and does AI meaningfully help? Sometimes the answer is yes. Sometimes it is no. Our goal is to give people a better framework for making that call.
A: Rachael Orellana: We also want to show examples that stretch people’s thinking and help them see AI as a tool that can support judgment, not replace it. One of the examples I’ll share involves hundreds of hours of recorded meetings from years ago. A client needed to pull out very specific details tied to a high-stakes California Water Commission process. That is the kind of problem AI can help with: finding needles in haystacks, surfacing patterns, and helping people work through large volumes of information more effectively.
Q: Why do those ideas matter so much in flood risk management and related AEC work?
A: Skye: These are high-stakes environments. Decisions affect communities, infrastructure, public trust, funding, and long-term resilience. In flood risk management, you cannot afford to treat AI as a novelty or a shortcut. This is where clear thinking around accountability, trust, governance, and human impact becomes essential.
A: Rachael: Exactly. In this space, people are often working on complex problems with technical, regulatory, and stakeholder dimensions all at once. AI can absolutely support that work, but only if people use it with intention. We want attendees to think about the systems around the decision, not only the tool in front of them.
Q: What kinds of practical AI use cases will you highlight?
A: Rachael: The examples focus on real applications. We’ll talk about AI for time compression, pattern recognition, and collaborative thinking. That includes using AI to search large archives of meeting records, organize complex brainstorming outputs, and make large volumes of information easier for teams to work through.
What I find exciting is that these uses do not hand over authority to AI. They help people sort, surface, and structure information so they can think better and move faster where it makes sense.
A: Skye: That distinction matters. We are trying to expand how people think about AI beyond the most common entry points, like rewriting an email or drafting text. Those uses may have a place, but the more interesting opportunities are often upstream, where you’re helping people understand a challenge more clearly, prepare better for conversations, or identify patterns across multiple inputs.
Q: Where do you see people in professional settings going wrong with AI today?
A: Rachael: A lot of people know this feeling already: they get an AI-generated draft, it sounds polished at first glance, and then they realize it doesn’t actually say much. That is a real issue that takes practice to recognize, and I think it helps to talk about it openly instead of pretending it only happens to others.
One of the best pieces of advice I have heard is to know yourself when you use AI. For me, that means recognizing where I’m tempted to accept a draft too quickly. Sometimes I do not need AI to write for me. I need it to help me frame questions, reorder ideas, or reflect back what I’m trying to say so I can sharpen it myself.
A: Skye: I see two common problems. First, people over-trust the output. That’s where automation bias and authority bias show up. If the tool sounds confident, it’s easy to assume it’s right. Second, organizations underestimate the human side of adoption. They move too fast into the technology and skip the strategy, the stakeholder buy-in, and the change management work that makes adoption successful.
Q: You both keep coming back to understanding issues first rather than choosing an AI tool and forcing solutions out of it. What does that look like in practice?
A: Skye: It starts with a pause. Before you rush to the tool, sketch out the problem with your own brain and the people around you. Ask better questions first. That helps you figure out whether you are dealing with a technical challenge or an adaptive challenge.
A technical challenge is something you already know how to solve, and AI may help you do it faster or more efficiently.
An adaptive challenge runs deeper. It involves people, trust, behavior, culture, competing needs, or public acceptance. In flood risk management, many of the hardest questions fall into that second category. If you treat an adaptive challenge as a purely technical one, you will miss the people who need to trust the work and live with the outcome.
A: Rachael: I would also say the best use cases should involve more collaboration, not less. If AI helps a team capture and work through ideas faster, that should create more room for better discussion, more stakeholder input, and more thoughtful planning. I want to see AI support stronger systems, clearer decisions, and better buy-in.
Q: What does successful AI use in the AEC world look like to you in the future?
A: Rachael: I hope it looks like better planning processes, stronger guardrails, and more confidence in how teams make decisions. AI is a mirror, not a magic wand. It can reflect patterns back to us, help us pressure-test assumptions, and give us a new angle on the work. But people still need to own the system, the judgment, and the final decision.
A: Skye: Successful use should strengthen relationships, not weaken them. In community engagement and risk communication, for example, AI might help a team scan media, meeting minutes, public concerns, and other inputs before conversations begin. That preparation can help teams show up more informed and more thoughtful. But it should never replace the active listening and trust-building that effective work requires.
Q: What is a smart place to start for someone who may feel behind on AI?
A: Rachael: Start in your personal life with something low-risk. Use it in an area where you already know enough to judge the output. That makes it easier to see where the tool helps, where it distorts, and where you still need your own judgment. That kind of experimentation can build confidence without creating unnecessary pressure.
A: Skye: I agree. It’s always a good time to start your AI journey, and there’s a place for everyone in this conversation. The important thing is to begin noticing how these tools behave, where they help (and where they don’t), and where your own judgment and guardrails still matter most.
Q: What do you hope attendees do or take away after the session ends?
A: Skye: I’d love for people to walk out of the session with one concrete next step. That might mean bringing a better set of questions back to their team, identifying one real challenge worth exploring, or testing a small pilot. I want them to feel empowered to move from vague pressure to clarity to deliberate action.
A: Rachael: I hope they leave with a more grounded mindset. AI is already here. It is already influencing work. That said, people still have agency in how they use it. I’d like for attendees to walk away more curious, more thoughtful, and more prepared to ask the right questions as they explore the ways AI can support their flood risk management work.