- Chelsey Fleming conducts research at Google Labs on how people engage with emerging tech like AI.
- Fleming spoke with BI about how companies can encourage their employees to experiment with AI.
- This article is part of "CXO AI Playbook" — straight talk from business leaders on how they're testing and using AI.
There may not be many archaeologists in Silicon Valley, but perhaps there should be.
A deep understanding of the past, after all, can offer insights into the future — especially when it comes to navigating technological change. That's how Chelsey Fleming, a UX research lead at Google Labs who earned her doctorate in archaeology from UCLA, sees it.
Fleming studies how people interact with emerging technology, drawing parallels between ancient innovations and today's AI-driven developments. "Every major technological shift can feel like a massive disruption," she said. "That's not to minimize or negate its significance, but history shows that we adapt, we learn."
Throughout history, new technologies — from recorded music to refrigeration to the internet — have often been met with resistance, yet they became essential to progress. "What feels like a radical change now will, over time, become part of how we live and work," Fleming said.
But that shift shouldn't come at the cost of workers. Instead, companies should focus on using AI to reskill rather than replace them. With so much hype around AI-powered tools, Fleming said companies should be intentional about treating AI as a collaborator, not a substitute for humans. "It's important to focus on the partnership."
She said experimentation and exploration are key to building that partnership. BI spoke with Fleming about how businesses can put that into practice.
The following has been edited for clarity and length.
As AI becomes more embedded in companies, you say that employers should focus on upskilling rather than cutting costs. Why?
First and foremost, it makes for better products. A lot of companies are focused on being "AI first," but often the best solutions are human-driven or involve other types of problem-solving. Whether you're making products or changing work processes, it's about balance.
Do you think companies sometimes miss the mark when it comes to how they integrate AI into work?
I talked to someone recently who's thinking of starting a company. He asked me, "How can we use AI to help people work better?" I said that while AI can definitely improve efficiency, the first step is understanding what people actually do, the problems they face, and what they need help with. There are aspects of work people enjoy and find rewarding — those are areas where AI shouldn't intervene. People want their work to feel meaningful. It's not just about adding AI, but making sure it supports the right aspects of work.
There's evidence that many employees are reluctant to use AI, and some actively resist it. Why do you think that is?
I think a lot of people feel overwhelmed by the sheer number of AI tools out there and by the thought of having to learn them. I get that. It's hard to know where to start.
And when companies treat AI training as a requirement, it can feel like a chore. If you're thinking, "I have to learn this for my job," it's a burden.
So, how can companies make using AI and upskilling more engaging?
When people are given room to experiment with new tools, it becomes much more engaging. On my team, we had a series of Friday workshops where we could explore AI: prototyping and testing things out. It made all the difference. If companies want people to learn, they should make it enjoyable by carving out small amounts of time for experimentation.
How should a company measure the ROI of that exploration?
I wouldn't necessarily try to quantify experimentation; not knowing the outcome in advance is the point of experimenting. Sometimes you'll realize a tool isn't useful, and that's a great learning outcome. If you figure out it's not going to work for you, fantastic — you can move on and focus on something that does.
When we develop these tools, we try to anticipate every possible use case, but once they're out in the world, people come up with things we never expected. That's part of the joy. But if you don't have a healthy mindset around experimentation and adaptation, people won't discover unexpected ways to apply the tool. They'll only see what's right in front of them.
In your line of work, do you think there will always need to be a human in the loop?
AI applications are powerful for accelerating research, but they don't eliminate the need for human insight: asking the right questions, say, and framing them the right way. These tools speed up the work, but they don't replace human ingenuity or creativity.