The Camera and the Code: Why Technical Leaders Need Creative Outlets

"Our LLM keeps hallucinating financial data."

The CTO's frustration was evident during our call last week. They'd fine-tuned, prompt-engineered, and RAG'd their way through every solution. Nothing worked. The model insisted on inventing numbers.

I asked an odd question: "What do you do outside of work?"

"What does that have to do with anything?"

Everything, actually.

That evening, I was processing photos from a sunrise shoot at Rocky Mountain National Park, adjusting the shadows, highlights, and clarity. Then it hit me... people are treating their AI like a calculator when it's more like a camera. Calculators give exact answers. Cameras interpret light.

The fix?

Stop asking the LLM to generate financial data. Use it to interpret and explain data from verified sources. Different tools for different purposes.
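In practice, that separation can be as simple as never letting the model produce a number itself: pull the figures from a trusted system, then hand them to the model to explain. Here's a minimal sketch of the idea — the metric names and values are made up for illustration, and the resulting prompt would go to whatever LLM API you use:

```python
def build_grounded_prompt(metrics: dict[str, float], question: str) -> str:
    """Embed verified figures in the prompt so the model explains,
    rather than invents, the numbers."""
    lines = [f"- {name}: {value:,.2f}" for name, value in sorted(metrics.items())]
    return (
        "Using ONLY the verified figures below, answer the question. "
        "If a figure is missing, say so instead of estimating.\n\n"
        "Verified figures:\n" + "\n".join(lines) +
        f"\n\nQuestion: {question}"
    )

# These figures come from your warehouse or ERP -- never from the model.
verified = {"Q3 revenue (USD)": 4_210_000.00, "Q3 churn rate (%)": 2.30}
prompt = build_grounded_prompt(verified, "Why did churn tick up in Q3?")
```

The model becomes the lens that interprets the light; the verified data is the light itself.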

Why AI Makes Creative Outlets More Critical, Not Less

Everyone's racing to implement AI. Boards want AI strategies yesterday. Teams scramble to build chatbots, automate workflows, and generate content.

Here's what I see: the leaders who navigate this AI transformation successfully are the ones who can think beyond the binary of "AI will save us" or "AI will replace us."

They're usually the ones with a guitar in their office. Or photos from their weekend hikes. Or that 3D printer they tinker with after hours.

The Pattern Recognition Paradox

We're asking LLMs to recognize patterns in everything. Customer behavior. Code bugs. Market trends. But who's getting better at recognizing patterns? Not us. We're outsourcing that skill to machines.

Photography forced me to develop pattern recognition viscerally. Waiting for wildlife, you learn rhythms. That Great Blue Heron returns to the same spot at the same time, day after day. My job is to be in the right place when the shadows hit the right angle to get 'that' photo of the bird.

This matters because AI tools are pattern-matching machines that lack the ability to understand context (for now?). You need human pattern recognition to know when the AI's patterns make sense and when they're hallucinating connections that don't exist.

Last month, a client's AI flagged a "critical pattern" in their sales data. The AI was technically correct; a pattern was indeed present. But anyone who'd watched seasons change would recognize it immediately: people buy less ice cream in winter. The AI had discovered seasonality and treated it like a revelation.

Breaking the Prompt Engineering Tunnel Vision

I've watched brilliant engineers spend days crafting the "perfect" prompt. Tweaking words, adjusting temperature settings, and adding more context. It's the same tunnel vision I see in photographers who think buying a better lens will fix their composition problems.

My best work comes after I step away from the screen. Walk around with my camera. Let my brain process in the background. Creative work forces this mental shift.

You can't prompt-engineer your way to innovation. You can't fine-tune your way to transformation. At some point, you need to think differently, not just iterate harder.

What Photography Teaches About AI Limitations

Every photographer learns this lesson: the camera captures what's there, not what you remember. That sunset was magnificent in person. The photo? Flat. Disappointing. Missing the feeling entirely.

LLMs have the same limitation. They process text brilliantly. They pattern-match across massive datasets. But they don't understand meaning, context, or nuance the way humans do.

When I'm photographing birds, I know why that hawk is circling (hunting) versus why those crows are gathering (mobbing an owl). The camera just sees birds in motion. Same data, vastly different meanings.

This gap between data and understanding is where creative practice matters. It trains you to see beyond the surface patterns to the underlying story.

The Dangerous Confidence of Generated Content

AI generates content with absolute confidence. No hedging. No uncertainty. Even when it's completely wrong.

Photography taught me to distrust that confidence. That "perfect" shot on your camera's LCD? Load it on a big screen and notice the focus is slightly off. What looked sharp at 3 inches falls apart at 30.

Leaders implementing AI need this healthy skepticism. Generated code that looks clean might have subtle bugs. Analysis that sounds authoritative might rest on flawed assumptions. You need trained eyes to spot the differences.

Using Creative Constraints in the Age of Infinite Generation

AI removes constraints. Need 50 variations of marketing copy? Done. Want 100 different analyses? No problem.

But creativity thrives on constraints. When I photograph with just one prime lens, no zoom, I make better images. The limitation forces me to move, to think, to see differently.

Apply this to AI: Instead of generating endless options, set strict boundaries. Use AI for specific, constrained tasks. The magic happens in how you combine these constrained outputs, not in generating more of them.

Pattern Interrupt as a Leadership Tool

Every time I switch from debugging code to editing photos, my brain resets. Different tools. Different goals. Different parts of my brain firing.

This pattern interrupt is crucial when working with AI. It's too easy to fall into conversation loops with ChatGPT, iterating endlessly on variations of the same idea. You need circuit breakers.

I have a rule when working with AI: after 30 minutes, I write my ideas down on paper. No screens. No generation. Just thinking made visible. My breakthrough insights come from those sessions, not from prompt iteration 47.

Building AI Intuition Through Creative Practice

You can't build intuition by reading papers about AI. You build it by recognizing patterns across domains.

When I'm processing a landscape photo, I make hundreds of micro-decisions. Increase contrast here. Soften shadows there. Each adjustment affects the whole image.

Working with AI requires the same intuitive adjustments. This prompt is too constrained. That temperature setting is too high. This context window needs different framing. You develop feel through practice, and creative work accelerates that development.

The Meta-Skill That Matters Most

Technical leaders ask me what AI skills they should develop. Prompt engineering? Fine-tuning? Vector databases? Agents? <insert the latest buzzword here>?

Wrong question. The meta-skill that matters is learning how to learn rapidly in ambiguous domains.

Creative hobbies build this meta-skill. When you start photography, everything is ambiguous. What makes a photo "good"? How do you balance technical perfection with emotional impact? There's no unit test for art.

AI work is similarly ambiguous. What makes a prompt effective? How do you balance accuracy with usefulness? When should you trust the output? These aren't technical questions with clear answers.

Start Your Practice Today

Pick something unrelated to technology. The distance matters. You want your brain to build bridges between disparate domains.

My recommendations:

  • Photography: Trains observation and pattern recognition

  • Music: Develops timing and rhythm sensing

  • Drawing: Forces you to see what's there, not what you think is there

  • Pottery: Teaches patience and working with unpredictable materials

  • Writing fiction: Builds narrative understanding, crucial for AI interaction

Commit to 30 minutes, three times per week. Morning sessions work best for me; they prime my brain for creative problem-solving throughout the day. But do whatever works best for you.

The Compound Effect in an AI World

Here's what I've noticed after 15 years of mixing photography with technology leadership:

The patience I gain from waiting for perfect light helps me let AI iterations run without premature optimization. The composition skills from framing shots help me structure better workflows. The post-processing discipline teaches me when to stop tweaking and ship.

These changes compound. As AI tools become more powerful, the humans who thrive will be those who bring creative, multi-domain thinking to problems that pure technical skills cannot solve.

Your next breakthrough probably won't come from reading another AI paper. It might come from noticing how light reflects off water during your morning walk with a camera.

P.S. Still skeptical?

Try this experiment: Track your AI problem-solving success rate for two weeks. Then start a creative practice. Track for two more weeks. The improvement won't be subtle.


If this resonated, you'll get more practical insights on leading through technical transformation in my weekly newsletter. No hype, no fluff...just real experience from the intersection of technology and leadership. Join for free at newsletter.ericbrown.com
