Former Intel exec: The responsibility for AI ethics sits with all of us

AI is everywhere right now: How do you use it in your work? Which jobs will it change or replace? Where should you invest? What training should you take?
Endless examples and endless opinions.
Like many in my generation, I’ve lived through multiple major technology shifts and watched how they reshaped both our personal and professional lives. When I think about AI and its projected impact, I see parallels to the internet, PCs, cameras – even film and video.
Photography is a good example.
When I was young, pictures were analog. Cameras were bulky. The flash wasn’t built in — you had to decide if you needed extra light and physically attach a flash bulb. Bringing a camera to an event or vacation was a conscious decision.
Once the film was used up, you rewound it, dropped it off at a photo booth in a shopping center parking lot (or mailed it in) and waited days — sometimes weeks — to see the results. The photos weren’t perfect: lighting issues, half-smiles, missed moments. But we were always happy to have those memories captured.
Then disposable cameras arrived. Easier, lighter, revolutionary – or so we thought.
Next it was digital cameras. What had once been limited to professional studios became accessible to everyday users. Flash was built in. Eventually, the camera determined when to use a flash for you. You could instantly review a photo and retake it if needed. Image processing improved rapidly – color, exposure, zoom – first through PCs, then directly in the camera itself.
As quality improved and costs dropped, cameras stopped being “a thing” on their own. They became embedded into phones, laptops, ATMs, security systems, manufacturing systems, doorbells, cars.
Image processing went from a standalone function to something woven into everything. You could instantly see, adjust, filter, edit and share — in real time.
I could tell similar stories about PCs. About the internet. About video. This is how I view AI today, but embedded at an even greater scale.
Soon, AI will be ubiquitous
AI research and use have been evolving for decades. Right now, much of AI still feels separate. Separate tools. Separate applications. Separate agents brought in to support specific tasks. We train models. We deploy systems. We “use AI.” But this won’t last.
We are moving toward a world where AI is simply another capability inside everything we already use, from enterprise systems to applications. Devices. Infrastructure.
This is already happening. We see it in partnerships being announced and features quietly rolled out. We experience it daily on our phones, in search, in social media and in how information is delivered to us.
Like many, I believe AI will be ubiquitous. And like every major technology shift before it, it will expand human capability and improve outcomes at scale. And there will be consequences that are unforeseen, or perhaps foreseen but not managed.
The difference this time is speed. This shift is happening exponentially faster than anything we’ve experienced before.
Regulation will not keep up — history has shown us that. Which means the responsibility for safeguards, ethics and thoughtful implementation cannot be outsourced or delayed.
It sits with the industry itself. With companies. With leaders. With boards. With builders. With employees. All the people who influence how these systems are designed, deployed and used.
Last month, I read an article in The Wall Street Journal about a philosopher and her immense responsibility to train an AI system on morals and societal expectations.
While I applaud the initiative, it is clear that this responsibility cannot sit with one person, one role, or one organization.
AI will not arrive as a single moment or decision point. It will be embedded – quietly and continuously – into every system, process, and interaction. The choices we make now will shape how it serves people at scale, often in ways that are invisible once they’re in place.
This is our moment to be intentional. We need to ask:
- Not only what can AI do, but also what should it do?
- Not only how can AI create an advantage, but also does it do so with impact and trust?
- Not only how can we innovate with AI, but also how can we be good stewards of AI?
Because once AI is everywhere, the time to ask questions will have already passed. And the responsibility for getting it right belongs to all of us.
Our call to action as managers, board members, policy makers, developers, consumers and employees: We must educate ourselves on AI — not just on how to use it, but so that we can ask the right questions and take the right steps in this accelerating AI journey.
Christine Boles is founder of MC Consulting Services, board chair of the Arizona Technology Council, and former Intel Edge Computing Group vice president and Federal & Industrial Solutions Division general manager.
