Alec Scott is a Principal at CDW for Intelligent Platforms.
Most conversations about AI ethics begin in the right place, but they stop too soon.
They begin with bias, fairness, explainability, privacy, robustness and accountability. That work matters. NIST’s AI Risk Management Framework and the OECD’s AI Principles rightly focus on trustworthy, responsible AI. But this article is an executive reflection on a harder question: Just because we can use AI to automate work, replicate human skills and reduce human involvement at scale, should we?
I write this from the perspective of an IT executive working at the intersection of technology, operations, risk and business transformation. In that world, AI is no longer a lab experiment. It is becoming part of how enterprises operate, decide, serve customers and reduce cost. That is exactly why the ethical conversation has to widen now, before the logic of efficiency outruns the logic of responsibility.
The deeper ethical issue is not only whether AI is fair inside the model. It is whether we are using AI to remove too many people from economically meaningful roles in society. That is a different kind of ethics problem. It is no longer only about model behavior. It is about social structure—and whether an economy can remain morally legitimate when too many people are pushed out of the loop that gives them income, dignity, judgment and a stake in the system itself.
That risk is no longer theoretical. The IMF has warned that AI will affect almost 40% of jobs worldwide, while the World Economic Forum estimates that 22% of jobs will be disrupted by 2030. Those numbers can sound manageable when reduced to net gains. Real life is not lived in net gains. It is lived in transitions, and transitions are where families get squeezed.
That is why the human-in-the-loop conversation needs to mature. In many boardrooms, the phrase still means a person somewhere approves an AI recommendation before final action is taken.
That is part of the answer, but only part. A society does not stay healthy merely because a human clicks “approve” at the end of an automated workflow. It stays healthy when humans continue to hold meaningful roles in the creation of value itself. If AI leaves people only as ceremonial reviewers inside systems that have already hollowed out the surrounding labor market, then the “human in the loop” becomes more of a comfort blanket than a safeguard. That is not human-centered design. It is managed exclusion.
Labor displacement also does not stay inside HR dashboards. It moves into the financial system. If enough workers are displaced too quickly, one of the first places stress is likely to surface is housing.
The New York Fed reported in February 2026 that total household debt reached $18.8 trillion, with mortgage balances at $13.17 trillion, and that mortgage delinquencies were rising, especially in lower-income areas and places with declining home prices. A related Liberty Street Economics analysis found delinquencies rose most where unemployment and housing conditions worsened. That does not prove a housing crisis is underway. But it does make the transmission mechanism obvious. If income weakens at scale, fixed obligations eventually expose the damage.
This is where the ethics debate becomes uncomfortable. If AI becomes good enough to mimic conversation, reasoning, support, analysis and judgment, are we building tools that expand human agency, or quietly reducing the number of humans needed to participate in the economy at all?
A 2025 report from the UNDP argues that the future of AI depends less on what the technology can do than on the choices societies make about how to use it, and it explicitly frames the opportunity as using AI to augment people, not sideline them. That is the standard worth holding on to.
For executives, this is not just a technology governance issue. It is an enterprise risk issue and, increasingly, a market stability issue. The CIO has to understand where AI is being embedded and where human judgment remains central. The CFO has to look past short-term labor savings and ask what happens to demand, affordability and long-term revenue if too much human earning power is stripped out of the market too quickly. The CISO and CRO need to evaluate not just model and control risk, but systemic risk, including over-automation, concentration and the erosion of resilience. The CHRO has to own role redesign, redeployment and reskilling. The CEO has to force a unified view so one function does not optimize itself while the broader enterprise, and perhaps the broader economy, absorbs the downstream damage. That is not a compliance exercise. It is leadership.
This conversation needs to happen in the rooms where technology, capital, labor and policy meet: Davos, the IMF and World Bank meetings, the World Governments Summit, OECD policy forums and other cross-sector gatherings. If executives, policymakers, investors and labor economists are not in the same room on this issue, then the conversation is still too narrow.
There is also a reason this needs to happen now. The AI race among nations is real. Stanford’s 2025 AI Index report shows the United States leading in private AI investment, while China continues to close model-quality gaps and lead in publications and patents.
At the same time, regulation moves more slowly. The EU’s AI Act will become generally applicable in August 2026, and some obligations extend beyond that. Executives are being squeezed from both sides: Competitive pressure is telling them to accelerate, while public policy is still catching up.
If business leaders do not come together and set their own guardrails, governments will. By then, the response will likely be reactive, blunt and late. That is not an argument against regulation. It is an argument for executive self-governance before regulation becomes the only remaining tool.
That, to me, is the next real frontier of AI ethics: not only whether models behave well, but whether we are designing a future in which humans still matter inside the economy. If we get that wrong, the backlash against AI will not come first from benchmark failures or governance checklists. It will come from households, markets and communities that can feel the loop closing without them.
