Walking the floors of CES 2026 from 6 to 9 January, one could not escape the sense of awe. The scale, ambition, and speed of technological innovation on display were staggering.
Artificial intelligence embedded everywhere. Automation promising efficiency at unprecedented levels. Systems designed to anticipate, optimize, and replace. Yet beneath the spectacle ran a quieter, more troubling undercurrent: while technology is advancing at exponential speed, our ethical frameworks, social institutions, and collective sense of purpose are lagging dangerously behind. Six Fellows of the World Academy of Art and Science attended the world's largest tech event in Las Vegas and convened a special meeting to discuss their thoughts, insights, and observations of what they saw.
WAAS Fellow Peter Schlosser spoke on a panel titled “AI and Sustainable Living,” where he noted that AI can help us optimize natural resources, and that much historical data has yet to be processed to uncover new insights. Fellow Lawrence Ford and WAAS General Manager Grant Schreiber hosted a panel conceived by WAAS Fellow Eden Mamut on “Advancing Human Security and Smart Mobility.” WAAS Trustee Jonathan Granoff spoke on a panel titled “Staying Ahead in the Data Defense Game,” which explored how AI is reshaping cybersecurity by protecting data, detecting threats, and helping us stay one step ahead of misuse and cyber-attacks.
A special side event at CES 2026 convened a broader group of WAAS Fellows, including Carol Carter and An Krumberger, to reflect on their experiences and articulate shared concerns about the future of ethical technology.
Watch: AI and Sustainable Living
Again and again, conversations returned to the same unresolved tension. Technology excels at answering the question of how: how to automate, how to scale, how to optimize. But it consistently fails to address why. Why this technology? For whom? At what cost? And with what consequences for human dignity, identity, and security?
CES has long been a marketplace of solutions, but what was increasingly evident this year was the absence of a shared moral architecture guiding those solutions. Innovation is being driven primarily by market incentives and competitive advantage, while the human implications (job displacement, erosion of identity, widening inequality, and social fragmentation) remain largely externalized. These are treated as collateral effects rather than central design constraints.
One of the most striking observations was how little space exists for the human experience itself. Technology continues to be marketed as labor-saving, friction-reducing, and productivity-enhancing. Yet artificial intelligence represents something fundamentally different from previous tools: it is not merely saving physical labor, but increasingly replacing cognitive labor: reasoning, analysis, creativity, and decision-making. The scale and speed of this shift threaten not only livelihoods but the deeper human need for purpose, participation, and meaning.
Watch: Advancing Human Security and Smart Mobility in Connected Communities
Work has never been solely about income. It is a source of identity, belonging, and contribution to society. As AI systems absorb large portions of cognitive work, the risk is not simply economic displacement, but widespread social disorientation. A society in which people are detached from meaningful participation is not secure, no matter how efficient its systems may be.
These human security implications were notably absent from many displays. While panels explored technical performance, infrastructure, and investment, far fewer addressed the societal shockwaves now unfolding in real time. The question of who bears responsibility for these consequences (industry, government, or civil society) remains unresolved, and too often unasked.
Another recurring theme was the narrowing of perspective. Many technologies are being developed by relatively homogenous groups and deployed at global scale, shaping lives far beyond the rooms in which design decisions are made. Women, youth, marginalized communities, people with disabilities, and those most vulnerable to disruption are still largely missing from the feedback loops that shape innovation. Ethical technology cannot be achieved by adding a human-centric slogan after the fact; it requires intentional inclusion of diverse human perspectives at the point of conception.
Watch: Staying Ahead in the Data Defense Game
The conversations also revealed a deeper philosophical divide emerging beneath the surface of technological progress. Technology is becoming increasingly autonomous, while humans are becoming increasingly procedural. Systems are optimized; people are standardized. In corporations, automation and quality assurance frameworks have already narrowed human agency to predefined roles. Left unchecked, advanced AI risks accelerating this trend, reducing humans to operators, overseers, or passive recipients of algorithmic outcomes.
This inversion is not inevitable, but it is already underway.
Human security, as discussed at CES, offers a crucial reframing. It shifts the focus from protecting systems, borders, or profits to protecting people: their dignity, safety, livelihoods, and capacity to thrive. It insists that innovation be evaluated not only by efficiency or return on investment, but by whether it genuinely meets human needs. In turbulent times, such a compass is not a luxury; it is essential.
Yet introducing this perspective is not without political and institutional sensitivity. Concepts like human security challenge entrenched power structures by asserting that human well-being is not subordinate to sovereignty, markets, or technological inevitability. This makes the conversation uncomfortable, but also necessary. Ethical technology cannot be apolitical if it claims to serve humanity; it must confront the realities of power, exclusion, and consequence.
Importantly, several voices noted that this is no longer a moment for endless study. The issues are well understood. The time-sensitive imperative is action. Practical use cases, real-world applications, and visible demonstrations of ethical integration are needed now, not as theoretical exercises but as working models. Whether in education, home technologies, mobility, or AI governance, the challenge is to show that ethical design is not a constraint on innovation, but a condition for its legitimacy and long-term success.
CES 2026 made one truth unmistakably clear: technology will not slow down to wait for our institutions to catch up. If ethical frameworks are not embedded deliberately, they will be replaced by default logics: efficiency over equity, automation over participation, profit over purpose. The absence of intention is itself a choice, and one with profound consequences.
Watch: CES CEO Gary Shapiro on Human Security, AI & How Technology Can Improve Lives
What is required now is a shift in mindset. Technology must be understood not as an end in itself, but as a technique: powerful, neutral, and incomplete without values to guide it. Markets can drive innovation, but they cannot define meaning. Algorithms can optimize outcomes, but they cannot determine what outcomes are worth optimizing.
CES remains one of the world's most influential stages for shaping the future. What happens there reverberates far beyond Las Vegas. The question emerging from this year's gathering is not whether technology will transform society; it already is. The real question is whether humanity will have the wisdom, courage, and foresight to shape that transformation before it shapes us.
In an era of accelerating disruption, ethical technology is not about slowing progress. It is about ensuring that progress remains human.