Why Are People Afraid of AI? We’ve Been Here Before

30 Oct 2025 1:55 PM | Ali Kucukozyigit (Administrator)


by Cansu Yalim

Throughout history, major technological leaps have unsettled society before becoming foundations for progress. The fear surrounding artificial intelligence today is not new; it echoes the anxieties that accompanied the Industrial Revolution, when mechanized production first scaled up fears about technology's impact on work and life. What feels different now is AI's perceived autonomy: systems that seem not only to perform tasks but to think, decide, and learn. This creates the sense that human control is slipping away, which lies at the root of our modern unease. By examining parallels with one of the earliest great technological watersheds, we can better understand the nature of today's fears and the responsibilities that come with this new era.

The Industrial Revolution as a Turning Point


The Industrial Revolution radically changed the human rhythm of work. Before it, daily activities followed natural cycles; afterward, the factory clock governed time [1]. The shift produced a previously unheard-of set of stressors, including alienation, burnout, and loss of autonomy. Research suggests this period left a negative psychological "imprint" on industrial populations: former coal-mining areas still show higher rates of neuroticism, anxiety, and depression today [2]. This generational unhappiness is believed to be the inherited product of selective migration, as people facing rural economic hardship moved into industrial centers, compounded by grueling work and deplorable living conditions marked by overcrowding, inadequate sanitation, and the spread of disease. Trading physical autonomy for mechanical efficiency delivered a substantial and long-lasting shock to society. The analogy to the present is clear: we feel the same strain at a higher level of abstraction, as AI may compromise cognitive autonomy in favor of algorithmic efficiency.

The Pattern of Fear in Technological Evolution


The most visceral fear then, as now, was job displacement. The Luddites of the early 19th century were not simply technophobes; they were highly skilled artisans whose craft, honed over years of apprenticeship, was being rendered obsolete by automated looms that could be operated by unskilled, cheaper labor. Their acts of smashing machinery were a desperate form of "collective bargaining by riot," a tactic to pressure employers who were threatening their very livelihood [3]. Today the anxiety is similar, but the domain has shifted from physical to cognitive labor. While automation has always created new jobs over time, it also displaces workers and can worsen inequality by shifting compensation from labor to capital. The "new automation" of AI targets a broader swath of cognitive tasks, from editing and design to analysis and other professional services, touching professionals more directly than ever before and provoking fears not only of job loss but of diminished human relevance.

Why This Wave Feels Different Now

Why does the fear feel so different now? Unlike previous machines, which augmented human capabilities, AI mimics human thought, and that raises existential concerns. The magnitude and speed of the change, unfolding over years rather than decades, add to the anxiety. Prominent scientists and tech leaders have cautioned that unconstrained development of systems more powerful than GPT-4 could pose "profound risks to society and humanity," and they have called for a verifiable six-month pause to establish safety protocols [4]. More immediate red flags include the potential for AI to be used maliciously, for instance to engineer pandemics or deploy lethal autonomous weapons, and the possibility that algorithms trained on biased data will reinforce societal biases [5]. These systems, which frequently function as "black boxes," raise moral dilemmas around accountability, privacy, and equity that still need to be resolved.

What History Suggests about Adaptation


This is where engineering managers find themselves today: at the crossroads of technological evolution and human adaptation. Their challenge is not to resist the tide of AI but to channel it responsibly. That means moving beyond a simple quest for efficiency and learning to balance automation with human purpose, and algorithmic power with ethical stewardship. Leadership in this new era will be defined less by technical mastery alone and more by deep, human-centric systems thinking: the ability to see and shape the complex interplay between machine intelligence, human creativity, and the culture of our organizations. The engineers of the Industrial Revolution were tasked with designing machines; our task is far more intricate. We are being called to design the very socio-technical systems that will either preserve or erode human dignity and agency. Seen this way, the fear surrounding AI is not an irrational panic to be dismissed. It is a vital emotional signal, and a decisive prompt to think with intention and empathy before we build blindly.

Conclusion

Humanity has always feared what it cannot yet govern, but history shows that we eventually learn to co-govern with our creations. The Industrial Revolution mechanized our muscles; the Information Revolution digitized our minds; the AI Revolution now challenges us to humanize our machines. For engineering managers, the opportunity is to steer adoption deliberately: protect human autonomy, invest in worker transition and upskilling, demand transparency and auditability from AI systems, and align deployments with values as well as metrics. It is not the end of human relevance. It is another beginning of what it means to be human in an engineered world.

Contributor Bio

Cansu Yalim is pursuing her Ph.D. in Engineering Management and Systems Engineering at Old Dominion University, where she also serves as a graduate research and teaching assistant. Before academia, she worked in the thermotechnology and automotive industries. Yalim’s research reframes industrial root-cause diagnosis as a causal inference problem, overcoming the correlational limits of traditional ML by integrating time, causal structure, and system dynamics to deliver trustworthy fault attribution in complex, changing environments.

LinkedIn: Cansu Yalim (https://www.linkedin.com/in/cansu-yalim-63089a153/)

REFERENCES

[1] Thompson, E. P. (2017). Time, work‐discipline, and industrial capitalism. Class: The Anthology, 27-40. https://doi.org/10.1002/9781119395485.ch3

[2] Obschonka, M., Stuetzer, M., Rentfrow, P. J., Shaw-Taylor, L., Satchell, M., Silbereisen, R. K., ... & Gosling, S. D. (2018). In the shadow of coal: How large-scale industries contributed to present-day regional differences in personality and well-being. Journal of Personality and Social Psychology, 115(5), 903. https://psycnet.apa.org/doi/10.1037/pspp0000175

[3] Hobsbawm, E. J. (1952). The machine breakers. Past & Present, (1), 57-70.

[4] Future of Life Institute. (2023, March 22). Pause Giant AI Experiments: An Open Letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments/

[5] Buolamwini, J., & Gebru, T. (2018, January). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency (pp. 77-91). PMLR.

BLOG DISCLAIMER: The opinions, views and content presented in this online blog solely represent the original creator and have no association with ASEM activities, products or services. The content is made available to the community for educational and informational purposes only. All blog posts are created voluntarily and are not sold, but may be used, shared and distributed free of charge. The blogs are not academic pieces, and therefore do not go through a peer-review process, and are not fact-checked. All errors belong to the creators.
 

 

