Research Article | Volume 3, Issue 1 (2026) | Pages 207–221
A Multilevel Review of Artificial Intelligence and Employee Well-Being in Organizations
Assistant Professor, Department of Information Technology, Department of Management, Jagan Institute of Management Studies, Sector 5, Rohini, New Delhi, India
Under a Creative Commons license
Open Access
Received: Dec. 5, 2025
Revised: Dec. 20, 2025
Accepted: Jan. 12, 2026
Published: Jan. 30, 2026
Abstract

The rapid integration of artificial intelligence (AI) into organizational systems is transforming how work is designed, managed, and experienced. While AI-driven technologies promise efficiency and performance gains, their implications for employee well-being remain contested. Existing research is fragmented across disciplines and levels of analysis, limiting a comprehensive understanding of AI’s human consequences. This study presents a multilevel review of empirical research examining the relationship between AI use in organizations and employee well-being. Following a systematic review process, 136 articles were initially identified, of which 72 empirical studies met the inclusion criteria. Using an inductive thematic approach, the literature is synthesized across individual, team, and organizational levels of analysis. The review reveals that AI can simultaneously act as a resource and a stressor for employees, influencing job design, psychological health, trust, and engagement depending on contextual and governance factors. The study contributes by offering an integrative multilevel framework, identifying gaps for future research, and providing actionable insights for organizations seeking to adopt AI while safeguarding employee well-being.

INTRODUCTION

Organizations are increasingly integrating artificial intelligence (AI) technologies into work systems, human resource practices, and decision-making processes, fundamentally reshaping how work is organized and experienced by employees. Across diverse organizational contexts, AI-enabled tools are now used for recruitment and selection, performance evaluation, people analytics, workflow automation, employee support, and decision assistance (Malik et al., 2022; Murugesan et al., 2023; Pereira et al., 2021). Empirical research consistently demonstrates that AI adoption is no longer peripheral but has become embedded in the core functioning of contemporary organizations, with significant implications for employee experiences and well-being (Bankins et al., 2023; Soulami et al., 2024).

 

The growing use of AI in organizations has generated extensive scholarly debate regarding its consequences for employees. On one hand, AI has been associated with positive outcomes such as enhanced task efficiency, improved decision quality, increased engagement, and strengthened employee resilience when deployed as a supportive and augmentative technology (Braganza et al., 2020; Malik et al., 2022; Xiao et al., 2023). AI-enabled HR practices have also been shown to contribute to job satisfaction, creative willingness, and sustainable performance by reducing routine workload and enabling data-informed managerial decisions (Sweiss et al., 2024; Chin et al., 2024). On the other hand, a substantial body of research highlights the darker sides of AI adoption, including technostress, emotional exhaustion, job insecurity, privacy concerns, and perceived loss of autonomy (Nazareno & Schiff, 2021; Wu et al., 2022; Zhou et al., 2023; Giermindl et al., 2021). These mixed findings suggest that AI’s effects on employee well-being are complex and contingent on how technologies are designed, implemented, and governed within organizations.

 

Employee perceptions and attitudes toward AI play a critical role in shaping well-being outcomes. Empirical studies indicate that employees actively interpret AI systems in terms of usefulness, fairness, transparency, and perceived threat, which in turn influence psychological responses such as stress, engagement, and trust (Johnson et al., 2020; Nazareno & Schiff, 2021). Research on AI-enabled and algorithm-based HR systems further demonstrates that opaque decision-making, surveillance-oriented applications, and weak organizational support intensify anxiety and resistance, whereas transparency, fairness, and employee involvement mitigate negative well-being effects (Soomro et al., 2024; Taslim et al., 2025; Haipeter et al., 2024). These findings underscore the importance of understanding AI not merely as a technological innovation but as a socio-organizational phenomenon shaping employee well-being through perceptions of control, trust, and justice.

 

Despite the rapid expansion of empirical research on AI in organizations, the literature remains fragmented across levels of analysis. Some studies focus primarily on individual-level outcomes such as stress, burnout, job satisfaction, and affective well-being (Jin et al., 2024; Jaiswal et al., 2021; Hill et al., 2022), while others emphasize team-level dynamics related to collaboration, leadership, and digital work environments (Alkhayyal et al., 2024; Murphy, 2024). At the organizational level, research has examined AI governance, ethical considerations, HR system strength, and people analytics, highlighting their influence on employee trust, engagement, and well-being (Chang et al., 2023; Palmucci et al., 2024; Giermindl et al., 2021). However, there remains limited integrative discussion that systematically connects these levels to explain how AI shapes employee well-being across organizational contexts.

 

In response to this gap, the present article provides a systematic review of empirical research examining the relationship between artificial intelligence and employee well-being in organizations. Drawing on 72 empirical studies identified through a PRISMA-guided review process, we adopt a multilevel perspective encompassing individual, group, and organizational levels of analysis. Specifically, we synthesize evidence on how AI influences job design, employee attitudes and experiences, algorithmic management practices, human–AI collaboration, and organizational conditions that shape psychological, emotional, and occupational well-being (Bankins et al., 2023; Pereira et al., 2021; Soulami et al., 2024).

 

This review makes three key contributions to the literature on AI and employee well-being. First, it consolidates fragmented findings to clarify how AI reshapes work processes and employee experiences, highlighting the mechanisms through which AI can function as both a resource and a stressor. Second, it advances understanding of employee attitudes toward AI and algorithmic management by integrating research on trust, fairness, surveillance, and organizational support. Third, by focusing exclusively on empirical evidence, the review moves beyond speculative predictions to provide grounded insights into how AI is currently affecting employee well-being in practice. In doing so, the review offers guidance for future research and supports organizations in designing and implementing AI systems that promote employee well-being alongside organizational effectiveness.

 

The remainder of this article is structured as follows. The next section outlines the literature search strategy and coding procedures used in the review. This is followed by a presentation of the key multilevel themes emerging from the analysis. The article concludes by discussing implications for theory and future research in AI-enabled workplaces.

LITERATURE SEARCH METHOD AND ANALYSIS

2.1 | Search strategy

We identified relevant articles through a four-step systematic search and screening process, summarized in Figure 1 using a PRISMA flow diagram. First, we conducted a comprehensive search of the Scopus database using combinations of keywords related to artificial intelligence, algorithmic management, people analytics, employee well-being, mental health, stress, engagement, and job satisfaction. To ensure relevance to organizational contexts, the search was limited to articles published in business, management, organizational behavior, human resource management, and psychology-related subject areas. This initial search yielded 136 articles.

 

Second, the retrieved articles were screened for journal quality and disciplinary relevance. Given the focus of this review on employee well-being and organizational outcomes, we retained articles published in peer-reviewed journals recognized within management, organizational behavior, human resource management, and applied psychology domains. This step ensured an inclusive yet rigorous sample aligned with micro- and meso-level work-related outcomes.

 

In the third step, we manually screened the titles and abstracts of the remaining articles (n = 120) to assess their relevance to AI use in organizational settings and their explicit focus on employee well-being. Articles were excluded if they focused solely on technical system development, macroeconomic forecasting, or non-organizational contexts, or if they did not examine employee-level outcomes.

 

In the fourth step, full-text screening was conducted for 88 articles to ensure empirical relevance and methodological rigor. Conceptual papers, editorials, and review-only articles were excluded at this stage to maintain an empirical focus. This process resulted in a final sample of 72 empirical articles, which form the basis of the systematic review. The complete identification, screening, eligibility, and inclusion process is illustrated in Figure 1.
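 

For readers who wish to audit or replicate the screening arithmetic, the stage counts reported above (136 identified, 120 after journal screening, 88 after title/abstract screening, 72 included) can be tabulated programmatically. The sketch below is purely illustrative; the stage labels and the helper function are ours and not part of the original review protocol.

```python
# Minimal sketch of the screening flow described above; stage labels and the
# helper function are illustrative, not part of the original review protocol.
stages = [
    ("Records identified via Scopus search", 136),
    ("Retained after journal-quality and discipline screening", 120),
    ("Retained after title/abstract screening", 88),
    ("Empirical articles included after full-text screening", 72),
]

def summarize_flow(stages):
    """Print the number of records remaining and excluded at each stage."""
    previous = None
    for label, remaining in stages:
        excluded = (previous - remaining) if previous is not None else 0
        print(f"{label}: n = {remaining} (excluded at this step: {excluded})")
        previous = remaining

summarize_flow(stages)
```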

 

Table 1 summarizes the five key themes that emerged from the systematic review of 72 empirical studies examining artificial intelligence and employee well-being. The themes reflect distinct yet interconnected streams of research and collectively capture how AI influences employee experiences across individual, group, and organizational levels of analysis. The table also highlights the methodological approaches and theoretical perspectives most frequently used within each theme, illustrating both the diversity and concentration of research designs in this literature.

 

The largest body of work focuses on AI-enabled job design and human–AI interaction, emphasizing how changes in task allocation, autonomy, and cognitive demands shape employee well-being outcomes such as stress, engagement, and sustainable performance. A second stream centers on employee perceptions of AI, demonstrating that trust, fairness, and transparency play a central role in shaping employees' psychological responses to AI-enabled work systems.

 

Figure 1. PRISMA flow diagram illustrating the identification, screening, eligibility assessment, and inclusion of studies in the systematic review.

 

A substantial number of studies also address technostress, algorithmic control, and surveillance, highlighting the psychological costs associated with AI-driven monitoring and decision-making systems. These findings are complemented by research on AI-enabled HR practices and governance mechanisms, which suggests that ethical design, organizational support, and participatory approaches can mitigate negative well-being effects and foster resilience. Finally, research on digital work environments and smart technologies illustrates how AI reshapes employee well-being in virtual, hybrid, and technology-intensive work contexts, with implications for long-term sustainability and work–life balance.

 

Overall, Table 1 demonstrates that research on AI and employee well-being is methodologically dominated by survey-based studies, while experimental, qualitative, and mixed-method approaches remain comparatively underrepresented. Theoretically, the literature draws heavily on job demands–resources, technology acceptance, and social exchange perspectives, indicating opportunities for greater theoretical integration across levels of analysis. By organizing the empirical evidence into these five themes, Table 1 provides a structured foundation for the detailed thematic analysis presented in the following sections.

 

TABLE 1: Key themes from empirical research on AI in the workplace.

Theme 1: AI-enabled job design and human–AI interaction (n = 18 papers)
Description: This theme examines how AI reshapes job design, task allocation, and human–AI interaction, and how these changes influence employee psychological, emotional, and occupational well-being. Studies focus on AI as an augmentative or substitutive technology and its effects on workload, autonomy, and stress.
Example articles: Individual level: Braganza et al. (2020); Jin et al. (2024); Sweiss et al. (2024). Group level: Alkhayyal et al. (2024); Murphy (2024). Organizational level: Malik et al. (2022); Chin et al. (2024).
Examples of theories used: Job demands–resources theory; socio-technical systems theory; conservation of resources theory; job design theory.

Theme 2: Employee perceptions of AI and well-being outcomes (n = 14 papers)
Description: This theme focuses on employees' perceptions of AI, including trust, fairness, transparency, and usefulness, and how these perceptions shape stress, anxiety, engagement, and affective well-being.
Example articles: Individual level: Nazareno & Schiff (2021); Jaiswal et al. (2021); Hill et al. (2022). Group level: Nil. Organizational level: Soomro et al. (2024); Taslim et al. (2025).
Examples of theories used: Technology acceptance model; organizational justice theory; social cognitive theory; psychological contract theory.

Theme 3: Technostress, algorithmic control, and surveillance (n = 15 papers)
Description: This theme captures research on technostress, algorithmic management, surveillance, and control mechanisms, and their implications for employee burnout, strain, privacy concerns, and emotional exhaustion.
Example articles: Individual level: Wu et al. (2022); Jin et al. (2024); Zhou et al. (2023). Group level: Nil. Organizational level: Giermindl et al. (2021); Palmucci et al. (2024).
Examples of theories used: Labor process theory; stress–strain theory; surveillance theory; effort–reward imbalance theory.

Theme 4: AI-enabled HR practices, analytics, and governance (n = 13 papers)
Description: This theme focuses on AI-enabled HR practices, people analytics, and governance mechanisms, examining how ethical design, transparency, and organizational support influence employee well-being and resilience.
Example articles: Individual level: Xiao et al. (2023); Peña et al. (2024). Group level: Haipeter et al. (2024). Organizational level: Chang et al. (2023); Malik et al. (2022).
Examples of theories used: Social exchange theory; ethical AI frameworks; HRM system strength theory; organizational support theory.

Theme 5: Digital work environments and sustainable employee well-being (n = 12 papers)
Description: This theme examines AI in digital and virtual work environments, including remote work, smart technologies, and wearables, and their implications for engagement, health, work–life balance, and long-term well-being.
Example articles: Individual level: Torres et al. (2021); Thynne et al. (2022); Agarwal (2020). Group level: Aubouin-Bonnaventure et al. (2023). Organizational level: Leesakul et al. (2022); Mukhuty et al. (2022).
Examples of theories used: Well-being theory; self-determination theory; sustainable HRM perspectives; work–life balance theory.

 

2.2 | Coding procedures and approach to organizing the literature

To systematically analyze the selected articles, we developed a structured data extraction template to capture key information from each study, including research objectives, theoretical framing, methodological approach, AI application context, level of analysis, and employee well-being outcomes examined. This process ensured consistency in data extraction across the full sample of 72 studies.
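 

As a concrete illustration of how such an extraction template might be operationalized, the sketch below encodes the fields listed above as a simple record structure in Python. The field names mirror the template described in the text; the class itself, the citation field, and the example entry are hypothetical and are not drawn from the authors' actual coding materials.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StudyRecord:
    """One row of the data extraction template described above (illustrative only)."""
    citation: str                      # bibliographic reference, e.g. "Malik et al. (2022)"
    research_objective: str            # stated aim of the study
    theoretical_framing: List[str]     # e.g. ["job demands-resources theory"]
    methodological_approach: str       # e.g. "survey", "qualitative", "mixed methods"
    ai_application_context: str        # e.g. "people analytics", "algorithmic management"
    level_of_analysis: str             # "individual", "group", or "organizational"
    wellbeing_outcomes: List[str] = field(default_factory=list)  # outcomes examined

# A hypothetical entry, shown only to illustrate how the template captures a study:
example = StudyRecord(
    citation="Malik et al. (2022)",
    research_objective="AI-enabled HR practices and employee experience",
    theoretical_framing=["social exchange theory"],
    methodological_approach="survey",
    ai_application_context="AI-enabled HR practices",
    level_of_analysis="organizational",
    wellbeing_outcomes=["engagement", "trust"],
)
```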

 

Using an inductive and iterative approach, we organized the literature without imposing a predefined categorization scheme. Instead, themes were allowed to emerge organically through repeated reading and comparison of the articles. Each paper was read in full, and initial codes were generated based on the primary focus of the study. These codes were then refined through successive rounds of comparison, clustering, and abstraction.

 

During this iterative process, several preliminary categories were merged or differentiated as conceptual clarity increased. For example, early groupings related to job design, employee perceptions, and digital stress were initially treated as a single category but later differentiated into distinct themes capturing nuanced aspects of employee well-being. Similarly, studies examining AI-enabled HR analytics and algorithmic management were separated from broader discussions of AI adoption once their unique implications for employee experiences became evident.

 

The final organization of the literature resulted in five distinct but interrelated themes, each spanning multiple levels of analysis. These themes capture individual-level experiences (e.g., stress, engagement, affective well-being), group-level dynamics (e.g., collaboration, leadership, virtual work), and organizational-level influences (e.g., AI governance, HR systems, ethical considerations). An overview of these themes and representative studies is provided in Table 1.
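 

To make the clustering logic concrete, a minimal sketch of how preliminary codes could be mapped onto the five final themes is shown below. The specific code labels are hypothetical examples chosen for illustration; they do not reproduce the authors' actual codebook.

```python
from collections import Counter

# Illustrative mapping of hypothetical preliminary codes to the five final themes;
# this is not the authors' actual codebook.
code_to_theme = {
    "task allocation change": "AI-enabled job design and human-AI interaction",
    "autonomy shift": "AI-enabled job design and human-AI interaction",
    "trust in AI": "Employee perceptions of AI and well-being outcomes",
    "perceived fairness": "Employee perceptions of AI and well-being outcomes",
    "monitoring pressure": "Technostress, algorithmic control, and surveillance",
    "people analytics governance": "AI-enabled HR practices, analytics, and governance",
    "remote and hybrid work": "Digital work environments and sustainable employee well-being",
}

# Count how many illustrative codes cluster under each theme.
theme_counts = Counter(code_to_theme.values())
for theme, n in sorted(theme_counts.items()):
    print(f"{theme}: {n} illustrative code(s)")
```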

 

In the sections that follow, we present a detailed analysis of each theme. While analytically distinct, the themes are interconnected and collectively illustrate how artificial intelligence shapes employee well-being across individual, group, and organizational levels. We begin with the largest theme, which focuses on AI-enabled changes to work design and human–AI interaction, followed by themes addressing employee perceptions, algorithmic management, digital work environments, and broader organizational practices.

RESULTS

This section presents the results of the systematic review of 72 empirical studies examining the relationship between artificial intelligence (AI) use at work and employee well-being. Through an inductive thematic analysis, five overarching themes were identified that capture how AI influences employee experiences, attitudes, and outcomes across multiple levels of analysis. These themes reflect distinct but interconnected streams of research and collectively illustrate the complex, socio-technical nature of AI-enabled workplaces.

 

The identified themes span individual-, group-, and organizational-level mechanisms through which AI shapes employee well-being. At the individual level, studies primarily focus on how AI affects job design, task characteristics, psychological responses, and perceptions of fairness, trust, and job security. At the group level, research highlights the role of collective sensemaking, occupational identity, team support, and human–AI collaboration in shaping employee responses to AI. At the organizational level, the literature emphasizes the importance of AI governance, ethical HR practices, leadership support, and organizational climate in moderating the well-being consequences of AI adoption.

 

The five themes emerging from the review are: (1) human–AI collaboration; (2) employee perceptions of AI and well-being; (3) technostress, algorithmic control, and surveillance; (4) AI-enabled HR practices and governance; and (5) digital work environments and sustainable employee well-being. Each theme captures a distinct set of mechanisms and outcomes, while also intersecting with other themes across levels of analysis. Together, these themes demonstrate that the impact of AI on employee well-being is neither uniformly positive nor negative, but contingent on how AI technologies are designed, implemented, and experienced within organizational contexts.

 

In the following subsections, each theme is discussed in detail. For each theme, we synthesize findings across individual, group, and organizational levels of analysis to provide a comprehensive understanding of how AI use at work influences employee well-being outcomes.

 

Theme 1: Human–AI collaboration

A central theme emerging from the reviewed literature concerns the role of human–AI collaboration in shaping employee well-being. Across organizational contexts, the productivity, efficiency, and well-being benefits associated with AI adoption are shown to depend critically on how effectively AI systems are integrated into human work processes. Rather than functioning as autonomous replacements for human labor, AI technologies tend to yield positive outcomes when they are designed and deployed to complement, augment, and support human skills, judgment, and decision-making (Braganza et al., 2020; Malik et al., 2022; Bankins et al., 2023). The reviewed studies highlight that human–AI collaboration is a socio-technical process shaped by individual capabilities and attitudes, collective work practices, and organizational structures and cultures.

 

Individual level

At the individual level, human–AI collaboration is influenced by employees’ perceptions of AI system characteristics, changes to job design, and their own skills and confidence in working with AI. Several studies show that when employees perceive a strong fit between AI capabilities and task requirements, collaboration with AI is associated with positive well-being outcomes, including higher job satisfaction, engagement, and perceived performance sustainability (Braganza et al., 2020; Sweiss et al., 2024). AI systems that support decision-making, reduce routine cognitive load, or provide actionable feedback enable employees to focus on higher-value tasks, thereby enhancing both efficiency and psychological well-being (Malik et al., 2022; Jin et al., 2024).

 

Changes in job design play a particularly important role in shaping individual experiences of human–AI collaboration. When AI augments job autonomy, task variety, and information processing, employees report greater motivation and lower strain (Chin et al., 2024; Hill et al., 2022). In contrast, AI implementations that increase workload, intensify monitoring, or require rapid upskilling without adequate support tend to generate stress and undermine well-being (Nazareno & Schiff, 2021; Wu et al., 2022). These findings suggest that AI can simultaneously act as a job resource and a job demand, with implications for employee well-being depending on how work is redesigned.

 

Individual attitudes toward AI further condition collaboration outcomes. Trust in AI systems, perceptions of transparency, and beliefs about AI reliability influence whether employees view AI as a supportive partner or a threatening control mechanism (Soomro et al., 2024; Zhou et al., 2023). Employees who trust AI-generated recommendations and understand their purpose are more likely to integrate AI into their work practices, experience lower anxiety, and report more positive affective states. Conversely, low trust and perceived opacity contribute to technostress, resistance, and emotional exhaustion (Giermindl et al., 2021).

 

Skill levels and experience also shape individual collaboration with AI. Evidence suggests that employees with adequate digital skills and moderate task expertise benefit most from AI support, as they are better able to interpret AI outputs and integrate them with human judgment (Braganza et al., 2020; Malik et al., 2022). In contrast, employees with insufficient skills may experience overload and frustration, while highly experienced employees may perceive limited value in AI support, reducing collaboration benefits. These findings underscore the importance of targeted training and adaptive AI design to support diverse employee profiles.

 

Group level

At the group level, human–AI collaboration is shaped by shared work practices, collective sensemaking, and occupational identity. Research indicates that teams play a critical role in interpreting AI technologies and normalizing their use in everyday work. When teams collectively frame AI as a collaborative tool rather than a threat, employees experience lower uncertainty and greater psychological safety (Aubouin-Bonnaventure et al., 2023; Murphy, 2024). Shared understanding of AI capabilities and limitations enables teams to coordinate human and algorithmic contributions more effectively, supporting both performance and well-being.

 

Occupational identity also influences group-level responses to AI. Studies show that occupational groups with clearly defined identities and professional norms are better positioned to experiment with AI and integrate it into their work practices without undermining collective well-being (Bankins et al., 2023). In such contexts, AI is more readily positioned as an aid to professional judgment rather than a substitute for expertise, facilitating acceptance and collaboration. Conversely, groups with less cohesive identities may experience greater ambiguity and resistance, intensifying stress and undermining collaboration.

 

Digital leadership and team-level support further moderate group-level human–AI collaboration. Teams characterized by supportive leadership and open communication are better able to manage AI-related change, share learning, and buffer individual anxieties associated with AI adoption (Alkhayyal et al., 2024). These group-level resources help translate AI use into positive well-being outcomes by fostering trust and collective efficacy.

 

Organizational level

At the organizational level, human–AI collaboration is strongly influenced by broader cultural, structural, and governance-related factors. Supportive organizational climates that emphasize learning, innovation, and employee development facilitate effective collaboration between humans and AI systems (Malik et al., 2022; Chang et al., 2023). Such environments encourage employees to experiment with AI, view technological change as manageable, and adopt approach-oriented coping strategies that protect well-being.

 

Organizational alignment between AI systems, work routines, and performance management practices is also critical. Studies demonstrate that misalignment—such as introducing AI without adapting workflows, evaluation criteria, or leadership practices—undermines collaboration and increases employee strain (Bankins et al., 2023; Palmucci et al., 2024). In contrast, organizations that integrate AI into existing work processes while preserving human discretion and judgment are more likely to foster positive collaboration outcomes.

 

Finally, ethical governance and employee involvement emerge as key organizational enablers of human–AI collaboration. Transparent communication about AI use, opportunities for employee input, and clear accountability structures enhance trust and reduce well-being risks associated with AI adoption (Taslim et al., 2025; Chang et al., 2023). Without such governance mechanisms, AI systems risk being perceived as coercive or surveillant, undermining collaboration and employee well-being.

 

Theme Summary

Overall, this theme highlights that human–AI collaboration is a multilevel phenomenon shaped by individual skills and attitudes, group-level sensemaking and support, and organizational cultures and governance structures. Collaboration is most effective—and well-being outcomes are most positive—when AI systems are designed to augment human work, supported by training and trust-building practices, and embedded within organizational contexts that value employee participation and ethical AI use.

 

Theme 2: Employee perceptions of artificial intelligence and well-being

A second major theme emerging from the review concerns employees’ perceptions of artificial intelligence and the implications of these perceptions for employee well-being. Across the literature, AI is not experienced as a neutral technological artifact; rather, employees actively interpret AI systems in terms of their usefulness, fairness, transparency, and potential threat. These perceptions shape how employees emotionally and cognitively respond to AI-enabled work systems and, consequently, how AI influences psychological and work-related well-being outcomes.

 

Individual level

At the individual level, employee perceptions of AI are strongly associated with psychological well-being outcomes, including stress, anxiety, burnout, job satisfaction, and engagement. Studies consistently show that when employees perceive AI as supportive, accurate, and fair, they are more likely to experience positive affective states and higher job satisfaction (Jaiswal et al., 2021; Malik et al., 2022; Sweiss et al., 2024). In contrast, perceptions of AI as opaque, biased, or beyond human control are associated with heightened technostress, emotional exhaustion, and reduced well-being (Nazareno & Schiff, 2021; Wu et al., 2022; Jin et al., 2024).

 

Trust in AI systems emerges as a particularly salient individual-level perception. Employees who trust AI-generated outputs and believe that AI decisions are reliable and aligned with organizational goals report lower anxiety and greater acceptance of AI in their daily work (Soomro et al., 2024). Conversely, low trust—often arising from a lack of explainability or perceived bias in algorithmic decisions—amplifies uncertainty and fear, negatively affecting mental well-being and increasing resistance to AI use (Giermindl et al., 2021; Zhou et al., 2023).

 

Perceived job insecurity is another key perceptual mechanism linking AI to employee well-being. Several studies document that employees who view AI as a threat to job continuity or career progression experience higher stress levels and diminished engagement (Nazareno & Schiff, 2021; Johnson et al., 2020). These fear-based perceptions are particularly pronounced in roles subject to automation or algorithmic evaluation, where employees have limited insight into how AI systems influence performance assessments and employment decisions.

 

Group level

Although group-level evidence is comparatively limited, available studies indicate that shared perceptions of AI within teams influence collective well-being and coping responses. Teams that engage in collective sensemaking around AI—discussing its purpose, limitations, and implications—are better able to normalize AI use and reduce uncertainty among members (Murphy, 2024). Such shared understanding contributes to lower stress and greater psychological safety in AI-enabled work environments.

 

Group-level social support and digital leadership further shape how AI is perceived and experienced. Supportive team climates, characterized by open communication and collaborative problem-solving, help buffer negative emotional responses to AI-related change (Alkhayyal et al., 2024; Aubouin-Bonnaventure et al., 2023). In contrast, fragmented teams and weak leadership exacerbate negative perceptions, intensifying anxiety and resistance to AI adoption.

 

Organizational level

At the organizational level, structural and contextual factors play a critical role in shaping employee perceptions of AI and their well-being consequences. Organizational practices that emphasize transparency, ethical governance, and employee involvement in AI-related decisions are consistently associated with more positive employee perceptions (Chang et al., 2023; Taslim et al., 2025). When employees understand how AI systems are used, why they are implemented, and how decisions are made, perceptions of fairness and trust are strengthened, reducing well-being risks.

 

Organizational support mechanisms also moderate perceptual responses to AI. Access to training, reskilling opportunities, and clear communication about the role of AI in work processes help employees interpret AI as an enabling rather than threatening technology (Palmucci et al., 2024; Malik et al., 2022). In contrast, organizations that deploy AI without adequate support or explanation foster uncertainty and cynicism, undermining employee well-being.

 

Theme Summary

Overall, this theme demonstrates that employee perceptions are a central mechanism through which AI influences well-being. Positive perceptions—grounded in trust, fairness, and transparency—support psychological health and engagement, whereas negative perceptions—driven by fear, opacity, and perceived injustice—undermine well-being. These findings highlight the importance of managing not only the technical performance of AI systems but also how they are perceived and experienced by employees across organizational levels.

 

Theme 3: Technostress, algorithmic control, and surveillance

A third prominent theme in the reviewed literature concerns the unintended negative consequences of AI-enabled systems, particularly technostress, algorithmic control, and workplace surveillance, and their implications for employee well-being. While AI technologies are often introduced to enhance efficiency and decision-making, a substantial body of empirical research demonstrates that their deployment can intensify job demands, reduce perceived autonomy, and erode psychological well-being when experienced as intrusive or controlling.

 

Individual level

At the individual level, AI-driven systems are frequently associated with heightened technostress, emotional exhaustion, and anxiety. Studies examining automation, people analytics, and algorithmic decision-making consistently show that continuous monitoring, datafication of performance, and opaque evaluation criteria increase cognitive and emotional strain (Nazareno & Schiff, 2021; Wu et al., 2022; Jin et al., 2024). Employees report feeling constantly evaluated by algorithmic systems, which contributes to pressure to maintain performance metrics and undermines psychological safety.

 

Technostress emerges as a central mechanism linking AI use to negative well-being outcomes. AI systems that require constant interaction, rapid adaptation, or continuous skill updating can overwhelm employees, particularly when organizational support is insufficient (Wu et al., 2022; Palmucci et al., 2024). Such stress responses are exacerbated when AI-generated feedback is perceived as impersonal, inflexible, or misaligned with human judgment, leading to frustration and emotional exhaustion (Zhou et al., 2023).

 

Perceived loss of autonomy is another critical individual-level outcome. Research indicates that employees experience diminished control over their work when AI systems dictate task allocation, pacing, or evaluation without room for human discretion (Giermindl et al., 2021). This perceived loss of agency is closely associated with lower job satisfaction, increased strain, and withdrawal behaviors. In extreme cases, employees respond by disengaging from AI systems altogether or engaging in coping strategies aimed at circumventing algorithmic controls.

 

Group level

At the group level, algorithmic control and surveillance shape collective experiences of work and well-being. Although fewer studies explicitly focus on teams, available evidence suggests that shared exposure to AI-driven monitoring influences group norms and interaction patterns. In teams where algorithmic oversight is pervasive, employees may become more risk-averse, reduce knowledge sharing, and prioritize metric compliance over collaboration, indirectly affecting collective well-being (Aubouin-Bonnaventure et al., 2023).

 

Group-level responses to algorithmic control are also influenced by social comparison processes. AI-generated performance rankings and dashboards can intensify competition among team members, increasing stress and undermining social cohesion (Wu et al., 2022). In contrast, teams that contextualize algorithmic data through discussion and shared interpretation are better able to buffer negative emotional effects and maintain supportive interpersonal dynamics (Murphy, 2024).

 

Organizational level

At the organizational level, the design and governance of AI systems play a decisive role in determining whether technostress and surveillance undermine employee well-being. Research on people analytics and algorithmic management highlights that organizations adopting control-oriented AI systems—characterized by extensive monitoring and limited transparency—tend to generate higher levels of employee strain and distrust (Giermindl et al., 2021; Zhou et al., 2023). In such contexts, AI is experienced less as a supportive tool and more as a mechanism of control.

 

Conversely, organizational practices that balance AI-enabled monitoring with ethical governance and employee voice can mitigate well-being risks. Studies show that transparent communication about data use, clear boundaries around surveillance, and opportunities for employees to contest or contextualize AI-driven evaluations reduce stress and perceptions of injustice (Chang et al., 2023; Taslim et al., 2025). Leadership commitment to responsible AI use and psychological safety further buffers the negative effects of algorithmic control (Palmucci et al., 2024).

 

Importantly, organizational intent matters. When employees perceive AI surveillance as designed to support development and safety rather than punishment or cost reduction, well-being outcomes are less negative (Nazareno & Schiff, 2021). This finding underscores the role of organizational framing and governance in shaping employee experiences of AI-driven control.

 

Theme Summary

Overall, this theme highlights the potential well-being risks associated with AI-enabled technostress, algorithmic control, and surveillance. While AI systems can enhance efficiency and accountability, their deployment often intensifies job demands and reduces perceived autonomy when implemented without adequate safeguards. The reviewed evidence suggests that mitigating these risks requires organizational practices that preserve employee agency, transparency, and psychological safety across individual, group, and organizational levels.

 

Theme 4: AI-enabled HR practices and governance

A fourth theme emerging from the review focuses on the role of AI-enabled human resource (HR) practices and governance mechanisms in shaping employee well-being. This stream of research examines how AI is embedded in HR functions—such as recruitment, performance management, people analytics, and employee monitoring—and how governance structures influence whether these technologies support or undermine employee well-being. Across the literature, AI-enabled HR systems are shown to act as double-edged tools that can enhance efficiency and consistency while simultaneously raising ethical, psychological, and relational concerns.

 

Individual level

At the individual level, AI-enabled HR practices influence employee well-being by shaping perceptions of fairness, support, and developmental opportunity. Studies indicate that when AI is used to enhance HR decision-making—such as providing objective feedback, identifying skill gaps, or supporting career development—employees report higher resilience, job satisfaction, and engagement (Xiao et al., 2023; Peña et al., 2024). AI-driven insights that are framed as developmental rather than evaluative help employees perceive HR systems as supportive resources rather than sources of control.

 

However, individual-level well-being outcomes depend heavily on how AI-generated decisions are communicated and enacted. Employees experience stress and reduced trust when AI-based HR decisions are perceived as opaque or when employees lack opportunities to question or contextualize algorithmic outputs (Giermindl et al., 2021; Zhou et al., 2023). In such cases, AI-enabled HR systems contribute to feelings of powerlessness and procedural injustice, negatively affecting psychological well-being.

 

Group level

At the group level, AI-enabled HR practices influence collective experiences of fairness, participation, and support. Research highlights that employee involvement in AI-driven HR decision-making processes—such as consultation during system design or feedback mechanisms—enhances collective acceptance and mitigates well-being risks (Taslim et al., 2025; Haipeter et al., 2024). Teams that perceive HR analytics and AI systems as transparent and inclusive are better able to integrate these technologies into work practices without undermining morale or trust.

 

Group-level dynamics are also shaped by how AI-enabled HR systems standardize or differentiate employee treatment. While algorithmic consistency can reduce perceptions of favoritism, rigid application of AI-generated decisions may overlook contextual factors known to teams, thereby generating frustration and collective disengagement. These findings suggest that group-level sensemaking and dialogue are essential for translating AI-enabled HR practices into positive well-being outcomes.

 

Organizational level

At the organizational level, governance structures play a central role in determining the well-being consequences of AI-enabled HR practices. Studies consistently emphasize the importance of ethical AI governance frameworks that define accountability, transparency, and data responsibility (Chang et al., 2023; Malik et al., 2022). Organizations that establish clear principles for AI use—such as explainability, human oversight, and fairness—are more likely to foster employee trust and protect well-being.

 

Organizational investment in training and reskilling further moderates the impact of AI-enabled HR systems on employee well-being. Access to learning opportunities helps employees adapt to AI-driven changes and reduces anxiety related to skill obsolescence (Chin et al., 2024; Palmucci et al., 2024). In contrast, organizations that deploy AI without adequate support structures risk exacerbating stress and resistance, undermining both well-being and system effectiveness.

 

Leadership commitment to responsible AI use also emerges as a key organizational-level factor. Leaders who actively communicate the purpose of AI adoption, model ethical use, and encourage employee voice contribute to a climate in which AI-enabled HR practices are perceived as legitimate and supportive rather than coercive (Malik et al., 2022; Chang et al., 2023).

 

Theme Summary

Overall, this theme demonstrates that AI-enabled HR practices and governance mechanisms are pivotal in shaping employee well-being. When guided by ethical principles, transparency, and employee participation, AI-enabled HR systems can enhance resilience, engagement, and sustainable performance. Conversely, weak governance and exclusionary practices amplify stress, distrust, and perceptions of injustice. These findings highlight the importance of aligning AI-enabled HR practices with human-centered governance to support employee well-being across organizational levels.

 

Theme 5: Digital work environments and sustainable employee well-being

The fifth theme focuses on the role of AI in shaping digital and technology-mediated work environments and the implications of these changes for sustainable employee well-being. This stream of research examines AI-enabled virtual work, smart technologies, wearable devices, and digitally mediated coordination, highlighting how technology-intensive environments influence employees’ physical, psychological, and social well-being over time. Collectively, these studies emphasize that the sustainability of employee well-being depends not only on the presence of AI but also on how digital work environments are structured and supported.

 

Individual level

At the individual level, AI-enabled digital work environments are shown to exert both positive and negative influences on employee well-being. Several studies report that smart technologies and AI-supported systems can reduce physical strain, support health monitoring, and enhance work flexibility, contributing to improved well-being outcomes (Torres et al., 2021; Thynne et al., 2022). Wearable technologies, for example, have been associated with increased activity levels, better sleep quality, and higher job satisfaction when implemented as part of supportive wellness initiatives (Torres et al., 2021).

 

However, digital work environments also introduce new well-being risks. Virtual work arrangements and AI-enabled coordination tools can blur boundaries between work and non-work domains, increasing cognitive load and emotional exhaustion (Hill et al., 2022; Murphy, 2024). Employees in highly digitalized environments report challenges related to constant connectivity, smart-working fatigue, and difficulties disengaging from work, which negatively affect mental health and work–life balance (Palmucci et al., 2024). These findings suggest that AI-enabled flexibility can enhance well-being when accompanied by boundary management practices but undermine it when expectations of constant availability prevail.

 

Group level

At the group level, digital and AI-enabled work environments reshape team interactions, coordination, and social support, with implications for collective well-being. Studies indicate that virtual and hybrid teams rely heavily on AI-supported communication and coordination tools, which can facilitate collaboration but also reduce informal social interaction (Aubouin-Bonnaventure et al., 2023; Murphy, 2024). Reduced opportunities for spontaneous interaction may weaken social bonds and diminish perceived support, increasing feelings of isolation and stress among team members.

 

Group-level leadership and shared norms play a critical role in moderating these effects. Teams characterized by supportive digital leadership and explicit norms around technology use are better able to maintain engagement and well-being in AI-enabled environments (Alkhayyal et al., 2024). Collective practices such as regular check-ins, shared reflection on technology use, and mutual support help teams sustain well-being despite high levels of digitalization.

 

Organizational level

At the organizational level, the sustainability of employee well-being in digital work environments depends on strategic choices related to technology adoption, HR practices, and organizational culture. Organizations that integrate AI into digital work systems while prioritizing employee well-being—through policies supporting work–life balance, mental health, and recovery—are more likely to achieve sustainable outcomes (Leesakul et al., 2022; Mukhuty et al., 2022). Such organizations view AI not merely as a productivity tool but as part of a broader socio-technical system that must align with human needs.

 

Conversely, organizations that emphasize continuous availability, performance monitoring, and efficiency without adequate safeguards risk exacerbating burnout and disengagement (Palmucci et al., 2024; Giermindl et al., 2021). Sustainable well-being is more likely when organizations adopt a long-term perspective, balancing technological innovation with investment in employee health, supportive leadership, and ethical digital practices.

 

Theme Summary

This theme highlights that AI-enabled digital work environments have significant implications for sustainable employee well-being. While smart technologies and virtual work arrangements can enhance flexibility, health, and engagement, they also introduce risks related to overload, isolation, and boundary erosion. The reviewed evidence underscores the importance of organizational policies, team-level support, and individual boundary management in ensuring that digital and AI-enabled work environments promote, rather than undermine, long-term employee well-being.

 

FUTURE RESEARCH DIRECTIONS: OPPORTUNITIES AND CHALLENGES

This integrative review highlights that the implications of artificial intelligence (AI) for employee well-being are complex and context-dependent. Synthesizing evidence across the five themes identified in Section 3, the findings indicate that employee well-being outcomes emerge from interactions between AI design, implementation practices, employee perceptions, and organizational contexts across individual, group, and organizational levels (Bankins et al., 2023; Malik et al., 2022; Chang et al., 2023).

 

Building on these insights, we propose five future research pathways that address both opportunities and challenges for advancing organizational behavior and HRM scholarship on AI and employee well-being. These pathways are grounded in empirical gaps identified across the reviewed studies and are summarized in Table 2, which outlines key mechanisms and illustrative research questions.

 

TABLE 2: Future research directions on artificial intelligence and employee well-being.

Pathway 1: Using AI to facilitate employee well-being and satisfaction
Individual level. Mechanisms/constructs: job meaningfulness (Hill et al., 2022); autonomy (Malik et al., 2022); positive affect (Jaiswal et al., 2021); AI as a job resource (Braganza et al., 2020); workload reduction (Torres et al., 2021). Example research questions: How does AI-enabled task augmentation influence employee psychological well-being over time? Does perceived meaningfulness of AI-augmented work mediate the relationship between AI use and employee satisfaction?
Group level. Mechanisms/constructs: collective sensemaking (Murphy, 2024); shared norms around AI use (Bankins et al., 2023); team support (Aubouin-Bonnaventure et al., 2023). Example research question: How do teams collectively frame AI as a well-being-enhancing versus stress-inducing technology?
Organizational level. Mechanisms/constructs: well-being-oriented AI design (Chang et al., 2023); supportive HR practices (Malik et al., 2022); investment in employee development (Chin et al., 2024). Example research question: Under what organizational conditions does AI adoption enhance employee well-being rather than intensify strain?

Pathway 2: Trust, fairness, and transparency in AI systems
Individual level. Mechanisms/constructs: trust in AI (Soomro et al., 2024); perceived fairness (Zhou et al., 2023); explainability (Chang et al., 2023); psychological safety (Hill et al., 2022). Example research question: How does AI transparency shape employee trust and emotional responses to algorithmic decisions?
Group level. Mechanisms/constructs: shared perceptions of fairness (Aubouin-Bonnaventure et al., 2023); team-level justice climate (Bankins et al., 2023). Example research question: How do shared fairness perceptions within teams influence collective well-being in AI-enabled work settings?
Organizational level. Mechanisms/constructs: ethical AI governance (Chang et al., 2023); accountability mechanisms (Giermindl et al., 2021); employee voice (Taslim et al., 2025). Example research question: Which governance practices most effectively foster employee trust in AI-enabled HR systems?

Pathway 3: Technostress and algorithmic control as dynamic processes
Individual level. Mechanisms/constructs: technostress trajectories (Wu et al., 2022); coping strategies (Murphy, 2024); perceived autonomy (Nazareno & Schiff, 2021). Example research question: How do employees adapt psychologically to prolonged exposure to AI-enabled monitoring and control?
Group level. Mechanisms/constructs: social comparison (Wu et al., 2022); collective coping (Aubouin-Bonnaventure et al., 2023); peer support (Alkhayyal et al., 2024). Example research question: How do team norms moderate the relationship between algorithmic control and employee well-being?
Organizational level. Mechanisms/constructs: control-oriented vs. support-oriented AI use (Giermindl et al., 2021); framing of surveillance (Palmucci et al., 2024). Example research question: When is algorithmic control perceived as legitimate rather than harmful to well-being?

Pathway 4: Human-centered governance of AI-enabled HR practices
Individual level. Mechanisms/constructs: perceived organizational support (Peña et al., 2024); procedural justice (Zhou et al., 2023); participation (Haipeter et al., 2024). Example research question: How does employee involvement in AI-driven HR decisions affect individual well-being and acceptance?
Group level. Mechanisms/constructs: participation climate (Taslim et al., 2025); shared legitimacy perceptions (Bankins et al., 2023). Example research question: How does collective participation influence group-level trust in AI-enabled HR practices?
Organizational level. Mechanisms/constructs: ethical frameworks (Chang et al., 2023); HR system strength (Malik et al., 2022); leadership commitment (Palmucci et al., 2024). Example research question: How do different AI governance models shape long-term employee well-being outcomes?

Pathway 5: Sustaining well-being in digital and AI-enabled work environments
Individual level. Mechanisms/constructs: boundary management (Hill et al., 2022); recovery (Jaiswal et al., 2021); digital fatigue (Palmucci et al., 2024). Example research question: How does continuous AI-enabled connectivity affect long-term employee well-being and burnout?
Group level. Mechanisms/constructs: virtual collaboration quality (Murphy, 2024); digital leadership (Alkhayyal et al., 2024). Example research question: How do team practices sustain social support in AI-enabled virtual work environments?
Organizational level. Mechanisms/constructs: work–life balance policies (Leesakul et al., 2022); sustainable digital work design (Mukhuty et al., 2022). Example research question: What organizational practices enable sustainable employee well-being in highly digitalized workplaces?

 

4.1 | Pathway 1: Using AI to facilitate employee well-being and satisfaction

Findings from this review suggest that AI can enhance employee well-being when it augments work roles, reduces excessive demands, and supports meaningful task engagement (Braganza et al., 2020; Malik et al., 2022; Chin et al., 2024). However, the majority of existing research focuses on adverse outcomes such as technostress, insecurity, and emotional exhaustion (Nazareno & Schiff, 2021; Wu et al., 2022; Jin et al., 2024), indicating a need for future studies that explicitly examine how AI can function as a well-being resource.

 

Empirical evidence shows that AI-enabled decision support, workload optimization, and skill enhancement are associated with higher job satisfaction, engagement, and sustainable performance when employees perceive AI as supportive rather than controlling (Sweiss et al., 2024; Hill et al., 2022). Similarly, digital and smart work environments supported by AI-enabled wellness tools have been linked to improvements in physical and affective well-being (Torres et al., 2021; Thynne et al., 2022). Future research should therefore examine the psychological mechanisms—such as meaningfulness of work, autonomy, and positive affect—through which AI use enhances well-being (Jaiswal et al., 2021; Aubouin-Bonnaventure et al., 2023).

 

Longitudinal and multilevel research designs are needed to assess whether these well-being benefits persist over time or are offset by emerging demands such as continuous skill updating and performance monitoring (Palmucci et al., 2024; Giermindl et al., 2021).

 

4.2 | Pathway 2: Trust, fairness, and transparency in AI systems

Across Themes 2 and 4, trust and perceived fairness consistently emerge as central mechanisms linking AI use to employee well-being. Studies show that opaque, biased, or poorly explained AI systems erode trust and increase stress and resistance (Giermindl et al., 2021; Zhou et al., 2023), whereas transparent and explainable AI systems foster acceptance and psychological safety (Soomro et al., 2024; Taslim et al., 2025).

 

Future research should move beyond treating trust as a static attitude and instead examine how trust in AI develops, deteriorates, or stabilizes over time. Experimental and field studies could investigate how explainability, human oversight, and contestability of AI decisions shape employee emotional responses and well-being outcomes (Chang et al., 2023; Malik et al., 2022). At the group level, shared perceptions of fairness and justice climates may further amplify or buffer individual well-being responses to AI-enabled decision-making.

 

4.3 | Pathway 3: Technostress and algorithmic control as dynamic processes

Theme 3 highlights technostress, surveillance, and algorithmic control as prominent risks to employee well-being. Empirical studies demonstrate that AI-enabled monitoring intensifies job demands, reduces perceived autonomy, and increases emotional exhaustion (Nazareno & Schiff, 2021; Wu et al., 2022; Jin et al., 2024). However, most studies adopt cross-sectional designs, limiting understanding of how employees adapt to these systems over time.

 

Future research should conceptualize technostress and algorithmic control as dynamic processes. Longitudinal studies could explore whether employees develop coping strategies, normalize surveillance, or disengage from AI systems altogether (Murphy, 2024; Zhou et al., 2023). Group-level research could further examine how social comparison and collective coping influence well-being under algorithmic management (Aubouin-Bonnaventure et al., 2023). At the organizational level, comparative studies are needed to identify when algorithmic control is perceived as legitimate and developmental rather than coercive (Giermindl et al., 2021; Palmucci et al., 2024).

 

4.4 | Pathway 4: Human-centered governance of AI-enabled HR practices

AI-enabled HR practices—such as people analytics, algorithmic recruitment, and performance management—represent a critical locus for employee well-being outcomes. Studies show that ethical AI governance, employee involvement, and leadership commitment moderate the impact of AI-enabled HR systems on trust, stress, and engagement (Chang et al., 2023; Malik et al., 2022; Taslim et al., 2025).

 

Future research should examine which governance mechanisms are most effective in protecting employee well-being across organizational contexts. Comparative studies across industries and institutional settings could clarify how regulatory environments and cultural norms shape employee responses to AI-enabled HR practices (Mukhuty et al., 2022; Leesakul et al., 2022). Additionally, research should explore how employee participation in AI design and implementation influences perceptions of fairness, legitimacy, and well-being (Haipeter et al., 2024).

 

4.5 | Pathway 5: Sustaining employee well-being in digital and AI-enabled work environments

Theme 5 emphasizes that AI-enabled digital work environments offer flexibility but also pose well-being risks. Virtual work, smart technologies, and AI-supported coordination can enhance autonomy and health outcomes but also contribute to digital fatigue, isolation, and boundary erosion (Hill et al., 2022; Murphy, 2024; Palmucci et al., 2024).

 

Future research should adopt a sustainability lens to examine long-term well-being outcomes in digital work environments. Studies should investigate how organizational policies, leadership practices, and team norms shape employees’ ability to recover, disconnect, and maintain work–life boundaries in AI-enabled contexts (Leesakul et al., 2022; Aubouin-Bonnaventure et al., 2023). Integrating insights across levels of analysis will be essential for understanding how digital work environments can support enduring employee well-being.

CONCLUSION

This multilevel review synthesizes empirical evidence on artificial intelligence (AI) and employee well-being to provide a comprehensive understanding of how AI reshapes work experiences in contemporary organizations. Drawing on 72 empirical studies, the review demonstrates that AI is not inherently beneficial or harmful to employee well-being; rather, its effects depend on how AI technologies are designed, implemented, perceived, and governed across individual, group, and organizational levels (Bankins et al., 2023; Malik et al., 2022).

 

At the individual level, AI can enhance employee well-being by supporting task augmentation, reducing cognitive and physical demands, and enabling more meaningful and autonomous work experiences. However, when AI is perceived as opaque, threatening, or controlling, it contributes to technostress, anxiety, and reduced job satisfaction (Nazareno & Schiff, 2021; Wu et al., 2022; Jin et al., 2024). At the group level, collective sensemaking, leadership support, and shared norms play a critical role in shaping how employees interpret and emotionally respond to AI-enabled work systems (Murphy, 2024; Aubouin-Bonnaventure et al., 2023). At the organizational level, ethical governance, transparent HR practices, and investment in employee development are central to sustaining trust and protecting well-being in AI-enabled workplaces (Chang et al., 2023; Taslim et al., 2025).

 

By integrating these findings into a multilevel framework, this review advances the literature beyond polarized narratives that frame AI solely as a source of efficiency gains or employee harm. Instead, the evidence highlights AI as a socio-technical phenomenon whose well-being outcomes emerge from the interaction between technology, human agency, and organizational context (Braganza et al., 2020; Hill et al., 2022). The review thus contributes to organizational behavior and HRM scholarship by clarifying key mechanisms—such as trust, autonomy, technostress, and governance—through which AI influences employee well-being.

 

Importantly, the findings underscore that the future of AI-enabled work hinges on human-centered choices. Organizations that prioritize ethical AI governance, transparent communication, employee participation, and continuous learning are better positioned to harness AI’s potential while minimizing risks to employee well-being (Malik et al., 2022; Palmucci et al., 2024). Conversely, neglecting these factors may exacerbate stress, disengagement, and inequality in AI-enabled work environments.

 

Overall, this review provides a robust foundation for future research and practice by demonstrating that employee well-being should be treated as a central outcome of AI adoption rather than a secondary consideration. As AI continues to evolve and permeate organizational life, adopting a multilevel, human-centered perspective will be essential for ensuring that technological advancement aligns with sustainable employee well-being and organizational effectiveness.

REFERENCES
  1. Murugesan, U., et al. (2023). A study of artificial intelligence impacts on human resource digitalization in Industry 4.0. Decision Analytics Journal, 7, 100223.
  2. Kaushik, A., Sharma, K., Gupta, G., Sharma, D., & Arora, P. (2024). Big data revolution: Transforming different sectors. International Journal of Progressive Research in Engineering Management and Science, 4(7), 244–248.
  3. Madan, S., Gandhi, P., & Sharma, G. (2025). Enhancing Organizational Sustainability by Fostering Employee Engagement through a Supportive Workplace Culture. Journal of Marketing & Social Research, 2(1), 52–57.
  4. Sposato, M., et al. (2025). New technologies in HR: Bridging efficiency and ethical considerations. The International Journal of Organizational Analysis, 33(1), 1–19.
  5. Aggarwal, D., Saxena, A. B., & Sharma, D. (2025). Innovative teaching pedagogy to promote green learning in technical institutions. International Journal of Humanities Education, 13(1), 207–214.
  6. Madero-Gómez, S., et al. (2023). Companies could benefit when they focus on employee wellbeing and the environment: A systematic review of sustainable human resource management. Sustainability, 15(6), 1–21.
  7. Dutta, D., et al. (2023). Bots for mental health: The boundaries of human and technology agencies for enabling mental well-being within organizations. Person-Centered Review, 7(1), 1–18.
  8. Lu, Y., et al. (2022). Sustainable human resource management practices, employee resilience, and employee outcomes: Toward common good values. Human Resource Management, 61(4), 427–444.
  9. Malik, A., et al. (2022). Employee experience – The missing link for engaging employees: Insights from an MNE’s AI-based HR ecosystem. Human Resource Management, 61(4), 385–402.
  10. Chillakuri, B., et al. (2020). Examining the effects of workplace well-being and high-performance work systems on health harm: A sustainable HRM perspective. Society and Business Review, 15(3), 371–389.
  11. Soulami, M., et al. (2024). Exploring how AI adoption in the workplace affects employees: A bibliometric and systematic review. Frontiers in Artificial Intelligence, 7, Article 1223341.
  12. Bankins, S., Ocampo, A. C., Marrone, M., Restubog, S. L. D., & Woo, S. E. (2023). A multilevel review of artificial intelligence in organizations: Implications for organizational behavior research and practice. Journal of Organizational Behavior, 45(2), 159–182.
  13. Sypniewska, B. A., et al. (2023). Work engagement and employee satisfaction in the practice of SHRM: A study of Polish employees. The International Entrepreneurship and Management Journal, 19(2), 755–776.
  14. Malik, N., et al. (2021). Impact of AI on employees working in Industry 4.0-led organizations. International Journal of Manpower, 42(6), 1114–1132.
  15. Johnson, A., et al. (2020). A review and agenda: Technology-driven changes at work and mental health. Australian Journal of Management, 45(3), 384–408.
  16. Cramarenco, R., et al. (2023). The impact of AI on employees’ skills and well-being in global labor markets: A systematic review. Oeconomia Copernicana, 14(2), 287–315.
  17. Amrutha, V., & Geetha, S. N. (2020). A systematic review on green HRM: Implications for social sustainability. Journal of Cleaner Production, 247, 119131.
  18. Richards, J. (2020). Putting employees at the centre of sustainable HRM: A review, map and research agenda. Employee Relations: The International Journal, 42(4), 1001–1017.
  19. Rahi, S. (2021). Investigating the role of employee psychological well-being and psychological empowerment with relation to work engagement and sustainable employability. International Journal of Ethics and Systems, 37(4), 553–573.
  20. Chang, Y.-L., et al. (2023). Socially responsible AI-empowered people analytics: A novel framework towards sustainability. Human Resource Development Review, 22(2), 131–157.
  21. Kadir, B. A., Broberg, O., & Conceição, C. S. (2020). Human well-being and system performance in the transition to Industry 4.0. International Journal of Industrial Ergonomics, 76, 102936.
  22. Braganza, A., Chen, W., Canhoto, A., & Sap, S. (2020). Productive employment and decent work: The impact of AI adoption on psychological contracts, job engagement and employee trust. Journal of Business Research, 117, 667–676.
  23. Xiao, Q., et al. (2023). How does AI-enabled HR analytics influence employee resilience: Job crafting as a mediator and HRM system strength as a moderator. Personnel Review, 52(5), 1421–1440.
  24. Islam, S. M. F., et al. (2021). A systematic review of human capital and employee well-being: Putting human capital back on the track. European Journal of Training and Development, 45(4/5), 370–394.
  25. Rajashekar, S., et al. (2023). A thematic analysis on employee engagement in IT companies from the perspective of holistic well-being initiatives. Employee Responsibilities and Rights Journal, 35(3), 375–396.
  26. Agarwal, P. (2020). Shattered but smiling: HRM and wellbeing of hotel employees during COVID-19. International Journal of Hospitality Management, 89, 102602.
  27. Chin, Y.-S., et al. (2024). Harnessing the power of artificial intelligence (AI): A paradigm shift in HRM practices for employee sustainable performance. Global Knowledge, Memory and Communication, 73(3), 321–338.
  28. Asfahani, A. M. (2024). Fusing talent horizons: The transformative role of data integration in modern talent management. Discover Sustainability, 5, Article 18.
  29. Jankovic, S. D., et al. (2023). Strategic integration of artificial intelligence for sustainable businesses: Implications for data management and human user engagement in the digital era. Sustainability, 15(8), 6842.
  30. Taslim, W. S., et al. (2025). Employee involvement in AI-driven HR decision-making: A systematic review. SA Journal of Human Resource Management, 23, Article a2345.
  31. Xiang, H., et al. (2023). Sustainable development of employee lifecycle management in the age of global challenges: Evidence from China, Russia, and Indonesia. Sustainability, 15(9), 1–20.
  32. van Zoonen, W., et al. (2022). A tool and a tyrant: Social media and well-being in organizational contexts. Current Opinion in Psychology, 45, 101315.
  33. Pereira, V., et al. (2021). A systematic literature review on the impact of artificial intelligence on workplace outcomes: A multi-process perspective. Human Resource Management Review, 31(1), 100770.
  34. Sweiss, M. I. K., et al. (2024). The role of AI-enabled human resource practices towards task satisfaction and employee creative willingness. SAGE Open, 14(1), 1–14.
  35. Soomro, S., et al. (2024). AI adoption: A bridge or a barrier? The moderating role of organizational support in the path toward employee well-being. Kybernetes, 53(5), 1543–1563.
  36. Haipeter, T., et al. (2024). Human-centered AI through employee participation. Frontiers in Artificial Intelligence, 7, Article 1354210.
  37. Manuti, A., et al. (2020). “Everything will be fine”: A study on the relationship between employees’ perception of sustainable HRM practices and positive organizational behavior during COVID-19. Sustainability, 12(23), 10216.
  38. Palmucci, D. N., et al. (2024). Managing employees’ needs and well-being in the post-COVID-19 era. Management Decision, 62(4), 1023–1041.
  39. Giermindl, L., et al. (2021). The dark sides of people analytics: Reviewing the perils for organisations and employees. European Journal of Information Systems, 30(6), 625–641.
  40. Peña, I., et al. (2024). Wellness programs, perceived organizational support, and their influence on organizational performance. SAGE Open, 14(2), 1–13.
  41. Lee, J.-G., et al. (2024). How does algorithm-based HR predict employees’ sentiment? Developing an employee experience model through sentiment analysis. Industrial and Commercial Training, 56(2), 235–250.
  42. Kuzior, A., et al. (2021). Digitalization of work and HR processes as a way to create a sustainable and ethical organization. Energies, 14(20), 6638.
  43. Ateeq, K., et al. (2025). The transformative impact of artificial intelligence on organisational behaviour: Employee engagement, performance, and ethical implications. Journal of Posthumanism, 5(1), 1–21.
  44. Mukhuty, S., et al. (2022). Strategic sustainable development of Industry 4.0: The role of HR practices. Business Strategy and the Environment, 31(7), 2983–2998.
  45. Dutta, D., et al. (2024). Inclusive and sustainable economic growth for MSME firms: Examining the impact of sustainable HRM practices on women’s well-being. International Journal of Manpower, 45(2), 275–293.
  46. Jaiswal, A., et al. (2021). Impact of happiness-enhancing activities and positive practices on employee well-being. Journal of Asia Business Studies, 15(4), 635–652.
  47. Nazareno, L., & Schiff, D. (2021). The impact of automation and AI on worker well-being. Technology in Society, 67, 101679.
  48. Murphy, L. (2024). Wellbeing in the age of virtual teams and workplace automation: A systematic review and future research agenda. International Journal of Organizational Analysis, 32(4), 1154–1173.
  49. Sharma, M., et al. (2022). Analysing the impact of SHRM practices and Industry 4.0 adoption on employability skills. International Journal of Manpower, 43(7), 1557–1576.
  50. Hill, N. S., et al. (2022). Unpacking virtual work’s dual effects on employee well-being: An integrative review and future research agenda. Journal of Management, 48(8), 2022–2053.
  51. Alkhayyal, S., et al. (2024). Countering technostress in virtual work environments: The role of work-based learning and digital leadership. Acta Psychologica, 246, 104207.
  52. Rahi, S. (2023). Fostering employee work engagement and sustainable employment during COVID-19 through HR practices, psychological well-being, and empowerment. Industrial and Commercial Training, 55(6), 623–639.
  53. Aubouin-Bonnaventure, J., et al. (2023). Well-being and performance at work: Virtuous organisational practices (VOPs). International Journal of Organizational Analysis, 31(6), 1658–1676.
  54. Bhardwaj, S., et al. (2025). Nurturing the roots of workplace flourishing: An in-depth exploration of employee well-being initiatives. International Journal of Organizational Analysis, 33(1), 1–22.
  55. Torres, E. N., et al. (2021). The impact of wearable devices on employee wellness programs: A study of hotel industry workers. International Journal of Hospitality Management, 94, 102864.
  56. Tong, S., et al. (2021). The Janus face of AI feedback: Deployment versus disclosure effects on employee performance. Strategic Management Journal, 42(12), 2290–2312.
  57. Jin, G., et al. (2024). The work affective well-being under the impact of artificial intelligence. Scientific Reports, 14, 1851.
  58. Leonardi, P. (2020). COVID-19 and the new technologies of organizing: Digital exhaust, digital footprints, and artificial intelligence. Journal of Management Studies, 57(8), 1699–1704.
  59. Saeidnia, H. R., et al. (2024). Ethical considerations in AI interventions for mental health and well-being. The Social Science Journal, 61(3), 435–450.
  60. Wu, W., et al. (2022). Technostress and the smart hospitality employee. Journal of Hospitality and Tourism Technology, 13(2), 333–349.
  61. Leesakul, N., et al. (2022). Workplace 4.0: Implications of tech adoption on a sustainable workforce. Sustainability, 14(11), 6763.
  62. Thynne, L., et al. (2022). Using smart technology to enhance paramedic well-being. Asia Pacific Journal of Human Resources, 60(3), 433–453.
  63. Aydın, E., et al. (2023). AI shortlisting model for sustainable HRM. Sustainability, 15(4), 3186.
  64. Zhou, Y., et al. (2023). The dark side of AI-enabled HRM on employees based on algorithmic features. Journal of Organizational Change Management, 36(4), 673–692.
  65. Mendy, J., et al. (2024). Artificial intelligence in the workplace: Challenges, opportunities, and HRM framework. Journal of Managerial Psychology, 39(2), 219–238.
  66. Muridzi, G., et al. (2024). Artificial intelligence in transforming HRM processes. International Journal of Innovation Management, 28(2), 2450012.
  67. Ghosh, V., et al. (2025). The evolution of global smart systems and future technologies in HRMS: Implications for SDGs. Journal of Global Information Management, 33(1), 1–23.
  68. Malik, A., et al. (2022). AI-assisted HRM: Towards an extended strategic framework. Human Resource Management Review, 32(4), 100901.
  69. Kim, S., et al. (2024). Strategic human resource management in the era of algorithmic technologies. Human Resource Management, 63(2), 153–171.
  70. Benabou, A., et al. (2025). Empowering HRM through AI: A systematic review and bibliometric analysis. International Journal of Production Management and Engineering, 13(1), 1–19.
  71. Basu, S., et al. (2022). The role of artificial intelligence in HRM: A systematic review and future research agenda. Human Resource Management Review, 32(4), 100901.
  72. Prikshat, V., et al. (2021). AI-augmented HRM: Antecedents, assimilation, and multilevel consequences. Human Resource Management Review, 31(2), 100808.