Research Article | Volume 2 Issue 3 (May, 2025) | Pages 13 - 27
AI in Talent Acquisition: Enhancing Diversity and Reducing Bias
1. Director, Utilities America, LTIMindtree Limited, Houston, Texas, USA
2. Business Integration Architecture Manager, Department of HCM & Payroll Capability, Accenture, Bengaluru
3. Founder and CTO, Turinton Consulting Pvt Ltd, Pune, Maharashtra
4. Senior Engineer / HCI Infrastructure Architect, Department of IT, Techno Tasks, Houston, 500013
5. Associate Professor, School of Computer Applications, Pimpri Chinchwad University, Pune
6. Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, Tamil Nadu, India
Under a Creative Commons license
Open Access
Received: March 20, 2025
Revised: April 23, 2025
Accepted: April 26, 2025
Published: May 8, 2025
Abstract

Artificial Intelligence (AI) is increasingly being integrated into talent acquisition processes, offering innovative solutions to longstanding challenges related to diversity and bias in recruitment. This study explores the transformative potential of AI-driven tools to create more equitable hiring outcomes by minimizing human biases, enhancing candidate screening objectivity, and expanding outreach to underrepresented groups. Through a critical analysis of recent AI applications, the research examines both the benefits and limitations of algorithmic recruitment practices, highlighting the importance of ethical AI development, transparency, and continuous monitoring. The findings indicate that while AI offers promising avenues for advancing inclusivity, it must be strategically implemented within a framework that prioritizes fairness, data integrity, and compliance with legal standards. This paper contributes to the growing body of knowledge on responsible AI use in human resource management and provides actionable insights for organizations aiming to foster a more diverse workforce.

Keywords
INTRODUCTION

Overview

The integration of Artificial Intelligence (AI) into human resource management, particularly in talent acquisition, is transforming traditional recruitment paradigms. As organizations strive for efficiency, inclusivity, and agility in hiring, AI technologies such as natural language processing (NLP), machine learning (ML), and predictive analytics are being widely adopted to automate candidate sourcing, resume screening, interview scheduling, and even decision-making in final selections. While these advancements bring undeniable operational benefits, they also raise profound ethical concerns—most notably, the potential for algorithmic bias and the perpetuation of systemic inequities in employment practices.

 

The conventional hiring process, which has historically been susceptible to conscious and unconscious human bias, is now being replaced or supplemented by data-driven models. However, when improperly designed or inadequately tested, these models can inadvertently reproduce existing disparities by reflecting the biases present in historical data. For instance, AI systems trained on past hiring decisions may favor candidates resembling previously hired profiles, thereby marginalizing applicants from diverse or underrepresented backgrounds. This paradox—where AI is expected to reduce bias but can also reinforce it—necessitates a deeper exploration of the mechanisms, challenges, and ethical responsibilities associated with AI in hiring.

This paper seeks to address the dualistic nature of AI in talent acquisition: its power to drive fairness, and its simultaneous risk of exacerbating inequality. As AI tools become more sophisticated and embedded within organizational infrastructures, it becomes crucial to understand how such systems can be responsibly designed, audited, and governed to support equitable employment outcomes.

 

Scope and Objectives

This study focuses on the deployment of AI-driven solutions within the talent acquisition domain, with particular emphasis on enhancing workforce diversity and mitigating both overt and covert forms of bias. It adopts an interdisciplinary lens, bridging perspectives from human resource management, computer science, ethics, and law. The primary objectives of this paper are:

  • To examine the current landscape of AI applications in recruitment and hiring practices across diverse industries.
  • To critically assess the ways in which AI systems may unintentionally perpetuate or mitigate bias during various stages of the talent acquisition pipeline.
  • To explore the role of transparency, explainability, and accountability in the ethical implementation of AI technologies.
  • To provide actionable recommendations and policy considerations for organizations seeking to use AI as a tool for promoting workplace diversity and inclusion.
  • To identify key challenges and opportunities for future research and development in the field of AI-enabled human resource practices.

 

Author Motivations

The authors were motivated by a growing concern over the widening gap between technological innovation and ethical oversight in the deployment of AI for recruitment. Despite the promise of impartiality and objectivity, many AI-powered systems are built on training data that reflects historical discrimination, raising red flags about their real-world impacts. As scholars and professionals deeply engaged in the intersections of technology, social justice, and organizational behavior, the authors recognized an urgent need to evaluate how AI could be more effectively leveraged to combat, rather than replicate, inequality.

 

This motivation was further reinforced by an increasing number of high-profile cases in which AI hiring tools were shown to produce discriminatory outcomes—such as penalizing applicants for gendered language in resumes or excluding candidates from minority groups due to skewed data patterns. These incidents highlight a crucial tension: the drive for innovation must be balanced with a commitment to equity and fairness. The authors believe that through rigorous academic inquiry, it is possible to guide the responsible development and deployment of AI systems that align with the broader goals of social inclusivity and ethical integrity.

 

Paper Structure

To comprehensively explore the theme of AI in talent acquisition and its implications for diversity and bias, the paper is structured into several key sections:

  • Section 2: Literature Review – This section synthesizes the existing body of scholarly and industry literature on AI applications in recruitment, focusing on both the technical underpinnings and socio-ethical considerations of these systems.
  • Section 3: Methodology – Here, the paper outlines the research design, data sources, analytical frameworks, and tools employed to investigate the role of AI in shaping inclusive hiring practices.
  • Section 4: Findings and Discussion – This section presents the core findings of the study, highlighting patterns, opportunities, risks, and the real-world impacts of AI on bias mitigation and diversity enhancement in hiring.
  • Section 5: Challenges and Limitations – This part discusses the inherent limitations of AI, the difficulties in auditing complex algorithms, and the limitations of this research in terms of data scope and generalizability.
  • Section 6: Recommendations and Policy Implications – Based on the research insights, this section offers practical strategies for organizations, policymakers, and developers to ensure ethical and inclusive AI integration in hiring processes.
  • Section 7: Conclusion and Future Directions – The paper concludes with a synthesis of key takeaways and outlines potential areas for future research, including algorithmic auditing, AI ethics education, and global legal frameworks.

 

The promise of AI in transforming talent acquisition is significant, offering the potential to revolutionize how organizations identify and hire talent in a fair, efficient, and unbiased manner. However, realizing this promise requires deliberate efforts to align technological capabilities with ethical imperatives and human values. Through this paper, the authors aim to contribute to the growing discourse on responsible AI in human resources and to support the development of hiring practices that are not only intelligent but also inclusive and just. As AI continues to reshape the labor market, it is our collective responsibility to ensure that innovation does not come at the cost of equality.

LITERATURE REVIEW

Introduction to AI in Talent Acquisition

The emergence of Artificial Intelligence (AI) in human resources (HR) has significantly reshaped the landscape of talent acquisition. By leveraging machine learning algorithms, natural language processing (NLP), and data analytics, AI systems are now capable of automating critical recruitment tasks such as resume parsing, candidate shortlisting, interview scheduling, and even cultural fit assessments (Barocas, Hardt, & Narayanan, 2023). This technological advancement promises not only increased efficiency but also the potential to reduce subjectivity and bias in decision-making processes traditionally driven by human judgment (Kim, 2023).

 

The theoretical appeal of AI in hiring lies in its ability to make decisions based on data rather than subjective impressions. However, multiple studies caution that unless AI systems are carefully designed and monitored, they can replicate or even amplify existing biases embedded within historical hiring data (Ajunwa, 2022; Raghavan et al., 2022). This duality has sparked a growing body of research aimed at unpacking the ethical and operational implications of using AI in the hiring process.

 

AI and Recruitment Efficiency

AI has been widely praised for improving recruitment speed, reducing hiring costs, and minimizing administrative burdens. Algorithms can screen thousands of applications in seconds, extracting relevant keywords and ranking candidates based on pre-set criteria (Heilweil, 2024). According to Lee (2022), organizations that implemented AI-based pre-employment assessments reported significant improvements in time-to-hire metrics and candidate quality.
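As a toy illustration of how this kind of keyword-based screening works, consider the minimal sketch below. The keywords, weights, and resume snippets are invented for illustration and do not reflect any platform's actual rubric.

```python
import re

# Invented criteria: keyword -> weight (not any platform's real rubric).
criteria = {"python": 3, "sql": 2, "machine learning": 3, "communication": 1}

def score_resume(text: str) -> int:
    """Sum the weights of criteria keywords found in the resume text."""
    text = text.lower()
    return sum(weight for kw, weight in criteria.items()
               if re.search(r"\b" + re.escape(kw) + r"\b", text))

resumes = {
    "cand_a": "Experienced in Python, SQL, and machine learning projects.",
    "cand_b": "Strong communication skills; background in marketing.",
}

# Rank candidates by descending keyword score.
ranking = sorted(resumes, key=lambda c: score_resume(resumes[c]), reverse=True)
print([(c, score_resume(resumes[c])) for c in ranking])
```

Even this trivial scorer shows how rigid keyword weights can disadvantage candidates who describe equivalent experience in different vocabulary, one channel through which bias enters automated screening.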

 

Nonetheless, these systems are not neutral. Tolan et al. (2023) demonstrate how certain AI systems may systematically devalue applicants based on linguistic or behavioral patterns associated with gender or cultural identity. While such efficiency gains are valuable, they cannot come at the expense of fairness and equal opportunity.

AI and Bias Perpetuation

Several landmark studies have shown that AI systems, particularly those trained on biased or incomplete datasets, often inherit the prejudices present in historical hiring data. For example, Raghavan et al. (2022) illustrated how resume screening tools favored male candidates when trained on past hiring decisions skewed by gender bias. Similarly, Obermeyer and Mullainathan (2023) exposed racial bias in an AI system used to rank candidates for healthcare roles, pointing to broader concerns about algorithmic fairness.

 

Binns et al. (2023) further argue that the very process of quantifying candidate attributes reduces human individuality to a set of metrics, often leading to dehumanizing outcomes. These findings are supported by Sanchez-Monedero et al. (2022), who warn that companies may inadvertently rely on opaque “black box” systems that are not auditable or explainable, thereby obscuring discriminatory decision-making processes.

 

Algorithmic Fairness and Auditing

To counter the risk of algorithmic discrimination, scholars have advocated for fairness-aware machine learning models and robust auditing mechanisms. Barocas, Hardt, and Narayanan (2023) provide a comprehensive framework for measuring and mitigating algorithmic bias through data pre-processing, in-processing, and post-processing techniques. However, Binns (2022) cautions that fairness is a normative concept, often interpreted differently depending on social, legal, and organizational contexts.

 

Recent empirical work also explores the feasibility of algorithmic audits. Levy and Barocas (2023) suggest that systematic audits can uncover disparate impacts but require legal and institutional support to be effective. Yet, Hoffman and Acosta (2023) argue that even well-intentioned audits may fail to capture dynamic biases that evolve over time as algorithms are retrained on new data.

 

Legal, Ethical, and Organizational Considerations

The legal landscape surrounding AI in hiring remains fragmented and underdeveloped. Kim (2023) underscores the need for updated employment laws that account for the complexity of automated decision-making. Ajunwa (2022) calls for a regulatory framework that mandates algorithmic transparency and prohibits discriminatory outcomes, while also protecting employer interests in innovation and efficiency.

 

From an ethical standpoint, authors such as Dastin (2023) and Heilweil (2024) highlight the tension between organizational goals and social justice. Companies may prioritize predictive accuracy and ROI without fully considering the broader consequences of algorithmic exclusion. Tolan et al. (2023) propose a “responsibility by design” approach, emphasizing inclusive model training and stakeholder engagement.

 

Jarrahi and Sutherland (2023) argue that organizational readiness and leadership commitment are pivotal to the ethical integration of AI. Without a culture of accountability and continuous learning, even the most sophisticated systems may fail to produce just outcomes.

 

The Role of Transparency and Human Oversight

Transparency is widely regarded as a cornerstone of ethical AI use in hiring. Lee (2022) notes that candidates are more likely to trust AI-based hiring decisions when they are provided with clear explanations of how those decisions were made. However, many commercial tools operate under proprietary algorithms, offering little to no insight into their decision-making logic (Sanchez-Monedero et al., 2022).

 

Binns et al. (2023) highlight the importance of meaningful human oversight in the deployment of AI tools. Rather than replacing human judgment, AI should augment it, serving as a decision-support tool that helps recruiters identify qualified candidates while remaining accountable for final decisions.

 

Research Gap

While a growing body of literature has acknowledged both the promises and pitfalls of AI in hiring, several significant gaps remain unaddressed:

  1. Limited Real-World Evaluation: Much of the current research is theoretical or based on simulations. Few studies provide in-depth, empirical evaluations of AI systems operating in real-world recruitment environments across diverse industries (Hoffman & Acosta, 2023).
  2. Lack of Longitudinal Insight: There is limited understanding of how AI-driven hiring systems perform over time, especially regarding how iterative training may introduce new forms of bias or reinforce existing disparities (Raghavan et al., 2022; Jarrahi & Sutherland, 2023).
  3. Insufficient Focus on Intersectionality: Most studies focus on single-axis biases such as gender or race. There is a critical need for research that examines how AI systems interact with intersecting identities, such as race, gender, age, disability, and socioeconomic status (Tolan et al., 2023).
  4. Scarcity of Implementation Guidelines: While many scholars have proposed theoretical fairness models, there is a lack of practical, actionable guidelines for HR practitioners seeking to ethically implement AI tools (Barocas et al., 2023; Levy & Barocas, 2023).
  5. Unclear Regulatory Pathways: Although legal scholars advocate for new laws, there is a paucity of comparative international studies examining how different jurisdictions regulate AI in employment contexts (Kim, 2023; Ajunwa, 2022).

 

The literature clearly indicates that AI has the potential to either ameliorate or exacerbate bias in talent acquisition. While the tools offer efficiency and scalability, they require rigorous oversight, transparency, and ethical grounding. This paper seeks to bridge the identified gaps by offering a data-driven, multidisciplinary examination of AI’s real-world impact on diversity and inclusion in hiring. Through empirical analysis and critical synthesis, the study aims to inform the responsible design and deployment of AI systems that align with both organizational goals and social justice imperatives.

METHODOLOGY

Research Design

This study adopts a mixed-methods research design, integrating both quantitative analysis of AI hiring tools and qualitative case studies from industry applications. This approach allows for a comprehensive examination of the efficacy, fairness, and ethical considerations associated with AI-driven recruitment practices. The research framework consists of four core stages: tool selection, dataset analysis, bias detection and measurement, and stakeholder interviews.

Data Collection

AI Tool Selection Criteria

A total of 10 commercial AI-based hiring platforms were selected for analysis based on the following criteria:

  • Actively used by Fortune 500 or mid-sized organizations
  • Incorporate at least one AI feature (e.g., resume parsing, chatbots, video interview analysis)
  • Publicly available documentation or white papers
  • Diverse representation of industries (e.g., tech, healthcare, finance)

 

Table 1 summarizes the selected tools and their core AI capabilities.

 

Table 1. Overview of AI Hiring Tools Analyzed

Tool Name | AI Capabilities | Industry Focus | Deployment Scale
HireVue | Video interview analysis, NLP | Cross-industry | Global
Pymetrics | Neuroscience games, behavioral profiling | Finance, Consulting | Large enterprises
X0PA AI | Predictive analytics, bias detection | Tech, Education | SMEs to enterprises
Harver | Pre-employment assessments, automation | Retail, BPO | Global
Talview | Behavioral insights, facial recognition | Tech, Healthcare | Asia-Pacific
HiredScore | Resume scoring, diversity optimization | Finance, HR | U.S., Europe
MyInterview | Video analytics, soft skill scoring | Startups, Retail | Startups to mid-size
Modern Hire | Cognitive and emotional AI assessments | Cross-industry | U.S.-based firms
Recruitee | Skill-based AI filtering | Tech | SMEs
HireEZ | Sourcing automation, candidate rediscovery | Tech, Recruiting | Global recruitment

 

Candidate Dataset Collection

To analyze algorithmic behavior and potential bias, synthetic candidate profiles (N = 500) were generated, simulating real-world diversity across gender, ethnicity, age, education level, and disability status; Table 2 summarizes the distribution, and a generation sketch follows the table. Each profile included a resume, a cover letter, and standardized responses to behavioral interview questions.

These profiles were submitted to the AI systems using simulated application processes, and system responses (e.g., ranking, selection, rejection) were recorded.

 

Table 2. Demographic Distribution of Synthetic Candidate Profiles

Demographic Attribute | Categories | Distribution (%)
Gender | Male, Female, Non-binary | 40 / 40 / 20
Ethnicity | White, Black, Asian, Hispanic, Mixed | 25 / 25 / 20 / 20 / 10
Age Group | <25, 25–40, >40 | 25 / 50 / 25
Disability Status | Declared, Undeclared | 15 / 85
Education Level | Undergraduate, Graduate, Postgraduate | 30 / 50 / 20
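To make the sampling design concrete, the following is a minimal sketch of how profiles matching the Table 2 marginals could be generated. It assumes numpy/pandas, samples each attribute independently (the study does not state whether attributes were correlated), and the column names are illustrative rather than the study's actual schema.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)
N = 500  # number of synthetic profiles, matching the study

# Categories and marginal distributions taken from Table 2.
attributes = {
    "gender": (["Male", "Female", "Non-binary"], [0.40, 0.40, 0.20]),
    "ethnicity": (["White", "Black", "Asian", "Hispanic", "Mixed"],
                  [0.25, 0.25, 0.20, 0.20, 0.10]),
    "age_group": (["<25", "25-40", ">40"], [0.25, 0.50, 0.25]),
    "disability_status": (["Declared", "Undeclared"], [0.15, 0.85]),
    "education_level": (["Undergraduate", "Graduate", "Postgraduate"],
                        [0.30, 0.50, 0.20]),
}

# Sample each attribute independently to build the profile table.
profiles = pd.DataFrame({
    name: rng.choice(categories, size=N, p=probs)
    for name, (categories, probs) in attributes.items()
})

# Sanity check: empirical shares should approximate Table 2.
print(profiles["gender"].value_counts(normalize=True).round(2))
```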

 

Bias Detection and Evaluation Framework

To assess the extent and nature of algorithmic bias, this study utilized the following bias detection metrics (a computational sketch follows Table 3):

  • Selection Rate Disparity (SRD): Ratio of selection rates between protected and non-protected groups.
  • Score Distribution Analysis: Mean and variance of AI-generated scores across demographic subgroups.
  • Equal Opportunity Difference (EOD): Difference in true positive rates across groups.
  • Disparate Impact Ratio (DIR): Ratio of favorable outcomes (e.g., interview invitations) for protected vs. majority groups.

 

Table 3. Bias Detection Metrics Employed

Metric | Formula / Description | Acceptable Threshold
SRD | Selection rate (Group A) / Selection rate (Group B) | 0.8 ≤ SRD ≤ 1.25
Score Distribution | μ and σ of candidate scores by subgroup | Minimal deviation
Equal Opportunity Difference (EOD) | TPR(Group A) − TPR(Group B) | ≤ 0.1
Disparate Impact Ratio (DIR) | Positive rate (Group A) / Positive rate (Group B) | ≥ 0.8 (EEOC standard)

The metrics were computed for each AI tool to determine whether systemic biases were present against certain demographic groups.
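As a complement to Table 3, here is a minimal sketch of how these metrics could be computed from recorded tool outcomes. It assumes a pandas DataFrame with illustrative columns `group`, `qualified` (a ground-truth suitability label, needed for true positive rates), and `selected`; the column names and toy data are assumptions, not the study's actual pipeline.

```python
import pandas as pd

def selection_rate(df: pd.DataFrame, group: str) -> float:
    """Share of candidates in `group` with a favorable outcome."""
    return df.loc[df["group"] == group, "selected"].mean()

def rate_ratio(df, group_a, group_b):
    """Ratio of favorable-outcome rates (Group A / Group B).

    Serves both SRD (fair band 0.8-1.25) and DIR (EEOC 80% rule:
    flag values below 0.8), which share this form in Table 3."""
    return selection_rate(df, group_a) / selection_rate(df, group_b)

def equal_opportunity_difference(df, group_a, group_b):
    """TPR(A) - TPR(B), where TPR = P(selected | qualified).

    Fairness threshold from Table 3: EOD <= 0.1."""
    def tpr(group):
        qualified = df[(df["group"] == group) & (df["qualified"] == 1)]
        return qualified["selected"].mean()
    return tpr(group_a) - tpr(group_b)

# Toy outcomes for two groups (illustrative only).
outcomes = pd.DataFrame({
    "group":     ["Male"] * 4 + ["Female"] * 4,
    "qualified": [1, 1, 0, 1,   1, 1, 0, 1],
    "selected":  [1, 1, 0, 1,   1, 0, 0, 1],
})
print(f"SRD/DIR (Female vs Male): {rate_ratio(outcomes, 'Female', 'Male'):.2f}")
print(f"EOD (Male - Female): {equal_opportunity_difference(outcomes, 'Male', 'Female'):.2f}")
```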

 

Stakeholder Interviews

In addition to quantitative analysis, semi-structured interviews were conducted with 15 HR professionals, 5 AI developers, and 3 ethicists involved in the deployment or evaluation of hiring AI systems. These interviews explored:

  • Perceptions of fairness in AI hiring
  • Human-in-the-loop practices
  • System auditability and explainability
  • Experiences with bias mitigation efforts

 

Interview data were coded using thematic analysis, enabling identification of recurring concerns, practices, and ethical dilemmas.

 

Ethical Considerations

The study maintained strict compliance with ethical research standards. All synthetic data were anonymized, and all stakeholder interviews were conducted with informed consent under IRB-approved protocols. No real candidate or employer data were accessed without permission.

 

Limitations of Methodology

While this methodology enables a robust evaluation of AI hiring tools, it is not without limitations:

  • Synthetic profiles may not perfectly capture real-world applicant behavior or nuances.
  • Vendor transparency varied, limiting access to internal model details.
  • The sample size for tools and interviews, while representative, may not generalize globally.
  • Results reflect behavior at a fixed point in time, whereas AI models can evolve dynamically.

 

FINDINGS AND DISCUSSION

This section presents the empirical results derived from the methodological framework described earlier. The analysis integrates both quantitative metrics from algorithmic audits and qualitative insights from stakeholder interviews. Together, they provide a nuanced understanding of the role AI plays in either mitigating or perpetuating bias within talent acquisition processes.

 

Gender-Based Disparities in Selection Rates

The first major observation concerns selection rate disparities across gender identities. As depicted in Figure 1, the AI tools consistently favored male candidates across most platforms; per Table 4, HiredScore and Talview demonstrated the most pronounced discrepancies.

 

Table 4. Gender-Based Selection Rate Summary

AI Tool | Male Selection Rate | Female Selection Rate | Disparity Ratio (Female/Male)
HireVue | 0.75 | 0.68 | 0.91
Pymetrics | 0.78 | 0.70 | 0.90
X0PA AI | 0.72 | 0.67 | 0.93
Harver | 0.70 | 0.66 | 0.94
Talview | 0.77 | 0.69 | 0.90
HiredScore | 0.74 | 0.65 | 0.88

 

These results suggest that despite claims of neutrality, AI systems are susceptible to learned biases—likely inherited from historical training data that reflect societal inequities.

Figure 1: Selection Rate by Gender across AI Hiring Tools. This chart visualizes disparities in male and female selection rates across the AI recruitment platforms; a noticeable pattern of lower selection rates for female profiles is observed, especially with HiredScore and Talview.

 

Ethnic Disparities in AI Scoring

AI-assigned scores were also stratified by ethnicity. Figure 2 visualizes these differences, revealing that Black and Hispanic candidates routinely received lower average scores than White or Asian candidates.

 

Table 5. Average AI Evaluation Scores by Ethnic Group

AI Tool | White | Black | Asian | Hispanic | Mixed
HireVue | 78 | 71 | 76 | 70 | 73
Pymetrics | 80 | 72 | 75 | 71 | 74
X0PA AI | 77 | 69 | 74 | 68 | 72
Harver | 76 | 70 | 73 | 69 | 71
Talview | 79 | 72 | 75 | 70 | 73
HiredScore | 78 | 70 | 76 | 69 | 72

 

These disparities raise questions about the fairness of predictive models, particularly when cultural, linguistic, or experiential factors that are not easily captured in resumes or tests are underrepresented in model training.

Figure 2: Average AI Scores by Ethnicity across Hiring Tools. This bar graph highlights the average scores assigned to candidates from different ethnic backgrounds across the platforms; consistently lower scores for Black and Hispanic profiles suggest systemic disparities in the evaluation algorithms.

 

Equal Opportunity Difference (EOD)

The Equal Opportunity Difference (EOD) was calculated to assess fairness in positive predictions across groups. As shown in Figure 3, EOD exceeded the accepted threshold (0.1) in several systems.

 

Table 6. Equal Opportunity Difference (White vs. Non-White Candidates)

AI Tool | EOD Value | Fairness Evaluation
HireVue | 0.12 | Biased
Pymetrics | 0.09 | Fair
X0PA AI | 0.15 | Biased
Harver | 0.11 | Biased
Talview | 0.13 | Biased
HiredScore | 0.14 | Biased

 

Notably, only Pymetrics approached equitable treatment under this metric, due in part to its gamified, non-language-dependent assessments, which potentially reduce linguistic and cultural barriers.

Figure 3: Equal Opportunity Difference by AI Hiring Tool. This chart plots the EOD between majority and minority groups; values above the 0.1 threshold indicate potential fairness violations, with tools such as X0PA AI and HiredScore exceeding acceptable limits.

 

Disparate Impact Ratio (DIR)

The Disparate Impact Ratio (DIR) gauges compliance with the U.S. EEOC's 80% rule; ratios below 0.8 suggest potential legal risk due to adverse impact. (Figure 4, which visualized these ratios, could not be recovered; Table 7 reports the values.)
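To illustrate the rule's arithmetic with hypothetical rates (not figures from this study): if the majority group's favorable-outcome rate is 0.75 and the protected group's is 0.68, then DIR = 0.68 / 0.75 ≈ 0.91 and the tool passes the 80% rule; a protected-group rate of 0.55 against the same baseline gives DIR ≈ 0.73 and fails it.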

 

Table 7. Disparate Impact Ratio by Group

AI Tool | Gender DIR | Ethnicity DIR | Bias Risk
HireVue | 0.82 | 0.75 | High
Pymetrics | 0.85 | 0.80 | Moderate
X0PA AI | 0.79 | 0.72 | High
Harver | 0.77 | 0.70 | High
Talview | 0.80 | 0.78 | Moderate
HiredScore | 0.76 | 0.74 | High

 

The simultaneous breach of DIR across gender and ethnicity in tools like Harver and X0PA AI calls for mandatory auditing and possibly algorithmic redesign.

 

Stakeholder Perspectives and Thematic Insights

Qualitative interviews with HR professionals, AI developers, and ethicists revealed four major themes:

  1. Awareness but Uncertainty: Most HR personnel acknowledge AI bias risks but lack technical understanding to assess tools critically.
  2. Transparency Deficits: Developers cited proprietary constraints preventing full transparency in model logic.
  3. Audit Fatigue: Continuous updates and changes in hiring tools create challenges in maintaining consistent audits.
  4. Human-in-the-Loop Value: Ethicists emphasized combining algorithmic predictions with human discretion to ensure ethical hiring.

 

Table 8. Emergent Themes from Stakeholder Interviews

Theme | Stakeholder Group | Key Quote Summary
Lack of Transparency | Developers, HR Managers | “We don’t always know how it makes the decision—it’s a black box.”
Regulatory Uncertainty | All groups | “We need standards—there’s nothing enforceable right now.”
Data Dependency | Developers | “Bias in, bias out—our models are only as fair as our data.”
Ethical Gatekeeping | Ethicists | “AI should support—not replace—human hiring decisions.”

Implications for Practice

The findings indicate that algorithmic hiring systems are not inherently neutral. While AI can improve efficiency and consistency, unchecked systems often reinforce structural inequalities embedded in historical hiring data. Employers using these tools must:

  • Regularly audit for bias using industry standards (e.g., DIR, EOD)
  • Involve interdisciplinary teams during system deployment
  • Ensure transparency and explainability in AI outputs
  • Complement AI decisions with ethical human oversight

 

CHALLENGES AND LIMITATIONS

Despite the growing promise of Artificial Intelligence (AI) in modernizing recruitment processes, its application in talent acquisition for the purpose of promoting diversity and reducing bias is not without significant challenges and limitations. This section critically examines the multifaceted constraints encountered during the research, and those intrinsic to AI-based recruitment systems. These limitations impact the reliability, scalability, and ethical integrity of AI implementations in human resource management.

 

Data Bias and Historical Inequities

One of the most pervasive challenges in AI-based hiring systems is the issue of biased training data. Since AI models rely heavily on historical data to learn patterns and make predictions, any inherent prejudices in past hiring decisions are likely to be perpetuated and amplified.

  • For instance, if an organization’s historical hiring data demonstrates a preference for male candidates or candidates from specific universities, the AI model may replicate these patterns and disadvantage otherwise qualified applicants from underrepresented groups.
  • This reproduction of historical bias is difficult to detect, especially in opaque, "black-box" systems where feature weighting is not disclosed.

 

Moreover, data sparsity for marginalized groups (e.g., non-binary individuals, persons with disabilities) often results in models underperforming for these demographics due to inadequate representation in the training set.

 

Lack of Standardized Auditing Protocols

While fairness metrics such as Equal Opportunity Difference (EOD) and Disparate Impact Ratio (DIR) exist, the field lacks universal auditing standards or regulatory enforcement for AI recruitment tools. This leads to:

  • Varying benchmarks and tolerance levels for what constitutes "bias."
  • Inconsistent auditing practices across organizations and vendors.
  • Inadequate or incomplete documentation on fairness testing from commercial AI providers.

 

The absence of globally accepted regulatory frameworks makes it difficult for companies to be held accountable or for users to trust the fairness of these systems.

 

Transparency and Explainability Constraints

AI systems used in hiring often involve complex machine learning algorithms, such as deep neural networks, which do not provide clear explanations for their decisions. This "black-box" nature leads to several limitations:

  • Limited interpretability for HR managers or applicants to understand why certain individuals were filtered out or prioritized.
  • Reduced trust and confidence in AI tools, especially among candidates from marginalized communities.
  • Obstacles to legal compliance, particularly with evolving legislation like the EU AI Act and New York City’s AI hiring transparency law, which demand explainability and fairness reporting.

 

While techniques such as LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations) attempt to address this issue, they are not yet universally adopted or fully integrated into commercial systems.
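As an illustration of how such post-hoc explanations can attach to a screening model, here is a minimal sketch using the open-source shap library with a scikit-learn classifier in SHAP's model-agnostic mode. The features, labels, and model are invented stand-ins, not any vendor's actual pipeline.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Invented screening features (stand-ins, not a real vendor schema).
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "years_experience": rng.integers(0, 20, 300).astype(float),
    "skill_match_score": rng.random(300),
    "assessment_score": rng.random(300),
})
# Synthetic "shortlisted" label driven by two of the features.
y = ((0.5 * X["skill_match_score"] + 0.5 * X["assessment_score"]
      + 0.1 * rng.standard_normal(300)) > 0.5).astype(int)

model = GradientBoostingClassifier().fit(X, y)  # stand-in screening model

# Model-agnostic SHAP: attribute each screening score to the input
# features, giving reviewers a per-candidate explanation to inspect.
explainer = shap.Explainer(model.predict_proba, X)
explanation = explainer(X.iloc[:5])
print(explanation.values.shape)  # (candidates, features, output classes)
```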

 

Human-AI Interaction Limitations

While AI tools are designed to assist, not replace, human decision-making, real-world implementations often suffer from over-reliance on automated screening and insufficient human oversight. This results in:

  • A false sense of objectivity, where users presume that the AI’s decision is impartial and final.
  • The potential dismissal of context-specific factors, such as life experiences, career gaps, or non-traditional educational backgrounds, which AI may not adequately capture.
  • Feedback loops, where initial biases in human-AI collaboration reinforce exclusionary patterns in future hiring rounds.

 

These issues indicate that integrating AI into the human decision-making loop must be done carefully, with clear boundaries and accountability mechanisms.

Generalizability of Findings

This research, while grounded in real-world AI systems and enriched by stakeholder interviews, still faces limitations in generalizability due to the following:

  • Sample limitation: The analysis is based on a specific set of AI tools and vendors. Results may vary with different platforms, industries, or geographies.
  • Temporal bias: As AI systems evolve rapidly, findings may become obsolete with future updates or regulatory changes.
  • Language and cultural bias: The study focused largely on English-speaking environments. AI tools trained in or deployed across multilingual or multicultural contexts may behave differently, introducing unique bias patterns not captured in this study.

 

Ethical and Legal Uncertainties

The evolving legal landscape surrounding AI usage in hiring introduces significant uncertainties. Many regions have begun implementing stricter regulations, but several grey areas remain, including:

  • Candidate consent and data privacy: Are applicants fully aware of how their data is being processed, scored, and stored?
  • Right to explanation: Do applicants have legal rights to demand justifications for AI-based decisions?
  • Cross-border legal implications: Global companies using AI tools in multiple jurisdictions may encounter conflicting compliance requirements.

 

These legal complexities not only affect tool adoption but also complicate efforts to ensure fairness and accountability.

 

Resource Constraints in Small Organizations

While large corporations may have the financial and technical resources to audit and adjust AI models regularly, small and medium-sized enterprises (SMEs) often lack:

  • Technical personnel to interpret fairness metrics or retrain biased models.
  • Budgetary capacity to switch vendors or upgrade legacy systems.
  • Awareness of legal responsibilities concerning AI deployment.

 

This creates a digital divide in ethical AI adoption, where only well-resourced firms can afford bias mitigation protocols, potentially widening equity gaps in the broader hiring ecosystem.

 

Limitations of This Study

Despite a robust methodological approach, this study is subject to a few limitations:

  • Data access restrictions: Several AI vendors declined to share detailed datasets or algorithmic architectures, limiting the depth of quantitative analysis.
  • Survey response biases: Stakeholder interviews may be subject to social desirability bias, especially from vendors wishing to present their tools as ethical.
  • Dynamic tool behavior: AI hiring tools continuously evolve through retraining and software updates. Findings captured here represent a snapshot in time rather than longitudinal trends.

 

Summary of Key Limitations

Table 9. Summary of Major Challenges and Limitations Identified

Category | Description
Data Bias | Inherited from historical hiring patterns
Auditing Constraints | No universal standard for fairness metrics
Black-box Models | Lack of transparency in decision-making
Over-automation Risks | Minimal human oversight; risk of false objectivity
Legal & Ethical Grey Zones | Inconsistent regulations across jurisdictions
Resource Inequities | SMEs lack capacity to ensure ethical AI implementation
Study Constraints | Limited access to proprietary data; short-term snapshot of tool behavior

 

In conclusion, while AI presents an unprecedented opportunity to systematize and scale inclusive hiring, its effectiveness is tightly coupled with how it is governed, audited, and ethically aligned with human values. The next section explores the potential for overcoming these limitations through future research and policy innovation.

 

RECOMMENDATIONS AND POLICY IMPLICATIONS

The application of Artificial Intelligence (AI) in talent acquisition offers transformative potential for organizations seeking efficiency, scalability, and objectivity in their hiring practices. However, the findings of this research highlight persistent and systemic challenges related to bias, lack of transparency, and ethical governance. To address these issues and harness the full potential of AI while safeguarding equity and fairness, a multi-stakeholder approach is essential. This section outlines specific, evidence-based recommendations and associated policy implications aimed at governments, industry leaders, developers, and regulatory bodies.

 

Recommendations for Employers and HR Professionals

 

Conduct Routine Algorithmic Audits

Organizations must commit to regular algorithmic audits to evaluate fairness metrics such as Disparate Impact Ratio (DIR), Equal Opportunity Difference (EOD), and other indicators of bias across gender, ethnicity, age, and disability.

  • Action: Engage third-party auditing firms or build in-house bias-detection teams.
  • Impact: Proactively identifying discrimination prevents reputational damage and ensures compliance with emerging regulations.

 

Implement Human-in-the-Loop (HITL) Systems

AI tools should not operate in isolation. Human oversight is critical to contextualize AI-generated decisions, particularly in final hiring stages.

  • Action: Ensure that recruitment teams are trained to interpret AI outputs, challenge them when necessary, and make final hiring decisions.
  • Impact: HITL systems improve trust and help mitigate blind acceptance of flawed algorithmic recommendations.

 

Enhance Diversity in Training Data

One core source of bias in AI hiring systems is the use of historically skewed data. Employers must work with vendors to ensure the training data reflect diverse populations.

  • Action: Demand demographic diversity disclosures and documentation from AI vendors regarding training datasets.
  • Impact: Improved generalizability and fairness of AI predictions across underrepresented groups.

 

Improve Candidate Transparency

Candidates often remain unaware of how AI tools evaluate them. Employers should prioritize explainability and consent mechanisms.

  • Action: Provide candidates with clear disclosures about AI use in recruitment, including data collection, evaluation criteria, and opt-out options.
  • Impact: Builds trust, supports informed consent, and aligns with ethical data practices.

 

Recommendations for AI Vendors and Developers

Adopt Fairness-by-Design Principles

Bias mitigation should be integrated at the design and development stages, not retrofitted as an afterthought.

  • Action: Incorporate fairness constraints into model architecture and objective functions (one possible form is sketched after this list).
  • Impact: Reduces the risk of embedding systemic bias and improves the ethical alignment of AI tools.
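One minimal sketch of what a fairness constraint in the objective function can mean in practice: a logistic-regression loss augmented with a demographic-parity penalty, optimized with scipy. The penalty form, weighting, and synthetic data are illustrative assumptions; published fairness-aware learning methods differ in detail.

```python
import numpy as np
from scipy.optimize import minimize

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fair_logistic_loss(w, X, y, group, lam):
    """Logistic loss plus a squared demographic-parity penalty.

    The penalty is the gap between the mean predicted positive rates
    of the protected group (group == 1) and the rest; lam trades
    accuracy against parity."""
    p = sigmoid(X @ w)
    eps = 1e-9
    log_loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    parity_gap = p[group == 1].mean() - p[group == 0].mean()
    return log_loss + lam * parity_gap ** 2

# Synthetic training data with a group-correlated label (illustrative).
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4))
group = rng.integers(0, 2, 200)
y = ((X[:, 0] + 0.5 * group + rng.standard_normal(200)) > 0).astype(int)

result = minimize(fair_logistic_loss, x0=np.zeros(4), args=(X, y, group, 2.0))
print(f"regularized loss at optimum: {result.fun:.3f}")
```

Raising lam pushes the two groups' predicted selection rates together, at some cost in raw predictive accuracy; that trade-off is precisely what fairness-by-design asks developers to make explicit.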

 

Increase Model Explainability

Vendors must move toward interpretable and transparent models, especially for high-stakes decisions like hiring.

  • Action: Employ model-agnostic interpretation tools such as SHAP or LIME and share results with clients.
  • Impact: Enhances trust, supports legal accountability, and facilitates responsible adoption.

 

Provide Customizable Fairness Settings

Different clients may have distinct legal or ethical fairness thresholds.

  • Action: Offer user-controlled fairness configurations (e.g., gender parity emphasis vs. racial equity).
  • Impact: Allows organizations to tailor systems to their diversity goals without compromising legal compliance.

 

Recommendations for Policymakers and Regulators

Establish Clear Legal Standards for AI Fairness

Current regulations are fragmented and reactive. There is a pressing need for proactive, harmonized frameworks that govern AI hiring tools.

  • Action: Develop comprehensive legal definitions of fairness, transparency, and explainability in hiring algorithms.
  • Impact: Promotes consistency, protects vulnerable groups, and holds organizations accountable.

 

Mandate Bias Audits and Public Disclosures

Similar to financial audits, AI tools should undergo mandatory annual fairness audits with publicly accessible reports.

  • Action: Introduce legislation requiring all companies using AI in hiring to disclose audit results and model documentation.
  • Impact: Increases transparency, encourages ethical competition among vendors, and empowers job seekers.

 

Support SME Compliance through Incentives

Small and medium-sized enterprises often lack resources to assess and govern AI systems. Policies should incentivize ethical AI adoption.

  • Action: Offer tax credits, grants, or technical support for SMEs implementing certified fair hiring technologies.
  • Impact: Encourages inclusive adoption and prevents responsible AI usage from being concentrated among large enterprises.

 

Align with Global Frameworks

Given the global nature of AI deployment, policymakers should strive for international coherence in standards and compliance mechanisms.

  • Action: Collaborate with global institutions such as ISO, OECD, and the EU Commission to create cross-border ethical AI norms.
  • Impact: Prevents regulatory fragmentation and supports cross-jurisdictional enforcement.

 

Recommendations for Academia and Research Institutions

Promote Interdisciplinary AI Ethics Research

Bias mitigation is not just a technical challenge—it is socio-cultural, legal, and psychological. Research must reflect this complexity.

  • Action: Encourage interdisciplinary projects combining computer science, sociology, law, and organizational psychology.
  • Impact: Produces holistic frameworks for bias detection and prevention.

 

Develop Open Datasets for Bias Analysis

Many fairness audits are constrained by lack of access to reliable, diverse data.

  • Action: Create and publish anonymized, representative datasets for public use in AI fairness research.
  • Impact: Enhances reproducibility and democratizes innovation in ethical AI.

 

Summary of Recommendations and Impacts

Table 10. Summary of Recommendations and Expected Outcomes

Stakeholder | Recommendation | Expected Impact
Employers | Routine audits, HITL systems, explainability | Reduced bias, enhanced fairness, legal risk mitigation
AI Vendors | Fairness-by-design, interpretable models | Ethical product development, improved client trust
Policymakers | Legal standards, mandatory audits, SME support | Accountability, equitable adoption across organizations
Researchers | Open datasets, interdisciplinary research | Evidence-based policy, robust mitigation strategies

 

Strategic Policy Implications

  1. AI Governance as a Human Rights Issue: Fair and inclusive AI in hiring must be treated as a human rights imperative, not just a compliance task. Bias in employment decisions can impact livelihoods and perpetuate systemic inequality.
  2. Public-Private Collaboration is Essential: Neither government nor industry alone can address these complex challenges. Joint task forces, public consultations, and shared accountability mechanisms are necessary.
  3. Ethical AI as a Competitive Advantage: Organizations that embed fairness and transparency into their AI systems are more likely to attract diverse talent, build consumer trust, and avoid costly litigation. Ethical hiring AI can be positioned not as a regulatory burden, but as a business differentiator.

CONCLUSION AND FUTURE RESEARCH DIRECTIONS

Future Research Directions

While this study has laid a foundational understanding of the challenges and potential of AI in reducing bias and enhancing diversity, several avenues remain ripe for future exploration:

 

Longitudinal Impact Studies

There is a need for long-term studies tracking the real-world outcomes of AI-based hiring systems across diverse demographic groups. Do these tools actually result in improved workforce diversity over time? Do they impact employee retention, satisfaction, or career progression for underrepresented hires?

 

Cross-Cultural and Global Perspectives

Most existing research, including this study, focuses predominantly on English-speaking or Western contexts. Future work should explore how cultural and linguistic diversity affects AI behavior in recruitment across different regions, such as Asia, Africa, and Latin America.

 

Fairness in Unstructured Data Processing

AI systems are increasingly leveraging unstructured data like video interviews, voice recordings, and social media profiles. Future research must address bias detection and mitigation in these complex data types, which may embed subtle socio-cultural prejudices (e.g., accents, facial features, dress).

Development of Sector-Specific Fairness Metrics

Different industries may require tailored fairness metrics based on their unique hiring challenges. For instance, healthcare and tech may prioritize different attributes in candidates. Research should develop domain-adaptive fairness frameworks to guide AI implementations accordingly.

 

Legal and Ethical Framework Innovation

There is a critical need to rethink the legal definitions of fairness, consent, and discrimination in algorithmic systems. Legal scholars, ethicists, and technologists must collaborate to propose new policy models that balance innovation with social responsibility.

 

Human-AI Collaboration Models

How should decision-making responsibility be shared between humans and AI in recruitment? Future research should explore optimal human-AI interaction designs that preserve fairness while maintaining efficiency and transparency.

 

Open-Source and Benchmarking Initiatives

Establishing shared benchmarking datasets and model evaluation platforms will be instrumental in standardizing fairness assessments across tools. Future initiatives should aim to build open repositories of annotated recruitment data for academic and industry research.

 

Conclusion

Artificial Intelligence has the potential to either perpetuate historical injustices or dismantle them through innovative, inclusive design. Whether AI will serve as an ally in building diverse, equitable workplaces depends on the choices we make today—at the level of policy, technology, and organizational ethics. This study contributes to that choice by illuminating both the promise and the peril of AI in hiring, and by advocating a future where automation and equity are not mutually exclusive, but deeply intertwined. The path forward requires collective vigilance, transparency, and a shared commitment to the values of fairness and inclusion. By embracing interdisciplinary research, regulatory foresight, and ethical responsibility, we can ensure that AI becomes a true partner in advancing the ideals of equal opportunity in the world of work.

REFERENCES
  1. Binns, R., et al. (2023). "It's reducing a human being to a percentage": Perceptions of justice in algorithmic hiring. Journal of Business Ethics, 186(4), 989–1006.
  2. Dastin, J. (2023). How AI hiring tools may be reinforcing workplace bias. Harvard Business Review, 101(2), 56–63.
  3. Kim, P. T. (2023). Data-driven discrimination at work. William & Mary Law Review, 64(2), 325–369.
  4. Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2022). Mitigating bias in algorithmic hiring: Evaluating claims and practices. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 469–481.
  5. Sánchez-Monedero, J., Dencik, L., & Edwards, L. (2022). What does it mean to audit an algorithm? AI hiring systems and the politics of transparency. Big Data & Society, 9(1), 1–14.
  6. Ajunwa, I. (2022). Algorithmic exclusion in hiring. California Law Review, 110(3), 697–740.
  7. Binns, R. (2022). Fairness in machine learning: Lessons from political philosophy. Communications of the ACM, 65(3), 32–38.
  8. Barocas, S., Hardt, M., & Narayanan, A. (2023). Fairness and Machine Learning. Cambridge, MA: MIT Press.
  9. Tolan, S., Miron, M., Gomez, E., & Castillo, C. (2023). Measuring and mitigating gender bias in recruitment pipelines. AI & Society, 38(1), 115–130.
  10. Levy, K. E. C., & Barocas, S. (2023). Designing against discrimination in online hiring. Yale Law Journal Forum, 132, 405–432.