AI for Human Resources


How to read this page: This article maps the topic from beginner to expert across six levels: Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating. Scan the headings to see the full scope, then read from wherever your knowledge starts to feel uncertain.

AI for human resources (HR) applies machine learning to recruiting, employee retention, performance management, workforce planning, and organizational design. HR has historically relied on intuition, relationships, and manual processes. AI offers data-driven alternatives: resume screening at scale, interview scheduling automation, turnover prediction, skills gap analysis, and personalized career pathing. However, HR AI also carries substantial risks — biased hiring algorithms that discriminate, surveillance tools that erode trust, and performance systems that create perverse incentives. Responsible HR AI requires exceptional care around fairness, transparency, and human dignity.

Remembering[edit]

  • Applicant Tracking System (ATS) — Software managing the recruitment process; most large companies use AI-enhanced ATS.
  • Resume screening — Automated ranking and filtering of job applications based on qualifications.
  • Predictive hiring — Using historical data (employee performance, tenure) to predict which candidates will succeed.
  • Psychometric assessment — AI-analyzed structured tests measuring cognitive ability, personality, and job fit.
  • HireVue — A video interview AI platform that analyzes speech patterns and response content for candidate assessment; it discontinued facial-expression analysis in 2021 after public criticism.
  • Employee attrition prediction — ML models predicting which employees are likely to leave the organization.
  • Workforce planning — Forecasting future skill needs and talent supply to guide hiring, training, and succession.
  • Skills taxonomy — A structured classification of skills used to match employees to roles and identify gaps.
  • Performance management AI — Using data (output metrics, peer ratings, manager assessments) to evaluate employee performance.
  • People analytics — Using data analysis to understand and improve workforce dynamics.
  • Organizational network analysis (ONA) — Mapping communication and collaboration patterns to understand informal organizational structure (see the sketch after this list).
  • Disparate impact — When a neutral practice disproportionately affects a protected group; a legal standard in employment discrimination.
  • Fair Credit Reporting Act (FCRA) — US law governing background checks; applies to many AI-based hiring tools.
  • EEOC (Equal Employment Opportunity Commission) — US agency enforcing employment discrimination law; increasingly active in AI hiring.
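
As a concrete illustration of ONA, here is a minimal sketch that builds a communication graph and ranks people by betweenness centrality to surface informal brokers. The edge list is invented; a real analysis would draw on email or chat metadata gathered with employee consent.

<syntaxhighlight lang="python">
import networkx as nx

# Hypothetical communication log: (person, person, messages exchanged)
edges = [
    ("ana", "ben", 42), ("ana", "carla", 17), ("ben", "carla", 8),
    ("carla", "dev", 25), ("dev", "elif", 31), ("elif", "fred", 5),
]

G = nx.Graph()
for u, v, messages in edges:
    G.add_edge(u, v, messages=messages)

# Betweenness centrality highlights brokers who bridge otherwise
# disconnected groups, often invisible on the formal org chart
centrality = nx.betweenness_centrality(G)
for person, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.2f}")
</syntaxhighlight>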

Understanding[edit]

The promise: Structured, data-driven hiring should reduce the bias of unstructured, impression-based interviewing. Human interviewers are notoriously inconsistent and susceptible to affinity bias (favoring candidates like themselves), primacy/recency effects, and stereotyping. Algorithmic hiring could standardize evaluation on job-relevant criteria.

The reality: The Amazon resume screening scandal (2018) exposed the core problem. Amazon trained a hiring ML model on 10 years of successful employee resumes — predominantly male engineers. The model learned to penalize resumes containing words like "women's" (as in "women's chess club") and downgraded graduates of all-female universities. The system had learned historical discrimination patterns from training data.

What's legal in hiring AI: The EEOC requires that employment tests (including AI assessments) show validity — they must predict actual job performance — and not produce disparate impact against protected classes. Video interview AI like HireVue has faced scrutiny; Illinois's Artificial Intelligence Video Interview Act (AIVIA) requires informed consent for AI video analysis. The EU AI Act classifies employment AI as high-risk, requiring conformity assessments.

Attrition prediction: ML models trained on HR data (tenure, performance ratings, compensation, promotion history, manager changes) can predict which employees are likely to leave, typically with AUC-ROC of 0.7–0.8. This enables proactive retention interventions. The ethical concern: does the organization use these predictions to support at-risk employees, or to quietly pass them over for promotions and investment? (A worked example appears in the Applying section.)

Skills-based HR: Organizations are moving from credential-based hiring (degree requirements) to skills-based hiring, using NLP to extract skills from resumes and match them to job requirements. Platforms like LinkedIn, Eightfold, and Beamery build skills taxonomies and use them to match candidates to roles across the entire organization, as sketched below.
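
A minimal sketch of the matching step, using TF-IDF cosine similarity as a stand-in for the proprietary embeddings and taxonomies these platforms actually use; the resume and job texts are invented for illustration:

<syntaxhighlight lang="python">
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical skill profiles extracted from resumes, plus one job posting
resumes = {
    "cand_1": "python sql machine learning model deployment airflow",
    "cand_2": "recruiting sourcing stakeholder management employer branding",
    "cand_3": "python statistics people analytics dashboarding sql",
}
job = "people analytics role: sql, python, statistics, hr reporting"

# Vectorize job and resumes in one shared vocabulary, then rank candidates
# by cosine similarity to the job description
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([job] + list(resumes.values()))
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

for name, score in sorted(zip(resumes, scores), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
</syntaxhighlight>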

Applying[edit]

Employee attrition prediction with explainability: <syntaxhighlight lang="python">
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import LabelEncoder

# IBM HR Analytics dataset (public benchmark)
df = pd.read_csv("WA_Fn-UseC_-HR-Employee-Attrition.csv")

# Encode categoricals (a fresh encoder per column so mappings don't collide)
cat_cols = df.select_dtypes('object').columns.tolist()
cat_cols.remove('Attrition')
for col in cat_cols:
    df[col] = LabelEncoder().fit_transform(df[col])
df['Attrition'] = (df['Attrition'] == 'Yes').astype(int)

feature_cols = [c for c in df.columns if c != 'Attrition']
X, y = df[feature_cols], df['Attrition']
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = GradientBoostingClassifier(n_estimators=200, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)
print(f"AUC-ROC: {roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]):.3f}")

# SHAP explainability — critical for HR use: always explain predictions to managers
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global feature importance
shap.summary_plot(shap_values, X_test)

# Individual employee explanation (for responsible HR use)
employee_idx = 0
shap.force_plot(explainer.expected_value, shap_values[employee_idx],
                X_test.iloc[employee_idx])

# FAIRNESS CHECK: measure AUC separately by gender
for gender in [0, 1]:
    mask = X_test['Gender'] == gender
    auc_g = roc_auc_score(y_test[mask], model.predict_proba(X_test[mask])[:, 1])
    print(f"Gender={gender} AUC: {auc_g:.3f}")
</syntaxhighlight>

{| class="wikitable"
|+ HR AI tools
! Category !! Representative tools
|-
| Recruiting || Greenhouse, Lever, Workday Recruiting (ATS with AI)
|-
| Skills matching || Eightfold AI, Beamery, LinkedIn Talent Intelligence
|-
| Retention prediction || Workday People Analytics, IBM Watson Talent
|-
| Video interviews || HireVue, Spark Hire, Willo
|-
| People analytics platforms || Visier, Orgvitals, ONA tools (Panalyt)
|}

Analyzing[edit]

{| class="wikitable"
|+ HR AI Application Risk Assessment
! Application !! Potential benefit !! Bias risk !! Legal/regulatory risk
|-
| Resume screening || Efficiency || High (historical bias) || High (EEOC, EU AI Act)
|-
| Video interview AI || Consistency || High (facial features, accent) || High (Illinois AIVIA)
|-
| Attrition prediction || Proactive retention || Medium || Medium (surveillance concerns)
|-
| Skills matching (internal) || Career mobility || Low || Low
|-
| Workforce planning || Strategic foresight || Low || Low
|-
| Performance AI || Efficiency || Medium || Medium (labor law)
|}

Failure modes and ethical concerns:

  • Historical bias replication — models trained on past hires perpetuate past discrimination.
  • Adverse impact on protected classes — resume screening disproportionately filters out women and minorities.
  • Proxy discrimination — scoring "cultural fit" or "communication style" that correlates with race or accent (see the sketch after this list).
  • Employee surveillance overreach — productivity monitoring that erodes trust and creates a toxic work culture.
  • Feedback loops and lack of recourse — rejected candidates never learn they were screened by AI and cannot contest the decision, while the model's own selections shape the data it is retrained on.
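
A minimal sketch of a proxy-discrimination audit, assuming the auditor has model scores and a protected attribute for a sample of applicants; the data here are simulated with a deliberate group shift. If a supposedly neutral score separates groups this cleanly, it is functioning as a proxy:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated audit sample: a "culture fit" score plus a protected attribute;
# group 1's scores are shifted upward to mimic a proxy effect
group = rng.integers(0, 2, size=500)
culture_fit = rng.normal(size=500) + 0.6 * group

# A large, statistically significant gap between groups means the
# "neutral" score is encoding group membership
g0, g1 = culture_fit[group == 0], culture_fit[group == 1]
t_stat, p_value = stats.ttest_ind(g1, g0)
pooled_sd = np.sqrt((g0.var(ddof=1) + g1.var(ddof=1)) / 2)
cohens_d = (g1.mean() - g0.mean()) / pooled_sd
print(f"gap={g1.mean() - g0.mean():.2f}, Cohen's d={cohens_d:.2f}, p={p_value:.1e}")
</syntaxhighlight>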

Evaluating[edit]

HR AI evaluation must include fairness:

  1. Predictive validity: does the model actually predict job performance? Collect outcome data and measure correlation.
  2. Disparate impact testing: compare selection rates across protected groups; flag if any group's rate is less than 80% of the highest group's (the 4/5ths rule; see the sketch after this list).
  3. Adverse impact analysis: test for age, gender, race, disability disparate impact explicitly.
  4. Explainability audit: can a hiring manager explain every AI-assisted decision in terms the candidate would find fair?
  5. Outcome tracking: track job performance of AI-selected vs. non-AI-selected employees; validate prediction accuracy.
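
A minimal sketch of the 4/5ths-rule check from step 2, using invented applicant data; a real audit repeats this per protected attribute and accounts for small-sample uncertainty:

<syntaxhighlight lang="python">
import pandas as pd

# Hypothetical applicant data: one row per applicant, with group and outcome
applicants = pd.DataFrame({
    "group": ["A"] * 100 + ["B"] * 100,
    "selected": [1] * 40 + [0] * 60 + [1] * 25 + [0] * 75,
})

# Selection rate per group, then each rate as a fraction of the highest rate
rates = applicants.groupby("group")["selected"].mean()
impact_ratio = rates / rates.max()

for group, ratio in impact_ratio.items():
    status = "FLAG: below 4/5ths threshold" if ratio < 0.8 else "ok"
    print(f"group {group}: rate={rates[group]:.2f}, impact ratio={ratio:.2f} ({status})")
</syntaxhighlight>

With these numbers, group B's selection rate (0.25) is 62.5% of group A's (0.40), below the 80% threshold, so the tool would be flagged for adverse impact review.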

Creating[edit]

Designing a responsible HR AI system:

  1. Validity first: every AI tool must demonstrate predictive validity for the specific job and context before deployment.
  2. Bias audit: conduct adverse impact analysis before launch; remediate disparities.
  3. Human oversight: AI is decision-support, not decision-maker; humans review and approve all hiring and firing decisions.
  4. Transparency: inform candidates when AI is used in hiring; provide explanation on request.
  5. Limited scope: avoid video interview emotion analysis — evidence is weak, discrimination risk is high.
  6. Monitoring: track demographic outcomes of AI-assisted hiring quarterly; alert if disparate impact emerges.